Install `llama-index-llms-ipex-llm`. This will also install `ipex-llm` and its dependencies.
%pip install llama-index-llms-ipex-llm
In this example, we use the HuggingFaceH4/zephyr-7b-alpha model for the demonstration. It requires updating the `transformers` and `tokenizers` packages.
%pip install -U transformers==4.37.0 tokenizers==0.15.2
Before loading the Zephyr model, you need to define `completion_to_prompt` and `messages_to_prompt` for formatting prompts. This is essential for preparing inputs that the model can interpret accurately.
# Transform a string into zephyr-specific input
def completion_to_prompt(completion):
    return f"<|system|>\n</s>\n<|user|>\n{completion}</s>\n<|assistant|>\n"


# Transform a list of chat messages into zephyr-specific input
def messages_to_prompt(messages):
    prompt = ""
    for message in messages:
        if message.role == "system":
            prompt += f"<|system|>\n{message.content}</s>\n"
        elif message.role == "user":
            prompt += f"<|user|>\n{message.content}</s>\n"
        elif message.role == "assistant":
            prompt += f"<|assistant|>\n{message.content}</s>\n"

    # ensure we start with a system prompt, insert blank if needed
    if not prompt.startswith("<|system|>\n"):
        prompt = "<|system|>\n</s>\n" + prompt

    # add final assistant prompt
    prompt = prompt + "<|assistant|>\n"

    return prompt
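As a quick sanity check, the formatter can be exercised on its own (a minimal sketch, redefining the helper locally so it runs standalone): `completion_to_prompt` wraps a raw string into Zephyr's chat template, with an empty system turn prepended and a trailing assistant header.

```python
def completion_to_prompt(completion):
    # Wrap a plain completion string in Zephyr's chat template:
    # empty system turn, the user turn, then an open assistant turn.
    return f"<|system|>\n</s>\n<|user|>\n{completion}</s>\n<|assistant|>\n"


print(completion_to_prompt("What is AI?"))
# <|system|>
# </s>
# <|user|>
# What is AI?</s>
# <|assistant|>
```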
Basic Usage¶
Load the Zephyr model locally using IpexLLM through `IpexLLM.from_model_id`. It loads the model directly in its Hugging Face format and converts it automatically to low-bit format for inference.
import warnings

warnings.filterwarnings(
    "ignore", category=UserWarning, message=".*padding_mask.*"
)

from llama_index.llms.ipex_llm import IpexLLM

llm = IpexLLM.from_model_id(
    model_name="HuggingFaceH4/zephyr-7b-alpha",
    tokenizer_name="HuggingFaceH4/zephyr-7b-alpha",
    context_window=512,
    max_new_tokens=128,
    generate_kwargs={"do_sample": False},
    completion_to_prompt=completion_to_prompt,
    messages_to_prompt=messages_to_prompt,
)
Loading checkpoint shards: 0%| | 0/8 [00:00<?, ?it/s]
2024-04-11 21:36:54,739 - INFO - Converting the current model to sym_int4 format......
Now you can use the loaded model for text completion and interactive chat.
Text Completion¶
completion_response = llm.complete("Once upon a time, ")
print(completion_response.text)
in a far-off land, there was a young girl named Lily. Lily lived in a small village surrounded by lush green forests and rolling hills. She loved nothing more than spending her days exploring the woods and playing with her animal friends. One day, while wandering through the forest, Lily stumbled upon a magical tree. The tree was unlike any other she had ever seen. Its trunk was made of shimmering crystal, and its branches were adorned with sparkling jewels. Lily was immediately drawn to the tree and sat down to admire its beauty. Suddenly,
Streaming Text Completion¶
response_iter = llm.stream_complete("Once upon a time, there's a little girl")
for response in response_iter:
    print(response.delta, end="", flush=True)
who loved to play with her toys. She had a favorite teddy bear named Ted, and a doll named Dolly. She would spend hours playing with them, imagining all sorts of adventures. One day, she decided to take Ted and Dolly on a real adventure. She packed a backpack with some snacks, a blanket, and a map. They set off on a hike in the nearby woods. The little girl was so excited that she could barely contain her joy. Ted and Dolly were happy to be along for the ride. They walked for what seemed like hours, but the little girl didn't mind
Chat¶
from llama_index.core.llms import ChatMessage
message = ChatMessage(role="user", content="Explain Big Bang Theory briefly")
resp = llm.chat([message])
print(resp)
assistant: The Big Bang Theory is a popular American sitcom that aired from 2007 to 2019. The show follows the lives of two brilliant but socially awkward physicists, Leonard Hofstadter (Johnny Galecki) and Sheldon Cooper (Jim Parsons), and their friends and colleagues, Penny (Kaley Cuoco), Rajesh Koothrappali (Kunal Nayyar), and Howard Wolowitz (Simon Helberg). The show is set in Pasadena, California, and revolves around the characters' work at Caltech and
Streaming Chat¶
message = ChatMessage(role="user", content="What is AI?")
resp = llm.stream_chat([message], max_tokens=256)
for r in resp:
    print(r.delta, end="")
AI stands for Artificial Intelligence. It refers to the development of computer systems that can perform tasks that typically require human intelligence, such as learning, reasoning, and problem-solving. AI involves the use of machine learning algorithms, natural language processing, and other advanced techniques to enable computers to understand and respond to human input in a more natural and intuitive way.
Save/Load Low-bit Model¶
Alternatively, you can save the low-bit model to disk once and use `from_model_id_low_bit` instead of `from_model_id` to reload it for later use, even on a different machine. This is space-efficient, as the low-bit model requires significantly less disk space than the original model. `from_model_id_low_bit` is also more efficient than `from_model_id` in terms of speed and memory usage, since it skips the model conversion step.

To save the low-bit model, use `save_low_bit` as follows.
saved_lowbit_model_path = (
    "./zephyr-7b-alpha-low-bit"  # path to save low-bit model
)

llm._model.save_low_bit(saved_lowbit_model_path)
del llm
Load the model from the saved low-bit model path as follows.

Note that the saved path for the low-bit model only includes the model itself, not the tokenizers. If you wish to have everything in one place, you will need to manually download or copy the tokenizer files from the original model's directory to the path where the low-bit model is saved.
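One way to co-locate the tokenizer with the low-bit model is to copy the tokenizer files over yourself. The helper below is an illustrative sketch (not part of the IpexLLM API): it copies the tokenizer file names commonly found in Hugging Face model directories, skipping any that are absent.

```python
import shutil
from pathlib import Path


def copy_tokenizer_files(src_dir: str, dst_dir: str) -> list:
    """Copy common Hugging Face tokenizer files from src_dir to dst_dir.

    Returns the list of file names that were actually copied.
    """
    names = [
        "tokenizer.json",
        "tokenizer.model",
        "tokenizer_config.json",
        "special_tokens_map.json",
    ]
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    copied = []
    for name in names:
        src = Path(src_dir) / name
        if src.exists():  # not every model ships every file
            shutil.copy2(src, Path(dst_dir) / name)
            copied.append(name)
    return copied
```

For example, `copy_tokenizer_files("<original-model-dir>", saved_lowbit_model_path)` would let you pass `tokenizer_name=saved_lowbit_model_path` when reloading.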
llm_lowbit = IpexLLM.from_model_id_low_bit(
    model_name=saved_lowbit_model_path,
    tokenizer_name="HuggingFaceH4/zephyr-7b-alpha",
    # tokenizer_name=saved_lowbit_model_path,  # copy the tokenizers to saved path if you want to use it this way
    context_window=512,
    max_new_tokens=64,
    completion_to_prompt=completion_to_prompt,
    generate_kwargs={"do_sample": False},
)
2024-04-11 21:38:06,151 - INFO - Converting the current model to sym_int4 format......
Try streaming completion with the loaded low-bit model.
response_iter = llm_lowbit.stream_complete("What is Large Language Model?")
for response in response_iter:
    print(response.delta, end="", flush=True)
A large language model (LLM) is a type of artificial intelligence (AI) model that is trained on a massive amount of text data. These models are capable of generating human-like responses to text inputs and can be used for various natural language processing (NLP) tasks, such as text classification, sentiment analysis