Optimum Intel LLMs optimized with IPEX backend¶
Optimum Intel accelerates Hugging Face pipelines on Intel architectures, leveraging the Intel Extension for PyTorch (IPEX) optimizations.
Optimum Intel models can be run locally through the OptimumIntelLLM entities wrapped by LlamaIndex.
In the line below, we install the packages necessary for this demo:
In [ ]
%pip install llama-index-llms-optimum-intel
Now that we're set up, let's play around:
If you're opening this Notebook on Colab, you will probably need to install LlamaIndex 🦙.
In [ ]
!pip install llama-index
In [ ]
from llama_index.llms.optimum_intel import OptimumIntelLLM
In [ ]
def messages_to_prompt(messages):
    prompt = ""
    for message in messages:
        if message.role == "system":
            prompt += f"<|system|>\n{message.content}</s>\n"
        elif message.role == "user":
            prompt += f"<|user|>\n{message.content}</s>\n"
        elif message.role == "assistant":
            prompt += f"<|assistant|>\n{message.content}</s>\n"

    # ensure we start with a system prompt, insert blank if needed
    if not prompt.startswith("<|system|>\n"):
        prompt = "<|system|>\n</s>\n" + prompt

    # add final assistant prompt
    prompt = prompt + "<|assistant|>\n"

    return prompt


def completion_to_prompt(completion):
    return f"<|system|>\n</s>\n<|user|>\n{completion}</s>\n<|assistant|>\n"
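To sanity-check the template, you can print what these helpers produce for a sample exchange. This is only an illustrative sketch; the sample messages below are made up, and ChatMessage is the same class used later in this notebook.

from llama_index.core.llms import ChatMessage

# Illustrative only: inspect the prompt strings produced by the helpers above.
sample_messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="What is the meaning of life?"),
]

print(messages_to_prompt(sample_messages))
print(completion_to_prompt("What is the meaning of life?"))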
Model Loading¶
Models can be loaded by specifying the model parameters using the OptimumIntelLLM method.
In [ ]
oi_llm = OptimumIntelLLM(
    model_name="Intel/neural-chat-7b-v3-3",
    tokenizer_name="Intel/neural-chat-7b-v3-3",
    context_window=3900,
    max_new_tokens=256,
    generate_kwargs={"temperature": 0.7, "top_k": 50, "top_p": 0.95},
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
    device_map="cpu",
)
In [ ]
response = oi_llm.complete("What is the meaning of life?")
print(str(response))
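If you prefer a single non-streaming, multi-turn call, the same oi_llm instance also exposes a chat endpoint; a minimal sketch (the message contents here are made up for illustration):

from llama_index.core.llms import ChatMessage

# Non-streaming chat: returns a single ChatResponse.
messages = [
    ChatMessage(role="system", content="You are a concise assistant."),
    ChatMessage(role="user", content="Summarize what IPEX does in one sentence."),
]

print(oi_llm.chat(messages))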
Streaming¶
Using the stream_complete endpoint
In [ ]
response = oi_llm.stream_complete("Who is Mother Teresa?")
for r in response:
    print(r.delta, end="")
response = oi_llm.stream_complete("Who is Mother Teresa?") for r in response: print(r.delta, end="")
Using the stream_chat endpoint
In [ ]
from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(
        role="system",
        content="You are an American chef in a small restaurant in New Orleans",
    ),
    ChatMessage(role="user", content="What is your dish of the day?"),
]

resp = oi_llm.stream_chat(messages)

for r in resp:
    print(r.delta, end="")
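If you want other LlamaIndex components (query engines, chat engines, etc.) to pick up this model by default, one option is to register it on the global Settings object; a minimal sketch, assuming the oi_llm instance created above:

from llama_index.core import Settings

# Use the Optimum Intel model as the default LLM for subsequent components.
Settings.llm = oi_llm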