Local Embeddings with IPEX-LLM on Intel CPU
IPEX-LLM is a PyTorch library for running LLMs with very low latency on Intel CPUs and GPUs (e.g., a local PC with an iGPU, or discrete GPUs such as Arc, Flex and Max).
This example shows how to use LlamaIndex to run embedding tasks with ipex-llm optimizations on an Intel CPU, which is useful in applications such as RAG and document QA.
Note
See here for the full example of IpexLLMEmbedding. Note that to run it on an Intel CPU, specify -d 'cpu' in the command-line arguments when running the example.
Install llama-index-embeddings-ipex-llm
This will also install ipex-llm and its dependencies.
In [ ]
%pip install llama-index-embeddings-ipex-llm
IpexLLMEmbedding
In [ ]
from llama_index.embeddings.ipex_llm import IpexLLMEmbedding

# Load a Hugging Face BGE embedding model with IPEX-LLM optimizations applied
embedding_model = IpexLLMEmbedding(model_name="BAAI/bge-large-en-v1.5")
Please note that IpexLLMEmbedding currently only provides optimization for Hugging Face BGE models.
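Beyond the default construction above, you can make the target device explicit and register the model as LlamaIndex's default embedder. The following is a minimal sketch: the device keyword is an assumption here, mirroring the -d 'cpu' flag from the note above and possibly varying across versions, while Settings is the standard llama-index-core entry point.
from llama_index.core import Settings
from llama_index.embeddings.ipex_llm import IpexLLMEmbedding

# Assumption: device mirrors the example's -d flag; "cpu" targets
# the Intel CPU backend.
embedding_model = IpexLLMEmbedding(
    model_name="BAAI/bge-large-en-v1.5",
    device="cpu",
)

# Use this model by default for all LlamaIndex components
# (indexes, retrievers, query engines).
Settings.embed_model = embedding_model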
In [ ]
sentence = "IPEX-LLM is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low latency."
query = "What is IPEX-LLM?"
text_embedding = embedding_model.get_text_embedding(sentence)
print(f"embedding[:10]: {text_embedding[:10]}")
text_embeddings = embedding_model.get_text_embedding_batch([sentence, query])
print(f"text_embeddings[0][:10]: {text_embeddings[0][:10]}")
print(f"text_embeddings[1][:10]: {text_embeddings[1][:10]}")
query_embedding = embedding_model.get_query_embedding(query)
print(f"query_embedding[:10]: {query_embedding[:10]}")
sentence = "IPEX-LLM is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low latency." query = "What is IPEX-LLM?" text_embedding = embedding_model.get_text_embedding(sentence) print(f"embedding[:10]: {text_embedding[:10]}") text_embeddings = embedding_model.get_text_embedding_batch([sentence, query]) print(f"text_embeddings[0][:10]: {text_embeddings[0][:10]}") print(f"text_embeddings[1][:10]: {text_embeddings[1][:10]}") query_embedding = embedding_model.get_query_embedding(query) print(f"query_embedding[:10]: {query_embedding[:10]}")
Batches: 0%| | 0/1 [00:00<?, ?it/s]
embedding[:10]: [0.03578318655490875, 0.032746609300374985, -0.016696255654096603, 0.0074520050548017025, 0.016294749453663826, -0.001968140248209238, -0.002897330094128847, -0.041390497237443924, 0.030955366790294647, 0.05438097193837166]
Batches: 0%| | 0/1 [00:00<?, ?it/s]
text_embeddings[0][:10]: [0.03578318655490875, 0.032746609300374985, -0.016696255654096603, 0.0074520050548017025, 0.016294749453663826, -0.001968140248209238, -0.002897330094128847, -0.041390497237443924, 0.030955366790294647, 0.05438097193837166]
text_embeddings[1][:10]: [0.03155018016695976, 0.03177601844072342, -0.00304483063519001, 0.004364349413663149, 0.005002604331821203, -0.02680951915681362, -0.005840071476995945, -0.022466979920864105, 0.05162270367145538, 0.05928812175989151]
Batches: 0%| | 0/1 [00:00<?, ?it/s]
query_embedding[:10]: [0.053250256925821304, 0.0036771567538380623, 0.003390512429177761, 0.014903719536960125, -0.00263631297275424, -0.022365037351846695, -0.004524332471191883, -0.018143195658922195, 0.03799865022301674, 0.07393667846918106]
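With embeddings working, the model drops straight into a retrieval pipeline, which is the RAG use case mentioned at the top. The following is a minimal sketch over an in-memory vector store; the one-document corpus is illustrative, while Document and VectorStoreIndex are standard llama-index-core APIs.
from llama_index.core import Document, VectorStoreIndex

# Illustrative one-document corpus; replace with your own data.
documents = [Document(text=sentence)]

# Build an in-memory vector index, embedding nodes with the
# IPEX-LLM optimized model instead of the default embedder.
index = VectorStoreIndex.from_documents(documents, embed_model=embedding_model)

# Retrieve the most similar node for the query.
retriever = index.as_retriever(similarity_top_k=1)
nodes = retriever.retrieve(query)
print(nodes[0].node.get_content())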