DeepInfra¶
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/embeddings/deepinfra.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
With this integration, you can use DeepInfra embedding models to get embeddings for your text data. Here is the link to the embedding models.

First, you need to sign up on the DeepInfra website and obtain an API token. You can copy `model_id`s from the model cards and start using them in your code.
Installation¶
!pip install llama-index llama-index-embeddings-deepinfra
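The `load_dotenv` call in the initialization snippet below lets you keep the token out of your source code. A sketch of the corresponding `.env` file, assuming the integration reads the token from a `DEEPINFRA_API_TOKEN` environment variable (verify the exact variable name against the integration's documentation):

```shell
# .env file in the project root (variable name assumed; check the integration docs)
DEEPINFRA_API_TOKEN=your_api_token_here
```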
from dotenv import load_dotenv, find_dotenv
from llama_index.embeddings.deepinfra import DeepInfraEmbeddingModel

_ = load_dotenv(find_dotenv())

model = DeepInfraEmbeddingModel(
    model_id="BAAI/bge-large-en-v1.5",  # Use custom model ID
    api_token="YOUR_API_TOKEN",  # Optionally provide token here
    normalize=True,  # Optional normalization
    text_prefix="text: ",  # Optional text prefix
    query_prefix="query: ",  # Optional query prefix
)
Synchronous Requests¶
Get Text Embedding¶
response = model.get_text_embedding("hello world")
print(response)
texts = ["hello world", "goodbye world"]
response_batch = model.get_text_embedding_batch(texts)
print(response_batch)
query_response = model.get_query_embedding("hello world")
print(query_response)
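Because the model above was created with `normalize=True`, the returned vectors are unit-length, and cosine similarity reduces to a plain dot product. As a minimal, API-independent sketch of how query and document embeddings are typically compared (the 3-dimensional vectors below are made-up stand-ins, not real model output):

```python
from math import sqrt


def cosine_similarity(a, b):
    # Dot product divided by the vector norms; for normalized
    # embeddings the norms are 1 and this is just the dot product.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Hypothetical embeddings, for illustration only
query_emb = [0.1, 0.9, 0.0]
doc_embs = {
    "doc_a": [0.2, 0.8, 0.1],
    "doc_b": [0.9, 0.1, 0.0],
}

# Rank documents by similarity to the query
ranked = sorted(
    doc_embs, key=lambda d: cosine_similarity(query_emb, doc_embs[d]), reverse=True
)
print(ranked)  # doc_a ranks above doc_b
```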
Asynchronous Requests¶
async def main():
    text = "hello world"
    async_response = await model.aget_text_embedding(text)
    print(async_response)


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
For any questions or feedback, please contact us at [email protected].