Fleet Context Embeddings - Building a Hybrid Search Engine for the LlamaIndex Library#
In this guide, we'll use Fleet Context to download embeddings of the LlamaIndex documentation and build a hybrid dense/sparse vector retrieval engine on top of them.
Prerequisites#
!pip install llama-index
!pip install --upgrade fleet-context
import os
import openai
os.environ["OPENAI_API_KEY"] = "sk-..." # add your API key here!
openai.api_key = os.environ["OPENAI_API_KEY"]
Download Embeddings from Fleet Context#
We'll use Fleet Context to download embeddings of LlamaIndex's full documentation (~12k chunks, ~100MB of content). You can download embeddings for any of the top 1220 libraries by specifying the library name as an argument. The full list of supported libraries is available here, at the bottom of the page.
We do this because Fleet has built an embedding pipeline that preserves a lot of important information that improves retrieval and generation, including position on the page (useful for reranking), chunk type (class/function/attribute/etc.), the parent section, and more. You can read more on their GitHub page.
from context import download_embeddings
df = download_embeddings("llamaindex")
Output:
100%|██████████| 83.7M/83.7M [00:03<00:00, 27.4MiB/s]
id \
0 e268e2a1-9193-4e7b-bb9b-7a4cb88fc735
1 e495514b-1378-4696-aaf9-44af948de1a1
2 e804f616-7db0-4455-9a06-49dd275f3139
3 eb85c854-78f1-4116-ae08-53b2a2a9fa41
4 edfc116e-cf58-4118-bad4-c4bc0ca1495e
# Show some examples of the metadata
df["metadata"][0]
from IPython.display import Markdown, display  # needed here to render the chunk text
display(Markdown(f"{df['metadata'][8000]['text']}"))
Output:
classmethod from_dict(data: Dict[str, Any], **kwargs: Any) → Self classmethod from_json(data_str: str, **kwargs: Any) → Self classmethod from_orm(obj: Any) → Model json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode Generate a JSON representation of the model, include and exclude arguments as per dict().
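Before moving on, it's worth sanity-checking the DataFrame's structure. The snippet below is a minimal, illustrative check: the id/values/metadata/sparse_values columns match what we upsert later in this guide, but the exact set of metadata keys (beyond text, rendered above) is an assumption and may differ across Fleet Context versions.
# Illustrative sanity check; metadata key names beyond "text" are assumptions
print(df.columns.tolist())  # expect ["id", "values", "metadata", "sparse_values"]
print(sorted(df["metadata"][0].keys()))  # which fields Fleet preserved per chunk
print(len(df["values"][0]))  # embedding dimension, should be 1536 for ada-002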
Create a Pinecone Index for Hybrid Search in LlamaIndex#
We'll create a Pinecone index and upload our vectors to it so we can run hybrid retrieval over both sparse and dense vectors. Make sure you have a Pinecone account before proceeding.
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().handlers = []
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
import pinecone
api_key = "..." # Add your Pinecone API key here
pinecone.init(
api_key=api_key, environment="us-east-1-aws"
) # Add your db region here
# Fleet Context uses the text-embedding-ada-002 model from OpenAI with 1536 dimensions.
# NOTE: Pinecone requires dotproduct similarity for hybrid search
pinecone.create_index(
"quickstart-fleet-context",
dimension=1536,
metric="dotproduct",
pod_type="p1",
)
pinecone.describe_index(
"quickstart-fleet-context"
)  # Verify that the index was created in Pinecone
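As an aside on why dotproduct is required: hybrid retrieval typically scores a document as a weighted combination of its dense and sparse dot products with the query. The helper below is our own illustrative pseudocode of that idea (the hybrid_score name and alpha weighting are not Pinecone internals):
def hybrid_score(dense_q, dense_d, sparse_q, sparse_d, alpha=0.5):
    """Illustrative convex combination of dense and sparse dot products."""
    dense_score = sum(q * d for q, d in zip(dense_q, dense_d))
    # Sparse vectors are {index: value} maps; only shared indices contribute.
    sparse_score = sum(v * sparse_d[i] for i, v in sparse_q.items() if i in sparse_d)
    return alpha * dense_score + (1 - alpha) * sparse_score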
from llama_index.vector_stores.pinecone import PineconeVectorStore
pinecone_index = pinecone.Index("quickstart-fleet-context")
vector_store = PineconeVectorStore(pinecone_index, add_sparse_vector=True)
Batch Upload Vectors to Pinecone#
Pinecone recommends upserting 100 vectors at a time; we'll do that after lightly reshaping the data.
import itertools
def chunks(iterable, batch_size=100):
"""A helper function to break an iterable into chunks of size batch_size."""
it = iter(iterable)
chunk = tuple(itertools.islice(it, batch_size))
while chunk:
yield chunk
chunk = tuple(itertools.islice(it, batch_size))
# Generator yielding one {id, values, metadata, sparse_values} dict per DataFrame row
data_generator = map(
lambda row: {
"id": row[1]["id"],
"values": row[1]["values"],
"metadata": row[1]["metadata"],
"sparse_values": row[1]["sparse_values"],
},
df.iterrows(),
)
# Upsert data with 100 vectors per upsert request
for ids_vectors_chunk in chunks(data_generator, batch_size=100):
print(f"Upserting {len(ids_vectors_chunk)} vectors...")
pinecone_index.upsert(vectors=ids_vectors_chunk)
Build the Pinecone Vector Store in LlamaIndex#
Finally, we'll build the Pinecone vector store via LlamaIndex and query it for results.
from llama_index.core import VectorStoreIndex
from IPython.display import Markdown, display
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
Query Your Index!#
query_engine = index.as_query_engine(
vector_store_query_mode="hybrid", similarity_top_k=8
)
response = query_engine.query("How do I use llama_index SimpleDirectoryReader")
display(Markdown(f"<b>{response}</b>"))
Output:
<b>To use the SimpleDirectoryReader in llama_index, you need to import it from the llama_index library. Once imported, you can create an instance of the SimpleDirectoryReader class by providing the directory path as an argument. Then, you can use the `load_data()` method on the SimpleDirectoryReader instance to load the documents from the specified directory.</b>
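If you want more control over the dense/sparse balance, LlamaIndex's hybrid query mode also accepts an alpha weight (0.0 leans fully sparse, 1.0 fully dense). A minimal sketch, assuming the alpha keyword is supported by your installed version:
# Hedged sketch: alpha blends sparse (0.0) and dense (1.0) scoring in hybrid mode
query_engine = index.as_query_engine(
    vector_store_query_mode="hybrid",
    similarity_top_k=8,
    alpha=0.5,  # equal weighting between dense and sparse retrieval
)
response = query_engine.query("How do I use llama_index SimpleDirectoryReader")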