import os
import openai
os.environ["OPENAI_API_KEY"] = "sk-..."
openai.api_key = os.environ["OPENAI_API_KEY"]
Setup¶
For this notebook, we will use two very similar pages from our documentation, each stored in a separate index.
from llama_index.core import SimpleDirectoryReader
documents_1 = SimpleDirectoryReader(
input_files=["../../community/integrations/vector_stores.md"]
).load_data()
documents_2 = SimpleDirectoryReader(
input_files=["../../module_guides/storing/vector_stores.md"]
).load_data()
from llama_index.core import VectorStoreIndex
index_1 = VectorStoreIndex.from_documents(documents_1)
index_2 = VectorStoreIndex.from_documents(documents_2)
Fuse the Indexes!¶
In this step, we fuse our indexes into a single retriever. This retriever will also augment our query by generating additional queries related to the original question, and will aggregate the results.
This setup will query 4 times: once with your original query, and 3 more times with generated queries.
By default, it generates the extra queries using the following prompt:
QUERY_GEN_PROMPT = (
"You are a helpful assistant that generates multiple search queries based on a "
"single input query. Generate {num_queries} search queries, one on each line, "
"related to the following input query:\n"
"Query: {query}\n"
"Queries:\n"
)
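To see exactly what the LLM receives, we can fill the template in by hand with Python's `str.format` (a quick standalone illustration; the retriever does this internally):

```python
# The same template as above; formatting it shows the exact prompt sent to
# the LLM when generating extra queries.
QUERY_GEN_PROMPT = (
    "You are a helpful assistant that generates multiple search queries based on a "
    "single input query. Generate {num_queries} search queries, one on each line, "
    "related to the following input query:\n"
    "Query: {query}\n"
    "Queries:\n"
)

prompt = QUERY_GEN_PROMPT.format(
    num_queries=3,  # with num_queries=4 on the retriever, 3 extra queries are generated
    query="How do I setup a chroma vector store?",
)
print(prompt)
```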
from llama_index.core.retrievers import QueryFusionRetriever
retriever = QueryFusionRetriever(
[index_1.as_retriever(), index_2.as_retriever()],
similarity_top_k=2,
num_queries=4, # set this to 1 to disable query generation
use_async=True,
verbose=True,
# query_gen_prompt="...", # we could override the query generation prompt here
)
# apply nested async to run in a notebook
import nest_asyncio
nest_asyncio.apply()
nodes_with_scores = retriever.retrieve("How do I setup a chroma vector store?")
Generated queries:
1. What are the steps to set up a chroma vector store?
2. Best practices for configuring a chroma vector store
3. Troubleshooting common issues when setting up a chroma vector store
for node in nodes_with_scores:
print(f"Score: {node.score:.2f} - {node.text[:100]}...")
Score: 0.78 - # Vector Stores Vector stores contain embedding vectors of ingested document chunks (and sometimes ...
Score: 0.78 - # Using Vector Stores LlamaIndex offers multiple integration points with vector stores / vector dat...
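Both indexes contribute a top node here. Under the hood, the retriever must merge ranked result lists from multiple retrievers (and multiple generated queries) into one list. A common strategy for this kind of merging is reciprocal rank fusion; the sketch below illustrates the idea on plain document ids and is not the library's actual implementation:

```python
# Standalone sketch of reciprocal rank fusion (RRF). A document earns
# 1 / (k + rank) from each list it appears in, so documents ranked highly
# by several retrievers float to the top. The doc ids are made up.
def reciprocal_rank_fusion(result_lists, k=60.0):
    fused = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            fused[doc_id] = fused.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(fused, key=fused.get, reverse=True)

ranked = reciprocal_rank_fusion(
    [
        ["chroma_setup", "pinecone_setup", "faiss_setup"],
        ["chroma_setup", "faiss_setup", "weaviate_setup"],
    ]
)
print(ranked)  # "chroma_setup" wins: it is at the top of both lists
```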
Use in a Query Engine!¶
Now, we can plug our retriever into a query engine to synthesize natural language responses.
from llama_index.core.query_engine import RetrieverQueryEngine
query_engine = RetrieverQueryEngine.from_args(retriever)
response = query_engine.query(
"How do I setup a chroma vector store? Can you give an example?"
)
Generated queries:
1. How to set up a chroma vector store?
2. Step-by-step guide for creating a chroma vector store.
3. Examples of chroma vector store setups and configurations.
from llama_index.core.response.notebook_utils import display_response
display_response(response)
Final Response:
To set up a Chroma vector store, you need to follow these steps:
- Import the necessary libraries:
import chromadb
from llama_index.vector_stores.chroma import ChromaVectorStore
- Create a Chroma client:
chroma_client = chromadb.EphemeralClient()
chroma_collection = chroma_client.create_collection("quickstart")
- Construct the vector store:
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
Here is an example of setting up a Chroma vector store using the steps above:
import chromadb
from llama_index.vector_stores.chroma import ChromaVectorStore
# Creating a Chroma client
# EphemeralClient operates purely in-memory, PersistentClient will also save to disk
chroma_client = chromadb.EphemeralClient()
chroma_collection = chroma_client.create_collection("quickstart")
# construct vector store
vector_store = ChromaVectorStore(chroma_collection=chroma_collection)
This example demonstrates how to create a Chroma client, create a collection named "quickstart", and then construct a Chroma vector store using that collection.