Weaviate Vector Store
If you're opening this Notebook on Colab, you will probably need to install LlamaIndex 🦙.
In [ ]
%pip install llama-index-vector-stores-weaviate
In [ ]
!pip install llama-index
Creating a Weaviate Client
In [ ]
import os
import openai
os.environ["OPENAI_API_KEY"] = ""
openai.api_key = os.environ["OPENAI_API_KEY"]
In [ ]
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
In [ ]
import weaviate
In [ ]
# cloud
cluster_url = ""
api_key = ""
client = weaviate.connect_to_wcs(
cluster_url=cluster_url,
auth_credentials=weaviate.auth.AuthApiKey(api_key),
)
# local
# client = weaviate.connect_to_local()
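If you prefer not to hard-code credentials, you can read them from environment variables instead. A minimal sketch, assuming the hypothetical variable names WEAVIATE_CLUSTER_URL and WEAVIATE_API_KEY (use whatever names your deployment defines):

In [ ]
# Hypothetical variable names -- adjust to however you store credentials.
cluster_url = os.environ["WEAVIATE_CLUSTER_URL"]
api_key = os.environ["WEAVIATE_API_KEY"]

client = weaviate.connect_to_wcs(
    cluster_url=cluster_url,
    auth_credentials=weaviate.auth.AuthApiKey(api_key),
)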
Load documents, build the VectorStoreIndex
In [ ]
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.vector_stores.weaviate import WeaviateVectorStore
from IPython.display import Markdown, display
Download Data
In [ ]
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
In [ ]
# load documents
documents = SimpleDirectoryReader("./data/paul_graham").load_data()
In [ ]
from llama_index.core import StorageContext
# If you want to load the index later, be sure to give it a name!
vector_store = WeaviateVectorStore(
weaviate_client=client, index_name="LlamaIndex"
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context
)
# NOTE: you may also choose to define an index_name manually.
# index_name = "test_prefix"
# vector_store = WeaviateVectorStore(weaviate_client=client, index_name=index_name)
Using a custom batch configuration
By default, LlamaIndex uses Weaviate's dynamic batching, which is well suited for most common scenarios. In low-latency setups, however, dynamic batching may overwhelm the server or exceed GRPC message limits. For finer control over the ingestion process, consider adjusting the batch size using fixed-size batching.
Here is how you can fine-tune WeaviateVectorStore with a custom batch:
In [ ]
from weaviate.classes.config import ConsistencyLevel
custom_batch = client.batch.fixed_size(
batch_size=123,
concurrent_requests=3,
consistency_level=ConsistencyLevel.ALL,
)
vector_store_fixed = WeaviateVectorStore(
weaviate_client=client,
index_name="LlamaIndex",
# we pass our custom batch as a client_kwargs
client_kwargs={"custom_batch": custom_batch},
)
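The fixed-batch store is used just like the default one. A minimal sketch (not part of the original notebook) that ingests the same documents through vector_store_fixed:

In [ ]
# Ingest using the fixed-size batch configuration defined above.
storage_context_fixed = StorageContext.from_defaults(
    vector_store=vector_store_fixed
)
index_fixed = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context_fixed
)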
Query Index
In [ ]
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
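Besides the synthesized answer, the response also carries the retrieved chunks via response.source_nodes (standard LlamaIndex API). A small sketch for inspecting them; the exact scores depend on your data and embedding model:

In [ ]
# Inspect which chunks were retrieved and their similarity scores.
for node_with_score in response.source_nodes:
    print(node_with_score.score, node_with_score.node.get_content()[:80])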
In [ ]
display(Markdown(f"<b>{response}</b>"))
display(Markdown(f"{response}"))
Loading the index
Here, we use the same index name as when the initial index was created. This prevents it from being auto-generated and allows us to easily connect back to it.
In [ ]
cluster_url = ""
api_key = ""
client = weaviate.connect_to_wcs(
cluster_url=cluster_url,
auth_credentials=weaviate.auth.AuthApiKey(api_key),
)
# local
# client = weaviate.connect_to_local()
cluster_url = "" api_key = "" client = weaviate.connect_to_wcs( cluster_url=cluster_url, auth_credentials=weaviate.auth.AuthApiKey(api_key), ) # local # client = weaviate.connect_to_local()
In [ ]
vector_store = WeaviateVectorStore(
weaviate_client=client, index_name="LlamaIndex"
)
loaded_index = VectorStoreIndex.from_vector_store(vector_store)
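If you only need the retrieved chunks without LLM synthesis, you can use the loaded index as a retriever instead of a query engine. A minimal sketch using standard LlamaIndex API:

In [ ]
# Retrieve raw nodes without running them through an LLM.
retriever = loaded_index.as_retriever(similarity_top_k=2)
nodes = retriever.retrieve("What happened at interleaf?")
for node in nodes:
    print(node.score, node.node.get_content()[:80])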
In [ ]
# set Logging to DEBUG for more detailed outputs
query_engine = loaded_index.as_query_engine()
response = query_engine.query("What happened at interleaf?")
display(Markdown(f"<b>{response}</b>"))
Metadata Filtering
Let's insert a dummy document and try to filter so that only that document is returned.
In [ ]
from llama_index.core import Document
doc = Document.example()
print(doc.metadata)
print("-----")
print(doc.text[:100])
In [ ]
loaded_index.insert(doc)
In [ ]
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters
filters = MetadataFilters(
filters=[ExactMatchFilter(key="filename", value="README.md")]
)
query_engine = loaded_index.as_query_engine(filters=filters)
response = query_engine.query("What is the name of the file?")
display(Markdown(f"<b>{response}</b>"))
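ExactMatchFilter only covers equality. For other comparisons, LlamaIndex also provides MetadataFilter with an explicit operator; a sketch of the same filter written in that more general form:

In [ ]
from llama_index.core.vector_stores import MetadataFilter, FilterOperator

# Equivalent filter using the more general MetadataFilter API.
filters = MetadataFilters(
    filters=[
        MetadataFilter(
            key="filename", value="README.md", operator=FilterOperator.EQ
        )
    ]
)
query_engine = loaded_index.as_query_engine(filters=filters)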
Deleting the index completely
You can delete the index created by the vector store using the delete_index function:
In [ ]
vector_store.delete_index()
In [ ]
vector_store.delete_index() # calling the function again does nothing
Connection Termination
You must ensure your client connections are closed:
In [ ]
client.close()
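To guarantee the connection is released even if an earlier step raises, you can wrap your work in try/finally. A minimal sketch, reusing the cluster_url and api_key defined above:

In [ ]
# Ensure the client is closed even when an error occurs mid-script.
client = weaviate.connect_to_wcs(
    cluster_url=cluster_url,
    auth_credentials=weaviate.auth.AuthApiKey(api_key),
)
try:
    ...  # build indexes, run queries
finally:
    client.close()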