AwaDB Vector Store
If you're opening this Notebook on Colab, you will probably need to install LlamaIndex 🦙.
In [ ]
%pip install llama-index-embeddings-huggingface
%pip install llama-index-vector-stores-awadb
In [ ]
!pip install llama-index
Creating an AwaDB index
In [ ]
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
Load documents, build the VectorStoreIndex
In [ ]
from llama_index.core import (
SimpleDirectoryReader,
VectorStoreIndex,
StorageContext,
)
from IPython.display import Markdown, display
import openai
openai.api_key = ""
INFO:numexpr.utils:Note: NumExpr detected 12 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
Note: NumExpr detected 12 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
INFO:numexpr.utils:NumExpr defaulting to 8 threads.
NumExpr defaulting to 8 threads.
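The query steps below use OpenAI for response synthesis, so the key above must be set. As a sketch of a common alternative to hard-coding it, you can read the key from the environment (the OPENAI_API_KEY variable name is the usual convention, not something this notebook defines):

import os

# Read the API key from the environment instead of hard-coding it (sketch)
openai.api_key = os.environ.get("OPENAI_API_KEY", "")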
Download Data
In [ ]
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
Load Data
In [ ]
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
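Before building the index, it can help to confirm what was loaded. A minimal, illustrative check (not part of the original notebook):

# Sanity-check the loaded documents (illustrative)
print(f"Loaded {len(documents)} document(s)")
print(documents[0].text[:200])  # preview the first 200 characters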
In [ ]
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.vector_stores.awadb import AwaDBVectorStore
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
vector_store = AwaDBVectorStore()
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context, embed_model=embed_model
)
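If the AwaDB table created above still holds the embeddings in a later session, you can attach to it without re-ingesting the documents. A minimal sketch, assuming AwaDBVectorStore() with no arguments opens the same default table as before:

# Reconnect to the existing AwaDB table without re-embedding (sketch)
existing_store = AwaDBVectorStore()
index = VectorStoreIndex.from_vector_store(
    existing_store, embed_model=embed_model
)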
Query the Index
In [ ]
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
In [ ]
display(Markdown(f"<b>{response}</b>"))
display(Markdown(f"{response}"))
Growing up, the author wrote short stories, experimented with programming on an IBM 1401, and nagged his father into buying a TRS-80 computer, on which he wrote simple games, a program to predict how high his model rockets would fly, and a word processor. He also studied philosophy in college, later switched to AI, and worked on building the underlying infrastructure of the web. He wrote essays and published them online, hosted dinners for a group of friends every Thursday night, painted, and bought a building in Cambridge.
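To see which chunks AwaDB retrieved to support this answer, you can inspect the response's source nodes. A short illustrative snippet using standard LlamaIndex response attributes:

# Print each retrieved node's similarity score and a content preview
for node_with_score in response.source_nodes:
    print(node_with_score.score, node_with_score.node.get_content()[:100])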
In [ ]
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine()
response = query_engine.query(
"What did the author do after his time at Y Combinator?"
)
In [ ]
display(Markdown(f"<b>{response}</b>"))
display(Markdown(f"{response}"))
After Y Combinator, the author wrote essays, worked on Lisp, and painted. He also visited his mother in Oregon and helped her leave the nursing home.
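You can also query the AwaDB store directly through a retriever, skipping LLM synthesis. A minimal sketch with a hypothetical example question; similarity_top_k controls how many matches are returned:

# Retrieve the top-3 most similar nodes without generating an answer
retriever = index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve("What did the author work on at Viaweb?")
for n in nodes:
    print(n.score, n.node.get_content()[:80])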