Setup¶
If you're opening this notebook on Colab, you will probably need to make sure llama-index is installed.
In [ ]
!pip install llama-index
In [ ]
!pip install opensearch-py
In [ ]
%pip install llama-index-vector-stores-lindorm
In [ ]
# choose dashscope as the embedding and LLM model; you can also use the default openai or other models to test
%pip install llama-index-embeddings-dashscope
%pip install llama-index-llms-dashscope
Import the required package dependencies
In [ ]
from llama_index.core import SimpleDirectoryReader
from llama_index.vector_stores.lindorm import (
LindormVectorStore,
LindormVectorClient,
)
from llama_index.core import VectorStoreIndex, StorageContext
Configure the dashscope embedding and LLM models; you can also use the default openai or other models to test
In [ ]
# set embedding model
from llama_index.core import Settings
from llama_index.embeddings.dashscope import DashScopeEmbedding
# Global Settings
Settings.embed_model = DashScopeEmbedding()
In [ ]
# configure the LLM model
from llama_index.llms.dashscope import DashScope, DashScopeGenerationModels
dashscope_llm = DashScope(model_name=DashScopeGenerationModels.QWEN_MAX)
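If you would rather test with the default OpenAI models instead of DashScope, a minimal sketch (assuming an OPENAI_API_KEY environment variable is set; the default model noted in the comment is the library's at the time of writing):

from llama_index.core import Settings
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI

# the OpenAI integrations typically ship with the llama-index meta-package
Settings.embed_model = OpenAIEmbedding()  # defaults to text-embedding-ada-002, 1536 dimensions
Settings.llm = OpenAI()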
Download example data:¶
In [ ]
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
--2024-07-10 14:01:02-- https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.111.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 75042 (73K) [text/plain] Saving to: 'data/paul_graham/paul_graham_essay.txt' data/paul_graham/pa 100%[===================>] 73.28K 43.2KB/s in 1.7s 2024-07-10 14:01:04 (43.2 KB/s) - 'data/paul_graham/paul_graham_essay.txt' saved [75042/75042]
Load data:¶
In [ ]
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
print(f"Total documents: {len(documents)}")
print(f"First document, id: {documents[0].doc_id}")
print(f"First document, hash: {documents[0].hash}")
print(
"First document, text"
f" ({len(documents[0].text)} characters):\n{'='*20}\n{documents[0].text[:360]} ..."
)
Total documents: 1 First document, id: 5ddae8c1-f137-4500-83cd-e38e42d4f72b First document, hash: 8fde8a692925d317c5544f3dbaa88eeb5e9ec0cbdb74da1de19d57ee75ac0c3c First document, text (75014 characters): ==================== What I Worked On February 2021 Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined ma ...
Create a Lindorm Vector Store object:¶
In [ ]
# only for jupyter notebook
import nest_asyncio
nest_asyncio.apply()
# lindorm instance info
host = "ld-bp******jm*******-proxy-search-pub.lindorm.aliyuncs.com"
port = 30070
username = "your_username"
password = "your_password"
# index to demonstrate the VectorStore impl
index_name = "lindorm_rag_test"
# extension param of lindorm search: number of cluster units to query; between 1 and method.parameters.nlist (ivfpq param); no default value.
nprobe = "2"
# extension param of lindorm search: usually used to improve recall accuracy, at the cost of extra performance overhead; between 1 and 200; default: 10.
reorder_factor = "10"
# LindormVectorClient encapsulates logic for a single index with vector search enabled
client = LindormVectorClient(
host,
port,
username,
password,
index=index_name,
dimension=1536, # match dimension of your embedding model
nprobe=nprobe,
reorder_factor=reorder_factor,
# filter_type="pre_filter/post_filter(default)"
)
# initialize vector store
vector_store = LindormVectorStore(client)
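Note that the dimension argument must match the embedding model configured earlier; the default DashScope text-embedding model produces 1536-dimensional vectors. Rather than hard-coding the value, you can probe the active model at runtime. A minimal sketch:

# derive the vector dimension from the embedding model configured in Settings
probe = Settings.embed_model.get_text_embedding("dimension probe")
print(len(probe))  # 1536 for the default DashScope text-embedding model
# pass dimension=len(probe) to LindormVectorClient to avoid a mismatch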
Build index from documents:¶
In [ ]
storage_context = StorageContext.from_defaults(vector_store=vector_store)
# initialize an index using our sample data and the client we just created
index = VectorStoreIndex.from_documents(
documents=documents, storage_context=storage_context, show_progress=True
)
/Users/guoguo/Library/Python/3.9/lib/python/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html from .autonotebook import tqdm as notebook_tqdm Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 25.27it/s] Generating embeddings: 100%|██████████| 22/22 [00:02<00:00, 10.31it/s]
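Because the vectors now live in Lindorm, you do not need to re-ingest the documents next time; you can reconnect to the populated index directly from the vector store. A minimal sketch:

# rebuild an index handle on top of the existing Lindorm index, without re-ingesting
existing_index = VectorStoreIndex.from_vector_store(vector_store=vector_store)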
Query the store:¶
Search test¶
In [ ]
# Set Retriever
vector_retriever = index.as_retriever()
# search
source_nodes = vector_retriever.retrieve("What did the author do growing up?")
# check source_nodes
for node in source_nodes:
    # print(node.metadata)
    print("---------------------------------------------")
    print(f"Score: {node.score:.3f}")
    print(node.get_content())
    print("---------------------------------------------\n\n")
--------------------------------------------- Score: 0.448 What I Worked On February 2021 Before college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep. The first programs I tried writing were on the IBM 1401 that our school district used for what was then called "data processing." This was in 9th grade, so I was 13 or 14. The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it. It was like a mini Bond villain's lair down there, with all these alien-looking machines — CPU, disk drives, printer, card reader — sitting up on a raised floor under bright fluorescent lights. The language we used was an early version of Fortran. You had to type programs on punch cards, then stack them in the card reader and press a button to load the program into memory and run it. The result would ordinarily be to print something on the spectacularly loud printer. I was puzzled by the 1401. I couldn't figure out what to do with it. And in retrospect there's not much I could have done with it. The only form of input to programs was data stored on punched cards, and I didn't have any data stored on punched cards. The only other option was to do things that didn't rely on any input, like calculate approximations of pi, but I didn't know enough math to do anything interesting of that type. So I'm not surprised I can't remember any programs I wrote, because they can't have done much. My clearest memory is of the moment I learned it was possible for programs not to terminate, when one of mine didn't. On a machine without time-sharing, this was a social as well as a technical error, as the data center manager's expression made clear. With microcomputers, everything changed. Now you could have a computer sitting right in front of you, on a desk, that could respond to your keystrokes as it was running instead of just churning through a stack of punch cards and then stopping. [1] The first of my friends to get a microcomputer built it himself. It was sold as a kit by Heathkit. I remember vividly how impressed and envious I felt watching him sitting in front of it, typing programs right into the computer. Computers were expensive in those days and it took me years of nagging before I convinced my father to buy one, a TRS-80, in about 1980. The gold standard then was the Apple II, but a TRS-80 was good enough. This was when I really started programming. I wrote simple games, a program to predict how high my model rockets would fly, and a word processor that my father used to write at least one book. There was only room in memory for about 2 pages of text, so he'd write 2 pages at a time and then print them out, but it was a lot better than a typewriter. Though I liked programming, I didn't plan to study it in college. In college I was going to study philosophy, which sounded much more powerful. It seemed, to my naive high school self, to be the study of the ultimate truths, compared to which the things studied in other fields would be mere domain knowledge. What I discovered when I got to college was that the other fields took up so much of the space of ideas that there wasn't much left for these supposed ultimate truths. 
All that seemed left for philosophy were edge cases that people in other fields felt could safely be ignored. I couldn't have put this into words when I was 18. All I knew at the time was that I kept taking philosophy courses and they kept being boring. So I decided to switch to AI. AI was in the air in the mid 1980s, but there were two things especially that made me want to work on it: a novel by Heinlein called The Moon is a Harsh Mistress, which featured an intelligent computer called Mike, and a PBS documentary that showed Terry Winograd using SHRDLU. I haven't tried rereading The Moon is a Harsh Mistress, so I don't know how well it has aged, but when I read it I was drawn entirely into its world. --------------------------------------------- --------------------------------------------- Score: 0.434 It was not, in fact, simply a matter of teaching SHRDLU more words. That whole way of doing AI, with explicit data structures representing concepts, was not going to work. Its brokenness did, as so often happens, generate a lot of opportunities to write papers about various band-aids that could be applied to it, but it was never going to get us Mike. So I looked around to see what I could salvage from the wreckage of my plans, and there was Lisp. I knew from experience that Lisp was interesting for its own sake and not just for its association with AI, even though that was the main reason people cared about it at the time. So I decided to focus on Lisp. In fact, I decided to write a book about Lisp hacking. It's scary to think how little I knew about Lisp hacking when I started writing that book. But there's nothing like writing a book about something to help you learn it. The book, On Lisp, wasn't published till 1993, but I wrote much of it in grad school. Computer Science is an uneasy alliance between two halves, theory and systems. The theory people prove things, and the systems people build things. I wanted to build things. I had plenty of respect for theory — indeed, a sneaking suspicion that it was the more admirable of the two halves — but building things seemed so much more exciting. The problem with systems work, though, was that it didn't last. Any program you wrote today, no matter how good, would be obsolete in a couple decades at best. People might mention your software in footnotes, but no one would actually use it. And indeed, it would seem very feeble work. Only people with a sense of the history of the field would even realize that, in its time, it had been good. There were some surplus Xerox Dandelions floating around the computer lab at one point. Anyone who wanted one to play around with could have one. I was briefly tempted, but they were so slow by present standards; what was the point? No one else wanted one either, so off they went. That was what happened to systems work. I wanted not just to build things, but to build things that would last. In this dissatisfied state I went in 1988 to visit Rich Draves at CMU, where he was in grad school. One day I went to visit the Carnegie Institute, where I'd spent a lot of time as a kid. While looking at a painting there I realized something that might seem obvious, but was a big surprise to me. There, right on the wall, was something you could make that would last. Paintings didn't become obsolete. Some of the best ones were hundreds of years old. And moreover this was something you could make a living doing. 
Not as easily as you could by writing software, of course, but I thought if you were really industrious and lived really cheaply, it had to be possible to make enough to survive. And as an artist you could be truly independent. You wouldn't have a boss, or even need to get research funding. I had always liked looking at paintings. Could I make them? I had no idea. I'd never imagined it was even possible. I knew intellectually that people made art — that it didn't just appear spontaneously — but it was as if the people who made it were a different species. They either lived long ago or were mysterious geniuses doing strange things in profiles in Life magazine. The idea of actually being able to make art, to put that verb before that noun, seemed almost miraculous. That fall I started taking art classes at Harvard. Grad students could take classes in any department, and my advisor, Tom Cheatham, was very easy going. If he even knew about the strange classes I was taking, he never said anything. So now I was in a PhD program in computer science, yet planning to be an artist, yet also genuinely in love with Lisp hacking and working away at On Lisp. In other words, like many a grad student, I was working energetically on multiple projects that were not my thesis. I didn't see a way out of this situation. I didn't want to drop out of grad school, but how else was I going to get out? I remember when my friend Robert Morris got kicked out of Cornell for writing the internet worm of 1988, I was envious that he'd found such a spectacular way to get out of grad school. Then one day in April 1990 a crack appeared in the wall. ---------------------------------------------
Basic query¶
In [ ]
# run query
query_engine = index.as_query_engine(llm=dashscope_llm)
# query_engine = index.as_query_engine()
res = query_engine.query("What did the author do growing up?")
res.response
Out [ ]
'Growing up, the author worked on two main activities outside of school: writing and programming. They wrote short stories instead of essays, and their early programming attempts involved using an IBM 1401 computer with Fortran, despite the challenges posed by the limited input methods and their lack of sophisticated mathematical knowledge.'
Metadata filtering¶
The Lindorm vector store now supports metadata filtering in the form of exact-match key=value pairs, as well as range filtering with >, <, >=, and <= at query time.
In [ ]
from llama_index.core import Document
from llama_index.core.vector_stores import (
MetadataFilters,
MetadataFilter,
FilterOperator,
FilterCondition,
)
import regex as re
In [ ]
# Split the text into paragraphs.
text_chunks = documents[0].text.split("\n\n")
# Create a document for each footnote
footnotes = [
Document(
text=chunk,
id=documents[0].doc_id,
metadata={
"is_footnote": bool(re.search(r"^\s*\[\d+\]\s*", chunk)),
"mark_id": i,
},
)
for i, chunk in enumerate(text_chunks)
if bool(re.search(r"^\s*\[\d+\]\s*", chunk))
]
In [ ]
# Insert the footnotes into the index
for f in footnotes:
index.insert(f)
Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 1140.07it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 506.62it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 957.82it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 1170.94it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 1043.88it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 1337.47it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 1055.97it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 1331.10it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 1408.43it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 1081.84it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 479.68it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 946.15it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 1062.66it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 363.93it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 760.53it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 1027.76it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 470.37it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 513.13it/s] Parsing nodes: 100%|██████████| 1/1 [00:00<00:00, 460.81it/s]
In [ ]
retriever = index.as_retriever(
filters=MetadataFilters(
filters=[
MetadataFilter(
key="is_footnote", value="true", operator=FilterOperator.EQ
),
MetadataFilter(
key="mark_id", value=0, operator=FilterOperator.GTE
),
],
condition=FilterCondition.AND,
),
)
result = retriever.retrieve("What did the author say about space aliens and Lisp?")
print(result)
[NodeWithScore(node=TextNode(id_='c307ea8e-3647-43f0-9858-34581cc50ce5', embedding=None, metadata={'ref_doc_id': 'd9f7000e-412c-466d-9792-88b15aad7148', 'mark_id': 173, 'is_footnote': True, 'document_id': 'd9f7000e-412c-466d-9792-88b15aad7148', '_node_type': 'TextNode', 'doc_id': 'd9f7000e-412c-466d-9792-88b15aad7148', 'content': "[19] One way to get more precise about the concept of invented vs discovered is to talk about space aliens. Any sufficiently advanced alien civilization would certainly know about the Pythagorean theorem, for example. I believe, though with less certainty, that they would also know about the Lisp in McCarthy's 1960 paper.", '_node_content': '{"id_": "c307ea8e-3647-43f0-9858-34581cc50ce5", "embedding": null, "metadata": {"is_footnote": true, "mark_id": 173}, "excluded_embed_metadata_keys": [], "excluded_llm_metadata_keys": [], "relationships": {"1": {"node_id": "d9f7000e-412c-466d-9792-88b15aad7148", "node_type": "4", "metadata": {"is_footnote": true, "mark_id": 173}, "hash": "b43f450088029936fd7a03f5917ff9c487ba2e3ed9c6c22de43e024a67f8f48e", "class_name": "RelatedNodeInfo"}}, "text": "", "mimetype": "text/plain", "start_char_idx": 0, "end_char_idx": 323, "text_template": "{metadata_str}\\n\\n{content}", "metadata_template": "{key}: {value}", "metadata_seperator": "\\n", "class_name": "TextNode"}'}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text="[19] One way to get more precise about the concept of invented vs discovered is to talk about space aliens. Any sufficiently advanced alien civilization would certainly know about the Pythagorean theorem, for example. I believe, though with less certainty, that they would also know about the Lisp in McCarthy's 1960 paper.", mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n'), score=0.5169236)]
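The retriever above combines an exact-match filter (EQ) with a range filter (GTE). The other range operators (>, <, <=) follow the same pattern; for example, a minimal sketch that keeps only footnotes whose mark_id falls below an arbitrary cutoff of 100:

early_footnote_retriever = index.as_retriever(
    filters=MetadataFilters(
        filters=[
            MetadataFilter(
                key="is_footnote", value="true", operator=FilterOperator.EQ
            ),
            # LT mirrors the GTE usage above; 100 is an illustrative cutoff
            MetadataFilter(key="mark_id", value=100, operator=FilterOperator.LT),
        ],
        condition=FilterCondition.AND,
    ),
)
print(early_footnote_retriever.retrieve("lisp hacking"))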
In [ ]
# Create a query engine that only searches certain footnotes.
footnote_query_engine = index.as_query_engine(
filters=MetadataFilters(
filters=[
MetadataFilter(
key="is_footnote", value="true", operator=FilterOperator.EQ
),
MetadataFilter(
key="mark_id", value=0, operator=FilterOperator.GTE
),
],
condition=FilterCondition.AND,
),
llm=dashscope_llm,
)
res = footnote_query_engine.query(
    "What did the author say about space aliens and Lisp?"
)
res.response
Out [ ]
"The author suggests that any sufficiently advanced alien civilization would be aware of the Pythagorean theorem and, albeit with less certainty, they would also be familiar with Lisp as described in McCarthy's 1960 paper."
Hybrid search¶
Lindorm search supports hybrid search. Note that the minimum search granularity of the query string is one token.
In [ ]
from llama_index.core.vector_stores.types import VectorStoreQueryMode
retriever = index.as_retriever(
vector_store_query_mode=VectorStoreQueryMode.HYBRID
)
result = retriever.retrieve("What did the author say about space aliens and Lisp?")
print(result)
[NodeWithScore(node=TextNode(id_='c307ea8e-3647-43f0-9858-34581cc50ce5', embedding=None, metadata={'ref_doc_id': 'd9f7000e-412c-466d-9792-88b15aad7148', 'mark_id': 173, 'is_footnote': True, 'document_id': 'd9f7000e-412c-466d-9792-88b15aad7148', '_node_type': 'TextNode', 'doc_id': 'd9f7000e-412c-466d-9792-88b15aad7148', 'content': "[19] One way to get more precise about the concept of invented vs discovered is to talk about space aliens. Any sufficiently advanced alien civilization would certainly know about the Pythagorean theorem, for example. I believe, though with less certainty, that they would also know about the Lisp in McCarthy's 1960 paper.", '_node_content': '{"id_": "c307ea8e-3647-43f0-9858-34581cc50ce5", "embedding": null, "metadata": {"is_footnote": true, "mark_id": 173}, "excluded_embed_metadata_keys": [], "excluded_llm_metadata_keys": [], "relationships": {"1": {"node_id": "d9f7000e-412c-466d-9792-88b15aad7148", "node_type": "4", "metadata": {"is_footnote": true, "mark_id": 173}, "hash": "b43f450088029936fd7a03f5917ff9c487ba2e3ed9c6c22de43e024a67f8f48e", "class_name": "RelatedNodeInfo"}}, "text": "", "mimetype": "text/plain", "start_char_idx": 0, "end_char_idx": 323, "text_template": "{metadata_str}\\n\\n{content}", "metadata_template": "{key}: {value}", "metadata_seperator": "\\n", "class_name": "TextNode"}'}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text="[19] One way to get more precise about the concept of invented vs discovered is to talk about space aliens. Any sufficiently advanced alien civilization would certainly know about the Pythagorean theorem, for example. I believe, though with less certainty, that they would also know about the Lisp in McCarthy's 1960 paper.", mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n'), score=0.5169236), NodeWithScore(node=TextNode(id_='4252133e-d45a-49ff-9206-30a946716e83', embedding=None, metadata={'file_path': '/Users/guoguo/code/llama_git/fork/llama_index/docs/docs/examples/vector_stores/data/paul_graham/paul_graham_essay.txt', 'ref_doc_id': '5ddae8c1-f137-4500-83cd-e38e42d4f72b', 'file_name': 'paul_graham_essay.txt', 'file_type': 'text/plain', 'last_modified_date': '2024-07-10', 'creation_date': '2024-07-10', 'document_id': '5ddae8c1-f137-4500-83cd-e38e42d4f72b', '_node_type': 'TextNode', 'doc_id': '5ddae8c1-f137-4500-83cd-e38e42d4f72b', 'content': "It was not, in fact, simply a matter of teaching SHRDLU more words. That whole way of doing AI, with explicit data structures representing concepts, was not going to work. Its brokenness did, as so often happens, generate a lot of opportunities to write papers about various band-aids that could be applied to it, but it was never going to get us Mike.\n\nSo I looked around to see what I could salvage from the wreckage of my plans, and there was Lisp. I knew from experience that Lisp was interesting for its own sake and not just for its association with AI, even though that was the main reason people cared about it at the time. So I decided to focus on Lisp. In fact, I decided to write a book about Lisp hacking. It's scary to think how little I knew about Lisp hacking when I started writing that book. But there's nothing like writing a book about something to help you learn it. 
The book, On Lisp, wasn't published till 1993, but I wrote much of it in grad school.\n\nComputer Science is an uneasy alliance between two halves, theory and systems. The theory people prove things, and the systems people build things. I wanted to build things. I had plenty of respect for theory — indeed, a sneaking suspicion that it was the more admirable of the two halves — but building things seemed so much more exciting.\n\nThe problem with systems work, though, was that it didn't last. Any program you wrote today, no matter how good, would be obsolete in a couple decades at best. People might mention your software in footnotes, but no one would actually use it. And indeed, it would seem very feeble work. Only people with a sense of the history of the field would even realize that, in its time, it had been good.\n\nThere were some surplus Xerox Dandelions floating around the computer lab at one point. Anyone who wanted one to play around with could have one. I was briefly tempted, but they were so slow by present standards; what was the point? No one else wanted one either, so off they went. That was what happened to systems work.\n\nI wanted not just to build things, but to build things that would last.\n\nIn this dissatisfied state I went in 1988 to visit Rich Draves at CMU, where he was in grad school. One day I went to visit the Carnegie Institute, where I'd spent a lot of time as a kid. While looking at a painting there I realized something that might seem obvious, but was a big surprise to me. There, right on the wall, was something you could make that would last. Paintings didn't become obsolete. Some of the best ones were hundreds of years old.\n\nAnd moreover this was something you could make a living doing. Not as easily as you could by writing software, of course, but I thought if you were really industrious and lived really cheaply, it had to be possible to make enough to survive. And as an artist you could be truly independent. You wouldn't have a boss, or even need to get research funding.\n\nI had always liked looking at paintings. Could I make them? I had no idea. I'd never imagined it was even possible. I knew intellectually that people made art — that it didn't just appear spontaneously — but it was as if the people who made it were a different species. They either lived long ago or were mysterious geniuses doing strange things in profiles in Life magazine. The idea of actually being able to make art, to put that verb before that noun, seemed almost miraculous.\n\nThat fall I started taking art classes at Harvard. Grad students could take classes in any department, and my advisor, Tom Cheatham, was very easy going. If he even knew about the strange classes I was taking, he never said anything.\n\nSo now I was in a PhD program in computer science, yet planning to be an artist, yet also genuinely in love with Lisp hacking and working away at On Lisp. In other words, like many a grad student, I was working energetically on multiple projects that were not my thesis.\n\nI didn't see a way out of this situation. I didn't want to drop out of grad school, but how else was I going to get out? 
I remember when my friend Robert Morris got kicked out of Cornell for writing the internet worm of 1988, I was envious that he'd found such a spectacular way to get out of grad school.\n\nThen one day in April 1990 a crack appeared in the wall.", 'file_size': 75042, '_node_content': '{"id_": "4252133e-d45a-49ff-9206-30a946716e83", "embedding": null, "metadata": {"file_path": "/Users/guoguo/code/llama_git/fork/llama_index/docs/docs/examples/vector_stores/data/paul_graham/paul_graham_essay.txt", "file_name": "paul_graham_essay.txt", "file_type": "text/plain", "file_size": 75042, "creation_date": "2024-07-10", "last_modified_date": "2024-07-10"}, "excluded_embed_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "excluded_llm_metadata_keys": ["file_name", "file_type", "file_size", "creation_date", "last_modified_date", "last_accessed_date"], "relationships": {"1": {"node_id": "5ddae8c1-f137-4500-83cd-e38e42d4f72b", "node_type": "4", "metadata": {"file_path": "/Users/guoguo/code/llama_git/fork/llama_index/docs/docs/examples/vector_stores/data/paul_graham/paul_graham_essay.txt", "file_name": "paul_graham_essay.txt", "file_type": "text/plain", "file_size": 75042, "creation_date": "2024-07-10", "last_modified_date": "2024-07-10"}, "hash": "8fde8a692925d317c5544f3dbaa88eeb5e9ec0cbdb74da1de19d57ee75ac0c3c", "class_name": "RelatedNodeInfo"}, "2": {"node_id": "2ced3a1a-145e-4385-9dc0-1db6fc1c6380", "node_type": "1", "metadata": {"file_path": "/Users/guoguo/code/llama_git/fork/llama_index/docs/docs/examples/vector_stores/data/paul_graham/paul_graham_essay.txt", "file_name": "paul_graham_essay.txt", "file_type": "text/plain", "file_size": 75042, "creation_date": "2024-07-10", "last_modified_date": "2024-07-10"}, "hash": "9ea48d733470e38167724bb328f410ee310a1823df667fe2fa78d9ac5aee1e9b", "class_name": "RelatedNodeInfo"}, "3": {"node_id": "b9b09870-58e1-4501-acd3-6254d97d0864", "node_type": "1", "metadata": {}, "hash": "6ceb89cb4f34e38e01c21ddc09378cd5c8cd8b201edf8fccb6ac8cc82b471676", "class_name": "RelatedNodeInfo"}}, "text": "", "mimetype": "text/plain", "start_char_idx": 6812, "end_char_idx": 11085, "text_template": "{metadata_str}\\n\\n{content}", "metadata_template": "{key}: {value}", "metadata_seperator": "\\n", "class_name": "TextNode"}'}, excluded_embed_metadata_keys=[], excluded_llm_metadata_keys=[], relationships={}, text="It was not, in fact, simply a matter of teaching SHRDLU more words. That whole way of doing AI, with explicit data structures representing concepts, was not going to work. Its brokenness did, as so often happens, generate a lot of opportunities to write papers about various band-aids that could be applied to it, but it was never going to get us Mike.\n\nSo I looked around to see what I could salvage from the wreckage of my plans, and there was Lisp. I knew from experience that Lisp was interesting for its own sake and not just for its association with AI, even though that was the main reason people cared about it at the time. So I decided to focus on Lisp. In fact, I decided to write a book about Lisp hacking. It's scary to think how little I knew about Lisp hacking when I started writing that book. But there's nothing like writing a book about something to help you learn it. The book, On Lisp, wasn't published till 1993, but I wrote much of it in grad school.\n\nComputer Science is an uneasy alliance between two halves, theory and systems. The theory people prove things, and the systems people build things. 
I wanted to build things. I had plenty of respect for theory — indeed, a sneaking suspicion that it was the more admirable of the two halves — but building things seemed so much more exciting.\n\nThe problem with systems work, though, was that it didn't last. Any program you wrote today, no matter how good, would be obsolete in a couple decades at best. People might mention your software in footnotes, but no one would actually use it. And indeed, it would seem very feeble work. Only people with a sense of the history of the field would even realize that, in its time, it had been good.\n\nThere were some surplus Xerox Dandelions floating around the computer lab at one point. Anyone who wanted one to play around with could have one. I was briefly tempted, but they were so slow by present standards; what was the point? No one else wanted one either, so off they went. That was what happened to systems work.\n\nI wanted not just to build things, but to build things that would last.\n\nIn this dissatisfied state I went in 1988 to visit Rich Draves at CMU, where he was in grad school. One day I went to visit the Carnegie Institute, where I'd spent a lot of time as a kid. While looking at a painting there I realized something that might seem obvious, but was a big surprise to me. There, right on the wall, was something you could make that would last. Paintings didn't become obsolete. Some of the best ones were hundreds of years old.\n\nAnd moreover this was something you could make a living doing. Not as easily as you could by writing software, of course, but I thought if you were really industrious and lived really cheaply, it had to be possible to make enough to survive. And as an artist you could be truly independent. You wouldn't have a boss, or even need to get research funding.\n\nI had always liked looking at paintings. Could I make them? I had no idea. I'd never imagined it was even possible. I knew intellectually that people made art — that it didn't just appear spontaneously — but it was as if the people who made it were a different species. They either lived long ago or were mysterious geniuses doing strange things in profiles in Life magazine. The idea of actually being able to make art, to put that verb before that noun, seemed almost miraculous.\n\nThat fall I started taking art classes at Harvard. Grad students could take classes in any department, and my advisor, Tom Cheatham, was very easy going. If he even knew about the strange classes I was taking, he never said anything.\n\nSo now I was in a PhD program in computer science, yet planning to be an artist, yet also genuinely in love with Lisp hacking and working away at On Lisp. In other words, like many a grad student, I was working energetically on multiple projects that were not my thesis.\n\nI didn't see a way out of this situation. I didn't want to drop out of grad school, but how else was I going to get out? I remember when my friend Robert Morris got kicked out of Cornell for writing the internet worm of 1988, I was envious that he'd found such a spectacular way to get out of grad school.\n\nThen one day in April 1990 a crack appeared in the wall.", mimetype='text/plain', start_char_idx=None, end_char_idx=None, text_template='{metadata_str}\n\n{content}', metadata_template='{key}: {value}', metadata_seperator='\n'), score=0.5134669)]
In [ ]
query_engine = index.as_query_engine(
llm=dashscope_llm, vector_store_query_mode=VectorStoreQueryMode.HYBRID
)
res = query_engine.query("What did the author say about space aliens and Lisp?")
res.response
Out [ ]
"The author believes that any sufficiently advanced alien civilization would know about fundamental mathematical concepts like the Pythagorean theorem. They also express, with less certainty, the idea that these aliens would be familiar with Lisp, a programming language discussed in McCarthy's 1960 paper. This thought experiment serves as a way to explore the distinction between ideas that are invented versus discovered."