Chat Engine - Best Mode¶
The default chat engine mode is "best", which uses the "openai" mode if you are using an OpenAI model that supports the latest function calling API, and otherwise falls back to the "react" mode.
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
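Conceptually, the "best" mode's selection logic boils down to a single branch. The sketch below is purely illustrative (the helper name and its boolean parameter are hypothetical, not part of the LlamaIndex API):

```python
def resolve_chat_mode(supports_function_calling: bool) -> str:
    """Hypothetical helper illustrating how "best" picks a concrete mode."""
    # If the LLM supports the function calling API, use the "openai" agent;
    # otherwise fall back to the ReAct agent.
    return "openai" if supports_function_calling else "react"

print(resolve_chat_mode(True))   # "openai"
print(resolve_chat_mode(False))  # "react"
```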
In [ ]
%pip install llama-index-llms-anthropic
%pip install llama-index-llms-openai
In [ ]
!pip install llama-index
Download Data¶
In [ ]
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
--2024-01-27 12:15:55--  https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 2606:50c0:8001::154, 2606:50c0:8002::154, 2606:50c0:8003::154, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|2606:50c0:8001::154|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 75042 (73K) [text/plain]
Saving to: ‘data/paul_graham/paul_graham_essay.txt’

data/paul_graham/pa 100%[===================>]  73.28K  --.-KB/s    in 0.008s

2024-01-27 12:15:55 (9.38 MB/s) - ‘data/paul_graham/paul_graham_essay.txt’ saved [75042/75042]
Get Started in 5 Lines of Code¶
Load data and build the index.
In [ ]
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.openai import OpenAI
from llama_index.llms.anthropic import Anthropic
llm = OpenAI(model="gpt-4")
data = SimpleDirectoryReader(input_dir="./data/paul_graham/").load_data()
index = VectorStoreIndex.from_documents(data)
Configure the chat engine.
In [ ]
chat_engine = index.as_chat_engine(chat_mode="best", llm=llm, verbose=True)
Chat with your data.
In [ ]
response = chat_engine.chat(
"What are the first programs Paul Graham tried writing?"
)
Added user message to memory: What are the first programs Paul Graham tried writing?
=== Calling Function ===
Calling function: query_engine_tool with args: {
  "input": "What are the first programs Paul Graham tried writing?"
}
Got output: The first programs Paul Graham tried writing were on the IBM 1401 that their school district used for what was then called "data processing." The language he used was an early version of Fortran.
========================
In [ ]
print(response)
The first programs Paul Graham tried writing were on the IBM 1401. He used an early version of Fortran for these initial programs.