Function Calling AWS Bedrock Converse Agent
This notebook shows how to use our AWS Bedrock Converse agent, powered by function calling capabilities.
Initial Setup
Let's start by importing some simple building blocks.
The main things we need are:
- AWS credentials with access to Bedrock and the Claude Haiku LLM
- A place to hold conversation history
- Definitions for the tools our agent can use.
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
In [ ]
%pip install llama-index
%pip install llama-index-llms-bedrock-converse
%pip install llama-index-embeddings-huggingface
Let's define some very simple calculator tools for our agent.
In [ ]
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the resulting integer."""
    return a * b


def add(a: int, b: int) -> int:
    """Add two integers and return the resulting integer."""
    return a + b
Make sure to set your AWS credentials, either via a profile_name or with the keys below.
In [ ]
from llama_index.llms.bedrock_converse import BedrockConverse
llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    # NOTE: replace with your own AWS credentials
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="AWS Region to use, eg. us-east-1",
)
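Alternatively, if you have a named AWS profile configured locally (for example via `aws configure`), BedrockConverse can pick up credentials from it instead of explicit keys. A minimal sketch, assuming a profile called "default":
In [ ]
from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    # NOTE: assumes a local AWS profile named "default"; replace with your own
    profile_name="default",
    region_name="us-east-1",
)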
Initialize the AWS Bedrock Converse Agent
Here we initialize a simple AWS Bedrock Converse agent with our calculator functions.
In [ ]
from llama_index.core.agent.workflow import FunctionAgent
agent = FunctionAgent(
    tools=[multiply, add],
    llm=llm,
)
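FunctionAgent also accepts an optional system prompt if you want to steer the agent's behavior. A minimal sketch; the prompt text here is just an illustration, not part of the original notebook:
In [ ]
agent = FunctionAgent(
    tools=[multiply, add],
    llm=llm,
    # example system prompt; adjust or omit as needed
    system_prompt="You are a calculator assistant. Always use the provided tools for arithmetic.",
)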
Chat
In [ ]
response = await agent.run("What is (121 + 2) * 5?")
print(str(response))
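Each `agent.run()` call starts a fresh conversation by default. To carry chat history across calls (the "place to hold conversation history" from the setup list), you can reuse a workflow Context between runs. A minimal sketch; the follow-up question is just an illustration:
In [ ]
from llama_index.core.workflow import Context

ctx = Context(agent)

# the same Context carries conversation history across runs
response = await agent.run("What is (121 + 2) * 5?", ctx=ctx)
response = await agent.run("Now add 10 to that result.", ctx=ctx)
print(str(response))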
In [ ]
# inspect sources
print(response.tool_calls)
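Each entry in `response.tool_calls` records which tool the LLM selected and the arguments it passed. A quick sketch of pulling those fields out; the attribute names assume the default tool-selection objects:
In [ ]
# print the name and arguments of each tool call the agent made
for tool_call in response.tool_calls:
    print(tool_call.tool_name, tool_call.tool_kwargs)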
AWS Bedrock Converse Agent over a RAG Pipeline
Build an AWS Bedrock Converse agent over a simple 10K document. We use HuggingFace embeddings with BAAI/bge-small-en-v1.5 to build the RAG pipeline, and pass it to the AWS Bedrock Converse agent as a tool.
In [ ]
!mkdir -p 'data/10k/'
!curl -o 'data/10k/uber_2021.pdf' 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/10k/uber_2021.pdf'
In [ ]
from llama_index.core.tools import QueryEngineTool
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.bedrock_converse import BedrockConverse
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
query_llm = BedrockConverse(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    # NOTE: replace with your own AWS credentials
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="AWS Region to use, eg. us-east-1",
)

# load data
uber_docs = SimpleDirectoryReader(
    input_files=["./data/10k/uber_2021.pdf"]
).load_data()

# build index
uber_index = VectorStoreIndex.from_documents(
    uber_docs, embed_model=embed_model
)
uber_engine = uber_index.as_query_engine(similarity_top_k=3, llm=query_llm)

query_engine_tool = QueryEngineTool.from_defaults(
    query_engine=uber_engine,
    name="uber_10k",
    description=(
        "Provides information about Uber financials for year 2021. "
        "Use a detailed plain text question as input to the tool."
    ),
)
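Before handing the query engine to an agent, it can help to sanity-check the RAG pipeline directly. A minimal sketch; the question is just an example:
In [ ]
# query the RAG pipeline directly, bypassing the agent
rag_response = uber_engine.query("What was Uber's revenue for 2021?")
print(str(rag_response))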
In [ ]
from llama_index.core.agent.workflow import FunctionAgent
agent = FunctionAgent(
    tools=[query_engine_tool],
    llm=llm,
)
In [ ]
response = await agent.run(
    "Tell me both the risk factors and tailwinds for Uber? Do two parallel tool calls."
)
In [ ]
print(str(response))
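For longer answers you may want to stream tokens as they arrive instead of waiting for the final response. A minimal sketch using the workflow event stream; it assumes the AgentStream event type exposed by llama_index.core.agent.workflow:
In [ ]
from llama_index.core.agent.workflow import AgentStream

handler = agent.run("Summarize Uber's 2021 risk factors in two sentences.")

# print text deltas as the LLM streams them back
async for event in handler.stream_events():
    if isinstance(event, AgentStream):
        print(event.delta, end="", flush=True)

response = await handler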