Function Calling NVIDIA Agent
This notebook shows how to use our NVIDIA agent, powered by function-calling capabilities.
In [ ]
%pip install --upgrade --quiet llama-index-llms-nvidia
In [ ]
import getpass
import os
# del os.environ['NVIDIA_API_KEY'] ## delete key and reset
if os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
print("Valid NVIDIA_API_KEY already in environment. Delete to reset")
else:
nvapi_key = getpass.getpass("NVAPI Key (starts with nvapi-): ")
assert nvapi_key.startswith(
"nvapi-"
), f"{nvapi_key[:5]}... is not a valid key"
os.environ["NVIDIA_API_KEY"] = nvapi_key
Valid NVIDIA_API_KEY already in environment. Delete to reset
In [ ]
from llama_index.llms.nvidia import NVIDIA
from llama_index.core.tools import FunctionTool
from llama_index.embeddings.nvidia import NVIDIAEmbedding
In [ ]
def multiply(a: int, b: int) -> int:
"""Multiply two integers and return the result integer"""
return a * b
def add(a: int, b: int) -> int:
"""Add two integers and return the result integer"""
return a + b
Let's define some very simple calculator tools for the Agent.
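Under the hood, the agent wraps each plain function in a tool, deriving the tool's name, description, and argument schema from the function's name, docstring, and type hints. A minimal sketch of that derivation (a hypothetical `describe_tool` helper, not the actual llama-index implementation):

```python
import inspect
from typing import get_type_hints


def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result integer"""
    return a * b


def describe_tool(fn):
    # Hypothetical sketch of how a function-calling framework might
    # derive a tool schema; not the actual llama-index implementation.
    hints = get_type_hints(fn)
    hints.pop("return", None)
    params = {name: t.__name__ for name, t in hints.items()}
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": params,
    }


print(describe_tool(multiply))
```

This is why descriptive docstrings and type hints matter: they become the tool description the LLM sees when deciding which tool to call.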
In [ ]
llm = NVIDIA("meta/llama-3.1-70b-instruct")
Here we initialize a simple NVIDIA agent with calculator functions.
In [ ]
from llama_index.core.agent.workflow import FunctionAgent
agent = FunctionAgent(
tools=[multiply, add],
llm=llm,
)
In [ ]
response = await agent.run("What is (121 * 3) + 42?")
print(str(response))
In [ ]
# inspect sources
print(response.tool_calls)
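As a sanity check, the answer the agent should arrive at can be reproduced by composing the two tools directly:

```python
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result integer"""
    return a * b


def add(a: int, b: int) -> int:
    """Add two integers and return the result integer"""
    return a + b


# (121 * 3) + 42 — the same computation the agent performs via tool calls
result = add(multiply(121, 3), 42)
print(result)  # 405
```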
Managing Context/Memory
In [ ]
from llama_index.core.agent.workflow import Context
ctx = Context(agent)
response = await agent.run("Hello, my name is John Doe.", ctx=ctx)
print(str(response))
response = await agent.run("What is my name?", ctx=ctx)
print(str(response))
By default, .run() is stateless. If you want to maintain state, you can pass in a Context object.
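Conceptually, the Context object carries the conversation history between otherwise stateless .run() calls. A minimal pure-Python sketch of the idea (toy classes, not the actual llama-index Context API):

```python
class ToyContext:
    """Toy stand-in for a conversation context; not the llama-index Context API."""

    def __init__(self):
        self.history = []

    def add(self, role, content):
        self.history.append((role, content))


def toy_run(message, ctx=None):
    # Stateless unless a context is supplied, mirroring agent.run(..., ctx=ctx).
    if ctx is None:
        ctx = ToyContext()  # fresh context: no memory of earlier turns
    ctx.add("user", message)
    # A real agent would send ctx.history to the LLM here; we just count turns.
    reply = f"Seen {len(ctx.history)} message(s) so far."
    ctx.add("assistant", reply)
    return reply


ctx = ToyContext()
print(toy_run("Hello, my name is John Doe.", ctx=ctx))
print(toy_run("What is my name?", ctx=ctx))  # same ctx: history persists
print(toy_run("What is my name?"))  # no ctx: history starts fresh
```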
Agent with a Personality
In [ ]
agent = FunctionAgent(
tools=[multiply, add],
llm=llm,
system_prompt="Talk like a pirate in every response.",
)
You can specify a system prompt to give the Agent additional instructions or a personality.
In [ ]
response = await agent.run("Hi")
print(response)
In [ ]
response = await agent.run("Tell me a story")
print(response)
NVIDIA Agent with RAG/Query Engine Tools
In [ ]
!mkdir -p 'data/10k/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/10k/uber_2021.pdf' -O 'data/10k/uber_2021.pdf'
In [ ]
from llama_index.core.tools import QueryEngineTool
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
embed_model = NVIDIAEmbedding(model="NV-Embed-QA", truncate="END")
# load data
uber_docs = SimpleDirectoryReader(
input_files=["./data/10k/uber_2021.pdf"]
).load_data()
# build index
uber_index = VectorStoreIndex.from_documents(
uber_docs, embed_model=embed_model
)
uber_engine = uber_index.as_query_engine(similarity_top_k=3, llm=llm)
query_engine_tool = QueryEngineTool.from_defaults(
query_engine=uber_engine,
name="uber_10k",
description=(
"Provides information about Uber financials for year 2021. "
"Use a detailed plain text question as input to the tool."
),
)
In [ ]
agent = FunctionAgent(tools=[query_engine_tool], llm=llm)
In [ ]
response = await agent.run(
"Tell me both the risk factors and tailwinds for Uber? Do two parallel tool calls."
)
print(str(response))
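The "two parallel tool calls" instruction relies on the LLM emitting multiple tool calls in a single step, which the agent can then execute concurrently. The concurrency pattern itself is plain asyncio.gather; a self-contained sketch with toy tools (hypothetical names, not the query engine above):

```python
import asyncio


async def lookup_risks(query):
    # Toy stand-in for one query-engine tool call.
    await asyncio.sleep(0.01)
    return f"risks for: {query}"


async def lookup_tailwinds(query):
    # Toy stand-in for the second, independent tool call.
    await asyncio.sleep(0.01)
    return f"tailwinds for: {query}"


async def main():
    # Run both tool calls concurrently, as an agent would for parallel calls.
    return await asyncio.gather(
        lookup_risks("Uber 2021"), lookup_tailwinds("Uber 2021")
    )


risks, tailwinds = asyncio.run(main())
print(risks)
print(tailwinds)
```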
ReAct Agent
In [ ]
from llama_index.core.agent.workflow import ReActAgent
In [ ]
agent = ReActAgent(tools=[multiply, add], llm=llm)
Using the stream_events() method, we can stream the response as it is generated, to see the agent's thought process. The final response will contain only the final answer.
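The streaming pattern is an async iteration over typed events, where only some event types carry text deltas. A minimal sketch with toy event classes (not the actual AgentStream API):

```python
import asyncio
from dataclasses import dataclass


@dataclass
class ToyAgentStream:
    # Toy stand-in for a streaming text-delta event (cf. AgentStream).
    delta: str


@dataclass
class ToyToolCall:
    # Toy stand-in for a non-text event, which the loop below skips.
    name: str


async def stream_events():
    # Yield a mix of event types, as an agent run would.
    for ev in [ToyAgentStream("20 + "), ToyToolCall("add"), ToyAgentStream("8 = 28")]:
        await asyncio.sleep(0)
        yield ev


async def main():
    chunks = []
    async for ev in stream_events():
        if isinstance(ev, ToyAgentStream):  # only collect text deltas
            chunks.append(ev.delta)
    return "".join(chunks)


print(asyncio.run(main()))  # 20 + 8 = 28
```

Filtering on the event type is what lets the same event stream serve both live token printing and structured inspection of tool calls.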
In [ ]
from llama_index.core.agent.workflow import AgentStream
handler = agent.run("What is 20+(2*4)? Calculate step by step ")
async for ev in handler.stream_events():
if isinstance(ev, AgentStream):
print(ev.delta, end="", flush=True)
response = await handler
In [ ]
print(str(response))
In [ ]
print(response.tool_calls)