If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
In [ ]
%pip install llama-index llama-index-memory-mem0
In [ ]
import os
os.environ["MEM0_API_KEY"] = "m0-..."
Using from_client (for Mem0 Platform APIs)
In [ ]
from llama_index.memory.mem0 import Mem0Memory
context = {"user_id": "test_users_1"}
memory_from_client = Mem0Memory.from_client(
context=context,
api_key="m0-...",
search_msg_limit=4, # Default is 5
)
Mem0 Context is used to identify the user, agent, or conversation in Mem0. At least one of its fields must be passed to the Mem0Memory constructor.
search_msg_limit
is optional and defaults to 5. It is the number of chat-history messages used to retrieve memories from Mem0. More messages give the retrieval step more context, but also increase retrieval time and may surface some unwanted results.
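Any one of Mem0's context keys is enough to scope memories; a minimal sketch, assuming (per Mem0's documentation) that agent_id and run_id are the other supported keys besides user_id:
# Hypothetical alternative scopes for memories (keys assumed from Mem0's docs)
agent_context = {"agent_id": "support_agent_1"}
run_context = {"run_id": "session_42"}

memory_for_agent = Mem0Memory.from_client(
    context=agent_context,
    api_key="m0-...",
)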
Using from_config (for Mem0 OSS)
In [ ]
os.environ["OPENAI_API_KEY"] = "<your-api-key>"
config = {
"vector_store": {
"provider": "qdrant",
"config": {
"collection_name": "test_9",
"host": "localhost",
"port": 6333,
"embedding_model_dims": 1536, # Change this according to your local model's dimensions
},
},
"llm": {
"provider": "openai",
"config": {
"model": "gpt-4o",
"temperature": 0.2,
"max_tokens": 1500,
},
},
"embedder": {
"provider": "openai",
"config": {"model": "text-embedding-3-small"},
},
"version": "v1.1",
}
memory_from_config = Mem0Memory.from_config(
context=context,
config=config,
search_msg_limit=4, # Default is 5
)
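The config above assumes a Qdrant instance is already reachable at localhost:6333. An optional sanity check before building the memory, sketched with the qdrant-client package that mem0's Qdrant provider builds on:
from qdrant_client import QdrantClient

# Raises a connection error if nothing is listening on localhost:6333
qdrant = QdrantClient(host="localhost", port=6333)
print(qdrant.get_collections())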
Initialize the LLM
In [ ]
from llama_index.llms.openai import OpenAI
llm = OpenAI(model="gpt-4o", api_key="sk-...")
Mem0 for Function Calling Agents
Use Mem0 as memory for FunctionCallingAgents.
Initialize Tools
In [ ]
def call_fn(name: str):
"""Call the provided name.
Args:
name: str (Name of the person)
"""
print(f"Calling... {name}")
def email_fn(name: str):
"""Email the provided name.
Args:
name: str (Name of the person)
"""
print(f"Emailing... {name}")
def call_fn(name: str): """调用提供的名字。Args: name: str(人名)""" print(f"Calling... {name}") def email_fn(name: str): """给提供的名字发送电子邮件。Args: name: str(人名)""" print(f"Emailing... {name}")
In [ ]
from llama_index.core.agent.workflow import FunctionAgent
agent = FunctionAgent(
tools=[email_fn, call_fn],
llm=llm,
)
In [ ]
response = await agent.run("Hi, My name is Mayank.", memory=memory_from_client)
print(str(response))
response = await agent.run("Hi, My name is Mayank.", memory=memory_from_client) print(str(response))
/Users/loganmarkewich/Library/Caches/pypoetry/virtualenvs/llama-index-caVs7DDe-py3.10/lib/python3.10/site-packages/mem0/client/main.py:33: DeprecationWarning: output_format='v1.0' is deprecated therefore setting it to 'v1.1' by default.Check out the docs for more information: https://docs.mem0.ai/platform/quickstart#4-1-create-memories return func(*args, **kwargs)
Hello Mayank! How can I assist you today?
In [ ]
response = await agent.run(
"My preferred way of communication would be Email.",
memory=memory_from_client,
)
print(str(response))
response = await agent.run( "My preferred way of communication would be Email.", memory=memory_from_client, ) print(str(response))
Got it, Mayank! Your preferred way of communication is Email. If there's anything specific you need, feel free to let me know!
In [ ]
response = await agent.run(
"Send me an update of your product.", memory=memory_from_client
)
print(str(response))
response = await agent.run( "Send me an update of your product.", memory=memory_from_client ) print(str(response))
Emailing... Mayank
Emailing... Mayank
Calling... Mayank
Emailing... Mayank
I've sent you an update on our product via email. If you have any other questions or need further assistance, feel free to ask!
Mem0 for ReAct Agent
Use Mem0 as memory for ReActAgent.
In [ ]
from llama_index.core.agent.workflow import ReActAgent
agent = ReActAgent(
tools=[call_fn, email_fn],
llm=llm,
)
In [ ]
response = await agent.run("Hi, My name is Mayank.", memory=memory_from_client)
print(str(response))
response = await agent.run("Hi, My name is Mayank.", memory=memory_from_client) print(str(response))
In [ ]
response = await agent.run(
"My preferred way of communication would be Email.",
memory=memory_from_client,
)
print(str(response))
response = await agent.run( "My preferred way of communication would be Email.", memory=memory_from_client, ) print(str(response))
In [ ]
response = await agent.run(
"Send me an update of your product.", memory=memory_from_client
)
print(str(response))
response = await agent.run( "Send me an update of your product.", memory=memory_from_client ) print(str(response))
In [ ]
response = await agent.run(
"First call me and then communicate me requirements.",
memory=memory_from_client,
)
print(str(response))
response = await agent.run( "First call me and then communicate me requirements.", memory=memory_from_client, ) print(str(response))