Maritalk
Introduction
MariTalk is an assistant developed by the Brazilian company Maritaca AI. It is based on language models that have been specially trained to understand Portuguese well.
This notebook demonstrates how to use MariTalk with LlamaIndex through two examples:
- using the chat method to get pet name suggestions;
- using the complete method to classify a movie review as negative or positive with few-shot examples.
Installation
If you're opening this Notebook on Colab, you will probably need to install LlamaIndex.
In [ ]
!pip install llama-index
!pip install llama-index-llms-maritalk
API Key
You will need an API key, which can be obtained from chat.maritaca.ai ("Chaves da API" section).
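If you prefer not to hard-code the key, the client also reads it from the environment (the MARITALK_API_KEY variable referenced in the example below). A minimal sketch of setting it from Python, with a placeholder value:
In [ ]
import os

# Placeholder key; the Maritalk client falls back to this environment
# variable when no api_key argument is passed to the constructor.
os.environ["MARITALK_API_KEY"] = "<your_maritalk_api_key>"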
Example 1 - Getting pet name suggestions with chat
In [ ]
from llama_index.core.llms import ChatMessage
from llama_index.llms.maritalk import Maritalk
import asyncio
# To use a custom API key, pass it explicitly here;
# otherwise the client will look up MARITALK_API_KEY in your environment
llm = Maritalk(api_key="<your_maritalk_api_key>", model="sabia-2-medium")
# Call chat with a list of messages
messages = [
ChatMessage(
role="system",
content="You are an assistant specialized in suggesting pet names. Given the animal, you must suggest 4 names.",
),
ChatMessage(role="user", content="I have a dog."),
]
# Sync chat
response = llm.chat(messages)
print(response)
# Async chat
async def get_dog_name(llm, messages):
response = await llm.achat(messages)
print(response)
asyncio.run(get_dog_name(llm, messages))
Streaming Generation
For tasks that involve generating long text, such as writing a lengthy article or translating a large document, it can be advantageous to receive the response in chunks as the text is generated, rather than waiting for the complete output. This makes the application more responsive and efficient, especially when the generated text is long. We offer two approaches to meet this need: synchronous and asynchronous.
In [ ]
# Sync streaming chat
response = llm.stream_chat(messages)
for chunk in response:
print(chunk.delta, end="", flush=True)
# Async streaming chat
async def get_dog_name_streaming(llm, messages):
async for chunk in await llm.astream_chat(messages):
print(chunk.delta, end="", flush=True)
asyncio.run(get_dog_name_streaming(llm, messages))
Example 2 - Few-shot examples with the complete method
When using few-shot examples with the model, it is recommended to use the llm.complete() method:
In [ ]
prompt = """Classifique a resenha de filme como "positiva" ou "negativa".
Resenha: Gostei muito do filme, é o melhor do ano!
Classe: positiva
Resenha: O filme deixa muito a desejar.
Classe: negativa
Resenha: Apesar de longo, valeu o ingresso.
Classe:"""
# Sync complete
response = llm.complete(prompt)
print(response)
# Async complete
async def classify_review(llm, prompt):
response = await llm.acomplete(prompt)
print(response)
asyncio.run(classify_review(llm, prompt))
prompt = """Classifique a resenha de filme como "positiva" ou "negativa". Resenha: Gostei muito do filme, é o melhor do ano! Classe: positiva Resenha: O filme deixa muito a desejar. Classe: negativa Resenha: Apesar de longo, valeu o ingresso.. Classe:""" # Sync complete response = llm.complete(prompt) print(response) # Async complete async def classify_review(llm, prompt): response = await llm.acomplete(prompt) print(response) asyncio.run(classify_review(llm, prompt))
In [ ]
# Sync streaming complete
response = llm.stream_complete(prompt)
for chunk in response:
print(chunk.delta, end="", flush=True)
# Async streaming complete
async def classify_review_streaming(llm, prompt):
async for chunk in await llm.astream_complete(prompt):
print(chunk.delta, end="", flush=True)
asyncio.run(classify_review_streaming(llm, prompt))
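Beyond calling the LLM directly, the same Maritalk instance can be wired into the rest of LlamaIndex. A minimal sketch, assuming the llama_index.core Settings API and that MARITALK_API_KEY is set in your environment:
In [ ]
from llama_index.core import Settings
from llama_index.llms.maritalk import Maritalk

# Register Maritalk as the default LLM so that downstream LlamaIndex
# components (query engines, chat engines, etc.) use it implicitly.
Settings.llm = Maritalk(model="sabia-2-medium")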