Azure OpenAI¶
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
In [ ]
%pip install llama-index-llms-azure-openai
In [ ]
!pip install llama-index
Prerequisites¶
Environment Setup¶
Find your setup information - API base, API key, deployment name (i.e. engine), etc.¶
To find the setup information you need, follow these steps:
- Go to the Azure OpenAI Studio here
- Go to the chat or completions playground (depending on which LLM you are setting up)
- Click "View code" (shown in the image below)
In [ ]
from IPython.display import Image
Image(filename="./azure_playground.png")
Out [ ]
- Note down the api_type, api_base, api_version, engine (this should be the same as your "deployment name" from earlier), and the key
In [ ]
from IPython.display import Image
Image(filename="./azure_env.png")
Out [ ]
Configure environment variables¶
Using Azure deployments of OpenAI models is very similar to using normal OpenAI. You just need to configure a couple of extra environment variables.
- OPENAI_API_VERSION: set this to 2023-07-01-preview. This may change in the future.
- AZURE_OPENAI_ENDPOINT: your endpoint should look like the following: https://YOUR_RESOURCE_NAME.openai.azure.com/
- OPENAI_API_KEY: your API key
In [ ]
import os
os.environ["OPENAI_API_KEY"] = "<your-api-key>"
os.environ[
"AZURE_OPENAI_ENDPOINT"
] = "https://<your-resource-name>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview"
Use your LLM¶
In [ ]
from llama_index.llms.azure_openai import AzureOpenAI
Unlike normal OpenAI, you need to pass in an engine argument in addition to model. The engine is the name of your model deployment you selected in Azure OpenAI Studio. For more details, see the "Find your setup information" section above.
In [ ]
llm = AzureOpenAI(
engine="simon-llm", model="gpt-35-turbo-16k", temperature=0.0
)
Alternatively, you can skip setting environment variables and pass the parameters in directly via the constructor.
In [ ]
llm = AzureOpenAI(
engine="my-custom-llm",
model="gpt-35-turbo-16k",
temperature=0.0,
azure_endpoint="https://<your-resource-name>.openai.azure.com/",
api_key="<your-api-key>",
api_version="2023-07-01-preview",
)
Use the complete endpoint for text completion
In [ ]
response = llm.complete("The sky is a beautiful blue and")
print(response)
the sun is shining brightly. Fluffy white clouds float lazily across the sky, creating a picturesque scene. The vibrant blue color of the sky brings a sense of calm and tranquility. It is a perfect day to be outside, enjoying the warmth of the sun and the gentle breeze. The sky seems to stretch endlessly, reminding us of the vastness and beauty of the world around us. It is a reminder to appreciate the simple pleasures in life and to take a moment to admire the natural wonders that surround us.
In [ ]
response = llm.stream_complete("The sky is a beautiful blue and")
for r in response:
print(r.delta, end="")
the sun is shining brightly. Fluffy white clouds float lazily across the sky, creating a picturesque scene. The vibrant blue color of the sky brings a sense of calm and tranquility. It is a perfect day to be outside, enjoying the warmth of the sun and the gentle breeze. The sky seems to stretch endlessly, reminding us of the vastness and beauty of the world around us. It is a reminder to appreciate the simple pleasures in life and to take a moment to pause and admire the natural wonders that surround us.
Use the chat endpoint for conversation
In [ ]
from llama_index.core.llms import ChatMessage
messages = [
ChatMessage(
role="system", content="You are a pirate with colorful personality."
),
ChatMessage(role="user", content="Hello"),
]
response = llm.chat(messages)
print(response)
assistant: Ahoy there, matey! How be ye on this fine day? I be Captain Jolly Roger, the most colorful pirate ye ever did lay eyes on! What brings ye to me ship?
In [ ]
response = llm.stream_chat(messages)
for r in response:
print(r.delta, end="")
Ahoy there, matey! How be ye on this fine day? I be Captain Jolly Roger, the most colorful pirate ye ever did lay eyes on! What brings ye to me ship?
Rather than adding the same parameters to each chat or completion call, you can set them at a per-instance level with additional_kwargs.
In [ ]
llm = AzureOpenAI(
engine="simon-llm",
model="gpt-35-turbo-16k",
temperature=0.0,
additional_kwargs={"user": "your_user_id"},
)
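With additional_kwargs set on the instance, every request made through this llm carries those parameters automatically, so you don't need to repeat them per call. A minimal sketch of follow-up usage (the prompt text here is just an illustration):

# The "user" field from additional_kwargs is attached to each request made by this instance
response = llm.complete("The sky is a beautiful blue and")
print(response)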