# Usage Pattern (Response Evaluation)
## Using BaseEvaluator
All evaluation modules in LlamaIndex implement the `BaseEvaluator` class, which has two main methods.

The `evaluate` method takes in a `query`, `contexts`, a `response`, and additional keyword arguments:
```python
def evaluate(
    self,
    query: Optional[str] = None,
    contexts: Optional[Sequence[str]] = None,
    response: Optional[str] = None,
    **kwargs: Any,
) -> EvaluationResult:
```
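For example, a minimal sketch of calling `evaluate` directly with raw strings (the `evaluator` instance and the example strings here are hypothetical):

```python
# Sketch: evaluate raw strings directly, no Response object required.
result = evaluator.evaluate(
    query="What battles took place in New York City in the American Revolution?",
    contexts=["The Battle of Long Island was fought in Brooklyn in August 1776."],
    response="The Battle of Long Island took place in New York City.",
)
print(result.passing)
```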
The `evaluate_response` method provides an alternative interface that takes in a LlamaIndex `Response` object (which contains the response string and source nodes) instead of separate `contexts` and `response`:
```python
def evaluate_response(
    self,
    query: Optional[str] = None,
    response: Optional[Response] = None,
    **kwargs: Any,
) -> EvaluationResult:
```
It is functionally the same as `evaluate`, just simpler to use when you are working with LlamaIndex objects.
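As a rough sketch (assuming an `evaluator`, a `query` string, and a query engine `response` already exist), the two interfaces express the same evaluation:

```python
# Sketched equivalence: `evaluate_response` unpacks the Response object,
# while `evaluate` takes the raw strings directly.
result_a = evaluator.evaluate_response(query=query, response=response)
result_b = evaluator.evaluate(
    query=query,
    response=response.response,  # the output string
    contexts=[node.get_content() for node in response.source_nodes],
)
```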
## Using EvaluationResult
Each evaluator, when executed, outputs an `EvaluationResult`:
```python
eval_result = evaluator.evaluate(query=..., contexts=..., response=...)
eval_result.passing  # binary pass/fail
eval_result.score  # numerical score
eval_result.feedback  # string feedback
```
Different evaluators may populate a subset of these result fields.
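Since any of these fields may be left unset depending on the evaluator, a defensive read might look like this (a sketch, reusing `eval_result` from above):

```python
# Sketch: check which fields this evaluator actually populated.
if eval_result.passing is not None:
    print(f"passing: {eval_result.passing}")
if eval_result.score is not None:
    print(f"score: {eval_result.score}")
if eval_result.feedback:
    print(f"feedback: {eval_result.feedback}")
```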
## Evaluating Response Faithfulness (i.e. Hallucination)
The `FaithfulnessEvaluator` evaluates whether the answer is faithful to the retrieved contexts (in other words, whether the answer is hallucinated):
```python
from llama_index.core import VectorStoreIndex
from llama_index.llms.openai import OpenAI
from llama_index.core.evaluation import FaithfulnessEvaluator

# create llm
llm = OpenAI(model="gpt-4", temperature=0.0)

# build index
...

# define evaluator
evaluator = FaithfulnessEvaluator(llm=llm)

# query index
query_engine = vector_index.as_query_engine()
response = query_engine.query(
    "What battles took place in New York City in the American Revolution?"
)
eval_result = evaluator.evaluate_response(response=response)
print(str(eval_result.passing))
```
You can also choose to evaluate each source context individually:
```python
from llama_index.core import VectorStoreIndex
from llama_index.llms.openai import OpenAI
from llama_index.core.evaluation import FaithfulnessEvaluator

# create llm
llm = OpenAI(model="gpt-4", temperature=0.0)

# build index
...

# define evaluator
evaluator = FaithfulnessEvaluator(llm=llm)

# query index
query_engine = vector_index.as_query_engine()
response = query_engine.query(
    "What battles took place in New York City in the American Revolution?"
)
response_str = response.response
for source_node in response.source_nodes:
    eval_result = evaluator.evaluate(
        response=response_str, contexts=[source_node.get_content()]
    )
    print(str(eval_result.passing))
```
You will get back a list of results, one for each source node in `response.source_nodes`.
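For instance, here is a minimal sketch (reusing the `evaluator` and `response` from above) that flags which retrieved chunks the answer is not faithful to:

```python
# Sketch: collect per-source verdicts and surface the failing chunks.
results = [
    evaluator.evaluate(
        response=response.response, contexts=[node.get_content()]
    )
    for node in response.source_nodes
]
failing = [i for i, r in enumerate(results) if not r.passing]
print(f"source nodes the answer is not supported by: {failing}")
```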
## Evaluating Query + Response Relevancy
The `RelevancyEvaluator` evaluates whether the retrieved context and the answer are relevant and consistent for the given query.

Note that this evaluator requires the `query` to be passed in, in addition to the `Response` object:
```python
from llama_index.core import VectorStoreIndex
from llama_index.llms.openai import OpenAI
from llama_index.core.evaluation import RelevancyEvaluator

# create llm
llm = OpenAI(model="gpt-4", temperature=0.0)

# build index
...

# define evaluator
evaluator = RelevancyEvaluator(llm=llm)

# query index
query_engine = vector_index.as_query_engine()
query = "What battles took place in New York City in the American Revolution?"
response = query_engine.query(query)
eval_result = evaluator.evaluate_response(query=query, response=response)
print(str(eval_result))
```
Similarly, you can also evaluate against specific source nodes:
```python
from llama_index.core import VectorStoreIndex
from llama_index.llms.openai import OpenAI
from llama_index.core.evaluation import RelevancyEvaluator

# create llm
llm = OpenAI(model="gpt-4", temperature=0.0)

# build index
...

# define evaluator
evaluator = RelevancyEvaluator(llm=llm)

# query index
query_engine = vector_index.as_query_engine()
query = "What battles took place in New York City in the American Revolution?"
response = query_engine.query(query)
response_str = response.response
for source_node in response.source_nodes:
    eval_result = evaluator.evaluate(
        query=query,
        response=response_str,
        contexts=[source_node.get_content()],
    )
    print(str(eval_result.passing))
```
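As a follow-up sketch, you might keep each verdict next to the node it came from, so that irrelevant chunks can be inspected later (this uses the underlying node's `node_id` identifier):

```python
# Sketch: map node IDs to relevancy verdicts for later inspection.
verdicts = {
    source_node.node.node_id: evaluator.evaluate(
        query=query,
        response=response_str,
        contexts=[source_node.get_content()],
    ).passing
    for source_node in response.source_nodes
}
print(verdicts)
```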
## Question Generation
LlamaIndex can also generate questions to answer using your data. Combined with the evaluators above, this lets you build a fully automated evaluation pipeline over your data.
```python
from llama_index.core import SimpleDirectoryReader
from llama_index.llms.openai import OpenAI
from llama_index.core.llama_dataset.generator import RagDatasetGenerator

# create llm
llm = OpenAI(model="gpt-4", temperature=0.0)

# build documents
documents = SimpleDirectoryReader("./data").load_data()

# define generator, generate questions
dataset_generator = RagDatasetGenerator.from_documents(
    documents=documents,
    llm=llm,
    num_questions_per_chunk=10,  # set the number of questions per node
)

rag_dataset = dataset_generator.generate_questions_from_nodes()
questions = [e.query for e in rag_dataset.examples]
```
## Batch Evaluation
We also provide a batch evaluation runner for running a set of evaluators across many questions:
```python
from llama_index.core.evaluation import BatchEvalRunner

runner = BatchEvalRunner(
    {"faithfulness": faithfulness_evaluator, "relevancy": relevancy_evaluator},
    workers=8,
)

eval_results = await runner.aevaluate_queries(
    vector_index.as_query_engine(), queries=questions
)
```
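The runner returns results keyed by evaluator name, with one `EvaluationResult` per query, so a quick summary might look like this sketch:

```python
# Sketch: compute a pass rate per evaluator from the batch results.
for name, results in eval_results.items():
    num_passing = sum(1 for r in results if r.passing)
    print(f"{name}: {num_passing}/{len(results)} passing")
```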
## Integrations
We also integrate with community evaluation tools.
### DeepEval
DeepEval offers 6 evaluators (including 3 RAG evaluators, covering both retriever and generator evaluation) powered by its proprietary evaluation metrics. To get started, install `deepeval`:

```bash
pip install -U deepeval
```
You can then import and use the evaluators from `deepeval`. A full example:
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from deepeval.integrations.llama_index import DeepEvalAnswerRelevancyEvaluator

documents = SimpleDirectoryReader("YOUR_DATA_DIRECTORY").load_data()
index = VectorStoreIndex.from_documents(documents)

rag_application = index.as_query_engine()

# An example input to your RAG application
user_input = "What is LlamaIndex?"

# LlamaIndex returns a response object that contains
# both the output string and retrieved nodes
response_object = rag_application.query(user_input)

evaluator = DeepEvalAnswerRelevancyEvaluator()
evaluation_result = evaluator.evaluate_response(
    query=user_input, response=response_object
)
print(evaluation_result)
```
You can import all 6 evaluators from `deepeval` as follows:
```python
from deepeval.integrations.llama_index import (
    DeepEvalAnswerRelevancyEvaluator,
    DeepEvalFaithfulnessEvaluator,
    DeepEvalContextualRelevancyEvaluator,
    DeepEvalSummarizationEvaluator,
    DeepEvalBiasEvaluator,
    DeepEvalToxicityEvaluator,
)
```
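Since these follow the same `BaseEvaluator` interface used throughout this page, they should be interchangeable; as a sketch (assuming the `user_input` and `response_object` from the example above, and assuming a no-argument constructor like the one shown earlier):

```python
# Sketch: swap in a different DeepEval evaluator with the same call shape.
faithfulness_evaluator = DeepEvalFaithfulnessEvaluator()
result = faithfulness_evaluator.evaluate_response(
    query=user_input, response=response_object
)
print(result.passing, result.feedback)
```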
To learn more about how to use `deepeval`'s evaluation metrics with LlamaIndex and take advantage of its full LLM testing suite, visit the documentation.