Ensemble Retrieval Guide
When building a RAG application, there are often many retrieval parameters and strategies to decide on (from chunk size to vector vs. keyword vs. hybrid search, for example).
Idea: what if we could try a bunch of strategies at once, and have an AI/reranker/LLM prune the results?
This achieves two purposes:
- Better (albeit more costly) retrieved results by pooling results from multiple strategies, assuming the reranker is good
- A way to benchmark different retrieval strategies against each other (with respect to the reranker)
This guide showcases this over the Llama 2 paper. We perform ensemble retrieval over different chunk sizes and also different indices.
Note: a closely related guide is our Ensemble Query Engine Guide - make sure to check it out!
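Before diving in, here is the core pattern in miniature: fan the query out to several retrievers, pool the candidates, and let a reranker prune the pooled set. This is only a conceptual sketch (assuming generic LlamaIndex-style retriever and reranker objects); the rest of this guide builds the real version using recursive retrieval.
def ensemble_retrieve(query: str, retrievers, reranker):
    """Sketch of ensemble retrieval: pool candidates from every strategy,
    then let the reranker pick the best nodes."""
    pooled = []
    for retriever in retrievers:
        # each strategy (chunk size, index type, ...) contributes candidates
        pooled.extend(retriever.retrieve(query))
    # the reranker prunes the pooled candidates down to its top_n nodes
    return reranker.postprocess_nodes(pooled, query_str=query)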
In [ ]
%pip install llama-index-llms-openai
%pip install llama-index-postprocessor-cohere-rerank
%pip install llama-index-readers-file pymupdf
In [ ]
%load_ext autoreload
%autoreload 2
Setup
Here we define the necessary imports.
If you're opening this Notebook on colab, you will probably need to install LlamaIndex 🦙.
In [ ]
!pip install llama-index
In [ ]
# NOTE: This is ONLY necessary in jupyter notebook.
# Details: Jupyter runs an event-loop behind the scenes.
# This results in nested event-loops when we start an event-loop to make async queries.
# This is normally not allowed, we use nest_asyncio to allow it for convenience.
import nest_asyncio
nest_asyncio.apply()
In [ ]
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().handlers = []
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
from llama_index.core import (
VectorStoreIndex,
SimpleDirectoryReader,
StorageContext,
)
from llama_index.core import SummaryIndex
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.response.notebook_utils import display_response
from llama_index.llms.openai import OpenAI
Note: NumExpr detected 12 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8. NumExpr defaulting to 8 threads.
Load Data
In this section we load the Llama 2 paper as a single document. We then chunk it multiple times, according to different chunk sizes. We build a separate vector index corresponding to each chunk size.
In [ ]
!mkdir -p data
!wget --user-agent "Mozilla" "https://arxiv.org/pdf/2307.09288.pdf" -O "data/llama2.pdf"
--2023-09-28 12:56:38--  https://arxiv.org/pdf/2307.09288.pdf
Resolving arxiv.org (arxiv.org)... 128.84.21.199
Connecting to arxiv.org (arxiv.org)|128.84.21.199|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13661300 (13M) [application/pdf]
Saving to: ‘data/llama2.pdf’
data/llama2.pdf     100%[===================>]  13.03M   521KB/s    in 42s
2023-09-28 12:57:20 (320 KB/s) - ‘data/llama2.pdf’ saved [13661300/13661300]
In [ ]
from pathlib import Path
from llama_index.core import Document
from llama_index.readers.file import PyMuPDFReader
In [ ]
loader = PyMuPDFReader()
docs0 = loader.load(file_path=Path("./data/llama2.pdf"))
doc_text = "\n\n".join([d.get_content() for d in docs0])
docs = [Document(text=doc_text)]
Here we try out different chunk sizes: 128, 256, 512, and 1024.
In [ ]
# initialize modules
llm = OpenAI(model="gpt-4")

chunk_sizes = [128, 256, 512, 1024]
nodes_list = []
vector_indices = []
for chunk_size in chunk_sizes:
    print(f"Chunk Size: {chunk_size}")
    splitter = SentenceSplitter(chunk_size=chunk_size)
    nodes = splitter.get_nodes_from_documents(docs)

    # add chunk size to nodes to track later
    for node in nodes:
        node.metadata["chunk_size"] = chunk_size
        node.excluded_embed_metadata_keys = ["chunk_size"]
        node.excluded_llm_metadata_keys = ["chunk_size"]
    nodes_list.append(nodes)

    # build vector index
    vector_index = VectorStoreIndex(nodes)
    vector_indices.append(vector_index)
Chunk Size: 128 Chunk Size: 256 Chunk Size: 512 Chunk Size: 1024
Define Ensemble Retriever
We setup an "ensemble" retriever primarily using our recursive retrieval abstractions. This works as follows:
- Define a separate IndexNode corresponding to the vector retriever for each chunk size (e.g. a retriever for chunk size 128, one for chunk size 256, and so on)
- Put all IndexNodes into a single SummaryIndex - when the corresponding retriever is called, all nodes are returned.
- Define a Recursive Retriever, with the root node being the summary index retriever. It will first fetch all nodes from the summary index retriever, and then recursively call the vector retriever for each chunk size.
- Rerank the final results.
The end result is that all the vector retrievers are called when a query is run; a simplified sketch of this flow is shown below.
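To make the flow concrete, here is what happens at query time. This is illustrative pseudocode of the mechanism, not the actual RecursiveRetriever internals:
def recursive_retrieve(query: str, root_retriever, retriever_dict):
    # the root (summary index) retriever returns all the IndexNodes
    results = []
    for node_with_score in root_retriever.retrieve(query):
        index_node = node_with_score.node
        # each IndexNode's index_id points at one chunk-size-specific
        # vector retriever, which is then queried in turn
        sub_retriever = retriever_dict[index_node.index_id]
        results.extend(sub_retriever.retrieve(query))
    return results  # pooled nodes from all chunk sizes, reranked downstream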
In [ ]
# try ensemble retrieval
from llama_index.core.tools import RetrieverTool
from llama_index.core.schema import IndexNode
# retriever_tools = []
retriever_dict = {}
retriever_nodes = []
for chunk_size, vector_index in zip(chunk_sizes, vector_indices):
    node_id = f"chunk_{chunk_size}"
    node = IndexNode(
        text=(
            "Retrieves relevant context from the Llama 2 paper (chunk size"
            f" {chunk_size})"
        ),
        index_id=node_id,
    )
    retriever_nodes.append(node)
    retriever_dict[node_id] = vector_index.as_retriever()
Define the recursive retriever.
In [ ]
from llama_index.core.selectors import PydanticMultiSelector
from llama_index.core.retrievers import RouterRetriever
from llama_index.core.retrievers import RecursiveRetriever
from llama_index.core import SummaryIndex
# the derived retriever will just retrieve all nodes
summary_index = SummaryIndex(retriever_nodes)
retriever = RecursiveRetriever(
root_id="root",
retriever_dict={"root": summary_index.as_retriever(), **retriever_dict},
)
Let's test the retriever on a sample query.
In [ ]
nodes = await retriever.aretrieve(
"Tell me about the main aspects of safety fine-tuning"
)
In [ ]
print(f"Number of nodes: {len(nodes)}")
for node in nodes:
    print(node.node.metadata["chunk_size"])
    print(node.node.get_text())
print(f"节点数量:{len(nodes)}") for node in nodes: print(node.node.metadata["chunk_size"]) print(node.node.get_text())
Define a reranker to process the final retrieved set of nodes.
In [ ]
# define reranker
from llama_index.core.postprocessor import LLMRerank, SentenceTransformerRerank
from llama_index.postprocessor.cohere_rerank import CohereRerank
# reranker = LLMRerank()
# reranker = SentenceTransformerRerank(top_n=10)
reranker = CohereRerank(top_n=10)
Define a retriever query engine that integrates the recursive retriever and the reranker together.
In [ ]
# define RetrieverQueryEngine
from llama_index.core.query_engine import RetrieverQueryEngine
query_engine = RetrieverQueryEngine(retriever, node_postprocessors=[reranker])
In [ ]
response = query_engine.query(
"Tell me about the main aspects of safety fine-tuning"
)
In [ ]
display_response(
response, show_source=True, source_length=500, show_source_metadata=True
)
Analyzing the Relative Importance of each Chunk
One interesting property of ensemble-based retrieval is that, through reranking, we can actually use the order of chunks in the final retrieved set to determine the importance of each chunk size. For instance, if certain chunk sizes are always ranked near the top, then those are probably more relevant to the query.
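Concretely, for a given query we compute the reciprocal rank of each chunk size: if the first node with chunk size c appears at (1-based) position r_c in the reranked result list, its score is RR(c) = 1 / r_c, and 0 if no node of that chunk size was retrieved. Averaged over many queries this is the Mean Reciprocal Rank (MRR); below we compute it for a single query.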
In [ ]
# compute the reciprocal rank for each chunk size based on positioning in combined ranking
from collections import defaultdict
import pandas as pd


def mrr_all(metadata_values, metadata_key, source_nodes):
    # source nodes is a ranked list
    # go through each value, find out positioning in source_nodes
    value_to_mrr_dict = {}
    for metadata_value in metadata_values:
        mrr = 0
        for idx, source_node in enumerate(source_nodes):
            if source_node.node.metadata[metadata_key] == metadata_value:
                mrr = 1 / (idx + 1)
                break
            else:
                continue

        # set the reciprocal rank in the dict (0 if the value never appears)
        value_to_mrr_dict[metadata_value] = mrr

    df = pd.DataFrame(value_to_mrr_dict, index=["MRR"])
    df.style.set_caption("Mean Reciprocal Rank")
    return df
In [ ]
# Compute the Mean Reciprocal Rank for each chunk size (higher is better)
# we can see that chunk size of 256 has the highest ranked results.
print("Mean Reciprocal Rank for each Chunk Size")
mrr_all(chunk_sizes, "chunk_size", response.source_nodes)
Mean Reciprocal Rank for each Chunk Size
Out [ ]
|  | 128 | 256 | 512 | 1024 |
| --- | --- | --- | --- | --- |
| MRR | 0.333333 | 1.0 | 0.5 | 0.25 |
Evaluation
We more rigorously evaluate how well an ensemble retriever works compared to a "baseline" retriever.
We define/load an eval benchmark dataset and then run different evaluations over it.
WARNING: this can be expensive, especially with GPT-4. Use caution and tune the sample size to fit your budget.
In [ ]
from llama_index.core.evaluation import DatasetGenerator, QueryResponseDataset
from llama_index.llms.openai import OpenAI
import nest_asyncio
nest_asyncio.apply()
In [ ]
# NOTE: run this if the dataset isn't already saved
eval_llm = OpenAI(model="gpt-4")
# generate questions from the largest chunks (1024)
dataset_generator = DatasetGenerator(
nodes_list[-1],
llm=eval_llm,
show_progress=True,
num_questions_per_chunk=2,
)
In [ ]
eval_dataset = await dataset_generator.agenerate_dataset_from_nodes(num=60)
In [ ]
eval_dataset.save_json("data/llama2_eval_qr_dataset.json")
In [ ]
# optional
eval_dataset = QueryResponseDataset.from_json(
"data/llama2_eval_qr_dataset.json"
)
Compare Results
In [ ]
import asyncio
import nest_asyncio
nest_asyncio.apply()
In [ ]
from llama_index.core.evaluation import (
CorrectnessEvaluator,
SemanticSimilarityEvaluator,
RelevancyEvaluator,
FaithfulnessEvaluator,
PairwiseComparisonEvaluator,
)
# NOTE: can uncomment other evaluators
evaluator_c = CorrectnessEvaluator(llm=eval_llm)
evaluator_s = SemanticSimilarityEvaluator(llm=eval_llm)
evaluator_r = RelevancyEvaluator(llm=eval_llm)
evaluator_f = FaithfulnessEvaluator(llm=eval_llm)
pairwise_evaluator = PairwiseComparisonEvaluator(llm=eval_llm)
In [ ]
from llama_index.core.evaluation.eval_utils import (
get_responses,
get_results_df,
)
from llama_index.core.evaluation import BatchEvalRunner
max_samples = 60
eval_qs = eval_dataset.questions
qr_pairs = eval_dataset.qr_pairs
ref_response_strs = [r for (_, r) in qr_pairs]
# resetup base query engine and ensemble query engine
# base query engine
base_query_engine = vector_indices[-1].as_query_engine(similarity_top_k=2)
# ensemble query engine
reranker = CohereRerank(top_n=4)
query_engine = RetrieverQueryEngine(retriever, node_postprocessors=[reranker])
In [ ]
base_pred_responses = get_responses(
eval_qs[:max_samples], base_query_engine, show_progress=True
)
In [ ]
pred_responses = get_responses(
eval_qs[:max_samples], query_engine, show_progress=True
)
In [ ]
import numpy as np
pred_response_strs = [str(p) for p in pred_responses]
base_pred_response_strs = [str(p) for p in base_pred_responses]
In [ ]
evaluator_dict = {
"correctness": evaluator_c,
"faithfulness": evaluator_f,
# "relevancy": evaluator_r,
"semantic_similarity": evaluator_s,
}
batch_runner = BatchEvalRunner(evaluator_dict, workers=1, show_progress=True)
In [ ]
eval_results = await batch_runner.aevaluate_responses(
queries=eval_qs[:max_samples],
responses=pred_responses[:max_samples],
reference=ref_response_strs[:max_samples],
)
In [ ]
base_eval_results = await batch_runner.aevaluate_responses(
queries=eval_qs[:max_samples],
responses=base_pred_responses[:max_samples],
reference=ref_response_strs[:max_samples],
)
In [ ]
results_df = get_results_df(
[eval_results, base_eval_results],
["Ensemble Retriever", "Base Retriever"],
["correctness", "faithfulness", "semantic_similarity"],
)
display(results_df)
|  | names | correctness | faithfulness | semantic_similarity |
| --- | --- | --- | --- | --- |
| 0 | Ensemble Retriever | 4.375000 | 0.983333 | 0.964546 |
| 1 | Base Retriever | 4.066667 | 0.983333 | 0.956692 |
In [ ]
batch_runner = BatchEvalRunner(
{"pairwise": pairwise_evaluator}, workers=3, show_progress=True
)
pairwise_eval_results = await batch_runner.aevaluate_response_strs(
queries=eval_qs[:max_samples],
response_strs=pred_response_strs[:max_samples],
reference=base_pred_response_strs[:max_samples],
)
In [ ]
results_df = get_results_df(
    [pairwise_eval_results],
    ["Pairwise Comparison"],
    ["pairwise"],
)
display(results_df)
Out [ ]
|  | names | pairwise |
| --- | --- | --- |
| 0 | Pairwise Comparison | 0.5 |