I am new to Langchain and I was following this Retrieval QA – Langchain guide. I have a custom prompt, but when I try to pass it via chain_type_kwargs, pydantic raises a validation error inside StuffDocumentsChain. When I remove chain_type_kwargs, everything works fine.
How do I pass the prompt?
Error:
File /usr/local/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for StuffDocumentsChain
__root__
  document_variable_name context was not found in llm_chain input_variables: ['question'] (type=value_error)
Code:
import json, os
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.document_loaders import JSONLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import ChatOpenAI
from langchain import PromptTemplate
from pathlib import Path
from pprint import pprint

os.environ["OPENAI_API_KEY"] = "my-key"

def metadata_func(record: dict, metadata: dict) -> dict:
    metadata["drug_name"] = record["drug_name"]
    return metadata

loader = JSONLoader(
    file_path='./drugs_data_v2.json',
    jq_schema='.drugs[]',
    content_key="data",
    metadata_func=metadata_func
)
docs = loader.load()

text_splitter = CharacterTextSplitter(chunk_size=5000, chunk_overlap=200)
texts = text_splitter.split_documents(docs)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)

template = """/example custom prompt
Question: {question}
Answer: """

PROMPT = PromptTemplate(template=template, input_variables=['question'])

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(
        model_name='gpt-3.5-turbo-16k'
    ),
    chain_type="stuff",
    chain_type_kwargs={"prompt": PROMPT},
    retriever=docsearch.as_retriever(),
)

query = "What did the president say about Ketanji Brown Jackson"
qa.run(query)
Answer:
{context} is missing from the template. With chain_type="stuff", the retrieved documents are stuffed into the prompt through a variable named context (the document_variable_name mentioned in the error), so the prompt template must declare it alongside {question}.
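A minimal sketch of the fix, keeping the rest of your code unchanged (the instruction wording inside the template is just an example, not required text):

template = """Use the following pieces of context to answer the question.

{context}

Question: {question}
Answer: """

# Declare both variables so StuffDocumentsChain can find "context"
PROMPT = PromptTemplate(template=template, input_variables=["context", "question"])

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo-16k"),
    chain_type="stuff",
    chain_type_kwargs={"prompt": PROMPT},
    retriever=docsearch.as_retriever(),
)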