Creating a SQL agent with AzureOpenAI?

I've written a script that uses the OpenAI API, and it runs fine. Now I'm trying to switch it over to AzureOpenAI, but I seem to be hitting a problem with create_sql_agent(). Can you create a SQL agent with the AzureOpenAI model gpt-35-turbo-1106? Could this be related to my api_version setting in AzureOpenAI()? The error I get is "TypeError: Completions.create() got an unexpected keyword argument 'tools'", which I suspect may also be caused by my passing 'openai-tools' as the agent_type option.

Code

import os
from langchain_openai import AzureOpenAI
from langchain.agents import AgentExecutor, create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from dotenv import load_dotenv
from langchain_core.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    MessagesPlaceholder,
)

path = (os.getcwd() + '\creds.env')
load_dotenv(path)

db = SQLDatabase.from_uri(
    f"postgresql://{os.environ.get('user')}:{os.environ.get('password')}@{os.environ.get('host')}:{os.environ.get('port')}/{os.environ.get('database')}"
)

llm = AzureOpenAI(azure_endpoint=MY_ENDPOINT,
                  deployment_name=MY_DEPLOYMENT_NAME,
                  model_name='gpt-35-turbo',  # should this be 'gpt-35-turbo-1106'?
                  temperature=0,
                  api_key=MY_KEY,
                  api_version='2023-07-01-preview')  # is my api_version correct? Not sure which version to use

toolkit = SQLDatabaseToolkit(db=db, llm=llm)

prefix = """You are an agent designed to interact with a SQL database.
Given an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table; only ask for the columns relevant to the question.
You have access to tools for interacting with the database.
Only use the below tools. Only use the information returned by the below tools to construct your final answer.
You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.
DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP, CASCADE, etc.) to the database.
If the question does not seem related to the database, just return "I don't know" as the answer.
If asked about a person, do not return an "ID"; return the first and last name."""

suffix = """I should look at the tables in the database to see what I can query. Then I should query the schema of the most relevant tables."""

messages = [
    SystemMessagePromptTemplate.from_template(prefix),
    HumanMessagePromptTemplate.from_template("{input}"),
    AIMessagePromptTemplate.from_template(suffix),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
]
prompt = ChatPromptTemplate.from_messages(messages)

agent_executor = create_sql_agent(llm,
                                  toolkit=toolkit,
                                  agent_type='openai-tools',  # is this compatible with Azure?
                                  prompt=prompt,
                                  verbose=False)

print(agent_executor.invoke("What are the names of the tables"))
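As a side note on the question's code (not part of the original post): interpolating credentials into the connection string with a bare f-string breaks if the password contains reserved URI characters such as '@' or '/'. A small standard-library sketch (build_pg_uri is a hypothetical helper, not a LangChain API):

```python
from urllib.parse import quote_plus

def build_pg_uri(user, password, host, port, database):
    # Percent-encode the credentials so characters like '@' or '/' in a
    # password do not break the URI that SQLDatabase.from_uri() parses.
    return (
        f"postgresql://{quote_plus(user)}:{quote_plus(password)}"
        f"@{host}:{port}/{database}"
    )

print(build_pg_uri("bob", "p@ss/word", "localhost", "5432", "mydb"))
# → postgresql://bob:p%40ss%2Fword@localhost:5432/mydb
```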

Error

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[69], line 1
----> 1 print(agent_executor.invoke("What are the names of the tables"))

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:163, in Chain.invoke(self, input, config, **kwargs)
    161 except BaseException as e:
    162     run_manager.on_chain_error(e)
--> 163     raise e
    164 run_manager.on_chain_end(outputs)
    166 if include_run_info:

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:153, in Chain.invoke(self, input, config, **kwargs)
    150 try:
    151     self._validate_inputs(inputs)
    152     outputs = (
--> 153         self._call(inputs, run_manager=run_manager)
    154         if new_arg_supported
    155         else self._call(inputs)
    156     )
    158     final_outputs: Dict[str, Any] = self.prep_outputs(
    159         inputs, outputs, return_only_outputs
    160     )
    161 except BaseException as e:

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\agent.py:1432, in AgentExecutor._call(self, inputs, run_manager)
   1430 # We now enter the agent loop (until it returns something).
   1431 while self._should_continue(iterations, time_elapsed):
-> 1432     next_step_output = self._take_next_step(
   1433         name_to_tool_map,
   1434         color_mapping,
   1435         inputs,
   1436         intermediate_steps,
   1437         run_manager=run_manager,
   1438     )
   1439 if isinstance(next_step_output, AgentFinish):
   1440     return self._return(
   1441         next_step_output, intermediate_steps, run_manager=run_manager
   1442     )

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\agent.py:1138, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1129 def _take_next_step(
   1130     self,
   1131     name_to_tool_map: Dict[str, BaseTool],
   (...)
   1135     run_manager: Optional[CallbackManagerForChainRun] = None,
   1136 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
   1137     return self._consume_next_step(
-> 1138         [
   1139             a
   1140             for a in self._iter_next_step(
   1141                 name_to_tool_map,
   1142                 color_mapping,
   1143                 inputs,
   1144                 intermediate_steps,
   1145                 run_manager,
   1146             )
   1147         ]
   1148     )

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\agent.py:1138, in <listcomp>(.0)
   1129 def _take_next_step(
   1130     self,
   1131     name_to_tool_map: Dict[str, BaseTool],
   (...)
   1135     run_manager: Optional[CallbackManagerForChainRun] = None,
   1136 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
   1137     return self._consume_next_step(
-> 1138         [
   1139             a
   1140             for a in self._iter_next_step(
   1141                 name_to_tool_map,
   1142                 color_mapping,
   1143                 inputs,
   1144                 intermediate_steps,
   1145                 run_manager,
   1146             )
   1147         ]
   1148     )

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\agent.py:1166, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1163     intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
   1165     # Call the LLM to see what to do.
-> 1166     output = self.agent.plan(
   1167         intermediate_steps,
   1168         callbacks=run_manager.get_child() if run_manager else None,
   1169         **inputs,
   1170     )
   1171 except OutputParserException as e:
   1172     if isinstance(self.handle_parsing_errors, bool):

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\agent.py:514, in RunnableMultiActionAgent.plan(self, intermediate_steps, callbacks, **kwargs)
    506 final_output: Any = None
    507 if self.stream_runnable:
    508     # Use streaming to make sure that the underlying LLM is invoked in a
    509     # streaming
   (...)
    512     # Because the response from the plan is not a generator, we need to
    513     # accumulate the output into final output and return that.
--> 514     for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
    515         if final_output is None:
    516             final_output = chunk

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\runnables\base.py:2875, in RunnableSequence.stream(self, input, config, **kwargs)
   2869 def stream(
   2870     self,
   2871     input: Input,
   2872     config: Optional[RunnableConfig] = None,
   2873     **kwargs: Optional[Any],
   2874 ) -> Iterator[Output]:
-> 2875     yield from self.transform(iter([input]), config, **kwargs)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\runnables\base.py:2862, in RunnableSequence.transform(self, input, config, **kwargs)
   2856 def transform(
   2857     self,
   2858     input: Iterator[Input],
   2859     config: Optional[RunnableConfig] = None,
   2860     **kwargs: Optional[Any],
   2861 ) -> Iterator[Output]:
-> 2862     yield from self._transform_stream_with_config(
   2863         input,
   2864         self._transform,
   2865         patch_config(config, run_name=(config or {}).get("run_name") or self.name),
   2866         **kwargs,
   2867     )

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\runnables\base.py:1880, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
   1878 try:
   1879     while True:
-> 1880         chunk: Output = context.run(next, iterator)  # type: ignore
   1881         yield chunk
   1882         if final_output_supported:

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\runnables\base.py:2826, in RunnableSequence._transform(self, input, run_manager, config)
   2817 for step in steps:
   2818     final_pipeline = step.transform(
   2819         final_pipeline,
   2820         patch_config(
   (...)
   2823         ),
   2824     )
-> 2826 for output in final_pipeline:
   2827     yield output

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\runnables\base.py:1283, in Runnable.transform(self, input, config, **kwargs)
   1280 final: Input
   1281 got_first_val = False
-> 1283 for chunk in input:
   1284     if not got_first_val:
   1285         final = adapt_first_streaming_chunk(chunk)  # type: ignore

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\runnables\base.py:4728, in RunnableBindingBase.transform(self, input, config, **kwargs)
   4722 def transform(
   4723     self,
   4724     input: Iterator[Input],
   4725     config: Optional[RunnableConfig] = None,
   4726     **kwargs: Any,
   4727 ) -> Iterator[Output]:
-> 4728     yield from self.bound.transform(
   4729         input,
   4730         self._merge_configs(config),
   4731         **{**self.kwargs, **kwargs},
   4732     )

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\runnables\base.py:1300, in Runnable.transform(self, input, config, **kwargs)
   1293             raise TypeError(
   1294                 f"Failed while trying to add together "
   1295                 f"type {type(final)} and {type(chunk)}."
   1296                 f"These types should be addable for transform to work."
   1297             )
   1299 if got_first_val:
-> 1300     yield from self.stream(final, config, **kwargs)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\language_models\llms.py:458, in BaseLLM.stream(self, input, config, stop, **kwargs)
    451 except BaseException as e:
    452     run_manager.on_llm_error(
    453         e,
    454         response=LLMResult(
    455             generations=[[generation]] if generation else []
    456         ),
    457     )
--> 458     raise e
    459 else:
    460     run_manager.on_llm_end(LLMResult(generations=[[generation]]))

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\language_models\llms.py:442, in BaseLLM.stream(self, input, config, stop, **kwargs)
    440 generation: Optional[GenerationChunk] = None
    441 try:
--> 442     for chunk in self._stream(
    443         prompt, stop=stop, run_manager=run_manager, **kwargs
    444     ):
    445         yield chunk.text
    446         if generation is None:

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_openai\llms\base.py:262, in BaseOpenAI._stream(self, prompt, stop, run_manager, **kwargs)
    260 params = {**self._invocation_params, **kwargs, "stream": True}
    261 self.get_sub_prompts(params, [prompt], stop)  # this mutates params
--> 262 for stream_resp in self.client.create(prompt=prompt, **params):
    263     if not isinstance(stream_resp, dict):
    264         stream_resp = stream_resp.model_dump()

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\_utils\_utils.py:277, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    275             msg = f"Missing required argument: {quote(missing[0])}"
    276     raise TypeError(msg)
--> 277 return func(*args, **kwargs)

TypeError: Completions.create() got an unexpected keyword argument 'tools'

Answer:

Your model name and API version should be fine. However, AzureOpenAI wraps the legacy completions endpoint, which knows nothing about tool calling; you need a chat model. Use the AzureChatOpenAI class for this.

Update your code:

from langchain_openai import AzureChatOpenAI
# ...
llm = AzureChatOpenAI(azure_endpoint=MY_ENDPOINT,
                      deployment_name=MY_DEPLOYMENT_NAME,
                      model_name='gpt-35-turbo',
                      temperature=0,
                      api_key=MY_KEY,
                      api_version='2023-07-01-preview')

When creating the SQL agent, use the AgentType enum, with the zero-shot type to tell the agent not to use memory.

from langchain.agents import AgentType, create_sql_agent
# ...
agent_executor = create_sql_agent(llm=llm,
                                  toolkit=toolkit,
                                  agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                                  prompt=prompt,
                                  verbose=False)
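The switch to a chat model matters because only the chat-completions endpoint accepts tools; the 'openai-tools' agent type injects a tools kwarg that the legacy completions client rejects. A purely illustrative, network-free sketch of that mismatch (the parameter sets and the call_endpoint helper are hypothetical, not the real OpenAI client):

```python
# Illustrative subsets of the parameters each endpoint style accepts.
COMPLETIONS_PARAMS = {"model", "prompt", "temperature", "stream"}
CHAT_PARAMS = {"model", "messages", "temperature", "stream", "tools", "tool_choice"}

def call_endpoint(allowed, **kwargs):
    # Mimics how a client rejects keyword arguments the endpoint doesn't know.
    unexpected = set(kwargs) - allowed
    if unexpected:
        raise TypeError(
            f"create() got an unexpected keyword argument {unexpected.pop()!r}"
        )
    return "ok"

# The chat endpoint happily takes tools...
call_endpoint(CHAT_PARAMS, model="gpt-35-turbo", messages=[], tools=[])

# ...while the legacy completions endpoint rejects them, which is the
# error the question hit.
try:
    call_endpoint(COMPLETIONS_PARAMS, model="gpt-35-turbo", prompt="hi", tools=[])
except TypeError as err:
    print(err)  # → create() got an unexpected keyword argument 'tools'
```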
