I recently switched my Next.js application from GPT-4o to Phi-4-multimodel-instruct on Azure AI Services, but I'm now hitting the following error:
BadRequestError: 400 {"object":"error","message":"\"auto\" tool choice requires --enable-auto-tool-choice and --tool-call-parser to be set","type":"BadRequestError","param":null,"code":400}
The error is thrown when calling the runTools() method, which worked fine with GPT-4o. Here is my implementation:
OpenAI instance configuration:
import { AzureOpenAI } from "openai";

export const OpenAIInstance = () => {
  try {
    if (
      !process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_KEY ||
      !process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_VERSION ||
      !process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_INSTANCE_NAME
    ) {
      throw new Error(
        "Missing required environment variables for OpenAI instance."
      );
    }

    const azureOpenAI = new AzureOpenAI({
      apiKey: process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_KEY,
      apiVersion: process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_VERSION,
      baseURL: `https://${process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_INSTANCE_NAME}.openai.azure.com/models/chat/completions?api-version=${process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_API_VERSION}`
    });

    return azureOpenAI;
  } catch (error) {
    console.error(
      "Error initializing OpenAI instance:",
      (error as Error).message
    );
    throw error;
  }
};
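For reference, the helper above expects the following environment variables (the values shown here are placeholders, not my real configuration):

```
AZURE_SERVICE_PHI_4_MULTIMODEL_API_KEY=<api-key>
AZURE_SERVICE_PHI_4_MULTIMODEL_API_VERSION=<api-version>
AZURE_SERVICE_PHI_4_MULTIMODEL_INSTANCE_NAME=<resource-name>
AZURE_SERVICE_PHI_4_MULTIMODEL_MODEL_NAME=<deployment-name>
```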
Chat API extensions implementation:
export const ChatApiExtensions = async (props: {
  chatThread: ChatThreadModel;
  userMessage: string;
  history: ChatCompletionMessageParam[];
  extensions: RunnableToolFunction<any>[];
  signal: AbortSignal;
}): Promise<ChatCompletionStreamingRunner> => {
  const { userMessage, history, signal, chatThread, extensions } = props;

  const openAI = OpenAIInstance();
  const model = process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_MODEL_NAME;
  if (!model) {
    throw new Error("Model deployment name is not configured");
  }

  const systemMessage = await extensionsSystemMessage(chatThread);

  try {
    return await openAI.beta.chat.completions.runTools(
      {
        model: model,
        stream: true,
        messages: [
          {
            role: "system",
            content: chatThread.personaMessage + "\n" + systemMessage,
          },
          ...history,
          {
            role: "user",
            content: userMessage,
          },
        ],
        tools: extensions,
        temperature: 0.7,
        max_tokens: 4000,
      },
      {
        signal: signal,
      }
    );
  } catch (error) {
    console.error("Error in ChatApiExtensions:", error);
    throw error;
  }
};
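For context, this is roughly how the returned runner is consumed downstream. The snippet below is a simplified sketch, not my exact code: the `declare` statements and the example message are placeholders, and the event names are the ones exposed by the OpenAI Node SDK's streaming runner.

```ts
// Illustrative sketch only – chatThread and myTools come from elsewhere in the app.
declare const chatThread: ChatThreadModel;
declare const myTools: RunnableToolFunction<any>[];

const controller = new AbortController();

const runner = await ChatApiExtensions({
  chatThread,
  userMessage: "Summarise the attached document",
  history: [],
  extensions: myTools,
  signal: controller.signal,
});

// Stream partial text to the client as it arrives.
runner.on("content", (delta) => {
  process.stdout.write(delta);
});

// Resolves once the run, including any tool round-trips, has finished.
const completion = await runner.finalChatCompletion();
```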
Based on the error message, it looks like Phi-4-multimodel-instruct requires additional parameters for tool use that GPT-4o did not need. I've gone through the Azure documentation but could not find anything about these flags (--enable-auto-tool-choice and --tool-call-parser).
Has anyone successfully gotten tool calling to work with Phi-4-multimodel-instruct on Azure? How do I need to change my code to make this work?
Environment:
- Next.js (Server Components)
- Azure OpenAI Service
- OpenAI Node.js SDK
Answer:
You can't find those flags because Phi-4-multimodel-instruct does not currently support tool calling. --enable-auto-tool-choice and --tool-call-parser are server-side options of the serving engine behind the endpoint (the error text comes from a vLLM-style server), not parameters you can set from the OpenAI client, so there is nothing you can enable from your code. See details.
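If you want to keep using Phi-4-multimodel-instruct for plain chat, you can bypass runTools() and stream a normal completion instead. A minimal sketch, reusing your OpenAIInstance helper and assuming systemMessage, history, userMessage and signal are in scope as in your question:

```ts
// Sketch: plain streaming chat completion without tools.
// Not a drop-in replacement for ChatApiExtensions – extensions are dropped entirely.
const openAI = OpenAIInstance();
const model = process.env.AZURE_SERVICE_PHI_4_MULTIMODEL_MODEL_NAME!;

const stream = await openAI.chat.completions.create(
  {
    model,
    stream: true,
    messages: [
      { role: "system", content: systemMessage },
      ...history,
      { role: "user", content: userMessage },
    ],
    temperature: 0.7,
    max_tokens: 4000,
  },
  { signal }
);

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content ?? "";
  // forward each delta to the client here
}
```

If you do need tool calling, keep that code path on GPT-4o (or another deployment that supports tools) and route requests to Phi-4 only when no extensions are involved.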