I have the following code:
public static String simpleQuestion(String projectId, String location, String modelName)
    throws Exception {
  // Initialize client that will be used to send requests.
  // This client only needs to be created once, and can be reused for multiple requests.
  try (VertexAI vertexAI = new VertexAI(projectId, location)) {
    String output;
    GenerativeModel model = new GenerativeModel(modelName, vertexAI);
    model.setGenerationConfig(GenerationConfig.newBuilder().build());
    model.setSafetySettings(Collections.singletonList(
        SafetySetting.newBuilder()
            .setThreshold(SafetySetting.HarmBlockThreshold.BLOCK_NONE)
            .build()));
    GenerateContentResponse response =
        model.generateContent("What can you tell me about Pythagorean theorem");
    output = ResponseHandler.getText(response);
    return output;
  }
}
Sometimes I get this error:
Exception in thread "main" java.lang.IllegalArgumentException: The response is blocked due to safety reason.
    at com.google.cloud.vertexai.generativeai.preview.ResponseHandler.getText(ResponseHandler.java:46)
even though I use HarmBlockThreshold.BLOCK_NONE in the settings.
Answer:
Problem

You need to set the harm block threshold for a specific harm category. A SafetySetting built without a category does not tell the API which kind of blocking to relax.
Solution
Change this…
SafetySetting.newBuilder()
    .setThreshold(SafetySetting.HarmBlockThreshold.BLOCK_NONE)
    .build()
…to this:
SafetySetting.newBuilder()
    .setCategory(HarmCategory.HARM_CATEGORY_YOUR_CATEGORY)
    .setThreshold(SafetySetting.HarmBlockThreshold.BLOCK_NONE)
    .build()
The full list of harm categories:
HARM_CATEGORY_SEXUALLY_EXPLICIT
HARM_CATEGORY_HATE_SPEECH
HARM_CATEGORY_HARASSMENT
HARM_CATEGORY_DANGEROUS_CONTENT
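Since setSafetySettings takes a list, you can build one SafetySetting per category and pass them all at once. A minimal sketch, assuming the com.google.cloud.vertexai.api package layout used by the SDK version in the question (verify the package names against your SDK version); SafetySettingsExample and blockNoneForAllCategories are illustrative names:

```java
import com.google.cloud.vertexai.api.HarmCategory;
import com.google.cloud.vertexai.api.SafetySetting;
import com.google.cloud.vertexai.api.SafetySetting.HarmBlockThreshold;
import java.util.Arrays;
import java.util.List;

public class SafetySettingsExample {

  // Builds one SafetySetting per harm category, each with its
  // threshold set to BLOCK_NONE.
  static List<SafetySetting> blockNoneForAllCategories() {
    HarmCategory[] categories = {
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        HarmCategory.HARM_CATEGORY_HARASSMENT,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT
    };
    return Arrays.stream(categories)
        .map(category -> SafetySetting.newBuilder()
            .setCategory(category)
            .setThreshold(HarmBlockThreshold.BLOCK_NONE)
            .build())
        .toList();
  }
}
```

Then replace the Collections.singletonList(...) call in the question with model.setSafetySettings(SafetySettingsExample.blockNoneForAllCategories()). Note that BLOCK_NONE only relaxes the configurable filters; responses can still be blocked for other reasons, so it is worth checking the response's finish reason before calling ResponseHandler.getText.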