I am trying to run the FP16 person detection model person-detection-retail-0013 and the person re-identification model person-reidentification-retail-0079 on an Intel Neural Compute Stick, but as soon as my application loads these models onto the device, I get the following exception:
[INFERENCE ENGINE EXCEPTION] Dynamic batch is not supported
I have already set the maximum batch size to 1 when loading the network, and I based my project on the pedestrian tracker demo from the OpenVINO toolkit:
main.cpp -> CreatePedestrianTracker
CnnConfig reid_config(reid_model, reid_weights);
reid_config.max_batch_size = 16;

try {
    if (ie.GetConfig(deviceName, CONFIG_KEY(DYN_BATCH_ENABLED)).as<std::string>() !=
        PluginConfigParams::YES) {
        reid_config.max_batch_size = 1;
        std::cerr << "[DEBUG] Dynamic batch is not supported for " << deviceName
                  << ". Fall back to batch 1." << std::endl;
    }
} catch (const InferenceEngine::details::InferenceEngineException& e) {
    reid_config.max_batch_size = 1;
    std::cerr << e.what() << " for " << deviceName
              << ". Fall back to batch 1." << std::endl;
}
Cnn.cpp -> void CnnBase::InferBatch
void CnnBase::InferBatch(
        const std::vector<cv::Mat>& frames,
        std::function<void(const InferenceEngine::BlobMap&, size_t)> fetch_results) const {
    // Batch size is taken from the network's input tensor (the N dimension).
    const size_t batch_size = input_blob_->getTensorDesc().getDims()[0];
    size_t num_imgs = frames.size();
    for (size_t batch_i = 0; batch_i < num_imgs; batch_i += batch_size) {
        const size_t current_batch_size = std::min(batch_size, num_imgs - batch_i);
        for (size_t b = 0; b < current_batch_size; b++) {
            matU8ToBlob<uint8_t>(frames[batch_i + b], input_blob_, b);
        }
        // SetBatch() (dynamic batch) is skipped on MYRIAD and HDDL devices.
        if ((deviceName_.find("MYRIAD") == std::string::npos) &&
            (deviceName_.find("HDDL") == std::string::npos)) {
            infer_request_.SetBatch(current_batch_size);
        }
        infer_request_.Infer();
        fetch_results(outputs_, current_batch_size);
    }
}
I suspect the problem may lie in the topology of the detection network, but I would like to ask whether anyone has run into the same problem and managed to solve it.
Thanks.
Answer:
Unfortunately, the Myriad plugin does not support dynamic batching. Please try a newer version of the demo; you can find it, for example, here: https://github.com/opencv/open_model_zoo/tree/master/demos/pedestrian_tracker_demo . That version of the demo has been updated so that it does not use dynamic batching at all.
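To illustrate why falling back to max_batch_size = 1 sidesteps the exception: with a fixed batch of 1, the InferBatch loop above degenerates into one Infer() call per frame, so SetBatch() (and therefore dynamic-batch support) is never needed. Below is a minimal, self-contained sketch of just the chunking logic, with the Inference Engine calls stripped out; the helper name chunk_sizes is hypothetical and not part of the demo:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical helper (not from the demo): returns the sizes of the chunks
// that the InferBatch loop would submit when given `num_imgs` frames and a
// network whose input blob has a fixed batch dimension of `batch_size`.
std::vector<std::size_t> chunk_sizes(std::size_t num_imgs,
                                     std::size_t batch_size) {
    std::vector<std::size_t> sizes;
    for (std::size_t batch_i = 0; batch_i < num_imgs; batch_i += batch_size) {
        // The last chunk may be smaller than batch_size; shrinking a batch at
        // run time is exactly what SetBatch() (dynamic batch) is for, and it
        // is what the Myriad plugin cannot do.
        sizes.push_back(std::min(batch_size, num_imgs - batch_i));
    }
    return sizes;
}
```

With batch_size == 1, chunk_sizes(n, 1) is simply n chunks of one frame each, so every submitted batch already matches the network's static batch dimension and the Myriad plugin is never asked for a dynamic batch; with batch_size == 16 and, say, 5 frames, a single undersized chunk of 5 is produced, which is the case that triggers the exception on MYRIAD.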