Drawing a region based on a color filter in Python

I am working on a script to detect the two lines of a laser profile projected on the ground, for example:

[image: laser profile lines projected on the ground]

With the code below I can detect the laser light and draw lines inside the detected region:

vermelho_inicio = np.array([0, 9, 178])  # 131,72,208
vermelho_fim = np.array([255, 60, 255])
mask = cv2.inRange(img, vermelho_inicio, vermelho_fim)
edges = cv2.Canny(mask, 100, 200)

# Draw the lines on the laser (cone)
lines = cv2.HoughLinesP(edges, 5, np.pi/180, 0, maxLineGap=100)
if lines is not None:
    a, b, c = lines.shape
    for line in lines:
        x1, y1, x2, y2 = line[0]
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 5)

My result is:

[image: result with the detected laser lines drawn in green]

What do I need?

I need to fill the detected region in red and get the x1, y1, x2, y2 coordinates of the drawn region. The result I am looking for is shown below, or something similar:

[image: desired result, with the detected region filled in red]
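
One way to get there (a minimal sketch of my own, not from the original post, assuming the segments returned by cv2.HoughLinesP already span the laser region) is to take the extreme endpoints of all detected segments as the region's corners, fill that rectangle in red on a copy of the frame, and blend it back with some transparency; draw_red_region is a hypothetical helper name:

import cv2
import numpy as np

def draw_red_region(img, lines, alpha=0.4):
    # Sketch: fill the bounding box of the detected Hough segments in red
    # and return (x1, y1, x2, y2) of the drawn region, or None if no lines.
    if lines is None:
        return None
    pts = lines.reshape(-1, 4)                       # each row: x1, y1, x2, y2
    xs = np.concatenate([pts[:, 0], pts[:, 2]])
    ys = np.concatenate([pts[:, 1], pts[:, 3]])
    x1, y1 = int(xs.min()), int(ys.min())
    x2, y2 = int(xs.max()), int(ys.max())
    overlay = img.copy()
    cv2.rectangle(overlay, (x1, y1), (x2, y2), (0, 0, 255), -1)   # filled red box
    cv2.addWeighted(overlay, alpha, img, 1 - alpha, 0, img)       # blend onto img
    return x1, y1, x2, y2

Called right after the HoughLinesP step, e.g. coords = draw_red_region(img, lines), the returned tuple matches the x1,y1 / x2,y2 layout sketched in the full code below.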

My full code:

# -*- coding: utf-8 -*-
import numpy as np
import cv2
import time
import math

# Streams
# http://68.116.13.142:82/mjpg/video.mjpg INDUSTRIAL
# http://95.255.38.86:8080/mjpg/video.mjpg RUA ITALIA
# http://81.198.213.128:82/mjpg/video.mjpg CORREDOR MOVIMENTADO

class DetectorAPI:
    cap = cv2.VideoCapture("VideoCone.MOV")
    while True:
        r, img = cap.read()

        # Define the area of the video the model will work on
        # img = img[10:1280, 230:1280]
        img = cv2.resize(img, (800, 600))

        # Frame for red-zone detection
        # frame = cv2.GaussianBlur(img, (5, 5), 0)
        vermelho_inicio = np.array([0, 9, 178])  # 131,72,208
        vermelho_fim = np.array([255, 60, 255])
        mask = cv2.inRange(img, vermelho_inicio, vermelho_fim)
        edges = cv2.Canny(mask, 100, 200)

        # Draw the lines on the laser (cone)
        lines = cv2.HoughLinesP(edges, 5, np.pi/180, 0, maxLineGap=100)
        if lines is not None:
            a, b, c = lines.shape
            for line in lines:
                x1, y1, x2, y2 = line[0]
                cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 5)

        # Create the overlay used for the transparent danger-area rectangle
        overlay = img.copy()

        # Draw the danger area
        # x1,y1 ------
        # |          |
        # |          |
        # |          |
        # --------x2,y2

        # Get the frame information
        height, width, channels = img.shape
        # Split the frame to get the center of the image
        upper_left = (int(width / 4), int(height / 4))
        bottom_right = (int(width * 3 / 4), int(height * 3 / 4))

        # Draw the rectangle at the center of the video
        # DangerArea = cv2.rectangle(overlay, upper_left, bottom_right, (0, 0, 255), -1)
        # Write the text inside the danger area
        # cv2.putText(DangerArea, 'Danger Area', (int(width / 4), int(height * 3 / 4)), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2, cv2.LINE_AA)
        # cv2.addWeighted(overlay, 0.3, img, 1 - 0.4, 0, img)

        # Print the image center to the console
        print('Upper_Left: ' + str(upper_left) + ' bottom_right: ' + str(bottom_right))

        # Show the video
        cv2.imshow("edges", edges)
        cv2.imshow("Detectar Pessoas", img)
        key = cv2.waitKey(1)
        if key & 0xFF == ord('q'):
            break

Answer:

The closest result I could get is a convex hull:

[image: convex hull drawn around the detected laser region]

mask = cv2.inRange(img, vermelho_inicio, vermelho_fim)
np_points = np.transpose(np.nonzero(mask))
points = np.fliplr(np_points).astype(np.int32)  # OpenCV uses flipped x,y coordinates; convexHull needs int32/float32 points
approx = cv2.convexHull(points)
cv2.polylines(img, [approx], True, (0, 255, 255), 5)
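
Building on that hull (a hedged addition of my own, reusing img and approx from the snippet above rather than anything from the original answer), the region could be filled in red with some transparency and the requested corner coordinates read off the hull's bounding rectangle via cv2.boundingRect:

# Fill the hull in red on a copy of the frame and blend for transparency
overlay = img.copy()
cv2.fillPoly(overlay, [approx], (0, 0, 255))     # solid red region
cv2.addWeighted(overlay, 0.4, img, 0.6, 0, img)  # roughly 40% opacity

# Axis-aligned corners of the drawn region
x, y, w, h = cv2.boundingRect(approx)
x1, y1, x2, y2 = x, y, x + w, y + h
print('Region:', (x1, y1), (x2, y2))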
