How do I add a buffer to the video output? The video source is a private camera with a lot of filtering/lag.
Is there any way to add a buffer so that the loop only pulls as many frames per second from the camera source as it can actually process? As of now, the video output lags badly.
The code runs fine and doesn't crash, but it keeps falling behind until the picture stops moving altogether.
Is it easier to add a buffer, or is there a way to bypass the lagging/hanging coming from the actual video source?
import cv2
import numpy as np
# Resolution Pixel Size
# 640 x 480   - Standard Definition (SD)
# 1280 x 720  - High Definition (HD)
# 1920 x 1080 - Full High Definition (FHD)
# 3840 x 2160 - Ultra High Definition (UHD / 4K)
cyanColor = (255, 255, 0)
pinkColor = (255, 0, 255)
yellowColor = (0, 255, 255)
greenColor = (0, 255, 0)
blueColor = (255, 0, 0)
redColor = (0,0,255)
path_vid = "Resources/video/license_plate.mp4"
path_main_ent = 'rtsp://~~~/Streaming/Channels/101'
path_parking_Lot = 'rtsp://~~~/Streaming/Channels/101'
button_person = False
button_car = False
button_truck = False
counter = 0
nmsThreshold = 0.3
confThreshold = 0.4
def rescale_frame(image, percentage):
    width = int(image.shape[1] * percentage / 100)
    height = int(image.shape[0] * percentage / 100)
    new_frame = (width, height)
    return cv2.resize(image, new_frame, interpolation=cv2.INTER_AREA)
def click_button(event, x, y, flags, params):
    global button_person
    global button_car
    global button_truck
    if event == cv2.EVENT_LBUTTONDOWN:
        print(x, y)

        # ------ Person Button Clicking ---------
        polygon_person = np.array([[(20, 160), (200, 160), (200, 230), (20, 230)]])
        is_inside_person_button = cv2.pointPolygonTest(polygon_person, (x, y), False)
        if is_inside_person_button > 0:
            print("YAYYYY, We're clicking inside Button!!", x, y)
            if button_person is False:
                button_person = True
            else:
                button_person = False
            print("Now Person Button is: ", button_person)

        # ------ Car Button Clicking ---------
        polygon_car = np.array([[(20, 250), (200, 250), (200, 320), (20, 320)]])
        is_inside_car_button = cv2.pointPolygonTest(polygon_car, (x, y), False)
        if is_inside_car_button > 0:
            print("YAYYYY, We're clicking inside Button!!", x, y)
            if button_car is False:
                button_car = True
            else:
                button_car = False
            print("Now Car Button is: ", button_car)

        # ------ Truck Button Clicking ---------
        polygon_truck = np.array([[(20, 340), (200, 340), (200, 410), (20, 410)]])
        is_inside_truck_button = cv2.pointPolygonTest(polygon_truck, (x, y), False)
        if is_inside_truck_button > 0:
            print("YAYYYY, We're clicking inside Button!!", x, y)
            if button_truck is False:
                button_truck = True
            else:
                button_truck = False
            print("Now Truck Button is: ", button_truck)
# net = cv2.dnn.readNet("dnn_model/yolov3.weights", "dnn_model/yolov3.cfg")
# net = cv2.dnn.readNet("dnn_model/yolov3-spp.weights", "dnn_model/yolov3-spp.cfg")
# net = cv2.dnn.readNet("dnn_model/yolov3-tiny.weights", "dnn_model/yolov3-tiny.cfg")
# net = cv2.dnn.readNet("dnn_model/yolov4.weights", "dnn_model/yolov4.cfg")
net = cv2.dnn.readNet("dnn_model/yolov4-tiny.weights", "dnn_model/yolov4-tiny.cfg")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(640,480), scale=1/255)
classes = []
with open("dnn_model/classes.txt", "r") as file_object:
    for class_name in file_object.readlines():
        class_name = class_name.strip()
        classes.append(class_name)
# print("- Object List -")
# print(classes[0])
cap = cv2.VideoCapture(path_main_ent)
# cap = cv2.VideoCapture(path_parking_Lot)
cv2.namedWindow("Frame")  # The window name must match the one used in cv2.imshow()
cv2.setMouseCallback("Frame", click_button)
while True:
    ret, frame = cap.read()
    # print(type(frame))
    if not ret:
        continue

    img_frame_90 = rescale_frame(frame, 90)

    line_frame_above = cv2.line(img_frame_90, (190, 430), (1220, 470), yellowColor, 2)
    line_frame = cv2.line(img_frame_90, (180, 440), (1220, 480), blueColor, 4)
    line_frame_bottom = cv2.line(img_frame_90, (170, 450), (1220, 490), yellowColor, 2)

    polygon_person = np.array([[(20, 160), (200, 160), (200, 230), (20, 230)]])
    cv2.fillPoly(img_frame_90, polygon_person, greenColor)
    cv2.putText(img_frame_90, "Person", (50, 200), cv2.FONT_HERSHEY_PLAIN, 2, blueColor, 2)

    polygon_car = np.array([[(20, 250), (200, 250), (200, 320), (20, 320)]])
    cv2.fillPoly(img_frame_90, polygon_car, greenColor)
    cv2.putText(img_frame_90, "Car", (50, 290), cv2.FONT_HERSHEY_PLAIN, 2, blueColor, 2)

    polygon_truck = np.array([[(20, 340), (200, 340), (200, 410), (20, 410)]])
    cv2.fillPoly(img_frame_90, polygon_truck, greenColor)
    cv2.putText(img_frame_90, "Truck", (50, 380), cv2.FONT_HERSHEY_PLAIN, 2, blueColor, 2)

    # polygon_truck = np.array([[(20, 430), (200, 430), (200, 500), (20, 500)]])
    # cv2.fillPoly(img_frame_90, polygon_truck, greenColor)
    # cv2.putText(img_frame_90, "Bus", (50, 470), cv2.FONT_HERSHEY_PLAIN, 2, blueColor, 2)

    (class_ids, scores, bboxes) = model.detect(img_frame_90, nmsThreshold=nmsThreshold, confThreshold=confThreshold)
    if len(class_ids) != 0:
        for class_id, score, bbox in zip(class_ids, scores, bboxes):
            (x, y, w, h) = bbox
            class_name = classes[class_id]
            xmid = int((x + (x + w)) / 2)
            ymid = int((y + (y + h)) / 2)

            if class_name == "person" and button_person is True:
                cv2.rectangle(img_frame_90, (x, y), (x + w, y + h), pinkColor, 2)
                cv2.circle(img_frame_90, (xmid, ymid), 3, redColor, -1)
                # cv2.putText(img_frame_90, str(class_name), (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 1, yellowColor, 2)
                if ymid > 431 and ymid < 500 and xmid > 170 and xmid < 1220:
                    line_frame = cv2.line(img_frame_90, (350, 440), (1220, 480), greenColor, 4)
                    counter += 1

            if class_name == "car" and button_car is True:
                cv2.rectangle(img_frame_90, (x, y), (x + w, y + h), greenColor, 2)
                cv2.circle(img_frame_90, (xmid, ymid), 3, redColor, -1)
                # cv2.putText(img_frame_90, str(class_name), (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 1, yellowColor, 2)
                if ymid > 431 and ymid < 507:
                    line_frame = cv2.line(img_frame_90, (350, 440), (1220, 480), greenColor, 4)
                    counter += 1

            if class_name == "truck" and button_truck is True:
                cv2.rectangle(img_frame_90, (x, y), (x + w, y + h), cyanColor, 2)
                cv2.circle(img_frame_90, (xmid, ymid), 3, redColor, -1)
                # cv2.putText(img_frame_90, str(class_name), (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 1, yellowColor, 2)
                if ymid > 431 and ymid < 507:
                    line_frame = cv2.line(img_frame_90, (350, 440), (1220, 480), greenColor, 4)
                    counter += 1

    cv2.putText(img_frame_90, 'Total Vehicles : {}'.format(counter), (0, 100), cv2.FONT_HERSHEY_SIMPLEX, 2, yellowColor, 2)
    cv2.imshow("Frame", img_frame_90)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Comments (1)
Add this class and change a few lines of your code. It will always use the latest frame.
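The code blocks of this answer are missing from the page, so the following is a sketch of the kind of class the answer describes: a daemon thread keeps draining the RTSP stream in the background, and read() always hands back the newest frame, so the decoder's buffer never piles up while detection is busy. The name VideoCaptureLatest and the exact implementation are illustrative, not the answerer's original code.

import threading
import cv2

class VideoCaptureLatest:
    """Wraps cv2.VideoCapture and always returns the most recent frame."""

    def __init__(self, source):
        self.cap = cv2.VideoCapture(source)
        self.lock = threading.Lock()
        self.ret = False
        self.frame = None
        self.running = True
        self.thread = threading.Thread(target=self._reader, daemon=True)
        self.thread.start()

    def _reader(self):
        # Read as fast as the stream delivers; older frames are simply overwritten.
        while self.running:
            ret, frame = self.cap.read()
            with self.lock:
                self.ret = ret
                self.frame = frame

    def read(self):
        # Same signature as cv2.VideoCapture.read(), but returns the latest frame.
        with self.lock:
            frame = self.frame.copy() if self.frame is not None else None
            return self.ret, frame

    def release(self):
        self.running = False
        self.thread.join()
        self.cap.release()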
Your code:
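(These are the two capture call sites from the script above that need to change.)

cap = cv2.VideoCapture(path_main_ent)
...
while True:
    ret, frame = cap.read()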
New code:
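(Assuming the VideoCaptureLatest sketch above, the same call sites become the following; the extra None check covers the moment before the first frame has arrived.)

cap = VideoCaptureLatest(path_main_ent)
...
while True:
    ret, frame = cap.read()
    if not ret or frame is None:
        continue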
Example Code:
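(A minimal, self-contained usage sketch. It assumes the VideoCaptureLatest class defined above is in scope, and the RTSP URL is a placeholder for your own camera stream.)

import cv2

stream_url = "rtsp://user:password@camera-ip/Streaming/Channels/101"  # placeholder

cap = VideoCaptureLatest(stream_url)
try:
    while True:
        ret, frame = cap.read()
        if not ret or frame is None:
            continue  # stream not ready yet, or the last read failed

        # Heavy per-frame work (e.g. model.detect on the resized frame) goes here;
        # frames that arrive while it runs are discarded by the reader thread,
        # so the display never falls behind the live stream.
        cv2.imshow("Frame", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    cap.release()
    cv2.destroyAllWindows()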