Flask live-stream not working with the host attribute
I want to classify video with OpenCV methods using Flask; the video to be classified is a live stream coming from the user.
I googled a lot, finally found some code, and did the following:
from flask import Flask, render_template
from flask_socketio import SocketIO, emit
import time
import io
from PIL import Image
import base64, cv2
import numpy as np
# import pyshine as ps
from flask_cors import CORS, cross_origin
import imutils
# import dlib
from engineio.payload import Payload

# detector = dlib.get_frontal_face_detector()
# predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

Payload.max_decode_packets = 2048

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins='*')
CORS(app)


@app.route('/', methods=['POST', 'GET'])
def index():
    return render_template('index.html')


def readb64(base64_string):
    # Strip the "data:image/jpeg;base64," prefix, decode, and
    # convert the PIL image to an OpenCV BGR array.
    idx = base64_string.find('base64,')
    base64_string = base64_string[idx + 7:]
    sbuf = io.BytesIO()
    sbuf.write(base64.b64decode(base64_string, ' /'))
    pimg = Image.open(sbuf)
    return cv2.cvtColor(np.array(pimg), cv2.COLOR_RGB2BGR)


def moving_average(x):
    return np.mean(x)


@socketio.on('catch-frame')
def catch_frame(data):
    emit('response_back', data)


fps = 30
prev_recv_time = 0
cnt = 0
fps_array = [0]


@socketio.on('image')
def image(data_image):
    global fps, cnt, prev_recv_time, fps_array
    recv_time = time.time()
    text = 'FPS: ' + str(fps)
    frame = readb64(data_image)
    # frame = changeLipstick(frame, [255, 0, 0])
    # frame = ps.putBText(frame, text, text_offset_x=20, text_offset_y=30, vspace=20, hspace=10, font_scale=1.0, background_RGB=(10,20,222), text_RGB=(255,255,255))

    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    imgencode = cv2.imencode('.jpeg', frame, [cv2.IMWRITE_JPEG_QUALITY, 40])[1]

    # base64 encode
    stringData = base64.b64encode(imgencode).decode('utf-8')
    b64_src = 'data:image/jpeg;base64,'
    stringData = b64_src + stringData

    # emit the frame back
    emit('response_back', stringData)

    fps = 1 / (recv_time - prev_recv_time)
    fps_array.append(fps)
    fps = round(moving_average(np.array(fps_array)), 1)
    prev_recv_time = recv_time
    # print(fps_array)
    cnt += 1
    if cnt == 30:
        fps_array = [fps]
        cnt = 0


def getMaskOfLips(img, points):
    """ This function will input the lips points and the image
    It will return the mask of the lips region containing white pixels
    """
    mask = np.zeros_like(img)
    mask = cv2.fillPoly(mask, [points], (255, 255, 255))
    return mask


def changeLipstick(img, value):
    """ This function will take an image and a lipstick color (RGB)
    and output the image with the lip color changed
    """
    img = cv2.resize(img, (0, 0), None, 1, 1)
    imgOriginal = img.copy()
    imgColorLips = imgOriginal
    imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    faces = detector(imgGray)
    for face in faces:
        x1, y1 = face.left(), face.top()
        x2, y2 = face.right(), face.bottom()

        facial_landmarks = predictor(imgGray, face)
        points = []
        for i in range(68):
            x = facial_landmarks.part(i).x
            y = facial_landmarks.part(i).y
            points.append([x, y])

        points = np.array(points)
        imgLips = getMaskOfLips(img, points[48:61])

        imgColorLips = np.zeros_like(imgLips)
        imgColorLips[:] = value[2], value[1], value[0]  # RGB -> BGR
        imgColorLips = cv2.bitwise_and(imgLips, imgColorLips)

        value = 1
        value = value // 10
        if value % 2 == 0:
            value += 1
        kernel_size = (6 + value, 6 + value)  # +1 is to avoid 0

        weight = 1
        weight = 0.4 + (weight) / 400
        imgColorLips = cv2.GaussianBlur(imgColorLips, kernel_size, 10)
        imgColorLips = cv2.addWeighted(imgOriginal, 1, imgColorLips, weight, 0)

    return imgColorLips


if __name__ == '__main__':
    socketio.run(app, port=9990, debug=True)
This is in main.py of the Flask app.
In the templates folder I have made index.html with:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
    <style>
        /* mirror the self-view; the id must match the <video> element below */
        #videoElement {
            transform: rotateY(180deg);
            -webkit-transform: rotateY(180deg); /* Safari and Chrome */
            -moz-transform: rotateY(180deg);    /* Firefox */
        }
    </style>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
    <script src='https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.0.0/socket.io.js'></script>
</head>
<body>
    <div id="container">
        <video autoplay playsinline id="videoElement"></video>
        <canvas id="canvas" width="400" height="300"></canvas>
    </div>
    <div class='video'>
        <img id="photo" width="400" height="300">
        <h1>video</h1>
    </div>

    <script type="text/javascript" charset="utf-8">
        var socket = io.connect(window.location.protocol + '//' + document.domain + ':' + location.port);
        socket.on('connect', function () {
            console.log("Connected...!", socket.connected);
        });

        var canvas = document.getElementById('canvas');
        var context = canvas.getContext('2d');
        var photo = document.getElementById('photo');
        const video = document.querySelector("#videoElement");
        video.width = 400;
        video.height = 300;

        if (navigator.mediaDevices.getUserMedia) {
            navigator.mediaDevices.getUserMedia({ video: true })
                .then(function (stream) {
                    video.srcObject = stream;
                    video.play();
                })
                .catch(function (err) {
                    console.error(err); // surface getUserMedia failures instead of swallowing them
                });
        }

        // grab a frame from the <video> every 1/FPS seconds,
        // JPEG-encode it via the canvas and send it to the server
        const FPS = 6;
        setInterval(() => {
            var width = video.width;
            var height = video.height;
            context.drawImage(video, 0, 0, width, height);
            var data = canvas.toDataURL('image/jpeg', 0.5);
            context.clearRect(0, 0, width, height);
            socket.emit('image', data);
        }, 1000 / FPS);

        // display the processed frame echoed back by the server
        socket.on('response_back', function (image) {
            photo.setAttribute('src', image);
        });
    </script>
</body>
</html>
The problem is: I need to make sure it works on the user's device, so I changed socketio.run() to add host='0.0.0.0', but then the page no longer has access to my webcam.
How can I solve this?
I am also unable to show the video back :(
The error in the console:
polling-xhr.js:264 GET http://localhost:9990/socket.io/?EIO=3&transport=polling&t=N_GnQuZ 400 (BAD REQUEST)
1 Answer
Hey,
First, you are using a very old Socket.IO client version; that is the reason for the 400 error. Use a newer Socket.IO client. If you open the request's preview pane in the Chrome dev tools, you will notice an "unsupported version" message.
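The failing request shows EIO=3, i.e. the Engine.IO 3 protocol spoken by the 1.x/2.x JavaScript clients, while a current Flask-SocketIO/python-socketio server expects Engine.IO 4. Assuming your server runs Flask-SocketIO 5.x, a minimal sketch of the fix is to swap the CDN script in index.html for a 4.x client (the exact version number below is illustrative):

    <!-- replace the old 2.0.0 client with a 4.x build that matches
         a Flask-SocketIO 5.x / python-socketio 5.x server -->
    <script src="https://cdn.socket.io/4.7.2/socket.io.min.js"></script>

If you are instead pinned to an older Flask-SocketIO 4.x server, the 2.x client is the matching one; the point is that the client and server major versions must line up, per the compatibility table in the Flask-SocketIO docs.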
Why you dont see the webcam!? is because you have no security context, when you go away from localhost like 0.0.0.0 you need a security context on chrome based browser it means you need HTTPS otherwise "navigator.mediaDevices.getUserMedia" will return undefined.
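One way to get a secure context for LAN testing (a sketch, assuming the default Werkzeug development server; the certificate file names are arbitrary) is to serve over HTTPS with a self-signed certificate:

    # Generate a self-signed certificate once, e.g.:
    #   openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    #       -keyout key.pem -out cert.pem
    # Then serve over HTTPS so the page is a secure context off-localhost.
    socketio.run(app, host='0.0.0.0', port=9990, debug=True,
                 ssl_context=('cert.pem', 'key.pem'))

The browser will warn about the self-signed certificate, which you can accept for testing; under eventlet or gevent, Flask-SocketIO takes certfile= and keyfile= arguments instead of ssl_context. You can confirm the secure-context problem in the console: window.isSecureContext is false exactly when navigator.mediaDevices disappears.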
See this post