I want to grab every frame of a video as an image. Some background: I have written a neural network that can recognize hand gestures. Now I want to start a video stream in which every image/frame of the stream is passed through that neural network. To make it fit my network, I want to process each frame and scale the image down to 28*28 pixels. In the end it should look similar to this: https://www.youtube.com/watch?v=JfSao30fMxY I have already searched the web and found that I can use cv2.VideoCapture to get the stream. But how can I pick out each frame as an image, process it, and print the result back onto the screen? My code so far looks like this:
import numpy as np
import cv2
import tensorflow as tf

cap = cv2.VideoCapture(0)

# Todo: each frame/image from the video should be saved as a variable and passed to imageToLabel()
# Todo: before the image is handed to the method, it needs to be converted into a 28*28 np array
# Todo: the returned label should be printed onto the video (otherwise it can be )
i = 0

while True:
    # Capture frame-by-frame
    # Load model once and pass it as a parameter
    ret, frame = cap.read()
    i += 1
    # Save the current frame to disk (cv2.imwrite returns a bool, not an image)
    cv2.imwrite('database/{index}.png'.format(index=i), frame)
    # Convert to grayscale (the correct flag is COLOR_BGR2GRAY, not COLOR_BGR2BGRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()


def imageToLabel(imgArr, checkpointLoad):
    # Load the trained model from the given checkpoint path
    new_model = tf.keras.models.load_model(checkpointLoad)
    # Reshape to (batch, height, width, channels) and normalise to [0, 1]
    imgArrNew = imgArr.reshape(1, 28, 28, 1) / 255
    prediction = new_model.predict(imgArrNew)
    label = np.argmax(prediction)
    return label
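For reference, a minimal sketch of how the pieces could be tied together, following the reshape and normalisation used in imageToLabel above: the model is loaded once before the loop, each frame is converted to grayscale, resized to 28*28 with cv2.resize, fed to the model, and the predicted class index is drawn back onto the live frame with cv2.putText. The checkpoint path 'my_checkpoint.h5' is a placeholder, and the assumption is that the network takes 28*28 grayscale input and outputs class probabilities.

import numpy as np
import cv2
import tensorflow as tf

# Placeholder path - replace with your actual saved model/checkpoint
model = tf.keras.models.load_model('my_checkpoint.h5')

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Downscale to the 28*28 grayscale input the network expects
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (28, 28))
    imgArr = small.reshape(1, 28, 28, 1) / 255.0

    # Predict and pick the most likely class
    prediction = model.predict(imgArr)
    label = int(np.argmax(prediction))

    # Draw the predicted label onto the frame and show it
    cv2.putText(frame, str(label), (10, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow('frame', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Note that calling model.predict on every frame may be too slow for a smooth stream; predicting only every few frames, or on a smaller model, is a common workaround.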