Face Recognition | Real-Time Face Recognition with OpenCV

In this article, we are going to learn how to detect faces in real time using OpenCV. After detecting a face in the webcam stream, we will save the frames containing it. Later we will pass these frames (images) to our mask detector classifier to find out if the person is wearing a mask or not.

We are also going to see how to build a custom mask detector using TensorFlow and Keras, but you can skip that part, as I will be attaching the trained model file below, which you can download and use. Here is the list of subtopics we are going to cover:

  1. What is Face Detection?
  2. Face Detection Methods
  3. Face Detection Algorithm
  4. Face Recognition
  5. Face Detection using Python
  6. Face Detection using OpenCV
  7. Create a model to recognize faces wearing a mask (Optional)
  8. How to do Real-Time Mask Detection

What is Face Detection?

The goal of face detection is to determine whether there are any faces in an image or video. If one or more faces are present, each face is enclosed by a bounding box, so we know the location of every face.

The main goal of face detection algorithms is to accurately and efficiently determine the presence and position of faces in an image or video. The algorithms analyze the visual content of the data, searching for patterns and features that correspond to facial characteristics. By employing various techniques, such as machine learning, image processing, and pattern recognition, face detection algorithms aim to distinguish faces from other objects or background elements within the visual data.

Human faces are difficult to model, as many variables can change, for example facial expression, orientation, lighting conditions, and partial occlusions such as sunglasses, scarves, masks, etc. The result of the detection gives the face location parameters, which may be required in various forms, for instance a rectangle covering the central part of the face, eye centers, or landmarks including eye, nose, and mouth corners, eyebrows, nostrils, etc.

Face Detection Methods

There are two main approaches to face detection:

  1. Feature-Based Approach
  2. Image-Based Approach

Feature-Based Approach

Objects are usually recognized by their unique features, and a human face has many features that distinguish it from other objects. A feature-based approach locates faces by extracting structural features such as the eyes, nose, and mouth, and then uses them to detect a face. Typically, some kind of statistical classifier is then trained on these features and used to separate facial and non-facial regions. In addition, human faces have particular textures that can be used to differentiate a face from other objects, and the edges of features can also help detect the parts of a face. In the coming section, we will implement a feature-based approach using OpenCV.

Image-Based Approach

In general, image-based methods rely on techniques from statistical analysis and machine learning to find the relevant characteristics of face and non-face images. The learned characteristics take the form of distribution models or discriminant functions, which are then used for face detection. This category includes algorithms such as neural networks, HMMs, SVMs, and AdaBoost learning. In the coming section, we will see how to detect faces with MTCNN, or Multi-Task Cascaded Convolutional Neural Network, which is an image-based approach to face detection.

Face Detection Algorithm

One of the popular algorithms that uses a feature-based approach is the Viola-Jones algorithm, which I am going to discuss briefly here. If you want to learn about it in detail, I would suggest going through the article Face Detection using Viola Jones Algorithm.

The Viola-Jones algorithm is named after two computer vision researchers, Paul Viola and Michael Jones, who proposed the method in 2001 in their paper, “Rapid Object Detection using a Boosted Cascade of Simple Features”. Despite being an old framework, Viola-Jones is quite powerful, and it has proven exceptionally notable in real-time face detection. The algorithm is painfully slow to train but can detect faces in real time with impressive speed.

Given an image (the algorithm works on grayscale images), it looks at many smaller subregions and tries to find a face by looking for specific features in each subregion. It needs to check many different positions and scales, because an image can contain many faces of various sizes. Viola and Jones used Haar-like features to detect faces in this algorithm.
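To make the idea concrete, here is a small illustrative sketch (not from the original article) of how Haar-like features are evaluated efficiently: an integral image lets the sum of any rectangle be computed from just four lookups, so a two-rectangle feature costs a handful of operations regardless of its size. The toy image values and rectangle positions below are arbitrary.

import numpy as np

# Toy 4x4 grayscale image
img = np.array([[10, 20, 30, 40],
                [15, 25, 35, 45],
                [12, 22, 32, 42],
                [18, 28, 38, 48]], dtype=np.float64)

# Integral image: ii[y, x] = sum of all pixels above and to the left, inclusive
ii = img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    # Sum of the rectangle with top-left (x, y), width w, height h,
    # computed from four integral-image lookups
    A = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    B = ii[y - 1, x + w - 1] if y > 0 else 0
    C = ii[y + h - 1, x - 1] if x > 0 else 0
    D = ii[y + h - 1, x + w - 1]
    return D - B - C + A

# A two-rectangle Haar-like feature: left half minus right half
feature = rect_sum(ii, 0, 0, 2, 4) - rect_sum(ii, 2, 0, 2, 4)
print(feature)  # negative here, since the right half is brighter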

Face Recognition

Face detection and face recognition are often used interchangeably, but they are quite different. In fact, face detection is just one part of face recognition.

Face recognition is a method of identifying or verifying the identity of an individual using their face. There are various algorithms for face recognition, and their accuracy may vary. Here I am going to describe how face recognition is done using deep learning.

In fact, here is an article, Face Recognition Python, which shows how to implement face recognition.
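As a quick, hedged illustration of the deep learning route (this library is not used in the rest of this article), the open-source face_recognition package maps each face to a 128-dimensional embedding and compares embeddings by distance. The image file names below are placeholders:

import face_recognition

# Placeholder image files: one known identity, one to verify
known_image = face_recognition.load_image_file("known.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")

# Each encoding is a 128-dimensional embedding of the first detected face
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Faces match if their embeddings are close enough (default tolerance 0.6)
match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
print("Match:", match, "distance:", round(float(distance), 3))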

Face Detection using Python

As mentioned before, here we are going to see how to detect faces using an image-based approach. MTCNN, or Multi-Task Cascaded Convolutional Neural Network, is one of the most popular and most accurate face detection tools that works on this principle. It is based on a deep learning architecture, specifically three neural networks (P-Net, R-Net, and O-Net) connected in a cascade.

So, let's see how we can use this algorithm in Python to detect faces in real time. First, you need to install the MTCNN library, which contains a trained model that can detect faces.

pip install mtcnn

Now let us see how to use MTCNN:

from mtcnn import MTCNN
import cv2

detector = MTCNN()
# Open the webcam stream
video_capture = cv2.VideoCapture(0)

while True:
    ret, frame = video_capture.read()
    frame = cv2.resize(frame, (600, 400))
    # MTCNN expects RGB input, while OpenCV captures BGR
    boxes = detector.detect_faces(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if boxes:

        box = boxes[0]['box']
        conf = boxes[0]['confidence']
        x, y, w, h = box[0], box[1], box[2], box[3]

        if conf > 0.5:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 255), 1)

    cv2.imshow("Frame", frame)
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()
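The loop above draws only the first detection. If you want every face, detect_faces() returns one dict per face, each with a 'box', a 'confidence', and a 'keypoints' dict of five facial landmarks; a sketch of the inner block might look like this:

# Sketch: draw all detected faces and their landmarks (replaces the
# single-face block inside the while loop above)
boxes = detector.detect_faces(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
for det in boxes:
    if det['confidence'] < 0.5:
        continue
    x, y, w, h = det['box']
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 255), 1)
    # five landmarks: eyes, nose, and mouth corners
    for point in det['keypoints'].values():
        cv2.circle(frame, point, 2, (0, 255, 0), -1)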

Face Detection using OpenCV

In this section, we are going to perform real-time face detection using OpenCV on a live stream from our webcam.

As you know, videos are basically made up of frames, which are still images. We perform face detection on each frame of the video, so when it comes to detecting a face in a still image versus a real-time video stream, there is not much difference between the two.
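For example, the same detector we are about to build works unchanged on a single photograph; a minimal sketch (the image file name is a placeholder) looks like this:

import cv2
import os

# Same cascade as below, applied to one still image instead of a stream
cascPath = os.path.dirname(cv2.__file__) + "/data/haarcascade_frontalface_alt2.xml"
faceCascade = cv2.CascadeClassifier(cascPath)

img = cv2.imread("photo.jpg")  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("photo_detected.jpg", img)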

We will be using the Haar cascade classifier, based on the Viola-Jones algorithm, to detect faces. It is essentially a machine learning object detection method used to identify objects in an image or video. OpenCV ships with several pre-trained Haar cascade models stored as XML files, so instead of creating and training a model from scratch, we use one of these files. We are going to use the “haarcascade_frontalface_alt2.xml” file in this project. Now let us start coding this up.

The first step is to find the path to the “haarcascade_frontalface_alt2.xml” file. We do this using Python's os module.

import cv2
import os

cascPath = os.path.dirname(
    cv2.__file__) + "/data/haarcascade_frontalface_alt2.xml"

The next step is to load our classifier. The path to the above XML file goes as an argument to OpenCV's CascadeClassifier() method.

faceCascade = cv2.CascadeClassifier(cascPath)

After loading the classifier, let us open the webcam using this simple OpenCV one-liner:

video_capture = cv2.VideoCapture(0)

Next, we need to get the frames from the webcam stream, which we do using the read() function. We call it in an infinite loop to keep getting frames until we want to close the stream.

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()

The read() function returns:

  1. The actual video frame read (one frame on each loop)
  2. A return code

The return code tells us if we have run out of frames, which can happen when reading from a file. It does not matter when reading from the webcam, since we can capture forever, so we will ignore it here.
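If you do read from a file, a small guard like this sketch (the file name is a placeholder) stops the loop cleanly once the frames run out:

import cv2

# Sketch: reading from a video file instead of the webcam
video_capture = cv2.VideoCapture("input.mp4")  # placeholder file name
while True:
    ret, frame = video_capture.read()
    if not ret:  # no frame returned: end of file
        break
    # ... process frame here ...
video_capture.release()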

For this particular classifier to work, we need to convert the frame to grayscale.

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

The faceCascade object has a method detectMultiScale(), which receives a frame (image) as an argument and runs the classifier cascade over it. The term MultiScale indicates that the algorithm looks at subregions of the image at multiple scales, to detect faces of different sizes.

faces = faceCascade.detectMultiScale(gray,
                                     scaleFactor=1.1,
                                     minNeighbors=5,
                                     minSize=(60, 60),
                                     flags=cv2.CASCADE_SCALE_IMAGE)

Let us go through the arguments of this function:

  • scaleFactor – Specifies how much the image size is reduced at each image scale. By rescaling the input image, you can effectively shrink a larger face to a smaller one, making it detectable by the algorithm. 1.05 is a good possible value, meaning you use a small resizing step, i.e. reduce the size by 5% each step, which increases the chance of finding a size that matches the model (see the sketch after this list).
  • minNeighbors – Specifies how many neighbors each candidate rectangle should have to be retained. This parameter affects the quality of the detected faces: a higher value results in fewer detections but of higher quality. 3–6 is a good range for it.
  • flags – Mode of operation.
  • minSize – Minimum possible object size. Objects smaller than this are ignored.
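To illustrate the trade-off, here is a looser configuration of the same call; the values are purely indicative, not prescriptive. A finer scale step and fewer required neighbors find more (and smaller) faces, at the cost of speed and more false positives.

# Illustrative only: a looser configuration of the same call
faces_loose = faceCascade.detectMultiScale(gray,
                                           scaleFactor=1.05,  # shrink by 5% per step
                                           minNeighbors=3,    # keep weaker candidates
                                           minSize=(30, 30))  # allow smaller faces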

The variable faces now contains all the detections for the target image. Detections are stored as pixel coordinates: each detection is defined by the coordinates of its top-left corner and the width and height of the rectangle enclosing the detected face.

To show the detected face, we will draw a rectangle over it. OpenCV's rectangle() draws rectangles on images, and it needs to know the pixel coordinates of the top-left and bottom-right corners. The coordinates indicate the row and column of pixels in the image. We can easily get these coordinates from the variable faces.

for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

rectangle() accepts the following arguments:

  • The original image
  • The coordinates of the top-left point of the detection
  • The coordinates of the bottom-right point of the detection
  • The color of the rectangle (a tuple defining the amount of red, green, and blue, 0–255). In our case, we set green by keeping the green component at 255 and the rest at zero.
  • The thickness of the rectangle lines

Next, we just display the resulting frame and also set up a way to exit this infinite loop and close the video feed: by pressing the ‘q’ key, we can exit the script.

cv2.imshow('Video', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
    break

The next two lines just clean up and release the video capture.

video_capture.release()
cv2.destroyAllWindows()

Here is the full code and output.

import cv2
import os

cascPath = os.path.dirname(
    cv2.__file__) + "/data/haarcascade_frontalface_alt2.xml"
faceCascade = cv2.CascadeClassifier(cascPath)
video_capture = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray,
                                         scaleFactor=1.1,
                                         minNeighbors=5,
                                         minSize=(60, 60),
                                         flags=cv2.CASCADE_SCALE_IMAGE)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # Display the resulting frame
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video_capture.release()
cv2.destroyAllWindows()
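The introduction mentioned saving the frames that contain a face. One minimal way to do that, sketched below with an arbitrary naming scheme, is to write out each detected region with cv2.imwrite inside the for loop above:

import time

# Sketch: save each detected face crop to disk (inside the for loop)
for i, (x, y, w, h) in enumerate(faces):
    face_crop = frame[y:y + h, x:x + w]
    # timestamp plus index keeps file names unique; adjust to taste
    cv2.imwrite("face_{}_{}.jpg".format(int(time.time()), i), face_crop)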

Output:

[Webcam stream with a green rectangle drawn around each detected face]

Create a model to recognize faces wearing a mask

In this section, we are going to build a classifier that can differentiate between faces with masks and without masks. In case you want to skip this part, here is a link to download the pre-trained model. Save it and move on to the next section to learn how to use it to detect masks using OpenCV. Check out our collection of OpenCV courses to help you develop your skills and understand the topic better.

To create this classifier, we need data in the form of images. Luckily, we have a dataset containing images of faces with masks and without masks. Since the images are few in number, we cannot train a neural network from scratch. Instead, we fine-tune a pre-trained network called MobileNetV2, which was trained on the ImageNet dataset.

Let us first import all the libraries we are going to need.

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.preprocessing.image import load_img
from tensorflow.keras.utils import to_categorical
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import os

The next step is to read all the images and assign them to a list. Here we get all the paths associated with the images and then label them accordingly. Remember, our dataset is contained in two folders: with_masks and without_masks. So we can easily get the labels by extracting the folder name from the path. We also preprocess each image and resize it to 224×224.

imagePaths = list(paths.list_images('/content/drive/My Drive/dataset'))
data = []
labels = []
# loop over the image paths
for imagePath in imagePaths:
	# extract the class label from the filename
	label = imagePath.split(os.path.sep)[-2]
	# load the input image (224x224) and preprocess it
	image = load_img(imagePath, target_size=(224, 224))
	image = img_to_array(image)
	image = preprocess_input(image)
	# update the data and labels lists, respectively
	data.append(image)
	labels.append(label)
# convert the data and labels to NumPy arrays
data = np.array(data, dtype="float32")
labels = np.array(labels)

The next step is to load the pre-trained model and customize it for our problem. We remove the top layers of this pre-trained model and add a few layers of our own. As you can see, the last layer has two nodes, as we have only two outputs. This is known as transfer learning.

baseModel = MobileNetV2(weights="imagenet", include_top=False,
	input_shape=(224, 224, 3))
# construct the head of the model that will be placed on top of
# the base model
headModel = baseModel.output
headModel = AveragePooling2D(pool_size=(7, 7))(headModel)
headModel = Flatten(name="flatten")(headModel)
headModel = Dense(128, activation="relu")(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(2, activation="softmax")(headModel)

# place the head FC model on top of the base model (this will become
# the actual model we will train)
model = Model(inputs=baseModel.input, outputs=headModel)
# loop over all layers in the base model and freeze them so they will
# *not* be updated during the first training process
for layer in baseModel.layers:
	layer.trainable = False

Now we need to convert the labels into one-hot encoding. After that, we split the data into training and testing sets for evaluation. The next step is data augmentation, which significantly increases the diversity of data available for training models, without actually collecting new data. Data augmentation techniques such as cropping, rotation, shearing, and horizontal flipping are commonly used to train large neural networks.

lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels = to_categorical(labels)
# partition the data into training and testing splits using 80% of
# the data for training and the remaining 20% for testing
(trainX, testX, trainY, testY) = train_test_split(data, labels,
	test_size=0.20, stratify=labels, random_state=42)
# construct the training image generator for data augmentation
aug = ImageDataGenerator(
	rotation_range=20,
	zoom_range=0.15,
	width_shift_range=0.2,
	height_shift_range=0.2,
	shear_range=0.15,
	horizontal_flip=True,
	fill_mode="nearest")

The next step is to compile the model and train it on the augmented data.

INIT_LR = 1e-4
EPOCHS = 20
BS = 32
print("[INFO] compiling model...")
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="binary_crossentropy", optimizer=opt,
	metrics=["accuracy"])
# train the head of the network
print("[INFO] training head...")
H = model.fit(
	aug.flow(trainX, trainY, batch_size=BS),
	steps_per_epoch=len(trainX) // BS,
	validation_data=(testX, testY),
	validation_steps=len(testX) // BS,
	epochs=EPOCHS)

Now that our model is trained, let us plot a graph to see its learning curve. We also save the model for later use. Here is a link to this trained model.

N = EPOCHS
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="lower left")

Output:

[Training loss and accuracy plot]
# To save the trained model
model.save('mask_recog_ver2.h5')
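Beyond the learning curves, you may also want a quantitative check on the held-out split. A typical sketch, using scikit-learn's classification_report on the variables defined above, could look like this:

from sklearn.metrics import classification_report

# Sketch: evaluate the fine-tuned head on the 20% test split
predIdxs = model.predict(testX, batch_size=BS)
predIdxs = np.argmax(predIdxs, axis=1)  # index of the highest probability
print(classification_report(testY.argmax(axis=1), predIdxs,
                            target_names=lb.classes_))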

How to do Real-Time Mask Detection

Before moving to the next part, make sure to download the above model from this link and place it in the same folder as the Python script in which you will write the code below.

Now that our model is trained, we can modify the code from the first section so that it detects faces and also tells us whether the person is wearing a mask or not.

For our mask detector model to work, it needs images of faces. We will detect the faces in the frames using the methods shown in the first section, preprocess them, and then pass them to the model. So let us first import all the libraries we need.

import cv2
import os
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
import numpy as np

The first few lines are exactly the same as in the first section. The only difference is that we assign our pre-trained mask detector model to the variable model.

cascPath = os.path.dirname(
    cv2.__file__) + "/data/haarcascade_frontalface_alt2.xml"
faceCascade = cv2.CascadeClassifier(cascPath)
model = load_model("mask_recog1.h5")

video_capture = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray,
                                         scaleFactor=1.1,
                                         minNeighbors=5,
                                         minSize=(60, 60),
                                         flags=cv2.CASCADE_SCALE_IMAGE)

Next, we define some lists. faces_list contains all the faces detected by the faceCascade model, and the preds list is used to store the predictions made by the mask detector model.

faces_list = []
preds = []

Since the faces variable contains the top-left corner coordinates, height, and width of each rectangle enclosing a face, we can use that to crop out the face and then preprocess the crop so it can be fed into the model for prediction. The preprocessing steps are the same as those followed when training the model in the previous section; for example, the model was trained on RGB images, so we convert the crop to RGB here.

    for (x, y, w, h) in faces:
        face_frame = frame[y:y+h, x:x+w]
        face_frame = cv2.cvtColor(face_frame, cv2.COLOR_BGR2RGB)
        face_frame = cv2.resize(face_frame, (224, 224))
        face_frame = img_to_array(face_frame)
        face_frame = np.expand_dims(face_frame, axis=0)
        face_frame = preprocess_input(face_frame)
        faces_list.append(face_frame)
        if len(faces_list) > 0:
            # stack the crops into one batch before predicting
            preds = model.predict(np.vstack(faces_list))
        for pred in preds:
            # mask holds the probability of wearing a mask, and vice versa
            (mask, withoutMask) = pred

After getting the predictions, we draw a rectangle over the face and put a label on it according to the prediction.

label = "Mask" if masks > with outMask else "No Mask"
        shade = (0, 255, 0) if label == "Mask" else (0, 0, 255)
        label = "{}: {:.2f}%".format(label, max(masks, with outMask) * 100)
        cv2.putText(body, label, (x, y- 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.45, shade, 2)

        cv2.rectangle(body, (x, y), (x + w, y + h),shade, 2)

The rest of the steps are the same as in the first section.

    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video_capture.release()
cv2.destroyAllWindows()

Here is the complete code and output:

import cv2
import os
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
import numpy as np

cascPath = os.path.dirname(
    cv2.__file__) + "/data/haarcascade_frontalface_alt2.xml"
faceCascade = cv2.CascadeClassifier(cascPath)
model = load_model("mask_recog1.h5")

video_capture = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray,
                                         scaleFactor=1.1,
                                         minNeighbors=5,
                                         minSize=(60, 60),
                                         flags=cv2.CASCADE_SCALE_IMAGE)
    faces_list = []
    preds = []
    for (x, y, w, h) in faces:
        face_frame = frame[y:y+h, x:x+w]
        face_frame = cv2.cvtColor(face_frame, cv2.COLOR_BGR2RGB)
        face_frame = cv2.resize(face_frame, (224, 224))
        face_frame = img_to_array(face_frame)
        face_frame = np.expand_dims(face_frame, axis=0)
        face_frame = preprocess_input(face_frame)
        faces_list.append(face_frame)
        if len(faces_list) > 0:
            preds = model.predict(np.vstack(faces_list))
        for pred in preds:
            (mask, withoutMask) = pred
        label = "Mask" if mask > withoutMask else "No Mask"
        color = (0, 255, 0) if label == "Mask" else (0, 0, 255)
        label = "{}: {:.2f}%".format(label, max(mask, withoutMask) * 100)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)

        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
    # Display the resulting frame
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video_capture.release()
cv2.destroyAllWindows()

Output:

[Webcam stream with mask / no-mask labels drawn above detected faces]

This brings us to the end of this article, where we learned how to detect faces in real time and also designed a model that can detect faces with masks. Using this model we were able to turn the face detector into a mask detector.

Update: I trained another model that can classify images into wearing a mask, not wearing a mask, and not wearing the mask properly. Here is a link to the Kaggle notebook for this model. You can modify it, and you can also download the model from there and use it instead of the model we trained in this article. Although this model is not as efficient as the one we trained here, it has the extra feature of detecting an improperly worn mask.

If you are using this model, you need to make some minor changes to the code: replace the earlier lines with the lines below.

# Here are the minor changes to the OpenCV code
# (locs holds the face bounding boxes as (startX, startY, endX, endY) tuples)
for (box, pred) in zip(locs, preds):
        # unpack the bounding box and predictions
        (startX, startY, endX, endY) = box
        (mask, withoutMask, notProper) = pred

        # determine the class label and color we will use to draw
        # the bounding box and text
        if (mask > withoutMask and mask > notProper):
            label = "Mask"
        elif (withoutMask > notProper and withoutMask > mask):
            label = "Without Mask"
        else:
            label = "Wear Mask Properly"

        if label == "Mask":
            color = (0, 255, 0)
        elif label == "Without Mask":
            color = (0, 0, 255)
        else:
            color = (255, 140, 0)

        # include the probability in the label
        label = "{}: {:.2f}%".format(label,
                                     max(mask, withoutMask, notProper) * 100)

        # display the label and bounding box rectangle on the output
        # frame
        cv2.putText(frame, label, (startX, startY - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)
        cv2.rectangle(frame, (startX, startY), (endX, endY), color, 2)

You can also upskill with Great Learning's PGP Artificial Intelligence and Machine Learning Course. The course offers mentorship from industry leaders, and you will also have the opportunity to work on real-time industry-relevant projects.

Further Reading

  1. Real-Time Object Detection Using TensorFlow
  2. YOLO object detection using OpenCV
  3. Object Detection in Pytorch | What is Object Detection?
