Logic:
Importing Libraries:
The script starts by importing the OpenCV library (cv2), which is used for image
processing and computer vision tasks.
Results:
Flowchart:
Experiment 2: Detecting faces in video using Python and Raspberry Pi
Code:
import numpy as np
import cv2

faceCascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)
cap.set(3, 640)  # set Width
cap.set(4, 480)  # set Height

while True:
    ret, img = cap.read()
    # img = cv2.flip(img, -1)  # uncomment if the camera is mounted upside-down
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.2,
        minNeighbors=5,
        minSize=(20, 20)
    )
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = img[y:y+h, x:x+w]
    cv2.imshow('video', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # press 'ESC' to quit
        break

cap.release()
cv2.destroyAllWindows()
Logic:
1. Importing Libraries:
The script starts by importing necessary libraries, specifically NumPy (`numpy`) for
numerical operations and OpenCV (`cv2`) for image processing.
2. Loading Haar Cascade Classifier:
The Haar Cascade Classifier for face detection is loaded using the `CascadeClassifier`
class. The classifier is stored in an XML file (`haarcascade_frontalface_default.xml`) and
contains the necessary information to detect frontal faces.
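A wrong path makes the classifier fail silently (it simply never detects anything), so it can be worth verifying that it actually loaded. A minimal sketch, assuming the opencv-python package, which bundles the cascade XML files under cv2.data.haarcascades:

import cv2

# Loading from cv2.data.haarcascades avoids keeping a local copy of
# haarcascade_frontalface_default.xml next to the script.
cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
faceCascade = cv2.CascadeClassifier(cascade_path)

# empty() returns True if the XML file could not be found or parsed; without
# this check a bad path only shows up later as "no faces ever detected".
if faceCascade.empty():
    raise IOError('Could not load Haar cascade from ' + cascade_path)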
3. Setting up the Camera:
The script initializes access to the camera (either the built-in webcam or an external
camera) using `cv2.VideoCapture(0)`. It sets the desired width and height for the
captured video frames.
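The numbers 3 and 4 in cap.set(3, 640) and cap.set(4, 480) are OpenCV's property IDs for frame width and height. A sketch of the same setup using the named constants, with an optional open-check added only for illustration:

import cv2

cap = cv2.VideoCapture(0)   # index 0: first camera (USB webcam or Pi camera via V4L2)

# CAP_PROP_FRAME_WIDTH and CAP_PROP_FRAME_HEIGHT have the numeric IDs 3 and 4,
# which is what cap.set(3, 640) and cap.set(4, 480) refer to in the script.
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

# Optional check (not in the original script): fail early if no camera opens.
if not cap.isOpened():
    raise IOError('Cannot open camera 0')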
4. Capturing and Preprocessing Frames:
Inside a loop, the script continuously captures frames from the camera. Each frame is
converted from color (BGR) to grayscale using `cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)`,
because the Haar cascade operates on single-channel images; this grayscale frame is then
used for face detection.
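A slightly more defensive version of this step, shown here only as a sketch, checks the ret flag returned by cap.read() before converting the frame:

import cv2

cap = cv2.VideoCapture(0)

# cap.read() returns (ret, img); ret is False when no frame could be grabbed,
# e.g. the camera was disconnected. The original script assumes ret is True.
ret, img = cap.read()
if not ret:
    raise RuntimeError('Failed to grab a frame from the camera')

# Haar cascades operate on single-channel images, so the BGR frame is
# converted to grayscale before detection.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)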
5. Face Detection:
The pre-trained Haar Cascade Classifier is applied to the grayscale frame using the
`detectMultiScale` function. This function scans the frame with a sliding window at
multiple scales, controlled by the scale factor (how much the image is shrunk between
pyramid levels), the minimum number of neighbors (how many overlapping detections are
required to accept a face), and the minimum size of a detected face.
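The sketch below isolates this call on a hypothetical still image (group_photo.jpg is an assumed test file, not part of the experiment) and annotates what each parameter controls:

import cv2

faceCascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
# 'group_photo.jpg' is a hypothetical test image.
gray = cv2.imread('group_photo.jpg', cv2.IMREAD_GRAYSCALE)

# scaleFactor=1.2  -> the image pyramid shrinks by 20% per level; smaller values
#                     detect more face sizes but cost more processing time.
# minNeighbors=5   -> a candidate window must overlap at least 5 neighbouring
#                     detections to be kept, which suppresses false positives.
# minSize=(20, 20) -> candidate faces smaller than 20x20 pixels are ignored.
faces = faceCascade.detectMultiScale(gray, scaleFactor=1.2,
                                     minNeighbors=5, minSize=(20, 20))
print('faces found:', len(faces))   # one (x, y, w, h) box per detected face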
6. Drawing Rectangles Around Detected Faces:
For each detected face, `detectMultiScale` returns the coordinates (x, y) of the
top-left corner and the width (w) and height (h) of the bounding box. The script then
draws a rectangle around each detected face using `cv2.rectangle` and slices out the
corresponding grayscale and color regions of interest (`roi_gray`, `roi_color`).
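Because the bounding-box coordinates also define the region-of-interest slices, each detected face can be cropped out as its own image. A small illustrative sketch (the test image and output filenames are assumptions, not part of the experiment):

import cv2

img = cv2.imread('group_photo.jpg')          # hypothetical test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faceCascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
faces = faceCascade.detectMultiScale(gray, scaleFactor=1.2,
                                     minNeighbors=5, minSize=(20, 20))

# (x, y) is the top-left corner and (w, h) the size of each bounding box, so
# img[y:y+h, x:x+w] slices out the face region of interest. Saving each crop
# to face_<i>.png is an addition for illustration only.
for i, (x, y, w, h) in enumerate(faces):
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)   # blue box, 2 px
    cv2.imwrite('face_%d.png' % i, img[y:y + h, x:x + w])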
7. Displaying the Video Feed with Detected Faces:
The modified frame with rectangles drawn around the detected faces is displayed using
`cv2.imshow('video', img)`. The script waits for a key press using `cv2.waitKey(30)`
to allow the user to view the video feed. If the 'ESC' key is pressed (`k == 27`), the
loop breaks, and the script releases the camera and closes the displayed video window.
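The display and quit logic can be exercised on its own; the sketch below is a minimal viewer loop with no face detection, assuming camera index 0 is available:

import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, img = cap.read()
    if not ret:
        break
    cv2.imshow('video', img)
    # waitKey(30) waits up to 30 ms for a key and services the GUI event loop
    # (imshow windows only refresh while waitKey runs). Masking with 0xff keeps
    # the low byte of the key code; 27 is the code for the ESC key.
    if cv2.waitKey(30) & 0xff == 27:
        break
cap.release()
cv2.destroyAllWindows()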
Overall, the script continuously captures video frames from the camera, detects faces
in each frame using the Haar Cascade Classifier, draws rectangles around the detected
faces, and displays the video feed with the rectangles indicating the detected faces. The
script allows the user to quit by pressing the 'ESC' key.
Flowchart:
Conclusion:
This experiment highlighted the successful implementation of face detection using OpenCV in
Python on a Raspberry Pi platform. It showcased the real-time capabilities of Raspberry Pi for
image and video processing, making it suitable for a variety of applications in the domain of
computer vision and beyond. The integration of OpenCV with Python provided a flexible and
efficient approach to detect faces, which is a fundamental step in many computer vision
applications.