Last Updated : 23 Jul, 2025
OpenCV is an open-source, cross-platform library for computer vision, machine learning, and image processing that runs on Windows, Linux, and macOS. With the help of OpenCV, we can easily process images and videos to recognize objects, faces, or even someone's handwriting.
In this article, we will see how to blur faces in images and videos using OpenCV in Python.
Requirements
In addition to the OpenCV module, we also need the Haar Cascade frontal face classifier in order to recognize faces. It is provided as an XML file and is used to detect faces in images and videos.
Make sure to download the Haar Cascade frontal face classifier from this link: haarcascade_frontalface_default.xml.
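Alternatively, recent opencv-python wheels bundle the Haar cascade XML files, so the classifier can also be loaded without downloading the file separately. Below is a minimal sketch, assuming the opencv-python package is installed:
Python3
import cv2

# cv2.data.haarcascades is the directory where opencv-python
# ships its bundled Haar cascade XML files
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
cascade = cv2.CascadeClassifier(cascade_path)

# An empty classifier means the file could not be found or loaded
if cascade.empty():
    raise IOError("Failed to load the Haar cascade file")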
Blur the faces in Images using OpenCV in Python
First, we will load an image that contains some faces so that we can test our code. After that, we will convert it into RGB format and detect faces using the Haar cascade classifier. From the resulting bounding box coordinates we blur that particular region, and then we can show the blurred image along with the original image.
Python3
# Importing libraries
import numpy as np
import cv2
import matplotlib.pyplot as plt

# Read the image with OpenCV and convert it to RGB,
# because OpenCV loads images in BGR format by default
image = cv2.cvtColor(cv2.imread('image.png'),
                     cv2.COLOR_BGR2RGB)

# Display the original image
print('Original Image')
plt.imshow(image)
plt.axis('off')
plt.show()

# Load the Haar cascade frontal face classifier
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# Detect faces and get their bounding boxes (x, y, width, height)
face_data = cascade.detectMultiScale(image,
                                     scaleFactor=2.0,
                                     minNeighbors=4)

for x, y, w, h in face_data:
    # Draw a red border around the detected face with thickness 5
    image = cv2.rectangle(image, (x, y), (x + w, y + h),
                          (255, 0, 0), 5)
    # Replace the face region with a median-blurred copy of itself
    image[y:y+h, x:x+w] = cv2.medianBlur(image[y:y+h, x:x+w], 35)

# Display the blurred image
print('Blurred Image')
plt.imshow(image)
plt.axis('off')
plt.show()
Output:
Original image and the blurred face image
Blur the faces in Videos using OpenCV in Python
First, we will load a video that contains some faces so that we can test our code. We will read it frame by frame, convert each frame into grayscale, and detect faces using the Haar cascade classifier. From the resulting bounding box coordinates we blur that particular region of the frame, and then we can show the video.
Python3
# Importing libraries
import cv2

# Load the Haar cascade classifier to detect human faces
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# Create a VideoCapture object and read from the input file
video_capture = cv2.VideoCapture('video.mp4')

# Read until the video is completed
while video_capture.isOpened():

    # Capture frame-by-frame
    ret, frame = video_capture.read()

    # Stop when no frame is returned (end of video)
    if not ret:
        break

    # Convert the frame into grayscale (shades of black & white)
    gray_image = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect faces in the grayscale frame
    face_data = cascade.detectMultiScale(gray_image,
                                         scaleFactor=1.3,
                                         minNeighbors=5)

    for x, y, w, h in face_data:
        # Draw a red border around the detected face with thickness 5
        image = cv2.rectangle(frame, (x, y), (x + w, y + h),
                              (0, 0, 255), 5)
        # Replace the face region with a median-blurred copy of itself
        image[y:y+h, x:x+w] = cv2.medianBlur(image[y:y+h, x:x+w], 35)

    # Show the frame with the blurred faces
    cv2.imshow('face blurred', frame)
    key = cv2.waitKey(1)

    # Press Q on the keyboard to exit
    if key == ord('q'):
        break

# When everything is done, release the video capture object
video_capture.release()

# Close all the frames
cv2.destroyAllWindows()
Output:
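The same region-of-interest replacement works with other OpenCV blur filters. For a softer, adjustable effect, cv2.medianBlur can be swapped for cv2.GaussianBlur. Below is a minimal sketch, assuming frame and the (x, y, w, h) bounding box come from the detection loop above; the kernel size (51, 51) is only an illustrative value:
Python3
# Blur the detected face with a Gaussian kernel instead of a
# median filter; larger (odd) kernel sizes give a stronger blur
roi = frame[y:y+h, x:x+w]
frame[y:y+h, x:x+w] = cv2.GaussianBlur(roi, (51, 51), 0)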