Introduction to the FER Project
Facial Expression Recognition (FER) is a project for detecting emotional expressions in facial images. It leverages computer vision and machine learning to interpret facial emotions, with applications in fields such as psychology, security, and entertainment.
Installation Process
FER operates on Python version 3.6 or later, and its installation is straightforward via pip:
$ pip install fer
FER requires OpenCV (version 3.2 or later) and TensorFlow (version 1.7.0 or later). Both can be installed with pip (the quotes keep the shell from treating >= as a redirection):
$ pip install "tensorflow>=1.7" opencv-contrib-python==3.3.0.9
Alternatively, they can be compiled from their source code. For users with a GPU, the tensorflow-gpu version can be installed for enhanced performance:
$ pip install "tensorflow-gpu>=1.7.0"
To handle videos with sound, the ffmpeg and moviepy packages must also be installed:
$ pip install ffmpeg moviepy
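After installation, a quick check (not part of FER's own API, just a convenience) confirms the required versions are present:

import cv2
import tensorflow as tf

# FER requires OpenCV 3.2+ and TensorFlow 1.7+
print("OpenCV:", cv2.__version__)
print("TensorFlow:", tf.__version__)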
How to Use FER
The FER library is designed to be user-friendly. The following Python snippet demonstrates a basic use case:
from fer import FER
import cv2

# Load an image containing at least one face
img = cv2.imread("justin.jpg")

# Detect faces and score each of the seven emotions
detector = FER()
result = detector.detect_emotions(img)
print(result)
This script reads an image, runs emotion detection on it, and prints results such as:
[{'box': [277, 90, 48, 63], 'emotions': {'angry': 0.02, 'disgust': 0.0, 'fear': 0.05, 'happy': 0.16, 'neutral': 0.09, 'sad': 0.27, 'surprise': 0.41}}]
For users who only need the dominant emotion, the top_emotion method returns it directly:
emotion, score = detector.top_emotion(img) # e.g., 'happy', 0.99
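Because each result carries a 'box' entry, the detections are easy to visualize. A small sketch building on the snippet above, assuming the box is in (x, y, width, height) order, consistent with the sample output (color and output filename are arbitrary choices):

# Draw each detected face box and label it with its highest-scoring emotion
for face in result:
    x, y, w, h = face["box"]
    top = max(face["emotions"], key=face["emotions"].get)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(img, top, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
cv2.imwrite("justin_annotated.jpg", img)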
Enhancing Facial Detection with MTCNN
By default, FER uses OpenCV’s Haar Cascade classifier for facial detection. Users can switch to the MTCNN network, which offers improved accuracy, by passing the mtcnn=True parameter:
detector = FER(mtcnn=True)
Processing Videos
FER can also analyze videos, breaking them down into frames to evaluate facial expressions. It utilizes either a local Keras model or the Peltarion API:
from fer import FER, Video

video_filename = "tests/woman2.mp4"
video = Video(video_filename)

# Analyze the video frame by frame, displaying the output as it runs
detector = FER(mtcnn=True)
raw_data = video.analyze(detector, display=True)

# Convert the raw results into a pandas DataFrame
df = video.to_pandas(raw_data)
video.analyze returns a list of JSON objects, each detailing a facial bounding box and the detected emotions, and to_pandas converts those results into a pandas DataFrame.
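The DataFrame is convenient for aggregate analysis. A brief sketch, assuming to_pandas yields one column per emotion key seen in the JSON output:

# Inspect the first few rows of per-frame results
print(df.head())

# Average score per emotion over the clip (column names assumed
# to match the emotion keys in the JSON output)
emotions = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]
print(df[emotions].mean())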
Leveraging TF-Serving
FER can also run against TF Serving Docker images for added flexibility and scalability. To enable this, set up the Docker environment and initialize FER with the tfserving=True parameter.
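On the client side, usage looks the same apart from the flag; a minimal sketch, assuming a suitable TF Serving container is already running and reachable:

from fer import FER
import cv2

# With tfserving=True, FER delegates emotion prediction to the
# TF Serving container instead of the bundled local Keras model
detector = FER(tfserving=True)
img = cv2.imread("justin.jpg")
print(detector.detect_emotions(img))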
Model Details
The FER package includes a pre-trained Keras model, ready for use out of the box. Users may replace it with a custom model by passing one to the FER detector at instantiation via the emotion_model parameter.
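As a hedged sketch of that option (whether emotion_model accepts a file path or a loaded Keras model is not specified here; the path below is a placeholder):

# Placeholder path to a custom Keras emotion classifier
detector = FER(emotion_model="models/custom_emotion_model.h5")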
Licensing and Credits
FER is distributed under the MIT License. The project incorporates methodologies derived from Ivan de Paz Centeno’s MTCNN implementation and Octavio Arriaga's facial expression recognition work. The FER 2013 dataset, curated by Pierre Luc Carrier and Aaron Courville, underpins the project's emotion classification model.
Taken together, FER simplifies facial emotion detection while showcasing what the combination of machine learning and computer vision can achieve.