Emotion Recognition With Python, OpenCV and a Face Dataset


Having your computer know how you feel? Madness!
Or actually not madness, but OpenCV and Python. In this tutorial we’ll write a little program to see if we can recognize emotions from images.
How cool would it be to have your computer recognize the emotion on your face? You could make all sorts of things with this, from a dynamic music player that plays music fitting your mood, to an emotion-recognizing robot.
For this tutorial I assume that you have:

– A working Python 2.7 installation (the code uses Python 2 style print statements);
– OpenCV with Python bindings (I used version 2.4.9; note that in OpenCV 3.x the FaceRecognizer classes moved to the cv2.face module);
– The CK+ dataset (see below), or a face dataset of your own.

Important: The code in this tutorial is licensed under the GNU GPL v3 open source license. You are free to modify and redistribute the code, provided that you grant others you share the code with the same rights and cite my name (use the citation format below). You are not free to redistribute or modify the tutorial itself in any way. By reading on you agree to these terms. If you disagree, please navigate away from this page.
Troubleshooting: I assume intermediate knowledge of Python for these tutorials. If you don’t have this, please try a few more basic tutorials first, or follow an entry-level course on Coursera or something similar. This also means you know how to interpret errors. Don’t panic: first read the error message, google it if you don’t know the solution, and only then ask for help. I’m getting too many emails and requests over very simple errors. Part of learning to program is learning to debug on your own as well. If you really can’t figure it out, let me know.
Unix users: the current tutorial is written for use on Windows systems. It will be updated in the near future to be cross-platform. In the meantime, the main pitfalls are the backslash path separators and the fact that glob() does not return sorted file lists on every platform (see the note below the first code snippet).
Citation format
van Gent, P. (2016). Emotion Recognition With Python, OpenCV and a Face Dataset. A tech blog about fun things with Python and embedded electronics. Retrieved from:
http://www.paulvangent.com/2016/04/01/emotion-recognition-with-python-opencv-and-a-face-dataset/


Getting started
To be able to recognize emotions on images we will use OpenCV. OpenCV has a few ‘facerecognizer’ classes that we can also use for emotion recognition. They use different underlying techniques; we’ll mostly use the Fisher Face one. For those interested in more background, this page has a clear explanation of what a fisher face is.
Request and download the dataset here (get the CK+). I cannot distribute it, so you will have to request access yourself. It seems the dataset has since been taken offline; in that case the only options are to make a set of your own or find another one. When making a set: be sure to include diverse examples and make it BIG. The more data, the more variance there is for the models to extract information from. Please do not ask others to share the dataset in the comments, as this is prohibited in the terms they accepted before downloading the set.
Once you have the dataset, extract it and look at the readme. It is organised into two directory trees: one containing the images, the other containing txt files that encode the emotion shown in the corresponding image sequence. From the readme of the dataset, the encoding is: {0=neutral, 1=anger, 2=contempt, 3=disgust, 4=fear, 5=happy, 6=sadness, 7=surprise}.
Let’s go!


Organising the dataset
First we need to organise the dataset. In your working directory, make two folders called “source_emotion” and “source_images”. Extract the dataset and put all folders containing the txt files (S005, S010, etc.) in the “source_emotion” folder. Put the folders containing the images in the “source_images” folder. Also create a folder named “sorted_set” to house our sorted emotion images, and within it create one subfolder per emotion label (“neutral”, “anger”, etc.).
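If you prefer to script the folder setup rather than create the directories by hand, here is a small sketch (purely a convenience addition; the folder names are the ones used throughout this tutorial):

import os
emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"] #Emotion labels used throughout
for folder in ["source_emotion", "source_images"]:
    if not os.path.exists(folder):
        os.makedirs(folder) #Create the source folders
for emotion in emotions:
    path = os.path.join("sorted_set", emotion)
    if not os.path.exists(path):
        os.makedirs(path) #Creates sorted_set plus one subfolder per emotion
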
In the readme file, the authors mention that only a subset (327 of the 593) of the image sequences actually contains archetypical emotions. Each image sequence shows the forming of an emotional expression, starting with a neutral face and ending with the full emotion. So, from each labeled sequence we want to extract two images: one neutral (the first image) and one with the emotional expression (the last image). To help, let’s write a small Python snippet that does this for us:

import glob
from shutil import copyfile

emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"] #Define emotion order
participants = glob.glob("source_emotion\\*") #Returns a list of all folders with participant numbers

for x in participants:
    part = "%s" %x[-4:] #Store current participant number
    for sessions in glob.glob("%s\\*" %x): #Store list of sessions for current participant
        for files in glob.glob("%s\\*" %sessions):
            current_session = files[20:-30]
            with open(files, 'r') as sourcefile: #'with' also closes the file again when done
                emotion = int(float(sourcefile.readline())) #Emotions are encoded as floats, so read the line as a float, then convert to integer
            sourcefile_emotion = glob.glob("source_images\\%s\\%s\\*" %(part, current_session))[-1] #Get path for last image in sequence, which contains the emotion
            sourcefile_neutral = glob.glob("source_images\\%s\\%s\\*" %(part, current_session))[0] #Do the same for the neutral (first) image
            dest_neut = "sorted_set\\neutral\\%s" %sourcefile_neutral[25:] #Generate path to put neutral image
            dest_emot = "sorted_set\\%s\\%s" %(emotions[emotion], sourcefile_emotion[25:]) #Do the same for the emotion-containing image
            copyfile(sourcefile_neutral, dest_neut) #Copy file
            copyfile(sourcefile_emotion, dest_emot) #Copy file
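
A note for Unix users (this came up repeatedly in the comments): glob.glob() is not guaranteed to return files in alphabetical order on every platform, while the snippet above relies on the first and last file of each sequence actually being first and last in the returned list. To be safe, wrap the glob calls in sorted(), for example inside the inner loop above:

            sourcefiles = sorted(glob.glob("source_images\\%s\\%s\\*" %(part, current_session))) #Sort to guarantee sequence order
            sourcefile_neutral = sourcefiles[0] #The first image in the sequence is neutral
            sourcefile_emotion = sourcefiles[-1] #The last image contains the emotion

Another fix from the comments: if you extracted the landmark files into the same folder tree as the emotion labels, filter explicitly for the emotion files with glob.glob("%s\\*emotion.txt" %sessions), otherwise the script will try to read a landmarks file as an emotion label and crash on the float conversion.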


Extracting faces
The classifier will work best if the training and classification images are all the same size and contain (almost) only a face (no clutter). We need to find the face on each image, convert it to grayscale, crop it and save the image to the dataset. We can use a Haar cascade classifier from OpenCV to automate the face finding. OpenCV actually provides four pre-trained frontal face classifiers, so to be sure we detect as many faces as possible let’s use all of them in sequence, and abort the face search once we have found one. Get them from the OpenCV directory or from here, and extract them to the same folder as your Python files.
Create another folder called “dataset”, and in it create subfolders for each emotion (“neutral”, “anger”, etc.). The final dataset will live in these folders. Then detect, crop and save faces as follows:

import cv2
import glob
faceDet = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
faceDet_two = cv2.CascadeClassifier("haarcascade_frontalface_alt2.xml")
faceDet_three = cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
faceDet_four = cv2.CascadeClassifier("haarcascade_frontalface_alt_tree.xml")
emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"] #Define emotions
def detect_faces(emotion):
    files = glob.glob("sorted_set\\%s\\*" %emotion) #Get list of all images with emotion
    filenumber = 0
    for f in files:
        frame = cv2.imread(f) #Open image
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) #Convert image to grayscale
        #Detect face using 4 different classifiers
        face = faceDet.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10, minSize=(5, 5), flags=cv2.CASCADE_SCALE_IMAGE)
        face_two = faceDet_two.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10, minSize=(5, 5), flags=cv2.CASCADE_SCALE_IMAGE)
        face_three = faceDet_three.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10, minSize=(5, 5), flags=cv2.CASCADE_SCALE_IMAGE)
        face_four = faceDet_four.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10, minSize=(5, 5), flags=cv2.CASCADE_SCALE_IMAGE)
        #Go over detected faces, stop at first detected face, return empty if no face.
        if len(face) == 1:
            facefeatures = face
        elif len(face_two) == 1:
            facefeatures = face_two
        elif len(face_three) == 1:
            facefeatures = face_three
        elif len(face_four) == 1:
            facefeatures = face_four
        else:
            facefeatures = [] #No face found: empty list, so the loop below does nothing
        #Cut and save face
        for (x, y, w, h) in facefeatures: #get coordinates and size of rectangle containing face
            print "face found in file: %s" %f
            gray = gray[y:y+h, x:x+w] #Cut the frame to size
            try:
                out = cv2.resize(gray, (350, 350)) #Resize face so all images have same size
                cv2.imwrite("dataset\\%s\\%s.jpg" %(emotion, filenumber), out) #Write image
            except:
                pass #If error (e.g. invalid crop), skip the file
        filenumber += 1 #Increment image number
for emotion in emotions:
    detect_faces(emotion) #Call function for each emotion

The last step is to clean up the “neutral” folder. Because most participants have expressed more than one emotion, we have more than one neutral image of the same person. This could (not sure that it will, but let’s be conservative) bias the classifier unfairly: it may learn to recognize the same person on another picture, or be triggered by other characteristics than the emotion displayed. Do this by hand: go into the folder and delete all duplicates of the same face, so that only one image of each person remains.
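If you would rather script this cleanup, a rough sketch is below. Note that it operates on “sorted_set\neutral” (run it before the face extraction step, since the extracted faces in “dataset” are renamed to sequential numbers), and it assumes the sorted filenames still start with the participant number, so verify that on your own set first. It keeps only the first neutral image per participant:

import glob
import os

seen = [] #Participants we already kept a neutral image for
for f in sorted(glob.glob("sorted_set\\neutral\\*")):
    participant = os.path.basename(f).split("_")[0] #Filename part before the first underscore identifies the participant
    if participant in seen:
        os.remove(f) #We already have a neutral image of this person, delete the duplicate
    else:
        seen.append(participant)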


Creating the training and classification set
Now we get to the fun part! The dataset has been organised and is ready to be used, but first we need to actually teach the classifier what certain emotions look like. The usual approach is to split the complete dataset into a training set and a classification set. We use the training set to teach the classifier to recognize the to-be-predicted labels, and use the classification set to estimate the classifier’s performance.
Note the reason for splitting the dataset: estimating classifier performance on the same set it has been trained on is unfair, because we are not interested in how well the classifier memorizes the training set. Rather, we are interested in how well it generalizes its recognition capability to never-seen-before data.
In any classification problem, the sizes of both sets depend on what you’re trying to classify, the size of the total dataset, the number of features and the number of classification targets (categories). It’s a good idea to plot a learning curve; we’ll get into this in another tutorial.
For now let’s create the training and classification sets: we randomly sample and train on 80% of the data, classify the remaining 20%, and repeat the process 10 times. Afterwards we’ll play around with several settings a bit and see what useful results we can get.

import cv2
import glob
import random
import numpy as np
emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"] #Emotion list
fishface = cv2.createFisherFaceRecognizer() #Initialize fisher face classifier (OpenCV 2.4.x; OpenCV 3 moved this to cv2.face.createFisherFaceRecognizer())
data = {}
def get_files(emotion): #Define function to get file list, randomly shuffle it and split 80/20
    files = glob.glob("dataset\\%s\\*" %emotion)
    random.shuffle(files)
    training = files[:int(len(files)*0.8)] #get first 80% of file list
    prediction = files[int(len(files)*0.8):] #get remaining 20%, so no file is skipped when the split doesn't divide evenly
    return training, prediction
def make_sets():
    training_data = []
    training_labels = []
    prediction_data = []
    prediction_labels = []
    for emotion in emotions:
        training, prediction = get_files(emotion)
        #Append data to training and prediction list, and generate labels 0-7
        for item in training:
            image = cv2.imread(item) #open image
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) #convert to grayscale
            training_data.append(gray) #append image array to training data list
            training_labels.append(emotions.index(emotion))
        for item in prediction: #repeat above process for prediction set
            image = cv2.imread(item)
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            prediction_data.append(gray)
            prediction_labels.append(emotions.index(emotion))
    return training_data, training_labels, prediction_data, prediction_labels
def run_recognizer():
    training_data, training_labels, prediction_data, prediction_labels = make_sets()
    print "training fisher face classifier"
    print "size of training set is:", len(training_labels), "images"
    fishface.train(training_data, np.asarray(training_labels))
    print "predicting classification set"
    cnt = 0
    correct = 0
    incorrect = 0
    for image in prediction_data:
        pred, conf = fishface.predict(image)
        if pred == prediction_labels[cnt]:
            correct += 1
            cnt += 1
        else:
            incorrect += 1
            cnt += 1
    return ((100.0*correct)/(correct + incorrect)) #Float division, otherwise Python 2 truncates the percentage
#Now run it
metascore = []
for i in range(0,10):
    correct = run_recognizer()
    print "got", correct, "percent correct!"
    metascore.append(correct)
print "\n\nend score:", np.mean(metascore), "percent correct!"

Let it run for a while. On my machine this returned 69.3% correct. That may not seem like a lot at first, but remember we have 8 categories. If the classifier learned absolutely nothing and just assigned class labels at random, we would expect on average (1/8)*100 = 12.5% correct. So actually it is already performing quite well. Now let’s see if we can optimize it.
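A question that comes up often is whether you can save the trained model, so you don’t have to retrain it every session. The FaceRecognizer classes can serialize themselves to disk; a minimal sketch, assuming OpenCV 2.4.x and a filename of your own choosing:

fishface.save("trained_emoclassifier.xml") #Write the trained model to disk

#...and in a later session:
fishface = cv2.createFisherFaceRecognizer()
fishface.load("trained_emoclassifier.xml") #Restore the trained model without retraining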


Optimizing the dataset
Let’s look critically at the dataset. The first thing to notice is that we have very few examples for “contempt” (18), “fear” (25) and “sadness” (28). I mentioned that it’s not fair to evaluate the classifier on the same data it has been trained on; similarly, it’s also not fair to give the classifier only a handful of examples and expect it to generalize well.
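To verify these counts on your own set, a quick check could look like this:

import glob
emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"]
for emotion in emotions:
    print emotion, len(glob.glob("dataset\\%s\\*" %emotion)) #Print the number of examples per emotion
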
Change the emotion list so that “contempt”, “fear” and “sadness” are no longer in it, because we really don’t have enough examples for them:

#Change from:
emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"]
#To:
emotions = ["neutral", "anger", "disgust", "happy", "surprise"]
 

Let it run for a while again. On my computer this results in 82.5% correct. Purely by chance we would expect on average (1/5)*100 = 20%, so the performance is not bad at all. However, something can still be improved.


Providing a more realistic estimate
Performance so far is pretty neat! However, the numbers might not be very reflective of a real-world application. The dataset we use is very standardized: all faces point exactly at the camera, and the emotional expressions are pretty exaggerated and even comical in some situations. Let’s see if we can extend the dataset with some more natural images. For this I used Google image search and the Chrome plugin ZIG lite to batch-download the images from the results.
If you want to do this yourself, be sure to clean up the downloaded images: make sure that for each image there is no text overlaid on the face, the emotion is recognizable, and the face points mostly at the camera. Then adapt the face cropping script above a bit and generate standardized face images.
Alternatively, save yourself an hour of work and download the set I generated and cleaned.
Merge both datasets, re-include “fear” and “sadness”, and run the classifier again on all emotion categories except “contempt”; I could not find any convincing source images for that emotion.
This gave 61.6% correct. Not bad, but not great either. Although this is far above chance level (1/7, about 14.3%), it still means the classifier will be wrong 38.4% of the time. I think the performance is actually quite impressive, considering that emotion recognition is a complex task. Still, I admit an algorithm that is wrong almost 4 out of 10 times is not very practical.
Speaking of practicality: depending on the goal, an emotion classifier might not actually need so many categories. For example, a dynamic music player that plays songs fitting your mood would already work well if it recognized anger, happiness and sadness. Using only these categories I get 77.2% accuracy. That is a more useful number! It means that almost 4 out of 5 times it will play a song fitting your emotional state. In a later tutorial we will build such a player.
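As before, the only change needed for that experiment is the emotion list (I assume here that you keep just the three mood categories):

#Change to:
emotions = ["anger", "happy", "sadness"]
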
The spread of accuracies between different runs is still quite large, however. This indicates either that the dataset is too small for the models to learn to predict emotions accurately, or that the problem is simply too complex. My money is mostly on the former. Using a larger dataset would probably improve detection quite a bit.
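To put a number on that spread, you could print the standard deviation across the ten runs next to the mean, for example:

print "standard deviation:", np.std(metascore), "percent" #Spread of accuracies across the 10 runs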


Looking at mistakes
The last thing that might be nice to look at is which mistakes the algorithm makes. Maybe the mistakes are understandable, maybe not. Add an extra line to the last part of the function run_recognizer() to copy the images that are wrongly classified, and create a folder “difficult” in your root working directory to house these images:

def run_recognizer():
    training_data, training_labels, prediction_data, prediction_labels = make_sets()
    print "training fisher face classifier"
    print "size of training set is:", len(training_labels), "images"
    fishface.train(training_data, np.asarray(training_labels))
    print "predicting classification set"
    cnt = 0
    correct = 0
    incorrect = 0
    for image in prediction_data:
        pred, conf = fishface.predict(image)
        if pred == prediction_labels[cnt]:
            correct += 1
            cnt += 1
        else:
            cv2.imwrite("difficult\\%s_%s_%s.jpg" %(emotions[prediction_labels[cnt]], emotions[pred], cnt), image) #<-- this one is new
            incorrect += 1
            cnt += 1
    return ((100.0*correct)/(correct + incorrect)) #Float division, as before

I ran it on all emotions except “contempt”, and ran it only once (for i in range(0,1)).
Some mistakes are understandable, for instance:
“Surprise”, classified as “Happy” (image surprise_happy_96); honestly, it’s a bit of both.
“Disgust”, classified as “Sadness” (image disgust_sadness_43); he could also be starting to cry.
“Sadness”, classified as “Disgust” (image sadness_disgust_95).

But most are less understandable, for example:
“Anger”, classified as “Happy” (image anger_happy_30).
“Happy”, classified as “Neutral” (image happy_neutral_73).

It’s clear that emotion recognition is a complex task, even more so when using only images. Even for us humans it is difficult, because the correct recognition of a facial emotion often depends on the context within which the emotion originates and is expressed.
I hope this tutorial gave you some insight into emotion recognition, and hopefully some ideas to do something with it. Did you make anything cool with it, or do you want to try something cool? Let me know in the comments below!


The dataset used in this article is the CK+ dataset, based on the work of:
– Kanade, T., Cohn, J. F., & Tian, Y. (2000). Comprehensive database for facial expression analysis. Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition (FG’00), Grenoble, France, 46-53.
– Lucey, P., Cohn, J. F., Kanade, T., Saragih, J., Ambadar, Z., & Matthews, I. (2010). The Extended Cohn-Kanade Dataset (CK+): A complete expression dataset for action unit and emotion-specified expression. Proceedings of the Third International Workshop on CVPR for Human Communicative Behavior Analysis (CVPR4HB 2010), San Francisco, USA, 94-101.


378 Comments

  • Ajesh K R

    May 23, 2016

    Hi sir,
    It’s really brilliant work you have done. I am really interested in it, and want to know more about how the classification of emotions is done. Can you provide some more material to help me understand the details?

    • Paul van Gent

      May 28, 2016

      Hi Ajesh,
      Thanks for your comment. I’m not sure what exactly you’re interested in. Do you want more background information or are you having problems with something in the code or explanations?
      If you want more background info please see here or here.

      • Em Aasim

        December 11, 2016

        hi sir!
        i am having trouble downloading data set. is there any other way to download it?

        • Paul van Gent

          December 15, 2016

          Hi Em Aasim. I cannot distribute it, so if the authors choose not to share it with you, then that’s it I’m afraid. You can always make your own set, although this is not a trivial task.

        • Salma

          August 15, 2017

          here you can agree to the conditions of use and submit a request in order to get the dataset.

        • Ho Xuan Vuong

          June 25, 2018

          How can I download it or make it myself

  • Michał Żołnieruk

    May 31, 2016

    Hi from Poland!
    Really cool stuff! It helped me and my friend a lot with our project to write an app for swapping face to adequate emoji. We did use your code to harvest faces from CK dataset and mentioned it in the repo we’ve just created. Take a look and tell us if you are fine with this.
    https://github.com/PiotrDabr/facemoji
    Thanks a lot again, your blog is really interesting and I can’t wait for new posts.

    • Paul van Gent

      May 31, 2016

      Hi Michał!
      I like your project, really cool stuff! I’ll be sure to give it a try tonight.
      Also thanks for mentioning me, it’s perfect this way.
      Keep up the good work!

    • Sunny Bhadani

      November 22, 2017

      hey bro, I found your GitHub project on emoji really cool… I am myself making a project on facial expression recognition. It would be great if we could collaborate.
      Looking forward to hearing from you.

    • HKHK

      September 21, 2018

      That is a nice project. But when I use Python 2.7.15 and OpenCV 3.2.0.7 to run your project, the accuracy is very bad. Could you specify all library versions? Could you help? My email is cognsoft(at)gmail(dot)com

      • palkab

        September 28, 2018

        If you’re on linux or osx, make sure the images from the dataset are ordered properly in glob(). Wrap it in a sorted() function to be sure.

        – Paul

  • Sam

    June 16, 2016

    It was so well explained and so helpful !!! Thank you so much !! We quoted your work in our synopsis

    • Paul van Gent

      June 16, 2016

      Thank you Sam! What did you make with it? I’m curious 🙂

  • Jasper

    July 17, 2016

    Hi Sir!
    I’m trying to do your Emotion-Aware Music Player but I’m having a problem. Whenever I run the code that crops the image and save it to the “dataset” folder, I get the error “UnboundLocalError: local variable ‘x’ referenced before assignment”. Any help with that? I’m using Spyder with python2.7 bindings and OpenCV 2.4.13.

    • Paul van Gent

      July 17, 2016

      Hi Jasper,
      Can you send me an e-mail with your code attached, and the full error message you’re getting? You can send it to “palkab29@gmail.com”. I’ll have a look in the afternoon :).

      • Jasper

        July 17, 2016

        I’ve just sent it to you! Thank you!

        • Paul van Gent

          July 17, 2016

          Turns out I missed a line when updating the code. Thanks for pointing it out Jasper! It’s updated now.

  • Alex

    August 4, 2016

    How would you edit the code that sorts the images in the files to sort the landmarks into different files?

    • Paul van Gent

      August 4, 2016

      Hi Alex,
      Most of the parts are there, if you look under “Organising the dataset”, for each iteration the code temporarily stores the participant ID in the variable part, the sessions in sessions and the emotion in the variable emotion. You can use these labels to also access the data from the landmarks database, since these are organized with the same directory structure as the images and emotion labels are.
      On a side-note, I’ll be posting a tutorial on landmark-based emotion detection somewhere this week. Keep an eye on the site if you get stuck. Good luck 🙂

      • Mrinalini

        July 18, 2018

        Hi Paul,
        I am getting an error “list index out of range” while organizing the dataset. could you please help me where I am doing wrong.

        Thanks in advance

        • palkab

          August 8, 2018

          Hi Mrinalini,

          That error usually means the list is empty, since the code tries to retrieve the last item in the list (the last item of an empty list does not exist). You need to check that the paths generated are correct, and that all the files are there.

          – Paul

  • Ali

    August 5, 2016

    Great Article.
    Thanks!


  • Alexander

    August 14, 2016

    The python script that sorts the images into emotion types slices ‘Sx’ (S0, S1, S2… S9 etc) from the subject participant part at the beginning of the filename of each image. I used your algorithm to sort the landmarks into facial expression files the same way and it retained the whole filename. Would you know why this happens? Basically the first two characters of the filename of each image are snipped off.

    • Paul van Gent

      August 14, 2016

      Hi Alex,
      This is because of line 19 and 20 where I slice the filenames from the path using “sourcefile_neutral[25:] (and the same for sourcefile_emotion). If you want a clean way of dealing with filenames of different lengths, first split the filename on the backslash using split(), for example sourcefile_neutral.split(“\\”). This returns a list of elements. Take the last element in the list with [-1:] to get the complete filename.
      Good luck!

      • Alex

        August 15, 2016

        Thanks, I dissected your code and figured that if you change “sourcefile_neutral[25:]” to “sourcefile_neutral[23:]” that I keep the whole .png filename. Oddly… I then had to change it to “sourcefile_neutral[26:]” for .txt files even though [25:] worked fine for them previously.
        I have one more issue… the code that detects the faces in each image and normalizes the dimensions of each image doesn’t appear to do anything. I’ve placed it in a directory above all the folders such as sorted_set, source_emotion etc. Is that the correct location for the script? Thanks again!

        • Alex

          August 15, 2016

          Turns out the issue is that either the classifiers are failing to detect the face (unlikely) or the script isn’t actually accessing the classifiers stored in opencv/sources/data/haarcascades

          • Alex

            August 15, 2016

            Added the full path to the xml files “C:\opencv\sources\data\haarcascades\…xml” and it worked!

          • Paul van Gent

            August 15, 2016

            Hi Alex, that would also work. In the script I assume you put the trained cascade classifier files in the same directory as the python files. I have updated the tutorial to make this more clear. Thanks!

  • Carlos

    August 17, 2016

    Hello Paul! thanks a lot for this wonderful code, it has helped me a lot. Just one question, I am always getting the first 4 saved images wrong of each set when using the save_face(emotions) but not in the first set. For example I start recording happy face, then angry so in the data set all pictures of happy are fine but the first four pictures in the angry data set are actually happy faces. What can my problem be? It is weird because the name of the picture is angry even if the face is happy. This happens to all subsequent emotions, just the first emotion data set is all good.

  • Ali

    September 15, 2016

    Hi Paul
    I have a outOfMemoryError. I use 4 Gigabytes of ram.
    Is there a solution for this problem? or just I should upgrade the ram to 8?
    thanks paul

    • Paul van Gent

      September 18, 2016

      Hi Ali,
      I’m not sure, the program shouldn’t be that memory intensive. Are you sure you’re not storing all images somewhere and leaving them in memory? Feel free to mail me your code if you want me to have a look.
      Cheers

  • Sridhhar

    September 18, 2016

    hi!! i am getting an error while running the 1st code (organising the dataset)
    ” file = open(files, ‘r’)
    IOError: [Errno 13] Permission denied: ‘source_emotion\\Emotion\\S005\\001′”
    can you help me out with this??
    if possible can u mail me the entire code?
    thank you

    • Paul van Gent

      September 18, 2016

      Hi Sridhar,
      Read the error message; “Permission denied”. It seems you don’t have permission to read from these folders. What system are you using?

      • Sridhar

        September 19, 2016

        i’m using windows 10.
        I tried changing the directories . Moved the entire folder to C and D drive. Didn’t work !!!

        • Sridhar

          September 19, 2016

          I’m new to python. Could you please explain it


      • Piyush Saraswat

        May 28, 2017

        Hi paul,
        i’m getting same error, Permission denied: ‘source_emotion\\Emotion_labels\\Emotion\\S005’
        what to do?

        • Paul van Gent

          June 12, 2017

          – Check the folders exist and have the exact names as in the code.
          – Does your user account have the correct permissions to these folders?
          – You might need to run the code with elevated privileges.

  • sridhar

    September 19, 2016

    hi Paul!!!
    The code is working now.
    There was a mistake in the directory path.
    Thanks for the support

    • Paul van Gent

      September 19, 2016

      I suspected something like this. Good to hear you found the issue. Good luck!

    • Prem Sekar

      March 6, 2017

      hi sridhar,
      may i know how u corrected ur error…

    • hiren

      November 1, 2017

      hey…bro…how do you solve it?

    • deepak

      November 16, 2018

      how did u solved it brother .

  • Justin Cruz

    September 30, 2016

    Good Day Mr. Paul! Can you mail me your entire code? If it’s okay with you? Thank you in advance!

    • Paul van Gent

      October 1, 2016

      Hi Justin,
      All the information you need is in the article, I’m sure you can figure it out :)!
      Cheers
      Paul

  • DONG

    November 17, 2016

    u r amazing

  • Virat Trivedi

    November 18, 2016

    Thank you so much Sir.
    Your guide has been of IMMENSE help in my work, can’t thank you enough.
    I just had one doubt which is that you said that “In a next tutorial we will build such a player.” which is a hyperlink.
    But that hyperlink gives a 404 error. Can you please provide us an updated link to the same?

    • Paul van Gent

      November 24, 2016

      Hi Virat,
      I’m glad to hear it helped. I will update the link, but you can also find it through the home page :-). Please don’t forget to cite me! Cheers,
      Paul

  • Jonathan

    November 24, 2016

    Hello,
    This is amazing. Can I get this accessed from iPhone project? I want to detect emotion from iOS device camera when user look at it. How to achieve this?

    • Paul van Gent

      November 24, 2016

      Hi Jonathan,
      I think you could, please see this link for some tutorials on how to get started with OpenCV and iOS:
      http://docs.opencv.org/2.4/doc/tutorials/ios/table_of_content_ios/table_of_content_ios.html
      I wouldn’t know the specifics because I don’t develop for iOS, but translating the code probably won’t be too difficult. You can also probably re-use trained models as the core is still OpenCV.
      Good luck! Let me know if you manage to get it working.

      • ramyavr

        November 30, 2017

        Hi Paul
        I am getting
        Traceback (most recent call last):
        File “extract.py”, line 49, in
        filenumber += 1 #Increment image number
        NameError: name ‘filenumber’ is not defined

  • vikrant

    December 16, 2016

    After running traing code i am getting this massge— AttributeError: ‘module’ object has no attribute ‘createFisherFaceRecognizer’
    I am using window 10. I have installed opencv.
    Plz help me.

    • vikrant

      December 16, 2016

      thank you very much Paul… i installed latest version of opencv. now its running.

      • Ashwin

        January 29, 2017

        I have the latest version of opencv 3.2 but I’m still getting the error-AttributeError: ‘module’ object has no attribute ‘createFisherFaceRecognizer’
        I am using Windows 10 64 bit
        python version 2.7.13
        Please help…

        • Paul van Gent

          February 8, 2017

          Hi Ashwin. Either check the docs to see what changed from 2.4 to 3.2, or use the OpenCV version from the tutorial.

          • leon trimble

            February 13, 2017

            hey! you’re the python expert i was hoping you’d tell us! i got facial recognition working from this tutorial https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/
            it took an age to work out how to swap out the webcam for the raspberry pi cam, please help! i need a more fundamental understanding of the codebase to work across versions!!!

          • Paul van Gent

            February 14, 2017

            Hi Leon. I´m not sure what you mean. Do you want more information about using webcams in conjunction with Python? Do you want more information on how to use images from different sources with the visual recognition code on this site? Let me know.
            Cheers
            Paul

          • leon trimble

            February 16, 2017

            …getting it working on opencv 3.

          • Paul van Gent

            February 16, 2017

            The docs provide all the answers.. It seems a new namespace ‘face’ is added in the new opencv versions.

        • Aniket More

          June 30, 2017

          use cv2.face.createFisherFaceRecognizer

    • Aniket More

      June 30, 2017

      I am getting only 21.3% accuracy what will be the reason?

      • Paul van Gent

        July 1, 2017

        Hi Aniket. The reasons could be numerous. To find out, I would check:
        – Your dataset could be too small for the task you are trying to accomplish. It could be that the images you’re trying to recognize the emotions on are too diverse, too difficult, or too few. Remember that the algorithm needs a large range of examples in order to quantify the underlying variance. The more subtle the emotions, or the more variation within each emotion, the more data is required. It is also possible that this algorithm simply isn’t up to the task given your dataset. Also look at my other tutorial using a support vector machine classifier in conjunction with facial landmarks.
        – Where are most mistakes made? Maybe one or two categories have little data in them and are throwing the rest off.
        – Are there no labeling errors or file retrieval errors? If emotion images receive an incorrect label this will obviously wreck performance.
        Good luck!
        -Paul

        • Aniket More

          July 5, 2017

          Thanks for the reply Paul, actually I am using the same data set you suggested (CK+). And I am training on 80% of data and classifying 20% as you said, still I am getting 21-23 % accuracy with all the categories and 36% using the reduced set of emotions. I am not getting why the same code with same data set is giving me different results.
          I am using Ubuntu 14.04, OpenCV 3.0.0. Also as it’s mentioned in some of the comments above glob does not work well in Ubuntu, I verified the data after sorting it is same as you mentioned “contempt” (18), “fear” (25) and “sadness” (28).

          • Paul van Gent

            July 5, 2017

            I’ve been getting more reports of 3.0 behaving differently. In essence the tutorial ‘abuses’ a facial recognition algorithm to instead detect variations within the face. It’s likely the approach was tweaked in 3.0 and doesn’t work so well for this application anymore. Be sure to check the software versions specified in the beginning of each tutorial; sometimes higher versions are not better.
            You can also take a look at the other tutorial on emotion recognition, it’s a bit more advanced but also a more ‘proper way’ of approaching this problem.
            -Paul

  • senorita

    January 2, 2017

    hi,
    when i’m trying to run the first snippet of code for organizing data into neutral and expression, i’m getting this error :
    Traceback (most recent call last):
    File “C:\Users\310256803\workspace\FirstProject\pythonProgram\tryingcv.py”, line 12, in
    file = open(files, ‘r’)
    IOError: [Errno 13] Permission denied: ‘c:\\source_emotion\\Emotion\\S005\\001’
    can anyone please help me

    • Paul van Gent

      January 12, 2017

      It seems your script doesn’t have permission to access these files. Is the folder correct? Also be sure to run the cmd prompt in administrator mode. If that doesn’t work, try moving the folder to your documents or desktop folder, that is often a quick fix for permission errors.

  • Andrej

    January 16, 2017

    Hello, thank you very much for your great tutorial. I was wondering if there is anyway I can save this trained model for later use.

  • Nkululeko

    January 17, 2017

    Hi Paul, thank you for this tutorial. It really helped me with my honours project. I would also like to learn how a Neural Network would do in classifying the emotions, maybe a SVM as well. Thanks.

    • Paul van Gent

      January 18, 2017

      Hi Nkululeko,
      Glad to hear it was of some help! If you want to learn about how other classifiers work with emotion recognition, you have to make a few intermediary steps of extracting features from images. Take a look at this tutorial. It also discusses the performance of an SVM and Random Forest Classifiers, and some pointers.
      In the near future I plan on writing a similar one for convoluted neural nets (deep learning networks)

      • bahar

        August 29, 2017

        hi , do you write this program with deep learning networks? if yes, please give us the link 🙂

        • Paul van Gent

          September 11, 2017

          Hi Bahar. This is planned, but not there yet.

  • Vish

    February 8, 2017

    Hi Paul, Thank you for such a detailed guide!
    I needed your assistance for my project which would to scan faces and detect emotions ( predicting mental disorders is an enhancement I intend to incorporate) . I’m completely new to this technique and find myself in a fix from where to begin 🙁
    Could you please guide me on the choice of softwares to be used, whether I should opt for MATLAB or OpenCV, or something else? This first step needs to be completed for me to proceed with the development of the application. I would really appreciate your assistance on this.

    • Paul van Gent

      February 8, 2017

      Hi Vish. For the software I would say whichever you feel most comfortable with. You are undertaking a complex project so the most important thing is that you are very familiar with your tools, otherwise you might end up demotivated quickly.
      Regarding classifying mental disorders; I don’t think that is possible from just images. Think about how you could automatically extract features to use in classification from other sources than pictures. However, don’t let me discourage you. If you want, keep me updated on your progress (info@paulvangent.com), I’d like that.

      • Vish

        February 9, 2017

        Thank you Paul for your quick response 🙂 Could you tell me whether the selection of software varies with the scope of the application?
        For instance, my requirement is to scan a photo clicked from the front camera of an android device. This photo is then processed at a remote server which returns the mood of the person.
        Is this scenario limited to a certain software package or do I have choices? I’m sorry if my questions sound silly, just confused from where to begin. Your guidance will really prove beneficial for me to begin.

        • Paul van Gent

          February 9, 2017

          No problem. If you have a server-side application running you need to think about two main things:
          – How much traffic are you expecting and how does your solution scale?
          – What is available on the server OS?
          I’m expecting that sending images to the server for analysis and receiving the results back quickly gets impractical as the number of users grows (you don’t want to wait more than a few seconds for the result..), and puts a lot of strain on server resources.
          However, if you’re developing an Android app, note that OpenCV is available on the platform as well. You can also train several classifiers from the SKLearn framework and use the trained models in an Android app. See ths following link for pointers:
          http://stackoverflow.com/questions/33535103/using-trained-scikit-learn-svm-classifiers-in-android
          Only simple math is required.

          • Vish

            April 25, 2017

            Hello Paul,
            I have reduced the scope of my application to detect only sad and happy emotions since I have struggled with using MATLAB as I have no prior knowledge about it. Could you please let me know how do I implement your tutorial on Mac?
            I have created the necessary folder structure but need to know how do I execute the files.

          • Paul van Gent

            April 25, 2017

            Hi Vish. It should be similar to Windows, except you use Terminal instead of Command Prompt. To install the necessary packages, see the repo manuals for each package.

  • Prashanth P Prabhu

    February 22, 2017

    Try as i might I am not able to go beyond 36% accuracy for the combined data set. Any idea why you may be getting better accuracy than me ? Is this dependent on the system that I am using (I doubt it).

    • Paul van Gent

      February 22, 2017

      I doubt the system has much to do with it either. What OpenCV version are you using? An earlier report of low accuracy used OpenCV3.x I believe (I mention I used 2.4.9).
      Remember I’m “hijacking” a face recognition algorithm for emotion recognition here. It is very possible that optimizations done on OpenCV’s end in newer versions impair this type of detection in favour of more robust face recognition.
      Take a look at the next tutorial using facial landmarks, that is more robust.

      • Prashanth P Prabhu

        February 22, 2017

        Paul thanks for your reply, however I found the root cause, which was to do with different glob.glob implementations between Python 2 and 3. In Py3 you need to explicitly sort the lists returned. I was not doing that initially, which resulted in the training data set getting wrong images… for example sometimes anger would slip into neutral. Fixing this takes the accuracy to about 83% out of the box which is pretty cool 🙂 Awesome work!
        Will definitely try out your landmark based tutorial to compare the approaches. Is it out yet ?

        • Paul van Gent

          February 22, 2017

          Great you found the issue! Thanks for replying so that others may also benefit :).
          The other is out, see the home page, or use this link.
          Good luck!

  • GBoo

    February 22, 2017

    please…help me
    I don’t understand;;;
    I did download ck+ files(4zip…) and made new two folders(“source_emotion” and “source_images”)
    but… i don’t understand next someting….
    how extract file?? images,,, txt,,, ??? i don’t mean…
    i hope to this tutorial video… T T

    • Paul van Gent

      February 22, 2017

      Hi GBoo,
      Just follow the tutorial. It’s all there. Looking at the code may also help. If you can’t figure it out I suggest you try a few simpler Python tutorials first, this one assumes at least intermediate Python skills.
      – Paul

  • KingKong

    February 23, 2017

    I have some question…
    please answer to me
    I just a little English skills…
    1. Extract 3 zip files(emotion_labels, FACS_labes, Landmarks) and put together in Source_emotion folder?
    source_emotion
    └S005
    └001
    └S005_001_00000001_landmarks.txt ( 11files 1~11)
    └S005_001_00000011_emotion.txt
    └S005_001_00000011_facs.txt
    └S010
    └001
    2. Extract extended-cohn-kanade-images.zip files and move to source_images folder right?
    source_images
    └S005
    └.DS_Store
    └001
    └S005_001_00000001.png ( 11files 1~11)
    └S010
    └001
    └S010_001_00000001.png ( 14files 1~14)
    └002
    3.
    emotion = int(file.readline())
    ValueError: invalid literal for int() with base 10: ‘2.1779878e+02 2.1708728e+02\n’
    I want to try this tutorial but have some problem…
    Please help me…

    • KingKong

      February 23, 2017

      3.
      emotion = int(float(file.readline()))
      ValueError: invalid literal for float(): 2.1779878e+02 2.1708728e+02

    • Paul van Gent

      February 23, 2017

      It seems like you’re opening the landmarks file, not the emotion text file. The emotion text files contain single floats like 2.000000

    • Appy

      February 24, 2017

      I am getting the same error. Did you figure out the problem?

      • Paul van Gent

        February 24, 2017

        The mentioned floats are not present in the text files containing the emotion, in these files you should only find integers disguised as floats (e.g. “7.0000000e+00”), not actual floats (e.g. 2.1779878e+02). Please verify which files the code is trying to access when it gives an error.

        • Appy

          February 24, 2017

          It was accessing the landmark file. I made the following change to the code and it worked.
          It was:
          for files in glob.glob("%s\\*" %sessions):
          I changed it to:
          for files in glob.glob("%s\\*emotion.txt" %sessions):

          • Paul van Gent

            February 24, 2017

            I thought something like that was happening. Good you found it. Happy coding!
            -Paul

  • Appy

    February 24, 2017

    Thank you Paul for guiding in the right direction

  • Keshav

    February 25, 2017

    Hey Paul, When I try to execute the first python file, instead of taking only the neutral image, it is taking emotional images as well. Any idea why that is happening?

    • Keshav

      February 25, 2017

      I meant the first code where you split the different emotions. Other emotions are splitted in a correct way but neutral images has mixtures of both neutral and emotional images.
      sourcefile_neutral = glob.glob(“source_images//%s//%s//*” %(part, current_session))[0]
      should return only the first image right?

  • Keshav

    February 25, 2017

    Okay I found the error, we should sort the directory using
    sorted(glob.glob(“source_images//%s//%s//*” %(part, current_session)))[0]
    It works fine then..

    • Paul van Gent

      February 25, 2017

      Strange, I didn’t need to sort it, as it was sorted by glob. What Python version and OS are you using?

      • Hjortur

        March 10, 2017

        I am using Ubuntu 14 and was working out a few of your posts with much lower accuracy. When I looked at the images I found them generously classified. The problem was what Keshav found that it should be sorted.
        Thanks for these great, great articles!

  • Karan

    February 27, 2017

    Hey Paul,
    I am getting this error when i am trying to run your script on ubuntu OS.
    fish_face.train(training_data, np.asarray(training_labels))
    cv2.error: /build/opencv-vU8_lj/opencv-2.4.9.1+dfsg/modules/contrib/src/facerec.cpp:455: error: (-210) In the Fisherfaces method all input samples (training images) must be of equal size! Expected 313600 pixels, but was 307200 pixels. in function train
    Thanks for your help.

    • Paul van Gent

      March 13, 2017

      Hi Karan. The error means the images you supply are not similarly sized. All training images and all prediction images need to be the exact same dimensions for the classifier to work properly. Resize your images with either numpy or opencv.
      Cheers

  • Nafis

    March 1, 2017

    Hi Paul,
    I faced this error:
    training fisher face classifier
    size of training set is: 506 images
    OpenCV Error: Insufficient memory (Failed to allocate 495880004 bytes) in cv::Ou
    tOfMemoryError, file ..\..\..\..\opencv\modules\core\src\alloc.cpp, line 52
    I used your code without any change. Any idea why this might happen? I am using 4Gb of RAM.

    • Paul van Gent

      March 13, 2017

      Hi Nafis. All images are stored in the training_data and prediction_data lists. Are you using 32-bit Python? I believe Windows should allocate virtual memory if OpenCV needs more. In this case I recommend 64-bit Python.
      If you can’t, don’t want to or are already using 64-bit python and still get the error, you could try several things:
      – Reduce the number of images in the dataset
      – Reduce the resolution of the images
      – Change the code so that only the training set is loaded when training, then delete this set and load the prediction set once you’re ready to evaluate the trained model.
      Hope this guides you in a usable direction.

      • Karthikeyan

        September 9, 2017

        In emotion_dtect. py file im getting a “type error: ‘int’ object is not iterable” in line:pred, conf=fishface. predict(image)
        How to resolve this sir?

  • simuxx

    March 3, 2017

    Hi Paul,
    Thank you for your job. your tutorials will help me a lot as I’m working on emotion recognition.
    I’m trying to run the code but i’m having this error
    sourcefile_emotion = glob.glob(“C:/…/source_images/%s/%s/*” %(part, current_session))[-1]
    IndexError: list index out of range
    Can you help me please

    • Paul van Gent

      March 13, 2017

      Hi Simuxx. The error is explicit: it cannot find the index of the list you specify, so that likely means the list returned by glob.glob is empty.

    • Aniket

      June 30, 2017

      Did you resolve this issue @simuxx?

      • Paul van Gent

        July 1, 2017

        Take a look at the list “sourcefile_emotion”, likely it is empty. Are the folders that you feed to glob.glob() correct? Is there something in the folders?

  • Prem Sekar

    March 6, 2017

    hi paul,
    i executed ur code to clean the dataset but it shows error…could you help me with it

  • karthik

    March 6, 2017

    sir the dataset link provided by you contains many folders and images can u please explain me how to create my own data set with a small example .
    suppose I have a list of images and I stored in source_images folder. and what I need to store In source_emotion folder …… jst can I save happy=1 sad=2.. in theform of txt files

    • Paul van Gent

      March 13, 2017

      Hi Karthik. You can do whatever you want actually. The classifier expects two things:
      – A numpy array of imagedata
      – a (similarly shaped!) numpy array or list of numerical labels
      However you synthesize both lists doesn’t matter, as long as the image and the corresponding label are at the same indexes in both arrays or lists!
      Cheers

  • Ash

    March 13, 2017

    Hi Paul!!
    Excellent tutorial. Really easy to understand the flow. I just have this doubt, I saved the trained model and when I opened it, it displayed something like this –
    2
    1
    122500
    d
    1.0575786924939467e+002 1.0452300242130751e+002
    1.0227360774818402e+002 1.0003389830508475e+002
    9.7685230024213084e+001 9.5399515738498792e+001
    ………….
    So do you have any idea what these values are?

    • Paul van Gent

      March 13, 2017

      Hi Ash,
      Thanks! I’m not sure, these could be either decision boundaries or hyperplane coefficients (see how SVM’s work for more info), depending on the approach the face recognizer class in OpenCV takes. I’m not sure anymore what approach it takes though, been a while since I read up on it.
      Cheers

  • Keshav

    March 13, 2017

    Hey Paul, I successfully did everything as per the tutorial and got 95% accuracy . I tried to make a device with intel EDISON board. When I train the system , it says OutofMemory Exception because of Fisherface.train, Any idea how to overcome the memory leak?

    • Paul van Gent

      March 14, 2017

      Hi Keshav. It’s not a memory leak, there just isn’t sufficient memory on the system for this type of task. You might try training a model on a computer and transferring the trained model to the Edison to use for just classification. Be sure to also test how the model performs on data from webcams and other sources, as it’s unlikely you retain the 95% when generalising to other sets (this is where the real challenge still lies!).
      Good luck!

  • Jack

    March 14, 2017

    Hey Paul, amazing tutorial! I must be doing something wrong but in the run_recognizer function I am returned the following error and am not very sure what is going on.. printing the image variable clearly shows that it is storing a full image..
    —> 58 pred, conf = fishface.predict(image)
    59
    60 if pred == prediction_labels[cnt]:
    TypeError: ‘int’ object is not iterable

    • Jack

      March 14, 2017

      fixed ! just gotta remove the confidence interval returned.. I guess it’s all about those python incompatabilities

    • Oussama

      September 28, 2017

      Hello Jack,
      I am facing the same error I removed the conf variable but I still get the same error.
      can you please help?
      thank you.

      • Paul van Gent

        September 28, 2017

        Hi Oussama. Can you share the exact error message and/or the code with me? info@paulvangent.com

  • Rajat

    March 25, 2017

    Followed all the steps. Even tried with the dataset you provided as “googleset”. But I am not getting an accuracy more than 55% even with 5 expressions. Please help!!!!

    • Paul van Gent

      April 8, 2017

      Please check whether the generated image list correctly matches the label list. However, 55% with 5 expressions is way above chance level; you would expect 20% (1/5).
      The method will never reach 100% accuracy, and depending on what sets you use, 55% may be the maximum obtainable.

  • Keshav

    March 27, 2017

    Oh I managed to solve all the problems and I have made a device for the blind people to detect the intruder with wrong intentions using emotion recognition. Thank you so much for the tutorial Paul. You have been a great inspiration. I have done it using Intel Edison board.

    • Paul van Gent

      March 28, 2017

      That sounds like a fun project! Can you share more information on it? You’ve made me curious :).

  • Keshav

    March 30, 2017

    Basically, there’s a button which acts as a trigger. Once if you press it, the Video camera starts recording and constantly monitor the emotions. Sometimes the emotions might be incorrect, So I have set up a count value for emotions. So if any different emotions like anger , for example, is detected, the blind person is alerted via a beep sound or some vibration. Your project acts as a base for mine. In case , such emotions are detected, the blind person will be aware of the situation. Moreover once the emotion is detected to be anger, the snapshot of the person standing right in front of him will be stored inside the board And also If you hold the button for a long time, Your location will sent to the already chosen emergency contacts. 😀
    Any idea on improving the accuracy of the detection?

    Reply
  • John

    April 2, 2017

    I saved my xml model, but it does not seem to detect emotions very well. The precision is poor.
    I am trying to figure out what is wrong (the model or the classification).
    Can you save and send me your model, to see if the problem comes from my training?
    Thanks in advance (my email: ioan_s2000@yahoo.com)

    Reply
    • Paul van Gent

      April 8, 2017

      Hi John. Check whether the labels correspond with the images when training and classifying the model. Also try to expand the training set with more images if performance remains poor.
      Remember that high accuracy might not be possible for a given dataset. Beyond excluding outliers, fine-tuning accuracy on the test images themselves is of little use, as real-world performance will not increase from it.

      Reply
  • bilal rafique

    April 4, 2017

    Hi Paul,
    I am getting this error when I run the Fisher Face classifier training code:
    Traceback (most recent call last):
    File “F:/Emotion Recognition/Ex3.py”, line 64, in
    correct = run_recognizer()
    File “F:/Emotion Recognition/Ex3.py”, line 45, in run_recognizer
    fishface.train(training_data, np.asarray(training_labels))
    error: ..\..\..\..\opencv\modules\core\src\alloc.cpp:52: error: (-4) Failed to allocate 495880004 bytes in function cv::OutOfMemoryError
    Please, I want to use your tutorial in my final year project 🙁 Please help me, I have just 10 days 🙁
    regards,

    Reply
    • bilal rafique

      April 4, 2017

      This error is resolved 🙂 by installing x64 Python and training one folder at a time. But when I trained it again, this error came up:
      Traceback (most recent call last):
      File “F:\Emotion Recognition\Ex3.py”, line 64, in
      correct = run_recognizer()
      File “F:\Emotion Recognition\Ex3.py”, line 41, in run_recognizer
      training_data, training_labels, prediction_data, prediction_labels = make_sets()
      File “F:\Emotion Recognition\Ex3.py”, line 28, in make_sets
      gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) #convert to grayscale
      error: ..\..\..\..\opencv\modules\imgproc\src\color.cpp:3739: error: (-215) scn == 3 || scn == 4 in function cv::cvtColor

      Reply
      • Paul van Gent

        April 8, 2017

        It seems OpenCV doesn’t find colour channels. Are you loading greyscale images?

        Reply
    • Paul van Gent

      April 8, 2017

      It seems you either need more RAM, or need to install 64-bit Python and OpenCV (likely the latter). The error explicitly states it runs out of memory.

      Reply
  • karthik

    April 7, 2017

    Sir, you explained that we can create our own dataset. I made a small change: I replaced all the images in S005 with a new set of Roger Federer images, but it shows the wrong emotion. Is this the correct way to create our own dataset, or is it something different?
    Or, as you mentioned earlier:
    You can do whatever you want actually. The classifier expects two things:
    – A numpy array of imagedata
    – a (similarly shaped!) numpy array or list of numerical labels
    However you synthesize both lists doesn’t matter, as long as the image and the corresponding label are at the same indexes in both arrays or lists!
    But how am I going to give emotion values in source_emotion, sir? I.e. you specify the S005 emotion as 3.000000e+00; how can I give that to my own created one?
    Please tell me the procedure to create my own dataset, sir.
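    (For a custom set you can skip the emotion txt files entirely and encode the label in the folder name, as the tutorial's “sorted_set” does. A minimal sketch; the “my_dataset” folder layout is hypothetical:)
    import glob
    import cv2
    import numpy as np

    emotions = ["neutral", "anger", "happy", "surprise"]  # whatever you collected
    training_data, training_labels = [], []
    for label, emotion in enumerate(emotions):
        for f in sorted(glob.glob("my_dataset/%s/*" % emotion)):
            gray = cv2.cvtColor(cv2.imread(f), cv2.COLOR_BGR2GRAY)
            training_data.append(cv2.resize(gray, (350, 350)))  # one fixed size
            training_labels.append(label)  # label index doubles as the emotion code
    fishface = cv2.createFisherFaceRecognizer()
    fishface.train(training_data, np.asarray(training_labels))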

    Reply
  • John

    April 9, 2017

    I have modified your algorithm a little.
    I crop the face from the image according to this tutorial: http://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_tutorial.html.
    The crop depends on the eye positions and a desired offset (how much to cut from the face).
    In this way I eliminate some less useful parts of the image.
    I also want to capture some frames and do recognition on videos:
    it seems that the capture must first be cleaned of noise and then rotated for the maximum recognition rate.

    Reply
  • Mun

    April 16, 2017

    Hey, I'm trying to run the first Python snippet but got the following error. Can you help me out?
    Traceback (most recent call last):
    File “C:\Users\mrunali\Desktop\Master\img_seq.py”, line 22, in
    copyfile(sourcefile_neutral, dest_neut) #Copy file
    File “C:\Python27\Lib\shutil.py”, line 83, in copyfile
    with open(dst, ‘wb’) as fdst:
    IOError: [Errno 2] No such file or directory: ‘sorted_set\\neutral\\5_001_00000001.png’

    Reply
    • Paul van Gent

      April 21, 2017

      Hi Mun. The error tells you what is wrong. Check where you store your files and what you link to.

      Reply
  • chandu

    April 18, 2017

    Hi Paul, how did you download the CK dataset? I entered my information on http://www.consortium.ri.cmu.edu/ckagree/, and it showed “Please wait for delivering the mail”. I waited but didn't get it. Can you tell me exactly how you got the dataset?

    Reply
    • karthik

      August 14, 2017

      If you have downloaded it, could you please tell me how you did?

      Reply
  • johansonik

    April 21, 2017

    Hello!
    My question is: do I have to train this model with my own dataset, or can I train it on your dataset and then use mine just to recognize emotions?
    Thanks in advance for reply 🙂

    Reply
  • TRomesh

    April 26, 2017

    Hi Paul
    I would like to know the algorithm that you have used in this project. Does it involve machine learning techniques, or neural networks, or just OpenCV's inbuilt classification methods?

    Reply
    • Paul van Gent

      May 4, 2017

      Hi TRomesh. Sorry for the late reply, haven’t had much time for the site. This particular tutorial uses the FisherFace classifier from OpenCV, which can be considered a form of machine learning.

      Reply
  • Joe

    May 2, 2017

    Hi Paul.
    I am Joe.
    I have some question.
    Is this one machine learning?
    As far as I know, FisherFace uses the LDA (Linear Discriminant Analysis) algorithm, right?

    Reply
    • Paul van Gent

      May 4, 2017

      Hi Joe. I believe you are correct about the LDA.
      However, Machine Learning is a broad field, under which algorithms based on methods such as LDA, or even simple linear regression, may fall.

      Reply
  • Dana Moore

    May 3, 2017

    Dear Paul,
    Very nice piece of work. Excellent
    Your script for sorting and organising the images dataset does not work on ubuntu (and other *nix systems such as OSX) as listed.
    * For one thing, the file path separators are proper for windows systems only. One might consider using the python “os.sep” to yield greater flexibility; alternatively, one might also consider using os.path.join to mesh the separate parts
    * For another, glob.glob() does not guarantee an in-order / sorted array; one might consider using os.listdir(), then sorting the result to yield a correct ordering where the array would start with *00000001.png and end with *00000nn.png
    * For another, the array indexing to retrieve current_session is off (at least for *nix systems)
    That said, it’s a terrific piece of work.
    I will attempt to paste in some code that worked for me just below, but no guarantees it formats correctly in a text box. Alternatively, I will be happy to email you a copy.
    ========================= BEGIN =========================================
    ======================== END =========================================
    Thank you again for your excellent tutorial
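    (Dana's pasted code did not survive the comment box; below is a minimal sketch of the fixes described above, with a hypothetical session path:)
    import os
    import glob

    # os.path.join handles separators on any OS; sorted() guarantees the frames
    # come back in order, so [-1] really is the last (most emotional) frame.
    session_path = os.path.join("source_images", "S005", "001")
    images = sorted(glob.glob(os.path.join(session_path, "*.png")))
    sourcefile_neutral = images[0]   # first frame of the sequence is neutral
    sourcefile_emotion = images[-1]  # last frame shows the full emotion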

    Reply
    • Paul van Gent

      May 8, 2017

      Hi Dana,
      Thanks for the comments! The code unfortunately doesn't format well in the text box, but it did in the backend. I planned on updating everything to be unix-compatible but kept putting it off, due to most of my hours going into work; this is a great reminder to do it. Thanks again.

      Reply
    • Nesthy Vicente

      January 30, 2018

      Good day,
      Can you send a copy of your code to me too? I am using Ubuntu, and the images were not sorted properly when I used the code in this tutorial.

      Reply
  • Anni

    May 7, 2017

    Good Day sir!
    I followed the steps in this tutorial but I got this error message in the first code:
    Traceback (most recent call last):
    File “F:\Future Projects\files\emotion1.py”, line 12, in
    file = open(files, ‘r’)
    IOError: [Errno 13] Permission denied: ‘source_emotion\\Emotion\\S005\\001’
    How do I solve this, sir?
    I'm still new to Python and I'm using Windows 10.
    I hope you can help me with this 🙂
    I badly need it, thank you.

    Reply
  • Hetansh

    May 8, 2017

    After running the initial dataset-organising Python script I get no files in the sorted_set folder. Can anyone help me with this?

    Reply
    • Paul van Gent

      May 8, 2017

      Troubleshoot what's going wrong; a sketch of these checks follows below.
      – Are the files properly placed in correctly named folders?
      – Are the variables “participants”, “sessions” and “files” populated?
      – Are the source and destination paths generated correctly?
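      (A sketch of those checks, printing each intermediate result with the Windows-style paths from the tutorial:)
      import glob

      participants = glob.glob("source_emotion\\*")
      print("participants:", participants)  # expect S005, S010, ...
      for participant in participants[:1]:
          sessions = glob.glob("%s\\*" % participant)
          print("sessions:", sessions)
          for session in sessions[:1]:
              print("files:", glob.glob("%s\\*" % session))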

      Reply
  • Jacob Jelen

    May 16, 2017

    Hi Paul, Thanks for this tutorial! Super helpful, but I’m having some issues…
    When I run the training code I get the following error:
    training fisher face classifier
    size of training set is: 494 images
    predicting classification set
    Traceback (most recent call last):
    File “trainModel.py”, line 64, in
    correct = run_recognizer()
    File “trainModel.py”, line 52, in run_recognizer
    pred, conf = fishface.predict(image)
    TypeError: ‘int’ object is not iterable
    I have tried changing the line 51 from
    for image in prediction_data:
    to
    for image in range(len(prediction_data)-1):
    That might have solved one of the issues, however I’m still getting errors. It is complaining about the image size not being 350×350=122500 although all the images in my dataset folder are the correct size. And my user name is not ‘jenkins’ as it says in /Users/jenkins/miniconda… not sure where it comes from or how to replace it with my correct path to fisher_faces.cpp
    size of training set is: 494 images
    predicting classification set
    OpenCV Error: Bad argument (Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with 122500 elements, but got 4.) in predict, file /Users/jenkins/miniconda/1/x64/conda-bld/conda_1486587097465/work/opencv-3.1.0/build/opencv_contrib/modules/face/src/fisher_faces.cpp, line 132
    Traceback (most recent call last):
    File “trainModel.py”, line 64, in
    correct = run_recognizer()
    File “trainModel.py”, line 52, in run_recognizer
    pred, conf = fishface.predict(image)
    cv2.error: /Users/jenkins/miniconda/1/x64/conda-bld/conda_1486587097465/work/opencv-3.1.0/build/opencv_contrib/modules/face/src/fisher_faces.cpp:132: error: (-5) Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with 122500 elements, but got 4. in function predict
    Thanks for your help

    Reply
    • Paul van Gent

      June 12, 2017

      Hi Jacob. It seems the images are not loaded and/or stored correctly in prediction_data, and therefore it cannot iterate over it. You can step over this with your proposed change, but then it fails later because there was no image in the first place. Verify that image data is stored there, and if not, where it goes wrong.
      -Paul

      Reply
      • Dixit Thakur

        June 15, 2017

        Hi Paul,
        I am facing the same issue.
        Traceback (most recent call last):
        File “trainData.py”, line 67, in
        correct = run_recognizer()
        File “trainData.py”, line 54, in run_recognizer
        pred, conf = fishface.predict(image)
        TypeError: ‘int’ object is not iterable
        fishface.predict(image),following is the image data
        image data : [[ 80 82 83 …, 108 108 109]
        [ 82 83 83 …, 110 111 111]
        [ 84 84 83 …, 111 112 112]
        …,
        [ 24 25 24 …, 14 17 19]
        [ 25 26 25 …, 14 17 19]
        [ 24 25 25 …, 13 16 17]]
        what can be the possible reason for the failure?

        Reply
        • Paul van Gent

          June 19, 2017

          Hi Dixit. I cannot reproduce the error, which makes it a bit difficult to debug. Could you send me your code at info@paulvangent.com? I'll see if it works over here, and then we know whether it's a problem with the code or with your setup.
          -Paul

          Reply
  • Taghuo Fongue

    May 21, 2017

    Extract the dataset and put all folders containing the txt files (S005, S010, etc.) in a folder called “source_emotion” //
    Hi Paul,
    Should I extract the Emotions, FACS and Landmarks folders into the same “source_emotion” folder, or does only the Emotions folder have to be extracted into “source_emotion”?
    Which dataset should I extract exactly? Please let me know.

    Reply
    • Taghuo Fongue

      May 21, 2017

      Please, can you send me a screenshot of how you have arranged your folders?

      Reply
    • Paul van Gent

      June 12, 2017

      Hi Taghuo. You extract the emotion textfiles into this folder. So you get:
      source_emotion\\S005\\001\\S005_001_00000011_emotion.txt
      source_emotion\\S010\\…
      etc

      Reply
  • hanaa

    June 13, 2017

    Please, can you help me? I would like to implement emotion recognition using the Raspberry Pi's camera module, specifically recognizing anger only. I have some simple face detection going on using OpenCV and Python 2.7, but am having a hard time making the jump to emotion recognition. Initial searches yield results involving topics such as optical flow, affective computing, etc., which has so far been intimidating and hard to understand. Can you point me to code with the FisherFace classifier?

    Reply
    • Paul van Gent

      June 13, 2017

      For the simplest approach I would recommend looking at the section “Creating the training and classification set”, all the code you need is there. You can also take a look at the Emotion-Aware music player tutorial here, that might clarify some things.

      Reply
  • Vadim Peretokin

    June 25, 2017

    It doesn’t seem that the link to download CK+ works anymore?

    Reply
    • Paul van Gent

      June 30, 2017

      It seems it has been taken offline, yes. I'll update the text.

      Reply
  • Bharath

    July 3, 2017

    Could you please provide the sorted_set folder as well?
    I'm not able to prepare that set from the code you provided.

    Reply
    • Bharath

      July 3, 2017

      Anyone who has prepared the dataset with a separate folder for each emotion, please reply.

      Reply
    • Paul van Gent

      July 4, 2017

      Hi Bharath,
      The sorted set folder simply contains folders which contain images of faces with emotions, like this:
      sorted_set/
      |– anger/
      |– contempt/
      |– disgust/
      |– etc.
      Where exactly are you getting stuck? Maybe I can help.
      -Paul

      Reply
  • Abhi khandelwal

    July 3, 2017

    Hi Paul
    Could you please tell me which file I should run first? And in your code there is no command for opening the webcam [like cv2.VideoCapture(0)], so how will it detect my emotion?

    Reply
    • Paul van Gent

      July 4, 2017

      Hi Abhi,
      If you follow the tutorial everything should go in the right order. This tutorial is about building a model using OpenCV tools. There’s another tutorial using a more advanced method here. You can take a peek there on how to access your webcam, or find one of the million pieces of boilerplate code for this online!
      Good luck.
      -Paul

      Reply
  • srikanth

    July 8, 2017

    Hii Mr Paul,
    What is your advice for a beginner? I mean, to learn all this face recognition stuff using OpenCV;
    I want to learn it completely.

    Reply
    • Paul van Gent

      July 8, 2017

      Hi Srikanth. I would recommend doing a few Python courses on Coursera before delving into OpenCV. After this, the OpenCV docs should provide you with a good basis. Do a few projects without tutorials. You’ll learn it quickly.

      Reply
  • Valens

    July 8, 2017

    Hi Paul
    We are looking into running an emotion study on our media. However, can we run this algorithm without having to snap or store pictures, i.e. on the fly, to understand and predict the viewer's experience, especially while watching a certain movie or program?

    Reply
    • Paul van Gent

      July 10, 2017

      This would be possible, but with Python real-time implementations might be too slow. One option is to snap a picture every second and classify that. Even on a slow computer the algorithm will be more than fast enough for this.
      However, be aware that results are not likely to be accurate unless the archetypical emotions the classifier is trained on are displayed. Also be aware that you cannot use the CK+ dataset for any purpose other than academic, so if you want to do this commercially you need explicit permission from the original author of the dataset.
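      (A rough sketch of the snap-and-classify approach, assuming a trained fishface model and the same haarcascade and 350×350 face size as in the tutorial:)
      import time
      import cv2

      faceDet = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
      video = cv2.VideoCapture(0)
      while True:
          ret, frame = video.read()
          if not ret:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          for (x, y, w, h) in faceDet.detectMultiScale(gray, 1.1, 10):
              face = cv2.resize(gray[y:y+h, x:x+w], (350, 350))
              pred, conf = fishface.predict(face)
              print("predicted label:", pred)
          time.sleep(1)  # one classification per second is plenty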

      Reply
      • Valens

        August 13, 2017

        Hi Paul, TQ for your msg. Yes I think snap a picture and then do classification would be ideal and practical. As to have CK+ dataset for commercial use, surely we will get the permission first. Btw, are you the author yourself?

        Reply
        • Paul van Gent

          August 13, 2017

          I’m not the author of the CK+ set, only of the stuff on this website.
          I would recommend you get permission to use it commercially first. Would be a shame if you put a lot of work in it, and then don’t get permission..

          Reply
  • Michael rusev

    July 9, 2017

    Hi Paul, nice tutorial. I was told to do something like this as a school project: after detecting an emotion the system should be able to play an audio file for the user. If it detects a happy face it should play a certain audio file, and for a sad face another. Sorry, my English isn't that good. I want to ask if this can be implemented, and how I can do something like that.

    Reply
    • Paul van Gent

      July 9, 2017

      Hi Michael. This shouldn't be hard at all. Look at the module “PyAudio”, or the VLC wrappers if you'd rather use that framework.
      You could even use os.startfile(), however this will open the default media player to play the file (which causes it to pop up), so this is not a very nice solution.
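      (For example, a minimal PyAudio sketch that plays a wav file per detected emotion; the file names are hypothetical:)
      import wave
      import pyaudio

      def play_for_emotion(emotion):
          wf = wave.open("%s.wav" % emotion, "rb")  # e.g. happy.wav, sadness.wav
          pa = pyaudio.PyAudio()
          stream = pa.open(format=pa.get_format_from_width(wf.getsampwidth()),
                           channels=wf.getnchannels(),
                           rate=wf.getframerate(),
                           output=True)
          data = wf.readframes(1024)
          while data:  # stream the file chunk by chunk
              stream.write(data)
              data = wf.readframes(1024)
          stream.stop_stream()
          stream.close()
          pa.terminate()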

      Reply
  • Vincent van Hees

    July 10, 2017

    Many thanks for this blog post. It seems the data is back online again, so you can change the text back to how it was 🙂

    Reply
    • Paul van Gent

      July 10, 2017

      Alright, thanks for notifying me!

      Reply
  • Vincent van Hees

    July 10, 2017

    In the section “Extracting faces”, the sentence “The last step is to clean up the “neutral” folder. ”
    Could you please make this sentence more explicit:
    – Is this a description of what the Python code did and is no further action required from the reader?
    – Is this an instruction to the reader to delete that folder manually?
    – Is this an instruction to the reader to delete the files in that folder manually?
    thanks

    Reply
    • Paul van Gent

      July 10, 2017

      Thanks. I have updated the text. You need to do this manually.

      Reply
  • JITESH

    July 12, 2017

    Hi Paul,
    I would like to integrate the same system in C#. Can you please explain how I can integrate the CK+ model in C#? If you have any sample in C# for the same, kindly update me at jitesh.facebook@gmail.com please.
    Thanks,
    Jitesh

    Reply
    • Paul van Gent

      July 14, 2017

      Emgu CV is a .NET wrapper for OpenCV. I would look into that. Porting the code should be easy after that :).
      -Paul

      Reply
  • Zlatan Robbinson

    July 24, 2017

    Hello Paul, brilliant tutorial, I may say. Just one question: is it possible to design a system that can speak to the individual after detecting an emotion? Like, when it detects a happy face it should say something to them like “You are happy, keep it up”, and when it detects a sad face it should say something like “You are sad, cheer up”.
    Just something of that nature; it would be fun to see the system speak to the individual after it detects an emotion. Please, can this be implemented? I am planning on doing this as my final year project.

    Reply
    • Paul van Gent

      July 24, 2017

      Hi Zlatan. This should be quite easy to implement. To make your life easy you need to look at a package that does TTS (text to speech), for example this one.
      Then it is just a matter of:
      – Detecting emotion
      – Determining label of emotion
      – Have the TTS engine say something.
      Good luck 🙂
      – Paul
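      (As an illustration with pyttsx3, one offline TTS package (the package linked above may be a different one); this assumes pred holds the numeric label from fishface.predict():)
      import pyttsx3

      emotions = ["neutral", "anger", "contempt", "disgust",
                  "fear", "happy", "sadness", "surprise"]
      responses = {"happy": "You are happy, keep it up!",
                   "sadness": "You are sad, cheer up!"}

      label = emotions[pred]
      if label in responses:
          engine = pyttsx3.init()
          engine.say(responses[label])
          engine.runAndWait()  # blocks until the sentence has been spoken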

      Reply
      • Zlatan Robbinson

        August 6, 2017

        Please, Paul, how can I contact you personally, just in case I have
        something else to discuss?

        Reply
  • Salma

    August 3, 2017

    Hello, the link to the CK database is broken and I cannot find it on the internet. Is there any working link or other alternative for the database?

    Reply
    • Paul van Gent

      August 7, 2017

      As far as I’m aware this is the only link. Sharing the database without the author’s consent is prohibited, so I’m afraid you need to look for another dataset.

      Reply
  • Zlatan Robbinson

    August 6, 2017

    Thanks Paul you saved my day. Keep up the good work

    Reply
  • PKeenan

    August 13, 2017

    Mistake in line 29 of the second code snippet: facefeatures == face2.

    Reply
    • Paul van Gent

      August 13, 2017

      Thanks for catching that. I've updated it.

      Reply
  • Rodrigo Moraes

    August 21, 2017

    I can't access the dataset used in this article (http://www.consortium.ri.cmu.edu/ckagree/). Do you know why?

    Reply
    • Paul van Gent

      August 21, 2017

      Its availability is intermittent, and access is not always granted. You can look at other available datasets or create your own :).

      Reply
  • Thari

    August 22, 2017

    I'm getting this error when I run the “Creating the training and classification set” code.
    My system is Windows 10, Visual Studio 2017, Python 2.7 (32-bit), 8 GB RAM (there is more than 3 GB of free memory when running the code):
    training fisher face classifier
    size of training set is: 1612 images
    OpenCV Error: Insufficient memory (Failed to allocate 1579760004 bytes) in cv::OutOfMemoryError, file ..\..\..\..\opencv\modules\core\src\alloc.cpp, line 52
    Please help me resolve this.

    Reply
    • Paul van Gent

      August 22, 2017

      Hi Thari. You have the 32-bit Python version, which means it can only address the first ~4GB of your RAM, and most of this is likely taken up by your OS and other applications.
      Consider installing the 64-bit Python.

      Reply
      • Thari

        August 26, 2017

        Thank you for your reply.
        When I run this with Python 2.7 (64-bit) I'm getting this error:
        Traceback (most recent call last):
        File “C:\Users\Thari\documents\visual studio 2017\Projects\PythonApplication1\PythonApplication1\PythonApplication1.py”, line 7, in
        fishface = cv2.createFisherFaceRecognizer() #Initialize fisher face classifier
        AttributeError: ‘module’ object has no attribute ‘createFisherFaceRecognizer’
        Press any key to continue . . .
        Please help me resolve this.

        Reply
  • karthikeyan

    August 24, 2017

    Hello sir, I am doing a final year project on this module, but I am using a webcam to detect emotions in real time. Could you provide me the source code for the complete emotion detection module using a webcam? Please, my mail id is: learnerkarthik@gmail.com

    Reply
    • Paul van Gent

      August 25, 2017

      Hi Karthikeyan. All you need is in the texts and the docs of the opencv module (http://docs.opencv.org/). Good luck!

      Reply
      • Adarsh S N

        March 3, 2018

        Sir, I am a newbie in Python. Can you please elaborate on the procedure?

        Reply
  • Tham

    August 25, 2017

    Hi Paul,
    createFisherFaceRecognizer(num_components, threshold);
    What values of num_components and threshold do you use for your project?

    Reply
  • sadaf

    August 25, 2017

    How can I learn Python OpenCV?

    Reply
    • Paul van Gent

      August 25, 2017

      I would start with the docs. It also helps to think of a simple project you want to program, and build it from the ground up with the help of the docs. This will help you get familiar with the structure of the module.
      If you’re not very comfortable in Python I would suggest you do a few courses on this first. This will really help speed the rest up.

      Reply
      • Sadaf

        August 25, 2017

        Ok thanks:)

        Reply
  • karthikeyan

    August 25, 2017

    I am getting an error like “AttributeError: no module named createFisherFaceRecognizer”, and I just copy-pasted your code!

    Reply
    • Paul van Gent

      August 25, 2017

      Hi Karthikeyan. I’m sure your final year project is not about copy pasting code. Please read the tutorial as well, that’s what it’s for.

      Reply
      • Karthikeyan

        September 9, 2017

        In the emotion_dtect.py file I'm getting a “TypeError: ‘int’ object is not iterable” on the line: pred, conf = fishface.predict(image)
        How do I resolve this, sir?

        Reply
        • Nesthy Vicente

          February 3, 2018

          Hello Karthikeyan,
          Have you resolved this one? I got the same error.

          Reply
          • Nesthy Vicente

            February 3, 2018

            nvm. It’s ok now.

  • salma

    September 7, 2017

    The CK+ has more than 3 folders full of .txt files; which ones should I use in the “source_emotion” folder?
    I've been trying for 10 days and I have no result for the emotion recognition. I would appreciate a little help, thank you.

    Reply
  • Reima

    October 10, 2017

    Hi Paul,
    I just wanted to let you know that I found somebody presenting your tutorial codes as his own handiwork. No citation or links to this page. So that is a clear license violation on his part. I’d say, it is a sign that you’ve made a great tutorial since the copycat pretty much copy-pasted your code snippets and only made minor value adjustments and changed some comment texts:
    https://www.apprendimentoautomatico.it/en/emotions-detection-via-facial-expressions-with-python-opencv/
    Anyway, thanks for a great tutorial,
    -Reima

    Reply
    • Paul van Gent

      October 10, 2017

      Thanks so much Reima, also for notifying me. Making original content is hard and takes time. Unfortunately, due to the nature of the internet there will always be freeloaders that benefit from others' work. I've contacted the author; let's see what happens.

      Reply
  • Jamil

    October 14, 2017

    Hello sir, I'm a beginner programmer and learner, but I have the spirit to do anything if someone gives me proper guidance. Will you accept me as your student?

    Reply
    • Paul van Gent

      October 14, 2017

      Hi Jamil. If you have questions you can always send them to info@paulvangent.com. I cannot guarantee I will always respond quickly, though.
      There are a lot of great Python tutorials and classes online. I can surely recommend “Python for Everybody” on http://www.coursera.com

      Reply
  • Aditya

    October 29, 2017

    Hi, I’m just getting started with this and I have a question — when you say “Extract the dataset and put all folders containing the txt files (S005, S010, etc.) in a folder called “source_emotion” “, which folders containing the txt files do you mean?
    I'm confused whether it's all the contents inside “Emotion_labels/Emotion/” or “FACS_Labels/FACS”.
    Please help me out. Thanks!

    Reply
    • Aditya

      October 29, 2017

      I have currently saved it as “source_emotion/S005/001/S005_001_00000011_emotion.txt”
      “source_emotion/S010/001/” and “source_emotion/S010/001/S010_002_00000014_emotion.txt” and so on. Is that right?

      Reply
      • Paul van Gent

        October 30, 2017

        Hi Aditya, that indeed looks right. If the code fails you can take a look at either the code or the traceback of the error to see where the mismatch happens.
        -Paul

        Reply
        • Aditya

          October 30, 2017

          Thanks! And I really respect your quick reply, Paul! 🙂

          Reply
  • Huzail

    November 1, 2017

    Hello sir, I'm having a memory problem while training:
    http://prntscr.com/h4q20a
    This is a screenshot of the error; how do I deal with it?

    Reply
    • Paul van Gent

      November 1, 2017

      Hi Huzail. You’re running out of memory. Likely you are using 32-bit Python, a 32-bit IDE running the Python environment, or on a 32-bit system. Make sure it’s all 64-bit, try to free up memory, or use less images to train the model.

      Reply
  • Hiren

    November 1, 2017

    Hey Paul, I did not find any .txt files (S005 – S010) when extracting the database; I only found the images in folders (S010 to S130). So what should I store in source_emotion?

    Reply
    • Paul van Gent

      November 1, 2017

      Hi Hiren. In the server where you downloaded the data from, there is a separate zip file containing the emotion labels. You can download this one and extract into source_emotion
      -Paul

      Reply
      • Hiren

        November 13, 2017

        Thanks for the reply, Paul. I have completed the system and it runs successfully, but sometimes the system only stores one image instead of 15 (as per the code). So what should I do?

        Reply
  • Karthik

    November 6, 2017

    Hey Paul, Thanks for this easy to understand tutorials.
    And for anyone on OpenCV 3.3 who got “createFisherFaceRecognizer not found”: install the OpenCV contrib package from https://www.lfd.uci.edu/~gohlke/pythonlibs/#opencv and then use cv2.face.FisherFaceRecognizer_create() instead of cv2.createFisherFaceRecognizer().
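    (In other words, a small compatibility shim covers both generations:)
    import cv2

    try:
        fishface = cv2.face.FisherFaceRecognizer_create()  # OpenCV 3.3+ with contrib
    except AttributeError:
        fishface = cv2.createFisherFaceRecognizer()  # OpenCV 2.4.x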

    Reply
    • Roshan

      January 24, 2018

      Thanks a lot, Karthik. It worked perfectly.

      Reply
  • Dominique

    November 12, 2017

    Hi Paul, first of all thank you for the tutorial, but where am I supposed to get the txt files for the source_emotion folder if the CK+ dataset link is broken?

    Reply
    • Paul van Gent

      November 13, 2017

      Hi Dominique. The link functions intermittently. Either try again later, or try using another dataset..
      -Paul

      Reply
  • ramyavr

    November 30, 2017

    Hi Paul
    I am getting this error
    Traceback (most recent call last):
    File “extract.py”, line 49, in
    filenumber += 1 #Increment image number
    NameError: name ‘filenumber’ is not defined
    Could you please help me solve this?

    Reply
    • Paul van Gent

      December 4, 2017

      Hi Ramyavr. The variable “filenumber” is not defined prior to you using it, as the error states. Check that you initialize the variable correctly, and that the name is spelled correctly there (also see the code section in “Extracting Faces”, line 14).

      Reply
  • Sum

    December 4, 2017

    Hi, I have been trying to run the code but am stuck at the very first step, where I am unable to get the right path for the txt files.
    ** for sessions in glob.glob(“%s//*” % participant): # store list of sessions for current participant
    for files in glob.glob(“%s//*” % sessions): ***
    gives me a permission-denied error even after I have given all the permissions.
    Please help

    Reply
    • Paul van Gent

      December 4, 2017

      Hi Sum. What OS are you using? Try running the application with elevated privileges (“sudo python ” on Linux/MacOs, or run cmd prompt as an administrator on Windows).
      Please check that the paths you reference exist and are spelled correctly in the code, sometimes this can give strange errors.
      – Paul

      Reply
      • Sum

        December 4, 2017

        I am working on Windows and have already tried running as administrator.
        Still stuck with this error:
        file = open(files, ‘r’)
        IOError: [Errno 13] Permission denied: ‘E:data/source_emotions\\Emotion\\S005’
        this is my code section throwing the error:
        ****
        for sessions in glob.glob(“%s/*” % participant): # store list of sessions for current participant
        for files in glob.glob(“%s/*” % sessions):
        current_session = files[20:-30]
        file = open(files, ‘r’)
        ****

        Reply
  • MA

    December 5, 2017

    Hi, we are using Python 3. We ran your code but didn't reach more than 40% accuracy. The classifier seems to work well. How do you get 80 percent?
    MA

    Reply
    • Paul van Gent

      December 5, 2017

      Hi MA. There are two likely causes that I can think of. The first is that glob.glob might sort the detected files differently: please verify that all images in a given emotion folder are actually of that emotion. The second possibility is that you're using a different OpenCV version. We're basically abusing a face recognition algorithm to detect emotions, and that algorithm has been changed in later versions. Please take a look at the other emotion tutorial on here; it's a bit more technical, but also the more ‘proper’ way of going about this task.
      – Paul

      Reply
  • NinjaTuna

    December 8, 2017

    Hello sir, may I ask what algorithm used in the tutorial is called?

    Reply
    • Paul van Gent

      December 8, 2017

      Hi NinjaTuna. Here I (ab)use a facerecognition algorithm called a FisherFace algorithm (see this for more info on FisherFaces). You can find more info in the OpenCV documentation.
      – Paul

      Reply
      • NinjaTuna

        December 11, 2017

        Thank you very much sir, we have a project at our university that must be able to detect emotions on side view faces and we have no idea where to start, so we would like to cite your work, thank you very much 😀

        Reply
  • HD

    December 9, 2017

    Hey Paul, the first time I ran the program I was able to store more than one image for each emotion and got an accurate result, but now it only stores one image for each emotion and I am not getting accurate results. So what should I do? Please reply as soon as possible.

    Reply
    • Paul van Gent

      December 11, 2017

      Hi HD. Could you elaborate a bit further? I’m not sure what the issue is.
      – Paul

      Reply
  • sarra

    December 10, 2017

    Hi sir, I had this error and I could not solve it:
    Traceback (most recent call last):
    File “D:\facemoji-master\prepare_model.py”, line 72, in
    correct = run_recognizer()
    File “D:\facemoji-master\prepare_model.py”, line 60, in run_recognizer
    fishface.train(training_data, np.asarray(training_labels))
    error: C:\projects\opencv-python\opencv_contrib\modules\face\src\fisher_faces.cpp:67: error: (-5) Empty training data was given. You’ll need more than one sample to learn a model. in function cv::face::Fisherfaces::train
    Can you help me?

    Reply
    • Paul van Gent

      December 11, 2017

      Hi Sarra. The error says it all: “error: (-5) Empty training data was given. You’ll need more than one sample to learn a model.“. It seems the data is not loading correctly. Check whether you are referencing the correct paths, whether you have permission to read from the folders, and whether you store the data correctly in the array variable in python.
      – Paul

      Reply
  • Angi

    December 14, 2017

    Hi paul
    I am getting this error
    sourcefile_emotion = glob.glob(“source_images\\%s\\%s\\*” %(part, current_session))[-1] # get path for last image in sequence, which contain the emotion.
    the image is in source_images\S010\001 and my python file is in the same folder as source_images.
    Can you help me?

    Reply
    • Paul van Gent

      December 15, 2017

      Hi Angi. Could you post your error message? You seem to have accidentally pasted a line of code rather than the error message.
      – Paul

      Reply
  • Dhruvi Patel

    December 20, 2017

    I have a problem while copying files:
    Permission denied: ‘images\\S005\\001’
    How can I solve this?

    Reply
    • Paul van Gent

      December 24, 2017

      The error is explicit, make sure you have permission to write in the target folder.
      – Paul

      Reply
  • DP

    December 20, 2017

    Traceback (most recent call last):
    File “C:/Users/Dhruvi/Desktop/Projects/master/img_seq.py”, line 34, in
    imageWithEmotionEtraction()
    File “C:/Users/Dhruvi/Desktop/Projects/master/img_seq.py”, line 29, in imageWithEmotionEtraction
    shutil.copy(sourcefile_neutral, dest_neut) # Copy file
    File “C:\Python27\lib\shutil.py”, line 119, in copy
    copyfile(src, dst)
    File “C:\Python27\lib\shutil.py”, line 82, in copyfile
    with open(src, ‘rb’) as fsrc:
    IOError: [Errno 13] Permission denied: ‘images\\S005\\001’
    Plz help me to solve this issue!!

    Reply
    • Paul van Gent

      December 24, 2017

      The error is explicit, make sure you have permission to read or write in the target folder.
      – Paul

      Reply
  • KARTHIKEYAN

    December 24, 2017

    Do you have the same source code for Windows? As I am a beginner in Python, I find it a bit difficult pivoting the code.

    Reply
    • Paul van Gent

      December 27, 2017

      Python is cross-platform, you should be able to follow the tutorial on windows (it was written on windows as well).

      Reply
  • parth thakar

    December 31, 2017

    Hey Paul, your tutorial is very useful to me, but I'm stuck with several issues.
    1) What should I put in “dataset” in classifier.py?
    2) If I put “sorted_set” in place of “dataset” then it gives me an error:
    fishface.train(training_data, np.asarray(training_labels))
    cv2.error: ..\..\..\..\opencv\modules\contrib\src\facerec.cpp:455: error: (-210) In the Fisherfaces method all input samples (training images) must be of equal size! Expected 313600 pixel
    Please help me with what to do. I will be thankful to you.

    Reply
    • Paul van Gent

      January 4, 2018

      Hi Parth,
      1. I'm not sure what you mean; in the code, ‘dataset’ refers to the location of all image files.
      2. The error is explicit: make sure you resize all images when loading. In the tutorial I slice the faces from the image and save them at a standardised size; see the sketch below.
      Good luck.
      – Paul
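      (For instance, forcing every face to the tutorial's 350×350 size right after loading; ‘f’ is a hypothetical image path:)
      import cv2

      gray = cv2.cvtColor(cv2.imread(f), cv2.COLOR_BGR2GRAY)
      out = cv2.resize(gray, (350, 350))  # every training and prediction image must match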

      Reply
      • parth thakar

        January 7, 2018

        yes paul.
        Glad you answered.
        Would you please give me some advice about how to resize my images to get rid of that error?
        I tried several methods of resizing but am still getting the same error.

        Reply
  • Luigi Berducci

    January 10, 2018

    Very interesting, I’m currently working on emotion detection and I’m testing different classifiers.
    Using your code with the same dataset (all emotions) on my laptop gives me a correctness of 25%. How are such different results possible? Is the reason that the dataset is too small for a complete classification?
    Thanks for your time and your work!

    Reply
    • Paul van Gent

      January 25, 2018

      Hi Luigi. In the past others have reported similar problems. In most cases the issue was because “glob.glob” sorts the detected files differently on Linux than on Windows. On *nix systems you need to make sure to first sort the returned list before you take the last element (the final image containing the emotion).
      Another issue some have had is that newer versions of OpenCV use a different algorithm that works less well in this context. Please take a look at the facial landmarks tutorial. Not only is this a more proper way to detect emotions (albeit a bit more difficult), it will bypass OpenCV entirely.
      – Paul

      Reply
  • Aanya

    January 14, 2018

    Hi, please help me. I am confused at the first step.
    There are three main folders, named Emotion_Labels, FACS_Labels and Landmarks, in the downloaded dataset. Each of those folders consists of sub-folders, and in the sub-folders there are text files.
    Am I supposed to place all three main folders in the source_emotion folder (which has been mentioned by you)?
    I am really confused by the first code snippet; please explain it in more detail. What is the purpose of that step? Please help me with a detailed answer.

    Reply
    • Paul van Gent

      January 25, 2018

      Hi Aanya. You need to put the contents of the Emotion_Labels folder into “source_emotions”. This folder contains all labels corresponding to the image set. Without it, the classifier has no idea which image represents which emotion.
      – Paul

      Reply
  • pooneh

    January 15, 2018

    Hi dear Paul 🙂 I'm a student and I want to know more about the FisherFace, EigenFace and LBPH algorithms, so I'll be thankful if you can offer me a good and simple reference 🙂
    Thanks a lot 🙂

    Reply
  • hazem ben ammar

    January 15, 2018

    Hi Paul, thank you for this amazing tutorial, but when I run the first code under Raspbian with OpenCV 3.3.0 and Python 3 I get this error:
    Traceback (most recent call last):
    File “test.py”, line 15, in
    sourcefile_emotion = glob.glob(“/home/pi/Desktop/code/image_source/%s/%s/*.*” %(part, current_session))[-1] #get path for last image in sequence, which contains the emotion
    IndexError: list index out of range
    I will be so glad if you answer me.

    Reply
    • Paul van Gent

      January 25, 2018

      Hi Hazem. The error happens when trying to get the last element from the list generated by glob.glob. This implies the list is empty. Please make sure the path is correct; that includes the ‘part’ and ‘current_session’ items.
      – Paul

      Reply
  • Ali

    January 21, 2018

    Your logical “important” part was nice :))

    Reply
  • Sardor

    January 23, 2018

    Where is the Dataset.zip file? How can I make it, like an extra dataset file?

    Reply
    • Paul van Gent

      January 25, 2018

      Hi Sardor,
      There’s a link in the text to the small google images dataset. It’s this one
      – Paul

      Reply
  • Nesthy Vicente

    January 25, 2018

    Good day Paul,
    I tried the python snippet for sorting the dataset but my sorted_set folder still contains nothing after running the code. What could be the problem?
    I am using opencv3 and python 2.7.12 in ubuntu 16.04.

    Reply
    • Paul van Gent

      January 25, 2018

      Hi Nesthy. The code is written on windows, where the path is separated using “\\”, on Ubuntu you use “/”. Change this in the code and I think this will solve your problem!
      – Paul

      Reply
      • Nesthy Vicente

        January 26, 2018

        Didn't see that. It works now. Thanks! There's still a problem though: there are misplaced pictures (e.g. a sad picture in the happy folder), and the neutral folder contains pictures with different emotions and almost no neutral ones. Is there something I can do to make the sorting more accurate?

        Reply
      • Parthesh Soni

        August 4, 2018

        I had the same problem. Thanks a lot for helping!!

        Reply
  • Roshan

    January 25, 2018

    Hey Paul,
    Thanks for providing these two great tutorials for us; they are a real help for me. I am a student and I need this for my project. Can you please tell me how to adapt this code to get percentages for the emotions, like happy = 0.02145362, sad = 0.001523652, neutral = 0.9652321, etc.? I need that output from analyzing real webcam frames. Please help.

    Reply
    • Paul van Gent

      January 25, 2018

      Hi Roshan. Take a look at the tutorial that uses Facial Landmarks; the answer is in there. When creating the classification object you need to pass it a “probability=True” flag. After this, if you use its “predict_proba()” function, you'll get back an array of shape (m, c), m being the number of passed images to classify, c being the total number of classes.
      – Paul
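      (A sketch assuming the scikit-learn SVM used in the facial-landmarks tutorial:)
      from sklearn.svm import SVC

      clf = SVC(kernel="linear", probability=True)  # probability flag set at creation
      clf.fit(training_data, training_labels)       # arrays built as in that tutorial
      probs = clf.predict_proba(prediction_data)    # shape (m, c): m images, c classes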

      Reply
      • Roshan

        January 26, 2018

        Thank You Sir, I will try it.

        Reply
  • pooneh

    February 4, 2018

    thanks alot:)

    Reply
  • Karthi

    February 12, 2018

    Has anyone here done real-time emotion detection using a webcam? If so, please mention it.

    Reply
    • Paul van Gent

      February 14, 2018

      Hi Karthi. You only need to adapt the code a little bit so that it grabs a frame from the webcam, classifies it, stores the result, and repeats the process.
      For stability I recommend pooling results over a few seconds and taking the majority prediction, as in the sketch below. Otherwise you will get a lot of (incorrect) result switching through prediction noise.
      – Paul
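      (A sketch of the pooling idea; classify_frame() is a hypothetical helper that grabs one webcam frame and returns its predicted label:)
      import time
      from collections import Counter

      preds = []
      t_end = time.time() + 3  # pool roughly three seconds of frames
      while time.time() < t_end:
          preds.append(classify_frame())
      if preds:
          pooled = Counter(preds).most_common(1)[0][0]  # take the majority label
          print("pooled prediction:", pooled)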

      Reply
      • Karthi

        February 23, 2018

        When I tried to grab the emotions from the webcam it showed me an error like “no module named fisherface”. Does OpenCV on Windows support the FisherFace classifier now? It supported it earlier!

        Reply
  • Kowsi1997

    February 13, 2018

    I'm getting the following error. Please help me to fix this:
    File “data_org.py”, line 16, in
    sourcefile_emotion = glob.glob(“F:\proj\emotion\Emotion-Recognition-master\source_images\\%s\\%s\\” %(part, current_session))[-1] #get path for last image in sequence, which contains the emotion
    IndexError: list index out of range

    Reply
    • Paul van Gent

      February 14, 2018

      Hi Kowsi. This error means glob returns empty lists (there is no last element, which only happens if the list is empty). Make sure the paths are correct, the string substitutions (%s) create correct paths, etc.
      – Paul

      Reply
      • Kowsi1997

        February 15, 2018

        Thank you Paul, I fixed that error. But now I'm getting the following error:
        In File “data_org.py”, line 14, in
        emotion = int(float(file.readline())) #emotions are encoded as a float, readline as float, then convert to integer.
        ValueError: invalid literal for float(): 1.0000000e+00 0.0000000e+00

        Reply
  • getsurreal

    February 14, 2018

    So if you didn’t try to label an exact emotion and went with more of a positive, negative, or neutral reading, the accuracy could be much higher.

    Reply
    • Paul van Gent

      February 19, 2018

      This is definitely a possibility, especially since a mix of emotions is often present in in-situ data.
      However, then we also need to annotate the dataset differently from what it is now, to allow for the model to fit multi-label outputs. That labeling is a lot of work that needs to be done first..
      – Paul

      Reply
  • Sardor

    February 17, 2018

    Hello, my dear Paul. I am working on a facial expression project for my master's degree. Can you explain to me what kind of method I can use if I make the project using your dataset and your code? Please help me with this.

    Reply
  • Sardor

    February 17, 2018

    And I have some questions for you. Please, can you give me your email or contact me? My email is mamarasulovsardor@gmail.com

    Reply
  • Vedant

    February 19, 2018

    Hi Paul ,
    Thanks for your help
    But I am getting an error
    C:\bld\opencv_1506447021968\work\opencv-3.3.0\opencv_contrib-3.3.0\modules\face\src\fisher_faces.cpp:67: error: (-5) Empty training data was given. You’ll need more than one sample to learn a model. in function cv::face::Fisherfaces::train
    What should I do to clear that error

    Reply
    • Paul van Gent

      February 19, 2018

      It seems something is going wrong when generating the dataset; the error mentions it is empty! Make sure you check, for all steps, whether they work as intended.
      In a situation like this I find it helpful to just print (parts of) the output for every step, to verify where the information flow “stops”.
      – Paul

      Reply
  • Nirajan Thapa

    February 20, 2018

    Hi Paul,
    Thanks a lot for putting together such a beautiful piece of work. Moving on, I had a problem with the very first source code. I am using Ubuntu 16.04.3 (64-bit), Anaconda3, PyCharm, Python 2.7.12 and OpenCV 2.4.9. I have already extracted the zip folder of the Extended Cohn-Kanade dataset and arranged the 3 folders in the conda environment as you said. I also changed all the separators “\\” into “/” for Linux compatibility. But as I run it, the following error occurs:
    file = open(files, ‘r’)
    IOError: [Errno 21] Is a directory: ‘source_emotion/Emotion/S005/001’
    So, I changed the code:
    for files in glob.glob(“%s/*” %sessions):
    into
    for files in glob.glob(“%s/* .txt” %sessions):
    Now, the error doesn't occur. But there are no files in the “sorted_set” folder; it just contains the folders like happy, anger, etc. that I created manually earlier, even though there are images in the “source_images” folder and txt files in the “source_emotion” folder. I have already tried the latest versions of OpenCV and Python, but no luck there either. Please help, I can't figure out what is wrong.
    -Nirajan

    Reply
    • Paul van Gent

      February 21, 2018

      Hi Nirajan. I suggest you print out the values of the variables involved. For example, what's the content of ‘files’ after your change? If you read the opened file object, what content does the Python interpreter find (just print(file.read()))? These kinds of steps will help you trace the problem.
      Let me know what your findings are.
      – Paul

      Reply
  • Sardor

    February 25, 2018

    In this project, have you also used TensorFlow?

    Reply
  • Sardor

    February 25, 2018

    In this project there is no deep learning part? I mean TensorFlow or similar? How does it work? What kind of method is it?

    Reply
  • Adarsh S N

    March 2, 2018

    Really great code.
    Is there a way to alter the code so that it can be used in real time through a webcam?

    Reply
    • Paul van Gent

      March 5, 2018

      Adapting the code to do real-time detection on a webcam isn't too difficult. You need to grab a frame from the webcam; then you can run it through the classifier just like a regular image. With Python there may be performance limits for this, so that you can only classify a few times a second. I would aggregate prediction results over several seconds at least, to get rid of some classification noise (see the pooling sketch earlier in this thread).
      – Paul

      Reply
  • Pyae Phyo Paing

    March 6, 2018

    Hello sir, my name is Pyae Phyo Paing. I am from Myanmar. I am working on an emotion detection project. Your code and tutorial are very helpful to me, but I am now facing a problem: I can't download the CK and CK+ dataset. What can I do? Please help me if you can. Thank you.

    Reply
  • Dikshant

    March 12, 2018

    Hey Paul, I'm working on emotion classification using videos (that is, from the CK+ dataset, considering all frames). I'm having trouble loading it; can you please help me in this matter?
    Like, how do I calculate the adjacency matrix for this data with frames?

    Reply
    • Paul van Gent

      March 21, 2018

      Hi Dikshant. This type of classification is beyond my expertise as I’ve not done this before. I’m sure you can find info online on how to compute adjacency matrices in Python. There might even be a package available for this.
      If you have sufficient data you might also want to look at LSTM or GRU deep networks for this type of classification. If you utilise a model with pre-trained weights for, for example, facial recognition, you might be able to get some results with limited data through what is called ‘transfer learning’.
      – Paul

      Reply
  • hala

    March 15, 2018

    Hi Paul
    Can I use this code on Linux?

    Reply
    • Paul van Gent

      March 21, 2018

      Hi Hala. Yes, it’s all written in Python, which is multi-platform. Assuming you install the dependencies mentioned in the tutorial it should run with little-to-no changes.
      – Paul

      Reply
      • hala

        March 30, 2018

        hi Paul
        Thank you for your reply. When I run the first code I get this error:
        Traceback (most recent call last):
        File “organising_dataset.py”, line 16, in
        sourcefile_emotion = glob.glob(“source_images\\%s\\%s\\*” %(part, current_se
        ssion))[-1] #get path for last image in sequence, which contains the emotion
        IndexError: list index out of range

        Reply
        • Chand

          April 9, 2018

          Hi Paul, I am also getting the same error. Please help me.

          Reply
          • Paul van Gent

            April 11, 2018

            This indicates that glob() cannot find any images in the folder path generated. Check that the generated path is correct, that the target folder contains the images, and that glob returns something, before moving on in the code.
            – Paul

  • karthik

    March 22, 2018

    cv2.error: ..\..\..\..\opencv\modules\core\src\alloc.cpp:52: error: (-4) Failed to allocate 413560004 bytes in function cv::OutOfMemoryError
    could you help out with this?

    Reply
    • Paul van Gent

      March 27, 2018

      Hi Karthik. You’re out of memory. Likely you’re using 32-bit python. Consider switching to 64-bit so you can address more ram.
      If you’re already on 64-bit but have limited ram in your machine, consider a larger swap partition.
      – Paul

      Reply
  • Omkar Joshi

    March 27, 2018

    Hi Paul. I am using the CK+ dataset for implementing this tutorial for my college project.
    However, I am not able to understand the integration of the labels.
    files = glob.glob(“/home/pradeep/paul/source_images/*/*/*.png”). This runs perfectly and uses the images for training and prediction. But I am not able to figure out the labels, training data and prediction data.
    And when I try the above command using %emotion at the end, I get errors. Please assist.
    Thanks !

    Reply
    • Paul van Gent

      March 27, 2018

      Hi Omkar. Be sure to follow the whole tutorial, especially the part under “organising the dataset“. It explains how to segment and structure the dataset before training the model. I embed the labels in the folder structure for clarity’s sake and ease of adding data.
      – Paul

      Reply
  • James

    April 17, 2018

    Hi, I’m confused as to what to do after I have trained the classifier. If I want to get a prediction for a specific photo, how would I go about doing that after training the classifier?
    Thanks
    -James

    Reply
    • Paul van Gent

      April 23, 2018

      Hi James. You can use the classifier’s .predict() function and feed it an image matrix. It will return a numerical label based on the training order of categories, which you need to translate to the correct emotion.
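      (A minimal sketch, assuming a trained fishface model, the emotion list in training order, and a hypothetical file name:)
      import cv2

      emotions = ["anger", "contempt", "disgust", "fear",
                  "happy", "neutral", "sadness", "surprise"]  # must match training order
      gray = cv2.cvtColor(cv2.imread("some_photo.png"), cv2.COLOR_BGR2GRAY)
      gray = cv2.resize(gray, (350, 350))  # must match the training image size
      pred, conf = fishface.predict(gray)
      print("predicted emotion:", emotions[pred])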

      Reply
  • Asif Ayub Mayo

    April 18, 2018

    Assalamu Alaikum, dear Paul. I have successfully followed your instructions and got an accuracy of 72.4%. First of all, thank you for sharing this wonderful work 🙂 I want to know if you can tell me more about exporting the trained model for later use. I have used:
    fisherface = cv2.face.FisherFaceRecognizer_create()
    fisherface.train(images,labels)
    fisherface.write(filename)
    for exporting purposes.
    I have exported it in xml format, then I read it in another program
    with:
    model = cv2.face.FisherFaceRecognizer_create()
    model=fisherface.read(filepath)
    but I am unable to read it: if I use
    print(type(model))
    it returns None type.
    I hope you can understand what I am missing there; kindly reply as soon as possible!
    Just to let you know, I am putting together a system that will monitor facial expressions in real time using a local or remote camera, as well as in any type of video stream, for example Skype, Messenger or WhatsApp video calls. I also intend to build it on a dedicated hardware device and create an API for emotion recognition services. I am nearly done with the first iteration of prototyping the product, and I would love to share my work with you.
    Currently I am following this tutorial, and I intend to work on another CNN algorithm, the Meta-Cognition Fuzzy Inference System (McFIS) for facial expression recognition, which has higher accuracy. I would love it if you could read the paper and share your views on it.
    But most importantly, please reply ASAP about the issue; I have to present my progress within a week. Thanks again!

    Reply
    • Paul van Gent

      April 23, 2018

      Hi Asif. Sure I’d like to read the paper and share some views on it, you can email me at info@paulvangent.com.
      Regarding the model type I’m unable to reproduce the error. On my system saving the model weights using either .write() or .save(), and then loading it back up results in class 'cv2.face_FisherFaceRecognizer'. Have you tried the .save() as well, just to exclude that something is going wrong with your distribution there? Using .write() results in a shorter model file than using .save(), although I’m unsure this is at the root of your issue.
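      For reference, a minimal load sketch (assuming OpenCV 3.4+ with the contrib face module; the filename is just an example). One thing to double-check on your end: .read() loads the weights into the existing object and returns None, so reassigning its result to your model variable leaves you with a NoneType:

      model = cv2.face.FisherFaceRecognizer_create()
      model.read("fisherface_model.xml")  # loads in place, returns None
      print(type(model))                  # <class 'cv2.face_FisherFaceRecognizer'>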
      p.s. I hope my reply was on time, I’ve been on holidays.
      – Paul

      Reply
  • Asif Ayub Mayo

    April 18, 2018

    Python version – 3.6.4
    OpenCV CV2 3.4.0

    Reply
  • stemcin

    April 21, 2018

    Hi Paul,
    I’m getting the following error message when trying to extract the faces. I was wondering if you could assist?
    (base) C:\Desktop\python1>python extractfaces.py
    OpenCV(3.4.1) Error: Assertion failed (!empty()) in cv::CascadeClassifier::detectMultiScale, file C:\bld\opencv_1520732670222\work\opencv-3.4.1\modules\objdetect\src\cascadedetect.cpp, line 1698
    Traceback (most recent call last):
    File "extractfaces.py", line 50, in <module>
    detect_faces(emotion) #Call function
    File "extractfaces.py", line 20, in detect_faces
    face = faceDet.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10, minSize=(5, 5), flags=cv2.CASCADE_SCALE_IMAGE)
    cv2.error: OpenCV(3.4.1) C:\bld\opencv_1520732670222\work\opencv-3.4.1\modules\objdetect\src\cascadedetect.cpp:1698: error: (-215) !empty() in function cv::CascadeClassifier::detectMultiScale
    I look forward to hearing from you

    Reply
    • stemcin

      April 21, 2018

      Managed to work out the issue myself – Didn’t have the “haarcascade” files in the folder 🙁 – Sorted now.

      Reply
  • stemcin

    April 21, 2018

    I am getting the following error message with the train script, though. Any assistance would be much appreciated.
    (base) C:\Desktop\python1>python train.py
    File "train.py", line 43
    print "training fisher face classifier"
    ^
    SyntaxError: Missing parentheses in call to 'print'. Did you mean print("training fisher face classifier")?

    Reply
    • Paul van Gent

      April 23, 2018

      You’re using Python 3+. You need to add parentheses to print statements in any Python 3 and above environment, as the error states.
      print("training fisher face classifier")

      Reply
  • Euan Robertson

    April 21, 2018

    Hi Paul,
    I’m following this tutorial in visual studio. No matter where i place the opencv files the “import cv2” line always throws an error.
    Any ideas as to why this might be happening?

    Reply
    • Paul van Gent

      April 23, 2018

      Hi Euan. What version are you using? Are you using anaconda? Try pip install opencv-python or conda install opencv. Let me know if that fixes it for you.

      Reply
  • Luis Ruano

    April 25, 2018

    Hi Paul, nice work with the tutorials. I am working on an Animatronix project, so I am very interested in recognizing emotions to perform a better interaction between human and robot. I have two questions.
    1) When you call the function fishface.predict(), what does conf mean? Is it a weight of how accurate the prediction was, or something else? What does it store?
    2) I am following your next tutorial to do the algorithm in real time, but I have had problems with the section on detecting the emotion in a face, in the part about updating the model. I have already downloaded the Update_Model script. How do I know if it is updating, and how well it is updating?

    Reply
  • Luis Ruano

    April 26, 2018

    Hi Paul, nice work with the tutorials. I am working on an Animatronix Project, so I am very interested in recognizing emotions to generate a better interaction between robot and human.
    I have two questions.
    1) In fishface.predict(), what does the variable "conf" represent? I have printed it and it contains a float number. Is it like a weight of how precise the prediction was?
    2) For the tutorial on recognizing emotions in real time to play music: I've already downloaded the Update_Model py script, but I want to know if it really is updating. How do I know? And how precise was the prediction that was added to the database?
    Thanks for your answer!

    Reply
    • Paul van Gent

      April 26, 2018

      Hi Luis.
      1. conf stores the confidence (the 'distance' from the stored representation of the class).
      2. The code there grabs several images for each emotion you display, adds them to the folder structure, and re-trains the model. This is done to update the performance for the person whose pictures were added during the update routine. As the music player only functions for the user of the computer, this is sufficient. However, in a project such as yours the problem is much more difficult, because you need to generalise to unknown people with different faces and different styles of facial expressions.
      I recommend you take a look at the tutorial that uses facial landmarks and use that as a basis. As with any of these kinds of projects: you will need to get creative with how you collect your training dataset to achieve good real-world performance. Building the code and the model is the easy part.
      Let me know if you need further help.
      – Paul

      Reply
      • Luis Ruano

        May 5, 2018

        OK, I will follow that tutorial, and you are right about how to collect training data. I will reduce the variables to a minimum; I will maintain constant illumination and background to get the dataset.
        1. About your answer on conf:
        If I understood correctly, I could use that distance to generate graphics about the algorithm, right? Is it like a correlation?
        Please, it would be nice if I could communicate with you on another platform. I am working on my graduation project, so it would be nice if you could advise me, or I could help you investigate something. Let me know, please.

        Reply
        • Paul van Gent

          May 10, 2018

          Hi Luis. Yes it is a performance metric so you can generate graphics.
          You can contact me on info@paulvangent.com.
          – Paul

          Reply
  • Varun

    April 30, 2018

    Hello,
    I am having a problem training the dataset for Kinect. It has 2 sets, one RGB and the other Depth, and I am confused as to how to train both together. I have trained both separately, but I have no idea how to train them together. Kindly help me, or if anyone has worked on the Kinect Xbox, kindly help me. I am stuck.

    Reply
    • Paul van Gent

      April 30, 2018

      Hi Varun. I think the easiest method is to fuse them together into a ‘4 channel’ image. I assume the depth information is single-channel and the same resolution as the RGB image. You can append the depth information as a fourth channel to the image array.
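      As a rough sketch of the idea (file names here are hypothetical; this assumes the depth map is single-channel and the same resolution as the RGB frame):

      import cv2
      import numpy as np

      rgb = cv2.imread("frame_rgb.png")                            # shape (h, w, 3)
      depth = cv2.imread("frame_depth.png", cv2.IMREAD_GRAYSCALE)  # shape (h, w)
      rgbd = np.dstack((rgb, depth))                               # shape (h, w, 4)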
      If you need some help, can you send me:
      – 1 example of the RGB image
      – 1 example of the Depth image
      to info@paulvangent.com? I’ll have a look and help you fuse them.
      – Paul

      Reply
  • Shashank Rao

    May 8, 2018

    Hey Paul,
    I made my own dataset, but I couldn't really understand how to organize it. Can you provide more info on that? How did you encode the txt files? All I did was make one text file with the names of the images followed by the emotions. Here's the link to the file: https://www.dropbox.com/sh/8iqf403zvpdnnqm/AADq6DRgETYzvHrNSXi7ingxa?dl=0

    Reply
    • Shashank Rao

      May 9, 2018

      Figured it out!

      Reply
      • Paul van Gent

        May 10, 2018

        Good to hear!
        – Paul

        Reply
    • Paul van Gent

      May 10, 2018

      Hi Shashank,
      It depends a bit on what classifier and framework you use. The best organisation is to generate two arrays: X[] and Y[], with image data in every X[] index, and the corresponding label in the same index in Y[]. Note that the label needs to be numeric. You can translate back to a human-readable label after classification.
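      A minimal sketch of that organisation, assuming a folder layout like sorted_set/<emotion>/*.png as used in this tutorial:

      import glob
      import cv2

      emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"]
      X, Y = [], []
      for label, emotion in enumerate(emotions):
          for path in glob.glob("sorted_set/%s/*" % emotion):
              X.append(cv2.imread(path, cv2.IMREAD_GRAYSCALE))
              Y.append(label)  # numeric label; map back via emotions[label] after prediction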
      – Paul

      Reply
    • neha

      September 9, 2018

      Hi Shashank,
      I also want to make my own dataset, but I am unable to understand the encoded values of the txt files. Can you help me with this?

      Reply
  • salih karanfil

    May 9, 2018

    Hello Paul,
    First of all, thank you for your efforts.
    I have a problem:
    training fisher face classifier
    size of training set is: 0 images
    Traceback (most recent call last):
       File "C:\Users\Salih\Desktop\project\step3.py", line 71, in <module>
         correct = run_recognizer()
       File "C:\Users\Salih\Desktop\project\step3.py", line 51, in run_recognizer
         fishface.train(training_data, np.asarray(training_labels))
    error: C:\projects\opencv-python\opencv_contrib\modules\face\src\fisher_faces.cpp:71: error: (-5) Empty training data was given. You'll need more than one sample to learn a model. in function cv::face::Fisherfaces::train
    Could you give me a solution?
    Thank you 🙂

    Reply
    • Paul van Gent

      May 10, 2018

      Hi Salih,
      There are no images loaded (‘size of training set: 0 images’). Verify the paths generated and the file detection of glob.
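      A quick way to check (the path is an example; adapt it to your layout):

      import glob

      files = glob.glob("sorted_set/anger/*")
      print(len(files))  # should be > 0; if it prints 0, the path or folder contents are wrong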
      – Paul

      Reply
  • Mutayyba Waheed

    May 19, 2018

    Hi Sir!
    Can you please send me the source code of this project to this email address: mutayybawaeed@gmail.com

    Reply
    • Paul van Gent

      May 22, 2018

      All you need is in the tutorial!
      – Paul

      Reply
  • Ranjan

    May 24, 2018

    Hi sir,
    What classification algorithm is used for classifying the emotions?

    Reply
    • Paul van Gent

      May 30, 2018

      Hi Ranjan. In this one we use the Fisher faces (Fisherface) method.

      Reply
  • Mutayyba Waheed

    May 25, 2018

    Traceback (most recent call last):
    File "E:\facemoji-master\facemoji-master\prepare_model.py", line 13, in <module>
    fishface = cv2.createFisherFaceRecognizer()
    AttributeError: 'module' object has no attribute 'createFisherFaceRecognizer'
    Can you please tell me how I can fix this error?

    Reply
    • mauricio

      November 16, 2018

      did you fix it? I have the same problem xd

      Reply
  • Ranjan

    May 25, 2018

    Hey Paul! We organized the folders (dataset) in the same order as you mentioned in the article, but when we tried to execute the first code we didn't get any output, i.e. the images weren't extracted to the respective folders of sorted_set. Then we tried to execute the second code, and the same thing happened with it as well: no error occurred, and yet we get no output. Can you please tell us why we are facing this?

    Reply
    • Paul van Gent

      May 30, 2018

      It is possible the folders are not correct. I would start with checking that the generated file lists (from glob) are populated with the expected files.
      – Paul

      Reply
  • Robin

    July 5, 2018

    Hi Paul,
    In my train.py, in def run_recognizer(), I have a problem with: fishface.train(training_data, np.asarray(training_labels))

    The error is:
    TypeError: src is not a numpy array, neither a scalar

    Can you help me ?
    Best regards

    Reply
    • palkab

      July 6, 2018

      Hi Robin. Likely there's an issue with loading (some of) the data. Take a look at what's in the arrays training_data and training_labels. My bet is that one or both of them are empty.

      – Paul

      Reply
      • Robin

        July 9, 2018

        Thanks for your answer Paul, I will investigate.

        For your information, a French magazine uses your code. They put your website in the sources. That's why I asked you for some help. You can find your code on page 40 of this French magazine: https://boutique.ed-diamond.com/les-guides/1334-gnulinux-magazine-hs-96.html

        I will come back to tell you if I find the solution.

        Reply
        • palkab

          July 11, 2018

          Thanks Robin! Let me know if you need more help. Did you find out whether any images were loaded or not? You can then check what paths 'glob' uses to search, and check if those are OK.

          Thanks for the magazine, I like to hear about those things 🙂

          Reply
          • Robin

            July 11, 2018

            I don't understand, because my glob path is okay, and training_data and training_labels are OK. I'll paste my code here if you have some time to help me :3.

            import cv2
            import glob
            import random
            import numpy as np
            from matplotlib import pyplot as plt

            # List of emotions (the folder names, here in French)
            emotions = ["neutre", "colere", "mepris", "degout", "peur", "joie", "tristesse", "surprise"]

            rep = glob.glob("C:/Users/Robin/Desktop/Divers Projets/Reconnaissance_Faciale/labels/*")

            fishface = cv2.face.FisherFaceRecognizer_create()
            data = {}

            # Split the image base in two: 80% for training and 20% to evaluate training performance!
            def get_fich(emotion):
                fich = glob.glob("C:/Users/Robin/Desktop/Divers Projets/Reconnaissance_Faciale/datasheet/%s/*" % emotion)
                random.shuffle(fich)
                training = fich[:int(len(fich) * 0.8)]  # use the first 80% of the files
                evalperf = fich[-int(len(fich) * 0.2):]  # use the last 20% of the files
                return training, evalperf

            # Organise the files for training
            def make_sets():
                training_data = []
                training_labels = []
                evalperf_data = []
                evalperf_labels = []
                for emotion in emotions:
                    training, evalperf = get_fich(emotion)
                    for i in training:
                        image = cv2.imread(i)
                        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
                        training_data.append(gray)
                        training_labels.append(emotions.index(emotion))
                    for i in evalperf:
                        image = cv2.imread(i)
                        gray2 = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
                        evalperf_data.append(gray2)
                        evalperf_labels.append(emotions.index(emotion))
                return training_data, training_labels, evalperf_data, evalperf_labels

            # Training and evaluation function
            def run_recognizer():
                # Create the groups of images and labels for training and performance evaluation.
                # make_sets() returns all four lists at once, so unpack them in a single call.
                training_data, training_labels, evalperf_data, evalperf_labels = make_sets()

                print("Training our faces")
                fishface.train(training_data, np.asarray(training_labels))

                print("Evaluating performance")
                cpt = 0
                correct = 0
                incorrect = 0

                for im in evalperf_data:
                    evalperf, conf = fishface.predict(im)
                    if evalperf == evalperf_labels[cpt]:
                        correct = correct + 1
                        cpt = cpt + 1
                    else:
                        incorrect = incorrect + 1
                        cpt = cpt + 1

                return (100 * correct) / (correct + incorrect)

            resultat = []
            for i in range(0, 5):
                correct = run_recognizer()
                print("Result of cycle", i, ": ", correct, "%")
                resultat.append(correct)

            print("Total result: ", np.mean(resultat), "%")

  • subhiran

    July 8, 2018

    Very useful one! But sir, actually I wanted to know how I can test my own input image.
    Your help will be appreciated.

    Reply
    • palkab

      July 11, 2018

      The fisherface classifier object has a .predict() function. Just call that and give it an image, it will predict the corresponding label!

      Cheers

      Reply
  • Clément Atlan

    July 25, 2018

    Hi Paul,
    Firstly, a big thanks for the work you did! Thank you for sharing it with us, it's much appreciated 🙂
    I have no specific problem, I was just wondering a few things...
    I have a dataset which is very extensive; in other words, it contains many emotions (most of them really close to each other), something like 16 emotions. Obviously the final result rate I got was not very high, about 20%, which still proves the algorithm works. So I modified my extraction process a little to merge some emotions, so that I had fewer different emotions. Anyway, the very reason to use such an algorithm is to reach a high rate of accuracy (or maybe I am wrong?). What we want is to be more accurate than human judgment could be, and the thing is that this is not the case with this algorithm, or maybe with the whole OpenCV library. So here is my question: do you have any idea how the OpenCV functions work (train(), detectMultiScale()), or whether such a library allows a very accurate detection process?

    Once again, thank you!

    Reply
    • palkab

      August 8, 2018

      Hi Clément,

      The problem here is two-fold:

      – The fisherfaces are not the most optimal way of detecting emotion. However, they are an accessible one. I might do a more elaborate tutorial in the near future involving deep learning approaches I’m working on now.

      – Getting better results than human observers is not always possible. Read ‘the imperfect finish’ on this link for example.

      The bottom line is: is there enough variance in the data to separate all emotion categories into classes? Is there enough data to learn to generalise to unknown data as well? In the end this is what it will come down to I’m afraid.

      – Paul

      Reply
  • John Peter

    July 30, 2018

    Hello from Canada!

    It's impressive, the work you put together. From 2016 to now, what have you learned, and have you improved the precision/accuracy? I'm looking forward to your next post! This subject is great!

    Thanks!

    Reply
    • palkab

      August 8, 2018

      Hi John,

      I’ve been working a lot on deep learning approaches in similar fields, as well as on open sourcing my heart rate analysis toolkit (and porting it to embedded C for Arduino and such). There will be another post soon-ish regarding emotion recognition building blocks and deep learning.

      – Paul

      Reply
  • Parthesh Soni

    August 4, 2018

    I have this error, but I don't know exactly why. I am new to Python and I have followed all of the instructions mentioned in the article, except the one about deleting the duplicate neutral images manually. And thanks a lot for such a wonderful tutorial. The error is as follows:

    OpenCV Error: Bad argument (At least two classes are needed to perform a LDA. Reason: Only one class was given!) in lda, file /build/opencv-zcaJjh/opencv-3.2.0+dfsg/modules/core/src/lda.cpp, line 1018
    Traceback (most recent call last):
    File "trainPredict.py", line 65, in <module>
    correct=run_recognizer()
    File "trainPredict.py", line 46, in run_recognizer
    fishface.train(training_data, np.asarray(training_labels))
    cv2.error: /build/opencv-zcaJjh/opencv-3.2.0+dfsg/modules/core/src/lda.cpp:1018: error: (-5) At least two classes are needed to perform a LDA. Reason: Only one class was given! in function lda

    Reply
    • palkab

      August 8, 2018

      Hi Parthesh,

      It seems that when you're passing your training data and training labels, there is only one category! Make sure all the folders are there, including files, and that the labels (0, 1, 2, 3, etc.) are generated properly (just print() them at several steps and take a look where it goes wrong).

      – Paul

      Reply
  • Gaurab

    August 11, 2018

    Can you please explain how fishface.predict works and how it shows an accuracy of about 69%?

    Reply
    • palkab

      August 18, 2018

      Predict runs the image through the model and generates a prediction. By comparing predictions to the expected values you can calculate accuracy.
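      A minimal sketch of that comparison, assuming test_data and test_labels were built the same way as the training set:

      correct = 0
      for image, label in zip(test_data, test_labels):
          pred, conf = fishface.predict(image)
          if pred == label:
              correct += 1
      print(100 * correct / len(test_labels), "% correct")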

      – Paul

      Reply
  • joceline

    August 19, 2018

    Hi Paul, I tried to run the code in this tutorial, but it has an error:
    Traceback (most recent call last):
    File "C:/Users/USER/PycharmProjects/coba1/master/classifier.py", line 70, in <module>
    correct = run_recognizer()
    File "C:/Users/USER/PycharmProjects/coba1/master/classifier.py", line 50, in run_recognizer
    print("size of training set is:", len(training_labels), 'images', fishface.train(training_data, np.asarray(training_labels)))
    AttributeError: 'builtin_function_or_method' object has no attribute 'train'

    can you tell me how to fix it?

    Reply
    • palkab

      August 19, 2018

      You need to make sure you’re using the right OpenCV version as specified in the tutorial. From 3.0 onwards they changed the API interface. I’m not sure what it has become exactly, but check the docs of whatever version you’re using I’d say.
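      For reference, the constructor names I've come across per version (do verify against the docs of your exact version):

      # OpenCV 2.4.x
      fishface = cv2.createFisherFaceRecognizer()
      # OpenCV 3.3+ / 4.x, with the opencv-contrib-python package installed
      fishface = cv2.face.FisherFaceRecognizer_create()

      Also, an error like the 'builtin_function_or_method' one above usually means the constructor was assigned without being called, i.e. the trailing () was forgotten.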

      – Paul

      Reply
  • Clément Atlan

    August 23, 2018

    Hi Paul,
    Thanks for your reply.
    I understand that the fisherfaces method is not the best way to do what I want. The main goal I have is to detect several scales of pain as accurately as possible. The algorithm is intended for people with mental disabilities who cannot express their feelings.
    I've read "the imperfect finish" as you suggested; quite interesting, especially for this kind of issue. I suppose that I should search for a completely different approach to get more training information about emotion features.

    Moreover, you're probably right: the dataset I have is not big enough, and maybe the way I sort data is not consistent with my purpose.

    Anyway, thanks for what you've done. I read that you were working on a heart rate analysis toolkit. Pretty nice! The startup I am working for is currently developing its own sensor device (based on Arduino as well) which aims to detect emotions via heart rate / skin conductance / temperature analysis.

    I am looking forward to your more elaborate tutorial!
    Cheers.
    Clément

    Reply
    • palkab

      August 25, 2018

      Hi Clément, sounds interesting! Send me a mail at P.vangent@tudelft.nl please. I think I can help you with the pain analysis. I recently developed something in another collaboration that is similar.

      – Paul

      Reply
  • Sukhada

    August 30, 2018

    Nice read; it helped me get insight into the subject. Excited to make my own version!

    Reply
  • neha

    September 10, 2018

    Hi Paul,

    Can you please help with how to make our own dataset? I tried making one; it's working, but with errors.
    I am not able to understand what the text files are and what is written in these text files.

    Reply
    • palkab

      September 12, 2018

      Hi Neha. The text files contain the labels (1, 2, 3, etc.). They are encoded as floats, so if you read them in Python, pass them through float() and then int() to convert them.
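      For example, a minimal sketch (the path is just an illustration of the CK+ layout; any of the label files works the same way):

      with open("source_emotion/S005/001/S005_001_00000011_emotion.txt") as f:
          label = int(float(f.read()))  # the file contains something like '3.0000000e+00'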

      There should be a file in the dataset about which label corresponds to which emotion.

      Reply
  • Sherry

    September 17, 2018

    Do you have the complete downloads for the CK database? I keep on getting errors, since I used the landmarks text file because I wasn't able to download the emotions text file.

    And yeah, I can't download the emotions text file because the CK database website is down :/

    Reply
    • palkab

      September 18, 2018

      They usually come back up within a few days. Keep trying!

      Paul

      Reply
      • Sherry

        September 22, 2018

        Thanks Paul! Everything worked perfectly!

        Reply
        • palkab

          September 28, 2018

          Glad to hear! Happy coding
          – Paul

          Reply
  • Alvaro

    September 24, 2018

    Hi!
    How can I download the HAAR filter from OpenCV? Your link (http://www.paulvangent.com/wp-content/uploads/2016/04/OpenCV_FaceCascade.zip) is down and I can't find them in the OpenCV directory.
    Also, I don't know which folder you mean by "extract to the same folder you have your python files". Is it where Python is installed, or where I'm working?

    Reply
    • palkab

      September 28, 2018

      I’m sorry Alvaro, I recently migrated servers and apparently not everything came back in the right folder from the backup. I’ve re-uploaded the file, links should work now.

      – Paul

      Reply
  • Carmen

    September 25, 2018

    Hi Paul!
    I've run all the code and it works; I'm still cleaning and uploading new images to get better predictions.

    My question is: how do you predict the emotion of a given image? Let's say I have image.jpg and I want to predict its emotion. How would that be done?

    Many thanks,

    Carmen

    Reply
    • palkab

      September 28, 2018

      Hi Carmen,

      After loading (or training) the model, you can call its predict() function. In the tutorial I called the object ‘fishface’. Assuming you’ve kept the name, just load the image into an array and pass it: fishface.predict(image_array)

      Reply
      • Carmen Gonzalez-conde

        October 1, 2018

        I figured it out too! Thank you :))))

        Reply
  • philip

    October 18, 2018

    PermissionError: [Errno 13] Permission denied: ‘source_emotion\\Emotion\\S005\\001’

    Reply
    • palkab

      October 20, 2018

      Your path is likely wrong. If you're on Linux, use the forward-slash path format (/).

      Reply
  • Emanuel

    October 19, 2018

    Hello Paul, very good what you do. My idea is to improve every day in this world of data science, just like you. I just think that the order of the folders should be clearer:

    data/
      sorted_set/
        dataset/
          anger
          contempt
          disgust
          etc.
        difficult/
        haarcascade_frontalface_default
      source_emotion/
        s005
        s010
        etc.
      source_images/
        s005
        s010
        etc.

    I would like to know how to apply the training to a real camera and have it tell me what mood I am in.

    Reply
  • Glenn Thomas Alex

    October 23, 2018

    Hey Paul,

    I am not able to download the dataset that you generated and cleaned.

    Please do share. Thank you

    http://www.paulvangent.com/wp-content/uploads/2016/04/googleset.zip

    Reply
    • palkab

      October 25, 2018

      Hi Glenn. Sorry, I migrated a while ago; it seems I did not catch everything that went wrong yet. I've re-uploaded it.

      Cheers
      – Paul

      Reply
  • philip

    November 4, 2018

    Hi Paul,
    How can I now pass any random image to the model to predict the emotion in it?

    Reply
    • palkab

      November 6, 2018

      Use the model's .predict() function.

      So if you initialised the model as 'emotion_model', you do 'emotion_model.predict(img)', where 'img' is an image array.

      Cheers

      Reply
  • Pradeep

    November 11, 2018

    training fisher face classifier
    size of training set is: 0 images
    Traceback (most recent call last):
    File "emotiontraining.py", line 54, in <module>
    correct = run_recognizer()
    File "emotiontraining.py", line 37, in run_recognizer
    fishface.train(training_data, np.asarray(training_labels))
    cv2.error: OpenCV(3.4.3) C:\projects\opencv-python\opencv_contrib\modules\face\src\fisher_faces.cpp:71: error: (-5:Bad argument) Empty training data was given. You'll need more than one sample to learn a model. in function 'cv::face::Fisherfaces::train'

    Reply
    • palkab

      November 11, 2018

      Your paths are likely incorrect. It says in the top and bottom line: 0 images are loaded.

      Reply
  • Sanghamitra Mohanty

    November 12, 2018

    Getting the error: ImportError: No module named 'cv2'

    Reply
    • palkab

      November 12, 2018

      You need to install OpenCV. Be sure to read the tutorial and not just paste the code, buddy :).

      Reply
