Emotion Recognition With Python, OpenCV and a Face Dataset

Having your computer know how you feel? Madness!

Or actually not madness, but OpenCV and Python. In this tutorial we’ll write a little program to see if we can recognise emotions from images.

How cool would it be to have your computer recognize the emotion on your face? You could make all sorts of things with this, from a dynamic music player that plays music fitting with what you feel, to an emotion-recognizing robot.

For this tutorial I assume that you have:

– Python 2.7 (the code uses Python 2 syntax);
– OpenCV with Python bindings (I used version 2.4.9; note that 3.x versions behave differently here, see the comments);
– NumPy;
– an intermediate knowledge of Python.
Important: The code in this tutorial is licensed under the GNU GPL v3 open source license. You are free to modify and redistribute the code, provided that you grant the people you share it with the same rights and cite my name (use the citation format below). You are not free to redistribute or modify the tutorial itself in any way. By reading on you agree to these terms. If you disagree, please navigate away from this page.

Troubleshooting: I assume intermediate knowledge of Python for these tutorials. If you don’t have this, please try a few more basic tutorials first or follow an entry-level course on Coursera or something similar. This also means you know how to interpret errors. Don’t panic: first read the error, google it if you don’t know the solution, and only then ask for help. I’m getting too many emails and requests over very simple errors. Part of learning to program is learning to debug on your own. If you really can’t figure it out, let me know.

Unix users: the current tutorial is written for use on Windows systems. It will be updated in the near future to be cross-platform.

Citation format
van Gent, P. (2016). Emotion Recognition With Python, OpenCV and a Face Dataset. A tech blog about fun things with Python and embedded electronics. Retrieved from:
http://www.paulvangent.com/2016/04/01/emotion-recognition-with-python-opencv-and-a-face-dataset/


Getting started
To be able to recognize emotions on images we will use OpenCV. OpenCV has a few ‘FaceRecognizer’ classes that we can also use for emotion recognition. They use different techniques, of which we’ll mainly use the Fisher Face one. For those interested in more background, this page has a clear explanation of what a fisher face is.

Request and download the dataset here (get the CK+). I cannot distribute it, so you will have to request it yourself, or of course create and use your own dataset. Note: it seems the dataset has since been taken offline; in that case the only option is to make a set of your own or find another one. When making a set, be sure to include diverse examples and make it BIG: the more data, the more variance there is for the models to extract information from. Please do not ask others to share the dataset in the comments, as this is prohibited in the terms they accepted before downloading the set.

Once you have the dataset, extract it and look at the readme. It is organised into two parts: one containing the images, the other txt files that encode the kind of emotion shown in each image sequence. From the readme of the dataset, the encoding is: {0=neutral, 1=anger, 2=contempt, 3=disgust, 4=fear, 5=happy, 6=sadness, 7=surprise}.
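
To make the encoding concrete: each emotion label file contains a single float (e.g. “3.0000000e+00”), which indexes into the list above. A small illustration, assuming you have already moved the label files into the “source_emotion” folder described in the next section:

emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"]

with open("source_emotion\\S005\\001\\S005_001_00000011_emotion.txt", 'r') as f:
    emotion = int(float(f.readline())) #e.g. "3.0000000e+00" -> 3
print emotions[emotion] #3 -> "disgust"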

Let’s go!


Organising the dataset
First we need to organise the dataset. In your working directory, make two folders called “source_emotion” and “source_images”. Extract the dataset and put all folders containing the txt files (S005, S010, etc.) in “source_emotion”. Put the folders containing the images in “source_images”. Also create a folder named “sorted_set” to house our sorted emotion images, and within it create folders for the emotion labels (“neutral”, “anger”, etc.).
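
If you prefer to script the folder setup rather than clicking it together, a minimal sketch (folder names as used in this tutorial):

import os

emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"]

for folder in ["source_emotion", "source_images", "sorted_set"]:
    if not os.path.exists(folder):
        os.makedirs(folder) #create the top-level working folders

for emotion in emotions:
    path = os.path.join("sorted_set", emotion)
    if not os.path.exists(path):
        os.makedirs(path) #one subfolder per emotion label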

In the readme file, the authors mention that only a subset (327 of the 593) of the emotion sequences actually contains archetypal emotions. Each image sequence shows the forming of an emotional expression, starting with a neutral face and ending with the emotion. So, from each image sequence we want to extract two images: one neutral (the first image) and one with the emotional expression (the last image). To help, let’s write a small Python snippet to do this for us:

import glob
from shutil import copyfile

emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"] #Define emotion order
participants = glob.glob("source_emotion\\*") #Returns a list of all folders with participant numbers

for x in participants:
    part = x.split("\\")[-1] #store current participant number, e.g. "S005"
    for sessions in glob.glob("%s\\*" %x): #Store list of sessions for current participant
        for files in glob.glob("%s\\*emotion.txt" %sessions): #Only read emotion label files, not FACS or landmark files
            current_session = sessions.split("\\")[-1] #session folder name, e.g. "001"
            with open(files, 'r') as sourcefile:
                emotion = int(float(sourcefile.readline())) #emotions are encoded as a float, readline as float, then convert to integer

            #Sort the glob results explicitly; glob does not return a sorted list on all systems
            images = sorted(glob.glob("source_images\\%s\\%s\\*" %(part, current_session)))
            sourcefile_emotion = images[-1] #last image in sequence contains the emotion
            sourcefile_neutral = images[0] #first image in sequence is neutral

            dest_neut = "sorted_set\\neutral\\%s" %sourcefile_neutral.split("\\")[-1] #Generate path to put neutral image
            dest_emot = "sorted_set\\%s\\%s" %(emotions[emotion], sourcefile_emotion.split("\\")[-1]) #Do same for emotion containing image

            copyfile(sourcefile_neutral, dest_neut) #Copy file
            copyfile(sourcefile_emotion, dest_emot) #Copy file


Extracting faces
The classifier will work best if the training and classification images are all the same size and contain (almost) only a face (no clutter). We need to find the face in each image, convert it to grayscale, crop it and save it to the dataset. We can use a Haar cascade classifier from OpenCV to automate face finding. OpenCV provides four pre-trained frontal-face classifiers, so to detect as many faces as possible let’s use all of them in sequence, and abort the search once we have found a face. Get them from the OpenCV directory or from here and extract them to the same folder as your Python files.

Create another folder called “dataset”, and in it create subfolders for each emotion (“neutral”, “anger”, etc.). The dataset we can actually use will live in these folders. Then detect, crop and save faces as follows:

import cv2
import glob

faceDet = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
faceDet_two = cv2.CascadeClassifier("haarcascade_frontalface_alt2.xml")
faceDet_three = cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
faceDet_four = cv2.CascadeClassifier("haarcascade_frontalface_alt_tree.xml")

emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"] #Define emotions

def detect_faces(emotion):
    files = glob.glob("sorted_set\\%s\\*" %emotion) #Get list of all images with emotion

    filenumber = 0
    for f in files:
        frame = cv2.imread(f) #Open image
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) #Convert image to grayscale

        #Detect face using 4 different classifiers
        face = faceDet.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10, minSize=(5, 5), flags=cv2.CASCADE_SCALE_IMAGE)
        face_two = faceDet_two.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10, minSize=(5, 5), flags=cv2.CASCADE_SCALE_IMAGE)
        face_three = faceDet_three.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10, minSize=(5, 5), flags=cv2.CASCADE_SCALE_IMAGE)
        face_four = faceDet_four.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10, minSize=(5, 5), flags=cv2.CASCADE_SCALE_IMAGE)

        #Go over the detectors in order, stop at the first one that found exactly one face, use an empty list if none did
        if len(face) == 1:
            facefeatures = face
        elif len(face_two) == 1:
            facefeatures = face_two
        elif len(face_three) == 1:
            facefeatures = face_three
        elif len(face_four) == 1:
            facefeatures = face_four
        else:
            facefeatures = []

        #Cut and save face
        for (x, y, w, h) in facefeatures: #get coordinates and size of rectangle containing face
            print "face found in file: %s" %f
            gray = gray[y:y+h, x:x+w] #Cut the frame to size

            try:
                out = cv2.resize(gray, (350, 350)) #Resize face so all images have same size
                cv2.imwrite("dataset\\%s\\%s.jpg" %(emotion, filenumber), out) #Write image
            except cv2.error:
                pass #If the crop cannot be resized, skip this file
        filenumber += 1 #Increment image number

for emotion in emotions:
    detect_faces(emotion) #Call function

The last step is to clean up the “neutral” folder. Because most participants have expressed more than one emotion, we have multiple neutral images of the same person. This could bias the classifier (not sure that it will, but let’s be conservative): it may learn to recognize the same person across pictures, or be triggered by characteristics other than the emotion displayed. Do this by hand: go into the folder and delete duplicates of the same face, so that only one image of each person remains.
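
If you would rather script this step, you can do it on the “sorted_set\neutral” folder before running the face cropper, because there the filenames still start with the participant ID (e.g. “S005_...”). A sketch under that assumption:

import glob
import os

seen = set()
for f in sorted(glob.glob("sorted_set\\neutral\\*")):
    participant = os.path.basename(f)[:4] #first four characters hold the participant ID, e.g. "S005"
    if participant in seen:
        os.remove(f) #we already kept a neutral image for this participant
    else:
        seen.add(participant)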


Creating the training and classification set
Now we get to the fun part! The dataset has been organised, but first we need to actually teach the classifier what certain emotions look like. The usual approach is to split the complete dataset into a training set and a classification set. We use the training set to teach the classifier to recognize the to-be-predicted labels, and use the classification set to estimate the classifier’s performance.

Note the reason for splitting the dataset: estimating classifier performance on the same set it has been trained on is unfair, because we are not interested in how well the classifier memorizes the training set. Rather, we are interested in how well it generalizes its recognition capability to never-before-seen data.

In any classification problem, the sizes of both sets depend on what you’re trying to classify, the size of the total dataset, the number of features and the number of classification targets (categories). It’s a good idea to plot a learning curve; we’ll get into this in another tutorial.

For now let’s create the training and classification set: we randomly sample and train on 80% of the data, classify the remaining 20%, and repeat the process 10 times. Afterwards we’ll play around with several settings and see what useful results we can get.

import cv2
import glob
import random
import numpy as np

emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"] #Emotion list
fishface = cv2.createFisherFaceRecognizer() #Initialize fisher face classifier (OpenCV 2.4.x; OpenCV 3 moved this to the cv2.face module)

data = {}

def get_files(emotion): #Define function to get file list, randomly shuffle it and split 80/20
    files = glob.glob("dataset\\%s\\*" %emotion)
    random.shuffle(files)
    training = files[:int(len(files)*0.8)] #get first 80% of file list
    prediction = files[int(len(files)*0.8):] #get remaining 20%, so both sets are disjoint and complete
    return training, prediction

def make_sets():
    training_data = []
    training_labels = []
    prediction_data = []
    prediction_labels = []
    for emotion in emotions:
        training, prediction = get_files(emotion)
        #Append data to training and prediction list, and generate labels 0-7
        for item in training:
            image = cv2.imread(item) #open image
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) #convert to grayscale
            training_data.append(gray) #append image array to training data list
            training_labels.append(emotions.index(emotion))
    
        for item in prediction: #repeat above process for prediction set
            image = cv2.imread(item)
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            prediction_data.append(gray)
            prediction_labels.append(emotions.index(emotion))

    return training_data, training_labels, prediction_data, prediction_labels

def run_recognizer():
    training_data, training_labels, prediction_data, prediction_labels = make_sets()
    
    print "training fisher face classifier"
    print "size of training set is:", len(training_labels), "images"
    fishface.train(training_data, np.asarray(training_labels))

    print "predicting classification set"
    cnt = 0
    correct = 0
    incorrect = 0
    for image in prediction_data:
        pred, conf = fishface.predict(image)
        if pred == prediction_labels[cnt]:
            correct += 1
            cnt += 1
        else:
            incorrect += 1
            cnt += 1
    return (100.0*correct)/(correct + incorrect) #float division so we keep the decimals, also on Python 2

#Now run it
metascore = []
for i in range(0,10):
    correct = run_recognizer()
    print "got", correct, "percent correct!"
    metascore.append(correct)

print "\n\nend score:", np.mean(metascore), "percent correct!"

Let it run for a while. In the end, on my machine this returned 69.3% correct. That may not seem like a lot at first, but remember we have 8 categories. If the classifier learned absolutely nothing and just assigned class labels randomly, we would expect on average (1/8)*100 = 12.5% correct. So it is actually already performing quite well. Now let’s see if we can optimize it.


Optimising the dataset
Let’s look critically at the dataset. The first thing to notice is that we have very few examples for “contempt” (18), “fear” (25) and “sadness” (28). I mentioned it’s not fair to evaluate on the same data the classifier has been trained on, and similarly it’s also not fair to give the classifier only a handful of examples and expect it to generalize well.
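
You can verify these counts with a quick loop over the dataset folders (assuming the folder layout from the previous steps):

import glob

emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"]
for emotion in emotions:
    print emotion, len(glob.glob("dataset\\%s\\*" %emotion)) #number of examples per category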

Change the emotion list so that “contempt”, “fear” and “sadness” are no longer in it, because we really don’t have enough examples for them:

#Change from:
emotions = ["neutral", "anger", "contempt", "disgust", "fear", "happy", "sadness", "surprise"]

#To:
emotions = ["neutral", "anger", "disgust", "happy", "surprise"]

Let it run for a while again. On my computer this results in 82.5% correct. Purely by chance we would expect on average (1/5)*100 = 20%, so the performance is not bad at all. However, something can still be improved.


Providing a more realistic estimate
Performance so far is pretty neat! However, the numbers might not be very reflective of a real-world application. The dataset we use is very standardized: all faces point exactly at the camera and the emotional expressions are pretty exaggerated, even comical in some situations. Let’s see if we can extend the dataset with some more natural images. For this I used Google image search and the Chrome plugin ZIG lite to batch-download the images from the results.

If you want, do this yourself and clean up the images. Make sure for each image that there is no text overlaid on the face, that the emotion is recognizable, and that the face points mostly at the camera. Then adapt the face cropper script a bit and generate standardized face images (a sketch follows below).
Alternatively, save yourself an hour of work and download the set I generated and cleaned.
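
Adapting the face cropper mostly means pointing it at the folder with the downloaded images and continuing the file numbering so the CK+ faces are not overwritten. A sketch, assuming the downloads are sorted per emotion into a folder I’ll call “google_set” (that name is my choice; for brevity this sketch uses only one of the four cascade classifiers):

import cv2
import glob

faceDet = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
emotions = ["neutral", "anger", "disgust", "fear", "happy", "sadness", "surprise"]

def add_downloaded_faces(emotion):
    files = glob.glob("google_set\\%s\\*" %emotion) #downloaded images for this emotion
    filenumber = len(glob.glob("dataset\\%s\\*" %emotion)) #continue numbering after the existing files
    for f in files:
        frame = cv2.imread(f)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        face = faceDet.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=10, minSize=(5, 5), flags=cv2.CASCADE_SCALE_IMAGE)
        for (x, y, w, h) in face[:1]: #use the first detection, skip the image if there is none
            try:
                out = cv2.resize(gray[y:y+h, x:x+w], (350, 350)) #same 350x350 format as before
                cv2.imwrite("dataset\\%s\\%s.jpg" %(emotion, filenumber), out)
            except cv2.error:
                pass
        filenumber += 1

for emotion in emotions:
    add_downloaded_faces(emotion)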

Merge both datasets and run again on all emotion categories except “contempt” (so re-include “fear” and “sadness”); I could not find any convincing source images for contempt.
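
In other words, the emotion list for this run becomes:

emotions = ["neutral", "anger", "disgust", "fear", "happy", "sadness", "surprise"] #everything except "contempt"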

This gave 61.6% correct. Not bad, but not great either. Although this is well above chance level (1/7 ≈ 14.3%), it still means the classifier will be wrong 38.4% of the time. I think the performance is actually quite impressive, considering that emotion recognition is a complex task. However impressive, an algorithm that is wrong more than a third of the time is not very practical.

Speaking of a practical perspective: depending on the goal, an emotion classifier might not actually need this many categories. For example, a dynamic music player that plays songs fitting your mood would already work well if it recognized anger, happiness and sadness. Using only these categories I get 77.2% accuracy. That is a more useful number! It means that almost 4 out of 5 times it will play a song fitting your emotional state. In a next tutorial we will build such a player.
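
For the music player scenario that boils down to a three-item emotion list:

emotions = ["anger", "happy", "sadness"] #the three moods relevant for the music player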

The spread of accuracies between different runs is still quite large, however. This indicates either that the dataset is too small to accurately learn to predict emotions, or that the problem is simply too complex. My money is mostly on the former: using a larger dataset would probably improve detection quite a bit.
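
To quantify that spread, you can print the standard deviation next to the mean at the end of the script (numpy is already imported there):

print "spread between runs:", np.std(metascore), "percentage points" #a large spread suggests the set is too small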


Looking at mistakes
The last thing that might be nice to look at is which mistakes the algorithm makes. Maybe the mistakes are understandable, maybe not. Add an extra line to the last part of the function run_recognizer() to copy images that are wrongly classified, and also create a folder “difficult” in your root working directory to house these images:

def run_recognizer():
    training_data, training_labels, prediction_data, prediction_labels = make_sets()
    
    print "training fisher face classifier"
    print "size of training set is:", len(training_labels), "images"
    fishface.train(training_data, np.asarray(training_labels))

    print "predicting classification set"
    cnt = 0
    correct = 0
    incorrect = 0
    for image in prediction_data:
        pred, conf = fishface.predict(image)
        if pred == prediction_labels[cnt]:
            correct += 1
            cnt += 1
        else:
            cv2.imwrite("difficult\\%s_%s_%s.jpg" %(emotions[prediction_labels[cnt]], emotions[pred], cnt), image) #<-- this one is new
            incorrect += 1
            cnt += 1
    return (100.0*correct)/(correct + incorrect) #float division so we keep the decimals, also on Python 2

I ran it on all emotions except “contempt”, and ran it only once (for i in range(0,1)).

Some mistakes are understandable, for instance:

“Surprise”, classified as “Happy” (surprise_happy_96). Honestly, it’s a bit of both.

“Disgust”, classified as “Sadness” (disgust_sadness_43). He could also be starting to cry.

“Sadness”, classified as “Disgust” (sadness_disgust_95).

But most are less understandable, for example:

“Anger”, classified as “Happy” (anger_happy_30).

“Happy”, classified as “Neutral” (happy_neutral_73).

It’s clear that emotion recognition is a complex task, more so when only using images. Even for us humans this is difficult because the correct recognition of a facial emotion often depends on the context within which the emotion originates and is expressed.

I hope this tutorial gave you some insight into emotion recognition and some ideas to do something with it. Did you make anything cool with it, or do you want to try something cool? Let me know in the comments below!


The dataset used in this article is the CK+ dataset, based on the work of:

– Kanade, T., Cohn, J. F., & Tian, Y. (2000). Comprehensive database for facial expression analysis. Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition (FG’00), Grenoble, France, 46-53.
– Lucey, P., Cohn, J. F., Kanade, T., Saragih, J., Ambadar, Z., & Matthews, I. (2010). The Extended Cohn-Kanade Dataset (CK+): A complete expression dataset for action unit and emotion-specified expression. Proceedings of the Third International Workshop on CVPR for Human Communicative Behavior Analysis (CVPR4HB 2010), San Francisco, USA, 94-101.


229 Comments

  • Ajesh K R

    23rd May 2016

    Hi sir,
    Its really a brilliant work you have done. I am really interested in it, and wants to know more about how the classification of emotions are done. Can you provide some more stuff that help me understand the details??

    • Paul van Gent

      28th May 2016

      Hi Ajesh,
      Thanks for your comment. I’m not sure what exactly you’re interested in. Do you want more background information or are you having problems with something in the code or explanations?
      If you want more background info please see here or here.

      • Em Aasim

        11th December 2016

        hi sir!
        i am having trouble downloading data set. is there any other way to download it?

        • Paul van Gent

          15th December 2016

          Hi Em Aasim. I cannot distribute it, so if the authors choose not to share it with you, then that’s it I’m afraid. You can always make your own set, although this is not a trivial task.

        • Salma

          15th August 2017

          here you can agree to the conditions of use and submit a request in order to get the dataset.

  • Michał Żołnieruk

    31st May 2016

    Hi from Poland!

    Really cool stuff! It helped me and my friend a lot with our project to write an app for swapping face to adequate emoji. We did use your code to harvest faces from CK dataset and mentioned it in the repo we’ve just created. Take a look and tell us if you are fine with this.

    https://github.com/PiotrDabr/facemoji

    Thanks a lot again, your blog is really interesting and I can’t wait for new posts.

    • Paul van Gent

      31st May 2016

      Hi Michał!

      I like your project, really cool stuff! I’ll be sure to give it a try tonight.
      Also thanks for mentioning me, it’s perfect this way.

      Keep up the good work!

    • Sunny Bhadani

      22nd November 2017

      hey bro, i found your GitHub project on emoji really cool… i am myself making a project on facial expression recognition. It would be great if we could collaborate.
      Looking forward to hearing from you.

  • Sam

    16th June 2016

    It was so well explained and so helpful !!! Thank you so much !! We quoted your work in our synopsis

    • Paul van Gent

      16th June 2016

      Thank you Sam! What did you make with it? I’m curious 🙂

  • Jasper

    17th July 2016

    Hi Sir!

    I’m trying to do your Emotion-Aware Music Player but I’m having a problem. Whenever I run the code that crops the image and save it to the “dataset” folder, I get the error “UnboundLocalError: local variable ‘x’ referenced before assignment”. Any help with that? I’m using Spyder with python2.7 bindings and OpenCV 2.4.13.

    • Paul van Gent

      17th July 2016

      Hi Jasper,

      Can you send me an e-mail with your code attached, and the full error message you’re getting? You can send it to “palkab29@gmail.com”. I’ll have a look in the afternoon :).

      • Jasper

        17th July 2016

        I’ve just sent it to you! Thank you!

        • Paul van Gent

          17th July 2016

          Turns out I missed a line when updating the code. Thanks for pointing it out Jasper! It’s updated now.

  • Alex

    4th August 2016

    How would you edit the code that sorts the images in the files to sort the landmarks into different files?

    • Paul van Gent

      4th August 2016

      Hi Alex,

      Most of the parts are there, if you look under “Organising the dataset”, for each iteration the code temporarily stores the participant ID in the variable part, the sessions in sessions and the emotion in the variable emotion. You can use these labels to also access the data from the landmarks database, since these are organized with the same directory structure as the images and emotion labels are.

      On a side-note, I’ll be posting a tutorial on landmark-based emotion detection somewhere this week. Keep an eye on the site if you get stuck. Good luck 🙂

  • Ali

    5th August 2016

    Great Article.
    Thanks!

  • Alexander

    14th August 2016

    The python script that sorts the images into emotion types slices ‘Sx’ (S0, S1, S2… S9 etc) from the subject participant part at the beginning of the filename of each image. I used your algorithm to sort the landmarks into facial expression files the same way and it retained the whole filename. Would you know why this happens? Basically the first two characters of the filename of each image are snipped off.

    • Paul van Gent

      14th August 2016

      Hi Alex,

      This is because of lines 19 and 20, where I slice the filenames from the path using “sourcefile_neutral[25:]” (and the same for sourcefile_emotion). If you want a clean way of dealing with filenames of different lengths, first split the path on the backslash using split(), for example sourcefile_neutral.split(“\\”). This returns a list of elements. Take the last element of the list with [-1] to get the complete filename.
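
      For example (a one-line illustration of the approach described above):

      filename = sourcefile_neutral.split("\\")[-1] #last path element is the filename, regardless of path length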

      Good luck!

      • Alex

        15th August 2016

        Thanks, I dissected your code and figured that if you change “sourcefile_neutral[25:]” to “sourcefile_neutral[23:]” that I keep the whole .png filename. Oddly… I then had to change it to “sourcefile_neutral[26:]” for .txt files even though [25:] worked fine for them previously.

        I have one more issue… the code that detects the faces in each image and normalizes the dimensions of each image doesn’t appear to do anything. I’ve placed it in a directory above all the folders such as sorted_set, source_emotion etc. Is that the correct location for the script? Thanks again!

        • Alex

          15th August 2016

          Turns out the issue is that either the classifiers are failing to detect the face (unlikely) or the script isn’t actually accessing the classifiers stored in opencv/sources/data/haarcascades

          • Alex

            15th August 2016

            Added the full path to the xml files “C:\opencv\sources\data\haarcascades\…xml” and it worked!

          • Paul van Gent

            15th August 2016

            Hi Alex, that would also work. In the script I assume you put the trained cascade classifier files in the same directory as the python files. I have updated the tutorial to make this more clear. Thanks!

  • Carlos

    17th August 2016

    Hello Paul! thanks a lot for this wonderful code, it has helped me a lot. Just one question, I am always getting the first 4 saved images wrong of each set when using the save_face(emotions) but not in the first set. For example I start recording happy face, then angry so in the data set all pictures of happy are fine but the first four pictures in the angry data set are actually happy faces. What can my problem be? It is weird because the name of the picture is angry even if the face is happy. This happens to all subsequent emotions, just the first emotion data set is all good.

  • Ali

    15th September 2016

    Hi Paul
    I have a outOfMemoryError. I use 4 Gigabytes of ram.
    Is there a solution for this problem? or just I should upgrade the ram to 8?
    thanks paul

    • Paul van Gent

      18th September 2016

      Hi Ali,
      I’m not sure, the program shouldn’t be that memory intensive. Are you sure you’re not storing all images somewhere and leaving them in memory? Feel free to mail me your code if you want me to have a look.
      Cheers

  • Sridhhar

    18th September 2016

    hi!! i am getting an error while running the 1st code (organising the dataset)
    ” file = open(files, ‘r’)
    IOError: [Errno 13] Permission denied: ‘source_emotion\\Emotion\\S005\\001′”
    can you help me out with this??
    if possible can u mail me the entire code?
    thank you

    • Paul van Gent

      18th September 2016

      Hi Sridhar,
      Read the error message; “Permission denied”. It seems you don’t have permission to read from these folders. What system are you using?

      • Sridhar

        19th September 2016

        i’m using windows 10.
        I tried changing the directories . Moved the entire folder to C and D drive. Didn’t work !!!

        • Sridhar

          19th September 2016

          I’m new to python. Could you please explain it


      • Piyush Saraswat

        28th May 2017

        Hi paul,
        i’m getting same error, Permission denied: ‘source_emotion\\Emotion_labels\\Emotion\\S005’
        what to do?

        • Paul van Gent

          12th June 2017

          – Check the folders exist and have the exact names as in the code.
          – Does your user account have the correct permissions to these folders?
          – You might need to run the code with elevated privileges.

  • sridhar

    19th September 2016

    hi Paul!!!
    The code is working now.
    There was a mistake in the directory path.
    Thanks for the support

    • Paul van Gent

      19th September 2016

      I suspected something like this. Good to hear you found the issue. Good luck!

    • Prem Sekar

      6th March 2017

      hi sridhar,
      may i know how u corrected ur error…

    • hiren

      1st November 2017

      hey…bro…how do you solve it?

  • Justin Cruz

    30th September 2016

    Good Day Mr. Paul! Can you mail me your entire code? If it’s okay with you? Thank you in advance!

    • Paul van Gent

      1st October 2016

      Hi Justin,

      All the information you need is in the article, I’m sure you can figure it out :)!

      Cheers
      Paul

  • DONG

    17th November 2016

    u r amazing

  • Virat Trivedi

    18th November 2016

    Thank you so much Sir.
    Your guide has been of IMMENSE help in my work, can’t thank you enough.
    I just had one doubt which is that you said that “In a next tutorial we will build such a player.” which is a hyperlink.
    But that hyperlink gives a 404 error. Can you please provide us an updated link to the same?

    • Paul van Gent

      24th November 2016

      Hi Virat,

      I’m glad to hear it helped. I will update the link, but you can also find it through the home page :-). Please don’t forget to cite me! Cheers,
      Paul

  • Jonathan

    24th November 2016

    Hello,

    This is amazing. Can I get this accessed from iPhone project? I want to detect emotion from iOS device camera when user look at it. How to achieve this?

    • Paul van Gent

      24th November 2016

      Hi Jonathan,

      I think you could, please see this link for some tutorials on how to get started with OpenCV and iOS:
      http://docs.opencv.org/2.4/doc/tutorials/ios/table_of_content_ios/table_of_content_ios.html

      I wouldn’t know the specifics because I don’t develop for iOS, but translating the code probably won’t be too difficult. You can also probably re-use trained models as the core is still OpenCV.

      Good luck! Let me know if you manage to get it working.

      • ramyavr

        30th November 2017

        Hi Paul
        I am getting
        Traceback (most recent call last):
        File “extract.py”, line 49, in
        filenumber += 1 #Increment image number
        NameError: name ‘filenumber’ is not defined

  • vikrant

    16th December 2016

    After running the training code i am getting this message— AttributeError: ‘module’ object has no attribute ‘createFisherFaceRecognizer’
    I am using Windows 10. I have installed opencv.
    Plz help me.

    • vikrant

      16th December 2016

      thank you very much Paul… i installed latest version of opencv. now its running.

      • Ashwin

        29th January 2017

        I have the latest version of opencv 3.2 but I’m still getting the error-AttributeError: ‘module’ object has no attribute ‘createFisherFaceRecognizer’
        I am using Windows 10 64 bit
        python version 2.7.13
        Please help…

        • Paul van Gent

          8th February 2017

          Hi Ashwin. Either check the docs to see what changed from 2.4 to 3.2, or use the OpenCV version from the tutorial.

          • leon trimble

            13th February 2017

            hey! you’re the python expert i was hoping you’d tell us! i got facial recognition working from this tutorial https://realpython.com/blog/python/face-detection-in-python-using-a-webcam/
            it took an age to work out how to swap out the webcam for the raspberry pi cam, please help! i need a more fundamental understanding of the codebase to work across versions!!!

          • Paul van Gent

            14th February 2017

            Hi Leon. I’m not sure what you mean. Do you want more information about using webcams in conjunction with Python? Do you want more information on how to use images from different sources with the visual recognition code on this site? Let me know.
            Cheers
            Paul

          • leon trimble

            16th February 2017

            …getting it working on opencv 3.

          • Paul van Gent

            16th February 2017

            The docs provide all the answers.. It seems a new namespace ‘face’ is added in the new opencv versions.

        • Aniket More

          30th June 2017

          use cv2.face.createFisherFaceRecognizer
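
          In OpenCV 3.x the recognizer indeed lives in the cv2.face module (it comes from the opencv_contrib package); which call exists depends on the exact 3.x version, so check the docs for yours:

          fishface = cv2.face.createFisherFaceRecognizer() #OpenCV 3.1-3.2
          fishface = cv2.face.FisherFaceRecognizer_create() #OpenCV 3.3 and later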

    • Aniket More

      30th June 2017

      I am getting only 21.3% accuracy what will be the reason?

      • Paul van Gent

        1st July 2017

        Hi Aniket. The reasons could be numerous. To find out, I would check:

        – Your dataset could be too small for the task you are trying to accomplish. It could be that the images you’re trying to recognize the emotions on are diverse, difficult, or too few. Remember that the algorithm needs to have a large range of examples in order to quantify the underlying variance. The more subtle the emotions, or the more variation within each emotion, the more data is required. It is also possible that this algorithm simply isn’t up for the task given your dataset. Also look at my other tutorial using a support vector machine classifier in conjunction with facial landmarks.

        – Where are most mistakes made? Maybe one or two categories have little data in them and are throwing the rest off.

        – Are there no labeling errors or file retrieval errors? If emotion images receive an incorrect label this will obviously wreck performance.

        Good luck!
        -Paul

        • Aniket More

          5th July 2017

          Thanks for the reply Paul, actually I am using the same data set you suggested (CK+). And I am training on 80% of data and classifying 20% as you said, still I am getting 21-23 % accuracy with all the categories and 36% using the reduced set of emotions. I am not getting why the same code with same data set is giving me different results.
          I am using Ubuntu 14.04, OpenCV 3.0.0. Also as it’s mentioned in some of the comments above glob does not work well in Ubuntu, I verified the data after sorting it is same as you mentioned “contempt” (18), “fear” (25) and “sadness” (28).

          • Paul van Gent

            5th July 2017

            I’ve been getting more reports of 3.0 behaving differently. In essence the tutorial ‘abuses’ a facial recognition algorithm to instead detect variations within the face. It’s likely the approach has been tweaked in 3.0 and doesn’t work so well for this application anymore. Be sure to check the software versions specified in the beginning of each tutorial; sometimes higher versions are not better.

            You can also take a look at the other tutorial on emotion recognition, it’s a bit more advanced but also a more ‘proper way’ of approaching this problem.

            -Paul

  • senorita

    2nd January 2017

    hi,
    when i’m trying to run the first snippet of code for organizing data into neutral and expression, i’m getting this error :

    Traceback (most recent call last):
    File “C:\Users\310256803\workspace\FirstProject\pythonProgram\tryingcv.py”, line 12, in
    file = open(files, ‘r’)
    IOError: [Errno 13] Permission denied: ‘c:\\source_emotion\\Emotion\\S005\\001’

    can anyone please help me

    • Paul van Gent

      12th January 2017

      It seems your script doesn’t have permission to access these files. Is the folder correct? Also be sure to run the cmd prompt in administrator mode. If that doesn’t work, try moving the folder to your documents or desktop folder, that is often a quick fix for permission errors.

  • Andrej

    16th January 2017

    Hello, thank you very much for your great tutorial. I was wondering if there is anyway I can save this trained model for later use.

  • Nkululeko

    17th January 2017

    Hi Paul, thank you for this tutorial. It really helped me with my honours project. I would also like to learn how a Neural Network would do in classifying the emotions, maybe a SVM as well. Thanks.

    • Paul van Gent

      18th January 2017

      Hi Nkululeko,
      Glad to hear it was of some help! If you want to learn about how other classifiers work with emotion recognition, you have to make a few intermediary steps of extracting features from images. Take a look at this tutorial. It also discusses the performance of SVM and Random Forest classifiers, and gives some pointers.

      In the near future I plan on writing a similar one for convolutional neural nets (deep learning networks).

      • bahar

        29th August 2017

        hi , do you write this program with deep learning networks? if yes, please give us the link 🙂

        • Paul van Gent

          11th September 2017

          Hi Bahar. This is planned, but not there yet.

  • Vish

    8th February 2017

    Hi Paul, Thank you for such a detailed guide!
    I needed your assistance for my project which would to scan faces and detect emotions ( predicting mental disorders is an enhancement I intend to incorporate) . I’m completely new to this technique and find myself in a fix from where to begin 🙁
    Could you please guide me on the choice of softwares to be used, whether I should opt for MATLAB or OpenCV, or something else? This first step needs to be completed for me to proceed with the development of the application. I would really appreciate your assistance on this.

    • Paul van Gent

      8th February 2017

      Hi Vish. For the software I would say whichever you feel most comfortable with. You are undertaking a complex project so the most important thing is that you are very familiar with your tools, otherwise you might end up demotivated quickly.

      Regarding classifying mental disorders; I don’t think that is possible from just images. Think about how you could automatically extract features to use in classification from other sources than pictures. However, don’t let me discourage you. If you want, keep me updated on your progress (info@paulvangent.com), I’d like that.

      • Vish

        9th February 2017

        Thank you Paul for your quick response 🙂 Could you tell me whether the selection of software varies with the scope of the application?
        For instance, my requirement is to scan a photo clicked from the front camera of an android device. This photo is then processed at a remote server which returns the mood of the person.
        Is this scenario limited to a certain software package or do I have choices? I’m sorry if my questions sound silly, just confused from where to begin. Your guidance will really prove beneficial for me to begin.

        • Paul van Gent

          9th February 2017

          No problem. If you have a server-side application running you need to think about two main things:
          – How much traffic are you expecting and how does your solution scale?
          – What is available on the server OS?

          I’m expecting that sending images to the server for analysis and receiving the results back quickly gets impractical as the number of users grows (you don’t want to wait more than a few seconds for the result..), and puts a lot of strain on server resources.

          However, if you’re developing an Android app, note that OpenCV is available on the platform as well. You can also train several classifiers from the SKLearn framework and use the trained models in an Android app. See the following link for pointers:
          http://stackoverflow.com/questions/33535103/using-trained-scikit-learn-svm-classifiers-in-android
          Only simple math is required.

          • Vish

            25th April 2017

            Hello Paul,
            I have reduced the scope of my application to detect only sad and happy emotions since I have struggled with using MATLAB as I have no prior knowledge about it. Could you please let me know how do I implement your tutorial on Mac?
            I have created the necessary folder structure but need to know how do I execute the files.

          • Paul van Gent

            25th April 2017

            Hi Vish. It should be similar to Windows, except you use Terminal instead of Command Prompt. To install the necessary packages, see the manuals for each package.

  • Prashanth P Prabhu

    22nd February 2017

    Try as i might I am not able to go beyond 36% accuracy for the combined data set. Any idea why you may be getting better accuracy than me ? Is this dependent on the system that I am using (I doubt it).

    • Paul van Gent

      22nd February 2017

      I doubt the system has much to do with it either. What OpenCV version are you using? An earlier report of low accuracy used OpenCV3.x I believe (I mention I used 2.4.9).

      Remember I’m “hijacking” a face recognition algorithm for emotion recognition here. It is very possible that optimizations done on OpenCV’s end in newer versions impair this type of detection in favour of more robust face recognition.

      Take a look at the next tutorial using facial landmarks, that is more robust.

      • Prashanth P Prabhu

        22nd February 2017

        Paul thanks for your reply however I found the root cause which was to do with different glob.glob implementations between Python 2 and 3. In Py3 you need to explicitly sort the lists returned. I was not doing that initially which resulted in the training data set getting wrong images…for example sometimes anger would slip into neutral. Fixing this takes the accuracy to about 83% out of box which is pretty cool 🙂 Awesome work!

        Will definitely try out your landmark based tutorial to compare the approaches. Is it out yet ?

        • Paul van Gent

          22nd February 2017

          Great you found the issue! Thanks for replying so that others may also benefit :).

          The other is out, see the home page, or use this link.

          Good luck!

  • GBoo

    22nd February 2017

    please… help me
    I don’t understand;;;
    I downloaded the ck+ files (4 zips…) and made two new folders (“source_emotion” and “source_images”)
    but… I don’t understand the next step….
    how do I extract the files?? the images,,, the txt files,,, ??? I don’t get what you mean…
    I hope for a tutorial video… T T

    • Paul van Gent

      22nd February 2017

      Hi GBoo,
      Just follow the tutorial. It’s all there. Looking at the code may also help. If you can’t figure it out I suggest you try a few simpler Python tutorials first, this one assumes at least intermediate Python skills.
      – Paul

  • KingKong

    23rd February 2017

    I have some question…
    please answer to me
    I just a little English skills…
    1. Extract 3 zip files(emotion_labels, FACS_labes, Landmarks) and put together in Source_emotion folder?

    source_emotion
    └ S005
        └ 001
            └ S005_001_00000001_landmarks.txt (11 files, 1~11)
            └ S005_001_00000011_emotion.txt
            └ S005_001_00000011_facs.txt
    └ S010
        └ 001

    2. Extract extended-cohn-kanade-images.zip files and move to source_images folder right?

    source_images
    └ S005
        └ .DS_Store
        └ 001
            └ S005_001_00000001.png (11 files, 1~11)
    └ S010
        └ 001
            └ S010_001_00000001.png (14 files, 1~14)
        └ 002
    3.
    emotion = int(file.readline())
    ValueError: invalid literal for int() with base 10: ‘2.1779878e+02 2.1708728e+02\n’

    I want to try this tutorial but have some problem…
    Please help me…

    • KingKong

      23rd February 2017

      3.
      emotion = int(float(file.readline()))
      ValueError: invalid literal for float(): 2.1779878e+02 2.1708728e+02

    • Paul van Gent

      23rd February 2017

      It seems like you’re opening the landmarks file, not the emotion text file. The emotion text files contain single floats like 2.000000

    • Appy

      24th February 2017

      I am getting the same error. Did you figure out the problem?

      • Paul van Gent

        24th February 2017

        The mentioned floats are not present in the text files containing the emotion, in these files you should only find integers disguised as floats (e.g. “7.0000000e+00”), not actual floats (e.g. 2.1779878e+02). Please verify which files the code is trying to access when it gives an error.

        • Appy

          24th February 2017

          It was accessing the landmark file. I made the following change to the code and it worked.
          It was:
          for files in glob.glob(“%s\\*” %sessions):
          I changed it to:
          for files in glob.glob(“%s\\*emotion.txt” %sessions):

          • Paul van Gent

            24th February 2017

            I thought something like that was happening. Good you found it. Happy coding!
            -Paul

  • Appy

    24th February 2017

    Thank you Paul for guiding in the right direction

  • Keshav

    25th February 2017

    Hey Paul, When I try to execute the first python file, instead of taking only the neutral image, it is taking emotional images as well. Any idea why that is happening?

    • Keshav

      25th February 2017

      I meant the first code where you split the different emotions. The other emotions are split correctly, but the neutral folder has a mixture of both neutral and emotional images.
      sourcefile_neutral = glob.glob(“source_images//%s//%s//*” %(part, current_session))[0]
      should return only the first image right?

  • Keshav

    25th February 2017

    Okay I found the error, we should sort the directory using
    sorted(glob.glob(“source_images//%s//%s//*” %(part, current_session)))[0]
    It works fine then..

    • Paul van Gent

      25th February 2017

      Strange, I didn’t need to sort it, as it was sorted by glob. What Python version and OS are you using?

      • Hjortur

        10th March 2017

        I am using Ubuntu 14 and was working out a few of your posts with much lower accuracy. When I looked at the images I found them generously classified. The problem was what Keshav found that it should be sorted.

        Thanks for these great, great articles!

  • Karan

    27th February 2017

    Hey Paul,
    I am getting this error when i am trying to run your script on ubuntu OS.

    fish_face.train(training_data, np.asarray(training_labels))
    cv2.error: /build/opencv-vU8_lj/opencv-2.4.9.1+dfsg/modules/contrib/src/facerec.cpp:455: error: (-210) In the Fisherfaces method all input samples (training images) must be of equal size! Expected 313600 pixels, but was 307200 pixels. in function train

    Thanks for your help.

    • Paul van Gent

      13th March 2017

      Hi Karan. The error means the images you supply are not similarly sized. All training images and all prediction images need to be the exact same dimensions for the classifier to work properly. Resize your images with either numpy or opencv.

      Cheers

  • Nafis

    1st March 2017

    Hi Paul,
    I faced this error:
    training fisher face classifier
    size of training set is: 506 images
    OpenCV Error: Insufficient memory (Failed to allocate 495880004 bytes) in cv::Ou
    tOfMemoryError, file ..\..\..\..\opencv\modules\core\src\alloc.cpp, line 52

    I used your code without any change. Any idea why this might happen? I am using 4Gb of RAM.

    • Paul van Gent

      13th March 2017

        Hi Nafis. All images are stored in the training_data and prediction_data lists. Are you using 32-bit Python? I believe Windows should allocate virtual memory if OpenCV needs more. In this case I recommend 64-bit Python.

      If you can’t, don’t want to or are already using 64-bit python and still get the error, you could try several things:

      – Reduce the number of images in the dataset
      – Reduce the resolution of the images
      – Change the code so that only the training set is loaded when training, then delete this set and load the prediction set once you’re ready to evaluate the trained model.

      Hope this guides you in a usable direction.

      • Karthikeyan

        9th September 2017

        In emotion_dtect. py file im getting a “type error: ‘int’ object is not iterable” in line:pred, conf=fishface. predict(image)
        How to resolve this sir?

  • simuxx

    3rd March 2017

    Hi Paul,
    Thank you for your job. your tutorials will help me a lot as I’m working on emotion recognition.
    I’m trying to run the code but i’m having this error
    sourcefile_emotion = glob.glob(“C:/…/source_images/%s/%s/*” %(part, current_session))[-1]

    IndexError: list index out of range
    Can you help me please

    • Paul van Gent

      13th March 2017

      Hi Simuxx. The error is explicit: it cannot find the index of the list you specify, so that likely means the list returned by glob.glob is empty.

    • Aniket

      30th June 2017

      Did you resolve this issue @simuxx?

      • Paul van Gent

        1st July 2017

        Take a look at the list “sourcefile_emotion”, likely it is empty. Are the folders that you feed to glob.glob() correct? Is there something in the folders?

  • Prem Sekar

    6th March 2017

    hi paul,
    i executed ur code to clean the dataset but it shows error…could you help me with it

  • karthik

    6th March 2017

    sir the dataset link provided by you contains many folders and images can u please explain me how to create my own data set with a small example .
    suppose I have a list of images and I stored in source_images folder. and what I need to store In source_emotion folder …… jst can I save happy=1 sad=2.. in theform of txt files

    • Paul van Gent

      13th March 2017

      Hi Karthik. You can do whatever you want actually. The classifier expects two things:

      – A numpy array of imagedata
      – a (similarly shaped!) numpy array or list of numerical labels

      How you synthesize both lists doesn’t matter, as long as the image and the corresponding label are at the same indexes in both arrays or lists!

      Cheers

  • Ash

    13th March 2017

    Hi Paul!!
    Excellent tutorial. Really easy to understand the flow. I just have this doubt, I saved the trained model and when I opened it, it displayed something like this –

    2

    1
    122500
    d

    1.0575786924939467e+002 1.0452300242130751e+002
    1.0227360774818402e+002 1.0003389830508475e+002
    9.7685230024213084e+001 9.5399515738498792e+001
    ………….
    So do you have any idea what these values are?

    • Paul van Gent

      13th March 2017

      Hi Ash,

      Thanks! I’m not sure, these could be either decision boundaries or hyperplane coefficients (see how SVM’s work for more info), depending on the approach the face recognizer class in OpenCV takes. I’m not sure anymore what approach it takes though, been a while since I read up on it.

      Cheers

  • Keshav

    13th March 2017

    Hey Paul, I successfully did everything as per the tutorial and got 95% accuracy . I tried to make a device with intel EDISON board. When I train the system , it says OutofMemory Exception because of Fisherface.train, Any idea how to overcome the memory leak?

    • Paul van Gent

      14th March 2017

      Hi Keshav. It’s not a memory leak, there just isn’t sufficient memory on the system for this type of task. You might try training a model on a computer and transferring the trained model to the Edison to use for just classification. Be sure to also test how the model performs on data from webcams and other sources, as it’s unlikely you retain the 95% when generalising to other sets (this is where the real challenge still lies!).

      Good luck!

  • Jack

    14th March 2017

    Hey Paul, amazing tutorial! I must be doing something wrong but in the run_recognizer function I am returned the following error and am not very sure what is going on.. printing the image variable clearly shows that it is storing a full image..

    —> 58 pred, conf = fishface.predict(image)
    59
    60 if pred == prediction_labels[cnt]:

    TypeError: ‘int’ object is not iterable

    • Jack

      14th March 2017

      fixed! just gotta remove the confidence value returned.. I guess it’s all about those python incompatibilities

    • Oussama

      28th September 2017

      Hello Jack,
      I am facing the same error I removed the conf variable but I still get the same error.
      can you please help?
      thank you.

      • Paul van Gent

        28th September 2017

        Hi Oussama. Can you share the exact error message and/or the code with me? info@paulvangent.com

  • Rajat

    25th March 2017

    Followed all the steps. Even tried with the dataset you provided as “googleset”. But I am not getting an accuracy more than 55% even with 5 expressions. Please help!!!!

    • Paul van Gent

      8th April 2017

      Please check whether the generated image list correctly matches the label list. However, 55% with 5 expressions is way above chance level, you would expect 20% (1/5).

      The method will never reach 100% accuracy, and depending on what sets you use, 55% may be the maximum obtainable.

  • Keshav

    27th March 2017

    Oh I managed to solve all the problems and I have made a device for the blind people to detect the intruder with wrong intentions using emotion recognition. Thank you so much for the tutorial Paul. You have been a great inspiration. I have done it using Intel Edison board.

    • Paul van Gent

      28th March 2017

      That sounds like a fun project! Can you share more information on it? You’ve made me curious :).

  • Keshav

    30th March 2017

    Basically, there’s a button which acts as a trigger. Once if you press it, the Video camera starts recording and constantly monitor the emotions. Sometimes the emotions might be incorrect, So I have set up a count value for emotions. So if any different emotions like anger , for example, is detected, the blind person is alerted via a beep sound or some vibration. Your project acts as a base for mine. In case , such emotions are detected, the blind person will be aware of the situation. Moreover once the emotion is detected to be anger, the snapshot of the person standing right in front of him will be stored inside the board And also If you hold the button for a long time, Your location will sent to the already chosen emergency contacts. 😀

    Any idea on improving the accuracy of the detection?

  • John

    2nd April 2017

    I saved my xml model and it seems that it does not detect emotions very well. The precision is poor.
    I am trying to figure out what is wrong (the model or the classification).

    Can you save and provide me your model to see if the problemes comes from my training?
    Thanks in advance (my email : ioan_s2000@yahoo.com)

    • Paul van Gent

      8th April 2017

      Hi John. Check whether the labels correspond with the images when training and classifying the model. Also try to expand the training set with more images if performance remains poor.

      Remember that high accuracy might not be possible for a given dataset. Beyond excluding outliers, fine-tuning accuracy on the dataset itself is of limited use, as real-world performance will not increase with it anyway.

  • bilal rafique

    4th April 2017

    Hi Paul,
    I am getting this error when I run the code that trains the Fisher Face classifier:

    Traceback (most recent call last):
    File “F:/Emotion Recognition/Ex3.py”, line 64, in
    correct = run_recognizer()
    File “F:/Emotion Recognition/Ex3.py”, line 45, in run_recognizer
    fishface.train(training_data, np.asarray(training_labels))
    error: ..\..\..\..\opencv\modules\core\src\alloc.cpp:52: error: (-4) Failed to allocate 495880004 bytes in function cv::OutOfMemoryError

    I want to use your tutorial in my final year project 🙁 Please help me, I have just 10 days 🙁
    Regards,

    • bilal rafique

      4th April 2017

      This error is resolved 🙂 by installing x64 Python and training the folders one by one. But when I trained it again, another error comes up:
      Traceback (most recent call last):
      File “F:\Emotion Recognition\Ex3.py”, line 64, in
      correct = run_recognizer()
      File “F:\Emotion Recognition\Ex3.py”, line 41, in run_recognizer
      training_data, training_labels, prediction_data, prediction_labels = make_sets()
      File “F:\Emotion Recognition\Ex3.py”, line 28, in make_sets
      gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) #convert to grayscale
      error: ..\..\..\..\opencv\modules\imgproc\src\color.cpp:3739: error: (-215) scn == 3 || scn == 4 in function cv::cvtColor

      • Paul van Gent

        8th April 2017

        It seems OpenCV doesn’t find colour channels. Are you loading greyscale images?

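        A possible guard, assuming the failure comes from images that are already single-channel greyscale (cv2.imread and cv2.cvtColor are standard OpenCV calls; the path below is a placeholder):

        import cv2

        f = "face.png"                             # placeholder path
        image = cv2.imread(f)                      # returns None when the path is wrong
        if image is None:
            raise IOError("could not read %s" % f)
        if len(image.shape) == 3:                  # only convert when colour channels exist
            image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
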
    • Paul van Gent

      8th April 2017

      It seems you either need more RAM, or need to install 64-bit Python and OpenCV (likely the latter). The error explicitly states that it ran out of memory.

  • karthik

    7th April 2017

    Sir, you explained that we can create our own dataset. I made a small change: I replaced all the images in S005 with my new set of Roger Federer images, but it shows the wrong emotion. Is this the correct way to create our own dataset, or is it something different?
    As you mentioned earlier:
    “You can do whatever you want actually. The classifier expects two things:

    – A numpy array of imagedata
    – a (similarly shaped!) numpy array or list of numerical labels

    However you synthesize both lists doesn’t matter, as long as the image and the corresponding label are at the same indexes in both arrays or lists!”
    But how am I going to give emotion values in source_emotion? You specify the S005 emotion as 3.000000e+00; how can I give that to my own set? Please tell me the procedure to create my own dataset, sir.

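    (For anyone with the same question: one way to skip the source_emotion text files entirely is to sort your own images into per-emotion folders and derive the labels from the folder index. A rough sketch, not the tutorial’s exact code; the folder names and the 350×350 size follow the tutorial’s conventions:)

    import glob
    import os

    import cv2

    emotions = ["neutral", "anger", "happy"]     # example subset; use your own folders
    training_data, training_labels = [], []
    for label, emotion in enumerate(emotions):
        for f in sorted(glob.glob(os.path.join("sorted_set", emotion, "*"))):
            img = cv2.imread(f, 0)               # 0 = load directly as greyscale
            if img is None:
                continue                         # skip anything that is not an image
            training_data.append(cv2.resize(img, (350, 350)))
            training_labels.append(label)
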
  • John

    9th April 2017

    I have modified your algorithm a little bit.
    I crop the face from the image according to this tutorial: http://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_tutorial.html.
    The crop depends on the eye positions and a desired offset (how much to cut from the face).
    In this way I eliminate some non-useful parts of the image.
    I also want to capture some frames and do recognition on videos:
    it seems that the capture must first be cleaned of noise and then rotated for the maximum recognition rate.

  • Mun

    16th April 2017

    Hey, I’m trying to run the first Python snippet but got the following error. Can you help me out?

    Traceback (most recent call last):
    File “C:\Users\mrunali\Desktop\Master\img_seq.py”, line 22, in
    copyfile(sourcefile_neutral, dest_neut) #Copy file
    File “C:\Python27\Lib\shutil.py”, line 83, in copyfile
    with open(dst, ‘wb’) as fdst:
    IOError: [Errno 2] No such file or directory: ‘sorted_set\\neutral\\5_001_00000001.png’

    • Paul van Gent

      21st April 2017

      Hi Mun. The error tells you what is wrong. Check where you store your files and what you link to.

  • chandu

    18th April 2017

    Hi Paul, how did you download the CK dataset? I entered my information on this site: http://www.consortium.ri.cmu.edu/ckagree/, and it is showing “Please wait for delivering the mail”. I waited but didn’t get anything. Can you tell me exactly how you got the dataset?

    • karthik

      14th August 2017

      If you have managed to download it, could you please tell me how you did?

  • johansonik

    21st April 2017

    Hello!

    My question is: do I have to train this model with my own dataset, or can I use your dataset for training and then use mine just to recognize emotions?

    Thanks in advance for reply 🙂

  • TRomesh

    26th April 2017

    Hi Paul
    I would like to know which algorithm you have used in this project. Does it involve machine learning techniques, neural networks, or just OpenCV’s built-in classification methods?

    • Paul van Gent

      4th May 2017

      Hi TRomesh. Sorry for the late reply, haven’t had much time for the site. This particular tutorial uses the FisherFace classifier from OpenCV, which can be considered a form of machine learning.

  • Joe

    2nd May 2017

    Hi Paul.
    I am Joe.
    I have a question.
    Is this machine learning?
    As far as I know, Fisherface uses the LDA (Linear Discriminant Analysis) algorithm, right?

    • Paul van Gent

      4th May 2017

      Hi Joe. I believe you are correct about the LDA.
      However, machine learning is a broad field, under which methods such as LDA or even simple linear regression may fall.

  • Dana Moore

    3rd May 2017

    Dear Paul,
    Very nice piece of work. Excellent!
    Your script for sorting and organising the images dataset does not work on Ubuntu (and other *nix systems such as OSX) as listed.

    * For one thing, the file path separators are correct for Windows systems only. One might consider using Python’s “os.sep” to gain flexibility; alternatively, one might use os.path.join to mesh the separate parts.

    * For another, glob.glob() does not guarantee a sorted array; one might consider using os.listdir() and sorting the result, to get a correct ordering where the array starts with *00000001.png and ends with *000000nn.png.

    * For another, the array indexing used to retrieve current_session is off (at least on *nix systems).

    That said, it’s a terrific piece of work.
    I will attempt to paste some code that worked for me just below, but no guarantees it formats correctly in a text box. Alternatively, I will be happy to email you a copy.
    ========================= BEGIN =========================================

    ======================== END =========================================

    Thank you again for your excellent tutorial

    • Paul van Gent

      8th May 2017

      Hi Dana,
      Thanks for the comments! The code unfortunately doesn’t format well in the text box, but it did come through in the backend. I planned on updating everything to be Unix-compatible but kept putting it off because most of my hours go into work; this is a great reminder to do it. Thanks again.

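      A minimal sketch of the two portability fixes Dana describes (paths joined with os.path.join, glob results explicitly sorted); the variable names follow the tutorial, but treat this as untested:

      import glob
      import os

      participants = sorted(glob.glob(os.path.join("source_emotion", "*")))
      for participant in participants:
          part = os.path.basename(participant)               # e.g. "S005", independent of path separators
          for sessions in sorted(glob.glob(os.path.join(participant, "*"))):
              current_session = os.path.basename(sessions)   # avoids brittle string slicing
              for files in sorted(glob.glob(os.path.join(sessions, "*"))):
                  print(part, current_session, files)        # verify the ordering is as expected
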
  • Anni

    7th May 2017

    Good Day sir!
    I followed the steps in this tutorial, but I got this error message in the first code:

    Traceback (most recent call last):
    File “F:\Future Projects\files\emotion1.py”, line 12, in
    file = open(files, ‘r’)
    IOError: [Errno 13] Permission denied: ‘source_emotion\\Emotion\\S005\\001’

    How do I solve this, sir?
    I’m still new to Python and I’m using Windows 10.
    I hope you can help me with this 🙂
    I badly need it. Thank you.

  • Hetansh

    8th May 2017

    After running the initial dataset-organising Python script, I get no files in the sorted_set folder. Can anyone help me with this?

    • Paul van Gent

      8th May 2017

      Troubleshoot what’s going wrong:
      – Are the files properly placed in correctly named folders?
      – Are the variables “participants”, “sessions” and “files” populated?
      – Are the source and destination paths generated correctly?

  • Jacob Jelen

    16th May 2017

    Hi Paul, Thanks for this tutorial! Super helpful, but I’m having some issues…
    When I run the training code I get the following error:

    training fisher face classifier
    size of training set is: 494 images
    predicting classification set
    Traceback (most recent call last):
    File “trainModel.py”, line 64, in
    correct = run_recognizer()
    File “trainModel.py”, line 52, in run_recognizer
    pred, conf = fishface.predict(image)
    TypeError: ‘int’ object is not iterable

    I have tried changing the line 51 from
    for image in prediction_data:
    to
    for image in range(len(prediction_data)-1):

    That might have solved one of the issues; however, I’m still getting errors. It is complaining about the image size not being 350×350=122500, although all the images in my dataset folder are the correct size. And my user name is not ‘jenkins’ as it says in /Users/jenkins/miniconda…; not sure where that comes from or how to replace it with my correct path to fisher_faces.cpp.

    size of training set is: 494 images
    predicting classification set
    OpenCV Error: Bad argument (Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with 122500 elements, but got 4.) in predict, file /Users/jenkins/miniconda/1/x64/conda-bld/conda_1486587097465/work/opencv-3.1.0/build/opencv_contrib/modules/face/src/fisher_faces.cpp, line 132
    Traceback (most recent call last):
    File “trainModel.py”, line 64, in
    correct = run_recognizer()
    File “trainModel.py”, line 52, in run_recognizer
    pred, conf = fishface.predict(image)
    cv2.error: /Users/jenkins/miniconda/1/x64/conda-bld/conda_1486587097465/work/opencv-3.1.0/build/opencv_contrib/modules/face/src/fisher_faces.cpp:132: error: (-5) Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with 122500 elements, but got 4. in function predict

    Thanks for your help

    • Paul van Gent

      12th June 2017

      Hi Jacob. It seems the images are not loaded and/or stored correctly in prediction_data, and therefore it cannot iterate over it. You can step over this with your proposed change, but then it fails later because there was no image there in the first place. Verify that image data is actually stored there, and if not, find where it goes wrong. (As an aside: the /Users/jenkins/… path is baked into the OpenCV package by the machine that compiled it, so you can ignore that part.)
      -Paul

      • Dixit Thakur

        15th June 2017

        Hi Paul,
        I am facing the same issue.
        Traceback (most recent call last):
        File “trainData.py”, line 67, in
        correct = run_recognizer()
        File “trainData.py”, line 54, in run_recognizer
        pred, conf = fishface.predict(image)
        TypeError: ‘int’ object is not iterable
        fishface.predict(image); the following is the image data:
        image data : [[ 80 82 83 …, 108 108 109]
        [ 82 83 83 …, 110 111 111]
        [ 84 84 83 …, 111 112 112]
        …,
        [ 24 25 24 …, 14 17 19]
        [ 25 26 25 …, 14 17 19]
        [ 24 25 25 …, 13 16 17]]

        what can be the possible reason for the failure?

        • Paul van Gent

          19th June 2017

          Hi Dixit. I cannot reproduce the error, which makes it a bit difficult to debug. Could you send me your code at info@paulvangent.com? I’ll see if it works over here, and then we know whether it’s a problem with the code or with your setup.
          -Paul

  • Taghuo Fongue

    21st May 2017

    “Extract the dataset and put all folders containing the txt files (S005, S010, etc.) in a folder called source_emotion.”
    Hi Paul,
    Should I extract the Emotions, FACS and Landmarks folders into the same folder “source_emotion”, or does only the Emotions folder have to be extracted into “source_emotion”?
    Which dataset should I extract exactly? Please let me know.

    • Taghuo Fongue

      21st May 2017

      Please, can you send me a screenshot of how you have arranged your folders?

    • Paul van Gent

      12th June 2017

      Hi Taghuo. You extract the emotion textfiles into this folder. So you get:

      source_emotion\\S005\\001\\S005_001_00000011_emotion.txt
      source_emotion\\S010\\…
      etc

  • hanaa

    13th June 2017

    Please, can you help me? I would like to implement emotion recognition using the Raspberry Pi’s camera module, specifically recognizing anger only. I have some simple face detection going on using OpenCV and Python 2.7, but am having a hard time making the jump to emotion recognition. Initial searches yield results involving topics such as optical flow, affective computing, etc., which have so far been intimidating and hard to understand. Can you show me code with the FisherFace classifier?

    • Paul van Gent

      13th June 2017

      For the simplest approach I would recommend looking at the section “Creating the training and classification set”, all the code you need is there. You can also take a look at the Emotion-Aware music player tutorial here, that might clarify some things.

  • Vadim Peretokin

    25th June 2017

    It doesn’t seem that the link to download CK+ works anymore?

    • Paul van Gent

      30th June 2017

      It seems it has been taken offline, yes. I’ll update the text.

  • Bharath

    3rd July 2017

    Could you please provide the sorted_set folder as well?
    I’m not able to prepare that set with the code you provided.

    • Bharath

      3rd July 2017

      Anyone who has prepared the dataset with a separate folder for each emotion, please reply.

    • Paul van Gent

      4th July 2017

      Hi Bharath,
      The sorted_set folder simply contains one folder per emotion, each holding images of faces showing that emotion, like this:

      sorted_set
      |
      +--- anger
      |
      +--- contempt
      |
      +--- disgust
      |
      +--- etc.

      Where exactly are you getting stuck? Maybe I can help.

      -Paul

  • Abhi khandelwal

    3rd July 2017

    Hi Paul
    Could you please tell me which file I should run first? Also, in your code there is no command for opening the webcam (like cv2.VideoCapture(0)), so how will it detect my emotion?

    • Paul van Gent

      4th July 2017

      Hi Abhi,
      If you follow the tutorial everything should go in the right order. This tutorial is about building a model using OpenCV tools. There’s another tutorial using a more advanced method here. You can take a peek there on how to access your webcam, or find one of the million pieces of boilerplate code for this online!
      Good luck.
      -Paul

  • srikanth

    8th July 2017

    Hii Mr Paul,
    What is your advice for a beginner who wants to learn all this face recognition stuff using OpenCV?
    I want to learn it completely.

    • Paul van Gent

      8th July 2017

      Hi Srikanth. I would recommend doing a few Python courses on Coursera before delving into OpenCV. After this, the OpenCV docs should provide you with a good basis. Do a few projects without tutorials. You’ll learn it quickly.

  • Valens

    8th July 2017

    Hi Paul

    We are looking into running an emotion study on our media. However, can we use this algorithm without having to snap or store pictures? That is, on the fly, to understand and predict the viewer’s experience, especially while watching a certain movie or programme?

    • Paul van Gent

      10th July 2017

      This would be possible, but with Python real-time implementations might be too slow. One option is to snap a picture every second and classify that. Even on a slow computer the algorithm will be more than fast enough for this.

      However, be aware that results are not likely to be accurate unless the archetypical emotions the classifier is trained on are displayed. Also be aware that you cannot use the CK+ dataset for any purpose other than academic, so if you want to do this commercially you need explicit permission from the original author of the dataset.

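      A rough sketch of that snap-and-classify idea (it assumes a trained fishface model and the face extraction step from the tutorial; cv2.VideoCapture is standard OpenCV):

      import time

      import cv2

      cap = cv2.VideoCapture(0)            # first connected webcam
      while True:
          ret, frame = cap.read()
          if not ret:
              break                        # camera not available or disconnected
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          # ...detect the face, crop and resize it as in the tutorial, then:
          # pred, conf = fishface.predict(face)
          time.sleep(1)                    # classify roughly once per second
      cap.release()
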
      • Valens

        13th August 2017

        Hi Paul, thank you for your message. Yes, I think snapping a picture and then doing classification would be ideal and practical. As for using the CK+ dataset commercially, we will surely get permission first. By the way, are you the author yourself?

        • Paul van Gent

          13th August 2017

          I’m not the author of the CK+ set, only of the material on this website.

          I would recommend you get permission to use it commercially first. It would be a shame if you put a lot of work into it and then don’t get permission.

  • Michael rusev

    9th July 2017

    Hi Paul, nice tutorial. I was told to do something like this as a school project: after detecting an emotion, the system should be able to play an audio file for the user. If it detects a happy face it should play a certain audio file, and for a sad face another one. Sorry, my English isn’t that good. I want to ask if this can be implemented, and how I can do something like that.

    • Paul van Gent

      9th July 2017

      Hi Michael. This shouldn’t be hard at all. Look at the module “PyAudio”, or the VLC wrappers if you’d rather use that framework.

      You could even use os.startfile(), however this will open the default media player to play the file (which causes it to pop up), so it is not a very nice solution.

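      A rough sketch of the idea (the file names are hypothetical, and os.startfile is Windows-only):

      import os

      emotion_audio = {"happy": "happy_tune.mp3", "sadness": "sad_tune.mp3"}

      def play_for(emotion):
          if emotion in emotion_audio:
              os.startfile(emotion_audio[emotion])   # opens the default media player

      play_for("happy")
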
  • Vincent van Hees

    10th July 2017

    Many thanks for this blog post. It seems the data is back online again, so you can change the text back to how it was 🙂

    • Paul van Gent

      10th July 2017

      Alright, thanks for notifying me!

  • Vincent van Hees

    10th July 2017

    In the section “Extracting faces”, the sentence “The last step is to clean up the “neutral” folder. ”
    Could you please make this sentence more explicit:
    – Is this a description of what the Python code did and is no further action required from the reader?
    – Is this an instruction to the reader to delete that folder manually?
    – Is this an instruction to the reader to delete the files in that folder manually?
    thanks

    • Paul van Gent

      10th July 2017

      Thanks. I have updated the text. You need to do this manually.

  • JITESH

    12th July 2017

    Hi Paul,

    I would like to build the same system in C#. Can you please tell me how I can integrate the CK+ model in C#? If you have a C# sample of this, kindly update me at jitesh.facebook@gmail.com, please.

    Thanks,
    Jitesh

    • Paul van Gent

      14th July 2017

      Emgu CV is a .NET wrapper for OpenCV. I would look into that. Porting the code should be easy after that :).
      -Paul

  • Zlatan Robbinson

    24th July 2017

    Hello Paul, brilliant tutorial, I may say. Just one question: is it possible to design a system that can speak to the individual after detecting an emotion? For example, when it detects a happy face it should say something like “You are happy, keep it up”, and when it detects a sad face it should say something like “You are sad, cheer up”.
    Just something of that nature; it would be fun to see the system speak to the individual after it detects an emotion. Please, can this be implemented? I am planning on doing this as my final year project.

    • Paul van Gent

      24th July 2017

      Hi Zlatan. This should be quite easy to implement. To make your life easy you need to look at a package that does TTS (text to speech), for example this one.

      Then it is just a matter of:
      – Detecting emotion
      – Determining label of emotion
      – Have the TTS engine say something.

      Good luck 🙂
      – Paul

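      As a sketch, using the pyttsx3 package (on Python 2.7 the older pyttsx package offers the same calls; the phrases are just examples):

      import pyttsx3

      engine = pyttsx3.init()
      phrases = {"happy": "You are happy, keep it up!", "sadness": "You are sad, cheer up!"}

      def speak_for(emotion):
          if emotion in phrases:
              engine.say(phrases[emotion])
              engine.runAndWait()

      speak_for("happy")
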
      • Zlatan Robbinson

        6th August 2017

        Please, Paul, how can I contact you personally, in case I have something else to discuss?

  • Salma

    3rd August 2017

    Hello, the link to the CK database is broken and I cannot find it on the internet. Is there any working link or other alternative for the database?

    • Paul van Gent

      7th August 2017

      As far as I’m aware this is the only link. Sharing the database without the author’s consent is prohibited, so I’m afraid you need to look for another dataset.

  • Zlatan Robbinson

    6th August 2017

    Thanks Paul you saved my day. Keep up the good work

  • PKeenan

    13th August 2017

    Mistake in line 29 of the second code snippet: facefeatures == face2.

    • Paul van Gent

      13th August 2017

      Thanks for catching that. I’ve updated it.

  • Rodrigo Moraes

    21st August 2017

    I can’t access the dataset used in this article (http://www.consortium.ri.cmu.edu/ckagree/). Do you know why?

    • Paul van Gent

      21st August 2017

      Its availability is intermittent, and access is not always granted. You can look at other available datasets or create your own :).

  • Thari

    22nd August 2017

    I’m getting this error when I run the “Creating the training and classification set” code.
    My system is Windows 10 with Visual Studio 2017, Python 2.7 (32-bit) and 8 GB of RAM (there is more than 3 GB of free memory when running the code).

    training fisher face classifier
    size of training set is: 1612 images
    OpenCV Error: Insufficient memory (Failed to allocate 1579760004 bytes) in cv::OutOfMemoryError, file ..\..\..\..\opencv\modules\core\src\alloc.cpp, line 52

    Please help me resolve this.

    • Paul van Gent

      22nd August 2017

      Hi Thari. You have the 32-bit Python version; that means it can only address the first ~4 GB of your RAM, and most of that is likely taken up by your OS and other applications.

      Consider installing 64-bit Python.

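      To check which interpreter you are actually running, this snippet (struct is in the standard library) prints 32 or 64:

      import struct
      print(struct.calcsize("P") * 8)   # pointer size in bits
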
      • Thari

        26th August 2017

        Thank you for your reply.

        When I run this with Python 2.7 (64-bit) I get this error:

        Traceback (most recent call last):
        File “C:\Users\Thari\documents\visual studio 2017\Projects\PythonApplication1\PythonApplication1\PythonApplication1.py”, line 7, in
        fishface = cv2.createFisherFaceRecognizer() #Initialize fisher face classifier
        AttributeError: ‘module’ object has no attribute ‘createFisherFaceRecognizer’
        Press any key to continue . . .

        Please help me resolve this.

  • karthikeyan

    24th August 2017

    Hello sir, I am doing a final year project based on this module, but I am using a webcam to detect emotions in real time. Could you provide me with the source code for the complete emotion detection module using a webcam? Please, my mail id is: learnerkarthik@gmail.com

    • Paul van Gent

      25th August 2017

      Hi Karthikeyan. All you need is in the texts and the docs of the opencv module (http://docs.opencv.org/). Good luck!

  • Tham

    25th August 2017

    Hi Paul,
    createFisherFaceRecognizer(num_components,threshold);
    Which num_components and threshold did you use for your project?

  • sadaf

    25th August 2017

    How can I learn Python OpenCV?

    • Paul van Gent

      25th August 2017

      I would start with the docs. It also helps to think of a simple project you want to program, and build it from the ground up with the help of the docs. This will help you get familiar with the structure of the module.

      If you’re not very comfortable in Python I would suggest you do a few courses on this first. This will really help speed the rest up.

      • Sadaf

        25th August 2017

        Ok thanks:)

  • karthikeyan

    25th August 2017

    I am getting an error like “AttributeError: module has no attribute ‘createFisherFaceRecognizer’”, and I just copy-pasted your code!

    • Paul van Gent

      25th August 2017

      Hi Karthikeyan. I’m sure your final year project is not about copy pasting code. Please read the tutorial as well, that’s what it’s for.

      • Karthikeyan

        9th September 2017

        In the emotion_dtect.py file I am getting a “TypeError: ‘int’ object is not iterable” on the line: pred, conf = fishface.predict(image)
        How do I resolve this, sir?

  • salma

    7th September 2017

    The CK+ set has more than 3 folders full of .txt files; which ones should I use in the “source_emotion” folder?
    I have been trying for 10 days and I have no result for the emotion recognition. I would appreciate a little help, thank you.

  • Reima

    10th October 2017

    Hi Paul,

    I just wanted to let you know that I found somebody presenting your tutorial codes as his own handiwork. No citation or links to this page. So that is a clear license violation on his part. I’d say, it is a sign that you’ve made a great tutorial since the copycat pretty much copy-pasted your code snippets and only made minor value adjustments and changed some comment texts:
    https://www.apprendimentoautomatico.it/en/emotions-detection-via-facial-expressions-with-python-opencv/

    Anyway, thanks for a great tutorial,
    -Reima

    • Paul van Gent

      10th October 2017

      Thanks so much, Reima, also for notifying me. Making original content is hard and takes time. Unfortunately, due to the nature of the internet, there will always be freeloaders who benefit from others’ work. I’ve contacted the author; let’s see what happens.

  • Jamil

    14th October 2017

    Hello sir, I’m a beginner programmer and learner, but I have the spirit to do anything if someone gives me proper guidance. Will you accept me as your student?

    • Paul van Gent

      14th October 2017

      Hi Jamil. If you have questions you can always send them to info@paulvangent.com. I cannot guarantee I will always respond quickly, though.

      There are a lot of great Python tutorials and classes online. I can surely recommend “Python for Everybody” on http://www.coursera.com

  • Aditya

    29th October 2017

    Hi, I’m just getting started with this and I have a question — when you say “Extract the dataset and put all folders containing the txt files (S005, S010, etc.) in a folder called “source_emotion” “, which folders containing the txt files do you mean?

    I’m confused whether it is all the contents inside “Emotion_labels/Emotion/” or “FACS_Labels/FACS”.

    Please help me out. Thanks!

    • Aditya

      29th October 2017

      I have currently saved it as “source_emotion/S005/001/S005_001_00000011_emotion.txt”
      “source_emotion/S010/001/” and “source_emotion/S010/001/S010_002_00000014_emotion.txt” and so on. Is that right?

      • Paul van Gent

        30th October 2017

        Hi Aditya, that indeed looks right. If the code fails you can take a look at either the code or the traceback of the error to see where the mismatch happens.
        -Paul

        • Aditya

          30th October 2017

          Thanks! And I really respect your quick reply, Paul! 🙂

  • Huzail

    1st November 2017

    Hello sir, I’m having a memory problem while training:

    http://prntscr.com/h4q20a

    This is a screenshot of the error. How do I deal with it?

    • Paul van Gent

      1st November 2017

      Hi Huzail. You’re running out of memory. Likely you are using 32-bit Python, a 32-bit IDE running the Python environment, or a 32-bit system. Make sure it’s all 64-bit, try to free up memory, or use fewer images to train the model.

  • Hiren

    1st November 2017

    Hey Paul, I did not find any .txt files (S005 – S010) when extracting the database; I only found the image folders (S010 to S130). So what should I store in source_emotion?

    • Paul van Gent

      1st November 2017

      Hi Hiren. On the server where you downloaded the data, there is a separate zip file containing the emotion labels. You can download that one and extract it into source_emotion.
      -Paul

      • Hiren

        13th November 2017

        Thanks for the reply, Paul. I have completed the system and it runs successfully, but sometimes the system only stores one image instead of 15 (as per the code). What should I do?

  • Karthik

    6th November 2017

    Hey Paul, thanks for these easy-to-understand tutorials.

    And for anyone on OpenCV 3.3 who gets a “createFisherFaceRecognizer not found” error: install the OpenCV contrib package from https://www.lfd.uci.edu/~gohlke/pythonlibs/#opencv and then use cv2.face.FisherFaceRecognizer_create() instead of cv2.createFisherFaceRecognizer(). A version-tolerant sketch of this follows below.

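    (A version-tolerant sketch of Karthik’s tip; it assumes the contrib package is installed for newer OpenCV versions:)

    import cv2

    try:
        fishface = cv2.face.FisherFaceRecognizer_create()   # OpenCV 3.3+ with contrib
    except AttributeError:
        fishface = cv2.createFisherFaceRecognizer()         # older OpenCV builds
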
  • Dominique

    12th November 2017

    Hi Paul, first of all thank you for the tutorial, but where am I supposed to get the txt files for the source_emotion folder if the CK+ dataset link is broken?

    • Paul van Gent

      13th November 2017

      Hi Dominique. The link functions intermittently. Either try again later, or try using another dataset.
      -Paul

  • ramyavr

    30th November 2017

    Hi Paul
    I am getting this error
    Traceback (most recent call last):
    File “extract.py”, line 49, in
    filenumber += 1 #Increment image number
    NameError: name ‘filenumber’ is not defined
    Could you please help me solve this?

    • Paul van Gent

      4th December 2017

      Hi Ramyavr. The variable “filenumber” is not defined prior to your using it, as the error states. Check that you initialize the variable correctly, and that the name is spelled correctly there (also see the code section in “Extracting Faces”, line 14).

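      The pattern the error points to, as a tiny runnable sketch (the loop body is a stand-in for the face-saving loop):

      filenumber = 0                        # initialise before the loop
      for _ in range(3):                    # stand-in for the detected faces
          print("writing image %s" % filenumber)
          filenumber += 1                   # increment image number
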
  • Sum

    4th December 2017

    Hi, I have been trying to run the code but am stuck at the very first step, where I am unable to get the right path for the txt files.
    ** for sessions in glob.glob(“%s//*” % participant): # store list of sessions for current participant
    for files in glob.glob(“%s//*” % sessions): ***

    gives me a permission-denied error, even after I have given all the permissions.

    Please help

    • Paul van Gent

      4th December 2017

      Hi Sum. What OS are you using? Try running the application with elevated privileges (“sudo python ” on Linux/macOS, or run the command prompt as an administrator on Windows).

      Please check that the paths you reference exist and are spelled correctly in the code, sometimes this can give strange errors.

      – Paul

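      One further possibility, judging by the directory path in the error message: glob also matches directories, and calling open() on a directory raises exactly this Errno 13 on Windows. A sketch of a guard (the example path is an assumption):

      import glob
      import os

      sessions = os.path.join("source_emotion", "S005", "001")   # example session folder
      for files in glob.glob("%s/*" % sessions):
          if not os.path.isfile(files):     # skip anything that is not a file
              continue
          with open(files, "r") as f:
              print(f.readline())
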
      • Sum

        4th December 2017

        I am working on Windows and have already tried running as administrator.
        I am still stuck with this error:

        file = open(files, ‘r’)
        IO Error: [Errno 13] Permission denied: ‘E:data/source_emotions\\Emotion\\S005’

        this is my code section throwing the error:

        ****
        for sessions in glob.glob(“%s/*” % participant): # store list of sessions for current participant
        for files in glob.glob(“%s/*” % sessions):
        current_session = files[20:-30]
        file = open(files, ‘r’)
        ****

  • MA

    5th December 2017

    Hi, we are using Python 3. We ran your code but didn’t reach more than 40% accuracy. The classifier seems to work well. How did you get 80 percent?
    MA

    • Paul van Gent

      5th December 2017

      Hi MA. There are two likely causes that I can think of. The first is that glob.glob might sort the detected files differently; please verify that all images in a given emotion folder are actually of that emotion. The second possibility is that you’re using a different OpenCV version. We’re basically abusing a face recognition algorithm to detect emotions, and that algorithm has changed in later versions. Please take a look at the other emotion tutorial on here; it’s a bit more technical, but also the more ‘proper’ way of going about this task.

      – Paul

  • NinjaTuna

    8th December 2017

    Hello sir, may I ask what the algorithm used in the tutorial is called?

    • Paul van Gent

      8th December 2017

      Hi NinjaTuna. Here I (ab)use a face recognition algorithm, the FisherFace algorithm (see this for more info on FisherFaces). You can find more info in the OpenCV documentation.
      – Paul

      • NinjaTuna

        11th December 2017

        Thank you very much, sir. We have a project at our university that must be able to detect emotions on side-view faces, and we had no idea where to start, so we would like to cite your work. Thank you very much 😀

  • HD

    9th December 2017

    Hey Paul, when I ran the program the first time I was able to store more than one image for each emotion and got an accurate result, but now it only stores one image for each emotion and the result is not accurate. What should I do? Please reply as soon as possible.

    • Paul van Gent

      11th December 2017

      Hi HD. Could you elaborate a bit further? I’m not sure what the issue is.
      – Paul

  • sarra

    10th December 2017

    Hi sir, I had this error and I could not solve it:
    Traceback (most recent call last):
    File “D:\facemoji-master\prepare_model.py”, line 72, in
    correct = run_recognizer()
    File “D:\facemoji-master\prepare_model.py”, line 60, in run_recognizer
    fishface.train(training_data, np.asarray(training_labels))
    error: C:\projects\opencv-python\opencv_contrib\modules\face\src\fisher_faces.cpp:67: error: (-5) Empty training data was given. You’ll need more than one sample to learn a model. in function cv::face::Fisherfaces::train
    Can you help me?

    • Paul van Gent

      11th December 2017

      Hi Sarra. The error says it all: “error: (-5) Empty training data was given. You’ll need more than one sample to learn a model.“. It seems the data is not loading correctly. Check whether you are referencing the correct paths, whether you have permission to read from the folders, and whether you store the data correctly in the array variable in python.
      – Paul

  • Angi

    14th December 2017

    Hi paul
    I am getting this error
    sourcefile_emotion = glob.glob(“source_images\\%s\\%s\\*” %(part, current_session))[-1] # get path for last image in sequence, which contain the emotion.

    The image is in source_images\S010\001, and my Python file is in the same folder as source_images.

    Can you help me?

    • Paul van Gent

      15th December 2017

      Hi Angi. Could you post your error message? You seem to have accidentally pasted a line of code rather than the error message.
      – Paul

