CHAPTER 1: INTRODUCTION
The main objective of this project is to provide a device that senses brain signals and converts them into an expression of a person's feelings, without the person speaking or making any body movement. Currently, we are developing a model that detects facial expressions. Facial expressions are important cues for non-verbal communication among human beings, and humans are able to recognize emotions from them quite accurately and efficiently. An automatic facial emotion recognition system is therefore an important component of human-machine interaction. Apart from its commercial uses, such a system might also incorporate cues from the biological system, and the model could then be used to develop further insight into the cognitive processing of the brain.
Many people in our world who are deaf and mute face problems because they cannot express their views in any form other than sign language. We are therefore trying to create a device that will help those people express their views and feelings without making any body movement, simply by sensing their brain signals. As a first step, we are developing a model that detects facial expressions by differentiating among the frequencies of various expressions, in order to estimate the accuracy the model could achieve on brain signals as well.
Deep learning methods have performed very well on the MNIST digit recognition dataset, and our setting is very similar to the task of digit recognition: corresponding to the digit labels, we have emotion labels. Emotion recognition is much more complicated, however, because digit images are much simpler than face images depicting various expressions. Moreover, the variability in the images due to different identities hampers performance. Human accuracy in facial expression recognition is not as good as in digit recognition, and it is also aided by other sources of information such as context, prior experience, and speech, among others.
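The analogy with digit recognition can be made concrete: just as a baseline MNIST classifier assigns a digit label to a flattened image, a baseline emotion recognizer assigns an emotion label to a flattened face image. The following is a minimal sketch of such a baseline using a nearest-neighbour rule in raw pixel space; the emotion list and the toy data here are illustrative assumptions, not the project's actual dataset.

```python
import numpy as np

# Hypothetical label set for illustration; a real dataset defines its own labels.
EMOTIONS = ["anger", "disgust", "fear", "surprise", "sadness", "contempt", "happiness"]

def predict_emotion(train_images, train_labels, image):
    """Nearest-neighbour baseline: return the label of the closest
    training image in raw pixel space (Euclidean distance)."""
    dists = np.linalg.norm(train_images - image, axis=1)
    return train_labels[int(np.argmin(dists))]

# Toy demonstration with random "images" (flattened 48x48 pixels).
rng = np.random.default_rng(0)
train_images = rng.random((7, 48 * 48))
train_labels = EMOTIONS  # one sample per emotion, purely for illustration
query = train_images[3] + 0.01 * rng.random(48 * 48)  # noisy copy of sample 3
print(predict_emotion(train_images, train_labels, query))  # → "surprise"
```

This baseline ignores identity variation entirely, which is exactly why the techniques discussed later (eigenfaces, Fisherfaces) project images into a feature space before classifying.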
DEFINITION AND OVERVIEW:
Understanding human facial expressions has many aspects, from computer analysis, emotion recognition, lie detection, airport security and nonverbal communication to the role of expressions in art. Improving the skill of reading expressions is an important step towards successful relationships. A facial expression is a gesture executed with the facial muscles that conveys the emotional state of the subject to observers. An expression sends a message about a person's internal feelings; this is the facial expression's most important role: being a channel of nonverbal communication.
Expressions and emotions go hand in hand, i.e. particular combinations of facial muscle actions reflect a particular emotion. For certain emotions it is very hard, and maybe even impossible, to avoid the fitting facial expression.
For example, a person who is trying to ignore his boss's annoying, offensive comment by keeping a neutral expression might nevertheless show a brief expression of anger. This phenomenon of a brief, involuntary facial expression appearing on the face according to the emotion experienced is called a 'micro expression'.
Evolutionary Reasons for Facial Expressions
A common assumption is that facial expressions initially served a functional role rather than a communicative one. We will try to justify each of the seven classical expressions by its initial functional role:
Anger: involves three main features: teeth revealed, eyebrows drawn down and tightened at the inner side, and squinting eyes. The function is clear: preparing for attack. The teeth are ready to bite and to threaten enemies, while the eyes and eyebrows squint to protect the eyes without closing entirely, so the enemy remains visible.
Disgust: involves a wrinkled nose and mouth, sometimes even the tongue coming out.
This expression mimics a person who has tasted bad food and wants to spit it out, or who smells something foul.
Fear: involves widened eyes and sometimes an open mouth.
The function: opening the eyes so wide is supposed to help increase the visual field (though studies show that it does not actually do so) and speed up eye movement, which can assist in finding threats. Opening the mouth enables quiet breathing, so one is not revealed to the enemy.
Surprise: very similar to the expression of fear, perhaps because a surprising situation can frighten us for a brief moment, before it becomes clear whether the surprise is a good or a bad one. The function is therefore similar.
Sadness: involves a slight pulling down of the lip corners while the inner side of the eyebrows rises. Darwin explained this expression as suppressing the will to cry: the control over the upper lip is greater than the control over the lower lip, and so the lower lip drops. When a person screams during crying, the eyes close to protect them from the blood pressure that accumulates in the face. So, when we have the urge to cry and want to stop it, the eyebrows rise to prevent the eyes from closing.
Contempt: involves the lip corner rising on only one side of the face; sometimes only one eyebrow rises. This expression can look like half surprise, half happiness.
It can imply to the person who receives this look that we are surprised by what he said or did (not in a good way) and that we are amused by it. It is obviously an offensive expression, leaving the impression that one person feels superior to another.
Happiness: usually involves a smile: both corners of the mouth rising, the eyes squinting and wrinkles appearing at the corners of the eyes. The initial functional role of the smile, which represents happiness, remains a mystery. Some biologists believe the smile was initially a sign of fear: monkeys and apes clenched their teeth to show predators that they were harmless. A smile encourages the brain to release endorphins, which lessen pain and produce a feeling of well-being; those good feelings that a smile can produce can help in dealing with fear. A smile can also produce positive feelings in someone who witnesses it, and might even get him to smile too.
Eigenface Algorithm
The task of facial recognition is discriminating input signals (image data) into several classes (persons). The input signals are highly noisy (e.g. noise caused by differing lighting conditions, pose, etc.), yet the input images are not completely random, and in spite of their differences there are patterns that occur in every input signal. Such patterns, observable in all signals, could be, in the domain of facial recognition, the presence of certain objects (eyes, nose, mouth) in any face, as well as the relative distances between these objects. These characteristic features are called eigenfaces. They can be extracted from the original image data by means of a mathematical tool called Principal Component Analysis (PCA).
By means of PCA one can transform each original image of the training set into a corresponding set of eigenface weights. An important property of PCA is that one can reconstruct any original image from the training set as a sum of the eigenfaces, with each eigenface carrying a certain weight. This weight specifies to what degree the specific feature (eigenface) is present in the original image; remember that eigenfaces are nothing less than characteristic features of the faces.
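As a concrete sketch of this idea, the following computes eigenfaces from a toy, randomly generated training set via PCA (using the SVD of the mean-centered data) and then reconstructs a training image from its eigenface weights. The function names are illustrative, not part of any particular library, and the random "faces" stand in for real face images.

```python
import numpy as np

def compute_eigenfaces(images, n_components):
    """PCA on flattened face images.
    images: (n_samples, n_pixels) array, one flattened face per row.
    Returns the mean face, the top eigenfaces, and each image's weights."""
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # Rows of Vt are the principal components, i.e. the eigenfaces.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:n_components]
    weights = centered @ eigenfaces.T  # projection of each face onto the eigenfaces
    return mean_face, eigenfaces, weights

def reconstruct(mean_face, eigenfaces, weight):
    """Rebuild an image as the mean face plus a weighted sum of eigenfaces."""
    return mean_face + weight @ eigenfaces

# Toy training set: 10 random "faces" of 8x8 = 64 pixels each.
rng = np.random.default_rng(0)
faces = rng.random((10, 64))
mean_face, eigenfaces, weights = compute_eigenfaces(faces, n_components=10)
# When all components are kept, the reconstruction is numerically exact.
print(np.allclose(reconstruct(mean_face, eigenfaces, weights[0]), faces[0]))  # → True
```

In practice only the leading eigenfaces are kept, giving a compact weight vector per face that can be compared across images for recognition.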
Fisherface Algorithm
One face recognition technique effectively combines elastic graph matching (EGM) with the Fisherface algorithm. EGM, as one of the dynamic link architectures, uses not only face shape but also the gray-level information of the image, while the Fisherface algorithm, as a class-specific method, is robust to variations such as lighting direction and facial expression.
In a face recognition system adopting these two methods, a linear projection per node of the image graph reduces the dimensionality of the labeled graph vector and provides a feature space that can be used effectively for classification. In comparison with the conventional method, this approach obtained satisfactory results from the perspectives of both recognition rate and speed; in particular, a maximum recognition rate of 99.3% was reported with the leave-one-out method in experiments on the Yale face database.
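The core of the Fisherface algorithm is Fisher's linear discriminant: find projection directions that maximize the between-class scatter relative to the within-class scatter. Below is a minimal sketch in plain NumPy, with a small ridge term added to keep the within-class scatter matrix invertible; this is a simplification of the full Fisherface pipeline, which normally reduces dimensionality with PCA first, and the toy 2-D data is purely illustrative.

```python
import numpy as np

def fisher_directions(X, y, n_components):
    """Fisher's linear discriminant.
    X: (n_samples, n_features) data, y: class labels.
    Returns an (n_features, n_components) projection matrix."""
    overall_mean = X.mean(axis=0)
    n_features = X.shape[1]
    Sw = np.zeros((n_features, n_features))  # within-class scatter
    Sb = np.zeros((n_features, n_features))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - overall_mean)[:, None]
        Sb += len(Xc) * (d @ d.T)
    # Solve the generalized eigenproblem Sb w = lambda Sw w
    # (the small ridge term keeps Sw invertible on degenerate data).
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(n_features), Sb))
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:n_components]].real

# Two well-separated toy classes in 2-D; one Fisher direction separates them.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)) + [0, 0],
               rng.normal(0.0, 0.1, (20, 2)) + [5, 0]])
y = np.array([0] * 20 + [1] * 20)
W = fisher_directions(X, y, n_components=1)
proj = X @ W
print(abs(proj[:20].mean() - proj[20:].mean()) > 1.0)  # classes stay far apart → True
```

Unlike PCA, which keeps the directions of greatest overall variance regardless of labels, this projection explicitly uses the class labels, which is what makes the Fisherface method robust to within-class variation such as lighting and expression.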
CHAPTER 2: OVERALL DESCRIPTION
2.1 PROJECT PERSPECTIVE
A screen for detecting the emotion using the inbuilt dataset.
After this, the information regarding the emotion will be transferred to us in order to differentiate among the frequencies of various people.
The model can be supported by the device being developed.
SOFTWARE INTERFACES
The model is prepared in Anaconda Spyder and also uses OpenCV.
512 MB of RAM is required to run the model.
2.2 PROJECT FUNCTIONS
Helps us to view and detect various emotions using facial expression recognition.
Stores the correct records for future use.
Displays graphs of the emotion frequencies.
Displays the status of frequencies among various faces.