FACS & Micro-Expressions

Paul Ekman's groundbreaking research provided strong evidence that facial expressions of emotion are universal — and that tiny, involuntary micro-expressions can reveal hidden feelings.


Who Is Paul Ekman?

Paul Ekman (born 1934) is an American psychologist widely regarded as one of the most influential researchers in the study of emotions and facial expressions. Over a career spanning more than five decades, Ekman fundamentally changed how science understands the human face — showing that certain emotional expressions are recognized across all cultures and laying the groundwork for the modern science of facial analysis.

Ekman's most famous early research took him to isolated tribal communities in Papua New Guinea in the 1960s. He studied the Fore people, a pre-literate group with virtually no exposure to Western media, television, or photography. When he showed them photographs of facial expressions and asked them to match expressions to emotional stories, the results were striking: the Fore identified happiness, sadness, anger, fear, surprise, and disgust in exactly the same way as Americans, Japanese, and Brazilians.

This finding was revolutionary. It challenged the then-dominant view in anthropology — championed by Margaret Mead and others — that emotional expression was primarily culturally learned. Ekman's work demonstrated that while the rules governing when and how emotions are displayed vary across cultures (what he called "display rules"), the basic muscular patterns of emotional expression are hardwired into human biology. Every human face, regardless of culture, ethnicity, or geography, expresses the same core emotions in the same fundamental way.

The Facial Action Coding System (FACS)

In 1978, Ekman and his colleague Wallace Friesen published the Facial Action Coding System (FACS), a comprehensive, anatomically based system for describing all visually distinguishable facial movement. FACS remains the gold standard for facial expression research nearly five decades after its creation.

At its core, FACS decomposes the movement produced by the roughly 43 muscles of the human face into discrete units called Action Units (AUs). Each Action Unit corresponds to a specific muscle or muscle group and produces a specific, observable change in facial appearance. For example, AU 1 involves the medial (inner) portion of the frontalis muscle and raises the inner portion of the eyebrows. AU 6 involves the orbicularis oculi muscle and raises the cheeks, creating the distinctive "crow's feet" associated with genuine smiling. AU 12 involves the zygomaticus major and pulls the lip corners upward.

Any facial expression — no matter how subtle or complex — can be described as a specific combination of Action Units. A genuine smile of happiness, for instance, is coded as AU 6 + AU 12 (the so-called "Duchenne smile"), while a polite or social smile typically involves only AU 12 without AU 6. This level of precision allows researchers to distinguish between authentic and performed expressions with remarkable accuracy.
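
The AU-combination logic above can be expressed as a small lookup. Here is a minimal sketch, assuming the active AUs have already been detected and are supplied as a set of integers (the detection step itself is not shown):

```python
# Sketch: distinguishing a Duchenne smile from a social smile by Action Units.
# AU numbers follow the FACS conventions described above; how the AUs are
# obtained from an image is assumed to happen elsewhere.

def classify_smile(active_aus: set[int]) -> str:
    """Classify a smile from a set of active Action Units."""
    if 12 not in active_aus:
        return "no smile"          # AU 12 (lip corner puller) is required
    if 6 in active_aus:
        return "Duchenne smile"    # AU 6 + AU 12: cheeks raised, genuine
    return "social smile"          # AU 12 alone: polite or performed

print(classify_smile({6, 12}))   # Duchenne smile
print(classify_smile({12}))      # social smile
```

In practice an AU can be present at varying intensity, so real coders and detection systems work with graded scores rather than a simple present/absent set; this toy model only illustrates the combinatorial idea.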

FACS has been adopted across an extraordinary range of fields. Psychologists use it to study emotion and mental health. Animators at studios like Pixar use it to create believable facial expressions in digital characters. Security professionals use it in deception detection. Neuroscientists use it to study the brain's emotion circuits. And, increasingly, AI developers use FACS as the foundation for teaching computers to recognize and interpret facial expressions.

The Seven Universal Emotions

Ekman's research identified seven basic emotions that are expressed and recognized the same way across all human cultures. Each has a distinctive facial signature that can be precisely described using FACS Action Units.

Happiness is characterized by raised cheeks (AU 6) and upturned lip corners (AU 12). When genuine, it produces crow's feet around the eyes — a feature that is extremely difficult to produce voluntarily. Sadness involves the inner eyebrows pulling upward (AU 1) and lip corners pulling down (AU 15), often with the chin boss pushing up (AU 17).

Anger tightens the brows (AU 4), raises the upper lids (AU 5), and tightens the lips (AU 23 and AU 24). Fear raises and draws together the eyebrows (AU 1 + AU 2 + AU 4), opens the eyes wide (AU 5), and stretches the lips horizontally (AU 20).

Surprise raises the eyebrows (AU 1 + AU 2), opens the eyes wide (AU 5), and drops the jaw (AU 26). Disgust wrinkles the nose (AU 9), raises the upper lip (AU 10), and often involves the tongue pushing forward. Contempt is unique among the basic emotions in being asymmetric — it involves a unilateral tightening of one lip corner (AU 14), producing a distinctive half-smile or smirk.
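
The prototype signatures above can be collected into a single table. The following sketch matches a detected AU set against those prototypes using overlap (Jaccard similarity); this exact-set matching is a toy model, since real expressions are graded and often blended:

```python
# Prototype AU signatures for the seven universal emotions, as described
# in the text. Contempt's AU 14 is unilateral in practice, which a plain
# set of AU numbers cannot capture.
EMOTION_SIGNATURES = {
    "happiness": {6, 12},
    "sadness":   {1, 15, 17},
    "anger":     {4, 5, 23, 24},
    "fear":      {1, 2, 4, 5, 20},
    "surprise":  {1, 2, 5, 26},
    "disgust":   {9, 10},
    "contempt":  {14},
}

def best_match(active_aus: set[int]) -> str:
    """Return the emotion whose prototype best overlaps the detected AUs,
    scored by Jaccard similarity (intersection over union)."""
    def jaccard(proto: set[int]) -> float:
        return len(active_aus & proto) / len(active_aus | proto)
    return max(EMOTION_SIGNATURES, key=lambda e: jaccard(EMOTION_SIGNATURES[e]))

print(best_match({1, 2, 5, 26}))  # surprise
```

Note how fear and surprise share AUs 1, 2, and 5 — the presence or absence of AU 4 (brow lowerer) and AU 20 (lip stretcher) is what separates them, which is exactly the kind of distinction FACS coding makes explicit.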


What Are Micro-Expressions?

One of Ekman's most fascinating discoveries is the existence of micro-expressions — involuntary facial expressions that flash across the face in as little as 1/25th of a second and rarely last longer than 1/5th of a second. These fleeting expressions occur when a person experiences an emotion but attempts to conceal or suppress it, either consciously or unconsciously.
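
Their brevity has a practical consequence for video analysis. A rough frame-budget calculation (the frame rates are common video standards, not from the original text) shows how few frames a micro-expression spans:

```python
# Rough frame-budget check: how many video frames capture a micro-expression?
# Durations from the text: as short as 1/25 s, rarely longer than 1/5 s.
for fps in (24, 30, 60):
    shortest = fps * (1 / 25)   # frames spanning a 40 ms flash
    longest = fps * (1 / 5)     # frames spanning a 200 ms flash
    print(f"{fps} fps: {shortest:.1f} to {longest:.1f} frames")
```

At standard 24 or 30 fps, the shortest micro-expressions occupy barely a single frame, which is one reason both human observers and automated systems can miss them.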

What makes micro-expressions remarkable is that they are extremely difficult to fake or control. Because they are produced by involuntary neural pathways — different from the voluntary motor pathways used when we deliberately form an expression — they represent a kind of emotional "leak" that bypasses conscious control. A person may maintain a calm, neutral expression on the surface while a micro-expression of anger, fear, or disgust flashes across their face in a fraction of a second.

Most people miss micro-expressions entirely in everyday interaction. They happen too quickly for casual observation. However, research has shown that the ability to detect micro-expressions can be trained. Ekman developed training programs (such as the Micro Expression Training Tool, or METT) that significantly improve participants' ability to spot these fleeting signals. With practice, trained observers can detect micro-expressions in real-time conversation, gaining insight into emotions that the other person may be trying to hide.

FACS vs Traditional Face Reading

FACS and traditional face reading systems like Chinese Mian Xiang or Indian Samudrika Shastra approach the face from fundamentally different angles — and understanding this distinction is key to appreciating what each contributes.

FACS reads dynamic expressions (movement). It focuses on what the face is doing right now — the momentary configurations of muscles that signal emotions, intentions, and reactions. FACS tells you what someone is feeling in a given moment, whether that feeling is genuine or performed, and whether hidden emotions are leaking through micro-expressions.

Traditional face reading reads static features (structure). It focuses on the permanent or semi-permanent characteristics of the face — bone structure, proportions, feature shapes, lines, and marks. Traditional face reading suggests habitual emotional patterns, personality tendencies, and (in some traditions) life trajectory. It addresses not what you are feeling in this moment but what kind of person your face suggests you tend to be over time.

These two approaches are not in competition — they are complementary. A comprehensive understanding of the face benefits from both perspectives: the scientific precision of FACS for reading emotional states and the interpretive depth of traditional systems for understanding personality patterns. Explore how these traditions intersect in our complete face reading guide and our article on face reading and emotional patterns.

Practical Applications

The practical applications of FACS and micro-expression knowledge extend far beyond academic research. Understanding how faces communicate emotion has proven valuable across a wide range of professional and personal contexts.

In law enforcement and security, FACS-trained professionals use micro-expression detection as one tool (among many) for assessing credibility during interviews and interrogations. Micro-expression analysis alone is not a reliable lie detector — but it can flag emotional incongruence that warrants further investigation.

In therapy and counseling, FACS helps clinicians detect emotions that clients may not be verbalizing. A therapist who notices a micro-expression of sadness during an otherwise cheerful account can gently explore what might lie beneath the surface, deepening the therapeutic relationship.

In business and customer service, understanding facial expressions improves negotiation, sales, and client relationships. Recognizing when a client's smile is genuine versus polite, or detecting a flash of concern during a presentation, can provide valuable real-time feedback.

In personal relationships, developing facial expression literacy enhances empathy and emotional intelligence. Partners, parents, and friends who can read subtle emotional cues communicate more effectively and build deeper connections. For more on this topic, see our guide to building empathy.

AI and Facial Expression Analysis

The principles of FACS have become foundational to modern computer vision and AI. Machine learning systems trained on FACS-coded data can now detect and classify facial expressions in real time, opening new possibilities for automated emotional analysis.

These AI systems use deep learning to identify Action Units from video or photographic input, often achieving accuracy comparable to trained human FACS coders. Applications range from market research (measuring consumer emotional responses to products and advertisements) to healthcare (monitoring patient pain levels or emotional states) to education (assessing student engagement during remote learning).
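
A typical post-processing step in such systems is thresholding: the model emits a continuous intensity score per Action Unit, and scores above a cutoff count as "active." The model itself is assumed here (the scores below are hypothetical, normalized to 0–1); only the thresholding logic is sketched:

```python
# Sketch: turning per-AU intensity scores (as an AU-detection model might
# emit, normalized to 0..1) into binary activations. The model producing
# these scores is assumed, not shown.

def active_aus(intensities: dict[int, float], threshold: float = 0.5) -> set[int]:
    """Return the set of AUs whose predicted intensity meets the threshold."""
    return {au for au, score in intensities.items() if score >= threshold}

scores = {6: 0.82, 12: 0.91, 4: 0.12}   # hypothetical model output
print(sorted(active_aus(scores)))        # [6, 12]
```

The resulting AU set can then feed the same kind of combination rules a human FACS coder applies — here, AU 6 + AU 12 would be read as a Duchenne smile.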

At MeByFace, we combine FACS-informed expression analysis with structural face reading to create a comprehensive portrait of what your face reveals. Our AI measures both the permanent proportions and features of your face (drawing on traditions like Mian Xiang and modern proportion research) and the expressive patterns visible in your photo (informed by FACS principles). The result is an analysis that bridges the gap between ancient observational wisdom and modern scientific measurement. Learn more on our How It Works page, or explore the question of accuracy in our article on whether face reading is accurate.
