Class 7

Hidden Knowledge: Bluffing, Signalling, and the Price of Information

When players can't see each other's cards, the game changes completely — and the cost of proving who you are becomes the strategy itself.

In 2005, a 26-year-old named Matt, recently graduated from a mid-tier university, applied for a software engineering position at Google. His résumé was unremarkable — decent GPA, no elite internships, no publications. He was rejected without an interview. Two years later, having spent nights and weekends contributing to a prominent open-source project — writing code that anyone could inspect — he reapplied. This time, Google flew him to Mountain View. The open-source contributions cost Matt hundreds of hours of unpaid labor. They didn't teach him anything he couldn't have learned faster in a paid job. But they solved a problem that no résumé bullet point could: they proved, in a way that was extraordinarily costly to fake, that he was genuinely talented.

Matt's story captures the central puzzle of this chapter. In every game we've analysed so far, players knew each other's payoffs. A firm knew its rival's cost structure. A prisoner knew exactly how much the other prisoner valued freedom versus loyalty. That assumption — complete information — was convenient, and it let us develop powerful tools. But it was a lie. In most strategic situations that matter, you don't know what the other player truly wants, what they're capable of, or what they're willing to endure. You must infer it. And they know you're trying to infer it. Welcome to the world of incomplete information.

From Complete to Incomplete Information

Let's be precise about what we're breaking. In Chapters 1 through 6, we operated under an assumption so pervasive it was almost invisible: every player knew the full structure of the game — all the strategies, all the payoffs, for all the players. This is complete information. Note that this is different from perfect information, which we discussed in Chapter 2. Perfect information means you can observe every move that has been made before choosing your own (as in chess). Complete information means you know what every player wants — you know their utility functions. A game can have complete but imperfect information (like a simultaneous-move game where you know payoffs but not your opponent's choice) or incomplete information (where you don't know some player's payoffs at all).

Think about the difference. When you play rock-paper-scissors, you can't observe your opponent's choice (imperfect information), but you know they prefer winning to losing to drawing (complete information). Now imagine playing a negotiation game against someone, and you don't know whether they're a tough bargainer who would rather walk away than accept a bad deal, or a desperate seller who would accept almost anything. That is incomplete information. Their preferences — their type — are hidden from you.

Why does this matter strategically? Because when you don't know someone's type, every action they take becomes a potential clue. If the seller rejects your low opening offer, does that mean they're genuinely tough, or are they bluffing? If a job candidate has a PhD, does that mean they're brilliant, or did they simply have the patience to endure graduate school? Incomplete information turns every game into a detective story, and in this chapter, we'll develop the tools to read the clues.


Harsanyi's Revolution: Nature as a Player

For decades, the problem of incomplete information seemed analytically intractable. If Player 1 doesn't know Player 2's payoffs, then Player 1 doesn't know what game they're playing. And if Player 1 doesn't know what game they're playing, how can we define equilibrium? John Harsanyi's breakthrough, published as a three-part series (Harsanyi, 1967), was elegant in its simplicity: introduce a fictitious player called Nature.

Here's how it works. Before the game begins, Nature moves first, randomly assigning each player a type from some set of possible types. Each player observes their own type but not the types of the other players. Crucially, the probability distribution over types — the chances that Nature assigns any particular type — is common knowledge. Everyone knows the distribution; they just don't know the realisation.

This is a subtle but profound move. Consider an insurance market. An applicant knows whether they are a safe driver or a reckless one. The insurance company doesn't know this specific applicant's type, but it knows the population distribution: perhaps 70% of applicants are safe and 30% are reckless. By modelling "Nature assigns the applicant a type — safe with probability 0.7, reckless with probability 0.3," Harsanyi converted a game of incomplete information into a game of imperfect information. And games of imperfect information — games with information sets, like we studied in Chapter 2 — we already know how to analyse.

The resulting game is called a Bayesian game, and the equilibrium concept is Bayesian Nash equilibrium: a strategy profile where each type of each player is maximising their expected payoff, given their beliefs about the distribution of other players' types (Gibbons, 1992). It's Nash equilibrium, but with an extra layer — you're optimising not against a known opponent, but against a distribution of possible opponents.

Figure 1: Harsanyi's transformation — converting incomplete information into imperfect information by introducing Nature as a player who assigns types.

Bayesian Updating: Learning from What You See

If the other player's type is hidden, how do you learn anything about it? The answer is Bayesian updating — the process of rationally revising your beliefs when you observe new evidence. You begin with a prior belief (your initial probability estimate about the other player's type), you observe an action or signal, and you use Bayes' rule to compute a posterior belief — your updated probability given what you've seen.

The mathematics are straightforward. Suppose you think there's a 40% chance your opponent is Type H (high) and a 60% chance they're Type L (low). You observe them take some action — say, investing heavily in advertising. You know that Type H firms advertise with probability 0.9, while Type L firms advertise with only probability 0.3. Bayes' rule says:

P(H | advertise) = P(advertise | H) × P(H) / [P(advertise | H) × P(H) + P(advertise | L) × P(L)]

Plugging in: P(H | advertise) = (0.9 × 0.4) / (0.9 × 0.4 + 0.3 × 0.6) = 0.36 / (0.36 + 0.18) = 0.36 / 0.54 ≈ 0.667. Your belief that they're Type H jumped from 40% to 67%. The action carried information because it was much more likely to come from a Type H player.
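The update is mechanical enough to express in a few lines of code. Here is a minimal sketch (the function name and structure are my own) that reproduces the advertising calculation, alongside the poker example from the "Think About It" box below and the case where a signal carries no information:

```python
def posterior(prior_h, p_signal_given_h, p_signal_given_l):
    """Bayes' rule: probability of type H after observing the signal."""
    joint_h = p_signal_given_h * prior_h        # P(signal and H)
    joint_l = p_signal_given_l * (1 - prior_h)  # P(signal and L)
    return joint_h / (joint_h + joint_l)

print(round(posterior(0.4, 0.9, 0.3), 3))   # advertising example → 0.667
print(round(posterior(0.5, 0.8, 0.95), 3))  # poker example → 0.457
print(round(posterior(0.4, 0.5, 0.5), 3))   # equally likely signal → 0.4 (prior unchanged)
```

The third call illustrates the point made below: when both types send the signal with the same probability, the posterior equals the prior and the observation is uninformative.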

Notice the critical insight: the informativeness of a signal depends on how differently the two types behave. If both types advertise with equal probability, observing advertising tells you nothing — your posterior equals your prior. If only Type H ever advertises, then observing advertising tells you everything — your posterior jumps to 100%. The greater the gap in behaviour between the types, the more informative the signal. This connects directly to our work on mixed strategies in Chapter 4: when a player randomises, their actions become partially informative rather than fully revealing (Fudenberg & Tirole, 1991).

Think About It

Suppose you believe there's a 50% chance your opponent is bluffing in a poker game. You know that bluffers raise 80% of the time, while players with strong hands raise 95% of the time. Your opponent raises. Before computing: does your belief that they're bluffing go up or down? Why? Now compute the posterior using Bayes' rule.

The key intuition: seeing a raise should make you think they're less likely to be bluffing, because strong hands raise even more often than bluffers do. The posterior is (0.8 × 0.5) / (0.8 × 0.5 + 0.95 × 0.5) = 0.40 / 0.875 ≈ 0.457. A small shift, because both types raise frequently — the signal is noisy. Try the widget below to build your intuition across many different scenarios.

🎲 The Bayesian Belief Updater

Set a prior, define how likely each type is to send a signal, then guess the posterior before revealing the answer.


Signalling: When Actions Speak Louder Than Words

Bayesian updating tells us how to read signals. But who sends them, and why? This is the domain of signalling theory, and its most famous model comes from Michael Spence's analysis of the job market (Spence, 1973).

Spence's Job Market Model

Consider a labour market with two types of workers: high-ability (H) and low-ability (L). A high-ability worker produces output worth $100,000 to a firm; a low-ability worker produces $50,000. Workers know their own type, but employers cannot directly observe ability. If the employer simply offers the average wage — say, $75,000 if the population is evenly split — high-ability workers are underpaid and low-ability workers are overpaid. This is a pooling outcome, and it creates a problem: high-ability workers want to distinguish themselves.

Enter education. In Spence's model — and this is the part that shocks students every time — education need not make workers more productive. It doesn't have to teach useful skills. It simply needs to be differentially costly. Specifically, suppose that getting a degree costs a high-ability worker $30,000 in effort, time, and foregone wages, but costs a low-ability worker $60,000 (because the coursework is harder for them, it takes longer, etc.). This difference in cost is called the single-crossing property.

Now consider a potential separating equilibrium: employers believe that workers with degrees are Type H and those without are Type L, paying $100,000 and $50,000 respectively. Would either type want to deviate?

Check each type in turn. A high-ability worker who gets the degree earns $100,000 − $30,000 = $70,000; dropping the degree would mark them as Type L and pay only $50,000, so they stay. A low-ability worker earns $50,000 without a degree; mimicking Type H would yield $100,000 − $60,000 = $40,000, so they stay too. The equilibrium holds. Education separates the types not because it adds value, but because it's cheap for the talented and expensive for everyone else. The signal is credible for precisely the reason a threat is credible (recall Chapter 5): it would be too costly to fake. The high-ability worker burns $30,000 to prove their type — a loss in absolute terms, but a gain relative to being pooled with low-ability workers.

Figure 2: The single-crossing property in Spence's signalling model — education is differentially costly, enabling separation of types.

Separating vs. Pooling Equilibria

The separating equilibrium isn't the only possibility. Consider a pooling equilibrium where both types choose the same education level — say, no education. If employers believe that education conveys no information (because everyone does the same thing), they offer the average wage of $75,000 to everyone. Would any type deviate? Only if getting a degree and being perceived as Type H (earning $100,000) is worth the cost. For Type H, deviating yields $100,000 − $30,000 = $70,000, which is less than $75,000. For Type L, deviating yields $100,000 − $60,000 = $40,000, which is also less than $75,000. Neither type deviates, so this pooling equilibrium also survives.
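The no-deviation checks for both the separating and the pooling equilibrium reduce to a handful of comparisons. The sketch below uses the chapter's dollar figures; the function names, and the optimistic off-path belief that a deviating degree-holder is read as Type H, are my own framing:

```python
# Payoffs from the chapter's example, in thousands of dollars.
WAGE_H, WAGE_L, WAGE_POOL = 100, 50, 75  # wage if believed H, believed L, or pooled
COST = {"H": 30, "L": 60}                # each type's cost of getting the degree

def check_separating():
    # On path: H gets a degree and earns 100; L gets no degree and earns 50.
    h_stays = (WAGE_H - COST["H"]) >= WAGE_L   # 70 >= 50: H keeps the degree
    l_stays = WAGE_L >= (WAGE_H - COST["L"])   # 50 >= 40: L won't mimic
    return h_stays and l_stays

def check_pooling():
    # On path: neither type gets a degree; both earn the average wage 75.
    # A deviator with a degree is (optimistically) believed to be Type H.
    h_stays = WAGE_POOL >= (WAGE_H - COST["H"])  # 75 >= 70
    l_stays = WAGE_POOL >= (WAGE_H - COST["L"])  # 75 >= 40
    return h_stays and l_stays

print(check_separating(), check_pooling())  # → True True
```

Both checks pass, confirming that the separating and pooling outcomes are each sustainable as equilibria under the stated beliefs.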

Which equilibrium actually emerges? This is one of the deepest questions in signalling theory, and it depends on refinement criteria — ways of ruling out "unreasonable" beliefs. The intuitive criterion, developed by Cho and Kreps, argues that if the only type who could possibly benefit from deviating to a particular education level is Type H, then employers should believe the deviator is Type H (Sobel, 2007). Under this refinement, many pooling equilibria collapse, and the separating equilibrium survives. The intuition is appealing: if only a talented worker would bother getting the degree, it's unreasonable for employers to ignore the degree.

Think About It

Can you think of a real-world signal that works like Spence's education model — not primarily because of what it teaches, but because of what it proves about the person willing to endure it? Consider: military boot camp, unpaid internships, marathon running, writing a PhD dissertation. What makes each costly, and why is that cost harder to bear for the "wrong" type?

Signalling Beyond the Job Market

Spence's logic extends far beyond education. Riley (2001) surveys twenty-five years of applications of this idea across economics and biology.

In every case, the structure is identical: a privately-informed party takes a costly action that is differentially costly by type, and an uninformed party updates their beliefs accordingly. The signal works not in spite of its cost, but because of it.

📊 Spence's Signalling Game Simulator

Explore how cost differentials drive separating and pooling equilibria. Adjust worker types, costs, and employer beliefs.


Screening: When the Uninformed Party Moves First

In signalling, the informed party (the worker) moves first, choosing an action that reveals their type. But what if the uninformed party moves first instead? This is screening, and the distinction matters more than it might seem (Cho & Kreps, 1987).

In a screening model, the uninformed party offers a menu of options — a set of contracts — designed so that each type self-selects into the contract intended for them. The classic example is insurance. An insurance company cannot observe whether you're a safe or reckless driver. But it can offer you a choice: full coverage at a high premium, or a cheaper policy that carries a large deductible.

The key insight: reckless drivers, who expect to file claims frequently, prefer full coverage even at a higher premium. Safe drivers, who rarely file claims, prefer to save on premiums and accept a higher deductible. The menu is designed so that each type voluntarily reveals themselves through their choice. Nobody is forced to disclose anything — the menu structure does the work. This is sometimes called self-selection or incentive-compatible contract design.

The asymmetry between signalling and screening is about who bears the cost of information revelation. In signalling, the informed party pays (the worker buys education). In screening, the uninformed party designs the mechanism, and the "cost" is borne through distortion — the insurance company must offer the safe type an imperfect contract (with a higher deductible than they'd get under full information) to prevent the reckless type from mimicking them. Riley (2001) emphasises that in signalling games, all contracts break even in equilibrium, while screening games can sustain cross-subsidisation between types.
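The self-selection logic can be verified numerically. The figures below are hypothetical (the text gives no specific premiums or deductibles): a possible loss of $10,000, claim probabilities of 0.1 for safe and 0.5 for reckless drivers, and a two-contract menu. Each driver simply picks the contract with the lowest expected cost:

```python
# Hypothetical insurance screening menu — all figures are illustrative
# assumptions, not from the text.
LOSS = 10_000
CLAIM_PROB = {"safe": 0.1, "reckless": 0.5}

# Menu: contract -> (premium, deductible). Full coverage is pricier;
# the cheap contract leaves the driver exposed to a $4,000 deductible.
MENU = {"full": (2_500, 0), "high_deductible": (800, 4_000)}

def expected_cost(driver, contract):
    """Premium plus the expected out-of-pocket deductible payment."""
    premium, deductible = MENU[contract]
    return premium + CLAIM_PROB[driver] * deductible

def chosen(driver):
    """The contract this driver type voluntarily selects."""
    return min(MENU, key=lambda c: expected_cost(driver, c))

print(chosen("safe"), chosen("reckless"))  # → high_deductible full
```

The safe driver's expected cost is $1,200 under the high-deductible contract versus $2,500 for full coverage; the reckless driver faces $2,800 versus $2,500. Each type reveals itself through its choice, which is exactly the incentive-compatibility property the menu is designed to achieve.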


Cheap Talk: When Words Are Free

Not all communication is costly. Politicians make campaign promises. Job applicants describe themselves as "hardworking." Friends recommend restaurants. These messages are cheap talk — they're costless to send, regardless of whether they're true (Crawford & Sobel, 1982).

The central question of cheap talk theory is: can costless messages convey any information at all? If messages are free, why wouldn't every type send the most favourable message? A low-ability worker could say "I'm high-ability" just as easily as a high-ability worker. The signal has zero differential cost — the single-crossing property fails completely.

Crawford and Sobel's (1982) seminal model shows that the answer depends on how aligned the players' interests are. If the sender and receiver have identical preferences — they both want the receiver to take the action that's optimal given the true state — then the sender has no incentive to lie, and cheap talk can be fully informative. But when interests diverge, the sender has an incentive to shade the truth, and the receiver, knowing this, discounts the message. The result is partial information transmission at best.

"In Crawford and Sobel's framework, the degree of information transmission is inversely related to the degree of interest conflict between sender and receiver. When preferences are perfectly aligned, talk is cheap but honest. When they're diametrically opposed, talk is truly just babble." — Sobel (2007)

Consider some examples. A restaurant review from a friend with similar tastes is informative cheap talk — your interests are aligned, so they have little reason to mislead you. A car salesman's claim that "this is the best deal in town" is much less informative — his interest (selling at a high price) conflicts with yours (buying at a low price). A politician's promise to "fight for working families" is perhaps the purest form of cheap talk — costless, vague, and sent by every type regardless of true intent.

An important result from Crawford and Sobel: babbling equilibria always exist. A babbling equilibrium is one where the sender sends random messages, and the receiver ignores all messages. This is always a Nash equilibrium of a cheap talk game because, if the receiver ignores messages, the sender is indifferent about what they say, and if the sender babbles, the receiver is right to ignore them. The existence of babbling equilibria underscores a key point: unlike costly signals, which are credible by construction, cheap talk messages are only informative if both parties tacitly agree to make them so.

Think About It

Your flatmate tells you the restaurant down the street is "amazing." Is this a costly signal, a screen, or cheap talk? What would change if they had invested money in the recommendation — say, by offering to pay for your meal if you don't like it?

Figure 3: The spectrum of strategic communication — from costless cheap talk to credible costly signals to screening mechanisms designed by the uninformed party.

Putting It All Together: A Taxonomy of Strategic Communication

We now have three fundamental categories for understanding how information moves (or fails to move) between players with asymmetric information:

  1. Signalling — The informed party moves first, taking a costly action that reveals their type. Credibility comes from differential cost (single-crossing). Example: education, product warranties, venture capital investment.
  2. Screening — The uninformed party moves first, offering a menu that induces self-selection. Credibility comes from incentive-compatible design. Example: insurance deductible menus, airline ticket classes (business vs. economy), volume discounts.
  3. Cheap talk — Costless messages that may or may not carry information, depending on preference alignment. Example: political promises, pre-play communication in coordination games, job interview claims.

These categories connect deeply to concepts from earlier chapters. A signal is credible for the same reason a commitment is (Chapter 5) — it involves a real cost that the wrong type wouldn't bear. Screening relies on backward induction (Chapter 3) — the uninformed party reasons backward from the types' incentives to design the optimal menu. Bayesian updating requires the probabilistic reasoning we developed with mixed strategies (Chapter 4). And the information sets from Chapter 2 provide the formal representation: when the employer cannot distinguish between a high-ability and low-ability worker, those two nodes sit in the same information set.

Practice classifying real-world communication with the widget below.

🎯 The Signal Quality Spectrum

Classify each example as cheap talk, costly signal, or screening mechanism. Click a category, then check your answer.


Why Information Asymmetry Matters: Market Consequences

The concepts in this chapter aren't merely theoretical curiosities. Information asymmetry can cause entire markets to unravel. George Akerlof's famous "Market for Lemons" (which we'll formalise more fully later in the course) shows that when buyers can't distinguish high-quality from low-quality sellers, the average price falls, high-quality sellers exit, and the market may collapse entirely. Signalling, screening, and even cheap talk are all mechanisms that societies develop to fight this unravelling.

Universities exist partly as signalling institutions. Insurance companies invest billions in screening mechanisms. Professional certifications, brand reputations, online review systems, money-back guarantees — all of these are responses to the fundamental problem Harsanyi formalised: players don't know each other's types, and that ignorance has strategic consequences.

The framework also illuminates some counterintuitive policy implications. If education primarily serves as a signal rather than building human capital, then subsidising education might increase the total amount of signalling without increasing total productivity — everyone gets more education, but the relative ranking stays the same. This is the signalling arms race, and it suggests that the social return to education may be lower than the private return. Whether education is primarily a signal or primarily human capital accumulation remains one of the most important — and most contested — questions in labour economics.

Think About It

If everyone in a society gets a university degree, does the degree still work as a signal? What would you predict happens to the signalling value of a bachelor's degree as university attendance rates rise? What new signals might emerge to replace it?


Formal Summary: The Bayesian Game Framework

Let's consolidate the formal machinery. A Bayesian game consists of (Fudenberg & Tirole, 1991; Gibbons, 1992):

  1. A set of players: {1, 2, …, n}
  2. A set of types for each player: T_i
  3. A common prior probability distribution over types: p(t_1, t_2, …, t_n)
  4. A set of actions for each player: A_i
  5. Payoff functions that depend on actions and types: u_i(a_1, …, a_n; t_1, …, t_n)

A Bayesian Nash equilibrium is a strategy profile σ* = (σ*_1, …, σ*_n), where σ*_i maps types to actions (or mixed actions), such that for every player i and every type t_i, σ*_i(t_i) maximises player i's expected payoff given their beliefs about other players' types and the strategies those types play.
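To see the definition in action, here is a brute-force equilibrium check for a deliberately tiny, hypothetical Bayesian game (all payoffs and the type probability are invented for illustration): player 1 has a single type; player 2 is type H with probability 0.6 and type L otherwise; each player chooses action 0 or 1. Against type H the players want to coordinate; type L prefers to mismatch:

```python
from itertools import product

P_H = 0.6            # probability Nature makes player 2 type H
TYPES = ["H", "L"]
ACTIONS = [0, 1]

# u[type][(a1, a2)] = (payoff to player 1, payoff to player 2).
# Type H plays a coordination game; type L wants to mismatch player 1.
u = {
    "H": {(a1, a2): ((2, 2) if a1 == a2 else (0, 0))
          for a1 in ACTIONS for a2 in ACTIONS},
    "L": {(a1, a2): ((0, 0) if a1 == a2 else (1, 3))
          for a1 in ACTIONS for a2 in ACTIONS},
}

def eu1(a1, s2):
    """Player 1's expected payoff against type-contingent strategy s2."""
    return P_H * u["H"][(a1, s2["H"])][0] + (1 - P_H) * u["L"][(a1, s2["L"])][0]

def is_bne(a1, s2):
    # Player 1 must best-respond to the distribution over player 2's types...
    if any(eu1(b, s2) > eu1(a1, s2) for b in ACTIONS):
        return False
    # ...and every type of player 2 must best-respond to a1.
    return all(u[t][(a1, s2[t])][1] >= u[t][(a1, b)][1]
               for t in TYPES for b in ACTIONS)

equilibria = [
    (a1, s2)
    for a1 in ACTIONS
    for s2 in ({"H": h, "L": l} for h, l in product(ACTIONS, ACTIONS))
    if is_bne(a1, s2)
]
print(equilibria)  # → [(0, {'H': 0, 'L': 1}), (1, {'H': 1, 'L': 0})]
```

Note how each equilibrium strategy for player 2 is a map from types to actions — type H matches player 1, type L mismatches — and player 1 optimises against the 0.6/0.4 mixture of opponents, exactly as the definition requires.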

In a signalling game (a dynamic Bayesian game), we use a stronger concept: Perfect Bayesian Equilibrium (PBE). A PBE requires not only that strategies are sequentially rational (optimal at every information set), but also that beliefs are derived from Bayes' rule wherever possible. When a player observes an action that was supposed to have zero probability in equilibrium (an "off-path" action), Bayes' rule doesn't apply, and additional refinements — like the intuitive criterion — restrict what beliefs are "reasonable."

Key Takeaways

  1. Incomplete information means players don't know each other's payoffs. Harsanyi's trick — letting Nature assign types from a commonly known distribution — converts it into imperfect information, which we already know how to analyse.
  2. Bayesian updating turns observed actions into revised beliefs; a signal is informative only to the extent that different types behave differently.
  3. Costly signals, like education in Spence's model, can separate types when the signal is differentially costly — but pooling equilibria may coexist, and refinements like the intuitive criterion are needed to select among them.
  4. Screening reverses the move order: the uninformed party designs a menu so that types reveal themselves through their choices.
  5. Cheap talk can transmit information only insofar as the players' interests are aligned, and babbling equilibria always exist.

Looking Ahead

In Chapter 8, we enter the world of auctions — and the signalling framework returns with a vengeance. In a common-value auction, every bidder is uncertain about the true value of the item, and each bid is a signal. The winner's curse — the phenomenon where the winning bidder has systematically overpaid — is a direct consequence of failing to properly Bayesian-update. We'll also see how auction design is fundamentally a screening problem: the seller designs the rules (the menu), and bidders self-select through their bids.

References

Cho, I.-K., & Kreps, D. M. (1987). Sorting out the differences between signaling and screening models (NBER Technical Working Paper No. 0093). National Bureau of Economic Research. https://ideas.repec.org/p/nbr/nberte/0093.html

Crawford, V. P., & Sobel, J. (1982). Strategic information transmission. Econometrica, 50(6), 1431–1451. https://doi.org/10.2307/1913390

Fudenberg, D., & Tirole, J. (1991). Game theory. MIT Press. https://mitpress.mit.edu/9780262061414/game-theory/

Gibbons, R. (1992). Game theory for applied economists. Princeton University Press. https://press.princeton.edu/books/paperback/9780691003955/game-theory-for-applied-economists

Harsanyi, J. C. (1967). Games with incomplete information played by "Bayesian" players, I–III: Part I. The basic model. Management Science, 14(3), 159–182. https://doi.org/10.1287/mnsc.14.3.159

Riley, J. G. (2001). Silver signals: Twenty-five years of screening and signaling. Journal of Economic Literature, 39(2), 432–478. https://doi.org/10.1257/jel.39.2.432

Sobel, J. (2007). Signaling games. In S. N. Durlauf & L. E. Blume (Eds.), The new Palgrave dictionary of economics. Palgrave Macmillan. https://econweb.ucsd.edu/~jsobel/Paris_Lectures/20070527_Signal_encyc_Sobel.pdf

Spence, M. (1973). Job market signaling. The Quarterly Journal of Economics, 87(3), 355–374. https://doi.org/10.2307/1882010