Foundational: build the core concepts you will reuse in later exercises.
Exercise 1 of 4

Bayes Billiards Simulation

Place a forecast, gather evidence, and watch your accuracy improve.

This is hands-on Bayesian updating with immediate feedback.

Learning Objectives

By the end, you will be able to:

  1. Place a prior forecast and update it as new evidence arrives.
  2. Explain how evidence shifts a probability estimate over time.
  3. Track forecast error to see improvement across rounds.
  4. Connect base rates to alert interpretation in cyber scenarios.

Use left and right arrow keys to move the forecast line. Press Enter or Space to place your forecast.

Learning Debrief

What You Just Learned

You Did Bayesian Updating

Every time you adjusted your forecast after seeing a red or blue ball, you were intuitively applying Bayes' Theorem:

P(Position | Evidence) = P(Evidence | Position) × P(Position) / P(Evidence)

This simulation uses a simplified update rule for teaching. A rigorous implementation would use a full grid approximation or MCMC sampling to compute the posterior.
  • Prior belief: Your initial guess (uniform probability across table)
  • Likelihood: Probability of seeing that evidence if cue ball is at your guessed position
  • Posterior belief: Your updated estimate after incorporating new evidence
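The prior → likelihood → posterior loop above can be sketched with the grid approximation the debrief mentions. This is an illustrative model, not the simulation's actual code: it assumes the cue ball sits at an unknown position p on a [0, 1] table and that each evidence ball lands to the left of the cue with probability p.

```python
# Grid approximation of the Bayes billiards posterior (illustrative sketch).

def update(grid_probs, positions, landed_left):
    """One Bayesian update: multiply prior by likelihood, renormalize."""
    likelihood = [p if landed_left else (1 - p) for p in positions]
    unnorm = [prior * lk for prior, lk in zip(grid_probs, likelihood)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

N = 101
positions = [i / (N - 1) for i in range(N)]
probs = [1 / N] * N  # prior belief: uniform across the table

# Suppose we observe three balls left of the cue and one to the right
for landed_left in [True, True, True, False]:
    probs = update(probs, positions, landed_left)

# Posterior mean: our updated estimate of the cue-ball position
estimate = sum(p * pr for p, pr in zip(positions, probs))
print(f"estimated cue position: {estimate:.3f}")
```

With three left and one right observation, the posterior mean lands near 0.667, matching the exact Beta(4, 2) result for this toy setup.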

Applying This to Cybersecurity

Billiards Table

  • Question: Where is the cue ball?
  • Evidence: Red/blue ball positions
  • Update: Refine forecast each roll

Threat Intelligence

  • Question: Is this IP malicious?
  • Evidence: Threat feed alerts
  • Update: Adjust threat score with data

Real-World Scenario

Base Rate: 2% of IPs are actually malicious
Detection Rate: 90% true positive rate (catches 90% of malicious IPs)
False Positive Rate: 5% of benign IPs are flagged
Question: If an alert fires, what's the actual probability the IP is malicious?

Using Bayes' Theorem:

P(Malicious | Alert) = (0.90 × 0.02) / [0.90 × 0.02 + 0.05 × 0.98]
P(Malicious | Alert) = 0.018 / 0.067 = 0.269 or 27%

Key Insight: Even with a 90% accurate detection system, only 27% of alerts indicate truly malicious IPs—because the base rate is so low. This is why understanding priors (base rates) is critical in cybersecurity.

Insider Threat Triage

Base Rate: 0.3% of employees attempt data exfiltration per quarter
Detection Rate: 70% of true exfil attempts trigger the DLP rule
False Positive Rate: 3% of normal activity triggers the rule
Question: If the DLP alert fires, how likely is true exfiltration?

Using Bayes' Theorem:

P(Exfil | Alert) = (0.70 × 0.003) / [0.70 × 0.003 + 0.03 × 0.997]
P(Exfil | Alert) ≈ 0.0021 / 0.032 = 0.066 or 6.6%

Vulnerability Exploitation

Base Rate: 1% of internet-facing servers are exploited within 30 days of disclosure
Detection Rate: 80% of real exploits trigger the rule
False Positive Rate: 10% of benign traffic triggers the rule
Question: If the rule triggers, what is the probability of active exploitation?

Using Bayes' Theorem:

P(Exploit | Alert) = (0.80 × 0.01) / [0.80 × 0.01 + 0.10 × 0.99]
P(Exploit | Alert) ≈ 0.008 / 0.107 = 0.075 or 7.5%
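All three scenarios follow the same formula, so they can be checked side by side. A self-contained sketch (scenario labels and function name are illustrative):

```python
# Applying Bayes' Theorem to each alert-triage scenario from this page.

def posterior_given_alert(base_rate, tpr, fpr):
    """P(condition | alert) = P(alert | condition) P(condition) / P(alert)."""
    return (tpr * base_rate) / (tpr * base_rate + fpr * (1 - base_rate))

scenarios = {
    "Malicious IP":   (0.02,  0.90, 0.05),
    "Insider threat": (0.003, 0.70, 0.03),
    "Exploitation":   (0.01,  0.80, 0.10),
}
results = {name: posterior_given_alert(base, tpr, fpr)
           for name, (base, tpr, fpr) in scenarios.items()}
for name, p in results.items():
    print(f"{name}: {p:.1%}")
```

The three posteriors (about 27%, 6.6%, and 7.5%) reproduce the worked numbers above: the lower the base rate, the less a single alert tells you.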