LessWrong

LessWrong 2009-03 education active
Also known as: Rationalism, Rationalist Community, Yudkowsky

What It Means

#LessWrong refers to the rationalist community and blog founded in 2009 by Eliezer Yudkowsky, dedicated to refining human reasoning, overcoming cognitive biases, and confronting existential risks such as unaligned AI. It has influenced Silicon Valley, Effective Altruism, and the AI safety movement, while cultivating a reputation for brilliant but socially awkward contrarians.

Origin & Context

Eliezer Yudkowsky, an AI researcher and writer, created LessWrong in 2009 as a dedicated home for his “Sequences”: a long series of essays on rationality, cognitive science, probability, and AI risk. The community attracted programmers, mathematicians, and philosophers interested in “how to think good” and in preventing an AI apocalypse.

Timeline:

  • 2006-2008: Yudkowsky writes Sequences on Overcoming Bias blog
  • 2009: LessWrong launches as dedicated platform
  • 2010-2012: Community grows; meetups form in SF, Boston, NYC
  • 2013-2015: Influences Effective Altruism, AI safety field
  • 2015-2017: Slate Star Codex (Scott Alexander) becomes the more influential rationalist blog
  • 2017-2020: LessWrong 2.0 redesign; renewed activity
  • 2022-2023: Mainstream alarm over AI (ChatGPT, GPT-4) brings renewed attention to Yudkowsky’s long-standing warnings

Cultural Impact

  • AI safety: The LessWrong community pioneered AI alignment research in the 2000s-2010s, when it was widely dismissed as science fiction; it is now a mainstream concern
  • Effective Altruism: LessWrong rationalists helped found the EA movement; the two communities overlap heavily
  • Silicon Valley: Influenced tech culture (Thiel Fellows, YC founders cite LessWrong)
  • Cognitive bias literacy: Popularized concepts (availability heuristic, planning fallacy, Pascal’s mugging)
  • Prediction markets: Community popularized calibrated forecasting through PredictionBook and later Metaculus
  • Criticism—elitism: Accused of intellectual arrogance, dismissing conventional wisdom
  • Criticism—groupthink: Despite rationality focus, developed own orthodoxies (AI doom, cryonics, polyamory)
  • Criticism—social awkwardness: Stereotyped as brilliant but interpersonally challenged; gender imbalance

Key Concepts

  • The map is not the territory: Mental models ≠ reality; update beliefs to match evidence
  • Bayesian thinking: Update probability estimates as new evidence emerges (see the sketch after this list)
  • Cognitive biases: Systematic thinking errors (confirmation bias, sunk cost fallacy, etc.)
  • Existential risk (x-risk): Threats to human survival/potential (AI, bioweapons, nuclear war)
  • AI alignment: Ensuring advanced AI systems remain beneficial to humans
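
The “Bayesian thinking” item above is just Bayes’ theorem applied to everyday beliefs: P(H|E) = P(E|H)·P(H) / P(E). A minimal, illustrative Python sketch of that update rule follows; the function name and numbers are hypothetical examples, not anything drawn from LessWrong itself.

```python
def bayes_update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Posterior probability of a hypothesis after observing one piece of evidence.

    Bayes' theorem: P(H|E) = P(E|H)·P(H) / [P(E|H)·P(H) + P(E|~H)·P(~H)]
    """
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Example: start 10% confident a claim is true, then observe evidence that is
# 4x more likely if the claim is true (0.8) than if it is false (0.2).
posterior = bayes_update(prior=0.10, likelihood_if_true=0.8, likelihood_if_false=0.2)
print(f"Posterior: {posterior:.2f}")  # ~0.31
```

Repeating this update as each new piece of evidence arrives is the “update your probability estimates” habit the community advocates.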

Notable Figures

  • Eliezer Yudkowsky: Founder, AI safety researcher, author of Harry Potter and the Methods of Rationality
  • Scott Alexander: Psychiatrist, blogger (Slate Star Codex/Astral Codex Ten)
  • Nate Soares: Executive Director of MIRI (Machine Intelligence Research Institute)
  • Julia Galef: Co-founder of the Center for Applied Rationality; author of The Scout Mindset

#Rationalism #AISafety #EffectiveAltruism #CognitiveBias #BayesianThinking #SlateStarCodex #Yudkowsky

Sources

  • LessWrong.com (2009-present)
  • Eliezer Yudkowsky, Rationality: From AI to Zombies (2015 compilation of Sequences)
  • Slate Star Codex blog (2013-2020, now Astral Codex Ten)
  • MIRI (Machine Intelligence Research Institute)
