Learn WRE — Reflective Equilibrium lessons

Research

What is the fastest way for humans to discover new metaethical and moral truths, and for the public to judge the newest arguments?

No single method solves this quickly. The strongest practical route is an institutional stack that combines philosophical clarification, empirical testing, forecasting, and transparent public reasoning.

The detailed research page explains how to separate values, empirical beliefs, and method-level disagreement, then assign each layer the right feedback channel and public legitimacy checks.

Open research page: Moral discovery engine. Expanded framework, graphics, institutional stack, public-judgment protocol, and safeguards.

Timing

Current humans are likely living at a uniquely influential time, with the opportunity to do more good for humanity than either our ancestors or future people.

A million-year model supports three separate claims: long-run stakes can be enormous; today may be unusually important; and solving moral truth now may or may not dominate other interventions.

The first claim is robust. The second and third are conditional on lock-in, discounting assumptions, and whether present institutions can shape later irreversible choices.

Practical conclusion: treat this century as potentially pivotal, but keep confidence calibrated. Build better reflective governance while preserving moral plasticity.

Chart: scenario priors (Perils-weighted, Lock-in-weighted, Gradualist) across the eras 2026-2100, 2100-2300, 2300-3000, 3000-10,000, and 10,000-1M.

Scenario prior for when the single highest-leverage era might occur. The present can be unusually important while still leaving substantial probability mass on later centuries.

Empirical stakes in the model

  • ~83B land animals are slaughtered annually.
  • ~78-171B farmed fish and ~1.1-2.2T wild fish are killed annually.
  • Arthropod numbers are vast; insect-welfare stakes can be large under even modest sentience credence.
  • If lock-in risk is low or discounting is strong, near-term welfare reforms become relatively more competitive.

Robust priority set across uncertainty: reduce existential and conflict risk, improve long-run governance, and avoid irreversible value mistakes.

Source model: Impact Timing and Moral Priorities in a Million-Year Human Future.

Chart: relative value of a one-year acceleration (illustrative orders of magnitude, 1x to 100,000x) under four scenarios: no lock-in; q = 1%; q = 1%, d = 1%; and q = 10%.

Timing value usually tracks one year of welfare improvement; it becomes orders of magnitude larger only when that year overlaps irreversible lock-in risk.

A longterm view

Competing ethical theories can climb toward truth from different sides.

As Derek Parfit suggested, routes differ, intermediate judgments diverge, and local views conflict. Yet sustained reflection can reveal convergence on deeper truths.

Mountain View

Each line is a theory-family trajectory. Motion shows inquiry over time: revising cases, principles, and background commitments while climbing.

  • Different starting points: welfare, duty, virtue, contract, relation.
  • Distinct local routes: disagreements persist for long stretches.
  • Truth pressure: broader coherence pulls views upward.
Diagram: trajectories for Consequentialist, Deontological, Virtue, Hybrid, Relational, and Contractualist theory families climb from divergent local judgments toward convergence under reflection, with Truth at the summit.

Site Guide

Navigate by workflow

Foundations

The website's structure now tracks the core debates in moral philosophy.

We reviewed the core entries on moral theory, epistemology, disagreement, dilemmas, realism/cognitivism, consequentialism, deontology, virtue ethics, thought experiments, and reflective equilibrium. Use these links as the conceptual baseline for markets, dialogues, and WRE rounds.

Mutual Deliberation

Taking Everyone's Beliefs Seriously Improves Our Chances of Getting Normative Issues Right Earlier, Which Has Huge Benefits.

Even if humanity only lasts as long as the typical mammalian species (about 1 million years), we would be among the first 0.5% of humans ever to live. Our actions bound the world that the following 99.5% will inherit. If we figure out that X is wrong one year sooner than we otherwise would, this can immediately save hundreds of thousands of lives and leave a better world for the remaining 99.5% of us. The earlier an ethical discovery arrives, the larger its benefit. This does not require certainty that we are uniquely pivotal; it only requires non-trivial chances of lock-in windows and very long-run effects.

Why should I care about future people and whether they will exist?

If you care about the welfare of other living people

  • because their welfare matters: future people's welfare matters just as much.
  • because they can experience pain and suffering: future people are similarly able to experience pain and suffering.
  • because it is good that a person lives a good life: if future people can't exist, they lose out on living lives potentially as good as yours.

Why should I care about other people?

If you care about your future self

  • because I care about myself: consider that you largely don't remember who you were at age 3. Similarly, suppose you are 40 years old today. By the time you are 80, you will likely be very different and only weakly psychologically connected to who you are now. It is highly contested whether you are the same person you were at 15, and whether your future self will be the same person you are today. By caring about your future self, you are already caring for another person.

If you care about your current self

  • because I care about myself: consider the possibility that, if we all cared only for ourselves, each of us would end up worse off.
  • because I matter: the reasons why you matter are likely to indicate that other people (and, many moral philosophers believe, some animals) similarly matter, in many cases just as much as you do.

If you care about the amount of goodness in the world: other people have the potential to live good lives.

How does spending time to think about those philosophical issues benefit the world? What further contributions could I make beyond that of philosophers, who think about those issues for a living?

If philosophers were overall more reliable than other researchers, what would we expect to see? Two things: (a) We'd expect to see philosophers making faster progress than others. (b) We'd expect to see more agreement in philosophy than in other fields.

Obviously, these predictions are the opposite of the truth. So while philosophers may be better thinkers than others in some respects, there is no reason to think philosophers are better at getting to the truth.

Unlike in the sciences and mathematics, you possess the same data as philosophers: intuitions about the world. If you think carefully and persistently about philosophical problems, there is a non-trivial chance that you will come up with an idea that contributes to humanity's current understanding of the world. This chance is significantly higher than the chance of anyone making a contribution in the sciences or mathematics with an equal benefit to the world.

Longterm Welfare

Equal progress in all domains can move welfare forward and extinction forward at the same time.

Equal progress in all domains doesn't increase total human welfare. It causes humans to enjoy higher welfare sooner, but it also causes humans to develop dangerous technology sooner: human-made pandemics, power-seeking AI, and increasingly destructive weapons.

If progress remains equally accelerated across all domains, then humanity also reaches existential catastrophe sooner. In the simple case where every domain is accelerated by 1 year, humanity goes extinct 1 year sooner as well. The gain is one earlier year of welfare; the loss is one entire later year of welfare at roughly the quality of life humanity would otherwise have had in 2026.

Adapted from Toby Ord, On the Value of Advancing Progress.
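The arithmetic can be sketched with a toy model (all numbers hypothetical, no discounting): shifting the whole welfare trajectory one year earlier while also advancing extinction by one year removes exactly one year of welfare at roughly the starting level.

```python
# Toy model of equal acceleration (hypothetical numbers, no discounting).
def total_welfare(first_year, extinction_year, welfare_at):
    """Sum welfare over every year lived before extinction."""
    return sum(welfare_at(t) for t in range(first_year, extinction_year))

def welfare(t):
    # Stylized trajectory: quality of life rises slowly after 2026.
    return 1.0 + 0.01 * (t - 2026)

# Baseline: humanity lives 2026-2125 and goes extinct in 2126.
baseline = total_welfare(2026, 2126, welfare)

# Equal acceleration by one year: each calendar year delivers the next
# year's welfare, and extinction also arrives one year earlier (2125).
accelerated = total_welfare(2026, 2125, lambda t: welfare(t + 1))

# The net loss equals one year of welfare at the unaccelerated 2026 level.
loss = baseline - accelerated
```

Whatever trajectory is plugged in, the difference is always one year at the starting welfare level, which is why equal acceleration cannot raise total welfare in this simple case.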

Chart: welfare over time, from earlier centuries through industrial/scientific acceleration into a high-risk capability era, comparing ordinary progress with equal progress accelerated by one year; the lost welfare year roughly equals one year at the unaccelerated level.

Equal acceleration shifts both good years and dangerous years earlier. If extinction also comes earlier, total welfare need not increase.

Differential Progress

What increases total human welfare instead is differential progress.

Differential progress means accelerating some domains of progress so that they arrive earlier than progress in other domains. The aim is to get norms, institutions, and moral understanding ahead of capabilities that make catastrophic mistakes easier.

By developing social norms and institutions that regulate the development of future technologies, progress in technology becomes less dangerous. This is why ethical discovery, metaethical clarity, state capacity, and international coordination matter instrumentally as well as intrinsically.

We have already discovered many low-hanging-fruit ethical truths. Most people in history did not consider slavery wrong. The remaining work is harder: discovering ethical and metaethical truths that are more difficult to see, but that may matter even more for shaping advanced civilization well.

Diagram: moral and metaethical discovery feeds norms and institutions, which regulate dangerous capabilities; ethical insight arrives earlier, institutions regulate later capabilities, and dangerous technology arrives later. From the present: discover harder truths, build governance capacity, develop capabilities under regulation.

The goal is not to stop progress across the board. It is to bring moral understanding and institutional restraint forward relative to dangerous capability growth.

Wide reflective equilibrium starts from considered moral judgments, introduces candidate principles, checks both against background theories and relevant non-moral facts, and revises any part when conflict appears.

Reflective equilibrium in its pure form is nearly impossible for an individual, but becomes approachable through collective deliberation.

1) Start with judgments: Use relatively confident case judgments as provisional fixed points.

2) Propose principles: Build candidate rules that explain and systematize those judgments.

3) Go wide: Test fit with background theories, empirical constraints, and new cases.

4) Revise iteratively: Adjust judgments, principles, or background assumptions until coherence improves.
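As a schematic illustration only (the case names, weights, and revision rule below are hypothetical, not the platform's actual logic), one pass of steps 1-4 can be sketched as a loop that revises whichever side of a conflict is held less firmly:

```python
# Schematic single pass of reflective-equilibrium revision (names hypothetical).
def conflicts(judgments, principle):
    """Cases where the candidate principle contradicts a considered judgment."""
    return [case for case, verdict in judgments.items() if principle(case) != verdict]

def revise(judgments, weights, principle, threshold=0.5):
    """Revise the less firmly held side of each conflict: weakly held judgments
    yield to the principle; firmly held ones stand as counterexamples to it."""
    counterexamples = []
    for case in conflicts(judgments, principle):
        if weights[case] < threshold:
            judgments[case] = principle(case)   # judgment revised toward principle
        else:
            counterexamples.append(case)        # principle must be revised instead
    return judgments, counterexamples

# Toy example: a naive "maximize lives saved" principle endorses both actions.
judgments = {"trolley_switch": "impermissible", "transplant": "impermissible"}
weights = {"trolley_switch": 0.3, "transplant": 0.9}  # how firmly each is held
principle = lambda case: "permissible"

judgments, counterexamples = revise(judgments, weights, principle)
# The weakly held trolley judgment is revised toward the principle; the firmly
# held transplant judgment survives, forcing a principle revision next round.
```

Iterating this loop, while also testing the principle against background theories and new cases, is what steps 3 and 4 describe.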

    Animation informed by reflective-equilibrium research (Reflective Equilibrium), Rawls, Daniels, Scanlon, and recent directed reflective equilibrium work.

    Reflective Equilibrium Simulation

    Round 1 (Pass 1): Seed considered judgments.

    Coherence score: 0%

    Background support: 0%

    Judgment-principle tension: 0 pts

    Dialogues

    Match, Discuss, Update, Commit

    Studio is locked until the pledge is signed.

    Create A Dialogue

    Good proposition format: a declarative claim that can be true or false (not a question or slogan). See Moral Reasoning and Reflective Equilibrium.

    Your availability (select any half-hour slot in the next 30 days)

    Your Dialogues

    Pick one to manage reservations and matches.

      Reserve and Match

      Reservations require an opposing belief, a confidence score, reasons, and overlapping time slots.

      No active dialogue selected yet.

      Writings for This Dialogue

      Open Dialogue Space

        Your availability (select any half-hour slot in the next 30 days)

        Current Reservations

        Opposing beliefs and overlapping times drive matching.

          Suggested Matches

          Select a dialogue to generate suggested matches.

          Invite A Specific User

            3) Log The Dialogue Outcome

            No participant selected yet.

            High-stakes dialogue

            Eligibility-gated commitments with one-year follow-through

            Use this when the debated proposition has the form “If you [facts], you must [action].” Only eligible participants can join, and crossing 50% credence activates a one-year promise in the action ledger.

            Browse high-stakes dialogues below. Sign the pledge to create, reserve, and log one-year commitments.

            Create A High-stakes Dialogue

            Proposition preview: If you [facts], you must [action].

            Your availability (select any half-hour slot in the next 30 days)

            Your High-stakes Dialogues

            Only eligible participants with the opposite belief can be matched.

              Reserve and Match

              High-stakes dialogues require all participants to meet the factual condition. Matching still prioritizes shared time, opposite beliefs, and the smallest confidence gap.

              No active high-stakes dialogue selected yet.

              Writings for This High-stakes Dialogue

              Open Dialogue Space

                Your availability (select any half-hour slot in the next 30 days)

                Current Eligible Reservations

                Reservations are filtered to participants who attest that they meet the facts.

                  Suggested High-stakes Matches

                  Select a high-stakes dialogue to generate suggested matches.

                  Log The High-stakes Outcome

                  No participant selected yet.

                  Interactive Lab

                  Reflective Equilibrium Lab

                  This embedded page runs the standalone reflective equilibrium simulator with editable sentence pools, account/systematicity/faithfulness weights, and iterative theory/commitment revision.

                  Use this to test wide vs narrow modes, inspect path-dependent convergence behavior, and export iteration history as JSON.
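As a hedged illustration of how such weights could combine (the weight values and inputs below are hypothetical, not the simulator's actual configuration), a weighted objective over account, systematicity, and faithfulness might look like:

```python
# Hypothetical weighted objective over the three desiderata the lab exposes.
def achievement(account, systematicity, faithfulness,
                w_account=0.35, w_systematicity=0.55, w_faithfulness=0.10):
    """Score a (theory, commitments) state; each input lies in [0, 1]."""
    return (w_account * account
            + w_systematicity * systematicity
            + w_faithfulness * faithfulness)

# Compare two candidate states: one systematic but less faithful to the
# initial commitments, one faithful but less systematic.
systematic = achievement(account=0.8, systematicity=0.9, faithfulness=0.4)
faithful = achievement(account=0.8, systematicity=0.5, faithfulness=0.9)
# With these weights, the more systematic state scores higher.
```

Raising the faithfulness weight reverses the ranking, which is exactly the kind of path-dependence the lab lets you inspect.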

                  Open Full Screen Lab Open WRE Case Loop Module

                  Dialogue, Not Debate

                  Discuss your deepest moral convictions, then live by what you genuinely believe.

                  Normativity pairs people with opposing views, asks both sides to update their confidence honestly, and enforces one core rule: if you are genuinely over 50% sure a moral argument is true, you must act on its implication in real life.

                  50%+

                  Action threshold

                  4x

                  Match signals ranked

                  100%

                  Pledge required to join

                  Moral Progress

                  History Shows Progress, But Harder Questions Remain

                  In the 19th century, slavery was treated as normal by many institutions and social orders. People in the 21st century now judge that as deeply wrong. Future generations may similarly judge some of our current practices.

                  The remaining problems are often harder to resolve. This model treats mutual deliberation as the main force that keeps moral progress moving even as difficulty rises.

                  • 19th century

                    Widespread slavery is normalized in many societies.

                  • 21st century

                    Slavery is condemned, yet some harms may still be normalized.

                  • 22nd century and beyond

                    Future citizens may judge our era as morally incomplete.

                  Moral Learning Over Time

                  Year 1850

                  Moral norms around slavery are still widely unjust.

                  Moral adequacy: 0%

                  Difficulty pressure: 0%

                  Deliberation lift: 0 pts

                  Comparative Progress

                  Mathematics vs Non-religious Ethics Across History

                  Both domains improve through argument and critique. But mathematical questions often become tractable faster, while non-religious ethical questions remain entangled with social interests, plural values, and contested tradeoffs.

                  Year 1800

                  Mathematics

                  0%

                  Formal proof standards are strengthening, but many foundations are still unsettled.

                  Non-religious Ethics

                  0%

                  Public moral reasoning exists, but norms are still deeply constrained by power and inherited practice.

                  Progress gap: 0 pts

                  Relative to mathematics, ethical convergence remains slower and more fragile.

                  Core Mechanism

                  The 50% Action Threshold

                  What is measured

                  After each dialogue, each participant reports how confident they are that the discussed claim is true.

                  Trigger condition

                  If your confidence is 51% or higher, your moral uncertainty no longer excuses inaction.

                  Required response

                  You must log a concrete action plan and carry it out in ordinary life. The app tracks this in your action ledger.
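The rule itself is mechanically simple; a minimal sketch (function and field names hypothetical) might look like:

```python
# Minimal sketch of the 50% action threshold (names hypothetical).
ledger = []

def action_required(confidence: float) -> bool:
    """A reported credence strictly above 50% activates the action commitment."""
    return confidence > 0.5

def log_outcome(user: str, claim: str, confidence: float) -> None:
    if action_required(confidence):
        ledger.append({"user": user, "claim": claim,
                       "confidence": confidence, "months_remaining": 12})

log_outcome("alice", "If you [facts], you must [action]", 0.62)
log_outcome("bob", "If you [facts], you must [action]", 0.38)
# Only the first report crosses the threshold and enters the 12-month ledger.
```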

                  Pairing Logic

                  How Matching Works

                  1) Shared time slots

                  We only suggest matches when both participants have overlapping 30-minute availability windows.

                  2) Opposite beliefs

                  We then require that participants currently disagree about whether the proposition is true.

                  3) Closest confidence levels

Among opposing participants with shared time slots, we prefer the smallest confidence gap to maximize productive exchange.

                  4) Method-informed tie-breakers

                  If confidence gaps are similar, we prioritize pairs with better disagreement-depth signals (case, principle, background theory) and clearer reasons.

                  Matching Priority

                  Shared availability → Opposite belief → Lowest confidence gap → Better depth/reason profile

                  Participants can always send direct invites with their availability. Method anchors: Moral Disagreement · Reflective Equilibrium
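The four-step priority can be sketched as a filter-then-sort (the data model and field names here are hypothetical, not the platform's actual schema):

```python
# Hypothetical sketch of the matching priority: shared availability ->
# opposite belief -> lowest confidence gap -> better depth/reason profile.
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    believes_true: bool   # current stance on the proposition
    confidence: float     # credence in that stance
    slots: frozenset      # available 30-minute windows
    depth_signal: float   # disagreement-depth / clarity-of-reasons score

def suggest_matches(me, others):
    candidates = [p for p in others
                  if p.slots & me.slots                     # 1) shared time slots
                  and p.believes_true != me.believes_true]  # 2) opposite beliefs
    # 3) smallest confidence gap first; 4) higher depth signal breaks ties
    return sorted(candidates,
                  key=lambda p: (abs(p.confidence - me.confidence),
                                 -p.depth_signal))

me = Participant("me", True, 0.7, frozenset({"mon-0900"}), 0.5)
others = [
    Participant("close_opponent", False, 0.60, frozenset({"mon-0900"}), 0.2),
    Participant("far_opponent", False, 0.95, frozenset({"mon-0900"}), 0.9),
    Participant("no_overlap", False, 0.70, frozenset({"tue-0900"}), 0.9),
    Participant("same_side", True, 0.30, frozenset({"mon-0900"}), 0.9),
]
ranking = [p.name for p in suggest_matches(me, others)]
# close_opponent outranks far_opponent; the other two are filtered out.
```

Sorting on a tuple gives the lexicographic priority the list above describes: confidence gap dominates, and the depth signal only decides near-ties.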

                  Dialogue Protocol

                  How to Disagree Without Losing Epistemic Discipline

                  1) Locate the dispute level

                  Before arguing, identify whether disagreement is about a case verdict, a general principle, or background theory.

                  2) Calibrate peer pressure

                  When disagreement is informed and symmetric, reduce confidence and run another reflective-equilibrium cycle.

                  3) Keep revision records

                  Log what changed and why. Coherence gains indicate better fit, not automatic proof of moral truth.

                  Pledge

                  Pledge

                  This is a practical oath, not branding language. Signing records your commitment profile and unlocks deliberation tools.

                  Commitments confirmed: 0 / 3

                  Accountability

                  Action Ledger

                  Every rule-triggered conviction appears here as a 12-month commitment. Each month requires a proof upload before the entry can be completed.

                  Discussion Norms

                  How To Use The Platform Well

                  Steelman First

                  State your counterpart's strongest case before criticizing it.

                  Update Publicly

                  Say what changed in your confidence and why, even when uncomfortable.

                  No Strategic Posturing

                  Confidence reporting is for sincere belief, not social signaling or point-scoring.