Learn WRE — Reflective Equilibrium lessons

Research

How can we accelerate normative discoveries?

Getting normative issues right earlier than we otherwise would has huge benefits.

Normativity is a public nonprofit project for structured disagreement, confidence updates, and accountable action.

A long-term view

Competing ethical theories can climb toward the same summit from different sides.

As Derek Parfit suggested, routes differ, intermediate judgments diverge, and local views conflict. Yet sustained reflection can reveal convergence on deeper truths.

Mountain View

Each line is a theory-family trajectory. Motion shows inquiry over time: revising cases, principles, and background commitments while climbing.

  • Different starting points: welfare, duty, virtue, contract, relation.
  • Distinct local routes: disagreements persist for long stretches.
  • Shared summit pressure: broader coherence pulls views upward.
Legend: Consequentialist · Deontological · Virtue · Hybrid · Relational · Contractualist · Shared summit
From divergent local judgments to convergence under reflection

Site Guide

Navigate by workflow

SEP Foundations

The site's structure tracks core debates from the Stanford Encyclopedia of Philosophy.

We reviewed the core entries on moral theory, epistemology, disagreement, dilemmas, realism/cognitivism, consequentialism, deontology, virtue ethics, thought experiments, and reflective equilibrium. Use these links as the conceptual baseline for markets, dialogues, and WRE rounds.

Mutual Deliberation

Taking Everyone's Beliefs Seriously Improves Our Chances of Getting Normative Issues Correct Earlier, Which Has Huge Benefits

Even if humanity lasts only as long as the typical mammalian species (about 1 million years), we would be among the first 0.5% of humans ever to live. Our actions shape the world that the remaining 99.5% will inherit. If we figure out that X is wrong one year sooner than we otherwise would, this can immediately save hundreds of thousands of lives and leave a better world for the remaining 99.5% of us. The earlier an ethical discovery is made, the greater its benefit.

Why should I care about future people and whether they will exist?

If you care about the welfare of other living people

  • because their welfare matters: future people's welfare matters just as much.
  • because they can experience pain and suffering: future people are similarly able to experience pain and suffering.
  • because it is good that a person lives a good life: if future people can't exist, they lose out on lives potentially as good as yours.

Why should I care about other people?

If you care about your future self

  • because I care about myself: consider that you largely don't remember who you were at age 3. Similarly, suppose you are 40 years old today. At 80, you would likely be very different and only weakly psychologically connected to who you are now. It is highly contested whether you are the same person you were at 15, and whether your future self will be the same person you are today. By caring about your future self, you are already caring for another person.

If you care about your current self

  • because I care about myself: consider the possibility that, if we all cared only for ourselves, each of us would end up worse off.
  • because I matter: the reasons why you matter likely indicate that other people (and, many moral philosophers believe, some animals) similarly matter, in many cases just as much as you do.

If you care about the amount of goodness in the world: other people have the potential to live good lives.

How does spending time to think about those philosophical issues benefit the world? What further contributions could I make beyond that of philosophers, who think about those issues for a living?

If philosophers were overall more reliable than other researchers, what would we expect to see? Two things: (a) We'd expect to see philosophers making faster progress than others. (b) We'd expect to see more agreement in philosophy than in other fields.

Obviously, these predictions are the opposite of the truth. So while philosophers may be better thinkers than others in some respects, there is no reason to think philosophers are better at getting to the truth.

Unlike in the sciences and mathematics, you possess exactly the same data as philosophers: intuitions about the world. If you think carefully and persistently about philosophical problems, there is a non-trivial chance that you will come up with an idea that contributes to humanity's current understanding of the world. This chance is significantly higher than the chance of making a comparably beneficial contribution in the sciences or mathematics.

Wide reflective equilibrium starts from considered moral judgments, introduces candidate principles, checks both against background theories and relevant non-moral facts, and revises any part when conflict appears.

Reflective equilibrium in its pure form is nearly impossible for an individual, but comes within reach through collective deliberation.

1) Start with judgments: Use relatively confident case judgments as provisional fixed points.

2) Propose principles: Build candidate rules that explain and systematize those judgments.

3) Go wide: Test fit with background theories, empirical constraints, and new cases.

4) Revise iteratively: Adjust judgments, principles, or background assumptions until coherence improves.
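The four steps above can be sketched as a toy revision loop. This is illustrative only: the data structures, scoring function, and revision policy here are assumptions for exposition, not the platform's actual model.

```python
# Toy sketch of a wide reflective equilibrium pass (illustrative assumptions).
# Judgments map cases to verdicts; a principle is a function from case to verdict.

def coherence(principle, judgments):
    """Fraction of considered judgments the principle agrees with."""
    hits = sum(1 for case, verdict in judgments.items() if principle(case) == verdict)
    return hits / len(judgments)

def wre_round(principle, judgments, background_ok):
    """One revision pass: drop the single worst-fitting judgment,
    unless background theories veto the principle itself."""
    if not background_ok(principle):
        return principle, judgments, "revise principle"
    misfits = [c for c, v in judgments.items() if principle(c) != v]
    if misfits:
        revised = dict(judgments)
        del revised[misfits[0]]          # treat the misfit as an outlier judgment
        return principle, revised, f"revised judgment on {misfits[0]!r}"
    return principle, judgments, "equilibrium"

# Illustrative data: verdicts are True (permissible) / False (impermissible).
judgments = {"lie to protect a friend": True,
             "lie for personal gain": False,
             "white lie to spare feelings": True}
principle = lambda case: "gain" not in case     # "lying is wrong only when self-serving"
background_ok = lambda p: True                  # no background-theory conflict here

principle, judgments, status = wre_round(principle, judgments, background_ok)
print(status)  # prints "equilibrium": the principle fits all three judgments
```

In a real run, step 3 ("go wide") would make `background_ok` substantive, and the loop would repeat until coherence stops improving rather than terminating after one pass.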

    Animation informed by SEP (Reflective Equilibrium), Rawls, Daniels, Scanlon, and recent directed reflective equilibrium work.

    Reflective Equilibrium Simulation

    Round 1 (Pass 1): Seed considered judgments.

    Coherence score: 0%

    Background support: 0%

    Judgment-principle tension: 0 pts

    Interactive Lab

    Reflective Equilibrium Lab

    This embedded page runs the standalone reflective equilibrium simulator with editable sentence pools, account/systematicity/faithfulness weights, and iterative theory/commitment revision.

    Use this to test wide vs narrow modes, inspect path-dependent convergence behavior, and export iteration history as JSON.
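The simulator's actual scoring is defined by its own code; as a hedged illustration of how account, systematicity, and faithfulness weights might combine, here is one simple linear aggregate (the function name, default weights, and linear form are all assumptions):

```python
def achievement(account, systematicity, faithfulness,
                weights=(0.35, 0.55, 0.10)):
    """Weighted aggregate of three desiderata, each scored in [0, 1].
    The weights are illustrative and sum to 1."""
    a, s, f = weights
    return a * account + s * systematicity + f * faithfulness

# A position that accounts well for the commitments (0.8), is moderately
# systematic (0.6), and stays faithful to initial commitments (0.9):
print(round(achievement(0.8, 0.6, 0.9), 3))  # prints 0.7
```

Changing the weight tuple is exactly the "wide vs narrow" experiment the lab invites: heavier faithfulness pulls revision toward initial commitments, heavier systematicity pulls it toward tidy theory.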

    Open Full Screen Lab Open WRE Case Loop Module

    Dialogue, Not Debate

    Discuss your deepest moral convictions, then live by what you genuinely believe.

    Normativity pairs people with opposing views, asks both sides to update their confidence honestly, and enforces one core rule: if you are genuinely over 50% sure a moral argument is true, you must act on its implication in real life.

    Action threshold: 50%+

    Match signals ranked: 4

    Pledge required to join: 100%

    Moral Progress

    History Shows Progress, But Harder Questions Remain

    In the 19th century, slavery was treated as normal by many institutions and social orders. People in the 21st century now judge that as deeply wrong. Future generations may similarly judge some of our current practices.

    The remaining problems are often harder to resolve. This model treats mutual deliberation as the main force that keeps moral progress moving even as difficulty rises.

    • 19th century

      Widespread slavery is normalized in many societies.

    • 21st century

      Slavery is condemned, yet some harms may still be normalized.

    • 22nd century and beyond

      Future citizens may judge our era as morally incomplete.

    Moral Learning Over Time

    Year 1850

    Moral norms around slavery are still widely unjust.

    Moral adequacy: 0%

    Difficulty pressure: 0%

    Deliberation lift: 0 pts

    Comparative Progress

    Mathematics vs Non-religious Ethics Across History

    Both domains improve through argument and critique. But mathematical questions often become tractable faster, while non-religious ethical questions remain entangled with social interests, plural values, and contested tradeoffs.

    Year 1800

    Mathematics: 0%

    Formal proof standards are strengthening, but many foundations are still unsettled.

    Non-religious Ethics: 0%

    Public moral reasoning exists, but norms are still deeply constrained by power and inherited practice.

    Progress gap: 0 pts

    Relative to mathematics, ethical convergence remains slower and more fragile.

    Core Mechanism

    The 50% Action Threshold

    What is measured

    After each dialogue, each participant reports how confident they are that the discussed claim is true.

    Trigger condition

    If your confidence is 51% or higher, your moral uncertainty no longer excuses inaction.

    Required response

    You must log a concrete action plan and carry it out in ordinary life. The app tracks this in your action ledger.
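The trigger condition reduces to a single comparison. A minimal sketch of the ledger rule (field names and the ledger shape are assumptions, not the app's schema):

```python
def action_required(confidence_pct: int) -> bool:
    """Post-dialogue rule: reported confidence strictly above 50%
    obliges the participant to log and carry out an action plan."""
    return confidence_pct > 50

# Two post-dialogue confidence reports; only the first crosses the threshold.
ledger = []
for claim, conf in [("eating meat is wrong", 62), ("lying is always wrong", 40)]:
    if action_required(conf):
        ledger.append({"claim": claim, "confidence": conf, "status": "pending"})

print(len(ledger))  # prints 1: only the 62% report triggers a ledger entry
```

Note that with whole-percent reporting, "51% or higher" and "strictly over 50%" pick out the same set of reports.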

    Pairing Logic

    How Matching Works

    1) Shared time slots

    We only suggest matches when both participants have overlapping 30-minute availability windows.

    2) Opposite beliefs

    We then require that participants currently disagree about whether the proposition is true.

    3) Closest confidence levels

    Among opposing participants with shared time slots, we prefer the smallest confidence gap to maximize productive exchange.

    4) SEP-informed tie-breakers

    If confidence gaps are similar, we prioritize pairs with better disagreement-depth signals (case, principle, background theory) and clearer reasons.

    Matching Priority

    Shared availability → Opposite belief → Lowest confidence gap → Better depth/reason profile
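The four criteria above can be sketched as a filter-then-sort pipeline. The data shapes and field names below are assumptions for illustration, not the platform's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Participant:
    name: str
    believes_true: bool          # current stance on the proposition
    confidence: int              # percent confidence in that stance
    slots: set = field(default_factory=set)   # available 30-minute windows
    depth_signal: int = 0        # disagreement-depth score (tie-breaker)

def suggest_matches(me: Participant, others: list) -> list:
    """Filter to opposing believers with shared slots, then sort by
    smallest confidence gap, breaking ties on the depth signal."""
    candidates = [o for o in others
                  if o.believes_true != me.believes_true      # 2) opposite beliefs
                  and me.slots & o.slots]                     # 1) shared time slots
    return sorted(candidates,
                  key=lambda o: (abs(o.confidence - me.confidence),  # 3) gap
                                 -o.depth_signal))                   # 4) tie-breaker

me = Participant("A", True, 70, {"Mon 18:00"})
others = [Participant("B", False, 65, {"Mon 18:00"}, depth_signal=2),
          Participant("C", False, 20, {"Mon 18:00"}),
          Participant("D", False, 72, {"Tue 19:00"})]   # no shared slot

print([p.name for p in suggest_matches(me, others)])  # prints ['B', 'C']
```

D is filtered out despite the closest confidence, because shared availability is a hard constraint while confidence gap and depth are only preferences.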

    Participants can always send direct invites with their availability. Method anchors: Moral Disagreement · Reflective Equilibrium

    SEP Protocol

    How to Disagree Without Losing Epistemic Discipline

    1) Locate the dispute level

    Before arguing, identify whether disagreement is about a case verdict, a general principle, or background theory.

    2) Calibrate peer pressure

    When disagreement is informed and symmetric, reduce confidence and run another reflective-equilibrium cycle.

    3) Keep revision records

    Log what changed and why. Coherence gains indicate better fit, not automatic proof of moral truth.

    Pledge


    This is a practical oath, not branding language. Signing records your commitment profile and unlocks deliberation tools.

    Commitments confirmed: 0 / 3

    Dialogues

    Match, Discuss, Update, Commit

    Studio is locked until the pledge is signed.

    Create A Dialogue

    Good proposition format: a declarative claim that can be true or false (not a question or slogan). See Moral Reasoning and Reflective Equilibrium.

    Your availability (select any half-hour slot in the next 30 days)

    Your Dialogues

    Pick one to manage reservations and matches.

      Reserve and Match

      Reservations require an opposing belief, a confidence score, reasons, and overlapping time slots.

      No active dialogue selected yet.

      Your availability (select any half-hour slot in the next 30 days)

      Current Reservations

      Opposing beliefs and overlapping times drive matching.

        Suggested Matches

        Select a dialogue to generate suggested matches.

        Invite A Specific User

        Your proposed availability (select any half-hour slot in the next 30 days)

          3) Log The Dialogue Outcome

          No participant selected yet.

          Accountability

          Action Ledger

          Every rule-triggered conviction appears here until marked completed.

          Discussion Norms

          How To Use The Platform Well

          Steelman First

          State your counterpart's strongest case before criticizing it.

          Update Publicly

          Say what changed in your confidence and why, even when uncomfortable.

          No Strategic Posturing

          Confidence reporting is for sincere belief, not social signaling or point-scoring.