Research
How can we accelerate normative discoveries?
Getting normative questions right sooner rather than later has huge benefits.
Normativity is a public nonprofit project for structured disagreement, confidence updates, and accountable action.
A long-term view
As Derek Parfit suggested, routes differ, intermediate judgments diverge, and local views conflict. Yet sustained reflection can reveal convergence on deeper truths.
Each line is a theory-family trajectory. Motion shows inquiry over time: revising cases, principles, and background commitments while climbing.
Site Guide
SEP Foundations
We reviewed the core entries on moral theory, epistemology, disagreement, dilemmas, realism/cognitivism, consequentialism, deontology, virtue ethics, thought experiments, and reflective equilibrium. Use these links as the conceptual baseline for markets, dialogues, and WRE rounds.
Featured Markets
Mutual Deliberation
Even if humanity only lasts as long as the typical mammalian species (1 million years), we would be among only the first 0.5% of humans ever to live. Our actions shape the world that the remaining 99.5% of us will inherit. If we figure out that X is wrong one year sooner than otherwise, that discovery can immediately save hundreds of thousands of lives and leave a better world for the remaining 99.5%. The sooner an ethical discovery is made, the greater its benefit.
If you care about the welfare of other living people
If you care about your future self
If you care about your current self
If you care about the amount of goodness in the world: other people have the potential to live good lives.
If philosophers were overall more reliable than other researchers, what would we expect to see? Two things: (a) We'd expect to see philosophers making faster progress than others. (b) We'd expect to see more agreement in philosophy than in other fields.
Obviously, these predictions are the opposite of the truth. So while philosophers may be better thinkers than others in some respects, there is no reason to think philosophers are better at getting to the truth.
Unlike in the sciences and mathematics, you possess the exact same data as philosophers: intuitions about the world. If you think carefully and persistently about philosophical problems, there is a non-trivial chance that you will come up with an idea that contributes to humanity's current understanding of the world. This chance is significantly higher than the chance of making a contribution in the sciences or mathematics with an equal benefit to the world.
Wide reflective equilibrium starts from considered moral judgments, introduces candidate principles, checks both against background theories and relevant non-moral facts, and revises any part when conflict appears.
Reflective equilibrium in its pure form is nearly impossible for an individual, but comes within reach for collective deliberation.
1) Start with judgments: Use relatively confident case judgments as provisional fixed points.
2) Propose principles: Build candidate rules that explain and systematize those judgments.
3) Go wide: Test fit with background theories, empirical constraints, and new cases.
4) Revise iteratively: Adjust judgments, principles, or background assumptions until coherence improves.
Animation informed by SEP (Reflective Equilibrium), Rawls, Daniels, Scanlon, and recent directed reflective equilibrium work.
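The four-step loop above can be sketched as a small iterative procedure. This is a toy illustration, assuming a simple coherence measure (the fraction of case judgments that some principle explains); all names and the revision rule are assumptions, not the site's actual implementation.

```python
# Toy sketch of the four-step reflective equilibrium loop above.
# The coherence measure and all names are illustrative assumptions,
# not the simulator's actual implementation.

def coherence(judgments, principles):
    """Fraction of case judgments that at least one principle explains."""
    if not judgments:
        return 1.0
    explained = sum(1 for j in judgments if any(p(j) for p in principles))
    return explained / len(judgments)

def reflective_equilibrium(judgments, principles, revise, max_rounds=10):
    """Revise judgments/principles until coherence stops improving."""
    score = coherence(judgments, principles)
    for _ in range(max_rounds):
        judgments, principles = revise(judgments, principles)
        new_score = coherence(judgments, principles)
        if new_score <= score:   # no improvement this round: stop revising
            return judgments, principles, new_score
        score = new_score
    return judgments, principles, score
```

In a wide-equilibrium setting, `revise` would also consult background theories and non-moral facts, not just the judgment/principle fit shown here.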
Round 1 (Pass 1): Seed considered judgments.
Coherence score: 0%
Background support: 0%
Judgment-principle tension: 0 pts
Interactive Lab
This embedded page runs the standalone reflective equilibrium simulator with editable sentence pools, account/systematicity/faithfulness weights, and iterative theory/commitment revision.
Use this to test wide vs narrow modes, inspect path-dependent convergence behavior, and export iteration history as JSON.
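As a rough illustration of how the three editable weights might combine, here is a simple weighted average; the simulator's real scoring function may differ, and all names here are assumptions.

```python
def wre_objective(account, systematicity, faithfulness, weights=(1.0, 1.0, 1.0)):
    """Toy weighted average of the three simulator criteria.

    Each component score lives in [0, 1]; `weights` stands in for the
    editable account/systematicity/faithfulness sliders described above.
    Purely illustrative, not the lab's actual formula.
    """
    wa, ws, wf = weights
    return (wa * account + ws * systematicity + wf * faithfulness) / (wa + ws + wf)
```

Raising one weight pulls the overall score toward that criterion, which is how slider settings can produce different, path-dependent equilibria.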
Open Full Screen Lab · Open WRE Case Loop Module
Dialogue, Not Debate
Normativity pairs people with opposing views, asks both sides to update their confidence honestly, and enforces one core rule: if you are genuinely over 50% sure a moral argument is true, you must act on its implication in real life.
50%+ action threshold
4x match signals ranked
100% pledge required to join
Moral Progress
In the 19th century, slavery was treated as normal by many institutions and social orders. People in the 21st century now judge that as deeply wrong. Future generations may similarly judge some of our current practices.
The remaining problems are often harder to resolve. This model treats mutual deliberation as the main force that keeps moral progress moving even as difficulty rises.
Widespread slavery is normalized in many societies.
Slavery is condemned, yet some harms may still be normalized.
Future citizens may judge our era as morally incomplete.
Year 1850
Moral norms around slavery are still widely unjust.
Moral adequacy: 0%
Difficulty pressure: 0%
Deliberation lift: 0 pts
700,000 [1]
[1] Even if humans only survive as long as a typical mammalian species (1 million years). See MacAskill, What We Owe the Future, 23.
Comparative Progress
Both domains improve through argument and critique. But mathematical questions often become tractable faster, while non-religious ethical questions remain entangled with social interests, plural values, and contested tradeoffs.
Year 1800
Mathematics: 0%. Formal proof standards are strengthening, but many foundations are still unsettled.
Ethics: 0%. Public moral reasoning exists, but norms are still deeply constrained by power and inherited practice.
Progress gap: 0 pts
Relative to mathematics, ethical convergence remains slower and more fragile.
Core Mechanism
After each dialogue, each participant reports how confident they are that the discussed claim is true.
If your confidence is 51% or higher, your moral uncertainty no longer excuses inaction.
You must log a concrete action plan and carry it out in ordinary life. The app tracks this in your action ledger.
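The threshold rule above can be sketched as a tiny decision function. The constant and function names are assumptions for illustration, not the app's actual API.

```python
ACTION_THRESHOLD = 0.51  # "51% or higher" per the rule above; hypothetical name

def post_dialogue_report(confidence: float) -> str:
    """Return the required follow-up for a participant's reported confidence.

    `confidence` is the participant's sincere credence (0.0 to 1.0) that the
    discussed moral claim is true. Illustrative only, not the app's real API.
    """
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be between 0 and 1")
    if confidence >= ACTION_THRESHOLD:
        return "log_action_plan"   # must log and carry out a concrete plan
    return "no_action_required"    # uncertainty still excuses inaction
```

A report of 0.60 triggers the action-ledger obligation; a report of 0.50 does not.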
Pairing Logic
We only suggest matches when both participants have overlapping 30-minute availability windows.
We then require that participants currently disagree about whether the proposition is true.
Among opposing participants with shared time slots, we prefer the smallest confidence-gap to maximize productive exchange.
If confidence gaps are similar, we prioritize pairs with better disagreement-depth signals (case, principle, background theory) and clearer reasons.
Shared availability → Opposite belief → Lowest confidence gap → Better depth/reason profile
Participants can always send direct invites with their availability. Method anchors: Moral Disagreement · Reflective Equilibrium
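The four-stage pipeline above amounts to a filter followed by a two-key sort. A minimal sketch, assuming hypothetical field names (the matching service's real data model is not shown here):

```python
from dataclasses import dataclass

@dataclass
class CandidatePair:
    shares_slot: bool       # overlapping 30-minute availability window
    opposes: bool           # the two participants currently disagree
    confidence_gap: float   # absolute difference in reported confidence
    depth_score: float      # disagreement-depth/reason quality (higher is better)

def rank_matches(pairs):
    """Apply the pipeline above: require shared availability and opposing
    beliefs, then prefer the smallest confidence gap, breaking ties by the
    better depth/reason profile. Names are illustrative assumptions."""
    eligible = [p for p in pairs if p.shares_slot and p.opposes]
    return sorted(eligible, key=lambda p: (p.confidence_gap, -p.depth_score))
```

Because Python's sort is stable and compares key tuples left to right, the confidence gap dominates and depth only breaks ties, matching the stated priority order.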
SEP Protocol
Before arguing, identify whether disagreement is about a case verdict, a general principle, or background theory.
When disagreement is informed and symmetric, reduce confidence and run another reflective-equilibrium cycle.
Log what changed and why. Coherence gains indicate better fit, not automatic proof of moral truth.
Reflective Equilibrium · Moral Disagreement · Moral Epistemology · Disagreement Triage Tool
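The triage protocol above can be summarized as a small decision function. This is a sketch under assumed names, not the site's actual tooling.

```python
def triage(level: str, informed: bool, symmetric: bool) -> str:
    """Sketch of the disagreement-triage protocol above.

    `level` classifies the disagreement ('case', 'principle', or
    'background_theory'); `informed` and `symmetric` describe the
    epistemic standing of the two sides. Illustrative names only.
    """
    if level not in {"case", "principle", "background_theory"}:
        raise ValueError("unknown disagreement level")
    if informed and symmetric:
        # Informed, symmetric disagreement: lower confidence and iterate.
        return "reduce_confidence_and_rerun_wre"
    return "continue_dialogue"
```

Classifying the level first matters because a case-verdict dispute and a background-theory dispute call for different revisions in the next reflective-equilibrium cycle.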
Pledge
Dialogues
Studio is locked until the pledge is signed.
Pick one to manage reservations and matches.
Reservations require an opposing belief, a confidence score, reasons, and overlapping time slots.
If you will not be able to attend the session, please unregister in advance to free up the spot for another person.
Opposing beliefs and overlapping times drive matching.
Select a dialogue to generate suggested matches.
Accountability
Every rule-triggered conviction appears here until marked completed.
Discussion Norms
State your partner's strongest case before criticizing it.
Say what changed in your confidence and why, even when uncomfortable.
Confidence reporting is for sincere belief, not social signaling or point-scoring.