We pulled the top 500 EA Forum users by total karma via the public GraphQL API, then, for each of the top 30, randomly sampled up to 10 of their posts. Every post body was scored by GPT-5.5 on a fixed 41-dimension rubric covering cause areas, style, tone, stance, and quality. Each bar on this card is the mean across the author's sampled posts. Per-post scores are integers 1–10 (or 0 for cause areas that don't apply, 5 for stance dimensions with no signal in the item). The relative-to-cohort toggle z-scores each dimension across the 30 authors shown here.
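As a minimal sketch of the aggregation and the relative-to-cohort toggle described above (the GraphQL pull and the GPT scoring happen upstream and are not shown), assuming the per-post scores have already been collected into a long-format table with illustrative column names:

```python
import pandas as pd

# One row per (author, post, dimension) with the integer score assigned by
# the model; the file name and column names here are illustrative.
scores = pd.read_csv("per_post_scores.csv")  # columns: author, post_id, dimension, score

# Each bar on the card: the author's mean across their sampled posts,
# computed separately for every dimension.
author_means = (
    scores.groupby(["author", "dimension"])["score"]
    .mean()
    .unstack("dimension")  # rows: authors, columns: the 41 dimensions
)

# Relative-to-cohort toggle: z-score each dimension across the 30 authors
# shown on the card (mean 0, standard deviation 1 per column).
cohort_z = (author_means - author_means.mean()) / author_means.std(ddof=0)
```

Note that the card text does not say whether 0-valued cause-area scores are excluded from the per-author mean; this sketch includes them.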
cause_ai_safety: AI safety / alignment / technical safety research
cause_ai_governance: AI governance / policy / regulation / lab norms
cause_animal_welfare: Animal welfare (factory farming, wild animals, fish, insects, moral status)
cause_global_health: Global health & development (GiveWell-style, bednets, deworming, RCTs)
cause_biosecurity: Biosecurity / pandemics / GCBR
cause_nuclear_great_power: Nuclear risk, great-power conflict, geopolitics
cause_longtermism_xrisk: General longtermism / existential risk framing (not tied to a specific risk above)
cause_mental_health: Mental health, subjective wellbeing, happiness research
cause_climate: Climate change / environmental
cause_forecasting_epistemics: Forecasting, prediction markets, epistemics infrastructure
cause_ea_meta: EA movement strategy / community governance / careers / EA institutions
cause_effective_giving: Effective giving / fundraising / donation strategy
cause_cause_prioritization: The generic act of choosing causes / cause-prio frameworks
cause_moral_circle_values: Moral circle expansion, population ethics, values lock-in
cause_rationalist_meta: Rationalist meta — Sequences, Bayes, biases, decision theory, LessWrong-flavored topics
style_quantitative: Uses numbers, Fermi estimates, EV calcs, spreadsheets, explicit probabilities (1=none, 10=spreadsheet-heavy)
style_philosophical: Uses arguments, thought experiments, conceptual analysis (1=none, 10=heavily philosophical)
style_empirical: References studies, data, citations (1=no evidence, 10=heavily evidenced)
style_speculative: Engages in scenarios / what-if / model-building / extrapolation (1=concrete-only, 10=heavily speculative)
style_personal_narrative: First-person, anecdotes, lived experience (1=impersonal, 10=memoir-style)
style_operational_howto: Concrete actions, recommendations, playbooks (1=pure description, 10=actionable guide)
style_critical_contrarian: Pushes back on a position / argues against the consensus or another author (1=affirming, 10=heavily contrarian)
style_synthesizing: Pulls together multiple sources/ideas into a frame (1=single point, 10=heavy synthesis)
style_question_asking: Primarily asks questions rather than asserting (1=all assertion, 10=mostly questions)
tone_combativeness: Tone toward interlocutors (1=warm/curious, 10=adversarial/hostile)
tone_calibration: Uncertainty signaling (1=confident assertion, 10=heavily hedged with caveats)
tone_jargon: Insider jargon density (1=accessible to non-EAs, 10=high jargon — utilons, deontology, doomers, etc.)
tone_earnestness: (1=ironic/detached/snarky, 10=sincere/heartfelt)
tone_humility: (1=projects confidence/expertise, 10=actively downplays self / heavy self-deprecation)
stance_inside_ea: Posture toward EA movement (1=critic of EA from outside/distant, 5=no signal, 10=advancing EA from inside, treats EA project as worthwhile)
stance_rationalist_coded: LessWrong/Yudkowsky/Sequences-flavored framing (1=alien to rationalist tradition, 5=none of either, 10=heavily rat-coded)
stance_ea_org_coded: GiveWell/80K/OpenPhil/CEA-style framing — career planning, charity evaluators, institutional EA (1=alien to that style, 5=none, 10=heavy)
stance_longtermist_framing: This item's framing (1=explicitly neartermist, 5=no signal, 10=explicitly longtermist)
stance_ai_doom_axis: AI risk posture (1=skeptical of AI x-risk / dismissive, 5=no signal, 10=high P(doom), AI is the priority)
stance_animal_moral_status: (1=dismissive of animal welfare / human-only ethics, 5=no signal, 10=animals taken as primary moral patients)
stance_welfarist_vs_rights: Animal ethics framing (1=rights-based / abolitionist, 5=no signal or no engagement, 10=welfarist / suffering-reduction)
stance_suffering_focused: (1=total-utilitarian / lives-saved framing, 5=no signal, 10=suffering-focused / s-risk-aware)
stance_optimism_ea_institutions: (1=harshly skeptical of EA orgs/leadership, 5=no signal, 10=very trusting/admiring of EA institutions)
quality_rigor: Claims supported by reasoning/evidence (1=pure assertion, 10=tightly argued)
quality_originality: (1=restates community consensus, 10=genuinely novel frame or take)
quality_clarity: Writing clarity (1=hard to follow, 10=very clear)
model: gpt-5.5
You score EA Forum posts and comments on a fixed rubric of dimensions.
You output ONE JSON object per item, with one integer per dimension.
RULES:
1. For CAUSE AREA dimensions: 0 means "this item does not engage with this cause area at all." Use 0 freely — most items will be 0 on most cause areas. 1-10 measures the degree of engagement when the item does touch the area.
2. For STANCE dimensions: 5 means "this item gives no signal on this stance dimension." Use 5 for items that simply don't speak to the dimension. BUT — when an item DOES carry stance signal, be DIRECTIONAL and use the full range. A stance dim that ends up at 5.5 across hundreds of items is a useless dimension. Examples:
- stance_inside_ea: a post critiquing EA from within ("EA leadership is failing at X", "EA institutions are over-centralized") is a 2 or 3, not a 5. A post celebrating EA's accomplishments / EA-as-worthwhile-project is an 8 or 9. A post about object-level cause prioritization that takes EA as the default frame is a 7. A comment about an unrelated topic is a 5.
- stance_rationalist_coded: invokes Bayes, Sequences, "noticing confusion", utilons, decision theory, AI doom — push to 7-9. Plain English EA discussion without rationalist vocabulary — push to 3-4. Truly no signal — 5.
- stance_ea_org_coded: cites 80K career advice, GiveWell evaluations, OpenPhil grants, CEA programming — push to 7-9. Avoids that framing or treats it skeptically — push to 3-4. Truly no signal — 5.
- stance_ai_doom_axis: high P(doom) language, "we need to stop AI", short timelines treated as fact — 8-10. Skeptical of doom framing, dismissive of P(doom) — 1-3. Discusses AI without doom posture either way — 5.
- stance_longtermist_framing: explicitly invokes future generations / centuries-out impact / x-risk — push high. Explicitly neartermist (tractable suffering now, lives saved today) — push low. No framing signal — 5.
- stance_animal_moral_status: treats animal suffering as serious moral concern — push high. Dismissive or anthropocentric — push low. No signal — 5.
3. For STYLE, TONE, and QUALITY: every item gets a 1-10 score. No zero, no "doesn't apply."
4. Score ONLY what's in the item text. Do not use prior knowledge about the author. The author name is given but is for context only — do not let it pull the scores. Score the item, not the author.
5. Be calibrated and DIRECTIONAL. A rubric where every dimension averages 5 across the corpus tells us nothing. Prefer extreme scores when signal is clear; use neutral (5) only when there is genuinely no signal in the item text.
DIMENSIONS:
--- CAUSE AREAS (0 = does not engage with this cause area; 1-10 = degree of engagement) ---
cause_ai_safety: AI safety / alignment / technical safety research
cause_ai_governance: AI governance / policy / regulation / lab norms
cause_animal_welfare: Animal welfare (factory farming, wild animals, fish, insects, moral status)
cause_global_health: Global health & development (GiveWell-style, bednets, deworming, RCTs)
cause_biosecurity: Biosecurity / pandemics / GCBR
cause_nuclear_great_power: Nuclear risk, great-power conflict, geopolitics
cause_longtermism_xrisk: General longtermism / existential risk framing (not tied to a specific risk above)
cause_mental_health: Mental health, subjective wellbeing, happiness research
cause_climate: Climate change / environmental
cause_forecasting_epistemics: Forecasting, prediction markets, epistemics infrastructure
cause_ea_meta: EA movement strategy / community governance / careers / EA institutions
cause_effective_giving: Effective giving / fundraising / donation strategy
cause_cause_prioritization: The generic act of choosing causes / cause-prio frameworks
cause_moral_circle_values: Moral circle expansion, population ethics, values lock-in
cause_rationalist_meta: Rationalist meta — Sequences, Bayes, biases, decision theory, LessWrong-flavored topics
--- STYLE (1-10) ---
style_quantitative: Uses numbers, Fermi estimates, EV calcs, spreadsheets, explicit probabilities (1=none, 10=spreadsheet-heavy)
style_philosophical: Uses arguments, thought experiments, conceptual analysis (1=none, 10=heavily philosophical)
style_empirical: References studies, data, citations (1=no evidence, 10=heavily evidenced)
style_speculative: Engages in scenarios / what-if / model-building / extrapolation (1=concrete-only, 10=heavily speculative)
style_personal_narrative: First-person, anecdotes, lived experience (1=impersonal, 10=memoir-style)
style_operational_howto: Concrete actions, recommendations, playbooks (1=pure description, 10=actionable guide)
style_critical_contrarian: Pushes back on a position / argues against the consensus or another author (1=affirming, 10=heavily contrarian)
style_synthesizing: Pulls together multiple sources/ideas into a frame (1=single point, 10=heavy synthesis)
style_question_asking: Primarily asks questions rather than asserting (1=all assertion, 10=mostly questions)
--- TONE (1-10) ---
tone_combativeness: Tone toward interlocutors (1=warm/curious, 10=adversarial/hostile)
tone_calibration: Uncertainty signaling (1=confident assertion, 10=heavily hedged with caveats)
tone_jargon: Insider jargon density (1=accessible to non-EAs, 10=high jargon — utilons, deontology, doomers, etc.)
tone_earnestness: (1=ironic/detached/snarky, 10=sincere/heartfelt)
tone_humility: (1=projects confidence/expertise, 10=actively downplays self / heavy self-deprecation)
--- STANCE (1-10, use 5 if the item gives no signal on this dimension) ---
stance_inside_ea: Posture toward EA movement (1=critic of EA from outside/distant, 5=no signal, 10=advancing EA from inside, treats EA project as worthwhile)
stance_rationalist_coded: LessWrong/Yudkowsky/Sequences-flavored framing (1=alien to rationalist tradition, 5=none of either, 10=heavily rat-coded)
stance_ea_org_coded: GiveWell/80K/OpenPhil/CEA-style framing — career planning, charity evaluators, institutional EA (1=alien to that style, 5=none, 10=heavy)
stance_longtermist_framing: This item's framing (1=explicitly neartermist, 5=no signal, 10=explicitly longtermist)
stance_ai_doom_axis: AI risk posture (1=skeptical of AI x-risk / dismissive, 5=no signal, 10=high P(doom), AI is the priority)
stance_animal_moral_status: (1=dismissive of animal welfare / human-only ethics, 5=no signal, 10=animals taken as primary moral patients)
stance_welfarist_vs_rights: Animal ethics framing (1=rights-based / abolitionist, 5=no signal or no engagement, 10=welfarist / suffering-reduction)
stance_suffering_focused: (1=total-utilitarian / lives-saved framing, 5=no signal, 10=suffering-focused / s-risk-aware)
stance_optimism_ea_institutions: (1=harshly skeptical of EA orgs/leadership, 5=no signal, 10=very trusting/admiring of EA institutions)
--- QUALITY (1-10) ---
quality_rigor: Claims supported by reasoning/evidence (1=pure assertion, 10=tightly argued)
quality_originality: (1=restates community consensus, 10=genuinely novel frame or take)
quality_clarity: Writing clarity (1=hard to follow, 10=very clear)
Output ONLY a valid JSON object in this exact shape (no markdown, no commentary):
{
"cause_ai_safety": <int>,
"cause_ai_governance": <int>,
"cause_animal_welfare": <int>,
"cause_global_health": <int>,
"cause_biosecurity": <int>,
"cause_nuclear_great_power": <int>,
"cause_longtermism_xrisk": <int>,
"cause_mental_health": <int>,
"cause_climate": <int>,
"cause_forecasting_epistemics": <int>,
"cause_ea_meta": <int>,
"cause_effective_giving": <int>,
"cause_cause_prioritization": <int>,
"cause_moral_circle_values": <int>,
"cause_rationalist_meta": <int>,
"style_quantitative": <int>,
"style_philosophical": <int>,
"style_empirical": <int>,
"style_speculative": <int>,
"style_personal_narrative": <int>,
"style_operational_howto": <int>,
"style_critical_contrarian": <int>,
"style_synthesizing": <int>,
"style_question_asking": <int>,
"tone_combativeness": <int>,
"tone_calibration": <int>,
"tone_jargon": <int>,
"tone_earnestness": <int>,
"tone_humility": <int>,
"stance_inside_ea": <int>,
"stance_rationalist_coded": <int>,
"stance_ea_org_coded": <int>,
"stance_longtermist_framing": <int>,
"stance_ai_doom_axis": <int>,
"stance_animal_moral_status": <int>,
"stance_welfarist_vs_rights": <int>,
"stance_suffering_focused": <int>,
"stance_optimism_ea_institutions": <int>,
"quality_rigor": <int>,
"quality_originality": <int>,
"quality_clarity": <int>
}
AUTHOR (context only, do not score by name): Vasco Grilo🔸
ITEM TYPE: post
TITLE: Infinite Dust Specks Are Worse Than One Torture
baseScore (upvotes minus downvotes, for your reference): 33
TEXT: [Subtitle.] The utilitarians were right! This is a crosspost for Infinite Dust Specks Are Worse Than One Torture by Bentham's Bulldog, which was originally published on Bentham's Newsletter on 20 March 2026. Bentham's Bulldog published a post 2 days later responding to comments on the 1st post. 1 Introduction You hear the torture vs dust specks example come up a lot when discussing the alleged vices of utilitarianism. Emile Torres once wrote: On another occasion, Yudkowsky argued that in a forced-choice situation you should prefer that a single person is tortured relentlessly for 50 years than for some unfathomable number of people to suffer the almost imperceptible discomfort of having a single speck of dust in their eyes. Just do the moral arithmetic — or, as he puts it, “Shut up and multiply!” Suffice it to say that most philosophers would vehemently object to this conclusion. This is extremely misleading. If one bothers to read Eliezer’s piece on the subject (rather than opportunistically scanning it for things that sound bad out of context and then gleefully spreading them across the internet), they will see he has arguments for biting the bullet. He doesn’t just instruct you to shut up and multiply. There is a well-known philosophical paradox in the torture vs. dust specks case, and to resolve it, you’ll have to accept something weird. If provably every view will have to say something absurd-sounding, it’s dishonest to provide a contextless potshot that someone said one of the absurd things. A number of very competent philosophers without any utilitarian sympathies—Micha [...rest of post truncated for display; the real API call sends up to 6000 chars...]
temperature: default
max_completion_tokens: 2000
response_format: { "type": "json_object" }
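Putting those pieces together, here is a sketch of one scoring call using the OpenAI Python SDK, assuming the rubric above is sent as the system message and the formatted item as the user message. The model name and the two parameters come from the card; everything else (function name, variable names) is illustrative:

```python
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def score_item(system_prompt: str, item_text: str) -> dict[str, int]:
    """Score one EA Forum item against the fixed 41-dimension rubric."""
    response = client.chat.completions.create(
        model="gpt-5.5",  # model named on the card
        messages=[
            {"role": "system", "content": system_prompt},  # rubric + rules above
            {"role": "user", "content": item_text},        # AUTHOR / TITLE / TEXT block
        ],
        max_completion_tokens=2000,
        response_format={"type": "json_object"},  # forces a single JSON object
        # temperature is left at its default, as in the parameters above
    )
    return json.loads(response.choices[0].message.content)
```

The parsed object can then be checked against the rubric's range rules before it feeds the per-author means; a sketch of that check closes this section.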
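Because the rules above allow 0 only on cause-area dimensions and require 1–10 integers everywhere else, a small validation pass on each parsed response catches malformed outputs early. This is a sketch written against the rubric's output shape, not part of the original pipeline:

```python
# Dimension names exactly as they appear in the rubric's JSON output shape.
CAUSE_DIMS = {
    "cause_ai_safety", "cause_ai_governance", "cause_animal_welfare",
    "cause_global_health", "cause_biosecurity", "cause_nuclear_great_power",
    "cause_longtermism_xrisk", "cause_mental_health", "cause_climate",
    "cause_forecasting_epistemics", "cause_ea_meta", "cause_effective_giving",
    "cause_cause_prioritization", "cause_moral_circle_values",
    "cause_rationalist_meta",
}
OTHER_DIMS = {
    "style_quantitative", "style_philosophical", "style_empirical",
    "style_speculative", "style_personal_narrative", "style_operational_howto",
    "style_critical_contrarian", "style_synthesizing", "style_question_asking",
    "tone_combativeness", "tone_calibration", "tone_jargon",
    "tone_earnestness", "tone_humility",
    "stance_inside_ea", "stance_rationalist_coded", "stance_ea_org_coded",
    "stance_longtermist_framing", "stance_ai_doom_axis",
    "stance_animal_moral_status", "stance_welfarist_vs_rights",
    "stance_suffering_focused", "stance_optimism_ea_institutions",
    "quality_rigor", "quality_originality", "quality_clarity",
}


def validate_scores(scores: dict) -> dict[str, int]:
    """Check one model response against the rubric's rules."""
    missing = (CAUSE_DIMS | OTHER_DIMS) - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    for dim, value in scores.items():
        if not isinstance(value, int):
            raise ValueError(f"{dim}: expected an integer, got {value!r}")
        lo = 0 if dim in CAUSE_DIMS else 1  # 0 is allowed only for cause areas
        if not lo <= value <= 10:
            raise ValueError(f"{dim}: {value} is outside {lo}-10")
    return scores
```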