NunoSempere

karma rank #12 · 13,864 karma · 10 posts sampled

Cause areas (all 15)

forecasting epistemics 8.2 +3.9σ
rationalist meta 4.7 +2.9σ
ea meta 4.1 -0.7σ
longtermism xrisk 2.0 -0.2σ
ai safety 1.6 +0.2σ
nuclear great power 1.5 +1.5σ
biosecurity 1.3 +1.4σ
prioritization 1.2 -1.2σ
global health 1.0 -0.2σ
effective giving 0.8 -1.1σ
ai governance 0.3 -0.5σ
animal welfare 0.3 -0.5σ
moral circle values 0.2 -0.9σ
mental health 0.0 -0.4σ
climate 0.0 -0.7σ

Distinguishing dimensions (z-score vs top-30 cohort; computation sketched below)

cause_forecasting_epistemics +3.9σ
cause_rationalist_meta +2.9σ
style_synthesizing +2.4σ
stance_ea_org_coded -1.6σ
tone_earnestness -1.3σ
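
A minimal sketch of how the σ figures above could be computed as z-scores against the top-30 cohort. Assumptions not stated in the source: scores are plain floats, the cohort is the other top-30 authors, and a sample standard deviation is used; the cohort values below are made up for illustration.

```python
# Cohort z-score: how far an author's score sits from the cohort mean,
# in units of the cohort's standard deviation.
from statistics import mean, stdev

def cohort_z(author_score: float, cohort_scores: list[float]) -> float:
    """z = (x - cohort mean) / cohort standard deviation."""
    mu = mean(cohort_scores)
    sigma = stdev(cohort_scores)  # sample std dev; the site may use population std dev
    return (author_score - mu) / sigma

# Made-up cohort scores for one dimension, e.g. cause_forecasting_epistemics:
cohort = [1.1, 0.4, 2.0, 1.5, 0.9, 3.1, 0.2, 1.8, 0.7, 1.2]
print(f"{cohort_z(8.2, cohort):+.1f}σ")  # +8.1σ for these invented numbers
```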

Style (all 9 dims)

quantitative 5.4 +0.9σ
empirical 5.3 +1.1σ
philosophical 3.1 -0.5σ
speculative 5.0 +0.8σ
operational howto 4.3 -0.2σ
critical contrarian 4.3 +0.4σ
personal narrative 2.8 +0.4σ
synthesizing 8.1 +2.4σ
question asking 2.3 -0.4σ

Stance signature (all 9 dims, ranked by distance from neutral = 5; ranking sketched below)

rationalist coded 7.5 +2.1σ
inside ea 7.0 -1.0σ
longtermist framing 6.0 +0.2σ
ea org coded 5.4 -1.6σ
optimism ea institutions 5.3 -0.8σ
animal moral status 5.2 -0.4σ
welfarist vs rights 5.2 -0.3σ
ai doom axis 4.9 -0.9σ
suffering focused 5.0 -0.3σ
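
A sketch of how the "furthest from neutral" signature above could be ranked. The 0–10 scale with 5 as the neutral midpoint is an assumption; the scores are copied from the list above.

```python
# Rank stance dimensions by absolute distance from the neutral score of 5
# and take the top 3 as the signature.
stances = {
    "rationalist coded": 7.5,
    "inside ea": 7.0,
    "longtermist framing": 6.0,
    "ea org coded": 5.4,
    "optimism ea institutions": 5.3,
    "animal moral status": 5.2,
    "welfarist vs rights": 5.2,
    "ai doom axis": 4.9,
    "suffering focused": 5.0,
}

signature = sorted(stances.items(), key=lambda kv: abs(kv[1] - 5), reverse=True)[:3]
for name, score in signature:
    print(f"{name}: {score} (distance {abs(score - 5):.1f})")
# rationalist coded: 7.5 (distance 2.5)
# inside ea: 7.0 (distance 2.0)
# longtermist framing: 6.0 (distance 1.0)
```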

Tone

combativeness 2.4 +1.0σ
calibration 6.8 +0.5σ
jargon 5.8 +0.9σ
humility 5.9 +0.0σ

Quality

rigor 6.8 +0.5σ
originality 6.4 +0.8σ
clarity 7.8 -1.0σ

Neighbors in dimension space

Nearest: titotal

Foil: Toby Tremlett🔹
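
A hedged sketch of how neighbors in dimension space might be picked: treat each author as a vector of their dimension scores and take the nearest author by Euclidean distance. Reading "foil" as the most distant author is an assumption, as are the toy vectors and cosine-vs-Euclidean choice; the source does not specify either.

```python
# Nearest neighbor and foil over author vectors of dimension scores.
import math

def nearest_and_foil(me: list[float], others: dict[str, list[float]]) -> tuple[str, str]:
    # math.dist computes Euclidean distance; cosine distance is another plausible choice.
    ranked = sorted(others, key=lambda name: math.dist(me, others[name]))
    return ranked[0], ranked[-1]  # (nearest, foil)

# Hypothetical three-dimension vectors, e.g. (quantitative, empirical, synthesizing):
nuno = [5.4, 5.3, 8.1]
others = {"titotal": [5.8, 5.9, 7.2], "Toby Tremlett🔹": [3.0, 3.5, 4.0]}
print(nearest_and_foil(nuno, others))  # ('titotal', 'Toby Tremlett🔹')
```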

Most characteristic post

My highly personal skepticism braindump on existential risk from artificial intelligence.

Summary: This document seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations like
- selection effects at the level of which arguments are discovered and distributed,
- community epistemic problems, and
- increased uncertainty due to chains of reasoning with imperfect concepts
as real and important. I still think that existential risk from AGI is important. But I don’t v…

Highest-karma post in sample

My highly personal skepticism braindump on existential risk from artificial intelligence.