NunoSempere
Top cause areas
Distinguishing dimensions (z-score vs. top-30 cohort)
▲ cause_forecasting_epistemics  +3.9σ
▲ cause_rationalist_meta  +2.9σ
▲ style_synthesizing  +2.4σ
▼ stance_ea_org_coded  -1.6σ
▼ tone_earnestness  -1.3σ
Style: quantitative / empirical / philosophical / speculative / operational / contrarian
Stance signature: top 3 stance dimensions furthest from neutral (5)
Neighbors in dimension space
Nearest: titotal
Foil: Toby Tremlett🔹
Most characteristic post
My highly personal skepticism braindump on existential risk from artificial intelligence.
Summary: This document seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations like:
- selection effects at the level of which arguments are discovered and distributed,
- community epistemic problems, and
- increased uncertainty due to chains of reasoning with imperfect concepts
as real and important. I still think that existential risk from AGI is important. But I don’t v…
Highest-karma post in sample
My highly personal skepticism braindump on existential risk from artificial intelligence.