
If Artificial General Intelligence has an okay outcome, what will be the reason?

Prediction market on Manifold. An outcome is "okay" if it captures at least 20% of the maximum attainable cosmopolitan value that could have been attained by a positive Singularity (à la full Coherent Extrapolated Volition done correctly), and existing humans do not suffer death or any other awful fates. This market is a duplicate of https://manifold.markets/IsaacKing/if-we-survive-general-artificial-in with different options. https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1?r=RWxpZXplcll1ZGtvd3NreQ is the same question but with user-submitted answers.

(Please note: it is a known cognitive bias that you can make people assign more probability to one bucket than another by unpacking one bucket, but not the other, into many subcategories and asking people to assign probabilities to everything listed. This is the disjunctive dual of the Multiple Stage Fallacy, whereby you can unpack any outcome into a long list of supposedly necessary conjuncts, ask people to assign probabilities to each, and make the final outcome seem very improbable. So: that famed fiction writer Eliezer Yudkowsky can rationalize at least 15 different stories (options 'A' through 'O') about how things could maybe possibly turn out okay; that the option texts don't have room to list all the reasons each story is unlikely; and that you get 15 different chances to be mistaken about how plausible each story sounds, does not mean that Reality will be terribly impressed with how disjunctive the okay-outcome bucket has been made to sound. Reality need not actually allocate more total probability to all the okayness disjuncts listed than to the disjunctive bad ends and intervening difficulties not detailed here.)

Update 2025-07-02 (PST) (AI summary of creator comment): Regarding answer option E: this option will not be the resolution if the okay outcome is achieved through careful alignment work. Such a scenario would resolve to a different answer.

Update 2026-03-03 (PST) (AI summary of creator comment): Regarding answer option L: human-scale amounts of time for a civilization crash and restart are irrelevant on fractional-light-cone timescales when evaluating whether the 20%-of-optimal-CEV threshold is met. The time delay from a civilization restart does not preclude reaching the "okay outcome" threshold.
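The arithmetic behind the two biases in the note above can be made concrete. A minimal sketch (not from the source; the stage counts and probabilities are illustrative assumptions): multiplying many "necessary" conjuncts drives a joint estimate toward zero, while summing many separately elicited disjuncts inflates a bucket's total.

```python
# Multiple Stage Fallacy (conjunctive direction): unpack an outcome into
# many "necessary" stages, each judged individually plausible, and the
# implied joint probability collapses -- even if the stages are neither
# truly necessary nor independent.
stage_probs = [0.8] * 10  # hypothetical: ten stages, each rated 80% likely

joint = 1.0
for p in stage_probs:
    joint *= p
print(round(joint, 3))  # 0.8 ** 10 ≈ 0.107

# Disjunctive dual: unpack one bucket into many sub-stories, elicit a
# small probability for each, and the summed bucket can far exceed what
# a direct estimate of the whole bucket would have been.
story_probs = [0.03] * 15  # hypothetical: fifteen stories at 3% each
bucket_total = sum(story_probs)
print(round(bucket_total, 2))  # 0.45
```

The point of the market's disclaimer is that Reality is not bound by either direction of this elicitation artifact.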

Liquidity: $22,000. Resolves: 1/2/2200.
