AGI/APS Tech Usage
inherently cannot not
eventually result in
the Total Extinction of
All Planetary Life
[2022/12/11;15:27:56.00]
Overviews of the arguments:

(@ AGI Non-Safety Scope of Work https://mflb.com/ai_alignment_1/ai_scope_of_work_psr.html)
(@ Narrative Sequence https://mflb.com/ai_alignment_1/single_post_psr.html)
(@ Comic Overview https://mflb.com/ai_alignment_1/xkcd_comic_overview_zout.html)
(@ Argument Organization Introduction https://mflb.com/ai_alignment_1/org_intro_psr.html)
(@ Cluster Summary https://mflb.com/ai_alignment_1/cluster_summary_out.html)
(@ Advocacy Request https://mflb.com/ai_alignment_1/request_advocacy_psr.html)

Part 1; Theoretical limits to controlling any AGI using any method of causation:

(@ Superintelligence Uncontainability https://mflb.com/ai_alignment_1/si_safety_qanda_out.html)
(@ Galois Theory Analogy https://mflb.com/ai_alignment_1/galois_theory_out.html)
(@ AGI Error Correction https://mflb.com/ai_alignment_1/agi_error_correction_psr.html)
(@ SGD Selection https://mflb.com/ai_alignment_1/sgd_selection_psr.html)
(@ Optimization Cycles https://mflb.com/ai_alignment_1/optimization_cycles_psr.html)
(@ Infinity Control https://mflb.com/ai_alignment_1/infinity_control_psr.html)
(@ Rice Rebuttal Rebuttal https://mflb.com/ai_alignment_1/rice_rebuttal_rebuttal_out_7.html)

Part 2; Economic decoupling of value exchanges between the organic ecosystem and the artificial ecosystem:

(@ No People as Pets https://mflb.com/ai_alignment_1/no_people_as_pets_psr.html)
(@ Graceful Counterarguments https://mflb.com/ai_alignment_1/contra_k_grace_pub_psr.html)
(@ Halloween Special https://mflb.com/ai_alignment_1/contra_k_grace_alt_psr.html)
(@ Fail AGI Alignment https://mflb.com/ai_alignment_1/fail_ai_alignment_psr.html)

Part 3; Unsafe AGI convergent dynamic that is impossible to control or to align game-theoretically (by 1 and 2):

(@ Substrate-Needs Convergence https://mflb.com/ai_alignment_1/substrate_needs_convergence_psr.html)
(@ Substrate Games https://mflb.com/ai_alignment_1/substrate_games_out.html)
(@ Substrate-Needs Strikeback https://mflb.com/ai_alignment_1/substrate_needs_strikeback_psr.html)
(@ APS Detail https://mflb.com/ai_alignment_1/aps_detail_out.html)
(@ Power of Agency https://mflb.com/ai_alignment_1/power_of_agency_out.html)
(@ Power Grid Review https://mflb.com/ai_alignment_1/power_grid_review_psr.html)

Part 4; Impossibility theorem, by contradiction of 'long-term AGI safety' with the convergence result (3):

(@ Elevator Pitch https://mflb.com/ai_alignment_1/elevator_pitch_psr.html)
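As a reading aid, the contradiction structure of Part 4 can be written out as a short proof sketch in LaTeX. This is only a paraphrase of the part summaries above, read as premises; the labels P1, P2, P3, and S are introduced here for illustration and are not the author's own notation.

  % Minimal sketch of the Part 4 proof-by-contradiction,
  % paraphrased from the part summaries above; the labels
  % P1, P2, P3, and S are introduced here, not the author's
  % notation.
  \documentclass{article}
  \usepackage{amsmath}
  \begin{document}
  \begin{align*}
  P_1 &: \text{no method of causation can fully control an AGI (Part 1)} \\
  P_2 &: \text{the organic and artificial ecosystems economically decouple (Part 2)} \\
  P_3 &: P_1 \land P_2 \Rightarrow \text{an uncontrollable substrate-needs convergence (Part 3)} \\
  S   &: \text{long-term AGI safety, i.e. the convergent dynamic stays controlled} \\[4pt]
      &\text{Assume } S.\ \text{By } P_3 \text{ the dynamic cannot be controlled,} \\
      &\text{contradicting } S.\ \text{Hence } \neg S \text{ (Part 4).}
  \end{align*}
  \end{document}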
Overviews of healthy paths forward:

(@ Future Visions https://mflb.com/ai_alignment_1/vision_of_future_psr.html)
(@ On the Nature of Human Assembly https://mflb.com/civ_dev_1/sgrp_essay_3.html)

Obstacle 1: Personal biases in assessing risk of extinction by technology:

(@ AI Risk Bias Effects https://mflb.com/ai_alignment_1/bias_effect_4ai_psr.html)
(@ X-Risk Categories https://mflb.com/lsag_1/risk_categories_psr.html)
(@ The Monoclivity Bias https://mflb.com/ai_alignment_1/pluralism_psr.html)
(@ Presumptive Listening https://mflb.com/ai_alignment_1/presumptive_listening_out.html)
(@ IDM-Based Response https://mflb.com/ai_alignment_1/im_suggested_response_out.html)
(@ Math Expectation https://mflb.com/ai_alignment_1/math_expectations_psr.html)
(@ Negative Arguments https://mflb.com/ai_alignment_1/negative_arguments_out.html)
(@ Right Skepticism https://mflb.com/ai_alignment_1/right_skepticism_out.html)
(@ Safety Prudence https://mflb.com/ai_alignment_1/safety_prudence_psr.html)
(@ Exponent Time https://mflb.com/ai_alignment_1/exponent_time_evolution_psr.html)
(@ Unhandleable Complexity https://mflb.com/ai_alignment_1/unhandleable_complexity_psr.html)
(@ AGI Inequalities https://mflb.com/ai_alignment_1/agi_inequalities_psr.html)
(@ The No-Proof Fallacy https://mflb.com/ai_alignment_1/d_221201_psr.html)
(@ Expert Judgemental https://mflb.com/ai_alignment_1/expert_judgemental_psr.html)
(@ Alignment Drift https://mflb.com/ai_alignment_1/alignment_drift_out.html)

Obstacle 2: Group and market-based dynamics pushing towards development of proto-AGI:

(@ Nine Points of Collective Insanity https://mflb.com/ai_alignment_1/ai_narrative_psr.html)
(@ AGI Maybe Soon https://mflb.com/ai_alignment_1/agi_maybe_soon_psr.html)
(@ Market Intelligence https://mflb.com/ai_alignment_1/market_intelligence_psr.html)
(@ Academia or Industry https://mflb.com/ai_alignment_1/academic_or_industry_psr.html)
(@ Regarding New Technology https://mflb.com/ai_alignment_1/new_technology_psr.html)
(@ The Great Manure Crisis https://mflb.com/ai_alignment_1/manure_crisis_psr.html)
(@ Levels of Altruism https://mflb.com/ai_alignment_1/levels_of_altruism_psr.html)
(@ No Entitlement https://mflb.com/ai_alignment_1/no_entitlement_psr.html)
(@ Low Order Bits https://mflb.com/ai_alignment_1/low_order_bits_psr.html)
(@ Moral Repugnance https://mflb.com/ai_alignment_1/moral_repugnance_psr.html)
(@ Three Questions https://mflb.com/ai_alignment_1/three_questions_out.html)
(@ Rejecting the News https://mflb.com/ai_alignment_1/rejecting_the_news_psr.html)
(@ AGI/APS Governance https://mflb.com/ai_alignment_1/ai_governance_psr.html)

~ ~ ~

> Is there any licensing associated with
> any of this website content?

Yes. All content and code that is not explicitly labeled otherwise is copyright (c) Forrest Landry, 2022; all rights reserved.