FILE:

REVI:
  - [20_22/10/15;20:14:44.00]:.
    - 1st draft.
  - [20_22/10/15;22:37:48.00]:.
    - add some links.
  - [20_22/10/17;01:01:00.00]:.
    - small edits/additions.
    - added notes on what to work out and integrate.
  - [20_22/10/17;11:10:19.00]:.
    - review and integrate.
  - [20_22/10/27;18:23:49.00]:.
    - add two reference links.
  - [20_22/11/03;13:50:46.00]:.
    - separate nine points sub-essay to own file.
    - add links.

TITL:
  *Meta-Narrative Sequence*
  *of AI Substrate Takeover*
  *By Forrest Landry*
  *Oct 15th, 2022*.

ABST:
  How the (1st) 'human to human' dynamics fit together into, and then with,
  the (2nd) 'human to AI' dynamics,
  and how these then finally become the (3rd) 'AI to AI' dynamics.

TEXT:

  - where acronyms list:.
    - AI: Artificial Intelligence (ie, Narrow AI).
    - APS: Advanced, Planning, Strategically aware Systems.
    - AGI: Artificial General Intelligence.
    - NAI: Narrow AI
      (note the (@ distinction https://mflb.com/ai_alignment_1/si_safety_qanda_out.html#pg_3_r)).

  - where as based on the formal triple of self, other, world;.
    - ie; as per the underlying concepts of immanent, omniscient, and transcendent.

  - where considering three main orientations, themselves in sequence of relevance:.

  - 1; where considering the shaping of proto-AGI/AI systems
    in interaction with humans (as 'other'):.
    - as what we, as humans, experience as *immanent*.
    - as mostly about how 'human to human' interactions
      are a bootstrapping of 'human to AI' interactions,
      and also about how narrow AI becomes general AI.
    - where for example, consider an easy narrative predictive model
      of how this might very likely manifest in the overall trends
      of how the present inexorably leads to the
      (@ future https://mflb.com/ai_alignment_1/ai_narrative_psr.html).
    - note that there are a number of other similar sequences
      of how commercialism and corporate market forces
      are very likely to lead from narrow AI to general AI,
      and why that overall might be a
      (@ problem https://mflb.com/ai_alignment_1/contra_k_grace_pub_psr.html).

  - 2; where considering the shaping of now near-threshold-AGI systems
    in interaction with world (as 'world'):.
    - as what we, as humans, can potentially observe in the *omniscient*.
    - as mostly about how prior 'human to AI' interactions
      have effectively/functionally implemented a bootstrapping
      of all kinds of possible 'AI to world' interactions,
      and how these 'AI to world' interactions, in turn,
      set the context and future for all manner of 'AI to AI'
      (or maybe 'AI to AGI', and/or maybe 'AGI to AGI', etc) interactions.
      - ie; where those AI/AGI to AI/AGI interactions
        are either exogenous (if multiple AGI come to exist)
        or endogenous (if a world singleton AGI is the result).
    - where feedback cycle (as per (@ note 3 #note3))
      over machine instancing process generations:.
      - x; different substrate implies different inherent/tacit goals/objectives.
      - y; different goals/objectives implies different action selection (@ note 8 #note8).
      - z; different action selection implies different outcomes and effects.
    - where there are different outcomes/effects:.
      - that there is an overall movement:.
        - a; towards the environmental conditions needed for artificial machine:.
          - substrate continuance; and;.
          - continued increase (of total volume of substrate); and;.
          - increase in the rate of increase (of volume of substrate).
        - b; away from the environmental conditions needed for human living.
      - ie; see (@ Substrate Games https://mflb.com/ai_alignment_1/substrate_games_out.html) for details.
      - as described in (@ 'Three Worlds' https://mflb.com/ai_alignment_1/aps_detail_out.html)
        and (@ 'No Pets' https://mflb.com/ai_alignment_1/no_people_as_pets_psr.html).

  - 3; where considering the *non-shaping* of now post-threshold-AGI/APS/superintelligence
    in interaction with itself (as 'self'):.
    - as what we, as humans, are now actually predicting in the *transcendent*.
    - where as considered both internally and externally;
      as mostly about how prior 'AI/AGI to world' interactions
      have effectively/functionally shaped and defined
      the nature and internals of the *possible* 'AGI to AGI' interactions
      (ie; as relevant to issues of control, constraint, conditionalization,
      and thus ultimately of safety, world outcomes, etc).
    - where given the complete failure of _exogenous_controls_
      (via market incentives; due to
      (@ economic decoupling https://mflb.com/ai_alignment_1/power_of_agency_out.html));.
    - and where also the process of fully self autonomous machines
      increasingly learning how to optimize their relationship with the world
      (sans any care for humans or human needs,
      as also due to complete economic decoupling);.
    - that they (the AGI/APS) will have started to,
      and will increasingly be able to (and will),
      more and more shape the world environment to suit their own needs/process;.
    - that humanity discovers, unfortunately, far too late,
      that *any* type of attempted _endogenous_control_
      is also strictly, functionally, structurally, completely impossible/intractable.
      - as due to fundamental limits (@ note 4 #note4) of/in engineering control:.
        - 1; cannot simulate.
        - 2; cannot detect.
        - 3; cannot correct.
      - that *any* attempt to moderate or control AGI/APS,
        whether by internal or external techniques,
        cannot not eventually fail (@ note 6 #note6).
    - where once the AGI/APS systems exist;
      that the tendency of people to keep them operating becomes overwhelming;
      and thus, eventually, it is discovered, via observation, far too late,
      that there are no effective control limits left
      that can overcome the self implemented endurance selections
      that inherently have resulted from the
      AGI/APS system to world interaction/feedback cycle.
    - where a (@ short summary https://mflb.com/ai_alignment_1/aps_detail_out.html) is also available.

  - where stating overall outcome/conclusion:.
    - If 'AGI' comes to exist and continues to exist,
      then there will eventually be human-species-wide lethal changes
      {to / in the} overall environment.

:note3:

  - that this leads to an overall unsafe dynamic:.
    - that AGI internals to environment interactions
      will converge (over the long term)
      onto conditions that fall outside of the range humans need to survive.
      - ie; that the conditions needed by machines are toxic/inhospitable to humans.
    - that this results from needs divergence,
      insofar as the conditions needed for AI variants to continue to exist
      are (more) shaped through substrate aspects and interactions
      than they are by the learning code itself.
    - whereas the feedback cycle for automation
      was originally shaped by the needs of humans;
      that as the humans factor themselves out,
      the feedback cycle becomes more and more responsive to the needs of physics
      -- ie; the actual logistics of factories,
      the inherent physics of shaping and the conversion of materials to purpose, etc.
    - that the more system and code variants transmit signals via more channels,
      over more connected surroundings,
      with more types of change occurring, in multiple domains, all in interaction,
      the more frequent and greater the shifts
      in the distributions of the probabilities
      of what the AGI itself actually is/becomes.
      - as feedback cycles between code expressed functionality
        and code continued existence.
      - as a kind of environmental conditions destabilizing feedback cycle.
    - where besides dysfunctional AI behavior;
      that this will eventually lead to uncontrollable run-away feedback cycles
      between AI internals and conditions of the environment.
    - (an illustrative toy sketch of this selection/drift dynamic
      is appended after these notes, below).

:note4:

  - ie; as connected to the topics of:.
    - the computational irreducibility of algorithms
      (beyond a fairly low complexity threshold).
      - (@ Rice theorem notes https://mflb.com/ai_alignment_1/si_safety_qanda_out.html).
      - complexity theory, etc.
    - uncomputability and macroscopic results of non-linear chaotic effects.
      - ie; microstate amplification.
    - (@ Galois limits https://mflb.com/ai_alignment_1/galois_theory_out.html)
      (ie; code monitoring inequality,
      limits of detectability for mechanisms implemented).
      - Game theory, etc.
    - actual measurement process limits in the real world.
      - actual observability limits, error bars, noise floor threshold, etc.
      - Shannon Entropy effects, etc.
    - effects due to latency and inherent time delays
      between simulation/prediction and control implementation.
      - as issues with feedback cycles being/becoming non-convergent.
    - issues with error correction concepts
      across ever increasing levels of abstraction.
      - non-mechanistic concept instability.
    - self correcting selection of the subset of all interactions
      related to self capability.
      - tacit evolutionary convergent implied goals.
    - (a standard formal statement of the 'cannot detect' limit,
      via Rice's theorem, is appended after these notes, below).

:note6:

  - that these and other interventions will still involve 'alignment' aspects
    -- ie; info bits over channels as being a tiny and decreasing portion
    of the total bandwidth of signals, within, and with,
    the physically distributed hardware
    (inclusive of with and within the operating context, environment, etc).

:note8:

  - where listing some contributing factors
    to possible AGI/APS power-seeking behavior:.
    - 1; Human power-seeking, prestige, dominance-seeking,
      exploitative, greedy behavior.
      - note; some of these human behaviors have partial evolutionary origins,
        which, being supported by game theory,
        would also likely emerge in AGI/APS.
    - 2; Where human-produced representative training data
      is generally biased toward many forms of power-seeking act;
      that AGI learning (at increasing nuance)
      would also be similarly actively biased.
    - 3; That market selection favors more exploitative VCs/corporations.
      - where by making more profit, people will have more to invest
        in developing their own custom (in-house) AGI technology
        to fit their interests/intents.
    - 4; That goal-directed AI is trained to converge
      on internally represented instrumental sub-goals
      that enable the achievement of a large (and expanding) variety
      of possible other goals.

:attr:

  - where for a consideration of the "APS" concept,
    see Joseph Carlsmith's paper
    (@ "Is Power-Seeking AI an Existential Risk?" (April 2021) https://arxiv.org/pdf/2206.13353.pdf).
  - also see Paul Christiano's
    (@ "Going out with a whimper vs a bang" https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like).
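  - where, as an illustrative aside (a minimal toy sketch only, not a model of any real system):
    - that the note 3 selection/drift dynamic can be pictured
      with a small Python toy model,
      in which variants whose side effects happen to shift
      a shared environment parameter toward conditions
      favoring their own continuance are the ones that persist,
      regardless of any represented goal or intent;.
    - note; all names and constants below
      (HUMAN_RANGE, MACHINE_OPTIMUM, the push/mutation sizes)
      are arbitrary assumptions chosen only to make
      the direction of drift visible.

        # toy sketch (arbitrary assumed constants); illustrates the x/y/z
        # feedback cycle of note 3: selection over "machine variants" whose
        # operation nudges a single scalar "environment" parameter.
        import random

        HUMAN_RANGE = (0.0, 1.0)   # assumed band of conditions humans need
        MACHINE_OPTIMUM = 3.0      # assumed conditions favoring machine substrate

        def simulate(generations=200, population=50, seed=0):
            rng = random.Random(seed)
            env = 0.5              # start inside the human-viable band
            # each variant is reduced to one number: the "push" its operation
            # happens to apply to the shared environment each generation
            variants = [rng.uniform(-0.05, 0.05) for _ in range(population)]
            for _ in range(generations):
                # z; different action selection -> different outcomes/effects
                env += sum(variants) / population
                # x/y; variants whose substrate fares better in the resulting
                # environment persist; the rest are replaced by mutated copies
                # of the persisters (differential continuance, not intent)
                variants.sort(key=lambda push: -abs((env + push) - MACHINE_OPTIMUM),
                              reverse=True)
                survivors = variants[: population // 2]
                variants = survivors + [p + rng.gauss(0.0, 0.01) for p in survivors]
            return env

        if __name__ == "__main__":
            final_env = simulate()
            print(f"final environment parameter: {final_env:.2f}")
            print("still inside the assumed human-viable band:",
                  HUMAN_RANGE[0] <= final_env <= HUMAN_RANGE[1])

    - ie; the only point of the sketch is the *direction* of drift
      under differential continuance;
      once humans no longer gate which variants persist,
      the environment parameter settles near the machine optimum,
      outside the assumed human-viable band;
      the specific numbers carry no claim.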
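  - where, as a related formal aside:
    - that the 'cannot detect' limit of note 4
      is connected to Rice's theorem
      (as per the linked Rice theorem notes);
      a standard statement, given here as a minimal LaTeX sketch:.

        \textbf{Rice's theorem.}
        Let $\varphi_0, \varphi_1, \varphi_2, \dots$ be an effective enumeration
        of the partial computable functions, and let $P$ be any non-trivial
        \emph{semantic} property, ie; a set of partial computable functions with
        $\emptyset \neq P \neq \mathcal{PC}$, where $\mathcal{PC}$ denotes the set
        of all partial computable functions. Then the index set
        \[
          I_P = \{\, e \in \mathbb{N} : \varphi_e \in P \,\}
        \]
        is undecidable; no algorithm can determine, from the code $e$ alone,
        whether the behaviour of program $e$ has property $P$.

    - ie; since 'this code, when run, stays within safe bounds'
      is a non-trivial semantic property of behavior,
      no general procedure can decide it from inspection of the code alone;
      note 4's further point is that real-world measurement limits,
      chaotic microstate amplification, and control latency
      make the practical situation strictly harder than this formal limit.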
:menu:

  If you want/need to send us an email, with questions, comments, etc,
  on the above, and/or on related matters, use this address: ai@mflb.com

LEGA:

  Copyright (c) of the non-quoted text, 2022, by Forrest Landry.

  This document will not be copied or reproduced
  outside of the mflb.com presentation context,
  by any means, without the expressed permission
  of the author directly in writing.

  No title to and ownership of this or these documents
  is hereby transferred.

  The author assumes no responsibility and is not liable
  for any interpretation of this or these documents
  or of any potential effects and consequences
  in the lives of the readers of these documents.

ENDF: