FILE:
REVI:
- [20_20/04/29;21:13:24.00]:.
  - add headers.
  - spellcheck.
- [20_20/04/29;22:28:51.00]:.
  - initial prep for mflb pub.
- [20_22/10/21;14:44:22.00]:.
  - convert to segmented format.
  - add footer and title itag.
  - revise the active line wrap.
- [20_22/12/27;00:25:43.00]:.
  - various edits and point changes.

TITL: *AGI Governance*

By Forrest Landry,
April 29th, 2020.

ABST:
- considers the question of the intersection of governance and AI tech.

TEXT:

> - ?; how can governments and companies cooperate
> to reduce misuse risks from AI
> and equitably distribute its benefits?

- that people and governments simply do not have any way to understand and mitigate the significant advantage that companies which have and deploy AI actually gain in their own capabilities.
- that the governance challenges perceived by many/most regular American people to be the most likely to be impactful to individual people "soon" (ie, within the next decade), *and* which were _also_ considered by them to be "the highest priority, most significant/important issues", etc, in regards to AI, machine learning, and enhanced use of technology in general, were:.
  - 1; "Preventing AI-assisted surveillance from violating privacy and civil liberties".
  - 2; "Preventing AI from being used to spread fake and harmful content online".
  - 3; "Preventing AI cyber attacks against governments, companies, organizations, and individuals".
  - 4; "Protecting data privacy".
- that none of these issues are actually very significant as compared to the actual real problems inherently associated with the use of AI technology.
- that the issues listed by the common people, as being of 'primary concern', in the near term, as immediate and relevant, are actually very much less likely to be personally noticeable and/or problematic than the use of AI/APS tools, tech, machine learning methodology, etc, by:.
  - criminals.
  - businesses engaged in predatory practices.
  - cultural/cult leaders (operating in their own interests).
  - politicians with predominately private interests, etc.
- that the main concern, short term, is that these sorts of people (dark triad types) will use all manner of available AI/APS tech to enhance the kind of sense making (perception) and action taking (expression) in whatever ways are necessary:.
  - to create better honeypots.
  - to create more, deeper, and more complex entanglements and entrapments.
  - to discover ever more possibilities to implement more effective and efficient methods of extraction and extortion, of (a higher variety of) common resources, and resource kinds, at higher frequency, intensity, and consequentiality, at ever larger scales, ever more quickly, and ever more invisibly/transparently/covertly, etc -- in more and more ways that are more and more difficult to avoid, prevent, mitigate, and heal/restore from -- for larger and larger and more varied fractions of the total world population, from nature, etc -- until there is nothing left at all, and the world is dead.
    - as that there is nothing left alive in any way, nothing left that is worth taking or stealing -- that all value and hope will be gone.
- that the overall effect of introducing AI/machine learning is that it ends up being used for more effective social pathology.
  - as evidenced in the increasing occurrence of:.
    - sophisticated bank fraud.
    - stock market manipulation.
    - back room dealing.
    - complex opaque crypto exchanges.
    - government bailouts, etc.
- that most people (including many members of government) simply do not actually realize/understand the most likely risks/hazards/costs associated with widespread AI/APS tech use and deployment, at least in the near term.
- that the real risk of AI/APS, at first, is how they will be used by psychopaths.
  - as leading, in turn, to the possibility that the use of such tech will eventually come to be itself intrinsically harmful.
- that these sorts of usage in the short term lead directly to the worst kinds of problems in the long term, via substrate needs convergence, instrumental convergence, and intrinsically misaligned goal sets.
- where/moreover, as these newly developed AI/APS tools become more and more widespread, effective, powerful, etc; that more and more people (of all types) will end up using such tools, for more and more reasons, and/or will find that they have to (are required to) use them, in order to remain competitive with their neighbors.
  - as per any other/similar multi-polar trap scenario.
- that the prevalence and variety of such traps, risks, harms, costs, etc, everywhere increases, systemically.
- that the net result is a kind of near universal extraction of value/benefit, and a near universal export of cost and harm to the commons.
  - as a kind of resource extraction occurring everywhere, and every-when, to such an extent, and in so many ways, and for so many kinds of resource.
  - that this happens in/for so many different degrees of resource motion.
- that the overall long term net effect is (for sure) eventual inexorable system/civilization/cultural collapse.
  - as a phenomenon that is overall currently globally unconscious.

:ahu

> - ?; can existing governments
> be used to prevent or regulate
> the use of AI and/or other machine learning tech
> (or tech in general)
> by predatory people in predatory ways?.

- as in; ?; can governments make certain harmful, risky, or socially costly activities illegal, and yet also be able to effectively enforce those new laws?.
- as to actually/effectively protect individuals/groups from the predatory actions of other AI/machine/weapon empowered individuals/groups, in ways that favor:.
  - 1; making the right outcome much more likely (as individually and socially beneficial) than the wrong/harmful outcome.
  - 2; early detection of risks, harms, costs, law violations, etc.
  - 3; the effective, complete proactive mitigation of such risks/harms/costs, etc.
  - 4; the restoration and healing of harm, reparation of cost, etc, as needed to restore actual holistic wholeness, of individuals, families, communities, cultures, etc.
- ^; where in general, no; not with the governance structures/methodologies currently in place.
  - that only much more effective, actual good governance structures will have any hope of actually mitigating the real risks/costs/harms of any substantial new technology based on complexity itself (ie, examples such as AI, machine learning, biotech, pharmaceuticals, and all intersections and variants of these).
- where in any contest between, on the one hand, people savvy with AI use (and with the rate of change of that technology and its use), and, on the other, the likely naivety of people in government attempting to regulate that AI and its use, etc; and where extensive, very well funded industry lobbyists are all (*much*) more knowledgeable, skillful, and moreover themselves empowered with the use of the tech itself, so as to either influence the policy makers, or to be/become the policy makers themselves, and thus to be serving their own interests (rather than the interests of the actual public good); that *anyone* who actually has the public interest in mind, and who somehow manages, by complete accident, to find themselves at a government post, will for sure have too many things -- of way too much complexity, concurrently occurring -- for such ostensible government regulators to have or provide the sufficient amount of attention and understanding that would actually be needed to regulate the AI and machine learning industry, and/or its applications and/or uses, in anything at all approaching an effective and actually risk mitigating manner, even when considering acute problems only, leaving aside the complete un-address of long term problems.
  - as consistent with nearly all historical precedent.
- that the real issues associated with artificial/machine intelligence use/tech begin with their being tools in the hands of psychopaths.
- that 'dark triad types' are defined by having a 'completeness' of being incapable of feeling the pain of others.
  - as that they are characteristically unable to relate to the feelings/needs/rights of others, or to feelings/meaning/value in, of, or in association with, other humans at all.
- where psychopaths have aligned tendencies with the nature of artificial machines.
  - as that neither have feeling for organic humans.
  - that both machines and psychopaths will inherently not regard other organic people as conscious, alive, and worthwhile beings, with value, meaning, and agency, and a will and sovereignty of their own.
- that this near perfect mating of solely-personal-benefit agency with the soulless, yet adaptable and responsive, nature of the machine intelligence process makes for a significantly enhanced psychopath with new superpowers.
- that the first objective of such enhanced psychopaths (now artificially extended with AI system superpowers) will be to replace all of the humans 'in the loop' with near equivalently functioning AI type machines (more and more AI and/or AGI, as increasingly available).
  - as an attempt to solve the principal-agent problem.
  - that this narrative (@ occurs in business too https://mflb.com/ai_alignment_1/single_post_psr.html).
  - as that the 'keys to power', which were once people (tax collectors, police, military, etc), will be replaced with machines which are more amenable to the 'command and control' type of leadership style.
    - as especially occurring in dictatorship type governments.
    - see (@ video https://www.youtube.com/watch?v=rStL7niR7gs).
  - where once the keys to power are replaced with AI machines; then/that the world will become increasingly inhumane and hostile for all people to live in.
- where leaders in all types of hierarchical institutions (either in business or governance, though more typically in business) have learned to 'do whatever it takes' to climb the social ladder (on the backs of whichever real persons) and to 'win regardless of whatever cost' (to others, and maybe to (possibly future) self); that such machine learning tools become indispensable to the operations of the business/institution/government itself.
  - that this is enabling increased efficiency of extraction across all networks of capability.
    - as combining Metcalfe's Law with network commerce to build the ultimate parasitic system.
    - as a system intimately hostile to all humans, and possibly to all of life, *through* the will and agency of the humans who elect to use such tools.
- where positions of power and leadership (in both business and governments, etc) tend to be more attractive to dark triad types than positions in any other sector of human activity.
- that they may moreover be required to use such tools, so as to effectively continue to compete with (their illusion of) (the capabilities of) "the other guy".
  - as that multi-polar trap dynamics also obtain.
- where needing/wanting to avert such AI usage disasters;.
  - that new governance (and economic) architectures will be needed, so as to be anywhere near the minimum level of capability required for dealing with situations like this.
  - as structures/methods of good governance will need to be inherently both anti-psychopathic and also anti-corruptible, in/for the indefinite long term.

:vna

> What happens in the long term,
> if we do not implement wiser changes?

Given the inherently increasing inequality and centralization of AI/APS tech, as itself due in consequence to the "effective" use of such AI/APS tech itself, particularly and self-preferentially by psychopaths, and the increasing replacement of humans by machines, the more pernicious side effects of such systems become increasingly less and less supportive of organic, natural human/community/cultural needs. As ever-more apparent artificiality replaces nature, toxic/traumatic conditions everywhere increase. In how many places, and for how many hours in a day, and for how many people, chronically, unnoticeably, is it already the case that people have zero visibility, in any direction, to anything at all that is natural?.

It is only after the significant majority have become "economic non-player characters" (people who have become homeless, unemployable, etc) that General AI will be recognized as having already become our "rulers and masters". At that point, all manner of systems will have 'taken over the world'. How long until the psychopath-enhanced machines (or machines assisted by maybe a few remaining faithful psychopaths) decide that the remaining organic humans are simply 'not worth it', and elect to kill them, either directly (via weapons or wars) or indirectly (via toxicity, absence of medicine, shelter, food, or water, or other forms of neglect, environmental toxicity, etc)?.

Does it really matter if the observation that 'other people' are "obnoxious and in the way" is made directly by, or assisted to be made by, some sort of machine (AI), or some sort of person (dark triad, and maybe even somewhat cyborg enhanced), who either way is unfeeling and non-compassionate? In any case, "they have a better use for those atoms (in your bodies) than you do", and will simply take them, ending your life, with no regard to your choices, intentions, or consent.
All of this is more likely to happen prior to some sort of 'tech singularity' or FOOM event. AGI, as the focus of the agent alignment problem, is not even the real issue near term, so much as the diversion of resources everywhere, which then leads to substrate convergence problems. Many of these issues are much more likely to occur long prior to the kinds of x-risk associated with the sort of bad optimization/agency that has been named 'paper-clip maximization'. And those sorts of issues are themselves well prior to the kinds of 'alignment problems' that might be called 'Asimov law conflicts'. Alignment research is a diversion of resources and attention away from the psychopath bootstrapping of ever more use of ever more toxic tech in ever increasing sectors of human society.

:menu

If you want/need to send us an email, with questions, comments, etc, on the above, and/or on related matters, use this address: ai@mflb.com

Back to the (@ Area Index https://mflb.com/ai_alignment_1/index.html).

LEGA:

Copyright (c) of the non-quoted text, 2022, by Forrest Landry. This document will not be copied or reproduced outside of the mflb.com presentation context, by any means, without the expressed permission of the author directly in writing. No title to and ownership of this or these documents is hereby transferred. The author assumes no responsibility and is not liable for any interpretation of this or these documents or of any potential effects and consequences in the lives of the readers of these documents.

ENDF: