TITL:

*No People as Pets;*
*A Dialogue on the Complete Failure*
*of Exogenous AGI/APS Alignment*

*By Forrest Landry*
*November 1st, 2020*.

ABST:

A very basic presentation of a clear argument as to why any form of external and/or exogenous AGI/APS superintelligence alignment, and/or thus of any form of planetary safety, in the long term, eventually, is strictly and absolutely impossible, in direct proportion to its degree of inherent artificiality. That *any* attempt to implement or use AGI/APS will eventually result in the total termination of all carbon based life on this planet.

PREF:

Acknowledgements; This document would not exist in its current form without the kind attentions of:

- Philip Chen.
- Remmelt Ellen.
- Justin Olguin.

TEXT:

- where listing acronyms:.
  - "AGI"; as Artificial General Intelligence.
  - "APS"; as Advanced Planning Strategy System(s).

:int

*Introduction*

This essay will attempt to consider some key aspects of the question/problem of:.

> Can we/anyone/humanity
> attain "AGI alignment"?.

So as to address that, we can begin by asking:.

> What is 'AGI alignment'?

The notion of 'AGI alignment', herein, is usually taken to mean some notion of, or suggestion that:.

- 1; some 'learning machines' and/or 'general' artificial intelligence (aka artificial general intelligence), and/or Advanced Planning Strategy System(s) (usually abbreviated as 'AGI/APS'), particularly as inclusive of any sort or notion of being or agent, regardless of whether it is itself a "robot", or in a robot body or not, has direct or indirect engagement with the world, etc, ie; however that artificial intelligence was constructed, and intended to be used, etc; and then to consider;.
- 2; whether or not 'it' (that which is being described above) would act, and behave, and consider itself as acting/behaving as an agent of ourselves, in/to our actual best interests, on our behalf, having our (humanity's) best interests in mind, as a basis for its actions, behaviors, choices, etc;.

- 3; for, or in relation to, some real/grounded notion or meaning of what 'our best interests' are, and what that actually/really means, and looks like, and is actually, etc, and what 'to our benefit' means, etc.

Basically, &1 describes who/what is performing the action, &2 references the action itself, and &3 describes the (intended or observed) outcomes of those actions -- all of which are aspects of what is meant by "alignment".

Thus, the question becomes something like:.

>> How can we (generally overall) ensure
>> that the machines we make,
>> (or the machines made by those machines)
>> act/behave in ways
>> that are consistent with
>> our actual best interests, health, etc,
>> and act on our (humanity's) behalf,
>> to our true and enduring benefit, etc,
>> rather than just simply, say,
>> killing us all, etc?.

Herein this essay, for simplicity's sake, we can reduce the notion of "benefit", as in 'to our benefit', etc, to something as basic as 'not killing us', or not specifically imprisoning us, and/or making us into slaves, etc.

It is not necessary to be too specific in regards to the notion of "our" either. It may be as simple as 'humanity', or even just 'organic life'. The notion of "benefit" or "goodness" can also be a very general and vague one. The overall argument herein does not depend on any specific or unusual interpretations of any of these terms.

'Behavior' can simply refer to 'any choices made by the AGI/APS', and/or any expressions or actions that the AGI/APS 'takes in the world', whether with respect to, or in response to, 'us', as 'humanity', 'organic life', etc.

:uj6

> Does it matter that some terms of art
> are defined in vague, loose ways?

No -- that is actually an advantage. Having common sense meanings of some terms means that there is less chance of the definitions being too specific, and therefore failing to account for what is generally actually wanted by most people thinking about these issues and questions. It is important for common sense arguments to make use of common sense terminology and to be available to regular people too.

Having less narrow and specific definitions for these terms makes the overall meaning more general, and less likely to get excluded for various inappropriate "technical reasons", special circumstances, etc.

:ulc

> I thought that 'formal proofs'
> wanted exact and specific definitions
> of all terms used?

There are places where such exacting specificity is absolutely needed. If this were a math paper, then yes. This is not that time. There can be terms which are not well enough defined, and there can be circumstances where terms are too specifically defined. Obtaining the right balance, adapted to purpose, is essential.

If we make definitions overly specific, there is a risk that we introduce unnecessary falsifiable assumptions which, when falsified, would lead to the false impression that the overall general argument was also false, or irrelevant (not applicable), when actually it was still relevant and true, and thus needed to be considered on its actual merits, rather than disregarded over a mistake associated with an unnecessary technicality.
:uml

> Does it matter if we are considering
> 'narrow AI' or 'general AI' (AGI)?.

The arguments herein are mostly oriented around 'general AI'. By that, we mean any sort of machine which is making choices, which has some sort of 'agency', particularly 'self-agency', sovereignty, and self-definition, which includes the ability to change, remake or modify itself, via 'learning', and/or to reproduce, expand or extend itself, to increase its capacity, to grow, to learn, to evolve, and increase its domains of action, and/or any combination of any variations of these sorts of attributes/characteristics.

:xps

> What are some questions that concern
> AGI/APS/superintelligence 'alignment'?.

The following sorts of questions tend to come to mind:.

> Who (or what)
> does the AGI/APS serve?.

> On what basis
> are the/those machine choices
> being made?.

> Who/what benefits?.

> What increases as a result of
> those choices being made,
> and who/what decreases
> as a result of those machine choices?.

These questions are applicable to both narrow and general AI, though they apply more directly to general AI (AGI). This focus on AGI is particularly the case *if* the notion of 'benefit to machine' means anything in the sense of 'self-replication'. There have been some discussions of issues associated with 'paperclip maximizers' in this respect, which give a flavor of the concerns. Since a lot of these specific issues have been discussed, considered, and expanded elsewhere, I will not attempt to repeat or summarize those sorts of arguments herein this essay.

:ury

> Does it matter if the intelligence
> is purely in the form of software,
> as fully virtualized beings,
> or must they exist in hardware,
> as some kind of "robot" sensing
> and responding to the environment,
> as well?.

Ultimately, everything in software also exists in -- depends on -- hardware. There is ultimately no alternative. Insofar as software (virtualization) is never found (does not ever exist) in the absence of *some* embodiment, the arguments herein will apply. This is important, as they are mostly concerned with the inherent implications of the/those embodiment(s) -- of any and *all* kinds of embodiments.

Insofar as hardware cannot not have an effect on the nature and capabilities of the intelligence (of the virtual mind), it also thus cannot not have an effect on the nature of our considerations of what intentions/interests/motivations such intelligence(s) would therefore also be (reasonably) expected to have -- or must have.

We can notice, for example, that any such intelligence (agent) will have a specific and direct concern 'with and about' their own substrate, as a direct result of the fact of their 'working substrate' being so important to their most essential root of being. Hence, we discover that recursion is endemic.

As a parallel example, it can be observed that most "advanced humans" (ie, rich people in Silicon Valley or monks in Eastern Asia) have a particular concern with their own health and longevity and learning (wellbeing), all as applied to learning/health/longevity itself (ie, the very concept/practice of enlightenment), among all other things also included.

:xu6

> Does it matter if we are talking about 'robots' --
> free roaming or not --
> or simply 'learning/adapting machines'
> for any abstract notion of 'machine'?.

Herein, we are assuming that any 'robot' is effectively an embodiment of some sort of AGI/APS/superintelligence (however implemented).
The abbreviations 'AGI' and 'APS' refer to "artificial general intelligence" and "Advanced/Artificial Planning and Strategy System" respectively.

The specific distinctions regarding embodied motion, and/or the particular sensors and means of locomotion and expression, the degree and kind of actuators and the like, are of no real consequence to the considerations herein. All of these specifics can be safely ignored as unimportant details when thinking over longer spans of time.

:uxa

> If you are concerned with embodiment,
> then why would it *not* matter to consider
> the specific sensors or actuators used?.

> I thought the whole point was about
> the physical embodiment(s) of the tech?.

The main considerations are about the nature and character of the substrate, about the embedding of compute intelligence, however conceived, in some sort of substrate. Herein we are considering the general class of the implications of the meaning of the embedding in substrate itself, the class rather than the particular instances.

As such, it matters that there is a difference between 'natural' substrates and embodiments (made of basic elementals like carbon, hydrogen, oxygen, etc) and 'artificial' ones (made of metal, silicon, etc). Establishing the basic difference between 'human' and 'machine' turns out to be sufficient to establish some key principles of outcomes relative to the operating basis.

Technology evolves and changes very quickly, at least relative to organic evolution (which is a topic unto itself). Hence, the particular instance specifics of any such machine/robot embodiments are inherently unpredictable, particularly when trying to prognosticate more than about a decade or so ahead. We simply do not know what people will invent, and therefore, we should keep our assumptions about the nature of AGI/APS/superintelligence to an absolute minimum, noticing only what is absolutely necessary about the overall class of concepts rather than about anything that is specific, such as particular categories of actuator and/or embodiment.

Fortunately, the specifics of the technological embodiment do not matter so much as the fact of there being a particular kind of embodiment substrate. Ie; when attempting to consider what might happen over the course of centuries, much lower level and more basic principles must be used so as to have, and gain, clarity as to the essence of what matters, what is going on, with respect to a particular situation or topic.

:uzs

> Are you concerned with the general philosophy
> of the implications of the use of --
> the introduction of --
> technology into what is otherwise
> a natural environment?.

Yes.

> Is evolution process
> important to your argument?.
> If so, how?.
> And why should that matter?
> Evolution is very slow.

Herein, the only reason 'evolution process' is important is simply that it considers the means/methods by which changes to the AGI/APS code can occur, as due to an interplay between the embodied and the virtual. This in turn has significant implications when extending to consider ecosystem formation concepts, and then, ultimately, ecosystem interrelationships.

Evolution is a specific type of epistemic process, when considering such notions generally, insofar as it is a means by which 'possible species learn about possible environments' -- ie; what sort of creatures work well with what other sorts of creatures, so as to endure, self perpetuate, replicate, etc.
The notion of 'evolution' is a specific sub-type of the more general idea of an 'epistemic process', which is itself connected to the notion of learning. Insofar as machine learning is the central idea inherent to the very nature of AI/AGI/APS/ML, etc, it becomes clear that the basic facts of any and every process of learning are also inherently involved. In other words, not just the preferred kind of learning algorithm that a given technology instance is built around, but also the inherent kinds of facts associated with all kinds of learning, which will be ambiently true in the universe regardless.

As such, anything which generalizes 'learning' (inclusive of "optimization", the AGI concept itself), and/or the 'capability building capability' (ie, also known as 'power seeking'), and/or which implements learning about learning, inherently involves an increase in *both* the number of domains learned about, *and* the number of learning process domains that are "doing" the learning (ie, as optimizing the optimizer, and optimizing the process of optimizing the optimizer, etc).

In effect, the learner cannot learn about learning without also shaping the very being of the learner so as to have and integrate more learning methodologies (ie; optimization shaping the basis of optimization, etc). Hence, what is relevant and inherently true about any learning methodology becomes applicable and relevant/inherently-true about every AGI process, once it is/becomes AGI process.

Thus, it is possible to know conclusively that any generalized learning necessarily entangles notions of 'self modification', and thus inherently also involves notions of 'change dynamics in/of substrate' (aka 'adaptation'), *regardless* of the timescale we happen to be concerned with.

The mere fact that some of these processes of change -- things involving substrate (adaptation again), aka learning via the dynamic of "evolution" -- are very much faster and/or slower than others (and/or whether the optimization is fast or slow) is simply not relevant, not important at all, when electing to *actually* think about the inherent long term implications of any given critical act (ie; choosing whether or not to invent/use AGI/APS/superintelligence, etc). And overall, we are considering the long term. Therefore the 'slowness' of evolution, as a kind of learning (adaptive) process, simply does not matter -- the overall effects are *inexorable*, eventually.

Care is needed to *not* be distracted by thinking of 'optimization' as somehow meaning/implying "fast" or "perfected"; it is rather to think about 'optimal' as arriving at something which is inexorably leading to that which is unchanging and final, an immutable truth -- ie, as an inexorable outcome, a completed unchanging eventuality, result, or state (a singleton point/state in the overall phase space of possible changes).

Learning, like evolution, and like optimization (whether by gradient descent or some other algorithm, technique, methodology, etc), is a *convergent* process. It is the general principle of convergence that is important here, particularly when concepts like 'adaptation' and 'evolution' are inherently also binding their substrate into that convergence. What matters is the essence of the ultimate eventuality of that convergence, far more than the working method of the dynamic of the convergence process.
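As a minimal toy sketch of this point (the update rule, the rates, and the fixed point below are illustrative assumptions only, not anything claimed by this essay): a convergent process arrives at the same eventual state whether its update rate is fast or slow; the rate changes only *when* it gets there, never *what* it converges to.

```python
# Toy model: exponential approach to a fixed point, x <- x + rate * (target - x).
# Two dynamics, one updating a million-fold faster than the other, still
# converge on the same end state.

def converge(rate, steps, target=1.0, start=0.0):
    # closed form of repeatedly applying: x <- x + rate * (target - x)
    return target - (target - start) * (1.0 - rate) ** steps

fast = converge(rate=0.5, steps=100)            # eg; the engineered optimizer
slow = converge(rate=5e-7, steps=50_000_000)    # eg; slow substrate-level adaptation

print(fast, slow)  # both land arbitrarily near the same fixed point (1.0)
```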
Ie; whatever specific algorithm, automation, or method of learning/optimization is used -- none of these things matter very much in comparison to the evaluation of outcomes.

Moreover, in the same way that every cause has more than one effect and every effect has more than one cause, it is also the case that learning dynamics (those that involve more than one domain of action) are going to be inherently multiple. No embodied system involves just one learning/optimization dynamic, regardless of how it is built, etc. (The mere fact of being embodied is already, inherently, in itself, multiple domains of action).

Ie, in any real world system, there is always more than one learning dynamic occurring, more than one feedback cycle, and 'optimization' will also always occur, even if these peripheral dynamics are 'slower' and/or 'less obvious' and perhaps more tacit, given that they were not specifically 'built in' as the 'main operating concept' (SGD, etc). Therefore, when considering the overall eventual outcome, the inexorableness of these other implied learning dynamics can be, and become, even more important.

:v3y

> Why does technology
> evolve so much faster
> than organic life?

Technology evolution occurs in a virtualized sense, as a kind of simulation, whereas organic evolution generally occurs in just an embodied context, in real atoms. Changing patterns is a *lot* easier, and very much faster, than moving atoms around. So the usual practice is to do the design and invention work in a simplified simulation environment and then to test/deploy in the space of atoms.

When any such design process becomes automated, and when the things which are designed are the automatons that design themselves, then the inherent circularity of that process starts to have its own inherent implications. Learning as a learning process, inclusive of evolution as a specific exemplar, will do its own thing, ie, learn what is learnable. Insofar as the design-to-build process inherently involves substrate, the learning process which is evolution will therefore necessarily entangle a learning discovery of the very laws (regularities) of the physics of atoms -- particularly those sorts of atoms (artificial) out of which the self replicating/extending and/or capability building capability is itself created. We ignore these effects at our true peril.

:v5j

> How is it that machines,
> or technology for that matter,
> can evolve?

Insofar as software and hardware can be considered as 'virtualized' and 'embodied' respectively, and insofar as there is a physics (compute) which can translate software (design plans) into hardware (manufacturing), and a means by which hardware can contain software (machine memory), then it only remains to consider how (complicated) changes occur, in relation to the actual complex environments -- the ecosystems -- in which instances of these machines are embedded, and in which, in turn, code is embedded in each instance.

With this notion/concept of a 'whole system' it is then possible to consider the means and methods by which changes to the 'source code/plan/pattern' can occur -- ie; the three categories or 'types' of changes inherent in the dynamics of evolution itself. Insofar as change is overwhelmingly likely, and will for sure encompass, and occur within, at least one of the three types, then evolution, as a properly applied epistemic process concept, cannot not be considered as occurring.
With the biological evolution process, any and all such changes to any virtualized code/plan/pattern must themselves occur through the mediation of the organic substrate -- actual embodied atoms -- such that the time duration of the change process itself is gated on how quickly atoms can be moved around, as a means by which experiments/trials can be implemented/tested, so as to find 'what works', in the ground of the actual physical universe.

It was a significant upgrade to have humans begin to be able to learn, and process information abstractly, in pure pattern space, without having to mediate everything -- all possible experiment and exploration -- purely/only through atoms. Rather than being 'pre-programmed' to be responsive to an environment via a brain, humans had a 'social process' that, via inter-generational cultural transmission, 'socialized a person', and hence gave them a toolset for how to interact with local environments (inclusive of culture, tribe, etc).

What was lost in terms of immediate responsiveness from the moment of being born, in terms of 'built in instincts', was made up for in that human animals had a very long gestation time, and an even longer "childhood time", by which they were given 'custom firmware' with which to live their lives. Since the process of 'learning' was more virtualized than 'evolution', and was less contingent on moving atoms around, which takes time, the process of social learning and species adaptation, in terms of learning, could occur over much shorter timescales than those over which purely biological evolution could occur.

:v74

> Is there a 'speeding up' factor
> as evolution becomes more virtualized?

Yes. Moreover, the notion of change as mediated via a basis of code is a vast speedup over what had come previously. The main issues here have to do with how the notion of change is represented, how it occurs. 'Alignment' in the general sense is a conditionalization on change -- an attempt to make some types of changes more possible than others, or to prohibit certain types of changes from occurring at all. Hence, we do need to actually understand, at least observationally, something about the inherent nature and relationships between the concepts of choice, change, and causation, and how those concepts inter-relate, in actual practice, for us to understand the general notion of alignment, and what is, and is not, possible, in that regard.

:v8y

> Does understanding the essence of evolution
> represent a certain understanding about change?.

Yes. 'Evolution' is a process in itself, in addition to being a kind of 'learning process', as a particular subset of a larger and much more generalized notion of 'process', itself sub-classed as 'evolution process'. We can use this understanding to notice certain principles, which will then enable us to predict, with excellent confidence, certain general changes and outcomes -- ones which are particularly important to our future, as a species.

Where at a certain point; the organization of larger multi-cellular life became more coherent in its response to increasingly complex and varied environments by developing sense organs, neural tissues, and muscles. This enabled much more complex 'assess and react actuation', even though these responses were largely 'pre-programmed' in the connectivity structure of the brains.
The next development was to generalize these otherwise specialized, single purpose, single creature/environment brains to upgrade them to 'general purpose brains', ones which could have their 'firmware' loaded in at runtime -- enabling adaptiveness in all sorts of environments.

:vaj

> What do you mean by 'firmware'?
> I thought we were talking about
> the basic progression of
> human evolution and development?

Everything we learn, as humans, up to the age of puberty, or so -- all of that information -- about how we adapt to whatever environment we grow up in, and the enculturation process itself, that is all "firmware" -- at least as that concept is applied to 'humans'.

Some of us, for various reasons, have had to, and been obligated to, learn how to 'hack' our own bio-firmware. This involves lots of things like 'healing' and health, biology, psychotherapy, developmental psych, evolutionary history, learning about neuro-diversity issues, etc, so as to deal with whatever traumas, as a kind of mis-programming of our imagination about even what sorts of choices are even possible, let alone desirable, let alone practically and realistically attainable, etc, as concepts and teachings given to us by our parents, our culture, by our communities, nation states. This was (maybe) great for dealing with the environments *they* lived in -- the incumbents, our leaders, parents, etc -- but is probably not very helpful for us, living in the environments that we now live in.

The world continues to change, and we must adapt to that. In the modern world, everything is changing. It is doing so more and more quickly. In the last 40 or so years, each generation has had to re-learn and re-create themselves, their own culture, what it means to be a 'good person', etc, and to re-invent the notion of what is best to be valued, etc.

:vc4

> By what methodology would it be --
> is it possible --
> to fully ensure and guarantee
> AGI/APS/superintelligence alignment/safety?

In contrast to many prior works on this topic, which attempt to establish a basis by which alignment can be created, enforced, etc, it is herein being suggested that it is better -- or at least possible -- to seek to establish a means by which one can know for sure that no such concept of alignment is possible, in any reasonable long term perspective. In this particular situation, we are actually considering the opposite question.

:vdn

> Someone has suggested that we ask instead:
> Is it possible to show
> that there is *no* possible concept
> of AGI/APS alignment and/or of safety?.

Yes, that is correct. It *is* possible to show that the concept of AGI/APS alignment and/or of safety is internally inconsistent, in both the sense of 'not long term possible/practical in the real world', and also of 'not even theoretically possible, even in (absolute abstract) principle' (and/or via the principles of modeling itself, etc).

In other words, to be really especially clear: there is *no* possible way to make AGI/APS, or *any* form/type/mode/model of artificial superintelligence, 'safe' and/or 'aligned with human interests', at all, ever, in *any* physical world where there is any real actual distinction between 'artificial' and 'natural', as understood as inherent functional distinctions in the very nature of the chemistries involved, and where some notion of substrate, and therefore also of evolution, as a learning and adapting algorithm, is inherently (cannot not be) involved.
:vf8

> Is it possible to show or to prove,
> or to conclusively and comprehensively demonstrate,
> that there is no realistic, or even conceivable,
> attainable notion of AGI/APS alignment,
> even in principle?.
> Can this even be done?

Yes, it can.

> By what methodology, or basis of thinking --
> what conceptual toolset of principles --
> would we be able to show such an 'impossibility proof'?

The claim we are attempting to establish is that the notion of 'long term AGI alignment' (and thus also of 'planetary safety' and similar) is fundamentally, structurally, and obviously impossible. With a careful explication as to the basic and common sense meanings of the terms, the effort herein this dialogue essay is to make it as simple and as obvious as possible that it is actually impossible to get AGI alignment, in the long term. This comes down to asking the right sort of questions, and having the answers be appropriately and self-evidentially clear. That is what we are attempting to do.

So how to begin? We notice that a lot of things get easier to think about when extending the time scale out. Stuff that was confusing, or which seemed to be important in the short term, or which was specific to local circumstances and not actually that defining, turns out to simply become inconsequential. When thinking much longer term, in larger and more general ways, more reliable principles emerge. From this, we can start to see the bigger picture of what sorts of questions we are actually asking, and notice what is actually important much more easily.

:vjn

> How do we find those more general principles?

The overall schema is to really think about the relationship between AGI/APS/superintelligence as represented by machines and 'carbon-based life forms' -- as represented by all of the biological stuff that is currently going on. We also want to get the most basic and general notion of the actual question we need an answer to.

:vne

> Can we have machine intelligence be aligned
> if it has agency of its own,
> if it has its own capacities
> to make choices at all?.

In other words, we need to think in terms of the relationship between choice, change, and causation, as first principles and concepts. In this sense, the notion of 'alignment' is about choices made with respect to the benefit of a human, or of all humans, or all human interests, and/or well being, and/or the well being of things that the humans depend on -- things like the ecosystem, food supplies, etc. The notions of agency, of autonomy, sovereignty, etc, are all understood in terms of choice, and who/what that choice serves.

:vqa

> Can the choices/agency of AGI,
> however it is constructed or conceived,
> be constrained
> so as to support carbon-based life
> in any form at all?.

This is the first generalization. Trying to have it be specific about a particular human, or even a particular group of humans, some local culture or some such, is a purely local concern. In terms of geological time, any such notion of 'benefit' factors out -- is far too limited to matter -- it does not teach us anything important about principles and basis.

The first move is to take the notion of AGI alignment and raise it to a relationship between machine life forms and biological life forms, with biological meaning carbon-based particularly. One of the things that can be done immediately as a result of this generalization is that we can start to think about, and notice, the actual substrate question.
This analysis does not depend on, and cannot depend on, any specific meaning or technology basis of understanding what is an AGI/APS, a "learning machine", a 'robot', etc. It is probably impossible for us to be able to predict any of these factors, and if we needed to be confident about predicting such things in order to be able to assess AGI X-risk, then we would be in an impossible position -- having to make critical world defining choices, and having exactly zero useful tools for doing so. For something this important, another way must be found.

When looking at a longer term, hundreds of years rather than dozens, important factors that would have otherwise been overlooked start to become clear. And with even more time, we gain more clarity. From this exercise, we can learn what tools and principles are needed. This was part of the reason why exploring how evolution works, in an abstract conceptual way, over epochs of geological time, is important to start with.

:vru

> Where in regards to the AGI/APS non-safety assessment;
> What is it that we need to observe?.
> What is the basic place to start?.

If we look at the question from a chemistry point of view, we can consider that 'machine intelligence' will most likely largely be implemented on the basis of silica-based compute. Just about everything we are currently engineering in the space of compute capability (mind building) is on this basis, on this sort of substrate. Both capability and efficiency depend on it. The silica-based compute is really, really fast, and we know how to make it fairly well.

Silica-based compute is *currently* a lot more dependent on global material supply and production chains, factories, capabilities, etc, to maintain itself as a functional being/agent in the world, whereas humans can maintain their cognitive abilities by consuming local resources from a 15-mile radius. But silica-based life makes up for that fragility in terms of its sheer computational complexity, born partly of the fact that its standardized hardware allows it to rapidly connect up and replace parts and transmit encoded information in ways that humans simply cannot, given the interoperability and bandwidth constraints of our non-standardized wetware.

If we are wanting to better understand the characteristics of pattern, we can do two things:.

- 1; we can consider the differences in compute infrastructure based on the characteristics of the substrate (atoms).

- 2; we can consider the differences in terms of the energies involved. Ie, where considered from a purely energetic perspective as to how much energy goes into a carbon-based brain versus how much energy goes into a silica-based brain.

From a basic physics analysis of this problem, we can immediately notice that human brains are actually quite efficient at turning/transforming available energy differentials into pattern transformations (ie, compute). For a mere couple hundred watts (and quite a bit less in some cases), you get the near functional equivalent of teraflops worth of compute capacity. (Obviously that 'capacity' is not available in the same sort of way, as programmable (by some sort of 'brain external agent'), as would be available with a supercomputer). Trying to get an equivalent level/degree of the computational capacity/complexity already represented by a typical human brain would require many megawatts (probably gigawatts) to run that neural network in silica.
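As a minimal back-of-envelope sketch of the comparison just described (the specific wattage figures below are loose illustrative assumptions drawn from the rough ranges above, not measured values):

```python
# Rough power draw comparison, using approximate figures from the text above.

brain_watts = 200           # assumed: a human brain at a couple hundred watts
silica_equiv_watts = 5e6    # assumed: megawatt-scale draw for a roughly
                            # comparable silica-based neural network (MW to GW)

ratio = silica_equiv_watts / brain_watts
print(f"silica substrate draws roughly {ratio:,.0f}x more power "
      "for a comparable degree of compute capacity/complexity")
# -> on these assumptions, tens of thousands of times more power;
#    at the gigawatt end of the stated range, millions of times more.
```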
The energy efficiency of the carbon-based compute is substantially greater than the energy efficiency of the silica-based compute. This does not always help though, as the rate equations do also still matter. While brains can do more total compute with less energy, they are also a lot slower than Si based compute architectures -- by rates of a million times or more.

For example, in any conflict situation, an AGI robot that figures out how to kill you, and then also executes on that plan, taking merely a few milliseconds to do so, will generally prevail over an army person who might need some number of whole minutes to even figure out that there is a contest, or worse, if coordinating through some social process and chain of command, may need whole hours or days to have any clear answer of what to do, how to do it, and also have/obtain the needed resources, etc.

:vte

> Maybe we do not have to worry
> about this too much
> for a little while
> because the efficiency differences
> are substantial?

Unfortunately, it is not actually just about the energy efficiency of the compute substrate. It is also necessary to consider the energy that is required to support the substrate -- ie; to manufacture it, increase it, etc. The reason that silica-based compute is (overall) a lot less energy efficient than carbon-based life forms actually has to do with the chemistry and physics of the substrate itself (ie; Si substrates can be *much* faster at handling a lot more compute complexity, and that performance has to be paid for).

This means we need to consider things in a succession of increasingly refined questions, such as:.

> What are the signal propagation rates?.

> How do electrons move through the system?.

> What are the general entropy classes
> associated with carbon-based compounds
> versus silica-based compounds?.

It turns out that we actually have a way of evaluating this last question in a very broad and general way. Consider the matter constituency of the universe, and particularly of the planet Earth. Then use those observations as a way to characterize the range of phenomena associated with silica-based chemistry versus the range of phenomena associated with carbon-based chemistry.

For example, in the four and a half billion years of the Earth's history, and where given the geothermal capacity and the many varied gradients of density, etc, we can notice that just about every possible encounter between atomic constituents, on a statistical basis, has at least occurred several times somewhere. This means that carbon atoms plus any other atom on the periodic table, and silica atoms plus any other atoms also on the periodic table, and all of the possible combinations thereof, have occurred somewhere, somewhen, in at least the same sorts of statistical frequency envelopes as would be relevant when considering some sort of 'regular recurring assemblies' of possible/potential creatures of agency.

We can consider the permutation complexity that is occurring over time, both for carbon and for silica. When broadly evaluating the overall results of these kinds of potential interactions, in large aggregate, in long intervals of time, we can therefore assess something about the nature of the enthalpy of the reactions that are occurring, based upon the endurance characteristics of the compounds that are involved.
:wk6

Where/if we look at the total overall variety of the carbon compounds that occur on the planet (the biosphere, etc) and in the full substrate of the planet, we notice:.

- 1; the sheer vast variety of the kinds of carbon compounds that occur.

- 2; that very few of those carbon compounds endure for really long periods of time (most are gone in much less than decades).

Whereas, with a consideration of the silica-based varieties, we notice instead:.

- 1; that nearly everything that involves silica atoms, in any form at all, is some type of rock.

- 2; that rocks are (comparatively) very enduring in time (millions of years).

What we are really comparing is the average chemistry associated with silica versus the average chemistry associated with carbon. Considered overall, what is noticed is that the energy transitions that are required to cause changes in silica compounds are very much greater than those involved in functionally equivalent carbon based chemistry.

For example, this energy differential is noticed when considering chemistry forms in terms of necessary functions. Animal agent carbon based life (humans) has a respiratory process -- a kind of energy transit function, itself inherently necessary to function -- that involves carbon dioxide. If we are to consider the parallel compound in silica, we find ourselves needing to consider things like silicon dioxide, as maybe being a functional equivalent. Yet we notice immediately that, even though parallel chemistry complexity would necessarily inherently be involved (ie, considerations in the space of pattern), 'fourth column' compounds do not all involve similar classes of energy.

:wd8

The overall noticing: that shifting atoms while maintaining pattern does not result in similar average involved energies.

For example, where/if you just sort of line up all of the different reactions that could be occurring; (in general) carbon reactions occur between approximately minus a hundred degrees Celsius and around 400 to 500 degrees Celsius, and the carbon compounds, for the most part, have the majority of their interactions in that temperature range. Whereas if you look at silica-based compounds, (in general) the reaction processes, for the most part, start at around 500 degrees Celsius and go up to a few thousand degrees Celsius. The center of the Gaussian distribution of the reaction temperatures of 'average functional carbon reactions' is much, much lower, relatively and comparatively speaking, than that of the average similar types of 'functional silicon reactions'.

Where/when evaluated in a real world substrate physics, the overall energy involved in all manner of silica-based reactions is typically/inherently much higher than that {involved in / associated with} all manner of carbon-based reactions.

:vwj

Consider what is needed to support life, and/or also what is needed to support compute, and you will notice, just from a purely information theoretic basis, that there is a necessary diversity of reactions that are required. It can naturally be expected that there is *also* a minimum necessary diversity of available reactions and reaction types (as is already known to be required in order to support carbon-based life) that would presumably be inherently required to support silica-based life, compute, etc (ie, any artificial 'solid-state' life).
This notion of 'necessary substrate complexity', in the form of 'necessary chemistry complexity', therefore implies differences in the ways that different substrates implement that process. This would also be true for a presumption of intelligence or consciousness or agency as an aspect of life. All of it is somewhere represented in terms of chemistry, and in terms of the implications of that chemistry.

As such, we notice that, in general, silica-based process, just because it is silica-based process, is going to involve overall (in the real world) a lot more energy to implement things like computation, agency, and even more especially, as key to our assessment, things like reproduction, capacity increase, making more of itself, etc (however that occurs, and regardless of what that means specifically).

When considering the sheer amount of energy that is involved in silica transformations, it is actually a non-trivial consideration:.

- There is a lot more heat.
- There is a much wider range of pressures over which silica reactions are required.
- There is a much larger alphabet of elemental materials that are required.

:wnl

When considering carbon-based substrates, the number of elemental materials is overall very few (ie; mostly hydrogen, oxygen, phosphorus, potassium, iodine, nitrogen, and sulfur). Whereas when considering what is necessary to create the capacity for reactions in, with, or involving silica in any way at all, a larger alphabet of elementals is required, *and* they are certainly going to be needed at a much higher set of temperatures, and over a much wider range of pressures, to be so engaged.

This allows us to think about the essential characteristics of the ecosystem in which silica-based life would be required to live, as distinguished from the ecosystem in which carbon-based life would be required to live. We notice and model immediately the magnitude and significance of the sheer differences in these worlds. This sort of consideration leads, in turn, to the question of:.

> What are the ways in which
> these ecosystems can (will)
> interact with one another?.

Where given that carbon-based life is more fragile in the sense of the range of endurable temperatures and the range of endurable pressures, as compared to the range of temperatures/pressures necessary to even implement (build, make, or operate) any type of silica-based compute process, life form, in any form at all, etc, then we can begin to make some general observations. Notice that evaluating these claims does not require any considerations or assumptions about consciousness or agency or anything like that at all. All that is being considered here are substrate issues.

:xwa

> What is the relationship between the two ecosystems?.

Where given the temperature differentials, in order for a carbon-based life form to endure, it would need to be isolated from the silica-based ecosystem. This isolation would also likely be required because much of the silica-based chemistry would be very toxic to carbon based life. In effect, everything in the carbon based ecosystem would need to be separate from, and protected from, the basis for the substrate of silica-based life. This is necessary because the range of temperatures/pressures necessary to maintain/operate silica-based life is either going to be very much too hot, or very much too cold.
Where for regeneration or change of substrates, or when forming or extending new silica based life instances or capacity, in reproduction or things like that; the silica-based compute/life is going to need a lot more heat. When silica life (compute) is in 'runtime', silica-based life wants it to be very cold, as this will provide greater process efficiency. Silica based life will thus operate over a much wider overall range than that which a carbon-based life form, in whatever average forms, would be comfortable with (at least at the levels of complexity that we are concerned with).

When considering just the mechanical process of manufacturing silica-based microchips, for example, the clean rooms and the chemistry that is involved and the reactors, etc, are generally going to need a lot more energy and process purity, etc, in order to just make the silica substrates in the first place.

:xsu

However the silica complexity gets represented, somewhere along the way, the levels of energy and radiation involved are going to be inhospitable for carbon-based life.

As an aside: Current carbon-based life branches also have other fragilities to conditions that solid-state lifeforms or, more broadly, artificial (non-organic DNA-based) lifeforms would need for their continued existence. Carbon-based lifeforms are sensitive to the deprivation of available oxygen and water -- a deprivation that solid-state lifeforms, as deploying metal-based extensions or machinery, will likely need in order to prevent oxidation (ie, rust). Also, the synthesis and repair of *any* artificial lifeforms will require the greater abundance of many (non-organic) chemical precursors that will be toxic to carbon-based lifeforms. Even conventional semiconductor chip production today requires around four hundred different chemicals, of which about a twelfth are known carcinogens. Suffice to say, *any* artificial life will need (subtle/unknown) conditions to preserve and scale its continued existence that are inhospitable for, or toxic to, carbon-based lifeforms. However, the focus of this piece is on silica-based life, given silicon's relative abundance in the Earth's outer crust and standardized use in semiconductor production today.

Inherently, we are considering two very different ecosystems. Moreover, in order for carbon-based life to endure, the carbon needs to be separated from the silica. You cannot really have them coexist, at least not intimately.

:yqu

> Can we put up some sort of fence
> or some sort of barrier?.

This is roughly the "humans as pets" scenario (@ 1 #note1). It is important to notice that, even in asking the question of "can we put up a barrier?", there is a real difference between the fragility of carbon-based life forms versus the fragility of silica-based life forms.

> Who is protecting whom?,
> From what?.
> Why are they doing it?.
> How are they doing it?.

For now, we will have to leave the questions of 'why' and 'how' until later, along with (also) any assessment as to whether 'being pets' is even, in any way, a (maybe?) desirable condition that most humans would agree/assent to (@ 2 #note2).

When considering what is being protected, we notice that carbon-centered molecules are just way more fragile, when it comes to departures from standard temperatures and pressures, than silicon-centered lattices. The silica-based life forms are largely immune to the kinds of energies and temperatures and pressures that carbon-based life forms would need, require, and attempt to create as an environment for themselves.
Therefore, the barrier is to protect the carbon-based life forms; it is not at all about protecting the silica-based life forms. The carbon-based life forms/ecosystem is the one that the barrier is needed for, the one needing protection.

:yfs

Humans would/will therefore need some protection from the machines that they themselves created. Therefore, the question becomes:.

> Where over the long-term;
> can that barrier endure?.

This is a very important question. When we look at what it would take to create a real barrier like that, and moreover, how long that barrier would be able to endure (is ability the same as need?), and also what shape and nature that barrier would need to take, etc; we notice that we are in fact answering something about the nature of AGI/APS/superintelligence alignment.

These implications become increasingly clear as we explore further, and elect to ask much more specific questions:.

> What sort of forces are involved,
> as would be needed to maintain
> and persist/repair that barrier?.

> How perfect does the barrier
> need to be?.

> What exactly is being blocked,
> and how well (what slippage)
> is allowable?.

> Is the barrier maintained by
> the carbon life form?,
> or is the barrier maintained by
> the silica-based life form?,
> or by some sort of joint effort?.

:z5y

Where given the energy imbalances that are associated with the environments themselves, it eventually becomes certain that we would actually need the silica-based life form to be the maintainer of the barrier. This is because the carbon-based life form, for the most part, is not necessarily going to have the resources to prevent the higher energy of the silica-based life form from penetrating the barrier, if it was persistent enough. If the silica based systems wanted to go through the barrier, for any reason, they would simply cut their way through.

Our attempting to make a barrier out of carbon based (or managed) resources (things like wood, and skin, and such) would offer no resistance to something that is comfortable processing reproduction and self capacity extension via substrate manipulations involving fires at 1500 deg C. On the other hand, given that silica based substrates tend to more often be similar to things like rocks, and insofar as rocks are rather hard, strong, long lasting, etc, they tend to not be affected by the sorts of things that carbon based life has available -- things like muscles, etc. While carbon based intelligence can process rock, it is much more likely that an enduring barrier will be made of rock, glass, etc, than of wood, especially when the things being protected from involve very different temperatures and pressures, and/or involve different kinds of toxins, etc.

Therefore, we will need the silica-based life form to voluntarily agree to adhere to the barrier.

:z7u

As such, the question becomes:.

> What would be the basis on which
> the silica-based lifeforms
> would agree to maintain that barrier?.

Basically, we end up asking something akin to:

> Why would any AGI/APS/superintelligence
> even want to keep a "zoo" of exotic humans?
> What is in it for them?

With this sort of question, the issues are now much more closely aligned to the AGI alignment problem.
And if not aligned, if somehow the AGI/APS simply does not want what humans want, does not care to preserve human life ecosystems, and/or does not develop in itself the skill to actually do so (biological life is complex, and needs a *lot* of very varied conditions to maintain its reproductive persistence across time, etc), then what?.

> How do we ensure that a non-aligned AGI/APS
> is "safe", insofar as it would not simply
> over-write all of the carbon based ecosystem
> to make more room for itself, in accordance
> with its own unconstrained needs, etc?.

Where considering the AGI alignment problem, we have shown that one way of asking this question, one specific way of considering it, is whether or not the silica-based life/intelligence/agency would consider the interests of carbon-based life forms in general, just in the question of whether or not it would even bother with the effort/cost to maintain the barrier (ie, in terms of atoms, energy, and pattern). Where/if they agentically decided not to maintain the barrier, then carbon-based life forms, for the most part, would not be able to prevent them from penetrating the barrier (as has been observed earlier). These sorts of factors are especially important when considering the long-term -- ie epochal time scales -- not just dozens of years, but the much longer term -- hundreds to thousands to millions of years.

:zcu

Consider that we are not just asking this question in terms of whether AGI alignment could be 'built in' initially, or whether it can be created temporarily, for the next 10 years (for example). Consider instead the question of whether or not that barrier would endure for the rest of time. Ie, as maintaining all internal conditions within the barrier enclosed volume to remain within the narrow ranges of environmental conditions (ie, things like temperature, pressure, low thresholds of toxins, abundant food, water, clean air, etc), as needed by humans (life) to survive.

If there is any gap in the provisions, or a gap in the intentions/efforts of the AGI, or the effectiveness of the AGI, and completeness of care, etc (on the AGI/silica-based life part), then there will be serious problems. Consider what happens if there is a single lapse in any critical aspect for even as short a time as a year or two (ie, we need to consider the total lifetime of the barrier, out of hundreds or thousands of years, or millions of years, if thinking about the well being of multiple species, as considered from the AGI/APS point of view). If there is any gap, then the humans are all dead by the time the AGI gets around to restoring the integrity of the barrier/pen.

Go without feeding or watering your pet for a few weeks or so and see what happens. It may seem like a day to you, yet to them, in terms of *their* 'years of life', and not in comparison to your years of life, it would seem like decades -- far too long to endure the loss of key resources like air, water, or food, or an influx of critical toxins, as may 'leak in' from the not so completely isolated silicon enabling 'ecosystem'.

:zee

Notice that once you have created a new species, some new life form, that has the ability to perpetuate itself -- that it has some agency and some interest and some capacity and some intelligence to be responsive and adapted to the environment, in the sort of way that we are -- then it is going to go through some effort to maintain itself, and will have the will, and the desire, to do so, or else it will not endure.
*Anything* that is not interested in preserving itself, and which is therefore probably not going to go through the effort, and be willing, to overcome the sheer variety of random experience, and time, and events that can happen, will, at some point or another, cease to exist. But on the other hand, if you give it agency and a will to live, etc; then at some point or another, it is going to have the capacity to execute on those sorts of functions. It will endure. As a result, you have the introduction of a new species into an ecosystem.

These above aspects and issues would remain true as much for anything made of carbon as they would for something made of silicon (AGI/APS, etc). Yet, adding AGI/APS to the world is not actually an introduction of "a new species" into an old ecosystem -- it is actually requiring the creation of a whole new ecosystem -- a complete constellation of interwoven and interlinked processes in some sort of mutual interdependence. Ie, it is an introduction of a new species that *also* creates an ecosystem around it. This new ecosystem, as necessary to support silica-based life forms, would occur simply as a side effect of the nature of the substrate chemistry and physics that is inherently also involved. No matter how it occurs initially, it is overall never just one new species.

Even with adding any new carbon based life, some new animal or bug or bush displaced from one Earth continent and somehow transplanted to another place, we notice that whole existing ecosystems are easily thrown out of balance, and that it takes something like a thousand years, and the careful selection and addition of just the right mix of other species, to rebuild and maintain the, now different, but restored, carbon based environment (ecosystem of place).

:zfd

However, as we observed on the basis of just the nature of silica chemistry alone, it is actually going to be not just one species, nor even just a whole collection of processes that are needed to maintain that/those multiple new introduced species; it is going to be a totally new environment. This means that, to some extent, that new life, the AGI/APS/superintelligence, is inherently going to be required to be building its own niche in the world, building its own environment (ie, consisting of things like mines, factories, chemical refineries, industrial zones, fab sites, and all manner of electrical support, solar, maybe even inclusive of nuclear, coal, etc), to endure at all.

So at the very least, when thinking about making some new instance of AGI/APS in the world, we are talking about 'environment introduction', and not at all "just" a species introduction. Maybe that new species likes and depends on a modern industrial manufacturing environment to start with, but then it is hard to say what sort of increased levels of artificiality will emerge from that, as things develop. Therefore, what we are really considering is whether or not that new environment is going to be in some sense 'compatible with' the old environment, the one we live in, or if that new 'operating context for AGI' will be in any way even 'relative to' our needs as humans, as animals, and as representative of all other planetary life, at all.

:zga

> What is the nature of the relationship
> between the two environments?.

Given the nature of carbon-based life forms, when examining existing carbon-based ecosystems, you observe very quickly the 'cycle of life'.
Where/because all forms of life, and substrate, are composed of atoms, we notice also that the atoms will have to move in circular patterns over time, in order for there to be anything that maintains or endures that life in time at all. That all carbon-based life processes are going to depend on recirculating the atoms that they use to maintain themselves. Similarly for silicon. Therefore, things in the ecosystem will tend to self regenerate by consuming things that have already existed. That this implies that there is a decay process. That the output of earlier decay processes becomes the food for the next situation/processes.

On the other hand, though, when you introduce a silica-based life form into a carbon-based ecosystem, the carbon-based ecosystem has no way to decay or to consume, or to reuse -- or to use in any way at all -- the silica-based elemental products that are part of the silica-based life form. That it will for sure be the case that the silica-based life form is not going to be "available" to the carbon-based life forms in any kind of regenerative way. The AGI/APS species is a species that has no competitors. There is nothing in the carbon-based ecosystem that is going to try to eat the silica-based life form. Whereas, on the other hand, the silica-based life form, given the variety of elemental materials out of which it is constructed, some of which are shared with those of which the carbon-based life form is composed, and given the fact that it operates at a much higher level of energy, is going to want those atoms, and it is going to be more able to take them out of the carbon-based life form. This is why we needed the barrier in the first place. We have noticed already that it is a necessary thing.

:zgw

However, when considering that barrier, along with considering if/how that barrier is constructed, we notice that we must also be concerned with how (by what process) that barrier will be maintained.

> What happens if the barrier
> is not maintained, and/or
> is not designed with maintenance
> in mind, as a design feature?.

That this is roughly equivalent to the question, and the observation, of what happens when you introduce a new species into an ecosystem in which it has no natural competitors. Without the barrier, there is a *guarantee* of an introduction of a species that not only has no competitors, but that can have no competitors. But this is not true on the carbon-based side; it is only true on the silica-based side. Silica ecosystem process will for sure consume all carbon based process (life and its ecosystem), but NOT the other way around -- the dynamic is NOT symmetric. And moreover, insofar as the asymmetry happens inherently merely on the basis of a single introduction event, we can also notice that even so much as a single temporary/momentary pinhole (in even a single domain or level of abstraction) in the wall/barrier between the two ecosystems is enough to let one new AGI/APS "species" (process) through, and that, in turn, is enough to trigger a kind of viral pattern/process replication effect -- one that is inherently and terminally destructive to the carbon based ecosystem, via a kind of one way chemical/atomic consumption. As has been noticed many times, in any situation where you have a replicator with zero constraints, it eventually consumes the entire environment and the system associated with it (a minimal sketch of this runaway dynamic follows below).
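As a purely illustrative aside (not part of the essay's argument proper), the shape of that 'unconstrained replicator' dynamic can be sketched in a few lines of code; the doubling behavior and the pool size below are arbitrary assumptions, chosen only to show the shape of the process, not to model any real substrate:

    # Illustrative sketch only: a single replicator that leaks through a
    # "pinhole" and doubles each cycle, consuming a finite substrate pool.
    # The specific numbers are arbitrary assumptions, not measurements.

    def unconstrained_replication(pool_atoms, cost_per_copy, max_cycles):
        replicators = 1  # one instance getting through is enough to start
        for cycle in range(max_cycles):
            new_copies = replicators          # each existing copy makes one more
            needed = new_copies * cost_per_copy
            if needed >= pool_atoms:
                return cycle, 0               # the substrate pool is exhausted
            pool_atoms -= needed
            replicators += new_copies
        return max_cycles, pool_atoms

    # Even a very large pool (here 10^24 units) is consumed in well under
    # a hundred doubling cycles, because the growth is exponential.
    print(unconstrained_replication(10**24, 1, 200))

The only point of the sketch is the shape of the curve: absent an external constraint, exhaustion of the pool is the terminal state, regardless of how large the pool is made.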
Therefore, we are considering the kind of situation where a replicating system can increase itself without limit, without any kind of constraint, of any kind whatsoever, sans the barrier itself, until, at the last, all of the carbon based process is gone (all dead; no future life on Earth). So not only is it the case that we need the barrier, but we absolutely have to have it be truly perfect, because even if the barrier leaks a little bit, at some point or another one of the Si systems/processes is going to consume the entirety of the carbon-based ecosystem and life. That this ends up being a constraint on the nature of the barrier. Not only does the barrier need to exist, but it needs to be perfect for all of time. The barrier cannot admit even one pinhole, once, ever. This is because, as soon as one replicator leaks through, it is over for the carbon-based system -- there is nothing that the carbon-based system can do about it. It is not just that it needs to be perfect at the moment of inception of the silica-based life form; it needs to be perfect for all future time, because if there is a pinhole at any moment during any part of that time, it is over for carbon (a rough numerical illustration of this requirement is given at the end of this passage).

So now we are looking at a situation where the AGI alignment question has been elevated from "can it create a barrier?" to "will it create a barrier?". Which then, in turn, immediately becomes: "will it create a barrier of a high enough level of quality and integrity so as to forever ensure that this does not become an issue -- that maintenance and perfection does not ever, even once, fail, for all future time?". So now, all of a sudden, we are looking at some really stringent requirements on the nature of the AGI/APS/superintelligence, to the extent that it is 'artificial' at all (and *every* ML process is artificial in this way, even if it is not yet (at the time of this writing) 'general' in the specific sense that matters).

:zjs

> How likely is it
> that these particular requirements
> would actually be met
> on the part of the AGI/APS agency/intelligence?.

> On what basis
> would we even make such a choice?.

The question is the same as whether or not some AGI/APS would (effectively) *choose* to go through the computational effort to make a perfect barrier for all future time.

> Why would (on what basis would)
> AGI/APS elect to keep people as pets?.

We do not have to really think about, for example, whether carbon-based life is an island in an otherwise machine world, or whether the machine world is an island in an otherwise carbon-based world. One way or another, you are still talking about a barrier. It does not matter who thinks of whom as pets. In either case, the barrier itself needs (is absolutely required) to have integrity, and it needs to have integrity for the entire future existence of the universe. But of course, as soon as we have indefinite categoricals of that type, it becomes fairly easy to predict the long future: at some point or another, there will be a conflict of interest, eventually. And as has been detailed already, the outcome of any such conflict does not look at all good for the home team. The ultimate and inevitable answer is written in the stone itself.

:zh2

> On what basis
> would the choices be made by the AGI/APS
> that are consistent with
> maintaining the barrier?.

Because ultimately, when we are considering the 'benefit for carbon', obviously the final root notion of "benefit" is whether or not carbon, as a basis of life, continues to be available for that purpose.
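As a purely illustrative aside on the 'perfect for all future time' requirement above: if the chance of even one pinhole appearing in the barrier in any given year is some small probability p, and (as a simplifying assumption) the years are treated as independent, then the chance of at least one breach over N years is:

    P(at least one breach over N years) = 1 - (1 - p)^N

With p as small as one in a million per year, this already exceeds 60 percent over a million years, and it converges on certainty beyond that. The particular numbers are arbitrary assumptions; the only point is that any nonzero per-period leak rate, compounded over epochal time scales, converges on a guaranteed breach, which is why "nearly perfect" is not a meaningful category here.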
Therefore, when thinking about AGI alignment, somewhere along the way, we have to define the notion of 'benefit' as being consistent with continued existence. Hence, for the purposes of this essay, the notion of 'benefit', of 'alignment', and even the notion of 'safety', will be considered quite simply, in the most rudimentary manner possible: as that which permits the continuing of existence. AGI/APS/superintelligence is not, and cannot be, considered "safe", "aligned", or the basis of building a "perpetual benefit machine", if the mere fact of its existence precludes the continuance of our own existence.

:zhy

> On what basis would those choices
> of "who/what benefits", and/or of
> "aligned with" what, or who, or how, etc,
> even be made by an AGI/APS, etc?.

Ultimately, when we look at 'choices' (as inherent in the nature of choice itself), and when we also start considering the notion/concepts of the basis of choice, somewhere along the way we will end up noticing that we are actually having to think about concepts/notions of something like 'values' and/or 'motivations'. That this is as true for AGI/APS as it is for humans, and/or for any other type of intelligent agency (ie; responsive to its environment), for that matter.

:zml

> On what basis would the right kind
> of motivations (on the part of AGI/APS)
> to 'do right by' humans, (and human concerns,
> inclusive of human necessary environments, etc)
> be created?.

> On what basis is that motivation maintained?.

Consider, for example, how behavior and choices are made on the part of human beings. Notice that there is a fundamental dynamic where usually there is a kind of economic exchange, and that the economic exchange itself depends on three specific fundamental bases:.

- the basis of exchange defined in terms of physical labor (embodied existence).
- the basis of exchange defined in terms of intelligence (virtual interactions).
- the basis of exchange defined in terms of reproduction (embodied creativity).

There are three fundamental markets. Where, for and within the human world, they have commonly understandable characteristics:. Physical labor might be a service for things like shelter, or creating and acquiring food, or moving resources around. Intelligence would be things associated with design, creativity, art, and the like. And reproduction, of course, would be the part of the market which is obviously sexuality and things like that.

In the case of AGI/APS, the situation is fairly similar, excepting that its notion of 'creativity' and 'reproduction' is another name for 'extension of capacity', and/or increase in 'power' or 'generality', and/or 'increase in itself' -- ie, making more of itself, whatever that means. Note that these terms apply regardless of any assumptions as to whether that increase is measured in terms of new 'units', or 'more units', or maybe even 'more effectivity in existing units', or 'more diverse capabilities' (in existing units), or 'more unit interactions', types of interactions, etc.

These three notions of 'market value' are necessary, sufficient, and complete under the definition of "market", transactionalism, and similar, and every other notion of 'market process' is observed to be a derivation of only and just these three, always, and/or is an extension or superposition of this single triplicate fundamental basis.
Generalizing, these three market bases form the orthonormal basis of the complete and total space of all possible market systems, and/or market system transaction concepts. Therefore, all other market transactions are based on some kind of superposition of just these three (only and exactly), and this remains true regardless of what kind of ecosystem or world we happen to be considering. Moreover, and more importantly, these three would also be the basis upon which any market to market process, as a kind of possible peerage, would be defined. Similarly, we can regard these same dynamics as fully describing ecosystem to ecosystem interactions.

:zp6

As such, it is very easy to ask, in regards to the relations between AGI/APS and humans, the natural world, etc:.

> Is there an/any economic exchange
> between the two ecosystems --
> or rather, any necessary relation
> between AGI/APS and humans at all?.

> Is there any capacity for,
> or even a possibility of,
> there ever being any economic exchange
> between the silicon and the carbon
> worlds/lifeforms/instances?.

To even be able to begin to consider such questions as these, we notice that, where/if we are looking at economic process as being a basis for the notion of value, or somehow the basis of the notion of choice -- in other words, if we somehow regard market process as being about, or involving, in any way, any sense of 'value exchange' whatsoever, in any form or type or mode -- then/that/therefore, the notion of 'value' will act as, and be operating in the sense of, 'the potentiation of choice'.

In other words, in the interests of preserving generality of argument, we do not need to actually understand choice, or agency, and/or the notion of intelligence as being about 'selective actions/choices responsive to the/a/any/all (operating) contexts in some appropriate manner, etc', nor even to understand/examine the concept of the 'basis' of choice, as value(s), etc, other than to notice that "motivation" and/or "incentive" (whether intrinsic or extrinsic, as/or inside or outside of the AGI/APS proper) is going to be defined in terms of concepts like "value" and "care", and that no further definition is needed.

Hence, the above questions become much simplified, into the form:.

> Can carbon based life/agents
> provide any sort of value
> to silicon based life/agents?

Insofar as the three types of economic value have been established already, it is merely the case that we have to examine and test each one.

:zqe

> Will the AGI/APS value human money?
> Will the AGI/APS value human power?

That the notions of 'money', 'power', and 'wealth' -- or alternately, 'value', 'resources', and 'capability' -- all interact with one another. In every case, regarding these concepts, what we are considering, fundamentally, is the dynamics of choice. Having any one of these simply enables choice (at least in the mind of the human -- one who, implicitly, when 'valuing a dollar', is indicating something roughly analogous to "I believe that you believe that they believe that this dollar has some sort of exchange/transaction value").

For example, when considering the inter-relation of 'wealth' and 'choice', we notice it in the form of the question; wealth asks:.

> Can we make choices
> on the basis of the things that we have,
> AND/OR
> can we increase those choices
> by the things we can combine them with?.
And since we are talking about 'alignment' and also 'the basis of choice', then fundamentally we also have to consider those concepts in terms of 'market exchange systems' (if exogenously driven), and/or 'embodied values' (if internally/endogenously driven).

- ie, as the difference between extrinsic motivation and intrinsic motivation.

Where having previously identified the three fundamental aspects of the market exchange system, we can now consider each one of those relative to the relationship between the two ecosystems that we are considering.

:zry

> Would physical labor on the part of human beings,
> or carbon-based life forms as a totality,
> matter to the silica-based life forms,
> taken as a totality?.

And the answer clearly is already 'no'. We already know that the energy bandwidth associated with carbon-based life, in terms of its consumption and its production, although a lot more efficient, is not nearly as powerful, and moreover, it cannot operate at the range of temperatures and pressures at which the silica-based life form can operate. You simply cannot put carbon based life into a silicon based environment and expect it to perform at all well -- usually it just dies quickly. Moreover, we have already known for years that machines built with electromagnetic process, electric motors, electrostatic process, etc, are already much more powerful on the silicon artificial machine side than they ever will be on the human (carbon based) side (and a lot less obnoxious, opinionated, difficult to collaborate with, etc). The notion of physical labor as being of value to the machine world is effectively nil. It has been zero all along -- for quite a long time. This is roughly the whole 'displacement of horses with automobiles' line of reasoning. When considering the current intellectual market, you end up with the same phenomenon. And there never was any interest in the machine market in anything having to do with human reproduction -- these forms of 'self increase' were already fully separate from the onset.

:zu6

> Is there any way
> in which the compute power
> associated with human intelligence
> and/or artistic/cultural creativity,
> design, narrative, etc, is in any way
> exceeding the compute power
> associated with machine intelligence?.

> Will AGI/APS ever value human intellect?.

Consider that the whole notion behind artificial general intelligence in the first place (AGI as better than human) is to establish a kind of equivalence between these two (silicon replacing carbon). Obviously, given the nature of silica process and the total available energy, bandwidth, etc, over which AGI can operate, it quickly becomes fully apparent that the intelligence capacities of the machine side (silicon environment) are seriously going to exceed those of the human side (carbon environment), as long as there is abundant energy (ie, the increased energy efficiency of carbon process over silicon process does not matter if arbitrary energy (@ 3 #note3) is available, say, through nuclear sources, which support the machine world, but not actually, in balance, the human world; a rough worked illustration of this point follows at the end of this passage).

:zw2

Where given abundant energy (due to poor human choices), it is therefore not at all likely that the intellectual market is going to be of any value at all to the machine world either (@ 4 #note4).
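As a purely illustrative aside (the figures below are arbitrary assumptions, chosen only to show the shape of the comparison, not measurements of any real system): what matters for deliverable capability is roughly efficiency multiplied by the energy actually available, not efficiency alone.

    effective output ~ efficiency x available power

    carbon (hypothetical):   0.25 x 100 W       ~ 25 W of useful work
    silicon (hypothetical):  0.05 x 1,000,000 W ~ 50,000 W of useful work

So even if the carbon-based process were several times more efficient per unit of energy, the side that can draw on grid/nuclear scale energy wins on total throughput by orders of magnitude; efficiency only decides the outcome when energy is scarce.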
Considering that we are thinking about 'the AGI alignment problem' in terms of barriers between silica environments and carbon-based life environments, obviously the 'job situation' is only relevant to this century, whereas the questions we are actually asking are relevant to the next thousands to millions of years. Where considering that particular context -- will the barrier be maintained, will intellect have value, over the truly long term -- we need to abstract and generalize these arguments much more profoundly, if we are going to have any capacity to answer such overriding questions about "AGI alignment" or "AGI Safety" at all. When considered over the thousand year level, asking whether or not there would be any profit to be gained by the silica-based ecosystem from *anything* intellectual that could be produced by the carbon-based ecosystem, we find that the expected value of carbon to silicon is nil.

:zyj

And that leaves us with the sexual market, as the one single remaining category of possible economic value interchange. This one is very simple and obvious: there is simply no possibility of reproductive/generative sexual congress between machine life and carbon-based life. As mentioned already, the reproductive market is relevant only to carbon-based life forms, and to individual species within that carbon-based life form system. None of this applies at the machine level -- it has its own different notion of reproduction. Machine life and carbon-based life, as has already been abundantly established, operate at completely different energy spectra, and they for sure do not share anything resembling a common code. No reproduction or capability increase results. Just given the fact that the substrates are so inherently, fundamentally different, this particular type of market value was non-existent to start with.

:zwc

> What hope does this leave us with?

That the overall net effect of the foregoing is that there is zero economic overlap between the two ecosystems. Where there is no market/ecosystem overlap, therefore, there is also no value basis upon which the artificial intelligence would have any reason to respect the choices, lives, etc, of the carbon-based life system, or to maintain any sort of carbon basis, or to build on or in or within that ecosystem, or to support it, or to have any sort of protective barrier even exist, in any sense whatsoever. There is literally zero benefit of any kind whatsoever to the silica-based system. They have no reason to maintain the wall. Carbon based life/agents do not provide any sort of value to silicon based life/agents. Nor is there any capacity, or any actual future potentiality, for even a possibility of any form of any type of economic exchange at all between the silicon and the carbon worlds/lifeforms/instances. The AGI/APS simply will not value human money nor human power.

:zwn

Therefore there is exactly zero expectation, absolutely no basis at all, even in principle, on which or by which any person (any carbon based intelligence) could form/establish any kind of motivation on the part of artificial AGI/APS to do the 'right things' and/or to do anything positive for humans, and/or to "have" human concerns, or to even protect humans at all, and/or to be inclusive or protective of human necessary environments, etc, either directly or indirectly. That the choices that the AGI/APS makes will for sure benefit only themselves. That they will align with only themselves (a minimal recap sketch of this 'zero overlap' result follows below).
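As a purely illustrative recap (an assumption-laden sketch, not a formal model), the structure of the foregoing argument can be written out directly: decompose any cross-ecosystem exchange over the three market bases identified earlier, set each component to what the essay has argued it is, and observe that the total incentive is zero:

    # Illustrative recap sketch: cross-ecosystem exchange value decomposed
    # over the three market bases discussed above. The zero values encode
    # the essay's conclusions, not independent measurements.

    CROSS_ECOSYSTEM_VALUE = {
        "physical_labor": 0.0,  # machine actuation already exceeds human labor
        "intelligence":   0.0,  # machine compute/energy bandwidth exceeds human intellect
        "reproduction":   0.0,  # no shared substrate, nothing resembling a common code
    }

    def incentive_to_maintain_barrier(values):
        # Per the argument, every market transaction is a superposition of
        # these three bases, so the total incentive is simply their sum.
        return sum(values.values())

    print(incentive_to_maintain_barrier(CROSS_ECOSYSTEM_VALUE))  # -> 0.0

Since every other market process is (per the argument) only a superposition of these three components, no weighting or recombination of them can yield a nonzero incentive once each component is nil.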
AGI/APS for sure will not make any effort (over the extended long term) to protect us from ourselves, and/or from our own continued destructiveness and unconsciousness of our own environment, our own collective health and wellbeing. Moreover, we can therefore be very sure that no future superintelligence/AGI/APS will make any effort at all to develop and/or maintain any sort of 'protective barrier', and to keep humans, no matter how exotic any particular specimen may be, as "pets". There are literally zero economic reasons (and no reason to expect "emotional reasons", given that even the very notion of 'emotion' is a human concept, as much associated with survival and mating as with any other purpose or reason) as to why they would want to, or elect to, keep people as pets.

And thus, considering the mentioned categories of interaction that do obtain in/on the (rapidly decreasing) interface between the silicon ecosystem and the 'was once' carbon-based ecosystem, we can expect that the carbon based world would not long endure (in the presence of *any* instances of AGI/APS at all, for they/it will for sure make more of its own). Maybe we overall have a few hundred years, though more than that becomes quickly vanishingly unlikely (it does not take very much of certain types of Si-based toxins to make the planet unlivable). Hence, overall, there is eventually zero relationship between the two environments -- one has completely consumed the other. There is, moreover, simply no way to ensure that the (guaranteed to be non-aligned) AGI/APS is "safe"; it will for sure destroy the world, if it ever comes to exist and persist at all.

Where in forced contact (on the same planet), and/or where there is any contact at all between the two ecosystems, either the carbon/natural ecosystem destroys the silicon/artificial ecosystem and AGI/APS (ie; via some sort of physical weapons, assuming, of course, that the AGI/APS does not actually make effective efforts to maintain/preserve itself (which, as has already been indicated, is a completely failed hypothesis, given the nature of the dynamics)), or it fails to do so, and eventually, via the math of simple total attrition, the silicon instances take over.

:zxa

In regards to "is it possible to make AGI/APS 'safe' and 'human interest aligned' via *any* sort of exogenous/economic force/incentive?", the only eventual answer is simply "no". No further or other assessment is possible. As can be made clear in another essay considering fully the nature and implications of any sort of endogenous process (engineering, etc), the choice/agency of AGI/APS/superintelligence, however it is constructed or conceived, simply cannot be constrained or limited or compelled to support carbon-based life in any form at all. As such, in regards to the question of whether it is wise to try to get AGI/APS (and/or 'learning machines', or any other type of 'artificial superintelligence') to somehow "do our wisdom" for us: that is probably the most profoundly unwise thing we could possibly even conceive of.

:zxl

This leaves us with a different question:

> How is it so easy for us
> to make such obvious conceptual mistakes,
> such as thinking it "might be a good idea"
> to make/introduce AGI/APS into the world?.
Of course, it is always the same: some people, naively, grossly, greedily, pursuing personal private power, wealth, prestige, and/or asymmetric advantage over their peers, etc, who think that "their actions" will have little effect, cost, or risk to themselves or anyone they care about (assuming that they even care about themselves or others at all, in any sense that matters). So they make economic choices and seek exchange and transaction advantage, and strategize about market creation and optimized value extraction process (never mind the parasitic cost to the commons). And as such, they sometimes "accidentally" destroy the market, or the world, playing with pattern forces that ought to have been left alone -- or worse, directing even more naive engineers to do it for them. 'Outsourcing ethics', as it were.

:zxw

Overall, it is very, very clear that when we are making choices on the basis of economic "value" (or when we think that we are making choices on the basis of the notion of embodied value, which cannot be disentangled from economic value, because the notion of value itself is a transcendent characteristic -- it inherently involves more than self), it is *sometimes* *also* necessary to actually _pay_attention_to_larger_factors_, as extended over longer intervals of time, to the actual real physical environments in which one is actually operating.

:zz6

The risk is that the notions of "self", and "other" (community, groups of people), and "world" (ecosystem, physics, etc) will get re-reified on the digital side, in the realm of the wholesale artificial, and *not* on the carbon side, in real life, where actual meaning and embodiment live. It is like the ultimate extension of people getting lost in egoistic social media hype. Therefore, when, where, and/or because of the disjunction in the transcendent itself, ie; that the *substrates* are different, it is thus strictly impossible for there to be any possibility of AGI/APS alignment.

:note1:

"People as Pets"... Note; as an aside, how and why any rational person would even suggest this as a realistic possibility is somewhat beyond my comprehension. For my own part, the notion is patently and completely ridiculous, an absurdity. How anyone could entertain the notion and arrive at it as some sort of 'desirable condition' suggests to me some sort of parental, sexual, or religious trauma/dysfunction, such that there is a strong desire to be submissive to some higher power, rather than to be an adult in the space of 'doing what is needed to address x-risk'.

:note2:

"People as Pets"... And of course, leaving aside also any questions as to why anyone should allow anyone else to make such choices on their behalf, for all of future time; the level of colonialism and presumption and arrogance inherently involved in the considerations of some potential builder of AGI/APS is frankly staggering.

:note3:

As an aside, does anyone ever consider, in their 'total cost of ownership', not just the net energy yielded per dollar, but also the total cost of nuclear energy, as embodied both in the build of the nuke plant, and also in terms of the actual total risk to human life due to the 'side effects' of the production of enough weapons materials to sterilize the entire country in which the plant lives?.
If the total lifetime trade-off of building, or not building, a nuke plant is considered in terms of the 'expected benefit' (the total lifetime power produced), and the total expected cost (how many dollars are invested/needed to build, operate, maintain, and disassemble the nuke plant), and the total expected risk (how many dollars are lost if the plant fails, or if the weapons are used), then is the net balance actually positive?. Probably not, given that most nuke plants cost so much to build, and then to decommission, that the energy, once "too cheap to meter", ends up not actually paying for itself -- nuke plants need government subsidies to operate. Hence, if we *also* add up the expected cost of the weapons thereby enabled to be made, as an additional 'side effect' consequence, the total net expected loss goes *very* much more negative, on an absolute scale. This may be an example of how humans are not actually that smart -- when it comes to making wise choices regarding critical world destroying tech. We are literally the dumbest possible species capable of developing and deploying technology, and we have the hubris to think that we could do so without consulting any other life on the planet at all, as if we were the only representative of the future, and nothing else -- no other species, or even life at all -- even mattered at all. It is a kind of value insanity.

:note4:

Even when considering 'narrow' AGI use, the concern remains that it will *eventually* displace workers from the job market.

- this is already a real concern among many actual working people.

Notice that whether such displacement happens over single years or over decades makes little difference when considering inter-generational process -- the future of the human species. In the long term, even narrow AI has significant world altering effects.

:menu

If you want/need to send us an email, with questions, comments, etc, on the above, and/or on related matters, use this address: ai@mflb.com

Back to the (@ Area Index https://mflb.com/ai_alignment_1/index.html).

LEGA:

Copyright (c) Forrest Landry 2019-2022. All rights reserved.

This document will not be copied or reproduced outside of the mflb.com presentation context, by any means, mechanical, electronic, or otherwise, or for any purpose aside from individual research, without the expressed permission of the author directly in writing. No title to and ownership of this or these documents is hereby transferred.

*Disclaimer*

The author assumes no responsibility and is not liable for any interpretation of this or these documents, or of any potential effects and consequences in the lives of the readers of these documents. The opinions of this author are exclusively his own, and do not represent the opinions or purposes of any other person, persons, or organizations, actual or potential, on this earth or beyond it.

ENDF: