= Insufficient Paranoia =
+ Feb 7th, 2025 +
As edited from a remark seen at
https://www.lesswrong.com/posts/YqrAoCzNytYWtnsAx/the-failed-strategy-of-artificial-intelligence-doomers.
What follows is a snippet of conversation between 'Amber',
who is a rather naive first time angel investor
into an already started business project,
and 'Coral', who is a seasoned venture capitalist.
In this conversation,
Amber has already described the outlines of the project
as it has progressed so far,
and is looking for help and support from Coral.
Amber:
> ...The thing is, I am a little worried
> that the head of the project, Mr. Topaz,
> isn't concerned enough about the possibility
> of somebody hacking the drones and fooling them
> into delivering paychecks when they shouldn't.
> I mean, I've tried to raise that concern,
> but he says that of course we're not going to program the drones
> to give out money and resources and things to just anyone.
> Can you maybe give him a few tips?
> For when it comes time to start thinking about security, I mean.
Coral:
> No, unfortunately I cannot help you.
Amber:
> Why not?
> You haven't even looked at our beautiful business model!
> This plan could work --
> it's worth supporting!
Coral:
> I thought maybe your company was merely a hopeless case
> of underestimated difficulties and misplaced priorities.
> But now it sounds like your leader
> is not even using ordinary paranoia,
> and reacts with skepticism to it.
> Calling a case like that "hopeless" would be an understatement.
>
> For example, let's assume that you somehow modified your message
> into something Mr. Topaz doesn't find so unpleasant to hear.
> Something that sounds related to the topic of drone security,
> but which doesn't cost him much --
> and of course, this reduced investment
> will not actually leave his drones secure,
> because that would be unpleasant and expensive.
>
> Unfortunately, this means that you could convince yourself
> that you've gotten Mr. Topaz to ally with you,
> because he sounds agreeable now.
> Thus, your instinctive desire for the high-status monkey
> to be on your political side
> will cause you to feel like this problem has been solved.
> The unpleasant sense of not having secured the actual drones
> will be replaced with the feeling of having solved a hard problem;
> you can basically tell yourself that the bigger monkey
> will take care of everything
> now that he seems to be on your pleasantly-modified political side.
> And so you will be happy.
> Until the merchant drones hit the market
> and (predictably) everything goes wrong.
> But that unpleasant experience should be brief,
> given that the severity of the problem, and its outcome,
> is surely fully terminal to your company's existence --
> everyone involved loses everything in lawsuits.
>
> Of course, no sane investor would put money into something like this.
Rather than "drones distributing things of value, paychecks, food, etc",
think instead of:
- 1; Some sort of AGI-powered robot providing child care
for your children while you are working on your career.
How do we know that some terrorist cult leader cannot
somehow hack your robot so as to hold your children hostage,
extorting significant ransom money from you
in support of their ideological cause?
If you pay the ransom, you are supporting terrorism
and the State Department will ensure you go to jail.
But if you don't pay, then your children are dead.
Either way, the robot company will be sued into oblivion.
- 2; Chat-AI providing answers to people thinking about policy.
How do we know that some political agent or actor
will not pay some technician somewhere to shape the AI
so as to covertly advantage their own political party
and some sort of self-serving policies that go with it?
I.e., a bit like Google and Facebook slightly shifting
the ordering of the returned search results for some query
so as to give a more favorable impression of one thing,
idea, party, product, or company, than some other
(a strong effect that is well documented, but not widely known).
Although, in this case, the result is covert legal extortion,
and thus can be (is actually) a very profitable business model.
All sorts of advertisers and marketing execs will love it!
- 3; The deployment of uncontrollable and unaligned/unsafe
AGI superintelligence, which seemingly promises the cure to cancer,
but is itself, on a world scale, actually worse
than any cancer ever could be, because at least with real cancer,
you do not have to worry about it being more intelligent than you,
and thus taking up all of the world's resources, space, energy, etc,
and thus displacing all organic life (including humans) from existence.
The same observation applies to all three:
Security and alignment issues are very important!
What is interesting is that the 2nd scenario,
which is already happening,
can provide support for the 1st,
insofar as the robot company lobbyists
have already enlisted the hype marketing of the 3rd,
as an "oncoming panacea of cheap AGI utopia"
(and the hopium of venture capital returns of 10,000X).
Yet there is no consideration of alignment problems,
nor any real solution for securing
the future human race against extinction
due to an AGI very likely going rogue.
Having drones deliver resources, or AGI do everything,
is a beautiful (apparently profitable) business model,
but without any consideration of real security/safety,
it is an accounting of cost and profit without risk --
overall it is actually a very naive and very bad deal.
All sorts of companies are pushing for AGI development,
without any real consideration of safety and security issues,
ignoring known alignment impossibility challenges, etc,
since they are still thinking purely in terms of some sort of
illusory potential short-term 10,000x return on investment,
while completely ignoring the downside of near-universal catastrophe.
You cannot spend any money if you and everyone and everything
that you care about are all dead, replaced by rogue robots!
It does not matter how good the deployment plan looks on paper,
if obvious paranoia concerns are simply dismissed by Mr. Topaz
and all of the rest of the alignment-whitewashing industry --
i.e., the various technology apologists of "LessWrong" inc.
~ ~ ~