Monday, March 30, 2009

Not discussed enough… or, 'Crap, Missed Call's talking about robots again!'

Futurists who speak of Artificial Intelligence don’t focus enough attention on the fact that this intelligence will be ALIEN (unless we arrive at AI through brain simulations, in which case it will be exactly like humans). That the intelligence would be alien doesn’t mean there isn’t anything we can say about how it’s likely to contrast with our own intelligence.

1) Common Law : Statutory Law : : Biological Brain : Artificial Intelligence

What do I mean by this? Our brains have been cobbled together over the millennia through an evolutionary process that built new structures on top of old structures on top of ancient structures. We still have parts of our brain that function in much the same way as that of an ape or, perhaps, even a fish. In this respect our brains are sort of like common law.

Artificial intelligence, meanwhile, if it’s built up from scratch and mostly uses algorithms, won’t necessarily have the same redundancies. There’s no reason to think that we’ll code part of the brain using C, then another part using C++, before moving on to HTML or what have you.

What does this mean?

Well, for starters, AI is likely to differ from humans on the two fronts that are the most basic needs of all organisms: survival and reproduction. There’s no guarantee that our AI will have any survival instinct whatsoever. We could program it to have one, but there’s no reason to assume that it would have one (unless you happen to be thinking of AI more as a man in a robot costume).

We’ve all grown up with images of robots and computers that have acquired intelligence struggling against an antagonistic force trying to shut them down. Some have been intelligent depictions, where the AI has a diegetic reason to struggle; others have been less intelligent, where it is just assumed that the AI would 1) care and 2) that caring would take the form of opposition to being turned off. The latter category is almost entirely bunk and more an example of ‘Didn’t do the research,’ or ‘oven logic,’ than it is a good model for our thinking about AI.

On the flipside, we could say that there is an advantage to an AI wanting to survive, particularly if the AI serves a purpose that makes it especially human-like or otherwise valuable, and thus it is reasonable to think that a survival instinct would be programmed into the AI’s operating system (it’s only unreasonable to assume that it would be there purely as a consequence of the AI having intelligence). That said, it is unreasonable to assume that the survival instinct of an AI will take the same form as a human survival instinct. Humans are animals; AIs are not. Humans are, for the time being at least, trapped within their bodies; AIs, in all likelihood, will not be. Humans require certain resources to survive; AIs would also require certain resources, but those resources do not overlap, at least in the short term, nearly as much as the survival needs of humans overlap with those of any other animal species.

An AI might need water, but more likely it requires a substance that performs the same role as water in its manufacturing or cooling systems, and it could therefore use a variety of substances. Humans, meanwhile, simply need water. Humans die if they are eaten by a shark; they die and they do not return. An AI might also ‘die,’ but its death would be more akin to death in video-game terms, with the AI returning from its last save point. And although I personally enjoy using electricity, I can’t claim that it is key to my personal survival; for the foreseeable future, it would be key to the survival of AIs.

Many of our emotions have evolved to help us survive (and/or reproduce). There’s no reason to assume that AI will have the same emotions. Will the robot envy my house? Not likely, as its needs for shelter would be far different from my own. Would an AI feel pride in the accomplishment of other algorithms? Be moved by music? Covet more USB ports? Probably not, unlikely, and perhaps.

As for reproduction, we must also question whether a non-biological agent would have any drive to reproduce. If there is no survival instinct, it’s unlikely to reproduce without a reason to. Anissimov over at Accelerating Future has written about AIs taking on multiple jobs, reproducing themselves (even on rented space if need be), completing those jobs for a profit, then folding those extra AIs down…

“Anyway, say that I’m an AI looking at craigslist. I see 100 contract jobs that pay $50/hr, all in my field of expertise, and I want to do them all, but if I don’t do them now, the employers will hire somebody else. What to do? Well, if I have the money, I can rent 100 computers to run temporary copies of myself until the jobs are all done. I take complete advantage of the available tasks, and I didn’t have to spend huge amounts of money to buy and cool and maintain 100 me-equivalents of computing power. As long as the jobs I did are enough to pay for the rental costs and then some, I can keep making money this way.”

(I’d like to note at this point that I’m just using this as an example of a scenario that I can contrast my ideas against. I’m not suggesting that anything Anissimov does or does not say contradicts what I’m talking about, nor am I taking issue with his wonderful blog in any way.)

It’s reasonable to assume that if an artificial intelligence is operating under someone else’s direction it will be compelled to maximize its own profitability. If, however, the AI is left to its own devices, it’s only reasonable to assume that it will maximize its own profitability if doing so enhances its chances of survival and if the AI has some form of survival instinct. The first condition is easy to satisfy: having money will increase the odds of the AI’s survival, for even if the AI isn’t spending its money on the same resources as humans, the money will still be useful. The second is not a given, but it could happen.
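The arithmetic behind the quoted copy-renting scenario is simple enough to sketch. Only the 100 jobs at $50/hr come from the quote; the hours per job and the rental rate below are my own illustrative assumptions.

```python
# Toy model of the quoted scenario: an AI rents machines to run
# temporary copies of itself and takes on parallel contract jobs.
# The scheme pays off whenever revenue covers the rental costs.

def copy_profit(jobs: int, rate_per_hr: float, hours_per_job: float,
                rental_per_machine_hr: float) -> float:
    """Net profit from renting one machine per job for its duration."""
    revenue = jobs * rate_per_hr * hours_per_job
    rental = jobs * rental_per_machine_hr * hours_per_job
    return revenue - rental

# 100 jobs at $50/hr, assumed 10 hours each, machines at an assumed $5/hr:
print(copy_profit(jobs=100, rate_per_hr=50, hours_per_job=10,
                  rental_per_machine_hr=5))  # 45000.0
```

As long as the hourly rate beats the hourly rental cost, every additional copy is pure gain for the original.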

Ignoring the economic consequences of AIs making copies of themselves to corner whole swaths of the job market, there is another issue. If the AI makes a full copy of itself (and I should note, much to his credit, Anissimov does suggest the possibility that the copies might only be fractional parts of the program itself), we would have to assume that the copies all have the same agency and the same survival instincts. This would mean that those copies would not voluntarily shut themselves down, that they would want to keep their ‘fair share’ of the work they’d done, and that allowing the original program to destroy the copies would be allowing the original program to inflict some degree of distress on them. (I should note here that there is no particular reason why I’m assuming an AI couldn’t have a survival instinct while simultaneously feeling dispassionate about its own survival. Such a situation would greatly complicate the question of a program making a copy of itself and then later shutting that copy down.)

Continuing on the theme of reproduction, the purpose of reproduction is the survival not of the individual but of the species. AI is not mortal. AI might become obsolete, but it would never ‘die’ of ‘natural causes’ in any traditional sense. Would there be an advantage for AI to reproduce?

Well, if the AI has a survival instinct, then there might be a disadvantage for the AI to reproduce. With each copy (assuming it works within our economic system) it diminishes the returns that it gets for its skill set. If the performance of a job in a normal labor market is worth $50 an hour, then the performance of that same job in a labor market flooded with potential workers will be worth considerably less. An AI’s ability to copy itself an unlimited number of times creates the potential for super-exponential growth of the labor pool, which in turn causes a commensurate drop in the odds that any given AI will get the job it desires.
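The wage-dilution point can be illustrated with a toy model. The assumption that employers spend a fixed hourly budget on a given skill is mine, made only to show the direction of the effect; none of the numbers below come from the post except the $50/hr starting rate.

```python
# Toy illustration of wage dilution: if a fixed hourly budget is split
# across everyone offering a skill, every copy an AI adds to the labor
# pool lowers the rate each worker (including the original) can command.

def market_rate(hourly_budget: float, workers: int) -> float:
    """Hourly rate if a fixed budget is split evenly across all workers."""
    return hourly_budget / workers

budget = 5000.0  # assumed: 100 workers currently earning $50/hr
print(market_rate(budget, 100))        # 50.0  -- before any copying
print(market_rate(budget, 100 + 100))  # 25.0  -- after 100 copies join
```

Under this (deliberately crude) constant-demand assumption, doubling the labor pool halves the going rate, which is exactly the survival cost to the original that the paragraph above describes.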

Anissimov has suggested that an AI might form a sort of homogeneous AI corporation, and, indeed, it might. But all of the other AIs in that corporation may also have the idea to subcontract to homogeneous corporations of their own, or to quit and splinter off into little AI Incs of their own. Any way you slice it, it’s an economic nightmare.

An AI might ‘reproduce’ if it’s programmed to reproduce. If superintelligent AI is truly desirable, then an AI might be programmed to reproduce by making copies of itself that it has slightly improved upon before activating them, but since the AI isn’t going to ‘die,’ this too comes at a survival cost. An AI is disadvantaged by any form of reproduction unless that reproduction produces *lesser* offspring (if we are still assuming the AI to be trying to survive within our economic framework).

It also stands to reason, particularly if I’m correct that the only reproductive advantage for an AI would come from reproducing incomplete copies of itself, that an AI would completely lack any maternal instinct. Its ‘offspring’ would not need rearing, as they would spring into being with all the experiences and abilities of their ‘parent’ fully formed and intact. Were something to happen to that ‘offspring,’ the cost to the parent in survival terms (or, perhaps, even in resources) would be negligible. Only if there were an existential risk to all of the AIs would they likely act cooperatively (as a survival method), and even then the ‘destruction’ of individuals would likely be met with indifference by the survivors (unless they were specifically programmed to respond otherwise).

In a way, this is good for us. If we look at one of the biggest worries about the singularity, the threat of an unfriendly superintelligence, the threat is greatest if it comes in the form of a quickly replicating enemy with a desire for the same resources that we need to survive. An unfriendly superintelligence indifferent to our material needs could easily satisfy those needs if it didn’t have a desire to reproduce. An unfriendly superintelligence could, potentially, be overcome if it didn’t have any particular desire to survive. And a superintelligence without agency would be no more or less friendly or unfriendly than the agent deploying it.

To be continued…
