Thursday, February 19, 2009

Capitalism and Progress


Here's my latest comment, which I posted to Kevin Kelly's Technium website. I find that I have to work very hard to keep my comments from getting too long. That was the main reason I created this blog - so that I could elaborate on some of those comments.
Hi Kevin,

I’m sure you are aware, but perhaps have forgotten, that Bill Joy, in his classic warning about the dangers of a Singularity, “Why the future doesn’t need us,” also mentions the Unabomber. I believe the reaction to his article was quite harsh and that he was viewed as a sort of neo-Luddite.

Personally, I’m open to criticisms of the “death march” of Progress. There are certainly benefits to mankind from technological progress, but the dangers should not be overlooked, as they often are.

You mention the Amish in this article. I never quite got around to commenting on the full article you wrote about the Amish, but I think there is an analogy to the way the Catholic Church maintained control over society in the Middle Ages. Just imagine if a young Galileo were born among the Amish. How would they handle the situation? Anyway, I think Galileo was the last chance Western society had to stop the unrelenting march of progress. From there, the combination of science and capitalism launched the revolution that has shaped the world we live in today. Like scientific progress, capitalism is built on an exponential model. These are dynamic systems that collapse under static restraints.

Are we willing to give up progress for a stable world system? I don’t think we need to return to a hunter-gatherer lifestyle, but a stable world system would require limits on technological advancement.

BTW, I’m so inspired by your writing in Technium that I’ve started a blog to record some of my reactions and responses to your articles.

Peace
One thing I've noticed about Kevin's articles is that he keeps the discussion nicely focused on the technical and cultural questions without drifting into the realm of politics. I'm nowhere near as diplomatic in these areas as Kevin is, and I tend to drift easily into politics. Certainly if we are talking about the Singularity, then the question of politics is eventually unavoidable. It will largely be up to the governments of the world to decide how that Singularity emerges. For instance, the US military is already actively involved in AI research through DARPA.

But the issue I wanted to discuss here is the one I brought up in my comment regarding the connection between Capitalism and Scientific Progress. I mentioned that they are both built on an exponential model. I couldn't go into depth on that in the comment without making it way too long, so I'll do that here.

First, Capitalism. My view of Capitalism is that it requires ever-expanding markets to survive. It is expected that the stock price of a company will continue to go up. For this to be true, the company's net worth must continually grow, and therefore the company must seek out ever-growing markets. If the company is not growing, then eventually it will go out of business - cease to exist.
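To make that "exponential model" concrete, here is a minimal sketch of the arithmetic - my own illustration, with a hypothetical starting revenue and growth rate, not figures from any real company. At a steady growth expectation of, say, 7% a year, a company has to roughly double in size every decade just to keep meeting the market's expectations.

import math

# Hypothetical numbers, for illustration only.
starting_revenue = 100.0   # millions of dollars
growth_rate = 0.07         # 7% growth expected by the market every year

for year in (0, 10, 20, 30):
    revenue = starting_revenue * (1 + growth_rate) ** year
    print(f"Year {year:2d}: revenue of about {revenue:7.1f} million")

# Rule of thumb: doubling time = ln(2) / ln(1 + rate)
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"At {growth_rate:.0%} growth the company must double roughly every {doubling_time:.1f} years")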

An alternative is to reduce costs. Here is where Progress comes in. Through Scientific Progress a company may find a way to produce its products at reduced cost. This allows the company to continue to grow and avoid a virtual death in the business world. Therefore there is a marriage of convenience between business and technology.

John D. Rockefeller is the epitome of applying technology to business in order to reduce costs. As a result, he became arguably the richest man in history and created one of the world's most powerful companies, Standard Oil.

It is actually quite surprising to me that today there is so little emphasis by business on artificial intelligence. Yet I believe this is where the future of business lies. I would have hoped that the recently announced stimulus package would have devoted a significant portion to developing AI. This is where America needs to be investing.

Even though Japan is currently in a very bad economic situation, I believe that it will come out ahead because of its attention to the area of robotics. Actually I believe this strategy is slightly flawed. I think robotics gets too much attention, whereas the real attention should be on AI.

AI does not require a physical body as in a robot. AI can have a virtual body and exist on the internet. So much of our world already exists in virtual space that an AI living on the internet would hardly be hampered - the financial world is particularly accessible through cyberspace.

Next, Scientific Progress. I should hardly need to say anything here. Moore's Law pretty much says it all - technology is progressing at an exponential rate. Just when we think that technology has reached its limits, new scientific discoveries are made to maintain the rate of progress.
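As a rough sketch of what that exponential rate means - my own illustration, with an assumed round-number baseline rather than an exact figure - doubling every two years turns a chip of about two billion transistors into one of over sixty billion within ten years.

# Illustrative only: the 2009 baseline transistor count is an assumption.
baseline_transistors = 2e9    # roughly 2 billion transistors on a high-end 2009 chip (assumed)
doubling_period = 2.0         # the commonly quoted Moore's Law doubling time, in years

def projected_transistors(years):
    """Projected transistor count after the given number of years of doubling."""
    return baseline_transistors * 2 ** (years / doubling_period)

for years in (2, 10, 20):
    print(f"In {years:2d} years: about {projected_transistors(years):.1e} transistors")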

Certainly if we use biological organisms as our guide then there is a whole world of technology available that is capable of changing our world radically. The technology of life is nanotechnology. That is the secret of life. It manipulates materials at the molecular level. Nanotechnology is one of the game changing technologies that can eventually lead to a Singularity.

Bill Joy refers to these technologies as GNR - genetics, nanotechnology, and robotics. I would prefer AI to robotics, so perhaps a better acronym would be GAIN - genetics, artificial intelligence, and nanotechnology. Joy also talks about KMD - knowledge-enabled mass destruction. I would prefer the term WMD-H - weapons of mass dehumanization.

Of the three, genetics has the head start. We have already begun tinkering with the existing blueprint of life - DNA. This seems to me a perilous road to take. I actually welcomed the decision by Bush to restrict federal funding for embryonic stem cell research. This is an example where government deliberately slowed the pace of technology. I don't think the reasoning was quite the right one, though. I think it's dangerous to mix religion with politics. The reasoning should have been along ethical and moral lines, without resorting to a particular religious point of view. The ultimate concern should have been the impact that such a technology could have on society, and not so much whether it violated the Biblical interpretations of one Christian denomination or another.

Once AI and nanotechnology reach the state of maturity that genetics is at, then there will be similar moral questions. The mature combination of the three GAIN technologies will of course be the point of greatest danger.

Personally, for me, the key technology is AI. Once a true AI is developed, it will impact the other GAIN technologies. AI will be turned on genetics, leading to significant improvements there. AI will also enable nanotechnology to finally become viable. And it is a given that AI will also be used to develop more powerful AI. This is one of the ways that exponential growth continues in technology: by using today's technology to create tomorrow's technology.

There is no incentive on the part of Capitalism or Scientific Progress to slow down this juggernaut. Bill Joy has proposed that we begin to study the problem now and plan for the future, but his good advice seems to have gone unheeded. While the atomic age was thrust upon us overnight, the coming age of AI has been predicted for many years now. Mankind has no excuse for not preparing for this eventual outcome.

See also Bill Joy's 2000 article in Wired magazine, “Why the future doesn’t need us”.

4 comments:

  1. "it is a given that AI will also be used to develop more powerful AI." — I think you're wrong here. That AI will develop more powerful one itself — artificial though, it's inteligence, and it thinks for itself (maybe even 'himself'?).

    And I strongly agree that AI doesn't need physical embodiment.

  2. I can give you a simple example that leads me to conclude that "AI will also be used to develop more powerful AI". Today, computers are designed using computers. The more powerful the computer, the more complex the simulations that can be performed to test out new strategies. Today's most powerful computers are used to design tomorrow's generation of computers.

    So I assume that one generation of AI would be used to design the next generation. Except that in this case no human intervention would be required.

    As for embodiment, I think that we all have a natural fascination with humanoid robots. But robots do not have to be humanoid. The main reason to create humanoid robots is so that they can interact with the human environment. But what about creating an environment around the robots instead of vice versa? Why have door handles, for example, that are obviously designed to meet human ergonomic needs but don't take into account robot ergonomic needs? For a robot, a door that can be opened with a wireless signal is vastly superior. Something like a remote control garage door works great, for example.

    And of course, as we have all seen with the advent of the internet, the virtual world can offer many of the same qualities as the real physical world. Except that in the virtual world, there really is no need for a physical body. So here a disembodied AI would be right at home. And to access the physical world, there are plenty of interfaces to physical devices available.

    And as everything continues to connect to the internet it will become even simpler for a disembodied AI to interface with the physical world.

  3. Though there is an important consequence of AI living in the virtual world: it will never be able to fully understand human language because the AI won't be able to 'feel' most of the concepts.

  4. I absolutely agree that a disembodied AI would not be fully able to comprehend the human experience. That could make for some interesting communication challenges. Even the most humanoid AI will be a different "species" from homo sapiens.

    But I think it is a misconception to think that an AI would not have feelings or emotions. I think part of any advanced intelligence is the acquisition of likes and dislikes. Any advanced intelligence needs to be able to make many assumptions in order to act autonomously. It is very rare that we have all the data required to make a decision.

    It's my belief that in the course of learning to make appropriate assumptions in order to be able to function, an AI will develop a personality. And part of that personality will be expressed in terms of some sort of "emotions".

    Part of the fitness function for the AI will be "Am I happy?" What is happiness, after all? Perhaps happiness is a byproduct of any sort of advanced intelligence.
