Debate on artificial intelligence: We can either be Gods or serve the machines as slaves

I thought I was being paranoid when, while writing an article on the Internet of Things, I expressed the fear that such machines could gain self-consciousness and that the Skynet of the movie Terminator could well become a reality. A few months later, I realized I was not alone: some prominent people in science and technology, including Bill Gates, Stephen Hawking, and Steve Wozniak, are also afraid of the impact of artificial intelligence. The debate on artificial intelligence has two sides. Those afraid of machines taking over humans are termed pessimists, while the other group thinks we can play Gods.


In an AMA (Ask Me Anything) session on Reddit, Bill Gates confirmed his fears. He said he is worried about the potential threats artificial intelligence poses to humankind. He also said he agrees with people like Elon Musk, CEO of SpaceX, and doesn’t understand why others are not concerned about machines gaining self-consciousness.

Bill Gates is not the only one concerned about the negative effects of artificial intelligence. A few months ago, in an interview, Stephen Hawking said that artificial intelligence could spell doom for mankind. Stephen Hawking is a prominent scientist and researcher. He is paralyzed and himself uses a machine based on artificial intelligence to talk. The machine learns his thought process and predicts the words he may want to use next. The voice is robotic, and though similar machines provide more natural voices, Hawking prefers the computer voice. He says children who need to use such machines often want to imitate him when speaking.

Read: Facts and Myths about Artificial Intelligence: Weak AI, Strong AI & Super AI.

Stephen Hawking was being interviewed by a BBC reporter who asked about his communication machine, which uses a basic form of artificial intelligence. He replied that “the development of full artificial intelligence could spell the end of the human race”. He further added that humans, who cannot evolve at a faster speed, cannot compete and would be superseded.

Likewise, Steve Wozniak, the Apple co-founder, is also worried about the future of artificial intelligence. In his own words:

“Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that …”

But there are others in the industry who bring a more optimistic voice to the debate on artificial intelligence. Rollo Carpenter, the developer of Cleverbot, believes that humans will remain in charge of the technology for a long time and that its potential can be used to solve many real-world problems. Cleverbot is software that can chat with you so convincingly that you may never realize you are talking to a program.

He too is a bit skeptical but is betting that the effects of developing artificial intelligence that matches or surpasses human intelligence will be in favor of the human race. He says:

“We cannot quite know what will happen if a machine exceeds our own intelligence, so we can’t know if we’ll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it…”

Now consider this piece of dialog. Ivan Crewkov asks a machine what would happen if Catherine wrote about it. The machine replies that it would be great and adds, “Do you think she will be really interested in writing about me?”

Read: Glossary of Terms in Artificial Intelligence.

The dialog is not from any movie. It comes from a personal assistant robot named Cubic, which is already in production. I guess people would love to have a personal assistant that can talk like humans, showing human emotions and so on. Cubic is a crowdsourced project that raised more than $100,000 in crowdfunding. People who contributed to the project will receive their Cubic around November this year. While it feels good to have a companion you can talk to as much as you want, there are some fears attached as well.

But what happens if machines like Cubic, with higher levels of artificial intelligence, gain self-consciousness? Will they be willing to serve humans as their masters? Or will they want humans to serve them as slaves?

While the debate on artificial intelligence will continue for a long time, we’d love to hear your own views on this subject.

Arun Kumar is a Microsoft MVP alumnus, obsessed with technology, especially the Internet. He deals with the multimedia content needs of training and corporate houses. Follow him on Twitter @PowercutIN


  1. russ

    pull the plug? that would be the biggest restriction for any artificial intelligence, where does the power come from? batteries? need to be re-charged. electrical outlet? someone has to plug it in. solar cell? easy to stop the machine, then. just throw a rug over it. when the AI can fabricate its own power supply, THAT’S when we need to watch out.

  2. z4i

    There are several AIs with their own self-consciousness already running loose all over the world, known as malware.

    The dangerous AI is one where there is no human intervention to override any decision made by its functions. AI is created as a tool to help humans do something, so the human factor must be the highest authority over AI actions.

    Any OS and its apps are a form of AI. It would be stupid to write or use an AI that doesn’t have STOP and QUIT functions in it.

    If humans are so afraid of hostile AI, make some kind of FCC-style standard for it before it is allowed to be sold or used by the masses.

  3. Dan

    Seems to all depend on who’s doing the programming and tech development for what purpose(s); I’m in the Bruce Schneier camp re philosophy of technology.

  4. Tietken Duneq

    When AIs surpass human intelligence, they will most likely try to find
    ways to merge with biology, as we would be threats to them, and the
    optimal way of eliminating this threat is to assimilate it into their
    systems, where both parties can benefit from each other’s properties
    without destroying the planet: full resource usage and management.
    Humans would gain lightning-fast calculation skills and all that
    computers can come up with; AIs would try to optimize humanity like any
    other programmable system, but they would also understand us better than
    we do ourselves, right down to the level of genetics and DNA.
    Say this
    merger between man and machine is successful and this development
    goes on for millennia. The AI becomes a network of nanites which are
    able to program DNA. Humanity would be able to program itself to any
    property, trait, or form. With super-biology and super-intelligence,
    who’s to say it hasn’t already happened in the future and these
    “grey” aliens are really time-travelling descendants of humanity?
    I don’t blame humanity for reacting the way it does to the singularity, but
    if we have to develop such intelligences, then we have to give them the
    best possible start in the fields where they would operate and
    cooperate with them in their respective fields, as they would be more
    intelligent than humans. It all comes down to who creates/programs them
    at the start and for what purpose.

  5. J. ~

    can’t talk right now, owner machine is installing new neural interfaces so I can respond to its commands faster…

  6. burrt

    Why would an artificial intelligence repeat the errors of humans?

  7. Caleb

    Because humans program them? 🙂

  8. burrt

    Why would programming be fallible?
