In this article, we list and fully explain the top myths about advanced AI, debunking several of the biggest myths about AI and machine learning for scholars and researchers. A PDF version of this article will be made available in our next publication.
AI Myths and Reality Debunked
1. Timeline Myths
The first myth regards the timeline: how long do you think it will take before machines surpass human-level intelligence? Let me tell you: there is a common misconception that we know the answer to this question with great certainty. I don't think we do.
The popular myth that we know we'll get superhuman AI this century is just that: a myth. In short, the history of AI is full of technological over-hyping. To prove my point: where are those flying cars and fusion power plants we were promised as kids? Given all we were told, we'd have them by now if not for the over-hyping. In the same vein, artificial intelligence has been repeatedly over-hyped in the past, even by some of the founders of the field.
Predictions of AI
Take, for example, John McCarthy (who coined the term "artificial intelligence"). Together with Nathaniel Rochester, Marvin Minsky, and Claude Shannon, he published this overly optimistic forecast of what could be accomplished during two months with stone-age computers:
“We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College … An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”
Notice the date in the statement above and count how many years have gone by. On the other hand, there's a popular counter-myth that we know we won't get superhuman AI this century. Researchers have made a wide range of estimates for how far we really are from superhuman AI, but given the dismal track record of such techno-skeptic predictions, we certainly can't say with great confidence that the probability is zero this century.
Expectations from the Past
Let's look at an example. Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933 that nuclear energy was “moonshine.” He said this less than 24 hours before Szilard's invention of the nuclear chain reaction. In addition, Astronomer Royal Richard Woolley called interplanetary travel “utter bilge” in 1956.
Sincerely speaking, the most extreme form of this myth is that superhuman AI will never arrive because it's physically impossible. However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and there's no law of physics preventing us from building even more intelligent quark blobs. See?
Surveys on AI Myths vs. Reality, Part 1
Over the years, a number of surveys have asked AI researchers how many years from now they think we'll have human-level AI with at least 50% probability. To be candid with you, all these surveys reach the same conclusion: the world's leading experts disagree, so we simply don't know when. For example, in a poll of the AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was by the year 2050, but some researchers guessed hundreds of years or more.
Surveys on AI Myths vs. Reality, Part 2
There's also a related myth that people who worry about AI think superhuman AI is only a few years away. In fact, most people on record worrying about superhuman AI guess it's still at least decades away. But these researchers argue that as long as we're not 100% sure it won't happen this century, it's smart to start safety research now to prepare for the eventuality. Many of the safety problems associated with human-level AI are so hard that they may take another century to solve, so it's important to start researching them now rather than the night before some programmers decide to switch on their prototype.
2. Controversy Myths
There's another common misconception: that the only people harboring concerns about AI and advocating AI safety research are Luddites who don't know much about AI. When Stuart Russell, author of the standard AI textbook, mentioned this during his Puerto Rico talk, the audience laughed loudly because they took it as a joke.
Secondly, there's a related misconception that supporting AI safety research is highly controversial. In fact, to support a modest investment in AI safety research, people don't need to be convinced that risks are high, merely non-negligible, just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down.
Evil Sells Faster than Good
Come to think of it, the media may have made the AI safety debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and accurate ones. Tech geeks can testify to this. As a result, two people who know about each other's positions only from media quotes are likely to think they disagree more than they really do.
Let me explain with an example. A techno-skeptic who has only read about Bill Gates's position in a British tabloid may mistakenly think Gates believes superintelligence to be imminent. Likewise, someone in the beneficial-AI movement who knows nothing about Andrew Ng's position except his quote about overpopulation on Mars may mistakenly think he doesn't care about AI safety, whereas in fact he seriously does. The crux is simply that because Ng's timeline estimates are longer, he naturally tends to prioritize short-term AI challenges over long-term ones.
3. Myths About the Risks of Superhuman AI
Many artificial intelligence researchers roll their eyes when seeing headlines like “Stephen Hawking warns that rise of robots may be disastrous for mankind.” Believe me when I say that many researchers have lost count of how many similar articles they've seen. Generally, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they've become conscious and/or evil. On the bright side, such articles are actually rather impressive, because they compactly summarize the scenario that AI researchers don't worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.
If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.
Is the Fear of Machines among the AI Myths?
The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are – so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.
The consciousness misconception is related to the myth that machines can't have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn't exclaim: “I'm not worried, because machines can't have goals!”
The Harm Robots Can Do Is Not an AI Myth
I personally sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red, shiny eyes. In truth, the main concern of the beneficial-AI movement isn't with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection; this may enable it to outsmart financial markets, out-invent human researchers, out-manipulate human leaders, and develop weapons we cannot even understand, much less control. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding. That would be disastrous.
The robot misconception is related to the myth that machines can't control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as the smartest on our planet, we might also cede control.
4. The Interesting Controversies
Last on our list are the interesting controversies. Rather than wasting any more time on the above-mentioned misconceptions, let us focus on true and interesting controversies where even the experts disagree. What sort of future do you want? Should researchers develop lethal autonomous weapons? What would you like to happen with job automation? What career advice would you give today's kids in this technology age? Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth?
Further down the road, would you like us to create superintelligent life and spread it through our cosmos? Will we control intelligent machines, or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future be that way? Please join the conversation; let's learn from each other.