AI is not as scary as you think, and here's why

There are concerns about whether artificial intelligence (AI) could outperform and dominate humans. But, at least in the near future, things won't play out the way they do in science-fiction movies.

Professor Nick Bostrom doesn't need a crystal ball to predict the future; he has an entire institute for it. He is the founder and director of the Future of Humanity Institute (FHI) at Oxford University, which collaborates with researchers and technology companies such as Google's DeepMind and Facebook through the Partnership on AI. The goal is to answer humanity's big questions and make sure we are not destroyed by the things we create.

Bostrom believes we are entering a new era of human civilization: the age of AI. He predicts that machine learning will bring fundamental changes to the way people exist in the world, comparable to the agricultural and industrial revolutions. Does that mean AI will outgrow humans? The Next Web asked him whether computers could outperform people.

For now, AI is still controlled by its designers. They curate the data and the AI uses it to perform a task, so the risk is limited. But we have to view AI differently from other technologies; it is not just another product. It is already hard to explain why an AI does what it does. Faster than we can imagine, deep learning systems are doing things where we do not always understand why the AI chose one option over another.

Bostrom thinks the potential still outweighs the risk. For a philosopher, he is surprisingly pragmatic. When asked about ethical issues, he pointed to the well-known self-driving-car thought experiment: a hypothetical choice between hitting a group of children or an elderly couple.

If self-driving cars can reduce traffic fatalities from 1.2 million globally to a very small number, almost zero, then I think many of these moral questions, which refer to situations that arise only in very rare cases, are not that important. We should focus on how these technologies can benefit us.

Obviously, the near future of AI needs the insight of people like Professor Bostrom. His work has shaped the conversation about machine learning, but what about the more practical details? Should we be concerned about real-world issues beyond whether robots will enslave us? If a robot takeover is merely a product of science fiction and philosophy, we risk overlooking the important AI applications that never make the big headlines. According to Ram Menon, CEO and co-founder of Avaamo, an AI bot platform business:

There is a lot of hype right now about AI solving problems like curing cancer or powering smart self-driving cars. Although AI may handle these things in time, the current technology is too new to have a real impact on them. These exaggerated claims lead people to overestimate AI.

AI is not as scary as the gloomy scenarios people imagine

Menon agrees with Bostrom that AI will not turn us into slaves anytime soon. He argued that the hype around AI could be counterproductive to the benefits it brings: 'People should remember that humans build AI and teach it how to act in certain ways and perform certain functions. The goal is for it to work without human intervention, but human hands are still involved in the technology.'

Philosophers like Professor Bostrom are making sure we are prepared for future problems. CEOs like Menon are working to bring AI into everyday life as soon as possible. But the number of people who knowingly use AI every day is smaller than you might think. 'Many AI technologies are being built into back-end work, helping to analyze and classify data. But as conversational AI becomes more popular, chatbots will become most people's first daily interaction with AI.'

There are things worth worrying about when it comes to AI, though most of the talk is overblown. We should stay aware of the possibility of disaster when adopting any new technology, but for now, the sky has not fallen.
