Our world keeps expanding with new technologies and the terminology that comes along with them. To cope with this constant flow of information, we have a wide range of tools for observation, research, and judgment.
It's fascinating, even unbelievable, but it also puts a certain responsibility on our shoulders: to dig deeper into things so we don't spread misinterpretations. Today we are going to figure out how it all works from the inside.
Here’s a short plan of what you’ll be reading about:
- Levels of artificial intelligence and available AI tools;
- Machine learning types and their application;
- Basics of natural language processing.
If you already feel comfortable with any of these fields, you can of course skip ahead, but since you're here, I recommend reading the whole article: it's always better to know more points of view.
Let's be honest: we all seem to talk about these things with a pretension of being knowledgeable: Artificial Intelligence, Machine Learning, Natural Language Processing, and other "geek" keywords. But even though the number of people interested in the AI field is rising exponentially, a substantial part of them still has a rather scattered idea of what it actually is.
We may be comfortable with our own point of view, but we never truly know how good it is until we learn about others.
That's why the idea is to start with the most common scientific view of the main technological questions humanity is dealing with today: where we are now and what we should expect next.
What we have now
Artificial Narrow Intelligence, or Weak AI. It's the first significant step on the road we have long been suspected of taking, and we are already here. But don't be confused by the term "weak": it is far from weak at particular tasks such as playing games, performing research, or making predictions.
Moreover, at those tasks it is often far more efficient than people. The term "weak" is justified only in comparison with the other two stages we are expected to reach in the foreseeable future.
What we may expect further
- Artificial General Intelligence, or Strong AI
- Artificial Super Intelligence
AGI is a machine (non-existent yet) capable of performing a wide variety of tasks, just as we humans are. It is still more a topic for futurists and science-fiction writers, but AI researchers are working hard right now to achieve it in the near future.
What's next? ASI. Here's how Nick Bostrom defines it:
An intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.
Thanks to the recursive self-improvement that AGI is expected to possess, the path from AGI to ASI is likely to be as fast as an explosion; in fact, it is called the intelligence explosion.
The term was first used by I. J. Good in 1965. This is how he defined it:
Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man needs ever make, provided that the machine is docile enough to tell us how to keep it under control.
One more term we need to discuss in this section is another of today's buzzwords, machine learning, and what it has to do with artificial intelligence and vice versa.
In fact, machine learning is a subset of artificial intelligence research. Using a variety of algorithms, it enables computers to learn from data and make predictions without being explicitly programmed. Achievements in machine learning made the arrival of ANI possible, but it is not the only kind of AI tool.
ML falls into three types: supervised, unsupervised, and reinforcement learning.
Supervised learning
Literally, this means learning with a teacher. In real-world cases it works as follows: we have data with a set of inputs and their matching outputs, from which our program is expected to learn in order to make correct predictions for new inputs.
In SL we usually face two kinds of problems: predicting either continuous or discrete variables.
To solve the first (the regression problem) we need to model the continuous relation between input and output, in other words, a function that fits the data. Example: knowing how much food labradors consume, predict their weight.
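To make the regression idea concrete, here is a minimal sketch in pure Python that fits a straight line by ordinary least squares; the labrador numbers below are entirely made up for illustration.

```python
# A minimal regression sketch: fit weight = a * food + b
# by ordinary least squares on invented labrador data.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # covariance of x and y, and variance of x (unnormalized)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var            # slope
    b = mean_y - a * mean_x  # intercept
    return a, b

# daily food (kg) -> weight (kg), hypothetical training data
food = [0.3, 0.4, 0.5, 0.6]
weight = [25.0, 28.0, 31.0, 34.0]

a, b = fit_line(food, weight)
predicted = a * 0.45 + b  # predict the weight of a new dog
```

Real systems would use many features and a library such as scikit-learn, but the principle of fitting a function to input–output pairs is the same.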
The second is the classification problem, which is about mapping an input to one of a set of discrete values.
One of the most widely used examples: given certain information about a client, a bank decides whether or not to grant them credit.
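As a toy sketch of the classification idea (not how banks actually score credit), the example below assigns a client to whichever of two class centroids is nearer; all features and numbers are invented.

```python
# A toy nearest-centroid classifier: decide whether to grant
# credit from two invented features (income, existing debt).

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def sq_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

approved = [(60, 5), (80, 10), (70, 2)]    # clients who repaid
rejected = [(20, 30), (25, 40), (15, 25)]  # clients who defaulted

c_yes, c_no = centroid(approved), centroid(rejected)

def decide(client):
    # pick the class whose "average client" is closest
    return "approve" if sq_dist(client, c_yes) < sq_dist(client, c_no) else "reject"

decision = decide((65, 8))
```

The output here is a discrete label rather than a number, which is exactly what distinguishes classification from regression.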
Unsupervised learning
Instead of being given any information on correct outputs to rely on, the program is provided only with a set of inputs in the form of unstructured data. The goal is to figure out the structure hidden in this data. Approaches to UL divide into clustering and non-clustering ones.
Here's an example task: find the "consumer types" of a fair-trade coffee business in order to optimize sales and improve the marketing strategy.
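The "consumer types" task is a classic clustering problem. Below is a minimal k-means sketch in pure Python; the customer data (cups per week, average spend) and the choice of two clusters are invented for illustration.

```python
# A minimal k-means sketch: group customers by
# (cups per week, average spend) into k "consumer types".

def kmeans(points, k, iters=20):
    centers = points[:k]  # naive initialization: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: (p[0] - centers[i][0]) ** 2
                                + (p[1] - centers[i][1]) ** 2)
            clusters[i].append(p)
        # update step: move each center to its cluster's mean
        centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

customers = [(2, 3.0), (3, 3.5), (2, 2.5),      # casual drinkers
             (12, 9.0), (14, 10.0), (13, 8.5)]  # heavy drinkers

centers, clusters = kmeans(customers, k=2)
```

Note that no labels were given: the algorithm discovers the two groups from the structure of the inputs alone.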
Artificial neural networks
One of the most powerful yet computationally expensive approaches in machine learning. It all started in 1949 with psychologist Donald Hebb: inspired by the plasticity of neural cells, he proposed the learning hypothesis that would later be associated with unsupervised learning. A closely related idea had already appeared in 1948 in Turing's B-type machines.
In 1958 Frank Rosenblatt built the perceptron, a two-layer neural network for pattern recognition, and in 1975 Paul Werbos described the backpropagation algorithm, now the most commonly used method for training ANNs. Due to the lack of processing power, research on ANNs slowed down until recent times.
- when an ANN has multiple layers, this is referred to as deep learning;
- an ANN can be trained with either supervised or unsupervised learning, depending on the particular task.
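Rosenblatt's perceptron, mentioned above, fits in a few lines of code. Here is a minimal sketch of perceptron learning trained on the logical AND function; the learning rate, epoch count, and training data are my own illustrative choices.

```python
# A minimal Rosenblatt-style perceptron: a single neuron with a
# step activation, trained by the classic error-correction rule.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]  # weights
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # nudge weights toward the correct answer
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# logical AND: output 1 only when both inputs are 1
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
outputs = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
           for (x1, x2), _ in and_data]
```

A single perceptron can only learn linearly separable functions like AND; it is stacking many such units into layers, trained with backpropagation, that gives modern deep learning its power.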
With Recurrent Neural Networks it is possible to generate stories about images; with Convolutional Neural Networks we can combine different images, generate Chinese characters, and build recommendation systems.
Reinforcement learning
This type of ML implies dynamic interaction between a machine and its environment where there is an exact goal to achieve. Depending on the particular situation and the number of steps the machine is expected to take, a system of numeric rewards is pre-defined.
Every time the machine makes a correct step, it is given a reward, while a wrong step takes the reward away. The machine seeks more rewards and therefore learns from its previous wrong steps. Thus, RL is largely based on trial-and-error learning.
Examples of use: strategy games, robotic systems, self-driving cars.
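The reward-driven trial-and-error loop described above can be sketched with tabular Q-learning, one classic RL algorithm. The corridor environment, reward scheme, and hyperparameters below are all invented for illustration.

```python
# A tiny tabular Q-learning sketch: an agent on a 1-D corridor
# of 5 cells learns, by trial and error, to walk right toward
# a goal in the last cell.
import random

N = 5               # cells 0..4, goal at cell 4
ACTIONS = [-1, 1]   # step left / step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration
random.seed(0)

for _ in range(200):  # episodes
    s = 0
    while s != N - 1:
        # epsilon-greedy: mostly exploit, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0  # reward only at the goal
        # standard Q-learning update
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# the learned greedy policy: which action each cell prefers
greedy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N - 1)]
```

After training, the greedy policy points right in every cell: the agent has learned the goal-directed behavior purely from rewards, which is the essence of RL used in games, robotics, and self-driving cars.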
Elon Musk, co-founder of Tesla and PayPal, founder of SpaceX and just "the world's most rad man" (as described here), recently came up with a comprehensive toolkit for developers in reinforcement learning. The organization behind it is called OpenAI, and its toolkit "supports teaching agents everything from walking to playing games like Pong or Go".
In college, Elon Musk was contemplating what he wanted to become. He made a list of prospective fields he could plunge into, and artificial intelligence was on that list, but he was uncertain enough about it to put it off until better times, which seem to have arrived. He is actively investing in the AI field, describing it as "keeping an eye on what's going on".
Elon Musk is widely known for his fears about AI. As he put it:
With artificial intelligence, we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.
By gradually developing OpenAI, he intends to be part of safe AI development and to build a secure environment. His words are frequently quoted, as he represents the cautious side of the AI debate in the scientific community:
I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.
There is, however, another, more optimistic side, usually represented by Ray Kurzweil. An outstanding inventor with a high rate of correct predictions behind him, he is confident enough to state that we will have AGI by 2029 and ASI by 2045.
You should also know that he co-founded Singularity University, which is supported by NASA and Google. Even though his predictions about AI do not stray far from the average forecast, he sounds so ambitious and self-assured about them that he sparks loud discussions every time.
In his book "How to Create a Mind" he explores reverse-engineering of the brain and the intersections of neuroscience and AI, and predicts the explosive inventions of the near future.
Natural Language Processing
What's more? One more black hole in common understanding: NLP (not to be confused with neuro-linguistic programming)!
In short, Natural Language Processing is about a machine understanding natural language. Programming everything explicitly would be time-consuming and inefficient, so machine learning is used instead: rather than being pre-programmed with conversational patterns, the machine learns to understand language dynamically.
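As a toy illustration of learning language patterns from data rather than hard-coding them, here is a minimal word-count classifier; the tiny corpus and labels are invented, and real NLP systems are of course far more sophisticated.

```python
# A minimal NLP sketch: learn word counts from labelled example
# sentences, then classify a new sentence by which label's
# vocabulary it overlaps with more.
from collections import Counter

training = [
    ("what a great wonderful movie", "positive"),
    ("i love this great film", "positive"),
    ("what a terrible boring movie", "negative"),
    ("i hate this boring film", "negative"),
]

# count how often each word appears under each label
counts = {"positive": Counter(), "negative": Counter()}
for sentence, lbl in training:
    counts[lbl].update(sentence.split())

def classify(sentence):
    words = sentence.split()
    score = {lbl: sum(c[w] for w in words) for lbl, c in counts.items()}
    return max(score, key=score.get)

label = classify("a wonderful film")
```

Nothing about "wonderful" was programmed in: the association was learned from data, which is the core shift machine learning brings to NLP.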
While it is fair to treat all of these terms as must-knows, the true list of cutting-edge concepts is constantly expanding, and it seems impossible to cover them all in one article. It is useful to remember that we set the rules and measures both for our own learning process and for global technological evolution.
Instead of treating information merely as a way to pass the time, we are gradually becoming aware of its power and of the responsibility that comes with it. The better we understand the world we live in, the higher the chance that our contributions to its development will yield a better world for future generations.