Artificial intelligence is the buzzword of the 21st century, and yet many people are unaware of what it actually is. Artificial intelligence is a field that encompasses many lesser-known subfields, not just the human-like robots from the movies. In this article I will set out the different subfields of artificial intelligence that you might have already (unknowingly) encountered or are very likely to run into in the upcoming years. Have a look for yourself and see whether A.I. is just the next hype or the future we are all going to live in.
Speaking like humans
You have probably already experienced artificial intelligence in the form of Apple’s Siri or Amazon’s Alexa. This field within artificial intelligence is called natural language processing (NLP). It studies the complex interactions between computers and human (natural) languages. For example, think about concepts like speech recognition, natural language understanding, and natural language generation. A computer needs to recognize what you say, translate this so it understands what you mean, and then respond with an answer. Large quantities of data are used to improve the overall quality of speech recognition and understanding. This data is gathered directly from people like you and me and could provide a gateway to the world for the illiterate (Singh, 2017). Do you want to know how close computers are to human speech? Be sure to watch the video below and be amazed!
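To give a feel for the recognize–understand–respond pipeline, here is a toy sketch in Python. Everything in it is invented for illustration: real assistants use statistical models, not keyword lookup, and the intents and replies below are made up.

```python
# Toy NLP pipeline: (pretend-)recognised utterance -> intent -> reply.

def understand(utterance):
    # "Natural language understanding": map words to an intent.
    words = utterance.lower().split()
    if "weather" in words:
        return "get_weather"
    if "time" in words:
        return "get_time"
    return "unknown"

def respond(intent):
    # "Natural language generation": map the intent to a canned reply.
    replies = {
        "get_weather": "It looks sunny today.",
        "get_time": "It is three o'clock.",
        "unknown": "Sorry, I did not catch that.",
    }
    return replies[intent]

print(respond(understand("What is the weather like?")))
# → It looks sunny today.
```

A real assistant replaces each of these hand-written steps with a learned model, but the division of labor is the same.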
“I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”
A.I. planning

A.I. planning may sound a bit ambiguous, but you might have already seen some videos about it, as it is the main drive behind those weird walking-robot videos. This form of artificial intelligence works out strategies or action sequences. It is a rather complex field, as it requires solving undiscovered scenarios (unfamiliar terrain for a walking robot) in multidimensional space (the world is 3D, which means a lot of extra variables for a robot to consider). That sounds abstract, but the practical implementations can be seen in autonomous robots and unmanned vehicles. Known environments are no challenge for planning models, as they can accurately rely on previous data to plan their moves, even offline. Dynamic environments, however, demand fast real-time alterations based on factors that are, from the robot’s perspective, often random. A.I. planning is capable of learning from these iterative trial-and-error processes and can even survive in unfamiliar territory. This also means the robot has to adapt; it sort of has to think for itself about what strategy to use. Quite impressive! Have a look below at some amazing robots doing seemingly ordinary things.
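At its simplest, planning an action sequence can be framed as search over possible moves. The sketch below, with a made-up 3×3 grid and move set, uses breadth-first search to find a sequence of actions from a start cell to a goal cell around an obstacle; real planners tackle vastly richer state spaces, but the idea is the same.

```python
# Minimal planning-as-search sketch: find an action sequence on a tiny grid.
from collections import deque

grid = [  # 0 = free, 1 = obstacle
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]

def plan(start, goal):
    moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    queue = deque([(start, [])])  # (position, actions taken so far)
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path  # the action sequence that reaches the goal
        for name, (dr, dc) in moves.items():
            nr, nc = r + dr, c + dc
            if 0 <= nr < 3 and 0 <= nc < 3 and grid[nr][nc] == 0 \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [name]))
    return None  # no plan exists

print(plan((0, 0), (2, 0)))
# → ['right', 'right', 'down', 'down', 'left', 'left']
```

Breadth-first search guarantees the shortest action sequence; a walking robot in a dynamic environment would re-plan like this continuously as its surroundings change.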
“Predicting the future isn’t magic, it’s artificial intelligence.”
Machine learning

Another subset of artificial intelligence is machine learning. This is a process in which a highly sophisticated algorithm uses sample data, formally known as ‘training data’, to teach itself what is right or wrong when doing a certain task. Of course, the quality of the algorithm is therefore contingent upon the quality of the input data. These algorithms need data to function, and therefore, unfortunately:
“There is no algorithm for creativity”
But how does this process work exactly? Basically, the algorithm uses patterns and conclusions within a dataset to generate predictions and make decisions accordingly. The key takeaway is that machine learning is not explicitly programmed for a specific purpose. Rather, it is given a goal and figures out, based on data, how to achieve that goal to the best of its abilities. Consequently, machine learning is best used when alternatives, like developing an algorithm with specific instructions, are infeasible for the task at hand. For example, programming a self-driving car by hand would require you to input all the traffic rules, all the possible actions other road users can take, all the different weather conditions, etc. That is a nearly impossible task, and the chances of the programmer missing a factor and causing a crash are very high. Instead, we drive the car, equipped with the algorithm, around a lot and let it gather data to reach its goal: drive safely from point A to point B. Every time the driver has to take over, the car knows it did something wrong and gathers more data. Eventually the car knows how to drive safely all by itself, while no programmer ever wrote out explicit instructions.
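The idea of "learning from examples instead of explicit rules" can be shown in a few lines. Below is a minimal, illustrative perceptron (one of the oldest machine learning algorithms, not anything from the text above) that learns the logical AND function purely from labelled training data, by nudging its weights after every mistake.

```python
# A perceptron learns AND from (inputs, expected output) examples;
# no rule for AND is ever written down explicitly.
training_data = [
    ((0, 0), 0),
    ((0, 1), 0),
    ((1, 0), 0),
    ((1, 1), 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > 0 else 0

# Sweep over the training data repeatedly, correcting each error slightly.
for _ in range(20):
    for x, target in training_data:
        error = target - predict(x)
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
        bias += learning_rate * error

print([predict(x) for x, _ in training_data])
# → [0, 0, 0, 1]
```

The program was only told *when* it was wrong, yet it ends up computing AND correctly — the same principle, scaled up enormously, is what lets a car learn to drive from examples of human driving.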
“In order for A.I. systems to work, they need to be trained. And we, we humans, are their mothers and fathers. We are their study buddies. We are the ones these A.I. systems are learning from”
A.I. Knowledge reasoning
Closely related to machine learning is another A.I. concept: knowledge reasoning. The difference with machine learning is subtle. Machine learning relies on a network of weighted links between inputs and outputs (via intermediary layers of nodes). A.I. reasoning relies on explicit, human-understandable representations of the concepts, relationships and rules that comprise the desired knowledge domain (Lefkowitz, 2018). It uses information or knowledge that was absent from the ‘input data’, and by dynamically combining this knowledge with the context it is given, it is able to reach answers or conclusions.
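A small sketch makes the contrast concrete. Instead of weighted links, a reasoner works with explicit facts and rules; the forward-chaining loop below (with made-up facts, in the spirit of the classic Socrates syllogism) derives new knowledge that was never in the input.

```python
# Forward-chaining sketch: derive new facts by repeatedly applying rules.
facts = {"socrates_is_human"}

# Each rule reads: if all premises hold, the conclusion holds too.
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:  # keep going until no rule adds anything new
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# → ['socrates_is_human', 'socrates_is_mortal', 'socrates_will_die']
```

Note that every step is human-readable: you can inspect exactly which rule produced which conclusion, which is precisely what a trained neural network does not let you do.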
Image and video recognition
Have you ever wondered how Facebook manages to tag faces in a picture? The popular word for this is image recognition, but the more technically correct term is computer vision. This form of artificial intelligence tries to understand digital images or videos. It includes methods for acquiring, processing, analyzing and understanding digital images. That means it can not only recognize that there is a face in the picture, it might also analyze the face and be able to suggest who the person is. Computer vision is based upon the extraction of high-dimensional data from the real world in order to produce numerical or symbolic information. For example, to help artificial intelligence make sense of the world, China uses large volumes of cheap labor to identify objects in images (basically, people clicking faces or animals in pictures all day) (Yuan, 2018). The thought of mass identification by humans seems strange, but it is an essential part of the training of A.I. Even you have probably been training A.I. for years without realizing it. I am talking about Google’s reCAPTCHA (the security measure where you have to click images of cars, fire hydrants, etc.). Your input has been improving Google’s A.I. for years (O’Malley, 2018)! Have a look at the video below to learn more about how computers learn to recognize objects.
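"Extracting numerical information from images" sounds abstract, so here is a deliberately tiny sketch: each 2×2 "image" is just four pixel brightness values, and a new image is labelled by finding its nearest neighbour among human-labelled examples. The images and labels are invented; real systems use millions of pixels and deep networks, but they too start from numbers.

```python
# Toy image classification: pixels in, label out, via nearest neighbour.
import math

training_images = [  # (flattened 2x2 pixel values, human-given label)
    ([0.9, 0.9, 0.1, 0.1], "face"),
    ([0.8, 0.7, 0.2, 0.1], "face"),
    ([0.1, 0.2, 0.9, 0.8], "cat"),
    ([0.2, 0.1, 0.8, 0.9], "cat"),
]

def classify(pixels):
    # Pick the label of the closest training image (Euclidean distance).
    return min(
        training_images,
        key=lambda item: math.dist(pixels, item[0]),
    )[1]

print(classify([0.85, 0.8, 0.15, 0.1]))
# → face
```

Every labelled example here stands in for one click by a human annotator — which is why the mass labelling described above, reCAPTCHA included, matters so much.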
“All those little visual puzzles add up to A.I. advances”
General Artificial Intelligence
The previous points can be described as forms of applied or narrow artificial intelligence. These forms of A.I. do not require the program to perform the full range of human cognitive abilities. Most of the misunderstanding when talking about A.I. comes from the fact that people visualize a computer gaining consciousness: the so-called ‘strong A.I.’ that would be able to pass the Turing test, proving through an imitation game that it is indistinguishable from an actual human (Turing, 2009). That form of artificial intelligence is one step further and is better known as Artificial General Intelligence (A.G.I.). A system often cited as a step in this direction is IBM’s Watson, a supercomputer. Have a look at Watson’s success in the popular game show Jeopardy!
“The five phases of Artificial Intelligence (AI 5.0) are Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), Artificial Consciousness, Artificial Superintelligence (ASI) and Compassionate Artificial Superintelligence (CAS).”
On an ending note, in future conversations you are now able to talk about and distinguish the diverse parts that make up the intelligent machines of the future. These developments are coming, and how they will be utilized depends on our willingness to explore and expand upon current technologies. Artificial consciousness will only be achieved when knowledge and understanding are shared on a global stage. So keep filling in that captcha; the world depends on it!
Lefkowitz, L. (2018, May 22). Semantic reasoning: The (almost) forgotten half of AI. Retrieved from aibusiness.com: https://aibusiness.com/semantic-reasoning-ai/
O’Malley, J. (2018, January 12). Captcha if you can: how you’ve been training AI for years without realising it. Retrieved from www.techradar.com: https://www.techradar.com/news/captcha-if-you-can-how-youve-been-training-ai-for-years-without-realising-it
Singh, N. (2017, December 02). How Speech Recognition Technology Will Lead The Way to Better Communication. Retrieved from www.entrepreneur.com: https://www.entrepreneur.com/article/305609
Turing, A. M. (2009). Computing machinery and intelligence. In Parsing the Turing Test (pp. 23–65). Springer.
Yuan, L. (2018, Nov 25). How Cheap Labor Drives China’s A.I. Ambitions. Retrieved from www.nytimes.com: https://www.nytimes.com/2018/11/25/business/china-artificial-intelligence-labeling.html