Understanding different AI-related Concepts

Several different concepts and terms are used when speaking about Artificial Intelligence (AI), and they are often used interchangeably; this blog will highlight the similarities and differences between these concepts. These are the topics addressed:

  • Neural Networks
  • Deep Learning
  • Machine Learning
  • Expert Systems
  • Artificial Intelligence

Neural Networks

Neural networks are based on neural units that work similarly to brain cells. Each neural unit is connected to many others, and the links between them can be either activated or inhibited, based on pre-set conditions. An individual neural unit applies a summation function to all of its incoming connections and produces an output bounded between limits, such as -1 to 1.

The connections between different neural units are weighted; the stronger the weight, the more the connection contributes (similar to how human brain cells are stimulated). Weights are set at the beginning and then corrected over repeated iterations through backpropagation. Interestingly, the initial configuration of the weights does not matter much, as it is adjusted over time.
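To make the summation function and the weights a little more concrete, here is a minimal sketch of a single neural unit in Python. It is not the API of any particular library; the example inputs, the tanh squashing function, and the one-step weight correction are illustrative stand-ins for what backpropagation does over many iterations.

    # A single neural unit: weighted sum of the inputs, squashed into the range -1..1.
    # The weight update at the end only hints at how backpropagation nudges weights
    # towards a target; real training repeats this over many examples and iterations.
    import math
    import random

    def neural_unit(inputs, weights):
        """Sum the weighted inputs and bound the result between -1 and 1."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return math.tanh(total)

    inputs = [0.5, -0.2, 0.9]                          # illustrative input signals
    weights = [random.uniform(-1, 1) for _ in inputs]  # the initial weights are arbitrary

    output = neural_unit(inputs, weights)
    target = 1.0                                       # the output we would like to see
    error = target - output

    # Nudge each weight slightly in the direction that reduces the error.
    learning_rate = 0.1
    weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]

    print(output, neural_unit(inputs, weights))        # the second value is closer to the target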

[Figure: a very simple neural network]

A simple neural network uses just the distinction “connection is on” or “connection is off”, while more complex neural networks use real numbers (e.g. adding up to 1) for the weights. Dynamic neural networks are the most advanced: based on rules, they can dynamically form new connections and even new neural units while disabling others.

Neural networks learn similarly to the way the human brain does, and the success of the learning process is not guaranteed – some neural networks are more successful than others. Neural networks can solve complex problems, such as speech recognition or computer vision, which are difficult to tackle with traditional programming. When a neural network contains many internal layers, it is called a “deep neural network”.

Deep Learning

A concept related to Neural Networks is Deep Learning, where big (or deep), multi-layered Neural Networks are used to perform learning tasks. Deep Learning made the news headlines, for example, by beating the Go world champion. Deep Learning is particularly good in problem areas where abstract concepts need to be derived from input data; this is where deep neural networks help.

As seen above, neural networks use weights to achieve the desired behaviour; this is the “learning” process of neural networks. Depending on the problem to be solved, this “learning” might require long causal chains of computational stages, where each stage transforms the aggregate activation of the network. Deep Learning addresses the accurate assignment of weights across many of these stages.
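As a rough illustration of such a chain of stages, here is a minimal sketch of a forward pass through a small multi-layered network using NumPy (assumed to be installed). The layer sizes and the random weights are purely illustrative; a real Deep Learning system would also need a training loop that assigns these weights via backpropagation.

    # Forward pass through a stack of layers: each stage multiplies the current
    # activation by a weight matrix and applies a non-linearity, transforming the
    # aggregate activation of the network step by step.
    import numpy as np

    rng = np.random.default_rng(0)

    layer_sizes = [4, 8, 8, 2]            # input, two hidden layers, output (illustrative)
    weights = [rng.normal(size=(m, n))    # one weight matrix per transition between layers
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(x):
        """Pass an input vector through every layer in turn."""
        activation = x
        for w in weights:
            activation = np.tanh(activation @ w)   # each stage transforms the activation
        return activation

    print(forward(rng.normal(size=4)))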

Popular applications of Deep Learning are Siri, speech recognition in Android, and Google Brain, which managed to identify cats in pictures randomly selected from the Internet (the Internet is full of cats, by the way). The success of Deep Learning rests on the fact that we nowadays have enough computational power and enough material to learn from (Deep Learning only works if there is enough input data that can be used in the learning process).

Machine Learning

Machine learning uses algorithms that can learn from and make predictions on data, based on pattern recognition and computational learning. Machine learning is used to address problems that are difficult to solve with normal programming, such as optical character recognition, image analysis, email filtering, pattern recognition, or ranking of items.

There are two major learning strategies, which are used to address different problems:

Supervised Learning: This works like giving a student a set of problems together with their solutions and telling the student to learn how to identify solutions for other, future problems. The main algorithm families included in supervised learning are:

  • Classification (including techniques such as logistic regression, classification trees, support vector machines, random forests, and neural networks)
  • Regression (including techniques such as linear regression, decision trees, Bayesian networks, fuzzy classification, and neural networks)

Unsupervised Learning: In contrast to supervised learning, no solutions are given. To stay with the student example: this is like giving a student a set of patterns and asking what the underlying principles are that generated these patterns. The main algorithm families included in unsupervised learning are:

  • Clustering (including techniques such as k-means clustering, hierarchical clustering, Gaussian mixture models, genetic algorithms, and neural networks)
  • Dimension reduction (including techniques such as principal component analysis, tensor decomposition, multidimensional statistics, random projections, and neural networks)

These different techniques used in Machine Learning will be explained in more detail in later blogs. One technique used by all the different types of Machine Learning is Neural Networks.
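To make the difference between the two strategies more tangible, here is a minimal sketch using scikit-learn (assumed to be installed). The toy data set is generated on the fly, and the choice of logistic regression and k-means clustering is just one example from each of the families listed above.

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # A small synthetic data set: X holds the "problems", y holds the "solutions".
    X, y = make_classification(n_samples=200, n_features=4, random_state=0)

    # Supervised: the model sees both the problems (X) and their solutions (y).
    classifier = LogisticRegression().fit(X, y)
    print("predicted labels:", classifier.predict(X[:5]))

    # Unsupervised: the model only sees the patterns (X) and must find structure itself.
    clusterer = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print("cluster assignments:", clusterer.labels_[:5])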

Machine Learning is one of the most popular applications of developments in AI, and it has produced a lot of different tools that can be used to handle particular problems. Using a Machine Learning platform is not very difficult, and some people simply try one to see what the results look like. However, in order to select the right tool for a particular job, a good understanding of the algorithms and of how to make the best use of them is required.

Expert Systems

Expert Systems are computer systems that emulate the decision-making capabilities of a human expert. Expert systems solve problems based on “if – then” rules and have been around for quite some time now, starting in the 1970s and continuously evolving since.

Expert Systems are built of two components:

  • Knowledge base: Contains the facts and rules that form the basis of decision-making; the knowledge base needs to hold very good (expert) knowledge and needs to be accurate – the better the knowledge base, the better the Expert System will work. It contains both factual and heuristic knowledge.
  • Inference engine: The inference engine applies the rules to the known facts to deduce new facts; in detail, it:
    • applies rules repeatedly to the facts, including facts obtained from earlier rule applications;
    • adds new knowledge into the knowledge base, if required;
    • resolves rule conflicts when multiple rules are applicable to a particular case.

The inference engine can use two different strategies:

  • Forward chaining: Here, the Expert System asks “What happens next?” The inference engine follows the chain of conditions and derivations and finally deduces the outcome. It considers all the facts and rules and sorts them before reaching a conclusion (see the sketch after this list).
  • Backward chaining: Here, the Expert System asks “Why did this happen?” On the basis of what has already happened, the inference engine tries to find out which conditions could have held in the past to produce this result. This strategy is used to find a cause or reason.
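As mentioned above, here is a minimal forward-chaining sketch in Python. It is not a real Expert System shell; the facts and the “if – then” rules are invented purely for illustration and only show how the inference engine keeps applying rules until no new facts can be deduced.

    # Each rule is a pair: a set of conditions and the fact it allows us to conclude.
    # The facts below stand in for a (hypothetical) car-diagnosis knowledge base.
    facts = {"engine_does_not_start", "battery_is_flat"}

    rules = [
        ({"engine_does_not_start", "battery_is_flat"}, "charge_battery"),
        ({"charge_battery"}, "check_alternator"),
    ]

    changed = True
    while changed:                          # repeat until no rule adds anything new
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)       # deduce a new fact and add it to the knowledge base
                changed = True

    print(facts)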

Expert Systems also have a user interface that helps users (who do not need to be experts in the particular field or in AI) communicate with the system. An Expert System can also explain how it arrived at a solution; this, too, is done through the user interface.

The biggest disadvantage of Expert Systems is the time and effort it takes to build the Knowledge Base.

Artificial Intelligence

There are many definitions of Artificial Intelligence (AI), some of which make the distinction between the aforementioned concepts pretty difficult, which adds to the overall confusion that led to this blog (please see the Introduction if you don’t remember). The main difference between AI and all the other developments described above (which do form part of AI) is that an AI system will be able to perform in a way that makes it impossible to distinguish from a human being.

There are small examples of this out there already (Alexa, Cortana, Siri), which – to an unaware person – might already sound like a human responding. What this blog considers is the “restricted” use of AI for specific problems. There is also the future prospect of strong AI, which thinks “out of the box” and develops its own theories – this will not happen soon and is not the topic of this blog.

AIDirections’ definition of AI is: making a (computer) system in a confined problem space react in a way that makes it indistinguishable from a human being (the “Imitation Game”).

The fact that computers can do certain things faster and more accurately than humans can be exploited to great advantage. Examples of AI applications are:

  • Self-driving cars
  • Creating art
  • Playing games
  • Interactive assistants (Alexa, Siri, etc.)
  • Prediction of judicial decisions
  • And many, many more …

History of AI

Alan Turing was the first to suggest that computers can solve certain problems better (in this case faster) than a human could. His work had a significant influence on AI; some examples are:

  • 1947 – “What we want is a machine that can learn from experience”
  • 1948 – Report “Intelligent Machinery” – training of a network of artificial neurons to perform specific tasks
  • 1950 – Paper “Computing Machinery and Intelligence”
  • Could a machine win “The Imitation Game” [distinguishing the gender of two players]?
  • Introduced the Turing Test [a conversation with a machine not distinguishable from one with a human]

This was followed by a lot of research on AI until the end of the last century; a major breakthrough was “Deep Blue” winning against Garry Kasparov. Then things became quieter around AI, and the research work carried out in laboratories and universities was not shared with the public. Only recently, after further successes, has AI been discussed more publicly, and it is now more prominent than ever before.

Research Topics

Research in AI has focused mainly on key components of intelligence:

  • Learning – from repetitive learning to generalizing complex rules from data
  • Reasoning – drawing inferences appropriate to the situation
  • Problem solving – finding x when a set of data is given (often used in gaming)
  • Perception – scanning and interpreting the environment
  • Language understanding

The Future

Currently, AI is far from reaching the intelligence of a human being; it can, however, solve specific problems well. There are different predictions of when this will change, but it is safe to assume that it will not happen in the next few years.

What will happen is that AI will be used more and more in all sorts of business and private situations, changing the way we live over the next five years.