In today’s high-speed society, nearly everything in the digital world uses artificial intelligence in some way, shape, or form. From Alexa to the Roomba, having computer systems perform mundane, repeatable human tasks has become one of the most commonplace technological innovations of our time.

In the tech world, AI is also responsible for some of the most overused terms, often laced with convoluted explanations and definitions. According to the Oxford Dictionary, artificial intelligence is defined as, “The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” It’s the umbrella term for all of the ways a machine can simulate human intelligence, and it covers the following six sub-disciplines:

  1. Machine Learning
  2. Neural Networks
  3. Robotics
  4. Expert Systems
  5. Fuzzy Logic
  6. Natural Language Processing

1. Machine Learning

Machine learning is another buzzword, used whenever a new product gives computers the ability to learn without being explicitly programmed. You read that right – rather than following hand-written rules, a machine learns patterns from data on its own.

Actively used in daily life, machine learning is the science of building systems that analyze data to solve real-world problems. It extrapolates from raw data to give you a predictive analysis of what potentially “might be.”

Machine learning algorithms build a mathematical model from sample data, then use it to categorize and interpret new data. Eventually, after enough “learning,” they can make predictions and decisions on their own. Machine learning affords humans convenience and efficiency through applications like face recognition in your iPhone or Facebook photos, or even detecting fraud on your credit card.
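To make that learn-then-predict loop concrete, here is a minimal sketch using the scikit-learn library – our choice purely for illustration, since the examples above don’t name any specific tool:

```python
# A minimal sketch of "build a model from sample data, then predict."
# scikit-learn is our illustrative choice, not a tool named in this article.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # sample data: images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = LogisticRegression(max_iter=5000)  # the "learning" step: fit a model to examples
model.fit(X_train, y_train)

# After training, the model makes predictions on data it has never seen.
print("accuracy on new data:", model.score(X_test, y_test))
```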

Machine learning has also given us image and speech recognition and self-driving cars. It even makes our web searches more efficient. It’s safe to say that machine learning is appreciated, and likely even needed, by the everyday human.

2. Neural Networks

A neural network is precisely what it sounds like – just in the digital world. In the context of artificial intelligence, computer neural networks mimic the biological neural networks in our brains, “remembering” results and information in order to make better decisions in the future.

In our brains, when images, text, or video are received repeatedly, they remain in our memory longer. This recurrence, or repetition, allows us to recall them more easily in the future and make accurate decisions accordingly.

Similarly, a recurrent neural network processes images, text, or video and becomes more precise with each iteration. As it “remembers” each previous iteration, it uses that information to make a better, more precise decision later on. A great example of this is IBM Watson, one of the best-known artificial intelligence systems in the world. It took two years to train its neural network for medical practice: millions of pages of medical journals, medical records, and other documents were fed into the system for it to learn from. Now it can suggest a diagnosis and propose a treatment plan based on a patient’s complaints and medical history.

In neural networks, a set of algorithms finds elemental relationships across large collections of data through a process that imitates the human brain. This is used today in fraud detection, risk analysis, and even stock market predictions. Quantum Star Technologies uses it to determine whether data presented to the network is malicious or benign.
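For intuition, here is a from-scratch sketch in plain NumPy (our choice purely for illustration) of a tiny network learning the XOR pattern. Each training pass is one of the “iterations” described above – the network nudges its weights a little every time, which is how it “remembers”:

```python
# A toy neural network learning XOR. This is a teaching sketch only;
# real systems like the fraud-detection models mentioned above are far larger.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> 4 hidden "neurons"
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = sigmoid(X @ W1 + b1)                  # forward pass through the hidden layer
    out = sigmoid(h @ W2 + b2)                # network's current guesses
    grad_out = (out - y) * out * (1 - out)    # error signal at the output
    grad_h = (grad_out @ W2.T) * h * (1 - h)  # error pushed back to the hidden layer
    W2 -= 0.5 * (h.T @ grad_out); b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h);   b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out, 2))  # should be close to [[0], [1], [1], [0]] after training
```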

3. Robotics

Robotics is an interdisciplinary field that combines computer science with electrical and mechanical engineering. It covers the design, production, operation, and use of robots, which can be simple or quite complex.

For instance, the robots behind automobile manufacturing sit at the simpler end. Automated assembly lines and Industrial Control Systems (ICS) use mechanical and electrical engineering to program a machine to perform a physical, and usually mundane, task that a human could or would do. From Sophia the Robot down to the Roomba, these machines can do much more than just vacuum! This form of “intelligence” exists to take over the duties or tasks of the human body, and is often adopted in the name of efficiency.

4. Expert Systems

Expert systems is a term that refers to a computer system mimicking the decision-making intelligence of a human expert. Largely considered the first successful form of AI software in the 1970s and 1980s, expert systems are extremely responsive and rely on an accumulated knowledge base. For example, Grammarly is an “expert” system. This online grammar tool has access to an arsenal of linguistic and grammatical knowledge, or “data sets,” that has trained it to know right from wrong when it comes to grammatical language.

The intelligence behind expert systems relies on “if-then” rules. For example, IF your email contains a date and time in the body, THEN your computer may suggest a calendar event to record it (see the sketch below). These expert systems are truly experts – and the more you “train” them, the smarter they get!
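Here is a minimal sketch of that if-then idea in Python. The rules and the suggest helper are invented here purely for illustration; a real expert system encodes thousands of such rules in its knowledge base:

```python
# A toy rule engine: each rule is an IF (condition) paired with a THEN (action).
import re

RULES = [
    # IF the body mentions a time and a weekday, THEN suggest a calendar event.
    (lambda body: re.search(r"\b\d{1,2}:\d{2}\s?(am|pm)?\b", body, re.I)
                  and re.search(r"\b(mon|tue|wed|thu|fri|sat|sun)\w*\b", body, re.I),
     "Create a calendar event?"),
    # IF the body mentions an invoice, THEN suggest filing it.
    (lambda body: "invoice" in body.lower(),
     "File under expenses?"),
]

def suggest(body: str) -> list[str]:
    """Fire the action of every rule whose condition matches the email body."""
    return [action for condition, action in RULES if condition(body)]

print(suggest("Let's meet Tuesday at 3:00 pm to review the draft."))
# -> ['Create a calendar event?']
```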

5. Fuzzy Logic

In the real world, we sometimes cannot tell whether something is completely true or false. In the technological world, that middle ground is handled by “fuzzy logic.” Fuzzy logic is a technique for representing and manipulating uncertain information; it measures the degree to which a hypothesis is true. Since not everything is precisely black and white, fuzzy logic is used for reasoning about imprecise concepts. In other words, it makes an educated guess about the accuracy of something.

This sub-specialty of artificial intelligence is trained to determine the degree of truth exhibited by a concept. For instance, malware-detection technology like Starpoint may flag a file as malicious, but it will also provide a confidence rating – a degree of maliciousness expressed as a percentage. This indicates that the file is worth looking into, even though the verdict is not a 100% certainty. That’s fuzzy logic.
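Here is a minimal sketch of a fuzzy “degree of truth” in Python. The indicator counts and thresholds are invented for illustration and are not Starpoint’s actual scoring:

```python
# Fuzzy logic in miniature: instead of a hard malicious/benign verdict,
# a membership function returns a degree of truth between 0 and 1.
# The thresholds below are assumptions made purely for this example.
def maliciousness(suspicious_indicators: int) -> float:
    """Map a count of suspicious indicators to a degree of truth in [0, 1]."""
    low, high = 2, 10  # below 'low': clearly benign; above 'high': clearly malicious
    if suspicious_indicators <= low:
        return 0.0
    if suspicious_indicators >= high:
        return 1.0
    return (suspicious_indicators - low) / (high - low)  # the fuzzy middle ground

for n in (1, 4, 7, 12):
    print(f"{n} indicators -> {maliciousness(n):.0%} malicious")
# 1 -> 0%, 4 -> 25%, 7 -> 62%, 12 -> 100%
```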

6. Natural Language Processing (NLP)

Natural Language Processing is the branch of computer science and artificial intelligence that enables communication between computers and humans in natural language. It lets computers read and understand human language, spoken or written. Think “Siri” and “Alexa.”

Those are prime examples of NLP. Automated text responses, junk email filters (sketched below), and chatbots for customer support are other uses of Natural Language Processing. Each of these applications of AI is meant to enhance the user experience.
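To make one of those uses concrete, here is a toy keyword-based junk-mail filter in Python. Real filters learn their word weights from data; the hand-picked spam words and the two-hit threshold here are assumptions made purely for illustration:

```python
# A toy junk-mail check: tokenize the message and count spammy-looking words.
SPAM_WORDS = {"winner", "free", "prize", "urgent", "click"}

def looks_like_junk(message: str) -> bool:
    """Flag the message if at least two tokens match the spam-word list."""
    tokens = message.lower().split()
    hits = sum(token.strip(".,!?:") in SPAM_WORDS for token in tokens)
    return hits >= 2

print(looks_like_junk("URGENT: click now to claim your FREE prize!"))  # True
print(looks_like_junk("Lunch at noon tomorrow?"))                      # False
```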

It’s Everywhere, Every Day

Artificial intelligence is used every day by the average person, whether they know it or not. From speech recognition, to spell check, to apps that can classify an iPhone photo of a plant – computers today can recognize nearly anything, as long as they’ve been trained correctly.

And despite AI’s inevitable limitations, corporations only continue to market these buzzwords – all in the name of the customer experience. (Well played, corporations. Well played.) The truth is, AI does improve our experiences across all kinds of computer systems. It’s only getting more advanced, and it’s not going away anytime soon.

Perhaps ironically, the simulation of human intelligence may be the machine’s greatest gift to humans!