
Artificial Neural Networks – Relationship with Machine Learning and Deep Learning

An artificial neural network is an attempt to simulate the network of neurons or interconnected cells that a human brain is comprised of, so that the computer will be able to learn things and make decisions in the same way as humans.


One of the best things to happen to technology is the emergence and evolution of Artificial Intelligence, also known as Machine or Adaptive Intelligence, defined as the intelligence displayed by machines, primarily computers. Though man created the machine, you can think of innumerable things machines do better than humans, such as scanning a billion records across multiple databases simultaneously and returning results in a few seconds, solving complex formulas and calculating exact outputs, or generating offer letters in bulk without a single error. However, it is the remarkable power of the human brain that makes our species so special among all natural creation. Whether you consider imagination, creativity, gratitude, inspiration, empathy, critical thinking or any other cognitive ability, the incredible brain is still way ahead.




In the coexistence of man and machine, one problem remains: a machine cannot help solve a problem that its human user does not fully understand but wants resolved. Even worse, standard algorithms work well with clean, structured data, but in real life the data is usually unstructured and messy. This conundrum paved the way for the popularity of Machine Learning, a powerful AI approach that allows computers to, as the name implies, learn from input data without having to be explicitly programmed. The computing system most often used for ML is a neural network. Drawing inspiration from the biology of the brain, it can learn on its own and transmits information between layers of so-called artificial neurons, making machines more human-like; hence it is better known as an Artificial Neural Network (ANN).

Deep learning is a subset of machine learning, and deep artificial neural networks are a set of algorithms that have set new records in accuracy for many important problems, such as image recognition, sound recognition, recommender systems and natural language processing.


History of the artificial neural network (ANN)


Context shapes perspective, an accepted fact. Human brains interpret the context of real-world situations in a way that machines cannot. Fascinated by the capability of the human brain, Marvin Minsky created the very first artificial neural network in 1951, while still a graduate student. The approach was limited at first, and even Minsky himself soon turned his focus to other approaches for creating intelligent machines. Later, in 1958, the psychologist Frank Rosenblatt built an artificial neural network named the Perceptron, which was intended to model how the human brain processes visual data and to learn to recognize objects. Since then, other researchers have used similar ANNs to study human perception and reasoning.


Thus, an artificial neural network is an attempt to simulate the network of neurons, or interconnected cells, that make up a human brain, so that a computer can learn things and make decisions in the same way humans do. ANNs are created by programming regular computers to behave as though they were interconnected brain cells; the neural networks produced this way are called artificial neural networks to distinguish them from the real neural networks found inside human brains.


What is an artificial neural network made of?


An archetypal artificial neural network is usually made up of anywhere from a few dozen to hundreds, thousands or even millions of artificial neurons, called units, arranged in a series of layers, each of which connects to the layers on either side. Some of these units, known as input units, are designed to receive various kinds of information from the outside world that the network will attempt to learn about, recognize or otherwise process. Units known as output units sit on the opposite side of the network and signal how it responds to the information it has learned. In between the input units and the output units are one or more layers of artificial neurons known as hidden units, which together form much of the artificial brain.



Most artificial neural networks are fully connected, which means each hidden unit and each output unit is connected to every unit in the layers on either side. The connection between any two units is represented by a number called a weight, which can be either positive (if one unit excites another) or negative (if one unit suppresses or impedes another). The higher the weight, the greater the influence one unit has on another. This arrangement simulates the way actual brain cells trigger one another across tiny gaps called synapses.
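To make this structure concrete, here is a minimal sketch in Python with NumPy. The layer sizes and random weights below are illustrative assumptions, not values from any particular network; the point is simply that a fully connected network can be represented as tables of weights between layers.

import numpy as np

# A tiny fully connected network: 5 input units, 3 hidden units, 2 output units.
# The sizes are arbitrary and chosen only for illustration.
n_input, n_hidden, n_output = 5, 3, 2

rng = np.random.default_rng(seed=42)

# Every connection between two units is a single number (a weight):
# positive weights excite the next unit, negative weights suppress it.
weights_input_to_hidden = rng.normal(size=(n_input, n_hidden))
weights_hidden_to_output = rng.normal(size=(n_hidden, n_output))

print(weights_input_to_hidden.shape)    # (5, 3): every input unit reaches every hidden unit
print(weights_hidden_to_output.shape)   # (3, 2): every hidden unit reaches every output unit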


How does an artificial neural network function?


Information travels through an artificial neural network in two situations: when the network is learning (being trained) and when it is operating normally after being trained. In both cases, patterns of information are fed into the network via the input units, which trigger the layers of hidden units, and these, in turn, reach the output units. This common design is called a feedforward network. Not all units fire all the time. Each unit receives inputs from the units to its left, and those inputs are multiplied by the weights of the connections they flow along. Every unit adds up all the inputs it receives in this way and, in the simplest type of network, if the sum is more than a certain threshold value, the unit "fires" and triggers the units it is connected to, i.e., those on its right.
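The sketch below shows this simplest kind of feedforward pass in Python with NumPy. The weights, inputs and the 0.5 threshold are made-up values for illustration only; each unit sums its weighted inputs and fires (outputs 1) when the sum exceeds the threshold.

import numpy as np

def feedforward(inputs, weights_in_hidden, weights_hidden_out, threshold=0.5):
    # Each hidden unit sums its weighted inputs and "fires" (outputs 1)
    # only if the sum exceeds the threshold.
    hidden_sums = inputs @ weights_in_hidden
    hidden_activations = (hidden_sums > threshold).astype(float)

    # Output units do the same with the hidden activations.
    output_sums = hidden_activations @ weights_hidden_out
    return (output_sums > threshold).astype(float)

inputs = np.array([1.0, 0.0, 1.0])           # pattern fed into 3 input units
w_in_hidden = np.array([[ 0.6, -0.4],
                        [ 0.3,  0.8],
                        [ 0.5,  0.2]])        # 3 input units -> 2 hidden units
w_hidden_out = np.array([[ 0.9],
                         [-0.7]])             # 2 hidden units -> 1 output unit

print(feedforward(inputs, w_in_hidden, w_hidden_out))   # [1.] for these made-up numbers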


For a neural network to learn, there must be an element of feedback involved, in much the same way humans learn by being told what they are doing right or wrong. In fact, everyone uses feedback all the time, in both professional and personal matters. Think about the first time you tried your hand at basketball: you probably needed several tries before the ball dropped into the basket. But every time you tried, you kept learning and improving your technique, bouncing the ball and taking a few jumps yourself before you attempted to throw again. You remembered what you had done wrong before, modified your movements accordingly, and hopefully threw the ball a bit better on every consecutive turn. So you used feedback to compare the outcome you wanted with what actually happened, figured out the difference between the two, and used that to alter what you did next time ("I need to throw it with a jump," "I need to be slightly lighter on my feet," "I need to loosen up my shoulders a bit more," and so on). The bigger the difference between the intended and actual outcome, the more radically you would have modified your moves.


Neural networks learn things the same way, typically by a feedback process called backpropagation (sometimes abbreviated as "backprop"). This involves comparing the output a network produces with the output it was meant to produce, and using the difference between them to modify the weights of the connections between the units in the network, working from the output units through the hidden units to the input units, i.e., going backward. In time, backpropagation causes the network to learn, reducing the difference between actual and intended output until the two closely coincide, at which point the network figures things out much as it should. When such networks are built with many layers of hidden units, the approach is known as deep learning, and it is what makes a network appear intelligent.
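Here is a bare-bones backpropagation sketch in Python with NumPy. It is illustrative only: a smooth sigmoid replaces the hard threshold used earlier so the error can be differentiated, the toy task (learning XOR of two binary inputs) is an assumption, and the learning rate and iteration count are arbitrary.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy training data (assumed for the example): 2 binary inputs, target is their XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
w1 = rng.normal(size=(2, 4))   # input -> hidden weights
w2 = rng.normal(size=(4, 1))   # hidden -> output weights
learning_rate = 0.5

for epoch in range(10000):
    # Forward pass: input -> hidden -> output.
    hidden = sigmoid(X @ w1)
    output = sigmoid(hidden @ w2)

    # Difference between actual and intended output.
    error = output - y

    # Backward pass: push the error from the output back through the weights.
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ w2.T) * hidden * (1 - hidden)

    w2 -= learning_rate * hidden.T @ grad_output
    w1 -= learning_rate * X.T @ grad_hidden

print(np.round(output, 2))   # typically ends up close to [[0], [1], [1], [0]]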


How does an artificial neural network function in practice?


Once you have trained the artificial neural network with enough learning examples, it reaches a point where you can present it with an entirely new set of inputs it has never seen before and examine how it responds. For example, suppose you've been teaching a network by showing it lots of pictures of cups and saucers, represented in some appropriate way it can understand, and telling it whether each one is a cup or a saucer. After showing it, let's say, 30 different cups and 30 different saucers, you feed it a picture of some new design it's not encountered before—let's say a tray—and see what happens. Depending upon how you've trained it, the network will attempt to categorize the new example as either a cup or a saucer, generalizing based on its experience—just like a human. See, you have taught a computer how to recognize crockery!


That doesn't mean a neural network can just "look" at pieces of crockery and instantly respond to them in meaningful ways; it isn't behaving like a person. In the example just described, the network is not actually looking at pieces of crockery. The inputs to a network are essentially binary numbers: each input unit is either switched on or switched off. So, if you had five input units, you could feed in information about five different characteristics of different cups using binary (yes/no) answers. A typical cup or saucer would then be presented as a pattern of 1s (yes) and 0s (no), one for each answer. During the learning phase, the network is simply looking at lots of numbers and learning that some of them mean cup and others mean saucer.
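In code, that encoding step might look like the following sketch. The five yes/no questions are hypothetical, invented here purely to show how a cup or a saucer becomes a row of 1s and 0s before it ever reaches the input units.

# How a cup or a saucer might be encoded for five input units.
# These questions are assumptions made up for illustration.
questions = [
    "Does it have a handle?",
    "Is it taller than it is wide?",
    "Can it hold liquid on its own?",
    "Is it flat?",
    "Does another piece usually rest on it?",
]

cup    = [1, 1, 1, 0, 0]   # yes, yes, yes, no, no
saucer = [0, 0, 0, 1, 1]   # no, no, no, yes, yes

# During learning, the network only ever sees these 0/1 patterns,
# never the crockery itself.
print(dict(zip(questions, cup)))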


What are artificial neural networks used for?


There are innumerable ways artificial neural networks can be deployed, including categorizing information, predicting outcomes and clustering data. As a network processes and learns from data, it can classify a given data set into predefined classes, it can be trained to predict the outputs expected from a given input, and it can identify a distinctive feature of the data and then group the data by that feature.
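As a small, hedged illustration of the classification case, the sketch below uses scikit-learn's MLPClassifier, just one of many libraries that could be used and not one mentioned in this article; the tiny cup-and-saucer data set is entirely made up to match the earlier example.

from sklearn.neural_network import MLPClassifier

# Classify made-up 0/1 patterns into two predefined classes
# with an off-the-shelf feedforward network.
X_train = [[1, 1, 1, 0, 0],   # cup-like patterns
           [1, 1, 0, 0, 0],
           [0, 0, 0, 1, 1],   # saucer-like patterns
           [0, 1, 0, 1, 1]]
y_train = ["cup", "cup", "saucer", "saucer"]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

print(clf.predict([[1, 0, 1, 0, 0]]))   # a new, unseen pattern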


Many of the things everyone does daily involve recognizing patterns and using them to make decisions, so neural networks can help in numerous ways. They can help forecast the stock market or the weather, operate radar scanning systems that automatically identify enemy aircraft or ships, and even help doctors diagnose complex diseases based on symptoms. There might be neural networks running inside your computer or your smartphone right this moment. If you use smartphone apps that recognize your handwriting on a touchscreen, they might be using a simple neural network to figure out which characters you are writing by looking for distinct features in the marks you make with your fingers, and the order in which you make them. Some types of voice recognition software also use neural networks, as do some of the email programs that automatically differentiate between genuine emails and spam. Neural networks have even proved effective at translating text from one language to another.


For example, Google's automatic translation has made increasing use of this technology over the last few years to convert words in one language (the network's input) into the equivalent words in another language (the network's output). In 2016, Google announced it was using something it called Neural Machine Translation (NMT) to convert entire sentences at once, with a 55–85 percent reduction in errors. Did you know that Google uses a 30-layer neural network to power Google Photos, as well as its "watch next" recommendations for YouTube videos? Facebook uses artificial neural networks for its DeepFace algorithm, which can recognize specific faces with 97 percent accuracy, and it is also an ANN that powers Skype's ability to translate in real time.


Artificial neural networks are enabling computers to perceive the world around them in a very human-like fashion. If at times you wish your brain were as reliable as your smart device, think again, and be thankful you already have such a powerful neural network installed in your… You know it all, right?


For more technology insights, follow me  @Asamanyakm