Deep Learning (DL)

The Deep Learning revolution began around 2010.

Since then, Deep Learning has solved many "impossible" problems.

The Deep Learning revolution didn't begin with a single discovery. It happened when several essential elements were in place:

  1. Computers became fast enough.
  2. Computer memory became large enough.
  3. Better training methods were invented.
  4. Better tuning methods were invented.

Neuron

Scientists estimate that the human brain contains between 80 and 100 billion neurons.


These neurons have hundreds of billions of connections with each other.


Neurons (also known as nerve cells) are the basic units of our brain and nervous system.

Neurons are responsible for receiving input from the outside world, sending output (commands to our muscles), and converting electrical signals in between.

Artificial neural networks

Artificial neural networks are often referred to as Neural Networks (NNs).

Artificial neural networks are essentially multilayer perceptrons.

The perceptron was the first step toward multilayer artificial neural networks.
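The idea behind a perceptron can be sketched in a few lines: take a weighted sum of the inputs and "fire" if it exceeds a threshold. The weights and threshold below are illustrative values chosen by hand, not learned from data:

```python
def perceptron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs exceeds the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# With these hand-picked values, two binary inputs behave like a logical AND:
print(perceptron([1, 1], [0.5, 0.5], 0.9))  # 1.0 > 0.9 -> 1
print(perceptron([1, 0], [0.5, 0.5], 0.9))  # 0.5 <= 0.9 -> 0
```

Training a perceptron means finding weights and a threshold automatically from examples instead of picking them by hand.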

Artificial neural networks are at the core of Deep Learning.

Artificial neural networks are one of the most important inventions in history.

Artificial neural networks can solve problems that hand-written algorithms cannot:


  1. Medical diagnosis
  2. Face detection
  3. Speech recognition

Artificial neural network model

The input data (yellow) is processed through a hidden layer (blue) and modified through another hidden layer (green) to produce the final output (red).
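The flow described above can be sketched as a forward pass through two hidden layers. This is a minimal illustration with random (untrained) weights and assumed layer sizes, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    """A common activation function: max(0, z) applied element-wise."""
    return np.maximum(0.0, z)

# Assumed sizes: 3 inputs (yellow) -> 4 hidden (blue) -> 4 hidden (green) -> 1 output (red).
W1 = rng.standard_normal((3, 4))
W2 = rng.standard_normal((4, 4))
W3 = rng.standard_normal((4, 1))

def forward(x):
    h1 = relu(x @ W1)   # first hidden layer (blue)
    h2 = relu(h1 @ W2)  # second hidden layer (green)
    return h2 @ W3      # final output (red)

x = np.array([0.2, -0.5, 1.0])
print(forward(x))  # a single output value (meaningless until the weights are trained)
```

Training replaces the random weight matrices with values that make the output useful.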


Tom Mitchell

Tom Michael Mitchell (born 1951) is an American computer scientist and professor at Carnegie Mellon University (CMU), where he founded and formerly chaired the Machine Learning Department.

"A computer program is said to learn from experience E for a class of tasks T and a measure of performance P, if its performance in tasks belonging to T, as measured by P, improves with experience E."

Tom Mitchell (1999)

  1. E: experience (e.g. the number of times you have driven).
  2. T: the task (e.g. driving).
  3. P: performance (how well you drive).

The story of Giraffe

In 2015, Matthew Lai, a student at Imperial College London, created an artificial neural network called Giraffe.

Giraffe could be trained in 72 hours to play chess at the level of a FIDE International Master.

Computers that play chess aren't new, but the way these programs are created is novel.

Traditional chess programs take years of hand-tuning to build, while Giraffe was trained in 72 hours using an artificial neural network.


Deep Learning

Classical programming uses programs (algorithms) to produce results:

Traditional computers

Data + Computer algorithm = Result

Machine learning uses the results to create programs (algorithms):

Machine Learning

Data + Results = Computer Algorithm
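The two formulas can be contrasted in code. Below, a hand-written algorithm converts Celsius to Fahrenheit directly, while the "machine learning" version recovers the same rule (slope and intercept) from example data. This is a sketch using least squares with NumPy; the temperature example is chosen only for illustration:

```python
import numpy as np

# Classical programming: Data + Algorithm = Result
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32  # the rule is written by hand

# Machine learning: Data + Results = Algorithm
celsius = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
fahrenheit = celsius_to_fahrenheit(celsius)  # known input/output pairs

# Fit slope and intercept from the examples (least squares).
A = np.vstack([celsius, np.ones_like(celsius)]).T
slope, intercept = np.linalg.lstsq(A, fahrenheit, rcond=None)[0]

print(round(slope, 2), round(intercept, 2))  # recovers 1.8 and 32.0
```

The program never sees the 9/5 + 32 rule; it reconstructs it from the data and the results alone.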

Machine Learning

Machine learning is often considered the equivalent of artificial intelligence.

This is incorrect. Machine Learning is a subfield of Artificial Intelligence.

Machine learning is a field of artificial intelligence that uses data to teach machines.

"Machine learning is a field of study that gives computers the ability to learn without programming."

Arthur Samuel (1959)

The formula for smart decision-making

  1. Store the results of all actions.
  2. Simulate all possible outcomes.
  3. Compare each new action with the old ones.
  4. Check whether the new action is good or bad.
  5. Choose the new action if it is less bad.
  6. Repeat the whole process.

The fact that computers can repeat this process millions of times is what allows them to make very intelligent decisions.
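The loop above can be sketched in a few lines. Everything here is illustrative: the actions are integers, and the scoring function (lower is "less bad") is made up for the example:

```python
import random

random.seed(42)

def score(action):
    """Hypothetical performance measure: lower is better (less bad)."""
    return (action - 7) ** 2

history = {}     # 1. store the results of all actions
best_action = 0
for _ in range(1000):                  # 6. repeat the whole process many times
    candidate = random.randint(0, 20)  # 2. simulate a possible action
    history[candidate] = score(candidate)
    # 3-5. compare the new action with the best old one; keep it if less bad
    if history[candidate] < score(best_action):
        best_action = candidate

print(best_action)  # settles on the lowest-scoring action
```

Real systems use far richer actions and scoring functions, but the shape of the loop is the same: try, measure, compare, keep the better choice, repeat.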

Jessica Tanner
Updated 11 March 2026