Neural networks have become quite the buzzword in artificial intelligence these days. But what exactly are they and how do they work? In this post, I’ll provide a simple overview of neural networks and walk through a detailed example.

A machine learning algorithm

A neural network is a type of machine learning algorithm modelled after the human brain. The goal is to mimic how the brain processes information to learn and make decisions. The basic building block of a neural network is the neuron. Each neuron receives inputs, performs some simple computations, and produces outputs that are sent to other neurons.

Layers

These neurons are arranged in layers, with the first layer taking in raw input data. Each subsequent layer takes in the outputs from the previous layer, processes them, and passes the results on until the final layer spits out a prediction or classification. The connections between neurons are also assigned weights, which determine how much influence the input from one neuron has on the output of another.

How does a neural network learn?

So how does a neural network actually learn? Through a training process where it iteratively adjusts its weights to produce more accurate predictions. The network processes some training data, makes predictions, and then receives feedback on how far off its predictions were. It then tweaks the connection weights in order to improve its performance. This repetition allows the network to learn complex patterns and relationships within the data.
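As a minimal illustration of this "predict, measure the error, tweak the weight" loop (a toy example of my own, not tied to any particular library), here is a single weight being nudged towards the rule y = 3x:

```python
# Toy training data following the rule y = 3x.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0    # the connection weight, initially uninformed
lr = 0.05  # learning rate: how big each tweak is

for _ in range(100):                 # many passes over the training data
    for x, y_true in data:
        y_pred = w * x               # make a prediction
        error = y_pred - y_true      # feedback: how far off were we?
        w -= lr * error * x          # tweak the weight to reduce the error

print(round(w, 2))  # → 3.0: the weight has learned the pattern
```

A real network does this same thing simultaneously across thousands or millions of weights, but the principle is identical: each tweak moves a weight in the direction that shrinks the prediction error.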

An example

Let’s walk through a concrete example of a simple neural network for recognising handwritten digits. Our network will have an input layer to receive the image data, a hidden layer of neurons to process and extract features, and an output layer to classify the image into a digit 0-9.

The input layer contains 784 neurons, one for each pixel of the 28 × 28 image. Each neuron's activation is a value between 0 and 1 representing how dark its pixel is.

The hidden layer contains 100 neurons, each of which takes inputs from all 784 input neurons. Each hidden neuron multiplies each input by its connection weight, sums the results, applies an activation function, and outputs a value between 0 and 1.
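A single hidden neuron's computation can be sketched in a few lines. The sigmoid activation and the random weights here are my own illustrative assumptions; the post only says "an activation function", and a trained network would have learned, non-random weights:

```python
import math
import random

def neuron_output(inputs, weights, bias):
    """Weighted sum of all inputs, then a sigmoid activation -> value in (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes to (0, 1)

# Illustrative only: 784 random pixel darknesses into one hidden neuron.
random.seed(0)
pixels = [random.random() for _ in range(784)]
weights = [random.uniform(-0.1, 0.1) for _ in range(784)]

out = neuron_output(pixels, weights, bias=0.0)
print(out)  # some value strictly between 0 and 1
```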

Finally, the output layer contains 10 neurons to represent the 10 digit classes (0 to 9). Each output neuron receives inputs from all 100 hidden neurons and computes its activation value. The neuron with the highest activation represents the predicted digit.
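Putting the three layers together, the whole forward pass described above fits in a short sketch. Again, the sigmoid activation and random weights are assumptions for illustration, so the "prediction" here is meaningless until the weights are trained:

```python
import math
import random

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums followed by sigmoid activations."""
    return [
        1.0 / (1.0 + math.exp(-(sum(x * w for x, w in zip(inputs, ws)) + b)))
        for ws, b in zip(weights, biases)
    ]

random.seed(1)
image = [random.random() for _ in range(784)]  # 28x28 pixel darknesses

# Untrained (random) weights: 784 -> 100 hidden, 100 -> 10 output.
w_hidden = [[random.uniform(-0.05, 0.05) for _ in range(784)] for _ in range(100)]
b_hidden = [0.0] * 100
w_out = [[random.uniform(-0.05, 0.05) for _ in range(100)] for _ in range(10)]
b_out = [0.0] * 10

hidden = layer(image, w_hidden, b_hidden)  # 100 hidden activations
scores = layer(hidden, w_out, b_out)       # 10 output activations, one per digit
digit = scores.index(max(scores))          # highest activation = predicted digit
print(digit)  # an integer 0-9 (arbitrary here, since the weights are random)
```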

During training, the network will be fed thousands of labelled handwritten digit images. For each input image, it makes a prediction, compares it to the true label, and updates the weights across all connections to reduce the error. After many iterations, the network learns to accurately label handwritten digits based on the unique patterns and features it has detected.
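As a toy stand-in for that training process, the following sketch trains a single logistic neuron to separate "mostly dark" four-pixel images from light ones. The data, labels, and learning rate are invented for the example, and real digit training updates every layer via backpropagation rather than just one neuron, but the loop has the same shape: predict, compare to the label, update the weights:

```python
import math
import random

def predict(pixels, weights, bias):
    """Logistic neuron: weighted sum then sigmoid, giving a value in (0, 1)."""
    s = sum(p * w for p, w in zip(pixels, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Fake labelled data: 4-pixel "images", labelled 1 when mostly dark.
random.seed(2)
data = []
for _ in range(200):
    img = [random.random() for _ in range(4)]
    data.append((img, 1 if sum(img) / 4 > 0.5 else 0))

weights, bias, lr = [0.0] * 4, 0.0, 0.5
for _ in range(200):                      # many iterations, as described above
    for img, label in data:
        p = predict(img, weights, bias)   # make a prediction
        err = p - label                   # compare it to the true label
        for i in range(4):                # update weights to reduce the error
            weights[i] -= lr * err * img[i]
        bias -= lr * err

correct = sum((predict(img, weights, bias) > 0.5) == (label == 1)
              for img, label in data)
print(correct / len(data))  # high accuracy: the toy pattern has been learned
```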

And there you have it – a high-level intro to neural networks and how they employ interconnected layers, weight adjustments, and activations to learn complex relationships within data. Though simplified, this example illustrates how neural nets leverage inspiration from the brain to “think” more like humans!

No, it’s not like the human brain

While neural nets take inspiration from neuroscience, they are engineered systems focused on statistical learning rather than reproducing biological intelligence. Despite the name, no current machine learning technique comes close to emulating the complexity and capabilities of the human brain.

Here are some significant differences:

  • Scale – The human brain contains around 86 billion neurons. Even the largest artificial neural networks only have tens of millions of artificial neurons. Our networks are orders of magnitude simpler.
  • Connections – In addition to neurons, the brain has trillions of synaptic connections that modify signals between neurons. Neural nets have far fewer connections, and they don’t change dynamically like biological synapses.
  • Architecture – The brain has many specialised regions that each focus on certain tasks. Neural networks typically have just a few layers that aren’t nearly as specialised. Brains also have far more feedback connections, while most ANNs only feed information forward from lower to higher layers.
  • Learning – Humans learn dynamically through experience over decades. Neural nets require fixed training datasets and learn through many iterations of parameter tweaks. The learning capacity of biological neural systems is far greater.
  • General intelligence – Humans have innate abilities for generalization, reasoning, understanding contexts, etc. Neural nets are designed for narrow tasks and lack the broad cognitive abilities of biological brains.
  • Explainability – We can articulate logical reasoning behind human thinking and decisions. The inner workings of neural networks are complex and largely opaque.
  • Creativity – Humans have remarkably creative capacities. Neural networks excel at statistical analysis, pattern recognition and prediction within training data, but lack the ability to imagine, create and innovate like a person.

We have a long way to go before developing artificial general intelligence on the level of human cognition. Neural networks are impressive but limited tools for narrow tasks.

But, don’t fall for misconceptions

Individual neurons themselves do not actually “think” in the way that we typically conceive of human cognition and intelligence. The capabilities that arise from the human brain emerge from the incredibly complex interactions of billions of neurons, rather than the computations within singular cells.

Consider the following:

  • Neurons are specialised cells that transmit signals between different areas of the brain and nervous system. On their own, they simply receive inputs, integrate them, and produce spiking outputs.
  • It’s the patterns and connections between countless neurons that give rise to higher-order cognition and intelligence. No single neuron contains a full representation of knowledge, memories, behaviours, etc.
  • Conscious thought and reasoning happens at a higher level of brain activity, likely involving coordinated activation across many regions and neural networks. Individual neurons do not have measurable awareness or intentionality.
  • We cannot reduce human reasoning, creativity, emotion, and other complex mental faculties down to the cellular interactions within neurons. The whole is far greater than the sum of its parts.
  • While neural networks take inspiration from neuroscience, the capabilities they exhibit pale in comparison to human cognition. The deepest workings of the human mind remain largely mysterious.
  • There are trillions of connections between the brain’s neurons. The adaptive nature and complexity of these connections seem to facilitate the emergence of intelligence and consciousness.

Overall, it would be inaccurate to say neurons “think” in the same manner as humans. True intelligence and sentience likely emerges from the staggering complexity of the brain as a whole, not just the computations within its individual cells. We have to be cautious about anthropomorphizing the functions of single neurons. The creation of machine intelligence that rivals human cognition will involve far more than just modelling individual neurons. It’s an incredibly complex, systemic phenomenon that we are still struggling to understand.

There is still so much mystery surrounding the human mind

There is still so much mystery surrounding the human mind, consciousness, and the origins of human thought and reasoning. Even after centuries of study, there are many fundamentals we simply do not understand about our own cognition and sentience.

Consider the following examples:

  • The origins of human consciousness remain hotly debated in science and philosophy. We do not fully comprehend where self-awareness emerges from in the brain or how physical matter gives rise to subjective experience.
  • Our capacity for complex reasoning, creativity, language, abstract thought, etc. seem to be unique products of human evolution over hundreds of thousands of years. But pinpointing the evolutionary timeline and mechanisms behind these mental capabilities has proven extremely challenging.
  • Leading theories suggest our higher cognition arose gradually from increasing social cooperation and the iterative growth of brain size and complexity over eons of evolution. But the specifics of how innate intelligence blossomed remain unclear.
  • Looking at the human mind strictly through the lens of neurons, synapses, and neural networks does not seem capable of unravelling the profundities of consciousness. Some argue that phenomena like qualia (the subjective quality of experience), imagination, and free will require different explanatory frameworks.
  • Our comprehension of how thoughts and memories are structured, stored, and accessed in the brain remains primitive. These processes seem to involve staggeringly complex interactions between various regions of the brain, rather than discrete locations.
  • In essence, the workings of the human mind exceed the sum of our knowledge about neural mechanisms and computations. There are profound aspects of consciousness and cognition that we have yet to grasp.

Conclusion

The origins of human thought, reason, and sentience are deeply complex, rooted in aeons of gradual evolution, and not fully comprehended through today’s neuroscience. We have but scratched the surface of understanding our own minds. There are limits to reducing cognition down to neurons and systems. While an admirable pursuit, replicating human-level intelligence in machines may ultimately require unlocking mysteries we have yet to unravel even within ourselves.
