Inspired by the brain's biological neurons, artificial neural networks consist of large groups of "neurons" or nodes, linked by "synapses" or weighted connections, and are trained to carry out specific tasks. These networks process information using the entire network structure rather than any single component. The idea stemmed from an early curiosity about how the brain operates.
In the 1940s, researchers began exploring the mathematics behind the brain’s intricate system of neurons and synapses.
A key development came from psychology: the psychologist Donald Hebb proposed that learning happens when the connections between neurons are strengthened as they are active together. Later, researchers tried to replicate how the brain functions by creating artificial neural networks as computer simulations.
In these networks, neurons are represented by nodes with varying values, and synapses are modeled as connections between nodes that can be strengthened or weakened. Hebb's hypothesis remains a fundamental rule for training these networks.
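As a rough illustration of Hebb's principle in this setting, the sketch below applies a simple Hebbian update to a small weight matrix, strengthening the connection between nodes that are active at the same time. The network size, learning rate, and activity pattern are arbitrary choices for illustration, not details from the text.

```python
import numpy as np

# Minimal sketch of a Hebbian weight update: connections between
# simultaneously active nodes are strengthened ("fire together, wire together").
# All numbers here are illustrative assumptions.

n_nodes = 4                                # number of nodes ("neurons")
weights = np.zeros((n_nodes, n_nodes))     # connection strengths ("synapses")
learning_rate = 0.1

def hebbian_update(weights, activations, lr=learning_rate):
    """Strengthen the connection between every pair of co-active nodes."""
    # Outer product is large only where both nodes are active at once.
    delta = lr * np.outer(activations, activations)
    np.fill_diagonal(delta, 0.0)           # no self-connections
    return weights + delta

# Present one pattern of node activity and update the weights.
pattern = np.array([1.0, 0.0, 1.0, 0.0])
weights = hebbian_update(weights, pattern)
print(weights)
```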
By the late 1960s, theoretical setbacks led many to doubt the usefulness of such networks, but interest revived in the 1980s through several important contributions, including breakthroughs by John Hopfield and Geoffrey Hinton.