As artificial intelligence becomes more sophisticated, much of the public attention has focused on how successfully these technologies can compete against humans at chess and other strategy games.
A philosopher from the University of Houston has taken a different approach, deconstructing the complex neural networks used in machine learning to shed light on how humans process abstract learning.
“As we rely more and more on these systems, it is important to know how they work and why,” said Cameron Buckner, assistant professor of philosophy and author of a paper exploring the topic published in the journal Synthese. Better understanding how the systems work, in turn, led him to insights into the nature of human learning.
Philosophers have debated the origins of human knowledge since the days of Plato—is it innate, based on logic, or does knowledge come from sensory experience in the world?
Deep Convolutional Neural Networks, or DCNNs, suggest that human knowledge stems from experience, a school of thought known as empiricism, Buckner concluded. These networks, multi-layered artificial neural networks whose nodes replicate how neurons process and pass along information in the brain, demonstrate how abstract knowledge is acquired, he said, making them a useful tool for fields including neuroscience and psychology.
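To make that architecture concrete, the following is a minimal sketch of a DCNN written in PyTorch. It is purely illustrative: the layer sizes, input resolution, and class count are assumptions for the example, not details taken from Buckner's paper.

```python
import torch
import torch.nn as nn

class TinyDCNN(nn.Module):
    """A minimal deep convolutional neural network (illustrative only).

    Stacked convolution and pooling layers progressively abstract away
    from raw pixel values, which is the kind of experience-driven
    abstraction the article associates with empiricist accounts of learning.
    """
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # low-level edge/texture detectors
            nn.ReLU(),
            nn.MaxPool2d(2),                             # discard exact positions, keep "what"
            nn.Conv2d(16, 32, kernel_size=3, padding=1), # mid-level part detectors
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # assumes 32x32 RGB inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage: classify a batch of four 32x32 RGB images (random data for illustration)
model = TinyDCNN()
logits = model(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```

The point of the sketch is only to show the layered structure the article refers to: each successive layer operates on the output of the one below it, so later layers respond to increasingly abstract features of the input.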
In the paper, Buckner notes that the success of these networks at complex tasks involving perception and discrimination has at times outpaced the ability of scientists to understand how they work.
While some scientists who build neural network systems have referenced the thinking of British philosopher John Locke and other influential theorists, their focus has been on results rather than on understanding how the networks intersect with traditional philosophical accounts of human cognition. Buckner set out to fill that void.