read more – copyright by techxplore.com

A philosopher from the University of Houston has taken a different approach, deconstructing the complex neural networks used in machine learning to shed light on how humans process abstract learning.

“As we rely more and more on these systems, it is important to know how they work and why,” said Cameron Buckner, assistant professor of philosophy and author of a paper exploring the topic published in the journal Synthese. Better understanding how the systems work, in turn, led him to insights into the nature of human learning.

Philosophers have debated the origins of human knowledge since the days of Plato—is it innate, based on logic, or does knowledge come from sensory experience in the world?

Deep Convolutional Neural Networks, or DCNNs, suggest human knowledge stems from experience, a school of thought known as empiricism, Buckner concluded. These neural networks—multi-layered artificial neural networks, with nodes replicating how neurons process and pass along information in the brain—demonstrate how abstract knowledge is acquired, he said, making the networks a useful tool for fields including neuroscience and psychology.
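The layered processing described above can be illustrated with a minimal numpy sketch (an illustration only, not Buckner's analysis or any particular library's implementation): a convolutional filter responds to a low-level perceptual property such as an edge, and a pooling step summarizes that response over a neighborhood, making the detection somewhat tolerant of position—the kind of stage-by-stage abstraction DCNNs stack many times over.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as in most DCNN frameworks)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Nonlinearity: keep only positive responses."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by taking the max in each size x size block."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

# A hand-chosen vertical-edge detector: one "low-level perceptual property"
kernel = np.array([[1., -1.],
                   [1., -1.]])

# Toy 6x6 image: bright left half, dark right half -> vertical edge in the middle
image = np.zeros((6, 6))
image[:, :3] = 1.0

feature_map = relu(conv2d(image, kernel))  # strong response along the edge column
pooled = max_pool(feature_map)             # coarser, more position-tolerant summary
```

In a trained DCNN the kernels are learned rather than hand-picked, and dozens of such convolution-and-pool stages are stacked, so later layers respond to increasingly abstract combinations of the earlier detections.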

In the paper, Buckner notes that the success of these networks at complex tasks involving perception and discrimination has at times outpaced the ability of scientists to understand how they work.

While some scientists who build neural network systems have referenced the thinking of British philosopher John Locke and other influential theorists, their focus has been on results rather than understanding how the networks intersect with traditional philosophical accounts of human cognition. Buckner set out to fill that void, considering the use of AI for abstract reasoning, ranging from strategy games to visual recognition of chairs, artwork and animals, tasks that are surprisingly complex considering the many potential variations in vantage point, color, style and other detail.



“Computer vision and machine learning researchers have recently noted that triangle, chair, cat, and other everyday categories are so difficult to recognize because they can be encountered in a variety of different poses or orientations that are not mutually similar in terms of their low-level perceptual properties,” Buckner wrote. “… a chair seen from the front does not look much like the same chair seen from behind or above; we must somehow unify all these diverse perspectives to build a reliable chair-detector.” […]
