AI Learns to Pay Covert Attention

By Sonia Fernandez, UC Santa Barbara

Researchers at UC Santa Barbara have shown that covert attention — once thought to be the exclusive domain of primates — may be more of an emergent form of intelligence than one tied to a particularly evolved brain architecture. Using a type of artificial intelligence model called a feedforward convolutional neural network (CNN), they demonstrated that a relatively simple brain analog can perform covert attention tasks with “human-like” performance.

“To some extent, people thought this attention business was something of humans, of primates, and that was it,” said Miguel Eckstein, a professor of psychological and brain sciences. “Some people thought it was actually even related to awareness and consciousness.

“But as years have gone by, the behavioral signatures of covert attention have been shown in animals, such as crows, rodents, archer fish, bees, even flies,” he continued. “So that motivated us to think that there must be something simpler that gives rise to these effects. And that was the starting point of our paper.”

Eckstein, together with computer scientist William Wang and lead author Sudhanshu Srivastava, a graduate student researcher, published the work in the journal Current Biology.

Highly efficient information processing

We do it in social situations; we do it when we’re alone. We do it when we’re driving, or playing video games, or chatting at a party. It’s covert attention, the act of moving one’s attention around a visual scene without moving one’s eyes. It’s an efficient way to quickly gain information from multiple locations simultaneously, as opposed to focusing all attention on one spot at a time.

“When you’re driving, perhaps you’re looking at the periphery and you are processing things,” Eckstein said. “And sometimes in some social situations, you do it without trying to because you don’t want to reveal that you’re actually attending, because by seeing you moving your eyes, people make inferences. There are also strong suggestions that these covert attention mechanisms are also important before you make an actual eye movement and focus your attention in that direction.”

These attention mechanisms, which are typically deployed when one is searching for something or trying to discriminate between things, were once thought to be a uniquely primate evolutionary development, but have since been shown in animals that lack the primate brain structure — the neocortex — that is associated with covert attention. However, because the ways we process information — and in particular how we optimize our attention for accuracy — are difficult to map to neurons, theories of covert attention have existed solely in the realm of verbal hypotheses.

“But you don’t necessarily have to hypothesize these psychological concepts,” Eckstein said. Using a simple 200,000-neuron convolutional neural network (primate brains have billions of neurons), the researchers put it through signature covert attention paradigms. These include Posner cuing (the ability to shift attention toward a cued location), set size effects (the effect of distractors on the time needed to locate a target) and contextual cuing (targets in repeated search displays are found more quickly).

“The only thing we do to the network is we give it the images and we train it to try to detect the target,” Eckstein explained. There was little, if anything, done to prepare the CNN for the tasks — no feedback connections or explicit incorporation of an intelligence mechanism, and no concept of limited resources (i.e., attention) to bias the task. All cues appeared to the neural network at the time the images were presented, “with no prior knowledge of cues or contexts.” The CNN was left to “decide” how to prioritize the information it was given.
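To make that setup concrete, here is a minimal sketch (in PyTorch) of the kind of experiment described above: a plain feedforward CNN trained only to report whether a target is present in noisy displays that also contain a predictive cue, with no attention mechanism built in. The display generator, image size, network depth and cue validity are illustrative assumptions, not the study’s actual stimuli or architecture.

```python
# Sketch only (not the authors' code): a feedforward CNN trained on
# target detection in cued, noisy displays. Any cue-driven, attention-like
# behavior has to emerge from backpropagation on detection errors alone.
import torch
import torch.nn as nn

def make_display(batch=64, size=32, cue_validity=0.8, noise=0.5):
    """Synthetic Posner-style displays: a bright cue pixel marks the likely
    target location; the target is a faint bright patch embedded in noise."""
    imgs = noise * torch.randn(batch, 1, size, size)
    labels = torch.randint(0, 2, (batch,))                 # 1 = target present
    for i in range(batch):
        loc = (8, 8) if torch.rand(1) < 0.5 else (8, 24)   # left or right field
        cued = loc if torch.rand(1) < cue_validity else ((8, 24) if loc == (8, 8) else (8, 8))
        imgs[i, 0, 2, cued[1]] = 3.0                        # cue shown with the image
        if labels[i] == 1:
            imgs[i, 0, loc[0]:loc[0]+3, loc[1]:loc[1]+3] += 1.0  # faint target patch
    return imgs, labels

cnn = nn.Sequential(                                        # plain feedforward CNN
    nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(),
    nn.Conv2d(8, 8, 5, padding=2), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 2),                # "present" vs "absent"
)
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):                                     # train on detection error only
    x, y = make_display()
    loss = loss_fn(cnn(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```

Nothing in this sketch tells the network that the cue matters; if trained long enough on displays where the cue predicts the target location, any use of the cue to weight evidence is an emergent consequence of minimizing detection errors.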

“Of course, neural networks don’t reason,” Eckstein said. “This is all an emergent process.” With each iteration, he explained, the CNN automatically adjusts the weight, or importance, of the information based on its rates of error, a process known as backpropagation. Compared to a model called the Bayesian ideal observer (BIO), which is supplied with all statistical information about cue and context and thus attains the “highest perceptual accuracy, and has historically served as a mathematically elegant benchmark of human vision,” the neural network’s cuing and context performance was “comparable,” even though it received none of that information.
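For comparison, the textbook Bayesian ideal observer for a cued yes/no detection task weights the noisy evidence at each location by the cue-defined prior before deciding. The sketch below shows that general form; it is an assumption about the standard formulation, not necessarily the paper’s exact model, and the parameters (cue validity, target strength d_prime) are illustrative.

```python
# Sketch of a textbook Bayesian ideal observer for two-location, yes/no
# detection with a predictive cue: reports "present" when the posterior
# odds of "target present" exceed 1.
import numpy as np
from scipy.stats import norm

def ideal_observer(responses, cued_loc, cue_validity=0.8, d_prime=1.0, p_present=0.5):
    """responses: noisy scalar measurements at each location (mean d_prime
    where the target is, 0 elsewhere, unit-variance Gaussian noise)."""
    n = len(responses)
    # prior probability of each location holding the target, given it is present
    prior = np.full(n, (1 - cue_validity) / (n - 1))
    prior[cued_loc] = cue_validity
    # per-location likelihood ratio: target there vs. noise only
    lr = norm.pdf(responses, loc=d_prime, scale=1.0) / norm.pdf(responses, loc=0.0, scale=1.0)
    # posterior odds of "target present somewhere" vs. "target absent"
    odds = (p_present / (1 - p_present)) * np.sum(prior * lr)
    return odds > 1.0  # optimal yes/no decision

# Example: strong evidence at the cued location -> "present"
print(ideal_observer(np.array([1.4, -0.2]), cued_loc=0))
```

Because the ideal observer is handed the cue validity and noise statistics explicitly, it sets the upper bound on accuracy; the finding is that the trained CNN approaches that bound without being given any of those quantities.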

“The Bayesian ideal observer is a really very beautiful and thorough theory, but it can only be applied to simple tasks that we actually create in the lab,” Eckstein said. “It cannot be applied to real-world images.” Additionally, he said, neural networks can be better mapped to brains and applied to richer, real-world data.

The researchers’ proof-of-concept opens up exciting new avenues in the realm of brain sciences, hinting at potential connections and brain architectures that have yet to be uncovered, and, importantly, connecting verbal hypotheses to laboratory and real-world outcomes. Eckstein and Wang are deeply interested in the interface of human and machine intelligence, heading the Mellichamp Mind & Machine Initiative to bring together people working at the intersection of AI and the study of the mind.

“Down the road the idea is that this could provide a new, interesting framework to understand overt attention,” Eckstein said, referring to the eye movements that indicate more obvious shifts of attention and focus, and are of particular interest to those studying conditions such as schizophrenia and autism. “This is perhaps the first step in trying to get a much more intricate computational understanding and framework to understand attention.”
