
Modeling Bottom-Up and Top-Down Visual Attention in Humans and Monkeys

Center for Language & Speech Processing (CLSP), JHU via YouTube

Overview

Explore computational models of visual attention through this seminar, which examines how both bottom-up and top-down attention processes work in humans and monkeys. Learn about the neurobiological foundations of visual attention, including how the brain automatically directs focus to salient visual features (bottom-up processing) and how cognitive goals and expectations influence what we attend to (top-down processing). Discover the computational approaches used to model these attention systems, including saliency maps and biologically inspired algorithms that predict where humans and primates will look in visual scenes. Examine experimental evidence from neuroscience research that validates these computational models, and understand how this work bridges computer vision, cognitive science, and neurobiology. Gain insights into applications of attention modeling in computer vision systems, robotics, and the understanding of visual disorders, while exploring the similarities and differences between human and non-human primate attention mechanisms.
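To make the bottom-up idea concrete, here is a minimal sketch of a center-surround saliency computation: a feature map (here, raw intensity) is blurred at a fine "center" scale and a coarse "surround" scale, and their absolute difference highlights locations that stand out from their neighborhood. This is only an illustrative toy, not the full Itti-Koch model discussed in the talk, which uses multi-scale Gaussian pyramids, separate color and orientation channels, and a nonlinear normalization operator; all function names and scale choices below are assumptions for the example.

```python
import numpy as np

def box_blur(img, radius):
    """Average each pixel over a (2*radius+1)^2 window (edge-padded)."""
    k = 2 * radius + 1
    padded = np.pad(img.astype(float), radius, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def center_surround_saliency(intensity, center_r=1, surround_r=7):
    """Toy bottom-up saliency: |center-scale blur - surround-scale blur|."""
    fine = box_blur(intensity, center_r)      # "center" response
    coarse = box_blur(intensity, surround_r)  # "surround" response
    sal = np.abs(fine - coarse)               # center-surround difference
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else sal  # normalize to [0, 1]

# A single bright dot on a dark background should dominate the saliency map.
img = np.zeros((32, 32))
img[10, 20] = 1.0
sal = center_surround_saliency(img)
```

In a fuller model, several such maps (intensity, color opponency, orientation) would be normalized and summed into one master saliency map, and gaze is predicted to land at its peaks; top-down attention can then be modeled as reweighting those feature channels according to the current task.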

Syllabus

Laurent Itti: Modeling Bottom-Up and Top-Down Visual Attention in Humans and Monkeys

Taught by

Center for Language & Speech Processing (CLSP), JHU

