Modeling Bottom-Up and Top-Down Visual Attention in Humans and Monkeys
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore computational models of visual attention through this seminar, which examines how bottom-up and top-down attention processes operate in humans and monkeys. Learn about the neurobiological foundations of visual attention: how the brain automatically directs focus to salient visual features (bottom-up processing) and how cognitive goals and expectations shape what we attend to (top-down processing). Discover the computational approaches used to model these attention systems, including saliency maps and biologically inspired algorithms that predict where humans and primates will look in visual scenes. Examine experimental evidence from neuroscience research that validates these computational models, and see how this work bridges computer vision, cognitive science, and neurobiology. Gain insight into applications of attention modeling in computer vision systems, robotics, and the study of visual disorders, while exploring the similarities and differences between human and non-human primate attention mechanisms.
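To make the saliency-map idea concrete, below is a minimal, hedged sketch of bottom-up saliency in the spirit of center-surround models: it computes contrast between each pixel and blurred "surround" versions of the image at a few scales, then sums and normalizes the result. This is a toy illustration only, not the actual multi-channel model presented in the seminar (which also uses color and orientation features and across-scale normalization); all function names here are invented for the example.

```python
import numpy as np

def box_blur(img, passes=2):
    """Repeated 3x3 box blur, a crude stand-in for Gaussian smoothing."""
    out = img.astype(float)
    for _ in range(passes):
        p = np.pad(out, 1, mode="edge")
        out = (
            p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:]
            + p[1:-1, :-2] + p[1:-1, 1:-1] + p[1:-1, 2:]
            + p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:]
        ) / 9.0
    return out

def saliency_map(image, scales=(1, 2, 4)):
    """Toy bottom-up saliency: center-surround contrast on the intensity
    channel at several blur scales, summed and scaled to [0, 1]."""
    center = image.astype(float)
    sal = np.zeros_like(center)
    for s in scales:
        surround = box_blur(center, passes=s)
        sal += np.abs(center - surround)  # center-surround difference
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else sal

# A dark field with one bright blob: the most salient locations should
# fall on or near the blob, where local contrast is highest.
img = np.zeros((32, 32))
img[10:14, 20:24] = 1.0
sal = saliency_map(img)
y, x = np.unravel_index(np.argmax(sal), sal.shape)
```

In full models of this kind, such a map is computed per feature channel, the channels are combined into a single master saliency map, and a winner-take-all mechanism with inhibition of return generates a sequence of predicted fixations.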
Syllabus
Laurent Itti: Modeling Bottom-Up and Top-Down Visual Attention in Humans and Monkeys
Taught by
Center for Language & Speech Processing (CLSP), JHU