Modeling Bottom-Up and Top-Down Visual Attention in Humans and Monkeys
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore computational models of visual attention mechanisms through this seminar that examines how both bottom-up and top-down attention processes work in humans and monkeys. Learn about the neurobiological foundations of visual attention, including how the brain automatically directs focus to salient visual features (bottom-up processing) and how cognitive goals and expectations influence what we attend to (top-down processing). Discover the computational approaches used to model these attention systems, including saliency maps and biologically-inspired algorithms that predict where humans and primates will look in visual scenes. Examine experimental evidence from neuroscience research that validates these computational models and understand how this work bridges computer vision, cognitive science, and neurobiology. Gain insights into applications of attention modeling in computer vision systems, robotics, and understanding visual disorders, while exploring the similarities and differences between human and non-human primate attention mechanisms.
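To make the bottom-up side of the overview concrete, here is a minimal sketch of a saliency-map computation in the spirit of biologically-inspired models such as Itti and Koch's: a center-surround contrast on image intensity, where the "surround" is a smoothed version of the image and the location of peak contrast is the predicted first fixation. This is an illustrative simplification, not the seminar's actual model; the function names and the simple diffusion-based blur are assumptions for the sketch, and a full model would also pool color and orientation channels across multiple scales.

```python
import numpy as np

def blur(img, n_iters):
    # Crude isotropic smoothing: repeatedly average each pixel with its
    # 4 neighbors (edges wrap). Stands in for a Gaussian surround filter.
    out = img.astype(float)
    for _ in range(n_iters):
        out = (out
               + np.roll(out, 1, axis=0) + np.roll(out, -1, axis=0)
               + np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1)) / 5.0
    return out

def saliency(img, surround_iters=8):
    # Center-surround contrast: how much each pixel differs from its
    # smoothed neighborhood. Normalized to [0, 1].
    contrast = np.abs(img.astype(float) - blur(img, surround_iters))
    return contrast / contrast.max()

# Demo: a single bright spot on a dark background is maximally salient,
# so the model predicts the first fixation lands on it.
img = np.zeros((32, 32))
img[20, 10] = 1.0  # hypothetical "odd one out" feature
sal_map = saliency(img)
fixation = np.unravel_index(sal_map.argmax(), sal_map.shape)
```

A top-down extension would reweight or gate this map with task-dependent gains (e.g., boosting feature channels that match a search target) before taking the argmax.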
Syllabus
Laurent Itti: Modeling Bottom-Up and Top-Down Visual Attention in Humans and Monkeys
Taught by
Center for Language & Speech Processing (CLSP), JHU