Modeling Bottom-Up and Top-Down Visual Attention in Humans and Monkeys - 2009
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore the interplay between bottom-up and top-down visual processing in a lecture by Dr. Laurent Itti of the University of Southern California. Delve into the mathematical principles and neuro-computational architectures underlying visual attentional selection in humans and monkeys, and discover how these models can be applied to real-world vision challenges using stimuli from television and video games. Learn about Dr. Itti's research on developing flexible models of visual attention that can be modulated by specific tasks, and gain insight into how the models' predictions compare with behavioral recordings from primates. Understand the importance of combining sensory signals from the environment with behavioral goals when processing complex natural environments. Examine the speaker's background in electrical engineering and in computation and neural systems, as well as his research and teaching experience in artificial intelligence, robotics, and biological vision.
Syllabus
Modeling Bottom-Up and Top-Down Visual Attention in Humans and Monkeys – Laurent Itti (USC) - 2009
Taught by
Center for Language & Speech Processing (CLSP), JHU