
NPTEL

Memory Device Technology for AI/ML Computing

NPTEL via Swayam

Overview

ABOUT THE COURSE:
A paradigm shift from logic-oriented, deterministic computing to data-driven, heuristic computing has ushered in the era of AI/ML. This shift has been catalyzed by advances in memory and storage technology, which have made 'big data' accessible for computing. In this course, we first take a deep dive into (a) the hierarchical memory-storage organization, (b) peripherals and subsystem architecture, and (c) the individual memory devices (SRAM, DRAM, NAND FLASH, and e-NVMs) that enable modern-day computing. Second, we examine why, despite these advances, current hardware systems are unable to meet the requirements of AI/ML-based computing. Finally, we see how these devices can be employed in architectures such as deep and spiking neural networks to support low-power AI/ML computing.

INTENDED AUDIENCE:
1. 3rd- and 4th-year UG (EE and ECE) students interested in semiconductor devices and VLSI design who have completed basic courses in semiconductor devices and circuits.
2. MTech and PhD students working in memory device technology, neuromorphic devices, or hardware for AI/ML.

PREREQUISITES: An introductory UG course in semiconductor devices and electrical circuits. Knowledge of analog circuits is desirable but not mandatory.

INDUSTRY SUPPORT: Micron Inc. (guest lectures have been delivered for this course), GlobalFoundries, Intel, TSMC. MTech students have given feedback that this course is useful for placements in semiconductor-based companies.

Syllabus

Week 1:
1.1 Motivation: Fundamental shift in the nature of computing
1.2 Memory-Centric Computing
1.3 The von Neumann Architecture
1.4 Memory Hierarchy – Concepts and Classifications
1.5 The Global Memory Market

Week 2:
2.1 Memory Array Architecture – 1
2.2 Memory Array Architecture – 2
2.3 Memory Peripheral Design
2.4 SRAM – Construction of the 6T-SRAM Cell
2.5 SRAM – HOLD Operation

Week 3:
3.1 SRAM – READ Operation
3.2 SRAM – WRITE Operation
3.3 SRAM – Stability and Noise Margins
3.4 SRAM – READ-WRITE Conflicts and Solutions
3.5 DRAM – Introduction and Place in the Memory Hierarchy (Main Memory)

Week 4:
4.1 DRAM – Subsystem Architecture (Inverted Pyramid)
4.2 DRAM – 1T-1C DRAM Cell – HOLD and REFRESH
4.3 DRAM – READ and WRITE Operations
4.4 DRAM – Scaling Challenges and Technology Roadmap
4.5 NAND FLASH – Introduction and Cell Architecture

Week 5:
5.1 NAND vs. NOR FLASH Array Architecture
5.2 PROGRAM Operation, ISPP
5.3 NAND FLASH – Scaling from 2D to 3D NAND
5.4 NAND FLASH – Reliability (Read and Program Disturb)
5.5 NAND FLASH – Data Retention and Endurance

Week 6:
6.1 eNVMs – Introduction to Storage-Class Memories
6.2 eNVMs – Two-Terminal Memories – Memristors
6.3 PCM – Introduction to Phase-Change Materials
6.4 PCM – Phase-Change Memory Design
6.5 PCM – Challenges in Commercialization and Scaling

Week 7:
7.1 RRAM – What and Why: Resistive RAMs
7.2 RRAM – VCRAM Operation
7.3 RRAM – CBRAM Operation
7.4 RRAM – Performance Metrics and Commercialization
7.5 MRAM – Fundamentals: GMR and TMR

Week 8:
8.1 MRAM – STT and SOT MRAM
8.2 FeRAM – Fundamentals: Ferroelectric Materials and Polarization Switching
8.3 FeRAM – 1T-1C FeRAM vs. FeFET
8.4 Other Emerging e-NVMs and Comparison
8.5 Review of Memory Devices for AI/ML Computing

Week 9:
9.1 AI/ML Heuristic Computing: A Historical Perspective
9.2 Neural Networks: Basic Architecture and Operation
9.3 Neural Networks: Scale of Resource and Power Demands
9.4 CMOS Scaling and the von Neumann Bottleneck
9.5 Neural Network Accelerators with von Neumann Architecture

Week 10:
10.1 Introduction to Neuromorphic Computing vs. von Neumann Architecture
10.2 Signal Transmission in Neurons and the Analog Circuit Model
10.3 HW Implementation with Neuromorphic Devices
10.4 Synaptic Function: Plasticity and Learning
10.5 HW Implementation of Synapses with Neuromorphic Devices

Week 11:
11.1 Artificial Neural Networks – Recap of Architecture
11.2 DNNs and Applications
11.3 CNNs – The Concept of the Convolution Operation
11.4 CNNs – Examples and Implementations
11.5 Accelerating CNNs through Neuromorphic Architectures

Week 12:
12.1 Introduction to Spiking Neural Networks (SNNs)
12.2 Supervised and Unsupervised Training of SNNs
12.3 Large-Scale HW Implementation of SNNs
12.4 Introduction to In-Memory Computing
12.5 Scalable Implementations of Analog In-Memory Computing
12.6 Conclusion and Course Review

Taught by

Prof. Shubhadeep Bhattacharjee

