Transformers Meet Directed Graphs - Exploring Direction-Aware Positional Encodings
Valence Labs via YouTube
Overview
Explore the application of transformers to directed graphs in this comprehensive conference talk by Simon Geisler from Valence Labs. Dive into direction- and structure-aware positional encodings for directed graphs, including eigenvectors of the Magnetic Laplacian and directional random walk encodings. Learn how these techniques can be applied to domains such as source code and logic circuits. Discover the benefits of incorporating directionality information in various downstream tasks, including correctness testing of sorting networks and source code understanding. Examine the data-flow-centric graph construction approach that outperforms previous state-of-the-art methods on the Open Graph Benchmark Code2. Follow along as the speaker covers topics like sinusoidal encodings, signal processing, Graph Fourier Basis, harmonics for directed graphs, and the architecture of the proposed model.
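The Magnetic Laplacian mentioned above is a Hermitian matrix whose complex eigenvectors encode edge direction in their phases, making them usable as direction-aware positional encodings. The sketch below shows one common way to form it; the function name, parameters, and defaults are illustrative, not taken from the talk.

```python
import numpy as np

def magnetic_laplacian_encodings(A, q=0.25, k=4):
    """Direction-aware positional encodings from the Magnetic Laplacian.

    A: dense adjacency matrix of a directed graph (n x n, 0/1).
    q: potential controlling how strongly direction is encoded
       (q = 0 recovers the ordinary symmetric graph Laplacian).
    k: number of eigenvectors to return as encodings.
    """
    A = np.asarray(A, dtype=float)
    A_s = np.clip(A + A.T, 0.0, 1.0)       # symmetrized adjacency
    D_s = np.diag(A_s.sum(axis=1))         # symmetrized degree matrix
    theta = 2.0 * np.pi * q * (A - A.T)    # phase matrix: +/- for edge direction
    L = D_s - A_s * np.exp(1j * theta)     # Hermitian Magnetic Laplacian
    # Hermitian matrix => real eigenvalues; the complex eigenvectors
    # carry the directionality information in their phases.
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvals[:k], eigvecs[:, :k]     # k smallest-eigenvalue eigenvectors
```

In practice these eigenvectors (e.g. their real and imaginary parts, or magnitude and phase) are concatenated to node features before feeding the graph into a transformer.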
Syllabus
- Intro
- How Do Language Models Encode Code?
- Sinusoidal Encodings
- Signal Processing: DFT
- Graph Fourier Basis
- Magnetic Laplacian
- Harmonics for Directed Graphs
- Ambiguity of Eigenvectors
- Architecture
- Distance Prediction
- Correctness Prediction of Sorting Networks
- Open Graph Benchmark Code2
- Summary
- Q+A
Taught by
Valence Labs