Stanford UG Courses Index
Last edited: December 12, 2025
Stanford UG Y1, Aut
Stanford UG Y1, Win
Stanford UG Y1, Spr
Stanford UG Y2, Aut
Stanford UG Y2, Win
Stanford UG Y2, Spr
Stanford UG Y3, Aut
Stanford UG Talks
| Topic | Presenter | Link |
|---|---|---|
| UG Research Program | Brian Thomas | Stanford UG Research Program |
| Build an Ecosystem, Not a Monolith | Colin Raffel | Build a System |
| Training Helpful Chatbots | Nazneen Rajani | Training Helpful Chatbots |
| AI Interpretability for Bio | Gasper Begus | AI Interpretability |
| Pretraining Transformers on Long Sequences | Mike Lewis | Pretraining Long Transformers |
| Transformers! | A. Vaswani | Transformers |
| Towards Interactive Agents | Jessy Lin | Interactive Agent |
| Dissociating Language and Thought | Anna Ivanova | Dissociating Language and Thought |
| Language Agents | Karthik Narasimhan | Language Agents with Karthik |
| Pretraining Data | | |
| Value Alignment | Been Kim | LM Alignment |
| Model Editing | Peter Hase | Knowledge Editing |
| Knowledge Localization | | |
| Presentations | Sydney Katz | Presentations |
| Video Generation with Learned Prior | Meenakshi Sarkar | Priors |
| Theoretical Drone Control | | Sliding Mode UAV Control |
| VLM to Agents | Tao Yu | VLM to Agents |
| Social RL | Natasha Jaques | Social Reinforcement Learning |
| Model Predictive Control + Prompting | Gabriel Maher | LLM MPC |
| Planning for Learning | | |
| Theorem Proving | | Self-Play Conjecture Generation |
| Safety for Trucks | | Safety for Autonomous Trucking |
| Collaborative Multiagent DM | | Collaborative Multiagent DM |
| AI Safety Talks | | AI Safety Annual Meeting |
| Pretraining under Infinite Compute | | Limited Samples and Infinite Compute |
| Decisions.jl | Mel Krusniak | Decisions.jl |
| SISL Flash Talks | | SISL Talks |
| Predicting Scaling Performance | | |
Contacts
Algorithms Index
Last edited: December 12, 2025
Lectures
Divide and Conquer
Sorting
- merge sort: SU-CS161 SEP252025 (see the sketch after this list)
- recurrence solving: SU-CS161 SEP302025
- median: SU-CS161 OCT022025
- randomized algos + quicksort: SU-CS161 OCT072025
- linear time sorting: SU-CS161 OCT092025
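A minimal sketch of merge sort from the first item above, assuming plain Python lists; illustrative, not the course's reference code:

```python
# Minimal merge sort: divide, recursively sort halves, merge.
# Runtime follows T(n) = 2T(n/2) + O(n) = O(n log n) by the master theorem.
def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves in O(n).
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```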
Data Structures
- red-black trees: SU-CS161 OCT142025
- hashing: SU-CS161 OCT212025
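A toy chained hash table matching the hashing lecture; the fixed bucket count and Python's built-in `hash` are simplifying assumptions:

```python
# Toy hash table with separate chaining; fixed bucket count for brevity.
class ChainedHashTable:
    def __init__(self, n_buckets=64):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        b = self._bucket(key)
        for i, (k, _) in enumerate(b):
            if k == key:
                b[i] = (key, value)  # overwrite existing key
                return
        b.append((key, value))

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)
```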
Graphs
- DFS/BFS: SU-CS161 OCT232025
- Strongly connected components: SU-CS161 OCT282025
- Dijkstra: SU-CS161 OCT302025
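A heap-based Dijkstra sketch for the item above; the adjacency-list format `{node: [(neighbor, weight), ...]}` is an assumption:

```python
import heapq

# Dijkstra's algorithm with a binary heap: O((V + E) log V).
# Weights must be non-negative.
def dijkstra(graph, source):
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```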
DP
- Bellman-Ford and Floyd-Warshall: SU-CS161 NOV112025
- more DP (LCS, knapsack, independent set): SU-CS161 NOV132025
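A bottom-up LCS table as covered in the second DP lecture above; a sketch, not the course's code:

```python
# Bottom-up LCS: dp[i][j] = length of LCS of x[:i] and y[:j]; O(nm).
def lcs_length(x, y):
    n, m = len(x), len(y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]
```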
Greedy Algorithms
- greedy algorithms: SU-CS161 NOV182025
- MSTs: SU-CS161 NOV202025
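A compact Kruskal sketch for the MST lecture above; the `(weight, u, v)` edge-list format over vertices `0..n-1` is an assumption:

```python
# Kruskal's MST with union-find (path halving): sort edges, add each
# edge that joins two different components.
def kruskal(n, edges):  # edges: [(weight, u, v)]
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst
```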
Closing
- Max Flows, Min Cuts, and Ford-Fulkerson: SU-CS161 DEC022025
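A short Edmonds-Karp sketch (Ford-Fulkerson with BFS augmenting paths) for the item above; the dense capacity-matrix representation is an assumption:

```python
from collections import deque

# Edmonds-Karp: Ford-Fulkerson with BFS augmenting paths, O(V E^2).
# cap is an n x n capacity matrix; residual capacities are updated in place.
def max_flow(cap, s, t):
    n = len(cap)
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:  # BFS for a shortest augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow  # no augmenting path left: flow equals min cut
        # Find the bottleneck along the path, then push flow through it.
        bottleneck, v = float("inf"), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck
```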
EMNLP2025 Index
Last edited: November 11, 2025
Talks
- EMNLP2025 Keynote: Heng Ji
- EMNLP2025 Eo: Expert Generalization in MoE
- EMNLP2025 Wu: Zero Shot Graph Learning
- Driven Red Teaming
Posters
Takes
- although parsing may be dead for natural language, structure still helps parse scientific information (e.g., drugs, molecules, proteins)
- two ideas: 1) how to formalize the approach mathematically 2) what can LMs do that humans can’t?
- information-rich statefulness + constraints for pruning the search space is the unlock for the ability to build on previous results; i.e. “critical thinking”
Tasks to Do
- EMNLP2025 Fan: medium is not the message: I wonder if we can remove keyword-based signals from BM25 using this method
- EMNLP2025 Xu: tree of prompting: a bunch of multi-hop retrieval datasets to benchmark RAG-DOLL on
- EMNLP2025 Bai: understanding and leveraging expert specialization of context faithfulness: a good set of retrieval benchmarks
Tasks Can Do
- EMNLP2025 Keynote: Heng Ji: “protein LLM requires early exit to capture dynamical behavior”; what if we apply Mixture-of-Depths to a protein LM?
- EMNLP2025 Hutson: measuring informativeness of open-ended questions: formalize this as a ρ-POMDP, or use actual value-of-information measures with Bellman backups (see the sketch after this list)
- EMNLP2025 Karamanolakis: interactive machine teaching: use MCTS UCB to pick the next set of constitutions to optimize for
- EMNLP2025 Yu: Long-Context LMs Fail in Basic Retrieval: I wonder how thoughtbubbles do on the dataset
- EMNLP2025 Bai: understanding and leveraging expert specialization of context faithfulness: could be interesting using the same freeze/clamping technique for cultural work
- EMNLP2025 Vasu: literature grounded hypothesis generation: maybe we could use its hypothesis generation pipeline for RAG
- EMNLP2025 Li: enhancing RAG response evaluator: might be useful for evaluating edge rewards for RAGDOLL
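The value-of-information idea in the Hutson item above can be made concrete: score a candidate question by prior entropy minus expected posterior entropy. A rough Python sketch, where the belief representation, answer set, and `answer_likelihood` model are illustrative assumptions, not anything from the paper:

```python
import math

# Toy value of information for one candidate question:
# VOI = H(belief) - E_answers[H(belief | answer)].
def entropy(p):
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def value_of_information(belief, answer_likelihood, answers):
    # belief: {hypothesis: prob}; answer_likelihood(a, h) = P(answer a | h)
    voi = entropy(belief)
    for a in answers:
        p_a = sum(answer_likelihood(a, h) * p for h, p in belief.items())
        if p_a == 0:
            continue
        posterior = {h: answer_likelihood(a, h) * p / p_a
                     for h, p in belief.items()}
        voi -= p_a * entropy(posterior)
    return voi  # expected entropy reduction; argmax over questions
```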
ACL2025 Index
Last edited: August 8, 2025
Talks
Posters
Takes
- mayhaps we can apply the thoughtbubbles intuition to BLT token pruning?
ADReSS Literature Survey Index
Last edited: August 8, 2025
The ADReSS Literature Survey collects the results published during the ADReSS Challenge.
- Antonsson 2021: disfluency + SVF features trained on SVM: lexical > narrative qual.
- Chlasta 2021: features extracted from VGGish on SVM; also trained new CNN from .wav.
- Sadeghian 2021: Used GA for feature sel., achieved 94% w/ MMSE alone; dev’d ASR tool.
- Martinc 2021: CBOW (text) + ADR (sound) late fusion’d to a BERT, ablated for features.
- Meghanani 2021: spontaneous speech transcripts with fastText and CNN; 83.33% acc.
- Yuan 2021: ERNIE on transcripts with pause encoding; 89.6% acc.
- Jonell 2021: Developed a kitchen sink of diag. tools and correlated it with biomarkers.
- Laguarta 2021: multimodal (OVBM) to embed auditory info + biomarkers for clsf.
- Shah 2021: late fusion of n-gram and OpenSMILE on std. classifiers.
- Lindsay 2021: Cross-linguistic markers shared for AD patients between English and French.
- Zhu 2021: late fusion of CTP task for AD clsf. w/ transf., mobilenet, yamnet, mockingjay.
- Guo 2021: WLS data to augment CTP from ADReSS Challenge and trained it on a BERT.
- Balagopalan 2021: lexo. and synt. features trained on a BERT and other models.
- Mahajan 2021: a bimodal model on speech/text with GRU on speech and CNN-LSTM on text.
- Parvin 2020: exercise scheme effects on theta/alpha ratio and brain wave frequency.
- Luz 2021: review paper presenting the ADReSSo challenge and current baselines.
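Several entries above (Martinc, Shah, Zhu, Mahajan) rely on late fusion of text and audio classifiers. A generic sketch of that pattern, assuming each modality model already outputs class probabilities; the weights and shapes are placeholders, not any paper's actual configuration:

```python
import numpy as np

# Generic late fusion over per-modality class probabilities.
# prob_*: arrays of shape (n_samples, n_classes) from each modality model.
def late_fuse(prob_text, prob_audio, w_text=0.5, w_audio=0.5):
    fused = w_text * np.asarray(prob_text) + w_audio * np.asarray(prob_audio)
    return fused.argmax(axis=1)  # predicted class (e.g., AD vs. control)
```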
From Meghanani 2021, a review:
