Stanford UG Courses Index
Last edited: April 4, 2026
Stanford UG Y1, Aut
Stanford UG Y1, Win
Stanford UG Y1, Spr
Stanford UG Y2, Aut
Stanford UG Y2, Win
Stanford UG Y2, Spr
Stanford UG Y3, Aut
Stanford UG Y3, Win
Stanford UG Y3, Spr
Stanford UG Talks
| Topic | Presenter | Link |
|---|---|---|
| UG Research Program | Brian Thomas | Stanford UG Research Program |
| Build an Ecosystem, Not a Monolith | Colin Raffel | Build a System |
| Training Helpful Chatbots | Nazneen Rajani | Training Helpful Chatbots |
| AI Interpretability for Bio | Gasper Begus | AI Interpretability |
| Pretraining Transformers on Long Seqs | Mike Lewis | Pretraining Long Transformers |
| Transformers! | A. Vaswani | Transformers |
| Towards Interactive Agents | Jessy Lin | Interactive Agent |
| Dissociating Language and Thought | Anna Ivanova | Dissociating Language and Thought |
| Language Agents | Karthik Narasimhan | Language Agents with Karthik |
| Pretraining Data | | |
| Value Alignment | Been Kim | LM Alignment |
| Model Editing | Peter Hase | Knowledge Editing |
| Knowledge Localization | | |
| Presentations | Sydney Katz | Presentations |
| Video Generation with Learned Prior | Meenakshi Sarkar | Priors |
| Theoretical Drone Control | | Sliding Mode UAV Control |
| VLM to Agents | Tao Yu | VLM to Agents |
| Social RL | Natasha Jaques | Social Reinforcement Learning |
| Model Predictive Control + Prompting | Gabriel Maher | LLM MPC |
| Planning for Learning | | |
| Theorem Proving | | Self-Play Conjecture Generation |
| Safety for Trucks | | Safety for Autonomous Trucking |
| Collaborative Multiagent DM | | Collaborative Multiagent DM |
| AI Safety Talks | | AI Safety Annual Meeting |
| Pretraining under Infinite Compute | | Limited Samples and Infinite Compute |
| | Mel Krusniak | Decisions.jl |
| SISL Flash Talks | | SISL Talks |
| Predicting Scaling Performance | | |
| Mixed-Autonomy Traffic with LLMs | | mixed-autonomy traffic with LLMs |
| AI Incidents Policy | | AI Incidents Policy |
| Reliable RL | | Reliable RL |
| Words to Concepts | | Words to Concepts |
| Zen's Defense | | |
| Multi-Agent LLM | | Multi-Agent LLMs |
Contacts
EMNLP2025 Index
Last edited: December 12, 2025
Talks
- EMNLP2025 Keynote: Heng Ji
- EMNLP2025 Eo: Expert Generalization in MoE
- EMNLP2025 Wu: Zero Shot Graph Learning
- EMNLP2025: MUSE, MCTS Driven Red Teaming
Posters
Takes
- although parsing may be dead for natural language, structure helps parse scientific information (i.e. drugs, molecules, proteins, etc.)
- two ideas: 1) how to formalize the approach mathematically 2) what can LMs do that humans can’t do?
- information-rich statefulness + constraints for pruning space is the unlock for ability to build on previous results; i.e. “critical thinking”
Tasks to Do
- EMNLP2025 Fan: medium is not the message: I wonder if we can remove keyword-based signals from BM25 using this method
- EMNLP2025 Xu: tree of prompting: a bunch of multi-hop retrieval datasets to benchmark for RAG-DOLL
- EMNLP2025 Bai: understanding and leveraging expert specialization of context faithfulness: a good set of retrieval benchmarks
Tasks Can Do
- EMNLP2025 Keynote: Heng Ji: “protein LLM requires early exit to capture dynamical behavior”; what if we Mixture of Depth a protein LM?
- EMNLP2025 Hutson: measuring informativeness of open-ended questions: formalize this as a ρ-POMDP, or use actual value-of-information measures with Bellman backup
- EMNLP2025 Karamanolakis: interactive machine teaching: use MCTS UCB to pick the next set of constitutions to optimize for
- EMNLP2025 Yu: Long-Context LM Fail in Basic Retrieval: I wonder how thoughtbubbles do on the dataset
- EMNLP2025 Bai: understanding and leveraging expert specialization of context faithfulness: could be interesting using the same freeze/clamping technique for cultural work
- EMNLP2025 Vasu: literature grounded hypothesis generation: maybe could use its same hypothesis generation pipeline for RAG
- EMNLP2025 Li: enhancing RAG response evaluator: maybe could be useful to use to evaluate edge rewards for RAGDOLL
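The Karamanolakis task above (pick the next constitution to optimize with MCTS-style UCB) can be sketched as a plain UCB1 bandit over candidate constitutions. This is a minimal illustration only: the `payoff` values and the framing of constitutions as bandit arms are assumptions for the sketch, not anything from the paper.

```python
# Hypothetical sketch: UCB1 (the selection rule used in MCTS) to choose
# which candidate constitution to try next in an interactive teaching loop.
import math
import random

def ucb1_pick(counts, rewards, c=1.4):
    """Return the index of the arm maximizing the UCB1 score."""
    total = sum(counts)
    best, best_score = 0, float("-inf")
    for i, (n, r) in enumerate(zip(counts, rewards)):
        if n == 0:
            return i  # try every untested arm once first
        score = r / n + c * math.sqrt(math.log(total) / n)
        if score > best_score:
            best, best_score = i, score
    return best

# Toy loop: 3 candidate constitutions with unknown (simulated) payoffs.
random.seed(0)
payoff = [0.2, 0.8, 0.5]  # assumed success rates, illustrative only
counts, rewards = [0, 0, 0], [0.0, 0.0, 0.0]
for _ in range(200):
    arm = ucb1_pick(counts, rewards)
    counts[arm] += 1
    rewards[arm] += 1.0 if random.random() < payoff[arm] else 0.0
```

Over enough rounds the highest-payoff constitution gets sampled most, while the exploration term keeps the others from being abandoned too early.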
Houjun's Academic Home Page
Last edited: December 12, 2025
👋 Howdy, I'm Houjun Liu!
I’m a third-year coterminal MSCS and BSCS student in the Computer Science Department at Stanford University, grateful to be advised by Prof. Mykel Kochenderfer. In the course of my research, I have also had the fortunate opportunity to work with Stanford NLP under Prof. Chris Manning, CMU TalkBank under Prof. Brian MacWhinney, and Prof. Xin Liu at UC Davis Engineering. I am affiliated with the Stanford NLP Group and Stanford Intelligent Systems Lab.
Algorithms Index
Last edited: December 12, 2025
Lectures
Divide and Conquer
Sorting
- merge sort: SU-CS161 SEP252025
- recurrence solving: SU-CS161 SEP302025
- median: SU-CS161 OCT022025
- randomized algos + quicksort: SU-CS161 OCT072025
- linear time sorting: SU-CS161 OCT092025
Data Structures
- red-black trees: SU-CS161 OCT142025
- hashing: SU-CS161 OCT212025
Graphs
- DFS/BFS: SU-CS161 OCT232025
- Strongly connected components: SU-CS161 OCT282025
- Dijkstra: SU-CS161 OCT302025
DP
- Bellman-Ford and Floyd-Warshall: SU-CS161 NOV112025
- more DP LCS, knapsack, independent set: SU-CS161 NOV132025
Greedy Algorithms
- greedy algorithms: SU-CS161 NOV182025
- MSTs: SU-CS161 NOV202025
Closing
- Max Flows, Min Cuts, and Ford-Fulkerson: SU-CS161 DEC022025
ACL2025 Index
Last edited: August 8, 2025
Talks
Posters
Takes
- mayhaps we can apply the thoughtbubbles intuition to BLT token pruning?
