Talks
- EMNLP2025 Keynote: Heng Ji
- EMNLP2025 Eo: Expert Generalization in MoE
- EMNLP2025 Wu: Zero Shot Graph Learning
- EMNLP2025: MUSE, MCTS-Driven Red Teaming
Posters
Takes
- although parsing may be dead for natural language, structure still helps for parsing scientific information (e.g. drugs, molecules, proteins)
- two ideas: 1) how to formalize the approach mathematically, 2) what can LMs do that humans can’t?
- information-rich statefulness + constraints for pruning the search space is the unlock for building on previous results, i.e. “critical thinking”
Tasks to Do
- EMNLP2025 Fan: medium is not the message: I wonder if we can remove keyword-based signals from BM25 using this method (minimal BM25 sketch below)
- EMNLP2025 Xu: tree of prompting: a bunch of multi-hop retrieval datasets to benchmark RAG-DOLL on
- EMNLP2025 Bai: understanding and leveraging expert specialization of context faithfulness: a good set of retrieval benchmarks
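
Minimal BM25 sketch for the Fan/BM25 idea above, with the keyword-based signals (IDF and term frequency) labelled so it is clear what an ablation would remove. This is textbook BM25, not the method from the talk; the toy corpus and function name are made up.

```python
# Plain BM25 scoring (assumption: standard formulation, not Fan et al.'s method).
# The IDF and term-frequency factors are the "keyword-based signals" to ablate.
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)          # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)   # keyword signal 1: term rarity
        f = tf[term]                                      # keyword signal 2: term frequency
        denom = f + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * (f * (k1 + 1)) / denom
    return score

corpus = [["drug", "binds", "protein"], ["mcts", "red", "teaming"]]
print(bm25_score(["protein"], corpus[0], corpus))
```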
Tasks Can Do
- EMNLP2025 Keynote: Heng Ji: “protein LLMs require early exit to capture dynamical behavior”; what if we apply Mixture-of-Depths to a protein LM? (routing sketch after this list)
- EMNLP2025 Hutson: measuring informativeness of open-ended questions: formalize this as a ρ-POMDP, or use actual value-of-information measures with a Bellman backup (toy backup sketch after this list)
- EMNLP2025 Karamanolakis: interactive machine teaching: use MCTS/UCB to pick the next set of constitutions to optimize for (UCB sketch after this list)
- EMNLP2025 Yu: Long-Context LMs Fail in Basic Retrieval: I wonder how thoughtbubbles does on this dataset
- EMNLP2025 Bai: understanding and leveraging expert specialization of context faithfulness: could be interesting to use the same freeze/clamping technique for cultural work
- EMNLP2025 Vasu: literature-grounded hypothesis generation: maybe we could reuse the same hypothesis-generation pipeline for RAG
- EMNLP2025 Li: enhancing RAG response evaluator: could be useful for evaluating edge rewards for RAG-DOLL
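
Rough sketch for the Mixture-of-Depths idea above: a per-token router sends only a top-capacity fraction of tokens through the full block, while the rest pass through on the residual. This is my guess at a minimal MoD wrapper, not anything shown in the keynote; `block`, `d_model`, and the 0.25 capacity are placeholders.

```python
# Minimal Mixture-of-Depths routing layer (assumed design, PyTorch).
import torch
import torch.nn as nn

class MoDLayer(nn.Module):
    def __init__(self, block: nn.Module, d_model: int, capacity: float = 0.25):
        super().__init__()
        self.block = block                    # any block that outputs a residual update
        self.router = nn.Linear(d_model, 1)   # per-token scalar routing score
        self.capacity = capacity              # fraction of tokens that get full compute

    def forward(self, x):                     # x: (batch, seq, d_model)
        b, s, d = x.shape
        k = max(1, int(self.capacity * s))
        scores = self.router(x).squeeze(-1)               # (batch, seq)
        topk = scores.topk(k, dim=-1).indices             # tokens routed through the block
        idx = topk.unsqueeze(-1).expand(-1, -1, d)
        selected = torch.gather(x, 1, idx)                # (batch, k, d_model)
        # gate the update by the router score so routing stays differentiable
        gate = torch.sigmoid(torch.gather(scores, 1, topk)).unsqueeze(-1)
        out = x.clone()                                   # unselected tokens skip via residual
        out.scatter_add_(1, idx, gate * self.block(selected))
        return out

layer = MoDLayer(nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64)), d_model=64)
print(layer(torch.randn(2, 100, 64)).shape)  # torch.Size([2, 100, 64])
```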
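
Toy sketch for the question-informativeness idea above: a belief-space Bellman backup where the reward is belief-dependent (negative entropy), so the backup scores how much a yes/no question is expected to reduce uncertainty. The binary hidden state, answer accuracy, and horizon are all assumptions, not from the paper.

```python
# Toy rho-POMDP-style value-of-information backup (assumed framing).
import math

ACC = 0.8      # assumed probability the answer is accurate
GAMMA = 0.95

def entropy(b):
    return -sum(p * math.log(p) for p in (b, 1 - b) if p > 0)

def posterior(b, obs):
    """Bayes update of P(state=1) after a yes/no answer; returns (new belief, P(obs))."""
    like1 = ACC if obs == 1 else 1 - ACC
    like0 = 1 - ACC if obs == 1 else ACC
    z = like1 * b + like0 * (1 - b)
    return like1 * b / z, z

def ask_value(b, depth=2):
    """Bellman backup with belief reward rho(b) = -H(b): value of asking more questions."""
    if depth == 0:
        return -entropy(b)
    total = 0.0
    for obs in (0, 1):
        b_next, p_obs = posterior(b, obs)
        total += p_obs * (-entropy(b_next) + GAMMA * ask_value(b_next, depth - 1))
    return total

b = 0.5
print("stop now:", -entropy(b), "ask once more:", ask_value(b, depth=1))
```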
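
Sketch for the constitution-selection idea above: plain UCB1 (the selection rule inside MCTS) over a candidate pool, treating each interactive-teaching run as a bandit pull rather than building a full search tree. The constitution names and the `evaluate` stub are placeholders.

```python
# UCB1 over candidate constitutions (assumed setup; not the paper's method).
import math
import random

CONSTITUTIONS = ["be-concise", "cite-sources", "ask-clarifying-questions"]

def evaluate(constitution):
    """Stand-in for the expensive step: run interactive teaching with this
    constitution and return a reward in [0, 1]."""
    return random.random()

counts = {c: 0 for c in CONSTITUTIONS}
values = {c: 0.0 for c in CONSTITUTIONS}

for t in range(1, 51):
    # UCB1: mean reward plus an exploration bonus; untried arms are picked first.
    def ucb(c):
        if counts[c] == 0:
            return float("inf")
        return values[c] / counts[c] + math.sqrt(2 * math.log(t) / counts[c])
    pick = max(CONSTITUTIONS, key=ucb)
    values[pick] += evaluate(pick)
    counts[pick] += 1

print(max(CONSTITUTIONS, key=lambda c: values[c] / max(counts[c], 1)))
```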
