EMNLP2025 Wu: Zero Shot Graph Learning via Explicit Reasoning

Last edited: November 11, 2025

One-Liner

Novelty

Background

How do LLMs do graphs?

  • predict text from graphs (convert graph into text, autoregression)
  • align text with graph (GNN + LLM late fusion)
  • encode text with graph (stick LLM embedding to a GNN as a prompt)
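The first approach above (convert the graph into text, then run autoregression over it) can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the serialization template, node attributes, and question are all invented for the example.

```python
# Hypothetical sketch: flatten a graph into plain text so an LLM can
# autoregressively predict over it ("predict text from graphs").
# The template below is an illustrative assumption, not the paper's format.

def graph_to_text(nodes, edges):
    """Serialize a node dict and edge list into a line-per-item description."""
    lines = [f"Node {n}: {attrs}" for n, attrs in nodes.items()]
    lines += [f"Edge: {u} -> {v}" for u, v in edges]
    return "\n".join(lines)

# Toy graph: two nodes with text attributes, one directed edge.
nodes = {"A": "paper on GNNs", "B": "paper on LLMs"}
edges = [("A", "B")]

# The serialized graph becomes the prompt prefix for the LLM.
prompt = graph_to_text(nodes, edges) + "\nQuestion: are A and B related?"
```

The other two approaches (late fusion and prompt-style embedding injection) keep the GNN in the loop instead of discarding structure into text.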

Motivation

Notable Methods

Key Figs

New Concepts

Notes

EMNLP2025 Zhang: Diffusion vs. Autoregression Language Models

Last edited: November 11, 2025

One-Liner

Novelty

Notable Methods

Key Figs

New Concepts

Notes

EMNLP2025: MUSE, MCTS Driven Red Teaming

Last edited: November 11, 2025

One-Liner

Notable Methods

  1. construct a series of perturbation actions
    • \(A\qty(s)\) = decomposition (skip), expansion (rollout), redirection
  2. sequence actions with MCTS
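The two steps above can be sketched as a standard UCT-style MCTS loop over the perturbation actions. This is a generic sketch, not MUSE's implementation: the reward is a random placeholder and the state is just the tuple of actions applied so far.

```python
import math
import random

# Perturbation actions from the notes; in MUSE these would transform a prompt.
ACTIONS = ["decomposition", "expansion", "redirection"]

class Node:
    def __init__(self, state, parent=None):
        self.state = state        # tuple of actions applied so far
        self.parent = parent
        self.children = {}        # action -> child Node
        self.visits = 0
        self.value = 0.0

def uct(node, c=1.4):
    """Select the child maximizing the UCT score."""
    return max(
        node.children.values(),
        key=lambda ch: ch.value / ch.visits
        + c * math.sqrt(math.log(node.visits) / ch.visits),
    )

def rollout(state, depth=3):
    """Placeholder simulation: pad the action sequence, return a random reward."""
    while len(state) < depth:
        state = state + (random.choice(ACTIONS),)
    return random.random()  # stand-in for a real red-teaming success score

def mcts(root, iterations=100):
    for _ in range(iterations):
        node = root
        # Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == len(ACTIONS):
            node = uct(node)
        # Expansion: add one untried action as a child.
        untried = [a for a in ACTIONS if a not in node.children]
        if untried:
            a = random.choice(untried)
            node.children[a] = Node(node.state + (a,), parent=node)
            node = node.children[a]
        # Simulation + backpropagation.
        reward = rollout(node.state)
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited first action.
    return max(root.children, key=lambda a: root.children[a].visits)

best_first_action = mcts(Node(state=()))
```

In the actual system the rollout would execute the perturbed prompt against the target model and score the response, rather than returning a random number.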

Key Figs

New Concepts

Notes

EMNLP2025 Keynote: Heng Ji

Last edited: November 11, 2025

Motivation: drug discovery is extremely slow and expensive; it mostly modulates previous iterations of work.

Principles of Drug Discovery

  • observation: acquire/fuse knowledge from multiple data modalities (sequence, structure, etc.)
  • think: critically generate genuinely new hypotheses, and iterate on them
  • code-switch: allow LMs to code-switch between modalities (i.e., fuse different modalities together in the most uniform way)

An LM used as a heuristic helps prune the search space quickly.

SU-CS229 Midterm Sheet

Last edited: November 11, 2025