Houjun Liu

#Writing

AML: REINFORCE(ment learning)

Last edited: October 10, 2023
Woof. As I begin to write this, I should add that this unit is going to be conceptually dense. Though we are teaching one particular algorithm (incidentally named REINFORCE), the world of reinforcement learning is built on one, if not many, very advanced treatments in maths. So if anything, I would focus on getting the conceptual flavor of how these problems are formulated and discussed. If you can come along for the mathematical and algorithmic journey, then even better, but that's by no means required or expected… There’s still lots for all of us to learn together.

Why is building a to-do list app so darn hard?

Last edited: October 10, 2023
Why are Todo Lists (a.k.a. personal productivity systems) so hard to build well? I’m genuinely curious. I was listening to the latest episode of Cortex, and one of the hosts (CGP Grey) brought up a similar point regarding personal productivity platforms. OmniFocus, the reigning champion of the industry for professionals looking for a deeply customized system, has been struggling to ship the next version of their application. Much of the rest of the market consists of various packagings of the same offering.

LLMs are fantastic search engines, so I built one

Last edited: September 9, 2023
For the past 20 years, semantic indexing has sucked. For the most part, the core offerings of search products in the last while have been divided into two categories:

- Full-text search things (i.e. every app on the face of the planet that stores text), which for the most part use something n-grammy like Okapi BM25 to do nice fuzzy string matching
- Ranking/recommendation things, which aren't so much trying to search a database as trying to guess the user's intent and recommend them things from it

And we lived in a pretty happy world in which, depending on the application, developers chose one or the other to build.
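The excerpt only names the two camps; as a rough, minimal sketch of the first one (classic lexical scoring with Okapi BM25), here is an illustration using the third-party rank_bm25 package. The corpus and query are made-up placeholders, not anything from the post:

```python
# Minimal illustration of full-text ("category one") search with Okapi BM25,
# using the third-party rank_bm25 package (pip install rank-bm25).
# The corpus and query below are made-up placeholders.
from rank_bm25 import BM25Okapi

corpus = [
    "how to build a todo list app",
    "semantic search with large language models",
    "okapi bm25 is a bag-of-words ranking function",
]
tokenized_corpus = [doc.split() for doc in corpus]

bm25 = BM25Okapi(tokenized_corpus)

query = "bm25 ranking function".split()
scores = bm25.get_scores(query)   # one lexical relevance score per document
best = max(zip(scores, corpus))   # highest-scoring document wins
print(best)
```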

AML: Dipping into PyTorch

Last edited: September 9, 2023
Hello! Welcome to this series of guided code-along labs introducing you to the basics of using the PyTorch library and its friends to create a neural network! We will dive deeply into Torch, focusing on how it can practically be used to build neural networks, as well as taking sideroads into how it works under the hood.

Getting Started

To get started, let’s open a Colab and import Torch!
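The excerpt stops at the import step, but a minimal first Colab cell might look something like the sketch below; the layer sizes and dummy input are arbitrary placeholders, not taken from the lab itself:

```python
# A minimal "hello, Torch" cell: import the library and define a tiny network.
# Layer sizes and the dummy input are arbitrary placeholders.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(4, 8),   # 4 input features -> 8 hidden units
            nn.ReLU(),
            nn.Linear(8, 1),   # 8 hidden units -> 1 output
        )

    def forward(self, x):
        return self.layers(x)

net = TinyNet()
x = torch.randn(2, 4)   # a batch of 2 dummy samples
print(net(x).shape)     # torch.Size([2, 1])
```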

AML: It Takes Two

Last edited: September 9, 2023
Hello everyone! It’s April, which means we are ready again for a new unit. Let’s dive in. You know what’s better than one neural network? TWO!!! Multi-model approaches, in which two neural networks interact to produce a result, dominate much of the current edge of neural network research. In this unit, we are going to introduce one such approach, Generative Adversarial Networks (GANs), but leave you with some food for thought about what else training multiple networks together can do.
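As a rough sketch of what "two networks interacting" looks like for a GAN (not the unit's actual code; all dimensions here are placeholders), here is a minimal generator/discriminator pair in PyTorch:

```python
# Minimal GAN skeleton: a generator maps noise to fake samples, and a
# discriminator scores samples as real vs. fake. Dimensions are placeholders.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),            # a fake "data point"
)

discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),     # probability the input is real
)

z = torch.randn(8, latent_dim)          # a batch of noise vectors
fake = generator(z)                     # generator's attempt at real-looking data
print(discriminator(fake).shape)        # torch.Size([8, 1]) of realness scores
```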