Alignment Is All You Need

Week 3 at Recurse Center

I’ve just finished my third week at Recurse Center, which means I’m already halfway through my residency here. Although I’m still digesting the material in Andrej Karpathy’s Neural Net series, I’ve finished working through the videos, and in the coming week I’ll continue implementing my own model to deepen my understanding of, and intuition for, these systems.

The last video in the series centers on the pivotal Attention Is All You Need paper, which presents the transformer architecture at the heart of OpenAI’s GPT models. Working through Karpathy’s lectures, I’m struck by the elegance of neural net architectures: their composition through the repetition of simple elements. And I’m struck, too, by the degree to which much of the “art” of designing these systems lies in simple arithmetic operations that nudge the state of the system into a “well-behaved” regime. Scale is the main thing, which led Rich Sutton to conclude in his Bitter Lesson: “We have to learn the bitter lesson that building in how we think we think does not work in the long run.”
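To make that “arithmetic nudge” concrete, here is a minimal sketch of a single head of scaled dot-product attention, written in PyTorch and loosely after the head built up in Karpathy’s final video (the shapes and variable names here are my own). The small operation I have in mind is the division by the square root of the head size: without it, the dot products grow with dimension, the softmax saturates toward one-hot, and gradients vanish.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
B, T, C = 4, 8, 32      # batch, sequence length, embedding channels
head_size = 16

x = torch.randn(B, T, C)

# Learned projections into key, query, and value spaces
key = torch.nn.Linear(C, head_size, bias=False)
query = torch.nn.Linear(C, head_size, bias=False)
value = torch.nn.Linear(C, head_size, bias=False)

k, q, v = key(x), query(x), value(x)              # each (B, T, head_size)

# The "nudge": scaling by 1/sqrt(head_size) keeps the logits' variance
# near 1, so the softmax stays diffuse instead of collapsing to one-hot.
wei = q @ k.transpose(-2, -1) * head_size**-0.5    # (B, T, T)

# Causal mask so each position attends only to earlier positions
tril = torch.tril(torch.ones(T, T))
wei = wei.masked_fill(tril == 0, float('-inf'))
wei = F.softmax(wei, dim=-1)

out = wei @ v                                      # (B, T, head_size)
```

One simple scalar multiplication, repeated across every head and layer, is the difference between a model that trains and one that stalls; that is the kind of elegance I mean.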

Meanwhile … OpenAI continues to make dramatic headlines with the sudden firing of Sam Altman. At the time of this writing, conflict within the board over the speed and safety of the company’s product development seems to be at the core of the rift.

The question of social disalignment around the topic of AI alignment within OpenAI’s leadership is poignant, provocative, and perhaps all too predictable … disalignment around what it means to build in how we think. Institutions have emergent behaviors, and a kind of artificial intelligence, too. It seems worth pondering how theories of alignment in the technological and social domains emerged around the same time, as Orit Halpern explores in a recent paper on the conjoined histories of neural nets and Hayekian neoliberal economics. Which is to say, e/acc is nothing new, and alignment has perhaps been out of vogue for nearly a century.

In my own microcosm of space-time at Recurse, the question of alignment (on a personal scale) feels like a desire to reassert control: optimizing my time here, finishing projects, setting and completing goals. Alignment presumes there’s a reward or loss function at the heart of the model/subject (as is the case for both neural nets and neoliberal economics), but the archetype of the meandering line still feels more appropriate to me. And I wonder, too, about the inverse of Sutton’s Bitter Lesson: how the things we build change how we think, how the logic embedded in formal systems becomes heuristics for our own sense-making.
