Category: Sparse Autoencoders
-
We study toy models trained on synthetic data consisting of composed feature pairs. We train sparse autoencoders on the activations of these toy models and find that, for very small toy models, the sparse autoencoders learn composed features instead of the true underlying features. We discuss future experiments to test these results.
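The basic recipe above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the dimensions, dictionary size, and L1 coefficient are all assumptions, and the "composed pair" data is a guess at the kind of synthetic distribution described.

```python
import torch

torch.manual_seed(0)

# Hypothetical setup (dimensions and coefficients are assumptions, not
# the authors' values): each sample is the sum of a random pair of
# "true" unit feature directions, i.e. a composed feature pair.
n_true, d_model, n_samples = 6, 16, 4096
true_dirs = torch.nn.functional.normalize(torch.randn(n_true, d_model), dim=1)
pairs = torch.randint(0, n_true, (n_samples, 2))
acts = true_dirs[pairs[:, 0]] + true_dirs[pairs[:, 1]]

# Minimal sparse autoencoder: linear encoder, ReLU, linear decoder,
# with an L1 penalty on hidden activations to encourage sparsity.
n_dict = 24  # overcomplete dictionary relative to the 6 true features
enc = torch.nn.Linear(d_model, n_dict)
dec = torch.nn.Linear(n_dict, d_model)
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
l1_coeff = 1e-3  # assumed sparsity coefficient

for _ in range(2000):
    h = torch.relu(enc(acts))
    recon = dec(h)
    loss = ((recon - acts) ** 2).mean() + l1_coeff * h.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    h = torch.relu(enc(acts))
    mse = ((dec(h) - acts) ** 2).mean().item()
    frac_active = (h > 1e-3).float().mean().item()  # sparsity of the code
```

The interesting question is then whether the learned dictionary rows align with the individual `true_dirs` or with sums of pairs of them, which is what distinguishes "true" from "composed" features.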
-
I stress-test sparse autoencoders trained on the residual stream of gpt2-small using tokens from OpenWebText and the LAMBADA benchmark. I find good on-distribution performance but poor off-distribution performance, especially on contexts longer than the training context.
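The failure mode above can be seen in miniature with a toy stand-in for distribution shift. This is a sketch under assumed dimensions, not the actual GPT-2 experiment: train an SAE on activations confined to one subspace, then evaluate it on an orthogonal subspace playing the role of off-distribution (e.g. longer-than-training-context) activations.

```python
import torch

torch.manual_seed(0)

# Toy stand-in for distribution shift (all dimensions are assumptions):
# training activations live in one 8-dim subspace, "off-distribution"
# activations in an orthogonal 8-dim subspace of the same model space.
d_model, d_sub, n = 32, 8, 2048
basis = torch.linalg.qr(torch.randn(d_model, 2 * d_sub)).Q  # orthonormal columns
train_acts = torch.randn(n, d_sub) @ basis[:, :d_sub].T
ood_acts = torch.randn(n, d_sub) @ basis[:, d_sub:].T

# Train a small SAE on the in-distribution activations only.
enc = torch.nn.Linear(d_model, 64)
dec = torch.nn.Linear(64, d_model)
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
for _ in range(1500):
    h = torch.relu(enc(train_acts))
    loss = ((dec(h) - train_acts) ** 2).mean() + 1e-4 * h.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Compare reconstruction error on the training distribution vs. the shift.
with torch.no_grad():
    mse_id = ((dec(torch.relu(enc(train_acts))) - train_acts) ** 2).mean().item()
    mse_ood = ((dec(torch.relu(enc(ood_acts))) - ood_acts) ** 2).mean().item()
```

The decoder learns to output vectors in the training subspace, so reconstruction error on the shifted distribution stays near the raw data variance while in-distribution error drops well below it.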
-
I examine features in a pre-trained sparse autoencoder trained on an MLP layer of a TinyStories model, both through a statistical lens and by inspecting a few of them by hand.