LOW-RANK FACTORIZATIONS ARE INDIRECT ENCODINGS FOR DEEP NEUROEVOLUTION
My latest paper is available on arXiv: Low-Rank Factorizations are Indirect Encodings for Deep Neuroevolution.
The general idea is that we can search for stronger neural networks in a gradient-free fashion by restricting the search to low-rank networks. We show that it works well for language modeling and reinforcement learning tasks. It’s essentially a crossover between the following papers (a toy sketch of the mechanism follows the list):
- LoRA: Low-Rank Adaptation of Large Language Models
- Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning
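To make the core idea concrete, here is a minimal sketch of evolving low-rank factors instead of a full weight matrix. Everything here is illustrative rather than the paper's exact setup: the rank, mutation strength, toy fitness function, and the simple (1+1) hill-climbing loop are all stand-in assumptions. The point is just that the genome lives in the low-rank factor space (rank × (d_out + d_in) parameters) and is decoded into a full matrix for evaluation, which is what makes the factorization an indirect encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 64, 64, 4  # rank << min(d_out, d_in)
sigma = 0.02                   # mutation strength (illustrative)

# Genome: low-rank factors instead of the full d_out x d_in matrix.
# This shrinks the search space from d_out*d_in parameters
# down to rank*(d_out + d_in).
U = rng.normal(0, 0.1, (d_out, rank))
V = rng.normal(0, 0.1, (rank, d_in))

def decode(U, V):
    """Indirect encoding: expand the genome into a full weight matrix."""
    return U @ V

def mutate(U, V):
    """GA-style Gaussian mutation applied in the low-rank space."""
    return (U + sigma * rng.standard_normal(U.shape),
            V + sigma * rng.standard_normal(V.shape))

def fitness(W):
    # Toy placeholder objective; a real run would score W on the task,
    # e.g. an RL episode return or a language-modeling loss.
    return -np.linalg.norm(W - np.eye(d_out))

# A few generations of a simple (1+1) evolutionary loop.
best = (U, V)
for _ in range(8):
    cand = mutate(*best)
    if fitness(decode(*cand)) > fitness(decode(*best)):
        best = cand
```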
I’ll be presenting it virtually at the Neuroevolution@Work workshop at GECCO 2025.