Academic Awards 2024 booklet

Learning Continually Under Changing Data Distributions

Current AI systems are highly effective at enhancing our lives, but they face a critical limitation: they struggle to adapt to rapid and continuous changes in data. Adaptability is crucial for AI systems to keep pace with the evolving world around us. Climate change, for instance, demands accurate weather predictions, which require AI models that adapt swiftly to new climate patterns. The constant influx of images and videos on social media alters our preferences, calling for recommendation systems that can promptly adjust to our shifting behaviors and deliver a tailored, efficient user experience.

Achieving this in AI systems built on deep neural networks poses several challenges, including catastrophic forgetting of previously learned data and the loss of the expressivity and plasticity needed to adapt to new tasks. In this research, we address these challenges through the lens of learned representations, training regimes, and the utilization of model capacity. We showed that learning sparse representations, unlike the typical dense ones, is more effective at retaining previous knowledge. In addition, effective utilization of the neural network, by recycling its unused capacity, improves the ability to learn over time. Furthermore, topological adaptation of neural networks improves learning speed.

Figure 1: Learning sparse representations for the MNIST dataset, which consists of digits from 0 to 9. The connections of the input layer are grouped at the locations that identify a digit, which reduces interference with, and forgetting of, other digits.

Figure 2: Recycling the unused capacity of the network via our proposed method, ReDo, improves the performance of reinforcement learning agents.
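One common way to obtain sparse representations of the kind described above is to keep only the top-k activations in a layer and zero out the rest, so that different inputs engage mostly disjoint sets of units. The following is a minimal NumPy sketch of that idea, not the specific method used in this research; the function name and the choice of top-k sparsification are illustrative.

```python
import numpy as np

def sparsify_top_k(activations, k):
    """Keep the k largest activations per sample and zero the rest.

    Sparse codes like this reduce overlap between the units used by
    different inputs, which limits interference (and thus forgetting)
    when the model is trained on new data.
    """
    activations = np.asarray(activations, dtype=float)
    out = np.zeros_like(activations)
    # Indices of the k largest entries in each row.
    idx = np.argpartition(activations, -k, axis=-1)[..., -k:]
    # Copy only those entries into the otherwise-zero output.
    np.put_along_axis(
        out, idx, np.take_along_axis(activations, idx, axis=-1), axis=-1
    )
    return out

# Example: keep the 2 strongest of 4 hidden activations.
sparse = sparsify_top_k([[0.1, 0.9, 0.3, 0.7]], k=2)
# sparse is [[0.0, 0.9, 0.0, 0.7]]
```

In practice the same masking would be applied inside the network's forward pass, with k controlling the trade-off between sparsity (less interference) and capacity per input.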
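The recycling idea behind ReDo can be illustrated with a small sketch: periodically measure how active each hidden unit is, and reset units whose activity share falls below a threshold so they are free to learn new features. This is a hedged NumPy illustration of the general mechanism, assuming post-ReLU activations; the function name, the threshold `tau`, and the reinitialization scale are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def recycle_dormant_units(h, w_in, w_out, tau=0.1, rng=None):
    """Reset 'dormant' hidden units so their capacity can be reused.

    h     : (batch, units) post-ReLU activations of one hidden layer.
    w_in  : (inputs, units) incoming weights of that layer.
    w_out : (units, outputs) outgoing weights of that layer.

    A unit is dormant if its mean activation, normalized by the layer
    mean, is at most tau. Dormant units get fresh random incoming
    weights (so they can learn again) and zeroed outgoing weights (so
    the reset does not disturb the network's current predictions).
    """
    rng = np.random.default_rng(rng)
    # Per-unit activity share relative to the layer average.
    score = h.mean(axis=0) / (h.mean() + 1e-8)
    dormant = score <= tau
    # Reinitialize incoming weights of dormant units.
    w_in[:, dormant] = rng.normal(0.0, 0.05, size=(w_in.shape[0], dormant.sum()))
    # Zero outgoing weights so reset units start with no influence.
    w_out[dormant, :] = 0.0
    return dormant
```

Applied periodically during training, this keeps otherwise-dead units contributing, which is the sense in which unused capacity is "recycled".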
