ICLR 2023 Workshop on
Sparsity in Neural Networks
On practical limitations and tradeoffs between sustainability and efficiency
Kigali, Rwanda / May 5th 2023
Hybrid (in person + virtual)
Call for Papers/Abstracts
Deep networks with billions of parameters trained on large datasets have achieved unprecedented success in applications ranging from medical diagnostics to urban planning and autonomous driving, to name a few. However, training such large models depends on exceptionally large and expensive computational resources. These infrastructures consume substantial energy, leave a massive carbon footprint, and often soon become obsolete and turn into e-waste. While there has been a persistent effort to improve the performance of machine learning models, their sustainability is often neglected. This realization has motivated the community to look more closely at the sustainability and efficiency of machine learning, for example by identifying the most relevant model parameters or model structures. In this workshop, we examine the community’s progress toward these goals and aim to identify areas that call for additional research effort. In particular, by bringing together researchers with diverse backgrounds, we will focus on the limitations of existing methods for model compression and discuss the tradeoffs between model size and performance. The following is a non-exhaustive list of questions we aim to address through our invited talks, panels, and accepted papers:
Where do we stand in evaluating and incorporating sustainability in machine learning? We make our models larger every day. Is this the right path to better learning?
Do we need better sparse training algorithms or better hardware support for the existing sparse training algorithms?
Hardware support for sparse training lags behind the algorithms. What are the challenges of hardware design for sparse and efficient training? Are GPUs the answer, or do we need new designs?
Our current theory can only analyze small neural networks. Can compression help us provide performance and reliability guarantees for learning?
What are the tradeoffs between sustainability, efficiency, and performance? Are these constraints competing against each other? If so, how can we find a balance?
Among different compression techniques, quantization has found the most applications in industry. What are the current experiences and challenges in deploying it?
How effective can sparsity be in different domains, ranging from reinforcement learning to vision and robotics?
The main goal of the workshop is to bring together researchers from academia and industry with diverse expertise and points of view on network compression, to discuss how to effectively evaluate machine learning pipelines and make them better comply with sustainability and efficiency constraints. The workshop will feature a diverse set of speakers (ranging from researchers with a hardware background to neurobiologists and members of the algorithmic ML community) to discuss sparse training algorithms and hardware limitations across machine learning domains, from robotics and task automation to vision, natural language processing, and reinforcement learning. The workshop aims to further develop these research directions for the machine learning community.