Call for Papers
Important Dates
Submissions Open: January 15th, 2023
Submission Deadline: February 8th, 2023 (extended from February 3rd), 23:59 AoE (Anywhere on Earth)
Author Notification: March 3rd, 2023
Workshop Date: May 4th & 5th, 2023
Submission Instructions
Workshop submission and review are handled through OpenReview.
Link: https://openreview.net/group?id=ICLR.cc/2023/Workshop/SNN
Eligible Work
We aim to showcase:
The latest research innovations at all stages of the research process, from work-in-progress to recently published papers
We define “recent” as presented within one year of the workshop, i.e., the manuscript first became publicly available (on arXiv or elsewhere) no earlier than February 3rd, 2022.
Position or survey papers on any topics relevant to this workshop (see Topics of Interest below)
Both technical and position papers can be up to 8 pages in length, excluding references and appendices. We encourage work-in-progress submissions and expect most submissions to be approximately 4 pages. Papers may be submitted in any of the ICLR, NeurIPS, or ICML conference formats.
This workshop is non-archival and will not have proceedings. We permit submissions that are under review or concurrently submitted elsewhere (with the exception of ICLR 2023). Submissions will receive one of three possible decisions:
Accept (Spotlight Presentation). The authors will be invited to present the work during the workshop, with live Q&A.
Accept (Poster Presentation). The authors will be invited to present their work as a poster during the workshop’s interactive poster sessions.
Reject. The paper will not be presented at the workshop.
Topics of Interest
Algorithms for Sparsity
Pruning, both post-training for inference and during training
Algorithms for fully sparse training (fixed or dynamic), including biologically inspired algorithms
Algorithms for ephemeral (activation) sparsity
Sparsely activated expert models
Scaling laws for sparsity
Sparsity in deep reinforcement learning
Systems for Sparsity
Libraries, kernels, and compilers for accelerating sparse computation
Hardware with support for sparse computation
Theory and Science of Sparsity
When overparameterization is (and is not) necessary
Optimization behavior of sparse networks
Representation ability of sparse networks
Sparsity and generalization
The stability of sparse models
Forgetting due to sparsity, including fairness, privacy, and bias concerns
Connecting neural network sparsity with traditional sparse dictionary modeling
Applications for Sparsity
Resource-efficient learning at the edge or in the cloud
Data-efficient learning for sparse models
Communication-efficient distributed or federated learning with sparse models
Graph and network science applications
Reviewing Criteria
Our goal is to build a broad community around questions related to neural network sparsity. As such, we aim to accept all submissions that (1) are relevant to the workshop's topic areas, (2) are technically well-substantiated, and (3) present non-trivial or previously unknown results.
Reviewing will be conducted in a double-blind fashion.