[ICLR 2021] HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients
[ICLR 2023] Pruning Deep Neural Networks from a Sparsity Perspective
[ICLR 2025] Probe Pruning: Accelerating LLMs through Dynamic Pruning via Model-Probing
[Journal of Turbulence, DCC 2022] Dimension Reduced Turbulent Flow Data From Deep Vector Quantizers
[DCC 2020] DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression
InvarDiff: Cross-Scale Invariance Caching for Accelerated Diffusion Models
A collection of dataset distillation papers.
[arXiv] ColA: Collaborative Adaptation with Gradient Learning
[CVPR 2025] "Early-Bird Diffusion: Investigating and Leveraging Timestep-Aware Early-Bird Tickets in Diffusion Models for Efficient Training" by Lexington Whalen, Zhenbang Du, Haoran You, Chaojian Li, Sixu Li, and Yingyan (Celine) Lin.
Repository for the SS24 Efficient Machine Learning class at FSU Jena
This project investigates the efficacy of integrating context distillation with parameter-efficient tuning methods such as LoRA and QLoRA, as well as traditional fine-tuning, using Facebook's pre-trained OPT-125M model.
[DCC 2020] Deep Clustering of Compressed Variational Embeddings
On-Statistical-Efficiency-in-Learning
[IEEE BigData 2019] Restricted Recurrent Neural Networks