Shlezinger et al., 2020 - Google Patents
Federated learning with quantization constraints
- Document ID
- 10895277203537508218
- Author
- Shlezinger N
- Chen M
- Eldar Y
- Poor H
- Cui S
- Publication year
- 2020
- Publication venue
- ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Snippet
Traditional deep learning models are trained on centralized servers using labeled sample data collected from edge devices. This data often includes private information, which the users may not be willing to share. Federated learning (FL) is an emerging approach to train …
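The setting the snippet describes, clients compressing their model updates before a server aggregates them, can be illustrated with a minimal sketch. This is not the paper's actual scheme; the uniform scalar quantizer, the least-squares local task, and all parameter choices below are illustrative assumptions:

```python
import numpy as np

def quantize(w, num_bits=4):
    """Illustrative uniform scalar quantizer: snap each entry of w to
    the nearest of 2**num_bits levels spanning the vector's own range."""
    levels = 2 ** num_bits
    lo, hi = w.min(), w.max()
    if hi == lo:                       # constant vector: nothing to quantize
        return w.copy()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

def federated_round(global_w, clients, lr=0.1, num_bits=4):
    """One round: each client computes a local update on its private data,
    quantizes it to meet the bit budget, and the server averages them."""
    updates = []
    for X, y in clients:
        grad = X.T @ (X @ global_w - y) / len(y)   # least-squares gradient
        updates.append(quantize(-lr * grad, num_bits))
    return global_w + np.mean(updates, axis=0)

# Toy run: two clients jointly fitting a linear model without sharing data.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(100, w_true.size))
    clients.append((X, X @ w_true))

w = np.zeros_like(w_true)
for _ in range(200):
    w = federated_round(w, clients)
```

Because the quantizer's range here shrinks along with the updates, the quantization error contracts as training proceeds and the averaged model still approaches the data-generating weights.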
Classifications
- H—ELECTRICITY
  - H03—BASIC ELECTRONIC CIRCUITRY
    - H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
      - H03M3/00—Conversion of analogue values to or from differential modulation
        - H03M3/30—Delta-sigma modulation
          - H03M3/39—Structural details of delta-sigma modulators, e.g. incremental delta-sigma modulators
      - H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same information or similar information or a subset of information is represented by a different sequence or number of digits
        - H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
          - H03M7/3002—Conversion to or from differential modulation
          - H03M7/3082—Vector coding
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
      - H04L1/00—Arrangements for detecting or preventing errors in the information received
        - H04L1/004—Arrangements for detecting or preventing errors in the information received by using forward error control
          - H04L1/0056—Systems characterized by the type of code used
            - H04L1/0057—Block codes
      - H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communication
        - H04L9/08—Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
          - H04L9/0816—Key establishment, i.e. cryptographic processes or cryptographic protocols whereby a shared secret becomes available to two or more parties, for subsequent use
      - H04L25/00—Baseband systems
        - H04L25/02—Details; Arrangements for supplying electrical power along data transmission lines
          - H04L25/03—Shaping networks in transmitter or receiver, e.g. adaptive shaping networks; Receiver end arrangements for processing baseband signals
Similar Documents

Publication | Title
---|---
Shlezinger et al. | Federated learning with quantization constraints
Shlezinger et al. | UVeQFed: Universal vector quantization for federated learning
Dai et al. | Nonlinear transform source-channel coding for semantic communications
Silva et al. | A framework for control system design subject to average data-rate constraints
CN113222179A | Federal learning model compression method based on model sparsification and weight quantization
Oh et al. | Communication-efficient federated learning via quantized compressed sensing
Lan et al. | Communication-efficient federated learning for resource-constrained edge devices
Whang et al. | Neural distributed source coding
Kipnis et al. | Gaussian approximation of quantization error for estimation from compressed data
Zong et al. | Communication reducing quantization for federated learning with local differential privacy mechanism
Chen et al. | Communication-efficient design for quantized decentralized federated learning
Yue et al. | Communication-efficient federated learning via predictive coding
Abdi et al. | Reducing communication overhead via CEO in distributed training
Zhang et al. | Fundamental limits of communication efficiency for model aggregation in distributed learning: A rate-distortion approach
Chen et al. | DNN gradient lossless compression: Can GenNorm be the answer?
Liang et al. | Wyner-Ziv gradient compression for federated learning
Ozyilkan et al. | Neural distributed compressor discovers binning
Chen et al. | Information compression in the AI era: Recent advances and future challenges
Shlezinger et al. | Quantized federated learning
Cuvelier et al. | Time-invariant prefix coding for LQG control
Zhang et al. | An adaptive distributed source coding design for distributed learning
Saha et al. | Efficient randomized subspace embeddings for distributed optimization under a communication budget
Li et al. | Minimax learning for remote prediction
Shirazinia et al. | Distributed quantization for measurement of correlated sparse sources over noisy channels
Liang et al. | Improved communication efficiency for distributed mean estimation with side information