Federated Learning
== Remembering ==
* '''Federated learning''' – A distributed ML approach where model training happens locally on client devices, and only model updates (gradients or weights) are shared, never raw data.
* '''Client''' – A participating device or institution that holds local data and trains a local model. Examples: smartphones, hospitals, banks.
* '''Server''' – The central coordinator that aggregates client updates, computes a new global model, and distributes it back to clients.
* '''Round''' – One communication cycle: server sends model → clients train locally → clients send updates → server aggregates.
* '''FedAvg (Federated Averaging)''' – The foundational FL algorithm by McMahan et al. (2017); aggregates client models by weighted averaging of parameters.
* '''Local epochs''' – The number of training epochs each client performs on local data before sending updates to the server.
* '''Non-IID data (Non-Independent and Identically Distributed)''' – A key challenge in FL: each client's data reflects its own distribution, which may differ substantially from other clients'.
* '''Communication efficiency''' – Minimizing the amount of data transferred between clients and server, a key FL challenge.
* '''Gradient compression''' – Techniques to reduce the size of gradients transmitted (sparsification, quantization).
* '''Differential privacy (DP)''' – A mathematical privacy guarantee that limits what can be learned about any individual's data from shared model updates.
* '''Secure aggregation''' – Cryptographic protocols ensuring the server can compute the sum of client updates without seeing any individual update.
* '''Model poisoning''' – An attack where malicious clients submit corrupted updates to degrade or manipulate the global model.
* '''Byzantine fault tolerance''' – The ability of an FL system to produce correct results even when some participants are malicious or faulty.
* '''Cross-device FL''' – Federated learning across many mobile devices (millions of clients, heterogeneous, unreliable).
* '''Cross-silo FL''' – Federated learning across a small number of organizations (hospitals, banks), each with large datasets.
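The FedAvg aggregation step described above can be sketched in a few lines: the server averages client parameter vectors, weighting each client by its number of local training examples. This is a minimal illustration, not a production implementation (real systems use frameworks such as TensorFlow Federated or Flower); the function name <code>fedavg</code> and the toy two-parameter models are invented for the example.

<syntaxhighlight lang="python">
def fedavg(client_weights, client_sizes):
    """Weighted average of client parameters, as in FedAvg (McMahan et al., 2017).

    client_weights: one parameter vector (list of floats) per client
    client_sizes:   number of local training examples per client
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        # Each client's contribution is proportional to its data volume.
        for i in range(dim):
            global_weights[i] += (n / total) * w[i]
    return global_weights

# One round: three clients with different data volumes send updated weights.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 30, 60]
print(fedavg(clients, sizes))  # the client with more data pulls the average toward its model
</syntaxhighlight>

Note the design choice: weighting by dataset size means clients with more data dominate the global model, which is exactly why non-IID data across clients is a challenge for plain FedAvg.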
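The secure aggregation idea from the list can also be sketched: each pair of clients agrees on a random mask, one adds it and the other subtracts it, so the masks cancel in the sum and the server learns only the aggregate. This toy version (function name <code>mask_updates</code> and the shared-seed shortcut are assumptions for illustration) omits the key exchange and dropout handling that real protocols require.

<syntaxhighlight lang="python">
import random

def mask_updates(updates, seed=0):
    """Apply cancelling pairwise additive masks to client updates.

    In a real protocol each pair (i, j) would derive its mask from a
    shared secret; here a common seed stands in for that key exchange.
    """
    rng = random.Random(seed)
    n = len(updates)
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(len(updates[0])):
                m = rng.uniform(-1.0, 1.0)
                masked[i][k] += m  # client i adds the pair's mask
                masked[j][k] -= m  # client j subtracts it, so it cancels in the sum
    return masked

updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
masked = mask_updates(updates)
# The server sees only masked vectors, yet the coordinate-wise sums still
# match the true sums [9.0, 12.0] up to floating-point rounding.
sums = [sum(col) for col in zip(*masked)]
print(sums)
</syntaxhighlight>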