Optimization Algorithms in Machine Learning
== Understanding ==

Every ML model is trained by minimizing a loss function L(θ) over parameters θ. The loss surface is a high-dimensional landscape; optimization is the search for a good minimum.

'''Gradient descent''' takes steps proportional to the negative gradient: θ_{t+1} = θ_t − α∇L(θ_t). The challenge: computing the exact gradient over all training data is expensive. Mini-batch SGD approximates it with a small random sample, which is cheap but noisy (see the first sketch below).

'''Why SGD noise helps''': Counterintuitively, the noise from stochastic gradients often improves generalization. It helps training escape sharp local minima, which tend to generalize poorly, and settle into wider, flatter minima that generalize better (the sharp vs. flat minima hypothesis).

'''Adaptive optimizers''' (Adam, RMSProp): Different parameters often need different learning rates. Adam keeps per-parameter exponential moving averages of the gradient (first moment) and the squared gradient (second moment), and divides each update by the square root of the second-moment estimate: parameters with consistently large gradients receive smaller effective steps, while parameters with small or infrequent gradients receive larger ones. This accelerates training on the heterogeneous loss landscapes of deep networks (see the Adam sketch below).

'''The Adam vs. SGD debate''': Adam typically converges faster and requires less tuning. SGD with momentum plus a careful learning-rate schedule often finds slightly better final solutions (historically the choice for ImageNet training). For transformers and LLMs, AdamW is the de facto default.

'''Batch size considerations''': Large-batch training is efficient (better GPU utilization) but changes the gradient dynamics. The linear scaling rule: if you double the batch size, double the learning rate. Large-batch training tends toward sharp minima and may generalize worse; this is commonly addressed with linear learning-rate warmup, and with the LARS/LAMB optimizers for very large batches (see the final sketch below).
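
A minimal NumPy sketch of the mini-batch SGD loop described above. The helper names (<code>grad_fn</code>, <code>data</code>) are illustrative assumptions, not part of any particular library: <code>grad_fn(theta, batch)</code> is assumed to return the gradient of the loss on that batch.

<syntaxhighlight lang="python">
import numpy as np

def sgd(theta, grad_fn, data, batch_size=32, lr=0.01, steps=1000):
    """Mini-batch SGD: theta_{t+1} = theta_t - lr * (gradient estimate)."""
    rng = np.random.default_rng(0)
    for _ in range(steps):
        # A small random sample gives a cheap but noisy gradient estimate.
        batch = data[rng.choice(len(data), size=batch_size, replace=False)]
        theta = theta - lr * grad_fn(theta, batch)
    return theta
</syntaxhighlight>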
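
A sketch of a single Adam step, following the standard update from Kingma & Ba (2015). The bias-correction terms compensate for the zero-initialized moment estimates early in training:

<syntaxhighlight lang="python">
import numpy as np

def adam_step(theta, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; m and v are the running first/second moment estimates."""
    m = beta1 * m + (1 - beta1) * g        # EMA of gradients (first moment)
    v = beta2 * v + (1 - beta2) * g**2     # EMA of squared gradients (second moment)
    m_hat = m / (1 - beta1**t)             # bias correction (t starts at 1)
    v_hat = v / (1 - beta2**t)
    # Dividing by sqrt(v_hat) gives each parameter its own effective LR:
    # consistently large gradients -> smaller steps, and vice versa.
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
</syntaxhighlight>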
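
Finally, a sketch combining the linear scaling rule with linear warmup. The hyperparameter values (<code>base_lr</code>, <code>base_batch</code>, <code>warmup_steps</code>) are illustrative assumptions, not prescriptions:

<syntaxhighlight lang="python">
def scaled_lr(step, batch_size, base_lr=0.1, base_batch=256, warmup_steps=500):
    """Linear scaling rule with linear warmup (hyperparameters illustrative)."""
    target = base_lr * batch_size / base_batch     # double the batch, double the LR
    if step < warmup_steps:
        # Ramp up from near zero to avoid divergence at the large scaled LR.
        return target * (step + 1) / warmup_steps
    return target
</syntaxhighlight>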