Mixture Of Experts
<div style="background-color: #4B0082; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;"> {{BloomIntro}} Mixture of Experts (MoE) is a neural network architecture in which the model is split into specialized sub-networks called "experts," with a learned router that activates only a subset for each input. This decouples total parameters from compute: a model can have 100B+ parameters yet activate only a fraction per token. MoE is the architecture behind Mixtral and, reportedly, GPT-4 and Gemini. </div> __TOC__ <div style="background-color: #000080; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;"> == <span style="color: #FFFFFF;">Remembering</span> == * '''Expert''' – A sub-network (typically an FFN block) that specializes in certain types of inputs. * '''Gating network (router)''' – A learned function mapping each token to a probability distribution over experts. * '''Sparse MoE''' – Only the top-K experts (typically K=1 or 2) are activated per token, so compute stays constant. * '''Top-K routing''' – Selecting the K highest-probability experts per token. * '''Expert capacity''' – The maximum number of tokens an expert processes per batch; overflow tokens are dropped. * '''Load balancing loss''' – An auxiliary loss encouraging uniform token distribution across experts. * '''Expert collapse''' – A failure mode where the router learns to route all tokens to one expert. * '''Active parameters''' – The parameters actually used per forward pass; far fewer than the total in an MoE. * '''Mixtral 8x7B''' – 8 experts per MoE layer, top-2 routing, 47B total, ~13B active per token. * '''Switch Transformer''' – Google's top-1 routing MoE, scaled to 1.6 trillion parameters. * '''Expert parallelism''' – Distributing different experts across different GPUs for scale.
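The top-K routing and renormalization terms above can be made concrete in a few lines. A minimal sketch in pure NumPy with toy shapes (the function name and numbers are illustrative, not from any real model):

```python
import numpy as np

def top_k_route(logits, k=2):
    """Select the top-k experts per token and renormalize their probabilities.

    logits: (num_tokens, num_experts) raw router scores.
    Returns (indices, weights), each of shape (num_tokens, k).
    """
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)        # softmax over experts
    idx = np.argsort(probs, axis=-1)[:, ::-1][:, :k]  # ids of the k best experts
    w = np.take_along_axis(probs, idx, axis=-1)
    w /= w.sum(axis=-1, keepdims=True)                # renormalize over the k chosen
    return idx, w

logits = np.array([[2.0, 0.5, 0.1, -1.0]])  # one token, four experts
idx, w = top_k_route(logits, k=2)
# experts 0 and 1 win; their renormalized weights sum to 1
```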
</div> <div style="background-color: #006400; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;"> == <span style="color: #FFFFFF;">Understanding</span> == The core motivation for MoE is '''parameter-compute decoupling''': in a dense model, doubling parameters doubles compute. In a sparse MoE with E experts and top-K routing, total parameters scale with E while compute stays roughly constant – only K experts fire per token. The router computes a score for each expert (a dot product of the token representation with learned expert embeddings), applies a softmax, and selects the top K. The token passes through each selected expert; the outputs are weighted by the router probabilities and summed. '''Why experts specialize''': Training dynamics naturally lead experts to focus on different input types. In a language MoE, different experts activate for code, scientific text, different languages, or different grammatical structures – without any explicit programming. '''Load balancing challenge''': Without intervention, routing is unstable – popular experts improve faster, attract more tokens, and so improve further. The auxiliary load balancing loss penalizes uneven distribution, forcing all experts to be used roughly equally. In practice, every other FFN layer in the transformer is replaced with an MoE layer (alternating dense and sparse), while attention layers remain dense.
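The route-weight-and-sum computation and the load balancing loss described above can be sketched end to end. This is a toy NumPy implementation with made-up sizes; real MoE layers batch tokens per expert, enforce capacity limits, and use fused kernels:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, k = 16, 8, 2
W_router = rng.normal(size=(d_model, n_experts)) * 0.02
# Each expert is a tiny two-layer FFN: d_model -> 4*d_model -> d_model
experts = [(rng.normal(size=(d_model, 4 * d_model)) * 0.02,
            rng.normal(size=(4 * d_model, d_model)) * 0.02)
           for _ in range(n_experts)]

def moe_forward(x):
    """x: (tokens, d_model). Route each token to its top-k experts, weight
    each expert's output by the router probability, and sum."""
    logits = x @ W_router
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)            # router softmax
    topk = np.argsort(probs, axis=-1)[:, ::-1][:, :k]     # top-k expert ids
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for e in topk[t]:
            w1, w2 = experts[e]
            h = np.maximum(x[t] @ w1, 0.0)                # ReLU FFN
            out[t] += probs[t, e] * (h @ w2)              # weight and sum
    # Switch-style auxiliary loss: n_experts * sum(f_e * p_e), where f_e is
    # the fraction of tokens whose top choice is expert e and p_e is the
    # mean router probability for expert e; minimized when both are uniform.
    top1 = topk[:, 0]
    f = np.bincount(top1, minlength=n_experts) / x.shape[0]
    p = probs.mean(axis=0)
    aux_loss = n_experts * np.sum(f * p)
    return out, aux_loss

x = rng.normal(size=(4, d_model))
y, aux = moe_forward(x)
```

In training, `aux_loss` would be added to the language-modeling loss with a small weight so the router is nudged toward even usage without overriding quality.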
</div> <div style="background-color: #8B0000; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;"> == <span style="color: #FFFFFF;">Applying</span> == '''Using Mixtral 8x7B for inference:''' <syntaxhighlight lang="python"> from transformers import AutoModelForCausalLM, AutoTokenizer import torch model = AutoModelForCausalLM.from_pretrained( "mistralai/Mixtral-8x7B-Instruct-v0.1", torch_dtype=torch.float16, device_map="auto" # Distributes experts across available GPUs ) tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1") prompt = "[INST] Explain quantum entanglement simply. [/INST]" inputs = tokenizer(prompt, return_tensors="pt").to(model.device) out = model.generate(**inputs, max_new_tokens=256, temperature=0.7) print(tokenizer.decode(out[0], skip_special_tokens=True)) </syntaxhighlight> ; MoE model comparison : '''Mixtral 8x7B''' – 8 experts, top-2, 47B total, ~13B active; strong open model : '''Mixtral 8x22B''' – 141B total, top-2; very high quality : '''DeepSeek-V2/V3''' – Fine-grained routing over very large expert pools (V2: 160 routed experts, top-6; V3: 256 routed experts, top-8), plus shared experts : '''Switch Transformer''' – Top-1 routing, scales to 1.6T parameters; Google research </div> <div style="background-color: #8B4500; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;"> == <span style="color: #FFFFFF;">Analyzing</span> == {| class="wikitable" |+ Dense vs. sparse MoE trade-offs (equal total parameters) ! Property !! Dense Model !! Sparse MoE (top-2 of 8) |- | Total parameters || N || N |- | Active parameters || N || N×(2/8) = N/4 |- | Compute per token || ∝N || ∝N/4 |- | Memory required || N × dtype || N × dtype |- | Communication || None || Inter-device expert routing |} At equal total size, the MoE holds the same parameters in memory but touches only a quarter of them per token – the price is routing communication and the failure modes below. '''Failure modes''': Expert collapse, token dropping when experts are full, load imbalance causing some experts to never train effectively, and inter-device communication bottlenecks in expert parallelism.
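The trade-off arithmetic can also be run from the other direction – starting from one expert's size, as Mixtral-style models are built. A back-of-envelope helper (hypothetical function; FFN-only counting, which is why the naive numbers overshoot Mixtral's published figures):

```python
def moe_budget(expert_ffn_b, n_experts=8, top_k=2):
    """FFN-only back-of-envelope: build an MoE from n_experts copies of an
    expert_ffn_b-billion-parameter expert and report the totals.
    Real models share attention and embedding weights across experts, so
    e.g. Mixtral lands at 47B total / ~13B active, not the naive 56B / 14B."""
    total = expert_ffn_b * n_experts
    active = expert_ffn_b * top_k
    return {"total_B": total, "active_B": active,
            "active_fraction": top_k / n_experts}

print(moe_budget(7.0))  # 8 experts of ~7B each, top-2 routing
# {'total_B': 56.0, 'active_B': 14.0, 'active_fraction': 0.25}
```

The `active_fraction` of 0.25 is the same 2/8 ratio as the table's active-parameter row.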
</div> <div style="background-color: #483D8B; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;"> == <span style="color: #FFFFFF;">Evaluating</span> == MoE-specific evaluation: # Expert load distribution – Gini coefficient or entropy over expert usage; alert if any expert handles more than 3× the average load. # Expert specialization – do different experts handle semantically distinct inputs? Visualize the token types routed to each expert. # Routing consistency – does the same input consistently route to the same experts? High variance suggests instability. # Quality vs. active FLOPs – compare models at equal compute budgets, not equal total parameters. </div> <div style="background-color: #2F4F4F; color: #FFFFFF; padding: 20px; border-radius: 8px; margin-bottom: 15px;"> == <span style="color: #FFFFFF;">Creating</span> == Designing an MoE architecture: Replace every other FFN with an MoE layer. Use top-2 routing for stability. Add an auxiliary load balancing loss (weight ~0.01). Set the expert capacity factor to 1.25. Implement expert parallelism by assigning expert groups to different GPUs. Monitor expert usage histograms every 1000 steps. At inference, cache frequently activated experts in faster memory for common access patterns. [[Category:Artificial Intelligence]] [[Category:Deep Learning]] [[Category:Large Language Models]] </div>