== Understanding ==

A neural network learns by adjusting its weights to minimize a loss function. The learning process has two phases, repeated iteratively:

'''Forward pass''': Input data flows through each layer. Each neuron computes a weighted sum of its inputs, adds a bias, and applies an activation function. This produces progressively more abstract representations until the final output layer produces a prediction.

'''Backward pass (backpropagation)''': The error between the prediction and the true label is computed using the loss function. Using the chain rule of calculus, the gradient of this error with respect to every weight in the network is computed, flowing backward from output to input. Weights are then nudged in the direction that reduces the error; both phases appear in the training-loop sketch below.

Think of it like tuning a radio dial: you make small adjustments (gradient steps) based on how much static (loss) you hear, until the signal (prediction) becomes clear.

The power of neural networks comes from their ability to learn hierarchical representations. Early layers detect simple features (edges in an image, individual characters in text), while deeper layers combine these into increasingly complex abstractions (shapes, words, concepts). This emergent feature learning is why neural networks outperform hand-engineered feature extraction on most complex tasks.

The choice of '''activation function''' is critical. Without non-linear activations, stacking layers would be mathematically equivalent to a single linear transformation, giving no benefit (see the numerical check below). ReLU (Rectified Linear Unit), which simply outputs max(0, x), has become the default because it avoids the vanishing gradient problem that plagued earlier sigmoid and tanh activations.
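To make the two phases concrete, here is a minimal training-loop sketch in NumPy. The two-layer shape, the mean-squared-error loss, and the learning rate are illustrative assumptions rather than anything the article prescribes; in practice, libraries compute these gradients automatically.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative sketch: a tiny two-layer network on random data.
# All sizes, the MSE loss, and the learning rate are arbitrary choices.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))   # 4 samples, 3 features
y = rng.normal(size=(4, 1))   # one scalar target per sample

W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)   # hidden layer (5 units)
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)   # output layer
lr = 0.01                                       # size of each gradient "nudge"

for step in range(100):
    # Forward pass: weighted sum + bias, then activation, layer by layer.
    z1 = X @ W1 + b1              # pre-activation of hidden layer
    h = np.maximum(0, z1)         # ReLU activation: max(0, x)
    y_hat = h @ W2 + b2           # prediction

    loss = np.mean((y_hat - y) ** 2)   # mean squared error

    # Backward pass: chain rule, flowing from output back to input.
    d_yhat = 2 * (y_hat - y) / len(X)   # dLoss/dy_hat
    dW2 = h.T @ d_yhat                  # dLoss/dW2
    db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T                 # error propagated to hidden layer
    d_z1 = d_h * (z1 > 0)               # ReLU gradient: 1 where input > 0
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0)

    # Gradient step: nudge every weight in the direction that reduces loss.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
</syntaxhighlight>

Each iteration typically lowers the loss slightly, mirroring the radio-dial analogy: many small adjustments guided by the error signal.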
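Two of the claims above can be checked numerically: stacked linear layers collapse to a single linear map, and the sigmoid gradient is bounded well below ReLU's. The shapes and values below are assumptions for illustration.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)

# Two stacked linear layers with no activation in between (biases
# omitted for brevity)...
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
stacked = W2 @ (W1 @ x)

# ...equal a single linear transformation with weight matrix W2 @ W1,
# so the extra layer adds no expressive power.
assert np.allclose(stacked, (W2 @ W1) @ x)

# ReLU's gradient is exactly 1 for positive inputs, while the sigmoid
# gradient never exceeds 0.25; multiplying many sigmoid gradients
# together shrinks the error signal (the vanishing gradient problem).
z = np.linspace(-3.0, 3.0, 7)
relu_grad = (z > 0).astype(float)   # derivative of max(0, x)
sig = 1.0 / (1.0 + np.exp(-z))
sig_grad = sig * (1.0 - sig)        # maximum value 0.25, at z = 0
print(relu_grad)                    # [0. 0. 0. 0. 1. 1. 1.]
print(round(sig_grad.max(), 2))     # 0.25
</syntaxhighlight>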