== Remembering ==
* '''Artificial neuron''' – A mathematical unit that receives one or more inputs, applies a weighted sum, adds a bias, and passes the result through an activation function to produce an output.
* '''Layer''' – A collection of neurons that process inputs in parallel. Networks are composed of an input layer, one or more hidden layers, and an output layer.
* '''Weight''' – A learnable scalar value attached to each connection between neurons. Weights determine the strength of influence one neuron has on another.
* '''Bias''' – An additional learnable parameter added to the weighted sum before applying the activation function, allowing the neuron to shift its output independently of its inputs.
* '''Activation function''' – A mathematical function applied to a neuron's output to introduce non-linearity. Common examples: ReLU, Sigmoid, Tanh, Softmax.
* '''Forward pass''' – The process of propagating input data through the network layer by layer to produce a prediction.
* '''Backpropagation''' – An algorithm that computes the gradient of the loss function with respect to each weight, enabling gradient-based learning.
* '''Loss function''' – A measure of how far the network's predictions are from the true labels. Examples: Mean Squared Error (MSE), Cross-Entropy Loss.
* '''Gradient descent''' – An optimization algorithm that iteratively updates weights in the direction that reduces the loss.
* '''Epoch''' – One full pass through the entire training dataset.
* '''Batch size''' – The number of training examples used in a single weight update step.
* '''Learning rate''' – A hyperparameter controlling the size of weight update steps during gradient descent.
* '''Overfitting''' – When a model learns the training data too well, including its noise, and fails to generalize to new data.
* '''Dropout''' – A regularization technique where random neurons are deactivated during training to prevent overfitting.
* '''Convolutional Neural Network (CNN)''' – A network architecture specialized for grid-like data (e.g., images) using convolution operations.
* '''Recurrent Neural Network (RNN)''' – An architecture where connections form directed cycles, enabling processing of sequential data.
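The neuron, weight, bias, activation function, and forward-pass entries above can be sketched in a few lines of NumPy. This is a minimal illustration, not a library API; the input values and weights are made up for the example.

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# One artificial neuron: weighted sum of the inputs, plus a bias,
# passed through an activation function (all values are illustrative).
x = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.1, 0.4, -0.2])   # weights, one per input
b = 0.3                          # bias

z = np.dot(w, x) + b             # weighted sum + bias
y = sigmoid(z)                   # forward pass produces the neuron's output
```

A full layer is the same computation done for many neurons at once, which is why frameworks express it as a matrix multiply (`W @ x + b`) rather than one dot product per neuron.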
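The training-related entries (loss function, gradient descent, learning rate, epoch) fit together in a short training loop. The sketch below trains a single sigmoid neuron with full-batch gradient descent on a toy, linearly separable dataset; the dataset, learning rate, and epoch count are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset (illustrative): the target is 1 when the inputs sum to a positive number.
X = rng.normal(size=(100, 2))
t = (X.sum(axis=1) > 0).astype(float)

w = np.zeros(2)   # weights, initialized to zero
b = 0.0           # bias
lr = 0.5          # learning rate: size of each update step
epochs = 200      # one epoch = one full pass over the dataset

for _ in range(epochs):
    z = X @ w + b
    y = 1.0 / (1.0 + np.exp(-z))   # sigmoid forward pass
    # Gradient of the mean cross-entropy loss with respect to z is (y - t) / N.
    grad_z = (y - t) / len(X)
    w -= lr * (X.T @ grad_z)       # gradient descent update for the weights
    b -= lr * grad_z.sum()         # and for the bias

# Fraction of training examples now classified correctly.
pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
accuracy = (pred == (t > 0.5)).mean()
```

Because every update uses all 100 examples, the batch size here equals the dataset size; mini-batch training would instead loop over smaller slices of `X` within each epoch.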