Error Calculation
When training a neural network, error calculation measures the disparity between the expected output and
the actual prediction. It plays a crucial role in adjusting the model's weights and biases during the
learning process. The error is computed by subtracting the predicted output from the
expected output.
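As a minimal sketch of the subtraction described above (assuming a single scalar output; the function name is illustrative, not from the original text):

```python
def calculate_error(expected, predicted):
    # Error is the expected output minus the predicted output,
    # so a positive error means the prediction was too low.
    return expected - predicted

# Example: expected 1.0, predicted 0.75 -> error 0.25
print(calculate_error(1.0, 0.75))
```

The sign of the error indicates which direction the prediction missed in, which is what the update step below uses.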
Updating Weights and Bias
Updating weights and bias is a fundamental step in the training of neural networks. It involves adjusting
these parameters based on the calculated error, learning rate, and input values. By iteratively updating
weights and bias, the neural network aims to minimize prediction errors and improve its accuracy over time.
Learning Rate
The learning rate is a hyperparameter that controls how much to change the model in response to the estimated
error each time the model weights are updated. A higher learning rate means the model weights will be
updated more significantly. It’s a crucial factor that can affect the speed and quality of learning. Too
high a learning rate can cause the model to converge too quickly to a suboptimal solution, whereas too
low a learning rate can make the training process excessively slow.
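The trade-off between too high and too low a learning rate can be seen on a toy problem. The sketch below (an assumed example, not from the original text) runs plain gradient descent on f(x) = x², whose gradient is 2x:

```python
def gradient_descent(learning_rate, steps=20, x=1.0):
    # Minimize f(x) = x^2 starting from x = 1.0.
    # The gradient of x^2 is 2x, so each step is x -= lr * 2x.
    for _ in range(steps):
        x -= learning_rate * 2 * x
    return x

print(gradient_descent(0.1))    # converges close to the minimum at 0
print(gradient_descent(0.001))  # still far from 0 after 20 steps: too slow
print(gradient_descent(1.1))    # overshoots and diverges: |x| grows each step
```

With a moderate rate the iterate shrinks steadily toward the minimum; with a tiny rate it barely moves; with a rate that is too large each step overshoots the minimum and the iterates grow without bound.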