Regression Model
This neural network takes 4 input features and predicts a single continuous output value, the typical setup for regression tasks where the goal is a numerical prediction rather than a class label.
Model Architecture:
- Input Layer: 4 neurons (one for each feature)
- Hidden Layer: 4 neurons with a nonlinear activation function
- Output Layer: 1 neuron (linear activation for regression)
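The architecture above can be sketched in a few lines of NumPy. The text does not specify the hidden activation, so this sketch assumes tanh; the example input values are arbitrary stand-ins for the slider settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights for a 4-4-1 network, matching the architecture above.
W1 = rng.normal(0, 0.5, size=(4, 4))  # input -> hidden
b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, size=(4, 1))  # hidden -> output
b2 = np.zeros(1)

def forward(x):
    """Forward pass: tanh hidden layer, linear output for regression."""
    h = np.tanh(x @ W1 + b1)   # hidden activations
    return (h @ W2 + b2)[0]    # single linear output neuron

x = np.array([0.5, -1.0, 0.3, 0.8])  # example slider values
print(forward(x))
```

The output layer is deliberately linear: squashing it through an activation would restrict the range of values the network could predict.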
Adjust the input sliders and set your target value to see how the network learns to minimize the difference between its prediction and the target.
Backpropagation Explained
Backpropagation calculates how much each weight contributes to the error and adjusts them to reduce the difference between the predicted output and the target value.
Training Process:
- Forward pass computes the prediction
- Compare prediction with target value
- Calculate Mean Squared Error (MSE)
- Backpropagate error through the network
- Update weights using gradient descent
- Repeat until error is minimized
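The six steps above can be written out end-to-end in NumPy. This is a sketch, not the demo's actual implementation: it assumes a tanh hidden activation, a learning rate of 0.05, and placeholder input/target values, none of which are specified in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.5, size=(4, 4)); b1 = np.zeros(4)  # input -> hidden
W2 = rng.normal(0, 0.5, size=(4, 1)); b2 = np.zeros(1)  # hidden -> output

x = np.array([0.5, -1.0, 0.3, 0.8])  # input features (slider values)
y_true = 2.0                         # target value
lr = 0.05                            # learning rate

for step in range(200):
    # 1-2. Forward pass computes the prediction to compare with the target.
    h = np.tanh(x @ W1 + b1)
    y_pred = (h @ W2 + b2)[0]

    # 3. Mean Squared Error for a single sample (n = 1).
    loss = (y_true - y_pred) ** 2

    # 4. Backpropagate: apply the chain rule from the loss back to each weight.
    d_pred = -2.0 * (y_true - y_pred)          # dL/dy_pred
    dW2 = np.outer(h, [d_pred])                # dL/dW2
    db2 = np.array([d_pred])
    d_z = (W2[:, 0] * d_pred) * (1 - h ** 2)   # through the tanh derivative
    dW1 = np.outer(x, d_z)
    db1 = d_z

    # 5. Gradient descent nudges each weight against its gradient.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    # 6. The loop repeats until the error is small.
```

After a few hundred iterations the prediction settles close to the target, which is exactly the behavior the "Forward Pass" / "Backward Pass" buttons animate one step at a time.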
Click "Forward Pass" then "Backward Pass" to see this in action!
Loss Functions
For regression problems, we typically use Mean Squared Error (MSE) to measure how far our predictions are from the target values.
Mean Squared Error:
MSE = (1/n) Σ (y_true - y_pred)²
Where:
- y_true = target value
- y_pred = predicted value
- n = number of samples (1 in our case)
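The formula translates directly into code. Here it is evaluated over three hypothetical samples (the demo itself uses n = 1, which reduces to a single squared difference):

```python
import numpy as np

# MSE per the formula above, with made-up targets and predictions.
y_true = np.array([2.0, -1.0, 0.5])
y_pred = np.array([1.5, -0.5, 0.5])

mse = np.mean((y_true - y_pred) ** 2)  # (0.25 + 0.25 + 0.0) / 3
print(mse)
```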
Why Minimize MSE?
MSE gives higher weight to larger errors, which makes it sensitive to outliers but effective for most regression problems. The training process adjusts weights to gradually reduce MSE.
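A quick numerical check makes the outlier sensitivity concrete: squaring turns an error that is 6x larger into a loss contribution that is 36x larger.

```python
# Squaring weights large errors far more heavily than small ones.
errors = [0.5, 1.0, 3.0]
squared = [e ** 2 for e in errors]
print(squared)  # [0.25, 1.0, 9.0] -- the 3.0 error dominates the total loss
```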