
@Adhithya-Laxman commented Oct 22, 2025

Description

This PR implements the Momentum SGD optimizer using pure NumPy as part of the effort to add neural_network/optimizers to the repository.

This PR addresses part of issue #13662.

What does this PR do?

  • Implements a Momentum SGD optimizer that accelerates gradient descent by adding momentum to the weight updates
  • Uses velocity accumulation to dampen oscillations and speed up convergence
  • Provides a clean, educational implementation without external deep learning frameworks

Implementation Details

  • Algorithm: SGD with momentum
  • Update rule (see the NumPy sketch after this list):
    velocity = momentum * velocity - learning_rate * gradient
    param = param + velocity
    
  • Pure NumPy: No PyTorch, TensorFlow, or other frameworks required
  • Educational focus: Clear variable names, detailed docstrings, and comments
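
A minimal sketch of this update rule in plain NumPy, assuming an illustrative MomentumSGD class with an update method (the names are placeholders, not necessarily the API in momentum_sgd.py):

import numpy as np


class MomentumSGD:
    """Illustrative momentum SGD: v <- momentum*v - lr*grad; param <- param + v."""

    def __init__(self, learning_rate: float = 0.01, momentum: float = 0.9) -> None:
        self.learning_rate = learning_rate
        self.momentum = momentum
        self.velocity = None  # velocity buffer, created lazily to match the param shape

    def update(self, param: np.ndarray, gradient: np.ndarray) -> np.ndarray:
        # Initialize the velocity buffer on the first call.
        if self.velocity is None:
            self.velocity = np.zeros_like(param)
        # velocity = momentum * velocity - learning_rate * gradient
        self.velocity = self.momentum * self.velocity - self.learning_rate * gradient
        # param = param + velocity
        return param + self.velocity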

Features

✅ Complete docstrings with parameter descriptions
✅ Type hints for all function parameters and return values
✅ Doctests for correctness validation
✅ Usage example demonstrating the optimizer on a quadratic function
✅ PEP8 compliant code formatting
✅ Momentum accumulation with configurable momentum factor

Testing

All doctests pass:

python -m doctest neural_network/optimizers/momentum_sgd.py -v

Linting passes:

ruff check neural_network/optimizers/momentum_sgd.py

Example output demonstrates proper convergence behavior.
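
For illustration, a self-contained sketch (not taken from the PR) of the quadratic usage example described above, minimizing f(x) = x**2, whose gradient is 2*x:

import numpy as np

learning_rate, momentum = 0.1, 0.9
x = np.array([5.0])               # start away from the minimum at x = 0
velocity = np.zeros_like(x)
for _ in range(200):
    gradient = 2 * x              # gradient of f(x) = x**2
    velocity = momentum * velocity - learning_rate * gradient
    x = x + velocity
print(x)                          # converges toward [0.]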

This PR adds the second optimizer in the planned sequence outlined in #13662.

References

Checklist

  • I have read CONTRIBUTING.md
  • This pull request is all my own work -- I have not plagiarized
  • I know that pull requests will not be merged if they fail the automated tests
  • This PR only changes one algorithm file
  • All new Python files are placed inside an existing directory
  • All filenames are in all lowercase characters with no spaces or dashes
  • All functions and variable names follow Python naming conventions
  • All function parameters and return values are annotated with Python type hints
  • All functions have doctests that pass the automated testing
  • All new algorithms include at least one URL that points to Wikipedia or another similar explanation

Next Steps

Additional optimizers (Adam, Adagrad, NAG, Muon) will be submitted in follow-up PRs to maintain focused, reviewable contributions.
