🚀 NeuroGrad

NeuroGrad Logo

A Pure Python Deep Learning Framework with Automatic Differentiation


Built from scratch with no AI assistance - showcasing pure algorithmic understanding


🌟 Overview

NeuroGrad is a lightweight, educational deep learning framework built entirely from scratch in Python. It implements automatic differentiation (backpropagation) with a clean, intuitive API similar to PyTorch.

Perfect for:

  • 🎓 Learning: Understanding how deep learning frameworks work
  • 🔬 Research: Rapid prototyping of new algorithms
  • 📚 Education: Teaching autodiff and neural network concepts
  • 🛠️ Experimentation: Testing custom operations

Educational Foundation: Built following Andrew Ng's Deep Learning Specialization principles with minimal AI assistance for core implementation.


✨ Key Features

🔥 Core Capabilities

  • Automatic Differentiation: Full reverse-mode autodiff with computational graph tracking
  • Mixed Precision Training: Automatic mixed precision (AMP) with PyTorch-compatible API ⚡ NEW!
  • GPU Acceleration: Seamless CPU/CUDA support via NumPy/CuPy backend switching (see the sketch after this list)
  • Dynamic Graphs: Build and modify computational graphs on-the-fly
  • Memory Efficient: Optimized gradient computation with cycle detection
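
The backend switch is exposed through ng.xp, the active array module used throughout the framework (it also appears in the custom-function example further down). A minimal sketch, assuming ng.xp resolves to NumPy on CPU and to CuPy when CUDA is available; the exact device-selection API may differ:

import neurograd as ng

# ng.xp is the active array backend (NumPy or CuPy), so the same code
# runs unchanged on CPU or GPU.
x = ng.Tensor(ng.xp.ones((4, 4), dtype="float32"), requires_grad=True)
y = (x * 2.0).sum()
y.backward()
print(type(x.data))  # numpy.ndarray on CPU, cupy.ndarray on GPU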

🧠 Neural Network Components

  • Layers: Linear, Conv2D, MaxPool2D/AveragePool2D, MLP with batch normalization and dropout
  • Activations: ReLU, Sigmoid, Tanh, LeakyReLU, Softmax
  • Loss Functions: MSE, RMSE, MAE, Binary/Categorical Cross-Entropy
  • Optimizers: SGD (with momentum), Adam, RMSprop
  • Data Utilities: Dataset and DataLoader classes
  • Metrics: Classification and regression metrics

๐Ÿ› ๏ธ Developer Tools

  • Graph Visualization: Beautiful computational graph plotting
  • Gradient Checking: Numerical gradient verification
  • Mixed Precision: 1.5-2x speedup, 40-50% memory reduction

🚀 Quick Start

Installation

# Install from PyPI
pip install neurograd

# With GPU support
pip install neurograd[gpu]

# Everything (GPU, visualization, examples)
pip install neurograd[all]

# From source
git clone https://github.com/b-ionut-r/neurograd.git
cd neurograd && pip install -e .

Basic Usage

import neurograd as ng

# Create tensors with gradient tracking
x = ng.Tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
y = ng.Tensor([[2.0, 1.0], [1.0, 2.0]], requires_grad=True)

# Perform operations
z = x @ y + x.sin()  # Matrix multiplication + element-wise sine
loss = z.sum()       # Scalar loss

# Automatic differentiation
loss.backward()
print(f"x.grad: {x.grad}")

🧠 Neural Networks

Complete Training Example

from neurograd.nn.layers.linear import Linear, MLP
from neurograd.nn.losses import MSE
from neurograd.optim.adam import Adam
from neurograd.utils.data import Dataset, DataLoader
from neurograd.nn.metrics import accuracy_score

# Create dataset and model
dataset = Dataset(X_train, y_train)
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
model = MLP([784, 128, 64, 10])  # Input -> Hidden -> Output

# Define loss and optimizer
criterion = MSE()
optimizer = Adam(model.named_parameters(), lr=0.001)

# Training loop
for epoch in range(100):
    for X_batch, y_batch in dataloader:
        # Forward pass
        output = model(X_batch)
        loss = criterion(y_batch, output)
        
        # Backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    
    # Evaluate
    model.eval()
    pred = model(X_test)
    acc = accuracy_score(y_test, pred)
    model.train()
    print(f"Epoch {epoch}, Loss: {loss.data:.4f}, Accuracy: {acc:.4f}")

Mixed Precision Training ⚡ NEW!

PyTorch-compatible automatic mixed precision for faster training:

from neurograd.amp import autocast, GradScaler
from neurograd.nn.layers.conv import Conv2D, MaxPool2D
from neurograd.nn.layers.linear import Linear
from neurograd.nn.module import Sequential
from neurograd.nn.losses import CategoricalCrossEntropy
from neurograd.optim.adam import Adam
from neurograd.functions.activations import ReLU, Softmax
# Flatten is also used below; import it from the layers package of your install.

# Create CNN model (channels-first: NCHW)
model = Sequential(
    Conv2D(1, 32, kernel_size=3, padding="same", activation="relu"),
    MaxPool2D(pool_size=2),
    Conv2D(32, 64, kernel_size=3, padding="same", activation="relu"),
    MaxPool2D(pool_size=2),
    Flatten(),
    Linear(64 * 7 * 7, 128, activation="relu"),
    Linear(128, 10),
    Softmax(axis=1)
)

# Setup mixed precision
optimizer = Adam(model.named_parameters(), lr=0.001)
loss_fn = CategoricalCrossEntropy()
scaler = GradScaler()

# Training with mixed precision
for epoch in range(num_epochs):
    for batch_x, batch_y in dataloader:
        optimizer.zero_grad()
        
        # Mixed precision forward pass
        with autocast(enabled=True):
            predictions = model(batch_x)        # Auto FP16 where safe
            loss = loss_fn(batch_y, predictions)  # Auto FP32 for stability
        
        # Gradient scaling for FP16 stability
        scaled_loss = scaler.scale(loss)
        scaled_loss.backward()
        scaler.step(optimizer)  # Unscales gradients automatically
        scaler.update()         # Updates scale factor
        
        print(f"Loss: {loss.data.item():.4f}, Scale: {scaler.get_scale():.0f}")

# Benefits: ⚡ 1.5-2x faster, 💾 40-50% less memory, 🎯 same accuracy

Layers and Operations

# Linear layers with built-in features
layer = Linear(784, 128, activation="relu", dropout=0.2, 
               batch_normalization=True, weights_initializer="he")

# Convolutional layers (channels-first: NCHW)
conv = Conv2D(3, 64, kernel_size=(3,3), padding="same", activation="relu")
pool = MaxPool2D(pool_size=(2,2), strides=(2,2))

# Activations and losses
from neurograd.functions.activations import ReLU, Sigmoid, Softmax
from neurograd.nn.losses import MSE, CategoricalCrossEntropy

# Optimizers
optimizer = Adam(model.named_parameters(), lr=0.001, beta1=0.9, beta2=0.999)
optimizer = SGD(model.named_parameters(), lr=0.01, beta=0.9, weight_decay=1e-4)
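
To sanity-check shapes, the layers above can be called directly on a tensor (modules are callable, as in the training examples). A small sketch, assuming ng.Tensor accepts a NumPy array and a hypothetical batch of 32x32 RGB images in channels-first NCHW layout:

import numpy as np
import neurograd as ng

x = ng.Tensor(np.random.randn(8, 3, 32, 32).astype("float32"))  # batch of 8 RGB images
h = pool(conv(x))      # Conv2D keeps 32x32 ("same" padding), MaxPool2D halves it to 16x16
print(h.data.shape)    # expected: (8, 64, 16, 16)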

🧮 Core Operations

Mathematical Functions

x = ng.Tensor([1.0, 2.0, 3.0], requires_grad=True)

# Arithmetic: +, -, *, /, **
z = x + y, x * y, x ** 2

# Math functions
y = x.log(), x.exp(), x.sin(), x.sqrt(), x.abs()

# Linear algebra
C = A @ B           # Matrix multiplication
D = A.transpose()   # Transpose

# Reductions with axis support
s = x.sum(axis=0), x.mean(axis=1, keepdims=True), x.max(), x.std()
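
All of these operations participate in the computational graph, so gradients of composite expressions follow from a single backward() call. A small worked example (the analytic derivative is in the comment):

x = ng.Tensor([1.0, 2.0, 3.0], requires_grad=True)
loss = (x.exp() + x ** 2).sum()
loss.backward()
print(x.grad)  # d/dx (e^x + x^2) = e^x + 2x, evaluated elementwise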

Data Utilities

from neurograd.utils.data import Dataset, DataLoader

dataset = Dataset(X, y)
dataloader = DataLoader(dataset, batch_size=32, shuffle=True, seed=42)

for batch_idx, (X_batch, y_batch) in enumerate(dataloader):
    output = model(X_batch)
    loss = criterion(y_batch, output)

🔧 Advanced Usage

Custom Functions

from neurograd.functions.base import Function

class Swish(Function):
    def forward(self, x):
        self.sigmoid_x = 1 / (1 + ng.xp.exp(-x))
        return x * self.sigmoid_x
    
    def backward(self, grad_output):
        x = self.parent_tensors[0]
        swish_grad = self.sigmoid_x * (1 + x.data * (1 - self.sigmoid_x))
        return grad_output * swish_grad if x.requires_grad else None
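
How a custom Function is applied depends on the framework's Function API, which is not shown here; assuming instances are callable on tensors the way the built-in operations are, usage would look roughly like:

x = ng.Tensor([[0.5, -1.0], [2.0, 3.0]], requires_grad=True)
y = Swish()(x)        # forward pass through the custom op (call convention assumed)
y.sum().backward()    # gradients flow back through Swish.backward
print(x.grad)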

Gradient Checking

from neurograd.utils.grad_check import gradient_check

is_correct = gradient_check(model, X, y, loss_fn, epsilon=1e-7)
print(f"Gradients correct: {is_correct}")

Visualization

# Visualize computational graphs
fig = loss.visualize_graph(title="Training Loss Graph")
loss.save_graph("computation_graph.png")
loss.print_graph()

# Graph statistics
stats = loss.graph_stats()
print(f"Nodes: {stats['num_tensors']}, Depth: {stats['max_depth']}")

Checkpointing (PyTorch-style)

import neurograd as ng

# Save checkpoint
ng.save({
    'model_state': model.state_dict(),
    'optimizer_state': optimizer.state_dict(),
    # optional: 'scaler_state': scaler.state_dict(), 'epoch': epoch
}, 'checkpoint.pth')

# Or use the convenience helper
ng.save_checkpoint(model=model, optimizer=optimizer, path='checkpoint.pth', epoch=epoch)

# Load checkpoint later
ckpt = ng.load('checkpoint.pth')  # or ng.load_checkpoint('checkpoint.pth')
model.load_state_dict(ckpt['model_state'])
optimizer.load_state_dict(ckpt['optimizer_state'])

๐Ÿ—๏ธ Architecture

neurograd/
├── tensor.py              # Core Tensor class
├── functions/             # Mathematical operations
│   ├── base.py           # Function base class
│   ├── arithmetic.py     # +, -, *, /, **
│   ├── math.py           # log, exp, sin, cos, etc.
│   ├── activations.py    # Neural network activations
│   ├── conv.py           # Convolution operations
│   ├── tensor_ops.py     # Tensor ops (includes Cast)
│   └── reductions.py     # sum, mean, max, etc.
├── amp/                  # ⚡ Mixed precision (NEW!)
│   ├── autocast.py       # Automatic precision context
│   ├── grad_scaler.py    # Gradient scaling
│   └── utils.py          # AMP utilities
├── nn/                   # Neural network components
│   ├── layers/           # Network layers
│   ├── losses.py         # Loss functions
│   ├── metrics.py        # Evaluation metrics
│   └── module.py         # Base module system
├── optim/                # Optimization algorithms
│   └── sgd.py, adam.py, rmsprop.py
└── utils/                # Utilities
    ├── grad_check.py     # Gradient verification
    ├── graph.py          # Visualization
    └── data.py           # Dataset/DataLoader

🎯 Roadmap

✅ Completed Features

  • Automatic differentiation with dynamic graphs ✅
  • Neural network layers (Linear, Conv2D, Pooling) ✅
  • Loss functions and optimizers (SGD, Adam, RMSprop) ✅
  • Data utilities (Dataset, DataLoader) ✅
  • Evaluation metrics and visualization ✅
  • Mixed precision training (AMP) ⚡ NEW! ✅

🚀 Upcoming

  • Recurrent layers (RNN, LSTM, GRU)
  • Advanced optimizers (AdaGrad, Nadam)
  • Model serialization/loading
  • Distributed training support
  • Dynamic quantization and pruning

📚 Resources & Contributing

Educational Foundation

This framework implements concepts from Andrew Ng's Deep Learning Specialization and mathematical foundations of automatic differentiation.

Contributing

  • ๐Ÿ› Bug Reports: Use GitHub Issues with minimal reproduction code
  • ๐Ÿ’ก Features: Discuss API design in issues first
  • ๐Ÿ”ง Development: git clone โ†’ pip install -e . โ†’ pytest

Testing

# Run comprehensive tests
jupyter notebook comprehensive_framework_test.ipynb

# Gradient checking
python -c "from neurograd.utils.grad_check import *; test_all()"

📄 License & Contact

MIT License - see LICENSE file for details.


โญ Star this repository if you find it helpful! โญ

Built with โค๏ธ for the deep learning community
