Super minimalistic machine-learning framework.
Explore the docs » · View Example · Report Bug · Request Feature
Magnetron is a minimalistic, PyTorch-style machine-learning framework designed for IoT and other resource-limited environments.
The tiny C99 core - wrapped in a modern Python API - gives you dynamic graphs, automatic differentiation and network building blocks without the bloat.
A CUDA backend is also work in progress.
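The snippet below condenses the training mechanics into a few lines. It uses only calls that also appear in the full XOR example further down (`nn.Linear`, `nn.MSELoss`, `optim.SGD`, `mag.Tensor.from_data`), so it should track the real API closely; treat it as a sketch, not a reference.

```python
import magnetron as mag
from magnetron import nn, optim

# One dense layer; the compute graph is built dynamically as ops run.
layer = nn.Linear(2, 1)
opt = optim.SGD(layer.parameters(), lr=0.1)

x = mag.Tensor.from_data([[1.0, 0.0]])
target = mag.Tensor.from_data([[1.0]])

loss = nn.MSELoss()(layer(x), target)
loss.backward()   # autodiff walks the dynamic graph
opt.step()        # apply the gradient update
opt.zero_grad()   # clear gradients for the next step
print(loss.item())
```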
- PyTorch-like Python API → Seamless switch for PyTorch users with familiar syntax and behavior
- Automatic differentiation on dynamic computation graphs → Supports flexible model construction and training workflows
- High-level neural-net building blocks → Includes `nn.Module`, `Linear`, `Sequential`, and more out of the box
- Broadcasting-aware operators with in-place variants → Efficient, NumPy-like tensor ops with performance in mind (see the sketch after this list)
- CPU multithreading + SIMD (SSE4, AVX2/AVX512, ARM NEON) → High performance even without a GPU
- Multiple datatypes: float32, float16, int32, and boolean → Flexibility for both training and quantized inference
- Custom compressed tensor file formats → Fast serialization & model loading
- Modern PRNGs (Mersenne Twister, PCG) → Reliable and reproducible randomness
- Clear validation and error messages → Easier debugging and better developer experience
- N-dimensional, flattened tensors → Simple internal representation with general support for shapes
- No external C or Python dependencies (except CFFI for the Python wrapper) → Lightweight and portable – great for embedded or restricted environments
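To illustrate the broadcasting-aware operators, here is a minimal sketch. It assumes only the `mag.Tensor.from_data` constructor and `.shape` attribute from the XOR example below, plus NumPy-style broadcasting for `+` (the ADD row in the operator table further down is marked BROADCASTED); the exact operator spellings may differ.

```python
import magnetron as mag

# A (4, 2) matrix plus a (1, 2) row vector, broadcast across rows.
a = mag.Tensor.from_data([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
b = mag.Tensor.from_data([[1.0, 2.0]])

c = a + b       # assumed to dispatch to the broadcasting ADD operator
print(c.shape)  # expected: (4, 2)
```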
A simple XOR neural network (MLP) trained with Magnetron. Copy and paste the code below into a file called `xor.py` and run it with Python.
```python
import magnetron as mag
from magnetron import optim, nn
from matplotlib import pyplot as plt

EPOCHS: int = 2000

# Create the model, optimizer, and loss function
model = nn.Sequential(nn.Linear(2, 2), nn.Tanh(), nn.Linear(2, 1), nn.Tanh())
optimizer = optim.SGD(model.parameters(), lr=1e-1)
criterion = nn.MSELoss()
loss_values: list[float] = []

x = mag.Tensor.from_data([[0, 0], [0, 1], [1, 0], [1, 1]])
y = mag.Tensor.from_data([[0], [1], [1], [0]])

# Train the model
for epoch in range(EPOCHS):
    y_hat = model(x)
    loss = criterion(y_hat, y)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    loss_values.append(loss.item())
    if epoch % 100 == 0:
        print(f'Epoch: {epoch}, Loss: {loss.item()}')

# Print the final predictions after the training
print('=== Final Predictions ===')
with mag.no_grad():
    y_hat = model(x)
    for i in range(x.shape[0]):
        print(f'Expected: {y[i]}, Predicted: {y_hat[i]}')

# Plot the loss
plt.figure()
plt.plot(loss_values)
plt.xlabel('Epoch')
plt.ylabel('MSE Loss')
plt.title('Training Loss over Time')
plt.grid(True)
plt.show()
```
Running the script prints the loss every 100 epochs and the final XOR predictions, then displays a plot of the training loss over time.
To get a local copy up and running, follow these simple steps.
Magnetron itself has no Python dependencies except for CFFI to call the C library from Python.
Some examples use matplotlib and numpy for plotting and data generation, but these are not required to use the framework.
- Linux, macOS, or Windows
- A modern, C99-capable compiler (gcc, clang, msvc)
- Python 3.6 or higher (the XOR example's `list[float]` annotations require 3.9+)
- CMake (Linux: `sudo apt install cmake`, macOS: `brew install cmake`)
A pip-installable package will be provided once all core features are implemented. Until then, follow these steps to build Magnetron from source:
- Clone and enter the Magnetron repository:
  `git clone https://github.com/MarioSieg/magnetron && cd magnetron`
- Create and activate a virtual environment:
  `python3 -m venv .venv && source .venv/bin/activate`
- Install Magnetron (make sure CMake and a C compiler are installed – see Prerequisites):
  `pip install .`
- Run the XOR training example:
  `python3 examples/xor.py`
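Optionally, verify the build with a quick smoke test. This sketch assumes only the `Tensor.from_data` constructor and `.shape` attribute used in the XOR example:

```python
import magnetron as mag

# Constructing a tensor exercises the CFFI binding and the C core;
# any import or build problem surfaces here.
t = mag.Tensor.from_data([[0.0, 1.0], [1.0, 0.0]])
print(t.shape)  # expected: (2, 2)
```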
See the Examples directory, which contains various models and training examples.
For usage in C or C++, see the Unit Tests directory.
The following table lists all available operators and their properties.
Mnemonic | Description | Inputs | Outputs | Params | Flags | In-place | Backward | Result | Validation | CPU-Parallel | Type |
---|---|---|---|---|---|---|---|---|---|---|---|
NOP | no-op | 0 | 0 | N/A | N/A | NO | NO | N/A | NO | NO | NO-OP |
CLONE | strided copy | 1 | 1 | N/A | N/A | NO | YES | ISOMORPH | YES | NO | Morph |
VIEW | memory view | 1 | 1 | N/A | N/A | NO | YES | ISOMORPH | YES | NO | Morph |
TRANSPOSE | 𝑥ᵀ | 1 | 1 | N/A | N/A | NO | YES | TRANSPOSED | YES | NO | Morph |
PERMUTE | swap axes by index | 1 | 1 | U64 [6] | N/A | NO | NO | PERMUTED | YES | NO | Morph |
MEAN | (∑𝑥) ∕ 𝑛 | 1 | 1 | N/A | N/A | NO | YES | SCALAR/REDUCED | YES | NO | Reduction |
MIN | min(𝑥) | 1 | 1 | N/A | N/A | NO | NO | SCALAR/REDUCED | YES | NO | Reduction |
MAX | max(𝑥) | 1 | 1 | N/A | N/A | NO | NO | SCALAR/REDUCED | YES | NO | Reduction |
SUM | ∑𝑥 | 1 | 1 | N/A | N/A | NO | YES | SCALAR/REDUCED | YES | NO | Reduction |
ABS | |𝑥| | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
SGN | sgn(𝑥) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
NEG | −𝑥 | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
LOG | log₁₀(𝑥) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
SQR | 𝑥² | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
SQRT | √𝑥 | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
SIN | sin(𝑥) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
COS | cos(𝑥) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
STEP | 𝐻(𝑥) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
EXP | 𝑒ˣ | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
FLOOR | ⌊𝑥⌋ | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
CEIL | ⌈𝑥⌉ | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
ROUND | ⟦𝑥⟧ | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
SOFTMAX | 𝑒ˣⁱ ∕ ∑𝑒ˣʲ | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
SOFTMAX_DV | 𝑑⁄𝑑𝑥 softmax(𝑥) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
SIGMOID | 1 ∕ (1 + 𝑒⁻ˣ) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
SIGMOID_DV | 𝑑⁄𝑑𝑥 sigmoid(𝑥) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
HARD_SIGMOID | max(0, min(1, 0.2×𝑥 + 0.5)) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
SILU | 𝑥 ∕ (1 + 𝑒⁻ˣ) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
SILU_DV | 𝑑⁄𝑑𝑥 silu(𝑥) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
TANH | tanh(𝑥) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
TANH_DV | 𝑑⁄𝑑𝑥 tanh(𝑥) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
RELU | max(0, 𝑥) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
RELU_DV | 𝑑⁄𝑑𝑥 relu(𝑥) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
GELU | 0.5×𝑥×(1 + erf(𝑥 ∕ √2)) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
GELU_DV | 𝑑⁄𝑑𝑥 gelu(𝑥) | 1 | 1 | N/A | N/A | YES | YES | ISOMORPH | YES | YES | Unary OP |
ADD | 𝑥 + 𝑦 | 2 | 1 | N/A | N/A | YES | YES | BROADCASTED | YES | YES | Binary OP |
SUB | 𝑥 − 𝑦 | 2 | 1 | N/A | N/A | YES | YES | BROADCASTED | YES | YES | Binary OP |
MUL | 𝑥 ⊙ 𝑦 | 2 | 1 | N/A | N/A | YES | YES | BROADCASTED | YES | YES | Binary OP |
DIV | 𝑥 ∕ 𝑦 | 2 | 1 | N/A | N/A | YES | YES | BROADCASTED | YES | YES | Binary OP |
MATMUL | 𝑥𝑦 | 2 | 1 | N/A | N/A | YES | YES | MATRIX | YES | YES | Binary OP |
REPEAT_BACK | gradient broadcast to repeated shape | 2 | 1 | N/A | N/A | YES | YES | BROADCASTED | YES | NO | Binary OP |
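As an illustration of how the table's operators might surface in Python, the sketch below assumes the mnemonics are exposed as lowercase `Tensor` methods (`abs`, `mean`) and that `+`/`*` dispatch to the broadcasting ADD/MUL kernels, mirroring PyTorch. Only `Tensor.from_data` and `.item()` are confirmed by the XOR example above; the method names are hypothetical.

```python
import magnetron as mag

x = mag.Tensor.from_data([[-1.0, 2.0], [3.0, -4.0]])

y = x.abs()      # hypothetical method for the ABS unary op
z = y + y        # assumed to dispatch to the broadcasting ADD kernel
w = z * z        # assumed to dispatch to MUL (elementwise product)
m = w.mean()     # hypothetical method for the MEAN reduction

print(m.item())  # expected: mean of [[4, 16], [36, 64]] = 30.0
```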
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
(c) 2025 Mario "Neo" Sieg. [email protected]
Distributed under the Apache 2 License. See `LICENSE.txt` for more information.