Deep Learning with JAX
Notebooks for the chapters:
1. Intro to JAX
- JAX Speedup
2. Your first program in JAX
- MNIST image classification with MLP in pure JAX
3. Working with tensors
- Image Processing with Tensors
- Working with DeviceArrays
4. Autodiff
- Different ways of getting derivatives
- Working with gradients in TensorFlow, PyTorch, and JAX
- Differentiating in JAX
5. Compiling your code
- JIT compilation and more: JIT, Jaxpr, XLA, AOT
6. Vectorizing your code
- Different ways to vectorize a function, Controlling vmap() behavior, More real-life cases
7. Parallelizing your computations
- Using pmap()
8. Advanced parallelization
- Using xmap()
- Using pjit()
- Tensor sharding
- Multi-host example
9. Random numbers in JAX
- Random augmentations, NumPy and JAX PRNGs
10. Complex structures in JAX/Pytrees
- Pytrees, jax.tree_util functions, custom nodes
11. More to come
GitHub
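The transforms covered in the chapters above can be sketched in a few lines (a minimal illustration, assuming `jax` is installed; the toy `loss` function is my own, not from the notebooks):

```python
import jax
import jax.numpy as jnp

# A toy loss to demonstrate the core transforms.
def loss(w, x):
    return jnp.sum((w * x) ** 2)

grad_loss = jax.grad(loss)                         # autodiff (ch. 4)
fast_grad = jax.jit(grad_loss)                     # JIT compilation (ch. 5)
batched = jax.vmap(grad_loss, in_axes=(None, 0))   # vectorization (ch. 6)

w = jnp.array([1.0, 2.0])
x = jnp.array([3.0, 4.0])
xs = jnp.stack([x, 2 * x])   # a batch of two inputs

print(fast_grad(w, x))       # d/dw sum((w*x)^2) = 2*w*x^2 -> [18. 64.]
print(batched(w, xs).shape)  # (2, 2): one gradient per batch element
```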
How To Build a Neural Network to Recognize Handwritten Digits with TensorFlow
- measuring loss per epoch
- adding dropout probability
- adding a callback function that automatically aborts training when a condition on the per-epoch change in loss is met
GitHub notebook
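The bullet points above can be sketched as a toy Keras script (an illustration, not the notebook's actual code: the `LossDeltaStopping` callback and its `min_delta` threshold are hypothetical names, and random data stands in for MNIST):

```python
import numpy as np
import tensorflow as tf

# Hypothetical callback: abort training when the epoch-to-epoch
# change in loss falls below a chosen threshold.
class LossDeltaStopping(tf.keras.callbacks.Callback):
    def __init__(self, min_delta=1e-3):
        super().__init__()
        self.min_delta = min_delta
        self.prev_loss = None

    def on_epoch_end(self, epoch, logs=None):
        loss = logs["loss"]
        if self.prev_loss is not None and abs(self.prev_loss - loss) < self.min_delta:
            self.model.stop_training = True
        self.prev_loss = loss

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),  # dropout probability
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Tiny random stand-in for MNIST, just to exercise the callback.
x = np.random.rand(256, 784).astype("float32")
y = np.random.randint(0, 10, size=256)
history = model.fit(x, y, epochs=20, verbose=0,
                    callbacks=[LossDeltaStopping(min_delta=1e-3)])
```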
Complete ML Refresher (1).pdf
1.3 MB
Machine Learning refresher.
Notion is a popular tool that offers a wide range of features for note-taking, task management, document creation, and knowledge management. It provides a versatile and customizable interface that can be tailored to individual needs and workflows.
It is also available on the Web, Mac, Linux, Windows, iOS, and Android.
https://www.notion.so/
Diffrax is a JAX-based library providing numerical differential equation solvers.
Features include:
1️⃣ ODE/SDE/CDE (ordinary/stochastic/controlled) solvers;
2️⃣ lots of different solvers (including Tsit5, Dopri8, symplectic solvers, implicit solvers);
3️⃣ vmappable everything (including the region of integration);
4️⃣ using a PyTree as the state;
5️⃣ dense solutions;
6️⃣ multiple adjoint methods for backpropagation;
7️⃣ support for neural differential equations.
#jax
Documentation
Skorch: The Power of PyTorch Combined with The Elegance of Sklearn
Skorch immensely simplifies training neural networks with PyTorch.
Documentation
A freely available short course on neuroscience for people with a machine learning background. Designed by Dan Goodman and Marcus Ghosh.
Link
NVTOP stands for Neat Videocard TOP, an htop-like task monitor for AMD, Intel, and NVIDIA GPUs. It can handle multiple GPUs and prints information about them in an htop-familiar way.
sudo apt install nvtop
GitHub
The Generative AI for Everyone course is now available!
Learn how Generative AI works, how to use it in professional or personal settings, and how it will affect jobs, businesses and society. This course is accessible to everyone, and assumes no prior coding or AI experience.
Please access it here.
Polars is a highly performant DataFrame library for manipulating structured data. The core is written in Rust, but the library is also available in Python. Its key features are:
- Fast: written from the ground up in Rust, designed close to the machine;
- I/O: first-class support for all common data storage layers: local, cloud storage & databases;
- Easy to use;
- Out of core: supports out-of-core data transformation with its streaming API;
- Parallel: divides the workload across available CPU cores;
- Vectorized query engine;
https://pola-rs.github.io/polars/
NVIDIA just made pandas up to 150x faster with zero code changes.
All you have to do is load the cuDF extension before importing pandas:
%load_ext cudf.pandas
import pandas as pd
Their RAPIDS library automatically detects whether you're running on a GPU or CPU and speeds up your processing accordingly.
You can try it here
Credit: Lior
Why didn't I use this before?!
You can run scripts (and notebooks) on a remote server and see the results instantly; forget about juggling repeated ssh, scp, and sftp sessions.
https://code.visualstudio.com/docs/remote/ssh
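A typical setup sketch (hostname and username are placeholders): add an alias to your SSH config, then run the extension's "Remote-SSH: Connect to Host..." command and pick it.

```shell
# ~/.ssh/config  (HostName and User are placeholders)
Host mybox
    HostName server.example.com
    User me
```

VS Code then opens a window whose terminal, file explorer, and notebooks all live on the remote machine.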
Understanding Deep Learning
Just took a look for now, seems 👍.