Scientific Programming
Tutorials and applications from scientific programming

https://github.com/Ziaeemehr
Hyperparameter Tuning.pdf
GA_iris.ipynb
Earlier we saw the process of finding the best hyperparameters for a machine learning model using grid search; now we revisit the same example with a Genetic Algorithm (a minimal sketch follows the references below).

References:
0. GitHub notebook with a direct link to Colab.
1. Tune Your Scikit-learn Model Using Evolutionary Algorithms
2. Understanding the evaluation process
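
For a rough idea of what this looks like, here is a minimal sketch on the iris data, assuming the sklearn-genetic-opt package (GASearchCV) from reference 1; the attached notebook may differ in the details:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn_genetic import GASearchCV
from sklearn_genetic.space import Categorical, Integer

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Search space: each hyperparameter becomes a "gene" the GA can mutate
param_grid = {
    "n_estimators": Integer(10, 200),
    "max_depth": Integer(2, 20),
    "criterion": Categorical(["gini", "entropy"]),
}

evolved = GASearchCV(
    estimator=RandomForestClassifier(random_state=0),
    param_grid=param_grid,
    cv=3,
    scoring="accuracy",
    population_size=10,
    generations=15,
    n_jobs=-1,
)
evolved.fit(X_train, y_train)
print(evolved.best_params_, evolved.score(X_test, y_test))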
Using conda for your environments? Want to add a package for testing without breaking your existing setup? Dislike waiting half an hour for conda to make up its mind about dependencies? You can tell it to copy an existing environment without trying to download any new versions with --clone and --offline:

conda create --name copy_of_foo --clone foo --offline


credit: Marmaduke
Downloading entire playlist from YouTube using yt-dlp:


# brew install yt-dlp
yt-dlp playlisturl
use -c for continuing and -N for multi-threaded fragment downloads:

yt-dlp -c -N 5 playlisturl
You might like this playlist.
Latexify
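Presumably this refers to the latexify-py package, which renders Python functions as LaTeX. A minimal sketch, assuming a recent version that exposes the latexify.function decorator:

import math
import latexify

@latexify.function
def solve(a, b, c):
    # quadratic formula; displayed as LaTeX in a notebook, printed as LaTeX source here
    return (-b + math.sqrt(b**2 - 4*a*c)) / (2*a)

print(solve)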
Pandas.pdf
Pandas cheat sheet.
How do you pass a class of parameters to a jitted Python function?
Here are some examples.
This is useful for writing cleaner functions that take many parameters; a rough sketch follows below.
COLAB
GitHub

#numba #jit
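
One way to do this is with numba's experimental jitclass (a minimal sketch under that assumption; the linked notebooks may use a different approach):

from numba import njit, float64
from numba.experimental import jitclass

# Bundle many model parameters into a single object that jitted code can use
spec = [("a", float64), ("b", float64), ("dt", float64)]

@jitclass(spec)
class Params:
    def __init__(self, a, b, dt):
        self.a = a
        self.b = b
        self.dt = dt

@njit
def step(x, p):
    # the jitted function receives the whole parameter object
    return x + p.dt * (p.a * x + p.b)

p = Params(0.5, 1.0, 0.01)
print(step(1.0, p))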
Basic Dictionary commands (1):


# Create a dictionary
d = {'name': 'Max', 'age': 28}
d = dict(name='Anna', age=27)
squares = {x: x*x for x in range(6)}

# Reading
print(d['name'])
print(d.get('age'))
print(d.get('address')) # returns None
print(d.get('address', 'not found')) # returns 'not found'

# Adding/Modifying
d['address'] = '123 N. Elm St.'
d['name'] = 'Anna'

# Updating
d.update({'name': 'Max', 'age': 28})
d.update(address='123 N. Elm St.')
d['age'] = 30

# Deleting
del d['name']
d.pop('age')
d.popitem() # removes the last inserted key-value pair (LIFO order in Python 3.7+)
d.clear() # removes all items
Basic Dictionary commands (2):


# Looping
for key in d:
    print(key)
for value in d.values():
    print(value)
for key, value in d.items():
    print(key, value)

# Copying
d_copy = d.copy()
d_copy = dict(d)

# Merging
d = {'name': 'Max', 'age': 28}
d1 = {'name': 'Anna', 'age': 27}
d.update(d1)
d = {**d, **d1}

# Dictionary Comprehension
d = {x: x**2 for x in range(10)}
d = {k: v**2 for k, v in zip(['a', 'b'], range(4))}
d = {k: v for k, v in d.items() if v % 2 == 0}
Copy and deep copy in dictionaries:


# copy and deepcopy
import copy
d = {'x': [1, 2, 3]}
d1 = d.copy()
d2 = copy.deepcopy(d)

d['x'].append(4)
print(d1) # {'x': [1, 2, 3, 4]}
print(d2) # {'x': [1, 2, 3]}
Activate "Sticky Scroll" in VS Code through the Ctrl+Shift+P command palette.
#Datashader simplifies creating meaningful visuals from large datasets by breaking down the process into clear steps. It automatically generates accurate visualizations without the need for manual parameter tweaking. Computations are optimized using Python, Numba, Dask, and CUDA, making it efficient even with huge datasets on standard hardware.

https://datashader.org/
Gallery
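
As a rough illustration of the canvas → aggregate → shade pipeline (a minimal sketch with synthetic data, not taken from the gallery):

import numpy as np
import pandas as pd
import datashader as ds
import datashader.transfer_functions as tf

# one million synthetic points, far more than an ordinary scatter plot handles well
n = 1_000_000
df = pd.DataFrame({
    "x": np.random.standard_normal(n),
    "y": np.random.standard_normal(n),
})

canvas = ds.Canvas(plot_width=400, plot_height=400)  # 1. define the raster
agg = canvas.points(df, "x", "y")                    # 2. count points per pixel
img = tf.shade(agg, how="log")                       # 3. map counts to colors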
The walrus operator, introduced in Python 3.8, is represented by ":=". It allows the assignment of a value to a variable within an expression.


## f(x) is called 3 times
foo = [f(x), f(x)**2, f(x)**3]

## two lines of code
y = f(x)
foo = [y, y**2, y**3]

## walrus operator
foo = [y := f(x), y**2, y**3]
Walrus operator [1]

# Avoiding inefficient comprehensions

results = []
for x in data:
    res = f(x)
    if res:
        results.append(res)
       
# f(x) is called twice
results = [f(x) for x in data if f(x)]       
# walrus operator
results = [res for x in data if (res := f(x))]

# Unnecessary variables in scope
match = pattern.search(data)
if match:
    do_sth(match)

if (match := pattern.search(data)):
    do_sth(match)
   
# Processing streams in chunks
chunk = file.read(8192)
while chunk:
    process(chunk)
    chunk = file.read(8192)

# walrus operator
while chunk := file.read(8192):
    process(chunk)
Datasets for machine learning typically contain a large number of features, but such high-dimensional feature spaces are not always helpful.

In general, not all features are equally important, and certain features account for a large percentage of the variance in the dataset. Dimensionality reduction algorithms aim to reduce the dimension of the feature space to a fraction of the original number of dimensions. In doing so, the high-variance features are still retained, but in a transformed feature space. Principal component analysis (PCA) is one of the most popular dimensionality reduction algorithms.

Here's a simple example in Python demonstrating PCA for dimensionality reduction before training a scikit-learn classifier.
GitHub

You may also need to read more about PCA here.
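
Since the notebook itself isn't reproduced here, a minimal sketch of the idea (PCA in a pipeline before a classifier, using sklearn's digits data as a stand-in):

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# reduce the 64 pixel features to 16 principal components before classifying
clf = make_pipeline(StandardScaler(), PCA(n_components=16), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))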
Applications for Students & Teaching Assistants are Open!

1️⃣ 3-week Courses (July 8 - 26, 2024):
- Computational Neuroscience: Explore the intricacies of the brain's computational processes and join an engaging community of learners.
- Deep Learning: Delve into the world of machine learning, uncovering the principles and applications of deep learning.

2️⃣ 2-week Courses (July 15 - 26, 2024):
- Computational Tools for Climate Science: Uncover the tools and techniques driving climate science research in this dynamic two-week course.
- NeuroAI - Inaugural Year!: Be part of history as we launch our first-ever NeuroAI course, designed to explore the intersection of neuroscience and artificial intelligence.

https://neuromatch.io/courses/
Incremental principal component analysis (IPCA) is typically used as a replacement for principal component analysis (PCA) when the dataset to be decomposed is too large to fit in memory.

IPCA builds a low-rank approximation for the input data using an amount of memory which is independent of the number of input data samples. It is still dependent on the input data features, but changing the batch size allows for control of memory usage.

I have made some changes to the example from the sklearn documentation so that one does not need to load the whole dataset into memory at once.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import IncrementalPCA

n_components = 2
X = load_iris().data  # iris data as in the sklearn example: 150 samples, fed in 3 batches of 50
ipca = IncrementalPCA(n_components=n_components)
X_ipca = np.zeros((X.shape[0], n_components))

for i in range(3):
    ipca.partial_fit(X[i*50:(i+1)*50])

for i in range(3):
    X_ipca[i*50:(i+1)*50] = ipca.transform(X[i*50:(i+1)*50])

GitHub
How can you create an audiobook with a natural human voice and a customized accent? Let's say you have an EPUB file and you're tired of the robotic voice generated by common text-to-speech (TTS) systems. One of the most advanced TTS technologies available today is provided by OpenVoice. You can find more information about it here.

It performs optimally with a GPU, but it's also compatible with CPU. To use it on your own machine, simply set up a virtual environment and install the package. You'll also need to download a few additional files. I'm currently using the basic setup with the default voice, but the ability to clone any voice is an incredibly exciting feature.

Follow the notebook demo1, extract the text from the EPUB, and replace the sample text with your favourite book.

You may need to split the book into several chapters to fit into GPU memory and avoid the job being killed.

It took me about 10 minutes to make an audiobook from Shogun, a novel of about 500 pages.
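
For the EPUB-to-text step, a minimal sketch (assuming the ebooklib and beautifulsoup4 packages; the file name is hypothetical, and the OpenVoice call itself follows its demo notebook):

from ebooklib import epub, ITEM_DOCUMENT
from bs4 import BeautifulSoup

book = epub.read_epub("my_book.epub")  # hypothetical file name
chapters = []
for item in book.get_items_of_type(ITEM_DOCUMENT):
    text = BeautifulSoup(item.get_content(), "html.parser").get_text()
    if text.strip():
        chapters.append(text)

# pass each chapter to the TTS separately to keep GPU memory in check
for i, chapter in enumerate(chapters):
    print(f"chapter {i}: {len(chapter)} characters")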
How to Use ZSH Auto-suggestions?

ZSH is a popular Unix shell that extends the Bourne shell. It comes packed with features and improvements over Bash.
If you already use zsh as your default shell, just use:



# Linux
git clone https://github.com/zsh-users/zsh-autosuggestions ~/.zsh/zsh-autosuggestions
# add to .zshrc
source ~/.zsh/zsh-autosuggestions/zsh-autosuggestions.zsh
# Mac
brew install zsh-autosuggestions
# add to .zshrc
source $(brew --prefix)/share/zsh-autosuggestions/zsh-autosuggestions.zsh


Read more here.
Also for #JAX 😢
Credit: Geek_code
JAX is an open-source Python library developed by Google for high-performance numerical computing, especially suited for machine learning and scientific computing. It provides a combination of automatic differentiation, just-in-time compilation, and support for GPU/TPU acceleration, making it particularly well-suited for scalable and efficient computation on large datasets. JAX is built on top of the XLA (Accelerated Linear Algebra) compiler and is heavily inspired by NumPy, making it easy 🤨 for users familiar with NumPy to transition to JAX.
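
As a quick taste of the NumPy-like API together with grad and jit (a minimal sketch, not taken from the material linked below):

import jax
import jax.numpy as jnp

def loss(w, x, y):
    # mean squared error of a linear model
    return jnp.mean((x @ w - y) ** 2)

grad_loss = jax.jit(jax.grad(loss))  # differentiate, then JIT-compile

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (100, 3))
y = x @ jnp.array([1.0, -2.0, 0.5])

w = jnp.zeros(3)
for _ in range(200):  # plain gradient descent recovers the true weights
    w = w - 0.1 * grad_loss(w, x, y)
print(w)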

Let's practice some JAX:
I recommend starting with this repo and the accompanying series of videos.

Videos
GitHub

Then you can move on to the Deep Learning with JAX book.

Deep Learning with JAX
Workshop JAX