danielhanchen/hyperlearn
A rewritten Sklearn + Statsmodels combo with novel algorithms: 50%+ faster, 50%+ less RAM usage, and GPU support.
Language: Jupyter Notebook
#data_analysis #data_science #deep_learning #econometrics #gpu #machine_learning #neural_network #python #pytorch #regression_models #scikit_learn #statistics #statsmodels #tensor
Stars: 178 Issues: 5 Forks: 10
https://github.com/danielhanchen/hyperlearn
tairov/llama2.mojo
Inference Llama 2 in one file of pure 🔥
#inference #llama #llama2 #modular #mojo #parallelize #performance #simd #tensor #vectorization
Stars: 200 Issues: 0 Forks: 7
https://github.com/tairov/llama2.mojo