vaneenige/phenomenon
⚡️ A fast 2kB low-level WebGL API. GPU-based with shaders.
Language: JavaScript
#gpu #low_level #particles #rendering #shaders #webgl
Stars: 250 Issues: 0 Forks: 2
https://github.com/vaneenige/phenomenon
  
  danielhanchen/hyperlearn
50%+ faster, 50%+ less RAM usage: a GPU-supported rewrite of the Sklearn/Statsmodels combo with novel algorithms.
Language: Jupyter Notebook
#data_analysis #data_science #deep_learning #econometrics #gpu #machine_learning #neural_network #python #pytorch #regression_models #scikit_learn #statistics #statsmodels #tensor
Stars: 178 Issues: 5 Forks: 10
https://github.com/danielhanchen/hyperlearn
  
  jacenkow/gpu-sentry
Flask-based package for monitoring the utilisation of NVIDIA GPUs.
Language: Python
#flask #gpu #nvidia_smi
Stars: 100 Issues: 1 Forks: 4
https://github.com/jacenkow/gpu-sentry
  
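A package like gpu-sentry sits between nvidia-smi and an HTTP API. This is not the package's actual code, only a minimal sketch of the underlying idea: ask nvidia-smi for CSV output and parse the per-GPU utilisation figures (function names here are hypothetical).

```python
# Sketch: query nvidia-smi in CSV mode and parse utilisation per GPU.
# Not gpu-sentry's actual implementation.
import subprocess


def parse_nvidia_smi(csv_text):
    """Parse output of `nvidia-smi --query-gpu=index,utilization.gpu,memory.used
    --format=csv,noheader,nounits` into one dict per GPU."""
    gpus = []
    for line in csv_text.strip().splitlines():
        index, util, mem = (field.strip() for field in line.split(","))
        gpus.append({"index": int(index),
                     "utilization_pct": int(util),
                     "memory_used_mib": int(mem)})
    return gpus


def query_gpus():
    """Shell out to nvidia-smi (requires an NVIDIA driver to be installed)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True)
    return parse_nvidia_smi(out.stdout)


# Parsing demo on canned output (no GPU needed):
print(parse_nvidia_smi("0, 37, 1024\n1, 95, 8192"))
```

A Flask route would then just return `query_gpus()` as JSON.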
  BMW-InnovationLab/BMW-TensorFlow-Inference-API-GPU
This is a repository for an object detection inference API using the TensorFlow framework.
Language: Python
#api #computer_vision #deep_learning #deep_neural_networks #detection_inference_api #docker #dockerfile #gpu #inference #neural_network #nvidia #object_detection #rest_api #tensorflow #tensorflow_framework #tensorflow_models
Stars: 150 Issues: 2 Forks: 50
https://github.com/BMW-InnovationLab/BMW-TensorFlow-Inference-API-GPU
  
  Tencent/Forward
A library for high-performance deep learning inference on NVIDIA GPUs.
Language: C++
#cuda #deep_learning #forward #gpu #inference #inference_engine #keras #neural_network #pytorch #tensorflow #tensorrt
Stars: 102 Issues: 0 Forks: 8
https://github.com/Tencent/Forward
  
  ricosjp/monolish
monolish: MONOlithic LInear equation Solvers for Highly-parallel architecture
Language: C++
#blas #cpp14 #cpu #cuda #gpu #hpc #lapack #linear_algebra #linear_algebra_library #matrix #matrix_structures #mkl #openmp #scientific_computing #sparse_matrix
Stars: 75 Issues: 33 Forks: 5
https://github.com/ricosjp/monolish
  
  nihui/realcugan-ncnn-vulkan
real-cugan converter ncnn version, runs fast on intel / amd / nvidia / apple-silicon GPU with vulkan
Language: C
#amd #gpu #intel #linux #macos #ncnn #nvidia #realcugan #vulkan #windows
Stars: 147 Issues: 3 Forks: 6
https://github.com/nihui/realcugan-ncnn-vulkan
  
  hpcaitech/FastFold
Optimizing Protein Structure Prediction Model Training and Inference on GPU Clusters
Language: Python
#alphafold2 #evoformer #gpu #parallelism #protein_structure #pytorch
Stars: 177 Issues: 1 Forks: 20
https://github.com/hpcaitech/FastFold
  
  nnaisense/evotorch
EvoTorch is an advanced evolutionary computation library built directly on top of PyTorch, created at NNAISENSE.
Language: Python
#artificial_intelligence #distributed #evolutionary_computation #gpu #neural_networks #optimization_algorithms #python #pytorch
Stars: 132 Issues: 0 Forks: 8
https://github.com/nnaisense/evotorch
  
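This is not EvoTorch's API — just a dependency-free sketch of the core technique such a library implements: a (1+λ) evolution strategy with a decaying mutation step, minimizing the sphere function f(x) = Σ xᵢ². All names and parameter values below are illustrative.

```python
# Minimal (1+lambda) evolution strategy. Illustrative only; EvoTorch's real
# API builds on PyTorch tensors with GPU and distributed support.
import random


def evolve(dim=5, offspring=8, sigma=0.5, decay=0.99, generations=300, seed=0):
    rng = random.Random(seed)
    fitness = lambda x: sum(v * v for v in x)
    parent = [rng.uniform(-3, 3) for _ in range(dim)]
    best = fitness(parent)
    for _ in range(generations):
        # Sample offspring by Gaussian mutation; keep a child only if it improves.
        children = [[v + rng.gauss(0, sigma) for v in parent]
                    for _ in range(offspring)]
        child = min(children, key=fitness)
        if fitness(child) < best:
            parent, best = child, fitness(child)
        sigma *= decay  # shrink the search radius as the population converges
    return parent, best


solution, score = evolve()
print(f"best fitness after 300 generations: {score:.4f}")
```

The library's value over a loop like this is vectorizing the fitness evaluations across a GPU and running many candidates in parallel.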
  arc53/llm-price-compass
LLM provider price comparison: GPU benchmarks, price-per-token calculation, and a GPU benchmark table.
Language: TypeScript
#benchmark #gpu #inference_comparison #llm #llm_comparison #llm_inference #llm_price
Stars: 138 Issues: 1 Forks: 5
https://github.com/arc53/llm-price-compass
  
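The benchmark-to-price conversion such a tool performs is simple arithmetic. A sketch with made-up numbers (the function name and figures below are illustrative, not data from llm-price-compass):

```python
# Convert a GPU's hourly rental price and measured generation throughput
# into a cost per million tokens. Numbers are hypothetical.


def price_per_million_tokens(gpu_hourly_usd: float, tokens_per_second: float) -> float:
    """Cost in USD to generate one million tokens at a sustained throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / tokens_per_hour * 1_000_000


# Hypothetical example: a $2.50/hr GPU sustaining 1000 tokens/s.
cost = price_per_million_tokens(2.50, 1000.0)
print(f"${cost:.3f} per 1M tokens")  # -> $0.694 per 1M tokens
```

This is the number to compare against a provider's fixed per-token price when deciding between renting GPUs and paying per token.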
  LegNeato/rust-gpu-chimera
Demo project showing a single Rust codebase running on CPU and directly on GPUs
Language: Rust
#cuda #gpu #rust #rust_cuda #rust_gpu #vulkan
Stars: 218 Issues: 1 Forks: 5
https://github.com/LegNeato/rust-gpu-chimera
  
  yassa9/qwen600
Static suckless single batch CUDA-only qwen3-0.6B mini inference engine
Language: Cuda
#cuda #cuda_programming #gpu #llamacpp #llm #llm_inference #qwen #qwen3 #transformer
Stars: 287 Issues: 1 Forks: 17
https://github.com/yassa9/qwen600
  