#python #amd #bot #docker #intel #nvidia #raspberrypi #rtx3070 #rtx3080 #rtx3090
https://github.com/EricJMarti/inventory-hunter
GitHub - EricJMarti/inventory-hunter: ⚡️ Get notified as soon as your next CPU, GPU, or game console is in stock
#c_lang #amd #android #arm #cpus #intel #linux #macos #microarchitecture #windows
https://github.com/Dr-Noob/cpufetch
GitHub - Dr-Noob/cpufetch: Simple yet fancy CPU architecture fetching tool
#c_lang #amd #gpu #intel #linux #macos #ncnn #nvidia #realcugan #vulkan #windows
https://github.com/nihui/realcugan-ncnn-vulkan
GitHub - nihui/realcugan-ncnn-vulkan: real-cugan converter ncnn version, runs fast on intel / amd / nvidia / apple-silicon GPU with vulkan
#ocaml #intel #introspection #performance_tools #profile #tracing #visualizer #x86
https://github.com/janestreet/magic-trace
GitHub - janestreet/magic-trace: magic-trace collects and displays high-resolution traces of what a process is doing
#cplusplus #android #deep_learning #deployment #graphcore #intel #ios #jetson #kunlun #object_detection #onnxruntime #openvino #picodet #rockchip #sdk #serving #tensorrt #uie #yolov5
https://github.com/PaddlePaddle/FastDeploy
GitHub - PaddlePaddle/FastDeploy: High-performance Inference and Deployment Toolkit for LLMs and VLMs based on PaddlePaddle
#c_lang #amd #cpu_cache #cpu_monitoring #cpu_temperature #cpu_topology #cpu_voltage #cpuid #cpuinfo #epyc #intel #multi_core #overclocking #process_monitor #processor #processor_architecture #ram_info #ryzen #threadripper #timings #turbo_boost
https://github.com/cyring/CoreFreq
GitHub - cyring/CoreFreq: CPU monitoring and tuning software designed for 64-bit processors.
#cplusplus #addon #amd #av1 #dnxhr #ffmpeg #h264 #h265 #intel #libobs #linux #macos #multiplatform #nvidia #obs #obs_studio #obs_studio_plugin #plugin #prores #vp9 #windows
https://github.com/Xaymar/obs-StreamFX
GitHub - Vhonowslend/StreamFX-Public: StreamFX is a plugin for OBS® Studio which adds many new effects, filters, sources, transitions and encoders! Be it 3D Transform, Blur, complex Masking, or even custom shaders, you'll find ...
#python #graphcore #habana #inference #intel #onnx #onnxruntime #optimization #pytorch #quantization #tflite #training #transformers
https://github.com/huggingface/optimum
GitHub - huggingface/optimum: 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization tools
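Since the card above is all the post gives, here is a minimal sketch of the ONNX Runtime path that Optimum documents. The checkpoint name is just a common example (not taken from this post), and `export=True` assumes a reasonably recent Optimum release; older versions used `from_transformers=True` instead.
```python
# Hedged sketch: export a Transformers checkpoint to ONNX Runtime via Optimum.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
# export=True converts the PyTorch weights to ONNX on the fly.
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The ONNX Runtime model drops into the regular transformers pipeline API.
clf = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(clf("Inventory hunting is stressful but this bot helps."))
```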
#cplusplus #4_bits #attention_sink #chatbot #chatpdf #intel_optimized_llamacpp #large_language_model #llm_cpu #llm_inference #smoothquant #sparsegpt #speculative_decoding #stable_diffusion #streamingllm
https://github.com/intel/intel-extension-for-transformers
GitHub - intel/intel-extension-for-transformers: ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Platforms⚡ - intel/intel-extension-for-transformers
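As a rough sketch of the "chatbot within minutes" claim, the repository's README documents a NeuralChat API along these lines; the module path and defaults here are recalled from that README, so verify them against the release you install.
```python
# Hedged sketch of NeuralChat, as described in the repository's README.
from intel_extension_for_transformers.neural_chat import build_chatbot

chatbot = build_chatbot()  # loads the default chat model and pipeline
response = chatbot.predict("What does the AMX instruction set accelerate?")
print(response)
```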
#cplusplus #amd #dlss #framegeneration #fsr2 #fsr3 #intel #nvidia #tweak #upscaler #xess
OptiScaler lets you swap the upscaling method a game uses: in titles that already support DLSS, FSR, or XeSS, it can substitute one technology for another, so you can pick whichever suits your GPU best for performance and image quality. It also adds extras such as frame generation and anti-lag options for further tuning. Avoid using it in online games, as it may trigger anti-cheat systems.
https://github.com/cdozdil/OptiScaler
#python #deep_learning #intel #machine_learning #neural_network #pytorch #quantization
Intel Extension for PyTorch speeds up PyTorch on Intel hardware, both CPUs and GPUs, by exploiting instruction-set features such as AVX-512, AMX, and XMX for faster computation. It ships optimizations for popular large language models (LLMs) such as Llama, Qwen, Phi, and DeepSeek, supports multiple data types, and makes GPU acceleration straightforward, so advanced AI models run faster and more efficiently on Intel machines, with simple setup for both off-the-shelf and custom models.
https://github.com/intel/intel-extension-for-pytorch
GitHub - intel/intel-extension-for-pytorch: A Python package for extending the official PyTorch that can easily obtain performance on Intel platform
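To make the "simple setup" claim concrete, a minimal CPU-side sketch follows. `ipex.optimize` is the extension's documented entry point; the resnet50 stand-in and the bfloat16 autocast settings are illustrative assumptions, not prescribed by this post.
```python
# Hedged sketch: optimize an eval-mode model with Intel Extension for PyTorch.
import torch
import intel_extension_for_pytorch as ipex
import torchvision.models as models

# Any eval-mode model works here; resnet50 is just a stand-in.
model = models.resnet50(weights=None).eval()

# ipex.optimize applies operator fusion and weight-layout changes; with
# dtype=torch.bfloat16 it can use AMX / AVX-512 bf16 kernels where available.
model = ipex.optimize(model, dtype=torch.bfloat16)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad(), torch.cpu.amp.autocast():  # CPU autocast defaults to bf16
    y = model(x)
print(y.shape)  # torch.Size([1, 1000])
```
On CPUs without AMX or AVX-512 bf16 support the code still runs, just without the hardware-accelerated kernels the extension targets.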