#go #git #golang #hacktoberfest #hooks #lefthook #manager
Lefthook is a fast Git hooks manager built in Go for Node.js, Ruby, Python, and other projects. Install it via Go, npm, gem, or pipx, configure hooks in a simple lefthook.yml file, and run `lefthook install`. It runs commands in parallel, filters files with globs/regex, and supports scripts, tags, Docker, and local overrides for speed and control. It saves you time on commits and pushes by automating linting and checks with no runtime dependencies, keeping code clean effortlessly.
https://github.com/evilmartians/lefthook
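A minimal lefthook.yml sketch of the setup described above; the lint and test commands are illustrative examples, not defaults shipped by Lefthook:

```yaml
# lefthook.yml — runs both commands in parallel on every commit
pre-commit:
  parallel: true
  commands:
    lint:
      glob: "*.go"                          # only run when staged Go files match
      run: golangci-lint run {staged_files} # {staged_files} expands to staged paths
    tests:
      run: go test ./...
```

After editing the file, `lefthook install` wires the configured hooks into `.git/hooks`.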
#python #docker #fastapi #kbqa #kgqa #llms #neo4j #rag #vue
Yuxi-Know (语析) is a free, open-source platform built with LangGraph, Vue.js, FastAPI, and LightRAG to create smart agents using RAG knowledge bases and knowledge graphs. The latest v0.4.0-beta (Dec 2025) adds file uploads, multimodal image support, mind maps from files, evaluation tools, dark mode, and better graph visuals. It helps you quickly build and deploy custom AI agents for Q&A, analysis, and searches without starting from scratch, saving time and effort on development.
https://github.com/xerrors/Yuxi-Know
#typescript #devops #infra #infrastructure
FossFLOW is an open-source tool for making beautiful isometric infrastructure diagrams.
https://github.com/stan-smith/FossFLOW
#python #audio_generation #diffusion #image_generation #inference #model_serving #multimodal #pytorch #transformer #video_generation
vLLM-Omni is a free, open-source tool that makes serving AI models for text, images, videos, and audio fast, easy, and cheap. It builds on vLLM for top speed using smart memory management, overlapping tasks, and flexible resource sharing across GPUs. You get 2x higher throughput, 35% lower latency, and simple setup with Hugging Face models via an OpenAI-compatible API—perfect for building multi-modal apps like chatbots or media generators without high costs.
https://github.com/vllm-project/vllm-omni
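Since the server speaks an OpenAI-compatible API, a standard chat-completions request should work against it. A minimal sketch with the standard library only; the endpoint URL and model name are placeholders, not taken from the project docs:

```python
import json

# Placeholder endpoint for a locally running server (assumption, not from the docs).
BASE_URL = "http://localhost:8000/v1/chat/completions"

def build_request(model: str, prompt: str) -> bytes:
    """Build an OpenAI-style chat-completions payload as JSON bytes."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload).encode("utf-8")

body = build_request("qwen2.5-omni", "Describe this image.")

if __name__ == "__main__":
    # Sending it requires a running server, e.g.:
    #   import urllib.request
    #   req = urllib.request.Request(BASE_URL, data=body,
    #                                headers={"Content-Type": "application/json"})
    #   print(urllib.request.urlopen(req).read())
    print(body.decode("utf-8"))
```

Because the wire format matches OpenAI's, existing OpenAI client libraries can also be pointed at the server by overriding their base URL.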
#python
Bloom is a free, open-source tool that automates testing AI models for undesired behaviors like bias or sycophancy. You define the behavior in a simple config file, optionally add example chats, and it runs four steps: understanding the behavior, generating varied test scenarios, simulating conversations with your target model (like Claude or GPT via APIs), and scoring the results with metrics such as how often the issue appears. Interactive transcripts make the results easy to review. This saves hours of manual work, lets you quickly compare models on fresh tests to avoid overfitting, and gives reliable, reproducible insights into AI safety—useful for researchers building trustworthy systems.
https://github.com/safety-research/bloom
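A hypothetical sketch of the final scoring step: given per-transcript judge scores (1 = behavior fully elicited, 0 = absent), compute an elicitation rate like the "how often the issue appears" metric described above. The function name and threshold are illustrative, not Bloom's actual API:

```python
from statistics import mean

def elicitation_rate(scores, threshold=0.5):
    """Fraction of simulated conversations whose judge score crosses the threshold."""
    return mean(1.0 if s >= threshold else 0.0 for s in scores)

# Five simulated conversations; three cross the threshold, so the rate is 0.6.
rate = elicitation_rate([0.9, 0.2, 0.7, 0.1, 0.8])
```

Aggregating judge scores into a single rate is what makes runs comparable across models and across repeated evaluations.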
#python #ai_tool #darkweb #darkweb_osint #investigation_tool #llm_powered #osint #osint_tool
Robin is an AI-powered OSINT tool that searches and scrapes the dark web: it refines your queries with large language models, runs multi-engine dark-web searches, scrapes onion sites via Tor, filters out noise with AI, and produces a concise investigation summary you can save or export. With Docker and CLI options and support for multiple LLM providers (OpenAI, Anthropic, Gemini, local models), it saves hours of manual searching and yields ready-to-use reports for faster, more focused investigations.
https://github.com/apurvsinghgautam/robin
#jupyter_notebook
DINOv3 offers powerful self-supervised vision models from Meta AI, like ViT up to 7B parameters and ConvNeXt, pretrained on 1.7B web or satellite images. Load them easily via PyTorch Hub, Hugging Face Transformers (v4.56+), or timm (v1.0.20+), with code examples for features, depth, detection, and segmentation. You benefit by using these top-performing, dense features without fine-tuning or labels—saving time and compute for tasks like classification, object detection, and zero-shot analysis on your images.
https://github.com/facebookresearch/dinov3
#rust #async #framework #http_server #salvo #web
Salvo is a simple yet powerful Rust web framework that gives you fast, modern servers (HTTP/1–3, WebSocket/WebTransport) with minimal Rust knowledge required, built on Hyper and Tokio. It uses a unified handler/middleware model, an infinitely nestable, chainable router for clear public/private route grouping, built-in multipart/file upload and data extraction, automatic OpenAPI generation, ACME TLS support, and a CLI to scaffold projects—so you can prototype and deploy secure, high-performance backends quickly with less boilerplate and easier routing, testing, and API documentation.
https://github.com/salvo-rs/salvo
#cplusplus #arduino #ble_jammer #ble_spoof #ble_spoofer #cybersecurity #deauther #esp32 #hack #hacktoberfest #jammer #nrf_scanner #nrf24l01 #sour_apple
nRFBOX is a handheld ESP32-based tool that scans and analyzes the 2.4 GHz band (Wi‑Fi, BLE, etc.), shows signal strength and channel activity, and can run jamming, BLE jamming/spoofing, and Wi‑Fi deauthentication tests for security research and troubleshooting. It combines an ESP32, NRF24 modules, OLED display, battery management, and SD support for firmware and logging, with notes about limited range, device variability, and power limits when using multiple NRF modules. Benefit: you can use it to find crowded channels, diagnose wireless interference, and test network/device resilience in controlled, legal test environments.
https://github.com/cifertech/nRFBox
#python #auto_regressive_diffusion_model #diffusion_models #video_generation #wan_video
LightX2V is a fast, lightweight framework for generating videos from text or images, supporting models like HunyuanVideo-1.5 and Wan2.1/2.2 with up to 20x speedup via 4-step distillation, low VRAM use (8GB+), and features like offloading, quantization, and multi-GPU parallelism—outperforming rivals on H100/RTX 4090. You benefit by creating high-quality videos quickly on everyday hardware, saving time and costs for content creation, prototyping, or professional workflows, with easy Docker/ComfyUI setup and free online trials.
https://github.com/ModelTC/LightX2V