#go #distribution_spec #helm #kubernetes #oci #oci_distribution #opencontainers #zot
Zot is a lightweight, production-ready, OCI-native container registry for storing images, Helm charts, SBOMs, and other artifacts without vendor lock-in. It ships as a single binary with built-in authentication (OIDC, LDAP), pluggable storage back ends (S3, Azure), vulnerability scanning, pull-through caching to sidestep Docker Hub rate limits and latency, and ARM/edge support. You can easily self-host a secure, scalable alternative to Docker Hub, cutting costs, improving speed, and enabling secret-less workflows on any device.
https://github.com/project-zot/zot
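A minimal self-hosting sketch, assuming zot's documented config schema and the ghcr.io container image (verify both against the repo before relying on them):

```shell
# Minimal registry config; schema per zot's docs (treat as an assumption).
cat > config.json <<'EOF'
{
  "storage": { "rootDirectory": "/var/lib/zot" },
  "http": { "address": "0.0.0.0", "port": "5000" }
}
EOF

# Run as a container (the standalone binary with `zot serve config.json` also works)
docker run -d -p 5000:5000 \
  -v "$PWD/config.json:/etc/zot/config.json" \
  ghcr.io/project-zot/zot-linux-amd64:latest

# Any OCI client can now push to it:
docker tag alpine:latest localhost:5000/alpine:latest
docker push localhost:5000/alpine:latest
```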
#python #glm #image2text #ocr
GLM-OCR is a compact 0.9B-parameter model for accurate OCR on complex documents such as tables, code, formulas, seals, and receipts, scoring 94.62 on OmniDocBench V1.5. Install it via `pip install glmocr`, then either call the cloud API (no GPU needed) or self-host with vLLM/SGLang for fast, low-cost inference, and get JSON/Markdown output via the CLI or Python. You get quick, robust document parsing that saves time, cuts compute costs, and integrates simply into real-world apps.
https://github.com/zai-org/GLM-OCR
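Getting started is a one-liner; since the blurb above doesn't spell out the CLI surface, the sketch below is deliberately minimal (only the package name comes from the source; the `glmocr` entry-point name is an assumption):

```shell
# Only `pip install glmocr` is taken from the project docs; everything
# else is exploratory — inspect the installed entry points first.
pip install glmocr
glmocr --help   # list the actual commands/flags before scripting against them
```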
#typescript
OMX is a workflow layer that enhances OpenAI Codex by adding better task routing, reusable skills, and project state management. It keeps Codex as the execution engine while introducing canonical workflows like `$deep-interview` for clarification, `$ralplan` for planning approval, and `$ralph` or `$team` for execution. The benefit is that you get a structured, consistent approach to complex coding tasks—clarify intent first, approve the plan, then execute with either persistent single-owner loops or coordinated parallel work—without replacing Codex itself, just making your day-to-day work more organized and efficient.
https://github.com/Yeachan-Heo/oh-my-codex
#typescript #electron #open_source #pixijs #screen_capture #screen_recorder
OpenScreen is a free, open-source alternative to Screen Studio for making polished product demos and screen walkthroughs. Record your full screen or individual windows; add zooms, mic/system audio, backgrounds, motion blur, annotations, trims, and speed changes; and export in any aspect ratio. Download it from GitHub for macOS, Linux, or Windows. It's MIT-licensed, so you can use, modify, or even sell it freely as long as you keep the copyright notice. You skip the subscription fee (around $29/month for Screen Studio), get the core tools without limits for personal or commercial use, and can customize it fully.
https://github.com/siddharthvaddem/openscreen
#rust #filesearch #lua #neovim #neovim_plugin
FFF is a super-fast fuzzy file finder for AI agents and Neovim users. It excels at grepping, fuzzy matching, and globbing, with built-in memory that ranks results by frecency, git status, file size, and more for typo-tolerant searches. Install via a script for AI agents (like Claude) or via Lua for Neovim (keymaps such as `ff` for files and `fg` for grep). It saves you time and tokens by finding code instantly, skipping useless files, and boosting productivity on big repos.
https://github.com/dmtrKovalenko/fff.nvim
#python #apple_silicon #florence2 #idefics #llava #llm #local_ai #mlx #molmo #paligemma #pixtral #vision_framework #vision_language_model #vision_transformer
MLX-VLM lets you run, chat with, and fine-tune Vision Language Models (VLMs), plus audio/video models, on your Mac using MLX; install with `pip install -U mlx-vlm`. Use the CLI for quick text/image/audio generation (e.g., `mlx_vlm.generate --model ... --image photo.jpg`), a Gradio UI for chats, Python scripts, or a FastAPI server with OpenAI-compatible endpoints that supports multiple images and videos. Features like TurboQuant cut KV-cache memory by 76%, and LoRA/QLoRA fine-tuning works on consumer hardware. You can experiment with powerful multimodal AI locally: fast, memory-efficient, no cloud costs, and ideal for Mac users tweaking models affordably.
https://github.com/Blaizzy/mlx-vlm
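The CLI call mentioned above, expanded into a runnable sketch (the model name is an example from the mlx-community Hugging Face org, and the flags reflect my understanding of the current CLI; verify with `python -m mlx_vlm.generate --help`):

```shell
pip install -U mlx-vlm

# Describe a local image with a quantized VLM, fully on-device on Apple silicon
python -m mlx_vlm.generate \
  --model mlx-community/Qwen2-VL-2B-Instruct-4bit \
  --prompt "Describe this image." \
  --image photo.jpg \
  --max-tokens 100
```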
#yara #awesome_list #blueteam #blueteam_tools #cti #detection #detection_engineering #dfir #hacktools #incident_response #ioc #iocs #ir #ransomware #redteam #rmm #security #siem #soc #threat_hunting #threat_intelligence
You can access comprehensive security detection lists and threat hunting resources that help identify malicious activity across your infrastructure. These curated collections include indicators like suspicious file hashes, domain names, IP addresses, and behavioral patterns organized by threat type—from ransomware and phishing to command-and-control servers and vulnerable drivers. By integrating these lists into your security tools like SIEM platforms and endpoint detection systems, you gain immediate visibility into known threats while learning detection methodologies through guides and YARA rules. This accelerates your ability to hunt for compromises, validate security controls, and stay current with emerging attack techniques without building detection logic from scratch.
https://github.com/mthcht/awesome-lists
#kotlin
Google AI Edge Gallery lets you run powerful open-source AI models like Gemma 4 on your phone offline, with features like smart agents, image analysis, voice transcription, and a prompt tester. It keeps everything private on your device for fast, secure use without internet. Download from Google Play or App Store to test advanced AI reasoning and creativity anytime, boosting your productivity and privacy on the go.
https://github.com/google-ai-edge/gallery
#cplusplus
LiteRT-LM is Google's free, high-speed runtime for running large language models like Gemma 4 on phones, computers, Raspberry Pi, and more, with GPU acceleration, vision/audio support, and tool use for building smart apps. It powers AI features in Chrome, Pixel Watch, and Chromebooks; try it quickly via a CLI command on Linux, macOS, Windows, or a Pi without writing code. You can easily deploy fast, private on-device AI for apps, prototyping, or edge projects, saving time and cloud costs.
https://github.com/google-ai-edge/LiteRT-LM
#java #minecraft #minecraft_mod #vulkan #vulkan_renderer
VulkanMod is a Fabric mod that replaces Minecraft: Java Edition's legacy OpenGL renderer with a modern Vulkan 1.2 engine, cutting CPU overhead, boosting GPU performance, and adding features like Wayland support and chunk optimizations for much higher FPS and smoother gameplay. Install the Fabric loader, download the `.jar` from Modrinth or CurseForge, and drop it in your `.minecraft/mods` folder to get lag-free worlds and better hardware utilization right away; it's perfect for high-res packs or busy servers.
https://github.com/xCollateral/VulkanMod
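The install steps above as shell commands (default Linux/macOS game path shown; the jar filename is illustrative, so download the real release from Modrinth or CurseForge first):

```shell
# Assumes the Fabric loader is already installed for your Minecraft version.
mkdir -p ~/.minecraft/mods
# Filename is illustrative; use the actual downloaded release jar.
cp ~/Downloads/VulkanMod-*.jar ~/.minecraft/mods/
# Launch Minecraft with the Fabric profile; the mod loads automatically.
```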
#typescript
QMD is an on-device search engine that indexes your markdown notes, meeting transcripts, docs, and knowledge bases for fast keyword, semantic, or hybrid searches using local AI models—no internet needed. Install via npm or bun, add collections like `qmd collection add ~/notes --name notes`, embed with `qmd embed`, then query like `qmd query "project timeline"` for top results with scores and context. It integrates with AI agents via JSON output or MCP server. You benefit by quickly finding info across your files to boost productivity and make smarter decisions in agentic workflows.
https://github.com/tobi/qmd
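The three commands quoted above form a complete index/embed/query loop; assembled as a session (the npm/bun package name is an assumption; check the repo's install instructions):

```shell
npm install -g qmd                        # package name assumed; bun also works

qmd collection add ~/notes --name notes   # register a folder of markdown files
qmd embed                                 # build local embeddings (no internet needed)
qmd query "project timeline"              # hybrid keyword + semantic search
```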
#python
PersonaPlex is a real-time speech model for natural, low-latency conversations. Control its voice with audio prompts and role via simple text—like a friendly teacher, customer service rep, or casual chat partner—with natural male/female voices. Install easily, launch a web demo server, and test offline. You benefit by creating personalized AI interactions for apps, role-play, or fun talks, with quick setup and low GPU needs via CPU offload.
https://github.com/NVIDIA/personaplex
#python #ai_agents #ai_tutor #clawdbot #cli_tool #deepresearch #interactive_learning #large_language_models #multi_agent_systems #rag
DeepTutor v1.0.0 is an open-source AI tutoring tool with personalized TutorBots, unified chat modes for solving problems, quizzes, research, and math animations, plus knowledge bases from your PDFs, persistent memory of your learning style, AI co-writing, and guided plans—all via easy web, Docker, or CLI setup. You benefit by getting a smart, evolving study companion that adapts to you, boosts understanding with interactive tools, and saves time on tough topics without starting over.
https://github.com/HKUDS/DeepTutor
#other
Use Karpathy-inspired guidelines in a single CLAUDE.md file to fix Claude's common coding flaws: wrong assumptions, overcomplicated code, unnecessary edits, and poor goal-setting. Four rules: think explicitly before coding, prioritize simplicity, make only the required changes, and use tests as verifiable success criteria. Install via a Claude plugin or a curl command. You get cleaner, minimal code, fewer errors, proactive clarifying questions, and a self-correcting AI that delivers precise results faster.
https://github.com/forrestchang/andrej-karpathy-skills
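A minimal sketch of what such a guidelines file might contain, paraphrasing the four rules above (the wording and structure are illustrative, not the repo's actual CLAUDE.md):

```markdown
# CLAUDE.md (illustrative sketch)

## 1. Think before coding
State your assumptions and ask clarifying questions before writing code.

## 2. Prioritize simplicity
Prefer the simplest solution that works; avoid speculative abstractions.

## 3. Make only required changes
Touch nothing outside the scope of the task; no drive-by refactors.

## 4. Verify with tests
Define a test that proves success before implementing, then make it pass.
```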