Forwarded from KernelSU Next Bot
KernelSU_Next_v1.0.9-29-ga540992e_12826-release.apk
17.7 MB
CI Manager (TEST BUILD)
#ci_2626
Reset seccomp filter count when escaping to root (https://github.com/tiann/KernelSU/pull/2708) (#743)
-Note: legacy kernels:
https://github.com/selfmusing/kernel_xiaomi_violet/commit/9596554cfbdab57682a430c15ca64c691d404152
Co-authored-by: Wang Han <416810799@qq.com>
Commit
Workflow run
KernelSU_Next_v1.0.9-29-ga540992e-spoofed_12826-release.apk
17.7 MB
CI Manager (SPOOFED BUILD)
#ci_2626
Reset seccomp filter count when escaping to root (https://github.com/tiann/KernelSU/pull/2708) (#743)
-Note: legacy kernels:
https://github.com/selfmusing/kernel_xiaomi_violet/commit/9596554cfbdab57682a430c15ca64c691d404152
Co-authored-by: Wang Han <416810799@qq.com>
Commit
Workflow run
Forwarded from The Hacker News
🚨 523 malicious SVG files are slipping past antivirus scans.
Hackers are posing as Colombia’s Attorney General, using fake “document downloads” to secretly drop malware.
The kicker? Every sample evaded detection.
Here’s what’s going on ↓ https://thehackernews.com/2025/09/virustotal-finds-44-undetected-svg.html
Forwarded from The Hacker News
Pentest reports are broken.
Teams are still stuck with static PDFs while attackers move in real time.
Now, platforms like PlexTrac deliver findings instantly—no waiting, no manual ticketing, no weeks-long delays. Faster fixes, lower risk.
Here’s how it changes the game ↓ https://thehackernews.com/2025/09/automation-is-redefining-pentest.html
Forwarded from The Hacker News
🚨 The Salesloft Drift breach has ignited a flurry of incident disclosures from SaaS providers, making it hard for security teams to keep up.
Nudge Security has put together a tracker for notifications related to this breach which will be updated as more providers issue communications.
Stay up to date here: https://thn.news/breach-tracker
Forwarded from The Hacker News
🚨 Cyber gang TAG-150 just built CastleRAT in Python & C — a new trojan that steals passwords, hijacks crypto wallets, logs keystrokes & takes over PCs.
It’s the latest weapon in their CastleLoader malware ops.
Full story → https://thehackernews.com/2025/09/tag-150-develops-castlerat-in-python.html
Forwarded from Libreware
ChatterUI - A simple app for LLMs
https://github.com/Vali-98/ChatterUI
https://t.me/chatterui
ChatterUI is a native mobile frontend for LLMs.
Run LLMs on device or connect to various commercial or open source APIs. ChatterUI aims to provide a mobile-friendly interface with fine-grained control over chat structuring.
Features:
Run LLMs on-device in Local Mode
Connect to various APIs in Remote Mode
Chat with characters. (Supports the Character Card v2 specification.)
Create and manage multiple chats per character.
Customize Sampler fields and Instruct formatting
Integrates with your device’s text-to-speech (TTS) engine
Usage
Download and install latest APK from the releases page.
iOS is currently unavailable because the developer lacks iOS hardware for development.
Local Mode
ChatterUI uses llama.cpp under the hood to run GGUF files on device. A custom adapter, cui-llama.rn, is used to integrate it with React Native.
To use on-device inferencing, first enable Local Mode, then go to Models > Import Model / Use External Model and choose a GGUF model that can fit in your device's memory. The import functions are as follows:
Import Model: Copies the model file into ChatterUI, potentially speeding up startup time.
Use External Model: Uses a model from your device storage directly, removing the need to copy large files into ChatterUI but with a slight delay in load times.
After that, you can load the model and begin chatting!
Note: For devices with Snapdragon 8 Gen 1 and above or Exynos 2200+, it is recommended to use the Q4_0 quantization for optimized performance.
Remote Mode
Remote Mode allows you to connect to a few common APIs from both commercial and open source projects.
Open Source Backends:
koboldcpp
text-generation-webui
Ollama
Dedicated APIs:
OpenAI
Claude (with ability to use a proxy)
Cohere
Open Router
Mancer
AI Horde
Generic backends:
Generic Text Completions
Generic Chat Completions
These should be compliant with any Text Completion/Chat Completion backends such as Groq or Infermatic.
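The OpenAI-style chat-completions wire format that these generic backends share can be sketched as follows. This is an illustrative Python sketch, not ChatterUI code; the endpoint URL and model name are placeholders for whatever backend you actually run:

```python
import json
import urllib.request


def build_chat_request(base_url, model, messages):
    """Build an OpenAI-compatible /v1/chat/completions request.

    Any backend that speaks this format (koboldcpp, Ollama, Groq, ...)
    should accept a payload shaped like this.
    """
    payload = {
        "model": model,
        "messages": messages,   # [{"role": "user", "content": "..."}]
        "temperature": 0.7,     # sampler fields vary by backend
        "max_tokens": 256,
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Example: point this at any compatible backend (URL is a placeholder,
# shown here as a local Ollama instance on its default port).
req = build_chat_request(
    "http://localhost:11434",
    "llama3",
    [{"role": "user", "content": "Hello!"}],
)
```

Sending the request with `urllib.request.urlopen(req)` returns a JSON body whose reply text lives under `choices[0].message.content` in this API shape.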
Custom APIs:
Is your API provider missing? ChatterUI allows you to define APIs using its template system.
Read more about it here!
#ai #Android
Forwarded from Libreware
Maid - Mobile Artificial Intelligence Distribution
Maid is a free, open-source, cross-platform application for interfacing with llama.cpp models locally, and with Ollama, Mistral, Google Gemini, and OpenAI models remotely.
-Choose from a wide range of models that run LOCALLY, or access remote models via API key!
-Text-based output
-Image generation (selected models only)
-No video or short-clip generation yet
-Voice generation on selected models (not tested)
-Setting model parameters
-Setting the system prompt (making the model behave/generate output in a certain way)
-And more.
Get it on
Github - https://github.com/Mobile-Artificial-Intelligence/maid/releases/latest
Fdroid - https://f-droid.org/packages/com.danemadsen.maid/
Spystore - https://play.google.com/store/apps/details?id=com.danemadsen.maid
*Don't clear the app's cache, and exclude it from your system's automatic cache cleaning, as the app stores everything in the device cache*
Follow @nogoolag and @libreware for more
#ai
Forwarded from Libreware
Libreware
Photo
Maid is heating up my phone and draining the battery. I don't recommend it for lower-end phones. If a Snapdragon 8 Gen 2 behaves like this, lower-end phones will fail to run this app.
Anyway, it runs without internet!
Forwarded from GSMArena (IFTTT)
Oppo exec officially reveals the Find X9 and Find X9 Pro's battery capacity, more details
https://ift.tt/mBRQ1fG