Exploiting a Flaw in Bitmap Handling in Windows User-Mode Printer Drivers
📣 RedmondSecGnome
@malwr
Zero Day Initiative
Zero Day Initiative – Exploiting a Flaw in Bitmap Handling in Windows User-Mode Printer Drivers
In this guest blog from researcher Marcin Wiązowski, he details CVE-2023-21822 – a Use-After-Free (UAF) in win32kfull that could lead to a privilege escalation. The bug was reported through the ZDI program and later patched by Microsoft. Marcin has…
Reverse Engineering a Neural Network's Clever Solution to Binary Addition
📣 unireaxert
And here I was hoping for some carry lookahead solution. I guess I was still thinking in binary.
👤 henke37
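For contrast with whatever the network learned, the carry-lookahead approach henke37 hoped for derives every carry from per-bit generate (`a AND b`) and propagate (`a XOR b`) signals instead of waiting for each sum bit. A minimal software sketch (the function name is mine; real hardware evaluates the carry terms in parallel, which this loop flattens):

```python
def lookahead_add(a: int, b: int, width: int = 8) -> int:
    """Add two integers on `width` bits using generate/propagate carry logic."""
    g = a & b   # generate: this bit position produces a carry on its own
    p = a ^ b   # propagate: this bit position forwards an incoming carry
    carries = 0
    for i in range(width):
        # c_{i+1} = g_i OR (p_i AND c_i) -- the classic full-adder recurrence
        c_i = (carries >> i) & 1
        c_next = ((g >> i) & 1) | (((p >> i) & 1) & c_i)
        carries |= c_next << (i + 1)
    # each sum bit is p_i XOR c_i; mask the result to the adder's width
    return (p ^ carries) & ((1 << width) - 1)
```

In silicon the payoff is that the carry products can be expanded and computed in O(log n) depth; in Python the loop is of course still sequential.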
> It's an exciting prospect to be sure, but my excitement is somewhat dulled because I was immediately reminded of The Bitter Lesson
I tend to agree with that ending: these kinds of attempts at "interpreting" what a neural network learns in a way that makes sense to us will only get us so far.
Just accept it as a black box. All we need to do is formulate an adequate loss function, feed the network massive amounts of data, and let the model "learn" on its own how to approximate a solution. Thanks to Moore's law, it tends to eventually work even for very complex problems once we reach a level of computational resources that can handle the task.
These meta searching/optimization algorithms are good enough as a general solution, no need to waste time coming up with "special" methods that rely on field-specific human knowledge.
👤 amroamroamro
Casey Primozic's Blog
Reverse Engineering a Neural Network's Clever Solution to Binary Addition
While training small neural networks to perform binary addition, a surprising solution emerged that allows the network to solve the problem very effectively. This post explores the mechanism behind that solution and how it relates to analog electronics.
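The "analog electronics" connection in the summary can be made concrete with a toy sketch: treat each operand as the output of a DAC, sum the two analog levels, and re-digitize. All names below are invented for illustration; this is not the network's actual learned mechanism, just the arithmetic identity behind the analogy — carries never appear explicitly, they fall out of re-digitization:

```python
def dac(bits):
    """Convert a list of bits (LSB first) into a single 'analog' level."""
    return sum(b * 2**i for i, b in enumerate(bits))

def adc(level, width):
    """Re-digitize an analog level back into bits (LSB first)."""
    return [(level >> i) & 1 for i in range(width)]

def analog_add(a_bits, b_bits):
    # Summing the two levels performs the addition; no explicit carry chain.
    return adc(dac(a_bits) + dac(b_bits), len(a_bits) + 1)

analog_add([1, 1, 0], [1, 0, 1])  # 3 + 5 = 8 -> [0, 0, 0, 1]
```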
x86 prefixes and escape opcodes flowchart (WIP)
📣 simon_o
REX2 has been introduced recently
👤 igor_sk
> REX(1-byteprefix
missing a ) there
👤 zid
Perhaps this helps with getting a better grasp of how x86 instructions work.
Note that it's still work-in-progress.
👤 simon_o
soc.me
x86 prefixes and escape opcodes flowchart
start here → … [ASCII box-drawing flowchart; not reproducible in this text preview]
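Since the chart itself does not survive as text, here is a rough Python companion to the first decisions it encodes: peel off legacy prefixes, then an optional REX, then check for the `0F` escape bytes that select a larger opcode map. A simplified sketch with invented function names (VEX/EVEX and the newer REX2 prefix igor_sk mentions are omitted), not a full decoder:

```python
LEGACY_PREFIXES = {
    0xF0: "LOCK", 0xF2: "REPNE", 0xF3: "REP/REPE",
    0x2E: "CS", 0x36: "SS", 0x3E: "DS", 0x26: "ES", 0x64: "FS", 0x65: "GS",
    0x66: "operand-size", 0x67: "address-size",
}

def split_prefixes(insn: bytes, long_mode: bool = True):
    """Return (prefixes, opcode_bytes) for a raw instruction encoding."""
    prefixes, i = [], 0
    # Any number of legacy prefixes may appear, in any order.
    while i < len(insn) and insn[i] in LEGACY_PREFIXES:
        prefixes.append(LEGACY_PREFIXES[insn[i]])
        i += 1
    # In 64-bit mode a single REX prefix (0x40-0x4F) must come last,
    # immediately before the opcode or escape bytes.
    if long_mode and i < len(insn) and 0x40 <= insn[i] <= 0x4F:
        prefixes.append("REX")
        i += 1
    return prefixes, insn[i:]

def opcode_map(opcode: bytes) -> str:
    """Name the opcode map selected by the escape bytes, if any."""
    if opcode[:2] == b"\x0f\x38":
        return "0F 38 (three-byte)"
    if opcode[:2] == b"\x0f\x3a":
        return "0F 3A (three-byte)"
    if opcode[:1] == b"\x0f":
        return "0F (two-byte)"
    return "one-byte"
```

For example, `F3 0F 1E FA` (endbr64) splits into a REP prefix plus a two-byte-map opcode, while `48 89 D8` (`mov rax, rbx`) is a REX prefix plus a one-byte-map opcode.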
ModelScan: Open Source Protection Against Model Serialization Attacks - Support for Pickle, H5, and SavedModel formats.
📣 wolfticketsai
I lead product at Protect AI and we just released ModelScan. It is an open source project that scans models to determine whether they contain unsafe code, and it is the first model scanning tool to support multiple model formats. ModelScan currently supports the H5, Pickle, and SavedModel formats, which protects you when using PyTorch, TensorFlow, Keras, Sklearn, and XGBoost, with more on the way.
This attack surface is incredibly easy to target and this tool can be loaded locally and scans your models quickly to check for any unsafe code before you use them.
Happy to answer any questions!
👤 wolfticketsai
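For readers unfamiliar with why serialized models are an attack surface: unpickling can invoke arbitrary callables via `__reduce__`. A minimal illustration of both the problem and the opcode-scanning idea, using only the stdlib `pickletools` (the `scan` and `Payload` names are invented; this is a sketch of the technique, not ModelScan's implementation):

```python
import pickle
import pickletools

# Opcodes that import names or call objects while a pickle is being loaded.
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan(data: bytes) -> set:
    """Return the code-execution-capable opcodes present in a pickle stream."""
    return {op.name for op, arg, pos in pickletools.genops(data) if op.name in SUSPICIOUS}

class Payload:
    def __reduce__(self):
        # A real payload would call os.system or similar; print is a stand-in.
        return (print, ("malicious code would run here",))

assert scan(pickle.dumps(Payload()))              # the callable import/call is flagged
assert not scan(pickle.dumps({"w": [1.0, 2.0]}))  # plain tensor-like data is clean
```

Note the scan inspects the byte stream without ever calling `pickle.loads`, which is what makes checking untrusted model files safe.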
Protect AI
Announcing ModelScan: Open Source Protection Against Model Serialization Attacks
We are thrilled to announce: ModelScan. An open source project that scans models to determine if they contain unsafe code.
[Article] Some university researchers trained a machine learning model that can predict your password with an accuracy of 95% based on the sound of your keyboard strokes.
I've always noticed that my full name has a unique sound pattern when I type it on the keyboard. I can also recognize which of my passwords I typed judging only by the sound of the keystrokes. This might be very dangerous!
Here's the article.
📣 _iamhamza_
Cool hax, bro
👤 dnc_1981
Not with new "Infinitely Variable Click" keyboards that randomly cycle from Gateron Greens to Cherry Reds to MX Blacks and everything in between! Confuse the FUCK out of your fingers but protect against this very specific edge case! DOD approved. $10,000 per unit.
👤 zyzzogeton
Trained on MacBook Pro, good luck with thousands of various mechanical keys and keyboards!
👤 boopboopboopers
BleepingComputer
New acoustic attack steals data from keystrokes with 95% accuracy
A team of researchers from British universities has trained a deep learning model that can steal data from keyboard keystrokes recorded using a microphone with an accuracy of 95%.
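The premise — each key's sound carries a distinguishable signature — can be shown with a deliberately toy model. The researchers trained a deep learning model on spectrograms of real recordings; everything below is an invented stand-in where "keystrokes" are synthetic sine bursts with made-up per-key frequencies and the classifier is a crude nearest-signature match:

```python
import math

SAMPLE_RATE = 8000
KEY_FREQS = {"a": 400.0, "s": 520.0, "d": 660.0}  # invented per-key signatures

def keystroke(key, n=256):
    """Synthesize a toy 'click' as a short sine burst for the given key."""
    f = KEY_FREQS[key]
    return [math.sin(2 * math.pi * f * t / SAMPLE_RATE) for t in range(n)]

def dominant_freq(samples):
    """Estimate the dominant frequency from zero crossings (a crude feature)."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    duration = len(samples) / SAMPLE_RATE
    return crossings / (2 * duration)

def classify(samples):
    """Pick the key whose signature frequency best matches the recording."""
    est = dominant_freq(samples)
    return min(KEY_FREQS, key=lambda k: abs(KEY_FREQS[k] - est))

assert [classify(keystroke(k)) for k in "asd"] == ["a", "s", "d"]
```

Real recordings are far messier, which is exactly why the attack needs a learned model rather than a single hand-picked feature like this one.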
Unauthenticated Log Injection In Splunk SOAR - can inject ANSI (American National Standards Institute) escape codes into Splunk log files that, when a vulnerable terminal application reads them, can potentially result in malicious code execution in the vulnerable application
📣 digicat
Splunk Vulnerability Disclosure
Unauthenticated Log Injection In Splunk SOAR
In Splunk SOAR versions lower than 6.1.0, a maliciously crafted request to a web endpoint through Splunk SOAR can inject ANSI (American National Standards Institute) escape codes into Splunk log files that, when a vulnerable terminal application reads them…
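This class of bug is easy to reproduce in miniature: any log pipeline that writes attacker-controlled strings verbatim can smuggle terminal escape sequences to whoever later tails the file. A minimal sanitizer sketch (the function name is invented and the regex is a common simplification covering CSI and OSC sequences, not Splunk's actual fix):

```python
import re

# Matches CSI sequences (ESC [ ... final byte) and OSC sequences (ESC ] ... BEL/ST).
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;?]*[ -/]*[@-~]|\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)")

def sanitize_for_log(field: str) -> str:
    """Strip terminal escape sequences from untrusted input before logging."""
    return ANSI_ESCAPE.sub("", field)

attacker_input = "login failed\x1b[2J\x1b[1;31mFAKE ALERT"
sanitize_for_log(attacker_input)  # -> "login failedFAKE ALERT"
```

Here `\x1b[2J` would clear the analyst's screen and `\x1b[1;31m` would recolor subsequent text; stripping them at write time neutralizes the log file regardless of which terminal reads it.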
[Hard Disk Forensics] I just published my Hard Disk Forensics video notes in the form of a free Udemy course.
Hi! I recorded some video notes last month and I thought of publishing them in the form of a free Udemy course just to understand how Udemy course creation works. Would appreciate your feedback!
https://www.udemy.com/course/hard-disk-forensics-a-learning-guide/
Thanks!
📣 untitledusername445
I've purchased the course. How many hours is it? I'll give feedback when I complete it.
👤 mutuno
Very cool. Thanks for this!
👤 v_rocco
Udemy
Free Tutorial - Hard Disk Forensics: A Learning Guide
Get a brief overview of everything you need to learn to master Hard Disk Forensics. - Free Course
GitHub - ZygiskFrida: Injecting frida gadget via Zygisk
📣 Lico_
This is a little tool I have been working on. It is an alternative way to inject frida into Android processes: instead of embedding the gadget into the APK or having frida-server inject it via ptrace, this module loads the gadget via Zygisk. I found it useful, as it is sometimes able to bypass simple checks out of the box, and decided to open source it.
I didn't have much opportunity to work with C/C++ before and used this to learn a bit about Zygisk modules and the language, so any feedback, contributions, and suggestions are welcome.
👤 Lico_
GitHub
GitHub - lico-n/ZygiskFrida: Injects frida gadget using zygisk to bypass anti-tamper checks.
Injects frida gadget using zygisk to bypass anti-tamper checks. - lico-n/ZygiskFrida