🚨 CVE-2025-62218
Concurrent execution using shared resource with improper synchronization ('race condition') in Microsoft Wireless Provisioning System allows an authorized attacker to elevate privileges locally.
@cveNotify
🚨 CVE-2025-62219
Double free in Microsoft Wireless Provisioning System allows an authorized attacker to elevate privileges locally.
@cveNotify
🚨 CVE-2025-62220
Heap-based buffer overflow in Windows Subsystem for Linux GUI allows an unauthorized attacker to execute code over a network.
@cveNotify
🚨 CVE-2025-62222
Improper neutralization of special elements used in a command ('command injection') in Visual Studio Code CoPilot Chat Extension allows an unauthorized attacker to execute code over a network.
@cveNotify
🚨 CVE-2025-62452
Heap-based buffer overflow in Windows Routing and Remote Access Service (RRAS) allows an authorized attacker to execute code over a network.
@cveNotify
🚨 CVE-2025-62453
Improper validation of generative AI output in GitHub Copilot and Visual Studio Code allows an authorized attacker to bypass a security feature locally.
@cveNotify
🚨 CVE-2022-50001
In the Linux kernel, the following vulnerability has been resolved:
netfilter: nft_tproxy: restrict to prerouting hook
TPROXY is only allowed from prerouting, but nft_tproxy doesn't check this.
This fixes a crash (null dereference) when using tproxy from e.g. output.
@cveNotify
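The fix amounts to rejecting the expression outside the prerouting hook at validation time. A minimal sketch of that kind of restriction, with a simplified callback signature (an illustration, not the literal upstream patch):

#include <net/netfilter/nf_tables.h>

/* Sketch only, simplified signature: refuse to load nft_tproxy into any
 * chain that is not attached to the prerouting hook. */
static int nft_tproxy_validate_sketch(const struct nft_ctx *ctx)
{
        /* nft_chain_validate_hooks() fails unless the chain hooks into
         * one of the hook points named in the mask */
        return nft_chain_validate_hooks(ctx->chain,
                                        1 << NF_INET_PRE_ROUTING);
}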
🚨 CVE-2024-30851
Directory Traversal vulnerability in codesiddhant Jasmin Ransomware v.1.0.1 allows an attacker to obtain sensitive information via the download_file.php component.
@cveNotify
GitHub - chebuya/CVE-2024-30851-jasmin-ransomware-path-traversal-poc: Jasmin ransomware web panel path traversal PoC
🚨 CVE-2024-34240
QDOCS Smart School 7.0.0 is vulnerable to Cross Site Scripting (XSS) resulting in arbitrary code execution in admin functions related to adding or updating records.
@cveNotify
New Stored XSS 0day Vulnerability Discovered
🚨 CVE-2024-10081
CodeChecker is an analyzer tooling, defect database and viewer extension for the Clang Static Analyzer and Clang Tidy.
Authentication bypass occurs when the API URL ends with Authentication. This bypass allows superuser access to all API endpoints other than Authentication. These endpoints include the ability to add, edit, and remove products, among others. All endpoints apart from /Authentication are affected by the vulnerability.
This issue affects CodeChecker: through 6.24.1.
@cveNotify
Authentication bypass when using specifically crafted URLs
Authentication bypass occurs when the API URL ends with Authentication, Configuration or ServerInfo. This bypass allows superuser access to all API endpoints other than Authentication....
🚨 CVE-2025-22039
In the Linux kernel, the following vulnerability has been resolved:
ksmbd: fix overflow in dacloffset bounds check
The dacloffset field was originally typed as int and used in an
unchecked addition, which could overflow and bypass the existing
bounds check in both smb_check_perm_dacl() and smb_inherit_dacl().
This could result in out-of-bounds memory access and a kernel crash
when dereferencing the DACL pointer.
This patch converts dacloffset to unsigned int and uses
check_add_overflow() to validate access to the DACL.
@cveNotify
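The described hardening maps directly onto check_add_overflow() from <linux/overflow.h>. A minimal sketch, where the helper name and the size/length parameters are assumptions for illustration rather than the ksmbd code itself:

#include <linux/overflow.h>
#include <linux/types.h>

/* Sketch only: validate that the DACL at dacloffset fits inside the
 * received buffer before it is dereferenced. check_add_overflow()
 * returns true if the addition wrapped, so a huge offset can no longer
 * slip past the bounds check. */
static bool dacl_in_bounds(unsigned int dacloffset, unsigned int dacl_size,
                           unsigned int buf_len)
{
        unsigned int end;

        if (check_add_overflow(dacloffset, dacl_size, &end))
                return false;           /* offset + size wrapped around */

        return end <= buf_len;          /* DACL lies within the buffer */
}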
🚨 CVE-2025-22043
In the Linux kernel, the following vulnerability has been resolved:
ksmbd: add bounds check for durable handle context
Add missing bounds check for durable handle context.
@cveNotify
🚨 CVE-2025-22074
In the Linux kernel, the following vulnerability has been resolved:
ksmbd: fix r_count dec/increment mismatch
r_count is only incremented when there is an oplock break wait, so the
increments and decrements of r_count are not paired. This can cause
r_count to become negative, which can prevent the ksmbd thread from
terminating.
@cveNotify
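The underlying problem is an unpaired counter. A generic sketch of the strictly paired pattern, built from standard kernel primitives (the names are assumptions, not ksmbd internals):

#include <linux/atomic.h>
#include <linux/wait.h>

/* Sketch only: a request counter whose increment and decrement are
 * strictly paired, so it can never go negative and a shutdown path that
 * waits for it to drain is guaranteed to finish. */
struct req_counter {
        atomic_t count;
        wait_queue_head_t wq;
};

static void req_get(struct req_counter *rc)
{
        atomic_inc(&rc->count);                 /* taken on every request */
}

static void req_put(struct req_counter *rc)
{
        if (atomic_dec_and_test(&rc->count))    /* exactly one put per get */
                wake_up(&rc->wq);
}

static void req_wait_idle(struct req_counter *rc)
{
        wait_event(rc->wq, atomic_read(&rc->count) == 0);
}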
🚨 CVE-2025-37776
In the Linux kernel, the following vulnerability has been resolved:
ksmbd: fix use-after-free in smb_break_all_levII_oplock()
There is a window in smb_break_all_levII_oplock() where unlocking in the
middle of the loop can lead to races. This patch uses a read lock to
protect the whole loop.
@cveNotify
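A sketch of the locking pattern the patch applies: hold the read lock across the whole traversal instead of releasing it mid-loop (the names are assumptions, not the ksmbd code):

#include <linux/spinlock.h>
#include <linux/list.h>

/* Sketch only: keep the read lock held for the entire walk so list
 * entries cannot be freed underneath the iterator. */
static void break_all_levII_oplocks_sketch(rwlock_t *lock,
                                           struct list_head *opinfo_list)
{
        struct list_head *pos;

        read_lock(lock);
        list_for_each(pos, opinfo_list) {
                /* send the level II oplock break for this entry;
                 * the lock is not dropped in the middle of the loop */
        }
        read_unlock(lock);
}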
🚨 CVE-2025-37777
In the Linux kernel, the following vulnerability has been resolved:
ksmbd: fix use-after-free in __smb2_lease_break_noti()
Move the tcp_transport free to ksmbd_conn_free(). If the ksmbd connection
is still referenced when the ksmbd server thread terminates, the
connection will not be freed, but conn->tcp_transport is freed.
__smb2_lease_break_noti() can run asynchronously when the connection is
disconnected; it calls ksmbd_conn_write(), which can cause a
use-after-free when conn->ksmbd_transport has already been freed.
@cveNotify
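The fix ties the transport's lifetime to the connection's final release. A generic kref-based sketch of that pattern (an illustration of the idea, not the ksmbd implementation):

#include <linux/kref.h>
#include <linux/slab.h>
#include <linux/container_of.h>

struct transport;                       /* opaque in this sketch */

struct conn_sketch {
        struct kref ref;
        struct transport *tcp_transport;
};

/* Sketch only: free the transport together with the connection in the
 * final release, never earlier, so asynchronous lease-break work that
 * still holds a connection reference cannot touch freed memory. */
static void conn_release(struct kref *ref)
{
        struct conn_sketch *c = container_of(ref, struct conn_sketch, ref);

        kfree(c->tcp_transport);
        kfree(c);
}

static void conn_put(struct conn_sketch *c)
{
        kref_put(&c->ref, conn_release);
}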
🚨 CVE-2025-37999
In the Linux kernel, the following vulnerability has been resolved:
fs/erofs/fileio: call erofs_onlinefolio_split() after bio_add_folio()
If bio_add_folio() fails (because it is full),
erofs_fileio_scan_folio() needs to submit the I/O request via
erofs_fileio_rq_submit() and allocate a new I/O request with an empty
`struct bio`. Then it retries the bio_add_folio() call.
However, at this point, erofs_onlinefolio_split() has already been
called which increments `folio->private`; the retry will call
erofs_onlinefolio_split() again, but there will never be a matching
erofs_onlinefolio_end() call. This leaves the folio locked forever
and all waiters will be stuck in folio_wait_bit_common().
This bug was introduced by commit ce63cb62d794 ("erofs: support
unencoded inodes for fileio") but was practically unreachable because
there was room for 256 folios in the `struct bio` - until commit
9f74ae8c9ac9 ("erofs: shorten bvecs[] for file-backed mounts") which
reduced the array capacity to 16 folios.
It then became trivial to trigger the bug by manually invoking readahead
from userspace, e.g.:
posix_fadvise(fd, 0, st.st_size, POSIX_FADV_WILLNEED);
This should be fixed by invoking erofs_onlinefolio_split() only after
bio_add_folio() has succeeded. This is safe: asynchronous completions
invoking erofs_onlinefolio_end() will not unlock the folio because
erofs_fileio_scan_folio() is still holding a reference to be released
by erofs_onlinefolio_end() at the end.
@cveNotify
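A simplified sketch of the corrected ordering: the per-folio accounting reference is taken only after bio_add_folio() has accepted the folio, so the submit-and-retry path cannot account the same folio twice (erofs-internal helpers appear only in comments; this is not the literal patch):

#include <linux/bio.h>

/* Sketch only: returns false when the bio is full, in which case the
 * caller submits it (cf. erofs_fileio_rq_submit()), allocates a fresh
 * bio and retries. The accounting step the real code performs via
 * erofs_onlinefolio_split() happens only once the folio is attached. */
static bool attach_folio_sketch(struct bio *bio, struct folio *folio,
                                size_t len, size_t off)
{
        if (!bio_add_folio(bio, folio, len, off))
                return false;   /* full: submit and retry, no accounting yet */

        /* only now take the folio->private reference
         * (erofs_onlinefolio_split() in the real code) */
        return true;
}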
🚨 CVE-2025-38002
In the Linux kernel, the following vulnerability has been resolved:
io_uring/fdinfo: grab ctx->uring_lock around io_uring_show_fdinfo()
Not everything requires locking in there, which is why the 'has_lock'
variable exists. But enough does that it's a bit unwieldy to manage.
Wrap the whole thing in a ->uring_lock trylock, and just return
with no output if we fail to grab it. The existing trylock() will
already have greatly diminished utility/output for the failure case.
This fixes an issue with reading the SQE fields, if the ring is being
actively resized at the same time.
@cveNotify
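A sketch of the approach described above: take the lock with a trylock and emit nothing when it is contended, rather than threading a 'has_lock' flag through the dump (the function and parameter names are assumptions, not the io_uring code):

#include <linux/mutex.h>
#include <linux/seq_file.h>

/* Sketch only: if the ring lock cannot be taken (e.g. the ring is being
 * resized), skip the fdinfo output entirely rather than dumping
 * partially-locked state. */
static void show_fdinfo_sketch(struct seq_file *m, struct mutex *uring_lock)
{
        if (!mutex_trylock(uring_lock))
                return;                 /* contended: print nothing */

        seq_puts(m, "...");             /* dump SQ/CQ state under the lock */

        mutex_unlock(uring_lock);
}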
🚨 CVE-2025-6095
A vulnerability, which was classified as critical, was found in codesiddhant Jasmin Ransomware 1.0.1. Affected is an unknown function of the file /checklogin.php. The manipulation of the argument username/password leads to sql injection. It is possible to launch the attack remotely. The exploit has been disclosed to the public and may be used. The vendor was contacted early about this disclosure but did not respond in any way.
@cveNotify
CVE/Jasmin-Ransomware/sqli_password.md at main · YZS17/CVE
🚨 CVE-2025-6096
A vulnerability has been found in codesiddhant Jasmin Ransomware up to 1.0.1 and classified as critical. Affected by this vulnerability is an unknown functionality of the file /dashboard.php. The manipulation of the argument Search leads to sql injection. The attack can be launched remotely. The exploit has been disclosed to the public and may be used. The vendor was contacted early about this disclosure but did not respond in any way.
@cveNotify
CVE/Jasmin-Ransomware/sqli_search.md at main · YZS17/CVE
🚨 CVE-2025-38006
In the Linux kernel, the following vulnerability has been resolved:
net: mctp: Don't access ifa_index when missing
In mctp_dump_addrinfo, ifa_index can be used to filter interfaces, but
only when the struct ifaddrmsg is provided. Otherwise it will be
comparing to uninitialised memory - reproducible in the syzkaller case from
dhcpd, or busybox "ip addr show".
The kernel MCTP implementation has always filtered by ifa_index, so
existing userspace programs expecting to dump MCTP addresses must
already be passing a valid ifa_index value (either 0 or a real index).
BUG: KMSAN: uninit-value in mctp_dump_addrinfo+0x208/0xac0 net/mctp/device.c:128
mctp_dump_addrinfo+0x208/0xac0 net/mctp/device.c:128
rtnl_dump_all+0x3ec/0x5b0 net/core/rtnetlink.c:4380
rtnl_dumpit+0xd5/0x2f0 net/core/rtnetlink.c:6824
netlink_dump+0x97b/0x1690 net/netlink/af_netlink.c:2309
@cveNotify
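A sketch of the guard described above: honour the interface filter only when the request actually carries a struct ifaddrmsg payload, and treat a short message as 'no filter' (simplified; not the literal mctp patch):

#include <net/netlink.h>
#include <linux/if_addr.h>

/* Sketch only: return the ifindex filter from the dump request, or 0
 * (no filtering) when the ifaddrmsg header is absent, so uninitialised
 * memory is never read. */
static u32 dump_ifindex_filter(const struct nlmsghdr *nlh)
{
        const struct ifaddrmsg *hdr;

        if (nlmsg_len(nlh) < (int)sizeof(*hdr))
                return 0;               /* no header: no filter */

        hdr = nlmsg_data(nlh);
        return hdr->ifa_index;
}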
🚨 CVE-2022-49999
In the Linux kernel, the following vulnerability has been resolved:
btrfs: fix space cache corruption and potential double allocations
When testing space_cache v2 on a large set of machines, we encountered a
few symptoms:
1. "unable to add free space :-17" (EEXIST) errors.
2. Missing free space info items, sometimes caught with a "missing free
space info for X" error.
3. Double-accounted space: ranges that were allocated in the extent tree
and also marked as free in the free space tree, ranges that were
marked as allocated twice in the extent tree, or ranges that were
marked as free twice in the free space tree. If the latter made it
onto disk, the next reboot would hit the BUG_ON() in
add_new_free_space().
4. On some hosts with no on-disk corruption or error messages, the
in-memory space cache (dumped with drgn) disagreed with the free
space tree.
All of these symptoms have the same underlying cause: a race between
caching the free space for a block group and returning free space to the
in-memory space cache for pinned extents causes us to double-add a free
range to the space cache. This race exists when free space is cached
from the free space tree (space_cache=v2) or the extent tree
(nospace_cache, or space_cache=v1 if the cache needs to be regenerated).
struct btrfs_block_group::last_byte_to_unpin and struct
btrfs_block_group::progress are supposed to protect against this race,
but commit d0c2f4fa555e ("btrfs: make concurrent fsyncs wait less when
waiting for a transaction commit") subtly broke this by allowing
multiple transactions to be unpinning extents at the same time.
Specifically, the race is as follows:
1. An extent is deleted from an uncached block group in transaction A.
2. btrfs_commit_transaction() is called for transaction A.
3. btrfs_run_delayed_refs() -> __btrfs_free_extent() runs the delayed
ref for the deleted extent.
4. __btrfs_free_extent() -> do_free_extent_accounting() ->
add_to_free_space_tree() adds the deleted extent back to the free
space tree.
5. do_free_extent_accounting() -> btrfs_update_block_group() ->
btrfs_cache_block_group() queues up the block group to get cached.
block_group->progress is set to block_group->start.
6. btrfs_commit_transaction() for transaction A calls
switch_commit_roots(). It sets block_group->last_byte_to_unpin to
block_group->progress, which is block_group->start because the block
group hasn't been cached yet.
7. The caching thread gets to our block group. Since the commit roots
were already switched, load_free_space_tree() sees the deleted extent
as free and adds it to the space cache. It finishes caching and sets
block_group->progress to U64_MAX.
8. btrfs_commit_transaction() advances transaction A to
TRANS_STATE_SUPER_COMMITTED.
9. fsync calls btrfs_commit_transaction() for transaction B. Since
transaction A is already in TRANS_STATE_SUPER_COMMITTED and the
commit is for fsync, it advances.
10. btrfs_commit_transaction() for transaction B calls
switch_commit_roots(). This time, the block group has already been
cached, so it sets block_group->last_byte_to_unpin to U64_MAX.
11. btrfs_commit_transaction() for transaction A calls
btrfs_finish_extent_commit(), which calls unpin_extent_range() for
the deleted extent. It sees last_byte_to_unpin set to U64_MAX (by
transaction B!), so it adds the deleted extent to the space cache
again!
This explains all of our symptoms above:
* If the sequence of events is exactly as described above, when the free
space is re-added in step 11, it will fail with EEXIST.
* If another thread reallocates the deleted extent in between steps 7
and 11, then step 11 will silently re-add that space to the space
cache as free even though it is actually allocated. Then, if that
space is allocated *again*, the free space tree will be corrupted
(namely, the wrong item will be deleted).
* If we don't catch this free space tree corr
---truncated---
@cveNotify