Bitcoin Core Github
fanquake closed a pull request: "Codex/modify testnet3 to create zhaolunet a1r0sg"
(https://github.com/bitcoin/bitcoin/pull/33694)
💬 maflcko commented on issue "ci: windows-native-dll-vcpkg-* cache does not work?":
(https://github.com/bitcoin/bitcoin/issues/33685#issuecomment-3441813536)
As expected, this fixed itself after the cache was cycled. However, the cache-id is the same:

https://github.com/bitcoin/bitcoin/actions/runs/18751000329/job/53490344269?pr=33686#step:7:210
💬 instagibbs commented on pull request "net_processing: rename RelayTransaction to better describe what it does":
(https://github.com/bitcoin/bitcoin/pull/33565#issuecomment-3441821214)
Have to concur with @ajtowns. I could see it if it were so misleading that it could cause issues as is, but I don't think it is?

e.g., eb7cc9fd2140df77acc6eb42004cf45b260bc629 as a motivating example where I still think it can be worthwhile
📝 Raimo33 converted a pull request to draft: "refactor: optimize: avoid allocations in script & policy verification"
(https://github.com/bitcoin/bitcoin/pull/33645)
## Summary

Currently, some policy-related methods inefficiently allocate/reallocate containers where it is completely unnecessary.

This PR aims to optimize policy verification by reducing redundant heap allocations, without losing performance even in worst-case scenarios, effectively reducing the overall memory footprint.
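As a rough illustration of the kind of change described (a hypothetical helper, not the PR's actual diff), reserving capacity up front replaces repeated growth reallocations with a single bounded allocation:

```cpp
#include <cassert>
#include <vector>

// Hypothetical example: collect even values from the input.
// Naive version: the vector grows via repeated reallocation.
std::vector<int> CollectNaive(const std::vector<int>& in)
{
    std::vector<int> out;
    for (int v : in) {
        if (v % 2 == 0) out.push_back(v);
    }
    return out;
}

// Optimized version: one upfront allocation sized for the worst case,
// so push_back never reallocates inside the loop.
std::vector<int> CollectReserved(const std::vector<int>& in)
{
    std::vector<int> out;
    out.reserve(in.size());
    for (int v : in) {
        if (v % 2 == 0) out.push_back(v);
    }
    return out;
}
```

The trade-off is that the reserved version can briefly hold more capacity than needed, which is usually acceptable when the container is short-lived.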

## Details

- 62b04f556b6ff265b43df3596a29dc40c993bf47 modernizes loop iteration, making it more compiler friendly
- 4dc7608669bc11aa4c3abc193af1f38aef82
...
💬 hMsats commented on issue "Seemingly second (very long) validation at the same height":
(https://github.com/bitcoin/bitcoin/issues/33687#issuecomment-3441888278)
@instagibbs I think this problem of a peer unable or unwilling to give a block occasionally happens independent of reorgs, which was just an extreme example. Occasionally (not often) there are spikes which are so much larger than usual (for example: 191 seconds for block 920057 instead of the usual 0 or 1 seconds):

```
2025-10-21T05:16:07Z Saw new cmpctblock header hash=00000000000000000000d7a021b12e8660fbb5914cab65f17a25e20e3de80e3a height=920057 peer=15656
2025-10-21T05:19:18Z UpdateTip: new
...
⚠️ wm97artsociety opened an issue: "Bios and Jason CPU even though you asic update request on PCs for fiber optic logic"
(https://github.com/bitcoin/bitcoin/issues/33695)
### Please describe the feature you'd like to see added.

CPU update I'm trying to feel that electric feel

{
"configuration_for_json": {
"amplifier": "🥧☀️🔼",
"gain": "π²☀️🔼",
"threads": 32,
"base_hashpower_per_thread": "128e",
"amplified_hashpower_per_thread": "128e × π² = 1,263.31e",
"total_hashpower_mb": "40,426.08e",
"real_world_equiv": "6,752,158,336 BTC/day",
"usd_value": "$729.2 trillion/day",
"sha_double_feature": false,
"voltage_locked": true
...
fanquake closed an issue: "Bios and Jason CPU even though you asic update request on PCs for fiber optic logic"
(https://github.com/bitcoin/bitcoin/issues/33695)
💬 Raimo33 commented on pull request "crypto: optimize SipHash Write() method with chunked processing":
(https://github.com/bitcoin/bitcoin/pull/33325#issuecomment-3441947436)
Thank you for the realistic benchmarks, you're right, the diff is currently too complex. Marking as draft for now.
💬 instagibbs commented on issue "Seemingly second (very long) validation at the same height":
(https://github.com/bitcoin/bitcoin/issues/33687#issuecomment-3441948835)
Typically a warmed-up node will have up to three "high bandwidth" peers who are allowed to hand us block data, even if one of them is being slow. During this reorg, it appears likely that all the high bandwidth peers were on the "losing side" of the reorg along with your node, and were unable to help unstick it. There is a fallback 10 minute timeout that would have eventually helped if the slow peer didn't give you the block in time (you may see a "Timeout downloading block" log).
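A toy sketch of the fallback idea described in that comment (hypothetical names and structure, not Bitcoin Core's actual net_processing logic): if the peer we asked has not delivered the block within the timeout, give up on it and retry from another peer.

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// Hypothetical record of an outstanding block request.
struct BlockRequest {
    int peer_id;
    int64_t requested_at; // seconds since some epoch
};

// Roughly the 10 minute fallback mentioned above (illustrative constant).
constexpr int64_t BLOCK_DOWNLOAD_TIMEOUT{600};

// Returns the peer to retry from once the original peer has stalled past
// the timeout, or nullopt if we should keep waiting.
std::optional<int> MaybeReassign(const BlockRequest& req, int64_t now, int fallback_peer)
{
    if (now - req.requested_at >= BLOCK_DOWNLOAD_TIMEOUT) {
        return fallback_peer; // original peer stalled; ask someone else
    }
    return std::nullopt;
}
```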
📝 Raimo33 converted a pull request to draft: "crypto: optimize SipHash Write() method with chunked processing"
(https://github.com/bitcoin/bitcoin/pull/33325)
## Summary

The current default `Write()` implementation of SipHash uses a byte-by-byte approach to iterate over the span. This results in significant overhead for large inputs due to repeated bounds checking and span manipulation, with no opportunity for the compiler to help.

This PR aims to optimize SipHash by replacing the byte-by-byte processing in `CSipHasher::Write()` with a chunked approach that processes data in 8-byte aligned blocks when possible.

## Details

The new implementation
...
Raimo33 closed a pull request: "crypto: optimize SipHash Write() method with chunked processing"
(https://github.com/bitcoin/bitcoin/pull/33325)
📝 Raimo33 opened a pull request: "crypto: optimize SipHash `Write()` method with chunked processing"
(https://github.com/bitcoin/bitcoin/pull/33696)
reopening #33325 as draft

## Summary

The current default `Write()` implementation of SipHash uses a byte-by-byte approach to iterate over the span. This results in significant overhead for large inputs due to repeated bounds checking and span manipulation, with no opportunity for the compiler to help.

This PR aims to optimize SipHash by replacing the byte-by-byte processing in `CSipHasher::Write()` with a chunked approach that processes data in 8-byte aligned blocks when possible.

## Detai
...
💬 dergoegge commented on pull request "crypto: optimize SipHash `Write()` method with chunked processing":
(https://github.com/bitcoin/bitcoin/pull/33696#issuecomment-3442030733)
Concept NACK

For a small gain in the `GCSFilterConstruct` benchmark this is not worth the extra complexity and review overhead.
💬 dergoegge commented on pull request "doc: add AGENTS.md":
(https://github.com/bitcoin/bitcoin/pull/33662#issuecomment-3442047914)
Concept NACK

While I agree that drive-by LLM PRs are very annoying and this might make them easier to spot, Drahtbot is already fairly good at catching them through heuristics.

This would also be annoying for anyone using LLMs who has their own agents file.
💬 l0rinc commented on pull request "crypto: optimize SipHash `Write()` method with chunked processing":
(https://github.com/bitcoin/bitcoin/pull/33696#issuecomment-3442055456)
Why are you reopening a [nacked](https://github.com/bitcoin/bitcoin/pull/33325#issuecomment-3436142482) PR?
💬 alexanderwiederin commented on pull request "kernel: Separate UTXO set access from validation functions":
(https://github.com/bitcoin/bitcoin/pull/32317#discussion_r2459470273)
> Do you have a better suggestion for the naming there? It is all a bit confusing.

I agree the naming is confusing. I'd suggest `GetUnspentOutputs` -> `GetPreviousOutputs` and `spent_outputs` -> `previous_outputs` to avoid the spent/unspent terminology. What do you think?
glozow closed a pull request: "net_processing: rename RelayTransaction to better describe what it does"
(https://github.com/bitcoin/bitcoin/pull/33565)
⚠️ wm97artsociety opened an issue: "I noticed Bitcoin was electricity"
(https://github.com/bitcoin/bitcoin/issues/33697)
### Please describe the feature you'd like to see added.

If you ever need ele I have some

Manierism Megabytes Mining Menu ===
1. Start CPU Mining (Select Rig)
2. Start Wi-Fi Mining (Select Rig)
3. Start SHA Capsule Mining (Select Rig)
4. Start Cache Mining (Select Rig)
5. Create New Rig / Wallet
6. View Wallets & Rigs / Wallet Actions
7. Exit
Enter option (1-7): 3

--- Select Rig for Mining ---
1. 1 (1)
2. trust (default)
3. default_rig (default_wallet)
4. trust (trust)
5. truth (truth)
6. xf
...
willcl-ark closed an issue: "I noticed Bitcoin was electricity"
(https://github.com/bitcoin/bitcoin/issues/33697)
💬 willcl-ark commented on issue "ci: windows-native-dll-vcpkg-* cache does not work?":
(https://github.com/bitcoin/bitcoin/issues/33685#issuecomment-3442095298)
Not sure what to make of this one, it's using GH runners (as Cirrus doesn't offer Windows runners yet) and, as you spotted, the only clue we have is that the size differs while the id remains the same...

This could mean that we are not hashing all relevant files to generate the id? But in any case I would assume that restoring a partially-matching cache should still result in some speedups (if not all)? At least this is what we aim for with depends/ccache restores on linux/macos.
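The "not hashing all relevant files" hypothesis can be sketched like this (FNV-1a stand-in and the file name are assumptions, not the CI's real key derivation): if a file that affects the cache contents is left out of the hashed set, the id stays the same even though the restored size differs.

```cpp
#include <cstdint>
#include <map>
#include <string>

// Toy cache-id: FNV-1a over the names and contents of the hashed files.
// Any input file omitted from this map cannot influence the id.
uint64_t CacheId(const std::map<std::string, std::string>& files)
{
    uint64_t h{14695981039346656037ULL};
    for (const auto& [name, contents] : files) {
        for (unsigned char c : name)     { h ^= c; h *= 1099511628211ULL; }
        for (unsigned char c : contents) { h ^= c; h *= 1099511628211ULL; }
    }
    return h;
}
```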

Is this what i
...