Aaaand I've stopped giving a damn about all these hype cycles. Information overload
Always wanted to use this, seems really convenient.
https://thevalleyofcode.com/git-worktrees/?ref=dailydev
⚡1
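for anyone who hasn't tried worktrees yet: the core idea is a second working directory checked out on its own branch, sharing the same repo. a minimal sketch, wrapped in Python only so the demo is self-contained (in practice you'd just type the git commands in a shell):

```python
import os
import subprocess
import tempfile

# set up a throwaway repo with one commit
base = tempfile.mkdtemp()
repo = os.path.join(base, "repo")
os.makedirs(repo)

def git(*args):
    subprocess.run(["git", *args], cwd=repo, check=True, capture_output=True)

git("init", "-b", "main")
git("config", "user.email", "demo@example.com")
git("config", "user.name", "demo")
with open(os.path.join(repo, "README.md"), "w") as f:
    f.write("hello\n")
git("add", "README.md")
git("commit", "-m", "initial commit")

# the interesting part: a second working directory on a new branch,
# so you can work on feature-x without stashing or switching your main checkout
feature_dir = os.path.join(base, "feature-x")
git("worktree", "add", "-b", "feature-x", feature_dir)

print(os.path.isdir(feature_dir))                               # True
print(os.path.exists(os.path.join(feature_dir, "README.md")))   # True
```

`git worktree list` shows all the checkouts, and `git worktree remove <path>` cleans one up when you're done.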
I don't think I could have learnt anything if there was an easy way out like this. I'm starting to think that AI is going to dominate because it will degrade human intellect to the ground.
https://x.com/karpathy/status/1992655330002817095?t=QEBwGj-JuNy5tS8e5JG58w&s=19
Andrej Karpathy (@karpathy) on X
Gemini Nano Banana Pro can solve exam questions *in* the exam page image. With doodles, diagrams, all that.
ChatGPT thinks these solutions are all correct except Se_2P_2 should be "diselenium diphosphide" and a spelling mistake (should be "thiocyanic acid"…
not going to lie, the workflow of making detailed plans with the ai before building anything, including the step-by-step architecture decisions you need to make, and then storing it all in an actual live architecture.md file is pretty fire
so have you ever been using an LLM chat like ChatGPT and it suddenly becomes like 80% dumber once the conversation gets a bit longer?
it's because the LLM has a finite context window (270K tokens for GPT-5.1), and when the conversation approaches it, the app automatically summarizes your whole conversation into a summary text and starts a new conversation from that tiny summary, basically forgetting a lot of the minor details.
in opencode you can see the context window usage and manually compact it when you're at a stopping point of a problem. better to compact and summarize on your own terms than have it cut off randomly in the middle of something important.
making it write the important things into a text file is also a good habit.
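the auto-compaction behaviour can be sketched with a toy. everything here is a stand-in: the ~4-chars-per-token estimate is a rough rule of thumb, and real tools ask the model itself to write the summary instead of keeping first words:

```python
CONTEXT_LIMIT = 16  # tokens; tiny so the demo triggers (real models use e.g. ~270K)

def estimate_tokens(messages):
    # crude assumption: roughly 4 characters per token
    return sum(len(m) for m in messages) // 4

def compact(messages):
    # toy summarizer: keep the first word of each old turn,
    # plus the latest message verbatim
    summary = "summary: " + " ".join(m.split()[0] for m in messages[:-1])
    return [summary, messages[-1]]

history = []
for turn in ["fix the login bug please", "now refactor the database layer",
             "also add tests for the api", "and update the docs too"]:
    history.append(turn)
    if estimate_tokens(history) > CONTEXT_LIMIT:
        history = compact(history)

print(history)
```

after the window fills up, the older turns collapse into a one-line summary and the details ("login bug", "database layer") are gone, which is exactly the "80% dumber" effect.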
⚡1
i just realized that i am using opencode, which wraps codex-cli, which wraps gpt-5.1, to build an ai wrapper over mistral ai
it's literally
ai_wrapper(ai_wrapper(ai_wrapper(ai_wrapper( ))))
😭1
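the stack really does compose like nested higher-order functions. a toy sketch (all names made up, the "model" is a lambda):

```python
def ai_wrapper(inner):
    # each layer just decorates the prompt and delegates to the layer below
    def call(prompt):
        return inner(f"[wrapped] {prompt}")
    return call

base_model = lambda p: f"response to: {p}"

# ai_wrapper(ai_wrapper(ai_wrapper(ai_wrapper( ))))
stack = ai_wrapper(ai_wrapper(ai_wrapper(ai_wrapper(base_model))))
result = stack("hello")
print(result)  # response to: [wrapped] [wrapped] [wrapped] [wrapped] hello
```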
NeuralNate
i'm cooked, i have a week left to live on 40% when i spent 60% of it in one day
guess it's back to dumb models.
Happy that I am working in the greenfield side.
A greenfield project is starting a software system from scratch with no existing code or infrastructure constraints.
A brownfield project is developing new software or features within an existing, operational system, requiring integration with legacy code and infrastructure.
No Vibes Allowed: Solving Hard Problems in Complex Codebases - Dex Horthy, HumanLayer
Apparently, when we're in a loop of the AI doing something wrong and us yelling at it to fix it, and it does something wrong again and we yell again... all an LLM is, is a next-word predictor looking at the previous words. So it looks at the previous fight and predicts that the next thing is the AI doing something wrong and you yelling at it. So it gives you exactly that prediction: it doing something wrong again.
Damn, I've never thought about it this way.
https://youtu.be/rmvDxxNubIg?t=5m31s
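you can see the mechanic with a toy bigram predictor "trained" on nothing but the fight loop itself (the token names are made up for illustration):

```python
from collections import Counter, defaultdict

# the conversation so far: mistake, yell, mistake, yell, ...
history = ["mistake", "yell", "mistake", "yell", "mistake", "yell"]

# count which token follows which (a bigram model, the simplest next-word predictor)
bigrams = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    bigrams[prev][nxt] += 1

# after yet another "yell", the most likely continuation is... another "mistake"
prediction = bigrams["yell"].most_common(1)[0][0]
print(prediction)  # mistake
```

the only pattern in the context is mistake-then-yell, so the predictor's best continuation of a yell is another mistake. the usual fix is to break the pattern: start a fresh conversation or compact instead of piling more yelling into the context.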
how did i not know that react and react native are coming under the linux foundation????
https://x.com/fb_engineering/status/1975935750509343006
lowkey i really like open-source projects. people were bugging the opencode team to add OAuth for providers like Atlassian last week, and today it's merged and added to the docs.
btw, the Jira and Confluence MCP is apparently really cool, you can summarize tickets and search through whole company documents stored in Confluence to understand (dig up secrets about) your company and projects.
just asked it whether what we've been doing for the past couple of hours is the wrong way to do things, and it replied this. I hate AI coding, not gonna lie. now that I've built the incredibly weak foundations of the project really fast, editing one thing takes forever and is just a pointless spiral. you don't know what's wrong until it blows up in your face at the end.
really cool feature of lazygit: you can click a commit to go inside it and see each file change, and edit it easily
⚡1