In the poll, the reference to the "friendly chat" didn't appear. So:
friendly channel
friendly chat
friendly message
Links are not quite working, figuring out why...
#shitposting #meme
It's Friday!
Last week I found a channel and I want to share some content because it's hilarious. I spent Monday skimming through it. It turned out that these memes are universal and can be applied to quite a wide range of situations. For example, I used the first picture today, while presenting my work from the past two weeks.
Telegram
Retard Wizard
The channel of my madness
Shadow Wizard Money Gang
Yesterday I posted the first meme shitpost in the history of this channel. The idea is to do it once a week. What do you think?
Anonymous Poll
7%
no memes, just ML, math and hardcore
73%
memes ok, no obscenities
7%
more memes, no math/physics
13%
Who am I? What do you want from me?
Titanic. Embarkation
In previous posts we started exploratory data analysis of the Titanic dataset. We already checked the list of features and checked whether age is a useful feature.
This time let's talk about the embarked feature: the place where a passenger boarded the ship. We can think about this feature in two modes: wearing an ML engineer hat, or wearing an analyst hat. For an ML engineer, a strong connection between a feature and the target is quite enough to include the feature in the dataset and train a model. An analyst is a slightly more curious creature and would ask more questions. What is the nature of the connection between embarkation and survival? How does this feature interact with other features?
Let's check the picture. I'm more than confident that it is not totally accurate, but I hope it is correct in the most important aspect: the order of ports. Before I checked this order, I had a very naive hypothesis that passengers who embarked earlier had a better chance to leave the ship, so their survival rate is higher. But that can't be the case for two reasons. First of all, the dataset contains records of all passengers who were on board at the moment of the disaster. Second, passengers from the last stop had the best chances.
The real reason seems to be a little more subtle. In "S" (Southampton) there were a lot of crew and third-class passengers, and later we will see that it was bad to be a third-class passenger on the Titanic (oops, spoiler). Therefore, embarked could be a proxy feature for economic status.
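To make the analyst-hat check concrete, here is a minimal sketch in pandas. The tiny inline DataFrame is made up purely for illustration; only the column names (`Embarked`, `Pclass`, `Survived`) are assumed to match the standard Kaggle Titanic dataset.

```python
import pandas as pd

# Made-up miniature sample; the real labelled Titanic dataset has 891 rows
df = pd.DataFrame({
    "Embarked": ["S", "S", "S", "C", "C", "Q", "Q", "S"],
    "Pclass":   [3,   3,   1,   1,   1,   3,   3,   2],
    "Survived": [0,   0,   1,   1,   1,   0,   1,   0],
})

# ML-engineer hat: is there a connection between feature and target?
survival_by_port = df.groupby("Embarked")["Survived"].mean()

# Analyst hat: how does the feature interact with class? A per-port
# class breakdown hints at the "proxy for economic status" effect
class_share = pd.crosstab(df["Embarked"], df["Pclass"], normalize="index")

print(survival_by_port)
print(class_share)
```

On the real dataset the same two lines are enough to see both the raw survival gap between ports and how much of it dissolves once class is held fixed.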
Agent personalities
I've been working on quite a tedious and bulky task. In general, it's about extracting structure from medium-sized free-form data. I came up with the following pipeline:
💣 take a sample of about 100 records
💣 review them "manually" (one agent pass within a single context window) to discover clusters
💣 create a prompt for an LLM using an agent
💣 iterative prompt refinement (with an agent, using some test runs through the LLM)
💣 big run through the LLM
💣 analysis of the run
💣 production of final artefacts
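The steps above can be sketched as a plain Python skeleton. Everything here is hypothetical: each callable is a stand-in for an agent or LLM step, and only the ordering mirrors the actual pipeline.

```python
import random

def extract_structure(records, agent_review, make_prompt, refine,
                      run_llm, analyse, sample_size=100):
    """Hypothetical skeleton of the pipeline; every callable argument
    is a placeholder for an agent or LLM step."""
    # 1. take a sample of about 100 records
    sample = random.sample(records, min(sample_size, len(records)))
    # 2. one-pass "manual" agent review to discover clusters
    clusters = agent_review(sample)
    # 3-4. draft the prompt with an agent, then refine it on test runs
    prompt = refine(make_prompt(clusters), sample)
    # 5. the big run through the LLM
    results = [run_llm(prompt, record) for record in records]
    # 6-7. analysis of the run and production of final artefacts
    return analyse(results)
```

With real agents behind the callables this is of course far messier, but the skeleton makes the dependencies between steps explicit: the prompt depends on the clusters, the big run depends on the refined prompt, and the artefacts depend on the full run.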
As you can see, the pipeline is quite lengthy, and even the most advanced agents started to stumble on it. The interesting thing I want to share is how they stumbled.
Codex I built this pipeline with this agent and, in general, I'm quite happy with it. I talk to this tool in free form, and it's enough to say something like "don't use that proxy, switch to that basic thing" for it to understand. I had two real pains in the neck. First, when using a proxy, it starts to complain about the "apply_patch" tool. The tool produces some warnings, and although it looks like a petty problem, it blocks the work because Codex dwells on the topic for tens of minutes. Second, a context flush is like amnesia for it. I asked it to save the project state into special .md files and manually checked that we restore state from our files, not via the default context compaction tool. Probably it's my fault, but I don't understand what I did wrong.
Claude When I started Claude in the environment in which Codex was slowly but surely solving my tasks, Claude started to act. It moved in a good direction. But it exhausted its quota without producing any valuable result, so I dropped it.
Gemini Probably the funniest story. One session. I didn't pay attention to the context window at all. I started it, gave it an instruction like "solve problem X," and forgot about it for a day while working with Codex. It solved the problem. Then I asked it, "Analyse your work, the problems you stumbled upon, and write a memory note on how to avoid these problems in the future." It made this note, and a similar task was performed ideally in 10 minutes. From that moment, Gemini became my workhorse, and basically thanks to it I managed to solve my task in time.
For me, they now have three personalities. Claude is a stingy person who promises to perform the task if you pay, but you never see results. Codex is a very smart person, like a professor from a Disney movie: it's very nice to talk to him, and he can solve your problem, but you have to remind him who you are. Gemini is a worker: given a good instruction, everything gets done quickly and with nice quality.
If you know someone who might like this post, don't hesitate to share it!
Round Numbers
I really don’t want to push my luck, but it seems we’ve reached a perfectly round number of subscribers. So let me tell you a story.
Many, many years ago, an Indian shah was bored. Then a wise man came and presented him with the game of chess. The shah was thrilled and offered the man anything he wanted. The wise man asked for as much rice as the shah could place on a chessboard using the following rule: on the first square, put one grain of rice; on the second, two grains; and so on. Each next square should contain twice as many grains as the previous one.
I don’t actually know how this story ends because, obviously, the total of 2**64 − 1 grains is quite a big number, and the shah could not possibly give the wise man everything he had asked for.
But this story gives us exactly the picture I started this post with.
Yesterday our community covered one row of this proverbial chessboard. Today — one cell of the next row. That’s the strange property of fast-growing populations.
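Both the shah's predicament and the row-versus-cell remark come down to one identity: the sum of the first n powers of two is one less than the next power. A quick check:

```python
# Grains on square k (counting from 0): 2**k.
# Whole 8x8 board:
total = sum(2**k for k in range(64))
assert total == 2**64 - 1  # 18,446,744,073,709,551,615 grains

# The "one row vs one cell of the next row" effect: everything
# accumulated over the first four rows (32 squares) is just one
# grain short of the very next square alone
first_four_rows = sum(2**k for k in range(32))
assert first_four_rows == 2**32 - 1
```

So with exponential growth, each new square dwarfs the entire accumulated history before it, which is exactly the strange property of fast-growing populations the post is about.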
As far as I remember, about 30% of all people who have ever lived are alive right now. So if you hear the joke “100% of people who eat cucumbers died,” don’t trust it. No more than 70%.