Yesterday, @luciddreams_bot became unavailable to millions of our iOS and Android users.
Within hours of the ban, our team identified and removed all pornographic content that had previously been publicly available in the bot. Additionally, we began implementing an optional setting that allows users to hide all NSFW content from their private conversations with the AI.
Despite being a top-5 grossing app on Telegram, we didn't receive any advance warnings or even a notification after the fact – instead, we learned about the ban directly from our users.
We contacted all available Telegram support accounts with a request for clarification, but none of them responded or even read our messages. Dear Telegram, we need your response 🙏
To the best of our knowledge, Lucid Dreams should now be compliant with Telegram's terms – so we launched a mirror bot for our iOS and Android users at @lucid_dreams_ai_bot and @luciddreams. All your settings and in-game purchases will synchronize between all our bots.
Update: @luciddreams_bot is once again available in the official Telegram apps.
Huge thanks to the Telegram team who paid attention to my posts today. If you have any concerns in the future, please message us directly and we'll gladly cooperate.🫡
For those who switched to our mirror bot (@lucid_dreams_ai_bot), you can now safely return to using the main bot – all your settings and purchases will remain synchronized.
UPD: We'll keep the mirror running for users who still have issues accessing the main bot. We're currently hitting Telegram rate limits due to heavy traffic; this will hopefully be resolved in the next few hours.
UPD2: Users report that the issue is still not resolved in some official clients. You can safely use the mirror bot if that's the case for you.
Everything is back and running normally, so let me share a fun story – how HUBBA.AI got its name 😎
First of all, I'm terrible at naming things. Since @luciddreams was named after a Juice WRLD song, I decided to stop naming things myself (jokes aside, I love that name).
Remember my recent post about audience simulation using LLMs? In September 2024, after days of frustrating manual search for a domain name for our project, I built a simple script to do the following:
1. Prompt a language model to generate domain names based on the product description. I used the following system prompt for Claude 3.5 Sonnet:
You are a creative domain name generator. You only suggest domain names that are good for SEO.
<rules>
1. Suggested domains must be easy to remember and hard to misspell
2. Suggested domains must be short, maximum 3 syllables and 12 characters
3. Suggested domains must not contain any numbers or hyphens
4. Suggested domains must be in one of the following TLDs: .com, .ai
5. Avoid intentional misspellings in the domain names
</rules>
Follow this algorithm to generate good and SEO optimized domain names:
- Think about possible keywords and search queries that people might use in <keywords>...</keywords> block.
- Write a list of 30-40 possibly relevant domain names in <draft>...</draft> block. This is the creative step, be chaotic.
- Finally, select 10-12 most promising domain names in <result>...</result> block.
Note the three-step generation with <keywords>, <draft> and <result> sections. Giving the model a "scratchpad" for thinking makes a huge difference in output quality.

2. Use a domain registrar API to check the availability of each domain. Since WHOIS doesn't work with .ai domains, I paid a few dollars for a domain lookup API on RapidAPI. The script only kept domains available for purchase within the configured budget.
3. Simulate the target audience to find the most clickable domain names. The script arranged all available domains into groups of four and simulated polls using this simple prompt:
You are a 21-year-old male XVideos user from the United States who is curious about AI. Which of the following links would you be most likely to click judging by the domain name only?
{options}
Answer with just the letter a, b, c, or d corresponding to your choice.
After running multiple polls for different group combinations, the script ranked domains by their average winrate. (A condensed sketch of the whole script is below.)
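Here's roughly what such a script boils down to – a minimal sketch, not the original code. It assumes the Anthropic Python SDK for both generation and polling; check_availability() is a hypothetical placeholder for whichever RapidAPI lookup you wire in, and the model id, budget, and poll count are example values.

# Minimal sketch of the three-step script (not the original code):
# step 1 calls Claude with the system prompt shown above,
# step 2 is a placeholder for your RapidAPI domain-lookup endpoint,
# step 3 runs simulated polls with the audience prompt shown above.
import random
import re
from collections import defaultdict

import anthropic

client = anthropic.Anthropic()          # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-20240620"    # example model id
SYSTEM_PROMPT = "..."                   # the domain-generator prompt shown above
POLL_PROMPT = "..."                     # the audience prompt shown above, with {options}

def generate_domains(product_description: str) -> list[str]:
    # Step 1: generate candidates and keep only the <result> block.
    reply = client.messages.create(
        model=MODEL, max_tokens=1024, system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": product_description}],
    )
    text = reply.content[0].text
    match = re.search(r"<result>(.*?)</result>", text, re.DOTALL)
    return re.findall(r"[a-z]+\.(?:com|ai)", match.group(1)) if match else []

def check_availability(domain: str, max_price_usd: int = 200) -> bool:
    # Step 2 (placeholder): call your registrar / RapidAPI lookup here and
    # return True only if the domain is unregistered and within budget.
    raise NotImplementedError

def simulate_poll(options: list[str]) -> str:
    # Step 3: show four domains to the simulated user and return the winner.
    letters = "abcd"
    listing = "\n".join(f"{l}. {d}" for l, d in zip(letters, options))
    reply = client.messages.create(
        model=MODEL, max_tokens=5,
        messages=[{"role": "user", "content": POLL_PROMPT.format(options=listing)}],
    )
    choice = reply.content[0].text.strip().lower()[:1]
    return options[letters.index(choice)] if choice in letters else options[0]

def rank_domains(domains: list[str], polls: int = 500) -> list[tuple[str, float]]:
    # Run many random 4-way polls and rank domains by average winrate.
    wins, shown = defaultdict(int), defaultdict(int)
    for _ in range(polls):
        group = random.sample(domains, 4)
        wins[simulate_poll(group)] += 1
        for d in group:
            shown[d] += 1
    scored = [(d, wins[d] / shown[d]) for d in domains if shown[d]]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

Feed rank_domains() the filtered list of available domains and read the winrates off the top of the leaderboard.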
🎁 Here is the output we got after running it for 15 minutes:
1. sirenhub.com
2. kinkie.ai
3. lustie.ai
4. hernextdoor.ai
5. sexysiri.ai
6. sexsim.ai
7. hubba.ai
... 993 other domains ...
Although hubba.ai only ranked seventh, we instantly loved that name and decided to stick with it. For me, the best part is that in some tests our AI-generated domain name now outperforms competitor domains that cost 5-6 figures.
So, can AI create a masterpiece?
🔍 We’re hiring: Part‑Time LoRA Wizard
4 million users turn to @luciddreams and hubba.ai every month for visuals that surprise and delight. To scale the quality of our character imagery, we’re looking for a specialist who can build top‑tier LoRA models on demand.
What you’ll do
• Curate high‑quality datasets for new characters from scratch.
• Train and iterate LoRA checkpoints in both anime and realism styles, validating performance on our internal eval suite.
What we expect
• Proven experience with Kohya, ComfyUI, or Automatic1111 (at least one).
• Solid understanding of prompt engineering, dataset hygiene, and quantitative model testing.
• Ability to indicate that Nika is your favorite anime character in the application.
• Ability to work asynchronously on a task‑based schedule.
What we offer
• US $70–100 per approved LoRA (payment in USDT).
• Access to A100/4090 GPUs.
• Immediate impact—your models go live to millions of users.
Interested? Share a portfolio (CivitAI, Hugging Face) via the form below.
➡️ Apply here: https://unique-drifter-c36.notion.site/1e8a993f07b680019b58c8847b3fa1c9?pvs=105
Let's create characters the community will remember!
Found myself spending 30% of my programming time on aistudio.google.com chatting about high-level code architecture. Gemini 2.5 Pro is a genius model that is ridiculously nerfed in Cursor and other wrappers. Here's the approximate workflow I use:
1. Pack the entire codebase into one prompt using Repomix or RepoPrompt, then copy-paste it into the conversation. It makes a HUGE difference to include all project code in the input, and most AI IDEs don't do that for you! (A rough Python stand-in for this step is sketched after the list.)
2. Braindump all requirements and implementation ideas related to the feature you're building. Simply put it all in a <braindump>...</braindump> section – the AI just gets you most of the time. Then ask: «Convert this into an elegant implementation design».
3. Skim through the design draft to spot obvious blunders and overcomplications, and regenerate 2–3 times until the draft looks good. If it still has some minor issues, say: «Please update the design based on my feedback: …»
4. Read the polished version of the design very carefully and make sure you understand it. Question everything, like: «Why did you choose SHA-256 for hashing, wouldn't BLAKE3 be faster?», «Let's simulate what happens if this server handles 100k SSE connections simultaneously» and so on.
5. When you are confident that the design would work, ask the AI to «write a step-by-step implementation plan for the finalized design for a developer who has no context of the previous conversation».
6. Finally, copy-paste this very detailed implementation plan into Cursor or any other AI coding agent to actually make changes to files. GPT-4.1 is excellent for this due to its strong instruction-following ability.
7. As a final check, copy-paste the diff of the changes back into your Gemini 2.5 Pro conversation and ask it to review them. I like how it usually writes a checklist of what needed to be done and verifies the changes against it.
8. Brag about 100% of your code being AI-generated. ✨
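For illustration, here's a rough Python stand-in for step 1 – Repomix and RepoPrompt do this properly, but the idea is just to walk the repo and glue every relevant file into one paste-able block. The skip list, extensions, and size cap below are arbitrary examples, not anything from my actual setup.

# Rough stand-in for step 1: walk the repo and concatenate source files into
# one block you can paste into the chat. The directory skip-list, extensions,
# and size cap are arbitrary examples.
import pathlib

SKIP_DIRS = {".git", "node_modules", ".venv", "__pycache__", "dist", "build"}
KEEP_SUFFIXES = {".py", ".ts", ".tsx", ".js", ".go", ".rs", ".md", ".toml", ".yaml"}

def pack_codebase(root: str = ".", max_chars: int = 2_000_000) -> str:
    parts, total = [], 0
    for path in sorted(pathlib.Path(root).rglob("*")):
        if not path.is_file() or path.suffix not in KEEP_SUFFIXES:
            continue
        if any(part in SKIP_DIRS for part in path.parts):
            continue
        text = path.read_text(errors="ignore")
        total += len(text)
        if total > max_chars:   # stay within the model's context budget
            break
        parts.append(f"===== {path} =====\n{text}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(pack_codebase())      # e.g. python pack.py | pbcopy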
In fact, AI in Cursor is surprisingly dumb. Yesterday I had to explain to Gemini 2.5 Pro that code after a return statement isn't executed – it responded that my feedback was "eye-opening" 😂
My guess is that they messed up context management. Cursor's system prompt is overloaded with corner-case instructions and rules for tools the agent won't need 90% of the time. And yet, Cursor often doesn't include important project files in the context, even when they're explicitly tagged in the prompt. For long files, it only includes the first 500 lines and omits the rest – without any indication to the user whatsoever.
The golden rule of LLM performance is: give the model the entire context relevant to solving your task AND remove everything else. This is why manual context management and task-specific prompts make a HUGE difference in model intelligence.
And yes, aistudio.google.com is free. Chat with the most intelligent model in history for $0. Just use a US proxy if you're in Europe or Russia.