Forwarded from DOGS Community
DOGS merch all over the world 😎 😎
🤘 Tick-tock — first 20 get 10% off with code TG10
Pick yours and join the pack💪
THE SELLER MADE A MISTAKE AND LISTED IT FOR JUST 80 TON ON GETGEMS
Now he's trying his hand at RWA
In my view, this shows it's no accident: people who make money in crypto have a knack for it. They're always on the move and always looking for opportunities.
Honestly, this is the first and only quality plush Pepe I've ever seen. So now it's mine.
And in general I love supporting anything the community builds. Whatever I notice, I immediately try to push it and help it grow in any way I can.
(e.g., the CCG concept from Sirius still has a place in my heart)
and I bought the Pepe here
Forwarded from Cocoon
Welcome to Cocoon — the Confidential Compute Open Network
Cocoon is a decentralized network for executing AI inference securely and privately.
In this network, app developers reward GPU owners with TON for processing inference requests.
Telegram will be the first major customer to use Cocoon for confidential AI queries — and will invest heavily in promoting the network across its global ecosystem.
🔨 App developers who want to run inference through Cocoon are invited to contact us via DMs to this channel.
Please specify which model architecture you plan to use (e.g., DeepSeek, Qwen), along with your expected daily query volume and average input/output token size.
💡 GPU owners who want to earn TON by contributing compute power can also message this channel using the 💬 button below.
Please indicate how many GPUs you can provide and include details such as type (e.g., H200), VRAM, and expected uptime.
Cocoon is ready — launching in November, once we’ve gathered your applications.