Offshore
Photo
Robert Scoble
RT @bbrague: Data don't lie... Personality traits drive conversion. If you don't know the personalities of your customers, then you are already playing catch-up. #IYKYK #revops #MarketingStrategy #AI #martech https://t.co/ohsbrucSKt https://t.co/94Y2eLAT23
tweet
proxima centauri b ▫️ open edition live!
RT @cypheristikal: GM Anons 🌀
Since I've received interest from multiple parties on this piece, I will be listing my #phospiral tribute 'Museum Visit' as a 1/1 on @foundation with a 0.1 reserve at 3pm EST.
There will also be a bidder's edition (shown below). https://t.co/SKhMAwWuAh
tweet
Zohaib Ahmed
🎉Introducing Neural Speech Watermarking from @resembleai
PerTh is built to protect synthetic voices from data manipulation. It embeds imperceptible data into the speech and provides a way to verify genuine content.
Read: https://t.co/0Kz9mVmD1K
Open source soon!
tweet
TomLikesRobots
RT @LiJunnan0409: Can LLMs understand images? We introduce 🔥BLIP-2🔥, a generic and efficient vision-language pre-training strategy that bootstraps from frozen❄️image encoders and frozen❄️LLMs. BLIP-2 outperforms existing SoTAs with only 188M trainable parameters!
Github: https://t.co/vXi6CCID7w https://t.co/q5oKrTzsxh
tweet
PromptBase | Prompt Marketplace
Illustrated Book Covers by kvicko using #midjourney 📖 https://t.co/xIlDUlhaJD
tweet
Clint Gibler
☁️ Precloud
An open-source CLI that runs checks on infrastructure-as-code to catch potential issues before deployment
It works by comparing resources in CDK diffs and Terraform plans against the state of your cloud account
https://t.co/MGytAEvvwH
tweet