ICYMI: Perplexity Pro now supports Claude 3, as well as Playground v2.5 for image generation, so it can help you visualise your search intents.
Forwarded from telegram-beta (Alexey)
ICYMI: Telegram Premium now includes Business features: you can set a location, opening hours, automatic replies and more.
It looks like a move toward what WhatsApp Business offers, except that on Telegram it is part of the same app.
Testing Hunch Beta today.
A new AI tool that lets you use different AI models (GPT-4 Turbo, Mistral Large, Gemini Pro, Claude 3 Opus, etc.) as parts of the same workflow, which you draft on a canvas editor.
I am going to play around with it for my publishing use cases. It is already very easy to use if you want to compare outputs from different models for the same prompt (a rough script equivalent is sketched below).
Link: https://hunch.tools/
Any specific prompt you would like to compare? Reply with a prompt and a list of models below.
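Hunch does this in a visual canvas, but as a point of reference, here is a minimal sketch of what the same "one prompt, several models" comparison looks like when scripted directly against the OpenAI and Anthropic APIs. The model IDs and the prompt are illustrative assumptions, and this is not Hunch's own API:

# Rough comparison sketch (assumes: pip install openai anthropic,
# OPENAI_API_KEY and ANTHROPIC_API_KEY set in the environment).
from openai import OpenAI
import anthropic

prompt = "Write a 50-word teaser for an article about AI news."

# Ask GPT-4 Turbo (example model ID) via the OpenAI SDK.
gpt = OpenAI().chat.completions.create(
    model="gpt-4-turbo-preview",
    messages=[{"role": "user", "content": prompt}],
)

# Ask Claude 3 Opus (example model ID) via the Anthropic SDK.
claude = anthropic.Anthropic().messages.create(
    model="claude-3-opus-20240229",
    max_tokens=256,
    messages=[{"role": "user", "content": prompt}],
)

# Print both answers one after the other for a quick comparison.
print("GPT-4 Turbo:\n" + gpt.choices[0].message.content)
print("\nClaude 3 Opus:\n" + claude.content[0].text)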
BREAKING: Zapier just launched Central Preview, an AI bot solution on top of various LLMs with Zapier Actions as a superpower.
It comes in handy in a variety of use cases: you can work with an LLM in the usual way while creating different AI bots for specific tasks.
There you can set "triggers" to invoke a certain prompt, as well as Zapier Actions to interact with external apps. Google or Notion docs can be added as data sources.
Source: https://zapier.com/bots
Something is happening with DALL-E. It is the first time I have seen it generate vertical images.
There is also a possibility that we may get a new mode for working with images directly in ChatGPT (the previously leaked Edit mode or something similar to Copilot Designer).
Currently, I can only see a new page layout.
A retroactive Gemini update from 04.03.24 has become visible.
Previously it was a blank placeholder.
A better way to tune the Gemini web app's responses
What: We're launching a more precise way for you to tune Gemini's responses. Starting in English in the Gemini web app, just select the portion of text you want to change, give Gemini some instruction, and get an output that's closer to what you are looking for.
Why: We want to give you more control over your creative process by letting you iterate on content and ideas in the context of the original response.
ICYMI: Lip Sync on Pika Labs is generally available for you to try.
Generating a voice from text costs 2 credits, and it often fails with a "Face not detected" error, though I was mainly testing it on cartoon characters.
Try it here: https://pika.art/
ChatGPT officially got the MFA (multi-factor authentication) feature. It can be enabled in the General account settings.
Alpha WIP: ChatGPT may soon get an option to set the UI language explicitly via settings.
Previously, this variable was taken from your browser settings if you had signed up for the Alpha program: https://www.testingcatalog.com/chatgpt-alpha/
ICYMI: @pika_labs released generative sound effects that can be added to AI videos; for now they are in early access for Pro users only.
What: AnthropicAI released a new GPT-3.5-class model, claude-3-haiku. You can now try it for free on Perplexity Labs.
Why: It is a faster and cheaper model, in case you want to give it a try.
- Link to the labs: https://labs.perplexity.ai
- More about the model: https://anthropic.com/news/claude-3-haiku
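If you would rather call the model directly instead of going through Perplexity Labs, here is a minimal sketch using Anthropic's Python SDK. The dated model ID, the prompt and the token limit are assumptions for illustration:

# Minimal sketch: calling Claude 3 Haiku via the Anthropic Python SDK
# (assumes: pip install anthropic, ANTHROPIC_API_KEY set in the environment).
import anthropic

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-haiku-20240307",  # assumed dated Haiku model ID
    max_tokens=200,
    messages=[{"role": "user", "content": "Summarise in two sentences why a smaller, faster model is useful."}],
)
# The response holds a list of content blocks; the first one carries the text.
print(message.content[0].text)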
ICYMI: Perplexity knowledge cards are now available on mobile for all users.