TestingCatalog AI News
Do you also hoard your compute quota for the o1 model and plan to spend it on something really useful? I will invest mine in news publishing workflows to evaluate its writing capabilities.
I haven't done a lot of testing yet, but here are some quick observations:
- Retrieving knowledge via search didn't change much - o1 blindly trusts the Bing search index unless you explicitly ask it to reason about the results.
- Written outputs follow the same GPT writing style and are not "humanised" at all.
- However, if you ask it to "humanise" the text, it does an impressive job.
As a next step, I am planning to test more complex writing prompts.
TestingCatalog AI News
It's interesting to observe how limited compute capacity impacts consumer behavior. I believe we will see more and more such cases as models advance further.
Think of it as a weekly budget that you can decide how to distribute. At the current level, AI can handle certain tasks, but as models advance, the complexity of those tasks will increase along with the value people expect in return.
Does this mean we are participating in "Universal Basic Compute" testing? 🤔
ChatGPT rolled out new system shortcuts /picture and /search
https://www.testingcatalog.com/chatgpt-rolled-out-new-system-shortcuts-picture-and-search-2/
#chatgpt
TestingCatalog
ChatGPT rolled out new system shortcuts /picture and /search
Discover ChatGPT's new /picture and /search shortcuts for a smoother, more engaging chat experience. Generate images with DALL-E or search seamlessly within your conversation.
TestingCatalog AI News
🚨 BREAKING: Mistral Le Chat will get Pixtral vision model support soon. The Pixtral model will be able to work with images.
Currently, it is not enabled yet but may be released any time now.
TestingCatalog AI News
It seems that you will also be able to add more images and remove previously uploaded ones via editing.
4 images max per chat for Pixtral on Le Chat.
TestingCatalog AI News
Runway is rolling out video inputs for Gen-3 Alpha. You can use text prompts to change the video or adjust its style there.
Besides that, Runway is also working on Video Upscaling, where you can select 2x or 4x options and toggle face recognition.
You can preview 1 second for free.
For all waitlist lovers and anime pfp experts, there is a new project where you can experience minting 3 NFTs with different rarities without realising that you are interacting with a blockchain at all.
anime.com
TestingCatalog AI News
🚨 BREAKING: Runway is working on Storyboards! Yes, there are two of them, and they feel like separate prototypes with different user experiences. Here is what's inside.
In the first Storyboard, it is possible to draw and transform that drawing into an image.
TestingCatalog AI News
Later these images can be animated with the Gen-2 model.
TestingCatalog AI News
Results can be previewed in a gallery, where you can view generations one by one.
TestingCatalog AI News
The second storyboard features a slightly different UI where you can generate text prompts for uploaded or generated images.
TestingCatalog AI News
These images can be regenerated afterwards. There is no preview mode here.
Both concepts look more like early prototypes so far and are not necessarily designed to produce a single video for now. It will be very interesting to compare them with the VideoFX Storyboard when both are released.
ICYMI: OpenAI released o1 series AI models with enhanced reasoning
https://www.testingcatalog.com/openai-released-o1-series-ai-models-with-enhanced-reasoning/
#chatgpt
TestingCatalog
OpenAI released o1 series AI models with enhanced reasoning
Discover OpenAI's new o1 AI models, o1-preview and o1-mini, enhancing reasoning in science, coding, and math. Available to ChatGPT Plus users from Sept 12, 2024.