Gemini Live
In Gemini Live, you’ll get a fullscreen experience with a cool audio waveform effect. This will let you have a 2-way dialogue, with Gemini returning concise responses.
You can speak at your own pace, with Google adapting, and interrupt Gemini as it’s replying to add new information or ask for clarification.
Available for Gemini Advanced subscribers, it’s launching in the coming months.
🔗 9to5Google
🧑💻 @agamtechtricks
Project Astra
The Astra demo Google showed — single-take in real-time — pointed a phone at objects as someone issued commands or questions, with Gemini recognizing what’s in front of it in near real-time. You can show it a cityscape and ask what neighborhood you’re in, or inquire about code.
This is built on the Gemini 1.5 Pro model and “other task specific models.” Google says it’s “designed to process information faster by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this information for efficient recall,” with reducing response times to “something conversational” a “difficult engineering challenge.”
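The pipeline Google describes — continuously encoding frames, merging them with speech into one timeline of events, and caching that timeline for recall — can be sketched roughly like this. All names and types here are illustrative stand-ins, not Google’s actual implementation:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float   # seconds since the session started
    kind: str          # "frame" or "speech"
    encoding: str      # stands in for a real embedding/encoding

class TimelineCache:
    """Rolling cache of interleaved video/speech events for fast recall."""

    def __init__(self, max_events: int = 1000):
        self.events: deque = deque(maxlen=max_events)

    def add_frame(self, timestamp: float, encoding: str) -> None:
        # Frames are encoded continuously as they arrive, not on demand.
        self.events.append(Event(timestamp, "frame", encoding))

    def add_speech(self, timestamp: float, encoding: str) -> None:
        self.events.append(Event(timestamp, "speech", encoding))

    def recall(self, since: float) -> list:
        # Everything after a point in time is already encoded, so answering
        # "where did you last see X?" needs no re-processing of raw video.
        return [e for e in self.events if e.timestamp >= since]

cache = TimelineCache()
cache.add_frame(0.0, "desk with speaker")
cache.add_speech(0.5, "what is this?")
cache.add_frame(1.0, "window, cityscape")
recent = cache.recall(since=0.4)
```

The bounded deque is what makes the “efficient recall” tradeoff visible: old events fall off the back, so memory stays constant while recent context stays queryable.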
🔗 9to5Google
🧑💻 @agamtechtricks
More Gemini Updates:
Gemini 1.5 Pro:
Google announced Gemini 1.5 Pro in February and is now launching it in the paid Gemini Advanced subscription.
New Extension:
The new YouTube Music extension lets you search for songs by "mentioning a favorite verse or a featured artist."
Gems:
In the coming months, Gemini Advanced users will be able to create customized versions of Gemini. Examples include a "gym buddy, sous chef, coding partner, etc." All Gemini users will have access to a number of pre-made Gems, like Learning Coach.
Immersive Planner:
Gemini Advanced on the web is getting an “immersive planner” that can create a custom, timeline-based itinerary. Google says this “new planning experience will go beyond showing a list of suggested activities.”
Gemini 1.5 Flash:
Google is introducing 1.5 Flash as its “fastest and most versatile multimodal AI model.” It has the same 1 million token context window and is aimed at use cases where low latency and cost matter most.
🔗 9to5Google
🧑💻 @agamtechtricks
Gmail on Android, iOS getting more Gemini: Q&A, better Smart Reply, Summarize
Gmail on Android and iOS is getting a “Summarize this email” feature for longer threads. Workspace Labs users will get Summarize this week, with a launch for Google One AI Premium subscribers and paying Gemini for Workspace customers in June.
Meanwhile, Gmail Q&A goes beyond summarizing by letting you enter full prompts, so you can ask questions about the email directly.
Google is building on 2017’s Smart Reply and 2018’s Smart Compose with Contextual Smart Replies. Appearing as a carousel of chips, each suggestion is briefly summarized, like “Proceed & confirm time” or “Suggest new time.”
🔗 9to5Google
🧑💻 @agamtechtricks
Gemini 1.5 Pro-powered side panel launching in Gmail, Google Docs, and more
Available in Gmail, Docs, Sheets, Slides, and Drive, the side panel is now powered by Gemini 1.5 Pro. A larger context window allows more information to be analyzed, and reasoning is more advanced. For example, you can ask it to summarize emails from your child’s school or to highlight the main points from a recording of a PTA meeting you missed.
It’s coming to users enrolled in Workspace Labs and the Gemini for Workspace Alpha. All paying Gemini for Workspace customers and Google One AI Premium subscribers will get it next month.
🔗 9to5Google
🧑💻 @agamtechtricks
Gemini in Android Studio is getting some new features with Android Studio Koala!
• You can now provide custom prompts to generate a code suggestion that either adds new code or transforms selected code. You can ask Gemini to simplify complex code by rewriting it, perform specific code transformations like “make this code idiomatic”, or generate new functions you describe. Android Studio will show you Gemini’s code suggestion as a code diff you can review.
• It can now analyze your crash reports, generate insights that are shown in the Gemini tool window, provide a crash summary, and sometimes recommend next steps like sample code and links to relevant documentation.
• Later this year, the underlying Gemini model will be upgraded to Gemini 1.5 Pro, which offers a much larger context window and multimodal input.
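The “code suggestion as a code diff” presentation above is the standard unified-diff review format. A minimal sketch of how an original function and a suggested rewrite end up as a reviewable diff (both snippets here are made-up examples, not Android Studio output):

```python
import difflib

# Hypothetical original function and a suggested "make it idiomatic" rewrite.
original = """def total(xs):
    s = 0
    for x in xs:
        s = s + x
    return s
""".splitlines(keepends=True)

suggested = """def total(xs):
    return sum(xs)
""".splitlines(keepends=True)

# A unified diff marks removed lines with "-" and added lines with "+",
# which is the same shape of review the IDE presents before you accept.
diff = "".join(difflib.unified_diff(original, suggested,
                                    fromfile="original", tofile="suggested"))
print(diff)
```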
ℹ️ Credits: @MishaalAndroidNews
🧑💻 @agamtechtricks
Gemini Nano is coming to Chrome, starting with ‘Help Me Write’
Gemini Nano is being baked into Google Chrome on PCs! This will power a new “help me write” feature that helps users write short-form content. When users click on “help me write,” Chrome will download and quickly load the optimized Gemini Nano model in the background. This will work on the “vast majority of devices that are out there.”
Furthermore, web devs will get access to Gemini in Chrome. They won’t have to worry about prompt engineering, as Gemini Nano can be called by one of several high-level APIs, including translate, caption, and transcribe.
Google says they’ve “started to engage” with other browsers on enabling Gemini and will be opening up an early preview program soon.
ℹ️ Credits: @MishaalAndroidNews
🔗 9to5Google
🧑💻 @agamtechtricks
Google Play has announced a bunch of new features and tools for app developers! Here’s a summary:
- The ability to tailor store listings by search keywords. If you don’t know what keywords to optimize for, Google Play will give suggested keywords.
- Developers can now leverage Play Points to launch coupons, discounts, or exclusive in-game items.
- Deep links patching makes it easier to experiment or make quick changes to your deep links setup without needing to release a new app version.
- A new surface (in developer preview) in Google Play for devs to showcase app content and enable cross-app continuation journeys. Devs can highlight the most important content from their apps and even launch users into a full-screen, immersive experience with personalized recommendations and promotions. This requires integrating the Engage SDK.
- The SDK Console is now available to all SDK providers that are distributed from a canonical Maven repository source. Devs can also now share crash or ANR data with SDK owners.
- Google’s new pre-review checks aggregate existing quality checks into one UI so it’s easier for devs to spot common policy and compatibility issues before their app goes live. You can also now discard unwanted releases in the “not yet sent for review” stage.
- Play Integrity API can now return a Play Protect verdict, letting apps know if Play Protect is turned on and if it’s found any known malware. Through recent device activity, Play Integrity lets apps know if it detects a high volume of requests that could signal an attack. Further, a new app access risk signal (in public beta) lets devs know when a non-accessibility app is capturing the screen or controlling the device.
- Listings for an app will now show screenshots, ratings, and reviews specific to each device type. Users can also search and filter through ratings and reviews by device type.
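On the backend, acting on the new Play Integrity signals amounts to checking extra fields in the decoded verdict JSON. A minimal server-side sketch — the field and value names follow Google’s published verdict format, but treat them as assumptions to verify against the current Play Integrity docs:

```python
# A decoded integrity verdict payload (illustrative values).
verdict = {
    "environmentDetails": {
        "playProtectVerdict": "NO_ISSUES",        # Play Protect on, no malware found
        "appAccessRiskVerdict": {
            "appsDetected": ["KNOWN_INSTALLED"],  # no capture/control risk flagged
        },
    },
}

def assess_risk(verdict: dict) -> list:
    """Collect risk flags a backend might act on before serving a request."""
    env = verdict.get("environmentDetails", {})
    flags = []
    # Anything other than a clean Play Protect result is worth flagging.
    if env.get("playProtectVerdict") not in ("NO_ISSUES", "NO_DATA"):
        flags.append("play_protect_issue")
    # The app access risk signal reports apps that could be capturing the
    # screen or controlling the device.
    detected = env.get("appAccessRiskVerdict", {}).get("appsDetected", [])
    if any("CAPTURING" in d or "CONTROLLING" in d for d in detected):
        flags.append("app_access_risk")
    return flags

flags = assess_risk(verdict)
```

A clean verdict yields no flags; a payload reporting, say, a known screen-capturing app would come back with `"app_access_risk"` set, letting the server step up friction instead of failing silently.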
(1/2)
- Billing changes: customers with a Google family setup can approve their child’s purchases from anywhere; in India, customers can ask someone else to buy an app or in-app product for them by sharing a payment link; Google Play now updates price ranges to reflect currency fluctuations; items can be priced as high as $999.99 now; new badges reflect trending items; finally, customers in Brazil, France, Italy, and Spain can now pay over time for long-term subscriptions.
(2/2)
ℹ️ Credits: @MishaalAndroidNews
🧑💻 @agamtechtricks
Google launches ‘Veo,’ an AI video generation tool
Veo is built specifically for video generation and, like other modern models, understands visual semantics and natural language. Bringing that approach to video generation yields results that can be creatively tailored to fit certain styles.
Google notes that Veo can understand “cinematic terms” in prompts, like aerial shots and timelapse formats. Veo can generate 1080p videos lasting beyond a minute, surpassing current models like OpenAI’s Sora, which maxes out at 60 seconds.
🔗 9to5Google
🧑💻 @agamtechtricks
Imagen 3
Imagen 3 is positioned as Google’s “highest-quality” text-to-image model and offers a few improvements over the Imagen 2 model we’ve seen in Gemini and Bard.
Imagen 3 is said to bring a higher level of detail with fewer visual artifacts and impurities in generated images. When requested, images are more photorealistic and lifelike.
🔗 9to5Google
🧑💻 @agamtechtricks