CB Insights released report "Tech Trends for 2024"
Here are 9 insightful slides on AI:
1. Companies are desperate for GPUs.
An Nvidia H100 reportedly costs ~$3.3K to produce, sells to customers for ~$30K, and resells on other marketplaces for up to $100K.
2. Startups are linking up with big tech to access their AI chips and compute power.
3. Researchers estimate that, by 2026, we will exhaust high quality text data for training LLMs — a trend that can slow down AI progress.
In response, developers have increasingly started experimenting with synthetic data.
4. Scraping proprietary sources is getting harder & comes with legal risks too.
Companies like X and Reddit have taken measures to monetize their data, with many more expected to follow.
5. Interest in the intersection between quantum computing & AI is heating up as quantum advances tease more powerful AI models down the road.
Investment surged to a record high in 2023 too.
6. AI is the talk of the town in banking. Earnings call mentions of AI hit a record high in Q1’24.
7. Brain activity data can now be coupled with AI to ‘read minds’ with some accuracy.
8. AI could cut dev costs while making gameplay more immersive.
9. Humanoid robotics startups like Figure are gaining momentum. Dealmaking in the space also ramped up in 2023 and Q1 2024.
Brain-To-Text Competition 2024
This is the most fascinating BCI competition yet, organized by Stanford.
There is still time to develop the world's best brain-to-speech decoder.
Deadline: June 2, 2024
Task: Predict attempted speech from brain activity.
EvalAI
EvalAI: Evaluating state of the art in AI
EvalAI is an open-source web platform for organizing and participating in challenges to push the state of the art on AI tasks.
Microsoft released Phi-3
1. phi-3-mini: a 3.8B model trained on 3.3T tokens that rivals Mixtral 8x7B and GPT-3.5.
2. Phi-3-medium performs well on TriviaQA but noticeably underperforms relative to GPT-3.5.
We can guess that the Phi-3 recipe doesn't magically make it understand more random factoids. It's more focused on valuable knowledge (e.g., STEM), which makes sense.
arXiv.org
Phi-3 Technical Report: A Highly Capable Language Model Locally on...
We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that...
OpenAI presents The Instruction Hierarchy
Training LLMs to Prioritize Privileged Instructions
Today's LLMs are susceptible to prompt injections, jailbreaks, and other attacks that allow adversaries to overwrite a model's original instructions with their own malicious prompts.
New paper from OpenAI on prompt injection - it's the most detailed evaluation of the problem and has some very interesting details.
huggingface.co
Paper page - The Instruction Hierarchy: Training LLMs to Prioritize Privileged
Instructions
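The paper's core idea, that higher-privilege instructions should outrank injected ones, can be illustrated with a toy sketch. The roles, privilege levels, and `resolve` function below are hypothetical illustrations of the concept, not OpenAI's actual training method or API.

```python
# Toy sketch of an instruction hierarchy: each message carries a privilege
# level, and when instructions conflict, lower-privilege ones lose.
# Illustrative only; the real paper trains the model itself to behave this way.

PRIVILEGE = {"system": 3, "developer": 2, "user": 1, "tool_output": 0}

def resolve(messages):
    """Return the instructions to follow: a lower-privilege message
    cannot override a constraint set by a higher-privilege one."""
    constraints = {}  # constraint name -> (privilege, value)
    for msg in messages:
        level = PRIVILEGE[msg["role"]]
        for key, value in msg["instructions"].items():
            if key not in constraints or level >= constraints[key][0]:
                constraints[key] = (level, value)
    return {k: v for k, (_, v) in constraints.items()}

messages = [
    {"role": "system", "instructions": {"reveal_prompt": False}},
    # A prompt injection arriving via tool output tries to override it:
    {"role": "tool_output", "instructions": {"reveal_prompt": True}},
]
print(resolve(messages))  # the system-level constraint wins
```

The point of the sketch: an injected instruction is not ignored because of its content, but because of where it arrived from.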
How Foundation Models Work - Let's understand the basics
Traditional AI models are narrow in scope. They are trained on curated datasets to perform specific tasks, and their capabilities are restricted to the single task for which they were trained.
Foundation models take a different approach. They are trained on massive, diverse, unlabeled datasets, allowing them to be incredibly flexible and perform various tasks. Text generation, classification, summarization - foundation models can do it all. Rather than training new models from scratch, practitioners can leverage existing foundation models and provide tailored prompts or tune them to generate desired outputs across many domains.
The key difference is that while traditional models learn patterns to make predictions, foundation models leverage their vast training data to generate relevant outputs when prompted appropriately.
This capability represents a breakthrough compared to traditional AI models.
Foundation models open up new frontiers for AI applications.
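The contrast above can be sketched in code: a narrow model exposes one fixed task, while a foundation-style interface is steered between tasks by the prompt alone. The two "models" below are fake string-rule stand-ins so the example stays runnable; they illustrate only the interface difference, not real model behavior.

```python
# A "traditional" model: trained for exactly one task, returns only a label.
def traditional_sentiment_model(text: str) -> str:
    return "positive" if "good" in text.lower() else "negative"

# A "foundation"-style interface: one entry point for every task,
# selected purely by how the prompt is phrased. In practice this would
# be a call to a pretrained LLM; here it is a trivial stand-in.
def foundation_model(prompt: str) -> str:
    if prompt.startswith("Summarize:"):
        return "summary of: " + prompt[len("Summarize:"):].strip()
    if prompt.startswith("Classify sentiment:"):
        text = prompt[len("Classify sentiment:"):].strip()
        return "positive" if "good" in text.lower() else "negative"
    return "completion of: " + prompt

# One model, three tasks, no retraining between them:
print(foundation_model("Summarize: GPUs are scarce and expensive."))
print(foundation_model("Classify sentiment: This report is good."))
print(foundation_model("Write a tagline for an AI newsletter"))
```

This is the practical meaning of "provide tailored prompts" in the text: the task switch lives in the input, not in the model weights.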
GE HealthCare launches voice-activated, AI-powered ultrasound machines for women's health.
The latest AI-powered capabilities on the Voluson Signature 20 and 18 enable a number of efficiencies across women’s health care settings, including the new “Hey Voluson” feature which allows users to operate the system using voice commands, saving time and keystrokes.
Fierce Biotech
GE HealthCare launches voice-activated, AI-powered ultrasound machines for women's health
GE HealthCare has raised the curtain on two ultrasound systems equipped with artificial intelligence programs designed to assist in diagnosing conditions in women’s health, including obstetric exam | GE HealthCare has raised the curtain on two ultrasound…
Perplexity raised $62.7M at a $1.04B valuation, led by Daniel Gross, along with Stan Druckenmiller, NVIDIA, Jeff Bezos, Tobi Lutke, Garry Tan, Andrej Karpathy, Dylan Field, Elad Gil, Nat Friedman, IVP, NEA, Jakob Uszkoreit, Naval Ravikant, Brad Gerstner, and Lip-Bu Tan.
www.perplexity.ai
Perplexity launches Enterprise Pro
Announces $62.7M in funding and partnerships with SoftBank + Deutsche Telekom
ByteDance presents Graphic Design with Large Multimodal Model
Outperforms prior art and establishes a strong baseline for the field of graphic design.
Repo.
arXiv.org
Graphic Design with Large Multimodal Model
In the field of graphic design, automating the integration of design elements into a cohesive multi-layered artwork not only boosts productivity but also paves the way for the democratization of...
Datasets, Benchmarks, and Protocols: GPT versus Resident Physicians — A Benchmark Based on Official Board Scores
This study shows that GPT-4 compares well against a cohort of 849 humans who took medical board exams in 5 specialties in 2022. The analysis is solid but uses zero-shot AI; smarter (and fully automatic) prompting techniques such as MedPrompt would improve GPT-4 even further.
New Google DeepMind paper exploring what persuasion and manipulation mean in the context of language models.
Existing safeguard approaches often focus on harmful outcomes of persuasion.
This research argues for a deeper examination of the process of AI persuasion itself to understand and mitigate potential harms.
The authors distinguish between rational persuasion, which relies on providing relevant facts, sound reasoning, or other forms of trustworthy evidence, and manipulation, which relies on taking advantage of cognitive biases and heuristics or misrepresenting information.
What are the ethical and societal implications of advanced AI assistants? What might change in a world with more agentic AI?
New paper explores these questions from Google DeepMind.
Researchers define advanced AI assistants as “artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user—across one or more domains—in line with the user’s expectations.”
For advanced AI assistants, several things stand out:
First, with autonomous and agentic technologies *alignment* is key—but this goes beyond following the instructions or intentions of a single user.
Greater freedom and scope of action make it essential that AI systems embody the right set of values—respecting the needs of non-users and society more widely.
An AI assistant is aligned if it does not *disproportionately* favor some actors over others.
Second, increasingly personal and human-like assistants introduce new questions around anthropomorphism, privacy, trust and appropriate relationships with AI.
Safeguards are needed to support user well-being—and well-being needs to be understood in holistic terms.
Third, millions of AI assistants could be deployed at a societal level where they’ll interact with one another and with non-users.
Coordination to avoid collective action problems is needed. So too are equitable access and inclusive design.
Fourth, it’s hard to evaluate AI assistants using tools that focus only on model properties and outputs.
We need to understand how people interact with assistants & how they could shape society over time.
Fifth, we may stand at the beginning of an era of profound technological and societal change—which means there's a window of opportunity to shape the design, use and purpose of advanced AI assistants.
The paper contains over 50 suggestions...
Google DeepMind
The ethics of advanced AI assistants
Exploring the promise and risks of a future with more capable AI
Apple is joining the public AI game with 4 new models on the Hugging Face hub
Snowflake announced a state-of-the-art large language model uniquely designed to be the most open, enterprise-grade LLM on the market.
Snowflake
Snowflake Arctic - LLM for Enterprise AI
Introducing Snowflake Arctic, a top-tier enterprise focused LLM pushing the frontiers of cost-effective training and openness.
Nvidia acquired Run:ai for $700M
Run:ai is a Tel Aviv-based company that makes it easier for developers and operations teams to manage and optimize their AI hardware infrastructure; the official terms were not disclosed, and ~$700M is the reported figure.
Investment philosophy: support companies that leverage its technology
2021: 14 investments
2022: 14 investments
2023: 40 investments
2024: 12 investments so far
What's going on:
- From looking at the market map, their investment strategy is solely focused on:
1. Companies building foundation models
(Cohere, Imbue, Runway, Inflection, etc).
2. Companies that help deploy these models
(Together, Replicate, Hugging Face, etc).
Drexel University announced a new machine-learning technology that enables accurate estimation of brain age using a low-cost EEG device.
YouTube
How Old Is Your Brain?
A team of researchers from Drexel and Stockton universities has developed a new and practical way to monitor general brain health and detect premature brain aging using a low-cost EEG headset and a machine learning algorithm, presenting a quick and easy way…
Immersive_tech_in_healthcare_1713358341.pdf
14.2 MB
This is a very interesting report focused on the use of immersive technology, like VR, in the healthcare sector.
The goal of the work is to help those in healthcare (including providers, built environment experts, and policy makers) to:
1. Advocate for the benefits of XR as a means to innovate health and social care
2. Increase debate and dialog across networks of expertise to create health-promoting environments
3. Understand the overriding priorities in making effective pathways to the implementation of XR.
This research is unique in its methodology, which includes not only a literature review but also semi-structured interviews.
The unique nature of this work really pays off in the knowledge gained.
The authors conclude that:
(a) both built environment and healthcare sectors can benefit from the various capabilities of XR through cross-sectional initiatives, evidence-based practices, and participatory approaches.
(b) a confluence of knowledge and methods from HCI and HBI can increase the interoperability and usability of XR for patient-centered and value-based healthcare models.
(c) the XR-enabled technological regime will largely affect the new forms of value in healthcare premises by fostering more decentralized, preventive, and therapeutic characteristics in the future healthcare ecosystems.