Based on the available data, the usage of ChatGPT in the selected countries is as follows:
1. United States: The United States accounts for 15.32% of the total audience using ChatGPT.
2. India: India accounts for 6.32% of the total audience using ChatGPT.
3. Japan: Japan accounts for 3.97% of the total audience using ChatGPT.
4. Canada: Canada accounts for 2.74% of the total audience using ChatGPT.
5. Other countries: The rest of the world accounts for 68.36% of visits to ChatGPT's website.
How to Play Long Term Games:
Systems > Goals
Discipline > Motivation
Trust > Distrust
Principles > Tactics
Writing > Reading
Vulnerability > Confidence
North Stars > Low Hanging Fruit
Trends > News
Habits > Sprints
Questions > Answers
Problems > Solutions
People > Projects
I think AI is going to change many things, from game production to game UI/UX.
Over the past few years AI models have evolved at a tremendous pace. Starting from the development history of the latest AI models and the research topics expected next, let's imagine the games of the future.
As Stable Diffusion models improve rapidly, all sorts of experiments are being run around game art. What would a process look like that makes good use of AI when planning and producing game art?
If these two questions make you curious, please fill out the Google Form below.
https://forms.gle/RFJjwqELL9juekP66
Google Docs
AGI Town in Seoul Session 4 (Fri 6/23) presentation sign-up
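To make the game-art question above a bit more concrete, here is a minimal sketch of generating concept art with Stable Diffusion via the Hugging Face diffusers library. The checkpoint name, prompt, and sampling parameters are assumptions for illustration only, not anything from the talk announcement.

```python
# Minimal sketch: generating game concept art with Stable Diffusion
# via the Hugging Face `diffusers` library. The checkpoint and prompt
# below are hypothetical choices, not from the announcement above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires a CUDA GPU

prompt = "isometric fantasy tavern interior, hand-painted game concept art"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("concept_tavern.png")
```

In practice, an art team would iterate on prompts and seeds like this to explore directions quickly, then paint over or refine the selected outputs by hand.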
I don't have to check Hacker News on a daily basis anymore! Thanks for the service!
https://share.snipd.com/show/a7f48397-d9ed-458a-9bda-51b504acddee
Snipd
Hacker News Recap
A podcast that recaps some of the top posts on Hacker News every day. This is a third-party project, independent from HN and YC. Text and audio generated using…
What era do we live in?
A wide range of AI tasks that used to take 5 years and a research team to accomplish in 2013, now just require API docs and a spare afternoon in 2023.
Not a single PhD in sight. When it comes to shipping AI products, you want engineers, not researchers.
Microsoft, Google, Meta, and the large Foundation Model labs have cornered scarce research talent to essentially deliver "AI Research as a Service" APIs. You can't hire them, but you can rent them, provided you have software engineers on the other end who know how to work with them. There are ~5000 LLM researchers in the world, but ~50m software engineers. Supply constraints dictate that an "in-between" class of AI Engineers will rise to meet demand.
Fire, ready, aim. Instead of requiring data scientists/ML engineers to do a laborious data collection exercise before training a single domain-specific model that is then put into production, a product manager/software engineer can prompt an LLM, and build/validate a product idea, before getting specific data to finetune.
Let's say there are 100-1000x more of the latter than the former, and the "fire, ready, aim" workflow of prompted LLM prototypes lets you move 10-100x faster than traditional ML. So AI Engineers will be able to validate AI products say 1,000-10,000x cheaper. It's Waterfall vs Agile, all over again. AI is Agile.
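As a rough sketch of the "fire, ready, aim" workflow described above: prototype the product behavior with a prompted LLM first, and only invest in data collection and fine-tuning once the idea is validated. This assumes the 2023-era OpenAI Python SDK; the support-ticket triage task is a made-up example.

```python
# Sketch of the "fire, ready, aim" workflow: validate a product idea with a
# prompted LLM before collecting data or training a domain-specific model.
# Assumes the 2023-era OpenAI Python SDK (openai<1.0); the task is hypothetical.
import openai

openai.api_key = "sk-..."  # your API key


def triage_ticket(ticket_text: str) -> str:
    """Classify a support ticket into a category with a prompt, no training."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": "Classify the support ticket as one of: billing, bug, "
                           "feature_request, other. Reply with the label only.",
            },
            {"role": "user", "content": ticket_text},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"].strip()


print(triage_ticket("I was charged twice for my subscription this month."))
# -> "billing"; if the prototype holds up, then collect real tickets and fine-tune.
```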
When something new appears, there is a window in which anyone can become an expert in it. Only the people who are interested pay attention, play with it, and talk to each other. But eventually the field matures and that window closes. After that, the barrier to entry is much higher.
You are not too old to pivot into AI.
https://www.latent.space/p/not-old
www.latent.space
You Are Not Too Old (To Pivot Into AI)
Everything important in AI happened in the last 5 years and you can catch up
AI x Design: https://www.figma.com/blog/ai-the-next-chapter-in-design/
Are there any acquaintances pursuing a career in design who have both the skill and the interest? haha
If we could gather about five people, I think we could have a lot of interesting conversations!
Figma
AI: The Next Chapter in Design | Figma Blog
AI is more than a product, it's a platform that will change how and what we design, and who gets involved.
We need to understand the function calling feature OpenAI recently announced.
https://www.latent.space/p/function-agents#details
www.latent.space
Emergency Pod: OpenAI's new Functions API, up to 75% Price Drop, 4x Context Length (w/ Simon Willison, Riley Goodside, Roie Schwaber…
Listen now | Leading AI Engineers from Scale, Microsoft, Pinecone, Huggingface and more convene to discuss the June 2023 OpenAI updates and the emerging Code x LLM paradigms. Plus: Recursive Function Agents!
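For a concrete feel of what was announced, here is a minimal sketch of the June 2023 functions API using the then-current OpenAI Python SDK. The get_current_weather function and its JSON schema are invented for illustration; they are not from the podcast.

```python
# Minimal sketch of OpenAI function calling (June 2023 API, openai<1.0 SDK).
# `get_current_weather` and its schema are illustrative examples only.
import json
import openai

openai.api_key = "sk-..."

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Seoul"},
            },
            "required": ["city"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in Seoul?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether to call a function
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the function name plus JSON-encoded arguments;
    # the application is responsible for actually executing the function.
    name = message["function_call"]["name"]
    args = json.loads(message["function_call"]["arguments"])
    print(name, args)  # e.g. get_current_weather {'city': 'Seoul'}
```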
Wow, he is earning over $100K in monthly recurring revenue with two AI services.
http://PhotoAI.com $62K MRR
http://InteriorAI.com $52K MRR
Photo AI
AI Video Generator & Image Generator by Photo AI
Generate photorealistic images and videos of people with AI. Take stunning photos of people with the first AI Photographer! Generate photo and video content for your social media with AI. Save time and money and do an AI photo shoot from your laptop or phone…
I found the best-organized collection of AI-related newsletters and podcasts on GitHub. Eureka!!!
https://github.com/swyxio/ai-notes/blob/main/Resources/Good%20AI%20Podcasts%20and%20Newsletters.md
GitHub
ai-notes/Resources/Good AI Podcasts and Newsletters.md at main · swyxio/ai-notes
notes for software engineers getting up to speed on new AI developments. Serves as datastore for https://latent.space writing, and product brainstorming, but has cleaned up canonical references und...
Continuous Learning_Startup & Investment
I found GitHub to be the best organizer of AI-related newsletters and podcasts. Eureka!!! https://github.com/swyxio/ai-notes/blob/main/Resources/Good%20AI%20Podcasts%20and%20Newsletters.md
A fun GitHub repo I discovered today. There's so much interesting stuff in it.
The AI blog he runs, and the podcast he hosts: https://latent.space/
AI notes: https://github.com/swyxio/ai-notes/tree/main
- Use cases
- Reading material for beginners, intermediate, and advanced readers
- Communities
- People
- Reality & Demotivations
- Legal, Ethics, and Privacy
- Alignment, Safety
Good AI Podcasts and Newsletters: https://github.com/swyxio/ai-notes/blob/main/Resources/Good%20AI%20Podcasts%20and%20Newsletters.md
www.latent.space
Latent.Space | Substack
The AI Engineer newsletter + Top technical AI podcast. How leading labs build Agents, Models, Infra, & AI for Science. See https://latent.space/about for highlights from Greg Brockman, Andrej Karpathy, George Hotz, Simon Willison, Soumith Chintala et al!…
Funding news_Foundational models
https://www.newcomer.co/p/former-github-cto-jason-warner-raises
https://reka.ai/announcing-our-58m-funding-to-build-generative-models-and-advance-ai-research/
www.newcomer.co
Former GitHub CTO Jason Warner Raises $26 Million for Foundation Model Code Startup
Warner steps back from Redpoint to lead Poolside. Redpoint leads seed round.
Over the past few days I was in Bourgogne, France, spending time with the owners of famous wineries in the region: sharing meals, talking, and enjoying wine together. It wasn't through any special connection; an acquaintance had put together a small trip (8 people) and I got to tag along.
Someone who runs a Korean company but does business globally, the head of a large Australian company, an investor in listed companies, and me... everyone came as a couple, and it turned out to be far more valuable than I had expected.
Spending time and talking with the winery owners felt a lot like the time we spend with the companies we've invested in. They worry about everything from the weather to AI. For reference, they usually work from 6 a.m. to 10 p.m., and in peak season they work with almost no sleep. I learned a lot from them, too, and came away humbled.
Some of the remarks that stuck with me:
"Wine is richer and tastier before a meal, when you're hungry. That's human nature."
"I've been doing this for thirty years now and I feel like I'm only starting to figure out how it should be done. There are so many variables that you have to keep experimenting."
"We work hard, harder than anyone. But in the end the heavens decide the taste, so we also pray a lot."
"Everyone has their own methods. Of course we watch with interest what others are doing, and we experiment, but we do it our own way."
"Every wine is different each time you open it. Each person has aromas and flavors they prefer, so you can't say any one wine is absolutely better. And even the same wine varies a little from bottle to bottle, and it tastes different again depending on your mood and the atmosphere that day." -- (the owner of Romanée-Conti)
Having heard that last precious remark from him... I'm glad I can now enjoy wine without fussing over rankings.
Continuous Learning_Startup & Investment
What era do we live in? A wide range of AI tasks that used to take 5 years and a research team to accomplish in 2013, now just require API docs and a spare afternoon in 2023. Not a single PhD in sight. When it comes to shipping AI products, you want engineers…
I think this is mostly right.
- LLMs created a whole new layer of abstraction and profession.
- I've so far called this role "Prompt Engineer" but agree it is misleading. It's not just prompting alone, there's a lot of glue code/infra around it. Maybe "AI Engineer" is ~usable, though it takes something a bit too specific and makes it a bit too broad.
- ML people train algorithms/networks, usually from scratch, usually at lower capability.
- LLM training is becoming sufficiently different from ML because of its systems-heavy workloads, and is also splitting off into a new kind of role, focused on very large scale training of transformers on supercomputers.
- In numbers, there's probably going to be significantly more AI Engineers than there are ML engineers / LLM engineers.
- One can be quite successful in this role without ever training anything.
- I don't fully follow the Software 1.0/2.0 framing. Software 3.0 (imo ~prompting LLMs) is amusing because prompts are human-designed "code", but in English, and interpreted by an LLM (itself now a Software 2.0 artifact). AI Engineers simultaneously program in all 3 paradigms. It's a bit 😵‍💫
https://twitter.com/karpathy/status/1674873002314563584
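To make the three paradigms above concrete, here is a hedged sketch that solves the same toy task each way: hand-written rules (Software 1.0), weights learned from data (Software 2.0), and an English prompt interpreted by an LLM (Software 3.0). The sentiment task, the tiny training set, and the model choices are assumptions for illustration only, not anything from the thread.

```python
# Illustrative sketch of the three paradigms, using a made-up sentiment task.

# Software 1.0: the logic is written by hand, in code.
def sentiment_v1(text: str) -> str:
    positive = {"great", "love", "excellent"}
    return "positive" if any(w in text.lower() for w in positive) else "negative"

# Software 2.0: the logic is learned from data and stored as weights.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["I love this", "This is great", "Terrible product", "I hate it"]
labels = ["positive", "positive", "negative", "negative"]
vec = CountVectorizer().fit(texts)
clf = LogisticRegression().fit(vec.transform(texts), labels)

def sentiment_v2(text: str) -> str:
    return clf.predict(vec.transform([text]))[0]

# Software 3.0: the logic is written in English and interpreted by an LLM
# (itself a Software 2.0 artifact). Assumes the 2023-era OpenAI SDK.
import openai

def sentiment_v3(text: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Answer 'positive' or 'negative' only. Sentiment of: {text}",
        }],
        temperature=0,
    )
    return resp["choices"][0]["message"]["content"].strip()
```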