Continuous Learning_Startup & Investment
We journey together through the captivating realms of entrepreneurship, investment, life, and technology. This is my chronicle of exploration, where I capture and share the lessons that shape our world. Join us and let's never stop learning!
I think this is mostly right.
- LLMs created a whole new layer of abstraction and profession.
- I've so far called this role "Prompt Engineer" but agree it is misleading. It's not just prompting alone, there's a lot of glue code/infra around it. Maybe "AI Engineer" is ~usable, though it takes something a bit too specific and makes it a bit too broad.
- ML people train algorithms/networks, usually from scratch, usually at lower capability.
- LLM training is becoming sufficiently different from ML because of its systems-heavy workloads, and is also splitting off into a new kind of role, focused on very large scale training of transformers on supercomputers.
- In numbers, there's probably going to be significantly more AI Engineers than there are ML engineers / LLM engineers.
- One can be quite successful in this role without ever training anything.
- I don't fully follow the Software 1.0/2.0 framing. Software 3.0 (imo ~prompting LLMs) is amusing because prompts are human-designed "code", but in English, and interpreted by an LLM (itself now a Software 2.0 artifact). AI Engineers simultaneously program in all 3 paradigms. It's a bit ๐Ÿ˜ตโ€๐Ÿ’ซ

https://twitter.com/karpathy/status/1674873002314563584
[How can we build a 10x better product with AI?_GitHub Copilot]

Copilot์„ ์‚ฌ์šฉํ•˜์‹œ๋Š” ๋ถ„๋“ค ์žˆ์œผ์‹ ๊ฐ€์š”? GitHub Copilot์€ GitHub์™€ OpenAI์—์„œ ๊ฐœ๋ฐœํ•œ AI coding assistance์ธ๋ฐ์š”. ์ œ ์ฃผ๋ณ€์— Copilot์„ ํ•œ๋ฒˆ ์‚ฌ์šฉํ•˜์‹  ๋ถ„๋“ค์€ ๋Œ€๋ถ€๋ถ„ ๊พธ์ค€ํžˆ ์‚ฌ์šฉํ•˜์‹œ๋ฉด์„œ ๋งŒ์กฑํ•˜์‹œ๋”๋ผ๊ณ ์š”.
๋‚ด๊ฐ€ ์ž‘์„ฑํ•˜๋Š” ์ฝ”๋“œ ๋ฒ ์ด์Šค์˜ ๋งฅ๋ฝ์„ ์ž์„ธํžˆ ์ดํ•ดํ•˜๊ณ  ์‹ค์‹œ๊ฐ„์œผ๋กœ ์ ํ•ฉํ•œ ์ •๋ณด๋ฅผ ์ถ”์ฒœํ•˜๋Š” ๊ฒƒ์ด Copilot์ด ๊ฐ€์ง„ ๊ฐ€์žฅ ํฐ ์žฅ์ ์ด๋ผ๊ณ  ์ƒ๊ฐํ•˜๋Š”๋ฐ์š”. ์ด๋Ÿฐ ๊ธฐ๋Šฅ๋“ค์€ ์–ด๋–ป๊ฒŒ ์ง€์›ํ•  ์ˆ˜ ์žˆ์„๊นŒ์š”? AI๊ฐ€ ์•Œ์•„์„œ ๋‹ค ์ถ”์ฒœํ•ด์ฃผ๋Š” ๊ฑธ๊นŒ์š”?
์ตœ๊ทผ์— Parth Thakkar์ด ์“ด Copilot Internals์„ ์ฝ๊ณ  ์ƒˆ๋กญ๊ฒŒ ๋ฐฐ์šด ๋‚ด์šฉ๊ณผ ์ œ ์ƒ๊ฐ ํ•œ ์Šคํ‘ผ์„ ๊ณต์œ ํ•ฉ๋‹ˆ๋‹ค.
Copilot์ด ๋‚ด๊ฐ€ ์ž‘์„ฑํ•˜๋Š” ์ฝ”๋“œ์— ๋งฅ๋ฝ์„ ์ดํ•ดํ•˜๊ณ  ์‹ค์‹œ๊ฐ„์œผ๋กœ ๋‹ต์„ ์ฃผ๋Š” ๋ฐ์—๋Š” ํฌ๊ฒŒ 3๊ฐ€์ง€ ๋น„๋ฒ• ์†Œ์Šค๊ฐ€ ์กด์žฌํ•ฉ๋‹ˆ๋‹ค.

Secret ingredient 1: Prompt engineering
- When the client sends a prompt (i.e., as you write code), it packages the context around your code and sends it to the AI model (Codex): a prefix (your position in the code plus snippets from related code/files), a suffix (context about where the generated code will be inserted), and PromptElementRanges (basic metadata that helps the prompt work well). A sketch of this assembly follows below.

Secret ingredient 2: Model invocation
- Copilot calls the AI model through two channels: inline/ghost text and the Copilot panel.
- The inline/ghost-text interface is optimized for suggestion speed: it reduces repeated model calls, adapts suggestions to the user's input, and uses a debouncing mechanism to cope with fast typing (see the sketch below). The Copilot panel, by contrast, requests more samples and ranks the candidate solutions using logprobs. Both interfaces run checks to filter out unhelpful completions.

Secret ingredient 3: Telemetry
- GitHub Copilot learns from user interactions via telemetry and uses them to improve the product. This includes whether a suggestion was accepted or rejected, how long an accepted suggestion persists in the code, and code snippets captured within 30 seconds of acceptance; users can opt out of this data collection for privacy. A rough sketch of the persistence check follows below.
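One way such a persistence check could be implemented is sketched below; the 30-second window comes from the write-up above, but the fuzzy-matching approach and the editor hook names are my own placeholders.

```python
# Sketch of a "did the accepted suggestion survive in the code?" telemetry check.
# The 30 s window is from the write-up; the matching approach is my own illustration.
import difflib
import threading

def schedule_persistence_check(accepted_text, read_current_file, report, delay_s=30.0):
    """After `delay_s` seconds, measure how much of the accepted suggestion is
    still present in the file and report it as a telemetry event."""
    def check():
        current = read_current_file()
        match = difflib.SequenceMatcher(None, accepted_text, current).find_longest_match(
            0, len(accepted_text), 0, len(current))
        retained = match.size / max(1, len(accepted_text))  # 1.0 = fully kept, 0.0 = gone
        report({"event": "suggestion_persistence", "retained": round(retained, 2)})
    threading.Timer(delay_s, check).start()

# Example with stand-ins for the editor hooks (shortened delay so it finishes quickly):
schedule_persistence_check(
    accepted_text="def days_between(a, b):",
    read_current_file=lambda: "import datetime\n\ndef days_between(a, b):\n    ...\n",
    report=print,
    delay_s=0.1,
)
```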


What can we, as founders trying to use AI to build products that are 10x better for customers, learn from this?
- The model by itself is not enough to deliver value to customers. What matters is the engineering capability to understand the model well and apply it to the customer's problem, together with fast iteration.
- No one can yet claim to know how to build great products with LLMs or AI. That means there is an opportunity for startups that understand AI well and combine AI models with all kinds of engineering to solve customer problems.

๋” ์ž์„ธํ•œ ๋‚ด์šฉ์„ ๋ณด๊ณ  ์‹ถ๋‹ค๋ฉด ์•„๋ž˜ ๋งํฌ๋ฅผ ์ฐธ๊ณ ํ•ด์ฃผ์„ธ์š”. https://bit.ly/copilotinternal


If you have strong engineering experience and knowledge, or a deep understanding of AI models, and you're interested in creating 10x more value for customers, feel free to reach out via DM or minseok.kim0129@gmail.com and tell me what problems you've solved so far and what problems you'd like to solve next ๐Ÿ™

Using today's AI services, I think we're in a period much like the early days of the PC, the internet, and mobile: nobody knows yet which services will be valuable to users, so it's worth trying everything. It's a bit like how the early mobile era had the concept of LBS (Location Based Services), whereas today almost every mobile service treats GPS as a basic, built-in capability.

๊ฒฐ๊ตญ ์œ ์ €์˜ ๋ณ€ํ•˜์ง€ ์•Š๋Š” ๋‹ˆ์ฆˆ๋ฅผ ๋ฐœ๊ฒฌํ•˜๊ณ  ๋น ๋ฅด๊ฒŒ ๋ณ€ํ™”ํ•˜๋Š” ๊ธฐ์ˆ ์„ ์ž˜ ํ™œ์šฉํ•ด์„œ 10๋ฐฐ ์ข‹์€ ์„œ๋น„์Šค๋ฅผ ์ง€์†์ ์œผ๋กœ ๋งŒ๋“ค ์ˆ˜ ์žˆ๋Š” ํŒ€์ด ์ข‹์€ ์ œํ’ˆ ๊ทธ๋ฆฌ๊ณ  ์ข‹์€ ํšŒ์‚ฌ๋ฅผ ๋งŒ๋“ค ์ˆ˜ ์žˆ๋‹ค๊ณ  ๋ฏฟ์Šต๋‹ˆ๋‹ค.
https://youtu.be/ajkAbLe-0Uk

Major Takeaways:

Product Differentiation: Perplexity AI focuses on providing accurate and trustworthy search results with citations, positioning itself as a superior alternative to AI chatbots like ChatGPT and Bard in terms of search accuracy. They differentiate themselves further by combining reasoning engines with a well-ranked index of relevant content to generate quick and accurate answers.

Technology Utilization and Development: Perplexity AI's strategy relies on utilizing well-established AI models such as ChatGPT and Bard while also developing its own models to address specific aspects of the product. This allows it to create a competitive and unique search experience. Moreover, the company orchestrates the various components in its backend so that they work together efficiently and reliably.
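As a way to picture that "ranked index + reasoning engine" pattern, here is a minimal retrieval-plus-LLM sketch; the `search_index` stub, the model name, and the prompt wording are placeholders of mine, not Perplexity's actual stack.

```python
# Minimal "ranked index + LLM with citations" sketch (placeholders, not Perplexity's stack).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def search_index(query: str) -> list[dict]:
    """Stand-in for a well-ranked search index; would normally hit a real retrieval backend."""
    return [
        {"id": 1, "url": "https://example.com/a", "snippet": "...relevant passage A..."},
        {"id": 2, "url": "https://example.com/b", "snippet": "...relevant passage B..."},
    ]

def answer_with_citations(query: str) -> str:
    docs = search_index(query)
    sources = "\n".join(f"[{d['id']}] {d['url']}\n{d['snippet']}" for d in docs)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using only the numbered sources and cite them like [1]. "
                        "If the sources do not contain the answer, say so."},
            {"role": "user", "content": f"Sources:\n{sources}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

print(answer_with_citations("What does Perplexity AI do?"))
```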

Business Model and Advertising: The company considers advertising within a chat interface, which could provide relevant and targeted ads based on user profiles and queries, as a promising potential business model. The need for transparency and ethical advertising practices is emphasized.

AI Integration: The future vision for Perplexity AI involves the seamless integration of language models into everyday devices, which will enable natural conversations and immediate responses. The speaker acknowledges the existing limitations but expresses confidence in the continual advancements of the technology.

Data Quality and Training: The quality of training data is highlighted as a key factor in achieving higher levels of reasoning and intelligence in AI models. This is seen as a factor contributing to the lead of OpenAI in the AI market.

Open-source vs. Closed Models: The speaker discusses the implications of open-source models versus closed models like Google's and OpenAI's, noting that progress in the field depends on algorithmic efficiencies and talented researchers. The dynamics of this will be influenced by whether organizations continue to publish their techniques or opt to stay closed.

Lessons for AI Startup Founders:

Differentiation is Key: In a competitive field, providing a unique value proposition is crucial. This might involve creating more accurate or trustworthy results, or delivering them in a more efficient manner.

Leverage and Develop Technology: While it's beneficial to leverage established AI models, developing your own models to address specific aspects of your product can create a competitive edge.

Backend Efficiency: The success of your startup relies not only on the end product but also on how well the backend processes and components are orchestrated.

Ethical Business Practices: In implementing advertising or other monetization methods, maintaining transparency and ethical practices is essential to avoid the risk of alienating users.

Quality of Training Data: As an AI startup, the quality of your training data is paramount. Efforts should be made to curate high-quality data to achieve superior models.

Open Source vs. Closed Debate: The choice between operating with open-source models or closed ones can have implications on your company's future. Founders should consider the pros and cons of each, taking into account factors such as collaboration, progress speed, and knowledge sharing.
Japan uses ChatGPT quite a lot.
Based on the available data, the usage of ChatGPT in the selected countries is as follows:

1. United States: The United States accounts for 15.32% of the total audience using ChatGPT.
2. India: India accounts for 6.32% of the total audience using ChatGPT.
3. Japan: Japan accounts for 3.97% of the total audience using ChatGPT.
4. Canada: Canada accounts for 2.74% of the total audience using ChatGPT.
5. Other countries: The rest of the world accounts for 68.36% of visits to ChatGPT's website.
Upcoming AI event in SF
๐Ÿ’โ€โ™‚️ How to Play Long Term Games:

Systems > Goals
Discipline > Motivation
Trust > Distrust
Principles > Tactics
Writing > Reading
Vulnerability > Confidence
North Stars > Low Hanging Fruit
Trends > News
Habits > Sprints
Questions > Answers
Problems > Solutions
People > Projects
AI๊ฐ€ ๊ฒŒ์ž„์˜ ์ œ์ž‘๋ถ€ํ„ฐ ๊ฒŒ์ž„์˜ UI/UX๊นŒ์ง€ ๋งŽ์€ ๋ถ€๋ถ„์„ ๋ณ€ํ™”์‹œ์ผœ๋†“์„ ๊ฑฐ๋ผ๊ณ  ์ƒ๊ฐํ•ฉ๋‹ˆ๋‹ค.

์ง€๋‚œ ๋ช‡๋…„๊ฐ„ AI ๋ชจ๋ธ์€ ์—„์ฒญ๋‚œ ์†๋„๋กœ ๋ณ€ํ™”ํ•ด์™”๋Š”๋ฐ์š”. ๊ฐ€์žฅ ์ตœ์‹ ์˜ AI ๋ชจ๋ธ์˜ ๋ฐœ์ „ ์—ญ์‚ฌ์™€ ์•ž์œผ๋กœ ์˜ˆ์ƒ๋˜๋Š” AI ์—ฐ๊ตฌ์ฃผ์ œ๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ๋ฏธ๋ž˜์˜ ๊ฒŒ์ž„์„ ์ƒ์ƒํ•ด๋ด…๋‹ˆ๋‹ค.

Stable Diffusion ๋ชจ๋ธ์ด ๋น ๋ฅด๊ฒŒ ํ˜์‹ ํ•˜๋ฉด์„œ, ๊ฒŒ์ž„ ์•„ํŠธ์™€ ๊ด€๋ จํ•ด์„œ ๋‹ค์–‘ํ•œ ์‹คํ—˜์ด ์ด๋ฃจ์–ด์ง€๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฒŒ์ž„ ์•„ํŠธ๋ฅผ ๊ธฐํšํ•˜๊ณ  ๊ฐœ๋ฐœํ•˜๋Š” ๊ณผ์ •์—์„œ AI๋ฅผ ์ž˜ ์‚ฌ์šฉํ•œ ํ”„๋กœ์„ธ์Šค๋Š” ๋ญ˜๊นŒ์š”?

์ด ๋‘๊ฐ€์ง€ ์งˆ๋ฌธ์— ๋Œ€ํ•ด์„œ ๊ถ๊ธˆ์ฆ์ด ์ƒ๊ธฐ์…จ๋‹ค๋ฉด ์•„๋ž˜ ๊ตฌ๊ธ€ํผ์„ ์ž‘์„ฑํ•ด์ฃผ์„ธ์š” ๐Ÿ™‚

https://forms.gle/RFJjwqELL9juekP66
What era do we live in?

A wide range of AI tasks that used to take 5 years and a research team to accomplish in 2013, now just require API docs and a spare afternoon in 2023.

Not a single PhD in sight. When it comes to shipping AI products, you want engineers, not researchers.

Microsoft, Google, Meta, and the large Foundation Model labs have cornered scarce research talent to essentially deliver "AI Research as a Service" APIs. You can't hire them, but you can rent them, provided you have software engineers on the other end who know how to work with them. There are ~5000 LLM researchers in the world, but ~50m software engineers. Supply constraints dictate that an "in-between" class of AI Engineers will rise to meet demand.

Fire, ready, aim. Instead of requiring data scientists/ML engineers to do a laborious data collection exercise before training a single domain-specific model that is then put into production, a product manager/software engineer can prompt an LLM and build/validate a product idea before getting specific data to finetune.

Let's say there are 100-1000x more of the latter than the former, and the "fire, ready, aim" workflow of prompted LLM prototypes lets you move 10-100x faster than traditional ML. So AI Engineers will be able to validate AI products say 1,000-10,000x cheaper. It's Waterfall vs Agile, all over again. AI is Agile.
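To make the "fire, ready, aim" workflow above concrete: a product idea like a support-ticket classifier can be prototyped with a single prompted LLM call before anyone collects a labeled dataset for fine-tuning. A minimal sketch; the model name and label set are my placeholders.

```python
# "Fire, ready, aim": prototype a domain-specific classifier with a prompt alone,
# and only collect data for fine-tuning once the product idea is validated.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = ["billing", "bug report", "feature request", "other"]  # illustrative label set

def classify_ticket(ticket: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Classify the support ticket into exactly one of: "
                        f"{', '.join(LABELS)}. Reply with the label only."},
            {"role": "user", "content": ticket},
        ],
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in LABELS else "other"

print(classify_ticket("I was charged twice for my subscription this month."))
# If users find this valuable, *then* gather real tickets and labels to fine-tune a cheaper model.
```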
๐Ÿ†• post: You Are Not Too Old
(To Pivot Into AI)

https://twitter.com/swyx/status/1641849911326150661
์ƒˆ๋กœ์šด ๊ฒƒ์ด ๋“ฑ์žฅํ•˜๋ฉด ๊ทธ ๋ˆ„๊ตฌ๋„ ์ „๋ฌธ๊ฐ€๊ฐ€ ๋  ์ˆ˜ ์—†๋Š” ์‹œ๊ธฐ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ์ € ๊ด€์‹ฌ ์žˆ๋Š” ์‚ฌ๋žŒ๋“ค๋งŒ ๊ด€์‹ฌ์„ ๊ฐ–๊ณ  ๊ฐ€์ง€๊ณ  ๋†€๋ฉฐ ์„œ๋กœ ์ด์•ผ๊ธฐํ•  ๋ฟ์ž…๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๊ฒฐ๊ตญ์—๋Š” ๊ทธ ์ผ์ด ์„ฑ์ˆ™ํ•ด์ง€๊ณ  ๊ทธ ์ฐฝ์ด ๋‹ซํž™๋‹ˆ๋‹ค. ์ง„์ž… ์žฅ๋ฒฝ์ด ํ›จ์”ฌ ๋†’์•„์ง„ ํ›„์—๋Š”์š”.

๋‹น์‹ ์€ AI๋กœ ์ „ํ™˜ํ•˜๊ธฐ ์œ„ํ•ด ๋„ˆ๋ฌด ๋Š™์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค.

https://www.latent.space/p/not-old
AI x Design: https://www.figma.com/blog/ai-the-next-chapter-in-design/

Do you happen to know anyone pursuing a design career who has both the skills and the interest? haha
Even with just five or so people, I think we could have a lot of fun conversations!
I found this GitHub repo to be the best-organized collection of AI-related newsletters and podcasts. Eureka!!! https://github.com/swyxio/ai-notes/blob/main/Resources/Good%20AI%20Podcasts%20and%20Newsletters.md
A fun GitHub repo I found today. There's so much interesting stuff in it ๐Ÿคฃ

The author runs an AI blog and podcast: https://latent.space/

AI note: https://github.com/swyxio/ai-notes/tree/main
- Use cases
- Reading lists for beginners/intermediates/experts
- Communities
- People
- Reality & Demotivations
- Legal, Ethics, and Privacy
- Alignment, Safety

Good AI Podcasts and Newsletters: https://github.com/swyxio/ai-notes/blob/main/Resources/Good%20AI%20Podcasts%20and%20Newsletters.md