Continuous Learning_Startup & Investment
https://youtu.be/5cQXjboJwg0
Startups are now implementing AI rather than just talking about it
The open architecture of the web needs to be re-architected in the age of AI
The top of the funnel in web search is up for grabs and all of those dollars will be redistributed
Google may have to fight off cannibalization of core search to make its claim for the best AI
Building a conversational UI is the new way to interact with customers
https://share.snipd.com/snip/92c992f6-9826-4766-be31-209024910225
There's another way to think about this, which is the reverse of the client-server model. Today you are the client, a node on the network communicating with the center of the network, the server, which is the service provider where all the data sits, and you're getting data in and out of their product to suit your particular objectives.
But the future may be that as more data aggregates and accrues about you, you end up becoming a server, and then you have options.
Imagine if every individual had their own IP address, and behind that IP address sat your ability to control all the information and output from your interactions across the different networks of service providers over the years.
https://share.snipd.com/snip/6b55619f-14ff-469e-96d7-150fbca0ceb9
Snipd
The $20 Trillion Question That Has Defined the Internet for Our Careers | 3min snip from All-In with Chamath, Jason, Sacks & Friedberg
3min snip from E133: Market melt-up, IPO update, AI startups overheat, Reddit revolts & more with Brad Gerstner | All-In with Chamath, Jason, Sacks & Friedberg
Reddit API Pricing Controversy: A Deep-Dive Analysis (feat. ChatGPT & Perplexity)
Keeping up with and making sense of the news in a fast-changing digital world is often a daunting task. Reddit's recently controversial decision to change its API pricing is a case in point. Today I'd like to take a close look at this story by using AI tools such as ChatGPT to analyze the news and gather the key insights. First, the problem itself: to check the latest facts, in real time…
Look, you're seeing this upfront now with Reddit. We're in this world where the people that create the content are actually in control of the data.
And if you try to modify an API without paying the people that created it or giving them control, they'll revolt.
Dance is twice as effective for the brain as reading: the secret to a brain that keeps getting smarter with age.
1. Interestingly, a lot of people liked the dance video I posted on Instagram. Surprisingly, more of my acquaintances talk about the dancing than about my writing.
2. Looking at my most-liked posts:
on Facebook, about 3,000 likes was my best;
on LinkedIn, about 3,000 was also my best.
Yet one dance video I posted on Instagram has 5,000 likes, even though I only started on Instagram recently and my other posts don't get many likes at all. Someone joked that writing a book about that would sell better for dancers than writing yet another dry leadership book 😄
3. Reading books about the brain, I learned that exercise has a very positive effect not just on the body but on the mind. So far, common knowledge. But according to a paper in the New England Journal of Medicine, regular dancing cuts the risk of developing dementia by 76 percent, twice the effect you get from reading.
5. It used to be thought that as we age the brain's connections break down and no new ones form, so growing duller with age was considered unavoidable. Recent research, however, has shown that the brain can form new connections even as we age; that is, brain function can keep developing, and we can keep getting smarter.
6. So how do you build new connections in the brain? Oxford researchers had people train at juggling for six months. Regardless of age, the brain regions involved in learning developed in all participants.
7. In particular, new connections form when you learn something new rather than practice what you are already good at.
8. So take on something new. A foreign language, the piano, dance, a sport: you need to learn something new. Reading new books and traveling to new places are good ways too. People who live by endlessly reheating what they learned long ago grow duller with age, but people who are always learning something new can stay young.
9. And here is one more piece of bonus good news:
the Oxford juggling study found no relationship between skill and brain development. In other words, your brain develops even if you are bad at it.
10. So whether it's a foreign language, piano, dance, jiu-jitsu, or yoga, there is no need to beat yourself up or give up because you are bad at something new, and no need to despair that your body can't keep up. Your brain grows all the same.
11. I do love learning. I don't juggle many things at once, but I always have at least one new challenge going. Dance is something I took up a few months ago. I actually hate exercise, but luckily I have plenty of rhythm, so dance suits me. I take a lesson about once a week and practice occasionally. Rather than trying to get good, I'm trying to keep at it; I'll be doing it for a long time anyway. After this, I plan to learn the piano in earnest. There is a pianist I admire (more than the piano playing, the passionate person himself is impressive). I plan to document the learning process on YouTube and Instagram.
12. I hear more people want to take up dancing or CrossFit after seeing my video. Don't overdo it; once a week like me is fine. What matters now is not being good; it's enjoying it and keeping at it.
13. This goes especially for readers over 40, and all the more if you are over 50: learn something new. That is the secret to a long, healthy, and wise life.
Unifying LLMs & Knowledge Graphs
1) Incorporate KGs during LLM pre-training/inference, enhancing LLM understanding
2) Leverage LLMs for different KG tasks (embedding, completion, construction)
3) LLMs <> KGs bidirectional reasoning (data vs knowledge)
arxiv.org/abs/2306.08302
https://twitter.com/johnjnay/status/1670051081722769408?s=46&t=h5Byg6Wosg8MJb4pbPSDow
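The paper is a survey, but direction (1) is easy to picture concretely: retrieve relevant triples from a KG at inference time and serialize them into the prompt, so the LLM answers over explicit facts. A minimal Python sketch, assuming a toy in-memory graph and naive token-overlap retrieval (stand-ins for a real triple store and entity linker):
```python
from dataclasses import dataclass

@dataclass
class Triple:
    head: str
    relation: str
    tail: str

# Toy knowledge graph: a list of (head, relation, tail) facts.
KG = [
    Triple("Falcon-40B", "developed_by", "Technology Innovation Institute"),
    Triple("Falcon-40B", "parameter_count", "40 billion"),
    Triple("Technology Innovation Institute", "located_in", "Abu Dhabi"),
]

def retrieve_triples(question: str, kg: list, k: int = 3) -> list:
    """Naive retrieval: score each triple by token overlap with the question."""
    q_tokens = set(question.lower().split())
    scored = sorted(
        kg,
        key=lambda t: len(q_tokens & set(f"{t.head} {t.relation} {t.tail}".lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, kg: list) -> str:
    """Serialize the retrieved facts into the prompt so the LLM can ground its answer."""
    facts = "\n".join(f"- {t.head} {t.relation.replace('_', ' ')} {t.tail}"
                      for t in retrieve_triples(question, kg))
    return f"Known facts:\n{facts}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("Who developed Falcon-40B?", KG))
# The resulting prompt would then be sent to any LLM completion API.
```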
Snowflake's legendary former CEO wrote a book, The Datapreneurs (https://datapreneurs.com/).
Falcon 40B is a large-scale artificial intelligence model developed by the Technology Innovation Institute (TII) in Abu Dhabi, United Arab Emirates. It is a foundational large language model (LLM) with 40 billion parameters, trained on one trillion tokens. Falcon 40B is the world's top-ranked open-source AI model on the Hugging Face leaderboard for large language models. The model is available open source for research and commercial use, making it accessible to researchers, developers, and commercial users.
The implications of Falcon 40B for large language models are significant. It matches the performance of other high-performing LLMs and is cost-effective. The model is English-centric but also covers German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish. Falcon 40B's open-source nature and royalty-free deployment can give public- and private-sector entities efficiencies such as faster project execution and reduced costs.
For LLM startups, Falcon 40B offers an open-source alternative to proprietary models like OpenAI's GPT-3. The model's creator, TII, is offering the most exceptional project ideas access to training compute as a form of investment. This lets developers tackle more complex and resource-intensive use cases with greater efficiency, productivity, and performance, driving innovation and expanding the possibilities for LLM startups.
For big tech companies, Falcon 40B presents both opportunities and challenges. On one hand, its open-source nature can foster collaboration and innovation, letting big tech companies leverage Falcon 40B's capabilities for various applications. On the other hand, its open availability may increase competition, as more startups and developers gain access to advanced LLM capabilities, potentially disrupting the market dominance of big tech's proprietary models.
Overall, Falcon 40B represents a significant milestone in the AI and LLM landscape, promoting open-source development, fostering innovation, and offering new opportunities for startups and big tech companies alike.
https://twitter.com/TIIuae/status/1663911042559234051
Twitter
UAE's Falcon 40B, the world's top ranked open-source AI model from the Technology Innovation Institute (TII) has waived royalties on its use for commercial and research purposes.
#TII #LLM #FalconLLM #Tech #Innovation #AI #AbuDhabi #UAE
Comparisons with other models.
When comparing Falcon 40B to other large language models like GPT-3, ChatGPT, GPT-4, and LLaMA, Falcon 40B demonstrates impressive performance and capabilities. It outperforms other open-source models such as LLaMA, StableLM, RedPajama, and MPT. Despite its power, Falcon 40B used only 75% of GPT-3's training compute, 40% of Chinchilla's, and 80% of PaLM-62B's. Falcon 40B is smaller than LLaMA (65 billion parameters) but performs better on the OpenLLM leaderboard. The model's architecture is optimized for inference, with FlashAttention and multiquery attention. It is available open source for research and commercial use, making it accessible to researchers, developers, and commercial users.
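For anyone who wants to try the model, a minimal loading sketch with Hugging Face transformers, using the published tiiuae/falcon-40b checkpoint (assume roughly 80-90 GB of GPU memory for the bf16 weights; the smaller tiiuae/falcon-7b fits on a single large consumer GPU):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "tiiuae/falcon-40b"  # checkpoint published by TII on the Hugging Face Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half-precision weights still need tens of GB of GPU memory
    trust_remote_code=True,       # Falcon shipped with custom modeling code at release
    device_map="auto",            # shard layers across the available GPUs
)

inputs = tokenizer("The Falcon has landed:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_k=10)
print(tokenizer.decode(outputs[0]))
```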
About FlashAttention
FlashAttention is a technique that speeds up the attention mechanism in the model, while multiquery attention shares one key/value head across all query heads, shrinking the KV cache and making generation faster and lighter on memory.
FlashAttention is an algorithm that reorders the attention computation and leverages classical techniques, such as tiling and recomputation, to significantly speed up the attention mechanism and reduce memory usage from quadratic to linear in sequence length. It is designed to address the compute and memory bottleneck in the attention layer of transformer models, particularly when dealing with long sequences.
Traditional attention mechanisms can be computationally expensive, with memory usage and runtime growing quadratically in sequence length. FlashAttention addresses this by making the attention algorithm IO-aware, accounting for reads and writes between levels of GPU memory. It uses tiling to reduce the number of memory reads/writes between GPU high-bandwidth memory (HBM) and on-chip SRAM, resulting in fewer HBM accesses than standard attention and good performance across a range of SRAM sizes.
Compared to traditional attention, FlashAttention offers faster training and support for longer sequences without sacrificing accuracy. It has been adopted by many organizations and research labs to speed up their training and inference.
AI researchers can draw on FlashAttention's IO-aware design and tiling technique as inspiration for new ways to optimize attention in transformer models. AI startup founders can likewise benefit from the improved efficiency, tackling more complex and resource-intensive use cases with greater productivity.
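A minimal NumPy sketch of the tiling idea (not the fused CUDA kernel, and queries are kept whole for brevity): each key/value tile updates a running, online softmax, so the full n-by-n score matrix is never materialized and extra memory stays linear in sequence length.
```python
import numpy as np

def tiled_attention(Q, K, V, tile=128):
    """Single-head attention computed over key/value tiles with an online
    softmax. This is the algorithmic core of FlashAttention; the real kernel
    also tiles the queries and fuses everything into one GPU kernel."""
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros_like(Q)
    row_max = np.full(n, -np.inf)   # running max of scores per query row
    row_sum = np.zeros(n)           # running softmax denominator

    for start in range(0, K.shape[0], tile):
        Kt, Vt = K[start:start + tile], V[start:start + tile]
        S = (Q @ Kt.T) * scale      # scores for this tile only
        new_max = np.maximum(row_max, S.max(axis=1))
        # Rescale previously accumulated numerator/denominator to the new max.
        correction = np.exp(row_max - new_max)
        P = np.exp(S - new_max[:, None])
        row_sum = row_sum * correction + P.sum(axis=1)
        out = out * correction[:, None] + P @ Vt
        row_max = new_max

    return out / row_sum[:, None]

# Check against the naive quadratic implementation.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((512, 64)) for _ in range(3))
S = (Q @ K.T) / np.sqrt(64)
ref = np.exp(S - S.max(axis=1, keepdims=True))
ref = (ref / ref.sum(axis=1, keepdims=True)) @ V
assert np.allclose(tiled_attention(Q, K, V), ref)
```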
I recently bought the paid version of Perplexity (https://www.perplexity.ai/).
Thinking about why I chose the paid tier got me thinking about how a startup can attack the search problem and survive among giant competitors.
Using Perplexity, which grafts AI onto solid engineering to deliver a far better search experience, I keep coming back to the thought that if you use AI to build a service that is 10-100x better for a specific customer, you can build quite a good product and company.
My reasons for subscribing to Perplexity's paid tier are as follows:
1. It recently integrated GPT-4, letting GPT-4 draw on up-to-date information. It is faster than ChatGPT with Bing and does a better job of attaching sources for the underlying material.
2. If you note your interests in your profile, it serves content tailored to them.
3. It asks for user feedback to improve the accuracy of its information, and it does this without disrupting the experience; if anything, it builds trust in Perplexity.
4. It suggests follow-up topics worth exploring.
Because of these strengths, I decided on the extra subscription even though I already pay for ChatGPT. Compared with Google Bard, Perplexity improves accuracy by citing sources and improves the user experience by keeping search history; and because it runs on GPT-4, overall answer quality is better than Bard's.
How can Perplexity build a moat?
1. For Perplexity, still a 16-person startup, I think finding features that leave a strong impression on customers and raise retention matters more than agonizing over a moat. Find those features, and I believe it can stay competitive long-term even against ChatGPT and Google Search.
2. In particular, as it accumulates customers' search data, it should be able to deliver better results than simply wrapping ChatGPT. And as more models at GPT-4's level appear, its search UX will let it put a variety of AI models to work.
3. If it can learn which features customers actually value, it can try building its own models optimized for those features.
What if ChatGPT or Google quickly copies Perplexity's key features?
ChatGPT performs a genuinely wide range of tasks (drafting email, research, summarization), but Perplexity is a service sharply specialized in information retrieval and fact-checking. I don't know exactly which customers and problems the Perplexity team is focused on, but while ChatGPT and Google Bard serve an enormously broad audience, wouldn't focusing on specific customers and growing from there, as most startups have done, be the way to survive among giants rich in talent, money, and infrastructure?
This journey won't be easy, but I find it meaningful that even search, long considered an area too hard for startups to crack, can be split open by a startup pairing new technology with a good product, and as a user I'm rooting for them to keep up the fight.
Perplexity AI
Perplexity is a free AI-powered answer engine that provides accurate, trusted, and real-time answers to any question.
About Hard Landing
A hard landing is a sharp economic slowdown or downturn that follows a period of rapid growth. It often happens when the government acts to rein in inflation: the economy shifts to slow growth or recession, or slides into a downturn. A hard landing contrasts with a soft landing, in which growth slows enough to control inflation but stays strong enough to avoid recession. It can be triggered by several forces, including aggressive central-bank tightening, persistent inflation, and a low unemployment rate, and can also stem from rising debt levels or a shortage of buyers for government bonds. The dangers of a hard landing include a period of stagnation or outright recession, rising unemployment, falling corporate profits, and growing insolvencies. To prepare for these risks, investors can diversify their portfolios, hold quality assets, focus on long-term investment goals, rebalance regularly, and allocate to stable assets and select countries.
The Federal Reserve's rate-hiking cycles have often been followed by recessions and hard landings in the US, with soft landings the exception [2] (https://en.wikipedia.org/wiki/Hard_landing_(economics)). Deutsche Bank's research team believes the Fed's aggressive hiking cycle is reaching its final stage and that a recession could arrive around October [8] (https://fortune.com/2023/06/15/economy-recession-federal-reserve-powell-deutsche-bank-hard-landing/). Stanley Druckenmiller, chairman and CEO of Duquesne Family Office, expects the Fed's rate hikes to push the US economy into recession [5] (https://www.reuters.com/markets/us/investor-druckenmiller-expects-hard-landing-us-economy-bullish-ai-2023-06-07/). Beyond this year's banking turmoil, he believes the hikes have yet to hit parts of the economy and that more "shoes" will drop [5]. Ray Dalio, founder of Bridgewater Associates, warns that the US faces a big-cycle debt crisis and that economic conditions will deteriorate [4] (https://fortune.com/2023/06/08/ray-dalio-bridgewater-associates-us-economy-debt-crisis-recession/) [6] (https://finance.yahoo.com/news/ray-dalio-says-u-facing-145648699.html). With the US Treasury expected to issue more than $1 trillion of T-bills by the end of 2023, he voices concern that there may not be enough buyers for that government debt [4]. Dalio believes the US is at the start of a classic late-stage big-cycle debt crisis, producing too much debt with too few buyers [4]. Other insights: the International Monetary Fund (IMF) warns that the risk of a hard landing for the global economy has "risen sharply" amid stubbornly high inflation, rising rates, and the uncertainty created by recent US bank failures [10] (https://fortune.com/2023/04/11/recession-outlook-imf-slashes-global-growth-hard-landing/). And Lisa Shalett, chief investment officer of Morgan Stanley Wealth Management, warns that hard-landing risk is growing as consumer inflation heats up again [11] (https://fortune.com/2023/02/21/stock-market-outlook-economic-forecast-morgan-stanley-wealth-management-goldilocks-dead-economic-hard-landing-risk-growing/).
Risks of a hard landing for investors: during a hard landing, investors face several risks.
Falling asset values: Prices of assets such as stocks and real estate can drop sharply, shrinking portfolio values and producing potential losses [1](https://60secondmarketer.com/2021/12/28/how-will-recession-affect-real-estate-investors-the-must-know-facts/) [2](https://www.forbes.com/advisor/investing/how-to-invest-during-a-recession/).
Reduced liquidity: Market liquidity can dry up, making it harder to buy or sell assets at the prices you want [1](https://60secondmarketer.com/2021/12/28/how-will-recession-affect-real-estate-investors-the-must-know-facts/).
Increased volatility: Financial markets can turn unstable, with larger price swings and greater uncertainty [3](https://www.investopedia.com/terms/h/hardlanding.asp).
Bankruptcies and defaults: Companies under financial stress can go bankrupt or default, hurting investors who hold their stocks or bonds [4](https://seekingalpha.com/news/3973813-goldman-sachs-picks-top-stocks-in-case-of-a-hard-landing).
Opportunities during a hard landing: despite the risks, a hard landing can also hand investors opportunities.
Buying undervalued assets: Falling prices can open chances to buy quality assets at a discount [5](https://www.investopedia.com/ask/answers/042115/whats-best-investing-strategy-have-during-recession.asp).
Defensive stocks: Defensive sectors such as consumer staples, utilities, and healthcare can provide stability during downturns [2](https://www.forbes.com/advisor/investing/how-to-invest-during-a-recession/).
Dividend stocks: Stocks that pay steady dividends can offer income and potential capital appreciation in difficult markets [2](https://www.forbes.com/advisor/investing/how-to-invest-during-a-recession/).
Bonds and uncorrelated assets: Bonds and other assets uncorrelated with equities can add diversification and reduce portfolio risk [2](https://www.forbes.com/advisor/investing/how-to-invest-during-a-recession/).
Preparing for a hard landing: to get ready, investors can take the following steps.
Diversify the portfolio: Spreading investments across asset classes, sectors, and regions helps mitigate risk and capture potential opportunities [5](https://www.investopedia.com/ask/answers/042115/whats-best-investing-strategy-have-during-recession.asp).
Focus on quality assets: Invest in well-managed companies with low debt, strong cash flow, and solid balance sheets [5](https://www.investopedia.com/ask/answers/042115/whats-best-investing-strategy-have-during-recession.asp).
Keep a long-term perspective: Stay focused on long-term goals and avoid impulsive decisions driven by short-term market swings [6](https://www.pwc.com/us/en/industries/financial-services/asset-wealth-management/real-estate/emerging-trends-in-real-estate.html).
Rebalance the portfolio: Review and adjust regularly to keep the intended asset allocation and risk profile [7](https://www.schwab.com/learn/story/how-to-prepare-landing).
Promising assets and countries: during a hard landing, investors can consider the following.
Japanese real estate: Japan's property market has shown resilience through downturns and can serve as a relatively safe haven [1](https://60secondmarketer.com/2021/12/28/how-will-recession-affect-real-estate-investors-the-must-know-facts/).
Defensive stocks: As noted above, defensive sectors such as consumer staples, utilities, and healthcare can offer stability [2](https://www.forbes.com/advisor/investing/how-to-invest-during-a-recession/).
Dividend stocks: Companies that pay consistent dividends can provide income and potential capital appreciation in tough markets [2](https://www.forbes.com/advisor/investing/how-to-invest-during-a-recession/).
Bonds and uncorrelated assets: Bonds and other uncorrelated assets can add diversification and reduce portfolio risk [2](https://www.forbes.com/advisor/investing/how-to-invest-during-a-recession/).
In conclusion, a hard landing presents investors with both risks and opportunities. Through diversification, a focus on quality assets, a long-term perspective, and regular rebalancing, investors can weather the challenges of a hard landing and position themselves for the opportunities it creates.
Forbes Advisor
How To Invest During A Recession
With inflation up, the stock market down and gross domestic product (GDP) in the red, experts are debating whether the U.S. has entered a recession. While the jury is still out on that question, you may wonder what you can do now to best position your investments…
I think that to become rich starting from nothing you have to, as the author SayNo puts it, live "thicker than blood."
From a Facebook post by CEO Choi:
So you say you want to be rich?
A job with decent work-life balance and a high salary.
An overseas trip once or twice a year.
A marriage partner from the top 10%,
and, nothing extravagant, at least one luxury handbag.
A domestic car feels a bit beneath you; better to buy an import.
Comfortable, nice, reasonable enough...
I want all of that too, so why would anyone else be different?
Actual recent statistics and trends in Korea:
- Lowest marriage rate on record
- Highest overseas travel relative to population
- Highest luxury-goods spending relative to income
- Highest imported-car spending relative to income
Social media and the media egg people on as if competing with one another, and the culture of "I have to keep up with everyone else" seems to be dragging the country and society toward the edge.
The rich are a tiny minority, so
shouldn't you do things differently from the crowd?
The few who keep working at it while being pointed at and called fools:
working hard to grow,
skipping the overseas trips, the luxury goods, the imported-car installments,
studying foreign languages and investing instead,
starting married life in a cheap rental
and building it up little by little.
I have watched, many times, how people like that eventually open up a wealth gap over those who never do.
Come to think of it, I have never seen anyone build real wealth who belittles people working steadily toward one goal while living by "whatever everyone else does, I'll try too."
Knowing a wave is coming, riding the wave, and riding it well again and again are all different things.
Something to revisit again and again while watching the AI wave.
From CEO Choi's Facebook post:
Morpheus's famous line from The Matrix:
"There is a difference between knowing the path and walking the path."
For startup people too, what matters is not knowing more but walking more, yet everyone seems to focus only on knowing more.
Ask the people who keep preparing so they can know more, and they answer that they are learning more in order to walk more; but I have rarely seen them actually walk.
If anything, they come to know too much, get scared first, and simply don't start...
Just walk. Today, and tomorrow too...
### Effective or Experimental LLM Lightweighting Approaches
Lightweighting approaches for Large Language Models (LLMs) aim to reduce the memory footprint and computational requirements of these models, making them more efficient and easier to deploy. Some popular lightweighting methods include quantization, pruning, and distillation**[1](https://medium.com/intel-analytics-software/effective-post-training-quantization-for-large-language-models-with-enhanced-smoothquant-approach-93e9d104fb98)**.
### Quantization
Quantization is a compression operation that reduces the memory footprint of a model and improves inference performance. An enhanced SmoothQuant approach has been proposed for post-training quantization of LLMs, which has been integrated into Intel Neural Compressor, an open-source Python library of popular model compression techniques**[1](https://medium.com/intel-analytics-software/effective-post-training-quantization-for-large-language-models-with-enhanced-smoothquant-approach-93e9d104fb98)**.
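To make the mechanism concrete, here is a toy sketch of symmetric per-row int8 weight quantization. The 4x memory reduction is the real effect; production schemes such as the enhanced SmoothQuant add calibration and activation-outlier handling on top, and the function names here are illustrative, not Intel Neural Compressor's API.
```python
import numpy as np

def quantize_int8(W):
    """Symmetric per-row (per-output-channel) int8 quantization:
    W is approximated by scale[:, None] * W_int8."""
    scale = np.abs(W).max(axis=1) / 127.0            # one scale per row
    W_int8 = np.round(W / scale[:, None]).astype(np.int8)
    return W_int8, scale

def dequantize(W_int8, scale):
    return W_int8.astype(np.float32) * scale[:, None]

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512)).astype(np.float32)
W_q, s = quantize_int8(W)
err = np.abs(W - dequantize(W_q, s)).max()
print(f"4x smaller weights, max abs rounding error: {err:.4f}")
```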
### Pruning
Pruning is a method to compress a model by removing some of its weights, which can lead to a significant reduction in model size. SparseGPT is an algorithm that allows reducing a model size by more than 50% while maintaining its performance**[2](https://www.machinelearningatscale.com/pruning-llm-sparsegpt/)**.
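A hedged sketch of the simplest variant, unstructured magnitude pruning. SparseGPT itself is far more sophisticated (it solves a layer-wise reconstruction problem), but the output, a weight matrix with most entries zeroed, has the same form.
```python
import numpy as np

def magnitude_prune(W, sparsity=0.5):
    """Zero out the smallest-magnitude weights until the requested
    fraction of entries is zero; return the pruned matrix and the mask."""
    k = int(W.size * sparsity)
    threshold = np.partition(np.abs(W).ravel(), k)[k]
    mask = np.abs(W) >= threshold
    return W * mask, mask

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))
W_sparse, mask = magnitude_prune(W, sparsity=0.5)
print(f"fraction of weights kept: {mask.mean():.2f}")
```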
### Distillation
Distillation is a technique that involves training a smaller model (student model) to mimic the behavior of a larger model (teacher model). This approach creates compute-friendly LLMs suitable for use in resource-constrained environments, such as real-time language translation, automated speech recognition, and chatbots on edge devices like smartphones and tablets**[3](https://jaxon.ai/distillation-making-large-language-models-compute-friendly/)**.
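The standard loss behind this, sketched in PyTorch: the classic Hinton-style recipe blending hard-label cross-entropy with a temperature-softened KL term. Real LLM distillation pipelines typically add sequence-level and intermediate-layer losses on top.
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend cross-entropy on hard labels with a KL term that pulls the
    student's temperature-softened distribution toward the teacher's."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Dummy batch: vocabulary of 100, batch of 8.
student_logits = torch.randn(8, 100, requires_grad=True)
teacher_logits = torch.randn(8, 100)
labels = torch.randint(0, 100, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```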
### Quantization
Pros:
- Reduces memory footprint and accelerates inference**[1](https://arxiv.org/pdf/2211.10438.pdf)**.
- Can be applied post-training without the need for additional training data**[1](https://arxiv.org/pdf/2211.10438.pdf)**.
Cons:
- Potential loss of accuracy during the quantization process**[1](https://arxiv.org/pdf/2211.10438.pdf)**.
- May require further optimization for different LLM architectures**[1](https://arxiv.org/pdf/2211.10438.pdf)**.
Use Cases:
- Deploying LLMs on edge devices with limited resources**[1](https://arxiv.org/pdf/2211.10438.pdf)**.
- Real-time language translation and automated speech recognition**[1](https://arxiv.org/pdf/2211.10438.pdf)**.
### Pruning
Pros:
- Can significantly reduce model size while maintaining performance**[2](https://towardsdatascience.com/model-compression-via-pruning-ac9b730a7c7b)**.
- Comes in both structured and unstructured variants**[3](https://web.stanford.edu/class/cs224n/reports/custom_116951464.pdf)**.
Cons:
- May require additional fine-tuning to achieve optimal performance**[2](https://towardsdatascience.com/model-compression-via-pruning-ac9b730a7c7b)**.
- Can be computationally expensive for large models**[2](https://towardsdatascience.com/model-compression-via-pruning-ac9b730a7c7b)**.
Use Cases:
- Deploying LLMs on resource-constrained devices**[2](https://towardsdatascience.com/model-compression-via-pruning-ac9b730a7c7b)**.
- Improving the efficiency of LLMs in various applications, such as natural language processing and computer vision tasks**[2](https://towardsdatascience.com/model-compression-via-pruning-ac9b730a7c7b)**.
### Distillation
Pros:
- Creates compute-friendly LLMs suitable for use in resource-constrained environments**[4](https://wandb.ai/byyoung3/ml-news/reports/New-Method-For-LLM-Quantization--VmlldzozOTU1NTgz)**.
- Can maintain the performance of the original model**[4](https://wandb.ai/byyoung3/ml-news/reports/New-Method-For-LLM-Quantization--VmlldzozOTU1NTgz)**.
Cons:
- May suffer from the "curse of capacity gap" when the teacher and student models have a large capacity difference**[5](https://openreview.net/forum?id=CMsuT6Cmfvs)**.
- Requires additional training data and computational resources**[4](https://wandb.ai/byyoung3/ml-news/reports/New-Method-For-LLM-Quantization--VmlldzozOTU1NTgz)**.
Use Cases:
- Real-time language translation, automated speech recognition, and chatbots on edge devices like smartphones and tablets**[4](https://wandb.ai/byyoung3/ml-news/reports/New-Method-For-LLM-Quantization--VmlldzozOTU1NTgz)**.
- Deploying LLMs in various applications, such as natural language processing and computer vision tasks**[4](https://wandb.ai/byyoung3/ml-news/reports/New-Method-For-LLM-Quantization--VmlldzozOTU1NTgz)**.
In summary, each lightweighting approach has its own set of advantages and disadvantages, making them suitable for different use cases. Quantization is ideal for deploying LLMs on edge devices with limited resources, while pruning can help improve the efficiency of LLMs in various applications. Distillation is useful for creating compute-friendly LLMs suitable for use in resource-constrained environments. Choosing the right approach depends on the specific requirements and constraints of the application.
### LLM-QAT: Data-Free Quantization Aware Training for Large Language Models
LLM-QAT is a data-free distillation method that leverages generations produced by the pre-trained model to better preserve the original model's performance while reducing its size and computational requirements**[4](https://arxiv.org/abs/2305.17888)**. This approach enables efficient quantization of LLMs without the need for additional training data.
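A heavily simplified sketch of that recipe, with tiny stand-in networks instead of real LLMs and the fake-quantization itself elided (LLM-QAT additionally quantizes weights, activations, and the KV cache during training):
```python
import torch
import torch.nn.functional as F

# Stand-ins for real models: any causal LM returning logits would do.
vocab, dim = 1000, 64
teacher = torch.nn.Sequential(torch.nn.Embedding(vocab, dim), torch.nn.Linear(dim, vocab))
student = torch.nn.Sequential(torch.nn.Embedding(vocab, dim), torch.nn.Linear(dim, vocab))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def sample_from_teacher(batch, length):
    """Data-free step: the teacher itself generates the training tokens,
    so no external corpus is needed."""
    tokens = torch.randint(0, vocab, (batch, 1))   # random start token
    with torch.no_grad():
        for _ in range(length - 1):
            logits = teacher(tokens[:, -1])
            nxt = torch.multinomial(F.softmax(logits, -1), 1)
            tokens = torch.cat([tokens, nxt], dim=1)
    return tokens

for step in range(100):
    data = sample_from_teacher(batch=16, length=8)
    with torch.no_grad():
        t_logits = teacher(data)
    s_logits = student(data)  # in LLM-QAT the student runs with fake-quantized weights
    loss = F.kl_div(F.log_softmax(s_logits, -1), F.softmax(t_logits, -1),
                    reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```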
### Limitations and Opportunities
Some limitations of LLM-QAT include the potential loss of accuracy during the quantization process and the need for further research to optimize the method for different LLM architectures. However, LLM-QAT presents opportunities for improving the efficiency of LLM deployment in various applications, such as natural language processing and computer vision tasks.
### Real-World Lightweighting Methods
In the real world, lightweighting methods are used in various industries, such as automotive and aerospace, to reduce the weight of components and improve overall performance. Some common lightweighting strategies include:
1. Material selection: Using lighter materials for each component**[5](https://3dxresearch.com/2018/06/05/lightweighting-strategies-in-an-additively-manufactured-world/)**.
2. Structural optimization: Designing components to minimize weight while maintaining strength and functionality**[5](https://3dxresearch.com/2018/06/05/lightweighting-strategies-in-an-additively-manufactured-world/)**.
3. Architected materials: Creating materials with specific microstructures to optimize their properties for lightweighting**[5](https://3dxresearch.com/2018/06/05/lightweighting-strategies-in-an-additively-manufactured-world/)**.
4. Multifunctionality: Designing components that serve multiple purposes, reducing the need for additional parts**[5](https://3dxresearch.com/2018/06/05/lightweighting-strategies-in-an-additively-manufactured-world/)**.
These lightweighting methods can be used separately or in conjunction with one another to achieve the desired weight reduction and performance improvements.
OpenReview
Lifting the Curse of Capacity Gap in Distilling Large Language Models
Large language models (LLMs) have shown compelling performance on various downstream tasks, but unfortunately require a tremendous amount of inference compute. Knowledge distillation finds a path...
The article "The Law Is Coming for AIโBut Maybe Not the Law You Think" discusses the legal challenges and implications surrounding the use of artificial intelligence (AI) technology, particularly focusing on the recent approval of the AI Act in the European Parliament**[1](https://www.theinformation.com/articles/the-law-is-coming-for-ai-but-maybe-not-the-law-you-think)**. The article highlights the case of Italy's data protection authority banning OpenAI's ChatGPT due to non-compliance with European data protection provisions**[1](https://www.theinformation.com/articles/the-law-is-coming-for-ai-but-maybe-not-the-law-you-think)**. The main points of the article are as follows:
1. AI technology raises legal questions in areas such as privacy, discrimination, and liability.
2. There is no single law governing AI, and existing laws are often unclear or outdated.
3. There is a growing movement to create new laws and regulations specifically for AI.
4. There is no consensus on what these laws and regulations should look like.
5. Some people believe AI should be regulated like any other technology, while others believe it requires special treatment.
6. The debate over how to regulate AI is likely to continue for many years to come.
As an AI researcher or AI startup founder, it is crucial to stay informed about the legal landscape surrounding AI technology. This includes understanding the potential legal issues that may arise from the development and deployment of AI systems, as well as keeping up-to-date with new laws and regulations that may impact your work or business. By being proactive and knowledgeable about the legal aspects of AI, you can better navigate potential challenges and ensure that your AI systems are developed and used responsibly and ethically.
The Information
The Law Is Coming for AI – But Maybe Not the Law You Think
While the approval of the AI Act in the European Parliament on Wednesday will no doubt go down in history as a day of reckoning for generative artificial intelligence, it was not the first. That honor belongs to March 31, when, citing a lack of compliance…
According to The Information**[1](https://www.theinformation.com/articles/a-reckoning-arrives-for-creator-economy-startups)**, funding for US creator economy startups fell 86% to $123 million, the seventh consecutive year-over-year quarterly decline. At the same time, many creator economy startups exist that give digital creators the tools, resources, and platforms they need to handle the content-production and business sides of their work more easily**[2](https://blog.hubspot.com/marketing/creator-economy-startups)**. However, not all of these businesses are good for creators, and some can in practice be quite exploitative**[3](https://techcrunch.com/2021/12/30/not-every-creator-economy-startup-is-built-for-creators/)**. There are several things AI researchers and AI startup founders can take away from this article:
- The creator economy is a growing market, with many opportunities to provide tools and resources to content creators.
- If creators entrust their business to you, it is important to think from their perspective and understand that they expect you to keep their best interests in mind**[3](https://techcrunch.com/2021/12/30/not-every-creator-economy-startup-is-built-for-creators/)**.
- More ethical, trustworthy creator economy startups**[3](https://techcrunch.com/2021/12/30/not-every-creator-economy-startup-is-built-for-creators/)** that can offer content creators support and resources are needed.
- The drop in funding for creator economy startups may signal a shift in the market, and it is important to watch this trend**[1](https://www.theinformation.com/articles/a-reckoning-arrives-for-creator-economy-startups)[4](https://www.antler.co/blog/2023-creator-economy)**.
- AI could be used to build new tools and platforms that boost content production in the creator economy**[5](https://wonnda.com/magazine/creator-economy-startups/)[6](https://influencermarketinghub.com/creator-economy-startups/)**.
The Information
A Reckoning Arrives for Creator Economy Startups
Two years ago, Dmitry Shapiro and Sean Thielen were so optimistic about the booming creator economy that they pivoted their startup to a new product: a simple tool called Koji that lets influencers more easily link to their online tip jars, merch and other…