https://datapreneurs.com/
Snowflake's legendary founding CEO, Bob Muglia, wrote a book, The Datapreneurs: https://datapreneurs.com/
Falcon 40B is a large-scale artificial intelligence model developed by the Technology Innovation Institute (TII) in Abu Dhabi, United Arab Emirates [1]. It is a foundational large language model (LLM) with 40 billion parameters, trained on one trillion tokens [1]. Falcon 40B is the world's top-ranked open-source AI model on the Hugging Face leaderboard for large language models [2]. The model is available open source for research and commercial use, making it accessible to researchers, developers, and commercial users [1].
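Since the weights are published on the Hugging Face Hub, trying the model takes only a few lines with the transformers library. A minimal sketch (assuming a machine with roughly 90 GB of GPU memory for the bfloat16 weights):

```python
# Minimal sketch of loading Falcon 40B from the Hugging Face Hub.
# device_map="auto" shards the model across available GPUs via accelerate.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "tiiuae/falcon-40b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # Falcon shipped custom modeling code at release
    device_map="auto",
)

prompt = "The three most promising uses of open-source LLMs are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```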
The implications of Falcon 40B for large language models are significant. It matches the performance of other high-performing LLMs and is cost-effective [3]. The model is English-centric, but its training data also includes German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish [4]. Falcon 40B's open-source nature and royalty-free deployment can empower public- and private-sector entities with efficiencies such as faster project execution and reduced costs [5].
For LLM startups, Falcon 40B offers an open-source alternative to proprietary models like OpenAI's GPT-3 [4]. TII, the model's creator, is offering access to training compute to the most exceptional project ideas as a form of investment [2]. This enables developers to tackle more complex and resource-intensive use cases with increased efficiency, productivity, and performance, driving innovation and expanding the possibilities for LLM startups [2].
For big tech companies, Falcon 40B presents both opportunities and challenges. On one hand, the open-source nature of the model can foster collaboration and innovation, allowing big tech companies to leverage Falcon 40B's capabilities for various applications. On the other hand, the model's open-source availability may increase competition, as more startups and developers gain access to advanced LLM capabilities, potentially disrupting the market dominance of proprietary models from big tech companies.
Overall, Falcon 40B represents a significant milestone in the AI and LLM landscape, promoting open-source development, fostering innovation, and offering new opportunities for startups and big tech companies alike [6].
https://twitter.com/TIIuae/status/1663911042559234051
Twitter
UAE's Falcon 40B, the world's top ranked open-source AI model from the Technology Innovation Institute (TII) has waived royalties on its use for commercial and research purposes.
#TII #LLM #FalconLLM #Tech #Innovation #AI #AbuDhabi #UAE
Comparisons with other models.
When comparing Falcon 40B to other large language models like GPT-3, ChatGPT, GPT-4, and LLaMA, Falcon 40B demonstrates impressive performance and capabilities. It outperforms other open-source models such as LLaMA, StableLM, RedPajama, and MPT [3]. Despite its power, Falcon 40B used only 75% of GPT-3's training compute, 40% of Chinchilla's, and 80% of PaLM-62B's [4]. Falcon 40B is smaller than LLaMA (65 billion parameters) but performs better on the OpenLLM leaderboard [5]. The model's architecture is optimized for inference, with FlashAttention and multiquery attention [5]. It is available open source for research and commercial use, making it accessible to researchers, developers, and commercial users [1].
About FlashAttention
FlashAttention is a technique that speeds up the attention mechanism in the model, while multiquery attention has all of the query heads share a single key/value head, which shrinks the inference-time KV cache and speeds up generation.
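As a rough illustration of the multiquery idea (many query heads, one shared key/value head), here is a NumPy shape sketch; Falcon's actual implementation differs in detail:

```python
# Shape-level sketch contrasting multi-head and multi-query attention.
import numpy as np

def attention(q, k, v):
    # q: (heads, seq, d); k, v: (kv_heads, seq, d), kv_heads == 1 for multiquery
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ v  # K/V broadcast across all query heads when kv_heads == 1

heads, seq, d = 8, 16, 64
q = np.random.randn(heads, seq, d)

# Multi-head: every head carries its own K and V.
out_mha = attention(q, np.random.randn(heads, seq, d), np.random.randn(heads, seq, d))

# Multi-query: one shared K and V, so the KV cache is `heads` times smaller.
out_mqa = attention(q, np.random.randn(1, seq, d), np.random.randn(1, seq, d))
print(out_mha.shape, out_mqa.shape)  # (8, 16, 64) (8, 16, 64)
```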
FlashAttention is an algorithm that reorders the attention computation and leverages classical techniques, such as tiling and recomputation, to significantly speed up the attention mechanism and reduce memory usage from quadratic to linear in sequence length [1]. It is designed to address the compute and memory bottleneck in the attention layer of transformer models, particularly when dealing with long sequences [1].
Traditional attention mechanisms can be computationally expensive, as they involve a quadratic increase in memory usage and runtime with respect to sequence length [1]. FlashAttention addresses this issue by making the attention algorithm IO-aware, accounting for reads and writes between levels of GPU memory [2]. It uses tiling to reduce the number of memory reads/writes between GPU high-bandwidth memory (HBM) and GPU on-chip SRAM [2]. This results in fewer HBM accesses than standard attention and optimizes performance for a range of SRAM sizes [2].
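The tiling trick can be seen in miniature with an "online softmax": process K/V in blocks while carrying a running max and normalizer, so the full attention matrix is never materialized. A toy single-query sketch (the real kernel does this per block of queries, fused on-chip):

```python
# Toy FlashAttention-style accumulation for a single query vector.
import numpy as np

def tiled_attention_row(q, K, V, tile=64):
    m = -np.inf                      # running max of scores seen so far
    s = 0.0                          # running softmax denominator
    acc = np.zeros(V.shape[-1])      # running weighted sum of values
    for i in range(0, K.shape[0], tile):
        k, v = K[i:i + tile], V[i:i + tile]
        scores = k @ q / np.sqrt(q.shape[0])
        m_new = max(m, scores.max())
        c = np.exp(m - m_new)        # rescale old partial sums to the new max
        w = np.exp(scores - m_new)
        s = s * c + w.sum()
        acc = acc * c + w @ v
        m = m_new
    return acc / s

rng = np.random.default_rng(0)
q, K, V = rng.normal(size=128), rng.normal(size=(1024, 128)), rng.normal(size=(1024, 64))
scores = K @ q / np.sqrt(128)
p = np.exp(scores - scores.max())
assert np.allclose(tiled_attention_row(q, K, V), (p / p.sum()) @ V)  # exact, not approximate
```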
Compared to traditional attention mechanisms, FlashAttention offers faster training and support for longer sequences without sacrificing accuracy [3]. It has been adopted by many organizations and research labs to speed up their training and inference processes [4].
AI researchers can learn from FlashAttention's IO-aware design and tiling technique, which can inspire new approaches to optimizing attention mechanisms in transformer models. AI startup founders can likewise benefit from the improved efficiency and performance FlashAttention offers, enabling them to tackle more complex and resource-intensive use cases with greater productivity.
I recently purchased the paid version of Perplexity (https://www.perplexity.ai/).
Thinking about why I chose to pay got me thinking about how a startup can attack the search problem and survive among giant competitors.
Using Perplexity, which grafts AI onto solid engineering to deliver a much better search experience, left me with the thought that if you use AI to build a service 10-100x better for a specific customer, you can build quite a good product and company.
My reasons for subscribing to Perplexity:
1. It recently integrated GPT-4 and lets GPT-4 draw on up-to-date information. It is faster than ChatGPT with Bing and does a better job of surfacing sources for its answers.
2. If you record your interests in your profile, it serves content tailored to them.
3. It asks for user feedback to improve the accuracy of its answers; this does not get in the way of the experience and actually raises my trust in Perplexity.
4. It suggests follow-up topics worth looking into.
These strengths convinced me to subscribe even though I already pay for ChatGPT. Even against Google Bard, Perplexity improves accuracy by citing sources and improves the experience by keeping search history; and because it runs on GPT-4, its overall result quality is better than Bard's.
How can Perplexity build a moat?
1. For a startup of still only about 16 people, I think finding features that leave a strong impression on customers and lift retention matters more than agonizing over a moat. With such features, I believe Perplexity can stay competitive long-term even against ChatGPT and Google Search.
2. In particular, as it accumulates customers' search data, it should be able to deliver better results than a plain call to ChatGPT. And as more models on the level of GPT-4 appear, its search UX will let it take advantage of a variety of AI models.
3. If it learns which features customers value most, it could build its own models optimized for those features.
What if ChatGPT or Google quickly copies Perplexity's main features?
ChatGPT handles a genuinely wide range of tasks (writing emails, research, summarization), while Perplexity is sharply specialized in information retrieval and fact-checking. I don't know exactly which customers and problems the Perplexity team is focused on, but while ChatGPT and Google Bard serve an enormously broad audience, focusing on specific customers and compounding from there, as most startups have done, may be the way to survive among giants rich in talent, money, and infrastructure.
This business does not look easy, but it is meaningful that even search, long considered too hard an area for startups, can be cracked open by a startup combining new technology with a good product. As a paying user, I want to keep cheering them on.
Perplexity AI
Perplexity is a free AI-powered answer engine that provides accurate, trusted, and real-time answers to any question.
Continuous Learning_Startup & Investment
https://youtu.be/5cQXjboJwg0
About Hard Landing
A hard landing is a sharp economic slowdown or downturn that follows a period of rapid growth. When a government works to rein in inflation, the economy can tip into slow growth, a recession, or a downturn. A hard landing contrasts with a soft landing, in which growth slows enough to control inflation but remains high enough to avoid recession. Hard-landing risk can come from several sources, such as aggressively tightened monetary policy, persistent inflation combined with low unemployment, rising debt levels, or a shortage of buyers for government bonds. The dangers of a hard landing include sliding into a period of stagnation or recession, with rising unemployment, falling corporate profits, and increasing defaults. To prepare, investors can diversify their portfolios, hold quality assets, focus on long-term investment goals, rebalance regularly, and invest in stable assets and selected countries.
The Federal Reserve's rate-hike cycles have often been followed in the US by a recession or hard landing rather than a soft landing [2] (https://en.wikipedia.org/wiki/Hard_landing_(economics)). Deutsche Bank's research team believes the Fed's aggressive hiking cycle is reaching its final stage and that a recession could arrive as early as October [8] (https://fortune.com/2023/06/15/economy-recession-federal-reserve-powell-deutsche-bank-hard-landing/). Stanley Druckenmiller, chairman and CEO of the Duquesne Family Office, expects the Fed's rate hikes to push the US economy into recession [5] (https://www.reuters.com/markets/us/investor-druckenmiller-expects-hard-landing-us-economy-bullish-ai-2023-06-07/). He believes that, amid this year's bank turmoil, parts of the economy have yet to feel the impact of the hikes and that more "shoes" will drop [5] (https://www.reuters.com/markets/us/investor-druckenmiller-expects-hard-landing-us-economy-bullish-ai-2023-06-07/). Ray Dalio, founder of Bridgewater Associates, warns that the US faces a big-cycle debt crisis and that economic conditions will deteriorate [4] (https://fortune.com/2023/06/08/ray-dalio-bridgewater-associates-us-economy-debt-crisis-recession/) [6] (https://finance.yahoo.com/news/ray-dalio-says-u-facing-145648699.html). With the US Treasury expected to issue more than $1 trillion of T-Bills by the end of 2023, he is concerned the market may not have enough buyers for that government debt [4] (https://fortune.com/2023/06/08/ray-dalio-bridgewater-associates-us-economy-debt-crisis-recession/). Dalio believes the US is at the start of a classic late-stage big-cycle debt crisis, producing too much debt while buyers run short [4] (https://fortune.com/2023/06/08/ray-dalio-bridgewater-associates-us-economy-debt-crisis-recession/). Other insights: the International Monetary Fund (IMF) warns that the risk of a hard landing for the global economy has "increased sharply" given stubbornly high inflation, rising rates, and the uncertainty caused by US bank failures [10] (https://fortune.com/2023/04/11/recession-outlook-imf-slashes-global-growth-hard-landing/). Lisa Shalett, chief investment officer at Morgan Stanley Wealth Management, warns that hard-landing risk is growing as consumer inflation heats up again [11] (https://fortune.com/2023/02/21/stock-market-outlook-economic-forecast-morgan-stanley-wealth-management-goldilocks-dead-economic-hard-landing-risk-growing/)
Risks to investors during a hard landing: investors face several risks, including the following.
Falling asset values: prices of assets such as stocks and real estate can drop sharply, shrinking portfolio values and creating potential losses [1](https://60secondmarketer.com/2021/12/28/how-will-recession-affect-real-estate-investors-the-must-know-facts/) [2](https://www.forbes.com/advisor/investing/how-to-invest-during-a-recession/).
Reduced liquidity: market liquidity can dry up, making it harder to buy or sell assets at desired prices [1](https://60secondmarketer.com/2021/12/28/how-will-recession-affect-real-estate-investors-the-must-know-facts/).
Higher volatility: financial markets can turn unstable, with large price swings and greater uncertainty [3](https://www.investopedia.com/terms/h/hardlanding.asp).
Bankruptcies and defaults: companies under financial stress may go bankrupt or default, hurting investors who hold their stocks or bonds [4](https://seekingalpha.com/news/3973813-goldman-sachs-picks-top-stocks-in-case-of-a-hard-landing).
Opportunities during a hard landing: despite the risks, a hard landing can also hand investors opportunities.
Buying undervalued assets: falling prices can create chances to buy high-quality assets at a discount [5](https://www.investopedia.com/ask/answers/042115/whats-best-investing-strategy-have-during-recession.asp).
Defensive stocks: defensive sectors such as consumer staples, utilities, and healthcare can provide stability during downturns [2](https://www.forbes.com/advisor/investing/how-to-invest-during-a-recession/).
Dividend stocks: stocks that pay steady dividends can offer income and potential capital appreciation in difficult markets [2](https://www.forbes.com/advisor/investing/how-to-invest-during-a-recession/).
Bonds and uncorrelated assets: bonds and other assets with low correlation to equities add diversification and reduce portfolio risk [2](https://www.forbes.com/advisor/investing/how-to-invest-during-a-recession/).
Wikipedia
Hard landing (economics)
A hard landing in the business cycle or economic cycle
Preparing for a hard landing: investors can take steps such as the following.
Diversify the portfolio: spreading investments across asset classes, sectors, and regions helps mitigate risk and capture potential opportunities [5](https://www.investopedia.com/ask/answers/042115/whats-best-investing-strategy-have-during-recession.asp).
Focus on quality assets: invest in well-managed companies with low debt, strong cash flow, and solid balance sheets [5](https://www.investopedia.com/ask/answers/042115/whats-best-investing-strategy-have-during-recession.asp).
Keep a long-term perspective: stay focused on long-term goals and avoid impulsive decisions driven by short-term market swings [6](https://www.pwc.com/us/en/industries/financial-services/asset-wealth-management/real-estate/emerging-trends-in-real-estate.html).
Rebalance the portfolio: review and adjust holdings regularly to maintain the desired asset allocation and risk profile [7](https://www.schwab.com/learn/story/how-to-prepare-landing).
Promising assets and countries: during a hard landing, investors can consider assets and countries such as the following.
Japanese real estate: Japan's property market has shown resilience during downturns and can serve as a relatively safe haven [1](https://60secondmarketer.com/2021/12/28/how-will-recession-affect-real-estate-investors-the-must-know-facts/).
Defensive stocks: as noted above, defensive sectors such as consumer staples, utilities, and healthcare can provide stability [2](https://www.forbes.com/advisor/investing/how-to-invest-during-a-recession/).
Dividend stocks: companies that pay consistent dividends can offer income and potential capital appreciation in tough markets [2](https://www.forbes.com/advisor/investing/how-to-invest-during-a-recession/).
Bonds and uncorrelated assets: bonds and other low-correlation assets add diversification and reduce portfolio risk [2](https://www.forbes.com/advisor/investing/how-to-invest-during-a-recession/).
In conclusion, a hard landing presents investors with both risks and opportunities. By diversifying, focusing on quality assets, keeping a long-term perspective, and rebalancing, investors can better weather the challenges of a hard landing and capture the opportunities it creates.
Forbes Advisor
How To Invest During A Recession
With inflation up, the stock market down and gross domestic product (GDP) in the red, experts are debating whether the U.S. has entered a recession. While the jury is still out on that question, you may wonder what you can do now to best position your investments…
I think that to become rich starting from nothing, you have to live more relentlessly than everyone else (to borrow the words of the author Say No).
From a Facebook post by a startup CEO:
So you say you want to be rich?
A job with decent work-life balance and a high salary.
Overseas trips once or twice a year while you're at it.
A marriage partner from the top 10 percent.
Not a lot, but at least one luxury bag.
Rather than a so-so domestic car, better to stretch a bit for an import.
Comfortable, nice, respectable...
I want all that too; would anyone else be different?
Actual recent statistics and trends in Korea:
- Marriage rate: record low
- Overseas travel per capita: highest
- Luxury-goods spending relative to income: highest
- Imported-car spending relative to income: highest
Social media and the media egg people on as if competing with one another, and a culture of "I should have at least what everyone else has" seems to be dragging the country and society into a swamp.
Being rich is, by definition, rare. If so,
shouldn't you do things differently from the crowd?
The few who keep at it while being mocked as fools:
working hard to grow,
saving the money for overseas trips, luxury goods, and imported-car installments
to study foreign languages or investing,
marrying and starting from a small home,
building it up little by little.
I have watched many times how, in the end, the wealth gap opens up between people like this and everyone else.
Come to think of it, I have never seen anyone build real wealth while belittling those who steadily work toward a single goal and living by the rule of trying whatever everyone else does.
Knowing the wave is coming, riding the wave, and riding it well again and again are all different things.
Something I need to keep coming back to as I watch the AI wave.
From the same CEO's Facebook feed:
Morpheus's famous line from The Matrix:
"There is a difference between knowing the path and walking the path."
For startup people too, walking more matters more than knowing more, yet everyone seems to focus only on knowing more.
Ask the people who keep preparing in order to know more and they answer, "I'm trying to learn more so I can walk farther," but I have rarely seen them actually walk.
If anything, they end up knowing too much, get scared first, and simply don't start...
Just walk. Today, and tomorrow...
### Effective or Experimental LLM Lightweighting Approaches
Lightweighting approaches for Large Language Models (LLMs) aim to reduce the memory footprint and computational requirements of these models, making them more efficient and easier to deploy. Some popular lightweighting methods include quantization, pruning, and distillation**[1](https://medium.com/intel-analytics-software/effective-post-training-quantization-for-large-language-models-with-enhanced-smoothquant-approach-93e9d104fb98)**.
### Quantization
Quantization is a compression operation that reduces the memory footprint of a model and improves inference performance. An enhanced SmoothQuant approach has been proposed for post-training quantization of LLMs, which has been integrated into Intel Neural Compressor, an open-source Python library of popular model compression techniques**[1](https://medium.com/intel-analytics-software/effective-post-training-quantization-for-large-language-models-with-enhanced-smoothquant-approach-93e9d104fb98)**.
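The core mechanic behind most post-training schemes is simple: store weights as low-bit integers plus a scale. A minimal absmax int8 sketch (SmoothQuant additionally migrates activation outliers into the weights first, which is not shown here):

```python
# Per-tensor absmax int8 weight quantization round-trip.
import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0                      # map the largest weight to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).mean()
print(f"4x smaller storage, mean abs rounding error = {err:.5f}")
```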
### Pruning
Pruning is a method to compress a model by removing some of its weights, which can lead to a significant reduction in model size. SparseGPT is an algorithm that allows reducing a model size by more than 50% while maintaining its performance**[2](https://www.machinelearningatscale.com/pruning-llm-sparsegpt/)**.
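SparseGPT itself solves a layer-wise reconstruction problem, but the simplest form of the idea, unstructured magnitude pruning, is easy to sketch:

```python
# Zero out the smallest-magnitude weights to reach a target sparsity.
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    k = int(w.size * sparsity)
    threshold = np.partition(np.abs(w).ravel(), k)[k]    # k-th smallest magnitude
    mask = np.abs(w) >= threshold
    return w * mask, mask

w = np.random.randn(1024, 1024)
pruned, mask = magnitude_prune(w, sparsity=0.5)
print(f"kept {mask.mean():.0%} of the weights")          # ~50%
```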
### Distillation
Distillation is a technique that involves training a smaller model (student model) to mimic the behavior of a larger model (teacher model). This approach creates compute-friendly LLMs suitable for use in resource-constrained environments, such as real-time language translation, automated speech recognition, and chatbots on edge devices like smartphones and tablets**[3](https://jaxon.ai/distillation-making-large-language-models-compute-friendly/)**.
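The standard (Hinton-style) distillation objective has the student match the teacher's temperature-softened output distribution. A self-contained sketch with stand-in logit arrays, not real LLMs:

```python
# Knowledge-distillation loss: KL(teacher || student) at temperature T.
import numpy as np

def softmax(x, T=1.0):
    z = np.exp((x - x.max(-1, keepdims=True)) / T)
    return z / z.sum(-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T))
    # T^2 keeps gradient magnitudes comparable across temperatures
    return T * T * (p_t * (np.log(p_t) - log_p_s)).sum(-1).mean()

teacher = np.random.randn(8, 50_000)   # (batch, vocab) logits from the large model
student = np.random.randn(8, 50_000)   # logits from the smaller student
print(distillation_loss(student, teacher))
```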
### Quantization
Pros:
- Reduces memory footprint and accelerates inference**[1](https://arxiv.org/pdf/2211.10438.pdf)**.
- Can be applied post-training without the need for additional training data**[1](https://arxiv.org/pdf/2211.10438.pdf)**.
Cons:
- Potential loss of accuracy during the quantization process**[1](https://arxiv.org/pdf/2211.10438.pdf)**.
- May require further optimization for different LLM architectures**[1](https://arxiv.org/pdf/2211.10438.pdf)**.
Use Cases:
- Deploying LLMs on edge devices with limited resources**[1](https://arxiv.org/pdf/2211.10438.pdf)**.
- Real-time language translation and automated speech recognition**[1](https://arxiv.org/pdf/2211.10438.pdf)**.
### Pruning
Pros:
- Can significantly reduce model size while maintaining performance**[2](https://towardsdatascience.com/model-compression-via-pruning-ac9b730a7c7b)**.
- Can be performed in both structured and unstructured forms**[3](https://web.stanford.edu/class/cs224n/reports/custom_116951464.pdf)**.
Cons:
- May require additional fine-tuning to achieve optimal performance**[2](https://towardsdatascience.com/model-compression-via-pruning-ac9b730a7c7b)**.
- Can be computationally expensive for large models**[2](https://towardsdatascience.com/model-compression-via-pruning-ac9b730a7c7b)**.
Use Cases:
- Deploying LLMs on resource-constrained devices**[2](https://towardsdatascience.com/model-compression-via-pruning-ac9b730a7c7b)**.
- Improving the efficiency of LLMs in various applications, such as natural language processing and computer vision tasks**[2](https://towardsdatascience.com/model-compression-via-pruning-ac9b730a7c7b)**.
### Distillation
Pros:
- Creates compute-friendly LLMs suitable for use in resource-constrained environments**[4](https://wandb.ai/byyoung3/ml-news/reports/New-Method-For-LLM-Quantization--VmlldzozOTU1NTgz)**.
- Can maintain the performance of the original model**[4](https://wandb.ai/byyoung3/ml-news/reports/New-Method-For-LLM-Quantization--VmlldzozOTU1NTgz)**.
Cons:
- May suffer from the "curse of capacity gap" when the teacher and student models have a large capacity difference**[5](https://openreview.net/forum?id=CMsuT6Cmfvs)**.
- Requires additional training data and computational resources**[4](https://wandb.ai/byyoung3/ml-news/reports/New-Method-For-LLM-Quantization--VmlldzozOTU1NTgz)**.
Use Cases:
- Real-time language translation, automated speech recognition, and chatbots on edge devices like smartphones and tablets**[4](https://wandb.ai/byyoung3/ml-news/reports/New-Method-For-LLM-Quantization--VmlldzozOTU1NTgz)**.
- Deploying LLMs in various applications, such as natural language processing and computer vision tasks**[4](https://wandb.ai/byyoung3/ml-news/reports/New-Method-For-LLM-Quantization--VmlldzozOTU1NTgz)**.
In summary, each lightweighting approach has its own set of advantages and disadvantages, making them suitable for different use cases. Quantization is ideal for deploying LLMs on edge devices with limited resources, while pruning can help improve the efficiency of LLMs in various applications. Distillation is useful for creating compute-friendly LLMs suitable for use in resource-constrained environments. Choosing the right approach depends on the specific requirements and constraints of the application.
### LLM-QAT: Data-Free Quantization Aware Training for Large Language Models
LLM-QAT is a data-free distillation method that leverages generations produced by the pre-trained model to better preserve the original model's performance while reducing its size and computational requirements**[4](https://arxiv.org/abs/2305.17888)**. This approach enables efficient quantization of LLMs without the need for additional training data.
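The paper's full training loop is beyond a snippet, but the mechanism that makes quantization-aware training work, a fake quantizer with a straight-through estimator so gradients flow through the rounding step, can be sketched generically in PyTorch (an illustration of the general technique, not the paper's code):

```python
# Fake-quantize weights in the forward pass; pass gradients through unchanged.
import torch

class FakeQuant(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w, bits=4):
        qmax = 2 ** (bits - 1) - 1
        scale = w.abs().max() / qmax                     # per-tensor absmax scale
        return torch.round(w / scale).clamp(-qmax, qmax) * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None                            # straight-through estimator

w = torch.randn(256, 256, requires_grad=True)
loss = FakeQuant.apply(w).square().sum()                 # any downstream loss
loss.backward()                                          # w.grad exists despite round()
print(w.grad.shape)
```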
### Limitations and Opportunities
Some limitations of LLM-QAT include the potential loss of accuracy during the quantization process and the need for further research to optimize the method for different LLM architectures. However, LLM-QAT presents opportunities for improving the efficiency of LLM deployment in various applications, such as natural language processing and computer vision tasks.
### Real-World Lightweighting Methods
In the real world, lightweighting methods are used in various industries, such as automotive and aerospace, to reduce the weight of components and improve overall performance. Some common lightweighting strategies include:
1. Material selection: Using lighter materials for each component**[5](https://3dxresearch.com/2018/06/05/lightweighting-strategies-in-an-additively-manufactured-world/)**.
2. Structural optimization: Designing components to minimize weight while maintaining strength and functionality**[5](https://3dxresearch.com/2018/06/05/lightweighting-strategies-in-an-additively-manufactured-world/)**.
3. Architected materials: Creating materials with specific microstructures to optimize their properties for lightweighting**[5](https://3dxresearch.com/2018/06/05/lightweighting-strategies-in-an-additively-manufactured-world/)**.
4. Multifunctionality: Designing components that serve multiple purposes, reducing the need for additional parts**[5](https://3dxresearch.com/2018/06/05/lightweighting-strategies-in-an-additively-manufactured-world/)**.
These lightweighting methods can be used separately or in conjunction with one another to achieve the desired weight reduction and performance improvements.
OpenReview
Lifting the Curse of Capacity Gap in Distilling Large Language Models
Large language models (LLMs) have shown compelling performance on various downstream tasks, but unfortunately require a tremendous amount of inference compute. Knowledge distillation finds a path...
The article "The Law Is Coming for AIโBut Maybe Not the Law You Think" discusses the legal challenges and implications surrounding the use of artificial intelligence (AI) technology, particularly focusing on the recent approval of the AI Act in the European Parliament**[1](https://www.theinformation.com/articles/the-law-is-coming-for-ai-but-maybe-not-the-law-you-think)**. The article highlights the case of Italy's data protection authority banning OpenAI's ChatGPT due to non-compliance with European data protection provisions**[1](https://www.theinformation.com/articles/the-law-is-coming-for-ai-but-maybe-not-the-law-you-think)**. The main points of the article are as follows:
1. AI technology raises legal questions in areas such as privacy, discrimination, and liability.
2. There is no single law governing AI, and existing laws are often unclear or outdated.
3. There is a growing movement to create new laws and regulations specifically for AI.
4. There is no consensus on what these laws and regulations should look like.
5. Some people believe AI should be regulated like any other technology, while others believe it requires special treatment.
6. The debate over how to regulate AI is likely to continue for many years to come.
As an AI researcher or AI startup founder, it is crucial to stay informed about the legal landscape surrounding AI technology. This includes understanding the potential legal issues that may arise from the development and deployment of AI systems, as well as keeping up-to-date with new laws and regulations that may impact your work or business. By being proactive and knowledgeable about the legal aspects of AI, you can better navigate potential challenges and ensure that your AI systems are developed and used responsibly and ethically.
The Information
The Law Is Coming for AI—But Maybe Not the Law You Think
While the approval of the AI Act in the European Parliament on Wednesday will no doubt go down in history as a day of reckoning for generative artificial intelligence, it was not the first. That honor belongs to March 31, when, citing a lack of compliance…
According to an article in The Information**[1](https://www.theinformation.com/articles/a-reckoning-arrives-for-creator-economy-startups)**, funding for US creator-economy startups fell 86% to $123 million, the seventh consecutive quarter of year-over-year decline. Meanwhile, there are many creator-economy startups providing the tools, resources, and platforms digital creators need to handle both content production and the business side more easily**[2](https://blog.hubspot.com/marketing/creator-economy-startups)**. But not all of these businesses are good for creators, and some can in fact be quite predatory**[3](https://techcrunch.com/2021/12/30/not-every-creator-economy-startup-is-built-for-creators/)**. A few things AI researchers and AI startup founders can take away from this article:
- The creator economy is a growing market, with plenty of opportunities to provide tools and resources to content creators.
- If creators are going to entrust you with their business, it is important to think from the creator's perspective, understanding that creators expect you to keep their best interests in mind**[3](https://techcrunch.com/2021/12/30/not-every-creator-economy-startup-is-built-for-creators/)**.
- More ethical, trustworthy creator-economy startups**[3](https://techcrunch.com/2021/12/30/not-every-creator-economy-startup-is-built-for-creators/)** that can offer content creators support and resources are needed.
- The drop in funding for creator-economy startups may signal a shift in the market, and it is worth watching these trends**[1](https://www.theinformation.com/articles/a-reckoning-arrives-for-creator-economy-startups)[4](https://www.antler.co/blog/2023-creator-economy)**.
- AI can potentially be used to build new tools and platforms that raise content creators' productivity in the creator economy**[5](https://wonnda.com/magazine/creator-economy-startups/)[6](https://influencermarketinghub.com/creator-economy-startups/)**.
The Information
A Reckoning Arrives for Creator Economy Startups
Two years ago, Dmitry Shapiro and Sean Thielen were so optimistic about the booming creator economy that they pivoted their startup to a new product: a simple tool called Koji that lets influencers more easily link to their online tip jars, merch and other…
https://www.the-coming-wave.com/
Written by Mustafa Suleyman, cofounder of DeepMind and cofounder of Inflection (maker of Pi).
The Coming Wave Book
This groundbreaking new book from AI entrepreneur Mustafa Suleyman is a must-read guide to the technological revolution just starting, and the transformed world it will create.
despite all the hype and excitement, people still aren't grokking the full impact of the coming wave of AI. Within the next ten years, most "cognitive manual labor" is going to be carried out by AI systems.
call centers, invoicing, payroll, paralegals, scheduling, bookkeeping, back office admin, and so on… these are the first. planning and more complex sequences of actions will come shortly after
- The four elements of success: diligence, perfection, system, wisdom
(Kim Seung-ho, from his "CEO-ology" (사장학개론) lecture)
At the end of his CEO-ology video, Kim Seung-ho talks about the four elements of success. Listening to it, I agreed with a lot of it and saw how much I still have to learn, so I'm sharing it. :)
Working hard is the first step toward success, but only if you also
do the work perfectly (quality of the work),
do the work systematically (efficiency of the work),
and do the work wisely (direction of the work)
can you finally succeed.
There is a lot for me to learn here.
Quality, efficiency, direction. I need to think deeply and broadly. :)
From here on, it is Kim Seung-ho's own story.
The full video is in the comments; watching the whole thing will probably help more.
"
I hate repetitive work most of all. You have to develop a knack for the work and change the way you work.
Diligent people think only of working hard. That is not enough.
The work has to be done perfectly. Finishing it with effort alone is not enough; it has to be perfect.
Completing work well does not come from effort alone. More people have failed precisely because they only worked hard.
Work merely hard and you end up on a TV show about modestly successful shopkeepers.
Diligence is only the first element.
Work diligently, perfectly, systematically, and wisely.
When perfection, wisdom, and system are added to diligence, it becomes a company,
because you can then delegate that work to others and spend your own time efficiently on other things.
If I had simply worked hard at making sushi, I would be making 50 pieces of sushi an hour right now.
(Note: Kim Seung-ho runs a sushi chain business in the United States.)
But because I did not settle for working hard, this company can now make 200,000 pieces of sushi an hour.
People who succeed on sheer hard work grow old with battered bodies and suffer for it.
Don't just work hard.
"
https://www.facebook.com/100009346142985/posts/pfbid09CZa5KpXYtQAH31VweWiX3VHquc1B5fjZ2jqUgPSteVFHt5FjJ2EfZwEzHh1348Bl/?mibextid=jf9HGS
Forwarded from 전종현의 인사이트 (Jeon Jong-hyun's Insight)
NVIDIA and its guidance raise showed which way the world is heading, and since then I have been working my way down the value chain one piece at a time, studying again.
Within it, the growth of FC-BGA stood out, which led me to names like Gigavis and INTEKPLUS.
Below is the Humble Investor blog's write-up of INTEKPLUS. It is long, but worth taking the time to read.
https://blog.naver.com/humbleinvest/223127883654
NAVER
[INTEKPLUS] From Advanced Packaging to Secondary Batteries (a full summary)
Whenever I talk about INTEKPLUS, I hear that it is hard to understand. No wonder: for a stock with a market cap of about 300 billion won, there is an awful lot to know. It ought to be simple by now, but unfortunately it is complicated.
Mark Manson crowdsourced relationship advice from over 1,500 happily married couples and synthesized their wisdom and experience into something straightforward and applicable to any relationship. He asked his readers who have been married for 10+ years and are still happy in their relationship to share their best relationship/marriage advice. He received a lot of advice, but perhaps the most interesting nugget comes from relationship researcher John Gottman's work: most successful couples don't actually resolve all of their problems. In fact, his findings were completely different from what most people expect: people in lasting and happy relationships have problems that never completely go away, while couples that feel as though they need to agree and compromise on everything end up feeling miserable and falling apart. Successful couples accept and understand that some problems are perpetual and that they will be working on them for the rest of their lives. They don't try to solve them; they just try to manage them.
Mark Manson
Every Successful Relationship Is Successful for the Same Exact Reasons
Crowdsourced relationship advice from over 1,500 people who have been living "happily ever after." Learn how they make it work.