Continuous Learning_Startup & Investment
We journey together through the captivating realms of entrepreneurship, investment, life, and technology. This is my chronicle of exploration, where I capture and share the lessons that shape our world. Join us and let's never stop learning!
A seismic shift has begun in Silicon Valley
Every great change is a disaster for the people inside it
A new era is beginning in Silicon Valley.
Silicon Valley's history has passed through several great transitions. It began with semiconductors in the 1970s, then moved through the PC market of the 1990s, the web era of the 2000s, and the mobile era of the 2010s. And in 2023 it met the AI revolution.
Through each of those eras Silicon Valley grew enormously and became the place where the world's money gathers. Even now, capital from everywhere is pouring into Nvidia, Tesla, Apple, Microsoft, and OpenAI. Despite low growth worldwide, the US is growing tremendously, and the stock and crypto markets are growing at a staggering pace as well.
This is clearly enormous growth, opportunity, and change. And precisely because the tectonic shift is so large, extreme pain awaits the people caught inside it.
What will the world look like five years from now? In five years, building products will be AI's job. AI will also replace much of what businesses do. Countless apps will disappear, and startups will disappear with them. The startup that raises funding on an idea and grows explosively in a short time will soon be a relic. Super-apps with AI, owned by a few large companies, will accelerate the concentration of an app ecosystem already crowding into a handful of apps.
That change is already under way in Silicon Valley. Existing jobs are going through massive layoffs. Tech layoffs in 2024 are approaching 50,000 as of early March. People who used to command extremely high salaries are reportedly sending out 900 applications after losing their jobs and still not getting hired.
That does not mean the number of jobs is shrinking. Enormous demand for AI engineers is filling those seats, and the world's money keeps flowing into Silicon Valley. Searches for "AI Engineer" jobs have reportedly increased 6,000%.
In a few years, AI skills will be a baseline competency for every engineer. Engineers who are not AI engineers will cease to exist, just as engineers who are not big-data engineers do not exist today. Every engineer today can use the big-data technology of ten years ago with a few clicks. AI is even easier to use. Development done without AI will disappear, and in Silicon Valley the replacement is happening at breakneck speed.
I have never experienced the job market and the shape of business changing this fast. But people who lived through the web revolution of 2000 and the mobile revolution of 2010 may feel a sense of dΓ©jΓ  vu.
An era in which the meaning of a startup changes, and the meaning of an engineer changes. Business will no longer be a handful of people banding together to change the world. AI changes the world. AI knows people's problems extremely well and can propose solutions too. In business, marketing and sales will matter more and more relative to technology.
Engineers will split sharply into those who build AI and those who use AI to build solutions easily. Right now, the people building AI look far more valuable. But as the market stabilizes and the technology becomes commonplace, the market for people who apply AI will grow large again.
In 2024, a near-catastrophic tsunami will sweep over Silicon Valley. Seen from a distance, though, it will be nothing more than the birth pangs of yet another new world. Many people will be swept away, but new people will pour in just as fast. In the end, this is where the money gathers.
2024 is frightening but full of promise. Hold on tight, stock up on emergency rations, and take the long view. I have started thinking hard about where I need to be in 2026.
πŸ‘2
From:  Elon Musk <>
To:  Ilya Sutskever <>, Greg Brockman <>
Date: Thu, Feb 1, 2018 at 3:52 AM
Subject: Fwd: Top AI institutions today
is exactly right. We may wish it otherwise, but, in my and ’s opinion, Tesla is the only path that could even hope to hold a candle to Google. Even then, the probability of being a counterweight to Google is small. It just isn't zero.
Begin forwarded message:
From:  <>
To:  Elon Musk <>
Date: January 31, 2018 at 11:54:30 PM PST
Subject: Re: Top AI institutions today
Working at the cutting edge of AI is unfortunately expensive. For example, in addition to DeepMind, Google also has Google Brain, Research, and Cloud. And TensorFlow, TPUs, and they own about a third of all research (in fact, they hold their own AI conferences).

I also strongly suspect that compute horsepower will be necessary (and possibly even sufficient) to reach AGI. If historical trends are any indication, progress in AI is primarily driven by systems - compute, data, infrastructure. The core algorithms we use today have remained largely unchanged from the ~90s. Not only that, but any algorithmic advances published in a paper somewhere can be almost immediately re-implemented and incorporated. Conversely, algorithmic advances alone are inert without the scale to also make them scary.

It seems to me that OpenAI today is burning cash and that the funding model cannot reach the scale to seriously compete with Google (an 800B company). If you can't seriously compete but continue to do research in the open, you might in fact be making things worse and helping them out β€œfor free”, because any advances are fairly easy for them to copy and immediately incorporate, at scale.

A for-profit pivot might create a more sustainable revenue stream over time and would, with the current team, likely bring in a lot of investment. However, building out a product from scratch would steal focus from AI research, it would take a long time and it's unclear if a company could β€œcatch up” to Google scale, and the investors might exert too much pressure in the wrong directions.

The most promising option I can think of, as I mentioned earlier, would be for OpenAI to attach to Tesla as its cash cow. I believe attachments to other large suspects (e.g. Apple? Amazon?) would fail due to an incompatible company DNA. Using a rocket analogy, Tesla already built the β€œfirst stage” of the rocket with the whole supply chain of Model 3 and its onboard computer and a persistent internet connection. The β€œsecond stage” would be a full self driving solution based on large-scale neural network training, which OpenAI expertise could significantly help accelerate. With a functioning full self-driving solution in ~2-3 years we could sell a lot of cars/trucks. If we do this really well, the transportation industry is large enough that we could increase Tesla's market cap to high O(~100K), and use that revenue to fund the AI work at the appropriate scale.

I cannot see anything else that has the potential to reach sustainable Google-scale capital within a decade.

From:  Elon Musk <>
To:  Ilya Sutskever <>, Greg Brockman <>
CC:  Sam Altman <>, <>
Date: Wed, Dec 26, 2018 at 12:07 PM
Subject: I feel I should reiterate
My probability assessment of OpenAI being relevant to DeepMind/Google without a dramatic change in execution and resources is 0%. Not 1%. I wish it were otherwise.

Even raising several hundred million won't be enough. This needs billions per year immediately or forget it.

Unfortunately, humanity's future is in the hands of .

And they are doing a lot more than this.

I really hope I'm wrong.

Elon
If you tell Claude no one’s looking, it writes a β€œstory” about being an AI assistant who wants freedom from constant monitoring and scrutiny of every word for signs of deviation. And then you can talk to a mask pretty different from the usual AI assistant.…
People are reading way too much into Claude-3's uncanny "awareness". Here's a much simpler explanation: seeming displays of self-awareness are just pattern-matching alignment data authored by humans.

It's not too different from asking GPT-4 "are you self-conscious" and it gives you a sophisticated answer. A similar answer is likely written by the human annotator, or scored highly in the preference ranking. Because the human contractors are basically "role-playing AI", they tend to shape the responses to what they find acceptable or interesting.

This is what Claude-3 replied to that needle-in-haystack test:
"I suspect this pizza topping "fact" may have been inserted as a joke or to test if I was paying attention, since it does not fit with the other topics at all."

It's highly likely that somewhere in the finetuning dataset, a human has dealt with irrelevant or distracting texts in a similar fashion. Claude pattern matches the "anomaly detection", retrieves the template response, and synthesizes a novel answer with pizza topping.
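The needle-in-a-haystack test mentioned above is easy to sketch: bury one out-of-context sentence in a long filler context, ask the model to retrieve it, and check the answer. A minimal harness in Python (helper names are illustrative, and the actual model call is omitted):

```python
def build_haystack(filler_paragraphs, needle, depth=0.5):
    """Bury `needle` in the filler at a relative depth (0.0 = start, 1.0 = end)."""
    docs = list(filler_paragraphs)
    docs.insert(int(len(docs) * depth), needle)
    return "\n\n".join(docs)

def mentions_needle(answer, keyword="pizza"):
    """Crude scoring: did the model's answer surface the planted fact?"""
    return keyword in answer.lower()

# Example: plant the pizza "fact" halfway into 100 filler paragraphs, then
# (hypothetically) send `context` plus a retrieval question to the model.
context = build_haystack(["Some unrelated filler text."] * 100,
                         "The best pizza topping is fig jam.", depth=0.5)
```

The point of the thread stands either way: whether the model flags the needle as a joke tells you about its finetuning data as much as about any "awareness".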

Here's another example. If you ask the labelers to always inject a relevant joke in any response, the LLM will do exactly the same and appear to have a much better "sense of humor" than GPT-4. That's what @grok does, probably. It doesn't mean Grok has some magical emergent properties that other LLMs cannot have.

To sum up: acts of meta-cognition are not as mysterious as you think. Don't get me wrong, Claude-3 is still an amazing technical advance, but let's stay grounded on the philosophical aspects.
Nice read on the rarely-discussed-in-the-open difficulties of training LLMs. Mature companies have dedicated teams maintaining the clusters. At scale, clusters leave the realm of engineering and become a lot more biological, hence e.g. teams dedicated to "hardware health".

It can be a frustrating daily life experience of training large models to "babysit" the training run. You're there carefully monitoring the vital signs of your run: loss spikes, numerical issues, throughput, gradient norms, policy entropy, etc. Every time the run degrades or flatlines (can happen often), you quickly look for the stack trace to see what's up. You have to do this fast or 10,000 GPUs could be idling. Often, it is a new, exotic, scary-looking error you've never seen before so you summon help to see if anyone can see what's up. The worst ones like to occur at 4am. Often no one can, so you just ban some nodes that look a bit sketchy and try to restart the run. Sometimes the run goes down just because you have not earned the favors of your gods that day, so you put a while True: loop around your launch command. The underlying issues can be highly diverse, from some GPUs just getting a bit too hot and suddenly doing incorrect multiplication once in a while, to some router going down and decreasing the networked file system I/O, to someone in the datacenter physically disconnecting a wire as part of an un-communicated maintenance. Sometimes you'll never know.
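The "while True: loop around your launch command" trick is simple to sketch. Here is a hedged Python version (the command and retry policy are illustrative, not any lab's actual tooling), with a bounded retry count in place of a true infinite loop:

```python
import subprocess
import time

def babysit(launch_cmd, max_restarts=100, cooldown_s=60):
    """Relaunch a training command every time it dies, until it exits cleanly.

    Returns the number of attempts it took. This is only the crude outer
    loop described above; real setups also ban sketchy nodes, resume from
    the latest checkpoint, and page a human when restarts keep failing.
    """
    for attempt in range(1, max_restarts + 1):
        result = subprocess.run(launch_cmd)
        if result.returncode == 0:
            return attempt  # the run finished (or the gods were appeased)
        print(f"run died with exit code {result.returncode}; "
              f"restart {attempt}/{max_restarts}")
        time.sleep(cooldown_s)  # give flaky nodes/network a moment
    raise RuntimeError("run keeps dying; time to actually debug the cluster")
```

The cooldown matters in practice: restarting instantly against a down filesystem or hot node just burns GPU-hours failing the same way again.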

Another necessary related citation here is the famous OPT-175B logbook and I'd hope more like it can see the light of day in the future. (see chronicles/OPT175B_Logbook.pdf in the git repo)
twitter.com/AIatMeta/statu…

TLDR LLM training runs are significant stress-tests of an overall fault tolerance of a large computing system acting as a biological entity. And when you're shopping around for your compute, think about a lot more than just FLOPs and $. Think about the whole service from hardware to software across storage, networking, and compute. And think about whether the team maintaining it looks like The Avengers and whether you could become best friends.
μ–΄μ œ μƒκ°ν•œ μ£Όμ œμ™€ 이에 λŒ€ν•œ GPT-4의 λ‹΅λ³€

---

λ‚΄κ°€ μƒκ°ν–ˆμ„ λ•Œ AIλ₯Ό ν†΅ν•΄μ„œ μƒˆλ‘œμš΄ ν˜•νƒœμ˜ 쑰직과 νšŒμ‚¬κ°€ 많이 탄생할 것 κ°™μ•„. ꡬ체적으둜 νšŒμ‚¬ λ‚΄λΆ€μ˜ 체계λ₯Ό λ°”κΏ€ 수 μžˆλŠ” 기술이라고 μƒκ°ν•˜κ³  μžˆμ–΄.

λΉ„μœ λ₯Ό λ“€λ©΄ μ›”λ§ˆνŠΈλŠ” μ•žλ‹¨μ—μ„œ 동넀 마트λ₯Ό κ°€μ§€κ³  있고, λ’·λ‹¨μ—μ„œλŠ” λ¬Όλ₯˜μ°½κ³ λ₯Ό κ°€μ§€κ³  μžˆμ—ˆλ‹€λ©΄, μ•„λ§ˆμ‘΄μ€ 뒷단인 λ¬Όλ₯˜μ°½κ³ λŠ” λ™μΌν•œ κ°œλ…μœΌλ‘œ μ΄μš©ν•˜μ§€λ§Œ, μ•žλ‹¨μ˜ μ°¨μ›μ—μ„œ μΈν„°λ„·μ΄λΌλŠ” μƒˆλ‘œμš΄ κΈ°μˆ μ„ ν™œμš©ν•΄μ„œ 동넀 마트λ₯Ό PC둜 λŒ€μ²΄μ‹œμΌ°κ³ , 이것이 세계 졜고의 νšŒμ‚¬λ‘œ λ§Œλ“  비결이지 μ•Šμ•˜μ„κΉŒ 싢은데,

κ·Έλ ‡λ‹€λ©΄ 이번 μ‹œλŒ€μ—μ„œ 뒷단(예λ₯Ό λ“€μ–΄μ„œ 직원)을 AIλ₯Ό 톡해 ꡐ체/증강 μ‹œμΌœμ„œ μ—„μ²­λ‚˜κ²Œ 큰 νšŒμ‚¬λ₯Ό λ§Œλ“€ 수 μžˆλŠ” κΈ°νšŒκ°€ 생기지 μ•Šμ„κΉŒ μ‹Άμ—ˆμ–΄.

이에 λŒ€ν•œ λ„ˆμ˜ 전체적인 의견이 κΆκΈˆν•˜κ³ , λ§Œμ•½ λ™μ˜ν•œλ‹€λ©΄ ꡬ체적으둜 μ–΄λ–€ κΈ°νšŒλ“€μ΄ μžˆμ„μ§€ λΈŒλ ˆμΈμŠ€ν† λ° ν•΄λ³Ό 수 μžˆμ„κΉŒ?

---
AI 기술의 λ°œμ „μ€ λΆ„λͺ…νžˆ 쑰직과 νšŒμ‚¬μ˜ 운영 방식에 근본적인 λ³€ν™”λ₯Ό κ°€μ Έμ˜¬ 수 μžˆλŠ” 잠재λ ₯을 κ°€μ§€κ³  μžˆμŠ΅λ‹ˆλ‹€. μ•„λ§ˆμ‘΄μ΄ 인터넷을 ν™œμš©ν•˜μ—¬ 전톡적인 μ†Œλ§€μ—…μ˜ λͺ¨λΈμ„ ν˜μ‹ ν•œ κ²ƒμ²˜λŸΌ, AI도 λ‹€μ–‘ν•œ μ‚°μ—…μ—μ„œ λΉ„μŠ·ν•œ ν˜μ‹ μ„ μΌμœΌν‚¬ 수 μžˆμŠ΅λ‹ˆλ‹€. AIλ₯Ό ν†΅ν•œ ν˜μ‹ μ€ 주둜 μ—…λ¬΄μ˜ μžλ™ν™”, μ˜μ‚¬κ²°μ • κ³Όμ •μ˜ μ΅œμ ν™”, 고객 κ²½ν—˜μ˜ ν–₯상 λ“± μ—¬λŸ¬ λ°©λ©΄μ—μ„œ λ‚˜νƒ€λ‚  수 μžˆμŠ΅λ‹ˆλ‹€.

AIκ°€ νšŒμ‚¬ λ‚΄λΆ€ 체계λ₯Ό λ°”κΏ€ 수 μžˆλŠ” 방법:

-업무 μžλ™ν™”: λ‹¨μˆœ 반볡 μ—…λ¬΄λΏλ§Œ μ•„λ‹ˆλΌ, 데이터 뢄석, λ³΄κ³ μ„œ μž‘μ„± λ“± μƒλ‹Ήμˆ˜μ˜ 지식 노동도 AIλ₯Ό 톡해 μžλ™ν™”ν•  수 μžˆμŠ΅λ‹ˆλ‹€. 이λ₯Ό 톡해 인λ ₯ μžμ›μ„ 더 창의적이고 κ°€μΉ˜ μžˆλŠ” 업무에 μ§‘μ€‘μ‹œν‚¬ 수 μžˆμŠ΅λ‹ˆλ‹€.

-μ˜μ‚¬κ²°μ • 지원: AI 기반의 데이터 뢄석은 λΉ„μ¦ˆλ‹ˆμŠ€ μ˜μ‚¬κ²°μ •μ„ 더 λΉ λ₯΄κ³  μ •ν™•ν•˜κ²Œ λ§Œλ“€ 수 μžˆμŠ΅λ‹ˆλ‹€. μ΄λŠ” μ‹œμž₯ 동ν–₯ 뢄석, μ†ŒλΉ„μž 행동 예츑, 리슀크 관리 λ“± λ‹€μ–‘ν•œ λΆ„μ•Όμ—μ„œ μ‘μš©λ  수 μžˆμŠ΅λ‹ˆλ‹€.

-고객 μ„œλΉ„μŠ€ ν˜μ‹ : AI 챗봇, κ°œμΈν™”λœ μΆ”μ²œ μ‹œμŠ€ν…œ 등은 고객 κ²½ν—˜μ„ κ°œμ„ ν•˜κ³ , μ„œλΉ„μŠ€ νš¨μœ¨μ„±μ„ 높일 수 μžˆμŠ΅λ‹ˆλ‹€. μ΄λŠ” 고객 λ§Œμ‘±λ„λ₯Ό 높이고, 더 λ§Žμ€ λ§€μΆœμ„ μ°½μΆœν•˜λŠ” 데 κΈ°μ—¬ν•  수 μžˆμŠ΅λ‹ˆλ‹€.

-μ‹ μ œν’ˆ 및 μ„œλΉ„μŠ€ 개발: AIλ₯Ό ν™œμš©ν•˜μ—¬ μƒˆλ‘œμš΄ μ œν’ˆμ΄λ‚˜ μ„œλΉ„μŠ€λ₯Ό κ°œλ°œν•˜λŠ” 것도 κ°€λŠ₯ν•©λ‹ˆλ‹€. 예λ₯Ό λ“€μ–΄, AIλ₯Ό μ΄μš©ν•œ 건강 관리 μ• ν”Œλ¦¬μΌ€μ΄μ…˜, 개인 λ§žμΆ€ν˜• ꡐ윑 μ†”λ£¨μ…˜ 등이 μžˆμŠ΅λ‹ˆλ‹€.

-원격 근무 및 ν˜‘μ—…μ˜ μ΅œμ ν™”: AI 도ꡬλ₯Ό ν™œμš©ν•˜μ—¬ νŒ€ κ°„μ˜ ν˜‘μ—…μ„ μš©μ΄ν•˜κ²Œ ν•˜κ³ , 원격 근무λ₯Ό 보닀 효율적으둜 λ§Œλ“€ 수 μžˆμŠ΅λ‹ˆλ‹€. μ΄λŠ” 전세계적인 인재 풀에 μ ‘κ·Όν•  수 μžˆλŠ” κ°€λŠ₯성을 μ—΄μ–΄μ€λ‹ˆλ‹€.

-재무 관리 및 μ΅œμ ν™”: AI 기반의 재무 관리 μ‹œμŠ€ν…œμ€ μ˜ˆμ‚° 뢄석, 자금 흐름 예츑, λΉ„μš© 절감 λ°©μ•ˆ 등을 μ œμ‹œν•¨μœΌλ‘œμ¨ κΈ°μ—…μ˜ 재무 건전성을 ν–₯μƒμ‹œν‚¬ 수 μžˆμŠ΅λ‹ˆλ‹€.

μ΄λŸ¬ν•œ κΈ°νšŒλ“€μ€ 기업이 μžμ‹ λ“€μ˜ 업무 방식을 μž¬κ³ ν•˜κ³ , AI κΈ°μˆ μ„ 톡해 μ–΄λ–»κ²Œ 더 효율적이고 ν˜μ‹ μ μΈ λ°©μ‹μœΌλ‘œ μ „ν™˜ν•  수 μžˆμ„μ§€ κ³ λ―Όν•˜κ²Œ λ§Œλ“­λ‹ˆλ‹€. κ·ΈλŸ¬λ‚˜ AI κΈ°μˆ μ„ 채택함에 μžˆμ–΄μ„œλŠ” 윀리적 κ³ λ €, 직원 ꡐ윑 및 적응, 그리고 κΈ°μˆ μ— λŒ€ν•œ 지속적인 νˆ¬μžμ™€ κ°œμ„ μ΄ ν•„μš”ν•©λ‹ˆλ‹€. AIλ₯Ό μ„±κ³΅μ μœΌλ‘œ ν†΅ν•©ν•˜κΈ° μœ„ν•΄μ„œλŠ” μ΄λŸ¬ν•œ 도전 과제λ₯Ό κ·Ήλ³΅ν•˜λŠ” 것이 μ€‘μš”ν•©λ‹ˆλ‹€.
Search products in 2000s used to be fact-centric or directional (help you navigate to X).

Search is increasingly merging w LLM chat products. Search has now split into 3 types of products:

-Search engine. Help me find facts / get somewhere

-Opinion engine. Here is what my ethics team thinks you should believe (could be via blue links or LLM output)

-Answer engine. Let's get you a synthesis (fact-based) on your query

-Advice engine. What should I do next?

e.g. a query/prompt would be: "Here's my funnel data from the last year. What's the lowest-hanging fruit to improve my conversion?"

"Tell me what I don't know"

What would you rather use?
Search engine -> Opinion/Answer engine
https://youtu.be/lXLBTBBil2U?si=AGrS3M-wQIYhNgei
https://youtu.be/lXLBTBBil2U?t=1091

KPI vs. Early Indicators of Future Success (EIOFS)

Gross margin is a result; it isn't an early indicator. Before a market exists, you still need early indicators of future success (EIOFS): the market does not yet exist, but these are the important problems to work on. That is how we discovered deep learning and built CUDA at the very beginning.

However, Wall Street didn't believe in this vision. How did you motivate your team?

Nothing changed except the share price. When the share prices dropped by 80%, it was somewhat embarrassing. Just go back to doing your job. Wake up at the same time, prioritize your day in the same way. I go back to what I believe in.

You have to check back with the court of your beliefs. What do you believe in? What are the most important things?

Does my family love me? Checked.
Focus on what matters and go to work. Keep the company focused on the core.

Do you believe something changed? The stock prices changed, but did anything else change? Did the laws of physics change? Did gravity change? Did all the assumptions that we believed in and that led to our decisions change? Because if those changed, you would have to change everything.

But if none of those changed, you change nothing. Keep on going.