Want to land a job at Tesla?
Software Developer (React, WebSockets)
Site Reliability Engineer, Platform Engineering (Kubernetes) - a really good team; we work together a lot.
Software Engineering Internship
I'll give you a referral :)
Build your Career at Tesla
Careers Search
Forwarded from Laziz Abdullaev
Do LLMs (Large Language Models) actually have the ability to reason?
At first glance it seems as if LLMs (e.g. the GPT products) simply "memorize" texts from the internet and, when asked a question, retrieve the best-matching memorized words and sentences. This is why viewing such models as a "stochastic parrot" became widespread. Indeed, sources that try to explain how GPT works are quite likely to leave exactly that impression - "I read N words and fetch the next word from a warehouse." A naive statistical model could work exactly this way. But note that human intelligence also cannot produce a complete sentence all at once; it, too, generates words in some sequence.
"But predicting the next word and reasoning are worlds apart," you might say.
That depends on how you define "reasoning."
Below are some interesting, scientifically grounded or verified facts about LLMs:
1. LMs represent words internally as vectors (e.g. word2vec). It is hard to believe that numbers can capture the meaning of a vocabulary, but classic experiments show that certain directions of the learned space encode, for example, gender. For instance, the differences word2vec("king") - word2vec("queen") and word2vec("male") - word2vec("female") are similar to each other. This is no coincidence; such correspondences are observed consistently.
2. Similarly, the notion of "truth" turns out to be reasonably well captured by some direction of the learned space, say A. In other words, models were shown to be able to separate "true" from "false" statements reasonably well - with a simple linear classifier!
3. More generally, what exactly does our mind need in order to fill in a dropped word without distorting the meaning of a sentence? Doesn't that require understanding what the sentence is about?
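The word-vector analogy in point 1 can be sketched with toy vectors. These four vectors are invented for illustration (real word2vec embeddings are learned from data and have hundreds of dimensions); one axis is constructed to play the role of the "gender" direction:

```python
import numpy as np

# Toy 4-dimensional "word vectors" (made up for illustration).
# Axis 0 is constructed to encode "gender", mimicking the observation
# that some learned direction in real embeddings captures it.
vecs = {
    "king":   np.array([ 1.0, 0.9, 0.1, 0.3]),
    "queen":  np.array([-1.0, 0.9, 0.1, 0.3]),
    "male":   np.array([ 1.0, 0.2, 0.7, 0.1]),
    "female": np.array([-1.0, 0.2, 0.7, 0.1]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Both difference vectors point along the "gender" axis,
# so their cosine similarity is high.
d1 = vecs["king"] - vecs["queen"]
d2 = vecs["male"] - vecs["female"]
print(cosine(d1, d2))
```

For these hand-made vectors the similarity is exactly 1.0 by construction; with real embeddings the analogy holds only approximately, but stably.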
These may be mere observations, but could a "stochastic parrot" exhibit such properties? The question in the title is genuinely debatable: we do not yet understand well enough what exactly lies behind AI's success so far, and that stands in the way of answering either "Yes" or "No."
@lazizabdullaev
Forwarded from Dilmurod Yangiboev | DYDO :) (Dilmurod)
#hr #interview #tesla #experience
1. HR interview
What makes it stand out:
- Mostly about my interests and about the company!
- More "why" questions!
To answer well:
- Show how hungry you are to learn;
- Keep up with the latest news about the company;
- Know a lot about the Brandenburg Gigafactory (all the Gigas);
- Bring energy;
- ...
Your curiosity matters: at Tesla, "You can almost learn and do anything!"
Forwarded from Alisher Sadullaev
🇺🇿 The highest peak for every athlete!
Athletes devote their whole lives to moments like this. Congratulations on the victory, friends.
Tesla Gym
When I joined the company, there was almost nothing to do for fun! Apart from the robots.
Now:
PlayStation, a gym, ping-pong, and a night club
Junior Dilmurod
- Ugh, bad (but working) code was written here; it should have been done differently -> I'll rewrite it from scratch
- I'll build microservices in Go with gRPC
- I'll put it on Docker and Kubernetes and load-balance it
Dilmurod with 4 years of experience
- I don't touch working code, and I don't complain at all (we do refactoring differently)
- I'll try spinning it up as a monolith in Python/Django
- I'll just set it up with pm2, or gunicorn behind nginx
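That last option really is small. A sketch of the "gunicorn behind nginx" monolith (file and app names here are illustrative, not from the post):

```python
# app.py -- a minimal WSGI application: the whole "deployment" is one
# process behind gunicorn, with nginx in front as a reverse proxy.
def application(environ, start_response):
    body = b"Hello from the monolith!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# Serve it (assuming gunicorn is installed and this file is app.py):
#   gunicorn app:application --bind 127.0.0.1:8000 --workers 3
# nginx then proxies public traffic to 127.0.0.1:8000.
```

No Docker, no Kubernetes; for a huge share of projects this is enough to ship, and you can optimize later.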
Conclusion:
1. Don't complain
2. KISS
3. Ship it first, optimize later
Photos by: Nodirbek Ergashev
Golang middle vacancy at AppliedLabs
Linkedin
Middle Golang Developer - Applied Labs | Ekaterina Tolstikh | 13 comments
We're looking for a talented Go Developer to join our team!
If you're passionate about coding and want to make an impact, this is the perfect opportunity.
Check out the link for all the details and apply today:
https://lnkd.in/dxEc7Zsq | 13 comments on…
Forwarded from Nodir's notebook
Anthropic, two months later
There are those moments when you talk to someone and think to yourself "wow, this person is smart! They will do great things, I should pay attention to them". Usually those moments are rare, but I have had so many of them here. I had some research questions, and a number of times the answer was in a paper authored by someone at Anthropic. We hear about and celebrate talented people joining from OpenAI, but we are less likely to hear about talent leaving for OpenAI, so I asked John Schulman about this: nope, there was just one case a while ago when someone left Anthropic for OpenAI. There are a number of examples where a person held a pretty senior title at another company (Engineering Director, VP, or Staff/Principal), leading tens or hundreds of people, and joined Anthropic as an Individual Contributor. Some of them even came back from retirement! Dream team.
High talent density is a massive competitive advantage. Even if a competitor is currently ahead of the game, at the end of the day, at least until AGI, it is the people that make things happen, so they represent the derivative of the company and are thus a big factor in the long-term outcome. Talented people in lower-talent-density companies, or in low-trust companies not united under one mission, are tempted by their environment to feel special about themselves, and are more likely to eventually start acting in selfish ways, such as growing their headcount for the sake of personal career growth or practicing promotion-driven development, which ultimately hurts company performance.
Talent density is sticky. It is hard to beat the sense of belonging when a smart person is surrounded by lots of other smart people, especially when money is not the top motivator. There is no other place I'd rather be. Therefore, I think this competitive advantage is durable, AKA a moat.
Assalomu alaykum! Is anyone attending ICT Week?
Come along for some networking! I'd be glad to meet you :)