LLM Social Simulations Are a Promising Research Method
@css_nlp
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
@css_nlp
Joel on Software
Ever wonder about that mysterious Content-Type tag? You know, the one you’re supposed to put in HTML and you never quite know what it should be? Did you ever get an email from your friends in…
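A quick Python illustration of the article's central point, that bytes have no meaning without a declared encoding (the sample string and encodings below are just an illustration, not taken from the article):

```python
# The same bytes mean different things under different encodings.
text = "café"
data = text.encode("utf-8")      # what actually gets stored or sent: b'caf\xc3\xa9'

print(data.decode("utf-8"))      # café   (correct: the encoding is known)
print(data.decode("latin-1"))    # cafÃ©  (mojibake: wrong encoding assumed)
# data.decode("ascii")           # UnicodeDecodeError: byte 0xc3 is not ASCII
```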
From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning
This is one of the coolest papers to come out from Jurafsky and Yann LeCun, both familiar names.
In it, they try to pin down the difference between LLMs and the human language system, and they reach some interesting conclusions.
Language models compress concepts so aggressively that their concepts end up diverging from ours as humans.
The idea is that these models essentially compress the huge amount of data we feed them; because the information is compressed it takes up less space, and at generation time that compressed information is decoded back out.
Our brains work much the same way: you might read a 1000-page book and be left with only a summary of it in your mind, and later, when you retell it, you reproduce the gist of the story, naturally with some variation.
As the mental scaffolding of human cognition, concepts enable efficient interpretation, generalization
from sparse data, and rich communication. For LLMs to transcend surface-level mimicry and achieve
more human-like understanding, it is critical to investigate how their internal representations navigate
the crucial trade-off between information compression and the preservation of semantic meaning. Do
LLMs develop conceptual structures mirroring the efficiency and richness of human thought, or do
they employ fundamentally different representational strategies?
Definitely read this paper :)
@css_nlp
An interesting tweet by Andrej Karpathy about LLMs and code generation
https://x.com/karpathy/status/1930305209747812559
@css_nlp
You could see it as there being two modes in creation. Borrowing GAN terminology:
1) generation and
2) discrimination.
e.g. painting - you make a brush stroke (1) and then you look for a while to see if you improved the painting (2). these two stages are interspersed in pretty much all creative work.
Second point. Discrimination can be computationally very hard.
- images are by far the easiest. e.g. image generator teams can create giant grids of results to decide if one image is better than the other. thank you to the giant GPU in your brain built for processing images very fast.
- text is much harder. it is skimmable, but you have to read, it is semantic, discrete and precise so you also have to reason (esp in e.g. code).
- audio is maybe even harder still imo, because it forces a time axis so it's not even skimmable. you're forced to spend serial compute and can't parallelize it at all.
You could say that in coding LLMs have collapsed (1) to ~instant, but have done very little to address (2). A person still has to stare at the results and discriminate if they are good. This is my major criticism of LLM coding in that they casually spit out *way* too much code per query at arbitrary complexity, pretending there is no stage 2. Getting that much code is bad and scary. Instead, the LLM has to actively work with you to break down problems into little incremental steps, each more easily verifiable. It has to anticipate the computational work of (2) and reduce it as much as possible. It has to really care.
This leads me to probably the biggest misunderstanding non-coders have about coding. They think that coding is about writing the code (1). It's not. It's about staring at the code (2). Loading it all into your working memory. Pacing back and forth. Thinking through all the edge cases. If you catch me at a random point while I'm "programming", I'm probably just staring at the screen and, if interrupted, really mad because it is so computationally strenuous. If we only get much faster 1, but we don't also reduce 2 (which is most of the time!), then clearly the overall speed of coding won't improve (see Amdahl's law).
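A back-of-the-envelope version of that Amdahl's law point (the 80/20 split below is only an illustrative assumption, not Karpathy's number):

```python
# Amdahl's law: if a fraction p of coding time is discrimination (staring at /
# verifying code) and only generation is sped up by a factor s, the overall
# speedup is capped at 1 / p no matter how large s gets.
def overall_speedup(p_discrimination: float, s_generation: float) -> float:
    return 1.0 / (p_discrimination + (1.0 - p_discrimination) / s_generation)

# Suppose 80% of "coding" is really reviewing/verifying:
print(overall_speedup(0.8, 10))    # ~1.22x
print(overall_speedup(0.8, 1e9))   # ~1.25x, the ceiling even with instant generation
```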
@css_nlp
A handy package for fixing the JSON output of language models:
🔗 https://github.com/mangiucugna/json_repair
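A minimal sketch of how it's typically used, assuming the repair_json entry point described in the project's README (check the README for the exact, current API):

```python
# pip install json-repair
from json_repair import repair_json

# Typical LLM output: single quotes, a trailing comma, and a missing closing brace.
broken = "{'name': 'Ada', 'tags': ['nlp', 'llm',], 'active': true"

fixed = repair_json(broken)   # returns a syntactically valid JSON string
print(fixed)                  # {"name": "Ada", "tags": ["nlp", "llm"], "active": true}
```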
@css_nlp
GitHub
GitHub - mangiucugna/json_repair: A python module to repair invalid JSON from LLMs
Langfuse
Traces, evals, prompt management and metrics to debug and improve your LLM application. Integrates with Langchain, OpenAI, LlamaIndex, LiteLLM, and more.
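No code came with this post; as a rough sketch, tracing a function with the Langfuse Python SDK's observe decorator looks approximately like this (the import path has changed between SDK versions, and the credential setup below is assumed, so treat this only as an approximation):

```python
# pip install langfuse
# requires LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY environment variables
from langfuse.decorators import observe  # v2-style import; newer SDKs may expose it elsewhere

@observe()  # records this call as a trace in Langfuse
def summarize(text: str) -> str:
    # ... call your LLM here and return its output ...
    return text[:100]

summarize("Langfuse collects traces, evals, and metrics for LLM apps.")
```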