AI does have the potential to make much better dictionaries
Unlikely to be done well if controlled by big tech, like Google's
But the potential is there
DoomPosting
$Ghibli Doubled
Sending
Becoming a bit of a meme that gmgn labels nearly every major coin as 100% rug
Junk system
How AI draws them when that caption is included — all black
-vs-
How it draws them when it's not included — mixed
Most effective way to spot deepfakes of people?
= Reflections in the eyes
Or at least this technique has worked extremely well up to now
Article & Study
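The technique in the cited study works by checking whether the specular highlights in the two corneas are consistent: real photos light both eyes from the same sources, while generated faces often place the reflections inconsistently. A minimal numpy sketch of that idea, with a made-up brightness threshold and toy data (not the paper's actual pipeline):

```python
import numpy as np

def highlight_mask(eye_crop: np.ndarray, thresh: float = 0.9) -> np.ndarray:
    """Binary mask of specular highlights (brightest pixels) in an eye crop.

    eye_crop: 2-D array of grayscale intensities in [0, 1].
    The 0.9 threshold is an illustrative guess, not from the study.
    """
    return eye_crop >= thresh

def reflection_iou(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """IoU of the highlight masks of two aligned eye crops.

    Consistent lighting -> overlapping highlights -> high IoU.
    A low IoU flags the image as a possible fake.
    """
    a, b = highlight_mask(left_eye), highlight_mask(right_eye)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # no highlights in either eye: nothing to contradict
    return np.logical_and(a, b).sum() / union

# toy example: identical highlight in both eyes vs. a displaced one
eye = np.zeros((8, 8)); eye[3:5, 3:5] = 1.0
shifted = np.zeros((8, 8)); shifted[0:2, 0:2] = 1.0
print(reflection_iou(eye, eye))      # 1.0 -> consistent, looks real
print(reflection_iou(eye, shifted))  # 0.0 -> inconsistent, suspect
```

A real pipeline would first need face/iris detection to extract and align the eye crops; this only sketches the consistency check itself.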
So does OpenAI's new image generation model rig the eyes, using a special separate model
— like they did with text?
Seems like it
(1) The eyes never seem to be pointing in the same direction as in the original reference image — pretty sus
(2) This seems to be the first AI image generation model where the reflections in the eyes almost always perfectly match
All signs pointing to rigged
Notice how their official GPT-4o image generation announcement says "models" instead of "model"
Their title "4o Image Generation in ChatGPT and Sora"
= Further evidence that the image generation is a separate model, used as a tool by their LLM models
Why do it as a separate model?
= 99% cheapness & speed
Much cheaper and faster to do multiple narrowly-focused models
Why is it a cheat?
You're sacrificing IQ
Huge IQ gains were made from what, 10 years ago, the AI guys called "transfer learning"
I.e. figuring out where tasks in different domains are analogous, or where part of a problem is best solved by thinking of it in another modality
I.e. the same things smart humans do
Ofc you're no longer allowed to point this out, because it goes against a fundamental tenet of the lefties
Lefty belief — everyone is equally valuable and skilled, just each in a different specialized way, and if it doesn't seem so, that's just because you haven't found the special area where the person excels
Reality — those who extremely excel in one area tend to be the people who excel in ALL areas, and who very quickly pick up expertise in additional areas, e.g. as is shown with IQ
So yes,
The huge draw of an everything omni-model — is the massive IQ gain potential that inevitably comes with it
Some will say that many specialized models are the future — Lefty BS, except when it comes to latency, but work where extremely low latency is critical is less common than you'd think
Unfortunately, it seems a single SOTA omni model is still too expensive and slow for OpenAI, for now
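The "separate model used as a tool by the LLM" setup described above can be sketched as a simple router. Every name here (ChatModel, ImageModel, their methods) is a made-up stand-in, not OpenAI's actual architecture or API:

```python
# Illustrative sketch of an LLM dispatching to a narrowly-focused
# specialist model instead of handling every modality itself.

class ChatModel:
    """Stand-in for the LLM; stub logic only."""

    def wants_image(self, prompt: str) -> bool:
        # a real LLM decides this itself; a keyword check stands in here
        return "draw" in prompt.lower()

    def to_image_prompt(self, prompt: str) -> str:
        # rewrite the user request into a prompt for the image tool
        return prompt.replace("draw", "").strip()

    def reply(self, prompt: str) -> str:
        return f"[text answer to: {prompt}]"

class ImageModel:
    """Stand-in for the separate image generation model."""

    def generate(self, prompt: str) -> str:
        return f"[image of: {prompt}]"

def respond(prompt: str, chat: ChatModel, image: ImageModel) -> str:
    """One cheap specialist per modality, routed by the LLM."""
    if chat.wants_image(prompt):
        return image.generate(chat.to_image_prompt(prompt))
    return chat.reply(prompt)

print(respond("draw a cat", ChatModel(), ImageModel()))      # [image of: a cat]
print(respond("what is 2+2?", ChatModel(), ImageModel()))    # [text answer to: what is 2+2?]
```

The trade-off the post describes lives in this boundary: the image specialist never sees the chat model's full context, only the rewritten prompt, which is where the cross-domain "IQ" is lost.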
Forwarded from Chat GPT
General intelligence is all you need for LLMs: General intelligence 2x more predictive of AI abilities than in humans.
Man gives LLMs IQ tests, and finds the general intelligence factor, g, measured by this testing to be incredibly powerful in predicting LLMs' abilities.
Whereas in humans this g factor typically accounts for 40% to 50% of the between-individual performance differences on a given cognitive test — in LLMs, general intelligence accounted for 85.4% = twice as strong as in humans!
He then goes on to rank the AI benchmarks commonly used for LLMs today, and finds many of them incredibly g-loaded — in other words, doing well on them is heavily dependent on general intelligence.
Finally, he ranks LLMs by general intelligence, and finds a moderate/strong positive relationship between model size and g. (Would probably find an extremely strong correlation if only model size was varied, and nothing else about the training process.)
If you had to choose 1 measure, general intelligence really is all you need.
Paper
g factor
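Extracting a g factor from benchmark scores can be sketched as taking the first principal component of the standardized score matrix. The scores below are invented, and the paper's exact factor-analysis method may differ:

```python
import numpy as np

# rows = models, columns = benchmark scores (all numbers invented)
scores = np.array([
    [0.90, 0.80, 0.85, 0.70],
    [0.60, 0.50, 0.55, 0.40],
    [0.30, 0.20, 0.30, 0.20],
])

# standardize each benchmark so no single scale dominates
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)

# eigendecomposition of the covariance matrix; eigh returns
# eigenvalues in ascending order, so the last one is the largest
eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
g_loadings = eigvecs[:, -1]                  # first principal component = g
var_explained = eigvals[-1] / eigvals.sum()  # analogue of the 85.4% figure
g_scores = z @ g_loadings                    # each model's g score

print(f"g explains {var_explained:.0%} of benchmark variance")
```

With benchmarks this strongly correlated, one factor soaks up nearly all the variance, which is the paper's headline result in miniature.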
Transfer learning,
i.e. where skills and knowledge from one domain are applicable to another
Like a top pro athlete quickly being able to cross over and dominate other sports, despite having played the new sport for very little time
— Undeniably proven to be the reality via AI experiments at least a decade ago
But never talked about much anymore, largely because it's a short step from here to coming into conflict with MANY of the lefties' core beliefs:
+ Nurture > nature — nope, some people quickly come to dominate each new thing they try
+ Everyone equally good in their own unique ways — nope, some quickly come to dominate everything, even dominating others who'd spent a lifetime trying to master what they're doing
Lefties rekt
Lots of fascinating things proven beyond a doubt by simple AI experiments, a decade or more ago
All largely buried, because it upsets the commies
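The transfer effect itself can be shown in a toy experiment: a linear "feature" learned on a data-rich task A is reused on a related task B with almost no data, and beats training from scratch on the same tiny budget. Everything below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)  # latent direction shared by both tasks

# Task A: plenty of data -> learn the "feature" (a linear projection)
X_a = rng.normal(size=(200, 5))
y_a = X_a @ w_true
w_feat, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)  # recovers w_true

# Task B: related target (2 * latent + 1), but only 3 training samples
X_b = rng.normal(size=(3, 5))
y_b = 2 * (X_b @ w_true) + 1
X_test = rng.normal(size=(100, 5))
y_test = 2 * (X_test @ w_true) + 1

def fit_predict(train_feats, test_feats):
    """Least-squares fit (with intercept) on task B's 3 samples."""
    A = np.column_stack([train_feats, np.ones(len(train_feats))])
    coef, *_ = np.linalg.lstsq(A, y_b, rcond=None)
    return np.column_stack([test_feats, np.ones(len(test_feats))]) @ coef

# transfer: 1-D feature learned on task A; scratch: raw 5-D inputs
err_transfer = np.mean((fit_predict(X_b @ w_feat, X_test @ w_feat) - y_test) ** 2)
err_scratch = np.mean((fit_predict(X_b, X_test) - y_test) ** 2)
print("transfer MSE:", err_transfer, " scratch MSE:", err_scratch)
```

The scratch model has 6 parameters but only 3 equations, so it memorizes its samples and generalizes poorly; the transferred feature reduces task B to fitting 2 parameters, which 3 samples handle easily.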
Japan warns of "significant impact" from new U.S. tariffs
This new AI gives every single woman I've input a breast reduction