DoomPosting
At first glance, looks like a bunch of rules for use by women, on women
Perhaps not a coincidence that the first site on Google listing them is pink
Totally unprioritized list and way too long, in the usual left style
But open to see if there's anything big I'm missing
"consult the literature"
"do your research"
"you don't have media literacy"
"you just have to read marx's books"
↑ Usual lefty BS tactic to camouflage that there is no real point
If you can't prioritize your points by importance, you're BSing
Inverted pyramid, do you speak it
Photo
99% passive-aggressive, blatant manipulator, woman tactics,
↑ Where those who use them on others think they're being super sly,
But really nearly everyone you're using them on sees exactly what you're doing
↑ And the only reason they never called you out on it is because they correctly read you as a manipulator psycho who'd do some nonsense if they ever did
May work if you're deep into woman-controlled parts of the world,
Though I'd suggest never ever getting yourself into that situation in the first place
Ok, suppose it could be a somewhat useful guide for recognizing what feminine-style manipulators are doing, for those unfamiliar with these tactics
Unfamiliar with the longhouse?
No, extremely familiar with the longhouse and its tactics
Totally familiar
And choose to reject it
No, I will not embody it to make my life easier
Not a snake
Reject the longhouse, burn the snakes
AI does have the potential to make much better dictionaries
Unlikely to be done well if controlled by big tech, like Google's
But the potential is there
$Ghibli Doubled
Sending
Becoming a bit of a meme that gmgn labels nearly every major coin as 100% rug
Junk system
How AI draws them when that caption is included = all black
-vs-
How it draws them when it's not included = mixed
Most effective way to spot deepfakes of people?
= Reflections in the eyes
Or at least this technique has worked extremely well up to now
Article & Study
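The idea behind the technique can be sketched in a few lines: a real photo's two corneas reflect the same light sources, so the specular highlights in the two eye crops should land in roughly the same spot. A toy sketch only; the patch format, tolerance, and function names are all illustrative, not from the cited study:

```python
# Toy eye-reflection consistency check (illustrative, not the study's method).

def highlight_position(patch):
    """Return (row, col) of the brightest pixel in a 2D grayscale patch."""
    best, pos = -1, (0, 0)
    for r, row in enumerate(patch):
        for c, v in enumerate(row):
            if v > best:
                best, pos = v, (r, c)
    return pos

def reflections_consistent(left_eye, right_eye, tol=2):
    """Flag a face as suspicious if the two highlights disagree by > tol pixels."""
    lr, lc = highlight_position(left_eye)
    rr, rc = highlight_position(right_eye)
    return abs(lr - rr) <= tol and abs(lc - rc) <= tol

# Toy 5x5 patches: matching highlights (plausible real photo) ...
real_l = [[0] * 5 for _ in range(5)]; real_l[1][3] = 255
real_r = [[0] * 5 for _ in range(5)]; real_r[1][3] = 250
# ... and a mismatched highlight (the deepfake tell).
fake_r = [[0] * 5 for _ in range(5)]; fake_r[4][0] = 255

print(reflections_consistent(real_l, real_r))  # True
print(reflections_consistent(real_l, fake_r))  # False
```

Real detectors compare full highlight shapes and colors, not just the brightest pixel, but the consistency test is the same shape.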
So does OpenAI's new image generation model rig the eyes, using a special separate model, like they did with text?
Seems like it
(1) The eyes never seem to be pointing in the same direction as in the original reference image, pretty sus
(2) This seems to be the first AI image generation model where the reflections in the eyes almost always perfectly match
All signs pointing to rigged
Notice how their official GPT-4o image generation announcement says "models" instead of "model"
Their title "4o Image Generation in ChatGPT and Sora"
= Further evidence that the image generation is a separate model, used as a tool by their LLM models
Why do it as a separate model?
= 99% cheapness & speed
Much cheaper and faster to do multiple narrowly-focused models
Why is it a cheat?
You're sacrificing IQ
Huge IQ gains made from what, 10 years ago, the AI guys called "transfer learning"
I.e. figuring out where tasks in different domains are analogous, or where part of a problem can best be solved by thinking of it in another modality
I.e. same things smart humans do
Ofc you're no longer allowed to point this out, because it goes against a fundamental tenet of the lefties
Lefty belief = everyone equally valuable and skilled, just each in a different specialized way, and if it doesn't seem so then that's just because you haven't found the special area where the person excels
Reality = those who extremely excel in one area tend to be the people who excel in ALL areas, and very quickly pick up expertise in additional areas, e.g. as is shown with IQ
So yes,
The huge draw of an everything omni-model is the massive IQ gain potential that inevitably comes from that
Some will say many specialized models are the future = Lefty BS, except when it comes to latency, and work where extremely low latency is critical is less common than you'd think
Unfortunately, seems a single SOTA omni model is still too expensive and slow for OpenAI, for now
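The transfer-learning point can be shown with a toy example (all data and numbers made up): fit a one-parameter model on task A, then reuse the learned weight as the starting point for a related task B. The warm start beats training B from scratch on the same small budget, which is the "shared structure across domains" argument in miniature:

```python
# Toy transfer learning: warm-starting a related task from learned weights.

def train(xs, ys, w=0.0, lr=0.01, steps=10):
    """Plain gradient descent on mean squared error for y ~ w * x."""
    n = len(xs)
    for _ in range(steps):
        grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

def error(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys_a = [2.0 * x for x in xs]   # task A: slope 2.0
ys_b = [2.1 * x for x in xs]   # task B: related task, slope 2.1

w_a = train(xs, ys_a, steps=200)            # learn task A thoroughly
cold = train(xs, ys_b, w=0.0, steps=10)     # task B from scratch, tiny budget
warm = train(xs, ys_b, w=w_a, steps=10)     # task B warm-started from A

print(error(warm, xs, ys_b) < error(cold, xs, ys_b))  # True
```

One scalar weight stands in for billions of parameters, but the mechanism is the same: knowledge from one domain is a better prior for a related one than starting cold.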
Forwarded from Chat GPT
General intelligence is all you need for LLMs: general intelligence is 2x more predictive of abilities in LLMs than in humans.
Man gives LLMs IQ testing, finds the general intelligence factor, g, measured by this testing to be incredibly powerful in predicting LLMs' abilities.
Whereas in humans this g factor typically accounts for 40% to 50% of the between-individual performance differences on a given cognitive test, in LLMs general intelligence accounted for 85.4% = twice as strong as in humans!
He then goes on to rank AI benchmarks commonly used for LLMs today, and finds many of them incredibly g-loaded; in other words, doing well on them is heavily dependent on general intelligence.
Finally, he ranks LLMs by general intelligence, and finds a moderate/strong positive relationship between model size and g. (Would probably find an extremely strong correlation if only model size were varied and nothing else about the training process changed.)
If you had to choose 1 measure, general intelligence really is all you need.
Paper
g factor
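For flavor, the kind of analysis behind a g factor can be approximated by the leading eigenvalue of the benchmark correlation matrix (proper factor analysis, as in the paper, differs in details). A pure-Python sketch on synthetic scores, where every number and name is invented: one latent ability per model plus benchmark-specific noise:

```python
# Toy "g factor" extraction: share of benchmark variance on one common factor.
import math
import random

random.seed(0)

# 12 synthetic "models", each with one latent ability (the toy g) ...
abilities = [random.gauss(0, 1) for _ in range(12)]
# ... scored on 5 "benchmarks" = ability plus a little benchmark-specific noise.
scores = [[a + 0.3 * random.gauss(0, 1) for _ in range(5)] for a in abilities]

def standardize(col):
    n = len(col)
    mean = sum(col) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in col) / n)
    return [(x - mean) / sd for x in col]

n, k = len(scores), len(scores[0])
cols = [standardize([row[j] for row in scores]) for j in range(k)]

# Correlation matrix of the benchmarks (diagonal = 1, trace = k).
R = [[sum(cols[i][t] * cols[j][t] for t in range(n)) / n for j in range(k)]
     for i in range(k)]

# Power iteration for the leading eigenvalue = variance on the first factor.
v = [1.0] * k
for _ in range(100):
    w = [sum(R[i][j] * v[j] for j in range(k)) for i in range(k)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]
lam = sum(v[i] * sum(R[i][j] * v[j] for j in range(k)) for i in range(k))

# Share of total benchmark variance carried by the single common factor.
print(round(lam / k, 2))
```

With one strong latent ability the first factor soaks up most of the variance, which is the shape of the paper's 85.4% result; with independent specialized skills the share would drop toward 1/k.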