Fast. Agile. Glowing. No identifiable propulsion, no flight surfaces, no logic.
We've now watched the U.S. military track dozens of these across two decades, on every sensor available, in every theater of operation.
They don't know what these are.
So genuinely... what do you think it is?
This object does something none of the others do.
It disappears and reappears, not because it moves out of frame, but while the sensor is actively tracking it. Irregularly. Intermittently. Like a signal dropping in and out.
Then it jumps position when the sensor switches imaging modes, visible on the left in one spectrum, reappearing on the right in another.
Same object. Different location depending on how you look at it.
A U.S. military sensor tracked an object making 90-degree turns at 80 mph near the ocean surface.
The sensor locked it with a targeting reticle at minute one. The same system used to track missiles.
The object became indistinguishable from the background and the sensor lost it entirely.
Not because it flew away. Because it became invisible to military infrared optics.
The sensor spent the last 30 seconds "rapidly cycling zoom levels and contrast thresholds" trying to find it again.
It couldn't.
Did Grok kill Community Notes?
It sure seems like it!
After it became possible to ask Grok questions on the timeline, new Community Notes sign-ups plummeted.
Left: the watermark GPT Image 2 embeds into every image it generates.
Right: SynthID, the fingerprint Google bakes into every Nano Banana and Gemini image.
Invisible to the human eye. Applied during generation, not added after. Designed to survive screenshots, crops, and compression.
Most people using these tools daily have no idea their output is fingerprinted at the pixel level. Every major AI image generator now tags what it produces, and the tag travels with the image wherever it ends up.
You can verify this yourself. Content Credentials Verify detects C2PA markers from OpenAI images. Gemini detects SynthID if you upload an image directly to it.
The images will keep getting more realistic. The identification tech is keeping pace.
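The C2PA side of that check is something you can sketch yourself: per the C2PA specification, Content Credentials in a JPEG travel as JUMBF boxes carried in APP11 segments. Below is a minimal stdlib-only sketch; the function name and the substring heuristic are my own simplifications, and a real verifier like Content Credentials Verify parses and cryptographically validates the full manifest rather than just spotting it.

```python
import struct

def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Scan a JPEG's APP11 (0xFFEB) segments for JUMBF box markers,
    which is where C2PA Content Credentials are embedded."""
    if jpeg_bytes[:2] != b"\xff\xd8":            # SOI marker: must be a JPEG
        raise ValueError("not a JPEG")
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:                # left the marker-segment area
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                       # SOS: entropy-coded data follows
            break
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        payload = jpeg_bytes[i + 4:i + 2 + length]
        # APP11 segment whose payload shows JUMBF/C2PA signatures
        if marker == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
            return True
        i += 2 + length                          # skip marker + segment body
    return False
```

This only answers "is a manifest present", not "is it authentic": a stripped or re-encoded image may lose the APP11 segments even though SynthID-style pixel watermarks survive, which is exactly why the two mechanisms coexist.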
A man in a yellow shirt gets angry and tries to slash the tires of a big truck with a blade. The second he stabs one, the pressurized tire explodes violently and knocks him straight to the ground. Instant karma at its finest.
Of course they're using a photo of the King for this headline
In his defense, he said he wanted to "try something new"
Perhaps the electric chair?
> Be a lawyer
> Brother starts a business and gets sued by his investors
> You represent him
> Get AI to write the whole brief
> It hallucinates, invents entire cases
> Get fined $2,500 and ordered to take ethics classes
> Fast forward one year: submit legal documents in the same case
> Used AI again and filed hallucinations with the same judge, fined $5,000 this time
> Argue that since you've already paid $73,500 in completely unrelated fines during the case, this fine should be only $950
> Judge denies
> Brother loses the case and has to pay $1.4 million
> Brother counter-sues for $500 million
> Judge laughs and rips up the suit
I wonder how much she'll want after she spends the 300k on food in six months
This is why it's easier to get laid when you already have a girlfriend.
When I was with my bisexual ex-girlfriend, I actually found it easier to get threesomes (two women) in Australia than sex with one girl when I was single.
The Argentine app some women use to vet dates before going out started in Buenos Aires eight months ago as a private project among friends.
Today it has 22,000 users.
It works like this:
You upload screenshots of the chat of the guy you're talking to, and an AI analyzes patterns of manipulation, narcissism, passive aggression, love bombing, likely lies, and chances of being ghosted.
Then it returns a score.
"Emotional risk: 78/100"
"High probability of infidelity"
"Profile compatible with emotional dependency"
"Language similar to men previously reported"
Premium users can even connect the guy's Instagram and let the model analyze follows, likes, activity times, and changes in behavior.
The creator is 27 years old and studied psychology at the University of Buenos Aires.
She says the idea came about after a friend ended up hospitalized due to domestic violence.
Lawyers want to sue her, while female users love her. Several men have started asking for screenshots of their own results on the app before accepting a date.
The app is called FirstRedFlag and has a waiting list.
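The pipeline described above (chat text in, pattern matches found, rolled up into a 0-100 risk score) can be mimicked with a toy scorer. Everything here is illustrative: FirstRedFlag's actual model, cue list, and weighting are not public, so the patterns, the function name, and the scoring rule are all placeholders of mine.

```python
import re

# Hand-picked example cues; a real system would use a trained model,
# not a keyword list. Categories echo the ones the post mentions.
RED_FLAG_PATTERNS = {
    "love_bombing": re.compile(r"\b(soulmate|never felt this|meant to be)\b", re.I),
    "guilt_tripping": re.compile(r"\b(after all i('ve| have) done|you owe me)\b", re.I),
    "controlling": re.compile(r"\b(who were you with|why didn'?t you answer)\b", re.I),
}

def emotional_risk(messages: list[str]) -> int:
    """Toy 0-100 score: the share of messages matching any red-flag cue."""
    if not messages:
        return 0
    hits = sum(
        1 for m in messages
        if any(p.search(m) for p in RED_FLAG_PATTERNS.values())
    )
    return round(100 * hits / len(messages))
```

Even this toy version shows why lawyers are circling: the output looks authoritative ("Emotional risk: 78/100") while the underlying signal can be as crude as a keyword match.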
One of the documents declassified today reports that the US maintains a humanoid with feline characteristics at Area 51. I asked the AI to strictly follow what is written in the document's description and produce an image.