DoomPosting
8.18K subscribers
95.7K photos
31.7K videos
6 files
96.9K links
Degens Deteriorating
Fast. Agile. Glowing. No identifiable propulsion, no flight surfaces, no logic.

We've now watched the U.S. military track dozens of these across 2 decades, on every sensor available, in every theater of operation.

They don't know what it is.

So genuinely... what do you think it is?

πŸ„³πŸ„ΎπŸ„ΎπŸ„ΌπŸ„ΏπŸ€–πŸ…‚πŸ…ƒπŸ„ΈπŸ„½πŸ„Ά
This object does something none of the others do.

It disappears and reappears, not because it moves out of frame, but while the sensor is actively tracking it. Irregularly. Intermittently. Like a signal dropping in and out.

Then it jumps position when the sensor switches imaging modes, visible on the left in one spectrum, reappearing on the right in another.

Same object. Different location depending on how you look at it.

πŸ„³πŸ„ΎπŸ„ΎπŸ„ΌπŸ„ΏπŸ€–πŸ…‚πŸ…ƒπŸ„ΈπŸ„½πŸ„Ά
A U.S. military sensor tracked an object making 90-degree turns at 80 mph near the ocean surface.

The sensor locked it with a targeting reticle at minute one. The same system used to track missiles.

The object became indistinguishable from the background and the sensor lost it entirely.

Not because it flew away. Because it became invisible to military infrared optics.

The sensor spent the last 30 seconds "rapidly cycling zoom levels and contrast thresholds" trying to find it again.

It couldn't.

πŸ„³πŸ„ΎπŸ„ΎπŸ„ΌπŸ„ΏπŸ€–πŸ…‚πŸ…ƒπŸ„ΈπŸ„½πŸ„Ά
Did Grok kill Community Notes?

It sure seems like it!

After it became possible to ask Grok questions on the timeline, new Community Notes sign-ups plummeted.

πŸ„³πŸ„ΎπŸ„ΎπŸ„ΌπŸ„ΏπŸ€–πŸ…‚πŸ…ƒπŸ„ΈπŸ„½πŸ„Ά
✍3πŸ‘2
Left: the watermark GPT Image 2 embeds into every image it generates.

Right: SynthID, the fingerprint Google bakes into every Nano Banana and Gemini image.

Invisible to the human eye. Applied during generation, not added after. Designed to survive screenshots, crops, and compression.

Most people using these tools daily have no idea their output is fingerprinted at the pixel level. Every major AI image generator now tags what it produces, and the tag travels with the image wherever it ends up.

You can verify this yourself. Content Credentials Verify detects C2PA markers from OpenAI images. Gemini detects SynthID if you upload an image directly to it.

The images will keep getting more realistic. The identification tech is keeping pace.
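Neither Google nor OpenAI publishes how their marks actually work. But the general idea of an invisible, pixel-level watermark can be sketched with a classic spread-spectrum scheme: add a low-amplitude pseudorandom pattern derived from a secret key, then detect it later by correlating against the same pattern. This is a toy illustration of the technique in general, not SynthID or OpenAI's implementation.

```python
import numpy as np

def embed(img, key, strength=3.0):
    # Add a keyed pseudorandom +/-1 pattern at low amplitude.
    # At strength 3 out of 255, the change is imperceptible to the eye.
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=img.shape)
    return np.clip(img + strength * pattern, 0, 255)

def detect(img, key):
    # Correlate the image against the keyed pattern.
    # A clearly positive value means the watermark is present.
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=img.shape)
    return float(((img - img.mean()) * pattern).mean())

rng = np.random.default_rng(0)
original = rng.uniform(0, 255, size=(256, 256))   # stand-in grayscale image
marked = embed(original, key=42)
degraded = marked + rng.normal(0, 1.0, size=marked.shape)  # compression-like noise

print(detect(degraded, key=42))   # clearly positive: mark survives the noise
print(detect(original, key=42))   # near zero: no mark present
```

Because the pattern is spread across every pixel, mild degradation (recompression, light noise) barely dents the correlation, which is why these marks can survive screenshots. Real systems embed the signal far more robustly, e.g. in frequency space or during generation itself.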

πŸ„³πŸ„ΎπŸ„ΎπŸ„ΌπŸ„ΏπŸ€–πŸ…‚πŸ…ƒπŸ„ΈπŸ„½πŸ„Ά
A man in a yellow shirt gets angry and tries to slash the tires of a big truck with a blade. The second he stabs it, the pressurized tire explodes violently and knocks him straight to the ground. Instant karma at its finest.

πŸ„³πŸ„ΎπŸ„ΎπŸ„ΌπŸ„ΏπŸ€–πŸ…‚πŸ…ƒπŸ„ΈπŸ„½πŸ„Ά
Of course they’re using a photo of the King for this headline

πŸ„³πŸ„ΎπŸ„ΎπŸ„ΌπŸ„ΏπŸ€–πŸ…‚πŸ…ƒπŸ„ΈπŸ„½πŸ„Ά
In his defense, he said he wanted to β€œtry something new”

Perhaps the electric chair?

πŸ„³πŸ„ΎπŸ„ΎπŸ„ΌπŸ„ΏπŸ€–πŸ…‚πŸ…ƒπŸ„ΈπŸ„½πŸ„Ά
> Be retarded Jeet lawyer
> Like all Indians you got fake law degree from scam college
> Retarded brother starts business, scammed investors as is Indian tradition
> He gets sued
> You represent him because no way in hell will an Indian pay for professional services
> Get AI to write the whole brief
> Hallucinates, makes up entire cases
> Get fined $2500, forced to take ethics classes
> Fast forward one year, submit legal documents to the same case
> Used AI again and entered hallucinations again to the same judge, issued $5000 fine this time
> Argue that you've already paid $73,500 in completely unrelated fines for other mistakes during that case that you should only pay $950
> Judge denies
> Brother loses case, has to pay $1.4 million
> Brother counter-sues for $500 million
> Judge laughs and rips up the suit

You think you understand how retarded Indians are but I promise you that you don't

πŸ„³πŸ„ΎπŸ„ΎπŸ„ΌπŸ„ΏπŸ€–πŸ…‚πŸ…ƒπŸ„ΈπŸ„½πŸ„Ά
This is why it's easier to get laid when you already have a girlfriend.

When I was with my bisexual ex-girlfriend, I actually found it easier to get threesomes (two women) in Australia than sex with one girl when I was single.

πŸ„³πŸ„ΎπŸ„ΎπŸ„ΌπŸ„ΏπŸ€–πŸ…‚πŸ…ƒπŸ„ΈπŸ„½πŸ„Ά
An Argentine app some women use to vet dates before going out started in Buenos Aires eight months ago as a private project among friends.

Today it has 22,000 users.

It works like this:

You upload screenshots of your chat with the guy you're talking to, and an AI analyzes them for patterns of manipulation, narcissism, passive aggression, love bombing, likely lies, and the odds of being ghosted.

Then it returns a score.

"Emotional risk: 78/100"

"High probability of infidelity"

"Profile compatible with emotional dependency"

"Language similar to men previously reported"

Premium users can even connect the guy's Instagram and let the model analyze follows, likes, activity times, and changes in behavior.
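The app's model isn't public, so as a toy illustration of what "analyze chat patterns and return a score" could look like, here is a crude keyword-heuristic sketch. The pattern lists, the function name, and the 0-100 scale are all made up for the example; a real system would use a trained classifier, not substring matching.

```python
# Hypothetical red-flag phrases, grouped by the patterns the post mentions.
PATTERNS = {
    "love bombing": ["soulmate", "never felt this", "meant to be"],
    "manipulation": ["if you really loved me", "you made me"],
    "passive aggression": ["whatever", "forget it"],
}

def emotional_risk(transcript: str) -> int:
    # Score = share of known red-flag phrases found in the chat, scaled to 0-100.
    text = transcript.lower()
    hits = sum(kw in text for kws in PATTERNS.values() for kw in kws)
    total = sum(len(kws) for kws in PATTERNS.values())
    return round(100 * hits / total)

chat = "We're soulmates, it was meant to be. If you really loved me you'd reply faster."
print(emotional_risk(chat))  # 3 of 7 flags hit -> 43
```

Even this naive version shows why lawyers are circling: the score is a confident-looking number built from pattern matching on private messages, with no ground truth behind it.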

The creator is 27 years old and studied psychology at the University of Buenos Aires.

She says the idea came about after a friend ended up hospitalized due to domestic violence.

Lawyers want to sue her, while female users love her. Several men have started asking for screenshots of their own results on the app before accepting a date.

The app is called FirstRedFlag, and it has a waiting list.

πŸ„³πŸ„ΎπŸ„ΎπŸ„ΌπŸ„ΏπŸ€–πŸ…‚πŸ…ƒπŸ„ΈπŸ„½πŸ„Ά
One of the documents declassified today reports that at Area 51 the US maintains a humanoid with feline characteristics. I asked the AI to strictly follow the document's description and produce an image.

πŸ„³πŸ„ΎπŸ„ΎπŸ„ΌπŸ„ΏπŸ€–πŸ…‚πŸ…ƒπŸ„ΈπŸ„½πŸ„Ά