"There’s no reason that gig workers who are facing algorithmic wage discrimination couldn’t install a counter-app that co-ordinated among all the Uber drivers to reject all jobs unless they reach a certain pay threshold. No reason except felony contempt of business model, the threat that the toolsmiths who built that counter-app would go broke or land in prison, for violating DMCA 1201, the Computer Fraud and Abuse Act, trademark, copyright, patent, contract, trade secrecy, nondisclosure and noncompete or, in other words, “IP law”.
IP isn’t just short for intellectual property. It’s a euphemism for “a law that lets me reach beyond the walls of my company and control the conduct of my critics, competitors and customers”. And “app” is just a euphemism for “a web page wrapped in enough IP to make it a felony to mod it, to protect the labour, consumer and privacy rights of its user”."
https://www.ft.com/content/6fb1602d-a08b-4a8c-bac0-047b7d64aba5
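To make Doctorow's hypothetical concrete: the core logic of such a counter-app would be almost trivially simple. The sketch below is purely illustrative; the `Offer` record, the per-mile pay floor and the `should_accept` function are all invented here rather than taken from any real tool, which is rather the point: the barrier is legal, not technical.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    """Hypothetical ride offer, as a driver's phone might present it."""
    miles: float
    payout_dollars: float

# A floor the drivers have collectively agreed on, e.g. $2.00 per mile.
PAY_FLOOR_PER_MILE = 2.00

def should_accept(offer: Offer) -> bool:
    """Reject any job that pays below the agreed per-mile floor."""
    return offer.payout_dollars / offer.miles >= PAY_FLOOR_PER_MILE

# A 10-mile trip paying $12 works out to $1.20/mile and is rejected.
print(should_accept(Offer(miles=10.0, payout_dollars=12.0)))  # False
```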
Humans now share the web equally with bots, according to a major new report – as some fear that the internet is dying.
In recent months, the so-called “dead internet theory” has gained new popularity. It suggests that much of the content online is in fact automatically generated, and that the number of humans on the web is dwindling in comparison with bot accounts.
Now a new report from cyber security company Imperva suggests that it is increasingly becoming true. Nearly half, 49.6 per cent, of all internet traffic came from bots last year, its “Bad Bot Report” indicates.
That is up two percentage points on the year before, and is the highest figure since the report began in 2013.
In some countries, the picture is worse. In Ireland, 71 per cent of internet traffic is automated, it said.
“Automated bots will soon surpass the proportion of internet traffic coming from humans, changing the way that organizations approach building and protecting their websites and applications,” said Nanhi Singh, general manager for application security at Imperva. “As more AI-enabled tools are introduced, bots will become omnipresent.”
https://www.independent.co.uk/tech/dead-internet-web-bots-humans-b2530324.html
I also want you to realize that anything bad you see on the platform is a symptom of Mark Zuckerberg’s unwillingness to rate-limit or sufficiently moderate it. Logically speaking, one would think that Meta would want you to have a high-quality Facebook experience, pruning content that is incendiary, spammy, scammy or unhelpful, or at the very least making sure that what you see comes primarily from those within your own network, but when your only concern is growth, content moderation is more of an emergency measure.
And to be clear, this is part of Meta’s cultural DNA. In an interview with journalist Jeff Horwitz for his book Broken Code, Facebook’s former VP of Ads and Partnerships Brian Boland said that “building things is way more fun than making things secure and safe…[and] until there’s a regulatory or press fire, you don’t deal with it.”
Horwitz also notes that Meta engineers’ greatest frustration was that the company “perpetually [needed] something to fail — often fucking spectacularly — to drive interest in fixing it.” His book describes Meta’s approach to moderation as “having a light touch,” considering it “a moral virtue”: the company “wasn’t failing to supervise what users did — it was neutral.”
As I’ve briefly explained, the logic here is that the more stuff there is on Facebook or Instagram, the more likely you are to run into something you’ll interact with, even if said interaction is genuinely bad. Horwitz notes that in April 2016, Meta analyzed Facebook’s most successful political groups, finding that a third of them “routinely featured content that was racist and conspiracy-minded,” with their growth heavily driven by Facebook’s “Groups You Should Join” and “Discover” features, the algorithmic tools Facebook used to recommend content. The researcher behind the analysis added that “sixty-four percent of all extremist group joins are due to our recommendation tools.”
When the researcher took their concerns to Facebook’s “Protect and Care” team, they were told that there was nothing the team could do as “the accounts creating the content were real people, and Facebook intentionally had no rules mandating truth, balance or good faith.”
Meta, at its core, is a rot economy empire, engineered entirely to grow metrics and revenue at the expense of everything else. In practice, this means allowing almost any activity that might “grow” the platform, even if that means groups ballooning by tens or hundreds of thousands of members a day, or letting a single user friend 50 or more people in a day. It means allowing almost any content other than that which it’s legally required to police, like mutilation and child pornography, even if what it lets in makes the platform significantly worse.
https://www.wheresyoured.at/were-watching-facebook-die/
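For a sense of how small the missing guardrails are, here is a sketch of the sort of rate limit Zitron says Meta won’t apply: a per-user daily cap on friend requests. Everything in it (the cap of 50, the names, the in-memory counter) is hypothetical and illustrative only; the mechanism is a few lines, and its absence is a business decision, not an engineering constraint.

```python
from collections import defaultdict
from datetime import date

# Hypothetical cap, echoing the "50 or more friends in a day" example above.
DAILY_FRIEND_REQUEST_CAP = 50

# In-memory counter keyed by (user, day); a real system would use shared storage.
_requests_sent: defaultdict[tuple[str, date], int] = defaultdict(int)

def allow_friend_request(user_id: str) -> bool:
    """Record and allow the request only if the user is under today's cap."""
    key = (user_id, date.today())
    if _requests_sent[key] >= DAILY_FRIEND_REQUEST_CAP:
        return False  # over the cap: throttle
    _requests_sent[key] += 1
    return True
```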