NoGoolag
France Set to Roll Out Nationwide Facial Recognition ID Program

Digital identity enrollment app to be rolled out in November
Privacy, absence of consent and security among concerns raised

France is poised to become the first European country to use facial recognition technology to give citizens a secure digital identity -- whether they want it or not.

👉🏼 Read more:
https://www.bloomberg.com/news/articles/2019-10-03/french-liberte-tested-by-nationwide-facial-recognition-id-plan

#france #id #FacialRecognition #nationwide #thinkabout #why #video
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
This is how you kick facial recognition out of your town

Bans on the technology have mostly focused on law enforcement, but there’s a growing movement to get it out of school, parks, and private businesses too.

In San Francisco, a cop can’t use facial recognition technology on a person they have arrested. But a landlord can use it on a tenant, and a school district can use it on students.

This is where we find ourselves, smack in the middle of an era when cameras on the corner can automatically recognize passersby, whether they like it or not. The question of who should be able to use this technology, and who shouldn’t, remains largely unanswered in the US. So far, American backlash against facial recognition has been directed mainly at law enforcement. San Francisco and Oakland, as well as Somerville, Massachusetts, have all banned police from using the technology in the past year because the algorithms aren’t accurate for people of color and women. Presidential candidate Bernie Sanders has even called for a moratorium on police use.

Private companies and property owners have had no such restrictions, and facial recognition is increasingly cropping up in apartment buildings, hotels, and more. Privacy advocates worry that constant surveillance will lead to discrimination and have a chilling effect on free speech—and the American public isn’t very comfortable with it either. According to a recent survey by Pew Research, people in the US actually feel better about cops using facial recognition than they do about private businesses.

👉🏼 Read more:
https://www.technologyreview.com/s/614477/facial-recognition-law-enforcement-surveillance-private-industry-regulation-ban-backlash/

#surveillance #facialrecognition #lawenforcement #regulation #thinkabout
Enhancing digital privacy by hiding images from AI

Researchers have developed a new technique to keep your online photos safe from facial recognition algorithms. The research, ongoing for more than six months, targets the facial-recognition algorithms of big tech firms such as Facebook and Google. Professor Kankanhalli and her team at NUS Computer Science have developed a technique that safeguards sensitive information in photos by introducing subtle visual distortions that are almost imperceptible to humans but render selected features undetectable by known algorithms.
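The article describes the constraint such techniques work under: the distortion must stay below a tiny per-pixel budget so humans cannot see it. The sketch below illustrates only that imperceptibility constraint with random noise; the budget value is an assumption for illustration, and the actual NUS method chooses its perturbation adversarially rather than randomly.

```python
# Minimal sketch of the "subtle visual distortion" constraint described
# above: a perturbation clipped to a tiny per-pixel budget, so the photo is
# almost unchanged to the human eye. EPSILON and the random noise are
# illustrative assumptions, not the published technique.
import numpy as np

rng = np.random.default_rng(42)
image = rng.uniform(0.0, 1.0, size=(64, 64, 3))  # stand-in for a photo in [0, 1]

EPSILON = 2 / 255  # per-pixel perturbation budget (invisible to humans)

# A real method would optimize this perturbation against known recognition
# algorithms; here we just draw noise and clip it to the budget.
perturbation = np.clip(rng.normal(scale=0.01, size=image.shape), -EPSILON, EPSILON)
protected = np.clip(image + perturbation, 0.0, 1.0)

max_change = float(np.abs(protected - image).max())
print(max_change <= EPSILON)  # every pixel moved by at most 2/255
```

The key point is that the protection lives entirely inside that tiny budget: whatever features the algorithm disrupts, no pixel ever moves far enough for a person to notice.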

https://news.nus.edu.sg/research/enhancing-digital-privacy-hiding-images-ai

#AI #facialRecognition
PimEyes - A Polish company is abolishing our anonymity

Research by netzpolitik.org shows the potential for abuse of PimEyes, a free search engine for 900 million faces. Anyone with photos on the Internet could already be part of its database.

Dylan smiles into the camera, arm in arm with the other guests of a queer boat party. Behind them, glasses glisten on the shelves of a bar. Eight years ago a party photographer uploaded this snapshot to the internet. Dylan had long since forgotten it - until today, because with a reverse search engine for faces, anyone can find this old party photo of Dylan. All they have to do is upload his profile picture from the Xing career network, free of charge and without registration. But Dylan wants to keep his private and professional life separate: by day he works as a banker in Frankfurt am Main.

The search engine is called PimEyes. It analyses masses of faces on the Internet for individual characteristics and stores the biometric data. When Dylan tests the search engine with his profile picture, it compares the picture against the database and returns similar faces, along with a preview image and the domain where each picture was found. Dylan was recognized even though, unlike today, he had no beard back then.

Our research shows: PimEyes is a wholesale attack on anonymity, and possibly illegal. A single snapshot may be enough to identify a stranger using PimEyes. The search engine does not directly provide the name of the person you are looking for, but if it finds matching faces, the websites it displays can in many cases be used to find out their name, profession and much more.
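The mechanism the article describes - extracting individual characteristics from faces, storing them, and ranking stored photos by similarity to a query - can be sketched in a few lines. Everything below is synthetic: the URLs, the random 128-dimensional vectors, and the noise model are assumptions standing in for a real face-embedding model and crawled database.

```python
# Illustrative sketch of how a reverse face search engine might match photos:
# each crawled image is reduced to an embedding vector, and a query photo is
# ranked against the stored vectors by cosine similarity. All data here is
# synthetic; real systems use deep face-embedding models, not random vectors.
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend database: one embedding per crawled photo, keyed by source URL.
database = {
    "partyphotos.example/dylan.jpg": rng.normal(size=128),
    "other.example/stranger.jpg":    rng.normal(size=128),
}

# A new photo of the same person yields a nearby embedding (simulated here
# by adding small noise to the stored vector).
query = database["partyphotos.example/dylan.jpg"] + rng.normal(scale=0.05, size=128)

# Rank every stored photo by similarity to the query embedding.
matches = sorted(
    ((cosine_similarity(query, emb), url) for url, emb in database.items()),
    reverse=True,
)
best_score, best_url = matches[0]
print(best_url, round(best_score, 3))
```

This also shows why the beard did not matter: matching happens on abstract feature vectors, not raw pixels, so superficial changes barely move the embedding.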

👀 👉🏼 🇬🇧 PimEyes - A Polish company is abolishing our anonymity
https://netzpolitik.org/2020/pimeyes-face-search-company-is-abolishing-our-anonymity/

👀 👉🏼 🇩🇪: https://netzpolitik.org/2020/gesichter-suchmaschine-pimeyes-schafft-anonymitaet-ab/

👀 👉🏼 🇬🇧 https://www.bbc.com/news/technology-53007510

👀 👉🏼 🇬🇧 https://petapixel.com/2020/06/11/this-creepy-face-search-engine-scours-the-web-for-photos-of-anyone/

👀 👉🏼 🇩🇪 Automated face recognition - Enforce our data protection rights at last!
https://netzpolitik.org/2020/automatisierte-gesichtserkennung-setzt-unsere-datenschutzrechte-endlich-auch-durch/

#PimEyes #facialrecognition #searchengine #privacy #anonymity #ourdata #thinkabout
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@BlackBox_Archiv
📡@NoGoolag
Image "Cloaking" for Personal Privacy

2020 is a watershed year for machine learning. It has seen the true arrival of commoditized machine learning, where deep learning models and algorithms are readily available to Internet users. GPUs are cheaper and more readily available than ever, and new training methods like transfer learning have made it possible to train powerful deep learning models using smaller sets of data.

But accessible machine learning also has its downsides. A recent New York Times article by Kashmir Hill profiled Clearview.ai, an unregulated facial recognition service that has now downloaded over 3 billion photos of people from the Internet and social media, using them to build facial recognition models for millions of citizens without their knowledge or permission. Clearview.ai demonstrates just how easy it is to build invasive tools for monitoring and tracking using deep learning.

So how do we protect ourselves against unauthorized third parties building facial recognition models to recognize us wherever we may go? Regulations can and will help restrict usage of machine learning by public companies, but will have negligible impact on private organizations, individuals, or even other nation states with similar goals.

The SAND Lab at the University of Chicago has developed Fawkes, an algorithm and software tool (running locally on your computer) that gives individuals the ability to limit how their own images can be used to track them. At a high level, Fawkes takes your personal images and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these "cloaked" photos as you normally would: sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, the "cloaked" images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable and will not cause errors in model training. However, when someone tries to identify you using an unaltered image (e.g. a photo taken in public), they will fail.
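The effect described above can be reduced to a toy feature-space picture: cloaked photos embed far from your true features, so a tracker that "trains" on them holds the wrong reference for you, and a clean photo no longer matches. The 2-D vectors, shift factor, and threshold below are invented for illustration; this is a deliberately simplified stand-in, not the actual Fawkes optimization.

```python
# Toy illustration of the idea behind Fawkes-style cloaking, reduced to
# feature vectors: the real tool perturbs pixels so a recognition model
# embeds your photos far from your true features. All numbers here are
# illustrative assumptions.
import numpy as np

you   = np.array([1.0, 0.0])  # your true feature vector
decoy = np.array([0.0, 1.0])  # dissimilar features the cloak shifts toward

# Cloaked photos look like you to a human but embed mostly like the decoy.
cloaked = you + 0.8 * (decoy - you)

THRESHOLD = 0.5  # a match is accepted only below this embedding distance

def recognized(reference, probe):
    """True if the probe embedding matches the tracker's stored reference."""
    return float(np.linalg.norm(reference - probe)) < THRESHOLD

probe = you  # a clean, uncloaked photo of you, e.g. taken in public

clean_match   = recognized(you, probe)      # model trained on clean photos
cloaked_match = recognized(cloaked, probe)  # model trained on cloaked photos
print(clean_match, cloaked_match)           # prints: True False
```

The asymmetry is the whole trick: the tracker's model trains smoothly on cloaked photos, but the reference it learns sits too far from your real features for a live photo to ever clear the matching threshold.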

👀 👉🏼 http://sandlab.cs.uchicago.edu/fawkes/

#Fawkes #image #cloaking #facialrecognition #privacy
Forwarded from GJ `°÷°` 🇵🇸🕊 (t ``~__/>_GJ06)
Facial Recognition Failures Are Locking People Out of Unemployment Systems
https://www.vice.com/en/article/5dbywn/facial-recognition-failures-are-locking-people-out-of-unemployment-systems

People around the country are furious after being denied their #unemployment benefits due to apparent problems with facial recognition technology that claims to prevent fraud.

Unemployment recipients have been complaining for months about the #identity verification service ID.me, which uses a combination of biometric information and official documents to confirm that applicants are who they claim to be. The complaints reached another crescendo this week after Axios published a “deep dive” article about the threat of unemployment fraud, based on statistics provided to the outlet by ID.me.

#facialrecognition #id #IDme
Police accused over use of facial recognition at King Charles’s coronation | The Guardian – https://www.theguardian.com/uk-news/2023/may/03/metropolitan-police-live-facial-recognition-in-crowds-at-king-charles-coronation

The largest previous live facial recognition (LFR) deployment was the 2017 Notting Hill carnival, when 100,000 faces were scanned.
Fussey said: “A surveillance deployment for the coronation would likely be the biggest live facial recognition operation ever conducted by the MPS, and probably the largest ever seen in Europe.”

#UK #FacialRecognition
Ian Dunt (@IanDunt): "The purpose of the Public Order Act is to make the trigger for criminal penalties so broad, and the meaning of key terms so nebulous, that it will be hard for a protester to ever really know they are abiding by the law. https://inews.co.uk/opinion/most-draconian-assault-free-speech-living-memory-now-law-2313273" | nitter – https://nitter.net/IanDunt/status/1654034279641350150#m

#UK #FacialRecognition