BlackBox (Security) Archiv
๐Ÿ‘‰๐Ÿผ Latest viruses and malware threats
๐Ÿ‘‰๐Ÿผ Latest patches, tips and tricks
๐Ÿ‘‰๐Ÿผ Threats to security/privacy/democracy on the Internet

๐Ÿ‘‰๐Ÿผ Find us on Matrix: https://matrix.to/#/!wNywwUkYshTVAFCAzw:matrix.org
https://github.com/StevenBlack/hosts

💡 Advice
In the AdAway Wiki you will find further suggestions and filter lists.
https://github.com/AdAway/AdAway/wiki/HostsSources

Of course, you can also enable additional filter lists or blocklists. AdAway automatically removes any overlaps, since duplicate entries would make processing the filter lists needlessly inefficient. After you add filter lists, AdAway first downloads them from their sources and merges them into one big list, so you will have to wait a moment.
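Conceptually, this merge step is simple: fetch each source, extract the blocked domains, and union them into one deduplicated set. A minimal Python sketch of the idea (AdAway itself is an Android app and does not work exactly like this; the source selection below is illustrative):

```python
import urllib.request

# Illustrative source selection - configure your own in AdAway.
SOURCES = [
    "https://adaway.org/hosts.txt",
    "https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts",
]

def parse_hosts(text):
    """Extract blocked domains from hosts-file text, skipping comments."""
    domains = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()      # drop comments
        parts = line.split()
        # A hosts entry is "<ip> <domain> ..."; keep only blackhole entries.
        if len(parts) >= 2 and parts[0] in ("0.0.0.0", "127.0.0.1"):
            domains.update(parts[1:])
    return domains

merged = set()
for url in SOURCES:
    with urllib.request.urlopen(url) as resp:
        merged |= parse_hosts(resp.read().decode("utf-8", "replace"))

merged.discard("localhost")                       # never block localhost
print(f"{len(merged)} unique domains after merging")
```

Deduplication falls out of using a set; writing the sorted result as "0.0.0.0 <domain>" lines yields the final hosts file.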

Activating filter lists can lead to so-called "overblocking": domains that are necessary for an app to function get filtered by mistake. You will then have to decide case by case whether to unblock the domain in AdAway, i.e. put it on the whitelist. Further information on this topic can be found in Section 4.2.

4. AdAway in action

At this point, the configuration of AdAway is complete - or you can continue to customize it to your needs. Unfortunately, AdAway offers no way to display the number of blocked domains; it should be more than 100,000.

4.1 Blocked Domains

As already mentioned, overblocking can occur, which may in some cases cause an app or a certain feature to stop working correctly. Personally, I have not observed this so far - but I am not the right yardstick here either, since I deliberately avoid the services of Google, Facebook and Co.

So if an app does not work as usual, first enable DNS logging via the menu item Record DNS Requests and then open the misbehaving app. Afterwards, open the menu item Record DNS Requests again and tap the button Display RESULTS. All logged DNS queries are then listed. As an example, I allow the domain "media.kuketz.de" by tapping the tick in the middle. AdAway remembers this selection and puts the domain on the whitelist.

4.2 Whitelisting a domain | app

Via the menu item Your Lists you can view the domains you have added yourself. AdAway distinguishes between three different variants:

Negative list:
Here you can add your own domains for AdAway to block. In a way, this supplements the existing blocklists with entries you control yourself.

Positive list:
As already mentioned, overblocking can occur under certain circumstances. If it does, you can make a domain reachable again via the positive list. The positive list always takes precedence over the filter lists - so the domain becomes reachable again even if it is listed in one of them.

Redirections:
If necessary, you can set up IP redirects for certain domains. The domain "facebook.com" could, for example, be pointed at the IP address 193.99.144.80 (heise.de). If you then open "facebook.com" in your browser, you end up on heise.de.
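Under the hood, all three variants boil down to plain hosts-file entries: blocked domains point at an unreachable address, whitelisted domains are simply left out of the generated file, and redirects point at whatever address you chose. A small sketch of how the lists might combine (illustrative, not AdAway's actual implementation):

```python
def build_hosts(blocked, whitelist, redirects):
    """Combine the three list variants into hosts-file lines.

    blocked:   domains to send to the blackhole address 127.0.0.1
    whitelist: domains removed from the blocklist (always wins)
    redirects: {domain: ip} entries pointing a domain at a chosen host
    """
    lines = ["127.0.0.1 localhost"]               # keep the mandatory entry
    for domain in sorted(blocked - whitelist):
        lines.append(f"127.0.0.1 {domain}")
    for domain, ip in sorted(redirects.items()):
        lines.append(f"{ip} {domain}")            # e.g. send facebook.com to heise.de
    return "\n".join(lines) + "\n"

blocked   = {"tracker.example", "ads.example", "media.kuketz.de"}
whitelist = {"media.kuketz.de"}                   # the overblocked domain from above
redirects = {"facebook.com": "193.99.144.80"}     # the heise.de example

print(build_hosts(blocked, whitelist, redirects))
```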

5. Final note

Embedding advertising or transmitting data to tracking companies is not necessary for the core functionality of an app. These third-party software components do not end up in an app by magic; they are deliberately and actively integrated by the developers. Unfortunately, the developers themselves often do not know what data these building blocks or modules (known as SDKs in technical jargon) actually capture. Providers and developers thus frivolously sacrifice their users on the altar of a boundless data collection frenzy, regardless of the associated risks for their users' security and privacy.

With AdAway, you can minimize this unwanted data transfer. In practice, the principle of DNS blocking works extremely well: the vast majority of unwanted tracking and advertising domains are filtered, which naturally benefits both security and privacy.
Nevertheless, you should not lull yourself into a false sense of security and believe that this solves every tracker and privacy problem. A tracking or advertising domain may still be so new or obscure that it has not yet found its way onto one of the blocklists. In that case, there is a high probability that unwanted data will flow to questionable third parties. The best long-term protection against unwanted data leakage is to do without most of the apps offered in the Google Play Store. Fortunately, the F-Droid Store is an alternative app store aimed at critical users who value free and open source applications. In the recommendation corner you will find privacy-friendly apps for a wide variety of purposes.
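Incidentally, you can check for yourself whether a domain is currently filtered: with hosts-based blocking, a blocked name resolves to a blackhole address such as 127.0.0.1 or 0.0.0.0 instead of its real one. A quick sketch (it assumes the resolver honors the hosts file, as Android with AdAway does):

```python
import socket

def is_blocked(domain):
    """Return True if the hosts file maps this domain to a blackhole address."""
    try:
        ip = socket.gethostbyname(domain)
    except socket.gaierror:
        return False              # not resolvable at all; not hosts-blocked
    return ip in ("0.0.0.0", "127.0.0.1")

for name in ("media.kuketz.de", "doubleclick.net"):
    print(name, "-> blocked" if is_blocked(name) else "-> reachable")
```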

6. Conclusion

The Google Play Store offers a whole arsenal of "pseudo-security apps" such as virus scanners, which lull users into a false sense of security. AdAway, on the other hand, can protect security and privacy effectively. The paradox: AdAway is excluded from the Google Play Store precisely because blocking trackers and advertising runs counter to Google's business model. An app that blocks the delivery of Google advertising and tracking is understandably a thorn in Google's side.

In the next article of the series "Take back control!" I will show you how to lock "Big Brother" apps from the Google Play Store into a kind of closed environment or prison - this is possible with Shelter. That way you can prevent these apps from accessing sensitive data (contacts, etc.).

Source (๐Ÿ‡ฉ๐Ÿ‡ช) and more info:
https://www.kuketz-blog.de/adaway-werbe-und-trackingblocker-take-back-control-teil6/

#android #NoGoogle #guide #part1 #part2 #part4 #part5 #part6 #AdAway #kuketz
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
All just fake ethics

After numerous scandals, Facebook, Google and Co. have recently been playing the role of moral model students. Why we shouldn't fall for this scam.

Lean back, breathe calmly - in, out. There is no reason to get excited; you are in good hands. Even if the recent past was difficult and you feel betrayed: we have listened, we promise to improve.

Everything will be different, no: Everything will be fine.

The promise

This is the sound of the hypnotic singsong currently blowing out of Silicon Valley.

For example, from Google headquarters, where ethicists are now supposed to debate algorithms, or from the mouth of Facebook boss Mark Zuckerberg. He suddenly wants his users' privacy to take precedence over everything else and recently expressed the wish for a "more active role for governments" in tech regulation. This follows a series of scandals that have severely damaged his company's reputation. The big IT companies no longer want to be the bad boys. Instead, they want to appear mature and virtuous. https://netzpolitik.org/2018/die-ultimative-liste-so-viele-datenskandale-gab-es-2018-bei-facebook/

Throughout the Valley, companies are doing penance for the crisis tactics of recent years, professing their own responsibility like a mantra - code name: Corporate Digital Responsibility. Frightened by the risks and side effects of their own smart developments, the corporations seem to be reflecting on the good and proclaiming one ethics initiative after another, especially in the field of artificial intelligence (AI).

Mark Zuckerberg recently even announced ideas for regulating the Internet in a charm offensive - after having lobbied for years against everything that looked like a law (e.g. the GDPR). Facebook's CEO not only performed obedience to the authorities in advance, but also posed as a moral advocate who wishes to "preserve the good" on the Internet - all in order to present his own solutions from the very top, with proposals for a "more active role for governments". https://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html?noredirect=on&utm_term=.e2c285fa7e1e

Critics see Zuckerberg's proclamation as a clever power calculation designed to cement his own monopoly position. They sense that someone here wants to shed his dirty coat in order to present himself as decent and freshly scrubbed again. The discomfort is well-founded. And it does not extend only to Zuckerberg's new-found desire for clear rules.

The measures with which Google, Facebook and Co. want to get their credibility, data protection and artificial intelligence problems under control seem half-baked. They are fragmentary - and in most cases merely a public-relations facade behind which a void yawns.

The problems

Google: Distorted Algorithms
At Google, for example: in 2018, internal protests arose against Project Maven, a US Department of Defense contract for the AI-supported, image-analytical improvement of drone strikes. CEO Sundar Pichai quickly announced new ethical guidelines: Google wanted to ensure that its AI systems operate in a socially responsible manner, uphold scientific rigour, protect privacy, avoid unfair discrimination, and generally be safe and accountable. https://www.blog.google/technology/ai/ai-principles/

But whether this catalogue of principles, formulated as seven commandments, really promises a new responsibility in AI is highly questionable. As long as Google itself determines what counts as "appropriate transparency" and a "relevant explanation", the effect of the new guidelines and the interpretation of their terms will remain a company secret - a beautiful appearance that at best simulates clear rules.

Google's commandments were not only a response to militarily explosive projects, but also a reaction to the case of Jacky Alciné, which became known in 2015: Alciné and his girlfriend were labeled "gorillas" by Google Photos. This racist bias was traced back on the one hand to a patchy data set and on the other to a diversity problem among Google's programmers. Both are fundamental problems for many digital companies, as a study from MIT found. https://twitter.com/jackyalcine/status/615329515909156865?lang=en and http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

The AI-supported face recognition software from IBM, Microsoft and Face++ likewise recognizes one group of people particularly well: white men. Black men, by contrast, were misclassified in six percent of cases, black women in almost one third.

IBM: Questionable application areas
IBM, too, has therefore adopted ethical guidelines and even developed ethnically diverse data sets to correct distortions in its software. IBM CEO Virginia Rometty told the press that the company wanted to remain attractive especially in matters of trust and transparency: "Every organization that develops or uses AI or stores or processes the data must act responsibly and transparently."

However, the fact that IBM's face recognition software was used in Rodrigo Duterte's "war on drugs" in the Philippines suggests that ethically responsible action is by no means guaranteed even with a distortion-free AI. The difficulties are not limited to the smooth functioning of the system; they show above all in its questionable application. Can an ever more precise surveillance of the population - especially of marginalized groups - be desirable at all? Perhaps, as the authorities in San Francisco recently decided, it would be better to do without such technologies altogether. https://theintercept.com/2019/03/20/rodrigo-duterte-ibm-surveillance/

The fact that Google, contrary to its announcements, has also resumed work on a search engine for the Chinese market is another reason to be suspicious of the companies' own catalogues of principles. For they are framed not as categorical imperatives but as morally blurred declarations of intent whose commercial interpretation promises maximum flexibility. One must therefore almost inevitably agree with Rometty's words: "Society will decide which companies it trusts".

Microsoft: Ethics Council without bite
Microsoft, too, has committed itself for a year now to the values of "transparency", "non-discrimination", "reliability", "accessibility", "responsibility" and "data protection". So that such guidelines do not remain pretty but ultimately meaningless brochures, an ethics committee was established - the AI and Ethics in Engineering and Research (Aether) Committee - which advises developers on moral issues such as facial recognition and autonomous weapon systems.

However, the committee is not allowed to provide information to the public. Hardly anything is known about its working methods - what is known is limited to statements by those responsible, which seldom shed light on the darkness. Eric Horvitz, director of the Microsoft Research Lab, recently stated proudly - albeit without giving concrete figures - that reservations raised by the Aether Committee had already led to deals, and thus profits, being forgone. The committee, in other words, had shown its teeth. https://www.geekwire.com/2018/microsoft-cutting-off-sales-ai-ethics-top-researcher-eric-horvitz-says/
Whether the committee really has an effect may be doubted, however. As AI expert Rumman Chowdhury recently explained, the committee cannot enforce any changes; it can only make recommendations. So it is not surprising that Microsoft has raised awareness on its own blog about the ethical problems of AI in military projects, yet despite employee protests still wants to cooperate with the US Department of Defense: "We can't address these new developments if the people in the tech sector who know the most about the technology withdraw from the debate." https://www.theverge.com/2019/4/3/18293410/ai-artificial-intelligence-ethics-boards-charters-problem-big-tech

Ethical ideals are thus documented at Microsoft in principle, but often appear only as rough outlines. As long as expert councils act in secret and without the authority to issue directives, the "applied ethics" of the technology companies remains nothing but loose lip service.

Google: The wrong partners
Beyond the deliberate lack of transparency, it is often the very structure of the ethics councils that reveals questionable fault lines. Although their composition usually follows the pretty principle of "interdisciplinarity", they rarely impress with ethical qualifications.

Google recently discovered that this is a problem. Starting in April, an eight-member Advanced Technology External Advisory Council was supposed to check whether the self-imposed values are really brought to life in AI development. Even before its first meeting, the council was suspended again, because parts of the staff protested against the committee's composition and wanted both Dyan Gibbens, CEO of the drone manufacturer Trumbull, and Kay Coles James, president of the neoconservative think tank Heritage Foundation, removed. https://blog.google/technology/ai/external-advisory-council-help-advance-responsible-development-ai/

Meanwhile, Google is at a loss - they merely declare, without explaining anything in detail, that they want to "break new ground" in obtaining external opinions.

Facebook: Purchased research
Facebook, meanwhile, shows how one can avoid the problem of missing expertise and still appear untrustworthy. The social network also wants the ethical challenges of AI evaluated externally and, at the beginning of the year, founded the Institute for Ethics in Artificial Intelligence in cooperation with the Technical University of Munich. Facebook is investing 6.5 million euros over five years to develop "ethical guidelines for the responsible use of this technology in business and society". https://www.tum.de/die-tum/aktuelles/pressemitteilungen/detail/article/35188/

Since at a company whose CEO once called its users "dumb fucks" even a praiseworthy effort looks like ethical window dressing, it was hardly surprising that criticism arose quickly. It was aimed mostly at the risk of bought research and anticipated conflicts of interest, as well as the moral damage the university would suffer by "going to bed" with such a company. https://www.theguardian.com/technology/2018/apr/17/facebook-people-first-ever-mark-zuckerberg-harvard

Christoph Lütge, the institute's future director, replied that it was independent of Facebook and that the research would be published transparently, pointing to the "win-win situation" for society as a whole resulting from Facebook's funding.

But there are also limits to ethical research at the TU Munich. In an interview, Lütge stated that society's concerns about artificial intelligence would be addressed - but also that ethics "can do this better than legal regulation". https://netzpolitik.org/2019/warum-facebook-ein-institut-fuer-ethik-in-muenchen-finanziert/
Perhaps this is how the really important questions fall by the wayside: whether, how and at what speed we pursue the business of digitization at all; in which areas we even want to use AI systems such as face recognition; and what regulation could look like beyond token declarations of responsibility. Where do our red lines run?

A critical public will therefore be more important than ever. In this spirit: breathe in and out calmly. But leaning back is not an option - otherwise we become the all-too-trusting "dumb fucks" mentioned above.

https://www.republik.ch/2019/05/22/alles-nur-fake-ethik

#thinkabout #why
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
This is exactly where the matter becomes delicate. For as long as the companies themselves issue guidelines beyond generally applicable laws, "regulate themselves" through self-chosen councils or finance "independent" research, doubts will fester as to whether the ethical principles are really sufficient, whether they are upheld or enforced at all - or whether they are just an empty shell and thus cheap PR.

EU: Trustworthy AI
So the self-proclaimed do-gooders from Silicon Valley can hardly be expected to deliver anything substantial when it comes to ethics. From the stylized wording of Potemkin ethics councils to the ever-same hollow buzzwords, a lot of verbal noise is produced - but usually without consequences that would really call one's own conduct into question.

Their enlightenment work thus functions not as a "principle of responsibility" (Hans Jonas) but as an act of preemptive ethics-washing. If something goes wrong again, one will at least be able to say: after all, we made an effort.

The EU has now recognised the problem and set up its own expert committee, the 52-member High-Level Expert Group on Artificial Intelligence, tasked with developing guidelines for AI. https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence

The result was presented in April - and it was sobering. Thomas Metzinger, professor of theoretical philosophy and one of only four ethicists in the group, described it as "lukewarm, short-sighted and deliberately vague". Resolute rejections - for example of lethal autonomous weapon systems - had been dropped at the insistence of industry representatives, and the proclaimed "trustworthy AI" was nothing more than a stale "marketing narrative". https://background.tagesspiegel.de/ethik-waschmaschinen-made-in-europe

Metzinger's conclusion:
If the economy is too strongly involved in the discussion, at best "fake ethics" will emerge - no real ethical progress. His appeal: civil society must take the ethics debate back from industry and develop the guidelines further itself. But how?

The tasks

Loose concepts alone have never made things or people better. And preaching morality, as Friedrich Nietzsche already knew, is "just as easy as justifying morality is difficult". So instead of formulating a few melodious but shallow principles after the fact, it is necessary to start earlier.

This means raising ethical and socio-political questions already during developers' training - TU Kaiserslautern, for example, offers a degree program in Socioinformatics - and strengthening institutions that negotiate ethics and digitality at a higher level, beyond the usual lobbying. Institutions that push the discourse on effective rules forward without blinkers or false deference.

Humanities scholars are also needed here. Ethics - this would be the goal - must not remain a mere accessory that modestly accompanies, or gently drapes, the laissez-faire of the digital space. As a practice of consistent, critical assessment, its task should be to develop clear criteria for the corridors of action and thus also to set the framework on which binding regulations are based. If it fails to do so, it squanders its potential and risks becoming meaningless.

To avoid this, we must not rely on the voluntary self-regulation of the tech elite, but position ourselves more independently, combining reflection on morality with reflection on how the world is being constituted. For if digital corporations penetrate ever more areas of life and decisively shape social coexistence through their smart systems, this circumstance should be taken seriously - and we should think hard about whether techies, entrepreneurs and engineers alone should decide on the ethical dimensions of their developments, or whether this should be a democratic, participatory, and thus many-voiced process.
📺 SensorID
Sensor Calibration Fingerprinting for Smartphones

When you visit a website, your web browser provides a range of information to the website, including the name and version of your browser, screen size, fonts installed, and so on. Ostensibly, this information allows the website to provide a great user experience. Unfortunately this same information can also be used to track you. In particular, this information can be used to generate a distinctive signature, or device fingerprint, to identify you.
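SensorID itself derives its fingerprint from the factory calibration of each device's motion sensors, but the basic mechanics are the same for any attribute set: collect stable signals and hash them into an identifier. A toy sketch of that general principle (illustrative attributes, not the paper's method):

```python
import hashlib
import json

def fingerprint(attributes):
    """Hash a canonical, ordered serialization of the collected attributes."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Illustrative signals; real scripts collect many more, and SensorID adds
# the per-device sensor calibration parameters to the mix.
visitor = {
    "user_agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 12_2 like Mac OS X)",
    "screen": "375x812",
    "timezone": "Europe/Berlin",
    "fonts": ["Courier", "Helvetica", "Times"],
}

print(fingerprint(visitor))   # same device + config -> same identifier
```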

📺 https://sensorid.cl.cam.ac.uk/

#tracking #android #ios #fingerprinting
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
📺 Top 5 "Conspiracy Theories" That Turned Out To Be True

We all know the old trope of the tinfoil hat wearing conspiracy theorist who believes crazy things like "the government is spying on us" and "the military is spraying things in the sky" and "the CIA ships in the drugs." Except those things aren't so crazy after all. Here are five examples of things that were once derided as zany conspiracy paranoia and are now accepted as mundane historical fact.

📺 https://www.youtube.com/watch?v=wO5oJM8GjWA

🖨 https://www.corbettreport.com/5conspiracies/

📡 @NoGoolag #corbettreport
https://t.me/NoGoolag/1233

#corbettreport #conspiracy #facts #history #gov #why #video #podcast
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
📺 The secret tactics Monsanto used to protect Roundup, its star product

Four Corners investigates the secret tactics used by global chemical giant #Monsanto to protect its billion-dollar business and its star product — the weed killer, #Roundup

📺 https://www.youtube.com/watch?v=JszHrMZ7dx4

🖨 https://www.abc.net.au/news/2018-10-08/cancer-council-calls-for-review-amid-roundup-cancer-concerns/10337806

#DeleteMonsanto #DeleteBayer #DeleteRoundup #FourCorners #video #podcast
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
🎧 Around the Globe with Financial Survival

James joins Melody Cedarstrom for this wide-ranging edition of Financial Survival. Topics covered include Vietnam and tyranny, big tech regulation and back door globalization, the US-China trade war and false flags in the Persian Gulf.

🖨 https://www.corbettreport.com/around-the-globe-with-financial-survival/

#corbettreport #video #podcast
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
โ˜ฃ๏ธ Chaos Communication Camp 2019 โ˜ฃ๏ธ

The Chaos Communication Camp in Mildenberg is an open-air hacker camp and party that takes place every four years, organized by the Chaos Computer Club (CCC). Thousands of hackers, technology freaks, artists and utopians get together for five days in the Brandenburg summer – to communicate, learn, hack and party together.

We focus on topics such as information technology, digital security, hacking, crafting, making and breaking, and we engage in creative, sceptical discourse on the interaction between technology and society.

We'd love to see your submission for these tracks:

💡 Arts & Culture,
💡 Ethics, Society & Politics,
💡 Hardware & Making,
💡 Security & Hacking,
💡 Science.

Apart from the official conference program on the main stages, the Chaos Communication Camp also offers space for community villages, developer and project meetings, art installations, lightning talks and numerous workshops (called "self-organized sessions").

Dates & deadlines:

💡 May 22nd, 2019: Call for Participation
💡 June 11th, 2019 (23:59 CEST): Deadline for submissions
💡 July 10th, 2019: Notification of acceptance
💡 August 21st – 25th, 2019: Chaos Communication Camp at Ziegeleipark Mildenberg

Submission guidelines for talks:

All lectures need to be submitted to our conference planning system under the following URL: https://frab.cccv.de/cfp/camp2019.

Please follow the instructions there. If you have any questions regarding the submission, you are welcome to contact us via mail at camp2019-content@cccv.de.

Please send us a description of your suggested talk that is as complete as possible: the description is the central criterion for acceptance or rejection, so make sure it is clear and thorough. Quality comes before quantity. Due to the non-commercial nature of the event, presentations that aim to market or promote commercial products or entities will be rejected without consideration.

More info:
https://events.ccc.de/2019/05/22/call-for-participation-chaos-communication-camp-2019/

#ccc #camp
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
📺 Interview with Ren Zhengfei, Founder And CEO Of Chinese Telecom Giant Huawei

Ren Zhengfei, founder and CEO of Chinese telecom giant Huawei, spoke to Time on U.S. actions against his company, the security of Huawei's product, his daughter and Huawei CFO's arrest, President Donald Trump and 5G technology.

📺 https://www.youtube.com/watch?v=Nl2jCWDwE8w

#china #huawei #founder #interview #video #podcast
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
US authorities want to intercept telecommunications in Europe

The FBI could soon be legally entitled to demand sensitive communication data from European Internet service providers, possibly in real time. In return, the European Union hopes to make the Trump administration more willing to let European authorities query "electronic evidence" directly from Facebook & Co.

The EU Commission wants to negotiate an agreement with the US government that would oblige Internet service providers based in the European Union to cooperate more closely with US authorities. The companies would have to grant US police and intelligence services access to their users' communications. European prosecutors, in turn, would be able to serve disclosure orders directly on Facebook, Apple and the other Internet giants. The legal process via judicial authorities that has been customary until now would be dropped. https://ec.europa.eu/info/policies/justice-and-fundamental-rights/criminal-justice/e-evidence-cross-border-access-electronic-evidence_de

The plans are part of the "E-Evidence" regulation, with which the EU wants to facilitate the publication of "electronic evidence". According to a recently published draft, this includes user data (name, date of birth, postal address, telephone number), access data (date and time of use, IP address), transaction data (transmission and reception data, location of the device, protocol used) and content data.

Agreement on implementation with the US Government
The planned EU regulation is limited to companies domiciled in the European Union. But because most of the coveted data is stored in the USA, the EU Commission is planning an implementation agreement with the US government. This would be possible within the framework of the "CLOUD Act", which the US government enacted last year. It obliges companies established in the USA to disclose inventory, traffic and content data if this appears necessary for criminal prosecution or averting danger.

The CLOUD Act also allows third countries to issue orders to US companies. The agreement required for this must be based on reciprocity and thus give the US government access to companies in the partner countries. The Trump administration, however, demands as a concession the ability to intercept content data in real time. Companies based in the EU would then have to transfer this data directly to US authorities.

More info:
https://netzpolitik.org/2019/us-behoerden-wollen-telekommunikation-in-europa-abhoeren/

#USA #FBI #EU #government #surveillance
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
Meet Doggo: Stanford's student-built, four-legged robot

Putting their own twist on robots that amble through complicated landscapes, the Stanford Student Robotics club's Extreme Mobility team at Stanford University has developed a four-legged robot that is not only capable of performing acrobatic tricks and traversing challenging terrain but is also designed with reproducibility in mind. Anyone who wants their own version of the robot, dubbed Stanford Doggo, can consult comprehensive plans, code and a supply list that the students have made freely available online:

https://github.com/Nate711/StanfordDoggoProject

https://docs.google.com/spreadsheets/d/1MQRoZCfsMdJhHQ-ht6YvhzNvye6xDXO8vhWQql2HtlI/edit#gid=726381752

http://roboticsclub.stanford.edu/

📺 https://www.youtube.com/watch?v=2E82o2pP9Jo

#doggo #robotic #opensource #video #podcast
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
It's the middle of the night. Do you know who your iPhone is talking to?

Apple says, "What happens on your iPhone stays on your iPhone." Our privacy experiment showed 5,400 hidden app trackers guzzled our data — in a single week.

It's 3 a.m. Do you know what your iPhone is doing?

Mine has been alarmingly busy. Even though the screen is off and I'm snoring, apps are beaming out lots of information about me to companies I've never heard of. Your iPhone probably is doing the same — and Apple could be doing more to stop it.

On a recent Monday night, a dozen marketing companies, research firms and other personal data guzzlers got reports from my iPhone. At 11:43 p.m., a company called Amplitude learned my phone number, email and exact location. At 3:58 a.m., another called Appboy got a digital fingerprint of my phone. At 6:25 a.m., a tracker called Demdex received a way to identify my phone and sent back a list of other trackers to pair up with.

And all night long, there was some startling behavior by a household name: Yelp. It was receiving a message that included my IP address — once every five minutes.

Our data has a secret life in many of the devices we use every day, from talking Alexa speakers to smart TVs. But we've got a giant blind spot when it comes to the data companies probing our phones.

You might assume you can count on Apple to sweat all the privacy details. After all, it touted in a recent ad, "What happens on your iPhone stays on your iPhone." My investigation suggests otherwise.

iPhone apps I discovered tracking me by passing information to third parties — just while I was asleep — include Microsoft OneDrive, Intuit's Mint, Nike, Spotify, The Washington Post and IBM's the Weather Channel. One app, the crime-alert service Citizen, shared personally identifiable information in violation of its published privacy policy.

And your iPhone doesn't only feed data trackers while you sleep. In a single week, I encountered over 5,400 trackers, mostly in apps, not including the incessant Yelp traffic. According to privacy firm Disconnect, which helped test my iPhone, those unwanted trackers would have spewed out 1.5 gigabytes of data over the span of a month. That's half of an entire basic wireless service plan from AT&T.

"This is your data. Why should it even leave your phone? Why should it be collected by someone when you don't know what they're going to do with it?" says Patrick Jackson, a former National Security Agency researcher who is chief technology officer for Disconnect. He hooked my iPhone into special software so we could examine the traffic. "I know the value of data, and I don't want mine in any hands where it doesn't need to be," he told me.
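Jackson used Disconnect's own tooling, but anyone can run a similar experiment with an intercepting proxy. A minimal sketch using mitmproxy's addon API (it assumes you have pointed the phone's Wi-Fi proxy at mitmproxy and installed its CA certificate; the output hosts are whatever your apps actually contact):

```python
# tracker_tally.py - run with: mitmdump -s tracker_tally.py
from collections import Counter

from mitmproxy import http

hosts = Counter()

def request(flow: http.HTTPFlow) -> None:
    # Tally every host the device contacts while the proxy is running.
    hosts[flow.request.pretty_host] += 1

def done() -> None:
    # On shutdown, print the busiest endpoints - candidate trackers.
    for host, count in hosts.most_common(20):
        print(f"{count:6d}  {host}")
```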

Read more:
https://www.washingtonpost.com/technology/2019/05/28/its-middle-night-do-you-know-who-your-iphone-is-talking

#apple #iphone #trackers #datamining #privacy #why
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN
All for all - and Assange against all

Julian Assange is the only anarchist to have shaped world politics in the 21st century. In London he must stand trial - and with him the ideas of hacker culture.

80c11049faebf441d524fb3c4cd5351c: American soldier Chelsea Manning types this string of characters into a chat on March 8, 2010. It is a so-called hash value - a scrambled, one-way representation of a password. Manning wants to open another door in the army's computer system, from which she is forwarding internal documents to Wikileaks. But she cannot crack the hash herself and hopes her chat partner can - a partner whose name, according to the US Department of Justice, is Julian Assange. Yet even the Wikileaks founder and his team are unable to decipher the hash.
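Why could neither of them "decipher" it? A hash function runs only one way: computing the hash of a password is trivial, but the only general way back is to guess candidate passwords and compare. A toy illustration of that asymmetry (generic MD5 here, not the specific hash format in the Manning case):

```python
import hashlib

def md5_hex(password):
    """One-way: password in, 32-hex-character digest out."""
    return hashlib.md5(password.encode()).hexdigest()

target = md5_hex("letmein")   # stands in for a captured hash value

# "Cracking" is just guessing: hash candidates until one matches the target.
for guess in ["password", "123456", "qwerty", "letmein"]:
    if md5_hex(guess) == target:
        print("cracked:", guess)
        break
else:
    print("wordlist exhausted - the hash stays unbroken")
```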

The very moment Assange's hacking skills fail becomes his undoing.

This Thursday, the most famous face hacker culture has produced goes on trial in London. At stake is Assange's extradition to his arch-enemy, the United States of America. The accusation: espionage, as Manning's accomplice. Depending on one's point of view, this calls into question not only freedom of the press. Also on trial is the basic conviction of an increasingly influential subculture: that all information must be freed from dark data stores and the knowledge of the powerful wrested from them. Everything for everyone. Assange is the only anarchist to have shaped world politics in the 21st century.

The term "hacker" first appeared in the USA in the 1950s and has little to do with political ideas. The first mainframe computers still programmed with punched cards are found in universities. The early hackers did not want to invade foreign computer systems - there are hardly any of them yet. They want to extend the functions of computers. The word "hack" initially means solving a technical problem. In the "Tech Model Railroad Club", a model railway club at the Massachusetts Institute of Technology, the first hackers are working on improving track circuits.

Chip technology later replaces the transistor computer; machines become smaller and more affordable for private individuals. But because graphical user interfaces of the kind familiar today from Windows or smartphones do not arrive until the 1980s, owners must at least be able to program a little. To this day, movies present hacking as a superhuman skill - an X-ray vision that perceives what happens beneath the smooth surfaces of the devices and inside nanometer-sized chip structures. Historian Julia Gül Erdogan sees things differently. She says: "The goal of the early hackers was to demystify computer technology. They wanted to understand and master their new machines." Erdogan is writing her doctorate at the Centre for Contemporary History in Potsdam on hacker cultures in the FRG and GDR.

In 1984, US journalist Steven Levy formulated the so-called "hacker ethic" in the foreword to his book "Hackers". Its core concerns are free access to computers and knowledge - and mistrust of authorities. It is the spirit that has driven Assange since he broke into military networks from his native Australia in the early 1990s. A spirit that at some point leads to his very personal hatred of foreign-policy "hawks" like Hillary Clinton.

Early on, some in the scene discover the computer as a political tool. Since the 1980s, the Free Software movement has demanded that people retain control over the programs they use and be able to modify them at any time. Other hackers realize that money can be made with their knowledge. Bill Gates hacks university computers as a student; Steve Jobs and Steve Wozniak manipulate telephone circuits by sending whistling tones at a certain frequency down the line (phreaking). Their companies, Microsoft and Apple, later set out to conquer the world.
The hacker, then, has always been an ambivalent figure: a disruptor of systems who makes the system better. The "penetration test" - an attack on a network to find weaknesses in its defenses - today provides a livelihood for those hackers who now call themselves "IT security experts". But precisely because the hacker and his work are usually invisible, they inspire collective fantasies. The penetration of computer systems by dark powers is a staple of pop culture.

A decisive experience for the Federal Republic: the so-called KGB Hack

In fiction, hackers today are above all gloomy figures, like Elliot Alderson (Rami Malek), who sabotages an overpowering corporation in Mr. Robot as a torn drug addict. Thanks to advisers from the scene, the series is one of the few portrayals that experts are satisfied with: digital attack techniques that really exist, and no garishly colorful visualizations of computer viruses. Assange himself probably inspired the figure of the sex-obsessed transparency guru Andreas Wolf in Jonathan Franzen's novel "Purity", as well as James Bond's opponent in "Skyfall".

The paths into hacking differ; Beau Woods describes his own like this: "In college, people sometimes do things that annoy, and then you just want to wipe out their computers." The American works for think tanks and for his NGO "i am the cavalry", which aims to bring hackers together with the rest of society. The black hoodie, he assures at the SXSW digital conference in Austin, he wears only ironically. His speciality was making other people's CD-ROM drives pop open from a distance.

In Germany, the Chaos Computer Club (CCC) has been making a name for itself as a group of experts since the 1980s. A decisive experience for the Federal Republic of Germany is the so-called KGB hack, which became public in 1989: A group from Hanover sold information from US-American servers to the Soviet secret service. A television program spoke of the "biggest espionage case since Guillaume", even though the information sold was anything but explosive. In public perception, however, fears of a networked world have since been linked to the practices of hackers, explains historian Erdogan.

The actions of the "white hats", by contrast - the hackers on the bright side who, unlike criminal "black hats", do not want to harm anyone - tend to fade from public view. During the BTX hack of 1984, CCC members gained access to the Hamburger Sparkasse and could in theory have relieved it of about 135,000 marks. In theory, because the CCC kept the hack largely within legal bounds and emphasized the scene's social responsibility. The members came from an alternative left-wing milieu, from the civil rights and peace movements, which are actually considered critical of technology. But they feared the surveillance state and opposed, for example, the telecommunications monopoly of the Federal Post Office, says Erdogan.

Hacking also took place in the GDR. While more than three million home computers and consoles were sold in West Germany in 1986, they were rare in socialist Germany: imported Western technology was expensive, and there were only a few GDR models. The largest and best-known club was the one in the House of Young Talents in East Berlin; in 1987 it had two Commodore 64s and one Atari 130 XL. The Stasi monitored the meetings, says Erdogan. Private online communication was not possible in the GDR.

For Beau Woods, hackers trigger a primeval fear of industrial society

In the 21st century, culture's detachment from the state and idealism are put to the test. Blackmailing software that cripples victims' computers until they pay is becoming increasingly lucrative. Facebook and Google offer the best experts astronomical sums to protect their products. At the most important hacker conference, the Black Hat in Las Vegas, NSA agents bring along an Enigma machine from the Wehrmacht. They wanted to bait and recruit hackers through their love of encryption and decryption.
For Beau Woods, hackers trigger a primal fear of industrial society. They are the antagonists of modernity's established heroes: "Scientists and engineers tame nature, of which humans have always been afraid." Thousands of years ago, the king of the Assyrians is said to have gone into the wilderness and killed lions because they killed humans; later, engineers built roads through nature and brought it to order with machines. "But now there are people, hackers, who can manipulate the engineers' machines at will, bend them to their will." That irritates people. Woods says others see him like this: "The wizards have created the smartphone, but you surpass them, because you can break into what they have created."

Those who have such abilities can afford a bit of arrogance. The punch line at the end of the seminal "Hacker Manifesto" of 1986 reads: "My crime is that of outsmarting you, something that you will never forgive me for." A sentence that reads like an autobiography of Julian Assange.

https://www.sueddeutsche.de/digital/julian-assange-hacker-it-sicherheit-1.4467914

#FreeAssange
📡@cRyPtHoN_INFOSEC_DE
📡@cRyPtHoN_INFOSEC_EN
📡@cRyPtHoN_INFOSEC_ES
📡@FLOSSb0xIN