We are awash in fake news, and it is adversely affecting elections and people ranging from politicians to executives to everyday citizens. But the growing concern is what it will do to our expanding population of ever more capable artificial intelligence deployments.
These AIs increasingly control the world around us, and while humans make bad decisions at a relatively glacial pace, AIs make decisions at machine speed. That creates the potential for cascading catastrophes driven by bad information, both intentional (as in an attack) and unintentional (because data is sourced from people, and people are flawed).
Looking back, early computing wasn't great. Everything was batch, turnaround was glacial, and by the time you got an answer to a math problem, you could have worked it out by hand, without even a calculator, in less time. But those systems could handle what then seemed like massive amounts of data and provide at least some insight into what the data was telling you.
But if the data was corrupted, so was the answer. One field mistake could have you arguing that women were huge football fans and men were Oprah's most dedicated audience. Just one binary error that swapped the sexes, and suddenly you were in front of executives looking like an idiot.
In my case, I worked in Internal Audit at a multinational, and it made no sense that at year-end we had to guess what annual sales would be, because at the time we made the announcement the company knew the exact answer; we just hadn't processed the data yet.
The practice was to uplift the actual numbers we had calculated by around 20%. So a group of us worked to fix the timing problem, and that year the internal report that had always run about 20% low was accurate, only to have a Controller uplift the number anyway, putting us 20% over and costing the CFO his job.
Now we are aggressively moving to replace people with AIs, particularly in areas like accounting. If those AIs get bad information, a bad directive, or are intentionally tampered with, we will be in a world of hurt, and not just financially. Jobs, corporate performance, lives (think of the current pandemic and its logistics problems), and even national defense will increasingly depend on AIs getting the accurate information they need, so that we can trust both the advice they provide and the decisions they make.
And the problem isn't only that people make coding and data-entry mistakes; hostile players, from criminals to disgruntled employees to hostile governments, are actively trying to corrupt these systems. We need to get in front of this, because if we fall behind, we are pretty much screwed as AIs scale.
The GARD Initiative
GARD (Guaranteeing Artificial Intelligence Robustness against Deception) is a government-driven education and industry leadership program under the DARPA (Defense Advanced Research Projects Agency) umbrella designed to do precisely that: get ahead of this problem and craft robust defenses against those who want to compromise our data and put our jobs and lives at risk.
It will focus both on ensuring data integrity and on countering any adversarial attempt to alter or corrupt the algorithms used to interpret that data. Granted, it doesn't address corruption of the individual interpreting the result, but that problem predates computers, and policies going back decades exist to deal with corrupt officers, executives, and other employees.
People haven't been sitting idly by, but existing defenses are designed to address predefined adversarial attacks and can't adjust to attacks outside their design parameters. This shortfall means an attacker using a novel attack, or one designed to circumvent a known defense, could still do substantial damage.
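To make "adversarial attack" concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) against a toy logistic-regression classifier. The weights, input, and epsilon below are invented for illustration and have nothing to do with GARD itself; the point is simply that a tiny, targeted nudge to the input data can flip a model's decision.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression "model": weights and input are made up for illustration.
w = np.array([1.0, -1.0])   # model weights
b = 0.0
x = np.array([0.5, -0.5])   # a correctly classified input (true label y = +1)
y = 1.0

def predict(x):
    return 1.0 if sigmoid(w @ x + b) >= 0.5 else -1.0

# FGSM: step the input in the direction that increases the loss,
#   x_adv = x + eps * sign(dLoss/dx),
# where for logistic loss dLoss/dx = -y * (1 - sigmoid(y * (w @ x + b))) * w.
eps = 0.6
grad = -y * (1.0 - sigmoid(y * (w @ x + b))) * w
x_adv = x + eps * np.sign(grad)

print(predict(x))      # original input is classified +1
print(predict(x_adv))  # the perturbed input flips the prediction to -1
```

A defense tuned only to this specific perturbation pattern would be exactly the kind of "predefined" protection the passage above describes; a slightly different attack would sail past it.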
GARD is designed to approach the problem differently, taking a far broader view of attack types and being far more agile in both identifying and responding to an attack.
I see this as an AI-driven defense against AI-targeted threats, and critical given the growing potential for an AI-driven attack that could circumvent existing defenses.
Wrapping Up: GARD Is Critical
We are entering a new age, but we already see huge problems with the massive proliferation of false information and equally massive attempts to corrupt information-gathering systems with that false data. To combat this, DARPA has defined a program called GARD, and both Intel and Georgia Tech have stepped up to help keep us safe. Here's hoping the effort succeeds, because if it doesn't, the outcome could be extremely dire.
On March 31, network security provider Palo Alto Networks (PAN) announced its intent to acquire software-defined wide-area network (SD-WAN) pioneer CloudGenix for about $420 million in cash. This is a healthy, albeit fair, premium for a company that has an estimated revenue of $45 million with about 250 customers.
For context, VMware paid roughly the same for VeloCloud in 2017. The CloudGenix customer base comprises many Fortune 1000 companies with strengths in health care, retail, manufacturing, finance, tech and hospitality.
The addition of CloudGenix brings SD-WAN into the PAN portfolio. Security is shifting away from point products toward platforms, and PAN has one of the best platform stories in the industry.
GRCA-Academy Launches free GDPR Foundational Course...
CISM and CISSP
Register by 25th April. It's a two-day online LIVE course.
https://grca-academy.com/product/covid-19-free-course-cobit-2019-foundation-training/
https://www.udemy.com/course/winning-at-python-start-learning-python-for-free/
Udemy
Free Python Tutorial - Winning at Python: Start Learning Python for FREE
Learn Python like a Professional! Learn this decade's most valuable skill in a fun and interactive way! - Free Course
An awesome list of resources for training, conferences, speaking, labs, reading, etc that are free all the time or during COVID-19 that cybersecurity professionals with downtime can take advantage of to improve their skills and marketability to come out on the other side ready to rock.
https://github.com/gerryguy311/CyberProfDevelopmentCovidResources
GitHub: gerryguy311/Free_CyberSecurity_Professional_Development_Resources, an awesome list of FREE resources for training, conferences, speaking, labs, and reading.
DetectionLabELK
DetectionLabELK is a fork from Chris Long's DetectionLab with ELK stack instead of Splunk.
https://github.com/cyberdefenders/DetectionLabELK
GitHub: clong/DetectionLab, which automates the creation of a lab environment complete with security tooling and logging best practices.
⚠️ WARNING !!!
It's possible to hack iPhones and iPads just by sending an email to targeted users.
Hackers have been exploiting a critical zero-click, zero-day RCE vulnerability in the default Mail app installed on millions of Apple devices.
Details — https://thehackernews.com/2020/04/zero-day-warning-its-possible-to-hack.html