In support of these, the federal government has passed laws, such as the 2001 Patriot Act, which grants law enforcement greater surveillance powers over those suspected of terrorism-related crimes, facilitates information sharing among government agencies, and supports other homeland security activities. As one might expect, any gains in national security are tied to an increased ability to gather and derive intelligence from data. However, these laws and reports, such as "Artificial Intelligence - Using Standards to Mitigate Risks", do not focus upon data privacy at an individual, enterprise, or governmental level.

So where is the correlation between AI and data privacy? Data privacy issues are present throughout the lifecycle of AI. For example, data may be collected and aggregated with other sources without controls in place to determine whether some sort of privacy line has been crossed; bots might be employed to 'scrape' the Internet for personal information, some of which should not be accessible to the public. Second, once this data is consumed by machine learning or AI solutions, the intelligence gathered may be greater than the sum of its parts. Put another way, if one of the characteristics of AI is simulating natural intelligence to solve complex problems, then one way this is accomplished is by finding correlations in data that help inform and predict. Individually, these pieces of data may not generate much value, but in a wider context, once the relationships between data sources are identified, much richer insights can be gleaned. Given the reach and scope of the federal government, as well as the ubiquitous nature of the Internet, it is safe to say that its ability to access multiple sources of unique and shared data is tremendous.

So, where does the line between the greater good of homeland security and data privacy lie? Is there one? While the federal government has enacted numerous sector-based laws, some of which touch upon privacy (e.g. the Health Insurance Portability and Accountability Act, the Fair Credit Reporting Act, etc.), there is no single piece of federal legislation that principally focuses on data protection and privacy, like the EU's GDPR. Most states have adopted laws protecting their residents' personally identifiable information, but this does not address how data privacy and homeland security coexist at a national level.

This is not to suggest that DHS and other intelligence agencies shouldn't continue leveraging AI to help secure our country, if for no other reason than that threat actors and nation states are adopting this technology for their own purposes, often at odds with the United States. For example, it is thought that AI is being used to make polymorphic attacks even more effective, increasing the speed and efficacy with which identifiable attack attributes change to avoid detection by advanced cybersecurity tools. Another example is a spear phishing experiment conducted by the security firm ZeroFox. Their AI tool sent spear phishing tweets to over 800 people at a rate of 6.75 tweets a minute, capturing 275 people. Their human counterpart was only able to send malicious tweets to 129 users at a rate of 1.075 tweets a minute, ensnaring only 49 individuals. It's examples like these that make it imperative that we use tools like AI to protect our national security.
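To make the earlier point about aggregation concrete, the sketch below uses entirely hypothetical data and field names to show how records that are individually low-value can, once linked across sources on shared attributes, produce a far more revealing composite profile. It is an illustration of the privacy concern, not a depiction of any agency's actual methods.

```python
# Hypothetical illustration: three low-value data sources linked into a richer profile.

# Source A: a public social media scrape -- screen name, city, and interests only.
social_posts = [
    {"handle": "jdoe88", "city": "Phoenix", "topic": "marathon training"},
    {"handle": "k_lee", "city": "Mesa", "topic": "home networking"},
]

# Source B: a marketing list tying the same screen name to an email and age range.
marketing_list = [
    {"handle": "jdoe88", "email": "j.doe@example.com", "age_range": "30-39"},
]

# Source C: a public record keyed by city, adding a name and street address.
public_records = [
    {"city": "Phoenix", "name": "J. Doe", "street": "123 Main St"},
]


def link_records(posts, marketing, records):
    """Join the three sources on shared attributes (handle, city).

    Each source alone reveals little; the correlation across them is what
    yields the sensitive, composite profile.
    """
    profiles = []
    for post in posts:
        profile = dict(post)
        for row in marketing:
            if row["handle"] == post["handle"]:
                profile.update(row)
        for row in records:
            if row["city"] == post["city"]:
                profile.update(row)
        profiles.append(profile)
    return profiles


if __name__ == "__main__":
    for p in link_records(social_posts, marketing_list, public_records):
        print(p)
```

Running the sketch prints a merged record for "jdoe88" that combines interests, email, age range, and a street address, none of which was especially sensitive on its own. That is the "greater than the sum of its parts" effect described above.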
Until the United States defines its position on data privacy and protection, other legitimate and pressing national needs, such as homeland security, will continue to move forward with the advanced use and application of data-driven technologies such as AI and machine learning. As we continue to push the private sector to be privacy-centric with its services and technology, so too must the US government, starting with a clear message about data privacy and protection.

Lester Godsey