Outsourcing, third parties, and business associates can pose significant risk to a business and must be managed properly. Many regulations and standards have recently been revised to require more rigor in vendor management in the wake of high-profile security breaches.
So, what is disruptive outsourcing? It is a cloud-based, automated AI solution that delivers outsourced services to various businesses. It is already being used for recruiting and search, and it is applicable to other industries such as retail and healthcare. But what if the disruptive outsourced solution were compromised and information was leaked? How would standards and regulations such as GDPR, HIPAA, or PCI DSS come into play? The compliance rules are well defined, but they assume traditional outsourcing is deployed, not an AI solution.
The Liability Question
If there was a hypothetical breach, several questions come to mind:
- Who is responsible or liable?
- What is the jurisdiction? Does the jurisdiction reside where the IP address resides in the cloud? What happens if that IP address is spoofed?
- What is the recoverable property or base in that jurisdiction?
The real subject here is disruptive technology, not just the outsourcing of that technology.
Mike Gerdes, one of my Experis Solutions colleagues, had an interesting perspective: “It is an emerging use of AI in place of people doing jobs that are either programmatic or have deterministic or predictable responses from whatever inputs are received.” Truly disruptive solutions incorporate autonomous decision-making and go beyond the mere combination of Robotic Process Automation (RPA) and cloud computing.
Disruptive technology like AI has huge implications for organizations that don’t properly address the new forms of risk and the potential threat vectors that accompany this platform. The ways AI can get into processes and disrupt how normal protections and security controls operate without human intervention are endless, and some of those variations may not even be predictable once the underlying system involves fully functional AI.
Mike and I agreed that companies using AI will need to proactively determine a new set of rules for how and when open-source information can be aggregated with data solicited directly from data subjects, as aggregation can put an entity over the threshold of what is allowable and what is not. Failing to create these rules could create legal and regulatory exposure of companies employing AI, with potential fines that could cripple their businesses.
Data Protection and Governance
There are also significant risks in allowing a machine intelligence to seek out and compile additional records from public sources and then autonomously create repositories that link that data to individual records about a data subject. Depending on the rules used, there is a distinct probability that the composite records created by the AI could exceed the allowed levels of private/protected data collection and use, even where the collection and use of the individual elements did not. Neither Mike nor I was aware of any statute or regulation that exempts aggregated information from the controls that apply to initial data collection, so legal and regulatory compliance of autonomous data aggregation may be one of the key risks and legal liabilities that come with this technology.
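The aggregation problem above can be sketched in a few lines of code. Everything here is hypothetical: the field names, the per-field sensitivity weights, and the compliance ceiling are illustrative inventions, not drawn from any statute. The point is only the shape of the risk: each collection is permissible on its own, but the AI-linked composite record is not.

```python
# Hypothetical sketch: individually lawful data collections combining
# into a composite record that exceeds an allowed sensitivity ceiling.
COLLECTED = {"name", "email"}                          # gathered directly from the subject
PUBLIC = {"employer", "home_city", "political_posts"}  # compiled by the AI from public sources

SENSITIVITY = {                                        # illustrative per-field weights
    "name": 1, "email": 1,
    "employer": 1, "home_city": 1, "political_posts": 2,
}
ALLOWED_THRESHOLD = 5                                  # illustrative compliance ceiling

def composite_risk(fields):
    """Sum the sensitivity weights of all fields in a record."""
    return sum(SENSITIVITY[f] for f in fields)

# Each source stays under the ceiling on its own...
print(composite_risk(COLLECTED), composite_risk(PUBLIC))       # 2 4
# ...but the autonomously linked composite record exceeds it.
print(composite_risk(COLLECTED | PUBLIC) > ALLOWED_THRESHOLD)  # True
```

A real rule set would be far richer than a weighted sum, but even this toy model shows why aggregation rules must be evaluated on the composite record, not per source.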
This raises the question: who is going to audit the companies that use this technology, and what will they audit them against?
Clearly, there needs to be a definition of disruptive technologies, and defined “key outcomes” need to be an anchor in determining and managing the associated risks. There also needs to be an agreed-upon set of rules governing how AI technologies collect data, limiting how that data is gathered and presented for the solution it is contracted to serve, and defining how to determine the assurance level these services provide.
The Case for Process – Outcome Risk Models
Process – Outcome based risk models using data analytics could provide that assurance and definition for AI-based outsourced solutions, and those Process – Outcomes would provide a viable basis upon which contracts and agreements could be legally drafted. Selecting the key data (key outcomes and key risk indicators) to be used in risk evaluation is critical. Once the appropriate data has been identified, the correct analytic technique can be selected to determine the point of risk to be addressed.
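A minimal sketch of what such a model might look like, under stated assumptions: the indicator names, baseline data, and the two-sigma tolerance below are all hypothetical, chosen only to show how a key risk indicator (KRI) can be scored against its historical baseline to flag an outcome that drifts out of tolerance.

```python
from statistics import mean, stdev

# Hypothetical baselines: historical observations per KRI (illustrative data).
BASELINES = {
    "failed_logins_per_day": [12, 15, 11, 14, 13, 12, 16],
    "records_aggregated_per_run": [900, 950, 1000, 980, 920, 940, 960],
}
TOLERANCE_SIGMAS = 2.0  # illustrative tolerance, not an industry standard

def flag_outcomes(current):
    """Flag each KRI whose current value exceeds its baseline mean plus two sigma."""
    flags = {}
    for kri, value in current.items():
        history = BASELINES[kri]
        flags[kri] = value > mean(history) + TOLERANCE_SIGMAS * stdev(history)
    return flags

# Normal login volume, but an abnormal spike in autonomous aggregation.
flags = flag_outcomes({"failed_logins_per_day": 14,
                       "records_aggregated_per_run": 2400})
print(flags)  # {'failed_logins_per_day': False, 'records_aggregated_per_run': True}
```

A production model would use richer analytics than a sigma threshold, but the design choice is the same one the text describes: pick the key data first, then pick the technique that exposes the point of risk.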
The ability to build Process – Outcome risk models can provide the guidance needed for AI technology and disruptive outsourcing. How industries, businesses, and people will deal with disruptive technologies in the near future will require careful consideration and definition. And the consultancies that are building risk models using data analytics for their clients will be a step ahead of the rest.