How to Address AI Data Privacy Concerns

Can we preserve data privacy as artificial intelligence use cases continue to grow?

Carrie Pallardy, Contributing Reporter

July 4, 2023

AI has been unleashed, along with the privacy, legal, and ethical concerns it raises. What can companies do to address these concerns? How will AI regulation address data privacy? What power do individuals have to protect their privacy?

What Companies Are Doing

Some companies have opted for a ban. For example, Samsung banned the use of generative AI tools, like ChatGPT, after employees leaked sensitive internal data to the chatbot. Tony Lee, CTO at intelligent document processing company Hyperscience, suspects this approach will only be a stopgap. “Banning employees from using AI-powered tools in the interim will only delay the inevitable, and organizations need to outline their privacy and security measures sooner rather than later,” he says.

An AI governance program can help companies determine when to use AI and how to do so responsibly. This kind of program requires collaboration across multiple teams, including legal, risk management, and data governance.

Kristin Johnston, the associate general counsel of AI, privacy, and security at applied AI company Afiniti, outlines some of the most important questions to ask when building an AI governance program. “Are there checks and balances to ensure you stay within legal and ethical limits? Is there a lawful basis for data collection and processing? Is there always disclosure when AI is being used? Are consumers informed about the use of their personal information, and are they able to access or delete their data or opt out of its collection? Are your systems fair and nondiscriminatory, and do they avoid producing biased output? And do you have a robust security infrastructure in place?”

Companies developing AI systems can take several approaches to protecting data privacy. Data scientists need to be educated on data privacy, but company leadership needs to recognize that data scientists are not the ultimate experts on privacy. “Companies also can provide their data scientists with tools that have built-in guardrails that enforce compliance,” says Manasi Vartak, founder and CEO of Verta, a company that provides management and operations solutions for data science and machine learning teams.
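
As an illustration of what such a guardrail could look like, the hypothetical sketch below blocks a training job until known PII fields are stripped. The field names and blocklist are assumptions for illustration, not Verta's actual product.

```python
# Hypothetical compliance guardrail: a training job only receives data
# after a PII check. Field names and the blocklist are illustrative
# assumptions, not any vendor's real policy.

PII_FIELDS = {"name", "email", "ssn", "phone", "address"}

def enforce_pii_guardrail(rows):
    """Refuse to release records that still carry a known PII field."""
    for row in rows:
        leaked = PII_FIELDS & set(row)
        if leaked:
            raise ValueError(f"PII present, training blocked: {sorted(leaked)}")
    return rows

raw = [{"ssn": "123-45-6789", "tenure_months": 18, "churned": True}]

# A compliant pipeline strips PII before requesting the data.
clean = [{k: v for k, v in r.items() if k not in PII_FIELDS} for r in raw]

training_data = enforce_pii_guardrail(clean)  # passes
# enforce_pii_guardrail(raw)                  # would raise ValueError
```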

“Companies have to deploy a variety of technical strategies to protect data privacy; there is an entire spectrum of privacy preservation technologies out there to address such issues,” says Adnan Masood, PhD, chief AI architect at digital transformation solutions company UST.

He points to approaches like tokenization, which replaces sensitive data elements with non-sensitive equivalents. Anonymization and the use of synthetic data are also among the potential privacy preservation strategies. “On the cutting edge, we have techniques like fully homomorphic encryption, which allows computations to be performed on encrypted data without ever needing to decrypt it,” says Masood. “This could revolutionize privacy in AI, as it means AI models could be trained on encrypted data, ensuring the raw, sensitive data is never exposed.”
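
To make one of these techniques concrete, here is a minimal Python sketch of tokenization, the first approach Masood mentions. The `TokenVault` class, token format, and record fields are assumptions for illustration, not any particular vendor's implementation; production token vaults sit behind strict access controls rather than in application memory.

```python
import secrets

# Minimal sketch of tokenization: each sensitive value is swapped for a
# random token, and the real value lives only in a token vault.

class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> original sensitive value

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # Only trusted, audited callers should ever reach this path.
        return self._vault[token]

vault = TokenVault()
record = {"name": "Jane Doe", "ssn": "123-45-6789", "plan": "premium"}

# Non-sensitive equivalents leave the trusted boundary; an AI pipeline
# can train on the tokenized record without ever seeing raw PII.
safe_record = {
    "name": vault.tokenize(record["name"]),
    "ssn": vault.tokenize(record["ssn"]),
    "plan": record["plan"],  # non-sensitive field passes through
}
print(safe_record)
```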

Ensuring individuals retain control over their personal data will remain an essential privacy issue for AI developers. “A core concept should be the ability to give the user insight into the data that they are sharing and the ability to stop sharing it or the ability to control where it is being shared,” says Will LaSala, the field CTO at cybersecurity company OneSpan.
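
One way to picture LaSala's point is a consent registry that gates every use of personal data on a record the user can inspect and revoke at any time. The sketch below is hypothetical; the purpose names and `ConsentRegistry` interface are illustrative assumptions, not OneSpan's design.

```python
# Minimal sketch of user-controlled sharing: every use of personal data
# is gated on a consent record the user can inspect and revoke.

class ConsentRegistry:
    def __init__(self):
        self._consent = {}  # user_id -> set of purposes the user allows

    def grant(self, user_id, purpose):
        self._consent.setdefault(user_id, set()).add(purpose)

    def revoke(self, user_id, purpose):
        self._consent.get(user_id, set()).discard(purpose)

    def allows(self, user_id, purpose):
        return purpose in self._consent.get(user_id, set())

    def report(self, user_id):
        """Insight for the user: exactly what is shared, and for what."""
        return sorted(self._consent.get(user_id, set()))

registry = ConsentRegistry()
registry.grant("user-42", "model_training")

if registry.allows("user-42", "model_training"):
    pass  # only now may data flow to the training pipeline

registry.revoke("user-42", "model_training")  # user stops sharing
print(registry.report("user-42"))             # -> []
```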

While companies are racing to get the most out of AI, data privacy may not be at the forefront. “I know the startup pattern, which is that companies launch with very little thought about privacy and security,” says Jennifer King, PhD, a privacy and data policy fellow at the Stanford University Institute for Human-Centered Artificial Intelligence.

With AI-specific regulations still in development, serious competition in an exciting field, and an unclear picture of exactly how this technology works, what kind of limits are AI developers going to put on their systems? “We're more or less at the mercy of the companies trying to proactively put limits on their own technology, and I'm skeptical,” says King.

AI Regulation

Sam Altman, CEO of OpenAI, is one prominent voice among many calling for regulation. With the technology on the fast track to becoming ubiquitous and mounting concerns about its implications (data privacy being just one), regulators have their work cut out for them.

“There are as many as 800 AI policy initiatives underway across 69 countries and territories,” says Johnston. The European Union is at the forefront of regulation efforts with its proposed AI Act, which would classify AI applications by level of risk, with obligations scaling up for higher-risk uses.

Data privacy and AI are getting attention at the federal level in the United States. The country does not yet have a federal data privacy law, although the American Data Privacy and Protection Act has gained some traction in Congress. The White House has also released a Blueprint for an AI Bill of Rights.

Government bodies and industry groups are also releasing resources to guide AI development. The National Institute of Standards and Technology launched an AI Risk Management Framework, and the Global Partnership on Artificial Intelligence is working to guide responsible development and adoption of AI.

The issue of AI regulation is rife with debate. Will over-regulation hobble AI innovation? Will too little regulation allow negative consequences to run rampant? Regulators will have to walk a tightrope between the two.

“On one side, we need to ensure that AI and machine learning advancements continue to progress, driving innovation and bringing about societal benefits,” Masood argues. “On the other side, we need to mitigate the risks and tackle the ethical dilemmas that these technologies bring along, which requires setting rules and standards.”

Individual Responsibility

Regulation is forthcoming, and companies have an obligation to grapple with the data privacy issues inherent to AI. But individuals have a role to play, too. After all, it is their privacy at stake. Vartak stresses the importance of understanding the tradeoffs that come with using AI tools. Individuals can weigh the benefits against the risks associated with the private data they are sharing.

“To put it bluntly, we all need to read the boilerplate,” she says. “We need to understand who is getting that information and how they say they will use it. And we need to hold corporations accountable when they act irresponsibly with our data.”

About the Author

Carrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.
