AI Governance and Preserving Privacy

February 29, 2024  |  Matt Mui

LevelBlue featured a dynamic cyber mashup panel with Akamai, Palo Alto Networks, SentinelOne, and the Cloud Security Alliance. We discussed some provocative topics around Artificial Intelligence (AI) and Machine Learning (ML), including responsible AI and securing AI. Panelists shared good examples of best practices for an emerging AI world, such as implementing Zero Trust architecture and anonymizing sensitive data. Many thanks to our panelists for sharing their insights.

Before diving into the hot topics around AI governance and protecting our privacy, let’s define ML and Generative AI (GenAI) to provide some background on what they are and what they can do, along with some real-world use cases for better context on the impact and implications AI will have on our future.

GenAI and ML 

Machine Learning (ML) is a subset of AI in which algorithms make decisions or predictions based on data without being explicitly programmed, automatically learning and improving from experience.
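To make that idea concrete, here is a minimal sketch of supervised learning using scikit-learn and its bundled iris dataset (both assumptions chosen for illustration): the model is never given classification rules; it infers them from labeled examples.

```python
# A minimal sketch of supervised machine learning: the model is never given
# explicit classification rules; it infers them from labeled examples.
# Assumes scikit-learn is installed; the iris dataset is illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)  # "learn from experience"
print(f"Accuracy on unseen data: {model.score(X_test, y_test):.2f}")
```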

GenAI is a subset of ML that focuses on creating new data samples that resemble real-world data. GenAI can produce new and original content through deep learning, a method in which data is processed in layered networks loosely modeled on the human brain, independent of direct human interaction.

GenAI can produce new content spanning text, images, 3D renderings, video, audio, music, and code. Increasingly, multimodal capabilities allow it to interpret a prompt in one data type and generate another: describing an image, generating realistic images, creating vibrant illustrations, predicting contextually relevant content, answering questions in an informative way, and much more.

Real-world use cases include summarizing reports, creating music in a specific style, developing and improving code faster, generating marketing content in different languages, detecting and preventing fraud, optimizing patient interactions, detecting defects and quality issues, and predicting and responding to cyberattacks with automation at machine speed.

Responsible AI

Given the power to do good with AI, how do we balance the risk and reward for the good of society? What is an organization’s ethos and philosophy around AI governance? What is the organization’s philosophy around the reliability, transparency, accountability, safety, security, privacy, and fairness of AI, and how does it keep AI human-centered?

It's important to build each of these pillars into an organization's AI innovation and business decision-making. Balancing the risk and reward of integrating AI/ML into an organization's ecosystem, without compromising social responsibility or damaging the company's brand and reputation, is crucial.

In a hyperconnected digital world where personal data is the DNA of our identity, privacy sits at the center of AI and is a top priority.

Privacy concerns with AI

Cisco’s 2023 consumer privacy survey, a study of over 2,600 consumers in 12 countries, indicates that consumer awareness of data privacy rights continues to grow, with younger generations (age groups under 45) exercising their Data Subject Access rights and switching providers over their privacy practices and policies. Consumers support AI use but are also concerned.

Among those who support the use of AI:

  • 48% believe AI can be useful in improving their lives
  • 54% are willing to share anonymized personal data to improve AI products

At the same time, AI still has some work to do to earn trust:

  • 60% of respondents believe the use of AI by organizations has already eroded trust in them
  • 62% reported concerns about the business use of AI
  • 72% of respondents indicated that having products and solutions audited for bias would make them “somewhat” or “much more comfortable” with AI

Of the 12% who indicated they were regular GenAI users:

  • 63% were realizing significant value from GenAI
  • Over 30% of users have entered names, addresses, and health information
  • 25% to 28% of users have provided financial information, religion/ethnicity, and account or ID numbers

These categories of data present privacy concerns and challenges if exposed to the public. Surveyed respondents indicated concerns with the security and privacy of their data and with the reliability or trustworthiness of the information shared.

  • 88% of users said they were “somewhat concerned” or “very concerned” if their data were to be shared
  • 86% were concerned the information they get from GenAI could be wrong and could be detrimental to humanity.

Private and public partnerships in an evolving AI landscape

While everyone has a role to play in protecting personal data, 50% of consumers believe that national or local government should have primary responsibility for privacy leadership. Of the surveyed respondents, 21% believe that organizations, including private companies, should have primary responsibility for protecting personal data, while 19% said the individuals themselves.

Many of these discussions around AI ethics, AI protection, and privacy protection are occurring on the state, national, and global stage, from the White House to the European Parliament. AI innovators, scientists, designers, developers, engineers, and security experts who design, develop, deploy, operate, and maintain systems in the burgeoning world of AI/ML and cybersecurity play a critical role in society, because what we do matters.

Cybersecurity leaders will need to be at the forefront, adopting human-centric security design practices and developing new ways to better secure AI/ML and LLM applications, ensuring proper technical controls and enhanced guardrails are implemented and in place. Privacy professionals will need to continue to educate individuals about their privacy and their rights.
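As one illustration of what such a guardrail could look like, the minimal sketch below redacts obvious personally identifiable information (PII) from a prompt before it is ever sent to an LLM. The regex patterns and the redact() helper are illustrative assumptions, not a production-grade PII detector.

```python
# A minimal sketch of one technical guardrail: redacting obvious PII from a
# prompt before it reaches an LLM. The patterns below are illustrative
# assumptions, not a complete PII detector.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace recognizable PII with typed placeholders before submission."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Email jane.doe@example.com, call 555-867-5309, SSN 123-45-6789."))
# -> Email [EMAIL REDACTED], call [PHONE REDACTED], SSN [SSN REDACTED].
```

In practice, pattern-based redaction would be just one layer among several, alongside access controls, anonymization, and output filtering.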

Private and public collaborative partnerships across industry, government agencies, academia, and researchers will continue to be instrumental in promoting adoption of a governance framework centered on preserving privacy, regulating privacy protections, securing AI against misuse and cybercriminal activity, and mitigating AI's use as a geopolitical weapon.

AI governance

A gold standard for an AI governance model and framework is imperative for the safety and trustworthiness of AI adoption: a governance model that prioritizes the reliability, transparency, accountability, safety, security, privacy, and fairness of AI; one that will help cultivate trust in AI technologies and promote AI innovation while mitigating risks; and an AI framework that will guide organizations through risk considerations such as:

  • How do we monitor and manage risk with AI?
  • How can risk be appropriately measured?
  • What should the risk tolerance be?
  • How should risks be prioritized?
  • What is needed for verification?
  • How is it verified and validated?
  • What is the impact across human, technical, socio-cultural, economic, legal, environmental, and ethical factors?

Some common frameworks are emerging, such as the NIST AI Risk Management Framework (AI RMF). It outlines the following characteristics of trustworthy AI systems: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.

The AI RMF has four core functions to govern and manage AI risks: Govern, Map, Measure, and Manage. Applied as a regular process across the AI lifecycle, responsible AI practices of testing, evaluating, verifying, and validating allow for mid-course remediation and post-hoc risk management.
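To illustrate how such testing and validation might translate into practice, here is a hypothetical pre-deployment gate in the spirit of the Measure and Manage functions. The metric names and thresholds are assumptions for illustration; the AI RMF does not prescribe specific values.

```python
# A hypothetical pre-deployment gate: block release (forcing mid-course
# remediation) if measured risk exceeds the organization's stated tolerance.
# Thresholds and metric names are illustrative assumptions.
def deployment_gate(metrics: dict, min_accuracy: float = 0.90,
                    max_bias_gap: float = 0.05) -> bool:
    checks = {
        "accuracy": metrics["accuracy"] >= min_accuracy,
        "bias_gap": metrics["bias_gap"] <= max_bias_gap,
    }
    for name, passed in checks.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    return all(checks.values())

# Fails the bias check, so deployment is blocked pending remediation.
deployment_gate({"accuracy": 0.93, "bias_gap": 0.08})
```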

The U.S. Department of Commerce recently announced that, through the National Institute of Standards and Technology (NIST), it will establish the U.S. Artificial Intelligence Safety Institute (USAISI) to lead the U.S. government’s efforts on AI safety and trust. The AI Safety Institute will build on the NIST AI Risk Management Framework to create a benchmark for evaluating and auditing AI models.

The U.S. AI Safety Institute Consortium will enable close collaboration among government agencies, industry, organizations, and impacted communities to help ensure that AI systems are safe and trustworthy.

Preserving privacy and unlocking the full potential of AI

AI not only has durable effects on our business and national interests, but it can also have an everlasting impact on our own human interests and existence. Preserving privacy in AI applications means we must:

  • Secure AI- and LLM-enabled applications
  • Secure sensitive data
  • Anonymize datasets (a minimal sketch follows below)
  • Design and develop for trust and safety
  • Balance the technical and competitive business advantages of AI against its risks, without compromising human integrity and social responsibility

Doing so will unlock the full potential of AI while maintaining compliance with emerging privacy laws and regulations. An AI risk management framework like NIST’s, which addresses fairness and AI concerns around bias and equality and keeps human-centered principles at its core, will play a critical role in building trust in AI within society.
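For the dataset anonymization item above, here is a minimal sketch of pseudonymization via salted hashing. The field names and salt handling are illustrative assumptions; real deployments need proper key management and should also weigh re-identification risks (for example, via k-anonymity or differential privacy).

```python
# A minimal sketch of pseudonymizing identifying fields with a salted hash.
# Field names and salt handling are illustrative assumptions; production use
# requires key management and re-identification risk review.
import hashlib
import os

SALT = os.urandom(16)  # per-dataset secret; store it separately from the data

def pseudonymize(value: str) -> str:
    """One-way, salted hash keeps records linkable without exposing identity."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "diagnosis": "J45.909"}
anonymized = {k: (pseudonymize(v) if k in ("name", "email") else v)
              for k, v in record.items()}
print(anonymized)  # identifying fields replaced with stable pseudonyms
```

Salted hashing keeps records linkable across tables without exposing the underlying identity, which is why it is a common first step in anonymization pipelines.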

AI's risks and benefits to our security, privacy, safety, and lives will have a profound influence on human evolution; its impact is perhaps the most consequential development for humanity. This is just the beginning of many more exciting and interesting conversations on AI. One thing is for sure: AI is not going away, and it will remain a provocative topic for decades to come.

To learn more

Explore our Cybersecurity consulting services to see how we can help.
