Takeaways From the IAPP’s AI Governance Global Conference

By Jennifer Ruehr and Sam Castic 

If you weren’t able to attend the IAPP’s inaugural AI Governance Global Conference in Boston last week, we have you covered. We attended and have summarized below several key themes from the event. 

  1. AI Governance Programs Have Some Key Elements. At many companies, AI governance programs consist of some or all of the following components.   

    • A cross-functional stakeholder committee for setting risk tolerance, reviewing AI use cases, and/or developing protocols and policies.   

    • Guiding AI principles, policies, and guardrails that provide the company with a framework for buying, developing, integrating, and using AI both internally and externally.   

    • AI impact assessments to document and understand risks, mitigations, and expected outcomes from a cross-functional perspective.   

    • Internal processes or third-party tools for testing the fairness of AI uses.  

    • Training for stakeholders involved in AI development, procurement, and use cases. 

  2. Privacy Teams Play an Important Role. Privacy teams are playing an important role in AI governance because they understand how to assess risks and apply mitigating controls. Privacy teams also have existing process development and assessment protocols that can be leveraged and customized for AI governance. However, privacy teams are not necessarily owning AI governance. At many companies, business stakeholders are stepping up to own or co-own AI governance, especially at companies where AI is playing an important role in the company’s products and service offerings. Other key stakeholders involved in AI governance include security, legal (including IP and litigation), HR, procurement, data science, technology, product, compliance, and risk. 

  3. Resources Are a Challenge. Many companies are struggling with resources for AI governance, especially when the issues are viewed solely from a compliance perspective. Some companies are having success finding resources by working with business stakeholders to understand the internal and external opportunities that AI governance programs can help enable. Privacy teams are often working with legal and business stakeholders to appropriately calibrate the risks AI can present, and opportunities it can enable, so that AI governance programs get the necessary business buy-in and can effectively manage risk and enable innovation. 

  4. Leverage What You Have. AI governance programs do not have to be fully built out to get started. Many companies began by leveraging processes and policies they already had in place and modifying them to account for AI risks and opportunities. For example, companies with existing frameworks for managing vendor risk reviewed and updated those frameworks to address AI. Data classification policies were also helpful tools for determining what data is appropriate to input into third-party AI applications, especially when they are accompanied by AI-specific guardrails.  

  5. You Know How to Do This. Developing an AI governance approach is manageable, and if you are a privacy professional, you already have many of the key skills. For example, when it comes to assessing AI uses, you can adapt skills you have learned from assessing privacy risks and apply them to this new context. Identify the risks (if any), pick mitigations (if needed), define expected operations, fairness criteria, and outcomes, test before launch (if the consequences of getting it wrong could cause harm), monitor the AI use once deployed for vulnerabilities and proper operation (if called for), and refine it to achieve objectives. Work cross-functionally to identify who will be responsible for each of these tasks. And, similar to security incident reporting, have a clear path for both internal and external parties to contact the company with possible concerns or issues related to AI use, and policies for how those reports are addressed. 

  6. Use an Appropriate Framework. Pick and adapt, or draft, a framework for AI governance. Various frameworks exist, but there was a lot of discussion at the conference of the NIST Artificial Intelligence Risk Management Framework. Do not view frameworks as one-size-fits-all; rather, choose and tailor one based on how your business operates. 

  7. Ignoring or Banning AI Is Not a Solid Strategy. Do not try to ban AI, and do not let its risks lead you to ignore the opportunities it may present. Passing up important business opportunities, such as efficiency or innovation, may create an even bigger risk to your business. Work to understand the opportunities for the company, both internally and externally, and enable them with risk assessment and mitigation practices that are tailored to the business’s risk appetite.  

  8. Regulators Are Paying Attention. On the global stage, a variety of regulators are focused on AI, including privacy and data protection regulators. For many, holding companies accountable for AI uses whose risks have not been appropriately assessed and mitigated is a priority, especially where those uses result in harm to people. At the same time, there is no consensus among regulators about how AI risks should be assessed or mitigated.  

  9. Disgorgement Is a Threat. In the US, the FTC has ordered disgorgement of data and AI models in consent decrees resulting from investigations and enforcement actions. US regulators may increasingly seek this type of remedy when they view models as having been developed in ways that violate the law.  

  10. You Are Not Alone. Benchmark with peers at other organizations. Just as governments are collaborating on AI principles and codes of conduct, many companies are collaborating and benchmarking to set up their AI governance approaches. 

  11. Consult Available Resources. Many organizations are sharing resources about their approaches to aspects of AI governance. Look for and consider these resources as you help formulate your company’s approach.  

Hintze Law PLLC is a Chambers-ranked, boutique privacy firm that provides counseling exclusively on global data protection. Its attorneys and privacy analysts support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy and data security.