AI Lang Syne: A Look Back at 2023 and Considerations for 2024

2023 marked a significant shift in AI technology and ushered in a flood of laws and standards to help regulate it. In this article, we recap the major AI events of 2023, explore what may come in 2024, and provide some practical tips for responding to the challenges and opportunities that lie ahead.

RECAP ON AI DEVELOPMENTS IN 2023

U.S. Developments in AI

An FTC laser-focused on AI

In 2023, the FTC put businesses on notice: existing laws such as Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act apply to AI systems. The FTC brought actions in 2023 against Ring, Edmodo, and Rite Aid for violative practices involving AI. Its latest action against Rite Aid resulted in an order with requirements such as fairness testing, validation of accuracy, continuous monitoring, and employee training. Commissioner Bedoya described the order’s requirements as a “baseline” for reasonable algorithmic fairness practices. The FTC has also made clear through its actions this year that it will continue to use model deletion as a remedy.

EO on Safe, Secure, and Trustworthy AI and OMB Guidance on Agency Use of AI

On October 30, 2023, President Biden issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO”). The EO recognizes the benefits of the government’s use of AI while detailing core principles, objectives, and requirements to mitigate risks. Building off the EO, the Office of Management and Budget followed with its proposed memo: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (“OMB Memo”). The OMB Memo outlines requirements for government agencies as they procure, develop, and deploy AI. Together, the EO and OMB Memo call on federal agencies to produce more specific guidance. While the EO, the forthcoming final OMB Memo, and agency guidance apply to the federal government, companies providing services to the government will also be subject to these requirements.

An evolving state (and city) AI policy landscape

In the past year, we have witnessed a flurry of state and local action on AI. Under state omnibus privacy laws, Colorado finalized rulemaking on profiling and automated decision-making, and California proposed rulemaking on Automated Decision Making Technologies. Several other states passed similar laws providing an opt-out for certain automated decision-making and profiling, while other state and city laws focused on particular applications of AI, including child profiling, prescription writing, employment decisions, and insurance.

Some states also spent 2023 enacting laws focused on government-deployed AI. For example, Illinois and Texas established task forces to study the use of AI in education and in government systems, and the potential harms AI could cause to civil rights. Connecticut also passed legislation establishing a working group on AI and requirements for government use of AI. Additionally, in September 2023, Pennsylvania’s governor issued an executive order establishing principles for government-deployed AI.

EU and Beyond

Beyond the U.S., the EU, other countries, and international bodies have also moved to regulate AI systems.

On December 8, 2023, the EU reached political agreement on the EU AI Act, the EU’s comprehensive framework for the regulation of AI. The Act scales requirements based on the risk level of the underlying AI system. Specifically, the Act bans certain practices that pose an “unacceptable risk,” applies strict requirements to practices that are “high risk,” requires enhanced notice and labeling for systems that use AI and are a “limited risk,” and allows voluntary compliance, such as codes of conduct, for systems that are “minimal risk.” The Act applies a separate tiered compliance framework for general-purpose AI models (including certain large generative AI models), with enhanced obligations for models that pose systemic risks. Once the text is finalized, the Act is expected to come into force sometime in the summer of 2024.

Canada also launched a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems in September 2023 and continues enactment efforts on the Artificial Intelligence and Data Act (“AIDA”). China saw more AI regulations take effect in 2023, including those related to Deep Synthesis in Internet Information Services and the Management of Generative AI. Discussions on international standards also took place in 2023, with G7 leaders collaborating through the Hiroshima AI Process to yield Principles and a Code of Conduct for organizations developing advanced AI systems.

TRENDS TO EXPECT FOR 2024

2024 will bring more adoption and novel uses of AI tools and systems by government, private entities, and individuals. As a result, we can expect more legislation and regulatory scrutiny around the uses of AI.

Around the globe, in addition to the EU AI Act taking effect, we expect to see more countries consider and pass AI laws. As with the GDPR, we expect many will model their laws on the EU AI Act. Additionally, while Canada’s AIDA regulations may be finalized in the coming year, the provisions of AIDA would not come into effect for another two years.

In the U.S., we will see more states requiring data protection assessments for profiling and automated decision-making, including in the context of advertising, and the possibility of opt-in requirements for profiling, as proposed in some pending bills. Several state bills have also been proposed regarding AI in employment contexts, including notice to employees and other restrictions on use in employment decisions and monitoring, requirements for bias and disparate-impact analysis, and rights of employees to request information used in the context of AI processing. Additionally, more laws and enforcement activities will continue to focus on preventing discriminatory harms in the context of credit scoring, hiring, insurance, healthcare, targeted advertising, and access to essential services, as well as on the disproportionate impacts of AI on vulnerable persons, including children.

PRACTICAL TIPS FOR AI GOVERNANCE IN 2024

With so much change coming, it can be hard to know where to focus your current AI governance. Consider the following practical tips as you head into 2024.

1. Develop and update AI processes, policies, and frameworks

Have a process in place to keep up to date with changes in AI technologies, laws, use cases, and risks. This will help ensure your policies and frameworks remain current and compliant.

Create accountability by designating personnel responsible for your AI program, and have a process to train personnel on AI policies and the use of frameworks.

In developing policies and frameworks, consider the lifecycle of your AI systems and tools, from the data used to train AI models in development, to data inputs and outputs processed in production. Policies and risk assessment frameworks should be updated to identify and address risks specific to AI systems. For example, policies and frameworks should address securing AI systems and data; incident response procedures; data sourcing practices; data minimization and retention; assessing and monitoring systems for data integrity, bias, safety, and discriminatory or disparate impacts to individuals; assessing the consequences, rate, and likelihood of inaccurate outputs; and assessing societal harms.
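To make these categories actionable, below is a minimal sketch of how the lifecycle risk areas above could be encoded as a reusable checklist; it is written in Python for illustration only, and the category and item names are assumptions for this example, not a mandated taxonomy.

```python
# Illustrative checklist of AI lifecycle risk areas; names are example
# assumptions, not a prescribed or legally required taxonomy.
RISK_CHECKLIST = {
    "development": [
        "Training data sourcing and provenance documented",
        "Data minimization and retention limits applied",
    ],
    "production": [
        "Inputs and outputs monitored for integrity, bias, and safety",
        "Discriminatory or disparate-impact testing performed and logged",
        "Consequences, rate, and likelihood of inaccurate outputs assessed",
    ],
    "security_and_response": [
        "AI systems and data secured",
        "Incident response procedures cover AI-specific failures",
    ],
}

def open_assessment(system_name):
    """Start a per-system assessment with every check initially unverified."""
    return {
        "system": system_name,
        "checks": {category: {item: False for item in items}
                   for category, items in RISK_CHECKLIST.items()},
    }

assessment = open_assessment("example-model")  # hypothetical system name
```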

Review external policies and statements about your AI systems and data practices to ensure they align with your policies and properly disclose and accurately reflect information learned through inventories and risk assessments.

2. Put those policies into action – conduct AI inventories and risk assessments and monitor vendors

Conduct an inventory of existing AI systems. Identify and document the various AI systems in use, the content and data they process, the outputs they produce, and any downstream recipients of data or content. Once you have conducted an AI inventory, use this information to conduct an AI risk assessment considering the particular risks described above.
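As a concrete illustration, the following minimal sketch shows what one record in such an inventory might capture, expressed as a Python dataclass; every field name here is an illustrative assumption rather than a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (illustrative fields only)."""
    name: str                          # internal identifier for the system
    owner: str                         # team or person accountable for it
    purpose: str                       # business use case the system supports
    training_data_sources: List[str]   # provenance of data used to train it
    input_data_types: List[str]        # data processed at runtime
    output_types: List[str]            # scores, rankings, generated content, etc.
    downstream_recipients: List[str]   # vendors or systems receiving outputs
    third_party: bool = False          # whether the tool is vendor-supplied
    risk_notes: List[str] = field(default_factory=list)  # feeds the risk assessment

# Hypothetical example: recording a vendor-supplied screening tool.
record = AISystemRecord(
    name="resume-screening-model",
    owner="HR Operations",
    purpose="Rank inbound job applications",
    training_data_sources=["vendor-provided; provenance under review"],
    input_data_types=["resume text", "employment history"],
    output_types=["fit score"],
    downstream_recipients=["applicant tracking system"],
    third_party=True,
)
```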

Don’t overlook third-party AI solutions and the use of AI by third-party vendors as part of your assessment. For third-party AI solutions, request their AI policies and administer AI due diligence questionnaires. Also consider the provenance of the data used to develop their AI tools. Review the types of data sets used to train AI algorithms and the purposes for which the AI tools were developed, and evaluate whether those reflect the types of data and purposes in your intended deployment. And review these tools, along with your more traditional vendors, to learn whether your data is being used for their own AI purposes (or others’).

3. Leverage existing principles and resources for today and champion flexibility for tomorrow

As organizations grapple with new challenges, changing landscapes, and uncertainty posed by AI technologies and regulation, it is easy to get overwhelmed. For areas of uncertainty, you can achieve some clarity and purpose by centering AI governance on your established organizational values and principles. And remember, many AI governance resources already exist. (For a list of resources, check out Takeaways from the IAPP AI Governance Global Conference.)

Initial AI governance efforts will need to adapt continuously as new technologies, use cases, laws and regulations, and market standards evolve. As a result, AI governance efforts should encourage flexible strategies. For example, compartmentalization and machine unlearning methods may help businesses retain models when the initial training data becomes unusable or problematic for legal or other reasons, without needing to delete and rebuild a model in its entirety, as sketched below. AI professionals should set such expectations for flexibility early and often in 2024 and in the years to come.
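To illustrate the compartmentalization idea, here is a minimal sketch in the spirit of shard-based (“SISA-style”) machine unlearning: sub-models are trained on disjoint data shards, so removing problematic records requires retraining only the affected shard’s sub-model. The trivial “trainer” and all names below are illustrative assumptions, not a specific vendor tool or a legally required method.

```python
from statistics import mean

def train_model(shard):
    """Placeholder trainer: the 'model' is just the mean of its shard's data."""
    return mean(shard)

# Partition training data into disjoint shards; train one sub-model per shard.
shards = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sub_models = [train_model(s) for s in shards]

def predict():
    # Ensemble the shard sub-models (here, by averaging their outputs).
    return mean(sub_models)

def unlearn(shard_idx, bad_records):
    # Drop problematic records from one shard and retrain only that
    # sub-model; the other shards' training investment is preserved.
    shards[shard_idx] = [r for r in shards[shard_idx] if r not in bad_records]
    sub_models[shard_idx] = train_model(shards[shard_idx])

unlearn(1, bad_records=[3.0])  # e.g., records that became legally unusable
print(predict())               # ensemble reflects the removal without a full rebuild
```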

A version of this article was published by Law360 on January 1, 2024, at https://www.law360.com/articles/1779129/10-privacy-compliance-areas-to-focus-on-in-2024

Jevan Hutson is an Associate at Hintze Law PLLC. Jevan is a respected expert and thought leader on artificial intelligence (AI) and machine learning (ML) ethics, law, and policy.

Alex Schlight is a Partner at Hintze Law PLLC. Alex counsels US and international clients on data privacy & AI compliance and risk management strategies.

Hintze Law PLLC is a Chambers-ranked, boutique privacy firm that provides counseling exclusively on global data protection. Its attorneys and privacy analysts support technology, ecommerce, advertising, media, retail, healthcare, and mobile companies, organizations, and industry associations in all aspects of privacy and data security.