Alec Foster • 2023-10-31
Law, Privacy, Marketing, Regulations, AI Models
It's been a whirlwind couple of months since my last post. Between diving into my postgraduate program in AI Ethics at the University of Cambridge and consulting on responsible AI in the marketing sector, my writing schedule has taken a backseat. But given Monday's seismic shift in the AI regulation landscape, it's time to come up for air and dissect what's happening.
On Monday, President Biden signed a historic executive order on artificial intelligence. In recent weeks, the commentary on responsible AI has largely been dominated by apocalyptic doomers, VC-backed accelerationists, and Big Tech firms stoking fears to spur regulation and block competition, yet these voices hardly represent the full spectrum of perspectives. As an AI rationalist, I find there's a void when it comes to discussing AI's immediate challenges and their ethical implications in fields like marketing. So, let's break down the executive order's implications and how it aligns (or doesn't) with the needs and ethical considerations of AI and marketing.
The Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence is a statement of intent for the AI landscape, with implications across many industries. The order calls on Congress to pass new legislation, but that's unlikely to happen anytime soon, given the divided state of our Congress. I don't expect to see cross-aisle cooperation on AI regulation in the foreseeable future. So, for now, this executive order is the most concrete action we can expect in the U.S., at least until 2025.
The order calls for standardized safety tests for companies developing foundation models, signaling that the U.S. government views AI regulation first and foremost as a national security and economic competitiveness issue. While the order does touch on issues like AI-enabled fraud and deception, implying some consideration for fairness, the dominant narrative is still one of national interests ("[The National Security Memorandum] will direct actions to counter adversaries' military use of AI"). This raises the question: will the U.S. government's approach sideline more comprehensive ethical considerations? Safety is unquestionably vital, but overemphasizing it, particularly from a speculative national security angle, risks overlooking the near-term challenges presented by current models.
The Executive Order makes an earnest attempt to prioritize privacy. While it calls for accelerating "privacy-preserving techniques" in AI, this isn't groundbreaking. It will ultimately be up to the creators of foundation models and AI applications whether to adopt privacy-preserving technologies such as differential privacy or secure data collaboration platforms. Furthermore, regulatory fines have hardly deterred privacy violations in the U.S. market.
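To make "privacy-preserving techniques" concrete, here's a minimal Python sketch of the Laplace mechanism, the textbook building block of differential privacy. The function name and parameters are illustrative, not drawn from the order or from any particular library: a counting query changes by at most one when a single person is added to or removed from the data, so adding Laplace noise with scale 1/epsilon yields epsilon-differential privacy.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person shifts
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Illustrative: report how many users converted on a campaign without
# revealing whether any specific individual appears in the data.
print(dp_count(true_count=1284, epsilon=0.5))
```

Smaller values of epsilon mean more noise and stronger privacy guarantees. The tension the order leaves unresolved is that nothing compels model builders to accept that accuracy trade-off in the first place.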
I appreciate the order's attention to the privacy of consumer data used to train AI systems, a crucial concern given that data is the lifeblood of AI. The directive also scrutinizes federal agencies' data collection methods, particularly when it comes to commercially available information containing personally identifiable data. It's worth noting that the intelligence community often procures such data from brokers, bypassing the need for court orders. However, it's disappointing that the order doesn't address the lack of transparency in training data. Even a confidential review by an expert panel could provide much-needed accountability without compromising intellectual property.
So, while the order takes necessary steps, considering it a panacea for AI's privacy concerns would be premature. We need to see these directives put into practice, evaluated rigorously, and, most crucially, supported by comprehensive data privacy legislation.
The executive order commendably focuses on fairness in both the criminal justice system and housing, but the approach seems to lack the bite of enforceable laws. In the criminal justice realm—spanning sentencing, parole, probation, and predictive policing—the order suggests developing "best practices." While this is a start, it's not enough, although it may be the most the Biden Administration can do without Congress. Many of these issues occur primarily at the municipal level, where federal "best practices" could easily be ignored. Moreover, the data driving these AI systems often embed historical biases against marginalized communities, from predictive policing to parole decisions. These guidelines risk becoming mere suggestions without federal laws to preempt harmful local practices.
Similarly, the order's provision to offer "clear guidance" to landlords on AI use in housing decisions falls short. The Fair Housing Act (FHA) already prohibits discriminatory practices, so why not explicitly extend these laws to cover AI-driven discrimination? Guidance alone, without the weight of enforceable laws, leaves room for landlords to hide behind algorithms to carry out discriminatory actions.
The executive order's section on supporting workers is encouraging, especially around collective bargaining and workplace equity. However, it remains to be seen if concrete actions to mitigate job displacement and ensure fair labor standards in an AI-augmented landscape will arrive in time. The order calls for the development of 'principles and best practices,' but such guidelines can be ignored in the absence of stringent regulations, enforcement mechanisms, or more impactful measures such as universal basic income.
Furthermore, the order's focus on producing a report on AI's labor-market impacts feels like a preliminary step, even a delay tactic, when what's needed is immediate action. If the administration is serious about protecting workers, a report won't suffice; we need initiatives that can be implemented now, especially around job displacement and the potential for workplace surveillance and bias.
While both the U.S. Executive Order and the EU's proposed AI Act aim to regulate AI responsibly, their diverging strategies reveal nuanced differences in priorities and execution. The EU adopts a narrower, risk-based approach, focusing on high-risk AI applications and a compliance-driven model that features mandatory assessments and approval processes. The U.S., in contrast, adopts a broader scope, emphasizing voluntary disclosures and the development of safety standards.
In terms of R&D, the U.S. takes an aggressive stance, actively promoting AI innovation across sectors like healthcare and climate change. The EU, however, leans more towards regulatory compliance, with less explicit emphasis on research and skills development. On the international front, the U.S. order explicitly calls for global engagement and the formation of international frameworks, aligning with its ambition to lead in AI. The EU, while open to international collaboration, is less explicit about its global engagement strategy.
When it comes to privacy, the EU builds on its GDPR framework, explicitly prohibiting intrusive and discriminatory AI practices. The U.S. approach, although acknowledging the need for privacy safeguards, stops short of setting forth specific prohibitions, instead encouraging Congress to pass data privacy legislation.
Notably, the U.S. order focuses on government use of AI, aiming to modernize federal infrastructure and enhance capabilities. The EU's stance on this remains less explicit.
This executive order serves as a foundational layer for AI regulation in the U.S., especially for those of us at the intersection of technology, ethics, privacy, and marketing. While it outlines broad directives, it also leaves significant gaps that federal agencies must now navigate. The burden falls on these agencies and on a divided Congress, which adds a layer of uncertainty to the prospects of comprehensive AI and privacy legislation.
In the marketing sector, the order raises important questions about data privacy. It's laudable that the order mentions privacy-preserving technologies, but without robust data protection laws that are tailored to the nuances of AI, ethical lapses and mistakes in data usage remain a concern.
From a privacy perspective, the order's focus on federal guidelines is a step in the right direction but not the endgame. Guidelines without enforcement are window dressing. We need robust, stable legislation that safeguards individual privacy and provides businesses with the legal certainty to build effective compliance programs, mitigating the risk of a shifting regulatory landscape or a patchwork of state-level privacy laws.
While public discourse and ethical standard advocacy will undoubtedly continue, this order marks merely the first mile in a marathon toward responsible AI. It's a framework that needs fleshing out, both by federal agencies tasked with its execution and hopefully by a Congress that can put aside its divisions to enact meaningful regulations.