

Alec Foster's Interview on The Original Source Podcast by Copyleaks

Alec Foster · 2024-05-21

Marketing Ethics, Marketing, AI Models

I was recently featured on The Original Source, the Copyleaks podcast, for an interview on AI ethics in business.

In this webinar, we discussed:

  • Developing an AI Pilot Program: Discover the key steps and considerations for business leaders to launch their AI initiatives successfully.
  • Choosing the Right AI Use Cases: Learn how to identify and prioritize your organization's most suitable AI applications.
  • AI Risk Evaluation and Governance: Explore frameworks and best practices for assessing and mitigating risks associated with generative AI.
  • Avoiding Copyright Infringement: Gain insights into strategies that ensure your AI-generated content respects intellectual property rights.
  • Generative AI in Art and Media: Understand the current state, opportunities, and challenges of AI's impact on creative industries.

Here is the abridged transcript:

Shouvik Paul: For those of you who listen to the Original Source podcast, you may know me. My name is Shouvik Paul. I run the Original Source podcast, which is a Copyleaks podcast. And today we have a very special live recording of this podcast. I'm joined by a wonderful guest, Alec Foster, who we'll be speaking to today. Welcome to the show, Alec.

Alec Foster: Thank you.

Shouvik Paul: Great to have you. Alec is an AI ethicist and marketer who specializes in applying ethical principles to marketing and innovation. With over ten years of experience working in places like Google, the White House - I want to hear more about that - and various other startups, Alec has a unique perspective on the industry and everything that's going on in AI. In his current role as the Responsible AI Lead at MMA Global, Alec helps brands develop AI-driven growth strategies that prioritize consumer trust while meeting business goals. I'm very excited to chat with you, Alec. I have a lot of questions for you.

Let's dive right in. Alec, we'll definitely get to the portion where you and I do a deeper dive on a lot of questions I have around AI, ethics and marketing. But before we get there, I thought we'd play a fun little game. I have a couple of headlines that have definitely caught my attention. I'll read some headlines and spend about 10-15 seconds getting some quick commentary on what you think about it. Sound good?

Alec Foster: Sounds good.

Shouvik Paul: Alright, let's go right into it. Headline number one: Meta's oversight board probes explicit AI-generated images posted on Instagram and Facebook. What do you think about that?

Alec Foster: I read about these cases. As many of us know, Meta doesn't necessarily have to listen to the decisions made by the oversight board. What we really need are laws that carry meaningful fines or criminal penalties. But it is telling: if they're struggling to protect public figures, I'm not optimistic about their ability to respond to, say, pornographic or explicit deepfakes of private individuals. We've been hearing a lot about that being an issue on school campuses, particularly high schools.

Shouvik Paul: Yeah, deepfakes are a real issue. It looks like more and more, Alec, we saw Snap also jump in and say, "Hey, we're going to start marking images that we know are AI-generated." It seems to be a very common trend these days, right?

Alec Foster: Yeah. Well, at least in that instance with Snap, the watermark is a very small emblem that takes up maybe 2% of the image. You can use AI to create these images and then another AI tool to remove the watermarks. Metadata is easy to strip as well, just by taking a screenshot. I'm not confident that we can solve technological problems like the rampant spread of AI imagery with technological solutions alone. We're seeing a lot of this spread very widely on Facebook, and people aren't recognizing that these things are AI. Some people, myself included, like to see these images, but it's going to get harder to distinguish them. I don't think watermarks alone will solve this issue.
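
To illustrate how fragile metadata-based provenance is, here is a minimal sketch using Pillow (the filenames are hypothetical): re-saving only the pixel data drops EXIF metadata entirely, much as taking a screenshot would.

    # Minimal sketch: image metadata is trivial to strip (assumes Pillow is installed).
    from PIL import Image

    img = Image.open("ai_generated.jpg")      # hypothetical input file
    print(dict(img.getexif()))                # whatever provenance tags exist

    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))        # copy the pixel data only
    clean.save("stripped.jpg")                # saved copy carries no EXIF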

Shouvik Paul: You're absolutely right. Another headline I want to run by you, sort of in line with this, is: Medium bans AI-generated content from its page partner programs. So it seems like whether it's images or written content, this is becoming more of a thing that brands are trying to stay ahead of.

Alec Foster: Yeah, that makes sense. There's no First Amendment right to distribution and amplification on private platforms, much to the disappointment of some people. It's also interesting to see that Medium is requiring labeling of AI-generated images. Even though that's not their bread and butter, I think they're just taking a broad stance against AI, at least in terms of their monetization platform. I think that makes sense.

Shouvik Paul: Absolutely. What do you think about this past week being hijacked by Taylor Swift? I have two daughters and I can tell you my Spotify playlist has been all about Taylor Swift. But I saw this really fun, cool headline: Drake uses AI Tupac and Snoop Dogg vocals on "Taylor Made" freestyle, referencing Taylor Swift's new album. I heard this thing. It sounded like Tupac was singing it. It was all AI-generated. The interesting thing was it was able to pull in the latest content information, which was Taylor Swift's released album, but pull it into the latest rap. What do you think about that? Is that a trend we're going to see more of?

Alec Foster: I personally don't think this is that big of a deal. We've already seen these tools being used by private individuals. It's only notable because it's an established artist - one whose own likeness has been used in AI tracks - now using these tools himself. As for how this one was made, I think you can tell when you listen to the song that it's Drake rapping, with the voices of Tupac and Snoop Dogg transposed onto his voice. You can somewhat tell, just from being familiar with his cadence, that it's Drake doing it. That's one way of doing it. There are also tools that let people generate songs from scratch, where there's no original person singing behind it - it's all AI. I've looked into these tools, and it's something I want to experiment with. I think it's very interesting from a creative standpoint, but I don't think this specific example is that big of a deal, personally.

Shouvik Paul: I agree. I feel like especially in the music industry, they're trying to figure out how to best use AI. Another headline we've been seeing a lot is both Amazon and Spotify promoting ways that you can use AI within their own apps to generate better playlists. You type in words like, "I'm in the kitchen cooking an Italian dinner, make me a playlist." That kind of natural language stuff seems to be a trend. Do you think this is a good direction they're going, or is it just the use of AI for the sake of saying we're doing something with AI?

Alec Foster: I think this is a good idea. As I'm in the US, I don't have access to Spotify's AI playlist generator. I think it's being beta tested in the UK and another country. Same with Amazon's, where you have to be in the beta program. And that program, at least on Google Play, is full. So I haven't had a chance to play with these. But in terms of responsible AI, they're just using their existing catalog. It's not like they're going to rapidly generate AI songs for you. I would be very interested in seeing that, but that would have a lot of brand risk and likely upset some of their label partners. So I don't think we're going to see that within the Spotify or Amazon Music apps. But yeah, this is an ethical use of AI. It'll expose people to more music and get people using their apps more.

Shouvik Paul: It's fun to say the least, right? And also, who uses Amazon Music?

Alec Foster: Hey, maybe this will get someone to use it!

Shouvik Paul: Yeah, more people might use it. Exactly. They're all trying to find that edge. Here's another headline that I've been doing a deeper dive on: Meta released Llama 3, with the initial results coming out the week before. It reminds me a lot of the early browser days, where there was Netscape and then Microsoft, and every week they were saying, "Ours downloads a little faster, pages load a little faster." I think we're going through something similar here. Meta, surprisingly - if you told me a few years ago that Meta/Facebook would be one of the leaders in the AI world, I would have definitely questioned that. What's your view on this? Are we heading in a direction that's really good for the general audience? What's your take?

Alec Foster: My thoughts on this are complex. I was thinking about this just yesterday and realizing how much of this technology I take for granted now. I remember, before ChatGPT, spending so much time on my phone just playing with DALL-E 2, with those very low-res 3x3 grids of low-quality images, but I was extremely impressed at the time. Now, one of the things I like most about the multimodal capabilities in Llama 3 is how it generates images in real time. If you go to Meta AI and type "imagine" or click the image generator, it will update the image with every single word you type. So I want to see a bear on Saturn drinking a Coca-Cola, and it keeps adding to the image with every word. I think that's remarkable - just how fast it is.

This will also expose many more people to AI in ways that haven't happened before. Billions of people use Meta's products every day, and this is a free version they all have access to. So I think it'll improve AI literacy, even if it's not the top-of-the-line model. It'll keep evolving.

Shouvik Paul: It's interesting. I've been seeing that pop up in my Instagram now, where I can search my Instagram feed using what they're calling AI, but it just seems like natural language. So we're going to start seeing more and more of this being integrated into daily tools that we use. Right now it's still, "Let's go to ChatGPT to do this thing, let's go to Perplexity or whatever to do these things." Now it's creeping into products like Google's, with Bard.

Alec Foster: Absolutely.

Shouvik Paul: And that gets mass adoption and exposure to these kinds of tools. Speaking of Perplexity, they came out as the new unicorn. The headline was: "Perplexity becomes an AI unicorn with new $63 million funding round." I've personally been using Perplexity. For those who don't know, whereas the ChatGPTs are competing with Bard, it seems to me that Perplexity is a more direct competitor to the Google search engine. What are your thoughts on this?

Alec Foster: I like Perplexity. You just have to know which types of queries it's not going to be good for. If you want the score for last night's basketball game, it'll probably pull in some scores from a basketball game, but maybe not the one you're referring to. I've used it a lot for researching potential speakers or recent headlines when I'm running working groups like this one. So yeah, Perplexity is great. I think it's good that there's someone pushing Google to be more innovative, especially as Google has stagnated in recent years. There are still unresolved issues around defunding the web - depriving publishers of ad revenue - which will matter if this really catches on. So it's something to be aware of. But yeah, I'm optimistic.

Shouvik Paul: Nice. I am too. I actually love Perplexity. I'm not surprised they're doing so well. Let's see where they go. The one I really dove into and played around with - I'm obsessed with this right now - is this headline: Microsoft releases research on an AI model, VASA-1, that can deepfake someone from a single photo. For those of you listening, if you haven't seen this, Microsoft has posted it. Imagine just taking a picture from your Facebook or LinkedIn profile, and it can turn that one singular picture into a video. Some of these videos - even the eye movement, I thought, was fascinating. Alec, I'm sure you've seen it. What do you think?

Alec Foster: Yeah, in fact, I'm a deepfake right now. You're not - this video isn't even real. No, I'm just playing. I'm actually a little disconcerted by this. The top-of-the-line deepfake technology about a year or two ago required 500 images or 10 seconds of video. Now you can just do the same with one single image. It's crazy how fast this is evolving. Something that I think we might get into later as well is how this technology will trickle down to the open source model developers. Right now, Microsoft isn't publicly releasing this. They're keeping it guarded. They know the implications. But think about what happened with ChatGPT - when these larger companies had their own LLMs but were keeping them private because they didn't trust the public to use them. But then when OpenAI released theirs, these companies felt a rush to commercialize their own product so they wouldn't be leaving money on the table. So in 6 to 12 months, when this technology makes its way into open source and anyone can deploy it, I imagine Microsoft might just say, "F- it, we're going to release it." I'm a little concerned about this, but I also think it's inevitable.

Shouvik Paul: It is inevitable. By the way, when I saw it, it took me back to a movie from the 90s called Babe, where the revolutionary technology at the time was that the pig's mouth was synchronized with human speech. People were like, "It took a whole Hollywood studio to put this thing together!" And now we're doing this stuff every day within seconds. With every advance we make, the computing power it takes goes down and what it allows you to do gets easier over time. We're seeing that happen here - going from a Hollywood studio moving a pig's lips to us taking a single image and getting it to do or say whatever we want.

Let's end here with one final question that I think is important to some of the folks listening in: Brands are adding AI restrictions to agency contracts. I thought that was a fascinating headline. It's definitely a shift in the market. Everyone's concerned about AI and they want to put some guardrails around it. What are you hearing and what do you think about this?

Alec Foster: This is something I've been talking about for months in my work with MMA. Brands and marketers need to add these generative AI clauses to their agency contracts, if only for the reason that works substantially created with AI can't be copyrighted. If you're creating something you plan to monetize - say, a design on a t-shirt - and it's generated with AI and you can't prove human authorship, someone can come along, steal that design, and start selling that t-shirt on Amazon. Or if you write an ebook for your brand, a competitor can come and take that content. And even when it's not a copyright issue, it just doesn't look good.

I saw AI-generated images being sold as prints in Walmart, but they didn't even take the time to fix the text. It just invites criticism that the brand should be hiring designers. We've seen this in the film industry too, especially with designs on billboards. I think it's still possible that individual contributors at agencies might not disclose when they're using AI. So it's important to have auditing mechanisms, as I'm sure you're very familiar with.

Shouvik Paul: That's absolutely right. Well listen, for the listeners on this podcast and webinar, I really hope you've enjoyed this quick-fire section. Now, Alec, let's do a deeper dive.

I think a good place to start, just for our listeners, is - I read that brief intro on you. Can you go a little deeper and tell us more about your background? We had everything from Google to the White House in there, and you're clearly in the heart of ethics right now. Ethics is a big keyword that we hear around AI these days, along with data privacy. So tell us a bit more about yourself before we get to the questions.

Alec Foster: Sure. I'm based in the Bay Area, born and raised, but I've lived around. I've worked in marketing for about ten years. I'm also a certified data privacy professional with the IAPP. When I first started seeking that out, it wasn't immediately clear how I would use that. But I feel like AI ethics really brings together a lot of my marketing and business background with my policy background.

I got my start in nonprofit advocacy before getting into marketing. Some of these organizations were focused on algorithmic accountability in policing, digital rights, or drug policy reform. I've been able to segue that advocacy experience very well into marketing. It's just different types of calls to action and moving people through funnels, but it's very similar.

But I still have that hunger for getting involved in policy. And honestly, as anyone that has tried to make a career pivot knows, it is hard. I've definitely applied for many more jobs in tech policy than I've gotten. So I'm very interested in AI as a creator and user, but I also have these concerns. My background in copyright and policy gives me some awareness of these things. But we're all basically starting on the ground floor when it comes to understanding AI.

I think it's never too late for someone to make a career change, and I'm very glad that I'm able to help people in business be more ethical with their AI. I don't just want to be a rubber stamp and say, "Alright, I looked at it after you decided what you wanted to do." Rather, I want to give people the tools and mechanisms to create ethical AI and insert ethics earlier on into the design process.

I'm also an artist. I use these AI tools and I love them. I'm still very much a fan. But I think having an ear to the policy world helps me understand the broader implications - ways AI could be used that I might not use it myself, but that could change the policy landscape or preempt creative expression in the future.

Shouvik Paul: For those who don't know, can you very quickly describe what MMA Global is and what's the responsibility there as an organization?

Alec Foster: MMA Global is a nonprofit trade association. It focuses on the future of marketing. It has about 800 member companies and 15 regional offices. There are other marketing trade associations, but MMA is the only one that brings together the full ecosystem of marketers, tech providers, and sellers. So a lot of that focus goes into providing CMOs with guidance on how to go about this AI transformation.

We also run the AI Leadership Coalition, which is the world's largest working coalition of marketers focusing on applying AI responsibly and effectively. There are a few different think tanks, and I run the Responsible AI working group within the ALC. A lot of the agenda that we're focused on right now is data quality, privacy and consent, what the future of work will look like, and creative efficacy.

Shouvik Paul: Let's dig a little deeper on that. You're the chair of this Responsible AI committee at MMA Global. What would you say are one or two of the top ethical concerns and considerations you're hearing about or dealing with among your members?

Alec Foster: I think there are three issues that I hear the most about. The first, which is the top concern - especially for enterprise companies, which are much more risk-averse - is the unresolved copyright issues. There are a few different sides to this. There's the fact that AI-generated works are not copyrightable, just like we discussed with agencies. It's important to know whether the things you're producing and putting out there are generated with AI, so that you know whether they're protected for the extremely long copyright term we have here in the US. That can make a big difference.

There's also the potential infringement in the AI training data - whether the data included copyrighted works that could be replicated in outputs. I think that part is actually one of the biggest concerns. For the language models and text-to-image generators, it's that first issue - whether these works are copyrightable - that brands are most concerned with in terms of their liability. But even if the LLM providers extend their liability protection and say they'll cover you if you're sued, none of these enterprise brands wants to be in a lawsuit.

Shouvik Paul: So we're seeing a trend where a lot of them - these LLM providers, and even companies like Adobe - are essentially saying, "Listen, if you get sued, don't worry, we've got you covered." That seems to be, at least initially, the way of handling copyright issues.

Alec Foster: Yeah, we're seeing some of that. I haven't seen many lawsuits targeting the specific companies using these tools, but that's bound to come. And as Copyleaks has pointed out, LLMs plagiarize, which I didn't really realize until I looked into your research. So that popular belief that everything created with AI is entirely original doesn't hold, and I think it's important for brands to be aware of that.

There are a couple of other issues I want to run through real quick that have come up in the working group. One is bias within AI - how these tools will not just absorb societal biases but amplify them, which can leave your customers feeling alienated. There are a few mechanisms I'm excited about: better-quality synthetic data that can fill in some of these gaps, along with audits, human oversight, and maybe better methods of fine-tuning your models so that you don't need as much data.

Because right now there's this huge race just to suck up everything on the internet - transcribing every single YouTube video, every image, wherever you get it, regardless of the licensing. If you can have the same quality models with less data, you can remove potentially toxic data that lacks diversity.

Shouvik Paul: Let me ask you a question. This is something we get asked by our customers. We have a lot of large enterprise customers at Copyleaks. They'll ask us basic questions like, "Hey, we're a media company," or "I'm a CPG and I am releasing a white paper that's going to be available online somewhere. Is it fair to assume or say that once it's out there publicly, not behind a paywall but publicly available, that it's fair game for any LLM at that point to take that data and train on it?"

Alec Foster: Whether it's "fair game" depends on how you define that. Some of these companies say they follow a standard where you can add an entry to your robots.txt file if you don't want your site scraped. But I'm sure there are plenty of datasets that don't follow that standard. There's no reasonable expectation that it will be honored.
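
For context, that opt-out is just a plain-text file at the site root. A typical robots.txt blocking the known AI training crawlers looks like the sketch below (GPTBot, CCBot, and Google-Extended are real crawler user agents; honoring them is, as Alec notes, entirely voluntary):

    # https://example.com/robots.txt
    # Opt out of AI training crawlers; compliance is voluntary.
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /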

I would say the argument from these companies is that it's all fair use. They say it's just like a person reading the news or watching a video - I can do that and somewhat regurgitate half-remembered facts. But the scale at which a computer can memorize this information is of a different sort. It's a lot easier for that to happen than with an individual.

I would love to be an IP attorney. If I could go back ten years and study something different, I definitely would have more job security in that field. But I'm not sure how this will shake out.

Shouvik Paul: It's also important to remember that generative AI as we know it really exploded on the scene just a little over a year ago. This is all relatively new. Everybody in every area is trying to figure it out as it progresses, and it's progressing really fast, faster than anything we've seen before.

So let's go back to what you were saying earlier. You're an artist. As an artist incorporating generative AI into your creative process, what are your thoughts on AI's implications for artistic expression, ownership, the role of the human creator? We were talking about Drake from the audience's perspective, but also from the creator's perspective. What do you think about this?

Alec Foster: I think what you're saying about how AI can be a tool to augment human creativity is a responsible use. Whereas if you over-rely on AI, if you use it for both the ideation and the execution, it just leads to generic, soulless art. Because AI cannot come up with completely original ideas, at least in its current form. It can blend ideas and create new expressions of the same idea, but it cannot invent entirely new concepts from scratch.

So I am concerned about more generic, soulless art. These Kindle store books that are clearly written by AI, or AI-written content marketing - I'm in academia, I do a lot of reading, and I can notice the tells. When I'm reading someone's blog and I see the same word patterns, like "in today's rapidly evolving technological landscape." I also read that "delve" is the word most associated with AI-generated text. So even if I'm writing something completely on my own, I'm not going to use the word "delve," because I don't want people to mistake it for AI. Most people aren't putting what they read into AI detectors, but I pride myself on being able to write entirely on my own when I choose to. AI cannot replicate my writing style.

But it's been a useful tool for me in the ideation phase. Say I have an idea and I want to quickly get a demo of what that would look like. If I have a song idea and I want to quickly get some lyrics, it can be useful to use these tools as long as you're transparent about it and not replacing humans.

We've seen instances - somewhat related to what the writers' strike was about, but in other countries, and probably here too on a smaller scale outside the regulated film industry - of people using AI to create scripts or other content. You can imagine it in marketing as well: I use AI to create the first draft of something, and then rather than hiring a content writer on Upwork to create the whole piece, I use them as an editor and pay them less than I would if they were doing the whole process.

So I think there are some considerations we should go into this with - giving credit and considering compensation when using these AI outputs. Considering the training data, maybe these platforms should be compensating people whose data they use to train the models. Thinking about IP - these models shouldn't be generating copyrighted images, yet they still are. It's just something to be aware of. Conducting audits and ensuring AI enhances artistry and collaboration rather than replacing it.

Shouvik Paul: It's interesting, coming back to marketing departments. We obviously deal with a lot of them, especially larger institutions. I think the stage they're in right now is very much just understanding how prevalent or not the usage of AI is in their organization. That's step one, just even understanding it.

We're getting those kinds of questions, like "I don't know, we have policies in place." For example, a lot of companies will have policies that say things like "You can use this for idea generation. You can use it to potentially do some research. But you can't ever write an article using it. You can't publish a white paper using it. We definitely don't want it on our blogs, because the Googles of the world are now penalizing you for using AI-generated content from an SEO perspective. So we just need to know, are people following those rules?"

From your perspective and what you're hearing, what's a good first step there? Let's assume a company's not doing anything right now regarding AI policy. What should they be looking out for? What are you hearing other companies are putting in place?

Alec Foster: The first step is AI training. I recently read some stats from PwC: about 80% of marketers are currently using or plan to use AI in the near future, but just under 15% of marketers have received AI training to date.

So start with training your staff on AI fundamentals, as well as the ethical considerations, any applicable regulations. This is very relevant if you work in financial or healthcare industries - how you can use customer data for AI is different from how most other companies can use it. Consider these job-specific applications.

Right now at MMA, I'm leading an upskilling project where we're starting with some of the fundamentals. We're starting with ChatGPT and asking people to think of a work-related use case that they can build a chatbot around, using those specialized conversations where you give instructions that persist throughout the conversation. We have them pair up so that if they get stuck, they can talk to their peer on the team about it. Start with a small use case and then use that to build a broader business case for wider adoption after measuring the productivity impacts.
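
As a concrete illustration of those persistent instructions, here is a minimal sketch using the OpenAI Python SDK (the model name and the instructions are hypothetical examples, not MMA's actual curriculum): the system message is resent with every request, so it shapes the entire conversation.

    # Minimal sketch: a system prompt that persists across a conversation.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    messages = [{
        "role": "system",
        "content": "You are a campaign brief assistant for our marketing team. "
                   "Always ask for the target audience before drafting copy.",
    }]

    def chat(user_text: str) -> str:
        messages.append({"role": "user", "content": user_text})
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})  # keep history
        return answer

    print(chat("Draft a subject line for our spring product launch email."))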

Beyond that, I would formally formulate a generative AI policy - just like we talked about with how agencies can use AI, but also covering how employees can use it, whether for internal or external purposes. Which models are okay? Which ones do we have enterprise agreements with, so that you're not leaking your company's data when you say, "Hey, summarize this financial report on our Q3 revenue"? It's important to look at that.

In addition to the AI policy, there's education. At a few large companies, very senior leaders have created a wonderful report using ChatGPT, not realizing that it's not magic - you have to feed it the information - and in doing so they uploaded proprietary Q1 numbers or whatever else. To me, we're in that education phase. People don't realize that the stuff AI spits out can hallucinate, can plagiarize, can do all these things. People also don't realize that uploading proprietary information is like walking out of your office with a briefcase full of company papers. You've just violated your NDA. You have to be very careful around this stuff.

A couple of other things I would recommend to enterprises looking to get into AI: have a senior leader oversee AI adoption, ethics, and adherence to standards, and form a cross-functional AI steering committee that could include leaders from marketing, product, trust and safety, privacy, and legal. In addition to that generative AI policy, have a list of AI principles - it might be an internal document, but you can use it when formulating your programs.

Shouvik Paul: I've seen more and more of these Chief AI Officer roles pop up in the last couple of months. I've spoken to a few at some large CPG companies. It's so interesting to hear what their actual role is. It seems like nobody really knows. They're like, "My role is to promote AI. My company wants to do more with AI. We want to tell the market we're doing more with AI, whether it's improving customer experience or internal workflows, and I'm here to facilitate that." I'm like, "Great, that's a super interesting position. What are you doing next?" Everyone seems to be figuring it out. But what you're saying is it's important to start thinking about those types of roles.

Alec Foster: Yeah, I don't think we'll have Chief AI Officers for long. I mean, if anyone's hiring one, hit me up! But I imagine that these types of positions will be more integrated into every team going forward, or these will be skills that most workers will be expected to have at higher levels.

Shouvik Paul: Speaking specifically about marketing - what are you hearing? There are a lot of tools out there. I get hit up ten times a day with some tool or another, everyone saying it's a brand new AI tool that either helps with your end-user experience, like "We'll create chatbots for you, we'll do this, we'll do that. This is how it's going to help your customer." Or "Here's how it's going to improve your existing marketing department workflow or data flow, data analysis."

What are some things, from your perspective, if you were to recommend a few tools that you've either played with or you're hearing a lot about, what are the top ones for marketers that people should check out? And by the way, we're not sponsored by anybody here, so these are just purely our views.

Alec Foster: I'm going to speak more towards the underlying use cases, and people can find tools that fit their needs. I actually just came from a marketing AI conference that MMA hosted last week called POSSIBLE. I saw some very interesting data and presentations. Some of this was mind-blowing, and what surprised me was that some of the coolest stuff was not based around generative AI.

One particular use case that has some generative AI elements is ad creative personalization - leveraging AI to adjust ad creative based on contextual signals. We've seen this also in public health campaigns, encouraging people to get vaccinated but using imagery and creatives that appeal to them. Not just identifying one cohort and sending them the same ad, but testing at different times of the day. These AI advertising tools can learn a lot quicker than individuals can.

Some of the numbers we saw out of this were pretty mind-blowing. It boosted ad performance across several different companies and industries by 149%, so over a 2x increase. In one brand study, it was a boost of 260%. These are the kinds of things that can actually improve a company's valuation. This can boost your stock price. So that was one of the huge things that came out of some MMA research.

Another one I saw was from Hershey's. They're using an AI company called Chalice to optimize their media spend during the Halloween season - and when you're selling chocolate, Halloween is the biggest holiday, even bigger than Valentine's Day. What they were doing was concentrating their ad spend around stores where they needed to sell more inventory, because after October 31, everything gets heavily marked down and they're basically taking a loss on that candy. They'd also get smaller orders the next year.

So they were adjusting their budgets dynamically based on weekly store performance at a micro level. I thought that was really interesting because you're spending money more effectively, rather than blanketing the whole US. Otherwise, what's more likely to happen is you'd be spending in highly concentrated, expensive markets like New York and LA. So you're able to reach people more effectively, and there's very little risk to that from an ethical standpoint.

I love to see that because it delivers value. In the Hershey's case, it grew their Halloween candy sales by 7.6% compared to the prior year. The technology is a lot cheaper than using more traditional methods of targeting. They said that they reduced their data fees by 80%, so more of that money was able to go towards their media spend. That was surprising to me, and I hadn't heard about that until just last week. So that's very new.

One more thing that I saw that's exciting is using AI personas for market research. This is a little tricky because you're not going to be testing an endless number of personas. But if you have a few ideal customer profiles, you can have these conversations with AI-generated avatars and ask them more about their life and where they spend their time. It's not totally accurate, but it gives people more opportunities to engage and try to personalize their messaging and outreach around potential customers. There's a lot going on.
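
To make the persona idea concrete, here is a hedged sketch of simulated customer interviews against a few ideal customer profiles (the personas, question, and model name are all hypothetical; as Alec notes below, this supplements rather than replaces real interviews):

    # Minimal sketch: interviewing AI personas built from ideal customer profiles.
    # Assumes the OpenAI Python SDK; the personas and question are made up.
    from openai import OpenAI

    client = OpenAI()

    personas = {
        "budget parent": "You are a 38-year-old parent of two shopping on a "
                         "tight budget. Answer interview questions in character.",
        "impulse snacker": "You are a 24-year-old commuter who grabs candy at "
                           "checkout on impulse. Answer in character.",
    }

    question = "Where do you usually discover new snack brands?"

    for name, profile in personas.items():
        reply = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "system", "content": profile},
                      {"role": "user", "content": question}],
        )
        print(f"--- {name} ---\n{reply.choices[0].message.content}\n")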

Shouvik Paul: AI focus groups, essentially.

Alec Foster: Exactly, like customer interviews. But they can still sometimes be a little bit unhinged. In one demo, after being asked, "Why do you prefer M&M's to Hershey's or Reese's?", it said something quite funny at the end. I'm not going to repeat it, but it involved trying to make a joke, and these language models aren't very good at humor at the moment. So don't count on it to be your sole source. You should still be talking to individuals.

That's one thing I really want to reinforce - everything I just described still allows for human touch points. If you're in B2B sales, you should still be getting on the phone, having these conversations, going to conferences, and meeting people face to face, rather than automating those human touch points away.

I get a lot of ads on Instagram and Facebook for these AI appointment bookers: if you have a list of leads, it'll call them for you, but it's just an AI assistant talking to them. That's not something I want to see more of. It is an unfortunate reality that labor displacement is happening. There are some benefits to using AI in the support process - maybe you get more standardization or more uptime and can get people their answers faster.

But I have parents who are getting up there in age, and they don't want to spend time talking to a phone tree. You might have slightly smarter phone trees, but having to listen through the menu and press those numbers - a lot of people just want to speak to a human. I worry that we're moving away from that. I hope brands will use AI to create more opportunities for human-to-human interaction rather than replacing every last vestige of connection.

Shouvik Paul: That's a really good analogy. Every time I call my bank, I'm always like, "Human operator immediately," because I don't want to go through this whole chain of speaking to a robot.

It's interesting. One of the big questions I always get, and that we hear a lot about, is how AI is displacing jobs in certain industries. Marketing is always in the top category. There are some obvious roles people mention, like copywriters. What are your thoughts on this? Should people in marketing be worried? Are there specific types of roles that will be affected more than others? And if you're one of our listeners in such a role, what do you do next?

Alec Foster: Yeah, this is something I've thought a lot about. My actions speak louder than words - I was a marketer for ten years, now I'm getting into AI ethics. I don't think AI will replace an ethicist job anytime soon.

However, I think there are parts of marketing where AI will enhance and assist rather than replace. The use cases I spoke about earlier fall into the growth and demand gen categories - ad personalization, customer profiling, targeting. There's an opportunity for those jobs to become more effective. A lot of marketing is ineffective, and marketers don't always know why they do what they do. So I think AI will let marketing leave a better impact, so you're not just scrolling past all the ads because none of them are relevant to you. Ideally, marketing becomes more effective with AI rather than being replaced entirely. We still need humans to make creative choices and provide that level of touch and input.

But when it comes to the types of jobs that will be replaced - and it is already happening - I would say the biggest impact so far is in support roles and customer success, which is tangential to marketing but part of many marketing organizations. I think content will also be impacted, though there's still a market for good content. That said, you shouldn't ignore AI tools completely. I use Grammarly a lot for my editing, and whether you're rephrasing or using Perplexity to find sources, there's a place where these tools can enhance content creation.

But I've worked with many support teams, and these people are incredible at their jobs. I do think that broadening their skill set during this time - learning to use these tools in ways that make them more adaptable - is important. It's unfortunate. I want everyone I know to be able to keep their jobs. But especially in larger enterprises, where you might have teams of hundreds of support staff, they might not need as many people. So that will be impacted first and foremost.

Shouvik Paul: It's a good reminder to look back in recent history where we've heard that certain pieces of technology are going to completely eliminate all these jobs, and it's turned out to not completely be true. The internet was supposed to disrupt all these jobs. What we saw was people use the internet and build on their job skills on top of that to enhance what they were doing. A lot of the mundane tasks they no longer had to do and they could specialize in certain things.

We saw it with the emergence of Excel back in the day. "Oh, the accountants and finance departments are going to get disrupted." No, they just didn't have to do basic math anymore. You can run pivot tables much more easily. So it allows you to go build on top of that. And hopefully we can figure out a similar sort of thing here with AI over time. That's what you're also saying - retrain yourself on how to use these tools to enhance what you were doing before.

Alec Foster: One more thing - I'm so sick of that quote that we've probably all heard: "AI is not going to take your job. Someone else who knows how to use AI is going to take your job." I hate that.

But there are also plenty of people who think AI will be more impactful than the internet was. Even if it's on the same level of impact, that will still be hugely disruptive. I think we need to think about this from a societal standpoint, not just solving technology problems with technological solutions. We can't just teach coal miners to code and that's going to be the end of this.

So I think we should think about this from a policy standpoint as well. Businesses have an obligation to their shareholders to deliver as much value as possible; it can't just be up to them to make the kinds of changes society needs. It's to be determined what the future will look like. I think this is going to be on par with the internet in terms of impact. But these aren't just tools - you may be able to delegate whole parts of a process rather than having a person in the middle. So who knows what things will look like.

But what we're talking about with those video deepfakes is how quickly this technology is evolving. It's staggering. We might want these companies to slow down, but that's not going to happen.

Shouvik Paul: That's a really good point. Well, that wraps up the main section and interview.

Alec, while we have you here - looks like we got some really good questions coming in from our audience. Let's answer some of these questions and move on to the Q&A section.

Here's one: What are some repercussions of accidental plagiarism from AI and what can happen to my business?

Alec Foster: I think right now it's more of a mark of embarrassment. I don't think these things will stay in the news that long when it does happen. But if it's something very high-level, that could reflect very poorly, or you could get sued. But I imagine for smaller companies, people are going to go after the bigger targets. Apple gets sued dozens of times every day by patent trolls that just want a cut of their trillion-dollar valuation. So I think if you run a small company, it's not going to be that big of an issue, but it could still be embarrassing.

I would be concerned with employee morale if you're just constantly using AI. I'm not a lawyer, so I can't give people legal advice on what could happen here. But my hunch is that it's more of an embarrassment. It might annoy your competitors, might annoy your staff, but it's something that probably could be resolved with a letter rather than immediately incurring fines. But maybe that is something we should be considering here in the US. We're starting to see bills being introduced around the training data. So there could be more on that front over the coming years.

Shouvik Paul: It's interesting. You referenced the study that Copyleaks released a month or two ago, where Copyleaks essentially had AI write a couple thousand papers from scratch on a variety of subjects, from law to business to history, and then ran plagiarism detection on those papers.

What the study showed was that for certain types of topics, like the sciences, there was a very high level of identical plagiarism. I'm not talking about paraphrasing, which tends to be a thing as well - they'll take it and essentially reword it. But we're talking about in certain specialized sciences, like chemistry or physics-related topics, it was 80% word-for-word plagiarism.

I think that type of plagiarism can get a little dangerous in terms of actual copyright infringement. Imagine if it took from a journal or something else - you could potentially get sued.

Also, at Copyleaks we now work with a lot of CISOs and CIOs who are encouraging the use of AI like Copilot to write code because it shortens their sprint cycles. But this stuff draws on public repositories like GitHub and Stack Overflow - it's just taking the code and remixing it. It's not actually writing code; it's mixing it like a DJ.

So what we're doing is helping them audit that code to understand how much of it is AI-written. In a lot of cases they already know parts of it are. Then we start pointing out to them, "Hey, this part is AI-written, and here are the licensing or usage rights for it." I feel like we're going to see more and more people get concerned about this. But potentially also a rise of code trolls - relying on AI to pick their stuff up and put it into something, and then you're getting sued.

Alec Foster: Yeah, and maybe people are using AI to generate the code in the first place that they're then suing other people over. But yeah, that's a great point. The case around Java in Android, that Google versus Oracle case - look how much money was spent by both sides on that. So yeah, coding and academia, that's a whole other beast that takes plagiarism far more seriously than just writing a blog post. So I was just speaking to that. But our copyright system with code is really something else. It's interesting to hear about the origins of that.

Shouvik Paul: Exactly. Here's another question: What steps need to be taken to minimize or eliminate racial bias in AI tools used by, for example, law enforcement? We've been hearing things about facial recognition. I think that comes down to privacy. In a place like China, I was in Beijing and I know they're capturing my face everywhere, including subway stations. Is that where we're headed here? I think people are concerned about that. What are you hearing about this stuff?

Alec Foster: That's a great question. It definitely reminds me of my drug policy reform days. The truth is that you can't eliminate racial biases in AI tools when they're built on societal data that has biases baked into it. So it's about understanding that there is going to be bias, and having audit mechanisms and a human in the loop. Especially with decisions that might impact someone's freedom, I would always make sure there's a human in the loop when there are those types of consequences. I've made sure to do that in prior roles.

But it's also about understanding the limitations of these tools. Sometimes we should question whether they should be used at all - especially in policing by law enforcement; maybe that is a use case we want to regulate. As much as these tools might be useful - and I can put myself in the position of a victim who would want them used - when they can amplify societal biases, I think that's unjust.

It also allows a level of abstraction where I'm not the one making the decision. It's the CompStat system that's telling me to go to this corner and frisk these men that just happen to be of this race or ethnicity. It provides that level of abstraction and objectivity that really isn't there once you understand that these are based on historical data that is often biased.

Shouvik Paul: People forget this stuff is not magic. It's like a child learning from a parent, and it's going to absorb anything the parent teaches it, including biases of any kind, over time.

Alec Foster: That's why I'm hopeful about synthetic data and better methods of training these models. That will be a way to minimize it. But just having those human measures in the loop and recognizing that you can't eliminate it completely.

Shouvik Paul: To me, synthetic data is sort of like sending your child to school. You may have some real biased parents at home, but when they go to school, they learn a different view or fill the gaps. Maybe the Earth isn't flat, I don't know. They can make their own minds up, decide, and get better trained or have multiple viewpoints, in other words.

One of the questions I like is: What do you think will be the outcome of the New York Times suit against OpenAI on copyright infringement?

Alec Foster: Real quick, I think it will end in a payment. I think most of these lawsuits will end with the plaintiffs being compensated. But it is possible they could disrupt the whole business model, so that these companies can't use data they don't have a license for.

It's interesting because the companies make the fair use argument, but they also don't want to reveal what they're training their models on. They used to be more upfront about that, and I don't think they believe their own fair use argument as much as they claim to in court. So this could go either way. Anyone who says they know how this is going to go doesn't know what they're talking about.

But my guess, my 51% hunch, is that it'll end in licensing payments rather than these models being shut down for good.

Shouvik Paul: That's what these model companies have also been hinting at - "Listen, we'll pay to train on your data. Let's just come to an amicable understanding here."

I did speak to some very senior executives at some really big media companies recently, and their take was kind of interesting. They were like, "We don't care about the money." What they've realized is that their brand relies on their content, and that content is being repurposed and used somewhere else. They faced this when the internet became a thing: they started syndicating their content everywhere and then realized, "No, no, we've got to bring people back to our brand, put up a paywall." They learned that lesson the hard way. So they're being a lot more cautious about others using their content, because it takes away from visitors going directly to the brand.

So it's to be determined. I don't think anyone knows what's going to happen, but it's definitely something to keep a real close eye on because whatever the decision is there, it's going to truly have an impact and shape the future of these training models going forward.

Alright, so that concludes this session. Alec, thank you so much for joining us today. Again, for folks who are on this call, please take advantage of this giveaway - like us on LinkedIn and follow some posts on Instagram.

Alec, thank you so much for your time. If people want to follow you, where can they follow you?

Alec Foster: I'm pretty active on LinkedIn - my full name, Alec Foster. My website is also www.ethical.marketing. So if people want to read some of my blog posts - I also post them on my LinkedIn newsletter. Check me out on there. Thanks everyone for joining, it's been a pleasure.

Shouvik Paul: Thanks again, Alec, great chatting with you. Hope to have you back in the future.

Alec Foster: Likewise. Thank you. Bye.


Alec Foster

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 License.
