How to Safely Use AI for Marketing in 2024


Artificial intelligence is one of the hottest topics in marketing right now. And for good reason. The possibilities for this relatively new technology seem almost limitless. In particular, generative AI – which creates new content – has made massive strides in the last 18 months.

Of course, with every new tool comes concern about the impact on consumers and the public. For marketers, AI compliance concerns may put a bit of a damper on the wild speculation about AI’s capabilities.

In this post, we’ll look at the AI-specific regulatory frameworks that already exist, along with those currently in development. We’ll give you the information you need to take advantage of AI tools in your marketing work without running into regulatory AI compliance issues.

What is AI compliance?

AI compliance means taking steps to ensure your organization’s use of artificial intelligence is acceptable and appropriate. That means following all relevant laws, regulations, guidelines, and best practices.

Some of the most common factors to consider in AI compliance are:

  • data collection,
  • disclosure,
  • discrimination, and
  • privacy.

As you’ll see later in this post, AI regulatory compliance is not yet fully defined. But laws and regulations are coming soon from governments at all levels. In the meantime, all organizations that use AI and machine learning models must ensure they do so in compliance with existing consumer protection and privacy laws.

Common AI compliance risks in marketing

More than half of marketers are already using AI for at least some content creation. AI is also popular for creating personalized customer experiences and analyzing data.

Chart: Current implementation of generative AI use cases in marketing, according to CMOs and executives worldwide, October 2023

Source: eMarketer

Whether you’re creating your own internal AI tools or relying on AI systems created by others, this new technology can create additional risk for your digital marketing team. Here are some key issues to be aware of.

Biased language and content

AI is only as good as the information it’s trained with. And the unfortunate truth is that we live in a world that is biased in many ways. When AI is left unchecked, it can reinforce those stereotypes. For instance, it might use inappropriate or outdated terminology.

When AI incorporates image recognition, things can get even trickier. Facial recognition has consistently been shown to be biased against people of color and women. In one of the most egregious examples, Facebook’s AI identified a video of Black men as containing content “about primates.”

Are you using AI for content creation or image classification? Then it’s critical to put checks in place to correct such mistakes before your content goes live.

Biased ad delivery

Facebook has been called out for biased ad delivery since way back in 2016. Back then, the issue was that advertisers could choose to exclude certain ethnic groups from seeing ads for things like jobs and housing.

More recently, Facebook’s AI-based algorithms have been shown to present bias in the way they distribute job and housing ads. They reinforce gender bias in job roles. And they follow racial stereotypes about income and class in housing ads. This biased distribution is completely steered by AI algorithms, not the advertisers themselves.

The EU's new AI Act aims to address this sort of algorithmic discrimination. It specifically prohibits social scoring based on social behavior or personal traits in ways that are detrimental to groups of people, and it calls out targeted job ads as a potential problem area.

Misinformation and “hallucinations”

Two of the best-known recent examples of social media misinformation created by AI both involve Taylor Swift. First came the deepfake nude images of the singer that ricocheted around the world on X before the platform removed them. Another round of deepfakes targeting Swift claimed to show her endorsing former U.S. President Donald Trump.

AI has also been known to “hallucinate.” That is, it can make up information out of thin air. This has happened in a slew of legal cases involving fake citations of nonexistent case law.

Chart: Legal hallucination rates across three popular LLMs

Source: Stanford University Human-Centered Artificial Intelligence

Marketers are unlikely to create deepfakes themselves. Or to intentionally distribute misinformation. But it’s important to double-check any “facts” that AI tools provide for your content. And it’s critical to confirm the veracity of photos, videos, or any other content before resharing.

Data security

Digital marketers have always been responsible for protecting customer data. But the stakes have grown much higher with the introduction of AI. AI tools make it easy for scammers to use any customer data point as the basis of a fraudulent identity.

This means it’s more important than ever to think about how much data you really need from customers. Even more so for leads. When you do collect and store data, make sure it’s managed through a secure and compliant CRM tool.

Tip. Connect your social channels with your CRM through integrations like those available in the Hootsuite App Directory to reduce the number of data storage locations.

Impacts on existing compliance requirements

It’s important to understand how AI impacts compliance with existing regulations, especially for organizations working in regulated industries.

For example, AI and HIPAA compliance programs both require significant safeguards of patient privacy. But AI tools designed for content creation may not understand the limits of what can be shared on social channels. This is another case where human supervision can help with risk management and mitigation.

AI and GDPR (General Data Protection Regulation) compliance also overlap on the privacy front. That said, brands operating in Europe will need to focus their attention on the new EU AI Act. This is the most important document to understand the full picture for AI and compliance with European Union regulations.

How to use AI tools and stay compliant

AI use for social marketing doesn’t have to be scary. In fact, AI makes the lives of social marketers easier in many ways. Here are some key ways to use AI without running afoul of compliance requirements.

Implement company guidelines for AI use

You already know your company needs social media policies and guidelines to protect your brand on social platforms. You also now require AI and compliance guidelines to govern the use of AI within your organization.

Some points to consider include:

  • Tasks. In which parts of their job can employees use AI?
  • Tools. Which AI tools has your organization approved for internal and external use?
  • Procedures. What is the process for human review of content created using AI tools? Has a compliance officer been appointed for final approvals?
  • Proprietary information. How much company information are employees allowed to share with AI tools in the form of prompts?
  • Disclosure. How will you disclose to customers that they are interacting with an AI assistant? Or that they are engaging with content created by AI? EU regulations already require that end users know when they are interacting with AI (including chatbots). Similar regulations are coming in the United States.
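Some teams find it useful to encode guidelines like these in a machine-checkable form, so that a script or workflow tool can flag out-of-policy AI use automatically. Here's a minimal sketch of that idea in Python; the tool names, task names, and function are purely illustrative, not part of any real product or standard:

```python
# Hypothetical AI-use policy encoded as simple allowlists.
# The tool and task names below are made-up examples; substitute
# your organization's own approved tools and permitted tasks.
APPROVED_TOOLS = {"OwlyWriter AI", "ChatGPT"}
ALLOWED_TASKS = {"brainstorming", "outlining", "first draft", "caption ideas"}


def is_permitted(tool: str, task: str) -> bool:
    """Check a proposed AI use against the company guidelines.

    Both the tool and the task must be on the approved lists.
    """
    return tool in APPROVED_TOOLS and task in ALLOWED_TASKS
```

A check like this obviously can't replace written guidelines or human judgment; it just makes the approved-tools and approved-tasks lists enforceable in one place instead of scattered across documents.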

Limit the scope of the tasks you assign to AI

When thinking about which tasks to assign to AI, organizations must consider the risk-to-reward ratio. For example, AI carries the lowest risk when it’s used for tasks that contribute to the early stages of content creation – brainstorming, outlining, and rough first drafts – rather than taking over content creation entirely, since a human still shapes and verifies everything before it goes live.

Review all new AI tools before adding them to your tech stack

As more AI tools become available, it will be tempting to incorporate them all into your business. But it’s important for marketers to pause and evaluate before jumping on board. This is one case where it may not be an advantage to be an early adopter.

As a user of AI tools, you rely on the developers to ensure AI compliance. If you’re at all uncertain about a developer’s policies, data storage, or privacy controls, wait. You may wish to get your legal team to review a tool’s terms of use before you start feeding it any of your data – or your customers’ data.

Add AI compliance software to your approvals workflow

Setting up any kind of formal approvals workflow is a first step towards ensuring compliance for content created with (or without) the help of AI. Adding in AI compliance automation software is an even more valuable approach.

For example, Proofpoint uses AI to learn about your business and reduce the number of social media posts that require supervision and approval from high-level reviewers. Proofpoint automatically scans your social content before or after posting (or both) to check for compliance with your social media policies.

You can connect Proofpoint to your Hootsuite account to reduce the amount of work for high-level social media stakeholders while supporting AI compliance.
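To make the idea of a pre-posting compliance scan concrete, here is a minimal sketch of what such a check might look like. This is not Proofpoint's actual logic or API – the banned phrases, the disclosure hashtag, and both functions are assumptions made up for illustration:

```python
# Hypothetical pre-posting compliance scan, loosely modeled on the kind of
# automated check described above. All rules here are illustrative examples.
BANNED_TERMS = {"guaranteed results", "risk-free", "cure"}
AI_DISCLOSURE_TAG = "#AIGenerated"  # example disclosure convention


def check_post(text: str, ai_generated: bool) -> list[str]:
    """Return a list of compliance issues; an empty list means the post can ship."""
    issues = []
    lowered = text.lower()
    for term in BANNED_TERMS:
        if term in lowered:
            issues.append(f"banned phrase: '{term}'")
    if ai_generated and AI_DISCLOSURE_TAG not in text:
        issues.append(f"missing AI disclosure tag {AI_DISCLOSURE_TAG}")
    return issues


def needs_human_review(text: str, ai_generated: bool) -> bool:
    """Escalate to a human reviewer whenever any issue is found."""
    return bool(check_post(text, ai_generated))
```

In a real workflow, posts flagged by a scan like this would be routed to a reviewer's queue rather than published, while clean posts flow straight through – which is how automated checks reduce the workload on high-level approvers.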

The future of AI compliance

AI regulatory compliance will soon be much more formalized than it is today. Clear laws are coming. And it’s important to be prepared for what’s ahead.

The EU, the U.S. federal government, and several individual U.S. states have either already passed laws governing AI compliance or are in the process of doing so. The European Commission has already created an AI Office to oversee the development of AI governance and compliance.

And the United States has created the framework for an AI Bill of Rights.

Image: The AI Bill of Rights framework, outlining principles including safe and effective systems, algorithmic discrimination protections, and data privacy

Source: Whitehouse.gov

While there are still a lot of moving parts, a couple of common points have emerged that will impact AI compliance in digital marketing.

First, new international guiding principles for developing advanced AI systems, as well as guidance in development from the U.S. Department of Commerce, will clarify disclosure requirements when AI is used. This includes watermarking to identify AI-generated content.

Second, regulations may impact how algorithms and AI can be used to target social ads. Tennessee and Texas have both passed legislation that allows consumers to opt out of targeted advertising.

While the specifics are still murky, it’s very clear that governments worldwide are moving to regulate AI and how its use impacts their citizens. Brands and other organizations that focus on safety, privacy, and fairness in their use of AI right from the start will be best prepared to adapt to AI regulatory compliance changes as they emerge.

Hootsuite’s permissions, security, and archiving tools will ensure the safety of all your social profiles—from a single dashboard. See it in action today.

Manage all your social media in one place, measure ROI, and save time with Hootsuite.




