<h1>How marketers can mitigate bias in generative AI</h1>
<div>
<p><em>This article was co-authored by <a rel="nofollow noopener" href="https://www.gartner.com/en/experts/nicole-greene" target="_blank">Nicole Greene</a>.</em></p>
<p>Technology providers such as Amazon, Google, Meta and Microsoft have long sought to address concerns about the effects of bias in the datasets used to train AI systems. Tools like Google’s Fairness Indicators and Amazon’s SageMaker Clarify help data scientists detect and mitigate harmful bias in the datasets and models they build with machine learning. But the sudden, rapid adoption of the latest wave of AI tools that use large language models (LLMs) to generate text and artwork for marketers presents a new class of challenges.</p>
<p>Generative AI (genAI) is <a rel="nofollow noopener" href="https://www.gartner.com/en/webinar/487667/1143179?utm_medium=press-release&amp;utm_campaign=GML_GB_2023_GML_NPP_PR1_WBMKTGGENERATIVEAI&amp;utm_term=wb" target="_blank">an incredible breakthrough</a>, but it is not human, and it will not do exactly what people think it should. Its models have bias just as humans have bias. The rapid commercialization of genAI models and applications has moved sources of bias beyond the scope of the tools and techniques currently available to data science departments.
Mitigation efforts must go beyond the application of technology alone to include new operating models, frameworks and employee engagement.</p>
<h2 class="wp-block-heading" id="h-marketers-are-often-the-most-visible-adopters-of-genai">Marketers are often the most visible adopters of genAI</h2>
<p>As the leading and most visible adopters of genAI in most organizations — and the people most responsible for brand perception — marketers find themselves on the <a rel="nofollow noopener" href="https://www.gartner.com/en/marketing/research/how-should-cmos-respond-to-chatgpt-today?utm_medium=press-release&amp;utm_campaign=GML_GB_YOY_GML_NPP_PR1_RECHATGPT" target="_blank">front lines</a> of AI bias mitigation. These new challenges often require sensitive human oversight to detect and address bias. Organizations must develop best practices across customer-facing functions, data and analytics teams, and legal to avoid damage to their brands and organizations.</p>
<p>Marketing’s most basic function is to <a rel="nofollow noopener" href="https://www.gartner.com/en/webinar/487667/1143179?utm_medium=press-release&amp;utm_campaign=GML_GB_2023_GML_NPP_PR1_WBMKTGGENERATIVEAI&amp;utm_term=wb" target="_blank">use tools</a> to find and deliver messages to the people most likely to benefit from the business’s products and services. Adtech and martech include predictive, optimization-driven technology designed to determine which individuals are most likely to respond and which messages are most likely to move them. This includes decisions such as how to segment and target customers and how to manage customer loyalty.
Since the technology relies on historical data and human judgment, it risks cementing and amplifying biases hidden within an organization, as well as biases in commercial models over which marketers have no control.</p>
<h2 class="wp-block-heading" id="h-allocative-and-representational-harm">Allocative and representational harm</h2>
<p>When algorithms inadvertently disfavor customer segments with disproportionate gender, ethnic or racial characteristics due to historical socioeconomic factors inhibiting participation, the result is often described as “allocative harm.” While high-impact decisions, like loan approvals, have received the most attention, everyday marketing decisions such as who receives a special offer, invitation or ad exposure present a more pervasive source of harm.</p>
<p>Mitigating allocative harm has been the aim of many data science tools and practices. GenAI, however, has raised concerns about a different type of harm. “Representational harm” refers to stereotypical associations that appear in recommendations, search results, images, speech and text. Text and imagery produced by genAI may include depictions or descriptions that reinforce stereotypical associations of genders or ethnic groups with certain jobs, activities or characteristics.</p>
<p>Some researchers have coined the phrase “stochastic parrots” to express the idea that LLMs can mindlessly replicate and amplify the societal biases present in their training data, much as parrots mimic words and phrases they were exposed to.</p>
<p>Of course, humans are also known to reflect unconscious biases in the content they produce. It’s not hard to find examples of marketing blunders that produced representational harm and drew immediate backlash.
Fortunately, such flagrant mishaps are relatively rare, and most agencies and marketing teams have the judgment and operational maturity to detect them before they cause harm.</p>
<p>GenAI, however, raises the stakes in two ways.</p>
<p>First, the use of genAI to produce content for personalized experiences multiplies the opportunities for this type of gaffe to escape review and detection. This is due both to the surge in new content creation and to the many combinations of messaging and imagery that could be presented to a consumer. Preventing representational bias in personalized content and chatbot dialogs requires scaling up active oversight and testing skills to handle unanticipated situations arising from unpredictable AI behavior.</p>
<p>Second, while flagrant mistakes get the most attention, subtle representational harms are more common and harder to eliminate. Taken individually, they may appear innocuous, but they produce a cumulative effect of negative associations and blind spots. For example, if an AI writing assistant employed by a CPG brand persistently refers to customers as female based on the copy samples it has been given, its output may reinforce a “housewife” stereotype and build a biased brand association over time.</p>
<p><em>Dig deeper: <a rel="nofollow noopener" href="https://martech.org/third-party-data-in-advertising-best-of-the-martechbot/" target="_blank"><strong>Third-party data in advertising — Best of the MarTechBot</strong></a></em></p>
<h2 class="wp-block-heading" id="h-addressing-harms-in-genai">Addressing harms in genAI</h2>
<p>Subtle representational bias requires deeper skill, contextual knowledge and diversity to recognize and eliminate. The first step is acknowledging the need to incorporate oversight into an organization’s regular operations.
Consider taking these steps:</p>
<ul>
<li><strong>Address the risk.</strong> Bias enters genAI through its training data, human reinforcement and everyday usage. Internal and agency adoption of genAI for content operations should be preceded by targeted education, clear accountability and a plan for regular bias audits and tests.</li>
<li><strong>Formalize principles.</strong> Align all stakeholders on principles of diversity and inclusion that address the specific hazards of bias in genAI. Start with the organization’s stated principles and policies and build bias audits around them. Set fairness constraints during training and involve a diverse panel of human reviewers to catch biased content. Clear guidelines and ongoing accountability are crucial for ensuring ethical AI-generated content.</li>
<li><strong>Account for context.</strong> Cultural relevance and disruptive events change perception in ways genAI is not trained to recognize; an LLM’s assimilation of impactful events can lag behind changing societal perception. Marketing leaders can advise communications and HR on extending diversity, equity and inclusion training to AI-related topics so teams are prepared to ask the right questions about existing practices and adoption plans. They can also ensure that test data includes examples that could trigger bias.</li>
<li><strong>Collaborate vigorously.</strong> Ensure that marketing personnel work closely with data specialists. Curate diverse, representative datasets using both data science tools and human feedback at all stages of model development and deployment, especially as fine-tuning of foundation models becomes more commonplace.
As marketers consider AI-driven changes to staffing and training, they should prioritize scaling up the review and feedback activities that bias mitigation requires.</li>
</ul>
<p>Marketing leaders who follow these steps when setting internal genAI policies protect their brand in a way that can pay dividends down the line. Even the major players in the space are working to address bias in genAI, but not everyone takes all of these steps, which can leave major blind spots in genAI-led projects.</p>
</div>
<p><em>Opinions expressed in this article are those of the guest author and not necessarily MarTech. Staff authors are listed <a rel="nofollow" href="https://martech.org/staff/">here</a>.</em></p>
<p><a href="https://martech.org/how-marketers-can-mitigate-bias-in-generative-ai/">Source link</a></p>
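<p>The regular bias audits and tests recommended above can start small. As an illustrative sketch only (not any vendor's tool; the term lists, threshold and sample copy below are hypothetical), a team could count gender-coded terms across a batch of AI-generated copy and flag the batch for human review when the balance drifts:</p>
<pre><code class="language-python">
from collections import Counter
import re

# Hypothetical term lists for illustration; a real audit would use
# reviewed, context-aware lexicons plus human judgment.
FEMALE_TERMS = {"she", "her", "hers", "woman", "women", "housewife"}
MALE_TERMS = {"he", "him", "his", "man", "men"}

def gender_term_skew(copy_samples):
    """Return the share of female-coded terms across generated copy.

    0.5 means the counted terms are balanced; None means no gendered
    terms were found at all.
    """
    counts = Counter()
    for text in copy_samples:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token in FEMALE_TERMS:
                counts["female"] += 1
            elif token in MALE_TERMS:
                counts["male"] += 1
    total = counts["female"] + counts["male"]
    return counts["female"] / total if total else None

# Hypothetical batch of AI-generated ad copy.
samples = [
    "She keeps her kitchen spotless with our spray.",
    "Every busy woman deserves a break.",
    "He grabbed the cleaner on his way home.",
]

skew = gender_term_skew(samples)
# Values far from 0.50 suggest the batch deserves a closer human look.
print(f"female-coded share: {skew:.2f}")
</code></pre>
<p>A crude frequency count like this cannot judge context, so it works as a tripwire for the human review panel rather than a verdict: it cheaply surfaces batches worth examining, and the diverse reviewers described above make the actual call.</p>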