• CrissCross Lab

Can AI also develop Bias in Marketing?

Keywords: Artificial Intelligence, Predictive Analysis, Bias, Digital Marketer, Digital, Advertisement, Campaigns


CrissCross Lab writes about Artificial Intelligence and how poorly trained models can develop biases around race, gender, and other criteria

Artificial Intelligence (AI) has the ability to automate tasks by learning from massive amounts of previously collected data. Through the concept of "Predictive Analysis", AI can predict the outcome of a particular task based on "trends" from the past. Generation-Z Digital Marketers are heavily dependent on AI tools, not just to save the money and time required to perform a task, but also to increase the likelihood of success.


But a question left unanswered is: do these AI tools have biases? The question first came up in DAMSU's #TheDaytheDatawillTalk Webinar - The future of Data Analytics, where the co-founder of CrissCross Lab, Mr. Udit Garg aka 'UD', was asked, "Can Artificial Intelligence develop biases?". At the time, UD responded:


"Artificial Intelligence has the ability to develop biases. At the end of the day, Artificial Intelligence is programmed by humans, who are imperfect, and so AI is also imperfect. While AI has the ability to evolve more quickly thanks to the computational prowess available to it, changes in the algorithm still need to be closely monitored so that if the AI is developing a particular bias, it can be fixed right away!"

Miriam Vogel of EqualAI said, "In short, we are creating the perfect storm against persons of color and other underrepresented populations," and in our view, the reason is training. For new readers who are unfamiliar with Artificial Intelligence: one of the most important steps in developing an AI system is training the model on datasets, which means providing it with historical data to learn from. The problem arises with the quality of those datasets. Since not all AI developers have access to proprietary datasets, the freely available ones are sometimes not enough to train the AI effectively. The result? An AI that is flawed and biased.
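To see how a skewed dataset produces a biased model, consider the following minimal sketch. The "hiring" data, the groups, and the counting-based classifier are all hypothetical illustrations invented for this example; real training pipelines are far more complex, but the core failure mode is the same: the model faithfully learns whatever imbalance is in its data.

```python
from collections import Counter

# Hypothetical, deliberately skewed training set of historical
# hiring outcomes: group "A" was hired far more often than "B".
training_data = (
    [("A", "hired")] * 80
    + [("A", "rejected")] * 20
    + [("B", "hired")] * 10
    + [("B", "rejected")] * 40
)

def fit_hire_rates(data):
    """Learn P(hired | group) by simple counting."""
    totals, hires = Counter(), Counter()
    for group, outcome in data:
        totals[group] += 1
        if outcome == "hired":
            hires[group] += 1
    return {g: hires[g] / totals[g] for g in totals}

def predict(rates, group, threshold=0.5):
    """Predict 'hired' when the learned rate clears the threshold."""
    return "hired" if rates[group] >= threshold else "rejected"

rates = fit_hire_rates(training_data)
print(rates)                # {'A': 0.8, 'B': 0.2}
print(predict(rates, "A"))  # hired
print(predict(rates, "B"))  # rejected -- the bias in the data
                            # becomes the bias of the model
```

Nothing in the model's code mentions group "B" unfavorably; the bias lives entirely in the data it was trained on, which is exactly why dataset quality matters so much.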


I don't use AI in my Marketing!

That is the most absurd thing you can say as a Digital Marketer! Have you ever run your ad campaigns on Google, Facebook, Twitter, or virtually any other Digital Advertisement Platform (DAP)? Then you are already using AI, which is a built-in component of that DAP. Every DAP today runs your Digital Ad Campaigns (DACs) through its AI tool: you only select the parameters of the DAC, such as demographics, budget, and period, and the AI automatically optimizes the DAC according to those parameters.


As a Digital Marketer, you simply download the reports after the DAC finishes and, based on those results, manually optimize the DAC for the next time you run it on that DAP.


What Does the Study Show?

On April 9, 2015, the University of Washington (UoW) published a story about its study on AI bias, titled "Who's a CEO? Google image results can shift gender biases", which discusses how an AI perceives a particular profession, such as CEO or telemarketer, and how the AI's results can differ from the actual facts.



The study's results were enough to point out a stereotypical bias in Google's AI itself. When the UoW searched Google Images for the keyword "CEO", only 11% of the top 100 image results were women, while in the US in 2015, women made up 27% of CEOs. Similarly, a search for the keyword "Telemarketer" returned 64% women, while in 2015 only about 50% of telemarketing professionals were women.
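The gap between search results and reality can be made explicit with a small calculation using the figures reported above (the dictionary layout here is our own):

```python
# (share of women in top image results, actual share of women in the
# profession) -- figures from the 2015 UoW study discussed above.
professions = {
    "CEO":          (0.11, 0.27),
    "Telemarketer": (0.64, 0.50),
}

for job, (search_share, actual_share) in professions.items():
    gap = search_share - actual_share
    direction = "under" if gap < 0 else "over"
    print(f"{job}: search {search_share:.0%} vs actual {actual_share:.0%}"
          f" -> women are {direction}-represented by {abs(gap):.0%}")
```

The sign of the gap flips between the two professions, which is what makes this a stereotype rather than a uniform error: women are shown less often than reality for "CEO" and more often for "Telemarketer".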


On other search terms, the AI still underrepresented women and people of color in its results, which supported the UoW's hypothesis.


When asked about marketing and what people want to see in it, the co-author of the WSJ Bestseller Fanocracy said:

"I want to see somebody who looks like me and acts like me and has a background like me"

And that idea needs to carry over when the AI is being nurtured. The AI collects data on each of us heavily: our interests, our browsing history, our purchase patterns, and so on. But the AI needs to understand the person a little better before it can run predictive analysis. DAC targeting has been active for many years, but think of the following situation.


Let's go Shopping

Think of a situation where a friend of yours asks you to look for a Bluetooth speaker for him on Amazon, Flipkart, or any other eCommerce platform. As a good Samaritan, you open the website, type in the keywords "Bluetooth Speaker under 5000", and are bombarded with Bluetooth speaker options from various vendors.



Now, your friend knows what he wants to buy, but the website's AI has jotted down that "you" may be interested in Bluetooth speakers, and it will pass this information on to the Central Advertisement Library (CAL). From then on, "you" will be looking at promotions and advertisements for Bluetooth speakers wherever you click. You may be playing Ludo on your phone and flashy Bluetooth speaker advertisements appear everywhere; you open YouTube and see an advertisement of Technical Guruji reviewing a particular Bluetooth speaker. The idea is to increase the reach of a particular product based on interest, and that is what Targeted Marketing is, but the parameters set by you can end up as wasted money.


In the above case, a single search made the AI think the user might be interested in Bluetooth speakers, and it targeted ads to the user accordingly. Little did it know that this could just be one-off behavior, not an actual interest.


While training, the AI needs to be taught to see "the user" as a human with changing tastes, insecurities, etc., rather than as an object with categories. Active Campaign Management & Engagement (ACME™) is an AI tool we developed in-house. ACME™ uses our STEP© (Short Term Engagement Period) technology, which helps with dynamic parameter distribution, increasing RoI (Return on Investment).
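The details of STEP© are proprietary, but the general idea of weighting engagement by recency, so that a single one-off search fades out instead of defining the user, can be sketched as follows. The function name, event format, and the 7-day half-life are all our own illustrative assumptions, not ACME™'s actual implementation.

```python
import math
import time

def interest_score(event_times, now=None, half_life_days=7.0):
    """Score a user's interest in a category from timestamped events.

    Each event's weight decays exponentially with its age, so a lone
    search from weeks ago barely counts, while sustained recent
    engagement keeps the score high.
    """
    now = time.time() if now is None else now
    day = 86400.0
    decay = math.log(2) / (half_life_days * day)  # half-life -> rate
    return sum(math.exp(-decay * (now - t)) for t in event_times)

now = time.time()
one_off = [now - 20 * 86400]                       # one search, 20 days ago
sustained = [now - d * 86400 for d in (0, 1, 3)]   # three recent touches

print(round(interest_score(one_off, now), 2))      # ~0.14 -- fades out
print(round(interest_score(sustained, now), 2))    # ~2.65 -- still relevant
```

A targeting system that thresholds on a score like this would stop chasing the user from the Bluetooth speaker example within a couple of weeks, instead of treating one search as a permanent category label.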


On the other hand, most AI systems have a fixed goal: increase the reach of a particular DAC. So, yes, Artificial Intelligence (AI) can develop bias, but it is up to us, as the developers, to actively perform checks to reduce those biases.
