Almost all companies using AI are deploying it invisibly. Whether it's the automatic playlist generation on Spotify or the suggested route on Google Maps, we as consumers are completely unaware when the AI is operating.
This is why Jason Brush of Fast Company is calling for AI to get "a good, old-fashioned brand identity". Basically, if consumers knew when AI algorithms were being utilized, they could differentiate the harmful use cases from the beneficial ones, and thus take action against specific instances of AI rather than attacking the general umbrella of all artificial intelligence.
Obviously, this will not be a simple undertaking.
YouTube AI vs. Netflix AI
Take for example, Netflix and YouTube, which both use AI-generated recommendations to maximize our entertainment. Both of them hide this use of AI behind well-designed interfaces. And both play a critical role in molding people’s beliefs through the content they recommend. But, the way in which they deploy AI is vastly different:
Netflix's AI is designed to make the viewing experience simpler, whereas YouTube's AI is designed to elongate that viewing experience.
Netflix uses AI to recommend content and get you watching as quickly as possible. Ideally, the first recommendation works, and the AI is out of the picture while you watch a 90-minute movie.
YouTube uses AI to amplify whatever you click on, giving you an endless stream of only that narrow topic. They want you to follow that AI recommendation for as long as possible.
Therefore, the way in which their AIs are branded and the way in which we discuss the potential dangers will be different.
Branding AI is not that Easy
To illustrate this point, I’ll make the comparison to McDonald’s and cigarettes, which are both bad for your health (as Netflix and YouTube both are bad for your productivity). But how we go about telling people about the dangers is vastly different.
As soon as you walk in the door, McDonald's aims to give you your food as quickly as possible. They follow up this speedy delivery with food that tastes good (subjectively). The proper way to warn people of the dangers of McDonald's is to attack the tasty (but unhealthy) food, not the speedy delivery. It's why Fast Food Nation and Super Size Me expose the "food" in "fast food", not the "fast" in "fast food".
Similarly, Netflix's AI aims to deliver speedily and then follow up with great content. Therefore, the proper way to warn people of the dangers of Netflix is not to warn them about the recommendation algorithm, which only wants to help you find great content quickly and then be out of your hair. No, it's to warn people about the content itself.
On the other hand, with cigarettes, the addictive nature and the unhealthiness are inseparable: the delivery mechanism itself is the danger. YouTube's AI is closer to the cigarette model, because the recommendation loop is designed to keep you hooked.
Making the YouTube AI brand visible would actually bring benefit in reducing these "YouTube rabbit holes". Maybe they call out that their AI is making suggestions, in the same way a cigarette pack carries a warning label.
The point I'm making here is that the branding of AI is a complicated matter. I don't believe that branding AI is going to be a cut-and-dried process like a Coca-Cola or Gillette, where we brand a color, a logo, a tagline, and an emotion… and voila! There's an award-winning brand.
On the surface level, Netflix and YouTube algorithms have very similar functions. But, the way in which we tell people about the dangers (and thus educate about moderation) is vastly different.
For a more concrete vision, I think virtual assistants have really given us a taste of how we may brand artificial intelligence in the future.
Branding Virtual Assistants
Each virtual assistant – Siri, Alexa, Cortana, Watson, and Google Assistant – has taken on a name, a recognizable voice, and a slight persona. This branding of the virtual assistants turned the intangible nature of AI into something tangible.
Furthermore, it differentiates the options. With virtual assistants, it's now about choosing sides. People who see Siri as unhelpful don't rock with that brand; people who see Alexa as helpful do.
Looking to the next five years, as virtual assistants begin entering new mediums (VR in particular), this AI branding can be taken one step further through the use of digital humans. A prime example is how Cortana (Microsoft’s virtual assistant) took on a digital human form in Halo 4.
Isn't it possible that YouTube could employ a digital human on their website that informs people of the AI at work behind their recommendations?
The fact that AI applications are brandless, invisible, and therefore rarely held accountable for their wrongdoing, seems like a minor issue today. This is because, as far as we know, AI is making our lives more efficient, more informed, and more entertained. But, with the flip of a switch, this can be reversed. And then we’ll be in deep trouble.
AI is not going to recede from our existence. In fact, we’re really just getting started:
"The business plans of the next 10,000 startups are easy to forecast: take X and add AI." (Kevin Kelly, Wired)
Therefore, while we're still in the early stages of AI, why don't we create a process by which we can identify AI's role? For society to operate and have a constructive relationship with AI, it's important that we understand when and how AI is being used, and can judge for ourselves, on a case-by-case basis, whether it's a benefit or not.