
Almost every company using AI deploys it invisibly. Whether it’s automatic playlist generation on Spotify or the suggested route on Google Maps, we as consumers are completely unaware when the AI is operating.

Ignorance is bliss, right? Maybe so. But that doesn’t mean it’s the right approach. We cannot continue being left in the dark about intelligent applications that essentially have the power to shape our behavior.

This is why Jason Brush, writing in Fast Company, is calling for AI to get “a good, old-fashioned brand identity.” The idea: if consumers knew when AI algorithms were being used, they could differentiate the harmful use cases from the beneficial ones, and take action against specific instances of AI rather than attacking artificial intelligence as a whole.

Obviously, this will not be a simple undertaking.


YouTube AI vs. Netflix AI

Take, for example, Netflix and YouTube, which both use AI-generated recommendations to maximize our entertainment. Both hide this use of AI behind well-designed interfaces. And both play a critical role in molding people’s beliefs through the content they recommend. But the way in which they deploy AI is vastly different:

Netflix’s AI is designed to make the viewing experience simpler, whereas YouTube’s AI is designed to prolong it.

Netflix uses AI to recommend content and get you watching as quickly as possible. Ideally, the first recommendation works, and the AI is out of the picture while you watch a 90-minute movie.

YouTube uses AI to amplify whatever you click on, giving you an endless stream of only that narrow topic. They want you to follow that AI recommendation for as long as possible.


Therefore, the way in which their AIs are branded and the way in which we discuss the potential dangers will be different.


Branding AI is not that Easy

To illustrate this point, I’ll compare McDonald’s and cigarettes, which are both bad for your health (just as Netflix and YouTube are both bad for your productivity). But how we warn people about their dangers is vastly different.

As soon as you walk in the door, McDonald’s aims to give you your food as quickly as possible. They follow up this speedy delivery with food that tastes good (subjectively). The proper way to warn people of the dangers of McDonald’s is to attack the tasty (but unhealthy) food, not the speedy delivery. It’s why Fast Food Nation and Super Size Me expose the “food” in “fast food,” not the “fast” in “fast food.”

Similarly, Netflix’s AI aims to deliver speedily and then follow it up with great content. Therefore, the proper way to warn people of the dangers of Netflix is not to warn them of their recommendation algorithm that wants to help you find great content quickly and then be out of your hair. No, it’s to warn people of the content itself.

On the other hand, with cigarettes, the addictive nature and unhealthiness are baked into the cigarette itself. They are engineered to make you use them as long as possible and to want more – similar to how the very nature of the YouTube recommendation algorithm is addictive. This is why the YouTube rabbit hole is a very real threat to people.

Making the YouTube AI brand visible would actually help reduce these “YouTube rabbit holes.” Perhaps YouTube could call out that its AI is making suggestions, in the same way that cigarette packs must carry a Surgeon General’s warning label. This way, we understand that we’re being influenced by AI and can decide whether to continue following the recommended videos or start a new search.

The point I’m making here is that the branding of AI is a complicated matter. I don’t believe branding AI will be a cut-and-dried process like Coca-Cola or Gillette, where we brand a color, a logo, a tagline, and an emotion… and voilà! There’s an award-winning brand.

On the surface level, Netflix and YouTube algorithms have very similar functions. But, the way in which we tell people about the dangers (and thus educate about moderation) is vastly different.

For a more concrete vision, I think virtual assistants have really given us a taste of how we may brand artificial intelligence in the future.

Branding Virtual Assistants

Each virtual assistant – Siri, Alexa, Cortana, Watson, and Google Assistant – has taken on a name, a recognizable voice, and a slight persona. This branding of the virtual assistants turned the intangible nature of AI into something tangible.

Furthermore, it differentiates the options. With virtual assistants, it’s now about choosing sides. People see Siri as unhelpful, so they don’t rock with that brand. They see Alexa as helpful, so they rock with it.

More specifically, each branded virtual assistant now becomes responsible for its actions. For instance, in my office, when Alexa falsely hears a wake word and interrupts a conversation I’m having, I’m aware that the voice recognition AI is present and made a mistake. If I’m in the middle of a meeting, I’ll turn Alexa off. By branding this AI, the user is aware the AI is operating and can thus decide whether to keep using it.

Looking to the next five years, as virtual assistants begin entering new mediums (VR in particular), this AI branding can be taken one step further through the use of digital humans. A prime example is Cortana: Microsoft named its virtual assistant after the AI character who appears as a digital human in the Halo games.

Isn’t it possible that YouTube could employ a digital human on its website to inform people of the AI in use? Or, as with Cortana in Halo, the digital human could be the AI – making the digital human the AI’s brand. I elaborate a bit more on this concept of digital humans being present in our software in the Quick Theories piece, How Domino’s is leading us into the era of conversational interfaces.

The fact that AI applications are brandless, invisible, and therefore rarely held accountable for their wrongdoing, seems like a minor issue today. This is because, as far as we know, AI is making our lives more efficient, more informed, and more entertained. But, with the flip of a switch, this can be reversed. And then we’ll be in deep trouble.

AI is not going to recede from our existence. In fact, we’re really just getting started:

“The business plans of the next 10,000 startups are easy to forecast: take X and add AI.”

– Kevin Kelly, Wired

Therefore, while we’re still in the early stages of AI, why don’t we create a process by which we can identify AI’s role? For society to have a constructive relationship with AI, it’s important that we understand when and how AI is being used, and judge for ourselves, on a case-by-case basis, whether we see it as a benefit or not.

