These past few weeks have been exciting for future-thinkers like me because of the entirely novel ways that AI is being introduced and used in society. Although I’m a fan of following and talking about the continued progress of established technologies, it’s also refreshing to get an onslaught of new ideas. Without further ado…
A Spy Used AI-Generated Photos for Espionage
The AP says it found evidence of what seems to be a would-be spy using an AI-generated profile picture to fool contacts on LinkedIn. – James Vincent, The Verge
The publication says that the fake profile, given the name Katie Jones, connected with a number of policy experts in Washington. These included a scattering of government figures such as a senator’s aide, a deputy assistant secretary of state, and Paul Winfree, an economist currently being considered for a seat on the Federal Reserve.
… AI fake adds a layer of protection. Because each image is unique, it can’t be traced to a source with a reverse image search for an easy debunking.
I’ve talked in great depth in the past about GANs, and I’ve also talked about the LinkedIn platform’s propensity to propagate deception – but never the two in the same sentence. This is such an interesting confluence of modern technology, and it makes me wonder what other technologies spies could be using to gather intel.
Alexa Listens for Heart Attacks
In the US, more than 350,000 people experience cardiac arrest outside of the hospital yearly. Emergency response time is the greatest hurdle to helping keep these people alive. Now, researchers are using Alexa smart speakers to listen for the signs of someone experiencing cardiac arrest, which could drastically minimize the time it takes to send help.
The system, developed by researchers at the University of Washington, uses machine learning to identify the telltale gasping sound (known as agonal breathing) that people make when they’re struggling for air. This is an early warning sign for more than half of all cardiac arrests. – Charlotte Lee, MIT Technology Review
The system managed to correctly identify agonal breathing in 97% of instances, from up to 20 feet away. “When we tested it on our system, we found a 0.2% false positive rate in the volunteer group and a 0.1% rate in the sleep study,” says Justin Chan, who led the research.
What I love about this idea is that more than 100 million Alexa-enabled Echo speakers have been sold, which means this tech could reach widespread adoption very quickly. It could change the course of emergency response forever. It reminds me a lot of a system called The Orb.
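Those accuracy figures sound excellent, but at the scale of 100 million speakers even a 0.2% false-positive rate deserves a second look, because cardiac arrest is a rare event for any one household. As a back-of-the-envelope illustration using Bayes’ rule – where the prevalence value below is a made-up assumption for the sake of the example, not a figure from the study – we can estimate what fraction of alerts would be genuine:

```python
def positive_predictive_value(sensitivity, false_positive_rate, prevalence):
    """Bayes' rule: fraction of raised alerts that are true events."""
    true_alerts = sensitivity * prevalence
    false_alerts = false_positive_rate * (1 - prevalence)
    return true_alerts / (true_alerts + false_alerts)

# Figures reported by the researchers: 97% detection, 0.2% false positives.
# The prevalence (1 true event per 100,000 monitored nights) is a
# hypothetical assumption chosen only to illustrate the base-rate effect.
ppv = positive_predictive_value(0.97, 0.002, 1e-5)
print(f"{ppv:.2%} of alerts would be genuine")  # well under 1%
```

Under these assumed numbers, the overwhelming majority of alerts would be false alarms, which is why a deployed system would likely want a quick confirmation prompt before dialing emergency services rather than calling automatically.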
Cat Face-Filter Derails Political Meeting
Last week’s live-streamed presser by Pakistani regional minister Shaukat Yousafzai attracted international attention for all the wrong reasons. While Yousafzai addressed the public in his weekly conference, a volunteer on the team accidentally activated the cat filter on Facebook Live, causing the minister to appear with digital cat ears, whiskers, and rosy cheeks. – Natt Garun, The Verge
Yes, this story is not technologically novel. We already know that AR filters use facial recognition. I just thought the application was comical and lighthearted. It got me thinking about how we can use tech to lighten the mood and even ease political tension.
AI that Spots Photoshopped Faces
Digital photos, Adobe, and Instagram… The combination of these three technologies has created a whirlwind of heavily edited, airbrushed images that give the world an unrealistic sense of body image. Adobe is stepping forward to combat the altered media its tools have empowered people to create.
…in collaboration with scientists from UC Berkeley that uses machine learning to automatically detect when images of faces have been manipulated. – James Vincent, The Verge
The research is specifically designed to spot edits made with Photoshop’s Liquify tool, which is commonly used to adjust the shape of faces and alter facial expressions.
The resulting algorithm is impressively effective. When asked to spot a sample of edited faces, human volunteers got the right answer 53 percent of the time, while the algorithm was correct 99 percent of the time.
Truth be told, though, I’m not sure if airbrushed faces are even our biggest threat anymore. The true test will be to see whether they or someone else can create a tool that can detect deepfakes. A step in the right direction, nonetheless.
AI Uncovers Ancient Games
For more than two decades, IBM and Google have been scheduling Man vs. AI exhibition matches in various disciplines to test the progress of machines – from chess to StarCraft to Jeopardy! and even debate. However, researchers are now using AI not to test machines’ ability to play games, but rather to uncover how games were once played and how humans once challenged one another.
They are pioneering a new area of archaeology focused purely on games. The goal is to better understand these ancient games and their role in human societies, to reconstruct their rules and to determine how they fit into the evolutionary tree of games that has led to the games we play today. – MIT Technology Review
[The researchers] say the new techniques of machine vision, artificial intelligence, and data mining provide an entirely new way to study ancient games and to build a better understanding of the way they have evolved.
It’ll be fascinating to find out how games today may reflect games once played. And who knows, maybe they’ll invent a new game in the process.
Facebook’s AI Learns from Virtual Worlds
A lack of common sense is a glaring problem for today’s AI systems. Unlike a person, a chatbot or robot cannot rely on an understanding of the world—things like physics, logic, and social norms—to figure out the intent of an ambiguous command. – Will Knight, MIT Technology Review
Researchers at Facebook have created a number of extremely realistic virtual homes and offices so that their AI algorithms can learn about real-world objects through exploration and practice. In theory, this could make chatbots and robots smarter, and it could make it possible to manipulate VR in powerful ways. But the virtual spaces need to be extremely lifelike for this to be transferable to the real world.
This is an intriguing approach to developing better object recognition, which I’m sure they’ll implement in Portal, their tabletop video-calling device. Additionally, since Facebook is one of the leaders in VR, I could see this research being critical to building systems that can easily generate virtual worlds and analyze how people move through them.