• Francesca Hobson posted an article

    Kathryn Harrison: Deepfakes and the Deep Trust Alliance


    How did you end up in the world of identity?

    I spent the last 7 years at IBM, working in a variety of roles all over the world. 4 years ago, whilst I was in Istanbul, I discovered blockchain, which is the space I’ve remained in since. Identity is essential to blockchain, particularly when you think about permissions and the blockchain network. After leaving IBM in June, I was really interested in understanding the provenance of digital content, so I set up the Deep Trust Alliance. How do you know that images, videos or texts are what they say they are? Or what their provenance is? That drew me down a couple of different rabbit holes – one of those is identity, the other is around deepfakes and misinformation. Both strands are essential to understanding what’s real on the internet. I’ve been so fortunate to have some incredible mentors and advisors from the identity space, specifically Don Thibeau from OIX and Timothy Ruff from Evernym.

     

    Taking a step back, how do you define a deepfake?

    It’s an image or video generated by AI technology known as generative adversarial networks, or GANs for short. The technology was pioneered by a researcher called Ian Goodfellow at the University of Montreal, who’s now at Apple. It’s a specific method of combining existing images with AI technology to create a wholly new image. Yet when people talk about deepfakes, especially in the US, they talk about the Nancy Pelosi video which came out this summer, but that video didn’t use AI. It used technology that’s been around for 20 years that basically anyone could use. Deepfakes have captured people’s imagination and that’s an important place to start, but you have to think about the broader ecosystem of digital forgeries.
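    The adversarial idea behind GANs can be sketched in a few lines. This toy is ours, not from the interview: a linear "generator" learns to mimic a one-dimensional Gaussian while a logistic-regression "discriminator" tries to tell real samples from generated ones. Real deepfake models use deep convolutional networks, but the training loop has the same two-player shape.

    ```python
    import numpy as np

    # Minimal 1-D GAN sketch (illustrative only). Generator: fake = a*z + b.
    # Discriminator: D(x) = sigmoid(w*x + c). All parameter names are ours.
    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def real_batch(n):
        # "Real" data the generator must learn to mimic: samples from N(4, 1).
        return rng.normal(4.0, 1.0, n)

    a, b = 1.0, 0.0   # generator parameters
    w, c = 0.1, 0.0   # discriminator parameters
    lr, n = 0.05, 64

    for step in range(2000):
        z = rng.normal(0.0, 1.0, n)   # latent noise
        fake = a * z + b              # generator output
        real = real_batch(n)

        # Discriminator step: push D(real) -> 1 and D(fake) -> 0
        d_real = sigmoid(w * real + c)
        d_fake = sigmoid(w * fake + c)
        grad_w = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
        grad_c = np.mean(d_real - 1.0) + np.mean(d_fake)
        w -= lr * grad_w
        c -= lr * grad_c

        # Generator step: fool the discriminator, i.e. push D(fake) -> 1
        d_fake = sigmoid(w * fake + c)
        g_common = (d_fake - 1.0) * w   # gradient of -log D(fake) w.r.t. fake
        a -= lr * np.mean(g_common * z)
        b -= lr * np.mean(g_common)

    print(f"generator now samples roughly N({b:.2f}, {abs(a):.2f}^2)")
    ```

    After training, the generator's offset `b` drifts toward the real mean of 4 — the generator improves exactly because the discriminator keeps getting better at catching it, which is the "arms race" dynamic mentioned later in the interview.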

     

    The work you do at the Deep Trust Alliance reaches across all elements of society and industry – how did you start on that journey into this space?

    At IBM I ran product management of the IBM Blockchain Platform, contributing to Hyperledger Fabric and building the managed service on top of it. I was supporting building networks in anything from digital tickets, food, trade and shipping containers to financial products. Looking at the ledger layer of blockchain, I wanted to move towards digital assets and digital content, so I started building a product roadmap for what it would take to do that. Through this I learned three things:

    1. No single entity controls content. You take a picture with your iPhone and you edit it with Instagram, Adobe, or iMovie. Then you might post it to Facebook or Twitter. Then it’ll proceed to ricochet around the internet. To control that, you would need to pull together the hardware, software and the human.
    2. To reach internet scale, you have to do it in an open way. Obviously the identity community knows that, as open source and standards have been foundational in enabling identity to work. Some companies are working on closed-capture, proprietary solutions, but those will either take a long time or find it difficult to reach internet scale.
    3. It’s ultimately an arms race. The incentives are more in the camp of nefarious actors. Deepfakes are ultimately just a tool which can be used to defraud any company that transacts over the internet, so there are really important implications for financial services, insurance, telecoms and even healthcare. We’re already starting to see sophisticated attacks using fake audio, like a recent case where a German CEO’s voice was faked and $243,000 was transferred.

     

    Where do you think awareness of deepfakes is now?

    A lot of people are worried about the problem of deepfakes, but they don’t have a lot of solutions, because people see it as a totally new phenomenon. Although the technology is new, it sits in a long line of digital forgeries stretching back to the days of the first cameras and motion pictures. My aim is to connect the dots between the companies trying to work on this problem for their own platforms and the academics who are researching it, to put the issue (and solutions) firmly on corporate agendas so they can deal with deepfakes. Robust standards for accelerating the use of ethical deepfakes are also key, because there are scenarios where they make a lot of sense: for example entertainment, fashion, or recreating the voices of people who have lost the ability to speak.

     

    Are there particular industries that you’re speaking to first?

    We’re spending a lot of time talking to social media platforms and news providers, because they’ve been directly impacted already. There’s a lot of political attention and regulatory pressure on them to make sure they’re sharing the right information, so they’re further ahead than most in how they understand the risk and innovate. For example, Facebook and Google have run a number of deepfake detection challenges. Importantly, as they develop technology they’re building best practice in technology and policy, which needs to be disseminated across the ecosystem. The Deep Trust Alliance can help share these best practices and coordinate learnings across a number of stakeholders to drive standards forward.

     

    Do you find it helps that stories about deepfakes are quite sensational?

    With technology problems you need to capture people’s imagination, and deepfakes have done that because it’s a sexy story, but you have to be careful not to make people too crazy. We’re going to live in a world connected by 5G networks, and we’ll all have devices that are sending tonnes of data to each other. The IoT and the security situation with those devices isn’t great. These are all questions that we as consumers, society and corporations need to think about. Part of the problem is there’s no single silver bullet.

     

    How do you detect a deepfake?

    Media literacy is a great place to start. Look at the website you’re on, look at the image itself. Often AI technology doesn’t recognise some of the semantic differences that a person would pick up immediately. For example, I could wear two different earrings as a fashion statement, but more likely it’s an asymmetry the technology has missed. There’s a website called Which Face is Real?, which shows two pictures side by side. People get about 60% of those images right, so it’s a little better than guessing. There are some technical tools like browser plug-ins that use forensics, but the detection technology is still fairly nascent and you need to really understand how the image is made to accurately detect it.

     

    Why do you think Women in Identity is important?

    Where to begin! When you take a moral philosophy view, identity is all about how you recognise, define and share who you are. The IT space approaches it from such a dry and technical perspective, but a diverse range of perspectives is essential because it goes back to the foundations of what it means to be a human. Annabel Backman, a senior engineer at Amazon Identity, made a really compelling point at Internet Identity World: identity means and drives very different things to different people. Women’s perspectives and experience are required so you build secure, user-friendly and fundamentally human solutions.

     

    Do you see that deepfakes affect certain groups in society?

    100%. Deeptrace mapped deepfakes over the past few months and found nearly 15,000 deepfakes openly available on the internet. Of those, 96% were deepfake porn, and 100% of that targeted women. At a conference recently, Mary Anne Franks from the University of Miami made one of the most compelling points: if society cared about the damage that was done to women, it would have immediately identified the risk that comes from deepfake porn. As a society we’d be much better prepared to deal with these threats in the media or news, because we would have thought through the ways in which people would use it. Instead, we’ve kind of ignored it, didn’t think it was a real problem for society, and here we are years later playing catch-up.

    There’s a famous example of a journalist in India, Rana Ayyub, who’s been outspoken about the Modi administration and the Hindu nationalist movement. Some of her adversaries created a fake porn video of her and spread it all over the internet. In India this is problematic to a degree that we in the UK and US don’t quite understand, as it resulted in numerous death threats. The police didn’t help. They watched it in front of her, made all sorts of jokes and said she probably deserved to be killed. It got to the point where the UN had to intervene to get India to take down the fake video. The average woman doesn’t have the UN ready to speak to her prime minister. In a number of ways, deepfakes and technology in a broader sense affect the most vulnerable populations first – it’s a little bit like the canary in the coal mine.

     

    How do you think technology can avoid these problems?

    There’s a lot that technology can learn from healthcare, whether that’s technologists taking the equivalent of a Hippocratic oath or nutrition-style labels that tell consumers what’s in the technology. Looking beyond deepfakes to AI and machine learning models, how do you know that bias isn’t baked in? There’s been an atmosphere of techno-optimism, that technology gives us the tools to deal with any problem that comes our way.

    I am fundamentally optimistic, but we need to think about the possible negative ways in which technology could be used too. The creator of an app called DeepNude, which adapted photos of women (not men) to make them appear naked, posted it on Reddit and it went viral. He got a lot of attention from the press and ended up taking it down with the explanation that he hadn’t considered the ways in which it could be misused. You need to break things and get shit done, but you need to make sure you’re not breaking democracy or the open internet.

     

    What do you make compulsory and what do you ban as a CEO?

    Personal development plans are compulsory for each individual in my company. What do they want to learn, what skills and capabilities do they want to develop, what are they curious about? Then we line this up with milestones and targets. It’s helped me develop as a person and a professional, and helped me prioritise my life, my day, or even the next hour. If I were to ban something (this is aspirational rather than real), it would be meetings on Wednesdays. I’d love one full day to put my head down, get things done and focus. At big companies you can get into a rhythm of back-to-back meetings, so carving out the time and space to do your reading is essential.

     

    What piece of art, book, or anything else would you recommend?

    My favourite artist is Kandinsky, I just love the colours and the energy. My favourite book of the moment is Americanah by Chimamanda Ngozi Adichie, it’s an amazing and beautiful read. I also recommend the Knowledge podcast; it’s not really art, but I love it.

     

    Find out more on the Deep Trust Alliance

    Get in touch with Kathryn on Twitter or LinkedIn

     November 05, 2019