• Lilian Tseggai posted an article

    A review: Roundtable discussion on enabling borderless digital identity.

    Tamara Al-Salim reviews the roundtable discussion at the Singapore Fintech Festival.

    At the recent Singapore Fintech Festival (8-12 November), our own Tamara Al-Salim hosted and moderated a roundtable panel session on Enabling Borderless Digital Identity. The session explored the role of public/private partnerships in enabling an interoperable identity network. It was well attended, with lively debate: speakers were asked to share their views on trust, and on how to innovate amid growing demand for interoperability and cross-border recognition of people and credentials, while preserving individuals' self-sovereignty when they consent to access to their information.

     

    Common threads and differences were highlighted in the approaches to delivering government identity projects. Government-mandated delivery seemed to attract higher adoption, because of the requirements placed on users to enrol in order to access services; the pandemic also played a catalyst role here, with government ID systems used for contact tracing or as a tool for two-factor authentication. Other governments chose to partner with private entities to run the programme and its delivery for the country. This created a start in single sectors, in this case finance, before widening as the benefits were realised; it allows a clear segregation of delivery from government leadership, while keeping trust levels higher for end users because the service is government-endorsed.

     

    The speakers shared insights on the significant role Digital Identity plays in the growth of the digital economy: it drives inclusion by supporting increased access to public and private services for people, businesses and public institutions; it creates trust in a non-physical environment to enable everyone to interact and transact in a way that's authenticated and safe; it drives open markets and creates level playing fields for innovation in the interest of growth and choice for users; and finally, it lowers costs by finding ways of delivering digital services at scale.

    With the right foundational digital infrastructure (digital identity, authentication and consent, interoperable payments, and data exchange), we can create digital ID systems that work across countries with trust, limiting independent verifications in the process.

  • Francesca Hobson posted an article

    Is consumer identity different from citizen identity?

    Susan Morrow explores whether citizen identity systems like the UK’s Verify initiative can give us a…

    I was heavily involved in the UK government Verify scheme for several years. It was a challenging service to build as the government designers behind the initiative had a very specific set of challenges. Firstly, they had to design for the type of wide demographic represented by an entire country. At the same time, the service had to be secure from fraud. This was and is a fine balancing act. It takes the usability vs. security debate to a whole new level.

    Identity theft is a scourge of modern times. Javelin Research on identity theft and fraud reported some interesting changes between 2017 and 2018. Although the overall number of victims decreased in 2018, this was mainly due to a decrease in card fraud. Overall, however, there was an increase in accounts being opened in victims’ names. As someone working in the field of identity management, this concerns me greatly. The true cost of online impersonation is only beginning to be felt, and the industry needs to build bridges to stop this, now.

    I believe that a way forward is to bring together the knowledge that government and commercial services, like banks, already have.

     

    What have we learned from citizen identity?

    There are a number of citizen identity schemes either in full production or in pilot stages across the world. Governments need to allow their citizens to access government services online to keep up with technology changes, reduce costs, and meet citizen expectations.

    But many government services have a high value. Online tax services, for example, have already been victims of fraud. A look at the IRS ‘dirty dozen’ list of scams used to defraud the U.S. tax system shows what they are up against.

    Getting that heady mix of “identity for all” within a hardened identity framework is no mean feat. The UK government’s attempt at doing this has been heavily criticized. The UK’s National Audit Office (NAO) published a report that looked at the shortfall of Verify. These shortfalls are mainly a mix of cost (always an issue for government) and ‘match rates’.

    Match rate refers to the ability to ‘verify’ an identity. Verification, in the case of Verify and other government identity services, means using third-party services, like credit file agencies and ID document checking services, to check an individual. The output from these verification checks, along with the credentials used to authenticate the individual, determines that person’s ‘level of assurance’, or LOA.

    In the case of Verify, there were two levels that could be achieved, LOA1 and LOA2.

    The idea of ‘levels of assurance’ is not confined to the UK government. NIST originally came up with 4 levels of assurance but recently ‘retired’ this concept. This is what NIST says in Special Publication 800-63, Digital Identity Guidelines:

    “Rather, by combining appropriate business and privacy risk management side-by-side with mission need, agencies will select IAL, AAL, and FAL as distinct options. While many systems will have the same numerical level for each of IAL, AAL, and FAL, this is not a requirement and agencies should not assume they will be the same in any given system.”

    IAL, AAL, and FAL are identity verification, authentication, and ‘strength of federated assertion’, respectively.
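    As a rough, hypothetical sketch of NIST's point, the three dimensions can be modelled as independent fields rather than one combined 'level'. The class and field names below are mine for illustration, not from SP 800-63 itself:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssuranceProfile:
    """Per SP 800-63-3, each axis is selected independently."""
    ial: int  # Identity Assurance Level - how thoroughly the person was verified
    aal: int  # Authenticator Assurance Level - strength of the login credentials
    fal: int  # Federation Assurance Level - strength of the federated assertion

# The levels need not match: e.g. a lightly proofed identity (IAL1)
# can still log in with a strong two-factor authenticator (AAL2).
profile = AssuranceProfile(ial=1, aal=2, fal=2)
```

    The example combination is perfectly valid under this model: agencies pick each level from risk analysis, rather than assuming one number covers all three.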

    NIST, in my opinion, are very sensible in doing this, but could have gone further. I have always argued that strict LOA is based on a highly prescriptive set of requirements that are not flexible enough to service modern ID needs.

    The reality is that LOA is only a subset of an identity and it’s the underlying attributes that are needed to do online jobs.

    So, this brings us to the question: how can citizen ID, made up of often static, inflexible ‘levels of assurance’, be molded to the needs of modern consumer-centric commerce?

     

    Waste not, want not: how LOA is part of doing online tasks

    Verify was (and is) a costly exercise for government to bear. So, one of the ways of offsetting this cost is to allow commercial entities to utilize citizen IDs created under the scheme. Makes sense? If a user is able to get a Verify identity or any other government identity, they will have been through a tough user journey. However, commercial organizations have their own, unique set of needs. Simply having a LOA and associated attributes may not be enough.

    • A bank, for instance, will need to run its own set of KYC/AML checks
    • If a person applies for a financial product, a LOA wouldn’t have the right financial data to complete the task
    • If you apply for a job online, a government identity wouldn’t have your professional certifications embedded

    I could go on; you get the idea. LOA is a useful guide to the verification status of a person, but it is only one part of the data needed to transact. Still, over 4 million UK users have a UK Verify identity. It would be a crying shame not to take advantage of these already verified identities.

    In this case, LOA can be thought of as an attribute in its own right and can be used to build friction-reduced, yet still assured, identities.

     

    In Hub we trust

    The trick is in how you use the government identity. Most of them are based on the SAML 2.0 protocol, although some will move over to OpenID Connect at some point. This means that if you want to use the government identity for your commercial service you need to consume that protocol. In addition, the ‘flavor’ of the protocol may not fit in with the expectations of your service.

    Verified IDs and other identity accounts can be used by commercial services - but it has to be part of a wider ecosystem. The days of silo’ed online identity are numbered, and silos may even be holding us back.

    Having your identity cake and eating the attributes can be afforded by going back to the idea of an extended identity ecosystem. A hub or proxy is a component of the wider identity ecosystem that provides the switch to connect up the relying party services and the other parts of the system, like the identity providers. For example, a government identity could be pulled into a banking system through a hub - the hub translating the government identity for the bank service. The bank may still need some specific attributes, but the verified government identity provides a good backbone and can even help to reduce the friction in creating an online bank account.
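    To make the hub idea concrete, here is a minimal, hypothetical sketch in Python. It does no real SAML or OpenID Connect processing; it simply shows the translation role a hub plays, mapping attributes from an already-validated government assertion onto the claim names a bank service expects, with the LOA carried along as just another attribute. All names and mappings here are invented for illustration:

```python
# Hypothetical hub translation step. Assumes the government assertion
# has already been cryptographically validated upstream.

GOV_TO_BANK_CLAIMS = {
    "gov:surname": "family_name",
    "gov:firstname": "given_name",
    "gov:dob": "birthdate",
    "gov:loa": "loa",  # the level of assurance travels as an attribute
}

def translate_assertion(gov_attributes: dict) -> dict:
    """Map government attribute names onto the bank's expected claims."""
    claims = {
        bank_name: gov_attributes[gov_name]
        for gov_name, bank_name in GOV_TO_BANK_CLAIMS.items()
        if gov_name in gov_attributes
    }
    # The bank may still run its own checks (e.g. KYC/AML) on top.
    claims["verified_by_government"] = True
    return claims

gov_assertion = {
    "gov:surname": "Smith",
    "gov:firstname": "Jo",
    "gov:dob": "1980-01-01",
    "gov:loa": "LOA2",
}
bank_claims = translate_assertion(gov_assertion)
```

    The point of the sketch is that the hub, not the bank, owns the mapping: if the government scheme changes protocol or attribute names, only the hub's translation table changes, not every relying party.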

     

    Conclusion

    Governments have been at the forefront of verified identity systems. They have designed IAM with wide-demographic users in mind, but have also had to finely balance this against anti-fraud requirements. Governments cannot afford to junk these identities and instead need to allow their reuse in a commercial setting. A direct use case may not always fit the needs of commerce, but a hard-won, high-assurance identity can help to augment the online identity and data needs of commercial services. It makes sense, then, to look at ways of facilitating the use of government IDs in commercial settings, but we need to consider LOA as an attribute in its own right rather than as a digital identity in itself.

     

    Author

    Susan Morrow

    Having worked in cybersecurity, identity, and data privacy for around 35 years, Susan has seen technology come and go; but one thing is constant – human behaviour. She works to bring technology and humans together. 

    Find her @avocoidentity

     

  • Francesca Hobson posted an article

    Savita Bailur and Hélène Smertnik: Researching Women and Identification in a Digital Age

    What do you do in the industry? What does Caribou Digital do?

    What do you do in the industry? What does Caribou Digital do?

    Savita: I’ve worked with Caribou Digital for 4 years now. As part of the research team, I lead research projects on experiences of digital life. We’ve worked on overall online use by lower-income demographics in emerging economies (what we used to call “ICT4D”!), on digital financial services, and increasingly on “identification in a digital age”. This might be a good time to say we prefer the phrase “identification in a digital age” to “digital identity” (see our colleague Jonathan’s great piece on the terminology and why).

    Hélène in Kakuma refugee camp in Kenya

     Just the other day, we realised we’ve now conducted fieldwork in over ten countries on “ID” (to use the shorthand): a few of those are Kenya, Uganda, Bangladesh, Côte d'Ivoire, India, Lebanon and Thailand, working with clients such as Omidyar, Aus Aid, World Bank, UNICEF and the Gates Foundation. In addition to conducting the research, we share our findings and advise on relevant strategy and policy both in the public and private sector. Caribou Digital has a much wider scope of work (all around supporting ethical digital economies in emerging markets), which you can see on cariboudigital.net but that’s just us on the research side!

    Hélène: I’ve been working with Caribou Digital for 2 years, conducting research and leading ground work, mainly on identification questions in countries across Africa and Asia. I don't really have a typical week, but rather a cycle of work through projects: it starts with pre-field research - working with Savita on the framing of the research and setting everything up for ground work, including finding local partners - moves to the fieldwork itself, and ends with the post-field wrap-up.

     

    How do you determine where you run the research?

    S: We start at a high level, determining the intent and scope for clients first (what is it we are trying to find out?), and then work on the details collaboratively. We think about the demographics to work with - we know qualitative research (which Hélène and I largely do) is not meant to be representative, but we do need to think about who we talk to. A typical cross-section may be “expert” interviewees, middlemen/women who are intermediaries (e.g. mobile money agents), and the “end users”, either in focus groups or more in-depth interviews. We don’t really like the term end users (we are just humans!) but I guess that’s the shorthand. Then Hélène mobilises the teams. She’s fantastic at finding the people on the ground and building up trusting relationships with people. We always try to do a pilot study so we can test and refine questions and demographics. Ethics are paramount to us, so we make sure we go through a code of conduct with partners, and consent with respondents.

    H: Savita has such a wealth of experience in the topic. She knows what the important issues are which haven't been thoroughly researched yet and sees where processes are inefficient.

     

    What do companies look for when they come to you?

    H: They are not always companies - they can also be foundations, governments, NGOs etc. Often they come from a perspective of wanting to know more about end users’ experiences, as often they haven't been looking at the issues at stake at the same granular level that we get with our qualitative research. With Unicef it's been a unique piece of work looking at youth and adolescence and the overlap between identification and identity. I was interviewing a child who was 10 years old in a refugee camp in Lebanon, who was acutely aware of what ID is and the need for documents. I've found that the more privileged the child, the less aware they are of identification documents and their use. This child told me: 'it's the document that my dad takes to work with him every day', as this is a key enabler of their rights to move around in the country. That's generally not an issue for children in more privileged communities.

    S: It's certainly a challenging and delicate subject to address when doing field research. To find out how access to identification affects people on the ground requires good rapport and a large dose of empathy, which Hélène is brilliant at. The key is asking questions without being intrusive - imagine if some random person came and started asking you questions about your identification documents - how would you feel?

     

    What areas are you working on next then?

    S:  Well, we’re just wrapping up our in-depth ID research in Bangladesh and Sri Lanka on women, work and ID, funded by Aus Aid. It was part of the Commonwealth Identity Initiative with GSMA and the World Bank. Next we’re starting research with Gates on how digital financial service principles they established (Level One) may have a different impact on women, including interoperability in mobile money - I do think this will also bring up gender and ID issues, like around KYC (whose ID is used to register a SIM?). We’ll be working in Kenya and Cote d’Ivoire.

     

    From your research you’ve identified 5 fundamental barriers of access for women. You must see great variation in the use of identification between countries, depending on the availability of information, access, ownership, societal expectations and intersectionality?

     

    Savita in Abengourou, rural Côte d'Ivoire

     

    S: Yes, these barriers vary between countries, depending on everything from infrastructure to social norms. That’s why we keep saying you can’t just go in and dump a new “digital ID” system - you have to do some user research. For example, we have enough evidence now that people get nervous around biometrics, especially women in some cultures when they are touched - can we address that issue? Or that even if women have their own ID, it may be the men in their families who keep them - how do we address this?

    H: We saw in Bangladesh that often women didn't have the time to go and access services, as they were looking after their families, or they didn't have the means to travel. Depending on the type of work they’re doing, this may or not be an issue, but it does become a barrier at some points.

     

    What will you be speaking about at ID2020? 

    S:  ID2020 reached out to me to do a keynote, which was nice as they come from a more private sector angle (e.g. the alliance includes Accenture, Microsoft, Rockefeller and many others). So it’s a good audience to take our “end user” research to. Our work brings forward end user research and adds the perspective of the human voice. We’re telling the story of Humans of ID, layering it in with the context of increasingly digital societies in emerging markets. It’s a story that all of us can relate to, as identity and identification go back to the beginnings of humanity.

     

    What brought you into this field?

    H: I still find it fascinating that we all live and breathe identity all the time. I now notice it everywhere - whether it’s a clothing collection launch on Instagram called ‘ID’ or in films like Capernaum. We all have a unique identity story that defines us and you can see that in culture globally. 

    S: All of us who have moved around the world can relate to the identification issue (I was born in India, moved to the UK, now in the USA, and at various points had all that change codified in documents and credentials). 

    There are so many stories about identification in the bureaucratic sense and how it crosses over with identity - the film Lion, for example, or the Tom Hanks film The Terminal, or the book Educated (Tara Westover), where she grows up in an American family without a birth certificate and is home-schooled, so she has no ID at all, yet finds ways to navigate that. Two powerful pieces of journalism struck me just this year. One was about an Iraqi boy who was reunited with his mother after years, thanks to different types of identification. Another was Azeteng’s story of human trafficking through West Africa. When his Guinean friend Sekou is murdered by the human smugglers, Azeteng keeps his ID. The journalist asks him why: “That is someone’s son, someone’s brother, who knows, maybe even someone’s father,” he said. “I asked myself, how will his family know that he is dead? So I am trying my best for the family to be aware.”

    We are becoming such a globalised society, and many are stateless for one reason or another - if you're a migrant worker who’s newly arrived in a new city but don’t have the right ID, you can be completely isolated from applying for jobs (look at IDNYC, by the way - a really interesting case). We often take our access to services and help for granted, but the additional challenge is that a lot of people don't have the time or ability to sort these issues out for themselves.

     

    It must be striking seeing how ID isn’t just a means to accessing essential rights, but also impacts on heritage?

    S: Absolutely, on Sri Lankan tea estates workers used to be given numbers not names when they were born. Honestly, a lot of the identification issues are legacies of colonialism and the carving up of countries - the complex case of Cote d’Ivoire for example where the Burkinabe communities have settled in Cote d’Ivoire but are not considered Ivorian. Or what is happening in Assam. Or even the appalling Windrush case - we need to face up to the fact that identification is also a question of power with terrible consequences. And we cannot make the same mistakes again and again by classifying people in a particular way.

    H: You start to see the impact of these problems through generations, for example where parents are displaced or lose their identification documents. The barriers you face in your own access to ID often then have an impact on your children’s experience. Consistently we see that identification is an essential enabler of social and economic inclusion, though sometimes it isn’t thought about as such and is taken for granted.

     

    When you present to private sector organisations, what do you find surprises people the most?

    A woman registering for a bank account in Assam, India for Caribou Digital’s Identities Research

    S: Often the private sector has a totally different angle, as their primary concern is identification for a single task or service. It’s a one-time necessity, and it’s not their job to think about the ethical issues that may arise from how people use it.

    H: We question the role of the private sector in our research - are they responsible for people accessing and using ID? Take the case of the Sri Lankan online companies we spoke to - they may facilitate online work, like people creating a webpage or managing social media for a client. We’ve seen that these companies may not check the ID of the individual who signs up to do the task, unless it requires dealing with sensitive information. Is it that company’s job to educate the people who work on the platform about the importance of ID? Similarly, is it the role of factories to make sure that their employees have ID, or is it better that they employ them because the employees need the money? Some companies mentioned they would try to create more awareness around financial inclusion - encouraging workers to get access to formal bank or mobile money accounts.

    S: So we come back to the question: why do we need ID? There are a number of conversations going on about standards and interoperability, but someone pointed out to me the other day that passports are a universal system, but birth certificates are not. You can't check a birth certificate beyond making sure the hospital is real, as really you could create a fake one at home, and then a passport is often built off that. The other issue we saw is with voter ID, which is generally issued when parties are campaigning for an election - so in rural India, a political party may happily make you 18 so you vote for them. There's very little standardisation across the board, particularly concerning initial ID.

    H: We're very conscious in the recommendations that we make to organisations that we think identification enables inclusion and growth, however, once the need for ID becomes mandatory, you may end up excluding people. 

    S: Yes, it raises the question: if you've got no ID, then what happens?

     

    You covered that experience in a number of your articles, what did you find?

    H: It really varies. In the research we just finished in Sri Lanka, there were a couple of people who didn't have ID. One used their sister’s ID until they could pay for the lawyer to get their ID sorted. The other person was a gentleman who was just getting by with cash and operating off the grid. In Bangladesh there were far more people “off the ID grid” and using other people's ID when they needed to access services.

    In Sri Lanka, Gayani (left) holds the old laminated paper ID and Rangala holds a new smart ID

    S: You see a number of systems and means of access are interdependent.  Cote d'Ivoire went through mandatory mobile SIM registration with a biometric ID (for national security) but it did impact on those who didn’t have a biometric ID. Most importantly, it meant they didn’t have access to mobile money. 

    H: Really the government were trying to push people to get the new biometric ID, and using that specific threat of cutting your mobile line is very strong. In our research, we’ve often found that one of the main drivers to getting an ID in the first place was to be able to own a SIM, so you see how strong that threat is. In addition, mobile money is dependent on your SIM so if you don't have access to a phone line then you can't use mobile money services (e.g. you can’t receive or send money, you can’t take small loans or make savings, through the platform).

    S: As a result, those who wouldn’t register for a biometric ID, would have to go through someone else to get their money, which becomes really risky. Researching these issues has made me realise that ID is the foundation for everything.

     

    Yes, and you hear of women having problems travelling with children if they haven’t changed their name.

    H: There’s a big - and not explored enough - issue with marriage, changing names, and movement after marriage, wherever you are in the world. In Kenya you choose whether you keep your father’s name or take your husband’s; this decision can have significant impact later on.

    S: Coming full circle on the women and ID issue you’re talking to us about - I do wonder if women face ID issues more than men, worsened by the lack of clarity on who you go to for help (what we talk about in our blog). Just my example again: when my husband and I were trying to get married in the UK, as he was not a British national we faced a lot of challenges - we ran around asking so many different organisations, lost time from work, spent money on travel, but were ultimately reliant on individuals helping (or not!). Those who do help are the true heroes keeping it together. In contrast, women in other countries are not always supported, which is why I keep going on about intermediaries - and that’s where the role of NGOs and of women on advisory committees is so essential. There’s still a lot of both research and policy work for us to do when we talk of women and identification in a digital age.

     

    Find them at @SavitaBailur and @HeleneSmertnik

    Read more of their work at cariboudigital.net 

     

  • Francesca Hobson posted an article

    Rising to the challenge of #GoodID

    Emma Lindley, of Women in Identity, explains why diversity in the Identity sector goes far beyond simply supporting the progress of the women who work in it.

    This week multiple organisations including ID4 Africa, Unicef, World Bank Group, Omidyar Network, Women in Identity and other communities have come together to promote the campaign of a verifiable identity for every citizen across the globe.

    The 16th of September was International Identity Day and, today, September 19th, hundreds of delegates will meet in New York at the ID2020 summit to discuss how to address the fundamental issue that over 1.1bn people have no formal means of proving who they are - that’s 1 in 7 people. But it doesn’t stop there.

    The World Bank estimates that in low income countries over 45% of women lack a foundational ID, which leads to a raft of socio-cultural issues that have far reaching implications for the inclusion of women and girls in our social and economic systems. How do we support women to gain foundational identity, to then empower them to build micro-businesses across places like Kenya or Bangladesh?

    In their excellent blog, Identity is a human right … a woman’s right …, Dr. Savita Bailur (Caribou Digital), Devina Srivastava (ID2020) and Hélène Smertnik (Caribou Digital) highlight that women specifically suffer from identity poverty. They argue that we need to focus on ways of creating #GoodID that consider socio-cultural issues, data privacy concerns and consumer access to relevant technologies, regardless of nationality, gender or financial status. More than ever, we need collaboration and community in those markets, made up of governments, regulators and companies, as well as the technologists who will produce the identity systems of the future.

     

    Is the Identity industry ready for #GoodID?

    New identity “solutions” emerge every day. Yet despite being developed in places like London and Silicon Valley, which have high levels of diversity, we still see many start-ups and established companies with little or no diversity in the teams building these solutions: they're all people from the same country, the same socio-economic background, the same culture and the same gender. When it comes to developing solutions that can be truly viewed as global, do we really understand the problem we are trying to solve? And do we have the right mix of people involved in helping us understand?

    In a study on gender diversity, the UNESCO Institute for Statistics examined the gender gap in science and found that, worldwide, only 28% of science professionals are women. In Sub-Saharan Africa, only 30% of women are exploring careers in STEM and in Silicon Valley, 76% of technical jobs are held by men. (Source: Forbes). But it’s not just gender diversity that is the issue. In the UK just 8.5% of senior leaders in technology are from a minority background.

    The identity industry is no different. We do not have enough diversity, particularly at the coal face of product development. And this leads to the introduction of bias - a direct challenge to the aspirations of #GoodID.

    We all apply natural biases through our daily lives - in our hiring and buying decisions, reviews and even in casual interactions. However well-intentioned we may be, our unconscious biases perpetuate stereotypes.

    If everyone on your team looks the same and is from a similar background, you may reach consensus quicker on the main priorities for the company. But are those decisions the right ones? If your identity solution is designed to help recognise users that might struggle to prove their identity, how many of your team understand what it means to have no financial footprint or to live on benefits in social housing?

    You may have a healthy representation of women across the organisation. But if they all tend to hail from the same cultural or educational background as male colleagues in similar roles, you’re not going to get much diversity of thinking when it comes to problem solving.

     

    The effects of bias

    We are now starting to see the effects of bias within technology. And many of the solutions under analysis sit within or adjacent to the identity industry.

    A study by MIT researcher Joy Buolamwini found that some facial-recognition systems produced an error rate of 0.8% for light-skinned men. The error rate increased for white female faces and ballooned to 34.7% for dark-skinned women.

    According to the study, researchers at a major U.S. technology company claimed an accuracy rate of more than 97% for a facial-recognition system they’d designed. But the data set used to assess its performance was 77% male and 83% white.

    If the systems that run our banks, public services, travel companies and retailers are designed and tested with this level of obvious bias, the impact on all economies will be huge.  In a world focused on driving efficiencies through the adoption of digital services, we need to ensure we are building for greater inclusion. It is an area that digital identity needs to focus on.

     

    Can diversity help?

    A study by the Boston Consulting Group (BCG) found that diversity increases the bottom line for companies. In both developing and developed economies, companies with above-average diversity on their leadership teams report a greater payoff from innovation and higher EBIT margins. The study found that "increasing the diversity of leadership teams leads to more and better innovation and improved financial performance." Companies that have more diverse management teams also have 19% higher revenue due to innovation.

    [Chart: McKinsey, Delivering Through Diversity]

     

    Additionally, research by McKinsey, Delivering Through Diversity, reaffirms the global relevance of the link between diversity (a greater proportion of women and a more mixed ethnic and cultural composition in the leadership of large companies) and company financial performance. McKinsey measured not only profitability but also longer-term value creation, exploring diversity at different levels of the organisation and considering a broader understanding of diversity beyond gender and ethnicity.

    These findings are hugely significant for tech companies, particularly given the number of start-ups where innovation is the key to growth. They show that diversity is not just an aspirational metric; it is an integral part of any successful public, private or not-for-profit organisation.

     

    What does this mean for the Identity industry?

    When we think about this in the context of identity solutions, the need for diversity is even more fundamental. We deal in humanity and humanity is diverse; so the solutions we develop need to be able to recognise and embrace this diversity.

    As an industry, we are developing standards, technologies and solutions aimed at confirming people’s identity. So we also need to consider how we include those we aim to identify within our teams: in the design, testing and deployment of products and services.

    Digital identity solutions built for everyone should be built by everyone. 

    The bottom line is: we’re failing as a community to build the diverse, inclusive teams that are best equipped to tackle the world’s identity challenges. And, as a result, we run the very real risk of failing to provide the products and services the world's population actually needs. So, join the campaign for #GoodID by sharing this blog and ensuring your voice is included.

     

    Author

    Emma Lindley

    Emma Lindley is co-founder of Women in Identity, a not-for-profit organisation focused on developing talent and diversity in the identity industry, and executive advisor on digital identity for Truststamp, a provider of privacy-protecting technology for the identity industry. 

    Over a career of 16 years in identity, Emma has held various roles, most recently as Head of Identity and Risk at Visa. She has previously held board-level roles at Confyrm, Innovate Identity and The Open Identity Exchange, and was instrumental in the commercial development of GB Group’s position in the identity market back in 2003. 

    She has been recognised in the Innovate Finance Powerlist for Women 2016 and 2017, the KNOW Identity Top 100 Leaders in Identity in 2017, 2018 and 2019, and the 100 Women in Tech Awards 2019, and was voted CEO of the Year at the KNOW Identity Awards. She has an MBA from Manchester Business School, where she completed her thesis on competitive strategy in the identity market.

     September 19, 2019
  • Francesca Hobson posted an article

    Have We Reached Peak Privacy?

    A celebration of data privacy month and how privacy needs to remain at the forefront of IAM.


    Data privacy has made big headlines in the last 12 months. Wherever we look there is an article about a data breach, a data protection regulation update, or a colleague talking about data privacy. It may all have gotten too much, and we have to ask ourselves: have we reached “peak privacy”?

    In the identity space, data privacy was never really a consideration until we entered the realms of the consumer. In enterprise IAM, although we were, in fact, using the personal data of our employees, privacy was rarely, if ever, mentioned. When the enterprise perimeter earthquake happened, and we moved our IAM services to cover consumers and citizens, data privacy started to enter the industry parlance.

     

     

    Why it is important to not become jaded about privacy

    Data breaches and privacy violations can almost be thought of as a kind of ‘digital trauma’. When I heard about the Collection #1 data breach, which exposed 773 million data records, I just thought, “Oh no, not again”. I searched HaveIBeenPwned and, sure enough, my email address showed I was part of the data breach. But I didn’t feel worried, as I should have, because I have become desensitized.

    Desensitization is a common issue amongst people who experience trauma. So, for example, teenagers who are subjected to real-life violence become less affected by acts of violence than their counterparts. If you experience something over and over you do get used to it happening. That does not, however, mean that it should be tolerated.

    As I write, there will be continued breaches that affect our personal data. GDPR helps to focus the minds of organization leaders, but it does not stop cybercriminals trying to get at our personal data. Since the GDPR came into effect, law firm DLA Piper has recorded 59,000 personal data breaches across Europe.

    As custodians and processors of personal data, we can’t just turn a blind eye to privacy. It hurts our businesses as much as it hurts the customer who forgoes privacy. A report by Privitar said that 90 percent of consumers are concerned that technological advancements are a risk to data privacy.

     

     

    Tech and privacy - A good double act in IAM?

    IAM platforms have needed to innovate to keep up with the tidal wave of personal data and to improve customer experience. Data is an incredibly useful commodity that can be used to do online jobs, including making onboarding processes digital. Privacy, as seen through the lens of the IAM technology stack, should be intrinsic across a platform. But what does that mean in practical terms? Can we have our privacy cake and eat it?

     

    Privacy peak 1: Great UI/UX can facilitate good data privacy

    The touchpoint between the identity management backend and the user is where the data privacy choice begins. It is also where your relationship with the customer begins. Privacy is an intrinsic part of trust, which is a relationship-building tool. Your UX should guide your customers down a pathway that distills privacy for them. The UI should reflect the data processing you do in a simple way. If you do this, you start on a pathway to trust by being privacy respectful.

     

    Privacy peak 2: Deliver what you promise

    If you tell users you won’t use their data for X or Y, then don’t. If you tell users you will use their data to give them a better service, do so. This type of basic thinking has to be part of the design process at the beginning of building a service. If you have to retro-fit it, it is harder to do, but not impossible. Using identity API-based service architecture can help to facilitate the addition of missing features that enhance privacy.

     

    Privacy peak 3: Consent is fluid

    Consent management comes in many forms. You should have already taken consent when you first touched the customer’s data. However, consent is fluid. People change their minds. Build consent for data privacy control into the system, end to end. This can be included in transaction consent - OAuth 2.0 and UMA are example protocols for achieving this. Consent management can also be included in the user’s account manager. Consumer IAM vendors are now beginning to add in the ability to manage consents across services. Even the blockchain can add value here - used as a layer for consent transaction receipt and audit, it offers an immutable way to show that you have taken the consent requirements of the GDPR seriously.
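    To make the “consent is fluid” point concrete, here is a minimal Python sketch of a consent store that records grants and revocations and hash-chains each change into a tamper-evident receipt. It is illustrative only: the class and field names are invented for this example, and a production system would use real OAuth 2.0 / UMA flows and durable storage rather than this in-memory toy.

    ```python
    import hashlib
    import json
    import time

    class ConsentStore:
        """Tracks the current consent state per user and purpose, plus
        hash-chained receipts so changes of mind remain auditable."""

        def __init__(self):
            self._state = {}      # (user, purpose) -> bool (latest decision)
            self._receipts = []   # append-only, hash-chained audit log

        def _record(self, user, purpose, granted):
            self._state[(user, purpose)] = granted
            prev = self._receipts[-1]["hash"] if self._receipts else ""
            entry = {"user": user, "purpose": purpose,
                     "granted": granted, "ts": time.time(), "prev": prev}
            entry["hash"] = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            self._receipts.append(entry)

        def grant(self, user, purpose):
            self._record(user, purpose, True)

        def revoke(self, user, purpose):
            # Consent is fluid: people change their minds.
            self._record(user, purpose, False)

        def is_allowed(self, user, purpose):
            return self._state.get((user, purpose), False)

    store = ConsentStore()
    store.grant("alice", "marketing")
    store.revoke("alice", "marketing")
    print(store.is_allowed("alice", "marketing"))  # False: the latest decision wins
    ```

    Because the latest decision always wins, a change of mind is just another receipt in the chain rather than an overwrite, which is the property the GDPR-style audit use case needs.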

     

    Privacy peak 4: Technology is the friend of privacy

    Privacy is about individual choice, but data privacy is augmented and enforced using technology solutions. Always use the best possible security solutions to enforce the privacy choices of your customers. Make these as seamless as possible. This can be a challenge in certain customer-facing areas, like authentication. But the world of authentication is starting to offer solutions to the conundrum of usability vs. security. Other areas, like data in transit and at rest should be secured, by design, in any system that moves personal data, in all of its forms, around.

     

     

    Let’s make data privacy month data privacy by default

    Data Privacy Day has now become Data Privacy Month which runs throughout February. As custodians of people's data, we should never, ever, be desensitized or complacent about data privacy. Data privacy holds the key to the relationship we need to build between our service and our customer. Privacy is not about hiding data, it is about using it with due respect to the person that data represents. When you next set out an RFP for an identity service, make sure you add in a requirement that asks for privacy by default.

     

     

    Author

    Susan Morrow

    Having worked in cybersecurity, identity, and data privacy for around 35 years, Susan has seen technology come and go; but one thing is constant – human behaviour. She works to bring technology and humans together. 

    Find her @avocoidentity

  • Francesca Hobson posted an article

    Three Questions Around Self Sovereign Identity

    If you work in the area of identity you will have noticed a lot of talk about Self Sovereign Identity.

    Is Self Sovereign Identity a panacea or an also ran?

    If you work in the area of identity you will have noticed a lot of talk about Self Sovereign Identity (SSI). As a concept, it pursues the goal of placing the user at the centre of digital identity management and control. User-centric digital identity is not a new idea. I first came across it back in 2008 when I read Kim Cameron’s 7 Laws of Identity – the piece itself going back to 2005; law 1 states that “No one is as pivotal to the success of the identity metasystem as the individual who uses it.”

    SSI is user-centric, but you don’t need to have a Self Sovereign ID system for it to be user-centric.

    On paper, I like the idea of a Self Sovereign Identity. After all, digital identity is about what you do with the information that makes up who you are – surely that should be under your control? Yet still, I have lingering doubts that make me question the ability of SSI to fulfil my identity needs.

     

    A really quick bit on what SSI is

    This isn’t a post about what Self Sovereign Identity is; there are plenty of articles on that topic. But I will give you a very quick and dirty overview of what the technology is about.

    SSI is fundamentally reliant on blockchain to register the attributes of a person’s identity. What does that mean? Your identity data (attributes or claims) – the stuff that determines your digital you, or that this thing is that thing – is registered to a block on a blockchain. The blockchain is a distributed ledger: it has no central authority controlling it; it is decentralized. The resulting decentralized claims then form part of a person’s identifying data that they can share, under their control, with a requesting party – like a bank or a government service.

    The substance of the SSI is based on the idea of verifiable claims. If you follow my blog you’ll know that verification is a thorny issue in the digital identity space. It is certainly not straightforward and can do with a sprinkle of ‘user friendly’ if you ask me. But organizations like Sovrin, who are offering a backbone for SSI, are built upon the notion of verifiable claims being managed through a distributed ledger technology backbone specifically attuned to digital identity.
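    The verifiable-claims mechanics can be sketched in a few lines. The snippet below is a deliberately simplified illustration, not how Sovrin or any real SSI stack works: an HMAC with a demo key stands in for the public-key signatures and DIDs a genuine implementation would use, and all names are hypothetical. The point it shows is that the verifier checks the issuer’s attestation without contacting the issuer at presentation time.

    ```python
    import hashlib
    import hmac
    import json

    # Hypothetical demo key; a real issuer would sign with a private key
    # (e.g. Ed25519) and publish the public key via a DID document.
    ISSUER_KEY = b"issuer-secret-demo-key"

    def issue_claim(subject, attribute, value):
        """Issuer attests to an attribute and attaches a proof."""
        claim = {"subject": subject, "attribute": attribute, "value": value,
                 "issuer": "did:example:issuer"}
        payload = json.dumps(claim, sort_keys=True).encode()
        claim["proof"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
        return claim

    def verify_claim(claim):
        """Verifier recomputes the proof over the presented data."""
        presented = {k: v for k, v in claim.items() if k != "proof"}
        payload = json.dumps(presented, sort_keys=True).encode()
        expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, claim["proof"])

    claim = issue_claim("did:example:alice", "over_18", True)
    print(verify_claim(claim))   # True
    claim["value"] = False       # tampering invalidates the proof
    print(verify_claim(claim))   # False
    ```

    Note that the holder, not the issuer, presents the claim; the proof travels with the data, which is what makes the trust model decentralized.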

     

    Verifiable claims

    I just want to talk a little about the notion of a verifiable claim. For a piece of data on an individual to carry any weight, it has to be true, or at least have a probability of truth that satisfies the service provider. Claims that are checked (verified) by a trusted third party are deemed to be verifiable. The web standards custodian, the W3C, has looked at the issues around standards for verifiable claims. The research findings of the group come down heavily on the side of user-centric and privacy-enhancing approaches. There is a very strong value statement driving their work: “No User-Centric, Privacy-Enhancing Ecosystem Exists for Verifiable Claims”.

    The research concludes several things including:

    “Trust is decentralized. Consumers of verifiable claims decide which issuers to trust.”

    And

    “Users may share verifiable claims without revealing the intended recipient to the software agent they use to store the claims.”

    But, in the context of this article, do you need a decentralized identity system to have decentralized verifiable claims? Are the two inseparable?

     

    The questions on SSI I need to have answered

     

    Commercial use cases?

    We live in a world that is built upon certain commercial structures. These structures are pretty much universally driven by money. I want to understand how we can fit an identity framework that is based on presenting verifiable claims to a service. Who will pay for the verification? If one organization pays, will they be happy if that data is then shared with a competitor to build up a trusted relationship with them?

    Are we back to the same issues we had with federated identity? As Phillip Windley said back in 2006: “Not surprisingly, the hard part isn’t usually the technology. Rather, the hard part is governing the processes and business relationships to ensure that the federation is reliable, secure, and affords appropriate privacy protections.”

    Will Self Sovereign systems run into similar commercial issues – the business relationships – but this time on a pay-per-use basis?

    An interesting look at how this could be solved comes from the Web of Trust working group and their work-in-progress treatise, “How SSI Will Survive Capitalism” – something I will be keeping a close eye on. This is my main concern from their SWOT analysis: “Lack of upfront financing due to lack of platform (chicken & egg problem)”.

    And a last point before I move on. This was brought up by a government official in the UK – the question of data ownership: is a government-verified identity document, like a passport, actually your data to own?

     

    This governance thing?

    I’m also not sure about the whole idea of SSI being a magical panacea for refugees. There is a nagging feeling in the back of my head around the ‘stewards’ model. Self Sovereign frameworks like Sovrin use a stewards model to maintain trust. The stewards are trusted third parties – organizations that operate the nodes in the distributed ledger. Sovrin currently has over 50 stewards that provide human and computing power.

    I can see the positive aspect of this. It extends the notion of decentralization to another layer. Good. I do, however, wonder if the steward will become a weak point in the system. Will cybercriminals target stewards to gain control of the nodes?

     

    Privacy, really?

    The privacy aspects of decentralized SSI are part of the charm of the system. Sovrin, for example, uses Zero Knowledge Proofs as the underlying mechanism for minimal disclosure of data: ‘Are you over 18?’ Only a yes/no is revealed. Of course, SSI isn’t the only system that offers privacy of attributes. There are several ways of achieving the same thing using traditional identity services. One such mechanism was developed by Sid Sidner back in 2006 and named “Variable Claims”. I’ve seen it applied in a traditional identity service – it works in a similar manner by only revealing certain data, i.e., a yes/no or a partial reveal of attributes.
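    The yes/no idea can be illustrated without any cryptography at all, because the point of minimal disclosure is simply that the relying party receives only the answer to a predicate, never the underlying attribute. Here is a hypothetical sketch (names invented for this example; a real system would wrap the answer in a zero-knowledge proof so the verifier need not trust the wallet):

    ```python
    from datetime import date

    def is_over(dob: date, years: int, today: date) -> bool:
        """Answer an age predicate without exposing the birth date itself.
        (Naive year arithmetic; a leap-day birthday would need handling.)"""
        cutoff = date(today.year - years, today.month, today.day)
        return dob <= cutoff

    # The wallet holds the attribute; the service sees only the boolean.
    wallet_dob = date(2000, 5, 17)  # hypothetical stored attribute
    print(is_over(wallet_dob, 18, date(2019, 6, 1)))  # True
    ```

    The service learns nothing beyond “over 18: yes”, which is exactly the data-minimisation property the ZKP machinery enforces cryptographically.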

    The problem is this. It is all well and good having minimal disclosure, but what if you want to buy a pair of shoes online? You have to allow the online vendor to know your address to send the shoes to. They will likely also want your name and other demographic data, if they can get consent, for marketing purposes. Your data is then outside the SSI and held in a more traditional manner. And… it is now outside of your control too.

     

    “Options make for a healthy ecosystem” – Tim Bouma

    I remember looking at Pretty Good Privacy (PGP) way back. It offered the hope of secure email communications based on the idea of a “web of trust”. PGP always seemed very ‘techie’ to me; you virtually needed a PhD in computer science to use it. Usability, rather than methodology, has probably killed PGP – even Phil Zimmermann, who invented PGP, doesn’t use it anymore. I get the same ‘techie’ feel of PGP within the SSI movement. I know that folks in SSI are working hard to get neat apps together to help with usability, but still, there is an air of PGP about it. I can’t shake it, though I want to. I think it comes down to this.

    We need to understand the true nature of why we use digital identity, the real use cases, the pitfalls of such use cases, as much as we need the technology to make them happen.

    I do not, however, want to write a technology off, just because I have a few unanswered questions. I can see, for example, that blockchain has some use cases that fit well and as an additional layer in a tech stack it has enormous potential.

    Tim Bouma, Senior Policy Analyst for Identity Management at Treasury Board Secretariat of the Government of Canada, recently summed up the SSI debate perfectly, and I agree wholeheartedly with his very pragmatic take. Tim explores technology with open eyes and the hard head of experience. He said in a recent tweet and Medium post on SSI:

    “The extreme (decentralized) case is no service provider, but likely it will be a mix of centralized, federated and decentralized options. That’s ok because options make for a healthy ecosystem.”

    SSI is on the extreme end of the digital identity spectrum. Its focus is putting control back in the hands of you, the user. But SSI is not the only way to skin a cat. My own view is that a mix of technologies will, at least for the foreseeable future, be needed to accommodate the vast array of needs across the identity ecosystem. I can see use cases for SSI. But will it become the overarching way that humans resolve themselves in a digital realm? I don’t know, I don’t have a crystal ball, but my gut says not, unless there are compelling answers to the three questions I have listed above. Maybe the SSI community can help me to understand?

     

    Author

    Susan Morrow

    Having worked in cybersecurity, identity, and data privacy for around 35 years, Susan has seen technology come and go; but one thing is constant – human behaviour. She works to bring technology and humans together. 

    Find her @avocoidentity

     

  • Francesca Hobson posted an article

    Can the re-use of identity data be a silver bullet for industry?

    Can a “make do and mend” ethos work to make digital identity universal?

    Can a “make do and mend” ethos work to make digital identity universal?

    The number of conferences that focus on digital identity has increased several-fold since I first became involved in the space. Yet at a recent conference, a colleague heard someone say “…here we are, 20 years on, and we are still no further forward in creating a digital identity usable by all”.

    The elusive nature of the identity ‘silver-bullet’ continues to haunt the industry. Identity specialists the world over are talking at conferences, in meetings, on social media, trying to find a solution. They are pulling together ideas and thoughts on how to make identity accessible for all and usable across a complicated ecosystem of stakeholders.

    But the problem continues, why is digital identity still a hornet’s nest of interoperability issues and disparate systems?

     

    Identity landscape – what’s going on

    The current identity landscape can be described as ‘fluid’. There are many approaches across many different use cases; it really is a mixed bag of solutions. If an organization puts out a tender for an identity solution, they best make sure that their requirements list reflects closely what they want, as they will get a rainbow of options in response.

    In a very general way, you can break down the identity landscape like this:

    Citizen identity: There are a lot of governments either already playing in the citizen ID space or preparing to. In the UK, for example, the Verify scheme is now about six years old and has over 4 million users who use it with about 19 government services. But there it stays; it has yet to find any commercial re-use.

    ID mobile apps – apps like Yoti offer a mobile device-based identity that can be used with participants in their ecosystem. Yoti had over 3.7 million users as of May 2019 and hundreds of relying parties consuming the Yoti ID. There are quite a few ID apps appearing, including Verified.me from SecureKey. Another worth mentioning, though in its early stages, is a collaboration between Mastercard and Samsung to deliver a “…better way for people to conveniently and securely verify their digital identity on the mobile devices”. Apps tend to have specific use cases and stay in a confined ecosystem, but they have great potential for re-use.

    Social and federated accounts – Facebook, Google, Amazon, and similar are not really thought of as ‘identities’, but often contain some or all of the data needed when creating a digital identity elsewhere. These accounts have massive potential for re-use across a wider ecosystem.

    CIAM platforms – there are a number of players in this area, such as Okta, Ping, Janrain, and ForgeRock. They offer platforms that cover a remit of customer marketing and analytics alongside more traditional IAM requirements. They are usually based on standard protocols, so could work in a wider ecosystem.

    Identity services and APIs – this can cover a lot of ground, but one of the more promising areas being offered is in the connectivity of all of the players in an identity landscape. Companies like Avoco Secure and SecureKey offer technology that can link ecosystem components together to build the interoperability layer.

    Self-Sovereign Identity (SSI) – coming up on the inside is SSI. This decentralized approach to identity is all about putting identity back in the hands of the user. However, questions around the commercial use of SSI are still left unanswered.

     

    How can we solve a problem like identity?

    As you can see, the identity landscape is complex and there are a lot of moving parts. The main hurdle to creating a Shangri-La for the identity space is the very disparate, disconnected, non-interoperable playground that we see today.

    We have created a situation where a digital identity, which is a reflection of an individual, is being split into thousands of fragments; each disconnected, often siloed and placed into closed systems.

    The result is thousands of repeated data snippets. This is one of the reasons why personal data theft is so easy and so rife.

    This was recently summed up by Alastair Campbell of HSBC bank at an OIX event in London, where he said:

    “Creating a vibrant marketplace together rather than a ‘winner-takes-all’ – that’s what we should all be interested in”

    We have to move from this fractured place to a culture of re-use.

    The old “make do and mend” ethos needs to find its digital counterpart in the world of digital identity. Here are some ideas on making this work:

    Federation and re-use: The identity world is made up of silos of offerings across multiple vendors. But digital identity should not work like this. Digital identity really is an ecosystem. Any identity should be transferable across any relying party that needs it. Creating a ‘closed shop’ in digital identity is doomed to fail. Ecosystems should be built to allow existing identities and identity data to be drawn in and re-used. Apps like Yoti and digi.me, platforms including Ping, and citizen IDs such as Verify and eIDAS can be plugged in and offered up to whoever needs the data.

    Uplift: The ecosystem needs to be able to accommodate new data that adds weight to the re-used IDs if needed.

    Events: Often it isn’t about who you are but what it is you’re trying to do. Identity allows us to do jobs online and these can be event-driven.

    Frameworks and rules: The legal basis for allowing re-use of existing identity needs to be looked at. This should focus on the interoperability layer. There are bound to be cases where competitors need to block the use of certain identity apps or platforms. This does not negate the general use of reusable identities within a wider ecosystem. But it does allow for micro-ecosystems to be created.

    The identity ecosystem should be about creating flexible IDs around achievable business models that offer value to the user and the service consuming the ID. After all, it isn’t very often you want an actual ID. Usually, you just need the answer to a question, e.g., “are you over 18, so you can buy this age-restricted product?”

     

    Finding a Cure for Identity

    The reuse of existing identity accounts may well hold the key to solving the issue of a disparate identity world. Allowing all to play, will act to open up this closed system. Government identity initiatives will be able to find a commercial use case and even an ROI. What’s key is collaboration via the likes of industry bodies such as OIX and Kantara.

    Organizations like Kantara do sterling work on creating standards in the identity space. But this work needs to also be augmented with a holistic view of how to pull identity out of the silos and into the wider world.

    A final word from Analyst Martin Kuppinger at the recent European Identity & Cloud Conference 2019 sums the situation up:

    “Aim to connect to identities – not manage them yourself, orchestrate services and don’t invent what already exists, segregate data from applications so that it can be used and is not locked”.

     

    Originally posted on www.csoonline.com

     

    Author

    Susan Morrow

    Having worked in cybersecurity, identity, and data privacy for around 35 years, Susan has seen technology come and go; but one thing is constant – human behaviour. She works to bring technology and humans together. 

    Find her @avocoidentity

  • Francesca Hobson posted an article

    Should We Worry About the IoT Being Used as a Weapon of Mass Control?

    A WHO report shows that 35% of women, worldwide, have experienced violence, and that 39% of murdered women were killed by a partner.

    A WHO report shows that 35% of women, worldwide, have experienced violence, and that 39% of murdered women were killed by their partner. These figures are frightening, and as a woman who has experienced this type of violence, I know how easily it can happen. However, what I want to explore here is the role that technology – specifically, the connected systems grouped under the banner of the “Internet of Things” (IoT) – plays in domestic violence and abuse.

    Before I begin, we need to look at what the word ‘violence’ means. Firstly, violence can be both physical and non-physical. It is an aggressive act, performed using methods that can be psychological, sexual, emotional or economic. I gave figures above showing violence against women, but of course, men experience violence too – including domestic. However, technology may add a particular slant to violent and controlling acts – offering a reach to the abuser that they may otherwise not have.

     

    The IoT Mantra Should Be Do No Harm

    What if you had a one-night stand? While at your place, you let this person use your Wi-Fi. You have, quite rightly, password-protected your network. So, you hand over your password, which gives them access to your entire home network. They get up at 6 am and walk out the door, still holding your password. You forget to change it – easily done. What’s the worst that can happen? Well, if this person turns out to be malicious and wants to hurt you, they can potentially use this access to perform a man-in-the-middle attack, stealing your personal data; insert malware; or download pornographic images via your IP address. They could also potentially hijack your IoT devices. All that is needed is a readily available Wi-Fi Pineapple and an unsecured IoT device.

    This does not have to be the case, of course, but it comes down to design, education, and the policy makers – more on this later.

    According to research by Metova, around 90 percent of U.S. consumers now have a smart home device. But who is controlling those devices?

    Researcher and Lecturer at University College London (UCL) Dr Leonie Tanczer has, along with her team members Dr Simon Parkin, Dr Trupti Patel, Professor George Danezis and Isabel Lopez, looked at how the IoT can, and likely will, contribute to gender-based domestic violence. Their interdisciplinary project is named “Gender and IoT” (#GIoT) and is part of the PETRAS IoT Research Hub.

    I spoke with Leonie, who has extensive knowledge in this area. She pointed me to some of their GIoT resources, including a tech abuse guide, a dedicated resource list for support services and survivors, and a new report that features some of the research group’s pressing findings.

    Leonie also shared some relevant insights, some of which I would like to highlight below.

     

    Is the IoT an issue for domestic violence now?

    Leonie and her team have been looking for the past year at how Internet-enabled devices could be used as a controlling mechanism within a domestic abuse context. The team has, so far, concluded that the threat is imminent but not yet fully expressed. As of now, spyware and other features on laptops and phones are primarily what feature in tech abuse cases. However, with the expected expansion of IoT systems, as well as their often intrusive data collection and sharing features, IoT-facilitated abuse cases are a question of when, not if.

    What unique issues exist between IoT and domestic violence?

    One may try to regulate the security of an IoT device by putting some of the best and most robust protection measures in place. However, in a domestic abuse situation, coercion can override any of these processes. If one partner is responsible for the purchase and maintenance of these devices, and has full control of, as well as knowledge about, their functionality, the power imbalance can result in one person being able to monitor and restrain the other. The team’s policy leaflet emphasises this dynamic quite vividly.

    Can you map the IoT device use pattern with an abusive relationship?

    When it comes to the safety and security of a victim and survivor – in particular, in regards to tech – we have to consider the three phases of abuse. These phases interplay with the security practices a person has to adopt.

    For instance, while still in a co-habiting situation, the abusive partner may use devices in the home to spy on the other. Their online usage can be monitored, conversations recorded or filmed. Once a woman has extracted herself from this situation, she may seek help at a local shelter. In this instance, many are advised to simply stop using technology to prevent a perpetrator from contacting or locating them. However, the third phase, in which a person has to effectively “reset” their life, is equally central, but becomes extremely hard in our interdependent society. During this period, women will have to change their passwords, regain control over their accounts, and try their best to identify any devices and platforms – including IoT systems – that they may have shared with the perpetrator. This can become really tricky, and I don’t think that we have properly thought about how to ease this process for them.

     

    When Good IoT Devices Become Bad

    Leonie also stressed in our conversation that industry stakeholders must take heed of the research being performed in this area and consider how their systems might lend themselves to forms of abuse. In the identity space, in which I work, there is a saying: “The Internet was designed without an identity layer”. This has led to a kind of retro-fix for the Internet to try and overlay identity – it is complicated and messy and has yet to be fully resolved; retroactive fixing of missing functionality is not the best way to design anything.

    I agree with Leonie: we need to ensure that the IoT is designed so that misuse is minimized at the technology level. However, Leonie also points out that social problems will not be solved by technical means alone. Statutory services such as law enforcement, policymakers, educational establishments, and women’s organisations and refuges need to be incorporated into the design of these systems and made aware of this looming risk. Proactivity is therefore needed and will help to ensure that we don’t repeat the technical mistakes we have made in the past.

     

    Some Anti-Abuse Advice on Design and the IoT

    During our discussion, Leonie gave me some ideas about design issues her team has found during lab sessions testing IoT devices. These include:

    1. Remove any unintentional bias in the service – this can be helped by including multidisciplinary people, from diverse backgrounds, on design teams.

    2. Enable relevant prompts – send out alerts on who has connected to what. The GIoT team suggests that prompts could be used to inform users about essential details, including which devices are connected, or are trying to connect, to their IoT systems, and which devices track their location.

    3. Offer more transparency and support – offer clear and unambiguous manuals, prepare policies on what helpline staff can do to advise victims should they ask for guidance, and allow users to switch the features they need on and off.

    Overall, technology-enabled abuse is a human-centric issue and technology alone cannot fix it. We need to work towards a socio-technical solution to the problem.

     

    Conclusion

    IoT devices are becoming an extension of our daily lives. Unfortunately, it is inevitable they will become weaponized by people who wish to do us harm – and this will include abuse within a domestic context.

    Design is a crucial first step in helping to minimize the use of Internet-enabled devices as weapons of control. Privacy issues, like the recent Apple iPhone FaceTime bug, demonstrate this well. The bug was a design flaw: under certain circumstances, it allowed a caller to listen to people in the vicinity of the phone without the call being picked up. It has been downplayed because those circumstances were not common. However, this does not negate the point that testing the design of user interfaces and the UX needs to be holistic and place privacy as a key prerequisite for signing off a UX.

    Designing IoT devices should be done by a multi-disciplinary team. We must remove unintentional bias and use different viewpoints and experiences. Only by combining the social with the technological can we hope to ensure that our technology is not misused for nefarious reasons.

    Thank you to Dr. Leonie Tanczer for help in creating and editing this article.

    Originally posted on www.iotforall.com.

     

    Author

    Susan Morrow

    Having worked in cybersecurity, identity, and data privacy for around 35 years, Susan has seen technology come and go; but one thing is constant – human behaviour. She works to bring technology and humans together. 

    Find her @avocoidentity

  • Francesca Hobson posted an article

    SSI? What we really need is full data portability

    By Emily Fry and Elizabeth M. Renieris

    Despite numerous predictions by industry analysts that “self-sovereign identity” (or “SSI”) would be a key trend by now, in reality there is still limited adoption outside of research labs and proofs of concept. As two industry experts in the SSI space, we are here to argue that it’s time we stop talking about “self-sovereign identity” if we want to make any meaningful changes to identity management for the benefit of individuals. Not only is the term itself misleading, and often polarizing, but the zealous attachment to “self-sovereign identity” overshadows the core innovation of the future of identity management—full data portability.

    While definitions of the term vary, the basic idea behind “self-sovereign identity” is to enable a model of identity management that puts individuals at the center of their identity-related transactions, allowing them to manage a host of identifiers and personal information without relying upon any traditional kind of centralized authority. One emerging school of SSI relies upon the combination of distributed ledger technology (often a blockchain) and the use of decentralized identifiers, as well as other technical standards under development by the World Wide Web Consortium (W3C), and is sometimes also known as “decentralized identity.”

    SSI advocates are ardent and impassioned, often using hyperbolic language to characterize self-sovereign identity as a revolution, the foundation of the next Web, a panacea for privacy, and even the solution to child labor, emphasizing specific technologies like blockchain and ideologies like decentralization. They sing from the same hymn sheet of SSI Principles by Christopher Allen. In the past we have cited these too, but in the future we question whether it is wise to do so. With the term at peak popularity, and large corporates, governments, and other key players exploring what it means, it is time we bring a set of realistic expectations to the table and focus on what will really change the individual’s experience for the better.

    Governments and other stakeholders exploring SSI are less interested in ideology and more interested in improving the user experience for their customers and constituents. They want to increase access to services, improve service delivery, and safely digitalize interactions, while mitigating privacy and data security-related risks. The key to these objectives lies in full data portability—this means granting individuals robust legal rights, as well as straightforward technical tools and capabilities, to manage and use identity credentials and other personal data with more trust, confidence and ease, so that they can share medical records with a new doctor, port professional credentials to a new employer, and the like.

    SSI is focused on the technical tools and capabilities for data portability but offers little by way of legal architecture. Despite bold claims about the legal implications of SSI, often by technologists and other non-lawyer advocates, SSI introduces new technology but has no impact on legal rights or privileges. For example, while it might enable technical portability of credentials (at least theoretically, the market will determine who will accept them), it has no impact on rights to portability under new and emerging regulations like the GDPR or the CCPA. SSI does not address the challenging questions of risk mitigation, liability allocation, or enforcement or redress mechanisms—all things requiring new or modified legal solutions.

    One example of an emerging legal solution to solve for the non-technical dimensions of full data portability is the notion of a trust framework. A trust framework necessarily lifts cryptographic and other technical trust mechanisms into a coherent set of legal, business, technical (and we argue, ethical) rules. Its purpose can be boiled down quite simply—to ensure that technical tools are developed and deployed in a manner that does in fact support the coherent individual end-user experience and legal protections we all want.

    The assumption that regulations will remain relevant and in place for long periods of time has been upended. Trust frameworks must evolve and adapt in order to foster innovation. But don’t let that mislead you. Trust frameworks can and should have teeth, placing appropriate legal obligations on entities to adhere to particular standards or rules, with repercussions for breach and actual mechanisms for enforcement. This means they must inevitably address questions of liability.

    To date, digital intermediaries have famously resisted governance, claiming that because they control the tools, they can also sort out the problems without regulatory intervention. We know the existing and potential future repercussions, so let us not make the same mistakes again. Trust frameworks are a mechanism by which to address policy concerns from the outset, providing guidance within a legal architecture. A number of governments, including New Zealand, are exploring this approach, though few have taken on the hard questions of risk mitigation, liability allocation, enforcement and redress.

    Time is of the essence. We hope that this discussion will serve as a reminder to look up from debates on terminology and refocus on the outcome we all actually want— meaningful and universal data portability facilitated by technology but also, critically, backed by law. Without state-of-the-art legal architecture, SSI is just a techno-utopian pipedream.

     

    Authors

    Emily Fry is the head of Digital Trust at MATTR, a New Zealand-based company developing open standards, technical infrastructure, and software for better Digital ID. She specializes in bridging law, technology, and policy through innovative legal architectures.

    Elizabeth M. Renieris is the Founder & CEO of HACKYLAWYER, specializing in law and policy engineering. She’s a privacy lawyer (CIPP/E, CIPP/US), identity expert, and a fellow at the Berkman Klein Center for Internet & Society at Harvard University, where she researches data governance frameworks for the digital age.

  • Francesca Hobson posted an article

    Cheryl Stevens: delivering trust and tech at DWP

    What do you do in the industry?

    I lead on the development and delivery of an Identity & Trust solution that will enable all of our customers at DWP. We need solutions that can suit a range of situations. Helping people through the process by starting with a little information and building it up to a higher strength of assurance over a number of interactions is important, but we also need to be able to go straight in with a strong verification option too.

    There’s been a pivot in understanding that a number of people that engage with us are not new customers. This means we can build up trust in who our customers are, rather than getting them to go through the same friction over and over again. I see trust and fraud monitoring as being two sides of the same coin. Both must exist at the same time to keep you and your data safe, whilst making it easier for everyone to do what they need to do.

     

    What’s a typical week?

    [Laughs] I’m not sure that there is one. Obviously I spend a lot of time thinking about propositions and how they’ll work for all our customers. There seems to be a general assumption that there will be a sudden shift to fully digital solutions once Generation Z comes through. But we haven’t seen that with millennials – not everyone can manage unassisted. We need a suite of options, so having open conversations about solutions is key.

    Our key philosophy is not could we implement a new technology or process, but should we. Every decision we make is pinned on user research and consideration of context. If you have limited phone memory, and there’s a choice between downloading an extra-large app and deleting photos of your kids, which are you going to go for? It’s a no-brainer. If that solution is not going to work for everyone, then the question is: does it need to? It is never boring, that’s for certain.

     

    Which areas do you think are the greatest challenges in digital identity?

    I have a different angle to the rest of the industry, as none of our customers come to us when they’re having a particularly good time. Often they’ve just experienced a traumatic life event which they need support with. It’s critical that we help people effectively, and the biggest gap I see in current solutions is the issue of delegated authority, whether this is for a sustained period, or just a point in time. Current solutions focus on the proving of identity, but don’t cater for all contexts.

    Take a bereavement - this could result in you experiencing a state of shock for a period of time, rendering you unable to act as you would have the day before. In this circumstance, we need temporary digital power of attorney without the formality of an agent but with a solution that mitigates the risk of coercion.

    The ideal is that with the evolution of Artificial Intelligence, behavioural risking and transaction context we will immediately know what normal behaviour is for our customers, and what’s a true change in circumstance. Here the ‘pantry’ approach is key, so our customer gets a tailored solution to their circumstance and context they are acting in.

     

    So, how did you end up in this role?

    I’ve been a civil servant for 20 years. I chose this path rather than joining later in my career, because I believe in our welfare system; it needs to support you when you need it most. I was caught by the welfare system as a child, and hate to think what would have happened to my family and me otherwise. So I started out at the bottom and never thought I would be a senior leader. But here I am now, where my career has done a sort of bow – bringing together identity and fraud in one role. As I said earlier, they’re really two sides of the same coin.

     

    How do you describe digital identity to someone that doesn’t work in it?

    This is easy for me, in the context of my role. I make sure you are who you say you are, before we give you money from the public purse. Any elaboration into how this keeps you and your data safe, and the importance of a varied solution, results in an eyes-glaze-over moment.

     

    Why do you think the identity space is a good place for women to work in?

    I’ll take a different angle on this question – I think it’s an important place for women to be represented, as there’s a strength in women working together. We do bring more empathy, which is essential in providing a solution to a social problem. Identity is too often spoken about as a technical problem, but the conversation is changing to be less about the tech and more about solving the myriad of problems, and Women in Identity are doing great work to bring diversity to the fore.

     

    If you were CEO of an organisation, what one thing would you make compulsory and what one thing would you ban?

    I would make real flexible working compulsory - we have the technology and shouldn’t need to get trains everywhere! And I would ban mansplaining - really unnecessary!!

     

    If you could be an animal, what would you be?

    I’d be a giraffe, as they’re intelligent but don’t look it! They also have antiseptic saliva which is incredibly handy.

     

    And then the big one, where do you think identity will be in 5 years?

    Simple, starting close to home - I hope there will be solutions in place that work for our entire customer base. This will be context driven and risk based and our [DWP’s] role will be the orchestration of this.

    Our ‘pantry’ approach will incorporate a number of technologies and solutions. Biometrics could be a big part of this as they add ease, but we do need to be future-proofing our solutions – we can’t keep asking people to switch their credentials every time some new tech comes along. Self-sovereign identity is really interesting – I’ll caveat that: although it will be the perfect solution in some scenarios, my personal view is that it needs more testing, with more user research and broader demographics, to understand just who it can solve problems for.

    On the broader UK plc and beyond – to get to this point we need greater collaboration across industries. We [DWP] already bring together reuse of HMRC and Verify, but this needs to be wider. Last week I was on a techUK panel with representation from government, aviation and gambling. We all have to go through the same verification over and over, so collaboration on standards is key to enable ease.

    Obviously this needs to be enabled in the context of should we, rather than could we, but there are so many circumstances where things can be simpler. This collaborative approach across government and beyond is vital. I’ve just moved house and am totally fed up of repeatedly filling in my information for companies, it doesn’t need to be this hard.

    Find Cheryl @CherylJStevens