Growing Digital Ethics in Practice: an Interactive Roadmap
Dr Caitlin McDonald
with contributions from our 'Growing Digital Ethics in Practice' podcast guests
Over the past three years we’ve been observing the maturing digital ethics landscape. Starting with our first commentary The Winter of AI Discontent: Emergent Trends in Algorithmic Ethics we identified four critical reasons CXOs need to care about digital ethics, whether or not they’re responsible for implementing its specific technical complexities:
- Customers might abandon you for more-ethical providers – or you could attract a new customer base through providing ethically aligned services not offered by your competitors.
- Talent could leave you for more-ethical employers – a particularly relevant consideration for AI/ML specialists with highly sought-after skill sets, who have demonstrated both in surveys and in collective action that they are highly likely to want to work for employers who share their values.
- You might do great public harm through unintentionally exacerbating systemic inequalities – or you could be the industry leader who uses automation to expose and reduce these inequalities.
- You could lose the competitive edge of technical prowess to companies or countries with ethics that endanger your employees, your customers or the public at large – an external shock may disrupt your industry or business in ways that are harmful.
We later boiled these down to three simple but intertwined strategic drivers that impact all organizational decision-making: reputation, regulation and revenue. As businesses and public policy decisions become increasingly data-driven, understanding ethical impacts becomes just as central to business strategy as any other major driver such as organizational change or technology policy.
Our report Stemming Sinister Tides: Sustainable Digital Ethics through Evolution proposed an approach that matches different ethical schools of thought to stages of technological maturity: the more industrialized an industry is, the more codified its rules, processes and governance models will be. Conversely, for nascent technologies still in their infancy, there may be no formal regulatory landscape, but that doesn’t mean that there are no ethical responsibilities at this early stage – nor that there are no tools and frameworks available to guide teams at this stage. In the report we offer around a dozen simple tools organizations can use to manage digital ethics considerations. We worked through this toolkit using example industry use-cases from large, well-established businesses to those at the earliest nascent stages of exploring these issues. However, there are hundreds of digital ethics tools, frameworks, governance models and relevant laws and regulations out there. We barely scratched the surface of the approaches available.
Evolutionary ethics: we describe how to match specific ethical approaches to phases of technological maturity in our paper Stemming Sinister Tides: Sustainable Digital Ethics Through Evolution.
In 2019 we took our toolkit off the page and put it into practice on the Ethical Digital Study Tour. Over 25 organizations in a range of sectors including biotechnology, venture capital investing, academic research, digital infrastructure and legislative bodies all gave us their perspectives on the state of play in the field of digital ethics, walking us through examples of how digital ethics decisions play out in real-world scenarios. We heard about digital service design choices that open up new opportunities for underserved markets; governance approaches designed to remove bias from organizational decision-making; cross-industry engineering standards designed to enable fair play; automated tooling for identifying areas of ethical risk within engineering workflows; and more. These findings are summarized in our report Ethical Digital Study Tour: Making Good.
One of the biggest questions that comes up over and over again for our clients is how to put principles into practice. Our clients grasp how important the concept of digital ethics is, but many of them struggle with making an actionable plan built into their operational processes. This is partly because of the fragmented nature of the space – with so many different options, it can be difficult to choose the right tool at the right time.
To help define a pathway to success, we welcomed leading lights from a range of different perspectives to give us their views on how to practically enact digital ethics in a podcast series titled Growing Digital Ethics in Practice. We heard about a range of frameworks, toolkits, emergent legislation, research recommendations and guidance from third-sector bodies. More importantly, we heard practical guiding principles for when and how to apply what models to which conundrums. Please do take the opportunity to listen to our podcast series for the wealth of insights we were fortunate to have from our guests.
We’ve summarized the key takeaways through this interactive visual which presents new models based on our podcast series and extends our existing thinking in new ways.
Navigating the roadmap
The interactive guide
Starting the Journey: Strategic Drivers
Why has digital ethics become such an urgent topic in strategic conversations? Business and policy decisions are increasingly data-driven, meaning ethical know-how is as essential to assessing business impact as understanding organizational change or technology policy. Regardless of whether you are directly responsible for designing and implementing digital systems, the digital ethics of your business is something every CXO needs to understand.
These are the core strategic drivers that impact any business decision, either through risk or potential reward:
- Reputation: your customers or talent will abandon you; or you’ll stand out to them through specific market differentiators your competitors don’t share
- Regulation: you’ll fail to anticipate how the regulatory landscape is evolving; or you’ll shape it
- Revenue: you’ll miss an opportunity for strategic advantage, or your reputational damage will catch up with you; or you’ll find new markets & product opportunities
Note for our financially minded friends: we recognize that what we’re describing is ‘profit’ rather than raw income – the balance between revenue and cost (including the often-uncounted cost of negative externalities). However, the thesaurus tells us that in colloquial speech many people use the two interchangeably.
The law is the floor: leverage the ‘frozen middle’
**Anne Currie**
Tech Ethics at Container Solutions
& Sci-Fi Author
“It's amazing how technologists think 'I was only following orders' is a legal defence. It's not a legal defence. And similarly, just as many developers think 'But I didn't know it was against the law' is a legal defence. It's not a legal defence. It is required that you familiarize yourself with what the law says you are and aren't allowed to do. And that you're aware of where you might be breaking the law in the future.
This is not something that’s unique to the technology industry. It's very similar to the dilemma that faced the finance industry after 2009. We had an awful lot of unethical or illegal behaviour going on within the finance industry. And they realized they really needed to change things, so they did an awful lot of work trying to build up ethical cultures. And they were very successful in finding out what didn't work. We can learn from that.
Two things they found that really didn't work, the most obvious being to have the CEO say: 'Yes, we're going to be ethical. From now on, everyone will be ethical.' This made almost no difference to the ethical culture of the company, because most people then think 'well, they're probably lying.' If they haven't modelled that behaviour for their entire company history, people just think they're making it up: they don't think anything has really changed.
The other thing that doesn't work is a lot of youthfully enthusiastic, ethical people coming into the business. They're enthusiastic, they will say something, you'll quite often get a lot of early-career people who will kick up a fuss and make a noise, but they don't really have enough clout to make change.
If you really want cultural change, it's middle-out. It's middle managers who set the culture for a business in terms of ethics. This is what was found in the finance industry. So if you're a CEO, it's about convincing your middle managers that you actually mean it. How are you going to get your middle managers to believe, and to enforce, and to push it up and to push it down? Without that, you aren't going to get anywhere at all.”
Digital ethics is central to strategy
**Paul Miller**
Managing Partner & CEO
Bethnal Green Ventures
“Impact has started to move out of the CSR department in large companies and more into the strategy and innovation side. More large companies are understanding that if they innovate around a big problem people want to see solved, sure, that's going to have a great reputational benefit for the company, but it's also where the money is in terms of future markets. It’s around solving these big problems.
Some of the people who are leading these startup companies now, even though it might not work out or it might only get to a certain stage, their skillsets are going to be very in-demand in larger companies in the future as these larger companies need to adapt to a much more mission-led world of business. Even if the ‘tech for good’ world doesn't have a direct relationship with the larger company world at the moment, I think you'll see that some future leaders of large companies will have come from this ‘tech for good’ world.”
How digital ethics drives value in an organization
Ethics is everyone’s problem
**Sarah Drinkwater**
Director, Responsible Technology
Omidyar Network
“Ethics with a capital E is a longstanding academic function of many thousands of years. But on another level, with a small E it’s a very personal set of choices that we make individually as teams, as companies, as societies. And the technology that we're now using has such mass adoption, it's incredibly hard to get alignment. The speed you work is very fast. It's always about the next launch, scaling, growing, reaching more people. What we're advocating for is a far more intentional set of decisions. Even having that time to reflect built into your processes can help you make smarter choices.
We say ‘ethics’, but really these are business risks. Really these are ways in which your company can be found wanting by regulators, by press, by your own employees, by the customers that you hope to serve. Sometimes ethics itself, as a phrase, can seem very remote, very far from the day-to-day of building a business. But I don't believe that's the case. I think this is the whole new wave of how we think about building thoughtfully, sustainably, responsibly. Working contextually in the societies that we live in.”
Ethics is everyone’s problem and there are lots of tools available
Digital ethics is about power
**Ivana Bartoletti**
Technical Director - Privacy
Deloitte
“What I see under threat is the real basics of our democracy. Because in democracy, you and I can have different thinking. We vote in a different way, but we can both express what we believe in to an extent, obviously with constraints. But what is happening at the moment is that with this online manipulation, when you're targeted, it’s not just an advert for [another kind of] shoes, if you've been looking at a pair of shoes online. You're targeted with specific news based on your interest, your weaknesses, and even your fears. And if you and I each see different news and we each see different facts, then I feel that is the basics of democracy they're going to trample.
There are politics and geopolitics underpinning data, underpinning artificial intelligence, that show these power dynamics. They're absolutely important. And they are so visible now to us. And power is also about the way that these systems become more and more complex. We're not talking about technology, we're really talking about power, and this is something that belongs to everybody. And the redistribution of power is something that should be in everyone's interest, not just the technologists or the philosophers or the ethicists; this is us. And the way that we're living in our physical and digital world.”
Get clear before getting it done
Businesses often begin exploring digital ethics approaches as they consider a broader group of stakeholders than solely customers and shareholders – ethics can be a vehicle for measuring the impact of business activities beyond pure profitability. Ethics can also provide a structured decision framework for effort spent on stakeholder needs that cannot be directly tied to the bottom line, but which in some other way falls under the reputation, regulation and revenue drivers. At the end of the day, as Bartoletti says, this is something that belongs to everybody.
When starting to build a programme of work for digital ethics, it’s critical to get aligned on purpose. As with all strategy, only by agreeing clear aims can teams develop a plan to work towards success:
- Establish a common language (e.g. what do we mean by ‘fairness’? What do we mean by ‘governance’?)
- Understand digital ethics maturity
- Explore available tools & frameworks matched to maturity phase
- Develop a plan
Trust is the critical competitive factor
**Tim Gordon**
Co-Founder & Partner
Best Practice AI
“Trust is going to be absolutely key to succeeding in a data/AI-driven world. Historically trust is a slightly nebulous thing that maybe gets measured on a few brand research items and so on. But if you think about the drivers of economic success in a data-driven world, your ability to gather data cheaply is going to be a core competitive advantage. If consumers trust you, they're therefore going to give you more data, and they will require less economic incentive to do so. You've moved to a world very quickly where the more trusted you are, the more data you're going to get more cheaply. You can see a whole series of ways in which for each of your stakeholders, there is a real, clear, actual economic advantage you can put to the bottom line on increased trust.
One way this plays out is in the mirror AI is holding up to some of these ethical questions [for example, by demonstrating that historical data used to train the model is biased, thus exposing an existing systematic bias in pre-AI decision-making], and therefore making them very real for leadership teams who have always been mathematically driven. I think that’s the real opportunity that emerges from the space. You're going to have companies who really are going to focus on building that trust. If you've got trust, you will then basically have a successful economic model.”
Ethics isn’t enough: aim for equity
**Mutale Nkonde**
CEO, AI for the People
“I’m doing a disinformation project right now that has taken me on to social media. All my work sits at the intersection of racial justice and anti-blackness specifically. So I'm looking at black users of social media over-indexing on those algorithms, but having no ownership stake. And my recommendation is the same as the report [Advancing Racial Literacy in Tech]. Get people in these companies that have better imaginations. If we look at the development of AI, so much of it lies in science fiction. If we look at the cell phone, that was something that was in a science fiction story. If we look at the idea of a talking computer or robots, these are things that we've seen in media and arts that have become real through research. Why don't we research having an equitable world? And I personally have no intention of returning to my pre-COVID reality because I think that this is an opportunity for things to be so much better.
Get rid of the idea of ‘ethical.’ It doesn't mean anything. It cannot be encoded into practices and policies. Look at your existing value statements. Are you a company that wants to promote racism? If not, then look at the different ways that your company is perpetuating that and dismantle them. Are you a company that wants to promote sexism? The same. Ableism, the same. Transphobia and homophobia, the same. By naming specific harms, you can then take concrete steps to dismantle them."
Governance: a multilayered system of incentives, checks & balances
**Rumman Chowdhury**
Director of ML Ethics, Transparency & Accountability at Twitter
“Governance means different things to different people. If you talk to somebody who builds technology, they think of governance at the model level, so what is a system governance for a model, checks and balances at each stage of development, deployment, and then post deployment.
But if you talk to most other people in the organization, they think of governance as organizational governance [like risk and compliance teams, trust and safety teams, or legal departments].
And of course, there's this other layer of extra-organizational governance. These might be influential bodies like research organizations or civil society groups which actually have an impact on how organizations are talking and thinking about ethics.
And then we have literal government.
One thing I'm seeing that's pretty common across all of it, which is troublesome as a scholar of politics, is that none of these mechanisms are actually built democratically. We would actually never want to be a citizen of a country that is governed the way that we're creating AI governance at an international level.”
Human Rights as a critical factor for business success
**Katie Shay**
Head of Business & Human Rights
Cisco
“The UN’s Guiding Principles on Business and Human Rights lay out three pillars that apply to the business and human rights field, the second one being a corporate responsibility to respect human rights.
We have a responsibility to conduct due diligence of our operations and understand where the risks are so that we can take action to prevent human rights impacts, mitigate human rights impacts, and address anything that does occur in the course of our business. In some ways that seems revolutionary, in some ways it doesn't.
Practically what that means for us at Cisco is looking at how we design our solutions and baking in a human rights approach from the very beginning when we start thinking about a new product or a modification to something that we already offer – from ideation all the way through to the end of sale of a product. We're looking at every point along that process. What I'm really interested in is looking for where those points of leverage are: what are the existing processes or policies that my colleagues are already working with and how can I make human rights a part of that? How can I get this on people's radar? So that they're thinking about this in the course of the way that they do their job anyway."
Cybersecurity as a template for evolving from principles to recognized business discipline
**Mark Hughes**
SVP Offerings & Strategic Partners
DXC Technology
“Ethics is nothing new. Ethics exists and has existed in business in our daily lives forever. And it's now about how you translate that into the digital world. I think the comparison I would draw there, with my experience in the cyber world, is around aggregated risk versus the distributed risk.
I think the first stage [of cybersecurity’s development as a business discipline] was a realization that there was a risk, and that aggregated risk had potentially really severe consequences. Then there was a sort of standoff point where the public and private sectors didn't quite understand who was responsible for what. And it's only fairly recently, in the last five years or so, that governments have really mobilized with that core founding principle in mind. The prior activity was much more about information sharing and raising awareness, which is all very well and good, but it wasn't addressing the core ability to manage and mitigate the risks associated with cyber.
Once that step was made, then the cyber management framework and governance framework really began to build up some steam and began to be measured and managed in a much more comprehensive way, where the root of this is being able to understand what the potential risks are, and then a mechanism for aggregating those risks so the appropriate mitigation can be taken. That is, I think, how the whole ecosystem has evolved.
Governance is at the heart of this: it's really fundamentally important because with frameworks for managing and assessing risk, you won't ever get to that point where one part of the system can have a disproportionate impact on another part. Getting those governance frameworks in place was pretty key.”
Shining a light on interdependent governance
Just as strategic drivers are intertwined, the three parts of a systematic governance regime also have interlocking layers of responsibility and risk. While we’ve presented these as though they are part of a regulatory governance regime, they could also represent internal checks and balances within a company, or with external independent advisory boards.
For a truly accountable system of checks and balances, all three aspects of the system need functional independence from one another: one of the critical failures of the subprime mortgage crisis of 2008 was that the auditors were functionally colluding with the organizations who paid them rather than genuinely holding them up to regulatory scrutiny. The interests of all three parties cannot be too closely aligned.
Optimism through robust governance
**Ollie Buckley**
Executive Director
Centre for Data Ethics & Innovation (CDEI)
“One of the most exciting and heartening things about working in this space is the extent to which people at all levels of organizations, from the data scientists through to the executives, are waking up to these opportunities and to the risks. For those of us that work in the data ethics space day-to-day, it can feel bewildering that there are a hundred different sets of AI principles being produced every week. What this shows is that there is an active conversation happening: there's a realization that with the power of these technologies comes the responsibility to make sure that they're working well. You need to be paying attention; you need to be making conscious choices.
It is a fantastic thing, actually, that there is so much going on. I hope that organizations like ours [the UK Centre for Data Ethics and Innovation] and the many other institutions that are working in this space can help us all to help each other. Through this work and making those connections, we can start to develop the blueprints that help society take advantage of these opportunities.
I very much encourage people to be having these conversations and if they're not happening yet in their organizations, to take the opportunity to start them and to go about it in a way that very deliberately brings in different voices from around their organizations, from outside too. Because ultimately this only works if it's a multidisciplinary endeavour.”
Conclusions: not an ending, but a beginning
Every week there are new headlines with new digital ethics scare stories: wrongful imprisonments, biased hiring and promotions, inequitable healthcare, flawed border controls, bad credit scores. The list goes on, and on, and on. Data-driven decision systems have the potential to mitigate bias in many of these disastrous examples, but at the moment they continue to replicate existing societal biases because that’s all we have to train them on. In a way, many of these scandals are simply pointing out in procedural terms the biases that we humans have introduced against one another.
It is the responsibility of everyone involved in designing and building AI – from business leaders to software engineers – to not accept this state of affairs and allow these biases to be encoded into the fabric of our digital lives, to not shrug their shoulders and say that the AI is only doing what we tell it to; but instead to push for a better world for all, online and off. It is everyone’s responsibility because every person on the planet is impacted by these technologies. Nobody is free to opt out of the digitally mediated technoscape, because automated decision systems are increasingly pervasive parts of our private lives as consumers, our civic participation through citizen services and government functions, and our economic participation as employees.
Ethics is misleading as a noun: it is not an objective to be addressed at one single point in time and then forgotten. Ethics is a constant striving towards our shared principles. The ‘road’ of putting digital ethics into practice unfurls across the pages of history yet to be written – by you. We hope the tools, frameworks and perspectives we’ve collected in this body of work will help you to navigate your path through the landscape of digital ethics.