Design Ethics - Reality and Responsibility

Tech Gone Wrong

Attention tech workers, innovators, computer nerds! We’ve all enjoyed the spoils of this booming industry we love so much, but I have bad news…tech has gone wrong.

Everyone should have noticed the headlines by now. The list of tech companies behaving poorly keeps growing, and you’ve seen the results.

Why am I telling you this? Let’s get to the point: This is our fault.

This is our fault.

We, collectively as the tech industry, created this mess by making decisions about these product experiences. This tech was designed by teams of professionals like us. We may not have expected or intended to cause these negative outcomes, but they are still the result of our decisions. We’ve been living in the ‘move fast and break things’ world of short-term thinking. Yes, we moved fast and built impressive things, but…unfortunately we are breaking more than software; we are breaking PEOPLE. Our seemingly small decisions result in manipulation and long-term harm to the humans behind the screens. Looking back from the highs of the recent tech era, we’ve lost track of the utopia we once envisioned. Sure, we’ve gained many advantages from tech, but we can no longer ignore the costs that come with it.

Ethics are not optional

In this new world, product teams of PMs, designers, and engineers have a huge influence on how people behave in their daily lives. Together we are all making design decisions, not just those with the title of designer. With that power comes…responsibility. We should be examining the ethics of each design decision as much as we study the engagement metrics. Designing and building products involves inherent ethical decisions. Ethical design is simply practicing design (and development) with the intent to do good. To designers, ethical design is nothing new; we already know it as User-Centered Design. If you’re part of a team making a tech product, your work is out there, representing your ethics, good or bad. Think about it: are you proud of those ethics? Are you intentional about how the product influences real people?

Your work is out there, representing your ethics.

But [my company] isn’t evil…

I know what you’re thinking…”My company isn’t evil, we have principles! We are disrupting industries and bringing power to the people!” Ok, so does that mission statement show up in your product work? Is that code living your values? How do you design for your principles and ethics? Prove me wrong, but my guess is these lofty missions are not actually impacting your users. If they are, spread your knowledge so we can learn from it.

I know what you’re thinking… “We don’t use evil ‘dark patterns’, we don’t blatantly trick users.” Great, that’s a start, but are you going beyond preventing harm? How are you promoting good ethical decisions?

I know what you’re thinking… “My team is full of good people.” That’s great, but don’t assume your work is automatically good as a result. This must be a deliberate effort from the whole team, every day. Ask: are you doing what’s best for the customer? For the ecosystem? For the world?

Right now, can you think of an example of questionable ethics in your company’s products? Hopefully not. If you can, do you notice it? Do you raise the issues, or minimize and dismiss them? Have you heard these quotes? “Oh, that’s a customer service issue.” “Well, that’s what the PM wants.” “I can’t change that now, wait for the next version.” Don’t let these common reactions sweep your responsibility aside.

We can do better!

We have the influence: we are more integrated into the structure of these tech companies than ever before. One by one, we can lead the industry toward better ethics. Are you inspired and ready for action? These examples should get you thinking:

Lead your product team to ethical solutions.

Example 1) Data collection

Before we ask a customer for a seemingly simple data point, evaluate the long-term consequences for the human. What does this reveal about them beyond the surface? How would the person feel if they were defined by this data point? Would they be harmed if it were leaked to the public? Are we furthering a prejudice in the way we define people? For example, ethnicity and race are much more complicated than a choice between five labels.

If we decide to ask for that data point, why should the user trust us with the answer? We should earn their trust by being transparent in how we will use it. This is where content design is key. Explain it in clear, concise, honest words, not pages of incomprehensible terms and conditions.

Example 2) Inclusion

It’s become a core value at my company and others, but how are we living that out?

Idea: Check for inclusion in design or product reviews. Ask tough questions: Who are we excluding with our products? Does this product work for customers with slow, unreliable, or expensive data? Think outside the first-world luxuries of cutting-edge hardware and networks. Does this product work for specially-abled customers? Are we providing an equal opportunity to use our products with assistive tools? Is our product enabling addictions and disorders, or promoting healthy interactions with technology?

Example 3) Tragic paths

We design for the happy path through our flows, and sometimes examine potential sad paths, but what about tragic paths? Don’t hide the process to close an account or delete data. Think about the whole lifecycle of a customer, not just the onboarding. Do we consider people rebuilding their lives after a tragedy, disaster or war? The middle of a family crisis is not a good time to be verifying your ID. We should prioritize features for these so-called edge cases instead of pushing them off to version 2.0, because that “fast follow” probably won’t happen.

The industry is waking up

The good news is that our industry is finally talking about ethics. In Ruined by Design, Mike Monteiro outlines all the ways design has damaged the world. He proposes a licensing system for designers, similar to the ones for architects, lawyers or doctors. He points out that we have a similar duty to protect the customer, but because that duty isn’t formalized, we have no leverage to push back on our employers. In a similar vein, Ethan Marcotte’s essay and talk, The World Wide Work, artfully explains how we went from the bright utopian ideals of the early web to the dark dysfunction we live with today. He proposes that a tech workers’ union could protect employees from automation and give us the power to say no to assignments with poor ethics. Tatiana Mac has explored the history of embedded exclusion and racism in our culture, from civic planning that intentionally segregated races to interfaces that make dangerous assumptions the default. Hopefully, the growing discussion and awareness will help push the industry toward more ethical practices, but don’t wait for it to change on its own.

You must care, right now!

Here are a few points we can act on today:

Managers - you were hired to influence, not just follow.

Individual contributors - you can be leaders!

Don't mock up anything you don't believe in.

Now that you’re enlightened and motivated, let’s go change things!

Let’s all stand up for what we believe in so we can stand out from the tech that’s gone wrong.

Thank you.

… This post was crafted from a lightning talk I shared with 200 of my colleagues/friends at our internal UX conference in September 2019. More about that story coming soon.