Ethics 101 for Designers

 

What is Ethics?

Ethics generally refers to an established set of standards that allows us to determine right from wrong. These standards give us the ability to figure out what we should and should not do. It is important to note that what is ethical does not always correspond with what is legal. After all, slavery was legal at one point…but that does not mean it was ever ethical. Currently in the United States, it is legal in some places to prevent women from obtaining abortions…and that is certainly not ethical.

Schools of Ethical Thought

There are several perspectives when it comes to deciding what is ethical. Many of the schools of thought one might encounter in a Western liberal arts education emerged from the Western philosophical tradition, but there are plenty of philosophies from Indigenous, Latin American, African, and Asian cultures that deserve recognition as well.

The two schools of thought I’ve featured below derive from the Western tradition and were chosen for their compatibility with the process of designing a product or experience. Should any non-Western schools of thought stand out as a better fit, please do let us know.

1. Duty-Based

Also referred to as deontological ethics, this system asserts that motives matter more than outcomes when it comes to judging the ‘goodness’ of an action. Philosophers like Immanuel Kant and Sir David Ross defined certain duties to uphold. A good way to think about duty-based ethics is that it is intent-oriented.

2. Results-Based

Also referred to as consequentialism, this system evaluates actions based on their outcome. An outcome that produces the greatest good for the greatest number of people is deemed good, and validates the action that produced it. A good way to think about this is that it is results-oriented.

 
 

The thing with philosophy is that oftentimes life is way too complicated to just stick with one ethical mindset. Most people tend to subscribe to a mixed model where they apply both duty-based and results-based ethics on a case-by-case basis. And that’s exactly what we can do with how we design. When you think about it, the actual life cycle of a product has two dimensions. There’s the part where observations about society are made and a problem is identified (that’s the intent part). Then, there’s the iterative part where solutions are ideated and eventually shipped to the public (that’s the results part).

Intents do not erase results. Many problematic things have been done by seemingly well-intentioned actors, and their intentions do not excuse their actions.

Intents

Not all problems need to be solved. If a client comes to you with a request to create an app that can rob their neighbors of money, that would be an example of an unethical problem to solve. In a lot of tech companies, there exist cultures of excitement, where teams can really get swept up in the fervor of a new, important, flashy problem to address. That excitement often crowds out deeper consideration of whether the problem is ethically worthy.

Furthermore, it is crucial to ask yourselves: are we actually looking at a problem, or just a different epistemology, way of living, or value set? Many Western missionaries ignorantly diagnosed Indigenous communities with a lack of God, and went on to terrorize and colonize those communities. They thought they were solving a problem when they were actually committing gross and heinous acts of violence.

Not only that, we need to consider: are we actually solving a problem, or are we just trying to line our pockets? Companies like Meta say they want to bring the internet to developing countries, but what is the catch? Likely, they wish to make a profit off of these communities.

Results

When we think about the consequences of our product, it is imperative that we are not only thinking about the elite. We need to think about vulnerable communities that have historically been targeted. We need to expand our circles of care so that we are not only looking out for those who resemble us, especially considering that design and tech are notorious for failing to represent the demographics of our users.

What's Good?

First things first: before you apply the design process, you have to know what is good and what is bad. Drawing from virtue ethics, which emphasizes moral character, we can think of some virtues that pertain to our industry. Three virtues immediately come to mind: autonomy, transparency, and safety. Together, these three qualities, amongst many others, empower users and make for trustworthy products.

Autonomy

Autonomy is crucial. And by autonomy, we are referring to user autonomy. We as humans like to feel like we are in control of our technology, and not the other way around. When we don’t feel that way, it can get incredibly frustrating. A desirable and good product gives users the ability to make deliberate choices about how they interact with the product.

Knowing that you have the option of tipping drivers on a ride-sharing app is a positive attribute for autonomy. Knowing that you can rate a bad driver under 3 stars (and receiving confirmation that this rating notifies the company about the driver) is also a positive attribute for autonomy.

Providing autonomy includes other features as well: allowing users to customize their experience with settings, designing helpful “opt-in” and “opt-out” choices, and avoiding dark UX tactics that take advantage of users.
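To make the “opt-in” point concrete, here is a minimal sketch in TypeScript of what an autonomy-respecting consent model might look like. The `ConsentPrefs` type and the function names are hypothetical, invented for illustration rather than drawn from any real product; the point is simply that every data-sharing option starts switched off, and only an explicit user action turns it on.

```typescript
// Hypothetical consent model: every data-sharing option is OFF by default,
// so nothing is shared until the user deliberately opts in. A dark pattern
// would pre-check these boxes and bury the opt-out.
interface ConsentPrefs {
  shareUsageAnalytics: boolean;
  personalizedAds: boolean;
  emailMarketing: boolean;
}

// Respecting autonomy: the starting state shares nothing.
const defaultPrefs: ConsentPrefs = {
  shareUsageAnalytics: false,
  personalizedAds: false,
  emailMarketing: false,
};

// The only path to "true" is an explicit user action.
function optIn(prefs: ConsentPrefs, key: keyof ConsentPrefs): ConsentPrefs {
  return { ...prefs, [key]: true };
}

// Opting out is symmetric: it should never take more effort than opting in.
function optOut(prefs: ConsentPrefs, key: keyof ConsentPrefs): ConsentPrefs {
  return { ...prefs, [key]: false };
}

// Example: a user deliberately enables analytics sharing, then changes their mind.
let prefs = optIn(defaultPrefs, 'shareUsageAnalytics');
prefs = optOut(prefs, 'shareUsageAnalytics');
```

The design choice worth noticing is the symmetry between `optIn` and `optOut`: autonomy is undermined just as much by a one-click opt-in paired with a buried, multi-step opt-out as it is by a pre-checked box.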

Note: in providing autonomy to one user, you must make sure it does not allow that user to infringe upon the autonomy of other users (or non-users). This is where relational autonomy comes into play.

Traditionally, philosophy has regarded autonomy as a very individualistic value. Feminist philosophers have sought to reinterpret it as something more relational, recognizing that we are all socially embedded and possess varying levels of power, and examining how social oppression can impair our autonomy. An example that applies to tech is the so-called consent model underlying an app’s Terms and Conditions. As we know, users are presented with unreadable T&Cs that we all end up scrolling through anyway. Can that really be said to represent our autonomous and consensual decision to give up our data?

Transparency

Transparency is just as important. Users have a right to know what they are signing up for when they agree to use a product. Oh, but that’s what the Terms & Conditions are for! Sometimes, that’s not enough.

For example, when using a complex product such as Facebook, I want to know how I can control my privacy settings. Facebook is currently undergoing major criticism for playing a huge role in the data breach involving Cambridge Analytica. In an earlier scandal, college students who joined a queer choir group on Facebook did not realize that they were unintentionally ‘outing’ themselves to their homophobic families. Users should always have a clear understanding of what a product does and how it works.

Another timely example is the perpetuation of fake news on social media. Companies whose platforms spread misinformation need to seriously redesign their algorithms and make sure they are flagging incorrect content.

Safety

Safety for the user is another incredibly essential quality. Safety includes many aspects: sanctity of life, inclusion, privacy, and emotional well-being.

For example, it was revealed that Instagram strategically withholds “likes” from certain users in an attempt to make them feel disappointed about their photo’s popularity and check the app more frequently. Talk about emotional manipulation.

Another example of safety, with regard to inclusion, is Airbnb’s struggle with racism towards people of color on its platform. No person of color should be made to feel like they are excluded from a product; and no person using a product should be allowed to exclude others via said product.

In a more futuristic (and controversial) example: if someone purchases an autonomous vehicle, are they guaranteed to have their life protected and prioritized in the case of an accident? What about a non-user, like a pedestrian? There is more urgency than ever to figure out an approach to this problem, given that one of Uber’s self-driving test cars recently killed a pedestrian.