In April of 2012, two student organizers at the State University of New York at Purchase entered a large room and began to unfold chairs in a semicircle. As other students arrived around 8pm, the atmosphere gradually grew tense: all the black students sat on the right, and all the white students on the left.
Everyone was there to discuss the recent murder of Trayvon Martin, and the state of race in America. Taking a deep breath, the organizers mixed up the students’ seating, like in the bus scene from Remember The Titans. White and black students now sat together, and talked with candor about systemic racism, their dignity respected. Changing the configuration of the room had made it a safer space.
So what does this story have to do with digital user experience design?
After all, this was a physical space with real people, not a virtual application on a phone. Wasn’t the room just a morally impartial container for human conversation? Surely the organizers’ intervention was simply a matter of how humans had used the room that day?
We tend to have a view that design is—or should be—neutral and rational; something onto which users might project their own views and biases, but without biases of its own.
But what if we have it backwards? What if today’s designs are not impartial, but are facilitating, or even encouraging, users to act with hostility towards one another? What if these mere “containers” are driving our conversations in ways, and towards outcomes, that we didn’t ourselves intend?
In this piece, I’m going to explore some research, events, and experiences that suggest this is very much the case. In all likelihood, the view of design as something blameless—let’s call it impartial design—is part of a myth believed by both companies and citizens to distance themselves from the consequences of immoral design.
Immoral design isn’t the same as bad design, which usually describes a product or service that fails to meet its goals and causes headaches for users. Immoral design, on the other hand, may be found in a product that’s elegant and well-engineered, but that also—like an illicit substance—provokes dependence, societal harm, and even physical violence, as I’ll discuss shortly.
Consider how contemporary buzzwords excuse our decisions as businesses and designers. Has your delivery app’s business model contributed to lower wages and people working three jobs to make ends meet? Don’t worry, it’s because of the gig economy. Did you create a social platform that fuels fascism, misinformation, and increasing hopelessness? Don’t worry, it’s the fault of post-truth politics.
What these platforms have in common is a tendency to view people only as a means of making money, not as ends in themselves. For this reason, a more connected life hasn’t, for most of us, led to greater quality of life.
So, what’s to be done? I’d like to propose a few ways that we as designers might start creating apps that systematically bring people both dignity and delight, without undermining companies’ bottom lines.

The Four Axioms of Designing for Dignity
Let’s dig a little deeper into what it might mean to design for dignity. First, let’s be specific about the premises we’re bringing to the table. These make up what I call the Four Axioms of Designing for Dignity.
- Axiom 1. To design is to render intent, behavior is the medium of design, and systems are a finite set of behaviors of any scope (e.g. a product or institution).
- Axiom 2. Systems, and the feelings and behaviors they are designed to encourage, will always uncover the design’s original intent.
- Axiom 3. Humans desire dignity, form identities wherever it’s lacking, and a system that fails to afford dignity will inevitably face problems.
- Axiom 4. Humane design is the proposed practice of designing systems so that users experience dignity throughout the system, and its core holding is that designers must view users not merely as a means of making capital, but as ends in themselves.
Axiom 1: To design is to render intent, behavior is the medium of design, and systems are a finite set of behaviors of any scope (e.g. a product or institution).
Axiom 1 combines the logic of both Center Centre co-founder Jared Spool and Dalberg Design co-founder Robert Fabricant. It means that a product’s look and feel, and a user’s journey through it, depend on the intentions behind it. Is the intended outcome to make users feel welcome and heard, or is it just to present everything the product can do, without consideration of feature priority?
This axiom also holds that design is not limited to screens or pages. It invites us to broaden our horizons about where designing for dignity might be applied. Consider that meeting room at the State University of New York at Purchase: it is a designed space. Its original contours created tension and animosity. A redesign—requiring students to mix and mingle with one another without regard to race or past friendships—eliminated that tension almost instantly. Similarly, a remote control, a corporate structure, Alexa, a virtual reality headset, a way of life: all of these are large parts of our everyday lives, and all of them can be redesigned in a humane way.
Axiom 2: Systems, and the feelings and behaviors they are designed to encourage, will always uncover the design’s original intent.
Axiom 2 argues that, if a product or system is producing an undesirable effect, you can trace it back to the initial intention and find a one-to-one match. Psychologist Adam Alter, author of the book Irresistible, makes the case in his TED Talk Why our screens make us less happy that, as mobile devices and technologies have grown more advanced, the time we spend away from our screens has dwindled to almost nothing, mostly because the devices lack stopping cues: books are broken into chapters, and shows have end credits, but feeds simply continue.
Design ethicist Tristan Harris echoes this point in an interview with Vox: “It’s designed to hook you,” he says, explaining that the bright colors, infinite scrolling, and constant notifications lure us into using these devices without end. Those sources of constant stimulation were deliberately designed, and their intent was to capture your continued attention at all costs—whether to keep you subscribed to a premium app, or to show you as many sidebar ads, commercials, and promoted content pieces as possible.
That drive for attention has had staggering consequences both on and off these platforms. A video from Vox’s Strikethrough series, Why every social media site is a dumpster fire, summarizes this perfectly. “Humans are social animals at their root, and they’re constantly looking for reinforcement signals or signals that we belong,” says Jay Van Bavel, Associate Professor at NYU.
He researches what kind of information people respond to on social media. He found that “moral-emotional words” like ‘blame’, ‘hate’, and ‘shame’ made a tweet far more likely to be retweeted than neutral language, because they sent the clearest signal about where a person stood on an issue. He also found that, while a physical environment allows for social checks and other cues that afford communicating with dignity, the ease of blocking and dissociating from disagreeing viewpoints online drives us into tribes.
This, in turn, makes us vulnerable to conspiracy theories, misinformation campaigns, and downright hostile propaganda from bad actors. Journalist Carlos Maza sums up the issue nicely towards the end: “The problem isn’t that a few bad apples are ruining the fun, it’s that these sites are designed to reward bad apples.” That harkens back to the original intent of social media sites: profit off users’ attention, without regard to how it happens. This, as Axiom 3 posits, is a case of users not being treated with dignity and, consequently, of the system facing problems.
Axiom 3: Humans desire dignity, form identities wherever it’s lacking, and a system that fails to afford dignity will inevitably face problems.
Axiom 3 pulls from the works of both Donna Hicks, Ph.D., author of Dignity: The Essential Role It Plays in Resolving Conflict, and Francis Fukuyama, Ph.D., author of Trust: The Social Virtues and the Creation of Prosperity and of the now-infamous claim that the Cold War’s conclusion represented the “end of history,” because democratic capitalism was considered to have proven itself the best system over all others.
Donna Hicks, having convened warring parties all over the world from Sri Lanka to Palestine, explores the role dignity plays in the breakdown and restoration of relationships in her work. Hicks identified ten ways to honor the dignity of others that she called the Essential Elements of Dignity, which are:
- acceptance of identity [as equal to your own];
- acknowledgement [of their existence and the impact of your actions on them];
- inclusion;
- safety [both physical and psychological];
- fairness;
- freedom [to make our own decisions/from control];
- understanding [and giving others the chance to explain themselves];
- [giving others the] benefit of the doubt;
- responsiveness [to the pain others may be experiencing]; and
- righting the wrong [when we have caused pain].
When seeking to reconcile with others, we ignore these elements at our peril.
Meanwhile, Francis Fukuyama (notwithstanding intervening developments regarding the end of history) states that “identity is based on a universal human desire to have one’s dignity recognized.” He argues that, in modern society, our insistence on being correct in the face of disagreement, or even contrary evidence, drives people to form political in-groups and out-groups—us-and-them distinctions—in order to satiate that desire for recognition. He further makes the case that social capital—the capacity of people to cooperate and trust one another—is a determinant of a society’s economic success, and highlights economies of the former Soviet Union as examples of how mistrust hampers economic development.
To bring this back to product design: heuristics already exist to improve and test user navigation and delight within an application. Yet we seldom test the capacity of systems to handle user reports about objectionable content, or to anticipate objectionable content at all. As such, users’ sense of safety and recognition is often undermined, and with it their dignity. This often comes at considerable cost to both the brand equity and the bottom line of companies that fail to take these events seriously. Sometimes it can even prove catastrophic.
An in-depth report from Last Week Tonight with John Oliver found that, in 2012, Facebook launched an expansion into Myanmar so sweeping that Facebook became synonymous with going on the Internet altogether: much as we Google something in the United States, in Myanmar you Facebook it. Facebook did so without building sufficient infrastructure to police objectionable content, staffing only four moderators for the whole of the country, and it failed to adequately translate the requisite calls to action for content moderation into Burmese. The outcome was catastrophic, as misinformation spread on Facebook in Myanmar exacerbated racial animosity against the Rohingya and contributed to a state-sponsored campaign of violence whose death toll is fast approaching 10,000.
The cost in human life, exacerbated through a failure to confront misinformation and hate speech, is incalculable. But there are, and continue to be, calculable damages to Facebook’s brand equity, revenue, and stock value because of these failures to act responsibly. These costs could easily have been preempted had the product been designed for dignity in the first place. This is an expression of Axiom 3: it’s expensive to be immoral, so don’t.
Axiom 4: Humane design is the proposed practice of designing systems so that users experience dignity throughout the system, and its core holding is that designers must view users not merely as a means of making capital, but as ends in themselves.
Axiom 4 ties all the others together, affirming the view that we must design for users’ welfare, not against it. In Grounding for the Metaphysics of Morals, the 18th-century philosopher Immanuel Kant put forward the following “categorical imperative”—a rule of ethics that defines what is moral:
[You must] act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.
Kant was skeptical about consequentialism—assessing an action’s moral value by its consequences—partly because an action’s outcomes can be difficult to foresee. However, it seems to me that there are many cases where we can reasonably foresee negative results. Facebook was warned by civil society groups about the risks of expanding in Myanmar without accounting for existing ethnic tensions, and the expansion did indeed result in a human rights catastrophe.
So Axiom 4 maintains that a clear and conscionable design is a good design: one that reasonably anticipates its outcomes and, more often than not, produces good ones.