Harmony Mediation Incorporated is a purpose-driven Humanist enterprise dedicated to making the Internet a calmer, friendlier place.

Harmony combines behavioral science with machine learning to build a revolutionary, universal, cross-platform system for user-generated content mediation, augmented by intelligent automation.

What does that mean?

We can detect when people are starting to get upset in an online conversation, talk them down before they explode, and help them express themselves clearly so they feel heard.

If moderation systems are like the police, a mediation system is like a counselor or social worker.

Harmony does not stop people from expressing themselves; it asks them if they want help. How it helps them is a bit of a secret: an active engagement workflow we have been developing since early 2020.

The goal is to create automated mediation systems that detect emotional stress, offer to help facilitate communication, and help people from diverse backgrounds with varied perspectives communicate with each other clearly and calmly in any online environment.
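To make the detection step concrete, here is a minimal sketch of how rising emotional stress might be flagged in a conversation. Everything here is illustrative and our own assumption: the toy word lexicon, the `StressDetector` name, the window size, and the threshold all stand in for the trained sentiment-analysis model a real system would use.

```python
import re
from collections import deque

# Toy sentiment lexicon -- illustrative only. A production system
# would use a trained sentiment-analysis model, not word lists.
NEGATIVE = {"angry", "stupid", "hate", "ridiculous", "wrong", "ignored"}
POSITIVE = {"thanks", "agree", "great", "helpful", "interesting"}

def score(message: str) -> int:
    """Crude per-message sentiment: +1 per positive word, -1 per negative."""
    words = re.findall(r"[a-z']+", message.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

class StressDetector:
    """Tracks a user's recent messages and flags rising emotional stress."""

    def __init__(self, window: int = 3, threshold: int = -2):
        # Rolling window of the user's most recent message scores.
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, message: str) -> bool:
        """Return True when rolling sentiment drops below the threshold,
        i.e. when a mediator should step in and offer to help."""
        self.scores.append(score(message))
        return sum(self.scores) <= self.threshold
```

The point of the rolling window is that mediation reacts to a *trend*, not to a single heated word, so one sharp message in an otherwise friendly exchange does not trigger an intervention.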

Harmony Logo

an illustrated cat sweeps problems under a rug

When you hide or delete disruptive posts and silence or ban disruptive users, you’re treating the symptoms, not the cause. The problem is not eliminated.

Countering Entrenched Ideas

The long-standing paradigm of practically all user-generated content moderation is something we refer to as the traditional hide/delete/silence/ban approach.

This antiquated approach to moderation, a relic of the BBS era, was conceived long before the World Wide Web even existed. For the most part, it only results in short-term fixes that treat symptoms of disruptive behavior, never the causes.

It is also an inefficient process to automate: successive generations of hide/delete/silence/ban tools have amplified its intrinsic problems, making it a poor fit for the needs of platforms managing user interactions at scale.


Mediation Is Different From Moderation

Moderators currently deal with both intentionally disruptive users (“trolls”) and users who become disturbed (emotionally triggered) in the course of a conversation.

We haven’t solved the troll problem yet, but we know we can solve most of the disturbed-user problems. We can mediate their disagreements using an AI-driven, research-based workflow.

Why do people usually get upset and act out? They feel left out.

When a user feels like they aren’t being heard or included in a conversation, the results can be explosive. But what is the effect on a user when they’re silenced? They become even more upset. Silencing distressed users is harmful; to truly eliminate the disruption, we need to engage with them before they explode.

Causes of Disruptions To Conversations*

- Trolls: 10%
- Frustrated Normal Users: 90%

* This statistic is from an analysis of 567 flagged messages conducted by our CEO while he was helping to moderate one of the largest photography forums on the Internet. Statistics like this will vary by platform and community, but based on initial findings in our team’s recent analysis of a fairly large number of Discord communities and Facebook groups, the 9:1 ratio still holds up.

Harmony takes the upset user aside and helps them talk out their problems. By doing this, it can address a significant portion of the total number of disruptions that human Moderators would otherwise need to deal with.

Harmony doesn’t moderate conflicts by exercising control; it mediates conflicts by listening and offering support.

Harmony’s workflows are designed to free smart, creative human Moderators to address unique situations. They also reduce the resource cost of addressing disruptive users, so communities can invest in improving their platforms.


illustration of a scientist or doctor pointing to the words "listen" and "engage"

The New Paradigm of Harmony

The goal we’ve set for ourselves is to create Harmony, a radically different universal framework for automated online conflict mitigation.

Our approach uses applied behavioral science and highly interactive chatbots to achieve comprehensive long-term solutions.

Instead of hiding their expressions and shunning them, we endeavor to solve the problems caused by the majority of disruptive users by helping them feel less inclined to be disruptive.

Our basic assumption is that the main obstacle to civil discourse between strangers is almost always miscommunication.


Harmony will change how people communicate with each other online.

The next-generation system we are currently building uses sentiment analysis, machine learning, chatbots, behavioral science, and social mediation strategies to enlist the aid of the users themselves in mitigating toxic conversations and harmonizing interactions in online communities.
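How those pieces could fit together can be sketched as a consent-driven workflow: when stress is detected, the system takes the user aside privately and offers help, escalating to a guided mediation dialogue only if the user agrees. This is our own hypothetical sketch; the stage names, messages, and `MediationSession` class are assumptions, not Harmony's actual (confidential) engagement workflow.

```python
from enum import Enum, auto

class Stage(Enum):
    MONITORING = auto()   # watching the conversation passively
    OFFERED = auto()      # privately asked the user if they want help
    MEDIATING = auto()    # user accepted; guided dialogue in progress
    CLOSED = auto()       # user declined or the session finished

class MediationSession:
    """Hypothetical consent-first workflow: offer help privately,
    escalate to mediation only if the user accepts."""

    def __init__(self, user: str):
        self.user = user
        self.stage = Stage.MONITORING

    def on_stress_detected(self) -> str:
        # Take the user aside rather than sanctioning them publicly.
        self.stage = Stage.OFFERED
        return (f"@{self.user} (private): It looks like this thread is "
                "getting tense. Want a hand phrasing your point?")

    def on_reply(self, accepted: bool) -> str:
        if self.stage is not Stage.OFFERED:
            return ""
        if accepted:
            self.stage = Stage.MEDIATING
            return ("Okay. In your own words, what do you feel the "
                    "other person isn't hearing?")
        self.stage = Stage.CLOSED
        return "No problem. I'm here if you change your mind."
```

The key design choice the sketch illustrates is that nothing is ever hidden, deleted, or silenced: every transition out of `MONITORING` is an offer, and the user's consent gates every further step.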

Our goal is not to replace human moderators or human counselors and psychologists, but to free them from repetitive analysis and help them more efficiently meet the unexpected, unique needs of billions of real people across millions of platforms.

The Harmony approach is more humane, more responsive, and far more efficient at scale than any existing content moderation system.


The Business Case

Disruptive user behavior doesn’t just disrupt communities; it disrupts commerce.

We expect that implementing the Harmony API across the Internet will dramatically increase positive user engagement, and with it sales, for any platform that employs the system effectively.


The Human Case

Harmony will contribute significantly to increased mental and physical health for billions of people.

Widespread adoption of this technology will relieve both users and operators of online communities, large and small, of the harmful, ongoing stress previously caused by user conflicts.