Is a safer internet possible?
The struggle to design a system that keeps people safe and builds trust.
This newsletter is my space to explore (or burrow into) ideas relating to sociotechnical systems. A sociotechnical system is any system where a technical system (e.g. a train system, the electric power grid, a digital platform) interacts with a social system (e.g. a government, a startup, a town).
Given that I work at the Wikimedia Foundation (the nonprofit that hosts and builds products for Wikipedia), I am interested in how Wikipedia contributors interact with one another on the platform (i.e. a social system) as they write Wikipedia articles (i.e. a technical system). The crux of this endeavor’s success is cooperation, but as anyone who has ever been on a team knows, cooperation is elusive. It requires guidance, wisdom, and dedication. No sociotechnical system is perfect, but how might my team design one that meets the needs of Wikipedia’s communities while also encouraging their most cooperative selves?
As a tech worker with a background in activism and media, I could not have found a better fit than Trust and Safety. It encompasses the biggest sociotechnical problems of our age. At its heart, Trust and Safety is concerned with creating a frame within which everyone must stay; issues arise when someone pushes on the boundary of that frame. How do we keep people safe on the internet? Who’s responsible for this safety? How do we build trust between people and the technology they’re using? When it comes to platforms of knowledge and information, how do we protect free speech, prevent hate speech and harassment, and address misinformation?
As the internet grows and diversifies into a global community, is it possible to have universal values that govern online social interaction? Is a universalist approach best, or should we prioritize a decentralized internet, with groups of people forming communities to their own liking? As jurisdictions like the European Union pass laws governing data and the internet, what frameworks do we use to create wise and effective Trust and Safety policy?
I’ve made a list of people researching these questions (e.g. the fellows at the Berkman Klein Center for Internet & Society at Harvard), and in the coming months I plan on interviewing them, documenting what I find, and sharing it with all of you.
I myself have been tasked with a tall order. My team is building a reporting system for Wikipedia. This system would allow someone to report another editor, a comment, or a post that violates Wikipedia’s Universal Code of Conduct (UCoC). The UCoC is the result of a multiyear project in which the Wikipedia community came together to agree on a set of behavioral guidelines.
Don’t other platforms have this?
Most major platforms have reporting systems. You can report Tweets, YouTube videos, Reddit posts, Twitch streams, Instagram lives, and Vimeo videos; whatever the platform, there is a way for people to report content or accounts that concern them. The process is simple: you click a button near the piece of content or account you wish to report, fill out a form telling the company what’s wrong with it, and hit ‘submit’. The report flies off to be processed and acted upon, and maybe you’re alerted once the company has taken some action.
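Under the hood, most of these flows boil down to the same thing: the form serializes your complaint into a structured report and sends it to the platform’s moderation queue. Here is a minimal sketch of what that might look like; the field names and the /api/reports endpoint are hypothetical illustrations, not any platform’s actual API.

```typescript
// Hypothetical sketch of a generic reporting flow. Every name here is
// illustrative -- no platform's real API is being described.

type ReportReason = "harassment" | "hate_speech" | "misinformation" | "other";

interface Report {
  targetId: string;                  // ID of the content or account being reported
  targetType: "content" | "account"; // what kind of thing is being reported
  reason: ReportReason;              // category chosen on the report form
  details?: string;                  // optional free-text description
  createdAt: string;                 // ISO 8601 timestamp
}

// What the 'submit' button ultimately does: post the report for review.
async function submitReport(report: Report): Promise<void> {
  const response = await fetch("/api/reports", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
  if (!response.ok) {
    throw new Error(`Report submission failed: ${response.status}`);
  }
  // The report is now queued for review; the reporter may be
  // notified later if the platform takes action.
}
```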
Project complexity
The complexity of this project lies in the DNA of Wikipedia. Because Wikipedia is an open-source platform, our community members are all unpaid volunteers. That means if you file a harassment report against another account, the report is processed by an unpaid volunteer. Our reporting system therefore has to work for both parties: the targets of harassment and the responders. If this were Meta or Uber, we could easily hire more people to process more reports. But Wikipedia is a decentralized, nonproprietary technical system, so we rely on whatever volunteers we can get to do the tough job of processing reports. We can’t expect them to handle high volume, and we don’t want to spam them with fraudulent reports.
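One consequence of that constraint: the system probably needs guardrails that protect responders’ time before a report ever reaches them. As a purely hypothetical illustration (the threshold, window, and logic below are my assumptions, not anything my team has committed to), a simple per-reporter rate limit might look like this:

```typescript
// Hypothetical sketch: a per-reporter rate limit to keep spammy or
// fraudulent reports from flooding volunteer queues. The numbers are
// assumptions for illustration, not real Wikimedia policy.

const WINDOW_MS = 24 * 60 * 60 * 1000; // 24-hour window (assumed)
const MAX_REPORTS_PER_WINDOW = 5;      // per-reporter cap (assumed)

const recentReports = new Map<string, number[]>(); // reporterId -> timestamps

function canFileReport(reporterId: string, now: number = Date.now()): boolean {
  // Keep only the timestamps that still fall inside the window.
  const timestamps = (recentReports.get(reporterId) ?? []).filter(
    (t) => now - t < WINDOW_MS
  );
  if (timestamps.length >= MAX_REPORTS_PER_WINDOW) {
    recentReports.set(reporterId, timestamps);
    return false; // over the limit: ask the reporter to wait
  }
  timestamps.push(now);
  recentReports.set(reporterId, timestamps);
  return true;
}
```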
Design goal
Design an MVP to test on a pilot wiki (i.e. a small wiki community) where a user can report a message, a comment, or an account with a certain degree of anonymity (the level of anonymity is still being defined in collaboration with the community). The experience of reporting should be intuitive and painless for the target of harassment, should create a base level of trust between them and the platform, and should produce a report with enough information to be helpful.
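To make “enough information to be helpful” concrete, here is one hypothetical shape such a report could take. Every field name below is an assumption for the sake of illustration; the real schema, and especially the anonymity levels, is still being worked out with the community.

```typescript
// Hypothetical shape of a UCoC incident report -- all names here are
// assumptions for illustration, not the project's actual data model.

type AnonymityLevel =
  | "visible_to_all"        // reporter's identity visible on the report
  | "visible_to_responders" // only volunteer responders see the reporter
  | "fully_anonymous";      // reporter identity withheld entirely

interface IncidentReport {
  reportedItem: { type: "message" | "comment" | "account"; id: string };
  ucocSection: string;      // which part of the Universal Code of Conduct applies
  description: string;      // the reporter's account of what happened
  evidenceLinks: string[];  // diffs or permalinks documenting the behavior
  anonymity: AnonymityLevel;
}
```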
Project status
We are currently in the “interactive Figma prototypes” phase and hope to test some of our ideas with the Wikipedia community, especially our pilot wikis.
As always, remember that you are an agent of change.