Insights from the Trust and Safety Research Conference at Stanford
What happens when professionals from research, civil society, the public sector, and industry come together to discuss ethics in the Digital Age?
What does the ideal online social platform look and feel like? The world is unimaginably diverse, so depending on who you ask, you will hear a different answer. Yet companies like Meta or X want all these people to coexist on their one or two platforms, rather than on a smattering of smaller ones. This universalist approach to online spaces is certainly profitable, but it creates a situation where Trust and Safety teams must figure out how to shoehorn directly oppositional values into one space. And we will keep bumping up against this problem because universalism, a Western pursuit, is inherently incompatible with the reality of a pluralistic world.
Given what I’ve said above, it’ll come as no surprise, dear reader, that determining the boundaries of an online space is political work: the decisions we make inevitably strengthen one political agenda or another. The Trust and Safety Research conference at Stanford opened and closed with this reminder. Kate Starbird, Associate Professor in the Department of Human Centered Design & Engineering at the University of Washington, and Yoel Roth, former head of Trust and Safety at Twitter, both made a strong case for the importance of our work while touching on the complexity of its politics.
They reminded us that many who benefit from at-scale manipulation of users online do not want researchers to unearth inconvenient insights. Trust and Safety researchers are working in a disruptive and adversarial environment: moderation teams have been sidelined, threatened, and deprioritized, and researchers have been targeted. Prof. Starbird’s team alone has faced harassment, threats, lawsuits, and congressional inquiries and interviews. All of this may be having a chilling effect on the field; nonetheless, election integrity research at the University of Washington continues! In response, we must all read their work and ensure their findings inform the products we design and the policies we write.
Professor Starbird’s wisdom:
We are all susceptible to participatory disinformation, in which everyday users, wittingly or unwittingly, help produce and spread false information online. We must be concerned because participatory disinformation is becoming a permanent part of how the internet is set up and functions (what experts call the “sociotechnical infrastructure of the internet”).
The 2016 and 2020 elections were both pivotal moments for Big Tech, Media, and the Internet.
Disinformation campaigns are built around a true or plausible core and then coordinated and spread by unwitting agents. To me this means that 1. racist, homophobic, or sexist propaganda and polarizing politics are easily adopted and scaled, and 2. everyday users are part of an information ecosystem that may simply be too large, so should we scale down? (link to tiny internets). These campaigns were mostly perpetrated by domestic actors (i.e. influential people in American society, for example Eric Trump and Donald Trump Jr.), and we can expect spikes in misinformation and disinformation on Election Day, due to signal boosting from influencers like them.
Twitter was once the pulse of the internet, but a broad swath of technologists, it seems, no longer considers it to be, which raises the question: is the pulse now Wikipedia? And what can we, the Wikimedia movement, do to guard that trust?
I was curious whether there were any Republicans, conservatives, or libertarians in the room during the talk; I’d be interested to hear their thoughts. I want there to be open dialogue between people of differing opinions and points of view, so long as the conversation remains constructive and respectful.
My own internal musing: it seems like social media platforms that neglect Trust and Safety tend to become platforms for right-wing users. What do you think this says about the correlation between T&S and liberalism?
What are others in academia thinking about?
Researchers are exploring “human on appeal” AI systems, as opposed to “human in the loop” systems: rather than a person reviewing every AI decision before it takes effect, the AI decides first and a human reviews only when the affected user appeals the decision (a minimal sketch follows this list).
Practitioners at Prosocial Design are thinking about proactive, interactive, and reactive interventions to improve how humans connect online.
Michael S. Bernstein and his team at Stanford are researching how to embed societal values into social media algorithms, which they believe is possible. Rather than pretending algorithms are value-neutral, we would code explicit values into them.
Megan Shahi from the Center for American Progress shared her research on “how social media platforms and generative AI developers can meet the moment and offer questions and strategies for mitigating risks to protect against threats to elections and simultaneously uphold democratic values around the world”.
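To make the “human on appeal” distinction concrete, here is a minimal sketch in Python. All function and field names here are my own hypothetical placeholders, not any platform’s actual API: in a human-in-the-loop design a person signs off on every AI decision before it takes effect, whereas in a human-on-appeal design the AI’s decision takes effect immediately and a person steps in only when the affected user appeals.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    post_id: str
    action: str      # e.g. "remove" or "keep"
    decided_by: str  # "model" or "human"

def classify(text: str) -> str:
    """Stand-in for a trained moderation model (hypothetical)."""
    return "remove" if "buy followers now" in text.lower() else "keep"

def human_review(post_id: str, proposed_action: str) -> str:
    """Stand-in for a human moderator's judgment (hypothetical);
    here it simply confirms the proposed action."""
    print(f"human reviewing {post_id}: model proposed '{proposed_action}'")
    return proposed_action

def moderate_human_in_the_loop(post_id: str, text: str) -> Decision:
    """Human IN the loop: a person reviews every model decision
    before it takes effect."""
    proposed = classify(text)
    final = human_review(post_id, proposed)
    return Decision(post_id, final, decided_by="human")

def moderate_human_on_appeal(post_id: str, text: str) -> Decision:
    """Human ON appeal: the model's decision takes effect immediately;
    no human sees it unless the user files an appeal."""
    return Decision(post_id, classify(text), decided_by="model")

def file_appeal(decision: Decision) -> Decision:
    """A user appeal routes the case to a human, who may overturn it."""
    final = human_review(decision.post_id, decision.action)
    return Decision(decision.post_id, final, decided_by="human")
```

The trade-off in this sketch is the interesting part: human-in-the-loop review scales with post volume, while human-on-appeal concentrates scarce moderator attention on contested cases, at the cost of users silently absorbing wrong decisions they never appeal.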
The public sector also has thoughts?!
I got to listen to actual regulators from the UK! This was very exciting; they spoke about age assurance technologies and mentioned regulatory instruments such as the Children’s Code, video-sharing platform regulation, and the Online Safety Bill.
I really enjoyed learning about Singapore’s new online safety legislation and how the country balances regulation and development, as well as legal and policy efforts in the Caribbean to stop the spread of non-consensual intimate images (NCII). It was important, I think, to frame the spread of NCII as a feminist issue, given that it disproportionately harms women and girls.
One of the most innovative ideas I heard at the conference was from Chinmayi Sharma. She stood at the podium and told us that AI developers should be held to the same standards as doctors and lawyers in our society. She presented to us the case for an AI malpractice regime. Would this stymie innovation or allow for the safe integration of AI into society? I would keep up with Chinmayi’s research if I were you!
What’s happening around the world?
I caught a glimpse of T&S issues around the world, a perspective I usually seek out. An obvious gap in existing knowledge: we don’t know how much misinformation people in the Global South are exposed to. What we do know is that people outside the U.S. perceive more harm online than Americans do, especially people in honor-based cultures. Within every country, women perceive higher harm than men, which makes sense given that online platforms amplify existing forms of violence against women. However, women in the U.S. perceive less harm than men in some other countries.
I learned from an Irish scholar researching CSAM that there has been a surge of CSAM in the Philippines: about 20% of children aged 12-17, predominantly girls, have been exploited in its production. Extremely vulnerable people become facilitators of it. It is an urgent problem, the scholar pressed; the victims are sentenced to a life of destitution.
A similarly urgent issue of combating non-consensual intimate images exists in the Caribbean, we learned from representatives of the Eastern Caribbean Supreme Court.
Pulling the “digital literacy” lever:
It’s obvious that increasing digital literacy helps people be more resilient to misinformation, but it was news to me that “weakened doses” of misinformation, delivered in a controlled environment, help people become more resistant to misinformation on social media (researchers call this “inoculation” or “prebunking”). For example, we can teach users what fear mongering looks like, what false dichotomies look like, and so on. And it’s not just older adults who are susceptible. Younger generations, I hypothesize, are likely to believe content that has been taken out of context but reinforces their beliefs. And regardless of how “engaged” a user is with their content, passive consumption over a period of time can shift their views in a particular direction.
This brings me back to the research proposal raised by Michael S. Bernstein and his team at Stanford: what if we explicitly programmed values into social media platforms? I want to know: what core values would ensure people are not siloed into territories that become fertile ground for disinformation campaigns?
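To make the idea tangible, here is a toy sketch in Python of what “coding explicit values into the algorithm” could look like. This is my own illustration, not the Stanford team’s actual method; the signals (a bridging score reflecting approval across opposing groups, a source-diversity score) and the weights are invented placeholders.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # 0..1, from an engagement model
    source_diversity: float      # 0..1, how unlike the viewer's usual sources
    bridging_score: float        # 0..1, approval across opposing groups

# Explicit, inspectable value weights. The point is that the values
# live in code (and could live in public policy documents) rather than
# being an implicit side effect of optimizing engagement alone.
VALUE_WEIGHTS = {
    "predicted_engagement": 0.5,
    "source_diversity": 0.25,
    "bridging_score": 0.25,
}

def score(post: Post) -> float:
    """Weighted sum of engagement and explicitly chosen societal values."""
    return (VALUE_WEIGHTS["predicted_engagement"] * post.predicted_engagement
            + VALUE_WEIGHTS["source_diversity"] * post.source_diversity
            + VALUE_WEIGHTS["bridging_score"] * post.bridging_score)

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("a", predicted_engagement=0.9, source_diversity=0.1, bridging_score=0.1),
    Post("b", predicted_engagement=0.6, source_diversity=0.8, bridging_score=0.7),
])
print([p.post_id for p in feed])  # "b" outranks "a" once values beyond engagement count
```

Even in this toy form, the sketch surfaces the real question from the research: which values deserve a weight at all, and how would we measure them reliably enough to rank by them?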
All roads lead to local journalism:
Throughout the conference I heard again and again that the loss of local journalism is a hit to democracy, media literacy, and the distribution of power in American society. News deserts leave communities vulnerable to extremism online. I am increasingly interested in this problem.
Calls to action for the Trust and Safety community <3
We must build networks of support for ourselves and our colleagues. I’m grateful organizations like All Tech is Human exist for this reason!
We must revisit the playbook of strategic silence and do a better job of getting the truth about our own work into the public record.
We must communicate with funders, especially philanthropic funders, because we need them more than ever.
We need policy.
There is also an opportunity here:
Online misinformation, disinformation, and other forms of manipulation remain a critical concern for society.
There is an opportunity to develop new methods to study new platforms under new constraints.
We need to better understand the cross-platform nature of online harms and manipulation.
We need to better understand how generative AI can be used to spread disinformation.
I believe we, the Responsible Technology movement, must go head-to-head with our critics and political opponents. It is our responsibility to make the case for our work, to articulate our decision-making, and not to shy away from open dialogue. Problems have always existed; each generation is simply presented with a new set. Let us be brave, and let us be bold. We must keep going!