Looking back now, it seems at best naive that shared platforms were built without putting ethical considerations first and foremost. Did we as an industry believe that things would just work themselves out – that some sort of benign, zen-like balance would naturally emerge? Perhaps we thought that it was none of our business – that no matter what we do, there will always be someone who will abuse any tool we make. It’s all too easy to equate the software developer with the knife maker, who cannot very well be held responsible for how their tools are used and abused. Any measures we put in place are handily equated with censorship by those who consider the development of tools to be apolitical.
Almost a decade ago when we first embarked on creating what is now Hyperaud.io – we were aware that there were risks of abuse involved in creating a platform. In many ways we built around the risk by limiting functionality and creating a model where we could vet content and membership for specific communities. Now that we’re taking steps to create something more than a working prototype, it seems vital to appreciate the risks and incorporate them into our design in ways that much larger platforms that have come before us have failed to do – and in not doing so, caused great harm.
Sadly it has taken large-scale abuse of systems – particularly social media platforms – to bring these issues to the fore. (I’m not going to name them – this post isn’t a rant against those systems that do little to prevent the targeting and peddling of misinformation, but rather an exercise in considering what might happen when we take our responsibilities seriously.)
This is why we decided to apply for funding to prototype new ad-free models that we hope will incentivise the creation of valuable and culturally relevant content for the next version of Hyperaudio. This means not equating popularity with value, and putting the focus on ensuring that incentivisation aligns with our moral and ethical values.
“I want to know why computer science programs have no tech ethics course requirements when the industry promises students that they have the power to change the world” – Rachel He (@rachelyhe_), June 29, 2020
It almost goes without saying that fundamentally, whatever we do should amount to a safe space, where people can express themselves safe from abuse with privacy and anonymity ensured.
When we built the first version of Hyperaudio we did not anticipate that remixing audio or video could result in abuse. In any case, abuse would have been difficult: although hyperaud.io was a great start, it was little more than a working prototype and so had limited functionality of the type that could enable abuse.
Now that we are thinking about collaborative models, we need to take the concept of a safe space much more seriously. In the new version of the Hyperaudio platform we want people to see how others are using their source material – further we want people to be able to trace the anatomy of a remix. It’s conceivable that any piece of audiovisual media can be used for abusive purposes especially when attached to another individual’s expression.
From our application for Grant for the Web funding:
“Significant difficulties exist monetizing audio and video content outside of popularity driven advertising models.
These models can foster harmful viral content, while deprioritizing socially important or meaningful content. And where meaningful content exists, discussion of that content, and the reply or remix mechanics that facilitate discussion can also often turn toxic.
The development of technology, under pressure from current ultra-capitalistic systems, often leaves its users open to abuse, especially those in marginalized communities. Features that support accessibility or eliminate harassment are often either left out or added late in the development of a product. It is clear that in order to safely protect community members and allow them full access to features and content this must be a consideration from conception.”
So how do we approach this? Well, there’s no surefire way of automating the detection and removal of abusive content, so we need humans to help with content moderation. But there are ways of reducing the chances of abuse occurring. One is to create smaller, invite-only communities.
Perhaps different levels of trust will be required from source-content creators and remixers, since with remixes it is difficult to misrepresent and misinform when source-content can be vouched for, especially as one of the linchpins of Hyperaudio is being able to quickly drilldown on source content from the context of a remix.
Our proposal, which builds upon experiments we’ve carried out with the current hyperaud.io, is to break things up into subdomains. Here’s one we created for Chattanooga Public Library – chattanooga.hyperaud.io. Subdomains such as these (e.g. schools.hyperaud.io/media) contain a subset of media, while all media can currently be viewed on the main site (hyperaud.io/media). For the proposed version we may choose not to do that, or come up with a system of trusted sources – either way this needs to be considered. We’re doing a lot of research into how Hyperaudio can be used in schools to promote media literacy, and restricting content to subdomains definitely seems to be the way to go.
Explorations could include whether certain subdomains – schools, for example – could share content with each other.
Having a federation of communities also affords us the opportunity to decentralise and so scale the moderation process by appointing trusted community members as moderators – which we think is vital.
Redressing the Balance
One of our stated goals from our Grant for the Web application:
“Surface content that educates, uplifts and addresses key issues facing socially progressive communities and emerging generations, in order to raise awareness and promote the remixing of valuable culturally relevant content.”
By changing the model from ad-driven to direct payment-driven, new opportunities arise. By building Web Monetization (a proposed web standard) into Hyperaudio we can make sure that content creators are compensated for their work. We also count remixes as content, and will experiment with formulas to share proceeds between the source-content creator, the remixer and possibly even the sharer (sharing compensation is subject to experimentation, and to no negative behaviour emerging as a result).
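As a sketch of what such a formula might look like – the roles and the 50/40/10 weights below are purely illustrative assumptions, not a formula Hyperaudio has settled on – a payment could be divided proportionally among the contributors to a remix:

```javascript
// Hypothetical split of a Web Monetization payment among the
// contributors to a remix. The roles and weights are assumptions
// for illustration only.
function splitPayment(amount, weights = { source: 0.5, remixer: 0.4, sharer: 0.1 }) {
  // Normalise by the weight total so the shares always sum to the
  // full amount, even if the weights don't add up to exactly 1.
  const total = Object.values(weights).reduce((a, b) => a + b, 0);
  const shares = {};
  for (const [role, weight] of Object.entries(weights)) {
    shares[role] = (amount * weight) / total;
  }
  return shares;
}

// A 100-unit payment under the assumed weights gives the source-content
// creator half, the remixer 40% and the sharer 10%.
const shares = splitPayment(100);
```

In practice any such formula would need to be transparent to, and adjustable by, each community – which is very much in keeping with the values discussed below.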
We want to reserve part of the Web Monetization payment for content creators producing the type of content we’re seeking to promote – funding people and communities in order to build a platform that surfaces content felt to be culturally valuable and currently underrepresented, and ensuring we have a wealth of verified and relevant building blocks from which to create remixes.
But who decides on what content is valuable? Our hope is to foster relationships with community creators, so that communities can collectively decide. One way we feel we can foster involvement of certain communities is by seeking them out, offering to help surface content and run events.
Laying our Cards on the Table
So we’re really going out of our way to attract underrepresented communities and content while discouraging the producers of false, hateful and/or abusive content. We want people to know this before even visiting our platform and we believe our values should be clear from the start.
It’s important to be transparent, transparent about how web monetization is distributed, transparent about what sort of behaviour and content is tolerated, transparent about our values and what we hope to achieve with Hyperaudio.
To this end we’ll create a clearly visible Code of Conduct as well as clear Terms of Service – our website copy will reflect our values leaving little doubt about the type of platform we’re trying to create. Safe, inclusive and relevant. We also know it’s not enough to just have a Code of Conduct, we will also put in place clear and transparent processes on how it will be enforced.
Truth and Trust
But who are the arbiters of truth? How do we decide what is and isn’t true? Let’s face it – no system for establishing facts is perfect. However, there are tried and tested methods of establishing truth beyond reasonable doubt. Trusted sources of media archival do exist (the Internet Archive, for example) which allow us to establish whether something has been taken out of context. In fact the same team that worked on Hyperaudio also built the Glorious Contextubot, a prototype that allows searching for clips of media. Other sources such as Wikipedia help us establish core facts. Our hope is that allies working at organisations like Full Fact, Meedan and Wikimedia can advise us on methods of fact checking which we can share with our communities. It may not be a perfect system but we’re confident it will be a reasonable one.
It looks like the big social media platforms are finally rising to the challenge of tackling misinformation and hate speech and so fact checking will become increasingly commonplace and the methods for doing so more established.
Again from our application:
“The inherent inaccessibility of web-based audio and video content makes discoverability and in-content navigation difficult and leaves whole sections of the community excluded (especially those with hearing or language difficulties).”
Making audiovisual media more accessible is something Hyperaudio does well. By associating timed interactive transcripts with pieces of media we allow audio and video to be easily scanned, navigated, searched and shared. In fact our entire remixing premise is based on moving pieces of text about, which is refreshingly easy when compared to more traditional audio/video editors. We’ve also written code to convert timed text to captions, which we can add to the video or make available in a number of popular formats. Translation of those captions into most languages also looks promising using third-party APIs.
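To make the caption-conversion step concrete, here is a minimal sketch – the phrase-level data shape is our assumption of what a speech-to-text service might return, not Hyperaudio’s actual internal format – of turning timed text into a WebVTT caption track:

```javascript
// Format a time in seconds as a WebVTT timestamp (HH:MM:SS.mmm).
function toTimestamp(seconds) {
  const h = String(Math.floor(seconds / 3600)).padStart(2, '0');
  const m = String(Math.floor((seconds % 3600) / 60)).padStart(2, '0');
  const s = (seconds % 60).toFixed(3).padStart(6, '0');
  return `${h}:${m}:${s}`;
}

// Convert a list of timed phrases into a WebVTT caption file,
// one numbered cue per phrase.
function toWebVTT(phrases) {
  const cues = phrases.map(({ start, end, text }, i) =>
    `${i + 1}\n${toTimestamp(start)} --> ${toTimestamp(end)}\n${text}`);
  return `WEBVTT\n\n${cues.join('\n\n')}\n`;
}

// Assumed input shape: start/end times in seconds plus the spoken text.
const vtt = toWebVTT([
  { start: 0, end: 2.5, text: 'Hyperaudio treats transcripts' },
  { start: 2.5, end: 5, text: 'as an interface to media.' },
]);
```

A track built this way can be attached to a video element with a standard `<track kind="captions">` tag, which is what makes the same timed text useful for both captioning and navigation.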
Our aim here is to make Hyperaudio accessible to as many people as possible and to encourage the formation of a diverse range of communities.
The Grant for the Web application was split into a number of tiers – in the end we applied for the Creative Catalyst grant, defined as:
“Supporting experiments in the creation, distribution, and discoverability of content monetized using the Web Monetization standard and the Interledger Protocol.”
At the Mid level:
“Mid level grants are for the design, prototyping, and building of plugins, APIs, and other monetization tools, research and development including whitepapers and academic research, larger-scale content creation, and community activations.”
Our rationale was that before we embark on building an entirely new platform with models and modifications that put ethical considerations first, it would make sense to prototype aspects of the new system while collaborating with a number of communities. The idea is that by gathering the best feedback and most promising ideas, we give ourselves the best chance of building the “right thing” – with ethical elements well considered. Grant for the Web funding is ongoing, so hopefully we’ll be better placed to attract further funding once we’ve completed this phase of research and development.
We’re keeping our fingers crossed that our application is successful. In the meantime please let us know your thoughts, whether you’d like to be part of our pilot project – or of course if you have any pointers to any other promising sources of funding.
By the way – we do have a model for creating a sustainable non-profit, but that’s a subject for another blog post.
This blog post was co-written with Annabel Church. I’m maboa on Twitter – if you’re interested in Hyperaudio and related technology, check out my newsletter: https://tinyletter.com/maboa
Title Image "Poppies" by joeflintham is licensed under CC BY-SA 2.0