Remediating Social Media (Bridy)

Area 1:

The gist of Bridy's argument is that net neutrality rules make sense for network providers (common carriers) but not for the application layer, which is where users interact with content: applications, social media sites, YouTube, and so on. Network providers are not content aware; while they can identify the type of data (voice, video, text), they aren't looking at the actual content. A content-agnostic legal approach for social media sites would allow fake news and hate speech to run rampant (more than they already do), even when the content is not technically illegal. She notes that machine learning and other automated tools have difficulty detecting context (pg 226). For example, a photo of a breast used for teaching cancer self-exams might be auto-flagged and would need a human to check the context.
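To make that division of labor concrete, here is a minimal sketch (my illustration, not Bridy's) of a flag-then-review pipeline in Python: the automated classifier handles clear-cut cases, and the ambiguous middle band is routed to a human for a context check. The `classify_image` function and the thresholds are hypothetical stand-ins, not a real moderation API.

```python
# Minimal sketch of a flag-then-review moderation pipeline.

def classify_image(image_bytes: bytes) -> float:
    """Hypothetical stand-in for an ML model; returns P(violation) in [0, 1]."""
    return 0.5  # placeholder score; a real model would inspect the image

AUTO_REMOVE = 0.95  # confident violations are removed automatically
AUTO_ALLOW = 0.10   # confident non-violations pass straight through

def moderate(image_bytes: bytes) -> str:
    score = classify_image(image_bytes)
    if score >= AUTO_REMOVE:
        return "removed"
    if score <= AUTO_ALLOW:
        return "allowed"
    # The ambiguous middle band -- e.g., a self-exam photo -- goes to
    # a human reviewer who can judge the context the model cannot.
    return "queued_for_human_review"

print(moderate(b"..."))  # -> "queued_for_human_review" with the placeholder score
```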

Area 2:

One of the things that strikes me about this piece is Bridy's description of a "net neutrality" rule for social media as "retrograde motion" (pg 213). She is exactly right, and I don't believe users would want that. Parents who pay for their kids' network gaming subscriptions (take your pick of platform) assume it is harmless and safe, and the providers work hard to make it kid friendly (and to convince parents it is so), but a fair amount of harmful and offensive content still makes it online and has to be reactively reported and taken down. Parents expect moderation, and providing it is crucial to the platform's business model. Parents who didn't grow up with online gaming would, I expect, be surprised at some of the hostile environments their kids, especially girls, will encounter.

I think machine learning is getting much better and will eventually replace many moderators, but that is an opportunity for platforms to move those moderators into context-checking positions and to tune their policies and practices more finely, as Facebook has done (pg. 221). There will certainly be no shortage of content to check.

Some of the work coming out of the OpenAI project, especially GPT-3, is stunning, and I would love to see how it can be used for content filtering and moderation. I expect it will be able to be very granular and, with continuous training, become very good. Imagine having to solve a problem only once; every future identical problem would then be handled by the AI. Conversely, tools like this could be used to fool moderators and get better at creating content that squeaks by. Stand by for AI-driven spam.
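The "solve it once" idea maps naturally onto content hashing, which platforms already use for known-bad material. A hedged sketch using Python's standard hashlib (the hash-lookup approach is my analogy, not something Bridy or OpenAI describes): once a human resolves an item, its fingerprint is recorded, and byte-identical reuploads are handled automatically.

```python
import hashlib

# Decisions already made by human moderators, keyed by content hash.
# Real platforms use perceptual hashes to catch near-duplicates;
# SHA-256 here only catches byte-identical reuploads.
resolved: dict[str, str] = {}

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def record_human_decision(content: bytes, decision: str) -> None:
    """Store a moderator's ruling so it never has to be made again."""
    resolved[fingerprint(content)] = decision

def auto_moderate(content: bytes) -> str:
    """Reapply a prior ruling if this exact content was seen before."""
    return resolved.get(fingerprint(content), "needs_review")

# Solve the problem once...
record_human_decision(b"known spam payload", "removed")
# ...and every identical future upload is handled automatically.
assert auto_moderate(b"known spam payload") == "removed"
assert auto_moderate(b"novel content") == "needs_review"
```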

