Large language models still struggle with context, which means they probably won’t be able to interpret the nuance of posts and images as well as human moderators. Scalability and specificity across different cultures also raise questions. “Do you deploy one model for any particular type of niche? Do you do it by country? Do you do it by community?… It’s not a one-size-fits-all problem,” says DiResta.
Whether generative AI ends up being more harmful or helpful to the online information sphere may, to a large extent, depend on whether tech companies can come up with good, widely adopted tools to tell us whether content is AI-generated or not.
That’s quite a technical challenge, and DiResta tells me that the detection of synthetic media is likely to be a high priority. This includes methods like digital watermarking, which embeds a bit of code that serves as a sort of permanent mark to flag that the attached piece of content was made by artificial intelligence. Automated tools for detecting posts generated or manipulated by AI are appealing because, unlike watermarking, they don’t require the creator of the AI-generated content to proactively label it as such. That said, current tools that try to do this have not been particularly good at identifying machine-made content.
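To make watermarking a little more concrete, here is a toy sketch of how a statistical text watermark can work, loosely modeled on published "green list" schemes: the generator nudges its word choices toward a pseudorandom subset of the vocabulary seeded by the preceding word, and a detector flags text whose "green" fraction is improbably high. Everything here (the tiny vocabulary, the green_list helper, the always-green generator) is illustrative, not any vendor's actual scheme.

```python
import hashlib
import random

# Toy vocabulary standing in for a language model's token set.
VOCAB = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
         "golf", "hotel", "india", "juliet", "kilo", "lima"]

def green_list(prev_word: str) -> set:
    """Derive a deterministic half-vocabulary 'green list' from the previous word."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate_watermarked(n_words: int, seed: int = 0) -> list:
    """Toy 'model output' that always picks its next word from the green list."""
    rng = random.Random(seed)
    words = ["alpha"]
    for _ in range(n_words - 1):
        words.append(rng.choice(sorted(green_list(words[-1]))))
    return words

def green_fraction(words: list) -> float:
    """Detector: fraction of words falling in their predecessor's green list."""
    hits = sum(1 for prev, cur in zip(words, words[1:]) if cur in green_list(prev))
    return hits / max(len(words) - 1, 1)

marked = generate_watermarked(50)
rng = random.Random(1)
unmarked = [rng.choice(VOCAB) for _ in range(50)]
print(f"watermarked green fraction:   {green_fraction(marked):.2f}")   # ~1.00
print(f"unwatermarked green fraction: {green_fraction(unmarked):.2f}") # ~0.50
```

The same sketch hints at why detection is fragile: paraphrasing the text scrambles the word pairs this kind of detector relies on.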
Some companies have even proposed cryptographic signatures that use math to securely log information such as how a piece of content originated, but, like watermarking, this approach would rely on the creator’s voluntary disclosure.
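As a rough illustration of the provenance idea, the sketch below binds a content hash to an origin claim with a keyed signature, so any edit to the content or the claim fails verification. The key and origin string are hypothetical, and an HMAC with a shared secret stands in for the asymmetric signatures and certificate chains a real system would use, purely to keep the example dependency-free.

```python
import hashlib
import hmac
import json

# Assumption: a signing key held by the content creator. Real provenance
# proposals use public-key signatures so anyone can verify without the key.
SIGNING_KEY = b"hypothetical-publisher-key"

def sign_provenance(content: bytes, origin: str) -> dict:
    """Bind a content hash to an origin claim with a keyed signature."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "origin": origin,  # e.g. "generated by model X"
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Recompute the hash and signature; any tampering fails the check."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claimed["sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

content = b"an AI-generated image, serialized"
record = sign_provenance(content, "generated by model X")
print(verify_provenance(content, record))      # True
print(verify_provenance(b"tampered", record))  # False
```

The voluntary-disclosure limitation is visible here too: no record exists unless the creator chooses to sign one in the first place.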
The newest version of the European Union’s AI Act, which was proposed just this week, requires companies that use generative AI to inform users when content is machine-generated. We’re likely to hear much more about these sorts of emerging tools in the coming months as demand for transparency around AI-generated content increases.
Misinformation is a big problem for society, but there seems to be a smaller audience for it than you might imagine. Researchers from the Oxford Internet Institute examined over 200,000 Telegram posts and found that although misinformation crops up a lot, most users don’t seem to go on to share it.
In their paper, they conclude that “contrary to popular received wisdom, the audience for misinformation is not a general one, but a small and active community of users.” Telegram is relatively unmoderated, but the research suggests that an organic, demand-driven effect may keep bad information in check there to some degree.