DU Community Help
In reply to the discussion: Since some DUers are occasionally resistant to correcting or deleting OPs that are fake even when

EarlG (23,271 posts)

I still think that enforcement would be a potential nightmare, at least in terms of writing this into the TOS and making it subject to the Jury system.
I also believe, as Irish_Dem says in this thread, that it is beneficial to see fake information corrected in public.
There may be a way to deal with this that highlights potential misinformation in a manner consistent with DU's community-based moderating system, but doesn't ask Jurors to spend their time fact-checking. For example, if we added a rule for "potentially fake information," then instead of sending alerts to a Jury, the system could simply count how many alerts of that type a particular post receives. If a certain threshold is reached, the post could be automatically labeled as "potentially fake information."
As someone whose job is to consider the downsides of this kind of thing though, I'll also say it might be difficult to calibrate the number of alerts required to trigger the system. Set it too high and it might not be effective enough; set it too low and partisan groups could cause mischief once we get to, say, the Democratic primaries, which will be kicking off relatively soon. Unlike the regular Jury system, a threshold system would allow people to organize to get real information automatically labeled as suspicious.
So I think there would have to be a manual component: perhaps when the threshold is reached, an alert is sent to someone (Me? Forum Hosts?) for a second look. If the alerts are fair, the "potentially fake information" label could be applied to the post. This would only be effective if it correctly labels as much potentially fake information as possible without too many false positives, because false positives mean too many posts being sent for review. Then there's the issue of how we define misinformation/false information/fake content/AI-generated content, etc.
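The flow described above (count alerts per post, trip a threshold, then hand off to a human rather than auto-labeling) could be sketched roughly like this. To be clear, this is only a hypothetical illustration of the idea; the function names, the threshold value, and the review hand-off are all my own assumptions, not anything that exists in DU's software:

```python
from collections import defaultdict

# Hypothetical threshold -- the calibration problem discussed above is
# exactly the question of what this number should be.
FAKE_INFO_THRESHOLD = 5

alert_counts = defaultdict(int)  # post_id -> count of "potentially fake" alerts
pending_review = set()           # posts waiting for a manual second look


def record_fake_info_alert(post_id):
    """Count one alert; queue the post for manual review at the threshold.

    Note the post is NOT auto-labeled here -- it only enters the review
    queue, which is the manual component described above.
    """
    alert_counts[post_id] += 1
    if alert_counts[post_id] >= FAKE_INFO_THRESHOLD and post_id not in pending_review:
        pending_review.add(post_id)
        return "queued_for_review"
    return "counted"


def review(post_id, alerts_are_fair):
    """Manual step: a human decides whether the alerts were fair."""
    pending_review.discard(post_id)
    return "labeled_potentially_fake" if alerts_are_fair else "no_action"
```

The design choice worth noting is that crossing the threshold never applies the label by itself; it only escalates, which is what blunts the organized-mischief problem at the cost of reviewer time when false positives pile up.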
That's not to say it couldn't be done -- I'm just thinking aloud really. Bear in mind that pulling off something like the above would require a fair amount of time to build out.