fraud much more likely, and much easier, and it has to be stopped.
Which is really going to require stopping generative AI, because the fraudsters using it like to tuck in small, easily overlooked disclaimers that their deepfakes aren't what they appear to be. The fraudsters know most people won't check the fine print.
They're also counting on there being so much fraud via AI that platforms will claim they can't stop it, which is BS.
We need laws against the creation and sharing of what generative AI produces - laws with both financial penalties and possible imprisonment for repeat offenders, and for platform owners who make no effort to keep genAI fraud off their platforms. That might require reviewing images, video, and audio for a while - plain text is less likely to be used for fraud - but it would cut the profitability and usefulness of AI fraud so sharply that the fraud itself would quickly shrink.

EDITING to add: getting rid of the major genAI models for images, video, music, and voices would eliminate most AI fraud immediately, and would make the prospect of keeping this garbage off platforms much less daunting. Open source models would still exist, but criminal penalties would make using them for fraud look pretty stupid.
At the moment the fraud is being treated mostly as a fun game, including by the AI bros whose generative AI gave us this level of fraud in the first place. They should not be allowed to claim that only the users of those tools are responsible.
GenAI deepfakes are also being treated as a fun game by politicians, and that has to stop - including when it's done by people on our side and we're tempted to find the deepfakes they create or post amusing. Deepfakes and fraud never help democracy.