The reason this is concerning is that we will be neck deep in crap (think SEO blogspam and recipe sites, but for everything), which will be disorienting for long enough to erode a lot of trust that we could really use right now. In the medium term, I’m worried that we won’t address the systemic threats and will continue to throw ID checks, heuristics, and ML at the wall, enjoying the short-lived successes when some classifier works for a month before it’s defeated.

I truly don’t get why people aren’t looking at these issues seriously and systematically. I tried to find whether academia has taken a stab at these problems but came up pretty much empty-handed. People would laugh at you for suggesting abuse tech should be open (“you’d just help the spammers”). To make matters worse, abuse, spam, and fraud prevention lives in the same security-by-obscurity paradigm that cybersecurity lived in for decades before “we” collectively gave up on it and decided that openness is better.

We have CAPTCHAs, we can look at behavior and source data (IP), and of course there’s everyone’s favorite, fingerprinting. The obvious problem is we don’t have any great alternatives.

I don’t like that GPT will cause the biggest flood of shit mankind has ever seen, but I am happy that it will kill these flawed ideas about policing. Content-based auto-moderation has been shitty since its inception.
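To make the brittleness concrete, here is a toy sketch (my own illustration, not any real system's filter) of the kind of naive content-based rule that gets defeated the moment spammers adapt:

```python
# Toy keyword-blocklist filter -- the simplest form of content-based
# auto-moderation. Terms and examples are illustrative assumptions.
BLOCKLIST = {"viagra", "free money", "click here"}

def is_spam(text: str) -> bool:
    """Flag text if any blocklisted phrase appears, case-insensitively."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

# Catches the obvious case:
print(is_spam("FREE MONEY, click here!"))  # True

# Trivially bypassed with leetspeak and a Cyrillic 'е' homoglyph:
print(is_spam("fr\u0435\u0435 mon\u0435y, cl1ck h3re"))  # False
```

The classifier "works" until the first adversary substitutes characters, at which point the maintainer adds homoglyph normalization, the adversary moves to another trick, and the month-long cat-and-mouse cycle described above begins.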