For YouTube moderation, I wonder if they could track bad users to discover new user-created content that breaks the rules. Effectively, categorize the different kinds of content that aren't allowed. Whenever content is flagged as belonging to one of these categories, track the users who viewed it. Keep repeating this until you have a set of users who seem to prefer this sort of rule-breaking content.
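The labeling loop above could be sketched roughly like this. This is a hypothetical illustration, not anything YouTube actually runs; the names (`view_log`, `flagged_videos`) and the `MIN_FLAGGED_VIEWS` threshold are all assumptions.

```python
# Hypothetical sketch of the iterative labeling step: start from videos
# already flagged in a banned category, and label users who repeatedly
# watch them as "bad viewers" for that category. The threshold below is
# an assumption for illustration only.

MIN_FLAGGED_VIEWS = 5  # assumed: flagged views needed to label a viewer

def label_bad_viewers(view_log, flagged_videos):
    """view_log: list of (user_id, video_id) view events.
    flagged_videos: dict mapping video_id -> banned-content category.
    Returns a dict mapping user_id -> the category they gravitate to."""
    counts = {}  # (user_id, category) -> number of flagged views
    for user_id, video_id in view_log:
        category = flagged_videos.get(video_id)
        if category is not None:
            key = (user_id, category)
            counts[key] = counts.get(key, 0) + 1
    return {
        user: category
        for (user, category), n in counts.items()
        if n >= MIN_FLAGGED_VIEWS
    }
```

In a real system you would rerun this as new videos get flagged, so the bad-viewer set grows iteratively, which is the "keep repeating this" part.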
Then begin tracking what new content those "bad viewers" watch, while also identifying new users who follow a similar viewing pattern and categorizing them as bad viewers too. Once you have a large enough data set, you can begin to flag things based on what percentage of a video's viewers are bad users. If some new account uploads a few videos and 75%+ of the views come from bad users, all in the same category of disallowed material, what are the chances the material belongs in that category? And that's just the version 0.0.0.1 variant. You would also track things like view duration, timing of views, sentiment analysis of any comments made on the video (and potentially of any speech in the video), etc.
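The flagging step could look something like the sketch below. Again, everything here is an assumption for illustration: the 75% figure comes from the text, but `MIN_VIEWS`, the data shapes, and the function name are made up, and any real deployment would fold in the other signals mentioned (view duration, timing, comment sentiment) rather than a single ratio.

```python
from collections import Counter

FLAG_THRESHOLD = 0.75  # the 75%+ figure from the text
MIN_VIEWS = 100        # assumed floor, to avoid flagging on tiny samples

def flag_video(viewer_ids, bad_viewers):
    """viewer_ids: list of user_ids who viewed the new video.
    bad_viewers: dict mapping user_id -> banned-content category.
    Returns the suspected category, or None if nothing stands out."""
    if len(viewer_ids) < MIN_VIEWS:
        return None
    # Count bad-viewer views per category; the dominant one matters.
    categories = Counter(
        bad_viewers[uid] for uid in viewer_ids if uid in bad_viewers
    )
    if not categories:
        return None
    category, count = categories.most_common(1)[0]
    if count / len(viewer_ids) >= FLAG_THRESHOLD:
        return category  # route to human review, not auto-removal
    return None
```

Note the design choice that the output is a candidate category for review rather than an automatic takedown, since viewing patterns alone are circumstantial evidence.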