Caleb
Who will moderate, and under what principles shall they do so?

How will that set of 'who' change over time? Appointments? Elections? Some privilege level mechanism?

What tools will be made available for moderation purposes?
Top Answer
Monica
I don't know what *will* be done, but I'd like to offer some proposals (recognizing that this question is pretty broad).

**Who will moderate?** That should be up to each community.  From a platform perspective, we shouldn't care whether they choose founders, hold elections for permanent positions, draw straws every six months, or make "moderator" just another privilege that is gained through site activity.  The platform's job is to provide a means of designating someone a moderator.

**What tools do we need?** This will be constantly evolving.  We should start with the minimum that gets the job done.  I think that means:

- A way for users to flag content for moderator attention.

- A way for somebody to hide/delete content.  Initially, this probably means a manual process.  Later it could be community-driven (enough flags, for example).

On day one I think that's enough.  Soon after, I think we will also need (see the sketch after this list):

- A way to lock content -- prevent edits to a post, prevent new messages in a chat room, maybe other things.  When faced with vandalism like a spew of rude comments/chat messages on a post, there should be an option that's milder than deleting the post to which the activity is attached.

- A way to address users who are behaving in ways that are disrupting the community.  This could mean suspensions, but it could also mean revoking specific privileges, depending on what the privilege system ends up looking like.  Or it could mean imposing rate limits.
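
To make the shape of these tools concrete, here's a rough sketch in TypeScript. Every name in it is hypothetical; it's just one way the actions above might be modelled, not a description of any actual implementation.

```typescript
// A hypothetical model of the moderation actions proposed above.
// None of these type or field names come from a real system.
type ModerationAction =
  | { kind: 'flag'; postId: number; flaggedBy: number; reason: string }
  | { kind: 'hide'; postId: number; hiddenBy: number }
  | { kind: 'lock'; postId: number; lockedBy: number; scope: 'edits' | 'chat' }
  | { kind: 'suspend'; userId: number; suspendedBy: number; until: Date }
  | { kind: 'rateLimit'; userId: number; limitedBy: number; maxPostsPerHour: number };

// Anyone may flag; the other actions are reserved for moderators.
// How someone *becomes* a moderator is left entirely to each community.
function applyAction(action: ModerationAction, actorIsModerator: boolean): void {
  if (action.kind !== 'flag' && !actorIsModerator) {
    throw new Error('only moderators may take this action');
  }
  // ...persist the action and update post/user state accordingly
}
```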

I propose that moderators be the ones who wield these tools and make these decisions.  Things baked into the system tend to be good fits for some communities and bad fits for others.  Initially, give the power to the human moderators, and then see what patterns emerge.

At some point we're going to be successful enough to be facing spambots coming through Tor, and that'll need better tools.  I don't know how early we need to consider that; my gut feeling is that our earliest concerns are about disruptive humans who can prevent communities from taking root by setting a bad mood, rather than insurance spammers that everybody knows to ignore.

Answer #2
Jack Douglas
We don't have a complete answer to this question yet. What we do have is our first 'moderation'-like feature, just released.

Our hope is that this will be a model for other features as they are slowly rolled out in response to need. However, it is also a trial of sorts, subject to significant change based on feedback here once people have tried it out for a while. The idea is to break the overall 'moderator' role down into finer-grained areas of expertise and trust.

Before getting into the feature itself, one thing needs to be said very clearly: ***all the following actions will be public***. Your name and identicon will be attached to posts you flag, and visible to others, including the OP.

Here's how the new feature works right now:

* Every post[^1] initially carries a 'flag' button next to the 'subscribe' button:  
   ![Screenshot 2019-12-10 at 20.33.29.png](/image?hash=21a5ee117c4df5dd87e9937840564784f9bee54ddabf600e00aca96c91ba7e0a)  
   
   Each community can decide its own 'flagging' guidelines, but the expectation is that it would be for content you'd want deleted: spam, off-topic questions, or junk, for example.
   
   Flagging a post has two effects:
   1. The post is immediately hidden from all unregistered users
   2. A notification is sent to members of the post cleanup 'crew'
   
* Members of the cleanup crew flag in a different way from regular registered users:
   * As soon as a 'crew' member flags, the post is hidden from all non-crew members
   * Crew members also have the option to 'counterflag', which overrides regular users' flags and makes the post visible again
   * If multiple crew members flag and counterflag, the overall action is based on the majority.
   * If a clear majority forms, outstanding notifications are cleared (a rough sketch of these rules follows this list).
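
In pseudocode terms, the visibility rules above amount to something like the following sketch (TypeScript; the names are illustrative, and the tie-breaking case is not specified above, so this sketch simply keeps a post visible on a tie):

```typescript
// A hypothetical sketch of the visibility rules described above.
// All names are invented; tie-breaking between crew flags and
// counterflags is an assumption, as only "majority" is specified.
interface FlagState {
  regularFlags: number;     // flags from ordinary registered users
  crewFlags: number;        // crew votes to hide/delete
  crewCounterflags: number; // crew votes to keep
}

type Viewer = 'unregistered' | 'registered' | 'crew' | 'op';

function isVisible(post: FlagState, viewer: Viewer): boolean {
  // Crew members and the OP can always see the post.
  if (viewer === 'crew' || viewer === 'op') return true;

  // Once any crew member has voted, the crew majority decides;
  // counterflags override regular users' flags.
  if (post.crewFlags + post.crewCounterflags > 0) {
    return post.crewCounterflags >= post.crewFlags;
  }

  // A regular flag hides the post from unregistered users only.
  if (post.regularFlags > 0) return viewer !== 'unregistered';

  return true; // unflagged posts are visible to everyone
}
```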

In normal use I expect something like the following to happen:

1. Several users flag a post, drawing it to the attention of the 'crew'. Comments are made in the question chat room if necessary.
2. The first two 'crew' members to visit the post confirm the flags, and the post is then effectively deleted (invisible to everyone except 'crew' and the OP).

Finally, I anticipate that each community will decide on the rules for an automatic background job. The job will flag questions that meet certain criteria (e.g. no answers or votes after 4 weeks).
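
As an illustration, such a job might evaluate each question against a rule like the one below. Only the example criteria (no answers or votes after 4 weeks) come from the paragraph above; the names and structure are invented.

```typescript
// A hypothetical shape for community-configured auto-flagging rules.
interface AutoFlagRule {
  maxAgeDays: number;       // e.g. 28 for "4 weeks"
  requireNoAnswers: boolean;
  requireNoVotes: boolean;
}

interface Question {
  id: number;
  createdAt: Date;
  answerCount: number;
  voteCount: number;
}

// The background job would flag every question for which this returns true.
function shouldAutoFlag(q: Question, rule: AutoFlagRule, now: Date): boolean {
  const ageDays = (now.getTime() - q.createdAt.getTime()) / 86_400_000; // ms per day
  return (
    ageDays >= rule.maxAgeDays &&
    (!rule.requireNoAnswers || q.answerCount === 0) &&
    (!rule.requireNoVotes || q.voteCount === 0)
  );
}
```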

[^1]: except your own, and right now it only works on questions (answers will follow if the feature is well-received; otherwise it's back to the drawing board!)
