I realize this will be controversial, but I strongly believe you will not have a successful site with "top" answers unless some answers can, for good reason, be sent to the "bottom", and the "bottom" needs to be below zero.
* On most sites (and hence in general user perception), a score of 0 signals a lack of engagement and is read as neutral. An **outright dangerous** answer sitting at zero will still look like something people might try.
* On all sites, you will have some small amount of "bad" signal — people will upvote stupid answers.
In both scenarios experts _NEED_ to be able to send a negative signal. **Comparing upvote counts helps sort good answers from better ones, but sending only positive signals is not enough.**
Case in point:
> Q. How do I enable and start apache?
> A(+6). `systemctl enable --now httpd`
> A(+1). `rm -rf /etc/httpd`
I would not like to see downvotes implemented as they are on Stack Overflow Inc. sites.
That said, there is an argument to be made for more feedback options than simply stars, or lack thereof. I haven't seen that argument made convincingly yet.
Nevertheless, if needed, I am quite tempted by the idea of emoji feedback:
| feedback | meaning |
| --- | --- |
| :star: :) | good |
| :\| | neutral |
| :( :cry: :angry: ❌ | not good |
For example, a good answer might be labelled:
> :star: x8
...a reasonably good answer:
> :star: x2 :| x4
...and a poor one:
> :( x5 :| x1
I think only :star: (= :)) should award +1 score. The other emoji should affect display ranking, but not user score.
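A minimal sketch of that split, with illustrative reaction names and rank weights (the weights are my assumption, not part of the proposal):

```python
# Only :star: (and :) as its equivalent) adds to the author's score;
# all reactions influence where the answer sorts on the page.
RANK_WEIGHTS = {"star": 2, "smile": 1, "neutral": 0, "frown": -1, "angry": -2}

def user_score(reactions):
    """Score awarded to the author: positive reactions only."""
    return reactions.get("star", 0) + reactions.get("smile", 0)

def display_rank(reactions):
    """Ranking signal: every reaction counts, but the author's score is untouched."""
    return sum(RANK_WEIGHTS[r] * n for r, n in reactions.items())

good = {"star": 8}
mediocre = {"star": 2, "neutral": 4}
poor = {"frown": 5, "neutral": 1}

assert user_score(poor) == 0  # negative emoji never reduce the author's score
assert display_rank(good) > display_rank(mediocre) > display_rank(poor)
```

The point of the split is that a pile of :( reactions can bury an answer without the reputational sting that makes people reluctant to downvote.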
My main reservation is that adding new "lazy feedback" options will in practice dissuade people from doing what we really want them to do: give written feedback, or edit to improve.
Outright dangerous posts should be flagged and deleted. We don't want those. And to be clear, at least for the *Databases* site, I would prefer to see *mediocre* content removed over time as well.
This is related to [Paul's answer](https://topanswers.xyz/meta?q=243#a207) suggesting reactions.
An idea that has come up in Codidact's discussions (inconclusively so far) is having reactions to *supplement* votes rather than replace them. Reactions, unlike votes, are public (attributed). They serve two main purposes:
- Highlighting an under-valued, good answer -- this answer doesn't have a high score (maybe it was missed, maybe it was late, whatever) but look, Jon Skeet gave it a thumbs-up!
- Providing warnings about things that are dangerous -- the crowd upvoted this because it sounded right, but it has caution marks from three people and I recognize two of them as knowing their stuff.
I agree with Paul that reactions shouldn't contribute to score, and also his concern that people might use them *instead* of votes/stars, but if we can provide the right guidance in the UI I think it's worth allowing this extra signal, or at least giving it a try and seeing what happens. (As with many other things, individual communities should be able to turn it on or off.)
> I realize this will be controversial
Not controversial with me; I'm on record as being a fan of downvotes on SE.
However, the points you raise can be solved another way: by deleting dangerous and very bad content. Note that I agree we *do* need to solve this one way or another.
We haven't quite worked through the nuances of how flagging/deletion will work in its first iteration, but when we do I'll update this post.
Update: now that we have the first tools in place to deal with dangerous or very bad answers, I'm marking this as "won't-fix" — with the proviso that we have the option of changing course later if necessary. Adding downvotes will not be an impossible task later because the database has been designed with the assumption that we might need them.
The problem is in the concept of a **simplistic, single rating**.
I have a strong feeling that a large part of the problems, not only of Stack Exchange but of the current web altogether, derives from squeezing the complexity of thought into a single vote, and from the hyper-magnification of that vote's importance.
On **Stack Exchange** for example, on the surface it would seem that you should upvote every post that you liked a lot, and downvote every one that you didn't.
But is that really how it goes? How many posts with hundreds of downvotes are there?
Pretty much none, apart from those of the faceless company spokespersons!
That's because if you see 3 downvotes on a poor answer you think "the poor guy has had enough". And you're aware of the effect of downvotes on reputation, and of what that affects in turn.
As for the upvotes, has it ever felt unfair that the first person to post a trivial question about the news of the day got 5k points straight away, while you barely made a few hundred in years of diligent contribution?
And have you ever refrained from upvoting a good post after seeing it has already 200 upvotes?
Of course you have, otherwise there would be many posts in the tens of thousands of votes!
So what people are actually doing is, very roughly, **giving a score** to the posts.
But even though the "collective mind" unwittingly moves in the direction of scores, the current "indirect" system inevitably produces frequent unfair extremes, with some people gaining immense sudden reputation and, much worse, others losing everything and having to scramble again under the insane limitations of a beginner account.
So, rather than going roundabout with it, do the real thing and **let people give scores** to the posts!
+10 to -10, or +20 to -20, or whatever; then you display the average and the number of voters (and the distribution, or whatever you want), and if you want to base a reputation system on it, you base it on the *real data*, giving proper weight and separation to the average score and to the number of people who substantiated it.
(And to their agreement? Whatever you want, with the real data!)
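A rough sketch of what "the real data" could look like; the confidence-weighting formula below is purely an illustrative assumption:

```python
# Voters give a direct score (say -10..+10); we surface the average,
# the voter count, and a rank that trusts the average more as more
# people substantiate it.
import math
from statistics import mean

def summarize(scores):
    avg = mean(scores)
    n = len(scores)
    # More voters -> the rank converges toward the raw average.
    rank = avg * (1 - 1 / math.sqrt(n + 1))
    return {"average": round(avg, 2), "voters": n, "rank": round(rank, 2)}

broadly_liked = summarize([8, 9, 7, 10, 8])
single_fan = summarize([10])
assert broadly_liked["rank"] > single_fan["rank"]
```

Note how a single enthusiastic +10 no longer outranks an answer five people scored highly, which is exactly the separation between average and voter count the paragraph above asks for.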
And you can have "downvotes" without all their moral and psychological concerns.
And, going back to the beginning, do we really need **one, catch-all** rating?
What if a question is dull but it gave rise to great answers?
What if an answer is very useful but, alas, doesn't really answer the question it refers to?
What if a guy gave it all to help (but didn't answer)?
In short, I'm not proposing to make a hundred votable metrics available (which might lead to futile voting fatigue, among other things), but to think about the ones, if any, that the system needs or would benefit from enough, and to allow those who feel like it to use them.
Specifically and in their most suitable format, instead of jamming everything in a single, trending-ready, "Like" (or dislike).
The SE model was to make down votes cost reputation. I think they should instead take a little extra work. My suggestion would be that a down vote should be accompanied by a reason, like close votes on SE. If an answer collects enough "down votes" of the same type, a post notice explaining what happened should be added to the top of the answer.
Off the top of my head, the "down vote" reasons could be something like:
- This answer does not answer the question
- This answer is missing key details
- This answer does not work (this should open another dialog that makes the user explain why it does not work)
- This answer is dangerous (this should probably open another dialog also)
If the person who left the answer comes back and fixes the problems, and the fixes are verified by users, then everything should be reset; it should not depend on the people who "down voted" coming back to change their votes.
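The mechanics above can be sketched in a few lines; the threshold of 3 and the reason identifiers are assumptions for illustration:

```python
# Reasoned "down votes": each carries a reason, and once enough of the
# same reason accumulate, a post notice appears on the answer.
from collections import Counter

THRESHOLD = 3
REASONS = {"not-an-answer", "missing-details", "does-not-work", "dangerous"}

class Answer:
    def __init__(self):
        self.reasons = Counter()

    def down_vote(self, reason):
        if reason not in REASONS:
            raise ValueError(f"unknown reason: {reason}")
        self.reasons[reason] += 1

    def notices(self):
        """Post notices shown at the top of the answer."""
        return [r for r, n in self.reasons.items() if n >= THRESHOLD]

    def mark_fixed(self):
        # A verified fix resets everything; the original voters
        # do not need to come back and retract.
        self.reasons.clear()

a = Answer()
for _ in range(3):
    a.down_vote("missing-details")
assert a.notices() == ["missing-details"]
a.mark_fixed()
assert a.notices() == []
```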
Rather than down votes, allow stars to be un-voted. Two people see an answer: one likes it, one does not. One stars the post, the other un-stars it. Net: zero stars. With this system you can only counter up votes.
For new questions and answers, grant one star on behalf of the poster automatically. Disregarding trolls, everyone who posts believes their post adds value. Add that star when they post the question or answer.
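Both pieces of the proposal together might look like this sketch (class and method names are mine, for illustration):

```python
# "Un-star" as the only negative signal: it can counter existing stars
# but never push the count below zero, and every new post starts with
# one star granted on the poster's behalf.
class Post:
    def __init__(self):
        self.stars = 1  # automatic star: the poster believes it adds value

    def star(self):
        self.stars += 1

    def un_star(self):
        # Can only counter up votes; the floor is zero, never negative.
        self.stars = max(0, self.stars - 1)

p = Post()
p.star()      # one reader likes it
p.un_star()   # another disagrees; net is back to the automatic star
assert p.stars == 1
p.un_star()
p.un_star()   # no effect once at zero
assert p.stars == 0
```

This keeps the thread's core requirement, a way to push bad answers down, while bounding the damage: the worst an answer can show is zero, never a negative score.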