Chess.com cheater-detection bot pisses someone off

Justin Horton writes:

Of course Chess.com are a private company. They have the right, within the law, to have who they want on their site and to ban who they want from their site.

What they don’t have the right to do is to call somebody a cheat without backing it up.

But that is what they have done.

That would annoy me too.

27 Comments

  1. Adede says:

    Hard to follow that post. It seems to be written for an audience already intimately familiar with chess.com. I guess the circle with a line through it means they suspect he is a cheater? I can imagine that would be infuriating, emotionally. I don’t know what the other consequences are. His account is banned? People send him rude messages? FIDE docks his Elo score? They call up his work to let them know they employ a cheater?

  2. John Richters says:

    I know nothing about chess beyond what little I absorbed from watching Netflix’s riveting “The Queen’s Gambit”. I gather from the complaint, though, that it’s the online chess equivalent of having one of your publications publicly flagged by an algorithm as reporting fake data.

  3. anon e mouse says:

    lichess once (automatically) accused me of cheating because too many consecutive opponents resigned in too few moves. I guess this is a way people boost their rankings? Anyway I had nothing to do with it. Fortunately it was just a warning and there were no consequences.

  4. Rahul says:

    Well, you cannot outlaw false positives by diktat!

    • Curious says:

      Rahul:

      I believe the message is an encouragement of humility and circumspection when dealing with potentially life altering algorithmic decisions. Overconfidence and doubling down on the result of an algorithm is a problem that must be addressed.

      • Adede says:

        Those are good traits, but I hardly think that one’s status on a free chess-playing website is “life altering”!

        • Curious says:

          Adede:

          What is life altering to one may not be life altering to another.

          • Indeed, imagine one of the top 50 grandmasters is incorrectly banned for cheating. Couldn’t that cause millions of dollars in damages? It certainly seems like it.

            • Adede says:

              The PR fallout for chess.com, if the grandmaster chose to make a stink about it, could be large. But for the grandmaster themself, being denied one (of what there must surely be many) way to play chess against anonymous strangers online would be largely inconsequential. I imagine they would spend a few minutes being either annoyed or amused, and then move on to one of the other many venues available for them to play chess.

              • Not if, after being accused by chess.com, they stopped being able to register for other venues, got a lot of bad press, and didn’t get invites to invitational championships, etc.

      • Rahul says:

        When humans made decisions, didn’t we make similar errors?

        That’s why we have ways to appeal and reverse decisions, right?

        I just think algorithms are unfairly singled out for every small mistake, when it isn’t as if we were faultless pre-algorithms.

        • Jonathan (another one) says:

          I agree that algorithms are unfairly singled out. But there really is a problem here when organizations get so big and so dependent on the algorithms that they lack the staff to fairly adjudicate the inevitable errors that algorithms make. I do think the algorithms make fewer errors than people, but correcting the errors of an algorithm is much, much harder than correcting the errors of a human, for two reasons: (a) the ubiquity of human error means that you need enough staff to address human error and humans understand human error and make allowances for it; and (b) not only does the rareness of algorithmic error cause you often to be unable to be helped by people directly, but there seems to be a Bayesian presupposition that the odds are still good that the algorithm is right and the customer is wrong, which shifts the burden of proof to the customer in an unhelpful way. (I’m generalizing here, but I have a number of personal anecdotes to back it up.)

          • Rahul says:

            Yes I agree with that.

            Some of the egregious failures have been because a level of human appeal did not exist. Eg Google algorithms blocking gmail accounts for violating terms etc.

            There is another angle to it, though: people have gotten used to essentially free services becoming a critical part of their lives.

            Take gmail accounts. To most people they are crucial without having paid anything for them. This has been possible due to the enormous efficiencies brought in by a human-lean setup.

            What the reasonable expectation is for the level of human adjudication in such low-cost, nay even free, setups is not easy to define.

            As a society we have made a tradeoff between low costs and service levels.

            • Jonathan (another one) says:

              Exactly. That’s why I always think these discussions of “Are AI decisionmakers biased?” are missing a critical cost-benefit tradeoff. The question of “is this AI decisionmaker biased against X” is entirely academic unless it’s more biased against X than the thing it’s replacing. But even if it is, if the bias sufficiently reduces the cost of assessment, even the individual against whom the bias is operating might be better off than they are otherwise!

              Suppose that ensuring your application for a car loan is evaluated in an unbiased human fashion requires a $200 fee to pay the evaluator (to make it simple, everybody gets a loan, the only question is what interest rate they pay, and the $200 fee is embedded in the rate). A biased AI evaluator with a per-evaluation cost of $0.01 may leave everyone better off, even though some people pay more for the loan relative to others in similar circumstances: they still pay less than they would have to pay for an unbiased evaluation.
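A quick back-of-envelope version of the loan example above, with all numbers invented for illustration (a $10,000 one-year loan, the commenter's $200 human fee, and an assumed one-percentage-point bias penalty from the AI):

```python
# Hypothetical comparison: unbiased human evaluation with its fee
# embedded in the rate, versus a biased AI evaluator whose near-zero
# assessment cost lowers everyone's total cost of borrowing.
principal = 10_000

human_fee = 200            # fee for unbiased human evaluation, passed to borrower
human_rate = 0.05          # rate everyone gets from the human evaluator
ai_rate_favored = 0.05     # AI gives the favored group the same rate
ai_rate_disfavored = 0.06  # biased AI charges the disfavored group more

cost_human = human_fee + principal * human_rate      # $700 in interest + fee
cost_ai_disfavored = principal * ai_rate_disfavored  # $600 in interest

# Even the group the AI is biased against pays less in this toy setup,
# because the savings in assessment cost exceed the bias penalty.
assert cost_ai_disfavored < cost_human
```

Of course, the conclusion flips if the bias penalty outgrows the assessment savings; the point is only that the comparison has to be made, not assumed.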

              • Curious says:

                It sounds like you are arguing:

                “Digital is more efficient than analog.”

                I don’t think there is anyone seriously making an argument against this, except in the case of security of information.

  5. pmbrown says:

    This is a difficult one. A GM can spot cheating. Ginger GM spoke about this on a recent stream, and he’s now playing on lichess instead of chess.com (too many cheats on chess.com). I’ve also seen Gata Kamsky call out cheating during a Titled Tuesday live stream; they can tell you why it’s cheating. These GMs are not happy that cheats are playing again the following week; they’d like to see permanent bans, but it’s hard to prove. With the pandemic they can’t play over the board, which must be very frustrating. People with very high ratings cheat. It’s bizarre, but it’s not uncommon.

  6. Phil says:

    I play far too many chess games on chess.com. Every now and then I get a message that says one of my recent opponents has been flagged for cheating, and that my rating is being adjusted to give me back the points I had lost.

    Cheating supposedly used to be rampant on chess.com but is far less so now. I can’t speak for temporary warnings or bans, which may be based on a much less rigorous process, but chess.com has permanently banned many players, including a surprising number of grandmasters and international masters, and they claim that when they ban somebody permanently for cheating they are willing to back it up in court if they are challenged. It may just be bluster to deter lawsuits, I suppose.

    I saw a video in which one of the chess.com people talked about how they do it. They have a board of elite chess players who advise them, and I think the board looks at individual cases sometimes, but their main role seems to be vetting chess.com’s algorithm. At least at the top level, they compare each player’s moves to the top moves suggested by the leading chess ‘engines’, i.e. chess programs; if ‘too many’ moves are among the few top choices of the programs, that’s suspicious. I assume there are some subtleties: even I will choose the same move as the engines on moves that are pretty much forced, and even I know some opening lines to a depth of several moves. Presumably they only apply the algorithm to non-opening moves in which there is not a forced or ‘obvious’ move, although how they determine the latter might be sort of interesting. There are other parts to the algorithm too, such as the amount of time between moves, and the variance thereof, and so on; AFAIK they have not publicized the details.
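The engine-matching idea described above can be sketched in a few lines. This is only an illustration of the concept: the real chess.com algorithm is not public, and the function name, the moves, the engine's candidate lists, and the opening cutoff here are all invented:

```python
# Minimal sketch of an engine-match-rate heuristic: what fraction of a
# player's non-opening moves land among the engine's top candidate moves?
def engine_match_rate(player_moves, engine_top_moves, skip_opening=10):
    """player_moves: list of moves the player made.
    engine_top_moves: list of sets, one per position, holding the
        engine's top few candidate moves for that position.
    skip_opening: ignore the first N moves (book theory is memorized,
        so matching the engine there is not suspicious)."""
    pairs = list(zip(player_moves, engine_top_moves))[skip_opening:]
    if not pairs:
        return 0.0
    hits = sum(move in top for move, top in pairs)
    return hits / len(pairs)

# Toy usage: 12 moves; the first 10 are skipped as opening theory, and
# we pretend the engine agreed with both of the remaining moves.
moves = ["e4", "Nf3", "Bb5", "O-O", "Re1", "c3", "d4", "h3", "Nbd2",
         "Nf1", "Ng3", "d5"]
tops = [{m} for m in moves]
rate = engine_match_rate(moves, tops)  # 1.0 on this toy data
```

A real detector would, as the comment notes, also have to exclude forced and ‘obvious’ moves and fold in timing statistics, which is where the hard work lies.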

    Falsely accusing people of cheating is obviously bad, but failing to penalize cheating was killing a lot of people’s enthusiasm for playing.

    This is one of those cases, rare on this blog, where the true effect really can be zero (no cheating), so the classic ‘Type I vs Type II error’ framing is appropriate. To some extent they have to choose: how many people should be able to get away with cheating, in order to reduce by 1 the number of people who are falsely accused?

  7. paul alper says:

    How much of this situation is a result of the so-called prosecutor’s fallacy? That is, confusing

    Prob(cheating | playing behavior) with Prob(playing behavior | cheating)
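To see why the distinction matters, here is a worked Bayes calculation with invented numbers: even a behavior pattern shown by 90% of cheaters can imply fairly modest odds of cheating when cheaters are rare.

```python
# Illustrative prosecutor's-fallacy arithmetic (all probabilities invented).
p_cheat = 0.01                  # prior: 1% of players cheat
p_behavior_given_cheat = 0.90   # cheaters usually show the flagged pattern
p_behavior_given_honest = 0.02  # honest players occasionally do too

# Total probability of seeing the flagged pattern
p_behavior = (p_behavior_given_cheat * p_cheat
              + p_behavior_given_honest * (1 - p_cheat))

# Bayes' rule: the quantity that actually matters for an accusation
p_cheat_given_behavior = p_behavior_given_cheat * p_cheat / p_behavior
# = 0.3125: a ~31% chance of cheating, despite the 90% hit rate among cheaters
```

So Prob(playing behavior | cheating) = 0.90 while Prob(cheating | playing behavior) ≈ 0.31; conflating the two is exactly the fallacy.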

  8. Jackson says:

    The reason they don’t share the rationale is because that would make it trivially easy to evade detection. This is a common topic in the chess world, especially during Covid, and there is nothing you can really appeal to other than review by a strong player to check for how computerish (non-human) the moves are.

  9. Matt Skaggs says:

    “Hard to follow that post.”

    Indeed.

    Missing from this thread is a clear example of what constitutes cheating. Apparently in the linked video the two opponents had practiced or something. Are players simultaneously running the game through chess software that tells them what to do? Is that how you cheat?

    • Max says:

      That’s correct, using a chess engine to come up with moves is considered cheating.

    • Phil says:

      Yes, the way to cheat at chess is to have a computer tell you what the good moves are.

      In in-person tournaments, players are not allowed to have a phone on them…not because they can use the phone to call for advice, but because an app on a phone can play better chess than the best human player. Several years ago a chess commentator said “it’s amazing that my phone can beat the world champion”, and his fellow commentator said “oh, that’s nothing, my microwave can beat the world champion.”

      Playing online, it is of course trivial to have a chess program running next to your browser.

      In online top-level tournaments, players are required to have a webcam that shows them from the back so that their screen is visible, to make sure they aren’t doing this. One elite player, Hikaru Nakamura, often looks up at the ceiling when he’s thinking — very much like the protagonist of Queen’s Gambit — and people have joked that he has a screen up there showing him the moves.

  10. Tova Perlmutter says:

    This question is comparable to the one described here: https://dotesports.com/streaming/news/dream-minecraft-speedrun-controversy-a-history-of-events and addressed in thorough math/statistical detail at https://youtu.be/8Ko3TdPy0TU. This question has succeeded in getting my teenager interested in stats!

  11. Julian says:

    From the link, it appears that Justin’s account has been restored. Was there an appeal process, or did they recognize their error?

  12. Sohier says:

    It’s typical to not tell users why they were flagged because doing so would double as telling them how to cheat without getting caught. This is normal and necessary.

Leave a Reply
