Is there anything stopping someone from making 1000 accounts or bots to artificially upvote posts on the Lemmy network?

I guess a single instance can moderate its users using captchas etc., but since it’s federated, an evil actor could set up an instance without these restrictions.

An instance could maybe protect its users against this by blocking the domains of evil instances, but does this approach scale?

A solution might be to limit the number of upvotes accepted from a single instance in a certain time frame, but that won’t work if the other instance is very large and the upvotes are legitimate.
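That per-instance limit could be sketched as a sliding-window rate limiter. This is purely hypothetical (the class and parameter names are made up, and nothing like this exists in Lemmy itself); it only illustrates the idea:

```python
import time
from collections import defaultdict, deque

class InstanceVoteLimiter:
    """Accept at most `max_votes` upvotes per remote instance per `window` seconds.

    Hypothetical sketch of the rate-limiting idea discussed above; not Lemmy code.
    """

    def __init__(self, max_votes=100, window=3600):
        self.max_votes = max_votes
        self.window = window
        self.votes = defaultdict(deque)  # instance domain -> vote timestamps

    def allow(self, domain, now=None):
        """Return True if an upvote from `domain` should be accepted right now."""
        now = time.monotonic() if now is None else now
        q = self.votes[domain]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_votes:
            return False
        q.append(now)
        return True
```

As noted above, a fixed cap like this penalizes large, legitimate instances, so any real version would have to scale the limit with something like the remote instance’s reported active-user count.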

I’d like to hear if this issue has already been thought through, or what ideas you might have.


Several things you could do as an admin:

  • Switch from open federation to allowlist or blocklist mode.
  • Make sure you have captchas on your instance.
  • Ban manipulated accounts.
  • Remove the manipulated posts / comments.
  • Make sure you have a low signup rate limit.
  • Close signups.
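The federation-mode switch in the first bullet boils down to a membership check before processing any incoming activity. A minimal sketch of that decision, with made-up names (this is not Lemmy’s actual config or API):

```python
from enum import Enum

class FederationMode(Enum):
    OPEN = "open"            # accept activity from any instance
    BLOCKLIST = "blocklist"  # accept unless the instance is explicitly blocked
    ALLOWLIST = "allowlist"  # accept only explicitly trusted instances

def accept_activity(mode, domain, allowlist, blocklist):
    """Decide whether to process an incoming federated activity from `domain`.

    Hypothetical helper illustrating the three federation modes above.
    """
    if mode is FederationMode.OPEN:
        return True
    if mode is FederationMode.BLOCKLIST:
        return domain not in blocklist
    return domain in allowlist  # ALLOWLIST mode
```

The trade-off mirrors the thread: blocklists scale only as fast as admins can share the names of bad instances, while allowlists are safe by default but make it harder for new instances to join.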

An instance could maybe protect its users against this by blocking the domains of evil instances, but does this approach scale?

Yes it does: people will naturally share and sticky blocklist posts about malicious instances. This happened organically with Mastodon and the #fediblock tag, and it will likely happen with Lemmy as it grows.


happened organically with mastodon and the #fediblock tag

I really, really hate this tag. Every time I see it, it feels like someone opened the cell door for the Social Justice Warriors. On Mastodon it is often misused by hypocrites to vent their frustration and make others look bad. The idea behind it is not bad, but the implementation is usually pathetic.


I don’t follow it, but this gets into one of the reasons I really dislike tags over communities: tags are completely unmoderated, and anyone can put anything in them, no matter how wrong or unrelated.

I see blocklists in lemmy being shared by a trusted community, who stickies a post with a list of communities to block.

Thank you for the reply. I’m happy to hear that it sounds like a more or less solved problem. I guess Mastodon has proven that these methods do in fact work.

Ban manipulated accounts.

I guess it’s an entire field of study, how to automate spam detection. It will be nice to see how this will be applied to open-source federation in the future. Maybe it’s already used?

Remove the manipulated posts / comments.

I guess this applies to upvotes as well?


Yeah, bots and spam will be an ever-present problem that becomes magnified in federated networks… I’m sure we’ll have to get creative with figuring out how to stop them as things grow.

I guess this applies to upvotes as well?

Yep, but it’s tricky. I mean currently one person could make several accounts, possibly even on the same server, and upvote their own content. We don’t track IPs or fingerprint browsers, so we wouldn’t really be able to tell that they’re the same person. But we can at least stop automated bots via captchas and other measures, to make sure that someone can’t create thousands of accounts to upvote their own stuff.
