A Turing Test for Not-Quite Spam

The Daily Dot recently ran an interview with a Pinterest spammer, which they almost immediately retracted. The spammer’s technique is worth noting: he submitted pictures of products to Pinterest, and had them link to the product’s Amazon page with his affiliate account. Then he used bots to make his submissions appear marginally more popular, which would give them enough momentum to get regular users voting on them. According to the spammer, this worked exactly as one would expect: with enough momentum, anything can be popular online.

On a site like Pinterest, anything that gets enough fake votes to matter becomes visible to real voters. On sites that dynamically reorder stories based on votes, content boosted by spam will actually get the vast majority of its votes from legitimate users. It’s hard to fake a voting pattern that makes something look like the most popular post of the day, but comparatively easy to make it look like one of the most popular posts of the last few minutes.
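A toy score function shows why recency is the weak point. The formula and gravity exponent below are assumptions, loosely modeled on the widely described Hacker News ranking formula; Pinterest’s actual algorithm is not public:

```python
def rank_score(votes, age_hours, gravity=1.8):
    """Time-decay ranking score: recent votes count for far more
    than old ones (formula and gravity value are illustrative)."""
    return votes / (age_hours + 2) ** gravity

# An established post: 500 votes accumulated over two days.
established = rank_score(500, 48)

# A minutes-old post seeded with 10 bot votes.
seeded = rank_score(10, 0.2)

# By raw votes the established post wins 50-to-1, but on a
# "popular right now" list the fresh seeded post ranks above it.
```

A handful of bot votes can’t make something the top post of the day, but under any time-decayed score they can easily make it one of the top posts of the moment, which is all the spammer needs.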

So Pinterest manipulators don’t push actual spam—you won’t see mortgage-refinancing lead-gen rank well, for example. Instead, they push pseudo-spam: content that could have gotten popular naturally, but is much more likely to do so thanks to the spammy behavior. Take a post that has a 95% chance of being ignored and spam it until it’s in the top 5% of that distribution: that sneaks past the intuitive filter people use to fight spam (“Is this getting way too much attention given its objective quality?”), and actually subverts it through signalling: if someone sees a mediocre-looking post that’s getting popular, they may suspect that they’re missing something, and give it a second positive look.
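That boost from the 95th-percentile tail can be sketched with a minimal Monte Carlo simulation. The rich-get-richer vote model, the `resistance` parameter, and the vote counts below are all illustrative assumptions, not Pinterest’s real dynamics:

```python
import random

def final_votes(initial_votes, steps=200, resistance=200, rng=random):
    """Toy rich-get-richer model: each tick the post earns a vote with
    probability votes / (votes + resistance), so early votes compound."""
    votes = initial_votes
    for _ in range(steps):
        if rng.random() < votes / (votes + resistance):
            votes += 1
    return votes

rng = random.Random(0)  # fixed seed for a reproducible sketch

# 2,000 posts that start organically with a single vote...
organic = sorted(final_votes(1, rng=rng) for _ in range(2000))

# ...versus 2,000 identical posts seeded with 20 bot votes.
seeded = sorted(final_votes(1 + 20, rng=rng) for _ in range(2000))

p95_organic = organic[int(0.95 * len(organic))]
median_seeded = seeded[len(seeded) // 2]

# Even after discounting the 20 fake votes, the typical seeded post
# ends up ahead of the 95th percentile of organic posts.
```

In this model a small, cheap head start reliably moves an otherwise-ignorable post into the organic top 5%, and nearly all of its final votes come from the simulated legitimate voters.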

While this whole process makes sites like Pinterest resistant to the worst kinds of spam, it also makes them vulnerable to manipulation and mediocrity. There isn’t a reliable way to distinguish between mediocre content that got spammed up and mediocre content that lucked out, which means that sites will either irritate their power users through false-positive spam flagging, or end up relentlessly mediocre. That’s not the worst fate—Reddit is doing well—but it’s interesting to consider whether that’s a necessary part of social network ranking algorithms, or something a smart startup can solve.
