It is incredibly cheap and easy to artificially bump a post to the top of a decent-sized subreddit. I've seen it done before, and the cost per impression/click puts most advertising to shame. And that was being done unsophisticatedly by some random dude with a cheap bot. Now imagine what major corporations can do with all the resources they have to burn.
No way it's only 15%
Exactly. And on major subreddits it would be much higher. Worldnews at the moment just feels like the IDF posting pro-genocide content, then commenting, upvoting, and agreeing with each other.
Reddit goes in the bin. 🚮
The thing with r/worldnews isn't only bots; it's also that the mods are trigger-happy about banning people for making unabashed criticisms of Israel and Zionism. Keep that attitude up for long enough and you'll end up with an echo chamber anywhere.
Ya think? I noticed it when all the top comments on /r/worldnews were the exact same thing, just said in slightly different ways.
It's a science at this point.
Isn't that just astroturfing and they've been doing it forever there?
I was really surprised recently when I was searching for help with a mod for a video game and a Reddit thread about it popped up in my DuckDuckGo results. I clicked it and BAM: "Error, this subreddit has not been reviewed, so it is not possible to view it. Either use the app or go to the home page." Wtf? This basically destroys the entire site, right? I was 100% unable to view whatever content had been posted in that subreddit, so I just closed it and went somewhere else. I don't see how Reddit can even continue to exist if they don't let people view the site. How did this happen?
There's a theory that certain email scams are so obvious and easy to spot because that acts as a self-selection mechanism. A person who sees the obvious scam and immediately recognizes it as such was probably never going to fall for it. The ones who respond in spite of all the signs tend to be easier or more lucrative targets.
I could see forcing people to download an app just to see the content as operating on a similar (but not 100% analogous) principle. The type of person who willingly installs the app to see the content (without knowing whether it was worthwhile/relevant beforehand) may be exactly the type of person they prefer to join their site. Perhaps they are easier targets for marketing, less likely to understand or complain about the ramifications of site changes that are adverse to users, care less about privacy, etc., and that makes them more lucrative?
Only 15%? More like 99%! The most recent Gaza genocide was truly an eye opener for me.
Crazy thing I've been noticing more and more: when I search "[thing I want to know] reddit", there are always one or two comments in the top results from Reddit, usually much more recent than the others, very clearly shilling a product. Sometimes it's an edit added purely to plug a product the user "just thinks is really great" that sends you to an affiliate-link-ridden site.
The percentage is that low?
The impact those accounts have is much higher than a normal 15% slice of the comments. What they produce is generally non-random, so it's all going toward whatever set of ideas they need to bombard with bullshit. They intentionally shut down and/or control discussion.
I still have a few subreddits I passively maintain, and every three days on the most popular one I'm banning some new app someone is shilling to a vulnerable group. It's absolutely disgusting, and it makes me so incredibly angry/jaded how much they're targeted.
The study found that 11% of the respondents had been contacted by a bot or troll attempting to promote a product or service. Even more concerning was the discovery that 13% of the respondents had witnessed a company manipulate public opinion on the platform.
Self-reported garbage. Asking a user to self-identify manipulation is ripe for abuse.
That number has got to be higher than 15%. Everywhere.
From the article...
The study’s demographic analysis further highlighted the targeted nature of corporate trolling. Younger users, particularly those aged 18–29, were significantly more likely to be contacted by corporate trolls, with 17% of them reporting such experiences, compared to only 7% of users aged 65 and over. This age-based discrepancy underscores the strategic approach of corporate trolls in engaging with a demographic that is often more susceptible to their influence.
Wow. Corporations are tagging younger generations as dumb shits. That is not cool.
This is paywalled, can you please post the text?