submitted 4 months ago by CynicusRex@lemmy.ml to c/privacy@lemmy.ml
[-] Asudox@lemmy.world 111 points 4 months ago

Block? Nope, robots.txt does not block the bots. It's just a text file that says: "Hey robot X, please do not crawl my website. Thanks :>"

[-] Oha@lemmy.ohaa.xyz 60 points 4 months ago

I disallow a page in my robots.txt and IP-ban everyone who goes there. That's pretty effective.
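A minimal sketch of that kind of trap entry (the path is made up; any URL that no legitimate page links to will do):

```
User-agent: *
Disallow: /bot-trap/
```

Well-behaved crawlers never request `/bot-trap/`, so any IP that shows up there in the access log either read robots.txt and ignored it, or followed a link it was told not to.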

[-] JackbyDev@programming.dev 16 points 4 months ago

Did you ban it in your humans.txt too?

[-] bountygiver@lemmy.ml 18 points 4 months ago* (last edited 4 months ago)

humans typically don't visit [website]/fdfjsidfjsidojfi43j435345 when there's no button that links to it

[-] Avatar_of_Self@lemmy.world 16 points 4 months ago

I used to do this on one of my sites that was moderately popular in the 00's. I had a link hidden via javascript, so a user couldn't click it (unless they disabled javascript and clicked it), though it was hidden pretty well for that too.

IP hits would be put into a log and my script would add a /24 of that subnet into my firewall. I allowed specific IP ranges for some search engines.

Anyway, it caught a lot of bots. I really just wanted to stop automated attacks and spambots on the web front.

I also had a honeypot port that basically did the same thing. If you sent packets to it, your /24 was added to the firewall for a week or so. I think I just used netcat to add to yet another log and wrote a script to add those /24's to iptables.

I did it because I had so much bad noise on my logs and spambots, it was pretty crazy.
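A rough reconstruction of that kind of setup; the port, log path, and parsing here are illustrative, not the original script:

```shell
#!/bin/sh
# Honeypot sketch: anything that connects to an unused port gets its
# whole /24 dropped. Port number and log path are made up.

TRAP_PORT=2444
BAN_LOG=/var/log/honeypot-bans.log

# Reduce an IPv4 address to its /24 network, e.g. 203.0.113.77 -> 203.0.113.0/24
to_slash24() {
    echo "$1" | awk -F. '{printf "%s.%s.%s.0/24\n", $1, $2, $3}'
}

# Log the subnet and drop it in the firewall.
ban_subnet() {
    subnet=$(to_slash24 "$1")
    echo "$(date -u '+%Y-%m-%dT%H:%M:%SZ') $subnet" >> "$BAN_LOG"
    iptables -I INPUT -s "$subnet" -j DROP
}

# Main loop (commented out; needs root and a netcat that reports the peer):
# while :; do
#     ip=$(nc -l -p "$TRAP_PORT" -w 5 -v 2>&1 >/dev/null \
#         | grep -oE '[0-9]+(\.[0-9]+){3}' | head -n1)
#     [ -n "$ip" ] && ban_subnet "$ip"
# done
```

Expiring the bans after a week, as described above, could be a cron job that flushes the chain, or `ipset` with its `timeout` option instead of raw iptables rules.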

[-] Mikelius@lemmy.ml 10 points 4 months ago

This thread has provided genius ideas I somehow never thought of, and I'm totally stealing them for my sites lol.

[-] JackbyDev@programming.dev 14 points 4 months ago* (last edited 4 months ago)

I LOVE VISITING FDFJSIDFJSIDOJFI435345 ON HUMAN WEBSITES, IT IS ONE OF MY FAVORITE HUMAN HOBBIES. ~~🤖~~👨

[-] LazaroFilm@lemmy.world 9 points 4 months ago

Can you explain this more?

[-] JackbyDev@programming.dev 25 points 4 months ago

Imagine posting a rule that says "do not walk on the grass" among other rules and then banning anyone who steps on the grass with the thought process that if they didn't obey that rule they were likely disobeying other rules. Except the grass is somewhere that no one would see unless they actually read the rules. The rules were the only place that mentioned that grass.

[-] vk6flab@lemmy.radio 7 points 4 months ago

Is the page linked in the site anywhere, or just mentioned in the robots.txt file?

[-] Oha@lemmy.ohaa.xyz 10 points 4 months ago
[-] vk6flab@lemmy.radio 8 points 4 months ago

Excellent.

I think I might be able to create a fail2ban rule for that.
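Something like this might do it; the filter name, trap path, and logpath are just examples to adapt:

```
# /etc/fail2ban/filter.d/bot-trap.conf
[Definition]
failregex = ^<HOST> .*"GET /fdfjsidfjsidojfi43j435345

# /etc/fail2ban/jail.local
[bot-trap]
enabled  = true
port     = http,https
filter   = bot-trap
logpath  = /var/log/nginx/access.log
maxretry = 1
bantime  = 86400
```

With `maxretry = 1`, a single hit on the trap path is enough to get banned.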

[-] Asudox@lemmy.world 5 points 4 months ago

Not sure if that is effective at all. Why would a crawler check the robots.txt if it's programmed to ignore it anyways?

[-] Oha@lemmy.ohaa.xyz 16 points 4 months ago

Because many crawlers seem to explicitly crawl "forbidden" sites

[-] Crashumbc@lemmy.world 3 points 4 months ago

Google and script kiddies copying code...

[-] AceFuzzLord@lemm.ee 4 points 4 months ago

I doubt it'd be possible in most any way due to lack of server control, but I'm definitely gonna have to look this up to see if anything similar could be done on a neocities site.

[-] ExtremeDullard@lemmy.sdf.org 45 points 4 months ago

Robots.txt is honor-based and Big Data has no honor.

[-] CynicusRex@lemmy.ml 13 points 4 months ago

Unfortunate indeed.

“Can AI bots ignore my robots.txt file? Well-established companies such as Google and OpenAI typically adhere to robots.txt protocols. But some poorly designed AI bots will ignore your robots.txt.”

[-] breadsmasher@lemmy.world 23 points 4 months ago

> typically adhere

But they don't have to follow it.

> poorly designed AI bots

Is it poor design if it's explicitly a design choice to ignore it entirely and scrape as much data as possible? I'd argue it's more that these AI bots are designed to scrape everything regardless of robots.txt. That's the intention. Asshole design vs. poor design.

[-] majestictechie@lemmy.fosshost.com 7 points 4 months ago

This is why I block in an .htaccess:

```
# Bot Agent Block Rule
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (BOTNAME|BOTNAME2|BOTNAME3) [NC]
RewriteRule (.*) - [F,L]
```
[-] mox@lemmy.sdf.org 26 points 4 months ago

This article lies to the reader, so it earns a -1 from me.

[-] vk6flab@lemmy.radio 24 points 4 months ago

This does not block anything at all.

It's a 1994 "standard" that requires voluntary compliance and the user-agent is a string set by the operator of the tool used to access your site.

https://en.m.wikipedia.org/wiki/Robots.txt

https://en.m.wikipedia.org/wiki/User-Agent_header

In other words, the bot operator can ignore your robots.txt file and if you check your webserver logs, they can set their user-agent to whatever they like, so you cannot tell if they are ignoring you.

[-] fubarx@lemmy.ml 23 points 4 months ago
[-] digdilem@lemmy.ml 23 points 4 months ago

robots.txt does not work. I don't think it ever has - it's an honour system with no penalty for ignoring it.

I have a few low traffic sites hosted at home, and when a crawler takes an interest they can totally flood my connection. I'm using cloudflare and being incredibly aggressive with my filtering but so many bots are ignoring robots.txt as well as lying about who they are with humanesque UAs that it's having a real impact on my ability to provide the sites for humans.

Over the past year it's got around ten times worse. I woke up this morning to find my connection at a crawl and on checking the logs, AmazonBot has been hitting one site 12000 times an hour, and that's one of the more well-behaved bots. But there's thousands and thousands of them.

[-] nullPointer@programming.dev 19 points 4 months ago

robots.txt will not block a bad bot, but you can use it to lure the bad bots into a "bot-trap" so you can ban them in an automated fashion.

[-] dgriffith@aussie.zone 9 points 4 months ago

I'm guessing something like:

Robots.txt: Do not index this particular area.

Main page: invisible link to particular area at top of page, with alt text of "don't follow this, it's just a bot trap" for screen readers and such.

Result: any access to said particular area equals insta-ban for that IP. Maybe just for 24 hours so nosy humans can get back to enjoying your site.
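The hidden link itself might look something like this (trap path reused from the example earlier in the thread; the off-screen positioning is one common way to hide it from sighted users while screen readers still announce the warning):

```html
<!-- Bot trap: visually hidden, but screen readers still read the warning -->
<a href="/fdfjsidfjsidojfi43j435345" style="position:absolute;left:-9999px">
  don't follow this, it's just a bot trap
</a>
```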

[-] doodledup@lemmy.world 2 points 4 months ago

Problem is that you're also blocking search engines from indexing your site, no?

[-] Oha@lemmy.ohaa.xyz 8 points 4 months ago

Nope. Search engines should follow the robots.txt

[-] mox@lemmy.sdf.org 5 points 4 months ago* (last edited 4 months ago)

> Robots.txt: Do not index this particular area.

> Problem is that you're also blocking search engines from indexing your site, no?

No. That's why they wrote "this particular area".

The point is to have an area of the site that serves no purpose other than to catch bots that ignore the rules in robots.txt. Legit search engine indexers will respect directives in robots.txt to avoid that area; they will still index everything else. Bad bots will ignore the directives, index the forbidden area anyway, and by doing so, reveal themselves in the server logs.

That's the trap, aka honeypot.
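Spotting them in the logs can then be a one-liner; a sketch assuming combined-format access logs (client IP in field 1, request path in field 7) and the example trap path from earlier in the thread:

```shell
# trap_hits: list unique client IPs that requested the trap path, busiest first.
# Log location and trap path are illustrative.
trap_hits() {
    awk '$7 == "/fdfjsidfjsidojfi43j435345" {print $1}' "${1:-/var/log/nginx/access.log}" \
        | sort | uniq -c | sort -rn
}

# Example: trap_hits /var/log/nginx/access.log
```

The output feeds naturally into whatever ban mechanism you already have (iptables, ipset, a firewall API).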

[-] JackbyDev@programming.dev 2 points 4 months ago

Not if they obeyed the rules

[-] 5opn0o30@lemmy.world 16 points 4 months ago

Wow. A lot of cynicism here. The AI bots are (currently) honoring robots.txt so this is an easy way to say go away. Honeypot urls can be a second line of defense as well as blocking published IP ranges. They’re no different than other bots that have existed for years.

[-] digdilem@lemmy.ml 9 points 4 months ago* (last edited 4 months ago)

In my experience, the AI bots are absolutely not honoring robots.txt - and there are literally hundreds of unique ones. Everyone and their dog has unleashed AI/LLM harvesters over the past year without much thought to the impact to low bandwidth sites.

Many of them aren't even identifying themselves as AI bots, but faking human user-agents.

[-] breadsmasher@lemmy.world 10 points 4 months ago

It isn't an enforceable solution. robots.txt and similar are just "please, bots, don't index these pages." It doesn't mean any bots will respect it.

[-] CynicusRex@lemmy.ml 10 points 4 months ago

#TL;DR:

```
User-agent: GPTBot
Disallow: /
User-agent: ChatGPT-User
Disallow: /
User-agent: Google-Extended
Disallow: /
User-agent: PerplexityBot
Disallow: /
User-agent: Amazonbot
Disallow: /
User-agent: ClaudeBot
Disallow: /
User-agent: Omgilibot
Disallow: /
User-agent: FacebookBot
Disallow: /
User-agent: Applebot
Disallow: /
User-agent: anthropic-ai
Disallow: /
User-agent: Bytespider
Disallow: /
User-agent: Claude-Web
Disallow: /
User-agent: Diffbot
Disallow: /
User-agent: ImagesiftBot
Disallow: /
User-agent: Omgili
Disallow: /
User-agent: YouBot
Disallow: /
```
[-] mox@lemmy.sdf.org 7 points 4 months ago

Of course, nothing stops a bot from picking a user agent field that exactly matches a web browser.

[-] JackbyDev@programming.dev 3 points 4 months ago

Nothing stops a bot from choosing to not read robots.txt

this post was submitted on 18 Aug 2024
77 points (76.9% liked)
