Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others



robots.txt: allow a slug but disallow its sub-paths

I want to achieve this behavior:

Allow: /plans and Disallow: /plans/*

Crawl: www.example.com/plans

Do not crawl: www.example.com/plans/*



1 Reply


It would be:

Allow: /plans$
Disallow: /plans/

Entries are assumed to have a trailing wildcard, so /plans/ and /plans/* mean the same thing. However, this also means that a bare /plans would match /plansandstuff as well. That can be avoided with $, which matches "end of path".
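The longest-match semantics used by major crawlers (the most specific matching rule wins, with * as a wildcard and $ as an end-of-path anchor) can be sketched in Python. This is a simplified illustration, not a full robots.txt parser; the rule list mirrors the Allow/Disallow pair from the answer:

```python
import re

def rule_to_regex(pattern: str) -> re.Pattern:
    # Translate a robots.txt path pattern into a regex:
    # '*' matches any run of characters; a trailing '$' anchors end-of-path.
    anchored = pattern.endswith("$")
    body = pattern[:-1] if anchored else pattern
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
    return re.compile("^" + regex + ("$" if anchored else ""))

def allowed(path: str, rules: list[tuple[str, str]]) -> bool:
    # Longest matching rule wins; with no matching rule, the path is allowed.
    best = None
    for directive, pattern in rules:
        if rule_to_regex(pattern).match(path):
            if best is None or len(pattern) > len(best[1]):
                best = (directive, pattern)
    return best is None or best[0] == "Allow"

rules = [("Allow", "/plans$"), ("Disallow", "/plans/")]
print(allowed("/plans", rules))          # True  (exact match on /plans$)
print(allowed("/plans/basic", rules))    # False (caught by /plans/)
print(allowed("/plansandstuff", rules))  # True  (/plans/ does not prefix it)
```

Note that Python's built-in urllib.robotparser does not honor the $ anchor, so checking these rules requires matching them yourself as above.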

See also: Robots.txt Specification

Keep in mind that the robots.txt file is advisory, and not all crawlers honor it.



...