Dar | Dec 20, 2013 at 2:40 am
I tried:

GET     /robots.txt     Static.Serve("public", "robots.txt")
However, online robots.txt checkers do not validate it.
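A note for anyone hitting the same failure: Static.Serve can only serve a file that actually exists at public/robots.txt in the deployed app, and the route has to appear before any catch-all entry, because Revel matches conf/routes top-down. A minimal sketch of the ordering, assuming a standard Revel layout (the catch-all line is illustrative, not from the thread):

GET     /robots.txt     Static.Serve("public", "robots.txt")
GET     /               App.Index

If public/robots.txt is missing from the Heroku slug (for example, never committed to git), the route 404s and external checkers will report that no robots.txt exists.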
On Thursday, December 19, 2013 11:04:55 PM UTC+5:30, Kyle Lemons wrote:

Add a handler for /robots.txt and generate/print the list of URLs you
don't want to index?
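Below is a minimal sketch of that suggestion, assuming a current Revel import path (github.com/revel/revel; at the time of this thread it was github.com/robfig/revel) and a hypothetical Seo controller; the blocked paths are illustrative, not from the thread:

package controllers

import (
	"bytes"

	"github.com/revel/revel"
)

// Seo is a hypothetical controller name; any Revel controller works.
type Seo struct {
	*revel.Controller
}

// blockedPaths lists URL prefixes to keep crawlers out of;
// the values here are purely illustrative.
var blockedPaths = []string{"/admin/", "/drafts/"}

// RobotsTxt builds robots.txt on the fly, so no static file needs
// to be deployed. Map it in conf/routes with:
//   GET /robots.txt Seo.RobotsTxt
func (c Seo) RobotsTxt() revel.Result {
	var buf bytes.Buffer
	buf.WriteString("User-agent: *\n")
	for _, p := range blockedPaths {
		buf.WriteString("Disallow: " + p + "\n")
	}
	return c.RenderText(buf.String())
}

Because the response is generated in the handler, nothing extra has to be present in the Heroku slug, and RenderText serves the body as plain text, which is what robots.txt checkers expect.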
On Wed, Dec 18, 2013 at 7:15 PM, Dar <[email protected]> wrote:
I am writing a simple blog using Revel and deploying it on Heroku.
However, I want to have a robots.txt file so that not every page gets
crawled by web crawlers. What is the best way to achieve this?