Documentation

Unexpected robots.txt Content

The robots.txt file tells search engine crawlers which URLs they may crawl and which they should avoid. It is commonly used to keep specific paths or domains from being crawled, which in turn helps prevent them from being added to search engine indexes and appearing in public search results.

When a robots.txt file is requested via a vanity servd.dev domain, we inject a special response which instructs search engines not to perform any crawling. This prevents the vanity domain from being indexed and from negatively impacting the SEO standing of your 'real' domains by being detected as duplicate content.
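The injected response behaves like a deny-all robots.txt. The exact content Servd serves may differ; this is a representative sketch of a rule set that blocks all crawling:

```
# Applies to every crawler
User-agent: *
# Disallow crawling of the entire site
Disallow: /
```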

When you access your project using any domain other than those ending in servd.dev, the robots.txt request is handled normally: either served from a static file within your repo, or delegated to Craft and handled by a plugin such as SEOMatic.
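If you choose the static-file route, a typical robots.txt committed to your repo's web root might look like the following. The paths and sitemap URL here are placeholders, not Servd or Craft defaults:

```
# Applies to every crawler
User-agent: *
# Example: keep Craft's control-panel resources out of crawls
Disallow: /cpresources/
# Point crawlers at your sitemap (replace with your real domain)
Sitemap: https://example.com/sitemap.xml
```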