rule
A rule for the robots.txt file, which tells crawlers which pages can, or can't, be accessed.
A rule consists of a directive, which can be either Allow or Disallow, and a value, which is the associated URL path.
For example:
Disallow: /policies/
You can output a rule directly, instead of referencing each of its properties.
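For instance, inside robots.txt.liquid you can render a rule either whole or from its individual properties. The following is a minimal sketch that assumes the rule comes from a group in robots.default_groups; both loops render lines like Disallow: /policies/.

{% comment %} Output each rule of a group directly {% endcomment %}
{% for rule in group.rules %}
  {{ rule }}
{% endfor %}

{% comment %} Equivalent output, built from the rule's properties {% endcomment %}
{% for rule in group.rules %}
  {{ rule.directive }}: {{ rule.value }}
{% endfor %}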
Tip
You can customize the robots.txt file with the robots.txt.liquid template.
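As a sketch of such a customization, the loop below outputs each group's default rules and then appends an extra rule for the catch-all user agent. It follows the general shape of a robots.txt.liquid template, but the user-agent check and the /search path are illustrative assumptions, not defaults.

{% for group in robots.default_groups %}
  {{- group.user_agent }}
  {%- for rule in group.rules %}
    {{ rule }}
  {%- endfor %}
  {%- comment %} Illustrative extra rule, only for the catch-all user agent {% endcomment %}
  {%- if group.user_agent.value == '*' %}
    {{ 'Disallow: /search' }}
  {%- endif %}
  {%- if group.sitemap != blank %}
    {{ group.sitemap }}
  {%- endif %}
{% endfor %}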
{
  "directive": "Disallow",
  "value": "/*preview_script_id*"
}