There are essentially two options when it comes to keeping search engines from displaying your content.
Option 1 – Robots.txt
The Robots.txt Disallow directive: URLs and/or directories that you don't want crawled are listed in the site's robots.txt file. It is important to note that disallowed pages or directories can still end up in search results; if other sites link to a disallowed URL, search engines may index that URL without ever crawling it.
The robots.txt file is typically stored in the website root: https://examplesite.com/robots.txt
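A minimal robots.txt illustrating the Disallow directive might look like the following. The paths shown are hypothetical placeholders, not paths from this article:

```
# Applies to all crawlers
User-agent: *
# Block crawling of an entire directory (hypothetical path)
Disallow: /private/
# Block crawling of a single page (hypothetical path)
Disallow: /drafts/old-page.html
```

Each Disallow rule only prevents crawling of matching URLs; as noted above, it does not guarantee the URLs stay out of search results.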
Option 2 – Robots Meta Tag (Preferred)
The Robots Meta Tag, which is added to the head section of a given page or sent in the HTTP response header (via X-Robots-Tag). This tells search engines whether they should index (include) the page in search results and follow (utilize) the links on the page in question. Keep in mind that search engines can't read the robots meta tag on a page they are disallowed from crawling.
Example robots meta tag with noindex and nofollow parameters:
<meta name="robots" content="noindex,nofollow">
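For files where you can't edit the HTML head (PDFs, images, and so on), the same directives can be sent as an X-Robots-Tag response header. As a sketch, assuming an Apache server with mod_headers enabled, a rule covering PDF files could look like this:

```
# Apache config or .htaccess (requires mod_headers)
# Hypothetical example: keep all PDF files out of search results
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```

Other servers (nginx, IIS, etc.) have equivalent ways to set the same header.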
Option 2, the robots meta tag with noindex and nofollow parameters, is the better approach. It works especially well when only a small number of pages are involved, and it is directly aimed at keeping those pages out of search results. Option 1 is covered here mainly as a caution: it is a common mistake to implement both, and a Disallow rule prevents crawlers from ever seeing the robots meta tag, effectively overriding it.
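To make the meta-tag mechanics concrete, here is a minimal sketch of how a crawler might extract robots directives from a page's HTML. It uses only the Python standard library, and the sample HTML at the bottom is a hypothetical page, not one from this article:

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects directives from any <meta name="robots"> tags in a page."""

    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            content = attrs.get("content", "")
            # Directives are comma-separated, e.g. "noindex,nofollow"
            self.directives.update(d.strip().lower() for d in content.split(",") if d.strip())


def robots_directives(html):
    """Return the set of robots directives found in the given HTML string."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.directives


# Hypothetical sample page using the tag from this article
html = '<html><head><meta name="robots" content="noindex,nofollow"></head><body></body></html>'
print(robots_directives(html))
```

A crawler that respects these directives would skip indexing this page and ignore its links. Note that this check can only run if the crawler is allowed to fetch the page in the first place, which is exactly why combining it with a robots.txt Disallow rule backfires.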