SEO Helpful Pages
8/6/2019 SEO Helpful Pages
Search Engine Optimization (SEO)
1. Use of robots.txt to prevent access to secure web pages
Robots.txt

It is great when search engines frequently visit your site and index your content, but often there are cases when indexing parts of your online content is not what you want. For instance, if you have two versions of a page (one for viewing in the browser and one for printing), you'd rather have the printing version excluded from crawling; otherwise you risk being imposed a duplicate-content penalty. Also, if you happen to have sensitive data on your site that you do not want the world to see, you will prefer that search engines do not index those pages (although in this case the only sure way of not indexing sensitive data is to keep it offline on a separate machine). Additionally, if you want to save some bandwidth by excluding images, stylesheets and JavaScript from indexing, you also need a way to tell spiders to keep away from these items.

One way to tell search engines which files and folders on your Web site to avoid is with the use of the Robots meta tag. But since not all search engines read meta tags, the Robots meta tag can simply go unnoticed. A better way to inform search engines about your preferences is to use a robots.txt file.
Structure of a robots.txt File

The structure of a robots.txt file is pretty simple (and hardly flexible): it is an endless list of user agents and disallowed files and directories. Basically, the syntax is as follows:
User-agent:
Disallow:
User-agent: names the search engines' crawlers the rules apply to, and Disallow: lists the files and directories to be excluded from indexing. In addition to User-agent: and Disallow: entries, you can include comment lines: just put the # sign at the beginning of the line:
# All user agents are disallowed to see the /temp directory.
User-agent: *
Disallow: /temp/
Append /robots.txt to a site's root URL to see which pages are prevented from being accessed:

www.ryman.qa.internal/robots.txt
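To see how a crawler would interpret the rules above, Python's standard urllib.robotparser module can evaluate a robots.txt file against candidate paths (the file contents and paths below are the example from this section):

```python
# Check how a crawler would interpret a robots.txt file, using
# Python's standard urllib.robotparser.
from urllib.robotparser import RobotFileParser

# The example rules from this section, fed in as lines of text.
rules = [
    "# All user agents are disallowed to see the /temp directory.",
    "User-agent: *",
    "Disallow: /temp/",
]

parser = RobotFileParser()
parser.parse(rules)

# Paths under /temp/ are blocked; everything else is allowed.
print(parser.can_fetch("*", "/temp/print-version.html"))  # False
print(parser.can_fetch("*", "/index.html"))               # True
```

This is also a quick way to verify a robots.txt change before deploying it.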
2. Use of 301 Redirecting
301 Redirect
301 redirect is the most efficient and Search Engine Friendly method for webpageredirection. It's not that hard to implement and it should preserve your search engine
rankings for that particular page. If you have to change file names or move pages around, it's the safest option. The code "301" is interpreted as "moved permanently".
You can test your redirection with a Search Engine Friendly Redirect Checker.
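A redirect checker essentially just requests the old URL and inspects the status code and Location header. The self-contained sketch below fakes this end to end with Python's standard library, using a throwaway local server and placeholder URLs:

```python
# Minimal demo of what a redirect checker sees: a server answering 301
# "moved permanently", and a client reading the status and Location header.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # "301" tells clients (and search engines) the page moved permanently.
        self.send_response(301)
        self.send_header("Location", "http://www.example.com" + self.path)
        self.end_headers()

    def log_message(self, format, *args):
        pass  # keep the demo quiet

# Serve on an ephemeral local port in a background thread.
server = HTTPServer(("127.0.0.1", 0), RedirectHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Request the "old" page without following redirects.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/old-page.html")
resp = conn.getresponse()
print(resp.status)                  # 301
print(resp.getheader("Location"))  # http://www.example.com/old-page.html
server.shutdown()
```

If the status were 302 instead of 301, search engines would treat the move as temporary and could keep indexing the old URL.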
Below are a couple of methods to implement URL redirection.
IIS Redirect
- In Internet Services Manager, right-click on the file or folder you wish to redirect
- Select the radio button titled "a redirection to a URL"
- Enter the redirection page
- Check "The exact url entered above" and "A permanent redirection for this resource"
- Click on 'Apply'
ColdFusion Redirect
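A ColdFusion page can issue the 301 itself with two cfheader tags (www.new-url.com below is a placeholder for your new address):

```cfml
<!--- Send a permanent (301) redirect; the value URL is a placeholder. --->
<cfheader statuscode="301" statustext="Moved permanently">
<cfheader name="Location" value="http://www.new-url.com">
```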
PHP Redirect
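In PHP the same redirect is sent with the header() function; it must run before any other output is sent to the browser (www.new-url.com is a placeholder for your new address):

```php
<?php
// Issue the 301 status and the new location; must run before any output.
header("HTTP/1.1 301 Moved Permanently");
header("Location: http://www.new-url.com");
exit;
?>
```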
Redirect to www (htaccess redirect)
Create a .htaccess file with the below code; it will ensure that all requests coming in to domain.com get redirected to www.domain.com. The .htaccess file needs to be placed in the root directory of your old website (i.e. the same directory where your index file is placed).
Options +FollowSymlinks
RewriteEngine on
RewriteCond %{HTTP_HOST} ^domain.com [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,NC]
Please REPLACE domain.com and www.domain.com with your actual domain name.
Note: This .htaccess method of redirection works ONLY on Linux servers running Apache with the mod_rewrite module enabled.
How to Redirect HTML
Please refer to the section titled 'Redirect to www (htaccess redirect)' if your site is hosted on a Linux server, and 'IIS Redirect' if your site is hosted on a Windows server.
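The only redirect plain HTML itself offers is a meta refresh, which search engines generally do not treat as a permanent (301) redirect, so the server-side methods above are preferable; a sketch, with www.example.com as a placeholder:

```html
<!-- Client-side redirect after 0 seconds; not equivalent to a 301. -->
<meta http-equiv="refresh" content="0; url=http://www.example.com/">
```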
3. Canonical Tag

A canonical tag is a simple piece of HTML code that you insert into the <head> section of a duplicate page, letting the search engines know that they are on a duplicate page, that they need to find the original content elsewhere, and where to find it.
Canonicalization is the process of picking the best URL when there are several choices, and it usually refers to home pages. For example, most people would consider these the same URLs:
- www.example.com
- example.com/
- www.example.com/index.html
- example.com/home.asp
But technically all of these URLs are different. A web server could return completely different content for each of the URLs above. When Google canonicalizes a URL, it tries to pick the URL that seems like the best representative of that set.
The syntax is pretty simple: an ugly URL such as http://www.example.com/page.html?sid=asdf314159265 can specify, in the HEAD part of the document, the following:

<link rel="canonical" href="http://example.com/page.html" />
That tells search engines that the preferred location of this URL (the "canonical" location, in search engine speak) is http://example.com/page.html instead of http://www.example.com/page.html?sid=asdf314159265.