If you work in web development, run a blog, or manage any website, you have surely heard the term SEO. A lot of information about SEO is available on the Internet, and some of it is published in the name of SEO while not actually being related to SEO. In this article, we describe in detail all the elements of a website that Lighthouse checks in its SEO audit, i.e. all parts of SEO by Lighthouse.
On a PC or laptop, Lighthouse can give you a detailed report of your website (or any website) in just a minute or two.
To run it, open the website in your browser, then right-click on any blank section of the page and click Inspect. In the DevTools panel that opens, click the double arrow at the top right, then click Lighthouse. You can now generate a detailed report of that website for mobile or desktop.
Lighthouse provides detailed reports of any webpage in five sections. And those sections are as follows:
1. Performance
2. Accessibility
3. Best Practices
4. SEO
5. Progressive Web App (PWA)
All the elements of a website or webpage are divided into these five sections. In this article, we give detailed information on the elements included in the SEO section.
SEO of a web-page
The SEO checks for a webpage verify whether the page follows basic search engine optimization advice. They cover the following elements of a webpage:
1. Structured data is valid.
2. The viewport meta tag is used properly.
3. The <title> element is used properly.
4. The meta description is used properly.
5. The page has a successful HTTP status code.
6. All links on the page have descriptive text.
7. All main links and image links of the page are crawlable.
8. The page is not blocked from indexing.
9. The robots.txt of the page or website is valid.
10. All images on the page have [alt] attributes.
11. The page or document has a valid hreflang.
12. The page or document has a valid rel=canonical.
13. The page content is free of browser plugins.
14. The document uses legible font sizes.
15. Tap targets are sized appropriately.
Let us now discuss all these SEO elements one by one. In WordPress, all these tasks are done through plugins.
A. Validating the structured data of the web page
A website or webpage contains many kinds of structured data, attached in schema markup format, such as NewsArticle or BlogPosting, Breadcrumb, Recipe, Organization, etc. There should not be any mistakes in them. Test the page with the Structured Data Testing Tool or the Rich Results Test to confirm that the structured data is valid. Any errors in structured data are also reported in Google Search Console.
These tools detect the many types of mistakes that can occur in structured data and also suggest how to fix them.
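As an illustration only (the headline, dates, and author are placeholder values, not taken from this article), a minimal BlogPosting schema in JSON-LD format could look like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "All Parts of SEO by Lighthouse",
  "datePublished": "2023-01-15",
  "dateModified": "2023-02-01",
  "author": { "@type": "Person", "name": "Example Author" }
}
</script>
```

Pasting this markup into the Rich Results Test is a quick way to see how validation errors are reported.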
B. Using Valid Viewport Meta Tags
Search engines rank pages partly on how mobile-friendly they are. Without a viewport meta tag, a mobile device renders the webpage at a normal desktop screen width and then scales it down, making the text difficult to read.
By setting the viewport meta tag in your website theme, you can control the width and scaling of the viewport so that it displays correctly on all devices.
Only one viewport meta tag is needed between the <head> and </head> of the theme; it then applies to every page and post served from that website. For example:
<!DOCTYPE html><html lang="en"><head><meta name="viewport" content="width=device-width, initial-scale=1"></head><body></body></html>
width=device-width sets the width of the viewport to the width of the device.
initial-scale=1 sets the initial zoom level when the user first navigates to the page.
Website operators generally configure the viewport meta tag as they wish. But for SEO as well as accessibility best practice, the maximum scale is kept between 5 and 10, so that users can also zoom in to view the content.
<meta content='width=device-width, initial-scale=1, minimum-scale=1, maximum-scale=6' name='viewport'/>
C. Using the <title> element in a web page
The title element gives search engines and users an overview of the page, and search engine users rely heavily on it to determine whether a page is relevant to their search. For example:
<!doctype html><html lang="en"><head><title>The 10-Week Training Program for Your First Job</title></head><body></body></html>
The title you type for your post becomes the title element, but it has to be defined in the theme of the Blogger website. Between the <head> and </head> of the website’s theme, only one templated title element is added; it generates the title element for every page and post served from that website. That templated title element is:
<title><data:view.title.escaped/></title>
Tips to Create SEO-Friendly Titles
Use a unique, relevant title for each page. Avoid vague titles like “Home.”
Make titles descriptive and concise (up to about 70 characters).
Avoid keyword stuffing; it can cause search engines to treat the page negatively.
D. Proper use of meta description in webpage
A meta description provides a summary of a page’s content that search engines include in search results. A high-quality, unique meta description makes your page more relevant and can increase your search traffic.
When you publish a post, be sure to add a short summary of the post in the meta description section (in the post editing dashboard), in a maximum of about 150–160 characters.
If you use Blogger.com, first go to Settings and turn on Meta Description, so that the meta description field becomes available for posts and pages.
Between the <head> and </head> of the website’s theme, only one templated meta description tag is added; it generates the meta description tag for every page and post served from that website. The templated meta description tag used in a Blogger theme is as follows:
<meta expr:content='data:blog.metaDescription' name='description'/>
E. Ensuring the page returns a successful HTTP status code
Servers return a three-digit HTTP status code for every web page requested from them. Status codes in the 400 and 500 ranges (failed or unsuccessful HTTP status codes) indicate that the requested resource has an error and cannot be served. If a search engine crawler encounters such a status code while crawling a web page, it cannot index that page. Failed HTTP status codes are often the result of missing or unintentionally incorrect server settings.
Fixing an Unsuccessful HTTP status code
Some web pages, such as a 404 page or any other page that shows an error or has been unpublished, should not be included in search results.
To fix an HTTP status code error, see the documentation for your server or hosting provider. The server should return a 200 status code for all valid URLs, or a 301 (or other 3xx redirect) status code for a resource that has moved to another URL.
F. Giving all the links on the page descriptive text
Descriptive link text helps search engines understand your content. Link text is the clickable word or phrase in a hyperlink. When the link text clearly states the goal of the hyperlink, both users and search engines can more easily understand your content and how it relates to other pages.
How to Add Descriptive Link Text
Use relevant phrases instead of common phrases such as “click here” and “learn more” used in hyperlinks. In general, write link text that clearly indicates what type of content users will find if they follow the hyperlink.
Hyperlinks containing the phrase “click here” etc. do not indicate where these hyperlinks will take users. Whereas a hyperlink containing the phrase “Learn ten specific benefits of eating apples” clearly states that the hyperlink will take users to a page that describes ten specific benefits of eating apples.
Also don’t use the URL of the page as a link description unless you have a good reason to do so, such as referencing a new address for a site.
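To compare the two styles side by side (the URL is a placeholder):

```html
<!-- Vague: gives no hint about the destination -->
<p>To learn about apples, <a href="https://example.com/apples">click here</a>.</p>

<!-- Descriptive: states what the reader will find -->
<p>Learn <a href="https://example.com/apples">ten specific benefits of eating apples</a>.</p>
```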
7. All main links and image links of the page should be crawlable
Search engines use the ‘href’ attribute on links to crawl websites. Make sure that the ‘href’ attribute of anchor elements points to an appropriate destination, so that more pages of the site can be discovered.
Google’s crawlers follow a link only if it is an <a> tag with an href attribute. Links that use other formats, such as navigation driven purely by JavaScript, are not followed by Google’s crawlers.
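As an illustration, here are common patterns of links that crawlers can and cannot follow (the URLs are placeholders):

```html
<!-- Crawlable: <a> elements with a resolvable href -->
<a href="https://example.com/page">Crawlable link</a>
<a href="/relative/path">Also crawlable</a>

<!-- Not crawlable: no href, or navigation handled only by JavaScript -->
<a onclick="window.location='https://example.com/page'">Not crawlable</a>
<span class="link">Not a link at all</span>
```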
G. Page should not be blocked from indexing
Search engines are unable to include your pages in search results if they do not have permission to crawl them.
Search engines may show pages in their search results only if those pages do not explicitly block indexing by search engine crawlers. Certain HTTP headers and meta tags tell crawlers that a page should not be indexed.
Block the indexing of only content that you do not want to appear in search results. Some pages, such as sitemaps or legal content, generally should not be indexed. (Keep in mind that blocking indexing does not prevent users from accessing a page if they know its URL.)
For example, the following <meta> element, if embedded in a page, prevents all search engine crawlers (also known as robots) from indexing your page:
<meta name="robots" content="noindex" />
This HTTP response header also blocks all crawlers:
X-Robots-Tag: noindex
You can also have <meta> elements that block specific crawlers, such as:
<meta name="AdsBot-Google" content="noindex"/>
Make sure search engines can crawl your page
For the pages you want indexed, remove any HTTP headers or <meta> elements that block search engine crawlers. Depending on how you set up your site, you may need to perform some or all of the following steps:
If you set an X-Robots-Tag HTTP response header, remove the noindex directive from it.
If you use blogger.com, go to Settings, then Crawlers and indexing, and make sure that noindex is not enabled in the custom robots header tags for the home page, archive, and post/page tags.
If your theme has ‘noindex’ in the content of a robots meta tag or a specific crawler’s meta tag, replace it with ‘index’.
H. Keep the robots.txt of the website valid
If your website’s robots.txt file is malformed, search crawlers will not be able to understand how you want your website to be crawled or indexed.
The robots.txt file on your website tells search engines which pages on your site they can crawl. An invalid robots.txt configuration can cause two types of problems:
1. It can prevent search engines from crawling public pages, making your content appear less frequently in search results.
2. This can cause search engines to crawl pages that you do not want to appear in search results.
Function of User-agent: * in robots.txt
It addresses all robots (Googlebot, Googlebot-Mobile, Googlebot-Image, Googlebot-News, Bingbot, Yahoo’s crawler, etc.): the rules that follow apply to every crawler.
The function of Disallow: in robots.txt
An empty Disallow: value blocks nothing, so it allows robots to crawl each and every piece of content (posts, images, videos, hyperlinks) of the site.
The function of Disallow: / in robots.txt
It prevents robots from crawling any content URL (post, image, video, hyperlink) of the site under that domain.
The function of Allow: / in robots.txt
It explicitly allows robots to crawl every content URL (post, image, video, hyperlink) of the site under that domain.
(Keep in mind that if you are using blogger.com, then whether you use a custom domain or the default Blogspot domain, the photos and videos you upload are served from Blogger’s separate content domain rather than your own. Therefore, use the empty Disallow: form instead of Allow: / for better indexing of content; otherwise the photos and videos you upload may not be indexed in search results.)
Keep robots.txt smaller than 500 KiB
Search engines may stop processing robots.txt midway through if the file is larger than 500 KiB. To keep robots.txt small, focus less on individually excluded pages and more on broader patterns. For example, if you need to block the crawling of all PDF files, don’t disallow each individual file. Instead, disallow all URLs containing .pdf by using Disallow: /*.pdf.
Sitemap files are a great way to tell search engines about the pages on your website, so be sure to put the XML sitemap URL in robots.txt. A sitemap file contains a list of URLs on your website, along with information about when they were last changed.
If you choose to submit a sitemap file in robots.txt, be sure to use the complete (absolute) XML sitemap URL.
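Putting the directives above together, a minimal robots.txt could look like this (the domain and sitemap path are placeholders):

```text
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml
```

This configuration lets all crawlers reach every URL and points them at the sitemap.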
I. Add [alt] Attributes to Images
Whatever image you use in your post or webpage, you must add alt text (and optionally title text) to it. Only purely decorative images may use an empty alt attribute, which tells crawlers and screen readers to ignore them.
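For example (the file names and alt text are hypothetical):

```html
<!-- Informative image: descriptive alt text -->
<img src="apple-tart.jpg" alt="Apple tart on a wooden table" title="Apple tart">

<!-- Decorative image: empty alt so crawlers and screen readers skip it -->
<img src="divider.png" alt="">
```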
J. The document must have a valid hreflang
Hreflang tags tell search engines the URLs of all language or region versions of a page, so that they can list the correct version in search results for each language or region.
The hreflang value must always contain a language code in ISO 639-1 format. The hreflang value may also include an optional region code.
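For example, a site published in several languages could declare its alternate versions like this (the URLs are placeholders):

```html
<link rel="alternate" hreflang="en" href="https://example.com/en/" />
<link rel="alternate" hreflang="en-GB" href="https://example.com/en-gb/" />
<link rel="alternate" hreflang="hi" href="https://example.com/hi/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```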
For better SEO, only one templated hreflang tag is added between the <head> and </head> of the website theme; it generates the hreflang tag for every webpage of that website.
For pages that allow users to choose their own language, use x-default:
<link rel="alternate" expr:href="data:blog.url" hreflang="x-default" />
K. The webpage must have a valid rel=canonical
Canonical links suggest to search engines which URL to appear in search results.
When multiple pages contain similar content, search engines consider them to be duplicate versions of the same page. For example, the desktop and mobile versions of a product page are often considered duplicates. But if it has a canonical link, it will not be considered a duplicate. Note that the canonical link must contain the main URL of that page.
Add Canonical Links to Your Pages
For better SEO, only one templated canonical tag is added between the <head> and </head> of the website theme; it generates the canonical tag for every webpage of that website. The templated canonical tag used in a Blogger theme is as follows:
<link expr:href='data:blog.url' rel='canonical'/>
L. Don’t Use Plugins in Web-page Content
Search engines cannot index plugin content, and many devices restrict or do not support plugins.
Search engines often cannot index content that relies on browser plugins, such as Java or Flash. This means that plugin-based content does not appear in search results.
To convert plugin-based content to HTML, refer to and use the guidance for that plugin.
M. Use a legible font size in the document
Font sizes smaller than 12px are too small to be legible and force mobile visitors to “pinch to zoom” in order to read. So try to keep the body text of the page between 12px and 20px.
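A minimal sketch in CSS, assuming a 16px base size (a common comfortable default, not a Lighthouse requirement):

```css
/* Keep body text legible on mobile: at least 12px, 16px as a comfortable base */
body {
  font-size: 16px;
  line-height: 1.5;
}

/* Fine print should still not drop below 12px */
small, .caption {
  font-size: 12px;
}
```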
N. Size tap targets appropriately
Tap targets are areas or places on a web page that users of touch devices can interact with. Buttons, links, and form elements all have tap targets.
Interactive elements such as buttons and links should be large enough (48x48px) and have enough space around them to be easy to tap without overlapping other elements. The distance between tap targets must also be at least 8 pixels.
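One way to sketch this in CSS (the class names are hypothetical):

```css
/* Make buttons and link-buttons at least 48x48px */
button, .button, a.nav-link {
  display: inline-block;
  min-width: 48px;
  min-height: 48px;
  padding: 12px;
  /* keep at least 8px of space between adjacent tap targets */
  margin: 8px;
}
```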
In this article, we have described in detail, according to Lighthouse, all the elements involved in SEO, through which you can learn the important facts related to SEO. If you do not understand anything or have any problem related to SEO, write it in the comment box below.
Able people, encourage us by donating.
Thanks for reading “All Parts of SEO by Lighthouse”. Please share this article. If it helped you, do write your feedback in the comment box.