This is an SEO cheat sheet repository
Search Engine Optimization (SEO) is the process of optimizing a website or web page to improve its visibility and ranking in search engine results pages (SERPs). By optimizing your website for search engines, you can increase the chances of your website appearing at the top of the SERPs for relevant keywords, which can lead to more traffic and higher conversion rates.
- Key Elements of SEO
- Tips for SEO Success
- What are web crawlers?
- How does Googlebot work?
- HTTP Status Codes
- robots.txt file
- XML Sitemaps
- Special Meta Tags for Search Engines
- Canonical Tags
- Rendering Strategies
- URL Structure
- Important meta tags for SEO
- Open Graph Protocol
- Page Structure
- Core Web Vitals
- Lighthouse
- Keyword research: Identifying the keywords and phrases that your target audience is searching for and incorporating them into your website's content, meta tags, and URLs.
- On-page optimization: Optimizing the content, structure, and code of your website to ensure it is easily crawlable and understandable by search engines.
- Off-page optimization: Building high-quality backlinks from other websites to improve your website's authority and visibility in the SERPs.
- Technical SEO: Ensuring that your website is technically sound and able to be crawled and indexed by search engines.
- Measuring and tracking: Analyzing your website's performance in the SERPs and using tools such as Google Analytics to track your website's traffic and conversions.
- Create high-quality, unique content that provides value to your audience.
- Use header tags (H1, H2, etc.) to structure your content and make it easily readable.
- Optimize your images and videos by using descriptive file names and alt tags (see the example after this list).
- Use internal linking to help search engines understand the structure of your website.
- Create a sitemap and submit it to search engines.
- Use social media to promote your website and build backlinks.
- Make sure your website is mobile-friendly and loads quickly.
- Monitor your website's performance in the SERPs and make adjustments as needed.
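As a quick illustration of the image tip above, a well-optimized image might look like the snippet below; the file name, alt text, and dimensions are placeholder examples:

```html
<!-- A descriptive file name and alt text tell search engines what the image shows -->
<img
  src="/images/golden-retriever-puppy.jpg"
  alt="Golden retriever puppy playing in the grass"
  width="800"
  height="600"
/>
```

Explicit width and height attributes also let the browser reserve space for the image, which helps avoid layout shift (see Core Web Vitals below).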
Note:
- SEO is not a one-time process; it requires continuous effort and monitoring to stay ahead of the competition.
- It is important to follow the guidelines and best practices of search engines and avoid manipulative tactics that can get your website penalized.
- SEO results take time and patience; it is not a quick-fix solution.
- It is important to stay updated with the latest SEO trends, algorithm updates, and best practices.
Web crawlers, also known as spiders, bots, or web robots, are automated programs that crawl the web to index content for search engines. They follow links from one page to another and collect information about the content on each page. The information collected by web crawlers is used to build search engine indexes.
Note: Googlebot is the web crawler used by Google.
- Find URLs: Googlebot finds new URLs by following links from one page to another.
- Add to Crawl Queue: Googlebot adds the URLs to a crawl queue to be crawled.
- HTTP Request: The crawler makes an HTTP request to get the headers and acts according to the returned status code.
- Render Queue: If the page relies on client-side JavaScript for its content, the URL may be added to a render queue. Rendering is more costly for Google because it has to spend extra resources executing JavaScript, so only a small share of the pages on the internet get rendered this way.
- Ready to be indexed: If all criteria are met, the page may be eligible to be indexed and shown in search results.
Robots meta tags are directives that compliant search engine crawlers, such as Googlebot, respect. Adding these tags gives you finer control over how your website is indexed.
- Noindex: This tag tells search engines not to index the page. This is useful for pages that should not appear in search results, such as pages under construction or settings pages.
- Nofollow: This tag tells search engines not to follow the links on the page.
<meta name="robots" content="noindex, nofollow" />
- Googlebot: This tag tells Googlebot specifically not to index the page or follow its links.
<meta name="googlebot" content="noindex, nofollow" />
- nositelinkssearchbox: This tag tells Google not to show the sitelinks search box for the page.
<meta name="google" content="nositelinkssearchbox" />
- notranslate: This tag tells Google not to offer a translation of the page in search results.
<meta name="google" content="notranslate" />
Full list of Googlebot meta tags: Googlebot Meta Tags
A canonical URL is the URL of the page that search engines think is most representative from a set of duplicate pages on your site.
It is useful to use canonical tags when you have multiple URLs that point to the same page.
<link rel="canonical" href="https://www.example.com/" />
This is a list of rendering strategies from the most SEO friendly to the least.
- Static Site Generation (SSG): The content is generated at build time and served as static files. This is the most SEO-friendly rendering strategy.
- Server-Side Rendering (SSR): The content is generated at request time and served as HTML.
- Incremental Static Regeneration (ISR): Static generation applied on a per-page basis, so individual pages can be re-generated after the site is built (see the sketch after this list). Learn more about Incremental Static Regeneration in Next.js.
- Client-Side Rendering (CSR): The server sends JavaScript and the content is generated in the browser. This is the least SEO-friendly rendering strategy, but sometimes it is the only option.
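For reference, here is a minimal sketch of what ISR could look like in a Next.js pages-router project. The API URL and data shape are hypothetical; getStaticProps, getStaticPaths, and revalidate are the actual Next.js primitives.

```jsx
// pages/blog/[slug].js: a hypothetical blog post page using ISR
export async function getStaticPaths() {
  // Pre-render no paths at build time; generate each page on its first request
  return { paths: [], fallback: 'blocking' }
}

export async function getStaticProps({ params }) {
  // Hypothetical API; replace with your real data source
  const res = await fetch(`https://api.example.com/posts/${params.slug}`)
  const post = await res.json()

  return {
    props: { post },
    // Re-generate this page in the background at most once every 60 seconds
    revalidate: 60,
  }
}

export default function Post({ post }) {
  return <h1>{post.title}</h1>
}
```

With plain SSG you would omit revalidate entirely, and with SSR you would use getServerSideProps instead.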
- Use hyphens (-) instead of underscores (_): Hyphens are easier to read and more SEO friendly.
- Use lowercase: URL paths can be case sensitive, so sticking to lowercase avoids duplicate URLs and is more SEO friendly.
- Use descriptive URLs: URLs should be descriptive and easy to understand.
- Keep URLs short: Short URLs are easier to read and more SEO friendly.
- Semantic: URLs should be semantic and easy to understand.
- Logical and consistent patterns: URLs should follow a logical, consistent pattern.
- Avoid URL parameters: URL parameters are not SEO friendly and can create duplicate content.
- Keyword focused: URLs should be keyword focused.
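For example (a made-up URL), https://example.com/blog/core-web-vitals follows these rules, while https://example.com/Blog_Posts.php?id=1742&ref=home breaks most of them.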
- title: The title tag is the most important on-page SEO element; it is shown as the clickable headline in search results.
- description: The meta description summarizes the page's content and is often used as the snippet shown under the title in search results.
- keywords: The meta keywords tag lists keywords for the page; Google largely ignores it today, though some other engines may still read it.
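Putting these together, a page's head might look like the sketch below; the title, description, and keywords are placeholders:

```html
<head>
  <!-- Shown as the clickable headline in search results; keep it unique per page -->
  <title>SEO Cheat Sheet - Quick Reference for Developers</title>
  <!-- Often used as the snippet displayed under the title in search results -->
  <meta
    name="description"
    content="A quick reference covering crawlers, meta tags, rendering strategies and Core Web Vitals."
  />
  <!-- Largely ignored by Google today, but harmless and still read by some other engines -->
  <meta name="keywords" content="seo, meta tags, core web vitals" />
</head>
```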
The Open Graph protocol enables any web page to become a rich object in a social graph. For instance, this is used on Facebook to allow any web page to have the same functionality as any other object on Facebook.
- og:title: The title of your object as it should appear within the graph, e.g., "The Rock".
<meta property="og:title" content="The Rock" />
- og:type: The type of your object, e.g., "video.movie". Depending on the type you specify, other properties may also be required.
<meta property="og:type" content="video.movie" />
- og:image: An image URL which should represent your object within the graph.
<meta property="og:image" content="http://example.com/rock.jpg" />
- og:url: The canonical URL of your object that will be used as its permanent ID in the graph, e.g., "http://www.imdb.com/title/tt0117500/".
<meta property="og:url" content="http://www.imdb.com/title/tt0117500/" />
- og:description: A one to two sentence description of your object.
<meta property="og:description" content="lorem ipsum dolor sit amet, consectetur adipiscing elit." />
Learn more about Open Graph Protocol
- Use the H1 tag only once per page: The H1 tag is the most important heading for SEO and should appear only once per page (see the sketch after this list).
- Internal links: Internal links are links that point to other pages on the same website.
- External links: External links are links that point to other websites.
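As a rough sketch, a page body that follows these rules might look like this; the headings, copy, and URLs are placeholders:

```html
<body>
  <!-- Exactly one H1, describing the main topic of the page -->
  <h1>Core Web Vitals explained</h1>

  <h2>Why they matter</h2>
  <p>
    <!-- Internal link: points to another page on the same site -->
    See our <a href="/guides/lighthouse">Lighthouse guide</a> to learn how to measure them.
  </p>

  <h2>Further reading</h2>
  <p>
    <!-- External link: points to another website -->
    The metrics are documented on <a href="https://web.dev/vitals/">web.dev</a>.
  </p>
</body>
```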
The PageRank algorithm is used by Google Search to rank pages in its search results. It goes through the links in Google's index and scores pages based on how many links they receive (quantity) and from which domains those links come (quality).
Learn more about PageRank Algorithm
Core Web Vitals are a set of metrics that measure the performance and user experience of a web page. Together with the related lab metrics reported by tools such as Lighthouse, they include:
- First Contentful Paint (FCP): measures the time from when the user requests a page to when any visible content is rendered on the screen.
- Speed Index: measures how quickly the content of a page is visibly populated.
- Time to Interactive (TTI): measures the time from when the user requests a page to when the page is fully interactive.
- Largest Contentful Paint (LCP): measures the render time of the largest element visible in the viewport.
- Total Blocking Time (TBT): measures the total amount of time between FCP and TTI that is blocked by long tasks.
- Cumulative Layout Shift (CLS): measures the layout shift of visible elements within the viewport.
These metrics are important because they directly affect the user's perception of performance, and poor scores can lead to higher bounce rates.
To measure and improve your Core Web Vitals, you can use tools such as Google's PageSpeed Insights, Lighthouse, and Chrome DevTools.
You can also use the Web Vitals JavaScript library to measure the Core Web Vitals on your website and track them over time.
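A minimal sketch of measuring the metrics in the browser with the web-vitals library is shown below. It assumes web-vitals v3 or later (which exposes onCLS/onLCP/onINP; older versions used getCLS-style names) loaded from a CDN:

```html
<script type="module">
  // Load web-vitals from a CDN; the library is also available as an npm package
  import { onCLS, onLCP, onINP } from 'https://unpkg.com/web-vitals@4?module'

  // Log each metric as it becomes available; in production you would
  // send these values to an analytics endpoint instead
  onCLS(console.log)
  onLCP(console.log)
  onINP(console.log)
</script>
```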
To improve your Core Web Vitals, you can:
- Minimize the use of third-party scripts and iframes
- Reduce the size of images and other resources
- Defer the loading of non-critical resources (see the snippet after this list)
- Optimize your code to reduce the time spent in long tasks
- Use web workers to offload heavy computation to background threads
- Use a Content Delivery Network (CDN) to reduce the time to first byte.
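Two of the items above, deferring non-critical resources and reducing image cost, might look like this in practice (file names are placeholders):

```html
<!-- Lazy-load below-the-fold images so they don't compete with critical resources -->
<img src="/images/report-chart.png" alt="Core Web Vitals thresholds chart" loading="lazy" />

<!-- Defer non-critical JavaScript so it doesn't block rendering of the page -->
<script src="/js/analytics.js" defer></script>
```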
Learn more about Core Web Vitals
Lighthouse is an open-source, automated tool for improving the quality of web pages. It can be run as a Chrome extension, from the command line, or as a Node.js module. It audits a page and generates a report on its performance, accessibility, best practices, and SEO.
- Performance: Lighthouse measures the loading performance of a page, including the time to first contentful paint, time to interactive, and speed index.
- Accessibility: Lighthouse audits a page for accessibility issues, such as missing alternative text for images and proper use of ARIA attributes.
- Best Practices: Lighthouse checks a page for best practices, such as the use of HTTPS and avoiding the use of deprecated APIs.
- SEO: Lighthouse audits a page for SEO issues, such as the presence of a valid robots.txt file and structured data.
Note: Remember to run Lighthouse in a private (incognito) window to avoid caching issues or slowdowns caused by other extensions.
- Install the Lighthouse Chrome extension from the Chrome Web Store
- Go to the web page you want to audit
- Click on the Lighthouse icon in the Chrome extension bar
- Click on "Generate Report" and wait for the report to be generated
- Review the report and take action on any issues that are identified
- To use the command line interface, install Lighthouse globally by running
npm install -g lighthouse
- Run Lighthouse on a web page by running
lighthouse <url>
- Review the report and take action on any issues that are identified
- To use Lighthouse as a Node.js module, install it by running
npm install lighthouse
- Use Lighthouse in your Node.js code by requiring the module and calling the lighthouse function (inside an async function). Lighthouse drives a real Chrome instance, which you can start with the chrome-launcher package:
const lighthouse = require('lighthouse')
const chromeLauncher = require('chrome-launcher')
// Launch a headless Chrome instance for Lighthouse to audit with
const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] })
const { lhr } = await lighthouse('<url>', { port: chrome.port })
console.log(lhr.categories.seo.score)