
Search Engine algorithms

Chapter Two

Chapter Two Search Engine algorithms 1


Introduction
• All the parts of the search engine are important, but the search
algorithm is the cog that makes everything work.
• It might be more accurate to say that the search algorithm is the
foundation on which everything else is built.
• How a search engine works is based on its search algorithm, which in turn shapes the way that data is discovered by the user.

Chapter Two Search Engine algorithms 2


Search Algorithm
• In very general terms, a search algorithm is a problem-solving
procedure that takes a problem, evaluates a number of possible
answers, and then returns the solution to that problem.
• In terms of a search engine algorithm, the search engine takes the problem (the word
or phrase being searched for), goes through a database that contains
catalogued keywords and the URLs with which those words are
associated, and then returns pages that contain the word or phrase
being searched for, either in the body of the page or in a URL that
points to the page.

Chapter Two Search Engine algorithms 3


Search Algorithm
• Different search engines use different algorithms, and even algorithms that weigh
similar factors differ in their details. That's why a search for one
word or phrase will yield different results from different search
engines. (Try it and see the difference.)
• Search algorithms are generally divided into three broad categories:
1. on-page algorithms
2. whole-site algorithms
3. off-site algorithms.
• Each type of algorithm looks at different elements of a web page, yet
all three types are generally part of a much larger algorithm.
Chapter Two Search Engine algorithms 4
On-page algorithms
• On-page SEO is the process of ensuring that your site is readable to search
engines.
• Learning correct on-page SEO is not only important in ensuring Google picks up
the keywords you want, but it is an opportunity to achieve easy wins and
improve your site’s overall performance.
• Algorithms that measure on-page factors look at the elements of a page that
would lead a user to think the page is worth browsing. This includes how
keywords are used in content as well as how other words on the page relate.
• For example, for any given topic, some phrases are common, so if your web site
is about beading, an on-page algorithm will determine that by the number of
times the term ‘‘beading’’ is used, as well as by the number of related phrases
and words that are also used on the page (e.g., wire, patterns, jump rings,
string or stringing, etc.).
Chapter Two Search Engine algorithms 5
On-page algorithms
• As described on the previous slide, related words are important because
the algorithm will also likely look at the proximity of related words to one another.
• The on-page algorithm also looks at some elements that human
visitors can't see, such as meta tags in the HTML.
• The source code of a web page contains special content designed
specifically for web crawlers. This content is called meta tags. When a
crawler examines your web site, it looks at these tags as definitions
of what you intend your site to be about. It then weighs that
against the other elements of on-site optimization, as well as whole-
site and off-site optimization.
• Meta tags will be described in detail later in this chapter.

Chapter Two Search Engine algorithms 6


On-page algorithms
• On-page SEO includes the following considerations:
1. Making sure site content is visible to search engines.(HOW?)
2. Making sure your site is not blocking search engines (HOW?)
3. Making sure search engines pick up the keywords you want.(HOW?)

To answer the above questions, you will gain a basic level of
experience dealing with sites.

Do not worry if you have not taken an HTML course. You will learn
how to deal with only the search-engine-related HTML elements, which are very
simple and basic.
Chapter Two Search Engine algorithms 7
On-page algorithms
1. Search engine friendly URLs.
• Have you ever visited a web page where the URL looked something like
this: http://www.examplesite.com/~articlepage21/post-entry321.asp?q=3
What a mess!
• These kinds of URLs are a quick way to confuse search engines and site
visitors.
• Clean URLs are more logical, user friendly, and search engine friendly.
• An example of a clean URL: http://www.examplesite.com/football-jerseys
• Exercise: Try to find PowerPoint slides for a textbook chapter.
Type (strauss emarketing chapter 1 ppt) on Google and download the result.
Chapter Two Search Engine algorithms 8
On-page algorithms
1. Search engine friendly URLs.
• Take a quick look at Google's search engine results. You will see a very
large portion of sites in the top-10 have clean and readable URLs like
the above example.
• Most site content management systems (e.g. Joomla, Drupal,
Magento) have search engine friendly URLs built in. It is
often a matter of simply enabling the option in your site settings. If
your site doesn't have search engine friendly URLs, it's time for you
to fix the URL structure so that it fits Google's recommendations.
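If your platform does not offer that setting, clean URLs are usually created with a rewrite rule on the web server. Below is a minimal sketch for an Apache .htaccess file; the script name and query parameter are placeholders invented for illustration, not values from these slides:

# map the clean URL to the real script that serves the page
RewriteEngine On
RewriteRule ^football-jerseys$ /product.php?id=123 [L]

Visitors and search engines see only http://www.examplesite.com/football-jerseys, while the server quietly serves the original page.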

Chapter Two Search Engine algorithms 9


On-page algorithms
2. Internal navigation
There is no limit on how to structure the navigation of your site.
• Some websites force visitors to watch an animation or intro before they
can even access the site. In the process, these sites make it harder for
visitors and more confusing for search engines to pick up the content on
the site.
• Other sites keep it simple by having a menu running along the top of the
site (e.g http://www.psut.edu.jo )or running down the left-hand side (e.g
http://smokeybones.com )of the browser window. This has pretty much
become an industry standard for most sites.
• If you intend to break this standard, you must understand it is likely you
will make it harder for search engines to pick up all of the pages on your
site.
Chapter Two Search Engine algorithms 10
On-page algorithms
2. Internal navigation
• Also, your website navigation must be made of real text links, not images (see the sketch at the end of this slide).
• If your main site navigation is currently made up of images, slap your web
designer and change them to text now! If you do not have the main
navigation featured in text, your internal pages will be almost invisible to
Google and other search engines.
• For an additional SEO boost, include links to pages you want visible to search
engines and visitors on the home page.
• By placing links specifically on the home page, Google's search engine spider
can come along to your site and quickly understand which pages on your site
are important and worth including in the search results.
• The following website is an example of bad website navigation. Tell me why?
• http://www.roverp6cars.com/
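A minimal sketch of text-based main navigation (the page names and URLs are placeholders):

<!-- plain text links: crawlers can read the anchor text and follow each link -->
<nav>
  <ul>
    <li><a href="/football-jerseys">Football Jerseys</a></li>
    <li><a href="/socket-wrenches">Socket Wrenches</a></li>
    <li><a href="/contact">Contact Us</a></li>
  </ul>
</nav>

An image-based menu gives the crawler neither readable anchor text nor an obvious hint about which internal pages matter.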
Chapter Two Search Engine algorithms 11
On-page algorithms
3. How to make Google pick up the keywords you want.
• There are many misconceptions being circulated about what to do,
and what not to do, when it comes to optimizing keywords into your
page.
• Do not listen to "bloggers" who tell their readers not to put keywords
in the content of targeted pages at all.
• Not having keywords on your page makes it almost impossible for
Google to match your page with the keyword you want to rank for. If
Google completely devalued having keywords on the page, Google
would be a crappy search engine.

Chapter Two Search Engine algorithms 12


On-page algorithms
3. How to make Google pick up the keywords you want.
• Google needs to see the keywords on your page, and these keywords
must be visible to your users. The easy approach is to either create
content around your keyword, or naturally weave your keyword into
the page.
• However, too many keywords could put the website in trouble. Google
may consider a website with too many keywords to be a spam website.
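As a small illustrative sketch (the wording is invented here, reusing the beading example from earlier in the chapter), keywords woven naturally into visible content might look like this:

<h1>Beading Patterns for Beginners</h1>
<p>Learn basic beading techniques using wire, jump rings and stringing
   materials, then try one of our free bracelet patterns.</p>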

Chapter Two Search Engine algorithms 13


On-page algorithms
<meta name="description" content="" />
• The description meta tag holds a short, natural summary of the page, written around your keyword.
• Search results typically show only the first 140 or so characters of the description (about 200
characters maximum), so if you want part of the text to be seen, make sure it appears before that cut-off.
• For example, Payless.com is an American discount footwear online retailer. The description meta tag
for this website is:
• <meta name="description" content="Low price shoes for Women, Men and Kids, including,
boots, sandals, dress and athletic shoes. Free Shipping +$25, Free Returns at any Payless
Store. Payless ShoeSource" />
• It is not recommended to repeat keywords inside the description meta tag, such as:
• <meta name="description" content="Payless Low price shoes for Women, Payless Men and
Kids, including, Payless boots, sandals, dress and athletic shoes. Payless Free Shipping +$25,
Payless Free Returns at any Payless Store. Payless ShoeSource" />

Chapter Two Search Engine algorithms 14


On-page algorithms
Three common Mistakes to Avoid When Writing Your Meta Descriptions

Mistake 1: Duplicate Meta Descriptions
• Even though duplicate meta descriptions won't get you penalized, you
should put together a unique meta description for every page, for
practical reasons.

Mistake 2: Character Count Obsession
• For many years, the SEO best practice was to write a meta description
between 140 and 200 characters.

Chapter Two Search Engine algorithms 15


On-page algorithms
Three common Mistakes to Avoid When Writing Your Meta
Descriptions
Mistake 3: Not Using Keywords
• Your meta descriptions should include the right keywords. You want to use
keywords that:
Are relevant to a page’s content.
Your customers are looking for.
• If your meta description is not relevant to a page, Google will pull the first
sentence with a relevant keyword and show it in the search results.
• For instance, when I was searching for productivity tips, I found this
description:

Chapter Two Search Engine algorithms 16


On-page algorithms
Three common Mistakes to Avoid When Writing Your Meta
Descriptions
There are many free tools that help you write this HTML code easily and
quickly. The most common tool is:
https://webcode.tools/

Are you happy?


Chapter Two Search Engine algorithms 17


On-page algorithms
<meta name="keywords" content="" />
• Meta Keywords are a specific type of meta tag that appear in the HTML code of a
Web page and help tell search engines what the topic of the page is. The most
important thing to keep in mind when selecting or optimizing your meta keywords is
to be sure that each keyword accurately reflects the content of your pages.
• Payless.com Example :
• <meta name="keywords" content=" shoes, boots, sandals, heels, pumps, handbags,
womens shoes, mens shoes, girls shoes, boys shoes, kids shoes Payless
ShoeSource">
• NOTE: In 2020, Google confirmed that the keywords tag is no longer used by the
search engine and recommended not spending any time on this tag.
• Please leave it out if you're building a site, but if your platform adds it automatically,
there's no real reason to remove it.
Chapter Two Search Engine algorithms 18
On-page algorithms
Latent Semantic Indexing
• It is important to understand and include LSI keywords on your page.
• LSI stands for latent semantic indexing, which is the method that
Google and other search engines use to study and compare
relationships between different terms and concepts. These
keywords can be used to improve SEO traffic and create more
visibility and higher rankings in search results.
• There are many free websites that help you generate related keywords, such as:
• https://lsigraph.com/
• http://ubersuggest.org/

Chapter Two Search Engine algorithms 19


On-page algorithms
• The function of meta tags is really quite simple. Meta tags are bits of
code on your site controlling how your site appears in Google.
• If you don't fill out your meta tags, Google will automatically use text
from your site to create your search listing. This is exactly what you
don't want Google to do, otherwise it can end up looking like
gibberish!

Chapter Two Search Engine algorithms 20


On-page algorithms
4. Site load speed
• How fast (or slow) your site loads is another factor Google takes into account
when deciding how it should rank your pages in the search results.
• A very well-known Google employee, Matt Cutts, publicly admitted fast load
speed is a positive ranking factor.
• Not only is load speed a contributing factor to achieving top rankings in
Google; extensive industry reports have shown that for each second shaved off a
site's load time, there is an average increase of 7% in the site's conversion rate. In other
words, the faster your site loads, the better the chance that visitors
complete a sale or fill out an inquiry form.
• The following link is used to check website load speed
• https://developers.google.com/speed/pagespeed/insights
Chapter Two Search Engine algorithms 21
On-page algorithms
5. The usual suspects—sitemaps.xml and robots.txt
Sitemaps.xml
• Search engines automatically look for a special file on each site called the sitemaps.xml
file.
• Having this file on your site is a must for making it easy for search engines to discover
pages on your site. Sitemaps are essentially a giant map to all of the pages on your site.
Fortunately, creating this file and getting it on to your site is a straightforward process.
• You can use a free XML sitemap generator tool such as http://www.xml-sitemaps.com/
or https://www.screamingfrog.co.uk/seo-spider/
• Next ask your web developer or web designer to upload it into the main directory of
your site. Once uploaded, the file should be publicly accessible with an address like the
below example:
• http://www.yoursite.com/sitemaps.xml
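A minimal sitemaps.xml file, following the standard sitemap protocol (the URLs and date below are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per page you want crawlers to discover -->
  <url>
    <loc>http://www.yoursite.com/</loc>
    <lastmod>2023-01-15</lastmod>
  </url>
  <url>
    <loc>http://www.yoursite.com/football-jerseys</loc>
  </url>
</urlset>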

Chapter Two Search Engine algorithms 22


On-page algorithms
5. The usual suspects—sitemaps.xml and robots.txt
Sitemaps.xml
• That’s why it’s called a sitemap. It maps out how the website is
structured and what the website includes.
• ("XML" stands for "Extensible Markup Language," a structured format for storing and
exchanging information.)
• That’s what an XML sitemap is, but why should you even have one?
What’s the purpose?

Chapter Two Search Engine algorithms 23


On-page algorithms
5. The usual suspects—sitemaps.xml and robots.txt
Sitemaps.xml
• So, what's the point of an XML sitemap?
• Search engines use crawlers to organize and index information on the web.
These crawlers can read all kinds of information, but an XML sitemap makes
it easy for the crawler to see what's on your website and index it.
Chapter Two Search Engine algorithms 24
On-page algorithms
5. The usual suspects—sitemaps.xml and robots.txt
Sitemaps.xml

Chapter Two Search Engine algorithms 25


On-page algorithms
5. The usual suspects—sitemaps.xml and robots.txt
Sitemaps.xml
• Include Noindex Pages — Noindex pages are those that contain HTML code in the
header telling the search engines not to include the page in the search index. If your
developer has set certain pages as “Noindex” it is probably with good cause. When in
doubt, do not check this box.
• Include Canonicalised — There may be more than one URL pointing to the same page
of content. If you “include canonicalised,” you are telling the crawl tool to include
variations of the URLs that point to the same page. If in doubt, leave this unchecked.
• Include PDFs — You can choose whether or not you want PDFs included in your XML
sitemap. Google indexes all kinds of content, PDFs included. According to Patel, "I
recommend that you do include PDFs in your XML sitemap, as long as the PDFs on
your website are important and relevant to users who might be searching for your
content."

Chapter Two Search Engine algorithms 26


On-page algorithms
Meta robots

• By default, search engines will index your pages and follow the links in your website, unless
you give them rules on the designated webpage.
• As described in the previous chapter, robots.txt will block (or allow)
access for a given crawler to a specified file path on that website.
• The difference between the meta robots tag and robots.txt is:
• The meta robots tag controls a single page, through a small piece of code
pasted into the header of that page. With the meta robots tag we
tell the search engine how to treat that particular page.
• With the robots.txt file, by contrast, you can block whole sections of the website, or the whole website.

Chapter Two Search Engine algorithms 27


On-page algorithms
Meta robots

• Using meta robots "noindex,follow" allows the link equity going to
that page to flow out to the pages it links to; if you block the page
with robots.txt instead, crawlers never see the page and that equity is lost.
• An example of the tag's syntax is:
• <meta name="robots" content="index,nofollow">
• Here, "index" means that the page should be indexed by search
engines, while "nofollow" means the search engines shouldn't follow the links on
the page.
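For comparison, the "noindex,follow" combination described above would look like this:

<!-- keep this page out of the index, but still follow its links so link equity can flow -->
<meta name="robots" content="noindex,follow">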

Chapter Two Search Engine algorithms 28


On-page algorithms
5. The usual suspects—sitemaps.xml and robots.txt
Robots.txt
• Example : http://www.yoursite.com/robots.txt
• The robots.txt file is a simple file that exists so you can tell search engines which areas
of your site you don't want listed in the search results.
• There is no real ranking boost from having a robots.txt file on your site, but it is
essential to check that you don't have a robots.txt file blocking
areas of your site you do want search engines to find.
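A minimal robots.txt sketch (the directory name below is a placeholder chosen only for illustration):

# applies to all crawlers
User-agent: *
# keep this area out of the crawl
Disallow: /admin/
# optional: tell crawlers where the sitemap lives
Sitemap: http://www.yoursite.com/sitemaps.xml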

Chapter Two Search Engine algorithms 29


On-page algorithms
6. Don’t Duplicate website content
• Unfortunately, many site content management systems will
sometimes automatically create multiple versions of one page.
• For example, let’s say your site has a product page on socket
wrenches, but because of the system your site is built on, the exact
same page can be accessed from multiple URLs from different areas
of your site:
• http://www.yoursite.com/products.aspx?=23213
• http://www.yoursite.com/socket-wrenches
• http://www.yoursite.com/tool-kits/socket-wrenches

Chapter Two Search Engine algorithms 30


On-page algorithms
6. Don’t Duplicate website content
• In the search engine’s eyes this is confusing and multiple versions of
the page are considered duplicate content.
• To account for this, you should always ensure a special tag, the
rel="canonical" tag, is placed on every page of your site.
• Using the canonical tag prevents problems caused by identical or
"duplicate" content appearing on multiple URLs. Practically speaking,
the canonical tag tells search engines which version of a URL you want
to appear in search results. Example taken from www.foresite.jo
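A minimal sketch, reusing the socket-wrench URLs from the previous slide (treating the clean /socket-wrenches address as the preferred version is an assumption made here for illustration). Every variant of the page carries the same tag in its <head>:

<!-- tells search engines that this is the preferred URL for the socket wrenches page -->
<link rel="canonical" href="http://www.yoursite.com/socket-wrenches" />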

Chapter Two Search Engine algorithms 31


Off-site algorithms
What is off-page SEO?
• "Off-page SEO" (also called "off-site SEO") refers to actions taken
outside of your own website to impact your rankings within search
engine results pages (SERPs).  
• Optimizing for off-site ranking factors involves improving search
engine and user perception of a site's popularity, relevance,
trustworthiness, and authority. This is accomplished by other
reputable places on the Internet (pages, sites, people, etc.) linking to
or promoting your website, and effectively "vouching" for the quality
of your content.

Chapter Two Search Engine algorithms 32


Off-site algorithms
• Why does off-page SEO matter?
• While search algorithms and ranking factors are constantly changing,
the general consensus within the SEO community is that the
relevance, trustworthiness, and authority that effective off-page SEO
affords a website still play a major role in a page's ability to rank.

Chapter Two Search Engine algorithms 33


Off-site algorithms
Links and off-page SEO

• Building backlinks is at the heart of off-page SEO. Search engines use backlinks as indications of
the linked-to content's quality, so a site with many high value backlinks will usually rank better
than an otherwise equal site with fewer backlinks.
• There are three main types of links
1. Natural links are editorially given without any action on the part of a page owner. For
example, a food blogger adding a link to a post that points toward their favourite produce
farms is a natural link.
2. Manually built links are acquired through deliberate link-building activities. This includes
things like getting customers to link to your website or asking influencers to share your
content.
3. Self-created links are created by practices such as adding a backlink in an online directory,
forum, blog comment signature, or a press release with optimized anchor text
• Backlink checker website example : https://ahrefs.com/backlink-checker

Chapter Two Search Engine algorithms 34


Off-site algorithms
Non-link-related off-site SEO

• While earning links from external websites is the most commonly practiced
off-page SEO strategy, almost any activity that occurs outside of your own
website and helps to improve your search ranking position could be
thought of as "off-page SEO." These include things like:
• Social media marketing
• Guest blogging
• Linked and unlinked brand mentions
• Influencer marketing
• It's important to note, though, that the net result of each of these activities is
to somehow create a reference to your site from elsewhere on the web, be
that reference a link, a mention of your brand or website, or otherwise.

Chapter Two Search Engine algorithms 35


Whole-site algorithms
• If on-site algorithms look at the relationship of words and content on
a page, then whole-site algorithms look at the relationship of pages
on a site. For example, does the home page content relate to the
content on other pages? This is an important factor from a user’s
viewpoint, because if users come to your site expecting one thing and
then click through a link and wind up in completely unrelated
territory, they won’t be happy

Chapter Two Search Engine algorithms 36


Whole-site algorithms
• To ensure that your web site is what it claims to be, the whole-site
algorithm looks at the relationship of site elements, such as the
architecture of pages, the use of anchor text, and how the pages on
your site are linked together. This is one reason why it’s best to have
separate web sites if you have a site that covers multiple, unrelated
topics or subjects.
• How your site is architected — that is, how usable it is for a site
visitor, based on the topic it appears to be about — is a determining
factor in how useful web site visitors find your site

Chapter Two Search Engine algorithms 37


Exercise
Sitechecker Website
• Getting organic traffic can be easy thanks to a free tool called
SiteChecker.Pro.
• All you do is enter your website URL and a free, comprehensive report
gets generated.
• The report tells you your SEO errors and how to fix them.
• Take the suggestions and more organic traffic is yours!
• Site Checker is a free tool. Increase your search traffic and pay nothing

Chapter Two Search Engine algorithms 38


Exercise
Sitechecker Website
1. Go to https://sitechecker.pro/.
2. Add your website URL where prompted, e.g. https://www.psut.edu.jo
3. Discuss the report and the findings in class; each student will be
asked individually about the report results.

Chapter Two Search Engine algorithms 39
