Building backlinks with Web 2.0
Designing, implementing and evaluating a costless off-site SEO strategy with backlinks originating from Web 2.0 blogs.
Written by Gustaf Edlund, Jacob Khalil
1.4.1 Google's search engine
There are many different search engines on the Internet, and each uses its own algorithms to index and rank websites and web pages. The methods for improving a website's search engine optimization therefore vary depending on the search engine concerned. This paper focuses on SEO for Google's search engine, Google Search, owing to its popularity: its market share is 92.78%, while its closest competitor, Bing, holds 2.55%.

1.4.2 Off-page SEO & Web 2.0 blogs
SEO is a broad topic, usually divided into two areas: on-page and off-page SEO. The former focuses on optimizing anything the website owner can control directly, such as the code and content on the website, while the latter revolves around factors outside that control, such as receiving backlinks from other websites. To answer the research questions and keep the scope of the paper narrow, we focus on the off-page aspects of SEO; on-page aspects are mentioned and explained only when relevant and necessary. Web 2.0 platforms allow people to upload content to the Internet in an accessible and free way, in contrast to the static websites of Web 1.0. Examples of Web 2.0 platforms include social media, blogs, and online encyclopedias. To achieve this goal and to extend existing research on backlinks originating from Web 2.0 blogs, such platforms are the focus of this paper.

1.4.3 Limitations of SEO
Research in the field of SEO is problematic, because the environment of the subject is largely outside the researcher's control and in constant flux. Google's algorithm is dynamic, updating hundreds, sometimes thousands, of times a year, and the frequency of updates increases over time. Research that is conclusive today may therefore become irrelevant in the future, and researchers must take this into account both when conducting studies and when reviewing previous work.
Although Google has published guidelines on how website owners should manage their content to make it search-engine friendly, it has never explicitly stated how the search algorithm works, since the algorithm is intellectual property and such openness would make it vulnerable to abuse. SEO practitioners can therefore only make educated guesses and run experiments to understand what works and what does not, and those findings may be of no use to future work. Another example of SEO being an uncontrollable environment is the uniqueness of each website. Some SEO methods may be objective, logical, and straightforward, for example targeting a certain keyword density in a text, but because every situation is unique, one cannot state in advance how effective the same method will be when applied to another website.

2.2 Search engines
Today's Internet has come a long way since its birth as ARPANET in 1969. Mainly used by government agencies, universities, and institutions, the Internet did not reach the general public until the introduction of the Mosaic graphical web browser in 1993. This surge in popularity led to explosive growth of new websites on the Internet, from a few hundred to 2.5 million in 1998. The huge increase in the amount of information available online created a demand for a simple way for users to find what they want. The solution to this problem was search engines. From Archie, the first search tool, in 1990, to Google Search, the most popular search engine today, the way search engines work has improved greatly. Google Search and other modern search engines consist of three main functions: crawling, indexing, and ranking. In the first step, crawling, crawlers (also known as spiders, robots, or simply bots) search the Internet for new and updated websites and content. Regardless of format, crawlers discover content through links.
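The link-following discovery described above can be sketched in a few lines. This is a minimal illustration, not Googlebot's actual implementation: it extracts every hyperlink from a page's HTML using Python's standard-library parser, which is the raw material a crawler would queue for its next visits. The page content and URLs are hypothetical examples.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag, mimicking how a crawler
    discovers new pages through links (a sketch, not Googlebot)."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical page content; a real crawler would fetch this over HTTP.
page = ('<html><body>'
        '<a href="/about">About</a> '
        '<a href="https://example.com/blog">Blog</a>'
        '</body></html>')
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # the URLs a crawler would crawl next
```

A real crawler would repeat this for each discovered URL, which is why a new site with few inbound links can remain undiscovered for some time.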
This information is then stored in the search engine's index. Indexing is the process in which the crawler (Googlebot in the case of Google Search) analyzes the content it has crawled. The crawler reads all the text on a page and looks at the attributes of the HTML tags in the source code to determine what content it has found. Googlebot also analyzes videos and images to further develop its interpretation of the page. Crawling and indexing take time, and several things may slow these processes down. If few backlinks point to a site, as with a newly created website, it may take a while for crawlers to find it. Crawlers may also have difficulty navigating a site; in that case, submitting a sitemap to Google helps the bot determine what to crawl. If a site violates Google's terms of service, it may be penalized and removed from the index, and thus from the search results. The last component of a search engine is ranking (also called serving), in which the search engine determines the most relevant answers to the user's search query. Relevance is determined by hundreds of factors, and the algorithm is adjusted every day. Some of the factors that affect a website's ranking are controllable, while others are not. User location and language are factors beyond the website's control: a Swedish bike shop's website is unlikely to appear when a French user searches for bike shops in France, no matter how good the Swedish website's SEO is. Beyond language and location, SEO itself is a factor affecting ranking: the better the optimization, the better the website ranks.
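The sitemap mentioned above is a plain XML file following the sitemaps.org protocol, which Google accepts as a crawling hint. Below is a minimal sketch that builds such a file with Python's standard library; the URLs are hypothetical examples, and a real sitemap may also carry optional fields such as last-modification dates that are omitted here.

```python
from xml.etree import ElementTree as ET

def build_sitemap(urls):
    """Build a minimal sitemap XML document listing the given page URLs,
    following the sitemaps.org 0.9 protocol."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url  # the page's address
    return ET.tostring(urlset, encoding="unicode")

# Hypothetical site with two pages a crawler might otherwise miss.
sitemap = build_sitemap(["https://example.com/", "https://example.com/blog"])
print(sitemap)
```

The resulting file is typically uploaded to the site's root and submitted through Google Search Console, giving the crawler an explicit list of pages to visit.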