What was the first search engine?

A search engine is a computer program that retrieves information from a database according to criteria defined by the user.

Modern search engines scan databases containing vast amounts of data collected from the World Wide Web, newsgroups, and directory projects.

The first search engine was created before the World Wide Web existed, but after the Internet had arrived and become popular in academic circles. At that time, in the late 1980s and early 1990s, one of the main protocols used on the Internet was the File Transfer Protocol (FTP). FTP servers existed all over the world, usually at university campuses, research centers, or government agencies. A group of students at McGill University in Montreal decided that a centralized database of the files available on several popular FTP servers would save time and be a useful service to others. This was the origin of the Archie search engine.

Archie, whose name was derived from the word "archive," was a program that regularly connected to the FTP servers on its list and created an index of the files on each server. Since processor time and bandwidth were still valuable commodities, Archie only checked for updates every month or so. At first the index Archie built was meant to be searched with the Unix grep command, but a better user interface was soon developed to allow easy searching of the index. Following Archie, several search engines sprang up to search the similar Gopher protocol; two of the most famous were Jughead and Veronica. Archie was made largely obsolete by the advent of the World Wide Web and the search engines that followed, but Archie servers still exist.
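Archie's core idea, a periodically rebuilt index of filenames that users search with simple pattern matching, can be sketched in a few lines. The server names and file listings below are hypothetical examples, not real Archie data:

```python
# Minimal sketch of an Archie-style index: invert "server -> files"
# into "filename -> servers", then search by substring, as grep would.
# Server names and file listings are hypothetical examples.

ftp_listings = {
    "ftp.example.edu": ["emacs-18.59.tar.Z", "kermit.tar.Z"],
    "ftp.sample.org": ["emacs-18.59.tar.Z", "xv-2.21.tar.Z"],
}

def build_index(listings):
    """Invert server -> files into filename -> list of hosting servers."""
    index = {}
    for server, files in listings.items():
        for name in files:
            index.setdefault(name, []).append(server)
    return index

def search(index, pattern):
    """Return files whose names contain the pattern (grep-like match)."""
    return {name: hosts for name, hosts in index.items() if pattern in name}

index = build_index(ftp_listings)
print(search(index, "emacs"))
```

The monthly refresh the article mentions corresponds to rerunning `build_index` against fresh listings; between refreshes, every query is answered from the stale index rather than by contacting the FTP servers.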


In 1993, shortly after the creation of the World Wide Web, Matthew Gray developed the World Wide Web Wanderer, the first robot on the web. The Wanderer indexed the websites that existed on the Internet by capturing their URLs, but did not capture any of the sites' actual content. The index the Wanderer built, one of the earliest forms of a search engine, was called Wandex.

A few other small projects followed the Wanderer and moved closer to the modern search engine. These included the World Wide Web Worm, the Repository-Based Software Engineering (RBSE) spider, and JumpStation. All three used data collected by web robots to return information to users. Most of that information was returned unfiltered, however, although RBSE attempted to rank the value of the pages it returned.

In 1993, Excite, a company founded by a group of Stanford students, launched what is arguably the first search engine to incorporate analysis of page content. This initial offering was intended for searching within a single website, however, not the web as a whole.

In 1994, however, the search engine world took a big step forward. A search engine called WebCrawler launched that captured not only the title and header of a web page but its entire content. WebCrawler was a huge success, so popular that much of the time it could not be used at all because its system resources were completely consumed.
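The step WebCrawler took, indexing full page content rather than just URLs or titles, is usually implemented as an inverted index mapping each word to the pages that contain it. A minimal sketch, using hypothetical URLs and page text:

```python
# Minimal sketch of full-content indexing: an inverted index maps
# each word to the set of pages whose text contains it.
# URLs and page text are hypothetical examples.

pages = {
    "http://example.com/a": "Guide to the Mosaic browser and the early web",
    "http://example.com/b": "FTP archives and the Archie index",
}

def build_inverted_index(pages):
    """Map each lowercased word to the set of URLs containing it."""
    index = {}
    for url, text in pages.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(url)
    return index

index = build_inverted_index(pages)
print(index.get("archie"))  # pages whose body text mentions "archie"
```

A URL-only index like Wandex would miss this match entirely, since the query term appears nowhere in the page's address, which is exactly why full-content indexing was such a step forward.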

A little later that year, Lycos was released, incorporating and building on many of WebCrawler's features. Lycos ranked its results by relevance and allowed the user to adjust a number of settings to get better-fitting results. Lycos was also huge: by the end of that year it had indexed more than a million web pages, and within two years it had reached 60 million.

