A traditional search engine is a software application that examines as many pages as possible on websites, compiling a list of the location of each word on each page. From these lists, the search engine then creates a full-text index of the Internet.
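Such a full-text index is commonly built as an "inverted index": a mapping from each word to the places it occurs. The sketch below is a minimal, illustrative version, assuming pages are plain text already fetched into memory; a real engine would also strip markup, normalize case, and stem words.

```python
from collections import defaultdict

def build_index(pages):
    """Map each word to the (url, position) pairs where it occurs.

    `pages` is a dict of url -> page text. This toy version only
    lowercases and splits on whitespace.
    """
    index = defaultdict(list)
    for url, text in pages.items():
        for position, word in enumerate(text.lower().split()):
            index[word].append((url, position))
    return dict(index)

# Hypothetical example pages, for illustration only.
pages = {
    "example.com/a": "search engines index the web",
    "example.com/b": "the web is large",
}
index = build_index(pages)
# index["web"] now lists every page (and word position) where "web" appears.
```

Answering a query then reduces to looking up each query word in the index, rather than scanning every page.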
A search engine starts with a list of one or more websites. The engine then requests the homepage of each site on its list. When a retrieved homepage contains links to other pages, the search engine requests a copy of each page those links point to. If those pages in turn contain links to still more pages, the search software requests copies of those as well. And so on, day after day, ceaselessly.
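The process just described is a breadth-first traversal of the web's link graph. The following is a minimal sketch, assuming a caller-supplied `fetch` function that maps a URL to its HTML (in practice this would be an HTTP client, with politeness delays and robots.txt handling a real crawler needs):

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkParser(HTMLParser):
    """Collect the href targets of <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, fetch, max_pages=100):
    """Breadth-first crawl starting from the `seeds` URL list.

    `fetch` is assumed to be a callable url -> HTML string.
    Returns the set of URLs visited.
    """
    queue = deque(seeds)
    visited = set()
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        parser = LinkParser()
        parser.feed(fetch(url))
        for link in parser.links:
            target = urljoin(url, link)  # resolve relative links
            if target not in visited:
                queue.append(target)
    return visited

# Hypothetical three-page site, served from memory for illustration.
site = {
    "http://example.com/": '<a href="/a">A</a> <a href="/b">B</a>',
    "http://example.com/a": '<a href="/b">B</a>',
    "http://example.com/b": "no links here",
}
visited = crawl(["http://example.com/"], lambda url: site.get(url, ""))
```

Each fetched page would be handed to the indexer as it arrives, so crawling and index-building proceed together.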
In practice, most search engines do not exhaustively cover all possible websites. In addition, some search engines pass retrieved material along to human editors, who rate the pages on a variety of scales — quality, appropriateness for families, and so on. Creating such an annotated index obviously takes longer than creating a comparable unannotated one. Search engines form a kind of “card catalog” for the Internet, and as such are the primary means by which Internet users find digital information.