The worst search engine. The best search engine

As you probably already know, the general methodology for using search engines is simple: you go to the search engine's website, enter a search word or phrase, and press Enter (or click the Find button), and you get the result: a list of links to web pages containing the word or phrase you specified.

The difficulty lies in the details, and the details matter: how to avoid a heap of garbage in the search results; how to make the engine find the exact word or phrase you need rather than every passing mention across every site on the Internet; why the most useful and interesting sites sit far from the top of the results list; why the engine found nothing useful at all, even though you know for certain the information is on the Internet; and a few dozen more whys and hows.

Link popularity refers to the number of links pointing to a page. Some engines, consulting their own or third-party databases, give extra weight to pages that are linked to heavily or that are mentioned on important sites. Each link is treated as a vote for the page's quality.
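
As a rough illustration of links-as-votes, here is a minimal Python sketch; the link graph and page names are invented for the example:

```python
from collections import Counter

# Hypothetical link graph: each page maps to the pages it links to.
link_graph = {
    "a.example": ["b.example", "c.example"],
    "b.example": ["c.example"],
    "d.example": ["c.example", "b.example"],
}

# Count inbound links: each link is one "vote" for the target page.
votes = Counter(target for targets in link_graph.values() for target in targets)

# Rank pages by link popularity, most-voted first.
for page, count in votes.most_common():
    print(page, count)
```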

As with the tools presented earlier, metasearch engines differ among themselves: in their search interfaces, in the engines they use for the search, in how they process queries, and in how they compile and present the results.

Earlier it was said that the best and most used Russian search sites are Google, Yandex, Search@Mail.ru, and Rambler, while the best-known foreign search engines are www.google.com and www.yahoo.com. Keep in mind that each of these engines has its own individual characteristics.

To begin with, their "coverage areas" may differ: the portions of the Internet that a given engine indexes, that is, examines and makes searchable.

In these cases, metasearch engines can serve as the thematic tool guides described earlier, letting users pick specialized tools by language or subject. Some interfaces display the underlying tools in easy-to-scan lists and let the user choose which of them to search; others offer no such setting and may not even state on their help pages which engines they query.

As for query processing, most metasearch engines let you formulate a search expression in a syntax similar to the one used by most engines, including Boolean logic and even natural language. Some translate the query into the language of each individual engine. Others simply forward the query exactly as the user typed it, which can reduce the effectiveness of the search, since each search engine uses its own syntax. For example, some engines accept Boolean connectives, while others accept only inclusion and exclusion signs.
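
A minimal sketch of that translation step, assuming a toy internal query format and two invented target syntaxes:

```python
# Hypothetical internal query: terms to require and terms to exclude.
query = {"require": ["python", "tutorial"], "exclude": ["video"]}

def to_boolean_syntax(q):
    """Render for an engine that accepts Boolean connectives."""
    required = " AND ".join(q["require"])
    excluded = "".join(f" AND NOT {t}" for t in q["exclude"])
    return required + excluded

def to_plus_minus_syntax(q):
    """Render for an engine that only accepts inclusion/exclusion signs."""
    return " ".join([f"+{t}" for t in q["require"]] +
                    [f"-{t}" for t in q["exclude"]])

print(to_boolean_syntax(query))     # python AND tutorial AND NOT video
print(to_plus_minus_syntax(query))  # +python +tutorial -video
```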

Search engines usually consist of three components (a toy sketch of all three follows the list):
• a robot (spider) program that moves around the network and collects information about its resources;
• a database that holds the information about network resources gathered by the search robot;
• a search engine proper, through which the user interacts with the database.
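
The sketch below wires the three components together in a few lines of Python; the pages are hard-coded stand-ins for documents a real spider would fetch over the network:

```python
# Toy stand-ins for documents a real spider would fetch.
pages = {
    "example.com/a": "search engines index web pages",
    "example.com/b": "a spider crawls pages and follows links",
}

# 1. Robot (spider): "collects" documents -- here, it just reads the dict.
def crawl():
    for url, text in pages.items():
        yield url, text

# 2. Database: an inverted index mapping each word to the URLs containing it.
database = {}
for url, text in crawl():
    for word in set(text.split()):
        database.setdefault(word, set()).add(url)

# 3. Search engine: the user-facing component that queries the database.
def search(word):
    return sorted(database.get(word.lower(), ()))

print(search("pages"))  # ['example.com/a', 'example.com/b']
```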

As they wander the network, spider programs extract and index (evaluate) various types of information, and different robots have their own search features and priorities. Some index every word in a document; others index only the most frequently occurring words. In general, a document can be indexed in many ways: by the number of words it contains, by its size, by its name, headings, links, and so on.
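
For example, the "most frequent words only" strategy can be sketched like this (the cut-off of three terms and the sample text are invented):

```python
from collections import Counter

def top_terms(text, n=3):
    """Index only the n most frequent words of a document,
    one of the indexing strategies described above."""
    words = text.lower().split()
    return [word for word, _ in Counter(words).most_common(n)]

doc = "the spider indexes the pages the spider visits"
print(top_terms(doc))  # e.g. ['the', 'spider', 'indexes']
```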

Therefore, depending on how the query is passed to an engine, that engine may misinterpret it. Response time and the way results are returned also depend strongly on whether the underlying tools are queried sequentially or simultaneously. Typically, the interface lets the user set a timeout beyond which the search is cancelled for engines that have not returned results. Some also let you set how many results will be presented per engine.
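
A sketch of simultaneous querying with a timeout and a per-engine result cap; the engine functions are stand-ins rather than real APIs:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError, as_completed

# Hypothetical engine back-ends; real ones would issue HTTP requests.
def fast_engine(query):
    return [f"fast hit 1 for {query}", f"fast hit 2 for {query}"]

def slow_engine(query):
    time.sleep(5)  # simulates an engine that does not answer in time
    return [f"slow hit for {query}"]

ENGINES = {"fast": fast_engine, "slow": slow_engine}
TIMEOUT = 1.0        # seconds before unresponsive engines are dropped
MAX_PER_ENGINE = 10  # cap on results kept per engine

def metasearch(query):
    collected = {}
    with ThreadPoolExecutor(max_workers=len(ENGINES)) as pool:
        futures = {pool.submit(fn, query): name for name, fn in ENGINES.items()}
        try:
            for done in as_completed(futures, timeout=TIMEOUT):
                collected[futures[done]] = done.result()[:MAX_PER_ENGINE]
        except TimeoutError:
            pass  # engines still busy past the deadline are ignored
    # Note: this simple sketch still waits for straggler threads at pool
    # shutdown; the returned results, however, exclude them.
    return collected

print(metasearch("metasearch"))  # only the fast engine's results appear
```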

The most recommended form of presenting results is one in which the answers of each search tool are integrated, ordered by relevance, and stripped of duplicates. Sometimes, however, the results are grouped by tool and presented sequentially, and some metasearch engines show the relevance rank each result earned in the engine that returned it. Metasearch engines are indicated when a search on a single engine turns up few results. They can also be used to check which individual engines give the best answers, and to get an overview of what each tool holds on a topic before choosing a specific engine for an advanced search.
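
One way to integrate, de-duplicate, and re-order such answers, assuming each engine hands back (url, score) pairs; the data here is made up:

```python
# Hypothetical per-engine result lists: (url, relevance score in [0, 1]).
engine_results = {
    "engineA": [("example.com/x", 0.9), ("example.com/y", 0.7)],
    "engineB": [("example.com/y", 0.8), ("example.com/z", 0.6)],
}

def merge(results_by_engine):
    """Integrate answers, drop duplicates, keep each URL's best score."""
    best = {}
    for hits in results_by_engine.values():
        for url, score in hits:
            best[url] = max(score, best.get(url, 0.0))
    # Order the merged list by relevance, highest first.
    return sorted(best.items(), key=lambda item: item[1], reverse=True)

for url, score in merge(engine_results):
    print(f"{score:.1f}  {url}")
```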

Typically, search robots work on a tip: the creator of a web page submits a request asking the search engine to index the document. A search robot is then sent to the specified URL and does its job.

But search engine spiders can also navigate the network on their own, by following the links in the documents they visit.
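
That link-following behavior amounts to a graph traversal; here is a breadth-first sketch over an invented link graph:

```python
from collections import deque

# Hypothetical pages: each URL maps to the links found in that document.
links_on_page = {
    "example.com/start": ["example.com/a", "example.com/b"],
    "example.com/a": ["example.com/b"],
    "example.com/b": [],
}

def spider(seed):
    """Visit pages breadth-first by following links, as described above."""
    seen, queue = {seed}, deque([seed])
    while queue:
        url = queue.popleft()
        print("indexing", url)
        for link in links_on_page.get(url, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)

spider("example.com/start")
```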

It is important to note that metasearch engines have their drawbacks. The main limitation is that the specific search capabilities of each engine, the very mechanisms for refining a search further, become inaccessible through the metasearch interface. Given the sheer amount of information on the Internet, the results also tend to level out: you get more information without a corresponding improvement in quality. Because of this limitation, metasearch engines are best suited to single-term queries or other simple searches that do not call for more complexity.

Robots put the collected information into a database, and it is this database the user interacts with when searching. Each search engine builds its own database; much of the information in it may coincide with other engines' databases, but there are significant differences.

Of no small importance is the way a search engine sorts the results it finds, placing some web pages at the top of the list and others at the end.

Some metasearch engines retrieve only a subset of each tool's results. A metasearch also takes longer, because additional processing is needed to compile the results and because the overall response time is dictated by the slowest tool.

How to keep up with search engines. As you can see, Internet search tools are a complex universe, not only because of the distinct characteristics of each tool, but also because of the variety of types and subtypes and because they are constantly evolving. Moreover, the difficulty of finding relevant information through them is masked by their seemingly friendly interfaces. Thus, despite the vast amount of information on the Internet and the tools available for searching it, users are often left disappointed by unsatisfactory results.

At first glance, it may seem that only Yandex can rival Google, and even that is not certain. These companies pour huge sums into innovation and development. Does anyone stand a chance not just of competing with the leaders, but of beating them? Lifehacker's answer is "Yes!" Several search engines have pulled it off. Let's look at our heroes.

An information specialist should at least consult each tool's documentation, limited though it often is, in order to use the tool well. Ideally, you should keep learning and stay informed: there are websites that regularly publish articles on Internet search tools along with comparative performance tables. Examples follow.

The Internet search tools website offers a categorized list of search tools. Here is a selection of six alternative search engines worth checking out. But the power of the American company, and the way it tunes itself to the personal data of Internet users, can be off-putting.

DuckDuckGo

Sergey Petrenko

Former CEO of Yandex.Ukraine.

As for the fate of alternative search engines, it is simple: to remain very niche projects with small audiences, and therefore either without clear commercial prospects or, on the contrary, with complete clarity about their absence.

From society's point of view, the fact that all Internet users get the same results for the same queries can also have consequences, still poorly measured, for the diversity of knowledge and opinions. Suggestions appear as you type, adapted to the type of query. In addition to the usual sections, a special tab called "carnets" is devoted to content organized by topic.

It is a search engine that has grown increasingly popular over the past two years, thanks to Edward Snowden's revelations about electronic surveillance. In other words, it does not try to display results that reinforce the user's opinions or interests, a form of personalization that can have perverse consequences.

If you look at the examples in the article, you can see that such search engines either specialize in a narrow but in-demand niche, one that, perhaps only for now, has not grown enough to register on Google's or Yandex's radar, or they test an original ranking hypothesis that is not yet applicable in mainstream search.

A logo, a search bar, and a brief reminder of the goal the site has set itself: to refrain from tracking its users. Suggestions are offered as the user types the query. Sorting options are not available on the main page: you must submit your search in order to see them.

The results sit in a narrow column on the left and work on the principle of infinite scrolling: the further you scroll, the more new results appear. Advertising appears only below the search bar. "Bangs" let you search directly on a target site, and instant answers stand in for a Knowledge Graph. The site can respond directly to more than 650 types of queries, with dozens more working to varying degrees.

For example, if Tor search were suddenly in demand, that is, if at least some percentage of Google's audience needed those results, then of course the mainstream engines would set about finding and showing them to the user. Likewise, if audience behavior showed that for a noticeable share of queries a significant share of users found results more relevant without user-dependent personalization factors, then Yandex or Google would start producing such results.

But what makes a search engine good is its effectiveness and the relevance of its results. A mere search box is not a search engine per se, because it only runs searches in your favorite search engine. So what alternative search engines are available?

In France, 94.1% of searches go through this search engine. We stopped passers-by to discuss these statistics and realized that many people confuse browsers with search engines. These engines store countless pieces of personal data, which they can use at their discretion. Moreover, such services are not always the most environmentally friendly or ethical in how they operate.

Being better, in the context of this article, does not mean being better at everything. Yes, in many respects our heroes lag far behind Yandex, and even behind Bing. But each of these services gives the user something the giants of the search industry cannot offer. You surely know similar projects too: share them with us, and we will discuss.

Lilo: solidarity search engine

Here are some of the reasons we went looking for alternative search engines. This engine lets its users accumulate "water drops" for each search they run. These "drops" are a virtual currency corresponding to 50% of the profit Lilo makes from advertising; the kitty lets the user support a social or environmental project. In addition, Lilo says it does not track its users and does not collect information about their searches.
