The Deep Web

Search engines like Google return results so vast and varied, in a matter of seconds, that they have led us to believe we have all the information in the world at our fingertips. Every day, search engines churn out faster and more powerful algorithms to make their search the best. Yet a vast amount of the internet goes unnoticed by these engines. Welcome to the deep web. Unlike the surface web, the deep web lies below the radar of almost all common search engines. Yet it may contain more useful information than you can find on the surface web.

Search engines work by crawling the web and indexing the pages and the links on them. When a search query is entered, the engine shows the indexed links in order of relevance. Each search engine crawls the web periodically, and the rate at which it does so depends solely on the search engine company. However complex modern search algorithms may be, they can only access static web pages. The web also consists of dynamic pages and databases whose data can be reached only by keying in a query. Because the crawler cannot submit queries, it misses these sites entirely. There are also hordes of other sites that keep search engines at bay for whatever reason, and databases to which access is restricted, which therefore never show up in a common search.
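The gap described above can be sketched in a few lines. Below is a toy, in-memory "web" (all page names and records are hypothetical) where a breadth-first crawler indexes every page reachable by static links, but never submits the query a database-backed page requires, so that content stays invisible to it:

```python
from collections import deque

# Toy static web: each page exposes its text and outgoing links.
STATIC_PAGES = {
    "home": {"text": "welcome", "links": ["about", "search-form"]},
    "about": {"text": "about us", "links": []},
    "search-form": {"text": "enter a query to search our records", "links": []},
}

# Content reachable only by submitting a query -- no static link points here.
def query_database(term):
    records = {"deep": "hidden record about the deep web"}
    return records.get(term)

def crawl(start):
    """Breadth-first crawl: index every page reachable via static links."""
    index, queue, seen = {}, deque([start]), {start}
    while queue:
        url = queue.popleft()
        page = STATIC_PAGES[url]
        index[url] = page["text"]
        for link in page["links"]:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index

index = crawl("home")
# The crawler reaches the search form but never fills it in, so the
# database record stays out of the index -- that content is "deep web".
```

The crawler happily indexes the page hosting the form, but since it has no way to guess what to type into it, everything behind `query_database` remains unindexed.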

Any new technology has two faces, and the deep web is no exception: among other things, it is used for criminal activity. One famous system for anonymous surfing is Freenet, a decentralised storage and retrieval system. Freenet preserves the user's anonymity and even hides the fact that the user is running Freenet at all. It accomplishes this by routing the user to an already stored copy of the requested page, so a user can only access the sites Freenet holds on its nodes. In this respect Freenet resembles a p2p client more than an actual proxy server.

The deep web, though, can be searched through what is known as federated search. A federated search engine uses software programs called connectors, each of which translates the user's query into search commands for a particular database. The connectors then return their results, merged onto a single page.
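A minimal sketch of this idea follows, with two hypothetical local dictionaries standing in for remote databases and one connector function per source. Fanning the query out in parallel also illustrates the latency point discussed below: the merged page is ready only when the slowest source has answered.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sources standing in for remote, query-only databases.
LIBRARY_DB = {"deep web": ["Indexing the invisible web"]}
THESIS_DB = {"deep web": ["Thesis: federated retrieval"]}

def library_connector(query):
    """Translate the query for the library catalogue and tag its hits."""
    return [("library", hit) for hit in LIBRARY_DB.get(query, [])]

def thesis_connector(query):
    """Translate the query for the thesis archive and tag its hits."""
    return [("theses", hit) for hit in THESIS_DB.get(query, [])]

def federated_search(query, connectors):
    """Send the query to every connector in parallel and merge the
    results onto one page; total time tracks the slowest source."""
    with ThreadPoolExecutor() as pool:
        batches = pool.map(lambda connector: connector(query), connectors)
    return [hit for batch in batches for hit in batch]

results = federated_search("deep web", [library_connector, thesis_connector])
```

A real connector would speak each database's native query protocol rather than a dictionary lookup, but the fan-out-and-merge shape is the same.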

The advantages of federated search are many. Federated searches can be more localized: for instance, a student looking for research projects can run a federated search across a university's databases and get more accurate, relevant results. Because a federated search queries multiple databases at once, it also saves the time of combing through each database separately. Another advantage is that results reflect information updated in real time. A normal web crawler shows data as of its last pass over the web, which may be stale, whereas a federated search surfaces results as soon as a database is updated. There is one downside, though: a federated search takes considerably longer than a normal search engine, since it depends on the native search speed of each database. Next time you log on, don't just surf the web; take a peek into the deep web.
