Abstract: The structure and basic principles of building the Internet. Problems and prospects for its development. Principles of building the global Internet: network structure, protocols, and the client-server model.

The global computer network Internet

The structure and principles of the Internet

The Internet is a global computer network that unites many local, regional, and corporate networks and includes tens of millions of computers.

The difference between the Internet and traditional networks is that it does not have an official owner. It is a voluntary association of various networks. There are only organizations that coordinate the registration of new users on the network. The technical side of networking is controlled by the Federal Networking Council (FNC), which on October 24, 1995 adopted a definition of what we mean by the term "Internet":

The Internet is a global computer system that:

· Is logically interconnected through a space of globally unique addresses (each computer connected to the network has its own unique address);

· Is able to maintain communication (exchange of information);

· Provides the operation of high-level services, for example the WWW, e-mail, teleconferences, network chat, and others.

The structure of the Internet resembles a spider web, with computers connected by communication lines at its nodes. Internet nodes connected by high-speed communication lines form the backbone of the Internet. The digitized data is sent through routers, which connect networks and use sophisticated algorithms to select routes for information flows.

A server on the Internet is a computer that provides services to network users: shared access to disks, files, and printers, and an e-mail system. Typically, a server is a combination of hardware and software. A computer connected to the Internet and used to communicate with other computers on the network is called a host.

The server provides services to other computers that request information, which are called clients (users, subscribers). Thus, working on the Internet assumes the presence of a transmitter of information, a receiver, and a communication channel between them. When we "enter" the Internet, our computer acts as a client: it requests the information we need from the server of our choice.

The main protocol suite over which the Internet operates is TCP/IP, combining the transmission protocol TCP (Transmission Control Protocol) and the routing protocol IP (Internet Protocol).

The data is split into packets before being sent to the network. A packet is a unit of information transmitted between network devices as a whole. On the transmitting side, the packet passes sequentially through all levels of the system from top to bottom (from the application layer to the physical). It is then transmitted through the network cable to the receiving computer and again passes through all the levels in the reverse order. The protocol for dividing transmitted data into packets is TCP. It is a transport layer protocol and controls how information is transmitted.

Each packet contains the addresses of the sender and the receiver and the sequence number of the packet in the overall data stream. A server that receives a packet compares its own address with the recipient's address specified in the packet and forwards the packet in the right direction. The addressing protocol is IP. Because each packet contains all the necessary data, it can be delivered independently of the others, and quite often packets reach their destination by different routes. The receiving computer then extracts the data from the packets and assembles the requested file from them.
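The splitting and reassembly described above can be sketched in Python. This is a toy illustration of the idea, not real TCP; the addresses, chunk size, and message are arbitrary:

```python
import random

def split_into_packets(data: bytes, size: int, src: str, dst: str):
    """Split a byte stream into numbered packets, as TCP does conceptually."""
    return [
        {"src": src, "dst": dst, "seq": i, "payload": data[i * size:(i + 1) * size]}
        for i in range((len(data) + size - 1) // size)
    ]

def reassemble(packets):
    """Restore the original stream by ordering packets by sequence number."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

message = b"Hello, Internet! This message travels in pieces."
packets = split_into_packets(message, 8, "198.137.240.91", "192.0.2.7")
random.shuffle(packets)            # packets may arrive in any order
assert reassemble(packets) == message
```

Even with the packet order shuffled, the sequence numbers let the receiver rebuild the original stream.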

Internet addressing

In TCP/IP, each computer is addressed with four decimal numbers separated by periods. Each number can have a value from 0 to 255. A computer address looks like this: 198.137.240.91

This address is called an IP address. It can be assigned to the computer permanently, or dynamically (at the moment the user connects to the provider), but at any given time there are no two computers on the Internet with the same IP address.
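The dotted-quad format can be checked with Python's standard ipaddress module (each octet must be in the range 0 to 255, and there must be exactly four of them):

```python
import ipaddress

def is_valid_ipv4(text: str) -> bool:
    """Check whether a dotted-quad string is a valid IPv4 address."""
    try:
        ipaddress.IPv4Address(text)
        return True
    except ValueError:
        return False

assert is_valid_ipv4("198.137.240.91")
assert not is_valid_ipv4("198.137.240.256")   # octets run from 0 to 255
assert not is_valid_ipv4("198.137.240")       # four octets are required
```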

It is inconvenient for the user to remember such addresses, which, moreover, can change. Therefore the Internet has the Domain Name System (DNS), which allows each computer to be given a name. There are millions of computers on the network, and to avoid repeating names, they are divided into independent domains.

Thus, the computer address looks like several domains, separated by a dot:

… <segment 3>.<segment 2>.<segment 1>

Here segment 1 is a first-level domain, segment 2 a second-level domain, and so on.
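A small Python sketch shows how a host name decomposes into domains of increasing level (the host name is the document's own example):

```python
def domain_levels(name: str):
    """Return the domains of a host name from level 1 (rightmost) upward."""
    labels = name.split(".")
    return [".".join(labels[-i:]) for i in range(1, len(labels) + 1)]

assert domain_levels("www.mrsu.ru") == ["ru", "mrsu.ru", "www.mrsu.ru"]
```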

The first-level domain usually identifies the country where the server is located (ru - Russia; ua - Ukraine; uk - Great Britain; de - Germany) or the type of organization (com - commercial organizations; edu - scientific and educational organizations; gov - government agencies; org - non-profit organizations).

A domain name is the unique name that a service provider has chosen to identify itself. For example, the domain name www.microsoft.com denotes a computer named www in the domain microsoft.com. Microsoft is the name of the company, com is the domain of commercial organizations. The computer name www indicates that a WWW service runs on this computer. This is the standard form of address for the servers of large companies (for example, www.intel.com, www.amd.com, etc.). Computer names in different domains can coincide. In addition, one computer on a network can have multiple DNS names.

When a domain name such as www.mrsu.ru is entered, the computer must translate it into an address. To do this, it sends a query to a DNS server, starting at the right side of the domain name and moving left. The DNS server software knows how to contact the root server, which stores the addresses of the first-level domain name servers (the rightmost part of the name, for example ru). The server thus requests from the root server the address of the computer responsible for the ru domain. Having received that information, it contacts this computer and requests the address of the mrsu server. After that, from the mrsu server it receives the address of the www computer, which is the one the application program was looking for.
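The iterative lookup described above can be modeled with nested dictionaries standing in for the root, ru, and mrsu name servers. The tree and the resulting address are invented purely for illustration:

```python
# A toy DNS tree: each zone maps a label either to a sub-zone (another
# dict) or to a final IP address string. Names and addresses are invented.
ROOT = {"ru": {"mrsu": {"www": "198.137.240.91"}}}

def resolve(name: str, zone=ROOT):
    """Walk the name right to left, as in the iterative lookup described above."""
    node = zone
    for label in reversed(name.split(".")):
        node = node[label]          # ask the server responsible for this zone
    return node

assert resolve("www.mrsu.ru") == "198.137.240.91"
```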

To address resources on the Internet, Uniform Resource Locators (URLs) are used.

The URL includes:

· Method of accessing the resource, i.e. access protocol (http, ftp, telnet, etc.);

· Network address of the resource (name of the host machine and domain);

· Full path to the file on the server.

In general, the URL format looks like this:

method://host.domain/path/filename

where method is one of the values listed below:

http - file on the World Wide Web server;

news - Usenet newsgroup;

telnet - access to the Telnet network resources;

ftp - file on an FTP server.

host.domain - domain name of the server on the Internet;

path - path to the file on the server;

filename - file name.

Example: http://support.vrn.ru/archive/index.html
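Python's urllib.parse splits such a URL into exactly the three parts listed above (using the document's example address):

```python
from urllib.parse import urlparse

url = urlparse("http://support.vrn.ru/archive/index.html")
assert url.scheme == "http"                 # access method (protocol)
assert url.netloc == "support.vrn.ru"       # host.domain
assert url.path == "/archive/index.html"    # path and file name
```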

Internet services

Hundreds of millions of people use Internet services. But the Internet itself is only a means of connecting computers and local networks with each other. To store and transmit information over the Internet, special information services have been created, sometimes called Internet services. In the simplest sense, a service is a pair of programs that interact with each other according to certain rules called application protocols. One of the programs is called the server and the second the client. Different services have different application protocols. To use any Internet service, you must install on your computer a client program that can work with the protocol of that service.

There are several of these services, the most common are the following:

· Email (E-mail) performs the functions of regular mail. It allows you to send and receive text messages, to which files of any format can be "attached". E-mail works using the SMTP and POP3 protocols. These two standard Internet mail protocols are built on top of the underlying TCP/IP protocol. SMTP defines the rules for sending mail messages over the Internet. POP3 is a protocol for receiving messages: in accordance with it, mail is received by the server and accumulates there, and the mail client program periodically checks the mail on the server and downloads messages to the local computer. There are many client programs for working with e-mail, for example Microsoft Outlook Express (included among the standard Windows programs), Microsoft Outlook (included in the MS Office suite), The Bat!, Eudora Pro, etc.

The email address looks like: username@computer_address. For instance: user@example.com

The left side of the address is the recipient's name, the right side is the domain name of the computer on which the messages are stored.
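Splitting an address at the @ sign is a one-liner; the sample address here is hypothetical:

```python
def split_email(address: str):
    """Split an address into user name and the mail host's domain name."""
    user, _, host = address.partition("@")
    return user, host

assert split_email("user@example.com") == ("user", "example.com")
```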

· Teleconferences (UseNet) are designed as a system for exchanging textual information between computers. The teleconferencing service resembles an e-mail distribution in which a message is sent not to a single correspondent but is placed on a teleconference server, from which it is sent to all the servers connected with it. On each server the message is stored for a limited time, during which anyone can read it. There are special client programs for working with the teleconferencing service; for example, the Microsoft Outlook Express email client also works with teleconferences.

Some newsgroups pre-select messages for relevance to the stated topic of the newsgroup. This function is performed by moderators... These can be people or special programs that filter messages by keywords.

· File Transfer Service (FTP). The purpose of FTP is to exchange files over the Internet. The FTP service has its own servers that store data archives. The need for file transfer arises, for example, when receiving program files, sending large documents, or transferring archive files. The service uses FTP (File Transfer Protocol). The user's computer receives files using special software; in particular, WWW browsers have built-in tools for working with the FTP protocol.

· Terminal mode (Telnet). Remote Computer Management Service. By connecting to a remote computer using the protocol of this service, you can control the operation of this computer.

· IRC service(Internet Relay Chat) is designed for direct communication of several clients in real time. IRC is often called chat.

· World Wide Web Service (WWW) is a service for searching and viewing hypertext documents. These documents are called Web pages, and a collection of Web pages that are close in meaning or topic and stored together is called a Web site. A single Web server can host many Web sites. Web pages can include text, pictures, animation, sound, video, and active elements: small programs that animate the page and make it interactive, i.e. responsive to the user's actions. The application protocol of the WWW service is the Hypertext Transfer Protocol (HTTP). To use the WWW service, you need to install on your computer a special Web document viewer, a WWW browser. It is an application program that receives the requested documents, interprets the data, and displays the contents of the documents on the screen. A built-in Internet Explorer browser is supplied with Windows 98 and above.

WWW and HTML

The main and original idea of this service is the idea of hypertext, put forward by Tim Berners-Lee in 1989 as a new framework for access to information. Hypertext is a document format that, in addition to text, may contain links to other hypertext documents, pictures, music, and files. Hyperlinks are links that allow you to jump from one Web page to another with a click of the mouse. Hypertext links between documents stored on physical Internet servers are the basis of the logical WWW space. Such links could not exist unless each document in this space had its own unique address. If the path to a specific page is not specified, the home page of the site or Web server is assumed.

For example, the address of the computer hosting the Rambler search engine's WWW server is http://www.rambler.ru. The Rambler start page is loaded at this address in browsers, and the Web page describing the system's search language has the URL http://www.rambler.ru/new/help.html

HTML (HyperText Markup Language) is the hypertext document format used on the WWW to present information. This format describes not how the document should look, but its structure and links. The appearance of the document on the user's screen is determined by the browser. HTML file names usually have the extension htm or html. Tags are HTML commands; they are set off from the rest of the text by angle brackets, for instance <b>. Tags are often used in pairs to mark the beginning and end of the region of HTML code they act on: <b> is an opening tag and </b> the corresponding closing tag. Tags determine what properties the text has in their area of effect: size, font style, alignment, color, the position of objects in the document, and so on.



The Internet (INTERconnection NETwork) is a synthesis of many local, corporate, and national networks, which often use different internal communication lines and protocols. Computers connected to the Internet can have any hardware and software platform, but they must support the TCP/IP protocol stack (protocol family). A system of servers of various hardware and software configurations unites them into a single global network. These servers are interconnected by satellite and fiber-optic communication lines, less often by coaxial and other cables.

The servers support the high-level protocol HTTP, which is often used in conjunction with FTP (file transfer), MIME (binary file transfer), and SMTP and POP (e-mail support). The transport protocol between servers and network end users is TCP/IP.

There is no single owner and control center for the Internet.

As short names for the Internet, the terms the Web and simply the Net (with a capital letter) are often used.

Since the early 1990s, networks similar to the Internet, called intranets, have been built. An intranet is any (usually corporate, enterprise-wide) computer network that uses the TCP/IP protocol, the usual Internet addressing methods, and the HTTP hypertext transfer protocol with HTML documents, relying on software familiar from the Internet (for example, Microsoft Information Server on the server side and standard Web browsers on the client side). An intranet is often connected to the Internet.

The project to create a global network started in the late 1960s and was funded by the US government through the military agency DARPA (Defense Advanced Research Projects Agency). After the Soviet Union launched an artificial Earth satellite in 1957, the US Department of Defense decided that in case of war America needed a reliable information transmission system, and DARPA proposed developing a computer network for this. J.C.R. Licklider put forward the idea of creating a worldwide computer network.

He came to the idea of a network of computers with free access for anyone to its resources, headed the computer research program at ARPA, and urged his successors to develop computer networks.

The development of such a network was entrusted to the University of California, Los Angeles, Stanford Research Center, University of Utah, and California State University Santa Barbara.

The US Department of Defense built ARPANET (Advanced Research Projects Agency Network) and several other networks serving the US military and space industry. In 1969 ARPANET was created as a highly reliable data transmission network: a computer network using packet switching technology. In 1983 ARPANET split into two networks: one, MILNET, became part of the US defense data network; the other was used to connect academic and research centers, gradually evolved, and by 1990 had been transformed into the Internet.

Today the Internet connects many global networks and has millions of servers.

The main types of servers (and the services they provide) are listed below.

· FTP server - storage of large volumes of files for download to users' local drives (rarely the reverse).

· Gopher server - storage of textual information only (articles, documentation, short notes, etc.) for the same purpose; a multilevel menu is used to find the desired document; documents may contain hyperlinks.

· Mail server (e-mail server) - transmission and storage of e-mail (letters, in addition to text, may contain attachments in the form of arbitrary files: sound, images, etc.).

· News server - storage of conferences (each with its own topic); conferences store articles, program files, multimedia, etc.

· WWW server (World Wide Web, the 'world wide web') - storage of any computer-readable information in hyperlinked format (links are allowed both to documents inside the server and to documents on other servers); allows dynamically generated information and interactive data exchange with a remote user.

The server is a fairly powerful computer with specialized server software designed to effectively support specific network operations.

For Windows operating systems, Microsoft Information Server is often used. For these servers to function successfully, a connection to a local network using the TCP/IP protocol or to a high-speed Internet channel is needed. Software developers also run servers on a local computer in order to debug complex Web sites; the Apache server is usually used (there are versions for Linux, Solaris, SunOS 4.x, and Windows).

Internet servers store huge amounts of information and process requests for this information for many users at the same time.

One of the basic concepts of the Internet is its openness. This means that any user, spending a minimum of funds, can create his own Web page, Web site (a set of logically connected Web pages), or Web server on which to post arbitrary information. Moreover, practically any type of computer, running any operating system, can act as a user computer.

Currently, the Internet is used for express information, advertising, purchase and sale operations, in banking, etc., more than a billion unique user pages are posted on the Web; new Web applications are offered monthly.

Principles of network addressing on the Internet

Each device connected to the Internet (node, host) is uniquely addressed by a 32-bit binary number, written as four decimal octets separated by dots (for example, 198.137.240.91). The node address is logically divided into two parts, one of which is called the network identifier (Network ID) and the other the node identifier (Host ID).

The global network unites many networks, each of which has its own Network ID, each network may contain a number of nodes, each of which has its own Host ID. In this way (using a pair of numbers - Network ID and Host ID), you can address any node connected to the global network based on the TCP / IP protocol.
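A minimal sketch of this split, assuming classful addressing where the network part occupies a fixed number of leading bits (8 for class A, 16 for class B, 24 for class C):

```python
def split_address(ip: str, prefix_bits: int):
    """Split a dotted-quad address into (Network ID, Host ID).

    prefix_bits is the width of the network part: 8 for class A,
    16 for class B, 24 for class C.
    """
    value = 0
    for octet in ip.split("."):
        value = (value << 8) | int(octet)       # pack octets into 32 bits
    host_bits = 32 - prefix_bits
    return value >> host_bits, value & ((1 << host_bits) - 1)

net_id, host_id = split_address("198.137.240.91", 24)   # a class C address
assert net_id == (198 << 16) | (137 << 8) | 240
assert host_id == 91
```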

There are several address classes (A, B, C, D, ...), which use different widths for the Network ID and Host ID fields.

The most significant bits of the first octet have a special meaning: they determine which of the five classes an address belongs to:

· Class A: first octet 0xxxxxxx (decimal 1-126); 126 networks; up to 16,777,214 hosts per network.

· Class B: first octet 10xxxxxx (decimal 128-191); 16,384 networks; up to 65,534 hosts per network.

· Class C: first octet 110xxxxx (decimal 192-223); 2,097,152 networks; up to 254 hosts per network.

· Class D: first octet 1110xxxx (decimal 224-239); group (multicast) addresses.

· Class E: first octet 1111xxxx (decimal 240-255); experimental, reserved.

In a class A address the first bit is 0, the next 7 bits identify the network, and the last 24 identify the host on the network. With seven bits in the network part, minus two special network numbers (0 and 127), class A can contain only 2^7 - 2 = 126 networks, but each of them can contain up to 2^24 - 2 hosts, i.e. more than 16 million computers. Class A addresses are therefore used only by large businesses, military, and research organizations (e.g. General Electric, the Defense Intelligence Agency, AT&T Bell Laboratories, the Massachusetts Institute of Technology).

If the first two bits of the address are 10, this is a class B address; the next 14 bits indicate the network address and the following 16 the host computer address. Class B addresses are used, more often than class A addresses, by corporations, universities, and Internet service providers.

The first three bits in class C equal 110; the next 21 bits indicate the network address and the last 8 the host computer. Class C addresses are used by organizations with fewer than 250 devices connected to the Internet.

Class D addresses, which start with the bits 1110, came into use relatively recently and support a group (multicast) message delivery service, intended for sets of computers sharing a common protocol rather than groups of computers sharing a common network. Group message delivery on the Internet may become the backbone of broadcast technologies such as radio and television.

Class E addresses begin with the bits 1111 and are reserved for future network expansion.
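The class rules above reduce to comparing the first octet against the boundaries implied by the leading bits; a small Python function makes this concrete:

```python
def address_class(ip: str) -> str:
    """Determine the historical class of an IPv4 address from its first octet."""
    first = int(ip.split(".")[0])
    if first < 128:
        return "A"          # leading bit  0
    if first < 192:
        return "B"          # leading bits 10
    if first < 224:
        return "C"          # leading bits 110
    if first < 240:
        return "D"          # leading bits 1110 (multicast)
    return "E"              # leading bits 1111 (reserved)

assert address_class("26.10.3.4") == "A"
assert address_class("130.15.7.9") == "B"
assert address_class("198.137.240.91") == "C"
assert address_class("224.0.0.1") == "D"
```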

Some addresses are reserved for special purposes:

  • The address 0.0.0.0 is intended for transmitting packets "to itself", i.e. to the node's own address.
  • The address 127.0.0.1 is used for testing network applications.
  • An address that contains a network number and a node number of zero is used to designate a network (for example, 191.24.2.0).
  • If all bits of the node number field are equal to one (for example, 193.24.2.255), then this is a broadcast address, using which you can send packets to all nodes of this network at once.
  • If all bits of the network identifier and all bits of the host identifier are one (for example, 255.255.255.255), all hosts on this network are addressed.
  • To address a node in this network, you can specify a zero value instead of the network number (for example, 0.0.0.2).
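Several of these special addresses can be checked with Python's ipaddress module (the 193.24.2.0 network is the document's own example):

```python
import ipaddress

# Loopback address used for testing network applications
assert ipaddress.ip_address("127.0.0.1").is_loopback

# Network address (host bits all zero) and its broadcast address
# (host bits all one) for the example class C network
net = ipaddress.ip_network("193.24.2.0/24")
assert net.network_address == ipaddress.ip_address("193.24.2.0")
assert net.broadcast_address == ipaddress.ip_address("193.24.2.255")
```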

When connecting to the Internet, the user is assigned a permanent or temporary address. A temporary address is usually valid only for the duration of a session with the Network over a telephone line. When creating your own WWW server, a permanent address is needed (and, for other users connecting to this server over the LAN, a certain range of addresses).

In case of difficulties in connecting to the Internet, you can contact the international organization InterNIC (Internet Network Information Center) at the address www.internic.net, via FTP at ftp.internic.net, or by e-mail at hostmaster@internic.net.

IP addresses given as four decimal numbers are inconvenient for humans. Therefore a domain host-name system is used, which ensures the uniqueness of names through a hierarchical structure (Figure 3.2).

Figure 3.2. An example of the hierarchical structure of the domain name system (left) and fully qualified domain names of nodes (right).

The full domain address is formed from right to left by appending nested domain names, separated by dots. Domain name registration is carried out by the already mentioned organization InterNIC; registration is paid.

To map domain names to IP addresses, the Internet has a special distributed DNS (Domain Name System) database, using which (through so-called DNS servers) hosts can translate domain addresses into numerical IP addresses. Windows also uses WINS servers to manage the database that maps TCP/IP addresses to NetBIOS computer names on a Microsoft network. A correspondence between IP addresses and domain addresses can also be set in the HOSTS file (correspondences for NetBIOS names are set in the LMHOSTS file); both files are edited manually.
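A HOSTS file is just lines of "IP-address name [aliases]" pairs, optionally with # comments; a minimal parser (the file contents below are invented for illustration):

```python
def parse_hosts(text: str) -> dict:
    """Parse HOSTS-file lines ('IP  name [aliases...]') into name -> IP."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()    # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            mapping[name] = ip
    return mapping

hosts = parse_hosts("127.0.0.1 localhost\n198.137.240.91 www.example.org # demo")
assert hosts["localhost"] == "127.0.0.1"
assert hosts["www.example.org"] == "198.137.240.91"
```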

When the Internet was created, several top-level domains were defined for it, dividing domain addresses according to their belonging to different organizations.

· gov - government organizations;

· mil - military organizations;

· com - commercial organizations;

· org - non-profit organizations;

· edu - research organizations and educational institutions;

· net - networking organizations.

With the further development of the Internet, top-level domains belonging to different countries appeared in it (the full list is on the server ftp.wisc.edu)

In Russia, the domain name su, which belonged to the USSR in the past, is still sometimes used.

The address of each resource (file) on the Internet is set using a URL (Uniform Resource Locator), which has the following format:

protocol: // host_domain_address / file_path / file_name

As can be seen from the above description, URL syntax is close to the full file addresses (including the path through the file system) accepted in modern operating systems and is an extension of that scheme: information about the message exchange protocol and the concept of a network node have been added. The URL uniquely identifies a specific file on the network, and it is completely unimportant to the user whether this file is on the given computer or on a computer thousands of kilometers away connected to the Internet.

For WWW servers, URLs use the http access protocol.

The optional port parameter specifies the port number for working with the HTTP protocol; the default is port 80. The port number identifies a program running on a TCP/IP network node and communicating with other programs running on the same or another node of the network. The port number is specified in the URL after the host name, separated by a colon.
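With Python's urllib.parse, the port is available as a separate field (the host name and port 8080 here are purely illustrative):

```python
from urllib.parse import urlparse

# Explicit port in the URL
url = urlparse("http://www.example.com:8080/index.html")
assert url.port == 8080

# When no port is given, urlparse reports None and HTTP assumes port 80
assert urlparse("http://www.example.com/index.html").port is None
```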

FEDERAL EDUCATION AGENCY

BRYANSK STATE TECHNICAL UNIVERSITY

Department "Computer Science and Software"

The Internet. Development history, structure and principles of work, data transfer protocols, addressing system. Internet services: e-mail, forums, ICQ, file transfer, the World Wide Web environment, information retrieval tools.

COURSE WORK

by discipline "Computer science"

Student gr. 09-PO2:

Malikov S.S.

Leader:

Assistant P.A. Parshikov

Bryansk 2009

Introduction

1. Development history

2. The structure and principles of building the Internet

3. Internet protocols

4. Addressing on the Internet

5. Internet services

6. Search engines on the Internet

7. Instant messaging services and messengers

Conclusion

List of used literature

Introduction

The Internet is a global computer network covering the whole world. Today the Internet has millions of subscribers in more than 150 countries, and the size of the network increases by 7-10% every month. The Internet forms, as it were, the core that links together information networks belonging to various institutions around the world.

If earlier the network was used exclusively as a medium for transferring files and e-mail messages, today more complex problems of distributed access to resources are being solved. Several years ago, shells were created that support the functions of network search and access to distributed information resources, electronic archives.

The Internet, which once served solely research and educational groups whose interests extended to access to supercomputers, is becoming increasingly popular in the business world.

Companies are tempted by speed, cheap global connectivity, ease of collaboration, affordable software, and the Internet's unique database. They view the global network as a complement to their own local area networks.

At a low cost of services, users can access commercial and non-commercial information services in the USA, Canada, Australia and many European countries. In the archives of free access to the Internet, one can find information on almost all spheres of human activity, from new scientific discoveries to the weather forecast for tomorrow.

In addition, the Internet provides unique opportunities for cheap, reliable and confidential global communications around the world. It turns out to be very convenient for firms with branches around the world, transnational corporations and management structures. Usually, using the Internet infrastructure for international communication is much cheaper than direct computer communication via satellite or telephone.

Email is the most widely used Internet service. Currently, approximately 20 million people have an email address. Sending a letter by email is significantly cheaper than sending a regular letter. In addition, a message sent by e-mail reaches the addressee within a few hours, while a regular letter can take several days, or even weeks. The Internet is currently experiencing a boom, thanks in large part to the active support of the governments of European countries and the USA.

1. Development history

After the Soviet Union launched an artificial Earth satellite in 1957, the US Department of Defense decided that in case of war America needed a reliable information transmission system. The US Defense Advanced Research Projects Agency (DARPA) proposed developing a computer network for this. The development of such a network was entrusted to the University of California, Los Angeles, the Stanford Research Center, the University of Utah, and California State University Santa Barbara. The computer network was named ARPANET (Advanced Research Projects Agency Network), and in 1969, as part of the project, the network united the four institutions listed above. All work was funded by the US Department of Defense. The ARPANET network then began to grow and develop actively, and scientists from various fields of science began to use it.

The first ARPANET server was installed on September 1, 1969 at the University of California, Los Angeles. The Honeywell DDP-516 computer had 24 KB of RAM.

On October 29, 1969 at 21:00, a communication session was held between the first two nodes of the ARPANET, located 640 km apart: at the University of California, Los Angeles (UCLA) and at the Stanford Research Institute (SRI). Charley Kline was trying to connect remotely to a computer at SRI. His colleague Bill Duvall at SRI confirmed the successful transmission of each entered character by phone. The first time, only the three symbols LOG were sent, after which the network stopped functioning. LOG should have been the word LOGIN (the login command). The system was returned to working order by 22:30, and the next attempt was successful. This date can be considered the birthday of the Internet.

ARPANET was created using packet-switching technology based on the TCP/IP family (stack) of protocols, i.e. on packets making their own way through the network. It was the use of the TCP/IP network protocols (network software) that ensured normal interaction of computers with different software and hardware platforms on the network; in addition, the TCP/IP stack ensured high reliability of the computer network (if several computers failed, the network continued to function normally).

After the open publication in 1974 of a description of the IP and TCP protocols (a description of the interaction of computers in a network), networks began to develop rapidly, based on the TCP / IP protocol family. TCP / IP standards are open and constantly evolving. Currently, all operating systems support the TCP / IP protocol.

In 1984 ARPANET faced a formidable rival: the US National Science Foundation (NSF) founded the vast inter-university network NSFNet (National Science Foundation Network), which was made up of smaller networks (including the then-famous Usenet and Bitnet) and had much greater bandwidth than ARPANET. About 10 thousand computers were connected to this network within a year, and the title "Internet" began to pass smoothly to NSFNet.

2. The structure and principles of building the Internet

The Internet is a worldwide information computer network: a union of many regional computer networks and individual computers that exchange information with each other over public telecommunication channels (dedicated analog and digital telephone lines, optical communication channels and radio channels, including satellite links).

Information on the Internet is stored on servers. Servers have their own addresses and are controlled by specialized programs. They allow you to transfer mail and files, search databases, and perform other tasks.

The exchange of information between the servers of the network is carried out through high-speed communication channels (dedicated telephone lines, fiber-optic and satellite communication channels). Individual users' access to information resources on the Internet is usually carried out through a provider or corporate network.

A provider (network service provider) is a person or organization that provides services for connecting to computer networks. A provider is an organization that operates a modem pool for connecting clients and giving them access to the worldwide network.

The basic building blocks of the global network are local area networks. If a local network is directly connected to the global one, then every workstation on that network can reach the global network as well.

There are also computers that are directly connected to the global network. They are called host computers. A host is any computer that is a permanent part of the Internet, i.e. connected via the Internet protocol to another host, which in turn is connected to another, and so on.

3. Internet protocols

The main thing that distinguishes the Internet from other networks is its protocols: TCP/IP. In general, the term TCP/IP covers everything associated with communication between computers on the Internet. It spans the entire protocol family, application programs, and even the network itself. TCP/IP is an internetworking technology.

The TCP/IP protocol suite got its name from its two principal communication protocols: the Transmission Control Protocol (TCP) and the Internet Protocol (IP). Although the Internet uses many other protocols as well, it is often referred to as a TCP/IP network, since these two protocols are by far the most important.

As in any other network, on the Internet computers interact across seven layers: physical, data link, network, transport, session, presentation and application. Each layer has a corresponding set of protocols (i.e. rules of interaction).

Physical layer protocols define the type and characteristics of the communication lines between computers. Almost all known communication media are used on the Internet, from simple wire (twisted pair) to fiber-optic communication lines (FOCL).

For each type of communication line, a corresponding data link layer protocol controls the transmission of information over the channel. Data link protocols for telephone lines include SLIP (Serial Line Internet Protocol) and PPP (Point-to-Point Protocol). For LAN cabling, this role is played by the packet drivers for the LAN cards.

Network layer protocols are responsible for transferring data between devices on different networks, that is, they are involved in routing packets on the network. Network layer protocols include IP (Internet Protocol) and ARP (Address Resolution Protocol).

Transport layer protocols control the transfer of data from one program to another. Transport layer protocols include TCP (Transmission Control Protocol) and UDP (User Datagram Protocol).

Session layer protocols are responsible for establishing, maintaining and tearing down the corresponding channels. On the Internet this is done by the already mentioned TCP and UDP protocols, as well as by UUCP (Unix-to-Unix Copy Protocol).

Presentation layer protocols serve application programs. Presentation-level programs include those that run, for example, on a Unix server to provide services to subscribers: a telnet server, an FTP server, a Gopher server, an NFS server, NNTP (Network News Transfer Protocol), SMTP (Simple Mail Transfer Protocol), POP2 and POP3 (Post Office Protocol), etc.

Application protocols include network services and programs for their provision.

After this general overview of the protocols, some of them deserve a closer look:

3.1 TCP protocol

The Transmission Control Protocol provides reliable data transfer between two hosts. It allows a client and an application server to establish a logical connection and then use it to transfer large amounts of data as if a direct physical connection existed between them. The protocol splits the data stream into segments, acknowledges received packets, sets timeouts (to confirm that information has arrived), retransmits data in case of loss, and so on. Because this transport protocol provides guaranteed delivery, applications using it can ignore all the details of the transmission.
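The logical connection and stream-oriented delivery described above can be illustrated with a minimal Python sketch: a toy echo server and client on the loopback interface (the server, the port and the message are invented for the example, not part of any real service):

```python
import socket
import threading

def run_echo_server(server_sock):
    """Accept one connection and echo whatever arrives back to the client."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)  # TCP delivers this reliably and in order

# A listening socket on an OS-assigned port of the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

# The client side: establish the logical connection, then use it as a stream.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello over TCP")
reply = client.recv(1024)
client.close()
server.close()
print(reply)  # b'hello over TCP'
```

Neither side deals with packet loss or ordering: the TCP implementation in the operating system handles all of that.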

3.2 UDP protocol

The User Datagram Protocol implements a much simpler transmission service: like the network layer protocols it provides unreliable delivery without establishing a logical connection, but, unlike IP, it does so between application systems on host computers. It simply sends packets of data (datagrams) from one machine to another without any delivery guarantees. All reliability functions must be built into an application that uses UDP. UDP has several advantages over TCP: logical connections take time to establish and require additional system resources to maintain connection state on the computer, whereas UDP consumes system resources only when data is actually sent or received. Therefore, if a distributed system continuously exchanges data between client and server, communication over the TCP transport will be more efficient for it; if communication between hosts is rare, UDP is the preferred protocol.
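The contrast with TCP is visible in a minimal sketch: no connection is established, one datagram is simply handed to the network (on the loopback interface, where it will not in practice be lost; the message is an invented example):

```python
import socket

# Two UDP sockets on the loopback interface; no connection is established.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sendto() just hands one datagram to the network: there is no handshake,
# no acknowledgement and no retransmission at this layer.
sender.sendto(b"a single datagram", addr)

data, from_addr = receiver.recvfrom(1024)
sender.close()
receiver.close()
print(data)  # b'a single datagram'
```

On a real network the application itself would have to detect a lost datagram and resend it if it cared about delivery.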

Why are there two transport protocols, TCP and UDP, rather than just one? Because they provide different services to application processes, and most applications use only one of them. The programmer chooses the protocol that best suits his needs: if reliable delivery is required, TCP may be the better choice; if delivery of individual datagrams is enough, UDP may be. Efficient delivery over a long and unreliable transmission channel favors TCP; efficiency on fast networks with short-lived connections favors UDP.

Notable distributed applications using TCP include Telnet, FTP, and SMTP. UDP is used, in particular, by the SNMP network management protocol. Application-level protocols are focused on specific application tasks. They define both the procedures for organizing a certain type of interaction between application processes, and the form of information presentation during such interaction.

3.3 TELNET protocol

TELNET allows the serving machine to treat all remote terminals as standard line-mode "network virtual terminals" operating in ASCII, and also provides for negotiating more complex functions (for example, local or remote echo control, page mode, screen height and width, etc.). TELNET runs on top of TCP. At the application level, above TELNET, sits either a program supporting a real terminal (on the user's side) or an application process on the serving machine that the terminal accesses. The TELNET protocol has been around for a long time; it is well tested and widely deployed, with implementations for a wide variety of operating systems.

3.4 FTP protocol

File Transfer Protocol. Unlike Telnet, which lets you work on a remote host, the File Transfer Protocol (FTP) plays a more passive role: it is designed for receiving files from and sending files to a remote server. This makes it ideal for webmasters and anyone who needs to move large files from one computer to another without a direct connection. FTP is often used in so-called "passive" mode, in which the server tells the client which port to use for data, and the client opens the data connection itself; this is convenient, for example, when the client sits behind a firewall.

On Unix systems, FTP support is usually provided by the ftpd and ftp programs. By default, FTP uses port 21 for commands and port 20 for data transfer. FTP differs from most other TCP/IP application protocols in that it keeps commands and data on separate connections, so commands can be transmitted while a data transfer is in progress.

FTP clients and servers exist in one form or another on all operating systems. MacOS based FTP applications have a graphical interface like most Windows applications. The advantage of graphical FTP clients is that commands that are usually entered manually are now automatically generated by the client, which reduces the likelihood of errors and makes work easier and faster. On the other hand, FTP servers do not require additional attention after initial configuration, so the graphical interface is unnecessary for them.

3.5 TFTP protocol

Trivial FTP supports only a small subset of FTP's functions. It runs over UDP. TFTP acknowledges each block of data but has very little error handling beyond that; on the other hand, these restrictions keep the protocol overhead low. TFTP performs no authentication: it simply establishes a transfer. As a protective measure, TFTP allows only publicly accessible files to be moved.

Because of this, TFTP is a serious security risk, so it is normally confined to controlled settings: embedded applications, copying configuration files when setting up a router, situations where resources must be conserved, or networks where security is provided by other means. TFTP is also used in configurations where computers boot from a remote server, since a TFTP client is small enough to fit in the ROM of a network adapter.
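TFTP's simplicity shows in its packet format. The sketch below builds a TFTP read request (RRQ) as defined in RFC 1350: a 2-byte opcode followed by the file name and transfer mode as zero-terminated ASCII strings (the file name here is an invented example):

```python
import struct

def tftp_read_request(filename, mode="octet"):
    """Build a TFTP RRQ (read request) packet per RFC 1350:
    2-byte opcode 1, then the file name and the transfer mode,
    each as a zero-terminated ASCII string."""
    return (struct.pack("!H", 1)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

packet = tftp_read_request("pxelinux.cfg")
# The whole request fits in a single UDP datagram sent to port 69.
print(packet)  # b'\x00\x01pxelinux.cfg\x00octet\x00'
```

There are no login or negotiation steps: a client that can assemble this datagram and read the replies can implement the whole protocol, which is why it fits into boot ROMs.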

3.6 SMTP protocol

The Simple Mail Transfer Protocol is the de facto standard for sending email on networks, especially the Internet. All operating systems have email clients that support SMTP, and most ISPs use SMTP for outbound mail. SMTP servers exist for all operating systems, including Windows 9x/NT/2K, MacOS, the Unix family, Linux, BeOS, and even AmigaOS.

SMTP was designed to transport e-mail messages across different network environments. Essentially, SMTP doesn't keep track of how a message moves, but only makes sure that it gets to its destination.

SMTP has powerful mail-processing facilities that provide automatic routing based on specific criteria. In particular, SMTP can notify the sender that an address does not exist and return a message if it remains undelivered for a certain period of time (set by the system administrator of the sending server). SMTP uses TCP port 25.

SMTP supports the transfer of messages (e-mail) between arbitrary nodes on the Internet. With its mechanisms for relaying mail through intermediate stages and for improving delivery reliability, SMTP can run over various transport services; it can work even on networks that do not use the TCP/IP protocol family. SMTP can both group messages addressed to a single recipient and produce multiple copies of a message for delivery to different addresses.
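A small Python sketch shows what a message handed to SMTP looks like; the addresses and server name are purely illustrative, and the actual send is left commented out because it needs a reachable SMTP server:

```python
from email.message import EmailMessage

# Compose a message; the addresses here are invented for the example.
msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Greetings"
msg.set_content("Hello over SMTP!")

# Sending it would take an SMTP server, e.g. (not executed here):
#   import smtplib
#   with smtplib.SMTP("mail.example.com", 25) as s:
#       s.send_message(msg)
print(msg["Subject"])  # Greetings
```

The headers (From, To, Subject) are part of the message itself; SMTP's own dialogue (MAIL FROM, RCPT TO, DATA) carries the envelope separately, which is what makes relaying and multi-recipient delivery possible.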

3.7 X-Window

The X Window System uses the X protocol, which runs over TCP, to display multi-window graphics and text on the raster displays of workstations. X-Window is much more than a window-drawing utility; it is a whole philosophy of human-machine interaction.

3.8 X.25 network protocols

X.25 networks are named after recommendation X.25, issued by the CCITT (International Telegraph and Telephone Consultative Committee). This recommendation describes the interface by which users access a data network and the interface for interacting with a remote user over that network.

Within the network itself, data transmission can follow different rules. The core of the network can be built on faster frame relay protocols. However, when considering the construction of X.25 networks, we will mean networks in which data is also carried by the protocols described in the X.25 recommendation; this is how most corporate X.25 networks in Russia are currently built.

The X.25 recommendation defines three protocol layers.

The first (physical) layer describes signal levels and the interaction logic in terms of a physical interface.

The second (data link) layer uses the Link Access Procedure, Balanced (LAPB), with some modifications. This layer is responsible for efficient and reliable data transmission over a point-to-point connection between neighboring nodes of the X.25 network. It corrects transmission errors between neighboring nodes and controls data flow (if the receiving side is not ready to receive, it notifies the transmitting side, which suspends transmission). In addition, it defines parameters whose values can be tuned to optimize the transmission speed for the length of the channel between two points (the channel delay) and its quality (the probability of data corruption in transit).

And finally, the third protocol layer: the network layer. It is the most interesting in the context of X.25 networks, since it is this layer that primarily defines their specific character.

Functionally, this protocol is first of all responsible for routing in the X.25 data network. It also structures the information, in other words, breaks it up into packets.

However, for all the virtues of X.25-based networking technology, it also has its limitations. One of them is the inability to transmit information such as voice and video over X.25 networks. These limitations are overcome in the technology based on the frame relay protocol.

3.9 H.323 protocol

The desire to use the existing structure of IP networks led in 1996 to the H.323 standard (Visual Telephone Systems and Terminal Equipment for Local Area Networks which Provide a Non-Guaranteed Quality of Service). H.323 is now one of the most important standards in this series: an ITU-T recommendation for multimedia applications on computer networks that do not provide guaranteed quality of service (QoS). Such networks include IP and IPX packet-switched networks based on Ethernet, Fast Ethernet and Token Ring.

The H.323 recommendation is an umbrella specification, and the H.323 family of recommendations defines the protocols, methods and network elements required for multimedia communication between two or more users.

The architecture of a system based on the H.323 standard defines the following main network devices: the terminal, the gatekeeper, the gateway and the conference control device. These components are organized into so-called H.323 zones. A zone consists of one gatekeeper and multiple endpoints, with the gatekeeper managing all the endpoints in its zone. A zone can also be the entire network of an IP telephony service provider, or the part of it covering a particular region. H.323 zoning does not depend on the topology of the packet network and can be used to organize an overlay H.323 network on top of a packet network used purely as transport.

The H.323 family includes three main protocols: RAS, by which terminal equipment interacts with the gatekeeper; H.225, the connection control protocol; and H.245, the logical channel control protocol.

Figure 1. H.323 network architecture (figure not reproduced)

3.10 HTTP protocol

The Hypertext Transfer Protocol is the basis of the World Wide Web; in fact, HTTP is largely responsible for the rapid development of the Internet in the mid-1990s. First came the early HTTP clients (such as Mosaic and Netscape), which allowed you to "see" the Web; soon, web servers with useful information began to appear. There are over six million HTTP-based websites on the Internet today. The HTTP protocol runs on the well-known TCP port 80.

The Hypertext Transfer Protocol is an application layer protocol for distributed, collaborative, hypermedia information systems, in use on the WWW since 1990. The first version, known as HTTP/0.9, was a simple protocol for transferring raw data over the Internet. HTTP/1.0 improved on it by allowing a MIME-like message format carrying metadata about the data being transferred and by modifying the request/response semantics. However, HTTP/1.0 did not sufficiently address hierarchical proxies, caching, persistent connections and virtual hosts. In addition, the rapid growth in the number of applications not fully compatible with HTTP/1.0 required a new version of the protocol with additional features that would bring these applications to a single standard.

HTTP is also used as a generalized communication protocol between user agents and proxies / gateways or other Internet services such as SMTP, NNTP, FTP, Gopher and WAIS. Thus, HTTP defines the basics of multimedia resource access for a variety of applications.

An HTTP connection usually takes place over TCP/IP. HTTP can also be implemented on top of any other protocol, on the Internet or on other networks: it only requires reliable data transfer, so any protocol that guarantees reliable delivery can be used. How the structure of HTTP/1.1 requests and responses maps onto the transport units of the protocol in question is a matter left outside the scope of HTTP itself.
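HTTP messages are plain text, which a short sketch makes concrete: below is a minimal HTTP/1.1 GET request built byte for byte (the host and path come from the URL example used elsewhere in this text), followed by the parsing of a canned status line rather than one fetched from the network:

```python
# A minimal HTTP/1.1 GET request: a request line, headers, and a blank
# line. The host name is the illustrative one used in this article.
request = (
    "GET /book.html HTTP/1.1\r\n"
    "Host: lessons-tva.info\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode("ascii")

# Parsing a (canned) response status line from a server:
status_line = b"HTTP/1.1 200 OK\r\n".decode("ascii").strip()
version, code, reason = status_line.split(" ", 2)
print(code, reason)  # 200 OK
```

Sending `request` over a TCP connection to port 80 of a real web server would produce a response beginning with just such a status line, followed by headers and the document body.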

3.11 SNMP

The Simple Network Management Protocol runs over UDP and is intended for use by network management stations. It allows management stations to collect information about the state of the network. The protocol defines the format of the data; their processing and interpretation are left to the management stations or the network manager.

TCP and UDP identify applications by 16-bit port numbers. Servers usually listen on well-known port numbers: for example, in every TCP/IP implementation that includes an FTP server, that server gets TCP port 21; every Telnet server has TCP port 23, and a TFTP (Trivial File Transfer Protocol) server has UDP port 69. Services that may be supported by any TCP/IP implementation are assigned port numbers in the range 1 to 1023, administered by the Internet Assigned Numbers Authority (IANA). An application client is usually indifferent to which transport-layer port number it uses; it only needs that number to be unique on its host. Client port numbers are called ephemeral (i.e., short-lived), since in general a client exists only as long as the user needs the corresponding server (servers, by contrast, run as long as the host they are on is up). Most TCP/IP implementations allocate ephemeral port numbers in the range 1024 to 5000.
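The split between well-known server ports and ephemeral client ports can be seen directly: binding a socket to port 0 asks the operating system to pick a free ephemeral port, just as a client does implicitly when it connects (the table of server ports simply restates numbers from the text):

```python
import socket

# Well-known server ports mentioned above.
WELL_KNOWN = {"ftp": 21, "telnet": 23, "smtp": 25, "http": 80}

# A client normally does not choose its own port: binding to port 0
# asks the operating system for a free ephemeral port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
ephemeral_port = s.getsockname()[1]
s.close()
print(ephemeral_port >= 1024)  # True on common systems
```

The exact ephemeral range varies by operating system (modern systems often use higher ranges than the historical 1024 to 5000), but the principle is the same: the number only has to be unique on the host for the lifetime of the client.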

3.12 IP Protocol

The Internet Protocol is the main network layer protocol and the one that makes internetworking possible; it is used by both transport layer protocols. IP defines the basic unit of data transmission on the Internet, the IP datagram, specifying the exact format of all information passing over a TCP/IP network. IP software performs routing: it chooses the path data will take across the web of physical networks, using special tables to determine the route, with the choice based on the address of the network to which the destination computer is attached. IP routes each data packet separately and guarantees neither reliable delivery nor correct ordering. It maps data directly onto the underlying physical transmission layer and thus achieves highly efficient packet delivery.
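The table-driven route selection described above can be sketched with a toy routing table; the networks and gateway names are invented for the example, but the selection rule (the most specific matching network prefix wins) is the one real routers use:

```python
import ipaddress

# A toy routing table: network prefix -> next hop. A real router does
# the same longest-prefix match over many thousands of entries.
routes = {
    ipaddress.ip_network("195.63.0.0/16"): "gateway-A",
    ipaddress.ip_network("195.63.77.0/24"): "gateway-B",
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",
}

def next_hop(address):
    """Pick the route whose network contains the destination address,
    preferring the longest (most specific) prefix."""
    dest = ipaddress.ip_address(address)
    matching = [net for net in routes if dest in net]
    best = max(matching, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("195.63.77.21"))  # gateway-B
print(next_hop("8.8.8.8"))       # default-gateway
```

Each datagram is looked up independently, which is exactly why IP cannot by itself guarantee that successive packets take the same path or arrive in order.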

In addition to IP, the ICMP and IGMP protocols are also used at the network level. ICMP (Internet Control Message Protocol) is responsible for exchanging error messages and other important information with the network layer on another host or router. IGMP (Internet Group Management Protocol) is used to send IP datagrams to multiple hosts on a network.

At the lowest level - the network interface - special address resolution protocols ARP (Address Resolution Protocol) and RARP (Reverse Address Resolution Protocol) are used. These protocols are used only in certain types of physical networks (Ethernet and Token Ring) to translate network layer addresses to physical network addresses and vice versa.

4. Internet addressing

The main protocol of the Internet is the TCP / IP network protocol. Every computer on a TCP / IP network (connected to the Internet) has its own unique IP address or IP number. Internet addresses can be represented either as a sequence of numbers or as a name constructed according to certain rules. Computers use numeric addresses when sending information, and users use mostly names when working with the Internet.
Numeric addresses on the Internet consist of four numbers, each no greater than 255. When written, the numbers are separated by periods, for example: 195.63.77.21. This numbering scheme allows for more than four billion computers on the network.
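The dotted notation is just a human-readable spelling of a single 32-bit number, which a short sketch makes explicit (using the example address from the text):

```python
import socket
import struct

# Convert the dotted-quad notation to the 32-bit number the network
# actually carries, and back again.
packed = socket.inet_aton("195.63.77.21")          # 4 raw bytes
number = struct.unpack("!I", packed)[0]
print(number)                                       # 3275705621
print(socket.inet_ntoa(struct.pack("!I", number)))  # 195.63.77.21

# Four octets of 0..255 give 256**4 possible addresses:
print(256 ** 4)  # 4294967296, i.e. more than four billion
```

Routers operate on the 32-bit form; the dotted form exists only for people.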

When an individual computer or a local area network connects to the Internet for the first time, a special administering organization assigns it IP numbers.

Initially, the Internet used only IP numbers, but once the number of computers on the network grew past 1000, a method of associating names with IP numbers was adopted, called the Domain Name System (DNS). A DNS server maintains a list of network and computer names together with their corresponding IP numbers.
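Conceptually, such a server maintains a table mapping names to IP numbers, as this sketch shows (the entries are invented for the illustration; in a real Python program the lookup would be done by socket.gethostbyname, which consults the actual DNS):

```python
# A toy version of the table a DNS server maintains; the addresses
# below are illustrative, not the real ones for these names.
dns_table = {
    "tva.jino.ru": "195.63.77.21",
    "www.lessons-tva.info": "195.63.77.22",
}

def resolve(name):
    """Return the IP number for a host name, as a DNS lookup would,
    or None if the name is unknown."""
    return dns_table.get(name)

print(resolve("tva.jino.ru"))  # 195.63.77.21
```

The real DNS differs in scale, not in kind: the table is distributed across many servers, each authoritative for part of the name hierarchy.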

The so-called domain name system is used on the Internet. Each level in such a name is called a domain. A typical domain name has several parts, in a specific order, separated by periods, for example: www.lessons-tva.info or tva.jino.ru.

On the Internet, the domain name system uses the principle of successive qualifications, just like in ordinary postal addresses - country, city, street and house to which the letter should be delivered.

The top-level domain is located in the name to the right, and the lower-level domain is to the left. In our example, the top-level domains info and ru indicate that we are talking about the belonging of the site www.lessons-tva.info to the thematic top-level domain info, and the site tva.jino.ru to the Russian (ru) part of the Internet. But there are many Internet users in Russia, and the next level determines the organization that owns the address. In our case, this is jino.

The Internet address of this company is jino.ru. All computers connected to the Internet in this company are combined into a group with this address. Each company chooses the name of an individual computer or network for itself, and then registers it with the Internet organization that provides the connection.

This name must be unique within the top-level domain. This is followed by the hostname tva, thus the fully qualified third-level domain name: tva.jino.ru. A name can contain any number of domains, but most often names with the number of domains from three to five are used.

The domain system of address generation ensures that there is no longer another computer with the same address on the entire Internet. Any address can be used for lower-level domains, but there is a convention for top-level domains.

In the Internet addressing system, there are domains that represent geographic regions. They have two-letter names, for example: Ukraine – ua; France – fr; Canada – ca; USA – us.

Throughout the 1990s, the naming system described remained unchanged. But by the beginning of this century, the gigantic pace of development of the Internet had practically exhausted the address space within that system. It became particularly "cramped" in the .com, .net and .org domains, in which registration was allowed not only for American sites but for corporate or personal sites from any country in the world. To offload these domains, the Internet Corporation for Assigned Names and Numbers (ICANN) added new top-level domains to the existing set: .biz, .info, .pro, .aero, .coop, .museum, .name. These names are distributed as follows:

.biz - commercial companies and projects;

.info - institutions for which information activities are leading (libraries, mass media);

.pro - sites of certified professionals in such fields of activity as doctors, lawyers, accountants, as well as representatives of other professions in which the personal aspect is of key importance (pro from the words profession, professional);

.aero - companies and persons directly related to aviation;

.coop - corporations using joint capital (from the word cooperative);

.museum - only museums, archives, exhibitions;

.name - personal sites, usually consisting of two parts: first and last name: www.bruce.edmonds.name.

In addition to ICANN's activities, some private companies have expanded the Internet address space in a rather peculiar way: by buying up the domain names of small countries. In this way the domains .cc (Cocos Islands), .tv (Tuvalu), .ws (Samoa), .bz (Belize) and .nu (Niue) passed into private use. Sites in these domains are now registered by anyone, regardless of country or type of activity.

To address individual resources, the Internet uses uniform resource locators, or URLs (Uniform Resource Locator), built on top of domain names. A URL is the address of any resource (document, file) on the Internet; it specifies which protocol to use to access the resource, which server to contact, and which file on that server to request. The general form of a URL is: protocol://host-computer/file-name (for example: http://lessons-tva.info/book.html).
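The three parts of a URL (protocol, host, file) can be pulled apart programmatically; a short sketch using the example URL from the text:

```python
from urllib.parse import urlparse

# Split the example URL into the parts named in the text.
url = urlparse("http://lessons-tva.info/book.html")
print(url.scheme)   # http              (which protocol to use)
print(url.netloc)   # lessons-tva.info  (which host to contact)
print(url.path)     # /book.html        (which file to request)
```

A browser performs exactly this decomposition before it resolves the host name via DNS and sends an HTTP request for the path.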

5. Internet services

Servers are network nodes that service requests from clients: software agents that retrieve information from the network or transmit information to it, and that run under the direct control of users. Clients present information in an understandable and convenient form, while servers store, distribute and manage information and deliver it at clients' request. Each type of Internet service is provided by corresponding servers and can be used with corresponding clients.

5.1 WWW

The WWW (World Wide Web) service provides the presentation and interlinking of an enormous number of hypertext documents, including text, graphics, sound and video, located on servers around the world and connected by links in the documents. The emergence of this service greatly simplified access to information and became one of the main reasons for the explosive growth of the Internet from 1990 onward. The WWW service operates over the HTTP protocol.

To use this service, browser programs are used, the most popular of which are currently Netscape Navigator and Internet Explorer.

“Web browsers” are simply viewing programs, modeled on Mosaic, a free program created in 1993 at the National Center for Supercomputing Applications at the University of Illinois for easy access to the WWW. What can you get using the WWW? Almost everything associated with the notion of “working on the Internet”: from the latest financial news to information about medicine and health, music and literature, pets and houseplants, cooking and cars. You can order air tickets to any part of the world (real, not virtual) and travel brochures, find the software and hardware you need for your PC, play games with distant (and unknown) partners, and follow sports and political events around the world. Finally, with most WWW programs you can also access teleconferences (there are about 10,000 in total), where messages are posted on any topic from astrology to linguistics, and exchange messages by e-mail.

Thanks to WWW browsers, the chaotic jungle of Internet information takes the form of familiar, neatly designed pages of text and photographs, and in some cases even video and sound. Attractive home pages immediately help you understand what information follows. They contain all the necessary headings and subheadings, which can be selected using scroll bars just as on a normal Windows or Macintosh screen. Each keyword is linked to the corresponding information files through hypertext links. And don't let the term “hypertext” scare you: a hypertext link is about the same as a footnote in an encyclopedia article that begins with the words “see also...”. Instead of flipping through the pages of a book, you just click on the required keyword (for convenience, it is highlighted on the screen by color or font), and the required material appears in front of you. It is very convenient that the program lets you return to previously viewed materials or, with a click of the mouse, move on.

5.2 E-mail

E-mail (electronic mail). Using e-mail, you can exchange personal or business messages with any recipient who has an e-mail address.

Your e-mail address is indicated in the connection contract ( [email protected]). The provider's e-mail server, on which a mailbox is set up for you, works like an ordinary post office to which your mail arrives. Your e-mail address is analogous to a rented post-office box: messages you send are forwarded at once to the addressee indicated in the letter, while messages addressed to you wait in your mailbox until you pick them up. You can send e-mail to and receive it from anyone who has an e-mail address. Most messages are sent using the SMTP protocol and retrieved using POP3.

You can use a variety of programs for working with e-mail: specialized ones, such as Eudora, or those built into a Web browser, such as Netscape Navigator.

5.3 Usenet

Usenet is a worldwide discussion club. It consists of a set of newsgroups, whose names are organized hierarchically according to the topics discussed. Messages (“articles” or “messages”) are sent to these conferences by users using special software. After sending, messages are sent to news servers and become available for reading by other users.

You can send a message and view the responses to it, which will appear in the future. Since many people read the same material, reviews begin to accumulate. All messages on the same topic form a “thread” [in Russian, the word “topic” is also used in the same meaning]; thus, although the responses may have been written at different times and mixed with other messages, they still form a coherent discussion. You can subscribe to any conference, view the message titles in it using a news reader, sort messages by topic to make it easier to follow the discussion, add your own messages with comments and ask questions. Newsreaders are used to read and send messages, such as Netscape News, which is built into the Netscape Navigator, or Microsoft's Internet News, which comes with the latest versions of Internet Explorer.

5.4 FTP

FTP is a method for transferring files between computers. Continuing software development and the publication of unique textual sources of information ensure that the world's FTP archives remain a fascinating and ever-changing treasure trove.

You are unlikely to find commercial software in FTP archives, since licensing agreements prohibit its open distribution. But you will find shareware and public-domain software. These are different categories: public-domain software is genuinely free, while for shareware you must pay the author if, after a trial period, you decide to keep and use the program. You will also encounter so-called freeware: its creators retain copyright but allow their programs to be used without any payment.

To browse FTP archives and retrieve the files stored on them, you can use specialized programs such as WS_FTP or CuteFTP, or a WWW browser such as Internet Explorer: browsers contain built-in tools for working with FTP servers.
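A download from an anonymous FTP archive can be sketched with Python's standard `ftplib`. The host and path below are hypothetical, so the network-using function is defined but not called; the small helper that picks ASCII vs. binary transfer mode is pure and runs as-is.

```python
import ftplib

# File types usually transferred in ASCII (text) mode; everything else
# (archives, images, executables) should use binary mode.
TEXT_EXTENSIONS = {".txt", ".html", ".c", ".readme"}

def transfer_mode(filename: str) -> str:
    dot = filename.rfind(".")
    ext = filename[dot:].lower() if dot != -1 else ""
    return "ascii" if ext in TEXT_EXTENSIONS else "binary"

def fetch(host: str, path: str, local_name: str) -> None:
    """Download one file from an anonymous FTP archive. Not executed
    here: it needs network access, and the host/path are placeholders."""
    with ftplib.FTP(host) as ftp:
        ftp.login()  # no arguments = anonymous login
        if transfer_mode(path) == "binary":
            with open(local_name, "wb") as f:
                ftp.retrbinary("RETR " + path, f.write)
        else:
            with open(local_name, "w") as f:
                ftp.retrlines("RETR " + path, lambda line: f.write(line + "\n"))

print(transfer_mode("gzip-1.2.4.tar.gz"), transfer_mode("README.TXT"))
```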

5.5 Telnet

Remote login (remote access) is work on a remote computer in a mode in which your computer emulates a terminal of the remote computer; that is, you can do everything (or almost everything) that you could do from an ordinary terminal of the machine with which you established the remote-access session.

The program that handles remote sessions is called Telnet. Telnet has a set of commands that control the communication session and its parameters. The session is provided by the joint work of the software on the remote computer and on yours: they establish a TCP connection and exchange data over it.

Telnet is included with Windows and is installed together with TCP/IP support.

5.6 Whois

WHOIS (from the English "who is") is an application-level network protocol based on TCP. WHOIS is mainly used to obtain information about domain-name owners, domain registration dates, domain expiration dates, and IP addresses. The service is built on the client-server principle and is used to query the public databases of IP-address and domain-name registrars. Queries are usually made through web forms available on many sites, for example http://netpromoter.ru/whois/ or http://proverim.net/.
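At the protocol level a WHOIS query is very simple: open a TCP connection to the registrar's server, send the domain name followed by CRLF, and read the reply until the server closes the connection. The sketch below runs against a tiny local mock registrar so it is self-contained; a real query would go to port 43 of an actual WHOIS server, and the reply format shown is invented for the example.

```python
import socket
import threading

def whois_query(server: str, domain: str, port: int = 43) -> str:
    """WHOIS over TCP: connect, send the domain followed by CRLF,
    then read until the server closes the connection."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# A minimal mock registrar answering one query, for demonstration only.
def _mock_registrar(listener: socket.socket) -> None:
    conn, _ = listener.accept()
    with conn:
        domain = conn.recv(1024).strip().decode("ascii")
        conn.sendall(f"domain: {domain}\nstatus: ACTIVE\n".encode("ascii"))

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=_mock_registrar, args=(listener,), daemon=True).start()

reply = whois_query("127.0.0.1", "example.com", port=listener.getsockname()[1])
print(reply)
```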

5.7 Messengers

Messengers (from Instant Messenger) are applications or services for instant text messaging, voice communication, and video communication on the Internet (the most popular include ICQ, Skype, and others).

5.8 VoIP

VoIP (Voice over IP), or IP telephony (digital telephony), is a technology that carries voice over packet-switched networks (IP networks). VoIP services make it possible to place Internet calls, including calls to regular phones. The IP-telephony market offers many applications for Internet calls in the following modes: computer to computer, computer to telephone, and telephone to telephone.

5.9 Web Forum

Web forum is a class of web applications for organizing communication between website visitors.

The forum offers a set of discussion sections. Work on the forum consists of users creating topics in sections and then discussing within those topics. A single topic is, in essence, a thematic guestbook.

A web forum is commonly organized as: sections → topics → posts.

Straying from the initial topic of discussion is often prohibited by the forum's rules of conduct. Compliance with the rules is monitored by moderators and administrators: participants who are able to edit, move, and delete other people's messages in a particular section or topic, and to control individual participants' access to them.

Forums can apply very flexible access control to messages. On some forums, reading and posting are available to any casual visitor; on others, prior registration is required (the most common option); both kinds are called open forums. A mixed variant is also used, in which certain topics are writable by all visitors and others only by registered participants. Besides open forums, there are closed ones, access to which is granted individually to each participant by the forum administrators. In practice it is also quite common for some sections of a forum to be publicly accessible while the rest are available only to a narrow circle of participants.

When registering, forum members can create profiles: pages with information about the member. In a profile, a forum member can provide information about himself and customize an avatar and a signature that is automatically appended to his messages. The signature can be static text or contain graphics, including so-called userbars.

Most forums have a private messaging system that allows registered users to communicate individually, similar to email.

Many forums offer the option of attaching votes or polls when creating a new topic. Other forum members can then vote or answer the question posed in the topic header without posting a new message in the topic.

Each individual forum has its own subject, broad enough to support a many-sided discussion within it. Often several forums are gathered in one place, which is also called a forum (in the broad sense).

By the way their set of topics is formed, forums have either a dynamic or a fixed list of topics. In forums with a dynamic list, ordinary members can create new topics within the forum's subject area.

Usually a forum has the ability to search its message base.

A forum differs from a chat in its separation of the topics discussed and in communication that need not happen in real time. This encourages more serious discussion, since it gives respondents more time to think about their answers. Forums are often used for consultations of various kinds and in the work of technical-support services.

Nowadays web forums have almost completely supplanted NNTP-based newsgroups and are one of the most popular ways to discuss issues on the World Wide Web. Forums currently coexist with blogs, and the two forms of online communication are roughly equal in popularity.

Various software products are used to run web forums, often specialized for particular types of forums. Among developers, the slang term "forum engine" has stuck to such software.

6. Search engines on the Internet

The World Wide Web consists of millions of documents with unstructured textual information (as well as graphics, audio, and video). To find what they need, web users often have to wade through hundreds of Web pages (sometimes with little success), spending a great deal of energy, nerves, and money.

The means for finding information on the Internet are search and reference systems. All existing search systems on the Internet can be divided into the following groups:

· Web search systems;

· FTP search engines;

· Gopher archive search engines;

· Usenet search engines;

· directories.

Each search engine indexes server pages in its own way, and its search priorities also differ from those of other systems, so a query for the same keywords and expressions can give different results in each search engine.

6.1 Archie

Archie is a set of software tools that work with special databases. These databases contain constantly updated information about the files that can be accessed through the FTP service. Using the services of the Archie system, you can search for a file by the pattern of its name. In this case, the user will receive a list of files with an exact indication of their storage location on the network, as well as information about the type, creation time and size of files. The Archie information retrieval system can be accessed in a variety of ways, ranging from e-mail queries and Telnet services to the use of graphical Archie clients.
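The core of Archie's service, searching a database of archived files by a filename pattern, can be sketched in a few lines with Python's `fnmatch`. The hosts and paths in the mock listing are hypothetical.

```python
import fnmatch

# A mock database of FTP-archive contents (hypothetical hosts and paths);
# Archie let users search such listings by a filename pattern.
archive = [
    ("ftp.funet.fi", "/pub/gnu/gzip-1.2.4.tar.gz"),
    ("ftp.cdrom.com", "/pub/games/doom.zip"),
    ("ftp.funet.fi", "/pub/unix/editors/emacs-19.34.tar.gz"),
]

def archie_search(pattern: str):
    """Return (host, path) pairs whose file name matches the pattern."""
    return [(host, path) for host, path in archive
            if fnmatch.fnmatch(path.rsplit("/", 1)[-1], pattern)]

print(archie_search("*.tar.gz"))
```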

6.2 Gopher

Gopher is the most widely used Internet search engine, which allows you to find information by keywords and phrases. Working with the Gopher system resembles viewing a table of contents, with the user being asked to go through a series of submenus and select the desired topic. There are over 2000 Gopher systems on the Internet, some of which are highly specialized, and some of which contain more versatile information.

Gopher lets you get information without knowing authors' names and addresses, so the user does not waste time and nerves: he simply tells the Gopher system exactly what he needs, and the system finds the appropriate data. There are more than two thousand Gopher servers, so it is not always easy to find the required information with their help. In case of difficulty, you can use the VERONICA service, which searches over 500 Gopher systems and eliminates the need to browse them manually. Specialized Gopher client programs are no longer used, since modern browsers provide access to Gopher servers.

6.3 WAIS

WAIS is even more powerful than Gopher as it searches for keywords in all text documents. Requests are sent to WAIS in simplified English.

This is significantly easier than formulating them in the language of Boolean algebra, and this makes WAIS more attractive to non-professional users.

When working with WAIS, users do not have to spend a lot of time to find the materials they need.

There are more than 200 WAIS libraries on the Internet. But since information is provided primarily by volunteers from academic organizations, most of the material is in the field of research and computer science.

Universal search services use the usual principle of searching unstructured documents: by keywords.

A document's keyword is a single word or phrase that somehow reflects the content of the document. A universal search service (search engine) is a complex of programs and powerful computers that performs the following functions:

    A special program (a search robot) continuously scans the pages of the World Wide Web, selecting keywords and the addresses of documents in which those words occur, and stores them in an index database. An index is a separate file recording where each word occurs; instead of scanning the documents themselves, the search engine consults the index, which greatly speeds up finding the information you want.

    The web server receives a search request from the user, transforms it and sends it to a special program - a search engine.

    The search engine looks through the index database, makes a list of pages that match the query conditions (more precisely, a list of links to these pages) and returns it to the Web server.

    The web server prepares the results of the query execution in a user-friendly form and transmits them to the client machine.
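The pipeline above can be condensed into a toy inverted index: the "robot" maps each word to the set of page addresses containing it, and a query is answered from that index rather than by rescanning the pages. The page addresses and texts are invented for the example.

```python
from collections import defaultdict

# A toy crawl: page address -> page text (all hypothetical).
pages = {
    "http://example.org/a": "school informatics lessons",
    "http://example.org/b": "school sports news",
    "http://example.org/c": "informatics and programming",
}

# Step 1: the robot builds an inverted index, word -> set of addresses.
index = defaultdict(set)
for address, text in pages.items():
    for word in text.split():
        index[word].add(address)

# Steps 2-4: a query is answered from the index, not by rescanning pages.
def search(word: str):
    return sorted(index.get(word, set()))

print(search("informatics"))
```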

Among the most famous and powerful search engines: Google, Yahoo, Lycos.

Searching for graphic information (including video) remains a largely unsolved problem in computer technology.

Specialized reference services are subject directories that collect more or less structured information about server addresses on a particular topic. Unlike universal index databases, subject directories are compiled by specialists and give the client more rigorous, reliable, and systematized information about the Web.

In addition, many Internet sites have their own search engines (within the site). These are primarily contextual search mechanisms, as well as specialized searches by surname (for example, for people in the computer business), by product (advertising sites), by firm, and so on. Contextual search within the current page is also provided in Internet Explorer.

Some pages on the Internet (for example, search engine pages) are specifically designed to receive and process search queries. Microsoft offers its own search page in Internet Explorer.

Internet Explorer itself does not search: it accepts a request from the user, processes it, and passes it on to the appropriate search engine.

For example, to search with AltaVista, you type the query text into the AltaVista input field and click the "Search" button. How do you compose a query? Search and retrieval of information is based on the apparatus of Boolean algebra, although Internet searches are much less formalized than searches in structured databases.

Let's look at some example queries in AltaVista. The simplest query selects pages on the Internet that contain a given word, for example "Informatics". If the query consists of several words, AltaVista applies the following conventions.

    Several words separated by spaces form a query corresponding to the logical OR operation. For example, for the query school informatics, pages will be selected that contain either "school" or "informatics" (or both words at once). The number of such documents is very large, and they may include pages that have nothing to do with computer science.

    Several words enclosed in quotation marks are treated by the system as a whole. For example, the query "School Informatics" selects documents that contain exactly this string of characters.

Words connected by a "+" (plus) sign correspond to the logical AND operation. For example, the query School+Informatics selects documents that contain both words. Clearly, the number of such documents will be no smaller than the number selected by the phrase query above.
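The three query forms described above (space = OR, "+" = AND, quotation marks = exact phrase) can be captured in a small matcher. This is a sketch of the conventions as stated, not of any real search engine's parser.

```python
def matches(text: str, query: str) -> bool:
    """Evaluate a query using the conventions described above:
    words separated by spaces = OR, words joined by '+' = AND,
    a phrase in double quotes = exact substring match."""
    if query.startswith('"') and query.endswith('"'):
        return query[1:-1].lower() in text.lower()
    words = text.lower().split()
    if "+" in query:
        return all(w.lower() in words for w in query.split("+"))
    return any(w.lower() in words for w in query.split())

doc = "School informatics is taught in many schools"
assert matches(doc, "school informatics")    # OR: either word suffices
assert matches(doc, "school+informatics")    # AND: both words required
assert matches(doc, '"school informatics"')  # exact phrase
assert not matches(doc, '"informatics school"')
print("all query forms behave as described")
```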

7. Instant messaging and messengers

Messaging systems are among the most accessible and in-demand means of communication on the Internet and in corporate and local networks. Messaging systems have their own communication networks, most of which are built on a client-server architecture.

7.1 IRC Service

IRC (Internet Relay Chat, or simply chat) was the first online communication tool, providing a wide variety of channels (topics) for discussion with like-minded people. Chat is real-time text conversation. The service is based on a client-server architecture, so a client application (an IRC client) must be installed on the PC to chat on the Internet. When started, the client program establishes a connection to the selected IRC server. Since the IRC servers of a network are interconnected, it is enough to connect to any one of them. On connecting to an IRC server, the user sees a list of available topics (channels) in which he can communicate.

Initially the IRC service had a single IRC network, which subsequently split into several. These IRC networks are not connected with one another and have their own names (DALnet, IRCnet, UNDERnet, RusNet, WeNet, IrcNet.ru, etc.). Each IRC network has its own subject areas, or channels. IRC clients can be downloaded for Unix-like operating systems, OS/2, Windows, and mobile phones.

For chatting, you can use both IRC clients and Web chats. Web chats are designed to exchange messages on the server (web page) using a browser; in this case, there is no need to install a client application on the PC. A web chat is a web page where you can chat with other visitors in real time.

7.2 Instant Messaging Service

The development of chat led to the instant messaging service (IMS), one of the technologies that provide communication on the Internet. In addition to text messages, an instant messaging service can carry sounds, pictures, video, and files.

This service has its own networks, whose architecture is built on the client-server principle. The IMS client software designed for online chat and instant messaging is called an instant messenger (IM).

As a rule, the exchange networks have a dedicated server (some networks are decentralized) to which messengers connect, and their own communication protocols. Most instant messaging networks use closed, proprietary protocols belonging to a single network, and each such network generally has its own messenger.

There are usually no interconnections between different IMS networks, so a messenger on one network, for example ICQ, cannot communicate with a messenger on the Skype network. To communicate with each other, users must therefore register with the same service and install its messenger.

But there are alternative instant messengers that can work on multiple networks at the same time. For example, the free open source multi-protocol modular client (messenger) Miranda IM (or Trillian, Pidgin) allows you to connect to multiple networks at the same time, which eliminates the need to install a separate messenger for each network.

In addition, as an alternative to proprietary IM protocols, the open protocol Jabber (a family of protocols and technologies, now known as XMPP) was developed; it is used in many messengers (Jabber clients include Psi, Miranda IM, Tkabber, JAJC, Pandion, and others). Jabber ("chatter") is a system for instant messaging and presence between any two Internet subscribers, based on the open XMPP protocol and using the XML format. It is a new-generation communication system.
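The "XML format" mentioned above is literal: an XMPP chat message is an XML stanza. The sketch below builds one with Python's standard XML library; the addresses are hypothetical, and a real client would first negotiate an XML stream and authenticate with the server before sending stanzas.

```python
import xml.etree.ElementTree as ET

# Hypothetical Jabber IDs (JIDs); a real session would be authenticated.
msg = ET.Element("message", {
    "from": "alice@jabber.example.org",
    "to": "bob@jabber.example.org",
    "type": "chat",
})
ET.SubElement(msg, "body").text = "Hello over XMPP!"

# Serialize the stanza as it would travel over the XML stream.
stanza = ET.tostring(msg, encoding="unicode")
print(stanza)
```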

Jabber has no single central server: it is a decentralized, extensible, and open system. Anyone can run their own instant messaging server, register users on it, and interact with other Jabber servers. Jabber is used to organize communication on the Internet and in local and corporate networks.

Modern instant messengers provide users with many useful functions, such as IP telephony, video chat, indication of users' network status, etc. For instant messaging, you can use both a desktop IM client (messenger) and a web version of the client (for example, Google Talk Gadget, JWChat, Meebo, MDC, etc.).

A list of the main functions that modern instant messengers can provide:

· VoIP services: calls to a computer, calls to landline and mobile phones;

· the ability to send SMS;

· file transfer;

· tools for collaboration in real time;

· the ability to chat directly on a web page;

· reminders and alerts;

· storage of communication history for each contact;

· indication of the network status of users (online, away, etc.) entered in the contact list.

The most popular messengers:

ICQ (I Seek You) is a popular program (the most common Internet pager) for real-time communication. Since ICQ is an outdated centralized network with a closed protocol, experts now recommend that users switch from ICQ to Jabber.

Skype is the world's most widespread proprietary messenger. It can call landline and mobile phones and receive calls. Recent versions implement a "Video Call" function, with which users can talk and exchange full-screen video from their web cameras.

Miranda IM is an open-source, multi-protocol instant messenger for the Internet or local area network. Supports instant messaging protocols: ICQ, IRC, Jabber, Google Talk, Skype and others.

The Google Talk client is the desktop IM client (messenger) of the Google Talk service, and the Google Talk Gadget is a web client that works in any browser with the Adobe Flash plugin. Google Talk is Google's instant messaging service, which lets you communicate using voice chat and text messages. Google Talk uses XMPP (Jabber-compatible) and allows you to communicate with other members of the Jabber network.

The Mail.Ru Agent client is an IM client (messenger) that provides text, voice, and video chat. It supports the ICQ protocol, i.e. it is also an ICQ client. A web version of the client is available at http://win.mail.ru/cgi-bin/loginagent.

VoxOx (voxox.com, version 2.0) is a modern and promising open source multi-protocol messenger that supports ICQ, Jabber, MSN, Yahoo! Messenger and others. VoxOx contains many useful features and competes with the world's most widely used messenger Skype.

Conclusion

The possibilities of the Internet are as wide as human imagination allows. Network technology has firmly established itself as the best source of information in the West and is developing rapidly in the countries of the former Soviet Union. For example, last year the Internet grew by 400% in Russia and by only 300% in Ukraine. Today more than 10,000 users are registered in our country, and the number is constantly growing. As Bill Gates says: "The Internet is the engine of technology."

Therefore, it is especially important today to pay attention to this technological perspective, and try to do everything possible to integrate the Internet into the field of education.



The Internet is a global computer network covering the whole world. Today the Internet has about 15 million subscribers in more than 150 countries, and the network grows by 7-10% every month. The Internet forms a kind of core connecting the various information networks belonging to different institutions around the world.

If earlier the network was used exclusively as a medium for transferring files and e-mail messages, today more complex problems of distributed access to resources are being solved. About three years ago, shells appeared that support network search and access to distributed information resources and electronic archives.

The Internet, which once served exclusively research and educational groups whose interests extended to access to supercomputers, is becoming more and more popular in the business world.

Companies are attracted by its speed, cheap global connectivity, ease of collaboration, available software, and the Internet's unique database. They view the global network as a complement to their own local area networks.

With the low cost of services (often just a flat monthly fee for the lines or telephone used), users can access commercial and non-commercial information services in the USA, Canada, Australia, and many European countries. In the free-access archives of the Internet, one can find information on almost every sphere of human activity, from new scientific discoveries to tomorrow's weather forecast.

In addition, the Internet provides unique opportunities for low-cost, reliable, and confidential global communications around the world. It turns out to be very convenient for firms with branches around the world, transnational corporations and management structures. Usually, using the Internet infrastructure for international communication is much cheaper than direct computer communication via satellite or telephone.

E-mail is the most widely used Internet service. Currently approximately 20 million people have an e-mail address. Sending a letter by e-mail is significantly cheaper than sending a regular letter, and an e-mail message reaches the addressee within hours, while a regular letter can take days or even weeks.

The Internet is currently experiencing a boom, thanks largely to active support from the governments of Europe and the United States. The USA allocates about $1-2 billion a year to the creation of new network infrastructure. Research in network communications is also funded by the governments of Great Britain, Sweden, Finland, and Germany.

However, government funding is only a small part of the incoming funds; the "commercialization" of the network is becoming ever more visible (80-90% of funds are expected to come from the private sector).

History of the Internet

In the late 1960s, the Defense Advanced Research Projects Agency (DARPA), commissioned by the US Department of Defense, began a project to create an experimental packet-switching network. This network, called the ARPANET, was originally intended for studying methods of providing reliable communication between different types of computers. Many methods of transmitting data over modems were developed on the ARPANET. At the same time, the TCP/IP protocols were developed: a set of communication protocols that define how computers of different types can communicate with each other.

The ARPANET experiment was so successful that many organizations wanted to join it and use it for daily data transmission, and in 1975 the ARPANET turned from an experimental network into a working one. Responsibility for administering the network was taken over by the Defense Communication Agency (DCA), now called the Defense Information Systems Agency (DISA). But the development of the ARPANET did not stop there: the TCP/IP protocols continued to evolve and improve.

In 1983 the first standard for the TCP/IP protocols was released and included in the Military Standards (MIL STD), and everyone who worked on the network was required to move to the new protocols. To ease this transition, DARPA asked the leaders of Berkeley Software Design to implement the TCP/IP protocols in Berkeley Software Distribution (BSD) UNIX. This was the beginning of the alliance between UNIX and TCP/IP.

After a while TCP/IP was adopted as a conventional, publicly available standard, and the term Internet came into general use. In 1983 MILNET split off from the ARPANET and became part of the Defense Data Network (DDN) of the US Department of Defense. The term Internet came to be used for the combined network: MILNET plus ARPANET. And although the ARPANET ceased to exist in 1991, the Internet lives on, far larger than the original, having connected many networks around the world. Figure 1 illustrates the growth in the number of hosts connected to the Internet, from 4 computers in 1969 to 3.2 million in 1994. A host on the Internet is a computer running a multitasking operating system (Unix, VMS) that supports the TCP/IP protocols and provides users with network services.

What the Internet consists of

This is a rather complex question, and the answer keeps changing. Five years ago the answer was simple: the Internet is all the networks that interact via IP to form a seamless network for their collective users. This includes various federal networks, a collection of regional networks, university networks, and some foreign networks.

Recently there has also been interest in connecting networks that do not use the IP protocol. To provide the clients of those networks with Internet services, methods were developed for connecting such "foreign" networks (for example, BITNET, DECnet, and others) to the Internet. At first these connections, called gateways, merely passed e-mail between the two networks, but some have grown capable of providing other services on an inter-network basis. Are they part of the Internet? Yes and no: it all depends on whether they want to be.

Currently the Internet uses almost all known communication lines, from low-speed telephone lines to high-speed digital satellite channels. The operating systems used on the Internet are equally diverse: most computers on the Internet run Unix or VMS, and special network routers such as NetBlazer or Cisco devices, whose operating systems resemble Unix, are also widespread.

In fact, the Internet consists of many local and global networks belonging to different companies and enterprises, connected by various communication lines. The Internet can be thought of as a mosaic of small networks of different sizes that actively interact with one another, sending files, messages, etc.

How does the Internet know where to send your data? If you were sending a letter, simply dropping it into a mailbox without an envelope, you could not expect the correspondence to reach its destination: the letter must be placed in an envelope, the address written on the envelope, and a stamp attached. Just as the post office follows rules that govern the operation of the postal network, certain rules govern the operation of the Internet. These rules are called protocols. The Internet Protocol (IP) is responsible for addressing, i.e. it ensures that a router knows what to do with your data when it arrives. In our post-office analogy, the Internet Protocol acts as the envelope.

Some address information is provided at the beginning of your message. It gives the network enough information to deliver the data packet.

Internet addresses consist of four numbers, none exceeding 255. When written, the numbers are separated from one another by periods, for example: 192.112.36.5.

The address actually consists of several parts. Since the Internet is a network of networks, the beginning of the address tells routers which network your computer belongs to, while the right-hand part tells that network which computer should receive the packet. It is hard to draw a general boundary between the network sub-address and the computer sub-address: that boundary is established by agreement between neighboring routers. Fortunately, as a user you never have to worry about it; it matters only when a network is being set up. Every computer on the Internet has its own unique address.

Here again the analogy with mail delivery helps. Take the address "50 Kelly Road, Hamden, CT". "Hamden, CT" is like the network address: it gets the envelope to the right post office, the one that knows about the streets in a certain area. "50 Kelly Road" is like the computer address: it identifies a particular mailbox in the area that post office serves. The postal service has done its job by delivering the mail to the correct local office, and that office puts the letter in the appropriate mailbox. Likewise, the Internet has done its job when its routers have sent the data to the appropriate network, and that local network has delivered it to the appropriate computer.
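The network/host split can be demonstrated with Python's standard `ipaddress` module. The addresses below are hypothetical, and the /24 prefix plays the role of the "agreement between routers" on where the network part ends.

```python
import ipaddress

# A host address together with its network mask (hypothetical values).
iface = ipaddress.ip_interface("192.168.5.17/24")

print("network part:", iface.network)  # which network routers deliver to
print("host address:", iface.ip)       # which computer on that network
print("same network as .200:",
      ipaddress.ip_address("192.168.5.200") in iface.network)
```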

For a variety of technical reasons (mostly hardware limitations), information sent over IP networks is broken up into chunks called packets. One packet usually carries from one to 1500 characters of information. This prevents any one user from monopolizing the network and lets everyone count on timely service. It also means that when the network is congested, service degrades somewhat for all users: the network slows down for everyone rather than dying because a few heavy users have monopolized it.

One of the advantages of the Internet is that only the Internet Protocol is sufficient to operate at the basic level. The network will not be very friendly, but if you behave reasonably enough, you will solve your problems. Since your data is in an IP envelope, the network has all the information it needs to move this packet from your computer to its destination. Here, however, several problems arise at once.

First, in most cases, the amount of information sent exceeds 1500 characters. If the post office only accepted postcards, you would naturally be disappointed.

Second, an error may occur. The postal department sometimes loses letters, and the networks sometimes lose packets or damage them in transit. You will find that, unlike post offices, the Internet successfully solves such problems.

Third, the packet delivery sequence can be disrupted. If you have sent two letters one after the other to the same address, then there is no guarantee that they will go along the same route or come in the order they were sent. The same problem exists on the Internet.

Therefore, the next layer of the network will give us the ability to send larger portions of information and take care of eliminating the distortions introduced by the network itself.

To solve the problems mentioned above, the Transmission Control Protocol (TCP) is used, which is often mentioned in conjunction with the IP protocol. What should you do if you want to send a book to someone and the mail only accepts letters? There is only one way out: tear all the pages out of the book, put each one in a separate envelope, and drop all the envelopes into the mailbox. The recipient would have to collect all the pages (assuming not a single letter was lost) and glue them back into a book. These are the tasks that TCP performs.

TCP breaks the information you want to transfer into chunks. Each portion is numbered so that you can check if all the information has been received and put the data in the correct order. To transmit this sequence number over the network, the protocol has its own "envelope" on which the necessary information is "written". A portion of your data is placed in a TCP envelope. The TCP envelope, in turn, is placed in an IP envelope and transmitted to the network.

At the receiving end, the TCP software collects the envelopes, extracts the data from them, and arranges the pieces in the correct order. If any envelopes are missing, the program asks the sender to retransmit them. Once all the information is in the correct order, the data is passed on to the application that uses the TCP service.
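The splitting and reassembly just described can be sketched in a few lines. This is only a model of the idea, not real TCP: each chunk is paired with a sequence number, and sorting by that number restores the original order even if the "network" delivers the chunks shuffled:

```python
import random

CHUNK_SIZE = 1500  # one packet usually carries up to about 1500 characters

def to_segments(data: bytes, size: int = CHUNK_SIZE) -> list[tuple[int, bytes]]:
    """Break data into numbered chunks, like TCP sequence numbering."""
    count = (len(data) + size - 1) // size
    return [(i, data[i * size:(i + 1) * size]) for i in range(count)]

def reassemble(segments: list[tuple[int, bytes]]) -> bytes:
    """Put segments back in order, regardless of arrival order."""
    return b"".join(chunk for _, chunk in sorted(segments))

message = b"x" * 4000             # larger than a single packet
segments = to_segments(message)
random.shuffle(segments)          # the network may reorder packets in transit
assert reassemble(segments) == message
```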

This is, however, a somewhat idealized view of TCP. In real life, packets are not only lost but also altered in transit by short-term failures on telephone lines. TCP solves this problem too. When data is placed in an envelope, a so-called checksum is calculated. The checksum is a number that allows the receiving TCP to detect errors in the packet. Suppose you are transmitting raw digital data in 8-bit chunks, or bytes. The simplest version of a checksum is to add up the values of these bytes and append an extra byte to the end of the chunk containing this sum (or at least the part of it that fits in 8 bits). When the packet arrives at its destination, the receiving TCP performs the same calculation and compares its result with the checksum sent by the sender. If any bytes changed during transfer, the checksums will not match and the error is revealed; the receiving TCP then discards the packet and requests a retransmission. Of course, two errors may compensate for each other, but such cases can be caught by more elaborate calculations.
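The simple 8-bit checksum described above can be written out directly. This is the naive scheme from the text, not the checksum real TCP actually uses:

```python
def checksum8(data: bytes) -> int:
    """Add up all byte values and keep only the part that fits in 8 bits."""
    return sum(data) & 0xFF

packet = b"hello"
sent = packet + bytes([checksum8(packet)])   # append the checksum byte

# On the receiving side: recompute the sum and compare.
received_data, received_sum = sent[:-1], sent[-1]
assert checksum8(received_data) == received_sum   # packet arrived intact

# A single changed byte is detected:
corrupted = b"hellp"
assert checksum8(corrupted) != received_sum
```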

The TCP protocol creates the appearance of a dedicated communication line between two applications, because it ensures that information entering at one end exits at the other. In reality, there is no dedicated channel between the sender and the receiver (other people may use the same routers and network wires to transfer their information in between your packets), but it seems that there is, and in practice this is usually enough.

TCP, however, is not always the best way to use the network. Establishing a TCP connection takes significant overhead and time; if this mechanism is not needed, it is best not to use it. If the data to be sent fits in a single packet and guaranteed delivery is not particularly important, TCP can become a burden.

There is another standard protocol that avoids this overhead. It is called the user datagram protocol (UDP) and is used in some applications. Instead of putting your data in a TCP envelope and putting that envelope in an IP envelope, the application puts the data in a UDP envelope, which is placed in the IP envelope.

UDP is simpler than TCP because this protocol does not worry about missing packets, putting data in the correct order, and other such subtleties. UDP is used by programs that send only short messages and can retransmit the data themselves if a response is delayed. Suppose you are writing a program that looks up phone numbers in an online database. There is no need to establish a TCP connection just to pass 20-30 characters back and forth. You can simply put the name in a UDP packet, wrap it in an IP packet, and send it. The receiving application gets the packet, reads the name, finds the phone number, puts it in another UDP packet, and sends it back. What happens if a packet is lost along the way? That is your program's problem: if there is no response for too long, it sends another request.
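The phone-number lookup just described can be sketched with standard UDP sockets. Everything here is invented for illustration (the directory, the name "fred", the use of the local machine for both sides); the point is only the shape of the exchange: one datagram out, one datagram back, with a timeout in place of TCP's delivery guarantees:

```python
import socket

directory = {b"fred": b"555-1234"}       # toy phone directory

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # let the OS pick a free port
server_addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2.0)                   # if no reply arrives, a real client would resend

# Client: put the name in one UDP packet and send it.
client.sendto(b"fred", server_addr)

# Server: read the name, look up the number, send it back.
request, client_addr = server.recvfrom(1024)
server.sendto(directory.get(request, b"unknown"), client_addr)

# Client: receive the reply.
reply, _ = client.recvfrom(1024)
print(reply.decode())  # 555-1234

client.close()
server.close()
```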

How to make the network friendly

To do this, it is necessary to configure the software for a specific task and use names rather than addresses when accessing computers.

Most users have no interest in the flow of bits between computers, no matter how fast the lines are and no matter how exotic the technology that made it possible is. They want to quickly use this stream of bits for some useful task, be it moving a file, accessing data, or just playing a game. Application programs are pieces of software that meet these needs. Such programs constitute another layer of software built on top of the TCP or UDP service. Application programs provide the user with the means to solve a specific problem.

The range of application programs is wide: from home-grown to proprietary, supplied by large development firms. There are three standard applications on the Internet (remote access, file transfer, and e-mail), as well as other widely used but not standardized programs. Chapters 5-14 show how to use the most common Internet applications.

There is one thing to keep in mind when it comes to application programs: You perceive the application as it appears on your local system. The commands, messages, invitations, etc. that appear on your screen may differ slightly from those that you see in a book or on your friend's screen. Don't worry if the book says "connection refused" and your computer says "Unable to connect to remote host: refused"; This is the same. Do not cling to words, but try to understand the essence of the message. Don't worry if some commands have different names; most applications have a solid enough help subsystem to help you find the command you need.

Numeric addresses - and this became clear very soon - are good for communicating with computers, but for people, names are preferable. It is inconvenient to speak using numeric addresses, and even more difficult to remember them. Therefore, computers on the Internet are given names. All Internet applications allow the use of system names instead of numeric computer addresses.

Of course, the use of names has its drawbacks. First, you need to ensure that the same name is not accidentally assigned to two computers. In addition, it is necessary to ensure that names are converted to numeric addresses, because names are good for people, but computers still prefer numbers. You can give the program a name, but it must have a way to find that name and convert it to an address.

In its infancy, when the Internet was a small community, it was easy to use names. The Network Information Center (NIC) created a dedicated registration service. You sent in the completed form (of course by electronic means) and the NIC entered you into its list of names and addresses. This file, called hosts (list of host computers), was regularly sent to all computers on the network. Simple words were used as names, each of which was necessarily unique. When you specified a name, your computer looked for it in this file and substituted the appropriate address.
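Looking a name up in a hosts-style file amounts to building a simple name-to-address table. The sketch below assumes the common "address, then one or more names" line format, with `#` starting a comment; the addresses and names are invented:

```python
def parse_hosts(text: str) -> dict[str, str]:
    """Build a name-to-address table from hosts-style lines:
    'address name [name ...]', with '#' starting a comment."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if not line:
            continue
        address, *names = line.split()
        for name in names:
            table[name] = address
    return table

sample = """
# host table (addresses and names are invented)
128.174.5.59   uxc
26.0.0.73      sri-nic
"""
table = parse_hosts(sample)
print(table["uxc"])  # 128.174.5.59
```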

As the Internet grew, unfortunately, the size of this file also increased. Significant delays in name registration began to arise, and the search for unique names became more difficult. In addition, it took a lot of network time to send this large file to all the computers indicated in it. It became apparent that such a growth rate required a distributed interactive system. This system is called the "Domain Name System" (DNS).

The domain name system is a method of naming in which responsibility for subsets of names is delegated to different groups of users. Each level in this system is called a domain. Domains are separated from one another by dots, for example: ux.cso.uiuc.edu

A name can contain any number of domains, but more than five are rare. Each subsequent domain in the name (as viewed from left to right) is larger than the previous one. In the ux.cso.uiuc.edu name, the ux element is the name of a real computer with an IP address. (See picture).

The name of this computer is created and supervised by the cso group, which is nothing more than the department in which this computer is located. The cso department is a department of the University of Illinois (uiuc). uiuc is part of the national education group (edu). Thus, the edu domain includes all computers in US educational institutions; domain uiuc.edu - all computers at the University of Illinois, etc.

Each group can create and change all names under its control. If uiuc decides to create a new group and name it ncsa, it need not ask anyone for permission. All it has to do is add the new name to its part of the worldwide database, and sooner or later whoever needs it will find out about this name (ncsa.uiuc.edu). Likewise, cso can buy a new computer, name it, and plug it into the network without asking anyone for permission. If all groups, starting with edu and below, follow the rules and ensure the uniqueness of names, then no two systems on the Internet will have the same name. You can have two computers named fred, but only if they are in different domains (for example, fred.cso.uiuc.edu and fred.ora.com).
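The rule that short names may repeat as long as the full names differ comes down to comparing label sequences from the right. A minimal sketch, using the example names from the text:

```python
def in_domain(host: str, domain: str) -> bool:
    """True if a host name falls under the given domain:
    the domain must match the right-hand labels of the name."""
    return host == domain or host.endswith("." + domain)

# Two machines may both be called fred, but their full names differ:
print(in_domain("fred.cso.uiuc.edu", "uiuc.edu"))  # True
print(in_domain("fred.ora.com", "uiuc.edu"))       # False
print(in_domain("fred.cso.uiuc.edu", "edu"))       # True: edu contains it all
```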

It is easy to find out where domains and names come from in an organization such as a university or an enterprise. But where do "top-level" domains like edu come from? They were created when the domain system was invented. There were originally six organizational top-level domains.

When the Internet became an international network, it became necessary to give other countries the ability to control the names of their systems. For this purpose, a set of two-letter top-level domains was created, one per country. Since ca is the code for Canada, a computer in Canada may have a name such as:

hockey.guelph.ca

The total number of country codes is 300; computer networks exist in about 170 of them.

The plan for expanding the Internet resource naming system was announced by the International Ad Hoc Committee (IAHC) on February 24, 1997. According to the new decisions, today's top-level domains com, net, and org will be supplemented with:

firm - for web business resources;

store - for trading;

web - for organizations related to the regulation of activities on the WWW;

arts - for liberal arts education resources;

rec - games and entertainment;

info - provision of information services;

nom - for personal resources, and for those seeking a niche of their own not covered by this rather short list.

In addition, the IAHC decisions state that 28 designated naming agencies are being established around the world. As stated, the new system should end the monopoly held until now by the single authorized registrar, Network Solutions. All new domains will be allocated by the new agencies, while the existing ones will be jointly administered by Network Solutions and the National Science Foundation until the end of 1998.

Currently, about 85 thousand new names are registered every month. The annual name fee is $50. The new registration agencies will represent seven geographic regions, and lotteries will be held among the applicants for the role of agency in each region. Companies wishing to participate must pay a $20,000 entry fee and carry at least $500,000 in insurance in case they prove unable to cope with the role of a domain name registrar.

Now that you understand how domains are related to each other and how names are created, you can think about how to apply this wonderful system. You use it automatically whenever you give a name to a computer "familiar" with it. You do not need to search for this name manually, nor give a special command to find the desired computer, although you can also do this if you wish. All computers on the Internet can use the domain system, and most of them do.

When you use a name such as ux.cso.uiuc.edu, the computer must translate it into an address. To do this, your computer starts asking the DNS servers (computers) for help, starting from the right side of the name and moving to the left. First, it asks the local DNS servers to find the address. There are three possibilities here:

The local server knows the address because that address is in the part of the worldwide database that the server maintains. For example, if you work at NSTU, then your local server probably has information about all computers of NSTU.

The local server knows the address because someone recently asked about it. When you ask about an address, the DNS server keeps it “close at hand” for a while, in case someone else asks about it later. This greatly improves the efficiency of the system.

The local server does not know the address, but it knows how to determine it.

How does the local server determine the address? Its software knows how to contact a root server, which knows the addresses of the name servers for the top-level domains (the rightmost part of the name, e.g. edu). Your server asks the root server for the address of the computer responsible for the edu domain. Having received that information, it contacts this computer and requests the address of the uiuc server. After that, it contacts the uiuc server and asks for the address of the cso domain server. Finally, from the cso server it receives the address of ux, the computer that was the target of the request.
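The walk just described can be modeled as a chain of lookups, one label per step from right to left. Each "server" here is just a nested dict, and all names and the address are taken from the example in the text purely for illustration; real DNS servers are, of course, separate machines queried over the network:

```python
# A toy model of iterative DNS resolution: each "server" either knows a
# final address or refers the resolver to the next server down.
ROOT = {
    "edu": {                       # the root knows who handles edu
        "uiuc": {                  # the edu server knows who handles uiuc
            "cso": {               # the uiuc server knows who handles cso
                "ux": "128.174.5.59",   # the cso server knows ux itself
            }
        }
    }
}

def resolve(name: str) -> str:
    """Walk the name right to left, asking one 'server' per label."""
    server = ROOT
    for label in reversed(name.split(".")):
        server = server[label]     # each answer points one level deeper
    return server                  # the final answer is the host address

print(resolve("ux.cso.uiuc.edu"))  # 128.174.5.59
```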

Some computers are still configured to use the old-fashioned hosts file. If you work on one of them, you may have to ask its administrator to find the address you need manually (or do it yourself): the administrator will have to enter the name of the target computer in the local hosts file. Hint that it would not hurt to install DNS software on the computer to avoid similar complications in the future.

Internet

What can be done on the Internet is a difficult question. The Internet is not just a network, but a network of networks, each of which can have its own policies and rules. Therefore, legal and ethical standards as well as political considerations should be taken into account here. The relationship of these factors and the degree of their importance are not always the same.

In order to feel quite confident, it is enough to remember a few basic principles. Fortunately, these principles do not restrict the user too much; if you do not go beyond the established limits, you can do whatever you want. If you find yourself in a difficult situation, contact your service provider to determine exactly what you can and cannot do. It is possible that what you want to do is allowed, but it is your responsibility to find out if this is so. Let's take a look at some principles to help you define your boundaries.

Legal regulations

When using the Internet, three legal regulations must be observed:

Much of the Internet is funded by federal subsidies, thereby excluding purely commercial use of the network.

The Internet is an international network. When sending anything, including bits, across the state border, you should be guided by the laws governing export, and not by the legal norms of that state.

When delivering software (or just ideas, for example) from one place to another, you should take into account the regional laws regarding intellectual property and licenses.

Many of the networks on the Internet are funded by federal agencies. According to federal law, the department can spend its budget only on what is within the scope of its activities. For example, the Air Force cannot secretly increase its budget by ordering rockets from NASA. The same laws will apply to the network: if NASA funds the network, then it should only be used for space exploration. As a user, you may not have the slightest idea what networks your packets are traveling on, but it is better that the contents of these packets do not conflict with the activities of the agency that finances this or that network.

In reality, all this is not as scary as it seems. A couple of years ago, Washington realized that having many parallel IP networks (NSFNET, NASA Science Internet, etc., one per federal agency) is a waste of money (a very radical idea). A law was passed to establish NREN, a national research and education network. A portion of the Internet has been dedicated to supporting research and education, a task common to all federal agencies. This means that you can use the services of NREN to conduct research and teaching or to support research and teaching. (More precisely, NREN is a network that has yet to be created; the bill authorizes traffic on existing federal networks, so what exists today would more accurately be called the Interim Interagency NREN.)

The importance of the condition "to support research and education" cannot be overemphasized. This provision admits uses of the network that, at first glance, may not seem to correspond to its purpose. For example, if a firm sells software that is used in research and education, then it has the right to distribute updates and answer questions by e-mail. Such use is considered "support for research and education". At the same time, the company cannot use NREN for commercial tasks such as marketing, accounting, and so on - there is a commercial part of the Internet for that. A list of provisions governing the use of NSFNET is given in Appendix A. These requirements are among the most stringent with respect to commercial use; if your work satisfies them, then it meets the requirements of all other networks as well.

There has been a lot of talk lately about the National Information Infrastructure (NII). This is a voluminous and rather general network project on a national scale. It can equally well be viewed as a long-term development plan for NREN or as an alternative to it. There are many players in this game (network service providers, telephone companies, cable TV companies, and even energy corporations) trying to get the chips to fall on their territory. We will not pay much attention to NII here, since we are considering a real, existing network rather than one that may appear in a few years. It is clear that NII will have a significant impact on the development of computer networks, but it is not yet clear exactly how that impact will manifest itself. All interested parties promise faster access, lower prices, and higher data transfer rates, but, as they say, seeing is believing.

When your organization negotiated an Internet connection, someone had to tell the network service provider whether the connection would be used for research and education, or for commercial purposes. In the first option, your traffic will be directed along subsidized NREN routes, in the second - through private channels. Network access fees for your organization will depend on which option you choose; commercial use of the network usually costs more because it is not subsidized by the state. Only employees of your network administration can tell you whether the solution of business problems through your connection is allowed. Please check this before using your connection for commercial purposes.

Of course, many corporations will connect to the Internet as research and education sites - and this is acceptable, since the motivation for connecting is often scientific research. For example, a seed company may intend to study the properties of soybean seeds in cooperation with a university. By contrast, many corporate legal departments declare their connections commercial. This protects them from legal liability in the future when, for example, an uninformed employee sends commercial data over a research connection.

A number of commercial providers exist on the Internet, such as Advanced Network and Services (ANS), Performance Systems International (PSI), and UUNET. Each of these companies has its own market niche and its own nationwide network for providing commercial services on the Internet. In addition, state and regional networks provide commercial services to their members. There are connections between each of these networks and the federal networks. By using these connections and concluding appropriate billing agreements, all these networks interact with one another in compliance with all laws and regulations.

Did you know that the export of bits is subject to the export restrictions of the U.S. Department of Commerce (DOC)? These notes apply to the United States only; in other countries, servers are subject to different laws. The fact that the Internet is, in essence, a single global network makes it possible to export information without your knowledge. Since I am not a lawyer, I will not go into technical details, but I will try to describe briefly what is needed to comply with the law. If, after reading these notes, you still believe that you risk violating the law, seek competent legal assistance.

Export legislation is based on two provisions:

Exporting anything requires a license.

The export of the service is considered to be roughly equivalent to the export of the components required to provide the service.

The first point is self-explanatory: if you ship, transport, forward a file or send something by e-mail outside the country, then you need to obtain an export license. Fortunately, there is a loophole here - a general license that has a very wide scope. The General License allows the export of anything that does not have explicit export bans and that can be openly discussed in the United States. Thus, anything you may learn at a conference or in a classroom is likely to be covered by a general license, unless this information is secret.

However, the list of items to which the restrictions apply holds many surprises. It may include information available to a student at any university. The export of network program source code and of encrypted information may be prohibited. It often happens that at first some small point is at issue, but then, once the corresponding instructions have been drawn up, it turns out that the restrictions cover a much wider area. For example, during the Gulf War, the Iraqi military network was much more difficult to disable than planned. It turned out that Iraq was using commercial IP routers, which found alternative routes very quickly. Therefore, the export of all routers capable of finding alternative routes was urgently prohibited. It is possible that this story is one of the "network legends": everyone on the Internet talked about this case, but when I tried to verify it, I could not find a single reliable source.

The second point is even simpler. If the export of certain hardware, such as supercomputers, is prohibited, then remote access to this hardware within the country is also prohibited. Therefore, be careful about granting access to "special" resources (such as supercomputers) to foreign users. The nature of these restrictions depends, of course, on a specific foreign country and (as you can judge from the events of recent years) may undergo significant changes.

Analyzing its potential legal liability, the consortium that operates the Bitnet network (CREN) came to the following conclusions. The network operator is liable for illegal exports only if he knew about the violation and did not inform the relevant authorities. The network operator is not responsible for users' actions, and it is not his responsibility to determine whether they comply with the law. Therefore, the network service personnel do not check the contents of the packets you send abroad. But if the operator does find violations of the law in your packets, he is obliged to inform the government authorities.

Another factor to consider when sending something to someone is ownership. The problem is compounded when data crosses national borders. Copyright and patent laws vary widely from country to country. You may find on the net a collection of old texts whose copyright has already expired in the United States; sending these files to England may nevertheless violate UK law. Be sure to find out who owns the rights to what you transmit over the network and, if necessary, do not forget to obtain the appropriate permission.

The laws governing electronic data transmission are not keeping pace with technological progress. If you have a book, magazine or personal letter, then almost any lawyer or librarian can answer your question, whether it can be copied or used in any way. They will inform you if you have the right to do this, or whose permission you need to get for this. Asking the same question about an article in an e-mail newsletter, e-mail message, or file will not get the exact answer. Even if you knew whose permission you need to obtain and received it by e-mail, it is still not clear how you can ensure real information protection using messages received by e-mail. In this part, the legislation is rather vague, and it will be possible to bring it to normal form, apparently, not earlier than in the next decade.

Ownership can be problematic even when using public files. Some software that is publicly available on the Internet requires a vendor license. For example, a workstation vendor may make additions to its operating system accessible via anonymous FTP. You can easily obtain this software, but you must have a software maintenance license to use it legally. The mere fact that a file exists on the network does not mean that by taking it you will not break the law.

Politics and the Internet

Many netizens view the political process as both a blessing and a curse. The blessing is money: subsidies allow many people to get services that were previously unavailable to them. The curse is that users' actions are constantly watched, and someone in Washington may suddenly decide that your actions can be used for political purposes. It is possible that a digitized color image of a naked girl recorded on your disk will one day become the topic of an editorial under the catchy headline "Taxpayer dollars go to the distribution of pornography." A similar case has already taken place: the content of the files was somewhat more explicit than magazine illustrations, and the case jeopardized the funding of the entire NSFNET. This can cause a lot of trouble for those in charge of funding the Internet.

It is important to understand that the Internet has many supporters in the highest echelons of government — including members of the US Congress, the Clinton administration, leading academics and federal leaders. They support the Internet because it benefits the country by empowering the United States to compete with foreign countries in science and trade. Increasing the speed of data exchange helps to accelerate progress in research and education; thanks to the Internet, American scientists and students can find better solutions to technical problems.

As might be expected, there are those in the world of politics who consider these advantages trifling. In their opinion, the millions of dollars going into the network could instead be spent on "pork barrel" projects in their home constituencies ("pork barrel" being US slang for government spending directed at a politician's own district to gain popularity).

The network enjoys the support of a fairly large number of politicians, but this support can hardly be called reliable, and that is a source of possible trouble: any event that acquires political resonance can tip the scales in the other direction.

Network ethics

The network raises many ethical problems, but its ethics differ somewhat from the generally accepted kind. To understand this, consider the notion of "frontier law". When the American West was first being settled, the laws of the United States were interpreted differently west of the Mississippi River than east of it. The network is at the frontier of new technology, so it is fair to apply this term to it. You can venture into it without fear if you know what to expect.

Network ethics is based on two main principles:

Individualism is respected and encouraged.

The network is good and should be protected.

Please note that these rules are very close to the ethics of the pioneers of the West, where individualism and preservation of the way of life were paramount. Consider how these principles are manifested in the Internet.

In an ordinary society, everyone can claim to be an individual, but in many cases an individual is forced to coordinate his interests with the interests of a sufficiently large group of people who share the views of a given individual to a certain extent. This is where the "critical mass" effect comes into play. You may love medieval French poetry, but it is unlikely that you will be able to organize a circle in your city to study it. Most likely, you will not be able to gather enough people who are interested in this subject and agree to meet from time to time to discuss this topic. In order to have at least a minimal opportunity for communication, you will have to join the society of poetry lovers, which brings together people with more common interests, but there is hardly at least one lover of French medieval poetry there. In your city, there may not be other poetic societies, and members of the only one available are constantly discussing bad pseudo-religious poems. Thus, the problem of "critical mass" arises. If you cannot gather a group of like-minded people, your interests suffer. At worst, you can join another, larger group, but this will not be what you need.

On the network, the critical mass is two. You communicate when you want and how you want; it is always convenient, and no coercion is required. Geographic location does not matter: your interlocutor can be located anywhere on the network (almost anywhere in the world). Therefore, creating a group on any topic is entirely possible; even alternative groups can form. Some prefer to "meet" by e-mail, others via teleconferences, still others through open file sharing, and so on. Everyone is free to choose. Since you do not need to join a larger group to reach critical mass, every user is a member of a minority group, and persecution of dissenters is frowned upon. For this reason, no one will say that "this topic should not be discussed on the net." If I allowed myself to attack lovers of French poetry, you would have every right to oppose my favorite conference. Everyone understands that the opportunity to receive information of interest matters to every other network user no less than to himself. However, many Internet users fear, not without reason, that a movement in support of external censorship may emerge, and as a result the Internet will become much less useful.

Of course, individualism is a double-edged sword. Thanks to it, the network is an excellent repository of diverse information and a community of people, but this principle can put your altruism to the test. One can argue about what behavior should be considered acceptable. Since you most often interact with a remote computer, most people are not aware of how you behave while doing so. Those who do know may or may not pay attention. If you connect your computer to the network, you should be aware that many users consider all the files they can access to be their own. They reason roughly as follows: if you were not going to let others use the files, there would be no point in putting them where they can be reached over the network. This point of view, of course, has no legal basis, but then much of what happened in the border regions during the settlement of the West was not supported by law either.

Regular Internet users find it a very valuable tool for both work and play. While access to the Internet is often funded by organizations rather than by the users themselves, netizens are nevertheless committed to protecting this valuable resource. There are two sources of threats to the Internet:

intensive use for unintended purposes;

political pressure.

The NREN (National Research and Education Network) was created for a specific purpose, and a company's commercial connection to the Internet also has a specific purpose. It may be that no one on the spot will harass someone who misuses the connection, but such abuse can be dealt with in other ways. If you briefly use your employer's computer for personal purposes, for example to balance your checkbook, probably no one will notice. Likewise, no one will pay attention to a small waste of network time for unintended purposes. (In fact, misuse can sometimes be viewed in another light: when a student plays cards over the network, it can qualify as part of the learning process, since to get that far the student had to learn a lot about computers and networks.) Problems arise only when a user does something flagrantly unacceptable, such as organizing a nationwide day of playing "multi-user dungeons" online.

Misuse of the network can also take the form of inappropriate use of resources. The network was not created to compensate for a lack of necessary hardware. For example, you cannot use a disk system in the other hemisphere just because your boss did not buy a $300 disk for your computer. Perhaps that disk is needed for very important research, but the cost of providing such a service over the network is extremely high. The network is intended to provide efficient and quick access to dedicated resources, not to be treated as a free public resource.

Regular netizens and service providers are normal people. They enjoy games as much as your neighbor does. Moreover, they are not stupid: they read the news and regularly surf the net. If the quality of service falls for no obvious reason, people try to find out what happened. Having discovered that traffic in some area has increased a hundredfold, they will start looking for the cause, and if it turns out that you are using the network for unintended purposes, you will receive a polite e-mail message asking you to stop. The messages may then become less polite, and finally your network provider will be contacted. The result for you can be a complete loss of network access, or an increase in access fees for your boss (who, I suspect, will not be very happy about it).

Self-restraint in using the network is also very important for political reasons. Any sane person understands that the network cannot exist entirely without abuse and problems. But if these problems are resolved not within the circle of netizens but instead spill onto the pages of newspapers and become the subject of discussion in the US Congress, then everyone loses. Here are some things to avoid when working online:

too frequent and prolonged games;

constant abuse;

malicious, aggressive attitude towards other users and other antisocial actions;

intentionally harming or interfering with the actions of others (for example, releasing something like the Internet Worm, a program that used the Internet to "attack" certain types of computers: having gained unauthorized access to one computer, it used it to "break into" the next. Such programs are similar to computer viruses but are called worms because they do not intentionally damage the computers they infect; the Internet Worm is described in detail in Computer Security Basics by Russell and Gangemi, O'Reilly & Associates);

creating public files of obscene content.

It will be very difficult to get funding for the NREN through Congress if the television program Sixty Minutes airs a report on network abuse the day before the hearing.

Ethics and the private commercial Internet

In the previous sections we talked about the political and social conditions that helped shape the Internet as we know it today. But these conditions are changing: every day the share of Internet funding from the federal budget decreases, while the share of funding from commercial use of the network increases. The government's goal is to get out of the network business and transfer the function of providing services to private capital. The obvious question is: if the government is out of the network business, should I continue to play by its rules? This problem has two aspects: personal and commercial.


Conventionally, the Internet can be thought of as consisting of end nodes (computers, servers), or "hosts", that provide information services, and intermediate nodes, or "gateways" (this term usually refers to network routers), that serve to interconnect the networks that make up the Internet.

Each node has a universal network address that uniquely identifies the device on the network. Each network address reflects a two-level addressing scheme: it includes the address (number) of the network to which the node belongs and the address of the node within that network.

(In fact, a "node" can be structurally included in several networks at once, in which case it has several network addresses, one assigned to each interface that connects the node to the corresponding network; in other words, a network address is assigned to each network card of the device.)
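The two-level split between network number and node number can be illustrated with Python's standard `ipaddress` module. The address and prefix length below are arbitrary example values chosen for the sketch, not ones taken from the text:

```python
import ipaddress

# Hypothetical host address with a /24 prefix: the first 24 bits identify
# the network, the remaining 8 bits identify the node within that network.
iface = ipaddress.ip_interface("192.0.2.45/24")

network = iface.network  # the network part: 192.0.2.0/24
# Node number within the network: subtract the network's base address.
host_number = int(iface.ip) - int(network.network_address)

print(network)      # 192.0.2.0/24
print(host_number)  # 45
```

A multi-homed node, as the parenthetical above notes, would simply have several such interface addresses, one per attached network.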

In terms of how it is built, the Internet, like any telecommunications network, can be divided into:

  • Access networks, which connect subscriber endpoints (client hosts). An access network is, as a rule, branched and multi-level, i.e. it combines subnets of several levels.
  • Backbone networks, which connect separate access networks to each other and carry information between access networks over high-speed channels.
  • So-called information centers, which implement the network's information services (for example, web portals, news portals, search engines, etc.) and auxiliary service functions supporting those services: user authorization systems, databases storing names and passwords, billing systems for calculating payments for services, etc.

From the point of view of territorial division and the belonging of one or another part of the Internet to the organizations that maintain the network's operability and provide services within it, the Internet can be divided into the interconnected networks of Internet providers.

Internet service providers are divided into:

  • Backbone providers, global operators that own their own transport routes covering very large regions, usually at the level of countries and continents.
  • Regional providers, which own networks within a large territorial entity (region).
  • Local providers, which combine networks within a city or district.

Traffic exchange between service providers is based on bilateral commercial agreements, so-called peering agreements. Typically, backbone operators have such agreements with all other backbone operators, while a regional provider concludes an agreement with one of the backbone providers and with several regional providers. Mutual interconnection of providers' equipment is carried out either at particular "points of presence" of the providers or at so-called "traffic exchange points", where the networks of a large number of operators are interconnected. Typically, such an exchange point is provided by a higher-tier provider for lower-tier operators. The networks of each individual provider are combined into autonomous systems, each of which is centrally assigned an individual number. Communication between autonomous systems takes place according to an approved inter-domain routing protocol (in practice, the Border Gateway Protocol, BGP).
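The way providers' agreements knit autonomous systems into routes can be sketched as a graph search. The AS numbers and peering links below are invented for illustration, and the sketch keeps only one aspect of real inter-domain routing: other things being equal, BGP prefers routes that traverse fewer autonomous systems (it also applies many policy rules that are ignored here).

```python
from collections import deque

# Hypothetical peering graph: each autonomous system (AS) number maps to
# the set of ASes it has a peering or transit agreement with.
peerings = {
    64500: {64501, 64502},   # backbone operator
    64501: {64500, 64503},   # regional provider
    64502: {64500, 64503},   # regional provider
    64503: {64501, 64502},   # local provider
}

def shortest_as_path(src, dst):
    """Breadth-first search for a shortest AS-level path between two ASes."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbor in peerings.get(path[-1], ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no agreement chain connects the two ASes

# A local provider reaches the backbone through one of its regional peers,
# e.g. [64503, 64501, 64500]; which regional AS is chosen is arbitrary here.
print(shortest_as_path(64503, 64500))
```

In this toy topology the local provider (64503) has no direct agreement with the backbone, so every route it learns passes through a regional provider, mirroring the tiered agreements described above.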

In addition to Internet providers that provide users with transport services, that is, the transfer of traffic from one network to another, there are organizations that provide services of a different kind.
These include:

  • Internet content providers, or content providers: operators with their own information and reference resources (content) posted on websites.
  • Hosting providers: companies that provide their equipment (servers, premises with technological equipment, and communication channels for publishing information on the Internet) for hosting content created by other organizations and individuals.
  • Content delivery providers: operators that deliver information from websites to access points as close as possible to consumers, thereby increasing access speed.
  • Application service providers: organizations that provide access to large, expensive software products requiring professional support.
  • Narrower specialized service providers, for example billing services: organizations that handle payment of bills via the Internet, etc.

The figure shows the provider networks in general form. Here NAP stands for Network Access Point and POP for Point of Presence.

Internet governance

We can speak only of certain elements of Internet governance and regulation, since participation in the Network is voluntary and it has no single owner or centralized management.

In essence, we are talking about a set of networks that obey some general rules, which are determined by the characteristics of the technology used, government regulation and economic factors.

The Internet is a hierarchical structure in which each network is responsible for its traffic (transmission time), for transferring information to a higher-level network, and for its own funding.

Let's point out the following components of Internet governance and regulation in the world community.

Internal rules of the networks included in the Internet. In practice, the notion of multi-source regulation has led to an Acceptable Use Policy (AUP) for networks with budget support.
