When operating systems appeared. A brief history of the development of Windows operating systems. The purpose of operating systems


Brief history of the development of operating systems

The emergence and main stages of development of operating systems

The first computers were built and found practical application in the 1940s. Initially they were used to solve a single, specialized task: calculating the trajectories of artillery shells for air-defense systems. Because of this narrow application (a single problem), the first computers did not use any operating system. At that time, problems were solved on the computer mainly by the machine's developers themselves, and using the computer was not so much the solution of an applied problem as research work in the field of computing technology.

BIOS - the first step to the creation of operating systems

Soon computers were successfully applied to other tasks: text analysis and complex applied problems from physics. The circle of consumers of computing services expanded somewhat. However, to solve each specific task at that time it was necessary to write from scratch not only the code implementing the solution algorithm, but also the I/O procedures and other procedures for managing the computation. The substantial costs of this approach soon became obvious:
- the code for I/O procedures is usually quite voluminous and hard to debug (it often turned out to be the largest fragment of the program), and in case of an error in an I/O procedure the results of long and expensive computations could easily be lost;
- the need to write a fairly large amount of auxiliary code each time delayed development and increased the labor intensity of writing application programs.
Therefore, to resolve these problems, special I/O systems (BIOS - Basic Input-Output System) were created. Carefully debugged and efficient procedures from the BIOS could easily be used by any new program without spending time and effort on developing and debugging standard data input and output procedures.
Thus, with the advent of the BIOS, software was divided into system and application software. Application software is directly focused on solving useful tasks, while system software is focused solely on supporting the operation and simplifying the development of application software.
However, the BIOS is not yet an operating system, because it does not perform the most important function of any operating system: managing the execution of application programs. In addition, the BIOS does not provide other important operating system functions, such as storing and launching application programs. The BIOS and the libraries of mathematical procedures that appeared at about the same time simply facilitated the process of developing and debugging applications, making them simpler and more reliable. Nevertheless, the creation of the BIOS was the first step toward a full-fledged operating system.

Batch processing system - a prototype of a modern operating system

As electronics and computing machines developed further and their applications expanded, the problem of the insufficiently efficient use of expensive computers quickly became acute.
In the 1950s there were no personal computers yet, and any computer was a very expensive, cumbersome and relatively rare machine. Access to it was allocated according to a special schedule drawn up for the various scientific institutions. At the appointed time the programmer was supposed to come to the machine room, load the task from a deck of punched cards, wait for the computation to complete and print the results.
With a rigid schedule, if the programmer did not manage to complete the computation in the allotted time, he still had to free the machine, since a new task was already scheduled for it. This meant that machine time was wasted: the results were never obtained. If for some reason the computation finished earlier than expected, the machine simply sat idle.
To avoid the loss of processor time inevitable when working on a schedule, the concept of batch processing of tasks was developed; its essence is explained by the following figure (Figure 1).

Figure 1. The structure of a computing system with batch processing

The first batch processing system was designed in the mid-1950s by General Motors for the IBM 701. It was, apparently, the first operating system. The main idea of batch processing is to entrust the loading of programs and the printing of results to low-powered and relatively cheap satellite machines connected to the large (main) machine through high-speed electronic channels. The big computer then only solves the tasks received from a satellite machine, and after a task is completed it transfers the results over the high-speed channel to another satellite machine for printing.
The satellite machines work independently, freeing the central processor from the need to control slow external devices. In this arrangement, the printing of the results of the previous task can take place while the current task is being solved, and at the same time the next task can be read into the memory of a satellite machine. Such an organization of batch processing is known as a simple batch system.
The batch processing systems implemented in the 1950s became the prototype of modern operating systems. For the first time, software was used to manage the execution of application programs.
Note also that the described approach to building the hardware has been fully preserved to the present day. Modern peripheral devices, above all hard disk drives, are capable of transferring large amounts of data without the participation of the central processor. Looking ahead, we point out that it is only thanks to this property of computer hardware that modern multitasking operating systems exist and work efficiently.

Multitasking operating systems

The first multitasking operating systems appeared in the 1960s as a result of the further development of batch processing systems. The main stimulus for their appearance was new hardware capabilities.
First, new, efficient storage media appeared on which it was easy to automate the search for the required data: magnetic tapes, magnetic drums and magnetic disks. This, in turn, changed the structure of application programs: they could now load additional data for calculations, or procedures from standard libraries, during execution.
Note that a simple batch system, having accepted a task, serves it until it is completely finished, which means that while additional data or code is being loaded the processor sits idle; and the cost of processor idle time grows with its performance, since a more productive processor could have done more useful work in the time spent waiting.
Second, the performance of processors increased significantly, and the loss of processor time in simple batch systems became unacceptable.
In this regard, the emergence of multitasking batch systems was a logical step. A prerequisite for creating multitasking systems is sufficient computer memory: for multitasking, the amount of memory must be enough to hold at least two programs at the same time.
The main idea of multitasking is quite obvious: if the current program is suspended while waiting for I/O to complete, the processor switches to another program that is currently ready for execution.
However, the switch to another task must be made in such a way that it remains possible to return to the abandoned task after a while and continue its work from the point where it stopped. To implement this, a special data structure had to be introduced into the operating system that records the current state of each task: the process context. The process context is defined in any modern operating system in such a way that the data in it is sufficient to fully restore the execution of the interrupted task.
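As an illustration, here is a minimal user-space sketch of "save the context, switch away, later resume from the stop point", using the POSIX ucontext calls; this is only the user-level analogue of what a kernel does in privileged mode with its own context structures, not the implementation of any particular operating system.

```c
/* Cooperative "task switching" with the POSIX ucontext API: the state of the
 * suspended task is saved so that it can later continue from its stop point. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];          /* the task gets its own stack */

static void task(void)
{
    printf("task: step 1, yielding the processor\n");
    swapcontext(&task_ctx, &main_ctx);      /* save our context, switch away */
    printf("task: step 2, resumed from the stop point\n");
}

int main(void)
{
    getcontext(&task_ctx);                  /* initialize the task context    */
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;  /* where to return when task ends */
    makecontext(&task_ctx, task, 0);

    swapcontext(&main_ctx, &task_ctx);      /* dispatch the task              */
    printf("main: task is waiting, doing other work, switching back\n");
    swapcontext(&main_ctx, &task_ctx);      /* resume the interrupted task    */
    printf("main: task finished\n");
    return 0;
}
```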
The emergence of multitasking required implementing in the operating system several fundamental subsystems at once, which are present in any modern operating system. Let us list them:
1) the processor management subsystem determines which task should be given the processor and when;
2) the memory management subsystem provides conflict-free use of memory by several programs at once;
3) the resource management subsystem provides conflict-free sharing of computer resources (for example, magnetic disks or shared subroutines) among several programs at once.
Within this course, the implementation of these subsystems in modern operating systems will be considered in detail.
Almost immediately after the appearance of multitasking operating systems it was noticed that multitasking is useful not only for raising processor utilization. For example, on the basis of multitasking one can implement a multi-user mode of operation: several terminals are connected to the computer at the same time, and for the user at each terminal the complete illusion is created that he is working with the machine alone. Before the era of mass personal computers, the multi-user mode was the main mode of operation of almost all computers. Widespread support for the multi-user mode sharply expanded the circle of computer users and made the computer accessible to people of various professions, which ultimately led to the modern computer revolution and the appearance of the PC.
At the same time, depending on the algorithms underlying the processor management subsystem, the operating system, and with it the whole computer, acquires different properties. For example, a multitasking batch system that switches to another task only when the current one cannot continue is able to ensure maximum computer throughput, i.e. to maximize the average number of tasks solved per unit of time; but because of the unpredictability of its response time, a multitasking batch system is completely unsuitable as an interactive system that must respond immediately to user input.
A multitasking system that forcibly preempts a task after a time quantum is ideally suited for interactive use, but does not provide maximum throughput for computational tasks.
When studying the topic "CPU management" within this course, the features of many specific algorithms will be considered and compromise solutions will be shown that are suitable for universal operating systems oriented toward solving a wide range of tasks.
In conclusion, we note that the appearance of multitasking was driven by the desire to load the processor as fully as possible, eliminating its idle time; at present multitasking is an integral quality of almost any modern operating system.

Virtual Memory Operating Systems

The appearance of virtual memory systems in the late 1960s became the last step toward modern operating systems. The emergence of graphical user interfaces and even the support of network interaction were not such revolutionary solutions, although they significantly affected both the development of computer hardware and the development of operating systems themselves.
The impetus for the appearance of virtual memory was the complexity of memory management in multitasking operating systems. The main problems here are as follows:
- Programs, as a rule, require a contiguous area of memory for their placement. When a program finishes, it frees its memory, but this region is far from always suitable for placing a new program: it is either too small, and then a region elsewhere in memory has to be found, or too large, and then after placing the new program an unused fragment remains. During the operation of the system, many such fragments soon accumulate: the total amount of free memory is large, but a new program cannot be placed because there is no sufficiently long contiguous free area. This phenomenon is called memory fragmentation (a small simulation sketch follows this list).
- When several programs reside in shared memory at the same time, erroneous or deliberate actions of any program may disrupt the work of other programs; in addition, the data of some programs may be read or modified by other programs without authorization.
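To make the fragmentation effect concrete, here is a minimal simulation sketch assuming a toy first-fit allocator over 100 memory units; the sizes and the allocation policy are invented purely for illustration. Sixty units end up free in total, yet a 30-unit program cannot be placed, because no contiguous hole is large enough.

```c
/* A tiny simulation of memory fragmentation with a first-fit allocator
 * over a fixed "memory" of 100 units (illustrative numbers only). */
#include <stdio.h>

#define MEM 100
static int mem[MEM];                 /* 0 = free, otherwise the owner's id */

static int alloc(int id, int size)   /* first-fit: find `size` free cells in a row */
{
    for (int start = 0; start + size <= MEM; start++) {
        int ok = 1;
        for (int i = 0; i < size; i++)
            if (mem[start + i] != 0) { ok = 0; break; }
        if (ok) {
            for (int i = 0; i < size; i++) mem[start + i] = id;
            return start;
        }
    }
    return -1;                       /* no contiguous hole big enough */
}

static void release(int id)
{
    for (int i = 0; i < MEM; i++)
        if (mem[i] == id) mem[i] = 0;
}

int main(void)
{
    /* Load five 20-unit programs, then finish programs 1, 3 and 5. */
    for (int id = 1; id <= 5; id++) alloc(id, 20);
    release(1); release(3); release(5);

    int free_total = 0;
    for (int i = 0; i < MEM; i++) if (mem[i] == 0) free_total++;
    printf("free memory in total: %d units\n", free_total);            /* 60 units free */
    printf("placing a 30-unit program: %s\n",
           alloc(6, 30) >= 0 ? "succeeded" : "failed (fragmentation)"); /* fails */
    return 0;
}
```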
As will be shown later in this course, virtual memory not only solves these problems well, but also provides new opportunities for further optimizing the work of the entire computing system.
The decisive prerequisite for the appearance of virtual memory was the swapping mechanism (from the English "to swap" - to exchange).
The idea of swapping is to unload programs temporarily removed from execution from RAM into secondary memory (onto a magnetic disk) and to load them back into RAM when they become ready for further execution. Thus there is a constant exchange of programs between RAM and secondary memory.
Swapping makes it possible to free space in RAM for loading new programs by pushing out into secondary memory those programs that cannot run at the moment. Swapping effectively solves the problems of insufficient RAM and of fragmentation, but it does not solve the problem of protection.
Virtual memory is also based on pushing parts of programs and data out of RAM into secondary memory, but it is much more difficult to implement and requires mandatory support from the processor hardware. The specific mechanisms of virtual memory will be discussed later.
Ultimately, the virtual memory system organizes its own address space for each running program, called a virtual address space. Sections of the virtual address space, at the discretion of the operating system, can be mapped either to sections of RAM or to sections of secondary memory (see Figure 2).


Figure 2. Mapping of the virtual address space

When virtual memory is used, programs cannot erroneously or deliberately access the data of other programs or of the operating system itself: the virtual memory subsystem guarantees data protection. In addition, at any given moment part of the virtual address space can be mapped to secondary memory, i.e. the data of those areas is stored not in RAM but in secondary memory, which solves the problem of insufficient RAM. Finally, areas of the virtual address space may be mapped to arbitrary areas of RAM, and neighboring sections of the virtual address space do not have to be adjacent in RAM, which solves the problem of fragmentation.
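A minimal sketch of the mapping idea follows, with an invented page size and page-table contents: each virtual page either refers to a physical frame in RAM or is marked as residing in secondary memory, and neighboring virtual pages need not be neighbors physically. Real processors use multi-level tables and perform this translation in hardware, with the operating system only filling in the tables.

```c
/* Translating a virtual address through a toy single-level page table. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_BITS 12                     /* 4 KiB pages (illustrative)      */
#define PAGE_SIZE (1u << PAGE_BITS)
#define NPAGES    8                      /* toy address space: 8 pages      */

struct pte {                             /* page-table entry                */
    int      present;                    /* 1 = in RAM, 0 = in secondary memory */
    uint32_t frame;                      /* physical frame number if present    */
};

static struct pte page_table[NPAGES] = {
    {1, 5}, {1, 2}, {0, 0}, {1, 7},      /* pages 0, 1, 3 in RAM; page 2 swapped out */
};

int main(void)
{
    uint32_t vaddr  = 1 * PAGE_SIZE + 0x34;         /* an address inside virtual page 1 */
    uint32_t vpn    = vaddr >> PAGE_BITS;           /* virtual page number              */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);

    if (vpn >= NPAGES || !page_table[vpn].present) {
        printf("page fault: the OS must fetch the page from secondary memory\n");
    } else {
        uint32_t paddr = (page_table[vpn].frame << PAGE_BITS) | offset;
        printf("virtual 0x%x -> physical 0x%x\n", vaddr, paddr);
    }
    return 0;
}
```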
As already mentioned, virtual memory was first used in real operating systems in the late 1960s, but it became widespread only in the 1980s (UNIX, VAX/VMS), and it began to be applied everywhere in personal computers only in the mid-1990s (OS/2, Linux, Windows NT). At present, virtual memory, along with multitasking, is an integral part of almost any modern operating system.

Graphical user interfaces

Since the late 1980s personal computers have become widespread, and many people of various specialties have joined the community of PC users. Many of them had no special computer training but wanted to use a computer in their work, because using a computer gave tangible advantages in their business.
On the other hand, the growing complexity of operating systems and application programs made managing them quite a difficult task even for specialists, and the command-line interface, which by this time had become the standard for operating systems, no longer met practical demands.
Finally, new hardware capabilities appeared: color graphic monitors, high-performance graphics controllers and mouse-type manipulators.
Thus, by the late 1980s all the conditions for a universal transition to the graphical user interface had taken shape: on the one hand, there was a need for a simpler and more convenient mechanism for controlling the computer; on the other hand, the development of hardware made it possible to build such a mechanism.
The main idea of the graphical user interface is as follows:
- the user, depending on the current situation, is offered a choice of one of several alternative options for further action;
- the possible options for user action are presented on the computer screen in the form of text strings (a menu) or schematic pictures (icons);
- to select one of the options for further action, it is enough to align a special mark on the monitor screen (the cursor) with a menu item or an icon and press a predetermined key (usually <space>, <enter> or a mouse button) to inform the system of the choice (a sketch of such a dispatch loop follows this list).
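A minimal sketch of the dispatch mechanism behind such an interface is shown below; the event source is a hard-coded array standing in for the real keyboard and mouse drivers, and the menu layout and action names are invented for illustration.

```c
/* A toy event loop: wait for events, and when a "click" lands on a menu
 * item, dispatch the corresponding action. */
#include <stdio.h>

enum event_type { EV_MOVE, EV_CLICK, EV_QUIT };
struct event { enum event_type type; int x, y; };

struct menu_item { const char *label; int x, y, w, h; void (*action)(void); };

static void open_file(void)  { printf("action: Open selected\n"); }
static void print_file(void) { printf("action: Print selected\n"); }

static struct menu_item menu[] = {
    { "Open",  0, 0, 10, 1, open_file  },
    { "Print", 0, 1, 10, 1, print_file },
};

int main(void)
{
    /* Simulated input: move the cursor, click on "Print", then quit. */
    struct event events[] = { {EV_MOVE, 3, 1}, {EV_CLICK, 3, 1}, {EV_QUIT, 0, 0} };

    for (size_t i = 0; i < sizeof events / sizeof events[0]; i++) {
        struct event e = events[i];
        if (e.type == EV_QUIT) break;
        if (e.type != EV_CLICK) continue;
        for (size_t m = 0; m < sizeof menu / sizeof menu[0]; m++)
            if (e.x >= menu[m].x && e.x < menu[m].x + menu[m].w &&
                e.y >= menu[m].y && e.y < menu[m].y + menu[m].h)
                menu[m].action();            /* the cursor is over this item */
    }
    return 0;
}
```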
The first graphical interface was developed in 1981 at Xerox. It is said that a visit by Microsoft's head Bill Gates to Xerox and his acquaintance with its developments in the field of graphical user interfaces prompted Microsoft to create graphical user interfaces of its own.
At present the most advanced graphical interfaces apparently belong to the operating systems of the Windows family; these graphical interfaces serve as de facto standards for graphical user interfaces.
The graphical interface turned out to be so simple and intuitive that computers are now effectively used in their work by people who have no idea about the architecture of the computer itself, the operating system or the application program.
Ultimately, the appearance of graphical user interfaces in operating systems and application programs has had a tremendous impact on the computerization of modern society.

Built-in network support

Built-in network support first appeared in general-purpose operating systems in the mid-1990s, and initially it provided only access to remote files located on the disks of another computer. At first, network support was needed only in small offices so that several computers could work together on one document.
However, the development of the Internet quickly led to the need to embed network support even in operating systems for home computers. It is also interesting to note that the constant decline in the cost of home computers in recent years has brought home networks to life, where several computers in one family share a common printer, scanner or other equipment.
The highest degree of integration with the network is found in network operating systems, which combine the resources of all the computers in a network into a shared network resource available to any computer of the network. Reasonable use of a network operating system makes it possible to solve complex search (enumeration) or optimization problems when there is a sufficiently large number of computers, each of which separately is not able to solve the problem in an acceptable time.

History of the most common operating systems

The UNIX operating system

The UNIX operating system is the first modern operating system. The technical solutions embodied even in the very first versions of UNIX later became standard solutions for many later operating systems, and not only for the UNIX family. Many algorithms laid down in the UNIX resource management subsystems are still among the best and are replicated in various operating systems.
Let us consider the history of the emergence and development of UNIX in more detail.

The MULTICS operating system project

MIT, Bell Labs and General Electric participated jointly in the MULTICS project in the period 1965-1969. The goal of the MULTICS project was to create a new multi-user, multitasking, interactive operating system combining ease of use with a powerful and efficient resource management system. The following technical solutions formed the basis of MULTICS:
- virtual memory with a segment-page organization, controlling access rights (such as read or execute) for each segment;
- a centralized file system that organizes data, even data located on different physical devices, as a single tree-like structure of directories and files;
- mapping of file contents into the virtual address space of a process using the virtual memory management mechanisms (see the sketch below).
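The same idea survives in modern systems. A minimal sketch using the POSIX mmap call (the modern analogue of this MULTICS mechanism, not the MULTICS interface itself) maps a file into the address space of the process so that its bytes can be read like ordinary memory; the file name here is arbitrary.

```c
/* Map a file into the process address space and read it as memory. */
#include <stdio.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);       /* any existing, non-empty file will do */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* The file's contents now appear as ordinary bytes at address p. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    fwrite(p, 1, st.st_size, stdout);               /* read the file through memory */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```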
All these solutions are also characteristic of modern operating systems. However, the MULTICS project was not completed: Bell Labs' management decided to leave the project, considering its further financing inexpedient, since the large funds already invested had not brought a return.
Despite its early termination, the MULTICS project identified basic principles of resource management and operating system architecture that are successfully used to this day, and the specialists who participated in the project gained invaluable experience. Among the participants of the MULTICS project were Ken Thompson and Dennis Ritchie, the future authors of the first version of UNIX.

The emergence of the UNIX operating system

After the termination of the MULTICS project, Ken Thompson, Dennis Ritchie and several other Bell Labs employees continued research work in the field of operating systems, and soon proposed the idea of an improved file system. By a happy coincidence, Bell Labs was experiencing an acute need for convenient and efficient documentation tools, and the new file system could be useful there.
In 1969 Ken Thompson implemented on a PDP-7 machine an operating system that included the new file system, as well as special process and memory management facilities that allowed two users to work on a single PDP-7 at once in time-sharing mode. The first users of the new operating system were employees of the patent department of Bell Labs.
Brian Kernighan proposed naming the new system Unics - Uniplexed Information and Computing System. The developers liked the name, partly because it resembled MULTICS. Soon the name began to be written as UNIX - pronounced the same, but one letter shorter. This name has survived to the present day.
In 1971, after UNIX was ported to the PDP-11, the first edition of the documentation was released, and the new operating system appeared officially.
The first edition of UNIX was written in assembler, which created certain difficulties when porting the operating system to other platforms, so for work on the second edition of UNIX Ken Thompson developed his own programming language, B. The second edition was published in 1972 and contained software pipes for establishing interaction between programs running simultaneously on the computer.
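Pipes remain a core UNIX mechanism to this day. The following minimal sketch uses the modern POSIX pipe() and fork() calls purely as an illustration of the mechanism (it is not code from the 1972 system): a parent process sends a line of text to its child through a pipe. The same mechanism underlies shell command lines such as ls | wc -l.

```c
/* Two simultaneously running processes exchanging data through a pipe. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                         /* fd[0] - read end, fd[1] - write end */
    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                    /* child: reads from the pipe */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
        return 0;
    }

    /* parent: writes into the pipe */
    close(fd[0]);
    const char *msg = "hello through a pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}
```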
The emergence of an operating system written in something other than assembler was a revolutionary step in system programming, but the B language contained a number of restrictions that held back its application. Therefore, in 1973 Dennis Ritchie developed the C language, and the operating system was rewritten in the new language.
In 1975 the first commercial version of UNIX, known as UNIX V6, was released, and UNIX began its triumphal march around the world.

The main stages of UNIX development

1976. A group of students and professors at the University of California, Berkeley, became seriously involved with the UNIX system. Subsequently Berkeley founded its own branch of UNIX - BSD UNIX (Berkeley Software Distribution). It was in the BSD branch that such well-known UNIX components as the vi text editor, the TCP/IP protocol stack and the paging mechanism in the virtual memory management system first appeared.
1977. The first experience of porting UNIX to another hardware platform (different from the PDP-11). At the University of Wollongong in Australia, Professor Juris Reinfelds' group partially ported UNIX to a 32-bit machine.
1978. Thompson and Ritchie at Bell Labs carried out a complete port of UNIX to a 32-bit machine. The port was accompanied by significant changes in the organization of the system, which simplified subsequent ports of UNIX to other platforms. At the same time the C language was extended almost to its current state.
1978. A special unit, the UNIX Support Group (USG), was created at Bell Labs specifically to support UNIX.
1982. USG released UNIX System III, which accumulated the best solutions from the various versions of UNIX known by that time. Named pipes were introduced for the first time.
1983. UNIX System V was released. Semaphores, shared memory facilities and message queues were introduced for the first time, and data caching was used to improve performance.
1984. USG was transformed into the UNIX System Development Laboratories (USDL). UNIX System V Release 2 (SVR2) was released. The system implemented file locking and copy-on-write of shared memory pages.
1986. The appearance of a graphical interface for UNIX-like operating systems: the X Window System.
1987. USDL released UNIX System V Release 3 (SVR3). Modern means of interprocess communication, remote file sharing and signal handling were introduced for the first time.
1989. UNIX System V Release 4 (SVR4) was released. For the first time UNIX was implemented on the basis of the microkernel concept. Support appeared for real-time processes and lightweight processes.

Linux operating system

The Linux operating system is currently going through a period of rapid development. Although it is a young operating system, just over ten years old, it has already won the recognition of many thousands of users.
At the origins of the Linux operating system stood Linus Torvalds, at that time a university student, who at the end of 1991 published the small Linux operating system he had developed and invited everyone to take part in its development. As a result, many talented programmers joined the project, and through the joint efforts of a large number of people interacting over the Internet a very capable operating system was developed.
Linux was based on some solutions from BSD UNIX 4.2, and Linux is therefore usually regarded as an independent branch of the UNIX-like operating systems.
Linux is currently developed within the Open Source model: its source code is open and available to everyone. Anyone can develop and submit changes or additions to Linux, and a Linux installation can be obtained free of charge over the Internet.
By now Linux has also split into several independent branches, which still have much in common but differ in the implementation of some components, both in the system kernel and in various utilities.
The Linux operating system is now considered by many to be a serious alternative to the Windows operating systems. Linux works stably and provides high performance. The only thing that still holds back the spread of Linux is the insufficient number of office application programs, such as word processors or spreadsheets. Recently, however, the number of such programs has been growing steadily, and the quality of their user interfaces is approaching what Windows users are accustomed to.
Another problem of Linux is that it usually lags behind in supporting the latest hardware, but this too has its explanation. Hardware developers always provide the leading manufacturers of operating systems with information about their devices even before those devices appear on the market; therefore, for example, Windows usually supports new hardware as soon as it reaches the market. The authority of Linux among hardware developers is steadily growing, so one can hope that the problem of hardware support will soon be solved.

Windows operating system

The Windows family is currently the most widespread family of operating systems for personal computers. All these operating systems have a very similar (and very polished!) graphical user interface, but differ significantly in their internal structure.
Within the Windows family, the Windows 95/98/ME operating systems represent the branch of consumer operating systems oriented primarily at home use, while the Windows XP system is focused primarily on the 64-bit platform and in its 32-bit implementation differs from Windows 2000 mainly in the user interface.
The modern Windows 2000 operating system is a typical multitasking operating system that supports virtual memory, a file system, networking, a graphical user interface and multimedia tools. It descends directly from Windows NT and has practically nothing in common with the MS-DOS operating system that was widespread about a decade ago. Nevertheless, Microsoft's operating systems developed step by step, and it is most logical to begin their history with DOS.
1983. The MS-DOS 2.0 operating system is released, including support for hard disk drives, a file system with a hierarchical structure of file names, and loadable device drivers. Subsequently, all versions of Windows up to Windows NT worked as a superstructure over DOS version 2.0 or later, using its file system and its system functions for working with the computer hardware.
1985. The first version of Windows, Windows 1.01, is released. At this point Windows is not yet a full-fledged operating system and requires the DOS 2.0 operating system to run. Windows 1.01 supports only non-overlapping windows and allows users to switch between programs without restarting them. By the time Windows 1.01 appeared, several graphical shells for DOS were already on the market, but none of them, Windows included, enjoyed much popularity because of the lack of programs. In addition, working in non-overlapping windows is inconvenient.
1987. Windows 2.0 is released, supporting overlapping windows. Simultaneously with the release of Windows 2.0, the Microsoft Excel spreadsheet and the Word 1.0 word processor appear on the market - at that time the most user-friendly programs for Windows. Thanks to the convenient graphical interface and useful application programs, Windows 2.0 becomes popular: a million copies are sold within half a year.
1988. Windows 2.1 is released, supporting extended memory on the 80286 processor and multitasking on the 80386 processor. For this version a hard disk drive becomes mandatory (previously a floppy disk was sufficient).
1990. Windows 3.0 is released. It runs in the protected mode of the processor and supports swapping of programs and data on the basis of memory block handles. While the data of a certain memory block is not needed, the system may, at its discretion, move the block in memory and even write its contents out to disk. When a program needs that data, it must tell the system so, passing it the memory block handle that identifies the desired block (the handle is returned by the system when the block is allocated). Having received such a request, the system locks the block in memory and returns to the application a pointer to the beginning of the block. The system may not move this memory block again until the application informs it that the block's data is no longer needed. Starting with Windows 3.0, MS-DOS programs can run in a window (a sketch of this handle-based discipline follows).
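A minimal sketch of the handle-based discipline described above, using the GlobalAlloc family of Windows calls (these functions still exist in the Win32 API; the block size and its use here are purely illustrative):

```c
/* Moveable memory: allocate by handle, lock to get a pointer, unlock so the
 * system may move the block again, then free it. */
#include <windows.h>
#include <string.h>

int main(void)
{
    /* Ask for a moveable block: we get back a handle, not an address. */
    HGLOBAL hMem = GlobalAlloc(GMEM_MOVEABLE, 256);
    if (hMem == NULL) return 1;

    /* Lock the handle: the system pins the block and gives us a pointer. */
    char *p = (char *)GlobalLock(hMem);
    if (p != NULL) {
        strcpy(p, "data in a moveable block");
        GlobalUnlock(hMem);        /* unpin: the system may move it again */
    }

    GlobalFree(hMem);              /* release the block */
    return 0;
}
```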
1992. Windows 3.1 is released; it is simply a further improvement of Windows 3.0, but it is the first version of Windows to become widespread in Russia. Soon Windows 3.1 becomes the most popular system in the US and holds the lead until 1997.
1993. Windows 3.11 is released, supplemented with network support (e-mail, file sharing, workgroups).
1993. The Windows NT operating system (NT - New Technology) is released, the first full-fledged operating system of the Windows family that does not require MS-DOS as a basis for its work. Windows NT requires at least an 80386 processor and implements full virtual memory, preemptive multitasking and a new file system. Starting with Windows NT, the consumer and professional branches diverge.
1995. The Windows 95 operating system is released. A further development of Windows 3.11, it becomes the first consumer version of Windows that does not require DOS for its work. Windows 95 introduces a new graphical user interface that is very convenient and intuitive and puts Windows in first place in the world for ease of use and quality of the user interface.
1996. Windows NT 4 is released. It is a further development of Windows NT and receives the Windows 95 user interface. Within a short time Windows NT 4 becomes one of the most popular operating systems for professional work.
2000. The Windows 2000 operating system is released. It largely inherits the internal architecture of Windows NT, but introduces a number of additional services, for example support for distributed computing.
2000. The Windows ME operating system is released, a further development of Windows 95/98. However, it is announced that it will be the last consumer version of Windows: the consumer branch, separated in 1993, merges again with the professional branch, and development continues in the unified Windows XP line.
2001. The Windows XP operating system is released, uniting the consumer and professional branches in a single line.
2006. The Windows Vista operating system appears.

Operating systems

The idea of the computer was proposed by the English mathematician Charles Babbage in the middle of the nineteenth century. His mechanical "analytical engine" never truly worked, because the technology of that time did not meet the requirements for manufacturing the necessary precision-mechanics parts. Naturally, there was no question of any operating system for this "computer". The real birth of digital computing machines came shortly after the end of World War II. In the mid-1940s the first vacuum-tube computing devices were created. At that time the same group of people participated in designing, operating and programming the computing machine. It was research work in the field of computing technology rather than the use of computers as a tool for solving practical tasks from other applied areas. Programming was carried out exclusively in machine language. There was no system software other than libraries of mathematical and service subroutines, which a programmer could use so as not to write code every time for computing the value of some mathematical function or for controlling a standard I/O device. Operating systems had still not appeared; all the tasks of organizing the computational process were solved by hand by each programmer from the control panel, a primitive I/O device consisting of buttons, switches and indicators.

From the mid-1950s a new period in the development of computing technology began, associated with the appearance of a new technical base. The speed of processors grew, and the volume of main and external memory increased. Computers became more reliable; they could now work continuously long enough to be entrusted with genuinely important practical tasks. However, the execution of each program involved a large amount of auxiliary work (loading, launching, obtaining the resulting program in machine code, and so on), so to organize the effective sharing of the computing center the position of operator was introduced: operators professionally organized the computing process for all users.

But no matter how quickly and reliably the operators worked, they could not compete in speed with the computer's devices. And since the processor was a very expensive device, the low efficiency of its use meant low efficiency of the computer as a whole. To solve this problem, the first batch processing systems were developed, which automated the whole sequence of operator actions in organizing the computing process. Early batch processing systems were the prototype of modern operating systems; they became the first system programs designed not for processing data but for controlling the computing process.

Along with the implementation of batch processing systems, a formalized job control language was developed, with which the programmer told the system and the operator what actions he wanted to perform on the computing machine and in what sequence.

Early batch processing systems significantly reduced the time spent on auxiliary actions in organizing the computing process, and thus another step was taken toward increasing the efficiency of computer use. However, programmers - the users - lost direct access to the computer, which reduced the efficiency of their own work: making any correction took considerably more time than with interactive work at the console.

The emergence of multiprogram operating systems for mainframes

The next important period in the development of operating systems is 1965-1975. At this time the technical base of computing machines moved from separate semiconductor elements, such as transistors, to integrated circuits, which opened the way to the next generation of computers. The greater functionality of integrated circuits made it possible to implement complex computer architectures in practice, such as the IBM/360. In this period almost all the basic mechanisms inherent in modern operating systems were implemented: multiprogramming, multiprocessing, support for the multi-terminal multi-user mode, virtual memory, file systems, access control and networking. In these years the flowering of system programming began: from a branch of applied mathematics of interest to a narrow circle of specialists, system programming turned into an industry with a direct impact on the practical activity of millions of people.

The revolutionary event of this stage was the industrial implementation of multiprogramming. Given the sharply increased capabilities of the computer for processing and storing data, executing only one program at any given moment turned out to be extremely inefficient. The solution was multiprogramming: a way of organizing the computing process in which several programs reside in the computer's memory at the same time and execute alternately on one processor. These improvements significantly increased the efficiency of the computing system: the computer could now be used almost constantly, rather than less than half of the time, as before. Multiprogramming was implemented in two variants: batch processing systems and time-sharing systems.

Multiprogram batch processing systems, like their single-program predecessors, aimed to ensure maximum loading of the computer hardware, but they solved this task more efficiently. As a result, a balanced loading of all the computer's devices was achieved, and consequently the number of tasks solved per unit of time increased. In multiprogram batch processing systems, however, the user was still deprived of the opportunity to interact with his programs. In order to return to users at least partially the feeling of direct interaction with the computer, another variant of multiprogram systems was developed: time-sharing systems. This variant is designed for multi-terminal systems, where each user works at his own terminal. Among the first time-sharing operating systems, developed in the mid-1960s, were TSS/360 (IBM), CTSS and MULTICS (Massachusetts Institute of Technology together with Bell Labs and General Electric). The variant of multiprogramming used in time-sharing systems was aimed at creating, for each individual user, the illusion of sole ownership of the computing machine by periodically allocating to each program its share of processor time. In time-sharing systems the efficiency of equipment use is lower than in batch processing systems, which was the price paid for user convenience. The multi-terminal mode was used not only in time-sharing systems but also in batch processing systems; then not only the operator but all users could submit their jobs and manage their execution from their own terminals.
Such operating systems were called remote job entry systems. They retained the centralized nature of data processing and were, to some extent, a prototype of modern networks, and the corresponding system software a prototype of network operating systems.

By this time a significant change in the distribution of functions between hardware and software can be noted. Operating systems became essential elements of computers, playing the role of a "continuation" of the hardware. The implementation of multiprogramming required very important changes in the computer hardware, directly aimed at supporting the new method of organizing the computing process. When the computer's resources are shared among programs, it must be possible to quickly switch the processor from one program to another and to reliably protect the code and data of one program from unintentional or deliberate damage by another program. Processors acquired privileged and user operating modes, special registers for fast switching from one program to another, memory protection facilities and a developed interrupt system. Hardware support for operating systems has since become an integral property of almost all computer systems, including personal computers.

Another important trend of this period was the creation of families of software-compatible machines and operating systems for them. Examples of families of software-compatible machines built on integrated circuits are the IBM/360 and IBM/370 machines (the Soviet analogues of these families were the ES series machines) and the PDP-11 (Soviet analogues: SM-3, SM-4, SM-1420). Soon the idea of software-compatible machines became generally accepted. Software compatibility also required compatibility of operating systems. However, such compatibility implies the ability to work on large and small computing systems, with a large number of diverse peripherals, in the commercial field and in the field of scientific research. Operating systems built with the intention of satisfying all these contradictory requirements turned out to be extremely complex. They consisted of many millions of lines of assembler written by thousands of programmers and contained thousands of errors, causing an endless stream of corrections. The operating systems of this generation were very expensive: for example, the development of OS/360, whose code amounted to 8 MB, cost IBM 80 million dollars. Yet despite their unwieldy size and many problems, OS/360 and other similar operating systems of this generation did satisfy most consumer requirements. During this decade a huge step forward was made and a solid foundation was laid for the creation of modern operating systems.

Development of operating systems in the 1980s

The most important events of this decade include the development of the TCP/IP stack, the growth of the Internet, the standardization of local network technologies, and the appearance of personal computers and operating systems for them.

A working version of the TCP/IP protocol stack was created in the late 1970s. This stack was a set of common protocols for a heterogeneous computing environment and was intended to link the experimental ARPANET network with other "satellite" networks. In 1983 the TCP/IP protocol stack was adopted by the US Department of Defense as a military standard. The transition of ARPANET computers to the TCP/IP stack was accelerated by its implementation for the BSD UNIX operating system. From that time the joint existence of UNIX and the TCP/IP protocols began, and almost all of the numerous versions of UNIX became networked. The implementation of the TCP/IP protocols in ARPANET gave that network all the main features that distinguish the modern Internet. In 1983 the ARPANET network was divided into two parts: MILNET, supporting the US military, and a new ARPANET; the name Internet came to be used to designate the combined ARPANET and MILNET networks. The Internet became an excellent testing ground for many network operating systems, making it possible to check in real conditions their ability to interoperate, their scalability and their ability to work under the extreme load created by hundreds and thousands of users. The TCP/IP protocol stack also enjoyed an enviable fate. Independence from manufacturers, flexibility and efficiency proven by successful operation on the Internet, together with the openness and availability of its standards, made TCP/IP not only the main transport mechanism of the Internet but also the main stack of most network operating systems.

The whole decade was marked by the constant appearance of new, increasingly capable versions of UNIX. Among them were proprietary versions of UNIX - SunOS, HP-UX, IRIX, AIX and many others - in which computer manufacturers adapted the kernel code and system utilities to their hardware. The variety of versions gave rise to the problem of their compatibility, which various organizations periodically tried to solve. As a result, the POSIX and XPG standards were adopted, defining OS interfaces for applications, and a special division of AT&T released several versions of UNIX System III and UNIX System V, intended to consolidate developers at the level of the kernel code.

The beginning of the 1980s is associated with another event significant for the history of operating systems: the appearance of personal computers. From the point of view of architecture, personal computers did not differ from the mini-computer class of the PDP-11 type, but their cost was significantly lower. If the mini-computer allowed an enterprise or a university to have its own computing machine, the personal computer gave this opportunity to an individual. Computers came to be widely used by non-specialists, which demanded the development of "friendly" software, and providing these "friendly" functions became the direct responsibility of operating systems. Personal computers also served as a powerful catalyst for the rapid growth of local networks, creating for them an excellent material basis in the form of dozens and hundreds of computers belonging to one enterprise and located within one building.
As a result, support for network functions became a necessity for personal computers. However, neither a friendly interface nor network functions appeared in the operating systems of personal computers immediately. The first version of the most popular operating system of the early personal computer era, Microsoft's MS-DOS, lacked these capabilities. It was a single-program, single-user OS with a command-line interface, able to start from a floppy disk. Its main tasks were to manage files located on floppy and hard disks in a UNIX-like hierarchical file system and to launch programs one after another. MS-DOS was not protected from user programs, since the Intel 8088 processor did not support a privileged mode. The developers of the first personal computers believed that with individual use of the computer and the limited capabilities of the hardware there was no point in supporting multiprogramming, so the processor provided neither a privileged mode nor other mechanisms for supporting multiprogram systems.

The functions missing from MS-DOS and similar operating systems were compensated for by external programs that gave the user a convenient graphical interface (for example, Norton Commander) or finer-grained management tools (for example, PC Tools). The greatest influence on the development of software for personal computers was exerted by the Microsoft Windows operating environment, a superstructure over MS-DOS. Network functions were also implemented mainly by network shells running on top of the OS. When working in a network it is always necessary to support a multi-user mode, in which one user works interactively and the others gain access to the computer's resources over the network; in this case the operating system must provide at least some minimal functional support for the multi-user mode. The networking history of MS-DOS began with version 3.1, which added the necessary file-locking facilities to the file system, allowing more than one user to have access to a file. Using these functions, network shells could provide file sharing between network users.

Together with the release of MS-DOS 3.1 in 1984, Microsoft also released a product called Microsoft Networks, usually informally called MS-NET. Some concepts embodied in MS-NET, such as the introduction of the basic network components - the redirector and the network server - successfully migrated to Microsoft's later network products: LAN Manager, Windows for Workgroups, and then Windows NT. Network shells for personal computers were also produced by other companies: IBM, Artisoft, Performance Technology and others.

Novell chose a different path. From the outset it bet on developing an operating system with built-in network functions, and it achieved outstanding success along this path. Its NetWare network operating systems for a long time set the standard of performance, reliability and security for local networks. Novell's first network operating system appeared on the market in 1983 and was called OS-NET. This OS was intended for networks with a star topology, whose central element was a specialized computer based on the Motorola 68000 microprocessor. A little later, when IBM released the PC XT personal computer, Novell developed a new product, NetWare 86, designed for the Intel 8088 microprocessor architecture.
From its very first version, NetWare was distributed as an operating system for the central server of a local network which, by specializing in the functions of a file server, provides the highest possible speed of remote file access and enhanced data security for this class of computers. Users pay a price for the high performance of Novell NetWare networks: a dedicated file server cannot be used as a workstation, and its specialized OS has a very specific application programming interface (API), which requires special knowledge, special experience and considerable effort from application developers. Unlike Novell, most other companies developed network facilities for personal computers within operating systems with a universal API, that is, within general-purpose operating systems. As personal computers developed as a hardware platform, such systems began to acquire the features of mini-computer operating systems.

In 1987, as a result of the joint efforts of Microsoft and IBM, the first multitasking operating system for personal computers with the Intel 80286 processor appeared: OS/2. It was a well-thought-out system. It supported preemptive multitasking, virtual memory, a graphical user interface (though not from the first version) and a virtual machine for running DOS applications. In fact, it went beyond simple multitasking with its concept of parallelizing individual processes, called multithreading. OS/2, with its developed multitasking capabilities and the HPFS file system with built-in multi-user protection facilities, turned out to be a good platform for building local networks of personal computers. The most widespread network shells were Microsoft's LAN Manager and IBM's LAN Server, developed by these companies on the basis of a common base code. These shells were inferior in performance to the NetWare file server and consumed more hardware resources, but they had important advantages: first, they allowed any programs developed for OS/2, MS-DOS or Windows to be run on the server, and second, the computer on which they ran could also be used as a workstation. The network developments of Microsoft and IBM led to the appearance of NetBIOS, a very popular transport protocol and at the same time an application programming interface for local networks, which found its way into almost all network operating systems for personal computers; this protocol is still used today to build small local networks.

The not very successful market fate of OS/2 did not allow the LAN Manager and LAN Server systems to capture a noticeable market share, but the principles of operation of these network systems were largely embodied in the more successful operating system of the 1990s, Microsoft Windows NT, which contains built-in network components, some of which carry the LM prefix, from LAN Manager.

In the 1980s the main standards for local network communication technologies were adopted: Ethernet in 1980, Token Ring in 1985, FDDI in the late 1980s. This made it possible to ensure the compatibility of network operating systems at the lower levels and to standardize the interface between the OS and network adapter drivers. For personal computers, not only specially designed operating systems such as MS-DOS, NetWare and OS/2 were used, but also adaptations of existing operating systems. The appearance of the Intel 80286 processor, and especially of the 80386 with its support for multiprogramming, made it possible to port the UNIX operating system to the personal computer platform.
The best-known system of this type was the UNIX version from Santa Cruz Operation (SCO UNIX).

Thus, after the many changes computing has undergone over the decades, today we cannot imagine even the theoretical possibility of performing any computational work without the participation of an operating system. Many operating systems, like their predecessors, are far from perfect, but developers around the world are doing everything possible to improve them. Perhaps before long we will see an operating system that copes with ease with all the tasks set before it.

ESSAY

In the discipline

Information Technology

Topic: "Operating Systems"

Performed by a student of Omyvt.

Group No. 2291/52

Esp

Introduction

A modern operating system is a complex set of software tools that provide the user not only with standardized input and output of information and program management, but also simplify work with the computer. The operating system's programming interface makes it possible to reduce the size of a specific program and to simplify its interaction with all the components of the computing system.

It is known that operating systems acquired their modern appearance during the period of development of the third generation of computing machines, that is, from the mid-1960s to 1980. At that time a significant increase in processor efficiency was achieved through the implementation of multitasking.

Windows is the most widespread operating system, and for most users it is the most suitable due to its simplicity, good interface, acceptable performance and the huge number of application programs available for it.

Windows systems have come a long way from primitive graphical shells to fully modern operating systems. Microsoft began work on Interface Manager (subsequently Microsoft Windows) in September 1981. Although the first prototypes were based on so-called Multiplan- and Word-like menus, in 1982 these interface elements were successfully changed to drop-down menus and dialog boxes.

The purpose of this work is to briefly review the history of the development of Microsoft Windows operating systems.


Brief history of the development of Windows operating systems

At present, the graphical operating systems of Microsoft Corporation's Windows family are the most widespread. In 2005 the Windows family marked its twentieth anniversary.

They are continuously improved, so each new version has additional capabilities.

The first version of this operating system, Windows 1.0, saw the light in November 1985. Windows 1.0 could do relatively little and was rather a graphical shell for MS-DOS; nevertheless, this system allowed the user to run several programs at the same time. The main inconvenience of working with Windows 1.0 was that open windows could not overlap (to increase the size of one window, the size of the neighboring one had to be reduced). In addition, too few programs were written for Windows 1.0, so the system did not become widespread.



Windows 3.1 (1992) and Windows for Workgroups 3.11 (1993) are popular graphical operating shells that run on top of the MS-DOS operating system and use its built-in functions and procedures at the lower level. These are object-oriented environments whose basis is a hierarchically organized system of windows.

Windows NT (1993) is a multi-user and scalable network operating system for personal computers that supports the client-server architecture and includes its own security system. It can interoperate with various operating systems, both from Microsoft Corporation and from other companies (for example, MacOS or UNIX), installed on single-processor and multiprocessor computers based on CISC or RISC technologies.

Windows 95 is a multitasking, multithreaded 32-bit operating system with a graphical interface. The system fully supports 16-bit applications created for MS-DOS. It is an integrated multimedia environment for sharing textual, graphic, sound and other information.

Windows 98 was the logical development of Windows 95 toward greater computer performance without adding new equipment. The system includes a number of programs whose combined use improves computer performance and allows more effective use of Internet Web resources through the new multimedia capabilities of the operating system.

Windows 2000 is a next-generation network operating system equipped with improved multiprocessing facilities and efficient information protection. The offline files feature allows network files to be selected into folders for subsequent work without a network connection, which provides additional possibilities for mobile users.

Windows ME (Millennium Edition) is an operating system with a number of additional features and advantages over the previous version, Windows 98. The system has expanded multimedia capabilities and improved Internet access. The OS also supports the newest types of equipment and has a significantly improved help system.

Windows XP (2001) was Microsoft's step toward integrating the Windows ME and Windows 2000 lines. As a result of combining their strengths, one of the best operating systems was obtained, with a new user interface that significantly simplifies the use of a personal computer for different purposes, including the management of local networks. Two different editions of this OS were developed: for home users (Windows XP Home Edition) and for corporate users (Windows XP Professional).

Windows Vista (2007) is the latest operating system (kernel version 6.0). Unlike previous versions, Vista is shipped on DVD media because of its increased complexity and its new Aero interface. In addition, each disc contains all five of its editions at once: Home Basic, Home Premium, Business, Enterprise and Ultimate.

In the next chapter we will consider each of these operating systems in more detail.


Abstract

«First operating systems»

Introduction

Among all the system programs that computer users have to deal with, operating systems occupy a special place. The operating system manages the computer, starts programs, ensures data protection and performs various service functions at the request of users and programs. Every program uses OS services, and therefore can run only under the control of the OS that provides these services for it.

1. Purpose of operating systems

The operating system largely determines the appearance of the entire computing system. Despite this, users who actively use computing technology often find it difficult to define what an operating system is. This is partly because the OS performs two essentially loosely related functions: providing the user-programmer with convenience by presenting an extended (virtual) machine, and improving the efficiency of the computer by rationally managing its resources.

An operating system (OS) is a set of programs that control the computer equipment, plan the efficient use of its resources and solve tasks according to user jobs.

Purpose of the operating system.

The main goal of the OS, which ensures the operation of the computer in any of the modes described below, is the dynamic allocation of resources and their management in accordance with the requirements of the computing processes (tasks).

A resource is any object that can be distributed by the operating system among the computing processes in the computer. There are hardware and software resources. Hardware resources include the microprocessor (processor time), RAM and peripheral devices; software resources are the software available to the user for managing computing processes and data. The most important software resources are the programs included in the programming system; the software for controlling peripheral devices and files; the libraries of system and application programs; and the facilities that provide control of and interaction between computing processes (tasks).

The operating system distributes resources in accordance with user requests and computer capabilities, taking into account the interaction of the computing processes. OS functions are themselves implemented by a number of computing processes that also consume resources (memory, processor time, etc.); the computing processes belonging to the OS control the computing processes created at the user's request.

A resource is said to be used in a shared mode if each of the computing processes occupies it for some time interval. For example, two processes can share processor time equally if each process is allowed to use the processor for one second out of every two. All hardware resources are shared in a similar way, but the lengths of the usage intervals may differ. For example, a process may have RAM at its disposal for its entire lifetime, while the processor may be available to it for only one second out of every four.

The operating system is an intermediary between the computer and its user. It makes working with the computer easier, freeing the user from the duties of distributing and managing resources. The operating system analyzes user requests and ensures their execution. A request reflects the required resources and the required actions of the computer and is presented as a sequence of commands in a special language of operating system directives. Such a sequence of commands is called a task.

2. Types of operating systems

The operating system can execute user requests in batch or dialogue mode or control devices in real time. Accordingly, batch processing, time-sharing, real-time and dialogue operating systems are distinguished (Table 2.1).

Table 2.1. Characteristics of operating systems

OS | Nature of user interaction with the task | Number of simultaneously served users | Computer operating mode
Batch processing | Interaction is impossible or limited | One or several | Single-program or multiprogram
Time-sharing | Dialogue | Several | Multiprogram
Real-time | Operational | — | Multitasking
Dialogue | Dialogue | One | Single-program

Batch processing operating systems

A batch processing operating system is a system that processes a package of tasks, i.e. several tasks prepared by one or several users. Interaction between a user and his task during processing is impossible or extremely limited. Under a batch processing operating system, the computer can function in single-program or multiprogram mode.

Time-sharing operating systems

Such systems provide simultaneous service to many users, allowing each user to interact with his task in dialogue mode. The effect of simultaneous service is achieved by dividing processor time and other resources among several computing processes that correspond to the individual tasks of the users. The operating system gives the computer to each computing process for a short time interval; if the computing process has not finished by the end of the next interval, it is interrupted and placed in a waiting queue, yielding the computer to another computing process. In these systems the computer functions in multiprogram mode.
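As a rough, minimal sketch of the quantum-based scheduling just described (the task names, service times and quantum are invented for illustration; this is not the code of any historical system), a round-robin loop in C might look like this:

    #include <stdio.h>

    /* Toy round-robin simulation: each task needs some amount of CPU time;
       the "OS" gives every unfinished task one quantum in turn, re-queuing
       tasks that are not yet done. All numbers are invented for the example. */
    struct task { const char *name; int remaining; };

    int main(void) {
        struct task q[] = { {"task A", 3}, {"task B", 1}, {"task C", 2} };
        const int n = 3, quantum = 1;   /* quantum: one "tick" of CPU time */
        int left = n;

        while (left > 0) {
            for (int i = 0; i < n; i++) {
                if (q[i].remaining == 0) continue;          /* already finished */
                int used = q[i].remaining < quantum ? q[i].remaining : quantum;
                q[i].remaining -= used;                     /* task runs for its slice */
                printf("%s ran for %d tick(s), %d left\n",
                       q[i].name, used, q[i].remaining);
                if (q[i].remaining == 0) left--;            /* task leaves the queue */
            }
        }
        return 0;
    }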

A time-sharing operating system can be used not only to serve users but also to control technological equipment. In this case the "users" are the individual control units of the actuators that are part of the equipment: each unit interacts with a certain computing process during a time interval sufficient to transmit control actions to the actuator or to receive information from the sensors.

Real-time operating systems

These systems guarantee the prompt execution of requests within a specified time interval. Requests can come from users or from devices external to the computer to which the system is connected by data transmission channels. The speed of the computing processes in the computer must be coordinated with the pace of the processes taking place outside the computer, i.e. coordinated with real time. These systems organize the management of computing processes so that the response time to a request does not exceed specified values. The required response time is determined by the properties of the objects (users, external devices) served by the system. Real-time operating systems are used in information retrieval systems and in systems controlling technological equipment. The computer in such systems most often functions in multitasking mode.

Dialogue operating systems

These operating systems became widespread on personal computers. They provide a convenient form of dialogue with the user through the display when entering and executing commands. To perform frequently used command sequences, i.e. tasks, a dialogue operating system also provides batch processing. Under the control of a dialogue OS the computer usually functions in single-program mode.

3. The history of the development of the OS

Development of the first operating systems

An important period in the development of the OS is 1965-1975. At this time the technical base of computing machines moved from individual semiconductor elements such as transistors to integrated circuits, which opened the way to the next generation of computers. During this period almost all the basic mechanisms present in modern OSs were implemented: multiprogramming, multiprocessing, support for multi-terminal multi-user mode, virtual memory, file systems, access control and network operation. In these years the flowering of system programming began. The revolutionary event of this stage was the industrial implementation of multiprogramming. Given the sharply increased capabilities of the computer for processing and storing data, executing only one program at a time turned out to be extremely inefficient. The solution was multiprogramming - a way of organizing the computing process in which several programs resided in the computer's memory at once and were executed in turn on one processor. These improvements significantly increased the efficiency of the computing system. Multiprogramming was implemented in two variants - in batch processing systems and in time-sharing systems. Multiprogram batch processing systems, like their single-program predecessors, aimed to ensure maximum loading of the computer equipment, but they solved this task more efficiently. In multiprogram batch mode the processor did not stand idle while one program performed an I/O operation (as happened when programs were executed sequentially in early batch processing systems), but switched to another program that was ready to run. As a result, a balanced load on all computer devices was achieved, and consequently the number of tasks solved per unit of time increased.

In multiprogram batch systems the user was still deprived of the possibility of interacting with his programs. In order to return to users at least partially the feeling of direct interaction with the computer, another variant of multiprogram systems was developed - time-sharing systems. This variant is designed for multi-terminal systems, where each user works at his own terminal. Among the first time-sharing operating systems, developed in the mid-60s, were TSS/360 (IBM), CTSS and Multics (Massachusetts Institute of Technology, together with Bell Labs and General Electric). The variant of multiprogramming used in time-sharing systems was aimed at creating for each individual user the illusion of sole ownership of the computing machine by periodically allocating each program its share of processor time. In time-sharing systems the efficiency of equipment use is lower than in batch processing systems, which was the price paid for user convenience. Multi-terminal mode was used not only in time-sharing systems but also in batch processing systems. In this case not only the operator but all users could form their tasks and manage their execution from their own terminals. Such OSs were called remote job entry systems. The terminal complexes could be located at a great distance from the processor racks, connected to them by various global links - modem connections over telephone networks or dedicated channels. To support the remote operation of terminals, special software modules appeared in operating systems, implementing various (at that time, as a rule, non-standard) communication protocols. Such computing systems with remote terminals, while retaining the centralized nature of data processing, were to some extent a prototype of modern networks, and the corresponding system software a prototype of network operating systems.

In the computers of the 1960s most of the actions for organizing the computing process were taken over by the operating system. The implementation of multiprogramming required very important changes to the computer hardware, aimed directly at supporting the new way of organizing the computing process. When the computer's resources are divided among programs, it is necessary to switch the processor quickly from one program to another, and also to reliably protect the code and data of one program from unintentional or deliberate damage by another program. Processors acquired privileged and user operating modes, special registers for fast switching from one program to another, memory protection facilities, and a developed interrupt system.

In privileged mode, intended for the operation of the operating system's software modules, the processor could execute all commands, including those used to distribute and protect computer resources. Some processor commands were not available to programs running in user mode. Thus only the OS could control the hardware and play the role of arbiter for the user programs, which ran in unprivileged user mode.

The interrupt system made it possible to synchronize the operation of the various computer devices working in parallel and asynchronously, such as I/O channels, disks, printers, etc.

Another important trend of this period was the creation of families of software-compatible machines and operating systems for them. Examples of families of software-compatible machines built on integrated circuits are the IBM/360, IBM/370 and PDP-11 machines.

Software compatibility also required compatibility of operating systems. However, such compatibility implies the ability to work on large and small computing systems, with large and small numbers of diverse peripherals, in the commercial sphere and in scientific research. Operating systems built with the intention of satisfying all these contradictory requirements turned out to be extremely complex. They consisted of millions of lines of assembler code written by thousands of programmers and contained thousands of errors, causing an endless stream of corrections. The operating systems of this generation were very expensive. For example, the development of OS/360, whose code amounted to 8 MB, cost IBM 80 million dollars.

However, despite their enormous size and many problems, OS/360 and other similar operating systems of this generation did satisfy most consumer requirements. In this decade a huge step forward was made and a solid foundation was laid for the creation of modern operating systems.

Operating systems and global networks

In the early 1970s the first network operating systems appeared, which, unlike multi-terminal OSs, made it possible not only to serve dispersed users but also to organize distributed storage and processing of data among several computers connected by communication links. Any network operating system, on the one hand, performs all the functions of a local operating system and, on the other hand, has some additional facilities that allow it to interact over the network with the operating systems of other computers. Software modules implementing network functions appeared in operating systems gradually, as network technologies and the hardware base of computers developed and new tasks requiring network processing emerged.

Although theoretical work on the concepts of network interaction had been carried out almost from the very appearance of computing machines, significant practical results in connecting computers into networks were obtained in the late 1960s, when, with the help of global connections and packet-switching technology, it became possible to implement interaction between mainframes and supercomputers. These expensive computers often stored unique data and programs, access to which had to be provided to a wide circle of users located in various cities at a considerable distance from the computing centers.

In 1969 the US Department of Defense initiated work on connecting the supercomputers of defense and research centers into a single network. This network was named ARPANET and became the starting point for the creation of the most famous global network - the Internet. The ARPANET network united computers of different types running different OSs, to which modules implementing communication protocols common to all computers of the network were added.

In 1974 IBM announced the creation of its own network architecture for its mainframes, called SNA (System Network Architecture). This multi-level architecture, in many respects similar to the standard OSI model that appeared somewhat later, provided "terminal-terminal", "terminal-computer" and "computer-computer" interaction over global links. The lower levels of the architecture were implemented by specialized hardware, the most important of which is the teleprocessing processor. The functions of the upper levels of SNA were performed by software modules. One of them formed the basis of the teleprocessing processor's software; the other modules ran on the central processor as part of the standard IBM operating system for mainframes.

At the same time, active work on creating and standardizing X.25 networks was under way in Europe. These packet-switching networks were not tied to any specific operating system. After receiving the status of an international standard in 1974, the X.25 protocols began to be supported by many operating systems. From 1980 IBM incorporated support for the X.25 protocols into the SNA architecture and into its operating systems.

Operating systems of mini-computers and the first local networks

By the mid-1970s mini-computers such as the PDP-11, Nova and HP machines had become widespread. Mini-computers were the first to take advantage of large integrated circuits, which made it possible to implement quite powerful functions at a relatively low computer cost.

Many functions of multiprogram, multi-user OSs were truncated, given the limited resources of mini-computers. The operating systems of mini-computers often became specialized, for example only for real-time control (the RT-11 OS for PDP-11 mini-computers) or only for supporting time-sharing mode (RSX-11M for the same computers). These operating systems were not always multi-user, which in many cases was justified by the low cost of the computers.

An important milestone in the history of operating systems was the creation of the UNIX OS. Initially this operating system was intended to support time-sharing mode on the PDP-7 mini-computer. From the mid-70s the mass use of UNIX began. By this time the UNIX code was 90% written in the high-level language C. The wide availability of efficient C compilers made UNIX unique for its time: an OS that could be ported relatively easily to different types of computers. Since this OS was supplied with source code, it became the first open OS that enthusiast users could improve. Although UNIX was originally designed for mini-computers, its flexibility, elegance, powerful functionality and openness allowed it to take a strong position in all classes of computers: supercomputers, mainframes, mini-computers, servers and workstations based on RISC processors, and personal computers.

Regardless of the version, the features common to UNIX are:

multi-user mode with protection of data from unauthorized access;

implementation of multiprogram processing in time-sharing mode, based on preemptive multitasking algorithms;

use of virtual memory and swapping mechanisms to raise the level of multiprogramming;

unification of I/O operations based on the extended use of the "file" concept (see the sketch after this list);

a hierarchical file system forming a single directory tree regardless of the number of physical devices used to store the files;

portability of the system, achieved by writing its main part in C;

a variety of means of interaction between processes, including over the network;

disk caching to reduce the average file access time.
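As an illustration of the item on unifying I/O around the "file" concept (a minimal sketch assuming a POSIX-like system; the file name is an arbitrary example), the same open/write/close calls address both a regular file and a device:

    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        /* The same open/write/close calls work on a regular file... */
        int fd = open("example.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd >= 0) {
            write(fd, "hello, file\n", 12);
            close(fd);
        }

        /* ...and on a device: the controlling terminal is also "just a file". */
        int tty = open("/dev/tty", O_WRONLY);
        if (tty >= 0) {
            write(tty, "hello, terminal\n", 16);
            close(tty);
        }
        return 0;
    }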

The availability of mini-computers and, as a result, their prevalence in enterprises served as a powerful stimulus for the creation of local networks. A company could afford to have several mini-computers located in the same building or even in one room. Naturally, a need arose to exchange information between them and to share expensive peripheral equipment.

The first local networks were built using non-standard communication equipment, in the simplest case by directly connecting the serial ports of the computers. The software was also non-standard and was implemented as user applications. The first network application for UNIX, the UUCP program (UNIX-to-UNIX Copy Program), appeared in 1976 and began to spread with version 7 of AT&T UNIX from 1978. This program allowed files to be copied from one computer to another within a local network through various hardware interfaces - RS-232, current loop, etc. - and, in addition, could work over global connections such as modem links.

Development of operating systems in the 1980s

The most important events of this decade include the development of the TCP/IP stack, the growth of the Internet, the standardization of local network technologies, and the appearance of personal computers and operating systems for them.

A working version of the TCP/IP protocol stack was created in the late 1970s. This stack was a set of common protocols for a heterogeneous computing environment and was intended to link the experimental ARPANET network with other "satellite" networks. In 1983 the TCP/IP protocol stack was adopted by the US Department of Defense as a military standard. The transition of the ARPANET computers to the TCP/IP stack was accelerated by its implementation for the BSD UNIX operating system. From that time the joint existence of UNIX and the TCP/IP protocols began, and almost all of the numerous UNIX versions became networked.
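The programming face of this merger of UNIX and TCP/IP was the Berkeley sockets interface that shipped with BSD UNIX. A minimal TCP client sketch in C (the address 192.0.2.1 and the port are placeholder values chosen for illustration) looks roughly like this:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void) {
        /* Create a TCP socket and connect to a (hypothetical) server. */
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(7);                        /* echo port, for example */
        inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);   /* placeholder address */

        if (connect(s, (struct sockaddr *)&addr, sizeof addr) == 0) {
            const char msg[] = "hello over TCP/IP\n";
            write(s, msg, strlen(msg));    /* a socket is also a file descriptor */
        }
        close(s);
        return 0;
    }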

The Internet became an excellent testing ground for many network operating systems, making it possible to check in real conditions their ability to interoperate, their degree of scalability, and their ability to work under the extreme load created by hundreds and thousands of users. Independence from manufacturers, flexibility and efficiency made the TCP/IP protocols not only the main transport mechanism of the Internet but also the main stack of most network OSs.

The whole decade was marked by the constant appearance of new, ever more refined versions of UNIX. Among them were vendor versions of UNIX - SunOS, HP-UX, IRIX, AIX and many others - in which computer manufacturers adapted the kernel code and system utilities to their own hardware. The variety of versions gave rise to the problem of their compatibility, which various organizations periodically tried to solve. As a result, the POSIX and XPG standards were adopted, defining OS interfaces for applications, and a special AT&T division released several versions of UNIX System III and UNIX System V, intended to consolidate developers at the level of the kernel code.

DOS operating systems from Microsoft (MS-DOS), Novell (Novell DOS) and other firms were also widespread. The first DOS for a personal computer was created in 1981 and was called MS-DOS 1.0. Microsoft acquired from Seattle Computer Products the rights to 86-DOS, adapted this OS for the then still secret IBM PC and renamed it MS-DOS. The main milestones were the following:

- August 1981: DOS 1.0 works with a single 160 KB single-sided floppy disk; the system files occupy up to 13 KB, and 8 KB of RAM is required.
- May 1982: DOS 1.1 supports double-sided diskettes; system files occupy up to 14 KB.
- March 1983: DOS 2.0 appears together with the IBM PC XT. This version has almost three times as many commands as DOS 1.1 and supports a 10 MB hard disk, a tree-structured file system and 360 KB floppy disks. The new 9-sector disk format increases capacity by 20% compared with the 8-sector format. System files occupy up to 41 KB; 24 KB of RAM is required.
- December 1983: together with the PCjr, IBM's PC DOS 2.1 appears.
- August 1984: together with the first IBM PC AT, based on the 286 processor, DOS 3.0 appears. It is oriented toward 1.2 MB floppy disks and hard disks of greater capacity than before. System files occupy up to 60 KB.
- November 1984: DOS 3.1 supports Microsoft Networks; system files occupy up to 62 KB.
- November 1985: Microsoft Windows appears.
- December 1985: DOS 3.2 works with 89 mm (3.5-inch) 720 KB floppy disks and can address up to 32 MB on a single hard disk. System files occupy up to 72 KB.
- April 1986: the IBM PC Convertible appears.
- September 1986: Compaq releases the first 386-class PC.
- April 1987: together with the PS/2, IBM's first 386-class PC, DOS 3.3 appears. It works with the new 1.44 MB floppy disks and with several ways of splitting the hard disk into partitions of up to 32 MB each, which allows the use of higher-capacity hard drives. System files occupy up to 76 KB, and 85 KB of RAM is required. This version remained the most popular for three or four years. At the same time IBM announced the release of OS/2.
- November 1987: Microsoft Windows 2.0 and OS/2 come into use.
- July 1988: Microsoft Windows 2.1 (Windows/286 and Windows/386) appears.
- November 1988: DOS 4.01 includes a shell menu interface and allows hard disk partitions larger than 32 MB. System files occupy up to 108 KB; the system requires 75 KB of RAM.
- May 1990: Microsoft Windows 3.0 and DR DOS 5.0 appear.
- June 1991: MS-DOS 5.0 makes it possible to use RAM efficiently. It has an improved shell menu interface, a full-screen editor, disk utilities and task switching. System files occupy up to 118 KB; 60 KB of RAM is required to run the system, of which 45 KB can be loaded into the memory area above 1 MB, freeing conventional memory for applications. MS-DOS 6.0, besides the standard set of programs, has a backup program, an antivirus program and other improvements, continued in MS-DOS 6.21 and MS-DOS 6.22.

The early 1980s are associated with another event significant for the history of operating systems - the appearance of personal computers. From the point of view of architecture, personal computers did not differ from the mini-computer class of the PDP-11 type, but their cost was significantly lower. Personal computers served as a powerful catalyst for the rapid growth of local networks. As a result, support for network functions became a prerequisite for personal computer operating systems.

However, network functions did not appear in personal computer operating systems immediately. The first version of the most popular operating system of the early stage of personal computers - Microsoft's MS-DOS - lacked these capabilities. It was a single-program, single-user OS with a command line interface, capable of starting from a floppy disk. Its main tasks were managing files located in a UNIX-like hierarchical file system on floppy and hard disks, and launching programs. MS-DOS was not protected from user programs, since the Intel 8088 processor did not support privileged mode. The developers of the first personal computers believed that with individual use of the computer and the limited capabilities of the hardware there was no point in supporting multiprogramming, so the processor provided neither a privileged mode nor other mechanisms for supporting multiprogram systems.

The functions missing from MS-DOS and similar OSs were compensated for by external programs that provided the user with a convenient graphical interface (for example, Norton Commander) or finer-grained management of the machine (for example, PC Tools). The greatest influence on the development of software for personal computers was exerted by the Microsoft Windows operating environment, which was a superstructure over MS-DOS.

Network functions were also implemented mainly by network shells running on top of the OS. Network operation always requires support for a multi-user mode, in which one user works interactively while the others access the computer's resources over the network. In this case the operating system must provide at least some minimal functional support for multi-user operation. The network history of MS-DOS began with version 3.1, which added to the file system the file and record locking facilities needed for more than one user to access a file. Using these functions, network shells could share files among network users.
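The locking calls themselves were DOS INT 21h functions; purely as a conceptual analogy (a sketch using the POSIX fcntl() interface with an invented file name, not the DOS API), byte-range locking for shared file access looks roughly like this:

    #include <fcntl.h>
    #include <unistd.h>

    /* Conceptual sketch of byte-range file locking, shown with POSIX fcntl()
       rather than the DOS INT 21h calls the text refers to. */
    int main(void) {
        int fd = open("shared.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0) return 1;

        struct flock lk = {0};
        lk.l_type   = F_WRLCK;   /* exclusive lock while this process writes */
        lk.l_whence = SEEK_SET;
        lk.l_start  = 0;
        lk.l_len    = 128;       /* lock only the first 128 bytes (one "record") */

        if (fcntl(fd, F_SETLKW, &lk) == 0) {
            write(fd, "record", 6);
            lk.l_type = F_UNLCK;             /* release so other users may access it */
            fcntl(fd, F_SETLK, &lk);
        }
        close(fd);
        return 0;
    }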

Together with the release of MS-DOS 3.1 in 1984, Microsoft also released a product called Microsoft Networks, usually informally called MS-NET. Some of the concepts embodied in MS-NET, such as the introduction into its structure of basic network components - the redirector and the network server - successfully migrated to later Microsoft network products: LAN Manager, Windows for Workgroups, and then Windows NT.

Network shells for personal computers were also produced by other companies: IBM, Artisoft, Performance Technology and others.

Novell chose another path. From the start it bet on developing an operating system with built-in network functions, and it achieved outstanding success on this path. Its NetWare network OSs long served as the benchmark for performance, reliability and security for local networks.

Novell's first network OS appeared on the market in 1983 and was called OS-Net. This OS was intended for networks with a star topology whose central element was a specialized computer based on the Motorola 68000 microprocessor. A little later, when IBM released the PC XT personal computers, Novell developed a new product - NetWare 86, designed for the Intel 8088 microprocessor architecture.

From its very first version, NetWare was distributed as an operating system for the central server of a local network which, thanks to its specialization in file server functions, provided the highest speed of remote file access possible for that class of computers, along with enhanced data security. Users paid for the high performance of Novell NetWare networks: a dedicated file server could not be used as a workstation, and its specialized OS had a very specific application programming interface (API), so developing applications for it required special knowledge, special experience and considerable effort.

Unlike Novell, most other companies developed network facilities for personal computers within general-purpose operating systems. As the hardware platform of personal computers developed, such systems began to acquire the features of mini-computer operating systems.

In 1987, as a result of joint work by Microsoft and IBM, the first multitasking operating system for personal computers with the Intel 80286 processor appeared, making full use of protected mode - OS/2. This system was well thought out. It supported preemptive multitasking, virtual memory, a graphical user interface (though not from the first version) and a virtual machine for running DOS applications. In fact it went beyond simple multitasking with its concept of parallelizing execution within individual processes, called multithreading.
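As a rough modern analogy of multithreading (a sketch using POSIX threads, not the OS/2 API), several threads of one process share its address space and are scheduled concurrently:

    #include <pthread.h>
    #include <stdio.h>

    /* Illustration only: one process, two concurrently scheduled threads. */
    static void *worker(void *arg) {
        printf("thread %ld doing part of the work\n", (long)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        for (long i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);       /* wait for both threads to finish */
        return 0;
    }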

OS/2, with its developed multitasking functions and its HPFS file system with built-in multi-user protection facilities, turned out to be a good platform for building local networks of personal computers. The most widespread were the LAN Manager network shell from Microsoft and the LAN Server from IBM, developed by these companies on a common code base. These shells were inferior in performance to the NetWare file server and consumed more hardware resources, but they had important advantages: first, they could run on the server any programs developed for OS/2, MS-DOS and Windows, and second, the computer on which they ran could also be used as a workstation.

The network developments of Microsoft and IBM led to the appearance of NetBIOS - a very popular transport protocol and at the same time an application programming interface for local networks, which came to be supported by almost all network operating systems for personal computers. This protocol is still used today to create small local networks.

The not very successful market fate of OS/2 did not allow the LAN Manager and LAN Server systems to capture a noticeable market share, but the principles of operation of these network systems were largely embodied in a more successful operating system of the 1990s - Microsoft Windows NT, which contains built-in network components, some of which carry the LM prefix inherited from LAN Manager.

In the 80s, the main standards for communication technologies for local networks were adopted: in 1980 - Ethernet, in 1985 - Token Ring, in the late 80s - FDDI. This made it possible to ensure the compatibility of the Network OS at the lowest levels, as well as standardize the OS interface with network adapter drivers.

For personal computers, not only specially designed OSs such as MS-DOS, NetWare and OS/2 were used; existing OSs were also adapted to them. The appearance of the Intel 80286 and especially the 80386 processors, with their support for multiprogramming, made it possible to port UNIX to the personal computer platform. The best-known system of this kind was the UNIX version from Santa Cruz Operation (SCO UNIX).

Conclusion


The history of the OS spans about half a century. It was largely determined by the development of the element base and of computing equipment. At present the global computer industry is developing very rapidly. The performance of systems is increasing, and with it the possibilities of processing large volumes of data. Operating systems of the MS-DOS class can no longer cope with such a data stream and cannot fully use the resources of modern computers. Therefore, there has recently been a transition to more powerful and more refined operating systems, an example of which is Windows NT, released by Microsoft.



Among all the system programs that computer users have to deal with, operating systems occupy a special place.
The operating system is a program that is launched immediately after the computer is switched on and allows the user to control the computer.

The operating system (OS) manages the computer, launches programs, ensures data protection and performs various service functions at the request of users and programs. Every program uses OS services and therefore can run only under the control of the OS that provides those services. The choice of OS is thus very important, since it determines which programs you will be able to run on your computer. The OS also affects the productivity of your work, the degree of data protection, the hardware required, and so on. On the other hand, the choice of operating system also depends on the technical characteristics of the computer: the more modern the operating system, the more capabilities it provides and the more visual it is, but also the higher its demands on the computer (processor clock frequency, RAM and disk memory, the presence and capabilities of additional cards and devices). Now that we have worked out what operating systems are and what their features are in general, it is time to move on to a more detailed look at the variety of OSs, which usually begins with a brief history of their appearance and development.

Multics operating system
So, it all started back in 1965... For four years, Bell Labs of American Telephone & Telegraph, together with General Electric and a group of researchers from the Massachusetts Institute of Technology, worked on the Multics OS project (also known as the MAC project - not to be confused with the Mac). The aim of the project was to create a multi-user interactive operating system providing a large number of users with convenient and powerful access to computing resources. This OS was based on the principles of multi-level protection. Virtual memory had a segment-page organization, with an access level associated with each segment. For a program to call a procedure or refer to data located in a certain segment, its execution level had to be no lower than the access level of that segment. Multics was also the first to implement a fully centralized file system: even if files are located on different physical devices, logically they appear to reside on a single disk. The directory contains not the file itself but only a reference to its physical location; if the file turns out not to be there, the clever system asks for the appropriate device to be inserted. In addition, Multics had a large amount of virtual memory, which made it possible to map files from external memory into virtual memory. Alas, all attempts to give the system a relatively friendly interface failed. A lot of money was invested, and the result was rather different from what the people at Bell Labs had wanted. The project was closed. Incidentally, Ken Thompson and Dennis Ritchie were participants in the project, and although the project was closed, it is believed that it was Multics that gave rise to UNIX.
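A minimal sketch of the access rule described above (an illustration of the idea only, not Multics code; the numeric levels and segment name are invented for the example):

    #include <stdio.h>

    /* Multics-style rule: a program may reference a segment only if its
       execution level is not lower than the segment's access level. */
    struct segment { const char *name; int access_level; };

    static int may_access(int program_level, const struct segment *seg) {
        return program_level >= seg->access_level;
    }

    int main(void) {
        struct segment data = { "payroll data", 5 };
        printf("program at level 6: %s\n",
               may_access(6, &data) ? "access granted" : "access denied");
        printf("program at level 3: %s\n",
               may_access(3, &data) ? "access granted" : "access denied");
        return 0;
    }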

UNIX operating system
It is believed that a computer game is partly to blame for the appearance of UNIX. The fact is that Ken Thompson had, for reasons best known to himself, written the game Space Travel in 1969 on a Honeywell-635 computer that was used in the development of Multics. The catch was that neither the aforementioned Honeywell nor the laboratory's General Electric 645 was suitable for the game, and Ken had to find another machine - an 18-bit PDP-7. Ken and his colleagues had developed a new file system to make life and work easier, and he decided to try the invention out on the new machine. He did, and the whole Bell Labs patent department was pleased. This seemed too little to Thompson, and he began improving the system, adding such features as inodes, a process and memory management subsystem that allowed two users to work in time-sharing mode, and a simple command interpreter. Ken even developed several utilities for the system. His colleagues still remembered how they had suffered over Multics, so in honor of the old project Brian Kernighan suggested a similar name - Unics. After a while the name was shortened to UNIX (it is read the same way, just written with fewer letters - programmers have been lazy at all times). The OS was written in assembler.

So we come to what is known throughout the world as the "first edition of UNIX". In November 1971 the first release of full-fledged documentation for UNIX was published, and accordingly the OS was called the "first edition of UNIX". The second edition came out rather quickly - in less than a year. The third edition was nothing special, except that Dennis Ritchie wrote his own language, now known as C. It was in C that the fourth edition of UNIX was written in 1973. In July 1974 the fifth version of UNIX was released. The sixth edition of UNIX (aka UNIX V6), released in 1975, became the first commercially distributed UNIX; most of it was written in C.
Later the process and virtual memory management subsystem was completely rewritten, and at the same time the interface of the external device drivers was changed. All this made the system easy to port to other architectures, and it was named the "seventh edition" (aka UNIX Version 7). When the sixth edition reached the University of Berkeley in 1976, local UNIX gurus appeared there. One of them was Bill Joy.
Gathering his programmer friends, Bill began developing his own system based on the UNIX kernel. Having added a heap of his own functions to the basic ones (including a Pascal compiler), he called the whole distribution the Berkeley Software Distribution (BSD 1.0). The second version of BSD hardly differed from the first. The third version of BSD was based on porting UNIX Version 7 to computers of the VAX family, which produced the 32/V system, the basis of BSD 3.x. And, most importantly, a TCP/IP protocol stack was developed at the same time; the development was funded by the US Department of Defense.
The first commercial version was called UNIX System III and came out in 1982; it combined the best qualities of UNIX Version 7.
UNIX then developed roughly as follows:
First, companies appeared that engaged in the commercial porting of UNIX to other platforms. The well-known Microsoft Corporation also had a hand in this, bringing out, together with Santa Cruz Operation, a UNIX variant called Xenix.
Second, Bell Labs created a UNIX development group and announced that all subsequent commercial versions of UNIX (starting with System V) would be compatible with the previous ones.
By 1984 the second release of UNIX System V appeared, which introduced file and record locking, copying of shared RAM pages on a write attempt (copy-on-write), RAM page replacement, and so on. By this time UNIX was installed on more than 100 thousand computers.
In 1987 the third release of UNIX System V came out; four and a half million users of this operating system were registered... As for Linux, it appeared only in 1990, and the first official version of the OS came out only in October 1991. Like BSD, Linux is distributed with source code, so any user can configure it as he wishes; almost everything can be customized, something that, for example, Windows 9x cannot offer.

DOS operating systems
DOSes have existed all along. The first ones, from IBM, date back to the 1960s and were very limited functionally. The normal ones, which survived to our day and enjoyed relative fame, trace their lineage to QDOS...
This story, shorter than that of UNIX, began in 1980 at Seattle Computer Products. The OS, originally called QDOS, was modified and, renamed MS-DOS by the end of the year, sold to the popularly beloved Microsoft. IBM commissioned Microsoft to work on an OS for the "Blue Giant's" new computers - the IBM PC. At the end of 1981 the first version of the new OS, PC DOS 1.0, was released. The problem with the operating system was that it had to be reconfigured for each specific machine. PC DOS was handled by IBM itself, while Microsoft got its own modification, called MS-DOS. In 1982 PC DOS and MS-DOS version 1.1 appeared simultaneously, with some added and extended capabilities. By 1983 version 2.0 had been developed, with hard disk support and an improved file management system. The third version of MS-DOS, released in 1984, brought only minor improvements. Subsequent versions were aimed at improving memory management, up to version 6.22, after which a heavily trimmed 7.0 appeared as part of Windows 9x. Microsoft did no further work on DOS.
Meanwhile, DOS did not die. The latest versions included almost everything MS-DOS 6.22 could do, plus such features as tools for backing up and restoring damaged data, a built-in antivirus tool, file synchronization between two computers, and so on. There was also such a thing as PTS-DOS, produced by one of the Russian physics laboratories; its latest version is 6.65. But the most unusual is DR-DOS 7.02. This OS was originally developed by Digital Research, which then for some reason abandoned it and sold it to Novell. Novell built its networking into it and sold it on to Caldera, which supplemented DR-DOS with Internet access tools and now distributes it free of charge.

OS/2 operating system
It all began with the VM (Virtual Machine) OS, released in 1972. The product released then was called VM/370 and was designed to support a server for a certain number of users. This OS, which has long since celebrated its 25th anniversary and whose history can be used to trace the development of IBM technology in the field of server operating systems and network solutions, is a reliable and powerful basis for organizing a corporate information and computing system oriented toward the multi-user environment of a large modern company. The VM/ESA system makes intensive use of hardware capabilities and is somewhat less demanding of computing resources than OS/390, which makes it a good option as a platform for a corporate system, a large organization's information server or an Internet server. Later a joint Microsoft-IBM project was organized, aimed at creating an operating system free of shortcomings. The first version of OS/2 came out at the end of 1987. It was able to use the advanced computational capabilities of the processor and had facilities for communicating with IBM mainframes. In 1993 IBM released OS/2 2.1, a fully 32-bit system that could run applications created for Windows, had high performance and supported a large number of peripheral devices. In 1994 OS/2 Warp 3 was released; in this version, in addition to further increases in performance and reduced hardware requirements, Internet support appeared. Of the latest versions, only OS/2 Warp 4 should be noted, capable of working with 64-bit processors. In addition, it has a fairly full set of Internet tools, allowing OS/2 not only to run client programs but also to act as a Web server. Starting with the third version, a localized version of OS/2 for Russia has been supplied by IBM. Having traveled a rather long and complicated path, this OS for personal computers offers such features as real multitasking, well-thought-out and reliable memory and process management subsystems, built-in networking support with additional network server functions, and the powerful REXX programming language intended for system administration tasks. These features make it possible to use OS/2 as an operating system for powerful workstations or network servers.

Windows operating system
Windows was probably the first operating system that nobody ordered from Bill Gates - he took it on at his own risk. What was so special about it? First, the graphical interface, which at that time only the famous Mac OS had. Second, multitasking. In any case, Windows 1.0 came out in November 1985. The main platform was the 286-based machines.
Exactly two years later, in November 1987, Windows 2.0 came out, and a year and a half after that, 2.10. There was nothing special about them. And finally, the revolution: in May 1990 Windows 3.0 came out. What didn't it have: DOS applications ran in a separate window or full screen, copy-paste worked for exchanging data with DOS applications, and Windows itself worked in several memory modes - real (the basic 640 KB), protected and extended - so that applications could be launched whose size exceeded the size of physical memory. There was dynamic data exchange (DDE). A couple of years later version 3.1 was published, in which the problems with conventional memory were gone. A number of new functions were also introduced, including support for TrueType fonts. Normal operation on a local network was provided, and drag-and-drop (transferring files and directories) appeared. In version 3.11 network support was improved and a few more minor functions were introduced. In parallel, Windows NT 3.5 was released, which at that time was a collection of basic network developments taken from OS/2.

In June 1995 the entire computer community was stirred by Microsoft's announcement of the August release of a new operating system essentially different from Windows 3.11.
August 24 is the date of the official release of Windows 95 (other names: Windows 4.0, Windows Chicago). Now it was not just an operating environment - it was a full-fledged operating system. The 32-bit kernel made it possible to improve file access and network functions. 32-bit applications were better protected from each other's errors, and there was also support for a multi-user mode on one computer with a single installed system. There were many differences in the interface and a host of settings and improvements.
A little later the new Windows NT came out, with the same interface as the 95th. It was supplied in two versions: as a server and as a workstation. The Windows NT 4.x systems were reliable - not so much because Microsoft's conscience had awoken as because NT was written by programmers who had once worked on VAX/VMS.
In 1996 Windows 95 OSR2 came out (the abbreviation stands for OEM Service Release 2). The distribution included Internet Explorer 3.0 and an early version of Outlook (then called Exchange). Among the basic functions were FAT32 support and improved hardware initialization and drivers. Some settings (including video) could be changed without rebooting. The embedded DOS was 7.10, with FAT32 support.
In 1998 Windows 98 came out, with integrated Internet Explorer 4.0 and Outlook. The so-called Active Desktop appeared. Support for universal drivers and DirectX was improved, and built-in support for multiple monitors appeared. Optionally, a handy utility could be used to convert hard disks from FAT16 to FAT32. The built-in DOS was still dated 7.10.
A year later Windows 98 Second Edition came out, with an optimized kernel. Internet Explorer reached version 5.0, which by and large differed little from 4.x. Integration with the World Wide Web amounted to the bundling of several weak utilities of the FrontPage and Web Publisher type. DOS was still the same - 7.10.
In 2000 the full version of Windows Millennium came out. Internet Explorer became version 5.5; DOS seemingly died, although knowledgeable people claim it was still there, only now called 8.0. DOS applications are simply ignored. The interface was improved through graphic functions and the acceleration of everything that can move (including the mouse cursor), plus a couple of network functions. And quite recently - one might say in our time - Windows Vista and Windows Server 2008 appeared.
