The concept of an operating system. The main stages in the development of operating systems. The evolution of operating systems for computers of various types

The history of the OS spans about half a century. It was, and still is, largely determined by the development of the element base and of computer equipment.

  First generation.

The 40s. The first digital computers, with no OS. The programmer organized the computing process manually from the control panel.

  Second generation.

The 50s. The appearance of the prototype OS: monitor systems implementing batch processing of jobs.

  Batch mode

The need to make optimal use of expensive computing resources led to the concept of "batch mode" program execution. Batch mode assumes a queue of programs awaiting execution; the OS can load the next program from external storage into RAM without waiting for the previous one to finish, thereby avoiding processor idle time.
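To make the idea concrete, here is a minimal Python sketch (an illustration only, not a reconstruction of any historical system; the job names are invented) of a batch queue in which the next program is staged while the current one executes:

```python
# Toy model of batch mode: jobs wait in a queue, and the next job is
# staged into memory while the current one runs, so the processor does
# not idle between jobs.
from collections import deque

jobs = deque(["payroll", "inventory", "statistics"])  # hypothetical job names
staged = jobs.popleft() if jobs else None             # preload the first job

while staged is not None:
    running = staged                                  # "load" into RAM and run
    staged = jobs.popleft() if jobs else None         # stage the next job early
    print(f"executing {running} (next staged: {staged})")
```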

  Third generation.

1965-1980. Transition to integrated circuits. IBM/360. Almost all the basic concepts inherent in a modern OS were implemented: time sharing and multitasking, separation of privileges, real time, file structures and file systems. The implementation of multiprogramming required important changes to computer hardware: privileged and user modes, means of protecting memory areas, and a developed interrupt system.

  Time sharing and multitasking

Even batch mode in its developed form requires dividing processor time between several programs. The need for time sharing (multitasking, multiprogramming) became even clearer as teletypes (and later, terminals with cathode-ray displays) spread as input/output devices in the 1960s. Since an operator enters data from the keyboard (and even reads it from the screen) much more slowly than a computer processes it, using a computer in "exclusive" mode (with one operator) would leave expensive computing resources idle.

Time sharing made it possible to create "multi-user" systems, in which (as a rule) one central processor and one block of RAM served numerous terminals. Some tasks (such as data entry or editing by an operator) could run in dialogue mode, while others (such as bulk calculations) ran in batch mode.
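The time-sharing idea can be illustrated with a toy round-robin scheduler; the tasks, their workloads, and the quantum below are invented for the example:

```python
# Toy round-robin scheduler: each task gets a fixed quantum in turn,
# so interactive and batch work appear to run simultaneously.
from collections import deque

tasks = deque([("editor", 2), ("payroll", 5), ("compile", 3)])  # (name, work units)
QUANTUM = 1

while tasks:
    name, remaining = tasks.popleft()
    remaining -= QUANTUM                  # run the task for one time slice
    print(f"ran {name} for {QUANTUM} unit, {max(remaining, 0)} left")
    if remaining > 0:
        tasks.append((name, remaining))   # not finished: back of the queue
```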

Separation of privileges

The proliferation of multi-user systems required solving the problem of privilege separation: preventing one program (whether buggy or maliciously prepared) from modifying the executable code or data of another program in memory, and preventing application programs from modifying the OS itself.

Privilege separation in the OS was supported by processor developers, who proposed architectures with two processor operating modes: "real" mode, in which the entire address space of the computer is accessible to the running program, and "protected" mode, in which access to the address space is limited to the range allocated when the program is started.

Real time

The use of general-purpose computers to control production processes required the implementation of "real time" operation: synchronization of program execution with external physical processes.

Including real-time functions in the OS made it possible to create systems that simultaneously serve production processes and solve other tasks (in batch mode and/or time-sharing mode).

Such operating systems are called real-time operating systems, abbreviated RTOS.

  File Systems and Structures

The gradual replacement of sequential-access media (punched tape, punch cards, and magnetic tape) with random-access drives (magnetic disks) led to the development of file structures and file systems.

  Fourth generation.

The end of the 70s. A working version of the TCP/IP protocol stack was created; in 1983 it was standardized. Vendor independence, flexibility, and efficiency, proven by the successful operation of the Internet, made this stack the main protocol stack for most operating systems.

The beginning of the 80s. The advent of personal computers. The rapid growth of local networks. Support for network functions became a prerequisite.

The 80s. The main standards for local area network communication technologies were adopted: Ethernet, Token Ring, FDDI. This made it possible to ensure compatibility of network operating systems at the lower levels.

The beginning of the 90s. Almost all OSs became networked. Specialized network operating systems appeared (for example, Cisco IOS running in routers).

Last decade.  Particular attention is paid to corporate network operating systems, which are characterized by a high degree of scalability, support for network operation, advanced security features, the ability to work in a heterogeneous environment, and the availability of centralized administration tools.

Stages of development of operating systems

Summary to Chapter 1

1. The main part of the software of a computing system is the operating system. Performing the control function, it defines the character of the computing system.

2. The operating system is the main system program of a computer. It is designed in the same way as other (application and tool) programs.

3. A huge variety of operating systems necessitated their classification. OSs are classified according to the following criteria: by the number of simultaneously performed tasks, by the number of simultaneously working users, by the number of simultaneously controlled processors, and by the operating mode.

4. The construction of modern operating systems is based on nine principles, each of which can be extrapolated to the development of application programs.

Review Questions for Chapter 1

1. What is the difference between operating systems and other programs?

2. What are the main functions of the operating system?

3. What types of software do you know?

4. What is the main part of the operating system?

5. What categories of operating systems do you know?

6. Is it possible to extrapolate (extend) the principles of building operating systems to the development of application programs?


CHAPTER 2. HISTORY OF THE DEVELOPMENT OF OPERATING SYSTEMS

The first period (1945-1955): the forties of the 20th century were marked by the advent of computer technology, but there were no operating systems; access to computing resources meant programming in machine code. The first generation of OS (the 50s) consisted of batch processing systems. In such systems a job is processed as part of a batch, and during processing there is no interaction between the user and the job.

In the mid-40s, the first vacuum-tube computing devices were created. At that time, the same group of people designed, operated, and programmed the computer. This was more research work in the field of computer technology than the use of computers as a tool for solving practical problems from other application areas. Programming was done exclusively in machine language. There were no operating systems; all tasks of organizing the computing process were handled manually by each programmer from the control panel. There was no system software other than libraries of mathematical and utility routines.

The second period (1955-1965): in the mid-50s a new period began in the development of computer technology, associated with the emergence of a new technical base: semiconductor elements. Second-generation computers became more reliable; they could now work continuously long enough to be entrusted with genuinely important practical tasks. It was during this period that staff roles were divided among programmers, operators, operation specialists, and computer developers.



In these years, the first algorithmic languages appeared, and with them the first system programs: compilers. The cost of CPU time increased, which required reducing the unproductive time spent between program launches. The first batch processing systems appeared, which simply automated the launch of one program after another and thereby increased the processor load factor. Batch processing systems were the prototype of modern operating systems; they became the first system programs designed to control the computing process. During the implementation of batch processing systems, a formalized job-control language was developed, with which the programmer told the system and the operator what work was to be done on the computer. A set of several jobs, usually in the form of a deck of punch cards, was called a job package.

The second generation of OS (the 60s): systems with multiprogramming and the first multiprocessor systems. Time-sharing OSs were developed (systems serving many users, who can interact with their jobs), along with the first real-time OSs (systems providing an immediate response to external events; interrupt systems were developed in these environments).

The third period (1965-1980): the next important period in the development of computers covers the years 1965-1980 (the 3rd and 4th OS generations). At this time, the technical base shifted from individual semiconductor elements such as transistors to integrated circuits, which opened great opportunities for the new, third generation of computers.

The creation of families of software-compatible machines is also characteristic of this period. The first family of software-compatible machines built on integrated circuits was the IBM/360 series. Built in the early 60s, this family significantly exceeded second-generation machines in terms of price/performance. Soon the idea of software-compatible machines became widely accepted.

Software compatibility also required operating system compatibility. Such operating systems would have to work on both large and small computing systems, with many or few diverse peripherals, in the commercial field and in the field of scientific research. Operating systems built with the intention of satisfying all these conflicting requirements proved to be extremely complex "monsters." They consisted of many millions of assembler lines written by thousands of programmers and contained thousands of errors, causing an endless stream of corrections. Each new version of the operating system fixed some errors and introduced others.

So, the third generation of operating systems (from the mid-60s) consists of multi-mode operating systems capable of operating simultaneously in batch processing mode, time-sharing mode, real-time mode, and multiprocessor mode.

The fourth generation (since the mid-70s) - operating systems that allow access to geographically distributed computers - network operating systems.

Despite their immense size and many problems, the operating systems of third-generation machines did indeed satisfy most consumer requirements. The most important achievement of this OS generation was the implementation of multiprogramming. Multiprogramming is a way of organizing the computing process in which several programs execute alternately on the same processor. While one program performs an input-output operation, the processor does not stand idle, as it did during sequential execution of programs (single-program mode), but executes another program (multi-program mode). Each program is loaded into its own area of RAM, called a partition.
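The following schematic Python model (an illustration only; the workloads are invented) shows the essence of multiprogramming as just described: when the running program reaches an I/O step, the processor is given to another ready program instead of idling:

```python
# Schematic model of multiprogramming: the CPU is handed from program
# to program; an "io" step is where a real OS would block the program
# and run another one, so here the program simply goes to the back of
# the ready queue and another program gets the CPU in the meantime.
programs = [
    {"name": "A", "steps": ["cpu", "io", "cpu"]},   # hypothetical workloads
    {"name": "B", "steps": ["cpu", "cpu", "io"]},
]

ready = list(programs)
while ready:
    prog = ready.pop(0)              # give the CPU to the next ready program
    step = prog["steps"].pop(0)
    print(f"program {prog['name']}: {step}")
    if prog["steps"]:
        ready.append(prog)           # more work left: requeue the program
```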

Another innovation was spooling. Spooling at that time was defined as a way of organizing the computing process in which jobs were read from punch cards to disk at the pace at which they arrived at the computer center; then, when the current job completed, a new job was loaded from disk into the freed partition.

Along with multi-program batch processing systems, a new type of OS appeared: time-sharing systems. The variant of multiprogramming used in time-sharing systems aims to create, for each individual user, the illusion of sole use of the computer.

The fourth period (1980 - present): this period in the evolution of operating systems is associated with the advent of large-scale integrated circuits (LSI). During these years, the degree of integration rose sharply and chips became cheaper. The computer became available to the individual, and the era of personal computers began. Architecturally, personal computers were no different from the class of minicomputers such as the PDP-11, but their price was significantly different. If the minicomputer made it possible for an enterprise department or university to have its own computer, the personal computer made this possible for an individual person.

Computers began to be widely used by non-specialists, which required the development of "friendly" software.

Two systems dominated the operating system market: MS-DOS and UNIX. The single-program, single-user MS-DOS was widely used on computers built around the Intel 8088 microprocessor, and later the 80286, 80386, and 80486. The multi-program, multi-user UNIX dominated among "non-Intel" computers, especially those built on high-performance RISC processors.

In the mid-80s, personal computer networks began to develop rapidly, operating under the control of networked or distributed operating systems.

In network operating systems, users must be aware of the presence of other computers and must log in to another computer in order to use its resources, mainly files. Each machine on the network runs its own local operating system, which differs from the OS of a stand-alone computer by the presence of additional tools that allow the computer to work on the network. A network OS has no fundamental differences from the OS of a single-processor computer. It necessarily contains software support for network interface devices (a network adapter driver), as well as means for remotely logging into other computers on the network and for accessing remote files, but these additions do not significantly change the structure of the operating system itself.


Ministry of Education and Science of the Russian Federation

State educational institution

higher vocational education

Magnitogorsk State Technical University

named after G.I. Nosov

Department of Informatics and Information Security

Test

in the discipline "Informatics"

Abstract on the topic "Evolution of the operating systems of computers of various types"

Completed by: student of the group 1304006-11-1

Option number 13

Sagdetdinov D.F.

Checked by: Senior Lecturer

Korinchenko G.M.

Magnitogorsk 2014

  • 1. The evolution of the operating systems of computers of various types
    • 1.1 The advent of the first operating systems
    • 1.2 The advent of multiprogramming mainframe operating systems
    • 1.3 Operating systems and wide area networks
    • 1.4 Operating systems of mini-computers. The first local area networks
    • 1.5 Development of operating systems in the 80s
    • 1.6 Features of the current stage of development of operating systems
  • 2. MathCAD Assignment No. 1: "Plotting two-dimensional graphs in MathCAD"
    • 2.1 Statement of the task
    • 2.2 Result: the resulting graph
  • 3. MathCAD Assignment No. 2: "Solving a system of linear algebraic equations (SLAE)"
    • 3.1 Statement of the task
    • 3.2 Result: the completed solution
  • 4. MathCAD Assignment No. 3: "Solving systems of nonlinear equations"
    • 4.1 Statement of the task
    • 4.2 Result: the completed solution
  • 5. MathCAD Assignment No. 4: "Solving nonlinear equations"
    • 5.1 Statement of the task
    • 5.2 Result: the completed solution

1. The evolution of the operating systems of computers of various types

For almost half a century of their existence, operating systems (OS) have come a long way, full of important events. Their development was greatly influenced by advances in the element base and computing equipment, so many stages of their development are closely tied to the emergence of new types of hardware platforms, such as mini-computers or personal computers.

Operating systems have undergone a serious evolution in connection with the new role of computers in local and global networks. The most important factor in their development has become the Internet.

1.1 The advent of the first operating systems

The birth of digital computers occurred shortly after the end of World War II. In the mid-40s, the first tube computing devices were created.

Programming at that time was done exclusively in machine language. There was no system software except libraries of mathematical and utility routines, which spared the programmer from writing, each time, code to compute the value of a mathematical function or to control a standard input/output device.

Operating systems had not yet appeared; all tasks of organizing the computing process were handled manually by each programmer from the control panel, a primitive input-output device consisting of buttons, switches, and indicators.

In the mid-50s, a new period began in the development of computer technology, associated with the advent of a new technical base: semiconductor elements. Processor speeds rose, and the volumes of RAM and external memory grew. Computers became more reliable; they could now work continuously long enough to be entrusted with genuinely important practical tasks.

At the same time, the first batch processing systems were developed that automated the entire sequence of operator actions to organize the computing process. The early batch processing systems were the prototype of modern operating systems; they became the first system programs designed not for data processing, but for controlling the computing process.

Batch processing systems significantly reduced the time spent on supporting activities to organize the computing process, which means that another step was taken to increase the efficiency of computer use.

However, at the same time, programmers lost direct access to the computer, which reduced their efficiency: making any correction took significantly more time than when working interactively at the machine's console.

1.2 The advent of multiprogramming mainframe operating systems

The next important period in the development of operating systems dates back to 1965-1975.

At this time, in the technical base of computers, there was a transition from individual semiconductor elements such as transistors to integrated circuits, which paved the way for the appearance of the next generation of computers.

During this period, almost all the basic mechanisms inherent in modern OSs were implemented:

Multiprogramming;

Multiprocessing;

Support for multi-terminal, multi-user mode;

Virtual memory;

File systems;

Access control;

Networking.

The revolutionary event of this stage was the industrial implementation of multiprogramming. Given the sharply increased capabilities of computers for processing and storing data, executing only one program at a time was extremely inefficient. The solution was multiprogramming: a way of organizing the computing process in which several programs reside in the computer's memory simultaneously and take turns executing on the same processor.

These improvements significantly raised the efficiency of the computing system: the computer could now be used almost constantly, rather than standing idle more than half of its working time, as before.

Multiprogramming was implemented in two variants: batch processing systems and time-sharing systems.

1.3 Operating systems and wide area networks

In the early 70s of the last century, the first network operating systems appeared. Unlike multi-terminal systems, they made it possible not only to disperse users but also to organize distributed storage and processing of data among several computers connected by communication links.

Any network operating system, on the one hand, performs all the functions of a local operating system, and on the other hand, has some additional tools that allow it to interact over the network with the operating systems of other computers.

Software modules that implement network functions appeared in operating systems gradually, with the development of network technologies, the hardware base of computers and the emergence of new tasks requiring network processing.

In 1969, the U.S. Department of Defense initiated work to merge defense and research computers into a single network. This network was called ARPANET and was the starting point for creating today's most famous global network, the Internet. The ARPANET combined computers of various types running various operating systems, to which modules implementing communication protocols common to all computers on the network were added.

In 1974, IBM announced its own network architecture for its mainframes, called SNA (System Network Architecture).

This multi-level architecture, in many ways similar to the standard OSI model that appeared somewhat later, provided terminal-to-terminal, terminal-to-computer, and computer-to-computer interaction over wide-area connections.

1.4 Operating systems of mini-computers. The first local area networks

By the mid-70s, along with mainframes, mini-computers were widely used. They were the first to use the advantages of large integrated circuits, which made it possible to implement sufficiently powerful functions at a relatively low cost for a computer.

The architecture of mini-computers was greatly simplified compared to mainframes, and this was reflected in their operating systems. Many functions of multi-program, multi-user mainframe OSs were cut back, given the limited resources of mini-computers.

The operating systems of mini-computers have often become specialized, for example, only for real-time control or only for maintaining the time sharing mode.

An important milestone in the history of mini-computers, and in the history of operating systems generally, was the creation of the UNIX OS. Its mass use began in the mid-70s. By this time, 90% of the UNIX code was written in the high-level language C.

The availability of mini-computers, and consequently their prevalence in enterprises, served as a powerful incentive for the creation of local networks. A company could afford to have several mini-computers located in one building or even in one room. Naturally, there was a need to exchange information between them and to share expensive peripheral equipment.

The first local networks were built using non-standard communication equipment, in the simplest case, by directly connecting the serial ports of computers. The software was also non-standard and was implemented as custom applications.

1.5 Development of operating systems in the 80s

The most important events of this decade include:

Development of the TCP/IP stack;

The rise of the Internet;

Standardization of LAN technologies;

The advent of personal computers and of operating systems for them.

A working version of the TCP/IP protocol stack was created in the late 70s.

In 1983, the TCP/IP protocol stack was adopted by the US Department of Defense as a military standard.

The introduction of TCP/IP protocols in the ARPANET gave this network all the basic features that distinguish the modern Internet.

The whole decade was marked by the constant emergence of new, more advanced versions of the UNIX OS. Among them were branded versions of UNIX: SunOS, HP-UX, Irix, AIX, and many others, in which computer manufacturers adapted the kernel code and system utilities for their hardware.

The beginning of the 80s was associated with another significant event for the history of operating systems - the advent of personal computers.

They served as a powerful catalyst for the rapid growth of local networks, creating for this an excellent material basis in the form of tens and hundreds of computers belonging to one enterprise and located within the same building. As a result, support for network functions has become a prerequisite for personal computer operating systems.

Network functions were implemented mainly by network shells that worked on top of the OS. During network operation, it is always necessary to support multi-user mode, in which one user is interactive, and the rest gain access to computer resources over the network. In this case, the operating system requires at least some minimum functional support for the multi-user mode.

In 1987, as a result of the joint efforts of Microsoft and IBM, the first multitasking operating system for personal computers with the Intel 80286 processor appeared, fully using the capabilities of protected mode: OS/2. This system was well thought out. It supported preemptive multitasking and virtual memory, a graphical user interface (from version 1.1), and a virtual machine for running DOS applications.

In the 80s, the main standards for local area network communication technologies were adopted: Ethernet in 1980, Token Ring in 1985, and FDDI in the late 80s. This made it possible to ensure compatibility of network operating systems at the lower levels, as well as to standardize the OS interface with network adapter drivers.

1.6 Features of the current stage of development of operating systems

In the 90s, almost all operating systems with a prominent place in the market became networked. Network functions are now built into the OS kernel as an integral part. Operating systems acquired tools for working with all the main local network technologies, as well as tools for creating composite networks.

Operating systems use the means of multiplexing several protocol stacks, due to which computers can support simultaneous network work with heterogeneous clients and servers.

In the second half of the 90s, all operating system manufacturers dramatically increased their support for working with the Internet. Besides the TCP/IP stack itself, distributions began to include utilities implementing popular Internet services such as telnet, ftp, DNS, and the Web.

The influence of the Internet was manifested in the fact that the computer has turned from a purely computing device into a means of communication with advanced computing capabilities.

Particular attention over the past decade has been given to corporate network operating systems. Their further development is one of the most important tasks in the foreseeable future.

A corporate operating system is distinguished by its ability to work well and reliably in large networks, typical of large enterprises with branches in dozens of cities and, possibly, in different countries. Such networks are inherently highly heterogeneous in software and hardware, so a corporate OS must interact seamlessly with operating systems of different types and run on different hardware platforms.

Creating a multi-functional, scalable directory service is a strategic direction in the evolution of the OS. Such a service is needed to turn the Internet into a predictable and manageable system, for example to guarantee the required quality of service for user traffic, support large distributed applications, and build an effective mail system.

At the current stage of the development of operating systems, security tools have come to the fore. This is due to the increased value of information processed by computers, as well as to the increased level of threats that exist when transmitting data over networks, especially over public ones such as the Internet. Many operating systems today have developed means of protecting information based on data encryption, authentication and authorization.

Multi-platform support is inherent in modern operating systems, that is, the ability to work on completely different types of computers. Many operating systems have special versions supporting cluster architectures, which provide high performance and fault tolerance.

In recent years, the long-term trend toward making computers more convenient for people has developed further. Human effectiveness is becoming the main factor determining the efficiency of the computing system as a whole.

The convenience of interactive work with a computer is constantly improved by including in the operating system rich graphical interfaces that use sound and video along with graphics. The operating system's user interface is becoming more intelligent, guiding the user's actions in typical situations and making routine decisions for them.

2. MathCAD Assignment No. 1: "Plotting two-dimensional graphs in MathCAD"

2.1 Statement of the task

Build two graphs. Display a table of values of the function specified in parametric form.

Table 1

Initial data
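Since the actual function from Table 1 (variant 13) is not reproduced here, a rough Python/matplotlib analogue of the task is sketched below using an assumed parametric curve x(t) = cos(3t), y(t) = sin(2t):

```python
# Sketch of the MathCAD task in Python: print a table of values of a
# parametric function and plot its two-dimensional graph. The curve is
# a stand-in; the real initial data comes from Table 1 (variant 13).
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 200)
x, y = np.cos(3 * t), np.sin(2 * t)

for ti, xi, yi in zip(t[::20], x[::20], y[::20]):   # table of values
    print(f"t = {ti:5.2f}  x = {xi:6.3f}  y = {yi:6.3f}")

plt.plot(x, y)                                      # the two-dimensional graph
plt.xlabel("x(t)")
plt.ylabel("y(t)")
plt.show()
```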

2.2 Result: the resulting graph

Figure 1 - Task 1

3. MathCAD Assignment No. 2: "Solving a system of linear algebraic equations (SLAE)"

3.1 Statement of the task

Find the solution of the SLAE (a rough Python sketch follows the list of methods):

1. using the inverse matrix;

2. using the built-in lsolve function;

3. using the Given-Find solve block.
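As promised, here is a rough Python (NumPy/SciPy) counterpart of the three methods, using a made-up 3x3 system, since the actual coefficients of variant 13 are not reproduced here:

```python
# Three ways to solve A x = b, mirroring the MathCAD methods.
# The matrix A and vector b below are invented example data.
import numpy as np
from scipy.optimize import fsolve

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 3.0, -1.0],
              [2.0, -1.0, 5.0]])
b = np.array([9.0, 4.0, 13.0])

x1 = np.linalg.inv(A) @ b                      # 1) via the inverse matrix
x2 = np.linalg.solve(A, b)                     # 2) analogue of lsolve
x3 = fsolve(lambda x: A @ x - b, np.zeros(3))  # 3) analogue of Given-Find
print(x1, x2, x3)                              # all three should agree
```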

3.2 Result: the completed solution

Figure 2 - Task 2

4. MathCAD Assignment No. 3: "Solving systems of nonlinear equations"

4.1 Statement of the task

Solve a system of nonlinear equations.

Build graphs of functions that define the equations of the system.

Graphically verify that the solution is correct.
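A sketch of the same task in Python/SciPy, with an assumed example system since the variant's actual equations are not reproduced here: the curves x^2 + y^2 = 4 and y = x^2 - 1 are intersected numerically and then plotted for a graphical check:

```python
# Solve an example nonlinear system and verify the solution graphically.
import numpy as np
from scipy.optimize import fsolve
import matplotlib.pyplot as plt

def system(v):
    x, y = v
    return [x**2 + y**2 - 4, y - x**2 + 1]

sol = fsolve(system, [1.0, 1.0])   # numerical solution, starting near (1, 1)
print("solution:", sol)

# graphical check: both curves should cross at the computed point
xs = np.linspace(-2, 2, 400)
plt.plot(xs, np.sqrt(np.maximum(4 - xs**2, 0)), label="x^2 + y^2 = 4 (upper)")
plt.plot(xs, xs**2 - 1, label="y = x^2 - 1")
plt.scatter(*sol)
plt.legend()
plt.show()
```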


4.2 Result: the completed solution

Figure 3 - Task 3

5. MathCAD Assignment No. 4: "Solving nonlinear equations"

5.1 Statement of the task

Find the solution of the nonlinear equation (a rough Python sketch follows the list of methods):

1. using the built-in root function;

2. using the built-in polyroots function.
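As promised, here are rough SciPy/NumPy counterparts of MathCAD's root and polyroots functions, shown on assumed example equations since the variant's actual equation is not reproduced here:

```python
# Analogues of MathCAD's root (single root on a bracket) and
# polyroots (all roots of a polynomial), on invented examples.
import numpy as np
from scipy.optimize import brentq

# 1) analogue of root: the root of cos(x) - x = 0 on [0, 1]
r = brentq(lambda x: np.cos(x) - x, 0.0, 1.0)
print("root:", r)                       # about 0.739

# 2) analogue of polyroots: all roots of x^3 - 6x^2 + 11x - 6
coeffs = [1, -6, 11, -6]                # highest degree first for np.roots
print("polyroots:", np.roots(coeffs))  # 1, 2, 3
```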

5.2 Result: the completed solution

Figure 4 - Task 4

We will consider the history of the development of computing rather than of operating systems alone, because hardware and software evolved together, exerting mutual influence on each other. The advent of new technical capabilities led to breakthroughs in creating convenient, effective, and safe programs, while fresh ideas in software stimulated the search for new technical solutions. These criteria (convenience, efficiency, and safety) played the role of natural-selection factors in the evolution of computing systems.

In the first period of development (1945-1955), computers were vacuum-tube machines without operating systems. The first steps in the development of electronic computers were taken at the end of World War II. In the mid-40s, the first tube computing devices were created, and the principle of a program stored in the machine's memory appeared (John von Neumann, June 1945). At that time, the same group of people designed, operated, and programmed the computer. This was more research work in the field of computer technology than regular use of computers as a tool for solving practical problems from other application areas. Programming was done exclusively in machine language. There was no talk of operating systems; all tasks of organizing the computing process were handled manually by each programmer from the control panel. Only one user could be at the console. The program was loaded into the machine's memory at best from a deck of punch cards, and usually via the switch panel.

The computing system performed only one operation at a time (input-output or the actual computation). Programs were debugged from the control panel by examining the state of memory and machine registers. At the end of this period, the first system software appeared: in 1951-1952, prototypes of the first compilers from symbolic languages appeared (Fortran and others), and in 1954 Nat Rochester developed an assembler for the IBM-701.

A substantial part of the time was spent preparing a program for launch, and the programs themselves executed strictly sequentially. This mode of operation is called sequential data processing. Overall, the first period is characterized by the extremely high cost of computing systems, their small number, and low efficiency of use.

The second period in the evolution of computer technology began in the mid-1950s and was associated with the emergence of a new technical base: semiconductor elements. The use of transistors instead of frequently burned-out vacuum tubes increased the reliability of computers. Machines could now work continuously long enough to be entrusted with practically important tasks. Computer power consumption dropped, cooling systems improved, and the size of computers decreased. The cost of operating and maintaining computer equipment fell. Commercial firms began to use computers. At the same time, algorithmic languages developed rapidly (LISP, COBOL, ALGOL-60, PL-1, etc.). The first real compilers, link editors, and libraries of mathematical and utility routines appeared. The programming process became simpler, since it was no longer necessary to entrust the development and the use of computers to the same people. It was during this period that staff roles were divided among programmers, operators, operation specialists, and computer developers.

The process of running programs changed. Now the user brings the program with its input data in the form of a deck of punch cards and indicates the required resources. Such a deck is called a job. The operator loads the job into the machine's memory and launches it for execution. The output data is printed on the printer, and the user receives it back after some (rather long) time.

Switching between requested resources forces programs to pause, so the processor is often idle. To use the computer more efficiently, jobs with similar resource requirements began to be collected together, forming a job package.

The first batch processing systems appeared, which simply automated the launch of one program from the package after another and thereby increased the processor load factor. When implementing batch processing systems, a formalized job-control language was developed, with which the programmer told the system and the operator what work was to be done on the computer. Batch processing systems became the prototype of modern operating systems; they were the first system programs designed to control the computing process.

The next important period in the development of computing machines dates from the early 60s to 1980. At this time, the technical base shifted from individual semiconductor elements such as transistors to integrated circuits. Computing equipment became more reliable and cheaper. The complexity and number of tasks solved by computers grew, and processor performance increased.

Improving the use of processor time was hindered by the low speed of mechanical input-output devices (a fast punch card reader could process 1200 cards per minute; printers printed up to 600 lines per minute). Instead of reading the job package directly from punch cards into memory, it began to be recorded in advance, first to magnetic tape and then to disk. When data input is required during the execution of a job, it is read from disk. Likewise, output information is first copied to a system buffer and written to tape or disk, and printed only after the job completes. Initially, the actual I/O operations were performed off-line, that is, on other, simpler, stand-alone computers. Later they began to run on the same computer that performed the calculations, that is, on-line. This technique is called spooling (short for Simultaneous Peripheral Operation On-Line) or data swapping. The introduction of spooling into batch systems made it possible to overlap the real I/O operations of one job with the execution of another job, but it required an interrupt mechanism to notify the processor of the completion of these operations.
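The following toy Python model (an illustration of the principle, not of any concrete system) captures the spooling scheme just described: jobs arriving from a slow card reader are buffered on disk, and a job is loaded into memory as soon as a partition frees up:

```python
# Toy model of spooling: a slow input device fills a disk queue, and the
# processor takes the next job from that queue when a partition frees up.
from collections import deque

disk_spool = deque()                 # jobs buffered on disk, in arrival order

def card_reader_arrival(job):
    disk_spool.append(job)           # slow device writes into the spool area
    print(f"spooled {job} to disk")

def partition_freed():
    if disk_spool:
        job = disk_spool.popleft()   # fast load from disk into the free partition
        print(f"loaded {job} into memory")

for j in ["job1", "job2", "job3"]:   # hypothetical job stream
    card_reader_arrival(j)
partition_freed()
partition_freed()
```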

Magnetic tapes were sequential-access devices: information was read from them in the order in which it was recorded. The appearance of the magnetic disk, for which the order of reading is unimportant (a direct-access device), led to the further development of computing systems. When a job package was processed on magnetic tape, the order in which jobs started was determined by the order in which they were entered. When processing a job package on a magnetic disk, it became possible to choose the next job to execute. Batch systems began to perform job scheduling: depending on the availability of requested resources, the urgency of the calculations, and so on, one job or another is selected for execution.

A further increase in processor efficiency was achieved with multiprogramming. The idea of multiprogramming is as follows: while one program performs an input-output operation, the processor does not stand idle, as it did in single-program mode, but executes another program. When the I/O operation ends, the processor returns to the first program. This idea resembles the behavior of a teacher and students in an exam: while one student (a program) is thinking over the answer to a question (an input-output operation), the teacher (the processor) listens to the answer of another student (a computation). Naturally, this situation requires several students in the room. Likewise, multiprogramming requires several programs in memory at the same time. Each program is loaded into its own area of RAM, called a partition, and must not affect the execution of another program (the students sit at separate desks and do not prompt each other).
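A toy model of such fixed partitions (the partition count and size are invented for the example): each program occupies its own region of RAM, and a program that finds no free partition must wait:

```python
# Toy model of fixed memory partitions: each program gets its own region
# of RAM, and the allocator never lets two programs share one.
PARTITION_SIZE = 64 * 1024                    # assumed 64 KB partitions
partitions = [None, None, None]               # three partitions, all free

def load_program(name):
    for i, owner in enumerate(partitions):
        if owner is None:
            partitions[i] = name              # the program gets this region
            print(f"{name} loaded into partition {i}")
            return True
    print(f"no free partition for {name}")    # must wait for a job to finish
    return False

for prog in ["A", "B", "C", "D"]:
    load_program(prog)
```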

The advent of multiprogramming requires a real revolution in the structure of a computer system. A special role here is played by hardware support (many hardware innovations appeared at the previous stage of evolution), the most significant features of which are listed below.

- Implementation of protection mechanisms. Programs must not have uncontrolled access to resource allocation, which leads to privileged and unprivileged instructions. Privileged instructions, such as I/O instructions, can be executed only by the operating system, which is said to run in privileged mode. The transfer of control from an application program to the OS is accompanied by a controlled mode change. In addition, there is memory protection, which isolates competing user programs from one another and the OS from user programs.

- The presence of interrupts. External interrupts notify the OS that an asynchronous event has occurred, for example, that an I/O operation has completed. Internal interrupts (now called exceptions) occur when the execution of a program leads to a situation requiring OS intervention, for example, division by zero or an attempted protection violation.

- The development of parallelism in the architecture. Direct memory access and the organization of input-output channels freed the central processor from routine operations.

The role of the operating system in organizing multiprogramming is equally important. It is responsible for the following operations (a small sketch of the context-switch idea follows the list):

Organizing the interface between the application program and the OS by means of system calls;

Queuing jobs in memory and allocating the processor to one of them, which requires scheduling of processor use;

Switching from one job to another, which requires saving the contents of the registers and the data structures needed by the job (in other words, its context) so that computation can continue correctly;

Since memory is a limited resource, memory-management strategies are needed, that is, the processes of placing, replacing, and fetching information in memory must be ordered;

Organizing the storage of information on external media in the form of files and granting access to a specific file only to certain categories of users;

Since programs may need authorized data exchange, it is necessary to provide them with means of communication;

For correct data exchange, it is necessary to resolve conflicts that arise when working with shared resources and to let programs coordinate their actions, i.e. to equip the system with synchronization tools.
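As promised, a minimal Python illustration of the context idea: "switching" jobs means saving one job's register state and restoring another's (the dictionaries here are simplified stand-ins for hardware registers):

```python
# Toy context switch: each job's context holds its program counter and
# register values; switching saves the current one and restores the next.
contexts = {
    "job1": {"pc": 100, "registers": [0, 0]},   # invented example contexts
    "job2": {"pc": 200, "registers": [7, 3]},
}
current = "job1"

def context_switch(next_job):
    global current
    # a real OS would copy hardware registers into the old context and
    # load them from the new one; the dicts stand in for the hardware
    print(f"saving {current}, restoring {next_job} at pc={contexts[next_job]['pc']}")
    current = next_job

context_switch("job2")
context_switch("job1")
```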

Multiprogramming systems made it possible to use system resources (processor, memory, peripherals) more efficiently, but they long remained batch systems. The user could not interact with a job directly and had to foresee all possible situations with the help of control cards. Debugging programs still took a long time and required studying multi-page printouts of the contents of memory and registers, or using debug printing.

The advent of cathode-ray displays and the rethinking of the keyboard's role suggested a solution to this problem. The logical extension of multiprogramming systems is time-sharing systems. In them, the processor switches between tasks not only during I/O operations but simply after a certain time has elapsed. These switches happen so often that users can interact with their programs while they run, that is, interactively. As a result, several users can work simultaneously on a single computer system. Each user must then have at least one program in memory. To reduce the limit on the number of working users, the idea was introduced that an executing program need not reside in RAM in full. The main part of the program sits on disk; the fragment that must execute at the moment is loaded into RAM, and an unneeded one is swapped back out to disk. This is implemented by the virtual memory mechanism, whose main advantage is creating the illusion of unlimited main memory.
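The virtual memory mechanism can be illustrated with a toy page-table lookup (the page size, table contents, and fault handling below are invented for the example):

```python
# Toy page-table lookup: only some pages are resident in RAM; touching a
# non-resident page causes a "page fault", after which the OS would load
# the page from disk and update the table.
PAGE_SIZE = 4096
page_table = {0: 5, 1: None, 2: 9}   # virtual page -> physical frame (None = on disk)

def translate(vaddr):
    vpage, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table.get(vpage)
    if frame is None:
        print(f"page fault at page {vpage}: load from disk, update table")
        frame = 1                     # pretend the OS chose frame 1 for the page
        page_table[vpage] = frame
    return frame * PAGE_SIZE + offset

print(translate(4100))                # page 1 is on disk: triggers a fault
```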

In time-sharing systems, the user could debug a program efficiently in interactive mode and write information to disk directly from the keyboard rather than via punch cards. The emergence of on-line files led to the need to develop advanced file systems.

In parallel with the internal evolution of computing systems, an external evolution also occurred. Before this period, computing systems were, as a rule, incompatible: each had its own operating system, its own instruction set, and so on. As a result, a program that ran successfully on one type of machine had to be completely rewritten and re-debugged to run on a computer of another type. At the beginning of the third period, the idea arose of creating families of software-compatible machines running the same operating system. The first family of software-compatible computers built on integrated circuits was the IBM/360 series of machines. Developed in the early 60s, this family significantly exceeded second-generation machines in terms of price/performance. It was followed by the PDP line of computers, incompatible with the IBM line, of which the PDP-11 was the best model.

The strength of "one family" was at the same time its weakness. The broad possibilities of this concept (the availability of all models, from mini-computers to gigantic machines; an abundance of diverse peripherals; various environments; various users) gave rise to a complex and cumbersome operating system. Millions of lines of assembler written by thousands of programmers contained many errors, which caused a continuous stream of publications about them and attempts to fix them. The OS/360 operating system alone contained more than 1000 known errors. Nevertheless, the idea of standardizing operating systems took firm root in users' minds and was subsequently developed actively.

The next period in the evolution of computing systems is associated with the advent of large-scale integrated circuits (LSI). In these years (from 1980 to the present) the degree of integration rose sharply and microcircuits became cheaper. A computer that did not differ architecturally from the PDP-11 became, in price and ease of use, available to an individual rather than to a department of an enterprise or a university. The era of personal computers arrived. Initially, personal computers were intended for single-user use in single-program mode, which entailed a degradation of the architecture of these computers and of their operating systems (in particular, the need for file and memory protection, job scheduling, and so on disappeared).

Computers began to be used not only by specialists, which required the development of "friendly" software.

However, the increasing complexity and variety of tasks solved on personal computers, the need to improve the reliability of their work led to the revival of almost all the features characteristic of the architecture of large computing systems.

In the mid-80s, computer networks, including networks of personal computers, began to develop rapidly, running network or distributed operating systems.

In network operating systems, users can access the resources of another computer on the network, but they must know that those resources are available and explicitly request them. Each machine on the network runs its own local operating system, which differs from the operating system of a stand-alone computer by the presence of additional tools (software support for network interface devices and for access to remote resources), but these additions do not change the structure of the operating system.

A distributed system, on the contrary, outwardly looks like an ordinary autonomous system. The user does not know, and need not know, where his files are stored (on a local or a remote machine) or where his programs execute. He may not even know whether his computer is connected to a network. The internal structure of a distributed operating system, however, differs significantly from that of autonomous systems.

In what follows, we will call autonomous operating systems classical operating systems.

Having reviewed the stages of development of computing systems, we can identify six main functions that classical operating systems performed in the course of their evolution:

Scheduling tasks and CPU usage;

Providing programs with communication and synchronization tools;

Memory management;

File system management;

I/O management;

Security.

Each of these functions is usually implemented as a subsystem, which is a structural component of the OS. In each operating system, these functions, of course, were implemented in their own way, in different volumes. They were not originally invented as components of operating systems, but appeared in the process of development, as computing systems became more convenient, efficient and safe. The evolution of human-created computing systems has taken such a path, but no one has yet proved that this is the only possible path for their development. Operating systems exist because at the moment their existence is a reasonable way to use computing systems.

When considering the evolution of the OS, it should be borne in mind that the time lag between the implementation of certain organizational principles in individual operating systems and their general recognition, as well as terminological uncertainty, do not allow an exact chronology of OS development to be given. However, the main milestones in the evolution of operating systems can now be identified quite accurately.

There are also various approaches to defining OS generations. It is known that OSs are divided into generations corresponding to the generations of computers and computing systems [5, 9, 10, 13]. Such a division cannot be considered fully satisfactory, since, as the experience of their creation has shown, methods of OS organization develop over a fairly wide range within a single computer generation. Another point of view does not tie OS generations to the corresponding computer generations; for example, OS generations have been defined by the level of the computer's input language, the modes of use of central processors, the forms of system operation, etc. [5, 13].

Apparently, the most appropriate approach is to distinguish stages of OS development within individual generations of computers and computing systems.

The first stage in the development of system software can be considered the use of library programs, standard and utility routines, and macros. The concept of library routines is the earliest, dating back to 1949 [4, 17]. With the advent of libraries, automatic support tools developed: loaders and link editors. These tools were used on first-generation computers, when operating systems as such did not yet exist.

The desire to eliminate the mismatch between processor performance and the speed of electromechanical input-output devices on the one hand, and the use of fairly fast magnetic tape and drum drives, and later magnetic disk drives, on the other, led to the need to solve the problems of buffering and of blocking/deblocking data. Special access-method programs arose, which the link editor inserted into object modules (later, the principles of poly-buffering began to be used). To maintain machine operability and ease operation, diagnostic programs were created. Thus, the basic system software was formed.

As the characteristics of computers improved and their productivity grew, it became clear that the existing basic system software was insufficient. The operating systems of early batch processing appeared: monitors. In such batch processing systems, while any work in the batch was being performed (translation, assembly, execution of a finished program), no part of the system software resided in RAM, since all memory was given to the current work. Then came monitor systems in which RAM was divided into three areas: a fixed area of the monitor system, a user area, and a shared memory area (for storing data that object modules could exchange).

Intensive development of data-management methods began, and an important OS function appeared: input-output performed without the participation of the central processor, the so-called spooling (from the English SPOOL: Simultaneous Peripheral Operation On-Line).

The appearance of new hardware developments (1959-1963) such as interrupt systems, timers, and channels stimulated the further development of the OS [4, 5, 9]. Executive systems appeared: sets of programs for distributing computer resources, communicating with the operator, controlling the computing process, and controlling input-output. Such executive systems made it possible to implement what was at the time a fairly efficient form of computer operation: single-program batch processing. These systems gave the user such tools as checkpoints, logical timers, the ability to build programs with an overlay structure, detection of program violations of system restrictions, file management, collection of accounting information, and so on.

However, as computer performance grew, single-program batch processing could not provide an economically acceptable level of machine use. The solution was multiprogramming: a way of organizing the computing process in which several programs reside in the computer's memory and execute alternately on one processor, and to start or continue computing one program it is not necessary to wait for the others to finish. In a multi-program environment, the problems of resource allocation and protection became more acute and harder to solve.

The theory of operating system construction was enriched in this period by a number of fruitful ideas. Various forms of multi-program operation appeared, including time sharing, the mode that supports multi-terminal systems. The concept of virtual memory was created and developed, followed by virtual machines. The time-sharing mode allowed the user to interact with their programs interactively, as had been possible before the advent of batch processing systems.

One of the first operating systems using these latest solutions was the MCP operating system (the main control program) created by Burroughs for its B5000 computers in 1963. Many concepts and ideas were implemented in this OS, which later became standard for many operating systems:

    multiprogramming;

    multiprocessor processing;

    virtual memory;

    the ability to debug programs in the source language;

    writing an operating system in a high-level language.

The famous time-sharing system of that period was CTSS (Compatible Time Sharing System), developed at the Massachusetts Institute of Technology (1963) for the IBM-7094 computer [37]. On its basis, the same institute, together with Bell Labs and General Electric, developed the next-generation time-sharing system MULTICS (Multiplexed Information and Computing Service). It is noteworthy that this OS was written mainly in the high-level language EPL (an early version of IBM's PL/1 language).

One of the most important events in the history of operating systems was the appearance in 1964 of the IBM family of computers called System/360, and later System/370 [11]. This was the world's first implementation of the concept of a family of software- and information-compatible computers, which later became standard for all companies in the computer industry.

It should be noted that the multi-terminal mode became the main form of using computers, both in time-sharing and in batch-processing systems. Not only the operator but all users could now formulate their jobs and manage their execution from their own terminals. Since terminals could soon be placed at considerable distances from the computer (thanks to modem telephone connections), systems for remote job entry and teleprocessing appeared, and modules implementing communication protocols were added to the OS [10, 13].

By this time, the distribution of functions between computer hardware and software had changed significantly. The operating system was becoming an "integral part of the computer", almost a continuation of the hardware. Processors acquired privileged (Supervisor in OS/360) and user (Task in OS/360) operating modes, a powerful interrupt system, memory protection, special registers for fast program switching, virtual memory support, etc.
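The two-mode idea can be sketched as a toy in C (purely illustrative; real processors enforce the mode bit and the trap in hardware, and all names here are invented): user code cannot modify protected state directly and must go through a supervisor call.

    /* Toy sketch of the supervisor/user split (illustrative only). */
    #include <stdio.h>

    enum mode { USER, SUPERVISOR };
    static enum mode cpu_mode = USER;
    static int protected_register = 0;  /* state only the OS may change */

    /* Privileged operation: refuses to run unless in supervisor mode. */
    static int set_protected(int value)
    {
        if (cpu_mode != SUPERVISOR) {
            printf("trap: privileged instruction in user mode\n");
            return -1;
        }
        protected_register = value;
        return 0;
    }

    /* Supervisor call: the only legal gate into privileged mode. */
    static int svc_set(int value)
    {
        cpu_mode = SUPERVISOR;          /* hardware would do this on trap */
        int rc = set_protected(value);
        cpu_mode = USER;                /* return to user mode            */
        return rc;
    }

    int main(void)
    {
        set_protected(42);              /* direct attempt: rejected       */
        svc_set(42);                    /* via supervisor call: allowed   */
        printf("protected_register = %d\n", protected_register);
        return 0;
    }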

In the early 70s, the first network operating systems appeared. Unlike teleprocessing systems, they made it possible not only to disperse users but also to organize distributed storage and processing of data among computers connected by communication links. A famous example is the ARPANET project of the US Department of Defense. In 1974, IBM announced its own SNA network architecture for its mainframes, providing terminal-to-terminal, terminal-to-computer, and computer-to-computer communications. In Europe, technology for building packet-switched networks based on the X.25 protocols was actively developed.

By the mid-70s, minicomputers (PDP-11, Nova, HP) were widely used alongside mainframes. Minicomputer architecture was much simpler, and many functions of multiprogram mainframe operating systems were cut down. Minicomputer operating systems became specialized (RSX-11M for time sharing, RT-11 as a real-time OS) and were not always multi-user.

An important milestone in the history of minicomputers, and in the history of operating systems in general, was the creation of UNIX. The system was written by Ken Thompson, one of the computer experts at Bell Labs who had worked on the MULTICS project. His UNIX was essentially a stripped-down single-user version of MULTICS. Its original name, UNICS (UNiplexed Information and Computing Service), was a joke on MULTICS (MULTiplexed Information and Computing Service). From the mid-70s, mass use of UNIX, about 90% of which was written in the C language, began. The widespread availability of C compilers made UNIX a uniquely portable OS, and since it was supplied with source code, it became the first open operating system. Its flexibility, elegance, powerful functionality, and openness allowed it to take a strong position in all classes of computers, from personal computers to supercomputers.

The availability of minicomputers stimulated the creation of local area networks. In the simplest LANs, computers were connected via serial ports. The first network application for UNIX, UUCP (Unix-to-Unix Copy Program), appeared in 1976.

Network systems developed further with the TCP/IP protocol stack: in 1983 it was adopted as a standard by the US Department of Defense and used in ARPANET. In the same year, ARPANET split into MILNET (for the US military) and a new ARPANET, which became known as the Internet.

The 80s were characterized by the appearance of ever more advanced versions of UNIX: SunOS, HP-UX, IRIX, AIX, etc. To solve the problem of their compatibility, the POSIX and XPG standards were adopted, defining the interfaces of these systems for applications.

Another significant event in the history of operating systems was the appearance of personal computers in the early 80s. They gave a powerful impetus to the spread of local networks, and as a result, support for network functions became a prerequisite for PC operating systems. However, user-friendly interfaces and network functions did not appear in PC operating systems immediately [13].

The most popular OS of the early stage of personal computer development was Microsoft's MS-DOS, a single-program, single-user OS with a command-line interface. Many user conveniences were provided in this OS by add-on programs such as Norton Commander, PC Tools, etc. The greatest influence on the development of PC software was the Windows operating environment, the first version of which appeared in 1985. Network functions were implemented using network shells and appeared in MS-DOS version 3.1. At the same time, Microsoft's network products appeared: MS-NET, and later LAN Manager, Windows for Workgroups, and then Windows NT.

Novell took a different path: its NetWare product is an operating system with built-in network functions. NetWare was distributed as an operating system for the central server of a local network and, thanks to the specialization of its file server functions, provided high-speed remote file access and increased data security. However, this OS had a specific application programming interface (API), which made application development difficult.

In 1987, the first multitasking OS for PCs appeared: OS/2, developed by Microsoft together with IBM. It was a well-designed system with virtual memory, a graphical interface, and the ability to run DOS applications. The network shells LAN Manager (Microsoft) and LAN Server (IBM) were created for it and became widespread. These shells were inferior in performance to the NetWare file server and consumed more hardware resources, but they had important advantages: they allowed any program written for OS/2, MS-DOS, or Windows to run on the server, and the computer on which they ran could also be used as a workstation. The unsuccessful market fate of OS/2 did not allow LAN Manager and LAN Server to capture a significant market share, but the operating principles of these network systems were largely embodied in the OS of the 90s, MS Windows NT.

In the 80s, the main standards for local-network communication technologies were adopted: Ethernet in 1980, Token Ring in 1985, and, in the late 80s, FDDI (Fiber Distributed Data Interface), a dual token-ring data transmission interface over fiber-optic channels. This made it possible to ensure compatibility of network operating systems at the lower levels and to standardize the interface between operating systems and network adapter drivers.

PCs ran not only operating systems specially designed for them (MS-DOS, NetWare, OS/2) but also adaptations of existing operating systems, in particular UNIX. The best-known system of this type was the Santa Cruz Operation version of UNIX (SCO UNIX).

In the 90s, almost all operating systems occupying a prominent place in the market became network operating systems. Network functions are embedded in the OS kernel as an integral part. Operating systems acquired means of multiplexing several protocol stacks, so that computers could work simultaneously with heterogeneous servers and clients. Specialized operating systems appeared, for example Cisco Systems' IOS, which runs in routers. In the second half of the 90s, all OS manufacturers sharply increased their support for working with the Internet: in addition to the TCP/IP protocol stack, distributions began to include utilities implementing popular Internet services: telnet, ftp, DNS, Web, etc.

Particular attention in the last decade has been paid, and is still being paid, to corporate network operating systems; this is one of the most important tasks for the foreseeable future. Corporate operating systems must work well and reliably in the large networks characteristic of large organizations (enterprises, banks, etc.) that have branches in many cities and possibly in different countries. A corporate OS should interact seamlessly with operating systems of various types and run on various hardware platforms. The leaders in the corporate OS class are now clear: MS Windows 2000/2003, UNIX and Linux systems, and Novell NetWare 6.5.
