Friday, 22 July 2011

Unix

From Wikipedia, the free encyclopedia
Unix
[Image: Unix history-simple.svg – Evolution of Unix and Unix-like systems]
Company / developer: Ken Thompson, Dennis Ritchie, Brian Kernighan, Douglas McIlroy, and Joe Ossanna at Bell Labs
Programmed in: C
OS family: Unix
Working state: Active
Source model: Historically closed source; some Unix projects (the BSD family and Illumos) are now open source
Initial release: 1969
Available language(s): English
Available programming language(s): C, C++
Kernel type: Monolithic
Default user interface: Command-line interface and graphical (X Window System)
License: Proprietary
Official website: unix.org
Unix (officially trademarked as UNIX) is a multitasking, multi-user computer operating system originally developed in 1969 by a group of AT&T employees at Bell Labs, including Ken Thompson, Dennis Ritchie, Brian Kernighan, Douglas McIlroy, and Joe Ossanna. The second edition of Unix was released on December 6, 1972, and by 1973 the system, first developed in assembly language, had been almost entirely recoded in C, greatly facilitating its further development and porting to other hardware. Today's Unix systems are split into various branches, developed over time by AT&T as well as various commercial vendors and non-profit organizations.
The Open Group, an industry standards consortium, owns the “UNIX” trademark. Only systems fully compliant with and certified according to the Single UNIX Specification are qualified to use the trademark; others might be called "Unix system-like" or "Unix-like" (though the Open Group disapproves[1] of this term). However, the term "Unix" is often used informally to denote any operating system that closely resembles the trademarked system.
During the late 1970s and early 1980s, the influence of Unix in academic circles led to large-scale adoption of Unix (particularly of the BSD variant, originating from the University of California, Berkeley) by commercial startups, and to commercial variants such as Solaris, HP-UX and AIX. Among all versions of Unix, OS X currently has the largest installed base on personal computers, with more than 55 million systems. Today, in addition to certified Unix systems such as those already mentioned, Unix-like operating systems such as MINIX, Linux and the BSD descendants (FreeBSD, NetBSD, OpenBSD, and DragonFly BSD) are commonly encountered. The term "traditional Unix" may be used to describe an operating system that has the characteristics of either Version 7 Unix or UNIX System V.

Overview

Unix operating systems are widely used in servers, workstations, and mobile devices.[2] The Unix environment and the client–server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers.
Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system. As a result, Unix became synonymous with "open systems".
Unix was designed to be portable, multi-tasking and multi-user in a time-sharing configuration. Unix systems are characterized by various concepts: the use of plain text for storing data; a hierarchical file system; treating devices and certain types of inter-process communication (IPC) as files; and the use of a large number of software tools, small programs that can be strung together through a command line interpreter using pipes, as opposed to using a single monolithic program that includes all of the same functionality. These concepts are collectively known as the Unix philosophy.
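The pipeline mechanism is small enough to sketch directly. The following minimal C program (a sketch, with error handling pared down for brevity) wires up the equivalent of the command line "ls | wc -l" from the classic pipe(), fork(), dup2() and exec() primitives:
    /* Sketch: build the pipeline "ls | wc -l" by hand. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];                           /* fd[0] = read end, fd[1] = write end */
        if (pipe(fd) == -1) { perror("pipe"); exit(1); }

        if (fork() == 0) {                   /* first child runs "ls" */
            dup2(fd[1], STDOUT_FILENO);      /* its stdout feeds the pipe */
            close(fd[0]); close(fd[1]);
            execlp("ls", "ls", (char *)NULL);
            perror("ls"); _exit(127);
        }
        if (fork() == 0) {                   /* second child runs "wc -l" */
            dup2(fd[0], STDIN_FILENO);       /* its stdin drains the pipe */
            close(fd[0]); close(fd[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            perror("wc"); _exit(127);
        }
        close(fd[0]); close(fd[1]);          /* parent closes both ends... */
        while (wait(NULL) > 0)               /* ...so wc sees EOF, then reaps */
            ;
        return 0;
    }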
Under Unix, the "operating system" consists of many utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common "low level" tasks that most programs share, and, perhaps most importantly, schedules access to hardware to avoid conflicts if two programs try to access the same resource or device simultaneously. To mediate such access, the kernel was given special rights on the system, leading to the division between user-space and kernel-space.
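The boundary can be seen in even a trivial program: in the sketch below, printf() is ordinary user-space library code, while write() traps into the kernel, which alone is permitted to drive the underlying device:
    /* Sketch: a C library call versus a direct system call. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        printf("via the C library, in user space\n");
        fflush(stdout);                       /* flush the stdio buffer first */
        /* write() crosses into kernel space to reach the device */
        write(STDOUT_FILENO, "via a system call\n", 18);
        return 0;
    }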
The microkernel concept was introduced in an effort to reverse the trend towards larger kernels and return to a system in which most tasks were completed by smaller utilities. In an era when a "normal" computer consisted of a hard disk for storage and a data terminal for input and output (I/O), the Unix file model worked quite well as most I/O was "linear". However, modern systems include networking and other new devices. As graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse, and in the 1980s non-blocking I/O and the set of inter-process communication mechanisms was augmented (sockets, shared memory, message queues, semaphores), and functionalities such as network protocols were moved out of the kernel.
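A sketch of the multiplexed style those additions made possible, using the POSIX select() call to wait for input to become ready instead of blocking in a single read():
    /* Sketch: wait up to five seconds for asynchronous input on stdin. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/select.h>

    int main(void)
    {
        fd_set readfds;
        struct timeval tv = { 5, 0 };            /* five-second timeout */

        FD_ZERO(&readfds);
        FD_SET(STDIN_FILENO, &readfds);          /* watch standard input */

        int ready = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &tv);
        if (ready > 0 && FD_ISSET(STDIN_FILENO, &readfds)) {
            char buf[256];
            ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
            if (n > 0)
                write(STDOUT_FILENO, buf, (size_t)n);   /* echo what arrived */
        } else if (ready == 0) {
            puts("timed out; no input arrived");
        }
        return 0;
    }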

History

In the 1960s, Massachusetts Institute of Technology, AT&T Bell Labs, and General Electric developed an experimental operating system called Multics for the GE-645 mainframe.[3] Multics introduced many innovations, but had many problems.
Bell Labs, frustrated by the size and complexity of Multics but not its aims, slowly pulled out of the project. The last Bell Labs researchers to leave Multics, Ken Thompson, Dennis Ritchie, M. D. McIlroy, and J. F. Ossanna,[4] decided to redo the work on a much smaller scale. As Ritchie later put it: "What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing, as supplied by remote-access, time-shared machines, is not just to type programs into a terminal instead of a keypunch, but to encourage close communication."[4]
While Ken Thompson still had access to the Multics environment, he wrote simulations for the new file and paging system on it. He also programmed a game called Space Travel, but the game needed a more efficient and less expensive machine to run on, and eventually a little-used PDP-7 at Bell Labs fit the bill.[5] On this PDP-7, a team of Bell Labs researchers led by Thompson and Ritchie, including Rudd Canaday, developed a hierarchical file system, the concepts of computer processes and device files, a command-line interpreter, and some small utility programs.[4]

1970s

In 1970 Peter Neumann coined the project name Unics (UNiplexed Information and Computing Service) as a pun on Multics (Multiplexed Information and Computing Service).[6] Unics could eventually support multiple simultaneous users, and it was renamed Unix.
Up until this point there had been no financial support from Bell Labs. When the Computer Science Research Group wanted to use Unix on a much larger machine than the PDP-7, Thompson and Ritchie managed to trade the promise of adding text processing capabilities to Unix for a PDP-11/20 machine, which led to some financial support from Bell. In 1970 the Unix operating system was officially named for the first time and ran on the PDP-11/20. It gained a text formatting program called roff and a text editor; Unix, roff, and the editor were all written in PDP-11/20 assembly language. Bell Labs used this initial "text processing system" for preparing patent applications. Roff soon evolved into troff, the first electronic publishing program with full typesetting capability. The UNIX Programmer's Manual was published on November 3, 1971.
In 1972, Unix was rewritten in the C programming language, contrary to the general notion at the time "that something as complex as an operating system, which must deal with time-critical events, had to be written exclusively in assembly language".[7] The migration from assembly language to the higher-level language C resulted in much more portable software, requiring only a relatively small amount of machine-dependent code to be replaced when porting Unix to other computing platforms.
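The porting pattern this made practical can be sketched in a few lines: the bulk of a program is portable C, and only a small machine-dependent corner is selected per target. The architecture-test macros below are modern compiler conventions, shown purely as an illustration:
    /* Sketch: portable code with a small machine-dependent corner. */
    #include <stdio.h>

    #if defined(__x86_64__)
    #define ARCH "x86-64"                /* machine-dependent part... */
    #elif defined(__aarch64__)
    #define ARCH "ARM64"
    #else
    #define ARCH "another architecture"  /* ...with a portable fallback */
    #endif

    int main(void)
    {
        printf("portable code, compiled for %s\n", ARCH);
        return 0;
    }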
Under a 1956 consent decree in settlement of an antitrust case, AT&T (the parent organization of Bell Labs) had been forbidden from entering the computer business. Unix could not, therefore, be turned into a product; indeed, under the terms of the consent decree, Bell Labs was required to license its non-telephone technology to anyone who asked. Ken Thompson quietly began answering requests by shipping out tapes and disk packs – each, according to legend, with a note signed “love, ken”.[8]
AT&T made Unix available to universities and commercial firms, as well as the United States government, under licenses. The licenses included all source code including the machine-dependent parts of the kernel, which were written in PDP-11 assembly code. Copies of the annotated Unix kernel sources circulated widely in the late 1970s in the form of a much-copied book by John Lions of the University of New South Wales, the Lions' Commentary on UNIX 6th Edition, with Source Code, which led to considerable use of Unix as an educational example.
Versions of the Unix system were determined by editions of its user manuals. For example, "Fifth Edition UNIX" and "UNIX Version 5" have both been used to designate the same version. Development expanded, with Versions 4, 5, and 6 being released by 1975. These versions added the concept of pipes, which led to the development of a more modular code-base and quicker development cycles. Version 5 and especially Version 6 led to a plethora of different Unix versions both inside and outside Bell Labs, including PWB/UNIX and the first commercial Unix, IS/1. As more of Unix was rewritten in C, portability also increased. A group at the University of Wollongong ported Unix to the Interdata 7/32. Bell Labs developed several ports for research purposes and internal use at AT&T. Target machines included an Intel 8086-based computer (with custom-built MMU) and the UNIVAC 1100.[9]
In May 1975, ARPA documented the benefits of the Unix time-sharing system, which "presents several interesting capabilities" as an ARPA network mini-host, in RFC 681.
In 1978, UNIX/32V was released for DEC's then-new VAX system. By this time, over 600 machines were running Unix in some form. Version 7 Unix, the last version of Research Unix to be released widely, came out in 1979. Versions 8, 9 and 10 were developed through the 1980s but were released only to a few universities, though they did generate papers describing the new work. This research led to the development of Plan 9 from Bell Labs, a new portable distributed system.

1980s

A Unix desktop running the X Window System graphical user interface. Shown are a number of client applications common to the MIT X Consortium's distribution, including Tom's Window Manager, an X Terminal, Xbiff, xload, and a graphical manual page browser.
AT&T licensed UNIX System III, based largely on Version 7, for commercial use, the first version launching in 1982. This also included support for the VAX. AT&T continued to issue licenses for older Unix versions. To end the confusion between all its differing internal versions, AT&T combined them into UNIX System V Release 1. This introduced a few features such as the vi editor and curses from the Berkeley Software Distribution of Unix developed at the University of California, Berkeley. This also included support for the Western Electric 3B series of machines.
In 1983, the U.S. Department of Justice settled its second antitrust case against AT&T and broke up the Bell System. This relieved AT&T from the 1956 consent decree that had prevented them from turning Unix into a product. AT&T promptly rushed to commercialize Unix System V, a move that nearly killed Unix.[8] The Free Software Foundation (FSF) was founded the same year by Richard Stallman.
Since the newer commercial UNIX licensing terms were not as favorable for academic use as the older versions of Unix, the Berkeley researchers continued to develop BSD Unix as an alternative to UNIX System III and V, originally on the PDP-11 architecture (the 2.xBSD releases, ending with 2.11BSD) and later for the VAX-11 (the 4.x BSD releases). Many contributions to Unix first appeared in BSD releases, notably the C shell with job control (modelled on ITS). Perhaps the most important aspect of the BSD development effort was the addition of TCP/IP network code to the mainstream Unix kernel. The BSD effort produced several significant releases that contained network code: 4.1cBSD, 4.2BSD, 4.3BSD, 4.3BSD-Tahoe ("Tahoe" being the nickname of the Computer Consoles Inc. Power 6/32 architecture, the first non-DEC target of the BSD kernel), Net/1, 4.3BSD-Reno (named to match "Tahoe", and because the release was something of a gamble), Net/2, 4.4BSD, and 4.4BSD-lite. The network code found in these releases is the ancestor of much of the TCP/IP network code in use today, including code that was later released in AT&T System V UNIX and early versions of Microsoft Windows. The accompanying Berkeley sockets API is a de facto standard for networking APIs and has been copied on many platforms.
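The Berkeley sockets style survives essentially unchanged on modern systems. A minimal TCP client sketch follows; the host name and port are placeholders, and getaddrinfo() is the modern POSIX name-lookup routine rather than the original 4.2BSD call:
    /* Sketch: a minimal TCP client using the Berkeley sockets API. */
    #include <stdio.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct addrinfo hints = {0}, *res;
        hints.ai_family = AF_UNSPEC;          /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;      /* TCP */

        /* "example.com" and port "80" are illustrative placeholders */
        if (getaddrinfo("example.com", "80", &hints, &res) != 0) {
            fprintf(stderr, "name lookup failed\n");
            return 1;
        }
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd == -1 || connect(fd, res->ai_addr, res->ai_addrlen) == -1) {
            perror("connect");
            return 1;
        }
        const char req[] = "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n";
        write(fd, req, sizeof req - 1);       /* send a trivial request */
        char buf[512];
        ssize_t n = read(fd, buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }
        close(fd);
        freeaddrinfo(res);
        return 0;
    }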
Other companies began to offer commercial versions of the UNIX System for their own mini-computers and workstations. Most of these new Unix flavors were developed from the System V base under a license from AT&T; however, others were based on BSD instead. One of the leading developers of BSD, Bill Joy, went on to co-found Sun Microsystems in 1982 and created SunOS for their workstation computers. In 1980, Microsoft announced its first Unix for 16-bit microcomputers called Xenix, which the Santa Cruz Operation (SCO) ported to the Intel 8086 processor in 1983, and eventually branched Xenix into SCO UNIX in 1989.
During this period (before PC compatible computers with MS-DOS became dominant), industry observers expected that UNIX, with its portability and rich capabilities, was likely to become the industry standard operating system for microcomputers.[10] In 1984 several companies established the X/Open consortium with the goal of creating an open system specification based on UNIX. Despite early progress, the standardization effort collapsed into the "Unix wars", with various companies forming rival standardization groups. The most successful Unix-related standard turned out to be the IEEE's POSIX specification, designed as a compromise API readily implemented on both BSD and System V platforms, published in 1988 and soon mandated by the United States government for many of its own systems.
AT&T added various features into UNIX System V, such as file locking, system administration, STREAMS, new forms of IPC, the Remote File System and TLI. AT&T cooperated with Sun Microsystems and between 1987 and 1989 merged features from Xenix, BSD, SunOS, and System V into System V Release 4 (SVR4), independently of X/Open. This new release consolidated all the previous features into one package, and heralded the end of competing versions. It also increased licensing fees.
During this time a number of vendors, including Digital Equipment, Sun, Addamax and others, began building trusted versions of UNIX for high-security applications, mostly designed for military and law-enforcement use.

1990s

In 1990, the Open Software Foundation released OSF/1, their standard Unix implementation, based on Mach and BSD. The Foundation was started in 1988 and was funded by several Unix-related companies that wished to counteract the collaboration of AT&T and Sun on SVR4. Subsequently, AT&T and another group of licensees formed the group "UNIX International" in order to counteract OSF. This escalation of conflict between competing vendors again gave rise to the phrase "Unix wars".
In 1991, a group of BSD developers (Donn Seeley, Mike Karels, Bill Jolitz, and Trent Hein) left the University of California to found Berkeley Software Design, Inc (BSDI). BSDI produced a fully functional commercial version of BSD Unix for the inexpensive and ubiquitous Intel platform, which started a wave of interest in the use of inexpensive hardware for production computing. Shortly after it was founded, Bill Jolitz left BSDI to pursue distribution of 386BSD, the free software ancestor of FreeBSD, OpenBSD, and NetBSD.
In 1991, Linus Torvalds began work on Linux, a Unix clone that runs on IBM PC clones.
By 1993 most commercial vendors had changed their variants of Unix to be based on System V with many BSD features added on top. The creation of the COSE initiative that year by the major players in Unix marked the end of the most notorious phase of the Unix wars, and was followed by the merger of UI and OSF in 1994. The new combined entity, which retained the OSF name, stopped work on OSF/1 that year. By that time the only vendor using it was Digital, which continued its own development, rebranding their product Digital UNIX in early 1995.
Shortly after UNIX System V Release 4 was produced, AT&T sold all its rights to UNIX to Novell. (Dennis Ritchie likened this to the Biblical story of Esau selling his birthright for the proverbial "mess of pottage".[11]) Novell developed its own version, UnixWare, merging its NetWare with UNIX System V Release 4. Novell tried to use this to battle against Windows NT, but their core markets suffered considerably.
In 1993, Novell decided to transfer the UNIX trademark and certification rights to the X/Open Consortium.[12] In 1996, X/Open merged with OSF, creating the Open Group. Various standards by the Open Group now define what is and what is not a "UNIX" operating system, notably the post-1998 Single UNIX Specification.
In 1995, the business of administering and supporting the existing UNIX licenses, plus rights to further develop the System V code base, were sold by Novell to the Santa Cruz Operation.[13] Whether Novell also sold the copyrights is currently the subject of litigation (see below).
In 1997, Apple Computer sought out a new foundation for its Macintosh operating system and chose NEXTSTEP, an operating system developed by NeXT. The core operating system, which was based on BSD and the Mach kernel, was renamed Darwin after Apple acquired it. The deployment of Darwin in Mac OS X makes it, according to a statement made by an Apple employee at a USENIX conference, the most widely used Unix-based system in the desktop computer market.

2000s

In 2000, SCO sold its entire UNIX business and assets to Caldera Systems, which later changed its name to The SCO Group.
The bursting of the dot-com bubble (2001–2003) led to significant consolidation of versions of Unix. Of the many commercial variants of Unix that were born in the 1980s, only Solaris, HP-UX, and AIX were still doing relatively well in the market, though SGI's IRIX persisted for quite some time. Of these, Solaris had the largest market share in 2005.[14]
In 2003, the SCO Group started legal action against various users and vendors of Linux. SCO alleged that Linux contained copyrighted Unix code now owned by The SCO Group. Other allegations included trade-secret violations by IBM and contract violations by former Santa Cruz customers who had since converted to Linux. However, Novell disputed the SCO Group's claim to hold copyright on the UNIX source base. According to Novell, SCO (and hence the SCO Group) was effectively a franchise operator for Novell, which had retained the core copyrights, veto rights over future licensing activities of SCO, and 95% of the licensing revenue. The SCO Group disagreed, and the dispute resulted in the SCO v. Novell lawsuit. On August 10, 2007, a major portion of the case was decided in Novell's favor (that Novell had the copyright to UNIX, and that the SCO Group had improperly kept money that was due to Novell). The court also ruled that "SCO is obligated to recognize Novell's waiver of SCO's claims against IBM and Sequent". After the ruling, Novell announced it had no interest in suing people over Unix and stated, "We don't believe there is Unix in Linux".[15][16][17] On August 24, 2009, SCO persuaded the 10th Circuit Court of Appeals to partially overturn this decision, sending the lawsuit back to the courts for a jury trial.[18][19][20]
On March 30, 2010, following a jury trial, Novell, and not The SCO Group, was "unanimously [found]" to be the owner of the UNIX and UnixWare copyrights.[21] The SCO Group, through bankruptcy trustee Edward Cahn, decided to continue the lawsuit against IBM for causing a decline in SCO revenues.[22]
In 2005, Sun Microsystems released the bulk of its Solaris system code (based on UNIX System V Release 4) into an open source project called OpenSolaris. New Sun OS technologies, notably the ZFS file system, were first released as open source code via the OpenSolaris project. Soon afterwards, OpenSolaris spawned several non-Sun distributions. In 2010, after Oracle acquired Sun, OpenSolaris was officially discontinued, but the development of derivatives continued.

Standards

Beginning in the late 1980s, an open operating system standardization effort now known as POSIX provided a common baseline for all operating systems; IEEE based POSIX around the common structure of the major competing variants of the Unix system, publishing the first POSIX standard in 1988. In the early 1990s a separate but very similar effort was started by an industry consortium, the Common Open Software Environment (COSE) initiative, which eventually became the Single UNIX Specification administered by The Open Group. Starting in 1998 the Open Group and IEEE started the Austin Group, to provide a common definition of POSIX and the Single UNIX Specification.
In an effort towards compatibility, in 1999 several Unix system vendors agreed on SVR4's Executable and Linkable Format (ELF) as the standard for binary and object code files. The common format allows substantial binary compatibility among Unix systems operating on the same CPU architecture.
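Every ELF file begins with the four-byte magic number 0x7F followed by the letters "ELF". A short sketch that checks a file for that signature:
    /* Sketch: report whether a file starts with the ELF magic number. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 2;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror(argv[1]); return 2; }

        unsigned char magic[4];
        int ok = fread(magic, 1, 4, f) == 4 &&
                 magic[0] == 0x7f && magic[1] == 'E' &&
                 magic[2] == 'L' && magic[3] == 'F';
        fclose(f);
        printf("%s: %s\n", argv[1], ok ? "ELF binary" : "not an ELF file");
        return ok ? 0 : 1;
    }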
The Filesystem Hierarchy Standard was created to provide a reference directory layout for Unix-like operating systems, particularly Linux.

Components

The Unix system is composed of several components that are normally packed together. By including – in addition to the kernel of an operating system – the development environment, libraries, documents, and the portable, modifiable source-code for all of these components, Unix was a self-contained software system. This was one of the key reasons it emerged as an important teaching and learning tool and has had such a broad influence.
The inclusion of these components did not make the system large – the original V7 UNIX distribution, consisting of copies of all of the compiled binaries plus all of the source code and documentation, occupied less than 10 MB and arrived on a single 9-track magnetic tape. The printed documentation, typeset from the on-line sources, was contained in two volumes.
The names and filesystem locations of the Unix components have changed substantially across the history of the system. Nonetheless, the V7 implementation is considered by many to have the canonical early structure:
  • Kernel – source code in /usr/sys, composed of several sub-components:
    • conf – configuration and machine-dependent parts, including boot code
    • dev – device drivers for control of hardware (and some pseudo-hardware)
    • sys – operating system "kernel", handling memory management, process scheduling, system calls, etc.
    • h – header files, defining key structures within the system and important system-specific invariants
  • Development Environment – Early versions of Unix contained a development environment sufficient to recreate the entire system from source code (a minimal build sketch follows this list):
    • cc – C language compiler (first appeared in V3 Unix)
    • as – machine-language assembler for the machine
    • ld – linker, for combining object files
    • lib – object-code libraries (installed in /lib or /usr/lib). libc, the system library with C run-time support, was the primary library, but there have always been additional libraries for such things as mathematical functions (libm) or database access. V7 Unix introduced the first version of the modern "Standard I/O" library stdio as part of the system library. Later implementations increased the number of libraries significantly.
    • make – build manager (introduced in PWB/UNIX), for effectively automating the build process
    • include – header files for software development, defining standard interfaces and system invariants
    • Other languages – V7 Unix contained a Fortran-77 compiler, a programmable arbitrary-precision calculator (bc, dc), and the awk scripting language, and later versions and implementations contain many other language compilers and toolsets. Early BSD releases included Pascal tools, and many modern Unix systems also include the GNU Compiler Collection as well as or instead of a proprietary compiler system.
    • Other tools – including an object-code archive manager (ar), symbol-table lister (nm), compiler-development tools (e.g. lex & yacc), and debugging tools.
  • Commands – Unix makes little distinction between commands (user-level programs) for system operation and maintenance (e.g. cron), commands of general utility (e.g. grep), and more general-purpose applications such as the text formatting and typesetting package. Nonetheless, some major categories are:
    • sh – The "shell" programmable command line interpreter, the primary user interface on Unix before window systems appeared, and even afterward (within a "command window").
    • Utilities – the core tool kit of the Unix command set, including cp, ls, grep, find and many others. Subcategories include:
      • System utilities – administrative tools such as mkfs, fsck, and many others.
      • User utilities – environment management tools such as passwd, kill, and others.
    • Document formatting – Unix systems were used from the outset for document preparation and typesetting systems, and included many related programs such as nroff, troff, tbl, eqn, refer, and pic. Some modern Unix systems also include packages such as TeX and Ghostscript.
    • Graphics – The plot subsystem provided facilities for producing simple vector plots in a device-independent format, with device-specific interpreters to display such files. Modern Unix systems also generally include X11 as a standard windowing system and GUI, and many support OpenGL.
    • Communications – Early Unix systems contained no inter-system communication, but did include the inter-user communication programs mail and write. V7 introduced the early inter-system communication system UUCP, and systems beginning with BSD release 4.1c included TCP/IP utilities.
  • Documentation – Unix was the first operating system to include all of its documentation online in machine-readable form. The documentation included:
    • man – manual pages for each command, library component, system call, header file, etc.
    • doc – longer documents detailing major subsystems, such as the C language and troff
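As a small illustration of how the development components above fit together, here is the canonical first C program, with the build steps noted in comments. The exact commands and flags are illustrative of a modern system; V7's differed in detail:
    /* hello.c - built with the tools listed above:
     *
     *     cc -c hello.c        # cc compiles (invoking as behind the scenes)
     *     cc hello.o -o hello  # and drives ld, linking against libc
     *     make hello           # or make automates the steps
     */
    #include <stdio.h>

    int main(void)
    {
        printf("hello, world\n");    /* uses stdio from the system library */
        return 0;
    }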

Impact

The Unix system had a significant impact on other operating systems. It won its success by:
  • Allowing direct, interactive use of the machine.
  • Moving away from the total control of businesses like IBM and DEC.
  • AT&T being willing to give the software away for free.
  • Running on cheap hardware.
  • Being easy to adapt and move to different machines.
It was written in high level language rather than assembly language (which had been thought necessary for systems implementation on early computers). Although this followed the lead of Multics and Burroughs, it was Unix that popularized the idea.
Unix had a drastically simplified file model compared to many contemporary operating systems, treating all kinds of files as simple byte arrays. The file system hierarchy contained machine services and devices (such as printers, terminals, or disk drives), providing a uniform interface, but at the expense of occasionally requiring additional mechanisms such as ioctl and mode flags to access features of the hardware that did not fit the simple "stream of bytes" model. The Plan 9 operating system pushed this model even further and eliminated the need for additional mechanisms.
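The ioctl mechanism can be sketched concretely: a terminal's window size has no natural place in a stream of bytes, so it is fetched with an out-of-band request. TIOCGWINSZ is a BSD-derived request available on most modern Unix systems:
    /* Sketch: fetch a property that does not fit the byte-stream model. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>

    int main(void)
    {
        struct winsize ws;
        if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == -1) {
            perror("ioctl(TIOCGWINSZ)");     /* e.g. stdout is not a terminal */
            return 1;
        }
        printf("terminal is %u rows by %u columns\n",
               (unsigned)ws.ws_row, (unsigned)ws.ws_col);
        return 0;
    }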
Unix also popularized the hierarchical file system with arbitrarily nested subdirectories, originally introduced by Multics. Other common operating systems of the era had ways to divide a storage device into multiple directories or sections, but they had a fixed number of levels, often only one level. Several major proprietary operating systems eventually added recursive subdirectory capabilities also patterned after Multics. DEC's RSX-11M's "group, user" hierarchy evolved into VMS directories, CP/M's volumes evolved into MS-DOS 2.0+ subdirectories, and HP's MPE group.account hierarchy and IBM's SSP and OS/400 library systems were folded into broader POSIX file systems.
Making the command interpreter an ordinary user-level program, with additional commands provided as separate programs, was another Multics innovation popularized by Unix. The Unix shell used the same language for interactive commands as for scripting (shell scripts – there was no separate job control language like IBM's JCL). Since the shell and OS commands were "just another program", the user could choose (or even write) his own shell. New commands could be added without changing the shell itself. Unix's innovative command-line syntax for creating modular chains of producer-consumer processes (pipelines) made a powerful programming paradigm (coroutines) widely available. Many later command-line interpreters have been inspired by the Unix shell.
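The essence of such a shell is a short read/fork/exec/wait loop in an ordinary user program. A deliberately minimal sketch, handling one argument-less command per line, with no pipes, quoting, or job control:
    /* Sketch: the skeleton of a Unix command interpreter. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        char line[256];
        for (;;) {
            fputs("$ ", stdout);                   /* prompt */
            if (!fgets(line, sizeof line, stdin))  /* EOF ends the session */
                break;
            line[strcspn(line, "\n")] = '\0';      /* strip the newline */
            if (line[0] == '\0')
                continue;
            if (fork() == 0) {                     /* child runs the command */
                execlp(line, line, (char *)NULL);
                perror(line);                      /* exec only returns on error */
                _exit(127);
            }
            wait(NULL);                            /* parent waits, then re-prompts */
        }
        return 0;
    }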
A fundamental simplifying assumption of Unix was its focus on ASCII text for nearly all file formats. There were no "binary" editors in the original version of Unix – the entire system was configured using textual shell command scripts. The common denominator in the I/O system was the byte – unlike "record-based" file systems. The focus on text for representing nearly everything made Unix pipes especially useful, and encouraged the development of simple, general tools that could be easily combined to perform more complicated ad hoc tasks. The focus on text and bytes made the system far more scalable and portable than other systems. Over time, text-based applications have also proven popular in application areas, such as printing languages (PostScript, ODF), and at the application layer of the Internet protocols, e.g., FTP, SMTP, HTTP, SOAP and SIP.
Unix popularized a syntax for regular expressions that found widespread use. The Unix programming interface became the basis for a widely implemented operating system interface standard (POSIX, see above).
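That syntax survives in the POSIX <regex.h> interface. A sketch that compiles an extended regular expression (the pattern is purely illustrative) and tests a few strings against it:
    /* Sketch: POSIX regular-expression matching, grep-style. */
    #include <stdio.h>
    #include <regex.h>

    int main(void)
    {
        regex_t re;
        if (regcomp(&re, "^[Uu]nix(-like)?$", REG_EXTENDED | REG_NOSUB) != 0) {
            fprintf(stderr, "bad pattern\n");
            return 2;
        }
        const char *samples[] = { "Unix", "unix-like", "Multics" };
        for (int i = 0; i < 3; i++)
            printf("%-10s %s\n", samples[i],
                   regexec(&re, samples[i], 0, NULL, 0) == 0
                       ? "matches" : "no match");
        regfree(&re);
        return 0;
    }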
The C programming language soon spread beyond Unix, and is now ubiquitous in systems and applications programming.
Early Unix developers were important in bringing the concepts of modularity and reusability into software engineering practice, spawning a "software tools" movement.
Unix provided the TCP/IP networking protocol on relatively inexpensive computers, which contributed to the Internet explosion of worldwide real-time connectivity, and which formed the basis for implementations on many other platforms. This also exposed numerous security holes in the networking implementations.
The Unix policy of extensive on-line documentation and (for many years) ready access to all system source code raised programmer expectations, and contributed to the 1983 launch of the free software movement.
Over time, the leading developers of Unix (and programs that ran on it) established a set of cultural norms for developing software, norms which became as important and influential as the technology of Unix itself; this has been termed the Unix philosophy.

Free Unix-like operating systems

In 1983, Richard Stallman announced the GNU project, an ambitious effort to create a free software Unix-like system; "free" in that everyone who received a copy would be free to use, study, modify, and redistribute it. The GNU project's own kernel development project, GNU Hurd, had not produced a working kernel, but in 1991 Linus Torvalds released the Linux kernel as free software under the GNU General Public License. In addition to their use in the GNU/Linux operating system, many GNU packages – such as the GNU Compiler Collection (and the rest of the GNU toolchain), the GNU C library and the GNU core utilities – have gone on to play central roles in other free Unix systems as well.
Linux distributions, comprising the Linux kernel and large collections of compatible software, have become popular both with individual users and in business. Popular distributions include Red Hat Enterprise Linux, Fedora, SUSE Linux Enterprise, openSUSE, Debian GNU/Linux, Ubuntu, Mandriva Linux, Slackware Linux and Gentoo.
A free derivative of BSD Unix, 386BSD, was also released in 1992 and led to the NetBSD and FreeBSD projects. With the 1994 settlement of a lawsuit that UNIX Systems Laboratories brought against the University of California and Berkeley Software Design Inc. (USL v. BSDi), it was clarified that Berkeley had the right to distribute BSD Unix – for free, if it so desired. Since then, BSD Unix has been developed in several different directions, including OpenBSD and DragonFly BSD.
Linux and BSD are now rapidly occupying much of the market traditionally occupied by proprietary Unix operating systems, as well as expanding into new markets such as the consumer desktop and mobile and embedded devices. Due to the modularity of the Unix design, sharing bits and pieces is relatively common; consequently, most or all Unix and Unix-like systems include at least some BSD code, and modern systems also usually include some GNU utilities in their distributions.
OpenSolaris is a relatively recent addition to the list of operating systems based on free software licenses recognized as such by the FSF and OSI. It comprises a number of derivatives that combine the CDDL-licensed kernel and system tools with a GNU userland, and it is currently the only open-source System V derivative available.

2038

Unix stores system time values as the number of seconds from midnight on January 1, 1970 (the "Unix Epoch") in variables of type time_t, historically defined as "signed long". On January 19, 2038, on 32-bit Unix systems, the current time value will roll over from a zero followed by 31 ones (0x7FFFFFFF) to a one followed by 31 zeros (0x80000000); because that toggles the sign bit, time will reset to the year 1901 or 1970, depending on the implementation.
Since times before 1970 are rarely represented in Unix time, one possible solution that is compatible with existing binary formats would be to redefine time_t as "unsigned 32-bit integer". However, such a kludge merely postpones the problem to February 7, 2106, and could introduce bugs in software that computes time differences.
Some Unix versions have already addressed this. For example, in Solaris and Linux in 64-bit mode, time_t is 64 bits long, meaning that the OS itself and 64-bit applications will correctly handle dates for some 292 billion years. Existing 32-bit applications using a 32-bit time_t continue to work on 64-bit Solaris systems but are still prone to the 2038 problem. Some vendors have introduced an alternative 64-bit type and corresponding API, without addressing uses of the standard time_t.
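The rollover is easy to demonstrate by using a 32-bit signed value to stand in for a historical 32-bit time_t; on a system whose own time_t is 64 bits, ctime() renders the wrapped value as the pre-1970 date mentioned above:
    /* Sketch: the 2038 rollover of a 32-bit signed time counter. */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void)
    {
        int32_t last = INT32_MAX;       /* 0x7FFFFFFF: 03:14:07 UTC, 19 Jan 2038 */
        int32_t wrapped = INT32_MIN;    /* 0x80000000: what the next tick yields */
        time_t before = last, after = wrapped;

        printf("time_t here is %zu bytes\n", sizeof(time_t));
        printf("last 32-bit second : %s", ctime(&before));
        printf("one tick later     : %s", ctime(&after));
        return 0;
    }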

ARPANET

In May 1975, ARPA documented in RFC 681 very specifically why Unix was the operating system of choice for use as an ARPANET "mini-host"; the evaluation process was also documented. The Unix license was very expensive, at US$20,000 for non-university users, while an educational license cost $150. It was noted that, for an ARPA network-wide license, Bell "were open to suggestions in that area".
Specific features found beneficial were:

Branding

In October 1993, Novell, the company that owned the rights to the Unix System V source at the time, transferred the trademarks of Unix to the X/Open Company (now The Open Group),[12] and in 1995 sold the related business operations to Santa Cruz Operation.[13] Whether Novell also sold the copyrights to the actual software was the subject of a 2006 federal lawsuit, SCO v. Novell, which Novell won; the case is being appealed.[23] Unix vendor SCO Group Inc. accused Novell of slander of title.
The present owner of the trademark UNIX is The Open Group, an industry standards consortium. Only systems fully compliant with and certified to the Single UNIX Specification qualify as "UNIX" (others are called "Unix system-like" or "Unix-like").
By decree of The Open Group, the term "UNIX" refers more to a class of operating systems than to a specific implementation of an operating system; those operating systems which meet The Open Group's Single UNIX Specification should be able to bear the UNIX 98 or UNIX 03 trademarks today, after the operating system's vendor pays a fee to The Open Group. Systems licensed to use the UNIX trademark include AIX, HP-UX, IRIX, Solaris, Tru64 (formerly "Digital UNIX"), A/UX, Mac OS X,[24][25] and a part of z/OS.
Sometimes a representation like "Un*x", "*NIX", or "*N?X" is used to indicate all operating systems similar to Unix. This comes from the use of the "*" and "?" characters as "wildcard" characters in many utilities. This notation is also used to describe other Unix-like systems, e.g. Linux, BSD, etc., that have not met the requirements for UNIX branding from the Open Group.
The Open Group requests that "UNIX" always be used as an adjective followed by a generic term such as "system" to help avoid the creation of a genericized trademark.
"Unix" was the original formatting, but the usage of "UNIX" remains widespread because, according to Dennis Ritchie, when presenting the original Unix paper to the third Operating Systems Symposium of the American Association for Computing Machinery, “we had a new typesetter and troff had just been invented and we were intoxicated by being able to produce small caps.”[26] Many of the operating system's predecessors and contemporaries used all-uppercase lettering, so many people wrote the name in upper case due to force of habit.
Several plural forms of Unix are used to refer to multiple brands of Unix and Unix-like systems. Most common is the conventional "Unixes", but "Unices" (treating Unix as a Latin noun of the third declension) is also popular. The Anglo-Saxon plural form "Unixen" is not common, although occasionally seen. Trademark names can be registered by different entities in different countries and trademark laws in some countries allow the same trademark name to be controlled by two different entities if each entity uses the trademark in easily distinguishable categories. The result is that Unix has been used as a brand name for various products including book shelves, ink pens, bottled glue, diapers, hair driers and food containers.

Hybrid kernel

From Wikipedia, the free encyclopedia
A hybrid kernel is a kernel architecture based on combining aspects of the microkernel and monolithic kernel architectures used in computer operating systems. The category is controversial due to its similarity to the monolithic kernel; the term has been dismissed by some as simple marketing.[1] The traditional kernel categories are monolithic kernels and microkernels (with nanokernels and exokernels seen as more extreme versions of microkernels).
Structure of monolithic kernel, microkernel and hybrid kernel-based operating systems
The idea behind this category is to have a kernel structure similar to that of a microkernel, but implemented as a monolithic kernel. In contrast to a microkernel, all (or nearly all) operating system services are in kernel space. As in a monolithic kernel, there is no performance overhead for message passing and context switching between kernel and user mode; but equally, there are none of the benefits of having services in user space, as in a microkernel.

Examples

NT kernel

The Windows NT operating system family's architecture consists of two layers (user mode and kernel mode), with many different modules within both of these layers.
The best known example of a hybrid kernel is the Microsoft NT kernel that powers Windows NT, Windows 2000, Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008 and Windows 7. NT-based Windows is classified as a hybrid kernel (or a macrokernel[2]) rather than a monolithic kernel because the emulation subsystems run in user-mode server processes, rather than in kernel mode as on a monolithic kernel, and further because of the large number of design goals which resemble design goals of Mach (in particular the separation of OS personalities from a general kernel design). Conversely, the reason NT is not a microkernel system is because most of the system components run in the same address space as the kernel, as would be the case with a monolithic design (in a traditional monolithic design, there would not be a microkernel per se, but the kernel would implement broadly similar functionality to NT's microkernel and kernel-mode subsystems).

Description

The Windows NT design included many of the same objectives as Mach, the archetypal microkernel system, one of the most important being its structure as a collection of modules that communicate via well-known interfaces, with a small microkernel limited to core functions such as first-level interrupt handling, thread scheduling and synchronization primitives. This allows for the possibility of using either direct procedure calls or interprocess communication (IPC) to communicate between modules, and hence for the potential location of modules in different address spaces (for example in either kernel space or server processes). Other design goals shared with Mach included support for diverse architectures, a kernel with abstractions general enough to allow multiple operating system personalities to be implemented on top of it and an object-oriented organisation.[2][3]
The reason NT is not a micro-kernel system is that nearly all of the subsystems providing system services, including the entire Executive, run in kernel mode (in the same address space as the microkernel itself), rather than in user-mode server processes, as would be the case with a microkernel design. This is an attribute NT shares with early versions of Mach, as well as all commercial systems based on Mach, and stems from the superior performance offered by using direct procedure calls in a single memory space, rather than IPC, for communication amongst subsystems.
In describing NT, the list of which subsystems do not run in kernel mode is far shorter than the list of those that do. The user-mode subsystems on NT include one or more emulation subsystems, each of which provides an operating system personality to applications, the Session Manager Subsystem (smss.exe), which starts the emulation subsystems during system startup and the Local Security Authority Subsystem Service (lsass.exe), which enforces security on the system. The subsystems are not written to a particular OS personality, but rather to the native NT API (or Native API).
The primary operating system personality on Windows is the Windows API, which is always present. The emulation subsystem which implements the Windows personality is called the Client/Server Runtime Subsystem (csrss.exe). On versions of NT prior to 4.0, this subsystem process also contained the window manager, graphics device interface and graphics device drivers. For performance reasons, however, in version 4.0 and later, these modules (which are often implemented in user mode even on monolithic systems, especially those designed without internal graphics support) run as a kernel-mode subsystem.[2]
As of 2007, one other operating system personality, UNIX, is offered as an optionally installed system component on certain versions of Windows Vista and Windows Server 2003 R2. The associated subsystem process is the Subsystem for UNIX-Based Applications (psxss.exe), which was formerly part of a Windows add-on called Windows Services for Unix. An OS/2 subsystem (os2ss.exe) was supported in older versions of Windows NT, as was a very limited POSIX subsystem (psxss.exe). The POSIX subsystem was supplanted by the UNIX subsystem, hence the identical executable name.[4]
Applications that run on NT are written to one of the OS personalities (usually the Windows API), and not to the native NT API for which documentation is not publicly available (with the exception of routines used in device driver development). An OS personality is implemented via a set of user-mode DLLs (see Dynamic-link library), which are mapped into application processes' address spaces as required, together with an emulation subsystem server process (as described previously). Applications access system services by calling into the OS personality DLLs mapped into their address spaces, which in turn call into the NT run-time library (ntdll.dll), also mapped into the process address space. The NT run-time library services these requests by trapping into kernel mode to either call kernel-mode Executive routines or make Local Procedure Calls (LPCs) to the appropriate user-mode subsystem server processes, which in turn use the NT API to communicate with application processes, the kernel-mode subsystems and each other.[4]
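In code, an application touches only the documented personality layer. The sketch below (Windows-only, shown for illustration) calls the Win32 WriteFile() routine exported by kernel32.dll, which reaches kernel mode by way of ntdll.dll as described above:
    /* Sketch: a call into the Win32 personality, not the native NT API. */
    #include <windows.h>

    int main(void)
    {
        HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);    /* console handle */
        const char msg[] = "written via the Win32 personality\r\n";
        DWORD written = 0;
        /* WriteFile -> ntdll.dll (NtWriteFile) -> kernel-mode Executive */
        WriteFile(out, msg, (DWORD)(sizeof msg - 1), &written, NULL);
        return 0;
    }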

Plan 9 kernel

Description

One of the main design goals of Plan 9 is to represent all resources as files and to use a single communication protocol for both local and remote resources. The Plan 9 kernel uses both in-kernel (kernel-mode) servers and, more commonly, user-mode servers. Communication with user-mode servers (file servers) uses the 9P protocol. Kernel-mode examples are device drivers, network interfaces (Ethernet), networking (the IP stack), the environment, and /proc. Examples of user-mode servers are mailboxes, the serial-console multiplexor, the spam filter, the CD interpreter, foreign filesystems and tapes, the backup system, and the window system. Because the interface to in-kernel and user-space file servers is the same, components can be moved to (or reimplemented in) either user mode or the kernel without making any changes to the rest of the system; for example, there have been implementations of the IP stack and of graphics systems as both user programs and in the kernel, and they can even coexist in the same running system thanks to the use of namespaces.[5]
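The effect of that uniform interface can be approximated on a conventional Unix: the sketch below reads a kernel-synthesized resource with the same open()/read() calls used for ordinary files. The path /proc/self/status is a Linux assumption for illustration; Plan 9's own namespace, reached over 9P, differs:
    /* Sketch: a synthetic resource consumed through the plain file API. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/proc/self/status", O_RDONLY);  /* no special API needed */
        if (fd == -1) { perror("open"); return 1; }
        char buf[512];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)    /* ordinary read() */
            write(STDOUT_FILENO, buf, (size_t)n);
        close(fd);
        return 0;
    }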

Classification

Due to its extensive use of user-mode file servers together with some in-kernel servers, Plan 9 is a candidate for classification as a hybrid kernel.

Implementations