Thursday 14 April 2016

OS/2 OPERATING SYSTEM

In the early 1990s, two of the biggest names in the PC world, IBM and Microsoft, joined forces
to create OS/2, with the goal of making it the "next big thing" in graphical operating systems. Well,
it didn't quite work out that way. :^) The story behind OS/2 includes some of the most fascinating
bits of PC industry history, but it's a long story and not one that really makes sense to get into here.
The short version goes something like this:

• Microsoft and IBM create OS/2 with high hopes that it will revolutionize the PC desktop.
• OS/2 has some significant technical strengths but also some problems.
• Microsoft and IBM fight over how to fix the problems, and also over what direction to take for the
future of the operating system.
• Microsoft decides, based on some combination of frustration over the problems and a desire for
absolute control, to drop OS/2 and focus on Windows instead.
• IBM and Microsoft feud.
• IBM supports OS/2 (somewhat half-heartedly) on its own, while Microsoft dominates the industry
with various versions of Windows.
Now, OS/2 aficionados will probably take issue with at least some of that summarization, but that is
what happened in a nutshell, or at least I think so. :^) At any rate, OS/2 continues to be supported
today, but really has been relegated to a niche role. I don't know how long IBM will continue to
support it.
OS/2's file system support is similar, in a way, to that of Windows NT. OS/2 supports FAT12 and
FAT16 for compatibility, but is really designed to use its own special file system, called HPFS.
HPFS is similar to NTFS (NT's native file system), though it is certainly not the same. OS/2 does not
have support for FAT32 built in, but there are third-party tools available that will let OS/2
access FAT32 partitions. This may be required if you are running a machine with both OS/2 and
Windows partitions. I believe that OS/2 does not include support for NTFS partitions.

UNIX operating system. Main features and commands. UNIX / Linux

UNIX is one of the very oldest operating systems in the computer world, and is still widely
used today. However, it is not a very conspicuous operating system. Somewhat arcane in its
operation and interface, it is ideally suited for the needs of large enterprise computing systems. It is
also the most common operating system run by servers and other computers that form the bulk of
the Internet. While you may never use UNIX on your local PC, you are using it indirectly, in one
form or another, every time you log on to the 'net.

While few people run UNIX on their own systems, there are in fact a number of different
versions of UNIX available for the PC, and millions of PC users have chosen to install "UNIXy"
operating systems on their own desktop machines. There are dozens of variants of the basic UNIX
interface; the most popular one for the PC platform is Linux, which is itself available in many
flavors. While UNIX operating systems can be difficult to set up and require some knowledge to
operate, they are very stable and robust, are efficient with system resources--and are generally free
or very inexpensive to obtain.

UNIX operating systems are designed to use the "UNIX file system". I put that phrase in
quotes, because there is no single UNIX file system, any more than there is a single UNIX
operating system. However, the file systems used by most of the UNIX operating system types out
there are fairly similar, and rather distinct from the file systems used by other operating systems,
such as DOS or Windows.

As an operating system geared specifically for use on the PC, Linux is the UNIX variant that
gets the most attention in PC circles. To improve its appeal, the programmers who continually
work to update and improve Linux have built into the operating system compatibility support for
the file systems of most other operating systems. Linux will read and write to FAT partitions, and with
newer versions this includes FAT32.

Unix (officially trademarked as UNIX®) is a computer operating system originally
developed in 1969 by a group of AT&T employees at Bell Labs including Ken Thompson, Dennis
Ritchie and Douglas McIlroy. Today's Unix systems are split into various branches, developed over
time by AT&T as well as various commercial vendors and non-profit organizations.

As of 2007, the owner of the trademark UNIX® is The Open Group, an industry standards
consortium. Only systems fully compliant with and certified to the Single UNIX Specification
qualify as "UNIX®" (others are called "Unix system-like" or "Unix-like").

During the late 1970s and early 1980s, Unix's influence in academic circles led to large-scale
adoption of Unix (particularly of the BSD variant, originating from the University of
California, Berkeley) by commercial startups, the most notable of which is Sun Microsystems.
Today, in addition to certified Unix systems, Unix-like operating systems such as Linux and BSD
derivatives are commonly encountered.

Sometimes, "traditional Unix" may be used to describe a Unix or an operating system that has
the characteristics of either Version 7 Unix or UNIX System V.

Overview
Unix operating systems are widely used in both servers and workstations. The Unix
environment and the client-server program model were essential elements in the development of the
Internet and the reshaping of computing as centered in networks rather than in individual
computers.

Both Unix and the C programming language were developed by AT&T and distributed to
government and academic institutions, causing both to be ported to a wider variety of machine
families than any other operating system. As a result, Unix became synonymous with "open
systems".

Unix was designed to be portable, multi-tasking and multi-user in a time-sharing
configuration. Unix systems are characterized by various concepts: the use of plain text for storing
data; a hierarchical file system; treating devices and certain types of inter-process communication
(IPC) as files; and the use of a large number of small programs that can be strung together through a
command line interpreter using pipes, as opposed to using a single monolithic program that includes
all of the same functionality. These concepts are known as the Unix philosophy.
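As a minimal sketch of this philosophy in practice, the following C program (a hedged illustration
assuming a POSIX system; the particular programs "ls" and "wc -l" are only an example) strings two
small programs together through a pipe, the equivalent of typing "ls | wc -l" at a shell prompt:

    /* pipe_demo.c - run the equivalent of "ls | wc -l" using a pipe */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];                      /* fd[0] = read end, fd[1] = write end */
        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {              /* first child: "ls" writes into the pipe */
            dup2(fd[1], STDOUT_FILENO); /* its stdout goes to the pipe            */
            close(fd[0]); close(fd[1]);
            execlp("ls", "ls", (char *)NULL);
            perror("execlp ls"); _exit(1);
        }
        if (fork() == 0) {              /* second child: "wc -l" reads from the pipe */
            dup2(fd[0], STDIN_FILENO);  /* its stdin comes from the pipe             */
            close(fd[0]); close(fd[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            perror("execlp wc"); _exit(1);
        }
        close(fd[0]); close(fd[1]);     /* parent keeps no pipe ends open */
        wait(NULL); wait(NULL);
        return 0;
    }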

Under Unix, the "operating system" consists of many of these utilities along with the master
control program, the kernel. The kernel provides services to start and stop programs, handle the file
system and other common "low level" tasks that most programs share, and, perhaps most
importantly, schedules access to hardware to avoid conflicts if two programs try to access the same
resource or device simultaneously. To mediate such access, the kernel was given special rights on
the system, leading to the division between user-space and kernel-space.

The microkernel tried to reverse the growing size of kernels and return to a system in which
most tasks were completed by smaller utilities. In an era when a "normal" computer consisted of a
hard disk for storage and a data terminal for input and output (I/O), the Unix file model worked
quite well as most I/O was "linear". However, modern systems include networking and other new
devices. Describing a graphical user interface driven by mouse control in an "event driven" fashion
didn't work well under the old model. Work on systems supporting these new devices in the 1980s
led to facilities for non-blocking I/O, forms of inter-process communications other than just pipes,
as well as moving functionality such as network protocols out of the kernel.

History

A partial list of simultaneously running processes on a Unix system.

In the 1960s, the Massachusetts Institute of Technology, AT&T Bell Labs, and General
Electric worked on an experimental operating system called Multics (Multiplexed Information and
Computing Service), which was designed to run on the GE-645 mainframe computer. The aim was
the creation of a commercial product, although this was never a great success. Multics was an
interactive operating system with many novel capabilities, including enhanced security. The project
did develop production releases, but initially these releases performed poorly.

AT&T Bell Labs pulled out and deployed its resources elsewhere. One of the developers on
the Bell Labs team, Ken Thompson, continued to develop for the GE-645 mainframe, and wrote a
game for that computer called Space Travel. However, he found that the game was too slow on the
GE machine and was expensive, costing $75 per execution in scarce computing time.

Thompson thus re-wrote the game in assembly language for Digital Equipment Corporation's
PDP-7 with help from Dennis Ritchie. This experience, combined with his work on the Multics
project, led Thompson to start a new operating system for the PDP-7. Thompson and Ritchie led a
team of developers, including Rudd Canaday, at Bell Labs developing a file system as well as the
new multi-tasking operating system itself. They included a command line interpreter and some
small utility programs.

Editing a shell script using the ed editor. The dollar-sign at the top of the screen is the prompt
printed by the shell. 'ed' is typed to start the editor, which takes over from that point on the screen
downwards.

1970s

In 1970 the project was named Unics, and could - eventually - support two simultaneous
users. Brian Kernighan invented this name as a contrast to Multics; the spelling was later changed
to Unix.

Up until this point there had been no financial support from Bell Labs. When the Computer
Science Research Group wanted to use Unix on a much larger machine than the PDP-7, Thompson
and Ritchie managed to trade the promise of adding text processing capabilities to Unix for a PDP-
11/20 machine. This led to some financial support from Bell. For the first time in 1970, the Unix
operating system was officially named and ran on the PDP-11/20. It added a text formatting
program called roff and a text editor. All three were written in PDP-11/20 assembly language. Bell
Labs used this initial "text processing system", made up of Unix, roff, and the editor, for text
processing of patent applications. Roff soon evolved into troff, the first electronic publishing
program with a full typesetting capability. The UNIX Programmer's Manual was published on
November 3, 1971.

In 1973, Unix was rewritten in the C programming language, contrary to the general notion
at the time "that something as complex as an operating system, which must deal with time-critical
events, had to be written exclusively in assembly language" [4]. The migration from assembly
language to the higher-level language C resulted in much more portable software, requiring only a
relatively small amount of machine-dependent code to be replaced when porting Unix to other
computing platforms.

AT&T made Unix available to universities and commercial firms, as well as the United
States government, under licenses. The licenses included all source code, including the machine-dependent
parts of the kernel, which were written in PDP-11 assembly code. Copies of the
annotated Unix kernel sources circulated widely in the late 1970s in the form of a much-copied
book by John Lions of the University of New South Wales, the Lions' Commentary on UNIX 6th
Edition, with Source Code, which led to considerable use of Unix as an educational example.

Versions of the Unix system were determined by editions of its user manuals, so that (for
example) "Fifth Edition UNIX" and "UNIX Version 5" have both been used to designate the same
thing. Development expanded, with Versions 4, 5, and 6 being released by 1975. These versions
added the concept of pipes, leading to the development of a more modular code-base, increasing
development speed still further. Version 5 and especially Version 6 led to a plethora of different
Unix versions both inside and outside Bell Labs, including PWB/UNIX, IS/1 (the first commercial
Unix), and the University of Wollongong's port to the Interdata 7/32 (the first non-PDP Unix).

In 1978, UNIX/32V, for the VAX system, was released. By this time, over 600 machines
were running Unix in some form. Version 7 Unix, the last version of Research Unix to be released
widely, was released in 1979. Versions 8, 9 and 10 were developed through the 1980s but were only
released to a few universities, though they did generate papers describing the new work. This
research led to the development of Plan 9 from Bell Labs, a new portable distributed system.

1980s

A late-80s style Unix desktop running the X Window System graphical user interface. Shown
are a number of client applications common to the MIT X Consortium's distribution, including
Tom's Window Manager, an X Terminal, Xbiff, xload, and a graphical manual page browser.

AT&T now licensed UNIX System III, based largely on Version 7, for commercial use, the
first version launching in 1982. This also included support for the VAX. AT&T continued to issue
licenses for older Unix versions. To end the confusion between all its differing internal versions,
AT&T combined them into UNIX System V Release 1. This introduced a few features such as the
vi editor and curses from the Berkeley Software Distribution of Unix developed at the University of
California, Berkeley. This also included support for the Western Electric 3B series of machines.

Since the newer commercial UNIX licensing terms were not as favorable for academic use
as the older versions of Unix, the Berkeley researchers continued to develop BSD Unix as an
alternative to UNIX System III and V, originally on the PDP-11 architecture (the 2.xBSD releases,
ending with 2.11BSD) and later for the VAX-11 (the 4.x BSD releases). Many contributions to
Unix first appeared on BSD systems, notably the C shell with job control (modelled on ITS).
Perhaps the most important aspect of the BSD development effort was the addition of TCP/IP
network code to the mainstream Unix kernel. The BSD effort produced several significant releases
that contained network code: 4.1cBSD, 4.2BSD, 4.3BSD, 4.3BSD-Tahoe ("Tahoe" being the
nickname of the Computer Consoles Inc. Power 6/32 architecture that was the first non-DEC
release of the BSD kernel), Net/1, 4.3BSD-Reno (to match the "Tahoe" naming, and that the release
was something of a gamble), Net/2, 4.4BSD, and 4.4BSD-lite. The network code found in these
releases is the ancestor of much TCP/IP network code in use today, including code that was later
released in AT&T System V UNIX and early versions of Microsoft Windows. The accompanying
Berkeley Sockets API is a de facto standard for networking APIs and has been copied on many
platforms.

Other companies began to offer commercial versions of the UNIX System for their own
mini-computers and workstations. Most of these new Unix flavors were developed from the System
V base under a license from AT&T; however, others were based on BSD instead. One of the
leading developers of BSD, Bill Joy, went on to co-found Sun Microsystems in 1982 and create
SunOS (now Solaris) for their workstation computers. In 1980, Microsoft announced its first Unix
for 16-bit microcomputers called Xenix, which the Santa Cruz Operation (SCO) ported to the Intel
8086 processor in 1983, and eventually branched Xenix into SCO UNIX in 1989.

For a few years during this period (before PC compatible computers with MS-DOS became
dominant), industry observers expected that UNIX, with its portability and rich capabilities, was
likely to become the industry standard operating system for microcomputers.[5] In 1984 several
companies established the X/Open consortium with the goal of creating an open system
specification based on UNIX. Despite early progress, the standardization effort collapsed into the
"Unix wars," with various companies forming rival standardization groups. The most successful
Unix-related standard turned out to be the IEEE's POSIX specification, designed as a compromise
API readily implemented on both BSD and System V platforms, published in 1988 and soon
mandated by the United States government for many of its own systems.

AT&T added various features into UNIX System V, such as file locking, system
administration, streams, new forms of IPC, the Remote File System and TLI. AT&T cooperated
with Sun Microsystems and between 1987 and 1989 merged features from Xenix, BSD, SunOS,
and System V into System V Release 4 (SVR4), independently of X/Open. This new release
consolidated all the previous features into one package, and heralded the end of competing versions.
It also increased licensing fees.

During this time a number of vendors including Digital Equipment, Sun, Addamax and
others began building trusted versions of UNIX for high security applications, mostly designed for
military and law enforcement applications.

The Common Desktop Environment or CDE, a graphical desktop for Unix co-developed in
the 1990s by HP, IBM, and Sun as part of the COSE initiative.

1990s

In 1990, the Open Software Foundation released OSF/1, their standard Unix
implementation, based on Mach and BSD. The Foundation was started in 1988 and was funded by
several Unix-related companies that wished to counteract the collaboration of AT&T and Sun on
SVR4. Subsequently, AT&T and another group of licensees formed the group "UNIX International"
in order to counteract OSF. This escalation of conflict between competing vendors gave rise again
to the phrase "Unix wars".

In 1991, a group of BSD developers (Donn Seeley, Mike Karels, Bill Jolitz, and Trent Hein)
left the University of California to found Berkeley Software Design, Inc (BSDI). BSDI produced a
fully functional commercial version of BSD Unix for the inexpensive and ubiquitous Intel platform,
which started a wave of interest in the use of inexpensive hardware for production computing.
Shortly after it was founded, Bill Jolitz left BSDI to pursue distribution of 386BSD, the free
software ancestor of FreeBSD, OpenBSD, and NetBSD.

By 1993 most commercial vendors had changed their variants of Unix to be based on
System V with many BSD features added on top. The creation of the COSE initiative that year by
the major players in Unix marked the end of the most notorious phase of the Unix wars, and was
followed by the merger of UI and OSF in 1994. The new combined entity, which retained the OSF
name, stopped work on OSF/1 that year. By that time the only vendor using it was Digital, which
continued its own development, rebranding their product Digital UNIX in early 1995.

Shortly after UNIX System V Release 4 was produced, AT&T sold all its rights to UNIX®
to Novell. (Dennis Ritchie likened this to the Biblical story of Esau selling his birthright for the
proverbial "mess of pottage".[6]) Novell developed its own version, UnixWare, merging its NetWare
with UNIX System V Release 4. Novell tried to use this to battle against Windows NT, but their
core markets suffered considerably.

In 1993, Novell decided to transfer the UNIX® trademark and certification rights to the
X/Open Consortium.[7] In 1996, X/Open merged with OSF, creating the Open Group. Various
standards by the Open Group now define what is and what is not a "UNIX" operating system,
notably the post-1998 Single UNIX Specification.

In 1995, the business of administering and supporting the existing UNIX licenses, plus the
rights to further develop the System V code base, was sold by Novell to the Santa Cruz
Operation.[1] Whether Novell also sold the copyrights is currently the subject of litigation (see
below).

In 1997, Apple Computer sought out a new foundation for its Macintosh operating system
and chose NEXTSTEP, an operating system developed by NeXT. The core operating system was
renamed Darwin after Apple acquired it. It was based on the BSD family and the Mach kernel. The
deployment of Darwin BSD Unix in Mac OS X makes it, according to a statement made by an
Apple employee at a USENIX conference, the most widely used Unix-based system in the desktop
computer market.

2000 to present

Fig. 8. A modern Unix desktop environment (Solaris 10)

In 2000, SCO sold its entire UNIX business and assets to Caldera Systems, which later on
changed its name to The SCO Group. This new player then started legal action against various users
and vendors of Linux. SCO have alleged that Linux contains copyrighted Unix code now owned by
The SCO Group. Other allegations include trade-secret violations by IBM, or contract violations by
former Santa Cruz customers who have since converted to Linux. However, Novell disputed the
SCO group's claim to hold copyright on the UNIX source base. According to Novell, SCO (and
hence the SCO Group) are effectively franchise operators for Novell, which also retained the core
copyrights, veto rights over future licensing activities of SCO, and 95% of the licensing revenue.
The SCO Group disagreed with this, and the dispute resulted in the SCO v. Novell lawsuit.

In 2005, Sun Microsystems released the bulk of its Solaris system code (based on UNIX
System V Release 4) into an open source project called OpenSolaris. New Sun OS technologies
such as the ZFS file system are now first released as open source code via the OpenSolaris project;
as of 2006 it has spawned several non-Sun distributions such as SchilliX, Belenix, Nexenta and
MarTux.

The Dot-com crash has led to significant consolidation of Unix users as well. Of the many
commercial flavors of Unix that were born in the 1980s, only Solaris, HP-UX, and AIX are still
doing relatively well in the market, though SGI's IRIX persisted for quite some time. Of these,
Solaris has the most market share, and may be gaining popularity due to its feature set and also
since it now has an Open Source version.

Standards

Beginning in the late 1980s, an open operating system standardization effort now known as
POSIX provided a common baseline for all operating systems; IEEE based POSIX around the
common structure of the major competing variants of the Unix system, publishing the first POSIX
standard in 1988. In the early 1990s a separate but very similar effort was started by an industry
consortium, the Common Open Software Environment (COSE) initiative, which eventually became
the Single UNIX Specification, administered by The Open Group. Starting in 1998 the Open Group
and IEEE started the Austin Group, to provide a common definition of POSIX and the Single UNIX
Specification.

In an effort towards compatibility, in 1999 several Unix system vendors agreed on SVR4's
Executable and Linkable Format (ELF) as the standard for binary and object code files. The
common format allows substantial binary compatibility among Unix systems operating on the same
CPU architecture.

The Filesystem Hierarchy Standard was created to provide a reference directory layout for
Unix-like operating systems, particularly Linux. This type of standard however is controversial, and
even within the Linux community its adoption is far from universal.

Components

The Unix system is composed of several components that are normally packaged together. By
including — in addition to the kernel of an operating system — the development environment,
libraries, documents, and the portable, modifiable source-code for all of these components, Unix
was a self-contained software system. This was one of the key reasons it emerged as an important
teaching and learning tool and had such a broad influence.

Inclusion of these components did not make the system large — the original V7 UNIX
distribution, consisting of copies of all of the compiled binaries plus all of the source code and
documentation occupied less than 10Mb, and arrived on a single 9-track magtape. The printed
documentation, typeset from the on-line sources, was contained in two volumes.

The names and filesystem locations of the Unix components have changed substantially across
the history of the system. Nonetheless, the V7 implementation is considered by many to have the
canonical early structure:

• Kernel — source code in /usr/sys, composed of several sub-components:
o conf — configuration and machine-dependent parts, including boot code
o dev — device drivers for control of hardware (and some pseudo-hardware)
o sys — operating system "kernel", handling memory management, process
scheduling, system calls, etc.
o h — header files, defining key structures within the system and important system-specific
invariables

• Development Environment — Early versions of Unix contained a development
environment sufficient to recreate the entire system from source code:

o cc — C language compiler (first appeared in V3 Unix)
o as — machine-language assembler for the machine
o ld — linker, for combining object files
o lib — object-code libraries (installed in /lib or /usr/lib). libc, the system library with
C run-time support, was the primary library, but there have always been additional
libraries for such things as mathematical functions (libm) or database access. V7
Unix introduced the first version of the modern "Standard I/O" library stdio as part
of the system library. Later implementations increased the number of libraries
significantly.
o make - build manager (introduced in PWB/UNIX), for effectively automating the
build process
o include — header files for software development, defining standard interfaces and
system invariants
o Other languages — V7 Unix contained a Fortran-77 compiler, a programmable
arbitrary-precision calculator (bc, dc), and the awk "scripting" language, and later
versions and implementations contain many other language compilers and toolsets.
Early BSD releases included Pascal tools, and many modern Unix systems also
include the GNU Compiler Collection as well as or instead of a proprietary compiler
system.
o Other tools — including an object-code archive manager (ar), symbol-table lister
(nm), compiler-development tools (e.g. lex & yacc), and debugging tools.

• Commands — Unix makes little distinction between commands (user-level programs) for
system operation and maintenance (e.g. cron), commands of general utility (e.g. grep), and
more general-purpose applications such as the text formatting and typesetting package.
Nonetheless, some major categories are:

o sh — The "shell" programmable command-line interpreter, the primary user
interface on Unix before window systems appeared, and even afterward (within a
"command window").
o Utilities — the core tool kit of the Unix command set, including cp, ls, grep, find and
many others. Subcategories include:
§ System utilities — administrative tools such as mkfs, fsck, and many others
§ User utilities — environment management tools such as passwd, kill, and
others.
o Document formatting — Unix systems were used from the outset for document
preparation and typesetting systems, and included many related programs such as
nroff, troff, tbl, eqn, refer, and pic. Some modern Unix systems also include
packages such as TeX and GhostScript.
o Graphics — The plot subsystem provided facilities for producing simple vector plots
in a device-independent format, with device-specific interpreters to display such
files. Modern Unix systems also generally include X11 as a standard windowing
system and GUI, and many support OpenGL.
o Communications — Early Unix systems contained no inter-system communication,
but did include the inter-user communication programs mail and write. V7
introduced the early inter-system communication system UUCP, and systems
beginning with BSD release 4.1c included TCP/IP utilities.

The 'man' command can display a 'man page' for every command on the system, including itself.
• Documentation — Unix was the first operating system to include all of its documentation
online in machine-readable form. The documentation included:
o man — manual pages for each command, library component, system call, header
file, etc.
o doc — longer documents detailing major subsystems, such as the C language and
troff

Impact

The Unix system had significant impact on other operating systems.

It was written in a high-level language, as opposed to assembly language (which had been
thought necessary for systems implementation on early computers). Although this followed the lead
of Multics and Burroughs, it was Unix that popularized the idea.

Unix had a drastically simplified file model compared to many contemporary operating
systems, treating all kinds of files as simple byte arrays. The file system hierarchy contained
machine services and devices (such as printers, terminals, or disk drives), providing a uniform
interface, but at the expense of occasionally requiring additional mechanisms such as ioctl and
mode flags to access features of the hardware that did not fit the simple "stream of bytes" model.
The Plan 9 operating system pushed this model even further and eliminated the need for additional
mechanisms.
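As a small hedged illustration of this uniform interface (assuming a Unix-like system; the file
names used are only examples), the same open/read/write system calls work on an ordinary file and
on a device file such as /dev/tty:

    /* bytes_demo.c - files and devices are accessed through the same byte-stream calls */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        char buf[256];

        int file = open("/etc/hostname", O_RDONLY);  /* an ordinary file (example path)    */
        int tty  = open("/dev/tty", O_WRONLY);       /* a device file, opened the same way */
        if (file < 0 || tty < 0) return 1;

        ssize_t n = read(file, buf, sizeof buf);     /* read a stream of bytes             */
        if (n > 0)
            write(tty, buf, (size_t)n);              /* write those bytes to the terminal  */

        close(file);
        close(tty);
        return 0;
    }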

Unix also popularized the hierarchical file system with arbitrarily nested subdirectories,
originally introduced by Multics. Other common operating systems of the era had ways to divide a
storage device into multiple directories or sections, but they had a fixed number of levels, often only
one level. Several major proprietary operating systems eventually added recursive subdirectory
capabilities also patterned after Multics. DEC's RSX-11M's "group, user" hierarchy evolved into
VMS directories, CP/M's volumes evolved into MS-DOS 2.0+ subdirectories, and HP's MPE
group.account hierarchy and IBM's SSP and OS/400 library systems were folded into broader
POSIX file systems.

Making the command interpreter an ordinary user-level program, with additional commands
provided as separate programs, was another Multics innovation popularized by Unix. The Unix
shell used the same language for interactive commands as for scripting (shell scripts — there was
no separate job control language like IBM's JCL). Since the shell and OS commands were "just
another program", the user could choose (or even write) his own shell. New commands could be
added without changing the shell itself. Unix's innovative command-line syntax for creating chains
of producer-consumer processes (pipelines) made a powerful programming paradigm (coroutines)
widely available. Many later command-line interpreters have been inspired by the Unix shell.

A fundamental simplifying assumption of Unix was its focus on ASCII text for nearly all
file formats. There were no "binary" editors in the original version of Unix — the entire system was
configured using textual shell command scripts. The common denominator in the I/O system is the
byte — unlike "record-based" file systems in other computers. The focus on text for representing
nearly everything made Unix pipes especially useful, and encouraged the development of simple,
general tools that could be easily combined to perform more complicated ad hoc tasks. The focus
on text and bytes made the system far more scalable and portable than other systems. Over time,
text-based applications have also proven popular in application areas, such as printing languages
(PostScript), and at the application layer of the Internet Protocols, e.g. Telnet, FTP, SSH, SMTP,
HTTP and SIP.

Unix popularised a syntax for regular expressions that found widespread use. The Unix
programming interface became the basis for a widely implemented operating system interface
standard (POSIX, see above).

The C programming language soon spread beyond Unix, and is now ubiquitous in systems and
applications programming.

Early Unix developers were important in bringing the theory of modularity and reusability into
software engineering practice, spawning a "Software Tools" movement.

Unix provided the TCP/IP networking protocol on relatively inexpensive computers, which
contributed to the Internet explosion of world-wide real-time connectivity, and which formed the
basis for implementations on many other platforms. (This also exposed numerous security holes in
the networking implementations.)

The Unix policy of extensive on-line documentation and (for many years) ready access to all system
source code raised programmer expectations and contributed to the 1983 launch of the free software
movement.

Over time, the leading developers of Unix (and programs that ran on it) evolved a set of
cultural norms for developing software, norms which became as important and influential as the
technology of Unix itself; this has been termed the Unix philosophy.

Unix stores system time values as the number of seconds from midnight January 1, 1970
(the "Unix Epoch") in variables of type time_t, historically defined as "signed 32-bit integer". On
January 19, 2038, the current time will roll over from a zero followed by 31 ones
(01111111111111111111111111111111) to a one followed by 31 zeros
(10000000000000000000000000000000), which will reset time to the year 1901 or 1970,
depending on implementation, because that toggles the sign bit. As many applications use OS
library routines for date calculations, the impact of this could be felt much earlier than 2038; for
instance, 30-year mortgages may be calculated incorrectly beginning in the year 2008.
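A small worked example of that rollover, assuming a platform whose time_t is 64 bits and whose
gmtime accepts dates before 1970 (as on most modern Unix-like systems):

    /* year2038.c - wraparound of a signed 32-bit time counter */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        int32_t last = INT32_MAX;   /* 0x7FFFFFFF: a zero followed by 31 ones */
        int32_t next = INT32_MIN;   /* 0x80000000: a one followed by 31 zeros */

        time_t t1 = (time_t)last;   /* widen to the platform's larger time_t  */
        time_t t2 = (time_t)next;

        printf("last 32-bit second: %s", asctime(gmtime(&t1))); /* Tue Jan 19 03:14:07 2038 */
        printf("one tick later    : %s", asctime(gmtime(&t2))); /* Fri Dec 13 20:45:52 1901 */
        return 0;
    }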

Since times before 1970 are rarely represented in Unix time, one possible solution that is
compatible with existing binary formats would be to redefine time_t as "unsigned 32-bit integer".
However, such a kludge merely postpones the problem to February 7, 2106, and could introduce
bugs in software that compares differences between two points in time.

Some Unix versions have already addressed this. For example, in Solaris on 64-bit systems,
time_t is 64 bits long, meaning that the OS itself and 64-bit applications will correctly handle dates
for some 292 billion years (several times greater than the age of the universe). Existing 32-bit
applications using a 32-bit time_t continue to work on 64-bit Solaris systems but are still prone to
the 2038 problem.

Free Unix-like operating systems

Fig. 9. Linux is a modern Unix-like system

In 1983, Richard Stallman announced the GNU project, an ambitious effort to create a free
software Unix-like system; "free" in that everyone who received a copy would be free to use, study,
modify, and redistribute it. GNU's goal was achieved in 1992. Its own kernel development project,
GNU Hurd, had not produced a working kernel, but a compatible kernel called Linux was released
as free software in 1992 under the GNU General Public License. The combination of the two is
frequently referred to simply as "Linux", although the Free Software Foundation and some Linux
distributions, such as Debian GNU/Linux, use the combined term GNU/Linux. Work on GNU Hurd
continues, although very slowly.

In addition to their use in the Linux operating system, many GNU packages — such as the GNU
Compiler Collection (and the rest of the GNU toolchain), the GNU C library and the GNU core
utilities — have gone on to play central roles in other free Unix systems as well.

Linux distributions, comprising the Linux kernel and large collections of compatible software, have
become popular both with hobbyists and in business. Popular distributions include Red Hat
Enterprise Linux, SUSE Linux, Mandriva Linux, Fedora, Ubuntu, Debian GNU/Linux, Slackware
Linux and Gentoo.

A free derivative of BSD Unix, 386BSD, was also released in 1992 and led to the NetBSD
and FreeBSD projects. With the 1994 settlement of a lawsuit that UNIX Systems Laboratories
brought against the University of California and Berkeley Software Design Inc. (USL v. BSDi), it
was clarified that Berkeley had the right to distribute BSD Unix — for free, if it so desired. Since
then, BSD Unix has been developed in several different directions, including the OpenBSD and
DragonFly BSD variants.

Linux and the BSD kin are now rapidly occupying the market traditionally occupied by
proprietary Unix operating systems, as well as expanding into new markets such as the consumer
desktop and mobile and embedded devices. A measure of this success may be seen when Apple
Computer incorporated BSD into its Macintosh operating system by way of NEXTSTEP. Due to
the modularity of the Unix design, sharing bits and pieces is relatively common; consequently, most
or all Unix and Unix-like systems include at least some BSD code, and modern BSDs also typically
include some GNU utilities in their distribution, so Apple's combination of parts from NeXT and
FreeBSD with Mach and some GNU utilities has precedent.

In 2005, Sun Microsystems released the bulk of the source code to the Solaris operating
system, a System V variant, under the name OpenSolaris, making it the first actively developed
commercial Unix system to be open sourced (several years earlier, Caldera had released many of
the older Unix systems under an educational and later BSD license). As a result, a great deal of
formerly proprietary AT&T/USL code is now freely available.

Branding

In October 1993, Novell, the company that owned the rights to the Unix System V source at
the time, transferred the trademarks of Unix to the X/Open Company (now The Open Group),[9] and
in 1995 sold the related business operations to Santa Cruz Operation.[10] Whether Novell also sold
the copyrights to the actual software is currently the subject of litigation in a federal lawsuit, SCO v.
Novell. Unix vendor SCO Group Inc. accused Novell of slander of title.

As noted above, the trademark UNIX® belongs to The Open Group, and only systems certified to the
Single UNIX Specification may use it. The term UNIX is not an acronym,
but follows the early convention of naming computer systems in capital letters, such as ENIAC and
MISTIC.

By decree of The Open Group, the term "UNIX®" refers more to a class of operating systems than
to a specific implementation of an operating system; those operating systems which meet The Open
Group's Single UNIX Specification should be able to bear the UNIX® 98 or UNIX® 03 trademarks
today, after the operating system's vendor pays a fee to The Open Group. Systems licensed to use
the UNIX® trademark include AIX, HP-UX, IRIX, Solaris, Tru64, A/UX, Mac OS X 10.5 on Intel
platforms[11], and a part of z/OS.

Sometimes a representation like "Un*x", "*NIX", or "*N?X" is used to indicate all operating
systems similar to Unix. This comes from the use of the "*" and "?" characters as "wildcard"
characters in many utilities. This notation is also used to describe other Unix-like systems, e.g.
Linux, FreeBSD, etc., that have not met the requirements for UNIX® branding from the Open
Group.

The Open Group requests that "UNIX®" is always used as an adjective followed by a generic
term such as "system" to help avoid the creation of a genericized trademark.

The term "Unix" is also used, and in fact was the original capitalisation, but the name UNIX
stuck because, in the words of Dennis Ritchie "when presenting the original Unix paper to the third
Operating Systems Symposium of the American Association for Computing Machinery, we had just
acquired a new typesetter and were intoxicated by being able to produce small caps" (quoted from
the Jargon File, version 4.3.3, 20 September 2002). Additionally, it should be noted that many of
the operating system's predecessors and contemporaries used all-uppercase lettering, because many
computer terminals of the time could not produce lower-case letters, so many people wrote the
name in upper case due to force of habit.

Several plural forms of Unix are used to refer to multiple brands of Unix and Unix-like
systems. Most common is the conventional "Unixes", but the hacker culture which created Unix has
a penchant for playful use of language, and "Unices" (treating Unix as Latin noun of the third
declension) is also popular. The Anglo-Saxon plural form "Unixen" is not common, although
occasionally seen.

Trademark names can be registered by different entities in different countries and trademark
laws in some countries allow the same trademark name to be controlled by two different entities if
each entity uses the trademark in easily distinguishable categories. The result is that Unix has been
used as a brand name for various products including book shelves, ink pens, bottled glue, diapers,
hair driers and food containers. [2].

Common Unix commands

Widely used Unix commands include:
• Directory and file creation and navigation: ls cd pwd mkdir rm rmdir cp find touch
• File viewing and editing: more less ed vi emacs head tail
• Text processing: echo cat grep sort uniq sed awk cut tr split printf
• File comparison: comm cmp diff patch
• Miscellaneous shell tools: yes test xargs
• System administration: chmod chown ps su w who
• Communication: mail telnet ftp finger ssh
• Authentication: su login passwd

Cache memory

DISK CACHING

To help understand the theory of caching, visualize an old, hand-operated water pump. Each
stroke of the pump's handle delivers a set amount of water into a glass. It may take two or three
handle strokes to fill a glass. Now, visualize several glasses that need to be filled. You are
constantly pumping the handle to keep up with the demand. Next, introduce a holding tank. With
this, instead of the water going directly into a glass, it goes into the tank. The advantage is, once the
holding tank is filled, constant pumping is not required to keep up with the demand.

Disk caching may be thought of as an electronic version of a holding tank. With MS-DOS version
5.0, the holding tank is built in with Smartdrv.sys.

Cache: A bank of high-speed memory set aside for frequently accessed data. The term "caching"
describes placing data in the cache. Memory caching and disk caching are the two most common
methods used by PCs.

Keeping the most frequently used disk sectors in main memory (hereafter RAM) is
called disk caching. It is used to increase the speed of information exchange between the hard disk and
RAM. It is well known that the relatively low speed of information exchange between these two
devices used to be one of the weakest points limiting overall computer performance. No doubt there are
other weak points, for example the exchange of information between a fast microprocessor and slow
RAM, but since DOS does not offer any way of dealing with such problems, we are not
going to consider them here.

To perform disk caching, a special buffer region called the cache is set up in RAM. It works
as a channel for information exchange and is operated by a resident program called the cache
manager.

Data that has been read is placed into the cache and kept there until a newer portion of data replaces it.
When that data is required again, it can be retrieved from the fast cache, so there is no need to
read it from the disk again. As a result, the apparent speed of reading data from the "disk"
increases. This procedure is called "end-to-end reading".

An even more noticeable effect is achieved by read-ahead: reading data in advance (without a
request from the operating system) and placing it into the cache, because this operation can be
performed asynchronously, without leaving the microprocessor idle.

Most modern cache managers provide caching not only for reading but also for
writing. Write caching is used when the operating system requests that data be written to disk.
The data is first placed into the cache and then, when it is "convenient" for the PC, written to
the disk, so that the real write to the disk is performed asynchronously. Below we will call this
process "intermediate writing". After the data has been written into the cache instead of to the
disk, DOS is notified that the write operation is complete. Since this is accomplished much faster
than writing straight to the disk, write caching is very effective. The effect is even more
noticeable for operations such as the following (a code sketch of this behaviour appears after the
list):

• updating data recently written to the disk (with caching, it can simply be refreshed in RAM);
• using (repeatedly reading) data recently written to the disk (with caching, it can be supplied
from the cache without being read back from the disk).
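The following is a rough, self-contained sketch (in C) of the read-caching and intermediate-writing
behaviour described above. The sector size, number of slots, and the simulated
disk_read_sector/disk_write_sector routines are illustrative stand-ins for the BIOS or driver calls
a real cache manager such as SMARTDrive would use:

    /* cache_sketch.c - a toy write-back sector cache */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define SECTOR_SIZE 16                /* tiny "sectors" keep the demo short       */
    #define DISK_SECTORS 128
    #define CACHE_SLOTS 8

    /* Simulated disk; a real cache manager would call the BIOS or a driver here. */
    static uint8_t disk[DISK_SECTORS][SECTOR_SIZE];
    static int disk_reads, disk_writes;

    static void disk_read_sector(uint32_t lba, uint8_t *buf) {
        memcpy(buf, disk[lba], SECTOR_SIZE); disk_reads++;
    }
    static void disk_write_sector(uint32_t lba, const uint8_t *buf) {
        memcpy(disk[lba], buf, SECTOR_SIZE); disk_writes++;
    }

    typedef struct {
        uint32_t lba;                     /* which sector the slot holds              */
        int valid, dirty;                 /* dirty = modified in RAM, not yet on disk */
        uint8_t data[SECTOR_SIZE];
    } slot_t;
    static slot_t cache[CACHE_SLOTS];

    static slot_t *slot_for(uint32_t lba) { return &cache[lba % CACHE_SLOTS]; }

    static void flush(slot_t *s) {        /* write back a dirty slot                  */
        if (s->valid && s->dirty) { disk_write_sector(s->lba, s->data); s->dirty = 0; }
    }

    /* Read caching: hits are served from RAM; misses evict, then read from disk. */
    static void cached_read(uint32_t lba, uint8_t *buf) {
        slot_t *s = slot_for(lba);
        if (!(s->valid && s->lba == lba)) {
            flush(s);                                 /* evict the previous occupant  */
            disk_read_sector(lba, s->data);
            s->lba = lba; s->valid = 1; s->dirty = 0;
        }
        memcpy(buf, s->data, SECTOR_SIZE);
    }

    /* Intermediate (write-back) writing: data lands in RAM and is marked dirty;
       the physical write is deferred until eviction or an explicit flush. */
    static void cached_write(uint32_t lba, const uint8_t *buf) {
        slot_t *s = slot_for(lba);
        if (s->valid && s->lba != lba) flush(s);
        memcpy(s->data, buf, SECTOR_SIZE);
        s->lba = lba; s->valid = 1; s->dirty = 1;
    }

    int main(void) {
        uint8_t buf[SECTOR_SIZE] = "hello";
        cached_write(5, buf);             /* no physical write happens yet            */
        cached_read(5, buf);              /* served from the cache, no disk read      */
        printf("disk reads=%d, writes=%d\n", disk_reads, disk_writes);      /* 0, 0 */
        for (int i = 0; i < CACHE_SLOTS; i++)
            flush(&cache[i]);             /* like flushing the cache at shutdown      */
        printf("after flush: reads=%d, writes=%d\n", disk_reads, disk_writes); /* 0, 1 */
        return 0;
    }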

Besides increasing the performance of the PC, disk caching also extends the
working lifetime of the hard disk by reducing disk wear.

Disk caching successfully combines the advantages of I/O buffering and of a virtual
disk: like a virtual disk, it allows large amounts of data to be kept in
RAM, and, like I/O buffers, it keeps only the most frequently used data. This minimizes
the amount of RAM that has to be allocated as a buffer. Moreover, caching (like
I/O buffering) is completely "transparent" to users and programs, whereas with a virtual disk the user
must copy files to it manually and then, most likely, reconfigure the programs that will use
those files. On the other hand, a cache manager is usually much larger than a virtual disk driver
because of the amount of work it has to perform, which may lead a user to give up disk caching.

DOS simply can’t do without I/O buffering, which represents the simplified variant of
cashing. That’s why it is represented with the compact code: anticipatory reading and intermediate
writing aren’t executed. However the purpose of I/O buffering is not just to minimize the access
time to the same data, but also to extract the logical records from the physical records and viceversa
– to form physical records from the logical records. Physical record is a portion of data,
which is transferred between RAM and external memory (for disks – contents of a sector). Logical
record is a portion of data, inquired by a program or outputted by it. The tools of the I/O buffering
allow reading of the physical record for only one time, even if there are several logical records in it
needed for a certain program’s performance. Analogically, physical record is written to disk only
after it is formed from several logical records. Without the I/O buffering tools the reading of each
logical record (even from the same sector) would cause the frequent reading of this sector from the
disk. As for the output of each logical record, it would require the operation of writing the whole
physical record to disk, moreover after it’s anticipatory reading and refreshing. All these operations,
in addition to the significant waste of time, would require additional efforts of the programmers.

Since the I/O buffering tools already perform the blocking and deblocking of records, the caching
tools need only organize work with whole physical records (for disks, the contents of
sectors).
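A small sketch, in C, of the deblocking idea described above: several logical records are extracted
from one physical record, and the physical record is read only once. The record sizes, the file name
data.bin, and the single-buffer layout are illustrative assumptions, not DOS's actual buffer
implementation:

    /* record_buffer.c - extracting logical records from buffered physical records */
    #include <stdio.h>

    #define PHYS_SIZE 512L                /* physical record: one disk sector      */
    #define LOG_SIZE  128L                /* logical record requested by a program */

    static FILE *f;
    static unsigned char buffer[PHYS_SIZE];   /* one I/O buffer, like DOS BUFFERS= */
    static long buffered_sector = -1;         /* which physical record is buffered */
    static int physical_reads;

    /* Return a pointer to logical record n, reading its physical record from the
       file only if that record is not already in the buffer. */
    static const unsigned char *get_record(long n) {
        long sector = (n * LOG_SIZE) / PHYS_SIZE;
        if (sector != buffered_sector) {
            fseek(f, sector * PHYS_SIZE, SEEK_SET);
            fread(buffer, 1, PHYS_SIZE, f);
            buffered_sector = sector;
            physical_reads++;
        }
        return buffer + (n * LOG_SIZE) % PHYS_SIZE;
    }

    int main(void) {
        f = fopen("data.bin", "rb");          /* illustrative data file */
        if (!f) { perror("data.bin"); return 1; }
        for (long n = 0; n < 4; n++)          /* four logical records share one sector, */
            get_record(n);                    /* so only one physical read is issued    */
        printf("physical reads: %d\n", physical_reads);
        fclose(f);
        return 0;
    }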

Disk Caching With MS-DOS' Smartdrv.sys

Total system performance is a composite of several factors. Two main factors are central
processing unit (CPU) type and speed, and hard drive access time. Other factors in the mix are the
software programs themselves. Certain programs, like databases and some computer-aided design
(CAD) packages, constantly access your hard drive by opening and closing files. Since the hard
drive is a mechanical device with parts like read/write heads that physically access data, this
constant access slows things down. Short of buying faster equipment, changing the way data is
transferred to the CPU is the most effective way to speed up your system. This can be done with
disk caching (pronounced disk "cashing").

Memory control drivers

SMARTDRV.SYS is a disk caching program for DOS and Windows 3.x systems. The
smartdrive program keeps a copy of recently accessed hard disk data in memory. When a program
or MSDOS reads data, smartdrive first checks to see if it already has a copy and if so supplies it
instead of reading from the hard disk.

Memory Management

• First 640k is Conventional Memory
• 640k to 1024k is Upper Memory
• Above 1024k is Extended Memory
• HIMEM.SYS is loaded in CONFIG.SYS as the first driver; it manages the Extended
Memory area and converts it to XMS (Extended Memory Specification). The first 64k of
extended memory is labeled the High Memory Area (HMA). DOS can be put there by adding
DOS=HIGH to CONFIG.SYS (see the sample CONFIG.SYS after this list).
• EMM386.EXE is loaded in CONFIG.SYS after HIMEM.SYS has been successfully
loaded. It uses the hardware-reserved 384k of space in upper memory (640k-1024k)
and provides EMS (Expanded Memory Specification).
• Virtual Memory relies upon EMS (therefore EMM386.EXE) and uses hard disk space as
memory.
• Memory control commands
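A minimal sample CONFIG.SYS illustrating the loading order described in the list above (the
directory paths and the 1024 KB cache size are only illustrative, and the exact options vary
between DOS versions):

    REM Extended-memory (XMS) manager; must be the first device driver loaded
    DEVICE=C:\DOS\HIMEM.SYS
    REM EMM386 uses the reserved 640k-1024k region to provide EMS and upper memory blocks
    DEVICE=C:\DOS\EMM386.EXE RAM
    REM Move DOS into the High Memory Area and allow drivers to load into upper memory
    DOS=HIGH,UMB
    REM SMARTDrive disk cache (shipped as SMARTDRV.SYS with MS-DOS 5.0)
    DEVICE=C:\DOS\SMARTDRV.SYS 1024
    REM Conventional DOS I/O buffers (see the disk caching section above)
    BUFFERS=20
    FILES=30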

Software operation

Computer software has to be "loaded" into the computer's storage (also known as memory
and RAM).

Once the software is loaded, the computer is able to operate the software. Computers operate
by executing the computer program. This involves passing instructions from the application
software, through the system software, to the hardware which ultimately receives the instruction as
machine code. Each instruction causes the computer to carry out an operation -- moving data,
carrying out a computation, or altering the control flow of instructions.

Data movement is typically from one place in memory to another. Sometimes it involves
moving data between memory and registers which enable high-speed data access in the CPU.
Moving data, especially large amounts of it, can be costly. So, this is sometimes avoided by using
"pointers" to data instead. Computations include simple operations such as incrementing the value
of a variable data element. More complex computations may involve many operations and data
elements together.

Instructions may be performed sequentially, conditionally, or iteratively. Sequential
instructions are those operations that are performed one after another. Conditional instructions are
performed such that different sets of instructions execute depending on the value(s) of some data. In
some languages this is known as an "if" statement. Iterative instructions are performed repetitively
and may depend on some data value. This is sometimes called a "loop." Often, one instruction may
"call" another set of instructions that are defined in some other program or module. When more
than one computer processor is used, instructions may be executed simultaneously.
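As a tiny illustration in C, the fragment below contains each kind of instruction mentioned above:
sequential statements, a conditional ("if"), a loop, and a call to another set of instructions (the
square function is, of course, just an invented example):

    /* flow_demo.c - sequential, conditional, iterative instructions and a call */
    #include <stdio.h>

    static int square(int x) { return x * x; }   /* "another set of instructions" */

    int main(void) {
        int total = 0;                    /* sequential: executed one after another */
        for (int i = 1; i <= 5; i++) {    /* iterative: a loop over some data       */
            if (i % 2 == 0)               /* conditional: an "if" statement         */
                total += square(i);       /* call: control passes to square()       */
        }
        printf("sum of even squares up to 5: %d\n", total);   /* 4 + 16 = 20 */
        return 0;
    }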

A simple example of the way software operates is what happens when a user selects an entry
such as "Copy" from a menu. In this case, a conditional instruction is executed to copy text from
data in a document to a clipboard data area. If a different menu entry such as "Paste" is chosen, the
software executes the instructions to copy the text in the clipboard data area to a place in the
document.

Depending on the application, even the example above could become complicated. The field
of software engineering endeavors to manage the complexity of how software operates. This is
especially true for software that operates in the context of a large or powerful computer system.
Kinds of software by operation: computer program as executable, source code or script,
configuration.

Three layers of software

Starting in the 1980s, application software has been sold in mass-produced packages
through retailers

Users often see things differently than programmers. People who use modern general
purpose computers (as opposed to embedded systems, analog computers, supercomputers, etc.)
usually see three layers of software performing a variety of tasks: platform, application, and user
software.

Platform software

Platform includes the basic input-output system (often described as firmware rather than
software), device drivers, an operating system, and typically a graphical user interface
which, in total, allow a user to interact with the computer and its peripherals (associated
equipment). Platform software often comes bundled with the computer, and users may not
realize that it exists or that they have a choice to use different platform software.

Application software

Application software or Applications are what most people think of when they think of
software. Typical examples include office suites and video games. Application software is
often purchased separately from computer hardware. Sometimes applications are bundled
with the computer, but that does not change the fact that they run as independent
applications. Applications are almost always independent programs from the operating
system, though they are often tailored for specific platforms. Most users think of compilers,
databases, and other "system software" as applications.

User-written software

User software tailors systems to meet the user's specific needs. User software includes
spreadsheet templates, word processor macros, scientific simulations, graphics and
animation scripts. Even email filters are a kind of user software. Users create this software
themselves and often overlook how important it is. Depending on how competently the user-written
software has been integrated into purchased application packages, many users may
not be aware of the distinction between the purchased packages and what has been added by
co-workers.

System, programming and application software

Practical computer systems divide software into three major classes: system software,
programming software and application software, although the distinction is somewhat arbitrary, and
often blurred.

System software helps run the computer hardware and computer system. It includes
operating systems, device drivers, diagnostic tools, servers, windowing systems, utilities and more.
The purpose of systems software is to insulate the applications programmer as much as possible
from the details of the particular computer complex being used, especially memory and other
hardware features, and such accessory devices as communications, printers, readers, displays,
keyboards, etc.

Programming software usually provides tools to assist a programmer in writing computer
programs and software using different programming languages in a more convenient way. The tools
include text editors, compilers, interpreters, linkers, debuggers, and so on. An integrated
development environment (IDE) merges those tools into a single software bundle, and a programmer may
not need to type multiple commands for compiling, interpreting, debugging, tracing, and so on,
because the IDE usually has an advanced graphical user interface, or GUI.

Application software allows humans to accomplish one or more specific (non-computer
related) tasks. Typical applications include industrial automation, business software, educational
software, medical software, databases and computer games. Businesses are probably the biggest
users of application software, but almost every field of human activity (from a-bombs to zymurgy)
now uses some form of application software. It is used to automate all sorts of functions. Many
examples may be found at the Business Software Directory.

Software program and library

A software program may not be sufficiently complete for execution by a computer. In
particular, it may require additional software from a software library in order to be complete. Such a
library may include software components used by stand-alone programs, but which cannot be
executed on their own. Thus, programs may include standard routines that are common to many
programs, extracted from these libraries. Libraries may also include 'stand-alone' programs which
are activated by some computer event and/or perform some function (e.g., of computer
'housekeeping') but do not return data to their activating program. Programs may be called by other
programs and/or may call other programs.
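A very small example of this idea, assuming a Unix-style C environment: the program below is not
complete on its own; it relies on the sqrt routine from the standard math library, which must be
supplied at link time (for instance with cc area.c -lm):

    /* area.c - a program completed by a routine from a software library */
    #include <stdio.h>
    #include <math.h>    /* declares sqrt(); its code lives in the math library */

    int main(void) {
        double side = sqrt(200.0);       /* library routine, not written here */
        printf("a square of area 200 has sides of length %.3f\n", side);
        return 0;
    }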
