Thursday 14 April 2016

Cache memory

DISK CACHING

To help understand the theory of caching, visualize an old, hand-operated water pump. Each
stroke of the pump's handle delivers a set amount of water into a glass. It may take two or three
handle strokes to fill a glass. Now, visualize several glasses that need to be filled. You are
constantly pumping the handle to keep up with the demand. Next, introduce a holding tank. With
this, instead of the water going directly into a glass, it goes into the tank. The advantage is that, once
the holding tank is filled, constant pumping is not required to keep up with demand.

Disk caching may be thought of as an electronic version of a holding tank. With MS-DOS version
5.0, the holding tank is built in with Smartdrv.sys.

Cache: A bank of high-speed memory set aside for frequently accessed data. The term "caching"
describes placing data in the cache. Memory caching and disk caching are the two most common
methods used by PCs.

Keeping the most frequently used disk sectors in main memory (hereafter, RAM) is called
disk caching. It is used to increase the speed of information exchange between the hard disk and
RAM. The relatively low speed of this exchange has long been one of the weakest points limiting
overall computer performance. There are, of course, other bottlenecks, for example the exchange
between a fast microprocessor and slower RAM, but since DOS offers no means of dealing with
them, we will not consider them here.

To perform disk caching, a special buffer region called the cache is set aside in RAM. It
serves as a channel for information exchange and is managed by a resident program called the
cache manager.

Data that has been read is placed into the cache and kept there until a new portion of data
replaces it. When that data is needed again, it can be retrieved from the fast cache, with no need to
read it from the disk. As a result, the apparent speed of reading from the "disk" increases. This
procedure is called read caching.

An even more noticeable effect is achieved by anticipatory reading: data is read and placed
into the cache before the operating system requests it, since this operation can be carried out
asynchronously, without leaving the microprocessor idle.
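
To make the mechanism concrete, here is a minimal sketch in C (illustrative only, not taken from
any real cache manager) of read caching combined with anticipatory reading. The routine
read_sector_from_disk() is a hypothetical stand-in for the physical disk access, the cache is a fixed
set of slots with naive round-robin replacement, and a real cache manager such as Smartdrv would
perform the read-ahead asynchronously rather than in the same call:

    #include <stdio.h>
    #include <string.h>

    #define SECTOR_SIZE 512
    #define CACHE_SLOTS 16
    #define READ_AHEAD  2          /* extra sectors to prefetch on a miss */

    struct cache_slot {
        long sector;               /* disk sector held in this slot, -1 = empty */
        unsigned char data[SECTOR_SIZE];
    };

    static struct cache_slot cache[CACHE_SLOTS];
    static int next_victim = 0;    /* round-robin replacement pointer */

    /* Hypothetical physical read; here it just fills the buffer with a pattern. */
    static void read_sector_from_disk(long sector, unsigned char *buf)
    {
        printf("physical read of sector %ld\n", sector);
        memset(buf, (int)(sector & 0xFF), SECTOR_SIZE);
    }

    static struct cache_slot *find_slot(long sector)
    {
        int i;
        for (i = 0; i < CACHE_SLOTS; i++)
            if (cache[i].sector == sector)
                return &cache[i];
        return NULL;               /* not cached */
    }

    /* Read a sector from disk and keep a copy in the cache. */
    static struct cache_slot *load_sector(long sector)
    {
        struct cache_slot *slot = &cache[next_victim];
        next_victim = (next_victim + 1) % CACHE_SLOTS;
        slot->sector = sector;
        read_sector_from_disk(sector, slot->data);
        return slot;
    }

    /* Read through the cache: a hit never touches the disk; a miss also
     * prefetches the next READ_AHEAD sectors (anticipatory reading). */
    void cached_read(long sector, unsigned char *buf)
    {
        long s;
        struct cache_slot *slot = find_slot(sector);
        if (slot == NULL) {
            slot = load_sector(sector);
            for (s = sector + 1; s <= sector + READ_AHEAD; s++)
                if (find_slot(s) == NULL)
                    load_sector(s);
        }
        memcpy(buf, slot->data, SECTOR_SIZE);
    }

    int main(void)
    {
        unsigned char buf[SECTOR_SIZE];
        int i;
        for (i = 0; i < CACHE_SLOTS; i++)
            cache[i].sector = -1;
        cached_read(10, buf);      /* miss: physical reads of sectors 10, 11, 12 */
        cached_read(11, buf);      /* hit: served from RAM, no disk access */
        return 0;
    }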

Most modern cache managers provide caching not only for reading but also for writing.
Write caching is used when the operating system instructs that data be placed on disk. The data is
first placed into the cache and only later, when it is "convenient" for the PC, written to disk, so that
the real disk write happens asynchronously. Below we will call this process "intermediate writing"
(a sketch follows the list below). After the data is written into the cache instead of the disk, DOS is
notified that the write operation is complete. Since this finishes much faster than writing straight to
the disk, write caching is very effective. The effect is even more noticeable for operations such as:

- updating data recently written to the disk (with caching, it can simply be refreshed in RAM);
- repeatedly reading data recently written to the disk (with caching, it can be read back without any
access to the disk).
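
The following sketch, again hypothetical C rather than real driver code, illustrates intermediate
writing: cached_write() only updates the copy in RAM and marks it "dirty", while
write_sector_to_disk() stands in for the physical write, which is deferred until flush_cache() runs
"when convenient":

    #include <stdio.h>
    #include <string.h>

    #define SECTOR_SIZE 512
    #define CACHE_SLOTS 16

    struct cache_slot {
        long sector;               /* disk sector held here, -1 = empty */
        int  dirty;                /* 1 = modified in RAM, not yet on disk */
        unsigned char data[SECTOR_SIZE];
    };

    static struct cache_slot cache[CACHE_SLOTS];

    /* Hypothetical physical write. */
    static void write_sector_to_disk(long sector, const unsigned char *buf)
    {
        printf("physical write of sector %ld\n", sector);
        (void)buf;
    }

    /* "Write" a sector: only the cache copy is updated, yet the caller can be
     * told at once that the operation is finished. */
    void cached_write(long sector, const unsigned char *buf)
    {
        int i, victim = -1;
        /* reuse the slot that already holds this sector, else take a free one */
        for (i = 0; i < CACHE_SLOTS; i++) {
            if (cache[i].sector == sector) { victim = i; break; }
            if (victim == -1 && cache[i].sector == -1) victim = i;
        }
        if (victim == -1)
            victim = 0;            /* cache full: naive eviction of slot 0 */
        /* evicted dirty data must reach the disk before the slot is reused */
        if (cache[victim].sector != sector && cache[victim].dirty)
            write_sector_to_disk(cache[victim].sector, cache[victim].data);
        cache[victim].sector = sector;
        cache[victim].dirty = 1;
        memcpy(cache[victim].data, buf, SECTOR_SIZE);
    }

    /* Later, "when convenient", dirty sectors are written out in one pass. */
    void flush_cache(void)
    {
        int i;
        for (i = 0; i < CACHE_SLOTS; i++)
            if (cache[i].sector != -1 && cache[i].dirty) {
                write_sector_to_disk(cache[i].sector, cache[i].data);
                cache[i].dirty = 0;
            }
    }

    int main(void)
    {
        unsigned char buf[SECTOR_SIZE];
        int i;
        for (i = 0; i < CACHE_SLOTS; i++) {
            cache[i].sector = -1;
            cache[i].dirty = 0;
        }
        memset(buf, 'A', SECTOR_SIZE);
        cached_write(20, buf);     /* "done" instantly, only RAM is touched */
        cached_write(20, buf);     /* updated again, still no disk access */
        flush_cache();             /* one physical write covers both updates */
        return 0;
    }

Note how two updates of the same sector cost only one physical write, which is exactly the effect
described in the list above.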

Besides increasing the performance of the PC, disk caching also helps extend the working
lifetime of the hard disk by reducing disk wear.

Disk caching successfully combines the advantages of I/O buffering and of a virtual (RAM)
disk: like a virtual disk, it keeps large amounts of data in RAM, and, like the I/O buffers, it keeps
only the most frequently used data, which minimizes the amount of RAM that has to be set aside as
a buffer. In addition, caching, like I/O buffering, is completely "transparent" to users and programs,
whereas with a virtual disk the user must copy files to it manually and may then have to reconfigure
the programs that will use those files. On the other hand, a cache manager is usually much larger
than a virtual disk driver because of the amount of work it has to do, and this can discourage some
users from using disk caching.

DOS simply cannot do without I/O buffering, which is a simplified form of caching. It is
therefore implemented with compact code: anticipatory reading and intermediate writing are not
performed. The purpose of I/O buffering, however, is not only to minimize repeated access to the
same data, but also to extract logical records from physical records and, vice versa, to form physical
records from logical records. A physical record is the portion of data transferred between RAM and
external memory (for disks, the contents of a sector). A logical record is the portion of data requested
by a program or output by it. I/O buffering allows a physical record to be read only once, even if a
program needs several logical records contained in it. Likewise, a physical record is written to disk
only after it has been assembled from several logical records. Without I/O buffering, reading each
logical record (even from the same sector) would cause that sector to be read from the disk again,
and outputting each logical record would require writing the whole physical record to disk, preceded
by reading it in and updating it. Besides the significant waste of time, all of this would demand extra
effort from programmers.

Since the I/O buffering facilities already handle the blocking and deblocking of records, the
caching facilities only need to work with physical records (for disks, the contents of the sectors).
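
As an illustration of deblocking, here is a small hypothetical C sketch in which one 512-byte
physical record (a sector) is read once and several logical records are then extracted from the buffer
without further disk access; the 64-byte record size and the read_sector_from_disk() routine are
assumptions made only for this example:

    #include <stdio.h>
    #include <string.h>

    #define SECTOR_SIZE 512        /* size of a physical record */
    #define RECORD_SIZE 64         /* size of a logical record (assumed) */
    #define RECORDS_PER_SECTOR (SECTOR_SIZE / RECORD_SIZE)

    static unsigned char sector_buf[SECTOR_SIZE];
    static long buffered_sector = -1;   /* which sector the buffer holds */

    /* Hypothetical physical read of one sector. */
    static void read_sector_from_disk(long sector, unsigned char *buf)
    {
        printf("physical read of sector %ld\n", sector);
        memset(buf, (int)(sector & 0xFF), SECTOR_SIZE);
    }

    /* Return logical record number 'rec'; its sector is read from disk only
     * when the buffer does not already hold it (deblocking). */
    void read_logical_record(long rec, unsigned char *out)
    {
        long sector = rec / RECORDS_PER_SECTOR;
        long offset = (rec % RECORDS_PER_SECTOR) * RECORD_SIZE;
        if (sector != buffered_sector) {
            read_sector_from_disk(sector, sector_buf);
            buffered_sector = sector;
        }
        memcpy(out, sector_buf + offset, RECORD_SIZE);
    }

    int main(void)
    {
        unsigned char rec[RECORD_SIZE];
        read_logical_record(0, rec);   /* physical read of sector 0 */
        read_logical_record(1, rec);   /* same sector: no disk access */
        read_logical_record(7, rec);   /* still sector 0: no disk access */
        read_logical_record(8, rec);   /* physical read of sector 1 */
        return 0;
    }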

Disk Caching With MS-DOS' Smartdrv.sys

Total system performance is a composite of several factors. Two main factors are central
processing unit (CPU) type and speed, and hard drive access time. Other factors in the mix are the
software programs themselves. Certain programs, like databases and some computer-aided design
(CAD) packages, constantly access your hard drive by opening and closing files. Since the hard
drive is a mechanical device with parts like read/write heads that physically access data, this
constant access slows things down. Short of buying faster equipment, changing the way data is
transferred to the CPU is the most effective way to speed up your system. This can be done with
disk caching (pronounced disk "cashing").
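
For reference, on an MS-DOS 5.0 system Smartdrv.sys is loaded as a device driver from
CONFIG.SYS. A typical entry looks roughly like the following (the path and the cache size in
kilobytes depend on the particular installation, so check the documentation of your DOS version
for the exact parameters):

    DEVICE=C:\DOS\SMARTDRV.SYS 1024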
