CS480 Notes

Course Playbook


My tasks

Task | Status
Read through chapter 4 | Done
Highlight key concepts and test questions | Done
Complete Problem Set 4 | To-do
Complete Project 4 | Done

✅ Tasks done: 3
📌 Tasks remaining: 1


Notes

Chapter 1.1 Key concepts and code
The main task of an operating system is primarily to manage competing entities, which broadly fall into two categories: users and devices. The operating system must arrange for multiple processes (and multiple users) to peacefully coexist, without interfering with each other. It must also handle various devices which might be simultaneously clamoring for attention. Part of its function is to protect the user from the complexity of those devices by providing a simplified interface (as well as to protect the devices from the typical bone-head user :-).

Kernel mode (supervisor mode) controls all I/O

Code running in kernel mode has complete access to all the hardware and can execute any instruction the machine supports.
To protect users from interfering with each other, we need some safeguards built into the hardware: hence the concept of permissions. To prevent one user from overwriting another user's files, we cannot allow a user to directly give directions to a disk drive. Instead, the user must ask the operating system (OS) to carry out a write command on his/her behalf; one of the responsibilities of the OS is to determine whether the write should be allowed, and if so, to instruct the disk drive to carry on with the operation. The operating system also hides all the ugly details, so that the user can just say 'write this block to disk', rather than having to explicitly deal with the tons of directions that the disk needs to carry out the operation. A device *driver*, typically part of the OS, handles these ugly details. Tanenbaum gives a more in-depth discussion of such a driver (for a floppy disk) on Page 4.
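To make this concrete, here is a minimal sketch of my own (not from the text) of a UNIX-style program asking the kernel to write a file on its behalf via the write() system call; the filename is made up for illustration.

/* sketch: asking the OS to do a write on our behalf (POSIX) */
#include <fcntl.h>   /* open() and the O_* flags */
#include <stdio.h>   /* perror() */
#include <unistd.h>  /* write(), close() */

int main(void)
{
    /* the kernel checks our permissions when we open the file */
    int fd = open("notes.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");  /* the OS refused the request */
        return 1;
    }

    /* "write this block" -- the driver handles the ugly details */
    const char msg[] = "hello, disk\n";
    if (write(fd, msg, sizeof msg - 1) < 0)
        perror("write");

    close(fd);
    return 0;
}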
Page 6 discusses the duties of a printer driver, which forms an excellent generalized example of the management issues. If users were free to simply send text to the printer, different output jobs could be intermixed with one another, producing gibberish that is of no value to anyone. At the very least, the OS must ensure that one job has finished printing before the next job starts. More typically, the OS will order the printer requests, perhaps on a FCFS (First-Come-First-Served = FIFO = First-In-First-Out) basis, or perhaps use a shortest-job-first strategy, or perhaps rank print jobs on the basis of the relative importance of the user who submits the job (which in turn might depend on how much the user is willing to pay :-). If there is a charge for printing, the OS would take care of keeping track of how many pages each user prints, etc.
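As a tiny sketch of the FCFS idea (entirely my own; the job names are hypothetical): the OS keeps pending jobs in a FIFO queue, and each job finishes completely before the next one starts.

/* sketch: a FCFS (FIFO) print queue */
#include <stdio.h>

#define MAX_JOBS 16

static const char *queue[MAX_JOBS];
static int head = 0, tail = 0;

/* enqueue a job in arrival order */
static void submit(const char *job)
{
    if (tail < MAX_JOBS)
        queue[tail++] = job;
}

int main(void)
{
    submit("alice.txt");
    submit("bob.txt");
    submit("carol.txt");

    /* one job finishes before the next begins -- no intermixing */
    while (head < tail)
        printf("printing %s ... done\n", queue[head++]);
    return 0;
}

A shortest-job-first or priority policy would simply pick a different next job from the queue rather than the oldest one.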

Multiplexing (sharing)

Resources can be multiplexed in two different domains: time and space.
Time multiplexing:
Different programs or users take turns using the resource (e.g., the CPU or a printer). 🔁
Space multiplexing:
Each program or user gets a portion of the resource (e.g., a region of memory or part of the disk).
Potential drawbacks?
Section 1.2 (Page 7) gives an overview of the evolution of operating systems, beginning with purely gear-driven early attempts (Babbage's design was on the order of hand-crank mechanical adding machines or cash registers) and moving to the first successful attempts that used electromechanical relays (a relay contains an electromagnet, which, when powered, magnetically flips a switch in another part of the circuit). These were very slow (by electronic standards), because flipping a 1 to a 0 involved physically moving the switch arm. (And these famously gave rise to the term 'hardware bug', when a problem with one early computer was traced to an actual insect that had become fried and stuck in a switch arm, preventing the contacts from touching.)
When relays were replaced by vacuum tubes (such as you would find in the radios or televisions of the time), the process was sped up enormously, but was still impossibly slow by today's standards. Programming could only be done by using true machine language, and I/O was almost non-existent. Answers were read out via little lights, and input was accomplished by essentially rewiring the machine for your particular program. Since the machines were too valuable to lay idle while some slow human did the wiring, this (relatively cheap) part of the computer (a "plugboard") was detachable and duplicable. A programmer would wire up his program off-line on his own board, and then plug it in to the computer when it was time to run. You can see one here: (This page also has a nice picture of a relay mechanism.)
Card readers made input a bit more convenient. As things progressed, cards were punched on a machine that was basically a cross between a typewriter and a hole punch. Programmers now carried around a box of cards rather than a plugboard (and you were very careful to avoid spilling the box and scrambling the order of your cards). A paper tape puncher was similar in concept, where six columns of holes/non-holes encoded a single character on each row of a paper tape.
Transistors were a major improvement over vacuum tubes, and not just because of speed: reliability was vastly improved. A vacuum tube that lasts for 3 years on average sounds pretty reliable; you would have to replace it only once every three years. But if you have 20,000 such tubes, something is burning out on average about once an hour. Transistors are MUCH more reliable. (Transistors also are far more compact, require much less power to function, and therefore generate much less heat.)
Prior to the transistor, even the idea of an assembly language was absurd, and something as complicated as a compiler was unthinkable. It was far more cost-effective to have humans create the machine-language instructions for a program than to waste valuable computer time doing something a lowly human could do at a much lower price. Each different machine architecture needed its own special assembly language, of course. FORTRAN was one of the first machine-independent languages: a FORTRAN compiler produced assembly language for a particular machine, and then the assembler reduced this to the particular machine code for the target hardware.
Things had progressed to the point where peripherals could come into play. It no longer made sense to idle a hugely expensive mainframe while it took care of reading in punched cards at a painfully slow (mechanical) rate, nor to use the mainframe to directly control the printer. Instead, a separate computer/cardreader was used to read in cards and prepare an input stream for the mainframe. These peripherals were not physically connected as we would expect today; there were no wires running between the mainframe and the peripherals. The computer/cardreader(s) collected data and spooled it onto a tape. The analog of today's communication cable was at that time a human operator who dismounted this 'input' tape and mounted it on the mainframe. Similarly, the mainframe produced an output tape, which was then schlepped over to a tapereader/computer/printer for printing -- which was not the compact device you see today; it was the size (and price) of a small tank.
Computers at this time tended to fall into two categories: number-crunchers and data processors. The number-crunchers featured a fast CPU and didn't bother too much about I/O. The data processors (character-stream processors) could get by with a wimpy central processor, since the CPU was mostly idle while waiting for input or output to take place.
IBM's 360 line of computers aimed at being good at both, which led to a need to keep the expensive CPU busy during I/O-bound jobs, so its operating system introduced multiprogramming: if several jobs were in memory at once, the CPU could be working on one while another was waiting for I/O to complete. One result was that the hardware now needed additional features to protect one job from interfering with the memory space allocated to another.
Another speedup was achieved by SPOOLing (Simultaneous Peripheral Operation On Line), which moved onto the mainframe the work that had earlier been done off-line by separate machines and eliminated the magnetic tapes used for that purpose. Instead, a card reader read input directly onto a disk (or drum), and the 360 would then read a new program from the disk into main memory whenever an old program completed and relinquished its memory space.
This was still a batch system: while programs were pre-loaded into memory so that they could be ready to run as soon as another program completed, a single job ran from start to finish uninterrupted before the next job started. This meant that small jobs could be held up for hours if they happened to be behind a number-cruncher.
The need for better response times was the impetus for timesharing, which, as a pleasant side effect, also often increased the utilization of all that expensive equipment. The idea, of course, is to run several jobs pseudo-simultaneously (within a second, allow several jobs to have the CPU for a little bit of time), so that jobs that require only a little CPU time won't be backed up for a long time waiting for a number-cruncher. More specialized hardware was needed to support this, plus some very fancy OS programming to support the context switches between one job and another.
MULTICS was a far-sighted project that introduced many innovations, including timesharing, intending to be all things to all users. It was written in PL/1 (Programming Language One) which likewise was intended to have every possible feature imaginable dumped into one compiler (like ADA on steroids). PL/1 never did work completely, which meant that MULTICS was delayed by both the language and the challenge of implementing all the new ground-breaking concepts.
Ken Thompson and Dennis Ritchie at Bell Labs began writing an efficient, stripped down, one-user version of MULTICS, which formed the basis of UNIX (UNIX is a pun on MULTICS: it was intended to do one thing, and do it really well).
The result was a very efficient and small OS of exquisite design (so good, in fact, that research into alternate operating systems was, and still is, retarded -- most researchers concentrated on improving UNIX rather than setting out in new directions). Over the years, the spartan kernel was expanded to support more and more capabilities, all without losing the original efficiency. (Chapter 10 of Tanenbaum contains the details, which we will cover as needed for our programming assignments.) The distribution limitations of the UNIX variants motivated Linus Torvalds to begin writing an unrestricted version of UNIX called Linux (Lee-nucks). Linux/UNIX in its current forms can run on anything from a supercomputer to a palmtop.
Originally, individual transistors were wired together on a circuit board, requiring the CPU to be spread out over a large area, which meant that signals had a long way to go between components. Integrated circuits (where many transistors were on the same chip) allowed CPUs to shrink in size, giving a nice speedup as a result. LSI (Large-Scale Integration) circuits, grouping thousands of transistors on a single chip, improved things dramatically. Minicomputers and then personal computers became a reality. The Intel 8080 and the DEC LSI-11 (a small PDP-11) were among the first. In 1980, an LSI-11 could be bought for about $10,000, consisting of a processor, drive, keyboard, and screen. ('Drive' meant an 8-inch floppy drive; an actual hard drive was outlandishly expensive at the time. 'Screen' meant an 80x24 monochrome character display.)
IBM entered the personal computer field only reluctantly: it was pretty sure that a business that could meet its needs with a $10,000 machine might choose that instead of a multi-million-dollar machine from IBM. However, since minicomputers and personal computers from other companies were starting to really impact its sales, IBM finally entered the field, but late. This meant that it did not have time to build a new machine from scratch; happily (for the world in general, but not so much for IBM) that meant it had to cobble something together from existing parts, rather than take the time to design a custom system. Everything was off-the-shelf parts, except for the BIOS on the motherboard. (The BIOS is briefly discussed on Page 33/34.)
Once the firmware BIOS was reverse-engineered, just about any company could build an IBM clone. That is why today, rather than having dozens of hardware platforms with the sort of incompatibilities that exist between the PC and the Mac, the world has a (fairly) uniform set of hardware to build upon. The downside is that most of these PCs run one of the abominations peddled by Microsoft.

Multithreading or hyperthreading

What it does is allow the CPU to hold the state of two different threads and then switch back and forth on a nanosecond time scale.
A thread is a lightweight process, which, in turn, is a running program.
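A minimal sketch of two threads running inside one process (standard POSIX threads, my own example; compile with -pthread):

/* sketch: two threads sharing one process (POSIX threads) */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    /* both threads share the process's address space */
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);

    /* the CPU (or a hyperthreaded core) switches between them */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}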

Section 1.5 Operating System Concepts (Page 38)
The general concepts in this section will be explained in more detail later on. For now, make sure you read the textbook and get familiar with the boldface terms (process, address space, etc.).

Process

Basically a program in execution.
Associated with each process is an address space:
A list of memory locations from 0 to some maximum, which the process can read and write.
Fundamentally, a process is a container that holds all the information needed to run a program.
If a process can create one or more other processes (child processes), and these in turn can create child processes, we quickly arrive at a process tree structure (see the sketch below).
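Here is a minimal sketch of my own (assuming a UNIX-like system) of a parent creating a child with fork(). The child gets a copy of the parent's address space, so its change to x is invisible to the parent.

/* sketch: creating a child process with fork() (POSIX) */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int x = 480;         /* lives in this process's address space */
    pid_t pid = fork();  /* clone this process; child gets a copy */

    if (pid == 0) {
        x = 0;           /* modifies only the child's copy */
        printf("child:  x = %d\n", x);
    } else {
        wait(NULL);      /* parent waits for the child to finish */
        printf("parent: x = %d (unchanged)\n", x);
    }
    return 0;
}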


Section 1.6 System Calls
In computing, a system call is the programmatic way in which a program requests a service from the kernel of the operating system it runs on. System calls provide the services of the operating system to user programs via an Application Program Interface (API): they form the interface between a process and the operating system, allowing user-level processes to request OS services. System calls are the only entry points into the kernel; all programs needing resources must use them.

Any single-CPU computer can execute only one instruction at a time.

Services provided by system calls:
Process creation and management
Main memory management
File access, directory, and file system management
Device handling (I/O)
Protection
Networking, etc.
Types of system calls: there are several categories:
Process control: end, abort, create, terminate, allocate and free memory
File management: create, open, close, delete, read, etc.
Device management
Information maintenance
Communication
C programming
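As a small illustration (my own sketch, assuming a UNIX-like system), this C program exercises system calls from several of the categories above through their standard library wrappers:

/* sketch: a few system calls from the categories above (POSIX) */
#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* information maintenance: ask the kernel who we are */
    printf("pid = %d\n", (int)getpid());

    /* file management: create, write, close */
    int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd >= 0) {
        write(fd, "hi\n", 3);
        close(fd);
    }

    /* process control: create a child, then wait for it to end */
    if (fork() == 0)
        _exit(0);  /* the child terminates immediately */
    wait(NULL);

    return 0;
}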
