
Tuesday, September 15, 2009

PHP Operators

Operator  Description                    Example     Result
+         Addition                       x=2; x+2    4
-         Subtraction                    x=2; 5-x    3
*         Multiplication                 x=4; x*5    20
/         Division                       15/5        3
                                         5/2         2.5
%         Modulus (division remainder)   5%2         1
                                         10%8        2
                                         10%2        0
++        Increment                      x=5; x++    x=6
--        Decrement                      x=5; x--    x=4



Assignment Operators
Operator Example Is The Same As
= x=y x=y
+= x+=y x=x+y
-= x-=y x=x-y
*= x*=y x=x*y
/= x/=y x=x/y
.= x.=y x=x.y
%= x%=y x=x%y

Comparison Operators
Operator Description Example
== is equal to 5==8 returns false
!= is not equal 5!=8 returns true
> is greater than 5>8 returns false
< is less than 5<8 returns true
>= is greater than or equal to 5>=8 returns false
<= is less than or equal to 5<=8 returns true

Logical Operators
Operator  Description  Example
&&        and          x=6; y=3; (x < 10 && y > 1) returns true
||        or           x=6; y=3; (x==5 || y==5) returns false
!         not          x=6; y=3; !(x==y) returns true

Variable Naming Rules

• A variable name must start with a letter or an underscore "_"
• A variable name can only contain alpha-numeric characters and underscores (a-z, A-Z, 0-9, and _ )
• A variable name should not contain spaces. If a variable name is more than one word, separate the words with underscores ($my_string) or use capitalization ($myString)

PHP is a Loosely Typed Language

In PHP, a variable does not need to be declared before being set. For example:

$txt = "Hello World";
$x = 16;

As the example shows, you do not have to tell PHP which data type the variable is.
PHP automatically converts the variable to the correct data type, depending on how it is set.
In a strongly typed programming language, you have to declare (define) the type and name of the variable before using it.
In PHP the variable is declared automatically when you use it.

Basic PHP Syntax

A PHP scripting block always starts with <?php and ends with ?>. A PHP scripting block can be placed anywhere in the document.
On servers with shorthand support enabled, you can start a scripting block with <? and end with ?>.
For maximum compatibility, we recommend that you use the standard form (<?php) rather than the shorthand form.
A PHP file normally contains HTML tags, just like an HTML file, and some PHP scripting code.
Below, we have an example of a simple PHP script which sends the text "Hello World" to the browser:

<html>
<body>
<?php
echo "Hello World";
?>
</body>
</html>

Each code line in PHP must end with a semicolon. The semicolon is a separator and is used to distinguish one set of instructions from another.
There are two basic statements to output text with PHP: echo and print. In the example above we have used the echo statement to output the text "Hello World".
Note: The file must have the .php extension. If the file has a .html extension, the PHP code will not be executed.

Suggestions:

• Contact a local Non-Profit Corporation and ask them if you can volunteer to create a web site for their organization.
• Create a personal site including your resume and references, information about you (family, interests, hobbies)
• Create a site marketing a fictitious company or product.
• Create a site about your hometown or favorite place to visit, include places to stay, spots to visit, restaurants to dine at and other interesting facts.
• BE CREATIVE!!
Your pages must include the following:
• Section headers - used appropriately
• Bold and italic fonts
• Centered text
• Paragraphs
• An ordered, unordered, or definition list
• Horizontal rules
• Graphic images - Be sure to use height/width and Alt tags on your images. (Links to image sites can be found at putertutor.net.) Use graphics for navigation buttons.
• Links to other pages & external sites
• A table
• E-mail links and navigation bars (that allow you to navigate most/all the pages in your site) at the bottom of every page. Contact information at the bottom of every page.
• Special characters
• A form (including JavaScript functions)
• Background & text color (search the web for images)
• CSS - inline, embedded, or linked style sheet (optional). If you use CSS on your site, make sure that your site looks clean in Netscape 4.7. For more info on Netscape and CSS, search http://www.tinkertech.net for resources.

While you have not learned about creating your own graphics, you should still take the design principles from the Web Design Workshop and the Non-Designers Workbook into consideration. The site should look professional; it will be a part of your portfolio to show to employers.

FINAL PROJECT

Sign-up for an account at brainbench.com and take the HTML 3.2 test. Submit your results to your instructor (be sure to include your name on the results page).
Using what you have learned in the HTML book, the Non-Designer's Web Book, and the Web Design Workshop, create your own web site (minimum of EIGHT created pages) and upload it to an account at Tripod or on another server. Place a link from your homework page to your final project. Notify your instructor via email, rwood@cccoe.k12.ca.us, when the site is complete. The site topic might be one of the following:

Tutorial 3 – Designing a Web Page

Click the Foreground Color box to see an example of a color dialog box. This dialog box can be used to find out the Hex value of an RGB color and vice versa.
EyeDropper is a small software program that determines the RGB and Hex color values of a web page or any other object. Eyedropper is installed on all the systems. Start > Programs > Web Design > EyeDropper
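The RGB-to-Hex conversion that the color dialog box and EyeDropper perform can be sketched in a few lines of Python (the function names here are illustrative, not part of EyeDropper itself):

```python
def rgb_to_hex(r, g, b):
    """Convert an RGB triple (0-255 each) to a #RRGGBB hex string."""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

def hex_to_rgb(s):
    """Convert a #RRGGBB hex string back to an (r, g, b) tuple."""
    s = s.lstrip("#")
    return tuple(int(s[i:i + 2], 16) for i in (0, 2, 4))
```

For example, rgb_to_hex(255, 0, 128) gives "#FF0080", and hex_to_rgb turns it back into (255, 0, 128).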

Complete and Save the Following:

Tutorial 1 – Creating a Web Page
Sessions 1.1 - 1.3 Pp. 1.03 – 1.36, Review Assignments p. 1.37, and Projects 1 - 3 Pp. 1.38 – 1.42
Do not complete the Lab Assignments
ALL filenames (HTML, JPEG, GIF, etc.) and folders should be eight characters or less and lowercase even if the book indicates otherwise. Be sure to change the filename extension to .htm NOT .txt (which is the default filename extension for text files.)
Tutorial 2 – Adding Hypertext Links to a Web Page
Sessions 2.1 - 2.3 Pp. 2.01 – 2.32, Review Assignments Pp. 2.32 – 2.33, and Case Problems 1 – 4 Pp. 2.33 – 2.36
ALL filenames (HTML, JPEG, GIF, etc.) and folders should be eight characters or less and lowercase even if the book indicates otherwise. Be sure to change the filename extension to .htm NOT .txt (which is the default filename extension for text files.)

Creating Web Pages With HTML

(Includes: HTML, Cascading Style Sheets, JavaScript, DHTML, & Multimedia)

File management skills are absolutely necessary to create and maintain web sites. Prior to starting the HTML book, review the file management packet provided by your instructor. Complete the HTML tutorial prior to starting the book. If it is not already installed on your machine, ask your instructor to install it.
Step 1: A folder named html will be copied into your user directory with all of the necessary files for the book.
Step 2: The CD that comes with the book will not be needed in class. You can take it home and unzip the files in the Data folder on the CD if you want to work at home.
Step 3: Save all files to the HTML folder in the appropriate sub-folder. ALL filenames (HTML, JPEG, GIF, etc.) and folders should be eight characters or less and lowercase even if the book indicates otherwise. Be sure to change the filename extension to .htm NOT .txt (which is the default filename extension for text files.)
You will be using TextPad to create your HTML documents. To access TextPad click Start > Programs > Web Design > TextPad. You may also use Xoology Coda (Start > Programs > Web Design > Xoology Coda > Coda). Coda will color code mistakes in your HTML and JavaScript syntax.
Complete and save ALL Session Work, Review Assignments, Project Work and Case Problems unless otherwise noted. The Lab Assignments are interactive with files found on the CD and may be done at home. Do not do the lab assignments in class.
Some exercises ask you to print pages that you have created; please do not print your assignments. Upon completion of the book, your instructor will teach you how to upload your pages to the web for review.
Quick Check answers for each session can be found at the end of each tutorial. Case Problems are a test of what you have learned during the tutorial; be sure to review the tutorial prior to asking for assistance.
Additional HTML resources can be found at http://www.putertutor.net or http://www.tinkertech.net; use the Site Map to navigate the sites.

Tuesday, August 4, 2009

5. Finding match module:

A number of biometric characteristics are in use for different applications. Each biometric trait has its strengths and weaknesses, and the choice depends on the application. No single biometric is expected to effectively meet all of the requirements, so we chose three biometric traits for an effective result.
The database stores all three biometrics of a single person as a cluster. We only retrieve information when all three biometrics match; if any one of them does not match, the information cannot be retrieved. Each biometric is compared with the database, and the information is then retrieved successfully by the authorized person.

4. Signature verification module:

The way a person signs his or her name is known to be a characteristic of that individual. Signatures are a behavioral biometric that changes over a period of time and is influenced by the physical and emotional conditions of the signatories.
The signature image is grayscaled and its pixel values are evaluated. The boundary of the signature is extracted along with the features of the signature image; features such as length, centroid, six-fold, etc. are extracted. The signature image is then verified, compared, and identified against the image stored in the database.
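As a rough illustration, the centroid feature mentioned above can be computed from a grayscaled image like this (a Python sketch; the 0-255 row-of-lists format and the threshold of 128 are assumptions for illustration, not part of the project code):

```python
def centroid(image):
    """Centroid (mean row, mean column) of the dark pixels in a
    grayscale image, given as a list of rows of 0-255 values.
    Pixels below the threshold count as part of the signature."""
    threshold = 128  # assumed cutoff separating ink from background
    rows = cols = count = 0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value < threshold:
                rows += y
                cols += x
                count += 1
    return (rows / count, cols / count)
```

A 2x2 image with ink at the two off-diagonal corners, [[255, 0], [0, 255]], has its centroid at the center, (0.5, 0.5).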

3. Iris recognition module:

This is our second biometric trait, where we take the iris as our physical trait for recognizing the authorized user. The iris is the annular region of the eye bounded by the pupil and the sclera (white of the eye) on either side. Each iris is believed to be distinctive and, like fingerprints, even the irises of identical twins are expected to be different. It is extremely difficult to surgically tamper with the texture of the iris.
The image of the iris is grayscaled, and the grayscaled image is then smoothed. The edge of the iris image is retrieved, and that image is compared with the image stored in the database.

2. Fingerprint warping module:

We consider the fingerprint as one of the biometric traits in our project. A fingerprint can be warped or distorted by following some procedure. All the biometric traits are stored in the database for future comparison, and a captured fingerprint can be compared with the fingerprint image stored in the database.
This comparison is done by grayscaling the image and obtaining the skeletonized image. The skeletonized image retrieves the ridges and valleys of the fingerprint image. The fingerprint image is then compared using the ridge curve correspondence method.

1. Authentication module

The first module of this project is authentication. Authentication is done to secure the project from unauthorized users. The username and password are checked, and unauthorized users are rejected. A user can access the application only if the username and password are valid. As the first module of the project, it provides security for our application.
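A minimal sketch of such a username/password check, written in Python for illustration (the credential store here is hypothetical; a real system would store salted password hashes, not plain text):

```python
# Hypothetical stored credentials for illustration only.
USERS = {"alice": "secret"}

def authenticate(username, password):
    """Return True only when the username exists and the password matches."""
    return USERS.get(username) == password
```

With this store, authenticate("alice", "secret") succeeds, while a wrong password or an unknown username is rejected.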

USER INTERFACE REQUIREMENTS

“Biometrics: A tool for information security” involves the following modules. The total project is classified into five modules. These modules can fully functions the process of retrieving information with authentication.
1. Authentication module
2. Finger print warping module
3. Iris recognition module
4. Signature verification module
5. Finding match module

Monday, June 8, 2009

Registers

There are five registers, each 24 bits in length. Their mnemonic, number and use are given in the following table.

Mnemonic Number Use
A 0 Accumulator; used for arithmetic operations
X 1 Index register; used for addressing
L 2 Linkage register; JSUB
PC 8 Program counter
SW 9 Status word, including CC

Memory

There are 2^15 bytes (32,768 bytes) in the computer memory. It uses Little Endian format to store numbers; 3 consecutive bytes form a word, and each location in memory contains an 8-bit byte.
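The word layout described above can be sketched in Python (an illustration of the stated little-endian, 3-byte-word format, not code from the text):

```python
def word_to_bytes(n):
    """Split a 24-bit word into 3 bytes, least significant byte first
    (the little-endian layout described above)."""
    return [n & 0xFF, (n >> 8) & 0xFF, (n >> 16) & 0xFF]

def bytes_to_word(bs):
    """Reassemble a 24-bit word from 3 little-endian bytes."""
    return bs[0] | (bs[1] << 8) | (bs[2] << 16)
```

For example, the word 0x123456 is stored as the byte sequence 56 34 12.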

System Software and Machine Architecture

One characteristic in which most system software differs from application software is machine dependency.

System software supports the operation and use of a computer; application software provides a solution to a problem. An assembler translates mnemonic instructions into machine code, so the instruction formats, addressing modes, etc., are of direct concern in assembler design. Similarly, compilers must generate machine language code, taking into account such hardware characteristics as the number and type of registers and the machine instructions available. Operating systems are directly concerned with the management of nearly all of the resources of a computing system.

There are aspects of system software that do not directly depend upon the type of computing system: the general design and logic of an assembler, the general design and logic of a compiler, and code optimization techniques are largely independent of the target machine. Likewise, the process of linking together independently assembled subprograms does not usually depend on the computer being used.

A Simple Assembly Language

Terminology
The reader should be familiar with the following general terms after reading this section:
programming language
language processor
assembler
compiler
interpreter
syntax
semantics
BNF notation
production rule
extended BNF notation
RTN notation
syntax diagram
parser
syntax directed translator
top-down parsing
bottom-up parsing
token
look-ahead
location counter
lexical analyzer
lexicon
lexeme
Additionally, the reader should be familiar with the following components of the example assembly language:
statement
definition
label
opcode
operand
comment

What is an Assembler?

The first idea a new computer programmer has of how a computer works is learned from a programming language. Invariably, the language is a textual or symbolic method of encoding programs to be executed by the computer. In fact, this language is far removed from what the computer hardware actually "understands". At the hardware level, after all, computers only understand bits and bit patterns. Somewhere between the programmer and the hardware the symbolic programming language must be translated to a pattern of bits. The language processing software which accomplishes this translation is usually centered around either an assembler, a compiler, or an interpreter. The difference between these lies in how much of the meaning of the language is "understood" by the language processor.
An interpreter is a language processor which actually executes programs written in its source language. As such, it can be considered to fully understand that language. At the lowest level of any computer system, there must always be some kind of interpreter, since something must ultimately execute programs. Thus, the hardware may be considered to be the interpreter for the machine language itself. Languages such as BASIC, LISP, and SNOBOL are typically implemented by interpreter programs which are themselves interpreted by this lower level hardware interpreter.
Interpreters running as machine language programs introduce inefficiency because each instruction of the higher level language requires many machine instructions to execute. This motivates the translation of high level language programs to machine language. This translation is accomplished by either assemblers or compilers. If the translation can be accomplished with no attention to the meaning of the source language, then the language is called an assembly or low level language, and the translator is called an assembler. If the meaning must be considered, the translator is called a compiler and the source language is called a high level language. The distinction between high and low level languages is somewhat artificial since there is a continuous spectrum of possible levels of complexity in language design. In fact, many assembly languages contain some high level features, and some high level languages contain low level features.
Since assemblers are the simplest of symbolic programming languages, and since high level languages are complex enough to be the subject of entire texts, only assembly languages will be discussed here. Although this simplifies the discussion of language processing, it does not limit its applicability; most of the problems faced by an implementor of an assembly language are also faced in high level language implementations. Furthermore, most of these problems are present in even the simplest of assembly languages. For this reason, little reference will be made to the comparatively complex assembly languages of real machines in the following sections.
The Assembly Process
It is useful to consider how a person would process a program before trying to think about how it is done by a program. For this purpose, consider the program in Figure 2.1. It is important to note that the assembly process does not require any understanding of the program being assembled. Thus, it is unnecessary to understand the integer division algorithm implemented by the code in Figure 2.1, and little understanding of the particular machine code being used is needed (for those who are curious, the code is written for an R6502 microprocessor, the processor used in the historically important Apple II family of personal computers from the late 1970's).
; UNSIGNED INTEGER DIVIDE ROUTINE
; Takes dividend in A, divisor in Y
; Returns remainder in A, quotient in Y
START: STA IDENDL ;Store the low half of the dividend
STY ISOR ;Store the divisor
LDA #0 ;Zero the high half of the dividend (in register A)
TAX ;Zero the loop counter (in register X)
LOOP: ASL IDENDL ;Shift the dividend left (low half first)
ROL ; (high half second)
CMP ISOR ;Compare high dividend with divisor
BCC NOSUB ;If IDEND < ISOR don't subtract
SBC ISOR ;Subtract ISOR from IDEND
INC IDENDL ;Put a one bit in the quotient
NOSUB: INX ;Count times through the loop
CPX #8
BNE LOOP ;Repeat loop 8 times
LDY IDENDL ;Return quotient in Y
RTS ;Return remainder in A

IDENDL:B 0 ;Reserve storage for the low dividend/quotient
ISOR: B 0 ;Reserve storage for the divisor
Figure 2.1. An example assembly language program.
When a person who knows the Roman alphabet looks at text such as that illustrated in Figure 2.1, an important, almost unconscious processing step takes place: The text is seen not as a random pattern on the page, but as a sequence of lines, each composed of a sequence of punctuation marks, numbers, and word-like strings. This processing step is formally called lexical analysis, and the words and similar structures recognized at this level are called lexemes.
If the person knows the language in which the text is written, a second and still possibly unconscious processing step will occur: Lexical elements of the text will be classified into structures according to their function in the text. In the case of an assembly language, these might be labels, opcodes, operands, and comments; in English, they might be subjects, objects, verbs, and subsidiary phrases. This level of analysis is called syntactic analysis, and is performed with respect to the grammar or syntax of the language in question.
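As a rough Python sketch, lexical and syntactic analysis of one line of the example assembly language might look like this (the four-field split into label, opcode, operand, and comment assumes the simple syntax of Figure 2.1):

```python
def tokenize(line):
    """Split one assembly source line into (label, opcode, operand, comment).
    Assumes the illustrative syntax of Figure 2.1: an optional 'LABEL:',
    then an opcode, an optional operand, and an optional ';' comment."""
    comment = None
    if ";" in line:
        line, comment = line.split(";", 1)
        comment = comment.strip()
    fields = line.split()
    label = None
    if fields and fields[0].endswith(":"):
        label = fields.pop(0)[:-1]
    opcode = fields[0] if fields else None
    operand = fields[1] if len(fields) > 1 else None
    return (label, opcode, operand, comment)
```

Applied to the first line of Figure 2.1, this yields ("START", "STA", "IDENDL", "Store the low half of the dividend").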
A person trying to hand translate the above example program must know that the R6502 microprocessor has a 16 bit memory address, that memory is addressed in 8 bit (one byte) units, and that instructions have a one byte opcode field followed optionally by additional bytes for the operands. The first step would typically involve looking at each instruction to find out how many bytes of memory it occupies. Table 2.1 lists the instructions used in the above example and gives the necessary information for this step.
Opcode Bytes Hex Code

ASL 3 0E aa aa
B 1 cc
BCC 2 90 oo
BNE 2 D0 oo
CMP 3 CD aa aa
CPX # 2 E0 cc
INC 3 EE aa aa
INX 1 E8
LDA # 2 A9 cc
LDY 3 AC aa aa
ROL 1 2A
RTS 1 60
SBC 3 ED aa aa
STA 3 8D aa aa
STY 3 8C aa aa
TAX 1 AA

Notes: aa aa - two byte address, least significant byte first.
oo - one byte relative address.
cc - one byte of constant data.
Table 2.1. Opcodes on the R6502.
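The first step of hand translation, finding how many bytes each instruction occupies, amounts to a table lookup. A Python sketch of Table 2.1 (the immediate forms CPX # and LDA # are collapsed to plain names here for simplicity):

```python
# Instruction sizes from Table 2.1 (opcode name -> bytes occupied).
OPCODE_BYTES = {
    "ASL": 3, "B": 1, "BCC": 2, "BNE": 2, "CMP": 3, "CPX": 2,
    "INC": 3, "INX": 1, "LDA": 2, "LDY": 3, "ROL": 1, "RTS": 1,
    "SBC": 3, "STA": 3, "STY": 3, "TAX": 1,
}

def program_size(opcodes):
    """Total bytes occupied by a sequence of opcodes."""
    return sum(OPCODE_BYTES[op] for op in opcodes)
```

Summing the sizes for the seventeen lines of Figure 2.1 gives 35 bytes, which matches the address range 0200-0222 in the finished translation.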
To begin the translation of the example program to machine code, we take the data from table 2.1 and attach it to each line of code. Each significant line of an assembly language program includes the symbolic name of one machine instruction, for example, STA. This is called the opcode or operation code for that line. The programmer, of course, needs to know what the program is supposed to do and what these opcodes are supposed to do, but the translator has no need to know this! For the curious, the STA instruction stores the contents of the accumulator register in the indicated memory address, but you do not need to know this to assemble the program!
Table 2.1 shows the numerical equivalent of each opcode in hexadecimal (base 16). We could have used any number base; inside the computer, the bytes are stored in binary, and because hexadecimal-to-binary conversion is trivial, we use that base here. While we're at it, we will strip off all the irrelevant commentary and formatting that was included only for the human reader, and leave only the textual description of the program.
8D START: STA IDENDL
aa
aa
8C STY ISOR
aa
aa
A9 LDA #0
cc
AA TAX
0E LOOP: ASL IDENDL
aa
aa
2A ROL
CD CMP ISOR
aa
aa
90 BCC NOSUB
oo
ED SBC ISOR
aa
aa
EE INC IDENDL
aa
aa
E8 NOSUB: INX
E0 CPX #8
cc
D0 BNE LOOP
oo
AC LDY IDENDL
aa
aa
60 RTS
cc IDENDL:B 0
cc ISOR: B 0
Figure 2.2. Partial translation of the example to machine language
The result of this first step in the translation is shown in Figure 2.2. This certainly does not complete the job! Table 2.1 included constant data, relative offsets and addresses, as indicated by the lower case notations cc, oo and aa aa, and to finish the translation to machine code, we must substitute numeric values for these!
Constants are the easiest. We simply incorporate the appropriate constants from the source code into the machine code, translating each to hexadecimal. Relative offsets are a bit more difficult! These give the number of bytes ahead (if positive) or behind (if negative) the location immediately after the location that references the offset. Negative offsets are represented using 2's complement notation.
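The offset computation described above can be expressed in Python. For example, the forward BCC NOSUB branch in Figure 2.3 gets the offset 06, and the backward BNE LOOP branch gets EC (two's complement for -20):

```python
def branch_offset(target, next_addr):
    """One-byte relative branch offset: the distance from the location
    immediately after the branch operand (next_addr) to the target,
    reduced to 8 bits so negative offsets come out in 2's complement."""
    return (target - next_addr) & 0xFF
```

Using the final addresses from Figure 2.4: LOOP is at 0209 and the byte after the BNE operand is at 021D, so the offset byte is EC.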
8D START: STA IDENDL
aa
aa
8C STY ISOR
aa
aa
A9 LDA #0
00
AA TAX
0E LOOP: ASL IDENDL
aa
aa
2A ROL
CD CMP ISOR
aa
aa
90 BCC NOSUB
06
ED SBC ISOR
aa
aa
EE INC IDENDL
aa
aa
E8 NOSUB: INX
E0 CPX #8
08
D0 BNE LOOP
EC
AC LDY IDENDL
aa
aa
60 RTS
00 IDENDL:B 0
00 ISOR: B 0
Figure 2.3. Additional translation of the example to machine language
The result of this next translation step is shown in boldface in Figure 2.3. We cannot complete the translation without determining where the code will be placed in memory. Suppose, for example, that we place this code in memory starting at location 0200 (base 16). This allows us to determine which byte goes in what memory location, and it allows us to assign values to the two labels IDENDL and ISOR, and thus fill out the values of all of the 2-byte address fields to complete the translation.
0200: 8D START: STA IDENDL
0201: 21
0202: 02
0203: 8C STY ISOR
0204: 22
0205: 02
0206: A9 LDA #0
0207: 00
0208: AA TAX
0209: 0E LOOP: ASL IDENDL
020A: 21
020B: 02
020C: 2A ROL
020D: CD CMP ISOR
020E: 22
020F: 02
0210: 90 BCC NOSUB
0211: 06
0212: ED SBC ISOR
0213: 22
0214: 02
0215: EE INC IDENDL
0216: 21
0217: 02
0218: E8 NOSUB: INX
0219: E0 CPX #8
021A: 08
021B: D0 BNE LOOP
021C: EC
021D: AC LDY IDENDL
021E: 21
021F: 02
0220: 60 RTS
0221: 00 IDENDL:B 0
0222: 00 ISOR: B 0
Figure 2.4. Complete translation of the example to machine language
Again, in completing the translation to machine code, the changes from Figure 2.3 to Figure 2.4 are shown in boldface. For hand assembly of a small program, we don't need anything additional, but if we were assembling a program that ran on for pages and pages, it would be helpful to read through it once to find the numerical addresses of each label in the program, and then read through it again, substituting those numerical values into the code where they are needed.
symbol address

START 0200
LOOP 0209
NOSUB 0218
IDENDL 0221
ISOR 0222
Table 2.2. The symbol table for Figure 2.4.
Table 2.2 shows the symbol table for this small example, sorted into numerical order. For a really large program, we might rewrite the table into alphabetical order before using it to finish the assembly.
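The two-pass idea sketched above, reading through the program once to assign an address to every label, can be expressed in Python (a sketch, assuming each source line has already been reduced to an optional label plus the instruction's size in bytes, as in Table 2.1):

```python
def build_symbol_table(lines, origin):
    """First pass of a two-pass assembler: walk the program once,
    tracking a location counter, and record each label's address.
    Each line is a (label_or_None, size_in_bytes) pair."""
    table = {}
    location = origin
    for label, size in lines:
        if label is not None:
            table[label] = location
        location += size
    return table
```

Feeding in the instruction sizes of Figure 2.1 with origin 0200 (base 16) reproduces exactly the symbol table of Table 2.2.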
It is worth noting the role which the meaning of the assembly code played in the assembly process. None! The programmer writing the line STA IDENDL must have understood its meaning, "store the value of the A register in the location labeled IDENDL", and the CPU, when it executes the corresponding binary instruction 8D 21 02 must know that this means "store the value of the A register in the location 0221", but there is no need for the person or computer program that translates assembly code to machine code to understand this!
This same assertion holds for compilers for high level languages. A C++ compiler does not understand that for(;;)x(); involves a loop, but only that, prior to the code for a call to the function x, the compiler should note the current memory address, and after the call, the compiler should output some particular instruction that references that address. The person who wrote the compiler knew that this instruction is a branch back to the start of the loop, but the compiler has no understanding of this!
To the translator performing the assembly process, whether that translator is a human clerk or an assembler, the line STA IDENDL means "allocate 3 consecutive bytes of memory, put 8D in the first byte, and put the 16-bit value of the symbol IDENDL in the remaining 2 bytes." If the symbol IDENDL is mapped to the value 0221 by the symbol table, then the interpretation of the result of the assembler's interpretation of the source code is the same as the programmer's interpretation. These relationships are illustrated in Figure 2.5.
Source Text
/ \ compiler or
programmer's / \ assembler's
view of meaning / \ view of meaning
/ \
Abstract Meaning ----- Machine Code

hardware's
view of meaning

Historical Note

Historically, system software has been viewed in a number of different ways since the invention of computers. The original computers were so expensive that their use for such clerical jobs as language translation was viewed as a dangerous waste of scarce resources. Early system developers seem to have consistently underestimated the difficulty of producing working programs, but it did not take long for them to realize that letting the computer spend a few minutes on the clerical job of assembling a user program was less expensive than having the programmer hand assemble it and then spend hours of computer time debugging it. As a result, by 1960, assembly language was widely accepted, the new high level language, FORTRAN, was attracting a growing user community, and there was widespread interest in the development of new languages such as Algol, COBOL, and LISP.
Early operating systems were viewed primarily as tools for efficiently allocating the scarce and expensive resources of large central computers among numerous competing users. Since compilers and other program preparation tools frequently consumed a large fraction of an early machine's resources, it was common to integrate these into the operating system. With the emergence of large scale general purpose operating systems in the mid 1960's, however, the resource management tools available became powerful enough that they could efficiently treat the resource demands of program preparation the same as any other application.
The separation of program preparation from program execution came to pervade the computer market by the early 1970's, when it became common for computer users to obtain editors, compilers, and operating systems from different vendors. By the mid 1970's, however, programming language research and operating system development had begun to converge. New operating systems began to incorporate programming language concepts such as data types, and new languages began to incorporate traditional operating system features such as concurrent processes. Thus, although a programming language must have a textual representation, and although an operating system must manage physical resources, both have, as their fundamental purpose, the support of user programs, and both must solve a number of the same problems.
The minicomputer and microcomputer revolutions of the mid 1960's and the mid 1970's involved, to a large extent, a repetition of the earlier history of mainframe based work. Thus, early programming environments for these new hardware generations were very primitive; these were followed by integrated systems supporting a single simple language (typically some variant of BASIC on each generation of minicomputer and microcomputer), followed by general purpose operating systems for which many language implementations and editors are available, from many different sources.
The world of system software has varied from the wildly competitive to domination by large monopolistic vendors and pervasive standards. In the 1950's and early 1960's, there was no clear leader and there were a huge number of wildly divergent experiments. In the late 1960's, however, IBM's mainframe family, the System 360, running IBM's operating system, OS/360, emerged as a monopolistic force that persists to the present in the corporate data processing world (the IBM 390 Enterprise Server is the current flagship of this line, running the VM operating system).
The influence of IBM's near monopoly of the mainframe marketplace cannot be overstated, but it was not total, and in the emerging world of minicomputers, there was wild competition in the late 1960's and early 1970's. The Digital Equipment Corporation PDP-11 was dominant in the 1970's, but never threatened to monopolize the market, and there were a variety of different operating systems for the 11. In the 1980's, however, variations on the Unix operating system originally developed at Bell Labs began to emerge as a standard development environment, running on a wide variety of computers ranging from minicomputers to supercomputers, and featuring the new programming language C and its descendant C++.
The microcomputer marketplace that emerged in the mid 1970's was quite diverse, but for a decade, most microcomputer operating systems were rudimentary, at best. Early versions of Mac OS and Microsoft Windows presented sophisticated user interfaces, but on versions prior to about 1995 these user interfaces were built on remarkably crude underpinnings.
The marketplace of the late 1990's, like the marketplace of the late 1960's, came to be dominated by a monopoly, this time in the form of Microsoft Windows. The chief rivals are MacOS and Linux, but there is yet another monopolistic force hidden behind all three operating systems, the pervasive influence of Unix and C. MacOS X is fully Unix compatible. Windows NT offers full compatibility, and so, of course, does Linux. Much of the serious development work under all three systems is done in C++, and new languages such as Java seem to be simple variants on the theme of C++. It is interesting to ask, when will we have a new creative period when genuinely new programming environments will be developed the way they were on the mainframes of the early 1960's or the minicomputers of the mid 1970's?

A Unifying Framework

In all programming environments, from the most rudimentary to the most advanced, it is possible to identify two distinct components, the program preparation component and the program execution component. On a bare machine, the program preparation component consists of the switches or push buttons by which programs and data may be entered into the memory of the machine; more advanced systems supplement this with text editors, compilers, assemblers, object library managers, linkers, and loaders. On a bare machine, the program execution component consists of the hardware of the machine, the central processors, any peripheral processors, and the various memory resources; more advanced systems supplement this with operating system services, libraries of predefined procedures, functions and objects, and interpreters of various kinds.
Within the program execution component of a programming environment, it is possible to distinguish between those facilities needed to support a single user process, and those which are introduced when resources are shared between processes. Among the facilities which may be used to support a single process environment are command language interpreters, input-output, file systems, storage allocation, and virtual memory. In a multiple process environment, processor allocation, interprocess communication, and resource protection may be needed. Figure 1.1 lists and classifies these components.
Program Preparation:
    Editors
    Compilers
    Assemblers
    Linkers
    Loaders

Program Execution Support:
    Used by a Single Process:
        Command Languages
        Sequential Input/Output
        Random Access Input/Output
        File Systems
        Window Managers
        Storage Allocation
        Virtual Memory
    Used by Multiple Processes:
        Process Scheduling
        Interprocess Communication
        Resource Sharing
        Protection Mechanisms

Figure 1.1. Components of a programming environment.
This text is divided into three basic parts based on the distinctions illustrated in Figure 1.1. The distinction between preparation and execution is the basis of the division between the first and second parts, while the distinction between single process and multiple process systems is the basis of the division between the second and third parts.

Programming Environments

The term programming environment is sometimes reserved for environments containing language specific editors and source level debugging facilities; here, the term will be used in its broader sense to refer to all of the hardware and software in the environment used by the programmer. All programming can therefore be properly described as taking place in a programming environment.
Programming environments may vary considerably in complexity. An example of a simple environment might consist of a text editor for program preparation, an assembler for translating programs to machine language, and a simple operating system consisting of input-output drivers and a file system. Although card input and non-interactive operation characterized most early computer systems, such simple environments were supported on early experimental time-sharing systems by 1963.
Although such simple programming environments are a great improvement over the bare hardware, tremendous improvements are possible. The first improvement which comes to mind is the use of a high level language instead of an assembly language, but this implies other changes. Most high level languages require more complicated run-time support than just input-output drivers and a file system. For example, most require an extensive library of predefined procedures and functions, many require some kind of automatic storage management, and some require support for concurrent execution of threads, tasks or processes within the program.
Many applications require additional features, such as window managers or elaborate file access methods. When multiple applications coexist, perhaps written by different programmers, there is frequently a need to share files, windows or memory segments between applications. This is typical of today's electronic mail, database, and spreadsheet applications, and the programming environments that support such applications can be extremely complex, particularly if they attempt to protect users from malicious or accidental damage caused by program developers or other users.
A programming environment may include a number of additional features which simplify the programmer's job. For example, library management facilities allow programmers to extend the set of predefined procedures and functions with their own routines. Source level debugging facilities, when available, allow run-time errors to be interpreted in terms of the source program instead of the machine language actually run by the hardware. As a final example, the text editor may be language specific, with commands which operate in terms of the syntax of the language being used, and mechanisms which allow syntax errors to be detected without leaving the editor to compile the program.

System Software

System software refers to the files and programs that make up your computer's operating system. System files include libraries of functions, system services, drivers for printers and other hardware, system preferences, and other configuration files. The programs that are part of the system software include assemblers, compilers, file management tools, and system utilities.

The system software is installed on your computer when you install your operating system. You can update the software by running programs such as "Windows Update" for Windows or "Software Update" for Mac OS X. Unlike application programs, however, system software is not meant to be run by the end user. For example, while you might use your Web browser every day, you probably don't have much use for an assembler program (unless, of course, you are a computer programmer).

Since system software runs at the most basic level of your computer, it is called "low-level" software. It generates the user interface and allows the operating system to interact with the hardware. Fortunately, you don't have to worry about what the system software is doing since it just runs in the background. It's nice to think you are working at a "high-level" anyway.

Sunday, May 31, 2009

Deflecting Colonial Canons and Cannons: Alternate Routes to Knowing Afghanistan

The windfall reaped by the Sethi family through their intimate commercial connections with Abdur Rahman stands in stark contrast to the more usual experience of mercantile flight from and avoidance of Afghanistan. Abdur Rahman temporarily reversed the trend of Indian capital's penetration of Afghanistan, but he could not eliminate the dependence of Afghanistan's exports on India's mass consumer markets. Geography is the primary structuring variable in the long term economic connection between Afghanistan and India, and state politics are important determinants in the precise articulation of this commercial relationship between two unequal but interdependent economic zones. Interactive social and cultural histories blend geographical constants and political fluctuations into a multidimensional holograph of market life on the frontier between Afghanistan and India. This book explored a limited set of market relationships on this frontier during the nineteenth century when colonialism was globally ascendant.

The lens of colonialism can be adjusted, however problematically, to accommodate both macroscopic and microscopic vantage points. The oeuvres of Chris Bayly and Bernard Cohn, respectively, capture those two complementary scopes of colonial analysis, and that argument is made stronger because each author recognizes and engages the opposite "polarity" to make their own points and positions more potent through dialogue and flexibility.1 This two-tiered vision of what can be called the local and the global is at the same time an integrated one, and as such it is perhaps the primary connection at work in the preceding pages. Other connections that are equally basic, and similarly complex in their dialectics, have been necessary to consider in order to approximate what really happened on the ground and not what is imagined to have occurred from distant vantage points during the articulation of modern Afghanistan. Colonial connections between Afghanistan and British India were addressed through relations between states and markets in their own right and in relation to one another, social communities and commodity groups independently and interactively, and through texts and money again as multifaceted but singular units of analysis as well as an analytical pairing. It has been necessary to do a bit of disentangling along the way toward making social and economic connections between states, markets, people, money, and texts. This feature of the analysis highlighted important elements of distinction, on the one hand, and continuity, on the other, which were occurring within and in many ways articulating the larger and smaller colonial connections just described.

The connections between colonialism and capitalism are metaphorically electric and can be viewed as productively synergetic, but those same connections can also be simultaneously and literally explosive and destructive. Capitalism and colonialism each have their own conceptual turf, but when combined those two rich fields of inquiry yield a fertile terrain to cultivate a study of Afghanistan.

Fernand Braudel's historical analysis of capitalism tracks between the micro-local and macro-global levels and has arguably yet to be surpassed. The nomadic trading tribes who were central in the foregoing analysis capture the distinctions and links between what appear in Braudel's scheme as material life and market activity, the former geared toward basic subsistence and existence, the latter involving "surplus" goods and their exchange. For Braudel capitalism arises out of market activity and generates connections between formerly unintegrated markets. The agents of those connections and the agency employed to make them correspond in many important ways to the mobile Hindkis and hundis that paired with the commercially precocious nomads to form the fluid base over which political authorities must raft and camp rather than permanently settle. Braudel's view of complementary geographic, fiscal, and social variables facilitates an understanding of the imbalanced market relationships between Kabul, Peshawar, and Qandahar. His insights about market "pulls" and polarities magnify the colonial data used here, allowing us to see distinctions and interactions between those three market settings. His global model prompts a view of our three seemingly geographically marginal markets as much more central to the functioning of larger, again separate but interactive interregional and global commercial networks.

Braudel is also a beacon for those lost at sea when trying to navigate toward an understanding of how large-scale debt is accrued and circulated, its roles in the genesis and demise of market and political structures, and its perpetual impact on ordinary debtor folk who are not fully aware of all the variables conspiring to undermine their relative fiscal buoyancy. Braudel demarcates debt through the interactions of commodities, cash currencies, and bookkeeping practices. His consideration of financial texts and accounting practices is amplified by Jack Goody who conveys the basic importance of literacy for bureaucracy and therefore in a more complicated way for governance. While attending to literacy's state locus, Goody also demonstrates that scribal groups and textual practices transcend cultural and political barriers. Together, Braudel and Goody illuminate the cavernous debt associated with Afghanistan and help to conceptually substantiate the data-driven arguments about debt presented in this book. Among the conclusions reached here are that the origins of Afghanistan's current poverty are found in state policies and practices, the articulation of Afghanistan's debt burden transpires via state paperwork handled by certain scribal and bureaucratic classes, and that ordinary consumers experienced this state-created and state-managed debt via the marketplace where Afghan state currency was increasingly less favored and devalued in relation to surrounding exponentially stronger state monies.

Capitalism's advance often signals the emergence of "new" social groups and the transformation of "old" social relations, but this does not mean that before capitalism time stood still for "traditional" societies who lacked a familiar form of history. Eric Wolf highlights how capitalism produces new global migrations of laboring classes associated with new production regimes and circulations of old commodities. Arjun Appadurai sees tension between consumers and state authorities emerging from new commodity flows and finds those conducting the new commercial movements to have a distinct form of knowledge transcending single market settings to geographically span full commodity trajectories from points of production to consumption. Appadurai identifies an important distinction between customary and diversionary commodity paths, the latter involving a larger reconfiguration of social and political relations along the way. For Appadurai and Wolf, global historical change is propelled by these new circulations and movements of certain key marketers, laborers, and commodities. In the markets of Kabul, Peshawar, and Qandahar these two authors allow us to see that the emergence of a far more robust bureaucracy signaled a new state fiscal regime that transformed labor and commodity traffic patterns and revised social and political relations in and between the three locales.

Within the vast rubrics of colonialism and capitalism we have been striving for a way to manage fundamental but fundamentally complicated relationships that constitute human economic strategies, as well as other complex associations such as those between the ideological constructions of political space and the material realities and inequalities that uncooperatively represent and belie so-often hasty reasonings about Afghanistan and everything it involves. These independent but integrated explorations of capitalism and colonialism have involved histories of populations within, on the borders of, "passing through," and at varying distances outside of the territory in question. What we have been searching for is the political economy of a permeable zone characterized by multiple kinds of barriers and crossings. In other words, we have had to reckon with, on the one hand, boundaries, borders, and frontiers, and, on the other, interregional, indeed global exchange networks, trans-Eurasian commodity and cultural circuits and patterned migrations transgressing the limits of the analytical units being deployed.

CODE ACCESS SECURITY

The CLR is also burdened with the responsibility of security. An integral
part of the .NET Runtime is something called Code Access Security
(CAS), which associates a certain amount of “trust” with code, depending
on the code’s origins (the local file system, intranet, Internet, etc.).
The CLR is responsible for making sure that the code it executes stays
within its designated security boundaries. This could include such
things as reading and writing files from the user’s hard drive, making
registry entries, and so forth.
You can modify the permissions that are granted to code from a certain
location using a utility called CASPOL.EXE. You could specify, for
example, that all code originating from www.codenotes.com be granted
more privileges than other code that comes from the Internet. Examples
of CASPOL.EXE and an in-depth discussion of Code Access Security
can be found at NET010006.

Topic: .NET Runtime Classes
In the example earlier in this chapter, all three languages used the Console.WriteLine() method to print “Hello World” to the screen. The .NET Runtime classes eliminate the need to master a different set of
APIs for different languages. Instead, developers need only familiarize
themselves with the appropriate Runtime classes and then call them
from the language of their choice.
The .NET Runtime includes classes for many programmatic tasks,
including data access, GUI design, messaging, and many more. It also
acts as a wrapper around the Win32 API, eliminating the need to directly
communicate with this cryptic C-style interface. The most difficult part
of using the Runtime is figuring out which class you need to accomplish
the task at hand. A complete list of the .NET Runtime classes can be
found at NET010007.

NAMESPACES
The .NET Runtime classes are organized in hierarchical manner using
namespaces. Namespaces provide a scope (or container) in which types
are defined. All of the .NET Runtime classes, for example, can be found
in the System namespace. In the “Hello World” example we had to in-
form the compiler that the Console class could be found in the System
namespace by qualifying it (System.Console). Namespaces can also be
nested. The System.IO namespace, for example, contains a number of
classes for I/O operations, whereas the System.Collections namespace
contains classes for common data structures such as arrays.
In the Hello World example we directly addressed the namespace.
You will frequently see code that uses implicit namespace referencing to
make it more concise. Each language uses a different keyword to include
the contents of a namespace. We could have written the Hello
World program in VB.NET as follows:
'VB.NET "Hello World" Program.

Imports System
Module HelloWorld
Sub Main
'Use the .NET Runtime Console method WriteLine,
'to output "Hello World" on the screen:
Console.WriteLine("Hello World!")
End Sub
End Module
Listing 1.4 VB.NET Hello World program using namespaces
Notice that we added the Imports System line and no longer have to
qualify the Console object as System.Console. In C#, you can perform
the same action with the “using” keyword:
// C# "Hello World" Program.
// Implicit namespace referencing
using System;
public class HelloWorld {
static public void Main () {
Console.WriteLine("Hello World.");
}
}
Listing 1.5 C# Hello World program using namespaces
As you can see, implicitly referencing namespaces can save you a lot of
typing and make your code easier to read. You will use namespaces
throughout the .NET framework to:
• Access the .NET Runtime classes
• Access custom classes authored by other developers
• Provide a namespace for your own classes, to avoid naming conflicts
with other classes and namespaces
We will use namespaces throughout this CodeNote as we develop our
own .NET components.

SIMPLE APPLICATION

In this section we look at the proverbial “Hello World” program. For the
purposes of comparison, source code for all three .NET languages
(VB.NET, C#, and managed C++) is given below. Readers might want
to consult Chapter 2 to install the .NET Framework before proceeding.

VB.NET Application
Visual Basic developers are reminded that VB.NET now includes a
command-line compiler (VBC.EXE), allowing one to develop applications
outside the Visual Basic environment. In other words, you can
write the following program in a text editor such as Notepad. VB users
will also see from this example that VB.NET has the ability to produce
console applications, something previous versions of Visual Basic were
unable to do.
'VB.NET "Hello World" Program.
Module HelloWorld
Sub Main
'Use the .NET Runtime Console method WriteLine,
'to output "Hello World" on the screen:
System.Console.WriteLine("Hello World!")
End Sub
End Module
Listing 1.1 VB.NET Hello World program

C# Application
As with the Visual Basic example, you can write this code using any text
editor. Notice that the syntax is very similar to C++ in that class definitions
and methods are encapsulated within curly braces and individual
lines of code end with semicolons.
// C# "Hello World" Program.
public class HelloWorld {
static public void Main () {
System.Console.WriteLine("Hello World!");
}
}
Listing 1.2 C# Hello World program

Managed C++ Application
Managed C++ is almost identical to normal C++. Notice that you must
use the a::b notation for namespaces, rather than the a.b notation of Visual
Basic and C#.
// Managed C++ "Hello World" Program.
// Reference the .NET Runtime Library,
// for Console Input/Output functionality.

#using <mscorlib.dll>
void main() {
System::Console::WriteLine("Hello World!");
}
Listing 1.3 Managed C++ Hello World program

Compiling and Running the Example
Assuming that these files were called Hello-World.vb, Hello-World.cs,
and Hello-World.cpp, respectively, .NET console applications could be
created by invoking each language’s compiler from the command line as
shown below (alternatively, you could create VB.NET, C#, and C++
console projects and compile them from the VS.NET IDE).
• VB.NET: vbc.exe /t:exe Hello-World.vb
• C#: csc.exe /t:exe Hello-World.cs
• Managed C++: cl.exe /CLR Hello-World.cpp
The /t:exe option informs both the VB.NET and C# compilers to produce
executable files, while the /CLR switch instructs the Visual C++
compiler to produce IL code (this option is OFF by default).

Source Analysis
The most notable difference between the three programs is that the managed
C++ example must explicitly reference the .NET Runtime classes,
which is implicitly done by the VB and C# compilers. This is accomplished
by inserting the following line at the top of all managed
C++ programs: #using <mscorlib.dll>. C++ COM/ATL developers
will find this command very similar to the #import directive used in Visual
C++.
Syntactical differences aside, the three programs are remarkably
similar in that they all use the Runtime Console method WriteLine() to
print “Hello World” on the screen. Such uniformity is a virtue of the
.NET Runtime—all three languages use a consistent set of classes to accomplish
the same thing. The only difference lies in the way such
classes are accessed. C++ users might recognize that we had to tell the
compiler which namespace the Console class could be found in. The
concept of namespaces and their importance to the Runtime is addressed
in the .NET Runtime section of this chapter.
Topic: The Common Language Runtime
At the heart of the .NET Framework is the Common Language Runtime
(CLR). In addition to acting as a virtual machine, interpreting and executing
IL code on the fly, the CLR performs numerous other functions,
such as type safety checking, application memory isolation, memory
management, garbage collection, and cross-language exception handling.

THE COMMON TYPE SYSTEM
The CLR greatly simplifies cross-language communication through the
introduction of the Common Type System (CTS). The CTS defines all of
the basic types that can be used in the .NET Framework and the operations
that can be performed on those types. Applications can create more
complex types, but they must be built from the types defined by the
CTS.
All CTS types are classes that derive from a base class called System.Object (this is true even for “primitive” types such as integer and
floating point variables). This means that any object executing inside the
CLR can utilize the member functions of the System.Object class. The
methods of System.Object can be found at NET010005, but for
the purposes of illustration we will consider the Equals() method of this
class, which can be used to test two objects for equality.
Consider the following straightforward C# fragment:
int a=5;
int b=5;
if (a==b) {
System.Console.WriteLine("a is the same as b");
}
Since all types inherit from System.Object, the code could be rewritten
as:
int a=5;
int b=5;
if (a.Equals(b)) {
System.Console.WriteLine("a is the same as b");
}

AN INTRODUCTION TO THE .NET FRAMEWORK

WHAT IS .NET?
.NET is Microsoft’s new strategy for the development and deployment
of software. Depending on your interests and development background,
you may already have a number of preconceived notions regarding
.NET. As we will see throughout this CodeNote:
• .NET fundamentally changes the way applications execute under
the Windows Operating System.
• With .NET Microsoft is, in effect, abandoning its traditional
stance, one which favors compiled components, and is embracing
interpreted technology (similar, in many ways, to the Java
paradigm).
• .NET brings about significant changes to both C++ and Visual
Basic, and introduces a new language called C# (pronounced
“C sharp”).
• .NET is built from the ground up with the Internet in mind, embracing
open Internet standards such as XML and HTTP. XML
is also used throughout the framework as both a messaging instrument
and for configuration files.

These are all noteworthy features of .NET, or more accurately the .NET
Framework, which consists of the platform and tools needed to develop
and deploy .NET applications. The .NET Framework can be distilled
into the following three entities:

1. The Common Language Runtime (CLR), which is the execution environment for all programs in the .NET Framework. The CLR is similar to a Java Virtual Machine (VM) in that it interprets
byte code and executes it on the fly, while simultaneously
providing services such as garbage collection and exception
handling. Unlike a Java VM, which is limited to the Java language,
the CLR is accessible from any compiler that produces
Microsoft Intermediate Language (IL) code, which is similar to
Java byte code. Code that executes inside the CLR is referred to
as managed code. Code that executes outside its boundaries is
called unmanaged code.

2. The Runtime classes, which provide hundreds of prewritten services
that clients can use. The Runtime classes are the building
blocks for .NET applications. Many technologies you may have
used in the past (ADO, for example) are now accessed through
these Runtime classes, as are basic operations such as I/O. Traditionally,
every language had its own unique supporting libraries,
accessible only from that particular language. String
manipulation, for example, was afforded to VB programmers
via the Visual Basic runtime, whereas C++ programmers depended
on libraries such as STL for similar functionality. The
.NET Runtime classes remove this limitation by uniformly offering
services to any compiler that targets the CLR. Those familiar
with Java will find the Runtime classes analogous to the
Java Class Libraries.

3. Visual Studio.NET (VS.NET), which is Microsoft’s newest version
of Visual Studio. VS.NET includes VB.NET, “managed”
C++, and C#, all of which translate source code into IL code.
VB.NET and VC.NET are the new versions of Visual Basic and
Visual C++, respectively. C# is a new Microsoft language that at
first glance appears to be a hybrid of C++ and Java. .NET development
does not have to be limited to these languages, however.
Any component or program produced by an IL-aware
compiler can run within the .NET Framework. (As of this writing,
other companies have announced IL compilers for Perl,
Python, and COBOL.) VS.NET also comes with a fully Integrated
Development Environment (IDE), which we will examine
in Chapter 7. Note the VS.NET IDE now houses the
development environments for both Visual C++ and Visual Basic.

Eye contact

You must maintain eye contact with the panel throughout the interview. This shows your self-confidence and honesty.
Many interviewees tend to look away while answering. This conveys anxiety, fear, and a lack of confidence.
Maintaining eye contact is not easy, but because the circumstances of an interview are different from everyday conversation, its value in making a personal impact is tremendous.

Enthusiasm

The interviewer normally pays more attention if you display enthusiasm in whatever you say.
This enthusiasm comes across in the energetic way you put forward your ideas.
You should maintain a cheerful disposition throughout the interview; a pleasant countenance holds the interviewer's interest.

Entering the room

Prior to entering the room, adjust your attire so that it falls well.
Before entering, enquire by saying, “May I come in, sir/madam?”
If the door was closed before you entered, make sure you shut the door behind you softly.
Face the panel and confidently say, “Good day, sir/madam.”
If the members of the interview board want to shake hands, offer a firm grip while maintaining eye contact and a smile.
Seek permission to sit down. If the interviewers are standing, wait for them to sit down first before sitting.
An alert interviewee can defuse a tense situation with light-hearted humor and immediately establish rapport with the interviewers.

In the HR room…

What kinds of assignments might I expect the first six months on the job?
How often are performance reviews given?
Please describe the duties of the job for me.
What products (or services) are in the development stage now?
Do you have plans for expansion?
What are your growth projections for next year?
Have you cut your staff in the last three years?
Are salary adjustments geared to the cost of living or job performance?
Does your company encourage further education?
How do you feel about creativity and individuality?
Do you offer flextime?
What is the usual promotional time frame?
Does your company offer either single or dual career-track programs?
What do you like best about your job/company?
Once the probation period is completed, how much authority will I have over decisions?
Has there been much turnover in this job area?
Do you fill positions from the outside or promote from within first?
Is your company environmentally conscious? In what ways?
In what ways is a career with your company better than one with your competitors?
Is this a new position or am I replacing someone?
What is the largest single problem facing your staff (department) now?
May I talk with the last person who held this position?
What qualities are you looking for in the candidate who fills this position?
What skills are especially important for someone in this position?
What characteristics do the achievers in this company seem to share?
Who was the last person that filled this position, what made them successful at it, where are they today, and how may I contact them?


Is there a lot of team/project work?
Will I have the opportunity to work on special projects?
Where does this position fit into the organizational structure?
How much travel, if any, is involved in this position?
What is the next course of action? When should I expect to hear from you or should I contact you?

Wednesday, May 20, 2009

Synchronization: The Too Much Milk Problem

Review idea of atomic operations. Example: counting contest.
static int i;

Process A:
    i = 0;
    while (i < 10) {
        i++;
    }
    cout << "A wins";

Process B:
    i = 0;
    while (i > -10) {
        i--;
    }
    cout << "B wins";
• Variable i is shared.
• Reference and assignment are each atomic.
• Will process A or process B win?
• Will they ever finish?
• If one finishes, will the other also finish?
• Does it help A to get a head start?
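The answers to these questions depend entirely on how the scheduler interleaves the two processes. The following sketch simulates the contest in Python (used here only as a stand-in for the C-style pseudocode above); the generator-based scheduling and all names are illustrative, not part of the original notes.

```python
# Simulate the counting contest. Each process is a generator that
# yields after every atomic step, so a "scheduler" can interleave them.
shared = {"i": 0}

def process_a():
    shared["i"] = 0
    yield
    while shared["i"] < 10:
        shared["i"] += 1
        yield
    # Reaching here means A "wins": i climbed to 10.

def process_b():
    shared["i"] = 0
    yield
    while shared["i"] > -10:
        shared["i"] -= 1
        yield

def run_alone(proc):
    # A schedule with no preemption: the process runs to completion.
    for _ in proc():
        pass
    return shared["i"]

a_result = run_alone(process_a)   # A alone drives i up to 10
b_result = run_alone(process_b)   # B alone drives i down to -10
print(a_result, b_result)  # 10 -10
```

Under a strictly alternating schedule, each step by one process is undone by the other, so neither loop ever exits; but if one process does finish, the other then runs alone and finishes too. This is why the questions above have no single answer without knowing the schedule.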
Synchronization: the use of atomic operations to ensure the correct operation of cooperating processes.
The "too much milk" problem:
Time    Person A                        Person B
3:00    Look in fridge. Out of milk.
3:05    Leave for store.
3:10    Arrive at store.                Look in fridge. Out of milk.
3:15    Leave store.                    Leave for store.
3:20    Arrive home, put milk away.     Arrive at store.
3:25                                    Leave store.
3:30                                    Arrive home. OH NO!
What does correct mean? One of the most important things in synchronization is to figure out what you want to achieve.
Mutual exclusion: Mechanisms that ensure that only one person or process is doing certain things at one time (others are excluded). E.g. only one person goes shopping at a time.
Critical section: A section of code, or collection of operations, in which only one process may be executing at a given time. E.g. shopping. It is a large operation that we want to make "sort of" atomic.
There are many ways to achieve mutual exclusion, which we will be discussing all of this week. Most involve some sort of locking mechanism: prevent someone from doing something. For example, before shopping, leave a note on the refrigerator.
Three elements of locking:
1. Must lock before using (leave note).
2. Must unlock when done (remove note).
3. Must wait if locked (do not shop if there is a note).
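These three elements are exactly what a lock primitive packages up. As a sketch (using Python's threading module as a stand-in for any locking mechanism; the counter and thread counts are illustrative): acquire() leaves the note and waits if one is already there, release() removes it.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        lock.acquire()   # rule 1: lock before using (and rule 3: wait if locked)
        counter += 1     # critical section on the shared variable
        lock.release()   # rule 2: unlock when done

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: every increment happened inside the lock
```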
1st attempt at computerized milk buying:

Processes A & B:
    if (NoMilk) {
        if (NoNote) {
            Leave Note;
            Buy Milk;
            Remove Note;
        }
    }
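The flaw in this first attempt can be demonstrated by replaying one unlucky interleaving. This sketch simulates the two buyers as Python generators that can be preempted between steps (the yield points and names are illustrative, not part of the original pseudocode); both pass the NoMilk and NoNote tests before either leaves a note, so both buy.

```python
state = {"milk": 0, "note": False}

def buyer():
    # Each yield marks a point where the other process may be scheduled.
    if state["milk"] == 0:           # if (NoMilk)
        yield
        if not state["note"]:        # if (NoNote)
            yield
            state["note"] = True     # Leave Note
            state["milk"] += 1       # Buy Milk
            state["note"] = False    # Remove Note
    yield

a, b = buyer(), buyer()
next(a)  # A sees: no milk
next(b)  # B sees: no milk
next(a)  # A sees: no note
next(b)  # B sees: no note -- both are now committed to shopping
for gen in (a, b):
    for _ in gen:
        pass
print(state["milk"])  # 2: too much milk
```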
What happens if we leave the note at the very beginning: does this make everything work?
2nd attempt: Change meaning of note.
A buys if there is no note, B buys if there is a note. This gets rid of confusion.
Process A:
    if (NoNote) {
        if (NoMilk) {
            Buy Milk;
        }
        Leave Note;
    }

Process B:
    if (Note) {
        if (NoMilk) {
            Buy Milk;
        }
        Remove Note;
    }
• Does this work?
• How can we tell?
• When dealing with complex parallel programs, we cannot rely on our intuitions or informal reasoning. We need to prove that the programs behave correctly. What can we say about the above solution?
Suppose B goes on vacation. A will buy milk once and will not buy any more until B returns. Thus this does not really do what we want; it is unfair, and leads to starvation.
3rd attempt: Use 2 notes:
Process A:

    Leave NoteA;
    if (NoNoteB) {
        if (NoMilk) {
            Buy Milk;
        }
    }
    Remove NoteA;

Process B is the same except interchange NoteA and NoteB.
What can we say about this solution?
Solution is almost correct. We just need a way to decide who will buy milk when both leave notes (somebody has to hang around to make sure that the job gets done).
4th attempt: In case of tie, A will buy milk:
Process B stays the same as before.
Process A:

    Leave NoteA;
    if (NoNoteB) {
        if (NoMilk) {
            Buy Milk;
        }
    } else {
        while (NoteB) {
            DoNothing;
        }
        if (NoMilk) {
            Buy Milk;
        }
    }
    Remove NoteA;
How do we know this is correct?
This solution works. But it still has two disadvantages:
• A may have to wait while B is at the store.
• While A is waiting it is consuming resources (busy-waiting).

Independent and Cooperating Processes

Chapter 6 in Operating Systems Concepts.
Independent process: one that is independent of the rest of the universe.
• Its state is not shared in any way by any other process.
• Deterministic: input state alone determines results.
• Reproducible.
• Can stop and restart with no bad effects (only time varies).
Example: program that sums the integers from 1 to i (input).
There are many different ways in which a collection of independent processes might be executed on a processor:
• Uniprogramming: a single process is run to completion before anything else can be run on the processor.
• Multiprogramming: share one processor among several processes. If no shared state, then order of dispatching is irrelevant.
• Multiprocessing: if multiprogramming works, then it should also be ok to run processes in parallel on separate processors.
o A given process runs on only one processor at a time.
o A process may run on different processors at different times (move state, assume processors are identical).
o Cannot distinguish multiprocessing from multiprogramming on a very fine grain.
How often are processes completely independent of the rest of the universe?
________________________________________
Cooperating processes:
• Machine must model the social structures of the people that use it. People cooperate, so machine must support that cooperation. Cooperation means shared state, e.g. a single file system.
• Cooperating processes are those that share state. (May or may not actually be "cooperating")
• Behavior is nondeterministic: depends on relative execution sequence and cannot be predicted a priori.
• Behavior is irreproducible.
• Example: one process writes "ABC", another writes "CBA".


When discussing concurrent processes, multiprogramming is as dangerous as multiprocessing unless you have tight control over the multiprogramming. Also bear in mind that smart I/O devices are as bad as cooperating processes (they share the memory).
Why permit processes to cooperate?
• Want to share resources:
o One computer, many users.
o One file of checking account records, many tellers.
• Want to do things faster:
o Read next block while processing current one.
o Divide job into sub-jobs, execute in parallel.
• Want to construct systems in modular fashion. (e.g. tbl | eqn | troff)
Reading: Section 2.3.1 in Tanenbaum talks about similar stuff, but uses terms a little differently.
________________________________________
Basic assumption for cooperating process systems is that the order of some operations is irrelevant; certain operations are completely independent of certain other operations. Only a few things matter:
• Example: A = 1; B = 2; has same result as B = 2; A = 1;
• Another example: A = B+1; B = 2*B cannot be re-ordered.


Race conditions: Suppose A=1 and A=2 are executed in parallel?
Atomic operations: Before we can say ANYTHING about parallel processes, we must know that some operation is atomic, i.e. that it either happens in its entirety without interruption, or not at all. Cannot be interrupted in the middle. E.g. suppose that println is atomic -- what happens in println("ABC"); println("BCA") example?
• References and assignments are atomic in almost all systems. A=B will always get a good value for B, will always set a good value for A (not necessarily true for arrays, records, or even floating-point numbers).
• In uniprocessor systems, anything between interrupts is atomic.
• If you do not have an atomic operation, you cannot make one. Fortunately, the hardware folks give us atomic ops.
• If you have any atomic operation, you can use it to generate higher-level constructs and make parallel programs work correctly. This is the approach we will take in this class.
________________________________________

Copyright © 2001, 2008 Barton P. Miller
Non-University of Wisconsin students and teachers are welcome to print these notes for their personal use. Further reproduction requires permission of the author.

The trap instruction that caused the entry to the kernel has a parameter that specifies which system call is being invoked. The code starting at do_call checks to see if this number is in range, and then calls the function associated with this system call number. When this function returns, the return value (stored in the eax register) is saved in the place where all the other user registers are stored. As a result, when control is transferred from the kernel back to the user process, the return value will be in the right place.
After the system call is complete, it is time to return to the user process. There are two choices at this point: (1) either return directly to the user process that made the system call, or (2) go through the dispatcher to select the next process to run. The ret_from_sys_call code below handles this choice.
system_call:
#
#----Save orig_eax: system call number
# used to distinguish process that entered
# kernel via syscall from one that entered
# via some other interrupt
#
pushl %eax

#
#----Save the user's registers
#
pushl %es
pushl %ds
pushl %eax
pushl %ebp
pushl %edi
pushl %esi
pushl %edx
pushl %ecx
pushl %ebx

#
#----Set up the memory segment registers so that the kernel's
# data segment can be accessed.
#
movl $(__KERNEL_DS),%edx
movl %edx,%ds
movl %edx,%es

#
#----Load pointer to task structure in EBX. The task structure
# resides below the 8KB per-process kernel stack.
#
movl $-8192, %ebx
andl %esp, %ebx

#
#----Check to see if system call number is a valid one, then
# look-up the address of the kernel function that handles this
# system call.
#
do_call:
cmpl $(NR_syscalls),%eax
jae badsys
call *SYMBOL_NAME(sys_call_table)(,%eax,4)

# Put return value in EAX of saved user context
movl %eax,EAX(%esp)

#
#----If we can return directly to the user, then do so, else go to
# the dispatcher to select another process to run.
#
ret_from_sys_call:
cli # Block interrupts; iret effectively re-enables them
cmpl $0,need_resched(%ebx)
jne reschedule

# restore user context (including data segments)
popl %ebx
popl %ecx
popl %edx
popl %esi
popl %edi
popl %ebp
popl %eax
popl %ds
popl %es
addl $4,%esp # ignore orig_eax
iret

reschedule:
call SYMBOL_NAME(schedule)
jmp ret_from_sys_call

Entering and Exiting the Kernel

User and Kernel Address Spaces
In a modern operating system, each user process runs in its own address space, and the kernel operates in its protected space. At the processor level (machine code level), the main distinction between the kernel and a user process is the ability to access certain resources such as executing privileged instructions, reading or writing special registers, and accessing certain memory locations.
The separation of user process from user process ensures that processes cannot disturb each other. The separation of user processes from the kernel ensures that user processes cannot arbitrarily modify the kernel or jump into its code. It is important that processes cannot read the kernel's memory, and that they cannot directly call any function in the kernel. Allowing such operations would invalidate any protection that the kernel wants to provide.
Operating systems provide a mechanism for selectively calling certain functions in the kernel. These select functions are called kernel calls or system calls, and act as gateways into the kernel. These gateways are carefully designed to provide safe functionality. They carefully check their parameters and understand how to move data from a user process into the kernel and back again. We will discuss this topic in more detail in the Memory Management section of the course.
________________________________________
The Path In and Out of the Kernel
The only way to enter the operating system kernel is to generate a processor interrupt. Note the emphasis on the word "only". These interrupts come from several sources:
• I/O devices: When a device, such as a disk or network interface, completes its current operation, it notifies the operating system by generating a processor interrupt.
• Clocks and timers: Processors have timers that can be periodic (interrupting on a fixed interval) or count-down (set to complete at some specific time in the future). Periodic timers are often used to trigger scheduling decisions. For either of these types of timers, an interrupt is generated to get the operating system's attention.
• Exceptions: When an instruction performs an invalid operation, such as divide-by-zero, invalid memory address, or floating point overflow, the processor can generate an interrupt.
• Software Interrupts (Traps): Processors provide one or more instructions that will cause the processor to generate an interrupt. These instructions often have a small integer parameter. Trap instructions are most often used to implement system calls and to be inserted into a process by a debugger to stop the process at a breakpoint.
The flow of control is as follows (and illustrated below):
1. The general path goes from the executing user process to the interrupt handler. This step is like a forced function call, where the current PC and processor status are saved on a stack.
2. The interrupt handler decides what type of interrupt was generated and calls the appropriate kernel function to handle the interrupt.
3. The general run-time state of the process is saved (as on a context switch).
4. The kernel performs the appropriate operation for the system call. This step is the "real" functionality; all the steps before and after this one are mechanisms to get here from the user call and back again.
5. If the operation that was performed was trivial and fast, then the kernel returns immediately to the interrupted process. Otherwise, sometime later (it might be much later), after the operation is complete, the kernel executes its short-term scheduler (dispatcher) to pick the next process to run.
Note that one side effect of an interrupt might be to terminate the currently running process. In this case, of course, the current process will never be chosen to run next!
6. The state for the selected process is loaded into the registers and control is transferred to the process using some type of "return from interrupt" instruction.


________________________________________
The System Call Path
One of the most important uses of interrupts, and one of the least obvious when you first study about operating systems, is the system call. In your program, you might request a UNIX system to read some data from a file with a call that looks like:
rv = read(0,buff,sizeof(buff));
Somewhere, deep down in the operating system kernel, is a function that implements this read operation. For example, in Linux, the routine is called sys_read().
The path from the simple read() function call in your program to the sys_read() routine in the kernel takes you through some interesting and crucial magic. The path goes from your code to a system call stub function that contains a trap or interrupt instruction, to an interrupt handler in the kernel, to the actual kernel function. The return path is similar, with the addition of some important interactions with the process dispatcher.


________________________________________
System Call Stub Functions
The system call stub functions provide a high-level language interface to a function whose main job is to generate the software interrupt (trap) needed to get the kernel's attention. These functions are often called wrappers.
The stub functions on most operating systems do the same basic steps. While the details of implementation differ, they include the following:
1. set up the parameters,
2. trap to the kernel,
3. check the return value when the kernel returns, and
4. either:
   1. if no error: return immediately, or
   2. if there is an error: set a global error number variable (called "errno") and return a value of -1.
Below are annotated examples of this code from both the Linux (x86) and Solaris (SPARC) version of the C library. As an exercise, for the Linux and Solaris versions of the code, divide the code into the parts described above and label each part.
x86 Linux read (glibc 2.1.3)
read: push %ebx
mov 0x10(%esp,1),%edx ; put the 3 parms in registers
mov 0xc(%esp,1),%ecx
mov 0x8(%esp,1),%ebx
mov $0x3,%eax ; 3 is the syscall # for read
int $0x80 ; trap to kernel
pop %ebx
cmp $0xfffff001,%eax ; check return value
jae read_err
read_ret: ret ; return if OK.
read_err: push %ebx
call read_next ; push PC on stack
read_next: pop %ebx ; pop PC off stack to %ebx
xor %edx,%edx ; clear %edx
add $0x49a9,%ebx ; the following is a bunch of
sub %eax,%edx ; ...messy stuff that sets the
push %edx ; ...value of the errno variable
call 0x4000dfc0 <__errno_location>
pop %ecx
pop %ebx
mov %ecx,(%eax)
or $0xffffffff,%eax ; set return value to -1
jmp read_ret ; return
SPARC Solaris 8
read: st %o0,[%sp+0x44] ! save argument 1 (fd) on stack
read_retry: mov 3,%g1 ! 3 is the syscall # for read
ta 8 ! trap to kernel
bcc read_ret ! branch if no error
cmp %o0,0x5b ! check for interrupt syscall
be,a read_retry ! ... and restart if so
ld [%sp+0x44],%o0 ! restore 1st param (fd)
mov %o7,%g1 ! save return address
call read_next ! set %o7 to PC
sethi %hi(0x1d800), %o5 ! the following is a bunch of
read_next: or %o5, 0x10, %o5 ! ...messy stuff that sets the
add %o5,%o7,%o5 ! ...value of the errno variable
mov %g1, %o7 ! ...by calling _cerror. also the
ld [%o5+0xe28],%o5 ! ...return value is set to -1
jmp %o5
nop
read_ret: retl
nop
________________________________________
Interrupt Handling and the Interrupt Vector
When an interrupt occurs, the hardware takes over and forces a control transfer that looks much like a function call. The destination of the control transfer depends on the type of interrupt. Interrupt types include things such as divide by zero, memory errors, and software interrupts (such as from the "int" instruction). The code that handles a particular type of interrupt is called (cleverly enough) an interrupt handler. As control is transferred to the appropriate interrupt handler, the processor saves the PC and processor status on a special kernel stack.
The operating system sets up a table, usually called the interrupt vector, that contains one entry per type of interrupt. On the x86, this table is called the Interrupt Descriptor Table and an entry in the table is called a gate. Each vector entry contains the address of the interrupt handler for its interrupt.
In addition to branching and saving the PC and processor status, the processor will switch from a state where only certain parts of memory can be accessed and where certain instructions are prohibited (user mode) to a state where all operations are permitted (system mode).
________________________________________

Dispatching, Creating Processes

Chapter 3, Sections 3.2 and 3.3 in Operating Systems Concepts.
How does dispatcher decide which process to run next?


• Plan 0: search process table from front, run first runnable process.
o Might spend a lot of time searching.
o Weird priorities.
• Plan 1: link together the runnable processes into a queue. Dispatcher grabs first process from the queue. When processes become runnable, insert at back of queue.
• Plan 2: give each process a priority, organize the queue according to priority. Or, perhaps have multiple queues, one for each priority class.
CPU can only be doing one thing at a time: if user process is executing, dispatcher is not: OS has lost control. How does OS regain control of processor?
Internal events (things occurring within user process):
• System call.
• Error (illegal instruction, addressing violation, etc.).
• Page fault.
These are also called traps. They all cause a state switch into the OS.
External events (things occurring outside the control of the user process):
• Character typed at terminal.
• Completion of disk operation (controller is ready for more work).
• Timer: to make sure OS eventually gets control.
External events are usually called interrupts. They all cause a state switch into the OS. This means that user processes cannot directly take I/O interrupts.
________________________________________
When process is not running, its state must be saved in its process control block. What gets saved? Everything that next process could trash:
• Program counter.
• Processor status word (condition codes, etc.).
• General purpose registers.
• Floating-point registers.
• All of memory?


How do we switch contexts between the user and OS? Must be careful not to mess up process state while saving and restoring it.
Saving state: it is tricky because the OS needs some state to execute the state saving and restoring code.
• Hand-code in assembler: avoid using registers that contain user values.
• Still have problems with things like PC and PS: cannot do either one without the other.
• All machines provide some special hardware support for saving and restoring state:
o Most modern processors: hardware does not know much about processes, it just moves PC and PS to/from the stack. OS then transfers to/from PCB, and handles rest of state itself. (We will see processor knowledge about processes when we discuss virtual memory.)
o Exotic processors, like the Intel 432: hardware did all state saving and restoring into process control block, and even dispatching.
Short cuts: as process state becomes larger and larger, saving and restoring becomes more and more expensive. Cannot afford to do full save/restore for every little interrupt.
• Sometimes different amounts are saved at different times. E.g. to handle interrupts, might save only a few registers, but to swap processes, must save everything. This is a performance optimization that can cause BIZARRE problems.
• Sometimes state can be saved and restored incrementally, e.g. in virtual memory environments.
________________________________________
Creating a process from scratch (e.g., the Windows/NT CreateProcess()):
• Load code and data into memory.
• Create (empty) call stack.
• Create and initialize process control block.
• Make process known to dispatcher.
Forking: want to make a copy of existing process (e.g., Unix).
• Make sure process to be copied is not running and has all state saved.
• Make a copy of code, data, stack.
• Copy PCB of source into new process.
• Make process known to dispatcher.
What is missing?

Introduction to Processes

Operating Systems Concepts.
With so many things happening at once in system, need some way of separating them all out cleanly. That is a process.
Important concept: decomposition. Given hard problem, chop it up into several simpler problems that can be solved separately.
What is a process?
• "An execution stream in the context of a particular process state."
• A more intuitive, but less precise, definition is just a running piece of code along with all the things that the code can affect or be affected by.
• Process state is everything that can affect, or be affected by, the process: includes code, particular data values, open files, etc.
• Execution stream is a sequence of instructions performed in a process state.
• Only one thing happens at a time within a process.
Is a process the same as a program?
Some systems allow only one process (mostly personal computers). They are called uniprogramming systems (not uniprocessing; that means only one processor). Easier to write some parts of OS, but many other things are hard to do.
________________________________________
Most systems allow more than one process. They are called multiprogramming systems.
First, have to keep track of all the processes. For each process, process control block holds:
• Execution state (saved registers, etc.)
• Scheduling information
• Accounting and other miscellaneous information.
Process table: collection of all process control blocks for all processes.
Process Control Block
Execution
State
Scheduling
Information
Accounting
and
Miscellaneous
How can several processes share one CPU? OS must make sure that processes do not interfere with each other. This means
• Making sure each gets a chance to run (fair scheduling).
• Making sure they do not modify each other's state (protection).
Dispatcher (also called Short Term Scheduler): inner-most portion of the OS that runs processes:
• Run process for a while
• Save state
• Load state of another process
• Run it ...

QUESTIONS TO ASK ABOUT OPERATING SYSTEMS

Or, why are we studying this stuff?
Why are operating systems important?
• They consume more resources than any other program.
They may only use up a small percentage of the CPU time, but consider how many machines use the same program, all the time.
• They are the most complex programs.
They perform more functions for more users than any other program.
• They are necessary for any use of the computer.
When "the (operating) system" is down, the computer is down. Reliability and recovery from errors becomes critical.
• They are used by many users.
More hours of user time are spent dealing with the operating system than with any other program. Visible changes in the operating system force changes on many users.
Why are operating systems difficult to create, use, and maintain?
• Size - too big for one person
Current systems have many millions of lines of code. They involve 10-100 person-years to build.
• Lifetime - the systems remain around longer than the programmers who wrote them.
The code is written and rewritten. Original intent is forgotten (UNIX was designed to be a cute, little system - now 2 volumes this thick). The bug curve should be decreasing, but is actually periodic - draw.
• Complexity - the system must do difficult things.
Deal with ugly I/O devices, multiplexing-juggling act, handle errors (hard!).
• Asynchronous - must do several things at once.
Handles interrupts, and must change what it is doing thousands of times a second - and still get work done.
• General purpose - must do many different things.
Run Doom, Java, Fortran, Lisp, Trek, Databases, Web Servers, etc. Everybody wants their stuff to run well.
________________________________________
Operating systems are an unsolved problem.
• Most do not work very well.
Crash too often, too slow, awkward to use, etc.
• Usually they do not do everything they were designed to do.
• Do not adapt to changes very well.
New devices, processors, applications.
• There are no perfect models to emulate.
________________________________________
(No, UNIX is not it! Nor is Windows!) Unlike fields like electronics where there are such models (zero distortion, flat response), any real system has (a large number of) flaws.
________________________________________

Copyright © 2001, 2002, 2008 Barton P. Miller
Non-University of Wisconsin students and teachers are welcome to print these notes for their personal use. Further reproduction requires permission of the author.

VIEWS OF AN OPERATING SYSTEM

As a scheduler/allocator:
• The operating system has resources for which it is in charge. Responsible for handing them out (and later recovering them).
• Resources include CPU, memory, I/O devices, and disk space.
As a virtual machine:
• Operating system provides a "new" machine.
This machine could be the same as the underlying machine. Allows many users to believe they have an entire piece of hardware to themselves.
This could implement a different, perhaps more powerful, machine. Or just a different machine entirely. It may be useful to be able to completely simulate another machine with your current hardware. Example of upgrading to a new piece of hardware. This can get out of hand. E.g., 1401 -> 360 -> 370 -> 3081.
As a multiplexor:
• Allows sharing of resources, and provides protection from interference.
• Provides for a level of cooperation between users.
• Economic reasons: we have to take turns.
________________________________________
According to these three views, if:
• we had enough hardware to give everyone too much;
• the hardware was well designed;
• the communications problem -- how to share knowledge -- is solved;
then we would not need operating systems. My view of operating systems says that they will still be needed:
As a servant and provider of services:
• Need to provide things like in the above views, but deal with environments that are less than perfect. Help the users use the computer by:
providing commonly used subroutines;
providing access to hardware facilities;
providing higher-level "abstract" facilities;
providing an environment which is easy, pleasant, and productive to use.
This view as a provider of services fits well with our modern network view of computing, where most resources are services.
________________________________________
What are the desirable qualities of an operating system? We can discuss them in terms of: Usability, Facilities, Cost, and Adaptability.
• Usability:
o Robustness
accept all valid input without error, and gracefully handle all invalid inputs
o Consistency
E.g., if "-" means options flags in one place, it means it in another. Key idea: conventions. Concept: The Principle of Least Astonishment.
o Proportionality
Simple, cheap and frequent things are easy. Also, expensive and disastrous things (rm *) are hard.
o Forgiving
Errors can be recovered from. Reasonable error messages. Example from "rm"; UNIX vs. TOPS.
o Convenient
Not necessary to repeat things, or do awkward procedures to accomplish things. Example copying a file took a batch job.
o Powerful
Has high level facilities.
• Facilities
o Sufficient for intended use.
o Complete.
Don't leave out part of a facility. E.g., protection with
o Appropriate.
Do not use fixed-width field input from terminal.
• Costs
o Want low cost and efficient services.
o Good algorithms.
Make use of space/time tradeoffs, special hardware.
o Low overhead.
Cost of doing nothing should be low. E.g., idle time at a terminal.
o Low maintenance cost.
System should not require constant attention.
• Adaptability
o Tailored to the environment.
Support necessary activities. Do not impose unnecessary restrictions. What are the things people do most -- make them easy.
o Changeable over time.
Adapt as needs and resources change. E.g., expanding memory and new devices, or new user population.
o Extendible-Extensible
Adding new facilities and features - which look like the old ones.
________________________________________
Two main perspectives of an operating system:
• Outside - depends on your level of sophistication.
A system to compile and run Java programs
Your average introductory Computer Sciences student.
A system with many facilities - compilers, databases, file systems, system calls.
• Inside - internals, code, data structures.
This is the system programmer's view of an operating system. At this level you understand not only what is provided, but how it is provided.
________________________________________
Go over class information sheet. Initial programming assignment and chapter 1 from Dinosaur book.
Explain teaching and grading philosophy (probably when doing info sheet). Emphasize "come talk to me first" view before cheating.