
Friday, November 13, 2015

What do you mean by C++ access specifiers?

Access specifiers are used to define how the members (functions and variables) of a class can be accessed outside the class. There are three access specifiers: public, private, and protected.
private:
Members declared as private are accessible only within the same class; they cannot be accessed outside the class in which they are declared.
public:
Members declared as public are accessible from anywhere.
protected:
Members declared as protected cannot be accessed from outside the class, except by a derived (child) class. This access specifier is significant in the context of inheritance.

What is the difference between a NULL pointer and a void pointer?

A NULL pointer is a pointer of any type whose value is zero. A void pointer is a pointer to an object of an unknown type, and is guaranteed to have enough bits to hold a pointer to any object. A void pointer is not guaranteed to have enough bits to point to a function (though in general practice it does).

What is difference between C++ and Java?

C++ has pointers; Java does not.
Java is platform independent; C++ is not.
Java has garbage collection; C++ does not.

What do you mean by virtual methods?

Virtual methods are used to achieve polymorphism in C++. Say class A is derived from class B. If we declare a function f() as virtual in class B and override the same function in class A, then at runtime the appropriate method of the class is called, depending on the actual type of the object.

What is an explicit constructor?

Ans: A conversion constructor declared with the explicit keyword. The compiler does not use an explicit constructor to implement an implied conversion of types; its use is reserved explicitly for construction.

What is the Standard Template Library?

Ans: A library of container templates approved by the ANSI committee for inclusion in the standard C++ specification. A programmer who then launches into a discussion of the generic programming model, iterators, allocators, algorithms, and such, has a higher than average understanding of the new technology that STL brings to C++ programming.

What is an iterator?

Iterators are like pointers. They are used to access the elements of containers thus providing a link between algorithms and containers. Iterators are defined for specific containers and used as arguments to algorithms.

What is class invariant?

A class invariant is a condition that defines all valid states for an object. It is a logical condition that ensures the correct working of a class. Class invariants must hold when an object is created, and they must be preserved under all operations of the class. In particular, all class invariants are both preconditions and postconditions for all operations or member functions of the class.

What do you mean by implicit conversion?

Whenever data types are mixed in an expression, C++ performs the conversion automatically: the smaller (narrower) type is converted to the wider type.
Example: when an integer and a float are mixed, the integer is converted to float.

What do you mean by static methods?

By using a static method there is no need to create an object of the class in order to use that method; we can call it directly on the class. For example, if class A has a static function f(), we can call it as A::f(). There is no need to create an object of class A.

What is the difference between a copy constructor and an overloaded assignment operator?

A copy constructor constructs a new object by using the content of the argument object. An overloaded assignment operator assigns the contents of an existing object to another existing object of the same class.

If you want to share several functions or variables among several files while maintaining consistency, how would you share them?

To maintain consistency between several files, first place each definition in a '.c' file, then put the corresponding external declarations in a '.h' file. Once that header is pulled in with #include, the functions and variables can be used in several files. In other words, to maintain consistency we can make our own header file and include it wherever needed.

What do you mean by translation unit?

A translation unit is the set of source text that is seen by the compiler and translated as one unit: generally a '.c' file together with all the header files mentioned in its #include directives.
When the C preprocessor expands the source file with all its header files, the result is the preprocessing translation unit; further processing translates this into the translation unit proper. From the translation unit the compiler forms an object file, and the object files ultimately form an executable program.

What is function overloading and operator overloading?

Function overloading: C++ enables several functions of the same name to be defined, as long as these functions have different sets of parameters (at least as far as their types are concerned). This capability is called function overloading. When an overloaded function is called, the C++ compiler selects the proper function by examining the number, types and order of the arguments in the call. Function overloading is commonly used to create several functions of the same name that perform similar tasks but on different data types.
Operator overloading allows existing C++ operators to be redefined so that they work on objects of user-defined classes. Overloaded operators are syntactic sugar for equivalent function calls. They form a pleasant facade that doesn't add anything fundamental to the language (but they can improve understandability and reduce maintenance costs).

What is the difference between declaration and definition?

The declaration tells the compiler that at some later point we plan to present the definition of this declaration.
E.g.: void stars(); // function declaration
The definition contains the actual implementation.
E.g.:
void stars() // function definition
{
    for (int j = 10; j >= 0; j--) // function body
        cout << '*';
    cout << endl;
}

Difference between Bit rate and Baud rate.

The difference between bit rate and baud rate is subtle, and the two are closely inter-related. The simplest explanation is that the bit rate is how many data bits are transmitted per second, while the baud rate is the number of times per second a signal in a communications channel changes.
Bit rates measure the number of data bits (that is 0’s and 1’s) transmitted in one second in a communication channel. A figure of 2400 bits per second means 2400 zeros or ones can be transmitted in one second, hence the abbreviation “bps.” Individual characters (for example letters or numbers) that are also referred to as bytes are composed of several bits.
A baud rate is the number of times a signal in a communications channel changes state or varies. For example, a 2400 baud rate means that the channel can change states up to 2400 times per second. The term "change state" means that it can change from 0 to 1 or from 1 to 0 up to X (in this case, 2400) times per second. It also refers to the actual state of the connection, such as voltage, frequency, or phase level.
The main difference between the two is that one change of state can transmit one bit, or slightly more or less than one bit, depending on the modulation technique used. So the bit rate (bps) and baud rate (baud per second) have this connection:
bps = baud per second x the number of bits per baud
The modulation technique determines the number of bits per baud. Here are two examples:
When FSK (Frequency Shift Keying, a transmission technique) is used, each baud transmits one bit; only one change in state is required to send a bit, so the modem's bps rate is equal to the baud rate. By contrast, suppose a baud rate of 2400 is used with a phase-modulation technique that transmits four bits per baud. Then:
2400 baud x 4 bits per baud = 9600 bps
Such modems are capable of 9600 bps operation.
Can the baud rate equal the bit rate?
Because symbols are comprised of bits, the baud rate will equal the bit rate only when there is just one bit per symbol.

What is a template?

Templates allow us to create generic functions that admit any data type as parameters and return value, without having to overload the function for all the possible data types. Up to a certain point they fulfill the functionality of a macro. The prototype is either of the two following ones:
template <class identifier> function_declaration;
template <typename identifier> function_declaration;
The only difference between the two prototypes is the use of the keyword class or typename; their use is indistinct, since both expressions have exactly the same meaning and behave exactly the same way.

Describe linkages and types of linkages?

When we declare identifiers within the same scope or in different scopes, they can be made to refer to the same object or function with the help of linkage. There are three types of linkage:
a) External linkage b) Internal linkage c) No linkage
External linkage means global, non-static functions or variables. Example: extern int a1;
Internal linkage means static variables and functions.
Example: static int a2;
No linkage means local variables. Example: int a3;

Keeping in mind the efficiency, which one between if-else and switch is more efficient?

Between an if-else chain and a switch statement, as far as efficiency is concerned it is hard to say which one is more efficient, because there is hardly any difference between them in terms of efficiency.
A switch can be converted into an if-else chain internally by the compiler.
Switch statements are a compact way of writing a jump table, whereas if-else is a long way of writing conditions.
Between if-else and switch statements, switch cases are generally preferred in programming, as they are a compact and cleaner way of writing conditions in the program.

Comparison of ASK, FSK and PSK

ASK refers to a type of amplitude modulation that assigns bit values to discrete amplitude levels. The carrier signal is then modulated among the members of a set of discrete values to transmit information.
FSK refers to a type of frequency modulation that assigns bit values to discrete frequency levels. FSK is divided into noncoherent and coherent forms. In noncoherent forms of FSK, the instantaneous frequency shifts between two discrete values termed the "mark" and "space" frequencies. In coherent forms of FSK, there is no phase discontinuity in the output signal. FSK modulation formats generate modulated waveforms that are strictly real values, and thus tend not to share common features with quadrature modulation schemes.
PSK in a digital transmission refers to a type of angle modulation in which the phase of the carrier is discretely varied, either in relation to a reference phase or to the phase of the immediately preceding signal element, to represent data being transmitted. For example, when encoding bits, the phase shift could be 0 degrees for encoding a "0" and 180 degrees for encoding a "1," or the phase shift could be –90 degrees for "0" and +90 degrees for a "1," thus making the representations for "0" and "1" a total of 180 degrees apart. Some PSK systems are designed so that the carrier can assume only two different phase angles; each change of phase carries one bit of information, that is, the bit rate equals the modulation rate. If the number of recognizable phase angles is increased to four, then 2 bits of information can be encoded into each signal element; likewise, eight phase angles can encode 3 bits in each signal element.

What is PHP?

PHP is a server-side scripting language commonly used for web applications. PHP has many frameworks and CMSes for creating websites; even a non-technical person can create sites using a CMS. WordPress and osCommerce are famous PHP CMSes. It is also an object-oriented programming language, like Java, C#, etc. It is very easy to learn.

require_once(), require(), include(). What is the difference between them?

require() includes and evaluates a specific file, while require_once() does that only if it has not been included before (on the same page). So, require_once() is recommended when you want to include a file in which you have a lot of functions, for example. This way you make sure you don't include the file more than once, and you will not get the "function re-declared" error.

Differences between GET and POST methods ?

We can send only about 1024 bytes using the GET method, but the POST method can transfer a large amount of data, and POST is a more secure method than GET.

What is the use of 'print' in PHP?

This is not actually a real function; it is a language construct. So you can use it without parentheses around its argument list.
Example: print('PHP Interview questions');
print 'Job Interview';

What is the use of rand() in PHP?

It is used to generate random numbers. If called without arguments, it returns a pseudo-random integer between 0 and getrandmax(). If you want a random number between 6 and 12 (inclusive), for example, use rand(6, 12). This function does not generate cryptographically safe values, and should not be used for cryptographic purposes. If you want a cryptographically secure value, consider using openssl_random_pseudo_bytes() instead.

What is the importance of the "method" attribute in an HTML form?

"method" attribute determines how to send the form-data into the server.There are two methods, get and post. The default method is get.This sends the form information by appending it on the URL.Information sent from a form with the POST method is invisible to others and has no limits on the amount of information to send.

How we can retrieve the data in the result set of MySQL using PHP?

1. mysql_fetch_row
2. mysql_fetch_array
3. mysql_fetch_object
4. mysql_fetch_assoc

Self - Stopping Counter


C Aptitude Question: 2

#include <stdio.h>
int main()
{
int i=-3, j=2, k=0, m;
m = ++i && ++j || ++k;
printf("%d, %d, %d, %d\n", i, j, k, m);
return 0;
}
What will be the output?
-2, 3, 0, 1
Explanation:
Step 1: int i=-3, j=2, k=0, m; here the variables i, j, k, m are declared as integers, and i, j, k are initialized to -3, 2, 0 respectively.
Step 2: m = ++i && ++j || ++k;
becomes m = (-2 && 3) || ++k;
becomes m = TRUE || ++k;
(++k) is not executed, because (-2 && 3) alone returns TRUE and || short-circuits.
Hence the whole statement is TRUE, so it returns '1' (one). Hence m=1.
Step 3: printf("%d, %d, %d, %d\n", i, j, k, m); In the previous step the values of i and j were incremented by '1' (one); k was left unchanged.
Hence the output is "-2, 3, 0, 1".

C Aptitude Question: 1

#include <stdio.h>
int main()
{
int x, y, z;
x=y=z=-1;
z = ++x || ++y && ++z;
printf("x=%d, y=%d, z=%d\n", x, y, z);
return 0;
}
What will be the output?
Answer: x=0 y=0 z=0
Step 1: x=y=z=-1; here the variables x ,y, z are initialized to value '-1'.
Step 2: z = ++x || ++y && ++z; becomes z = ( (++x) || (++y && ++z) ). Here ++x becomes 0, which is false, so the right-hand side of || must still be evaluated. ++y becomes 0, which is also false, so the && short-circuits and ++z is never executed. The whole expression therefore evaluates to '0', and that value is assigned to z.
Step 3: printf("x=%d, y=%d, z=%d\n", x, y, z); It prints "x=0, y=0, z=0". Here x and y were incremented in the previous step; z was never incremented, but it was assigned the value '0' by the expression.

Monday, November 9, 2015

How to create a small calc in MVC in Java Swing

Saturday, November 7, 2015

Deadlocks

It is possible for two or more programs to be hung up waiting for each other.

For example, two programs may each require two I/O devices to perform some operation.

One of the programs has seized control of one of the devices and the other program has control of the other device.

Each is waiting for the other program to release the desired resource.



What is needed to tackle these problems is a systematic way to monitor and control the various programs executing on the processor.
The concept of the process provides the foundation.

We can think of a process as consisting of three components:


An executable program

The associated data needed by the program (variables, work space, buffers, etc.)

The execution context of the program


This last element is essential.

The execution context, or process state, is the internal data by which the OS is able to supervise and control the process.

The context includes all of the information that the OS needs to manage the process and that the processor needs to execute the process properly.

The context includes the contents of the various processor registers, such as the program counter and data registers.

It also includes information of use to the OS, such as the priority of the process and whether the process is waiting for the completion of a particular I/O event.

Batch Multiprogramming versus Time Sharing

Batch Multiprogramming                              | Time Sharing
Maximize processor use                              | Minimize response time
Job control language commands provided with the job | Commands entered at the terminal

I/O Function

Data can be exchanged directly between an I/O module and the processor.
 Just as the processor can initiate a read or write with memory, specifying the address of a memory location, the processor can also read data from or write data to an I/O module.
 In this latter case, the processor identifies a specific device that is controlled by a particular I/O module.

In some cases, it is desirable to allow I/O exchanges to occur directly with main memory to relieve the processor of the I/O task.
In such a case, the processor grants to an I/O module the authority to read from or write to memory, so that the I/O memory transfer can occur without tying up the processor.
This operation is known as direct memory access (DMA).

PROCESSOR REGISTERS

A processor includes a set of registers that provide memory that is faster and smaller than main memory.

Processor registers serve two functions: 

User-visible registers:
Enable the machine or assembly language programmer to minimize main memory references by optimizing register use.

For high level languages, an optimizing compiler will attempt to make intelligent choices of which variables to assign to registers and which to main memory locations.

Some high-level languages, such as C, allow the programmer to suggest to the compiler which variables should be held in registers.

Types of registers that are typically available are data, address, and condition code registers.

a) Data registers:
Data registers can be assigned to a variety of functions by the programmer.
 In some cases, they are general purpose in nature and can be used with any machine instruction that performs operations on data.
 Often, however, there are restrictions.
For example, there may be dedicated registers for floating-point operations and others for integer operations.

b) Address registers :
Address registers contain main memory addresses of data and instructions.
These registers may themselves be general purpose, or may be devoted to a particular way, or mode, of addressing memory.

Control and status registers:
Used by the processor to control the operation of the processor and by privileged OS routines to control the execution of programs. 

Tuesday, November 3, 2015

Why gray code called non-weighted codes?

Gray code is a non-weighted code; it is not an arithmetic code. That means there are no specific weights assigned to the bit positions. It has a very special feature: only one bit changes each time the decimal number is incremented. As only one bit changes at a time, the gray code is called a unit distance code. The gray code is a cyclic code. Gray code cannot be used for arithmetic operations.

Explain the function of a master slave flip – flop.

When Clk=1, the master J-K flip flop gets disabled. The Clk input of the master input will be the opposite of the slave input. So the master flip flop output will be recognized by the slave flip flop only when the Clk value becomes 0. Thus, when the clock pulse makes a transition from 1 to 0, the latched outputs of the master flip flop are fed through to the inputs of the slave flip-flop, making this flip flop edge- or pulse-triggered.
Thus, the circuit accepts the value in the input when the clock is HIGH, and passes the data to the output on the falling-edge of the clock signal. This makes the Master-Slave J-K flip flop a Synchronous device as it only passes data with the timing of the clock signal.

What do you mean by self complementing code?

A self-complementing code is one in which the 9's complement is formed by taking the 1's complement. For instance, the 9's complement of 6 is 3 and the 9's complement of 1 is 8.
If we consider Excess 3:
0 0011 
1 0100
2 0101
3 0110
4 0111
5 1000
6 1001
7 1010
8 1011
9 1100
Then thinking about 6 and 3 we see the XS3 codes are: 1001 and 0110 which are the 1's complement of each other. Considering 1 and 8, the XS3 codes are 0100 and 1011 - again 1's complements. There are several self-complementing codes. There is one where instead of the usual binary weights of 8, 4, 2 and 1 you can use 2, 4, 2, and 1.

Why DDA algorithm?

The DDA algorithm is used to plot a line between two nodes, i.e., two end points, in a computer system.
Since the computer understands pixels, if we want to plot a line we need the maximum number of intermediate vertices of the line, i.e., intermediate points, so as to generate a straight line. This is what DDA does.
C program for the DDA algorithm (uses the legacy Turbo C graphics.h library):
#include <graphics.h>
#include <stdio.h>
#include <math.h>
int main( )
{
float x,y,x1,y1,x2,y2,dx,dy,pixel;
int i,gd,gm;
printf("Enter the value of x1 : ");
scanf("%f",&x1);
printf("Enter the value of y1 : ");
scanf("%f",&y1);
printf("Enter the value of x2 : ");
scanf("%f",&x2);
printf("Enter the value of y1 : ");
scanf("%f",&y2);
detectgraph(&gd,&gm);
initgraph(&gd,&gm,"");
dx=fabs(x2-x1);
dy=fabs(y2-y1);
if(dx>=dy)
pixel=dx;
else
pixel=dy;
dx=dx/pixel;
dy=dy/pixel;
x=x1;
y=y1;
i=1;
while(i<=pixel)
{
putpixel(x,y,1);
x=x+dx;
y=y+dy;
i=i+1;
delay(100);
}
getch();
closegraph();
return 0;
}

What is memory mapped I/O Scheme?

In the memory-mapped I/O scheme we use only one address space; this single address space is allocated to both memory and I/O devices. Within the total address space, some addresses are assigned to memories and some to I/O devices, but the addresses assigned to I/O devices must be different from the addresses assigned to memories. In this scheme, remember that an I/O device is also treated as a memory location: one unique address is assigned to each memory location, and one address is assigned to each I/O device.

Address Bus

It is a group of wires or lines that are used to transfer the addresses of memory or I/O devices. It is unidirectional. In the Intel 8085 microprocessor, the address bus is 16 bits wide. This means that the 8085 can transfer a maximum 16-bit address, so it can address 65,536 different memory locations. This bus is multiplexed with the 8-bit data bus, so the most significant bits (MSB) of the address go through the address bus (A15-A8) and the least significant bits go through the multiplexed data bus (AD0-AD7).

Data Bus

As the name suggests, it is used to transfer data between the microprocessor and memory or input/output devices. It is bidirectional, as the microprocessor needs to send and receive data. The data bus also works as an address bus when multiplexed with the lower-order address bus. The data bus is 8 bits long. The word length of a processor depends on the data bus; that's why the Intel 8085 is called an 8-bit microprocessor, because it has an 8-bit data bus.

What is associative memory?

Content-addressed or associative memory refers to a memory organization in which the memory is accessed by its content (as opposed to an explicit address). Thus, reference clues are "associated" with actual memory contents until a desirable match (or set of matches) is found. Production systems are obvious examples of systems that employ such a memory. Associative memory stands as the most likely model for cognitive memories, as well. Humans retrieve information best when it can be linked to other related information. This linking is fast, direct and labyrinthine in the sense that the memory map is many-to-many and homomorphic.

Desktop Computer versus Mainframe Computer

History
Mainframe computers were originally housed in large cases or frames, giving them the mainframe name, according to IBM. Mainframes were housed in large air-conditioned rooms. Desktop Computer were created for individual users and can sit on a desk or table.
Uses
When first created, Desktop Computer were used for single purposes, such as writing letters or working on a budget. Desktop Computer are now used primarily as communication tools for the Internet, according to PC Mag. Mainframes are typically used as central data repositories to handle the data responsibilities of a network of computers, such as updating software and operating systems, according to IBM.
Size
The size of mainframe computers began to be miniaturized in the 1990s, when mainframes began to be known as servers or hubs. In the same period, the prices of Desktop Computers began to be reduced by wholesale manufacturers, followed by the development of smaller PCs, such as laptops and handheld devices.
Desktop Computers generally have only one central processor unit, memory drive, bus, and I/O system, while mainframes can have several or even thousands. Desktop Computers generally cannot run older software. While mainframes are used to connect multiple users (sometimes thousands), Desktop Computers are used by single users. Speed and size are some of the most drastic differences: mainframes can be large enough to fill an entire room, and PCs can be small enough to fit into a purse.
Conclusion
Mainframes and Desktop Computers have evolved into machines that are powerful enough to practically run entire countries. They have some similarities and many differences that make them desirable for use by companies and individuals. Speed, size, and costs are usually the first things that are evaluated before either one of these products are purchased. But purchased, they are, and they will continue to each hold their own in this world of technological advances.

Spyware

Any software that covertly gathers user information through the user's Internet connection without his or her knowledge, usually for advertising purposes. Spyware applications are typically bundled as a hidden component of freeware or shareware programs that can be downloaded from the Internet; however, it should be noted that the majority of shareware and freeware applications do not come with spyware. Once installed, the spyware monitors user activity on the Internet and transmits that information in the background to someone else. Spyware can also gather information about e-mail addresses and even passwords and credit card numbers.
Spyware is similar to a Trojan horse in that users unwittingly install the product when they install something else. A common way to become a victim of spyware is to download certain peer-to-peer file swapping products that are available today.
Aside from the questions of ethics and privacy, spyware steals from the user by using the computer's memory resources and also by eating bandwidth as it sends information back to the spyware's home base via the user's Internet connection. Because spyware is using memory and system resources, the applications running in the background can lead to system crashes or general system instability.
With so many types of malicious software being spread around the Internet, it is important to be aware of what spyware is and what spyware does. Spyware is a general term used to describe software that performs certain behaviors, generally without appropriately obtaining your consent first, such as:
Advertising
Collecting personal information
Changing the configuration of your computer
Spyware is often associated with software that displays advertisements (called adware) or software that tracks personal or sensitive information.

SIMM versus DIMM

Single In-line Memory modules and Dual In-line Memory Modules are basically just different ways of packaging the same silicon memory. The primary difference between these two types of modules is in the number of pins that they have. DIMMs have twice as many pins compared to comparable SIMMs. This might not seem likely at first since it is clearly visible that they have the same number of pins at each side, but closer inspection reveals that the connectors on either side are connected to each other in SIMMs. This is not the case with DIMMs.
The very apparent advantage of this is the much wider bus that a DIMM can utilize. DIMMs have a 64-bit bus compared to the 32-bit bus used by SIMMs. A wider bus means more data can pass through, and this correlates to faster overall performance, since memory is essential in all computer operations. Achieving a 64-bit bus is not exclusive to DIMMs, since this capability had already been achieved with SIMMs via a neat little trick: use two SIMMs in tandem, and the resulting bus is the sum of the two buses. The appearance of DIMMs has made this practice unnecessary.
DIMMs are not backwards compatible with SIMMs, therefore it is not possible to simply upgrade the memory modules. Moving from SIMMs to DIMMs required a replacement of the motherboard, which could sometimes mean the replacement of the processor. This is why the change to DIMMs wasn't very quick; most people opted to switch to DIMMs when they needed to upgrade or replace their computers.
DIMMs have replaced all SIMMs in computers today and the only place that you would probably see SIMMs would be at computer museums. DIMMs have become so dominant that it is no longer necessary to identify whether the memory module is a DIMM. There is no replacement to DIMMs at the moment and it is expected that memory modules would still be manufactured as DIMMs in the foreseeable future.
Some Important differences are showing below:
1. The SIMM's pins on either side are connected to each other, while DIMM pins are independent
2. DIMMs provide a 64-bit channel, which is twice the 32-bit channel of SIMMs
3. DIMMs eliminated the practice of pairing two SIMMs as one
4. DIMMs are not backwards compatible with SIMMs just like all other memory modules
5. DIMMs are the replacement technology for SIMMs

ROM versus PROM

ROM: Read Only Memory
Read Only Memory is constructed from "hard-wired logic," in a similar way to the processor, meaning that it cannot be reprogrammed or changed. This is because it is designed to perform a specific function, and does not need to be altered. An example of ROM is a commercial CD purchased from a store; the manufacturers do not want you to alter what is stored on the disc. ROM is only programmable once; for example, it could be programmed at the factory where they make the chip. Indeed, it is usually used in firmly hardcoded chips made by the company.
PROM: Programmable Read Only Memory
Programmable Read Only Memory can be programmed using specific software only available to companies producing PROM chips. PROM can be likened to burning to a CD only once and reading from it many times. This is similar to ROM except that you, the consumer, can program it. You can buy a blank chip and have a PROM programmer program it with your stuff. But, once you program it, you can never change it.

Batch Systems

The users of batch operating system do not interact with the computer directly. Each user prepares his job on an off-line device like punch cards and submits it to the computer operator. To speed up processing, jobs with similar needs are batched together and run as a group. Thus, the programmers left their programs with the operator. The operator then sorts programs into batches with similar requirements.
Batch processing has these benefits:
• It can shift the time of job processing to when the computing resources are less busy.
• It avoids idling the computing resources with minute-by-minute manual intervention and supervision.
• By keeping high overall rate of utilization, it amortizes the computer, especially an expensive one.
• It allows the system to use different priorities for batch and interactive work.
• Rather than running one program multiple times to process one transaction each time, batch processes will run the program only once for many transactions, reducing system overhead.
The problems with batch systems are as follows:
• Lack of interaction between the user and the job.
• The CPU is often idle, because mechanical I/O devices are much slower than the CPU.
• It is difficult to provide the desired priority.
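The operator's sorting-into-batches step can be sketched in a few lines of Java (a toy illustration; the class and method names are ours, not part of any real batch system):

```java
import java.util.*;

// A toy sketch of the operator's role in a batch system: submitted jobs are
// sorted into batches by requirement, and each batch is then run as a group
// with no user interaction.
public class BatchDemo {
    // Each job is {name, requirement}; returns requirement -> list of job names.
    static Map<String, List<String>> makeBatches(String[][] jobs) {
        Map<String, List<String>> batches = new LinkedHashMap<>();
        for (String[] job : jobs) {
            batches.computeIfAbsent(job[1], k -> new ArrayList<>()).add(job[0]);
        }
        return batches;
    }

    public static void main(String[] args) {
        String[][] submitted = {
            {"payroll", "COBOL"}, {"simulation", "FORTRAN"}, {"billing", "COBOL"}
        };
        // The operator runs each batch as a group.
        makeBatches(submitted).forEach((req, batch) ->
            System.out.println(req + " batch: " + batch));
    }
}
```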

Relation between a flow chart and an algorithm

A flowchart is a type of diagram that represents an algorithm, workflow or process, showing the steps as boxes of various kinds, and their order by connecting them with arrows. This diagrammatic representation illustrates a solution to a given problem. Flowcharts are used in analyzing, designing, documenting or managing a process or program in various fields.
A flowchart is a very important tool for developing an algorithm and a program. It is a pictorial representation of the step-by-step solution of a problem.
Programmers often use it as a program-planning tool for visually organizing the steps necessary to solve a problem. It uses boxes of different shapes to denote different types of instruction.
While drawing a flowchart, a programmer need not pay attention to the elements of the programming language; he has to pay attention only to the logic of the solution to the problem,
whereas the term algorithm refers to that logic itself. An algorithm is a step-by-step description of how to arrive at the solution to a problem; it is defined as a sequence of instructions that, when executed in the specified order, produces the desired result.
The set of rules that defines how a particular problem can be solved in a finite number of steps is known as an algorithm.
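As a concrete illustration, here is a simple algorithm (finding the largest of a list of numbers) expressed in Java; the class and method names are ours, chosen only for this example:

```java
// The algorithm: scan the list once, remembering the largest value seen so far.
public class Largest {
    // Returns the largest element of a non-empty array.
    static int largest(int[] nums) {
        int max = nums[0];                      // Step 1: assume the first number is largest
        for (int i = 1; i < nums.length; i++) { // Step 2: examine each remaining number
            if (nums[i] > max) {                // Step 3: if it is bigger, remember it
                max = nums[i];
            }
        }
        return max;                             // Step 4: the remembered value is the answer
    }

    public static void main(String[] args) {
        System.out.println(largest(new int[]{12, 45, 7, 23}));  // prints 45
    }
}
```

A flowchart for the same logic would show these four steps as boxes connected by arrows, with a decision diamond for the comparison.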

Time Sharing Operating System

Time sharing is a technique which enables many people, located at various terminals, to use a particular computer system at the same time. Time-sharing or multitasking is a logical extension of multiprogramming. Processor time shared among multiple users simultaneously is termed time-sharing. The main difference between multiprogrammed batch systems and time-sharing systems is that in multiprogrammed batch systems the objective is to maximize processor use, whereas in time-sharing systems the objective is to minimize response time.
Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that each user receives an immediate response. For example, in transaction processing the processor executes each user program in a short burst, or quantum, of computation: if n users are present, each user gets a time quantum in turn. When a user submits a command, the response time is a few seconds at most.
The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of time. Computer systems that were designed primarily as batch systems have been modified to time-sharing systems.
Advantages of time-sharing operating systems:
• Quick response.
• Avoids duplication of software.
• Reduces CPU idle time.
Disadvantages of time-sharing operating systems:
• Problems of reliability.
• Questions of security and integrity of user programs and data.
• Problems of data communication.
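The quantum-by-quantum switching described above can be sketched as a small simulation (class and method names are ours; a real scheduler would of course dispatch real processes, not decrement counters):

```java
import java.util.*;

// A minimal round-robin time-sharing simulation: n user jobs share the CPU,
// each receiving a fixed quantum in turn until its work is done.
public class RoundRobin {
    // Each entry of 'remaining' is a job's remaining service time; returns the
    // order in which jobs finish under round-robin scheduling with this quantum.
    static List<Integer> run(int[] remaining, int quantum) {
        List<Integer> finished = new ArrayList<>();
        Deque<Integer> ready = new ArrayDeque<>();
        for (int i = 0; i < remaining.length; i++) ready.add(i);
        while (!ready.isEmpty()) {
            int job = ready.poll();                      // dispatch the next job
            remaining[job] -= quantum;                   // it runs for one quantum
            if (remaining[job] <= 0) finished.add(job);  // done: record it
            else ready.add(job);                         // not done: back of the queue
        }
        return finished;
    }

    public static void main(String[] args) {
        // Three users with different demands; quantum = 2 time units.
        System.out.println(run(new int[]{2, 5, 3}, 2));  // [0, 2, 1]
    }
}
```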

Shell program to convert a number from any base to any base

#!/bin/sh
# Convert a number from one base to another using bc.
echo "enter input base:"
read i
echo "enter output base:"
read o
echo "enter number"
read n
# bc expects digits above 9 in upper case, so normalize the input.
n=`echo $n | tr '[a-z]' '[A-Z]'`
# First pass: interpret $n in base $i and convert it to decimal.
echo "ibase=$i" > tmp1
echo $n >> tmp1
x=`bc < tmp1`
# Second pass: convert the decimal value $x to base $o.
echo "obase=$o" > tmp2
echo $x >> tmp2
y=`bc < tmp2`
echo "$n with input base $i is changed to $y with output base $o"
# Remove the temporary files.
rm tmp1 tmp2

Monday, November 2, 2015

Repeaters

Like any type of LAN, 10BASE5 and 10BASE2 had limitations on the total length of a cable. With 10BASE5, the limit was 500 m; with 10BASE2, it was 185 m. Interestingly, the 5 and 2 in the names 10BASE5 and 10BASE2 represent the maximum cable length—with the 2 referring to 200 meters, which is pretty close to the actual maximum of 185 meters. (Both of these types of Ethernet ran at 10 Mbps.)

 In some cases, the maximum cable length was not enough, so a device called a repeater was developed. One of the problems that limited the length of a cable was that the signal sent by one device could attenuate too much if the cable was longer than 500 m or 185 m. Attenuation means that when electrical signals pass over a wire, the signal strength gets weaker the farther along the cable it travels. It’s the same concept behind why you can hear someone talking right next to you, but if that person speaks at the same volume and you are on the other side of a crowded room, you might not hear her because the sound waves have attenuated.

Repeaters connect to multiple cable segments, receive the electrical signal on one cable, interpret the bits as 1s and 0s, and generate a brand-new, clean, strong signal out the other cable. A repeater does not simply amplify the signal, because amplifying the signal might also amplify any noise picked up along the way.

NOTE: Because the repeater does not interpret what the bits mean, but only examines and regenerates the electrical signal, it is considered to operate at Layer 1.

You should not expect to need to implement 10BASE5 or 10BASE2 Ethernet LANs today. However, for learning purposes, keep in mind several key points from this section as you move on to concepts that relate to today’s LANs:

The original Ethernet LANs created an electrical bus to which all devices connected.

Because collisions could occur on this bus, Ethernet defined the CSMA/CD algorithm, which defined a way to both avoid collisions and take action when collisions occurred.

 Repeaters extended the length of LANs by cleaning up the electrical signal and repeating it—a Layer 1 function—but without interpreting the meaning of the electrical signal.

A Brief History of Ethernet

Like many early networking protocols, Ethernet began life inside a corporation that was looking to solve a specific problem. Xerox needed an effective way to allow a new invention, called the personal computer, to be connected in its offices. From that, Ethernet was born. (Go to http://inventors.about.com/library/weekly/aa111598.htm for an interesting story on the history of Ethernet.) Eventually, Xerox teamed with Intel and Digital Equipment Corp. (DEC) to further develop Ethernet, so the original Ethernet became known as DIX Ethernet, referring to DEC, Intel, and Xerox.

These companies willingly transitioned the job of Ethernet standards development to the IEEE in the early 1980s. The IEEE formed two committees that worked directly on Ethernet—the IEEE 802.3 committee and the IEEE 802.2 committee. The 802.3 committee worked on physical layer standards as well as a subpart of the data link layer called Media Access Control (MAC). The IEEE assigned the other functions of the data link layer to the 802.2 committee, calling this part of the data link layer the Logical Link Control (LLC) sublayer. (The 802.2 standard applied to Ethernet as well as to other IEEE standard LANs such as Token Ring.)

Process and Thread Objects

The object-oriented structure of Windows facilitates the development of a general-purpose process facility. Windows makes use of two types of process-related objects: processes and threads. A process is an entity corresponding to a user job or application that owns resources, such as memory, and opens files. A thread is a dispatchable unit of work that executes sequentially and is interruptible, so that the processor can turn to another thread.

Each Windows process is represented by an object whose general structure is shown in Figure 4.13a. Each process is defined by a number of attributes and encapsulates a number of actions, or services, that it may perform. A process will perform a service when called upon through a set of published interface methods. When Windows creates a new process, it uses the object class, or type, defined for the Windows process as a template to generate a new object instance. At the time of creation, attribute values are assigned. Table 4.3 gives a brief definition of each of the object attributes for a process object.

A Windows process must contain at least one thread to execute. That thread may then create other threads. In a multiprocessor system, multiple threads from the same process may execute in parallel. Figure 4.13b depicts the object structure for a thread object, and Table 4.4 defines the thread object attributes. Note that some of the attributes of a thread resemble those of a process. In those cases, the thread attribute value is derived from the process attribute value. For example, the thread processor affinity is the set of processors in a multiprocessor system that may execute this thread; this set is equal to or a subset of the process processor affinity.

Note that one of the attributes of a thread object is context. This information enables threads to be suspended and resumed. Furthermore, it is possible to alter the behavior of a thread by altering its context when it is suspended.
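As a rough analog in Java (not the Windows API), the sketch below shows a process starting with one main thread that creates further dispatchable threads, which may run in parallel on a multiprocessor; the class and method names are ours:

```java
// The JVM process starts with one thread (running main), which then creates
// other threads. Each Thread is a separately dispatchable unit of work.
public class ThreadDemo {
    static int work(int n) { return n * n; }  // a trivial unit of work

    public static void main(String[] args) throws InterruptedException {
        int[] results = new int[4];
        Thread[] workers = new Thread[4];
        for (int i = 0; i < 4; i++) {
            final int id = i;
            // Create a new dispatchable thread within this process.
            workers[i] = new Thread(() -> results[id] = work(id));
            workers[i].start();
        }
        for (Thread t : workers) t.join();   // wait for all threads to finish
        System.out.println(java.util.Arrays.toString(results)); // [0, 1, 4, 9]
    }
}
```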


EXECUTION OF THE OPERATING SYSTEM

 The OS functions in the same way as ordinary computer software in the sense that the OS is a set of programs executed by the processor.

The OS frequently relinquishes control and depends on the processor to restore control to the OS.

If the OS is just a collection of programs and if it is executed by the processor just like any other program, is the OS a process? If so, how is it controlled? These interesting questions have inspired a number of design approaches.

Nonprocess Kernel:-
One traditional approach, common on many older operating systems, is to execute the kernel of the OS outside of any process. With this approach, when the currently running process is interrupted or issues a supervisor call, the mode context of this process is saved and control is passed to the kernel. The OS has its own region of memory to use and its own system stack for controlling procedure calls and returns. The OS can perform any desired functions and restore the context of the interrupted process, which causes execution to resume in the interrupted user process. Alternatively, the OS can complete the function of saving the environment of the process and proceed to schedule and dispatch another process. Whether this happens depends on the reason for the interruption and the circumstances at the time. In any case, the key point here is that the concept of process is considered to apply only to user programs. The operating system code is executed as a separate entity that operates in privileged mode.

Execution within User Processes :-
An alternative that is common with operating systems on smaller computers (PCs, workstations) is to execute virtually all OS software in the context of a user process.

The view is that the OS is primarily a collection of routines that the user calls to perform various functions, executed within the environment of the user’s process.

OS DESIGN CONSIDERATIONS FOR MULTIPROCESSOR AND MULTICORE

Symmetric Multiprocessor OS Considerations


In an SMP system, the kernel can execute on any processor, and typically each
processor does self-scheduling from the pool of available processes or threads.
The kernel can be constructed as multiple processes or multiple threads, allowing
portions of the kernel to execute in parallel. The SMP approach complicates the OS.
The OS designer must deal with the complexity due to sharing resources (like data
structures) and coordinating actions (like accessing devices) from multiple parts of
the OS executing at the same time. Techniques must be employed to resolve and
synchronize claims to resources.

An SMP operating system manages processor and other computer resources
so that the user may view the system in the same fashion as a multiprogramming
uniprocessor system. A user may construct applications that use multiple processes
or multiple threads within processes without regard to whether a single processor
or multiple processors will be available. Thus, a multiprocessor OS must provide all
the functionality of a multiprogramming system plus additional features to accommodate
multiple processors. The key design issues include the following:

Simultaneous concurrent processes or threads:=> Kernel routines need to be
reentrant to allow several processors to execute the same kernel code simultaneously.
With multiple processors executing the same or different parts of the
kernel, kernel tables and management structures must be managed properly
to avoid data corruption or invalid operations.

Scheduling:=> Any processor may perform scheduling, which complicates the
task of enforcing a scheduling policy and assuring that corruption of the scheduler
data structures is avoided. If kernel-level multithreading is used, then the
opportunity exists to schedule multiple threads from the same process simultaneously
on multiple processors.

Synchronization:=> With multiple active processes having potential access to
shared address spaces or shared I/O resources, care must be taken to provide
effective synchronization. Synchronization is a facility that enforces mutual
exclusion and event ordering.
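A minimal Java sketch of the mutual-exclusion facility described above (class and method names are ours): without the synchronized keyword, the two threads' increments could interleave and some updates would be lost.

```java
// Two threads increment a shared counter; 'synchronized' enforces mutual
// exclusion so that every increment is preserved.
public class SyncDemo {
    private int count = 0;

    synchronized void increment() { count++; }  // one thread at a time
    synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        SyncDemo c = new SyncDemo();
        Runnable task = () -> { for (int i = 0; i < 100000; i++) c.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get());  // always 200000 with synchronization
    }
}
```

Removing synchronized from increment() makes count++ a non-atomic read-modify-write, and the final total can come out below 200000.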

Memory management:=> Memory management on a multiprocessor must deal
with all of the issues found on uniprocessor computers and is discussed in Part
Three. In addition, the OS needs to exploit the available hardware parallelism
to achieve the best performance. The paging mechanisms on different processors
must be coordinated to enforce consistency when several processors
share a page or segment and to decide on page replacement. The reuse of
physical pages is the biggest problem of concern; that is, it must be guaranteed
that a physical page can no longer be accessed with its old contents before the
page is put to a new use.

Reliability and fault tolerance:=> The OS should provide graceful degradation
in the face of processor failure. The scheduler and other portions of the OS
must recognize the loss of a processor and restructure management tables
accordingly.

Because multiprocessor OS design issues generally involve extensions to
solutions to multiprogramming uniprocessor design problems, we do not treat
multiprocessor operating systems separately.

Multicore OS Considerations

The considerations for multicore systems include all the design issues discussed so
far in this section for SMP systems. But additional concerns arise. The issue is one
of the scale of the potential parallelism. Current multicore vendors offer systems
with up to eight cores on a single chip. With each succeeding processor technology
generation, the number of cores and the amount of shared and dedicated cache
memory increases, so that we are now entering the era of “many-core” systems.

The design challenge for a many-core system is to efficiently
harness the multicore processing power and intelligently manage the substantial
on-chip resources. A central concern is how to match the inherent parallelism
of a many-core system with the performance requirements of applications.
The potential for parallelism in fact exists at three levels in contemporary multicore
systems. First, there is hardware parallelism within each core processor, known as
instruction level parallelism, which may or may not be exploited by application programmers
and compilers. Second, there is the potential for multiprogramming and
multithreaded execution within each processor. Finally, there is the potential for
a single application to execute in concurrent processes or threads across multiple
cores. Without strong and effective OS support for the last two types of parallelism
just mentioned, hardware resources will not be efficiently used.

In essence, then, since the advent of multicore technology, OS designers have
been struggling with the problem of how best to extract parallelism from computing
workloads. A variety of approaches are being explored for next-generation operating
systems.

PARALLELISM WITHIN APPLICATIONS:=> Most applications can, in principle, be
subdivided into multiple tasks that can execute in parallel, with these tasks then
being implemented as multiple processes, perhaps each with multiple threads. The
difficulty is that the developer must decide how to split up the application work into
independently executable tasks. That is, the developer must decide what pieces can
or should be executed asynchronously or in parallel. It is primarily the compiler and
the programming language features that support the parallel programming design
process. But, the OS can support this design process, at minimum, by efficiently
allocating resources among parallel tasks as defined by the developer.

Perhaps the most effective initiative to support developers is implemented in
the latest release of the UNIX-based Mac OS X operating system. Mac OS X 10.6
includes a multicore support capability known as Grand Central Dispatch (GCD). GCD does not help the developer decide how to break up a task or application into
separate concurrent parts. But once a developer has identified something that can
be split off into a separate task, GCD makes it as easy and noninvasive as possible
to actually do so.

In essence, GCD is a thread pool mechanism, in which the OS maps tasks onto
threads representing an available degree of concurrency (plus threads for blocking
on I/O). Windows also has a thread pool mechanism (since 2000), and thread
pools have been heavily used in server applications for years. What is new in GCD
is the extension to programming languages to allow anonymous functions (called
blocks) as a way of specifying tasks. GCD is hence not a major evolutionary step.
Nevertheless, it is a new and valuable tool for exploiting the available parallelism of
a multicore system.
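The thread-pool-plus-blocks idea has a close analog in standard Java, sketched below (class and method names are ours, and this is plain java.util.concurrent, not GCD): tasks are written as lambdas, Java's rough counterpart of GCD blocks, and submitted to an ExecutorService that maps them onto a pool of worker threads.

```java
import java.util.*;
import java.util.concurrent.*;

// Tasks expressed as anonymous functions (lambdas) are handed to a thread
// pool, which decides how to map them onto worker threads.
public class PoolDemo {
    static List<Integer> squares(List<Integer> input) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int n : input) {
            // Each lambda is a small, self-contained unit of work.
            futures.add(pool.submit(() -> n * n));
        }
        List<Integer> out = new ArrayList<>();
        for (Future<Integer> f : futures) {
            try {
                out.add(f.get());      // collect results in submission order
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
        pool.shutdown();
        return out;
    }

    public static void main(String[] args) {
        System.out.println(squares(List.of(1, 2, 3, 4)));  // [1, 4, 9, 16]
    }
}
```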

One of Apple’s slogans for GCD is “islands of serialization in a sea of concurrency.”
That captures the practical reality of adding more concurrency to run-of-the-mill
desktop applications. Those islands are what isolate developers from the thorny
problems of simultaneous data access, deadlock, and other pitfalls of multithreading.
Developers are encouraged to identify functions of their applications that would be
better executed off the main thread, even if they are made up of several sequential or
otherwise partially interdependent tasks. GCD makes it easy to break off the entire
unit of work while maintaining the existing order and dependencies between subtasks.
In later chapters, we look at some of the details of GCD.

VIRTUAL MACHINE APPROACH:=> An alternative approach is to recognize that
with the ever-increasing number of cores on a chip, the attempt to multiprogram
individual cores to support multiple applications may be a misplaced use of
resources [JACK10]. If instead, we allow one or more cores to be dedicated to a
particular process and then leave the processor alone to devote its efforts to that
process, we avoid much of the overhead of task switching and scheduling decisions.
The multicore OS could then act as a hypervisor that makes a high-level decision
to allocate cores to applications but does little in the way of resource allocation
beyond that.

The reasoning behind this approach is as follows. In the early days of computing,
one program was run on a single processor. With multiprogramming,
each application is given the illusion that it is running on a dedicated processor.
Multiprogramming is based on the concept of a process, which is an abstraction of
an execution environment. To manage processes, the OS requires protected space,
free from user and program interference. For this purpose, the distinction between
kernel mode and user mode was developed. In effect, kernel mode and user mode
abstracted the processor into two processors. With all these virtual processors, however,
come struggles over who gets the attention of the real processor. The overhead
of switching between all these processors starts to grow to the point where responsiveness
suffers, especially when multiple cores are introduced. But with many-core
systems, we can consider dropping the distinction between kernel and user mode.
In this approach, the OS acts more like a hypervisor. The programs themselves take
on many of the duties of resource management. The OS assigns an application a
processor and some memory, and the program itself, using metadata generated by
the compiler, would best know how to use these resources.

INTERRUPTS

Virtually all computers provide a mechanism by which other modules (I/O, memory)
may interrupt the normal sequencing of the processor. Table 1.1 lists the most
common classes of interrupts.

Interrupts are provided primarily as a way to improve processor utilization.
For example, most I/O devices are much slower than the processor. Suppose that
the processor is transferring data to a printer using a simple programmed-I/O
instruction cycle. After each write operation, the processor must pause and remain idle
until the printer catches up. The length of this pause may be on the order of many
thousands or even millions of instruction cycles. Clearly, this is a very wasteful use
of the processor.

To give a specific example, consider a PC that operates at 1 GHz, which would
allow roughly 10^9 instructions per second. A typical hard disk has a rotational
speed of 7200 revolutions per minute, for a half-track rotation time of 4 ms, which is
4 million times slower than the processor.

The I/O portion of such a program consists of three sections:

• A sequence of instructions to prepare for the actual
I/O operation. This may include copying the data to be output into a special
buffer and preparing the parameters for a device command.

• The actual I/O command. Without the use of interrupts, once this command
is issued, the program must wait for the I/O device to perform the requested
function (or periodically check the status, or poll, the I/O device). The program
might wait by simply repeatedly performing a test operation to determine if
the I/O operation is done.

• A sequence of instructions to complete the operation.
This may include setting a flag indicating the success or failure of the operation.
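The contrast between busy-wait polling and interrupt-style completion can be sketched as a toy Java simulation (all names are ours; in real hardware the completion signal arrives asynchronously from the device, not from a direct method call):

```java
// Polling wastes processor cycles testing device status; the interrupt style
// registers a handler and lets the program continue until completion.
public class IoDemo {
    interface Handler { void onComplete(String result); }

    static int pollCount = 0;

    // Polling: repeatedly test device status, doing no useful work meanwhile.
    static String pollingRead(int busyCycles) {
        int remaining = busyCycles;
        while (remaining > 0) {   // busy-wait loop
            pollCount++;          // each iteration is a wasted status check
            remaining--;
        }
        return "data";
    }

    // Interrupt style: register a handler; it runs when the simulated device
    // signals completion (here, immediately and synchronously for simplicity).
    static void interruptRead(Handler h) {
        h.onComplete("data");
    }

    public static void main(String[] args) {
        System.out.println(pollingRead(1000) + " after " + pollCount + " polls");
        interruptRead(result -> System.out.println("handler got: " + result));
    }
}
```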


Classes of interrupts:

Program => Generated by some condition that occurs as a result of an instruction execution, such as arithmetic overflow, division by zero, attempt to execute an illegal machine instruction, or reference outside a user’s allowed memory space.

Timer => Generated by a timer within the processor. This allows the operating system to perform certain functions on a regular basis.

I/O => Generated by an I/O controller, to signal normal completion of an operation or to signal a variety of error conditions.

Hardware failure => Generated by a failure, such as a power failure or memory parity error.

EVOLUTION OF THE MICROPROCESSOR

The hardware revolution that brought about desktop and handheld computing was
the invention of the microprocessor, which contained a processor on a single chip.
Though originally much slower than multichip processors, microprocessors have
continually evolved to the point that they are now much faster for most computations
due to the physics involved in moving information around in sub-nanosecond
timeframes.


Not only have microprocessors become the fastest general purpose processors
available, they are now multiprocessors; each chip (called a socket) contains multiple
processors (called cores), each with multiple levels of large memory caches, and
multiple logical processors sharing the execution units of each core. As of 2010, it is
not unusual for even a laptop to have 2 or 4 cores, each with 2 hardware threads, for
a total of 4 or 8 logical processors.

Although processors provide very good performance for most forms of
computing, there is increasing demand for numerical computation. Graphical
Processing Units (GPUs) provide efficient computation on arrays of data using
Single-Instruction Multiple Data (SIMD) techniques pioneered in supercomputers.
GPUs are no longer used just for rendering advanced graphics, but they are
also used for general numerical processing, such as physics simulations for games
or computations on large spreadsheets. Simultaneously, the CPUs themselves are
gaining the capability of operating on arrays of data—with increasingly powerful
vector units integrated into the processor architecture of the x86 and AMD64
families.
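The array-at-a-time style of computation described above can be illustrated in plain Java with parallel streams (a sketch only, not real SIMD or GPU code; the class and method names are ours): the same operation is applied across a whole range of data, and the runtime may spread the work over multiple cores.

```java
import java.util.stream.IntStream;

// Apply one operation (squaring) uniformly across a range of data and
// reduce the results, letting the runtime parallelize the work.
public class SimdStyle {
    static long sumOfSquares(int n) {
        return IntStream.rangeClosed(1, n)
                        .parallel()                   // data-parallel execution
                        .mapToLong(i -> (long) i * i) // same op on every element
                        .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(1000));  // 333833500
    }
}
```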

Processors and GPUs are not the end of the computational story for the
modern PC. Digital Signal Processors (DSPs) are also present, for dealing with
streaming signals—such as audio or video. DSPs used to be embedded in I/O
devices, like modems, but they are now becoming first-class computational devices,
especially in handhelds. Other specialized computational devices (fixed function
units) co-exist with the CPU to support other standard computations, such as
encoding/decoding speech and video (codecs), or providing support for encryption
and security.

To satisfy the requirements of handheld devices, the classic microprocessor
is giving way to the System on a Chip (SoC), where not just the CPUs and caches
are on the same chip, but also many of the other components of the system, such as
DSPs, GPUs, I/O devices (such as radios and codecs), and main memory.

How objects are deleted automatically in Java (garbage collection)

class A {
    static int j = 0;

    A() {
        j++;
        System.out.println("object created " + j);
    }

    // Called by the garbage collector before the object is reclaimed.
    // Note: finalize() is not guaranteed to run promptly (and is deprecated
    // in modern Java); it is used here only to observe collection.
    protected void finalize() {
        j--;
        System.out.println("object deleted " + j);
    }

    public static void main(String[] args) {
        A[] a1 = new A[10];
        for (int i = 0; i < 10; i++) {
            a1[i] = new A();
        }
        // Drop all references so the objects become unreachable.
        for (int i = 0; i < 10; i++) {
            a1[i] = null;
        }
        // Request garbage collection; this is only a hint to the JVM.
        System.gc();
        try {
            // Give the collector a moment to run the finalizers.
            Thread.sleep(100);
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}