Thursday, October 28, 2010

Good links

http://www.joyofprogramming.com/

97_Things_Every_Software_Architect_Should_Know

http://97things.oreilly.com/wiki/index.php/97_Things_Every_Software_Architect_Should_Know_-_The_Book

About Flexible Array Members (FAM)

My latest column in LFY is about Flexible Array Members (FAM) - see http://www.lfymag.com/currentissue.asp?id=13 (the article is not available online yet).

For those who don't know what FAM is: FAM is a C99 feature where the last member of a struct can be an incomplete array type. We can allocate memory for such a structure dynamically, with additional storage at the end, so that the last member is treated as a flexible-length array. We remember the size of that allocated array in a member of the struct itself.

So far so good. An LFY reader - Basavaraj Dengi - asked me this interesting question: "when we can use a pointer and keep things simple, why use a flexible array member?" An interesting question, indeed!

It appears that using a FAM and using a pointer member are equivalent. For example:

struct varlength1 {
    int len;
    int *data;
};

struct varlength2 {
    int len;
    int data[];
};

Just think about it - what are the differences here? varlength1 has a pointer member *data, and we can allocate the struct itself statically; we then need to point data to some dynamically allocated memory. What if the struct itself is allocated dynamically? Then we logically do two allocations: one for the struct itself and another for the memory that the pointer member points to.
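To make that concrete, here is a minimal sketch of the pointer-member version, using the struct varlength1 above (the helper names make_varlength1/free_varlength1 are just mine for illustration): two calls to malloc() to build it, and two calls to free() to tear it down.

#include <stdlib.h>

/* Two allocations: one for the struct, one for the data it points to. */
struct varlength1 *make_varlength1(int len)
{
    struct varlength1 *v = malloc(sizeof *v);
    if (v == NULL)
        return NULL;
    v->data = malloc(len * sizeof *v->data);
    if (v->data == NULL) {
        free(v);
        return NULL;
    }
    v->len = len;
    return v;
}

/* ...and, symmetrically, two frees. */
void free_varlength1(struct varlength1 *v)
{
    free(v->data);
    free(v);
}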

However, with varlength2, we can allocate it statically like this:

struct varlength2 {
    int len;
    int data[];
} varray = { 3, { 1, 2, 3 } };

(This is a GCC extension - static initialisation of a flexible array member is not part of C99 - but it is very convenient.) Coming to dynamic allocation of varlength2, we do one dynamic allocation and ask for extra space for the data, so that the memory at the end of the structure is treated as part of the array. How about freeing the data member separately? That makes no sense here, since the array is part of the same block; so we do just one free, and that is for the struct.
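Again a minimal sketch, using the struct varlength2 above (make_varlength2 is just an illustrative name): one malloc() that asks for extra room after the struct, and a single free() for the whole block.

#include <stdlib.h>

/* One allocation: the struct plus room for 'len' ints right after it. */
struct varlength2 *make_varlength2(int len)
{
    struct varlength2 *v = malloc(sizeof *v + len * sizeof v->data[0]);
    if (v != NULL)
        v->len = len;
    return v;
}

/* Usage:
 *   struct varlength2 *v = make_varlength2(3);
 *   v->data[0] = 1; v->data[1] = 2; v->data[2] = 3;
 *   free(v);   // one free for struct and array together
 */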

As we can see, FAM is a convenience feature. Assume that we're writing a program that reads a file written in a simple file format (the bitmap format comes to mind, but since I don't remember that format off the top of my head, I am not sure it is a very good example for explaining the use of FAM). We first read the file header, which is typically of fixed length. The header gives information about the rest of the file which, let us assume, is a simple list of integers. For implementing this, FAM comes in handy: as we read the header, we can allocate the struct for that file format once and then use the FAM as if we had allocated it statically. Freeing that struct is also easy. With a pointer member it is a little more work but, I don't deny it, it will also work.
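Here is a rough sketch of that idea. The file format is entirely made up (a header that is just an integer count, followed by that many ints), and read_int_file is my own illustrative name; the point is only that the header and the variable-length payload end up in one block.

#include <stdio.h>
#include <stdlib.h>

struct int_file {
    int count;    /* fixed-length "header" */
    int data[];   /* variable-length payload */
};

struct int_file *read_int_file(FILE *fp)
{
    int count;
    if (fread(&count, sizeof count, 1, fp) != 1 || count < 0)
        return NULL;

    /* one allocation covers the header and the payload */
    struct int_file *f = malloc(sizeof *f + (size_t)count * sizeof f->data[0]);
    if (f == NULL)
        return NULL;

    f->count = count;
    if (fread(f->data, sizeof f->data[0], (size_t)count, fp) != (size_t)count) {
        free(f);
        return NULL;
    }
    return f;   /* caller does a single free(f) when done */
}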

Get me right: I am not saying that using FAM is always good and recommended. What I am saying is that FAM is a convenience feature and sometimes we find it handy to use. Yes, I can hear some of you still murmuring in disapproval of my explanation. I see the point: C programmers can live without FAM, just as in the K&R C days. But as I said, it is just a "convenience" feature. If you want, take it and use it; or you can live with plain old pointers and keep going: you make the choice!

const volatile

volatile long * const timer = (volatile long *) 0x0000ABCD;

It reads as follows: the timer is a const pointer to a long volatile variable. In plain English, it means that the timer is a variable that I will not change; it points to a value that can be changed without the knowledge of the compiler!

So in the above case, const is for "timer" but volatile is for "*timer" (the value pointed to by timer) - so the two qualifiers are not applied to the same variable.

But I came to know that we can also make the same variable both volatile and const. I think that is true as well, but I don't know an exact example.

Demystifying the ‘Volatile’ Keyword in C

http://www.linuxforu.com/teach-me/demystifying-the-volatile-keyword-in-c/

One of my favourite interview questions for novice programmers is: “What is the use of the ‘volatile’ keyword?” For experienced programmers, I ask: “Can we qualify a variable as both ‘const’ and ‘volatile’—if so, what is its meaning?” I bet most of you don’t know the answer, right?
The keyword ‘volatile’ is to do with compiler optimisation. Consider the following code:

long *timer = (long *) 0x0000ABCD;
// assume that at location 0x0000ABCD the current time is available
long start_time = *timer;
// remember the time at which we started waiting
long curr_time = start_time;
// wait in the while loop for 1 sec (i.e. 1000 millisec)
while( (curr_time - start_time) < 1000 )
{
    curr_time = *timer; // update current time
}
print_time(curr_time);
// this function prints the current time from the
// passed long variable
Usually, hardware has a timer that can be accessed from a memory location. Here, assume that it's 0x0000ABCD and is accessed using a long * variable 'timer' (in the UNIX tradition, time can be represented as a long variable and increments are done in milliseconds). The loop is meant to wait one second (or 1,000 milliseconds) by repeatedly updating curr_time with the new value from the timer, until curr_time is 1,000 milliseconds past start_time. After the one-second delay, the program prints the new time. Looks fine, right?

However, from the compiler point of view, what the loop does is stupid—it repeatedly assigns curr_time with *timer, which is equivalent to doing it once outside the loop. Also, the variable ‘timer’ is de-referenced repeatedly in the loop—when it is enough to do it once. So, to make the code more efficient (i.e., to optimise it), it may modify loop code as follows:

long temp = *timer;      // 'timer' de-referenced just once
long start_time = temp;
long curr_time = temp;
while( (curr_time - start_time) < 1000 ) // always 0 < 1000
{ /* do nothing here */
}
As you can see, the result of this transformation is disastrous: the loop will never terminate because neither is curr_time updated nor is the timer de-referenced repeatedly to get new (updated time) values.

What we need is a way to tell the compiler not to ‘play around’ with such variables by declaring them volatile, as in:

volatile long *timer = (volatile long *) 0x0000ABCD;
volatile long curr_time = *timer;
Now, the compiler will not do any optimisation on these variables. This, essentially, is the meaning of the ‘volatile’ keyword: It declares the variables as ‘asynchronous’ variables, i.e., variables that are ‘not-modified-sequentially’. Implicitly, all variables that are not declared volatile are ‘synchronous variables’.

How about qualifying a variable as both const and volatile? As we know, when we declare a variable as const, we mean it’s a ‘read-only’ variable—once we initialise it, we will not change it again, and will only read its value. Here is a modified version of the example:

long * const timer = (long *) 0x0000ABCD;
// rest of the code as it was before..
We will never change the address stored in timer, so we can make it a const pointer. Now, remember what we did to declare the timer as volatile:

volatile long *timer = (volatile long *) 0x0000ABCD;
We can now combine const and volatile together:

volatile long * const timer = (volatile long *) 0x0000ABCD;
It reads as follows: the timer is a const pointer to a long volatile variable. In plain English, it means that the timer is a variable that I will not change; it points to a value that can be changed without the knowledge of the compiler!

Monday, October 25, 2010

BSS

.bss

From Wikipedia, the free encyclopedia

In computer programming, the name .bss or bss is used by many compilers and linkers for a part of the data segment containing statically-allocated variables represented solely by zero-valued bits initially (i.e., when execution begins). It is often referred to as the "bss section" or "bss segment".

In C, statically-allocated variables without an explicit initializer are initialized to zero (for arithmetic types) or a null pointer (for pointer types). Implementations of C typically represent zero values and null pointer values using a bit pattern consisting solely of zero-valued bits (though this is not required by the C standard). Hence, the bss section typically includes all uninitialized variables declared at the file level (i.e., outside of any function) as well as uninitialized local variables declared with the static keyword. An implementation may also assign statically-allocated variables initialized with a value consisting solely of zero-valued bits to the bss section.
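As a small illustration (my own example, not from the Wikipedia article), the following variables would typically be placed in .bss by a C implementation; compiling this and running the binutils 'size' tool on the resulting binary usually shows them counted under the bss column.

#include <stdio.h>

int uninitialized_global[1024];   /* no initializer: typically ends up in .bss        */
int zero_global = 0;              /* all-zero initializer: an implementation may also
                                     place this in .bss                               */
int nonzero_global = 42;          /* non-zero initializer: goes in the data segment   */

void count_calls(void)
{
    static long calls;            /* uninitialized static local: typically .bss       */
    printf("called %ld time(s) before\n", calls++);
}

int main(void)
{
    count_calls();
    count_calls();
    return 0;
}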

Typically, the program loader initializes the memory allocated for the bss section when it loads the program. Operating systems may use a technique called zero-fill-on-demand to efficiently implement the bss segment (McKusick & Karels 1986). In embedded software, the bss segment is mapped into memory that is initialized to zero by the C run-time system before main() is entered.

Some application binary interfaces also support an sbss segment for "small data". Typically, these data items can be accessed by leaner code using instructions that can only access a certain range of addresses.

Historically, BSS (from Block Started by Symbol) was a pseudo-operation in UA-SAP (United Aircraft Symbolic Assembly Program), the assembler developed in the mid-1950s for the IBM 704 by Roy Nutt, Walter Ramshaw, and others at United Aircraft Corporation. The BSS keyword was later incorporated into FAP (FORTRAN Assembly Program), IBM's standard assembler for its 709 and 7090/94 computers. It defined a label (i.e. symbol) and reserved a block of uninitialized space for a given number of words (Timar 1996).

radix tree routing table

1. We should read the chapter on this in the book TCP/IP Illustrated, Vol. 2 - Chapter 18.
2.

Sunday, October 24, 2010

Where do two linked lists merge? (intersection point of singly / circular linked lists)

http://www.velocityreviews.com/forums/t587346-where-do-two-linked-lists-merge.html

Where do two linked lists merge?

(followups set to comp.programming)


The following question was raised in comp.lang.c: Is there any
efficient way of finding the intersection point of two singly
linked lists? That is, given two singly linked lists that meet
at some point, find the node where they meet.

It turns out that the question as stated conceals two
assumptions. The first is that the two lists terminate with a
common null link. If they do they will have a single merge
point; the two merging lists form a Y structure. However there
are two ways a linked list can terminate, either by a null link
or by a cycle.

When the two lists terminate in a cycle each list can join the cycle at different nodes. Each is a legitimate merge node, i.e., we can follow A until it reaches B's merge point or we can follow B until it reaches A's merge point. There is no single "the merge point".

We can remove the ambiguity by adopting some criterion for the
"best" merge point, e.g., the one that minimizes the combined
lengths to the merge point, although this fails if the two merge
points are on exact opposite sides of the cycle.

The second assumption is that the two lists have any nodes in
common, i.e., that they actually merge. Algorithms that fail if
the lists are terminated by cycles are unsafe for cycles.
Algorithms that fail if the lists have no nodes in common are
unsafe for failure to merge.

I. THE SIMPLE APPROACH FOR NULL TERMINATED LISTS

What we want to find is the first node the two lists have in common. Let A.m and B.m be the number of nodes respectively between the start of A and the merge node and between the start of B and the merge node. Let m be the difference between A.m and B.m. If we skip the first m nodes of the longer list then we can compare the two lists node by node until we get to a match, the merge node.

Thus we can find the merge point if we can find m. Since the two lists have the same nodes in common after the merge point we have

m = A.length - B.length.

The advantage of this approach is its simplicity. However it
does require knowing the list lengths. If this information is
not available in advance, the lists must be traversed in their
entirety.

The simple algorithm is unsafe for cycles (traversing the lists to find end points goes into an infinite loop) but it is safe for failure to merge (the comparison loop reaches the terminating null node).
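A minimal C sketch of algorithm I, assuming null-terminated lists and a simple node type (the struct and function names here are mine, not from the original post):

#include <stddef.h>

struct node {
    int value;
    struct node *next;
};

/* number of nodes in a null-terminated list */
static size_t list_length(struct node *head)
{
    size_t n = 0;
    for (struct node *p = head; p != NULL; p = p->next)
        n++;
    return n;
}

/* first node common to both lists, or NULL if they never merge */
struct node *merge_point(struct node *a, struct node *b)
{
    size_t la = list_length(a);
    size_t lb = list_length(b);

    /* skip the first |la - lb| nodes of the longer list */
    while (la > lb) { a = a->next; la--; }
    while (lb > la) { b = b->next; lb--; }

    /* advance in lock-step until the nodes match */
    while (a != b) {
        a = a->next;
        b = b->next;
    }
    return a;   /* NULL when the lists have no node in common */
}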

II. FINDING THE ENTRY POINT INTO A TERMINATING CYCLE

Finding a cycle in a linked list is a classic algorithm. Let A
be the list. Let p1 and p2 be two pointers into A. Initialize
each to point to the start of the list. In each iteration of a
loop advance p1 by one node and p2 by two nodes; escape from the
loop when p1 and p2 both point to the same node. Call the node
x; it is a node within the list cycle.

The next step is to find the length of the cycle. In each iteration of a second loop advance p1 by one node until it again points to x. The number of iterations is the length of the cycle. Let k be the length of the cycle.

Now initialize p1 and p2 to the start of the list. Advance p2 by k nodes. In a loop advance p1 and p2 by one node in each iteration; escape from the loop when p1 and p2 point to the same node. Call the node y. Then y is the entry point into the list cycle.

If it is not known whether the lists are terminated by a cycle or by a null node, we can add a test in the first loop to see if p2 detects a null node.
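A C sketch of this cycle-entry algorithm, reusing the same hypothetical node type as in the algorithm I sketch above (again, the names are mine):

#include <stddef.h>

struct node {
    int value;
    struct node *next;
};

/* entry node of the terminating cycle, or NULL if the list is null-terminated */
struct node *cycle_entry(struct node *head)
{
    struct node *p1 = head, *p2 = head;

    /* advance p1 by one node and p2 by two until they meet;
       if p2 runs into a null link there is no cycle */
    do {
        if (p2 == NULL || p2->next == NULL)
            return NULL;
        p1 = p1->next;
        p2 = p2->next->next;
    } while (p1 != p2);

    /* walk once around the cycle to find its length k */
    size_t k = 1;
    for (p1 = p1->next; p1 != p2; p1 = p1->next)
        k++;

    /* restart at the head, give p2 a head start of k nodes,
       then advance both by one node; they meet at the entry point */
    p1 = head;
    p2 = head;
    while (k-- > 0)
        p2 = p2->next;
    while (p1 != p2) {
        p1 = p1->next;
        p2 = p2->next;
    }
    return p1;
}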

III. AN ALGORITHM BASED ON CYCLE DETECTION

When there is a terminating cycle there are three possibilities: (a) A and B merge before they reach the cycle, (b) they merge at a common entry point, or (c) they enter at different points in the cycle. To distinguish these cases we need to find the entry points for A and B. If the entry points are the same, treat the common entry point as a list terminator and apply the simple algorithm in part I. If they differ we have case (c) - either entry point is a legitimate answer.

If the two lists are null terminated the cycle detection
algorithm will discover the fact; the basic algorithm of part I
can be used.

In case (c) a complete implementation checks that the two lists actually share a common cycle. Clearly they don't if the two cycle lengths are different. If the lengths are the same, commonality can be checked by verifying that the entry point of A can be reached from the entry point of B.

This algorithm is O(n) but it is cumbersome and relatively expensive. However, properly implemented, it is safe against cycles and against failure to merge.

IV. A METHOD THAT DOESN'T REQUIRE FINDING THE TERMINATOR

The basic algorithm doesn't actually require knowing the list
length; what it needs is the distance between two nodes that
match. Call that distance m. It turns out that m can be found
without searching to the ends of the lists.

To see how this works let us first suppose that we know a bound M for m, and that A is the longer list. We compare the first element of B with the first M elements of A. If we find a match we've found m (and the merge point). If no match is found we go to node M+1 in B and check it against nodes M+1 through 2*M in A; we keep advancing by M nodes until a match is found. A match will be found when the merge point is passed. The difference in positions of the match in A and B yields m.

Unfortunately we don't have a bound M for m, nor do we know which list is longer. However we can use M=1 in the first iteration, M=2 in the second iteration, and so on. We solve the problem of not knowing which list is longer by alternating which list is the lead list. With a bit of bookkeeping we get the following algorithm:

Traverse A by one step. Traverse B by three steps, checking to see if there is a match with the last element read from A. Traverse A by five more steps, checking for a match with the last element read from B. Continue, alternating lists and increasing the steps by two each time. When a match is found this way we know the difference of distances to the merge point, m. This gives us an O(m^2 + D) method where D is the common distance to the merge point.

If the lists are null terminated it is possible that one of the lists is exhausted before a match is found. In that case we revert to algorithm I. If the lists are terminated by a common cycle, algorithm IV will find one of the entry points. Algorithm IV will fail if the two lists are disjoint and are terminated by separate cycles.

FINAL REMARKS

This problem is more a puzzle than a serious programming problem. A 'correct' solution depends on the objective and upon context. If the lists are known to be null terminated, algorithm III is not needed. In principle algorithm IV is more efficient than algorithm I; however it is more complex and in practice simplicity may trump computational efficiency. If the lists are known to be cycle terminated, algorithm I will fail. Algorithm III is guaranteed to work but is cumbersome; algorithm IV will be considerably more efficient, but will fail if the lists are disjoint.

Considered as a puzzle, algorithm I is the 'correct' solution. It is simple and elegant.

Interview at Netapp

Q1. How to find the point of intersection of two lists, and how to find it when the lists are circular?
1. http://geeksforgeeks.org/?p=2405

Q2. What is the difference between:

char *p = "neeraj gupta";
char abcd[] = "neeraj gupta";

If these are part of a function, will they increase the size of the stack? (See the sketch after these questions.)

Q3. How does a Patricia tree work?
Q4. How do specific rules get higher priority than generic rules?
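For Q2, a rough sketch of the difference (my own example, not from the interview): the pointer occupies only a few bytes on the stack and points at a string literal in read-only storage, while the array is a writable copy that adds its full size to the stack frame.

#include <stdio.h>

void f(void)
{
    char *p = "neeraj gupta";      /* just a pointer on the stack; the literal lives
                                      elsewhere, and writing through p is undefined    */
    char abcd[] = "neeraj gupta";  /* a 13-byte modifiable copy on the stack
                                      (12 characters plus the terminating '\0')        */

    printf("%zu vs %zu\n", sizeof p, sizeof abcd);  /* e.g. 8 vs 13 on a 64-bit machine */

    abcd[0] = 'N';                 /* fine: the array copy can be modified              */
    /* p[0] = 'N';                    undefined behaviour: the literal may be read-only */
}

int main(void)
{
    f();
    return 0;
}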

Thursday, October 21, 2010

const and volatile

Can a variable be both const and volatile?
Yes. The const modifier means that this code cannot change the value of the variable, but that does not mean that the value cannot be changed by means outside this code. For instance, in the example in FAQ 8, the timer structure was accessed through a volatile const pointer. The function itself did not change the value of the timer, so it was declared const. However, the value was changed by hardware on the computer, so it was declared volatile. If a variable is both const and volatile, the two modifiers can appear in either order.

This is possible and mostly used in embedded systems. The example is an Interrupt Status Register. As it is a status register, the program should not modify this variable, so it should be a constant. But this variable can be changed by the processor or hardware based on the interrupt condition. So when, in the program, we want to read the value of this variable, it should read the actual value without any optimisation. For this reason, the variable should be declared as volatile too.
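A minimal sketch of such a register declaration, assuming a made-up memory-mapped address (0x4000A000) and a made-up bit layout:

#include <stdint.h>

/* 'volatile': the hardware can change the register behind the compiler's back,
 *             so every read must really go out to memory.
 * 'const'   : our code only reads the status register, it never writes it.
 * The two qualifiers can appear in either order. */
static const volatile uint32_t * const interrupt_status =
        (const volatile uint32_t *) 0x4000A000u;

int uart_interrupt_pending(void)
{
    return (*interrupt_status & 0x1u) != 0;   /* bit 0: hypothetical UART flag */
}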