http://www.6test.edu.cn/~lujx/linux_networking/index.html?page=0131777203_ch17lev1sec4.html
http://www.ecsl.cs.sunysb.edu/elibrary/linux/network/LinuxKernel.pdf : good link
http://e-university.wisdomjobs.com/linux/chapter-208-277/receiving-data-in-the-transport-layer-udp-and-tcp.html
http://hsnlab.tmit.bme.hu/twiki/pub/Targyak/Mar11Cikkek/Network_stack.pdf : very good
Monday, January 30, 2012
Friday, January 27, 2012
Tree Algo
Thursday, January 26, 2012
Merge Sort vs. Quick Sort: Overview
Merge Sort
Time complexity (average): O(n log n)
Time complexity (worst): O(n log n); average case = worst case, independent of the input order
Stable sort
Memory: O(n); additional memory space required

Quick Sort
Time complexity (average): O(n log n)
Time complexity (worst): O(n^2) (occurs, e.g., when the list is already sorted and the first element is chosen as pivot)
Not a stable sort; performance depends on the randomness of the list
Memory: O(log n) on average for the recursion stack, O(1) at best; little additional memory space required

When to use Merge Sort? When additional memory usage is not a problem and the list could be partially sorted.
When to use Quick Sort? When additional memory usage is a problem and the list is randomized.
[Source: Sorting Algorithm]
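To make the comparison concrete, here is a minimal top-down merge sort in C++ (my own sketch, not from the source above). The temporary buffer in merge() is the O(n) extra memory listed in the table, and the <= comparison is what makes the sort stable.

#include <vector>

// Merge the two sorted halves arr[l..m] and arr[m+1..r].
void merge(std::vector<int>& arr, int l, int m, int r) {
    std::vector<int> tmp;                 // the O(n) auxiliary buffer
    tmp.reserve(r - l + 1);
    int i = l, j = m + 1;
    while (i <= m && j <= r)              // take the smaller head element;
        tmp.push_back(arr[i] <= arr[j] ? arr[i++] : arr[j++]);  // <= keeps equal keys in order (stability)
    while (i <= m) tmp.push_back(arr[i++]);
    while (j <= r) tmp.push_back(arr[j++]);
    for (int k = 0; k < (int)tmp.size(); k++)
        arr[l + k] = tmp[k];              // copy the merged run back
}

void mergeSort(std::vector<int>& arr, int l, int r) {
    if (l >= r) return;                   // 0 or 1 element: already sorted
    int m = l + (r - l) / 2;
    mergeSort(arr, l, m);                 // sort left half
    mergeSort(arr, m + 1, r);             // sort right half
    merge(arr, l, m, r);                  // O(n log n) in every case
}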
Wednesday, January 25, 2012
Interview: McAfee
1. Is the TCP MSS option mandatory or optional?
2. How does traceroute work?
3. Implementation of a hash function
4. Insertion into a binary search tree: try to write the program yourself (a sketch follows after this list)
5. IPC mechanisms
6. Shared memory
7. How does the kernel pass information to a process?
8. Take a look at the Linux kernel code paths
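For question 4, a sketch of BST insertion (my own attempt, written from scratch, so treat it as one possible answer):

struct Node {
    int key;
    Node *left, *right;
    Node(int k) : key(k), left(nullptr), right(nullptr) {}
};

// Recursive insert: smaller keys go left, larger keys go right.
// Returns the (possibly new) root of the subtree; duplicates are ignored.
Node* insert(Node* root, int key) {
    if (root == nullptr)
        return new Node(key);             // empty spot found: attach here
    if (key < root->key)
        root->left = insert(root->left, key);
    else if (key > root->key)
        root->right = insert(root->right, key);
    return root;
}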
Juniper
1. Mirror image of a binary tree
2. How to write a state machine using function pointers (see the sketch after this list)
3. How to write word-aligned data
4. If data is on the stack, is there any padding?
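For question 2, a sketch of a table-driven state machine dispatched through function pointers (my own illustration; the states and events are made up):

#include <stdio.h>

/* Hypothetical states and events, for illustration only. */
typedef enum { IDLE, RUNNING, DONE, NUM_STATES } State;
typedef enum { EV_START, EV_STOP, NUM_EVENTS } Event;

typedef State (*Handler)(void);

static State onStart(void)     { printf("start -> RUNNING\n"); return RUNNING; }
static State onStop(void)      { printf("stop  -> DONE\n");    return DONE; }
static State stayIdle(void)    { return IDLE; }
static State stayRunning(void) { return RUNNING; }
static State stayDone(void)    { return DONE; }

/* Transition table: fsm[current_state][event] yields the handler to run. */
static Handler fsm[NUM_STATES][NUM_EVENTS] = {
    /* IDLE    */ { onStart,     stayIdle },
    /* RUNNING */ { stayRunning, onStop   },
    /* DONE    */ { stayDone,    stayDone },
};

int main(void) {
    State s = IDLE;
    Event script[] = { EV_START, EV_STOP };   /* a sample event stream */
    for (int i = 0; i < 2; i++)
        s = fsm[s][script[i]]();              /* dispatch via function pointer */
    return 0;
}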
Saturday, January 21, 2012
Patricia tree
Must read: http://books.google.co.in/books?id=ESM3CWY5xRYC&pg=PA562&lpg=PA562&dq=patricia+tree+routing&source=bl&ots=b4pXXGtACY&sig=TISdjQMrPkEPDjorJMQelA9zx7U&hl=en&sa=X&ei=PZAbT9lw0IisB4Pa4eQN&ved=0CDwQ6AEwBDgK#v=onepage&q=patricia%20tree%20routing&f=false
Patricia stands for "Practical algorithm to retrieve information coded in alphanumeric".
Trie: a tree for storing strings in which there is one node for every common prefix; the strings themselves are stored in extra leaf nodes.
Node: (1) a unit of reference in a data structure, also called a vertex in graphs and trees; (2) a collection of information that must be kept at a single memory location.
Child: an item of a tree referred to by a parent item. Every item, except the root, is the child of some parent.
A Patricia tree is related to a trie. The problem with tries is that when the set of keys is sparse, i.e. when the actual keys form a small subset of the set of potential keys (as is very often the case), many or most of the internal nodes in the trie have only one descendant. This gives the trie a high space complexity.
Patricia tree: a binary digital tree.
Specs:
n external nodes with key values
n - 1 internal nodes
---------
Tries were invented by E. Fredkin in 1960.
Patricia trees ("Practical algorithm to retrieve information coded in alphanumeric") were invented by D. R. Morrison in 1968.
The idea is to take a trie and get rid of any node that has only one child. Each remaining node is instead labeled with the character position number that would have given that node's depth in the original, uncompressed trie. This creates the problem that keys are no longer uniquely specified by the search path, so the key itself has to be stored in the appropriate leaf. The storage requirement is now kn pointers, where n is the number of keys and k is the size of the alphabet. This is often significantly less than the s(k + 1) pointers of the uncompressed trie, where s is its number of nodes, particularly if the keys are very long.
A Patricia tree alone is far from efficient.
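To make the node structure concrete, here is a simplified PATRICIA-style lookup in C++ (my own sketch; a real PATRICIA tree folds leaves into upward links and needs a more careful insert, which is omitted here):

#include <string>

// Internal nodes test one bit of the key; bit == -1 marks a leaf.
struct PatNode {
    int bit;                 // index of the bit tested at this node
    std::string key;         // key stored at a leaf
    PatNode *left, *right;   // child for bit value 0 / bit value 1
};

// Bit i of a key, most significant bit first; 0 past the end of the string.
static int bitAt(const std::string& k, int i) {
    return i < (int)k.size() * 8 ? (k[i / 8] >> (7 - i % 8)) & 1 : 0;
}

// Descend on the tested bits only (skipping one-child chains is exactly
// what makes the structure compact), then verify with one full comparison,
// since the search path alone no longer uniquely determines the key.
bool search(const PatNode* n, const std::string& key) {
    while (n && n->bit >= 0)
        n = bitAt(key, n->bit) ? n->right : n->left;
    return n && n->key == key;
}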
Monday, January 16, 2012
multicasting
http://www.h3c.com/portal/Products___Solutions/Technology/Security_and_VPN/Technology_Introduction/200701/195605_57_0.htm
Sunday, January 15, 2012
Atomic operation and spin lock
http://en.wikipedia.org/wiki/Linearizability
spin lock: http://www.csie.dyu.edu.tw/~swang/LDD/ch5_p2.pdf
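A minimal user-space spinlock sketch built on C++11 std::atomic_flag (my own example, not from the linked slides; a kernel spinlock additionally has to deal with preemption and interrupts):

#include <atomic>

class Spinlock {
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
public:
    void lock() {
        // Atomic test-and-set: spin until the previous value was clear,
        // i.e. until we are the thread that acquired the lock.
        while (flag.test_and_set(std::memory_order_acquire)) {
            /* busy-wait */
        }
    }
    void unlock() {
        flag.clear(std::memory_order_release);   // release: publish our writes
    }
};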
Friday, January 6, 2012
Wednesday, January 4, 2012
variable number of arguments
A variadic function first initializes a va_list with va_start(a_list, last_named_param), where the second argument is the last named parameter of the function. va_arg takes a va_list and a variable type, and returns the next argument in the list in the form of whatever type it is told; it then moves down the list to the next argument. For example, va_arg(a_list, double) will return the next argument, assuming it exists, in the form of a double, and the next time it is called it will return the argument following the last returned one, if one exists. Note that you need to know the type of each argument--that's part of why printf requires a format string! Once you're done, use va_end to clean up the list: va_end(a_list);
To show how each of the parts works, take an example function:
#include <stdio.h>
#include <stdarg.h>
/* this function will take the number of values to average
followed by all of the numbers to average */
double average ( int num, ... )
{
va_list arguments;
double sum = 0;
/* Initializing arguments to store all values after num */
va_start ( arguments, num );
/* Sum all the inputs; we still rely on the function caller to tell us how
* many there are */
for ( int x = 0; x < num; x++ )
{
sum += va_arg ( arguments, double );
}
va_end ( arguments ); // Cleans up the list
return sum / num;
}
int main()
{
/* this computes the average of 12.2, 22.3 and 4.5 (3 indicates the number of values to average) */
printf( "%f\n", average ( 3, 12.2, 22.3, 4.5 ) );
/* here it computes the average of the 5 values 3.3, 2.2, 1.1, 5.5 and 3.3 */
printf( "%f\n", average ( 5, 3.3, 2.2, 1.1, 5.5, 3.3 ) );
return 0;
}
Tuesday, January 3, 2012
prime number test
What is a prime number?
A prime number is a number that has exactly two distinct natural-number divisors: 1 and itself. The most naive approach to check whether a number is prime is to follow the definition. Algorithm to check the primality of a number n:
if the number is 1, return false;
otherwise, for all integers m from 2 to n - 1, check whether n is divisible by m; if it is, n is composite;
if no divisor was found, we can conclude that n is prime.
Note: 1 is not a prime number, because it does not satisfy the definition.
Improving the method
Can we make this simple approach better? Yes. First, notice that it is enough to check only divisors less than or equal to the square root of n. The proof follows.
Statement. If n has a divisor d (1 < d < n), then it has a divisor d0 (1 < d0 ≤ √n).
Proof. If n is divisible by its square root exactly, then n is a perfect square and not prime. Otherwise, assume the first divisor found is d1 with √n < d1 < n. But n is also divisible by d2 = n / d1, which is less than √n, so the assumption is false: whenever there is a divisor greater than √n, there is a "paired" divisor less than √n. This proves the statement.
One more improvement
One more improvement is worth mentioning. Suppose 2 is not a divisor, i.e. n is odd. If n is not divisible by 2 without remainder, then it is not divisible by any other even number either. After these two improvements the algorithm becomes:
if the number is 1, return false;
if the number is 2, return true;
if the number is even, return false;
otherwise, for all odd integers m from 3 to √n, check whether n is divisible by m; if it is, n is composite;
if no divisor was found, we can conclude that n is prime.
The generalization of this idea is an algorithm that checks prime divisors only.
C++ implementation
#include <cmath>

bool isPrime(int number) {
    if (number == 1)
        return false;
    if (number == 2)
        return true;
    if (number % 2 == 0)
        return false;
    int limit = (int)sqrt((double)number);  // compute the bound once
    for (int d = 3; d <= limit; d += 2)     // odd candidates only, per the algorithm above
        if (number % d == 0)
            return false;
    return true;
}
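A quick driver for the function above (my own sketch; assumes it is compiled together with isPrime):

#include <cstdio>

int main() {
    // print the primes up to 20; expected output: 2 3 5 7 11 13 17 19
    for (int n = 1; n <= 20; n++)
        if (isPrime(n))
            printf("%d ", n);
    printf("\n");
    return 0;
}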
sorting
http://www.sorting-algorithms.com
http://www.algolist.net/Algorithms/Sorting/Selection_sort
http://www.cs.oswego.edu/~mohammad/classes/csc241/samples/sort/Sort2-E.html
http://en.wikipedia.org/wiki/Sorting_algorithm
good quick sort:
http://www.algolist.net/Algorithms/Sorting/Quicksort
Algorithm
Quicksort uses the divide-and-conquer strategy. The recursion step is described below:
Choose a pivot value. We take the value of the middle element as the pivot, but it can be any value that is in the range of the sorted values, even one not present in the array.
Partition. Rearrange the elements so that all elements less than the pivot go to the left part of the array and all elements greater than the pivot go to the right part. Values equal to the pivot can stay in either part. Note that the array may be divided into non-equal parts.
Sort both parts. Apply the quicksort algorithm recursively to the left and the right parts.
Partition algorithm in detail
There are two indices, i and j. At the very beginning of the partition algorithm, i points to the first element of the array and j to the last one. The algorithm then moves i forward until an element greater than or equal to the pivot is found, and moves j backward until an element less than or equal to the pivot is found. If i ≤ j, the two elements are swapped, i steps to the next position (i + 1), and j to the previous one (j - 1). The algorithm stops when i becomes greater than j.
After partition, all values before the i-th element are less than or equal to the pivot, and all values after the j-th element are greater than or equal to the pivot.
Example. Sort {1, 12, 5, 26, 7, 14, 3, 7, 2} using quicksort.
void quickSort(int arr[], int left, int right) {
int i = left, j = right;
int tmp;
int pivot = arr[(left + right) / 2];
/* partition */
while (i <= j) {
while (arr[i] < pivot)
i++;
while (arr[j] > pivot)
j--;
if (i <= j) {
tmp = arr[i];
arr[i] = arr[j];
arr[j] = tmp;
i++;
j--;
}
}
/* recursion */
if (left < j)
quickSort(arr, left, j);
if (i < right)
quickSort(arr, i, right);
}
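A minimal driver for the example above (my own sketch; assumes it is compiled together with quickSort):

#include <stdio.h>

int main(void) {
    int a[] = {1, 12, 5, 26, 7, 14, 3, 7, 2};
    int n = sizeof(a) / sizeof(a[0]);
    quickSort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);   /* prints: 1 2 3 5 7 7 12 14 26 */
    printf("\n");
    return 0;
}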