Come 2008, and the market and economic gurus predict a lot of volatility in everything related to money and forex. Not only this, technology will be at its best to make our lives better (remember Philips saying: Let's make things better) and give us more options to communicate and collaborate on a wider scale. With all this, the concern and effort for a greener and safer earth will still be a "Work-In-Progress", and all of it for the good of humanity.
Not looking at the flip side though (you can say I'm being too optimistic about a great 2008).
Well, all said, these are some of the websites and blogs I see myself carrying with me into 2008: a great compilation of topics covering a lot of areas that I go through on a daily basis. Most of the links are just feeds, so you can add them directly to your reader (I use Google Reader).
A list of tech-blogs:
paulbridger.net - C++, patterns, design
Dr.Dobb's C++ Articles
DevX: Latest C++ Content
CodeGuru.com
Netotto Blog : Another Software Developer's Blog
Recording my programming path
Coding Misadventures
Monkey Bites
The ones on Network Security:
Dark Reading: Dark Reading News Analysis
SecGuru
SecurityFocus News
A list of blogs on Productivity, professional achievement and morale boosting things
Lifehacker: How To
Great Solutions to Team Challenges
Dumb Little Man - Tips for Life
Achieve IT!
A list of blogs on personal finance:
AllFinancialMatters
GetRichSlowly
I Will Teach You To Be Rich
Moneycontrol Pers Fin
Other than the above some weblinks that are interesting:
Koders
TBB
Shuva's Photo blog
Discover Magazine: Updates on Science
The Site for Books and Readers
With this and a lot more to come in 2008... I wish you all a very happy and green Year 2008!
Monday, December 31, 2007
Tuesday, December 18, 2007
How to use TBB parallel_for
Well, now that we've installed and configured TBB libs on our (Linux) machine, we can start playing with various parallelism constructs provided by TBB and use them to gain meaningful efficiency in our day-to-day problem solving.
Why stress on "meaningful"? We'll talk about it later in this column.
Here's working code that times the difference between a multiplication job done with a sequential for loop and with a TBB parallel_for. I've put in comments to explain the important statements in the code.
//These includes are the ones required to use the TBB library's parallel_for
#include "/tbb/tbb20_20070927oss_src/include/tbb/blocked_range.h"
#include "/tbb/tbb20_20070927oss_src/include/tbb/parallel_for.h"
#include "/tbb/tbb20_20070927oss_src/include/tbb/task_scheduler_init.h"
#include <iostream>
#include <cstdlib>
#include <sys/time.h>
#define MULTIPLIER 3.1456
//Just something I found at koders for timing
#define TIMERSUB(a, b, result) \
do { \
(result)->tv_sec = (a)->tv_sec - (b)->tv_sec; \
(result)->tv_usec = (a)->tv_usec - (b)->tv_usec; \
if ((result)->tv_usec < 0) { \
--(result)->tv_sec; \
(result)->tv_usec += 1000000; \
} \
} while (0)
using namespace tbb;
using namespace std;
typedef long long mytime_t;
// This structure contains an important part needed to be defined
// for TBB's parallel_for. We need to have UDT's with overloaded
// operators wrapping up the serial functionality that we want
// to break into parallel execution. In this case, I use a struct
// (could have used a class too) that encapsulates an array of
// input and output float numbers and has operator() defined to
// perform a serial for operation on input.
struct myNumbers {
float* ptr_input;
float* ptr_output;
//Just a constructor with an initialization list for use in the parallel_for call
myNumbers(float* input, float* output):ptr_input(input),ptr_output(output){}
// This is the actual body that's called in parallel_for by the TBB runtime.
// This code comes as a struct/class definition as the compiler
// expands and inlines this code as part of the template process.
// The TBB runtime takes the blocked_range and breaks up
// the for loop into parallel threads to fit the number of
// processors/cores. The no. of processors/cores and thus the no.
// of threads to break this operation into are calculated by the
// TBB runtime and thus the developer using the TBB library can
// just concentrate on the functionality w/o worrying about the
// parallelizing math involved.
void operator()(const blocked_range<int>& range) const {
for (int i = range.begin(); i != range.end(); i++)
ptr_output[i] = ptr_input[i] * MULTIPLIER;
}
};
int main(int argc, char* argv[]) {
// for timing execution
timeval t_start, t_end, t_result, tbb_start, tbb_end, tbb_result;
mytime_t singlethread_time, tbb_time;
int i = 0;
float* ptr_input;
float* ptr_outputSingle;
float* ptr_outputTBB;
// Initialize the TBB runtime...
task_scheduler_init init;
if( argc != 2 ) {
cout<<"Usage: "<<< " \n";
return 1;
}
int numElements = atoi(argv[1]);
if( numElements <= 1 ) {
cout<<"Array size "<<<" reqd an integer > 1\n";
return 1;
}
ptr_input = new float[numElements];
ptr_outputSingle = new float[numElements];
ptr_outputTBB = new float[numElements];
for(i = 0; i < numElements; i++) {
ptr_input[i] = i;
ptr_outputSingle[i] = 0;
ptr_outputTBB[i] = 0;
}
//Time the execution using plain sequential for
gettimeofday(&t_start,NULL);
for( i=0; i < numElements; i++ ) {
ptr_outputSingle[i] = ptr_input[i] * MULTIPLIER;
}
gettimeofday(&t_end,NULL);
TIMERSUB(&t_end,&t_start,&t_result);
singlethread_time = ((mytime_t)t_result.tv_sec) * 1000000 + t_result.tv_usec; //elapsed microseconds
//Time the execution using TBB parallel_for
gettimeofday(&tbb_start,NULL);
parallel_for(blocked_range<int>(0,numElements),
myNumbers(ptr_input,ptr_outputTBB), auto_partitioner());
gettimeofday(&tbb_end,NULL);
TIMERSUB(&tbb_end,&tbb_start,&tbb_result);
tbb_time = ((mytime_t)tbb_result.tv_sec) * 1000000 + tbb_result.tv_usec; //elapsed microseconds
//Verify that the outputs match
for(i=0; i < numElements; i++) {
if( ptr_outputSingle[i] != ptr_outputTBB[i] ) {
cout << ptr_input[i] << " * " << MULTIPLIER <<" = " <<
ptr_outputSingle[i] << " AND " << ptr_outputTBB[i] << endl;
}
}
cout << "Sequential for execution time: " << singlethread_time << " units"<< endl;
cout << "TBB parallel_for execution time: " << tbb_time << " units" << endl;
delete[] ptr_input;
delete[] ptr_outputSingle;
delete[] ptr_outputTBB;
return 0;
}
I used a Makefile for this, but we can also build it directly (assuming the file is named simple_for.cpp):
g++ -O2 -DNDEBUG -o ./simple_for simple_for.cpp -ltbb
Now, let's talk about "meaningful" efficiency gains:
I ran this code on my Linux machine with numElements set to 10, 100 and 1000, and found some performance improvement when using TBB parallel_for (assuming the execution timings are reported correctly). But when I ran it for sizes beyond 10000, I found that the serial for did it in less time. No, no, no, buddy... don't even think that parallel_for has a constraint here in terms of breaking things up into smaller chunks of serial for loops. Intel (and others who back parallelism) specifically take Amdahl's Law and Gustafson's Law into account while proposing TBB to developers, so there is a level of optimization provided in TBB (based on practical loads and the processor configuration one is using). In this case I could overcome the 10000-element limit by providing a "grainsize" to the blocked_range() constructor:
parallel_for(blocked_range<int>(0,numElements,10),myNumbers(ptr_input,ptr_outputTBB));
Here, the third argument to blocked_range() is the grainsize, and I saw further performance improvements for larger iteration counts as I kept reducing it (finally to 10) from an initial grainsize of 1000. Also notice that I don't pass the auto_partitioner argument to parallel_for when I specify a grainsize in the blocked_range constructor. Using a partitioner to decide the range of parallel chunks is one of the newer features in TBB: with auto_partitioner, the TBB runtime chooses a chunk size automatically, optimized for parallelizing iterations on the underlying processing subsystem.
Refer to the TBB Getting Started doc for more details on how to select the right grainsize for your iterations and for partitioner details.
In short: grainsize specifies the number of iterations in a "reasonable size" chunk to feed a processor. If the iteration space has more than grainsize iterations, parallel_for splits it into separate subranges that are scheduled separately.
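To put the two variants side by side, here's a minimal sketch (reusing the myNumbers body and the pointers from the program above) of the auto_partitioner call versus the explicit-grainsize call:
// Variant 1: let the TBB runtime pick the chunk size
parallel_for(blocked_range<int>(0, numElements),
             myNumbers(ptr_input, ptr_outputTBB), auto_partitioner());
// Variant 2: hand-tune the chunk size yourself (grainsize = 10 here)
parallel_for(blocked_range<int>(0, numElements, 10),
             myNumbers(ptr_input, ptr_outputTBB));
Which one wins depends on the loop body and the machine, so time both before settling on a grainsize.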
Yo! So we had a great start, with TBB parallel_for demonstrating its might in multicore/multi-processor environments. There's a lot more to the parallel algorithms in the TBB library. Not only that, there's a whole bunch of customized STL-like constructs that work in tandem with multi-threaded code without the developer worrying about maintaining the threading infrastructure. Let me explore some more of these features next week when I come back after the Christmas vacation. Till then, happy parallelizing!
Thursday, December 13, 2007
Fun with Intel TBB!
Phew! With Linux there's always some amount of configuration/tweaking required before you can build source or make use of a new library... and I must tell you, I love this challenge! The best example is when you want a GNU app or framework to help you do something more with your Linux box. Most of the time we download source code from free-software sites like SourceForge to get started, and then begins the process of configuring, making and installing it. That's not all; sometimes you have to go a step further and comment out or fix some simple errors (like casts) in the C files before you can successfully build the app and get the required binaries.
Here I want to capture an experience I had installing TBB on my Linux box (running SLES 10 with a 2.6.16.21 kernel and gcc/g++ version 4.1.0).
Let's go step-by-step from here:
1) copy the following tar.gz files to some folder like /tbb/
tbb20_20070927oss_src.tar.gz
tbb20_20070927oss_lin.tar.gz
2) Now, extract both archives in place using tar -zxvf on each file
3) This will give you two folders:
tbb20_20070927oss_src
tbb20_20070927oss_lin
4) From tbb20_20070927oss_lin copy the folder ia32 to the tbb20_20070927oss_src directory (mine is a 32-bit platform on an Intel box)
5) If you're lucky enough you'll get libtbb.so and friends for your kernel+glibc version in one of the four folders inside tbb20_20070927oss_src/ia32
6) If not, we need to build the libtbb.so (the crux of everything) for your platform, so "cd /tbb/tbb20_20070927oss_src/src/tbb/"
7) Run "make" here and see if your luck strikes, to get a libtbb.so w/o errors.
8) If not, then try either of these things or both:
(a) If you see a make Error for task.cpp then you may be asked to fix this:
/src/tbb/task.cpp:396: warning: deprecated conversion from string constant
I know you can do this, so I won't fix it for you here ;)
(b) If it still doesn't work, then figure out what else is preventing a successful make of libtbb.so and try to resolve it.
Lastly, you can try using the libtbb.so from any of the ia32 folders like: tbb/tbb20_20070927oss_src/ia32/cc4.1.0_libc2.4_kernel2.6.16.21/lib
9) Once you have the right versions of libtbb.so and libtbbmalloc.so for your platform, create soft links to them in /usr/lib/
10) Now, we're ready to build one of the sample codes supplied with the TBB source.
Go to the sample code folder "cd tbb/tbb20_20070927oss_src/examples/parallel_for/seismic" and do a make here.
11) Again, things are not that straightforward, buddy!
You need to either add to the Makefile the include path for the files included by Seismic.cpp (like /tbb/tbb20_20070927oss_src/include/tbb/parallel_for.h) or edit the .cpp file to use absolute paths to these .h files.
12) After fixing all these make dependencies, you'll be able to build the binary and see it running on your Linux machine, with figures telling you the number of frames per second achieved with parallelism.
Now that we have the machine running this example successfully, why not try our own parallel_for? It seems a good starting point to go parallel the Intel way!
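Before that, a quick way to confirm that libtbb.so links and the task scheduler actually starts is a tiny program like this (a rough sketch; the header path assumes the directory layout used in the steps above):
#include "/tbb/tbb20_20070927oss_src/include/tbb/task_scheduler_init.h"
#include <iostream>

int main() {
    tbb::task_scheduler_init init;   // starts the TBB task scheduler with the default number of threads
    std::cout << "TBB scheduler active: " << (init.is_active() ? "yes" : "no") << std::endl;
    return 0;
}
Build it with something like g++ hello_tbb.cpp -ltbb -o hello_tbb; if it prints "yes", the library and the soft links are in place.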
Coming up next -> How to use TBB parallel_for
Tuesday, October 30, 2007
Network Security: Beware! (Part 1)
This is the first in a series that highlights various aspects of network security that go unattended or still need more attention!
I was going through some articles on network security as part of my daily security dose. As a developer working on products that use the word "Security" day in, day out... I feel that we sometimes overlook the importance of this word in our security products' implementation.
There's this book, Stealing the Network: How to Own the Box, that reveals (in a fictional storytelling way, and I like it) different ways of gaining unauthorized access to secure networks and computer systems. Another read, about writing secure code, addresses why programmers write insecure code. It also highlights common and well-understood exploits/issues of the past (due to C being unsafe in certain areas) that still go unchecked in every second release.
Not all the ownership lies with the developer, given that strategic enhancements/design/architecture are still the work of a few evangelists in the field; but the hacks caused by bugs in the code are far greater in number, and for big names in the security products industry there seems to be a direct proportion between brand value and the number of exploits in their products.
Avenues of attack:
As far as avenues of attack within the secure network are concerned, printers nowadays are not far behind. Like our ever-more-powerful PCs, printers (read: multi-function devices with copy-fax-print-send-fax-as-email) too are becoming intelligent agents in the enterprise. Though they started out as passive devices like the old character/line/chain printers, printers have now evolved into actively managed network agents that run quite capable OS kernels supporting a full network stack, and that understand the enterprise network hierarchy well enough to allow identity-managed print jobs, authorised administration using print servers and so on.
There is still a tendency to treat them as passive utility devices, which explains the negligible network security on a printer. Since the printer can communicate with the rest of the network, it can serve as a platform for attack (as a network proxy, if nothing else).
As Bruce Schneier mentions in his foreword to Building Secure Software: How to Avoid Security Problems the Right Way:
"We wouldn't have to spend so much time, money, and effort on network security if we didn't have such bad software security. Think about the most recent security vulnerability about which you've read. Maybe it's a killer packet that allows an attacker to crash some server by sending it a particular packet. Maybe it's one of the gazillions of buffer overflows that allow an attacker to take control of a computer by sending it a particular malformed message. Maybe it's an encryption vulnerability that allows an attacker to read an encrypted message or to fool an authentication system. These are all software issues."
At the 2007 RSA Conference, Bruce made another point: human beings haven't evolved for security in the modern world, and particularly not for the IT security world. There is a gap between the reality of security and the emotional feel of security, due to the way our brains have evolved, and this leads to people making bad choices.
A lot of this state of network security can be attributed to software bugs: the way we've been writing OS, network-application and middle-tier code.
The language used to develop most OSes (not only Windows and UNIX/Linux but also the OSes that run on routers, printers, ATMs etc.) has been C/C++, which has limitations in terms of memory leaks (no garbage collection), compiled range checks etc. (although the language has gained a lot of evolved features by now). For example, the buffer overflow attack results (mostly) from the missing compiled range checks in C/C++.
With range-check capability, a buffer (array) overflow can be caught at runtime.
The Java compiler, for instance, inserts code that checks that every access to a buffer (array) stays within the bounds of the allocated memory; if an access goes past the end of the buffer, a runtime error occurs. It's better to error out, exit, or raise an exception in the software than to let an intruder gain access to the system.
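The same kind of runtime check is available in C++ too, if you use checked containers instead of raw arrays; a minimal sketch:
#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<char> buf(80);            // the "array", with a known size
    try {
        buf.at(200) = 'x';                // out-of-bounds write is detected by at()...
    } catch (const std::out_of_range& e) {
        std::cerr << "blocked: " << e.what() << std::endl;  // ...and raises an exception instead of corrupting memory
    }
    return 0;
}
Contrast that with the raw-array pattern below: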
char mySockBuffer[SOME_DEFINED_SIZE];   // say, 80 bytes
while (!read_my_socket(mySockBuffer)) { // reads from the socket with no length limit
do_something();
}
The code reads a stream of text from an input connected to a TCP/IP socket. A stream of text longer than the buffer (80 characters here) will overwrite the data stored after mySockBuffer. A text stream of just the right size may even overwrite the return address of a function (possibly the function that called the current one), allowing the attacker to insert and execute his own code.
This can result in a potential buffer-overflow exploit and can even let an attacker write over the legitimate instructions to get malicious code executed on the machine.
A typical example is code where the developer uses a 256-character array to hold a login username for a web-based back-end system.
A hacker sends 300 characters carrying code that will be executed by the server, and voila, he has broken in (a sketch of the unsafe copy and a length-checked fix follows the list below). Hackers can find these bugs in many ways.
1) The source code for a lot of services (GNU-based ones are popular) is available on the net. Hackers routinely look through this code searching for programs that have buffer overflow problems.
2) Hackers may look at the programs themselves to see if such a problem exists, though reading assembly output is really difficult.
3) Hackers will examine every place the program takes input and try to overflow it with random data. If the program crashes, there is a good chance that carefully constructed input will allow the hacker to break in.
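Going back to the 256-character username example, here's a rough sketch (hypothetical function and buffer names, not taken from any real product) of the unsafe copy versus a length-checked one:
#include <cstring>

// Unsafe pattern: char username[256]; strcpy(username, input);  -- 300 bytes walk right past the end.
// Checked pattern: verify the length before copying anything.
int store_username(const char* input, char* out, std::size_t out_size) {
    if (input == NULL || std::strlen(input) >= out_size)
        return -1;                      // reject oversized (or missing) input instead of overflowing
    std::strcpy(out, input);            // safe now: the length was checked above
    return 0;
}
The point isn't this particular helper; it's that every copy from untrusted input needs a bound the attacker cannot control.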
An intruder will also try unexpected combinations of input on web-based forms to break in. Most apps today use multiple layers of code, with API calls to the underlying operating system as the bottom-most layer. An intruder can send input that is interpreted as a string by one layer but taken as a meaningful command by another. For example, Perl is widely used for processing user input on web-based systems, and it usually sends this input on to other programs for further evaluation. A common hacking technique is to enter something like "| mail < /etc/passwd". This gets executed because Perl asks the operating system to launch an additional program with that input; the operating system interprets the pipe '|' character and launches the 'mail' program as well, which causes the password file to be emailed to the intruder (although it's not as easy as it looks). I've seen instances where intruders have tried breaking in by this route, where they can execute their malicious code as a legitimate piece of response to an event on a system.
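The same lesson applies whenever user input ends up on a command line from C/C++ code; a rough sketch (hypothetical helper names, and a deliberately strict whitelist) of validating a field before it ever reaches the shell:
#include <cctype>
#include <cstdio>
#include <string>

// Allow only characters that can never act as shell metacharacters.
static bool looks_safe(const std::string& s) {
    if (s.empty()) return false;
    for (std::string::size_type i = 0; i < s.size(); ++i) {
        unsigned char c = s[i];
        if (!std::isalnum(c) && c != '_' && c != '-' && c != '.')
            return false;               // '|', ';', '<', '&' etc. are rejected here
    }
    return true;
}

bool lookup_user(const std::string& username) {
    if (!looks_safe(username))
        return false;                   // never hand a raw pipe character to the shell
    std::string cmd = "finger " + username;
    FILE* p = popen(cmd.c_str(), "r");  // better still: avoid the shell entirely (fork/exec with an argument list)
    if (p == NULL) return false;
    pclose(p);
    return true;
}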
Exceptions and unhandled input: Most programs are written to handle valid input with known preconditions and evaluations. A developer won't always consider what happens when someone enters input that doesn't match the specification, and this can become a security hole that an intruder exploits to trigger unexpected program behaviour in unhandled-input scenarios.
There are other scenarios and known exploits around network-security apps, OS and network configuration. There are design flaws in network protocols and loopholes in the way OS security systems work. Just think about TCP/IP: it was designed at a time when widespread intrusion and hacking weren't much of a concern, leaving enough openings for intruders, such as smurf attacks, ICMP Unreachable disconnects, IP spoofing, and SYN floods. The biggest problem is that the IP protocol itself is very "trusting": anyone can forge and change IP data with impunity. IPsec (IP security) has been designed to overcome many of these flaws, but it is not yet widely used.
What next: I'll be compiling some programming tips, dos and don'ts etc. on network security and putting them all in the next post in this series. Bugs, too, will make it into the next one.
For more similar reads refer:
http://www.linuxsecurity.com/resource_files/intrusion_detection/network-intrusion-detection.html
http://www.embedded.com/design/202300629?pgno=1
Tuesday, October 23, 2007
Useful RPM options
rpm: search for this in Wikipedia and you'll get some great info about revolutions per minute. But we all know this tool and have been using it for quite some time now on Linux distros...
Here's what www.rpm.org has to say about RPM:
The RPM Package Manager (RPM) is a powerful command line driven package management system capable of installing, uninstalling, verifying, querying, and updating computer software packages. Each software package consists of an archive of files along with information about the package like its version, a description, and the like. There is also a library API, permitting advanced developers to manage such transactions from programming languages such as C or Python.
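That last sentence about a library API is worth a detour for the programmers. Here's a rough sketch of listing every installed package from C/C++ (an assumption-laden outline: calls like rpmReadConfigFiles(), rpmtsCreate(), rpmtsInitIterator() and headerGetEntry() exist in rpm 4.x-era rpm-devel headers, but exact signatures vary between rpm versions, so check yours):
#include <cstdio>
#include <rpm/rpmlib.h>
#include <rpm/rpmts.h>
#include <rpm/rpmdb.h>

int main() {
    rpmReadConfigFiles(NULL, NULL);              // load the default rpm configuration/macros
    rpmts ts = rpmtsCreate();                    // transaction set handle for database access
    rpmdbMatchIterator mi = rpmtsInitIterator(ts, RPMDBI_PACKAGES, NULL, 0); // walk every installed package
    Header h;
    while ((h = rpmdbNextIterator(mi)) != NULL) {
        const char *name = NULL, *version = NULL;
        headerGetEntry(h, RPMTAG_NAME, NULL, (void**)&name, NULL);
        headerGetEntry(h, RPMTAG_VERSION, NULL, (void**)&version, NULL);
        printf("%s-%s\n", name ? name : "?", version ? version : "?");
    }
    rpmdbFreeIterator(mi);
    rpmtsFree(ts);
    return 0;
}
Build with something like g++ list_pkgs.cpp -lrpm (with the rpm-devel package installed). For everyone else, the command line covers all of this and more.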
With that bit of knowledge about RPM let me assure you that even a non-programmer would be doing a lot on his Linux machine that needs him to know basic rpm usage.
Like: 1) rpm -ivh for installing a new RPM pkg
2) rpm -Uvh for upgrading an existing rpm pkg
3) rpm -e for un-installing an rpm
4) rpm -Uvh --test to test a package and see how it would install (without installing; also checks dependencies)
Now, there's a hell of a lot of information on how to do things with RPM at:
Maximum RPM
Just so that I can remember some useful but difficult-to-find-when-you-need'em options, I've tried to put up a collection here:
RPM Querying: Get Info from installed Packages
To see a list of all installed packages: rpm -qa | less
Don't know what a specific installed package does? Hey, tell me about yourself:
rpm -qi
To know what files were installed by a specific installed package: rpm -ql
A similar thing is also achievable by: rpm -q --filesbypkg
To know the config files in an installed package: rpm -qc
There's a file on my comp /usr/lib/foo-lib. To find out which installed package it belongs to: rpm -qf /usr/lib/foo-lib
To find out which package installed the above file AND to get information on that package and see all the other files it installed: rpm -qilf /usr/lib/foo-lib
The scripts in a package: rpm -q --scripts
Services that this package provides: rpm -q --provides
Services that this package requires: rpm -q --requires
For pkg that has NOT been installed yet, we can query for similar information by adding the -p option to the commands listed above.
Get information about an RPM pkg (not yet installed) and the files it would install: rpm -qilp
Verify options:
To verify a package (with lots of verbose output): rpm -Vvv
To verify the cryptographic signature of a yet-to-be-installed package:
rpm -K
And to test the integrity of a yet-to-be-installed pkg: rpm -K --nopgp package.rpm
To verify ALL installed packages on the comp: rpm -Va
Important tip: The above command is also useful in the following scenarios:
1) You deleted some files accidentally but don't know what they were. The above rpm command can show you the files that are now missing compared with its database.
2) Think you've been hacked? It checks for files that have been modified or removed in any way, for any installed RPM package.
To extract an individual file from an rpm package without installing the rpm:
1. Use rpm2cpio or rpm -qpl to list files and full paths in the package:
rpm2cpio package | cpio -t
Now, use the full path name of a file listed above to extract it in step 2.
2. Use rpm2cpio to extract a file. Run this command from your home directory or /tmp in order to avoid overwriting any current system files.
rpm2cpio package | cpio -iv --make-directories
This creates the full path in the current directory and extracts the file you specified.
3. If you just want to convert it to a cpio archive, use
rpm2cpio package > cpio-archive-file
To extract all the files from an RPM package:
rpm2cpio package | cpio -i --make-directories
Although there are a lot of other useful options, I'll keep just these most useful ones on this page for now.
Some important links for rpm options:
http://dave.thehorners.com/content/view/111/65/
http://susefaq.sourceforge.net/articles/rpm.html
I would be interested to know some scenarios where we use a complex set of options to determine more practical information on rpm installs.
Also, one good link (a placeholder for me) about uninstalling a program compiled from GNU source:
http://blog.netotto.com/index.php?entry=entry071020-232245
Tuesday, October 16, 2007
Tracking down those hidden startup processes in Windows
Is your Windows machine being dragged down by unnecessary startup apps eating up resources? Are you compromising your privacy by not knowing what spyware is tracking your actions on the web? Need an effective tool for tracking those DLLs, services and applications that automatically load at system startup?
Here's the tool:
Sysinternals' Autoruns, a free troubleshooting application now provided by Microsoft!
This tool offers an easy-to-use yet powerful GUI that tracks spyware and processes... and moreover the DLLs, services, applications and other critical stuff that load behind the scenes on Windows systems.
With this tool, keeping a check on your startup apps, removing unwanted DLL loads/services/freeware/trial-expired software or adding service entries is a walk in the park. It offers checkboxes to enable/disable each and every item, along with the publisher of the software, service or DLL and the location of the binary/library being tracked.
Some evaluated Pros & Cons from Techrepublic:
+ve
* price (it’s free!)
* Simple installation
* Administration is easy, thanks to a straightforward GUI
* Thorough tracking of installed and active processes
Less than helpful in scenarios like:
* Some malware applications may not register within active processes, rendering Autoruns less than helpful when combating particularly problematic infections
* Deleting processes won’t remove all remnants of many unwanted programs from the hard disk
* Infections that infest multiple user accounts may need to be removed as many times as there are user accounts
A great tool?
With this great tool for tracking down things running on your machine behind your back... offered free by MS... it becomes a good addition to the must-have list for a sysadmin or a helpdesk engineer (and for the normal Windows freak, too).
With thorough coverage and dependable performance, this free utility is the right tool for almost any malware troubleshooting routine (it can also be used to easily tweak system performance).
Download and assess yourself: http://download.sysinternals.com/Files/Autoruns.zip
Usage experiences? Points of view?
Friday, September 28, 2007
Purpose
This is my first tech blog. I'll post tips/techniques/articles/thoughts on varied aspects of technology that I use and learn on a day-to-day basis.
I won't have a disclaimer or a copyright on any of my collections/posts on this blog... for one, I'm a share-and-grow-knowledge believer, and for another, a GNU/free software and services user and promoter.