Someone posted a link to this paper (http://www.cse.ucsd.edu/~hovav/papers/s07.html) on Full Disclosure the other day. I had not seen it before. It discusses ret-2-libc attacks without using functions; instead the authors use what they call 'gadgets', which in plain technical terms means finding unintended code sequences in executable pages of memory that can be strung together to execute arbitrary code. The authors present it as a way to defeat W^X protections.
From the paper:
Gadgets perform well defined operations, such as a load, an xor, or a jump. Return-oriented programming consists in putting gadgets together that will perform the desired operations.
...
These gadgets can be found in byte streams from libc within a process' memory. They are not injected due to W^X constraints on most platforms. ... Each of our gadgets expects to be entered in the same way: the processor executes a ret with the stack pointer, %esp, pointing to the bottom word of the gadget. This means that, in an exploit, the first gadget should be placed so that its bottom word overwrites some function's saved return address on the stack.
The technique is an interesting one. It reminds me of certain ret-2-text techniques that jump into the middle of a long instruction to produce a jmp %reg trampoline. Overall the technique will vary from platform to platform, because libc may be compiled differently from, say, Fedora to Ubuntu.
Using randomized mmap() (randomized library base mappings), PIE (Position Independent Executables) and RANDEXEC hardening makes this type of exploitation technique a bit harder to pull off. The paper is worth a read if you have the time.
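To make the 'unintended sequences' idea concrete, here is a small sketch of my own (not from the paper) of the first step such a tool performs: locating every ret opcode in a code region, since bytes decoded at unintended offsets that fall through to a 0xc3 are candidate gadget tails. The byte stream below is hypothetical; in practice you would scan a mapped copy of libc's .text section.

#include <stdio.h>
#include <stddef.h>

/* Every 0xc3 (ret) byte in a code region is a potential gadget tail.
 * A real tool disassembles backwards from each one, keeping sequences
 * that decode into something useful (a load, an xor, a jump, ...).
 * This sketch only locates the candidate tails. */
static void scan_for_rets(const unsigned char *code, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (code[i] == 0xc3)
            printf("ret at offset 0x%zx\n", i);
}

int main(void)
{
    /* hypothetical byte stream standing in for a slice of libc .text */
    const unsigned char text[] = { 0x89, 0x45, 0xfc, 0x31, 0xc0, 0xc3 };
    scan_for_rets(text, sizeof(text));
    return 0;
}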
Sunday, December 23, 2007
Tuesday, November 27, 2007
Your favorite "better than C" scripting language is probably implemented in C
I was writing an application front-end in Ruby/Gnome2 and I needed to produce an error message for the user that contained a string the user had previously input. My MessageDialog code looked like this:
-------------------------------------------------------------------------
dialog = Gtk::MessageDialog.new(@main_app_window, Gtk::Dialog::MODAL,
Gtk::MessageDialog::INFO,
Gtk::MessageDialog::BUTTONS_CLOSE,
"%s - Was your string!" % my_string)
-------------------------------------------------------------------------
As you can see, the variable my_string is placed in the message dialog text using a format specifier, correctly according to the man page. I started to wonder: if this string contained a format specifier of its own, would the underlying C libraries and bindings display it correctly? Surprise!
No, it was not displayed correctly. In fact, it was vulnerable to a format string attack straight from the year 2001. UGH! Now you might argue, "Your fault for not sanitizing your string." Well, that's true to a point. But the MessageDialog interface is just a very deep abstraction layer over a printf()-style function in the GTK C library, and unlike those functions, MessageDialog is not well documented as an easily misused function.
Programmers typically trust their API to correctly sanitize and display their input, especially in scripting languages, because in scripting languages programmers feel they are safe from traditional C language vulnerabilities. This isn't always the case when your abstraction layers don't handle data correctly. My audit to find the offending code took about ten minutes; I narrowed it down to
ruby-gnome2-all-0.16.0/gtk/src/rbgtkmessagedialog.c
Where it calls GTK like this:
w = gtk_message_dialog_new(NIL_P(parent) ? NULL : GTK_WINDOW(RVAL2GOBJ(parent)),
RVAL2GFLAGS(flags, GTK_TYPE_DIALOG_FLAGS),
RVAL2GENUM(type, GTK_TYPE_MESSAGE_TYPE),
RVAL2GENUM(buttons, GTK_TYPE_BUTTONS_TYPE),
(const gchar*)(NIL_P(message) ? "": RVAL2CSTR(message)));
The variable 'message' is passed directly to GTK. I don't blame the GTK authors for this one; it would be like blaming the libc authors because printf() will happily treat whatever variable you pass it as a format string. The GTK MessageDialog page shows the function prototype for gtk_message_dialog_new():
GtkWidget* gtk_message_dialog_new
(GtkWindow *parent, GtkDialogFlags flags, GtkMessageType type,
GtkButtonsType buttons, const gchar *message_format, ...);
parent: transient parent, or NULL for none
flags: flags
type: type of message
buttons: set of buttons to use
message_format: printf()-style format string, or NULL
...: arguments for message_format
So GTK is clearly expecting a proper format string, which should be safely constructed by whatever API calls it.
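In C terms, the whole bug is one argument. A minimal sketch of the wrong and right ways to hand user data to this prototype (my own illustration of the standard idiom, not the actual ruby-gnome2 patch):

#include <gtk/gtk.h>

void show_message(GtkWindow *parent, const char *user_input)
{
    /* WRONG: user input is interpreted as the printf()-style format string */
    GtkWidget *bad = gtk_message_dialog_new(parent, GTK_DIALOG_MODAL,
                                            GTK_MESSAGE_INFO, GTK_BUTTONS_CLOSE,
                                            user_input);

    /* RIGHT: constant format string, user input passed as an argument */
    GtkWidget *good = gtk_message_dialog_new(parent, GTK_DIALOG_MODAL,
                                             GTK_MESSAGE_INFO, GTK_BUTTONS_CLOSE,
                                             "%s", user_input);
    (void)bad; (void)good;
}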
Example vulnerable code:
-------------------------------------------------------------------------
#!/usr/bin/env ruby
# ruby rubber.rb %x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x.%x
require 'gtk2'
my_string = ARGV[0]
dialog = Gtk::MessageDialog.new(@main_app_window, Gtk::Dialog::MODAL,
Gtk::MessageDialog::INFO,
Gtk::MessageDialog::BUTTONS_CLOSE,
"%s - Was your string!" % my_string)
dialog.run
dialog.destroy
-------------------------------------------------------------------------
To avoid this issue in your Ruby code you can use the markup member, which renders your text with the Pango markup language. It's a workaround, but it gets the job done:
-------------------------------------------------------------------------
my_string = ARGV[0]
dialog = Gtk::MessageDialog.new(@main_app_window, Gtk::Dialog::MODAL,
Gtk::MessageDialog::INFO,
Gtk::MessageDialog::BUTTONS_CLOSE)
dialog.markup = "#{my_string} - Was your string!"
dialog.run
dialog.destroy
-------------------------------------------------------------------------
Alternatively, you could do something like my_string = my_string.gsub(/%/, "%%") before calling MessageDialog.
Using Google we can find some other projects vulnerable to similar bugs. Most just stick #{my_string} in the message, including example applications from the official Ruby/Gnome2 website.
That about wraps up this post. Other Ruby/Gnome2 APIs may have similar 'functionality'. This should teach all the scripters out there a security lesson: always remember your favorite "better than C" scripting language is probably implemented in C. The Ruby/Gnome2 authors have been notified and have committed a patch to SVN.
Thursday, November 22, 2007
What Every Programmer Should Know About Memory (PDF)
I just came across this PDF on reddit.com titled "What every programmer should know about memory". It's written by Ulrich Drepper of Red Hat; you should know who he is.
Link to PDF
It's going to take me a while to get through this (it's 114 pages long), but so far it's a decent read. I'm currently cheating and searching through it for things that interest me; right now I'm taking in section 7.3, 'Measuring Memory Usage'. This section is particularly interesting to me because I've been toying with a project of mine lately that collects massive amounts of data, and searching and sorting that data efficiently has not been easy.
Ulrich states in the PDF that using libc's malloc to store a linked list you populate for later retrieval and use is probably a bad idea. This is true, because there's no guarantee malloc will return memory that is close, or even near, to the next member in the linked list. There are alternatives to the traditional libc malloc, such as obstack and Google's TCMalloc.
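One way to get that locality back without switching allocators is to carve the list nodes out of a single contiguous block. A toy sketch (the node layout is hypothetical):

#include <stdlib.h>

struct node {
    int key;
    struct node *next;
};

/* Allocate all nodes in one contiguous array so that walking the list
 * touches adjacent cache lines, instead of one malloc() per node, which
 * may scatter nodes all over the heap. */
struct node *make_list(size_t n)
{
    struct node *pool = malloc(n * sizeof *pool);
    if (!pool)
        return NULL;
    for (size_t i = 0; i < n; i++) {
        pool[i].key = (int)i;
        pool[i].next = (i + 1 < n) ? &pool[i + 1] : NULL;
    }
    return pool;   /* free() the head pointer to release every node */
}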
There's lots of other good stuff in his paper, take a look for yourself.
Thursday, October 18, 2007
OSX Leopard - ASLR?
A lot of mainstream media is reporting that OSX will be getting ASLR (Address Space Layout Randomization). However, OSX's new features page says 'library randomization', not ASLR. I'm not an OSX user, but I think some clarification is needed here: ASLR is a pretty vague term to apply to this. The PaX implementation, for example, describes ASLR as randomization of many different regions of a process's memory. The true die-hard in me reserves the term ASLR for a wider randomization implementation covering the stack base, mmap(), the .text base and more, not just library mappings.
And now that all of this is on slashdot.org I'm sure the fanboi war will begin. Please let it be known that my official opinion is: it doesn't matter what OS you run, you can still get owned.
http://pax.grsecurity.net/docs/aslr.txt
Wednesday, October 03, 2007
Code Auditing Checklist
When I audit any code I always follow the same steps to familiarize myself with the application and get a better sense of its internals. I was giving this advice to a friend over IM today, and I thought it would make a good blog post for others.
Years ago, when I would try to audit a fairly large application like Apache, I simply got lost in its many functions and data structures, unable to get a good enough grasp of how it worked. By that point I would become frustrated and probably move on to another application. Sometimes you get lucky and sometimes you walk away angry. There were never any good guidelines from the masters, only examples of vulnerable code. But without a thorough understanding of how a program works, I don't believe it's possible to get the most out of your time spent auditing it. So I have written down a few simple steps for understanding an application in less time, which means more time auditing for vulnerabilities.
1. Does the application have its own memory management? Many applications implement their own internal memory management instead of just allocating space when they need it. You will find many larger applications have memory structures that contain a pointer to some dynamic buffer, the total size of the buffer, the length of the data in that buffer, and perhaps a pointer to a function that processes the data (a sketch of such a structure appears after this list). This will vary greatly from app to app, but understanding how this internal memory management works is absolutely key to finding any vulnerabilities related to mishandling of that memory. It's also important when exploiting a vulnerability you have found; sometimes these higher abstraction layers can be abused.
2. Are there any functions the application calls repeatedly? For example, during a recent code audit I did, there was a function that processed and stripped HTML characters from a string of user input, and it was called repeatedly throughout the application. I reviewed the function from start to end, making notes about how it could be called insecurely. The next time I came across a block of code that called that function, I already knew what it did and could tell right away whether it was being used correctly. Don't make the beginner mistake of only hunting for str/memcpy abuses when there are plenty of home-grown functions that are just as lousy and widespread.
3. Macros, typedefs, defines, and structures: study them and know them well. Most larger applications typedef the large structs or variables they use often, and large applications have many structures that are important to understanding their internals. A variable's type can make a big difference between being vulnerable and not being vulnerable. Make a list on paper if you have to.
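As promised in point 1, the kind of home-grown buffer object you run into again and again looks something like the following sketch (the field names are hypothetical):

#include <stddef.h>

/* Typical application-private buffer object. When auditing, check every
 * place that updates 'len' or 'size' independently of the other, and
 * every consumer that trusts them without verification. */
struct app_buf {
    char   *data;                       /* dynamically allocated storage */
    size_t  size;                       /* total bytes allocated */
    size_t  len;                        /* bytes of valid data in 'data' */
    void  (*consume)(struct app_buf *); /* handler invoked on the data */
};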
This is not an exhaustive list of how you should approach a code review, but rather a quick checklist for understanding how an application works internally so you can spend more time finding bugs.
Tuesday, October 02, 2007
1 Year Has Passed
I just realized this blog turned one year old a few weeks ago, and I'm still not at 50 posts. That's pretty sad; I'll have to pick up the pace. Over the past year I have blogged about various topics: security, ELF, Linux, random security headlines and more. Sometimes even 'real' tech media will quote my posts. Does a lack of comments indicate no one finds what you have to say interesting? I hope not.
The blog averages about 20-40 hits a day from various Google keyword searches and links to it. From what I can tell there are an additional 75 to 100 people who subscribe to the RSS feed via FeedBurner, Bloglines, Google and a few others I've never heard of. Thanks for reading for the past year. As long as I have readers I will continue to post :)
Saturday, September 29, 2007
Blackboxes and Trust
I'm sure you've heard the saying "you wouldn't buy a car that had the hood sealed shut, would you?", usually followed by an open source zealot explaining to you why that analogy works for software. Well, I actually do agree with it. Anton Chuvakin put it into better words than I ever could in this blog post.
Every single day, very large and important organizations rely on software to keep themselves running (hospitals, infrastructure control, intelligence agencies, the military, and so on). Yet nearly none of these organizations are legally allowed to see the source code of that software. There is just absolute blind trust in its ability to work correctly and reliably, not to mention securely.
Where is the proof this software isn't full of backdoors, vulnerabilities, logic bugs or worse? Organizations such as those above need to start asking (demanding) that their vendors provide some real proof that the source code or binary was audited by a third party, i.e. not the original developers of the software. This proof works both ways. It gives the company the chance to say "hey, we can't catch all the bugs, but we did our best, and that's why you should choose us over our competition", and customers gain a little more trust in the investment they just made, because now they know their vendor went further than the competition to produce a better quality product.
Let's take Windows Vista for example: many hackers have audited its source code while on Microsoft's payroll. This is a good thing, and Microsoft can now say to customers "YES, we did audit our code after development", which is a lot more than most other vendors out there can say. The flip side to this argument is open source. Just because the source is open doesn't mean people have reviewed it for vulnerabilities (download a random SourceForge project and you will understand what I mean). But on the other hand, it does give the customer/user the ability to inspect the software they are relying on so heavily.
How many of you can honestly say the software products your company relies on have been audited by a third party?
Monday, September 24, 2007
Some Thoughts On Virtualization and Security
With high profile VMware vulnerabilities just hitting the news, it's easy to find mainstream articles covering the subject. This post isn't about hypervisor rootkits (because we're all tired of hearing about that), but about the assumption in corporations and academia that (virtualization == security). This is just plain WRONG. Virtualization environments are extremely complex pieces of software, and with complexity comes insecurity. In fact I would venture as far as to say that by default (virtualization == insecurity); running two operating systems within the same machine just creates more attack surface. Considering the high degree of interaction the host and guest OS must have, you inherently create a greater possibility of vulnerability than if they were on separate hardware. And just because VMs are easy to create and re-create doesn't mean they shouldn't be secured as well. As we have seen from this latest VMware vulnerability, there's always the possibility your guest VM can compromise your host OS. It should also be noted that once the host OS has been hijacked, ALL of your guest VMs should be considered compromised and untrusted. In order for an attacker to completely own your virtualization environment he/she has to know exactly what host OS is being used. There needs to be more thorough research into this area before widespread panic can begin. There will also hopefully be more utilization of the host OS/virtualizer as a Virtual IDS (VIDS) of sorts, to tell us when our virtual machines have been compromised. This use hasn't been explored enough, in my opinion.
Now, it's true that some virtualization technologies were designed with security in mind while others were meant to increase the efficiency and productivity of hardware. This fact should be noted when deciding which virtualization strategy to use. But companies should also be aware of the security issues they may introduce by improperly implementing a virtualization strategy, as they may cause more harm than it's worth.
Saturday, September 22, 2007
A good presentation by FX ....
I just read a pretty good presentation by FX (Felix Lindner) called "Security and Attack Surface of Modern Applications", presented at HITB 2007 (I did not attend). As FX describes it, the presentation is not about hex and 0day ;( but about how security problems are not being fixed and things are rapidly progressing downhill. He makes some very good points, such as "Respect that software is there to solve real problems for people, security isn't one of them." This is very true, and the security community tends to forget it most of the time. His presentation has some excellent numbers on vulnerability classes and what attackers have focused on from the late nineties to today.
One subject he touches on which is of interest to me is perimeter security. While it's true most attackers focus on client side exploits today, perimeter security should not be forgotten just because we tunnel 50% of our applications over HTTP. Client side exploits allow attackers to create larger botnets, but client side vulnerabilities aren't always the first pick in a targeted attack. Well, they can be (MS Office parsing vulns, google for what I mean), but targeted attacks can also involve something specific to that target: a mis-configured web server or email server, etc. To FX's point, combining all of these different technologies (VPN termination, LDAP, SSL, etc.) into the firewall is _not_ the way to do perimeter security. Defense in depth is still entirely relevant and will be for a long time to come, and if done correctly it can, at the very least, stop some successful client side exploits from calling home, which minimizes their impact on your network.
On slide 13 FX also talks about 'Skill and Time'. He seems to put far more skill+time on finding vulnerabilities as opposed to writing exploits, which he states 'requires little skills but quite some time'. I'm not sure how I feel about that slide yet; others certainly do not agree with him.
I recommend reading it. You can grab FX's presentation and others from HITB 2007 here
(FX's take on the 'self defending network' is priceless)
Wednesday, September 19, 2007
QueFuzz
**Update: New version is out (v06), supports a fuzzing template file - source is here
QueFuzz is a very basic C program that uses the libnetfilter_queue library to turn any networked application into a fuzzer. It basically works like this:
- You set a specific iptables QUEUE rule like so:
$iptables -A OUTPUT -p tcp --dport 110 -j QUEUE
- Start it like so:
'$./quefuzz -a -v -c USER'
or
'$./quefuzz -b -v -f 3'
- Open your POP3 client and connect to the POP server you want to fuzz
- QueFuzz picks up your packets using libnetfilter_queue, fuzzes them and sends them on the wire
This works with any protocol/port. If netfilter/iptables can queue it, QueFuzz can fuzz it.
QueFuzz has no protocol awareness; it expects to receive a proper packet. It has minimal command line flags, such as whether the protocol you want to fuzz is binary, ascii, or both. If the protocol is TCP or UDP, QueFuzz skips those headers appropriately and starts fuzzing the packet data. If the protocol is not TCP or UDP, it starts fuzzing immediately after the IP header.
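For readers who haven't used libnetfilter_queue, the skeleton of a tool like this is small. Below is my own minimal sketch of the standard pattern (not QueFuzz's actual source): receive each queued packet, flip one payload byte, and reinject it. Error checking is omitted, queue number 0 matches the plain -j QUEUE rule above, the byte-flip mutation is an arbitrary placeholder, and older library releases declare the payload pointer as char ** rather than unsigned char **.

#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/netfilter.h>
#include <libnetfilter_queue/libnetfilter_queue.h>

/* Flip a random payload byte, then reinject the packet. A real tool
 * skips the IP/TCP/UDP headers and fixes up checksums (as QueFuzz's
 * bundled checksum routines do); both are omitted here for brevity. */
static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
              struct nfq_data *nfa, void *data)
{
    struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
    unsigned char *pkt;
    int len = nfq_get_payload(nfa, &pkt);

    if (len > 0)
        pkt[rand() % len] ^= 0xff;
    return nfq_set_verdict(qh, ntohl(ph->packet_id), NF_ACCEPT, len, pkt);
}

int main(void)
{
    struct nfq_handle *h = nfq_open();
    struct nfq_q_handle *qh;
    char buf[4096];
    int fd, rv;

    nfq_unbind_pf(h, AF_INET);              /* needed on older kernels */
    nfq_bind_pf(h, AF_INET);
    qh = nfq_create_queue(h, 0, &cb, NULL); /* queue 0: plain -j QUEUE */
    nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);

    fd = nfq_fd(h);
    while ((rv = recv(fd, buf, sizeof(buf), 0)) >= 0)
        nfq_handle_packet(h, buf, rv);

    nfq_destroy_queue(qh);
    nfq_close(h);
    return 0;
}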
A lot of work is needed on the tool. It was never meant to be protocol aware or intelligent, but it could certainly be cleaner. It is BETA code at best, so use at your own risk. I can guarantee it's full of bugs (probably some bad ones), so be careful! I literally whipped it up in a couple of hours. I'll be refining it over the next few weeks and releasing updates. Feel free to send me patches and suggestions by email.
QueFuzz is released under the GPLv2 as is libnetfilter_queue. Some checksum routines are released under BSD-3 license from various sources.
You can download the beta code here. Enjoy!
Thursday, September 13, 2007
Ngrep is still useful
I just had to blog about how much I love ngrep. Despite all the advances in security, we are still left with a huge problem called data leakage. If you work in any type of operational security role, it's one of your worst nightmares. I have used ngrep for a couple of years, as I'm sure most of you have too. I had a (legal) need for ngrep again over the past week while assessing the state of security in a specific network I protect and monitor, and I thought I would post some of my more usable ngrep queries for you. I am not a regular expression guru like some people I know, sorry.
Looking for social security numbers:
$ngrep -q -d eth0 -w '[0-9]{3}\-[0-9]{2}\-[0-9]{4}'
Almost the same as above, but searching for credit card number patterns (this one can lead to some false positives if searching through HTTP conversations):
$ngrep -q -d eth0 '[0-9]{4}\-[0-9]{4}\-[0-9]{4}\-[0-9]{4}'
Looking for 'password=':
$ngrep -q -d eth0 -i 'password='
Some storm worm executable names (this could be expanded easily):
ngrep -q -d eth0 -i '(ecard|postcard|youtube|FullClip|MoreHere|FullVideo|greeting|ClickHere|NFLSeasonTracker).exe' 'port 80'
Detect an HTTP connection to a server by IP address not FQDN (this is how bleedingthreats new storm worm download rules look):
ngrep -q -d eth0 -i 'Host\: [0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' 'port 80'
Look for basic http login:
ngrep -q -d eth0 -i 'Authorization: Basic' 'port 80'
These are just smaller examples of what expensive 'data leak prevention' boxes do. Hopefully they perform their regular expression lookups on reassembled packet flows, not individual packets; otherwise it's a waste of time, as the data can be chunked up across different packets. Data leakage continues to be an issue to this day, and unfortunately I don't see it going away anytime soon, mostly because it's a human problem, and user education is a losing battle : \
Sorry this post was soooo 2001 - please resist the urge to remove me from your RSS reader
Friday, August 10, 2007
Static Analysis Headaches
I am very interested in the static analysis of binaries, mainly because there's no one way to do it. There's no correct or incorrect way of analyzing compiler generated code, especially without running it. In fact, most techniques only work with certain compiler constructs and function behaviors. I think that's why, even today, there are very few tools that do it well.
I started coding static analysis tools a few years ago, and have steadily been rewriting and testing pieces of one in particular, over and over again, that analyzes x86 ELF objects. (Yes, I will eventually release it in some form.) I have run into many pitfalls during its design, specifically emulating the x86 without too much overhead. Obviously I don't care to emulate every single instruction in every combination; that's not only pointless but would take forever. There are only certain parts of the execution process I am interested in: what the stack looks like, register contents, variable types, and how they all tie together. For example, a programmer might say sizeof(var), where the size of that variable is determined at runtime. Now suppose that size is used as a length argument to a function like memcpy. I can't be too sure whether the call is vulnerable, because I don't know exactly what var is or how big it is. Sometimes educated guesses must be made: does var get assigned a value from a packet? Is it a command line argument? When you can't execute the binary, you have to make certain assumptions and just hope they are correct.
And sometimes you do know certain things about the variables. I thought it might make a nice write-up to show how a tool of mine evaluated a specific vulnerable call to memcpy(). This is one very non-scientific way of finding variable objects in the code and assigning them attributes such as 'size'; another 'assumption' I had to make.
Here's a function foo() as the tool annotates it (listing below). During its first pass over the object code, a size value was stored and assigned to the static object at 0x08049640 based on the arguments to memset(). This is obviously not a foolproof way of knowing what the object at 0x08049640 is or what its true size is, but at the very least it should be the object's minimum size. It's probably a global struct that contains some variables, or a static character array, but it's impossible for the tool to figure that out with any degree of certainty at this point. Following the memset() call there was a call to memcpy(), and based on the prior observation the tool is able to determine auto-magically that there is a potential buffer overflow.
...
80483de push %ebp
| Symbol: [foo @ 080483de]
| Xref: (0x80483de -> [0x080483cb call 0x80483de])
80483df mov %esp,%ebp
80483e1 sub $0x18,%esp
80483e4 mov $0x8049640,%edx
80483e9 mov $0x80,%eax
80483ee mov %eax,0x8(%esp)
80483f2 movl $0x0,0x4(%esp)
80483fa mov %edx,(%esp)
80483fd call 0x80482d4
| Symbol: [memset @ plt]
| Analysis:
| EAX 0x00000080 EBX 0x00000000
| ECX 0x00000000 EDX 0x08049640
| | Symbol: [0x8049640 buf1 @ .bss]
| Analysis:
| memset() argument indicates sizeof(0x08049640)=0x80(128 bytes)
8048402 mov 0x8(%ebp),%eax
8048405 add $0x4,%eax
8048408 mov (%eax),%eax
804840a mov $0x8049640,%ecx
| Symbol: [0x8049640 buf1 @ .bss]
804840f mov %eax,%edx
8048411 mov $0x100,%eax
8048416 mov %eax,0x8(%esp)
804841a mov %edx,0x4(%esp)
804841e mov %ecx,(%esp)
8048421 call 0x80482f4
| Symbol: [memcpy @ plt]
| Analysis:
| EAX 0x00000100 EBX 0x00000000
| ECX 0x08049640 EDX 0x00000000
| | Symbol: [0x8049640 buf1 @ .bss]
| Analysis:
| memcpy() argument indicates buffer overflow at 0x08049640 by (0x80) bytes [!]
8048426 mov $0x0,%eax
804842b leave
804842c ret
...
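The bookkeeping behind that output is simple. Here is a sketch of the heuristic (the structures are hypothetical; the real pass works over decoded instructions and tracked register state):

#include <stddef.h>
#include <stdint.h>

/* Sizes learned for static objects during earlier passes. */
struct obj_hint {
    uint32_t addr;   /* e.g. 0x08049640, the .bss object */
    size_t   size;   /* minimum size inferred from memset()'s 3rd argument */
};

/* On "call memset": record the length operand as a size hint for the
 * destination object. On "call memcpy": compare the copy length against
 * the recorded hint and flag anything larger. */
static int check_memcpy(const struct obj_hint *hints, size_t nhints,
                        uint32_t dst, size_t copy_len)
{
    for (size_t i = 0; i < nhints; i++)
        if (hints[i].addr == dst && copy_len > hints[i].size)
            return 1;   /* potential overflow of dst by copy_len - size */
    return 0;
}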
Obviously I am not the only person to use this method; it's a very simple concept and easy to implement, and it certainly won't catch more complex bugs that require the interaction of many functions.
This requires several passes over the binary before any output to the user can occur. My first and second passes gather all symbol, relocation, and cross reference data, followed by the function analysis routines. The third pass contains mostly output plugins that make all of the data accessible for display.
Blogspot has a way of jumbling up my text, not to mention it's not really formatted nicely to begin with. The vulnerability analysis plugin has lots of 'hint strings' that are triggered by the occurrence of specific instructions combined with pre-existing knowledge about the static data objects and code evaluated in previous passes. For now it works on smaller programs. Despite being written in straight C, it can sometimes take a while to crunch all of this on a large binary like Firefox (and most of the time produces absolute nonsense). The end goal is an efficient tool that can process and accurately report on a larger binary.
Tuesday, August 07, 2007
Summer is almost over
As you may have noticed, I have not written a blog entry since June. I am spending my summer relaxing for once and catching up on some reading. Some advisories and beta quality tools will be along shortly.
I often help beginners in the field of information/computer security, at work and on a personal level. The question I get asked most often is "what should I start with?!" Usually they are expecting some cool and interesting technique they can dive into, like "breaking XYZ encryption!", but they are typically disappointed when I respond with something like "start learning C and reading the Linux kernel source". That's when their smile fades and they realize they have to go back to the stuff they ignored freshman year of college. Today I came across "Computer Science From the Bottom Up". It's full of good information for the beginner to computer science, which is a necessary base for computer security. Have fun.
Friday, June 08, 2007
Dual Licenses and more
There has been some good discussion of the GPL and dual licensing at Matasano's blog, and Ryan Russell has also posted some good thoughts on it. This came right on time for me, as I've been debating lately what to do with a couple of projects I've been working on for a while. I want to release the code, but it would also be great to sell and/or license it to companies wishing to use it commercially. These projects include a reverse engineering framework and various network security tools. The RE framework is basically an engine written in C that securely and reliably parses, disassembles and stores massive amounts of data on any ELF object. It becomes usable by writing plugins for it: you can write output plugins (I will be including an HTML one) and plugins that hook the internal disassembler and ELF parsing routines. I have a couple of plugins ready and I want to release this code soon (1-2 months). So expect an open source version of that, with a dual license for companies wishing to license it for commercial use.
** [ Start reading here if you came from bleedingthreats.net ] **
In other news, I posted a basic script today that parses the Snort alert file for IP addresses and then queries Spamhaus' ZEN real-time blacklist. Feel free to modify and use it in your sensor network (it's certainly not production quality as it stands). I am very interested in receiving modifications to the script and general feedback on the idea. I have already seen some interesting trends that I think will prove useful after a few days of correlating data. Enjoy!
Note: Spamhaus is unfortunately under DDoS as I write this, so don't use it too heavily.
Update - I have posted a new version of the script - please contribute if you make changes
Thursday, May 31, 2007
It's easy to overlook some bugs
I often hear people say source code auditing is generally easier than binary auditing. This is usually true, but certain bugs are so easy to overlook in source code form that they rarely stand out. But what about the corresponding assembly code: are the same bugs just as hard to spot? Let's take a look.
...
804843b: 8b 85 e8 fb ff ff mov 0xfffffbe8(%ebp),%eax
8048441: 83 c0 04 add $0x4,%eax
8048444: 8b 00 mov (%eax),%eax
8048446: c7 44 24 08 ff 03 00 movl $0x3ff,0x8(%esp)
804844d: 00
804844e: 89 44 24 04 mov %eax,0x4(%esp)
8048452: 8d 85 f8 fb ff ff lea 0xfffffbf8(%ebp),%eax
8048458: 89 04 24 mov %eax,(%esp)
804845b: e8 c0 fe ff ff call 8048320 ;strncpy@plt
...
Reverse that to C real fast. At 804843b we start setting up the arguments to strncpy. Notice at 8048446 that decimal 1023 is used as the third argument to strncpy: the length value. The size of the original char buffer was probably 1024. argv[1] to main() is used as the src argument, and finally the local char buffer as the destination.
'strncpy(buffer, argv[1], sizeof(buffer)-1);'
This is all fairly routine stuff, pretty boring actually. Now here's another disassembly listing.
...
804843b: 8b 85 e8 fb ff ff mov 0xfffffbe8(%ebp),%eax
8048441: 83 c0 04 add $0x4,%eax
8048444: 8b 00 mov (%eax),%eax
8048446: c7 44 24 08 04 00 00 movl $0x4,0x8(%esp)
804844d: 00
804844e: 89 44 24 04 mov %eax,0x4(%esp)
8048452: 8d 85 f8 fb ff ff lea 0xfffffbf8(%ebp),%eax
8048458: 89 04 24 mov %eax,(%esp)
804845b: e8 c0 fe ff ff call 8048320 ; strncpy@plt
...
Notice the difference? The length argument to strncpy is wrong; it's only decimal 4. The programmer (me, because this is an example) used ...
'strncpy(dst, src, sizeof(dst-1));'
instead of
'strncpy(dst, src, sizeof(dst)-1);'
In the second listing, the resulting code would copy at most 4 bytes into the destination from the source. The bug can be hard to spot in source code because it's a matter of where the ')' character is placed; you may scan code for hours and easily overlook this minor (but very crucial) detail. And the bug in a disassembly listing is also hard to spot, because it's just a single byte difference. Anyway, these subtleties can make a world of difference.
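The root cause is easy to demonstrate: inside sizeof, dst-1 is pointer arithmetic, so the expression measures a pointer type instead of the array. A tiny program makes it obvious:

#include <stdio.h>

int main(void)
{
    char dst[1024];

    /* dst decays to char* in the expression dst-1, so sizeof(dst-1)
     * is sizeof(char *): 4 on 32-bit x86, not 1023. */
    printf("%zu\n", sizeof(dst));       /* 1024 */
    printf("%zu\n", sizeof(dst - 1));   /* 4 on a 32-bit build */
    printf("%zu\n", sizeof(dst) - 1);   /* 1023, what was intended */
    return 0;
}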
Monday, April 09, 2007
A Few Thoughts On Fuzzers
I am not a fuzzing guru, but it has occurred to me that there is a much quicker way to go about developing a fuzzer, or better yet a fuzzing 'wrapper' around well known and tested applications that already implement complex protocols. Fuzz tool authors spend a considerable amount of time re-implementing complex protocols (believe me, I know) for the sole purpose of having complete control over their output. This is because, for a fuzzer to be worth your time, it has to be semi-intelligent and protocol aware. The days of dumb fuzzers and windfalls of data to crash applications seem to be going away; more precise, intelligent tools are needed.
I for one am done writing complete protocol aware fuzzers. Instead I am shifting my focus to 'fuzzing wrappers', 'inline fuzzers' and fuzzing proxies (you like those buzzwords, don't you!) for network based black box testing. Here's a simple concept: a Linux kernel module whose sole purpose is to fuzz outgoing communications. When the module is inserted, it reads a configuration file that tells it specifically which protocols and ports it may touch. Take a simple protocol to start with, say POP3. Your configuration file would list the fields that may be fuzzed: UIDL, STAT, RETR and DELE. Now you insert the module, open up your mail client and check your mail on the POP3 server you're testing. This kind of fuzzer lets you focus on the fuzzing engine and not the re-implementation of boring protocols. What makes this approach better is that it is pluggable: your engine can be applied to multiple protocols without re-implementing each one individually.
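To illustrate the engine/config split (this is my own user-space sketch; the module would do the equivalent in-kernel, and the verb list stands in for the configuration file), the mutation step driven by that whitelist could be as simple as:

#define _GNU_SOURCE   /* for memmem() */
#include <string.h>
#include <stdlib.h>

static const char *fuzzable[] = { "UIDL", "STAT", "RETR", "DELE" };

/* Mutate only the bytes that follow a whitelisted POP3 verb, leaving
 * the rest of the (already well-formed) client output intact. */
static void fuzz_pop3(char *buf, size_t len)
{
    for (size_t v = 0; v < sizeof(fuzzable) / sizeof(fuzzable[0]); v++) {
        char *p = memmem(buf, len, fuzzable[v], strlen(fuzzable[v]));
        if (!p)
            continue;
        for (char *q = p + strlen(fuzzable[v]); q < buf + len && *q != '\r'; q++)
            if (rand() % 4 == 0)      /* simple bit-flipping engine */
                *q ^= 0xff;
    }
}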
Another simple concept is a proxy fuzzer. The Art Of Fuzzing already has something similar to this, but I think the concept can go a lot further than it currently has (it's a good start). For example, modify an existing HTTP proxy to hook into a fuzzing engine; to use it, you simply fire up your web browser and visit the web server you're testing through the proxy.
In my experience, for a fuzzer to be truly effective it has to produce *mostly* correct output while tweaking small parts incrementally to cover all possible code paths. Why rewrite all this stuff when there are rock solid applications out there that already do it?
There are a lot of directions fuzzing research can go in; code coverage seems to be getting attention now as well. At the end of the day, IMHO, a pair of eyes is always better at finding vulnerabilities than an automated tool, but fuzzers certainly have a place in our toolkits.
Thursday, March 22, 2007
Bug Hunting Is Getting Harder
If you have been part of the security community for even just a couple of years, you have no doubt noticed the decrease in serious bugs being reported and exploited out there. This is definitely no coincidence: vulnerabilities are getting harder to find and even harder to exploit, which creates a lot of value for quality bugs in widely used software. I have partly blogged on this in the past.
I should probably also mention that I don't consider XSS bugs part of these statistics...yet. They are without a doubt a serious issue, but at this point they are still in their infancy and affect (probably) more than 90% of web applications out there. It's like looking back at Bugtraq from 2000 and seeing "buffer overflow"; they too will settle down in time.
Sometimes we still see straightforward stack overflows, like the recent Snort DCE/RPC overflow found by Neel Mehta, but in general I feel bugs are getting more and more obscure. I personally feel there are many, many integer over/underflow vulnerabilities still waiting to be found; they are hard to come by and even harder to exploit, since the conditions have to be just right. We saw new research into uninitialized variable attacks in the past two years, yet they remain nonexistent on our mailing lists. Are they not being found? Or just very hard to exploit?
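For anyone who hasn't chased one of these, the classic shape of an integer overflow bug is a wrapped addition defeating a bounds check. A contrived sketch of my own:

#include <string.h>

/* If len1 + len2 wraps around (e.g. len1 = (size_t)-1 and len2 = 8),
 * the sum is tiny, the check passes, and the memcpy()s smash buf. */
void combine(const char *a, size_t len1, const char *b, size_t len2)
{
    char buf[256];

    if (len1 + len2 > sizeof(buf))   /* wrapped sum defeats this check */
        return;
    memcpy(buf, a, len1);
    memcpy(buf + len1, b, len2);
}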
So what's the point of this blog post? A question for you. What is the future of vulnerability research? Where are we headed in terms of exploitation techniques? Are there any more undiscovered bug classes?
My answers to these questions: bugs will continue to become more and more obscure and gain more and more monetary value as time goes on. Exploitation techniques are going to get trickier in order to defeat now-mainstream memory protection techniques. And there are undiscovered bug classes, in my opinion; when I find one, I'll let you know!
Wednesday, March 14, 2007
Quick LibELF Guide
Libelf is great; I use it a lot. It's multi-platform, well written, the license is LGPL, and the author answers questions quickly. But the documentation just isn't there. I get a lot of hits to this blog from people searching Google for libelf topics, so I thought a good entry on how to use libelf would be beneficial to others. Below is a link to heavily commented C code that uses libelf to read the sections and symbols of an ELF object.
Libelf Example in C
It's an example of how to read an ELF object's section header and symbol table; I'll leave relocation reading as an exercise for the reader. A minimal sketch of the section-walking portion is below.
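For a taste of the API, here is my own compressed sketch (not the linked example itself) that walks the section headers and prints each section's name and size; error handling is mostly omitted:

/* listsec.c - minimal libelf sketch: print each section's name and size.
 * Build: gcc listsec.c -o listsec -lelf     Use: ./listsec /bin/ls       */
#include <err.h>
#include <fcntl.h>
#include <gelf.h>
#include <libelf.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2)
        errx(1, "usage: %s <elf-object>", argv[0]);
    if (elf_version(EV_CURRENT) == EV_NONE)   /* mandatory library handshake */
        errx(1, "libelf too old");

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0)
        err(1, "open");
    Elf *e = elf_begin(fd, ELF_C_READ, NULL);
    if (e == NULL)
        errx(1, "elf_begin: %s", elf_errmsg(-1));

    GElf_Ehdr ehdr;                           /* e_shstrndx locates the      */
    if (gelf_getehdr(e, &ehdr) == NULL)       /* section name string table   */
        errx(1, "gelf_getehdr: %s", elf_errmsg(-1));

    Elf_Scn *scn = NULL;
    while ((scn = elf_nextscn(e, scn)) != NULL) {
        GElf_Shdr shdr;
        if (gelf_getshdr(scn, &shdr) == NULL)
            continue;
        printf("%-24s %llu bytes\n",
               elf_strptr(e, ehdr.e_shstrndx, shdr.sh_name),
               (unsigned long long)shdr.sh_size);
    }
    elf_end(e);
    close(fd);
    return 0;
}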
Tuesday, March 13, 2007
Linux Kernel 2.6.20.3
Does anyone follow the Linux kernel changelog like I do? Well, if you said yes, then you're a real geek. Congratulations.
http://www.kernel.org/pub/linux/kernel/v2.6/ChangeLog-2.6.20.3
$ grep fix ChangeLog-2.6.20.3 -i | wc -l
32
$
Yikes, that's a lot of 'fix'. Mostly NULL pointer dereferences; I'll have to dig a bit deeper into these later. Despite all the vulnerabilities and bloated code, Linux remains my OS of choice. It is really maturing, and with PaX it is mostly secure. Although security isn't what troubles me with Linux these days; it's more of a reliability issue with the OS. But I guess that's the price you pay for constantly evolving functionality.
Thursday, February 22, 2007
Obfuscated ELF Objects
I have blogged before about reverse engineering/binary analysis tools and how incredibly easy it is to break them. Prior work on the de-obfuscation of obfuscated binaries has concentrated on producing an accurate dead listing of executable code. These methods mostly focus on detecting 'junk bytes' or data within code sections, chiefly by determining which instructions are actually executed at runtime without actually executing the object. I have researched ways to throw off these tools by manipulating the ELF object data that surrounds the code instead.
The search is limited to ELF object values that analysis tools use but that the OS linker and loader do not, while maintaining runtime functionality. Unfortunately I have found _many_ ways to accomplish this. Most of the techniques disable or otherwise subvert the majority of analysis tools out there. Here are a few off the top of my head.
elf_header e_ident[EI_CLASS]
The Linux kernel doesn't check this value: make it whatever you want and your object will continue to function. Unfortunately most analysis tools will cease to work. Only IDA Pro copes, by defaulting to 32 bits; though if you set the byte to ELFCLASS64, the demo version of IDA Pro will complain that you can't disassemble 64-bit objects with that version. I sent a patch to the LKML for it. A minimal patcher sketch follows.
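A minimal sketch of such a patcher, assuming nothing fancier than poking the one byte in place:

/* classpatch.c - sketch: corrupt e_ident[EI_CLASS] in an ELF header.
 * The kernel doesn't check this byte (see above); analysis tools trust it.
 * Build: gcc classpatch.c -o classpatch     Use: ./classpatch ./victim   */
#include <elf.h>      /* for EI_CLASS (byte index 4 of e_ident) */
#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2)
        errx(1, "usage: %s <elf-object>", argv[0]);
    int fd = open(argv[1], O_RDWR);
    if (fd < 0)
        err(1, "open");
    unsigned char junk = 0x2a;                /* neither ELFCLASS32 nor 64 */
    if (pwrite(fd, &junk, 1, EI_CLASS) != 1)  /* patch the single byte     */
        err(1, "pwrite");
    close(fd);
    return 0;
}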
elf_header e_phnum
Depending on the object you choose, you can increment this value a couple of times safely, and most of the time analysis tools that use the program header instead of the section header (IDA Pro) will simply fill their analysis output with garbage from the fake program header segments. This won't work on all objects, as you can't always choose the data that sits just beyond the legitimate program header entries, and some values may cause the loader to throw your binary out upon execution.
section header sh_size
When the section header is present, tools like IDA Pro use it to perform analysis (yes, you can force use of the program header instead). Change the sh_size member of any section to any value you want and the analysis will be incorrect.
Remember, when writing binary analysis tools you have to assume the object you're parsing is malformed. How would the real OS loader treat it? That's what analysis tools must do: emulate the real environment, not the standard.
*I am in no way bashing IDA Pro; it's far more powerful than anything else out there right now, but I had to use an example :] In fact, to Ilfak's credit, most other tools refused to even read, or failed completely (crashed) on, most of the objects I was manipulating.
Tuesday, February 13, 2007
Too Much Going On!
OK, RSA just ended and I'm back from CA. I never thought I would be happy to see Charlotte, but I am. There is a lot going on right now in security: lots of unimportant stuff and important stuff at the same time. This post attempts to capture my feelings on some of it.
RSA 2007 - Thank you, NWA, for making me 10 hours late, spilling water on me at 25,000 feet, and generally making my flying experience with your airline crappy. It was a good show though; see you there next year.
Public Hash Database - Excellent idea. I post a hash of a txt file describing what I've potentially discovered and place it in public view; when my research is complete I post the txt file and my work. And if it didn't work out, no harm done.
Fuzzers and Co-operation In An Alpha Male Community - Co-operation? Hah! Not going to happen. And I think fewer people are using public fuzzers than previously thought. New fuzzers come with an extremely limited expiration date: once they stink up the refrigerator they are put aside while a new one is created to find new bugs in a new protocol. Rarely are huge bugs uncovered with them, and if they are, the author isn't sharing his fuzzer with the public.
Solaris Telnet Vulnerability - What the hell, guys? In all seriousness, there is WAY too much mailing list traffic over this bug. If you're running telnet on the internet, you deserve whatever happens. The End.
Vista UAC Design Issues - I almost feel bad for Microsoft; they have to balance usability with security, and anyone working in security can tell you that's a tough job.
Also why is googlepages so slow these days?
Yeah, that about sums it up for now. My posts have been pretty weak these days and I apologize for that. I promise it will get better.
Thursday, February 01, 2007
Quiet reporting of loud vulnerabilities
Did you happen to catch the Solaris ICMP DoS vulnerability? If you're like me, you found out second hand from the ISC handlers diary. I have since found an entry about it on SecurityFocus and at CERT. From the advisory Sun produced, I think it's safe to say that when exploited, this vulnerability causes your box to go down, and go down hard. The stack trace Sun provided gives some clue, but not much, and I don't have a Sun box to go poking around on to find out exactly how to trigger it.
Vulnerabilities like this are why I don't like classifying vulnerabilities by 'Remote DoS' alone. There is a difference between a remote DoS where the attacker must first establish a TCP connection and authenticate before bringing down the box, and a 'spoofed ICMP packet = death of your box' vulnerability. The people at CERT correctly slapped 'unauthenticated attacker' on their advisory. Access to information is important, especially on critical systems. The fact that a random anonymous person can deny you legitimate access to your information from anywhere is _bad_. While it's not the same as that random person having access to the information, it should still be considered a vulnerability of concern.
Wednesday, January 10, 2007
Been Busy Lately
In my last post I lied and said my next post would be technical. Unfortunately I just haven't had the time lately. Not because I have lost interest... never! I've been busy with kernel hackery and auditing a few large code bases for various reasons.
I have added a small section on the right-hand side of the blog over there -> that contains links to vulnerabilities I have found, code I've written and papers I have authored. Yes, I'm aware it's a bit blank right now; I am not including anything I did prior to the end of 2006, it's too scattered. So from this point forward my collection of contributions to the security community can be found right here at this blog.
If you're starved for in-your-face code action then check out this post at taossa. The CERT secure coding standards are definitely a step in the right direction; the next step is making your average programmer aware they exist.