I am not a fuzzing guru, but it has occurred to me that there is a much quicker way to go about developing a fuzzer. Or better yet, a fuzzing 'wrapper' around well-known and tested applications that already implement complex protocols. Fuzz tool authors spend a considerable amount of time re-implementing complex protocols (believe me, I know) for the sole purpose of having complete control over their output. This is because for a fuzzer to be worth your time, it has to be semi-intelligent and protocol-aware. The days of dumb fuzzers crashing applications with windfalls of random data seem to be going away; more and more precise, intelligent tools are needed.
I for one am done writing complete protocol-aware fuzzers. Instead I am shifting my focus to 'fuzzing wrappers', 'inline fuzzers' and fuzzing proxies (you like those buzzwords, don't you!) for network-based black box testing. Here's a simple concept: a Linux kernel module whose sole purpose is to fuzz outgoing communications. When the module is inserted, it reads a configuration file. That configuration file tells it specifically which protocols and ports it may touch. Let's take a simple protocol to start with, say POP3. Your configuration file would list the commands that may be 'fuzzed': UIDL, STAT, RETR and DELE. Now you insert the module, open up your mail client and simply check your mail on the POP3 server you're testing. This kind of fuzzer gives you the ability to focus on the fuzzing engine and not the re-implementation of boring protocols. What makes this approach better is that it is pluggable: your engine can be applied to multiple protocols without having to re-implement each one of them individually.
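To make the idea concrete, here is a minimal sketch of what such a module might look like, using a netfilter hook on outgoing IPv4 packets. The port, the hard-coded command list and the one-byte corruption are just stand-ins for what the configuration file and a real fuzzing engine would supply, and things like checksum offload, non-linear skbs and error handling are glossed over.

/*
 * Sketch of the fuzzing-wrapper module: hook outgoing IPv4 packets
 * with netfilter and, when one is bound for the POP3 port, hand the
 * allowed commands to the fuzzing engine before the packet leaves.
 * Port and command list stand in for the configuration file.
 */
#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/skbuff.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/random.h>
#include <linux/string.h>
#include <net/checksum.h>
#include <net/net_namespace.h>

#define POP3_PORT 110

static unsigned int fuzz_out(void *priv, struct sk_buff *skb,
                             const struct nf_hook_state *state)
{
    struct iphdr *iph;
    struct tcphdr *tcph;
    unsigned char *data;
    int tcp_len, payload_len, i;

    if (!skb || skb_linearize(skb))     /* sketch: linear skbs only */
        return NF_ACCEPT;

    iph = ip_hdr(skb);
    if (iph->protocol != IPPROTO_TCP)
        return NF_ACCEPT;

    tcph = tcp_hdr(skb);
    if (ntohs(tcph->dest) != POP3_PORT)
        return NF_ACCEPT;

    data = (unsigned char *)tcph + tcph->doff * 4;
    tcp_len = ntohs(iph->tot_len) - iph->ihl * 4;
    payload_len = tcp_len - tcph->doff * 4;
    if (payload_len <= 5)
        return NF_ACCEPT;

    /* Only touch the commands the config file allows. */
    if (memcmp(data, "RETR ", 5) != 0 && memcmp(data, "DELE ", 5) != 0)
        return NF_ACCEPT;

    /* Stand-in for the engine: corrupt one byte of the argument. */
    i = 5 + (get_random_u32() % (payload_len - 5));
    data[i] ^= 0xff;

    /* The payload changed, so the TCP checksum must be redone
     * (a real module would also handle checksum offload). */
    tcph->check = 0;
    tcph->check = csum_tcpudp_magic(iph->saddr, iph->daddr, tcp_len,
                                    IPPROTO_TCP,
                                    csum_partial(tcph, tcp_len, 0));
    return NF_ACCEPT;
}

static struct nf_hook_ops fuzz_hook = {
    .hook     = fuzz_out,
    .hooknum  = NF_INET_LOCAL_OUT,
    .pf       = PF_INET,
    .priority = NF_IP_PRI_FIRST,
};

static int __init fuzz_init(void)
{
    return nf_register_net_hook(&init_net, &fuzz_hook);
}

static void __exit fuzz_exit(void)
{
    nf_unregister_net_hook(&init_net, &fuzz_hook);
}

module_init(fuzz_init);
module_exit(fuzz_exit);
MODULE_LICENSE("GPL");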
Another simple concept is a proxy fuzzer. The Art Of Fuzzing already has something similar to this, but I think this concept can go a lot further than it currently has (it's a good start, though). For example, modifying an existing HTTP proxy to hook into a fuzzing engine. To use it you simply fire up your web browser and visit the web server you're testing through the proxy.
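A rough sketch of that approach: a tiny TCP relay the browser is pointed at, which hands each outgoing request to the engine before forwarding it. The addresses, the one-request-per-connection assumption and the one-byte 'engine' are all placeholders; a real proxy would parse HTTP properly and handle keep-alive.

/*
 * Sketch of the proxy-fuzzer idea: a relay between browser and web
 * server that mutates each request on its way through. One request
 * per connection (no keep-alive); addresses are placeholders.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

#define LISTEN_PORT 8080        /* point the browser's proxy here */
#define TARGET_ADDR "127.0.0.1" /* web server under test          */
#define TARGET_PORT 80

/* Stand-in for the fuzzing engine: corrupt one byte per request. */
static void fuzz(char *buf, ssize_t len)
{
    if (len > 0)
        buf[rand() % len] ^= 0xff;
}

int main(void)
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0), csock, ssock;
    struct sockaddr_in laddr = {0}, taddr = {0};
    char buf[4096];
    ssize_t n;

    laddr.sin_family = AF_INET;
    laddr.sin_addr.s_addr = htonl(INADDR_ANY);
    laddr.sin_port = htons(LISTEN_PORT);
    bind(lsock, (struct sockaddr *)&laddr, sizeof(laddr));
    listen(lsock, 5);

    taddr.sin_family = AF_INET;
    inet_pton(AF_INET, TARGET_ADDR, &taddr.sin_addr);
    taddr.sin_port = htons(TARGET_PORT);

    for (;;) {
        csock = accept(lsock, NULL, NULL);
        if ((n = read(csock, buf, sizeof(buf))) > 0) {
            fuzz(buf, n);                     /* mutate the request */
            ssock = socket(AF_INET, SOCK_STREAM, 0);
            connect(ssock, (struct sockaddr *)&taddr, sizeof(taddr));
            write(ssock, buf, n);             /* forward to server  */
            while ((n = read(ssock, buf, sizeof(buf))) > 0)
                write(csock, buf, n);         /* relay the response */
            close(ssock);
        }
        close(csock);
    }
}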
In my experience, in order for a fuzzer to be truly effective it has to produce *mostly* correct output, while it tweaks small parts incrementally to cover as many code paths as possible. Why rewrite all of this when there are rock-solid applications out there that already do it?
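That principle is easy to demonstrate: start from one known-good message and emit variants that each change only a single byte, so every test case still parses far enough to exercise deep code. The POP3 command and the list of 'interesting' bytes below are just illustrative choices.

/*
 * Sketch of "mostly correct" generation: take a known-good message
 * and sweep one position at a time through a few interesting bytes,
 * leaving everything else intact so the input still parses.
 */
#include <stdio.h>
#include <string.h>

static const unsigned char interesting[] = { 0x00, 0x7f, 0xff, '%', 'A' };

int main(void)
{
    const char base[] = "RETR 1\r\n";   /* known-good POP3 command */
    size_t len = strlen(base);
    char mutant[sizeof(base)];
    size_t pos, b;

    /* Sweep only the argument, keeping the verb and CRLF intact. */
    for (pos = 5; pos < len - 2; pos++)
        for (b = 0; b < sizeof(interesting); b++) {
            memcpy(mutant, base, len);
            mutant[pos] = interesting[b];
            fwrite(mutant, 1, len, stdout);
        }
    return 0;
}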
There are a lot of directions fuzzing research can go in; code coverage is getting attention now as well. At the end of the day, IMHO, a pair of eyes is always better at finding vulnerabilities than an automated tool, but these tools certainly have a place in our toolkits.
1 comment:
Chris,
I agree with you that fuzzing tools won't take the place of a good set of eyes, but it would be nice to have a more intelligent fuzzer that could point a vulnerability assessment in the right direction. Keep up the good work.
Travis
travisaltman.com