Fuzzing is a term that can be hard to take seriously. But it needs to be, in light of today’s attack landscape. Fuzzing has traditionally been a sophisticated technique used by professional threat researchers to discover vulnerabilities in hardware and software interfaces and applications.
By Doros Hadjizenonos, Regional Director – SADC at Fortinet
Threat researchers do this by injecting invalid, unexpected, or semi-random data into an interface or program and then monitoring for events such as crashes, undocumented jumps to debug routines, failed code assertions and potential memory leaks. This process helps developers and researchers find bugs and zero-day vulnerabilities that would otherwise be nearly impossible to discover.
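To make the process concrete, here is a minimal sketch of that idea in Python. The target function and its bug are hypothetical stand-ins; a real fuzzing campaign would drive a compiled program or library interface and watch for crashes rather than Python exceptions.

```python
import random
import string

# Hypothetical target: a toy parser with a latent input-validation bug.
# In practice the target would be a real program, library, or interface.
def parse_record(data: str) -> dict:
    key, _, value = data.partition("=")
    if not key:
        raise ValueError("empty key")  # the "crash" the fuzzer will trip over
    return {key: value}

def random_input(max_len: int = 20) -> str:
    # Semi-random data: printable characters of random length.
    length = random.randint(0, max_len)
    return "".join(random.choice(string.printable) for _ in range(length))

def fuzz(target, iterations: int = 1000):
    """Feed semi-random inputs to the target and record any that break it."""
    failures = []
    for _ in range(iterations):
        data = random_input()
        try:
            target(data)
        except Exception as exc:  # an unexpected failure worth investigating
            failures.append((data, type(exc).__name__))
    return failures

crashes = fuzz(parse_record)
```

Each recorded failure is a candidate bug for a developer to triage; the hard, expert work in real fuzzing lies in generating inputs clever enough to reach deep code paths, which is exactly what the article argues machine learning will automate.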
Fortunately, cybercriminals have not tended to use fuzzing to uncover vulnerabilities because it is very hard to do and requires a lot of custom development. Only a tiny group of people have the expertise needed to develop and run effective fuzzing tools – which is also why, in the rare instances when criminals do resort to fuzzing, they tend to limit its use to simple things like DDoS attacks.
However, there is likely a vast number of vulnerabilities that could be discovered and exploited right now in commercially available software and operating systems using fuzzing technologies. The value of owning an unknown vulnerability to target with a zero-day exploit is high, but because there simply haven’t been enough purpose-built fuzzing tools or skilled developers available to discover them, the cost of finding such vulnerabilities has been higher still.
AI makes the difficult possible
As machine learning models begin to be applied to the fuzzing process, the technique is predicted not only to become more efficient and tailored to help developers and researchers, but also to become available, for the first time, to a wider range of less technical individuals.
As cybercriminals begin to leverage automated fuzzing programs augmented by machine learning, they will be able to accelerate the discovery of zero-day vulnerabilities. This will lead to an increase in zero-day attacks targeting different programs and platforms.
This approach is called Artificial Intelligence Fuzzing (AIF). Bad actors will be able to develop and train fuzzing programs to automate and accelerate the discovery of zero-day vulnerabilities. Then, by simply pointing an AIF application at a target, they could begin to mine it automatically for zero-day exploits.
The two machine learning phases of AIF would be Discovery and Exploitation. In the Discovery phase, the AIF tool would learn about the functionality and requirements of a new target, including the patterns it uses for structured data. Then, in the Exploitation phase, the AIF tool would inject intentionally crafted, structured data into that software or interface, monitor the outcome, use machine learning to refine the attack, and eventually force the target to break. This constitutes discovering a vulnerability and an exploit at the same time.
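The two-phase loop can be sketched as a toy feedback-guided fuzzer. This is purely illustrative: the target, its `NUM:NUM` input format, and the specific bug are all hypothetical, and the "learning" here is a trivial stand-in (a mutate-and-keep feedback loop) for the machine learning models the article describes.

```python
import random

# Hypothetical target accepting structured input "NUM:NUM"; it fails on
# one edge case the fuzzer must discover (division by zero).
def target(data: str) -> int:
    a, b = data.split(":")
    return int(a) // int(b)

# Discovery phase (stand-in): infer the input structure from valid samples.
# A real AIF tool would learn these patterns with a model; here we simply
# record the separator and seed a corpus of known-good inputs.
valid_samples = ["1:2", "10:5", "7:3"]
separator = ":"

def mutate(sample: str) -> str:
    # Perturb one field while preserving the learned structure.
    a, b = sample.split(separator)
    if random.random() < 0.5:
        b = str(random.randint(-2, 2))
    else:
        a = str(random.randint(-100, 100))
    return f"{a}{separator}{b}"

# Exploitation phase: inject mutated structured data, monitor the outcome,
# and feed valid inputs back into the corpus to refine further attempts.
def exploit_phase(iterations: int = 2000):
    corpus = list(valid_samples)
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        try:
            target(candidate)
            corpus.append(candidate)  # survived: keep it for more mutation
        except Exception:
            return candidate          # found an input that breaks the target
    return None

crash_input = exploit_phase()
```

The design mirrors coverage-guided fuzzers: structure-aware mutation plus outcome feedback, with machine learning replacing the crude random perturbation shown here.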
This machine learning approach can easily be supervised by a trained cybercriminal, and it can be repeated – allowing a criminal to discover and exploit zero-day vulnerabilities and then run continuous combinations of attacks against a victim.
How AIF will affect the cybercrime economy
For many criminal organisations, attack techniques are evaluated not only in terms of their effectiveness but also in terms of the overhead required to develop, modify, and implement them. As a result, many attack strategies can be interrupted by addressing the economic model employed by cybercriminals rather than by circumventing their attacks. Strategic changes to people, processes, and technologies can force some cybercriminal groups to rethink the financial value of using certain attacks.
One way that organisations are interrupting attackers is by adopting new technologies and strategies, such as machine learning and automation, to take on tedious and time-consuming security activities that normally require a high degree of human supervision and intervention. These newer defensive strategies are likely to impact cybercriminal strategies, causing bad actors to change attack methods and accelerate their own development efforts.
Once the purview of defence researchers, advanced fuzzing is poised to fall into the hands of the criminal community, and AI-powered fuzzing will change the game for both attacker and target. One effective method of counter-attack is to go after the underlying economic model of criminal organisations. Instead of getting caught up in a perpetual arms race, organisations need to leverage automation, machine learning and AI themselves to anticipate threats and change strategies so that it is no longer economically viable for adversaries to attack.