But regardless of what language you code in, every week brings a new release patching reported vulnerabilities. Scripting languages, the software used to develop millions of sites and applications, have more security vulnerabilities than compiled languages because they are interpreted at runtime rather than compiled ahead of time, a recent study by the application security firm Veracode concluded.
But what if the security flaws are in the language itself? Or in the interpreter commonly used with that language, whether in browsers or as part of a given OS, to run the code? IOActive researcher Fernando Arnaboldi presented a detailed security analysis of five popular programming languages and their interpreters at Black Hat Europe last week, and each of the five had one or more significant vulnerabilities.
Suddenly developers' efforts to write good, tight code are compromised. It doesn't matter how clean your code is if the underlying language, framework or interpreter has gaping holes that can be exploited.
And the situation is made worse because the problematic functions Arnaboldi has surfaced are not in obscure, seldom-used nooks and crannies of these languages and frameworks; they are in functions found in everyday code.
For example, rake tasks are an extremely common way for production Ruby on Rails applications to handle things like abandoned onboarding flows and shopping carts. Asking every affected engineering team to recode around this vulnerability would give any manager pause.
Instead, developers rely on other developers further up the code chain to do the right thing and eliminate vulnerabilities this significant as a class.
| Language | Implementations | Vulnerability |
| --- | --- | --- |
| Python | CPython, PyPy, Jython | Contains undocumented methods and local environment variables that can be used for OS command execution. A hacker can exploit these undocumented language methods. |
| Ruby | Ruby, JRuby | While JRuby's implementation of rake works the same as Ruby's, JRuby will happily run a remote rakefile that returns useful information about the base application, such as usernames, passwords and other sensitive data. This "logic leak" leaves open a huge exploit for hackers to target certain sites and harvest user data. |
| PHP | PHP, HHVM | Certain common functions, such as shell_exec(), can execute remote code when undefined constants are passed in: code sent to an application in the hopes of hijacking it. |
| Perl | Perl, ActivePerl | One of Perl's default modules, ExtUtils::Typemaps::Cmd, contains a subroutine, embeddable_typemap(), that can also be used to execute remote code. |
These security issues were all automatically detected using something called a differential fuzzer. Normal "fuzzers" automate testing for security vulnerabilities by submitting unexpected or corrupt data. The fuzzer Arnaboldi authored goes beyond this automated testing of edge-case data to trigger potential vulnerabilities: it seeks out latent functionality in the language or framework that constitutes a security issue.
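To make the basic technique concrete, here is a minimal fuzzing sketch in Python. It is not Arnaboldi's tool; the mutation strategy, the stdlib `json.loads` target and the valid seed document are all illustrative assumptions. The idea is simply to mutate a known-good input and record any exception that is not the parser's expected failure mode:

```python
import json
import random

def mutate(seed: bytes) -> bytes:
    """Randomly flip, insert, or delete a few bytes in a seed input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 3)):
        op = random.choice(("flip", "insert", "delete"))
        if op == "flip" and data:
            i = random.randrange(len(data))
            data[i] ^= 1 << random.randrange(8)
        elif op == "insert":
            data.insert(random.randrange(len(data) + 1), random.randrange(256))
        elif op == "delete" and data:
            del data[random.randrange(len(data))]
    return bytes(data)

def fuzz(target, seed: bytes, rounds: int = 1000):
    """Run `target` on mutated inputs; collect unexpected exceptions."""
    crashes = []
    for _ in range(rounds):
        case = mutate(seed)
        try:
            target(case.decode("utf-8", errors="replace"))
        except ValueError:
            pass                      # an expected parse failure, not a bug
        except Exception as exc:      # anything else is worth investigating
            crashes.append((case, exc))
    return crashes

# Example: fuzz the stdlib JSON parser starting from a valid seed document.
crashes = fuzz(json.loads, b'{"user": "alice", "id": 42}')
```

A real fuzzer adds coverage feedback, corpus management and crash triage on top of this loop, but the mutate-run-observe cycle is the core.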
XDiFF is a framework written in Python that runs on Linux, Windows, macOS and FreeBSD and compares how different inputs behave across language versions, implementations and operating systems. These automated test runs can detect, as in this case, open vulnerabilities, logic holes and "forgotten functionality" that can be maliciously exploited.
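XDiFF itself is far more elaborate, but the differential idea, run the same input through several implementations of the "same" operation and flag any disagreement, can be sketched in a few lines of Python. The two "implementations" below (`int` and `json.loads` as integer parsers) and the input list are illustrative assumptions, not findings from the talk:

```python
import json

def run(impl, value):
    """Run one implementation, capturing its result or exception type."""
    try:
        return ("ok", impl(value))
    except Exception as exc:
        return ("error", type(exc).__name__)

def differential_test(inputs, impls):
    """Feed each input to every implementation; report disagreements."""
    findings = []
    for value in inputs:
        results = {name: run(impl, value) for name, impl in impls.items()}
        if len(set(results.values())) > 1:   # implementations disagree
            findings.append((value, results))
    return findings

# Two ways to parse an integer that should, naively, always agree.
impls = {"int": int, "json": json.loads}
inputs = ["1", "42", "1_0", " 7 ", "-3"]
findings = differential_test(inputs, impls)
# "1_0" parses as 10 via int() but is rejected by json.loads():
# exactly the kind of divergence a differential fuzzer surfaces.
```

Each divergence is a place where one implementation accepts behavior the other rejects, and that gap is where forgotten functionality and exploitable inconsistencies tend to hide.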
As developers, as opposed to open source programming language maintainers, there's not much we can do. But given that XDiFF itself is open source and the testing protocol is clearly outlined here, we can at least submit security bug reports. Python, Node.js, JRuby, Ruby and Perl all have reporting mechanisms where you can submit security issues you can replicate from Arnaboldi's findings. These vulnerability reporting mechanisms are set up in the hopes that fixes can be implemented before a vulnerability is widely known, and most severe vulnerabilities get patched within a week or two. But as with most open source development, when those patches will get written is unknowable and out of the hands of application developers.