Written by Dr. Pedram Hayati, Founder of SecDim.
Tl;dr: Secure programming is boring for developers unless we make it interesting and challenging.
There are countless online courses and training programs that claim to teach developers how to write less buggy, more secure code. A common trend is to "turn a developer into a hacker": the training focuses on detecting and exploiting common security vulnerabilities, mostly drawn from the OWASP Top 10. We then hope this will make our "forgetful" developer think twice before introducing the next XSS.
Unfortunately, this approach to teaching is not going to change developers' way of thinking about security, let alone create the interest required to make them proactive in writing secure code. I summarise the problem in three common areas:
Builders vs breakers mindset
Treating security bugs like other bugs
Lack of focused practice
Builders vs Breakers Mindset
As security professionals (e.g. penetration testers, bug hunters), we love the journey from vulnerability identification to system compromise. The joy of turning a vulnerability into Remote Code Execution (RCE) is the utopia of a security assessment. But is that so for a developer? In other words, if in a training course we teach them how to find a vulnerability and turn it into a full system compromise, would they stop introducing similar vulnerabilities in their code?
Unfortunately, what excites a security professional is not going to excite others. It is amusing to see how an RCE works, but at the end of the day, a developer needs to build, not to break.
Developers are engineers: they like to build and see their programs address a problem. They want to build a bridge; they are not much interested in how to blow the bridge up in a controlled manner (i.e. vulnerability exploitation).
It can be fun to find and exploit a security vulnerability, but that is not the aim of secure programming training. We cannot train developers to build a secure program by showing them an example of how it can break.
Formally speaking, given a specific input (i.e. an attack payload), we show them a proof (i.e. a security attack) that results in an unanticipated program output. Neither the proof nor the input reveals the root cause that must be addressed to eliminate the issue (i.e. the security vulnerability).
This brings up the second problem.
Treating Security Bugs Like Other Bugs
I often show the following code snippet in my Defensive Programming workshops to a crowd of senior software engineers and ask them:
“What do you think this patch is for?”
/* Read type and payload length first */
if (1 + 2 + 16 > s->s3->rrec.length)
    return 0; /* silently discard */
hbtype = *p++;
n2s(p, payload);
if (1 + 2 + payload + 16 > s->s3->rrec.length)
    return 0; /* silently discard per RFC 6520 sec. 4 */
pl = p;
With some hints, they can point out that this is the patch for the infamous Heartbleed security bug. I then follow up with another question:
“Do you think there are no other Heartbleed type bugs (or buffer over-read bugs) in other parts of this library?”
And the answer is a shy smile meaning: "Are you kidding? Of course there are, or will be."
The Heartbleed patch is an example of a common method of addressing security bugs: "If-Statement Post-Release Patchwork" (ISPRP). It is what happens when we show developers a security attack (the proof) and how it can be triggered (the input), then leave them alone to figure out the fix. The result is the quickest and smallest code change that handles the trigger, dangerously leaving the root cause intact.
There are many other public incidents of ISPRP where the patch was incomplete and multiple attempts were made to fix it; in the end, a complete refactoring was performed, or the bug was simply declared a feature!
ISPRP points to a bigger problem: we do not get to the root of a security bug. Developers mistakenly treat a severe security bug the same as any functionality bug and believe that by getting rid of the symptom, the program is cured. Some may go further and search the codebase for similar patterns of the bug, but they still apply the same patch.
So where is the root cause? For many classes of security bugs, I look nowhere other than the design of the software.
More often than not, our software is modelled very loosely and has been through very few design iterations. It does not effectively model the problem it tries to solve, and therefore it does a lot more than it should. In other words, our programs are too powerful.
Such loosely modelled programs have non-deterministic behaviours. These behaviours result in unforeseen run-time exceptions and, more dangerously, in security vulnerabilities.
Let's go through one simple example. Suppose our program stores a person's age. Commonly, the int data type is used to model age. It may sound like a good choice, and it is obviously far more restrictive than a string type. But int (i.e. int32) in most programming languages ranges from -2,147,483,648 to 2,147,483,647. That is not a good model of age: negative numbers, or anything beyond 150, do not reflect reality. Moreover, we have edge cases: if a user enters 2,147,483,648, the number wraps around to -2,147,483,648 (a numeric overflow vulnerability).
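As a minimal sketch of the wrap-around (assuming a typical 32-bit two's-complement int; strictly speaking, the result of the conversion is implementation-defined in C):

#include <stdio.h>

int main(void) {
    long long entered = 2147483648LL; /* what the user typed */
    int age = (int)entered;           /* wraps to -2,147,483,648 on common platforms */
    printf("stored age: %d\n", age);
    return 0;
}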
Now, here is where the problem starts. We try to address the edge cases by placing if-statements that validate the input:
#include <stdbool.h>
#include <stdlib.h>

/* Reject ages outside a plausible human range. */
bool validate_age(int input) {
    if (input < 0 || input > 150)
        return false; /* caller must treat this as a fatal error */
    return true;
}

/* ... every call site has to remember to validate first: */
if (!validate_age(input))
    abort();
do_something_with_age(input);
Has the problem of age gone away? Unfortunately, the program still suffers from a bad design and is vulnerable.
First defect: what if the input is already beyond the integer range when it is passed to validate_age? In that case we validate an already wrapped-around value.
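For instance, here is a hypothetical illustration reusing validate_age above: on a typical 32-bit two's-complement platform, an input of 4,294,967,338 wraps to 42 before validation ever runs, so the check happily accepts a value the user never entered.

/* Hypothetical illustration: the wrap-around happens before validation. */
long long entered = 4294967338LL; /* what the attacker actually sent */
int input = (int)entered;         /* wraps to 42 on a typical 32-bit int */
validate_age(input);              /* returns true: 42 looks perfectly valid */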
Second defect: as the codebase and the program grow, someone forgets to put validation before a new method that uses age. The security check is enforced only by coding convention, which means our development team must always be on high alert for cases where the check is missing.
The root cause is a design flaw: int is a loose representation of age, and our program should not allow such a representation to exist in the first place. We can follow a defensive design pattern that makes unsafe states unrepresentable, as sketched below. Then we no longer need to worry about missing security checks, because dangerous inputs cannot even be represented in our software.
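Here is a minimal sketch of that pattern in C. The Age type and the age_new constructor are hypothetical names: the idea is that the struct is private to one module, so the constructor is the only way to obtain an Age, and the invariant is enforced exactly once.

#include <stdbool.h>

/* Keep this definition private to one module (e.g. only forward-declare
   Age in the header) so callers cannot forge a value by hand. */
typedef struct { int value; } Age;

/* The single gate through which every Age must pass. */
bool age_new(int input, Age *out) {
    if (input < 0 || input > 150)
        return false; /* the unsafe state is never constructed */
    out->value = input;
    return true;
}

/* Any function that accepts an Age can trust it by construction;
   there is no scattered validation to forget. */
void do_something_with_age(Age age);

Code that holds an Age carries proof that validation has already happened, so the second defect disappears by design.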
As you can see, even a simple numeric overflow vulnerability may require a redesign to be addressed safely. This is the critical part that must be covered in developer security training.
Lack of Focused Practice
Imagine how much dedication is required for a person to learn how to swim. One day? One week? Not really. It takes multiple months of consistent practice just to learn the basics, and it should be done under supervision.
Secure engineering is no different, yet we expect our developers to learn and apply it after a couple of days of training.
Unfortunately, as much as we would like to obtain a skill quickly, it doesn't happen fast. We need months, in some cases years, to acquire the basic skills, let alone master them.
Software security is complex and building usable secure software is even more complicated. Secure software engineering is a new way of thinking, modelling, designing and implementing and it is new to many developers.
Our developers are accustomed to building what software should do. Secure software engineering is in complete contrast to the way most developers think: it is about what software should not do. Being able to find edge cases that could lead to a security bug, and to write security tests for them, are advanced skills.
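As a small, hypothetical sketch of what such a "should not" test can look like (reusing the age_new constructor from the earlier sketch), we assert the states the program must refuse to represent:

#include <assert.h>

void test_unsafe_ages_are_unrepresentable(void) {
    Age age;
    assert(!age_new(-1, &age));  /* a negative age must be rejected */
    assert(!age_new(151, &age)); /* beyond the plausible human range */
    assert(age_new(42, &age));   /* ordinary input still works */
}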
Through one-off training, our developers may gain knowledge of common security anti-patterns. To turn that knowledge into a skill, we need to offer them a practice pathway: a focused pathway over multiple months that allows them to practise and fully understand not just the whats but the whys of the non-deterministic behaviours of software that lead to security bugs.
Wrap Up
I have summarised the problems of ineffective developer security training in three areas: the builders vs breakers mindset, treating security bugs like other bugs, and lack of focused practice.
If we are after creating a secure software engineering culture, we should first make the connection between security and what developers like about programming. We have known for a long time that factors such as "compliance push", "internal policy requirements" or "fear" do not change the culture; they invite developers to find shortcuts. Instead, we can make secure software engineering as fun and engaging for our development team as every other aspect of software engineering.
A good start is to learn developers' techniques and tooling. We can then align our teaching with what they like and are familiar with.
Addressing a security bug can be complex, but we are not out of luck. There are many great practices in both the software engineering industry and computer science that try to solve similar problems (e.g. Domain-Driven Design intrinsically addresses some software security challenges). We can tap into that knowledge, emphasising its benefits while showing developers how the same practice can also solve a security problem.
Overall, the impression we want to leave with our developers is that well-engineered, well-designed software is also secure.
After a decade of making software do weird things and reporting hundreds of security vulnerabilities to Fortune 500 enterprises, Dr. Pedram Hayati, Founder of SecDim, realised there is a better way to address software security. He founded SecDim to teach software engineers how to build secure software from the ground up. He lectures in cyber security at the Australian Defence Force Academy, UNSW and Curtin. In his free time, he founded SecTalks, a multinational non-profit security community.