If It's Stupid and it works, It's not Stupid!

February 25, 2012  |  Conrad Constantine

One of my favorite ways to explain threat modelling to people outside the field starts with a little humor:

A martial arts instructor is teaching a new class; wanting to impress the students with his flashy techniques, he picks on the frailest-looking new student and instructs him to attack…

…the student, who has never been in an actual fight before, comes at the instructor, arms flailing like windmills, his ineffective blows raining down on the instructor, who struggles to deflect them all. Eventually the instructor becomes annoyed, and finds himself backing away from the student, for fear of catching a random slap in the face from the chaotic attempt at attacking him.

The teacher ends the exercise, regains his composure, and addresses the class:

“So as you can see here, he made a common rookie mistake, and was attacking me incorrectly…”

Many months later, the student had learned to attack the teacher ‘correctly’, but never once did he manage to beat the teacher like he did on that first day.

So far the world has not ended, giving us little indication that it will do so in the future…

If I think back to all the debates and arguments I’ve had over my career in information security, one topic always seems to spring to mind as the source of the greatest share of them: threat modelling. Let’s take a quick tally and see how many of these lines you have heard (or said yourself) before:

“That weakness is non-trivial to exploit”

“That exploit has never been seen in the wild”

“That protocol has never been compromised”

“There’s never been a remote privilege escalation exploit for that”

…does everyone have their hand raised yet? The human brain seems great at predicting future non-occurrence of things based on a historical trend of non-occurrence, a perceptual bias illustrated most clearly in Taleb’s Black Swan Theory, the idea that:

an event is a surprise to the observer, has a major impact, and is rationalized after the fact

For many of us in infosec, the black swan comes in the form of the “I hadn’t considered that kind of attack” moment.

One of my favorite phrases about programming is Brian Kernighan’s line: “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?”, which makes for a great maxim when boiled down to its essence:

If you design it to the best of your ability, you are by definition incapable of finding the flaws in it

Between these two ideas lies a terrible trap that I’ve seen a great number of security designers, risk managers and all manner of information security personnel fall into; a kind of tunnel vision where they focus on historical evidence as the sole indicator of future trends in the field, forgetting that:

On a long enough timeline, everyone’s survival rate goes to zero

And although this pessimistic line from Fight Club seems like broad wisdom in the most meaningless sense, it’s worth pointing out that the important part is how the chance of survival declines over time (not that failure becomes inevitable). My survival chances may still be great next year, but they’re likely lower than they are today.

When designing systems, we focus on the obvious requirements, and then set goals of delivering beyond expectations on those core requirements, while being merely adequate in secondary functions and aspects. When the time comes to improve on the design, we tend to once again focus on the core functionality at the expense of the overall implementation and experience. When applied to information security, we have a design model that creates what Bruce Schneier once dubbed:

‘Iron Doors in Paper Walls’

And yet every single one of us instantly understands the wisdom of:

The Chain is only as strong as its weakest link

The problem is that most of us aren’t trained to see weakness in the first place: unless the weakest link in that chain is obviously damaged, corroded, or has some other factor that forces it to stand out from the rest, it’s difficult to tell which link is the weakest without direct destructive testing. As links break, we may replace them with new links, yet still assume that the links that were the strongest when the chain was first constructed remain that way even after significant maintenance…

At Least They Left the Locks Unbroken…

Imagine this: one day, a bank is robbed and all the money is taken, gone. The bank carries out an investigation into the robbery and discovers that none of its security was compromised. No locks were picked, no alarms were triggered, the vault itself was never breached; the money is simply gone, and the how and why of it doesn’t matter here. The official report concludes that since none of the security measures were defeated, they are perfectly adequate and will not be changed.

This seems insane, right? And yet we see stories that invoke this every day: systems being compromised (usually through some ridiculously simple oversight) in a manner that the designers of the system had never accounted for, yet because the attack renders the existing security measures irrelevant rather than defeating them, the system itself is judged to still be ‘fundamentally sound’.

Most commonly (and to be fair to the designers of systems involved in real-world examples of this scenario), the weakness exploited by the attackers is not in the security system itself, but in how that system is implemented in the wild. To the end user, however, the details are irrelevant: a breach is a breach is a breach…

Assuming a Sphere in a Vacuum

Physics is complicated. Modelling a believable physics simulation is very complicated - but at least you can leave out some factors. Modelling actual real-world physics is extremely complicated - you can’t leave out any factors. Physicists trying to express new discoveries realize that the mathematics and the outcome change a great deal depending on which of those factors apply. So, to simplify things, they tend to baseline their mathematical descriptions of effects within the universe as describing the behavior when applied to a sphere, in a vacuum.

Security works much the same way - those external factors often have the most significant effect on our sphere, and yet we try to do threat modelling in a perfect vacuum.

The Tunnel Vision of Expertise

One of the biggest traps of having large quantities of specialized knowledge in a specific domain is that it inevitably adjusts your perception to fit the model of that knowledge. Remember that part about ‘if you design it as well as you are able’…

…Ask a cryptographer to assess the security of a cryptosystem, and he will likely talk about entropy, keyspace distribution and quality of moduli, stream versus block analytical differences, perhaps the issue of secure key management; rarely will the subject of how to protect one of the keyholders from divulging the keys under torture arise as a possible threat to the cryptosystem. (Yet this is a threat that anyone, even with no cryptography knowledge, could imagine, since in their perception of the issue, the non-cryptography knowledge they have far outweighs the cryptography knowledge they have in building their own mental model.) In fact, these ‘stupid’ approaches to cracking crypto both have nicknames earned from being real-world practices: Rubber Hose Cryptanalysis and Black Bag Cryptanalysis.

Threat-Modelling Shadowboxing

It should not be surprising at this point that, if I have a single point to communicate in this article, it is this: if you are doing your own threat modelling, you’re doing threat modelling wrong. The perceptual bias is inherent, and the focus will be on the threat methods that the system is already designed to counter. The secure portions of the system will be either subverted or routed around via the insecure portions of the system.

In the past few weeks we’ve been covering the Sykipot Trojan here on AlienVault Labs, and a newly-minted version of it that hijacks military access smartcards, effectively bypassing what is, on paper, an extremely secure access system. The security of the entire system is brought to its knees, rendered null and void, by a threat that is completely absent from the threat modelling done on this system. The advanced cryptography used in the cards is not defeated, and the authentication methods are not bypassed; indeed, the integrity of the authentication method itself is a crucial factor for the attackers to maintain, so that they may act undetected, appearing to an outside observer to be a legitimate operator. Instead of attacking the smartcard authentication at its strongest point (the authentication mechanism), Sykipot hijacks the surrounding environment (the operating system) and attacks the oldest vector in the security handbook - the smartcard middleware’s assumed trust of the host operating system.

(Any good threat modelling methodology will, in fact, place early emphasis on the inherent trust model and investigate exploitation of that model as the most likely place to find viable attack vectors.)
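As a toy illustration of what that early emphasis can look like, here is a minimal sketch in Python - the component names and trust relationships are hypothetical, loosely modelled on the smartcard scenario above - that simply enumerates who implicitly trusts whom and flags the unverified assumptions for review first:

```python
# Toy sketch: enumerate implicit trust relationships in a system design and
# flag the unverified ones for review. Component names and relationships are
# hypothetical, loosely based on the smartcard scenario described above.

from dataclasses import dataclass

@dataclass
class TrustAssumption:
    truster: str      # component extending the trust
    trusted: str      # component being trusted
    assumption: str   # what is being assumed
    verified: bool    # is the assumption actually enforced or verified?

trust_model = [
    TrustAssumption("smartcard middleware", "host operating system",
                    "OS and its input channels are not attacker-controlled", False),
    TrustAssumption("authentication server", "smartcard middleware",
                    "PIN entry and signing requests originate from the real user", False),
    TrustAssumption("smartcard", "card reader",
                    "reader relays commands faithfully without injection", True),
]

# Unverified trust assumptions are the most likely place to find viable
# attack vectors - review these before worrying about the cryptography.
for t in trust_model:
    if not t.verified:
        print(f"REVIEW: {t.truster} trusts {t.trusted} - {t.assumption}")
```

Even a list this crude makes the Sykipot-style bypass visible on paper: the weak point is an assumption, not a mechanism.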

Attackers will always choose the path of least resistance; as defenders, we strive to raise the bar on our weakest points, both in terms of their number and their ease of exploitation. The battle in good security design is not to make unbeatable defenses, but to force your attacker to attack you in ways you are capable of mounting a sustained defense against. Medieval castle design is a masterwork of this doctrine, location being one of the first considerations in a defensive configuration that allows attackers only a single, well-understood approach to the bastion.

And yet all the care and forethought in the design of your security goes out the window when threat modelling is applied only to the specification and not the implementation. The castle’s specification requires that the main drawbridge be at least three feet thick, banded with iron, able to resist the impacts of the heaviest battering rams ever constructed, and approachable only by a single lone road… meanwhile, in a separate specification, the castle’s sewers are required to be large enough to allow an average-sized adult to walk through them freely. The two authors of these specifications never meet, nor communicate, before the castle is built.

And yet effective bypasses of defenses due to these details of implementation are always met with irate rebuttals that the essential security mechanism remains secure; an argument that holds as much water as a bank manager arguing that, since his staff was tricked into handing over vast sums of money to criminals masquerading as armored car drivers, the extensive security of the bank’s vault and alarm systems is still intact. Criminals, by and large, are results-driven operators who have no particular interest in how their attack succeeded, only in the success itself. It is only in the world of film and fiction that the ‘audacity of the crime’ and the pride of pulling off the ‘perfect heist’ are driving factors. In the real world, the simplest and most effective methods win out - the goal is not to carry out a breach in a way that inspires awe and respect for its complexity and ingenuity, but to succeed, at any cost.

And if it’s stupid, but successful… it’s not stupid.

So when considering the possibility of a threat vector, or the feasibility of a working exploit - take the following into account:

  • What is the return on investment for a successful implementation of this attack? If the rewards are greater than the resources required to carry it out, it will be implemented if necessary. Even setting aside the fact that, all too often, the individuals who assess exploitation as non-trivial have very little experience in exploitation themselves, the driving factor is the complexity relative to other available avenues. Don’t assess things as isolated factors, but by their place in the larger attack surface.
  • Are you focusing on a particular part of an attack vector that may not be relevant at all? Could a change in an interdependent technology or configuration make this ‘non-trivial’ weakness several orders of magnitude more trivial? A classic illustration of this from the cryptography world lies with symmetric ciphers, where the robustness of the cipher becomes irrelevant once the common key must be shared without a secure channel being available - as the dynamics of implementation change, so does the triviality of the problem (see the sketch below).
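To make that last point concrete, here is a minimal sketch assuming the third-party Python ‘cryptography’ package: the cipher itself (Fernet, an AES-based construction) is perfectly sound, but an implementation that has no secure channel and simply sends the key alongside the ciphertext makes breaking it entirely trivial:

```python
# Minimal sketch (assumes the third-party 'cryptography' package is installed):
# the symmetric cipher is robust, but this implementation has no secure channel
# for key exchange, so it transmits the key along with the message.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                       # strong symmetric key
token = Fernet(key).encrypt(b"wire $1M to account 42")

# No secure channel available, so the key rides along with the ciphertext...
wire_message = key + b"|" + token

# ...and an eavesdropper needs no cryptanalysis at all:
stolen_key, stolen_token = wire_message.split(b"|", 1)
print(Fernet(stolen_key).decrypt(stolen_token))   # b'wire $1M to account 42'
```

The ‘non-trivial’ weakness in the cipher never even comes into play; the implementation detail is the whole attack surface.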

Practical Considerations

Let’s conclude with a few practical examples of how to apply this mentality to security monitoring. When building out your SIEM program it is logical to start from the most important devices down - beginning with the security controls and moving on through infrastructure, critical systems and choke points of activity. Time and resource constraints often mean that ‘less important’ systems don’t get logged into the SIEM at all; perhaps that test server for your IT support ticketing system doesn’t log to the SIEM because, after all, it doesn’t hold any critical data, or that cluster of VMs the admins use to test new builds of desktop images on: it would just be stupid to log everything from those…

Well, in the words of one of my favorite security presentations from 2010 (“Why the Black Hats Always Win”), you don’t get to pick what is important on your network, your attackers do: the most important systems on your network are the ones that enable their goals. It’s worth taking a look at some of the more ‘stupid’ ways an attacker could reach those goals - like finding a collection of test tickets on that staging server detailing logins and emergency credentials for more critical systems, or exploiting a five-year-old vulnerability on a forgotten virtual machine that accidentally got autostarted with the latest reboot of the ESX server it resides on. Start taking stock of all the little ‘stupid things’ you know to exist in your infrastructure and business processes and start bringing them into the security monitoring program (a minimal sketch of one way to begin appears below).
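One way to start taking that stock is simply to diff your asset inventory against what is actually forwarding logs to the SIEM. The sketch below assumes two hypothetical plain-text files (the names and format are made up for illustration), one hostname per line:

```python
# Minimal sketch: find the 'stupid little things' that never made it into the SIEM.
# Both input files are hypothetical, one hostname per line:
#   assets.txt       - every host in the inventory (test boxes, forgotten VMs...)
#   siem_sources.txt - hosts currently forwarding logs to the SIEM

def read_hosts(path):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

inventory = read_hosts("assets.txt")
logging_to_siem = read_hosts("siem_sources.txt")

# Anything in the inventory but not logging is a monitoring blind spot -
# exactly where 'stupid but successful' attacks tend to go unnoticed.
blind_spots = sorted(inventory - logging_to_siem)
for host in blind_spots:
    print(f"no SIEM coverage: {host}")

print(f"{len(blind_spots)} of {len(inventory)} inventoried hosts are not logging")
```

The point is not the script itself but the habit: enumerate the unglamorous systems first, then decide deliberately which ones stay dark, rather than letting them stay dark by default.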

Consider that your organization’s assessment of system criticality may be inversely proportional to those systems’ worth to an attacker, and build your threat models from the bottom up rather than the top down.

Because in hindsight, after being breached, it will be “all the little stupid things that allowed it to happen” that keep you awake at night.
