Beyond DevOps: Security vs. Speed?

Several problems arise when the harm of software failure cannot be treated as an unbound variable

Fail fast, fail often. Yeah, but the first failure blew up the satellite. Well, this is just a photo-sharing app... not rocket science. Okay, but your photos are accessed by users who have passwords that they probably use for other things... and aren't some photos as important as satellites?

Several problems arise when the harm of software failure cannot be treated as an unbound variable. Here are some thoughts on two of them; I'll write about two more (one cognitive, one computational) later.

Problem 1: Identity Persists Across Non-Obviously Coupled Systems (So the Stakes Are Higher Than Your Application)
Worse: security failures cascade well beyond physically contiguous realms (if root, then everything) into physically decoupled systems, via informational channels (shared passwords, mailboxes) or physical-but-accidental ones (power cut, then reboot). The brilliant and terrifying Have I been pwned? tool -- to say nothing of the astonishing, air-gap-annihilating Stuxnet [pdf] -- surfaces two obvious but easy-to-forget truisms: simply keeping data that should not be accessed by X off the same disk as data that can be accessed by X is not good enough, and the danger posed by access to one application may be slim compared with the danger posed by access to something more serious via the identity compromised by an in-itself non-dangerous breach.
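(Incidentally, the same tool makes the point concrete in code. Have I been pwned? exposes a public password range API built on k-anonymity: you send only the first five hex characters of the password's SHA-1 and match the rest locally, so the password itself never leaves your machine. Here's a minimal sketch in Python; the helper function and its name are my own illustration, not part of any official client.)

    import hashlib
    import urllib.request

    def times_pwned(password: str) -> int:
        """Return how often a password appears in the Pwned Passwords corpus.

        Only the first 5 hex chars of the SHA-1 are sent (k-anonymity);
        the full hash, and the password, never leave this machine.
        """
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        req = urllib.request.Request(
            f"https://api.pwnedpasswords.com/range/{prefix}",
            headers={"User-Agent": "password-check-sketch"},
        )
        with urllib.request.urlopen(req) as resp:
            body = resp.read().decode("utf-8")
        # Each response line is "<35-char suffix>:<count>"
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate.strip() == suffix:
                return int(count)
        return 0

    if __name__ == "__main__":
        print(times_pwned("password123"))  # a depressingly large number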

So even if 'fail fast' is okay for your application, it may not be okay for your users. The result: natural tension between the ideal of continuous delivery -- or even Agile more broadly, or even heavily iterative development in general -- and security.

And while one of the major insights of Agile is that the best refiner is the real world (as opposed to the limited imagination of the planners), one of the major embarrassments of InfoSec is that some 95% of security breaches involve human error. For Agile, failure is falling until you can walk. For InfoSec, failure is letting the terrifying cat out of the poorly designed bag. Post-breach, maybe you've started to salt your hashes (congrats, you're now more cryptographically sophisticated than Julius Caesar), but your users' passwords are already in the wild.
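(For the curious: "salting your hashes" just means storing a fresh random salt per user and running the password through a deliberately slow key-derivation function, rather than a bare, Caesar-grade hash. Here's a minimal sketch using only Python's standard library; the function names and the iteration count are illustrative assumptions, not a recommendation tuned to your threat model.)

    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # A fresh random salt per user: identical passwords produce different
        # hashes, and precomputed rainbow tables become useless.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
        return hmac.compare_digest(candidate, stored)  # constant-time comparison

Of course, none of this helps the credentials that already leaked -- which is exactly the asymmetry described above.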

Problem 2: You Have Actual Human Enemies (So Something Smarter Than Chance Is Trying to Outsmart You)
Even on sheer randomness, the Internet is getting more dangerous (Akamai records startling DDoS increases over the past year: 122% for application-level (OSI Layer 7) attacks alone). But the really scary problem is that real, smart, often well-funded humans are trying to make your software do what you didn't design it to do. For most failures, the enemy is "imprecise requirements" or "poor algorithm design" or "inadequately scalable environment" (or even just blundering users); for security failures, the enemy is malicious engineers.

This is the meatiest bit of the (otherwise slightly theatrical) Rugged Manifesto:

I recognize that my code will be attacked by talented and persistent adversaries who threaten our physical, economic and national security.

Yeah. So engineer.add(<malice, talent, persistence>), return ???? -- and multiply(????, world.get(amountEatenBySoftware)) = ????!!!!!

If DevOps is a management practice, then a risk of ????!!!!! is pretty much unacceptable.


None of this, of course, means that Agile isn't an awesome idea. Nor am I suggesting that security can't be baked into an iterative, continuously improving process -- certainly it can, but on the face of it this seems to require a bit of finagling. And of course the proper way to address security will always be risk analysis, with a good lump of threat analysis included in any measure of technical debt.

I'd love to take some taxonomy of software errors (security errors in particular, perhaps) and cross-tab cost per error type against cycle time -- i.e., set each error that cost d dollars against the length of the cycle during which it was introduced -- normalizing by the estimated technical debt accrued during each cycle (assuming somebody measured that at the time, which probably didn't happen). Maybe someone has already done this (I've certainly seen plenty of cost-per-error breakdowns, but none correlated with cycle time); and since technical debt is something of a guess anyway, maybe anecdotes are a better gauge of the security cost of "shift left" after all.
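(To make the proposed cross-tab concrete: if someone had recorded all three variables, the analysis itself is a few lines. Everything in the sketch below -- column names, categories, numbers -- is invented for illustration; it's pandas, not data.)

    import pandas as pd

    # Hypothetical per-error records: category, remediation cost (dollars),
    # length of the cycle in which the error was introduced (days), and
    # estimated technical debt accrued during that cycle (arbitrary units).
    errors = pd.DataFrame({
        "error_type":   ["injection", "auth", "logic", "injection", "auth"],
        "cost_dollars": [120_000, 45_000, 8_000, 60_000, 15_000],
        "cycle_days":   [30, 7, 7, 14, 30],
        "cycle_debt":   [12.0, 3.5, 3.5, 6.0, 12.0],
    })

    # Normalize cost by the technical debt taken on during the originating
    # cycle, then cross-tab error type against cycle length.
    errors["cost_per_debt"] = errors["cost_dollars"] / errors["cycle_debt"]
    crosstab = errors.pivot_table(
        values="cost_per_debt", index="error_type", columns="cycle_days", aggfunc="mean"
    )
    print(crosstab)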

Anyone have any experiences they'd like to share?

More Stories By John Esposito

John Esposito is Editor-in-Chief at DZone, having recently finished a doctoral program in Classics from the University of North Carolina. In a previous life he was a VBA and Force.com developer, DBA, and network administrator. John enjoys playing piano and looking at diagrams, and raises two cats with his wife, Sarah.