Application and Cyber Security Blog: a Security Innovation blog covering software engineering, cybersecurity, and application risk management


Sony CISO Reporting to Executive Management. Maybe Cyber Security Czar will follow suit?


In my previous blog, I talked about how I was encouraged that Sony was going to create a CISO position, but disappointed that the role would report to the CIO (a position that I feel has an inherent conflict of interest with the CISO role). However, I got some great news last week: Philip Reitinger was named the firm’s new senior vice president and CISO, and he will report to the company’s executive vice president and general counsel.

This is encouraging because Sony is now aligning security and the CISO position more with the risk, liability, legal, and compliance areas. This is the polar opposite of a CIO or CTO, who is all about efficiency, uptime, and making things more accessible, faster, etc. Somebody inside of Sony has the right idea and is being listened to, which is a very good sign.

Hopefully someone in the Obama administration will see the light too. This is analogous to the failings of Obama (and Bush before him) to recruit and retain an impactful Cybersecurity Czar. Where the Czar reports is inconsistent with giving the position real authority. The NSA still holds responsibility for cyber security, and until that changes (or there is a reporting line between the NSA and the Czar), it will remain mainly a figurehead position. The Czar can write all the policies and make all the speeches they want, but they have no authority to drive meaningful change because the NSA isn't accountable to the Czar's policies.

This is one reason I like Langevin's bill - it changes the reporting structure, makes accountability measurable for all agencies and contractors, and creates a position reporting to the President that will oversee and influence the work of DHS (the group that is directly accountable for implementing and assuring the new cyber security measures and requirements). It even calls for punitive measures for failure, as well as regular audits and monitoring (not just paper audits) to make measurement more automated and routine.

Encouraging, very encouraging.

Why responsible disclosure is the best choice for Security Innovation


There is a wide range of ways to disclose vulnerabilities discovered in software. Some people believe it is best to immediately alert the public of a vulnerability as soon as it is found, while others feel it is best to quietly work with the software vendor to fix the vulnerability before public notification. There are opinions that range between these two extremes, but responsible disclosure leans toward giving the software vendor a reasonable amount of time to make a fix before telling the world about the problem.

Public disclosure starts the race for attackers, victims, and developers at the same time. As soon as attackers catch wind of the problem, they will start weaponizing the vulnerability by creating malicious exploits. In response, software teams will scramble to fix the problem and deploy a patch to their users. Users of the system have to stay on their toes to install the patch, and perhaps even modify firewalls and intrusion detection systems to mitigate their risk.

This is, quite often, the popular mechanism for vulnerability disclosure in the hacker community. It is exciting, gathers lots of publicity, and puts a ton of pressure on software vendors to improve their security. Nothing like a trial by fire to get people’s attention! While most users are put into an uncomfortable position by this type of disclosure, there is a small subset of users who deeply understand their operating systems, have built their kernels from scratch, and understand every line of code they run. These super-users want to be alerted to any potential vulnerability as soon as it is discovered so they can take quick action to fix their own systems.

Unfortunately, most users do not closely follow the latest vulnerabilities and do not know how to configure their firewalls, if they even have one. Additionally, if they use open source software, they might not know how best to patch it themselves. The average user is running a race they never received an invitation to. Public disclosure puts the average user at undue risk and under intense pressure.

After a public disclosure, the race is on between the software development community and the attackers. The big question is: can the developers accomplish each of the following before their users are exploited?

  • Find the issue in their software
  • Fix the issue
  • Develop a patch
  • Code review the patch
  • Test the patch
  • Deploy the patch

Even if the development teams and security teams work in unison and accomplish all of this before the next Slammer or Nimda, it is still a major challenge to get all of their users to patch before they open that e-mail, click the link or receive that packet which will result in an exploit.

In most cases, an exploit is ready before a patch is deployed. And even after a patch is deployed, a large percentage of users will not pick it up for months or even years.

One thing Open Disclosure does do, and does well, is to put the fear of attack at the forefront of development organizations' minds. This fear makes them more likely to take security seriously early in their SDLC and take the right precautions before an attack. This is why I refer to the security community as an ecosystem. Every actor has a role to play.

I am not talking about a breach like the recent Sony breaches, where the company notified customers once it realized its networks were compromised. It is incredibly important for companies to alert their customers and all other involved parties after a breach has occurred so they can take appropriate action.

I’m talking about when researchers discover vulnerabilities during their use of software, the same way an experienced car mechanic will recognize that sound you've been pretending not to hear for the last six months. What that researcher does with that vulnerability next changes the game we're all playing. Release it to the world (Open Disclosure), get noticed, make press, but put customers at risk; or release it to the software vendor only (Responsible Disclosure) and give them a head start at fixing it before notifying the world.

At Security Innovation we take the security of our current and future clients very, very seriously. This is why we practice "Responsible Disclosure." Actually, we take it a step further, as we will not publicly disclose any security vulnerability after any amount of time.

The formal definition of Responsible Disclosure is to give the software vendor advance notice of the vulnerability, along with a deadline, before releasing it publicly. The deadline pressures the vendor to respond directly to the security researcher and push out a patch for that vulnerability before anything else. This assumes the security researcher understands the vendor's business and its customers better than the vendor itself. It also still gives the researcher the limelight when they go to the press 10-15 days later.

We feel it is our primary concern to make sure the vendor gets the issue completely fixed. It's not our place to set arbitrary deadlines.

We deliver the vulnerability, along with guidance on how to remediate it, to the software vendor for free, and we will not release it publicly after any period of time. We work with those companies to make sure the issue is remediated properly and that their development and testing teams understand the risk and impact of the issue.

We do this even if we have never worked with this company before, and may never work with them in the future.

We feel this is the best way to help the end user. At the end of the day our goal is to help software companies ship secure software that their users can trust.

When we find issues in other people's software, it's our job to alert the company that wrote the software. I send out multiple e-mails per month to this effect. These e-mails cover everything from simple XSS issues to SQL injection to remotely exploitable buffer overflows.

In the midst of LulzSec, Anonymous, and countless viruses, worms, and targeted attacks, I think it's important to be playing for the customer - even if that means missing out on the opportunity to be famous.

Sony appoints CISO in response to PlayStation attacks… but reports to the CIO?


A few months ago, Sony announced that it was creating a new CISO position, reporting directly to the CIO, in response to the attacks against PlayStation. I’m encouraged by the fact that Sony realizes it needs someone focused on data security – but discouraged that this person will be reporting to the CIO, who almost always has a fundamental conflict of interest and often reduces the role to a figurehead. CIOs are typically responsible for the information technology and systems that support enterprise operations, and they need those systems to be high-performing and feature-rich (and security often crimps that style).

If I were CEO of a multinational enterprise like Sony, MassMutual, SAP, and others, I would have my CISO report to the most senior risk executive in the company, and have that executive report to me. I would create a nested, risk-based approach to data/information protection. For example, Application Security would be part of a larger Information Security group, which would be part of a larger risk group responsible for assessing risk in the context of business continuity and operations.

Security and risk are elements of _every_ person’s job, and the group that is “responsible” for security has the charter of assuring the dissemination and absorption of those security/risk elements (making security part of the culture vs. doing all the security work themselves in the security group). This would be my yin to the CIO and IT yang of faster, cheaper, more efficient automation of data management.

Companies like Thomson Financial, Liberty Mutual, and SAP had it right, imo - and then changed things, which sent their CSOs running for the door and significantly weakened their security posture overall.

Koblitz and Menezes on safety margins in cryptography


Neal Koblitz and Alfred Menezes are two pioneers in the field of Elliptic Curve Cryptography. In recent years, they’ve teamed up to write a series of papers (available at http://anotherlook.ca/) questioning some current practices in academic cryptography. The papers are stimulating and worth a look, and I’ll be posting some more about them. For this post, I’m most interested in the section on safety margins in their most recent paper, “Another look at security definitions” (warning -- f-bomb on page 9).

There’s a school of thought in cryptographic research that says that when you’re designing a scheme or protocol, you should determine the security requirements, design a protocol that exactly meets those requirements, and then make sure you eliminate all elements of the protocol that aren’t necessary to meet those requirements. This gives you the simplest, easiest to implement correctly, and most efficient protocol.

Koblitz and Menezes argue for a different position: unless you are truly resource-constrained, you should be biased towards including techniques that you can see an argument for, even if those techniques seem to be unnecessary within your security model. The reason is simple: your security model may be wrong. (Or it may be incomplete, which can amount to the same thing).

This attitude seems very wise to me. For a while now, we at Security Innovation have been arguing that there is one basic assumption underlying almost all Internet protocols: the assumption that it’s okay to use a single public-key algorithm, because it won’t get broken. But that assumption isn’t necessarily right. It’s been right up till now, but if quantum computers come along, or if a mathematical breakthrough we weren’t expecting happens, RSA could be made insecure almost overnight. And if RSA goes, most current implementations of SSL go too, and all Internet activities that use SSL will be seriously disrupted.

We don’t have to operate with these narrow safety margins. It’s easy to design a variant “handshake” for SSL that uses both RSA and NTRU to exchange parts of the symmetric key, each part as secure as the whole. This would remain secure even if either RSA or NTRU were broken, and the additional cost of doing NTRU alongside RSA is negligible in terms of processing. Menezes himself, speaking at the recent ECC conference in Toronto, described this approach as extremely sensible.
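To make that concrete, here is a minimal Python sketch of just the key-combining step - not the real SSL/TLS handshake. The two shares (one that would travel encrypted under RSA, one under NTRU) are stubbed out with random bytes, and the session key is derived from both with an HKDF-style construction, so recovering it requires breaking both algorithms. Everything here is illustrative; names and labels are made up for the example.

```python
# Sketch only: combine two independently transported key shares into one
# session key. If either share stays secret, the derived key is unpredictable.
import hashlib
import hmac
import os

def derive_session_key(rsa_share: bytes, ntru_share: bytes,
                       transcript: bytes, length: int = 32) -> bytes:
    """HKDF-style extract-then-expand over both shares using HMAC-SHA256."""
    # Extract: mix both shares into a single fixed-size pseudorandom key,
    # bound to the handshake transcript.
    prk = hmac.new(transcript, rsa_share + ntru_share, hashlib.sha256).digest()
    # Expand: stretch the pseudorandom key to the requested length.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + b"hybrid-handshake" + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

if __name__ == "__main__":
    # Placeholders: in a real handshake, one share would travel encrypted
    # under the server's RSA key and the other under its NTRU key.
    rsa_share = os.urandom(32)
    ntru_share = os.urandom(32)
    transcript = b"client-hello|server-hello|nonces"  # binds key to this session
    print(derive_session_key(rsa_share, ntru_share, transcript).hex())
```

The certificate handling, negotiation, and actual key-transport messages are deliberately omitted; the point is that the combining step costs little more than a couple of hash computations.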

Yes, there are some places where efficiency really is paramount, and naturally, we’d recommend using the highest-performance crypto, which is NTRU. However, for most devices, there’s no reason to use pure efficiency as an excuse to avoid doing something that makes perfect security sense. We’re encouraged by the fact that researchers of the stature of Koblitz and Menezes seem to agree with us, and we’re going to look for ways to spread the word further.

Q&A with Myself - Thoughts on Sony, DOD, RSA, IMF & Lockheed Martin


Q: Are the recent hacks against Sony PlayStation, RSA SecurID, the IMF, and Lockheed Martin caused by unrelated entities, or is this a coordinated attack?

A: There are definitely different groups operating here, each with their own motivations for the hacks. Sony decided to press charges against some hackers, and the response was retaliation. On the other hand, the breaches related to the RSA SecurID products and the hacks on the IMF and Lockheed Martin show evidence of state-sponsored attacks. Regardless, the series of breaches we’re seeing isn’t going to let up anytime soon. Just yesterday the Pentagon confirmed that some of our most closely guarded military secrets were stolen by spies who hacked into DOD computers.

It should also be noted that this is nothing new – what’s new is the disclosure of the breaches, not the attacks or breaches themselves. Congress needs public support to get cyber security legislation passed (Langevin’s bill), and they’re being lobbied hard by the private sector. We should also be aware that the administration is preparing the public for cyber attacks, both US-driven ones like Stuxnet and inbound, politically motivated attacks on something like the US power grid. The bill in Congress calls out specific measures for protecting the power grid and other critical infrastructure like nuclear power plants. The Pentagon hacks were discovered months ago, but are only being disclosed now to step up the pressure on Congress and to ready the public for the US being at war in a new theater – cyberspace.

Q: Will anyone’s data be more secure in the face of this onslaught?

A: There’s no doubt that data is woefully unprotected. The approach that so many organizations take has been reactive – patching a gap or misconfiguration temporarily fixes a problem yet offers nothing preventative. Organizations are also not following fundamental security principles like defense in depth, and they over-rely on single points of failure such as the SecurID authentication solution. Universally, we underestimate the importance of developing secure software, which is the largest source of security vulnerabilities; roughly 90% of attacks occur at the software layer. That is because hackers have two ways to get at your data: through the network, which organizations spend considerably more money protecting and are better at hardening, or through applications, written by developers with very little security training.

We have the power to make our data exponentially safer, so why don’t we? Maybe because it requires a fundamental shift in the way we think about writing software and accountability.  Being proactive about data security requires an overhaul of priorities, and this can be accomplished by training developers, designating an executive to be held accountable, consistently testing software, and having a current knowledge base of the always-morphing threat and vulnerability landscape. Software runs our world and the data security problem will never be solved until it’s addressed at the developer desktop. And it’s not a technology problem – it’s a people and process problem.

Which is More Secure: Windows or Linux?


Somebody on LinkedIn asked the above question in a group I'm part of. I decided to answer it thinking "Oh, I can chime in with a quick little answer", but the more I wrote the more complex the answer became.

Here is my response:

I think the question is far more complex right now actually. For example, what constitutes "Linux" or "Windows"? If we're talking only about the kernel, then they're about the same (both extremely secure). They've certainly made different design decisions, but at the end of the day kernel exploits for either OS are extremely rare.

If you're talking about how the core OS protects its users from malware and other attacks, an argument could be made that Linux's enforced low-privilege user model is more secure. However, there are huge advancements on both sides to reduce the risk of malicious code executing without the user's knowledge: ASLR, DEP, NX bits, and stack canaries all exist to reduce this risk and are included in Linux, Windows, Mac OS X, and others. So I'd say it's a wash there too.

If we want to talk about the applications that ship with the OS we might be getting closer to an answer, but there is still a lot of security and process in place.

Where things really start to diverge is user base and the complexity and security of the applications those users install on their machines.

OS security is largely a "solved" issue; the amount of risk you inherit from your OS pales in comparison to the amount of risk you inherit from the applications you install and from your own behavior on your computer. As someone who breaks software daily, I can say we look first at the applications and the security controls in each application (input validation, logic assumptions, authentication, authorization, SQL injection, buffer overflows, format string vulnerabilities, etc.).
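As a quick illustration of one of those application-layer controls, here is a small sketch (Python's built-in sqlite3 standing in for whatever database layer an application really uses) of the difference between concatenating user input into SQL and binding it as a parameter:

```python
# Illustrative only: parameterized queries as a defense against SQL injection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

attacker_input = "alice' OR '1'='1"

# Vulnerable: user input is concatenated straight into the SQL text.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safer: the driver binds the value; the input is treated as data, not SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print("concatenated query returned:", unsafe)   # every row leaks
print("parameterized query returned:", safe)    # no rows match the literal string
```

The specific driver doesn't matter; what matters is that the parameterized form treats the input as data rather than as part of the query.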

If we concede it's the applications that are going to give you the risk, then which OS provides the best protections for developers so they can make the best security decisions? There are great resources for both, but I would lean toward Microsoft being the bigger driving factor in security for software developers today. They put so much effort into surfacing information to help developers and testers make the right decisions that it can be almost overwhelming, but the information is there and comes from a trusted source.

That's a much longer answer than I was expecting to write. I think this question is far more complex than can be answered quickly. I'd love to do a complete study to compare the overall security of these systems (including OS X, and maybe some mobile platforms as well).

My feeling is that the biggest wins for security should be Application Focused, not OS focused. Use the OS, the programming language and the technology that you understand, then learn about security and build a secure system from the ground up. That's how we will make big leaps toward a more secure system.

When is Spam Considered a Breach?


As a marketing professional, I understand the need to promote products and services in a variety of ways. It’s part of business, and you want to help the sales organization sell as best you can.

But as someone who’s been in the IT security industry for a while, I know there are limitations on what you can and cannot do. There are privacy concerns, policy challenges, and a slew of other considerations to think about in how you reach (without being too intrusive) the people and organizations you want to tell your story to.

What’s the point? Well, a restaurant that opened recently next door to a former employer of mine just blasted out a promotional email about some recent menu additions. These include:

  • A creamy, cold potato soup
  • Cool and refreshing fruit soups (isn’t that a smoothie, Einstein?)
  • Yummy gazpacho
  • And their very own summer salad, highlighted by avocado and delicious shrimp

Sounds exquisite, right? Yep, except for the fact that I was on an email list where the brilliance of the sender was revealed by not BCC’ing the recipients, but rather just dumping the addresses into the TO field. Luckily it’s my Yahoo! address that I am retiring, but the fact remains: I think my identity has been breached, along with those of 214 other people.

From a security standpoint, this is egregious – I mean maybe there isn’t a lot of harm one can do with getting one’s hands on 215 email addresses. But anyone who entered their business email address may be at a greater risk.

Ironically enough, there are people from MITRE Corporation and RSA on there, which is interesting, but also Thermo Fisher, Acme Packet, Sovereign Bank, Hologic, EMC, Telcolote, and Lahey Clinic, just to name a few. One could trace these names back to their organizations or a social media network and dig deeper into the identities of these folks.

This speaks volumes to the issue of human behavior as it relates to security.  In fact, more and more, I think security exists BECAUSE of human behavior. If there wasn’t a lust for credit card numbers and other PII (based on commanding top dollar for that data) then the world of IT security wouldn’t be as lucrative a business.

This example of an inadvertent exposure of email addresses illustrates that human behavior stays the same even as security concerns grow, especially in the “Internet Age” (bad cliché). But it’s true, because the hacktivist acts of LulzSec and Anonymous have proven that the information that should be most secure really isn’t, so organizations aren’t doing what they need to in order to effectively secure it (not doing your job, yes, that would be considered by some to be bad behavior).

(Enter plug here.) We’ll have an article coming out soon on the changes, or lack thereof, in organizational human behavior as it relates to security.

My mistake: didn’t ask for or sign a disclaimer

At the end of the day, I probably won’t lose sleep over this breach – yes, it’s a breach – but it points out that caffeine addiction really takes you off your A-game:

  • I provided my email address without asking what the restaurant’s policy was in terms of sharing my email, selling my email or what they were going to do to protect my identity. Shame on me, and don’t be like me.
  • I used an email address I don’t use much anymore – and actually, this has prompted me to retire it officially. It might be a good idea for anyone else to do the same; or, if you used your business email address and it has been exposed somewhere it shouldn’t be, find out what type of encryption is being used to protect it.
  • People will do anything for something free – a t-shirt, a cup of coffee, a chance to win an iPad. So think about how much that free thing is worth before you offer up your PII for it next time. I provided mine for a free cup of joe that won’t compare to what I’ll be drinking in Seattle soon. Sad.
Probably much ado about nothing here, and as much as I am tempted to expose the restaurant and the sender, I might just connect with them privately to let them know that from a marketing perspective, boy that cold potato soup sounds great, but from a security standpoint, that was a huge no-no….

Antisec hacking into Booz Allen Web site


Can the hackers inflict more damage now that they have the password hashes?

The Antisec hacker movement, which targets the websites of governments and their agencies worldwide, hacked into the Booz Allen Hamilton website and posted a 130 MB file of data stolen from Booz Allen's servers on The Pirate Bay BitTorrent site. Antisec publicly sneered at Booz Allen's security and said it had stolen about 90,000 military emails as well as a large number of passwords. The passwords are protected by the MD5 cryptographic hash function, though that protection can be cracked.

There are two stories here that need to be disentangled: 

1) AntiSec got onto an unprotected server and got hold of information

2) Some of that information included the MD5 hashes of passwords

The issue is not so much that MD5 allowed the original attack, but that, now that AntiSec has the password hashes, it has been suggested they may be able to recover the actual passwords and use them to get on the network before Booz Allen can get all the passwords changed. This actually isn’t likely to happen. MD5 is weak, but not that kind of weak.

One property you want from a hash function is collision resistance – it’s very hard to find two inputs that give the same hash value. For a hash with MD5's 128-bit output, finding a collision should take about 64 bits of effort (the generic birthday bound). In fact, because of weaknesses in MD5's design, it only takes about 20 bits of effort. This lets an attacker potentially get a fake certificate from a Certificate Authority (CA) that uses MD5. The attacker generates two cert requests with the same MD5 hash, one innocuous (mydomain.com) and one malicious (google.com). They then request a certificate for the innocuous one, and the signature on that certificate is also a valid signature on the malicious one (because they have the same hash), so now they’ve got a cert for google.com. This is a significant weakness in MD5, and it’s why it’s not recommended any more. However, that’s not the attack AntiSec can mount.

Another property you want from a hash function is preimage resistance – it’s very hard to find an input that hashes to an already selected value. In the case of the Booz Allen hack, this is the attack AntiSec would like to mount: they have the hash of each password, and they don’t need to find the actual password – they just need to find something that gives the same hash. Perhaps oddly, although MD5 is very weak against collisions, it’s still pretty strong against preimages. It should take 128 bits of effort to find a preimage for MD5 (because it has 128 bits of output); in fact, the best known attack takes… 120 bits of effort. This is much better than good enough.
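To put the figures quoted above side by side (the ideal collision cost is the generic birthday bound of $2^{n/2}$ for an $n$-bit hash):

$$
\begin{aligned}
\text{collision, ideal 128-bit hash:}\quad & 2^{128/2} = 2^{64} \text{ operations (birthday bound)}\\
\text{collision, best known MD5 attack:}\quad & \approx 2^{20} \text{ operations}\\
\text{preimage, ideal 128-bit hash:}\quad & 2^{128} \text{ operations}\\
\text{preimage, best known MD5 attack:}\quad & \approx 2^{120} \text{ operations}
\end{aligned}
$$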

So the weakness of MD5, though significant in other contexts, isn’t an important part of the story here.

MD5 is widely used to protect passwords in FreeBSD-based Unix systems and others, so it’s not like Booz Allen made a uniquely bad choice here. They probably didn’t even make a choice at all. Maybe people should investigate moving towards SHA-based password hashes, but there are more pressing security needs out there.
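For what it's worth, here is a rough sketch, using only the Python standard library, of what "moving towards SHA-based password hashes" could look like in practice: PBKDF2-HMAC-SHA256 with a per-user random salt and a deliberately high iteration count. This is purely illustrative, not a description of what Booz Allen or any particular Unix system actually does.

```python
# Sketch: replace a single unsalted MD5 pass with a salted, deliberately slow
# SHA-2 based scheme (PBKDF2 from the standard library, for illustration only).
import hashlib
import hmac
import os

ITERATIONS = 600_000  # cheap for one login attempt, expensive for bulk guessing

def hash_password(password: str) -> tuple:
    """Return (salt, digest) for storage; the salt is unique per user."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

if __name__ == "__main__":
    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("guess", salt, stored))                          # False
```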

To err is human; to hack is, well, human too…


If you think about all the bad stuff that most IT security vendors claim to either prevent, identify, or analyze, you don’t typically think of a person. It’s a thing, maybe abstract in nature, some type of virus (what does a virus look like?). Or a criminal gang, huddled in a basement somewhere, launching attacks, botting machines, and taking personal and corporate information and selling it for monetary gain.

However, there’s a fundamental issue at the heart of the exploits and data breaches we’re familiar with – human acts. A number of interesting articles surfacing lately have pointed out (and I tend to agree) that security is not exactly a technology challenge – human error is largely responsible for the loss of data and all that follows.

Human error, or when people aren’t doing their jobs to the extent that they should be, often comes into play around access to data, or more to the point, a lack of appropriate access controls to sensitive information. I’d argue that it’s not just that people keep making mistakes, but rather human nature – it’s in the nature of vindictive individuals to destroy property (whether physical or intellectual) and to steal. It doesn’t matter if that individual is capable of developing an advanced software program; if that software program is designed to take down a business, well, that’s human nature at work for those who wish to cause harm.

That said, while attacks in recent years have shifted toward stealing as much personal and corporate data as possible to make a pile of money on the black market, LulzSec and Anonymous have shown us that launching attacks simply to wreak havoc is still alive and well. It’s an example of human nature at work: people using their skills to do bad stuff just for the heck of it.

A fascinating example of this (one that went largely under the radar) is the recent case of the former YouSendIt CEO who pleaded guilty to launching a web attack on the company. Khalid Shaikh was a co-founder of YouSendIt who also served as CTO at one point, so he likely knew his way around the application pretty well.

By all accounts, he ran the ApacheBench load-testing program over and over against the servers that YouSendIt’s platform sits on, causing a denial-of-service attack. As a result, the YouSendIt servers were unable to handle the flood of requests on top of the normal amount of traffic they see.

I’m not a technical guy, so I usually deal with implications instead – since YouSendIt’s site boasts more than 18 million users and 20 million file transfers per month, I have to believe there’s a compromise somewhere around user information that might not have been reported yet. Furthermore, I know over a dozen people who use YSI regularly to ship video or other large files, and they have said they would now consider an alternative now that I’ve informed them of the hack. It certainly illustrates that the attack surface for web apps has widened, and drives home how something as simple as a DDoS can potentially disrupt business in a big way.

As a huge fan of YouSendIt (especially their desktop app) it stinks to see something like this happen. Having transmitted hundreds of files on YSI, I'm a little nervous using it again, but I'm anticipating they’ll take the appropriate steps to ensure it doesn’t happen again based on some of the recent moves the company has made.

My post would not be complete if I didn’t take a stab at offering a few ideas on how one might counter bad actors by taking a different approach to security – especially for web applications, which are the most exposed and most attacked layer of any enterprise IT stack:

  • Organizations should consider a full assessment and analysis of their applications to determine where the vulnerabilities lie – most likely, there will be coding errors.
  • Look at security from a development vs. production standpoint – if you are addressing code issues while applications are still in the development phase, before they are rolled out to production systems, you are thinking about security the right way.
  • Train your developers on the principles, both fundamental and advanced, of secure software application development – this will go a long way toward refreshing the veterans and providing new insight for the newbies.
  • Accept the fact that you can protect against, but cannot change, the DNA of people who want to do bad things – they are wired differently and will attempt to execute their acts regardless of the processes or technologies in place to counter them.

My Haystack: Is finding that one needle really all that important? (Hint: Yes it is.)


As the uptick in breaches continues to dominate headlines and increase the general paranoia around what might happen, there’s often a story lost in the shuffle. It seldom seems like there’s a bulletproof method to stop the invasive tactics of today’s hackers. That’s because, really, there isn’t.

You could spend all day trying to determine the multitude of issues that could lead to any sort of data breach or exploit on a specific system. Or you can throw a technology solution at it – some type of IPS/IDS, a DLP, or just try to leverage as many vulnerability scanners as possible.

The bottom line is that breaches will continue to happen because criminals profit significantly from being able to sell the coveted sensitive information they set out to steal. However, on the flip side, we’ve seen advanced attacks like Stuxnet designed specifically to infiltrate SCADA-type systems (really a gigantic piece of dangerous malware). Therefore, there isn’t a one-size-fits-all approach to every single exploit.

While costs are a major factor, the security of critical systems can often trump the financial costs associated with breaches. The approach that so many organizations take is reactive in nature and usually results in the decision to deploy a tool – a scanner, a DLP, an IPS, a firewall, etc. Underestimating the importance of deploying securely developed and configured applications in a production environment is perhaps one of the bigger oversights we are seeing.

For example, a code review on a business application that identifies malicious code or even a configuration issue is like finding a needle in a haystack – a process many companies don’t want to adopt, especially if they feel there’s an automated tool they can quickly integrate that is supposed to find every vulnerability. But that needle could represent a major vulnerability that could have been avoided had the application been developed securely before being rolled out into production.

Don’t Play the Blame Game

When it comes to a breach or an exploit, it’s not about who did it or why. It’s about prevention and identifying how it happened so it doesn’t happen again. In the case of the Sony PlayStation breach, the company had shifted from a closed, embedded systems provider to a Web and Internet services content provider.

The flaw was that the team was not properly educated on the differences. This also led to a failure to see how the attack surface had expanded and in what ways the gaming applications were exposed. The right amount of team education through a specific training curriculum could have been a cost-effective and highly efficient way to avoid a breach like this.

It’s easy to throw another firewall or a DLP solution at the problem. It’s a reactive measure designed to satisfy some immediate need, most times after a breach. However, this does nothing to prevent a breach from occurring. Furthermore, it doesn’t map to a defense-in-depth strategy, which should include:

  • Training all technical personnel on the principles, both fundamental and advanced, of secure software application development.
  • Ensuring that all developers have some type of development bible or reference guide they can draw on to help them ultimately write secure code.
  • The right mix of people, process, and technology – again, you don’t need every security solution on the planet to be secure, but you do need employees to adhere to secure practices in their respective roles.
  • An effective means of assessment – identifying gaps in the SDLC, understanding where vulnerabilities exist, and remediating them so they aren’t an ongoing issue.

A proactive security program will invest the time it takes to ensure that applications are developed and configured securely prior to being put into production. This proactive model requires an overhaul of priorities for what an organization’s developers are working on, which means training personnel, consistently testing software applications, and having a guidance system that provides a knowledge base of vulnerabilities.

And oh by the way, the training of developers, designers, architects, and even project managers isn’t just something you should do – it’s mandated by industry regulations and standards like PCI DSS, HIPAA, NIST guidance, and many others. (So you have to do it, depending on which regulations are relevant to your business.) Again, it may take some digging in that bale of hay, but when you find that ‘needle,’ the effort will pay off.
