Q: Are the recent hacks against Sony Playstation, RSA SecurID, IMF and Lockheed Martin caused by unrelated entities, or is this a coordinated attack?
A: There are definitely different groups operating here, each with their own motivations. Sony pressed legal charges against some hackers, and the breaches that followed were retaliation. On the other hand, the breaches related to the RSA SecurID products and the hacks on the IMF & Lockheed Martin show evidence of state-sponsored attacks. Regardless, the series of breaches we’re seeing isn’t going to let up anytime soon. Just yesterday the Pentagon confirmed that some of our most closely guarded military secrets were stolen by spies who hacked into DOD computers.
It should also be noted that this is nothing new – what’s new is the disclosure of the breaches, not the attacks or breaches themselves. Congress needs public support to get cyber security legislation passed (Langevin’s bill) and it’s being lobbied hard by the private sector. We should also be aware that the administration is preparing the public for cyber attacks, both US-driven ones like Stuxnet and inbound politically motivated attacks on something like the US power grid. The bill in Congress calls out specific measures for protecting the power grid and other critical infrastructure like nuclear power plants. The Pentagon hacks were discovered months ago, but are just being released now to step up the pressure on Congress and to ready the public for the US being at war in a new theater – cyberspace.
Q: Will anyone’s data be more secure in the face of this onslaught?
A: There’s no doubt that data is woefully unprotected. The approach so many organizations take has been reactive – patching a gap or misconfiguration temporarily fixes a problem yet offers nothing preventative. Organizations are also not following fundamental security principles like defense in depth, and are over-relying on single points of failure such as the SecurID authentication solution. Universally, we underestimate the importance of developing secure software, which is the largest source of security vulnerabilities – 90% of attacks occur at the software layer. That is because hackers have two ways to get at your data: through the network, which organizations spend considerably more money protecting and are better at hardening, or through applications, written by developers with very little security training.
We have the power to make our data exponentially safer, so why don’t we? Maybe because it requires a fundamental shift in the way we think about writing software and accountability. Being proactive about data security requires an overhaul of priorities, and this can be accomplished by training developers, designating an executive to be held accountable, consistently testing software, and having a current knowledge base of the always-morphing threat and vulnerability landscape. Software runs our world and the data security problem will never be solved until it’s addressed at the developer desktop. And it’s not a technology problem – it’s a people and process problem.
As a Marketing professional, I understand the need to promote products and services through a variety of ways. It’s part of business, and you want to help the sales organization sell as best you can.
But as someone who’s been in the IT Security industry for a while, there are limitations on what you can and cannot do. There are privacy concerns, policy challenges and a slew of other considerations to think about in how you reach (without being too intrusive) the people and organizations you want to reach to tell your story.
What’s the point? Well, a restaurant that opened recently next door to a former employer of mine just blasted out a promotional email about some recent menu additions. These include:
- A creamy, cold potato soup
- Cool and refreshing fruit soups (isn’t that a smoothie, Einstein?)
- Yummy gazpacho
- And their very own summer salad, highlighted by avocado and delicious shrimp
Sounds exquisite, right? Yep, except for the fact that I was on an email list where the sender’s brilliance was revealed: instead of BCC’ing the recipients, they dumped all the addresses into the TO field. Luckily it’s my Yahoo! address, which I am retiring, but the fact remains: I think my identity has been breached, along with 214 other people’s.
From a security standpoint, this is egregious – I mean, maybe there isn’t a lot of harm one can do by getting one’s hands on 215 email addresses. But anyone who entered a business email address may be at greater risk.
Ironically enough, there are people from MITRE Corporation and RSA on there, which is interesting, but also Thermo Fisher, Acme Packet, Sovereign Bank, Hologic, EMC, Telcolote, and Lahey Clinic, just to name a few. One could trace these names back to the organization or some social media network and dig deeper into the identities of these folks.
This speaks volumes to the issue of human behavior as it relates to security. In fact, more and more, I think security exists BECAUSE of human behavior. If there wasn’t a lust for credit card numbers and other PII (based on commanding top dollar for that data) then the world of IT security wouldn’t be as lucrative a business.
This example of an inadvertent exposure of email addresses illustrates that human behavior stays much the same even as security concerns grow, especially in the “Internet Age” (bad cliché, I know). But it’s true, because the hacktivist acts of LulzSec and Anonymous have proven that the information that should be most secure really isn’t – organizations aren’t doing what they need to in order to effectively secure it (and not doing your job, yes, that would be considered by some to be bad behavior).
(Enter plug here) We’ll have an article coming out soon on the changes, or lack of changes in human behavior organizationally, as they relate to security.
My mistake: didn’t ask for or sign a disclaimer
At the end of the day, I probably won’t lose sleep over this breach – yes, it’s a breach – but it points out that caffeine addiction really takes you off your A-game:
- I provided my email address without asking what the restaurant’s policy was in terms of sharing my email, selling my email or what they were going to do to protect my identity. Shame on me, and don’t be like me.
- I used an email address I don’t use much anymore – and this has actually prompted me to retire it officially. It might be a good idea for anyone else to do the same. And if you used your business email address and it’s been exposed where it shouldn’t be, find out what type of encryption is being used to protect it.
- People do anything for something free – a t-shirt, a cup of coffee, a chance to win an iPad. So think about how much that free thing is worth before you offer up your PII for it next time. I provided it for a free cup of joe that won’t compare to what I’ll be drinking in Seattle soon. Sad.
Probably much ado about nothing here, and as much as I am tempted to expose the restaurant and the sender, I might just connect with them privately to let them know that from a marketing perspective, boy that cold potato soup sounds great, but from a security standpoint, that was a huge no-no….
In the wake of the Sony Security Breaches (breaches, you say? As in plural? Yes, read on for more information) I decided to update some of our instructor led training slide decks.
Our security awareness courses open with a number of slides intended to scare people into paying attention to the threat of security issues. We do this by showing the largest, most costly and most impactful data breaches and security vulnerabilities in recent history.
Instead I scared myself. There is no statistic I could find to show that things are getting better or more secure in general.
I should say before I list off all these terrible statistics that largely the companies we work with are, in fact, getting more secure over time. I've seen some of our clients go from unknowingly writing insecure applications to having robust and mature Secure Software Development Lifecycles that drastically reduce the overall number of issues we find in quarterly assessments. These micro-trends, unfortunately, seem to be the exception to the rule.
These companies should also stand as a reference point for other companies that find themselves a target for attackers, or that fear they are not doing enough to protect themselves and their customers from this type of attack.
Another correlation a colleague of mine, Tom Samstag, found while researching is the negative attention a large data breach draws. After a public attack, hackers seem to swarm in, focusing their attention on other arms of the company. This makes sense from the attacker's perspective: the initial breach acts as a beacon identifying companies that do not have proper security measures in place.
We see exactly this happening to Sony right now.
One month after the infamous Playstation Network breach on April 26th, Sony BMG suffered another breach on May 23rd; then Sony Pictures was hacked less than two weeks later, on June 2nd. It seems the hackers smelled blood and came running. I wonder what will be next?
Of course it's easy to pick on Sony, but they're not the only company who has lost large amounts of data in recent months, far from it.
PrivacyRights.org tracks data breaches; they report there have been 533,686,975 records breached in 2,511 data breaches since 2005. There are a lot of recognizable names in that list as well – chronologically speaking: Sony, WordPress, the Texas Comptroller's Office, Health Net Inc., Jacobi Medical Center, and American Honda Motor Company. Those companies have all lost more than one million records each in the last 6 months. Let me repeat that:
The companies named above have all lost more than 1,000,000 records each in the last 6 months.
A recent Ponemon study found that the average cost to the offending company was $214 per record lost, up from $138 per record in 2005. By that measure Sony got away cheap, if the most recent numbers are correct and their PSN breach only cost them $171 million.
The study went on to conclude that indirect breach costs, such as the loss of customers, outweigh direct costs by nearly 2 to 1. That means Sony could lose another $342 million in customers, market share and customer confidence. In 2010, other companies spent an average of $7.2 million per data breach. Talk about consequences!
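To put those figures in perspective, here is a quick back-of-the-envelope calculation using only the numbers cited above (it's illustrative arithmetic, not an actuarial model – the Ponemon average was derived from much smaller breaches than Sony's):

```python
# Ponemon averages cited above
avg_cost_per_record_2010 = 214   # USD per record lost

# Sony PSN breach figures cited above
psn_records_lost = 101_600_000
psn_reported_cost = 171_000_000  # USD

# What the Ponemon per-record average would predict for a breach this size
predicted_direct_cost = psn_records_lost * avg_cost_per_record_2010
print(f"Predicted direct cost: ${predicted_direct_cost:,}")      # $21,742,400,000

# What Sony actually paid per record, per the reported figure
actual_cost_per_record = psn_reported_cost / psn_records_lost
print(f"Actual cost per record: ${actual_cost_per_record:.2f}")  # $1.68

# Indirect costs (lost customers, confidence) run ~2x direct costs
predicted_indirect_cost = 2 * psn_reported_cost
print(f"Potential indirect cost: ${predicted_indirect_cost:,}")  # $342,000,000
```

The gap between the predicted $21.7 billion and the reported $171 million is exactly why "Sony got away cheap" – and why the indirect, reputation-driven costs are the number to watch.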
Unfortunately, it also seems more vulnerabilities are being found in software, likely due to insecure coding practices, insufficient security measures and controls, lack of training, and an attacker threat that increases almost daily. According to an IBM study, there were 4,938 vulnerabilities found in 2005, 6,543 in 2007, 6,737 in 2009 and 8,562 in 2010. See the graph below for more data points.
If you've been waiting to see who has lost the most records in recent history, you can check out the PrivacyRights.org website. Here is my list of shame: the most recent breaches that have each lost more than 1,000,000 records.
- Sony Playstation Network – 101.6 million records lost
- Texas Comptroller's Office
- Health Net Inc.
- Jacobi Medical Center
- American Honda Motor Company
- Educational Credit Management Corporation
- U.S. Military Veterans
- Heartland Payment Systems
- Royal Bank of Scotland
- Countrywide Financial Corp
- University of Utah Hospitals and Clinics
- Bank of New York Mellon – 12.5 million records lost
- TJX Corporation – 6.3 million customer records lost
- Hannaford Bros – 4.2 million credit card numbers lost
- Fidelity National
- Georgia Dept. of Community Health – 2.9 million medical records lost
This past week has yielded a veritable treasure trove of head-shaking security stories, all related to my favorite security soft spot – people. The shimmer from our technological advances blinds us to the damage people can do – and we remain so easily fooled:
- Wired reported that Albert Gonzalez, the record-setting hacker of Heartland Payment Systems, TJX and a range of other companies said the Secret Service (SS) asked him to do it. The government admitted using Gonzalez to gather intelligence and help them seek out international cyber criminals but says they didn’t ask him to commit any crimes. Uh, yeah… ok.
- Storefront Backtalk and others reported on a Gucci engineer who was fired for "abusing his employee discount," but then really got even (and then some) by creating a fictitious employee account (with admin rights!) and then using that account to delete a series of virtual servers, shut down a storage area network (SAN), and delete a bunch of corporate mailboxes… allegedly.
- TechAmerica wrote about HP suing a former executive who took a job at Oracle. Apparently, he downloaded and stole hundreds of files and thousands of emails containing trade secrets before quitting.
You might ask, “How can a company as advanced and large as HP not have protections on its digital trade secrets?” It’s not like DLP (data leak prevention) solutions don’t exist. And how about Gucci? I guess this is a double whammy around policy and people, which are so often intertwined. Was there no policy flag or checkpoint in place to verify that this newly-created employee was authorized with such privileges that he could delete entire virtual servers and mailboxes? Nobody bothered to check that this was a legitimate employee? Worst of all, this non-existent employee’s accounts were created by a fired network engineer! And then there’s Mr. Gonzalez (hacking community) and the SS (intel community) – which group do you trust less to be honest with the public? Both communities have long engaged ethically questionable people to do their bidding. If it’s true that the SS hired him to hack, shame on him for not getting protection for himself in advance. You have to wonder what else he hacked into to merit an actual arrest.
And here we are in 2011, putting our lives on display with Facebook, Twitter, LinkedIn, Yammer, et al, broadcasting our whereabouts on vacation (or more specifically, that we’re not home for an extended period), meeting up with strangers who have similar tastes, and making our personal details and history available for anyone to view. It’s not always technology that will get us into security trouble… it’s the people.
Recently, Ellen Messmer wrote a story on a cyber security early warning system in the state of Washington, USA. One of the most promising pieces of this system is the process and information sharing that’s being folded into it. The University of Washington, Starbucks, the City of Seattle, Amazon.com, the Port of Tacoma, and other groups are setting up an information sharing system that will help each learn from the others. For example, if Amazon.com experiences a botnet attack, it will share the profile of and info about that attack with the City of Seattle so it can learn, prepare, and hopefully defend itself against a similar (or the very same) attack. The system, called PRISEM (Public Regional Information Security Event Management), is designed to offer an online early warning to all its members. This system has several security analogies in place today:
- The tsunami early warning system put in place after the disastrous Indian Ocean tsunami in December 2004
- The Las Vegas cheater profiling system which shares behavior, personal, and photographic info of known scammers amongst numerous casinos
- The information sharing strategy of ODNI (Office of the Director of National Intelligence) in America, which began operations in April, 2005 after the need to share information between the intel communities became painfully clear in the aftermath of the 9/11 attacks.
So praises all around for PRISEM and the Washington organizations committed to sharing security information. Unfortunately, the system they’re putting in place will not detect or prevent the nastiest and most common attacks that occur – those at the software application layer. PRISEM talks about the importance of protecting SCADA systems and other critical infrastructure; I couldn’t agree more. However, standing up a Security Information and Event Management (SIEM) and information sharing system isn’t enough. The majority of application layer attacks will still be successful… and this will be the case until those software systems are either updated to modern secure coding standards or protected with application layer defenses (similar to web application firewalls for web apps). As an industry, we’ve still got some innovating to do in the form of self-defending application systems. The concepts are in place, and this approach would be a lot less expensive than re-architecting and re-coding the thousands of legacy applications that support our critical infrastructure.
We’ll get there… one step at a time.
At the RSA Conference last week, I had the chance to sit down with four executive leaders from (ISC)2. During that meeting, I learned that, according to (ISC)2 data, the CSSLP (Certified Secure Software Lifecycle Professional) certification is being adopted at an even quicker pace than the wildly successful and pervasive CISSP was at the same point in its lifecycle. (ISC)2 informed me that the CISSP took “more than a decade” to reach the critical mass where it started to be universally recognized and adopted as an industry-wide standard.
CSSLP still suffers from a severe awareness challenge (I mentioned it to one of the most prominent security analysts in the industry and his response was, “What is that? Never heard of it.”); however, its objective, according to its shepherds, is to provide a baseline certification for software security professionals. I think it has a decent chance of succeeding given that objective. I’ve worked in the software industry for nearly 20 years and one thing I learned early on is that not all software professionals are the same – in fact, there are stark differences in skill set, responsibility, and domain knowledge needed between the various roles that contribute to software development and deployment, including but not limited to: Architect, Developer, QA/QE, Business Analyst, etc. A certification program that has contextual significance to each of these roles specifically would be a welcome change.
In similar conversations with the Microsoft SDL team and OWASP Leaders, there seems to be universal agreement that a role-based certification for software security would be a good thing. Sure, there are differences of opinion in terms of explicit endorsement/sponsorship as well as how to measure and set an acceptable quality bar for each certification/qualification. But those are interesting problems to solve and will contribute to furthering the software “development” profession (development here encompasses all of the roles mentioned above.)
Perhaps most promising of all is that the conversations between me (and other leaders at Security Innovation) and the executives responsible for (ISC)2, the Microsoft SDL, and OWASP have been productive. Further, this is a path forward that didn’t seem to exist a mere 3 months ago. Where the path will take us remains to be seen; but watch this space – it will prove to be interesting as 2011 evolves.
Attack surface is a concept whose time has come. While it has been known for a while within application security circles, the idea is just now becoming more widely understood within the development community. It is extremely useful as a means of understanding, and driving down, the security risk inherent in your application.
Your system’s attack surface represents the number of entry points exposed to a potential attacker. The fewer entry points, the less chance of an attacker finding vulnerabilities in your code. No matter how hard you work to improve the security of your software, vulnerabilities, known and unknown, will still exist in your system. Reducing your application’s attack surface allows you to fend off future attacks – the ones you don’t know about yet as well as the ones you haven’t had a chance to fix yet.
The reason I like attack surface measurement is two-fold. First, it is a great metric for understanding an application’s inherent risk. Other metrics such as vulnerability count aren’t ideal because they don’t take into account bugs that are not found, ease of exploitation, or potential impact of exploitation. Secondly, all security stakeholders can leverage it for informed decision making:
- Development teams can better prioritize testing efforts. If a software’s attack surface measurement is high, they may want to invest more in testing
- Developers can use it as a guide while implementing patches for security vulnerabilities. A good patch should not only remove a vulnerability from a system, it should also avoid increasing the system's attack surface
- Consumers can use it to guide their choice of configuration. Since a system's attack surface measurement depends on the system's configuration, software consumers can choose a configuration that results in a smaller attack surface exposure.
- Risk Management/Corporate Security can understand their potential business exposure
Attack surface is useful for measuring security in relative terms (i.e. v1.2 to v1.n of a product), for measuring security impact of adding a new component to a system, or for very rough measurement across applications that are of equivalent purpose.
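As a rough illustration of that kind of relative measurement, here is a minimal sketch. The entry-point categories and weights below are hypothetical, invented for this example – real methodologies define and weight their resources far more carefully – but the shape of the comparison is the point:

```python
# Hypothetical weights: channels an attacker can reach remotely count
# for more than resources that require local access.
WEIGHTS = {
    "open_network_port": 10,
    "web_endpoint": 8,
    "local_ipc_channel": 4,
    "world_readable_file": 2,
}

def attack_surface_score(entry_points: dict) -> int:
    """Weighted count of exposed entry points; lower is better."""
    return sum(WEIGHTS[kind] * count for kind, count in entry_points.items())

# Default configuration: everything enabled
default_config = {"open_network_port": 5, "web_endpoint": 20,
                  "local_ipc_channel": 3, "world_readable_file": 10}

# Hardened configuration: unused services disabled, files locked down
hardened_config = {"open_network_port": 2, "web_endpoint": 12,
                   "local_ipc_channel": 1, "world_readable_file": 0}

print(attack_surface_score(default_config))   # 242
print(attack_surface_score(hardened_config))  # 120
```

The absolute numbers mean nothing on their own; the value is in the delta – v1.2 versus v1.3, default versus hardened – which is exactly how the metric is meant to be used.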
If you are interested in learning more about attack surface analysis and reduction, we have a recorded webcast on the subject that is available here.
As the recent buzz around Firesheep demonstrated, while SSL is proven, it is not deployed everywhere by default. Some of the reasons behind this include slow performance and difficulty of implementation.
One important reason why SSL isn't everywhere is that the initial SSL handshake, which involves public/private key operations, takes many CPU cycles. With SSL enabled, rather than spending server CPU cycles on the functionality for the end user, CPU cycles are spent on cryptographic operations… and functionality usually trumps security.
Organizations hosting web sites or running SAAS operations must therefore choose between:
- Slower performance. SSL enabling a page will cause that page to load more slowly in the browser. This is a difficult choice since there is a direct correlation between speed of page load and user satisfaction or conversion rate down the funnel.
- Higher infrastructure costs: Making a corresponding increase in the number of servers in their datacenter to handle the peak load with SSL enabled. The hardware cost (or increased monthly fee in a hosted or cloud setting) coupled with increased maintenance costs, and increased software costs can be prohibitive.
Likewise, organizations building embedded devices or software for RTOS must choose between:
- Slower performance. The more SSL is relied upon for connections to other devices or servers, the more CPU cycles it will take, and the greater impact this will have on the speed with which the device performs its intended function.
- Greater battery drain.
- Higher cost of goods sold, higher weight: Mitigating the effects of SSL's impact can result in more expensive and heavier components in an embedded system.
The concern about SSL performance is only growing as NIST and others recommend that organizations increase the key size used with the RSA algorithm in most SSL implementations from 1024 bits to 2048 bits. The increase in key size will make SSL handshakes take five times as long – for organizations that do a lot of handshakes, this is a significant performance hit.
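You can get a feel for why doubling the key size hurts with a quick experiment. The RSA private-key operation is a modular exponentiation whose cost grows roughly cubically with the bit length; this sketch just times raw `pow()` on random full-length moduli of each size (illustrative only – it is not a real RSA implementation, which would use CRT optimizations and proper key generation):

```python
import random
import time

def time_modexp(bits: int, trials: int = 20) -> float:
    """Time modular exponentiation with a full-size exponent and modulus."""
    random.seed(42)
    n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # odd, full-length modulus
    d = random.getrandbits(bits) | (1 << (bits - 1))      # full-length exponent
    m = random.getrandbits(bits - 1)                      # "message" smaller than n
    start = time.perf_counter()
    for _ in range(trials):
        pow(m, d, n)
    return time.perf_counter() - start

t1024 = time_modexp(1024)
t2048 = time_modexp(2048)
print(f"2048-bit ops took {t2048 / t1024:.1f}x as long as 1024-bit ops")
```

On typical hardware the 2048-bit operations come out several times slower, which is why the jump in key size translates directly into handshake latency and server load.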
There aren’t a lot of choices in public-key algorithms embedded in SSL libraries. Security Innovation has recently partnered with yaSSL to deliver CyaSSL+, an OpenSSL-compatible SSL library that incorporates the very fast, very small NTRU algorithm. Using this in place of RSA-enabled SSL can improve performance dramatically, and we are encouraging users to try it for free under the GPL open source license model.
CyaSSL fully implements SSL 3 and TLS 1.0, 1.1, and 1.2, with SSL client libraries, an SSL server, APIs, and an OpenSSL-compatibility interface. It is optimized for embedded and RTOS environments; however, it is also widely used in standard operating environments. By itself, it’s fast: CyaSSL's optimized code runs standard RSA asymmetric crypto 4 times faster than OpenSSL. As mentioned above, CyaSSL+ adds an additional asymmetric algorithm, NTRU. NTRU is an alternative asymmetric algorithm to RSA and ECC. It is based on different math (the approximate closest lattice vector problem, if you’re interested) that makes it much faster and resistant to quantum computers’ brute force attacks, something both RSA and ECC are susceptible to.
At comparable cryptographic strength, NTRU performs the costly private key operations much faster than RSA. In fact, CyaSSL+NTRU runs between 20x and 200x faster than OpenSSL RSA.
Learn more about CyaSSL+ here.
After what seems like 900 years in security, I just traveled again, and even after all this time, landing in a new role gives me a fresh perspective on another aspect of the problem of IT security. I was fortunate to have the opportunity to join Security Innovation to focus on the problem of making applications secure in the first place, after several years of focusing on the problem of finding their security holes. Both are important, but going from the 'attack' side to the application doctor side has been eye opening.
Like any product manager, my first step in the new role was to know my product – in this case, take the classes myself. It’s not like I didn’t know, having been a developer way, way back, but there’s having a concept, and then there’s actually traveling through time and space to be hit with reality. Consider this simple scenario: you are a freshly minted engineer out of college, and your first assignment is a simple page that starts a user registration. You create an input field to ask for the user’s first name, and then echo back: “OK, <first name>, let's get started with registration…”. BANG! ...your first security rift. Easy cross site scripting for a cyberman: just enter a script tag and a little script in the name field, and he owns your user’s browser. They didn’t teach you that in college? Right, exactly the problem.
Looking at the small blue box of your application’s outside, what does it take for a hacker to find the problem? Just play with the input field and enter the usual suspects – long input, script tags, special characters, etc. You don’t even have to do it by hand; if you have something like CORE IMPACT, you have a sonic screwdriver that will find and open just about any security hole for you by running a couple of wizards.
Opening up the blue box, you find it is much bigger on the inside. What has to be done to make these two lines of code secure? About a half a dozen things, depending on the language, environment and architecture – constrain the input, strip the output, catch exceptions, make sure you allocate a sufficient buffer and don’t allow it to overrun, be careful with integer/pointer arithmetic, don’t use format string functions for the output, and definitely don’t allow the application to go back on its own timeline. That’s a big inside for one little field.
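To make the echo-back scenario concrete, here is a minimal sketch in Python (the page in the anecdote could be in any language, and the function names here are made up for illustration). The first version reflects the input verbatim; the second constrains the input and escapes the output:

```python
import html
import re

def greet_unsafe(first_name: str) -> str:
    # Vulnerable: whatever the user typed lands in the page unmodified,
    # so "<script>...</script>" executes in the victim's browser.
    return f"OK, {first_name}, let's get started with registration..."

def greet_safe(first_name: str) -> str:
    # Constrain the input: letters, spaces, hyphens, apostrophes, sane length.
    if not re.fullmatch(r"[A-Za-z][A-Za-z' -]{0,39}", first_name):
        raise ValueError("invalid first name")
    # Encode the output: HTML-escape before echoing it back.
    return f"OK, {html.escape(first_name)}, let's get started with registration..."

payload = "<script>alert('owned')</script>"
print("<script>" in greet_unsafe(payload))  # True - the script tag survives intact

try:
    greet_safe(payload)
except ValueError:
    print("rejected")  # the constrained version refuses the payload outright

print(greet_safe("O'Brien"))  # escaping also handles legitimate special characters
```

Note the layering: even if the input constraint were loosened or bypassed, the output escaping alone would neutralize the script tag – that's defense in depth on a two-line feature.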
Fortunately, it isn’t as bad as it sounds. With the right knowledge and reference to it when you need it, writing the code securely from the beginning becomes just part of the process. This also improves the code overall making testing and deployment go much more smoothly, reducing security bugs as well as functional and performance bugs, resulting in less work and time spent overall. How’s that for fixing time and space, Doctor?
We at Security Innovation were happy to hear on Wednesday that Facebook will be rolling out transport encryption as an option for your entire session, not just during the password exchange (http://blog.facebook.com/blog.php?post=486790652130).
We recommend that if you use Facebook you follow the instructions in the blog to set the option to turn it on when it becomes available to you. The reason we are particularly gratified is that the Firesheep tool our consultants played a part in putting together was one of the reasons the project got the attention it deserved, according to a recent article in SC Magazine.
This was exactly why Firesheep was created – to bring attention to an issue that was well known by security professionals, but not more generally known by consumers of web commerce and social media content. We should not forget that many other sites still have the same problem. Before you use a site or application that contains personal information, be sure your entire session is encrypted if the option exists.
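On the client side, you can refuse to talk to a service except over verified TLS. A minimal sketch using Python's standard ssl module (the host name is just an example):

```python
import ssl

# A default context enables certificate verification and hostname checking,
# so the whole session - not just the login - is encrypted and authenticated.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# Wrapping a real connection with this context would then look like:
#
#   import socket
#   with socket.create_connection(("www.example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
#           print(tls.version())  # the negotiated protocol version
```

The point of the sketch is the default: with `create_default_context()` you have to go out of your way to downgrade to an unverified or unencrypted session, which is the posture every site handling personal data should push its users toward.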
What is particularly illustrative about this case is the amount of time it took for Facebook to get to the point of announcing it, and it is still not rolled out. Firesheep was made available over four months ago, and Facebook said at the time they were already looking at the issue. If a company with the resources and visibility of Facebook can have its most high profile page hacked and not deal with one of the most basic of security issues for months, what chance does everybody else have?
With some education, improvements in application development lifecycle processes, and the right informational tools, you can improve those chances greatly.
This case illustrates what we at Security Innovation do every day. The cool hacks and attack techniques might get the attention, but the real improvements in security will come from the detailed technical work that application developers do as part of their day-to-day responsibilities.
By working with experts in the field and using the learning that’s available, this work does not have to increase the cost or time it takes to develop applications. Fixing it after the fact will definitely cost. The team at Facebook just did some of that work. If you don’t have the development resources of Facebook (and who does?), we can help you do the same.