Secure Development Tip of the Week

Application and Cyber Security Blog:

a Security Innovation Blog covering software engineering, cybersecurity, and application risk management


Koblitz and Menezes on safety margins in cryptography

Neal Koblitz and Alfred Menezes are two pioneers in the field of Elliptic Curve Cryptography. In recent years, they’ve teamed up to write a series of papers (available at http://anotherlook.ca/) questioning some current practices in academic cryptography. The papers are stimulating and worth a look, and I’ll be posting some more about them. For this post, I’m most interested in the section on safety margins in their most recent paper, “Another look at security definitions” (warning -- f-bomb on page 9).

There’s a school of thought in cryptographic research that says that when you’re designing a scheme or protocol, you should determine the security requirements, design a protocol that exactly meets those requirements, and then eliminate all elements of the protocol that aren’t necessary to meet them. This gives you the simplest protocol, the one that’s easiest to implement correctly, and the most efficient.

Koblitz and Menezes argue for a different position: unless you are truly resource-constrained, you should be biased towards including techniques that you can see an argument for, even if those techniques seem to be unnecessary within your security model. The reason is simple: your security model may be wrong. (Or it may be incomplete, which can amount to the same thing).

This attitude seems very wise to me. For a while we at Security Innovation have been arguing that there is one basic assumption underlying almost all Internet protocols: the assumption that it’s okay to rely on a single public-key algorithm, because it won’t get broken. But that assumption isn’t necessarily right. It’s been right up till now, but if quantum computers come along, or if a mathematical breakthrough we weren’t expecting happens, RSA could become insecure almost overnight. And if RSA goes, most current implementations of SSL go too, and all Internet activities that use SSL will be seriously disrupted.

We don’t have to operate with these narrow safety margins. It’s easy to design a variant “handshake” for SSL that uses both RSA and NTRU to exchange parts of the symmetric key, with each part as secure as the whole. This would remain secure even if either RSA or NTRU were broken, and the additional processing cost of doing NTRU alongside RSA is negligible. Menezes himself, speaking at the recent ECC conference in Toronto, described this approach as extremely sensible.
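The split-key idea can be sketched in a few lines: derive the session key by hashing together two shares, one transported under RSA and one under NTRU, so an attacker must break both algorithms to learn the key. This is an illustrative sketch only; the function and variable names are made up, and a real SSL variant would run this through its normal key-derivation machinery rather than a bare hash:

```python
import hashlib
import secrets

def combine_key_shares(share_rsa: bytes, share_ntru: bytes) -> bytes:
    """Derive one symmetric key from two independently exchanged shares.

    Recovering the key requires BOTH shares, so breaking only RSA
    (or only NTRU) reveals nothing about the result.
    """
    return hashlib.sha256(share_rsa + share_ntru).digest()

# Each share would come from its own public-key exchange in the handshake.
share_a = secrets.token_bytes(32)  # stand-in for the RSA-transported share
share_b = secrets.token_bytes(32)  # stand-in for the NTRU-transported share
session_key = combine_key_shares(share_a, share_b)
```

The design point is that the combiner is cheap: one extra hash over material you already have, which is why the marginal cost of carrying a second algorithm is negligible.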

Yes, there are some places where efficiency really is paramount, and naturally we’d recommend using the highest-performance crypto, which is NTRU. For most devices, however, there’s no reason to use pure efficiency as an excuse to avoid doing something that makes perfect security sense. We’re encouraged that researchers of the stature of Koblitz and Menezes seem to agree with us, and we’re going to look for ways to spread the word further.

Antisec hacking into Booz Allen Web site

Can the hackers inflict more damage now that they have the password hashes?

The Antisec hacker movement, which targets the websites of governments and their agencies worldwide, hacked into the Booz Allen Hamilton web site and posted a 130 MB file of data stolen from Booz Allen's servers on the Pirate Bay BitTorrent site. Antisec publicly sneered at Booz Allen's security and said it had stolen about 90,000 military emails as well as a large number of passwords. The passwords are protected by the MD5 cryptographic hash function, though that protection can be cracked.

There are two stories here that need to be disentangled: 

1) AntiSec got onto an unprotected server and got hold of information

2) Some of that information was the MD5 hashes of passwords

The issue is not so much that MD5 allowed the original attack, but that now that AntiSec has the password hashes, it has been suggested they may be able to recover the actual passwords and use them to get onto the network before Booz Allen can get all the passwords changed. This actually isn’t likely to happen. MD5 is weak, but not that kind of weak.

One property you want from a hash function is collision resistance: it should be very hard to find two inputs that give the same hash value. For MD5, finding a collision should take 64 bits of effort; in fact, because of known structural weaknesses, it only takes about 20 bits of effort. This lets an attacker potentially get a fake certificate from a Certificate Authority (CA) that uses MD5. The attacker generates two cert requests with the same MD5 hash, one innocuous (mydomain.com) and one malicious (google.com). They then request a certificate for the innocuous one, and the signature on it is also a valid signature on the malicious one (because they have the same hash), so now they’ve got a cert for google.com. This is a significant weakness in MD5, and it’s why MD5 is no longer recommended. However, that’s not the attack AntiSec can mount.
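The 64-bit figure is the generic “birthday bound”: hash about 2^(n/2) random inputs and two of them will probably collide. That effect can be demonstrated on a toy scale. The sketch below is a generic birthday search against a truncated MD5 digest, not the structural attack researchers actually use against full MD5; finding two inputs that agree on their first 24 bits takes only a few thousand hashes:

```python
import hashlib

def find_truncated_collision(bits: int = 24):
    """Birthday search: return two distinct inputs whose MD5 digests
    agree on the first `bits` bits (bits must be a multiple of 8).
    Expected work is roughly 2^(bits/2) hash evaluations."""
    nbytes = bits // 8
    seen = {}  # truncated digest -> message that produced it
    i = 0
    while True:
        msg = f"input-{i}".encode()
        tag = hashlib.md5(msg).digest()[:nbytes]
        if tag in seen:
            return seen[tag], msg  # two different messages, same prefix
        seen[tag] = msg
        i += 1

a, b = find_truncated_collision(24)
print(a, b)  # two distinct inputs sharing 24 bits of MD5 output
```

Scaling the same search to the full 128-bit digest would take about 2^64 hashes, which is the “64 bits of effort” the generic bound predicts; the real MD5 collision attacks do far better by exploiting the function’s internal structure.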

Another property you want from a hash function is preimage resistance: it should be very hard to find an input that hashes to an already-selected value. In the case of the Booz Allen hack, this is the attack AntiSec would like to mount: they have the hashes of the passwords, and they don’t need to find the actual password, just something that gives the same hash. Perhaps oddly, although MD5 is very weak against collisions, it’s still pretty strong against preimages. It should take 128 bits of effort to find a preimage for MD5 (because it has 128 bits of output); in fact, the best known attack takes… 120 bits of effort. This is much better than good enough.
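To see why 128 bits of effort is out of reach, here is what a generic preimage search looks like: hash candidate after candidate and compare against the target. Even a million attempts is a rounding error against 2^128. The candidate encoding below is an arbitrary illustrative choice, not how a real cracker would enumerate guesses:

```python
import hashlib

def preimage_search(target_hex, max_tries):
    """Generic brute-force preimage search against MD5: hash candidates
    until one matches the target digest. On average this needs about
    2^128 attempts, so any feasible budget almost surely fails."""
    target = bytes.fromhex(target_hex)
    for i in range(max_tries):
        candidate = i.to_bytes(16, "big")  # arbitrary candidate inputs
        if hashlib.md5(candidate).digest() == target:
            return candidate
    return None

# A million tries against a real hash: effectively zero chance of success.
target = hashlib.md5(b"some unknown password").hexdigest()
print(preimage_search(target, 1_000_000))  # None
```

Note this is the *generic* attack; the practical worry with password hashes is different, namely guessing likely passwords from a dictionary, which works only when the original password was weak.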

So the weakness of MD5, though significant in other contexts, isn’t an important part of the story here.

MD5 is widely used to protect passwords on FreeBSD-based Unix systems and others, so it’s not as if Booz Allen made a uniquely bad choice here. They probably didn’t make a choice at all. Maybe people should investigate moving towards SHA-based password hashes, but there are more pressing security needs out there.
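For anyone who does investigate, a common SHA-based approach is salted, iterated hashing via PBKDF2. This is a minimal sketch using Python’s standard library; the function names and the 100,000-iteration count are illustrative choices of mine, not Booz Allen’s or FreeBSD’s actual scheme:

```python
import hashlib
import hmac
import secrets

ITERATIONS = 100_000  # slows each guess; tune to your hardware budget

def hash_password(password, salt=None):
    """Salted, iterated SHA-256 via PBKDF2. The salt defeats
    precomputed tables; the iterations slow brute-force guessing."""
    if salt is None:
        salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the hash and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

The salt means two users with the same password get different stored hashes, and the iteration count turns a billion-guesses-per-second attacker into a ten-thousand-guesses-per-second one.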

Questions for Comodo and RSA after their Recent Hacks

Unfortunately, two security companies I respect were hacked in the past few weeks. This has resulted in significant negative publicity and may result in lost trust and lost sales. These are security companies, and yet their security was breached. For me, this raises many questions. This post is about the questions I would ask executives at both companies to learn from what happened to them.

Background on what happened at RSA

On Thursday, March 17, 2011, RSA published the following open letter on its website and followed up with a SecureCare Note.

RSA itself has been very tight-lipped about what actually happened, what was stolen, and what the risk is, except to call the attack an advanced persistent threat. It’s not a surprise to learn that the attackers were sophisticated and tried hard over time to achieve their objective. There has been a lot of speculation in the blogosphere about what happened, as well as criticism of how little RSA has revealed.

From the open letter, we learn that RSA was attacked over a period of time and that the attackers were able to extract valuable information about RSA SecurID. This information is valuable enough for RSA to warn all of its customers that the security of its flagship product may be reduced and, according to GCN, to temporarily stop shipping its tokens.

Background on what happened with Comodo

On March 22, 2011, the Tor Project, with help from Security Innovation's Ian Gallagher, published a blog post stating their belief that a CA had been compromised. Comodo followed up with this post on March 23 confirming a March 15 compromise.

A quick summary of what Comodo confirmed: an attacker from Iran compromised a user account on one of its registration authorities (RAs) and used it to issue himself certificates for major web properties.

A person claiming to be the Comodo attacker posted a long statement here in which he outlined his motivation and methods. He says that he probed many leading SSL vendors' servers and found some vulnerabilities, but not enough for his attack.

He then attacked Comodo's InstantSSL.it service, gained control of it, and found that a C# library, TrustDLL.dll, performed the actual CSR signing. In his words: "I decompiled the DLL and I found username/password of their GeoTrust and Comodo reseller account. GeoTrust reseller URL was not working, it was in ADTP.cs. Then I found out their Comodo account works and Comodo URL is active. I logged into Comodo account and I saw I have right of signing using APIs."

A few questions for RSA, Comodo, and all of us

RSA and Comodo are security companies. RSA has one of the best brands and reputations in the industry. Yet they were successfully attacked in a way that affects them and their customers. What happened? Here are some of the questions I would ask their executives:

Who was in charge of your security, and were they and their team empowered?

It is far too easy to scapegoat the head of IT Security at both companies. They are a natural target and should obviously be questioned. The more interesting area of exploration is the executive team itself. Did they listen when security concerns were brought to them? Did they encourage a culture that welcomed this and responded to it with action? Were individual contributors able to get their security concerns up to the executive suite, or were they squashed by middle management?

What did you do to make the security of your customers' critical assets part of every employee's mission?

Was every relevant employee given ongoing training on secure coding best practices? Was the importance of safeguarding customers' trust, a core part of the company's mission, regularly highlighted by senior management? Were individual employees rewarded for sticking their necks out about a potential security risk?

Did Senior Management make tough calls to prioritize long-term security over short-term gain?

We've all been there. You are looking at your product, service, or IT roadmap; you have 20 things you want to do over the next quarter, and you have to pick 5. A few of those features relate to security. They aren't going to give customers any shiny new benefits or short-term competitive wins, just the boring, slogging kind of work that makes a product or service rock-solid. Which did they pick? Did senior management take the lead in pushing to do the right thing and play the long-term game, or not?

Did you get a second opinion...regularly?

There is no substitute for doing great work in the first place. But on something as important as security, you need a second opinion... repeatedly. How often was a third party brought in for black box and white box penetration testing? Once? Once in a while? Or as a regular part of a disciplined process? Was budget set aside for this, or did motivated middle managers have to scrimp and push for it?

Conclusion

There is no doubt that we face threats from unfriendly governments, criminal organizations, and disciplined individuals. Our attackers are advanced and they are persistent. Our defenses must be advanced too, and even if we think they are, we should get a second opinion. But the key to all of our businesses is our people. The most important thing is that our attitude, effort, and culture be persistent ... persistently, deliberately focused on securing the trusted assets given to us to safeguard.

Removing Barriers to Encryption: the Need for High-performance SSL

As the recent buzz around Firesheep demonstrated, while SSL is proven technology, it is not deployed everywhere by default. Some of the reasons for this include slow performance and difficulty of implementation.

 

Slow SSL

One important reason why SSL isn't everywhere is that the initial SSL handshake, which involves public/private key operations, takes many CPU cycles. With SSL enabled, server CPU cycles are spent on cryptographic operations rather than on functionality for the end user... and functionality usually trumps security.

 

Organizations hosting web sites or running SaaS operations must therefore choose between:

  • Slower performance. SSL-enabling a page causes that page to load more slowly in the browser. This is a difficult choice, since there is a direct correlation between page-load speed and user satisfaction or conversion rate down the funnel.
  • Higher infrastructure costs. Adding servers to the datacenter to handle peak load with SSL enabled. The hardware cost (or increased monthly fee in a hosted or cloud setting), coupled with increased maintenance and software costs, can be prohibitive.

Likewise, organizations building embedded devices or software for an RTOS must choose between:

  • Slower performance. The more SSL is relied upon for connections to other devices or servers, the more CPU cycles it takes, and the greater the impact on the speed with which the device performs its intended function.
  • Greater battery drain.
  • Higher cost of goods sold and higher weight. Mitigating SSL's performance impact can require more expensive and heavier components in an embedded system.

The concern about SSL performance is only growing, as NIST and others recommend that organizations increase the RSA key size used in most SSL implementations from 1024 bits to 2048 bits. The increase in key size makes SSL handshakes take roughly five times as long; for organizations that do a lot of handshakes, this is a significant performance hit.
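The cost driver is the private-key modular exponentiation at the heart of RSA, whose running time grows steeply with modulus size. The rough sketch below times raw `pow` operations in Python's standard library as a stand-in; it is not a benchmark of any particular SSL implementation, and the exact ratio will vary by machine and by optimizations such as CRT:

```python
import secrets
import time

def time_modexp(bits, reps=50):
    """Time `reps` modular exponentiations of the size an RSA
    private-key operation performs: a `bits`-bit base raised to a
    `bits`-bit exponent modulo a `bits`-bit modulus."""
    n = secrets.randbits(bits) | (1 << (bits - 1)) | 1  # odd, full-size modulus
    d = secrets.randbits(bits) | 1                      # stand-in private exponent
    m = secrets.randbits(bits) % n                      # stand-in message
    start = time.perf_counter()
    for _ in range(reps):
        pow(m, d, n)
    return time.perf_counter() - start

t1024 = time_modexp(1024)
t2048 = time_modexp(2048)
print(f"2048-bit private-key ops took {t2048 / t1024:.1f}x the 1024-bit time")
```

Doubling the modulus size multiplies both the number of multiplications and the cost of each one, which is why the observed slowdown lands in the several-fold range the post describes.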

 

Fast SSL

There aren’t a lot of choices of public-key algorithm in SSL libraries. Security Innovation has recently partnered with yaSSL to deliver CyaSSL+, an OpenSSL-compatible SSL library that incorporates the very fast, very small NTRU algorithm. Using it in place of RSA-enabled SSL can improve performance dramatically, and we are encouraging users to try it for free under the GPL open-source license model.

 

CyaSSL fully implements SSL 3 and TLS 1.0, 1.1, and 1.2, with SSL client libraries, an SSL server, APIs, and an OpenSSL-compatibility interface. It is optimized for embedded and RTOS environments, but it is also widely used in standard operating environments. By itself it's fast: CyaSSL's optimized code runs standard RSA asymmetric crypto 4 times faster than OpenSSL. As mentioned above, CyaSSL+ adds an additional asymmetric algorithm, NTRU. NTRU is an alternative to RSA and ECC, based on different mathematics (the approximate closest lattice vector problem, if you’re interested) that makes it much faster and resistant to attacks by quantum computers, something both RSA and ECC are susceptible to.

 

At comparable cryptographic strength, NTRU performs the costly private-key operations much faster than RSA. In fact, CyaSSL+ with NTRU runs between 20x and 200x faster than OpenSSL RSA.

 

Learn more about CyaSSL+ here.