Secure Development Tip of the Week

Application and Cyber Security Blog:

A Security Innovation blog covering software engineering, cybersecurity, and application risk management


The High Cost of an Application Security Data Breach


In the wake of the Sony security breaches (breaches, you say? As in plural? Yes, read on for more information), I decided to update some of our instructor-led training slide decks.

Our security awareness courses open with a number of slides intended to scare people into paying attention to the threat of security issues. We do this by showing the largest, most costly, and most impactful data breaches and security vulnerabilities in recent history.

Instead, I scared myself: I could not find a single statistic showing that things are, in general, getting better or more secure.

Before I list off all these terrible statistics, I should say that the companies we work with are, by and large, getting more secure over time. I've seen some of our clients go from unknowingly writing insecure applications to having robust, mature Secure Software Development Lifecycles that drastically reduce the overall number of issues we find in quarterly assessments. These micro-trends, unfortunately, seem to be the exception to the rule.

These companies should also stand as a reference point for others who find themselves a target for attackers, or who fear they are not doing enough to protect themselves and their customers from this type of attack.

A colleague of mine, Tom Samstag, found another correlation while researching: the negative attention that follows a large data breach. After a public attack, hackers seem to swarm in, focusing on other arms of the company. This makes sense from the attacker's perspective; the initial breach acts as a beacon identifying companies that do not have proper security measures in place.

We see exactly this happening to Sony right now.

One month after their infamous PlayStation Network breach on April 26th, Sony BMG suffered another breach on May 23rd; Sony Pictures was then hacked less than two weeks later, on June 2nd. It seems the hackers smelled blood and came running. I wonder what will be next?

Of course it's easy to pick on Sony, but they're not the only company that has lost large amounts of data in recent months; far from it.

Records Lost tracks all data breaches; they report 533,686,975 records breached across 2,511 data breaches since 2005. There are a lot of recognizable names on that list, too. Chronologically: Sony, WordPress, the Texas Comptroller's Office, Health Net Inc., Jacobi Medical Center, and American Honda Motor Company have each lost more than one million records in the last six months. Let me repeat that:

The companies named above have all lost more than 1,000,000 records each in the last 6 months.

A recent Ponemon study found that the average cost to the offending company was $214 per record lost, up from $138 per record in 2005. By that measure Sony got away cheap, if the most recent numbers are correct and their PSN breach only cost them $171 million.

The study went on to conclude that indirect breach costs, such as the loss of customers, outweigh direct costs by nearly 2 to 1. That means Sony could lose another $342 million in customers, market share, and customer confidence. In 2010, other companies spent, on average, $7.2 million per data breach. Talk about consequences!
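For back-of-the-envelope planning, those two figures combine into a simple estimate. A sketch in Python, assuming (as a simplification) that costs scale linearly with records lost:

```python
# Rough breach-cost estimate from the Ponemon figures cited above.
COST_PER_RECORD = 214      # USD per record lost (2010 figure)
INDIRECT_MULTIPLIER = 2    # indirect costs ~2x direct costs

def estimated_breach_cost(records_lost, cost_per_record=COST_PER_RECORD):
    """Direct cost plus the roughly 2x indirect cost on top of it."""
    direct = records_lost * cost_per_record
    indirect = INDIRECT_MULTIPLIER * direct
    return direct + indirect

# Sony's PSN breach: 101.6 million records
print(f"${estimated_breach_cost(101_600_000):,}")
```

At $214 per record, 101.6 million records works out to roughly $21.7 billion in direct costs alone, which puts Sony's reported $171 million in perspective.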

Unfortunately, it also seems more vulnerabilities are being found in software, likely due to insecure coding practices, insufficient security measures and controls, lack of training, and an attacker threat that grows almost daily. According to an IBM study, 4,938 vulnerabilities were found in 2005, 6,543 in 2007, 6,737 in 2009, and 8,562 in 2010, a 73% increase over five years. See the graph below for more data points.


If you've been waiting to see who has lost the most records in recent history, you can check out the website, or read my list of shame below: the most recent breaches that have lost more than 1,000,000 records.

  • Sony Playstation Network
    • 101.6 Million records lost
  • WordPress
    • 18 Million records lost
  • Texas Comptroller's Office
    • 3.5 Million records lost
  • Health Net Inc.
    • 1.9 Million records lost
  • Jacobi Medical Center
    • 1.7 Million records lost
  • American Honda Motor Company
    • 4.9 Million records lost
  • Educational Credit Management Corporation
    • 3.3 Million records lost
  • Netflix
    • 100 Million records lost
  • RockYou
    • 32 Million records lost
  • U.S. Military Veterans
    • 76 Million records lost
  • Heartland Payment Systems
    • 130 Million records lost
  • Royal Bank of Scotland
    • 1.5 Million records lost
  • Countrywide Financial Corp
    • 17 Million records lost
  • Facebook
    • 80 Million records lost
  • University of Utah Hospitals and Clinics
    • 2.2 Million records lost
  • Bank of New York Mellon
    • 12.5 Million records lost
  • TJX Corporation
    • 95 Million CC#s lost
  • Ameritrade
    • 6.3 Million customer records lost
  • Hannaford Bros
    • 4.2 Million CC#s lost
  • Fidelity National
    • 8.5 Million records lost
  • Georgia Dept. of Community Health
    • 2.9 Million medical records lost

Application Security in the Cloud – Dealing with aaS holes


As we all know, when you run things in the "Cloud" it's "as-a-Service". There's Software as a Service (SaaS), which started the terminology, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), etc. Therefore, it stands to reason that security holes in your cloud deployments would have to be called aaS holes. If you are in the process of moving applications to the cloud, or deploying a cloud infrastructure (and who isn't?), you are going to have to deal with a lot of aaS holes. This blog is here to help.

First, let's talk about the types of aaS holes you are going to have to deal with. I'll start with Engineering aaS holes. These are the problems caused by the fact that software is written by people with little training in application security. This is not their fault: Computer Science and Engineering programs, as well as the professional education that follows, focus on building software to do things; they do not address how that software can be abused or how to defend it.

Then there are the Sales aaS holes. These are caused by a cloud service vendor's infrastructure that does not do what its specifications say with respect to security, or that changes after you contract with them. While you may have an SLA (a sort of no-aaS-holes rule for the services provided), liability is often limited to what you paid for the service, while the cost of a breach can be far more.

Then you have your Product Management and Marketing aaS holes. These are caused by requirements for your applications and infrastructure that work against good security practice. The user experience is a primary factor that must be addressed when designing and building applications, but it and many other things can be used as an excuse to avoid security requirements, which, if handled properly, can actually save time and money.

Last but not least, everybody has to deal with Management aaS holes. Some are caused by a lack of good systems management and monitoring, which means your application gets deployed on the wrong virtual image, in the wrong place, or without the right configuration. Others are caused by a lack of security priority, process, and information at the management level, leading to groups without security goals, or to poor coordination and execution on those goals.

There's no doubt about it: when your applications move to the cloud, there's a whole new world of aaS holes to deal with. So how do you deal with them? There is a series of steps you can take, from small initial ones that will help, to an entirely new way of doing things that will increase security and make you more efficient.

When building and deploying applications, the ultimate goal is a secure SDLC, which will actually save you time and money. It addresses the PM and Marketing aaS holes by building security requirements into the process. It addresses Sales and Management aaS holes with threat models, attack surface analysis, and deployment-stage specifications and activities that prevent and defend against aaS holes. Finally, it addresses Engineering aaS holes with secure design, coding, and test practices, as well as education and reference material on them.

The path to a secure SDLC starts with an SDLC gap analysis and some training, so that everyone involved understands the goal, the need, and what has to be done to get there. From there, threat modeling and attack surface analysis are low-cost activities that will help you understand what you face and where you are facing it, and set some concrete goals and non-goals. Your SDLC gap analysis will map out the steps from there that make the most sense for your situation.

As you can see, moving applications into the cloud means dealing with aaS holes. Butt, with the right help from a company like Security Innovation, and a little patience and perseverance, dealing with aaS holes can be a lot less painful than you think.

Application Security ROI – The Two Towers


In my first entry on Application Security ROI, I promised to delve into three areas of Application Security ROI a little more deeply. In this entry, which will now have to be the second of a trilogy given the title and my propensity to eat six times a day and grow hair on my feet, I will talk about the first, and least intuitive, of these: how Application Security can make development projects more predictable and efficient. This predictability, and especially efficiency, is where the return on investment comes from. Without a Secure SDLC, Security becomes a gate throwing up roadblocks that seem incomprehensible and random to project teams, usually at the worst possible time. This feels like a trip through Moria, and often results in a process of recriminations followed by negotiations, with the outcome to nobody's liking. Since the teams are not in sync on the goals and approach to application security, unpredictability and inefficiency ensue as resources are wasted going back and forth.

On the other hand, if the Two Towers of the application development team and the Security group can work together in a Secure SDLC rather than fighting a war between good and evil (each side believing it's good), this can be avoided. Working together in a Secure SDLC means taking actions all throughout a project's lifecycle to create and meet common goals. Security can participate in setting Security Requirements and Threat Modeling at the early stages of the process, work with the team on refining them as the project continues, and then act as a consultant or even perform some security validation during the testing phase. Finally, in deployment, Security will have set requirements for the infrastructure so the application is deployed securely without interfering with its functionality. How does this result in ROI? Several ways:

  1. When each part of the team works from common Security Requirements (and explicit non-requirements), only those elements which are necessary need be built, tested and deployed, saving time and money vs. a broad generalized set of requirements.
  2. A Threat Model provides detail on the dangers to the application which informs the design, development and testing efforts, resulting in specific, effective mitigations.
  3. These items not only inform the development; they clarify requirements for design and provide clear guidelines for what must be tested, and what need not be, saving time and money during both of these efforts.
  4. Secure code is good code. Many of the requirements for securing code such as not trusting input, and careful memory, pointer, error and integer handling, result in avoiding both security vulnerabilities and functional bugs saving expensive fixing and retesting time during the QA cycle.

You don’t have to take my word for it. This was also found by Forrester in the study I referred to in the first entry in this series. See especially Figure 9, Table 5, and the discussion around them. By the way, this may look like waterfall, but the same can work in Agile processes as well.

Now that you know that applying Application Security processes can save time and money, how do you go about creating your own fellowship between Security and Development? Training will be required so everybody understands the process and principles. Security Innovation can also help with a gap analysis of your process and a detailed roadmap on the steps to take to get to where you need to go, throwing your old inefficiencies into Mount Doom, destroying them forever.

Doing a .NET Code Review for Security


After performing countless code reviews for clients, I found myself performing the same tasks each time to get ramped up on the code and identify major areas of concern.

When performing a security code review, quickly finding issues like Cross-Site Scripting, SQL injection, and poor input validation can be a huge time saver. It also helps ensure that the majority of the code has been reviewed for low-hanging fruit, so that a more in-depth review of the major areas of concern can follow.

If you've been reading this blog for a while, you may have noticed that I'm a big fan of regular expressions. Some of these issues can be discovered with a good regular expression. For this purpose I wrote a very basic static analysis tool I've lovingly named YASAT (Yet Another Static Analysis Tool). It uses a list of regular expression rules to scan a source tree and produce a report for a code reviewer to verify. Its purpose is to highlight hot spots; it errs toward false positives in order to avoid false negatives, so you can use it to start your code review off on the right foot. If you're interested in the tool, check it out on GitHub.
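YASAT's core idea, running a list of regular expression rules over a source tree and reporting every match, fits in a few lines. A minimal sketch in Python (the rule names and patterns here are illustrative, not YASAT's actual rule set):

```python
import os
import re

# Illustrative rules: each maps a finding name to a regex that flags
# a potential hot spot worth a human's attention (false positives expected).
RULES = {
    "possible-xss": re.compile(r"\.Text\s*="),
    "possible-sqli": re.compile(r"Execute(NonQuery|Reader)\s*\("),
    "validation-disabled": re.compile(r'ValidateRequest\s*=\s*"?false', re.IGNORECASE),
}

def match_rules(line):
    """Return the names of all rules that match a single source line."""
    return [name for name, pattern in RULES.items() if pattern.search(line)]

def scan_tree(root, extensions=(".cs", ".aspx", ".config")):
    """Yield (path, line_number, rule_name, line) for every match in the tree."""
    for dirpath, _, filenames in os.walk(root):
        for filename in filenames:
            if not filename.endswith(extensions):
                continue
            path = os.path.join(dirpath, filename)
            with open(path, errors="ignore") as source:
                for number, line in enumerate(source, 1):
                    for name in match_rules(line):
                        yield path, number, name, line.strip()
```

A reviewer then walks the report, confirming or discarding each hit; the point is a fast first pass, not a verdict.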

One small caveat: this is not intended to be an exhaustive list of all potential security issues in ASP.NET. There is no replacement for a "brain on, eyes open" line-by-line code review. This is simply intended to give you some good starting points in a new code base quickly.

Cross Site Scripting (XSS)

Look for any Label, Literal, CheckBox, LinkButton, RadioButton, or any other control that has a ".Text" property. If the value assigned to .Text is not properly encoded, there is a possibility of XSS.

GridViews, DataLists, and Repeaters can be set either to encode by default or not. If you see one of these being used, verify that it's configured properly. You set the data on these by assigning the DataSource property to some kind of structured data (usually a DataTable). Make sure the values in the DataSource are properly encoded, or that the control is encoding them automatically.
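The encoding itself is the same idea in any framework (in ASP.NET, HttpUtility.HtmlEncode does it). Illustrated in Python for brevity:

```python
from html import escape

# Untrusted input headed for a .Text property, or any other HTML sink
payload = "<script>alert('xss')</script>"

# Encoded, the markup renders as inert text instead of executing
encoded = escape(payload)
print(encoded)
```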

Input Validation

.NET can do automatic malicious character detection using the ValidateRequest setting. True is the default, so if you don't see it, it's enabled. You must set it to false if you're going to accept any character that .NET considers dangerous (like < or '), so it's common to see it turned off. It can live either at the top of an .aspx file (inside the <%@ %> directive) or in web.config, under the <configuration><system.web><pages validateRequest="false"> section.
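In web.config, the site-wide form is the following minimal fragment:

```xml
<configuration>
  <system.web>
    <!-- Request validation disabled for every page: each page must now
         do its own input validation and output encoding. -->
    <pages validateRequest="false" />
  </system.web>
</configuration>
```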

.NET has a number of validators (CompareValidator, CustomValidator, RegularExpressionValidator); they all match against the entire string (even if your regular expression lacks ^ and $ anchors). However, these only check on the client side by default. (Note: I wrote a blog entry on creating good regular expressions for input validation earlier.) You can run those same validators on the server by checking the Page.IsValid property, but this isn't done by default, so the inputs are probably vulnerable unless you see validation along with a server-side check that Page.IsValid is true before the request is processed.


Look for any TextBoxes, DropDownLists, or ListBoxes; the input from all of these should be validated.

SQL Injection

Searching for "using System.Data.SqlClient" will tell you which classes use SQL.

The common way to execute SQL uses the SqlConnection and SqlCommand classes. SqlCommand has ExecuteNonQuery and ExecuteReader methods: ExecuteNonQuery returns the number of rows affected, while ExecuteReader returns a SqlDataReader used to read the result stream. You'll be able to recognize all kinds of SQL injection possibilities (format strings, concatenations, etc.) when you look at the commands passed to these methods. If they're using parameterized queries, they're probably fine.
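The contrast in miniature, shown here with Python's sqlite3 since the principle is identical on any platform (in .NET the safe form uses SqlParameter objects on the SqlCommand):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "alice' OR '1'='1"   # attacker-controlled input

# VULNERABLE: string concatenation lets the input rewrite the query;
# the OR '1'='1' makes the WHERE predicate always true.
rows = conn.execute(
    "SELECT role FROM users WHERE name = '" + name + "'"
).fetchall()
print(rows)

# SAFE: a parameterized query treats the input purely as data,
# so no row matches the literal string the attacker sent.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (name,)
).fetchall()
print(rows)
```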


.NET doesn't do any Cross-Site Request Forgery (CSRF) checking by default. If no explicit CSRF token generation and checking is apparent, the code is most likely vulnerable to CSRF.
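What explicit token generation and checking looks like, sketched in Python (the session dictionary and function names are illustrative):

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate a fresh token, store it server-side, return it for the form."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token  # embed this in a hidden form field

def verify_csrf_token(session, submitted):
    """True only if the submitted token matches the one stored in the session."""
    expected = session.get("csrf_token", "")
    # compare_digest is constant-time, avoiding a timing side channel
    return bool(expected) and hmac.compare_digest(expected, submitted)
```

On every state-changing request, reject the request unless the verification function returns True.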

Cookies are insecure by default. Look for secrets being stored here. Search for anything that references Response.Cookies or Request.Cookies. Cookie values coming from the client must be validated, and secrets should not be stored in them.

Viewstate is also insecure by default. Look for secrets being stored here too. Viewstate can be encrypted, but that only makes sense if secrets are stored in it. If you see secrets being stored in viewstate, validate that they are properly protected.


SSL certificate chains are automatically checked for validity, but developers will often bypass this when using internal self-signed certificates. The certificate check can be bypassed by overriding the CheckValidationResult method; if that method always returns true, all SSL checking has been bypassed.
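The same anti-pattern exists on every platform. In Python terms (illustrative), disabling verification is the moral equivalent of a CheckValidationResult that always returns true:

```python
import ssl

# Proper default: full certificate chain and hostname verification
strict = ssl.create_default_context()
assert strict.verify_mode == ssl.CERT_REQUIRED

# The bypass reviewers should flag: all certificate checking disabled,
# so any man-in-the-middle certificate is silently accepted.
bypassed = ssl.create_default_context()
bypassed.check_hostname = False
bypassed.verify_mode = ssl.CERT_NONE
```

The right fix for internal self-signed certificates is to add the internal CA to the trust store, not to disable checking.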

Information Disclosure

Make sure exceptions are handled properly. If you see something like Response.Write(ex.ToString()), the exception will be written directly to the client, which can open up all kinds of other issues: ToString includes the stack trace and other debugging information. Search for "catch (Exception" to find exception handling code.
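The safe pattern is to log the full detail server-side and return only a generic message to the client. A sketch in Python (the handler shape and message are illustrative):

```python
import logging
import traceback

logger = logging.getLogger("app")

def handle_request(work):
    """Run a request handler; never let exception detail reach the client."""
    try:
        return work()
    except Exception:
        # The full stack trace goes to the server log only
        logger.error("request failed:\n%s", traceback.format_exc())
        # The client sees a generic message with no internals
        return "An internal error occurred."

print(handle_request(lambda: 1 / 0))
```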

The above checklist is a good way to start a code review, but as mentioned earlier, it is not exhaustive. No checklist can replace a mature SDL and code review process, but hopefully this gives you some high-impact issues to check for quickly while doing large-scale code reviews.

How Threat Modeling Saved My Life


There's been a joke in the software industry that goes something like this:

If automotive technology had kept pace with Silicon Valley, motorists could buy a V-32 engine that goes 10,000 m.p.h. or a 30-pound car that gets 1,000 miles to the gallon — either one at a sticker price of less than $50. Detroit's response: "OK. But who would want a car that crashes twice a day?"

It became urban legend that an exchange like this actually happened between Bill Gates and an auto industry executive, either the head of GM or a Ford family member. Interestingly, the item “you’d have to press the start button to shut off the engine” has actually happened in some cars.

The joke makes the point about how different the focus of software engineering can be from the focus of building cars or bridges. In software, it's all about functionality and performance over reliability and security, because the implications of an application failing are, in many cases, much less severe than those of a car crash or a bridge failure. Sadly, favoring functionality, performance, or time to market is often a fair business trade-off given customer expectations of software.

In a car, engineers invest more time and cost to address failure and abuse modes, modeling them so the car can be designed to protect occupants. That picture at the top of this post is my car. It was hit hard as I pulled out of a parallel parking space, and in all likelihood the side curtain airbag saved me from a nasty bang of my head against the left side window. It may be an exaggeration to say it saved my life, but thinking about how a side collision could make the driver's head hit the window led engineers to the mitigation of a side curtain airbag, and that certainly made my day!

In application security, this process of thinking about attacks to the application and their mitigations is called Threat Modeling.

Threat modeling is one of the key SDL activities that drive many of the other downstream processes in a security-conscious software development lifecycle. Thinking about how the application will and will not be attacked allows the designers, architects, developers, and testers to address the relevant cases while not wasting time and money on those that are not.

So I can highly recommend two things: (1) a car with side curtain airbags, and (2) an SDL for your software development process that includes threat modeling and education on it. Security Innovation can help you with the second; your car company should provide the first.

Application Security "ROI" - Talking Business


Every so often in my career, someone new to security comes along who is from a different industry, or who is, well, new to all fields, and comes up with a great idea that goes something like this: people don't buy anything unless they have to or it has an ROI. What if we showed the ROI for security, instead of just arguing that it makes them compliant or reduces risk? That will make justifying our product easy. Then, and I swear I've heard this several times in several companies, they say: you see, security doesn't just reduce risk, it enables blah, where blah is something like internet use or e-commerce, or, more recently, adoption of cloud or mobile. Then, the soliloquy continues, we can justify the security ROI as the ROI of being able to do blah. And then they'll smile, because they just showed us security geeks that they know business!

Well, let me make something perfectly clear. Security is a cost. It does not enable anything. Criminals impede the 'blah' from above, making an appropriate level of security a condition of doing business. Banks don't have safes to enable banking. If all people could be trusted, all the money would be in a nice, organized, inexpensive closet, without a lock. That would enable banking. Security budgets should be the minimum needed to achieve the required security posture. More security would be nice, but if there's money to spare, more sales, marketing, or product is always nicer. We should do the security we need, plus the things in security that reduce cost. Full stop.

So, then, how do we justify Application Security? It's the same two reasons we use to justify all security: it's required for compliance and a minimum acceptable level of risk, and it reduces cost. On the compliance and risk side, there is ample evidence that successful attacks are focused on the application these days, both common ones and so-called Advanced Persistent Threats (APTs). Most prescriptive compliance requirements that cover applications call for best-practice process, education, and the use of scanning tools, including PCI (see control 6), NIST 800-53 (see control SA-8, among others), the SANS 20 Critical Controls (see control 7), and many others.

On the cost side it gets more interesting. Application Security practices, including a secure SDLC, developer training, and the use of coding standards, can make projects more efficient, reduce vulnerability management costs, and reduce compliance costs. In future blogs I will go into these topics individually to show the application risks and compliance requirements as well as the cost reductions. Some of this information can be found in two independent analyst reports on the subject, by Aberdeen and Forrester, made available by Microsoft. These are a good place to start. They are:

These are the business arguments for spending (let's not hide it with the word 'investing') people's time, money, and other resources on improving Application Security SDLCs, knowledge, and tool sets.

People, People, People


This past week has yielded a veritable treasure trove of head-shaking security stories, all related to my favorite security soft spot: people. The shimmer from our technological advances blinds us to the damage people can do, and we remain so easily fooled:

  • Wired reported that Albert Gonzalez, the record-setting hacker of Heartland Payment Systems, TJX and a range of other companies said the Secret Service (SS) asked him to do it. The government admitted using Gonzalez to gather intelligence and help them seek out international cyber criminals but says they didn’t ask him to commit any crimes. Uh, yeah… ok.
  •  Storefront Backtalk and others reported on a Gucci engineer who was fired for "abusing his employee discount," but then really got even (and then some) by creating a fictitious employee account (with admin rights!) and then using that account to delete a series of virtual servers, shut down a storage area network (SAN), and delete a bunch of corporate mailboxes… allegedly.
  • TechAmerica wrote about HP suing a former executive who took a job at Oracle. Apparently, he downloaded and stole hundreds of files and thousands of emails containing trade secrets before quitting.

You might ask, "How can a company as advanced and large as HP not have protections on their digital trade secrets?" It's not like DLP (data leak prevention) solutions don't exist. And how about Gucci? I guess this is a double whammy around policy and people, who are so often intertwined. There isn't a policy flag or checkpoint in place to verify that a newly-created employee is authorized with privileges that let him delete entire virtual servers and mailboxes? Nobody bothered to check that this was a legitimate employee? Worst of all, this non-existent employee's accounts were created by a fired network engineer! And then there's Mr. Gonzalez (hacking community) and the SS (intel community): which group do you trust less to be honest with the public? Both communities have long engaged ethically questionable people to do their bidding. If it's true that the SS hired him to hack, shame on him for not getting protection for himself in advance. You have to wonder what else he hacked into to merit an actual arrest.

And here we are in 2011, putting our lives on display with Facebook, Twitter, LinkedIn, Yammer, et al., broadcasting our whereabouts on vacation (or, more specifically, that we're not home for an extended period), meeting up with strangers who have similar tastes, and making our personal details and history available for anyone to view. It's not always technology that will get us into security trouble… it's the people.

Why do we expect to find LOTS of really bad things during testing?


The cost of fixing a defect in testing is a worn-out argument, so I won't beat a dead horse here. Rather, I'll provide some insight into aspects of penetration testing that aren't as commonly discussed. I want to talk about how testing fits into a mature security development lifecycle, and how altering your process can actually make penetration testing less traumatic for your schedule and your budget.

Development teams often wait to "test" until the "test phase". This can be true whether you use agile or waterfall or something in between. What often gets lost in translation is that testing is an activity, not a phase, and should be a persistent activity throughout development. Perhaps a better word would be inspection: you inspect your design with a design review, you inspect your code with a code review, and you inspect your binaries through penetration test methods. The goal of penetration testing is to find anything that was missed during design review or code review. Penetration testing should not be a single line of defense or the only point at which you look for problems.

One way to measure the maturity of your software development effort is to look at the severity of bugs found in testing. If you are finding really nasty security vulnerabilities (or performance or stability issues), you should ask yourself what you can improve upstream, in the early parts of your process. When testers find nasty vulnerabilities during verification, that's a great testament to their skill as penetration testers, but it is not a testament to your team's ability to create a secure application. Instead, you should think of penetration testing as a backstop, used as a last line of defense to catch anything that was missed in the earlier stages. If your application is designed and built right, vulnerabilities should be minimized, and only a few critical ones should surface in penetration testing. Vulnerabilities, after all, are architecture, design, or coding mistakes that ideally should be prevented or caught well before verification.

Below are some key best practices.  To learn more, feel free to view my “Six Key Security Engineering Activities” webcast.

  • Understand the impact that an effective secure SDLC will have on this problem:
    • Conduct architecture reviews to find problems before coding
    • Conduct code reviews on modules, not the final code base – you can catch coding mistakes early and reduce vulnerabilities
    • Conduct frequent, smaller-scale testing (regression testing) – ensures you are vulnerable for a much shorter amount of time
    • Use threat models to optimize and drive security test planning – ensures that you focus on the high-risk areas of your application
    • Analyze your vulnerabilities to improve your coding techniques – most vulnerabilities result from the same coding mistakes
  • Leverage testing as a backstop activity to ensure your design and code were implemented correctly
    • Unless accompanied by a complementary design effort, testing as a stand-alone activity reduces the effectiveness of every dollar and minute spent

Questions for Comodo and RSA after their Recent Hacks


Unfortunately, two security companies I respect were hacked in the past few weeks. This has resulted in significant negative publicity and may result in lost trust and lost sales. These companies are security companies, and yet their security was breached. For me, this raises many questions. This blog is about the questions I would ask executives of both companies to learn from what happened to them.

Background on what happened at RSA

On Thursday, March 17 2011, RSA published the following open letter on its website, and followed up with a SecureCare Note.

RSA itself has been very tight-lipped about what actually happened, what was stolen, and what the risk is, except to call the attack an advanced persistent threat. It's not a surprise to learn that the attackers were sophisticated and tried hard, over time, to achieve their objective. There has been a lot of speculation in the blogosphere about what happened, as well as criticism of how little RSA has revealed.

From the open letter, we learn that over a period of time RSA was attacked, and that the attackers successfully extracted valuable information about RSA SecurID. This information is valuable enough for RSA to warn all of its customers that the security of its flagship product may be reduced and, according to GCN, to temporarily stop shipping its tokens.

Background on what happened with Comodo

On March 22, 2011, the Tor Project, with help from Security Innovation's Ian Gallagher, published a blog stating their belief that a CA had been compromised.   Comodo followed up with this post on March 23 confirming a March 15 compromise.

A quick summary of what Comodo confirmed: an attacker from Iran compromised a user account on one of their registration authorities (RAs) and used it to issue himself certificates for major web properties.

A person claiming to be the Comodo attacker posted a long statement here, in which he outlined his motivation and methods. He says that he probed many leading SSL vendors' servers and found some vulnerabilities, but not enough for his attack.

He then attacked Comodo's service, gained control of it, and found that it was TrustDLL.dll, written in C#, that does the actual CSR signing. In his words: "I decompiled the DLL and I found username/password of their GeoTrust and Comodo reseller account. GeoTrust reseller URL was not working, it was in ADTP.cs. Then I found out their Comodo account works and Comodo URL is active. I logged into Comodo account and I saw I have right of signing using APIs."

A few questions for RSA, Comodo, and all of us

RSA and Comodo are security companies. RSA has one of the best brands and reputations in the industry. Yet they were successfully attacked in a way that affects them and their customers. What happened? Here are some of the questions I would ask their executives:

Who was in charge of your security, and were they and their team empowered?

It is far too easy to scapegoat the head of IT Security at both companies.  They are a natural target and should obviously be questioned.  The more interesting area of exploration is with the executive team themselves.  Did they listen when security concerns were brought to them?  Did they encourage a culture that welcomed this and responded to it with action?  Were individual contributors able to get their security concerns up to the executive suite or were they squashed by middle management?  

What did you do to make the security of your customers' critical assets part of every employee's mission?

Was every relevant employee given ongoing training on secure coding best practices? Was the importance of this aspect of the company's mission, safeguarding its customers' trust, regularly highlighted by senior management? Were individual employees rewarded for sticking their necks out about a potential security risk?

Did Senior Management make tough calls to prioritize long-term security over short-term gain?

We've all been there. You are looking at your product, service, or IT roadmap; you have 20 things you want to do over the next quarter, and you have to pick 5. A few of those features relate to security. They aren't going to give customers any shiny new benefits, no short-term competitive wins, just the boring, slogging kind of features that make a product or service rock-solid. Which did they pick? Did senior management take the lead in pushing to do the right thing and play the long-term game, or not?

Did you get a second opinion...regularly?

There is no substitute for doing great work in the first place. But on something as important as security, you need a second opinion...repeatedly. How often was a third party brought in for black box and white box penetration testing? Once? Once in a while? Or as a regular part of a disciplined process? Was budget set aside for this, or did motivated middle managers have to scrimp and push for it?


There is no doubt that we face threats from unfriendly governments, criminal organizations, and disciplined individuals. Our attackers are advanced and they are persistent. Our defenses must be advanced too. Even if we think they are, we should get a second opinion. But the key to all of our businesses is our people. The most important thing is that our attitude, effort, and culture be persistent ... persistently, deliberately focused on securing the trusted assets given to us to safeguard.

Input Validation using Regular Expressions


Input validation is your first line of defense when creating a secure application, but it's often done insufficiently, done in a place that is easy to bypass, or simply not done at all. Since this is a common issue I see in our assessments, and something that has such a great impact on security, I'd like to spend a bit of time outlining input validation best practices and give you some concrete examples of how to do it well.

Input validation is the practice of limiting the data that is processed by your application to the subset that you know you can handle. This means going beyond simple data types and diving deeply into understanding the ideal data type, range, format, and length for each piece of data. One example might be a phone number, which could be stored as a string in memory and a varchar in the database; however, there is much more information about the context of that phone number that we can use to limit our attack surface by verifying the validity of that input. You know a phone number's format is numeric and its length is 10 digits, so you quickly understand that abc123Fmasdf9$1< is not a valid phone number, even if it can be stored as a string or in the database.
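To make that concrete, here is a minimal sketch in Python of what context-aware validation of that phone number might look like. The helper name is mine, and it assumes a North American 10-digit format:

```python
import re

def is_valid_phone(value):
    """Hypothetical validator: assume a North American number,
    i.e. exactly 10 digits once formatting is stripped."""
    digits = re.sub(r"\D", "", value)  # drop everything that is not a digit
    return len(digits) == 10

# The garbage string from the paragraph above stores fine as a string,
# but fails validation:
print(is_valid_phone("abc123Fmasdf9$1<"))  # False
print(is_valid_phone("(425) 555-0100"))    # True
```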

Whitelist or Blacklist?

The first concept of good input validation is whitelisting versus blacklisting. Whitelist, or inclusive, validation defines a set of valid characters, while blacklist, or exclusive, validation defines a set of invalid characters to try to remove.

If we attempt to perform input validation using blacklisting, we will try to enumerate each character that we know is bad. Easy ones that come to mind might be <, >, ', -, %, etc. This can be very challenging; we need to understand every context, every attack, and every encoding to be successful. In addition to context, we must be able to anticipate all future attacks and bad values. This technique is nearly impossible to get right.

If we whitelist a set of characters that we know we can handle, the task of validation is much easier. Take the phone number example from above; I've never seen a phone number that includes any characters other than the following: 0123456789()-+,. and space. Therefore we can quickly reject the example from the second paragraph because it contains characters that are not in this list.
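As a quick illustration, a whitelist check along those lines might look like this in Python. The character set is the one listed above; the function name is mine:

```python
import re

# Whitelist: only the characters we have ever seen in a phone number.
PHONE_CHARS = re.compile(r"^[0-9()\-+,. ]+$")

def passes_whitelist(value):
    """Reject any input containing a character outside the whitelist."""
    return PHONE_CHARS.match(value) is not None

print(passes_whitelist("(425) 555-0100"))    # True
print(passes_whitelist("abc123Fmasdf9$1<"))  # False: letters, $ and < are not whitelisted
```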

Enter: The Regular Expression

A great way of defining a whitelist for input validation is to leverage Regular Expressions. Regular Expressions are incredibly powerful and can be a bit daunting at first, but once you get the hang of them you'll use them nearly every day; I know I do.

There are many great resources on the web for learning Regular Expressions, which I'll list at the bottom of this post, so I won't spend time explaining how they work or their specific ins and outs. Instead, I'd like to walk through my process of developing a restrictive whitelist regular expression for a common example; at the bottom of the post I'll give a few extras with less explanation. I recommend you not take my word for these regular expressions, but spend a bit of time understanding how they work and what they'll do.

To help you match regular expressions I've written a simple regular expression matcher in .NET, aptly named "RegexMatcher". It is available, free and open source, on GitHub. Simply type your regular expression into the top text box and the text you wish to match in the lower text box. Your matches will show up in the box to the right.

Download Regex Matcher

Example – Usernames

We can define usernames to be as restrictive as we'd like, but let's start with something easy, such as simply "The username must contain only upper and lowercase letters."

Therefore the following list of usernames is valid:

  • Joe
  • a
  • thisisaverylongusernameindeed

These are not:

  • Mr.Smith
  • Two Words
  • S4MMIE

First Pass

Starting with a simple regular expression, we might come up with something like:

^\w+$

This will allow one or more of any "word" character, which includes numbers, letters, and underscores; that means S4MMIE slips through. The caret (^) defines the beginning of the string and the dollar sign ($) defines the end; these are good to keep in, otherwise our regular expression may match but still allow additional data through. As you can see, this is too liberal for our uses.

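If you'd like to try this first pass outside of RegexMatcher, here is a quick Python check of the one-or-more-word-characters pattern against our sample usernames:

```python
import re

first_pass = re.compile(r"^\w+$")  # \w matches letters, digits and underscore

for name in ["Joe", "S4MMIE", "Two Words", "Mr.Smith"]:
    print(name, bool(first_pass.match(name)))
# Joe True        -- intended
# S4MMIE True     -- slips through, because digits count as \w
# Two Words False -- the space is rejected
# Mr.Smith False  -- the dot is rejected
```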

Get More Restrictive

We can define a specific list of inclusive characters using square brackets and an inclusive character set. This regular expression will match one or more (via the plus sign) upper or lowercase letters (a-z or A-Z):

^[a-zA-Z]+$

There we go, that matches only the usernames that we want.

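The same kind of spot check works for the letters-only pattern; a short Python sketch:

```python
import re

letters_only = re.compile(r"^[a-zA-Z]+$")  # one or more letters, nothing else

valid = ["Joe", "a", "thisisaverylongusernameindeed"]
invalid = ["Mr.Smith", "Two Words", "S4MMIE"]

print(all(letters_only.match(u) for u in valid))    # True: every valid name matches
print(any(letters_only.match(u) for u in invalid))  # False: none of the invalid ones do
```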

New Requirements

What if later there is a business requirement to allow numbers, the dash, and the dot character in usernames? We can easily add those to the whitelist like so:

^[a-zA-Z0-9.-]+$

Now we can see that S4MMIE, user-name, Mr.Smith and Joe.Basirico all get through.

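Again, a quick Python sanity check of the expanded whitelist (note the dash is placed last in the character class so it reads as a literal, not a range):

```python
import re

# Dash is last inside the class so it is a literal, not a range operator.
extended = re.compile(r"^[a-zA-Z0-9.-]+$")

for name in ["S4MMIE", "user-name", "Mr.Smith", "Joe.Basirico", "Two Words"]:
    print(name, bool(extended.match(name)))
# Only "Two Words" prints False -- the space is still not whitelisted.
```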

If we continue to take this approach we can clearly see each inclusive decision and easily see which characters will make it through, and which will not.

Other Examples

Phone Numbers

Phone numbers can be difficult if you start getting into international numbers and complicated formats. I like to strip out everything but the digits, then make a very quick check that exactly 10 digits remain:

^\d{10}$

Otherwise a slightly longer regular expression will do: 

^1?[\(\- ]*\d{3}[\)\-\. ]*\d{3}[\-\. ]*\d{4}$

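Both approaches are easy to test side by side. The sketch below uses Python; the strip-then-count helper name is my own, and I've normalized the escaping in the longer pattern:

```python
import re

def normalize_and_check(value):
    """Strip-then-count: remove every non-digit, then require exactly ten."""
    digits = re.sub(r"\D", "", value)
    return re.fullmatch(r"\d{10}", digits) is not None

# Single-regex alternative, with the escaping tidied up:
LONG_FORM = re.compile(r"^1?[(\- ]*\d{3}[)\-. ]*\d{3}[\-. ]*\d{4}$")

for number in ["(425) 555-0100", "425.555.0100", "555-0100"]:
    print(number, normalize_and_check(number), bool(LONG_FORM.match(number)))
# The first two pass both checks; "555-0100" fails both (only 7 digits).
```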

e-mail address

e-mail addresses are notoriously difficult to match if you get too caught up in the RFC. Additionally, if you try to be too compliant you may open yourself up to other issues, such as command or SQL injection or Cross-Site Scripting. I suggest striking a balance between readability and restriction.

Such a pattern will match the majority of e-mail addresses, but will reject the .museum TLD and some very fringe e-mail addresses. Consult your business requirements to see if something more complicated is required.
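For illustration, here is one such compromise pattern in Python. The exact expression is my own assumption, not the original; it simply demonstrates the trade-off of capping the TLD at four letters:

```python
import re

# An assumed, pragmatic pattern -- readable and restrictive rather than
# fully RFC-compliant. Capping the TLD at 2-4 letters rejects .museum.
EMAIL = re.compile(r"^[\w.+-]+@[\w-]+(\.[\w-]+)*\.[a-zA-Z]{2,4}$")

print(bool(EMAIL.match("joe@example.com")))              # True
print(bool(EMAIL.match("joe+news@mail.example.co.uk")))  # True
print(bool(EMAIL.match("curator@example.museum")))       # False: six-letter TLD
print(bool(EMAIL.match("joe@exa mple.com")))             # False: space is rejected
```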

More Resources

There are some really great resources out there to find examples of regular expressions and to learn how they work. I highly suggest you learn this incredibly powerful piece of computer science.

See the following articles and websites for more information on regular expressions.
