Monday, July 29, 2013

Policy of Truth: Security Management in a Startup Culture


Everyone wants to work in a startup culture. It's the land where access to source code, production systems, and user data is as simple as access to the weekly chair massages and the 4pm Nerf Gun Fight. It's all fun and games until something bad happens, or, if you're lucky, until customer demand or prescient leadership introduces security improvements before that day arrives.

However, an open culture is built on trust, and what does it say to your colleagues when the actions you take to restrict access or monitor activity appear to be the polar opposite of trust?

I'll be part of a roundtable at this year's Black Hat Executive Briefings discussing this thorny problem. I'll outline some of my thoughts here in case you're not coming. I won't be posting any follow-up, so if you're going, I encourage you to stop by and chat and rest assured that what's said at the table stays at the table.

We'll start with what some may call "touchy-feely stuff". Those who are squeamish about such things may wish to skip this section - I promise the next section will be more tactical.

Openness and compassionate leadership

Figuring out how to address a challenging security problem involves analyzing a good number of variables, technology choices, and tradeoffs. By the time I've reached a general idea of where I'd like to go, I usually charge in at full speed. One of the mistakes I've frequently made is jumping ahead to a conclusion that others may not have reached yet without stopping to explain the background and the threat model in my head.

The reasons have nothing to do with being secretive or trying to obscure an approach. Instead, it's usually due to deadline pressure, an overwhelming workload, and an established set of incident, attack, and defense patterns I've seen time and time again. As a result, when I don't communicate promptly and properly, the approach gets interpreted as a lack of transparency and openness.

Who wants to work with a security team with a culture that encourages secrecy, segmentation of knowledge, and a reliance on an authoritarian approach? No one - which is why slowing down, encouraging dialogue, and clearly explaining the approach is so important. However, when a decision has been reached, it can be difficult sometimes to move out of the dialogue phase and into action. This is where some of the compassionate leadership approaches outlined by Jeff Weiner have been helpful to me.

His article really shows the importance of effectively and authentically showing compassion for the people you're working with, even when they disagree with you. While I usually feel confident in my judgment and believe my approach is the right way to go, my favorite quote in the piece always gives me pause: wisdom without compassion is ruthlessness.

Is the person disagreeing with you because they think you're subtly signaling distrust, because you don't understand their workflow or the challenges they're facing, or because you're not going far enough? Each scenario requires a different response, but more importantly, you need to show your compassion and be authentic about it.

Humor and management buy-in

When LinkedIn wanted to kick off a phishing awareness campaign, we did the usual assessment of employees by sending them fake phishing emails. Our phishing team leader took great glee in crafting tricky messages and was rewarded later by his guinea pigs (who happened to be the security team) with a full-on Nerf gun ambush one Friday afternoon.

Humor and fun are a big part of LinkedIn's culture. When the assessment went company-wide, one of the senior vice presidents came to the company all-hands meeting wearing full fishing waders and a hat and carrying a fishing pole, and challenged everyone present not to fall for the simulated attacks. The image was so great that his photo was used for our awareness poster as a constant reminder of the meeting and the message.

When security talking heads drone on about getting management buy-in and support, they often fail to mention the most important part - making the program part of the company culture rather than a strange externality or a practice antithetical to employee values.

Monitor first, then restrict access

In a culture that prizes openness and thoughtful decisions about anything that affects the company's morale, data-driven security controls are the only way to survive. For example, sometimes you know instinctively that a given system has way too many users or overscoped permissions. Instead of immediately drafting restrictions based on who you think should have access, first monitor how the resource is actually used.

Using that data, build an access model based on the empirical use of the resource. Of course, you should evaluate the proposed model with common sense and keep an eye out for cases where people are accessing data outside of their expected role. Rather than simply debating whether or not people should have access at all, recognize that some users might only need a subset of the data or functionality to solve a business problem. Use the organization's agility to build new features or application functionality that addresses the underlying need, so you can then restrict access to the important resources you care about.
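To make the monitor-first idea concrete, here's a minimal sketch of the kind of analysis I have in mind. It's illustrative only; the CSV log format, file name, and grant data are assumptions, not anything from a real deployment:

```python
# Minimal, illustrative sketch of "monitor first": tally who actually
# uses a resource before redesigning who is allowed to.
# The CSV log format, file name, and grant data are made-up assumptions.
from collections import defaultdict
import csv

def observed_access(log_path):
    """Return {resource: set(users)} from an access log with
    columns: timestamp, user, resource, action (hypothetical format)."""
    usage = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            usage[row["resource"]].add(row["user"])
    return usage

def flag_unused_grants(usage, current_grants):
    """Compare observed use against current grants and report
    accounts holding access they never exercised."""
    for resource, granted in sorted(current_grants.items()):
        unused = granted - usage.get(resource, set())
        if unused:
            print(f"{resource}: {len(unused)}/{len(granted)} grantees "
                  f"never used it: {sorted(unused)}")

if __name__ == "__main__":
    usage = observed_access("access_log.csv")           # weeks of real logs
    grants = {"billing_db": {"alice", "bob", "carol"}}   # e.g. from LDAP groups
    flag_unused_grants(usage, grants)
```

Run against a few weeks of logs, output like this gives you an empirical starting point for the access model instead of a guess about who should be cut off.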

Be prepared to be wrong

Twitter’s security team has a code review tool that emails a developer when a static code analyzer identifies a vulnerability in his or her code. While this is no different than code review practices at other companies, they did something I haven’t seen elsewhere. Each finding email includes a big red “Bullshit!” button that a developer can hit if they believe the finding to be a false positive. Embracing humor and giving people a feedback mechanism engenders trust and lets people know that you’re willing to be told you’re wrong.

An aside: the talk “Putting Your Robots To Work” by the Twitter security team is a great read on this topic. Check it out.

Security programs, and especially policies, can have unintended consequences – whenever you introduce them, try to take in as much feedback as possible not only before implementation, but also afterwards. Monitor “detour points” or shortcuts that staff might take, such as shifts to alternative resources or process flows that violate the spirit of the program but not the letter.

In many of these cases, the fault is not with the guy who is trying to work around the system, but with the security manager who put something in place that doesn't work within the culture. Be prepared to admit you were wrong and make the necessary adjustments rather than trying to close loopholes. There are times when the risk of allowing things to continue is simply unacceptable and you will need to take a stand. Clearly illustrate the risk and consequences using some of the techniques mentioned earlier.

Setting the stage for security awareness

Sometimes, new security managers discover that people come to grips with a security program in different stages. At first, you have to convince them that there are external threats of concern, and that the primary focus is on outsider attacks. I call this the “It’s Not You, It’s Them” phase. While you may very well be working on sophisticated insider threat models and defenses, it’s pretty easy to show the weaknesses in most authentication and credential-based systems, or the presence of exploitable vulnerabilities, that would make an insider attack virtually indistinguishable from an outsider attack.

It is also important to remind cultural champions that it's better to be ahead of the problem than behind it. Addressing defensive approaches during peacetime gives the organization much more leeway to remain open; attempting to build security controls during wartime, when that flexibility may no longer be there, does not.

Again, I hope you can join me this week to discuss these topics. Feel free to catch me in the hallway or at one of the gatherings if you're not at the roundtable. I'd love to hear your thoughts.

Tuesday, June 18, 2013

Do It Anyway: Why We Should Worry Less About Prior Security Research

Writing and speaking about security topics in the public sphere seems to be getting harder to do for many practitioners. While some established folks are trucking right along, the increased popularity of information security as a topic has not resulted in a corresponding rise in new speakers and authors. 

One of the reasons I think people are hesitating is paralysis caused by the desire to generate new and unique content rather than improve and refine what we already have in place. We've set a difficult-to-reach bar for participating in the dialogue at a time when we need more voices and more technical content.

The Black Hat Effect

I believe that one of the primary reasons for this is what I call the Black Hat Effect. Black Hat sets a standard for presenters that includes the following criteria for getting their attention:
Talks that are more technical or reveal new vulnerabilities are of more interest than a review of material covered many times before. We are striving to create a high-end technical conference and any talk that helps reach this goal will be given extra attention. 
Original content or research that has been created specifically for Black Hat and has not been seen before always gets extra priority as well as demonstrations involving new material, or a new way of presenting information to the attendees.

The goal of filtering out talks like "SCADA systems are insecure, OMG!" and "Here's a list of security vulnerabilities in a web framework that have already been outlined in a FAQ and three hardening guides" is completely reasonable. I know I wouldn't go to a Black Hat talk that covers those topics. In fact, I'm not even particularly interested in a talk that reveals new vulnerabilities unless I know the speaker is going to walk through the discovery process and show us how our existing processes and tooling failed.

Talks that identify our "blind spots" and reveal issues that we've collectively missed are the Black Hat talks of legend and lore. And an extremely high bar to reach. As a result, lots of talks try to go for the big reveal and choose showmanship over substance. A common joke about the conference is that the best Black Hat talk is one that never happens, because the content is so dangerous or damaging to a vendor that it had to be shut down by a phalanx of lawyers.

Topic Land Rush

Since Black Hat is the de facto standard for security research, some strange practices have arisen to optimize for this standard. One phenomenon is the Topic Land Rush: when a new protocol or service is released, researchers work furiously to put together something (and sometimes anything) to stake their claim in the space. While this exploration provides needed scrutiny and evaluation, there is also a territorial undercurrent that isn't particularly healthy. In the current environment, no one wants to be the second person talking about the latest technology, even if they are building on previous research or have more to add to the dialogue. 

The Topic Land Rush also ignores the interesting balance of technological maturity and the need for security. On one side, the cutting-edge developers and deployers want to push immature technology into unsafe deployment scenarios, and on the other side, you have the security researcher licking his or her chops waiting to eviscerate the early adopter. We can live with this - it certainly is entertaining and does provide a service to the Internet community as a whole. However, when combined with the reluctance to cover already trod-upon ground, we end up with the first analysis as the only analysis. 

Let's take memcached as an example. Research was published at Black Hat 2010 about unauthenticated memcached instances on the Internet. Note that the first paragraph in the accompanying blog post plants the "we're here first" flag, participating in the Topic Land Rush and validating the Black Hat Effect. The talk was great, the tool released to scan for instances was awesome, and the SensePost guys did a good job. But adoption of memcached in 2010 wasn't anywhere near the level it is in 2013, and the only guidance we still have is "Don't expose it on the Internet." There hasn't been significant discussion about it since, even though SASL-based authentication for memcached has been available for years. No one has taken a stab at revisiting the risks of exposing memcached on the Internet on a stage as big as Black Hat.
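To show how little follow-up material it would take, here's a minimal sketch of a client talking to a SASL-protected memcached instance using the python-binary-memcached (bmemcached) library. The host, credentials, and key names are placeholders I made up for illustration:

```python
# Minimal sketch of a client using SASL authentication with memcached,
# via the python-binary-memcached (bmemcached) library. Host, credentials,
# and key names are placeholders for illustration only.
import bmemcached

# The server must be built with SASL support and started with it enabled
# (e.g. `memcached -S`), with credentials provisioned on the host.
client = bmemcached.Client(
    ("cache.internal.example.com:11211",),  # placeholder host
    username="app_user",                    # placeholder credential
    password="app_secret",
)

client.set("session:123", "serialized-session-data")
print(client.get("session:123"))
```

Even a short walkthrough like this, paired with an honest look at the operational tradeoffs, would be more useful to most practitioners than another round of "it's exposed and unauthenticated."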

Perhaps that's the only thing to say about memcached, but this is one of the clearest examples of the issue. The land rush gets even more detrimental when the topic gets broader (e.g., cloud-based service security) or discourages work that is quickly and unfairly labeled as derivative. As a result, there's an unreasonable expectation among many that all future discussion on any security topic should reference or credit the first person who got there - and who wants to start a talk or blog post with a set of citations or, even worse, get instantly labeled as "nothing new"?

Imagine if every Black Hat talk had an expiration date

I'm not suggesting that Black Hat should significantly change their talk selection process. There needs to be a top-tier conference that presents the latest and greatest in security research, but we shouldn't hold every blog entry, mailing list post, conference presentation, or article to that standard. It's better to get more voices out there even if there is some repetition - even if it is *gasp* not a brand new trail. One of my favorite experiences is working with newer consultants or analysts and watching how they discover some of the same things I did without the massive, burdensome weight of prior research. Almost every time they explain their path to discovery, I think I'm going to hear the same topic and approach covered, but then there's a slight twist or improvement that adds to my experience and makes me a better practitioner.

A slight diversion: I think it would be a good idea to revisit topics presented at Black Hat from time to time to see if the now-conventional thinking needs to change. What about a track that takes a handful of talk topics from 2 or 5 years ago and invites commentary from presenters other than the original author to provide an updated analysis? It would give everyone a chance to see what's changed, how defenses have evolved, and whether or not the original issue really was that big of a deal in the first place. It would usher in a new era of accountability both for presenters (to make sure they're bringing up relevant topics) and for vendors (to make sure they are actually making things better after having their flaws pointed out).

There's a rich content mine for new researchers struggling to find topics to investigate just by going through old talks where the original presenter left more questions than answers about a given product or technology. It may not be as sexy as breaking it for the first time, but you'll be helping a wider audience of people who are actually trying to use and secure it. I know it's hard to believe, but just because a product gets trashed on stage doesn't mean that everyone throws it away in the rubbish bin outside the speaker's hall.

Do It Anyway

Here's my suggestion for would-be presenters and publishers paralyzed by prior research: Do it anyway. Most of us chose to stay out of academia for a reason, and we shouldn't get into the citation game just for the sake of it. However, this comes with a caveat: Don't try to represent that you're the first one to the table, and if your research was inspired by someone else's work, give them credit. On the other hand, don't waste too much time trying to find previous research if you're not aware of it in the first place. At the end of the day, if the content is compelling, you'll get the recognition and attention you deserve.

* Thanks to Chris Rohlf for his feedback on this post.