One of the reasons why I think people are hesitating is paralysis caused by the desire to generate new and unique content rather than improving and refining what we already have in place. We've set a difficult-to-reach bar for participating in the dialogue when we need more voices and technical content.
The Black Hat Effect
I believe that one of the primary reasons for this is what I call the Black Hat Effect. Black Hat sets a standard for presenters that includes the following criteria for getting their attention:
Talks that are more technical or reveal new vulnerabilities are of more interest than a review of material covered many times before. We are striving to create a high-end technical conference and any talk that helps reach this goal will be given extra attention.
Original content or research that has been created specifically for Black Hat and has not been seen before always gets extra priority as well as demonstrations involving new material, or a new way of presenting information to the attendees.
The goal of filtering out talks like "SCADA systems are insecure, OMG!" and "Here's a list of security vulnerabilities in a web framework that have already been outlined in a FAQ and three hardening guides" is completely reasonable. I know I wouldn't go to a Black Hat talk that covers those topics. In fact, I'm not even particularly interested in a talk that reveals new vulnerabilities unless I know the speaker is going to walk through the discovery process and show us how our existing processes and tooling failed to catch the issue.
Talks that identify our "blind spots" and reveal issues that we've collectively missed are the Black Hat talks of legend and lore. And an extremely high bar to reach. As a result, lots of talks try to go for the big reveal and choose showmanship over substance. A common joke about the conference is that the best Black Hat talk is one that never happens, because the content is so dangerous or damaging to a vendor that it had to be shut down by a phalanx of lawyers.
Topic Land Rush
Since Black Hat is the de facto standard for security research, some strange practices have arisen to optimize for this standard. One phenomenon is the Topic Land Rush: when a new protocol or service is released, researchers work furiously to put together something (and sometimes anything) to stake their claim in the space. While this exploration provides needed scrutiny and evaluation, there is also a territorial undercurrent that isn't particularly healthy. In the current environment, no one wants to be the second person talking about the latest technology, even if they are building on previous research or have more to add to the dialogue.
The Topic Land Rush also ignores the interesting balance of technological maturity and the need for security. On one side, the cutting-edge developers and deployers want to push immature technology into unsafe deployment scenarios, and on the other side, you have the security researcher licking his or her chops waiting to eviscerate the early adopter. We can live with this - it certainly is entertaining and does provide a service to the Internet community as a whole. However, when combined with the reluctance to cover already trod-upon ground, we end up with the first analysis as the only analysis.
Let's take research into memcached as an example. Research was published at Black Hat 2010 about unauthenticated memcached instances on the Internet. Note the first paragraph in the blog post plants the "we're here first" flag, participating in the Topic Land Rush and validating the Black Hat Effect. The talk was great, the tool released to scan for instances was awesome, and the Sensepost guys did a good job. In 2010, adoption of memcached wasn't anywhere near the level it is in 2013, and the only guidance we still have is "Don't expose it on the Internet." And there hasn't been significant discussion about it since, even though there's been SASL-based authentication for memcached available for years. No one has taken a stab at revisiting the risks present in exposing memcached on the Internet on a stage as big as Black Hat.
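To make the memcached exposure concrete: the service speaks a plain-text protocol, and an instance that hasn't enforced SASL will happily answer the `stats` command with no credentials at all. Below is a minimal sketch, with illustrative function names, of the kind of unauthenticated check a scanning tool like the one described above might perform; it is a simplification, not a reconstruction of the actual SensePost tool.

```python
import socket

def probe_memcached(host, port=11211, timeout=3):
    """Ask a memcached instance for 'stats' with no authentication.

    If the instance is exposed and SASL isn't enforced, it replies with
    STAT lines; returns a dict of those stats, or None on failure.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"stats\r\n")
            data = b""
            # The stats reply is terminated by a lone END line.
            while not data.endswith(b"END\r\n"):
                chunk = s.recv(4096)
                if not chunk:
                    break
                data += chunk
    except OSError:
        return None
    return parse_stats(data)

def parse_stats(payload):
    """Parse 'STAT <name> <value>' lines into a {name: value} dict."""
    stats = {}
    for line in payload.decode("ascii", "replace").splitlines():
        if line.startswith("STAT "):
            _, name, value = line.split(" ", 2)
            stats[name] = value
    return stats
```

A non-None result from `probe_memcached` is exactly the condition the 2010 research flagged: the instance is answering strangers on the Internet, and the `stats` output (version, item counts, connection counts) is only the most benign thing an attacker can pull from it.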
Perhaps that's the only thing to say about memcached, but this is one of the clearest examples of the issue. The land rush gets even more detrimental when the topic is broader (i.e., cloud-based service security), and it discourages work that is quickly and unfairly labeled as derivative. As a result, there's an unreasonable expectation among many that all future discussion on any security topic should reference or credit the first person who got there - and who wants to start a talk or blog post with a set of citations or, even worse, get instantly labeled as "nothing new"?
Imagine if every Black Hat talk had an expiration date
I'm not suggesting that Black Hat should significantly change their talk selection process. There needs to be a top-tier conference that presents the latest and greatest in security research, but we shouldn't hold every blog entry, mailing list post, conference presentation, or article to that standard. It's better to get more voices out there even if there is some repetition - even if it is *gasp* not a brand new trail. One of my favorite experiences is working with newer consultants or analysts and watching how they discover some of the same things I did without the burdensome weight of prior research. Almost every time they explain their path to discovery, I think I'm going to hear the same topic and approach covered, but then there's a slight twist or improvement that adds to my experience and makes me a better practitioner.
A slight diversion: I think it would be a good idea to revisit topics presented at Black Hat from time to time to see if the now-conventional thinking needs to change. What about a track that takes a handful of talk topics from two or five years ago and invites commentary by presenters other than the original author to provide an updated analysis? It would give everyone a chance to see what's changed, how defenses have evolved, and whether or not the original issue really was that big of a deal in the first place. It would usher in a new era of accountability both for presenters (to make sure they're bringing up relevant topics) and for vendors (to make sure they are actually making things better after having their flaws pointed out).
There's a rich content mine for new researchers struggling to find topics to investigate: go through old talks where the original presenter left more questions than answers about a given product or technology. It may not be as sexy as breaking it for the first time, but you'll be helping a wider audience of people who are actually trying to use and secure it. I know it's hard to believe, but just because a product gets trashed on stage doesn't mean that everyone throws it away in the rubbish bin outside the speaker's hall.
Do It Anyway
Here's my suggestion for would-be presenters and publishers paralyzed by prior research: Do it anyway. Most of us chose to stay out of academia for a reason, and we shouldn't get into the citation game just for the sake of it. However, this comes with a caveat: Don't claim you're the first one to the table, and if your research was inspired by someone else's work, give them credit. On the other hand, don't waste too much time trying to find previous research if you're not aware of it in the first place. At the end of the day, if the content is compelling, you'll get the recognition and attention you deserve.
* Thanks to Chris Rohlf for his feedback on this post.