Genii Weblog


Civility in critiquing the ideas of others is no vice. Rudeness in defending your own ideas is no virtue.


Tue 13 Jul 2004, 10:43 AM
Over the past few decades, security has taken on an entirely new dimension for most corporations.  Connectivity to the internet has led to a host of potential, and many actual, security breaches.  With the availability of new ways to exploit technical flaws in software has come a new set of questions about the responsibility of those who discover a new exploit or a new form of attack.  Clearly, security by ignorance is not the answer.  Just because Big Corp doesn't know of an exploit doesn't mean that Big Corp's enemies don't.  This has led to the current state of affairs, where revealing any security flaw, and even how to use it, is seen as acceptable, and sometimes even noble.

This is where I have trouble.  While security by ignorance seems unwise, security by exposure doesn't seem a heck of a lot wiser.  A recent post in the Notes/Domino 6 Gold forum highlights the potential problem (not yet realized).  The post asks:
I wonder if someone has developped a script that decode the internet password  of the users in the names.nsf?
A script that i can put on a view or something that allows me to give forgotten passwords.
Now, File Save gave the correct answer, which is that when a person forgets a password, it should be reset to something else and the user can then change it, but the original post raises a question.  What if somebody does have a method to do this, at least for the older, less secure internet password?  There are tools out there that allow you to break a weak Notes password.  The people who put out such tools, and presumably someone who posted an answer to this forum post, might well feel justified in today's environment, because the insecurity is already there and the "bad guys" already know how to do this.  Stealing the Network: How to Own the Box, which Tom Duff so kindly sent me after reviewing it, makes this point eloquently, but I still think it misses a major point.  The exploits used in the book were mostly known because they were published.  The hackers in the book, and those "bad guys" who are all out there waiting to break into our networks, are supposed to be smart enough to figure out anything we can.  But are they?
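
For what it's worth, the reset approach needs no decoding at all.  The sketch below is my own illustration, not anything posted in that thread: a LotusScript agent an administrator might run against names.nsf to overwrite a Person document's HTTPPassword item with a temporary value.  It assumes the standard Domino Directory design (the "People" view, the HTTPPassword item), and the user name and temporary password are made up.

    Sub Initialize
        ' Hypothetical admin agent: reset, rather than decode, an internet password.
        ' Assumes it runs against a standard names.nsf; names and values are examples.
        Dim session As New NotesSession
        Dim db As NotesDatabase
        Dim people As NotesView
        Dim person As NotesDocument

        Set db = session.CurrentDatabase          ' the Domino Directory (names.nsf)
        Set people = db.GetView("People")
        ' Key must match the first sorted column of the People view for your directory
        Set person = people.GetDocumentByKey("Doe , Jane", False)

        If Not person Is Nothing Then
            ' @Password stores only a one-way hash, so the old value stays unrecoverable
            Call person.ReplaceItemValue("HTTPPassword", Evaluate(|@Password("TempPass123")|))
            Call person.Save(True, False)
        End If
    End Sub

In practice you would want to generate a random temporary value and make the user change it at next login, but the point stands: resetting sidesteps the whole question of decoding.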

Maybe, maybe not.  If a researcher finds a buffer overrun that hasn't been published before, maybe the bad guys know about it and maybe they don't.  Once it is published, they certainly do.  Now, if your security is of paramount importance, because you are Continental Airlines or the Gap or the CIA, you are usually (not always) better off with exposure, since there really are bad guys out there who might want to steal your data or, worse, manipulate it in place.  But what if you are an SMB?  What if you are an ISV?  What if you run a small construction company in Boise?  Are the "bad guys" who might meddle with your stuff really smart enough, or dedicated enough, to find the exploits?  Probably not, but with the full exposure prevalent in today's environment, they don't have to be.  They can just read the security alerts, and you're forced into an endless race with people who don't really have the smarts to hurt you otherwise.

When I was in high school, my principal hired me and my best friend (actually, I doubt we were paid) to find ways into the school.  Don't ask why he thought we were qualified.  In any case, we found a few easy ways to get in and reported them.  Now, anybody could have found those ways before us, but there weren't bad guys out there who were that interested in getting into our school.  On the other hand, if we had published a list of ways to get into the school, there were people who would have used the list.  The weaknesses were there, so it could be argued that the school was vulnerable, but it would have been a heck of a lot more vulnerable if we had exposed them to the public.  In this case, some of the vulnerabilities were easy and inexpensive to fix (e.g., fix a lock, repair a shed door), but some were mighty difficult and expensive (e.g., almost any of a hundred or more windows could be opened fairly easily with a flexible antenna).  The school fixed the easy and inexpensive ones, and left it to chance that the "bad guys" were not smart enough to figure out the antenna exploit.  It may even be that the same type of window was vulnerable in other buildings, and in those buildings it might have been worth fixing.  But what would be the value of reporting this publicly?  (OK, I just did, but given that I graduated a quarter of a century ago and the school really doesn't have that much of value in it, I feel OK about it.)

So, what is the answer?  I know it goes against a lot of security experts, but I think we should all learn to shut up a bit.  When I have discovered security issues in Notes/Domino, I have quietly reported them to Lotus.  In each and every case, they have quietly come out with a fix.  In each and every case, I know of customers still using older software that is vulnerable.  Would it help now if I made a big public noise about the vulnerabilities?  The vendor has fixed the problems, but the customers have judged the security not to be worth the upgrade.  Whether I agree or not, is it really noble of me, or anyone else, to publicly air those vulnerabilities?  Are the bad guys really smart enough to figure them out just because I did?

Copyright © 2004 Genii Software Ltd.