If you have been living in a cave, or on an island, or on a mountainside somewhere and have not heard, the new weapon of choice is cyber-based. Attacking a company by wiping out its databases and computer files, spreading a virus, or planting malware seems to be the most lethal and feared method these days. Just ask Sony, or Target, or the dozens of other companies that have had the privacy of their computers and networks violated, and then wiped out. The Internet has brought us a great technological wave of invention, interconnection and advancement. With that progress, a great new security risk has been exposed. Networks used to be like castles. They were self-contained, with huge virtual walls that one could not get through. As the castles disappeared, allowing great clans to become countries, the networks evolved into inter-networks. However, just as with the castles, once the walls were no longer used, security became an issue.
We hear so much in the news about how these attacks are deployed: through malware, viruses and keystroke loggers. To combat these issues there is a huge effort in counter-technologies: virus scanners and malware detectors, an industry generating huge profits to combat the security-breaching tools. However, these are, for the most part, an afterthought. In other words, they provide a defense against an already deployed method of attack. They cannot predict a future attack, nor can they detect what they do not know. Does this make them useless? Absolutely not. Use them with enthusiasm, dedication and great abandon.
What we do not hear about is the root cause of these security-related issues. The more I talk to people about it, the more they appear convinced that it is simply the nature of the beast. Like life without castles: you develop a police force that does a great job of minimizing the impact, you pay for it as a community, and every now and then someone’s home or business is going to be robbed. Hopefully we respond quickly, identify and punish the perpetrator, and continue onward. But what if there is a root cause? What if we put aside the complacency of acceptance and looked a little deeper? What if we went back to the roots of computing and networking and critically analyzed why there is so much insecurity?
To answer these questions, you ultimately end up staring right at the software: the software that runs computers, switches, routers, and anything else that has a processor. The cyberattacks all have something in common – they change the behavior of the software to perform improper tasks. The person who writes the software says: “That’s not my fault, my software works perfectly! If you modify it, that isn’t my fault!” Which raises the question: how the heck does your software allow itself to be modified? The answer is usually a shoulder shrug, or a finger pointed in any direction other than at the software authors themselves.
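To make that question concrete, consider one classic way software “allows itself to be modified”: the unchecked buffer copy. The sketch below is a hypothetical illustration, not code drawn from any of the breaches mentioned above. The unsafe `strcpy` call lets input longer than the buffer overwrite adjacent memory and alter the program’s behavior, which is exactly the kind of loophole a secure coding class teaches engineers to recognize and close.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical example: the classic, insecure way to handle input. */
void greet_insecure(const char *name) {
    char buffer[16];
    strcpy(buffer, name);   /* no length check: input longer than 16 bytes
                               spills past the buffer and can change the
                               program's behavior */
    printf("Hello, %s\n", buffer);
}

/* The secure-coding habit is a one-line discipline: bound every copy. */
void greet_secure(const char *name) {
    char buffer[16];
    snprintf(buffer, sizeof(buffer), "%s", name);  /* truncates instead of overflowing */
    printf("Hello, %s\n", buffer);
}

int main(void) {
    greet_secure("a perfectly ordinary name that is far too long for the buffer");
    return 0;
}
```

The point is not this particular bug; it is that the difference between the two functions is knowledge, not effort.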
Considering this, I had an epiphany. Almost all of the software, the great software, written today is written by degreed software engineers. Arguably this was not the case for Bill Gates, who did not receive a degree until much later in life, but most of his software is no longer used. The problems Mr. Gates’ software solved were considerably simpler than those today’s software must solve. Networking and computing problems are extremely intricate, complex and sophisticated. Solving them requires large teams of well-prepared software engineers that often span multiple countries. There are great universities around the globe producing great engineers who are able to tackle these immensely difficult and complex problems. Learning curves are steep, and delivery timeframes are short. Innovation rates are expected to be exponential. So why the problems with security? Could it be that the engineers themselves are not prepared to write secure software?
I began to ask. I often get to meet teams of software engineers, many of whom are recent graduates of legendary software engineering institutions. How many of you took some sort of secure coding class as part of your degree program, I ask? I have never written down the actual numbers, but from memory and rule of thumb I will tell you that one or two out of 100 will raise their hand. That, to me, is an alarm bell – a red flag that needs to be run straight up a pole. So I did. I contacted acquaintances at these institutions and asked them: shouldn’t I be getting 80 or 90 hands out of 100? The general response was the same: we are naturally concerned with this, but the cyberattacks are always changing, and whatever we teach, the people who write these malicious pieces of software will always find a way around perfectly good code.
I don’t buy that answer as acceptable. To the contrary, I would expect these great institutions to hire some of the best cybersecurity people to come in and teach classes on how attackers’ code works and how they find the loopholes in coding design, and to make projects in which students create code and then prove that the code is un-hackable a mandatory part of a software degree program. I know what you are thinking: which cybersecurity people would want to do this? I concur that there would be few, but those few could make a big difference – a much bigger difference than signing treaties.
With the ferocity of forward movement and the demand for innovation comes some amount of sloppiness in software development. This sloppiness results in enormous code output, where libraries and subroutines are piled into the software storehouses in order to speed development. Think of it as chefs storing all the possible ingredients in a huge pantry so that they can produce a certain dish. Some of those ingredients will never be touched, but just in case, they are in the pantry. The same goes for software. No one really cares, as long as, once compiled, it fits onto your enormous disk and memory and does what is necessary. One can almost envisage the cyber-attacker drooling slightly, knowing that somewhere in that immense pantry of software are weaknesses and vulnerabilities that the software engineers never even tested.
One of the possible roots of this era of vulnerability in our computers, networks, and applications points directly back to the creators of the creators of the software. We must immediately change the degree programs to incorporate secure coding and testing practices, while remaining flexible and mindful of the continual improvement curve that must be built into these programs. The way to immediately effect that change is for software companies to demand these skills as a matter of course, and to provide immediate training for those already employed to fill the gap in knowledge. Like the Internet itself, and society after castles, a self-policing system must be invoked, and this is only attainable through advancing knowledge and skills so that weaknesses and vulnerabilities cannot be so easily deployed and executed on computers, networks and applications.