For the sake of whatever Non-Disclosure Agreement I probably unwittingly signed, I’ll have to limit the details I share. That said, I want to talk about a time I knowingly shipped insecure code.
A while back, I was working for a transportation company as a prognostics engineer. Basically, that meant I was working on predictive and diagnostic software for transportation systems.
At the time, I was on a team of predominantly mechanical engineers with extensive knowledge in these transportation systems but limited knowledge in software. Naturally, I was there to bridge the gap.
Introducing the Code Base
When I arrived, everyone was in the rapid prototyping phase of development. In particular, they were using Python to communicate with the transportation system to collect and analyze data.
Of course, they didn’t really have any process for their development. Instead, they would each maintain and modify their own version of some massive script in IDLE. Whenever they wanted to share changes, they’d pass the scripts around as ZIP files through email.
When I finally got around to looking at the code, it was about as rough as you’d expect. For instance, there were no functions—just massive line-by-line scripts loaded with copy-and-paste code. In addition, nothing was really documented, and there were no standards. In fact, even basic naming conventions went by the wayside:
myVariable, my_variable, MyVariable, MYVARIABLE
Providing Value to the Team
Of course, my first order of business was to find some way to simplify the code sharing process. Naturally, I opted for Git and GitHub so we could finally start versioning our code.
After that, I had to just figure out what the code was even doing, so I could start building up a library of reusable functions. Of course, this wasn’t too hard as duplicate code was all over the place.
After some cleanup, documentation, and testing, I started thinking about ways to architect their system which resulted in many days of refactoring. Eventually, we ended up with something I was really proud of.
With the Python scripts ready-to-go, we added a C# front end. In other words, we had some UI which would call the scripts as needed. At that point, I was responsible for maintaining our current scripts and implementing new ones.
Shifting with New Requirements
Over time, management started adding new requirements. For example, they didn’t like the idea of shipping Python scripts with the UI, so they asked me to package them as executables. Naturally, I followed orders and used a utility like py2exe or PyInstaller to get it done.
Of course, you might be wondering how packaging the scripts as executables would be more “secure.” Well, it isn’t. After all, anyone can download a decompiler and retrieve the Python to some extent. That said, PyInstaller offers an option to encrypt the executable. Unfortunately, this option is just as futile. After all, the decryption key has to be shipped with the code, so what do you do?
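To see why shipping “compiled” Python doesn’t hide anything, consider that string constants survive byte compilation verbatim. Here’s a minimal sketch; the module name and password are made up for illustration:

```python
import pathlib
import py_compile
import tempfile

# Hypothetical script with a baked-in "secret"
src = pathlib.Path(tempfile.mkdtemp()) / "config.py"
src.write_text('SYSTEM_PASSWORD = "hunter2"\n')

# Byte-compile it, the same way freezing tools ultimately do
pyc_path = py_compile.compile(str(src))

# The "secret" sits in the compiled file as plain bytes
data = pathlib.Path(pyc_path).read_bytes()
print(b"hunter2" in data)  # → True
```

Bytecode is an encoding, not encryption, so anyone who can read the file can read the secrets inside it.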
Well, nothing. When you hand your code directly to the customer, they have your code. If their machines are able to execute your program, it’s not much of a stretch to imagine that someone could figure it out too.
So, what’s the big deal? When you’re handing over the code, you’re accepting the risk, right? Well, that’s what I thought. However, that’s not the same understanding that my team had. In fact, I was once asked if the code was secure in a meeting, and I candidly said “no.” Boy, did that piss some people off.
Almost immediately, people asked me questions like “why would you purposefully design insecure software?” or said things like “your code isn’t secure enough.” Obviously, we had a fundamental misunderstanding, and people weren’t all too happy with me or the solutions I proposed (i.e. hosting the code on a server).
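For context, the server-hosted idea boils down to this: the proprietary code never leaves your machines, and the customer only ever receives results. Here’s a rough sketch using Python’s standard library; the endpoint, the threshold, and the “analysis” itself are all invented placeholders:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def analyze(reading: float) -> dict:
    # Stand-in for the proprietary diagnostics logic
    return {"status": "ok" if reading < 100.0 else "fault"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expects a path like /analyze?reading=42.0 (crude parsing for brevity)
        reading = float(self.path.split("=")[-1])
        body = json.dumps(analyze(reading)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for this demo

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The "customer" only sees the result, never the code behind it
with urllib.request.urlopen(f"http://127.0.0.1:{port}/analyze?reading=42.0") as r:
    result = r.read().decode()
print(result)  # → {"status": "ok"}

server.shutdown()
```

A real deployment would need authentication and TLS on top, but the key property holds: there’s no binary on the customer’s machine to decompile.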
Insecurity Is Everywhere
But, I should step back a moment because there’s more to this insecure software than I’ve already mentioned.
Handing Binaries to Customers
For starters, I think everyone’s main concern was that someone would steal our code and try to profit from it. Of course, the absolute worst-case scenario is that they would find out how our transportation systems work.
If these hypothetical hackers were to get a hold of our software, they would be able to figure out how to communicate with our transportation systems. Then, they could write their own software systems to accomplish the same or related goals and possibly distribute it.
In addition, there may be sensitive information in those binaries like system passwords which could leave the systems vulnerable to attacks outside the capabilities of our software. In other words, a lot could go wrong.
Sniffing Network Traffic
While everyone was obviously concerned about the software I wrote, I was significantly more concerned about the interface between our software and the transportation system. In particular, not a single network connection—serial or otherwise—was encrypted, so all our traffic could be easily sniffed using a tool like Wireshark.
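The core problem is that an unencrypted link exposes every frame byte-for-byte. This toy sketch uses a local socket pair to stand in for the connection; the protocol line and credentials are made up:

```python
import socket

# One end stands in for our app, the other for a sniffer's view of the wire
app, wire = socket.socketpair()

# A hypothetical plaintext frame, like the ones our software actually sent
app.sendall(b"LOGIN maintenance hunter2\r\n")

# This is roughly what a tool like Wireshark would capture
captured = wire.recv(1024)
print(captured)  # the credentials are readable as-is

app.close()
wire.close()
```

With no encryption on the link, the “security” of the binaries is beside the point: everything interesting crosses the wire in the clear anyway.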
In other words, even if we were able to completely obfuscate our code in some way, we’d still be completely screwed. As far as I can tell, the only way we’d be able to prevent those sorts of attacks would be to force tons of useless data over the connection, so it would be harder to sniff. Of course, folks like Mark Rober show that it’s a lot easier to predict patterns with software than you’d think.
Examining Running Programs
One of the nice features of our software was that we had a Windows native app front end. In other words, it wasn’t exactly obvious that we used Python to interface with our transportation systems.
However, anytime a user would connect to an interface, we would run one of our Python scripts. Since they were separate programs, you could see them launch by name in the task manager. To make matters worse, data analytics is often processor intensive, so our scripts usually shot straight to the top of the task manager. In other words, they weren’t too hard to find.
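The pattern itself is simple: the front end spawns each script as its own process, so each one gets its own entry in the process table. A minimal sketch of that launch pattern, with a trivial stand-in for the analysis script:

```python
import subprocess
import sys

# The front end launched each analysis script as a separate executable;
# here, a short sleep stands in for a hypothetical processor-heavy script.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(0.1)"]
)
print("spawned pid:", child.pid)  # visible by name in Task Manager / ps
child.wait()
```

Anyone watching the process table while the UI runs can match each action to the executable it spawns, which is exactly what makes the scripts easy to single out.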
If for some reason a user was able to identify which executable did what, they could quickly isolate the one they wanted and go through the process of reverse engineering it. Obviously, that’s not ideal.
As you can see, our application was incredibly insecure, so I still stand by my “no.” Hopefully whoever picked up the job after I left was able to convince the team to go another route. Otherwise, I may have just tipped off a handful of hackers in our community. Whoops!
Regardless, that’s all I really have to say about that. I suppose the point of this article was two-fold:
- To share a funny story about something that happened to me when I used to work in the industry
- To caution others about the importance of security
Honestly, I know almost nothing about making secure software, but I understand its importance. In my particular case, I was able to identify three major areas of risk in our application despite not really knowing what I’m doing. In other words, our scenario was probably far worse than I could imagine.
If you found this article interesting, I’ve actually written about this topic previously in my engineering rant series. Feel free to check that out!
While you’re here, why not show your support by becoming a member of the community? Every little bit helps, and you’ll get a little bit in return. If you’re tight on cash, you can always hop on the mailing list. That way, I can send you weekly updates about the site.
In the meantime, here are a few security books for your perusal:
- Threat Modeling: Designing for Security by Adam Shostack
- 24 Deadly Sins of Software Security: Programming Flaws and How to Fix Them by Michael Howard
As always, I try to recommend products that I think are relevant to the article as well as highly rated. If you have any great products you’d like to see featured, let me know!
At any rate, thanks again for your support. Special thanks to two of my latest patrons, Morgan Grifski (I know that’s my wife) and Amber Griffith (yeah, that’s my mom). Hopefully, I’ll see you all next time!