My first responsible disclosure was a disaster.

I will be redacting a lot from this post for legal reasons, but I believe it’s important for people to know and learn from. In other words, don’t make the same mistakes I did.

So what is a responsible disclosure? Let’s say you’re in the city and need cash, so you find an ATM. As you approach it, you notice a small door with a lock on the side is open, and you can clearly see cash inside the door within reach. What do you do? Some people may notice the door and quickly swipe as many $20s as they can while avoiding the camera, and likely never get caught. Most people would read this and agree that the responsible choice would be to call either the police, the bank that owns the ATM, or both.

Let’s say you decide to call the bank. You didn’t take any money and you didn’t open the door. You noticed an issue that would cause harm to their business and decided contacting them was the right thing to do. You contact the bank and, after explaining the situation, you’re put on hold. While you wait, a police officer suddenly walks up and says the bank called saying someone was actively stealing from their ATM. They begin to question you as the person on the phone hangs up.

You might think this sounds unreasonable, unfair, or even a bit evil. Who knows how long the door had been open or how many people had stolen from it, so why accuse the one person who did the right thing and contacted the owner? The call itself is a form of “responsible disclosure,” and unfortunately the outcome is often very similar to what I just described.

Responsible disclosures can involve a vulnerability, a bug, a zero day, poor configuration, or just poor procedure. For example, a cloud engineer may have unintentionally set an S3 bucket to public, exposing sensitive data to anyone that stumbles upon it. An online store may be using an older version of a web app containing a vulnerability that allows a customer to change their cart’s total at checkout. Or a hospital may be using unencrypted radio communication to send private information about their patients. More about that later.
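The cart-tampering example above can be made concrete: the bug is usually that the server trusts a price or total sent by the client instead of recomputing it. Here is a minimal sketch of the server-side fix; the item names, prices, and `checkout` function are all invented for illustration:

```python
# Hypothetical server-side price table. The key idea: never trust
# prices or totals that arrive from the client's browser.
PRICES = {"widget": 19.99, "gadget": 4.50}

def checkout(cart, client_total):
    """Recompute the total server-side from known prices and reject
    any mismatch, instead of charging whatever the client claims."""
    server_total = round(sum(PRICES[item] * qty for item, qty in cart.items()), 2)
    if abs(server_total - client_total) > 0.005:
        raise ValueError("client-supplied total does not match server price")
    return server_total

# A legitimate request: 2 widgets + 1 gadget = 44.48
print(checkout({"widget": 2, "gadget": 1}, 44.48))

# A tampered request claiming the cart costs one cent would raise:
# checkout({"widget": 2}, 0.01)  -> ValueError
```

In the vulnerable version of such a store, the server skips the recomputation and simply charges `client_total`, which is exactly what lets a customer change their cart’s total at checkout.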

A story from a few years ago is a perfect example of how a responsible disclosure can go wrong. The following are excerpts from https://techcrunch.com/2021/10/15/f12-isnt-hacking-missouri-governor-threatens-to-prosecute-local-journalist-for-finding-exposed-state-data/

“St. Louis Post-Dispatch journalist Josh Renaud reported that the website for the state’s Department of Elementary and Secondary Education (DESE) was exposing over 100,000 teachers’ Social Security numbers. These SSNs were discovered by viewing the HTML source code of the site’s web pages, allowing anyone with an internet connection to find the sensitive information by right-clicking the page and hitting “view page source.”… The Post-Dispatch reported the vulnerability to state authorities to patch the website, and delayed publishing a story about the problem to give the state enough time to fix the problem.”…Missouri’s Republican Governor Mike Parson described the journalist who uncovered the vulnerability as a “hacker”, and said the newspaper uncovered the flaw in “an attempt to embarrass the state”.

Because the accusation came from the governor in a public setting, and the accused was a journalist, this story quickly spread across the Internet. And because the data leak was so easy to discover and replicate, the backlash came from more than just the cyber security community. A new slogan was born, “F12 isn’t a crime,” F12 being a hotkey for “view page source”; knowing that pressing a single key could be considered illegal made the governor’s comments all the more absurd. The story was popular in its time, but this happens far more often than people realize, and it continues to happen today.

Personally I believe the biggest reason many responsible disclosures end in punishment instead of praise is due to a lack of knowledge. I used the example of the ATM above so anyone could understand, but in cyber security the situation is a bit more complex.

Frame of mind and opinions also play a big role in how and why we do responsible disclosure. Fundamentally, we do responsible disclosures because we feel it’s the right thing to do, but there are a few who do it for money, recognition, blackmail, or other malicious reasons. It’s important to consider these things before making a disclosure, and it’s also important to consider how the entity you’re disclosing to will react. Blackmail is often the first assumption an organization makes when it receives a responsible disclosure, even if it’s rarely the actual motive.

When I was a kid, I decided to scan a range of IP addresses similar to my own for instances of NetBus, a remote access trojan from the late 90s. NetBus was a script kiddie tool with a GUI that ran in Windows 98. The infected file was an exe that you could rename, create an icon for, and easily have it open a jpeg or another application while silently installing a backdoor on the victim’s computer. So many students in my high school had learned of its existence that I was curious to see how many infected computers were using the same ISP. I found one, connected to it, and searched for an email address to contact the person. I learned it was an older lady in my town and emailed her, giving only my first name, and explained that her computer was infected with a Trojan that allowed anyone to connect to it and take control. I included the infected filename and how to delete it, then I added a password onto the RAT to prevent anyone else from accessing it.

She replied, accused me of hacking her, said the file I pointed out was added by her grandson to prevent people like me from hacking her, accused me of trying to trick her into removing it, and threatened to call the “internet police” (something older people everywhere believed existed back then). I sighed, deleted her info, and threw away the password that I had set. I could sleep better at night knowing that I had helped her, even if she believed I had done the opposite. I was still young at the time and thought I was doing right, even if the way I did it was a legal gray area.

That’s not the story from the title, though I wish I had remembered it at the time. As I mentioned in other posts, Software Defined Radio is a hobby for me. I won’t go into detail on the exact technology, but I will say that unencrypted data was being transmitted using 1990s pager technology that included personal patient information (as mentioned earlier). Anyone that has dabbled with SDR enough has probably found this or seen it in a YouTube video, to the point that it’s common knowledge to people in the know. It’s not a cellular frequency, and receiving this data is not illegal.

I was using SDR# for Windows to tune a $10 digital TV USB dongle to receive digital signals, in audio form, that were being broadcast. I piped the audio through a virtual microphone into a virtual audio output. I used a second application called PDW to listen to the virtual audio output and decode the audio into text. PDW is set up to decode POCSAG and FLEX digital signals, which is what pagers used. The interesting thing about pagers is that every message is sent to every pager, and the pager itself ignores all messages except the ones directed to it. This is similar to old-style network hubs, which would send packets to every computer connected to the hub, and each computer would ignore the packets not meant for it. PDW itself has a GUI and looks like it was developed in the early 2000s.
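The broadcast-and-filter model described above can be sketched in a few lines of Python. The capcodes (pager addresses) and messages here are made up purely for illustration:

```python
# Minimal sketch of the pager broadcast model: every message goes out
# over the air to every receiver, and each pager keeps only the
# messages addressed to its own capcode (its numeric address).

# A simulated over-the-air stream of (capcode, message) pairs.
# All capcodes and texts below are invented for illustration.
broadcast_stream = [
    (1234567, "Call the front desk"),
    (7654321, "Server room temp alarm"),
    (1234567, "Meeting moved to 3pm"),
]

def pager_receive(stream, my_capcode):
    """Return only the messages addressed to this pager's capcode,
    silently discarding the rest -- just like a real pager (or a NIC
    on an old shared hub) drops traffic not meant for it."""
    return [msg for capcode, msg in stream if capcode == my_capcode]

print(pager_receive(broadcast_stream, 1234567))
```

A decoder like PDW effectively skips the filtering step and displays all of the traffic, which is why every message on the channel is visible to anyone listening.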

I was showing this to a friend who worked in network security for a healthcare organization, and he was shocked. He looked at a few of the identifiable addresses and told me which organizations to reach out to, saying that maybe I could get a bug bounty, or at least a very thankful IT person.

In my mind the blame was on the people sending the data, not the tech itself and not the organization. It’s no different from a data leak over email. If I sent classified info by email, the blame would be on me, because email is generally not encrypted. That would be considered misuse on my part, not my employer’s fault and not the fault of the email vendor.

I contacted the two healthcare organizations that we were able to identify, and no, I cannot mention their names, frequencies, pager vendor, or the contents of the messages. One was extremely grateful and said they would send it up the chain and put a stop to it. The other never responded.

The next day an attorney and the CEO of a pager vendor contacted me indirectly, threatening a lawsuit and stating that I was in violation of federal wiretapping laws. They had been contacted by the second healthcare org that I emailed, and the blame had apparently been put on the vendor. After some back and forth, and after they decided to contact my employer, who had nothing to do with the situation, I was sent a “Cease and Desist” and told that they would not press charges as long as I agreed to it and returned it signed.

To be clear, I didn’t go to the press about the issue, nor did I make any kind of public disclosure. The source of the problem was misuse by the employees of the healthcare org, and that’s who I disclosed it to. Rather than fixing the issue through policy and educating their staff on why PII shouldn’t be sent over unencrypted comms, they instead forwarded my disclosure to the vendor, whose gut reaction was that I was trying to attack their business’s reputation.

Even though I hadn’t broken any laws (I had multiple lawyers confirm that), to keep things civil I signed their cease and desist agreement stating that I would not intentionally capture data from that vendor, and added that I had never intentionally captured data from them in the first place and was not aware of their existence before. I will note again that neither the vendor, their customer, nor the frequencies they use have been added to this blog post.

It’s hard to say if they were trying to scare me, trying to save face in front of a customer, or simply didn’t know enough about their own product and believed it to be a secure way to send the personal and private details of patients who expect a hospital to safeguard that info. I learned something that day that I should have already known: DO NOT give your real name in a responsible disclosure. There are exceptions, such as bug bounty programs like hackerone.com. But when reaching out directly, no matter how noble your actions are, you must protect yourself and assume the worst. Use proper OPSEC. Look for email providers on Tor, or use a VPN to sign up for Protonmail under a fake name, using a fresh browser inside a temporary VM.

Like many others would have, I expected the second healthcare org to be as thankful as the first, just as one would expect a bank to thank the person who alerted them to a breach in their ATM. I hope this saves someone from the legal consequences of uneducated and embarrassed CEOs and governors when a flaw is brought to light by someone trying to help.
