We have white hats, we have black hats, and we even have grey hats. But what does it really mean to be a grey hat hacker? These individuals are typically not malicious by nature and do not necessarily intend to cause harm, yet at the same time they may not act ethically.
A prime example of a grey hat in action is a lone researcher who discloses a vulnerability before the vendor has the opportunity to patch it. The goal is to plant a flag and show they knew about the vulnerability first, proving they possess the best security research knowledge. Unfortunately, in most cases this is an attempt to gain notoriety in the industry. The recent Apple developer site hack, for example, was carried out by a lone researcher in the UK who was upset that the development site was vulnerable. He claims his intention was not hacking but bug finding, and testing whether he could extract data from the site. Another example of grey hat activity is the recent hack of Zuckerberg's Facebook page.
This raises the question of when to disclose. If the vendor fails to respond to the disclosure and the vulnerability is actively being exploited in the wild, should the individual or group who identified it disclose it publicly? Doing the right thing then becomes a difficult decision. I think we need to decide how much we can share, and in what timeframe.
So where do we sit: are grey hat hackers good for the infosec industry? I believe grey hats come in many shades and that a code of conduct is necessary. Below is a set of proposed parameters for anyone wanting to partake in these activities; any deviation from them indicates you are performing malicious activity.
To summarise, we are at a pivot point with grey hats. They have access to resources, and we share content in the community that can aid them. They use shared intelligence and available resources, while also applying their own skills, to identify bugs and flaws in our networks and websites. Make no mistake; I believe ethical disclosures are great.
As a quick side note, one of my favourite security books is still "Gray Hat Hacking: The Ethical Hacker's Handbook", which covers ethical disclosure, pen testing and tools, exploiting vulnerabilities, and malware analysis. It focuses on the same common tactics used in attacks on organisations. Chapter 16 focuses on content security and information protection. The attack scenarios still resonate today, even though several years have passed.
Web and email remain the two primary channels used to launch an enterprise attack, yet they are also the weakest ingress and egress points on the network. Alarmingly, most organisations still run basic spam and web filtering products built on AV engines, and the book explains how easy it is to bypass these legacy controls. It's important to have a comprehensive security solution that protects both the web and email channels from data loss and data theft.
Do you have an opinion on grey hat hackers? Feel free to leave a comment and let's discuss.
Isn't the first parameter (Engagement) what differentiates Grey Hats from White Hats? If you've engaged the asset owner, you're working with their full blessing, thus you are a White Hat hacker.
Differentiations aside, I'm on the fence about the Engagement requirement. On the one hand, as an IT person, I would prefer to know when someone is testing the security of my systems. On the other hand, no man is an island, and even with a team of personnel you're not going to catch everything. It's just like any software release: a vendor can only do so much testing internally; it takes a million different tests with a million different configurations to find out that something is seriously wrong. Just look at the patch recall Microsoft just had to do with Exchange 2013.
Companies like Facebook and Apple aren't going to enter into agreements with every security researcher who wants to try their hand at finding a vulnerability. They don't have the time, for one, and for another, not every "researcher" out there knows what the heck they're doing. With major companies whose software offerings and products are in wide public use, I don't think you should have to enter an agreement in order to perform tests. They know this; that's why they have rules and guidelines for developers who find bugs.
The biggest problem with the Zuckerberg "hack" was that the researcher wasn't able to clearly communicate his findings in English. If he *really* wanted to make sure he was heard, without the grandstanding, he should have worked with someone who could present the idea for him, or who could at least translate his findings into English, so the company could take him seriously.
On to disclosure and remediation. With disclosure, I'm not sure how "government agencies" can be notified without creating a storm of criminal charges. Admitting that you've discovered a vulnerability on anything other than your own systems is tantamount to terrorism these days. Even then, if you've discovered the problem in copyrighted or patented software, it doesn't matter whether you did the testing on your own equipment: if you're not a White Hat, you're going to jail.
And with remediation, it's not always possible to find the vulnerability without exploiting it. Seeing that a bridge is rickety is often enough to get someone to fix it, but sometimes somebody has to fall through a broken slat before anyone takes notice. Again, as with the Zuckerberg "hack", just telling them the vulnerability was there may not have been enough. But as you can see, SHOWING them the problem got it fixed lickety-split.
I agree that public disclosure is not responsible except as a last resort, but sometimes there are no other options. I know I appreciated knowing that LinkedIn was vulnerable. I don't think the hackers should have taken the password lists and published them publicly, but at the same time, I look at the password strength analyses that came from that and think that on some level they did us a favor.
Great comments, and I agree: it's a grey area, and we need further guidelines to cover this. The views in the post are my initial thoughts, intended to start this discussion.
Regarding the engagement point, it covers standard testing; however, we also have bug bounty programs, and we currently see researchers teetering on the edge of overstepping the T&Cs. Engagement should improve as these programs mature. I also agree that communication needs to improve for researchers to stay within set boundaries. At the same time, organisations do not want to upset researchers. A recent example is PayPal refusing to pay a researcher for finding a bug because he was under 18; after the community protested, the decision was rectified, which is a positive for PayPal.
At the recent Black Hat/DEF CON events, government agencies appealed for information and sought to build a community around this type of reporting activity. Where does a researcher go if, after engagement, they find a vulnerability they expect to be attacked, yet the organisation cannot or will not mitigate it? I see governments stepping forward to protect the IP of organisations, which in turn protects their own GDP.
Disclosures should also be made ethically. Where my views differ is that I believe further legislation should be placed on companies that suffer a breach, to ensure they disclose and that their incident response is thorough, especially where privacy is concerned.