This week at the annual TechEd conference, Websense will be showcasing how our Websense DLP technology integrates with the new Dynamic Access Control (DAC) capabilities of Microsoft Windows Server® 2012.
Built on the foundation of Websense data classification expertise, this collaboration allows organizations to accurately monitor, identify, categorize, and ensure protection and proper use of sensitive information—as it is being authored. This is true, dynamic categorization in action. Here is a video that shows how it works...
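If you would rather see the idea in code than on video, here is a minimal Python sketch of the concept: a classifier tags a document while it is being written, and a DAC-style rule combines that tag with user claims to decide access. Every class and function name below is a hypothetical illustration, not the Websense or Windows API.

```python
# Illustrative sketch only: a DLP classifier tags a document, and a
# DAC-style access rule grants or denies access based on that tag plus
# user claims. All names are hypothetical, not Websense or Windows APIs.
from dataclasses import dataclass

@dataclass
class Document:
    path: str
    classification: str = "Unclassified"   # e.g. "PII", "IntellectualProperty"

@dataclass
class User:
    name: str
    department: str
    clearance: str                          # e.g. "High", "Standard"

def classify(doc: Document, text: str) -> Document:
    """Assign a classification as the document is authored (greatly simplified)."""
    lowered = text.lower()
    if "social security" in lowered or "ssn" in lowered:
        doc.classification = "PII"
    elif "patent draft" in lowered:
        doc.classification = "IntellectualProperty"
    return doc

def access_allowed(user: User, doc: Document) -> bool:
    """DAC-style rule: sensitive classifications require matching user claims."""
    if doc.classification == "PII":
        return user.clearance == "High"
    if doc.classification == "IntellectualProperty":
        return user.department == "Engineering"
    return True

doc = classify(Document("q3-report.docx"), "Customer SSN list attached...")
print(doc.classification)                                        # PII
print(access_allowed(User("alice", "Sales", "Standard"), doc))   # False
```

The point of the sketch is the division of labor: classification happens at authoring time, and access control consumes that classification later, which is exactly where DLP and DAC meet.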
Before we begin, I recommend reading Getting Ready For Data Loss Prevention (DLP). Go ahead, I’ll wait for you…
Back? OK, now let’s talk about what comes next: the “how” of implementing DLP.
As a next step, and at the risk of blowing my own horn, consider watching the recording of a webcast I did on April 5 here. You’ll get recommendations on how to deal with issues that are often overlooked in DLP deployments, as well as some critical “how to” advice. I position this as an antidote to the all-too-common and none-too-helpful “just do it” approach to DLP advice. Because, on the path to DLP success, there are two deadly pitfalls to watch out for:
The first is understanding where to start your data protection strategy using DLP (and why). Where you start determines how effective your program will be relative to the amount of risk you hope to remove from the business.
The second pitfall is understanding how to execute. The “how” may be the most important part, as it ultimately determines how soon you will benefit from DLP and how many resources are required.
Surviving one of the pitfalls is hard enough, but trying to get through both on your own is nearly impossible.
Unfortunately, much of the historical “how” started with massive data-discovery projects, which usually meant at least six months of project consulting before any data was protected.
Not every DLP vendor has the same vision for how to make DLP work, so make sure that you understand your vendor’s approach and agree with it.
Have a listen and let me know what you think.
Do you think data breaches are up or down in 2011 compared to 2007 or 2008? The official answer may surprise you. According to DatalossDB and the 2011 Data Breach Investigations Report by Verizon, the number of records compromised per year has been decreasing since its 2008 peak. But these reports are missing something very important. It all comes down to what is reported. Last year I met with more than 450 CIOs and CSOs, and almost all of them said that incidents are way up. New breaches are constantly making headlines, so why is there a discrepancy between our perception and what these reports are finding?
Many industry reports focus on the never-ending stream of leaked or stolen personally identifiable information (PII). Most laws and industry standards, such as PCI DSS, also concentrate on PII. But there is something that could be more dangerous to lose than PII and that isn't getting enough attention in data breach reports—intellectual property (IP).
I think industries first need to admit there is a problem – a problem with data. A huge volume of new content is being created, shared, and moved inside and outside our walls every second. The challenge is that much of this data is sensitive, which makes it a major governance and data theft concern. To prevent both accidental data loss and malicious data theft, organizations need to be able to identify what is and is not sensitive information, and to accurately categorize sensitive information as it is created, without a massive process that intrudes on the content creator or adds extra steps for them.
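To make that concrete, here is a minimal Python sketch of what “categorize as it is created, without extra steps for the author” could look like: a save hook runs a couple of simple pattern checks and tags the file automatically. Real DLP products rely on far richer fingerprinting and classification; the detectors and the in-memory tag store below are illustrative assumptions only.

```python
# Illustrative sketch: automatic classification at save time. The author
# does nothing; a save hook inspects the content and records labels.
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def classify_on_save(filename: str, content: str, tags: dict) -> None:
    """Called by a (hypothetical) save hook; tags the file with no author interaction."""
    labels = set()
    if SSN.search(content):
        labels.add("PII")
    if CARD.search(content):
        labels.add("PCI")
    tags[filename] = labels or {"Unclassified"}

tags = {}
classify_on_save("notes.txt", "Call back re: 123-45-6789", tags)
print(tags)   # {'notes.txt': {'PII'}}
```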
We’ve seen that this is a real challenge for organizations, so we have been working closely with Microsoft to accurately monitor, identify, categorize, and ensure protection and proper use of sensitive information as it is being authored. It’s a big challenge and a huge technology hurdle. That said, at the recent Microsoft® BUILD developer conference we demonstrated accurate, real-time file classification and automatic application of data security policy, without manual intervention from the author.