The following blog has been re-posted with permission of the author. The original post can be found on Jeremy Wittkop’s LinkedIn blog.
This will be the rarest of posts. I am going to begin my post about why Data Classification is important to a content- and context-aware security program by telling you all of the reasons why I was originally skeptical of its value. I do so in hopes that people who share the same concerns I did will have an opportunity to experience the magic of the Titus approach vicariously through me. I am also going to do something that few people in my position are willing to do, while simultaneously doing something no author should ever do. I am going to admit I was wrong, and I am going to quote myself.
“I was wrong” – Me
In defense of me, who I love (shameless Donald Trump reference), I would like to first explore the valid reasons why I was a Data Classification skeptic. At the time, I was pretty myopically focused on the benefits of DLP solutions when deployed as part of a comprehensive DLP system. Many organizations were simultaneously looking at the value of Data Classification. There was much debate over the seeming chicken-or-egg scenario involving Data Loss Prevention and Data Classification. Some people were of the opinion that DLP should be implemented first, followed by a Data Classification program. This approach worked fine for me because it did not interfere with my objective. The second group posited that deploying Data Classification first was the best approach. This vexed me terribly. I looked at Data Classification as a utopian idea: a great idea that no one really knew how to implement effectively, and one I was beginning to believe might not be able to exist in reality. I had never seen an environment where all of the data was classified with neat little labels and protected based on those labels. It is not that I did not agree with the premise; it's that the execution was invariably flawed.
There are two parts to building an effective Data Classification Program. First you must have a user-friendly mechanism to classify data as it is created. Second, you must have a methodology to classify all the data that already exists. The former is generally the first step and it could be done by some organizations effectively. The latter was the portion that no one could seem to figure out.
My primary problem with Data Classification was the fact that people were terrible at it. Essentially, there were very few organizations who actually got their programs off the ground. To me, Data Classification was synonymous with the bridge-to-nowhere project. It was essentially a time and money pit where nothing was ever accomplished, and it only served to delay what I was trying to get accomplished. I shouted from every mountaintop I could find that Data Classification was a great idea with little practical application. In many cases I was right. Data Classification products were just beginning to emerge at that time, and while they provided a system for labeling data, I had not yet come across a methodology that worked. If I may bash myself a bit, I should have known better. I was making the same argument against Data Classification that people were making against Data Loss Prevention. Intelisecure (then BEW Global) had successfully built a methodology to effectively deploy and manage Data Loss Prevention and get business value out of the tool in a way that was previously not realistic for many organizations. I had completely ignored a company called TITUS that was developing not only a software solution but a methodology for the effective deployment and use of that solution.
It was my own self-interest that led me and the TITUS folks to cross paths for the first time. I needed a solution for the challenges being presented by international data protection regulations that were emerging in Europe and the Asia-Pacific region. The TITUS product had the ability to use X-headers to classify messages, which I could use to force my DLP solutions to ignore messages that were marked personal by employees. That put me on the right side of many of the Works Councils that were beginning to hold more sway in support of these emerging data protection regulations. Brilliant! I needed this capability!
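To make the X-header mechanism concrete, here is a minimal sketch of how a DLP gateway might honor a user-applied classification header. The header name `X-Classification` and the label `Personal` are assumptions for illustration; a real TITUS deployment configures its own header names and label values.

```python
# Sketch: skip DLP content inspection for employee-marked personal mail.
# "X-Classification" is a hypothetical header name used for illustration;
# actual classification products let administrators configure this.
from email import message_from_string

CLASSIFICATION_HEADER = "X-Classification"  # assumed, configurable in practice


def should_dlp_scan(raw_message: str) -> bool:
    """Return False when the sender marked the message Personal,
    so the DLP engine skips content inspection entirely."""
    msg = message_from_string(raw_message)
    label = (msg.get(CLASSIFICATION_HEADER) or "").strip().lower()
    return label != "personal"


personal_mail = (
    "From: alice@example.com\r\n"
    "To: bob@example.com\r\n"
    "X-Classification: Personal\r\n"
    "Subject: lunch?\r\n"
    "\r\n"
    "See you at noon.\r\n"
)
business_mail = (
    "From: alice@example.com\r\n"
    "To: bob@example.com\r\n"
    "Subject: Q3 forecast\r\n"
    "\r\n"
    "Numbers attached.\r\n"
)
print(should_dlp_scan(personal_mail))   # personal: gateway should not scan
print(should_dlp_scan(business_mail))   # unlabeled: gateway scans as usual
```

The key design point is that the decision is made from metadata alone, so the DLP engine never reads the body of a personal message, which is what satisfied the Works Councils' privacy concerns.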
Around the same time, I started to re-evaluate my perception of Data Classification. Rob Eggebrecht and Chuck Bloomquist, brilliant visionaries who founded BEW Global, which subsequently became Intelisecure, strongly encouraged me to read pretty much everything Jim Collins ever wrote. On a separate note, I would encourage you to do the same. In re-reading Mr. Collins' work, I had a thought related to his concept of the "genius of the and versus the tyranny of the or." Essentially, the concept is that there is genius in doing both things in situations where a choice is generally presented. Instead of doing Data Classification or DLP, why don't we do Data Classification and DLP! They could be better together! Further, if we could combine the efforts of creating the programs for both, perhaps we could find the economies of scale necessary to make the entire engagement cost-effective and high-value for a variety of organizations! I had to meet these people. It was time to go to Ottawa!
I will spare all of you the gory details of my trip to Ottawa, specifically my interactions with the Canadian customs agent who decided to extend the length of my arrival into Canada. The most important part of the story is the week I spent with the TITUS team. During the first portion of the week, I was given an opportunity to present what our company did to a reasonably sized audience of TITUS employees. It was after the presentation that I realized the founders of the company were in attendance. The collaborative week that followed changed my opinion on Data Classification forever. All of my hopes for the technology and the methodology were exceeded as I had an opportunity to meet with Tim Upton, CEO of TITUS; Steph Charbonneau, CTO; Mitch Robinson, COO; and Andy Maahs, the partner executive who made the original introductions. I was able to speak with some of the true originators of the space and see their vision for the future. I was then able to share our vision and collaborate to solve some of my customers' largest problems. It was possibly the most thought-provoking week of my career.
In theory all was well, but the skeptic in me would not shut up. I had to feel the technology. Much to my surprise, the user experience was seamless. It was great! It was easy to use and truly intuitive in terms of how administrators could communicate directly with end users through the applications they already utilized. There was no separate pop-up; there were buttons inside of Microsoft Office! I was starting to drink the Kool-Aid. This capability very cleanly and effectively solved the first problem, which was creating a method for users to classify new data.
An underrated but crucial part of TITUS' success is the ecosystem of integrated products they have built. Among those products are Data Loss Prevention systems. Integrating with Data Loss Prevention effectively allowed TITUS to solve the second and more difficult problem: we now had the ability to classify the legacy data that was not in active use. Using TITUS and DLP together allowed us to present a comprehensive solution set to our customers!
In short, Data Classification has a place in a comprehensive information security program. It integrates well with Data Loss Prevention, and the two are better together. It can be done. Many of my customers are doing it, and doing it well. I will say it one final time: I was wrong.
If you would like to learn more about Data Classification, please join me for my Data Classification webinar on January 28th. I’d love to share what we’ve learned, tested, and created with you all. We will be taking questions for anyone who is still skeptical, as well as to help solve challenges for current Data Classification users. I’d love to engage with you live on the webinar. You can do it, we can help.