It’s a common scenario: a large organization invests millions of dollars in a DLP solution, only to leave it in “watch mode” because the rate of false positives is too high to enable full blocking. The result is a DLP investment that becomes a white elephant: a promising technology that does not pay off in actually preventing data loss.
The problem often begins with an over-reliance on automated scanning to prevent data loss. The DLP system is expected to automatically identify all sensitive content, which requires IT administrators to translate business processes and policies into automated rules for every data loss scenario. This is an impossible task, which usually results in overly restrictive rules that block non-sensitive data (false positives) or overly permissive rules that mistakenly release sensitive data (false negatives).
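To see why purely automated scanning cuts both ways, consider a deliberately naive content rule. This sketch is illustrative only (the pattern and function names are assumptions for this example, not any DLP product’s actual logic): it treats any bare 16-digit run as a credit card number.

```python
import re

# Naive content-scanning rule, for illustration only (an assumption,
# not a real DLP product's logic): block any bare 16-digit run as a
# suspected credit card number.
CARD_PATTERN = re.compile(r"\b\d{16}\b")

def would_block(text: str) -> bool:
    """Return True if the naive rule would block this message."""
    return bool(CARD_PATTERN.search(text))

# True positive: a card-like number is caught.
print(would_block("Card on file: 4111111111111111"))            # True

# False positive: a harmless 16-digit tracking number is also blocked.
print(would_block("Your tracking number is 9400110200881234"))  # True

# False negative: the same card number with spaces slips through.
print(would_block("Card on file: 4111 1111 1111 1111"))         # False
```

Tightening the pattern to eliminate the false positive tends to create more false negatives, and vice versa, which is exactly the trade-off administrators face when they try to encode every business scenario as a rule.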
The impact of false positives can be just as detrimental to the business as the data loss caused by false negatives. False positives disrupt agility and productivity, and can hamper collaboration, innovation, and business growth. Worse, false positives can actually lead to increased data loss: when legitimate work is blocked, users look for alternative, less secure channels to get around the restrictions and complete their tasks.
The best way to address this problem is for organizations to identify their information appropriately. The sensitivity of each piece of information must be identified, or ‘classified’. Information classification is crucial for proper handling, and for the ultimate security of an enterprise’s information. Classification provides context to unstructured data such as email and business documents, making it possible for DLP solutions to know how to protect your organization’s sensitive information.
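The idea can be sketched in a few lines: once a document carries a classification label applied at creation time, a DLP policy can key off that label rather than guessing sensitivity from content. This is a minimal hypothetical sketch (the label names and policy are assumptions for illustration, not a specific product’s schema):

```python
from enum import Enum

# Hypothetical classification labels an author would apply to a document.
class Classification(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"

# Illustrative policy: only PUBLIC documents may leave the organization.
ALLOWED_EXTERNAL = {Classification.PUBLIC}

def may_send_externally(label: Classification) -> bool:
    """Decide from the document's label, not from content guessing."""
    return label in ALLOWED_EXTERNAL

print(may_send_externally(Classification.PUBLIC))        # True
print(may_send_externally(Classification.CONFIDENTIAL))  # False
```

Because the label captures the author’s knowledge of the business context, the decision is deterministic: no pattern matching, and far less room for the false positives and negatives described above.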