
Reducing the footprint of sensitive data in your enterprise

Why data tokenization is key to your data security strategy

Research suggests that over one-fifth of all files are unprotected. Accenture estimates that information loss is the most expensive component of any cyberattack – representing 45% of the total cost – and the Ponemon Institute places the average cost of a data breach at $4.13 million for a US company. These statistics aren’t just scary. They’re terrifying.

In 2018, the US healthcare sector saw more than 15 million patient records breached, three times the number breached in 2017. By July 2019, that figure – for just six months – had reached 25 million records compromised. With cyberattacks on the rise and an increased imperative to protect personally identifiable information (PII), every organization must find a way to make data available to the business without making it vulnerable to exposure.

What is data tokenization?

It’s rarely a feasible option for organizations to put their data on lock-down. In the healthcare sector, the exchange of sensitive information like electronic medical records (EMRs) is vital to the health and welfare of patients worldwide.

Data needs to be both protected and available. Data tokenization is a de-identification technology where a representation of the data – not the data itself – is passed around the network. It works by turning sensitive data – such as account numbers or social security numbers – into seemingly random sets of characters and numbers called tokens.

The original data is held in a ‘data vault’, along with the details of which tokens correspond to which original values. Any breach that exposes tokens leads to a dead end for the hacker, as the tokens have no direct connection to the original data. Since the hacker won’t have access to any actual data – and therefore to anything that would fall under existing or new data protection legislation like GDPR – they can’t use information in the token or try to reassemble the original data.
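The vault-and-token mechanism described above can be illustrated with a short sketch. This is a minimal, hypothetical example for clarity only – the class name, methods, and in-memory storage are assumptions, and a real tokenization platform would add hardened storage, access control, and auditing:

```python
import secrets

class TokenVault:
    """Illustrative token vault: maps random tokens to original values.

    This is a teaching sketch, not a production design -- real vaults
    use hardened, access-controlled storage rather than dictionaries.
    """

    def __init__(self):
        self._token_to_value = {}  # the "data vault": token -> original value
        self._value_to_token = {}  # reuse the same token for a repeated value

    def tokenize(self, value: str) -> str:
        """Return a token for the value, generating one if needed."""
        if value in self._value_to_token:
            return self._value_to_token[value]
        token = secrets.token_hex(8)  # random characters; carries no information
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        """Look up the original value; in practice restricted to authorized users."""
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("123-45-6789")       # e.g. a social security number
assert token != "123-45-6789"               # downstream systems see only the token
assert vault.detokenize(token) == "123-45-6789"
```

The key point is that the token itself is random: an attacker who steals tokens, but not the vault, learns nothing about the underlying values.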

The benefits of data tokenization

Data tokenization allows organizations to hold data securely in one place while making it available when it’s needed.

  • Tokenization enables organizations to work with proxy data values in all systems that do not need the original values, significantly reducing points of vulnerability for data breaches.
  • Original data is retained in a highly secure data vault, which enables authorized users to check the original values if needed. This also reduces the risks related to internal users accessing data without appropriate authorization.
  • Tokens can retain the format and length of the original data, which means the various backend systems can process them without requiring costly changes and adjustments.
  • By significantly improving the protection of sensitive data, data tokenization helps comply with industry regulations such as PCI DSS, HIPAA, GLBA and ITAR.
  • Tokenization also reduces the cost of related audits by reducing the number of systems that need to be audited.

Meeting your data privacy requirements

The EU’s General Data Protection Regulation (GDPR) came into effect in May 2018 and imposed much more stringent conditions on how organizations can store and use personal data. It has been followed by other data protection regulations, such as the California Consumer Privacy Act (CCPA) and Brazil’s General Data Protection Law, that take similar approaches to the privacy of personal data.

Organizations worldwide have to ensure that personal data is protected at all times. The anonymization or pseudonymization of personal data is one of the most effective ways to comply with these new, stricter regulations.

Personal data can, of course, be redacted or scrambled in enterprise systems, but data tokenization is arguably the best solution to de-identify data when the option to reveal original values in a controlled way needs to be retained. With the amount of data growing exponentially, cloud-based data tokenization platforms such as OpenText™ Protect™ allow for effective tokenization at scale. These platforms deliver the performance to complete tokenization processes without impacting system speed or user experience.

If you’d like more information on how OpenText can help you secure your sensitive information through data tokenization, visit our website.

Ville Parkkinen

Ville Parkkinen is a Director of Product Marketing for Business Network at OpenText. Working closely with OpenText’s Product Management, Engineering, Solution Consulting and Sales teams, Ville enjoys taking complex technical concepts and translating them into tangible business value in customer context. Solution areas that Ville focuses on include digitization and automation of supply chain processes including order-to-cash and procure-to-pay; electronic invoicing solutions; B2B/EDI integration; data visibility and analytics; and managed integration services.
