In a previous article, I explained how you can define risks for your information assets. In this article, I will run you through the risks that affect assets that process, store and transmit PII. I will also touch on how you can reduce and nullify these risks with security controls.
The risks we’re going to consider are:
A key way to reduce risks when working with PII is to define exactly which systems, networks, and network components are responsible for storing and processing PII. You need to be crystal clear on which assets contain PII and move them into a separate security perimeter.
Doing so will allow you to dismiss threats that apply to other IT assets in your infrastructure but could otherwise directly or indirectly affect your PII systems and environment.
When talking about working with PII, we often forget that we don’t always need to use the PII itself. You can make your job a damn sight easier and minimize risks by using unique identifiers for your customers (users).
Simply put, you can take your customer's name (John Smith), his telephone number, and his address, and replace them with UserID_1. If you need to keep the different parts of the data separate in a table, you could set John's unique IDs as:
The above values will be enough for your internal processes (analytics, statistics, batch queries, etc.); plus, you won't have PII running amok in your systems, just the identifiers. Doing so means you can set up your security so that only one designated system has access, or can request access, to the PII.
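To make this concrete, here is a minimal sketch of such a pseudonymization service. The class name, token format, and per-type counters are my own illustrative choices, not a prescribed design; the key idea is that only this one component keeps the mapping back to real values.

```python
import itertools
from collections import defaultdict

class Pseudonymizer:
    """Replaces raw PII with opaque identifiers. Only this one service
    keeps the mapping back to the real values; everything downstream
    of it sees tokens only."""

    def __init__(self):
        self._counters = defaultdict(lambda: itertools.count(1))  # one ID sequence per PII type
        self._forward = {}  # (kind, raw value) -> token
        self._reverse = {}  # token -> raw value; access to this must be tightly restricted

    def tokenize(self, kind: str, value: str) -> str:
        key = (kind, value)
        if key not in self._forward:
            token = f"{kind}ID_{next(self._counters[kind])}"
            self._forward[key] = token
            self._reverse[token] = value
        return self._forward[key]

p = Pseudonymizer()
p.tokenize("User", "John Smith")    # -> "UserID_1"
p.tokenize("Phone", "+1-555-0100")  # -> "PhoneID_1"
p.tokenize("User", "John Smith")    # -> "UserID_1" (stable across repeat lookups)
```

Your analytics and batch jobs then operate on `UserID_1` and friends, while the reverse mapping lives behind strict access controls in the one system allowed to hold PII.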
Another best practice is to organize your system so that it separates the types of PII stored with each type having its own namespace.
For example, separating:
A further benefit of the last point, keeping decryption keys separate, is that it gives you greater control over access management, including the ability to scope key usage to different types of data requests.
Limiting access to PII is the key thing here. The fewer people, services, and systems that have access to PII, the simpler it is to formalize and monitor everything.
Additionally, understanding who can access PII and how will allow you to set up your incident investigation processes correctly. This alone will bring down your response and investigation times, not to mention the business costs of incident management.
PII should be stored and transmitted in encrypted form. Making it impossible for third parties to decrypt the data reduces the risks of unauthorized access to PII and of data breaches as a whole.
As an alternative, you can obfuscate the personal data (as is done for payment data in PCI DSS) or tokenize it.
You can tokenize things like IP addresses completely. This means that third-party services won’t be able to access the source IP address but can learn which users have the same IP as their tokens will be the same.
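A simple way to get this "same input, same token, but irreversible without the key" property is keyed hashing. The sketch below uses Python's standard `hmac` module; the key value and the truncation to 16 hex characters are illustrative assumptions (in practice the key would live in a KMS and you might keep the full digest).

```python
import hmac
import hashlib

# Illustrative only: a real deployment would fetch this from a secrets
# manager / KMS and rotate it, never hard-code it.
SECRET_KEY = b"rotate-me-and-store-in-a-kms"

def tokenize_ip(ip: str) -> str:
    """Deterministic token: the same IP always yields the same token,
    so third parties can still correlate users, but they cannot
    recover the source IP without the secret key."""
    return hmac.new(SECRET_KEY, ip.encode(), hashlib.sha256).hexdigest()[:16]

tokenize_ip("203.0.113.7")  # same IP -> same token every time
```

Because the function is deterministic, two events from the same address produce identical tokens, which is exactly the property described above: correlation survives, the raw IP does not.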
Even if your test environment is secured and protected at the same level as your production environment, you should never use real PII when testing. I mean, if you have given your devs access to real PII in the first place, that creates risks of its own that will need to be controlled (e.g., monitoring the devs, integrity controls, protection against data breaches caused by unauthorized parties gaining access).
If you’re not a large or established business and you don’t yet have formalized policies and processes for end-user device security, or you think that basic Windows and macOS security is good enough, make sure you secure all virtual workspaces for employees who have access to systems containing PII. Better still, gate access to the perimeters that contain PII behind a jump host.
An alternative to this would be to use something like Amazon AppStream or similar, which will cover your risks for devices that don’t have additional security software installed on them.
If you need to transfer PII to third parties — and of course you should be doing this through a secure channel and/or in encrypted format — make sure you are transferring the data to them (push model) and they are not taking it from you (pull model).
It may seem that there is no difference here or that it hinders efficiency, but think about it this way: If you leave data open to be pulled from you, then you run the risk of having much more data taken (or more requests than you would have ever anticipated). Now imagine that the third-party service is compromised. You see where I am going with this.
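The practical difference shows up in who decides what crosses the boundary. A minimal sketch (the field names and the `ALLOWED_FIELDS` allow-list are my own illustrative assumptions): under a push model, you filter the payload down to exactly what the partner needs before it ever leaves your systems, no matter what they might otherwise request.

```python
# Assumption for illustration: the partner only needs these two fields.
ALLOWED_FIELDS = {"user_id", "country"}

def build_push_payload(records: list[dict]) -> list[dict]:
    """Strip every field the third party does not strictly need.
    With a push model, this filter runs on OUR side of the boundary,
    so a compromised or over-curious partner cannot widen the scope."""
    return [{k: v for k, v in r.items() if k in ALLOWED_FIELDS} for r in records]

records = [{"user_id": "UserID_1", "country": "DE", "phone": "+49-555-0100"}]
payload = build_push_payload(records)
# payload contains no phone numbers, regardless of what the partner asks for
```

With a pull API, by contrast, the query surface lives on your side but is driven by the partner, and every extra parameter or endpoint is a chance for more data to leave than you anticipated.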
The alternative is to use anonymized data wherever possible. Set up an anonymization service that depersonalizes data as far as possible at the web/mobile app entry point, tokenizing all the information using an open algorithm.
Reducing risks is quite different from eliminating them. It’s always worth remembering that you should create a separate environment whenever people or systems will work with critical data (especially when security requirements for that data are regulated, as under GDPR and similar laws). By doing so, you will make your security job a whole lot easier. This doesn’t just mean avoiding fines or bad publicity, but also reducing the overall cost of supporting your security, even after accounting for the extra infrastructure needed to store and process the PII.