Supporting Privacy Regulations in Non-Production

Supporting Data Privacy

Every aspect of our daily lives involves data. Whether we're scrolling through social media, checking a bank account, or shopping on an e-commerce site, we use data everywhere. This data may range from our names and contact information to our banking and credit card details.

A user's personal data is quite sensitive, and in general, users expect a company to protect it. But there is always a chance that the app or service you're using will face a data breach. In that case, the question that comes to mind is: how will the company or app keep your data safe?

The answer is data privacy regulations. Nowadays, most countries have their own data privacy laws, and companies operating in those countries generally follow them. Data privacy laws protect a customer's data in production. But have you ever thought about whether your dev or testing environment is just as safe and secure?

In this post, we'll discuss why you must follow data privacy regulations in a non-production environment. We'll take a look at the challenges faced while complying with privacy rules, solutions to these challenges, and strategies to follow while implementing privacy laws in non-production. But before that, we'll discuss what privacy regulations are. So, let's buckle up and dive in.

What Do You Mean by Privacy Regulations?

Data privacy regulation, or data compliance, is a set of rules that companies must abide by to ensure they're following all the legal procedures while collecting a user's data. Not only that, but it's also the company's job to keep the user's data safe and prevent any misuse.

There are various data privacy laws. For instance, companies operating in the European Union follow the GDPR. The United States, on the other hand, has several laws like HIPAA, ECPA, and FCRA. Failing to follow these rules can result in lawsuits or penalties. The goal of these rules is to keep a user's sensitive data safe and secure from malicious activities.

Now that we know what data privacy regulation is, let’s discuss why we need to follow these rules in non-production.

Why Privacy Regulations in Non-Production Are Important

While deploying an app or a site in production, we add various security protocols. But often, the environment where we develop or test our apps is not that secure. In 2005 and 2006, Walmart suffered a security breach in which hackers targeted the dev team and transferred sensitive data and source code to computers in Eastern Europe.

This kind of incident can happen to any company. Currently, many companies use production data for in-house testing or development. So, how does a company ensure that a user's sensitive data is safe? The answer is data masking, a practice that data privacy regulations effectively mandate for non-production use of personal data.

However, implementing data privacy rules comes with many challenges. Let’s explore some of them and the ways to resolve these challenges.

Challenges Faced While Complying With Privacy Rules

Adapting to something new always comes with certain challenges, be it some new tool, technology, or regulation. Data privacy is no exception. However, the challenges are not that complicated. With proper planning, overcoming them is quite straightforward.

Adapting to New Requirements

Data privacy regulations are generally process-driven. While implementing privacy rules in non-production, your team must welcome changes in the way they do things. This may involve data masking, generating synthetic data, etc. Your team will take some time to adapt to the new processes.

Chalk out a plan before the transition. Train your team and explain why they need to follow these regulations. With proper training and clarification of individual roles, adapting to the new changes won’t take much time.

New Rules of Test Data

If your testing team is using real user data to test the essential features of your product, beware: the process is going to change. Under data privacy regulations, you cannot use real user data for testing, so the challenge lies in rearranging or recreating your test data.

However, with a proper test data management suite, the task becomes a lot easier than doing the entire thing manually.

Adjusting Your Budget Plan

Implementing any new process often involves spending a lot of money. When implementing privacy regulations, you have to think about factors like

  • the research your teams need to do
  • the purchase and implementation of data compliance tools that will help you generate privacy-compliant test data
  • the arrangement of training sessions for your team
  • the hiring of resources to monitor or enforce compliance laws

All of the above and more will affect your budget, so it's best to have a discussion with your finance and technical teams. Figure out the areas where you should focus spending and calculate an approximate amount. Planning is beneficial if you want to avoid overspending. On that note, in the following section, we'll discuss some strategies to follow while implementing privacy regulations in non-production.

Strategies to Implement Privacy Regulations in Dev and Testing

Although there is no end to planning strategies while implementing data privacy regulations, there are some important steps that we can’t miss.

Sorting Data

Before you can follow privacy laws, you must know everything about your data. Unless the project is in its starting phase, there will already be a lot of customer data. Discuss this with your team to categorize the data and clarify what data is sensitive to the user. Once you categorize the data and separate sensitive data from general data, it's time for the next steps.
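If your data lives in a relational database, a quick first pass is to scan the schema for columns whose names suggest sensitive content. Here's a minimal sketch in PostgreSQL; the name patterns and the public schema are illustrative assumptions, so extend them to match your own naming conventions.

-- Flag columns whose names hint at sensitive data.
SELECT table_name, column_name
FROM information_schema.columns
WHERE table_schema = 'public'
  AND (column_name ILIKE '%name%'
    OR column_name ILIKE '%email%'
    OR column_name ILIKE '%phone%'
    OR column_name ILIKE '%ssn%'
    OR column_name ILIKE '%card%');

A query like this won't catch everything (sensitive data often hides in free-text or badly named columns), but it gives your team a concrete starting list to review together.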

Encrypting Sensitive and Personal Data

The GDPR and other data privacy laws make it mandatory for you to secure any sensitive data. Ensure that if you have any such data in a non-production environment, it's secured by layers of encryption. Even if you're not using the data, you must still secure it in your database. No matter how strong your firewall is, an attacker may still breach it, so it's wise to protect sensitive data with layers of encryption rather than relying on a firewall alone.
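As an illustration, PostgreSQL's pgcrypto extension can encrypt a column at rest. This is only a sketch: the customers table and card_number column are hypothetical, and in practice the key should come from a secrets manager rather than being hard-coded in a script.

-- Enable the pgcrypto extension (PostgreSQL).
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Keep the encrypted value in a binary column.
ALTER TABLE customers ADD COLUMN card_number_enc bytea;

-- Encrypt the plain-text values, then drop the original column.
UPDATE customers
SET card_number_enc = pgp_sym_encrypt(card_number, 'key-from-your-vault');
ALTER TABLE customers DROP COLUMN card_number;

-- Decrypt only when genuinely needed.
SELECT pgp_sym_decrypt(card_number_enc, 'key-from-your-vault')
FROM customers;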

Restricting Access to Database

As per most data privacy rules, your database shouldn't give every user unrestricted access. Since a database holds multiple types of data, you must create roles and grant specific permissions to each role. For instance, a tester should have access to test data only, not production data. Imagine a new hire on your team deleting a table from the production database. It may happen by mistake, but it will cost the company a lot. Enforce these rules to prevent similar unfortunate mishaps.
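In SQL terms, this boils down to roles and grants. A minimal sketch might look like the following (PostgreSQL syntax; the test and production schema names and the user alice are placeholders):

-- A role for testers that can only reach the test schema.
CREATE ROLE tester NOLOGIN;
GRANT USAGE ON SCHEMA test TO tester;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA test TO tester;

-- Explicitly keep testers away from production data.
REVOKE ALL ON SCHEMA production FROM tester;

-- Individual accounts inherit the role's permissions.
GRANT tester TO alice;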

Change Your Cookie Policies

If you're developing a site, you'll need to think about how your cookies work and whether they comply with the data privacy law you're following. For instance, what if your website operates outside the EU but its target audience is in the EU? In that case, apart from standard compliance, you need to comply with the GDPR as well. Under the GDPR, a website may collect a user's personal data only after the user gives cookie consent. That means you must clearly inform the user about the data your site's cookies use to perform specific functions, and your cookies can collect data only after the user gives permission.

Use of a Compliance Monitoring Solution

Companies often appoint a data protection officer (DPO) whose job is to monitor processes, analyze risk, and suggest measures so that your company never fails to comply with privacy laws. But a DPO is only human, and when it comes to large data sets, a human mind can always miss something. The solution? Provide your DPO with a compliance monitoring solution.

Enov8 provides such a solution that addresses the needs of compliance managers. The tool monitors your data and identifies risks. Not only that, but it also helps you find compliance breaches and points out processes that you need to optimize in order to protect the data.

Disclose Important Information to Users

Data privacy laws ensure that users know how companies are using their data. You must disclose everything about data usage when users sign your agreements. Situations may arise later that require you to revise an agreement. For instance, suppose you're monitoring the logs of a system that's connected to the customer's network. If the logs contain the user's IP address or other sensitive data, inform the customer.

Synthetic Test Data Generation and Data Masking

There are some cases where you need realistic data to develop or test something. But what if the data compliance standard your company follows prohibits you from using real data? Don't worry: synthetic data is the next best thing. Synthetic data is generated by an algorithm to closely imitate the original data. You can also use data masking, where sensitive data is hidden and replaced by similar dummy data. The advantage? You can continue your work without any risk of failing to comply with privacy laws.
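To make both ideas concrete, here's a hedged sketch in PostgreSQL. The users table and its columns are assumptions for illustration: the first statement masks real identifiers in place, and the second generates synthetic rows that merely imitate the shape of real ones.

-- Masking: replace real identifiers with consistent dummy values.
UPDATE users
SET name  = 'User ' || id,
    email = 'user_' || id || '@example.com';

-- Synthetic data: fabricate rows that imitate the real thing.
INSERT INTO users (name, email, created_at)
SELECT 'User ' || g,
       'user_' || g || '@example.com',
       now() - (g || ' days')::interval
FROM generate_series(1, 1000) AS g;

Dedicated test data management tools go much further, with consistent masking across tables and realistic value distributions, but even simple scripts like these keep real personal data out of non-production.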

Train Your Team on Privacy Regulations

When it comes to complying with privacy laws, there is no end to learning and adapting to new things. It'll be quite hectic for your team if you enforce a lot of new rules all of a sudden. Make the transition smooth by arranging training sessions that explain why compliance matters and the consequences of failing to abide by these laws. In addition, train your employees on using data compliance suites. You can take a look at Enov8's data compliance suite, which monitors your data and ensures you're compliant with GDPR, FCRA, ECPA, and multiple other standards.

Keeping your test and dev data compliant with privacy laws may prove to be a little challenging at first. But if planned and executed in a phased manner, your team will adapt easily.

Author

This post was written by Arnab Roy Chowdhury. Arnab is a UI developer by profession and a blogging enthusiast. He has strong expertise in the latest UI/UX trends, project methodologies, testing, and scripting.

What is Data Subsetting in TDM?

Test Data Subsetting

The foundation of a comprehensive and well-implemented QA strategy is a sound testing approach. And a sound testing approach, in its turn, depends on having a proper test data management (TDM) process in place. TDM’s responsibilities include obtaining high-quality and high-volume test data in a timely manner. However, obtaining such data isn’t an easy process and might create prohibitive infrastructure costs and unforeseen challenges.

This post is all about the solution: data subsetting.

We begin by defining data subsetting in a more general sense and then explain why it's so important in the context of TDM. We then talk about the main challenges involved in data subsetting and cover the main methods for performing it.

Let’s get started.

What Is Data Subsetting?

Data subsetting isn’t a hard concept to grasp. To put it simply, it consists of getting a subset or a slice of a complete dataset and moving it somewhere else.

The next step is to understand how this concept works in the context of TDM.

Why Is Data Subsetting Needed in TDM? Knowing the Pain

Data subsetting is a medicine that's supposed to alleviate a specific pain. So if you want to understand what subsetting is and why it's needed in the context of TDM, you first need to understand that pain.

A couple of sections ago, we defined test data management. You learned that this process is in charge of coming up with reliable data for the consumption of automated test cases. And even though there are alternatives, one of the most popular solutions to this problem is simply copying the data from the production servers, often called production cloning.

Copying from production is a handy way of obtaining realistic data sets for testing, since nothing is more realistic than actual production data. However, this approach presents some severe downsides. The security-related challenges can be solved with approaches like data masking. This post focuses on the challenges related to infrastructure.

The Pains of Production Cloning: The Infrastructure Edition

You could probably summarize the infrastructure-related challenges of production cloning in two words: high costs. If you want to copy 100 percent of your production data into your test environments, you'll incur incredibly high costs for storage and infrastructure.

That’s not to mention the fact that you could potentially have not only one test environment but several. So we’re talking about multiplying this astronomical cost three, four, or even five times.

Besides the direct financial hit, you’d also incur indirect costs in the form of slow test suites. If you have gigantic amounts of test data, then it’ll necessarily take a long time for you to load it when it’s time for test execution.

Data Subsetting to the Rescue

Applying data subsetting in the context of TDM can solve or alleviate the difficulties of copying data from production. When you create test data by copying not the whole production database but a relatively small portion of it, you don't incur the exorbitant infrastructure costs that you would otherwise. In other words, it's like partial production cloning.

What are the main benefits of using data subsetting in TDM?

The first distinct advantage is the decrease in storage costs for the test data. In the same vein, you'll also incur fewer costs in overall infrastructure. These cost savings quickly add up if you factor in multiple QA or testing environments, which you most likely have.

But the benefits of data subsetting aren’t all about cost. Test time is also impacted positively. Since there’s less data to load and refresh, it takes less time to do it. That way, the adoption of data subsetting can also reduce the total execution time of your test suites.

The Challenges Involved in Data Subsetting

There isn’t such a thing as medicine without side effects. So data subsetting, despite being able to cure or alleviate some pains in the TDM process, also comes with some pains of its own. Let’s look at them.

The first roadblock is referential integrity. Let's say you work for a social network site and you're implementing data subsetting. The site has one million users, and you've taken just a hundred thousand of them for your test database by slicing the users table. When you get data from the other tables in the database, you have to make sure to fetch the posts, friendships, and pictures of just those hundred thousand users to keep the existing foreign-key relationships intact.
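To make the constraint concrete, here's a sketch in PostgreSQL-flavored SQL, using table and column names assumed from the example above (users, posts, and a friendships table with two foreign keys back to users):

-- Materialize the sampled users once so every slice refers to the same set.
CREATE TEMP TABLE sampled_users AS
SELECT id FROM users ORDER BY id LIMIT 100000;

-- Posts reference users through a single foreign key.
SELECT p.*
FROM posts p
JOIN sampled_users s ON p.user_id = s.id;

-- Friendships reference users twice, so BOTH sides must be in the sample.
SELECT f.*
FROM friendships f
JOIN sampled_users s1 ON f.user_id = s1.id
JOIN sampled_users s2 ON f.friend_id = s2.id;

Notice how the friendships slice silently shrinks: any friendship with one foot outside the sample has to be dropped (or the missing user pulled in), and every new table multiplies these decisions.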

This roadblock becomes even more relentless when you factor in the possibility of relationships spanning multiple databases. There’s nothing that prohibits this organization from storing user profiles in a PostgreSQL database and posts in an Oracle database.

These types of relationships can be even harder to track and protect when they span not only multiple databases but multiple data sources. You could use relational databases for some types of data while others reside in .csv files, and a third type might be stored in some document-based NoSQL database. Such a variety of possible data sources certainly poses a challenge for maintaining referential integrity when doing data subsetting.

Data Subsetting Methods

Let's now cover the main methods you can use to implement data subsetting.

Using SQL Queries

We start with the most straightforward approach. In the case of data subsetting, this translates to using plain old SQL queries.

For instance, in the social network example we used before, let’s say you have a total of ten thousand users (it’s a small social network, mind you) and you want to get a hundred users. This is how you’d do it, for instance, in PostgreSQL:

SELECT * FROM users ORDER BY id LIMIT 100;

Now let’s say you want to fetch the posts from those hundred users. How would you do it? Here’s a solution:

SELECT * FROM posts WHERE user_id IN (SELECT id FROM users ORDER BY id LIMIT 100);

These are just simple examples. For a more realistic approach, you would have more complex queries stored in script files. Ideally, you'd check those into version control to track any changes.
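For instance, a versioned script might materialize the whole subset into its own schema and then move it to the test environment. This is just a sketch under the same assumptions as before (PostgreSQL; the database names are placeholders):

-- subset.sql: build the subset in its own schema, driven by one root sample.
CREATE SCHEMA IF NOT EXISTS subset;

CREATE TABLE subset.users AS
SELECT * FROM users ORDER BY id LIMIT 100;

CREATE TABLE subset.posts AS
SELECT * FROM posts
WHERE user_id IN (SELECT id FROM subset.users);

-- Then dump the schema and load it into the test database, e.g.:
--   pg_dump --schema=subset prod_db | psql test_db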

The advantage of this approach is that it's easy to get started. That's pretty much it. You most likely have people with SQL knowledge in your organization, so the learning curve here should be nonexistent.

On the other hand, this approach is very limited. While it might work well in the beginning, as soon as your subsetting needs start to get serious, it falls apart. It becomes increasingly harder to understand, change, and maintain the scripts. Bear in mind that changes to the “root” query (in our example, the query that fetches the hundred users) cascade down to its children, which further complicates updating the scripts.

Also, knowledge of query performance optimization techniques might be necessary. Otherwise, you might get into situations where poorly written queries become unusable due to outrageously poor performance.

Developing a Custom Solution

The second approach on our list consists of developing a custom application to perform the subsetting. The programming stack you use for creating this application is of little consequence. So just adopt the programming languages and frameworks the software engineers in the organization are already comfortable using.

This approach can be effective for small and medium-size teams, especially when it comes to cost. But at the end of the day, it has more downsides than benefits.

First, building such a tool takes development time that could be used elsewhere, so you’re incurring an opportunity cost here. Database knowledge would also be necessary for building the tool, which would partially defeat the purpose of having this abstraction layer above the database in the first place.

Also, the opportunity cost of building the tool isn’t the only one you would incur. After the solution is ready, you’d have to maintain it for as long as it’s in use.

Adopting Commercial Tools

Why build when you can buy? That's the reasoning behind the third approach on our list: adopting an existing commercial tool.

There are plenty of benefits in adopting a commercial tool, among which the most important are probably the robustness and scalability of the tool and the high number of databases and data sources it supports.

The downside associated with buying isn’t that surprising, and it boils down to one thing: high costs. These costs aren’t just what you pay when buying or subscribing to the tool. You have to factor in the total cost of ownership, including the learning curve. And that might be steeper than you expect.

Using Open Source Tools

Why buy when you can get a tool for free, with access to the source code as a bonus?

Adopting open source subsetting tools is like getting the best of both worlds: you don’t have to build a tool from scratch, while at the same time you can see the source code and change it if you ever need to expand the capabilities.

The downside is the total cost of ownership, which might still be high, depending on the learning curve the tool presents.

TDM: Subset Your Way to Success!

Copying data from production to use in tests is an old technique. It’s a handy way of feeding test cases with realistic data. However, it’s not without its problems. Besides security and privacy concerns, production cloning can also generate substantial infrastructure costs and high testing times.

In order to solve those problems, many organizations adopt data subsetting. Using subsetting and techniques that deal with security and privacy concerns (e.g., data masking) allows companies to leverage production cloning safely and affordably.

Thanks for reading, and until next time.

Author

This post was written by Carlos Schults. Carlos is a .NET software developer with experience in both desktop and web development, and he’s now trying his hand at mobile. He has a passion for writing clean and concise code, and he’s interested in practices that help you improve app health, such as code review, automated testing, and continuous build.

What is the Consumer Data Right (CDR)?

A DataOps Article.

You may have heard it mentioned, particularly if you work in “Open Banking”. CDR is the future of how we access and ultimately share our data with “trusted” third parties.

It will be introduced into the Australian banking sector from the middle of 2020, with scope and functionality evolving in phases, and will ultimately roll out across other sectors of the economy, including superannuation, energy, and telecommunications.

Vendor Benefits

The Consumer Data Right is, first and foremost, a competition and consumer reform.

  • Reduced sector “monopolization” (increased competition).
  • Encouragement of innovation and competition between service providers.
  • Access to new digital products and channels.
  • New customer experiences, yet to be innovated.

Consumer Benefits

  • Immediate access to your information for quicker decision making.
  • Better transparency of vendor pricing and offers.
  • A wider range of products to support your lifestyle.
  • More consumer power, e.g., the ease of switching providers when dissatisfied.

Vendor Risks

  • CDR compliance is mandatory for Data Holders.
  • Implementing CDR (on top of legacy platforms) is non-trivial.
  • Non-compliance penalties may be severe (fines and trading restrictions).
  • CDR is rapidly evolving and continually changing, so continuous conformance validation and upkeep are required.
  • Increased access to data means an increased “attack footprint”.

Be warned! Although the CDR is expected to create exciting new opportunities, there are also clearly defined conformance requirements. In a nutshell, breaches of the CDR Rules can attract severe penalties ranging from $10M to 10% of the organization’s annual revenue.

Who is responsible for CDR?

Ultimately CDR may evolve to a point where it is self-regulating. However, at present at least, the accreditation of who can be part of the ecosystem (i.e. Data Holders & Data Recipients) will be controlled by the relevant industry regulators*.

*In Australia, the ACCC is responsible for implementing the CDR system. Only an organisation that has been accredited can provide services in the CDR system. An accredited provider must comply with a set of privacy safeguards, rules, and IT system requirements that ensure your privacy is protected and your data is transferred and managed securely.

How do consumers keep their data safe?

The CDR system is designed to ensure your data is made available to service providers only after you have authenticated and given consent.

Note: The diagram below, based on the Australian OAuth2/OIDC security guidelines for CDR, shows the key interactions between the Consumer, the Data Recipient (e.g., a retailer app on a phone), and a Data Holder (a bank).

Australian CDR uses the OAuth2/OIDC Hybrid Flow

Consumers can control what data is shared, what it can be used for, and for how long. Consumers will also have the ability to revoke consent and have their information deleted at any time.

CDR is the beginning of an interesting new information era. Learn more about the Consumer Data Right and accreditation on the CDR website.