
Private and secure AI: Threats and solutions
Pedro Silva
20 Oct 2020
Introduction

Consumers have preferred convenience over privacy. While many say privacy matters, few have placed any real value on protecting their data. We still buy connected devices and use services that aren't always honest about how they use private data. Companies like Google, Facebook, and Amazon remain at the center of our digital lives.

But this might be changing. A 2019 survey conducted by Cisco looked at what people do - not just what they say - when it comes to privacy. It identified a new group of consumers who say they care about privacy and have acted on it, switching companies or providers over their data policies.

According to a Harvard Business Review study, 72 percent of Americans are reluctant to share information with businesses because they “just want to maintain [their] privacy.” And Deloitte, as a part of a study about consumer trust and data protection, explains that caring about data privacy and security is not just a risk management issue, but a “potential source of competitive advantage that may be a central component of brand-building and corporate reputation.”

Policy-makers stand right behind these privacy-minded consumers. The US and EU mandate strict rules regarding the storage and exchange of personally identifiable data, and regulations like the GDPR and CCPA have taken center stage in recent years.

Artificial Intelligence technologies are a driving force behind these data needs. Almost every use of AI requires collecting and processing large amounts of data, in many cases personal data, to learn and make intelligent decisions.

Many of the areas where AI could be used beneficially are those where privacy is of the highest concern and where the limited availability of data has been a hindrance: for example, medical history, financial records, or private habits. In this post we’ll show how we can tap into this pool of sensitive data while simultaneously addressing the need for data protection.

Threats and solutions

Privacy risks from AI stem not just from the mass collection of personal data, but from the models that power most of today’s algorithms. Data isn’t vulnerable just from database breaches, but from “leaks” in the models that reveal the data on which they were trained.

An ideal privacy-preserving system must resist attacks against the dataset: it must not reveal whether an individual is present in the dataset, nor allow an individual's characteristics to be extracted from it. It must also withstand attacks on the algorithm, for instance those that try to derive information about the dataset from the model itself.

Below, we list the different threats to both data and algorithms, along with technical solutions that minimize them.

Protecting the data
Threats

Data theft

Data theft is the most publicized type of threat. Most commonly, the private data is transferred to the computation party, ideally over a secure channel. More often than not, the data then resides on the computation server in its original, unencrypted form. This is a concern, as the data is susceptible to both insider and outsider attacks.

There should be guarantees that the input and output data are visible only to the user - hidden even from the model creator.

Privacy goes hand in hand with security. Good security means data leaks are less likely to happen, leading to the ideal scenario: no loss of user trust and no fines for improper data management.

Identity/membership inference

Membership inference attacks can tell whether a sample was in the training set based on the output of the model. If the sample is linked to a person, as with medical or financial data, then inferring membership is a privacy threat. For example, an attacker could learn whether someone's record was used to train an ML model associated with a specific disease.

These attacks only need black-box access to the model, plus the confidence levels for each predicted class. With these, attackers can detect the difference in how the model responds to samples that were in the training set versus those that were not.
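
To make the idea concrete, here is a minimal sketch of a confidence-threshold membership test, assuming a scikit-learn-style classifier that exposes predicted probabilities. Published attacks (e.g. shadow models) are more elaborate, but they exploit the same signal: overfit models are noticeably more confident on the samples they were trained on.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def membership_inference(model, samples, threshold=0.75):
    """Toy confidence-threshold attack: an unusually high predicted-class
    probability suggests the sample was part of the training set."""
    probs = model.predict_proba(samples)   # black-box access: only outputs
    confidence = probs.max(axis=1)         # confidence of the predicted class
    return confidence > threshold          # True -> "probably in training set"

# Tiny illustration on random data: an overfit model is typically more
# confident on its training points than on unseen ones.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(50, 5)), rng.integers(0, 2, 50)
X_other = rng.normal(size=(50, 5))
model = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)
print(membership_inference(model, X_train).mean())  # typically a larger fraction flagged
print(membership_inference(model, X_other).mean())  # typically a smaller fraction flagged
```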

Feature reconstruction

Reconstruction attacks try to recreate the private data from the feature vectors. As such, they need white-box access to the model.

These attacks could happen when the feature vectors used in the training phase are not deleted after building the model. In fact, some ML algorithms such as SVM or kNN store feature vectors in the model itself.

Examples of successful cases of reconstruction attacks include: fingerprint reconstruction where a fingerprint image was recreated from a minutiae template (features); and mobile device touch gesture reconstruction where touch events were recreated from gesture features such as velocity and direction. In both cases, the privacy violation was a security threat to authentication systems.

Solutions

Data anonymization

Anonymization removes sensitive information from a record, while pseudonymization replaces it with artificial identifiers.

It is good practice to remove personal identifiers from data if you're planning to make any of it public. But even then, sophisticated attackers can learn a lot about a user. For example, Netflix released anonymized versions of its datasets to help contestants for its $1M prize to build better movie recommender systems. Despite the anonymization, researchers were able to combine this dataset with background knowledge from IMDb to identify the Netflix records of known users, and were further able to deduce those users' apparent political preferences.
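
As a quick illustration, the snippet below anonymizes a toy table by dropping direct identifiers and pseudonymizes it by replacing emails with salted hashes. The column names are hypothetical, and real pipelines would also treat quasi-identifiers (zip code, birth date, etc.) with care, since those are exactly what linkage attacks exploit.

```python
import hashlib
import pandas as pd

# Hypothetical records; column names are illustrative only.
records = pd.DataFrame({
    "name":      ["Alice Jones", "Bob Smith"],
    "email":     ["alice@example.com", "bob@example.com"],
    "zip_code":  ["02139", "94107"],
    "diagnosis": ["asthma", "diabetes"],
})

# Anonymization: drop direct identifiers that aren't needed downstream.
anonymized = records.drop(columns=["name", "email"])

# Pseudonymization: replace an identifier with a salted hash, so records
# can still be linked together without exposing the original value.
SALT = "change-me"  # in practice, keep this secret and manage it carefully
anonymized["patient_id"] = records["email"].apply(
    lambda e: hashlib.sha256((SALT + e).encode()).hexdigest()[:12]
)
print(anonymized)
```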

Differential privacy

Models can sometimes encode more information than necessary for the prediction task, and can inadvertently memorize individual samples. For example, a language model built to predict the next word (like those you see on smartphones) can be exploited to release information about individual samples that were used for training.

Differential privacy works by injecting a controlled amount of statistical noise to obscure the data contributions from individuals in the dataset. This is done while ensuring that the model still learns about the overall population, and thus provides predictions that are accurate enough to be useful.
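
A minimal sketch of the idea, using the Laplace mechanism to answer a counting query with epsilon-differential privacy. Practical systems apply the same principle elsewhere, for example by clipping and noising gradients during model training.

```python
import numpy as np

def dp_count(data, predicate, epsilon=0.5):
    """Differentially private count query via the Laplace mechanism.
    A counting query has sensitivity 1 (adding or removing one person
    changes the result by at most 1), so noise drawn from Laplace(1/epsilon)
    gives epsilon-differential privacy."""
    true_count = sum(predicate(x) for x in data)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [34, 45, 29, 61, 52, 38, 47]
print(dp_count(ages, lambda age: age > 40, epsilon=0.5))  # noisy answer near 4
```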

Protecting the algorithm
Threats

Model theft

The model is core to a company's interests: there is little point in improving AI capabilities if your competitors can easily copy the models.

A model can be stolen outright, or it can be reverse-engineered based on its outputs (see model inversion below).

Model inversion

With model inversion, the attacker's goal is to create feature vectors that are similar to those used to create the model, by looking only at the model responses. This is especially important nowadays, when ML-as-a-service is a popular approach.

Such attacks use the confidence information (e.g. class probabilities) that is returned with the response for test samples. They produce an average representation of a certain class, so they are most threatening to privacy when a class corresponds to a single individual, as in face recognition.
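
The sketch below shows the intuition with a simplified white-box, gradient-based variant, assuming a PyTorch image classifier: starting from noise, we adjust the input until the model is highly confident in the target class, recovering an average-looking representative of it. The published attacks work from confidence outputs alone, but the goal is the same.

```python
import torch

def invert_class(model, target_class, input_shape=(1, 1, 28, 28), steps=200, lr=0.1):
    """Gradient-based model inversion sketch: optimize an input so that
    the model assigns a high score to `target_class`."""
    model.eval()
    x = torch.randn(input_shape, requires_grad=True)  # start from random noise
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        loss = -logits[0, target_class]   # maximize the target-class score
        loss.backward()
        optimizer.step()
    return x.detach()                     # approximate "average" class member
```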

Adversarial manipulation

In computer vision, small perturbations to images can cause neural networks to make mistakes such as confusing a cat with a computer. They are like optical illusions for machines.

Adversarial attacks can be designed to make ML models produce specific outputs chosen ahead of time by the attacker. More recently, researchers have shown that a model can even be reprogrammed to perform a task chosen by the attacker, without the attacker needing to specify or compute the desired output for each test-time input. This attack finds a single adversarial perturbation that, when added to all test-time inputs, causes the model to perform a task chosen by the adversary - even if the model was never trained to do that task. The perturbation can thus be seen as a program for the new task.
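
For intuition, here is a sketch of the classic Fast Gradient Sign Method (FGSM), assuming a differentiable PyTorch classifier: a single gradient step on the input is often enough to flip the prediction while the change remains nearly imperceptible to humans.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    """FGSM sketch: take one step in the direction of the loss gradient
    with respect to the input. `x` is a batch of images in [0, 1] and
    `label` holds the true class indices."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # small, structured perturbation
    return x_adv.clamp(0, 1).detach()     # keep pixels in a valid range
```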

Solutions

Homomorphic encryption

When using homomorphic encryption, data can be encrypted by its owner and sent to the model owner, who can run computations on it as if it were unencrypted. There are currently restrictions on the types of calculations that can be performed homomorphically, and performance is still far slower than computing on plaintext.
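
As a toy example, the snippet below uses the python-paillier (phe) library. Paillier is only additively homomorphic - it supports adding ciphertexts and multiplying them by plaintext constants - but it conveys the core idea; fully homomorphic schemes allow richer computations at a much higher cost.

```python
from phe import paillier  # pip install phe (python-paillier)

public_key, private_key = paillier.generate_paillier_keypair()

# The data owner encrypts values before sending them out.
enc_a = public_key.encrypt(3.5)
enc_b = public_key.encrypt(2.0)

# The computation party works directly on ciphertexts:
enc_sum = enc_a + enc_b       # homomorphic addition
enc_scaled = enc_a * 4        # multiplication by a plaintext constant

# Only the data owner, holding the private key, can read the results.
print(private_key.decrypt(enc_sum))     # 5.5
print(private_key.decrypt(enc_scaled))  # 14.0
```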

Secure multi-party computation

With secure multi-party computation, the processing is done on encrypted data shares, split so that no single party can retrieve the entire data on their own.

The computation result can be announced without any party ever seeing the data itself; the original data can only be recovered if the parties combine their shares.
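
The building block behind many MPC protocols is additive secret sharing, sketched below: each value is split into random shares that individually look like noise, yet the parties can still compute a sum by operating on their shares locally. Real protocols add secure multiplication, communication rounds, and protections against dishonest parties.

```python
import random

Q = 2**31 - 1  # a large prime; all arithmetic is done modulo Q

def share(secret, n_parties=3):
    """Split a secret into n additive shares that sum to it mod Q.
    Any subset of fewer than n shares reveals nothing about the secret."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

# Two parties secret-share their salaries; adding shares position-wise
# yields shares of the sum, which is revealed only when recombined.
alice = share(52_000)
bob = share(61_000)
shared_sum = [(a + b) % Q for a, b in zip(alice, bob)]
print(reconstruct(shared_sum))  # 113000
```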

Federated learning

Federated learning means distributing copies of a machine learning model to the devices where the data is kept, performing training iterations locally, and returning the results of the computation to a central repository to update the main algorithm.

While the data remains with its owner, federated learning doesn't entirely guarantee security and privacy, since attackers can still steal personally identifiable data directly from the devices or interfere with the communication process.
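
On the server side, the canonical aggregation step is federated averaging (FedAvg): combine the locally trained weights, weighted by how much data each client holds. A minimal numpy sketch, with hypothetical client updates:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg sketch: `client_weights` is a list of per-client weight lists
    (one numpy array per layer); the server returns their weighted average."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        layer_avg = sum(
            w[layer] * (size / total)
            for w, size in zip(client_weights, client_sizes)
        )
        averaged.append(layer_avg)
    return averaged

# Hypothetical round with two clients holding 100 and 300 samples.
client_a = [np.ones((2, 2)), np.zeros(2)]
client_b = [np.full((2, 2), 3.0), np.ones(2)]
print(federated_average([client_a, client_b], [100, 300]))
```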

Conclusion

In this post we looked at ways in which bad actors can take advantage of AI systems and steal valuable information, and we listed a number of technical solutions that help prevent such attacks, whether they target the data or the algorithms.

Keeping the data safe means attackers can’t tell whether someone’s data belongs to a given dataset, and that they are not able to extract any other features of the individual from it. Anonymization and pseudonymization are ways to hide or replace personally identifiable information (e.g. by removing or substituting the values in fields that are not needed). A more sophisticated approach is differential privacy, which perturbs the data in a way that prevents identifiable information from leaking while still allowing the algorithm to learn something useful.

Attacks on the algorithms usually try to reverse engineer the model or use it in ways it was not intended to be used. The simplest defense is to give only black-box access to the model and limit its output. For example, the attack success rate is lower when classification algorithms report rounded confidence values or just the predicted class labels.
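
A minimal sketch of such output hardening for a hypothetical prediction API: return only the predicted label, or at most coarsely rounded confidences, so attacks that rely on precise probability scores have less signal to work with.

```python
import numpy as np

def harden_prediction(probs, decimals=1, top_only=True):
    """Limit what a prediction API reveals: return only the predicted
    label, or confidences rounded to `decimals` places."""
    probs = np.asarray(probs)
    if top_only:
        return int(probs.argmax())             # label only, no confidences
    return np.round(probs, decimals).tolist()  # coarsely rounded confidences

print(harden_prediction([0.07, 0.61, 0.32]))                  # -> 1
print(harden_prediction([0.07, 0.61, 0.32], top_only=False))  # -> [0.1, 0.6, 0.3]
```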

Not a zero-sum game

To conclude, AI can help solve many business challenges but is very dependent on large amounts of data, often sensitive. The solution to privacy concerns is not to limit the advancement and applications of the technology, but rather to find new ways to use the data in privacy-preserving ways.

Privacy isn’t just a few sentences buried in terms and conditions; it is something to consider from day one when building products or services. Designing software with privacy in mind is not a zero-sum game - it is better for everyone.
