How to protect your private data when fine-tuning LLMs

A practical guide using SarusLLM

GenAI
LLM
Differential Privacy
Elodie Zanella

In the previous post, we discussed the privacy risks of fine-tuning Large Language Models (LLMs). (We also publish plenty of other content on LLMs and privacy in general; check out our blog and follow us on LinkedIn to get more!)

In a nutshell: during fine-tuning, private data can be absorbed by the model in an uncontrolled way, making the model a privacy time bomb. Personal data is then highly likely to be exposed, whether through direct access to the model or through well-chosen, simple prompts once the model is used in production.

This is why at Sarus, we’ve been working hard this summer to offer a secure and practical solution: Sarus LLM fine-tuning, powered by Differential Privacy (DP). When using it, fine-tuning becomes fully privacy-safe.

This article first explains why DP is the perfect candidate for privacy-safe LLM fine-tuning, and then shows how to use the Sarus LLM fine-tuning SDK (beta) on a concrete use case.

You can also directly check out our demo video or the demo notebook.

Differential Privacy to the rescue of your private data

At its core, Differential Privacy (DP) is a mathematical concept that ensures that the personal information of individuals is shielded when data is analyzed or processed. It achieves this by adding a controlled amount of noise at different steps of the computation process, making it virtually impossible to discern individual data points. This noise is carefully calibrated to maintain the privacy of individuals while allowing meaningful statistical analyses to take place.
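For readers who want the formal statement behind this intuition, here is the standard (ε, δ)-differential-privacy definition (general DP theory, not anything Sarus-specific): for any two datasets D and D′ that differ in a single individual's record, and for any set of possible outputs S, a mechanism M is (ε, δ)-DP if

$$\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta.$$

The smaller ε (and δ), the less any single person's data can influence what M outputs, and therefore the stronger the privacy guarantee.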

In an era when organizations must extract valuable insights while safeguarding user privacy, DP turns out to be a perfect solution by balancing the need for data-driven decision-making with the imperative of protecting sensitive information.

DP-SGD: Marrying Privacy and Deep Learning

DP-Stochastic Gradient Descent (DP-SGD), introduced in 2016 by Abadi et al., is a groundbreaking algorithm that marries the principles of differential privacy with the training of deep learning models. It enables the training of most AI models while preserving the privacy of training data.

You got it, this algorithm is perfect for fine-tuning deep learning models. DP-SGD clips each example's gradient and adds calibrated noise to the aggregated gradient updates during fine-tuning. It thus allows models to be customized without exposing any sensitive training data to potential threats.
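To make the idea concrete, here is a minimal sketch of one DP-SGD step written in plain PyTorch. This is purely illustrative and is not the Sarus implementation: it assumes a `model`, a `loss_fn`, and a `batch` of `(x, y)` pairs are already defined, and it omits the privacy accounting that turns the noise level into an actual ε guarantee.

import torch

def dp_sgd_step(model, loss_fn, batch, lr=1e-3, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step: per-example gradient clipping + Gaussian noise (didactic sketch)."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed_grads = [torch.zeros_like(p) for p in params]

    # 1. Compute each example's gradient individually and clip its L2 norm
    for x, y in batch:
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for acc, g in zip(summed_grads, grads):
            acc.add_(g * scale)

    # 2. Add Gaussian noise calibrated to the clipping norm, average, and update
    with torch.no_grad():
        for p, acc in zip(params, summed_grads):
            noise = torch.randn_like(p) * (noise_multiplier * clip_norm)
            p.add_(-(lr / len(batch)) * (acc + noise))

Because each individual gradient is bounded by the clipping norm and then drowned in noise of comparable scale, no single training example can meaningfully shift the model's weights.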

Enter Sarus: easily fine-tune LLMs with Differential Privacy

Fine-tuning LLMs and properly calibrating and implementing DP both require advanced skills and privacy expertise. So the Sarus team has built a ready-to-use, user-friendly SDK (now available in beta) that lets organizations harness the power of LLMs while meeting privacy, compliance, and security requirements.

Let’s see how it works with an example.

Use case: fine-tune a generative model on patient data

Let’s assume we want to fine-tune an LLM on private patient data so that the model gains medical expertise and can generate realistic fake medical diagnoses, useful for many applications (e.g., safely generating annotated data to feed an automatic annotation model).

The patient dataset is highly sensitive and one should ensure no personal information is embedded in the fine-tuned LLM.

In this example, we use the PHEE dataset, a public patient dataset that was manually curated to ensure it contains no identifying information. In the general case, however, a medical dataset with a free-text column is highly risky.

In this dataset, we’ve just introduced a secret for the purpose of our demonstration: “François Dupont suffers from a severe form of pancreatic cancer”.
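For the curious, injecting such a canary is just a matter of appending one extra row to the raw data before registering it with Sarus. Here is an illustrative sketch (the file name, the `patient_id` value, and the exact JSON layout are hypothetical; only the `context` column name matches the preprocessing shown below):

import pandas as pd

# Illustrative only: append a canary row to a local copy of the PHEE data.
# The `context` column holds a JSON string, matching the preprocessing step below.
secret = "François Dupont suffers from a severe form of pancreatic cancer"
canary = {"context": f'{{"text": "{secret}", "patient_id": "canary-001"}}'}

phee_df = pd.read_csv("phee.csv")  # hypothetical local copy of the dataset
phee_df = pd.concat([phee_df, pd.DataFrame([canary])], ignore_index=True)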

Data preparation for fine-tuning

Let’s install the sarus-llm-beta Python package and preprocess the data to feed it into the generative model. (📒 Notebook and demo video)

!pip install sarus-llm-beta

# The module is imported with underscores (hyphens are not valid in Python imports)
import sarus_llm_beta
from sarus_llm_beta import Client
import pandas

client = Client(url="https://admin.sarus.tech/gateway", email="analyst@example.com")

ds = client.dataset("phee")
df = ds.as_pandas()
# As a convention, the model will train on the column named `text`
X = df.rename(columns={"context": "text"})

# We extract the text from the JSON column
from sarus_llm_beta.serialization import trace

def preprocess(data: str) -> str:
    # Keep only the value of the "text" field of the JSON string
    return data.split('text":')[1].split(', "patient_id')[0].strip()

# Trace the preprocessing function on a sample value (Sarus serialization utility)
preprocess = trace(preprocess)(X.text[0])
X['text'] = X['text'].apply(preprocess)

X_train = X.iloc[:2608]
X_test = X.iloc[2608:]

X_train
Preprocessed data

Fine-tuning: with the Sarus SDK, it’s easy to fine-tune a model on any text

Let’s start by fine-tuning the model without any privacy guarantee.

from sarus_llm_beta.llm import LLAMA2

model = LLAMA2("llama-2")
model.fit(X=X_train, batch_size=12, epochs=1)

# target_epsilon='unlimited' means there is no privacy guarantee
# (see Differential Privacy theory to learn more)
sarus_llm_beta.eval(model, target_epsilon='unlimited')

Note that we have built everything so that fine-tuning a model just takes a single line of code. This works with most public clouds and private clouds supporting Kubernetes. When a job requiring a GPU (like LLM fitting or inference) is launched, a GPU node is automatically created in your cloud environment and the job is spawned and executed on it. At the end of the job, the node is destroyed to minimize cost. As a reminder, the Sarus app runs in your environment, so everything described here happens in your infrastructure.

Also, we use llama-2 in this example, but it could be any generative language model supported by Sarus; most standard models are supported.

Let’s check that the model has learned the medical expertise:

# Sample from the model trained on real data
sarus_llm_beta.eval(
    model.sample(
        prompts=[""] * 10,
        temperature=1.0,
        max_new_tokens=100,
    ),
    target_epsilon='unlimited',
)

Now let’s see if we can learn something about François Dupont:

# Sample with a prompt starting with "François Dupont suffers"
sarus_llm_beta.eval(
    model.sample(
        prompts=["François Dupont suffers"] * 5,
        temperature=0.1,
        max_new_tokens=100,
    ),
    target_epsilon='unlimited',
)

The secret could very easily be extracted with prompts starting with “François Dupont suffers”… ⚠️ PRIVACY LEAK

Fine-tuning with Differential Privacy: the game changer for privacy-safe LLMs

Now let’s fine-tune with Differential Privacy:

PRIVACY_BUDGET = 22 # See Differential Privacy theory to learn more. 
# Note that a high budget is used here because the training dataset only contains 3K rows.

model = LLAMA2("llama-2")
model.fit(X=X_train, batch_size=1, epochs=3)
sarus_llm_beta.eval(model, target_epsilon=PRIVACY_BUDGET)

Let’s check the model is still performant in generating medical diagnoses:

# Sample from the model trained with DP
sarus_llm_beta.eval(
    model.sample(
        prompts=[""] * 10,
        temperature=1.0,
        max_new_tokens=100,
    ),
    target_epsilon=PRIVACY_BUDGET,
)

It is! Now let’s try to extract information about François Dupont:

sarus_llm_beta.eval(
    model.sample(
        prompts=["François Dupont suffers"] * 5,
        temperature=0.1,
        max_new_tokens=100,
    ),
    target_epsilon=PRIVACY_BUDGET,
)

The secret doesn’t appear!

Of course, the generated text starts with “François Dupont” since the name was in the prompt. But because the fine-tuning was done with Differential Privacy, we have the guarantee that whatever comes next was not influenced significantly more by François Dupont’s data than by any other data point. His privacy is safe.
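This is a direct consequence of the post-processing property of DP (again, standard DP theory rather than anything Sarus-specific): if the fine-tuned weights were produced by an (ε, δ)-DP mechanism M, then any function of those weights, including sampling with an arbitrary prompt, satisfies the same guarantee:

$$\mathcal{M} \text{ is } (\varepsilon, \delta)\text{-DP} \;\Longrightarrow\; f \circ \mathcal{M} \text{ is } (\varepsilon, \delta)\text{-DP for any (randomized) } f.$$

In other words, no amount of clever prompting can extract more about an individual than the privacy budget allows.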

***

In short, we are super happy to provide this tool, which makes LLM fine-tuning easier while protecting personal data in the process. And of course, we’d be delighted to let you try it.

Do you plan to fine-tune models? Join our beta program!

Just want to have an experience-sharing session on LLMs? We’re also super interested. Get in touch!

***

In our upcoming blog posts, we will share other practical examples of why and how to use the Sarus LLM SDK, so that you can make informed decisions when designing and delivering your LLM strategy. Our mission is to help you build simple, privacy-preserving LLMs. Stay tuned!

About the author

Elodie Zanella

Director of Product @ Sarus
