Analytics & AI

Privacy-safe LLM workflows

SarusLLM lets enterprises leverage the power of Generative AI while keeping their private data safe.

Data scientists explore and preprocess data, then feed it to LLMs inside a clean room, without ever seeing the raw data. Only high-quality synthetic data and differentially private statistics can leave the clean room. To do so, data scientists use their usual AI and GenAI tools wrapped in the Sarus Python SDK, as in the sketch below.
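
For illustration, here is a minimal sketch of what that workflow could look like in Python. The client URL, the dataset name, and the method names (Client, dataset, as_pandas) are assumptions made for this sketch, not necessarily the exact Sarus SDK interface:

    from sarus import Client

    # Connect to the Sarus clean room; raw rows never leave it.
    client = Client(url="https://sarus.example.com/gateway", email="analyst@example.com")

    # Get a handle on a protected dataset and manipulate it through a
    # familiar pandas-like interface; operations execute remotely, in the clean room.
    dataset = client.dataset("patient_records")  # hypothetical dataset name
    df = dataset.as_pandas()

    # Preprocess as usual. Anything retrieved locally is either high-quality
    # synthetic data or a differentially private statistic.
    notes = df[df["diagnosis"].notna()]["clinical_note"]
    print(notes.head())  # returns synthetic rows, never real patient records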

On top of this, Differential Privacy guarantees can be built into the LLM fine-tuning process itself through a single fit parameter, ensuring that no personal data is memorized by the fine-tuned model. This works for all LLMs in the GPT-2, Llama 2, and Mistral architecture families, and Sarus automatically provisions the required compute resources with Kubernetes, as sketched below.
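
As a sketch only: the snippet below shows how such a fit parameter might be exposed. The sarus.llm.Trainer class and the dp and target_epsilon arguments are hypothetical names chosen for illustration, not the documented SDK; the point is that DP training is a single switch rather than a separate pipeline.

    from sarus.llm import Trainer  # hypothetical module and class names

    # Pick any model from the supported architecture families
    # (GPT-2, Llama 2, Mistral); Sarus provisions the compute via Kubernetes.
    trainer = Trainer(model="mistralai/Mistral-7B-v0.1")

    trainer.fit(
        dataset=notes,       # preprocessed clean-room data from the sketch above
        dp=True,             # single switch: fine-tune with DP guarantees
        target_epsilon=4.0,  # assumed knob controlling the privacy budget
    )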

By protecting private data across all LLM workflows, SarusLLM lets enterprises maximize the ROI of Generative AI: every data asset can be put to work with LLMs, in full security.
Features

Sarus combines a unique set of features for privacy-safe LLM workflows

In action

Case studies

Protect patient data when building an LLM-based synthetic records generator

A guide to preprocessing patient diagnosis data and fitting an LLM with SarusLLM, both with and without DP.

Learn more
Protect patient data when building a GenAI-based medical coding model

A notebook in which patient diagnosis data is preprocessed and a classification LLM is fine-tuned with DP to protect patient privacy.

Learn more
