
SplitHappens: Combining Split Learning and Function Secret Sharing to Address Modern Privacy Attacks

14.11.2025 10:16

 

The paper “Split Happens: Combating Advanced Threats with Split Learning and Function Secret Sharing” by Tanveer Khan, Mindaugas Budzys and Antonis Michalas studies information leakage in split learning and presents a protocol that combines split learning with function secret sharing. The aim is to keep both training data and labels private while still allowing collaborative model training.

 

Split learning and privacy problems

In split learning (SL), a neural network is divided between a client and a server. The client runs the first layers, sends the intermediate activations to the server, and the server completes the forward and backward passes. This approach reduces the computational load on the client and avoids sending raw input data to the server.
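The basic split can be sketched in a few lines. This is a toy illustration, not code from the paper: the layer sizes and the single dense layer on each side are invented for the example.

```python
import random

random.seed(0)

def matvec(W, v):
    """Plain matrix-vector product; W is a list of rows."""
    return [sum(w * a for w, a in zip(row, v)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

# Toy model: the client holds the first layer, the server the rest.
# Layer sizes are illustrative, not taken from the paper.
W_client = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]
W_server = [[random.gauss(0, 1) for _ in range(3)] for _ in range(2)]

def client_forward(x):
    """Client runs the first layer; only activations leave the device."""
    return relu(matvec(W_client, x))

def server_forward(acts):
    """Server finishes the forward pass without seeing the raw input."""
    return matvec(W_server, acts)

x = [random.gauss(0, 1) for _ in range(4)]  # raw input stays on the client
activations = client_forward(x)             # sent to the server
output = server_forward(activations)        # server-side prediction
```

Only `activations` crosses the network, which is exactly the value the attacks discussed below try to exploit.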

In recent years, several works have shown that this alone is not sufficient for privacy. Attacks such as model inversion and label inference can recover inputs or labels from the exchanged activations and gradients. More recent attacks, including the Pseudo Client Attack (PCAT), the Feature-Oriented Reconstruction Attack (FORA) and Feature Sniffer, further show that the information exchanged in SL can be used to reconstruct or infer sensitive data.

Existing defences use techniques such as homomorphic encryption or differential privacy. These methods can protect data but often introduce considerable computational or communication overheads, or affect model accuracy. Other works combine SL with function secret sharing but keep the final layer and labels on the server, which still leaves a source of leakage.

Function Secret Sharing

Function Secret Sharing (FSS) allows a function to be represented by two keys, held by two separate servers. Each server can evaluate its share of the function on public input data, and the sum of both outputs equals the result of the original function. A single key alone does not reveal information about the function.

FSS has been used in privacy preserving machine learning to implement non-linear layers and other operations with lower communication costs than some multi-party computation approaches. At the same time, applying FSS to an entire model is still computationally demanding.
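The additive-output property can be demonstrated with the trivial FSS construction, in which the function's full truth table is secret-shared. Real schemes such as distributed point functions use keys much smaller than the domain; this sketch (domain size, modulus and example function all chosen for illustration) only shows the correctness and privacy property described above.

```python
import random

MOD = 2 ** 32
random.seed(1)

def fss_gen(f, domain_size):
    """Trivial FSS: additively share f's truth table mod 2**32.
    Each key alone is a uniformly random table and reveals nothing."""
    key0 = [random.randrange(MOD) for _ in range(domain_size)]
    key1 = [(f(x) - key0[x]) % MOD for x in range(domain_size)]
    return key0, key1

def fss_eval(key, x):
    """Each server evaluates its key share on the public input x."""
    return key[x]

# Example function: a comparison of the kind used for ReLU evaluation.
f = lambda x: 1 if x >= 8 else 0
k0, k1 = fss_gen(f, 16)

x = 11                              # public input
share0 = fss_eval(k0, x)            # server 0's output share
share1 = fss_eval(k1, x)            # server 1's output share
result = (share0 + share1) % MOD    # shares sum to f(x)
```

Each server's key is a list of uniformly random values, so neither server learns the threshold inside `f`; only the sum of the two evaluations recovers the function's output.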

The SplitHappens protocol

Khan, Budzys and Michalas introduce SplitHappens, an FSS-based U-shaped split learning protocol.

The model is divided into three parts:

  1. Client-side initial layers
  2. Server-side middle layers, executed jointly by two non-colluding servers using FSS
  3. Client-side output layer, which produces the final prediction and computes the loss

In this structure, the client keeps all raw inputs and labels.
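The three-part layout can be sketched end to end. This is a simplified toy, not the paper's protocol: the layer sizes are invented, the middle-layer weights are left public, and the FSS-based non-linearities are omitted, so only the additive-share property of the server-side computation is shown.

```python
import random

random.seed(2)

def matvec(W, v):
    return [sum(w * a for w, a in zip(row, v)) for row in W]

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def share(v):
    """Split a vector into two additive shares; each alone looks random."""
    r = [random.gauss(0, 10) for _ in v]
    return r, [a - b for a, b in zip(v, r)]

# Illustrative layer sizes. The middle-layer weights are public here for
# simplicity; in the paper they are secret-shared as well, which requires
# Beaver triples for the resulting share-times-share multiplications.
W_first = rand_matrix(3, 4)   # 1. client-side initial layer
W_mid   = rand_matrix(3, 3)   # 2. server-side middle layer
W_last  = rand_matrix(2, 3)   # 3. client-side output layer

x = [random.gauss(0, 1) for _ in range(4)]        # raw input, stays local

acts = [max(0.0, a) for a in matvec(W_first, x)]  # client-side layers
s0, s1 = share(acts)                              # shares go to the servers
out0 = matvec(W_mid, s0)                          # server 0, locally
out1 = matvec(W_mid, s1)                          # server 1, locally
mid = [a + b for a, b in zip(out0, out1)]         # client reconstructs
pred = matvec(W_last, mid)                        # client output layer
```

Because the middle layer is linear, each server can apply it to its share locally, and the shares of the result still sum to the true intermediate value; labels never appear anywhere in the server-side steps.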

Protocol flow

The training procedure can be summarised as follows.

  • The client runs the first layers on local data and obtains activation maps. These activations are masked with random values before being sent to the two servers.

  • Each server holds additive shares of the masked activations and model parameters. They evaluate the middle layers using FSS for non-linear operations (such as ReLU) and Beaver triples for linear layers.

  • At the end of the server-side part, each server sends its share of the intermediate result back to the client.

  • The client reconstructs this value, applies the output layer and computes the loss on the local labels.

  • Gradients from the output layer are split into shares and sent back to the servers, which update their weights, while the client updates its own layers.

Because labels and the output layer remain on the client, the servers do not see predictions or losses in plaintext.
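The Beaver triples used for the linear layers can be illustrated with a single secret multiplication. A minimal sketch over a prime field, assuming a trusted dealer for the triple; the modulus and input values are chosen for illustration only.

```python
import random

P = 2 ** 61 - 1   # prime modulus for the arithmetic shares
random.seed(3)

def share(v):
    """Additively share v mod P between the two servers."""
    s0 = random.randrange(P)
    return s0, (v - s0) % P

# Offline phase: a Beaver triple (a, b, c) with c = a * b, secret-shared.
a, b = random.randrange(P), random.randrange(P)
c = (a * b) % P
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

# Secret inputs, e.g. a weight and an activation, already shared.
x, y = 42, 1337
x0, x1 = share(x); y0, y1 = share(y)

# Each server masks its input shares; the masked sums are opened.
d = (x0 - a0 + x1 - a1) % P   # d = x - a, safe to reveal
e = (y0 - b0 + y1 - b1) % P   # e = y - b, safe to reveal

# Each server computes its share of the product locally.
z0 = (c0 + d * b0 + e * a0 + d * e) % P   # server 0 adds the d*e term
z1 = (c1 + d * b1 + e * a1) % P           # server 1
product = (z0 + z1) % P                   # reconstructs to x * y = 56154
```

Only the masked values `d` and `e` are ever opened; since `a` and `b` are uniformly random one-time masks, they reveal nothing about `x` or `y`.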

Threat model and attack coverage

The protocol assumes a semi-honest threat model where an adversary can corrupt at most one of the two servers. The adversary follows the protocol but attempts to infer information from the observed data. It can also use auxiliary data to train shadow models.

The paper analyses the protocol against:

  • Label inference attacks (LIA)

  • Model inversion attacks (MIA)

  • Recent attacks on split learning, including PCAT, FORA and Feature Sniffer

The key points in the security discussion are:

  • Servers only receive masked activations and masked gradients. Without the random masks, which are generated as part of the FSS key material, these values reveal nothing about the underlying data.

  • Labels and output layer values stay on the client, so server-side observations do not directly contain label information.
  • The attack strategies considered in the paper rely on access to unmasked intermediate values or output layer information, which SplitHappens does not expose.

Under the stated assumptions, the authors argue that their protocol prevents the specific attack classes listed above.

 

Experimental evaluation

To study the practical impact of the protocol, the authors compare SplitHappens with:

  • Plaintext baselines: local training, vanilla split learning and U-shaped split learning without FSS

  • AriaNN: an FSS-based protocol that protects the entire model

  • Make Split Not Hijack (MSnH): an approach that combines SL with FSS on the server-side layers

Experiments are carried out on three datasets:

  • MNIST

  • CIFAR

  • Fashion-MNIST (FMNIST)

using two convolutional neural network architectures with different numbers of layers.

Accuracy

The results summarised in Table II of the paper show:

  • Public (non-FSS) vanilla and U-shaped SL models reach accuracy similar to a local model trained in plaintext.

  • SplitHappens achieves accuracy comparable to AriaNN and MSnH on MNIST and FMNIST.

  • On CIFAR, SplitHappens has accuracy similar to MSnH and slightly lower than AriaNN.

Training time and communication

The authors report that all FSS-based protocols incur higher training time and communication cost than the plaintext baselines, which is expected given the cryptographic operations. At the same time, SplitHappens reduces training time and communication compared to running the whole model under FSS, as AriaNN does. Its communication and complexity are described as comparable to MSnH, while maintaining the same level of accuracy in the evaluated tests.

Summary

The paper by Tanveer Khan, Mindaugas Budzys and Antonis Michalas presents a split learning protocol that uses function secret sharing and a U-shaped model layout to limit information leakage towards the servers. The client keeps both input data and labels, and the servers work only on masked values. The analysis covers several recent attacks on split learning and shows how the protocol changes the available information for an adversary.

The experimental results on MNIST, CIFAR and FMNIST indicate that the protocol can reach accuracy close to existing FSS-based methods, with lower cost than protecting the entire model and with security properties that address a wider set of attacks than basic split learning.

📜 The full paper is available at Zenodo.
