Federated Learning Workbench

Federated learning for the MNIST data set. The premise is simple: 10 people each know how to write a number. Together they want to develop an algorithm that can recognize handwritten digits, but no one wants to share their number. Federated learning to the rescue. This page presents a demo protocol that does just this. The protocol has two main pieces:

  1. Retrieve the current model weights and train the model on the local data. For this exposition we use a convolutional neural network; for now we only need to fetch the weights for the model (a sketch follows this list).
  2. Share the delta from the training using a secure sharing scheme. Initially this is based on multi-party computation (MPC); see the second sketch after this list.
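
To make piece 1 concrete, here is a minimal sketch of a local update round, assuming PyTorch and a client that receives the global weights as a state dict. The `SmallCNN` layout and the `local_update` helper are illustrative assumptions, not this repo's actual model or API.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Illustrative stand-in for the workbench's CNN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),             # 28x28 -> 14x14
            nn.Flatten(),
            nn.Linear(8 * 14 * 14, 10),  # 10 digit classes
        )

    def forward(self, x):
        return self.net(x)

def local_update(global_state, loader, epochs=1, lr=0.01):
    """Fetch the global weights, train locally, return only the delta."""
    model = SmallCNN()
    model.load_state_dict(global_state)  # piece 1: fetch weights
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:    # local MNIST digits only
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
    # The delta is what leaves the client; the raw images never do.
    return {k: model.state_dict()[k] - global_state[k] for k in global_state}
```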
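
And a matching sketch of piece 2, assuming each weight delta is flattened into a NumPy array: every client splits its delta into additive shares, and only the sum of all shares is ever reconstructed. The names `share_delta` and `aggregate` are hypothetical.

```python
import numpy as np

def share_delta(delta, n_parties, rng):
    """Split one client's delta into n additive shares that sum back to it."""
    shares = [rng.normal(size=delta.shape) for _ in range(n_parties - 1)]
    shares.append(delta - sum(shares))  # last share makes the sum exact
    return shares

def aggregate(all_shares):
    """Each party sums the i-th shares it holds; the partial sums combine
    into the total delta without exposing any single client's delta."""
    n_parties = len(all_shares[0])
    partials = [sum(client[i] for client in all_shares)
                for i in range(n_parties)]
    return sum(partials)

rng = np.random.default_rng(0)
deltas = [rng.normal(size=4) for _ in range(3)]                # 3 clients
shared = [share_delta(d, n_parties=3, rng=rng) for d in deltas]
assert np.allclose(aggregate(shared), sum(deltas))             # sum survives
```

Real MPC protocols draw shares uniformly over a finite ring, which makes each share information-theoretically hiding; the Gaussian masks above just keep the sketch short.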

This repository is a workbench where I try out different approaches to training, aggregation, and evaluation for models trained privately with federated learning.

Resources:

Future work

  • Implement secure aggregation with differential privacy (a sketch of the DP step follows this list). FL
  • Federated evaluation FL
  • Find models that better suit a federated setting FL
  • Use a blockchain as the central server BCFL (potentially look into tokenomics BCFL)
  • Improve the transport layer for model weights. Eng.
  • Implement model agnosticism. Eng.
  • Use Livebook to evaluate on the backend. Eng.
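
As a rough illustration of the differential-privacy item above, one hedged sketch of what the aggregation step could look like: clip every client delta to a fixed norm, sum, and add Gaussian noise so the released average hides any single contribution. `dp_aggregate` and its parameters (`clip_norm`, `noise_mult`) are placeholders, not tuned values from this workbench.

```python
import numpy as np

def dp_aggregate(deltas, clip_norm=1.0, noise_mult=0.5, seed=0):
    """Noisy mean of clipped client deltas (Gaussian mechanism)."""
    rng = np.random.default_rng(seed)
    clipped = [d * min(1.0, clip_norm / max(np.linalg.norm(d), 1e-12))
               for d in deltas]          # bound each client's influence
    noisy_sum = np.sum(clipped, axis=0)
    noisy_sum = noisy_sum + rng.normal(scale=noise_mult * clip_norm,
                                       size=noisy_sum.shape)
    return noisy_sum / len(deltas)       # server releases only this
```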

My expectation is that most of these ideas already have publications behind them.