Py's tensorflow-federated: a detailed guide to the introduction, installation, and usage of tensorflow-federated

Table of contents

Introduction to tensorflow-federated

Installation of tensorflow-federated

How to use tensorflow-federated

1. Basic case

Introduction to tensorflow-federated

TensorFlow Federated (TFF) is an open-source framework for machine learning and other computations on decentralized data. TFF was developed to facilitate open research and experimentation with Federated Learning (FL), a machine learning approach in which a shared global model is trained across many participating clients that keep their training data locally. For example, FL has been used to train predictive models for mobile keyboards without uploading sensitive typing data to a server.
TFF enables developers to apply the included federated learning algorithms to their own models and data, as well as to experiment with new algorithms. The building blocks provided by TFF can also be used to implement non-learning computations, such as federated analytics over decentralized data.
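Before looking at the API layers, the core FL idea can be made concrete with a short, dependency-free sketch of Federated Averaging (FedAvg), the canonical algorithm behind TFF's learning APIs. Everything below (`local_sgd_step`, `fedavg_round`, the toy linear model) is illustrative only and uses no TFF code:

```python
# A minimal, dependency-free sketch of one round of Federated Averaging
# (FedAvg). Real TFF code would use tff.learning instead of this
# hand-rolled loop; this only illustrates the concept.

def local_sgd_step(weights, client_data, lr=0.1):
    """One local gradient step for a 1-D linear model y = w * x."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in client_data) / len(client_data)
    return w - lr * grad

def fedavg_round(global_w, clients):
    """Each client trains locally; the server averages the resulting
    weights. Raw client data never leaves the client."""
    client_weights = [local_sgd_step(global_w, data) for data in clients]
    return sum(client_weights) / len(client_weights)

# Three clients, each holding private (x, y) pairs drawn from y = 2x.
clients = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = fedavg_round(w, clients)
print(round(w, 2))  # converges toward 2.0
```

Only model weights cross the network here, which is the essential privacy property FL is built around.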
The interface of TFF is divided into two layers:

  • Federated Learning (FL) API: The learning layer provides a set of high-level interfaces that allow developers to apply the included federated training and evaluation implementations to their existing TensorFlow models.
  • Federated Core (FC) API: At the core of the system is a set of lower-level interfaces for concisely expressing novel federated algorithms by combining TensorFlow with distributed communication operators in a strongly typed functional programming environment. This layer is also the foundation upon which tff.learning is built.

TFF enables developers to declaratively represent federated computations so they can be deployed to different runtime environments. A stand-alone simulation runtime is included in TFF for experiments.

Official GitHub repository: GitHub - tensorflow/federated: A framework for implementing federated learning

Installation of tensorflow-federated

pip install tensorflow-federated

How to use tensorflow-federated

1. Basic case

Reference article: federated/ at main · tensorflow/federated · GitHub

import asyncio
import os.path
from typing import Sequence, Tuple, Union

from absl import app
from absl import flags
import tensorflow as tf
import tensorflow_federated as tff

from tensorflow_federated.examples.program import computations
from tensorflow_federated.examples.program import program_logic

_OUTPUT_DIR = flags.DEFINE_string('output_dir', None, 'The output path.')

def _filter_metrics(path: Tuple[Union[str, int], ...]) -> bool:
  if path == (computations.METRICS_TOTAL_SUM,):
    return True
  return False
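The predicate's behavior can be checked in plain Python, independently of TFF. `METRICS_TOTAL_SUM` below is a stand-in string for `computations.METRICS_TOTAL_SUM` (an assumed value, for illustration only):

```python
from typing import Tuple, Union

METRICS_TOTAL_SUM = 'total_sum'  # stand-in for computations.METRICS_TOTAL_SUM

def filter_metrics(path: Tuple[Union[str, int], ...]) -> bool:
    # Release a value only when its structure path is exactly
    # (METRICS_TOTAL_SUM,); everything else is filtered out.
    return path == (METRICS_TOTAL_SUM,)

print(filter_metrics((METRICS_TOTAL_SUM,)))  # True
print(filter_metrics(('loss', 0)))           # False
print(filter_metrics(()))                    # False
```

Note the comparison is against a one-element tuple, so a bare string or a longer path is rejected.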

def main(argv: Sequence[str]) -> None:
  if len(argv) > 1:
    raise app.UsageError('Too many command-line arguments.')

  total_rounds = 10
  number_of_clients = 3

  # Configure the platform-specific components; in this example, the TFF native
  # platform is used, but this example could use any platform that conforms to
  # the appropriate abstract interfaces.

  # Create a context in which to execute the program logic. Note: the exact
  # factory function depends on the installed TFF version.
  context = tff.backends.native.create_async_local_cpp_execution_context()
  context = tff.program.NativeFederatedContext(context)

  # Create data sources that are compatible with the context and computations.
  to_int32 = lambda x: tf.cast(x, tf.int32)
  datasets = [tf.data.Dataset.range(10).map(to_int32)] * 3
  train_data_source = tff.program.DatasetDataSource(datasets)
  evaluation_data_source = tff.program.DatasetDataSource(datasets)
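Conceptually, a dataset data source holds one dataset per client and hands a subset of them to the program logic each round. The toy class below mirrors that idea in plain Python; `ToyDatasetDataSource` and its `select` method are illustrative names, not the TFF API (the real `tff.program.DatasetDataSource` is iterator-based and asynchronous):

```python
import random

# Toy stand-in for a dataset-backed data source: it stores one dataset per
# client and, each round, yields `number_of_clients` of them (sampled
# uniformly with replacement here, purely for illustration).
class ToyDatasetDataSource:
    def __init__(self, datasets):
        self._datasets = list(datasets)

    def select(self, number_of_clients):
        return [random.choice(self._datasets)
                for _ in range(number_of_clients)]

datasets = [list(range(10))] * 3   # three identical toy client datasets
source = ToyDatasetDataSource(datasets)
round_data = source.select(number_of_clients=2)
print(len(round_data))  # 2
```

This is why the program only needs `number_of_clients` later: the data source, not the training loop, decides which client datasets participate in a round.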

  # Create computations that are compatible with the context and data sources.
  initialize = computations.initialize
  train = computations.train
  evaluation = computations.evaluation

  # Configure the platform-agnostic components.

  # Create release managers with access to customer storage.
  train_metrics_managers = [tff.program.LoggingReleaseManager()]
  evaluation_metrics_managers = [tff.program.LoggingReleaseManager()]
  model_output_manager = tff.program.LoggingReleaseManager()

  if _OUTPUT_DIR.value is not None:
    summary_dir = os.path.join(_OUTPUT_DIR.value, 'summary')
    tensorboard_manager = tff.program.TensorBoardReleaseManager(summary_dir)
    train_metrics_managers.append(tensorboard_manager)

    csv_path = os.path.join(_OUTPUT_DIR.value, 'evaluation_metrics.csv')
    csv_manager = tff.program.CSVFileReleaseManager(csv_path)
    evaluation_metrics_managers.append(csv_manager)

  # Group the metrics release managers; program logic may accept a single
  # release manager to keep the implementation of the program logic simpler
  # and easier to maintain, and the program can use a
  # `tff.program.GroupingReleaseManager` to release values to multiple
  # destinations.
  # Filter the metrics before they are released; the program can use a
  # `tff.program.FilteringReleaseManager` to limit the values that are
  # released by the program logic. If a formal privacy guarantee is not
  # required, it may be ok to release all the metrics.
  train_metrics_manager = tff.program.FilteringReleaseManager(
      tff.program.GroupingReleaseManager(train_metrics_managers),
      _filter_metrics,
  )
  evaluation_metrics_manager = tff.program.FilteringReleaseManager(
      tff.program.GroupingReleaseManager(evaluation_metrics_managers),
      _filter_metrics,
  )
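The composition pattern at work here can be sketched without TFF at all. The classes below only mirror the pattern conceptually; the real TFF release managers are asynchronous and `tff.program.FilteringReleaseManager` filters by structure path rather than by a whole key:

```python
# A dependency-free sketch of release-manager composition: a grouping
# manager fans a released value out to several destinations, and a
# filtering manager drops values whose key a predicate rejects.
class LoggingReleaseManager:
    def __init__(self):
        self.released = []

    def release(self, value, key):
        self.released.append((key, value))

class GroupingReleaseManager:
    def __init__(self, managers):
        self._managers = managers

    def release(self, value, key):
        for m in self._managers:
            m.release(value, key)

class FilteringReleaseManager:
    def __init__(self, manager, predicate):
        self._manager = manager
        self._predicate = predicate

    def release(self, value, key):
        # Only forward values whose key passes the filter.
        if self._predicate(key):
            self._manager.release(value, key)

logger_a, logger_b = LoggingReleaseManager(), LoggingReleaseManager()
manager = FilteringReleaseManager(
    GroupingReleaseManager([logger_a, logger_b]),
    lambda key: key == ('total_sum',),
)
manager.release(42, key=('total_sum',))   # released to both loggers
manager.release(7, key=('debug', 0))      # filtered out
print(logger_a.released)  # [(('total_sum',), 42)]
```

Because the program logic sees a single `release` entry point, swapping destinations (console, TensorBoard, CSV) never requires touching the training loop.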

  # Create a program state manager with access to platform storage.
  program_state_manager = None

  if _OUTPUT_DIR.value is not None:
    program_state_dir = os.path.join(_OUTPUT_DIR.value, 'program_state')
    program_state_manager = tff.program.FileProgramStateManager(
        program_state_dir)
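The point of a program state manager is fault tolerance: state is saved under increasing version numbers so an interrupted program can resume from the latest one. The toy class below illustrates that idea in plain Python; `ToyFileProgramStateManager` and its method names are illustrative, not the (asynchronous) TFF API:

```python
import json
import os
import tempfile

# Toy, synchronous stand-in for a file-backed program state manager: it
# writes versioned state files and reloads the highest version on restart.
class ToyFileProgramStateManager:
    def __init__(self, root_dir):
        self._root = root_dir
        os.makedirs(root_dir, exist_ok=True)

    def save(self, state, version):
        path = os.path.join(self._root, f'state_{version}.json')
        with open(path, 'w') as f:
            json.dump(state, f)

    def load_latest(self):
        versions = sorted(
            int(name.split('_')[1].split('.')[0])
            for name in os.listdir(self._root)
        )
        if not versions:
            return None, 0
        latest = versions[-1]
        with open(os.path.join(self._root, f'state_{latest}.json')) as f:
            return json.load(f), latest

with tempfile.TemporaryDirectory() as tmp:
    manager = ToyFileProgramStateManager(tmp)
    manager.save({'round': 1, 'w': 0.5}, version=1)
    manager.save({'round': 2, 'w': 0.7}, version=2)
    state, version = manager.load_latest()
    print(version, state['round'])  # 2 2
```

A restarted program would call `load_latest` first and skip the rounds already completed.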

  # Execute the program logic; the program logic is abstracted into a separate
  # function to illustrate the boundary between the program and the program
  # logic. This program logic is declared as an async def and needs to be
  # executed in an asyncio event loop.
  asyncio.run(
      program_logic.train_federated_model(
          initialize=initialize,
          train=train,
          train_data_source=train_data_source,
          evaluation=evaluation,
          evaluation_data_source=evaluation_data_source,
          total_rounds=total_rounds,
          number_of_clients=number_of_clients,
          train_metrics_manager=train_metrics_manager,
          evaluation_metrics_manager=evaluation_metrics_manager,
          model_output_manager=model_output_manager,
          program_state_manager=program_state_manager,
      )
  )

if __name__ == '__main__':
  app.run(main)
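The `program_logic.train_federated_model` coroutine itself is not shown in this excerpt (it lives alongside the example in the TFF repository). A heavily simplified, TFF-free sketch of such async program logic might look like the following; every name and signature here is illustrative:

```python
import asyncio

# A dependency-free sketch of async federated program logic in the spirit
# of `program_logic.train_federated_model`: initialize the server state,
# then run `total_rounds` training rounds, each over data selected for
# `number_of_clients` clients.
async def train_federated_model(*, initialize, train, total_rounds,
                                number_of_clients, select_data):
    state = initialize()
    metrics_history = []
    for round_num in range(1, total_rounds + 1):
        round_data = select_data(number_of_clients)
        # In real TFF program logic, `train` is a federated computation and
        # metrics would go through release managers; here it is a plain
        # function returning (state, metrics).
        state, metrics = train(state, round_data)
        metrics_history.append((round_num, metrics))
    return state, metrics_history

# Toy plumbing to exercise the loop.
state, history = asyncio.run(
    train_federated_model(
        initialize=lambda: 0,
        train=lambda s, data: (s + len(data), {'clients': len(data)}),
        total_rounds=10,
        number_of_clients=3,
        select_data=lambda n: list(range(n)),
    )
)
print(state)  # 30
```

Declaring the logic as `async def`, as the example does, is what allows saving program state and releasing metrics to overlap with training in the real runtime.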

Tags: TensorFlow

Posted by El Ornitorrico on Thu, 22 Sep 2022 22:33:48 +0530