Python's tensorflow-federated: a detailed guide to the introduction, installation, and usage of tensorflow-federated
Introduction to tensorflow-federated
TensorFlow Federated (TFF) is an open-source framework for machine learning and other computations on decentralized data. TFF was developed to facilitate open research and experimentation with federated learning (FL), an approach to machine learning where a shared global model is trained across many participating clients that keep their training data locally. For example, FL has been used to train prediction models for mobile keyboards without uploading sensitive typing data to a server.
TFF enables developers to apply the included federated learning algorithms to their own models and data, as well as to experiment with novel algorithms. The building blocks provided by TFF can also be used to implement non-learning computations, such as aggregated analytics over decentralized data.
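To make the non-learning use case concrete, here is a plain-Python sketch of the pattern such aggregated analytics follow; no TFF APIs are used and all names are illustrative. Each client computes a local summary of its private data, and only those summaries travel to the server for aggregation.

```python
# Illustrative sketch of federated analytics: clients share only local
# summaries (here, word-count histograms), never their raw data.
from collections import Counter


def client_summary(local_texts):
    """Runs on each client: summarize private data locally."""
    counts = Counter()
    for text in local_texts:
        counts.update(text.split())
    return counts


def server_aggregate(summaries):
    """Runs on the server: combine client summaries only."""
    total = Counter()
    for summary in summaries:
        total.update(summary)
    return total


# Each client's raw text stays "on device"; only counts reach the server.
client_data = [
    ["hello world", "hello tff"],
    ["federated learning", "hello federated"],
]
summaries = [client_summary(texts) for texts in client_data]
global_counts = server_aggregate(summaries)
print(global_counts["hello"])  # combined count across all clients → 3
```

In a real deployment TFF would handle the placement (clients vs. server) and the communication; the sketch only shows the division of labor that makes raw data never leave the client.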
The interface of TFF is divided into two layers:
- Federated Learning (FL) API: The learning layer provides a set of high-level interfaces that allow developers to apply the included federated training and evaluation implementations to their existing TensorFlow models.
- Federated Core (FC) API: At the core of the system is a set of low-level interfaces for concisely expressing novel federated algorithms by combining TensorFlow with distributed communication operators in a strongly typed functional programming environment. This layer is also the foundation on which tff.learning is built.
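As a rough illustration of what these layers express, the following is a minimal plain-Python sketch of one round of federated averaging, the idea behind the FedAvg-style training built into tff.learning. No TFF APIs are used and all names are illustrative: each client runs local training on its private examples, and the server takes a weighted average of the resulting client models.

```python
# Minimal sketch of one FedAvg-style round on a 1-D linear model y = w * x.
# Plain Python only; this illustrates the idea, not the TFF API.

def client_update(w, examples, lr=0.1, epochs=5):
    """Local SGD on one client's private (x, y) examples."""
    for _ in range(epochs):
        for x, y in examples:
            grad = 2 * (w * x - y) * x  # d/dw of the squared error
            w -= lr * grad
    return w


def server_average(client_weights, client_sizes):
    """Weighted average of client models: one federated round."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total


global_w = 0.0
client_data = [
    [(1.0, 2.0), (2.0, 4.0)],  # client A's data, consistent with w = 2
    [(1.0, 2.0)],              # client B's data
]
local_ws = [client_update(global_w, data) for data in client_data]
client_sizes = [len(data) for data in client_data]
global_w = server_average(local_ws, client_sizes)  # moves toward w = 2
```

Only the trained weights (not the examples) are sent to the server; repeating the round with the new `global_w` broadcast back to clients is the full training loop that tff.learning packages up.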
TFF enables developers to declaratively represent federated computations so they can be deployed to different runtime environments. A stand-alone simulation runtime is included in TFF for experiments.
Official website: http://tensorflow.org/federated
GitHub repository: https://github.com/tensorflow/federated (A framework for implementing federated learning)
Installation of tensorflow-federated
pip install tensorflow-federated
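After installing, the TFF getting-started docs suggest a one-line smoke test; assuming the installation succeeded, this should print a greeting from a trivial federated computation:

```shell
# Verify the installation with TFF's "Hello, World" federated computation.
python -c "import tensorflow_federated as tff; print(tff.federated_computation(lambda: 'Hello, World!')())"
```

Note that tensorflow-federated pins specific TensorFlow versions, so installing it in a fresh virtual environment avoids dependency conflicts.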
How to use tensorflow-federated
1. Basic case
Reference: the example federated program program.py in the tensorflow/federated repository on GitHub.
```python
import asyncio
import os.path
from typing import Sequence, Tuple, Union

from absl import app
from absl import flags
import tensorflow as tf
import tensorflow_federated as tff

from tensorflow_federated.examples.program import computations
from tensorflow_federated.examples.program import program_logic

_OUTPUT_DIR = flags.DEFINE_string('output_dir', None, 'The output path.')


def _filter_metrics(path: Tuple[Union[str, int], ...]) -> bool:
  if path == (computations.METRICS_TOTAL_SUM,):
    return True
  else:
    return False


def main(argv: Sequence[str]) -> None:
  if len(argv) > 1:
    raise app.UsageError('Too many command-line arguments.')

  total_rounds = 10
  number_of_clients = 3

  # Configure the platform-specific components; in this example, the TFF
  # native platform is used, but this example could use any platform that
  # conforms to the appropriate abstract interfaces.

  # Create a context in which to execute the program logic.
  context = tff.backends.native.create_local_async_cpp_execution_context()
  context = tff.program.NativeFederatedContext(context)
  tff.framework.set_default_context(context)

  # Create data sources that are compatible with the context and computations.
  to_int32 = lambda x: tf.cast(x, tf.int32)
  datasets = [tf.data.Dataset.range(10).map(to_int32)] * 3
  train_data_source = tff.program.DatasetDataSource(datasets)
  evaluation_data_source = tff.program.DatasetDataSource(datasets)

  # Create computations that are compatible with the context and data sources.
  initialize = computations.initialize
  train = computations.train
  evaluation = computations.evaluation

  # Configure the platform-agnostic components.

  # Create release managers with access to customer storage.
  train_metrics_managers = [tff.program.LoggingReleaseManager()]
  evaluation_metrics_managers = [tff.program.LoggingReleaseManager()]
  model_output_manager = tff.program.LoggingReleaseManager()

  if _OUTPUT_DIR.value is not None:
    summary_dir = os.path.join(_OUTPUT_DIR.value, 'summary')
    tensorboard_manager = tff.program.TensorBoardReleaseManager(summary_dir)
    train_metrics_managers.append(tensorboard_manager)

    csv_path = os.path.join(_OUTPUT_DIR.value, 'evaluation_metrics.csv')
    csv_manager = tff.program.CSVFileReleaseManager(csv_path)
    evaluation_metrics_managers.append(csv_manager)

  # Group the metrics release managers; if the program logic accepts a single
  # release manager (to keep its implementation simpler and easier to
  # maintain), the program can use a `tff.program.GroupingReleaseManager` to
  # release values to multiple destinations.
  #
  # Filter the metrics before they are released; the program can use a
  # `tff.program.FilteringReleaseManager` to limit the values that are
  # released by the program logic. If a formal privacy guarantee is not
  # required, it may be ok to release all the metrics.
  train_metrics_manager = tff.program.FilteringReleaseManager(
      tff.program.GroupingReleaseManager(train_metrics_managers),
      _filter_metrics)
  evaluation_metrics_manager = tff.program.FilteringReleaseManager(
      tff.program.GroupingReleaseManager(evaluation_metrics_managers),
      _filter_metrics)

  # Create a program state manager with access to platform storage.
  program_state_manager = None
  if _OUTPUT_DIR.value is not None:
    program_state_dir = os.path.join(_OUTPUT_DIR.value, 'program_state')
    program_state_manager = tff.program.FileProgramStateManager(
        program_state_dir)

  # Execute the program logic; the program logic is abstracted into a separate
  # function to illustrate the boundary between the program and the program
  # logic. This program logic is declared as an `async def` and needs to be
  # executed in an asyncio event loop.
  asyncio.run(
      program_logic.train_federated_model(
          initialize=initialize,
          train=train,
          train_data_source=train_data_source,
          evaluation=evaluation,
          evaluation_data_source=evaluation_data_source,
          total_rounds=total_rounds,
          number_of_clients=number_of_clients,
          train_metrics_manager=train_metrics_manager,
          evaluation_metrics_manager=evaluation_metrics_manager,
          model_output_manager=model_output_manager,
          program_state_manager=program_state_manager))


if __name__ == '__main__':
  app.run(main)
```