
Guide to TensorFlow Extended (TFX): End-to-End Platform for Deploying Production ML Pipelines

Aishwarya Verma

Ever since Google open-sourced TensorFlow, its use in deep learning has grown tremendously; it is widely used in both research and production for authoring ML algorithms. Though flexible, TensorFlow does not by itself provide an end-to-end production system. Sibyl, Google's earlier system, offered end-to-end facilities but lacked flexibility. Google therefore created TensorFlow Extended (TFX), a production-scale machine learning platform built on TensorFlow that takes advantage of both frameworks.

TFX contains a sequence of components for implementing ML pipelines that are scalable and deliver high performance on machine learning tasks. These components can also be used independently. TFX pipelines can be orchestrated with Apache Airflow and Kubeflow Pipelines. TFX components interact with ML Metadata, a backend that keeps a record of component runs, input and output artifacts, and runtime configuration. This metadata backend enables advanced functionality such as experiment tracking and warm-starting/resuming ML models from previous runs. Compatible versions of TFX can be found here.
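To make the role of the metadata backend concrete, here is a hedged, plain-Python sketch of the idea; the names `SimpleMetadataStore`, `record_run` and `latest_output` are invented for illustration, and the real ml-metadata library persists this information in a database rather than a Python list:

```python
import time

# Hypothetical, simplified stand-in for the ML Metadata backend.
class SimpleMetadataStore:
    def __init__(self):
        self.runs = []

    def record_run(self, component, inputs, outputs, config):
        # Each component execution is logged together with its input/output
        # artifacts and runtime configuration.
        self.runs.append({
            'component': component,
            'inputs': inputs,
            'outputs': outputs,
            'config': config,
            'timestamp': time.time(),
        })

    def latest_output(self, component):
        # This lookup is what enables warm-starting/resuming: a later run can
        # fetch the artifacts produced by a previous run of the same component.
        for run in reversed(self.runs):
            if run['component'] == component:
                return run['outputs']
        return None

store = SimpleMetadataStore()
store.record_run('Trainer', inputs=['examples'], outputs=['model_v1'],
                 config={'num_steps': 10000})
print(store.latest_output('Trainer'))  # ['model_v1']
```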

TFX’s standard components can be used in a pipeline or individually and provide the functionality needed to get started with machine learning. The diagram below indicates the data flow between the different parts. You can learn about the various standard components here, in great detail.

TFX comprises several Python libraries used to create pipelines, such as TensorFlow Data Validation (TFDV), TensorFlow Transform (TFT), and TensorFlow Model Analysis (TFMA). The image below demonstrates the relationship between TFX libraries and pipeline components:


You can install TFX via PyPI.

!pip install tfx

Demo of TFX

This demo is a component-by-component walkthrough of TFX via the Keras API, using the Chicago Taxi example.

Note: TFX supports the TensorFlow 2 version of Keras.

  1. Import all the necessary packages and modules. The code is available here.
  2. Check the library versions:
 print('TensorFlow version: {}'.format(tf.__version__))
 print('TFX version: {}'.format(tfx.__version__)) 
  3. Set up the pipeline paths as shown below:
 import tfx.examples.chicago_taxi_pipeline
 # This is the directory containing the TFX Chicago Taxi Pipeline example.
 _taxi_root = tfx.examples.chicago_taxi_pipeline.__path__[0]
 # This is the path where your model will be pushed for serving.
 _serving_model_dir = os.path.join(
     tempfile.mkdtemp(), 'serving_model/taxi_simple')
 # Set up logging.
 absl.logging.set_verbosity(absl.logging.INFO)
  4. Download the dataset. Here we use the Taxi Trips dataset released by the City of Chicago.
 _data_root = tempfile.mkdtemp(prefix='tfx-data')
 _data_filepath = os.path.join(_data_root, "data.csv")
 urllib.request.urlretrieve(DATA_PATH, _data_filepath)
  5. To run TFX components interactively, initialize the interactive context:

context = InteractiveContext()

  6. Run the TFX components:
  a. ExampleGen : This component sits at the start of the TFX pipeline. It splits the data into training and evaluation sets, converts the records into the `tf.Example` format and, lastly, copies the data into the `_tfx_root` directory so that the other components can also access it.

ExampleGen takes the data path as an input.

 example_gen = CsvExampleGen(input=external_input(_data_root))

Now check the output of example_gen; it contains two splits of the dataset, one for training and one for evaluation.

 artifact = example_gen.outputs['examples'].get()[0]
 print(artifact.split_names, artifact.uri) 

Check the training example via this code snippet.
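ExampleGen's default split is deterministic and roughly 2:1 train/eval. The idea behind a hash-based split can be sketched in plain Python; the function name and record contents below are made up for illustration and are not the actual ExampleGen implementation:

```python
import hashlib

def assign_split(record, train_buckets=2, total_buckets=3):
    # Deterministic hash-based assignment: re-running the pipeline on the
    # same data reproduces the same train/eval partition (roughly 2:1 here).
    digest = hashlib.md5(record.encode('utf-8')).hexdigest()
    bucket = int(digest, 16) % total_buckets
    return 'train' if bucket < train_buckets else 'eval'

records = ['trip_{}'.format(i) for i in range(9)]  # hypothetical record ids
splits = {'train': [], 'eval': []}
for r in records:
    splits[assign_split(r)].append(r)
print(len(splits['train']), len(splits['eval']))
```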

  b. StatisticsGen : The next step is to analyze the data, which StatisticsGen takes care of using the TensorFlow Data Validation library. The dataset from ExampleGen is the input of StatisticsGen.
 # Arguments as in the official Chicago Taxi tutorial.
 statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])

You can visualize the full analysis via one line of code:

['statistics'])
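The statistics computed here are per-feature summaries: value counts, missing values, and basic numeric statistics. A toy stdlib version of that idea, using a hypothetical `fares` column (this is not TFDV's implementation):

```python
def column_stats(values):
    # Summarize one feature the way a data-validation tool would:
    # count present/missing values, plus basic numeric statistics.
    present = [v for v in values if v is not None]
    stats = {'num_values': len(present),
             'num_missing': len(values) - len(present)}
    if present and all(isinstance(v, (int, float)) for v in present):
        stats['min'] = min(present)
        stats['max'] = max(present)
        stats['mean'] = sum(present) / len(present)
    return stats

fares = [5.0, 12.5, None, 30.0]  # hypothetical taxi fares
print(column_stats(fares))
```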

  c. SchemaGen: It generates a schema by examining the data statistics. A schema defines the expected types and properties of the dataset's features.

It takes the output of StatisticsGen from step b.

 # Arguments as in the official Chicago Taxi tutorial.
 schema_gen = SchemaGen(
     statistics=statistics_gen.outputs['statistics'],
     infer_feature_shape=False)

Now, you can visualize the schema by:

['schema'])
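Schema inference boils down to mapping each feature's observed statistics to a declared type and an expected-presence flag. A minimal illustration in plain Python (the function and field names are invented, and this is far simpler than TFDV's actual inference):

```python
def infer_schema(stats_by_feature):
    # Derive a schema entry per feature: its type, and whether it is
    # required (i.e. it was never missing in the observed data).
    schema = {}
    for name, stats in stats_by_feature.items():
        schema[name] = {
            'type': stats['type'],
            'required': stats['num_missing'] == 0,
        }
    return schema

stats = {
    'fare': {'type': 'FLOAT', 'num_missing': 1},
    'payment_type': {'type': 'STRING', 'num_missing': 0},
}
print(infer_schema(stats))
```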

  d. ExampleValidator: It looks for anomalies and missing values in the dataset, taking the outputs of StatisticsGen and SchemaGen as input.
 # Arguments as in the official Chicago Taxi tutorial.
 example_validator = ExampleValidator(
     statistics=statistics_gen.outputs['statistics'],
     schema=schema_gen.outputs['schema'])

Now, visualize the detected anomalies:

['anomalies'])
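Conceptually, validation compares fresh data against the schema and reports the differences as anomalies. A toy version under the same assumptions as the sketch above (again, not the real TFDV logic):

```python
def find_anomalies(schema, example):
    # Flag two simple anomaly classes: a value whose type doesn't match the
    # schema, and a required feature that is missing.
    anomalies = []
    type_map = {'FLOAT': float, 'STRING': str}
    for name, spec in schema.items():
        value = example.get(name)
        if value is None:
            if spec['required']:
                anomalies.append('{}: missing required value'.format(name))
        elif not isinstance(value, type_map[spec['type']]):
            anomalies.append('{}: expected {}'.format(name, spec['type']))
    return anomalies

schema = {'fare': {'type': 'FLOAT', 'required': True},
          'payment_type': {'type': 'STRING', 'required': True}}
print(find_anomalies(schema, {'fare': 'free', 'payment_type': None}))
```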

  e. Transform : It performs the data/feature engineering step on the dataset (both training and serving data), using the TensorFlow Transform library. It takes the data from ExampleGen, the schema from SchemaGen, and a module containing user-defined Transform code as input. Before that, there are a few preprocessing steps, whose code is available here.

Now, transform the data and check the output:

 # Arguments as in the official Chicago Taxi tutorial; the module file is
 # defined in the preprocessing steps linked above.
 transform = Transform(
     examples=example_gen.outputs['examples'],
     schema=schema_gen.outputs['schema'],
     module_file=os.path.abspath(_taxi_transform_module_file))

It outputs transform_graph (a graph that can perform the preprocessing operations) and transformed_examples (the preprocessed training and evaluation data). The code for examining them is available here.
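The user-defined transform code is essentially a function from raw features to engineered features, where full-pass statistics (such as mean and variance) are computed once and then baked into the transform graph so the identical transformation is replayed at serving time. A stdlib sketch of one typical operation, z-score scaling, with invented names and data:

```python
def fit_zscore(values):
    # Full-pass 'analyze' step: compute mean and stddev over the dataset.
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5

    # The returned closure plays the role of the transform graph: the same
    # learned constants are applied to training and serving data alike.
    def transform(v):
        return (v - mean) / std if std else 0.0
    return transform

scale_fare = fit_zscore([5.0, 10.0, 15.0])  # hypothetical fares
print([round(scale_fare(v), 2) for v in (5.0, 10.0, 15.0)])  # [-1.22, 0.0, 1.22]
```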

  f. Trainer : It trains the model using Keras. The default trainer is Estimator-based; to use the Keras trainer, define a generic trainer by setting custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor) in the Trainer constructor. The Trainer takes the schema from SchemaGen, the transformed data and graph from Transform, training parameters, and a module containing user-defined model code as its input. Before setting up the trainer, define the user-defined modules it needs; their code is available here.
 # Arguments as in the official Chicago Taxi tutorial.
 trainer = Trainer(
     module_file=os.path.abspath(_taxi_trainer_module_file),
     custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
     examples=transform.outputs['transformed_examples'],
     transform_graph=transform.outputs['transform_graph'],
     schema=schema_gen.outputs['schema'],
     train_args=trainer_pb2.TrainArgs(num_steps=10000),
     eval_args=trainer_pb2.EvalArgs(num_steps=5000))

Analyze the training run via TensorBoard:

 model_run_artifact_dir = trainer.outputs['model_run'].get()[0].uri
 %load_ext tensorboard
 %tensorboard --logdir {model_run_artifact_dir} 
  g. Evaluator : It computes performance metrics on the evaluation set via the TensorFlow Model Analysis library. It takes the data from ExampleGen, the trained model from Trainer, and a slicing configuration (which lets you slice your metrics by feature values). An example is available here. Then pass this configuration to the Evaluator:
 # Use TFMA to compute evaluation statistics over features of a model and
 # validate them against a baseline.
 # The model resolver is only required if performing model validation in addition
 # to evaluation. In this case we validate against the latest blessed model. If
 # no model has been blessed before (as in this case) the evaluator will make our
 # candidate the first blessed model.
 # Arguments as in the official Chicago Taxi tutorial.
 model_resolver = ResolverNode(
     instance_name='latest_blessed_model_resolver',
     resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,
     model=Channel(type=Model),
     model_blessing=Channel(type=ModelBlessing))
 evaluator = Evaluator(
     examples=example_gen.outputs['examples'],
     model=trainer.outputs['model'],
     baseline_model=model_resolver.outputs['model'],
     eval_config=eval_config)

Next, you can examine and visualize the evaluation output. The full visualization code is available here.
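Slicing means computing the same metric separately for each value of a chosen feature, so a regression on one subgroup isn't hidden by a good overall average. A stdlib sketch of that shape, with invented predictions and the Chicago Taxi feature `trip_start_hour` as the slice key (this is not TFMA's API):

```python
from collections import defaultdict

def sliced_accuracy(examples, slice_feature):
    # Group examples by the slicing feature, then compute accuracy per group
    # as well as overall -- the shape of a TFMA-style sliced evaluation.
    buckets = defaultdict(list)
    for ex in examples:
        buckets[ex[slice_feature]].append(ex)
    metrics = {}
    for key, group in buckets.items():
        correct = sum(1 for ex in group if ex['prediction'] == ex['label'])
        metrics[key] = correct / len(group)
    overall = (sum(1 for ex in examples if ex['prediction'] == ex['label'])
               / len(examples))
    return overall, metrics

examples = [  # hypothetical records
    {'trip_start_hour': 8, 'label': 1, 'prediction': 1},
    {'trip_start_hour': 8, 'label': 0, 'prediction': 0},
    {'trip_start_hour': 23, 'label': 1, 'prediction': 0},
    {'trip_start_hour': 23, 'label': 0, 'prediction': 0},
]
print(sliced_accuracy(examples, 'trip_start_hour'))  # (0.75, {8: 1.0, 23: 0.5})
```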

  h. Pusher : It sits at the end of the TFX pipeline; it checks whether the model passed validation and, if so, deploys the model to a serving infrastructure.
 # Arguments as in the official Chicago Taxi tutorial.
 pusher = Pusher(
     model=trainer.outputs['model'],
     model_blessing=evaluator.outputs['blessing'],
     push_destination=pusher_pb2.PushDestination(
         filesystem=pusher_pb2.PushDestination.Filesystem(
             base_directory=_serving_model_dir)))

You can now examine the output of the pusher.
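Pusher's gating logic is simple in outline: deploy only if the Evaluator "blessed" the model, otherwise leave the currently served model untouched. A hedged stdlib sketch with an invented `push_model` helper and a stand-in model file:

```python
import os
import shutil
import tempfile

def push_model(model_dir, blessed, serving_dir):
    # Deploy only validated ('blessed') models; an unblessed candidate is
    # simply not pushed, so serving keeps the previous model.
    if not blessed:
        return False
    dest = os.path.join(serving_dir, 'latest')
    if os.path.exists(dest):
        shutil.rmtree(dest)
    shutil.copytree(model_dir, dest)
    return True

model_dir = tempfile.mkdtemp(prefix='model')
open(os.path.join(model_dir, 'saved_model.pb'), 'w').close()  # stand-in file
serving_dir = tempfile.mkdtemp(prefix='serving')
print(push_model(model_dir, blessed=True, serving_dir=serving_dir))  # True
print(os.listdir(os.path.join(serving_dir, 'latest')))  # ['saved_model.pb']
```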

You can find the complete tutorial here.


This post discussed Google's TensorFlow Extended (TFX), a platform for taking machine learning to production at scale. It provides pipelines, components and libraries that are not only capable of building an ML model but also provide support for deployment. TFX also helps in monitoring the performance of your machine learning system.

Note: All images/figures are taken from official sources.

Official code, docs & tutorials are available at:

Copyright Analytics India Magazine Pvt Ltd
