neptune.ai is a lightweight tool for experiment management and collaboration in data science projects. It serves as an organized place for all your experiments, data exploration notebooks and more, supports any kind of project workflow, and can be used individually or in a team. It is available as a hosted service or can be deployed on your own hardware or cloud. Its notebook tracking and versioning capabilities ensure you never lose your data exploration insights or other crucial project information.
neptune.ai was introduced to the world by a private organization named Neptune Labs Inc. in November 2017. The company is headquartered in Warsaw, Mazowieckie (Poland) and was founded by Piotr Niedzwiedz.
Several reputed companies leverage neptune.ai to manage their ML experiments.
Following are some of the vital use cases of the tool:
- Organize the workflow of your project (a minimal sketch of this follows the list)
- Monitor each run of the experiment
- Share the experiment’s outcomes with your team
- Clean up your project’s workflow
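As an illustration of the first use case, here is a minimal sketch using the legacy neptune-client API that this article follows throughout; the experiment name, parameters and tags below are placeholders:

import neptune

neptune.init(project_qualified_name='WORKSPACE_NAME/PROJECT_NAME', api_token='YOUR_API_TOKEN')

# A name, a params dict and tags keep runs searchable and comparable in the Neptune UI
neptune.create_experiment(
    name='baseline-model',        # placeholder experiment name
    params={'lr': 0.005},         # hyperparameters appear as experiment columns
    tags=['mnist', 'baseline']    # free-form labels for filtering runs
)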
Demo code
Installation
neptune.ai requires Python 3.x installed on your machine. Run the following command to install the neptune-client package:
pip install neptune-client
Import required libraries
import neptune
import numpy as np
from time import sleep
Initialize Neptune
neptune.init(project_qualified_name='WORKSPACE_NAME/PROJECT_NAME', api_token='YOUR_API_TOKEN')
Refer to the Neptune documentation for the procedure to get a Neptune API token.
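To keep the token out of source code, the client also reads it from the NEPTUNE_API_TOKEN environment variable; a minimal sketch:

# In your shell, once:
#   export NEPTUNE_API_TOKEN='YOUR_API_TOKEN'

import neptune

# With NEPTUNE_API_TOKEN set, the api_token argument can be omitted
neptune.init(project_qualified_name='WORKSPACE_NAME/PROJECT_NAME')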
Create Neptune experiment
neptune.create_experiment()
Log metrics to experiment
neptune.log_metric('single_metric', 0.62)

for i in range(50):
    sleep(0.15)  # watch live logging
    neptune.log_metric('random_training_metric', i * np.random.random())
    neptune.log_metric('other_random_training_metric', 0.4 * i * np.random.random())
The output of the above code includes a link to the experiment in Neptune, where we can explore it through various visualization charts, logs, artifacts (arbitrary files) and so on.
Practical implementation of a Deep Learning experiment using neptune.ai
Import required libraries
import neptune
import tensorflow as tf
Load the MNIST dataset
data = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = data.load_data()
Normalize the data
x_train, x_test = x_train / 255.0, x_test / 255.0
Define the model
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation=tf.keras.activations.relu),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation=tf.keras.activations.softmax)
])
Define the optimizer
opt = tf.keras.optimizers.SGD(lr=0.005, momentum=0.4)
Compile the model
model.compile(optimizer=opt, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
Initialize Neptune
neptune.init(project_qualified_name='WORKSPACE_NAME/PROJECT_NAME', api_token='YOUR_API_TOKEN')
Create a Neptune experiment
neptune.create_experiment('EXPERIMENT_NAME')
Fit the model to training data
from neptunecontrib.monitoring.keras import NeptuneMonitor  # from the neptune-contrib package (pip install neptune-contrib)
model.fit(x_train, y_train, epochs=5, batch_size=64, callbacks=[NeptuneMonitor()])
Running the above code outputs a link to the Neptune UI, where you can explore your experiment.
Log hardware consumption of the experiment
Neptune can automatically track hardware utilization (CPU, GPU and memory) for each experiment; it only requires the psutil package to be installed:
pip install --quiet psutil==5.6.6
Log hyperparameters of the neural network
# Define the hyperparameters
parameters = {'lr': 0.005, 'momentum': 0.9, 'epochs': 15, 'batch_size': 32}
The procedure to compile and fit the model remains the same. The only addition here is to pass the parameters defined above to the create_experiment() call.
neptune.create_experiment('tensorflow-keras-advanced', params=parameters)
Log the predictions
# Select sample images from the test set
x_test_sample = x_test[:10]

# Predict the labels of the samples
y_test_sample_pred = model.predict(x_test_sample)

# Log the predictions
for image, y_pred in zip(x_test_sample, y_test_sample_pred):
    desc = '\n'.join(['class {}: {}'.format(i, pred) for i, pred in enumerate(y_pred)])
    neptune.log_image('predictions', image, description=desc)
Save the model
model.save('MODEL_NAME')
Log the saved model as an artifact
neptune.log_artifact('MODEL_NAME')
Explore the results in the Neptune UI and stop logging at the end.
neptune.stop()
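To guarantee the experiment is closed cleanly, the training and logging calls can be wrapped in try/finally so that stop() runs even if an error is raised; a minimal sketch:

neptune.init(project_qualified_name='WORKSPACE_NAME/PROJECT_NAME', api_token='YOUR_API_TOKEN')
neptune.create_experiment('EXPERIMENT_NAME')
try:
    # training and logging logic goes here
    neptune.log_metric('accuracy', 0.95)
finally:
    neptune.stop()  # executes even if training fails, closing the experiment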
Find the Google Colab notebook of the above implementation code here.
neptune.ai Integrations
Neptune provides integrations with over 25 Python libraries widely used for a variety of real-world AI/ML applications. Some of the integrations written using neptune-client include Pandas, Matplotlib, PyTorch, TensorFlow, Keras, Scikit-learn, Scikit-Optimize, XGBoost and so on. The complete list of available integrations can be found here.
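For instance, with the neptune-contrib package installed, a Matplotlib figure can be logged to the current experiment; a minimal sketch (the chart name and data below are illustrative):

# pip install neptune-contrib matplotlib
import matplotlib.pyplot as plt
from neptunecontrib.api import log_chart

fig = plt.figure()
plt.plot([0.9, 0.6, 0.4, 0.3])  # e.g. a loss curve
log_chart(name='loss-curve', chart=fig)  # saved under the experiment's artifacts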
Run ML experiments anywhere using neptune.ai
You can run an ML experiment in any environment and log it to Neptune. It is a simple two-step process in Python.
- Install Neptune client
pip install neptune-client
- Add logging code
For instance,
import neptune

neptune.init(project_qualified_name='WORKSPACE_NAME/PROJECT_NAME', api_token='YOUR_API_TOKEN')
neptune.create_experiment('EXPERIMENT_NAME')

# training logic
neptune.log_metric('accuracy', 0.95)
Multi-language support
Neptune supports experiments written using multiple programming languages.
See how it works with Python, R or any other language.
Notebook flavours supported by neptune.ai
Neptune's notebook tracking works with the popular notebook environments, including Jupyter Notebook, JupyterLab, Google Colab and AWS SageMaker.
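For Jupyter-based environments, notebook checkpointing and versioning is handled by the neptune-notebooks extension, which can be set up as follows (assuming a local Jupyter installation):

pip install neptune-notebooks
jupyter nbextension enable --py neptune-notebooks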
EndNote
neptune.ai provides an efficient way to store, visualize, organize and compare the metadata associated with an ML experiment. It is a prominent MLOps solution that streamlines the ML workflow, making a project easy to manage whether it is handled individually or collaboratively.
To get in-depth knowledge of this tool, refer to the following sources: