One Of The Most Benchmarked Human Motion Recognition Datasets In Deep Learning

HMDB-51 is an action video dataset with 51 action categories, which together contain around 7,000 manually annotated clips extracted from a variety of sources ranging from digitized movies to YouTube.

HMDB-51 is a human motion recognition dataset with 51 action categories, which together contain around 7,000 manually annotated clips extracted from a variety of sources ranging from digitized movies to YouTube. It was developed by the researchers H. Kuehne, H. Jhuang, E. Garrote and T. Serre in 2011.

The dataset contains 51 distinct action classes, each with at least 101 clips, for a total of 6,766 video clips extracted from a wide range of sources. The labels for each clip include the camera viewpoint, the video quality, and the number of actors involved in the action.

The action classes can be divided into five types:


1) face actions: laugh, chew, talk

2) face actions with object manipulation: smoke, eat, drink

3) body movements: clap hands, climb, dive, fall, backhand flip, handstand, walk, push up, run

4) body movements with object interaction: swing bat, kick football, brush hair, catch, draw sword, play tennis, hit something, kick ball, pick, pour, ride bike, play badminton, shoot ball, throw

5) body movements for human interaction: hug someone, kick, kiss, punch, shake hands, sword fight.

Here, we will examine the data contained in this dataset, how it was gathered, and mention some benchmark models that achieve high accuracy on it. Further, we will load HMDB-51 using the PyTorch and Keras libraries.

Data Collection

To gather human movements that represent everyday activities, a group of students was asked to watch videos from different web sources such as YouTube and Google Videos and annotate any segment of these videos that shows a single human action. They were instructed to follow minimum quality standards: a single action per clip, a height of at least 60 pixels for the main actor, a minimum contrast level, a clip length of at least one second, and acceptable compression artefacts. Amazon Mechanical Turk (AMT) workers were then used to verify whether each clip actually contains the labelled action. Since several clips may share common video material, the dataset was further refined by checking that only one clip is taken from each video.

Loading the dataset using PyTorch

The dataset can be downloaded from the following link.

Import all the libraries required for this project.

import torch
import torch.nn as nn
from torch.nn import functional as F
from torch.utils.data import random_split, DataLoader
from torch.optim.lr_scheduler import StepLR
import torchvision
from torchvision import get_video_backend
from torchvision.models.video import r3d_18   # 3D ResNet-18 video classification model
from torchvision import transforms

We need to transform the dataset using data augmentation. Augmentation provides additional training signal by applying minor changes to the existing data, for example flipping, resizing, or adjusting the brightness of the frames.

# T refers to the video transform helpers (ToFloatTensorInZeroOne, Resize, RandomHorizontalFlip,
# Normalize, RandomCrop) that ship with torchvision's video-classification reference example;
# they are not part of torchvision.transforms.
data = torchvision.transforms.Compose([
    T.ToFloatTensorInZeroOne(),
    T.Resize((128, 171)),
    T.RandomHorizontalFlip(),
    T.Normalize(mean=[0.43216, 0.394666, 0.37645], std=[0.22803, 0.22145, 0.216989]),
    T.RandomCrop((112, 112))
])
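If the reference transforms module is not at hand, the core helpers can be written by hand. The sketch below is an illustrative assumption about their behaviour, not the reference implementation; it assumes HMDB51 returns uint8 video tensors of shape (T, H, W, C).

import torch
from torch.nn import functional as F

class ToFloatTensorInZeroOne:
    """(T, H, W, C) uint8 video -> (C, T, H, W) float tensor in [0, 1]."""
    def __call__(self, vid):
        return vid.permute(3, 0, 1, 2).to(torch.float32) / 255

class Resize:
    """Bilinearly resize every frame of a (C, T, H, W) video."""
    def __init__(self, size):
        self.size = size
    def __call__(self, vid):
        return F.interpolate(vid, size=self.size, mode='bilinear', align_corners=False)

class Normalize:
    """Channel-wise normalisation of a (C, T, H, W) video."""
    def __init__(self, mean, std):
        self.mean, self.std = mean, std
    def __call__(self, vid):
        mean = torch.as_tensor(self.mean).view(-1, 1, 1, 1)
        std = torch.as_tensor(self.std).view(-1, 1, 1, 1)
        return (vid - mean) / std

RandomHorizontalFlip and RandomCrop follow the same pattern, flipping or cropping the last two (spatial) dimensions of the video tensor.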

The next step is to load the dataset with a batch size of 32.

# Example values for the clip sampling and loading parameters (adjust as needed)
num_frames = 16        # frames per clip
clip_steps = 25        # step between consecutive clips
num_workers = 4        # worker processes used to index the videos

hmdb51_training = torchvision.datasets.HMDB51('video_data/', 'test_train_splits/', num_frames,
                                              step_between_clips=clip_steps, fold=1, train=True,
                                              transform=data, num_workers=num_workers)
batch_size = 32
data_loader = DataLoader(hmdb51_training, batch_size=batch_size, shuffle=True, num_workers=num_workers)
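
The article stops at loading the data, but the r3d_18 model imported earlier can be fine-tuned on these clips. The snippet below is a minimal illustrative sketch rather than the original tutorial's code; the learning rate, number of epochs and scheduler settings are assumptions.

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# r3d_18 is pretrained on Kinetics-400; replace the classification head for the 51 HMDB classes
model = r3d_18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 51)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
scheduler = StepLR(optimizer, step_size=5, gamma=0.1)

model.train()
for epoch in range(10):
    for video, _audio, label in data_loader:   # HMDB51 yields (video, audio, label); audio is ignored here
        video, label = video.to(device), label.to(device)
        optimizer.zero_grad()
        loss = criterion(model(video), label)  # video batch shape: (N, C, T, H, W)
        loss.backward()
        optimizer.step()
    scheduler.step()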

State-of-the-art recognition results for the HMDB-51 dataset are summarised in the State of the art section below.

Loading the dataset using Keras

Install the keras-video-generators package using pip. Keras' ImageDataGenerator is used to augment the individual frames.

pip install keras-video-generators
import os
import glob
import keras
from keras_video import VideoFrameGenerator

Next, we define the parameters that will be passed to the video frame generator.

classes = [i.split(os.path.sep)[1] for i in glob.glob('videos/*')]
classes.sort()
# Parameters
Size = (112, 112)
channel = 3
Nbframe = 5
Batch_size = 32
# Data augmentation
data_augmentation = keras.preprocessing.image.ImageDataGenerator(
    zoom_range=.1,
    horizontal_flip=True,
    rotation_range=8,
    width_shift_range=.2,
    height_shift_range=.2)

Load the dataset by creating a video frame generator with these parameters.

# Create video frame generator
# The videos are located through a glob pattern with a {classname} placeholder;
# here it is assumed to match the videos/ folder used to build the classes list above.
train = VideoFrameGenerator(
    glob_pattern='videos/{classname}/*.avi',
    classes=classes,
    nb_frames=Nbframe,
    split=.33,
    shuffle=True,
    batch_size=Batch_size,
    target_shape=Size,
    nb_channel=channel,
    transformation=data_augmentation,
    use_frame_cache=True)
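
VideoFrameGenerator is a Keras Sequence, so it can be passed straight to model.fit. As a purely illustrative sketch (not part of the original article), a small TimeDistributed CNN could be trained on the generated batches; the architecture and epoch count below are assumptions.

from keras.models import Sequential
from keras.layers import (TimeDistributed, Conv2D, MaxPooling2D, Flatten,
                          GlobalAveragePooling1D, Dense)

# Batches have shape (Batch_size, Nbframe, *Size, channel); labels are one-hot encoded
model = Sequential([
    TimeDistributed(Conv2D(32, (3, 3), activation='relu'),
                    input_shape=(Nbframe,) + Size + (channel,)),
    TimeDistributed(MaxPooling2D()),
    TimeDistributed(Flatten()),
    GlobalAveragePooling1D(),        # average the per-frame features over time
    Dense(len(classes), activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(train, epochs=5)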

State of the art

The current state of the art on the HMDB-51 dataset is R(2+1)D-BERT, which reports an accuracy of 85.10%. HAF+BoW is a close contender with an accuracy of around 83%.

Conclusion

In this article we described a dataset that can be used for human activity recognition and showed how to load it with the PyTorch and Keras libraries. Even with 51 action classes, HMDB-51 is still a long way from capturing the richness and full complexity of the video clips commonly found in movies or online videos.

Ankit Das
A data analyst with expertise in statistical analysis, data visualization ready to serve the industry using various analytical platforms. I look forward to having in-depth knowledge of machine learning and data science. Outside work, you can find me as a fun-loving person with hobbies such as sports and music.
