
# Comprehensive Guide To Deseasonalizing Time Series

Time series data is a collection of data points recorded in sequence over time; the time values can be regularly or irregularly spaced. We use time-series data to predict future values based on past observations. Generally, the effects of seasonality, trend and noise in a time series can throw predictions off. For better forecasting, we need a stationary time series, in which the effect of trend and seasonality is negligible. In this article, we will discuss the seasonality of time series data and how to remove it.

## Types of Time-Series

There are two types of time series: additive and multiplicative. To understand both, we first need to know what trend, seasonality and noise mean in time series data. More formally, we can describe these three components as:


Trend: The long-term increase or decrease in the overall level of the time series.

Seasonality: Fluctuations in the time series value that repeat over a fixed period.

Noise/Random: Abrupt, irregular changes in the time series that are not explained by seasonality or trend.

When these components are plotted together, we can see how seasonality, trend and noise make up the whole observation of an additive time series data set. How the three interact in a data set determines the type of time series.

Additive Time Series: The trend, seasonality and noise components are added together to make the time series.

• Time-Series = trend + seasonality + noise

Multiplicative Time Series: The trend, seasonality and noise components are multiplied together to make the time series.

• Time-series = trend × seasonality × noise
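To make the two forms concrete, here is a minimal synthetic sketch (all numbers hypothetical) showing how the same trend, seasonal and noise components combine additively versus multiplicatively:

```python
import numpy as np

t = np.arange(48)                                # four years of monthly data
trend = 100 + 2 * t                              # steadily rising level
seasonality = 10 * np.sin(2 * np.pi * t / 12)    # pattern repeating every 12 steps
rng = np.random.default_rng(0)
noise = rng.normal(0, 1, size=48)                # irregular variation

# Additive: components are summed, so seasonal swings stay the same size
additive = trend + seasonality + noise

# Multiplicative: components scale each other, so seasonal swings grow
# with the trend; seasonality and noise act as factors around 1
multiplicative = trend * (1 + seasonality / 100) * (1 + noise / 100)
```

In the multiplicative series the seasonal swings widen as the level rises, which is the visual cue usually used to tell the two types apart.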

So far we have covered the basics of a time series data set. Next, we discuss the seasonality of a data set and how to deseasonalize a time series.

## Seasonality

In a time series, seasonality is the component describing changes or fluctuations that recur over similar periods. For example, sales of umbrellas increase in the rainy season: the rainy season happens only once a year, but it happens every year, so we can say there is a seasonality effect in the sales of umbrellas.

A cyclic structure in a data set can be treated as seasonality if its rises and falls repeat at a fixed, known period.

Understanding seasonality can improve forecasting results. However, to get a clear relationship between the input and output, we sometimes need to remove the seasonality. Removing seasonality is called deseasonalizing the time series.

There are many types of seasonality, depending on the time series and the frequency of the fluctuations, such as:

• Time of the day
• Daily
• Weekly
• Monthly
• Yearly

After removing seasonality from a time series, we can consider it seasonally stationary.

To learn about deseasonalizing, I am using the airline passengers data set, which records the monthly passenger count from 1949 to 1960. The data has both trend and seasonality; we are going to remove the seasonality in the next steps.

## Code Implementation of Deseasonalizing Time Series

Setting up the environment in Google Colab.

Requirements:

Python 3.6 or above.

Importing the basic libraries:

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```

Input:

```
data = pd.read_csv("/content/drive/MyDrive/Yugesh/deseasonalizing time series/AirPassengers.csv", index_col=0, parse_dates=True)
```

Output:

Here we can see that the data set has the month column as the index and the passenger count in the first column.

Let’s check for the trend graph of the dataset.

Input :

`data.plot()`

Output:

The trend plot suggests that this is a kind of additive time series, and the data set also shows a slight seasonality alongside the trend.

Let’s check the trends of two consecutive years to see the similarity more closely. I am choosing the years 1957 and 1958 for the test.

Input:

```
data["1957"].plot(kind='bar')
data["1958"].plot(kind='bar')
```

Output:

Here for the years 1957 and 1958, we can see that the amplitudes of the trends are quite similar. The passenger count has increased only by a small amount over the year, but the repeating within-year pattern shows there is a seasonality effect in the passenger count.

To understand this better, we can decompose the data set into its components (seasonality, trend and noise). For decomposition, the statsmodels package provides the `seasonal_decompose` function in the `statsmodels.tsa.seasonal` module.

Importing seasonal_decompose :

`from statsmodels.tsa.seasonal import seasonal_decompose`

Let’s check for the components :

Input:

```
decompose_data = seasonal_decompose(data, model="additive")
decompose_data.plot();
```

Output:

In the chart above, we can see the decomposed structure of the data and the shape of each component affecting the data set.

Let’s plot the seasonal component.

Input:

```
seasonality = decompose_data.seasonal
seasonality.plot(color='green')
```

Output:

In the seasonality graph, we can see the seasonal structure repeating every year: it is cyclic and takes the same values in each cycle.

To check the stationarity of the time series, statsmodels provides the `plot_acf` method to draw an autocorrelation plot.

Input:

```
from statsmodels.graphics.tsaplots import plot_acf
plot_acf(data);
```

Output:

Here the blue area is the confidence interval, and the bars fall inside it from about the 13th lag onwards. This is consistent with a seasonality of 12–13 months.
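Given the roughly 12-month cycle that the ACF suggests, another standard deseasonalizing technique is seasonal differencing: subtracting the value from the same month of the previous year, i.e. differencing at lag 12 rather than lag 1. The steps below stick to one-step differencing; this is a minimal sketch of lag-12 differencing on a synthetic stand-in series (hypothetical values):

```python
import numpy as np
import pandas as pd

# Synthetic monthly series: linear trend plus an exactly yearly season
idx = pd.date_range("1949-01", periods=144, freq="MS")
series = pd.Series(100 + 2 * np.arange(144) + 20 * np.sin(2 * np.pi * np.arange(144) / 12),
                   index=idx, name="Passengers")

# Lag-12 differencing: each value minus the same month one year earlier
seasonal_diff = (series - series.shift(12)).dropna()
```

Because the seasonal pattern repeats exactly every 12 months here, the lag-12 difference cancels it completely, leaving only the year-on-year trend change.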

statsmodels also provides a function to perform the augmented Dickey-Fuller test for stationarity.

Importing function to perform the test:

`from statsmodels.tsa.stattools import adfuller`

Testing the data set with the Dickey-Fuller method:

Input:

```
dftest = adfuller(data.Passengers, autolag='AIC')
print("1. ADF : ", dftest[0])
print("2. P-Value : ", dftest[1])
print("3. Num Of Lags : ", dftest[2])
print("4. Num Of Observations Used For ADF Regression and Critical Values Calculation :", dftest[3])
print("5. Critical Values :")
for key, val in dftest[4].items():
    print("\t", key, ": ", val)
```

Output:

Here in the output, we can see that the p-value of the data set is greater than 0.05. For this reason, we interpret the data as non-stationary.

Since the data is non-stationary, we can apply deseasonalization to the data set to make it more stable, i.e. more stationary.

Let’s perform the deseasonalization on the data set.

##### Differencing over log-transformed time-series

We try to remove the seasonality by taking the logarithm of the passenger count and differencing it with the log values shifted by one step.

Input:

```
log_passengers = pd.DataFrame(data.Passengers.apply(lambda x: np.log(x)))
log_diff = log_passengers - log_passengers.shift()
ax1 = plt.subplot()
log_diff.plot(title='after log transformed & differencing');
ax2 = plt.subplot()
data.plot(title='original');
```

Output :

In the output, we can compare the trend of the graph after deseasonalizing the data.

Let’s check for the p-value of the new time series.

Input:

```
test = adfuller(log_diff.dropna().Passengers)
print("p-value :", test[1])
```

Output:

The p-value is again greater than 0.05, so we can interpret the data as still non-stationary.

##### Differencing over power-transformed time series

Here we first power transform the data (using a square root) and then difference the transformed series with its one-step shift.

Input:

```
powered_transform = data.Passengers.apply(lambda x: x ** 0.5)
powered_transform_diff = powered_transform - powered_transform.shift()
ax1 = plt.subplot()
powered_transform_diff.plot(title='after power transformed & differencing');
ax2 = plt.subplot()
data.plot(title='original');
```

Output:

After this, we can check the p-value using the Dickey-Fuller test.

Input:

```
test = adfuller(powered_transform_diff.dropna())
print("p-value :", test[1])
```

Output:

For differencing over the power-transformed time series, we get a good p-value of about 0.02, lower than 0.05, so we can consider the data stationary. Still, there are a few more methods; let’s check their results as well.

##### Differencing over rolling mean taken for 12 months

Input:

```
rolling_mean = data.rolling(window = 12).mean()
rolling_mean_diff = rolling_mean - rolling_mean.shift()
ax1 = plt.subplot()
rolling_mean_diff.plot(title='after rolling mean & differencing');
ax2 = plt.subplot()
data.plot(title='original');
```

Output:

Let’s check the p-value using the Dickey-Fuller method.

Input:

```
test = adfuller(rolling_mean_diff.dropna().Passengers)
print("p-value :", test[1])
```

Output:

Here we can see that the p-value is again less than 0.05. It means that with these different methods we are improving the stationarity of the data set.

##### Differencing over log-transformed & mean rolled time series

Here we apply the difference between the rolling mean of the log-transformed data and that rolling mean shifted by one step.

Let’s check for the results.

Input:

```
logged_transform = pd.DataFrame(data.Passengers.apply(lambda x: np.log(x)))
rolling_mean = logged_transform.rolling(window = 12).mean()
diff = rolling_mean - rolling_mean.shift(1)
ax1 = plt.subplot()
diff.plot(title='after log transformed rolling mean & differencing');
ax2 = plt.subplot()
data.plot(title='original');
```

Output:

We can see that this has distorted the seasonality; we can interpret this to mean the method is not as good as the previous ones.

Let’s check for the p-value.

Input:

```
test = adfuller(diff.dropna().Passengers)
print("p-value :", test[1])
```

Output:

As assumed, the p-value of the dataset is greater than 0.05. The dataset is not stationary.

##### Differencing over power transformed & rolling mean time series

This method tries to adjust for seasonality using the difference between the rolling mean of the power-transformed data and that rolling mean shifted by one step.

Let’s check for the results.

Input:

```
powered_transform = pd.DataFrame(data.Passengers.apply(lambda x: x ** 0.5))
rolling_mean = powered_transform.rolling(window = 12).mean()
diff = rolling_mean - rolling_mean.shift(1)
ax1 = plt.subplot()
diff.plot(title='after power transformed rolling mean & differencing');
ax2 = plt.subplot()
data.plot(title='original');
```

Output:

In the output, we again see the distortion in the seasonality. Let’s check the p-value.

Input:

```
test = adfuller(diff.dropna().Passengers)
print("p-value :", test[1])
```

Output:

The p-value is one of the best we have obtained, but the seasonality in the graph was not good; we interpret this as the trend component still being strongly present after removing the seasonality. So to improve further, we can also apply detrending.
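As a sketch of what that detrending could look like (on a synthetic stand-in series with hypothetical values, not the passenger data): subtracting a centred 12-month rolling mean removes a slowly varying trend.

```python
import numpy as np
import pandas as pd

# Synthetic monthly series with a pure linear trend
idx = pd.date_range("1949-01", periods=144, freq="MS")
series = pd.Series(100 + 2 * np.arange(144), index=idx, name="Passengers")

# Estimate the trend with a centred 12-month rolling mean and subtract it
trend = series.rolling(window=12, center=True).mean()
detrended = (series - trend).dropna()
```

On this artificial example the detrended series stays close to zero, since the rolling mean tracks the linear trend almost exactly; the ends of the series are lost where the centred window is incomplete.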

Our data set was pretty clean and in ideal condition, but real-world data sets generally do not behave like this, so we need to do more work on the data to make our predictions accurate.

In this article, we discussed time series, gave a basic overview of the components of a time series, and applied differencing methods to deseasonalize the time series data, improving accuracy in the further modeling process.
