Grab your coffee, we're going to code.

ARIMA is a forecasting technique that uses the past values of a series to predict the values to come. A basic intuition for the algorithm can be built by going through the blog post below, which I wrote as Part 1 of my ARIMA exploration.

The series I am using can be downloaded from here: https://drive.google.com/file/d/1W8K92lQ00Zt6J7qJnLKH4yp7MddIVBsR/view?usp=sharing

The first thing is to check the data for stationarity. We will check stationarity with the Augmented Dickey-Fuller test, whose null hypothesis is that the time series is non-stationary…
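In practice the test comes from `statsmodels.tsa.stattools.adfuller`; to see the idea behind it, here is a stripped-down, non-augmented Dickey-Fuller sketch in plain Python — the two synthetic series are illustrative, not the airline data:

```python
import math
import random

def dickey_fuller_stat(y):
    """Simplified (non-augmented) Dickey-Fuller statistic: regress the
    first difference on the lagged level and return the t-statistic on
    the lag coefficient. Strongly negative values reject the null of
    non-stationarity (a unit root)."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    ylag = y[:-1]
    sxx = sum(a * a for a in ylag)
    gamma = sum(a * b for a, b in zip(ylag, dy)) / sxx  # OLS slope, no constant
    resid = [d - gamma * a for a, d in zip(ylag, dy)]
    s2 = sum(r * r for r in resid) / (len(dy) - 1)
    return gamma / math.sqrt(s2 / sxx)  # t-statistic

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(500)]   # stationary white noise
walk, level = [], 0.0
for _ in range(500):                               # random walk: unit root
    level += random.gauss(0, 1)
    walk.append(level)

# The stationary series gives a strongly negative statistic (reject the
# null); the random walk gives a statistic near zero (fail to reject).
```

The real ADF test augments this regression with lagged difference terms and supplies proper critical values, which is why we lean on statsmodels rather than rolling our own.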



When it comes to forecasting, ARIMA is quite often the first-choice algorithm. Let us try to understand briefly what this is all about.

A simple intuition about Auto-Regressive Integrated Moving Average (ARIMA) is that it forecasts future values of a time series from the series alone: it combines lagged values of the series (the AR terms) with lagged forecast errors (the MA terms), after differencing the series to make it stationary (the I term). A point to note, however, is that for plain ARIMA to work, the series should be non-seasonal.
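To make "lags and lagged forecast errors" concrete, here is a toy one-step-ahead ARMA(1,1) forecaster in plain Python. The coefficients `phi` and `theta` are assumed for illustration rather than estimated — in practice you would fit them with `statsmodels.tsa.arima.model.ARIMA`:

```python
def arma_one_step(series, phi, theta):
    """One-step-ahead ARMA(1,1) forecasts:
        yhat_t = phi * y_{t-1} + theta * e_{t-1}
    where y_{t-1} is the lag (the AR part) and e_{t-1} is the previous
    forecast error (the MA part)."""
    forecasts, prev_err = [], 0.0
    for t in range(1, len(series)):
        yhat = phi * series[t - 1] + theta * prev_err
        forecasts.append(yhat)
        prev_err = series[t] - yhat  # error feeds the next forecast
    return forecasts

# With theta = 0 this collapses to a pure AR(1) forecast:
# arma_one_step([1.0, 2.0, 3.0], 1.0, 0.0) -> [1.0, 2.0]
```

The "I" in ARIMA simply means this recursion is run on the differenced series instead of the raw one.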

Before we start with ARIMA, we should make sure that the predictors…


Patience is the only prerequisite here.

I'm glad you are here, and if you're clueless about what Holt-Winters Exponential Smoothing is, check out this article here,

This would act as a good starting point and will help you grasp the basic idea. Go ahead, read it, I shall wait. And if you want me to cut to the chase and start coding, your prayers have just been answered.

We have taken time series data of the number of international airline passengers (in thousands) from January 1949 to December 1960. One can find the dataset here,


Yesterday was a cold winter night and I was binging on Brooklyn99, and that is when I thought I could write a brief article on Holt-Winters Exponential Smoothing (Got the lame joke?)

Holt-Winters Exponential Smoothing (HWES) works on the idea of smoothing the values of a univariate time series to forecast future values. The idea is to assign exponentially decreasing weights, giving more importance to more recent observations. So, as we move back in time, the weights diminish.
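To see those decaying weights in code, here is a minimal sketch of simple exponential smoothing — the single-parameter core that HWES builds on by adding trend and seasonal components (e.g. via `statsmodels.tsa.holtwinters.ExponentialSmoothing`). The smoothing parameter `alpha` here is an assumed, illustrative value:

```python
def simple_exponential_smoothing(series, alpha):
    """Recursive level update: l_t = alpha * y_t + (1 - alpha) * l_{t-1}.
    Unrolling the recursion shows observation y_{t-k} receives weight
    alpha * (1 - alpha)**k, so weights decay exponentially as we move
    back in time."""
    level = series[0]
    fitted = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        fitted.append(level)
    return fitted

# With alpha = 0.5, a jump from 0 to 10 is only half absorbed at once:
# simple_exponential_smoothing([0.0, 0.0, 10.0], 0.5) -> [0.0, 0.0, 5.0]
```

A larger `alpha` reacts faster to recent changes; a smaller one smooths more aggressively.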

Note: If you're looking for how to code Holt-Winters in Python, check out the second part…


LSTM learns. LSTM remembers. Be like LSTM.

Recurrent Neural Networks (RNNs) are good at learning sequences: they carry information from the previous state to the next across time steps. Alas, they provide good results only when the sequences are not too long, so they suffer from a short-term memory problem.

Moreover, during backpropagation, a Recurrent Neural Network may also suffer from the vanishing gradient problem. …
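A tiny numerical sketch of why that happens: in a toy one-unit RNN with h_t = tanh(w * h_{t-1}), backpropagation multiplies one Jacobian factor per time step, so whenever |w * tanh'| < 1 the gradient shrinks geometrically with sequence length. The weight w = 0.5 below is an arbitrary illustrative choice:

```python
import math

def gradient_magnitude(w, steps):
    """Toy 1-unit RNN h_t = tanh(w * h_{t-1}). Backpropagating through
    `steps` time steps multiplies one factor w * tanh'(pre_t) per step,
    so the gradient w.r.t. the initial state decays geometrically."""
    h, grad = 0.5, 1.0
    for _ in range(steps):
        pre = w * h
        h = math.tanh(pre)
        grad *= w * (1.0 - h * h)  # d h_t / d h_{t-1} = w * tanh'(pre)
    return abs(grad)

# Gradient through 5 steps vs 50 steps with w = 0.5: the long sequence's
# gradient is vanishingly small — the short-term memory problem in numbers.
```

LSTMs counter this with gated cell states that give the gradient a near-additive path through time.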

Etqad Khan

I like what I do.
