Electrical Engineering

LSTM deep learning long-term traffic volume prediction model based on Markov state description

Pages 405-413 | Received 02 Aug 2023, Accepted 01 Mar 2024, Published online: 01 Apr 2024
 

ABSTRACT

When applying the long short-term memory (LSTM) neural network model to traffic prediction, there are limitations in exploiting spatial-temporal traffic state features, and the interpretability of such models has not received enough attention. This study proposes an LSTM traffic flow prediction model that can anticipate traffic volume 24 h in advance. The model makes use of traffic flow state information obtained by applying the fuzzy C-means clustering method to multi-day historical traffic flow data. A Markov chain is used to capture the label feature of traffic flow through transition probability matrix information. To demonstrate the efficacy of the proposed technique, experiments were conducted on real traffic volume data from a city in China. The simulation results show that the proposed model attains greater prediction accuracy while significantly reducing network training time.
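The clustering step described above can be illustrated with a minimal sketch. The snippet below is an illustrative fuzzy C-means (FCM) implementation in NumPy, not the authors' code; the fuzzifier m, cluster count c, tolerance, and the toy volume values are assumed for demonstration. Each observation is assigned a discrete traffic-state label by taking the cluster of maximum membership, matching the quantities u_{ij} and d_{ij}^2 defined in the nomenclature.

```python
import numpy as np

def fcm(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy C-means sketch: returns cluster centers and the
    membership matrix U, where u_ij is the membership degree of data
    object x_j in cluster i (each column of U sums to 1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                       # normalise memberships per sample
    for _ in range(max_iter):
        Um = U ** m                          # apply the fuzzifier m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # squared Euclidean distances d_ij^2 between centers and samples
        d2 = ((centers[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        d2 = np.fmax(d2, 1e-12)              # guard against division by zero
        inv = d2 ** (-1.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=0, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Hypothetical 1-D traffic volume observations: a low-flow and a high-flow group
volumes = np.array([[5.0], [6.0], [5.5], [48.0], [50.0], [52.0]])
centers, U = fcm(volumes, c=2)
states = U.argmax(axis=0)                    # discrete traffic-state labels
```

The resulting state labels are what the Markov transition-probability step operates on, and what the abstract refers to as the label feature fed to the LSTM.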

Nomenclature

b=

the bias vector (e.g. b_i is the input gate bias vector)

c=

the number of clusters

c_{t-1}=

the LSTM's cell state at the previous time step

CNN=

convolutional neural network

d_{ij}^2=

the squared Euclidean distance

DNN=

deep neural network

f_{ij}(t_k)=

the transition frequency at the kth time interval t_k

f_t, i_t, C_t, O_t=

the forget gate, input gate, memory cell state, and output gate of the LSTM's basic module

FCM=

Fuzzy C-means

GCN=

graph convolutional network

h_{t-1}=

the LSTM's hidden state at the previous time step

H = (h_1, h_2, …, h_t, …, h_T)=

the hidden vector sequence

l_max=

the maximum step size

LSTM=

long short-term memory

m=

the fuzzifier

MAPE=

the mean absolute percentage error

MSE=

the mean square error

N=

the total number of traffic volume states

P(t_k)=

a one-step transition probability matrix

P^{ts}(t_k)=

a multi-step transition probability matrix

p_{ij}(t_k)=

the transition probability

p_{ij}^{ts}(t_k)=

the ts-step transition probability at time t_k

p_j(t_k)=

the marginal probability

RNN=

recurrent neural network

tanh=

hyperbolic tangent function

u_{ij}=

the membership degree of data object x_j in cluster i

U=

the initial membership matrix

V = {v(t_k)}=

the daily traffic volume sequence

v_l=

the cluster center

v_k=

the real traffic volume at the kth time period

v̂_k=

the predicted traffic volume at the kth time period

v_{si}=

the states of the other days' traffic volume at t_k

W_c=

the cell update gate weight matrix

W_f=

the forget gate weight matrix

W_i=

the input gate weight matrix

W_o=

the output gate weight matrix

x_t=

the input variable

X = (x_1, x_2, …, x_t, …, x_T)=

a sliding window

Y = (y_1, y_2, …, y_t, …, y_T)=

an output sequence

α=

a given significance level

σ=

sigmoid function

χ²(t_k)=

the chi-square statistic
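The transition-probability quantities defined above — f_{ij}(t_k), p_{ij}(t_k), and P(t_k) — can be grounded with a short sketch. This is an illustrative NumPy implementation, not the paper's code, and the example state sequences are hypothetical: it estimates a one-step transition probability matrix by normalising transition frequencies row-wise, p_ij = f_ij / Σ_j f_ij, and obtains a multi-step matrix P^{ts} as a matrix power.

```python
import numpy as np

def transition_matrix(state_seqs, N):
    """One-step transition probability matrix estimated from several days'
    state-label sequences: p_ij = f_ij / sum_j f_ij, where f_ij counts
    observed transitions from state i to state j over N states."""
    F = np.zeros((N, N))                     # transition frequencies f_ij
    for seq in state_seqs:
        for a, b in zip(seq[:-1], seq[1:]):
            F[a, b] += 1
    rows = F.sum(axis=1, keepdims=True)
    # rows with no observed transitions stay all-zero instead of producing NaN
    return np.divide(F, rows, out=np.zeros_like(F), where=rows > 0)

# Two days of hypothetical state labels over N = 2 traffic states
P = transition_matrix([[0, 1, 0, 1], [0, 0, 1]], N=2)
P2 = np.linalg.matrix_power(P, 2)            # two-step matrix P^{ts} with ts = 2
```

Each row of P is a probability distribution over the next state, which is the interpretable label feature the model combines with the LSTM inputs.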

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the National Natural Science Foundation of China under [Grant No. 62176019].
