# Process Mining IPTV Customer Eye Gaze Movement Using Discrete-Time Markov Chains


## Abstract


## 1. Introduction

## 2. Related Work

#### 2.1. Eye Tracking in Research

#### 2.2. Human-Computer Interaction

#### 2.2.1. Fitts’ Law

- T is the time required to point to the object;
- A and B are empirically determined regression coefficients;
- D is the distance from the pointer to the object;
- W is the width of the object.

**Figure 2.** Fitts’ Law Graph–Object Size vs Usability Index [31].
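Fitts’ Law is commonly written (in its Shannon formulation) as $T = A + B \log_2(D/W + 1)$. A minimal sketch of the relationship, with hypothetical values for the coefficients A and B:

```python
import math

def fitts_time(a: float, b: float, d: float, w: float) -> float:
    """Predicted movement time T = A + B * log2(D/W + 1) (Shannon formulation).

    a, b : empirically determined regression coefficients (hypothetical here)
    d    : distance from the pointer to the target object
    w    : width of the target object
    """
    return a + b * math.log2(d / w + 1)

# Larger and/or closer targets are predicted to be faster to acquire.
t_small_far = fitts_time(a=0.2, b=0.1, d=800, w=20)    # small, distant target
t_large_near = fitts_time(a=0.2, b=0.1, d=200, w=100)  # large, nearby target
assert t_large_near < t_small_far
```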

#### 2.2.2. Gestalt Principles

- Proximity: This principle states that if objects are within close proximity to each other, the brain naturally groups them compared to those that are further apart;
- Similarity: This principle suggests that the brain groups objects based on their similarity in colour, shape, etc., and distinguishes those that are different as a separate group;
- Continuity: The brain naturally follows and continues lines, even those that intersect with each other, and forms groups based on this continuation;
- Closure: In relation to shapes, if the brain observes lines which form incomplete outlines of certain shapes, we naturally close the gaps to form that shape as the brain prefers completeness and, therefore, initially views the shape as a whole.

#### 2.2.3. F-Shape and Horizontal Left Patterns

#### 2.2.4. HCI Evaluation Techniques

#### 2.3. Markov Chain Application in Process Mining

## 3. Methodology

#### 3.1. Experiment Design

#### 3.2. Aims and Objectives

- Purchase Flow Pages: BT is interested in how the user interacts with the TV on Demand service to improve the ease of use of the purchase flow (from initially choosing a TV show/film to going through with payment) to ultimately increase sales;
- Content Discovery Pages: BT is interested in how the user interacts with the main pages of the BT Player, regarding searching for items, looking at menus and carousels (large images and descriptions on the screen to draw attention), to improve the user interface of these pages to increase sales.

- 3. Content Purchase: “When purchasing content (TV on Demand), what draws the eye? Is it the price, is it the quality, or is it something else?”
- 4. Content Viewing: “When a Content Discovery page first loads, what are customers viewing? Are they drawn to the hero carousel, the navigation or something else?”

#### 3.3. Data Manipulation

#### 3.3.1. Data Collection

#### 3.3.2. Data Pre-Processing

- id: eye-tracker time, a sequentially ordered list based on the timestamp at which each recording was taken (i.e., the first recording is 1, the second is 2, etc.);
- participant name: 14 participants (P001–P014);
- local timestamp: a timestamp taken every 0.165 s (e.g., 10:07:46.441);
- GazePointX (ADCSpx): the X-coordinate of the gaze point;
- GazePointY (ADCSpx): the Y-coordinate of the gaze point;
- gaze event type: can be “Fixation,” “Saccade,” or “Unclassified.”

#### 3.4. Fitting into a DTMC Model

#### 3.4.1. Markov Definitions

- DTMC

A DTMC is defined on a finite state space $S = \{S_1, S_2, \ldots, S_n\}$, where $n$ is the number of states in the system; for example, if $S = \{A, B, C, D, E, F, \ldots\}$ then $S_1 = A$, $S_2 = B$, etc. [54]. In a first-order DTMC, the probability of the current state is based solely on the previous state of the Markov chain. That is, where the time instants associated with state changes are $t = 0, 1, 2, 3, \ldots$ and the actual state at a given time $t$ ($t \geq 1$) is denoted $q_t$, the probability of arriving at a given state, given its previous state, can be calculated as follows:

$$P(q_t = S_j \mid q_{t-1} = S_i, q_{t-2} = S_k, \ldots) = P(q_t = S_j \mid q_{t-1} = S_i).$$
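This first-order property can be illustrated with a short simulation sketch: the next state is drawn using only the current state’s row of a transition matrix (the three-state matrix below is hypothetical, not the paper’s data):

```python
import random

# Hypothetical 3-state transition matrix over S = {A, B, C}; each row sums to 1.
STATES = ["A", "B", "C"]
A_MAT = {
    "A": [0.7, 0.2, 0.1],
    "B": [0.3, 0.4, 0.3],
    "C": [0.1, 0.3, 0.6],
}

def step(state: str, rng: random.Random) -> str:
    """First-order Markov step: the next state depends only on the current one."""
    return rng.choices(STATES, weights=A_MAT[state])[0]

def trajectory(start: str, n: int, seed: int = 0) -> list:
    """Generate a length-(n+1) state trajectory X_0, X_1, ..., X_n."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(n):
        path.append(step(path[-1], rng))
    return path

print(trajectory("A", 5))  # a 6-state gaze-like path starting at A
```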

- Dependency Test

Let $p_{ij}$ be the probability that the system moves from state $S_i$ to state $S_j$ in one step. To test whether successive events are independent (against the alternative of first-order Markov dependence), the statistic $\alpha$ is defined as:

$$\alpha = 2 \sum_{i=1}^{k} \sum_{j=1}^{k} f_{ij} \ln\!\left(\frac{p_{ij}}{p_j}\right),$$

which is asymptotically distributed as $\chi^2$ with $(k-1)^2$ degrees of freedom (DF), where $k$ is the total number of states. The marginal probabilities $p_j$ can be calculated as:

$$p_j = \frac{\sum_{i} f_{ij}}{\sum_{i}\sum_{j} f_{ij}},$$

where $f_{ij}$ is the frequency of transitions from state $i$ to state $j$ [56].
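A minimal sketch of this dependency test, assuming the likelihood-ratio form above (the $k = 3$ frequency matrix is hypothetical):

```python
import math

def dependency_alpha(f):
    """alpha = 2 * sum_ij f_ij * ln(p_ij / p_j), ~ chi-square with (k-1)^2 DF.

    f : k x k matrix of transition frequencies f_ij (list of lists).
    Returns (alpha, degrees_of_freedom).
    """
    k = len(f)
    total = sum(sum(row) for row in f)
    row_tot = [sum(row) for row in f]
    col_tot = [sum(f[i][j] for i in range(k)) for j in range(k)]
    alpha = 0.0
    for i in range(k):
        for j in range(k):
            if f[i][j] == 0 or row_tot[i] == 0:
                continue
            p_ij = f[i][j] / row_tot[i]  # conditional transition probability
            p_j = col_tot[j] / total     # marginal probability of state j
            alpha += 2 * f[i][j] * math.log(p_ij / p_j)
    return alpha, (k - 1) ** 2

# Hypothetical frequency matrix: heavy diagonal suggests strong dependence.
alpha, df = dependency_alpha([[30, 5, 5], [4, 25, 6], [6, 4, 28]])
```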

- Transition Matrix

The one-step transition probability $a_{ij}$ can be calculated by:

$$a_{ij} = P(q_t = S_j \mid q_{t-1} = S_i), \qquad 1 \leq i, j \leq N,$$

where $a_{ij} \geq 0$ and $\sum_{j=1}^{N} a_{ij} = 1$. A transition matrix $A = \{a_{ij}\}$ can then be formed, where each element at position $(i, j)$ stands for the transition probability $a_{ij}$, i.e., the probability in each row that a given state $q_{t-1}$ will go to the next state $q_t$; each row sums to 1 [54].

- Classification of States

If $a_{ij} > 0$ and $a_{ji} > 0$, then states $S_i$ and $S_j$ are said to be communicating. If all states communicate with each other in the Markov chain, then it is said to be irreducible, and it has only one communicating class, while Markov chains with multiple communicating classes are said to be reducible.

Let $f_{ii}$ be the probability that, starting from state $S_i$, the chain ever returns to $S_i$. A state is recurrent if $f_{ii} = 1$; however, it will be transient if $f_{ii} < 1$.

A state $S_i$ is absorbing if it cannot be left once entered, i.e., $a_{ii} = 1$.
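As an illustrative sketch (hypothetical three-state chain): absorbing states can be read off the diagonal, and one-step communication off the off-diagonal entries. Full communication additionally requires reachability through intermediate states, which this sketch deliberately does not check:

```python
def absorbing_states(a, states):
    """A state S_i is absorbing when a_ii = 1 (it cannot be left)."""
    return [s for i, s in enumerate(states) if a[i][i] == 1.0]

def communicates_directly(a, i, j):
    """States S_i and S_j communicate in one step when a_ij > 0 and a_ji > 0."""
    return a[i][j] > 0 and a[j][i] > 0

# Hypothetical 3-state chain: Z (index 2) is absorbing, so the chain is reducible.
A_MAT = [
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0],
]
assert absorbing_states(A_MAT, ["A", "B", "Z"]) == ["Z"]
assert communicates_directly(A_MAT, 0, 1)
assert not communicates_directly(A_MAT, 0, 2)  # Z never returns to A
```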

- Distribution of States

A probability distribution over the states is a row vector $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ with $\sum_i x_i = 1$ (each $x_i \geq 0$), and the state at time $t$ is denoted by $X_t$. When $t = 0$, we call $\mathbf{x}$ an initial state vector, denoted by $\pi_{init}$:

$$\pi_{init} = \big(P(X_0 = S_1), P(X_0 = S_2), \ldots, P(X_0 = S_n)\big).$$

Let $\pi_{init}$ be the initial state vector; then the probability distribution of $X_n$ after $n$ steps is $\pi_n$:

$$\pi_n = \pi_{init} A^n.$$

If a steady-state (limiting) distribution exists, it can be calculated from $\pi_{init} A^n$ as $n \to \infty$.
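A sketch of the distribution update $\pi_n = \pi_{init} A^n$, computed by repeated single-step updates on a hypothetical two-state chain; iterating long enough approximates the steady state:

```python
def matvec(pi, a):
    """One step of the distribution update: pi_{t+1} = pi_t @ A."""
    n = len(pi)
    return [sum(pi[i] * a[i][j] for i in range(n)) for j in range(n)]

def distribution_after(pi_init, a, n):
    """pi_n = pi_init @ A^n, computed by n successive single-step updates."""
    pi = list(pi_init)
    for _ in range(n):
        pi = matvec(pi, a)
    return pi

# Hypothetical 2-state chain; its steady state solves pi = pi A, here (5/6, 1/6).
A_MAT = [[0.9, 0.1], [0.5, 0.5]]
pi_100 = distribution_after([1.0, 0.0], A_MAT, 100)
```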

- Trajectory

A trajectory of a Markov chain is a sequence of states $X_0, X_1, X_2, \ldots$. Generally, if we refer to the trajectory $\{S_1, S_2, S_3, \ldots\}$, we mean that $X_0 = S_1$, $X_1 = S_2$, $X_2 = S_3$, etc. In this study, the trajectory refers to the path of people’s gaze movements on the TV screen. Based on the Markov Property, if the transition matrix $A = (a_{ij})$ is known, we can find the probability of any trajectory $\{s_1, s_2, s_3, \ldots, s_{n-1}, s_n\}$ by multiplying together the starting distribution and all subsequent single-step probabilities. The calculation is shown:

$$P(s_1, s_2, \ldots, s_n) = \pi_{init}(s_1)\, a_{s_1 s_2}\, a_{s_2 s_3} \cdots a_{s_{n-1} s_n}.$$

At the $n$th step, let $\pi_n$ denote the probability distribution over the states, and let $s_n^{max}$ denote the state of the highest probability; then we have:

$$s_n^{max} = \operatorname*{arg\,max}_{s \in S} \pi_n(s).$$

Let $\pi_n^{max}$ denote the probability of the ‘most likely trajectory’ in $n$ steps; then we can calculate it as follows:

$$\pi_n^{max} = \max_{s \in S} \pi_n(s).$$

The state $s_n^{max}$ at each step $n$ comprises a set of states $S^{max}$, which represents the sequence of states from $X_0$. Thus, $S^{max}$ is the most likely trajectory, shown below:

$$S^{max} = \{s_0^{max}, s_1^{max}, \ldots, s_n^{max}\}.$$
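A sketch of extracting $S^{max}$ by taking the highest-probability state of $\pi_n$ at each step (the transition matrix and initial vector below are hypothetical):

```python
def most_likely_trajectory(pi_init, a, states, n):
    """Return S^max: the highest-probability state at each step 0..n,
    where the step-n distribution is pi_n = pi_init @ A^n."""
    pi = list(pi_init)
    path = [states[max(range(len(pi)), key=pi.__getitem__)]]
    for _ in range(n):
        pi = [sum(pi[i] * a[i][j] for i in range(len(pi)))
              for j in range(len(pi))]
        path.append(states[max(range(len(pi)), key=pi.__getitem__)])
    return path

# Hypothetical cyclic 3-state chain: mass mostly rotates A -> B -> C -> A.
A_MAT = [[0.1, 0.8, 0.1], [0.1, 0.1, 0.8], [0.8, 0.1, 0.1]]
print(most_likely_trajectory([1.0, 0.0, 0.0], A_MAT, ["A", "B", "C"], 3))
```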

- First Passage Time

Given that $X_0 = S_i$, the total number $T_{ij}$ of steps taken by the Markov chain from state $S_i$ to reach state $S_j$ for the first time is the first passage time from $S_i$ to $S_j$. The commonly used quantity related to the first passage time is the mean first passage time. Let $T_{ij}$ be the first passage time from state $S_i$ to $S_j$; we then define $m_{ij}$ as the corresponding mean first passage time, represented as follows [31]:

$$m_{ij} = E\big[T_{ij} \mid X_0 = S_i\big].$$
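One way to sketch the mean first passage times is via the standard recurrence $m_{ij} = 1 + \sum_{k \neq j} a_{ik}\, m_{kj}$, solved here by fixed-point iteration on a hypothetical two-state chain:

```python
def mean_first_passage(a, target, iters=10000):
    """Mean first passage times m_{i, target} solving
    m_ij = 1 + sum_{k != j} a_ik * m_kj, by fixed-point iteration."""
    n = len(a)
    m = [0.0] * n
    for _ in range(iters):
        m = [1.0 + sum(a[i][k] * m[k] for k in range(n) if k != target)
             for i in range(n)]
    return m  # m[target] is the mean recurrence time of the target state

# Hypothetical 2-state chain; exact values: m[0] = 2.0, m[1] = 1.4.
A_MAT = [[0.5, 0.5], [0.2, 0.8]]
m_to_1 = mean_first_passage(A_MAT, target=1)
```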

- Transition Matrix Segmentation

For an absorbing Markov chain with $m$ transient states and $n$ absorbing states, the transition matrix can be segmented into the canonical form, with the $n \times n$ identity matrix $I_n$, written as [58]:

$$A = \begin{pmatrix} Q & R \\ \mathbf{0} & I_n \end{pmatrix},$$

where:

- Q is an m × m matrix;
- R is an m × n matrix;
- 0 is an n × m matrix of zeros;
- I_n is the n × n identity matrix.

A Fundamental Matrix $N = \{n_{ij}\}$ can then be defined, such that the element $n_{ij}$ of $N$ provides the expected number of times the process is in transient state $S_j$, given that the chain began in transient state $S_i$. The Fundamental Matrix N is the inverse of $(I - Q)$ and can therefore be calculated by the equation as follows [59,60,61,62]:

$$N = (I - Q)^{-1}.$$

- Expected Time to Absorption

Let $t_i$ represent the expected number of steps before the Markov chain is absorbed, given that the chain began in state $i$; then the column vector $\mathbf{t}$ (with an $i$th entry of $t_i$) can be calculated as:

$$\mathbf{t} = N \mathbf{1},$$

where $\mathbf{1}$ is a length-$m$ column vector whose entries are all 1.
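Since $\mathbf{t} = N\mathbf{1}$ with $N = (I - Q)^{-1}$ is equivalent to solving $\mathbf{t} = \mathbf{1} + Q\mathbf{t}$, a small sketch can avoid an explicit matrix inverse (the transient block Q below is hypothetical):

```python
def expected_steps_to_absorption(q, iters=10000):
    """t = N @ 1 where N = (I - Q)^{-1}; equivalently t solves t = 1 + Q t,
    computed here by fixed-point iteration (q is the transient block Q)."""
    m = len(q)
    t = [0.0] * m
    for _ in range(iters):
        t = [1.0 + sum(q[i][k] * t[k] for k in range(m)) for i in range(m)]
    return t

# Hypothetical 2 transient states feeding one absorbing state.
# Exact solution of the linear system here is t = (5, 5).
Q = [[0.5, 0.3], [0.2, 0.6]]
t = expected_steps_to_absorption(Q)
```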

#### 3.4.2. Markov Packages–R and MATLAB

#### 3.4.3. DTMC Modelling Steps

- 1. State space–AOI categories

For the **“content purchase”** screens:

- S = {“A,” “B,” “C,” “D,” “E,” “F,” “Z”} – Screen A;
- S = {“A,” “B,” “C,” “D,” “Z”} – Screen B.

For the **“content viewing”** screens:

- S = {“A,” “B,” “C,” “D,” “E,” …, “T”}.

- 2. Initial state probability distribution

$X_0$ refers to the state when $t = 0$, and the initial probability distribution is defined by $\pi_{init}(s_i) = P(X_0 = s_i)$ for each state $s_i \in S$. For the ‘**content viewing**’ scenario, among all 14 participants, their eye gaze fell on different areas at first sight. The details are listed below:

- Two participants looked at AOI-E;
- One participant looked at AOI-I;
- Two participants looked at AOI-L;
- Two participants looked at AOI-M;
- One participant looked at AOI-O;
- Four participants looked at AOI-R;
- Two participants looked at AOI-T.
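These first-fixation counts translate directly into the initial state vector, e.g. $P(X_0 = R) = 4/14 \approx 0.286$. A small sketch of the computation:

```python
from collections import Counter

# First-fixation AOI per participant, as listed above (14 participants).
first_fixations = (["E"] * 2 + ["I"] * 1 + ["L"] * 2 + ["M"] * 2 +
                   ["O"] * 1 + ["R"] * 4 + ["T"] * 2)

counts = Counter(first_fixations)
n = len(first_fixations)
pi_init = {aoi: counts[aoi] / n for aoi in sorted(counts)}

# pi_init["R"] = 4/14, matching the initial-probability column of Table 3.
assert abs(sum(pi_init.values()) - 1.0) < 1e-9
```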

- 3. Transition matrix

Each state change is represented as a pair $(X_t, X_{t+1})$ in the DTMC model. In that regard, the state change pairs can be generated by simply applying an iteration of a length-2 sliding window over each sequence. In this paper, the function was implemented in Java. An example of the function is displayed in Figure 11 below.

A frequency matrix was then created with entries $n_{ij}$, where $n_{ij}$ stands for the number of pairs $(X_t = s_i, X_{t+1} = s_j)$ found in the whole dataset. The calculation for each $p_{ij}$ is presented by the equation below [17]:

$$p_{ij} = P(X_{t+1} = s_j \mid X_t = s_i).$$

The maximum likelihood estimates $p_{ij}^{MLE}$ [17] were then generated by:

$$p_{ij}^{MLE} = \frac{n_{ij}}{\sum_{j} n_{ij}}.$$

A matrix of mean first passage times between each pair of states $s_i$ and $s_j$ (letting $s_i$ be the initial state) was also generated. Equation (13) was then used to construct the ‘most likely trajectory,’ which illustrates the most likely state at each stage of the Markov process. In this instance, it depicts the most prominent way in which participants move their eyes across the screen.

#### 3.4.4. Summary of Data Pipeline

The cleaned state sequences were then passed to either MATLAB (**“content purchase”** scenario) or R Studio (**“content viewing”** scenario). In MATLAB, functions such as “dtmc” and “graphplot” were implemented, while in R Studio we mostly used functions from the “markovchain” package, subsequently creating and observing the DTMC models.

#### 3.5. DTMC Visualisation

## 4. Results

#### 4.1. DTMC–“Content Purchase” Screens

#### 4.1.1. Transition Matrix–Screen A&B

#### 4.1.2. Expected Time to Payment–Screen A&B

The identity matrix $I_n$, in this case, will be of size m = 7, so we have N in Figure 16 below.

Each $t_i$ represents the expected number of steps to absorption from a given state $i$; in this case, the states are A, B, C, D, E, F and Z. Also, since the created discrete Markov chain follows a time step of 0.165 s between states, the expected number of steps for each state is multiplied by 0.165 to obtain the real expected time to absorption from each state. Table 2 shows the calculated expected time to absorption from each state.
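The step-to-time conversion is a single multiplication by the 0.165 s sampling interval; a sketch (the step counts below are hypothetical, not the values in Figure 16):

```python
TIME_STEP_S = 0.165  # time between successive gaze samples

def steps_to_seconds(expected_steps):
    """Convert expected steps to absorption into seconds, as done for Table 2."""
    return {state: n * TIME_STEP_S for state, n in expected_steps.items()}

# Hypothetical expected step counts per starting state.
secs = steps_to_seconds({"A": 47.8, "B": 48.2})
```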

#### 4.2. DTMC–“Content Viewing” Screens

The DTMC model of the **“content viewing”** screen is implemented in R Studio by the “markovchain” package and its associated R functions, as previously discussed in Section 3.4.2.

The most likely gaze trajectory is $S^{max} = \{R, \ldots, T, \ldots, O, \ldots\}$, calculated by simulating the Markov transitions in R. The trajectory is depicted and mapped to the real-world screen layout as the most likely gaze path on the screen, which is shown in Figure 22.

## 5. Discussion

- Capturing coordinates of AOI regions on the screen;
- Converting gaze point to AOI block letters;
- Raw data cleaning and transferring from Mongo DB to MySQL Workbench.

- 4. **“Content purchase”** scenario: when purchasing content (TV on Demand), what draws the eye? Is it the price, is it the quality, or is it something else?
- 5. **“Content viewing”** scenario: when a Content Discovery page first loads, what are customers viewing? Are they drawn to the hero carousel, the navigation, or something else?

- Eye tracking studies can provide valuable inputs to a human-centred design approach for TV applications;
- Eye tracking results can show the order in which people focus on different parts of a TV application page, which enables designers to review the information architecture and whether some pages are too complex;
- Heat maps derived from eye tracking and information on the order of focus can be used to re-assess “what should be the key function of this page?”

## 6. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## References

- Abreu, J.; Nogueira, J.; Becker, V.; Cardoso, B. Survey of Catch-up TV and Other Time-Shift Services: A Comprehensive Analysis and Taxonomy of Linear and Nonlinear Television. Telecommun. Syst.
**2017**, 64, 57–74. [Google Scholar] [CrossRef] - Mai, X.Y. Application of IP network and IPTV. Electron. World
**2016**, 6, 136. [Google Scholar] - Cesar, P.; Chorianopoulos, K. The Evolution of TV Systems, Content, and Users toward Interactivity. Found. Trends® Hum.–Comput. Interact.
**2007**, 2, 373–395. [Google Scholar] [CrossRef] - Wang, C.-H.; Chen, T.-M. Incorporating Data Analytics into Design Science to Predict User Intentions to Adopt Smart TV with Consideration of Product Features. Comput. Stand. Interfaces
**2018**, 59, 87–95. [Google Scholar] [CrossRef] - Adebiyi, S.O.; Oyatoye, E.O.; Mojekwu, J.N. Predicting Customer Churn and Retention Rates in Nigeria’s Mobile Telecommunication Industry Using Markov Chain Modelling. Acta Univ. Sapientiae Econ. Bus.
**2015**, 3, 67–80. [Google Scholar] [CrossRef] - Kim, Y.; Park, J.K.; Choi, H.J.; Lee, S.; Park, H.; Kim, J.; Lee, Z.; Ko, K. Reducing IPTV Channel Zapping Time Based on Viewer’s Surfing Behavior and Preference. In Proceedings of the 2008 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, Las Vegas, NV, USA, 31 March–2 April 2008; pp. 1–6. [Google Scholar]
- Tsai, W.-C.; Ko, C.-L.; Liu, C.-S. A Lightweight Personalized Image Preloading Method for IPTV System. In Proceedings of the 2017 19th International Conference on Advanced Communication Technology (ICACT), Pyeongchang, Republic of Korea, 19–22 February 2017; pp. 265–268. [Google Scholar]
- Schnabel, T.; Bennett, P.N.; Joachims, T. Improving Recommender Systems Beyond the Algorithm. arXiv
**2018**, arXiv:1802.07578. [Google Scholar] - Ingrosso, A.; Volpi, V.; Opromolla, A.; Sciarretta, E.; Medaglia, C.M. UX and Usability on Smart TV: A Case Study on a T-Commerce Application. In HCI in Business; Fui-Hoon Nah, F., Tan, C.-H., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2015; Volume 9191, pp. 312–323. ISBN 978-3-319-20894-7. [Google Scholar]
- Chennamma, H.R.; Yuan, X. A Survey on Eye-Gaze Tracking Techniques. arXiv
**2013**, arXiv:1312.6410. [Google Scholar] - Zhang, S.; McClean, S.; Garifullina, A.; Kegel, I.; Lightbody, G.; Milliken, M.; Ennis, A.; Scotney, B. Evaluation of the TV Customer Experience Using Eye Tracking Technology. In British HCI Conference 2018; British Computer Society: Belfast, Ireland, 2018. [Google Scholar]
- Molina, A.I.; Navarro, Ó.; Ortega, M.; Lacruz, M. Evaluating Multimedia Learning Materials in Primary Education Using Eye Tracking. Comput. Stand. Interfaces
**2018**, 59, 45–60. [Google Scholar] [CrossRef] - Cowen, L.; Ball, L.J.; Delin, J. An Eye Movement Analysis of Web Page Usability. In People and Computers XVI—Memorable Yet Invisible; Faulkner, X., Finlay, J., Détienne, F., Eds.; Springer: London, UK, 2002; pp. 317–335. ISBN 978-1-85233-659-2. [Google Scholar]
- Menges, R.; Tamimi, H.; Kumar, C.; Walber, T.; Schaefer, C.; Staab, S. Enhanced Representation of Web Pages for Usability Analysis with Eye Tracking. In Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, Warsaw, Poland, 14–17 June 2018; ACM: New York, NY, USA, 2018; pp. 1–9. [Google Scholar]
- Lin, S.S.J.; Hsieh, M.-Y. Differences between EFL Beginners and Intermediate Level Readers When Reading Onscreen Narrative Text with Pictures: A Study of Eye Movements as a Guide to Personalization. Int. J. Hum.–Comput. Interact.
**2019**, 35, 299–312. [Google Scholar] [CrossRef] - Chen, Z.; Zhang, S.; Mcclean, S.; Lightbody, G.; Milliken, M.; Kegel, I.; Garifullina, A. Using Eye Tracking to Gain Insight into TV Customer Experience by Markov Modelling. In Proceedings of the 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), Leicester, UK, 19–23 August 2019; IEEE: Leicester, UK; pp. 916–921. [Google Scholar]
- Spedicato, G. “Discrete Time Markov Chains with R.” The R Journal. R Package Version 0.6.9.7. 2017. Available online: https://journal.r-project.org/archive/2017/RJ-2017-036/index.html (accessed on 26 April 2022).
- Just, M.A.; Carpenter, P.A. A Theory of Reading: From Eye Fixations to Comprehension. Psychol. Rev.
**1980**, 87, 329–354. [Google Scholar] [CrossRef] - Shaw, R.; Crisman, E.; Loomis, A.; Laszewski, Z. The Eye Wink Control Interface: Using the Computer to Provide the Severely Disabled with Increased Flexibility and Comfort. In Proceedings of the Third Annual IEEE Symposium on Computer-Based Medical Systems, Chapel Hill, NC, USA, 3–6 June 1990; pp. 105–111. [Google Scholar]
- Costescu, C.; Rosan, A.; Brigitta, N.; Hathazi, A.; Kovari, A.; Katona, J.; Demeter, R.; Heldal, I.; Helgesen, C.; Thill, S.; et al. Assessing Visual Attention in Children Using GP3 Eye Tracker. In Proceedings of the 2019 10th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Naples, Italy, 23–25 October 2019; pp. 343–348. [Google Scholar]
- Kovari, A.; Katona, J.; Costescu, C. Evaluation of Eye-Movement Metrics in a Software Debugging Task Using GP3 Eye Tracker. Acta Polytech. Hung.
**2020**, 17, 57–76. [Google Scholar] [CrossRef] - Sulikowski, P.; Zdziebko, T. Deep Learning-Enhanced Framework for Performance Evaluation of a Recommending Interface with Varied Recommendation Position and Intensity Based on Eye-Tracking Equipment Data Processing. Electronics
**2020**, 9, 266. [Google Scholar] [CrossRef] - Behe, B.K.; Fernandez, R.T.; Huddleston, P.T.; Minahan, S.; Getter, K.L.; Sage, L.; Jones, A.M. Practical Field Use of Eye-Tracking Devices for Consumer Research in the Retail Environment. HortTechnology
**2013**, 23, 517–524. [Google Scholar] [CrossRef][Green Version] - Rihn, A.; Khachatryan, H.; Wei, X. Assessing Purchase Patterns of Price Conscious Consumers. Horticulturae
**2018**, 4, 13. [Google Scholar] [CrossRef] - Khachatryan, H.; Rihn, A.L. Using Innovative Biometric Measurements in Consumer Decision Making Research. Sci. Bus. (S2B) Res. Innov.
**2015**, 1, 107–125. [Google Scholar] - Graham, D.J.; Orquin, J.L.; Visschers, V.H.M. Eye Tracking and Nutrition Label Use: A Review of the Literature and Recommendations for Label Enhancement. Food Policy
**2012**, 37, 378–382. [Google Scholar] [CrossRef] - Joowon, L.; Jae-Hyeon, A. Attention to Banner Ads and Their Effectiveness: An Eye-Tracking Approach. Int. J. Electron. Commer.
**2012**, 17, 119–137. [Google Scholar] [CrossRef] - Lohse, G.L. Consumer Eye Movement Patterns on Yellow Pages Advertising. J. Advert.
**1997**, 26, 61–73. [Google Scholar] [CrossRef] - Reutskaja, E.; Nagel, R.; Camerer, C.F.; Rangel, A. Search Dynamics in Consumer Choice under Time Pressure: An Eye-Tracking Study. Am. Econ. Rev.
**2011**, 101, 900–926. [Google Scholar] [CrossRef] - Mottet, D.; Bootsma, R.; Guiard, Y.; Laurent, M. Fitts’ Law in Two-Dimensional Task Space. Exp. Brain Res.
**1994**, 100, 144–148. [Google Scholar] [CrossRef] - Fitts’s Law: The Importance of Size and Distance in UI Design, Interaction Design Foundation. 2019. Available online: https://www.interaction-design.org/literature/article/tts-s-law-the-importance-of-size-and-distance-in-ui-design (accessed on 25 April 2022).
- Gillan, D.J.; Holden, K.; Adam, S.; Rudisill, M.; Magee, L. How Should Fitts’ Law Be Applied to Human-Computer Interaction? Interact. Comput.
**1992**, 4, 291–313. [Google Scholar] [CrossRef] [PubMed] - Elder, J.H.; Goldberg, R.M. Ecological Statistics of Gestalt Laws for the Perceptual Organization of Contours. J. Vis.
**2002**, 2, 5. [Google Scholar] [CrossRef] [PubMed] - Wagemans, J.; Elder, J.H.; Kubovy, M.; Palmer, S.E.; Peterson, M.A.; Singh, M.; von der Heydt, R. A Century of Gestalt Psychology in Visual Perception: I. Perceptual Grouping and Figure–Ground Organization. Psychol. Bull.
**2012**, 138, 1172–1217. [Google Scholar] [CrossRef] [PubMed] - Gestalt Principles, Interaction Design Foundation. Available online: https://www.interactiondesign.org/literature/topics/gestalt-principles (accessed on 29 April 2022).
- Pernice, K. F-Shaped Pattern of Reading on the Web: Misunderstood, But Still Relevant (Even on Mobile), Nielsen Norman Group. 2017. Available online: https://www.nngroup.com/articles/f-shaped-pattern-reading-web-content/ (accessed on 1 May 2022).
- Fessenden, T. Horizontal Attention Leans Left, Nielsen Norman Group. 2017. Available online: https://www.nngroup.com/articles/horizontal-attention-leans-left/ (accessed on 1 May 2022).
- Pemberton, L.; Griffiths, R. Usability evaluation techniques for interactive television. In Proceedings of the HCI International 2003, Crete, Greece, 22–27 June 2003; Volume 4, pp. 882–886. [Google Scholar]
- Carroll, J.M. Interfacing Thought: Cognitive Aspects of Human-Computer Interaction; The MIT Press: Cambridge, MA, USA, 1987; p. 370. ISBN 0-262-03125-6. [Google Scholar]
- Card, S.K.; Moran, T.P.; Newell, A. The Psychology of Human-Computer Interaction, 1st ed.; Card, S.K., Ed.; CRC Press: Boca Raton, FL, USA, 2018; ISBN 978-0-203-73616-6. [Google Scholar]
- Sulikowski, P.; Zdziebko, T. Horizontal vs. Vertical Recommendation Zones Evaluation Using Behavior Tracking. Appl. Sci.
**2020**, 11, 56. [Google Scholar] [CrossRef] - Sziladi, G.; Ujbanyi, T.; Katona, J.; Kovari, A. The Analysis of Hand Gesture Based Cursor Position Control during Solve an IT Related Task. In Proceedings of the 2017 8th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Debrecen, Hungary, 11–14 September 2017; pp. 000413–000418. [Google Scholar]
- Katona, J.; Kovari, A. EEG-Based Computer Control Interface for Brain-Machine Interaction. Int. J. Onl. Eng.
**2015**, 11, 43. [Google Scholar] [CrossRef] - Gagniuc, P.A. Markov Chains: From Theory to Implementation and Experimentation; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2017; ISBN 978-1-119-38759-6. [Google Scholar]
- Wang, X.; Jiang, X.; Chen, L.; Wu, Y. KVLMM: A Trajectory Prediction Method Based on a Variable-Order Markov Model with Kernel Smoothing. IEEE Access
**2018**, 6, 25200–25208. [Google Scholar] [CrossRef] - De La Bourdonnaye, F.; Setchi, R.; Zanni-Merk, C. Gaze Trajectory Prediction in the Context of Social Robotics. IFAC-PapersOnLine
**2016**, 49, 126–131. [Google Scholar] [CrossRef] - Thomas, L.C. Time Will Tell: Behavioural Scoring and the Dynamics of Consumer Credit Assessment. IMA J. Manag. Math.
**2001**, 12, 89–103. [Google Scholar] [CrossRef] - Scholz, M. R Package Clickstream: Analyzing Clickstream Data with Markov Chains. J. Stat. Soft.
**2016**, 74, 1–17. [Google Scholar] [CrossRef] - Montgomery, A.L.; Li, S.; Srinivasan, K.; Liechty, J.C. Modeling Online Browsing and Path Analysis Using Clickstream Data. Mark. Sci.
**2004**, 23, 579–595. [Google Scholar] [CrossRef] - Ish-Shalom, S.; Hansen, S. Visualizing Clickstream Data as Discrete-Time Markov Chains; Stanford University: Stanford, CA, USA, 2016. [Google Scholar]
- Frhan, A.J. Website Clickstream Data Visualization Using Improved Markov Chain Modelling in Apache Flume. MATEC Web Conf.
**2017**, 125, 04025. [Google Scholar] [CrossRef] - Cegan, L. Intelligent Preloading of Websites Resources Based on Clustering Web User Sessions. In Proceedings of the 2015 5th International Conference on IT Convergence and Security (ICITCS), Kuala Lumpur, Malaysia, 24–27 August 2015; pp. 1–4. [Google Scholar]
- Garg, L.; McClean, S.; Meenan, B.; Millard, P. Non-Homogeneous Markov Models for Sequential Pattern Mining of Healthcare Data. IMA J. Manag. Math.
**2009**, 20, 327–344. [Google Scholar] [CrossRef] - Rabiner, L.R. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proc. IEEE
**1989**, 77, 257–286. [Google Scholar] [CrossRef] - Logofet, D.O.; Lesnaya, E.V. The Mathematics of Markov Models: What Markov Chains Can Really Predict in Forest Successions. Ecol. Model.
**2000**, 126, 285–298. [Google Scholar] [CrossRef] - Shamshad, A.; Bawadi, M.; Wanhussin, W.; Majid, T.; Sanusi, S. First and Second Order Markov Chain Models for Synthetic Generation of Wind Speed Time Series. Energy
**2005**, 30, 693–708. [Google Scholar] [CrossRef] - Gebali, F. Reducible Markov Chains. In Analysis of Computer and Communication Networks; Springer: Boston, MA, USA, 2008; pp. 1–32. ISBN 978-0-387-74436-0. [Google Scholar]
- Feres, R. Notes for Math 450 Matlab Listings for Markov Chains. 2007. Available online: http://www.math.wustl.edu/feres/Math450Lect04.pdf (accessed on 25 April 2022).
- Kemeny, J.G.; Snell, J.L. Finite Markov Chains; Undergraduate Texts in Mathematics; Springer: New York, NY, USA, 1976; ISBN 978-0-387-90192-3. [Google Scholar]
- Stationary and Limiting Distributions, Introduction to Probability, Statistics and Random Processes. Available online: https://www.probabilitycourse.com/chapter11/11_2_6_stationary_and_limiting_distributions.php (accessed on 24 April 2022).
- Hunter, J.J. Accurate Calculations of Stationary Distributions and Mean First Passage Times in Markov Renewal Processes and Markov Chains. Spec. Matrices
**2016**, 4, 151–175. [Google Scholar] [CrossRef] - Maltby, H. Absorbing Markov Chains. Available online: https://brilliant.org/wiki/absorbing-markov-chains/ (accessed on 13 April 2022).
- MathWorks. Create Discrete-Time Markov Chain. Available online: https://uk.mathworks.com/help/econ/dtmc.html (accessed on 26 April 2022).
- MathWorks. Determine Markov Chain Asymptotics. Available online: https://uk.mathworks.com/help/econ/dtmc.asymptotics.html (accessed on 26 April 2022).

**Figure 1.** BT Player Screenshot [16].

**Figure 3.** A complete view of the test environment [11].

**Figure 8.** Conversion from Gaze Coordinate to Block Label [16].

**Figure 12.** Data pipeline throughout our study [16].

**Figure 15.** Screen A: (**a**) Section Q in the transition matrix; (**b**) Section R in the transition matrix.

**Figure 17.** Screen B: (**a**) Section Q in the transition matrix; (**b**) Section R in the transition matrix.

**Figure 20.** Transition matrix–“content viewing” screen [16].

**Figure 21.** Matrix of mean first passage time–“content viewing” screen [16].

**Figure 22.** Most likely gaze trajectory–“content viewing” screen [16].

| R Statement | Function Description |
|---|---|
| `R > dtmc <- new("markovchain", transitionMatrix = A, states = L)` | Create an object of the “markovchain” class and, e.g., name it “dtmc” as an R variable |
| `R > summary(dtmc)` | Display properties and classification of states |
| `R > communicatingClasses(dtmc)` | Display communicating states |
| `R > absorbingStates(dtmc)` | Display absorbing states |
| `R > steadyStates(dtmc)` | Generate the steady-state vector (see Equation (9)) |
| `R > meanFirstPassageTime(dtmc)` | Create a matrix of the mean first passage times |

| AOI/State Name | Expected Time to Absorption |
|---|---|
| A | 7.88 s |
| B | 7.96 s |
| C | 7.81 s |
| D | 7.84 s |
| E | 7.85 s |
| F | 7.86 s |
| Z | 7.77 s |

| AOI/State Name | Expected Time to Absorption |
|---|---|
| A | 9.67 s |
| B | 9.67 s |
| C | 9.59 s |
| D | 9.54 s |
| Z | 9.49 s |

| State | Initial Probability | Steady Probability |
|---|---|---|
| B | 0 | 0.013 |
| D | 0 | 0.001 |
| E | 0.143 | 0.018 |
| F | 0 | 0.010 |
| G | 0 | 0.006 |
| H | 0 | 0.002 |
| I | 0.071 | 0.005 |
| J | 0 | 0.008 |
| K | 0 | 0.009 |
| L | 0.143 | 0.032 |
| M | 0.143 | 0.075 |
| N | 0 | 0.047 |
| O | 0.071 | 0.303 |
| P | 0 | 0.059 |
| Q | 0 | 0.005 |
| R | 0.286 | 0.088 |
| S | 0 | 0.038 |
| T | 0.143 | 0.281 |


© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Chen, Z.; Zhang, S.; McClean, S.; Hart, F.; Milliken, M.; Allan, B.; Kegel, I. Process Mining IPTV Customer Eye Gaze Movement Using Discrete-Time Markov Chains. *Algorithms* **2023**, *16*, 82.
https://doi.org/10.3390/a16020082
