Article
Peer-Review Record

Inefficient Eye Movements: Gamification Improves Task Execution, But Not Fixation Strategy

by Warren R. G. James 1, Josephine Reuther 1, Ellen Angus 1, Alasdair D. F. Clarke 2 and Amelia R. Hunt 1,*
Reviewer 1: Carrick Williams (signed report)
Reviewer 2: Anonymous
Submission received: 29 July 2019 / Revised: 9 September 2019 / Accepted: 12 September 2019 / Published: 18 September 2019
(This article belongs to the Special Issue Selected Papers from the Scottish Vision Group Meeting 2019)

Round 1

Reviewer 1 Report

The authors present a single experiment examining the effect of “gamification” on eye movements. The authors compared fixation strategies under three conditions (a forced optimal strategy, a control/no-information condition, and a gamification condition) to determine whether participants would use an optimal strategy based on the distance that the eyes would have to move. Critically, the authors tailored the optimal strategy to each participant’s ability to see peripheral targets. They found that gamifying the task failed to encourage the optimal eye movement strategy.

 

Overall, I think the authors present a compelling case that adding a gamification element was insufficient to induce an optimal strategy. Their argument for the lack of an effect is based on the clear overlap of the eye movement strategies of the control and motivated groups. It is possible, however, that the level of motivation felt by the participants did not reach the threshold needed to induce the optimal strategy, in which case the rather blanket claim that gamification is insufficient would be incorrect. It is impossible to know from this game alone whether the authors sufficiently motivated their participants to make optimal eye movements. The authors point out several possible reasons that accurate performance and optimal eye movements are not perfectly correlated, but I don’t think they can have absolutely “ruled out” adequate motivation (lines 320 and 321). I don’t believe that this is a fatal flaw, but the claims may need to be tempered.

 

Minor points

The caption of Figure 4 could use some additional details. The figure describing the results displays several graphs, but the caption does not say what each graph is or what the various lines on each graph represent. I was able to determine this information after some examination of the methods and text, but adding details to the caption will aid readers in their interpretation of the figure.

Line 275: I think the authors mean Figure 4, not Figure 5. Figure 5 does not describe fixation strategies.

Figures 1 and 2 are not particularly helpful in explaining the task that is being performed. Perhaps if the authors provided a series of images showing the sequence of events, the figures would explain visually what was occurring.

 


Signed,

 

Carrick Williams

Author Response

Point 1: The caption of Figure 4 could use some additional details. The figure describing the results displays several graphs, but the caption does not say what each graph is or what the various lines on each graph represent. I was able to determine this information after some examination of the methods and text, but adding details to the caption will aid readers in their interpretation of the figure.

Response 1: We agree with the reviewer that additional details in the figure captions would ease understanding, and we have expanded the captions accordingly. Each sub-plot in the figure shows the data for one participant. The plots show the proportion of trials in which the participant fixated one of the side boxes at each of the separations they were tested at. The black line on each plot shows where the participant would have fixated, at that distance, had they been using the optimal strategy.

Point 2: Line 275: I think the authors mean Figure 4, not Figure 5. Figure 5 does not describe fixation strategies.

Response 2: Thank you for pointing this out. We have fixed this in the manuscript.

Point 3: Figures 1 and 2 are not particularly helpful in explaining the task that is being performed. Perhaps if the authors provided a series of images showing the sequence of events, the figures would explain visually what was occurring.

Response 3: We agree that a series of images would make it easier to understand the task sequence. We have altered these figures accordingly.

Reviewer 2 Report

This nice article demonstrates that increased motivation, here achieved by gamifying an experimental task, does not necessarily translate into the selection of more optimal decision strategies. The paper is well written and the results seem robust. I think it will make a nice addition to demonstrations of how humans can behave irrationally even in seemingly simple visuo-motor tasks. I have a few comments, which I think the authors should be able to address in a revision.

- The use of Beta regression for the rate of success is a bit far from standard (given that the dependent variable consists of individual trials with binary outcomes - at least for the actual data and not the expected performances), but I think it is a valid approach. I do not, however, understand the choice of the prior. I see that the authors chose the prior to regularize the estimates, i.e. to introduce some skepticism toward extreme values. However, as illustrated in Fig. 3, the prior places most of the probability mass around 50% accuracy. Yet the task is designed in a way that higher performance should be expected: even if an observer were to make purely random fixation choices, performance should still be higher than 50%. Moreover, I think that the expected performance can in principle be computed exactly from the psychometric function that (I assume) has been fit to estimate the switch point; in other words, part 1 of the experiment could be used to determine a better prior for this analysis.

- p. 8, line 260 states that "models including Delta(Visual Angle) are available in the Supplementary material"; however, I couldn't find any Supplementary material (not even in the pre-print repository: https://osf.io/kh2sy/).

- Perhaps these were presented in the Supplementary material that I couldn't find, but it would be nice to see some of the psychometric functions used to estimate the switch point for individual participants.

- The description of how the expected success rate was computed is a bit unclear (p. 9, bottom). I assumed it was based on psychometric functions fitted to part 1, but this is not explicitly mentioned. "(...) but they would have a 50% (c) chance due to the nature of the task." could be rephrased to indicate more clearly that 'c' is the probability that the target will appear in either of the two boxes.

- The definition of the highest (posterior) density interval (HPDI) [p. 8, line 258] is inexact: the X% HPDI is the *narrowest* interval that contains X% of the posterior probability mass. The distinctive element relative to other types of CI, such as percentile CIs, is the "narrowest" bit.

- This may be a personal preference, but I feel that a paper should be as self-contained as possible. I would suggest that the authors add slightly more information about the control and optimal groups used as comparisons. E.g.: was the setup used in the control group identical in all respects? (Any difference should be mentioned.) How were participants in the optimal group instructed where to fixate?

- Abstract, line 9: "the absence (of) this additional motivation"

 

Author Response

Point 1: The use of Beta regression for the rate of success is a bit far from standard (given that the dependent variable consists of individual trials with binary outcomes - at least for the actual data and not the expected performances), but I think it is a valid approach. I do not, however, understand the choice of the prior. I see that the authors chose the prior to regularize the estimates, i.e. to introduce some skepticism toward extreme values. However, as illustrated in Fig. 3, the prior places most of the probability mass around 50% accuracy. Yet the task is designed in a way that higher performance should be expected: even if an observer were to make purely random fixation choices, performance should still be higher than 50%. Moreover, I think that the expected performance can in principle be computed exactly from the psychometric function that (I assume) has been fit to estimate the switch point; in other words, part 1 of the experiment could be used to determine a better prior for this analysis.

Response 1: We have changed the model to incorporate this suggestion. The prior is now centred on the average success rate calculated from the participants’ session 1 data. The figure has been updated to reflect this.
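For illustration only, the snippet below is a minimal sketch of one common way to centre a Beta prior on an empirically observed mean, using the mean-precision parameterisation (alpha = mu * phi, beta = (1 - mu) * phi). It is not the authors' actual model code, and the mean and precision values shown are placeholders, not the values used in the manuscript.

```python
# Illustrative sketch (not the authors' model code): centre a Beta prior on an
# observed mean success rate via the mean-precision parameterisation.
from scipy import stats

mu_hat = 0.75   # placeholder: average success rate from the session 1 data
phi = 10.0      # placeholder precision; larger values give a tighter prior

alpha = mu_hat * phi          # Beta shape parameter a
beta = (1.0 - mu_hat) * phi   # Beta shape parameter b

prior = stats.beta(alpha, beta)
print(prior.mean())           # equals mu_hat by construction
print(prior.interval(0.95))   # central 95% interval of this prior
```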

Point 2: p. 8, line 260 states that "models including Delta(Visual Angle) are available in the Supplementary material"; however, I couldn't find any Supplementary material (not even in the pre-print repository: https://osf.io/kh2sy/). Perhaps these were presented in the Supplementary material that I couldn't find, but it would be nice to see some of the psychometric functions used to estimate the switch point for individual participants.

Response 2: Thank you for making us aware of this. The supplementary material has been uploaded to the pre-print repository: https://osf.io/kh2sy/

Point 3: The description of how the expected success rate was computed is a bit unclear (p. 9, bottom). I assumed it was based on psychometric functions fitted to part 1, but this is not explicitly mentioned. "(...) but they would have a 50% (c) chance due to the nature of the task." could be rephrased to indicate more clearly that 'c' is the probability that the target will appear in either of the two boxes.

Response 3: The text has been altered to make it clear that expected accuracy was calculated using the psychometric function fit to the part 1 data. We have also tidied up the example so that it is clearer exactly how this value was calculated.
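As a hedged sketch of our reading of this calculation (not the authors' code): assuming that when a participant fixates one of the two side boxes, the target appears in that box with probability c (0.5 in this task) and is then detected essentially perfectly, while otherwise it appears in the far box and is detected with the probability given by the part 1 psychometric function at that separation, the expected success rate combines the two terms as follows.

```python
# Hedged sketch of the expected-accuracy calculation described above.
def expected_accuracy(p_detect_far: float, c: float = 0.5) -> float:
    """Expected success rate when fixating one side box.

    p_detect_far: assumed detection probability at the unfixated box, read off
                  the psychometric function fitted to the part 1 data.
    c:            probability the target appears in either box (0.5 here).
    """
    return c * 1.0 + (1.0 - c) * p_detect_far

# Example: if the part 1 fit gives 60% detection at the far box,
# the expected accuracy for a side-fixation strategy would be 0.8.
print(expected_accuracy(0.6))  # 0.8
```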

Point 4: The definition of the highest (posterior) density interval (HPDI) [p. 8, line 258] is inexact: the X% HPDI is the *narrowest* interval that contains X% of the posterior probability mass. The distinctive element relative to other types of CI, such as percentile CIs, is the "narrowest" bit.

Response 4: This has been fixed in the text.
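To make the corrected definition concrete, here is a minimal, generic sketch of the "narrowest interval" idea applied to posterior draws; it is not the specific routine used for the analyses in the manuscript.

```python
# Minimal sketch of the definition above: the X% HPDI is the narrowest interval
# containing X% of the posterior draws (unlike a central/percentile interval).
import numpy as np

def hpdi(samples: np.ndarray, prob: float = 0.95) -> tuple[float, float]:
    """Return the narrowest interval containing `prob` of the samples."""
    sorted_samples = np.sort(samples)
    n = len(sorted_samples)
    window = int(np.ceil(prob * n))                       # draws the interval must cover
    widths = sorted_samples[window - 1:] - sorted_samples[: n - window + 1]
    start = int(np.argmin(widths))                        # left edge of narrowest window
    return float(sorted_samples[start]), float(sorted_samples[start + window - 1])

# Example with skewed draws: the HPDI shifts toward the mode,
# whereas a percentile interval would not.
draws = np.random.default_rng(0).beta(2, 8, size=10_000)
print(hpdi(draws, 0.89))
```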

Point 5: This may be a personal preference, but I feel that a paper should be as self-contained as possible. I would suggest that the authors add slightly more information about the control and optimal groups used as comparisons. E.g.: was the setup used in the control group identical in all respects? (Any difference should be mentioned.) How were participants in the optimal group instructed where to fixate?

Response 5: We have added some clarification of the methods used in the comparison study. The setup was the same as in the current study, and this has been made clear. We have also added a couple of sentences explaining how participants in the optimal group were instructed where to fixate on each trial and how this was determined.

Point 6: Abstract, line 9: "the absence (of) this additional motivation"

Response 6: We have fixed this in the text.
