"False"
Skip to content
printicon
Main menu hidden.

Development and evaluation of methods for collecting and analysing interval sequences

Research project

Interval sequences represent the time intervals between successive beats when a person attempts to produce a steady pulse. Various aspects of variability in interval sequences have been found to correlate with cognitive performance and with neuroanatomical properties of the brain.

Human timing performance is a highly significant aspect of our psychological functioning. As with many other attempts to measure psychological constructs, the use of different methods and instruments incurs unwanted error variability that reduces reliability. Moreover, specific choices of method may rest on more or less correct or relevant theory, which raises issues of validity: what are we actually measuring? We will therefore systematically compare a number of different data collection and analysis methods with respect to the reliability of isochronous serial interval data from human participants.

Project overview

Project period:

2009-01-01 – 2011-12-31

Participating departments and units at Umeå University

Department of Psychology, Faculty of Social Sciences

Research area

Neurosciences, Psychology

Project description

The purposes of this project, as stated above, will be achieved by conducting a series of experiments with human adults as participants. In all experiments we will compute a range of dependent measures tapping different aspects of timing behaviour. These include total, local, and drift variability (Madison, 2001b; Jucaite et al., 2007; Ullén et al., 2008), slope analysis (the slope of timing variability as a function of the interval to be timed), and “worst and best case performance”, as reported in current publications (Madison et al., 2008).
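As a rough illustration of what such measures might look like, the sketch below computes three plausible summary statistics from an inter-response interval sequence. The function name and the particular definitions (standard deviation, mean absolute successive difference, linear trend) are assumptions for illustration; the exact decompositions used in the cited papers may differ.

```python
import numpy as np

def interval_variability(intervals):
    """Summarise an inter-response interval sequence (e.g., in ms).

    These three measures loosely mirror the total/local/drift
    decomposition mentioned in the text; the exact definitions
    in the cited papers may differ.
    """
    x = np.asarray(intervals, dtype=float)
    total = x.std(ddof=1)                            # overall variability
    local = np.abs(np.diff(x)).mean()                # interval-to-interval variability
    slope, _ = np.polyfit(np.arange(len(x)), x, 1)   # linear drift per interval
    return {"total_sd": total, "local_mad": local, "drift_slope": slope}
```

For a nominally isochronous sequence, `total_sd` reflects all deviations from the mean interval, `local_mad` emphasises short-term fluctuation, and `drift_slope` captures slow tempo change within a trial.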

We will also obtain sufficiently long response sequences that long-range dependence can be computed by means of the frequency-domain maximum likelihood estimate (Beran, 1994), the detrended fluctuation parameter (Peng et al., 1994), and box-plotting methods (e.g., Sevcik, 1998). These methods have all been applied in our previous articles (Madison, 2000b; 2004b; 2006; Madison et al., 2008a).
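Of these, the detrended fluctuation parameter is the easiest to sketch. The following is a minimal DFA implementation in the spirit of Peng et al. (1994); the box sizes and fitting range here are illustrative choices, not those of the cited studies.

```python
import numpy as np

def dfa_alpha(x, box_sizes=None):
    """Detrended fluctuation analysis (Peng et al., 1994), minimal sketch.

    Returns the scaling exponent alpha: roughly 0.5 for white noise,
    roughly 1.0 for 1/f noise. Box sizes and the log-log fitting
    range are choices that serious applications make carefully.
    """
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                # integrated (profile) series
    n = len(y)
    if box_sizes is None:
        box_sizes = np.unique(
            np.logspace(np.log10(4), np.log10(n // 4), 12).astype(int))
    fluct = []
    for s in box_sizes:
        f2 = []
        for b in range(n // s):
            seg = y[b * s:(b + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)       # linear detrend per box
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))     # RMS fluctuation F(s)
    alpha, _ = np.polyfit(np.log(box_sizes), np.log(fluct), 1)
    return alpha

rng = np.random.default_rng(1)
alpha = dfa_alpha(rng.normal(size=4000))       # expect roughly 0.5 for white noise
```

The slope of log F(s) against log s gives the exponent; values above 0.5 indicate the kind of long-range dependence discussed in the text.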

Based on practical feasibility and the logic that follows from outcomes of earlier experiments within the project, each additional experiment will incorporate one or more of the following independent variables. They will be factorially combined if results indicate interactions among variables.

1. Mode of movement.
We will consider at least the following modes, all previously employed: (a) tapping a finger against a force plate, (b) beating a drumstick against a force plate, and (c) wiggling a finger in a light beam (which provides the least sensory-tactile feedback). If differences between these modes so indicate, we will (d) try to tease out the reason by controlling hand-wrist effects through restricting certain movements (i.e., decreasing the degrees of freedom).

2. Degree of sensory feedback.
Recent research indicates that small temporal deviations below the perceptual threshold are on average veridically copied in participants’ subsequent responses (e.g., Madison et al., 2004; Repp, 2000; Stephan et al., 2002). This indicates that the traditional open-loop models of timing do not apply (e.g., Vorberg et al., 1996), but rather that sensory information is automatically used when available. It is therefore possible that the participants’ own responses affect subsequent responses, which means that sensory feedback may play a crucial role for the variability of interval sequences. We will follow up our previous study of this (Madison et al., 2008a) with improved designs involving auditory masking for better control of feedback and larger numbers of participants.

3. Auditory stimulus intensity.
Another way to address the issue of mechanism mentioned above is to vary the stimulus intensity: if feedback is not involved in the process (i.e., open-loop), then intensity should be of no consequence. We have devised a particularly effective design for this question, in which we compare timing variability under three conditions: production (in which the intensity of the sounds related to one’s own beats is manipulated across a number of levels), synchronisation, and “off-beat” or anti-phase synchronisation (e.g., Chen, Ding, & Kelso, 2001).

4. Training.
Performance is most likely suboptimal the first time one performs a task, and previous research indicates that omitting at least a few minutes of initial data yields smaller average variability across trials. This also depends on the amount of variability within the experimental session, for example if many levels of interval duration are employed (Madison, 2001b; 2006; Madison et al., 2008). To elucidate this, we aim to compute the marginal improvement function for various experimental designs, and to attempt to fit such a function across designs as well, in order to obtain a guide for designing future experiments in this field.
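One hypothetical form for such a marginal improvement function is an exponential decay of variability toward an asymptote with practice. The sketch below fits that form by grid search over the time constant; the functional form, parameter names, and fitting procedure are assumptions for illustration, not taken from the cited work.

```python
import numpy as np

def fit_improvement(trial_var, taus=np.linspace(1, 50, 200)):
    """Fit v(t) = a + b * exp(-t / tau) to per-trial variability.

    A hypothetical 'marginal improvement function': variability decays
    from an initial level (a + b) toward an asymptote a as training
    proceeds. tau is found by grid search; a and b by least squares.
    """
    v = np.asarray(trial_var, dtype=float)
    t = np.arange(len(v))
    best = None
    for tau in taus:
        X = np.column_stack([np.ones(len(t)), np.exp(-t / tau)])
        coef, *_ = np.linalg.lstsq(X, v, rcond=None)
        sse = np.sum((v - X @ coef) ** 2)       # fit quality for this tau
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], tau)
    _, a, b, tau = best
    return {"asymptote": a, "initial_excess": b, "tau_trials": tau}

# Synthetic session: variability settles from ~30 ms toward ~20 ms
rng = np.random.default_rng(2)
t = np.arange(40)
v = 20 + 10 * np.exp(-t / 8) + rng.normal(0, 0.5, 40)
fit = fit_improvement(v)
```

The fitted `tau_trials` would indicate how many initial trials to discard, and comparing fits across designs is one way to pursue the across-design function mentioned above.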

5. Motivation.
We have informally observed huge differences in variability, and in particular in occasional gross deviations, between groups of participants who apparently felt more or less motivated to perform these tasks. Producing a sequence of intervals requires some concentration, at least to the extent that distracting thoughts must be avoided. This is particularly common and apparent in children diagnosed with ADHD, in that they occasionally stop producing, leaving pauses of anywhere from fractions of a second to several seconds (Ben-Pazi, Gross-Tsur, Bergman, & Shalev, 2003; Jucaite et al., 2007).

At the other end of this spectrum, we have found a group of adults who were interested in their own performance, liked to participate, and were willing to schedule 10 future 1.5-hour sessions over a period of several weeks (Madison, 2006). We will therefore address the effects of motivation by some of the following means: (a) compare people who have an interest in this task with people who do not (controlling, of course, for their accumulated amount of training across sessions), (b) compare classical experimental trials with no feedback of task performance (in terms of some numeric measure of variability or deviation from the target interval) with both trial-by-trial feedback and continuous feedback (occurring every few seconds), (c) reward better performance, and (d) use different, more or less motivating instructions.
Latest update: 2018-06-20