Choice Reaction Time


October 22nd, 2020. In development - these notes about the new cRT function are for internal use.


final explanation goes here...
English
The cRT (Choice Reaction Time) is the average reaction time with fast responses excluded. It is evaluated only for tests with a high number of commission errors.

German
German translation goes here...

Code operations explained:
    '----------------------------------------------------------------
    'cRT (choice RT) calculation
    '  3/13/2019 5:41PM: new analysis number, based on Siegfried's idea
    '  March 2019: renamed from pRT (pure RT) to cRT (choice RT)
    dim cRT
    cRT=0
    const cRT_minCOMM=29 'parameter: number of commission errors above which cRT is calculated
    const cRT_chopRT=270 'parameter: reaction-time cutoff; RTs below this are disregarded when calculating cRT
    if stats_counter_commissionerrors>cRT_minCOMM then
      cRT=get_mean(aryCorrelatedRT_Correct,0,999,cRT_chopRT)
      'debug output (disabled): show cRT for selected accounts
      if 0 and (session("username")="marco" or session("username")="eegclinic" or session("localhost")) then
        response.write "<br><font size=5 color=red> cRT="
        response.write formatnumber(cRT,1)
        response.write " ms </font>"&br
      end if
    end if
    '-------------------------------------------------------------------
In English:
If there are more than 29 commission errors, then:
   exclude data points with RT < 270 ms, and
   take the mean of the remaining reaction times.
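The get_mean helper is defined elsewhere; the following is only a minimal sketch of the behavior assumed here (mean of the correct-response RTs within a range, ignoring anything below the chop cutoff). The name get_mean_sketch and the reading of the 0/999 arguments as lower/upper bounds are assumptions, not the actual implementation.

    'Sketch only: assumed behavior of the get_mean helper used above.
    'Returns the mean of the RTs in aryRT that lie between rtMin and rtMax
    'and are not below chopRT; returns 0 if nothing survives the filter.
    function get_mean_sketch(aryRT, rtMin, rtMax, chopRT)
      dim i, total, n
      total=0
      n=0
      for i=0 to ubound(aryRT)
        if aryRT(i)>=rtMin and aryRT(i)<=rtMax and aryRT(i)>=chopRT then
          total=total+aryRT(i)
          n=n+1
        end if
      next
      if n>0 then
        get_mean_sketch=total/n
      else
        get_mean_sketch=0
      end if
    end function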



-------------------------------------------------------------------------------
Text snippets from our communication

Yes, and we do this only in cases where there is a sufficient number of commission errors to actually shift the mean RT in any significant way, and only if the mean RT for the commission errors is sufficiently low.
You had suggested a cutoff of 29 errors, which seems fine, although I don’t remember just how we got to that number.
As for the cutoff in mean commission error RT ("ceRT"?), I don’t know where we are, but presumably something like 300 is not a bad choice.


Thinking through the problem:  We only have the problem in the fast periods. I calculate that there are just under 200 Go trials in the two fast periods. If there are more than fifteen percent of these that are pure RTs rather than choice RTs, then the mean RT is already being altered in a significant way. That’s thirty events. But we have the 7:2 ratio to deal with. So thirty pure RT events implies only nine fast commission errors, on average. So we have to have a much tighter screen on the allowable number of fast commission errors. The problem we identify is a factor of about three worse than what we get to see.

Presently, you are not counting just fast commission errors. You are counting all of them. So I would suggest twenty as a good number to start with. This would then be combined with a screen on the basis of mean commission error RT, the ceRT, of 300msec. So we would have the combination of a threshold on the total number of commission errors and a threshold of 300msec of mean ceRT. This then determines that the commission error distribution is in fact dominated by the fast commission errors.

But this is not the only way we could go. One could instead go just with a threshold on the number of fast commission errors, i.e. those below 300msec, in combination with the mean ceRT. After all, the total number of commission errors is not really a good discriminant for us here. Only the fast ones matter to us. So we count them. Then we apply the threshold that if there are ten or more of these, we make the correction for fast responses.


What does the threshold on ceRT give us in that situation? If someone makes lots of commission errors, there is a good probability that many of them will be fast…possibly enough to pass our threshold. But these cases are very small in number, and a correction for fast responses would not be a flaw in these cases. So perhaps we don’t need to have the ceRT threshold at all. If we have more than ten fast commission errors, we simply make the correction… We are saying, in effect, that if there are ten fast commission errors then we already know that they are bunched at low RT—except for an exceedingly small number of cases that are highly dysregulated, and in which such a correction would not present a significant problem. This would certainly simplify things, and might be a good way to get started. The less complexity the better.



Commission errors constitute our measure of impulsivity. Impulsivity is the propensity to act prior to deciding. That allows the reaction to occur faster than when a proper choice is being made. Sometimes, of course, commission errors occur for other reasons, and can therefore also be seen at large reaction times. But overall, there is going to be a tendency for commission errors to occur at shorter delay times than correct responses.
As you observed, the commission errors are bunched up at low reaction times, but only in the high-demand phases of the test.
That is as expected. The strategy of just hitting the target whenever it comes up becomes attractive only when the targets are in fact plentiful. But then one also makes a lot of commission errors, and that is how we detect the strategy. The result is that the reaction time score cannot be trusted. After twenty sessions, you will likely find that this problem has gone away, and that he may very well score lower in reaction time score. But that will not indicate that he actually slowed down—but rather that the pre-test score was simply invalid. So you can explain this to him and thus prepare him for that outcome before it happens.

In the attached case, there are some 29 fast commission errors. In the two high-demand periods, there are about 100 fast events plotted.
On the assumption of a pure reaction time event, one would expect (29 x 3.5) ≈ 100 such fast responses.


This could be the formula for correcting such data: count the fast commission errors and eliminate 3.5 times that many events from the short-RT end of the RT distribution.
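A rough sketch of that correction, assuming the correct-response RTs are available as an array and the fast commission errors have already been counted (the function and variable names here are hypothetical):

    'Sketch only: drop the fastest (3.5 x nFastCommErr) correct-response RTs
    'before taking the mean. aryRT and nFastCommErr are assumed inputs.
    function corrected_mean_rt(aryRT, nFastCommErr)
      dim i, j, tmp, nDrop, total, n
      'simple in-place sort (ascending) so the fastest RTs come first
      for i=0 to ubound(aryRT)-1
        for j=i+1 to ubound(aryRT)
          if aryRT(j)<aryRT(i) then
            tmp=aryRT(i) : aryRT(i)=aryRT(j) : aryRT(j)=tmp
          end if
        next
      next
      nDrop=round(3.5*nFastCommErr)   'number of suspect fast events to discard
      total=0
      n=0
      for i=nDrop to ubound(aryRT)    'skip the nDrop fastest events
        total=total+aryRT(i)
        n=n+1
      next
      if n>0 then
        corrected_mean_rt=total/n
      else
        corrected_mean_rt=0
      end if
    end function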


A second option is to simply chop off the RT distribution at something like 250msec when we see large numbers of fast errors, and that can be our first cut at a remedy. That should already go a long way toward correcting the error in mean RT.


A third option is to do #2, but to allow for different cutoffs for different ages. Once you have the formalism set, I can give you target cutoffs for each of the early years.



So at this point, we can simply use text to say that if cRT is significantly different from the conventional mean RT that has been calculated, then the latter is invalid. But as soon as you can, you can replace the original mean RT with the new cRT in the scoring. Once this update exists, we are going to be able to show much better outcomes for RT than before, because most of the cases in which RT scores start out highly elevated are probably afflicted with this problem. So these cases are screwing up our summary data, particularly since the movement in RT score is usually large in these cases, whereas our improvements in RT score are usually more modest.
-----
Let’s think through the best way to handle the fast commission errors.

A number of criteria may be relevant here:
  1. The total number of commission errors
  2. The number of fast commission errors (by some suitable criterion)
  3. The number of anticipatory errors
  4. The mean RT for commission errors, ceRT
In my last message on this topic from April of last year, I made the case that even a 15% contribution of fast events will skew the RT results. With 200 Go trials in the fast periods, that means thirty events. Thirty fast events would be reflected in 30/3.5 ≈ 8-9 errors. So if we have eight fast commission errors, we already have a problem. This means it is sufficient to focus on the fast commission errors, quite irrespective of the total number of commission errors.

When we looked at the distribution of RTs for the younger ages, we saw a peak at ~300msec for the pure reaction time events. This distribution is not the same as that of the fast commission errors! So we know that the commission errors tend to fall in the fast end of the distribution of pure reaction times. This is significant all by itself. If fast reactions were purely random with respect to accuracy, that distinction should not exist. Yikes. I had not noticed this before. This reminds me of a paper by Tobias Egner—former graduate student of John Gruzelier who was involved in the study with the music students—who argued that under pressure choices are made more sub-cortically. This could be evidence for that proposition—wild.


One algorithm that suggests itself is the following:
If there are 8 fast commission errors or more (FST), the fastest RT events numbering 3.5 x (#FST) shall be removed from the calculation of mean RT.
This algorithm is undoubtedly difficult to code. It is probably quite unnecessary to go to this level of refinement just to clean up the mean RT calculation.

A second algorithm that suggests itself is the following:
If 1) there are 8 fast commission errors or more, and 2) the mean ceRT is less than 300msec, then all RT events of less than 300msec shall be regarded as suspect and excluded from the calculation of mean RT.
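A minimal sketch of how this second algorithm might plug into the existing code, assuming counters for the fast commission errors and the mean ceRT are available (n_fast_commission_errors and mean_ceRT are hypothetical names), and assuming get_mean filters out RTs below its last argument as in the snippet above:

    'Sketch only: second algorithm. n_fast_commission_errors and mean_ceRT
    'are assumed to have been computed earlier in the analysis.
    const cRT2_minFastCE=8   'minimum number of fast commission errors
    const cRT2_maxCeRT=300   'mean commission-error RT must be below this (msec)
    const cRT2_chopRT=300    'RTs below this are excluded from the mean (msec)
    dim cRT2
    cRT2=0
    if n_fast_commission_errors>=cRT2_minFastCE and mean_ceRT<cRT2_maxCeRT then
      cRT2=get_mean(aryCorrelatedRT_Correct,0,999,cRT2_chopRT)
    end if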

A third algorithm is a variation on the second, with the incorporation of age dependence, according to the following assumption: Threshold for anticipatory errors is set at 40% of mean RT for the age bracket. Threshold for pure RT events is to be set at 50% of mean RT for age six. This yields 240msec and 300msec, respectively. The resulting “window” of 60msec is used for all ages, as follows:

Age   Threshold for anticipatory errors (msec)   Threshold for pure RT responses (msec)
 6                    240                                        300
 7                    220                                        280
 8                    210                                        270
 9                    198                                        260
10                    188                                        250
11                    178                                        240
12                    164                                        225
13                    155                                        215
14                    150                                        210

So we have a consistent 60msec window between the cutoff for anticipatory errors and the cutoff for pure reaction times. Consider that TOVA decided on 200msec for the threshold of anticipatory errors—not far from our 210msec.
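If the age-dependent variant were coded directly from the table above, a simple lookup along these lines could be used (sketch only; the sub name is hypothetical, values copied from the table):

    'Sketch only: per-age thresholds taken from the table above (ages 6-14).
    'antThresh = anticipatory-error cutoff, pureThresh = pure-RT cutoff, both in msec.
    sub get_age_thresholds(age, antThresh, pureThresh)
      select case age
        case 6:  antThresh=240 : pureThresh=300
        case 7:  antThresh=220 : pureThresh=280
        case 8:  antThresh=210 : pureThresh=270
        case 9:  antThresh=198 : pureThresh=260
        case 10: antThresh=188 : pureThresh=250
        case 11: antThresh=178 : pureThresh=240
        case 12: antThresh=164 : pureThresh=225
        case 13: antThresh=155 : pureThresh=215
        case else: antThresh=150 : pureThresh=210 'age 14 and up (last row of the table)
      end select
    end sub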

To see just how this plays out in practice, we may want to track the total number of events thus excluded, for comparison with the number of fast commission errors in each case. We should get a ratio that is within shouting distance of 3.5 overall, and if that is the case, then that validates our assumptions.
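Tracking that ratio could be as simple as the following (sketch; n_excluded_rt and n_fast_commission_errors are hypothetical counters kept during the analysis):

    'Sketch only: validation check. The ratio of excluded RT events to fast
    'commission errors should come out somewhere near 3.5 if the assumptions hold.
    dim exclusionRatio
    if n_fast_commission_errors>0 then
      exclusionRatio=n_excluded_rt/n_fast_commission_errors
      response.write "exclusion ratio = "&formatnumber(exclusionRatio,2)&"<br>"
    end if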

Siegfried

----
Hello Tobias—

I recall seeing a paper of yours that argued for a shift toward sub-cortical involvement in decision-making under pressure. I was reminded of this as I was going over some of our CPT data, and perhaps this is a matter of some interest to you.


Particularly in young children we run into the problem that in some fraction of responses we see a ‘pure’ reaction time rather than a choice reaction time. When we don’t take proper account of such rapid responses, we get a flawed estimate of the mean reaction time. As an index to this problem, we rely on commission errors. If these errors pile up at very short reaction times, then we know that we have a problem. With a ratio of 7:2 between Go and NoGo trials, we can estimate roughly how many fast responses are ‘corrupting’ our determination of mean RT.

To help out with this project, we plot the histogram of RTs for the commission errors, so the problem can be recognized by the clinician at a glance. In surveying these distributions, I have noted a discrepancy between these and the known distribution of pure reaction times. Whereas the mean RT of the pure reaction time for young children is around 300msec, the distribution of fast commission errors appears to me to be shifted to lower RTs. (I only have a limited number to look at—just enough to arouse my suspicions.)

The only way I can explain this apparent discrepancy is that even within the frame of ‘pure’ reaction time responses there is a bias toward correct responses in the upper part of the quasi-Gaussian distribution, and a bias toward false responses in the lower part. This could be evidence for the involvement of sub-cortical decision-making contributions.

I’d like to have your opinion on whether I am chasing something of potential interest here, or whether this is already well-plowed terrain.

 

October 24th
A:
I'm happy to see the communication thread all in one place. On re-reading the material, I find myself gravitating to the last proposal for calculating the correction to the distribution of responses. The case for refining the analysis with the correction for age is compelling. We then have a two-criterion process: if 1) eight or more fast commission errors (RT < 300 msec) are observed (so the issue is big enough to worry about), and 2) the mean ceRT is < 300 msec (which assures us that the distribution is dominated by fast events), then RT cutoffs are applied below which all responses are removed from the RT calculation. These cutoffs are most realistic if they are made age-dependent for the early years.
B:
However, we also know that any correction of this nature, even with a fixed threshold, removes those 'events' that most affect the determination of the average. Any reasonable correction factor already goes a long way toward fixing the problem. So if you wanted to use just a fixed threshold for all ages, I would go for 250 msec. One step up from that would be to distinguish three age ranges: 6-9, 10-14, and 15 and up. The threshold for 6-9 would be 280; for 10-14, 230; and for everyone 15 and up, 210.
    'age-dependent fast-response cutoff (option B); note that cRT_chopRT must now
    'be an ordinary variable rather than a const for these assignments to work
    cRT_chopRT=210
    if age_at_test<=14 then cRT_chopRT=230
    if age_at_test<=9 then cRT_chopRT=280
B is implemented. Oct 26, 2020.

anticip:
Just to make sure, I have been assuming that in counting fast commission errors you are including anticipatory responses for this purpose. There are of course also anticipatory responses that occur too fast to be pure responses, but that is a detail.
-> the anticipatory responses are now included in the trigger, 25 Oct 2020:
    'trigger condition now counts commission errors plus anticipatory responses
    if stats_counter_commissionerrors+n_anticip_total>cRT_minCOMM then
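Putting the October 2020 changes together, the implemented logic presumably now reads roughly as follows (sketch only; same variable names as above, with cRT_chopRT as an ordinary variable so the age-dependent assignments are possible):

    'Sketch only: approximate current state after the Oct 2020 changes.
    dim cRT, cRT_chopRT
    const cRT_minCOMM=29
    cRT=0
    'age-dependent fast-response cutoff (option B)
    cRT_chopRT=210
    if age_at_test<=14 then cRT_chopRT=230
    if age_at_test<=9 then cRT_chopRT=280
    'trigger now includes anticipatory responses in the error count
    if stats_counter_commissionerrors+n_anticip_total>cRT_minCOMM then
      cRT=get_mean(aryCorrelatedRT_Correct,0,999,cRT_chopRT)
    end if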