Re: identification task...negative d' values.
I've spent a chunk of the day trying to figure out what a negative d-prime
implies, so I thought I'd send it to the list.
Negative d' means the false positive rate (the proportion of backgrounds
labeled target) is higher than the true positive rate (the proportion of
targets labeled target), or equivalently that the false accept rate plus
the false reject rate is greater than 100%.
The d' analysis is based on fitting the detection problem to a scenario
in which you have a scalar decision variable whose mean (but not its
variance) shifts depending on the true label. In this case, d' reports
the shift in the mean, in units of the standard deviation, regardless of
the actual decision threshold (false accept/false reject tradeoff)
chosen. So it's useful for abstracting away from particular
thresholds/cost assumptions to get at the underlying problem.
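(For concreteness, here's how d' is computed from a hit rate and a
false alarm rate under that equal-variance Gaussian model - just a
little Python sketch, and the function name is mine, not anything
standard.)

from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """d' under the equal-variance Gaussian model: the difference of
    the z-transformed hit and false alarm rates."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

print(d_prime(0.30, 0.40))  # 30% hits, 40% false alarms -> about -0.27
print(d_prime(0.70, 0.60))  # the flipped detector -> about +0.27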
However, if the problem isn't well characterized as a simple shift in
mean, for instance if the variance changes between the two cases, d' is
simply not a particularly good fit to the problem. Most importantly, if
the variances are different, d' is no longer independent of the
threshold choice, and thus subjects with differing internal preferences
between false accepts and false rejects can't be accurately compared.
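(To see this, here's a toy example with made-up numbers: background
scores are standard normal, target scores have mean 1 but standard
deviation 2. Plugging the hit and false alarm rates you'd get at two
different thresholds into the equal-variance formula gives two
different answers for "d'".)

from statistics import NormalDist

background = NormalDist(mu=0.0, sigma=1.0)  # background score distribution
target = NormalDist(mu=1.0, sigma=2.0)      # target scores: larger variance

def apparent_d_prime(threshold):
    """What the equal-variance d' formula reports at this threshold."""
    hit_rate = 1.0 - target.cdf(threshold)
    false_alarm_rate = 1.0 - background.cdf(threshold)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

print(apparent_d_prime(0.0))  # about 0.5
print(apparent_d_prime(1.5))  # about 1.25 - same detector, different "d'"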
Getting back to the negative d' detector, is it any use? You can make
your d' positive by flipping your results. But if targets are much
rarer than nontargets, it's not obvious this is necessarily a helpful
thing to do. So, if you have a 30% true positive rate (you report
target on 30% of target frames) and a 40% false positive rate (you
report target on 40% of background frames), your false positive rate
exceeds your true positive rate and you have a negative d-prime of
-0.27. If targets occur with a prior probability of 1%, you make false
accepts 0.99*0.4 = 0.396 of the time and false rejects 0.01*0.7 = 0.007 of
the time, so you make errors 0.403 of the time, and correct detections
0.003 of the time (where perfect detection would be 0.01). Call this
scenario A.
If you flip your responses, you will report target now on 70% of your
target frames, but also on 60% of background frames. Your d-prime will
be +0.27, but you will make errors 0.594 + 0.003 = 0.597 of the time,
and correct detections 0.007 of the time. So your overall error rate
goes up. Call this scenario B.
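(Here's that arithmetic for both scenarios as a small Python sketch,
with the 1% prior baked in; the names are just for illustration.)

p_target = 0.01               # prior probability of a target frame
p_background = 1.0 - p_target

def outcome_rates(tp_rate, fp_rate):
    """Return (false accepts, false rejects, correct detections) as
    fractions of all frames."""
    false_accepts = p_background * fp_rate
    false_rejects = p_target * (1.0 - tp_rate)
    correct_detections = p_target * tp_rate
    return false_accepts, false_rejects, correct_detections

print(outcome_rates(0.30, 0.40))  # scenario A: (0.396, 0.007, 0.003)
print(outcome_rates(0.70, 0.60))  # scenario B: (0.594, 0.003, 0.007)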
The preference between A and B depends on the relative cost of false
accepts and false rejects; looking at the error rate alone amounts to
assuming equal costs. But if you knew the priors and cared only about
error rate, you
could always just report background all the time (scenario C). Or if
false rejects were really, really expensive, you could report target all
the time (scenario D).
With equal, unit costs to false rejects and false accepts, A costs
0.403, B costs 0.597, C costs 0.01, and D costs 0.99 - your original
scheme had lower cost than the opposite, but you'd do better to just
always report background.
If false rejects cost 50 times more than false accepts, A costs 0.746,
B costs 0.744, C costs 0.5, and D costs 0.99. So, you'd do better to
flip your responses, but better still to report background all the time.
If false rejects cost 100 times more than false accepts, A costs 1.096,
B costs 0.894, C costs 1.0, and D costs 0.99. So now it's worth using
your detector, but flipping the results.
If false rejects cost 1000 times more than false accepts, A costs 7.396,
B costs 3.594, C costs 10, and D costs 0.99. At this point, you should
just report target all the time.
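(All four cost comparisons in one go - again just an illustrative
Python sketch, with unit cost per false accept and a varying cost per
false reject.)

# (false accepts, false rejects) per frame for each scheme, from above
schemes = {
    "A (use as-is)": (0.396, 0.007),
    "B (flipped)": (0.594, 0.003),
    "C (always background)": (0.000, 0.010),
    "D (always target)": (0.990, 0.000),
}

for fr_cost in (1, 50, 100, 1000):
    costs = {name: fa + fr_cost * fr for name, (fa, fr) in schemes.items()}
    best = min(costs, key=costs.get)
    print(fr_cost, {name: round(c, 3) for name, c in costs.items()}, "->", best)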
So I don't think there's any rational situation in which you'd want to
use a detector with a d' below zero, except with the labels flipped to
turn it into a positive-d' detector.
DAn.