Department of Statistics
Dietrich College of Humanities and Social Sciences

A new source detection algorithm using FDR

Publication Date

September 2001

Publication Type

Tech Report


A.M. Hopkins, C.J. Miller, A.J. Connolly, C. Genovese, R.C. Nichol and L. Wasserman


The False Discovery Rate (FDR) method has recently been described by Miller et al. (2001), along with several examples of astrophysical applications. FDR is a statistical procedure due to Benjamini & Hochberg (1995) for controlling the fraction of false positives when performing multiple hypothesis testing. The importance of this method to source detection algorithms is immediately clear. To explore the possibilities it offers, we have developed a new task for performing source detection in radio-telescope images, Sfind 2.0, which implements FDR. We compare Sfind 2.0 with two other source detection and measurement tasks, Imsad and SExtractor, and comment on several issues arising from the correlation between nearby pixels and the necessary assumption of the null hypothesis. We strongly suggest that implementing FDR as a threshold-defining method in other existing source-detection tasks would be both straightforward and worthwhile. We show that the constraint on the fraction of false detections specified by FDR holds true even for highly correlated and realistic images. For the detection of true sources, which are complex combinations of source-pixels, this constraint appears to be somewhat less strict. It remains reliable enough, however, for a priori estimates of the fraction of false source detections to be robust and realistic. Further investigation of the relationship between "source-pixels" and "sources" is nevertheless important to more tightly constrain the fraction of falsely detected sources.
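As an illustration of the thresholding step the abstract refers to, the Benjamini & Hochberg (1995) procedure can be sketched as follows. This is a generic sketch, not the Sfind 2.0 implementation; the function name and the use of NumPy are assumptions. Given per-pixel p-values under the null hypothesis (background noise), the procedure finds the largest k such that the k-th smallest p-value satisfies p_(k) <= (k/m) * alpha, and rejects (i.e., flags as source-pixels) everything at or below that p-value:

```python
import numpy as np

def bh_threshold(p_values, alpha=0.1):
    """Benjamini-Hochberg step-up procedure (sketch).

    Returns the p-value cutoff such that rejecting all hypotheses with
    p <= cutoff controls the expected false discovery rate at alpha
    (for independent tests; BH is also known to hold under certain
    positive-dependence conditions).
    """
    p = np.sort(np.asarray(p_values, dtype=float))
    m = p.size
    # Compare each sorted p-value p_(k) against its line (k/m)*alpha, k = 1..m.
    below = p <= alpha * np.arange(1, m + 1) / m
    if not below.any():
        return 0.0  # no rejections: threshold below every p-value
    k = np.nonzero(below)[0].max()  # largest index (0-based) on or under the line
    return p[k]

# Hypothetical usage: pixels with p <= cutoff would be flagged as source-pixels.
cutoff = bh_threshold([0.01, 0.02, 0.5, 0.6], alpha=0.05)
```

Note that the procedure rejects *all* p-values up to p_(k), including any that individually lie above their own (k/m)*alpha line, which is what distinguishes it from a simple per-test cutoff.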