The use of high-throughput sequence data in genetic epidemiology allows the investigation of common and rare variants across the entire genome, thus increasing the amount of available information and the potential number of statistical tests performed within a single study. As a consequence, the problem of multiple testing may become even more pressing than in previous study designs. An important challenge is that the exact number of statistical tests depends on the statistical method used. Furthermore, many statistical approaches for the analysis of sequence data already require permutation, so it may be difficult to additionally use permutation to estimate correct type I error levels, as is commonly done in genome-wide association studies. In view of this, a separate group with a focus on multiple testing was formed at Genetic Analysis Workshop 17. Here, we present the approaches used by this group at the workshop. Beyond tackling the multiple testing problem, the contributions addressed several distinct issues. Some contributors developed and investigated modifications of existing collapsing methods. Others aimed to improve the identification of functional variants by reducing and analyzing the underlying data dimensions. Two research groups investigated the overall accumulation of rare variation across the genome and its value for predicting phenotypes. Finally, other investigators departed from traditional statistical analyses by reversing the null and alternative hypotheses and by proposing a novel resampling method. We describe and discuss all of these approaches.
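To illustrate the permutation procedure alluded to above, the following is a minimal sketch, not taken from any of the workshop contributions: it simulates a quantitative phenotype under the null hypothesis, uses a hypothetical rare-variant burden score and a simple correlation-based test statistic, and derives an empirical p-value by permuting phenotype labels. The data, statistic, and parameter choices are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(17)

# Hypothetical data: rare-variant burden scores (counts of minor alleles
# across a region) and a phenotype simulated under the null hypothesis,
# i.e., with no genotype-phenotype association.
n = 200
burden = rng.binomial(2, 0.05, size=n).astype(float)
phenotype = rng.normal(size=n)

def score(y, x):
    """Absolute Pearson correlation as a simple association statistic."""
    return abs(np.corrcoef(y, x)[0, 1])

observed = score(phenotype, burden)

# Permutation: shuffling phenotypes breaks any genotype-phenotype link,
# so the permuted statistics approximate the null distribution.
n_perm = 1000
null_stats = np.array(
    [score(rng.permutation(phenotype), burden) for _ in range(n_perm)]
)

# Empirical p-value with the standard +1 correction to avoid p = 0.
p_emp = (1 + np.sum(null_stats >= observed)) / (n_perm + 1)
print(f"empirical p-value: {p_emp:.3f}")
```

If the association method itself already needs permutation to compute its test statistic, each of the `n_perm` replicates above would require a nested permutation loop, which is exactly why controlling type I error by permutation can become computationally prohibitive for sequence data.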