Defect Challenge

Welcome to the Defect Challenge of the DEFECTS 2008 workshop. The purpose of this challenge is to give researchers a chance to evaluate their favourite fault localization technique on a challenging dataset and to compare their results with other approaches.

The Dataset

The dataset is taken from the iBugs project. iBugs contains defect data mined from the version archives of large open-source projects. More information about how defect data is mined can be found in this paper.

The dataset for the challenge includes a subset of the bugs mined for the AspectJ subject (written in the Java language). For each bug there is at least one failing run of the program that reproduces the problem. There is also a large test suite that can be used as a source for passing runs.

The dataset is currently a 260 MB download and is available here.


The dataset requires a Java Development Kit version 1.4.2. It is organized as a subversion repository that contains snapshots of AspectJ before and after each bug was fixed. The repository provides an ant file to conveniently check out specific versions, build the project, and run tests. If you don't want to use ant, you can still access the subversion repository directly. Information about the bugs (bug report, number of lines changed, ...) can be found in the repository descriptor file repository.xml located in the root directory of the archive. The descriptor also contains the fix for each bug, but you should use the defect data from the table below. The step-by-step guide further down this page gives an example of how to use the infrastructure that comes with the repository.

Evaluation and Report

Your task is as follows:

For each bug in the repository, create a ranking of statements most likely to contain the defect that caused the failure.
If there is more than one failing run for a bug, you are allowed to create a ranking for each failing run. For each prediction you make, please give the percentage of statements searched in vain, i.e. the fraction of executed statements that have to be searched before the first defective statement (see section "Defective Statements") is reached.

Example: A failing run executes 14000 statements, and the first defective statement is ranked at position 100. Then 99 statements are searched in vain, and the fraction of statements searched in vain is 99/14000 ≈ 0.0071, i.e. about 0.71%.
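The metric above can be sketched in a few lines of Java. This is a minimal illustration, assuming a 1-based rank for the first defective statement in your report; the class and method names are our own, not part of the iBugs infrastructure.

```java
public class SearchedInVain {

    // statementsExecuted: total statements executed by the failing run.
    // rankOfFirstDefect: 1-based position of the first defective statement
    // in the ranking produced by the fault localization tool.
    static double fractionSearchedInVain(int rankOfFirstDefect, int statementsExecuted) {
        // All statements ranked strictly before the first defect were searched in vain.
        return (rankOfFirstDefect - 1) / (double) statementsExecuted;
    }

    public static void main(String[] args) {
        // The example from the text: 14000 executed statements,
        // first defect at rank 100, giving 99/14000 ≈ 0.0071.
        System.out.println(fractionSearchedInVain(100, 14000));
    }
}
```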

Your tool can use all information that is provided by the repository, e.g. the runs from the test suite, the bug report etc. You have to be careful that your tool does not use information from the future, i.e. information from bugs that were fixed later than the current bug.


In order to participate in the challenge, you have to write a small report (2 pages) that gives a brief description of your approach and discusses the results. Please send in your report no later than

April 25th, 23:59 (Apia-Samoa)

Defective Statements

iBugs contains fixes for bugs mined from a project's version archive. Each fix consists of a set of added, removed and changed lines (as expressed by the diff tool). When lines have been added in a fix, we consider the lines right before and right after the added lines as defective statements and expect them to be reported.
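The rule for added lines can be made concrete with a small sketch. This is our reading of the paragraph above, not code from the iBugs distribution: when a fix only adds lines, the pre-fix file contains no changed line to blame, so the lines immediately surrounding the insertion point count as defective.

```java
import java.util.ArrayList;
import java.util.List;

public class DefectiveStatements {

    // insertionLine: the pre-fix line number after which the fix added new lines
    // (as reported by the diff tool). Hypothetical helper for illustration only.
    static List<Integer> defectiveLinesForAddition(int insertionLine) {
        List<Integer> defective = new ArrayList<>();
        defective.add(insertionLine);     // line right before the added block
        defective.add(insertionLine + 1); // line right after the added block
        return defective;
    }
}
```

For changed or removed lines the defective statements are the affected pre-fix lines themselves; only pure additions need this surrounding-lines convention.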

The following table lists the code lines for each bug. It was generated from the iBugs data by applying the approach mentioned above, and by ignoring non-executable lines (comments, declarations).


Questions and Answers

What do you mean by fixId?
The fixId identifies a bug that is included in a subject. We might as well call it bugId, but since iBugs is based on the identification of fixes, we chose to call it fixId. You need to specify the fixId whenever you use one of the ant targets that come with the subject. You can find the fixIds for all bugs in the repository descriptor (repository.xml).

What are pre-fix and post-fix versions?
Fixes for bugs are extracted from the version archive of a project. For every bug we provide a snapshot of the repository right before the fix was committed (pre-fix version) and a snapshot right after the fix was committed (post-fix version).

I cannot find the associated test case mentioned in the repository descriptor. Where is it?
Most fixes also commit a test case for the problem that was fixed. Since the fix is only contained in the post-fix version, you sometimes won't find the associated test in the pre-fix version. If you want to execute the associated test in the pre-fix version, you need to copy the test files into the pre-fix version.

Why does the structure of the repository file for Rhino differ from that for AspectJ?
This is because Rhino provides JUnit tests only for the most recent bugs in the version history. We therefore cannot say for sure whether a test case failed and thus omit this information in the repository descriptor.

Why do I keep getting compiler errors in Rhino?
The compile process uses the same virtual machine that you used to invoke ant (not the one specified in properties.xml). You have to invoke ant using a 1.4.2 virtual machine. Sorry for that.

What is the temporal order in which bugs were fixed?
The fixId is created by the bug tracker and therefore only gives the order in which bugs were reported, not the order in which they were fixed. To get the temporal order in which bugs were fixed, sort them by the id of the transaction (a transaction in our context is a set of changes to the version repository) that fixed the bug. You can find the transactionId in the repository descriptor file (repository.xml).
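Recovering the fix order can be done by parsing repository.xml with the JDK's built-in DOM parser and sorting on the transaction id. The element and attribute names used here ("bug", "fixId", "transactionId") are assumptions about the descriptor layout; check repository.xml for the exact structure.

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class FixOrder {

    // Returns the fixIds sorted by transactionId, i.e. in the temporal
    // order in which the fixes were committed to the version repository.
    static List<String> fixIdsInFixOrder(String repositoryXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader(repositoryXml)));
        NodeList bugNodes = doc.getElementsByTagName("bug");
        List<Element> bugs = new ArrayList<>();
        for (int i = 0; i < bugNodes.getLength(); i++) {
            bugs.add((Element) bugNodes.item(i));
        }
        // fixId reflects the reporting order; transactionId the commit order.
        bugs.sort(Comparator.comparingLong(
                b -> Long.parseLong(b.getAttribute("transactionId"))));
        List<String> ids = new ArrayList<>();
        for (Element b : bugs) {
            ids.add(b.getAttribute("fixId"));
        }
        return ids;
    }
}
```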

Step-By-Step Guide

Select bugs. Use the meta information provided in the file repository.xml to select relevant bugs.
Example: In order to select all bugs that raised a NullPointerException, you can use the XPath expression
/repository/bug[tag='null pointer exception']
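The XPath expression above can be evaluated against repository.xml with the JDK's built-in XPath engine. A minimal sketch, assuming bugs carry their category in a "tag" child element as in the example; the class name and the exact descriptor layout are our own assumptions.

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class SelectBugs {

    // Counts the bugs in the given descriptor that are tagged
    // "null pointer exception", using the example XPath from the guide.
    static int countNullPointerBugs(String repositoryXml) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList hits = (NodeList) xpath.evaluate(
                "/repository/bug[tag='null pointer exception']",
                new InputSource(new StringReader(repositoryXml)),
                XPathConstants.NODESET);
        return hits.getLength();
    }
}
```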
Extract versions. Use the ant task checkoutversion.
Example: In order to checkout the pre-fix and post-fix versions for Bug 4711, type
ant -DfixId=4711 checkoutversion
The results are placed in the directory "versions/4711/".
Build the program. Use the ant task buildversion.
Example: Build the pre-fix version of Bug 4711 with
ant -DfixId=4711 -Dtag=pre-fix buildversion
If the build succeeds, you will find the Jar files in the directory "versions/4711/pre-fix/org.aspectj/modules/aj-build/dist/tools/lib/".
Note: Static tools can analyze the Jars in this directory, while dynamic tools that execute tests need to instrument the Jars created in the next step.
Build tests (dynamic tools). Use the ant task buildtests.
Example: In order to build the tests for the pre-fix version of Bug 4711, type
ant -DfixId=4711 -Dtag=pre-fix buildtests
This creates a Jar file that includes the AspectJ compiler and all resources needed for testing in the directory "versions/4711/pre-fix/org.aspectj/modules/aj-build/jars/".
Run test suites (dynamic tools). Use the ant tasks runharnesstests for the integration test suite and runjunittests for the unit test suite of AspectJ, respectively.
Example: Run unit tests for the pre-fix version of Bug 4711
ant -DfixId=4711 -Dtag=pre-fix runjunittests
Run specific tests (dynamic tools). Generate scripts by using the ant task gentestscript and execute them.
Example: In order to execute test "SUID: thisJoinPoint" described in file "org.aspectj/modules/tests/ajcTests.xml" generate a script with
ant -DfixId=4711 -Dtag=pre-fix -DtestFileName="org.aspectj/modules/tests/ajcTests.xml" -DtestName="SUID: thisJoinPoint" gentestscript
This creates a new ant script in the directory "versions/4711/pre-fix/org.aspectj/modules/tests/". Execute this file to run test "SUID: thisJoinPoint".
Hint: All tests executed by the test suite are described in the file "versions/4711/pre-fix/testresults.xml".
Assess your tool. Compare the predicted bug location against the location changed in the fix (see repository.xml).
Note: Static bug localization tools typically integrate with Steps 3 and 4. Dynamic tools need to run programs and therefore integrate with Steps 4, 5, and 6.


If you have any questions regarding the dataset or the challenge in general, please send an email to the organizers.