Abstract
Instrumentation is commonly used to track application behavior: to collect program profiles; to monitor component health and performance; to aid in component testing; and more. Program annotation enables developers and tools to pass extra information to later stages of software development and execution; the .NET runtime, for example, relies on annotations for a significant portion of the services it provides. Both mechanisms are becoming important parts of software development on modern platforms such as Java and .NET. Instrumentation tools, however, are generally not aware of the semantics of the information passed via the annotation mechanism. This is especially true for post-compiler instrumentation, e.g., instrumentation performed at runtime. The problem is that instrumentation may affect the correctness of annotations, rendering them invalid or misleading and producing unforeseen side effects during program execution. This problem has not been addressed before. In this paper, we show the subtle interactions between annotations and instrumentation using several real-life examples. Many annotations are intended to convey information to the runtime; the virtual execution environment is therefore a prominent annotation consumer, must be aware of this conflict, and may also need to provide runtime support to other annotation consumers. We propose an annotation taxonomy and show how instrumentation affects various annotations used in research and in industrial applications. We show how annotations can expose enough information about themselves to prevent instrumentation from accidentally corrupting them. We demonstrate this approach on our annotation benchmark.
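To make the conflict concrete, consider the following minimal Java sketch (ours, not taken from the paper; the @CodeSize annotation, its value, and the class name are hypothetical). A developer annotates a method with a measured property; an instrumenter that later rewrites the method silently invalidates the annotation without violating any type rules:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Hypothetical annotation: records the expected bytecode size of a method
// so that a runtime consumer (e.g., an inlining heuristic) can use it.
@Retention(RetentionPolicy.RUNTIME)
@interface CodeSize {
    int bytes();
}

public class AnnotationDriftDemo {
    // The developer measured 12 bytes of bytecode for this method
    // at annotation time (the value is illustrative).
    @CodeSize(bytes = 12)
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) throws Exception {
        Method m = AnnotationDriftDemo.class
                .getDeclaredMethod("add", int.class, int.class);
        CodeSize size = m.getAnnotation(CodeSize.class);
        // If a bytecode instrumenter injects profiling code into add(),
        // the method grows, but the annotation still reports 12 bytes:
        // it is now misleading, yet no tool flags the conflict.
        System.out.println("annotated size: " + size.bytes() + " bytes");
    }
}

This is the kind of silent corruption the paper addresses: the annotation remains syntactically valid after instrumentation, but its semantics no longer hold.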