2002 ACM SIGPLAN Workshop on
Partial Evaluation and Semantics-Based Program Manipulation (PEPM'02)
Portland, Oregon, USA, January 14-15, 2002
Preceding POPL'02
The PEPM'02 workshop will bring together researchers working in the
areas of semantics-based program manipulation, partial evaluation, and
program generation. The workshop focuses on techniques, supporting theory, and
applications of the analysis and manipulation of programs.
Technical topics include, but are not limited to:
- Program manipulation techniques: transformation, specialization,
  normalization, reflection, rewriting, run-time code generation,
  multi-level programming.
- Program analysis techniques: abstract interpretation, static analysis,
  binding-time analysis, attribute grammars, constraints.
- Related issues in language design and models of computation: imperative,
  functional, logical, object-oriented, parallel, distributed, mobile,
  secure, domain-specific.
- Programs as data objects: staging, meta-programming, incremental
  computation, mobility, tools and techniques, prototyping and debugging.
- Applications: systems programming, scientific computing, algorithmics,
  graphics, security checking, simulation, compiler generation, compiler
  optimization, decompilation.
- Assessment: applicability of program manipulation techniques to particular
  architectures and language paradigms, scalability, benchmarking,
  portability.
Original results that bear on these and related topics are solicited.
Papers investigating novel uses and applications of program
manipulation in the broadest sense are especially encouraged.
Authors concerned about the appropriateness of a topic
are welcome to consult with the program chair prior to
submission.
SUBMISSION DEADLINE EXPIRED
Papers should be submitted electronically via the workshop's Web
page. Exceptionally, submissions may be emailed to the program
chair:
thiemann@uni-freiburg.de. Acceptable formats
are PostScript or PDF, viewable by gv.
Submissions should not exceed 5000 words, excluding bibliography and
figures. Excessively long submissions may be rejected outright.
Submitted papers will be judged on the basis of significance,
relevance, correctness, originality, and clarity. They should include
a clear identification of what has been accomplished and why it is
significant. They must describe work that has not previously been
published in a major forum. Authors must indicate if a closely
related paper is also being considered for another conference or
journal.
Proceedings will be published by ACM Press. A special issue of the journal
Higher-Order and Symbolic Computation is planned to follow.
Important dates:
- Submission: 8 October 2001
- Notification: 12 November 2001
- Final papers: 26 November 2001
Invited talks:

Paul Hovland, Argonne National Laboratory
Title: Implementation of Automatic Differentiation Tools
Authors: Christian Bischof, Paul Hovland, Boyana Norris
Abstract:
Automatic differentiation is a semantic transformation that applies the
rules of differential calculus to source code. It thus transforms a
computer program that computes a mathematical function into a program that
computes the function and its derivatives. Derivatives play an important
role in a wide variety of scientific computing applications, including
optimization, solution of nonlinear equations, sensitivity analysis, and
nonlinear inverse problems. We describe a simple component architecture for
developing tools for automatic differentiation and other mathematically
oriented semantic transformations of scientific software. This architecture
consists of a compiler-based, language-specific front-end for source
transformation, loosely coupled with one or more language-independent
``plug-in'' transformation modules. The coupling mechanism between the
front-end and transformation modules is provided by the XML Abstract
Interface Form (XAIF). XAIF provides an abstract, language-independent
representation of language constructs common in imperative languages, such
as C and Fortran. We describe the use of this architecture in constructing
tools for automatic differentiation of Fortran 77 and ANSI C, and we discuss
how access to compiler optimization techniques can enable more efficient
derivative augmentation.
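As a rough illustration of the kind of transformation described above (not
the XAIF-based tools themselves), here is a minimal forward-mode automatic
differentiation sketch in C using an explicit "dual number" that carries a
value together with its derivative; the function f and all helper names are
made up for this example.

    #include <stdio.h>
    #include <math.h>

    /* A value paired with its derivative with respect to the input x. */
    typedef struct { double val; double dot; } dual;

    static dual d_const(double c) { dual r = { c, 0.0 }; return r; }
    static dual d_var(double x)   { dual r = { x, 1.0 }; return r; }  /* seed dx/dx = 1 */

    static dual d_add(dual a, dual b) {
        dual r = { a.val + b.val, a.dot + b.dot };
        return r;
    }
    static dual d_mul(dual a, dual b) {
        /* product rule: (ab)' = a'b + ab' */
        dual r = { a.val * b.val, a.dot * b.val + a.val * b.dot };
        return r;
    }
    static dual d_sin(dual a) {
        /* chain rule: (sin a)' = cos(a) * a' */
        dual r = { sin(a.val), cos(a.val) * a.dot };
        return r;
    }

    /* Original program: f(x) = x * sin(x) + 2.  Rewriting it over duals
       yields a program that computes both f(x) and f'(x). */
    static dual f(dual x) {
        return d_add(d_mul(x, d_sin(x)), d_const(2.0));
    }

    int main(void) {
        dual y = f(d_var(1.5));
        printf("f(1.5)  = %g\n", y.val);   /* function value */
        printf("f'(1.5) = %g\n", y.dot);   /* derivative: sin(1.5) + 1.5*cos(1.5) */
        return 0;
    }

The source-transformation tools described in the talk achieve the same effect
by rewriting the program text itself, which is what lets them draw on compiler
analyses and optimizations to produce more efficient derivative code than
this run-time pairing would.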
Craig Chambers, Department of Computer Science and
Engineering, University of Washington
Title: Staged Compilation
Abstract:
Traditional compilers compile and optimize files separately, making
worst-case assumptions about the program context in which a file is to
be linked. More aggressive compilation architectures perform
cross-file interprocedural or whole-program analyses, potentially
producing much faster programs but substantially increasing the cost
of compilation. Even more radical are systems that perform all
compilation and optimization at run-time: such systems can optimize
programs based on run-time program and system properties as well as
static whole-program properties. However, run-time compilers (also
called dynamic compilers or just-in-time compilers) suffer under
severe constraints on allowable compilation time, since any time spent
compiling steals from time spent running the program. None of these
compilation models dominates the others: each has unique strengths and
weaknesses not present in the other models.

We are developing a new, staged compilation model which strives to
combine high run-time code quality with low compilation overhead.
Compilation is organized as a series of stages, with stages
corresponding to, for example, separate compilation, library linking,
program linking, and run-time execution. Any given optimization can
be performed at any of these stages; to reduce compilation time while
maintaining high effectiveness, an optimization should be performed at
the earliest stage that provides the necessary program context
information to carry out the optimization effectively. Moreover, a
single optimization can itself be spread across multiple stages, with
earlier stages performing preplanning work that enables the final
stage to complete the optimization quickly. In this way, we hope to
produce highly optimized programs, nearly as good as what could be
done with a purely run-time compiler that had an unconstrained
compilation time budget, but at a much more practical compile time
cost.

We are building the Whirlwind optimizing compiler as the concrete
embodiment of this staged compilation model, initially targeting
object-oriented languages. A key component of Whirlwind is a set of
techniques for automatically constructing staged compilers from
traditional unstaged compilers, including aggressive applications of
specialization and other partial evaluation technology.
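As a small, hand-written illustration (not Whirlwind itself) of the kind of
specialization that staging enables, the following C sketch contrasts a
generic routine, where everything is decided at run time, with the residual
code an earlier stage could emit once part of the input is known; the names
power and power_3 are invented for the example.

    #include <stdio.h>

    /* Generic version: both x and n arrive at run time. */
    static double power(double x, int n) {
        double r = 1.0;
        while (n-- > 0)
            r *= x;
        return r;
    }

    /* Residual version: an earlier stage (say, program linking) learned
       that n = 3 at a given call site, so the loop over n has been
       unrolled away and only the truly dynamic argument x remains. */
    static double power_3(double x) {
        return x * x * x;
    }

    int main(void) {
        printf("generic:     %g\n", power(2.0, 3));
        printf("specialized: %g\n", power_3(2.0));
        return 0;
    }

In the staged model, earlier "preplanning" stages would record which call
sites can be specialized and on what, so that the final, time-constrained
stage only has to instantiate the prepared residual code.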
Program committee:

Chair: Peter Thiemann, Universität Freiburg, Germany.
E-mail: thiemann@uni-freiburg.de
Members:
- Maria Alpuente, U. Politécnica de Valencia, Spain
- Evelyn Duesterwald, Hewlett-Packard Labs, USA
- Robert Glück, DIKU, Denmark and Waseda University, Japan
- Michael Hanus, University of Kiel, Germany
- Zhenjiang Hu, University of Tokyo, Japan
- John Hughes, Chalmers Technical University, Sweden
- Mark Jones, OGI, USA
- Siau-Cheng Khoo, NUS, Singapore
- Jakob Rehof, Microsoft Research, USA
- João Saraiva, University of Minho, Portugal
- Ulrik Schultz, University of Aarhus, Denmark
- David Walker, CMU, USA