Languages and Compilers for Parallel Computing: 7th International Workshop, Ithaca, NY, USA, August 8-10, 1994. Proceedings / edited by Keshav Pingali, Utpal Banerjee, David Gelernter, Alex Nicolau, David Padua.

This volume presents revised versions of the 32 papers accepted for the Seventh Annual Workshop on Languages and Compilers for Parallel Computing, held in Ithaca, NY, in August 1994. The papers report on the leading research activities in languages and compilers for parallel computing.


Bibliographic Details
Corporate Author: SpringerLink (Online service)
Other Authors: Pingali, Keshav (Editor), Banerjee, Utpal (Editor), Gelernter, David (Editor), Nicolau, Alex (Editor), Padua, David (Editor)
Format: eBook
Language: English
Published: Berlin, Heidelberg : Springer Berlin Heidelberg : Imprint: Springer, 1995.
Edition: 1st ed. 1995.
Series: Lecture Notes in Computer Science, 892
Springer eBook Collection.
Online Access: Click to view e-book
Holy Cross Note: Loaded electronically.
Electronic access restricted to members of the Holy Cross Community.
Table of Contents:
  • Fine-grain scheduling under resource constraints
  • Mutation scheduling: A unified approach to compiling for fine-grain parallelism
  • Compiler techniques for fine-grain execution on workstation clusters using PAPERS
  • Solving alignment using elementary linear algebra
  • Detecting and using affinity in an automatic data distribution tool
  • Array distribution in data-parallel programs
  • Communication-free parallelization via affine transformations
  • Finding legal reordering transformations using mappings
  • A new algorithm for global optimization for parallelism and locality
  • Polaris: Improving the effectiveness of parallelizing compilers
  • A formal approach to the compilation of data-parallel languages
  • The data partitioning graph: Extending data and control dependencies for data partitioning
  • Detecting value-based scalar dependence
  • Minimal data dependence abstractions for loop transformations
  • Differences in algorithmic parallelism in control flow and call multigraphs
  • Flow-insensitive interprocedural alias analysis in the presence of pointers
  • Incremental generation of index sets for array statement execution on distributed-memory machines
  • A unified data-flow framework for optimizing communication
  • Interprocedural communication optimizations for distributed memory compilation
  • Analysis of event synchronization in parallel programs
  • Computing communication sets for control parallel programs
  • Optimizing parallel SPMD programs
  • An overview of the Opus language and runtime system
  • SIMPLE performance results in ZPL
  • Cid: A parallel, “shared-memory” C for distributed-memory machines
  • EQ: Overview of a new language approach for prototyping scientific computation
  • Reshaping access patterns for generating sparse codes
  • Evaluating two loop transformations for reducing multiple-writer false sharing
  • Parallelizing tree algorithms: Overhead vs. parallelism
  • Autoscheduling in a distributed shared-memory environment
  • Optimizing array distributions in data-parallel programs
  • Automatic reduction tree generation for fine-grain parallel architectures when iteration count is unknown