Listing 1 - 10 of 44

How to write parallel programs : a first course
Authors: ---
Year: 1990 Publisher: Cambridge, Mass. London MIT Press

Programming models for parallel systems
Author:
ISBN: 0471923044 9780471923046 Year: 1990 Volume: vol *8 Publisher: Chichester : John Wiley & Sons,

Practical parallel programming
Author:
ISBN: 0120828103 1322345902 1493306030 0080916457 Year: 1992 Publisher: San Diego New York London Academic Press

Semantics for concurrency : proceedings of the International BCS-FACS Workshop, 23-25 July 1990, Leicester
Authors: --- --- --- ---
ISBN: 0387196250 3540196250 1447138600 9783540196259 9780387196251 Year: 1990 Volume: vol *4 Publisher: London New York Springer-Verlag

Concurrent programming : fundamental techniques for real-time and parallel software design
Author:
ISBN: 0471923036 9780471923039 Year: 1989 Volume: vol *5 Publisher: Chichester New York Toronto Wiley

Data-parallel programming on MIMD computers
Authors: ---
ISBN: 0262082055 9780262288484 9780262082051 0262288486 Year: 1991 Volume: vol *3 Publisher: Cambridge, Mass. : MIT Press,

Abstract

MIMD computers are notoriously difficult to program. Data-Parallel Programming demonstrates that architecture-independent parallel programming is possible by describing in detail how programs written in a high-level SIMD programming language may be compiled and efficiently executed on both shared-memory multiprocessors and distributed-memory multicomputers. The authors provide enough data for the reader to judge the feasibility of architecture-independent programming in a data-parallel language. For each benchmark program they give the source code listing, the absolute execution time on both a multiprocessor and a multicomputer, and the speedup relative to a sequential program. They often present multiple solutions to the same problem, to better illustrate the strengths and weaknesses of these compilers. The language presented is Dataparallel C, a variant of the original C* language developed by Thinking Machines Corporation for its Connection Machine processor array. Separate chapters describe the compilation of Dataparallel C programs for execution on the Sequent multiprocessor and on the Intel and nCUBE hypercubes. The authors document the performance of these compilers on a variety of benchmark programs and present several case studies.

Contents: Introduction; Dataparallel C Programming Language Description; Design of a Multicomputer Dataparallel C Compiler; Design of a Multiprocessor Dataparallel C Compiler; Writing Efficient Programs; Benchmarking the Compilers; Case Studies; Conclusions.
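To make the data-parallel style concrete, here is a minimal sketch in ordinary C (not Dataparallel C syntax, which is not reproduced in this record): the same operation is applied to every element of an array, and because each element-wise operation is independent, a data-parallel compiler of the kind described in the book can map the work onto a shared-memory multiprocessor or a distributed-memory multicomputer. The array size and test values are arbitrary choices for illustration.

```c
/* Minimal illustrative sketch of the data-parallel style (plain C, not
 * Dataparallel C): one operation applied to every element of a data set. */
#include <stdio.h>

#define N 16

int main(void) {
    double x[N], y[N], z[N];

    for (int i = 0; i < N; i++) {       /* arbitrary test data */
        x[i] = (double)i;
        y[i] = (double)(N - i);
    }

    /* Each of the N element-wise additions is independent, so a
     * data-parallel compiler may execute them simultaneously, one per
     * processing element, on either class of machine. */
    for (int i = 0; i < N; i++)
        z[i] = x[i] + y[i];

    for (int i = 0; i < N; i++)
        printf("z[%d] = %.1f\n", i, z[i]);
    return 0;
}
```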

Computer algebra and parallelism. Second international workshop, Ithaca, USA, May 1990. Proceedings
Author:
ISBN: 3540553282 0387553282 3540470263 9783540553281 Year: 1992 Volume: 584 Publisher: New York, NY : Springer-Verlag,

Abstract

This book contains papers presented at a workshop on the use of parallel techniques in symbolic and algebraic computation held at Cornell University in May 1990. The eight papers fall into three groups. The first three papers discuss particular programming substrates for parallel symbolic computation, especially for distributed-memory machines. The next three papers discuss novel ways of computing with elements of finite fields and with algebraic numbers. The finite-field technique is especially interesting since it uses the Connection Machine, a SIMD machine, to achieve surprising amounts of parallelism. One of the parallel computing substrates is also used to implement a real root isolation technique. One of the crucial algorithms in modern algebraic computation is computing the standard, or Gröbner, basis of an ideal. The final two papers discuss two different approaches to speeding up this computation. One uses vector processing on the Cray and achieves significant speed-ups. The other uses a distributed-memory multiprocessor and effectively explores the trade-offs involved with different interconnect topologies.
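As a rough illustration of the SIMD-style finite-field parallelism mentioned above (not code from the proceedings), the sketch below applies one modular multiplication to every pair of array elements; on a SIMD machine such as the Connection Machine, each pair would be handled by its own processing element. The prime modulus and the test data are arbitrary choices.

```c
/* Illustrative sketch only: element-wise arithmetic in GF(p) across arrays,
 * the one-operation-per-element parallelism a SIMD machine exploits.
 * The modulus P and the test data are arbitrary, not from the proceedings. */
#include <stdint.h>
#include <stdio.h>

#define N 8
#define P 7919u              /* an arbitrary prime modulus */

int main(void) {
    uint32_t a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {        /* arbitrary test data */
        a[i] = (uint32_t)(i + 1);
        b[i] = (uint32_t)(2 * i + 3);
    }

    /* The same modular product is computed at every index; each of the N
     * operations is independent and could run on its own processing element. */
    for (int i = 0; i < N; i++)
        c[i] = (uint32_t)(((uint64_t)a[i] * b[i]) % P);

    for (int i = 0; i < N; i++)
        printf("%u * %u mod %u = %u\n", a[i], b[i], P, c[i]);
    return 0;
}
```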

Parallel functional languages and compilers
Author:
ISBN: 0201522438 9780201522433 Year: 1991 Volume: vol *2 Publisher: New York Reading Amsterdam Paris ACM Press Addison-Wesley

Extensions of the UNITY methodology : compositionality, fairness and probability in parallelism
Author:
ISBN: 3540591737 3540492194 9783540591733 Year: 1995 Volume: 908 Publisher: Berlin : Springer-Verlag,

Abstract

This monograph extends and generalizes, in several directions, the UNITY methodology introduced in the late 1980s by K. Mani Chandy and Jayadev Misra as a formalism for the specification and verification of parallel programs. It further develops the ideas behind UNITY in order to explore the potential and limitations of the approach: first, UNITY is applied to formulate and tackle problems in parallelism such as compositionality; second, the logic and notation of UNITY are generalized to increase their range of applicability; finally, paradigms and abstractions useful for the design of probabilistic parallel algorithms are developed. Taken together, the results presented reaffirm the promise of UNITY as a versatile medium for treating many problems of parallelism.
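For readers unfamiliar with UNITY, a program in that style is a set of guarded assignments executed repeatedly, in any weakly fair order, until no statement changes the state. The sketch below simulates a classic two-statement subtraction-based gcd program in plain C; the sequential round-robin scheduler and the initial values are arbitrary illustrative choices, not taken from the book.

```c
/* Illustrative simulation of a UNITY-style program (not from the book):
 *     assign  x := x - y  if x > y
 *         []  y := y - x  if y > x
 * Statements are applied repeatedly until a fixed point is reached; at the
 * fixed point x == y == gcd of the initial values. The round-robin order
 * below is just one fair schedule. */
#include <stdio.h>

int main(void) {
    unsigned x = 462, y = 1071;          /* arbitrary initial state */
    int changed = 1;

    while (changed) {                    /* run until fixed point */
        changed = 0;
        if (x > y) { x -= y; changed = 1; }   /* first guarded assignment */
        if (y > x) { y -= x; changed = 1; }   /* second guarded assignment */
    }
    printf("fixed point: x = %u, y = %u\n", x, y);   /* prints 21, 21 */
    return 0;
}
```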
