Listing 1 - 10 of 44 | << page >> |
Parallel programming (Computer science) - Congresses --- Programming languages (Electronic computers) - Semantics - Congresses --- Theoretical Computer Science --- CCS --- Temporal Logic --- Process --- Concurrency
Parallel programming (Computer science) --- 681.3*D13 Concurrent programming --- Computer programming --- Parallel processing (Electronic computers) --- Modula-2 (Programming language) --- Concurrency --- Ada --- Real Time --- Concurrent Programming
Computer architecture --- Programming languages (Electronic computers) --- Parallel programming (Computer science) --- Petri Net --- Specification --- Functional Language --- Object Oriented --- Programming
MIMD computers are notoriously difficult to program. Data-Parallel Programming demonstrates that architecture-independent parallel programming is possible by describing in detail how programs written in a high-level SIMD programming language may be compiled and efficiently executed on both shared-memory multiprocessors and distributed-memory multicomputers. The authors provide enough data for the reader to judge the feasibility of architecture-independent programming in a data-parallel language. For each benchmark program they give the source code listing, the absolute execution time on both a multiprocessor and a multicomputer, and the speedup relative to a sequential program. They often present multiple solutions to the same problem, the better to illustrate the strengths and weaknesses of these compilers. The language presented is Dataparallel C, a variant of the original C* language developed by Thinking Machines Corporation for its Connection Machine processor array. Separate chapters describe the compilation of Dataparallel C programs for execution on the Sequent multiprocessor and on the Intel and nCUBE hypercubes, respectively. The authors document the performance of these compilers on a variety of benchmark programs and present several case studies.

Contents: Introduction; Dataparallel C Programming Language Description; Design of a Multicomputer Dataparallel C Compiler; Design of a Multiprocessor Dataparallel C Compiler; Writing Efficient Programs; Benchmarking the Compilers; Case Studies; Conclusions
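The data-parallel style the book describes can be suggested with a small sketch (in plain Python, not Dataparallel C syntax, which the book itself documents): one operation is applied uniformly to every element of a collection, so the iterations are independent and a compiler is free to map them onto the processors of a multiprocessor or multicomputer.

```python
# Hypothetical illustration of the data-parallel idea, not the book's code:
# the same rule is applied to every element, with no cross-iteration
# dependence, which is what makes architecture-independent compilation
# to both shared- and distributed-memory machines plausible.
def saxpy(alpha, xs, ys):
    """Elementwise alpha*x + y; each position is independent work."""
    return [alpha * x + y for x, y in zip(xs, ys)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 10.0, 10.0]))  # [12.0, 14.0, 16.0]
```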
C (Computer program language) --- Parallel programming (Computer science) --- MIMD computers (Multiple Instruction Multiple Data computers) - Programming --- Computer programming --- Parallel processing (Electronic computers) --- COMPUTER SCIENCE/High Performance Computing
This book contains papers presented at a workshop on the use of parallel techniques in symbolic and algebraic computation held at Cornell University in May 1990. The eight papers in the book fall into three groups. The first three papers discuss particular programming substrates for parallel symbolic computation, especially for distributed-memory machines. The next three papers discuss novel ways of computing with elements of finite fields and with algebraic numbers. The finite-field technique is especially interesting since it uses the Connection Machine, a SIMD machine, to achieve surprising amounts of parallelism. One of the parallel computing substrates is also used to implement a real root isolation technique. One of the crucial algorithms in modern algebraic computation is computing the standard, or Gröbner, basis of an ideal. The final two papers discuss two different approaches to speeding up this computation. One uses vector processing on the Cray and achieves significant speed-ups. The other uses a distributed-memory multiprocessor and effectively explores the trade-offs involved with different interconnect topologies of the multiprocessors.
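The finite-field computations mentioned above lend themselves to SIMD execution because field operations are uniform. A minimal sketch (an illustration only, not taken from the workshop papers) of prime-field arithmetic applied elementwise across a vector of field elements:

```python
# Hypothetical sketch: arithmetic in the prime field GF(p), done elementwise
# over a whole vector. This uniform one-operation-per-element shape is what
# lets a SIMD machine like the Connection Machine parallelize finite-field
# work.
P = 7  # a small prime, so the field is GF(7)

def gf_mul(xs, ys, p=P):
    # Elementwise product in GF(p); each position is independent work.
    return [(x * y) % p for x, y in zip(xs, ys)]

def gf_inv(xs, p=P):
    # Inverse by Fermat's little theorem: x^(p-2) = x^(-1) mod p, for x != 0.
    return [pow(x, p - 2, p) for x in xs]

xs = [1, 2, 3, 4, 5, 6]
print(gf_mul(xs, gf_inv(xs)))  # [1, 1, 1, 1, 1, 1]
```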
Algebra - Data processing - Congresses --- Parallel programming (Computer science) - Congresses --- Parallel processing (Electronic computers) - Congresses --- 681.3*C4 Performance of systems (Computer systems organization) --- 681.3*I1 Algebraic manipulation (Computing methodologies) --- Computer network architectures --- Computer system performance --- Algorithms --- Numerical analysis --- Computer System Implementation --- Symbolic and Algebraic Manipulation --- System Performance and Evaluation --- Numerical Analysis
Compilers (Computer programs) --- Functional programming languages --- Parallel programming (Computer science)
This monograph extends and generalizes, in several directions, the UNITY methodology introduced in the late 1980s by K. Mani Chandy and Jayadev Misra as a formalism for specifying and verifying parallel programs. This treatise further develops the ideas behind UNITY in order to explore and understand the potential and limitations of the approach: first, UNITY is applied to formulate and tackle problems in parallelism such as compositionality; second, the logic and notation of UNITY are generalized in order to increase their range of applicability; finally, paradigms and abstractions useful for the design of probabilistic parallel algorithms are developed. Taken together, the results presented reaffirm the promise of UNITY as a versatile medium for treating many problems of parallelism.
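UNITY's execution model, which the monograph builds on, can be suggested with a small sketch (plain Python, not the book's notation): a program is a set of assignments; execution repeatedly picks one, under a fairness assumption, and the program has effectively terminated when a fixed point is reached, i.e. no assignment changes the state.

```python
import random

# Hypothetical sketch of UNITY-style execution, not the monograph's code:
# repeatedly apply a nondeterministically chosen assignment until a fixed
# point (no assignment changes the state) is reached.
def run_unity(state, assignments, rng=random.Random(0), max_steps=10_000):
    for _ in range(max_steps):
        if all(assign(dict(state)) == state for assign in assignments):
            return state  # fixed point: every assignment leaves state unchanged
        state = rng.choice(assignments)(dict(state))
    return state

# Example: order two variables with a single swap-if-out-of-order assignment.
swap = lambda s: {**s, "x": min(s["x"], s["y"]), "y": max(s["x"], s["y"])}
print(run_unity({"x": 9, "y": 2}, [swap]))  # {'x': 2, 'y': 9}
```

Properties of such programs are then stated over all fair executions, which is what UNITY's logic reasons about.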
Parallel programming (Computer science) --- Computer software - Verification --- Computer network architectures --- Software engineering --- Computer science --- Logic design --- Computer System Implementation --- Software Engineering/Programming and Operating Systems --- Programming Techniques --- Programming Languages, Compilers, Interpreters --- Logics and Meanings of Programs