Listing 1 - 8 of 8 |
"Recent developments in parallel computing, together with machine learning techniques for handling the huge volume of available data, have brought the faster solutions offered by advanced technologies to various fields of application. This book presents the proceedings of the Virtual International Conference on Advances in Parallel Computing Technologies and Applications (ICAPTA 2021), hosted in Chennai, India, and held online as a virtual event on 15 and 16 April 2021. The aim of the conference was to provide a forum for sharing knowledge on various aspects of parallel computing in communications systems and networking, including cloud and virtualization solutions, management technologies, and vertical application areas. It also provided a platform for scientists, researchers, practitioners and academicians to present and discuss the most recent innovations and trends, as well as the concerns and practical challenges encountered in the field. Included here are 52 full-length papers, accepted from over 100 submissions on the basis of reviews and comments from subject experts. Topics covered include parallel computing in communication, machine learning intelligence for parallel computing, and parallel computing for software services, in both theoretical and practical aspects. Providing an overview of the latest developments in the field, the book will be of interest to all those whose work involves the use of parallel computing technologies"--
This book constitutes the refereed proceedings of the 11th International Symposium on Parallel Architectures, Algorithms and Programming, PAAP 2020, held in Shenzhen, China, in December 2020. The 37 revised full papers presented were carefully reviewed and selected from 75 submissions. The papers deal with research results and development activities in all aspects of parallel architectures, algorithms and programming techniques.
Microprocessors. --- Processor Architectures. --- Minicomputers --- Parallel algorithms --- Computer algorithms --- Parallel programming (Computer science) --- Algorithms
XcalableMP is a directive-based parallel programming language based on Fortran and C, supporting a Partitioned Global Address Space (PGAS) model for distributed-memory parallel systems. This open access book presents the XcalableMP language, from its programming model and basic concepts to the experience and performance of applications written in XcalableMP. XcalableMP was adopted as a parallel programming language project in the FLAGSHIP 2020 project, which developed the Japanese flagship supercomputer Fugaku, with the goal of improving the productivity of parallel programming. XcalableMP is now available on Fugaku, and its performance is enhanced by the Fugaku interconnect, Tofu-D. The global-view programming model of XcalableMP, inherited from High Performance Fortran (HPF), provides an easy and useful way to parallelize data-parallel programs with directives for distributed global arrays, work distribution, and shadow communication. The local-view programming model adopts coarray notation from Coarray Fortran (CAF) to describe explicit communication in a PGAS model. The language specification was designed and proposed by the XcalableMP Specification Working Group, organized in the PC Consortium, Japan. The Omni XcalableMP compiler is a production-level reference implementation of the XcalableMP compiler for C and Fortran 2008, developed by RIKEN CCS and the University of Tsukuba. The performance of XcalableMP programs was evaluated on Fugaku as well as the K computer. A performance study showed that XcalableMP achieves scalable performance comparable to the message passing interface (MPI) version, with a clean and easy-to-understand programming style requiring little effort.
Programming languages (Electronic computers). --- Programming Languages, Compilers, Interpreters. --- Computer languages --- Computer program languages --- Computer programming languages --- Machine language --- Electronic data processing --- Languages, Artificial --- Programming Languages, Compilers, Interpreters --- PGAS model --- Partitioned Global Address Space model --- Coarray --- parallel programming language --- high performance computing --- Open Access --- Programming & scripting languages: general --- Compilers & interpreters
"The Portable, Extensible Toolkit for Scientific Computation (PETSc) is an open-source library of advanced data structures and methods for solving linear and nonlinear equations and for managing discretizations. This book uses these modern numerical tools to demonstrate how to solve nonlinear partial differential equations (PDEs) in parallel. It starts from key mathematical concepts, such as Krylov space methods, preconditioning, multigrid, and Newton's method. In PETSc these components are composed at run time into fast solvers. Discretizations are introduced from the beginning, with an emphasis on finite difference and finite element methodologies. The example C programs of the first 12 chapters, listed on the inside front cover, solve (mostly) elliptic and parabolic PDE problems. Discretization leads to large, sparse, and generally nonlinear systems of algebraic equations. For such problems, mathematical solver concepts are explained and illustrated through the examples, with sufficient context to speed further development. PETSc for Partial Differential Equations: addresses both discretization and fast solvers for PDEs; emphasizes practice more than theory; contains well-structured examples, with advice on run-time solver choices; demonstrates how to achieve high performance and parallel scalability; and builds on the reader's understanding of fast solver concepts when applying the Firedrake Python finite element solver library in the last two chapters." [Publisher]
Differential equations, Partial --- Équations aux dérivées partielles --- Numerical analysis. --- Analyse numérique. --- Parallel programming (Computer science) --- Programmation parallèle (informatique) --- C (Computer program language) --- C (langage de programmation) --- Python (Computer program language) --- Python (langage de programmation) --- Computer programs. --- Logiciels.
This book constitutes the thoroughly refereed post-conference proceedings of the 32nd International Workshop on Languages and Compilers for Parallel Computing, LCPC 2019, held in Atlanta, GA, USA, in October 2019. The 8 revised full papers and 3 revised short papers were carefully reviewed and selected from 17 submissions. The scope of the workshop includes advances in programming systems for current domains and platforms, e.g., scientific computing, batch/streaming/real-time data analytics, machine learning, cognitive computing, heterogeneous/reconfigurable computing, mobile computing, cloud computing, and IoT, as well as forward-looking computing domains such as analog and quantum computing.
Parallel programming (Computer science) --- Parallelizing compilers --- Parallel processing (Electronic computers) --- Programming languages (Electronic computers) --- Compilers (Computer programs) --- Compilers (Computer programs). --- Computer systems. --- Computer programming. --- Microprocessors. --- Computer architecture. --- Compilers and Interpreters. --- Computer System Implementation. --- Programming Techniques. --- Processor Architectures. --- Architecture, Computer --- Minicomputers --- Computers --- Electronic computer programming --- Electronic data processing --- Electronic digital computers --- Programming (Electronic computers) --- Coding theory --- ADP systems (Computer systems) --- Computing systems --- Systems, Computer --- Electronic systems --- Cyberinfrastructure --- Compiling programs (Computer programs) --- Computer programs --- Programming software --- Systems software --- Programming
It is the combination of mathematical ideas and efficient programs that drives progress in many scientific disciplines: the faster results can be generated on a computer, the bigger and more accurate the challenges that can be tackled. This textbook targets students who have programming skills and do not shy away from mathematics, though they might be educated in computer science or an application domain and have no primary interest in the maths. The book is for students who want to see some simulations up and running. It introduces the basic concepts and ideas behind applied mathematics and parallel programming that are needed to write numerical simulations for today's multicore workstations. The intention is not to dive into one particular application domain or to introduce a new programming language; rather, it is to lay the generic foundations for future studies and projects in this field. Topics and features:
• Fits into many degrees where students have already been exposed to programming languages
• Pairs an introduction to mathematical concepts with an introduction to parallel programming
• Emphasises the paradigms and ideas behind code parallelisation, so students can later transfer their knowledge and skills
• Illustrates fundamental numerical concepts, preparing students for more formal textbooks
The easily digestible text prioritises clarity and intuition over formalism, illustrating basic ideas that are of relevance in various subdomains of scientific computing. Its primary goal is to make theoretical and paradigmatic ideas accessible and even fascinating to undergraduate students. Tobias Weinzierl is professor in the Department of Computer Science at Durham University, Durham, UK. He previously worked at the Munich Centre for Advanced Computing (see the Springer edited book Advanced Computing), and holds a PhD and habilitation from the Technical University of Munich.
Computer science --- Programming --- Computer architecture. Operating systems --- Computer. Automation --- computers --- informatica --- computerbesturingssystemen --- programmeren (informatica) --- wiskunde --- informaticaonderzoek --- Parallel processing (Electronic computers) --- Science --- Data processing. --- Electronic data processing --- High performance computing --- Multiprocessors --- Parallel programming (Computer science) --- Supercomputers --- Computers. --- Computer programming. --- Electronic digital computers --- Mathematics --- Mathematics of Computing. --- Hardware Performance and Reliability. --- Programming Techniques. --- System Performance and Evaluation. --- Computational Science and Engineering. --- Mathematics. --- Evaluation. --- Computers --- Electronic computer programming --- Programming (Electronic computers) --- Coding theory --- Automatic computers --- Automatic data processors --- Computer hardware --- Computing machines (Computers) --- Electronic brains --- Electronic calculating-machines --- Electronic computers --- Hardware, Computer --- Computer systems --- Cybernetics --- Machine theory --- Calculators --- Cyberspace --- Computer mathematics
Learn how to accelerate C++ programs using data parallelism. Data parallelism in C++ enables access to parallel resources in a modern heterogeneous system, freeing you from being locked into any particular computing device. Now a single C++ application can use any combination of devices—including GPUs, CPUs, FPGAs and AI ASICs—that are suitable to the problems at hand. This open access book enables C++ programmers to be at the forefront of this exciting and important new development that is helping to push computing to new levels. It is full of practical advice, detailed explanations, and code examples to illustrate key topics. This book teaches data-parallel programming using C++ and the SYCL standard from the Khronos Group and walks through everything needed to use SYCL for programming heterogeneous systems. The book begins by introducing data parallelism and foundational topics for effective use of SYCL and Data Parallel C++ (DPC++), the open source compiler used in this book. Later chapters cover advanced topics including error handling, hardware-specific programming, communication and synchronization, and memory model considerations. You will learn:
• How to accelerate C++ programs using data-parallel programming
• How to target multiple device types (e.g. CPU, GPU, FPGA)
• How to use SYCL and SYCL compilers
• How to connect with computing's heterogeneous future via Intel's oneAPI initiative
Programming languages (Electronic computers). --- Computer input-output equipment. --- Programming Languages, Compilers, Interpreters. --- Hardware and Maker. --- Computer hardware --- Computer I/O equipment --- Computers --- Electronic analog computers --- Electronic digital computers --- Hardware, Computer --- I/O equipment (Computers) --- Input equipment (Computers) --- Input-output equipment (Computers) --- Output equipment (Computers) --- Computer systems --- Computer languages --- Computer program languages --- Computer programming languages --- Machine language --- Electronic data processing --- Languages, Artificial --- Input-output equipment --- Programming Languages, Compilers, Interpreters --- Hardware and Maker --- Maker --- heterogenous --- FPGA programming --- GPU programming --- Parallel programming --- Data parallelism --- SYCL --- Intel One API --- Programming & scripting languages: general --- Compilers & interpreters --- Heterogeneous computing. --- C++ (Computer program language) --- OpenCL (Computer program language) --- Open CL (Computer program language) --- Open Computing Language (Computer program language) --- Programming languages (Electronic computers) --- Heterogeneous processing (Computers) --- High performance computing --- Parallel processing (Electronic computers)