Narrow your search

Library

UGent (496)

KU Leuven (479)

ULiège (467)

ULB (465)

UCLL (452)

VIVES (452)

Odisee (451)

Thomas More Kempen (451)

Thomas More Mechelen (451)

KBC (55)


Resource type

book (495)

periodical (2)


Language

English (492)

German (2)

Italian (2)


Year

2025 (1)

2024 (1)

2023 (2)

2021 (5)

2020 (1)

Listing 1 - 10 of 496

Book
Towards a Design Flow for Reversible Logic
Authors: --- ---
ISBN: 9789048195794 9789048195787 Year: 2010 Publisher: Dordrecht : Springer Netherlands : Imprint: Springer,

Abstract

The development of computing machines has seen great success in recent decades, but the ongoing miniaturization of integrated circuits will reach its limits in the near future. Shrinking transistor sizes and power dissipation are the major barriers to the development of smaller and more powerful circuits. Reversible logic provides an alternative that may overcome many of these problems. For low-power design, reversible logic offers significant advantages, since zero power dissipation is possible only if computation is reversible. Quantum computation also profits from advances in this area, because every quantum circuit is inherently reversible and thus requires reversible descriptions. However, since reversible logic is subject to certain restrictions (e.g. fanout and feedback are not directly allowed), the design of reversible circuits differs significantly from the design of traditional circuits. Nearly all steps in the design flow (such as synthesis, verification, and debugging) must be redeveloped so that they become applicable to reversible circuits as well. Research in reversible logic is still in its early stages, and no complete design flow exists so far. Towards a Design Flow for Reversible Logic presents contributions to such a design flow, including advanced methods for synthesis, optimization, verification, and debugging. Formal methods such as Boolean satisfiability and decision diagrams are exploited throughout. By combining the techniques proposed in the book, it is possible to synthesize reversible circuits representing large functions, while optimization approaches ensure that the resulting circuits have low cost. Finally, a method for equivalence checking and automatic debugging makes it possible to verify the obtained results and helps accelerate the search for bugs when the design contains errors. Combining these approaches yields a first design flow for reversible circuits of significant size.
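
To make the reversibility restriction concrete, here is a minimal sketch (in Python; the function name and the demonstration are illustrative, not taken from the book) of the Toffoli gate, the standard universal gate of reversible logic. Because the gate is a bijection on bit-triples, every output determines a unique input, and the circuit can always be run backwards.

    from itertools import product

    def toffoli(a, b, c):
        """Toffoli (CCNOT) gate: flips target bit c iff both controls a and b are 1."""
        return a, b, c ^ (a & b)

    # Reversibility check: the gate permutes the 8 possible bit-triples
    # and is its own inverse, so no information is ever destroyed.
    outputs = [toffoli(*bits) for bits in product((0, 1), repeat=3)]
    assert len(set(outputs)) == 8  # bijective: no two inputs collide
    assert all(toffoli(*toffoli(*bits)) == bits for bits in product((0, 1), repeat=3))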


Book
The Datacenter as a Computer : An Introduction to the Design of Warehouse-Scale Machines
Authors: ---
ISBN: 3031017226 3031005945 Year: 2009 Publisher: Cham : Springer International Publishing : Imprint: Springer,

Abstract

As computation continues to move into the cloud, the computing platform of interest no longer resembles a pizza box or a refrigerator, but a warehouse full of computers. These new large datacenters are quite different from traditional hosting facilities of earlier times and cannot be viewed simply as a collection of co-located servers. Large portions of the hardware and software resources in these facilities must work in concert to efficiently deliver good levels of Internet service performance, something that can only be achieved by a holistic approach to their design and deployment. In other words, we must treat the datacenter itself as one massive warehouse-scale computer (WSC). We describe the architecture of WSCs, the main factors influencing their design, operation, and cost structure, and the characteristics of their software base. We hope this book will be useful to architects and programmers of today's WSCs, as well as to those of future many-core platforms which may one day implement the equivalent of today's WSCs on a single board. Table of Contents: Introduction / Workloads and Software Infrastructure / Hardware Building Blocks / Datacenter Basics / Energy and Power Efficiency / Modeling Costs / Dealing with Failures and Repairs / Closing Remarks.
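
The scale argument behind the "Dealing with Failures and Repairs" chapter can be made concrete with simple arithmetic; the numbers below are illustrative assumptions, not figures from the book.

    # Back-of-the-envelope: even with very reliable servers, a warehouse-scale
    # computer sees failures constantly, so software must tolerate them.
    servers = 10_000
    mtbf_hours = 5 * 365 * 24                        # assumed per-server MTBF: 5 years
    cluster_failures_per_hour = servers / mtbf_hours
    print(f"~{cluster_failures_per_hour:.2f} failures/hour, "
          f"i.e. one every ~{1 / cluster_failures_per_hour:.1f} hours")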


Periodical
Journal of Electronics and Information Science.
ISSN: 2371-9532 2371-9524 Year: 2016 Publisher: Ontario, Canada ; Xinghualing District, Taiyuan, China ; Great Missenden, Buckinghamshire, UK ; Central, Hong Kong : Clausius Scientific Press,

Abstract

"The journal promotes and expedites the dissemination of new research results.There is an exciting and large volume of research activity in the field worldwide.The goal of this journal is to provide a platform for academicians and scientists all over the world to share, promote, and discuss various new issues and developments in different areas of electronics engineering and information science"--"Aims & scope", viewed March 4, 2020.


Book
Single-Event Effects, from Space to Accelerator Environments : Analysis, Prediction and Hardening by Design
Authors: --- --- ---
ISBN: 3031717236 3031717228 Year: 2025 Publisher: Cham : Springer International Publishing : Imprint: Springer,

Abstract

This book describes the fundamental concepts underlying radiation-induced failure mechanisms in electronic components operating in harsh environments, such as in space missions or in particle accelerators. In addition to providing an extensive overview of the dynamics and composition of different radiation environments, the authors discuss the failure mechanisms, known as single-event effects (SEEs), and dedicated failure modeling and prediction methodologies. Additionally, novel radiation-hardening-by-design (RHBD) techniques at physical layout and circuit levels are described. Readers who are newcomers to this field will learn the fundamental concepts of particle interaction physics and electronics hardening design, starting from the composition and dynamics of radiation environments and their effects on electronics, to the qualification and hardening of components. Experienced readers will enjoy the comprehensive discussion of the state of the art in modeling, simulation, and analysis of radiation effects developed in recent years, especially the outcome of the recent European project RADSAGA. Describes both the fundamental concepts underlying radiation effects in electronics and state-of-the-art hardening methodologies; addresses failure mechanisms, known as single-event effects (SEEs), and dedicated failure modeling and prediction methodologies; reveals novel radiation-hardening-by-design (RHBD) techniques at physical layout and circuit levels; offers readers the first book in which particle accelerator applications are extensively included in the radiation-effects context. This is an open access book.
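
As one concrete illustration of hardening by design, triple modular redundancy (TMR) replicates a computation and votes on the result, so a single-event upset in any one replica is masked. The sketch below (Python) is a generic textbook example, not code from this book.

    def majority_vote(a: int, b: int, c: int) -> int:
        """Bitwise 2-of-3 majority voter, the core of triple modular redundancy."""
        return (a & b) | (a & c) | (b & c)

    # A single-event upset flips a bit in one replica; the voter masks it.
    golden = 0b1011_0101
    upset = golden ^ 0b0000_0100      # one replica suffers a bit flip
    assert majority_vote(golden, golden, upset) == golden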


Book
Brain and Human Body Modeling 2020 : Computational Human Models Presented at EMBC 2019 and the BRAIN Initiative® 2019 Meeting
Authors: --- ---
ISBN: 3030456234 3030456226 Year: 2021 Publisher: Springer Nature

Abstract

This open access book describes modern applications of computational human modeling in an effort to advance neurology, cancer treatment, and radio-frequency studies, including regulatory, safety, and wireless communication fields. Readers working on any application that may expose human subjects to electromagnetic radiation will benefit from this book’s coverage of the latest models and techniques available to assess a given technology’s safety and efficacy in a timely and efficient manner. Describes computational human body phantom construction and application; Explains new practices in computational human body modeling for electromagnetic safety and exposure evaluations; Includes a survey of modern applications for which computational human phantoms are critical.
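
Exposure evaluations with computational phantoms typically report the specific absorption rate (SAR). As a minimal sketch of the standard point-SAR relation SAR = sigma * |E|^2 / rho (the tissue values below are illustrative assumptions, not from the book):

    def point_sar(sigma_s_per_m: float, e_rms_v_per_m: float,
                  density_kg_per_m3: float) -> float:
        """Point SAR in W/kg: conductivity * (RMS E-field)^2 / tissue density."""
        return sigma_s_per_m * e_rms_v_per_m**2 / density_kg_per_m3

    # Assumed values: 0.5 S/m, 50 V/m RMS, 1000 kg/m^3 -> 1.25 W/kg.
    print(point_sar(0.5, 50.0, 1000.0))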


Book
Chip Multiprocessor Architecture : Techniques to Improve Throughput and Latency
Authors: --- ---
ISBN: 303101720X 3031005929 Year: 2007 Publisher: Cham : Springer International Publishing : Imprint: Springer,

Abstract

Chip multiprocessors - also called multi-core microprocessors or CMPs for short - are now the only way to build high-performance microprocessors, for a variety of reasons. Large uniprocessors are no longer scaling in performance, because it is only possible to extract a limited amount of parallelism from a typical instruction stream using conventional superscalar instruction issue techniques. In addition, one cannot simply ratchet up the clock speed on today's processors, or the power dissipation will become prohibitive in all but water-cooled systems. Compounding these problems is the simple fact that with the immense numbers of transistors available on today's microprocessor chips, it is too costly to design and debug ever-larger processors every year or two. CMPs avoid these problems by filling up a processor die with multiple, relatively simpler processor cores instead of just one huge core. The exact size of a CMP's cores can vary from very simple pipelines to moderately complex superscalar processors, but once a core has been selected the CMP's performance can easily scale across silicon process generations simply by stamping down more copies of the hard-to-design, high-speed processor core in each successive chip generation. In addition, parallel code execution, obtained by spreading multiple threads of execution across the various cores, can achieve significantly higher performance than would be possible using only a single core. While parallel threads are already common in many useful workloads, there are still important workloads that are hard to divide into parallel threads. The low inter-processor communication latency between the cores in a CMP helps make a much wider range of applications viable candidates for parallel execution than was possible with conventional, multi-chip multiprocessors; nevertheless, limited parallelism in key applications is the main factor limiting acceptance of CMPs in some types of systems. After a discussion of the basic pros and cons of CMPs when they are compared with conventional uniprocessors, this book examines how CMPs can best be designed to handle two radically different kinds of workloads that are likely to be used with a CMP: highly parallel, throughput-sensitive applications at one end of the spectrum, and less parallel, latency-sensitive applications at the other. Throughput-sensitive applications, such as server workloads that handle many independent transactions at once, require careful balancing of all parts of a CMP that can limit throughput, such as the individual cores, on-chip cache memory, and off-chip memory interfaces. Several studies and example systems, such as the Sun Niagara, that examine the necessary tradeoffs are presented here. In contrast, latency-sensitive applications - many desktop applications fall into this category - require a focus on reducing inter-core communication latency and applying techniques to help programmers divide their programs into multiple threads as easily as possible. This book discusses many techniques that can be used in CMPs to simplify parallel programming, with an emphasis on research directions proposed at Stanford University. To illustrate the advantages possible with a CMP using a couple of solid examples, extra focus is given to thread-level speculation (TLS), a way to automatically break up nominally sequential applications into parallel threads on a CMP, and transactional memory, a model that can greatly simplify manual parallel programming by using hardware - instead of conventional software locks - to enforce atomic execution of blocks of instructions, a technique that makes parallel coding much less error-prone. Contents: The Case for CMPs / Improving Throughput / Improving Latency Automatically / Improving Latency using Manual Parallel Programming / A Multicore World: The Future of CMPs.
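
To see what transactional memory changes for the programmer, compare it with the lock-based idiom it replaces. The Python sketch below is only an illustration of the contrast; real transactional memory is a hardware/runtime feature, and the atomic-block syntax shown in the comment is hypothetical.

    import threading

    balance = 0
    balance_lock = threading.Lock()

    def deposit_with_lock(amount: int) -> None:
        """Conventional approach: the programmer must choose and acquire the
        right lock; forgetting it, or ordering locks badly, causes races or
        deadlocks."""
        global balance
        with balance_lock:
            balance += amount

    # With transactional memory the same critical section becomes a plain
    # atomic block; hardware detects conflicts and re-executes as needed:
    #
    #     atomic:               # hypothetical syntax, not real Python
    #         balance += amount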


Book
Fault Tolerant Computer Architecture
Author:
ISBN: 3031017234 3031005953 Year: 2009 Publisher: Cham : Springer International Publishing : Imprint: Springer,

Abstract

For many years, most computer architects have pursued one primary goal: performance. Architects have translated the ever-increasing abundance of ever-faster transistors provided by Moore's law into remarkable increases in performance. Recently, however, the bounty provided by Moore's law has been accompanied by several challenges that have arisen as devices have become smaller, including a decrease in dependability due to physical faults. In this book, we focus on the dependability challenge and the fault tolerance solutions that architects are developing to overcome it. The two main purposes of this book are to explore the key ideas in fault-tolerant computer architecture and to present the current state of the art - developed over approximately the past 10 years - in academia and industry. Table of Contents: Introduction / Error Detection / Error Recovery / Diagnosis / Self-Repair / The Future.
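
The simplest instance of the error-detection techniques surveyed here is a parity bit: one extra bit that catches any single-bit fault in a stored word. A generic sketch (not the book's own example):

    def parity(word: int) -> int:
        """Even-parity bit: XOR of all bits of the word."""
        p = 0
        while word:
            p ^= word & 1
            word >>= 1
        return p

    stored = 0b1101_0010
    check = parity(stored)             # computed when the word is written
    faulty = stored ^ (1 << 4)         # a single-bit physical fault
    assert parity(faulty) != check     # detected (though not correctable)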


Book
Processor Microarchitecture : An Implementation Perspective
Authors: --- ---
ISBN: 3031017293 3031006011 Year: 2011 Publisher: Cham : Springer International Publishing : Imprint: Springer,

Abstract

This lecture presents a study of the microarchitecture of contemporary microprocessors. The focus is on implementation aspects, with discussions of their implications for the performance, power, and cost of state-of-the-art designs. The lecture starts with an overview of the different types of microprocessors and a review of the microarchitecture of cache memories. It then describes the implementation of the fetch unit, where special emphasis is placed on the required support for branch prediction. The next section is devoted to instruction decode, with special focus on the support needed to decode x86 instructions. The next chapter presents the allocation stage and pays special attention to the implementation of register renaming. Afterward, the issue stage is studied; here, the logic to implement out-of-order issue for both memory and non-memory instructions is thoroughly described. The following chapter focuses on instruction execution and describes the different functional units found in contemporary microprocessors, as well as the implementation of the bypass network, which has an important impact on performance. Finally, the lecture concludes with the commit stage, describing how the architectural state is updated and recovered in case of exceptions or misspeculations. This lecture is intended for an advanced course on computer architecture, suitable for graduate students or senior undergraduates who want to specialize in the area of computer architecture, as well as for industry practitioners in the area of microprocessor design. The book assumes that the reader is familiar with the main concepts regarding pipelining, out-of-order execution, cache memories, and virtual memory. Table of Contents: Introduction / Caches / The Instruction Fetch Unit / Decode / Allocation / The Issue Stage / Execute / The Commit Stage / References / Author Biographies.
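
The branch-prediction support discussed in the fetch-unit chapter is classically built from 2-bit saturating counters. The sketch below is a generic illustration of that scheme, not code from the lecture: two consecutive mispredictions are needed before the prediction flips, so the single not-taken exit of a loop branch does not disturb it.

    class TwoBitPredictor:
        """2-bit saturating counter: states 0-1 predict not-taken, 2-3 taken."""

        def __init__(self) -> None:
            self.counter = 1  # start in "weakly not-taken"

        def predict(self) -> bool:
            return self.counter >= 2

        def update(self, taken: bool) -> None:
            if taken:
                self.counter = min(self.counter + 1, 3)
            else:
                self.counter = max(self.counter - 1, 0)

    predictor = TwoBitPredictor()
    for outcome in [True, True, True, False, True]:  # loop-like branch history
        guess = predictor.predict()
        predictor.update(outcome)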


Book
Computer Architecture Techniques for Power-Efficiency
Authors: ---
ISBN: 3031017218 3031005937 Year: 2008 Publisher: Cham : Springer International Publishing : Imprint: Springer,

Abstract

In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time architects were successful in delivering 40% to 50% annual improvements in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these costs is the inexorable increase in power dissipation and power density in processors. Power dissipation issues have catalyzed new topic areas in computer architecture, resulting in a substantial body of work on more power-efficient architectures. Power dissipation, coupled with diminishing performance gains, was also the main cause of the switch from single-core to multi-core architectures and of the slowdown in frequency increases. This book aims to document some of the most important architectural techniques that were invented, proposed, and applied to reduce both dynamic power and static power dissipation in processors and memory hierarchies. A significant number of techniques have been proposed for a wide range of situations, and this book synthesizes those techniques by focusing on their common characteristics. Table of Contents: Introduction / Modeling, Simulation, and Measurement / Using Voltage and Frequency Adjustments to Manage Dynamic Power / Optimizing Capacitance and Switching Activity to Reduce Dynamic Power / Managing Static (Leakage) Power / Conclusions.
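
The voltage and frequency techniques listed in the table of contents rest on the standard dynamic-power relation P_dyn = alpha * C * V^2 * f. A small worked sketch (illustrative numbers, not the book's) shows why voltage scaling is so effective: since attainable frequency falls roughly linearly with supply voltage, dynamic power drops roughly with the cube of voltage.

    def dynamic_power(alpha: float, c_farads: float, v_volts: float, f_hz: float) -> float:
        """Dynamic (switching) power: activity factor * capacitance * V^2 * f."""
        return alpha * c_farads * v_volts**2 * f_hz

    base = dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=1.0, f_hz=2.0e9)
    # First-order DVFS assumption: scale V and f together by 0.8.
    scaled = dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=0.8, f_hz=1.6e9)
    print(scaled / base)  # ~0.51: about 20% slower but ~49% less dynamic power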


Book
Introduction to Reconfigurable Supercomputing
Authors: --- ---
ISBN: 3031017269 3031005988 Year: 2010 Publisher: Cham : Springer International Publishing : Imprint: Springer,

Abstract

This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigurable parallel codes. We hope to show that FPGA acceleration, based on the exploitation of data parallelism, pipelining, and concurrency, remains promising in view of the diminishing improvements in traditional processor and system design. Table of Contents: FPGA Technology / Reconfigurable Supercomputing / Algorithmic Considerations / FPGA Programming Languages / Case Study: Sorting / Alternative Technologies and Concluding Remarks.
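
The sorting case study is a natural FPGA fit because a sorting network is a fixed arrangement of compare-and-swap units that maps directly onto a hardware pipeline. A minimal software sketch of odd-even transposition sort (our illustration of the idea, not code from the book):

    def odd_even_transposition_sort(values: list[int]) -> list[int]:
        """Each pass is a row of independent compare-and-swap units; on an
        FPGA the n passes become n pipeline stages operating concurrently."""
        v = list(values)
        n = len(v)
        for stage in range(n):
            for i in range(stage % 2, n - 1, 2):  # independent swaps per stage
                if v[i] > v[i + 1]:
                    v[i], v[i + 1] = v[i + 1], v[i]
        return v

    assert odd_even_transposition_sort([5, 1, 4, 2, 3]) == [1, 2, 3, 4, 5]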
