Narrow your search

Library

KU Leuven (49)

KBR (1)

Thomas More Mechelen (1)


Resource type

dissertation (49)

book (1)


Language

English (49)


Year

2022 (13)

2021 (6)

2020 (10)

2019 (7)

2018 (5)

Listing 1 - 10 of 49 (page 1 of 5)

Multi
Partial and dynamic FPGA reconfiguration for security applications
Authors: --- ---
ISBN: 9789491857027 Year: 2014 Publisher: Leuven KU Leuven. Faculty of Engineering Technology


Dissertation
Design and modeling of inverter control for fault behavior and power system protection analysis
Authors: --- --- ---
Year: 2022 Publisher: Gent KU Leuven. Faculty of Engineering Technology


Abstract

Renewable energy resources will form the cornerstone of future power systems and present a key solution to the grand challenge of mitigating climate change. However, the proliferation of renewables has brought a myriad of challenges involved in operating a power system with notable shares of inverter-based resources (IBRs). This has sparked several concerns pertaining to stability, protection, and continuity of power supply at large. As a solution to a wide body of such concerns, a consensus is forming toward the adoption of grid-forming (GFM) inverter technology to interface renewables with the grid. In the GFM paradigm, IBRs do not follow the grid; rather, they form it and offer voltage and frequency regulation much like conventional synchronous generators. Yet, their performance during faults, in particular unbalanced ones, and their impact on power system protection remain largely uncharted territory. The present work investigates the (unbalanced) fault behavior of grid-forming inverters and their effect on traditional distance protection. First, a novel grid-forming inverter control architecture is proposed which exhibits better performance during faults in terms of grid support. Enhanced virtual-impedance and current-reference saturation limiting strategies are introduced, along with an elaborate and generic control-design procedure. Electromagnetic transient simulations in MATLAB®/Simulink of an all-inverter IEEE 14-bus network illustrate the voltage support benefits achievable with the proposed GFM controls. Second, the effect of grid-forming inverter controls on the performance of traditional distance protection is investigated. Different incarnations of GFM controls are taken into consideration for this analysis. Full-order EMT simulations, analytical steady-state fault models, and a hybrid hardware-software setup illustrate the superiority of the proposed GFM controls in terms of protection.
Finally, a novel method is proposed for detecting a fault on a line connected to IBRs. The method is independent of the inverter's control structure and does not require high-bandwidth communication. As such, the method is agnostic to the presence of inverter-based resources in the grid.
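To illustrate what a current-reference saturation limiter does during a fault, here is a generic circular-limiting sketch: the dq-frame current reference is clipped to the inverter's rated magnitude while preserving its angle. This is a common baseline, not the enhanced strategy proposed in the thesis, and the variable names are assumptions:

```python
import math

def saturate_current_reference(id_ref, iq_ref, i_max):
    """Clip a dq-frame current reference to the inverter's rated magnitude
    i_max, preserving the reference angle (circular saturation).
    Hypothetical helper for illustration, not the thesis's limiter."""
    magnitude = math.hypot(id_ref, iq_ref)
    if magnitude <= i_max:
        return id_ref, iq_ref          # within rating: pass through unchanged
    scale = i_max / magnitude          # shrink radially onto the current limit circle
    return id_ref * scale, iq_ref * scale
```

During a fault the unsaturated reference can far exceed the semiconductor rating; circular clipping keeps the magnitude at `i_max`, which is why more refined (e.g. virtual-impedance) strategies matter for preserving grid support while limited.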



Dissertation
Heterojunction Tunnel FETs using 2D Materials as Channel


Abstract

2D materials research has been shifting towards novel electronic and optical applications apart from conventional MOSFETs. Their atomically flat surfaces and self-passivated layers offer potentially defect-free inter-layer tunneling. Band-to-band tunneling field-effect transistors (TFETs) have caught the attention of industry and academia for over a decade in CMOS scaling with the promise of obtaining a steep subthreshold swing, SS < 60 mV/dec, at room temperature. Achieving a low supply voltage while obtaining a high enough on-current and a steep SS is crucial in 2D TFETs for future CMOS technologies. However, when compared to simulations, experiments are still far from reaching the required performance. Hence, the goal of this thesis is to systematically identify and characterize the parasitics limiting a high ION and a steep SS in 2D TFETs. We achieve this by fabricating 2D heterojunction TFETs based on MoS2-MoTe2 and ReS2-BP. The parasitics that we focus on are 1) Schottky barriers at the contacts, 2) the impact of different current components on TFET performance, 3) the impact of multiple layers on BTBT and gate electrostatics, 4) the device gate configuration and its effect on BTBT transport, 5) the impact of indirect and direct BTBT on ION and SS, 6) point tunneling, and 7) material anisotropy and its effect on carrier transport.

In the first part, on MoS2-MoTe2 TFETs, we perform our experiments using three different gate configurations. These allow us to address the transport mechanisms characteristic of each configuration. Due to our inability to dope the contact regions, we observe significant degradation of the BTBT current. In order to isolate the contacts' influence, we then introduce a contact-gated architecture that decouples the influence of the contacts from the channel. We also assess the long tunneling paths arising from tunneling across multiple 2D layers using quantum transport simulations. These findings provide additional insights for investigating the impact of gate configuration and indirect tunneling on device performance.

In the second part, BP-ReS2 TFETs are fabricated with different flake thicknesses to identify the most favorable configuration for TFETs. Further optimizations are demonstrated to reduce the equivalent oxide thickness (EOT) of the gate dielectric, to obtain a lower SS and to reduce the gate leakage. From the electrical measurements, we demonstrate that tunneling happens only at the edge of the heterojunction, which is also known as point tunneling. Finally, we study the anisotropic transport in BP-ReS2 TFETs by investigating the effect of the anisotropy in BP on the BTBT current and the SS of the TFET. Combining all the outcomes of this work, the thesis provides the necessary framework to implement 2D TFETs for beyond-CMOS technology.
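For reference, the 60 mV/dec figure mentioned above is the Boltzmann limit of the subthreshold swing for thermionic injection; this is standard device physics, not specific to this thesis:

```latex
% Subthreshold swing: gate voltage needed for a tenfold increase in drain current
SS = \left( \frac{\partial \log_{10} I_D}{\partial V_{GS}} \right)^{-1}
% For thermionic (MOSFET-like) injection, SS is bounded by the Boltzmann limit
SS \ge \frac{kT}{q} \ln 10 \approx 60\ \mathrm{mV/dec} \quad \text{at } T = 300\ \mathrm{K}
```

Band-to-band tunneling filters out the thermal tail of the source carrier distribution, which is why a TFET can, in principle, achieve SS below this limit.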



Dissertation
Tractable Approximations for Achieving Higher Model Efficiency in Computer Vision


Abstract

The 2010s have seen the first large-scale successes of computer vision "in the wild", paving the way for industrial applications. Thanks to the formidable increase of processing power in consumer electronics, convolutional neural networks have led the way in this revolution. With enough supervision, these models have proven able to surpass human accuracy on many vision tasks. However, rather than focusing exclusively on accuracy, it is increasingly important to design algorithms that operate within the bounds of a computational budget, in terms of latency, memory, or energy consumption. The adoption of vision algorithms in time-critical decision systems (such as autonomous driving) and in edge computing (e.g. in smartphones) makes this quest for efficiency a central challenge in machine learning research.

How can the optimization of existing models be improved, in order to reach higher accuracy without affecting the processing requirements? Alternatively, can we search for models that fit the processing requirements while improving the accuracy on the task? In this thesis, we consider both of these questions, which are two sides of the same coin. On one hand, we develop novel methods for learning model parameters in a supervised fashion, improving the accuracy on the target task without affecting the efficiency of these models at test-time. On the other, we study the problem of model search, where the model itself must be selected among a family of models in order to achieve satisfactory accuracy under the resource constraints.

Chapter 3 introduces the probably submodular framework for learning the weights of pairwise random graphical models. Graphical models are expressive and popular models, used notably in semantic segmentation. However, their inference is NP-hard in general. In order to ensure efficient inference, it is necessary to constrain the weights learned during training. Popular tractability constraints are "definitely submodular" constraints: they ensure that the local potential functions of the model are submodular for any input at test-time. We show that these constraints are often too conservative. Rather than enforcing that the graphical model is submodular for any input graph, it is sufficient to ensure submodularity with high probability for the data distribution of the task. We show on several semantic segmentation and multi-label classification datasets the superiority of this approach, validating the corresponding gain in model expressivity and accuracy, without compromising the efficient inference at test-time.

Chapter 4 presents improved optimization methods to reduce the test-time error of semantic segmentation models, by introducing novel task-specific losses. In recent years, convolutional neural networks have dominated the state of the art in semantic segmentation. These networks are usually trained with a cross-entropy loss, which is easy to use within first-order optimization schemes. However, segmentation benchmarks are usually evaluated under other metrics, such as the intersection-over-union measure, or Jaccard index. A direct optimization of this measure, while challenging, can yield a lower error rate. Such gains are relevant to applications, as the Jaccard index has been shown to be closer to human perception, and benefits from scale-invariance properties. Using the Lovász extension of submodular set functions, we develop tractable surrogates for the optimization of the Jaccard index in the binary and multi-label settings, compatible with first-order optimizers. We demonstrate the gains of our method in terms of the target metric on binary and multi-label semantic segmentation problems, using state-of-the-art convolutional networks on the Pascal VOC and Cityscapes datasets.

Chapter 5 considers the problem of neural architecture search, where one wants to select the best-performing model satisfying the computational requirements among a large search space. We aim to adjust the channel numbers of a given neural network architecture, i.e. the number of convolutional filters in each layer of the model. We first develop a method to predict the latency of the model given its channel numbers, relying on least-squares estimation of the predictor without the need to access low-level details about the computation on the inference engine. We then build a proxy for the model error that decomposes additively over individual channel choices, by using aggregated training statistics of a slimmable model on the same search space. The association of the pairwise latency model and the unary error estimates leads to an objective that can be optimized efficiently using the Viterbi algorithm, yielding the OWS method. A refinement of OWS, named AOWS, adaptively restricts the search space towards optimal channel configurations during the training of the slimmable network. We validate our approach over several inference modalities and show improved final performance of the selected models within given computational budgets.

Overall, this thesis proposes novel methods for improving the accuracy/efficiency tradeoff of contemporary machine learning models, using methods derived from first principles and validated through experimentation on several contemporary computer vision problems. This research paves the way towards a smarter usage of the computational resources of machine learning methods, curbing the trend for "wider and deeper" models in order to face the challenges of time-critical and carbon-neutral AI.
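The Lovász-extension surrogate developed in the thesis is more elaborate, but the underlying idea of directly optimizing the Jaccard index with a differentiable proxy can be illustrated with a minimal soft-IoU loss. This is a plain-Python sketch, not the Lovász hinge itself, and the flattened per-pixel arrays `probs` and `labels` are hypothetical:

```python
def soft_jaccard_loss(probs, labels, eps=1e-8):
    """Differentiable surrogate for 1 - IoU on binary segmentation.
    probs:  predicted foreground probabilities in [0, 1], flattened per pixel
    labels: ground-truth 0/1 labels, same length
    Replacing hard set sizes with sums of probabilities makes the
    Jaccard index amenable to first-order optimization."""
    intersection = sum(p * y for p, y in zip(probs, labels))
    union = sum(p + y - p * y for p, y in zip(probs, labels))
    return 1.0 - intersection / (union + eps)
```

A perfect prediction drives the loss to zero, a fully disjoint one to 1.0; unlike per-pixel cross-entropy, the loss is invariant to the scale of the foreground region, which mirrors the scale-invariance property of the Jaccard index mentioned above.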



Dissertation
Energy-efficient and secure implementations for the IoT
Authors: --- --- ---
Year: 2020 Publisher: Diepenbeek KU Leuven. Faculty of Engineering Technology


Abstract

The IoT, or Internet of Things, is defined by the Internet Engineering Task Force (IETF), among others, as the network of physical objects or "things" embedded with electronics, software, sensors, and connectivity that enable these objects to exchange data with the manufacturer, operator, and/or other connected devices. Since the creation of the concept in 1999, the IoT has gained a lot of popularity, and it is increasingly used in various environments. This popularity is also the source of the numerous challenges that characterise the IoT. This dissertation focuses on three important challenges in providing IoT security: heterogeneity, performance, and, foremost, energy efficiency. The IoT is a heterogeneous environment due to the variety of devices, network architectures, and wireless communication technologies that are used. Furthermore, these IoT devices typically have to adhere to performance requirements while having limited storage and computation capabilities, because they are battery-powered.

The contribution of this dissertation is four-fold. The first contribution consists of providing end-to-end security in a heterogeneous environment. For this purpose, the use of object security is analysed and applied in a proof-of-concept system. The second contribution is the analysis and optimisation of using coupons in constrained devices. This technique reduces the computation cost of generating digital signatures. The third contribution is the in-depth analysis of the energy consumption of security algorithms in terms of both computation and communication cost. This energy-security analysis is used to define an approach for optimising the duration of security sessions to reduce the energy impact of a session-based security protocol. Finally, the fourth contribution is a healthcare use case, providing end-to-end security using a public-key approach on the one hand and a symmetric-key approach on the other.

In summary, this dissertation describes the research outcomes of the in-depth and practical study of the aforementioned security techniques to provide energy-efficient, performance-aware, end-to-end security in a heterogeneous IoT environment.
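The coupon technique mentioned above is typically applied to Schnorr-like signatures: the costly modular exponentiation is precomputed offline as a "coupon", so the online signing step on the battery-powered device reduces to one hash and a little modular arithmetic. A toy Python sketch under that assumption follows; the tiny group parameters are illustrative only and offer no security:

```python
import hashlib
import secrets

# Toy Schnorr-style group, illustration only: p = 2q + 1, g of prime order q.
P, Q, G = 467, 233, 4

def make_coupon():
    """Offline phase: precompute the expensive modular exponentiation."""
    k = secrets.randbelow(Q - 1) + 1   # ephemeral nonce; a coupon must never be reused
    r = pow(G, k, P)                   # the costly operation, done ahead of time
    return k, r

def sign_with_coupon(x, coupon, message):
    """Online phase on the constrained device: hash plus cheap arithmetic."""
    k, r = coupon
    e = int.from_bytes(hashlib.sha256(r.to_bytes(2, "big") + message).digest(), "big")
    s = (k + x * e) % Q
    return e, s

def verify(y, message, signature):
    """Recover r = g^s * y^(-e) and check it hashes back to e."""
    e, s = signature
    r = (pow(G, s, P) * pow(y, -e, P)) % P
    return e == int.from_bytes(hashlib.sha256(r.to_bytes(2, "big") + message).digest(), "big")
```

The energy saving comes from moving `pow(G, k, P)` off the critical path, e.g. to idle periods or to a provisioning server; the online cost is then dominated by the hash.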



Dissertation
Modeling the perception of light sources.
Authors: --- --- ---
Year: 2019 Publisher: Leuven KU Leuven. Faculty of Engineering Technology


Abstract

Colour Appearance Models (CAMs) attempt to predict the colour appearance of a stimulus by taking the physical properties of the stimulus and its surroundings into account. The fundamental goal is to look for correlates between the measured optical spectral data of a stimulus and its surroundings on the one hand, and the corresponding perceptual attributes on the other. There are three absolute colour attributes (Brightness, Colourfulness and Hue) and three relative colour attributes (Lightness, Chroma and Saturation).

Most existing CAMs are developed to describe the perception of related surface colours, which implicitly assumes the presence of a light source illuminating the target and one or more other surfaces. One of them, CIECAM02, is recommended by the Commission Internationale de l'Éclairage (CIE). Recently, a new CAM (CAM15u) was developed for unrelated light sources (i.e. self-luminous stimuli seen in a completely dark environment). However, at this moment, there is no colour appearance model for light sources seen in a specific luminous context (e.g. a traffic signal in daytime).

A first crucial step is to gather relevant optical data (spectral radiances) and the corresponding perceptual data on the colour attributes from observers. With this new data, a new and comprehensive colour appearance model will be developed for these self-luminous stimuli seen in relation to a self-luminous background, using the spectral optical data of the stimulus and the background as basic input, and taking into account as much as possible the physiological and neurological processes of the visual system. This new model can immediately be used when investigating the brightness perception of LED signalisation.



Dissertation
Advanced Freeform Optics for Illumination Applications
Authors: --- --- ---
Year: 2021 Publisher: Leuven KU Leuven. Faculty of Engineering Technology


Abstract

Freeform lenses are lenses with one or more surfaces without a specific symmetry. In the context of lighting, they are used to reshape the light distribution from a light source into a target distribution with a specific intensity or illuminance pattern. With the advent of white high-power LEDs, light sources have become significantly smaller compared to earlier light sources such as incandescent bulbs or fluorescent lamps. This size reduction allows the beam-shaping optics to be significantly smaller, which has made compact freeform optics commercially relevant for lighting applications.

Optical design for lighting applications comes with a range of specific challenges and design criteria. The design problems are often highly asymmetric and non-paraxial, and the lens design has to take overall manufacturability and cost into account. Another design criterion that is very important for lighting applications is the visual appearance of the lighting system. This aspect encompasses the minimization of visual discomfort or glare caused by the luminaires. The high luminance of LEDs, due to their small size and high luminous flux, can be a major source of glare. LED-based luminaires require strict control over the outgoing intensity distribution in order to limit the observed luminance in the field of view of the observers. This can be achieved with freeform lenses. However, typical freeform lenses do not reduce the peak luminance of the emitted light in the near-field. The most common way to spread the emitted light over a larger surface area, and thereby reduce the source luminance, is using volume- or surface-scattering diffusers. These components, however, offer very little control over the resulting intensity distribution. A much more effective method to obtain both a very accurate intensity pattern and a reduced peak luminance is the use of freeform lens arrays.

Therefore, in this work, lens design algorithms have been developed for such freeform arrays in both 2D and 3D. The design of freeform lens arrays in 3D comes with several new challenges compared to the design of single-channel freeform lenses. One of the difficulties that arise for freeform lens arrays is the footprint, or edge, of the individual lens elements, which should adhere to certain conditions. For this purpose, a new freeform design algorithm was developed for lenses with an arbitrary lens contour. This method was further expanded into a new, flexible ray-mapping method for off-axis and non-paraxial freeform lenses. This method formed the basis of the design algorithm for luminance-spreading freeform lens arrays with accurate intensity control.

Manufacturability is another important aspect for commercial lighting applications. Due to the non-paraxial nature of many lighting design problems, the resulting lenses can become quite voluminous. Such lenses are more cumbersome to mass-manufacture, as they require a large volume of material and long cycle times. Fresnel lenses are an effective method to create large optical components with a relatively small volume. For rotationally symmetric lenses, the conversion of a regular lens to a Fresnel lens is quite trivial, but this is much harder in the case of freeform lenses. In this work, a novel method to design freeform Fresnel lenses is also presented. Unlike previous freeform Fresnel lens concepts, which use lens facets, the method presented here constructs a freeform Fresnel lens out of concentric rings, similar to classical Fresnel lenses. This reduces the number of discontinuities considerably, which in turn decreases stray light.



Dissertation
Radiation Hardened CMOS Integrated Circuits for Time-Based Signal Processing
Authors: --- --- ---
Year: 2017 Publisher: Leuven KU Leuven. Faculty of Engineering Technology


Abstract

The goal of this research was to develop and test integrated CMOS circuits for radiation-tolerant time-based signal processing with picosecond accuracy for nuclear applications and high-energy physics. The main applications for which these circuits were developed are time-based readout interfaces in high-energy physics particle detectors, and clock generation and data transmission for these detectors. During this research, a radiation-tolerant Time-to-Digital Converter (TDC) and a low-noise clock synthesizer were designed and optimized for the particle detectors at CERN. A short overview of radiation effects and mitigation techniques for ionizing radiation is given, together with a discussion of the practical aspects required in modern TDCs and frequency synthesizers.

A high-resolution TDC is presented, with a discussion of the design aspects and practical circuit implementations required in nuclear environments. The TDC is based on a Delay-Locked Loop (DLL) that has two phase-detection circuits to speed up recovery after an energetic particle disturbs the circuit. The functionality of the DLL ensures that the timing resolution of the TDC remains the same after irradiation. Furthermore, this DLL has a new phase detector architecture which reduces static phase offsets in the phase detectors through a correlated sampling mechanism, implemented here for the first time in the time domain. The circuit was prototyped in a 40 nm CMOS technology, and a 4.8 ps resolution was measured with a 4.2 mW power consumption.

DLL-based TDCs and serial communication links require a low-noise, high-frequency reference clock. For a 64-channel TDC, a 2.56 GHz frequency synthesizer was designed to upconvert the 40 MHz reference clock of the Large Hadron Collider (LHC) at CERN to a 2.56 GHz high-speed clock with a targeted rms jitter below 1 ps. A radiation-hardened Phase-Locked Loop was designed in which both an LC-tank oscillator and a ring oscillator were present. The chip was prototyped in a 65 nm CMOS technology. These circuits were, in a next step, irradiated to compare ring and LC-tank oscillators in terms of noise, radiation damage, and single-event effects. The devices were irradiated with X-rays up to 600 Mrad to study Total Ionizing Dose effects on the circuits, and were also irradiated with heavy ions to study single-event effects on the oscillators. The clock generator has a power consumption of 11.7 mW and an integrated rms jitter of only 345 fs. Triple Modular Redundancy was used in the digital circuits to protect them from soft errors. A new phase detector architecture is presented which minimizes the error rate due to high-energy particles in frequency synthesizers. The devices were also tested over temperature variations from -25 °C up to 125 °C.

From the results gathered in the radiation experiments, an improved LC-tank oscillator was designed whose sensitivity to single-event upsets is reduced by more than a factor of 600 compared to a traditional implementation, mainly by addressing the cross section of the oscillator's tuning varactor. This technique was also experimentally verified.
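To make the resolution figure concrete: in an idealized delay-line TDC, the measured time interval is quantized in units of one tap delay, so the 4.8 ps resolution reported above is the LSB of the output code. The sketch below is a generic idealization, not the thesis's DLL architecture, and the tap count is an assumption:

```python
def tdc_code(interval_ps, tap_delay_ps=4.8, n_taps=64):
    """Ideal delay-line TDC: count how many whole tap delays fit in the
    measured interval, saturating at the last tap. tap_delay_ps is the
    LSB (here the 4.8 ps measured resolution); n_taps is assumed."""
    if interval_ps < 0:
        raise ValueError("interval must be non-negative")
    code = int(interval_ps // tap_delay_ps)
    return min(code, n_taps)
```

The quantization error of such a converter is at most one tap delay, which is why shrinking the tap delay (and keeping it stable under irradiation, as the DLL does) directly sets the timing accuracy.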



Dissertation
Exploiting scene constraints to improve object detection algorithms for industrial applications
Authors: --- --- ---
Year: 2017 Publisher: Sint-Katelijne Waver KU Leuven. Faculty of Engineering Technology


Abstract

State-of-the-art object detection algorithms are designed to be heavily robust against scene and object variations like illumination changes, occlusions, scale changes, orientation differences, background clutter, and object intra-class variability. However, in industrial machine vision applications, where objects with variable appearance have to be detected, many of these variations are in fact constant and can be seen as scene-specific constraints on the detection problem. These scene constraints can be used to reduce the enormous search space for object candidates, and thus speed up the actual detection process and improve its accuracy.

In this PhD we will explore the possibility of using scene-specific constraints of industrial object detection tasks to influence three main aspects of object detection algorithms:

1. Reduce the amount of training data needed. We will try to reduce the required amount of manually annotated training data as much as possible.
2. Increase the speed of the detection process. Since we are working in an industrial application context, maintaining real-time performance is a hard constraint.
3. Reduce the number of false positive and false negative detections. We aim at building object detection algorithms that are able to detect all objects in a given image or video stream with high certainty.

Moreover, we will propose steps to simplify the training process under such scene constraints, used for creating object-specific models. For this we look into techniques like active learning and data augmentation, in order to heavily reduce the amount of manual input required by the algorithm.



Dissertation
Novel remote phosphor configurations for lighting and display applications
Authors: --- --- ---
Year: 2018 Publisher: Leuven KU Leuven. Faculty of Engineering Technology


Abstract

Remote phosphor plates are increasingly used in LED modules to obtain a uniform white light distribution by converting part of the blue LED light into light with a longer wavelength. Remote phosphor plates also have the potential to improve the system performance of display backlights if luminescent materials with a narrow emission band (e.g. quantum dots) are considered.

The goal of this research is to find new concepts and improved configurations for efficient illumination devices that incorporate remote phosphor convertors. This is done by using accurate simulations of the phosphor plate based on detailed characterization of the relevant material properties. In a second stage, the converter is integrated into a complete device and the total system performance is investigated. The resulting simulation models are validated by comparing them with real demonstrators. In this way, new configurations with desired properties can easily be explored.

