Listing 1 - 10 of 53 | << page >> |
This thesis focuses on the selection and implementation of software-implemented countermeasures designed to detect control flow errors in embedded systems. A control flow error is an erroneous jump in an executing program induced by external disturbances. These disturbances, such as electromagnetic interference, can introduce bit-flips in different components of a system's hardware. These bit-flips in turn affect the executing program by corrupting the execution order of its instructions. This phenomenon is known as a control flow error and can cause the program to hang or crash, possibly creating dangerous situations. An introduced bit-flip can also manifest itself as a data flow error by corrupting data needed by the program; data flow errors are, however, out of scope for this research.

By adding an extra control variable and inserting update instructions that modify that variable in the low-level code of the target program, software-implemented techniques are able to detect whether a control flow error has occurred. Since this type of protection can be constructed in many ways, numerous techniques have been proposed in the literature. With many options and no guideline on how to select a technique, the following question arises: what is the best technique? To answer this question, solutions to the following problems had to be found: I) ease the implementation of the techniques in the low-level code of a target program; II) objectively characterize each technique; and III) develop a new and better technique.

To solve the first problem, we developed a compiler extension. While it is possible to implement each of the selected techniques in the low-level code of a target program manually, this is arduous and error-prone. The compiler extension we developed solves these issues, as it is capable of automatically implementing the discussed techniques in low-level code. By simply adding a few extra parameters when compiling the target program, a control flow error detection technique can be added. This eliminates both the need to know the low-level language of the embedded system and the need to know the internal operations and added functionality of the technique. Using the compiler extension thus saves time and effort.

Next, we defined three criteria to objectively characterize each technique: 1) error detection ratio, 2) execution time overhead, and 3) code size overhead. The error detection ratio indicates which percentage of control flow errors a technique detects. To measure this, we use fault injection experiments. Because no fault injection tools and no deterministic control flow error injection processes were available, we developed our own software-implemented tool and processes. This tool can execute three different deterministic injection processes and supports multiple targets, both physical hardware targets and simulated targets. The execution time overhead indicates how much longer the protected program runs than the unprotected program in an error-free run; we measured this using an on-board hardware timer of the target embedded system. Finally, the third criterion, code size overhead, indicates how much more memory the protected program needs than the unprotected program. This criterion is determined by measuring how much memory the compiled program occupies.

Using the developed tools and selected criteria, this thesis presents a comparative study of eight established control flow error detection techniques. By implementing the techniques for the same case studies, executing them on the same hardware, subjecting them to the same fault injection campaign, and measuring their overhead with the same tools, an objective comparison was made. The study revealed that the technique called Control Flow Checking by Software Signatures is the best established technique so far, as it achieves a high error detection ratio while imposing a low overhead. The study also revealed that there was room for improvement. Using the collected data, we derived five guidelines for building an optimal control flow error detection technique. To demonstrate their validity, we developed a detection technique that complies with all five guidelines, called Random Additive Control Flow Error Detection, and subjected it to the same fault injection campaign used in the aforementioned comparative study. These experiments revealed that our technique outperforms the selected state-of-the-art techniques: it achieves a higher error detection ratio and imposes a lower overhead than the state-of-the-art techniques.

This thesis concludes by presenting the application of the different research outputs to industrial case studies, such as a small-scale Industry 4.0 setup. These final experiments verify that the research can indeed be used in an industrial setting.
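To illustrate the signature-monitoring idea that underlies techniques such as Control Flow Checking by Software Signatures, a simplified sketch follows. This is not the thesis's exact scheme: the block graph, signatures, and the restriction to single-predecessor blocks are assumptions made purely for illustration.

```python
# Simplified sketch of signature monitoring: every basic block gets a
# compile-time signature; a runtime variable G is XOR-updated at each block
# entry and checked against that block's signature. A jump from the wrong
# predecessor leaves G with the wrong value, which the check detects.

BLOCK_SIGS = {"A": 0b001, "B": 0b010, "C": 0b100}    # compile-time signatures
PRED = {"B": "A", "C": "B"}                          # legal predecessor per block
# Precomputed update constants: d_n = sig(n) XOR sig(pred(n))
DIFF = {n: BLOCK_SIGS[n] ^ BLOCK_SIGS[p] for n, p in PRED.items()}

def run(path):
    """Simulate executing the blocks in 'path'; return True as soon as the
    inserted check detects a control flow error."""
    G = BLOCK_SIGS[path[0]]               # entry block initializes G
    for cur in path[1:]:
        G ^= DIFF[cur]                    # update instruction at block entry
        if G != BLOCK_SIGS[cur]:          # check instruction
            return True                   # control flow error detected
    return False

print(run(["A", "B", "C"]))  # legal path     -> False
print(run(["A", "C"]))       # erroneous jump -> True
```

Because the update constant encodes the expected predecessor, an erroneous jump from A directly to C yields G = sig(A) XOR d_C, which differs from sig(C) and trips the check.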
Overweight and obesity have become prevalent social problems, often attributed to poor eating habits. Monitoring eating behaviour through food intake action detection offers a potential solution. This paper proposes a skeleton-based ST-GCN LSTM model for accurately detecting food intake actions. The skeleton-based approach offers advantages such as robustness to the environment, lower data requirements, and privacy protection. Two datasets were used for model evaluation. On the OREBA dataset, which consists of lab-recorded videos, the model achieved average segmental F1 scores of 82.99% and 67.80% for detecting eating and drinking gestures at k = 0.1. The smartphone footage dataset, with more flexible experiment settings, was tested with a pre-trained ST-GCN LSTM model that achieved F1 scores of 85.40% and 67.80% for detecting eating and drinking behaviours at k = 0.1. Compared to the originally reported results on these two datasets using RGB signal-based approaches, the ST-GCN LSTM model demonstrates superior performance. To our knowledge, this is also the first work to use skeleton-based information for food intake behaviour detection.
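The segmental F1 score at overlap threshold k mentioned above can be sketched as follows, under the common IoU-based matching definition; the paper's exact matching rules may differ, and the example segments are made up.

```python
def iou(a, b):
    """Temporal IoU (intersection over union) of two (start, end) segments."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def segmental_f1(pred, truth, k=0.1):
    """Segmental F1 at threshold k: a predicted segment is a true positive if
    it overlaps a not-yet-matched ground-truth segment with IoU >= k; the
    remaining predictions are false positives and the remaining ground-truth
    segments are false negatives."""
    matched, tp = set(), 0
    for p in pred:
        for i, t in enumerate(truth):
            if i not in matched and iou(p, t) >= k:
                matched.add(i)
                tp += 1
                break
    fp, fn = len(pred) - tp, len(truth) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

# One matched prediction, one spurious one, one missed gesture -> F1 = 0.5
print(segmental_f1([(1, 9), (50, 60)], [(0, 10), (20, 30)], k=0.1))
```

A low threshold such as k = 0.1 rewards detecting roughly where a gesture occurs rather than its exact boundaries.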
The subject of this thesis was proposed by Barco, together with the request to investigate statistics that help map the health of the development environment of a software component. Within Barco's development environment, many tools are used to support the developers: a version control system, software that performs quality checks, a project management system, and so on. All these tools exist side by side and are all sources of data about the development environment that may be useful. The goal of this master's thesis is to collect all this data centrally by means of data aggregation. From it, graphs can then be produced that may be useful in investigating the health of a component. The research question formulated here is: "Which data can be collected to gain insight into the health of a component?" To begin with, a literature study investigates which technologies are available to bring this project to a successful conclusion, as well as which useful statistics can be derived from the available data. Furthermore, there is a study on classifying pull requests, in order to visualize data about them as well. Based on the investigated topics, a proof of concept was then built with the aim of gaining insight into the health of a component. The end result of this thesis is a working proof of concept that sheds light on several statistics within the development environment, as well as a classification system for determining the type of a pull request.
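The abstract does not specify how the pull request classification system works; a minimal rule-based sketch of the general idea might look like the following. The categories and keywords are hypothetical, invented only for illustration.

```python
# Hypothetical keyword-based pull request classifier; the thesis's actual
# categories and classification method are not stated in the abstract.
KEYWORDS = {
    "bugfix":  ("fix", "bug", "crash", "regression"),
    "feature": ("add", "implement", "support", "introduce"),
    "chore":   ("refactor", "cleanup", "bump", "rename"),
}

def classify_pr(title):
    """Return the first category whose keywords occur in the PR title,
    or 'other' when none match."""
    words = title.lower()
    for category, kws in KEYWORDS.items():
        if any(kw in words for kw in kws):
            return category
    return "other"

print(classify_pr("Fix crash on startup"))   # -> bugfix
print(classify_pr("Add dark mode support"))  # -> feature
```

Aggregating such labels over time would yield one of the per-component health statistics the thesis visualizes.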
A single uncertainty model cannot deal with an uncertain phenomenon that is changing and dynamic. For instance, the weather tomorrow may be forecast as 60% rainy, but this 60% changes from time to time and cannot be unique; in other words, there is imprecision in the weather uncertainty model. In this dissertation, we consider four advanced uncertainty models for such imprecise uncertainty. These advanced models can take the imprecision into account, so that further planning, decision making, or designing under imprecision can be more optimal and stable. We applied these models to a general linear programming (LP) problem in the presence of (imprecise) uncertainty, i.e., where at least one element of the coefficient matrices in the constraints and/or one coefficient of the goal function is uncertain. We discussed four different uncertainty models to deal with imprecise uncertainty: interval, possibility, contamination, and probability box models. We focused on imprecise probability theory to measure these uncertainties, and worked with the four types of uncertainty models to quantify and solve the LP-under-uncertainty (LPUU) problem. Two sorts of theoretical solutions were proposed under the optimal decision criteria (the worst-case scenario and the maximality/less conservative criterion). We proposed a generic approach, based on imprecise decision theory, to reason about the LP problem under imprecise uncertainty. Several numerical methods for each of these eight theoretical solutions are proposed under approximation theory. In the interval, contamination, and possibility distribution cases, exact solutions are provided with novel ideas. Several applications to four real industrial/engineering problems are presented to illustrate the potentially broad domains, wide applicability, and highly innovative results.
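To make the worst-case treatment of interval uncertainty concrete, a minimal sketch follows. It assumes non-negative decision variables and a single constraint, which is only an illustration of the general idea, not the dissertation's actual solution methods.

```python
# Minimal sketch of the worst-case (robust) view of an interval constraint
#     sum_i a_i * x_i <= b,   a_i in [lo_i, hi_i].
# Assuming x_i >= 0, the left-hand side is maximized by taking every
# coefficient at its upper bound, so one deterministic check covers all
# realizations of the interval uncertainty.

def worst_case_feasible(a_intervals, x, b):
    """Return True iff sum(a_i * x_i) <= b holds for every a_i in its
    interval, given non-negative x."""
    assert all(xi >= 0 for xi in x), "sketch assumes x >= 0"
    lhs = sum(hi * xi for (_, hi), xi in zip(a_intervals, x))
    return lhs <= b

# Constraint a1*x1 + a2*x2 <= 10 with a1 in [1, 2], a2 in [0.5, 1.5]:
print(worst_case_feasible([(1, 2), (0.5, 1.5)], [3, 2], 10))  # 2*3 + 1.5*2 = 9  -> True
print(worst_case_feasible([(1, 2), (0.5, 1.5)], [4, 2], 10))  # 2*4 + 1.5*2 = 11 -> False
```

Replacing each interval by the bound that makes the constraint hardest is exactly the worst-case criterion; the maximality criterion discussed in the dissertation is less conservative than this.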
Applications of adhesive joints are increasing because of the new material combinations being used. The limited experience with structural adhesive joints raises many questions; one of the main questions concerns the reliability of the joint, which is affected by moisture, UV, temperature, and other ageing parameters. By monitoring the structural health of the joint, we attempt to determine its reliability. This research focuses on Fibre Bragg Grating (FBG) sensor technology for monitoring the structural health of the joint. FBG sensors are steadily being developed as a reliable, in-situ, non-destructive tool for monitoring and analyzing the integrity of large and costly structures. Especially in composite structural applications, this strain measuring technique has proven to be very advantageous. However, its use in adhesive bonding applications has so far been limited because of the many issues involved. These concern the practical embedding of such sensors and the difficulties in distinguishing between measurement data of mechanical, thermal, or hygroscopic origin. Moreover, the actual embedding of these optical sensor wires has several side effects on the mechanical performance of structural adhesive joints. The aim of this research is to investigate the effect of these embedded glass fibre sensors on the stiffness and strength of simple adhesively bonded joints, and the possibility of measuring the influences of moisture and temperature on the structural joint with the FBGs.
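The difficulty of separating mechanical from thermal contributions can be sketched with the classic dual-grating approach: the relative Bragg wavelength shift is approximately a linear combination of strain and temperature change, so two gratings with different sensitivities give a 2x2 linear system. The sensitivity values below are illustrative assumptions, not measured data from this research.

```python
# Sketch of strain/temperature separation from two FBG wavelength shifts.
# Model (per grating j):  shift_j = Kj_eps * strain + Kj_T * dT.
# Two gratings with different sensitivities make the system solvable.

def separate(shift1, shift2, K1_eps, K1_T, K2_eps, K2_T):
    """Solve [[K1_eps, K1_T], [K2_eps, K2_T]] @ [strain, dT] = [shift1, shift2]
    by Cramer's rule; returns (strain, dT)."""
    det = K1_eps * K2_T - K1_T * K2_eps
    assert det != 0, "sensitivities must be linearly independent"
    strain = (shift1 * K2_T - K1_T * shift2) / det
    dT = (K1_eps * shift2 - shift1 * K2_eps) / det
    return strain, dT

# Example with illustrative (made-up) sensitivities:
strain, dT = separate(shift1=0.5, shift2=0.8,
                      K1_eps=1.2, K1_T=0.01, K2_eps=0.8, K2_T=0.05)
```

The hygroscopic contribution mentioned in the abstract would add a third unknown, which is one reason the discrimination problem is hard in adhesive joints.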
Machine learning has been a hot topic and has received more and more attention in recent years. In healthcare and well-being, it has found application in various fields such as disease detection and posture and sitting detection. In this master's thesis, machine learning is used in a sensor-based head-foot wheelchair steering system, with the aim of classifying driving patterns. This is done by applying image recognition algorithms to the data generated by the force applied to the sensors of the system. Various classification algorithms are examined, and their classification performance is evaluated. The classification algorithms are tested on several embedded systems in order to assess whether they can be used for real-time classification. In addition, the classification accuracy is measured for various dimensions of the sensor array. Finally, several test participants are left out of the training data in order to estimate the classification accuracy on entirely new data.
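The leave-participants-out evaluation mentioned at the end can be sketched as a simple group-aware split; the sample records below are made up, and the thesis's actual data format is not specified in the abstract.

```python
# Sketch of a leave-subjects-out split: all samples from held-out
# participants go to the test set, so the measured accuracy estimates
# performance on entirely new users rather than on seen ones.

def leave_subjects_out(samples, held_out):
    """Split (subject_id, features, label) records into train/test sets so
    that every sample of a held-out subject lands in the test set."""
    train = [s for s in samples if s[0] not in held_out]
    test = [s for s in samples if s[0] in held_out]
    return train, test

samples = [
    (1, [0.2, 0.7], "forward"),
    (1, [0.9, 0.1], "left"),
    (2, [0.4, 0.5], "forward"),
    (3, [0.7, 0.3], "left"),
]
train, test = leave_subjects_out(samples, held_out={3})
print(len(train), len(test))  # -> 3 1
```

A library such as scikit-learn offers the same idea as the `LeaveOneGroupOut` cross-validator, but the plain split above shows the principle.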
Delivering wheelchair skills training is difficult: it requires a high level of user engagement and a safe environment provided by caregivers, and different wheelchairs have different types of control system. An interactive training tool therefore makes sense. In this work, a prototype game intended to train motorized wheelchair skills is described. Two factors were taken into consideration while developing this game: (1) adaptation to varying user capabilities, and (2) implementation for different wheelchair steering systems. A prototype training game was designed with the aim of developing the skills necessary to operate a motorized wheelchair safely and effectively in daily use, while also providing adaptive gameplay and support for different control systems. Previous research on accessible games developed using a wheelchair, as well as various design principles and rehabilitative strategies provided by other researchers, is discussed and served as guidelines. In search of an original approach to handle possible variations in patient capability, a study on dynamic difficulty adjustment (DDA) is also discussed. DDA is a method that automatically adjusts certain aspects of a game in real time to the skill level of the player, with the goal of making the player feel neither bored when the game is easy nor anxious when the game is difficult. The game is designed and implemented as a 3D environment on the Windows PC platform. The development environment used is Unity3D with C# gameplay scripting. The game is designed as a car driving game, to mimic the behaviour of a motorized wheelchair and to preserve the concept of driving. Because the game must also serve a training function, several training tasks were outlined, after which the design of the five levels is described. Finally, the DDA implementation used in the game is discussed in detail.

To keep the player engaged, a DDA algorithm was developed to enable adaptive gameplay, so that even with no prior information on the player's skill level, the game can adjust itself to provide a level of difficulty in line with the player's skill. The algorithm does this using player attributes such as maximum speed reached, level completion time, and the number of times an obstacle is hit. It is important for an interactive rehabilitative system to be error-free, stable, and responsive; therefore, a less complex, more transparent method was adopted to yield predictable gameplay. For future work, it will be interesting to probe the various player attributes further to fine-tune the behaviour of the DDA algorithm. The game was distributed to numerous test participants for a test of usability and player experience, and the participants were invited to complete questionnaires to provide feedback.
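A simple, transparent DDA rule of the kind described above might look like the following sketch. The attribute names, thresholds, and difficulty range are hypothetical; the thesis's actual C#/Unity3D implementation and tuning are not given in the abstract.

```python
# Hypothetical sketch of a transparent rule-based DDA step, driven by the
# kind of player attributes the abstract mentions (completion time and
# obstacle hits). Thresholds and the 1..10 difficulty scale are made up.

def adjust_difficulty(difficulty, completion_time, obstacles_hit,
                      target_time=60.0, max_hits=3):
    """Raise difficulty when the player finishes fast and cleanly, lower it
    when the level took very long or many obstacles were hit; otherwise keep
    it unchanged. The result is clamped to 1..10."""
    if completion_time < target_time and obstacles_hit == 0:
        difficulty += 1          # player is under-challenged
    elif completion_time > 2 * target_time or obstacles_hit > max_hits:
        difficulty -= 1          # player is struggling
    return max(1, min(10, difficulty))

print(adjust_difficulty(5, completion_time=40, obstacles_hit=0))   # -> 6
print(adjust_difficulty(5, completion_time=150, obstacles_hit=5))  # -> 4
```

Because every adjustment is a single visible rule, the behaviour stays predictable, which matches the abstract's preference for a less complex, more transparent method over an opaque adaptive model.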