A cochlear implant (CI) is a device that bypasses a nonfunctional inner ear and stimulates the auditory nerve with patterns of electric current, such that speech and other sounds can be perceived by profoundly deaf people. Due to the success of CIs, an increasing number of patients with residual hearing are implanted. In many cases they use a hearing aid (HA) in the non-implanted, severely hearing-impaired ear. This setup is called bimodal stimulation. Although binaural inputs are available, bimodal listeners exhibit poor sound-source localization performance. This is partly due to technical problems with the signal processing in current CI speech processors and HAs. Using an experimental setup, sensitivity to the basic localization cues, the interaural level difference (ILD) and the interaural time difference (ITD), was assessed. The just noticeable difference (JND) in ILD was measured in 10 bimodal listeners. The mean JND for pitch-matched electric and acoustic stimulation was 1.7 dB. However, due to insufficient high-frequency residual hearing, users of bimodal aids do not have access to real-world ILD cues. Using noise-band vocoder simulations with normal-hearing subjects, it was shown that localization performance with bimodal aids can be improved by artificially amplifying ILDs in the low frequencies. Finally, the JND in ITD was assessed in 8 users of bimodal aids.
Four subjects were sensitive to ITDs and exhibited JNDs in ITD of around 100-200 µs. The electric signal had to be delayed by on average 1.5 ms to achieve synchronous stimulation at the auditory nerves. Overall, sensitivity to the binaural localization cues (ILD and ITD) was found to be well within the range of real-world cues. To allow the use of these cues for localization through clinical devices, the devices should be synchronized and matched in place of excitation; performance can be further improved by amplifying ILDs in the low frequencies of the acoustic signal. Hearing-impaired people with a cochlear implant in one ear and a hearing aid in the other have difficulty determining the direction a sound comes from and understanding speech in background noise. We studied experimental devices that improve directional hearing. A cochlear implant allows completely deaf people to hear speech and other sounds again by stimulating the auditory nerve with electric pulses. Because of the high cost, usually only one of the two ears is implanted. Due to the success of cochlear implants, an increasing number of patients with some residual hearing are implanted. This group then uses a conventional hearing aid in the non-implanted, severely hearing-impaired ear. This combination of electric and acoustic stimulation is called bimodal stimulation. To determine the direction a sound comes from, people use differences in arrival time and intensity between the sounds at the two ears. Although bimodal listeners perceive sound in both ears, they are poor at localizing sound sources. Part of the cause lies with the hearing aids and cochlear implants they currently use: these devices are not tuned to each other and are often even distributed and fitted independently, which introduces additional differences between the sounds at the two ears. With experimental devices we matched the sounds at the two ears as closely as possible and assessed to what extent bimodal listeners are sensitive to interaural differences in time and intensity. Their sensitivity was good enough for localizing real-world sounds. To make this possible in daily life, commercial devices should be designed in which the left and right parts work together and are matched to each other for each individual patient. The cochlear implant should be delayed by 1.5 ms relative to the hearing aid, and the cochlear implant should be fitted such that an incoming sound evokes a similar pitch in both ears. We also developed a system that amplifies the interaural intensity differences needed for localization, which improved localization accuracy.
Academic collection --- 615.84 Electrotherapy. Radiotherapy. Electrotherapeutics --- 681.3*J3 <043> Life and medical sciences (Computer applications)--Dissertaties --- 543.7 <043> Analysis of inorganic substances--Dissertaties --- Theses
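Since the entry above revolves around the two basic binaural cues, the following is a minimal numpy sketch of how an ILD and an ITD could be estimated from a two-channel signal. It is not the thesis's experimental setup; the toy stimulus (noise with a 1.7 dB level difference and a 200 µs delay, echoing the JND values reported above) and the function names are illustrative assumptions.

```python
import numpy as np

def ild_db(left, right):
    """Broadband interaural level difference in dB (positive = left louder)."""
    return 20.0 * np.log10(np.sqrt(np.mean(left**2)) / np.sqrt(np.mean(right**2)))

def itd_seconds(left, right, fs):
    """Interaural time difference from the cross-correlation peak
    (positive = left ear leads, i.e. the right channel is delayed)."""
    xcorr = np.correlate(left, right, mode="full")
    lag = np.argmax(xcorr) - (len(right) - 1)
    return -lag / fs

# Toy stimulus: noise with the right channel attenuated by 1.7 dB (the mean ILD
# JND reported above) and delayed by ~200 µs (within the reported ITD JND range).
fs = 44100
rng = np.random.default_rng(0)
left = rng.standard_normal(4410)                 # 100 ms of noise
delay = int(round(200e-6 * fs))                  # ~9 samples
right = np.roll(left, delay) * 10 ** (-1.7 / 20)
print(f"ILD ≈ {ild_db(left, right):.2f} dB, ITD ≈ {itd_seconds(left, right, fs) * 1e6:.0f} µs")
```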
Hearing impairments affect a considerable number of people in the world, and the consequences of these dysfunctions can have a serious impact on one's well-being. Closely related are speech comprehension problems, which go beyond the mere hearing function and for which there is no objective test available to this day. Such testing could, however, play a crucial role in certain cases, such as for non-cooperative or comatose persons. This thesis assesses the potential of a particular technique for exactly this type of testing. It records a subject's neural responses to a speech stimulus, attempts to reconstruct the input, and then compares the reconstructed signal to this very input. A high correlation between the reconstructed and the original speech signal can be a measure of the subject's understanding; conversely, a low correlation can suggest poor speech comprehension. A successful stimulus reconstruction model would provide a powerful novel tool. Past reconstructions by means of a linear decoder yielded promising results and, given the non-linear behaviour of human neurons, the artificial neural network-based (ANN) models in this thesis are expected to perform better. Three different ANN topologies were put to the test: a feed-forward network with one hidden layer (FFN), a feed-forward network with two hidden layers (FFN2), and a cascade feed-forward network (CFFN). Testing indicated the Powell-Beale conjugate gradient back-propagation training algorithm as the best fit for these networks. Each topology went through an optimization procedure in which different hyperparameters were altered in order to improve the reconstruction accuracy. For the FFN, this resulted in a configuration with 5 neurons in the hidden layer and a regularization parameter of 10⁻². The optimal FFN2 holds 1 neuron in each hidden layer and a regularization parameter of 10⁻⁴. Varying the error function, the initialization method and the activation function had no significant effect on performance compared to their default settings, which are the mean squared error, Nguyen-Widrow initialisation and the hyperbolic tangent function, respectively. The CFFN was discarded during the process, as it clearly underperformed compared to the other two networks.
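For context, the linear decoder that the ANN models are benchmarked against is typically a ridge regression on time-lagged EEG. Below is a minimal numpy sketch of such a backward model; the shapes, the number of lags, and the regularization value are assumptions for the toy data, not the settings used in the thesis.

```python
import numpy as np

def lagged(eeg, n_lags):
    """Stack time-lagged copies of the EEG (samples x channels) into a design matrix."""
    n, c = eeg.shape
    x = np.zeros((n, c * n_lags))
    for k in range(n_lags):
        x[k:, k * c:(k + 1) * c] = eeg[:n - k]
    return x

def train_decoder(eeg, envelope, n_lags=16, lam=1e2):
    """Ridge-regression backward model: envelope ≈ lagged(eeg) @ w."""
    x = lagged(eeg, n_lags)
    return np.linalg.solve(x.T @ x + lam * np.eye(x.shape[1]), x.T @ envelope)

def reconstruction_score(eeg, envelope, w, n_lags=16):
    """Pearson correlation between reconstructed and actual envelope."""
    return np.corrcoef(lagged(eeg, n_lags) @ w, envelope)[0, 1]

# Toy data: 64-channel EEG that weakly encodes a random envelope (assumed shapes).
rng = np.random.default_rng(1)
env = rng.standard_normal(2000)
eeg = np.outer(env, rng.standard_normal(64)) + 5 * rng.standard_normal((2000, 64))
w = train_decoder(eeg, env)
print(f"r = {reconstruction_score(eeg, env, w):.2f}")
```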
From brain signals measured with EEG, it is possible to determine to whom a person is listening. This field is called auditory attention decoding (AAD). Current AAD algorithms can already determine which speaker is attended in a scenario with two competing speakers. This forms the basis for the development of a hearing prosthesis that automatically uses brain signals to determine which speaker should be amplified and which other signals are better suppressed. The method these algorithms rely on is called stimulus reconstruction: the EEG signals are transformed by a decoder into a reconstructed version of the attended stimulus envelope. The main problem with these algorithms, however, is the time needed to make a decision: they require well over half a minute to reach a reliable decision, which cannot be applied in a real prosthesis. In this thesis, convolutional neural networks (CNNs) are used as classifiers. CNNs are a proven, powerful form of artificial intelligence that is mostly used for classification. Two different CNN-based AAD models are proposed. The first model stays close to stimulus reconstruction, while the second model has more freedom to use both the EEG data and the speech envelopes for classification. The results are promising: in particular for the shorter time windows in which a decision must be made, the CNN models perform well. Furthermore, some practical modifications to the models are proposed. First, the model is pruned: unnecessary parameters are removed to reduce the computational load. In addition, a model is proposed in which the number of EEG electrodes used is reduced. This study shows that CNNs can improve current AAD results and that further improvements are possible in the future.
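To make the CNN-as-classifier idea concrete, here is a minimal PyTorch sketch of a network that maps a short EEG segment plus the two candidate speech envelopes to a two-class attention decision. The architecture (a single convolutional layer with global average pooling) and all shapes are illustrative assumptions, not the models proposed in the thesis.

```python
import torch
import torch.nn as nn

class AADNet(nn.Module):
    """Illustrative CNN for auditory attention decoding (not the thesis architecture).
    Input: an EEG segment plus the two candidate speech envelopes, stacked as channels."""
    def __init__(self, n_eeg_channels=64, n_filters=8):
        super().__init__()
        self.conv = nn.Conv1d(n_eeg_channels + 2, n_filters, kernel_size=9, padding=4)
        self.head = nn.Linear(n_filters, 2)   # logits: speaker 1 vs speaker 2

    def forward(self, eeg, env1, env2):
        # eeg: (batch, channels, time); env1/env2: (batch, time)
        x = torch.cat([eeg, env1.unsqueeze(1), env2.unsqueeze(1)], dim=1)
        x = torch.relu(self.conv(x)).mean(dim=2)   # global average pooling over time
        return self.head(x)

# Toy forward pass on 1-second segments at an assumed 64 Hz EEG rate.
model = AADNet()
eeg = torch.randn(4, 64, 64)
env1, env2 = torch.randn(4, 64), torch.randn(4, 64)
print(model(eeg, env1, env2).shape)   # torch.Size([4, 2])
```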
One in five people experiences hearing loss. The World Health Organization estimates that this number will increase to one in four by 2050. Luckily, effective hearing devices such as hearing aids and cochlear implants exist, with advanced noise suppression and speaker enhancement algorithms that can significantly improve the quality of life of people suffering from hearing loss. State-of-the-art hearing devices, however, underperform in a so-called 'cocktail party' scenario, when multiple persons are talking simultaneously. In such a situation, the hearing device does not know which speaker the user intends to attend to, and thus which speaker to enhance and which other ones to suppress. A new problem therefore arises in cocktail party scenarios: determining which speaker a user is attending to, referred to as the auditory attention decoding (AAD) problem.

The problem of selecting the attended speaker could be tackled using simple heuristics such as selecting the loudest speaker or the one in the user's look direction. However, a potentially better approach is decoding the auditory attention from where it originates, i.e., the brain. Using neurorecording techniques such as electroencephalography (EEG), it is possible to perform AAD, for example, by reconstructing the attended speech envelope from the EEG using a neural decoder (i.e., the stimulus reconstruction (SR) algorithm). Integrating AAD algorithms in a hearing device could then lead to a so-called 'neuro-steered hearing device'. These traditional AAD algorithms are, however, not fast enough to adequately react to a switch in auditory attention, and are supervised and fixed over time, not adapting to non-stationarities in the EEG and audio data. Therefore, the general aim of this thesis is to develop novel signal processing algorithms for EEG-based AAD that allow fast, accurate, unsupervised, and time-adaptive decoding of the auditory attention.

In the first part of the thesis, we compare different AAD algorithms, which allows us to identify the gaps in the current AAD literature that are partly addressed in this thesis. To be able to perform this comparative study, we develop a new performance metric, the minimal expected switch duration (MESD), to evaluate AAD algorithms in the context of adaptive gain control for neuro-steered hearing devices. This performance metric resolves the traditional trade-off between AAD accuracy and the time needed to make an AAD decision, and returns a single-number metric that is interpretable within the application context of AAD and allows easy (statistical) comparison between AAD algorithms. Using the MESD, we establish that the most robust currently available AAD algorithm is based on canonical correlation analysis, but that decoding the spatial focus of auditory attention from the EEG holds more promise towards fast and accurate AAD. Moreover, we observe that deep learning-based AAD algorithms are hard to replicate on different independent AAD datasets.

In the second part, we address one of the main signal processing challenges in AAD: unsupervised and time-adaptive algorithms. We first develop an unsupervised version of the stimulus decoder that can be trained on a large batch of EEG and audio data without knowledge of ground-truth labels on the attention. The unsupervised stimulus decoder is iteratively retrained based on its own predicted labels, resulting in a self-leveraging effect that can be explained by interpreting the iterative updating procedure as a fixed-point iteration. This unsupervised but subject-specific stimulus decoder, starting from a random initial decoder, outperforms a supervised subject-independent decoder and, using subject-independent information, even approximates the performance of a supervised subject-specific decoder. We also extend this unsupervised algorithm to an efficient recursive time-adaptive algorithm for when EEG and audio are continuously streaming in, and show that it has the potential to outperform a fixed supervised decoder in a practical use case of AAD.

In the third part, we develop novel AAD algorithms that decode the spatial focus of auditory attention to provide faster and more accurate decoding. To this end, we use both a linear common spatial pattern (CSP) filtering approach and its nonlinear extension using Riemannian geometry-based classification (RGC). The CSP method achieves a much higher accuracy compared to the SR algorithm at a very fast decision rate. Furthermore, we show that the CSP method is the preferred choice over a similar convolutional neural network-based approach, and is also applicable to different directions of auditory attention, in a three-class problem with different angular domains, using only EEG channels close to the ears, and when generalizing to data from an unseen subject. Lastly, the RGC-based extension further improves the accuracy at slower decision rates, especially in the multiclass problem.

To summarize, in this thesis we have developed crucial building blocks for a plug-and-play, time-adaptive, unsupervised, fast, and accurate AAD algorithm that could be integrated with a low-latency speaker separation and enhancement algorithm and a wearable, miniaturized EEG system, to eventually lead to a neuro-steered hearing device.
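The self-leveraging updating procedure of the second part can be illustrated with a small numpy sketch: starting from a random decoder, predict per-segment attention labels by envelope correlation, retrain the decoder on the envelope segments those labels select, and iterate. The segment length, regularization, lag-free decoder, and toy data are all assumptions; this sketches the idea, not the thesis's algorithm.

```python
import numpy as np

def segment_corr(a, b, seg):
    """Pearson correlation of two signals per non-overlapping segment."""
    return np.array([np.corrcoef(a[i:i + seg], b[i:i + seg])[0, 1]
                     for i in range(0, len(a) - seg + 1, seg)])

def unsupervised_decoder(eeg, env1, env2, seg=500, n_iter=10, lam=1.0):
    """Fixed-point iteration: label each segment with the current decoder,
    then retrain on the envelope segments those labels select."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(eeg.shape[1])            # random initial decoder
    for _ in range(n_iter):
        rec = eeg @ w
        labels = segment_corr(rec, env1, seg) > segment_corr(rec, env2, seg)
        target = np.concatenate([(env1 if lab else env2)[i * seg:(i + 1) * seg]
                                 for i, lab in enumerate(labels)])
        x = eeg[:len(target)]
        w = np.linalg.solve(x.T @ x + lam * np.eye(x.shape[1]), x.T @ target)
    return w

# Toy data: 32-channel EEG that weakly encodes env1 (the "attended" speaker).
rng = np.random.default_rng(2)
env1, env2 = rng.standard_normal(5000), rng.standard_normal(5000)
eeg = np.outer(env1, rng.standard_normal(32)) + 3 * rng.standard_normal((5000, 32))
w = unsupervised_decoder(eeg, env1, env2)
rec = eeg @ w
correct = np.mean(segment_corr(rec, env1, 500) > segment_corr(rec, env2, 500))
print(f"segments decoded as the true speaker: {correct:.0%}")
```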
Partly due to the rapid aging of the world population, it is expected that by 2050 more than 900 million people will experience hearing loss. Since adequate hearing is a prerequisite for daily life communication, hearing impairment increases the risk of social isolation and poorer physical functioning, which in turn negatively affects quality of life. Currently, hearing aids are the most used and well-known treatment for hearing impairment. Although these devices restore hearing sensitivity, adequate speech perception in noisy environments is often not achieved. Research has suggested that, next to hearing loss, age-related cognitive decline and temporal processing deficits also contribute to the speech-in-noise difficulties experienced by older adults. To provide rehabilitation strategies that overcome these difficulties, there is a need for a better understanding of the neural mechanisms underlying speech-in-noise difficulties.

When we listen to natural speech, our neural activity tracks the low-frequency amplitude modulations of speech, also called the speech envelope. Recent studies have demonstrated the potential of neural envelope tracking to objectively measure speech understanding. This could provide additional information to current behavioral speech tests and improve the fitting of hearing aids, as neural envelope tracking does not require active cooperation of the patient. This could be particularly useful for difficult-to-test populations such as young children, intellectually disabled persons and older adults with severe cognitive impairment such as dementia. Although this seems promising, previous studies mainly measured neural envelope tracking in a specific, well-controlled population, i.e. young, normal-hearing adults. The aim of this doctoral thesis was to investigate the effects of three individual-related factors on neural envelope tracking: listening effort, age and hearing impairment.

As daily life speech understanding can be challenging for both normal-hearing and hearing impaired adults, individuals can differ in the amount of allocated neural resources, also called listening effort, that they need to expend to achieve a particular level of speech understanding. This could, however, result in a confound when using neural envelope tracking to objectively measure speech understanding. In view of this, we investigated the effect of listening effort on neural envelope tracking in young, normal-hearing listeners. Five measures were included to quantify listening effort. Our results demonstrated that different measures can reflect different aspects of effort, e.g. perceived effort versus processing load. Listening effort was not found to substantially modulate neural envelope tracking. Nevertheless, participants showed increases in envelope tracking with increasing speech understanding, suggesting that neural envelope tracking can be used as a reliable objective measure of speech understanding.

With advancing age, hearing loss becomes more prevalent. To disentangle these two closely intertwined factors, we designed two studies in which we investigated neural envelope tracking in normal-hearing adults across the adult lifespan and compared the results with those of age-matched hearing impaired adults. For both normal-hearing and hearing impaired adults, neural envelope tracking was measured for sentences and a story masked by different levels of a stationary noise or a competing talker. A competing talker was included to investigate the effects of hearing impairment on the neural segregation of different talkers. Participants also completed two cognitive tests, measuring verbal working memory and inhibition, to investigate the interplay between cognition, age, hearing impairment and neural envelope tracking.

Our results reveal that aging and hearing impairment both result in major alterations of neural envelope tracking. More specifically, envelope tracking was found to increase supralinearly with advancing age, resulting in enhanced envelope tracking for older normal-hearing adults. This enhancement is likely to underlie the speech-in-noise difficulties experienced by older normal-hearing adults, as we found that worse cognitive scores were associated with this enhancement. Hearing impaired adults showed additionally enhanced envelope tracking compared to their age-matched normal-hearing peers. As we only observed this for the attended, target talker, our results suggest that in order to neurally segregate different talkers, persons with a disabling hearing loss need enhanced cortical envelope tracking to compensate for peripheral deficits. Furthermore, enhanced envelope tracking in hearing impaired adults may be caused by different neural mechanisms than those related to age, since no significant relation with cognitive skills was observed. Finally, middle-aged and older normal-hearing and hearing impaired adults showed a significant increase in neural envelope tracking with increasing speech understanding, similar to their young normal-hearing counterparts, highlighting the potential of neural envelope tracking to objectively measure speech understanding.

In conclusion, this doctoral thesis demonstrates substantial effects of age and hearing impairment on neural envelope tracking, which contribute to the current understanding of the mechanisms underlying impaired speech understanding. In addition, the observed link between speech understanding and neural envelope tracking in different populations supports the value of neural envelope tracking for diagnostic tests, rehabilitation strategies and self-fitting hearing aids.
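As a concrete illustration of the stimulus side of envelope tracking, the sketch below extracts a speech envelope in one common way (Hilbert magnitude, low-pass filtering, resampling to the EEG rate) using scipy. The cutoff frequency, output rate, and test signal are assumptions; the thesis does not necessarily use this exact pipeline.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt, resample_poly

def speech_envelope(audio, fs, fs_out=64, f_lp=8.0):
    """Slow amplitude modulations of speech: Hilbert magnitude,
    low-pass filtered below f_lp Hz, resampled to the EEG rate."""
    env = np.abs(hilbert(audio))                   # instantaneous amplitude
    sos = butter(4, f_lp, fs=fs, output="sos")     # 4th-order low-pass
    return resample_poly(sosfiltfilt(sos, env), fs_out, fs)

# Toy check: a 1 kHz carrier modulated at 4 Hz should yield a ~4 Hz envelope.
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
audio = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
env = speech_envelope(audio, fs)
print(env.shape)   # (128,) -> 2 s at 64 Hz
```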
EEG recordings are widely used for both clinical and research purposes. However, they are often inevitably contaminated with artifacts. Automatic temporal detection of artifacts is relevant for rejecting artifactual data and for defining quality metrics of an EEG recording; additionally, this information can be used by artifact removal methods. In this thesis, 'Automatic temporal detection of ocular artifacts in high-density EEG recordings', a machine learning-based method for artifact detection is presented. Ocular artifacts and muscle artifacts are detected by binary classifiers. Spectral and statistical features with a spatial character are extracted from epochs. Two machine learning techniques are investigated: the Support Vector Machine and the Artificial Neural Network. The classifiers are trained subject-independently, which gives reliable results for new datasets. Additionally, the detection is applied to automate the use of the multi-channel Wiener filter for artifact removal. The automation of this method and its limited computational time result in a great advantage over current state-of-the-art artifact removal algorithms.
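A hedged sketch of the detection pipeline described above: simple spectral and statistical features are computed per epoch and fed to a support vector machine (scikit-learn). The specific features (low-frequency power ratio and kurtosis), the synthetic "ocular" epochs, and all parameters are illustrative assumptions, not the thesis's feature set.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosis
from sklearn.svm import SVC

def epoch_features(epoch, fs):
    """Per-epoch features (epoch: channels x samples): fraction of power
    below 5 Hz and kurtosis, computed per channel and concatenated."""
    f, psd = welch(epoch, fs=fs, nperseg=min(256, epoch.shape[1]), axis=1)
    low_ratio = psd[:, f < 5].sum(axis=1) / psd.sum(axis=1)
    return np.concatenate([low_ratio, kurtosis(epoch, axis=1)])

# Toy data: "clean" epochs are white noise; "ocular" epochs get a slow drift
# on two frontal channels.
rng = np.random.default_rng(3)
fs, n_ch, n_ep = 256, 8, 40
epochs = rng.standard_normal((2 * n_ep, n_ch, fs))
t = np.arange(fs) / fs
epochs[n_ep:, :2] += 5 * np.sin(2 * np.pi * 1.0 * t)   # 1 Hz artifact
y = np.r_[np.zeros(n_ep), np.ones(n_ep)]

X = np.array([epoch_features(e, fs) for e in epochs])
clf = SVC(kernel="rbf").fit(X[::2], y[::2])            # train on even epochs
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```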
Current clinical practice requires active cooperation of the patient to individually fit a hearing device. The most important parameters for this fitting process are related to loudness. The aim of this PhD was to find neural correlates of loudness using 40-Hz auditory steady-state responses. These responses are auditory evoked potentials that can be measured fully objectively, non-invasively, and frequency-specifically in the electroencephalogram (EEG) using scalp electrodes. The use of neural correlates has potential for future objective, more automatic, and individualized fitting of hearing devices. More specifically, the project was divided into three parts: loudness adaptation, loudness growth, and loudness balancing.

In the first part, two studies are described related to loudness adaptation. In chapter 2, we found that the modulated stimuli commonly used to evoke auditory steady-state responses are subject to loudness adaptation at low levels, high carrier frequencies, and for all modulation types, with more loudness adaptation for mixed modulation than for sinusoidal amplitude modulation. However, in chapter 3, we evoked 40-Hz auditory steady-state responses to the mixed-modulated stimuli that caused most of the loudness adaptation behaviorally, and found that response amplitudes remain stable over time.

In the second part, we measured complete loudness growth functions with two behavioral loudness tasks as well as 40-Hz auditory steady-state response amplitude growth functions. Monaural loudness growth seemed to match well with 40-Hz auditory steady-state response amplitude growth functions in all groups of participants. In chapter 4, we investigated participants who hear acoustically, i.e., normal-hearing and hearing-impaired participants. In chapter 5, we investigated participants who hear electrically with cochlear implants.

In the third part, the focus was binaural loudness balancing. In chapter 6, the adaptive and adjustment procedures for measuring binaural loudness balance were investigated. Both procedures yielded similar final loudness-balanced results when the adjustment procedure was conducted twice, from opposite perceptual sides. In the last chapters, the use of 40-Hz auditory steady-state response amplitudes was investigated to predict the binaural loudness balance for normal-hearing participants (chapter 7) and participants with asymmetric hearing (chapter 8). The latter group consisted of participants with acoustical asymmetric hearing and participants with bimodal hearing, who hear electrically with a cochlear implant and acoustically in the non-implanted ear. While variability across participants was observed, median across-ear ratios at balanced levels were close to 1 for all groups of participants.

In summary, loudness adaptation was not reflected by 40-Hz auditory steady-state responses, but the response amplitudes showed a correspondence to loudness growth and could predict the balanced loudness point in many participants, although variability across participants was found as well.
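A common way to quantify an auditory steady-state response is the FFT amplitude of the averaged epochs at the modulation frequency, with neighbouring bins serving as a noise estimate. The numpy sketch below illustrates this under assumed toy parameters (epoch length, sampling rate, response amplitude); it is not the analysis pipeline of the thesis.

```python
import numpy as np

def assr_amplitude(epochs, fs, f_mod=40.0):
    """Response amplitude at the modulation frequency from the averaged epochs,
    plus a noise estimate from neighbouring FFT bins (epochs: n_epochs x n_samples)."""
    avg = epochs.mean(axis=0)
    spec = np.abs(np.fft.rfft(avg)) / len(avg) * 2   # sinusoid amplitude per bin
    freqs = np.fft.rfftfreq(len(avg), 1 / fs)
    k = np.argmin(np.abs(freqs - f_mod))
    noise = np.r_[spec[k - 10:k], spec[k + 1:k + 11]].mean()
    return spec[k], noise

# Toy data: a 40 Hz response of amplitude 0.1 buried in noise,
# 300 one-second epochs at 1 kHz.
rng = np.random.default_rng(4)
fs, n_ep = 1000, 300
t = np.arange(fs) / fs
epochs = 0.1 * np.sin(2 * np.pi * 40 * t) + 2 * rng.standard_normal((n_ep, fs))
amp, noise = assr_amplitude(epochs, fs)
print(f"40 Hz amplitude ≈ {amp:.3f}, noise floor ≈ {noise:.3f}")
```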
Speech intelligibility assessment is valuable because it concerns an essential human ability. Because current behavioral methods, today's gold standard, have limitations, an objective electrophysiological measure of speech understanding would be helpful. Existing measures such as the correlation threshold are interesting but suboptimal, due to large variability between realizations and between subjects. This can be improved by moving the idea behind this approach from the sensor level to the brain level. This thesis uses source localization algorithms to identify, from EEG recordings, the neural sources that are active during speech. Two candidate methods are discussed. Their effectiveness is first validated in a less complex situation where another, much simpler approach can be applied as a point of comparison. After both methods are shown to be successful, they are applied to EEG responses to continuous natural speech. One method is chosen over the other based on its implementation and results, and is finally used to investigate the possible effect of a noisy stimulus on the neural response.
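The entry does not name a specific inverse method, but a regularized minimum-norm estimate is one standard source localization algorithm and illustrates the sensor-to-source projection. In the numpy sketch below, the random leadfield, source count, and regularization value are pure assumptions.

```python
import numpy as np

def minimum_norm(leadfield, eeg, lam=1e-2):
    """Regularized minimum-norm estimate: sources = L.T (L L.T + lam*I)^-1 eeg.
    leadfield: sensors x sources, eeg: sensors x samples."""
    gram = leadfield @ leadfield.T
    gram += lam * np.trace(gram) / gram.shape[0] * np.eye(gram.shape[0])
    return leadfield.T @ np.linalg.solve(gram, eeg)

# Toy check: activate one of 200 hypothetical sources and look for its index.
rng = np.random.default_rng(5)
L = rng.standard_normal((32, 200))            # random leadfield (assumption)
src = np.zeros((200, 100))
src[57] = np.sin(np.linspace(0, 10, 100))     # one active source
eeg = L @ src + 0.01 * rng.standard_normal((32, 100))
est = minimum_norm(L, eeg)
print("strongest estimated source:", np.argmax(np.linalg.norm(est, axis=1)))
```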