Narrow your search

Library

KU Leuven (2)

VUB (2)

LUCA School of Arts (1)

Odisee (1)

Thomas More Kempen (1)

Thomas More Mechelen (1)

UAntwerpen (1)

UCLL (1)

VIVES (1)


Resource type

book (2)

digital (1)

dissertation (1)


Language

English (4)


Year

2024 (1)

2022 (3)

Listing 1 - 4 of 4

Dissertation
Exploring Parameter-Efficient Fine-Tuning in Large Language Models for Brain Decoding


Abstract

Brain-Computer Interfaces have been used to study how the human brain works. Functional Magnetic Resonance Imaging (fMRI) signals are a popular means of examining and predicting how the brain responds to various stimuli. The mapping from a stimulus representation to the corresponding fMRI representation is called brain encoding; the reverse procedure, recovering a representation of the input from the brain signals, is brain decoding. Many methods exist for mapping between these two representations, and the optimal approach to encoding and decoding remains an open question. Diverse models have been tried as encoder-decoders, including, but not limited to, Multi-Layer Perceptrons (MLPs), Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), and Large Language Models (LLMs), specifically Transformers. This Master's thesis focuses on brain decoding from fMRI data to linguistic stimuli using transformers. The brain data were collected from subjects reading informative, neutral sentences. Recent research shows that transformers perform well when used as encoders and decoders, but they generally require fine-tuning, which is time-consuming because most transformers have millions of parameters. The effect of parameter reduction using Low-Rank Adaptation of Large Language Models (LoRA) is therefore examined in terms of brain decoding performance.

The first research question explores different models. Encoder-only, decoder-only, and encoder-decoder architectures are compared, along with their attention mechanisms and training tasks (discriminative vs. generative). Models trained with Replaced Token Detection (RTD), a discriminative task, are found to perform better at brain decoding than models trained with Masked Language Modeling, and Disentangled Attention is also shown to improve brain decoding performance; the DeBERTa-V3 model, which combines both, performs best in the comparison.

The second question concerns the effect of the Natural Language Processing (NLP) task used when fine-tuning the transformer with LoRA. LoRA does not support fine-tuning with RTD by default, and when different LoRA fine-tuning tasks are compared, Causal Language Modeling performs slightly better. The main finding is that, regardless of the NLP task, performance does not drop when LoRA is used and the models are trained with fewer parameters. The same holds for the third research question, which concerns the rank of the low-rank decomposition matrix: reducing the dimensionality (rank) of the parameter space does not reduce brain decoding performance, and there is no single optimal rank, as performance fluctuates differently for each model.

The final question analyzes the effect of the data: how adding more data and stratifying it changes model performance. Brain decoding performance dropped to chance level when brain data from other subjects was added and the sentence orders were stratified, indicating that transformer models still carry a certain bias toward the specific brain and linguistic data they are decoding.
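As a rough illustration of the parameter-efficient setup described in the abstract (a minimal sketch, not the thesis code), the snippet below attaches a LoRA adapter to a causal language model using the Hugging Face transformers and peft libraries. The base model name, the target module names, and the rank r are illustrative assumptions; varying r changes the dimensionality of the adapter's parameter space, the quantity examined in the third research question.

# Minimal sketch, assuming the Hugging Face transformers and peft libraries are installed;
# the base model, target modules, and rank r are illustrative, not taken from the thesis.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # assumed base model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,  # causal language modeling, one of the LoRA fine-tuning tasks compared
    r=8,                           # rank of the low-rank decomposition (the third research question)
    lora_alpha=16,                 # scaling factor applied to the LoRA update
    lora_dropout=0.05,
    target_modules=["c_attn"],     # attention projection in GPT-2; module names vary by architecture
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # reports how few parameters remain trainable under LoRA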



Book
Autonomy Support Beyond the Language Learning Classroom : A Self-Determination Theory Perspective


Abstract

Through the application of self-determination theory (SDT) to research and practice, this book deepens our understanding of how autonomous language learning can be supported, developed and understood outside of the classroom. The chapters deal with learning environments and open spaces, communities and relationships, and dialogue and interaction.


Book
Autonomy Support Beyond the Language Learning Classroom




Digital
Autonomy Support Beyond the Language Learning Classroom : A Self-Determination Theory Perspective
Authors: --- --- --- --- --- et al.
ISBN: 9781788929059, 9781788929042 Year: 2022 Publisher: Bristol; Blue Ridge Summit: Multilingual Matters



Keywords

Psychology
