Different brain-computer interfaces have been used to understand how the human brain works. Functional Magnetic Resonance Imaging (fMRI) is a popular method for examining and predicting how the brain responds to various stimuli. The mapping from a stimulus representation to the corresponding fMRI representation is called brain encoding; the reverse procedure, recovering a representation of the stimulus from the brain signals, is brain decoding. Several methods exist for mapping between these two representations, and the optimal approach to encoding and decoding remains an open question. Diverse models have been tried as encoders and decoders, including, but not limited to, Multi-Layer Perceptrons (MLPs), Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), and Large Language Models (LLMs), specifically Transformers. This Master's thesis focuses on brain decoding from fMRI data to linguistic stimuli using transformers. The brain data were collected from subjects reading informative, neutral sentences. Recent research shows that transformers perform well when used as encoders and decoders. Transformers generally require fine-tuning, which is time-consuming because most transformers have millions of parameters; the effect of parameter reduction using Low-Rank Adaptation of Large Language Models (LoRA) is therefore examined in terms of brain decoding performance. The first research question explores different models: Encoder-only, Decoder-only, and Encoder-Decoder architectures are compared, along with their different attention mechanisms and training tasks (discriminators vs. generators). The thesis concludes that models trained with Replaced Token Detection (RTD), a discriminative task, perform better at brain decoding than models trained with Masked Language Modeling.
Disentangled Attention is also shown to improve brain decoding performance, and the DeBERTa-V3 model, which combines Disentangled Attention with Replaced Token Detection, performs best in the comparison. The second question concerns the effect of the Natural Language Processing (NLP) task used for fine-tuning the transformer with LoRA. LoRA does not support fine-tuning with RTD by default; when fine-tuning with different LoRA tasks is compared, Causal Language Modeling performs slightly better. The main finding is that, regardless of the NLP task, there is no performance drop when LoRA is used and the models are trained with fewer parameters. The same holds for the third research question, which concerns the rank of the low-rank decomposition matrices: brain decoding performance does not drop when the dimensionality (rank) of the parameter space is reduced. There is also no single optimal rank for brain decoding performance, as performance fluctuates differently for each model. The final question analyzes the effect of the data: how adding more data and stratifying it changes the performance of the models. Brain decoding performance fell to chance level (a coin toss) when brain data from other subjects were added and the sentence orders were stratified. This indicates that transformer models still carry a certain bias towards the particular brain and linguistic data they are decoding.
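The parameter reduction that the abstract attributes to LoRA can be made concrete with a small sketch. The idea (hypothetical shapes below, not the thesis's actual code) is that instead of updating a frozen d × d weight matrix directly, LoRA trains two small matrices of rank r whose product is added to the frozen weight, so only 2·r·d parameters are trained instead of d².

```python
# Minimal sketch of the LoRA parameter-count argument.
# The dimensions below (a hidden size of 768, rank 8) are illustrative
# assumptions, chosen to resemble a typical transformer layer; they are
# not values taken from the thesis.

def lora_param_counts(d: int, r: int) -> tuple[int, int]:
    """Return (full fine-tuning params, LoRA params) for one d x d weight.

    Full fine-tuning updates all d*d entries of the weight matrix W.
    LoRA freezes W and instead trains B (d x r) and A (r x d), adding
    their product B @ A to W, so only 2*r*d entries are trained.
    """
    return d * d, 2 * r * d

d, r = 768, 8  # hidden size and LoRA rank (illustrative)
full, lora = lora_param_counts(d, r)
print(full, lora)  # 589824 vs. 12288 trainable parameters per matrix
```

Lowering r shrinks the trainable parameter count linearly, which is why the third research question asks whether a smaller rank costs any brain decoding performance.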
Through the application of self-determination theory (SDT) to research and practice, this book deepens our understanding of how autonomous language learning can be supported, developed and understood outside of the classroom. The chapters deal with learning environments and open spaces, communities and relationships, and dialogue and interaction.
Autonomy (Psychology) --- Informal language learning --- Language and languages --- Motivation in education --- Non-formal education --- Second language acquisition --- Study and teaching --- Psychological aspects