Listing 1 - 3 of 3

Dissertation
Master thesis: Reinforcement Learning for Network Control
Authors: --- --- --- ---
Year: 2019 Publisher: Université de Liège (ULiège), Liège


Abstract

As computer networks become more dynamic, complex and sophisticated, they naturally become harder to manage and maintain. 

More specifically, existing network control planes do not always detect and remediate networking issues well: those issues often require human involvement to be properly resolved.

The aim of this work is to address computer networking problems in a more automatic or programmatic way. One approach to this problem is Reinforcement Learning.

In this thesis, a monitoring pipeline and a problem-injection module are built on top of a test network in order to train, using Reinforcement Learning techniques, an intelligent agent able to properly detect and remediate some predefined networking issues.

The test network built in this study is a physical one, with which the agent and modules communicate over SSH.

Several experiments of increasing complexity are implemented and several Reinforcement Learning agents are trained and evaluated.

The overall goal of this project was to open the way for Artificial Intelligence techniques in computer networking, a field where such techniques are rarely used. Under some assumptions, the Reinforcement Learning approach was shown to be successful in this work.
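The abstract does not describe the agent's design. As a rough illustration of the kind of Reinforcement Learning involved, the following is a minimal tabular Q-learning sketch, with hypothetical issue and action names: detection is reduced to observing an injected issue, and remediation to choosing a fix.

```python
import random

# Toy setting (names and environment are hypothetical, not the thesis's):
# the agent observes one of a few predefined, injected networking issues and
# must choose the remediation action that fixes it. Reward is +1 for the
# correct fix and -1 otherwise, so each episode is a single decision.
ISSUES = ["link_down", "high_latency", "packet_loss"]
ACTIONS = ["restart_interface", "reroute_traffic", "adjust_qos"]
CORRECT_FIX = {"link_down": "restart_interface",
               "high_latency": "reroute_traffic",
               "packet_loss": "adjust_qos"}

def train(episodes=2000, alpha=0.5, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: estimated reward of each action for each observed issue.
    q = {(s, a): 0.0 for s in ISSUES for a in ACTIONS}
    for _ in range(episodes):
        issue = rng.choice(ISSUES)            # a problem is injected
        if rng.random() < epsilon:            # epsilon-greedy exploration
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(issue, a)])
        reward = 1.0 if CORRECT_FIX[issue] == action else -1.0
        # One-step Q-learning update (episodes terminate immediately).
        q[(issue, action)] += alpha * (reward - q[(issue, action)])
    return q

q = train()
# Greedy policy after training: the best-known fix for each issue.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in ISSUES}
```

In the thesis's actual setup the agent interacts with a physical network over SSH and the state space is richer; the sketch only shows the learning loop.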


Dissertation
NLP Methods for Insurance Document Comparison
Authors: --- --- --- ---
Year: 2021 Publisher: Université de Liège (ULiège), Liège


Abstract

This work studies the steps of a process that would allow comparing two different versions of a document. The process is decomposed into four parts: text extraction, text segmentation, text matching and text comparison, each of which has been the subject of research and experiments. In particular, it is shown that comparing the sections of the documents, rather than the complete documents, improves the quality of the comparison.

The text matching task, which is the part studied in most depth, is a variant of the classification task, with the difference that there is no fixed set of categories to classify into. Instead, each document has a unique set of classes, one per section, that cannot be known in advance. This has many implications, most notably that traditional classifiers cannot be used, since no training data can be created for this task.

Different natural language processing (NLP) methods have been compared on the text matching task. For this purpose, a small dataset of pairs of documents with their matching has been built, and metrics inspired by the confusion matrix of the classification task have been designed to assess the performance of the different models. The models compared are term frequency (TF), TF-IDF, Word2vec combined with the Word Mover's Distance, Doc2vec, BERT and RoBERTa. The experiments show that more complex models are not suited to this matching task, and that simple statistical models are preferable. Further work may investigate the performance of Latent Semantic Analysis (LSA) on this matching task.
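The thesis's code is not reproduced here; the sketch below, with assumed tokenization and made-up toy sections, illustrates the kind of simple statistical matcher the abstract favours: each section of a new document version is matched to the most similar section of the old version using TF-IDF vectors and cosine similarity.

```python
import math
from collections import Counter

def tfidf_vectors(sections):
    """TF-IDF vector (as a dict term -> weight) for each section."""
    tokenized = [s.lower().split() for s in sections]
    n = len(tokenized)
    # Number of sections each term appears in (document frequency).
    df = Counter(t for toks in tokenized for t in set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        # Smoothed IDF keeps weights positive even for ubiquitous terms.
        vectors.append({t: tf[t] * (math.log((1 + n) / (1 + df[t])) + 1)
                        for t in tf})
    return vectors

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def match_sections(old, new):
    vecs = tfidf_vectors(old + new)
    old_vecs, new_vecs = vecs[:len(old)], vecs[len(old):]
    # For each new section, the index of the most similar old section.
    return [max(range(len(old)), key=lambda i: cosine(nv, old_vecs[i]))
            for nv in new_vecs]

# Toy insurance-style sections (invented for illustration).
old = ["liability clause for damages",
       "premium payment schedule",
       "termination notice period"]
new = ["schedule of premium payments",
       "notice period for termination",
       "clause about liability and damages"]
matching = match_sections(old, new)
```

This greedy nearest-neighbour matching ignores the constraint that sections may appear, disappear or be reordered between versions, which the thesis's metrics are designed to evaluate.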


Dissertation
Automatic Abstractive Text Summarization: A deeper look into convolutional sequence-to-sequence networks
Authors: --- --- --- ---
Year: 2021 Publisher: Université de Liège (ULiège), Liège


Abstract

As the amount of information produced every day continually increases, the desire for summaries containing only the most salient parts of texts continues to gain traction. Even though it is already possible to extract parts of texts and glue them together, fluent, human-like summaries are usually preferred.

That is the concern of the Artificial Intelligence subfield of Automatic Abstractive Summarization. Although the task is typically solved using recurrent neural networks, that architecture comes with several challenges, the biggest being the amount of time and computational power required to train the models. Fortunately, another less computationally intensive paradigm exists, based on convolutional networks, even though it has not been as extensively studied. 

This thesis is concerned with that convolutional framework, and explores questions and assumptions that have not been answered previously, such as the advantages and drawbacks of using pretrained embeddings, or the tradeoff between performance gains and the added complexity of mechanisms such as reinforcement learning or pointer-generation. Experiments on the abstractiveness of the models, their fine-tuning on a different dataset, and their ability to capture long-distance dependencies are also performed, using both the CNN/DailyMail and XSUM datasets.

Those experiments show that adding more convolutional blocks to the model helps only up to a certain point, and that the use of pretrained embeddings is advisable, as is the use of the pointer-generator network implemented in this work. Applying reinforcement learning at the end of model training is also advisable.
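The pointer-generator network itself is not detailed in the abstract. As a minimal numerical illustration of the mechanism (the tokens and probabilities below are made up), the final output distribution is a p_gen-weighted mixture of the decoder's vocabulary distribution and the copy distribution given by attention over the source, which lets the model emit out-of-vocabulary source words:

```python
def pointer_generator_mix(p_gen, vocab_dist, attention, source_tokens):
    """Mix generation and copying as in a pointer-generator network.

    p_gen: probability of generating from the vocabulary (vs. copying).
    vocab_dist: probability per vocabulary word from the decoder.
    attention: one weight per source token; copying mass goes to those words.
    """
    final = {w: p_gen * p for w, p in vocab_dist.items()}
    for weight, token in zip(attention, source_tokens):
        # Copied mass is added on top of generated mass for in-vocabulary
        # words, and creates entries for out-of-vocabulary source words.
        final[token] = final.get(token, 0.0) + (1.0 - p_gen) * weight
    return final

# Invented example: "ULiege" is out-of-vocabulary but present in the source,
# so it can still receive probability mass through the copy branch.
vocab_dist = {"the": 0.5, "network": 0.3, "agent": 0.2}
attention = [0.7, 0.3]
source_tokens = ["ULiege", "network"]
mixed = pointer_generator_mix(0.8, vocab_dist, attention, source_tokens)
```

Since both input distributions sum to one, the mixture does too; in the real model p_gen is itself predicted at every decoding step rather than fixed.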

Finally, this thesis concludes with additional experiments that could be carried out in future work, as well as practical advice regarding the use of abstractive summarization in the context of general terms and conditions summarization.
