Listing 1 - 4 of 4

Dissertation
Automatic Abstractive Text Summarization : A deeper look into convolutional sequence-to-sequence networks
Authors: --- --- --- ---
Year: 2021 Publisher: Liège Université de Liège (ULiège)


Abstract

As the amount of information produced every day continually increases, the desire for summaries containing only the most salient parts of texts continues to gain traction. Although it is already possible to extract parts of a text and glue them together, fluent, human-like summaries are usually preferred.

That is the concern of Automatic Abstractive Summarization, a subfield of Artificial Intelligence. Although the task is typically solved using recurrent neural networks, that architecture comes with several challenges, the biggest being the amount of time and computational power required to train the models. Fortunately, a less computationally intensive paradigm based on convolutional networks exists, although it has not been studied as extensively.

This thesis is concerned with that convolutional framework, and explores previously unanswered questions and assumptions, such as the advantages and drawbacks of using pretrained embeddings, or the trade-off between performance gains and the added complexity of mechanisms such as reinforcement learning or pointer-generation. Experiments on the abstractiveness of the models, their fine-tuning on a different dataset, and their ability to capture long-distance dependencies are also performed, using both the CNN/DailyMail and XSUM datasets.

Those experiments show that adding more convolutional blocks to the model helps only up to a certain point, and that the use of pretrained embeddings is advisable, as is the pointer-generator network implemented in this work. Applying reinforcement learning at the end of model training is also beneficial.
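The pointer-generator mechanism referred to above can be illustrated with a small sketch (hypothetical code, not the thesis's implementation): the final output distribution mixes the decoder's distribution over a fixed vocabulary with the attention distribution over source tokens, weighted by a generation probability, so the model can copy rare or out-of-vocabulary source words.

```python
import numpy as np

def pointer_generator_step(p_vocab, attention, src_ids, p_gen):
    """Combine generation and copying into one output distribution.

    p_vocab   -- softmax over the fixed vocabulary, shape (vocab_size,)
    attention -- attention weights over source positions, shape (src_len,)
    src_ids   -- vocabulary id of each source token, shape (src_len,)
    p_gen     -- scalar in [0, 1]: probability of generating vs. copying
    """
    final = p_gen * np.asarray(p_vocab, dtype=float)
    # Scatter-add the copy probabilities onto the ids of the source tokens,
    # accumulating correctly when a token appears more than once.
    np.add.at(final, np.asarray(src_ids), (1.0 - p_gen) * np.asarray(attention))
    return final

# Toy example: vocabulary of 5 words, source sentence of 3 tokens.
p_vocab = np.array([0.1, 0.4, 0.2, 0.2, 0.1])
attention = np.array([0.5, 0.3, 0.2])
src_ids = np.array([2, 2, 4])  # token 2 appears twice in the source
dist = pointer_generator_step(p_vocab, attention, src_ids, p_gen=0.6)
assert abs(dist.sum() - 1.0) < 1e-9  # still a valid probability distribution
```

Because both mixture components are themselves probability distributions, the combined output always sums to one, and source tokens attended to heavily receive extra probability mass regardless of how rare they are in the vocabulary.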

Finally, the thesis concludes with additional experiments that could be carried out in future work, as well as practical advice on applying abstractive summarization to the summarization of general terms and conditions.


Book
Current Approaches and Applications in Natural Language Processing
Authors: ---
Year: 2022 Publisher: Basel MDPI Books


Abstract

Current approaches to Natural Language Processing (NLP) have shown impressive improvements in many important tasks: machine translation, language modeling, text generation, sentiment/emotion analysis, natural language understanding, and question answering, among others. The advent of new methods and techniques, such as graph-based approaches, reinforcement learning, or deep learning, has boosted many NLP tasks to human-level performance (and even beyond). This has attracted the interest of many companies, so that new products and solutions can benefit from advances in this relevant area within the artificial intelligence domain. This Special Issue reprint, focusing on emerging techniques and trending applications of NLP methods, reports on some of these achievements, establishing a useful reference for industry and researchers on cutting-edge human language technologies.

Keywords

natural language processing --- distributional semantics --- machine learning --- language model --- word embeddings --- machine translation --- sentiment analysis --- quality estimation --- neural machine translation --- pretrained language model --- multilingual pre-trained language model --- WMT --- neural networks --- recurrent neural networks --- named entity recognition --- multi-modal dataset --- Wikimedia Commons --- multi-modal language model --- concreteness --- curriculum learning --- electronic health records --- clinical text --- relationship extraction --- text classification --- linguistic corpus --- deception --- linguistic cues --- statistical analysis --- discriminant function analysis --- fake news detection --- stance detection --- social media --- abstractive summarization --- monolingual models --- multilingual models --- transformer models --- transfer learning --- discourse analysis --- problem-solution pattern --- automatic classification --- machine learning classifiers --- deep neural networks --- question answering --- machine reading comprehension --- query expansion --- information retrieval --- multinomial naive bayes --- relevance feedback --- cause-effect relation --- transitive closure --- word co-occurrence --- automatic hate speech detection --- multisource feature extraction --- Latin American Spanish language models --- fine-grained named entity recognition --- k-stacked feature fusion --- dual-stacked output --- unbalanced data problem --- document representation --- semantic analysis --- conceptual modeling --- universal representation --- trend analysis --- topic modeling --- BERT --- geospatial data technology and application --- attention model --- dual multi-head attention --- inter-information relationship --- question difficulty estimation --- named-entity recognition --- BERT model --- conditional random field --- pre-trained model --- fine-tuning --- feature fusion --- attention mechanism --- task-oriented dialogue systems --- Arabic --- multi-lingual transformer model --- mT5 --- language marker --- mental disorder --- deep learning --- LIWC --- spaCy --- RobBERT --- fastText --- LIME --- conversational AI --- intent detection --- slot filling --- retrieval-based question answering --- query generation --- entity linking --- knowledge graph --- entity embedding --- global model --- DISC model --- personality recognition --- predictive model --- text analysis --- data privacy --- federated learning --- transformer


