Narrow your search

Library

FARO (2)

KU Leuven (2)

LUCA School of Arts (2)

Odisee (2)

Thomas More Kempen (2)

Thomas More Mechelen (2)

UCLL (2)

ULB (2)

ULiège (2)

VIVES (2)


Resource type

book (6)


Language

English (6)


Year

2022 (6)

Listing 1 - 6 of 6

Book
Knowledge Modelling and Learning through Cognitive Networks
Authors: ---
Year: 2022 Publisher: Basel: MDPI - Multidisciplinary Digital Publishing Institute

Abstract

One of the most promising developments in modelling knowledge is cognitive network science, which aims to investigate cognitive phenomena driven by the networked, associative organization of knowledge. For example, investigating the structure of semantic memory via semantic networks has illuminated how memory recall patterns influence phenomena such as creativity, memory search, learning, and, more generally, knowledge acquisition, exploration, and exploitation. In parallel, neural network models for artificial intelligence (AI) are also becoming more widespread as inferential models for understanding which features drive language-related phenomena such as meaning reconstruction, stance detection, and emotional profiling. Whereas cognitive networks map explicitly which entities engage in associative relationships, neural networks perform an implicit mapping of correlations in cognitive data as weights that are obtained after training on labelled data and whose interpretation is not immediately evident to the experimenter. This book brings together quantitative, innovative research that focuses on modelling knowledge through cognitive and neural networks to gain insight into the mechanisms driving cognitive processes related to knowledge structuring, exploration, and learning. The book comprises a variety of publication types, including reviews and theoretical papers, empirical research, computational modelling, and big data analysis. All papers share a commonality: they demonstrate how the application of network science and AI can extend and broaden cognitive science in ways that traditional approaches cannot.
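
To make the contrast in this abstract concrete: a cognitive (semantic) network can be written down explicitly as a graph whose nodes are concepts and whose edges are association links, and "memory search" then corresponds to traversing that graph. The sketch below is an illustrative toy example, not code or data from the book; it assumes the networkx Python library and a handful of invented association edges.

    # Illustrative sketch only: an explicit semantic network built with networkx.
    # The association edges are invented for the example, not taken from any dataset.
    import networkx as nx

    associations = [
        ("cat", "dog"), ("cat", "whiskers"), ("dog", "bone"),
        ("whiskers", "milk"), ("milk", "cow"), ("cow", "grass"),
        ("grass", "green"), ("bone", "skeleton"),
    ]
    G = nx.Graph(associations)

    # "Memory search" as traversal: the associative path linking two concepts.
    print(nx.shortest_path(G, "cat", "green"))

    # Structural measures of the kind cognitive network science relates to recall and learning.
    print(dict(G.degree()))
    print(nx.average_clustering(G))

A neural network, by contrast, would encode the same associations implicitly as learned weights, which is the interpretability gap the abstract points to.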

Keywords

Information technology industries --- text mining --- big data --- analytics --- review --- self-organization --- computational philosophy --- brain --- synaptic learning --- adaptation --- functional plasticity --- activity-dependent resonance states --- circular causality --- somatosensory representation --- prehensile synergies --- robotics --- COVID-19 --- social media --- hashtag networks --- emotional profiling --- cognitive science --- network science --- sentiment analysis --- computational social science --- Twitter --- VADER scoring --- correlation --- semantic network analysis --- intellectual disability --- adolescents --- EEG --- emotional states --- working memory --- depression --- anxiety --- graph theory --- classification --- machine learning --- neural networks --- phonotactic probability --- neighborhood density --- sub-lexical representations --- lexical representations --- phonemes --- biphones --- cognitive network --- smart assistants --- knowledge generation --- intelligent systems --- web components --- deep learning --- web-based interaction --- cognitive network science --- text analysis --- natural language processing --- artificial intelligence --- emotional recall --- cognitive data --- AI --- pharmacological text corpus --- automatic relation extraction --- gender stereotypes --- story tropes --- movie plots --- network analysis --- word co-occurrence network


Book
Current Approaches and Applications in Natural Language Processing
Authors: ---
Year: 2022 Publisher: Basel: MDPI Books

Abstract

Current approaches to Natural Language Processing (NLP) have shown impressive improvements in many important tasks: machine translation, language modeling, text generation, sentiment/emotion analysis, natural language understanding, and question answering, among others. The advent of new methods and techniques, such as graph-based approaches, reinforcement learning, or deep learning, has boosted many NLP tasks to human-level performance (and even beyond). This has attracted the interest of many companies, so new products and solutions can benefit from advances in this relevant area within the artificial intelligence domain. This Special Issue reprint, focusing on emerging techniques and trendy applications of NLP methods, reports on some of these achievements, establishing a useful reference for industry and researchers on cutting-edge human language technologies.
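
As an illustration of the kind of tasks the abstract lists (sentiment analysis, question answering, and so on), the sketch below shows how pretrained transformer models are typically called. It is not code from the book; it assumes the Hugging Face transformers package is installed, with default models downloaded on first use.

    # Illustrative sketch only; assumes the "transformers" package and an internet
    # connection for the first run (default pretrained models are downloaded automatically).
    from transformers import pipeline

    # Sentiment / emotion analysis on a short text.
    sentiment = pipeline("sentiment-analysis")
    print(sentiment("The new translation system works remarkably well."))

    # Extractive question answering over a small context passage.
    qa = pipeline("question-answering")
    print(qa(
        question="What has boosted many NLP tasks?",
        context="Graph-based approaches, reinforcement learning, and deep learning "
                "have boosted many NLP tasks to human-level performance.",
    ))

The same pipeline API accepts an explicit model name when a specific pretrained or fine-tuned checkpoint is required.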

Keywords

Technology: general issues --- History of engineering & technology --- natural language processing --- distributional semantics --- machine learning --- language model --- word embeddings --- machine translation --- sentiment analysis --- quality estimation --- neural machine translation --- pretrained language model --- multilingual pre-trained language model --- WMT --- neural networks --- recurrent neural networks --- named entity recognition --- multi-modal dataset --- Wikimedia Commons --- multi-modal language model --- concreteness --- curriculum learning --- electronic health records --- clinical text --- relationship extraction --- text classification --- linguistic corpus --- deception --- linguistic cues --- statistical analysis --- discriminant function analysis --- fake news detection --- stance detection --- social media --- abstractive summarization --- monolingual models --- multilingual models --- transformer models --- transfer learning --- discourse analysis --- problem–solution pattern --- automatic classification --- machine learning classifiers --- deep neural networks --- question answering --- machine reading comprehension --- query expansion --- information retrieval --- multinomial naive bayes --- relevance feedback --- cause-effect relation --- transitive closure --- word co-occurrence --- automatic hate speech detection --- multisource feature extraction --- Latin American Spanish language models --- fine-grained named entity recognition --- k-stacked feature fusion --- dual-stacked output --- unbalanced data problem --- document representation --- semantic analysis --- conceptual modeling --- universal representation --- trend analysis --- topic modeling --- Bert --- geospatial data technology and application --- attention model --- dual multi-head attention --- inter-information relationship --- question difficulty estimation --- named-entity recognition --- BERT model --- conditional random field --- pre-trained model --- fine-tuning --- feature fusion --- attention mechanism --- task-oriented dialogue systems --- Arabic --- multi-lingual transformer model --- mT5 --- language marker --- mental disorder --- deep learning --- LIWC --- spaCy --- RobBERT --- fastText --- LIME --- conversational AI --- intent detection --- slot filling --- retrieval-based question answering --- query generation --- entity linking --- knowledge graph --- entity embedding --- global model --- DISC model --- personality recognition --- predictive model --- text analysis --- data privacy --- federated learning --- transformer
