Listing 1 - 10 of 22
Statistical relational models combine aspects of first-order logic and probabilistic graphical models, enabling them to model complex logical and probabilistic interactions between large numbers of objects. This level of expressivity comes at the cost of increased complexity of inference, motivating a new line of research in lifted probabilistic inference. By exploiting symmetries of the relational structure in the model, and by reasoning about groups of objects as a whole, lifted algorithms dramatically improve the run time of inference and learning. The thesis has five main contributions. First, we propose a new method for logical inference, called first-order knowledge compilation. We show that by compiling relational models into a new circuit language, hard inference problems become tractable to solve. Furthermore, we present an algorithm that compiles relational models into our circuit language. Second, we show how to use first-order knowledge compilation for statistical relational models, leading to a new state-of-the-art lifted probabilistic inference algorithm. Third, we develop a formal framework for exact lifted inference, including a definition in terms of its complexity with respect to the number of objects in the world. From this follows a first completeness result, showing that the two-variable class of statistical relational models always supports lifted inference. Fourth, we present an algorithm for approximate lifted inference that performs exact lifted inference in a relaxed, approximate model. Fifth, because statistical relational models are receiving much attention for their expressive power for learning, we propose to harness the full power of relational representations for that task by means of lifted parameter learning. The techniques presented in this thesis are evaluated empirically on statistical relational models of thousands of interacting objects and millions of random variables.
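The lifting principle this abstract describes, reasoning once about a whole group of interchangeable objects instead of once per ground instance, can be sketched with a toy example. The sketch below is illustrative only (it assumes a trivially symmetric model of n independent "smokers", not the thesis's circuit language): the grounded routine enumerates all 2^n worlds, while the lifted one groups them into n + 1 interchangeable counts.

```python
from itertools import product
from math import comb

def grounded_prob_any(n, p):
    # Propositional inference: enumerate all 2^n ground worlds.
    total = 0.0
    for world in product([False, True], repeat=n):
        weight = 1.0
        for smokes in world:
            weight *= p if smokes else 1.0 - p
        if any(world):
            total += weight
    return total

def lifted_prob_any(n, p):
    # Lifted inference: the n people are interchangeable, so group the
    # worlds by how many people smoke -- n + 1 terms instead of 2^n.
    return sum(comb(n, k) * p**k * (1.0 - p) ** (n - k) for k in range(1, n + 1))

# Both agree, but the lifted computation is linear in n.
assert abs(grounded_prob_any(10, 0.3) - lifted_prob_any(10, 0.3)) < 1e-12
```

In this degenerate independent case the lifted sum collapses to 1 - (1 - p)^n; real lifted inference earns its keep when the objects interact.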
681.3 <043> --- academic collection --- Computerwetenschap--Dissertaties --- Theses
As the web keeps expanding, so does the interest of attackers who seek to exploit users and services for profit. In recent years, hardly a month has passed without news of a major web-application break-in and the subsequent exfiltration of private or financial data. At the same time, attackers constantly register rogue domains, using them to perform phishing attacks, collect private user information, and exploit vulnerable browsers and plugins. In this dissertation, we approach the increasingly serious problem of cybercrime from two different and complementary standpoints. First, we investigate large groups of web applications, seeking to discover systematic vulnerabilities across them. We analyze the workings of referrer-anonymizing services, file hosting services, remote JavaScript inclusions, and web-based device fingerprinting, exploring their interactions with users and third parties, as well as their consequences for a user's security and privacy. Through a series of automated and manual experiments we uncover many previously unknown issues that could readily be used to exploit vulnerable services and compromise user data. Second, we study existing, well-known web application attacks and propose client-side countermeasures that can strengthen the security of a user's browsing environment without the collaboration, or even awareness, of the web application. We propose countermeasures to defend against session hijacking, SSL stripping, and malicious, plugin-originating cross-domain requests. Our countermeasures involve near-zero interaction with the user after their installation, have a minimal performance overhead, and do not assume the existence of trusted third parties.
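As a flavour of what a client-side check against session hijacking can look like, the sketch below flags likely session cookies that are left readable to page scripts (a classic enabler of hijacking via XSS). It is a hypothetical illustration, not the dissertation's actual tooling; the cookie-name heuristic and the attribute check are assumptions of this sketch.

```python
import re

# Heuristic: cookie names that usually carry session identifiers.
SESSION_NAME_HINTS = re.compile(r"(sess|sid|auth|token)", re.I)

def flag_hijackable_cookies(set_cookie_headers):
    """Return names of likely session cookies that page scripts could
    read (no HttpOnly attribute), i.e. candidates for session hijacking."""
    flagged = []
    for header in set_cookie_headers:
        parts = [p.strip() for p in header.split(";")]
        name = parts[0].split("=", 1)[0]
        attrs = {p.split("=", 1)[0].lower() for p in parts[1:]}
        if SESSION_NAME_HINTS.search(name) and "httponly" not in attrs:
            flagged.append(name)
    return flagged

headers = [
    "PHPSESSID=abc123; Path=/",                  # session cookie, script-readable
    "auth_token=xyz; Path=/; HttpOnly; Secure",  # protected
    "theme=dark; Path=/",                        # not a session cookie
]
assert flag_hijackable_cookies(headers) == ["PHPSESSID"]
```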
681.3 <043> --- academic collection --- Computerwetenschap--Dissertaties --- Theses
One of the main advantages of logic programs is that they allow one to write declarative programs that are very understandable. However, such a declarative program can be a very inefficient or even non-terminating specification of a problem. Therefore, one of the main concerns in the verification of a logic program is proving that it terminates. If such a proof fails, non-termination analysis identifies the loop in the program. In this PhD, we prove non-termination based on symbolic derivation trees. These symbolic trees show (a part of) the derivations of all queries in a certain class of queries. We implemented these symbolic trees and introduced a new non-termination condition based on them. In the remainder of the PhD, these trees will be extended to use non-failure information. This allows proving non-termination for more classes of programs. Furthermore, we will investigate which non-logical features of Prolog can be incorporated in order to analyse more realistic Prolog programs.
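The loop-detection idea can be illustrated at the propositional level, where a node of the derivation tree is just the tuple of goals still to be proved. The sketch below is a toy (no unification, no symbolic trees over query classes): if the same goal tuple reappears on a branch, the derivation loops, so the query may not terminate.

```python
def may_loop(goals, rules, seen=frozenset()):
    """goals: tuple of atoms still to prove; rules: {head: [body_tuple, ...]}.
    Returns True if some derivation branch revisits a goal tuple."""
    if not goals:
        return False                 # this branch succeeds, no loop here
    if goals in seen:
        return True                  # same goal tuple seen before: a loop
    head, rest = goals[0], goals[1:]
    seen = seen | {goals}
    for body in rules.get(head, []):
        if may_loop(body + rest, rules, seen):
            return True
    return False

# p :- p.  loops;     q :- r.  r.  terminates.
rules = {"p": [("p",)], "q": [("r",)], "r": [()]}
assert may_loop(("p",), rules) is True
assert may_loop(("q",), rules) is False
```

With variables, the thesis's condition has to check for *variants* of earlier queries in the symbolic tree rather than exact repetition; this propositional check only conveys the shape of the argument.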
681.3 <043> --- academic collection --- Computerwetenschap--Dissertaties --- Theses
Natural language understanding is one of the fundamental goals of artificial intelligence. An essential function of natural language is to talk about the location and translocation of objects in space. Understanding spatial language is important in many applications, such as geographical information systems, human-computer interaction, the provision of navigational instructions to robots, and visualization or text-to-scene conversion. Due to the complexity of spatial primitives and notions, and the challenges of designing ontologies for formal spatial representation, the extraction of spatial information from natural language still has to be placed in a well-defined framework. Machine learning has not systematically been applied to the task, and no established corpora are available. In this thesis I study the problem from cognitive, linguistic, and computational points of view, with a primary focus on establishing a supervised machine learning framework. This thesis makes five main research contributions. The first is the design of a spatial annotation scheme to bridge between natural language and formal spatial representations. In this scheme, universal and commonly accepted cognitive spatial notions and multiple well-known qualitative spatial reasoning models are applied. The second is the definition of a novel computational linguistic task that utilizes the annotation scheme to map natural language to spatial ontologies. For this task I have built rich annotated corpora and an evaluation scheme. The third is a detailed investigation of the linguistic features and structural characteristics of spatial language that aid the use of machine learning in extracting spatial roles and relations from annotated data. The learning methods used are discriminative graphical models and statistical relational learning. The fourth is the proposal of a unified structured output learning model for ontologies. The ontology components are learnt while taking into account the ontological constraints and linguistic dependencies among the components. The ontology includes roles and relations, and multiple formal semantic types. The fifth is the proposal of an efficient inference approach based upon constraint optimization. It can deal with a large number of variables and constraints, and makes building a global structured learning model for ontology population feasible. To test the approach I have performed an empirical investigation using my spatial ontology. The application of my proposed unified learning model for ontology population is not limited to the extraction of spatial semantics; it could be used to populate any ontology. I argue therefore that this work is an important step towards automatically describing text with semantic labels that form a structured ontological representation of the content.
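The extraction task itself, mapping a sentence onto spatial roles such as trajector, spatial indicator, and landmark, can be illustrated with a rule-based toy. The thesis learns these roles with discriminative graphical models and statistical relational learning; the preposition list and stop-word filter below are purely illustrative assumptions, not part of the annotation scheme.

```python
SPATIAL_INDICATORS = {"on", "in", "under", "above", "behind", "at"}
STOP = {"the", "a", "an", "is", "are", "was", "were"}

def extract_spatial_relation(tokens):
    """Toy extractor for a single (trajector, indicator, landmark) triple:
    the first known preposition splits the sentence, and the nearest
    content words on either side fill the roles."""
    for i, tok in enumerate(tokens):
        if tok.lower() in SPATIAL_INDICATORS:
            left = [t for t in tokens[:i] if t.lower() not in STOP]
            right = [t.rstrip(".") for t in tokens[i + 1:] if t.lower() not in STOP]
            return {
                "trajector": left[-1] if left else None,
                "indicator": tok,
                "landmark": right[-1] if right else None,
            }
    return None

rel = extract_spatial_relation("The book is on the table.".split())
assert rel == {"trajector": "book", "indicator": "on", "landmark": "table"}
```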
681.3 <043> --- academic collection --- Computerwetenschap--Dissertaties --- Theses
Representing, learning, and reasoning about knowledge are central to artificial intelligence (AI). A long-standing goal of AI is unifying logic and probability, to benefit from the strengths of both formalisms. Probability theory allows us to represent and reason in uncertain domains, while first-order logic allows us to represent and reason about structured, relational domains. Many real-world problems exhibit both uncertainty and structure, and thus can be more naturally represented with a combination of probabilistic and logical knowledge. This observation has led to the development of probabilistic logical models (PLMs), which combine probabilistic models with elements of first-order logic to succinctly capture uncertainty in structured, relational domains, e.g., social networks, citation graphs, etc. While PLMs provide expressive representation formalisms, efficient inference is still a major challenge in these models, as they typically involve a large number of objects and interactions among them. Among the various efforts to address this problem, a promising line of work is lifted probabilistic inference. Lifting attempts to improve the efficiency of inference by exploiting the symmetries in the model. The basic principle of lifting is to perform an inference operation once for a whole group of interchangeable objects, instead of once per individual in the group. Researchers have proposed lifted versions of various (propositional) probabilistic inference algorithms, and shown large speedups achieved by the lifted algorithms over their propositional counterparts. In this dissertation, we make a number of novel contributions to lifted inference, mainly focusing on lifted variable elimination (LVE). First, we focus on constraint processing, which is an integral part of lifted inference. Lifted inference algorithms are commonly tightly coupled to a specific constraint language. We bring more insight into LVE by decoupling the operators from the constraint language used. We define lifted inference operations so that they operate on the semantic level rather than on the syntactic level, making them language independent. Further, we show how this flexibility allows us to improve the efficiency of inference, by enhancing LVE with a more powerful constraint representation. Second, we generalize the 'lifting' tools used by LVE, by introducing a number of novel lifted operators in this algorithm. We show how these operations allow LVE to exploit a broader range of symmetries, and thus expand the range of problems it can solve in a lifted way. Third, we advance our theoretical understanding of lifted inference by providing the first completeness result for LVE. We prove that LVE is complete (i.e., it always has a lifted solution) for the fragment of 2-logvar models, a model class that can represent many useful relations in PLMs, such as (anti-)symmetry and homophily. This result also shows the importance of our contributions to LVE, as we prove they are sufficient and necessary for LVE to achieve completeness. Fourth, we propose the structure of first-order decomposition trees (FO-dtrees) as a tool for symbolically analyzing lifted inference solutions. We show how FO-dtrees can be used to characterize an LVE solution in terms of a sequence of lifted operations. We further make a theoretical analysis of the complexity of lifted inference based on a corresponding FO-dtree, which is valuable for finding and selecting among different lifted solutions. Finally, we present a pre-processing method for speeding up (lifted) inference. Our goal with this method is to speed up inference in PLMs by restricting the computations to the requisite part of the model. For this, we build on the Bayes-ball algorithm, which identifies the requisite variables in a ground Bayesian network. We present a lifted version of Bayes-ball, which works with first-order Bayesian networks, and show how it applies to lifted inference.
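The pruning idea behind the final contribution can be illustrated on a ground network. The sketch below implements only the barren-node part of the pruning (keeping the ancestors of query and evidence nodes); the full Bayes-ball algorithm additionally exploits d-separation, and the thesis lifts it to first-order Bayesian networks, neither of which is shown here.

```python
def requisite_by_barren_pruning(parents, query, evidence):
    """Keep only the ancestors of the query and evidence nodes; every
    other node is barren and cannot influence P(query | evidence)."""
    requisite, frontier = set(), set(query) | set(evidence)
    while frontier:
        node = frontier.pop()
        if node not in requisite:
            requisite.add(node)
            frontier |= set(parents.get(node, []))
    return requisite

# A -> B -> C, and B -> D.  D is barren for a query on C.
parents = {"B": ["A"], "C": ["B"], "D": ["B"]}
assert requisite_by_barren_pruning(parents, query={"C"}, evidence=set()) == {"A", "B", "C"}
```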
681.3 <043> --- academic collection --- Computerwetenschap--Dissertaties --- Theses
Recurring solutions to software engineering problems are often captured in patterns, which describe, in a generic but reusable manner, a specific problem and a corresponding solution. This thesis develops a deeper understanding of how pattern catalogs can help a software architect to reconcile the software's requirements and its architecture in the context of security. To achieve this goal, we follow an empirical approach. Two aspects of development are taken into account, namely (1) the construction of the software, and (2) its evolution over time. An analysis of the security patterns landscape shows that sufficient security patterns exist for the construction of secure software, but organization is needed to make them more usable. With a controlled empirical experiment, we investigate the effect of such organization from the viewpoint of the software architect. Regarding patterns for secure co-evolution, we observe that no patterns have been defined. Therefore, we propose a framework for precisely describing such patterns (called change patterns), together with a process for applying them. We illustrate the concepts with patterns for handling evolving trust requirements and access control. The approach is validated by means of two empirical studies and implemented in a proof-of-concept tool.
681.3 <043> --- academic collection --- Computerwetenschap--Dissertaties --- Theses
The importance of security and reliability of software systems makes formal methods of paramount significance for guaranteeing that a system satisfies a particular specification. Hyperproperties can be seen as an abstract formalization of security policies. Because of this, it is desirable to establish a generic verification methodology for at least the class of security-relevant hyperproperties. Unfortunately, such a generic verification methodology is lacking. This is the main motivation of this dissertation. We observe that most interesting hyperproperties that are relevant in practice come from a class of security-relevant policies, specified using universal and possibly existential quantification on traces, as well as relations on those traces. We formalize such definitions and call them holistic hyperproperties. Our goal then becomes to find a methodology for the verification of holistic hyperproperties. To that end, we explore an incremental, coalgebraic perspective on systems and specifications, and as a result we arrive at a different but related kind of specification: incremental hyperproperties (essentially coinductive predicates). Given some holistic hyperproperty H, the respective incremental version is called H′, and its definition naturally gives rise to the notion of an H′-simulation relation. Such relations enable verification of holistic hyperproperties: finding an H′-simulation relation on a candidate system implies that the incremental hyperproperty H′ holds, and thus that the high-level, holistic hyperproperty H holds for the candidate system. We also introduce techniques that are often helpful in translating holistic hyperproperties into incremental ones. To show that incremental hyperproperties are important in practice, we explore their connection with the most closely related verification technique, via unwinding. To achieve this, we propose a framework for coinductive unwinding of security-relevant hyperproperties, based on Mantel's MAKS framework and our work on holistic and incremental hyperproperties. Mantel's MAKS framework cannot be used directly, as it is geared towards reasoning about finite behavior and is thus not suitable for reasoning about holistic hyperproperties in general. However, our framework has a similar structure to MAKS: coinductive unwinding relations compose (or imply) coinductive Basic Security Predicates, which in turn compose a number of security-relevant, holistic hyperproperties. It turns out that the coinductive unwinding relations we introduce are instances of H′-simulation relations. More importantly, incremental hyperproperties can be expressed in well-behaved logics, and this opens the door to their verification. Finally, we propose a generic verification approach for incremental hyperproperties via game-based model checking. To achieve this, we first show how to interpret incremental hyperproperty checking as games. Although one might do regular model checking of incremental hyperproperties on a transformed system, model checking games are advantageous as they not only produce a yes-no answer but also give more intuition about the security policy and what can potentially go wrong, by producing a concrete winning strategy. In order to show that the theory developed here is practical, we present and illustrate methods of using several off-the-shelf tools for the verification of incremental hyperproperties expressed in the polyadic modal mu-calculus.
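The role an unwinding relation plays can be illustrated on a tiny finite-state system. The sketch below checks a classical unwinding-style condition for noninterference on a deterministic system: related states give the same low observation, low actions preserve the relation pairwise, and high actions "locally respect" it. This is a simplified stand-in for the H′-simulation relations of the dissertation, not their actual definition.

```python
def is_unwinding(R, states, step, obs, low_acts, high_acts):
    # (1) Related states look identical to the low observer.
    if any(obs(s) != obs(t) for s, t in R):
        return False
    # (2) Low actions preserve the relation pairwise.
    if any((step(s, a), step(t, a)) not in R for s, t in R for a in low_acts):
        return False
    # (3) High actions are invisible: each successor stays related to
    #     the state it came from (local respect).
    if any((step(s, a), s) not in R for s in states for a in high_acts):
        return False
    return True

# State = (high_bit, low_bit); the low observer sees only the low bit.
states = [(h, l) for h in (0, 1) for l in (0, 1)]
def step(s, a):
    h, l = s
    return (1 - h, l) if a == "toggle_high" else (h, 1 - l)

# Candidate relation: states agreeing on the low bit.
R = {(s, t) for s in states for t in states if s[1] == t[1]}
assert is_unwinding(R, states, step, lambda s: s[1], ["toggle_low"], ["toggle_high"])
```

Finding such a relation certifies that the high bit never leaks into the low observation, which is the verification pattern the H′-simulations generalize.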
681.3 <043> --- academic collection --- Computerwetenschap--Dissertaties --- Theses
Noise and vibration performance is a key parameter for assessing the quality of automotive and aerospace products. In order to gain competitive advantage, manufacturers continually strive to reduce noise and vibration levels. The numerical analysis of the acoustic behaviour results in huge mathematical models, in particular for higher-frequency analyses. This leads to high requirements regarding computational and storage resources. Furthermore, the cost increases dramatically when the model has a large number of design variables that have to be taken into account for the development of the optimal design. This motivates the importance of reducing the size of the models in order to reduce the simulation cost. One choice for building such reduced models is algebraic Model Order Reduction (MOR) for linear dynamical systems. The aim of MOR is to reduce the system matrix in such a way that the reduced system has similar input/output behaviour to the original system. The goal of this thesis is to use the Dominant Pole Algorithm (DPA) for computing a truncated modal representation of large-scale parametric linear second-order dynamical systems, as well as large-scale dynamical systems whose matrix has a nonlinear frequency dependency. First, we adapted the DPA for reducing systems that have an infinite number of poles. Deflation is an important ingredient for this type of method, in order to prevent eigenvalues from being computed more than once. Because of the nonlinear frequency dependency, classical deflation approaches are not applicable. Therefore we propose an alternative technique that essentially removes computed poles from the system's input and output vectors. This method appears to be reliable for computing a large number of dominant poles of the system. Next, we apply the DPA to parametric second-order dynamical systems, whose system matrix depends on parameters. We iteratively compute the parametric dominant poles. We consider two approaches. In the first approach, we compute the parameter-dependent poles one by one, i.e., all parameters are taken into account together. We use interpolation in the parameter space to achieve this. In the second approach, the dominant eigenpairs are computed for a selection of interpolation points in the parameter space, independently from each other. As the eigenvectors are continuous functions of the parameters, we use the already computed eigenvectors from previous parameter values as starting values for the DPA.
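For a system with diagonal A and E = I, the transfer function is H(s) = sum_i b_i c_i / (s - p_i) and the DPA's linear solves reduce to elementwise divisions, which makes the basic (non-parametric, non-deflated) iteration easy to sketch. The Newton-type update s ← s − (cᵀx)/(yᵀEx) is the standard one from the dominant-pole literature; the diagonal setup below is an illustrative assumption, not the large-scale setting of the thesis.

```python
def dpa_diagonal(poles, b, c, s0, tol=1e-10, iters=100):
    """Basic Dominant Pole Algorithm for H(s) = c^T (sI - A)^{-1} b with
    A = diag(poles) and E = I, so (sI - A)x = b is an elementwise divide.
    Update: s <- s - (c^T x) / (y^T E x)."""
    s = s0
    for _ in range(iters):
        x = [bi / (s - p) for bi, p in zip(b, poles)]   # (sI - A)^{-1} b
        y = [ci / (s - p) for ci, p in zip(c, poles)]   # (sI - A)^{-T} c
        num = sum(ci * xi for ci, xi in zip(c, x))      # c^T x = H(s)
        den = sum(yi * xi for yi, xi in zip(y, x))      # y^T x  (E = I)
        s_new = s - num / den
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s

# Two poles at -1 and -10 with unit residues; -1 is the dominant one,
# and the iteration started at s0 = 0 converges to it.
p = dpa_diagonal([-1.0, -10.0], [1.0, 1.0], [1.0, 1.0], s0=0.0)
assert abs(p - (-1.0)) < 1e-8
```

Deflation, the parametric variants, and the nonlinear frequency dependency handled in the thesis all build on top of this basic iteration.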
681.3 <043> --- academic collection --- Computerwetenschap--Dissertaties --- Theses