Listing 1 - 10 of 221
This thesis deals with the automatic detection of speed cameras on the basis of incoming GPS signals. The topic was commissioned by TomTom NV in Ghent. Currently, new speed cameras have to be added to the databases manually. The main objective of this thesis is to analyze the driving behavior of vehicles, so that this analysis can be used to decide whether or not a speed camera is present at a given position. The thesis focuses primarily on motorways, because driving behavior there is more consistent. The project can be divided into several stages. First, the relevant data has to be filtered out of the full data set. Next, several methods are developed to analyze and process the data. The methods covered are sudden braking and accelerating near speed cameras, the average speed at each point of the road, and the percentage of speeders at each point of the road. After that, it has to be determined which of these methods produces the best results. Finally, artificial intelligence is used to train the algorithm so that, given new data, it can find out by itself where a speed camera is present.
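As an illustration of the driving-behavior analysis described above, here is a minimal Python sketch (not the thesis code, which is catalogued under Java): it bins GPS probe points along a motorway into 100 m segments, computes the mean speed and the share of speeders per segment, and flags segments where the mean speed drops sharply while almost nobody speeds. The speed limit, bin size, and thresholds are assumed values for illustration only.

```python
# Illustrative sketch, not the thesis implementation: bin GPS probe speeds
# along a motorway and flag positions where drivers brake abruptly and the
# share of speeders collapses, the pattern the thesis associates with
# speed cameras. All constants are assumptions.
from collections import defaultdict

SPEED_LIMIT_KMH = 120      # assumed motorway speed limit
BIN_METERS = 100           # the road is split into 100 m segments
DROP_THRESHOLD_KMH = 8     # assumed threshold for a "sudden" drop in mean speed

def camera_candidates(probes):
    """probes: iterable of (position_m, speed_kmh) tuples from many vehicles."""
    speeds = defaultdict(list)
    for pos_m, v in probes:
        speeds[int(pos_m // BIN_METERS)].append(v)

    # per-segment mean speed and fraction of drivers above the limit
    stats = {b: (sum(vs) / len(vs),
                 sum(v > SPEED_LIMIT_KMH for v in vs) / len(vs))
             for b, vs in speeds.items()}

    candidates = []
    for b in sorted(stats):
        if b - 1 in stats:
            prev_mean, _ = stats[b - 1]
            mean_v, speeder_share = stats[b]
            # flag a segment whose mean speed drops sharply and where few drivers speed
            if prev_mean - mean_v > DROP_THRESHOLD_KMH and speeder_share < 0.05:
                candidates.append(b * BIN_METERS)
    return candidates
```

A real detector would combine many days of probe data and finer heuristics; this only shows how the three measures mentioned above (braking, mean speed, percentage of speeders) can be derived from raw GPS points.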
Algorithm. --- Data mining. --- Speed camera. --- GPS signals. --- GPS. --- Java. --- Artificial intelligence. --- TomTom.
Data are an organization's sole non-depletable, non-degrading, durable asset. Engineered right, data's value increases over time because of the added dimensions of time, geography, and precision. To achieve data's full organizational value, there must be a dedicated individual to leverage data as an asset: a Chief Data Officer, or CDO, whose three job pillars are dedication solely to leveraging data assets, freedom from an IT project mindset, and reporting directly to the business. Once these three pillars are set in place, organizations can leverage…
Management --- Business & Economics --- Management Theory --- Information technology --- Data mining. --- Management. --- Algorithmic knowledge discovery --- Factual data analysis --- KDD (Information retrieval) --- Knowledge discovery in data --- Knowledge discovery in databases --- Mining, Data --- Database searching --- Intellectual capital. --- Knowledge management.
Data Mining Applications with R is a great resource for researchers and professionals to understand the wide use of R, a free software environment for statistical computing and graphics, in solving different problems in industry. R is widely used in applying data mining techniques across many different industries, including government, finance, insurance, medicine, scientific research and more. Twenty different real-world case studies illustrate various techniques in rapidly growing areas, including retail, crime and homeland security, the stock market…
Data mining --- R (Computer program language) --- Industrial applications --- GNU-S (Computer program language) --- Algorithmic knowledge discovery --- Factual data analysis --- KDD (Information retrieval) --- Knowledge discovery in data --- Knowledge discovery in databases --- Mining, Data --- Domain-specific programming languages --- Database searching
R and Data Mining introduces researchers, post-graduate students, and analysts to data mining using R, a free software environment for statistical computing and graphics. The book provides practical methods for using R in applications from academia to industry to extract knowledge from vast amounts of data. Readers will find this book a valuable guide to the use of R in tasks such as classification and prediction, clustering, outlier detection, association rules, sequence analysis, text mining, social network analysis, sentiment analysis, and more. Data mining techniques are…
Data mining --- R (Computer program language) --- GNU-S (Computer program language) --- Domain-specific programming languages --- Algorithmic knowledge discovery --- Factual data analysis --- KDD (Information retrieval) --- Knowledge discovery in data --- Knowledge discovery in databases --- Mining, Data --- Database searching
With the explosion of video and image data available on the Internet, desktops and mobile devices, multimedia search has gained immense importance. This is the first reference book on the subject of internet multimedia search and mining and it will be extremely useful for graduates, researchers and working professionals in the field of information technology and multimedia content analysis.
Data mining. --- Internet searching. --- Searching the Internet --- Web searching --- World Wide Web searching --- Electronic information resource searching --- Algorithmic knowledge discovery --- Factual data analysis --- KDD (Information retrieval) --- Knowledge discovery in data --- Knowledge discovery in databases --- Mining, Data --- Database searching
Perform accurate data analysis using the power of KNIME; learn the essentials of KNIME, from importing data to data visualization and reporting; utilize a wide range of data processing solutions; visualize your final data sets using KNIME's powerful data visualization options. In detail: KNIME is an open source data analytics, reporting, and integration platform, which allows you to analyze small or large amounts of data without having to reach out to programming languages like R. "KNIME Essentials" teaches you all you need to know to start processing your first data sets using KNIME. It covers topics like installation, data processing, and data visualization, including the KNIME reporting features. Data processing forms a fundamental part of KNIME, and "KNIME Essentials" ensures that you are fully comfortable with this aspect of KNIME before showing you how to visualize the data and generate reports. "KNIME Essentials" guides you from the installation of KNIME through to the generation of reports based on data. The main parts between these two phases are data processing and visualization. The KNIME variants of data analysis concepts are introduced, and after the configuration and installation description comes the data processing, which offers many options to convert or extend the data. Visualization makes it easier to get an overview of parts of the data, while reporting offers a way to summarize it concisely.
Data mining. --- Open source software. --- Free software (Open source software) --- Open code software --- Opensource software --- Computer software --- Algorithmic knowledge discovery --- Factual data analysis --- KDD (Information retrieval) --- Knowledge discovery in data --- Knowledge discovery in databases --- Mining, Data --- Database searching
Cluster analysis is used in data mining and is a common technique for statistical data analysis in many fields of study, such as the medical & life sciences, behavioral & social sciences, engineering, and computer science. Designed for training industry professionals or for a course on clustering and classification, it can also be used as a companion text for applied statistics. No previous experience in clustering or data mining is assumed. Informal algorithms for clustering data and interpreting results are emphasized. In order to evaluate the results of clustering and to explore data, graphical methods and data structures are used for representing data. Throughout the text, examples and references are provided in order to make the material comprehensible for a diverse audience. A companion disc includes numerous appendices with programs, data, charts, solutions, etc. eBook customers: companion files are available for downloading with order number/proof of purchase by writing to the publisher at info@merclearning.com. Features:
* Places emphasis on illustrating the underlying logic in making decisions during the cluster analysis
* Discusses the related applications of statistics, e.g., Ward’s method (ANOVA), JAN (regression analysis & correlational analysis), cluster validation (hypothesis testing, goodness-of-fit, Monte Carlo simulation, etc.)
* Contains separate chapters on JAN and the clustering of categorical data
* Includes a companion disc with solutions to exercises, programs, data sets, charts, etc.
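For readers who want a concrete starting point, the following is a minimal sketch, not taken from the book, of the kind of workflow it covers: Ward's minimum-variance hierarchical clustering on a small synthetic data set, followed by a silhouette score as a simple validation check. It assumes numpy, scipy, and scikit-learn are installed.

```python
# Minimal clustering-and-validation sketch (synthetic data, assumed libraries).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# two well-separated synthetic groups of 2-D points
data = np.vstack([rng.normal(0, 0.3, (50, 2)),
                  rng.normal(3, 0.3, (50, 2))])

Z = linkage(data, method="ward")                  # Ward's minimum-variance linkage
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the dendrogram into 2 clusters

print("silhouette score:", silhouette_score(data, labels))
```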
Cluster analysis. --- Data mining. --- Algorithmic knowledge discovery --- Factual data analysis --- KDD (Information retrieval) --- Knowledge discovery in data --- Knowledge discovery in databases --- Mining, Data --- Database searching --- Correlation (Statistics) --- Multivariate analysis --- Spatial analysis (Statistics)
With this book on OpenRefine, managing and cleaning your large datasets suddenly got a lot easier! With a cookbook approach and free datasets included, you’ll quickly and painlessly improve your data managing capabilities. Create links between your dataset and others in an instant; effectively transform data with regular expressions and the General Refine Expression Language; spot issues in your dataset and take effective action with just a few clicks. In detail: data is supposed to be the new gold, but how can you unlock the value in your data? Managing large datasets used to be a task for specialists, but you don't have to worry about inconsistencies or errors anymore. OpenRefine lets you clean, link, and publish your dataset in a breeze. Using OpenRefine takes you on a practical tour of all the handy features of this well-known data transformation tool. It is a hands-on recipe book that teaches you data techniques by example. Starting from the basics, it gradually transforms you into an OpenRefine expert. This book will teach you all the necessary skills to handle any large dataset and to turn it into high-quality data for the Web. After you learn how to analyze data and spot issues, we'll see how to solve them to obtain a clean dataset. Messy and inconsistent data is recovered through advanced techniques such as automated clustering. We'll then show how to extract links from keyword and full-text fields using reconciliation and named-entity extraction. Using OpenRefine is more than a manual: it's a guide stuffed with tips and tricks to get the best out of your data.
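As a rough analogy to what OpenRefine automates, and not a use of OpenRefine's own API, the short Python sketch below applies a regular-expression transform and a fingerprint-style key-collision step to group near-duplicate values, which is the idea behind OpenRefine's automated clustering of messy columns.

```python
# Fingerprint-style key-collision clustering of near-duplicate values,
# analogous to OpenRefine's "fingerprint" method; the example values are made up.
import re
import unicodedata
from collections import defaultdict

def fingerprint(value: str) -> str:
    """Strip accents and punctuation, lower-case, and sort unique tokens."""
    ascii_form = unicodedata.normalize("NFKD", value).encode("ascii", "ignore").decode()
    tokens = re.sub(r"[^\w\s]", " ", ascii_form.lower()).split()
    return " ".join(sorted(set(tokens)))

values = ["Université de Gand", "universite de Gand ", "Gand, Université de"]
clusters = defaultdict(list)
for v in values:
    clusters[fingerprint(v)].append(v)

# values sharing a fingerprint are candidates to be merged into one spelling
for key, members in clusters.items():
    print(key, "->", members)
```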
COMPUTERS --- General --- Engineering & Applied Sciences --- Computer Science --- Data mining. --- Electronic data processing. --- ADP (Data processing) --- Automatic data processing --- Data processing --- EDP (Data processing) --- IDP (Data processing) --- Integrated data processing --- Algorithmic knowledge discovery --- Factual data analysis --- KDD (Information retrieval) --- Knowledge discovery in data --- Knowledge discovery in databases --- Mining, Data --- Computers --- Office practice --- Database searching --- Automation
This work presents a data visualization technique that combines graph-based topology representation and dimensionality reduction methods to visualize the intrinsic data structure in a low-dimensional vector space. The application of graphs in clustering and visualization has several advantages. A graph of important edges (where edges characterize relations and weights represent similarities or distances) provides a compact representation of the entire complex data set. This text describes clustering and visualization methods that are able to utilize information hidden in these graphs, based on the synergistic combination of clustering, graph theory, neural networks, data visualization, dimensionality reduction, fuzzy methods, and topology learning. The work contains numerous examples to aid in the understanding and implementation of the proposed algorithms, supported by a MATLAB toolbox available at an associated website.
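To make the approach tangible, here is a small Python sketch, using scikit-learn as a stand-in for the book's MATLAB toolbox: it builds a k-nearest-neighbour graph as a compact representation of the data and then computes a two-dimensional spectral (Laplacian eigenmap) embedding that can be plotted. The synthetic data and the neighbour count are illustrative assumptions.

```python
# Graph-based dimensionality reduction sketch (not the book's toolbox).
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(1)
# two synthetic clusters in 10 dimensions
data = np.vstack([rng.normal(0, 0.5, (40, 10)),
                  rng.normal(4, 0.5, (40, 10))])

# sparse adjacency matrix: edges to the 5 nearest neighbours, weighted by distance
graph = kneighbors_graph(data, n_neighbors=5, mode="distance")
print("edges in the k-NN graph:", graph.nnz)

# 2-D coordinates derived from a graph Laplacian built over the same neighbourhoods
coords = SpectralEmbedding(n_components=2, n_neighbors=5).fit_transform(data)
print("embedding shape:", coords.shape)   # (80, 2), ready for a scatter plot
```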
Engineering & Applied Sciences --- Computer Science --- Data mining. --- Cluster analysis --- Graph algorithms. --- Data processing. --- Algorithmic knowledge discovery --- Factual data analysis --- KDD (Information retrieval) --- Knowledge discovery in data --- Knowledge discovery in databases --- Mining, Data --- Computer science. --- Mathematics. --- Visualization. --- Computer Science. --- Data Mining and Knowledge Discovery. --- Database searching --- Computer algorithms --- Graph theory --- Visualisation --- Imagination --- Visual perception --- Imagery (Psychology) --- Math --- Science
Paola Gloria Ferrario develops and investigates several methods of nonparametric local variance estimation. The first two methods use regression estimation (plug-in), achieving least squares estimates as well as local averaging estimates (of partitioning or kernel type). Furthermore, the author uses a partitioning method for the estimation of the local variance based on first and second nearest neighbors (instead of regression estimation). To address specific problems from application fields, all the results are extended and generalised to the case where only censored observations are available. Further, simulations have been executed comparing the performance of two different estimators (R code available!). As a possible application of the theory, the author proposes a survival analysis of patients who are treated for a specific illness.
Contents: Least Squares Estimation of the Local Variance via Plug-In · Local Averaging Estimation of the Local Variance via Plug-In · Partitioning Estimation of the Local Variance via Nearest Neighbors · Estimation of the Local Variance under Censored Observations.
Target groups: researchers and graduate students in the fields of mathematics and statistics; practitioners in the fields of medicine, reliability, finance, and insurance.
Author: Paola Gloria Ferrario received her doctorate degree (doctor rerum naturalium) from the University of Stuttgart, Germany, in 2012, after having studied Mathematical Engineering at the Polytechnic of Milano, Italy. She taught mathematics to students of economics at the University of Hohenheim and now works as a researcher at the University of Lübeck, Germany.
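As a sketch of the plug-in idea mentioned above, and not the author's estimator or R code, the local variance σ²(x) = E[Y²|X=x] − (E[Y|X=x])² can be estimated by replacing both conditional expectations with k-nearest-neighbour local averages; the toy data and the choice of k below are assumptions for illustration.

```python
# Plug-in local variance estimation via k-nearest-neighbour local averaging
# (illustrative sketch with one-dimensional X and synthetic data).
import numpy as np

def local_variance_knn(x_train, y_train, x_query, k=10):
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    estimates = []
    for x0 in np.atleast_1d(x_query):
        idx = np.argsort(np.abs(x_train - x0))[:k]   # k nearest neighbours of x0
        m1 = y_train[idx].mean()                     # estimate of E[Y | X = x0]
        m2 = (y_train[idx] ** 2).mean()              # estimate of E[Y^2 | X = x0]
        estimates.append(max(m2 - m1 ** 2, 0.0))     # plug-in local variance
    return np.array(estimates)

# toy heteroscedastic data: the conditional variance of Y grows with x
rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 500)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1 + 0.4 * x)
print(local_variance_knn(x, y, [0.1, 0.9], k=30))   # small variance near 0.1, larger near 0.9
```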
Analysis of variance. --- Estimation theory. --- Mathematics. --- Mathematics --- Physical Sciences & Mathematics --- Mathematical Theory --- Mathematics, general. --- Data mining. --- Variational inequalities. --- Algorithmic knowledge discovery --- Factual data analysis --- KDD (Information retrieval) --- Knowledge discovery in data --- Knowledge discovery in databases --- Mining, Data --- Database searching --- Math --- Science --- Censored observations (Statistics)