Listing 1 - 10 of 62
This book helps you learn how to extract, transform, and orchestrate massive amounts of data to develop robust data pipelines. You'll perform complex machine learning tasks using advanced Azure Databricks features, and also explore model tuning, deployment, and control using Databricks functionalities such as AutoML and Delta Lake with ...
A decisive success factor for any supply chain is the adequate design of its value-creation processes. Since these processes are regularly characterized by a high degree of complexity, their design poses a major challenge. This research therefore addresses the question of how the complex value-creation processes of a supply chain can be improved effectively and efficiently. To this end, a systematic approach for the analysis and evaluation of supply chain processes is presented and applied, in a case study, to a real logistics chain in the electronics industry. In the process analysis, a reduction of complexity is achieved through a combination of analytical methods and simulation, so that not only the validity of the results but also practical feasibility is ensured. The evaluation of alternative process designs is multi-criteria and strategy-based, so that both trade-offs between objectives and the strategy pursued by the logistics chain are taken into account. In general, the choice of the market-oriented production type (e.g. Make-to-Order, Make-to-Stock) plays a decisive role in improving supply chain processes. This aspect is therefore examined in particular depth, and the hybrid production type Make-to-Forecast is implemented in a supply chain context in the case study.
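The multi-criteria, strategy-based evaluation described above can be illustrated with a minimal weighted-sum scoring sketch. All criteria, weights, and scores below are hypothetical illustrations, not values from the study:

```python
# Minimal sketch of a multi-criteria evaluation of process design
# alternatives via a weighted sum. Criteria, weights, and scores are
# invented for illustration, not taken from the case study.

# Strategy-based weights (sum to 1): a cost-focused strategy weights
# cost efficiency higher than delivery speed or flexibility.
WEIGHTS = {"cost": 0.5, "speed": 0.3, "flexibility": 0.2}

# Normalized scores (0..1, higher is better) per process alternative.
ALTERNATIVES = {
    "make-to-stock":    {"cost": 0.9, "speed": 0.8, "flexibility": 0.3},
    "make-to-order":    {"cost": 0.5, "speed": 0.4, "flexibility": 0.9},
    "make-to-forecast": {"cost": 0.7, "speed": 0.7, "flexibility": 0.7},
}

def weighted_score(scores, weights):
    """Aggregate per-criterion scores into one value for ranking."""
    return sum(weights[c] * scores[c] for c in weights)

def rank(alternatives, weights):
    """Return alternative names sorted best-first by weighted score."""
    return sorted(alternatives,
                  key=lambda a: weighted_score(alternatives[a], weights),
                  reverse=True)

if __name__ == "__main__":
    for name in rank(ALTERNATIVES, WEIGHTS):
        print(name, round(weighted_score(ALTERNATIVES[name], WEIGHTS), 2))
```

Changing the weights to reflect a different strategy (e.g. a flexibility-focused one) can reorder the alternatives, which is exactly the trade-off the strategy-based evaluation is meant to surface.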
Extract, Transform, and Load data to build a dynamic, operational data warehouse with Oracle Warehouse Builder 11g R2 with this book and eBook
Design, implement, and monitor a successful Q replication and Event Publishing project with IBM InfoSphere Replication Server and Data Event Publisher using this book and eBook
Data warehousing --- Management. --- Information warehousing --- Warehousing, Data --- Database management --- Management information systems --- Multidimensional databases
The Data Vault was invented by Dan Linstedt at the U.S. Department of Defense, and the standard has been successfully applied to data warehousing projects at organizations of all sizes, from small businesses to large corporations. Owing to its simplified design, which is adapted from nature, the Data Vault 2.0 standard helps prevent typical data warehousing failures. "Building a Scalable Data Warehouse" covers everything one needs to know to create a scalable data warehouse end to end, including a presentation of the Data Vault modeling technique, which provides the foundation for a technical data warehouse layer. The book discusses how to build the data warehouse incrementally using the agile Data Vault 2.0 methodology. In addition, readers will learn how to create the input layer (the stage layer) and the presentation layer (data marts) of the Data Vault 2.0 architecture, including implementation best practices. Drawing upon years of practical experience and using numerous examples and an easy-to-understand framework, Dan Linstedt and Michael Olschimke discuss:
- How to load each layer using SQL Server Integration Services (SSIS), including automation of the Data Vault loading processes
- Important data warehouse technologies and practices
- Data Quality Services (DQS) and Master Data Services (MDS) in the context of the Data Vault architecture
The book:
- Provides a complete introduction to data warehousing, applications, and the business context so readers can get up and running fast
- Explains theoretical concepts and provides hands-on instruction on how to build and implement a data warehouse
- Demystifies Data Vault modeling with beginning, intermediate, and advanced techniques
- Discusses the advantages of the Data Vault approach over other techniques, including the latest updates to Data Vault 2.0 and multiple improvements to Data Vault 1.0
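A distinctive feature of Data Vault 2.0 loading is that hub records are identified by a hash of the business key rather than a sequence number, which lets layers be loaded independently and in parallel. A minimal sketch of the idea (the table layout, normalization rule, and choice of MD5 are illustrative assumptions, not code from the book):

```python
import hashlib

def hash_key(*business_key_parts, delimiter=";"):
    """Compute a Data Vault 2.0-style hash key from a business key.

    Parts are trimmed, upper-cased, and joined with a delimiter before
    hashing, so the same business key always yields the same hash key
    regardless of which source system or load run produced it.
    """
    normalized = delimiter.join(p.strip().upper() for p in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# A hub stores one row per distinct business key (illustrative layout).
hub_customer = {}

def load_hub(business_key, record_source, load_date):
    """Insert-only load: add a business key once, never update it."""
    hk = hash_key(business_key)
    hub_customer.setdefault(hk, {
        "customer_key": business_key.strip().upper(),
        "record_source": record_source,
        "load_date": load_date,
    })
    return hk

if __name__ == "__main__":
    k1 = load_hub("cust-1001 ", "CRM", "2024-01-01")
    k2 = load_hub("CUST-1001", "ERP", "2024-02-01")
    print(k1 == k2, len(hub_customer))  # same business key -> one hub row
```

Because the key is derived from the data itself, two source systems delivering the same customer produce the same hub row without any cross-system key lookup.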
Data warehousing. --- Information warehousing --- Warehousing, Data --- Database management --- Management information systems --- Multidimensional databases
In recent years, companies and government agencies have come to realize that the data they use represent a significant corporate resource, whose cost calls for management every bit as rigorous as the management of human resources, money, and capital equipment. With this realization has come recognition of the importance of integrating the data that have traditionally been available only from disparate sources. An important component of this integration is the management of the "metadata" that describe, catalogue, and provide access to the various forms of underlying business data.
Data warehousing --- Metadata --- Data warehousing. --- Metadata. --- Data about data --- Meta-data --- Information organization --- Information warehousing --- Warehousing, Data --- Database management --- Management information systems --- Multidimensional databases
Data warehousing --- Engineering & Applied Sciences --- Computer Science --- Information warehousing --- Warehousing, Data --- Database management --- Management information systems --- Multidimensional databases --- Business policy --- Information systems --- data warehousing --- bedrijfsbeleid --- datawarehousing
Develop modern solutions with Snowflake's unique architecture and integration capabilities; process bulk and real-time data into a data lake; and leverage time travel, cloning, and data-sharing features to optimize data operations.

Key Features
- Build and scale modern data solutions using the all-in-one Snowflake platform
- Perform advanced cloud analytics for implementing big data and data science solutions
- Make quicker and better-informed business decisions by uncovering key insights from your data

Book Description
Snowflake is a unique cloud-based data warehousing platform built from scratch to perform data management on the cloud. This book introduces you to Snowflake's unique architecture, which places it at the forefront of cloud data warehouses. You'll explore the compute model available with Snowflake, and find out how Snowflake allows extensive scaling through virtual warehouses. You will then learn how to configure a virtual warehouse for optimizing cost and performance. Moving on, you'll get to grips with the data ecosystem and discover how Snowflake integrates with other technologies for staging and loading data. As you progress through the chapters, you will leverage Snowflake's capabilities to process a series of SQL statements using tasks to build data pipelines, and find out how you can create modern data solutions and pipelines designed to provide high performance and scalability. You will also get to grips with creating role hierarchies, adding custom roles, and setting default roles for users before covering advanced topics such as data sharing, cloning, and performance optimization. By the end of this Snowflake book, you will be well versed in Snowflake's architecture for building modern analytical solutions and understand best practices for solving commonly faced problems using practical recipes.

What you will learn
- Get to grips with data warehousing techniques aligned with Snowflake's cloud architecture
- Broaden your skills as a data warehouse designer to cover the Snowflake ecosystem
- Transfer skills from on-premise data warehousing to the Snowflake cloud analytics platform
- Optimize performance and costs associated with a Snowflake solution
- Stage data on object stores and load it into Snowflake
- Secure data and share it efficiently for access
- Manage transactions and extend Snowflake using stored procedures
- Extend cloud data applications using the Spark Connector

Who this book is for
This book is for data warehouse developers, data analysts, database administrators, and anyone involved in designing, implementing, and optimizing a Snowflake data warehouse. Knowledge of data warehousing and database and cloud concepts will be useful. Basic familiarity with Snowflake is beneficial, but not necessary.
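Configuring a virtual warehouse for cost comes down to the credits it burns while running: Snowflake's credit rate roughly doubles with each warehouse size step (XS = 1 credit/hour). A back-of-the-envelope sketch; the dollar price per credit is a hypothetical placeholder, since actual pricing varies by edition, cloud provider, and region:

```python
# Back-of-the-envelope Snowflake warehouse cost estimator.
# Credit rates double with each warehouse size (XS = 1 credit/hour);
# the price per credit is a hypothetical placeholder -- real pricing
# depends on edition, cloud provider, and region.

CREDITS_PER_HOUR = {"XS": 1, "S": 2, "M": 4, "L": 8, "XL": 16, "2XL": 32}
PRICE_PER_CREDIT = 3.00  # assumed, illustrative only

def estimate_cost(size, minutes_running, clusters=1):
    """Estimate the dollar cost of a (multi-cluster) warehouse run."""
    credits = CREDITS_PER_HOUR[size] * (minutes_running / 60) * clusters
    return credits * PRICE_PER_CREDIT

if __name__ == "__main__":
    # An M warehouse running for 90 minutes: 4 * 1.5 = 6 credits.
    print(estimate_cost("M", 90))   # 18.0 dollars at the assumed rate
    # The same workload on an XL for 30 minutes: 16 * 0.5 = 8 credits.
    print(estimate_cost("XL", 30))  # 24.0 dollars
```

The comparison above shows the trade-off the book explores: a larger warehouse finishes sooner but can still cost more per workload, so sizing is a tuning decision rather than a default.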
Cloud computing. --- Database Design --- Data Warehousing --- Computers
Explore the latest Azure ETL techniques both on-premises and in the cloud using Azure services such as SQL Server Integration Services (SSIS), Azure Data Factory, and Azure Databricks.

Key Features
- Understand the key components of an ETL solution using Azure Integration Services
- Discover the common and not-so-common challenges faced while creating modern and scalable ETL solutions
- Program and extend your packages to develop efficient data integration and data transformation solutions

Book Description
ETL is one of the most common and tedious procedures for moving and processing data from one database to another. With the help of this book, you will be able to speed up the process by designing effective ETL solutions using the Azure services available for handling and transforming any data to suit your requirements. With this cookbook, you'll become well versed in all the features of SQL Server Integration Services (SSIS) to perform data migration and ETL tasks that integrate with Azure. You'll learn how to transform data in Azure and understand how legacy systems perform ETL on-premises using SSIS. Later chapters will get you up to speed with connecting to and retrieving data from SQL Server 2019 Big Data Clusters, and even show you how to extend and customize the SSIS toolbox using custom-developed tasks and transforms. This ETL book also contains practical recipes for moving and transforming data with Azure services, such as Data Factory and Azure Databricks, and lets you explore various options for migrating SSIS packages to Azure. Toward the end, you'll find out how to profile data in the cloud and automate service creation with Business Intelligence Markup Language (BIML). By the end of this book, you'll have developed the skills you need to create and automate ETL solutions on-premises as well as in Azure.

What you will learn
- Explore ETL and how it is different from ELT
- Move and transform various data sources with Azure ETL and ELT services
- Use SSIS 2019 with Azure HDInsight clusters
- Discover how to query SQL Server 2019 Big Data Clusters hosted in Azure
- Migrate SSIS solutions to Azure and solve key challenges associated with it
- Understand why data profiling is crucial and how to implement it in Azure Databricks
- Get to grips with BIML and learn how it applies to SSIS and Azure Data Factory solutions

Who this book is for
This book is for data warehouse architects, ETL developers, or anyone who wants to build scalable ETL applications in Azure...
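The ETL-versus-ELT distinction mentioned above comes down to where the transform runs: ETL shapes and filters data before it reaches the target, while ELT loads raw data first and transforms it inside the target engine. A minimal in-memory sketch of the ETL ordering (the row layout and cleaning rule are invented for illustration; in the book's setting these steps map onto SSIS or Data Factory activities):

```python
# Minimal in-memory ETL sketch: extract -> transform -> load.
# Source rows and the cleaning rule are invented for illustration.

def extract():
    """Pull raw rows from a source (here: a hard-coded list)."""
    return [
        {"id": 1, "name": " alice ", "amount": "10.50"},
        {"id": 2, "name": "BOB",     "amount": "3.25"},
        {"id": 3, "name": "",        "amount": "oops"},  # bad row
    ]

def transform(rows):
    """Clean and type the rows before they reach the target."""
    clean = []
    for row in rows:
        name = row["name"].strip().title()
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # in ETL, bad rows are rejected before loading
        if name:
            clean.append({"id": row["id"], "name": name, "amount": amount})
    return clean

def load(rows, target):
    """Append the transformed rows to the target 'table'."""
    target.extend(rows)
    return len(rows)

if __name__ == "__main__":
    warehouse_table = []
    loaded = load(transform(extract()), warehouse_table)
    print(loaded, warehouse_table)  # only the 2 clean rows are loaded
```

In an ELT variant, `load` would run before `transform`, and the cleaning logic would execute as SQL inside the warehouse rather than in the pipeline process.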