The Complete Big Data eBook & Video Course Bundle for $29 August 28, 2020 at 02:00AM



Checkout Now

Expires August 29, 2120 06:59 PST
Buy now and get 95% off

Practical Big Data Analytics [eBook]


KEY FEATURES

Big Data analytics refers to the strategies organizations use to collect, organize, and analyze large amounts of data to uncover valuable business insights that traditional systems cannot surface. This book will help you do exactly that. With the help of this guide, you will be able to bridge the gap between the theoretical world of technology and the practical realities of building corporate Big Data and data science platforms. You will get hands-on exposure to Hadoop and Spark, build machine learning dashboards using R and R Shiny, create web-based apps using NoSQL databases such as MongoDB, and even learn how to write R code for neural networks.

  • Lifetime access to eBook w/ 412 pages
  • Boost your Big Data storing, processing, & analyzing skills to help you make informed business decisions
  • Work with tools such as Apache Hadoop, R, Python, & Spark, alongside NoSQL platforms, to perform massive online analyses
  • Get expert tips on statistical inference, machine learning, mathematical modeling, & data visualization for Big Data

PRODUCT SPECS

Important Details

  • Length of time users can access this course: lifetime
  • Access options: desktop & mobile
  • Redemption deadline: redeem your code within 30 days of purchase
  • Updates included
  • Experience level required: all levels

Requirements

  • Any device with basic specifications

THE EXPERT

Packt Publishing
Founded in 2004 in Birmingham, UK, Packt's mission is to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. Working towards that vision, they have published over 6,500 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done - whether that's specific learning on an emerging technology or optimizing key skills in more established tools.

Nataraj Dasgupta
Nataraj Dasgupta is the vice president of advanced analytics at RxDataScience Inc. Nataraj has been in the IT industry for more than 19 years and has worked in the technical and analytics divisions of Philip Morris, IBM, UBS Investment Bank, and Purdue Pharma. At Purdue Pharma, Nataraj led the data science division, where he developed the company's award-winning big data and machine learning platform. Prior to Purdue, at UBS, he held the role of Associate Director, working with high-frequency and algorithmic trading technologies in the foreign exchange trading division of the bank.

Big Data Analytics with Hadoop 3 [eBook]


KEY FEATURES

Apache Hadoop is the most popular platform for big data processing and can be combined with a host of other big data tools to build powerful analytics solutions. Big Data Analytics with Hadoop 3 shows you how to do just that, by providing insights into the software as well as its benefits with the help of practical examples. Once you have taken a tour of Hadoop 3’s latest features, you will get an overview of HDFS, MapReduce, and YARN, and how they enable faster, more efficient big data processing. By the end of this book, you will be well-versed in the analytical capabilities of the Hadoop ecosystem and able to build powerful solutions that perform big data analytics and deliver insights effortlessly.

  • Lifetime access to eBook w/ 482 pages
  • Explore the new features of Hadoop 3 along w/ HDFS, YARN, & MapReduce
  • Get well-versed w/ the analytical capabilities of the Hadoop ecosystem using practical examples
  • Integrate Hadoop w/ R & Python for more efficient big data processing
  • Learn to use Hadoop w/ Apache Spark & Apache Flink for real-time data analytics
  • Set up a Hadoop cluster on AWS cloud
  • Perform big data analytics on AWS using Elastic MapReduce
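The HDFS, MapReduce, and YARN features listed above all serve one core processing idea: map, shuffle, reduce. As a rough illustration of the paradigm (a single-process Python sketch, not Hadoop itself), here is word count, the canonical MapReduce example:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every input line
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data needs big tools", "data tools scale"]
counts = reduce_phase(shuffle(map_phase(lines)))
```

On a real cluster, the map and reduce phases run in parallel across many nodes and the shuffle moves data over the network; the logic per phase, however, has exactly this shape.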

PRODUCT SPECS

Important Details

  • Length of time users can access this course: lifetime
  • Access options: desktop & mobile
  • Redemption deadline: redeem your code within 30 days of purchase
  • Updates included
  • Experience level required: all levels

Requirements

  • Any device with basic specifications

THE EXPERT

Packt Publishing

Sridhar Alla
Sridhar Alla is a big data expert helping companies solve complex problems in distributed computing, large-scale data science, and analytics. He holds a bachelor's in computer science from JNTU, India. He loves writing code in Python, Scala, and Java. He also has extensive hands-on knowledge of several Hadoop-based technologies, TensorFlow, NoSQL, IoT, and deep learning.

Big Data Analytics with SAS [eBook]


KEY FEATURES

SAS has been recognized by Money Magazine and Payscale as one of the top business skills to learn in order to advance one’s career. Through innovative data management, analytics, and business intelligence software and services, SAS helps customers solve their business problems by allowing them to make better decisions faster. This book introduces you to SAS and shows how you can use it to perform efficient analysis of data of any size, including Big Data. By the end of this book, you will clearly understand how to analyze Big Data efficiently using SAS.

  • Lifetime access to eBook w/ 266 pages
  • Configure a free version of SAS to do hands-on exercises dealing w/ data management, analysis, & reporting
  • Understand the basic concepts of the SAS language which consists of the data step & procedures (or PROCs) for analysis
  • Make use of the web browser-based SAS Studio & Jupyter Notebook interfaces for coding in the SAS, DS2, and FedSQL programming languages
  • Understand how the DS2 programming language plays an important role in Big Data preparation & analysis using SAS
  • Integrate & work efficiently w/ Big Data platforms like Hadoop, SAP HANA, and Cloud Foundry-based systems

PRODUCT SPECS

Important Details

  • Length of time users can access this course: lifetime
  • Access options: desktop & mobile
  • Redemption deadline: redeem your code within 30 days of purchase
  • Updates included
  • Experience level required: all levels

Requirements

  • Any device with basic specifications

THE EXPERT

Packt Publishing

David Pope
David Pope has worked for SAS for over 26 years in a variety of departments, including research and development (R&D), information technology (IT), SAS Solutions on Demand (SSOD), and sales and marketing. He graduated from North Carolina State University with a bachelor of science in industrial engineering and a certificate in computer programming. He started his career with SAS testing and writing code for the SAS system in R&D, using C, Java, and, of course, the SAS programming languages.

Hands-On Data Warehousing with Azure Data Factory [eBook]


KEY FEATURES

ETL is one of the essential techniques in data processing. Given that data is everywhere, ETL will always remain a vital process for handling data from different sources. This course starts with the basic concepts of data warehousing and the ETL process. You will learn how Azure Data Factory and SSIS can be used to build the key components of an ETL solution. By the end of this book, you will not only know how to build your own ETL solutions but also how to address the key challenges faced while building them.

  • Lifetime access to eBook w/ 284 pages
  • Understand the key components of an ETL solution using Azure Data Factory & Integration Services
  • Design the architecture of a modern ETL hybrid solution
  • Implement ETL solutions for both on-premises & Azure data
  • Improve the performance & scalability of your ETL solution
  • Gain a thorough knowledge of new capabilities & features added to Azure Data Factory and Integration Services
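The extract-transform-load flow the bullets describe can be sketched in miniature. This is a toy, in-memory Python illustration of the three stages, not Azure Data Factory or SSIS code, and the field names are invented for the example:

```python
def extract(source_rows):
    # Extract: pull raw records from a source (here, an in-memory list)
    return list(source_rows)

def transform(rows):
    # Transform: normalize names and drop records missing a required field
    cleaned = []
    for row in rows:
        if row.get("amount") is None:
            continue
        cleaned.append({"customer": row["customer"].strip().title(),
                        "amount": float(row["amount"])})
    return cleaned

def load(rows, warehouse):
    # Load: append the conformed rows into the target "warehouse" table
    warehouse.extend(rows)
    return len(rows)

warehouse = []
raw = [{"customer": " alice ", "amount": "10.5"},
       {"customer": "bob", "amount": None}]
loaded = load(transform(extract(raw)), warehouse)
```

Real ETL tools add scheduling, incremental loads, and error handling on top, but every pipeline decomposes into these three stages.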

PRODUCT SPECS

Important Details

  • Length of time users can access this course: lifetime
  • Access options: desktop & mobile
  • Redemption deadline: redeem your code within 30 days of purchase
  • Updates included
  • Experience level required: all levels

Requirements

  • Any device with basic specifications

THE EXPERT

Packt Publishing

Christian Cote is an IT professional with more than 15 years of experience working on data warehouse, Big Data, and business intelligence projects. Christian has developed expertise in data warehousing and data lakes over the years and has designed many ETL/BI processes using a range of tools on multiple platforms.

Michelle Gutzait has been in IT for 30 years as a developer, business analyst, and database consultant. She has worked with MS SQL Server for 20 years. Her skills include infrastructure and database design, performance tuning, security, HADR solutions, consolidation, very large databases, replication, T-SQL coding and optimization, SSIS, SSRS, SSAS, admin and infrastructure tool development, cloud services, and training developers and DBAs.

Giuseppe Ciaburro holds a PhD in environmental technical physics, along with two master's degrees. His research was focused on machine learning applications in the study of urban sound environments. He works at the Built Environment Control Laboratory at the Università degli Studi della Campania Luigi Vanvitelli, Italy.

Real Time Streaming Using Apache Spark Streaming [Video]


KEY FEATURES

Spark is the technology that allows us to perform big data processing in the MapReduce paradigm very rapidly, by performing the processing in memory without the need for extensive I/O operations. This course promotes a practical approach to dealing with large amounts of online, unbounded data and drawing conclusions from it. You will implement streaming logic to handle huge, infinite streams of data.

  • Access 3 lectures & 1 hour of content 24/7
  • Implement stream processing using Apache Spark Streaming
  • Consume events from a source (for instance, Kafka), apply logic to them, & send them to a data sink
  • Understand how to deduplicate events when you have a system that ensures at-least-once delivery
  • Learn to tackle common stream processing problems
  • Create a job to analyze data in real time using the Apache Spark Streaming API
  • Master event time & processing time
  • Compare single-event processing & the micro-batch approach to processing events
  • Learn to sort infinite event streams
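Two of the ideas above, micro-batching and deduplication under at-least-once delivery, can be sketched without a cluster. This pure-Python illustration shows the shape of the logic only; it is not the Spark Streaming API itself:

```python
from itertools import islice

def micro_batches(events, batch_size):
    # Group a (potentially infinite) event iterator into fixed-size
    # micro-batches, the processing model Spark Streaming popularized
    it = iter(events)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

def deduplicate(batches, seen=None):
    # At-least-once delivery can replay events; keep a set of processed
    # event IDs and drop any event we have already handled
    seen = set() if seen is None else seen
    for batch in batches:
        for event_id, payload in batch:
            if event_id in seen:
                continue
            seen.add(event_id)
            yield event_id, payload

events = [(1, "a"), (2, "b"), (2, "b"), (3, "c")]  # event 2 was redelivered
out = list(deduplicate(micro_batches(events, batch_size=2)))
```

In production the "seen" set would live in a fault-tolerant state store, since an in-memory set is lost on restart.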

PRODUCT SPECS

Important Details

  • Length of time users can access this course: lifetime
  • Access options: desktop & mobile
  • Redemption deadline: redeem your code within 30 days of purchase
  • Updates included
  • Experience level required: all levels

Requirements

  • Any device with basic specifications

THE EXPERT

Packt Publishing

Tomasz Lelek
Tomasz is a software engineer, programming mostly in Java and Scala. He has been working with the Spark and ML APIs for the past 6 years, with production experience in processing petabytes of data. He is passionate about nearly everything associated with software development and believes that we should always try to consider different solutions and approaches before attempting to solve a problem. Recently, he has also spoken at conferences in Poland, including Confitura and JDD (Java Developers Day), and at the Krakow Scala User Group. He has also conducted a live coding session at the Geecon Conference.

Scala & Spark for Big Data Analytics [eBook]


KEY FEATURES

Scala has seen wide adoption over the past few years, especially in the field of data science and analytics. Spark, built on Scala, has gained a lot of recognition and is widely used in production. The first part of this course introduces you to Scala, helping you understand the object-oriented and functional programming concepts needed for Spark application development. It then moves on to Spark to cover the basic abstractions using RDD and DataFrame. You will also learn how to develop Spark applications using the SparkR and PySpark APIs, interactive data analytics using Zeppelin, and in-memory data processing with Alluxio.

  • Lifetime access to eBook w/ 898 pages
  • Understand object-oriented & functional programming concepts of Scala
  • Work with RDD & DataFrame to learn Spark’s core abstractions
  • Analyze structured & unstructured data using SparkSQL and GraphX
  • Develop scalable & fault-tolerant streaming applications using Spark Structured Streaming
  • Learn machine-learning best practices for classification, regression, dimensionality reduction, and recommendation system to build predictive models with widely used algorithms in Spark MLlib & ML
  • Build clustering models to cluster a vast amount of data
  • Understand tuning, debugging, & monitoring Spark applications
  • Deploy Spark applications on real clusters in Standalone, Mesos, & YARN
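The RDD abstraction mentioned above boils down to a simple contract: transformations are lazy, and only an action forces evaluation. As a loose, single-machine sketch of that contract (pure Python, not Spark's actual API; the class name is invented for illustration):

```python
class MiniRDD:
    # A toy, single-machine stand-in for Spark's RDD abstraction:
    # transformations build a lazy pipeline, actions force evaluation
    def __init__(self, data):
        self._data = data  # an iterable, possibly a lazy generator

    def map(self, fn):
        # Transformation: nothing is computed yet, just a new generator
        return MiniRDD(fn(x) for x in self._data)

    def filter(self, pred):
        # Transformation: also lazy
        return MiniRDD(x for x in self._data if pred(x))

    def collect(self):
        # Action: materialize the whole pipeline at once
        return list(self._data)

result = (MiniRDD(range(10))
          .map(lambda x: x * x)
          .filter(lambda x: x % 2 == 0)
          .collect())
```

Laziness is what lets the real Spark engine plan, partition, and distribute the whole chain before any data moves.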

PRODUCT SPECS

Important Details

  • Length of time users can access this course: lifetime
  • Access options: desktop & mobile
  • Redemption deadline: redeem your code within 30 days of purchase
  • Updates included
  • Experience level required: all levels

Requirements

  • Any device with basic specifications

THE EXPERT

Packt Publishing

Md. Rezaul Karim is a researcher, author, and data science enthusiast with a strong computer science background, coupled with 10 years of research and development experience in machine learning, deep learning, and data mining, applied to solving emerging bioinformatics research problems and making the solutions explainable.

Sridhar Alla is a big data expert helping companies solve complex problems in distributed computing, large-scale data science, and analytics. He holds a bachelor's in computer science from JNTU, India. He loves writing code in Python, Scala, and Java. He also has extensive hands-on knowledge of several Hadoop-based technologies, TensorFlow, NoSQL, IoT, and deep learning.

Apache Kafka Series: Kafka Connect Hands-On Learning [Video]


KEY FEATURES

Kafka Connect is a tool for scalable and reliable streaming of data between Apache Kafka and other data systems. It provides a common framework for Apache Kafka producers and consumers. In this course, you are going to learn Kafka connector deployment, configuration, and management with hands-on exercises. You will also see how the distributed and standalone modes let you scale up to a large, centrally managed service supporting an entire organization, or scale down to development, testing, and small production deployments. The REST interface is used to submit and manage connectors on your Kafka Connect cluster via easy-to-use REST APIs.

  • Access 8 lectures & 4.23 hours of content 24/7
  • Configure & run Apache Kafka source and sink connectors
  • Learn concepts behind Kafka Connect & the Kafka Connect architecture
  • Launch a Kafka Connect cluster using Docker Compose
  • Deploy Kafka connectors in standalone & distributed modes
  • Write your own Kafka connector
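The REST workflow described above amounts to POSTing a JSON connector definition to the Connect cluster. Below is a minimal sketch of such a payload; the connector name, file path, and topic are hypothetical examples, and FileStreamSource is the simple demo connector that ships with Kafka:

```python
import json

# Hypothetical connector name, topic, and file path, for illustration only
connector = {
    "name": "demo-file-source",
    "config": {
        "connector.class": "FileStreamSource",
        "tasks.max": "1",
        "file": "/tmp/demo.txt",
        "topic": "demo-topic",
    },
}

# Kafka Connect's REST API accepts this JSON via
#   POST http://<connect-host>:8083/connectors
# (no request is sent here; we only build the payload)
payload = json.dumps(connector)
```

The same REST interface also lists, pauses, restarts, and deletes connectors, which is what makes centrally managed deployments practical.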

PRODUCT SPECS

Important Details

  • Length of time users can access this course: lifetime
  • Access options: desktop & mobile
  • Redemption deadline: redeem your code within 30 days of purchase
  • Updates included
  • Experience level required: all levels

Requirements

  • Any device with basic specifications

THE EXPERT

Packt Publishing

Stephane Maarek
Stephane Maarek is a solutions architect, consultant, and software developer who has a particular interest in all things related to big data and analytics. He is also a best-selling instructor for his courses on Apache Kafka, Apache NiFi, and AWS Lambda. He loves Apache Kafka: he regularly contributes to the Apache Kafka project and wrote a guest blog post featured on the website of Confluent, the company behind Apache Kafka. He is also an AWS Certified Solutions Architect and has many years of experience with technologies such as Apache Kafka, Apache NiFi, Apache Spark, Hadoop, PostgreSQL, Tableau, Spotfire, Docker, and Ansible, among many others. His favorite programming languages are Scala and Python, and he plans on learning Go soon. In his spare time, he enjoys cooking, practicing yoga, surfing, watching TV shows, and traveling to awesome destinations!

Apache Kafka Series: Kafka Streams for Data Processing [Video]


KEY FEATURES

The new volume in the Apache Kafka Series! Learn Kafka Streams, the data-processing library for Apache Kafka. Join hundreds of knowledge-savvy students in learning one of the most promising data-processing libraries for Apache Kafka. This course is based on Java 8 and includes one example in Scala. Kafka Streams is Java-based and therefore not suited for use from any other programming language. This course is the first and only available Kafka Streams course on the web. Get it now to become a Kafka expert!

  • Access 10 lectures & 4.77 hours of content 24/7
  • Write four Kafka Streams applications in Java 8
  • Configure Kafka Streams to use exactly-once semantics
  • Scale Kafka Streams applications
  • Program with the high-level DSL of Kafka Streams
  • Build and package your application
  • Write tests for your Kafka Streams Topology & so much more

PRODUCT SPECS

Important Details

  • Length of time users can access this course: lifetime
  • Access options: desktop & mobile
  • Redemption deadline: redeem your code within 30 days of purchase
  • Updates included
  • Experience level required: all levels

Requirements

  • Any device with basic specifications

THE EXPERT

Packt Publishing

Stephane Maarek
Stephane Maarek is a solutions architect, consultant, and software developer who has a particular interest in all things related to big data and analytics. He is also a best-selling instructor for his courses on Apache Kafka, Apache NiFi, and AWS Lambda. He loves Apache Kafka: he regularly contributes to the Apache Kafka project and wrote a guest blog post featured on the website of Confluent, the company behind Apache Kafka. He is also an AWS Certified Solutions Architect and has many years of experience with technologies such as Apache Kafka, Apache NiFi, Apache Spark, Hadoop, PostgreSQL, Tableau, Spotfire, Docker, and Ansible, among many others. His favorite programming languages are Scala and Python, and he plans on learning Go soon. In his spare time, he enjoys cooking, practicing yoga, surfing, watching TV shows, and traveling to awesome destinations!

Data Stream Development with Apache Spark, Kafka, & Spring Boot [Video]


KEY FEATURES

Today, organizations have a difficult time working with huge volumes of data. In addition, data processing and analysis need to happen in real time to gain insights. This is where data streaming comes in. This course starts by explaining the blueprint architecture for developing a completely functional data streaming pipeline and installing the technologies used. With the help of live coding sessions, you will get hands-on experience architecting every tier of the pipeline. You will also handle specific issues encountered when working with streaming data. You will input a live data stream of Meetup RSVPs that will be analyzed and displayed via Google Maps.

  • Access 5 lectures & 7.85 hours of content 24/7
  • Attain a solid foundation in the most powerful & versatile technologies involved in data streaming
  • Form a robust & clean architecture for a data streaming pipeline
  • Implement the correct tools to bring your data streaming architecture to life
  • Isolate the most problematic tradeoff for each tier involved in a data streaming pipeline
  • Query, analyze, & apply machine learning algorithms to collected data
  • Display analyzed pipeline data via Google Maps on your web browser
  • Discover & resolve difficulties in scaling and securing data streaming applications

PRODUCT SPECS

Important Details

  • Length of time users can access this course: lifetime
  • Access options: desktop & mobile
  • Redemption deadline: redeem your code within 30 days of purchase
  • Updates included
  • Experience level required: all levels

Requirements

  • Any device with basic specifications

THE EXPERT

Packt Publishing

Anghel Leonard
Anghel Leonard is currently a Java chief architect. He is a member of the Java EE Guardians with 20+ years’ experience. He has spent most of his career architecting distributed systems. He is also the author of several books, a speaker, and a big fan of working with data.

Master Big Data Ingestion & Analytics with Flume, Sqoop, Hive and Spark [Video]


KEY FEATURES

In this course, you will start by learning about the Hadoop Distributed File System (HDFS) and the most common Hadoop commands required to work with HDFS. Next, you’ll be introduced to Sqoop Import, which will help you gain insights into the lifecycle of the Sqoop command and how to use the import command to migrate data from MySQL to HDFS, and from MySQL to Hive. In addition to this, you will get up to speed with Sqoop Export for migrating data effectively, along with using Apache Flume to ingest data. As you progress, you will delve into Apache Hive, external and managed tables, working with different files, and Parquet and Avro. Toward the concluding section, you will focus on Spark DataFrames and Spark SQL.

  • Access 9 lectures & 5.63 hours of content 24/7
  • Explore the Hadoop Distributed File System (HDFS) & commands
  • Get to grips with the lifecycle of the Sqoop command
  • Use the Sqoop Import command to migrate data from MySQL to HDFS & Hive
  • Understand split-by & boundary queries
  • Use the incremental mode to migrate data from MySQL to HDFS
  • Employ Sqoop Export to migrate data from HDFS to MySQL
  • Discover Spark DataFrames & gain insights into working with different file formats and compression
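The split-by and boundary-query bullets above describe how Sqoop parallelizes an import: a boundary query fetches the MIN and MAX of the --split-by column, and that range is carved into one slice per mapper. Here is a simplified Python sketch of that slicing logic; Sqoop's real splitters also handle non-integer types, nulls, and skew:

```python
def split_ranges(lo, hi, num_mappers):
    # lo/hi come from the boundary query, e.g.
    #   SELECT MIN(id), MAX(id) FROM orders
    # Divide [lo, hi] into num_mappers contiguous, non-overlapping slices.
    size = (hi - lo + 1) / num_mappers
    splits = []
    start = lo
    for i in range(num_mappers):
        if i < num_mappers - 1:
            end = lo + round(size * (i + 1)) - 1
        else:
            end = hi  # last mapper absorbs any rounding remainder
        splits.append((start, end))
        start = end + 1
    return splits

# e.g. primary keys 1..100 imported with 4 mappers
ranges = split_ranges(1, 100, 4)
```

Each mapper then runs its own WHERE clause over one slice, which is why a skewed split-by column leads to unbalanced mappers.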

PRODUCT SPECS

Important Details

  • Length of time users can access this course: lifetime
  • Access options: desktop & mobile
  • Redemption deadline: redeem your code within 30 days of purchase
  • Updates included
  • Experience level required: all levels

Requirements

  • Any device with basic specifications

THE EXPERT

Packt Publishing

Navdeep Kaur
Technical trainer Navdeep Kaur is a big data professional with 11 years of industry experience across different technologies and domains. She has a keen interest in providing training in new technologies. She holds the CCA175 Hadoop and Spark Developer certification and the AWS Solutions Architect certification. She loves guiding people and helping them achieve new goals.




Checkout Now
