Data Streaming and Real Time Data Processing Training Course
Course Overview
This course offers a practical, structured introduction to constructing real-time data streaming systems. It explores core concepts, architectural patterns, and industry-standard tools for processing continuous data at scale. Participants will gain the skills to design, implement, and optimize streaming pipelines using modern frameworks. The curriculum advances from foundational principles to hands-on applications, empowering learners to confidently develop production-grade real-time solutions.
Training Format
• Instructor-led sessions with guided explanations
• Concept walkthroughs supported by real-world examples
• Hands-on demonstrations and coding exercises
• Progressive labs aligned with daily topics
• Interactive discussions and Q&A sessions
Course Objectives
• Grasp the concepts and system architecture of real-time data streaming
• Distinguish between batch and streaming data processing models
• Design scalable and fault-tolerant streaming pipelines
• Utilize distributed streaming tools and frameworks
• Apply event time processing, windowing, and stateful operations
• Build and optimize real-time data solutions tailored to business needs
This course is available as onsite live training in Czech Republic or online live training.
Course Outline
Day 1
• Introduction to data streaming concepts
• Fundamentals of batch vs. real-time processing
• Basics of event-driven architecture
• Common industry use cases
• Overview of the streaming ecosystem
Day 2
• Design patterns for streaming architecture
• Fundamentals of distributed messaging systems
• Understanding producers and consumers (see the sketch after this list)
• Topics, partitions, and data flow
• Data ingestion strategies
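The outline above does not prescribe a specific broker, but these concepts map directly onto Apache Kafka, the most common choice. A minimal producer/consumer sketch using the kafka-python client, assuming a broker at localhost:9092 and a hypothetical "events" topic:

```python
# Minimal producer/consumer sketch with kafka-python (pip install kafka-python).
# Broker address, topic name, and group id are illustrative assumptions.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
# Messages with the same key land on the same partition, preserving per-key order.
producer.send("events", key=b"sensor-42", value=b'{"temp": 21.5}')
producer.flush()

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="demo-group",          # consumers in one group share partitions
    auto_offset_reset="earliest",   # start from the beginning if no offset is stored
)
for record in consumer:
    print(record.partition, record.offset, record.value)
```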
Day 3
• Stream processing concepts and frameworks
• Event time versus processing time
• Windowing techniques and their use cases (illustrated after this list)
• Stateful stream processing
• Basics of fault tolerance and checkpointing
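These ideas are implemented by frameworks such as Apache Spark, Apache Flink, and Kafka Streams. As one illustration, event-time windowing with a watermark in PySpark Structured Streaming might look like the following sketch; the built-in rate test source and the interval sizes are assumptions:

```python
# Event-time windowed count with a watermark in PySpark Structured Streaming.
# The rate source is a built-in test source producing (timestamp, value) rows.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("windowing-demo").getOrCreate()

events = spark.readStream.format("rate").load()

counts = (
    events
    .withWatermark("timestamp", "10 minutes")        # tolerate 10 min of lateness
    .groupBy(window(col("timestamp"), "5 minutes"))  # 5-minute tumbling windows
    .count()
)

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```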
Day 4
• Data transformation within streaming pipelines
• ETL and ELT processes in real-time systems
• Schema management and evolution
• Stream joins and data enrichment (see the sketch after this list)
• Introduction to cloud-based streaming services
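Data enrichment is often expressed as a join between a live stream and a static reference table. A minimal PySpark sketch, with file paths, schema, and join key invented for illustration:

```python
# Enriching a stream with a static lookup table (stream-static join) in PySpark.
# File paths, schemas, and the join key are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("enrichment-demo").getOrCreate()

# Static dimension data, e.g. customer profiles, read once at startup.
customers = spark.read.json("/data/customers/")  # columns: customer_id, tier

# Continuous stream of transactions arriving as JSON files.
transactions = (
    spark.readStream
    .schema("customer_id STRING, amount DOUBLE, ts TIMESTAMP")
    .json("/data/incoming-transactions/")
)

# Each transaction is enriched with the customer's tier as it arrives.
enriched = transactions.join(customers, on="customer_id", how="left")

enriched.writeStream.format("console").start().awaitTermination()
```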
Day 5
• Monitoring and observability in streaming systems
• Fundamentals of security and access control
• Performance tuning and optimization
• End-to-end pipeline design review
• Real-world applications such as fraud detection and IoT processing (sketched below)
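To give a taste of the fraud-detection use case, a rule-based filter over a transaction stream can be sketched in a few lines of PySpark; the threshold, schema, and paths here are invented, and real systems combine many more signals:

```python
# A toy fraud-detection rule in PySpark Structured Streaming: flag any single
# transaction above a fixed threshold. Threshold, schema, and paths are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("fraud-demo").getOrCreate()

txns = (
    spark.readStream
    .schema("card_id STRING, amount DOUBLE, ts TIMESTAMP")
    .json("/data/txns/")
)

# Flag transactions above 10,000 for review.
suspicious = txns.filter(col("amount") > 10_000)

suspicious.writeStream.format("console").start().awaitTermination()
```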
Open Training Courses require 5+ participants.
Testimonials (1)
Hands-on exercises. The class should have been 5 days, but the 3 days helped to clear up a lot of questions that I had from working with NiFi already.
James - BHG Financial
Course - Apache NiFi for Administrators
Related Courses
Administrator Training for Apache Hadoop
35 Hours
Audience:
This course is designed for IT professionals seeking solutions to store and process large-scale datasets within a distributed system environment.
Goal:
To develop in-depth expertise in administering Hadoop clusters.
Big Data Analytics with Google Colab and Apache Spark
14 Hours
This instructor-led live training in Czech Republic (online or onsite) targets intermediate-level data scientists and engineers who wish to employ Google Colab and Apache Spark for big data processing and analytics.
By the end of this training, participants will be able to:
- Set up a big data environment using Google Colab and Spark (see the sketch after this list).
- Process and analyze large datasets efficiently with Apache Spark.
- Visualize big data in a collaborative environment.
- Integrate Apache Spark with cloud-based tools.
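As a preview of the first objective, installing PySpark in a Colab notebook and starting a local session can be as short as the following sketch (the app name is arbitrary, and the `!pip` line is Colab cell syntax):

```python
# In a Google Colab cell, install PySpark and start a local Spark session.
# The shell-escape line is Colab/Jupyter syntax; run it in its own cell if preferred.
!pip install pyspark

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local[*]")            # use all cores of the Colab VM
    .appName("colab-spark-demo")
    .getOrCreate()
)
print(spark.version)
```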
Big Data Analytics in Health
21 Hours
Big data analytics encompasses the methodology of reviewing extensive and diverse datasets to identify correlations, uncover latent patterns, and extract actionable insights.
The healthcare sector generates vast volumes of complex and heterogeneous medical and clinical data. Leveraging big data analytics within this domain holds immense potential for deriving insights that can enhance healthcare delivery. Nevertheless, the sheer scale of these datasets presents significant challenges for analysis and practical implementation in clinical settings.
In this instructor-led, live online training, participants will learn how to execute big data analytics in healthcare by progressing through a series of hands-on laboratory exercises.
Upon completion of this training, participants will be able to:
- Install and configure big data analytics tools, including Hadoop MapReduce and Spark
- Comprehend the characteristics of medical data
- Apply big data techniques to manage and analyze medical data
- Examine big data systems and algorithms within the context of health applications
Audience
- Developers
- Data Scientists
Format of the Course
- A combination of lectures, discussions, exercises, and intensive hands-on practice.
Note
- To request customized training for this course, please contact us to make arrangements.
Hadoop For Administrators
21 Hours
Apache Hadoop is the leading framework for processing Big Data across server clusters. This course, lasting three days (with an optional fourth day), covers the business advantages and practical use cases of Hadoop and its surrounding ecosystem. Attendees will learn how to plan cluster deployment and expansion, as well as how to install, maintain, monitor, troubleshoot, and optimize Hadoop environments. Practical exercises include bulk data loading into clusters, exploring various Hadoop distributions, and installing and managing tools within the Hadoop ecosystem. The curriculum concludes with a discussion on securing clusters using Kerberos.
“…The materials were very well prepared and covered thoroughly. The Lab was very helpful and well organized.”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising
Audience
Hadoop administrators
Format
A blend of lectures and hands-on labs, with an approximate balance of 60% lectures and 40% labs.
Hadoop for Developers (4 days)
28 Hours
Apache Hadoop is the most widely used framework for processing Big Data across clusters of servers. This course introduces developers to the key components of the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, and HBase.
Advanced Hadoop for Developers
21 Hours
Apache Hadoop is one of the most widely used frameworks for processing Big Data across server clusters. This course explores data management in HDFS, as well as advanced techniques using Pig, Hive, and HBase. These sophisticated programming skills are particularly valuable for experienced Hadoop developers.
Audience: developers
Duration: three days
Format: 50% lectures and 50% hands-on labs.
Hadoop Administration on MapR
28 Hours
Audience:
This course aims to demystify big data and Hadoop technology, demonstrating that it is accessible and straightforward to understand.
Hadoop and Spark for Administrators
35 Hours
This instructor-led, live training in Czech Republic (online or onsite) is aimed at system administrators who wish to learn how to set up, deploy and manage Hadoop clusters within their organization.
By the end of this training, participants will be able to:
- Install and configure Apache Hadoop.
- Understand the four major components in the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Use Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
- Set up HDFS to operate as a storage engine for on-premise Spark deployments.
- Set up Spark to access alternative storage solutions such as Amazon S3 and NoSQL systems like Redis, Elasticsearch, Couchbase, and Aerospike (an S3 sketch follows this list).
- Carry out administrative tasks such as provisioning, management, monitoring and securing an Apache Hadoop cluster.
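As a preview of the S3 objective, pointing Spark at s3a storage typically means supplying credentials through Hadoop configuration properties; the bucket name and credential handling below are placeholders, and the hadoop-aws JAR matching your Hadoop version must be on the classpath:

```python
# Reading from Amazon S3 via the s3a connector in PySpark. Requires the
# hadoop-aws JAR on the classpath; bucket name and keys are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("s3a-demo")
    .config("spark.hadoop.fs.s3a.access.key", "YOUR_ACCESS_KEY")
    .config("spark.hadoop.fs.s3a.secret.key", "YOUR_SECRET_KEY")
    .getOrCreate()
)

df = spark.read.parquet("s3a://example-bucket/events/")
df.show(5)
```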
HBase for Developers
21 Hours
This course provides an introduction to HBase, a NoSQL datastore built on top of Hadoop. It is designed for developers who intend to build applications using HBase, as well as administrators responsible for managing HBase clusters.
We will guide developers through HBase architecture, data modeling, and application development. The curriculum also covers the integration of MapReduce with HBase and addresses key administration topics, focusing on performance optimization. The training is highly practical, featuring numerous lab exercises.
Duration: 3 days
Audience: Developers & Administrators
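For a flavor of HBase's data model from Python, the happybase client (which talks to HBase's Thrift server) can be used; the host, table, and column family below are assumptions, and the course labs may well use the Java API instead:

```python
# Basic HBase put/get via happybase (pip install happybase), which connects to
# HBase's Thrift server. Host, table, and column family are illustrative.
import happybase

connection = happybase.Connection("localhost")  # Thrift server, default port 9090
table = connection.table("users")               # assumes table exists with family 'info'

# Rows are keyed byte strings; columns are addressed as 'family:qualifier'.
table.put(b"user-001", {b"info:name": b"Ada", b"info:city": b"Prague"})

row = table.row(b"user-001")
print(row[b"info:name"])                        # b'Ada'
connection.close()
```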
Apache NiFi for Administrators
21 Hours
Apache NiFi is an open-source platform for data integration and event processing that operates on a flow-based model. It facilitates automated, real-time routing, transformation, and mediation of data between disparate systems, supported by a web-based user interface and granular control mechanisms.
This instructor-led live training, available either onsite or remotely, targets intermediate-level administrators and engineers looking to deploy, manage, secure, and optimize NiFi dataflows within production environments.
Upon completion of this course, participants will be equipped to:
- Install, configure, and maintain Apache NiFi clusters.
- Design and manage dataflows originating from and terminating at various sources and sinks.
- Implement logic for flow automation, routing, and transformation.
- Optimize performance, monitor system operations, and resolve issues.
Course Format
- Interactive lectures combined with discussions on real-world architectures.
- Practical labs focused on building, deploying, and managing data flows.
- Scenario-based exercises conducted in a live laboratory environment.
Course Customization Options
- For customized training arrangements, please contact us.
Apache NiFi for Developers
7 Hours
In this instructor-led, live training in Czech Republic, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.
By the end of this training, participants will be able to:
- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Develop their own custom Apache NiFi processors.
- Ingest and process real-time data from disparate and uncommon file formats and data sources.
PySpark and Machine Learning
21 Hours
This training offers a hands-on introduction to developing scalable data processing and Machine Learning workflows with PySpark. Participants will gain insight into how Apache Spark functions within contemporary Big Data ecosystems and learn to process large datasets efficiently by leveraging distributed computing principles.
Python and Spark for Big Data (PySpark)
21 Hours
In this instructor-led, live training in Czech Republic, participants will learn how to use Python and Spark together to analyze big data as they work on hands-on exercises.
By the end of this training, participants will be able to:
- Learn how to use Spark with Python to analyze Big Data.
- Work on exercises that mimic real world cases.
- Use different tools and techniques for big data analysis using PySpark (a representative example follows this list).
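A representative flavor of such an exercise, with the file path and column names invented for illustration:

```python
# A typical PySpark analysis step: load a CSV, filter, and aggregate.
# Path and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, col

spark = SparkSession.builder.appName("pyspark-analysis-demo").getOrCreate()

orders = spark.read.csv("/data/orders.csv", header=True, inferSchema=True)

(
    orders
    .filter(col("status") == "completed")
    .groupBy("country")
    .agg(avg("amount").alias("avg_amount"))
    .orderBy(col("avg_amount").desc())
    .show(10)
)
```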
Python, Spark, and Hadoop for Big Data
21 Hours
This instructor-led, live training in Czech Republic (online or onsite) is aimed at developers who wish to use and integrate Spark, Hadoop, and Python to process, analyze, and transform large and complex data sets.
By the end of this training, participants will be able to:
- Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
- Understand the features, core components, and architecture of Spark and Hadoop.
- Learn how to integrate Spark, Hadoop, and Python for big data processing.
- Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
- Build collaborative filtering recommendation systems similar to those of Netflix, YouTube, Amazon, Spotify, and Google (see the ALS sketch after this list).
- Use Apache Mahout to scale machine learning algorithms.
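For the recommendation-system objective, Spark MLlib's ALS implementation is the usual entry point. A minimal sketch with a tiny invented ratings dataset and arbitrary hyperparameters:

```python
# Collaborative filtering with Spark MLlib's ALS. The in-memory ratings
# dataset and hyperparameters are illustrative only.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("als-demo").getOrCreate()

ratings = spark.createDataFrame(
    [(0, 10, 4.0), (0, 11, 1.0), (1, 10, 5.0), (1, 12, 3.0)],
    ["userId", "itemId", "rating"],
)

als = ALS(userCol="userId", itemCol="itemId", ratingCol="rating",
          rank=5, maxIter=5, coldStartStrategy="drop")
model = als.fit(ratings)

# Top-2 item recommendations per user.
model.recommendForAllUsers(2).show(truncate=False)
```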
Stratio: Rocket and Intelligence Modules with PySpark
14 Hours
Stratio is a data-centric platform that unifies big data, AI, and governance into a single solution. Its Rocket and Intelligence modules facilitate rapid data exploration, transformation, and advanced analytics within enterprise environments.
This instructor-led, live training (available online or onsite) is designed for intermediate-level data professionals who want to effectively leverage the Rocket and Intelligence modules in Stratio with PySpark. The focus is on mastering looping structures, user-defined functions, and implementing advanced data logic.
Upon completion of this training, participants will be able to:
- Navigate and work efficiently within the Stratio platform using its Rocket and Intelligence modules.
- Apply PySpark for data ingestion, transformation, and analysis tasks.
- Utilize loops and conditional logic to manage data workflows and feature engineering processes.
- Create and manage user-defined functions (UDFs) to enable reusable data operations in PySpark (see the sketch after this list).
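Because the Rocket and Intelligence modules are driven through standard PySpark, the UDF objective can be previewed with vanilla PySpark; the function, column names, and normalization rule below are invented, and Stratio-specific wiring is out of scope for the sketch:

```python
# Defining and applying a reusable user-defined function (UDF) in PySpark.
# Column names and the normalization rule are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf-demo").getOrCreate()

@udf(returnType=StringType())
def normalize_country(name):
    # Trim whitespace and upper-case ISO-style country codes.
    return name.strip().upper() if name else None

df = spark.createDataFrame([(" cz ",), ("de",), (None,)], ["country"])
df.withColumn("country_norm", normalize_country("country")).show()
```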
Course Format
- Interactive lectures and discussions.
- Numerous exercises and practical activities.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request customized training for this course, please contact us to make arrangements.