Online or onsite, instructor-led live Apache Hadoop training courses demonstrate through interactive hands-on practice the core components of the Hadoop ecosystem and how these technologies can be used to solve large-scale problems. Hadoop training is available as "online live training" or "onsite live training". Online live training (aka "remote live training") is carried out by way of an interactive, remote desktop. Onsite live training can be carried out locally on customer premises in the Czech Republic or in NobleProg corporate training centers in the Czech Republic. NobleProg -- Your Local Training Provider
Weekday courses run between @start_time and @end_time
Python is a scalable, flexible, and widely used programming language for data science and machine learning. Spark is a data processing engine used in querying, analyzing, and transforming big data, while Hadoop is a software library framework for storing and processing data at scale.
This instructor-led, live training (online or onsite) is aimed at developers who wish to use and integrate Spark, Hadoop, and Python to process, analyze, and transform large and complex data sets.
By the end of this training, participants will be able to:
Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
Understand the features, core components, and architecture of Spark and Hadoop.
Learn how to integrate Spark, Hadoop, and Python for big data processing.
Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
Build collaborative filtering recommendation systems similar to those of Netflix, YouTube, Amazon, Spotify, and Google.
Use Apache Mahout to scale machine learning algorithms.
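The collaborative-filtering idea behind such recommenders can be sketched in a few lines of plain Python. This is only an illustration of the concept; a production system of the kind covered in this course would use Spark MLlib's ALS instead, and the users, items, and ratings below are invented.

```python
from math import sqrt

# Toy user -> {item: rating} matrix (made-up data for illustration).
ratings = {
    "alice": {"A": 5, "B": 3, "C": 4},
    "bob":   {"A": 4, "B": 3, "D": 5},
    "carol": {"B": 1, "C": 5, "D": 2},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm

def recommend(user):
    """Rank items the user has not rated, weighted by similar users' ratings."""
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], theirs)
        for item, r in theirs.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # -> ['D']
```

In Spark MLlib the same neighborhood intuition is replaced by matrix factorization, which scales the computation across the cluster.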
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
Datameer is a business intelligence and analytics platform built on Hadoop. It allows end-users to access, explore and correlate large-scale, structured, semi-structured and unstructured data in an easy-to-use fashion.
In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources.
By the end of this training, participants will be able to:
Create, curate, and interactively explore an enterprise data lake
Access business intelligence data warehouses, transactional databases and other analytic stores
Use a spreadsheet user interface to design end-to-end data processing pipelines
Access pre-built functions to explore complex data relationships
Use drag-and-drop wizards to visualize data and create dashboards
Use tables, charts, graphs, and maps to analyze query results
Audience
Data analysts
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
Alluxio is an open-source virtual distributed storage system that unifies disparate storage systems and enables applications to interact with data at memory speed. It is used by companies such as Intel, Baidu, and Alibaba.
In this instructor-led, live training, participants will learn how to use Alluxio to bridge different computation frameworks with storage systems and efficiently manage multi-petabyte scale data as they step through the creation of an application with Alluxio.
By the end of this training, participants will be able to:
Develop an application with Alluxio
Connect big data systems and applications while preserving one namespace
Efficiently extract value from big data in any storage format
Improve workload performance
Deploy and manage Alluxio standalone or clustered
Audience
Data scientists
Developers
System administrators
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
Audience:
The course is intended for IT specialists looking for a solution to store and process large data sets in a distributed system environment.
Goal:
Deep knowledge of Hadoop cluster administration.
This course is intended for developers, architects, data scientists or any profile that requires access to data either intensively or on a regular basis.
The major focus of the course is data manipulation and transformation.
Among the tools in the Hadoop ecosystem this course includes the use of Pig and Hive both of which are heavily used for data transformation and manipulation.
This training also addresses performance metrics and performance optimisation.
The course is entirely hands on and is punctuated by presentations of the theoretical aspects.
Big data analytics involves the process of examining large amounts of varied data sets in order to uncover correlations, hidden patterns, and other useful insights.
The health industry has massive amounts of complex heterogeneous medical and clinical data. Applying big data analytics on health data presents huge potential in deriving insights for improving delivery of healthcare. However, the enormity of these datasets poses great challenges in analyses and practical applications to a clinical environment.
In this instructor-led, live training (remote), participants will learn how to perform big data analytics in health as they step through a series of hands-on live-lab exercises.
By the end of this training, participants will be able to:
Install and configure big data analytics tools such as Hadoop MapReduce and Spark
Understand the characteristics of medical data
Apply big data techniques to deal with medical data
Study big data systems and algorithms in the context of health applications
Audience
Developers
Data Scientists
Format of the Course
Part lecture, part discussion, exercises and heavy hands-on practice.
Note
To request a customized training for this course, please contact us to arrange.
The course is dedicated to IT specialists who are looking for a solution to store and process large data sets in a distributed system environment.
Course goal:
Gaining knowledge of Hadoop cluster administration.
Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. In this three- (optionally four-) day course, attendees will learn about the business benefits and use cases for Hadoop and its ecosystem, how to plan cluster deployment and growth, and how to install, maintain, monitor, troubleshoot and optimize Hadoop. They will also practice bulk data loading into the cluster, get familiar with various Hadoop distributions, and practice installing and managing Hadoop ecosystem tools. The course finishes with a discussion of securing the cluster with Kerberos.
“…The materials were very well prepared and covered thoroughly. The Lab was very helpful and well organized”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising
Audience
Hadoop administrators
Format
Lectures and hands-on labs, approximate balance 60% lectures, 40% labs.
Apache Hadoop is the most popular framework for processing Big Data. Hadoop provides rich and deep analytics capability, and it is making inroads into the traditional BI analytics world. This course will introduce an analyst to the core components of the Hadoop ecosystem and its analytics.
Audience
Business Analysts
Duration
three days
Format
Lectures and hands on labs.
Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. This course will introduce a developer to the various components (HDFS, MapReduce, Pig, Hive and HBase) of the Hadoop ecosystem.
Apache Hadoop is one of the most popular frameworks for processing Big Data on clusters of servers. This course delves into data management in HDFS, advanced Pig, Hive, and HBase. These advanced programming techniques will be beneficial to experienced Hadoop developers.
Audience: developers
Duration: three days
Format: lectures (50%) and hands-on labs (50%).
As more and more software and IT projects migrate from local processing and data management to distributed processing and big data storage, Project Managers are finding the need to upgrade their knowledge and skills to grasp the concepts and practices relevant to Big Data projects and opportunities.
This course introduces Project Managers to the most popular Big Data processing framework: Hadoop.
In this instructor-led, live training, participants will learn the core components of the Hadoop ecosystem and how these technologies can be used to solve large-scale problems. By learning these foundations, participants will improve their ability to communicate with the developers and implementers of these systems as well as the data scientists and analysts that many IT projects involve.
Audience
Project Managers wishing to implement Hadoop into their existing development or IT infrastructure
Project Managers needing to communicate with cross-functional teams that include big data engineers, data scientists and business analysts
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
Hadoop is a popular Big Data processing framework. Python is a high-level programming language famous for its clear syntax and code readability.
In this instructor-led, live training, participants will learn how to work with Hadoop, MapReduce, Pig, and Spark using Python as they step through multiple examples and use cases.
By the end of this training, participants will be able to:
Understand the basic concepts behind Hadoop, MapReduce, Pig, and Spark
Use Python with the Hadoop Distributed File System (HDFS), MapReduce, Pig, and Spark
Use Snakebite to programmatically access HDFS within Python
Use mrjob to write MapReduce jobs in Python
Write Spark programs with Python
Extend the functionality of Pig using Python UDFs
Manage MapReduce jobs and Pig scripts using Luigi
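The mapper/reducer shape that MapReduce jobs take can be previewed in plain Python. This is an illustrative sketch of the MapReduce model with a locally simulated shuffle, not the mrjob API itself; in mrjob, the same two functions become methods of an MRJob subclass and the framework handles the shuffle on the cluster.

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    """Emit (word, 1) for every word in a line of input."""
    for word in line.lower().split():
        yield word, 1

def reducer(word, counts):
    """Sum the counts for one word."""
    yield word, sum(counts)

def run(lines):
    """Simulate the shuffle phase locally: map, sort by key, then reduce."""
    pairs = sorted((kv for line in lines for kv in mapper(line)), key=itemgetter(0))
    return {
        word: total
        for word, group in groupby(pairs, key=itemgetter(0))
        for _, total in reducer(word, (c for _, c in group))
    }

print(run(["big data", "big deal"]))  # -> {'big': 2, 'data': 1, 'deal': 1}
```

The sort-then-group step stands in for the distributed shuffle that Hadoop performs between the map and reduce phases.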
Audience
Developers
IT professionals
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
Apache Hadoop is a popular data processing framework for processing large data sets across many computers.
This instructor-led, live training (online or onsite) is aimed at system administrators who wish to learn how to set up, deploy and manage Hadoop clusters within their organization.
By the end of this training, participants will be able to:
Install and configure Apache Hadoop.
Understand the four major components of the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
Use Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or thousands of nodes.
Set up HDFS to operate as the storage engine for on-premise Spark deployments.
Set up Spark to access alternative storage solutions such as Amazon S3, and NoSQL database systems such as Redis, Elasticsearch, Couchbase, Aerospike, etc.
Perform administrative tasks such as provisioning, managing, monitoring, and securing an Apache Hadoop cluster.
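As a rough sketch of the S3 setup mentioned above, the s3a connector is typically wired in through Spark configuration properties such as the following. The credentials, endpoint, and connector version are placeholders and must match your own environment and Hadoop build.

```
# spark-defaults.conf -- sketch of pointing Spark at S3 via the s3a connector.
# Access key, secret key, and version numbers below are placeholders.
spark.hadoop.fs.s3a.access.key       YOUR_ACCESS_KEY
spark.hadoop.fs.s3a.secret.key       YOUR_SECRET_KEY
spark.hadoop.fs.s3a.endpoint         s3.amazonaws.com
# Ship the connector with the job; its version must match your Hadoop build:
spark.jars.packages                  org.apache.hadoop:hadoop-aws:3.3.4
```

With these properties in place, jobs can read and write paths of the form `s3a://bucket/key` as if they were HDFS paths.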
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
This course introduces HBase – a NoSQL store on top of Hadoop. The course is intended for developers who will be using HBase to develop applications, and administrators who will manage HBase clusters.
We will walk a developer through HBase architecture and data modelling and application development on HBase. It will also discuss using MapReduce with HBase, and some administration topics, related to performance optimization. The course is very hands-on with lots of lab exercises.
Duration : 3 days
Audience : Developers & Administrators
Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.
In this instructor-led, live training (onsite or remote), participants will learn how to deploy and manage Apache NiFi in a live lab environment.
By the end of this training, participants will be able to:
Install and configure Apache NiFi.
Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes.
Automate dataflows.
Enable streaming analytics.
Apply various approaches for data ingestion.
Transform Big Data into business insights.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.
In this instructor-led, live training, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.
By the end of this training, participants will be able to:
Understand NiFi's architecture and dataflow concepts.
Develop extensions using NiFi and third-party APIs.
Custom-develop their own Apache NiFi processors.
Ingest and process real-time data from disparate and uncommon file formats and data sources.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
Apache Samza is an open-source near-realtime, asynchronous computational framework for stream processing. It uses Apache Kafka for messaging, and Apache Hadoop YARN for fault tolerance, processor isolation, security, and resource management.
This instructor-led, live training introduces the principles behind messaging systems and distributed stream processing, while walking participants through the creation of a sample Samza-based project and job execution.
By the end of this training, participants will be able to:
Use Samza to simplify the code needed to produce and consume messages.
Decouple the handling of messages from an application.
Use Samza to implement near-realtime asynchronous computation.
Use stream processing to provide a higher level of abstraction over messaging systems.
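The decoupling described above can be pictured with a small plain-Python sketch: producers put messages on a queue, and a handler registered with the task processes them without the producers knowing anything about the handling logic. This is a conceptual illustration only, not the Samza API (Samza tasks are JVM-based and consume from Kafka); the class and handler here are invented.

```python
from queue import Queue

class StreamTask:
    """Conceptual stand-in for a stream task: process one message at a time."""

    def __init__(self, handler):
        self.handler = handler  # application logic, decoupled from transport
        self.inbox = Queue()

    def send(self, message):
        # Producers only know about the queue, never the handler.
        self.inbox.put(message)

    def run(self):
        # Drain the queue, handing each message to the registered handler.
        results = []
        while not self.inbox.empty():
            results.append(self.handler(self.inbox.get()))
        return results

task = StreamTask(handler=str.upper)
for msg in ["page_view", "click"]:
    task.send(msg)
print(task.run())  # -> ['PAGE_VIEW', 'CLICK']
```

Swapping the handler changes the application logic without touching the messaging layer, which is the abstraction gain the course highlights.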
Audience
Developers
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
Sqoop is an open source software tool for transferring data between Hadoop and relational databases or mainframes. It can be used to import data from a relational database management system (RDBMS) such as MySQL or Oracle, or from a mainframe, into the Hadoop Distributed File System (HDFS). Thereafter, the data can be transformed in Hadoop MapReduce, and then re-exported back into an RDBMS.
In this instructor-led, live training, participants will learn how to use Sqoop to import data from a traditional relational database to Hadoop storage such as HDFS or Hive, and vice versa.
By the end of this training, participants will be able to:
Install and configure Sqoop
Import data from MySQL to HDFS and Hive
Import data from HDFS and Hive to MySQL
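A typical Sqoop round trip looks like the commands below. The JDBC connection string, credentials, table names, and HDFS directories are placeholders; this is a sketch of the relevant flags, not a recipe for a specific cluster.

```shell
# Import a MySQL table into HDFS (host/db/table names are placeholders).
sqoop import \
  --connect jdbc:mysql://dbhost/shop \
  --username etl --password-file /user/etl/.pw \
  --table orders \
  --target-dir /data/orders

# Export the (possibly transformed) HDFS data back into MySQL.
sqoop export \
  --connect jdbc:mysql://dbhost/shop \
  --username etl --password-file /user/etl/.pw \
  --table orders_summary \
  --export-dir /data/orders_summary
```

Adding `--hive-import` to the import command loads the data into a Hive table instead of a bare HDFS directory.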
Audience
System administrators
Data engineers
Format of the Course
Part lecture, part discussion, exercises and heavy hands-on practice
Note
To request a customized training for this course, please contact us to arrange.
Tigon is an open-source, real-time, low-latency, high-throughput, native YARN, stream processing framework that sits on top of HDFS and HBase for persistence. Tigon applications address use cases such as network intrusion detection and analytics, social media market analysis, location analytics, and real-time recommendations to users.
This instructor-led, live training introduces Tigon's approach to blending real-time and batch processing as it walks participants through the creation of a sample application.
By the end of this training, participants will be able to:
Create powerful, stream processing applications for handling large volumes of data
Process stream sources such as Twitter and Webserver Logs
Use Tigon for rapid joining, filtering, and aggregating of streams
Audience
Developers
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
Cloudera Impala is an open-source massively parallel processing (MPP) SQL query engine for Apache Hadoop clusters. Impala enables users to issue low-latency SQL queries on data stored in the Hadoop Distributed File System and Apache HBase, without requiring data movement or transformation.
Audience
This course is aimed at analysts and data scientists performing analysis on data stored in Hadoop via Business Intelligence or SQL tools.
After this course delegates will be able to:
Extract meaningful information from Hadoop clusters with Impala.
Write specific programs to facilitate Business Intelligence in the Impala SQL dialect.
Troubleshoot Impala.
Apache Ambari is an open-source management platform for provisioning, managing, monitoring and securing Apache Hadoop clusters.
In this instructor-led, live training, participants will learn the management tools and practices provided by Ambari to successfully manage Hadoop clusters.
By the end of this training, participants will be able to:
Set up a live Big Data cluster using Ambari
Apply Ambari's advanced features and functionalities to various use cases
Seamlessly add and remove nodes as needed
Improve a Hadoop cluster's performance through tuning and tweaking
Audience
DevOps
System Administrators
DBAs
Hadoop testing professionals
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
Hortonworks Data Platform (HDP) is an open-source Apache Hadoop support platform that provides a stable foundation for developing big data solutions on the Apache Hadoop ecosystem.
This instructor-led, live training (online or onsite) introduces Hortonworks Data Platform (HDP) and walks participants through the deployment of a Spark + Hadoop solution.
By the end of this training, participants will be able to:
Use Hortonworks to reliably run Hadoop at a large scale.
Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows.
Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project.
Process different types of data, including structured, unstructured, in-motion, and at-rest.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
We respect the privacy of your email address. We will not pass on or sell your address to others. You can always change your preferences or unsubscribe completely.
Some of our clients
NobleProg is growing fast!
We are looking to expand our presence in the Czech Republic!
As a Business Development Manager you will:
expand the business in the Czech Republic
recruit local talent (sales staff, agents, trainers, consultants)
We offer:
Artificial Intelligence and Big Data systems to support your local operation
high-tech automation
continuously upgraded course catalogue and content
good fun in an international team
If you are interested in running a high-tech, high-quality training and consulting business, please contact us.