
Local, instructor-led Big Data training in the Czech Republic.
Testimonials
vyzVoice
Course: Hadoop for Developers and Administrators
Machine Translated
Mohamed Salama
Course: Data Mining & Machine Learning with R
Yashan Wang
Course: Data Mining with R
Kieran Mac Kenna
Course: Spark for Developers
Nour Assaf
Course: Data Mining and Analysis
youssef chamoun
Course: Data Mining and Analysis
Jessica Chaar
Course: Data Mining and Analysis
Simon Hahn
Course: Administrator Training for Apache Hadoop
Grzegorz Gorski
Course: Administrator Training for Apache Hadoop
Jacek Pieczątka
Course: Administrator Training for Apache Hadoop
Allison May
Course: Data Visualization
Carol Wells Bazzichi
Course: Data Visualization
Susan Williams
Course: Data Visualization
Diane Lucas
Course: Data Visualization
Craig Roberson
Course: Data Visualization
Lisa Comfort
Course: Data Visualization
Peter Coleman
Course: Data Visualization
Peter Coleman
Course: Data Visualization
Ronald Parrish
Course: Data Visualization
Balaram Chandra Paul
Course: A practical introduction to Data Analysis and Big Data
John Kidd
Course: Spark for Developers
Ryan Speelman
Course: Spark for Developers
Course: Spark for Developers
Luigi Loiacono
Course: Data Analysis with Hive/HiveQL
Proximus
Course: Data Analysis with Hive/HiveQL
Proximus
Course: Data Analysis with Hive/HiveQL
Philippe Job
Course: Data Analysis with Hive/HiveQL
Michael Nemerouf
Course: Spark for Developers
Jonathan Puvilland
Course: Data Analysis with Hive/HiveQL
Continental AG / Abteilung: CF IT Finance
Course: A practical introduction to Data Analysis and Big Data
Sameer Rohadia
Course: A practical introduction to Data Analysis and Big Data
Xiaoyuan Geng - Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada
Course: Programming with Big Data in R
Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada
Course: Programming with Big Data in R
Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada
Course: Programming with Big Data in R
Tim - Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada
Course: Programming with Big Data in R
Teboho Makenete
Course: Data Science for Big Data Analytics
Laura Kahn
Course: Artificial Intelligence - the most applied stuff - Data Analysis + Distributed AI + NLP
Course: Artificial Intelligence - the most applied stuff - Data Analysis + Distributed AI + NLP
Ericsson
Course: Administrator Training for Apache Hadoop
Jamie Martin-Royle - NBrown Group
Course: From Data to Decision with Big Data and Predictive Analytics
Krishan Mistry - NBrown Group
Course: From Data to Decision with Big Data and Predictive Analytics
Steve McPhail - Alberta Health Services - Information Technology
Course: Data Analysis with Hive/HiveQL
Geert Suys - Proximus Group
Course: Data Analysis with Hive/HiveQL
Proximus Group
Course: Data Analysis with Hive/HiveQL
Samuel Peeters - Proximus Group
Course: Data Analysis with Hive/HiveQL
Ericsson
Course: Administrator Training for Apache Hadoop
Ericsson
Course: Administrator Training for Apache Hadoop
N. V. Nederlandse Spoorwegen
Course: Apache Ignite: Improve Speed, Scale and Availability with In-Memory Computing
N. V. Nederlandse Spoorwegen
Course: Apache Ignite: Improve Speed, Scale and Availability with In-Memory Computing
N. V. Nederlandse Spoorwegen
Course: Apache Ignite: Improve Speed, Scale and Availability with In-Memory Computing
Course: Spark for Developers
Course: Artificial Intelligence - the most applied stuff - Data Analysis + Distributed AI + NLP
Big Data Course Outlines
This instructor-led, live course covers the working principles behind Accumulo and walks participants through the development of a sample application on Apache Accumulo.
Format of the Course
- Part lecture, part discussion, hands-on development and implementation, occasional tests to gauge understanding
In this instructor-led, live course, we introduce the processes involved in KDD and carry out a series of exercises to practice the implementation of those processes.
Audience
- Data analysts or anyone interested in learning how to interpret data to solve problems
Format of the Course
- After a theoretical discussion of KDD, the instructor will present real-life cases which call for the application of KDD to solve a problem. Participants will prepare, select and cleanse sample data sets and use their prior knowledge about the data to propose solutions based on the results of their observations.
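The KDD stages the course practices (selection, cleaning, mining) can be sketched in a few lines of plain Python. The sample records, field names, and cities below are hypothetical, chosen only to make each stage visible.

```python
# A minimal, pure-Python sketch of the KDD pipeline stages:
# selection -> cleaning -> mining (a trivial frequency count).
from collections import Counter

records = [
    {"age": 34, "city": "Prague", "purchase": "yes"},
    {"age": None, "city": "Brno", "purchase": "no"},   # missing value
    {"age": 45, "city": "Prague", "purchase": "yes"},
    {"age": 29, "city": "Ostrava", "purchase": "no"},
]

# Selection: keep only the fields relevant to the question.
selected = [{"city": r["city"], "purchase": r["purchase"], "age": r["age"]}
            for r in records]

# Cleaning: drop records with missing values.
cleaned = [r for r in selected if all(v is not None for v in r.values())]

# Mining: count purchases per city (a simple pattern-discovery step).
pattern = Counter(r["city"] for r in cleaned if r["purchase"] == "yes")

print(pattern.most_common(1))  # → [('Prague', 2)]
```

In a real engagement, each stage would of course use domain knowledge about the data, as the exercise description above emphasizes.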
In this instructor-led, live training, participants will learn how to use MonetDB and how to get the most value out of it.
By the end of this training, participants will be able to:
- Understand MonetDB and its features
- Install and get started with MonetDB
- Explore and perform different functions and tasks in MonetDB
- Accelerate the delivery of their project by maximizing MonetDB capabilities
Audience
- Developers
- Technical experts
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
By the end of this training, participants will:
- Understand the evolution and trends for machine learning.
- Know how machine learning is being used across different industries.
- Become familiar with the tools, skills and services available to implement machine learning within an organization.
- Understand how machine learning can be used to enhance data mining and analysis.
- Learn what a data middle backend is, and how it is being used by businesses.
- Understand the role that big data and intelligent applications are playing across industries.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
In this instructor-led, live training, participants will learn the essentials of MemSQL for development and administration.
By the end of this training, participants will be able to:
- Understand the key concepts and characteristics of MemSQL
- Install, design, maintain, and operate MemSQL
- Optimize schemas in MemSQL
- Improve queries in MemSQL
- Benchmark performance in MemSQL
- Build real-time data applications using MemSQL
Audience
- Developers
- Administrators
- Operation Engineers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
By the end of this training, participants will be able to build producer and consumer applications for real-time stream data processing.
Audience
- Developers
- Administrators
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
Note
- To request a customized training for this course, please contact us to arrange.
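The producer/consumer pattern this course builds on Kafka can be sketched broker-free with the standard library; here a `queue.Queue` stands in for a Kafka topic, and the record shape is hypothetical.

```python
# A minimal, broker-free sketch of the producer/consumer pattern
# (queue.Queue stands in for a Kafka topic; not the Kafka client API).
import queue
import threading

topic = queue.Queue()   # stands in for a Kafka topic
SENTINEL = object()     # signals end of stream
results = []

def producer():
    for i in range(5):
        topic.put({"key": i, "value": i * i})   # publish a record
    topic.put(SENTINEL)

def consumer():
    while True:
        record = topic.get()
        if record is SENTINEL:
            break
        results.append(record["value"])         # process the record

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # → [0, 1, 4, 9, 16]
```

A real Kafka client replaces the queue with a partitioned, durable topic, but the publish/poll loop structure is the same.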
This instructor-led, live training introduces the concepts and approaches for implementing geospatial analytics and walks participants through the creation of a predictive analysis application using Magellan on Spark.
By the end of this training, participants will be able to:
- Efficiently query, parse and join geospatial datasets at scale
- Implement geospatial data in business intelligence and predictive analytics applications
- Use spatial context to extend the capabilities of mobile devices, sensors, logs, and wearables
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
In this instructor-led live training, participants will learn how to use Apache Kylin to set up a real-time data warehouse.
By the end of this training, participants will be able to:
- Consume real-time streaming data using Kylin
- Utilize Apache Kylin's powerful features: a rich SQL interface, Spark cubing, and subsecond query latency
Note
- We use the latest version of Kylin (as of this writing, Apache Kylin v2.0)
Audience
- Big data engineers
- Big Data analysts
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
This instructor-led, live training (online or onsite) is aimed at developers who wish to implement Apache Kafka stream processing without writing code.
By the end of this training, participants will be able to:
- Install and configure Confluent KSQL.
- Set up a stream processing pipeline using only SQL commands (no Java or Python coding).
- Carry out data filtering, transformations, aggregations, joins, windowing, and sessionization entirely in SQL.
- Design and deploy interactive, continuous queries for streaming ETL and real-time analytics.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
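Windowing, one of the operations listed above, is easy to picture outside of KSQL: assign each event to a fixed-size tumbling window by its timestamp and aggregate within the window. The events and 10-second window size below are hypothetical.

```python
# A pure-Python sketch of the tumbling-window SUM aggregation
# that KSQL expresses declaratively in SQL.
from collections import defaultdict

events = [(1, 5), (3, 7), (11, 2), (14, 4), (25, 9)]  # (timestamp_s, value)
WINDOW = 10  # window size in seconds

windows = defaultdict(int)
for ts, value in events:
    window_start = (ts // WINDOW) * WINDOW  # assign event to its tumbling window
    windows[window_start] += value          # aggregate (SUM) within the window

print(dict(windows))  # → {0: 12, 10: 6, 20: 9}
```

KSQL performs the same bucketing continuously over an unbounded stream, emitting updated aggregates as events arrive.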
Since 2006, KNIME has been used in pharmaceutical research; it is also used in other areas such as CRM customer data analysis, business intelligence, and financial data analysis.
This course on the KNIME Analytics Platform is an ideal opportunity for beginners, advanced users, and KNIME experts to be introduced to KNIME, to learn how to use it more effectively, and to create clear, comprehensive reports based on KNIME workflows.
In this instructor-led, live training, participants will learn how to integrate Kafka Streams into a set of sample Java applications that pass data to and from Apache Kafka for stream processing.
By the end of this training, participants will be able to:
- Understand Kafka Streams features and advantages over other stream processing frameworks
- Process stream data directly within a Kafka cluster
- Write a Java or Scala application or microservice that integrates with Kafka and Kafka Streams
- Write concise code that transforms input Kafka topics into output Kafka topics
- Build, package and deploy the application
Audience
- Developers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
Notes
- To request a customized training for this course, please contact us to arrange
In this instructor-led, live training (onsite or remote), participants will learn how to deploy and manage Apache NiFi in a live lab environment.
By the end of this training, participants will be able to:
- Install and configure Apache NiFi.
- Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes.
- Automate dataflows.
- Enable streaming analytics.
- Apply various approaches for data ingestion.
- Transform Big Data into business insights.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
In this instructor-led, live training (onsite or remote), participants will learn how to set up and integrate different Stream Processing frameworks with existing big data storage systems and related software applications and microservices.
By the end of this training, participants will be able to:
- Install and configure different Stream Processing frameworks, such as Spark Streaming and Kafka Streams.
- Understand and select the most appropriate framework for the job.
- Process data continuously, concurrently, and in a record-by-record fashion.
- Integrate Stream Processing solutions with existing databases, data warehouses, data lakes, etc.
- Integrate the most appropriate stream processing library with enterprise applications and microservices.
Audience
- Developers
- Software architects
Format of the Course
- Part lecture, part discussion, exercises and heavy hands-on practice
Notes
- To request a customized training for this course, please contact us to arrange.
Audience
- Developers
Format of the Course
- Lectures, hands-on practice, small tests along the way to gauge understanding
Impala enables users to issue low-latency SQL queries to data stored in the Hadoop Distributed File System and Apache HBase without requiring data movement or transformation.
Audience
This course is aimed at analysts and data scientists performing analysis on data stored in Hadoop via Business Intelligence or SQL tools.
After this course, delegates will be able to:
- Extract meaningful information from Hadoop clusters with Impala.
- Write specific programs to facilitate Business Intelligence in Impala SQL Dialect.
- Troubleshoot Impala.
This instructor-led, live training (online or onsite) introduces Hortonworks Data Platform (HDP) and walks participants through the deployment of a Spark + Hadoop solution.
By the end of this training, participants will be able to:
- Use Hortonworks to reliably run Hadoop at a large scale.
- Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows.
- Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project.
- Process different types of data, including structured, unstructured, in-motion, and at-rest.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
This course walks developers through HBase architecture, data modeling, and application development on HBase. It also covers using MapReduce with HBase and administration topics related to performance optimization. The course is very hands-on, with lots of lab exercises.
Duration : 3 days
Audience : Developers & Administrators
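HBase's logical data model, which the data-modeling portion of the course covers, is essentially a sorted map of row keys to column families to column qualifiers to values. A stdlib sketch (timestamps/versions omitted; the table, family, and qualifier names are hypothetical):

```python
# A pure-Python sketch of HBase's logical data model:
# row key -> {column family -> {qualifier -> value}}.
table = {}

def put(row, family, qualifier, value):
    table.setdefault(row, {}).setdefault(family, {})[qualifier] = value

def get(row, family, qualifier):
    return table.get(row, {}).get(family, {}).get(qualifier)

def scan(start_row, stop_row):
    # HBase scans return rows in sorted row-key order within [start, stop).
    for row in sorted(table):
        if start_row <= row < stop_row:
            yield row, table[row]

put("user#001", "info", "name", "Alice")
put("user#002", "info", "name", "Bob")
put("user#002", "stats", "logins", 7)

print(get("user#002", "stats", "logins"))                 # → 7
print([row for row, _ in scan("user#000", "user#002")])   # → ['user#001']
```

The sorted row-key order is why row-key design dominates HBase schema discussions: it determines which rows a single scan can reach.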
In this instructor-led, live training, participants will learn how to work with Hadoop, MapReduce, Pig, and Spark using Python as they step through multiple examples and use cases.
By the end of this training, participants will be able to:
- Understand the basic concepts behind Hadoop, MapReduce, Pig, and Spark
- Use Python with Hadoop Distributed File System (HDFS), MapReduce, Pig, and Spark
- Use Snakebite to programmatically access HDFS within Python
- Use mrjob to write MapReduce jobs in Python
- Write Spark programs with Python
- Extend the functionality of Pig using Python UDFs
- Manage MapReduce jobs and Pig scripts using Luigi
Audience
- Developers
- IT Professionals
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
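The MapReduce model behind the mrjob exercises can be shown in miniature with plain Python: a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase folds each group. The input lines are hypothetical sample data.

```python
# A pure-Python sketch of the classic MapReduce word count;
# the map, shuffle, and reduce phases all run locally here.
from itertools import groupby
from operator import itemgetter

lines = ["big data big ideas", "data beats opinions"]

# Map: emit (word, 1) for every word on every line.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle: group pairs by key, as Hadoop does between the phases.
mapped.sort(key=itemgetter(0))

# Reduce: sum the counts for each word.
counts = {word: sum(c for _, c in group)
          for word, group in groupby(mapped, key=itemgetter(0))}

print(counts)  # → {'beats': 1, 'big': 2, 'data': 2, 'ideas': 1, 'opinions': 1}
```

With mrjob, the map and reduce steps become methods on a job class and Hadoop handles the shuffle and distribution, but the data flow is exactly this.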
This course is intended to demystify Big Data/Hadoop technology and to show that it is not difficult to understand.
This course introduces Project Managers to the most popular Big Data processing framework: Hadoop.
In this instructor-led training, participants will learn the core components of the Hadoop ecosystem and how these technologies can be used to solve large-scale problems. In learning these foundations, participants will also improve their ability to communicate with the developers and implementers of these systems as well as the data scientists and analysts that many IT projects involve.
Audience
- Project Managers wishing to implement Hadoop into their existing development or IT infrastructure
- Project Managers needing to communicate with cross-functional teams that include big data engineers, data scientists and business analysts
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
This instructor-led, live training (online or onsite) is aimed at developers who wish to carry out big data analysis using Apache Spark in their .NET applications.
By the end of this training, participants will be able to:
- Install and configure Apache Spark.
- Understand how .NET implements Spark APIs so that they can be accessed from a .NET application.
- Develop data processing applications using C# or F#, capable of handling data sets whose size is measured in terabytes and petabytes.
- Develop machine learning features for a .NET application using Apache Spark capabilities.
- Carry out exploratory analysis using SQL queries on big data sets.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
In this instructor-led, live training, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.
By the end of this training, participants will be able to:
- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Custom-develop their own Apache NiFi processor.
- Ingest and process real-time data from disparate and uncommon file formats and data sources.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
This instructor-led, live training (onsite or remote) is aimed at application developers and engineers who wish to master more sophisticated usage of the Teradata database.
By the end of this training, participants will be able to:
- Manage Teradata space.
- Protect and distribute data in Teradata.
- Read an Explain plan.
- Improve their SQL proficiency.
- Use the main Teradata utilities.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Common uses of Spark SQL are:
- to execute SQL queries.
- to read data from an existing Hive installation.
In this instructor-led, live training (onsite or remote), participants will learn how to analyze various types of data sets using Spark SQL.
By the end of this training, participants will be able to:
- Install and configure Spark SQL.
- Perform data analysis using Spark SQL.
- Query data sets in different formats.
- Visualize data and query results.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
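Spark SQL itself needs a cluster runtime, but the core workflow the course teaches — load a data set, register it as a table, query and aggregate it with SQL — can be sketched with the stdlib `sqlite3` module. The table, columns, and rows below are hypothetical.

```python
# An illustration of the SQL-on-data workflow using sqlite3
# as a stand-in for Spark SQL (the query pattern is the same).
import sqlite3

rows = [("alice", 2021, 120.0), ("bob", 2021, 80.0), ("alice", 2022, 200.0)]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (customer TEXT, year INT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)

# Query the data set: total amount per customer, largest first.
result = con.execute(
    "SELECT customer, SUM(amount) FROM sales "
    "GROUP BY customer ORDER BY SUM(amount) DESC"
).fetchall()

print(result)  # → [('alice', 320.0), ('bob', 80.0)]
con.close()
```

In Spark SQL the same statement would run distributed over a DataFrame registered as a temporary view, with the query optimizer handling the physical plan.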
This instructor-led, live training introduces the concepts behind interactive data analytics and walks participants through the deployment and usage of Zeppelin in a single-user or multi-user environment.
By the end of this training, participants will be able to:
- Install and configure Zeppelin
- Develop, organize, execute and share data in a browser-based interface
- Visualize results without referring to the command line or cluster details
- Execute and collaborate on long workflows
- Work with any of a number of plug-in language/data-processing-backends, such as Scala (with Apache Spark), Python (with Apache Spark), Spark SQL, JDBC, Markdown and Shell.
- Integrate Zeppelin with Spark, Flink and MapReduce
- Secure multi-user instances of Zeppelin with Apache Shiro
Audience
- Data engineers
- Data analysts
- Data scientists
- Software developers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
This instructor-led, live training introduces the challenges of serving large-scale data and walks participants through the creation of an application that can compute responses to user requests, over large datasets in real-time.
By the end of this training, participants will be able to:
- Use Vespa to quickly compute data (store, search, rank, organize) at serving time while a user waits
- Implement Vespa into existing applications involving feature search, recommendations, and personalization
- Integrate and deploy Vespa with existing big data systems such as Hadoop and Storm.
Audience
- Developers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice