
Local, instructor-led Apache Spark training in the Czech Republic.
Reference
The instructor adapted the training program to our current needs.
EduBroker Sp. z o.o.
Kurz: Python and Spark for Big Data (PySpark)
Machine Translated
Performing similar exercises in different ways really helps you understand what each component (Hadoop/Spark, standalone/cluster) can do on its own and in combination. It gave me ideas on how I should test my application on my local machine during development versus when it is deployed in a cluster.
Thomas Carcaud - IT Frankfurt GmbH
Kurz: Spark for Developers
Machine Translated
Individual attention.
ARCHANA ANILKUMAR - PPL
Kurz: Python and Spark for Big Data (PySpark)
Machine Translated
It was great to understand what is happening under the hood of Spark. Knowing what goes on under the hood helps you better understand why your code does or does not do what you expect. A lot of the training was hands-on, which is always great, and the section on optimization was extremely relevant to my current work, which was nice.
Intelligent Medical Objects
Kurz: Apache Spark in the Cloud
Machine Translated
Apache Spark Subcategories
Apache Spark Course Outlines
In this instructor-led, live training, participants will learn how to use Alluxio to bridge different computation frameworks with storage systems and efficiently manage multi-petabyte scale data as they step through the creation of an application with Alluxio.
By the end of this training, participants will be able to:
- Develop an application with Alluxio
- Connect big data systems and applications while preserving one namespace
- Efficiently extract value from big data in any storage format
- Improve workload performance
- Deploy and manage Alluxio standalone or clustered
Audience
- Data scientist
- Developer
- System administrator
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
The health industry has massive amounts of complex heterogeneous medical and clinical data. Applying big data analytics on health data presents huge potential in deriving insights for improving delivery of healthcare. However, the enormity of these datasets poses great challenges in analyses and practical applications to a clinical environment.
In this instructor-led, live training (remote), participants will learn how to perform big data analytics in health as they step through a series of hands-on live-lab exercises.
By the end of this training, participants will be able to:
- Install and configure big data analytics tools such as Hadoop MapReduce and Spark
- Understand the characteristics of medical data
- Apply big data techniques to deal with medical data
- Study big data systems and algorithms in the context of health applications
Audience
- Developers
- Data Scientists
Format of the Course
- Part lecture, part discussion, exercises and heavy hands-on practice.
Note
- To request a customized training for this course, please contact us to arrange.
In this instructor-led, live training, participants will learn about the technology offerings and implementation approaches for processing graph data. The aim is to identify real-world objects, their characteristics and relationships, then model these relationships and process them as data using a Graph Computing (also known as Graph Analytics) approach. We start with a broad overview and narrow in on specific tools as we step through a series of case studies, hands-on exercises and live deployments.
By the end of this training, participants will be able to:
- Understand how graph data is persisted and traversed.
- Select the best framework for a given task (from graph databases to batch processing frameworks).
- Implement Hadoop, Spark, GraphX and Pregel to carry out graph computing across many machines in parallel.
- View real-world big data problems in terms of graphs, processes and traversals.
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
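To give a concrete feel for the vertex-centric ("think like a vertex") model that Pregel and GraphX use, here is a minimal pure-Python sketch of single-source shortest paths via superstep message passing. Real systems partition vertices across machines; this single-process version (function and variable names are illustrative) only demonstrates the model.

```python
# Vertex-centric (Pregel-style) shortest paths in pure Python.
# Each superstep delivers messages to vertices; a vertex updates its
# state and sends new messages to neighbors. Computation halts when
# no messages remain.
import math

def pregel_sssp(edges, source):
    """edges: dict mapping vertex -> list of (neighbor, weight) pairs."""
    verts = set(edges) | {n for nbrs in edges.values() for n, _ in nbrs}
    dist = {v: math.inf for v in verts}
    messages = {source: 0}            # initial message to the source
    while messages:                   # one loop iteration = one superstep
        next_messages = {}
        for v, incoming in messages.items():
            if incoming < dist[v]:    # improved distance: update and notify
                dist[v] = incoming
                for nbr, w in edges.get(v, []):
                    cand = incoming + w
                    if cand < next_messages.get(nbr, math.inf):
                        next_messages[nbr] = cand
        messages = next_messages
    return dist
```

The same update-then-message loop, distributed over partitions with messages shuffled between machines, is what frameworks like GraphX parallelize.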
This instructor-led, live training (online or onsite) introduces Hortonworks Data Platform (HDP) and walks participants through the deployment of Spark + Hadoop solution.
By the end of this training, participants will be able to:
- Use Hortonworks to reliably run Hadoop at a large scale.
- Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows.
- Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project.
- Process different types of data, including structured, unstructured, in-motion, and at-rest.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
In this instructor-led, live training (onsite or remote), participants will learn how to set up and integrate different Stream Processing frameworks with existing big data storage systems and related software applications and microservices.
By the end of this training, participants will be able to:
- Install and configure different Stream Processing frameworks, such as Spark Streaming and Kafka Streaming.
- Understand and select the most appropriate framework for the job.
- Process data continuously, concurrently, and in a record-by-record fashion.
- Integrate Stream Processing solutions with existing databases, data warehouses, data lakes, etc.
- Integrate the most appropriate stream processing library with enterprise applications and microservices.
Audience
- Developers
- Software architects
Format of the Course
- Part lecture, part discussion, exercises and heavy hands-on practice
Notes
- To request a customized training for this course, please contact us to arrange.
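Record-by-record processing with windowed aggregation is the core pattern these stream frameworks provide. As a hedged, single-process sketch (class name is illustrative, no framework required), here is a sliding-window event counter in pure Python:

```python
# One record arrives at a time; the counter maintains only the events
# inside the last `window_seconds`, evicting expired ones as it goes.
# Spark Streaming and Kafka Streams offer the same windowed-aggregation
# idea, distributed and fault-tolerant.
from collections import deque

class SlidingWindowCounter:
    """Counts events observed in the last `window_seconds` seconds."""
    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = deque()  # event timestamps, oldest first

    def observe(self, timestamp):
        self.events.append(timestamp)
        # Evict records that have fallen out of the window.
        while self.events and self.events[0] <= timestamp - self.window:
            self.events.popleft()
        return len(self.events)
```

With a 10-second window, observing events at t=1, 5, 12, 20 yields counts 1, 2, 2, 2: each new record can retire older ones without reprocessing the whole stream.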
This instructor-led, live training introduces the concepts and approaches for implementing geospatial analytics and walks participants through the creation of a predictive analysis application using Magellan on Spark.
By the end of this training, participants will be able to:
- Efficiently query, parse and join geospatial datasets at scale
- Implement geospatial data in business intelligence and predictive analytics applications
- Use spatial context to extend the capabilities of mobile devices, sensors, logs, and wearables
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
This instructor-led, live training (online or onsite) is aimed at developers who wish to carry out big data analysis using Apache Spark in their .NET applications.
By the end of this training, participants will be able to:
- Install and configure Apache Spark.
- Understand how .NET implements Spark APIs so that they can be accessed from a .NET application.
- Develop data processing applications using C# or F#, capable of handling data sets whose size is measured in terabytes and petabytes.
- Develop machine learning features for a .NET application using Apache Spark capabilities.
- Carry out exploratory analysis using SQL queries on big data sets.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
This instructor-led, live training (online or onsite) is aimed at data scientists who wish to use the SMACK stack to build data processing platforms for big data solutions.
By the end of this training, participants will be able to:
- Implement a data pipeline architecture for processing big data.
- Develop a cluster infrastructure with Apache Mesos and Docker.
- Analyze data with Spark and Scala.
- Manage unstructured data with Apache Cassandra.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
This instructor-led, live training (online or onsite) is aimed at engineers who wish to deploy an Apache Spark system for processing very large amounts of data.
By the end of this training, participants will be able to:
- Install and configure Apache Spark.
- Understand the difference between Apache Spark and Hadoop MapReduce and when to use which.
- Quickly read in and analyze very large data sets.
- Integrate Apache Spark with other machine learning tools.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
AUDIENCE:
Data Engineer, DevOps, Data Scientist
This course will introduce Apache Spark. Students will learn how Spark fits into the Big Data ecosystem and how to use Spark for data analysis. The course covers the Spark shell for interactive data analysis, Spark internals, Spark APIs, Spark SQL, Spark Streaming, machine learning, and GraphX.
AUDIENCE:
Developers / Data Analysts
In this instructor-led, live training, participants will learn how to use Python and Spark together to analyze big data as they work on hands-on exercises.
By the end of this training, participants will be able to:
- Learn how to use Spark with Python to analyze Big Data.
- Work on exercises that mimic real world cases.
- Use different tools and techniques for big data analysis using PySpark.
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
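Word count is the classic first PySpark exercise. As a taste of the flatMap → map → reduceByKey shape that PySpark distributes across a cluster, here is a single-process, pure-Python sketch (the function name is illustrative):

```python
# The canonical Spark "word count" built from plain Python pieces.
# PySpark's RDD API applies the same pipeline shape across a cluster:
#   lines.flatMap(split).map(word -> (word, 1)).reduceByKey(add)
from collections import Counter
from itertools import chain

def word_count(lines):
    # flatMap: split each line into individual words
    words = chain.from_iterable(line.split() for line in lines)
    # map + reduceByKey: pair each word with 1, then sum counts per word
    return dict(Counter(words))
```

The point of Spark is that the same three logical steps keep working unchanged when `lines` is terabytes of text spread over many machines.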
This instructor-led, live training (onsite or remote) is aimed at software engineers who wish to stream big data with Spark Streaming and Scala.
By the end of this training, participants will be able to:
- Create Spark applications with the Scala programming language.
- Use Spark Streaming to process continuous streams of data.
- Process streams of real-time data with Spark Streaming.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Spark SQL is Apache Spark's module for working with structured data. It can be used:
- to execute SQL queries.
- to read data from an existing Hive installation.
In this instructor-led, live training (onsite or remote), participants will learn how to analyze various types of data sets using Spark SQL.
By the end of this training, participants will be able to:
- Install and configure Spark SQL.
- Perform data analysis using Spark SQL.
- Query data sets in different formats.
- Visualize data and query results.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
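In Spark SQL you register a dataset as a table and query it with ordinary SQL (via `spark.sql(...)`). The following sketch shows the same query pattern using Python's built-in sqlite3 module, purely so it runs without a Spark cluster; the table name, columns, and function name are illustrative, not part of any Spark API.

```python
# Illustrates the "register data, then query it with SQL" pattern that
# Spark SQL provides, using sqlite3 so no cluster is needed.
import sqlite3

def top_products_by_revenue(rows, limit=2):
    """rows: iterable of (product, quantity, unit_price) tuples."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE sales (product TEXT, qty INTEGER, price REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    query = """
        SELECT product, SUM(qty * price) AS revenue
        FROM sales
        GROUP BY product
        ORDER BY revenue DESC
        LIMIT ?
    """
    result = con.execute(query, (limit,)).fetchall()
    con.close()
    return result
```

In Spark SQL the equivalent flow is `df.createOrReplaceTempView("sales")` followed by the same SELECT in `spark.sql`, with the aggregation executed in parallel over partitioned data.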
MLlib, Spark's machine learning library, divides into two packages:
- spark.mllib contains the original API, built on top of RDDs.
- spark.ml provides a higher-level API, built on top of DataFrames, for constructing ML pipelines.
Audience
This course is directed at engineers and developers seeking to utilize Spark's built-in machine learning library.
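The pipeline idea in spark.ml is an ordered list of stages, each exposing a fit/transform contract, so preprocessing and modeling compose cleanly. As a hedged, pure-Python sketch of that contract (toy classes, not the Spark API):

```python
# Mimics the spark.ml Pipeline shape: each stage is fitted on the data,
# then transforms it, and the output feeds the next stage.
class Scaler:
    """Toy stage: rescales numbers to [0, 1] based on the fitted maximum."""
    def fit(self, data):
        self.max_ = max(data) or 1.0  # avoid division by zero on all-zeros
        return self

    def transform(self, data):
        return [x / self.max_ for x in data]

class Pipeline:
    def __init__(self, stages):
        self.stages = stages

    def fit_transform(self, data):
        for stage in self.stages:
            data = stage.fit(data).transform(data)
        return data
```

Because every stage honors the same interface, swapping a tokenizer, feature hasher, or model in and out does not disturb the rest of the workflow, which is the design point of spark.ml's DataFrame-based pipelines.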