Adobe LiveCycle Designer Training Course
Adobe LiveCycle Designer is a software application that empowers users to design and modify PDF forms for electronic completion or printing. It allows the inclusion of diverse components such as text fields, buttons, checkboxes, lists, tables, images, and scripts. Additionally, it provides control over form layout, visual appearance, validation rules, and logical flow, while facilitating integration with data sources and web services.
This instructor-led, live training (available online or onsite) targets beginner to intermediate developers and UI/UX designers who want to utilize Adobe LiveCycle Designer to build interactive and dynamic PDF forms.
Upon completion of this training, participants will be able to:
- Create and edit PDF forms incorporating various elements and properties.
- Implement scripts and logic within PDF forms using JavaScript.
- Validate and secure PDF forms.
- Integrate PDF forms with data sources and web services.
- Deploy and distribute PDF forms.
Format of the Course
- Interactive lecture and discussion.
- Extensive exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to make arrangements.
Course Outline
User Control Panel
How forms work
Document
- Pages
- Preview
- Master pages
Elements
- Inserting
- Groups
- Properties
- Graphics
- Fields
- Containers
- Formatting
- Custom objects
- Tab order
Layers model
Scripts
- Languages (JavaScript and FormCalc)
- Preview
- Creating scripts
- Modifying scripts
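LiveCycle Designer supports two scripting languages, FormCalc and JavaScript. As an illustrative sketch only (the field names are hypothetical, and in Designer this logic would sit in a field's calculate event and read values through the XFA object model, e.g. `Quantity.rawValue`), the core of a typical calculation script is plain JavaScript:

```javascript
// Hypothetical sketch of a LiveCycle-style calculate script.
// Written as a standalone function so it can run anywhere; a real form
// would assign the result to the field via the XFA object model.
function computeLineTotal(quantity, unitPrice, taxRate) {
  // Guard against empty fields, which arrive as null or empty strings in forms
  var qty = Number(quantity) || 0;
  var price = Number(unitPrice) || 0;
  var rate = Number(taxRate) || 0;

  var subtotal = qty * price;
  var total = subtotal * (1 + rate);
  // Forms typically display currency rounded to 2 decimal places
  return Math.round(total * 100) / 100;
}

console.log(computeLineTotal(3, "19.99", 0.07)); // taxed line total
```

Attaching the same logic to a field's calculate event makes the total update automatically whenever a referenced field changes.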
Validation
Forms
- Dynamic forms
- Forms with calculations
- Expandable forms
- Adding form content
Document hierarchy
Importing forms from other documents
Creating a PDF
Enabling saving in Adobe Reader (Reader extensions)
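The validation topics above are typically implemented as scripts on a field's validate event. As a hedged, standalone sketch (the pattern and messages are illustrative, not the course's actual exercises; a Designer script would report failure through the event model rather than return a value), an email check of the kind a form might enforce looks like this in plain JavaScript:

```javascript
// Illustrative sketch of a form-field validation check.
function validateEmail(value) {
  // Deliberately simple pattern: non-empty local part, "@", domain with a dot
  var pattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  if (value === null || value === "") {
    return { valid: false, message: "Email is required." };
  }
  if (!pattern.test(value)) {
    return { valid: false, message: "Please enter a valid email address." };
  }
  return { valid: true, message: "" };
}

console.log(validateEmail("user@example.com").valid); // true
console.log(validateEmail("not-an-email").valid);     // false
```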
Requirements
- Knowledge of programming in JavaScript
Audience
- Developers
- UI/UX designers
- Forms designers
Open Training Courses require 5+ participants.
Related Courses
Developing AI Applications with Huawei Ascend and CANN
21 Hours
Huawei Ascend offers a family of AI processors engineered for high-performance inference and training tasks.
This instructor-led live training (available online or onsite) is designed for intermediate-level AI engineers and data scientists looking to develop and optimize neural network models using Huawei’s Ascend platform and the CANN toolkit.
Upon completing this training, participants will be able to:
- Set up and configure the CANN development environment.
- Build AI applications leveraging MindSpore and CloudMatrix workflows.
- Enhance performance on Ascend NPUs through custom operators and tiling techniques.
- Deploy models to either edge or cloud environments.
Course Format
- Interactive lectures and discussions.
- Practical application of Huawei Ascend and the CANN toolkit within sample applications.
- Guided exercises focusing on model construction, training, and deployment.
Customization Options
- To request a customized version of this course tailored to your specific infrastructure or datasets, please contact us to arrange the details.
Deploying AI Models with CANN and Ascend AI Processors
14 Hours
CANN (Compute Architecture for Neural Networks) serves as Huawei’s AI compute stack, designed for deploying and optimizing AI models on Ascend AI processors.
This instructor-led live training, available online or onsite, targets intermediate AI developers and engineers who want to efficiently deploy trained AI models onto Huawei Ascend hardware. The course utilizes the CANN toolkit alongside tools like MindSpore, TensorFlow, or PyTorch.
Upon completing this training, participants will be able to:
- Grasp the CANN architecture and its significance within the AI deployment pipeline.
- Convert and adapt models from leading frameworks into formats compatible with Ascend.
- Utilize tools such as ATC, OM model conversion, and MindSpore for both cloud and edge inference.
- Identify deployment challenges and optimize performance on Ascend hardware.
Course Format
- Interactive lectures combined with demonstrations.
- Practical laboratory sessions using CANN tools with Ascend simulators or devices.
- Real-world AI model deployment scenarios.
Customization Options
- For customized training on this course, please contact us to make arrangements.
AI Inference and Deployment with CloudMatrix
21 Hours
CloudMatrix is Huawei’s unified AI development and deployment platform designed to support scalable, production-grade inference pipelines.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level AI professionals who wish to deploy and monitor AI models using the CloudMatrix platform with CANN and MindSpore integration.
By the end of this training, participants will be able to:
- Leverage CloudMatrix for model packaging, deployment, and serving.
- Convert and optimize models for Ascend chipsets.
- Establish pipelines for real-time and batch inference tasks.
- Monitor deployments and tune performance in production settings.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of CloudMatrix with real deployment scenarios.
- Guided exercises focused on conversion, optimization, and scaling.
Course Customization Options
- To request a customized training for this course based on your AI infrastructure or cloud environment, please contact us to make arrangements.
GPU Programming on Biren AI Accelerators
21 Hours
Biren AI Accelerators are high-performance GPUs designed for AI and HPC workloads with support for large-scale training and inference.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level developers who wish to program and optimize applications using Biren’s proprietary GPU stack, with practical comparisons to CUDA-based environments.
By the end of this training, participants will be able to:
- Understand Biren GPU architecture and memory hierarchy.
- Set up the development environment and use Biren’s programming model.
- Translate and optimize CUDA-style code for Biren platforms.
- Apply performance tuning and debugging techniques.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of Biren SDK in sample GPU workloads.
- Guided exercises focused on porting and performance tuning.
Course Customization Options
- To request a customized training for this course based on your application stack or integration needs, please contact us to make arrangements.
Cambricon MLU Development with BANGPy and Neuware
21 Hours
Cambricon MLUs (Machine Learning Units) are specialized AI chips designed to optimize both inference and training processes in edge computing and datacenter environments.
This instructor-led live training session (available online or onsite) targets intermediate-level developers who want to build and deploy AI models leveraging the BANGPy framework and Neuware SDK on Cambricon MLU hardware.
Upon completing this training, participants will be able to:
- Set up and configure the development environment for BANGPy and Neuware.
- Develop and optimize Python- and C++-based models specifically for Cambricon MLUs.
- Deploy models to edge devices and data centers running the Neuware runtime.
- Integrate machine learning workflows with MLU-specific acceleration features.
Course Format
- Interactive lectures and discussions.
- Practical, hands-on development and deployment using BANGPy and Neuware.
- Guided exercises concentrating on optimization, integration, and testing.
Customization Options
- To arrange customized training tailored to your specific Cambricon device model or use case, please contact us.
Introduction to CANN for AI Framework Developers
7 Hours
CANN (Compute Architecture for Neural Networks) is Huawei’s toolkit for AI computing, designed to compile, optimize, and deploy AI models on Ascend AI processors.
This instructor-led live training, available online or onsite, is designed for beginner-level AI developers. It aims to help participants understand how CANN integrates into the model lifecycle, from training to deployment, and how it interacts with frameworks such as MindSpore, TensorFlow, and PyTorch.
Upon completing this training, participants will be able to:
- Comprehend the purpose and architecture of the CANN toolkit.
- Configure a development environment using CANN and MindSpore.
- Convert and deploy a simple AI model on Ascend hardware.
- Acquire foundational knowledge to support future CANN optimization or integration projects.
Course Format
- Interactive lectures and discussions.
- Practical labs focusing on simple model deployment.
- Step-by-step guidance through the CANN toolchain and integration points.
Course Customization Options
- To arrange customized training for this course, please contact us.
CANN for Edge AI Deployment
14 Hours
The Huawei Ascend CANN toolkit enables powerful AI inference on edge devices such as the Ascend 310, providing essential tools for compiling, optimizing, and deploying models in environments with constrained compute and memory resources.
This instructor-led, live training (available online or onsite) targets intermediate-level AI developers and integrators who want to deploy and optimize models on Ascend edge devices using the CANN toolchain.
Upon completing this training, participants will be able to:
- Prepare and convert AI models for the Ascend 310 using CANN tools.
- Build lightweight inference pipelines utilizing MindSpore Lite and AscendCL.
- Optimize model performance for environments with limited compute and memory.
- Deploy and monitor AI applications in real-world edge use cases.
Course Format
- Interactive lectures and demonstrations.
- Hands-on lab exercises featuring edge-specific models and scenarios.
- Live deployment examples on virtual or physical edge hardware.
Customization Options
- To request customized training for this course, please contact us to arrange it.
Understanding Huawei’s AI Compute Stack: From CANN to MindSpore
14 Hours
Huawei's AI stack, spanning from the low-level CANN SDK to the high-level MindSpore framework, provides a seamlessly integrated environment for AI development and deployment, specifically optimized for Ascend hardware.
This instructor-led, live training (available online or onsite) is designed for technical professionals at beginner to intermediate levels who want to understand how CANN and MindSpore components collaborate to support AI lifecycle management and make informed infrastructure decisions.
By the end of this training, participants will be able to:
- Comprehend the layered architecture of Huawei's AI compute stack.
- Recognize how CANN facilitates model optimization and hardware-level deployment.
- Assess the MindSpore framework and its toolchain in comparison to industry alternatives.
- Position Huawei's AI stack within enterprise, cloud, or on-premises environments.
Course Format
- Interactive lectures and discussions.
- Live system demonstrations and case-based walkthroughs.
- Optional guided labs covering the model flow from MindSpore to CANN.
Customization Options
- For customized training on this course, please contact us to make arrangements.
Optimizing Neural Network Performance with CANN SDK
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) serves as Huawei’s foundational AI compute platform, enabling developers to refine and optimize the performance of neural networks deployed on Ascend AI processors.
This instructor-led training, available either online or onsite, is designed for senior AI developers and system engineers who aim to boost inference performance by leveraging CANN’s advanced tools, such as the Graph Engine, TIK, and custom operator development capabilities.
Upon completion of this training, participants will be able to:
- Gain a deep understanding of CANN's runtime architecture and its performance lifecycle.
- Use profiling tools and the Graph Engine to analyze and enhance performance.
- Develop and optimize custom operators using TIK and TVM.
- Address memory bottlenecks and increase model throughput.
Course Format
- Interactive lectures and discussions.
- Practical labs featuring real-time profiling and operator tuning.
- Optimization exercises based on edge-case deployment scenarios.
Customization Options
- For personalized training arrangements, please contact us to discuss your requirements.
CANN SDK for Computer Vision and NLP Pipelines
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) offers robust deployment and optimization tools designed for real-time AI applications in computer vision and NLP, particularly when leveraging Huawei Ascend hardware.
This guided, live training session (available online or on-site) targets AI professionals with intermediate skills who aim to develop, deploy, and optimize vision and language models using the CANN SDK for practical, production-level scenarios.
Upon completion of this training, participants will be able to:
- Deploy and optimize CV and NLP models using CANN and AscendCL.
- Use CANN utilities to convert models and integrate them into operational pipelines.
- Improve inference performance for tasks such as detection, classification, and sentiment analysis.
- Build real-time CV/NLP pipelines for edge or cloud deployment environments.
Course Format
- Interactive lectures combined with live demonstrations.
- Practical laboratory sessions focusing on model deployment and performance profiling.
- Designing live pipelines based on real-world CV and NLP use cases.
Customization Options
- For inquiries regarding customized training for this course, please reach out to us to coordinate arrangements.
Building Custom AI Operators with CANN TIK and TVM
14 Hours
CANN TIK (Tensor Instruction Kernel) and Apache TVM facilitate the advanced optimization and customization of AI model operators for Huawei Ascend hardware.
This instructor-led, live training (available online or onsite) is designed for advanced system developers who want to build, deploy, and fine-tune custom operators for AI models using CANN's TIK programming model and TVM compiler integration.
Upon completing this training, participants will be able to:
- Write and test custom AI operators utilizing the TIK DSL for Ascend processors.
- Integrate custom operators into the CANN runtime and execution graph.
- Leverage TVM for operator scheduling, auto-tuning, and benchmarking.
- Debug and optimize instruction-level performance for custom computation patterns.
Course Format
- Interactive lectures and demonstrations.
- Hands-on coding of operators using TIK and TVM pipelines.
- Testing and tuning on Ascend hardware or simulators.
Course Customization Options
- To request a customized training session for this course, please contact us to arrange it.
Migrating CUDA Applications to Chinese GPU Architectures
21 Hours
Chinese GPU architectures, including Huawei Ascend, Biren, and Cambricon MLUs, provide CUDA alternatives specifically designed for the domestic AI and high-performance computing (HPC) markets.
This instructor-led live training (available online or onsite) is designed for advanced-level GPU programmers and infrastructure specialists seeking to migrate and optimize existing CUDA applications for deployment on Chinese hardware platforms.
Upon completion of this training, participants will be able to:
- Evaluate the compatibility of existing CUDA workloads with Chinese chip alternatives.
- Port CUDA codebases to Huawei CANN, Biren SDK, and Cambricon BANGPy environments.
- Compare performance metrics and identify optimization opportunities across different platforms.
- Address practical challenges related to cross-architecture support and deployment.
Course Format
- Interactive lectures and discussions.
- Hands-on labs for code translation and performance comparison.
- Guided exercises focused on multi-GPU adaptation strategies.
Course Customization Options
- To request customized training for this course based on your specific platform or CUDA project, please contact us to make arrangements.
Performance Optimization on Ascend, Biren, and Cambricon
21 Hours
Ascend, Biren, and Cambricon are leading AI hardware platforms in China, each providing distinct acceleration and profiling utilities for large-scale AI workloads in production environments.
This instructor-led live training session, available both online and onsite, targets advanced AI infrastructure and performance engineers seeking to enhance model inference and training processes across various Chinese AI chip architectures.
Upon completing this course, participants will be able to:
- Evaluate model performance on Ascend, Biren, and Cambricon systems.
- Identify system bottlenecks and inefficiencies in memory and computation.
- Implement optimizations at the graph, kernel, and operator levels.
- Tune deployment pipelines for higher throughput and lower latency.
Course Format
- Engaging lectures and interactive discussions.
- Practical application of profiling and optimization tools across each platform.
- Supervised exercises centered on real-world tuning scenarios.
Customization Options
- For a tailored training experience based on your specific performance environment or model type, please reach out to us to arrange details.