
21 CKAD courses delivered Live Online


Kubernetes Bootcamp (CKAD)

By Nexus Human

Duration: 5 Days (30 CPD hours)

This course is intended for: anyone who plans to work with Kubernetes at any level or tier of involvement; any company or individual who wants to advance their knowledge of the cloud environment; application developers; operations developers; IT directors/managers.

Overview: covers all topics required by the CKAD exam, including:
* Deploy applications to a Kubernetes cluster
* Pods, ReplicaSets, Deployments, DaemonSets
* Self-healing and observable applications
* Multi-container Pod design
* Application configuration via ConfigMaps and Secrets
* Administer cluster use for your team
* A systematic understanding of Kubernetes architecture
* Troubleshooting and debugging tools
* Kubernetes networking and services

Kubernetes is a cloud orchestration platform providing reliability, replication, and stability while maximizing resource utilization for applications and services. By the conclusion of this hands-on training, you will return to work with all the commands and practical skills needed to empower your team to succeed, as well as knowledge of important concepts such as Kubernetes architecture and container orchestration. We prioritize covering all objectives and concepts necessary for passing the Certified Kubernetes Application Developer (CKAD) exam. You will command and configure a high-availability Kubernetes environment (and later, build your own!) capable of demonstrating all K8s features discussed and demonstrated in this course. Your week of intensive, hands-on training will conclude with a mock CKAD exam that matches the real thing.

KUBERNETES ARCHITECTURE
* Components
* Understand API deprecations
* Containers
* Define, build and modify container images
* Pods
* Master Services
* Node Services
* K8s Services
* YAML Essentials
* Creating a K8s Cluster

KUBECTL COMMANDS
* Kubernetes Resources
* Kubernetes Namespace
* Kubernetes Contexts

PODS
* What is a Pod?
* Create, List, Delete Pods
* How to Access Running Pods
* Kubernetes Resources
* Managing Cloud Resource Consumption
* Multi-Container Pod Design
* Security Contexts
* Init Containers
* Understand multi-container Pod design patterns (e.g. sidecar, init and others)
* Pod Wellness Tracking

NETWORKING
* Packet Forwarding
* ClusterIP and NodePort Services
* Provide and troubleshoot access to applications via services
* Ingress Controllers
* Use Ingress rules to expose applications
* NetworkPolicy resource
* Demonstrate basic understanding of NetworkPolicies
* Network Plugins
* Defining the Service Mesh
* Service mesh configuration examples

REPLICASETS
* Services
* ReplicaSet Function
* Deploying ReplicaSets

DEPLOYMENTS
* Deployment Object
* Updating/Rolling Back Deployments
* Understand Deployments and how to perform rolling updates
* Deployment Strategies
* Use Kubernetes primitives to implement common deployment strategies (e.g. blue/green or canary)
* Scaling ReplicaSets
* Autoscaling

LABELS AND ANNOTATIONS
* Labels
* Annotations
* Node Taints and Tolerations

JOBS
* The K8s Job and CronJob
* Understand Jobs and CronJobs
* Immediate vs. scheduled use

APPLICATION CONFIGURATION
* Understanding and defining resource requirements, limits and quotas
* ConfigMaps
* Create and consume Secrets
* Patching
* Custom Resource Definitions
* Discover and use resources that extend Kubernetes (CRDs)
* Managing ConfigMaps and Secrets as Volumes

STORAGE
* Static and dynamic persistent volumes via StorageClass
* K8s volume configuration
* Utilize persistent and ephemeral volumes
* Adding persistent storage to containers via persistent volume claims

INTRODUCTION TO HELM
* Helm Introduction
* Charts
* Use the Helm package manager to deploy existing packages

APPLICATION SECURITY
* Understand authentication, authorization and admission control
* Understand ServiceAccounts
* Understand SecurityContexts

APPLICATION OBSERVABILITY AND MAINTENANCE
* Use provided tools to monitor Kubernetes applications
* How to Troubleshoot Kubernetes
* Basic and Advanced Logging Techniques
* Utilize container logs
* Accessing containers with Port-Forward
* Debugging in Kubernetes

HANDS-ON LABS
* Define, build and modify container images
* Deploy Kubernetes using Ansible
* Isolating Resources with Kubernetes Namespaces
* Cluster Access with Kubernetes Context
* Listing Resources with kubectl get
* Examining Resources with kubectl describe
* Create and Configure Basic Pods
* Debugging via kubectl port-forward
* Imperative vs. Declarative Resource Creation
* Performing Commands inside a Pod
* Understanding Labels and Selectors
* Insert an Annotation
* Create and Configure a ReplicaSet
* Writing a Deployment Manifest
* Perform rolling updates and rollbacks with Deployments
* Horizontal Scaling with kubectl scale
* Implement probes and health checks
* Understanding and defining resource requirements, limits and quotas
* Understand Jobs and CronJobs
* Best Practices for Container Customization
* Persistent Configuration with ConfigMaps
* Create and Consume Secrets
* Understand the Init container multi-container Pod design pattern
* Using PersistentVolumeClaims for Storage
* Dynamically Provision PersistentVolumes with NFS
* Deploy a NetworkPolicy
* Provide and troubleshoot access to applications via services
* Use Ingress rules to expose applications
* Understand the Sidecar multi-container Pod design pattern
* Setting up a single-tier service mesh
* Tainted Nodes and Tolerations
* Use the Helm package manager to deploy existing packages
* A Completed Project
* Install Jenkins Using Helm and Run a Demo Job
* Custom Resource Definitions (CRDs)
* Patching
* Understanding Security Contexts for Cluster Access Control
* Utilize container logs
* Advanced Logging Techniques
* Troubleshooting
* Calicoctl
* Deploy a Kubernetes Cluster using Kubeadm
* Monitoring Applications in Kubernetes
* Resource-Based Autoscaling
* Create ServiceAccounts for use with the Kubernetes Dashboard
* Saving Your Progress With GitHub
* CKAD Practice Drill
* Alta Kubernetes Course Specific Updates
* Sourcing Secrets from HashiCorp Vault
* Example CKAD Test Questions
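Several of the objectives above (Deployments, resource requests and limits, probes) come together in a single workload definition. The sketch below is not part of the course materials; it is a minimal illustration using the official Kubernetes Python client and assumes a kubeconfig pointing at a practice cluster. The Deployment name, image and namespace are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()  # uses the current kubeconfig context
apps = client.AppsV1Api()

# Container with resource requests/limits and an HTTP liveness probe,
# two recurring CKAD exam objectives.
container = client.V1Container(
    name="web",
    image="nginx:1.25",  # placeholder image
    ports=[client.V1ContainerPort(container_port=80)],
    resources=client.V1ResourceRequirements(
        requests={"cpu": "100m", "memory": "128Mi"},
        limits={"cpu": "250m", "memory": "256Mi"},
    ),
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/", port=80),
        initial_delay_seconds=5,
        period_seconds=10,
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```

In the course itself the same object would normally be written as a YAML manifest and applied with kubectl; the client call above is simply the programmatic equivalent.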

Kubernetes Bootcamp (CKAD)
Delivered on-request, online
Price on Enquiry

Red Hat Cloud-native Microservices Development with Quarkus (DO378)

By Nexus Human

Duration: 5 Days (30 CPD hours)

This course is designed for application developers.

Overview:
* Deploy microservice applications on Red Hat OpenShift Container Platform.
* Build a microservice application with Quarkus.
* Implement unit and integration tests for microservices.
* Use the config specification to inject data into a microservice.
* Secure a microservice using OAuth.
* Build and deploy native Quarkus applications.
* Develop microservice-based applications with Quarkus and OpenShift.

Many enterprises are looking for a way to take advantage of cloud-native architectures, but many do not know the best approach. Quarkus is an exciting new technology that brings the reliability, familiarity, and maturity of Java Enterprise with a container-ready, lightning-fast deployment time. Red Hat Cloud-native Microservices Development with Quarkus (DO378) emphasizes learning architectural principles and implementing microservices based on Quarkus and OpenShift. You will build on application development fundamentals and focus on how to develop, monitor, test, and deploy modern microservices applications. This course is based on OpenShift 4.5 and Quarkus 1.7.

1 - DESCRIBE MICROSERVICE ARCHITECTURES
* Describe components and patterns of microservice-based application architectures.

2 - IMPLEMENT A MICROSERVICE WITH QUARKUS
* Describe the specifications in Quarkus, implement a microservice with some of the specifications, and deploy it to an OpenShift cluster.

3 - BUILD MICROSERVICE APPLICATIONS WITH QUARKUS
* Build a persistent and configurable distributed Quarkus microservices application.

4 - IMPLEMENT FAULT TOLERANCE
* Implement fault tolerance in a microservice architecture.

5 - BUILD AND DEPLOY NATIVE QUARKUS APPLICATIONS
* Describe Quarkus in native mode and describe its deployment on OpenShift Container Platform.

6 - TEST MICROSERVICES
* Implement unit and integration tests for microservices.

7 - CREATE APPLICATION HEALTH CHECKS
* Create a health check for a microservice.

8 - SECURE MICROSERVICES
* Secure microservice endpoints and communication.

9 - MONITOR MICROSERVICES
* Monitor the operation of a microservice using metrics and distributed tracing.

ADDITIONAL COURSE DETAILS: The Nexus Humans Red Hat Cloud-native Microservices Development with Quarkus (DO378) training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Red Hat Cloud-native Microservices Development with Quarkus (DO378) course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
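One objective above is to "use the config specification to inject data into a microservice". In the course this is handled with Quarkus/MicroProfile Config; purely as an illustration of the Kubernetes side of that pattern (not a course lab), the sketch below creates a ConfigMap and a Pod that imports its keys as environment variables, using the official Kubernetes Python client. The names, namespace and image are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# ConfigMap holding externalised settings for the service.
cm = client.V1ConfigMap(
    api_version="v1",
    kind="ConfigMap",
    metadata=client.V1ObjectMeta(name="quote-config"),
    data={"GREETING_MESSAGE": "hello from the cluster"},
)
core.create_namespaced_config_map(namespace="default", body=cm)

# Pod that imports every key of the ConfigMap as an environment variable.
pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="quote-service"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="quote-service",
                image="quay.io/example/quote-service:1.0",  # placeholder image
                env_from=[
                    client.V1EnvFromSource(
                        config_map_ref=client.V1ConfigMapEnvSource(name="quote-config")
                    )
                ],
            )
        ]
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)
```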

Red Hat Cloud-native Microservices Development with Quarkus (DO378)
Delivered on-request, online
Price on Enquiry

Architecting Hybrid Cloud Infrastructure with Anthos

By Nexus Human

Duration: 2 Days (12 CPD hours)

This class is primarily intended for the following participants: technical employees using GCP, including customer companies, partners and system integrators; deployment engineers, cloud architects, cloud administrators, system engineers, and SysOps/DevOps engineers; individuals using GCP to create, integrate, or modernize solutions using secure, scalable microservices architectures in hybrid environments.

Overview:
* Connect and manage Anthos GKE clusters from the GCP Console, whether clusters are part of Anthos on Google Cloud or Anthos deployed on VMware.
* Understand how service mesh proxies are installed, configured and managed.
* Configure centralized logging, monitoring, tracing, and service visualizations wherever the Anthos GKE clusters are hosted.
* Understand and configure fine-grained traffic management.
* Use service mesh security features for service-service authentication, user authentication, and policy-based service authorization.
* Install a multi-service application spanning multiple clusters in a hybrid environment.
* Understand how services communicate across clusters.
* Migrate services between clusters.
* Install Anthos Config Management, use it to enforce policies, and explain how it can be used across multiple clusters.

This two-day instructor-led course prepares students to modernize, manage, and observe their applications using Kubernetes, whether the application is deployed on-premises or on Google Cloud Platform (GCP). Through presentations and hands-on labs, participants explore and deploy using Kubernetes Engine (GKE), GKE Connect, the Istio service mesh and Anthos Config Management capabilities that enable operators to work with modern applications even when split among multiple clusters hosted by multiple providers, or on-premises.

ANTHOS OVERVIEW
* Describe challenges of hybrid cloud
* Discuss modern solutions
* Describe the Anthos Technology Stack

MANAGING HYBRID CLUSTERS USING KUBERNETES ENGINE
* Understand Anthos GKE hybrid environments, with Admin and User clusters
* Register and authenticate remote Anthos GKE clusters in GKE Hub
* View and manage registered clusters, in cloud and on-premises, using GKE Hub
* View workloads in all clusters from GKE Hub
* Lab: Managing Hybrid Clusters using Kubernetes Engine

INTRODUCTION TO SERVICE MESH
* Understand service mesh, and the problems it solves
* Understand Istio architecture and components
* Explain the Istio on GKE add-on and its lifecycle, vs. OSS Istio
* Understand request network traffic flow in a service mesh
* Create a GKE cluster with a service mesh
* Configure a multi-service application with service mesh
* Enable external access using an ingress gateway
* Explain the multi-service example applications: Hipster Shop and Bookinfo
* Lab: Installing Open Source Istio on Kubernetes Engine
* Lab: Installing the Istio on GKE Add-On with Kubernetes Engine

OBSERVING SERVICES USING SERVICE MESH ADAPTERS
* Understand the service mesh flexible adapter model
* Understand service mesh telemetry processing
* Explain Stackdriver configurations for logging and monitoring
* Compare telemetry defaults for cloud and on-premises environments
* Configure and view custom metrics using service mesh
* View cluster and service metrics with pre-configured dashboards
* Trace microservice calls with timing data using service mesh adapters
* Visualize and discover service attributes with service mesh
* Lab: Telemetry and Observability with Istio

MANAGING TRAFFIC ROUTING WITH SERVICE MESH
* Understand the service mesh abstract model for traffic management
* Understand service mesh service discovery and load balancing
* Review and compare traffic management use cases and configurations
* Understand ingress configuration using service mesh
* Visualize traffic routing with live generated requests
* Configure a service mesh gateway to allow access to services from outside the mesh
* Apply virtual services and destination rules for version-specific routing
* Route traffic based on application-layer configuration
* Shift traffic from one service version to another, with fine-grained control, like a canary deployment
* Lab: Managing Traffic Routing with Istio and Envoy

MANAGING POLICIES AND SECURITY WITH SERVICE MESH
* Understand authentication and authorization in service mesh
* Explain the mTLS flow for service-to-service communication
* Adopt mutual TLS authentication across the service mesh incrementally
* Enable end-user authentication for the frontend service
* Use service mesh access control policies to secure access to the frontend service
* Lab: Managing Policies and Security with Service Mesh

MANAGING POLICIES USING ANTHOS CONFIG MANAGEMENT
* Understand the challenge of managing resources across multiple clusters
* Understand how a Git repository is used as a configuration source of truth
* Explain the Anthos Config Management components and object lifecycle
* Install and configure Anthos Config Management, operators, tools, and the related Git repository
* Verify cluster configuration compliance and drift management
* Update workload configuration using repo changes
* Lab: Managing Policies in Kubernetes Engine using Anthos Config

CONFIGURING ANTHOS GKE FOR MULTI-CLUSTER OPERATION
* Understand how multiple clusters work together using DNS, root CA, and service discovery
* Explain service mesh control-plane architectures for multi-cluster operation
* Configure a multi-service application using service mesh across multiple clusters with multiple control planes
* Configure a multi-service application using service mesh across multiple clusters with a shared control plane
* Configure service naming/discovery between clusters
* Review ServiceEntries for cross-cluster service discovery
* Migrate a workload from a remote cluster to an Anthos GKE cluster
* Lab: Configuring GKE for Multi-Cluster Operation with Istio
* Lab: Configuring GKE for Shared Control Plane Multi-Cluster Operation
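The traffic-management module above shifts traffic between service versions "like a canary deployment" using virtual services and destination rules. As a rough illustration (not taken from the course labs), the sketch below creates an Istio VirtualService that sends 90% of traffic to subset v1 and 10% to v2 of the Bookinfo "reviews" service, through the Kubernetes Python client's CustomObjectsApi. It assumes Istio is installed and that a DestinationRule already defines the v1/v2 subsets; names and namespace are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

# VirtualService shifting 90% of traffic to v1 and 10% to a v2 canary.
# Assumes a DestinationRule defining subsets "v1" and "v2" already exists.
virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "reviews"},
    "spec": {
        "hosts": ["reviews"],
        "http": [
            {
                "route": [
                    {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                    {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
                ]
            }
        ],
    },
}

custom.create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="default",
    plural="virtualservices",
    body=virtual_service,
)
```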

Architecting Hybrid Cloud Infrastructure with Anthos
Delivered on-request, online
Price on Enquiry

Red Hat OpenShift Administration II: Operating a Production Kubernetes Cluster (DO280)

By Nexus Human

Duration: 5 Days (30 CPD hours)

Audience for this course: this course is designed for system administrators responsible for creating OpenShift Enterprise instances, deploying applications, creating process customizations, and managing instances and projects.

Prerequisites for this course:
* Have taken Red Hat Enterprise Linux Administration I and II (RH124 and RH134), or have equivalent Red Hat Enterprise Linux system administration experience
* Be certified as a Red Hat Certified System Administrator (RHCSA), or have equivalent Red Hat Enterprise Linux system administration experience
* Be certified as a Red Hat Certified Engineer (RHCE)

Overview: Learn to install, configure, and manage OpenShift Enterprise by Red Hat instances. OpenShift Enterprise Administration (DO280) prepares the system administrator to install, configure, and manage OpenShift Enterprise by Red Hat instances. OpenShift Enterprise, Red Hat's platform-as-a-service (PaaS) offering, provides pre-defined deployment environments for applications of all types through its use of container technology. This creates an environment that supports DevOps principles such as reduced time to market and continuous delivery. In this course, students will learn how to install and configure an instance of OpenShift Enterprise, test the instance by deploying a real-world application, and manage projects/applications through hands-on labs.

Course content summary:
* Container concepts
* Configuring resources with the command line interface
* Building a pod
* Enabling services for a pod
* Creating routes
* Downloading and configuring images
* Rolling back and activating deployments
* Creating custom S2I images

This course will empower you to install and administer the Red Hat OpenShift Container Platform, with hands-on, lab-based materials that show you how to install, configure, and manage OpenShift clusters and deploy sample applications to further understand how developers will use the platform. This course is based on Red Hat Enterprise Linux 7.5 and OpenShift Container Platform 3.9. OpenShift is a containerized application platform that allows your enterprise to manage container deployments and scale your applications using Kubernetes. OpenShift provides predefined application environments and builds upon Kubernetes to provide support for DevOps principles such as reduced time to market, infrastructure-as-code, continuous integration (CI), and continuous delivery (CD).

1 - INTRODUCTION TO RED HAT OPENSHIFT ENTERPRISE
* Review features and architecture of OpenShift Enterprise.

2 - INSTALL OPENSHIFT ENTERPRISE
* Install OpenShift Enterprise and configure a master and node.

3 - EXECUTE COMMANDS
* Execute commands using the command line interface.

4 - BUILD APPLICATIONS
* Create, build, and deploy applications to an OpenShift Enterprise instance.

5 - PERSISTENT STORAGE
* Provision persistent storage and use it for the internal registry.

6 - BUILD APPLICATIONS WITH SOURCE-TO-IMAGE (S2I)
* Create and build applications with S2I and templates.

7 - MANAGE THE SYSTEM
* Use OpenShift Enterprise components to manage deployed applications.

8 - CUSTOMIZE OPENSHIFT ENTERPRISE
* Customize resources and processes used by OpenShift Enterprise.

9 - COMPREHENSIVE REVIEW
* Practice and demonstrate knowledge and skills learned in the course.

10 - NOTE
* The course outline is subject to change with technology advances and as the nature of the underlying job evolves. For questions or confirmation on a specific objective or topic, please contact us.

ADDITIONAL COURSE DETAILS: The Nexus Humans Red Hat OpenShift Administration II: Operating a Production Kubernetes Cluster (DO280) training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Red Hat OpenShift Administration II: Operating a Production Kubernetes Cluster (DO280) course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
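The course content summary above includes creating routes to expose applications. A Route is an OpenShift-specific resource, so from the official Kubernetes Python client it is handled through CustomObjectsApi rather than a typed model. The sketch below is a minimal, hypothetical example, not taken from the course labs; it assumes a Service named "hello" listening on port 8080 already exists in the project.

```python
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

# Route exposing the "hello" Service outside the cluster.
route = {
    "apiVersion": "route.openshift.io/v1",
    "kind": "Route",
    "metadata": {"name": "hello"},
    "spec": {
        "to": {"kind": "Service", "name": "hello"},
        "port": {"targetPort": 8080},
    },
}

custom.create_namespaced_custom_object(
    group="route.openshift.io",
    version="v1",
    namespace="myproject",  # placeholder project/namespace
    plural="routes",
    body=route,
)
```

The command-line equivalent covered in class is typically `oc expose service hello`; the API call above simply shows what that shorthand creates.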

Red Hat OpenShift Administration II: Operating a Production Kubernetes Cluster (DO280)
Delivered on-request, online
Price on Enquiry

Getting Started with Kubernetes (TTDV7590)

By Nexus Human

Duration: 2 Days (12 CPD hours)

This is an introductory-level class for intermediate-skilled team members. Students should have prior software development experience or exposure, have some basic familiarity with containers, and should also be able to navigate the command line.

Overview: this course is approximately 50% hands-on, combining expert lecture, real-world demonstrations and group discussions with machine-based practical labs and exercises. Our engaging instructors and mentors are highly experienced practitioners who bring years of current 'on-the-job' experience into every classroom. Working in a hands-on learning environment led by our expert facilitator, students will explore:
* What a Kubernetes cluster is, and how to deploy and manage them on-premises and in the cloud.
* How Kubernetes fits into the cloud-native ecosystem, and how it interfaces with other important technologies such as Docker.
* The major Kubernetes components that let us deploy and manage applications in a modern cloud-native fashion.
* How to define and manage applications with declarative manifest files that should be version-controlled and treated like code.

Containerization has taken the IT world by storm in the last few years. Large software houses, starting with Google and Amazon, are running significant portions of their production load in containers. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. This is a hands-on, workshop-style course that teaches the core features and functionality of Kubernetes. You will leave this course knowing how to build a Kubernetes cluster, and how to deploy and manage applications on that cluster.

GETTING STARTED
* Our sample application
* Kubernetes concepts
* Declarative vs. imperative
* Kubernetes network model
* First contact with kubectl
* Setting up Kubernetes

WORKING WITH CONTAINERS
* Running our first containers on Kubernetes
* Exposing containers
* Shipping images with a registry
* Running our application on Kubernetes

EXPLORING THE KUBERNETES DASHBOARD
* The Kubernetes dashboard
* Security implications of kubectl apply
* Scaling a deployment
* Daemon sets
* Labels and selectors
* Rolling updates

NEXT STEPS
* Accessing logs from the CLI
* Managing stacks with Helm
* Namespaces
* Next steps
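The outline contrasts declarative and imperative work and includes scaling a deployment. As a small illustration outside the course labs, the sketch below performs an imperative scale (the programmatic counterpart of `kubectl scale deployment web --replicas=5`) with the official Kubernetes Python client and then reads the Deployment back; the Deployment name and namespace are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Imperative-style scale: patch only the replica count in the scale subresource,
# leaving the rest of the Deployment spec untouched.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# Read the Deployment back and compare desired vs. currently available replicas.
dep = apps.read_namespaced_deployment(name="web", namespace="default")
print("desired:", dep.spec.replicas, "available:", dep.status.available_replicas)
```

The declarative alternative discussed in class is to change `replicas` in the version-controlled manifest and re-apply it, so the file remains the source of truth.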

Getting Started with Kubernetes (TTDV7590)
Delivered on-request, online
Price on Enquiry

VMware Tanzu Kubernetes Grid: Install, Configure, Manage [V1.5]

By Nexus Human

Duration: 3 Days (18 CPD hours)

This course is intended for platform operators who are responsible for deploying and managing Tanzu Kubernetes clusters.

Overview: by the end of the course, you should be able to meet the following objectives:
* Describe how Tanzu Kubernetes Grid fits in the VMware Tanzu portfolio
* Describe the Tanzu Kubernetes Grid architecture
* Deploy and manage Tanzu Kubernetes Grid management clusters
* Deploy and manage Tanzu Kubernetes Grid workload clusters
* Deploy, configure, and manage Tanzu Kubernetes Grid packages
* Perform basic troubleshooting

During this three-day course, you focus on installing VMware Tanzu Kubernetes Grid on a VMware vSphere environment and then provisioning and managing Tanzu Kubernetes Grid clusters. The course covers how to install Tanzu Kubernetes Grid packages for image registry, authentication, logging, ingress, multi-pod network interfaces, service discovery, and monitoring. The concepts learned in this course are transferable for users who must install Tanzu Kubernetes Grid on other supported clouds.

COURSE INTRODUCTION
* Introductions and course logistics
* Course objectives

INTRODUCING VMWARE TANZU KUBERNETES GRID
* Identify the VMware Tanzu products responsible for Kubernetes life cycle management and describe the main differences between them
* Explain the core concepts of Tanzu Kubernetes Grid, including bootstrap, Tanzu Kubernetes Grid management and workload clusters, and the role of Cluster API
* List the components of a Tanzu Kubernetes Grid instance
* Illustrate how to use the Tanzu CLI
* Identify the requirements for a bootstrap machine
* Define the Carvel tool set
* Define Cluster API
* Identify the infrastructure providers
* List the Cluster API controllers
* Identify the Cluster API Custom Resource Definitions

MANAGEMENT CLUSTERS
* List the requirements for deploying a management cluster
* Differentiate between deploying on vSphere 6.7 Update 3 and vSphere 7
* Describe the components of NSX Advanced Load Balancer
* Explain how Tanzu Kubernetes Grid integrates with NSX Advanced Load Balancer
* Explain how Kubernetes manages authentication
* Define Pinniped
* Define Dex
* Describe the Pinniped authentication workflow
* List the steps to install a Tanzu Kubernetes Grid management cluster
* Summarize the events of a management cluster creation
* Demonstrate how to use commands when working with management clusters

TANZU KUBERNETES CLUSTERS
* List the steps to build a custom image
* Describe the available customizations
* Identify the options for deploying Tanzu Kubernetes Grid clusters
* Explain how Tanzu Kubernetes Grid clusters are created
* Discuss which VMs make up a Tanzu Kubernetes Grid cluster
* List the pods that run on a Tanzu Kubernetes cluster
* Describe the Tanzu Kubernetes Grid core add-ons that are installed on a cluster

CONFIGURING AND MANAGING TANZU KUBERNETES GRID INSTANCES
* Define the Tanzu Kubernetes Grid packages
* Describe the Harbor image registry
* Define Fluent Bit
* Identify the logs that Fluent Bit collects
* Explain basic Fluent Bit configuration
* Describe the Contour ingress controller
* Demonstrate how to install Contour on a Tanzu Kubernetes Grid cluster
* Demonstrate how to install service discovery with ExternalDNS
* Define Multus CNI
* Define Prometheus
* Define Grafana

TROUBLESHOOTING
* Discuss the various Tanzu Kubernetes Grid logs
* Identify the location of Tanzu Kubernetes Grid logs
* Explain the purpose of crash diagnostics
* Demonstrate how to use SSH to connect to a Tanzu Kubernetes Grid VM
* Describe the steps for troubleshooting a failed cluster deployment

ADDITIONAL COURSE DETAILS: Notes: delivery by TDSynex, Exit Certified and New Horizons, a VMware Authorised Training Centre (VATC). The Nexus Humans VMware Tanzu Kubernetes Grid: Install, Configure, Manage [V1.5] training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the VMware Tanzu Kubernetes Grid: Install, Configure, Manage [V1.5] course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
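Once a workload cluster is provisioned, the course has you work with it through its kubeconfig context. The sketch below is a hypothetical post-provisioning health check rather than a course lab: it loads a named context (such as one produced by `tanzu cluster kubeconfig get`) and lists the nodes and kube-system pods with the official Kubernetes Python client. The context name is illustrative only.

```python
from kubernetes import client, config

# Select the workload cluster's context from the local kubeconfig;
# the context name below is a placeholder.
config.load_kube_config(context="tkg-workload-01-admin@tkg-workload-01")

core = client.CoreV1Api()

# List nodes and their kubelet versions as a quick sanity check.
for node in core.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)

# Confirm the system pods (CNI, CoreDNS, etc.) are running.
for pod in core.list_namespaced_pod(namespace="kube-system").items:
    print(pod.metadata.name, pod.status.phase)
```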

VMware Tanzu Kubernetes Grid: Install, Configure, Manage [V1.5]
Delivered on-request, online
Price on Enquiry

Red Hat OpenShift Installation Lab (DO322)

By Nexus Human

Duration: 3 Days (18 CPD hours)

This course is intended for:
* Cluster administrators (junior systems administrators, junior cloud administrators) interested in deploying additional clusters to meet increasing demands from their organizations.
* Cluster engineers (senior systems administrators, senior cloud administrators, cloud engineers) interested in the planning and design of OpenShift clusters to meet the performance and reliability needs of different workloads, and in creating work books for these installations.
* Site reliability engineers (SREs) interested in deploying test bed clusters to validate new settings, updates, customizations, operational procedures, and responses to incidents.

Overview:
* Validate infrastructure prerequisites for an OpenShift cluster.
* Run the OpenShift installer with custom settings.
* Describe and monitor each stage of the OpenShift installation process.
* Collect troubleshooting information during an ongoing installation, or after a failed installation.
* Complete the configuration of cluster services in a newly installed cluster.

Installing OpenShift on a cloud, virtual, or physical infrastructure: Red Hat OpenShift Installation Lab (DO322) teaches essential skills for installing an OpenShift cluster in a range of environments, from proof of concept to production, and how to identify customizations that may be required because of the underlying cloud, virtual, or physical infrastructure. This course is based on Red Hat OpenShift Container Platform 4.6.

1 - INTRODUCTION TO CONTAINER TECHNOLOGY
* Describe how software can run in containers orchestrated by Red Hat OpenShift Container Platform.

2 - CREATE CONTAINERIZED SERVICES
* Provision a server using container technology.

3 - MANAGE CONTAINERS
* Manipulate prebuilt container images to create and manage containerized services.

4 - MANAGE CONTAINER IMAGES
* Manage the life cycle of a container image from creation to deletion.

5 - CREATE CUSTOM CONTAINER IMAGES
* Design and code a Dockerfile to build a custom container image.

6 - DEPLOY CONTAINERIZED APPLICATIONS ON OPENSHIFT
* Deploy single-container applications on OpenShift Container Platform.

7 - TROUBLESHOOT CONTAINERIZED APPLICATIONS
* Troubleshoot a containerized application deployed on OpenShift.

8 - DEPLOY AND MANAGE APPLICATIONS ON AN OPENSHIFT CLUSTER
* Use various application packaging methods to deploy applications to an OpenShift cluster, then manage their resources.

9 - DESIGN CONTAINERIZED APPLICATIONS FOR OPENSHIFT
* Select a containerization method for an application and create a container to run on an OpenShift cluster.

10 - PUBLISH ENTERPRISE CONTAINER IMAGES
* Create an enterprise registry and publish container images to it.

11 - BUILD APPLICATIONS
* Describe the OpenShift build process, then trigger and manage builds.

12 - CUSTOMIZE SOURCE-TO-IMAGE (S2I) BUILDS
* Customize an existing S2I base image and create a new one.

13 - CREATE APPLICATIONS FROM OPENSHIFT TEMPLATES
* Describe the elements of a template and create a multicontainer application template.

14 - MANAGE APPLICATION DEPLOYMENTS
* Monitor application health and implement various deployment methods for cloud-native applications.

15 - PERFORM COMPREHENSIVE REVIEW
* Create and deploy cloud-native applications on OpenShift.
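One overview objective is to monitor the installation and complete cluster-service configuration afterwards. A common post-install check on OpenShift 4.x is that every ClusterOperator reports Available; the sketch below shows that check through the Python client's CustomObjectsApi, assuming the cluster-admin kubeconfig written by the installer. It is an illustration of the idea, not part of the DO322 lab material.

```python
from kubernetes import client, config

# Assumes KUBECONFIG points at the auth/kubeconfig file produced by the installer.
config.load_kube_config()
custom = client.CustomObjectsApi()

# ClusterOperators are OpenShift custom resources in the config.openshift.io group.
operators = custom.list_cluster_custom_object(
    group="config.openshift.io",
    version="v1",
    plural="clusteroperators",
)

# Print each operator's Available/Degraded conditions so a stuck installation
# stands out immediately.
for op in operators["items"]:
    conditions = {c["type"]: c["status"] for c in op["status"]["conditions"]}
    print(
        op["metadata"]["name"],
        "Available:", conditions.get("Available"),
        "Degraded:", conditions.get("Degraded"),
    )
```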

Red Hat OpenShift Installation Lab (DO322)
Delivered on-request, online
Price on Enquiry

VMware Tanzu Mission Control: Management and Operations 2022

By Nexus Human

Duration: 2 Days (12 CPD hours)

This course is intended for operators and application owners who are responsible for deploying and managing policies for multiple Kubernetes clusters across on-premises and public cloud environments.

Overview: by the end of the course, you should be able to meet the following objectives:
* Describe the VMware Tanzu Mission Control architecture
* Configure user and group access
* Create access, image registry, network, security, quota, and custom policies
* Connect your on-premises vSphere with Tanzu Supervisor cluster to VMware Tanzu Mission Control
* Create, manage, and back up Tanzu Kubernetes clusters
* Perform cluster inspections
* Monitor and secure Kubernetes environments

During this two-day course, you focus on using VMware Tanzu Mission Control to provision and manage Kubernetes clusters. The course covers how to apply access, image registry, network, security, quota, and custom policies to Kubernetes environments. For cluster provisioning and management, the course focuses on deploying, upgrading, backing up and monitoring Kubernetes clusters on VMware vSphere with Tanzu. Given the abstractions of VMware Tanzu Mission Control, the learnings should be transferable to public cloud.

INTRODUCING VMWARE TANZU MISSION CONTROL
* VMware Tanzu Mission Control
* Accessing VMware Tanzu Mission Control
* VMware Cloud services access control
* VMware Tanzu Mission Control architecture

CLUSTER MANAGEMENT
* Attached clusters
* Management clusters
* Provisioned clusters
* Cluster inspections
* Data protection
* VMware Tanzu Observability by Wavefront
* VMware Tanzu Service Mesh

POLICY MANAGEMENT
* Policy management
* Access policies
* Image registry policies
* Network policies
* Security policies
* Quota policies
* Custom policies
* Policy insights
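Tanzu Mission Control lets you define network policies centrally and push them to attached clusters; inside each cluster they broadly correspond to native Kubernetes NetworkPolicy objects (that mapping is an assumption worth checking against the TMC documentation). Purely for orientation, and outside the TMC console or API, the sketch below creates a default-deny ingress NetworkPolicy directly with the official Kubernetes Python client; the namespace is a placeholder.

```python
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

# Default-deny ingress policy: selects every pod in the namespace and allows no ingress.
policy = client.V1NetworkPolicy(
    api_version="networking.k8s.io/v1",
    kind="NetworkPolicy",
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = all pods
        policy_types=["Ingress"],
    ),
)

net.create_namespaced_network_policy(namespace="team-a", body=policy)
```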

VMware Tanzu Mission Control: Management and Operations 2022
Delivered on-request, online
Price on Enquiry

Getting Started with Google Kubernetes Engine

By Nexus Human

Duration: 1 Day (6 CPD hours)

This course is intended for the following participants: application developers, cloud solutions architects, DevOps engineers, and IT managers; individuals using Google Cloud Platform to create new solutions or to integrate existing systems, application environments, and infrastructure with the Google Cloud Platform.

Overview: at the end of the course, students will be able to:
* Understand container basics.
* Containerize an existing application.
* Understand Kubernetes concepts and principles.
* Deploy applications to Kubernetes using the CLI.
* Set up a continuous delivery pipeline using Jenkins.

Learn to containerize workloads in Docker containers, deploy them to Kubernetes clusters provided by Google Kubernetes Engine, and scale those workloads to handle increased traffic. Students will also learn how to continuously deploy new code in a Kubernetes cluster to provide application updates.

INTRODUCTION TO CONTAINERS AND DOCKER
* Acquaint yourself with containers, Docker, and the Google Container Registry.
* Create a container.
* Package a container using Docker.
* Store a container image in Google Container Registry.
* Launch a Docker container.

KUBERNETES BASICS
* Deploy an application with microservices in a Kubernetes cluster.
* Provision a complete Kubernetes cluster using Kubernetes Engine.
* Deploy and manage Docker containers using kubectl.
* Break an application into microservices using Kubernetes Deployments and Services.

DEPLOYING TO KUBERNETES
* Create and manage Kubernetes deployments.
* Create a Kubernetes deployment.
* Trigger, pause, resume, and roll back updates.
* Understand and build canary deployments.

CONTINUOUS DEPLOYMENT WITH JENKINS
* Build a continuous delivery pipeline.
* Provision Jenkins in your Kubernetes cluster.
* Create a Jenkins pipeline.
* Implement a canary deployment using Jenkins.

ADDITIONAL COURSE DETAILS: The Nexus Humans Getting Started with Google Kubernetes Engine training program is a workshop that presents an invigorating mix of sessions, lessons, and masterclasses meticulously crafted to propel your learning expedition forward. This immersive bootcamp-style experience boasts interactive lectures, hands-on labs, and collaborative hackathons, all strategically designed to fortify fundamental concepts. Guided by seasoned coaches, each session offers priceless insights and practical skills crucial for honing your expertise. Whether you're stepping into the realm of professional skills or a seasoned professional, this comprehensive course ensures you're equipped with the knowledge and prowess necessary for success. While we feel this is the best course for the Getting Started with Google Kubernetes Engine course and one of our Top 10, we encourage you to read the course outline to make sure it is the right content for you. Additionally, private sessions, closed classes or dedicated events are available both live online and at our training centres in Dublin and London, as well as at your offices anywhere in the UK, Ireland or across EMEA.
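The Kubernetes Basics module deploys containers to a GKE cluster with kubectl. As a rough programmatic equivalent (not a course lab), the sketch below creates a Deployment and exposes it with a LoadBalancer Service using the official Kubernetes Python client, assuming `gcloud container clusters get-credentials` has already written the GKE cluster into the local kubeconfig; the image path and names are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()  # context populated by gcloud get-credentials
core = client.CoreV1Api()
apps = client.AppsV1Api()

# Deployment running an image previously pushed to a registry (placeholder path).
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-app"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="hello-app",
                        image="gcr.io/example-project/hello-app:1.0",  # placeholder
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# LoadBalancer Service so GKE provisions an external IP for the Deployment.
service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1ServiceSpec(
        selector={"app": "hello-app"},
        type="LoadBalancer",
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
core.create_namespaced_service(namespace="default", body=service)
```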

Getting Started with Google Kubernetes Engine
Delivered on-request, online
Price on Enquiry

Architecting with Google Kubernetes Engine

By Nexus Human

Duration: 3 Days (18 CPD hours)

This class is intended for the following participants: cloud architects, administrators, and SysOps/DevOps personnel; individuals using Google Cloud Platform to create new solutions or to integrate existing systems, application environments, and infrastructure with the Google Cloud Platform.

Overview: this course teaches participants the following skills:
* Understand how software containers work
* Understand the architecture of Kubernetes
* Understand the architecture of Google Cloud Platform
* Understand how pod networking works in Kubernetes Engine
* Create and manage Kubernetes Engine clusters using the GCP Console and gcloud/kubectl commands
* Launch, roll back and expose jobs in Kubernetes
* Manage access control using Kubernetes RBAC and Google Cloud IAM
* Manage pod security policies and network policies
* Use Secrets and ConfigMaps to isolate security credentials and configuration artifacts
* Understand GCP choices for managed storage services
* Monitor applications running in Kubernetes Engine

This class introduces participants to deploying and managing containerized applications on Google Kubernetes Engine (GKE) and the other services provided by Google Cloud Platform. Through a combination of presentations, demos, and hands-on labs, participants explore and deploy solution elements, including infrastructure components such as pods, containers, deployments, and services, as well as networks and application services. This course also covers deploying practical solutions, including security and access management, resource management, and resource monitoring.

INTRODUCTION TO GOOGLE CLOUD PLATFORM
* Use the Google Cloud Platform Console
* Use Cloud Shell
* Define cloud computing
* Identify GCP's compute services
* Understand regions and zones
* Understand the cloud resource hierarchy
* Administer your GCP resources

CONTAINERS AND KUBERNETES IN GCP
* Create a container using Cloud Build
* Store a container in Container Registry
* Understand the relationship between Kubernetes and Google Kubernetes Engine (GKE)
* Understand how to choose among GCP compute platforms

KUBERNETES ARCHITECTURE
* Understand the architecture of Kubernetes: pods, namespaces
* Understand the control-plane components of Kubernetes
* Create container images using Google Cloud Build
* Store container images in Google Container Registry
* Create a Kubernetes Engine cluster

KUBERNETES OPERATIONS
* Work with the kubectl command
* Inspect the cluster and Pods
* View a Pod's console output
* Sign in to a Pod interactively

DEPLOYMENTS, JOBS, AND SCALING
* Create and use Deployments
* Create and run Jobs and CronJobs
* Scale clusters manually and automatically
* Configure Node and Pod affinity
* Get software into your cluster with Helm charts and Kubernetes Marketplace

GKE NETWORKING
* Create Services to expose applications that are running within Pods
* Use load balancers to expose Services to external clients
* Create Ingress resources for HTTP(S) load balancing
* Leverage container-native load balancing to improve Pod load balancing
* Define Kubernetes network policies to allow and block traffic to pods

PERSISTENT DATA AND STORAGE
* Use Secrets to isolate security credentials
* Use ConfigMaps to isolate configuration artifacts
* Push out and roll back updates to Secrets and ConfigMaps
* Configure Persistent Storage Volumes for Kubernetes Pods
* Use StatefulSets to ensure that claims on persistent storage volumes persist across restarts

ACCESS CONTROL AND SECURITY IN KUBERNETES AND KUBERNETES ENGINE
* Understand Kubernetes authentication and authorization
* Define Kubernetes RBAC roles and role bindings for accessing resources in namespaces
* Define Kubernetes RBAC cluster roles and cluster role bindings for accessing cluster-scoped resources
* Define Kubernetes pod security policies
* Understand the structure of GCP IAM
* Define IAM roles and policies for Kubernetes Engine cluster administration

LOGGING AND MONITORING
* Use Stackdriver to monitor and manage availability and performance
* Locate and inspect Kubernetes logs
* Create probes for wellness checks on live applications

USING GCP MANAGED STORAGE SERVICES FROM KUBERNETES APPLICATIONS
* Understand pros and cons of using a managed storage service versus self-managed containerized storage
* Enable applications running in GKE to access GCP storage services
* Understand use cases for Cloud Storage, Cloud SQL, Cloud Spanner, Cloud Bigtable, Cloud Firestore, and BigQuery from within a Kubernetes application
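The access-control module above defines RBAC roles and role bindings for namespace-scoped access. The sketch below is an illustrative example rather than course-provided material: a read-only Role for Pods and a RoleBinding granting it to a hypothetical Google account, created with the official Kubernetes Python client. Plain dicts are used for the bodies, which the client accepts alongside its typed models; the namespace and user are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# Namespaced Role allowing read-only access to Pods in the "dev" namespace.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "dev"},
    "rules": [
        {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list", "watch"]}
    ],
}
rbac.create_namespaced_role(namespace="dev", body=role)

# Bind the Role to a (hypothetical) Google account, so the IAM identity maps to
# namespace-level Kubernetes permissions.
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pod-reader-binding", "namespace": "dev"},
    "subjects": [
        {
            "kind": "User",
            "name": "developer@example.com",  # placeholder account
            "apiGroup": "rbac.authorization.k8s.io",
        }
    ],
    "roleRef": {
        "kind": "Role",
        "name": "pod-reader",
        "apiGroup": "rbac.authorization.k8s.io",
    },
}
rbac.create_namespaced_role_binding(namespace="dev", body=binding)
```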

Architecting with Google Kubernetes Engine
Delivered on-request, online
Price on Enquiry
