L104353 Lab Guide
Cluster
Andrew Block, Scott Collier, Jason DeTiberus, Vinny Valdez
Abstract:
Configuring distributed systems can be difficult. Fortunately, automation tools such as Ansible are
available to help manage even the most complex environments. In this lab, you'll take the reins of
your own cluster and experience firsthand how Ansible can be used to install, configure, and
maintain OpenShift to support mission-critical systems. Once you install Red Hat OpenShift, you'll
learn how to diagnose, troubleshoot, and resolve common platform issues. Managing the platform
doesn't stop once the installation is complete. You'll use Ansible to simplify ongoing maintenance in
an automated fashion. Finally, the use of centralized management systems will be introduced into
the environment in order to demonstrate its importance and to provide a streamlined experience for
both platform maintainers and users.
Lab 0 - Pre-Lab Setup
Lab 1 - Lab Overview
Lab 2 - Exploring the Environment
Lab 3 - Verifying Installation of Red Hat OpenShift Container Platform Using Ansible Tower
Reviewing Install of OpenShift
Validate the OpenShift Installation
Lab 4 - Installing Red Hat CloudForms
Lab 5 - Managing the Lifecycle of an Application
Lab 6 - Expanding the OpenShift Container Platform Cluster
Lab 7 - Where do we go from here?
Appendices
Appendix A - Manually Cleanup Cinder Volume
Appendix B - Script For Deploying CloudForms
Appendix C - Recovering From Failed CloudForms Deployment
Appendix D - Average Tower Job Times
Appendix E - Troubleshooting CloudForms
Lab 0 - Pre-Lab Setup
Welcome! We are going to jump right into the lab implementation and then review the overall
architecture and strategy afterward. You have been tasked with managing a Red Hat OpenShift
Container Platform environment running on the Red Hat OpenStack Platform. Ansible Tower is also
available and is being used to execute and manage the overall installation of OpenShift.
Let's perform some brief validation of the environment and kick off the OpenShift installation.
NOTE: The installation of OpenShift Container Platform can take 20-25 minutes, so it must be
started immediately. If bullet point 1 below takes longer than 1 minute to complete, skip it and
go directly to bullet point 2.
1. Briefly validate the environment by confirming you can SSH into the Red Hat OpenStack
Platform virtual machine from the student workstation:
kiosk$ ssh [email protected]
2. Kick off the OpenShift installation: in a web browser, log in to Ansible Tower at
https://tower.admin.example.com (username admin, password summit2017), select Templates
from the menu bar, and launch the 0-Provision and Install OpenShift workflow job template by
clicking its rocket icon.
Lab 1 - Lab Overview
Introduction
With the OpenShift installation process kicked off, we can spend some time providing an
overview of the entire lab.
Environment Overview
The lab environment that we will utilize today consists of multiple KVM virtual machines running
within each student workstation. The details of each virtual machine are listed below:
● Student Workstation - KVM hypervisor (the system you are logged into now)
● Red Hat OpenStack Platform 10 - has been deployed for you and is ready to host the
instances that will be used for Red Hat OpenShift Container Platform 3.4.
○ KVM VM
○ hostname: rhosp.admin.example.com
○ Red Hat OpenShift Container Platform
■ 1 Master node
■ 1 Infrastructure Node
■ 2 Application Nodes
● Red Hat CloudForms (containerized)
● Ansible Tower 3.1.2
○ KVM VM
○ hostname: tower.admin.example.com
In addition to the virtual machines that are running on each student workstation, an instructor
machine is also contained within the environment and provides additional resources.
● Repository server
○ KVM VM on instructor machine
○ Hostname: repo.osp.example.com
○ Hosts local RPMs, a Docker registry, and a Git repository
The following diagram depicts the network layout within the environment:
Target Environment
As you progress through the series of labs, you will build increased capabilities for effectively
managing containerized workloads. The diagram below represents the environment that we will
be building today.
Connectivity Details
There are several components that will be utilized throughout the course of this lab. The
following table outlines how to connect to each resource:
Resource                            Access                                                      Credentials
Red Hat OpenStack Platform (web)    http://rhosp.admin.example.com                              user1 / summit2017
Red Hat OpenStack Platform (SSH)    ssh [email protected]                           user1 / summit2017
Ansible Tower (web)                 https://tower.admin.example.com                             admin / summit2017
OpenShift web console               https://master.osp.example.com:8443                         user1 / summit2017
OpenShift master (SSH)              ssh -i ~/.ssh/L104353-tower.pem [email protected]   key-based
Red Hat CloudForms (web)            https://cloudforms-cloudforms.apps.example.com              admin / smartvm
Virtualization level
To understand the different layers of virtualization we will use the following classifications:
1. L0 - The hypervisor. In this lab this is the desktop you are sitting at
2. L1 - KVM virtual machine running on the L0 hypervisor
3. L2 - OpenStack Instance/Server running in nested virtualization in the OpenStack L1 VM
4. L2 (container) - Application running in a container on the L2 platform - in this case
OpenShift
Keep in mind that we are using nested virtualization in this lab, so while the performance is
likely acceptable, it is not reflective of a production deployment.
Each component plays a critical role in the overall management of the environment. Now let's
get started!
Lab 2 - Exploring the Environment
With the installation of the OpenShift Container Platform started and an understanding of the
environment as a whole, we will use the time while the installation completes to explore the
environment in further detail.
The RHOSP environment is a KVM virtual machine running on each student machine. This
environment will be used to host the Red Hat OpenShift Container Platform. Let’s verify the
state of the instances and execute a few commands to validate it is in good working order prior
to proceeding.
Username: user1
Password: summit2017
NOTE: Although root access is not required to run any of the commands below in Red Hat OpenStack
Platform, user1 does have sudo access in case you would like to view logs or config files. However,
please DO NOT make any changes to the environment or the lab may not work properly.
kiosk$ ssh [email protected]
View Servers and Volumes
Connect to the running OpenStack environment and view servers and volumes:
1. From the UI
a. In a local web browser open http://rhosp.admin.example.com
b. Click on Compute -> Instances to view server status
c. Click on Compute -> Volumes to view block storage status
2. From the CLI
a. SSH with user user1 and password summit2017
b. View server and volume status:
kiosk$ ssh [email protected]
rhosp$ openstack server list --format value --column Name --column Status
node1.osp.example.com BUILD
infra.osp.example.com ACTIVE
master.osp.example.com ACTIVE
Since the Red Hat OpenShift environment makes use of persistent storage for platform
components as well as applications, Red Hat OpenStack provides the Cinder volumes that back
that storage.
rhosp$ openstack volume list --format value --column ID --column "Attached to"
If you list out the logical volumes (lvs), you will see the IDs of the volumes match the lvs:
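The original lab showed the matching output here; one quick way to reproduce the comparison on the RHOSP virtual machine is to list the logical volumes as root (the volume group shown will be the Cinder-managed one, typically cinder-volumes):
rhosp$ sudo lvs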
Next, each of the running instances is built from Red Hat Enterprise Linux 7.3. To list the
images available for consumption within OpenStack, execute the following command:
rhosp$ openstack image list --format value --column ID --column Name
e5a369ea-f915-4a59-81e4-1015a7c13f6f openshift-base
Feel free to view the details of the openshift-base image which is used to instantiate the
openshift servers by the Ansible Tower playbooks.
rhosp$ openstack image show openshift-base
Finally, if curious, list the networks and subnets that have been configured in the OpenStack
environment:
rhosp$ openstack network list && openstack subnet list
The network is configured as a flat network to use the libvirt network for routing and DNS, so no
floating IPs will be used. All server instances will use static IPs based on pre-configured network
ports. You can view this with:
rhosp$ openstack port list --format value --column "Fixed IP Addresses" -c Name
openshift-master ip_address='172.20.17.5',
subnet_id='28792deb-8e5f-459e-aa28-aec1d50838ef'
openshift-infra ip_address='172.20.17.6',
subnet_id='28792deb-8e5f-459e-aa28-aec1d50838ef'
openshift-node1 ip_address='172.20.17.51',
subnet_id='28792deb-8e5f-459e-aa28-aec1d50838ef'
openshift-node3 ip_address='172.20.17.53',
subnet_id='28792deb-8e5f-459e-aa28-aec1d50838ef'
openshift-node2 ip_address='172.20.17.52',
subnet_id='28792deb-8e5f-459e-aa28-aec1d50838ef'
Additional commands are available to investigate each one of the prior areas in greater detail.
You are free to explore these areas later if time allows, but be extremely careful not to change
anything in this environment.
Ansible Tower provides central management of Ansible workloads, enabling complex
workflows to manage environments big and small. The entire installation and management of
the OpenShift Container Platform can be driven from a centralized Ansible Tower
environment.
As you saw previously, Ansible Tower has been provisioned as a standalone machine within the
lab environment. Open a web browser, navigate to https://tower.admin.example.com, and log in
with the following credentials:
Username: admin
Password: summit2017
Job Templates
First, let’s review the job template that we just executed to provision the OpenShift Container
Platform. This workflow template consists of three chained job templates:
● OpenShift Pre-Install - Prepares the OpenStack environment by provisioning three
instances
● OpenShift Install - Installs the OpenShift Container Platform
● OpenShift Post-Install - Customizes the OpenShift cluster for the lab
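If you prefer to inspect these objects from the command line, the same information is exposed through the Ansible Tower REST API. The following curl commands are illustrative only; the -k flag skips certificate verification for the lab's self-signed certificate:
kiosk$ curl -sk -u admin:summit2017 https://tower.admin.example.com/api/v1/workflow_job_templates/
kiosk$ curl -sk -u admin:summit2017 https://tower.admin.example.com/api/v1/job_templates/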
Projects
The Job Templates utilize Projects, or collections of Ansible playbooks, that in this lab are
sourced from a Git repository. To view the projects that are being utilized, select the Projects
link on the menu bar. Two projects are being leveraged:
● openshift-ansible - Installs and configures the OpenShift Container Platform
● summit-2017-ocp-operator - Customized Ansible tooling to prepare lab exercises
The configuration of each project can be viewed by selecting the pencil (edit) button under the
Actions column.
Inventory
An Inventory within Ansible Tower is similar to a standalone inventory file and contains a
collection of hosts against which jobs may be launched. The inventories defined within Tower can be
accessed by clicking on the Inventories link on the menu bar. The OpenShift inventory defines
the hosts organized within groups to install and configure the environment. Each group along
with the host and variables that have been defined can be accessed by selecting the pencil icon
under the Actions column next to each group.
Credentials
Credentials are a mechanism for authenticating against secure resources including target
machines, inventory sources and projects leveraging version control systems. Every one of the
previously explored areas makes use of a credential. Credentials are configured within the
Ansible Tower settings and can be accessed by selecting the Settings icon (gear) on the menu
bar. Once within the settings page, select the Credentials link. The following credentials have
been defined:
● gitlab-creds - Access lab resources from source control
● osp-guest-creds - Execute actions against OpenStack instances
● osp-user-creds - Allows for communication with the OpenStack platform
Click the Details link on each rectangle to see the details of each playbook. The overall
workflow job is complete when all 3 playbooks are completed successfully.
This lab is concluded when the Ansible Tower job is completed successfully.
Lab 3 - Verifying Installation of Red Hat OpenShift
Container Platform Using Ansible Tower
In this lab, we will review the install of the OpenShift Container Platform using Ansible Tower
that we started at the beginning of this session.
The OpenShift Container Platform is installed through a collection of Ansible resources. This
automation toolset allows platform administrators to quickly provision an environment
with minimal effort. Ansible Tower has been configured with a Job Template that makes use of
these assets to install OpenShift on instances available in the OpenStack environment.
To view the list of Job Templates configured in Ansible Tower, select Templates on the menu
bar at the top of the screen.
All of the job templates configured in Ansible Tower are listed below. Earlier you launched the
job template called 0-Provision and Install OpenShift. This is a workflow job type and will
execute multiple chained job templates to provision OpenShift. Review the workflow jobs and
playbooks that were run in the Jobs page.
When you execute the job template, you will be transferred to the jobs page where you will be
able to track the progress and status of the installation. For more information on the Ansible
playbooks see https://github.com/openshift/openshift-ansible
Validate the OpenShift Installation
With the OpenShift Container Platform installation complete, let’s perform a few tests to validate
the status of the environment. There are two primary methods for accessing OpenShift: the web
console and the Command Line tool (CLI).
From the student machine, open a web browser and navigate to the following address:
https://master.osp.example.com:8443
If successful, you should see the following page representing the OpenShift landing page:
Use the following credentials to access the web console:
Username: user1
Password: summit2017
The OpenShift web console provides a graphical way to interact with the OpenShift platform.
After successfully authenticating, you are presented with an overview page containing all of the
projects that you have access to. Since you are a normal user, you do not yet have access to any
projects.
In subsequent labs, we will explore the OpenShift web console in further detail.
However, we will still use this opportunity to showcase the different items exposed within the
web console.
Now that we have had an opportunity to login to the OpenShift web console from a developer's
standpoint, let’s shift over to an administrative and operations point of view and access the
cluster directly using the terminal.
Since the instances deployed within the OpenStack environment are utilizing cloud-init, login to
the OpenShift Master instance as cloud-user:
kiosk$ ssh -i ~/.ssh/L104353-tower.pem [email protected]
Access to the cluster is available using the system:admin user, which has the cluster-admin role.
This can be verified by executing the following command, which should show that the currently
logged in user is system:admin:
master$ oc whoami
As one would expect, users with the cluster-admin role have elevated permissions in
comparison to normal users, such as user1 which was utilized when browsing the web console.
Cluster administrators can view all of the nodes that constitute the cluster:
master$ oc get nodes
View all of the Projects that have been created by users or to support the platform:
master$ oc get projects
Along with listing all of the Persistent Volumes that have been defined:
master$ oc get pv
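To see how this integration is configured, view the OpenStack cloud provider configuration file on the master. The path below is the typical location used by the installer; adjust it if your environment differs:
master$ cat /etc/origin/cloudprovider/openstack.conf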
[Global]
auth-url = http://rhosp.admin.example.com:5000/v2.0/
username = admin
password = summit2017
tenant-name = L104353
The cloud provider integration file tells OpenShift how to interact with OpenStack. You can see
that it does so via the OpenStack API, which requires an auth-url, credentials, and a tenant
name. This integration between OpenShift and OpenStack enables capabilities like dynamic
storage provisioning for applications. Cloud provider configurations are specific to each
provider; for example, there are also cloud provider configurations for AWS, Azure, VMware,
etc.
Let’s check out the storage class as well, continuing on the integration story.
master$ oc get storageclass
Notice that the provisioner is the cinder provisioner and the is-default-class is set to 'true'.
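To see the full definition, you can also dump the storage class as YAML. The output below is only an abbreviated sketch; the object name and the exact annotation fields in your cluster may differ slightly by version:
master$ oc get storageclass -o yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  name: standard
provisioner: kubernetes.io/cinder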
You can use the OpenShift Command line tool as a user with cluster administrator role to
access the entire set of configurations for the platform.
Note: With great power comes great responsibility. Executing commands as a user with cluster
administrator rights has the potential to negatively impact the overall health of the environment.
IMPORTANT: If you need to teardown the OpenShift Environment and start over, execute the
OpenShift Teardown job template. However, please raise your hand and inform one of the lab
instructors. If you do this too late into the lab you may not have enough time to finish. See this
table for a reference of typical times for the Tower jobs: Appendix D - Average Tower Job Times
Lab 4 - Installing Red Hat CloudForms
Red Hat CloudForms Management Engine (CFME) delivers the insight, control, and automation
necessary to address the challenges of managing complex environments. CloudForms is
available as a standalone appliance, but is also available as a containerized solution that can be
deployed on the OpenShift Container Platform.
In this lab, you will deploy a single instance/replica of Red Hat CloudForms to the OpenShift
Container Platform cluster and configure the container provider to monitor the OpenShift
environment.
Since Red Hat CloudForms is available as a container, it can be deployed to the OpenShift
Container Platform in a few short steps.
First, using the OpenShift Command Line, create a new project called cloudforms
master$ oc new-project cloudforms
By creating a new project, the context of the CLI is automatically switched into the cloudforms
project:
master$ oc config current-context
When creating a new project, a set of service accounts are automatically provisioned. These
accounts are used when building, deploying and running containers. The default service
account is the de facto service account used by pods. Since CloudForms is deployed within a
pod and requires access to key metrics in the OpenShift environment along with the host, it
must be granted elevated access as a privileged resource. In OpenShift, permissions
associated with pods are managed by Security Context Constraints (SCCs) and the service account that
is used to run them.
Execute the following command to add the default service account in the cloudforms project to
the privileged SCC:
master$ oc adm policy add-scc-to-user privileged \
system:serviceaccount:cloudforms:default
Confirm that system:serviceaccount:cloudforms:default appears in the result returned:
master$ oc get scc privileged -o yaml | grep cloudforms
CloudForms retrieves metrics from applications deployed within OpenShift, and it leverages the
data exposed by the onboard metrics infrastructure (Hawkular). Since the platform metrics are
deployed in the openshift-infra project and CloudForms is deployed in the cloudforms project,
they cannot communicate with each other due to the use of the multitenant SDN plugin, which
isolates each project at the network level.
Fortunately, as a cluster administrator, you can manage the configuration of the pod overlay
network to allow traffic to traverse between specific projects or be exposed to all projects.
Execute the following command to join the cloudforms project to the openshift-infra project
master$ oc adm pod-network join-projects cloudforms --to=openshift-infra
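You can confirm the two projects now share the same network ID (the same check appears in Appendix B - Script For Deploying CloudForms):
master$ oc get netnamespace | egrep 'cloudforms|openshift-infra'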
Next, download the CloudForms template and create it within the cloudforms project (these
commands are also collected in Appendix B - Script For Deploying CloudForms):
master$ curl -O http://repo.osp.example.com/ocp/templates/cfme-template.yaml
master$ oc create -n cloudforms -f cfme-template.yaml
View the template that was just created (add -o yaml to see its full definition):
master$ oc get -n cloudforms template cloudforms
Notice how the services are set up, how variables are passed along, which containers are used,
etc. This is how we are defining how CloudForms is being configured.
Instantiate the template to deploy Red Hat CloudForms. Since no parameters are specified,
the default values defined in the template will be used.
master$ oc new-app -n cloudforms --template=cloudforms
Red Hat CloudForms will now be deployed into the cloudforms project.
First, validate that all pods are running by watching their status. When all pods are running and
the -deploy pods have terminated, stop the command with CTRL+C. A full deployment takes just
over 4 minutes:
master$ oc -n cloudforms get pods -w
Red Hat CloudForms may take up to 5 minutes to start up for the first time as it builds the
content of the initial database. As noted above, the deployment of CloudForms will be complete
when the status has changed to “Running” for the containers.
Execute the following command to view the overall status of the pods in the cloudforms project
master$ oc status -n cloudforms
For full details of the deployed application run
master$ oc describe -n cloudforms pod/cloudforms-<pod_name>
Next, in order to validate the cloudforms pod is running with the proper privileged SCC, export
the contents and inspect the openshift.io/scc annotation to confirm the privileged value is
present
master$ oc -n cloudforms get -o yaml pod cloudforms-<pod_name>
...
metadata:
annotations:
openshift.io/scc: privileged
...
NOTE: If the deployment fails, the project may have to be removed so you can start over. Only
perform this task if there was an irrecoverable failure, and let an instructor know before doing
this. See Appendix C - Recovering From Failed CloudForms Deployment.
Open a web browser and navigate securely to the hostname retrieved above:
https://cloudforms-cloudforms.apps.example.com
NOTE: If you get an error such as Application Not Available see Appendix E -
Troubleshooting CloudForms
Since Red Hat CloudForms in the lab environment uses a self-signed certificate, add a security
exception in the browser when prompted.
Username: admin
Password: smartvm
Red Hat CloudForms gathers metrics from infrastructure components through the use of
providers. An OpenShift container provider is available that queries the OpenShift API and
platform metrics. As part of the OpenShift installation completed previously, cluster metrics were
automatically deployed and configured. CloudForms must be configured to consume from each
of these resources.
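If you want to confirm that the cluster metrics components are running before wiring up CloudForms, you can list the pods in the openshift-infra project (pod names will vary; expect heapster, hawkular-metrics, and hawkular-cassandra pods in the Running state):
master$ oc get pods -n openshift-infra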
Start adding a new Container Provider by navigating to Compute -> Containers -> Providers and
specifying OCP Summit Lab as the name and OpenShift Container Platform as the type.
As mentioned previously, there are two endpoints from which CloudForms retrieves data.
First, configure the connection details to the OpenShift API. Since CloudForms is deployed
within OpenShift, we can leverage the internal service associated with the API, called kubernetes,
in the default project. Internal service names can be referenced across projects in the form
<service_name>.<namespace>.
Enter kubernetes.default in the hostname field and 443 in the port field.
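If you would like to confirm that this service exists and see the cluster IP behind it, you can run the following from the master:
master$ oc get svc kubernetes -n default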
The token field refers to the OAuth token used to authenticate CloudForms to the OpenShift
API. The management-infra project is a preconfigured project as part of the OpenShift
installation. A service account called management-admin is available that has access to the
requisite resources needed by CloudForms. Each service account has an OAuth token
associated with its account. Execute the following command to retrieve the token.
master$ oc serviceaccounts get-token -n management-infra management-admin
Copy the value returned into the token field. Click the Validate button to verify the
configuration.
Next, click on the Hawkular tab to configure CloudForms to communicate with the cluster
metrics.
Enter hawkular-metrics.openshift-infra in the hostname field and 443 in the port field.
You have now configured Red Hat CloudForms to retrieve metrics from OpenShift. It may take a
few minutes for data to be displayed.
Select Compute -> Containers -> Overview to view the collected data. Once baseline metrics
appear, you can move on to the next lab. Feel free to explore the CloudForms web console as
time permits to view additional details exposed from the OpenShift cluster.
NOTE: This section should be considered an optional stretch goal. If you are behind, just skip it
and move on to the next lab.
Red Hat CloudForms can also gather metrics and infrastructure data from our Red Hat
OpenStack Platform environment, in the same manner that it is now collecting information from
our OpenShift Container Platform.
You have now configured Red Hat CloudForms to retrieve metrics from Red Hat OpenStack
Platform. It may take a few minutes for data to be displayed.
This concludes lab 4.
Lab 5 - Managing the Lifecycle of an Application
In this lab, you will deploy an application to Red Hat OpenShift Container Platform and use the
tools previously deployed to investigate how to manage the application.
One of the steps to validate the successful installation of an OpenShift Container Platform
cluster is to build and deploy a sample application. OpenShift contains a number of quickstart
templates that can be used to demonstrate different application frameworks along with the
integration with a backend data store. One of these example applications consists of a
CakePHP based web application with state stored in a MySQL database.
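Before switching hats, note that these quickstart templates live in the shared openshift namespace; a cluster administrator can list them from the master if curious (in this lab the catalog has been trimmed, so expect only a few entries):
master$ oc get templates -n openshift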
We will now put our cluster administrator hat aside and complete the majority of this lab as a
developer by using the OpenShift web console to build and deploy the sample application.
Username: user1
Password: summit2017
Since user1 does not currently have access to any projects, the only action that can be taken
in the web console is to create a new project. Click on the New Project button.
Name: cakephp-mysql-persistent
Display Name: CakePHP MySQL Persistent
Description: Sample Project Demonstrating A CakePHP MySQL Application Using
Persistent Storage
You are presented with a catalog of items that you can add to your project. In a typical
OpenShift cluster, this catalog would be filled with numerous programming languages
emphasizing polyglot development and tools to implement Continuous Integration. In the lab
environment, there is only one programming language option, PHP. Click on the PHP language
to display the available options.
You are presented with one option: an OpenShift template which contains the various OpenShift
components to build and deploy a CakePHP based application along with a MySQL database
backed by persistent storage. The goal of this lab is to use this template to validate the build
and deployment capabilities of the platform along with the dynamic allocation of Persistent
Volumes for the storage of the backend database.
Click the Select button under the CakePHP + MySQL (Persistent) card which will display the
images that will be used as part of this template instantiation along with parameters that can be
used to inject custom logic.
One of the parameters that we will customize is the location of the Git repository containing the
source code of the CakePHP application. The location will point to the Git repository that is
running on the repository machine:
Modify the Git Repository URL parameter with the following value:
Scroll to the bottom of the page and select the Create button to instantiate the template
A page displaying the successful instantiation of the template will be displayed along with a set
of next steps that you can take against the application. Click the Continue to Overview link to
return to the project homepage.
Validating Application Deployment
After instantiating the template, a new Source-to-Image (S2I) build of the CakePHP
application will begin.
Select cakephp-mysql-persistent to view the builds for the application. From this page, you
can view build status along with the logs produced
To investigate the status of all pods within the project, select Applications and then Pods.
Pods that are in a healthy condition will either have a status of Running or Completed.
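The same check can be done from the master as the cluster administrator (the project name below matches the one created earlier in this lab):
master$ oc get pods -n cakephp-mysql-persistent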
NOTE: If either the mysql or cakephp pods are not in a healthy state, triggering a new deployment
may rectify the issue.
New deployments can be initiated from the deployments page by selecting Applications and
then Deployments.
On the top right corner, click Deploy to trigger a new deployment if needed.
View Application
Click on Overview from the left hand navigation bar to return to the overview page.
NOTE: You may see an error getting metrics. This is safe to ignore for now as it will be covered
in a subsequent section.
You should be able to see both the CakePHP and MySQL applications running.
The template automatically creates a route to provide external access to the application. The
link is available at the top right corner of the page. Click the link to navigate to the application:
http://cakephp-mysql-persistent-cakephp-mysql-persistent.apps.example.com
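If you prefer the command line, you can also verify that the route responds from the student workstation (the exact HTTP status may vary, but a successful deployment should not return a 503):
kiosk$ curl -I http://cakephp-mysql-persistent-cakephp-mysql-persistent.apps.example.com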
Viewing Application Metrics
Application users and administrators have the ability to leverage several facilities for monitoring
the state of an application deployed to the OpenShift Container Platform. While not deployed to
the lab environment, OpenShift provides an aggregated logging framework based on the EFK
(Elasticsearch, Fluentd, and Kibana) stack. However, you can still utilize the telemetry captured
by the cluster metrics mechanisms. Cluster metrics were deployed as part of the OpenShift
installation and are being used to drive Red Hat CloudForms.
With the cakephp-mysql-persistent application deployed, you can use the OpenShift web
console to view metrics that have been gathered by the cluster metrics facility. Since the metrics
view within the web console reaches out to Hawkular from your web browser, you will need to
perform one additional step: configure your browser to trust the self-signed certificate before
metrics can be displayed.
Click on the link displayed which will connect to the Hawkular endpoint. Accept the self signed
certificate and if successful, you will see the Hawkular logo along with additional details about
the status of the service.
NOTE: After clicking on the URL noted above, it may hang for a bit as it tries to go online. It will
continue after a while.
Return to the OpenShift overview page for the cakephp-mysql-persistent project by clicking the
Overview link on the left side where you should be able to see metrics displaying next to each
pod.
Additional details relating to the performance of the application can be viewed by revisiting the
Metrics tab within each pod as previously described.
While normal consumers of the platform are able to view metrics for only the applications they
have permissions to access, cluster administrators can make use of Red Hat CloudForms to
view metrics from all applications deployed to the OpenShift Container platform from a single
pane of glass.
With an application deployed to the OpenShift cluster, we can navigate through the various
options exposed by the OpenShift web console. Use this time as an opportunity to explore the
following sections at your own pace:
● Various details provided with each pod including pod details, application logs and the
ability to access a remote shell
○ Hover over Applications from the left hand navigation bar and select Pods.
Select one of the available pods and navigate through each of the provided tabs
● Secrets used by the platform and the CakePHP application
○ Hover over Resources from the left hand navigation bar and select Secrets
● Persistent storage dynamically allocated by the cluster to support MySQL
○ Click on the Storage tab
If desired, connect to OpenStack and view the volumes created using the steps described in a
prior lab.
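For example, the dynamically provisioned claim and its backing Cinder volume can be listed with the following commands (names and IDs will differ in your environment):
master$ oc get pvc -n cakephp-mysql-persistent
master$ oc get pv
rhosp$ openstack volume list --format value --column ID --column "Attached to"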
Lab 6 - Expanding the OpenShift Container Platform
Cluster
In this lab, you will use Ansible Tower to add an additional application node to the OpenShift
Container Platform cluster.
One of the benefits of the OpenShift Container Platform architecture is the effective scheduling
of workloads onto compute resources (nodes). However, capacity demands may result in the
need to add additional resources. As an OpenShift cluster administrator, having a defined
process for adding resources in an automated manner helps guarantee the stability of the
overall cluster.
The OpenShift Container Platform provides methods for adding resources to an existing cluster,
whether it be a master or node. The method for executing the scale up task depends on the
installation method used for the cluster. Both methods make use of an Ansible playbook to
automate the process. The execution of the playbook can be driven through Ansible Tower to
further simplify adding resources to a cluster.
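For reference, outside of Tower a node scale-up is typically driven by the openshift-ansible scaleup playbook against an inventory that lists the new host under a [new_nodes] group. The snippet below is only a sketch of that approach; the group layout, playbook path, and the <new_node_fqdn> and <inventory_file> placeholders are assumptions, since in this lab the inventory and playbook run are managed entirely by Ansible Tower:
# excerpt of an inventory used for a node scale-up (sketch)
[OSEv3:children]
masters
nodes
new_nodes

[new_nodes]
<new_node_fqdn>

# run the scale-up playbook (path as installed by the openshift-ansible RPMs)
ansible-playbook -i <inventory_file> \
  /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml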
Review Cluster
Recall the number of nodes in the cluster by either visiting CloudForms or OpenStack. Then
return to Ansible Tower in a web browser:
https://tower.admin.example.com
If the web session has not been retained from a prior lab, login with the following credentials:
Username: admin
Password: summit2017
After logging in, navigate to the Templates page and locate the 1-Provision and Scale
OpenShift workflow job template. Click the ‘rocket’ icon to start the job.
The workflow first creates a new OpenStack instance and once the instance has been created,
the scaleup Ansible playbook will be executed to expand the cluster. The workflow job will take
a few minutes to complete. Monitor the status until the workflow job completes successfully by
selecting Details as with the initial workflow job.
First, as an OpenShift cluster administrator, you can use the OpenShift command line interface
from the OpenShift master to view the available nodes and their status.
As the root user on the OpenShift master (master.osp.example.com), execute the following
command to list the available nodes:
master$ oc get nodes
If successful, you should see four (4) total nodes (1 master and 3 worker nodes) with Ready
under the Status column, as opposed to (3) total nodes before (1 master and 2 worker nodes).
Red Hat CloudForms can also be used to confirm the total number of nodes has been
expanded to four.
Login to CloudForms and once authenticated, hover over Compute, then Containers, and finally
select Container Nodes. Confirm four nodes are displayed.
Lab 7 - Where do we go from here?
The lab may be coming to a close, but that does not mean that you need to stop once you leave
the session.
● Ansible Tower was used to execute Ansible playbooks to provision a Red Hat OpenShift
Container Platform cluster
○ Instances were created with Red Hat OpenStack
○ Red Hat OpenShift Container Platform was installed and configured
■ Platform metrics were automatically deployed
● Red Hat CloudForms was deployed within the Red Hat OpenShift Container Platform cluster
○ Integrated with Red Hat OpenShift Container Platform to monitor the cluster
● A sample application using persistent storage was deployed on the Red Hat OpenShift
Container Platform
● Ansible Tower was used to execute Ansible playbooks to expand the cluster
○ New instance deployed within Red Hat OpenStack
○ Red Hat OpenShift Container Platform node installed and cluster updated
● Source Code
○ https://github.com/sabre1041/summit-2017-ocp-operator
● Lab Guide
○ https://github.com/sabre1041/summit-2017-ocp-operator/docs/rhsummit17-lab-guide.html
● Official Documentation
○ Red Hat OpenShift Container Platform
○ Ansible Tower
○ Red Hat CloudForms
○ Red Hat OpenStack
Appendices
Appendix A - Manually Cleanup Cinder Volume
How to manually clean up a volume that will not delete with openstack volume delete:
rhosp$ sudo -i
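The exact cleanup steps depend on the failure, but a rough sketch for the LVM-backed Cinder used in this lab is to find and remove the backing logical volume, reset the volume state, and then delete it again. The volume group name (cinder-volumes) and the <volume_id> placeholder are assumptions; substitute the ID reported by openstack volume list:
rhosp# lvs | grep <volume_id>
rhosp# lvremove cinder-volumes/volume-<volume_id>
rhosp# cinder reset-state --state available <volume_id>
rhosp# openstack volume delete <volume_id>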
Appendix B - Script For Deploying CloudForms
These are pulled directly from Lab 4 - Installing Red Hat CloudForms
NOTE: This is also available at http://repo.admin.example.com/pub/scripts/lab4-cloudforms-validation.sh
#!/bin/bash
# Create the project that will host CloudForms
oc new-project cloudforms
# Show the current CLI context (it should now point at the cloudforms project)
oc config current-context
# Allow the default service account in the project to run privileged pods
oc adm policy add-scc-to-user privileged \
system:serviceaccount:cloudforms:default
oc get scc privileged -o yaml | grep cloudforms
# Join the cloudforms pod network to openshift-infra so CloudForms can reach Hawkular
oc adm pod-network join-projects cloudforms --to=openshift-infra
oc get netnamespace | egrep 'cloudforms|openshift-infra'
# Download and register the CloudForms template, then instantiate it
curl -O http://repo.osp.example.com/ocp/templates/cfme-template.yaml
oc create -n cloudforms -f cfme-template.yaml
oc get -n cloudforms template cloudforms
oc new-app -n cloudforms --template=cloudforms
# Watch the pods come up
oc -n cloudforms get pods -w
Appendix C - Recovering From Failed CloudForms Deployment
A failed deployment is one in which the cloudforms pods never reach the Running status described
in Lab 4. The quickest way to remedy this is to delete the project and start over:
master$ oc delete project cloudforms
Now return to the lab and try Lab 4 - Installing Red Hat CloudForms again.
Appendix D - Average Tower Job Times
Appendix E - Troubleshooting CloudForms
If the web browser shows Application Not Available, or the curl commands below return a
status code of 503, then something failed in the deployment.
master$ curl -Ik http://172.30.126.6
The cloudforms application should work now if the router came up cleanly:
master$ curl -Ik https://cloudforms-cloudforms.apps.example.com