How to use the VMware Java SDK in IBM WebSphere

IBM WebSphere still underpins a number of IBM software products that do not run in containers on Red Hat OpenShift. @hvanrun and I worked with IBM Cloud Orchestrator (ICO) last year, an enterprise orchestrator used by a major European client.

Although we have since migrated successfully to IBM Business Automation Workflow, we had a requirement at the time to make API calls from ICO to a VMware vSphere environment. ICO runs on IBM Business Process Manager 8.6, which in turn runs on IBM WebSphere Application Server Network Deployment 8.5.5. Since IBM WebSphere is a Java EE runtime, we decided to leverage the VMware Java SDK to make these API calls.

What are VMware vSphere and IBM Business Process Manager?

VMware vSphere is a popular commercial hypervisor platform for the x86-64 architecture; it hosts VMs and manages them through a management plane called VMware vCenter.

IBM® Business Process Manager is a comprehensive business process management platform. It provides a robust set of tools to author, test, and deploy business processes, as well as full visibility and insight into managing those business processes.

IBM® Business Process Manager is now available as IBM Business Automation Workflow and is part of IBM Cloud Pak for Automation. IBM Cloud Pak for Automation offers design, build, run, and automation services to rapidly scale your programs and fully execute and operationalize an automation strategy.

Our Challenge and Solution

Initially we had some challenges making this integration work. Although we followed the examples from the VMware Java SDK samples (included in the SDK), the sample code below did not work as expected.

vimService = new VimService(endpointURL);
vimPort = vimService.getVimPort();

Map<String, Object> ctxt = ((BindingProvider) vimPort).getRequestContext();
            
ctxt.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, url);

serviceContent = vimPort.retrieveServiceContent(servicesInstance);

In particular, on every call made through vimPort (starting with retrieveServiceContent) we received the following message in the WebSphere SystemErr.log:

[7/2/20 10:13:46:277 CEST] 000b140c SystemErr R Sample code failed
[7/2/20 10:13:46:278 CEST] 000b140c SystemErr R javax.xml.ws.WebServiceException: Error: Maintain Session is enabled but none of the session properties (Cookies, Over-written URL) are returned.
[7/2/20 10:13:46:278 CEST] 000b140c SystemErr R at org.apache.axis2.jaxws.ExceptionFactory.createWebServiceException(ExceptionFactory.java:173)
[7/2/20 10:13:46:278 CEST] 000b140c SystemErr R at org.apache.axis2.jaxws.ExceptionFactory.makeWebServiceException(ExceptionFactory.java:70)
[7/2/20 10:13:46:278 CEST] 000b140c SystemErr R at org.apache.axis2.jaxws.ExceptionFactory.makeWebServiceException(ExceptionFactory.java:118)
[7/2/20 10:13:46:278 CEST] 000b140c SystemErr R at org.apache.axis2.jaxws.BindingProvider.setupSessionContext(BindingProvider.java:355)
[7/2/20 10:13:46:278 CEST] 000b140c SystemErr R at org.apache.axis2.jaxws.BindingProvider.checkMaintainSessionState(BindingProvider.java:322)

We worked with IBM Support and ultimately found that this post on stackoverflow.com held the key to solving our problem. Using the Java code below, we were able to make calls with the VMware Java SDK from ICO without any issues!

vimService = new VimService(endpointURL);
vimPort = vimService.getVimPort();

Map<String, Object> ctxt = ((BindingProvider) vimPort).getRequestContext();
            
ctxt.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, url);
            
// Tell the Axis2 JAX-WS runtime which cookie carries the vCenter session.
// Assumption: the request-context key should be Axis2's custom cookie ID
// property (org.apache.axis2.Constants.CUSTOM_COOKIE_ID); the empty string
// key in the original listing has no effect.
String CUSTOM_COOKIE_ID = org.apache.axis2.Constants.CUSTOM_COOKIE_ID;
ctxt.put(CUSTOM_COOKIE_ID, "vmware_soap_session");
ctxt.put(BindingProvider.SESSION_MAINTAIN_PROPERTY, true);

// Retrieve the ServiceContent object and log in; the request context of this
// second service/port provides the cookies for subsequent calls
VimService vimServiceRSC = new VimService(endpointURL);
VimPortType vimPortRSC = vimServiceRSC.getVimPort();

Map<String, Object> ctxtRSC = ((BindingProvider)vimPortRSC).getRequestContext();
ctxtRSC.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, url.toString());

serviceContent = vimPortRSC.retrieveServiceContent(servicesInstance);
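
For reference, the next step after retrieving the ServiceContent is to log in against the vCenter SessionManager. The sketch below shows the standard vim25 SDK call; userName and password are placeholders, and exception handling is omitted:

// The SessionManager reference comes from the ServiceContent we just retrieved
UserSession userSession = vimPortRSC.login(
        serviceContent.getSessionManager(),
        userName,   // placeholder: vCenter user name
        password,   // placeholder: vCenter password
        null);      // null selects the default locale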

Deploying Red Hat OpenShift on IBM Cloud Pak System

Together with my colleague Venkata Gadepalli, I recently published a detailed technical tutorial, Accelerate your Red Hat OpenShift Container Platform deployment with IBM Cloud Pak System, on the IBM Developer website. It should help pave the way for clients looking to deploy IBM Cloud Paks or their own applications in containers. IBM Cloud Pak System supports both scenarios, in addition to first-class support for VM-based workloads.

Let us know how the tutorial works for you, and do contact us if you have any questions!

How to install your own Terraform provider in IBM Cloud Automation Manager

Originally posted on IBM Cloud Automation Manager blog by Hendrik van Run on 25 July 2019

Disclaimer: Although this blog post was created by Hendrik van Run, it reflects the work done by Jonathon Goldsworthy on a client project!

Introduction

Out of the box, IBM Cloud Automation Manager (CAM) ships with a number of Terraform providers. This is great for getting started and handles common use cases, and IBM also provides support should you encounter any issues. However, there are often cases where you need one or more Terraform providers that are not installed in CAM by default. HashiCorp has endorsed a growing list of Terraform providers, which can be found here.

Let’s assume that you have CAM installed and need to import one of those Terraform providers, for example the F5 BIG-IP one. It can be used to automate the creation of resources on an F5 BIG-IP system, for example to manage Virtual IP (VIP) addresses for load-balancing purposes. The steps below outline exactly what you need to do in order to start using the F5 BIG-IP Terraform provider with CAM.

Downloading and building the Terraform provider

Preparation

The F5 BIG-IP Terraform provider is available from github.com here. Before we proceed, please make sure to review the README.md of the F5 BIG-IP Terraform provider. Note that it requires Go 1.11 (or higher) to build the provider, and Terraform 0.10.x (or higher) to use it.

As documented here in the IBM Knowledge Center, CAM 3.1.2.1 uses Terraform 0.11.11, so we meet the minimum version requirement of the F5 BIG-IP Terraform provider.

We also need a machine where we can build the Terraform provider. This is typically a Linux server, but note that it must match the OS and processor architecture of the servers hosting your IBM Cloud Private and CAM environment! The machine needs internet connectivity in order to download both Go itself and the Terraform provider files from github.com. We also need to make sure that Go 1.11 (or higher) is installed on it. In this blog post, we are using an Ubuntu server that does not have Go installed yet.

Download and install Go

Depending on the OS you are using, the instructions to install Go will vary. As we were using Ubuntu Linux, we downloaded the .tar.gz file directly onto the server and installed it:

root@tfbuild:/tmp# wget https://dl.google.com/go/go1.12.6.linux-amd64.tar.gz
root@tfbuild:/tmp# tar -C /usr/local -xzf /tmp/go1.12.6.linux-amd64.tar.gz

We now have Go installed, but we need to update the PATH:

root@tfbuild:/tmp# export PATH=$PATH:/usr/local/go/bin
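
To make this change persist across shell sessions, you can also append it to your shell profile (a minimal sketch; adjust the file name to your shell):

root@tfbuild:/tmp# echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile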

Now run go version to confirm what version you have installed:

root@tfbuild:~# go version
go version go1.12.6 linux/amd64

This confirms that we just installed Go 1.12.6, so we meet the minimum version requirements to build the F5 BIG-IP Terraform provider!

Download and build the provider

A few preparatory steps are required.

To prepare the provider(s) in a specific directory on your linux server, such as your home directory, type the following:

root@tfbuild:~# mkdir go
root@tfbuild:~# echo $HOME
/root
root@tfbuild:~# export GOPATH=$HOME/go
root@tfbuild:~# cd $GOPATH
root@tfbuild:~/go# pwd
/root/go

You should now be in a subdirectory of your home directory, called go, with GOPATH set to this same directory. You will now create a subdirectory for your providers and change to it:

root@tfbuild:~/go# mkdir -p $GOPATH/src/github.com/terraform-providers
root@tfbuild:~/go# cd $GOPATH/src/github.com/terraform-providers
root@tfbuild:~/go/src/github.com/terraform-providers# pwd
/root/go/src/github.com/terraform-providers

Next, you will download the F5 provider:

root@tfbuild:~/go/src/github.com/terraform-providers# git clone https://github.com/terraform-providers/terraform-provider-bigip
Cloning into 'terraform-provider-bigip'...
remote: Enumerating objects: 13, done.
remote: Counting objects: 100% (13/13), done.
remote: Compressing objects: 100% (12/12), done.
remote: Total 12820 (delta 1), reused 5 (delta 0), pack-reused 12807
Receiving objects: 100% (12820/12820), 56.24 MiB | 9.73 MiB/s, done.
Resolving deltas: 100% (7321/7321), done.

Change to the provider directory and build it:

root@tfbuild:~/go/src/github.com/terraform-providers# cd terraform-provider-bigip/
root@tfbuild:~/go/src/github.com/terraform-providers/terraform-provider-bigip# make build
==> Checking that code complies with gofmt requirements...
go install

This will have built the provider and placed it in the $GOPATH/bin directory:

root@tfbuild:~/go/src/github.com/terraform-providers/terraform-provider-bigip# ls -rtl $GOPATH/bin
total 33980
-rwxr-xr-x 1 root root 34793179 Jul  4 07:20 terraform-provider-bigip

You should see a file of approximately 34 MB, named terraform-provider-bigip.

Importing your Terraform provider into CAM

Review the instructions in the CAM Knowledge Center for importing your own Terraform provider here. You will need to log on to the IBM Cloud Private cluster where you installed CAM in order to import the new Terraform provider. We assume that you have both the cloudctl IBM Cloud Private CLI and the kubectl Kubernetes CLI installed on the machine where you built the Terraform provider; please refer to the IBM Knowledge Center for instructions on installing both.

First, log on to the IBM Cloud Private cluster using cloudctl; note that you need a user with administrative permissions. Enter your user name and password, and when you are given a choice of namespaces, type the number next to “services”.
This simplifies all your subsequent kubectl commands, since they will only look at the services namespace, which is where CAM is installed.

root@tfbuild:~# cloudctl login -a https://<ICP_master>:8443
Username> admin
Password>
Authenticating...
OK
Targeted account mycluster Account (id-mycluster-account)
Select a namespace:
1. cert-manager
2. default
3. ibmcom
4. icamserver
5. icp
6. kube-public
7. kube-system
8. platform
9. services
Enter a number> 9
Targeted namespace services
Configuring kubectl ...
Cluster "mycluster" set.
User "mycluster-user" set.
Context "mycluster-context" created.
Switched to context "mycluster-context".
OK
Configuring helm: /root/.helm
OK

Now use kubectl to identify the CAM provider pod and copy the new Terraform provider directly into the Terraform plugins directory of that pod:

root@tfbuild:~# kubectl cp terraform-provider-bigip $(kubectl get pods | grep cam-provider-terraform | awk '{print $1;}' | head -n 1):/home/terraform/.terraform.d/plugins/

If you did not target the “services” namespace when logging in, you will need to add -n services to the kubectl commands above (both the kubectl cp and the nested kubectl get pods). Of course, the file name terraform-provider-bigip refers to our F5 BIG-IP Terraform provider; should you be using a different Terraform provider, you would need to change the command accordingly.

Only a few more commands are required to set the correct ownership and permissions of the Terraform provider file inside the CAM provider pod:

root@tfbuild:~# kubectl exec $(kubectl get pods | grep cam-provider-terraform | awk '{print $1;}' | head -n 1) chown terraform:terraform /home/terraform/.terraform.d/plugins/terraform-provider-bigip
root@tfbuild:~# kubectl exec $(kubectl get pods | grep cam-provider-terraform | awk '{print $1;}' | head -n 1) chmod +x /home/terraform/.terraform.d/plugins/terraform-provider-bigip

You can run the command below to confirm the permissions on the Terraform provider file; if the output looks as shown below, we are all set!

root@tfbuild:~# kubectl exec $(kubectl get pods | grep cam-provider-terraform | awk '{print $1;}' | head -n 1) -- ls -l /home/terraform/.terraform.d/plugins/terraform-provider-bigip
-rwxr-xr-x 1 terraform terraform 0 Jul  4 21:55 /home/terraform/.terraform.d/plugins/terraform-provider-bigip

Using the new Terraform provider in CAM

With the F5 BIG-IP Terraform provider in place, we can now start creating new Terraform templates in CAM. As usual, you can either use the CAM Template Designer or your own text editor to create the templates. Refer to the documentation of the F5 BIG-IP Terraform provider for details on what resources are supported. Below is an example of a simple main.tf Terraform template that automates the creation of a new Virtual IP address (VIP).

main.tf

provider "bigip" {
  username = "${var.f5_username}"
  password = "${var.f5_password}"
  address  = "${var.f5_address}"
}

resource "bigip_ltm_virtual_server" "vip" {
  name                       = "/_${var.vrf}/${var.vip_name}_VIP"
  port                       = "${var.vip_port}"
  source                     = "0.0.0.0/0"
  destination                = "${var.vip_destination_ip}"
  pool                       = "/_${var.vrf}/${var.vip_name}_Pool"
  mask                       = "255.255.255.255"
  profiles                   = ["/Common/tcp"]
  persistence_profiles       = ["/Common/tcp"]
  source_address_translation = "snat"
  snatpool                   = "/_${var.vrf}/${var.vip_name}_SNAT"
  ip_protocol                = "tcp"
  vlans                      = ["/_${var.vrf}/${var.vip_vlan}"]
  translate_address          = "enabled"
  translate_port             = "enabled"
  vlans_enabled              = "true"
}

Note that we used a number of variables defined in variables.tf; these can easily be exposed as CAM variables through camvariables.json, as documented here in the IBM Cloud Automation Manager Knowledge Center (a minimal sketch of such a file follows the variables.tf listing below).

variables.tf

##################### Variables ###############################
## Provider connection variables ##

variable "f5_username" {
  description = "F5 BIGIP user name"
}

variable "f5_password" {
  description = "F5 BIGIP user password"
}

variable "f5_address" {
  description = "F5 BIGIP Address"
}

## Other variables ## 

variable "tn_name" {
  description = "Tenant name"
  default     = "XYZ_PROD"
}

variable "vrf" {
  description = "VRF identifier"
  default     = "INT"
}

variable "vip_name" {
  description = "Virtual server name"
}

variable "vip_port" {
  description = "Virtual server port number"
  default     = "80"
}

variable "vip_destination_ip" {
  description = "Virtual server destination IP address"
  type        = "string"
}

variable "vip_vlan" {
  description = "Virtual server VLAN"
  default     = "VLAN_1234"
}
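
camvariables.json (sketch)

To illustrate the camvariables.json mentioned above, here is a minimal sketch that exposes two of the variables as CAM input parameters. The field names are recalled from the CAM sample templates and may differ between CAM versions, so treat this purely as a starting point and verify against the Knowledge Center:

{
  "template_input_params": [
    {
      "name": "vip_name",
      "type": "string",
      "label": "Virtual server name",
      "description": "Virtual server name",
      "required": true,
      "secured": false,
      "hidden": false
    },
    {
      "name": "f5_password",
      "type": "string",
      "label": "F5 BIGIP user password",
      "description": "F5 BIGIP user password",
      "required": true,
      "secured": true,
      "hidden": false
    }
  ]
}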

IBM Cloud Automation Manager 3.1.2.1 is now available

Originally posted on IBM Cloud Automation Manager blog by Hendrik van Run on 7 June 2019

IBM Cloud Automation Manager 3.1.2.1 was released on 10 May; this page in the CAM Knowledge Center documents what’s new in this release. As you can see there, this version of CAM can be installed on IBM Cloud Private 3.1.2 or 3.2.0, although IBM also supports it on earlier releases of IBM Cloud Private.

As always, IBM CAM 3.1.2.1 can be downloaded directly from Docker Hub. But as documented here in the CAM Knowledge Center, IBM also provides an offline installation package, which is available from IBM Fix Central here. Note that CAM 3.1.2.1 is not available from IBM Passport Advantage!

For those of you following this blog, it is worth highlighting that the issue with certain forbidden characters in the UI of CAM as reported earlier here is now resolved!

The CAM 3.1.2 UI prevents the use of certain special characters in parameters

Originally posted on IBM Cloud Automation Manager blog by Hendrik van Run on 8 April 2019

As some clients found out recently, the CAM 3.1.2 UI prevents the use of certain special characters in parameters (.,<>). This applies to a number of scenarios, for example when defining a Cloud Connection and specifying a password containing special characters.

However, it would equally apply when deploying a Terraform template that takes a parameter pointing to a specific part of an LDAP directory, e.g. ou=ibm,dc=com. Here the ‘,’ is one of the special characters not allowed by the CAM UI.

The good news is that the CAM UI should no longer have this limitation in the upcoming Q2 release. In the meantime, you can call the corresponding CAM API, which does not have any special character limitations:

CAM API to create a new Cloud Connection

CAM API to deploy a Terraform template

IBM Cloud Automation Manager 3.1.2 is now available!

Originally posted on IBM Cloud Automation Manager blog by Hendrik van Run on 12 March 2019

IBM Cloud Automation Manager 3.1.2 went GA on March 8th, 2019; read all about the new features and capabilities here.

The corresponding Helm chart is available online from github.com here. Offline packages can be downloaded as usual from IBM Passport Advantage; refer to this link for the part numbers. As always, the Community Edition is available free of charge, and you can find instructions on how to install it here.

And finally, one minor comment: note that the version of the CAM Helm chart for IBM Cloud Automation Manager 3.1.2 is actually 3.1.1!

How to deploy IBM Transformation Advisor to offline IBM Cloud Private 3.1.1 environment

I recently worked with a client who was running IBM Cloud Private 3.1.1 in an offline setup. This client had deployed IBM Cloud Private on IBM PureApplication Platform, something that has been described in more detail here.

With a simple ICP 3.1.1 environment up and running, the next challenge was to work through the process of manually installing the Helm chart and Docker images for the IBM Transformation Advisor. When deploying ICP 3.1.1 in an environment with internet access, this is not needed of course. As you can see below, the catalog was empty in our scenario.

Adding IBM Transformation Advisor to IBM Cloud Private Catalog

This page of the ICP 3.1.1 Knowledge Center describes how a Helm chart and its corresponding Docker images can be packaged up as an archive. The archive can be built on any machine that has internet connectivity and supports Docker, Helm and the cloudctl IBM Cloud Private command line interface. Once transferred to your ICP environment, it can be imported for use.

Create IBM Transformation Advisor archive

You start by cloning the git repo on github.com that contains the IBM Helm charts:

MacBook-Pro:~ hendrikvanrun$ cd TransformationAdvisor/
MacBook-Pro:TransformationAdvisor hendrikvanrun$ git clone https://github.com/IBM/charts.git
Cloning into 'charts'...
remote: Enumerating objects: 113, done.
remote: Counting objects: 100% (113/113), done.
remote: Compressing objects: 100% (88/88), done.
remote: Total 10761 (delta 32), reused 74 (delta 20), pack-reused 10648
Receiving objects: 100% (10761/10761), 37.87 MiB | 1.08 MiB/s, done.
Resolving deltas: 100% (6371/6371), done.
Checking out files: 100% (2795/2795), done.

Make sure you have the IBM cloudctl command line tool installed; you can download it from your IBM Cloud Private instance as described here in the Knowledge Center.

MacBook-Pro:ICP311Tools hendrikvanrun$ chmod 755 ../Downloads/cloudctl-darwin-amd64-3.1.1-973
MacBook-Pro:ICP311Tools hendrikvanrun$ sudo mv ../Downloads/cloudctl-darwin-amd64-3.1.1-973 /usr/local/bin/cloudctl
Password:
MacBook-Pro:ICP311Tools hendrikvanrun$ which cloudctl
/usr/local/bin/cloudctl

Now create a copy of the manifest.yaml file of the IBM TA Helm chart:

MacBook-Pro-8:ibm-transadv-dev hendrikvanrun$ cp ibm_cloud_pak/manifest.yaml /Users/hendrikvanrun/manifest.yaml

Now modify the file. In particular, we removed any references to Docker images for the ppc64le and s390x architectures (only leaving the amd64 ones); this simply speeds up the process and reduces the size of the archive. In addition, we explicitly referenced the Helm chart by URL: archive: "https://github.com/IBM/charts/blob/master/repo/stable/ibm-transadv-dev-1.9.3.tgz?raw=true" (this was originally set to archive: file:ibm-transadv-dev-1.9.3.tgz).

MacBook-Pro:ibm-transadv-dev hendrikvanrun$ cat /Users/hendrikvanrun/manifest.yaml
# © Copyright IBM Corporation 2017
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

charts:
  - archive: "https://github.com/IBM/charts/blob/master/repo/stable/ibm-transadv-dev-1.9.3.tgz?raw=true"
    repository-keys:
      - couchdb.image.repository
      - transadv.image.repository
      - transadvui.image.repository

images:
- image: ibmcom/transformation-advisor-db:1.9.3
  references:
  - repository: ibmcom/transformation-advisor-db-amd64:1.9.3
    pull-repository: ibmcom/transformation-advisor-db-amd64:1.9.3
    platform:
      os: linux
      architecture: amd64

- image: ibmcom/transformation-advisor-server:1.9.3
  references:
  - repository: ibmcom/transformation-advisor-server-amd64:1.9.3
    pull-repository: ibmcom/transformation-advisor-server-amd64:1.9.3
    platform:
      os: linux
      architecture: amd64

- image: ibmcom/transformation-advisor-ui:1.9.3
  references:
  - repository: ibmcom/transformation-advisor-ui-amd64:1.9.3
    pull-repository: ibmcom/transformation-advisor-ui-amd64:1.9.3
    platform:
      os: linux
      architecture: amd64

Now we can run the command below to build the archive.

Note: We set the environment variable CLOUDCTL_TRACE=true to obtain additional verbose output; this can be helpful for debugging issues and tracking progress.

MacBook-Pro:ibm-transadv-dev hendrikvanrun$ CLOUDCTL_TRACE=true cloudctl catalog create-archive -s /Users/hendrikvanrun/manifest.yaml -a /Users/hendrikvanrun/ibm-transadv-dev.tgz
create-archive: archive=/Users/hendrikvanrun/ibm-transadv-dev.tgz, chart=, manifest=/Users/hendrikvanrun/manifest.yaml, values=, architectures=, skipCleanup=false
Creating archive /Users/hendrikvanrun/ibm-transadv-dev.tgz from manifest /Users/hendrikvanrun/manifest.yaml
  Updated archive path: /Users/hendrikvanrun/ibm-transadv-dev.tgz
  Create command: spec={"Revision":"1.0","OutputFilename":"","Charts":[{"Archive":"https://github.com/IBM/charts/blob/master/repo/stable/ibm-transadv-dev-1.9.3.tgz?raw=true","RepositoryKeys":["couchdb.image.repository","transadv.image.repository","transadvui.image.repository"],"RegistryKeys":null}],"Images":[{"image":"ibmcom/transformation-advisor-db:1.9.3","tag":"","references":[{"repository":"ibmcom/transformation-advisor-db-amd64:1.9.3","pull-repository":"ibmcom/transformation-advisor-db-amd64:1.9.3","platform":{"os":"linux","architecture":"amd64"}}]},{"image":"ibmcom/transformation-advisor-server:1.9.3","tag":"","references":[{"repository":"ibmcom/transformation-advisor-server-amd64:1.9.3","pull-repository":"ibmcom/transformation-advisor-server-amd64:1.9.3","platform":{"os":"linux","architecture":"amd64"}}]},{"image":"ibmcom/transformation-advisor-ui:1.9.3","tag":"","references":[{"repository":"ibmcom/transformation-advisor-ui-amd64:1.9.3","pull-repository":"ibmcom/transformation-advisor-ui-amd64:1.9.3","platform":{"os":"linux","architecture":"amd64"}}]}]}, storageProvider={"Path":"/Users/hendrikvanrun/ibm-transadv-dev.tgz"}, baseDir=/Users/hendrikvanrun
      docker pull ibmcom/transformation-advisor-ui-amd64:1.9.3
      docker pull ibmcom/transformation-advisor-db-amd64:1.9.3
      docker pull ibmcom/transformation-advisor-server-amd64:1.9.3
Adding charts...
done
Adding image  ibmcom/transformation-advisor-db:1.9.3
Transferring image ibmcom/transformation-advisor-db-amd64:1.9.3 into the destination archive...
done
Adding image  ibmcom/transformation-advisor-server:1.9.3
Transferring image ibmcom/transformation-advisor-server-amd64:1.9.3 into the destination archive...
done
Adding image  ibmcom/transformation-advisor-ui:1.9.3
Transferring image ibmcom/transformation-advisor-ui-amd64:1.9.3 into the destination archive...
done
OK

Finally, we quickly examined the archive; you can see that it contains a Helm chart and 3 Docker images:

MacBook-Pro:ibm-transadv-dev hendrikvanrun$ tar -ztvf /Users/hendrikvanrun/ibm-transadv-dev.tgz
-rw-------  0 0      0       39157 Feb 18 18:00 charts/ibm-transadv-dev-1.9.3.tgz
-rw-------  0 0      0   547622400 Feb 18 17:59 images/3ef60eec5e2e815f26e5f11d6061ac53a03292c0dc6f3eacd94f003b089e9036.tar.gz
-rw-------  0 0      0  1474247680 Feb 18 18:00 images/8a182a770cd4c0ba4c8d8aaa4477256b53382f1cea02bd2673bfa084fa0c3209.tar.gz
-rw-------  0 0      0   549028864 Feb 18 18:00 images/a257dff2eb8f36f2960d990378cc80e5b84f021de1fce3ea51823aab5fa4f515.tar.gz
-rw-------  0 0      0        1536 Feb 18 18:02 manifest.json
-rw-------  0 0      0         802 Feb 18 18:02 manifest.yaml

Import IBM Transformation Advisor archive

Once packaged up, this archive can simply be transferred to a machine that has connectivity to your ICP environment. There it can be imported as shown below:

-bash-4.2# CLOUDCTL_TRACE=true cloudctl catalog load-archive --archive /tmp/ibm-transadv-dev.tgz
load-archive: archive=/tmp/ibm-transadv-dev.tgz, registry=mycluster.icp:8500/default, repo=local-charts, username=, password set=false
Expanding archive
Archive contents:
  charts/ibm-transadv-dev-1.9.3.tgz
  images/3ef60eec5e2e815f26e5f11d6061ac53a03292c0dc6f3eacd94f003b089e9036.tar.gz
  images/8a182a770cd4c0ba4c8d8aaa4477256b53382f1cea02bd2673bfa084fa0c3209.tar.gz
  images/a257dff2eb8f36f2960d990378cc80e5b84f021de1fce3ea51823aab5fa4f515.tar.gz
  manifest.json
  manifest.yaml
OK

GET https://mycluster.icp:8443/helm-api/api/v1/repos
Importing docker images
  Processing image: ibmcom/transformation-advisor-db-amd64:1.9.3
    Loading Image
      docker load -i /tmp/icp075570323/images/3ef60eec5e2e815f26e5f11d6061ac53a03292c0dc6f3eacd94f003b089e9036.tar.gz
    Tagging Image
      docker tag ibmcom/transformation-advisor-db-amd64:1.9.3 mycluster.icp:8500/default/ibmcom/transformation-advisor-db-amd64:1.9.3
    Pushing image as: mycluster.icp:8500/default/ibmcom/transformation-advisor-db-amd64:1.9.3
      docker push mycluster.icp:8500/default/ibmcom/transformation-advisor-db-amd64:1.9.3
    Creating manifest list as: mycluster.icp:8500/default/ibmcom/transformation-advisor-db:1.9.3
    Annotating manifest list: mycluster.icp:8500/default/ibmcom/transformation-advisor-db-amd64:1.9.3
    Pushing manifest list: mycluster.icp:8500/default/ibmcom/transformation-advisor-db:1.9.3
Digest: sha256:53213cebba4f552014c50bd44417d690b3169192580f462f503adcf01ea55fbc 434
  Processing image: ibmcom/transformation-advisor-server-amd64:1.9.3
    Loading Image
      docker load -i /tmp/icp075570323/images/8a182a770cd4c0ba4c8d8aaa4477256b53382f1cea02bd2673bfa084fa0c3209.tar.gz
    Tagging Image
      docker tag ibmcom/transformation-advisor-server-amd64:1.9.3 mycluster.icp:8500/default/ibmcom/transformation-advisor-server-amd64:1.9.3
    Pushing image as: mycluster.icp:8500/default/ibmcom/transformation-advisor-server-amd64:1.9.3
      docker push mycluster.icp:8500/default/ibmcom/transformation-advisor-server-amd64:1.9.3
    Creating manifest list as: mycluster.icp:8500/default/ibmcom/transformation-advisor-server:1.9.3
    Annotating manifest list: mycluster.icp:8500/default/ibmcom/transformation-advisor-server-amd64:1.9.3
    Pushing manifest list: mycluster.icp:8500/default/ibmcom/transformation-advisor-server:1.9.3
Digest: sha256:79f2786eaa45d66f33e5e8308a49dea8f823907322ecb0dd72353e571ff78fd1 434
  Processing image: ibmcom/transformation-advisor-ui-amd64:1.9.3
    Loading Image
      docker load -i /tmp/icp075570323/images/a257dff2eb8f36f2960d990378cc80e5b84f021de1fce3ea51823aab5fa4f515.tar.gz
    Tagging Image
      docker tag ibmcom/transformation-advisor-ui-amd64:1.9.3 mycluster.icp:8500/default/ibmcom/transformation-advisor-ui-amd64:1.9.3
    Pushing image as: mycluster.icp:8500/default/ibmcom/transformation-advisor-ui-amd64:1.9.3
      docker push mycluster.icp:8500/default/ibmcom/transformation-advisor-ui-amd64:1.9.3
    Creating manifest list as: mycluster.icp:8500/default/ibmcom/transformation-advisor-ui:1.9.3
    Annotating manifest list: mycluster.icp:8500/default/ibmcom/transformation-advisor-ui-amd64:1.9.3
    Pushing manifest list: mycluster.icp:8500/default/ibmcom/transformation-advisor-ui:1.9.3
Digest: sha256:42a61d7e2fd9bae94cce4058bd6e0da108a0a45aa7112071380d06485e8c5967 434
      docker rmi mycluster.icp:8500/default/ibmcom/transformation-advisor-ui-amd64:1.9.3
      docker rmi ibmcom/transformation-advisor-ui-amd64:1.9.3
      docker rmi mycluster.icp:8500/default/ibmcom/transformation-advisor-server-amd64:1.9.3
      docker rmi ibmcom/transformation-advisor-server-amd64:1.9.3
      docker rmi mycluster.icp:8500/default/ibmcom/transformation-advisor-db-amd64:1.9.3
      docker rmi ibmcom/transformation-advisor-db-amd64:1.9.3
OK

Uploading helm charts
  Processing chart: charts/ibm-transadv-dev-1.9.3.tgz
  Chart path: /tmp/icp075570323/charts/ibm-transadv-dev-1.9.3.tgz
  Updating chart values.yaml
replacing values.yaml image values in chart /tmp/icp075570323/charts/ibm-transadv-dev-1.9.3.tgz
replacing chart values.yaml image value ibmcom/transformation-advisor-db with mycluster.icp:8500/default/ibmcom/transformation-advisor-db
replacing chart values.yaml image value ibmcom/transformation-advisor-server with mycluster.icp:8500/default/ibmcom/transformation-advisor-server
replacing chart values.yaml image value ibmcom/transformation-advisor-ui with mycluster.icp:8500/default/ibmcom/transformation-advisor-ui
  New chart: /tmp/icp_tgz_207132374
  Uploading chart
  Chart metadata: {"Name":"ibm-transadv-dev","Version":"1.9.3"}
PUT https://mycluster.icp:8443/helm-repo/charts/ibm-transadv-dev/1.9.3
Loaded helm chart
  Status code: 201, Body: {"url":"https://mycluster.icp:8443/helm-repo/requiredAssets//ibm-transadv-dev-1.9.3.tgz"}
OK

Synch charts
GET https://mycluster.icp:8443/helm-api/api/v1/synch
Synch started
OK

Archive finished processing

Note: Make very sure that you use cloudctl catalog load-archive, and not cloudctl catalog load-chart! If you use the latter by accident, you will receive the error shown below, which does not directly tell you what you are doing wrong!

-bash-4.2# CLOUDCTL_TRACE=true cloudctl catalog load-chart --archive /tmp/ibm-transadv-dev.tgz
load-chart: archive=/tmp/ibm-transadv-dev.tgz, repo=local-charts
GET https://mycluster.icp:8443/helm-api/api/v1/repos
Loading helm chart
runtime error: invalid memory address or nil pointer dereference

Confirm that IBM Transformation Advisor is present in IBM Cloud Private Catalog

You should now be able to see the IBM Transformation Advisor Helm chart in the IBM Cloud Private Catalog.

The Docker images for IBM Transformation Advisor should also be visible in the private Docker registry of IBM Cloud Private.

Deploying IBM Transformation Advisor within IBM Cloud Private

Create namespace for IBM Transformation Advisor

Although not required, sometimes it can make sense to deploy IBM TA in its own namespace. We created the namespace “transadv” as shown below.

Note: When deploying IBM Transformation Advisor in its own namespace, the namespace must be bound to the ibm-anyuid-psp pod security policy (otherwise you will get a Pod Security Conflict when deploying the IBM TA Helm chart).
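
From the command line, creating the namespace and binding it to the pod security policy looks roughly as follows. This is a sketch: the ibm-anyuid-clusterrole name is taken from the ICP documentation and may differ in your cluster, so verify it before running the command.

-bash-4.2# kubectl create namespace transadv
-bash-4.2# kubectl -n transadv create rolebinding ibm-anyuid-clusterrole-rolebinding \
>   --clusterrole=ibm-anyuid-clusterrole --group=system:serviceaccounts:transadv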

Create Secret, Persistent Volume and Persistent Volume Claim

Deployment of the IBM TA requires a number of resources to be created. Although this can all be done from the ICP UI, we chose to do so using the kubectl command line. Please refer to this link on how to run kubectl against your ICP environment.

-bash-4.2# cat create_secrets.yaml
{
  "kind": "Secret",
  "apiVersion": "v1",
  "metadata": {
    "name": "transformation-advisor-secret",
    "namespace": "transadv",
    "annotations": {}
  },
  "type": "",
  "data": {
    "db_username": "YWRtaW4=",
    "secret": "YWRtaW4="
  }
}

-bash-4.2# cat create_local_pv.yaml
{
  "kind": "PersistentVolume",
  "apiVersion": "v1",
  "metadata": {
    "name": "transadv-ing-pv",
    "labels": {
      "type": "local"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "persistentVolumeReclaimPolicy": "Recycle",
    "capacity": {
      "storage": "8Gi"
    },
    "hostPath": {
      "path": "/usr/data_ing"
    }
  }
}

-bash-4.2# cat create_local_pvc.yaml
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "transadv-ing-pvc",
    "namespace": "transadv"
  },
  "spec": {
    "resources": {
      "requests": {
        "storage": "8Gi"
      }
    },
    "accessModes": [
      "ReadWriteOnce"
    ]
  }
}

-bash-4.2# kubectl create -f create_local_pv.yaml
persistentvolume/transadv-ing-pv created
-bash-4.2# kubectl create -f create_local_pvc.yaml
persistentvolumeclaim/transadv-ing-pvc created
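
For completeness, the secret can be created the same way (its namespace is already set inside the YAML). The two base64 values simply encode the string admin; a minimal sketch:

-bash-4.2# echo -n admin | base64
YWRtaW4=
-bash-4.2# kubectl create -f create_secrets.yaml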

Perform Helm chart deployment

Deploy the TA Helm chart by clicking Configure and specify the following:

Helm release name: something unique that will identify the instance of this Helm deployment
Target namespace: the namespace for this Helm deployment, in our case we use “transadv”

Under Parameters, expand All parameters and specify the following:

Ingress enabled: enabled
Edge node IP: the IP that will be used to access the TA instance (in our case the IP of the ICP node hosting the proxy)
Secret name: name of the secret that you created earlier, in our case we used “transformation-advisor-secret”

Use dynamic provisioning for persistent volume: disabled (as we were not using a storage provider that supports this, we used HostPath for our Persistent Volume)
Existing volume claim: the name of the Persistent Volume Claim to be used, in our case we used “transadv-ing-pvc”

Leave everything else as default and click Install to deploy the TA Helm chart!

Note: When we deployed the IBM Transformation Advisor Helm chart for the first time, the pods were unable to start because they could not pull their corresponding Docker images (even though these had been loaded successfully into the ICP private Docker registry). We performed a manual docker pull for those images; once done, the pods were able to start without any issues.

-bash-4.2# docker login mycluster.icp:8500
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
-bash-4.2# docker images | grep transformation

No results were returned, and the containers failed to start.

We manually pulled the docker images from the private docker registry:

-bash-4.2# docker pull mycluster.icp:8500/default/ibmcom/transformation-advisor-ui:1.9.3
1.9.3: Pulling from default/ibmcom/transformation-advisor-ui
7b722c1070cd: Pull complete
5fbf74db61f1: Pull complete
ed41cb72e5c9: Pull complete
7ea47a67709e: Pull complete
2eb939fa4385: Pull complete
ba48ea5327f0: Pull complete
f8d8d6c70588: Pull complete
d1b3189d96a2: Pull complete
1349d38b6605: Pull complete
db055daffd85: Pull complete
f71d615237f5: Pull complete
Digest: sha256:42a61d7e2fd9bae94cce4058bd6e0da108a0a45aa7112071380d06485e8c5967
Status: Downloaded newer image for mycluster.icp:8500/default/ibmcom/transformation-advisor-ui:1.9.3

-bash-4.2# docker pull mycluster.icp:8500/default/ibmcom/transformation-advisor-server:1.9.3
1.9.3: Pulling from default/ibmcom/transformation-advisor-server
7b722c1070cd: Already exists
5fbf74db61f1: Already exists
ed41cb72e5c9: Already exists
7ea47a67709e: Already exists
449210cbff3b: Pull complete
18c9d73c43bb: Pull complete
34afdf15398d: Pull complete
83a69d4d0146: Pull complete
bb817bf5c90c: Pull complete
ebb9f104335d: Pull complete
7714cd589690: Pull complete
d5191570a227: Pull complete
c95e32f7e195: Pull complete
7cad080066da: Pull complete
2e1f16201f65: Pull complete
1f5cb9ae3263: Pull complete
f8cb890e2f25: Pull complete
f02eaa87c172: Pull complete
f55e16af921f: Pull complete
166a3a13ed1f: Pull complete
083158f642b4: Pull complete
757736fbfb0b: Pull complete
8db7b05a5de9: Pull complete
01c21f9a3958: Pull complete
1d18923c77d0: Pull complete
e1121c6fa76c: Pull complete
1a3ef2d8bd87: Pull complete
Digest: sha256:79f2786eaa45d66f33e5e8308a49dea8f823907322ecb0dd72353e571ff78fd1
Status: Downloaded newer image for mycluster.icp:8500/default/ibmcom/transformation-advisor-server:1.9.3

-bash-4.2# docker pull mycluster.icp:8500/default/ibmcom/transformation-advisor-db:1.9.3
1.9.3: Pulling from default/ibmcom/transformation-advisor-db
8ee29e426c26: Already exists
6e83b260b73b: Already exists
e26b65fd1143: Already exists
40dca07f8222: Already exists
b420ae9e10b3: Already exists
ae26edaec184: Pull complete
b18cdf6af835: Pull complete
e79e24c5c94f: Pull complete
695a69d7b71d: Pull complete
a2c2c4795e22: Pull complete
72ba4b66585f: Pull complete
0a86d74d9091: Pull complete
0ffbf6cc8d02: Pull complete
19c63dcb0568: Pull complete
9281573ea32d: Pull complete
f5922e00415f: Pull complete
fd028270ff65: Pull complete
d6db3db3ea39: Pull complete
a1adf1cc489e: Pull complete
71e2319746bb: Pull complete
d352de541132: Pull complete
Digest: sha256:53213cebba4f552014c50bd44417d690b3169192580f462f503adcf01ea55fbc
Status: Downloaded newer image for mycluster.icp:8500/default/ibmcom/transformation-advisor-db:1.9.3

Now the images are visible to Docker, and the Transformation Advisor pods can start successfully:

-bash-4.2# docker images | grep transformation
mycluster.icp:8500/default/ibmcom/transformation-advisor-db        1.9.3                          3ef60eec5e2e        10 days ago         534MB
mycluster.icp:8500/default/ibmcom/transformation-advisor-server    1.9.3                          8a182a770cd4        11 days ago         1.46GB
mycluster.icp:8500/default/ibmcom/transformation-advisor-ui        1.9.3                          a257dff2eb8f        11 days ago         508MB

Configure OAuth integration

Because we deployed IBM TA with Ingress enabled, we found that we had to run a specific command to ensure that the OAuth integration between IBM TA and ICP is configured.

With kubectl configured to manage your ICP cluster as described here in the Knowledge Center, we first confirmed that all pods were running fine:

-bash-4.2# kubectl get pods -n transadv
NAME                                                              READY     STATUS      RESTARTS   AGE
transadv-ing-release-ibm-transadv-dev-193-oidc-deployment-8xbwq   1/1       Running     0          24m
transadv-ing-release-ibm-transadv-dev-193-oidc-registratiozkrht   0/1       Completed   0          24m
transadv-ing-release-ibm-transadv-dev-couchdb-7458d68fbf-p4gf9    1/1       Running     0          24m
transadv-ing-release-ibm-transadv-dev-server-69774cc6bd-pzz54     1/1       Running     0          24m
transadv-ing-release-ibm-transadv-dev-ui-76b54bb844-fgnqr         1/1       Running     0          24m

Then we ran the following command:

-bash-4.2# kubectl exec -n transadv -ti `kubectl get pods -n transadv -l release=transadv-ing-release \
> -l app=transadv-ing-release-ibm-transadv-dev-193-oidc-deployment \
> | grep "Running" | head -n 1 | awk '{print $1}'` \
> bash -- "/scripts/register-client.sh" \
> "`kubectl get cm oauth-client-map -n services -o yaml | grep PROXY_IP | grep -v '"PROXY_IP"' | awk '{print $2}'`" \
> "`kubectl get secret platform-oidc-credentials -o yaml -n kube-system | grep OAUTH2_CLIENT_REGISTRATION_SECRET: | awk '{print $2}'`"
set icp proxy ip to 10.226.68.234
set Oauth client registration secret to *
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
Creating new client registration.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2107  100  1130  100   977  10337   8938 --:--:-- --:--:-- --:--:-- 10366
HTTP/1.1 201 Created
Client is registered.

Obtain URL for IBM Transformation Advisor

Finally we obtained the URL for IBM TA:

-bash-4.2#   echo https://$INGRESS_IP/$APP_PATH
https://10.226.68.234/transadv-ing-release-ui
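
For reference, the two shell variables in that command were set beforehand along these lines (a sketch based on our deployment; substitute your own proxy IP and Helm release name):

-bash-4.2# INGRESS_IP=10.226.68.234          # the ICP proxy node IP we entered as the Edge node IP
-bash-4.2# APP_PATH=transadv-ing-release-ui  # <Helm release name>-ui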

You can also see from the HostPath /usr/data_ing that the CouchDB pod of IBM TA is writing files there:

-bash-4.2# ls -rtl /usr/data_ing/
total 112
-rw-r--r--  1 5984 5984  8374 Feb 19 14:49 _users.couch
-rw-r--r--  1 5984 5984  8368 Feb 19 14:49 _nodes.couch
-rw-r--r--  1 5984 5984  8374 Feb 19 14:49 _replicator.couch
drwxr-xr-x 10 5984 5984  4096 Feb 19 14:49 shards
-rw-r--r--  1 5984 5984 69820 Feb 19 14:49 _dbs.couch

Start using IBM Transformation Advisor

Originally posted on IBM Developer blog “IBM Cloud Best Practices from the Field” by Hendrik van Run on 20 February 2019 (1824 visits)

How to deploy IBM Cloud Automation Manager 3.1.0.0 iFix1

Originally posted on IBM Cloud Automation Manager blog by Hendrik van Run on 31 January 2019

IBM Cloud Automation Manager (CAM) 3.1.0.0 was released on 28th September 2018. Since then, a number of fixes and features have been added, which are available in CAM 3.1.0.0 iFix1:

  • Fix for deploying some helm charts with nested values in values.yaml.
  • Fix to prevent early timeout during execution of a long-running service.
  • Fix for encoding of some URLs that contain spaces in service definitions.
  • Added support for Huawei cloud connections.
  • Added support for VMware NSX-T cloud connections.

When performing an online installation of CAM on top of IBM Cloud Private (ICP), the latest CAM 3.1.0.0 iFix1 images are used. However, for those of you performing an offline installation, you will have to follow a different process:

  1. Install ICP 3.1 or 3.1.1 (or have a working ICP environment available)
  2. Deploy CAM 3.1.0.0
  3. Download CAM 3.1.0.0 iFix 1 from IBM Fix Central (make sure to select “IBM Cloud Private” as product and look for this interim fix: icp-cam-3.1.0-build507318 2018/11/20)
  4. Apply CAM 3.1.0.0 iFix 1 update as per the instructions here.

IBM Cloud Automation Manager 3.1 delivers improved offline installation experience

Originally posted on IBM Developer blog “IBM Cloud Best Practices from the Field” by Hendrik van Run on 17 October 2018 (1540 visits)

IBM Cloud Automation Manager (CAM) is an offering to simplify the orchestration of cloud resources. It uses Terraform providers to interact with a variety of resource providers, for example VMware, IBM Cloud and AWS, to provision virtual machines (either on-premises or off-premises). But the power of Terraform lies in its rich set of providers, allowing you to integrate with a variety of other resources, for example Kubernetes, IBM UrbanCode Deploy or F5 BIG-IP load balancers.

CAM runs on top of IBM Cloud Private (ICP), an offering that provides a supported Kubernetes cluster that can be deployed on- or off-premises. As part of your CAM license, you are entitled to install and run IBM Cloud Private native edition. One challenge that a number of on-premises CAM clients have been facing is that CAM was originally designed with the assumption that it would always have outbound access to the internet. However, many clients either do not tolerate outbound traffic at all, or only allow traffic to a set of whitelisted domains through an outbound proxy. This was a problem when deploying CAM as a Helm chart on ICP; in particular, the pod “cam-iaas” would not become ready for quite some time.

-bash-4.2# kubectl get pods -n services
NAME                                       READY     STATUS    RESTARTS   AGE
cam-bpd-cds-79d8d54cf4-f9dhb               1/1       Running   0          1h
cam-bpd-mariadb-5fd9c999fd-qdd9z           1/1       Running   0          1h
cam-bpd-mds-68c99dcf98-szmm7               1/1       Running   0          1h
cam-bpd-ui-7f9946f67f-j246p                1/1       Running   0          1h
cam-broker-65c85dcb9b-vk99v                1/1       Running   0          1h
cam-iaas-7f8746cc95-zlr4q                  0/1       Running   0          1h
cam-mongo-5cf6ffc5d9-mfr2f                 1/1       Running   0          1h
cam-orchestration-7d46f5b55d-bldk7         1/1       Running   0          1h
cam-portal-ui-7cc667fd56-kwdr2             1/1       Running   0          1h
cam-provider-helm-6dd8cb9994-rcdpk         1/1       Running   0          1h
cam-provider-terraform-6d55cf95f6-zwnwd    1/1       Running   0          1h
cam-proxy-594b9959f6-rswjx                 1/1       Running   0          1h
cam-service-composer-api-75fc4947b-7pdz6   1/1       Running   0          1h
cam-service-composer-ui-69fb9c4978-xgmkv   1/1       Running   0          1h
cam-tenant-api-59b5595cfb-s5jvt            1/1       Running   0          1h
cam-ui-basic-5959876cdc-fgsh9              1/1       Running   0          1h
cam-ui-connections-569d5b86fc-jx9jp        1/1       Running   0          1h
cam-ui-instances-7f6d8ff6bb-thczf          1/1       Running   0          1h
cam-ui-templates-95bd4575b-cvzz5           1/1       Running   0          1h
redis-755766755b-68xvg                     1/1       Running   0          1h

When examining the logs of the cam-iaas pod, we could see that it was attempting to download a number of Terraform templates from github.com. If internet access is blocked, each of those attempts eventually times out. Given that there are over 100 templates to download, it would typically take several hours before the cam-iaas pod was running.

[2018-10-17T17:28:08.641Z] ERROR: orpheus-api-common/110 on cam-iaas: Failed to import git template from https://github.com/IBM-CAMHub-Open/starterlibrary/BlueMix/terraform/hcl/nodejs (script=load-prebuilt-content)

Now the good news is that CAM 3.1 introduces two options to considerably improve the offline installation experience. Both options are also available in CAM 2.1.0.3 FP1, which can be downloaded from IBM Fix Central here.

1. There is now a new page, “Installing Cloud Automation Manager offline”, in the CAM 3.1 Knowledge Center, which explains how to optionally configure an outbound proxy. It also explains which domains have to be whitelisted; all templates downloaded by the cam-iaas pod come from the api.github.com domain.

2. You can also change the behaviour of the cam-iaas pod so that it never attempts to download those terraform templates from github.com in the first place. This is useful if you cannot use an outbound proxy. The option “Optimize for offline install” of the CAM helm chart is what you would use in that case.

Note that you can also enable this option when deploying CAM from the helm command line interface by setting the parameter global-offline=true:

helm install --name cam --namespace <namespace> ibm-cam-3.1.0.tgz --set global-iam.deployApiKey=<key> --set global-offline=true --tls

Note: At the time of writing, this option was not documented yet in the CAM Knowledge Center.

Things to be aware of when migrating from traditional WebSphere to WebSphere Liberty

Many clients are looking at ways to modernize their application development, taking advantage of technologies like Docker, Kubernetes and Helm. But many of these same clients still have a large set of Java EE applications in production today on traditional IBM WebSphere Application Server (abbreviated in this post to tWAS). In order to run those existing applications in a Docker container, they first need to be migrated to a Java EE application server that was designed with Docker in mind. IBM WebSphere Liberty is such an application server and is built on top of the IBM open source project Open Liberty. WebSphere Liberty brings a number of other benefits as it’s a relatively light-weight application server that starts in seconds and is a perfect fit for containerising Java EE applications.

IBM Cloud Private is a commercial offering from IBM and provides clients with an on-premises solution based on Docker, Kubernetes and Helm. It also comes with a tool to help clients to analyse their existing Java EE applications and determine the effort and complexity of migrating those applications to Liberty and/or IBM Cloud Private.

The purpose of this blog post is to highlight a few subtle but sometimes extremely relevant differences between tWAS and IBM WebSphere Liberty. We recommend that clients review these differences when they are about to undertake a migration.

Global transactions, remote EJB calls and WLM in tWAS

One of the luxuries of running Java EE applications on tWAS is that it comes with a proven transaction manager. It supports global (two-phase commit) transactions for Java EE applications, even when the transaction scope spans different Java Virtual Machine (JVM) processes[1]. Using the remote EJB interfaces, applications can make remote calls to other application components while propagating the transaction context.

IBM WebSphere Application Server Network Deployment (WAS ND) also comes with support for clusters, including EJB Workload Management (WLM)[2]. In other words, when an application component is called remotely through its EJB interfaces and resides on a tWAS ND cluster, WLM ensures that discovery calls are load balanced across the available cluster members and that transactional affinity is maintained thereafter. This provides resilience and simplicity even in distributed systems that are tightly coupled by transactions.

The diagram below illustrates the support for global transactions spanning different JVM processes, as well as built-in EJB WLM. This is only available when both the EJB client and the EJB server are deployed on tWAS ND clusters.

Global transactions, remote EJB calls and WLM in WebSphere Liberty

While Liberty has a number of advantages over tWAS, it does not support propagation of transaction context to remote EJBs. This is typically not required for modern Java EE applications using microservices and REST; however, it should be taken into account when migrating existing Java EE applications from tWAS.

The diagram below visualises the implications. You can make remote EJB calls, but you cannot span a global transaction across multiple JVM processes.

Note: WebSphere Liberty does support global transactions, as long as the transaction boundary remains within the Java Virtual Machine process of the application starting the transaction.

Microservices applications and global transactions in WebSphere Liberty

With the above information in mind, it is worth highlighting the support for global transactions in microservices applications running on WebSphere Liberty. However the transaction boundary here cannot span more than a single JVM process. See the diagram below for some examples.

Note that from a Java perspective, the transaction boundary cannot span more than a single JVM process. The actual scope of the global transaction can still involve several XA (two-phase commit) resources outside the JVM process (an IBM Db2 database, Oracle Database or IBM MQ queue manager). Furthermore, another EJB application can also be called within the scope of the global transaction; it just has to reside within the same JVM process. Note that in this case, the EJB call does not go over RMI/IIOP but is instead made over its local interface on the Java thread of the client.
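
As a concrete illustration of this last point, the sketch below shows a container-managed transaction on Liberty that spans two XA data sources and a co-located local EJB. All names (jdbc/ordersDb, jdbc/auditDb, PricingServiceLocal and the table names) are hypothetical; the point is simply that everything runs inside one JVM process:

import javax.annotation.Resource;
import javax.ejb.EJB;
import javax.ejb.Local;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

@Local
interface PricingServiceLocal {
    double priceFor(String orderId);
}

@Stateless
public class OrderService {

    // Both data sources must be defined as XA-capable data sources in server.xml
    @Resource(lookup = "jdbc/ordersDb")
    private DataSource ordersDb;

    @Resource(lookup = "jdbc/auditDb")
    private DataSource auditDb;

    // Local (in-process) EJB call; it joins the caller's global transaction
    @EJB
    private PricingServiceLocal pricingService;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void placeOrder(String orderId) throws SQLException {
        try (Connection orders = ordersDb.getConnection();
             Connection audit = auditDb.getConnection()) {
            double price = pricingService.priceFor(orderId);
            try (PreparedStatement ps =
                     orders.prepareStatement("INSERT INTO ORDERS (ID, PRICE) VALUES (?, ?)")) {
                ps.setString(1, orderId);
                ps.setDouble(2, price);
                ps.executeUpdate();
            }
            try (PreparedStatement ps =
                     audit.prepareStatement("INSERT INTO AUDIT_LOG (ORDER_ID) VALUES (?)")) {
                ps.setString(1, orderId);
                ps.executeUpdate();
            }
            // Both inserts commit or roll back together in one global (two-phase commit)
            // transaction, coordinated by the transaction manager of this single Liberty JVM.
        }
    }
}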


[1]IBM Knowledge Center WebSphere ND traditional 9.0.0.x – Global transactions
https://www.ibm.com/support/knowledgecenter/SSAW57_9.0.0/com.ibm.websphere.nd.multiplatform.doc/ae/cjta_glotran.html

[2] IBM Knowledge Center WebSphere ND traditional 9.0.0.x – Clusters and workload management
https://www.ibm.com/support/knowledgecenter/SSAW57_9.0.0/com.ibm.websphere.nd.multiplatform.doc/ae/crun_srvgrp.html 

Originally posted on IBM Developer blog “IBM Cloud Best Practices from the Field” by Hendrik van Run on 5 October 2018 (4277 visits)
