IBM WebSphere still underpins a number of IBM software products that do not run in containers on Red Hat OpenShift. Last year, @hvanrun and I worked with IBM Cloud Orchestrator (ICO), an enterprise orchestrator used by a major European client.
Although we have since migrated successfully to IBM Business Automation Workflow, at the time we had a requirement to make API calls from ICO to a VMware vSphere environment. ICO runs on IBM Business Process Manager 8.6, which in turn runs on IBM WebSphere Network Deployment 8.5.5. Since IBM WebSphere is a Java EE runtime, we decided to leverage the VMware Java SDK to make these API calls.
What are VMware vSphere and IBM Business Process Manager?
VMware vSphere is a popular commercial hypervisor for the x86-64 architecture, used to host VMs and manage them through a management plane called VMware vCenter.
IBM® Business Process Manager is a comprehensive business process management platform. It provides a robust set of tools to author, test, and deploy business processes, as well as full visibility into and insight for managing those business processes.
IBM® Business Process Manager is now available as IBM Business Automation Workflow and is part of IBM Cloud Pak for Automation. IBM Cloud Pak for Automation offers design, build, run, and automation services to rapidly scale your programs and fully execute and operationalize an automation strategy.
Our Challenge and Solution
Initially we had some challenges making this integration work. Although we followed the examples from the VMware Java SDK Samples (included in the SDK), the sample code did not work as expected.
In particular, on every call made through the SDK we received the following message in the WebSphere SystemErr.log:
[7/2/20 10:13:46:277 CEST] 000b140c SystemErr R Sample code failed
[7/2/20 10:13:46:278 CEST] 000b140c SystemErr R javax.xml.ws.WebServiceException: Error: Maintain Session is enabled but none of the session properties (Cookies, Over-written URL) are returned.
[7/2/20 10:13:46:278 CEST] 000b140c SystemErr R at org.apache.axis2.jaxws.ExceptionFactory.createWebServiceException(ExceptionFactory.java:173)
[7/2/20 10:13:46:278 CEST] 000b140c SystemErr R at org.apache.axis2.jaxws.ExceptionFactory.makeWebServiceException(ExceptionFactory.java:70)
[7/2/20 10:13:46:278 CEST] 000b140c SystemErr R at org.apache.axis2.jaxws.ExceptionFactory.makeWebServiceException(ExceptionFactory.java:118)
[7/2/20 10:13:46:278 CEST] 000b140c SystemErr R at org.apache.axis2.jaxws.BindingProvider.setupSessionContext(BindingProvider.java:355)
[7/2/20 10:13:46:278 CEST] 000b140c SystemErr R at org.apache.axis2.jaxws.BindingProvider.checkMaintainSessionState(BindingProvider.java:322)
We worked with IBM Support and ultimately found that this post on stackoverflow.com held the key to the solution of our problem. Using the Java code below, we were able to make calls using the VMware Java SDK from ICO without any issues!
import java.util.Map;
import javax.xml.ws.BindingProvider;
import com.vmware.vim25.VimPortType;
import com.vmware.vim25.VimService;

// First port: session maintenance enabled. The Axis2 JAX-WS runtime looks
// for a JSESSIONID cookie by default; pointing its custom cookie id property
// at VMware's vmware_soap_session cookie resolves the WebServiceException above.
vimService = new VimService(endpointURL);
vimPort = vimService.getVimPort();
Map<String, Object> ctxt = ((BindingProvider) vimPort).getRequestContext();
ctxt.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, url);
// Value of org.apache.axis2.Constants.CUSTOM_COOKIE_ID
String CUSTOM_COOKIE_ID = "customCookieID";
ctxt.put(CUSTOM_COOKIE_ID, "vmware_soap_session");
ctxt.put(BindingProvider.SESSION_MAINTAIN_PROPERTY, true);

// Retrieve the ServiceContent object and login.
// A second port (without session maintenance) performs the retrieval;
// its response provides the cookies for the subsequent calls.
VimService vimServiceRSC = new VimService(endpointURL);
VimPortType vimPortRSC = vimServiceRSC.getVimPort();
Map<String, Object> ctxtRSC = ((BindingProvider) vimPortRSC).getRequestContext();
ctxtRSC.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, url.toString());
serviceContent = vimPortRSC.retrieveServiceContent(servicesInstance);
Out of the box, IBM Cloud Automation Manager (CAM) ships with a number of Terraform providers. This is great for getting started and covers common use cases, and IBM provides support should you encounter any issues. However, there are often cases where you need one or more Terraform providers that are not installed in CAM by default. HashiCorp endorses a growing list of Terraform providers, which can be found here.
Let's assume that you have CAM installed and need to import one of those Terraform providers, for example the F5 BIG-IP one. It can be used to automate the creation of resources on an F5 BIG-IP device, for example to manage Virtual IP (VIP) addresses for load-balancing purposes. The steps below outline exactly what you need to do in order to start using the F5 BIG-IP Terraform provider with CAM.
Downloading and building the Terraform provider
Preparation
The F5 BIG-IP Terraform provider is available from github.com here. Before we proceed, please make sure to review the README.md of the F5 BIG-IP Terraform provider. Note that it requires Go 1.11 (or higher) to build the provider, and Terraform 0.10.x (or higher) to use it.
As documented here in the IBM Knowledge Center, CAM 3.1.2.1 uses Terraform 0.11.11, so we meet the minimum version requirements of the F5 BIG-IP Terraform provider.
We also need a machine on which to build the Terraform provider. This is typically a Linux server, but note that it must match the OS and processor architecture of the servers hosting your IBM Cloud Private and CAM environment! The machine needs internet connectivity to download Go itself and the Terraform provider files from github.com, and Go 1.11 (or higher) must be installed on it. In this blog post, we are using an Ubuntu server that does not yet have Go installed.
Download and install Go
Depending on the OS you are using, the instructions to install Go will vary. As we were using Ubuntu Linux, we downloaded the .tar.gz distribution directly onto the server and installed it:
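A minimal sketch of that install, assuming the Go 1.12.6 linux/amd64 build that the output below confirms (this is the standard tarball install described in the Go documentation; adjust the version and URL as needed):
root@tfbuild:~# wget https://dl.google.com/go/go1.12.6.linux-amd64.tar.gz
root@tfbuild:~# tar -C /usr/local -xzf go1.12.6.linux-amd64.tar.gz
root@tfbuild:~# export PATH=$PATH:/usr/local/go/bin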
Now run go version to confirm what version you have installed:
root@tfbuild:~# go version
go version go1.12.6 linux/amd64
This confirms that we just installed Go 1.12.6, so we meet the minimum version requirements to build the F5 BIG-IP Terraform provider!
Download and build the provider
A few preparatory steps are required.
To build the provider under a specific directory on your Linux server, such as your home directory, type the following:
root@tfbuild:~# mkdir go
root@tfbuild:~# echo $HOME
/root
root@tfbuild:~# export GOPATH=$HOME/go
root@tfbuild:~# cd $GOPATH
root@tfbuild:~/go# pwd
/root/go
You should now be in a subdirectory of your home directory, called go, with GOPATH set to this same directory. You will now create a subdirectory for your providers and change to it:
root@tfbuild:~/go# mkdir -p $GOPATH/src/github.com/terraform-providers
root@tfbuild:~/go# cd $GOPATH/src/github.com/terraform-providers
root@tfbuild:~/go/src/github.com/terraform-providers# pwd
/root/go/src/github.com/terraform-providers
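One step is assumed here before building: fetching the provider source itself. Cloning the github.com repository mentioned earlier creates the terraform-provider-bigip directory used below:
root@tfbuild:~/go/src/github.com/terraform-providers# git clone https://github.com/terraform-providers/terraform-provider-bigip.git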
root@tfbuild:~/go/src/github.com/terraform-providers# cd terraform-provider-bigip/
root@tfbuild:~/go/src/github.com/terraform-providers/terraform-provider-bigip# make build
==> Checking that code complies with gofmt requirements...
go install
This will have built the provider and placed it in the $GOPATH/bin directory:
root@tfbuild:~/go/src/github.com/terraform-providers/terraform-provider-bigip# ls -rtl $GOPATH/bin
total 33980
-rwxr-xr-x 1 root root 34793179 Jul 4 07:20 terraform-provider-bigip
You should see a file of approximately 34 MB, named terraform-provider-bigip.
Importing your Terraform provider into CAM
Review the instructions in the CAM Knowledge Center for importing your own Terraform provider here. You will need to log on to the IBM Cloud Private cluster where you installed CAM in order to import the new Terraform provider. We assume that you have both the cloudctl IBM Cloud Private CLI and the kubectl Kubernetes CLI installed on the machine where you built the Terraform provider; please refer to the IBM Knowledge Center on how to install them.
First, log on to the IBM Cloud Private cluster using cloudctl. Note that you need to log on with a user that has administrative permissions. Enter your user name and password, and when you are given a choice of namespaces, type the number next to "services".
This simplifies all your subsequent kubectl commands, since they will only look at the services namespace, which is where CAM is installed.
root@tfbuild:~# cloudctl login -a https://<ICP_master>:8443
Username> admin
Password>
Authenticating...
OK
Targeted account mycluster Account (id-mycluster-account)
Select a namespace:
1. cert-manager
2. default
3. ibmcom
4. icamserver
5. icp
6. kube-public
7. kube-system
8. platform
9. services
Enter a number> 9
Targeted namespace services
Configuring kubectl ...
Cluster "mycluster" set.
User "mycluster-user" set.
Context "mycluster-context" created.
Switched to context "mycluster-context".
OK
Configuring helm: /root/.helm
OK
Now use kubectl to identify the CAM provider pod and copy the new Terraform provider directly into the Terraform plugins directory of that pod:
root@tfbuild:~# kubectl cp terraform-provider-bigip $(kubectl get pods | grep cam-provider-terraform | awk '{print $1;}' | head -n 1):/home/terraform/.terraform.d/plugins/
If you did not target the "services" namespace at login, you will need to add -n services to both kubectl invocations in the command above. The file name terraform-provider-bigip of course refers to our F5 BIG-IP Terraform provider; if you are using a different Terraform provider, change the command accordingly.
Only a few more commands are required to set the correct ownership and permissions of the Terraform provider file inside the CAM provider pod:
root@tfbuild:~# kubectl exec $(kubectl get pods | grep cam-provider-terraform | awk '{print $1;}' | head -n 1) chown terraform:terraform /home/terraform/.terraform.d/plugins/terraform-provider-bigip
root@tfbuild:~# kubectl exec $(kubectl get pods | grep cam-provider-terraform | awk '{print $1;}' | head -n 1) chmod +x /home/terraform/.terraform.d/plugins/terraform-provider-bigip
You can run the command below to confirm the permissions on the Terraform provider file; if the output looks as shown below, we are all set!
root@tfbuild:~# kubectl exec $(kubectl get pods | grep cam-provider-terraform | awk '{print $1;}' | head -n 1) -- ls -l /home/terraform/.terraform.d/plugins/terraform-provider-bigip
-rwxr-xr-x 1 terraform terraform 0 Jul 4 21:55 /home/terraform/.terraform.d/plugins/terraform-provider-bigip
Using the new Terraform provider in CAM
With the F5 BIG-IP Terraform provider in place, we can now start creating new Terraform templates in CAM. As usual, you can either use the CAM Template Designer or your own text editor to create the templates. Refer to the documentation of the F5 BIG-IP Terraform provider for details on what resources are supported. Below is an example of a simple main.tf Terraform template that automates the creation of a new Virtual IP address (VIP).
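As a sketch only (the variable names are our own, and the exact attribute set of the bigip_ltm_virtual_server resource should be verified against the provider documentation), such a template might look like this:
provider "bigip" {
  address  = "${var.bigip_address}"
  username = "${var.bigip_username}"
  password = "${var.bigip_password}"
}

# Create a Virtual Server (VIP) in the /Common partition
resource "bigip_ltm_virtual_server" "vip" {
  name        = "/Common/${var.vip_name}"
  destination = "${var.vip_address}"
  port        = 443
  pool        = "/Common/${var.pool_name}"
}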
Note that we used a number of variables defined in variables.tf, which can easily be exposed by defining CAM variables through camvariables.json as documented here in the IBM Cloud Automation Knowledge Center.
IBM Cloud Automation Manager 3.1.2.1 was released on 10th May; this page in the CAM Knowledge Center documents what's new in this release. As you can see there, this version of CAM can be installed on IBM Cloud Private 3.1.2 or 3.2.0; however, IBM also supports it on earlier releases of IBM Cloud Private.
As always, CAM 3.1.2.1 can be downloaded from Docker Hub directly. But as documented here in the CAM Knowledge Center, IBM also provides an offline installation package, which is available from IBM Fix Central here. Note that CAM 3.1.2.1 is not available from IBM Passport Advantage!
For those of you following this blog, it is worth highlighting that the issue with certain forbidden characters in the UI of CAM as reported earlier here is now resolved!
As some clients found out recently, the CAM 3.1.2 UI prevents the use of certain special characters in parameters (.,<>). This applies to a number of scenarios, for example when defining a Cloud Connection and specifying a password containing special characters.
However, it would equally apply when deploying a Terraform template that takes a parameter pointing to a specific part of an LDAP directory, e.g. ou=ibm,dc=com. Here the ',' is one of the special characters not allowed by the CAM UI.
The good news is that the CAM UI should no longer have this limitation in the upcoming Q2 release. In the meantime, you can call the corresponding CAM API, which does not have any special character limitations.
IBM Cloud Automation Manager 3.1.2 went GA on March 8th, 2019; read all about the new features and capabilities here.
The corresponding Helm chart is available online from github.com here. Offline packages can be downloaded as usual from IBM Passport Advantage; refer to this link for the part numbers. As always, the Community Edition is available free of charge; you can find instructions on how to install it here.
And finally, one minor comment: note that the version of the CAM Helm chart for IBM Cloud Automation Manager 3.1.2 is actually 3.1.1!
I recently worked with a client who was running IBM Cloud Private (ICP) 3.1.1 in an offline setup. This client had deployed IBM Cloud Private on IBM PureApplication Platform, something that has been described in more detail here.
With a simple ICP 3.1.1 environment up and running, the next challenge was to work through the process of manually installing the Helm chart and Docker images for IBM Transformation Advisor. When deploying ICP 3.1.1 in an environment with internet access, this is of course not needed. As you can see below, the catalog was empty in our scenario.
Adding IBM Transformation Advisor to IBM Cloud Private Catalog
This page of the ICP 3.1.1 Knowledge Center describes how a Helm chart and its corresponding Docker images can be packaged up as an archive. This archive can be built on any machine that has internet connectivity and supports Docker, Helm and the cloudctl IBM Cloud Private command line interface. Once transferred to your ICP environment, it can be imported for use.
Create IBM Transformation Advisor archive
You start by cloning the git repo on github.com that contains the IBM Helm charts:
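For example (cloning the public IBM/charts repository; your local path will vary):
MacBook-Pro:~ hendrikvanrun$ git clone https://github.com/IBM/charts.git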
Make sure you have the IBM cloudctl command line tool installed; you can download it from your IBM Cloud Private instance as described here in the Knowledge Center.
Now modify the manifest.yaml file. In particular, we removed any references to Docker images for the ppc64le and s390x architectures, leaving only the amd64 ones; this simply speeds up the process and reduces the size of the archive. In addition, we explicitly referenced the Helm chart by URL: archive: "https://github.com/IBM/charts/blob/master/repo/stable/ibm-transadv-dev-1.9.3.tgz?raw=true" (this was originally set to archive: file:ibm-transadv-dev-1.9.3.tgz).
Now we can run the command below to build the archive.
Note: We set the environment variable CLOUDCTL_TRACE=true to obtain additional verbose output; this can be helpful to debug issues and track progress.
MacBook-Pro:ibm-transadv-dev hendrikvanrun$ CLOUDCTL_TRACE=true cloudctl catalog create-archive -s /Users/hendrikvanrun/manifest.yaml -a /Users/hendrikvanrun/ibm-transadv-dev.tgz
create-archive: archive=/Users/hendrikvanrun/ibm-transadv-dev.tgz, chart=, manifest=/Users/hendrikvanrun/manifest.yaml, values=, architectures=, skipCleanup=false
Creating archive /Users/hendrikvanrun/ibm-transadv-dev.tgz from manifest /Users/hendrikvanrun/manifest.yaml
Updated archive path: /Users/hendrikvanrun/ibm-transadv-dev.tgz
Create command: spec={"Revision":"1.0","OutputFilename":"","Charts":[{"Archive":"https://github.com/IBM/charts/blob/master/repo/stable/ibm-transadv-dev-1.9.3.tgz?raw=true","RepositoryKeys":["couchdb.image.repository","transadv.image.repository","transadvui.image.repository"],"RegistryKeys":null}],"Images":[{"image":"ibmcom/transformation-advisor-db:1.9.3","tag":"","references":[{"repository":"ibmcom/transformation-advisor-db-amd64:1.9.3","pull-repository":"ibmcom/transformation-advisor-db-amd64:1.9.3","platform":{"os":"linux","architecture":"amd64"}}]},{"image":"ibmcom/transformation-advisor-server:1.9.3","tag":"","references":[{"repository":"ibmcom/transformation-advisor-server-amd64:1.9.3","pull-repository":"ibmcom/transformation-advisor-server-amd64:1.9.3","platform":{"os":"linux","architecture":"amd64"}}]},{"image":"ibmcom/transformation-advisor-ui:1.9.3","tag":"","references":[{"repository":"ibmcom/transformation-advisor-ui-amd64:1.9.3","pull-repository":"ibmcom/transformation-advisor-ui-amd64:1.9.3","platform":{"os":"linux","architecture":"amd64"}}]}]}, storageProvider={"Path":"/Users/hendrikvanrun/ibm-transadv-dev.tgz"}, baseDir=/Users/hendrikvanrun
docker pull ibmcom/transformation-advisor-ui-amd64:1.9.3
docker pull ibmcom/transformation-advisor-db-amd64:1.9.3
docker pull ibmcom/transformation-advisor-server-amd64:1.9.3
Adding charts...
done
Adding image ibmcom/transformation-advisor-db:1.9.3
Transferring image ibmcom/transformation-advisor-db-amd64:1.9.3 into the destination archive...
done
Adding image ibmcom/transformation-advisor-server:1.9.3
Transferring image ibmcom/transformation-advisor-server-amd64:1.9.3 into the destination archive...
done
Adding image ibmcom/transformation-advisor-ui:1.9.3
Transferring image ibmcom/transformation-advisor-ui-amd64:1.9.3 into the destination archive...
done
OK
Finally, we quickly examined the archive; you can see that it contains a Helm chart and three Docker images:
MacBook-Pro:ibm-transadv-dev hendrikvanrun$ tar -ztvf /Users/hendrikvanrun/ibm-transadv-dev.tgz
-rw------- 0 0 0 39157 Feb 18 18:00 charts/ibm-transadv-dev-1.9.3.tgz
-rw------- 0 0 0 547622400 Feb 18 17:59 images/3ef60eec5e2e815f26e5f11d6061ac53a03292c0dc6f3eacd94f003b089e9036.tar.gz
-rw------- 0 0 0 1474247680 Feb 18 18:00 images/8a182a770cd4c0ba4c8d8aaa4477256b53382f1cea02bd2673bfa084fa0c3209.tar.gz
-rw------- 0 0 0 549028864 Feb 18 18:00 images/a257dff2eb8f36f2960d990378cc80e5b84f021de1fce3ea51823aab5fa4f515.tar.gz
-rw------- 0 0 0 1536 Feb 18 18:02 manifest.json
-rw------- 0 0 0 802 Feb 18 18:02 manifest.yaml
Import IBM Transformation Advisor archive
Once packaged up, this archive can simply be transferred to a machine that has connectivity to your ICP environment. There it can be imported as shown below.
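A sketch of the import, assuming you are already logged in with cloudctl and that the registry and namespace match the image names shown later in this post:
-bash-4.2# cloudctl catalog load-archive --archive ibm-transadv-dev.tgz --registry mycluster.icp:8500/default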
Note: Make very sure that you use cloudctl catalog load-archive, and not cloudctl catalog load-chart! If you use the latter by accident, you will receive an error that does not directly tell you what you are doing wrong!
Confirm that IBM Transformation Advisor is present in IBM Cloud Private Catalog
You should now be able to see the IBM Transformation Advisor Helm chart in the IBM Cloud Private Catalog:
The Docker images for IBM Transformation Advisor should also be visible in the private Docker Registry of IBM Cloud Private:
Deploying IBM Transformation Advisor within IBM Cloud Private
Create namespace for IBM Transformation Advisor
Although not required, it can sometimes make sense to deploy IBM TA in its own namespace. We created the namespace "transadv" as shown below.
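For example, with kubectl configured against the cluster, a single command suffices:
-bash-4.2# kubectl create namespace transadv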
Note: When deploying IBM Transformation Advisor in its own namespace, note that it requires a namespace with the ibm-anyuid-psp security policy (otherwise you will get a Pod Security Conflict when deploying the IBM TA Helm chart).
Create Secret, Persistent Volume and Persistent Volume Claim
Deployment of IBM TA requires a number of resources to be created. Although this can all be done from the ICP UI, we chose to do so using the kubectl command line, as sketched below. Please refer to this link on how to run kubectl against your ICP environment.
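As an illustration only: the secret and PVC names below match those used later in this post, but the literal keys and the YAML file names are hypothetical; take the exact definitions from the IBM TA Helm chart documentation.
-bash-4.2# kubectl -n transadv create secret generic transformation-advisor-secret \
>   --from-literal=db_username=<user> --from-literal=secret=<password>
-bash-4.2# kubectl apply -f transadv-pv.yaml
-bash-4.2# kubectl -n transadv apply -f transadv-ing-pvc.yaml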
Deploy the TA Helm chart by clicking Configure and specify the following:
Helm release name: something unique that will identify the instance of this Helm deployment
Target namespace: the namespace for this Helm deployment, in our case "transadv"
Under Parameters, expand All parameters and specify the following:
Ingress enabled: enabled
Edge node IP: the IP that will be used to access the TA instance (in our case the IP of the ICP node hosting the proxy)
Secret name: name of the secret that you created earlier, in our case "transformation-advisor-secret"
Use dynamic provisioning for persistent volume: disabled (as we were not using a storage provider that supports this, we used HostPath for our Persistent Volume)
Existing volume claim: the name of the Persistent Volume Claim to be used, in our case "transadv-ing-pvc"
Leave everything else as default and click Install to deploy the TA Helm chart!
Note: When we deployed the IBM Transformation Advisor Helm chart for the first time, the pods were unable to start because they could not pull their corresponding Docker images (even though those had been loaded successfully into the ICP private Docker registry). We performed a manual Docker pull for those images; once done, the pods were able to start without any issues.
-bash-4.2# docker login mycluster.icp:8500
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
-bash-4.2# docker images | grep transformation
No results were returned; the containers failed to start.
We manually pulled the docker images from the private docker registry:
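Based on the image names visible in the registry (shown below), the pull commands look like this:
-bash-4.2# docker pull mycluster.icp:8500/default/ibmcom/transformation-advisor-db:1.9.3
-bash-4.2# docker pull mycluster.icp:8500/default/ibmcom/transformation-advisor-server:1.9.3
-bash-4.2# docker pull mycluster.icp:8500/default/ibmcom/transformation-advisor-ui:1.9.3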
Now the images are visible to Docker, and the Transformation Advisor pods can start successfully:
-bash-4.2# docker images | grep transformation
mycluster.icp:8500/default/ibmcom/transformation-advisor-db 1.9.3 3ef60eec5e2e 10 days ago 534MB
mycluster.icp:8500/default/ibmcom/transformation-advisor-server 1.9.3 8a182a770cd4 11 days ago 1.46GB
mycluster.icp:8500/default/ibmcom/transformation-advisor-ui 1.9.3 a257dff2eb8f 11 days ago 508MB
Configure OAuth integration
Because we deployed IBM TA with Ingress enabled, we found that we had to run a specific command to ensure that the OAuth integration between IBM TA and ICP is configured.
With kubectl configured to manage your ICP cluster as described here in the Knowledge Center, we first confirmed that all pods were running fine:
-bash-4.2# kubectl get pods -n transadv
NAME READY STATUS RESTARTS AGE
transadv-ing-release-ibm-transadv-dev-193-oidc-deployment-8xbwq 1/1 Running 0 24m
transadv-ing-release-ibm-transadv-dev-193-oidc-registratiozkrht 0/1 Completed 0 24m
transadv-ing-release-ibm-transadv-dev-couchdb-7458d68fbf-p4gf9 1/1 Running 0 24m
transadv-ing-release-ibm-transadv-dev-server-69774cc6bd-pzz54 1/1 Running 0 24m
transadv-ing-release-ibm-transadv-dev-ui-76b54bb844-fgnqr 1/1 Running 0 24m
Then we ran the following command:
-bash-4.2# kubectl exec -n transadv -ti `kubectl get pods -n transadv -l release=transadv-ing-release \
> -l app=transadv-ing-release-ibm-transadv-dev-193-oidc-deployment \
> | grep "Running" | head -n 1 | awk '{print $1}'` \
> bash -- "/scripts/register-client.sh" \
> "`kubectl get cm oauth-client-map -n services -o yaml | grep PROXY_IP | grep -v '"PROXY_IP"' | awk '{print $2}'`" \
> "`kubectl get secret platform-oidc-credentials -o yaml -n kube-system | grep OAUTH2_CLIENT_REGISTRATION_SECRET: | awk '{print $2}'`"
set icp proxy ip to 10.226.68.234
set Oauth client registration secret to *
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Creating new client registration.
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2107 100 1130 100 977 10337 8938 --:--:-- --:--:-- --:--:-- 10366
HTTP/1.1 201 Created
Client is registered.
IBM Cloud Automation Manager (CAM) 3.1.0.0 was released on 28th September 2018. Since then, a number of fixes and features have been added and made available in CAM 3.1.0.0 iFix1:
Fix for deploying some helm charts with nested values in values.yaml.
Fix to prevent early timeout during execution of a long-running service.
Fix for encoding of some URLs that contain spaces in service definitions.
Added support for Huawei cloud connections.
Added support for VMWare NSX-T cloud connections.
When performing an online installation of CAM on top of IBM Cloud Private (ICP), the latest CAM 3.1.0.0 iFix1 images are used. However, for those of you performing an offline installation, you will have to follow a different process:
1. Install ICP 3.1 or 3.1.1 (or have a working ICP environment available)
2. Deploy CAM 3.1.0.0
3. Download CAM 3.1.0.0 iFix 1 from IBM Fix Central (make sure to select "IBM Cloud Private" as the product and look for this interim fix: icp-cam-3.1.0-build507318 2018/11/20)
4. Apply the CAM 3.1.0.0 iFix 1 update as per the instructions here.
IBM Cloud Automation Manager (CAM) is an offering that simplifies the orchestration of cloud resources. It uses Terraform providers to interact with a variety of resource providers, for example VMware, IBM Cloud and AWS, to provision virtual machines (either on-premises or off-premises). But the power of Terraform is its rich set of providers, allowing you to integrate with a variety of other resources, for example Kubernetes, IBM UrbanCode Deploy or F5 BIG-IP load balancers.
CAM runs on top of IBM Cloud Private (ICP), an offering that provides a supported Kubernetes cluster that can be deployed on- or off-premises. As part of your CAM license, you are entitled to install and run IBM Cloud Private native edition. One challenge that a number of on-premises CAM clients have been facing is that CAM was originally designed with the assumption that it would always have outbound access to the internet. However, many clients either do not tolerate outbound traffic at all, or only allow traffic to a set of whitelisted domains through an outbound proxy. This was a problem when deploying CAM as a Helm chart on ICP; in particular, the pod "cam-iaas" would not become ready for quite some time.
When examining the logs of the cam-iaas pod, we could see it attempting to download a number of Terraform templates from github.com. If internet access is blocked, each of those attempts eventually times out. Given that there are over 100 of those templates to be downloaded, it would typically take several hours until the cam-iaas pod was running.
Now the good news is that CAM 3.1 introduces two options to considerably improve the offline installation experience. Both options are also available in CAM 2.1.0.3 FP1, which can be downloaded from IBM Fix Central here.
1. You can route the downloads through an outbound proxy, so that the cam-iaas pod can still retrieve the Terraform templates from github.com.
2. You can also change the behaviour of the cam-iaas pod so that it never attempts to download those Terraform templates from github.com in the first place. This is useful if you cannot use an outbound proxy. The option "Optimize for offline install" of the CAM Helm chart is what you would use in that case.
Note that you can also enable this flag when deploying CAM with the Helm command line interface, using the parameter global.offline=true:
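A sketch of such a deployment, assuming the CAM chart is available as ibm-cam from a configured Helm repository (the chart reference, release name and any additional mandatory parameters depend on your environment):
-bash-4.2# helm install ibm-charts/ibm-cam --name cam --namespace services \
>   --set global.offline=true --tls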
Many clients are looking at ways to modernize their application development, taking advantage of technologies like Docker, Kubernetes and Helm. But many of these same clients still have a large set of Java EE applications in production today on traditional IBM WebSphere Application Server (abbreviated in this post to tWAS). In order to run those existing applications in a Docker container, they first need to be migrated to a Java EE application server that was designed with Docker in mind. IBM WebSphere Liberty is such an application server, built on top of the open source project Open Liberty. WebSphere Liberty brings a number of other benefits: it is a relatively lightweight application server that starts in seconds and is a perfect fit for containerising Java EE applications.
IBM Cloud Private is a commercial offering from IBM that provides clients with an on-premises solution based on Docker, Kubernetes and Helm. It also comes with a tool, IBM Transformation Advisor, to help clients analyse their existing Java EE applications and determine the effort and complexity of migrating those applications to Liberty and/or IBM Cloud Private.
The purpose of this blog post is to highlight a few subtle but sometimes extremely relevant differences between tWAS and IBM WebSphere Liberty. We recommend that clients review these differences when they are about to undertake a migration.
Global transactions, remote EJB calls and WLM in tWAS
One of the luxuries of running Java EE applications on tWAS is that it comes with a proven transaction manager. It supports global (two-phase commit) transactions for Java EE applications, even when the transaction scope spans different Java Virtual Machine (JVM) processes[1]. Using the remote EJB interfaces, applications can make remote calls to other application components while propagating the transaction context.
IBM WebSphere Network Deployment (WAS ND) also comes with support for clusters, including EJB Workload Management (WLM)[2]. In other words, when an application component is called remotely through the EJB interfaces and resides on a WAS ND cluster, WLM ensures that discovery calls are load balanced across the available cluster members, with transactional affinity maintained thereafter. This provides resilience and simplicity even in distributed systems that are tightly coupled by transactions.
The diagram below illustrates the support for global transactions spanning different JVM processes as well as built-in EJB WLM. This is only available when both EJB client and EJB server are deployed on tWAS ND clusters.
Global transactions, remote EJB calls and WLM in WebSphere Liberty
While Liberty has a number of advantages over tWAS, it does not support propagating a transaction context across remote EJB calls. This is typically not required for modern Java EE applications built around microservices and REST, but it should be taken into account when migrating existing Java EE applications from tWAS.
The diagram below visualises the implications. You can make remote EJB calls, but you cannot span a global transaction across multiple JVM processes.
Note: WebSphere Liberty does support global transactions, as long as the transaction boundary remains within the JVM process of the application starting the transaction.
Microservices applications and global transactions in WebSphere Liberty
With the above information in mind, it is worth highlighting the support for global transactions in microservices applications running on WebSphere Liberty, keeping in mind that the transaction boundary cannot span more than a single JVM process. See the diagram below for some examples.
Note that from a Java perspective the transaction boundary cannot span more than a single JVM process, but the actual scope of the global transaction can still involve several XA (two-phase commit) resources outside the JVM process (an IBM Db2 database, an Oracle Database or an IBM MQ queue manager). Furthermore, another EJB application can be called within the scope of the global transaction; it just has to reside within the same JVM process. Note that in this case the EJB call does not go over RMI/IIOP, but is instead made over the EJB's local interface on the Java thread of the client.