DBC Technical Reference

Well, reference documents vanishing from the IBM site is nothing new. This is what happened with the DBC Technical Reference Guide that was available on the IBM site until last year. I was looking for it a few days ago but couldn't find it there. I thought it was gone along with all the other developerWorks articles.

But thanks to Cezar Tipa, who had an offline copy, and Biplab Das Choudhury, who shared it on LinkedIn, a copy of it is still available.

I downloaded a copy and uploaded it here. You can find a link to the document below.

DBC XML Format for Technical Consultants

Update JAVA_HOME in WebSphere

This example is for an old version, but the approach is still helpful.

e.g. –

Unix: JAVA_HOME=/usr/local/java/jdk1.4.0_13

Windows: JAVA_HOME=C:\j2sdk1.4.2_14

Update the JAVA_HOME variable to point to the location where the JDK is installed. Restart WebSphere Application Server for the changes to take effect.
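Once the server is back up, a quick sanity check (a minimal sketch, not specific to WebSphere – any class running on the server's JVM will do) is to print the JVM's own properties and confirm it picked up the new JDK:

    // Prints the install location and version of the JDK this JVM runs under.
    public class WhichJdk {
        public static void main(String[] args) {
            System.out.println("java.home    = " + System.getProperty("java.home"));
            System.out.println("java.version = " + System.getProperty("java.version"));
        }
    }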

Reference: https://community.microstrategy.com/s/article/KB16159-How-to-change-the-location-of-the-Java-JDK-JAVA-HOME?language=en_US

File Deployment From WebSphere Console

Have you ever come across a situation where your development WebSphere environment is on Linux/Unix and you don't have access to the filesystem to hot-deploy your class changes for testing? It's pretty common if you work for a big company with a lot of security policies in place. But what do you do? Do you go through all the pain of building the EAR and deploying it to WebSphere, only to realize that there was a small piece of code you missed and you have to build and deploy all over again? Isn't that frustrating?

What I am going to tell you is not as quick a fix as a hot deployment, but it does save some deployment time. With this, you won't need to build the EAR all over again; you just have to do the deployment, which takes about half the time of a full EAR deployment.

This functionality in WebSphere lets you deploy delta changes to an application. All you have to do is: 1) select the application you want to deploy the changes to, 2) specify the path where the file has to be placed inside the application, and 3) select the file on the local filesystem to be uploaded, and WebSphere will deploy that file for you.

Let's take an example – I want to deploy a class, abc.class, into businessobjects.jar at this path – com\pk\cron. Under the Enterprise Applications section in the WebSphere console there is an option called Update, which lets you upload or update this file in businessobjects.jar. With this you can upload new files or update existing files.

Follow along with the screenshots; they are pretty much self-explanatory. The only thing you have to be careful about is the path inside the application EAR; it has to be correct or WebSphere will place the file in the wrong location. If you are not sure about the path, open the EAR, navigate to the folder where you want to deploy the file, and copy that path (for archives, the jar tf command will list the entries).

In this case the path is – businessobjects.jar\com\pk\cron\abc.class
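For reference, the same single-file update can also be scripted through wsadmin instead of the console (a sketch in Jython – the application name MAXIMO and the /tmp path are made up for the example; verify the options against your WebSphere version):

    # Run inside wsadmin: wsadmin.sh -lang jython
    AdminApp.update('MAXIMO', 'file',
        '[-operation update '
        '-contents /tmp/abc.class '
        '-contenturi businessobjects.jar/com/pk/cron/abc.class]')
    AdminConfig.save()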

businessobjects.jar BEFORE uploading the file – 


businessobjects.jar AFTER uploading the file – 

Until next time!

Load Balancing Using IHS – 101

How does IHS load balancing work?

IHS, or the IBM HTTP Server, is an Apache HTTP Server implementation with a topping of IBM configuration. IHS is not part of WebSphere, but you can install it separately and add it to WebSphere; that way you can manage it using the WebSphere console.

In addition to all the functions of a web server, IHS can also be used as a load balancer, which means you can use it to spread traffic evenly across all your JVMs; and if a JVM crashes or is otherwise down, traffic won't be routed to that JVM.

For IHS to do the load balancing, it needs to understand the clusters, the JVMs in those clusters, virtual hosts, context roots, etc., and since IHS is not part of WebSphere, there has to be a way for it to get that information. This is where the plugin comes in. The plugin is driven by an XML file (plugin-cfg.xml) that has details of the clusters and their members, the virtual hosts associated with the applications on those clusters, and a lot of other details. By default the plugin re-reads this file every minute so that it picks up the latest configuration.
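To give you an idea, here is a trimmed, illustrative fragment of a plugin-cfg.xml (cluster, server, and host names are made up for the example):

    <ServerCluster Name="MXCluster" LoadBalance="Round Robin" RetryInterval="60">
       <Server Name="node1_MXServer1" CloneID="a1b2c3">
          <Transport Hostname="node1.example.com" Port="9080" Protocol="http"/>
       </Server>
       <Server Name="node2_MXServer2" CloneID="d4e5f6">
          <Transport Hostname="node2.example.com" Port="9080" Protocol="http"/>
       </Server>
       <PrimaryServers>
          <Server Name="node1_MXServer1"/>
          <Server Name="node2_MXServer2"/>
       </PrimaryServers>
    </ServerCluster>
    <UriGroup Name="default_host_MXCluster_URIs">
       <Uri Name="/maximo/*" AffinityCookie="JSESSIONID"/>
    </UriGroup>
    <Route ServerCluster="MXCluster" UriGroup="default_host_MXCluster_URIs" VirtualHostGroup="default_host"/>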

The plugin has to be installed separately and linked to IHS. If you use the launchpad to install the plugin, it takes care of associating the plugin with IHS; if you are installing it manually, you need to make sure you link it to IHS yourself.

That’s it for today!

Until next time!

Things You Must Know Before Using Migration Manager

Migration Manager is a really great tool for migrating configurations from one Maximo instance to another. It does all the validations and tells you about any dependency you may have missed in the package. Unfortunately, it doesn't tell you that while you are creating the package, only while the package is being deployed, and I wouldn't blame Migration Manager for that. When the package is being created in the source environment, it doesn't know where it is going to be deployed or whether the target environment has all the dependencies. It is only when the package is being deployed that Migration Manager finds out that the dependencies don't exist in the target environment, and the package errors out. I'll take an example and explain.

Let's say you want to migrate an object from one environment to another. You create a Migration Manager package for that object which includes all its attributes but not the domains associated with those attributes. When you deploy this package to the target environment, it will fail if those domains don't exist there.

Migration Manager has its own section for specifying dependencies, and most of the out-of-the-box migration groups have dependencies already specified.

2017-11-06 13_48_22-Migration Groups

2017-11-07 09_58_25-Migration Groups

But I prefer not to use this dependency functionality of Maximo, and here's why – if you look at the images above, Data Dictionary is one of the dependencies of the application migration group, which is how it should be, but what this does is take the entire data dictionary from the source environment and deploy it in the target environment.

You really wouldn't want to do that, because:

  1. If a developer has created any attribute or domain for testing some functionality and didn't delete it, it would be migrated too. Worse, if someone changed or removed a class from an OOB attribute or removed the domain associated with it, that change would be migrated too. This could break existing functionality.
  2. Even if none of that happens, deploying the package would take an unnecessarily long time, since the entire data dictionary from the source would have to be compared against the target's existing data dictionary for changes.

Unfortunately, if you don't know about this feature of Migration Manager or are not careful about it, you might unknowingly migrate configurations that were not required.

So it's better to maintain dependencies yourself than to have Maximo maintain them for you.

Have a great day!

Cheers!

Why isn’t That Cron Task Running!

All of them are running, running and running, the long, never-ending marathon; but what is it with that one lone runner, why isn't he running?

It's a rare occurrence, but you may come across a cron task that sits there not executing at all, while all the other cron tasks are running just fine. You see that it is set to active, and the history logs tell you that it started successfully or was working fine until a certain point, but after that there hasn't been any action.

What is it with this cron task?

It cannot be admin mode, since other cron tasks are running fine. It could be the mxe.crontask.donotrun property, but in that case it wouldn't have started or done anything at all. Still, it's better to double-check.

Rare but not impossible: this may be caused by an incorrect entry in the task scheduler table.

What is the Task Scheduler Table?

The TASKSCHEDULER table is used by Maximo to control the schedule of cron tasks.

Before a cron task runs, Maximo gets the last-run information from the TASKSCHEDULER table; if no information is found, it inserts a new record into the TASKSCHEDULER table as needed.

For more information about the task scheduler table, check this post.

What’s the issue with the Task Scheduler?

As I mentioned earlier, the issue may be caused by an incorrect entry in the task scheduler table, especially when LASTRUN is greater than LASTEND, which means the cron task is running (or at least the system thinks it is) and won't be invoked again.

TaskScheduler
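To spot such stuck entries, a query along these lines helps (column names as I know them from the TASKSCHEDULER table – verify them in your environment):

    SELECT CRONTASKNAME, INSTANCENAME, LASTRUN, LASTEND
      FROM TASKSCHEDULER
     WHERE LASTRUN > LASTEND;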

This could be because the JVM crashed while the cron task was running, so the execution never completed and LASTEND was never updated. It is also possible that the cron task actually got into a hung state during execution.

Another possibility is that there are two entries for the same cron task in the task scheduler table, which confuses the system. Maybe the cron task runs so frequently that a second entry was inserted before the first execution had time to complete. A cron task running on multiple JVMs in a clustered environment could also be the cause.

How to fix it?

Delete the entries for the cron task from the task scheduler table. The cron task manager will see that the cron task is active but doesn't have an entry in TASKSCHEDULER, so it will create a new entry and wait until the next scheduled time to run the cron task.
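As a sketch (MYCRONTASK is a hypothetical cron task name – back up the rows first and make sure the cron task isn't genuinely mid-execution):

    DELETE FROM TASKSCHEDULER
     WHERE CRONTASKNAME = 'MYCRONTASK';
    COMMIT;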

If you want to understand the sequence of events that happen when a cron task starts and shuts down, you can check this post.

Until next time!

Cheers!

Adding a Field to an Existing Interaction in Maximo

If you have ever worked on interactions in Maximo, you would know how easy it is to set one up and start fetching data from a webservice. It helps you avoid the hassle of writing and maintaining custom code, with just a few configuration steps. It practically does all the work for you – from creating objects and relationships to setting up the dialog box and sigoption to display the data.

But when it comes to adding a new attribute to be fetched from the webservice and displayed, the whole thing has to be created again. This means the interaction and all its configurations have to be scrapped and recreated with the new set of attributes. That's a lot of work for just one field.

So I thought: what if we manually add this attribute to the objects and the other configurations and see if it works?

To explain this better, I have an interaction that fetches all the football matches that have been played in a city (the city being the location record in Maximo). The data is only fetched from the webservice for display; it is not inserted or committed anywhere.

Soap_Response

2017-04-25 21_56_38-Locations

Now I want to show the ‘Result’ field from the webservice response in the dialog box, but the problem is that this field was not selected while creating the interaction, so it is not present in the object that handles the webservice response. To show this field in the dialog, I did the following –

 

  1. Added the new attribute to the response object – The attribute name and data type must be the same as those of the field in the webservice. The description of the attribute must be the path of the field in the response XML. This is what the description of my field looks like – ns0:GamesPerCityResponse/ns0:GamesPerCityResult/ns0:tGameInfo/ns0:sResult

    Soap_Result

    2017-04-25 22_12_23-Database Configuration

     

  2. Included the new attribute in the response object structure

    2017-04-25 22_12_42-Interactions

    2017-04-25 22_16_13-Object Structures

     

  3. Added the new attribute to the dialog in Application Designer

    2017-04-25 22_21_51-Locations 

But I noticed that the data was still not being fetched and shown in the field. So I did some more research and found that there is an OBP (Object Blueprint) field in the MAXINTERACTION table that stores the XML schema of the request and response for the interaction. The new field needs to be added to that schema for it to work. So I updated the schema, and voila! It worked!
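For illustration, the change boiled down to adding an element for the new field to the response type inside that schema – something along these lines (the type and element names follow my example; your schema will differ):

    <xsd:complexType name="tGameInfo">
       <xsd:sequence>
          <!-- ...existing elements... -->
          <xsd:element name="sResult" type="xsd:string" minOccurs="0"/>
       </xsd:sequence>
    </xsd:complexType>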

OBP

2017-04-25 22_59_21-Locations

Please note that this solution is applicable only if the field already exists in the webservice schema and only needs to be displayed in Maximo.

Enjoy!

Until next time!

Maximo, IBM CDC and its Limitations When Replicating Schema

A Little Background –

IBM Change Data Capture (CDC) is part of the IBM InfoSphere suite and is mainly used as part of the ETL process. It has great transformation and data replication capabilities and supports most enterprise database products.

For replication, CDC captures changed data directly from database logs rather than querying the database, which makes it quite efficient. It provides near-real-time replication, though actual latency depends on a lot of parameters like network latency, I/O, etc.

Since CDC has such great replication capabilities, its applications can go beyond just the ETL process.

The Use Case –

In one such case, we are using CDC to replicate the Maximo database to create replicas of Maximo for different sites. Using CDC, we replicate just the site-specific data to the replicas, which helps keep the database size low compared to the main data center.

These replicas act as standby instances of Maximo, which helps maintain high availability of Maximo at each site for critical assets.

Bidirectional replication is enabled to sync data back from the replicas to the data center in case a standby instance is used as the primary node due to unavailability of the data center Maximo.

The Problem Statement –

CDC replicates data very efficiently, but when it comes to schema changes, it's a problem. CDC does have a provision to replicate schema changes, but it comes with a lot of limitations. I'll discuss those limitations in detail for the DB2 database, but before that, let's see what these changes are.

When a bug fix, enhancement, or ifix is applied to Maximo, it has to be applied to the Maximo replicas as well. These patches may include schema changes like column length changes or new stored procedures, triggers, etc.

Let's say we have 20 sites and we maintain a standby instance of Maximo for each of them. Applying these patches to all 20 instances would be a time-consuming activity and would require a large amount of production downtime.

For CDC to replicate data, the schema definition at both the source and destination databases must be identical. So these patch deployments on all instances become a necessity.

We could try to replicate the schema through CDC. Here are the considerations and limitations of replicating schema through CDC for DB2 for LUW ver. 11.3.0 –

Considerations –

The following InfoSphere CDC issues should be taken into consideration before you attempt DDL replication:

  • A table targeted for DDL replication cannot be involved in any other InfoSphere CDC table mapping. That is to say, you cannot mirror from two different source tables to a single target table.
  • Conflict Detection and Resolution is not supported for DDL replication.
  • Differential refresh and Refreshing a Subset of Rows are not supported for tables for which DDL operations are being replicated.
  • Derived columns and derived expressions are not supported for tables for which DDL operations are being replicated.
  • LOB columns are selected from the database at the time of replication using the key or unique index (if any) associated with the source table. Therefore, only the current image of a LOB column field in a source table will be sent at the time of replication. If latency is present for a subscription that is replicating DDL operations and there are changes to the list of columns which make up the key used for searching, the target column may contain a null value until the next DML change on that row. If latency is present and the key of the row changes, the target column may contain a null or incorrect value.
  • Bidirectional replication is not supported for DDL replication.
  • When InfoSphere CDC encounters certain object types that cannot be replicated, such as UDTs (user-defined types), the table will be parked. You will need to determine whether the unsupported table is essential to your replication solution. If you decide that it is not essential, you should modify your rule set to exclude the table. If you determine that the table is essential, the table will have to be dropped, re-created, and its structure changed in order to be supported for DDL replication.

Limitations –

The following types of DDL changes can be replicated by InfoSphere CDC for DB2 – 

  • CREATE TABLE
  • DROP TABLE
  • ALTER TABLE ADD COLUMN
  • ALTER TABLE ALTER COLUMN SET DATA TYPE

Table-related objects for which DDL replication is NOT supported by InfoSphere CDC for DB2 –

  • Views
  • Synonyms
  • Triggers
  • Materialized query tables
  • Tables containing user-defined types

Database-related objects for which DDL replication is NOT supported by InfoSphere CDC for DB2 –

  • Functions
  • Stored procedures
  • Packages
  • Java classes
  • Database links
  • Roles
  • Directories
  • Dimensions
  • Libraries
  • Profiles
  • Users
  • Sequences
  • Tablespaces
  • Schemas

Conclusion – 

With all these limitations, replication of schema through CDC is not a viable solution. A better approach would be to use deployment scripts for patch deployment on all the instances.

If you have any comments or opinions, do write them in the comments section.

Have a great day!

Data Restrictions and Their Impact on MboSets

At some point in time, every Maximo developer or support engineer will have implemented data restrictions to hide data or make it uneditable for users. Since they are fairly easy to implement and only require a sign-out and sign-in to take effect for a user, they are very widely used. However, if not implemented carefully, they can seriously impact functionality and performance.

I am going to explain how MboSets are impacted by data restrictions, the qualified type of data restriction in particular.

When a qualified data restriction is implemented on an object with a condition, the user has access only to those records of the object that are fetched by the condition's where clause, i.e., the records that qualify the condition. This clause is applied over and above any existing where clause.

When it comes to Java customization, the impact of data restrictions on an MboSet depends on the way the MboSet is fetched in the code. If the MboSet is fetched through MXServer, the data restrictions apply (remember, we pass the UserInfo to fetch the set), but if the set is fetched through a relationship, the data restrictions don't apply.

You may ask: why this different behavior? That's because when an MboSet is fetched through a relationship, it becomes a child MboSet, and the rule is that an owner must have access to all its children. That's why!

Let me explain this with an example. Let's say there is a data restriction on the asset object to show only those assets that have their type as FACILITIES. When a user navigates to the Assets application, he is shown only FACILITIES-type assets. But when he navigates to the subassembly section of any of these assets, he is shown all child assets of that asset, which may or may not be of type FACILITIES. Seems justified, right?

Note: The subassembly section in Assets is a relationship of the asset object with itself, used to show child assets.
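Here is a minimal Java sketch of the two fetch styles, using the FACILITIES example above (CHILDASSET is a hypothetical relationship name standing in for the subassembly relationship):

    import java.rmi.RemoteException;
    import psdi.mbo.MboRemote;
    import psdi.mbo.MboSetRemote;
    import psdi.security.UserInfo;
    import psdi.server.MXServer;
    import psdi.util.MXException;

    public class RestrictionDemo {
        public void demo(UserInfo userInfo) throws MXException, RemoteException {
            // Fetched through MXServer with the user's UserInfo:
            // qualified data restrictions DO apply, so this set holds
            // only FACILITIES-type assets for this user.
            MboSetRemote assets =
                    MXServer.getMXServer().getMboSet("ASSET", userInfo);

            MboRemote asset = assets.getMbo(0);
            if (asset != null) {
                // Fetched through a relationship from an owning Mbo:
                // this is a child set, so the restriction does NOT
                // apply and children of any type are returned.
                MboSetRemote children = asset.getMboSet("CHILDASSET");
                System.out.println("Visible children: " + children.count());
            }
        }
    }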

2016-11-15-20_38_48-security-groups

2016-11-15-20_42_39-assets

2016-11-15-20_42_49-assets

Let's see how and where this has an impact –

Domains – Table domains are fetched using MXServer, so data restrictions apply. But if you are implementing a table domain through code and use a relationship in the getList method, then data restrictions won't apply.

Dialogs – Dialogs are mostly created using a relationship, so data restrictions would not apply. In some cases, mboname is used instead of a relationship; in that case, data restrictions would apply.

Non-Persistent Fields – Non-persistent fields are populated through Java code. If the code uses MXServer to fetch data and populate the field, the non-persistent field might not get populated, or might display incorrect data, if data restrictions exist on the fetched set.

Other Customizations – Certain logic may fail due to data restrictions if it is not implemented with data restrictions in mind, especially if MXServer is used to implement it.

Developers must keep the impact of data restrictions in mind while implementing logic. I am not saying MXServer is a strict no-no, but if data restrictions are going to be in place, the impact must be considered.

On the bright side, even when qualified data restrictions are applied, they are mostly applied on main objects, where the set is fetched with the user's context anyway, so the restriction behaves as intended!

Cheers!

Conditionally Changing the Lookup on an Attribute

Whenever I travel, I either read or I write. These days I am travelling a lot, and I am on a writing spree, so you will be seeing quite a few posts from me.

That brings me to this unique requirement: display different lookups on a field based on a condition. A simple example of this is displaying a location lookup on a field if, let's say, the orgid is EAGLENA, and otherwise displaying an asset lookup on the same field.

This can be achieved through conditional UI in Application Designer.

The images below show how to set this up through conditional UI on a field EQ1 on the ASSET object; a summary of the settings follows the screenshots.

2016-11-19-14_34_39-application-designer

2016-11-19-14_32_21-application-designer

2016-11-19-14_32_28-application-designer

2016-11-19-14_39_46-assets

2016-11-19-14_39_55-assets

2016-11-19-14_40_22-assets

2016-11-19-14_40_25-assets
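To recap what the screenshots configure (a sketch – the condition expression and lookup names follow my example): conditional properties are defined on the EQ1 textbox for a security group, and the control's 'lookup' property is switched on the condition's result:

    Condition (expression type): :orgid = 'EAGLENA'
    'lookup' property when TRUE:  location
    'lookup' property when FALSE: asset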

Have a great day!