Import Export Multiple XMLs in Maximo

Did you know that Maximo supports multi-XML import and export?

If you want to export a bunch of application XMLs from one environment to another, or maybe you want to edit more than one XML and then import them back into Maximo, this functionality can come in handy.

To use this functionality, filter the list of applications to be exported on the List tab of the Application Designer and click the export button on the toolbar. Maximo will export a single XML containing all the applications filtered on the List tab. This XML can be imported back into the same environment or into other environments as per your need.

The screenshots below show 3 applications being exported. In the exported XML, 1 application presentation is expanded and the other 2 are collapsed.

[Screenshots: Application Designer; exported presentation set XML]

Adding a custom attribute to the asset template and copying it to the asset

When it comes to adding a custom attribute to the asset template and getting it copied to the asset, one might think that this requires extending the asset template class and adding code to copy the field, or perhaps configuring a crossover domain. But it's way easier than that!

Just creating the custom attribute with the same name in both the asset template and the asset gets the job done. When an asset is created from the asset template, the data from the custom field in the asset template gets copied over to the field with the same name in the asset. I did a quick check on this and it works as expected. Have a look –

[Screenshots: Asset Templates; Assets]

When using MXServer for fetching MboSet

There are two ways of fetching an MboSet in Java code: using a relationship or through MXServer. Each has its own benefits, and different scenarios may call for one or the other. For now, we are going to talk about MboSets fetched from MXServer.
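
As a quick refresher, here is a minimal sketch contrasting the two approaches. It is assumed to live inside a custom Mbo class, and the relationship name SPAREPART is only an example.

import java.rmi.RemoteException;
import psdi.mbo.Mbo;
import psdi.mbo.MboSet;
import psdi.mbo.MboSetRemote;
import psdi.server.MXServer;
import psdi.util.MXException;

public class AssetExample extends Mbo
{
    public AssetExample(MboSet ms) throws MXException, RemoteException
    {
        super(ms);
    }

    public void fetchExamples() throws MXException, RemoteException
    {
        // 1. Via a relationship defined on this object: the child set is owned
        //    by this Mbo and is saved when its owner is saved.
        MboSetRemote sparePartSet = getMboSet("SPAREPART");

        // 2. Via MXServer: an independent set that the calling code must manage
        //    itself (setWhere/reset, save, clear/close).
        MboSetRemote assetSet = MXServer.getMXServer().getMboSet("ASSET", getUserInfo());
    }
}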

When fetching an MboSet from MXServer, there are a few things that one needs to be careful about. Poor knowledge of these can result in performance issues, poor memory management and undesirable outcomes.

There are six simple rules to keep in mind when fetching an MboSet from MXServer. These rules are as follows –

  1. Always use setWhere() and reset() on the set fetched from MXServer. The where clause that you pass to setWhere() reduces the number of MBOs fetched in the set, and reset() fetches the data from the database using that where clause. If setWhere() is not used, the entire table is fetched from the database into memory.
MboSetRemote assetSet = MXServer.getMXServer().getMboSet("ASSET", getUserInfo());
assetSet.setWhere("LOCATION = 'BEDFORD'");
assetSet.reset();
  2. If the set is being fetched just to add new MBOs and not for traversing, use 1=0 as the where clause in setWhere(). This will fetch an empty set.
assetSet.setWhere("1=0");
assetSet.reset();
  3. If the set is being fetched just for traversing and not for any addition or updates, set the DISCARDABLE flag to true on the MboSet. Discardable MboSets are not cached in memory; they can only be traversed in the forward direction and cannot be saved.
assetSet.setFlag(DISCARDABLE, true);
  4. If the MboSet is fetched for read-only purposes, set the NOSAVE flag to true. This way the MboSet won't be added to the MXTransaction, which shortens the looping time of the transaction.
assetSet.setFlag(NOSAVE, true);
  5. If an add or update is being performed on the MboSet, do not forget to call save(). Contrary to the popular belief that save() shouldn't be called explicitly in Java code, an MboSet fetched from MXServer must be saved before the set is closed, otherwise the changes will be lost. MboSets fetched from a relationship do not require an explicit save because they are saved when their parent is saved, which is where the belief comes from. See the sketch after this list.
  6. Always call clear() and close() on an MboSet fetched from MXServer once it is certain that it is not required anymore. This releases the memory and the database resources. If the set is not closed, it will remain in memory until it is collected by the garbage collector, which could be a long time.
assetSet.clear();
assetSet.close(); 
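
Putting rules 2, 5 and 6 together, here is a rough sketch of adding a record to a set fetched from MXServer and saving it explicitly. It continues with the assetSet fetched above; the attribute values are purely illustrative.

assetSet.setWhere("1=0");              // empty set, we only want to add
assetSet.reset();

MboRemote newAsset = assetSet.add();   // add a new ASSET Mbo to the set
newAsset.setValue("ASSETNUM", "NEW-001");
newAsset.setValue("DESCRIPTION", "Created from an MXServer set");

assetSet.save();                       // explicit save, otherwise the change is lost

assetSet.clear();                      // release the Mbos held in memory
assetSet.close();                      // release the database resources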

clear() and close() shouldn't be called on an MboSet fetched from a relationship; doing so will give undesirable results.

So keep these simple rules in mind the next time you fetch a set from MXServer.

In my next post I will try to explain the difference between clear(), cleanup() & close(). Stay tuned.. 🙂

How Workflow Conditions are Evaluated

While looking at the debug logs of Maximo I noticed queries fired against the dummy_table. I wondered what it is that Maximo needs to query against the dummy_table. After looking at the SQL and doing some digging, I found that Maximo uses the dummy_table for evaluating conditions.

The condition of a condition node in a workflow is evaluated in 2 ways –

  1. The condition is validated using the Parser (standard way, on the object).
  2. If the parser fails, the condition is validated using SqlFormat on dummy_table.

To explain it further, consider a workflow on the PO object having one of the following conditions –

  1. Condition: poid=:poid – this will pass the Parser, so validation would be successful.
  2. Condition: exists (select * from ADDRESS where orgid = :orgid and addresscode = :shipto) – this will fail the parser, but will pass the SqlFormat check on dummy_table using this SQL: Select count(*) from dummy_table where (exists (select * from address where orgid = ‘org01’ and addresscode = ‘add01’));

So validation would be successful.

  3. Condition: poid = :poid and exists (select * from ADDRESS where orgid = :orgid and addresscode = :shipto) – this will fail the parser, and will also fail the SqlFormat check because dummy_table doesn’t have a column named poid:

sql: Select count(*) from dummy_table  where  poid =  32767 and exists (select * from address where orgid =  ‘org01’  and addresscode =  ‘add01’ );

So validation would fail.

In this case, poid = :poid doesn’t really make any sense in the resulting SQL.
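
For the curious, here is a rough sketch of how the bind variables of a condition get resolved before the dummy_table check. It assumes the standard psdi.mbo.SqlFormat API; the class name and the way the PO Mbo is obtained are illustrative.

import java.rmi.RemoteException;
import psdi.mbo.MboRemote;
import psdi.mbo.SqlFormat;
import psdi.util.MXException;

public class ConditionSketch
{
    // Resolve the bind variables (:orgid, :shipto) of a condition against a PO Mbo.
    public String resolveCondition(MboRemote po) throws MXException, RemoteException
    {
        String condition = "exists (select * from address where orgid = :orgid and addresscode = :shipto)";
        SqlFormat sqf = new SqlFormat(po, condition);
        // format() substitutes the bind variables with the PO's values, producing
        // SQL that Maximo can wrap in a select count(*) from dummy_table query.
        return sqf.format();
    }
}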

So while writing a condition, make sure that you are not writing SQL that would fail.

Using Migration Collection in Maximo

Development activities may involve the creation and modification of artifacts such as domains, objects, escalations, actions, workflows, etc. Once development is complete, these configurations need to be migrated from the development environment to QA for testing and finally to Prod – to the end user. The number of these environments may vary, but the migration process mostly remains the same.

Keeping track of all the development artifacts that are to be migrated can be a hectic task, especially if development lasts more than a few days, in which case the developer may forget some of the artifacts created over time, or if the number of artifacts is very large.

The typical Migration Manager approach to migrating configurations requires building an SQL where clause against each object structure in the migration group and creating a migration package which extracts the data using this where clause. This approach requires the developer to identify and organize all the configurations to be migrated.

Another way to keep track of these configurations is to create a change-type migration package, which tracks all the changes made by the developer on any of the objects of the associated migration group and can then be used to create the package. But this approach will fail if multiple developers are working in the same environment.

This is where the migration collection application comes into the picture. The developer can create a collection record in the migration collection application and add the configurations to the collection as and when they are created or modified. This way the developer doesn’t have to remember the long list of changes that have to be migrated.

Collection entries can be added in three different ways:

  • Manually navigate to the target application from the collection record and return with a value

[Screenshots: Migration Collections; Communication Templates; Migration Collections]

  • Manually add to the collection directly from within the target application – for this to work, support for the target application needs to be added in the Migration Collections application. Once support is added, a button appears on the toolbar of the target application to add the current record to the migration collection.

[Screenshots: Migration Collections; Actions; Migration Collections]

  • Automatically capture the changes made by specific users – for this to work, tracking for the target application needs to be enabled in the Migration Collections application. This tracks changes made in the target application only by the user specified at the time of setting up the tracking.

[Screenshots: Migration Collections; Domains; Migration Collections]

Once all the configurations have been added to the collection, the developer can create a migration package using the content of the collection.

[Screenshots: Migration Collections]

The Significance of “Independent of Other Groups” in Security Groups

The “Independent of Other Groups” checkbox in the security group application is one of the most misunderstood concepts in Maximo. What is the significance of this checkbox? How does it affect the security authorization of the group? Let’s have a look.

[Screenshot: Independent of Other Groups checkbox]

As the name suggests, the “Independent of Other Groups” checkbox on the main tab of the Security Groups application specifies whether the group is independent of other groups or not, that is, whether the authorizations of this group can be merged with the authorizations of other groups. This only applies to multisite implementations; for single-site implementations this checkbox doesn’t have any significance. Let’s understand how this works –

Scenario 1: Consider a multisite implementation; the user belongs to the 2 security groups with the following privileges:

Group 1: Site A and Work Order Tracking application

Group 2: Site B and Purchase Orders application

If these security groups are both independent then the user ends up with rights for Work Order Tracking on Site A and Purchase Orders on Site B. If however the security groups are non-independent then the privileges combine and the user ends up with both Work Order Tracking and Purchase Orders on Sites A and B. Basically you sum up the privileges and if any of them overlap you take the highest level.

Scenario 2: Consider a single site implementation; the user belongs to the 2 security groups with the following privileges:

Group 1: Site A (or all sites authorization) and Work Order Tracking application

Group 2: Site A (or all sites authorization) and Purchase Orders application

Since there is only one site, irrespective of whether these groups are independent or non-independent, the user will have rights to both Work Order Tracking and Purchase Orders on Site A.

Scenario 3: Consider a single site implementation; the user belongs to the 2 security groups with the following privileges:

Group 1: NO SITE and Work Order Tracking application

Group 2: Site A (or all sites authorization) and Purchase Orders application

If these security groups are both independent then the user ends up with rights for only Purchase Orders on Site A. This is because the two groups are independent and Group 1 doesn’t have authorization to any site, hence the user doesn’t get rights to Work Order Tracking on Site A. If however the security groups are non-independent then the privileges combine and the user ends up with both Work Order Tracking and Purchase Orders on Site A.

NOTE: This is why the independent checkbox should not be checked for single site implementations.

Scenario 4: Consider a multisite implementation; the user belongs to the 2 security groups with the following privileges:

Group 1: Site A and Work Order Tracking application – Read, Route workflow Access

Group 2: Site B and Work Order Tracking application – Read, Change Status Access

If these security groups are both independent then the user ends up with rights for Work Order Tracking – Read, Route workflow Access on Site A and Work Order Tracking application – Read, Change Status Access on Site B. If however the security groups are non-independent then the privileges combine and the user ends up with Work Order Tracking – Read, Route workflow & Change Status Access on Sites A and B.

Scenario 5: Consider a multisite implementation; the user belongs to the 2 security groups with the following privileges:

Group 1: Site A and PO Limit of 10,000

Group 2: Site B and PO Limit of 50,000

If these security groups are both independent then the user ends up with a PO Limit of 10,000 for Site A and a PO Limit of 50,000 for Site B. If however the security groups are non-independent then the privileges combine and the user ends up with a PO Limit of 50,000 for both Sites A and B.

Note: Never change the MAXEVERYONE group to an independent group, as this group has a lot of conditional grants which will stop working if the group is made independent without granting access to all sites (similar to Scenario 3).

IBM Tech Note on why MAXEVERYONE shouldn’t be set as independent

How to detect and resolve a database connection leak?

There are 2 very detailed posts on IBM DeveloperWorks written by Manjunath about database connection leaks – how to detect them and how to resolve them.

Maximo — How to detect database connection leak
Maximo — How to solve database connection leak

Here is the crux –

To free up locked database connections in Maximo, IBM has introduced 3 new system properties (7.5.0.3 onwards) –

mxe.db.closelongrunconn – default is false.

mxe.db.longruntimelimit – default 180 minutes.

mxe.db.detectlongrunconninterval – default is 30 minutes. This is the frequency at which long-running connections are checked. It cannot be less than 30 minutes.

The mxe.db.closelongrunconn property, when set to true, will close lost connections if they have been held for longer than mxe.db.longruntimelimit and were not used by any process during that time.
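
As an example, enabling the automatic cleanup with the defaults described above would mean settings along these lines (whether they belong in maximo.properties or the System Properties application depends on your version, so treat this as a sketch):

mxe.db.closelongrunconn=true
mxe.db.longruntimelimit=180
mxe.db.detectlongrunconninterval=30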


To detect a database connection leak, turn the Maximo dbconnection watchdog logger to INFO and collect the logs for 1-2 days. If there are any connection leaks, the logs will show something like this –

[INFO] BMXAA7084I – The DbConnectionWatchDog class has been trying to close the database connection for: 230233001 ms

DbConnectionWatchDog:Db Connection reference id=436107 SPID=397
Create time:1302986384636
Life time:230233001 ms

The logger indicates that the connection has been held for 230233001 ms, i.e. approximately 64 hours. By looking in the logs from approximately 64 hours back, one should be able to find the stack trace of where this connection was established.

Maximo Fix Delivery – What’s Changing?

IBM has made some changes to the way it delivers fixes for Maximo. Here are the main points –

  1. LA Fix report available in Maximo – obtain the list by running the LATestFixReportWriter.bat program from the <maximo>\tools\maximo directory; available in 7.5.0.2 and above
  2. Cumulative interim fixes (IFIXES) are now delivered every 4 weeks
  3. Fixpack delivery schedule changed from 6-8 months to every 3-4 months (quarterly)
  4. Fixpack vs Featurepack – fixpacks are now called “feature packs”, which are a combination of bug fixes and product features & enhancements

Here is the link to the complete article –

DeveloperWorks – Maximo Fix Delivery

Disabling The Warning Message Shown While Closing The Browser

Sometimes when I am in a hurry and I have to close my browser, which has multiple Maximo sessions open, it is very frustrating to see each tab asking me to click ‘Leave This Page’ to close it, even after I have signed out from one of the tabs.

[Screenshot: Close Confirmation dialog]

There is a property in Maximo – webclient.exitwarn – to disable this. By default it is set to 1. Setting it to 0 from the System Properties application will disable the confirmation box. This doesn’t require a restart to take effect, as this property is Live Refresh enabled.

Printing Class Details When Debugging in Maximo

Sometimes when I am asked to look into an issue, or I have to debug some out-of-the-box functionality in Maximo, I start wondering where to begin.

Logs are one good place to start debugging. Turning the level of the appropriate loggers to DEBUG prints lots of information in the system out (which can also be a performance overhead, so be cautious).

To print more specific details of the class that is writing these logs, there is a tweak that can be made in the Logging application.

In the Logging application, open the Manage Appenders dialog from the Select Action menu and select the Console appender. In the table details, update the Conversion Pattern to

–CLASS -> %C LINE -> %L %d{dd MMM yyyy HH:mm:ss:SSS} [%-2p] [%s] [%q] %m%n

[Screenshot: Logging application – Manage Appenders]

This will print the class name and the line number for all the log entries made in systemOut.log and maximo.log, which will look something like this –

[Screenshot: systemOut.log output]

This can be the starting point for your debugging. If you have created custom appenders, this tweak can be applied to the custom appenders, leaving the Console appender as it is.