Recursive Queries, Maximo Lookups and Deadlocks

There are several scenarios where you need to present data that cannot be queried through a simple select statement and instead requires looping through the table, because the data resides in a parent-child hierarchy. The work order hierarchy is one such example: a work order record lists its parent in one of its columns, the parent's record lists its own parent, and so on. So, to find all the child work orders under a work order down to the leaf level, looping is required.

Note: Although the ANCESTOR tables maintain the complete ancestor hierarchy, at times they can have incorrect data.

This is typically achieved in a relational database through a recursive query, and the traditional way is to use a CTE, or Common Table Expression, which uses the WITH statement to construct recursive queries. I will not go into the details of how to construct or use CTEs; their detailed documentation is readily available online.
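For reference, here is a minimal sketch of what a recursive CTE for the work order hierarchy could look like (ignoring SITEID and task filtering for brevity; the comma-join form of the recursive step should run on both DB2 and Oracle):

WITH wo_hierarchy (wonum, parent) AS (
    SELECT wonum, parent FROM workorder WHERE wonum = 'WO-1990'
    UNION ALL
    SELECT wo.wonum, wo.parent
    FROM workorder wo, wo_hierarchy wh
    WHERE wo.parent = wh.wonum
)
SELECT * FROM wo_hierarchy;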

Oracle has a much simpler syntax for recursive queries – the START WITH, CONNECT BY statement. It has a lot of limitations compared to the traditional CTE method, but for simple querying it gets the work done with a lot less effort. The START WITH, CONNECT BY statement can also be used in DB2 by setting the DB2_COMPATIBILITY_VECTOR registry variable to 08, which enables Oracle-style hierarchical queries, as shown below.
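On DB2 this is set with db2set, run as the instance owner, and the instance must be restarted for the change to take effect:

db2set DB2_COMPATIBILITY_VECTOR=08
db2stop
db2start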

When it comes to table domains in Maximo, using a traditional CTE is a challenge: in a table domain only the where clause can be specified, and a CTE has an entirely different syntax, so it cannot be specified in the listwhereclause. The START WITH, CONNECT BY statement, on the contrary, has a syntax similar to the traditional select statement and can be used in table domains to query data. Here is an example of the START WITH, CONNECT BY statement that lists all the child work orders of work order WO-1990 down to the leaf level.

SELECT * FROM WORKORDER
START WITH WONUM = 'WO-1990'
CONNECT BY PRIOR WONUM = PARENT AND ISTASK = 0;

A little bit about table domains: when you open a lookup in Maximo that displays data through a table domain, it doesn't fetch the entire result set in one go, only the number of rows displayed on the first page of the lookup. When you click the next page on the lookup, the next set of records is fetched, and so on. So, if the lookup is configured to display only 10 rows at a time, just the first 10 records are fetched from the database when the lookup is opened; clicking the next button fetches the next 10, and so on.

This is where it gets interesting. What I have observed is that if you use the START WITH, CONNECT BY statement in a domain, opening the lookup for that domain holds a lock on the table (a row-level lock) until the lookup is closed. This is not the case with non-recursive queries; they never hold a lock, not even complex queries with joins and sub-selects.

This is where the problem lies: when the lookup with the START WITH, CONNECT BY statement is open in one application and another user, or the same user in another application, tries to perform a transaction (update or delete) on the same set of rows queried by the lookup, the user performing the transaction is presented with a deadlock error. Overriding the isolation level for the recursive query doesn't help either.

[Image: db_error_911]

The image above shows the error presented to the user in a similar scenario; in this case the lookup and the transaction were on the LOCHIERARCHY object.

So it is better to avoid the START WITH, CONNECT BY statement in lookups and to use alternate solutions, such as the ANCESTOR tables or views, to display the data. Some lookups can be tricky and may require a lot of effort to avoid START WITH, CONNECT BY, but if there are deadlock issues, it's worth the effort.

Cheers!

Import and Export Multiple XMLs in Maximo

Did you know that Maximo supports multi-XML import and export?

If you want to export a bunch of application XMLs from one environment to another, or maybe edit more than one XML and then import them back into Maximo, this functionality can come in handy.

To use this functionality, filter the list of applications to be exported on the List tab of Application Designer and click the export button on the toolbar. Maximo will export a single XML containing all the applications filtered on the List tab. This XML can be imported back into the same environment or into other environments, as needed.

The screenshot below shows 3 applications being exported. The exported XML shows one application presentation expanded and the other two collapsed.

[Screenshots: Application Designer and the exported presentationset.xml]

Adding custom attribute to asset template and copying it to the asset

When it comes to adding a custom attribute to an asset template and getting it copied to the asset, one might think this requires extending the asset template class and adding code to copy the field, or perhaps configuring a crossover domain. But it's way easier than that!

Just creating the custom attribute on the asset template and on the asset with the same name gets the job done. When an asset is created from the asset template, the data in the custom field on the asset template is copied over to the field with the same name on the asset. I did a quick check on this and it works as expected. Have a look:

[Screenshots: Asset Templates and the resulting Asset]

When Using MXServer for Fetching an MboSet

There are two ways of fetching an MboSet in Java code: using a relationship, or through MXServer. Each has its own benefits, and different scenarios may call for one or the other. For now, we are going to talk about MboSets fetched from MXServer.

When fetching an MboSet from MXServer, there are a few things to be careful about. Overlooking them may result in performance issues, poor memory management and undesirable outcomes.

There are six simple rules to keep in mind when fetching an MboSet from MXServer. These rules are as follows:

  1. Always use setWhere() and reset() on a set fetched from MXServer. The where clause you pass to setWhere() reduces the number of Mbos fetched in the set, and reset() fetches the data from the database using that where clause. If setWhere() is not used, the entire set, which means the entire table, is fetched into memory.
MboSetRemote assetSet = MXServer.getMXServer().getMboSet("ASSET", getUserInfo());
assetSet.setWhere("LOCATION = 'BEDFORD'");
assetSet.reset();
  2. If the set is being fetched just to add new Mbos and not for traversing, use 1=0 as the where clause in setWhere(). This will fetch an empty set.
assetSet.setWhere("1=0");
assetSet.reset();
  3. If the set is being fetched only for traversing and not for any additions or updates, set the DISCARDABLE flag to true on the MboSet. Discardable MboSets are not cached in memory; they can only be traversed in the forward direction and cannot be saved.
assetSet.setFlag(DISCARDABLE, true);
  4. If the MboSet is fetched for read-only purposes, set the NOSAVE flag to true. This way the MboSet won't be added to the MXTransaction, which shortens the looping time of the transaction.
assetSet.setFlag(NOSAVE, true);
  5. If an add or update transaction is being performed on the MboSet, do not forget to call save(); see the sketch after this list. Contrary to the popular belief that save() shouldn't be called explicitly in Java code, an MboSet fetched from MXServer must be saved before the set is closed, otherwise the changes are lost. MboSets fetched from a relationship do not require an explicit save() because they are saved when their parent is saved. Hence the belief.
  6. Always call clear() and close() on an MboSet fetched from MXServer once it is certain that it is no longer required. This releases the memory and the database resources. If the set is not closed, it remains in memory until it is collected by the garbage collector, which could be a long time.
assetSet.clear();
assetSet.close(); 

clear() and close() shouldn't be called on an MboSet fetched from a relationship; doing so will give undesirable results.
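Putting the rules together, here is a minimal sketch of the whole fetch-update-save-close pattern. It assumes the code runs inside a class extending Mbo (so getUserInfo() is available) and that MXException and RemoteException are handled by the caller:

import psdi.mbo.MboRemote;
import psdi.mbo.MboSetRemote;
import psdi.server.MXServer;

// Rule 1: fetch the set from MXServer and restrict it with a where clause
MboSetRemote assetSet = MXServer.getMXServer().getMboSet("ASSET", getUserInfo());
assetSet.setWhere("LOCATION = 'BEDFORD'");
assetSet.reset();

// Traverse the set and update each Mbo
for (MboRemote asset = assetSet.moveFirst(); asset != null; asset = assetSet.moveNext())
{
    asset.setValue("DESCRIPTION", "Updated via a set fetched from MXServer");
}

// Rule 5: a set fetched from MXServer must be saved explicitly
assetSet.save();

// Rule 6: release the memory and the database resources
assetSet.clear();
assetSet.close();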

So keep these simple rules in mind the next time you fetch a set from MXServer.

In my next post I will try to explain the difference between clear(), cleanup() and close(). Stay tuned.. 🙂

Calculation for Units to Go in Preventive Maintenance (CM)

Needless to say, the calculation of Units to Go in ACM is a bit trickier than how it's calculated in base Maximo. People who have worked on it will understand where I am coming from.

[Screenshot: PLUSALFEVENT (Maintenance Event) records]

Units to Go is calculated based on the PLUSALFEVENT (Maintenance Event) record, which is created when the PM is made ACTIVE. These are the records displayed on the Maintenance Plan tab in the Assets (CM) application.

It is these PLUSALFEVENT records that the BDI process uses to decide when work orders should be created from the PMs, and whether the PM is overdue, past its warning point, etc.

The ‘Left’ value on the Maintenance Plan tab is essentially the units to go for the associated PM in Assets (CM).

The calculation for units to go is:

Units to Go = Due Count – Current Count

Where Due Count = Active Count + PM Meter Frequency

The Active Count is the meter count calculated at the Active Date of maintenance event (PLUSALFEVENT.ACTIVEDATE).

Every time a work order for a maintenance event is completed, a new maintenance event (PLUSALFEVENT) entry is created with an Active Date equal to the work order completion date. The same date is stamped as the Inactive Date on the old maintenance event record.

So, each time, the Active Date from the most recent maintenance event record is used to calculate the Due Count for the PM.

The Current Count is the meter count at the current system date and time.
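To make the arithmetic concrete, here is a worked example with made-up numbers: suppose the meter read 1,200 hours at the Active Date of the latest maintenance event (Active Count = 1,200) and the PM meter frequency is 300 hours. The Due Count is then 1,200 + 300 = 1,500 hours. If the meter currently reads 1,350 hours (Current Count = 1,350), then Units to Go = 1,500 – 1,350 = 150 hours, which is what appears in the ‘Left’ field.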

How Workflow Conditions are Evaluated

While looking at Maximo's debug logs, I noticed queries fired against the dummy_table and wondered what Maximo needs to query there. After looking at the SQL and some digging in, I found that Maximo uses the dummy_table for evaluating conditions.

The condition of a condition node in a workflow is evaluated in 2 ways:

  1. The condition is validated using the Parser (standard way, on the object).
  2. If the parser fails, the condition is validated using SqlFormat on dummy_table.

To explain it further, consider a workflow on the PO object having one of the following conditions:

  1. Condition: poid = :poid. This will pass the parser, so validation would be successful.
  2. Condition: exists (select * from ADDRESS where orgid = :orgid and addresscode = :shipto). This will fail the parser but will pass the SqlFormat check on dummy_table using this SQL: select count(*) from dummy_table where (exists (select * from address where orgid = 'org01' and addresscode = 'add01'));

So validation would be successful.

  3. Condition: poid = :poid and exists (select * from ADDRESS where orgid = :orgid and addresscode = :shipto). This will fail the parser and will also fail the SqlFormat check, because dummy_table doesn't have a column named poid:

sql: select count(*) from dummy_table where poid = 32767 and exists (select * from address where orgid = 'org01' and addresscode = 'add01');

So validation would fail.

In this case, the poid = :poid doesn't really make any sense in the SQL anyway.

So, while writing a condition, make sure you are not writing SQL that would fail this dummy_table evaluation.
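If you do need an attribute comparison alongside a subselect, one way to keep the evaluation valid is to move the comparison into its own exists clause against the real table. This is just a sketch; the TOTALCOST comparison is purely illustrative:

exists (select 1 from po where poid = :poid and totalcost > 1000)
and exists (select * from address where orgid = :orgid and addresscode = :shipto)

After the bind variables are substituted, every clause references a real table rather than a column of dummy_table, so the generated select count(*) from dummy_table query runs without error.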

BMXAA7901E – You cannot log in at this time. Contact the system administrator

Whenever you try to log in to Maximo and, for whatever reason, you are not able to, Maximo throws the error “BMXAA7901E – You cannot log in at this time. Contact the system administrator”. Whether it's a wrong password, a blocked account or an ongoing administrative activity, the same message is displayed every time.

Out of curiosity, I tried to find out why the actual issue with the failed login is not shown, and this is what I found. IBM says: “Alerting a user that they have entered an invalid username or password is a violation of emerging security best practices. Giving a potential hacker any details on a system they are not authenticated against is a risk. These messages were generalized intentionally.”

So that means it's intentional. Though at times it can be frustrating not to know why Maximo is not letting you in, the intention seems valid. I also found out that in previous versions of Maximo, the actual error message was displayed if the login failed.

So the question is: with all the VPNs and SSL-enabled networks protecting us from hackers, is it really necessary to hide the reason for an unsuccessful login?

Using Migration Collection in Maximo

Development activities may involve the creation and modification of artifacts such as domains, objects, escalations, actions, workflows, etc. Once development is complete, these configurations need to be migrated from the development environment to QA for testing, and finally to production, to the end user. The number of environments may vary, but the migration process mostly remains the same.

Keeping track of all the development artifacts to be migrated can be a hectic task, especially if development lasts more than a few days, in which case the developer may forget some of the artifacts he/she created over time, or if the number of artifacts is very large.

The typical Migration Manager approach requires building a SQL where clause against each object structure in the migration group and creating a migration package that extracts the data using this where clause. This approach requires the developer to identify and organize all the configurations to be migrated; see the example below.
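For instance, a package that migrates just two domains might use a where clause like this on the domain object structure (DMMAXDOMAIN in the out-of-the-box migration groups; the domain names here are made up):

domainid in ('MYDOMAIN1', 'MYDOMAIN2')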

Another way to keep track of these configurations is to create a change-type migration package, which tracks all the changes made on any of the objects of the associated migration group and can then be used to create the package. But this approach fails if multiple developers are working in the same environment.

This is where the Migration Collections application comes into the picture. The developer can create a collection record in the Migration Collections application and add configurations to the collection as and when they are created or modified. This way the developer doesn't have to remember the long list of changes to be migrated.

Collection entries can be added in three different ways:

  • Manually navigate to the target application from the collection record and return with a value.

[Screenshots: Migration Collections and Communication Templates]

  • Manually add to the collection from directly within the target application. For this to work, support for the target application needs to be added in the Migration Collections application. Once support is added, a button appears in the toolbar of the target application to add the current record to the migration collection.

[Screenshots: Migration Collections and Actions]

  • Automatically capture the changes made by specific users. For this to work, tracking for the target application needs to be enabled in the Migration Collections application. This tracks changes in the target application made only by the particular user specified at the time of setting up the tracking.

[Screenshots: Migration Collections and Domains]

Once all the configurations have been added to the collection, the developer can create a migration package using the contents of the collection.

[Screenshots: Migration Collections]

The Significance of “Independent of Other Groups” in Security Groups

The “Independent of Other Groups” checkbox in the security group application is one of the most misunderstood concepts in Maximo. What is the significance of this checkbox? How does it affect the security authorization of the group? Let’s have a look.

[Screenshot: the Independent of Other Groups checkbox]

As the name suggests, the “Independent of Other Groups” checkbox on the main tab of the Security Groups application specifies whether the group is independent of other groups, meaning whether the authorizations of this group can be merged with the authorizations of other groups. This only applies to multisite implementations; for a single-site implementation this checkbox has no significance. Let's understand how this works.

Scenario 1: Consider a multisite implementation; the user belongs to 2 security groups with the following privileges:

Group 1: Site A and Work Order Tracking application

Group 2: Site B and Purchase Orders application

If these security groups are both independent, the user ends up with rights for Work Order Tracking on Site A and Purchase Orders on Site B. If, however, the security groups are non-independent, the privileges combine and the user ends up with both Work Order Tracking and Purchase Orders on Sites A and B. Basically, you sum up the privileges, and if any of them overlap, you take the highest level.

Scenario 2: Consider a single-site implementation; the user belongs to 2 security groups with the following privileges:

Group 1: Site A (or all sites authorization) and Work Order Tracking application

Group 2: Site A (or all sites authorization) and Purchase Orders application

Since there is only one site, irrespective of whether these groups are independent or non-independent, the user will have rights to both Work Order Tracking and Purchase Orders on Site A.

Scenario 3: Consider a single-site implementation; the user belongs to 2 security groups with the following privileges:

Group 1: NO SITE and Work Order Tracking application

Group 2: Site A (or all sites authorization) and Purchase Orders application

If these security groups are both independent, the user ends up with rights to only Purchase Orders on Site A. This is because the two groups are independent and Group 1 doesn't have authorization to any site, hence the user doesn't get rights to Work Order Tracking on Site A. If, however, the security groups are non-independent, the privileges combine and the user ends up with both Work Order Tracking and Purchase Orders on Site A.

NOTE: This is why the independent checkbox should not be checked for single-site implementations.

Scenario 4: Consider a multisite implementation; the user belongs to 2 security groups with the following privileges:

Group 1: Site A and Work Order Tracking application – Read and Route Workflow access

Group 2: Site B and Work Order Tracking application – Read and Change Status access

If these security groups are both independent, the user ends up with Read and Route Workflow access to Work Order Tracking on Site A, and Read and Change Status access to Work Order Tracking on Site B. If, however, the security groups are non-independent, the privileges combine and the user ends up with Read, Route Workflow and Change Status access to Work Order Tracking on both Sites A and B.

Scenario 5: Consider a multisite implementation; the user belongs to 2 security groups with the following privileges:

Group 1: Site A and PO Limit of 10,000

Group 2: Site B and PO Limit of 50,000

If these security groups are both independent, the user ends up with a PO Limit of 10,000 for Site A and a PO Limit of 50,000 for Site B. If, however, the security groups are non-independent, the privileges combine and the user ends up with a PO Limit of 50,000 for both Sites A and B.

Note: Never change the MAXEVERYONE group to an independent group, as it has a lot of conditional grants that will stop working if the group is made independent without granting access to all sites (similar to Scenario 3).

IBM Tech Note on why MAXEVERYONE shouldn’t be set as independent

Work Order not Generated from PM

Sometimes there is a situation where, when you try to create a work order from a PM, the system shows a message that the work order has been created, but when you search for that work order in the Work Order Tracking application, it doesn't exist.

[Screenshot: Preventive Maintenance]

One of the reasons for this is that if the PM was created while admin mode was on (which could be the case with an initial data upload), the entry in PMANCESTOR is not created for that PM. When a work order is generated for this PM, even though the message for successful work order creation is displayed, the work order is actually not created because of the missing entry in PMANCESTOR.
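To check whether any PMs are affected, a query along these lines can help. This is a sketch; it assumes that every healthy PM has at least a self-referencing row in PMANCESTOR, keyed by PMNUM and SITEID:

SELECT pmnum, siteid FROM pm
WHERE NOT EXISTS (SELECT 1 FROM pmancestor
                  WHERE pmancestor.pmnum = pm.pmnum
                  AND pmancestor.siteid = pm.siteid);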