How Workflow Conditions are Evaluated

While looking at Maximo's debug logs, I noticed queries being fired against dummy_table. I wondered what Maximo could possibly need to query against dummy_table. After looking at the SQL and some digging in, I found that Maximo uses dummy_table for evaluating workflow conditions.

The condition of a condition node in a workflow is evaluated in 2 steps –

  1. The condition is validated using the parser (the standard way, against the object).
  2. If the parser fails, the condition is validated using SqlFormat against dummy_table.

To explain further, consider a workflow on the PO object with one of the following conditions –

  1. Condition: poid=:poid – this will pass the parser, so validation will be successful.
  2. Condition: exists (select * from ADDRESS where orgid = :orgid and addresscode = :shipto) – this will fail the parser, but will pass the SqlFormat check on dummy_table using this SQL: select count(*) from dummy_table where (exists (select * from address where orgid = 'org01' and addresscode = 'add01'));

So validation would be successful.

  3. Condition: poid = :poid and exists (select * from ADDRESS where orgid = :orgid and addresscode = :shipto) – this will fail the parser, and will also fail the SqlFormat check because dummy_table doesn't have a column named poid:

sql: select count(*) from dummy_table where poid = 32767 and exists (select * from address where orgid = 'org01' and addresscode = 'add01');

So validation would fail.

In this case the poid = :poid part makes no sense in the SQL, since dummy_table has no poid column.

So while writing a condition, make sure you are not writing SQL that would fail this check.
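The two-step fallback can be sketched in Python. This is an illustrative simulation only, not Maximo's actual parser or SqlFormat classes; the toy parser, the attribute set and the bind values are all made up for the example:

```python
import re

def evaluate_condition(condition, object_attributes, bind_values):
    """Simulate Maximo's two-step workflow condition validation.

    Step 1: try the parser (here, a toy parser that only accepts a simple
    "attr = :attr" comparison against a known object attribute).
    Step 2: on parser failure, substitute the bind variables and wrap the
    condition in a count(*) query against dummy_table, as seen in the logs.
    """
    match = re.fullmatch(r"(\w+)\s*=\s*:(\w+)", condition.strip())
    if match and match.group(1).lower() in object_attributes:
        return ("parser", None)

    # Fallback: bind variables become literal values and the whole
    # condition is validated as SQL against dummy_table.
    for name, value in bind_values.items():
        condition = condition.replace(":" + name, "'%s'" % value)
    sql = "select count(*) from dummy_table where (%s)" % condition
    return ("sqlformat", sql)
```

With this sketch, poid=:poid is accepted by the parser, while the EXISTS condition falls through to the dummy_table query resembling the SQL from the debug logs.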

BMXAA7901E – You cannot log in at this time. Contact the system administrator

Whenever you try to log in to Maximo and, for whatever reason, are unable to, Maximo throws the error – "BMXAA7901E – You cannot log in at this time. Contact the system administrator". Whether it's a wrong password, a blocked account, or an ongoing administrative activity, the same message is displayed every time.

Out of curiosity I tried to find out why the actual cause of the failed login is not shown, and this is what I found – IBM says that "Alerting a user that they have entered an invalid username or password is a violation of emerging security best practices. Giving a potential hacker any details on a system they are not authenticated against is a risk. These messages were generalized intentionally."

So it's intentional. Though at times it can be frustrating not to know why Maximo is not letting you in, the intention seems valid. I also found out that in previous versions of Maximo, the actual error message was displayed when a login failed.

So the question is: with all the VPNs and SSL-enabled networks protecting us from hackers, is it really necessary to hide the reason for an unsuccessful login?

Using Migration Collection in Maximo

Development activities may involve the creation and modification of artifacts such as domains, objects, escalations, actions, workflows, etc. Once development is complete, these configurations need to be migrated from the development environment to QA for testing, and finally to production – to the end user. The number of environments may vary, but the migration process mostly remains the same.

Keeping track of all the development artifacts to be migrated can be a hectic task, especially if development lasts more than a few days – in which case the developer may forget some of the artifacts created over time – or if the number of artifacts is very large.

The typical Migration Manager approach to migrating configurations requires building a SQL where clause against each object structure in the migration group and creating a migration package which extracts the data using this where clause. This approach requires the developer to identify and organize all the configurations to be migrated.

Another way to keep track of these configurations is to create a change-type migration package, which tracks all the changes made by the developer to any of the objects in the associated migration group and can then be used to create the package. But this approach fails if multiple developers are working in the same environment.

This is where the migration collection application comes into the picture. The developer can create a collection record in the migration collection application and add the configurations to the collection as and when they are created or modified. This way the developer doesn’t have to remember the long list of changes that have to be migrated.

Collection entries can be added in three different ways:

  • Manually navigate to the target application from the collection record and return with a value.

[Screenshots: navigating from a Migration Collections record to Communication Templates and returning with a value]

  • Manually add to the collection from directly within the target application – for this to work, support for the target application needs to be added in the Migration Collections application. Once support is added, a button appears in the toolbar of the target application to add the current record to the migration collection.

[Screenshots: adding application support in Migration Collections and the resulting toolbar button in Actions]

  • Automatically capture the changes made by a specific user – for this to work, tracking for the target application needs to be enabled in the Migration Collections application. This tracks changes made in the target application only by the user specified at the time of setting up the tracking.

[Screenshots: setting up user tracking in Migration Collections and a tracked change in Domains]

Once all the configurations have been added to the collection, the developer can create a migration package from the contents of the collection.

[Screenshots: creating a migration package from the collection]

The Significance of “Independent of Other Groups” in Security Groups

The “Independent of Other Groups” checkbox in the security group application is one of the most misunderstood concepts in Maximo. What is the significance of this checkbox? How does it affect the security authorization of the group? Let’s have a look.

[Screenshot: the Independent of Other Groups checkbox]

As the name suggests, the "Independent of Other Groups" checkbox on the main tab of the Security Groups application specifies whether the group is independent of other groups – that is, whether the authorizations of this group can be merged with the authorizations of other groups. This only applies to multisite implementations; for a single-site implementation this checkbox has no significance. Let's understand how this works –

Scenario 1: Consider a multisite implementation; the user belongs to 2 security groups with the following privileges:

Group 1: Site A and Work Order Tracking application

Group 2: Site B and Purchase Orders application

If these security groups are both independent then the user ends up with rights for Work Order Tracking on Site A and Purchase Orders on Site B. If however the security groups are non-independent then the privileges combine and the user ends up with both Work Order Tracking and Purchase Orders on Sites A and B. Basically you sum up the privileges and if any of them overlap you take the highest level.

Scenario 2: Consider a single-site implementation; the user belongs to 2 security groups with the following privileges:

Group 1: Site A (or all sites authorization) and Work Order Tracking application

Group 2: Site A (or all sites authorization) and Purchase Orders application

Since there is only one site, irrespective of whether these groups are independent or non-independent, the user will have rights to both Work Order Tracking and Purchase Orders on Site A.

Scenario 3: Consider a single-site implementation; the user belongs to 2 security groups with the following privileges:

Group 1: NO SITE and Work Order Tracking application

Group 2: Site A (or all sites authorization) and Purchase Orders application

If these security groups are both independent, the user ends up with rights to only Purchase Orders on Site A. This is because the two groups are independent and Group 1 doesn't have authorization to any site, hence the user doesn't get rights to Work Order Tracking on Site A. If, however, the security groups are non-independent, the privileges combine and the user ends up with both Work Order Tracking and Purchase Orders on Site A.

NOTE: This is why the independent checkbox should not be checked for single site implementations.

Scenario 4: Consider a multisite implementation; the user belongs to 2 security groups with the following privileges:

Group 1: Site A and Work Order Tracking application – Read, Route workflow Access

Group 2: Site B and Work Order Tracking application – Read, Change Status Access

If these security groups are both independent then the user ends up with rights for Work Order Tracking – Read, Route workflow Access on Site A and Work Order Tracking application – Read, Change Status Access on Site B. If however the security groups are non-independent then the privileges combine and the user ends up with Work Order Tracking – Read, Route workflow & Change Status Access on Sites A and B.

Scenario 5: Consider a multisite implementation; the user belongs to 2 security groups with the following privileges:

Group 1: Site A and PO Limit of 10,000

Group 2: Site B and PO Limit of 50,000

If these security groups are both independent, the user ends up with a PO limit of 10,000 for Site A and a PO limit of 50,000 for Site B. If, however, the security groups are non-independent, the privileges combine and the user ends up with a PO limit of 50,000 for both Sites A and B.
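The merge behaviour in the scenarios above can be modelled with a short sketch. This is a deliberately simplified model using hypothetical site and application names, not the actual Maximo security engine:

```python
def effective_privileges(groups):
    """Combine security group authorizations, sketching the scenarios above.

    Each group is a dict: {"sites": set, "apps": set, "independent": bool}.
    Independent groups keep their site/app pairing to themselves; all
    non-independent groups are merged into one combined authorization.
    Returns the set of (site, app) pairs the user ends up with.
    """
    result = set()
    merged_sites, merged_apps = set(), set()
    for g in groups:
        if g["independent"]:
            # Independent: authorization stays within the group's own sites.
            result |= {(s, a) for s in g["sites"] for a in g["apps"]}
        else:
            # Non-independent: sites and apps pool across groups.
            merged_sites |= g["sites"]
            merged_apps |= g["apps"]
    result |= {(s, a) for s in merged_sites for a in merged_apps}
    return result
```

Note how Scenario 3 falls out of this model: an independent group with no site authorization contributes no (site, app) pairs at all, which is exactly why the user loses Work Order Tracking there.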

Note: Never change the MAXEVERYONE group to an independent group, as this group has a lot of conditional grants which will stop working if the group is made independent without granting access to all sites (similar to Scenario 3).

IBM Tech Note on why MAXEVERYONE shouldn’t be set as independent

Work Order not Generated from PM

Sometimes, when you try to create a work order from a PM, the system shows the message that the work order has been created, but when you search for that work order in the Work Order Tracking application, it doesn't exist.

[Screenshot: work order creation message in Preventive Maintenance]

One of the reasons for this is that if the PM was created while Admin Mode was on (which could be the case for an initial data upload), the entry in PMANCESTOR is not created for that PM. When a work order is then generated for this PM, even though the message for successful work order creation is displayed, the work order is not actually created because of the missing entry in PMANCESTOR.
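A quick diagnostic is to compare the PM table against PMANCESTOR; in SQL this would be a NOT EXISTS query. Here is a sketch of the comparison, assuming (siteid, pmnum) tuples pulled from both tables (the column pairing is an assumption for the illustration):

```python
def pms_missing_ancestor(pm_rows, pmancestor_rows):
    """Return PM records that have no entry at all in PMANCESTOR.

    pm_rows:         iterable of (siteid, pmnum) tuples from the PM table
    pmancestor_rows: iterable of (siteid, pmnum) tuples from PMANCESTOR
    The PMs returned are the ones whose work order generation silently fails.
    """
    have_ancestor = set(pmancestor_rows)
    return sorted(pm for pm in set(pm_rows) if pm not in have_ancestor)
```

Any PM this sketch flags is a candidate for the missing-PMANCESTOR problem described above.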

Alert and Warning Intervals in PM (CM)

Alert and warning intervals in Preventive Maintenance (CM) have always been confusing. Let me explain what they are and how to set them.

The alert interval is the point at which a work order is generated. This interval is checked by the BDI to generate the work order.

The warning interval is the point at which a warning is issued to inform you that a preventive maintenance (PM) record is almost overdue. This interval is checked by the BDI to generate the warning.

The Assets (CM) application color-codes warnings and overdue PMs.

If the alert interval is null, the BDI will create the work order the moment the PM is generated, so it should never be null.

When setting the alert and warning intervals, the Alert and Warning Interval State plays an important role. Here's how –

Consider a PM with a time-based frequency of 30 days. The work order for this PM should be created on the 24th day and the warning should be shown from the 27th day. Let's see how the alert and warning intervals should be set to achieve this for the different interval states.
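The arithmetic for the 30-day example can be sketched as follows. This is an illustration only: the sign and reference point of the intervals depend on the Alert and Warning Interval State configured on the PM, and the function and field names here are made up for the example:

```python
from datetime import date, timedelta

def pm_schedule(last_completion, frequency_days, alert_lead_days, warning_lead_days):
    """Sketch of the alert/warning arithmetic for a time-based PM.

    With a 30-day frequency, an alert lead of 6 days and a warning lead of
    3 days, the work order is generated on day 24 and the warning shows
    from day 27 (counting from the last completion date).
    """
    due = last_completion + timedelta(days=frequency_days)
    return {
        "due date": due,
        "alert (WO generated)": due - timedelta(days=alert_lead_days),
        "warning shown from": due - timedelta(days=warning_lead_days),
    }
```

For example, pm_schedule(date(2015, 9, 1), 30, 6, 3) gives a due date of 2015-10-01, work order generation on 2015-09-25 (day 24) and the warning from 2015-09-28 (day 27).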

[Screenshots: alert and warning interval settings for each Alert and Warning Interval State]

Maximo and the Internet of Things


The Internet of Things, or IOT, is one of the upcoming technologies in the market these days. What is IOT? In short, IOT is smart devices sending data related to their performance, state, etc. over the internet, which can be acted upon, if required, without much human intervention.

There is an infographic which explains what IOT is all about and its applications. This is the most informative and interactive article I have come across on IOT; make sure you switch to full-screen mode when viewing it.

http://www.informationisbeautiful.net/visualizations/the-internet-of-things-a-primer/

How does Maximo fit into the world of IOT?

Consider a manufacturing unit whose assets are fitted with sensors that can monitor the performance, state and other parameters of the asset and upload them to the network – basically, smart assets. These parameters can then be picked up by Maximo, and –

  1. parameter values can be updated against assets (meter readings), and
  2. if any deviations are found, investigations or corrective actions can be initiated through condition monitoring.

This can significantly reduce the downtime on the asset due to breakdowns and increase its reliability, efficiency & performance. This is just one example; there can be thousands of applications of IOT.
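The two steps above can be simulated with a small sketch. The structures and the action name are hypothetical; a real integration would flow through the MIF and the Condition Monitoring application:

```python
def process_sensor_reading(asset, meter, value, lower_limit, upper_limit):
    """Record a sensor reading against an asset's meter and flag deviations.

    Mirrors the flow described above: the reading is stored as a meter
    reading, and if it falls outside the condition-monitoring limits a
    corrective action is raised.
    """
    record = {"asset": asset, "meter": meter, "reading": value, "action": None}
    if not (lower_limit <= value <= upper_limit):
        # Deviation detected: trigger follow-up, e.g. a corrective work order.
        record["action"] = "CREATE_CORRECTIVE_WO"
    return record
```

A reading inside the limits just updates the meter; a reading outside them additionally flags a corrective action, which is what reduces the breakdown-driven downtime.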

How is IOT transforming businesses? Have a look at the IBM IOT page, and do watch the customer stories.

http://www.ibm.com/analytics/us/en/internet-of-things

Bulk Meter Reading Import in Maximo for Transportation

When there are lots of assets and lots of meters to update on those assets, it becomes a real headache for the user to go to each asset and enter the meter readings. Maximo for Transportation gives the user the option to bulk import meter readings on multiple assets. The Fuel Transaction Import (Tr) application in the Data Import (Tr) module can be used for this purpose.

It doesn't require any complex MIF setup or large flat files or XMLs to import the meter readings. It only requires a few one-time settings in the Organization (Tr) application, after which a file with meter readings as shown below can be imported easily from the application.

[Screenshot: sample meter readings file in Notepad]

These meter reading imports are not specific to transportation-type assets; meter readings for all types of assets can be imported from this application.

For detailed steps on how to set up and import meter readings, refer to the document attached below.

Meter Reading Upload in Maximo

How to detect and resolve database connection leaks?

There are 2 very detailed posts on IBM DeveloperWorks written by Manjunath about database connection leaks – how to detect them and how to resolve them.

Maximo — How to detect database connection leak
Maximo — How to solve database connection leak

Here is the crux –

To free up leaked database connections in Maximo, IBM has introduced 3 new system properties (7.5.0.3 onwards) –

mxe.db.closelongrunconn – default is false.

mxe.db.longruntimelimit – default 180 minutes.

mxe.db.detectlongrunconninterval – default is 30 minutes. This is the frequency at which long-running connections are checked. It cannot be less than 30 minutes.

The mxe.db.closelongrunconn property, when set to true, will close connections that have been held for longer than mxe.db.longruntimelimit and were not used by any process in that time.


To detect a database connection leak, set the Maximo dbconnection watchdog logger to INFO and collect the logs for 1-2 days. If there are any connection leaks, the logs will show something like this –

[INFO] BMXAA7084I – The DbConnectionWatchDog class has been trying to close the database connection for: 230233001 ms

DbConnectionWatchDog:Db Connection reference id=436107 SPID=397
Create time:1302986384636
Life time:230233001 ms

The logger indicates that the connection has been held for 230233001 ms, i.e. approximately 64 hours. By looking in the logs approximately 64 hours back, one should be able to find the stack trace of where this connection was established.
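The life time in the log entry is reported in milliseconds; a quick sketch of the conversion and of the mxe.db.longruntimelimit threshold check (the function is illustrative, not the watchdog's actual code):

```python
def is_long_running(lifetime_ms, longruntimelimit_minutes=180):
    """Would the watchdog treat this connection as long-running?

    Mirrors the mxe.db.longruntimelimit threshold (default 180 minutes),
    converting the limit to milliseconds to match the logged life time.
    """
    return lifetime_ms > longruntimelimit_minutes * 60 * 1000

# The value from the log entry above: 230233001 ms is roughly 64 hours.
hours = 230233001 / (1000 * 60 * 60)
```

At the default limit, this connection is far past the threshold, so with mxe.db.closelongrunconn enabled it would be eligible to be closed.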

Maximo Fix Delivery – What’s Changing?

IBM has made some changes to the way it delivers fixes for Maximo. Here are the main points –

  1. LA fix report available in Maximo – obtain the list by running the LATestFixReportWriter.bat program from the <maximo>\tools\maximo directory; available in 7.5.0.2 and above.
  2. Cumulative interim fixes (IFIXes) are now delivered every 4 weeks.
  3. The fix pack delivery schedule has changed from every 6-8 months to every 3-4 months (quarterly).
  4. Fix pack vs. feature pack – fix packs are now called "feature packs" and will be a combination of bug fixes and product features & enhancements.

Here is the link to the complete article –

DeveloperWorks – Maximo Fix Delivery