MAS Manage MMI Stats Dashboard

While working on some code, I found myself wondering how to quickly check if things were running smoothly under the hood: whether there were any database connection leaks, or whether the code was using too much memory. Rather than digging through the logs, I decided to use the Maximo Management Interface (MMI) APIs. I started out calling the MMI APIs from Postman, but individually calling each endpoint was cumbersome, so I built a lightweight dashboard to show key JVM metrics from the MMI APIs.

This dashboard is strictly for use in a development environment and is not meant for production. It works only with the “all” server bundle and displays data from five MMI endpoints: threads, thread pool, memory, database connections, and user sessions. The stats auto-refresh every two minutes.

You’ll need a Maximo Manage API key to make it work. The API key must be embedded directly in the HTML, and you’ll also need to update the endpoints variable in the dashboard HTML to point to your Maximo Manage environment’s URL.
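
If you want to smoke-test an MMI endpoint outside the dashboard first, a Python sketch along these lines should work. The host, API key, and route below are placeholders; use the same base URL and paths that you put in the endpoints variable (the MMI root is typically /maximo/oslc/members, but verify against your environment).

    # Quick check of one MMI endpoint -- all values below are placeholders.
    import requests

    BASE = "https://manage.mycompany.com/maximo"   # your Maximo Manage URL
    APIKEY = "your-api-key-here"                   # your Manage API key

    resp = requests.get(
        BASE + "/oslc/members",       # MMI root: lists the server members
        headers={"apikey": APIKEY},   # same header the dashboard sends
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())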

The dashboard can be run directly from your desktop or hosted on a web server. If you’re running it locally, set the mxe.oslc.aclalloworigin system property in Maximo Manage to *. If it’s hosted, make sure the web server’s domain is added to that same property to avoid CORS issues. Also, add apikey to the existing list of headers in the mxe.oslc.aclallowheaders system property.

It’s a quick and easy way to monitor your development environment. You can add more MMI APIs or customize the UI as needed.

Check out the dashboard code here.

Manage Log Size and Rotation Policy in IBM WebSphere Application Server

Log files are essential for troubleshooting and monitoring applications on IBM WebSphere. Here’s how to set up log size and rotation policies for an application server in WebSphere:

    1. Log in to the Administrative Console.
    2. Navigate to “Troubleshooting” > “Logs and trace”.
    3. Select the server you want to set up the log size and rotation policy for, then open “JVM Logs”.
    4. Under “File Size”, configure the maximum log file size (in MB).
    5. Set “Maximum Number of Historical Log Files” for retention.
    6. Click OK and save the changes.
    7. Restart the application server for the policy to take effect.
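
If you’d rather script this than click through the console, a wsadmin (Jython) sketch along these lines should do it. The cell, node, and server names are placeholders, and the attribute names reflect my understanding of the configuration model, so verify them against your WebSphere version.

    # wsadmin -lang jython: set the SystemOut.log size and rotation policy.
    server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:MXServer/')

    # The SystemOut.log settings live on the server's outputStreamRedirect object.
    outLog = AdminConfig.showAttribute(server, 'outputStreamRedirect')

    AdminConfig.modify(outLog, [
        ['rolloverType', 'SIZE'],          # roll over based on file size
        ['rolloverSize', '50'],            # maximum file size in MB
        ['maxNumberOfBackupFiles', '10']   # historical files to retain
    ])

    AdminConfig.save()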

Efficient log management ensures optimal performance and storage utilization in WebSphere Application Server. Refer to official IBM documentation for version-specific details.

Log Level in Automation Script

What logging level do you use to log messages from automation scripts? Well, the instinctive answer is the most verbose level of logging, debug. I used to do the same, but in the past couple of months I have moved from writing logs at the debug level to writing them at the info level.

If you have not already noticed, in the last year or so automation script logging has changed in Maximo. Earlier, you could just specify a logger in the code, set that logger to DEBUG in the Logging application, refresh the cache, and your messages would start appearing in SystemOut.log. That doesn’t work anymore. Now you need to set the logging level on the automation script record itself for messages from the script to be logged to SystemOut.log. And if you set the log level on the automation script record header to DEBUG (because that is what the log statements in your code are using), Maximo starts printing a whole lot of OOB debug information in the logs. This makes debugging an issue much more difficult, because you have to search for your debug statements in a big pile of debug logs. If you are trying to debug a big script, or one that runs on a schedule, reading the logs is just a nightmare.

So, to be able to debug issues without having to scroll through a large pile of unrelated logs, I moved to using the info level in my automation script code. That way, when I set the log level on the automation script record header to INFO, Maximo prints only my log statements and not the OOB debug output.
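
Here is a minimal sketch of what that looks like in a script. The logger name and the logged field are illustrative, and the mbo implicit variable assumes an object launch point.

    # Maximo automation script (Jython) -- logger name is illustrative.
    from psdi.util.logging import MXLoggerFactory

    logger = MXLoggerFactory.getLogger("maximo.script.MYSCRIPT")

    # Written at INFO so that setting the script record's log level to INFO
    # prints these lines without the flood of OOB DEBUG output.
    logger.info("MYSCRIPT: processing work order " + mbo.getString("WONUM"))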

Happy days!!

What do you think about this approach? Let me know in the comments.

Auto Restart WebSphere JVM after a Crash

Auto-restarting the WebSphere JVM after a crash is a useful feature for minimizing downtime. In this blog post, we will discuss how to enable the auto-restart setting in WebSphere.

First, navigate to the Monitoring Policy under the Java and Process Management section for the application server in the WebSphere administrative console. In the Monitoring Policy panel you will find the “Node restart state” option. Change this setting to “RUNNING” for the JVM in question, and make sure “Automatic restart” is enabled, since that is the flag that controls the restart-on-failure behavior itself.
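
If you prefer scripting the change, here is a wsadmin (Jython) sketch. The cell, node, and server names are placeholders; verify the attribute names against your WebSphere version.

    # wsadmin -lang jython: enable auto-restart for a server's JVM.
    server = AdminConfig.getid('/Cell:myCell/Node:myNode/Server:MXServer/')

    # Each server's process definition carries a MonitoringPolicy object.
    mp = AdminConfig.list('MonitoringPolicy', server)

    AdminConfig.modify(mp, [
        ['autoRestart', 'true'],          # restart the JVM if it fails
        ['nodeRestartState', 'RUNNING']   # also start it when the node starts
    ])

    AdminConfig.save()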

It’s worth noting that this setting will also bring the JVM up when the node is started, ensuring that your application is always running and available.

Additionally, it is recommended to monitor the JVM logs; if the JVM crashes frequently, it’s best to investigate the root cause of the crashes rather than relying on auto-restart.

For more information, refer to the following link: https://www.ibm.com/support/pages/automatic-restarting-websphere-services

In conclusion, auto-restarting the WebSphere JVM after a crash is a simple way to keep your application available. By following the steps outlined in this post and the reference link provided, you can easily enable the auto-restart setting in WebSphere and keep your application running smoothly.

DBC Technical Reference

Well, reference documents vanishing from the IBM site is nothing new. That is what happened to the DBC Technical Reference Guide, which was available on the IBM site until last year. I went looking for it a few days ago but couldn’t find it there. I thought it was gone along with all the other developerWorks articles.

But thanks to Cezar Tipa, who had an offline copy, and Biplab Das Choudhury, who shared it on LinkedIn, a copy of it is still available.

I downloaded a copy and uploaded it here. You can find a link to the document below.

DBC XML Format for Technical Consultants

Update JAVA_HOME in WebSphere

This is for an old version but still helpful. Update the JAVA_HOME variable to point to the location where the JDK is installed, e.g. –

Unix: JAVA_HOME=/usr/local/java/jdk1.8.0_13

Windows: JAVA_HOME=C:\j2sdk1.8.2_14

Restart WebSphere Application Server for the changes to take effect.

Reference: https://community.microstrategy.com/s/article/KB16159-How-to-change-the-location-of-the-Java-JDK-JAVA-HOME?language=en_US

File Deployment From WebSphere Console

Have you ever come across a situation where your development environment’s WebSphere is on Linux/Unix and you don’t have access to the filesystem to hot deploy your class changes for testing? Well, it’s pretty common if you are working for a big company that has a lot of security policies in place. But what do you do? Do you go through all the pain of building the EAR and deploying it to WebSphere, only to realize that there was a small piece of code you missed and that you have to build and deploy all over again? Isn’t that frustrating!

What I am going to show you is not as quick a fix as a hot deployment, but it does save some deployment time. With this, you won’t need to build the EAR all over again; you just do the deployment, which takes half the time of a full EAR deployment.

This functionality in WebSphere lets you deploy delta changes to an application. All you have to do is:

    1. Select the application you want to deploy the changes to.
    2. Select the path where the file has to be placed inside the application.
    3. Select the file on the local filesystem that has to be uploaded.

WebSphere will then deploy that file for you.

Let’s take an example: I want to deploy a class, abc.class, in businessobjects.jar at this path – com\pk\cron. Under the Enterprise Applications section in WebSphere, there is an option called Update which will let you upload or update this file in businessobjects.jar. With this you can upload new files or update existing files.

Follow along with the screenshots; it’s pretty much self-explanatory. The only thing you have to be careful about is the path inside the application EAR. It has to be correct, or WebSphere will place the file in the wrong path. If you are not sure about the path, open the EAR, navigate to the folder where you want to deploy the file, and copy that path.

In this case the path is – businessobjects.jar\com\pk\cron\abc.class

businessobjects.jar BEFORE uploading the file – 


businessobjects.jar AFTER uploading the file – 
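
If you’d rather script it than click through the console, wsadmin’s AdminApp.update can push the same single-file change. Here is a Jython sketch; the application name and the local file path are placeholders.

    # wsadmin -lang jython: update a single file inside a deployed EAR.
    AdminApp.update('MAXIMO', 'file',
        '[-operation update '
        '-contents /tmp/abc.class '
        '-contenturi businessobjects.jar/com/pk/cron/abc.class]')

    # Save the change to the master configuration.
    AdminConfig.save()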

Until next time!

Maximo Concepts Has Been Moved!

Welcome to Maximo Concepts.

This blog has been moved from maximoconcepts.wordpress.com to maximoconcepts.com.

What that means is that I won’t be updating or publishing blogs on maximoconcepts.wordpress.com anymore; I’ll be doing it here from now on. From a look-and-feel perspective, nothing has changed; your experience will remain the same.

Unfortunately, people who had subscribed to my old blog will have to subscribe again here. I’ll do my best to re-subscribe them myself; if you get a subscription confirmation request from this blog, please accept it.

So for the latest updates, you are at the right place. There is a whole new section on the right panel for subscribing, so don’t forget to subscribe!

Cheers!

Load Balancing Using IHS – 101

How does IHS load balancing work?

IHS, or IBM HTTP Server, is an Apache HTTP Server implementation with a topping of IBM configuration. IHS is not a part of WebSphere, but you can install it separately and add it to WebSphere so that you can manage it using the WebSphere console.

In addition to all the functions of a web server, IHS can also be used as a load balancer, which means you can use IHS to spread the traffic evenly across all your JVMs; and in case a JVM crashes or is otherwise down, traffic won’t be routed to that JVM.

For IHS to do the load balancing, it needs an understanding of the clusters, the JVMs in those clusters, virtual hosts, context roots, and so on. Since IHS is not a part of WebSphere, there has to be a way for it to get that information, and this is where the plugin comes in. The plugin configuration is basically an XML file (plugin-cfg.xml) that has details of the clusters and their members, the virtual hosts associated with the applications on the clusters, and a lot of other details. By default, the plugin rereads this file every minute to pick up the latest configuration.

The plugin has to be installed separately and linked to the IHS. If you use the launchpad to install the plugin, it takes care of associating the plugin with the IHS; if you are installing it manually, you need to make sure you link it to the IHS yourself.

That’s it for today!

Until next time!

Things You Must Know Before Using Migration Manager

Migration Manager is a really great tool for migrating configurations from one Maximo instance to another. It does all the validations and tells you about any dependency that you may have missed in the package. Unfortunately, it doesn’t tell you that while you are creating the package; it tells you while the package is being deployed, and I wouldn’t blame Migration Manager for that. When the package is being created in the source environment, it doesn’t really know where it’s going to be deployed and whether the target environment has all the dependencies or not. It’s only when the package is being deployed that Migration Manager finds out that the dependencies don’t exist in the target environment, and the package errors out. Let me take an example and explain.

Let’s say you want to migrate an object from one environment to another. You create a Migration Manager package for that object which includes all its attributes, but not the domains that are associated with those attributes. When you deploy this package to the target environment, it will fail if those domains don’t exist there.

Migration Manager has its own section for specifying dependencies, and most of the out-of-the-box migration groups have dependencies already specified.

Migration Groups – screenshots showing out-of-the-box dependencies

But I don’t prefer using this dependency functionality of Maximo, and here’s why: if you look at the images above, Data Dictionary is one of the dependencies for the application migration group, which should be the case, but what this does is take the entire data dictionary from the source environment and deploy it in the target environment.

You really wouldn’t want to do that, because:

  1. If a developer has created any attributes or domains for testing some functionality and didn’t delete them, they would be migrated too. Worse, if someone changed or removed a class from an OOB attribute, or removed the domain associated with it, that would be migrated too. This could break existing functionality.
  2. Even if none of that happens, it would unnecessarily take a lot of time to deploy the package, since the entire data dictionary from the source would have to be compared to the existing data dictionary of the target for changes.

Unfortunately, if you don’t know about this feature of Migration Manager, or you are not careful about it, you might unknowingly migrate certain configurations that were not required.

So it’s better to maintain dependencies yourself than to have Maximo maintain them for you.

Have a great day!

Cheers!