By John Q. Todd, Sr. Business Consultant and Product Researcher with Total Resource Management
Artificial intelligence vs. reliability studies – which one is better?
Truth be told, I have held positions as a Reliability Engineer at both a space exploration organization and a software company that specialized in reliability analysis tools.
I greatly enjoyed the search for insight into what was happening, or could happen, based upon the mediocre data sets I was able to dig up. While the probability of failure could certainly be calculated, the bands of uncertainty were so wide that the final numbers really did not matter. Add to this the fact that modern equipment sports so much more reliability and built-in redundancy that in some cases it can be assumed new equipment will perform its function with little to no need to postulate failures. Yet we all know that things can and do fail, no matter how well engineered, so we must remain vigilant.
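To make the point concrete, here is a minimal sketch of the problem, using a hypothetical constant-failure-rate (exponential) model and a made-up handful of failure times. With only a few data points, the normal-approximation confidence interval on the estimated failure rate spans more than an order of magnitude, which is exactly why the point estimate "really did not matter":

```python
import math

def failure_rate_ci(failure_times, z=1.96):
    """Estimate a constant failure rate (exponential model) from observed
    times-to-failure, with a normal-approximation 95% confidence interval."""
    n = len(failure_times)
    total_time = sum(failure_times)
    lam = n / total_time                 # maximum-likelihood estimate of the rate
    half_width = z * lam / math.sqrt(n)  # normal approximation to the CI half-width
    return lam, max(lam - half_width, 0.0), lam + half_width

# A small, hypothetical data set (hours to failure) -- the kind of
# "mediocre" sample a reliability study often has to work with.
times = [1200.0, 3400.0, 800.0, 2100.0, 5000.0]
lam, lo, hi = failure_rate_ci(times)
print(f"lambda = {lam:.5f}/hr, 95% CI = [{lo:.5f}, {hi:.5f}]")
```

With five samples, the upper and lower bounds differ by roughly a factor of fifteen; the "answer" is a band, not a number.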
Your decisions are only as good as your data
Fast forward to today. Because of the telemetry available from modern equipment, the amount of well-organized data we have available to us is orders of magnitude beyond what we had just a few years back. It is now very common to have access to thousands of devices, each sending out data payloads with many data elements at sub-second intervals. There can easily be terabytes of data available for us to analyze and then make decisions with.
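Those numbers add up quickly. A back-of-envelope calculation, using hypothetical but plausible fleet figures (all of the values below are illustrative assumptions, not from any particular deployment), shows how easily the terabyte scale is reached:

```python
# Hypothetical fleet sizing -- every value here is an assumption for illustration.
devices = 2000          # instrumented assets in the fleet
elements = 50           # data elements per telemetry payload
bytes_per_element = 8   # e.g. one float64 reading per element
payloads_per_sec = 2    # sub-second reporting interval

seconds_per_year = 60 * 60 * 24 * 365
bytes_per_year = devices * elements * bytes_per_element * payloads_per_sec * seconds_per_year
terabytes = bytes_per_year / 1e12
print(f"~{terabytes:.1f} TB per year")  # roughly 50 TB/year for this modest fleet
```

Even this modest configuration produces on the order of 50 TB a year, before logs, events, or derived data are counted.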
Further, this data (and the need for decisions) is coming so fast that we cannot wait 2 weeks for a study to point us in a direction. We need to know and act now.
With so much data available, how best to ingest and use it to make near-real-time decisions?
Reliability Studies – Building the foundation
Reliability studies, statistical methods, and failure models have been in use for many years. They have proven themselves, given enough “good” data, to be reliable guides to product and process improvements. The products and services we enjoy today have greatly benefitted from the many years of folks grinding through data, coming up for air when they have a result for consideration. I can attest to the fact that my humble reliability studies played a role in several successful spacecraft launches and planetary encounters.
With analysis and data visualization tools now embedded in offerings such as IBM Maximo Application Suite (MAS), we have even more opportunity to visualize all this data and then make decisions from it.
Leverage Artificial Intelligence
Yes, it is easy for us to just click the “smart” icon and let whatever learning network behind the scenes take control and tell us what to do. Take your modern home sprinkler system controller. After you give it some information about your soil and plant types, sun/shade situation, and local weather, off it goes to manage your watering. It knows about impending rain and freezes and guides the application of water as it deems necessary. In the end you have a nice lawn with less water usage, without needing to spend much time thinking about it. AI has been working well for me in this context, so in my mind it has credibility.
Isn’t AI just another tool in our box to help us understand the data we have streaming in? Seems like it would be to our advantage to explore its use and apply it wisely where it makes sense. Why spend weeks pawing through data when AI can give us results in a few seconds? Why not use it as a second opinion… something we are told to always get before making a big decision?
Which is better – paper or plastic?
Let me give you the typical consultant answer: “It depends.” Sure, in some contexts the traditional reliability study is sufficient for exposing decision points. If the data you have access to is well formed, consistent, and three-bears just right, then keep doing what you are doing.
If, however, the volume and diversity of data is starting to add up, or you need tighter uncertainty curves, then looking into current-state AI tools may be of benefit. You will be surprised at the nuances these tools bring out of your ever-increasing data streams.
John Q. Todd has nearly 30 years of business and technical experience in the fields of project management, process development and improvement, quality/ISO/CMMI management, technical training, reliability engineering, maintenance, application development, risk management, and enterprise asset management. His experience includes work as a Reliability Engineer and RCM implementer for the NASA/JPL Deep Space Network, as well as numerous customer projects and consulting activities as a reliability and spares analysis expert. He is a Sr. Business Consultant and Product Researcher with Total Resource Management, an IBM Gold Business Partner focused on the market-leading EAM solution, Maximo. TRM specializes in improving asset and operational performance by delivering strategic consulting services with world-class functional and technical expertise.