I just got off the phone with an operator who said, “This year has been the worst year I can remember in terms of having to do more things with less money. It has been a real struggle to get this year-end closed.” The price of gas is down. The integrity rules have placed additional demands on resources. There is very real pressure to do things more efficiently.
Recession or not, the question of how to do more with less is universal across industries and across time.
Lucky for us, dozens of three-letter acronyms have been championed, tested, and either refined or abandoned in industries like banking, insurance, and healthcare. These industries were among the first to adopt workflow management systems (WMS), business process re-engineering (BPR), service-oriented architecture (SOA), and the like in the hope of doing more with less. Recent efforts with business process management (BPM) seem to be making headway.
In fact, Gartner says, “For struggling companies, business process management is a lifeline that helps them survive by reducing and avoiding costs in this volatile and turbulent economy.” The firm goes on to say that “compliance is often another burden. BPM is well-suited to drive costs out of compliance and regulatory work.”
That report pertains to managing regulatory compliance in the financial, insurance, healthcare, and similar industries. While those industries share pipeline integrity management’s need for an audit trail, they differ in the focus of their concern. In asset management, the “things” we manage are physical pipelines, not digital accounts. Instead of policy documents, we have linear assets. Instead of transactions lasting seconds to days, we have transactions, like the interval to the next assessment, that last years. How can the lessons from these other industries be applicable?
A taste of nirvana…
Take, for example, the case of aligning inspection results with the GIS centerline. By now this activity should be routine, and yet the consequences of screwing up translate to money lost digging holes in the wrong place.
The routine nature of this data alignment makes it desirable to outsource or down-source. The criticality of the task makes senior-level involvement necessary. Do you have junior resources perform the task and have senior resources review every instance? Do you throw people at the problem with extra quality-assurance checks before senior-level sign-off? How do you do this consistently? How do you monitor the process?
In a software-supported world, software can guide a junior-level resource through a repeatable workflow. It can check the alignment for common errors and help guide quality assurance. It can watch for unusual circumstances and raise these few cases to the attention of a senior resource, who can use the audit trail to ferret out a root cause.
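To make that concrete, here is a minimal sketch of such a routing step. Everything in it is illustrative: the field names, the offset and weld-match thresholds, and the idea of logging each decision to a per-task audit trail are assumptions, not any vendor’s actual design.

```python
from dataclasses import dataclass, field

@dataclass
class AlignmentTask:
    """One inspection-to-centerline alignment performed by a junior analyst."""
    task_id: str
    offset_ft: float          # measured offset between inspection log and centerline
    weld_match_rate: float    # fraction of girth welds matched (0.0 - 1.0)
    audit_trail: list = field(default_factory=list)

# Hypothetical thresholds -- in practice these would come from operator procedures.
MAX_OFFSET_FT = 5.0
MIN_WELD_MATCH = 0.95

def review(task: AlignmentTask) -> str:
    """Route a task: auto-approve routine work, escalate unusual cases."""
    task.audit_trail.append(
        f"checked offset={task.offset_ft}, weld_match={task.weld_match_rate}")
    if task.offset_ft > MAX_OFFSET_FT or task.weld_match_rate < MIN_WELD_MATCH:
        task.audit_trail.append("escalated to senior reviewer")
        return "escalate"
    task.audit_trail.append("auto-approved")
    return "approve"
```

A clean alignment sails through; an odd one lands on the senior resource’s desk with its history attached, which is exactly the division of labor described above.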
While this is a simple example, the benefits should be evident. The senior resource is released from a mundane task while remaining involved where needed. Despite using a lower-level resource, consistency and quality will improve as individual steps are refined. Confidence in the activity will increase because senior resources can go back and review the audit trail whenever there are questions.
Sounds pretty cool, doesn’t it? Indeed, many software vendors in our space have recognized the potential benefit and have begun to shape their message accordingly. From what I can tell, these vendors are getting traction with operators. Clearly there is something to this.
Current efforts…
I know of four domestic vendors who are messaging to this space. Do these vendors have the answers? From what I can tell, they don’t even have the questions yet. Each vendor is taking a different approach:
- Vendor 1: High-level BPM play
- Vendor 2: Workflow engine
- Vendor 3: Data integration
- Vendor 4: Decision support
Clearly each of these vendors has a different take on the problem. Dig into their stories, though, and you will see that they are all trying to help the integrity manager orchestrate the integrity process. If that holds, then as the space matures these vendors should converge on different flavors of the same answer.
Until an answer is revealed, what is an operator to do? Pick one of these four vendors and help drive their solution? Avoid being a guinea pig and wait for some success stories? How about turning outside the industry to learn from those who have been doing this for a while?
Going back to the thought leaders for this sort of thing – the financial, insurance, and healthcare industries – there are plenty of cautionary tales. For every company that has had a smashing success, another has had a miserable failure. [cite – miserable success rate of sw projects? Alt 50% success of BPM.] Clearly there is some risk involved.
How can we apply the lessons that other industries have learned in the area of using software supported processes to do more with less? How can we reduce our implementation risk and at the same time increase the likelihood of a successful solution?
Complicating factors…
There are some glaring differences between the industries of the thought leaders – finance, insurance, healthcare – and our own:
- We deal with linear assets that can be parsed into an infinite number of segments. Contrast this with policies, accounts, and patients which remain relatively discrete through their lifetime.
- Geospatial data is relied on much more heavily in our industry.
- Our “customer” is the longevity of the pipe.
- Our processes require trips from the office to the field and back.
In devising a solution, we must factor in the effect of these and other differences between business processes and engineering processes.
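The first difference above is worth a tiny illustration. An account or a policy is one record with one identity; a linear asset is continuous stationing over which any attribute can apply to an arbitrary sub-range, with ranges from different data sets overlapping each other. The station values, layer names, and data below are made up for illustration only:

```python
# A discrete entity (like an account or a policy) is one record, one identity.
account = {"id": "ACCT-1001", "balance": 2500.00}

# A linear asset is continuous stationing (here in feet). Each "layer" maps
# attribute values onto measure ranges, and layers overlap freely.
coating  = [(0.0, 12000.0, "FBE"), (12000.0, 30000.0, "coal tar")]
ili_runs = [(500.0, 28000.0, "2008 MFL run")]

def attributes_at(station: float, *layers):
    """Dynamic segmentation in miniature: collect every attribute
    whose measure range covers the given station."""
    return [value
            for layer in layers
            for (begin, end, value) in layer
            if begin <= station < end]
```

Asking `attributes_at(15000.0, coating, ili_runs)` yields one answer; asking at station 100.0 yields another, because every station along the pipe is potentially its own “segment.” That is the combinatorial wrinkle a business-process tool built around discrete records never has to face.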
A call for research…
One approach, favored by some of my colleagues, is to “just jump right in and start trying stuff.” If these vendors survive, they will eventually evolve a strong solution.
I prefer a more deliberate approach. I believe that by studying the successes and failures of business process management we can quickly narrow our scope to those engineering processes that share similar patterns. With a narrowed scope, we can then concentrate on the elements that make engineering processes unique.
To explore these problems, I am trying to set up some formal research through the Colorado School of Mines. My objective:
Given the vast canvas of software support for business processes, devise a value-based protocol for applying these tools to support pipeline integrity management.
If you know of an operator who would be receptive to the idea of being a laboratory, would you mind helping me connect with them?
Benefits:
- State-of-the-industry solution
- Do more with less, sooner
- Path of lowest risk
It will require:
- Operator participation:
  - help identify test scenarios
  - help collect baseline metrics
  - help select and install commercial software packages
  - customizing them
  - running tests
  - analyzing results
- Funding:
  - a research grant
- Time from you
- Some software & hardware
- Some programming
Maybe if we’re lucky we can get PHMSA to help out with this as well.
Stay nimble!
– Craig