Archive for December, 2009

In search of efficiency for pipeline integrity management…

December 23, 2009

A different approach to IM…

I think that a dashboard is needed.

Think about an airplane cockpit.  In a mature aircraft, like a 737, there are tons of instruments, controls, and warnings.  In contrast, the simplest planes may only have an airspeed indicator, altimeter, and a compass.  These three basic instruments are technically very simple to build, but they provide tremendous value to the pilot.

Airspeed is measured by a pitot tube, which is essentially a hollow tube that is inserted into the air rushing around the plane.  Altitude can be measured with a barometer.  Direction with a magnet on a bearing.  It doesn’t cost a lot to build these instruments, but it could be argued that they are the three most useful instruments on the plane. [cite – conversation with Jerry H on Dec 21]

Imagine trying to fly a plane without these instruments.  Pilots would not know how fast they were going, how far they were from the ground, or which direction they were headed.  It can be done, but it is stressful.

Integrity managers are stressed.  Could it be because they don’t have the basic instruments they need to navigate their integrity programs?

Clearly this must be the case.  There is a lot of energy being spent trying to make the integrity manager’s life easier.  I know of operators who are doing additional data integration, or installing workflow engines to try to give the integrity manager a bit more visibility and control.  The PODS committee is feeling this need in the form of requests for new functionality. [cite PODS march meeting]

Vendors have picked up on this.  I know of four US companies who specifically address the integrity manager’s concerns.  There are countless other vendors who claim to make a portion of the integrity manager’s life easier.

It is my observation that much of the energy being spent on solving the integrity manager’s visibility and control problems is being wasted.

Some efforts, like additional data integration, are too narrowly scoped.  These projects are spending too much money building messaging systems that go into unneeded detail.  Other efforts, like full-blown BPM systems, are too broadly scoped.  These efforts are trying to build the complex control systems of the 737 without taking the time to identify the essential instruments.

I think we need a dashboard. [cite – business week]  I think we need to identify the critical instruments the integrity manager needs and build them.  We need to identify the critical controls they need and build those too.  Let’s get these instruments into the hands of those who need them.

Who are the integrity managers?

In every company that operates a pipeline, there is a small group of individuals who are ultimately responsible for that pipeline’s longevity.  This group has the dual mission of reducing the cost of ownership and of reducing exposure to integrity-based fines.  We’ll call this group the integrity managers. [cite blog entry]

The organizational structure and composition of this group may differ, but ultimately these individuals are responsible for the following:

1: Creating a program (maintenance, regulatory)

  • Establishing an overall strategy
  • Developing a program to support that strategy
  • Creating procedures to support this program

2: Operating that program

  • Measuring the effectiveness of those procedures
  • Modifying the program to improve its effectiveness
  • Responding to unusual conditions

[Adapted from ASME Pipeline Operations; cite]

Programs for compliance

Historically rules concerning pipeline maintenance are operational in nature:

Example: periodic inspection of rectifiers [check]

When a new operational rule is encountered, the IM group creates a program for compliance.  Once the program is deployed, they are able to demonstrate compliance by comparing a schedule of required tasks against the work that was done.  The schedule is simple: if an asset is of type A, then the tasks for type A must be performed within a prescribed interval.

Example: check valves have to be operated annually [check]
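The calendar-driven compliance check described above is simple enough to sketch in a few lines. This is a hypothetical illustration only – the asset types, task names, and intervals are made up, not drawn from any regulation:

```python
from datetime import date, timedelta

# Hypothetical asset types, tasks, and intervals for illustration only.
REQUIRED_TASKS = {
    "rectifier": [("inspect", timedelta(days=365))],
    "valve": [("operate", timedelta(days=365))],
}

def overdue_tasks(asset_type, last_done, today):
    """Compare the prescribed schedule against the work actually done."""
    overdue = []
    for task, interval in REQUIRED_TASKS.get(asset_type, []):
        performed = last_done.get(task)  # date the task was last completed
        if performed is None or today - performed > interval:
            overdue.append(task)
    return overdue
```

Demonstrating compliance then reduces to running a check like this across every asset and showing that nothing comes back overdue.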

At first glance, the integrity rules appear to be operational rules.  “Pig the line every 7 years.  If you can’t pig it then use DA.  Dig up and fix sections that are really degraded.”  If this were truly the case, the IM team’s main concern would be whether the pipe is piggable.  Compliance would be a simple matter of running assessments according to a calendar schedule.

Of course, things are never simple.  The schedule of the IM rules is not only driven by asset type, it is also driven by the environment around the pipe.  In addition to piggability, you must consider nine threats, three consequences, assessment methods, overall risk, and mitigation activities, among other factors.  These concerns apply cyclically over the operational life of the pipe.

Because of the need to consider the environment around the pipe, the rules have significant data handling and analytical components.  They require that data be integrated to support a risk model.  You must respond to discoveries within your risk model, not just your assessments.  You must react to changes within the data.  Instead of occasionally monitoring the rule, the IM team is actively participating in it.
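By contrast, a risk-driven schedule cannot be read off a calendar; it has to be computed from integrated data. Here is a toy sketch of that idea – every score, threshold, and baseline interval is invented for illustration and does not come from any rule or standard:

```python
def reassessment_interval_years(threat_scores, consequence_score, piggable):
    """Toy risk-driven schedule: higher risk shortens the interval.

    threat_scores: iterable of 0-1 scores (corrosion, third party, ...).
    consequence_score: 0-1 score for impact on people and environment.
    All thresholds and baselines below are illustrative, not regulatory.
    """
    risk = max(threat_scores) * consequence_score
    base = 7 if piggable else 5  # illustrative baseline intervals
    if risk > 0.6:
        return max(1, base - 4)  # high risk: reassess much sooner
    if risk > 0.3:
        return base - 2          # moderate risk: shorten the interval
    return base                  # low risk: baseline interval
```

The point is not the arithmetic; it is that the inputs (threats, consequences, piggability) only exist as integrated data, so the schedule changes whenever the data does.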

Problem

Examine the responsibilities of the IM group again.  I have broken them into two phases – creating a program to support the IM rules and then operating that program. The IM team has been creating a program for IM compliance over the past few years.

As pipelines go through their second IMP assessments, they move into the operational phase.  Recall the elements associated with operating a process:

  • measuring program effectiveness
  • modifying the program to improve effectiveness
  • responding to unusual conditions

Each of these activities requires that the IM group look into the program and extract information from it.  Based on this information they will take action – kick off a dig, a B31G analysis, and so on.  If something unusual happens, they will need to take action once to correct the problem.  If it keeps happening, they will need to alter the program so that it stops.

Hypothesis: The effectiveness of the integrity team is based on their ability to see information and respond to it.

To test this hypothesis one only has to look to other industries.  There is a wealth of case studies addressing this hypothesis in the field of business process management.  [cite needed. gartner?  wiki?]  This field has evolved since the first workflow engines in the 1970s.  Businesses have continued to invest because they have seen results.  Measurable improvements in efficiency are very real.  So much so that Gartner recently said that you shouldn’t run your business without it. [cite]

I am going to assume that you subscribe to my hypothesis that your IM team’s ability to do their job efficiently is based on their ability to monitor their programs and their ability to intervene and adjust the process.  Let us then examine your IM team’s capabilities in terms of visibility and control over their programs.  This will give us the problem we are trying to solve.

In the current environment the view is anything but crystal clear.  The view of the IM program is fragmented by task.  What does your team need to do to create a “vertical slice” of data across a section of pipe?  Is this something they can do effortlessly?  Now, what if they wanted to examine it historically?  Compare this to other slices?

Control for the IM team is no better.  Procedures that are written on paper, or in an electronic procedural library, don’t provide feedback regarding weaknesses.  Workflows that have been scripted into software tools provide feedback, but are difficult and expensive for the IM team to change.

Problem: What is the most effective way to improve the efficiency of the IM team?

Drivers:

  • Reduce cost of IM program compliance
  • Reduce exposure to IM related penalties and fines
  • Increase effective life of assets

What is available…

I know of four domestic vendors who are marketing to the problem the integrity managers are facing.  Each vendor is taking a distinctly different approach:

  • Vendor 1: Business process management
  • Vendor 2: Workflow engine
  • Vendor 3: Data integration
  • Vendor 4: Decision support

Clearly each of these vendors has a different take on the problem.  Ultimately, they are each trying to help the integrity manager be more effective.

Why such different approaches?  I maintain that each addresses a part of the solution.  Business process management and the workflow engine address the control portion of the problem.  Data integration and decision support address the visibility issue.

As the space matures these vendors should converge to similar answers.  Are you willing to risk your cash to help them find the answer?

So, what about looking to tools available for business processes?  Can these tools be adapted for integrity management?  Maybe.

The challenge is that there is such a broad spectrum of possible solutions.  Additionally, the engineering processes behind the IM program are different enough from business processes that they should be examined:

  1. We deal with linear assets that can be parsed into an infinite number of segments.  Contrast this with policies, accounts, and patients, which remain relatively discrete through their lifetime.
  2. Geospatial data is relied on much more heavily in our industry.
  3. Our “customer” is the longevity of the pipe.
  4. Our processes require trips from the office to the field and back.
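Point 1 deserves a concrete illustration.  Because a pipeline is linear, any two attributes recorded as stationing ranges can be overlaid to produce new segments wherever either attribute changes – a segmentation problem that discrete entities like accounts and policies never face.  A minimal sketch, with made-up attribute names and stationing values:

```python
def overlay(ranges_a, ranges_b):
    """Dynamic segmentation: split the line wherever either attribute changes.

    Each input is a list of (start, end, value) stationing ranges covering
    the same extent. Returns (start, end, value_a, value_b) segments.
    """
    # Collect every breakpoint contributed by either attribute set.
    points = sorted({p for s, e, _ in ranges_a + ranges_b for p in (s, e)})

    def value_at(ranges, station):
        for s, e, v in ranges:
            if s <= station < e:
                return v
        return None

    # Each adjacent pair of breakpoints is a segment with one value from each set.
    return [(s, e, value_at(ranges_a, s), value_at(ranges_b, s))
            for s, e in zip(points, points[1:])]
```

Overlaying just two attributes already multiplies the segment count; overlaying nine threats, consequences, and assessment histories is where the data handling burden comes from.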

While I think that the software tools are out there to solve the IM problems of visibility and control, I think that it would be useful to take an academic approach to the problem.  What is the most direct path to IM efficiency?  I maintain that nobody knows.

A call for research…

I believe that by studying the successes and failures of business process management we can quickly narrow our scope to those engineering processes that share similar patterns with corresponding business processes.  With a narrowed scope, we can then concentrate on the elements that make engineering processes unique.

To explore these problems, I am trying to set up some formal research through the Colorado School of Mines.  My objective:

Given the vast canvas of software support for business processes, devise a value-based protocol for applying these tools to support pipeline integrity management.

If you know of an operator who would be receptive to the idea of being a laboratory, would you mind helping me connect with them?

Benefits:

  • Be the first in the industry with a real solution
  • Working software tools to keep and expand
  • Thought-leadership in the industry

It will require:

  • Operator participation
    • help identify test scenarios
    • help collect baseline metrics
    • help select and install commercial software packages
    • help customizing them
    • help running tests
    • help analyzing results
    • help designing subsequent experiments
  • Funding
    • research grant
    • time from you or your staff
    • some software & hardware
    • some programming

Maybe if we’re lucky we can get PHMSA to help out with this as well.

Stay nimble!

– Craig

BPM – The Adoption In The Financial Industry Versus Early Expectations

December 16, 2009

A very nice article which describes the adoption of technology in the banking sector.  It suggests the road to nirvana is bumpy.  Very well written.  No sources.

Bpm – The Adoption In The Financial Industry Versus Early Expectations

Categories: agility

What is pipeline integrity management, really?

December 10, 2009

A point of confusion…

Recently I had a conversation that went badly.  Examining it in hindsight, it is clear that I was on a completely different wavelength regarding the definition of integrity management.  This is not the first time that the definition of integrity management has caused me to have a confusing conversation.

The confusion usually comes when I am speaking to an individual from the operations side of the business.  Most specifically, when I am speaking with a vendor who services the operations side of the business.

It’s not what you know…

I think Figure 2.2 from Mohitpour et al in Pipeline Operation and Maintenance – A Practical Approach (ASME)[ii] (crudely screen-captured from the Google Books preview) highlights the source of confusion:

Generic org chart for pipeline operating company.

Pipeline Control, Operations and everybody else

This chart shows a typical organizational model for a pipeline company.  My experience confirms that this structure is common.  All management functions have been combined into the single box at the top of the structure.  Below this box are the four primary concerns of operating a pipeline – operations, pipeline control, technical support, and corporate support.

The entire branch on the left side of this chart is the operations division.  Operations is a very visible concern for the pipeline company.  In contrast, the pipeline integrity group is merely a box in the technical support group.  Because this group is small, typically only a handful of engineers and technicians, many vendors do not encounter them.

It is not surprising that vendors who do not encounter IM would have a skewed perspective of its definition.  It is to be expected that they would be unaware of its concerns.  To these guys IM is about running pigs and doing DA, because this is where their customers’ concerns are.  “Their customers” being the operations group, not the integrity management group.

Convergent definitions…

Mohitpour et al [i] say that the integrity management and asset management functions of pipeline maintenance are often part of the technical support functional area, which is “typically located in the head office of the pipeline company.”  This “support group carries out integrity management functions.”

Again, drawing from the book, the authors list the concerns of this group as:

  • Pipeline inspection
  • Pipe replacement
  • Establishing an overall maintenance strategy
  • Developing a maintenance program
  • Managing computerized maintenance management systems
  • Measuring performance of maintenance activities

This explanation seems consistent with the PHMSA definition of integrity management.  PHMSA prescribes a program consisting of regular inspections, risk assessments, and mitigation activities.  While PHMSA’s primary concern is minimizing the pipeline’s impact on people and the environment, the operator shares these concerns plus the concern of maximizing the pipeline’s useful life.  In either case, the activities described in the IM rules are very much aligned with the activities described by Mohitpour.

Both perspectives describe an integrity manager who is concerned about data and process. These are not the boots-on-the-ground guys.  They work in an office.  They ask the questions “what” and “where”.  They are less concerned about “who” and “when”.

My definition, my ambition…

So far, this is consistent with my definition of integrity management and with my understanding of the concerns of the integrity manager.  Not to dis the pigs, but they are not what is on my mind when I write about integrity management.

Definition: Pipeline integrity management is a continuous sequence of engineering analyses and subsequent assessments which combine to maximize the useful life of each individual pipeline asset.  This is very much about data.  It is very much about keeping a record of each analysis, assessment, and decision.  It is about being able to support a decision when audited.

Ambition: The vision of the integrity manager is to orchestrate the IM process.  To see data flow smoothly from one step to the next.  To be able to visualize the past, present, and future of maintenance on the pipe.  To know who and why and when.  To initiate tasks across departments.  To have data that goes in clean and stays that way.  To continuously improve IM processes and to be able to demonstrate this improvement.  To do more with less.

I will go on later.  For now, I have clarified the definition of integrity management and have identified the core concerns of integrity managers.  In later installments I’ll explore these concerns in more detail.

Until next time,

-Craig


[i] Sec 2.4.3 p 39; Pipeline Operation and Maintenance – A Practical Approach (ASME); Mo Mohitpour, Jason Szabo, Thomas Van Hardeveld; (C) 2005 ASME, NY NY 10016; Link to cover on google-books.

[ii] fig 2.2 p 38; Pipeline Operation and Maintenance – A Practical Approach (ASME); Mo Mohitpour, Jason Szabo, Thomas Van Hardeveld; (C) 2005 ASME, NY NY 10016; Link to cover on google-books.

Log of the disambiguator… (volume 1)

December 4, 2009

Today the disambiguator started reading my blog.  He/she (I can’t really tell) grumbled something about “stupid software jargon” and then grabbed my laptop.  Below are its writings.  I am labeling this as volume 1 because I have a feeling it will be back…

from the lair of the disambiguator…


Craig is concerned about software things; consequently, he uses many software terms.  To disambiguate something means to determine its intended meaning.  The term comes from linguistics and was adopted by computer science.  This document will be a place for me to clarify how he uses terms.

I like being called the disambiguator because it sounds complicated and obscure.  You can call me the dissa.  Ironically, I want to make things simple and clear.  I think it makes a nice contrast.

Disambiguator Definition | Definition of Disambiguator at Dictionary.com

Client application

A software program that makes requests of another software program, which fulfills those requests.  Usually the other software program is running on a remote server.  In terms of the user experience, a client application provides the interface (GUI) for a user to interact with.  In terms of networked applications, a client application is the one making the request.

Specific types: Fat client, thin client
Synonyms: graphical user interface (GUI), user interface (UI)
Compare vs: Server application
Used with: client/server application, 2/3/n-tiered application, user experience (UX)
Examples:

  • Apple iTunes Store uses a thin client application that runs in a web browser
  • When using a central Geodatabase, ArcView functions as a fat client for this data

Client/server application

“Client/server describes the relationship between two computer programs in which one program, the client, makes a service request from another program, the server, which fulfills the request. Although the client/server idea can be used by programs within a single computer, it is a more important idea in a network.” – What is client/server? – Definition from Whatis.com

I have heard many people describe client/server as a “normal application” (like Excel or Visio) which pulls information from a server.  This description implies a fat client application where the local software application does most of the work.  The server is only there to fulfill requests.  Contrast this with a thin client application where the server does most of the work and the local software application merely directs the server’s actions.

When I use the term, I will be using the broader sense of it.  I intend the term client/server to mean any client server arrangement in which one software program makes a request of another program which fulfills that request.

Specific types: fat client, thin client, networking
Synonyms: fat client application, 2-tier application, peer-to-peer application (rare to hear thin client called C/S)
Compare vs: thin client application, 3/n-tiered application, web-application
Used with: client application, server application, database application
Reference: Client-server – Wikipedia, the free encyclopedia
Reference: What is client/server? – Definition from Whatis.com
Examples:

  • Your online banking application is client/server.  In this application a thin client application runs in your web browser.  This client application accesses a server application across the internet.  It is this server application that does the actual work as it is directed by the client application.
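The request/fulfill relationship in the banking example boils down to a few lines of code.  In this sketch a socketpair stands in for the network, but the two roles are exactly as described: one program makes a request, and the other does the work and fulfills it.  The request text is invented for illustration:

```python
import socket
import threading

def server(sock):
    """Server role: wait for a request, do the work, fulfill it."""
    request = sock.recv(1024).decode()
    sock.sendall(("fulfilled: " + request).encode())
    sock.close()

# A socketpair stands in for the network; the two roles are unchanged.
client_sock, server_sock = socket.socketpair()
thread = threading.Thread(target=server, args=(server_sock,))
thread.start()

# Client role: make the request and let the other program do the work.
client_sock.sendall(b"get balance")
reply = client_sock.recv(1024).decode()
thread.join()
client_sock.close()
```

Whether the client is fat or thin is just a question of how much of the work happens on each side of that request.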

Until next time,

the dissa

Selling an agile approach…

December 1, 2009

Setup:

I came across this question at Discussion: Agile | LinkedIn (account required).  In the original posting Myroslava Trotsyuk referenced an article (How Agile Practices Reduce Requirements Risks) by Ellen Gottesdiener which describes 6 project risks that are mitigated by agile practices.  In response Lanis Brett Ossman raised the question:

Why is it difficult to sell an agile approach when delivering software solutions?

In explaining the question, Lanis Brett created his own list of 6 objections to adopting an agile approach.  While he labeled them as “risks”, I have encountered these same objections when trying to answer the question “why agile?”

– Risk 1: For the “major” cost, customers expect developers to figure out the details. They expect developers to know their business, for the money they are paying.

– Risk 2: Added cost for paying employees to review progress rather than do their regular job. Again, why can’t the developer nail down the details up front?

– Risk 3: Developers typically eat the added cost of poor impact analysis. Not the customer’s fault.

– Risk 4: Many developers just allow scope creep, so the customer does it expecting no impact to them. Creep is often small changes, but they add up fast.

– Risk 5: Defective requirements pretty much involve the same customer issues as Risk 1.

– Risk 6: Why should I, the customer, pay for your learning curve implementing new tools and techniques?

Response:
Nice list of risks/objections.  The original article was compelling, but outside of project risk it doesn’t address other objections to an agile approach.  When the customer comes back with “purchasing insists we know all the costs up front” or “It is the rules, sorry” you often have no choice but to comply – taking the cost-overrun bullet yourself instead of letting it hit the customer.  At this point, you have to overcome the purchasing or rules objection.  There is no point in pleading project risk anymore.

As the original article stated, software is different from other types of products.  In the physical world project estimation and sales are relatively straightforward.  Not so in the software world, where our deliverables are unbounded.  We are not even limited by our materials or tools.  We can create tools as needed.  If we need a new fundamental data structure, or other “materials,” we can create those too.

Until it is fully delivered, software only exists in the imagination of those requesting it and those building it.  There is plenty of room for misunderstanding and error.  This is the problem Agility was created to solve.  For it to work, though, you need to sell something that fundamentally looks agile.

Are you selling an agile project or a traditional one in which you hope to apply agile techniques?

Case study:
If I am an oil company ordering a drilling platform, I expect to specify its dimensions, drilling depth, and other requirements for the finished unit.  I expect the contractor to take my finished specifications and, working from a wireframe model, come up with a list of materials and standard times for attaching those materials.  The sum of the parts rolls up to a final cost estimate.  This is monolithic.

Similarly, as an oil company asking for “asset management software” I expect the vendor to pull pre-built elements off the shelf and attach them, like welding, to produce a final product that meets my specifications.  When I hear that I need to participate in discussions about every rivet, bolt, and plate it doesn’t make sense.  Why can’t you just pull 5 compressors and some 2″ steel plate off the shelf and put something together?

Points of failure:
Physical world –
Despite its enormous complexity, a drilling platform is made up of only a few hundred basic elements – steel plate, bolts, pumps, steel tube.  During construction these basic parts are used repeatedly.  By modifying or “fabricating” these elements into the proper shape, workers can attach them to an adjacent part.  The complexity originates from the volume of repeating elements.

Software world –
Software is different.  If a basic element is repeated, it is a sign of poor programming.  If a software program has 100 different parts, every one of them should be unique and independently useful in shaping the final result.  Our raw materials – software classes – need to each function flawlessly to receive information, act on that information, and pass that information on to another class.  If a single element malfunctions it can affect the entire application.  Complexity arises not only from the number of assembled parts, but also from the fact that each one of these parts is individually unique.
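That chain of classes – each receiving information, acting on it, and passing it on – can be sketched in a few lines.  The class names and the pipeline itself are invented purely for illustration:

```python
class Parser:
    """Receive raw text and turn it into a number."""
    def handle(self, data):
        return int(data)

class Doubler:
    """Act on the information."""
    def handle(self, data):
        return data * 2

class Formatter:
    """Pass the result on in its final form."""
    def handle(self, data):
        return "result=%d" % data

def run(stages, data):
    # Each class hands its output to the next; a single malfunction
    # anywhere in the chain breaks the whole application.
    for stage in stages:
        data = stage.handle(data)
    return data
```

Note that none of the three classes is a repeated element – each does something unique, which is exactly where software complexity differs from the drilling platform’s repeated plates and bolts.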

Bringing it back around:
Our customers are accustomed to purchasing monolithic items in which the number of fundamental parts is small.  In theory if we could break our offering into elements, each of which is no more complicated than a drilling platform, we would reduce our software risks to the (embarrassingly) relatively low risk of building a drilling platform.

To achieve this we need to get our offerings (what we sell) small enough where the functionality is represented by a few hundred classes at the most.  By keeping what we sell small, we lower everyone’s risk – ours and our customers’.  We can build up complex solutions from these smaller elements.

I came across the following article Still playing planning poker? Stop gambling with your project budget. – About Agility, which addresses the question of estimating an agile project.  In it Robert Neri describes an approach to using user stories to estimate the project.  Admittedly a user story is too fine-grained to be useful in a sales situation.  We need something a bit bigger, but not too big.

What does this look like?
Try breaking your offering into smaller lego sized chunks of value.  Instead of selling “asset management” try selling solutions to individual problems within this domain.  The trick is finding a level of usefulness that we can sell.  I can’t sell a login screen, but I can sell a “remaining life monitor”.  I can’t take the risk of selling a “control panel”, but I can handle selling a “panel shell” and some “snap-ins”.

In the same way that a user story breaks functionality into something we can build, we need a way to break a project into valuable chunks we can sell.  Right now I am working with a concept I call “critical paths to value”.  Instead of proposing to build the entire offshore rig, I’ll start by proposing we build a boat that could hold a drilling platform.  A floating platform is the first critical path to value.  The next would be the drilling rig, followed by a control room, and next maybe a transfer point for offloading oil.

So where are the critical paths to value in my own application?  I am working on it.  I’ll post an update as things unfold.