What is pipeline integrity management, really?

August 29, 2017

A point of confusion…

Recently I had a conversation that went badly.  Examining it in hindsight, it is clear that I was on a completely different wavelength regarding the definition of integrity management.  This is not the first time that the definition of integrity management has led me into a confusing conversation.

The confusion usually comes when I am speaking to an individual from the operations side of the business.  More specifically, when I am speaking with a vendor who services the operations side of the business.

It’s not what you know…

I think Figure 2.2 from Mohitpour et al. in Pipeline Operation and Maintenance – A Practical Approach (ASME)[ii] (crudely screen-captured from the Google Books preview) highlights the source of confusion:

Generic org chart for pipeline operating company.

Pipeline Control, Operations and everybody else

This chart shows a typical organizational model for a pipeline company.  My experience confirms that this structure is common.  All management functions have been combined into the single box at the top of the structure.  Below this box are the four primary concerns of operating a pipeline – operations, pipeline control, technical support, and corporate support.

The entire branch on the left side of this chart is the operations division.  Operations is a very visible concern for the pipeline company.  In contrast, the pipeline integrity group is merely a box in the technical support group.  Because this group is small, typically only a handful of engineers and technicians, many vendors do not encounter them.

It is not surprising that vendors who do not encounter IM would have a skewed perspective of its definition.  It is to be expected that they would be unaware of its concerns.  To these guys IM is about running pigs and doing DA, because this is where their customers’ concerns are.  “Their customers” being the operations group, not the integrity management group.

Convergent definitions…

Mohitpour et al. [i] say that the integrity management and asset management functions of pipeline maintenance are often part of the technical support functional area, which is “typically located in the head office of the pipeline company.”  This “support group carries out integrity management functions.”

Again, drawing from the book, the authors list the concerns of this group as:

  • Pipeline inspection
  • Pipe replacement
  • Establishing an overall maintenance strategy
  • Developing a maintenance program
  • Managing computerized maintenance management systems
  • Measuring performance of maintenance activities

This explanation seems consistent with the PHMSA definition of integrity management.  PHMSA prescribes a program consisting of regular inspections, risk assessments, and mitigation activities.  While PHMSA’s primary concern is minimizing the pipeline’s impact on people and the environment, the operator shares these concerns plus the concern of maximizing the pipeline’s useful life.  In either case, the activities described in the IM rules are very much aligned with the activities described by Mohitpour.

Both perspectives describe an integrity manager who is concerned about data and process. These are not the boots-on-the-ground guys.  They work in an office.  They ask the questions “what” and “where”.  They are less concerned about “who” and “when”.

My definition, my ambition…

So far, this is consistent with my definition of integrity management and with my understanding of the concerns of the integrity manager.  Not to dis the pigs, but they are not what is on my mind when I write about integrity management.

Definition: Pipeline integrity management is a continuous sequence of engineering analyses and subsequent assessments which combine to maximize the useful life of each individual pipeline asset.  This is very much about data.  It is very much about keeping a record of each analysis, assessment, and decision.  It is being able to support a decision when audited.

Ambition: The vision of the integrity manager is to orchestrate the IM process.  To see data flow smoothly from one step to the next.  To be able to visualize the past, present, and future of maintenance on the pipe.  To know who and why and when.  To initiate tasks across departments.  To have data that goes in clean and stays that way.  To continuously improve IM processes and to be able to demonstrate this improvement.  To do more with less.

I will go on later.  For now, I have clarified the definition of integrity management and have identified the core concerns of integrity managers.  In later installments I’ll explore these concerns in more detail.

Until next time,

-Craig


[i] Sec 2.4.3 p 39; Pipeline Operation and Maintenance – A Practical Approach (ASME); Mo Mohitpour, Jason Szabo, Thomas Van Hardeveld; (C) 2005 ASME, NY NY 10016; Link to cover on google-books.

[ii] fig 2.2 p 38; Pipeline Operation and Maintenance – A Practical Approach (ASME); Mo Mohitpour, Jason Szabo, Thomas Van Hardeveld; (C) 2005 ASME, NY NY 10016; Link to cover on google-books.

A Dashboard for Pipeline Integrity

December 20, 2010

A dashboard for pipeline integrity to aid in compliance with US rules for integrity management

“I want to log on to my computer and be told the status of the system. Today, I have to ask a specific engineer who looks after the area I want to know about, and he says he will get back to me. Two days later I get an answer, usually with caveats because he doesn’t have all the up-to-date information.”

anonymous pipeline manager

Sections

  • Problem: a need for situational awareness
  • Integrated data for situational awareness and process control
  • A pipeline information management system (PIMS)
  • Building an integrity dashboard for domestic use

 

Problem: a need for situational awareness

On March 31, 2009 the PODS organization hosted a meeting in Sugar Land, Texas called Mapping Our Pipeline Future.  Seventeen pipeline operating companies participated.  The intent of this meeting was to identify and prioritize operational needs that are not being met by the PODS committee or by vendors.

At this meeting four pipeline operators made short presentations describing their integration efforts to date.  Afterwards the meeting participants were divided into five groups for general discussion followed by specific discussion groups after lunch.

A prevalent thread at this meeting concerned leveraging integrated data.  Some of the wish list items generated along this vein include:

  • “Improved situational awareness and decision making” – Anadarko Midstream, operator presentation
  • “Data should be made to work for the [user] through more empowered processes” – general discussion groups, room 2
  • “Missing is a way to tie/connect business processes to the tasks” – specific discussion groups, key issues facing operators

In general these requests to leverage integrated data fell into one of two categories, requests for improved visibility and requests for better processes.  These two categories can be summarized as follows:

  1. Situational awareness – the need to know what is happening, what has happened, or what will happen
  2. Better processes – the need to orchestrate individual tasks once situational awareness has been achieved

This paper will present a lightweight approach for building pipeline dashboards for key players in the integrity process.  This approach leverages your data, regardless of where it resides, to achieve situational awareness.  This situational awareness in turn allows for better processes.

This approach is independent of your integration platform, PODS or other.  It leverages existing investments in IT infrastructure and gets stronger with each additional investment.  What is needed to make this solution available to the industry is a set of vendor-neutral standards describing interfaces for data sensors and process actuators.

Integrated data for situational awareness and process control

The illustration below was taken from one of the operator presentations at the PODS Mapping Our Pipeline Future meeting.

This image serves to illustrate the hope that many operators have placed in integrated data.  Integrated data is the hub around which the various departments involved in integrity management interact.  With PODS in place, everything should go smoothly.

Ironically, integrated data can actually make situational awareness and control worse, not better.  The need for more people and more procedures means that there are more opportunities for things to go wrong.  Critical conditions can hide within a flood of “normal” data.  Instead of seeing an improvement, many operators are finding that their data is bad, their processes are broken, and integrated data is expensive to maintain.

Integrated data alone can never provide the awareness and control that is needed.  PODS and other data standards simply describe how to store and exchange information.  Data standards are concerned with data, not people or processes.  They do not describe how data is gathered, verified, and inserted into the database.  They only describe where to put it and what format to use.

While integrated data is necessary for better situational awareness and control, integrated data is just a first step.  The illustration above would be more accurate if the central puzzle piece showed a software application instead of a database like PODS.

Software which orchestrates the data, people, processes, and events of integrity management is the next step.  A dashboard which lets you see a summary of the system and to drill into and examine details from a single location would be a big help.  If this dashboard enabled you to respond to this summary information and to alter the process you would have the situational awareness and process control you seek.

A pipeline information management system (PIMS)

Both ASME B31.8S and the integrity rules hint at a global information system which integrates the people, processes, and data associated with maintaining a pipeline.  The intent of this theoretical system is to provide the integrity manager with the situational awareness and control required to orchestrate the integrity process.

These software systems are called pipeline information management systems (PIMS). In a PIMS the integrity manager…

  • has global visibility into past, present, and future activity on every asset.
  • is able to quickly define, deploy, and modify procedures for compliance.
  • sets up alerts which trigger action when warranted.

On its face a PIMS is a dashboard application for the integrity process. In the business world, “the dashboard is the CEO’s killer app, making the gritty details of a business that are often buried deep within a large organization accessible at a glance to senior executives. So powerful are the programs that they’re beginning to change the nature of management, from an intuitive art into more of a science.”

These same benefits apply to asset management and to integrity compliance.

A dashboard application is a console populated with gauges and controls chosen from a library of prebuilt tools.  These tools are added to individual dashboards and configured based on individual responsibility.  We’ll call these tools “widgets”.

Each widget needs software to make it work.  To build a widget, a software developer combines the capabilities of software “sensors” and “actuators”.  Sensors monitor data, dates, and events.  Actuators enable changes to a procedure or to data.  This simple concept can provide tremendous value for relatively low cost.  As you improve your underlying IT infrastructure you are able to build better and more useful widgets.
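To make the sensor/actuator/widget idea concrete, here is a minimal sketch in Python.  Every name in it (Sensor, Actuator, Widget, the overdue-assessments example) is hypothetical illustration, not any vendor’s actual API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Sensor:
    """Polls a data source and reports a current reading."""
    name: str
    read: Callable[[], float]

@dataclass
class Actuator:
    """Performs an action in response to a reading (e.g., opens a work order)."""
    name: str
    act: Callable[[float], None]

@dataclass
class Widget:
    """A dashboard gauge: one sensor, a threshold, and the actuators to fire."""
    sensor: Sensor
    threshold: float
    actuators: List[Actuator]

    def refresh(self) -> float:
        value = self.sensor.read()
        if value > self.threshold:
            for a in self.actuators:
                a.act(value)
        return value

# Example: watch the count of overdue assessments and raise an alert.
overdue = Sensor("overdue_assessments", read=lambda: 3.0)
alert = Actuator("email_alert", act=lambda v: print(f"ALERT: {v:.0f} overdue"))
widget = Widget(sensor=overdue, threshold=0.0, actuators=[alert])
widget.refresh()  # fires the alert because 3 > 0
```

The value is in the pairing: an alert actuator written once can sit behind many different sensors, and a sensor written once can feed many widgets.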

A dashboard doesn’t replace existing IT investments, it leverages them.  The better your underlying IT infrastructure the stronger your dashboard system can be.  If your data is integrated, you can create sensors which monitor this central location.  If you install messaging between your asset data and your work management system, you can take advantage of this new infrastructure to track, or even alter work once it has been scheduled.  If you install a workflow system, you can use your dashboard as an access point to adjust individual task steps as needed.

There is a non-linear return on your investment.  Invest in one widget and you realize a benefit of 1.  Two widgets give you a benefit of 4.  Invest in exactly the right number, and the benefits are immeasurable.  By putting the right information into the right hands a dashboard can have a tremendous impact on reducing operating costs.

Building an integrity dashboard for domestic use

While dashboards for pipeline integrity exist, they are usually custom affairs.  The few commercial solutions available are targeted at international operators who are not affected by the PHMSA rules.  What is needed is an affordable, off the shelf system that has been designed specifically for compliance with the US integrity rules.  What this looks like is a set of dashboard widgets which are generally useful to US pipeline operators and which can be deployed into common IT environments.

Off the shelf solutions are available from other industries and can be adapted to meet our needs. Since the 1970s banks and other information-intensive industries have been integrating their computer systems and processes in a quest for better situational awareness and process control.  They have tried and studied workflow systems, business process re-engineering, and dozens of other techniques.  It is not a simple problem, but recent efforts have been very successful.

Gartner Research says that the most recent round of developments, business process management systems (BPMS), “gives companies visibility into processes that are key to cost management.  BPM is a lifeline in this troubled economy.  It helps companies find and avoid hidden costs – to keep companies in business.”

While this sounds like a major rework of your IT infrastructure, it isn’t.  BPMS encompasses most of the common solutions that get your IT folks excited – integrated data, workflow management, SOA, web 2.0 and many, many more.  The better your IT infrastructure, the better your dashboard can be.

What your IT team needs is some guidance regarding where to make their next major investment.  To do this it would be helpful if you could tell them the kinds of widgets that would provide you with the most benefit in terms of your own situational awareness and control.  Some examples to consider:

  • on-the-fly alignment sheets that follow a map view
  • live flowcharts which enable you to intervene in tasks like data scrubbing
  • a project builder/selector or a project budgeter/scheduler

Sure you need lots of help, but where can you get the most help for the least money? To explore this problem, I would like to set up some formal research. After some initial comparison research between pipeline processes and business processes we would develop a series of test software installations with the objective of determining:

  • a protocol for measuring the benefits of dashboard widgets
  • a series of benchmark measurements for return on investment
  • a baseline set of widgets arranged from largest to smallest benefit and from most cost to least cost

 

In search of efficiency for pipeline integrity management…

December 23, 2009

A different approach to IM…

I think that a dashboard is needed.

Think about an airplane cockpit.  In a mature aircraft, like a 737, there are tons of instruments, controls, and warnings.  In contrast, the simplest planes may only have an airspeed indicator, altimeter, and a compass.  These three basic instruments are technically very simple to build, but they provide tremendous value to the pilot.

Airspeed is measured by a pitot tube, which is essentially a hollow tube that is inserted into the air rushing around the plane.  Altitude can be measured with a barometer.  Direction with a magnet on a bearing.  It doesn’t cost a lot to build these instruments, but it could be argued that they are the three most useful instruments on the plane. [cite – conversation with Jerry H on Dec 21]
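For the curious, the arithmetic behind two of these instruments fits in a few lines.  This is a back-of-the-envelope sketch using standard-atmosphere constants, not anything a pilot would actually fly with:

```python
import math

# Standard-atmosphere constants at sea level; illustrative only.
RHO = 1.225        # kg/m^3, air density
P0 = 101_325.0     # Pa, standard pressure
T0 = 288.15        # K, standard temperature
LAPSE = 0.0065     # K/m, temperature lapse rate in the troposphere

def airspeed_from_pitot(dynamic_pressure_pa: float) -> float:
    """Pitot tube: dynamic pressure q = (1/2) * rho * v^2, so v = sqrt(2q/rho)."""
    return math.sqrt(2.0 * dynamic_pressure_pa / RHO)

def altitude_from_barometer(static_pressure_pa: float) -> float:
    """Barometric formula for the troposphere, inverted for altitude (m)."""
    return (T0 / LAPSE) * (1.0 - (static_pressure_pa / P0) ** 0.1903)

print(round(airspeed_from_pitot(613.0), 1))      # about 31.6 m/s
print(round(altitude_from_barometer(89_875.0)))  # about 1000 m
```

The compass needs no arithmetic at all, which rather proves the point: the most useful instruments can be the cheapest ones.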

Imagine trying to fly a plane without these instruments.  Pilots would not know how fast they are going, how far they are from the ground, or which direction they are headed.  It can be done, but it is stressful.

Integrity managers are stressed.  Could it be because they don’t have the basic instruments they need to navigate their integrity programs?

I believe this is the case.  There is a lot of energy being spent trying to make the integrity manager’s life easier.  I know of operators who are doing additional data integration, or installing workflow engines, to try to give the integrity manager a bit more visibility and control.  The PODS committee is feeling this need in the form of requests for new functionality. [cite PODS march meeting]

Vendors have picked up on this.  I know of four US companies who specifically address the integrity manager’s concerns.  There are countless other vendors who claim to make a portion of the integrity manager’s life easier.

It is my observation that much of the energy being spent on solving the integrity manager’s visibility and control problems is being wasted.

Some efforts, like additional data integration, are too narrowly scoped.  These projects are spending too much money building messaging systems that go into unneeded detail.  Other efforts, like full-blown BPM systems are too broadly scoped.  These efforts are trying to build the complex control systems of the 737 without taking the time to identify the essential instruments.

I think we need a dashboard.[cite – business week]  I think we need to identify the critical instruments the integrity manager needs and build them.  We need to identify the critical controls they need and build those too.  Let’s get these instruments into the hands of those who need them.

Who are the integrity managers?

In every company that operates a pipeline, there is a small group of individuals who are ultimately responsible for that pipeline’s longevity.  This group has the dual mission of reducing the cost of ownership and of reducing exposure to integrity-based fines.  We’ll call this group the integrity managers. [cite blog entry]

The organizational structure and composition of this group may differ, but ultimately these individuals are responsible for the following:

1: Creating a program (maintenance, regulatory)

  • Establishing an overall strategy
  • Developing a program to support that strategy
  • Creating procedures to support this program

2: Operating that program

  • Measuring the effectiveness of those procedures
  • Modifying the program to improve its effectiveness
  • Responding to unusual conditions

[Adapted from ASME Pipeline Operations; cite]

Programs for compliance

Historically rules concerning pipeline maintenance are operational in nature:

Example: periodic inspection of rectifiers [check]

When a new operational rule appears, the IM group creates a program for compliance.  Once the program is deployed, they can demonstrate compliance by comparing a schedule of required tasks against the work that was done.  The schedule is simple: if an asset is of type A, then the tasks for type A must be performed within a prescribed interval.

Example: check valves have to be operated annually [check]
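The type-driven schedule described above can be sketched in a few lines of Python.  The intervals below are placeholders for illustration, not the actual rule text:

```python
from datetime import date, timedelta

# Hypothetical intervals by asset type; placeholders, not the regulations.
INTERVALS = {
    "rectifier": timedelta(days=75),
    "valve": timedelta(days=365),
}

def is_compliant(asset_type: str, last_done: date, today: date) -> bool:
    """An asset of type A is compliant if its type-A task ran within the interval."""
    return today - last_done <= INTERVALS[asset_type]

# Example: a valve operated ten months ago is still within its annual window.
print(is_compliant("valve", date(2009, 2, 1), date(2009, 12, 1)))  # True
```

Demonstrating compliance is then just a sweep of this check over every asset in the system.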

At first glance, the integrity rules appear to be operational rules.  “Pig the line every 7 years.  If you can’t pig it then use DA.  Dig up and fix sections that are really degraded.”  If this were truly the case, the IM team’s main concern would be if the pipe were piggable or not.  Compliance would be a simple matter of running assessments according to a calendar schedule.

Of course, things are never simple.  The schedule of the IM rules is not driven by asset type alone; it is also driven by the environment around the pipe.  In addition to piggability, you must consider nine threats, three consequences, assessment methods, overall risk, mitigation activities, and more.  These concerns apply cyclically over the operational life of the pipe.

Because of the need to consider the environment around the pipe, the rules have significant data handling and analytical components.  They require that data be integrated to support a risk model.  You must respond to discoveries within your risk model, not just your assessments.  You must react to changes within the data.  Instead of occasionally monitoring the rule, the IM team is actively participating in it.
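To illustrate the difference, here is a toy sketch of a data-driven trigger.  The risk model and the interval thresholds are entirely made up; the point is only that a change in the data, not the calendar, is what moves the date:

```python
# Toy relative-risk model; both inputs on a 0-1 scale. Illustrative only.
def relative_risk(threat_likelihood: float, consequence: float) -> float:
    return threat_likelihood * consequence

def reassessment_due(risk: float, years_since_assessment: float) -> bool:
    """Higher-risk segments come due sooner (thresholds invented)."""
    max_interval = 7.0 if risk < 0.5 else 3.0   # years
    return years_since_assessment >= max_interval

# A consequence change (say, a class-location change) pulls the date forward.
before = relative_risk(0.7, 0.5)          # 0.35 -> on a 7-year clock
after = relative_risk(0.7, 0.9)           # 0.63 -> now on a 3-year clock
print(reassessment_due(before, 4.0))      # False
print(reassessment_due(after, 4.0))       # True: same pipe, new data, now due
```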

Problem

Examine the responsibilities of the IM group again.  I have broken them into two phases – creating a program to support the IM rules and then operating that program. The IM team has been creating a program for IM compliance over the past few years.

As pipelines go through their second IMP assessments, they move into the operational phase.  Recall the elements associated with operating a process:

  • measuring program effectiveness
  • modifying the program to improve effectiveness
  • responding to unusual conditions

Each of these activities requires that the IM group look into the program and extract information from it.  Based on this information they will take action – kick off a dig, a B31G analysis, and so on.  If something unusual happens, they will need to take action once to correct the problem.  If it keeps happening, they will need to alter the program so that it stops.
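As an example of the kind of analysis that gets kicked off, here is a simplified sketch of the original B31G Level-1 screen for a corroded area.  It is illustrative only; real decisions belong to the standard itself, and Modified B31G and RSTRENG differ in the details:

```python
import math

# Sketch of the original ASME B31G Level-1 screen (parabolic defect profile,
# 2/3 area factor). Simplified for illustration; the standard governs.

def b31g_failure_pressure(smys_psi: float, od_in: float, wall_in: float,
                          depth_in: float, length_in: float) -> float:
    """Estimated failure pressure (psi) of a corroded area, original B31G."""
    dt = depth_in / wall_in                     # defect depth, fraction of wall
    z = length_in ** 2 / (od_in * wall_in)      # dimensionless defect length
    flow_stress = 1.1 * smys_psi                # original B31G flow stress
    if z <= 20.0:                               # "short" defect
        m = math.sqrt(1.0 + 0.8 * z)            # Folias bulging factor
        sf = flow_stress * (1.0 - (2.0 / 3.0) * dt) / (1.0 - (2.0 / 3.0) * dt / m)
    else:                                       # "long" defect: no length credit
        sf = flow_stress * (1.0 - dt)
    return 2.0 * sf * wall_in / od_in           # Barlow: hoop stress -> pressure

# Example: X52 pipe, 24" OD, 0.375" wall, a 0.15"-deep by 3"-long defect.
p_fail = b31g_failure_pressure(52_000, 24.0, 0.375, 0.15, 3.0)
print(round(p_fail))   # roughly 1636 psi for this defect
```

In a well-instrumented program, an anomaly from an ILI run would flow straight into a check like this, and the result into the dig list, with every step recorded.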

Hypothesis: The effectiveness of the integrity team is based on their ability to see information and respond to it.

To test this hypothesis one only has to look to other industries.  There is a wealth of case studies addressing this hypothesis in the field of business process management.  [cite needed. gartner?  wiki?]  This field has evolved since the first workflow engines in the 1970s.  Businesses have continued to invest because they have seen results.  Measurable improvements in efficiency are very real.  So much so that Gartner recently said that you shouldn’t run your business without it. [cite]

I am going to assume that you subscribe to my hypothesis that your IM team’s ability to do their job efficiently is based on their ability to monitor their programs and their ability to intervene and adjust the process.  Let us then examine your IM team’s capabilities in terms of visibility and control over their programs.  This will give us the problem we are trying to solve.

In the current environment the view is anything but crystal clear.  The view of the IM program is fragmented by task.  What does your team need to do to create a “vertical slice” of data across a section of pipe?  Is this something they can do effortlessly?  Now, what if they wanted to examine it historically?  Compare it to other slices?
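For what it is worth, here is what a “vertical slice” could look like over linearly referenced event data.  The table names, stations, and values are invented for illustration:

```python
# Hypothetical event tables, linearly referenced by stationing (miles).
events = {
    "coating":   [(0.0, 12.5, "FBE"), (12.5, 30.0, "coal tar")],
    "class_loc": [(0.0, 8.0, "Class 1"), (8.0, 30.0, "Class 3")],
    "ili_2008":  [(14.2, 14.3, "metal loss 35%")],
}

def vertical_slice(begin_sta: float, end_sta: float) -> dict:
    """Everything known about the pipe between two stations, table by table."""
    out = {}
    for table, rows in events.items():
        hits = [value for (b, e, value) in rows if b < end_sta and e > begin_sta]
        if hits:
            out[table] = hits
    return out

print(vertical_slice(10.0, 15.0))
# {'coating': ['FBE', 'coal tar'], 'class_loc': ['Class 3'], 'ili_2008': ['metal loss 35%']}
```

The historical version is the same query with a date filter on each table; the hard part is having the tables integrated in the first place.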

Control for the IM team is no better.  Procedures that are written on paper, or in an electronic procedural library, don’t provide feedback regarding weaknesses.  Workflows that have been scripted into software tools provide feedback, but are difficult and expensive for the IM team to change.

Problem: What is the most effective way to improve the efficiency of the IM team?

Drivers:

  • Reduce cost of IM program compliance
  • Reduce exposure to IM related penalties and fines
  • Increase effective life of assets

What is available…

I know of four domestic vendors whose marketing speaks to the problem the integrity managers are facing.  Each vendor is taking a different approach:

  • Vendor 1: Business process management
  • Vendor 2: Workflow engine
  • Vendor 3: Data integration
  • Vendor 4: Decision support

Clearly each of these vendors has a different take on the problem.  Ultimately, they are each trying to help the integrity manager be more effective.

Why such different approaches?  I maintain that each addresses a part of the solution.  Business process management and the workflow engine address the control portion of the problem.  Data integration and decision support address the visibility issue.

As the space matures these vendors should converge to similar answers.  Are you willing to risk your cash to help them find the answer?

So, what about looking to tools available for business processes?  Can these tools be adapted for integrity management?  Maybe.

The challenge is that there is such a broad spectrum of possible solutions.  Additionally, the engineering processes behind an IM program differ enough from business processes that the differences deserve examination:

  1. We deal with linear assets that can be parsed into an infinite number of segments.  Contrast this with policies, accounts, and patients which remain relatively discrete through their lifetime.
  2. Geospatial data is relied on much more heavily in our industry.
  3. Our “customer” is the longevity of the pipe.
  4. Our processes require trips from the office to the field and back.
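Point 1 is the one that bites hardest in practice.  Here is a sketch of dynamic segmentation showing how overlaying just two attribute layers re-cuts the line at every boundary of either (layer names and stations are invented):

```python
# Dynamic segmentation sketch: each layer is [(begin, end, value), ...].
def dynamic_segments(*layers):
    """Merge event layers, splitting the line at every boundary of any layer."""
    cuts = sorted({p for layer in layers for (b, e, _) in layer for p in (b, e)})
    segments = []
    for b, e in zip(cuts, cuts[1:]):
        mid = (b + e) / 2.0
        values = tuple(
            next((v for (lb, le, v) in layer if lb <= mid < le), None)
            for layer in layers
        )
        segments.append((b, e, values))
    return segments

coating = [(0.0, 10.0, "FBE"), (10.0, 30.0, "coal tar")]
soils = [(0.0, 18.0, "clay"), (18.0, 30.0, "sand")]
for seg in dynamic_segments(coating, soils):
    print(seg)   # three segments from two 2-segment layers
```

Every new attribute layer multiplies the segment count, which is why "policies, accounts, and patients" techniques do not transfer directly.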

While I think that the software tools are out there to solve the IM problems of visibility and control, I think that it would be useful to take an academic approach to the problem.  What is the most direct path to IM efficiency?  I maintain that nobody knows.

A call for research…

I believe that by studying the successes and failures of business process management we can quickly narrow our scope to those engineering processes that share similar patterns with corresponding business processes.  With a narrowed scope, we can then concentrate on the elements that make engineering processes unique.

To explore these problems, I am trying to set up some formal research through the Colorado School of Mines.  My objective:

Given the vast canvas of software support for business processes, devise a value-based protocol for applying these tools to support pipeline integrity management.

If you know of an operator who would be receptive to the idea of being a laboratory, would you mind helping me connect with them?

Benefits:

  • Be the first in the industry with a real solution
  • Working software tools to keep and expand
  • Thought-leadership in the industry

It will require:

  • Operator participation
    • help identify test scenarios
    • help collect baseline metrics
    • help select and install commercial software packages
    • help customizing them
    • help running tests
    • help analyzing results
    • help designing subsequent experiments
  • Funding
    • research grant
    • time from you or your staff
    • some software & hardware
    • some programming

Maybe if we’re lucky we can get PHMSA to help out with this as well.

Stay nimble!

– Craig

BPM – The Adoption In The Financial Industry Versus Early Expectations

December 16, 2009

A very nice article describing the adoption of technology in the banking sector.  It suggests the road to nirvana is bumpy.  Very well written, though it cites no sources.

Bpm – The Adoption In The Financial Industry Versus Early Expectations

Categories: agility

What is pipeline integrity management, really?

December 10, 2009 Leave a comment

A point of confusion…

Recently I had a conversation that went badly.  Examining it in hindsight it is clear that I was on a completely different wavelength regarding the definition of integrity management.  This is not the first time that the definition of integrity management has caused me to have a confusing conversation.

The confusion usually comes when I am speaking to an individual from the operations side of the business.  Most specifically, when I am speaking with a vendor who services the operations side of the business.

It’s not what you know…

Figure 2.2 from Mohitpour et al in Pipeline Operation and Maintenance – A Practical Approach (ASME)[ii] (crudely screen captured from the Google books preview) I think highlights the source of confusion:

Generic org chart for pipeline operating company.

Pipeline Control, Operations and everybody else

This chart shows a typical organizational model for a pipeline company.  My experience confirms that this structure is common.  All management functions have been combined into the single box at the top of the structure.  Below this box are the four primary concerns of operating a pipeline – operations, pipeline control, technical support, and corporate support.

The entire branch on the left side of this chart is the operations division.  Operations is a very visible concern for the pipeline company.  In contrast, the pipeline integrity group is merely a box in the technical support group.  Because this group is small, typically only a handful of engineers and technicians, many vendors do not encounter them.

It is not surprising that vendors who do not encounter IM would have a skewed perspective of its definition.  It is to be expected that they would be unaware of its concerns.  To these guys IM is about running pigs and doing DA, because this is where their customers’ concerns are.  “Their customers” being the operations group, not the integrity management group.

Convergent definitions…

Mohitpour et al [i] say that the integrity management and asset management functions of pipeline maintenance are often part of the technical support functional area which is “typically located in the head office of the pipeline company.”  This “support group caries out integrity management functions.”

Again, drawing from the book, the authors list the concerns of this group as:

  • Pipeline inspection
  • Pipe replacement
  • Establishing an overall maintenance strategy
  • Developing a maintenance program
  • Managing computerized maintenance management systems
  • Measuring performance of maintenance activities

This explanation seems consistent with the PHMSA definition of integrity management.  PHMSA prescribes a program consisting of regular inspections, risk assessments, and mitigation activities.  While PHMSA’s primary concern is minimizing the pipeline’s impact on people and the environment, the operator shares these concerns plus the concern of maximizing the pipeline’s useful life.  In either case, the activities described in the IM rules are very much aligned with the activities described by Mohitpour.

Both perspectives describe an integrity manager who is concerned about data and process. These are not the boots-on-the-ground guys.  They work in an office.  They ask the questions “what” and “where”.  They are less concerned about “who” and “when”.

My definition, my ambition…

So far, this is consistent with my definition of integrity management and with my understanding of the concerns of the integrity manager.  Not to dis the pigs, but they are not what is on my mind when I write about integrity management.

Definition: Pipeline integrity management is a continuous sequence of engineering analyses and subsequent assessments which combine to maximize the useful life of each individual pipeline asset.  This is very much about data.  It is very much about keeping a record of each analysis, assessment, and decision.  It is being able to support a decision when audited.

Ambition: The vision of the integrity manager is to orchestrate the IM process.  To see data flow smoothly from one step to the next.  To be able to visualize the past, present, and future of maintenance on the pipe.  To know who and why and when.  To initiate tasks across departments.  To have data that goes in clean and stays that way.  To continuously improve IM processes and to be able to demonstrate this improvement.  To do more with less.

I will go on later.  For now, I have clarified the definition of integrity management and have identified the core concerns of integrity managers.  In later installments I’ll explore these concerns in more detail.

Until next time,

-Craig


[i] Sec. 2.4.3, p. 39; Pipeline Operation and Maintenance – A Practical Approach; Mo Mohitpour, Jason Szabo, Thomas Van Hardeveld; © 2005 ASME, New York, NY 10016.

[ii] Fig. 2.2, p. 38; Pipeline Operation and Maintenance – A Practical Approach; Mo Mohitpour, Jason Szabo, Thomas Van Hardeveld; © 2005 ASME, New York, NY 10016.

Log of the disambiguator… (volume 1)

December 4, 2009

Today the disambiguator started reading my blog.  He/she (I can’t really tell) grumbled something about “stupid software jargon” and then grabbed my laptop.  Below are its writings.  I am labeling this as volume 1 because I have a feeling it will be back…

from the lair of the disambiguator…


Craig is concerned with software things, so he uses many software terms.  To disambiguate something means to determine its intended meaning.  The word comes from linguistics and was adopted by computer science.  This document will be a place for me to clarify how he uses terms.

I like being called the disambiguator because it sounds complicated and obscure.  You can call me the dissa.  Ironically, I want to make things simple and clear.  I think it makes a nice contrast.

Reference: Disambiguator – Definition of Disambiguator at Dictionary.com

Client application

A software program that makes requests of another software program, which fulfills those requests.  Usually the other program runs on a remote server.  In terms of user experience, a client application provides the interface (GUI) a user interacts with.  In terms of networked applications, a client application is the one making the request.

Specific types: Fat client, thin client
Synonyms: graphical user interface (GUI), user interface (UI)
Compare vs: Server application
Used with: client/server application, 2/3/n-tiered application, user experience (UX)
Examples:

  • Apple iTunes Store uses a thin client application that runs in a web browser
  • When using a central Geodatabase, ArcView functions as a fat client for this data

Client/server application

“Client/server describes the relationship between two computer programs in which one program, the client, makes a service request from another program, the server, which fulfills the request. Although the client/server idea can be used by programs within a single computer, it is a more important idea in a network.” – What is client/server? – Definition from Whatis.com

I have heard many people describe client/server as a “normal application” (like Excel or Visio) that pulls information from a server.  This description implies a fat client application, where the local software does most of the work and the server is only there to fulfill requests.  Contrast this with a thin client application, where the server does most of the work and the local software merely directs the server’s actions.

When I use the term, I will be using the broader sense: any arrangement in which one software program makes a request of another program, which fulfills that request.

Specific types: fat client, thin client, networking
Synonyms: fat client application, 2-tier application, peer-to-peer application (rare to hear thin client called C/S)
Compare vs: thin client application, 3/n-tiered application, web-application
Used with: client application, server application, database application
Reference: Client-server – Wikipedia, the free encyclopedia
Reference: What is client/server? – Definition from Whatis.com
Examples:

  • Your online banking application is client/server.  A thin client application runs in your web browser and accesses a server application across the internet.  It is the server application that does the actual work, as directed by the client.
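To pin the broader sense down, here is a minimal sketch in Python: one process plays both roles over plain TCP sockets.  The toy greeting protocol and all the names are invented for illustration; this is a sketch of the request/fulfill relationship, not any real product.

```python
# A complete client/server exchange in one process, using plain TCP sockets.
import socket
import threading

def server(listener: socket.socket) -> None:
    conn, _ = listener.accept()                       # wait for the client
    with conn:
        request = conn.recv(1024).decode()            # the client's request
        conn.sendall(f"Hello, {request}!".encode())   # fulfill it

def main() -> str:
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
    listener.listen(1)
    threading.Thread(target=server, args=(listener,), daemon=True).start()

    # The client side: make a request, read the response.
    with socket.create_connection(listener.getsockname()) as client:
        client.sendall(b"world")
        reply = client.recv(1024).decode()
    listener.close()
    return reply

print(main())  # -> Hello, world!
```

Whether this counts as “fat” or “thin” depends on where the work happens; here the server does the work (building the greeting) and the client merely directs it, which is the thin-client shape.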

Until next time,

the dissa

Selling an agile approach…

December 1, 2009

Setup:

I came across this question at Discussion: Agile | LinkedIn (account required).  In the original posting Myroslava Trotsyuk referenced an article (How Agile Practices Reduce Requirements Risks) by Ellen Gottesdiener which describes 6 project risks that are mitigated by agile practices.  In response Lanis Brett Ossman raised the question:

Why is it difficult to sell an agile approach when delivering software solutions?

In explaining the question, Lanis Brett created his own list of 6 objections to adopting an agile approach.  While he labeled them as “risks”, I have encountered these same objections when trying to answer the question “why agile?”

– Risk 1: For the “major” cost, customers expect developers to figure out the details. They expect developers to know their business, for the money they are paying.

– Risk 2: Added cost for paying employees to review progress rather than do their regular job. Again, why can’t the developer nail down the details up front?

– Risk 3: Developers typically eat the added cost of poor impact analysis. Not the customer’s fault.

– Risk 4: Many developers just allow scope creep, so the customer does it expecting no impact to them. Creep is often small changes, but they add up fast.

– Risk 5: Defective requirements pretty much involve the same customer issues as Risk 1.

– Risk 6: Why should I, the customer, pay for your learning curve implementing new tools and techniques?

Response:
Nice list of risks/objections.  The original article was compelling, but it doesn’t address objections beyond project risk.  When the customer comes back with “purchasing insists we know all the costs up front” or “it’s the rules, sorry,” you often have no choice but to comply – taking the cost-overrun bullet yourself instead of letting it hit the customer.  At that point you have to overcome the purchasing or rules objection; there is no point in pleading project risk anymore.

As the original article stated, software is different from other types of products.  In the physical world, project estimation and sales are relatively straightforward.  Not so in the software world, where our deliverables are unbounded.  We are not even limited by our materials or tools.  We can create tools as needed.  If we need a new fundamental data structure, or other “materials,” we can create those too.

Until it is fully delivered, software only exists in the imagination of those requesting it and those building it.  There is plenty of room for misunderstanding and error.  This is the problem Agility was created to solve.  For it to work, though, you need to sell something that fundamentally looks agile.

Are you selling an agile project or a traditional one in which you hope to apply agile techniques?

Case study:
If I am an oil company ordering a drilling platform, I expect to specify its dimensions, drilling depth, and other requirements for the finished unit.  I expect the contractor to take my finished specifications and, working from a wireframe model, come up with a list of materials and standard times for attaching those materials.  The sum of the parts rolls up to a final cost estimate.  This is monolithic.

Similarly, as an oil company asking for “asset management software” I expect the vendor to pull pre-built elements off the shelf and attach them, like welding, to produce a final product that meets my specifications.  When I hear that I need to participate in discussions about every rivet, bolt, and plate it doesn’t make sense.  Why can’t you just pull 5 compressors and some 2″ steel plate off the shelf and put something together?

Points of failure:
Physical world –
Despite its enormous complexity, a drilling platform is made up of only a few hundred basic elements – steel plate, bolts, pumps, steel tube.  During construction these basic parts are used repeatedly.  By modifying, or “fabricating,” an element into the proper shape, a builder can attach it to an adjacent part.  The complexity originates from the volume of repeating elements.

Software world –
Software is different.  If a basic element is repeated, it is a sign of poor programming.  If a software program has 100 different parts, every one of them should be unique and independently useful in shaping the final result.  Our raw materials – software classes – each need to function flawlessly: receive information, act on it, and pass it on to another class.  If a single element malfunctions, it can affect the entire application.  Complexity arises not only from the number of assembled parts, but also from the fact that each part is individually unique.
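A tiny illustration of that receive-act-pass-on chain of classes.  The classes and the unit-conversion task are hypothetical, chosen only to show that each stage is unique yet the whole chain fails if any one stage does:

```python
class Parser:
    """Receives raw text, acts on it, passes a number along."""
    def process(self, raw: str) -> float:
        return float(raw.strip())

class Converter:
    """Receives a value in inches, passes it on in millimetres."""
    def process(self, inches: float) -> float:
        return inches * 25.4

class Formatter:
    """Receives a number, produces the final result."""
    def process(self, mm: float) -> str:
        return f"{mm:.1f} mm"

def run_pipeline(raw: str) -> str:
    # Each class is unique and independently useful; a fault in any
    # single one of them breaks the entire chain.
    value = raw
    for stage in (Parser(), Converter(), Formatter()):
        value = stage.process(value)
    return value

print(run_pipeline(" 2.0 "))  # -> 50.8 mm
```

Three stages, zero repetition: unlike the drilling platform’s thousands of identical bolts, no part here could be swapped for a copy of another.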

Bringing it back around:
Our customers are accustomed to purchasing monolithic items in which the number of fundamental parts is small.  In theory, if we could break our offering into elements, each no more complicated than a drilling platform, we would reduce our software risk to the comparatively low risk of building a drilling platform.

To achieve this we need to make our offerings (what we sell) small enough that the functionality is represented by a few hundred classes at most.  By keeping what we sell small, we lower everyone’s risk – ours and our customers’.  We can then build up complex solutions from these smaller elements.

I came across the following article, Still playing planning poker? Stop gambling with your project budget. – About Agility, which addresses the question of estimating an agile project.  In it, Robert Neri describes an approach to estimating a project with user stories.  Admittedly, a user story is too fine-grained to be useful in a sales situation.  We need something a bit bigger, but not too big.

What does this look like?
Try breaking your offering into smaller, Lego-sized chunks of value.  Instead of selling “asset management,” try selling solutions to individual problems within that domain.  The trick is finding a level of usefulness we can sell.  I can’t sell a login screen, but I can sell a “remaining life monitor.”  I can’t take the risk of selling a “control panel,” but I can handle selling a “panel shell” and some “snap-ins.”

Just as a user story breaks functionality into something we can build, we need a way to break a project into valuable chunks we can sell.  Right now I am working with a concept I call “critical paths to value.”  Instead of proposing to build the entire offshore rig, I’ll start by proposing we build a boat that could hold a drilling platform.  A floating platform is the first critical path to value.  Next would be the drilling rig, followed by a control room, and then maybe a transfer point for offloading oil.

So where are the critical paths to value in my own application?  I am working on it.  I’ll post an update as things unfold.