Starting (or Restarting) a Site Improvement Journey

By Pablo Cabezas

Starting a site-based measurement and improvement journey can be overwhelming. Although many IPA clients have well-developed organizations with dedicated groups that drive and measure continuous site improvement initiatives, this is not the case for many companies in the industry. Each year, IPA hosts the Industry Benchmarking Consortium (IBC), which measures the performance of both large and small projects. Many small projects are managed at sites that are just starting their improvement journey, and their outcomes are worse than those of companies that have been systematically developing and routinely benchmarking their improvement efforts.

Graph showing Cost effectiveness

As shown above, first-time benchmarking sites (those starting site improvement initiatives or returning after many years) have an average cost that is about 20 percent higher than the industry average. Sites that have measured their improvement initiatives two or three times are closer to the industry average and, importantly, are more predictable; these sites have project systems that are more competitive and more in control.

There are many reasons for project systems to be out of control, but common root causes include a lack of resources relative to the number of projects in the portfolio, a focus on production that deprioritizes the projects group and its processes, or simply the absence of an established way to define and develop projects. For systems starting the site improvement journey, a common question arises: where do we start?

Identify Strengths and Weaknesses, and Then Prioritize

Most companies decide to start site improvement efforts because of a known performance gap (e.g., projects are not predictable). A common mistake for these companies is trying to fix all or multiple issues at the same time. This typically does not end well: the lack of focus produces improvement efforts that are too broad to implement successfully and are quickly abandoned. A key element in understanding the system's weaknesses (gaps) is assessing the project drivers.

Graph showing IPA framework: Elements of capital effectiveness

As the IPA Framework outlines, there is a natural flow to improvements. If project objectives are unclear, it is hard for a team to efficiently complete robust Front-End Loading because the team recycles with the business on what the project is supposed to accomplish. If objectives are clear but the project team is missing key functions, the project struggles with missing inputs and late changes occur. If the objectives are clear and the team is integrated, robust Front-End Loading can occur. Thus, sites should develop an understanding of their current drivers and correct issues in sequence.

Once the gaps are identified, they must be prioritized (again, do not try to cover everything at the same time), drawing on the organization's strengths and an understanding of what drives system results.

What Causes Poor Site Performance? Three Common Scenarios

Many factors can contribute to poor site performance, but we focus here on three common scenarios:

  • Scenario 1: There is no established way to define and develop projects, so everything depends on who fills the project manager role.
  • Scenario 2: The site has a process but lacks the resources to support it.
  • Scenario 3: The site has processes and resources but has not yet identified the driver gaps.

Let’s start with some basics. To improve, a system needs to measure, and to measure there needs to be some sort of standardization (otherwise every project could be measuring different things). In Scenario 1, there is no process to define and develop projects. We find this scenario quite common outside of the major refining, chemical, and mining companies. These systems are generally not ready for a fully deployed FEL process (including strict gates or decision points with support from assurance groups), but they can implement a checkpoint to make sure objectives are understood and another to ensure key deliverables have been defined. Most systems without a process also lack strong governance. Adding even basic checkpoints, of course, requires strong business support; in its absence, some companies have implemented pilot programs in which this standardization is applied to a handful of projects whose measured results are compared against the rest of the portfolio. This can be a starting point to show some quick wins and gain the management support needed to add governance.

Scenario 2 is one of the most common. These sites have a version of a process and governance (typically weak), but most of the project positions are covered by a few project team members. In this scenario, everyone wears multiple hats; for example, the project manager might also fill the roles of construction manager, estimator, scheduler, and controller. In this case, a key step is to understand the performance gaps (e.g., cost predictability, schedule slip, frequent scope or design changes) and assign specific resources to close them. Focus on the most problematic issues and seek additional support from resources elsewhere in the company or (ideally) add new resources to cover specific functions.

In Scenario 3, the system needs a root-cause analysis to identify the main causes of project failure and success. Once this assessment is complete, improvement efforts can be prioritized in alignment with available resources, and management support can be secured to drive the case for change. Although this is a better position to be in than the first two scenarios, an adequate prioritization process and achievable targets are key to maintaining momentum. Identifying a few quick wins to show results can help increase buy-in from members of the project system.

In most cases, out-of-control project systems can be linked to weak project drivers. Project teams tend to protect themselves by adding more contingency or reducing checkpoints so risks are less visible, which can hinder decision making. But it is also very common to see project outcomes that are not in control because the system's elements are not in place, so project teams are defining and executing projects without a set of rules, clear requirements, and support.

Flowchart of IPA Project System Excellence Model (PSEM)

Regardless of the scenario, project systems that are starting their improvement journey should start with the basics: align objectives (e.g., a clear business case), ensure project teams include all key roles, and develop sound definition (e.g., risk assessments, execution plans, cost estimates, and schedules). Once these elements are in place and applied consistently, the system can expand its focus to other practices.

How IPA Can Help

IPA can be a partner in this journey. IPA project assessments are strong tools that project systems use to quantitatively demonstrate the business benefits of these Best Practices. We can support the development of fit-for-purpose work processes and project system designs, compare current staffing levels against industry to identify gaps and help you prioritize key resources, and support measurement efforts that identify and prioritize the gaps and strengths within a project system.
