W19_SJP_Cost & Time Trade-off Part 2

 

Issue Identification and Appraisal

Last week’s blog was the first part in a series of three, and put forward the development of the Total Contractor’s Construction Costs. Figure 1 below comes from the GPCCAR Module 08-7 and, as mentioned last week, is based on figure 15 of the US Department of Transportation, Federal Highway Administration’s document “Work Zone Road User Costs – Concepts and Applications”.

Figure 1 – Relationship between project cost and duration

This week’s blog is going to determine the costs behind the cumulative owner’s engineering, project overhead and lost opportunity cost curve (black line). Therefore, the problem statement for this week’s blog is, “How is the cumulative owner’s curve developed?”

Feasible Alternatives

As this is a three-blog posting, each one is examining a different curve on the chart:

  1. Total Contractor’s Construction Costs
  2. Cumulative Owner’s Engineering, Project Overhead and Lost Opportunity Costs
  3. Total Cumulative Project Costs

The first curve (Blog 18) had many alternatives; however, this week’s curve is straightforward, so there are no feasible alternatives. The same list of items would appear each time; the costs may differ, but the list remains static. The items are:

  • Owner’s Project Management Team (PMT) monthly costs
  • Owner’s Project Office monthly costs
  • Auxiliary costs; in this instance there is a cost for land purchases
  • Owner’s opportunity costs.

There is no fancy formula this time: just develop the cost estimate values for each item and determine the period and cumulative monthly costs from month 1 to the final month.

Develop the Outcomes for each 

The cost estimate has already been developed and the following figure 2 shows the build-up of figures.

Figure 2 – Development of Owners Overhead Costs & Opportunity Cost

The overhead costs come from the estimate, while the opportunity costs are derived from the perceived crude oil throughput (in barrels) with a production ramp-up: production month 01 averaging 50,000 bbl/day, month 02 100,000 bbl/day and month 03 onwards 200,000 bbl/day. These quantities are costed at the net revenue of a barrel of oil, $35.00 (basis: market value of $50.00 less all-in production costs of $15.00 = $35.00).
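The ramp-up calculation above can be sketched as follows. This is a hedged illustration, not the blog’s actual spreadsheet: it assumes average months of 365/12 days, with the model’s month 37 mapping to production month 1.

```python
# Sketch of the opportunity-cost build-up described above (assumed: average
# months of 365/12 days; production month 1 corresponds to project month 37).
DAYS_PER_MONTH = 365 / 12
NET_REVENUE_PER_BBL = 35.00          # $50.00 market less $15.00 all-in production cost
RAMP_UP = [50_000, 100_000]          # bbl/day for production months 1 and 2
PLATEAU = 200_000                    # bbl/day from production month 3 onward

def opportunity_cost(production_month: int) -> float:
    """Monthly opportunity cost ($) for a 1-based production month."""
    rate = RAMP_UP[production_month - 1] if production_month <= len(RAMP_UP) else PLATEAU
    return rate * DAYS_PER_MONTH * NET_REVENUE_PER_BBL

# Cumulative opportunity cost over the first three production months
cumulative = sum(opportunity_cost(m) for m in (1, 2, 3))
print(f"${cumulative / 1e9:.3f}B")
```

Extending the sum from month 1 to the final month produces the cumulative curve plotted in figure 4.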

Selection Criteria

A conceptual project is being used to model the Owner and Contractor curves; the project criteria are as follows:

An Oil and Gas facility project consisting of a 200,000 barrel per day processing plant, a fifty-kilometre pipeline, a near-shore storage tank to hold twenty days’ inventory, along with a tanker loading facility. The project is based on a tie-in at the ‘fence-line’ where the feed-stock for the processing facility is delivered; therefore the cost estimate excludes any well-pad facility and transit flowlines. Each project facility is based on an Engineering, Procurement and Construction (EPC) philosophy for the design, procurement and construction of each portion. Also included in the estimate, along with the EPC portion, are the costs for land purchase for the facilities and all the associated owner’s costs for the Project Management Team (PMT) and offices. The overall cost estimate is $2.5B (Billion), with $1.8B being the Contractor’s portion.

The criteria selection is based on the develop-outcomes section above. As there are 50 data points, too many to show in one table in the blog, only the key months are shown: the start of the project, the point where the opportunity costs kick in, and the costs after the project completion month passes, to show that the costs keep increasing.

Figure 3 – Selected Cumulative Owners Overhead Costs & Opportunity Cost Data-points

Analysis and Comparison of the Alternatives

Using the criteria information, the following curve is generated.

Figure 4 – Owners Cumulative O/H and Opportunity Curve

The curve starts with a steady slope until month 37, when the first opportunity cost appears (50,000 bbl/day), followed by 100,000 bbl/day, and finally the full throughput of 200,000 bbl/day from month 39 onward; all costs then run through to month 50 in this model.

Selection of Preferred Alternative

For this curve/line there is no alternative available, as it is part of a three-blog posting which ends with the development of the Total Cumulative Project Costs, along with analysis and proposed incentives/disincentives.

Monitoring Post Evaluation Performance

As this is part of a series of three blogs, the post evaluation performance cannot be performed until blog 20 completes the full model.

References

  • Mallela, J., & Sadasivam, S. (2011). Figure 15 – Work zone road user costs: Concepts and applications: Final report. U.S. Department of Transportation, Federal Highway Administration, Office of Operations (HOP).
  • Guild of Project Controls. (n.d.). 08.7.3 – Cost vs Time Trade Offs (Optimization) – Guild of project controls compendium and reference (CaR) | Project controls – planning, scheduling, cost management and forensic analysis (Planning Planet). Retrieved September 5, 2017 from http://www.planningplanet.com/guild/gpccar/validate-the-time-and-cost-trade-offs
  • Investopedia. (2017). Opportunity cost. Retrieved from http://www.investopedia.com/terms/o/opportunitycost.asp

W18_SJP_Cost & Time Trade-off Part 1

 

Issue Identification and Appraisal

My current paper (number 5) is about the validation of Cost and Time Trade-offs, and the GPCCAR Module 08-7 provides a great illustration based on figure 15 of the US Department of Transportation, Federal Highway Administration’s document “Work Zone Road User Costs – Concepts and Applications”.

Figure 1 – Relationship between project cost and duration and explanations

It is hoped that the paper will be able to demonstrate that what was done for the US DoT can work in an Oil and Gas environment, and part of the exercise is to test the illustration above on a conceptual project.

This week’s blog is going to look at how to determine the best exponential fit for the contractor curve (green line). The point designated as “Normal duration” is the Contractor’s contract price and schedule duration; the portion to the left of the “Normal duration” denotes any acceleration costs up to project completion, and the portion to the right denotes any delay costs associated with project over-runs. It’s a two-component curve: the left side has the variables, while the right side should be consistent for all scenarios.

Therefore, the problem statement for this week’s blog is, “What ‘Learning Curve’ formula provides the best fit for the left-side component of the Contractor’s curve?”

Feasible Alternatives

After some deliberation, the decision was taken to go with a ‘Learning curve’ type profile. There can be numerous alternatives depending on what percentage is used, but for this blog three options were chosen:

  • 80% Efficiency Factor (EF) / 20% Learning Curve Rate (LCR)
  • 90% Efficiency Factor / 10% Learning Curve Rate
  • 99% Efficiency Factor / 1% Learning Curve Rate

The formula for the learning curve is shown in figure 2 below:

Figure 2 – Learning Curve Formula

Develop the Outcomes for each

To determine the learning curve for each percentage, a spreadsheet template with two columns was used: column ‘A’ holding the project months from 1 through 60, and column ‘B’ containing the learning curve formula =B$5*(B$3)^(LN(A7)/LN(2)), where B$5 holds the base value, B$3 is the Learning Curve Rate, and A7 is the month (i.e. 1, 2, 3, etc.). Note the parentheses around the exponent: without them, Excel’s operator precedence applies the power before the division, which would not give the intended learning-curve exponent.
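The spreadsheet formula can be sketched in Python. This is an illustration rather than the blog’s actual workbook; note that base * rate ** (ln(x) / ln(2)) is algebraically the classic learning-curve form base * x ** log2(rate).

```python
import math

# Sketch of the spreadsheet formula =B$5*(B$3)^(LN(A7)/LN(2)), where the
# base and rate values are whatever the model uses (illustrative here).
def learning_curve(base: float, rate: float, month: int) -> float:
    """Learning-curve value for a 1-based project month."""
    return base * rate ** (math.log(month) / math.log(2))

# Each doubling of the month multiplies the value by the rate:
for month in (1, 2, 4, 8):
    print(month, round(learning_curve(1.0, 0.90, month), 4))
```

The doubling property is the defining feature of this family of curves, which is why the exponent is a ratio of natural logs to log 2.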

The first model developed was for the 80% EF / 20% LCR and the curve is shown below. Please note that as this is the left side of the chart (figures to the left of the “Normal Duration”), the curve runs to month 42; thereafter it will be delay costs.

Figure 3 – 80% Efficiency Factor / 20% Learning Curve Rate

Figure 4 below reflects 90% EF / 10% LCR.

Figure 4 – 90% Efficiency Factor / 10% Learning Curve Rate

Figure 5 below reflects 99% EF / 1% LCR.

Figure 5 – 99% Efficiency Factor / 1% Learning Curve Rate

Selection Criteria
A conceptual project is being used to model the Owner and Contractor curves; the project criteria are as follows:
An Oil and Gas facility project consisting of a 200,000 barrel per day processing plant, a fifty-kilometre pipeline, a near-shore storage tank to hold twenty days’ inventory, along with a tanker loading facility. The project is based on a tie-in at the ‘fence-line’ where the feed-stock for the processing facility is delivered; therefore the cost estimate excludes any well-pad facility and transit flowlines. Each project facility is based on an Engineering, Procurement and Construction (EPC) philosophy for the design, procurement and construction of each portion. Also included in the estimate, along with the EPC portion, are the costs for land purchase for the facilities and all the associated owner’s costs for the Project Management Team (PMT) and offices. The overall cost estimate is $2.5B (Billion), with $1.8B being the Contractor’s portion.

The criteria selection for all three models will use a cost of $1.8B at month 42, and for the delay costs (i.e. the data points after month 42) an allowance of 20% for overheads (this includes both Project O/H and Home Office O/H), which is $0.36B for each month thereafter. As previously mentioned, all three alternatives will use the same delay profile.
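The shared delay-cost leg is simple enough to sketch directly from the figures quoted above ($1.8B at month 42, then 20%, i.e. $0.36B, added per month of over-run):

```python
# Sketch of the delay-cost leg used by all three models: $1.8B contractor
# cost at the month-42 "Normal duration", plus $0.36B overhead per month late.
NORMAL_COST_B = 1.8                        # $B at month 42
MONTHLY_DELAY_B = NORMAL_COST_B * 0.20     # 20% overheads = $0.36B per month

def delayed_cost(month: int, normal_month: int = 42) -> float:
    """Total contractor cost ($B) for a completion month at or after Normal duration."""
    if month < normal_month:
        raise ValueError("months before Normal duration use the learning-curve leg")
    return NORMAL_COST_B + (month - normal_month) * MONTHLY_DELAY_B

print(round(delayed_cost(45), 2))   # 1.8 + 3 * 0.36
```

Because this leg is identical across the three alternatives, only the left-side (acceleration) curves differentiate the models.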

Because the data values for months 1 through 25 are very large, the chart scale becomes too compressed if all data points are plotted; therefore this blog uses the month 30 through month 50 data points in the analysis charts.

Analysis and Comparison of the Alternatives

Using the criteria information, each model was developed to have a cost of $1.8B at the month 42 point, and thereafter the delay costs were added monthly until completion of the spreadsheet rows.

Figures 6, 7 and 8 reflect each feasible alternative.

Figure 6 – 80/20 Accelerated Costs & Delay Cost Chart

Figure 7 – 90/10 Accelerated Costs & Delay Cost Chart

Figure 8 – 99/01 Accelerated Costs & Delay Cost Chart

To better view the results, figure 9 incorporates all three curves into one chart.

Figure 9 – Comparison of Feasible Alternatives

Review of the comparison chart indicates the following:

  • The 99%/1% model has a very steep set of left-side data points and is unrealistic when the costs are reviewed: is it feasible to increase spend by $0.75B in three months? Probably not.
  • The 90%/10% model has a good range of data points, and looking at the 6 and 9 month points it is not unreasonable to expect this kind of increase to accelerate a project by 6 or 9 months.
  • The 80%/20% model again has a good range of data points, but they appear to be on the low side if the project were to accelerate by 6 to 9 months.

The conceptual project is a 200,000 bbl/day Oil and Gas facility (production, transportation and storage, and offloading) with oil at $50/bbl. If the plant could be accelerated by six months, the approximate return would be $1.445B (and if production costs were equivalent to $15/bbl, a profit of $1.012B would be realised).
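The quoted return can be checked against the ramp-up profile used elsewhere in this series (50,000 then 100,000 then 200,000 bbl/day); the assumption here is average months of 365/12 days, so the result matches the quoted figures to within month-length rounding.

```python
# Checking the six-month acceleration return quoted above. Assumes average
# months of 365/12 days and the series' ramp-up profile (illustrative check).
DAYS_PER_MONTH = 365 / 12
rates = [50_000, 100_000] + [200_000] * 4     # six months of production (bbl/day)
barrels = sum(r * DAYS_PER_MONTH for r in rates)
revenue = barrels * 50.00                     # market price $50/bbl
profit = barrels * (50.00 - 15.00)            # net of $15/bbl all-in production cost
print(f"revenue ${revenue / 1e9:.3f}B, profit ${profit / 1e9:.3f}B")
```

This reproduces the $1.445B return; the profit comes out at roughly $1.011B, differing from the quoted $1.012B only in the final digit due to rounding of the month length.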

This type of project cannot be done in one month, or even 24 months, largely due to the types of equipment required and the time taken to procure and fabricate such items. In the author’s opinion, based on many years of experience in oil and gas projects, the best a project like this could accelerate would be around 6 months, maybe 9 if market conditions were favourable.

If the project opted to accelerate, the 90%/10% curve provides the best level of confidence: of the other two curves, one is unrealistically high and the other is on the low side to cover the acceleration costs needed to advance the facilities.

Selection of Preferred Alternative

Using the Multi-attribute decision making process, the following matrix was developed.

Table 1 – Decision Matrix of Feasible Alternatives

From the above table the preferred alternative is to use the 90%EF/10% LCR curve.

Monitoring Post Evaluation Performance

As this is part of a series of three blogs, the post evaluation performance cannot be performed until blog 20 completes the full model.

References

  • Mallela, J., & Sadasivam, S. (2011). Figure 15 – Work zone road user costs: Concepts and applications: Final report. U.S. Department of Transportation, Federal Highway Administration, Office of Operations (HOP).
  • 3.3.7 Multi-Attribute Decision Making – Guild of project controls compendium and reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). (2015, October 3). Retrieved August 17, 2017 from http://www.planningplanet.com/guild/gpccar/managing-change-the-owners-perspective
  • Guild of Project Controls. (n.d.). 08.7.3 – Cost vs Time Trade Offs (Optimization) – Guild of project controls compendium and reference (CaR) | Project controls – planning, scheduling, cost management and forensic analysis (Planning Planet). Retrieved September 5, 2017 from http://www.planningplanet.com/guild/gpccar/validate-the-time-and-cost-trade-offs
  • Bhatti, A. T. (2012, November 23). Learning curve phenomenon [Learning curve formula]. Retrieved from https://images.search.yahoo.com/search/images?p=learning+curve+formula&fr=yset_st_sf_ab&imgurl=http%3A%2F%2Fimage.slidesharecdn.com%2Flearningcurve-121123105005-phpapp02%2F95%2Flearning-curve-7-638.jpg%3Fcb%3D1360231937#id=2&iurl=http%3A%2F%2Fimage.slidesharecdn.com%2Flearningcurve-121123105005-phpapp02%2F95%2Flearning-curve-7-638.jpg%3Fcb%3D1360231937&action=click

W17_SJP_Accuracy, Precision and Reliability – Another perspective

 

Issue Identification and Appraisal

Well, here goes… Blog 16 discussed the CPI/SPI results and how they corresponded to the baseline with regard to accuracy, precision and reliability. However, the CPI and SPI results were not separately analysed, nor were the trend patterns discussed with regard to the ‘Control’ and ‘Specification’ limits. Therefore, this week’s blog further explores the problem statement, “What classification was the original baseline?”, and will include the results up to week 16. The feasible alternatives are the same as in Blog 16, along with the outcomes, but there will be an additional week in the selection criteria.

Feasible Alternatives

Review of the Guild of Project Controls Compendium and Reference (GPC) Module 9.4 ‘Assessing & Interpreting Progress Data’ provides three feasible alternatives.

These are:

  • Accuracy
  • Precision
  • Reliability

Develop the Outcomes for each 

The GPCCAR Module 9.4 figure 9, demonstrates that there are several options regarding the feasible alternatives, and that a combination of two or better still all three would be beneficial.

Figure 1 – Precise

Figure 1 shows that the CPI/SPI points are consistently ‘hitting’ the target and within the acceptable range, which demonstrates that the baseline estimate was Accurate, Precise and Reliable to be used in any future projections.

Figure 2 – Imprecise

Figure 2 shows that the CPI/SPI points are scattered over the target and therefore imprecise, which demonstrates that the baseline estimate was not Accurate or Reliable and should not be used in any future projections.

Figure 3 – Precise and Accurate with some outliers

Figure 3 shows that many of the CPI/SPI points are ‘hitting’ the target, but a few outlying points need to be assessed as to why they fell outside. In such cases the Project Controls professional could apply statistical process control chart analysis to discard those outlying points beyond the +/- 3 Sigma range as being “special and identifiable”. Use of these values without performing any statistical process control would be risky.

Figure 4 – Precise and Reliable but not Accurate

Figure 4 shows that the CPI/SPI points are ‘hitting’ the target in a precise group but outside of the bulls-eye. It shows that the data is precise and reliable, but needs to be adjusted to bring the points back to the center. If adjustments were made, then the data could be used in future projections. Adjustments would mean looking at the cost and time estimates and assessing what needs to be done to gain alignment.

Table 1 – Summary of Data Point Dispersions

Selection Criteria

The criteria selection for both models will be taken from W05 through W15 weekly reports. Table 2 below shows the information to be used.

 

 

Table 2 – CPI & SPI Data points for use in Analysis

Analysis and Comparison of the Alternatives

The data points from table 2, were input to the “Target” Cost/Schedule Trend chart and the “Line” CPI/SPI by week chart to provide the following graphical outputs.

Figure 5 – Cost/Schedule Trend Report

Figure 6 – CPI / SPI Line Chart

Review of the data shows four data points outside of the +/- 3 Sigma range. The explanations behind these four points were provided in Blog 16; briefly, they were: i) W06 – the final 3 activities in Paper 1 were missed; ii) W08 – the trend reversed, with schedule and cost well ahead due to the budget being inflated (subsequently revised in W09); iii) W10 – the schedule remained ahead because the papers went better than anticipated through working smart rather than following the template set out by Dr. Paul Giammalvo; and iv) W13 – the schedule was back on the radar as paper 4 progressed well, the research done on blogs prior to commencing the paper having reduced the development time. Removal of these outlying points is reflected in figure 7 below.

Figure 7 – Data without the outlying points

In Blog 16, the CPI * SPI data points were input to a statistical control chart; in this blog, CPI and SPI will be evaluated individually and the results of the analysis provided.

Figures 8 and 9 reflect the CPI and SPI results from weeks 4 through 16 when input to the statistical control charts.

For those not familiar with these charts, they include: the Upper and Lower Control Limits, based on +/- 3 sigma; the Upper and Lower Specification Limits, based on +/- 5% of the 100% used in the course to ensure staying on track; the average, which the control chart determines from the data points; and the A, B and C zones, each one standard deviation in width.

Analysis of these charts will apply: the Zone Test (out of control if there is a run of k points in a row above or below the average), the Stratification Test (out of control if there are k points in a row within Zone C on either side of the average) and the Mixture Test (out of control if there are k points in a row with none in Zone C on either side of the average); see the rules in table 3 below.

Table 3 – Control Chart Rules for Out-of-Control
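The three tests described above amount to simple run checks, and can be sketched as follows. The run lengths (k) come from table 3 and are left as parameters here, since the specific values depend on the rule set chosen.

```python
# Hedged sketch of the three out-of-control tests; k is the run length
# taken from the control chart rules (a parameter here, not a fixed value).
def zone_test(points, average, k):
    """Out of control if k points in a row sit on one side of the average."""
    run, side = 0, 0
    for p in points:
        s = 1 if p > average else -1 if p < average else 0
        run = run + 1 if (s != 0 and s == side) else (1 if s != 0 else 0)
        side = s
        if run >= k:
            return True
    return False

def stratification_test(points, average, sigma, k):
    """Out of control if k points in a row all fall within Zone C (within 1 sigma)."""
    run = 0
    for p in points:
        run = run + 1 if abs(p - average) <= sigma else 0
        if run >= k:
            return True
    return False

def mixture_test(points, average, sigma, k):
    """Out of control if k points in a row all fall outside Zone C."""
    run = 0
    for p in points:
        run = run + 1 if abs(p - average) > sigma else 0
        if run >= k:
            return True
    return False
```

For example, zone_test([1.02, 1.03, 1.04], 1.0, 3) flags a run of three points above the average; the same data fails the mixture test if all three sit within one sigma of the mean.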

Figure 8 – CPI Statistical Control Chart

So, what is the CPI chart telling us? First, let’s look at the data points.

  • One point outside of both the specification and control limits.
  • Nine points within Zone C: four in the upper zone, five in the lower zone. The four upper points are not consistent, sitting at two levels and spaced out; the five lower points have four in a line and one lower down the zone.
  • Three points in the lower Zone B – all in a line, but not consecutive.

Chart analysis:

  • Zone Test: There are two runs of points, one in lower Zone B and one in lower Zone C; however, neither matches the number of consecutive points required by the control chart rules above, so at present the chart is still in control.
  • Stratification Test: There are four points in a line in Zone C, but again they are not consecutive, so at present still in control.
  • Mixture Test: There is a run of 3 points in Zone B, but again this does not match the rules for out-of-control.

Figure 9 – SPI Statistical Control Chart

So, what is the SPI chart telling us? First, let’s look at the data points.

  • Six points outside of the specification limits.
  • Six points within lower Zone C and one point in lower Zone B – these points are spaced out until the recent budget realignment, where there are three points in close proximity in Zone C.

Chart analysis:

  • Zone Test: The points are not matching the criteria so things are still in control.
  • Stratification Test: There are insufficient points in zones to meet the out-of-control criteria, so despite the ups and downs there is no indication that the current schedule is out-of-control.
  • Mixture Test: At present, there are only three consecutive points in zone C, so still in control.

Selection of Preferred Alternative

The analysis using the statistical control charts suggests that, as it has only been three weeks since the budget and schedule re-alignment, there are insufficient points to provide an accurate assessment of the data.

Therefore, the data point pattern shown in figure 7 closely matches the model in figure 3, “Precise and Accurate, with Outliers”, and once the outlying data points are discarded the remaining points are dispersed within the +/- 3 Sigma range. With regard to the control budget, it appears to be both precise and accurate and can be used in evaluating future forecasts based on the data points provided.

Monitoring Post Evaluation Performance

As is the case with the other blogs, close monitoring of the period data allows regular evaluations to ensure that the data-point trends are keeping within the Statistical Control limits. If these limits are breached then re-evaluation of the data will be required.

References

W16_SJP_Accuracy, Precision and Reliability

 

Issue Identification and Appraisal

Blog 15 discussed the tracking and reporting of the Cost Performance Index (CPI) and Schedule Performance Index (SPI) and the graphical choices, which brings out another dimension: how close the results are to the baseline, or whether the baseline was flawed. This week’s blog problem statement is, “With the CPI/SPI results obtained to week 15, what classification was the original baseline?”

Feasible Alternatives

Review of the Guild of Project Controls Compendium and Reference (GPC) Module 9.4 ‘Assessing & Interpreting Progress Data’ provides three feasible alternatives.

These are:

  • Accuracy
  • Precision
  • Reliability

Develop the Outcomes for each

The GPCCAR Module 9.4 figure 9, demonstrates that there are several options regarding the feasible alternatives, and that a combination of two or better still all three would be beneficial.

Figure 1 – Precise

Figure 1 shows that the CPI/SPI points are consistently ‘hitting’ the target and within the acceptable range, which demonstrates that the baseline estimate was Accurate, Precise and Reliable to be used in any future projections.

Figure 2 – Imprecise

Figure 2 shows that the CPI/SPI points are scattered over the target and therefore imprecise, which demonstrates that the baseline estimate was not Accurate or Reliable and should not be used in any future projections.

Figure 3 – Precise and Accurate with some outliers

Figure 3 shows that many of the CPI/SPI points are ‘hitting’ the target, but a few outlying points need to be assessed as to why they fell outside. In such cases the Project Controls professional could apply statistical process control chart analysis to discard those outlying points beyond the +/- 3 Sigma range as being “special and identifiable”. Use of these values without performing any statistical process control would be risky.

Figure 4 – Precise and Reliable but not Accurate

Figure 4 shows that the CPI/SPI points are ‘hitting’ the target in a precise group but outside of the bulls-eye. It shows that the data is precise and reliable, but needs to be adjusted to bring the points back to the center. If adjustments were made, then the data could be used in future projections. Adjustments would mean looking at the cost and time estimates and assessing what needs to be done to gain alignment.

Table 1: Summary of Data Point Dispersions

Selection Criteria

The criteria selection for both models will be taken from W05 through W15 weekly reports. Table 2 below shows the information to be used.

Table 2: CPI & SPI Data points for use in Analysis

Analysis and Comparison of the Alternatives

The data points from table 2, were input to the “Target” Cost/Schedule Trend chart to provide the following graphical output.

Figure 5: Cost/Schedule Trend Report

Review of the data shows four data points outside of the +/- 3 Sigma range. As mentioned in the figure 3 explanation, these four points need to be assessed to determine why they fell outside, to ensure there is a valid explanation before discarding them.

Figure 6: Zoom-in View

To understand the reasons for the outlying points we need to determine which week these occurred and check the report analysis for explanations.

Table 3: CPI & SPI Data points outside +/- 3 Sigma limits

Table 3 shows the four data points that need to be reviewed to determine why they fell outside the limits. Of the four points, three relate to schedule only, and one to both cost and schedule.

The first was in W06 and was due to missing the final 3 activities in Paper 1. In W08 the trend had reversed, with schedule and cost showing well ahead; this was due to the budget being inflated, which was subsequently revised in W09. In W10 the schedule was showing ahead because the papers went better than anticipated through working smart rather than following the template set out by Dr. Paul Giammalvo. In W13 the schedule was back on the radar as paper 4 was progressing well, the research done on blogs prior to commencing the paper having reduced the development time.

Figure 7: Data Points without Outlying points

Therefore, our initial scenario was like the “Precise and Accurate, with Outliers” model. Once these outlying data points are discarded, the other points are dispersed within the +/- 3 Sigma range.

Selection of Preferred Alternative

To provide a level of confidence in the data points, a statistical control chart will be used. As this requires one set of data points and a time scale, the data points will be based on CPI * SPI.

Table 4: CPI * SPI Data points for Control chart

Inputting the above points into the statistical control chart model provides the output in figure 8. Looking at the chart it tells us the following:

  • Most points are close to the average
  • There are a few points close to the limits
  • There are two points beyond the control limits

Figure 8 : CPI*SPI Statistical Control Chart

As there are a couple of points outside of the control limits, these are treated as a “special cause”. They occurred prior to the budget re-alignment; figure 8 shows that the points from W09 onwards are in control and therefore generally acceptable.
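The special-cause screening step can be sketched as follows. The CPI and SPI values below are hypothetical stand-ins (the actual Table 4 values are not reproduced here); the point is the mechanics of deriving the CPI * SPI series, the +/- 3 sigma limits, and the flagged points.

```python
import statistics

# Hypothetical CPI and SPI points, illustrating how special-cause points
# beyond the +/- 3-sigma control limits are screened out of the data set.
cpi = [1.02, 0.97, 1.10, 0.97, 1.01, 0.99, 1.03]
spi = [0.98, 0.95, 1.15, 0.96, 1.00, 1.02, 0.99]
points = [c * s for c, s in zip(cpi, spi)]

mean = statistics.mean(points)
sigma = statistics.stdev(points)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma   # control limits

# Points beyond the control limits are "special cause" and reviewed individually
special_cause = [(wk, p) for wk, p in enumerate(points, start=1)
                 if not lcl <= p <= ucl]
in_control = [p for p in points if lcl <= p <= ucl]
```

The +/- 5% specification limits used in the course would be overlaid as fixed lines at 0.95 and 1.05, independent of the computed control limits.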

With regard to the control budget, it appears to be both precise and accurate and can be used in evaluating future forecasts based on the data points provided.

Monitoring Post Evaluation Performance

As is the case with the other blogs, close monitoring of the period data allows regular evaluations to ensure that the data-point trends are keeping within the Statistical Control limits. If these limits are breached then re-evaluation of the data will be required.

References

W15_SJP_Tracking and Reporting CPI & SPI

 

Issue Identification and Appraisal

Back on the course trail again this week. A few weeks back, the task of tracking and reporting the Cost Performance Index (CPI) and Schedule Performance Index (SPI) was given. Research has shown there are several ways to graphically represent these two metrics, but which one is more informative? This week’s blog problem statement is, “Which CPI/SPI graphic provides the best visual representation to management?”

Feasible Alternatives

Review of the Guild of Project Controls Compendium and Reference (GPC) Module 9.4 ‘Assessing & Interpreting Progress Data’ shows two method/graphics; i) 9.4.3.2.11 Cost/Schedule Trend Report, and ii) 9.4.3.2.12 SPI & CPI Tracked Over Time.

The alternatives are:

  • Target chart Graphic
  • Line Chart Graphic

Develop the Outcomes for each

The graphics for both alternatives have been taken from the GPCCAR and are shown in figures 1 and 2 below.

Figure 1: Cost/Schedule Trend Report

Figure 1 is in the form of a “Target Chart”: the CPI/SPI point is plotted on an X-Y chart. Each quadrant has a message to tell, whether “Ahead of Schedule/Under Budget” or “Behind Schedule/Over Budget”, and the grouping of the data points indicates accuracy, precision and reliability.

Figure 2: SPI & CPI Tracked Over Time

Figure 2 is a modified line chart which provides the same information as figure 1, but with separate lines for CPI and SPI.

For this blog, a format was developed for each model to show the comparison of the graphics. However, should either of these be utilized on a live project, it would be expected that the format be developed by the Project Controls Team for use.

Selection Criteria

The criteria selection for both models will be taken from W05 through W14 weekly reports. Table 1 below shows the information to be used.

Table 1: CPI & SPI Data points for use in Analysis

Analysis and Comparison of the Alternatives

Using the data points from table 1, the input for Feasible Alternative 1 “Cost/Schedule Trend Report” was performed providing the following graphical output.

Figure 3: Cost/Schedule Trend Report

Inputting the same information into the Feasible Alternative 2 “SPI & CPI Tracked Over Time” provided the graphic in Figure 4.

Figure 4: SPI & CPI Tracked Over Time

Both figures were developed from the same information but look different. The target chart in figure 3 not only provides the data points, it also lets the viewer (here the author, but on a project it would be the project manager) see the accuracy of the information provided and the precision of the data points, which in this case are widely distributed, suggesting that the initial cost and schedule budgets were over-generous.

Figure 4 does not impart the same message; a reader who was not aware of what they were looking at would glance at the information and move on. It does not have the same powerful message embedded in the graphic.

Selection of Preferred Alternative

The use of the Multi-attribute decision making process will allow us to build a small model to evaluate the best visual graphic that gets the message across to the reader (i.e. Project Management).

Table 2: MADM Evaluation of Feasible Alternatives

From table 2, the selection of feasible alternative 1 is evident, as the graphic is very powerful and at a glance the reader can determine the status and what needs to be addressed.

The author liked the information provided by the selected graphic so much that it was recommended as a “Dashboard Page One” graphic in Paper 3.

Monitoring Post Evaluation Performance

Closely monitor the graphic to ensure that its use is effective; the project controller needs to be able to review the graphic in conjunction with remaining float to ensure that the correct assessments are being made, and that management are aware of what actions need to be taken. Ensure that the correct model (feasible alternative) is being used to provide the information and status. Periodically review the graphic and the corresponding write-up to verify the accuracy of the information provided.

References

W14_SJP_Forecasts Part 6

 

Issue Identification and Appraisal

This is the final part in the series of blogs regarding forecasting methods that can be applied to determine the Estimate at Complete (EAC). This week’s blog poses the problem statement, “Which forecasting method provides the best Estimate at Complete (EAC) accuracy?”

Feasible Alternatives

The feasible alternatives that were reviewed were:

  • (Blogs 9 & 10) – IEAC methods outlined in the Guild of Project Controls Compendium and Reference (GPC) Module 4.4 ‘Assess, Prioritize and Quantify Risks/Opportunities’ & NDIA’s Guide to Managing Programs and Projects Using Predictive Measures.
  • (Blog 11.1) MS Excel’s “Best Fit” regression analysis curve
  • (Blog 12) Monte Carlo Simulation using @Risk
  • (Blog 13) Monte Carlo Simulation using regular MS Excel (this was a late addition as research was showing options to perform same simulation without @Risk)

Each method was evaluated and probability numbers were developed for each.

Develop the Outcomes for each

As mentioned above, each of the feasible alternatives was evaluated and probability ‘P’ numbers were determined for each; let’s quickly recap the process used for each.

  • IEAC used the ACWP, CPI and SPI data from the Week 7 report and provided a range of forecasts, which were then used to determine best, most likely and worst cases which allowed the determination of the mean, std. deviation and variance – all the information required to utilize the PERT formulas. The information was then input to a Normal Distribution curve and the probability figures determined.
  • MS Excel’s best fit used the 7 data points from all the weekly reports and used all the available regression options to provide a forecast range, which again determined the best, most likely and worst cases, before following the same process as above (PERT results, Normal distribution and probability numbers).
  • @Risk used the 7 data points and used the ARMA distribution to provide a range of forecasts, which allowed determination of the probability figures.
  • MS Excel’s Monte Carlo simulation used the PMB figure of $49,795 and using a model assessed the mean and std. deviation, which was then input to the PERT formulas, normal distribution curve and provided the resultant probability numbers.
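The IEAC step in the first bullet can be sketched with the standard index-based formulas from the EVM literature (a simplified subset of the six GPC/NDIA methods; all input figures below are hypothetical placeholders, not the Week 7 report data):

```python
# Common index-based IEAC (Independent Estimate At Completion) formulas,
# as described in the GPC and NDIA guidance. Input figures are hypothetical.

def ieac_range(acwp, bcwp, bac, cpi, spi):
    """Return a spread of IEAC forecasts from common EVM index formulas."""
    remaining = bac - bcwp                  # budgeted cost of work remaining
    return [
        bac / cpi,                          # all work at the cumulative CPI
        acwp + remaining,                   # remaining work at the budgeted rate
        acwp + remaining / cpi,             # remaining work at the CPI
        acwp + remaining / (cpi * spi),     # remaining work at the composite index
    ]

# Hypothetical status data (not the actual weekly report figures)
forecasts = ieac_range(acwp=14000, bcwp=13000, bac=49795, cpi=0.93, spi=0.95)
best, worst = min(forecasts), max(forecasts)
```

The best, most likely and worst cases taken from such a spread feed the PERT mean and standard deviation described above.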

Selection Criteria

The selection criteria came from weekly report 7, and the following information was utilized: ACWP, BAC, CPI, SPI, and the weekly ACWP in the form of 7 data points.

Analysis and Comparison of the Alternatives

The results from the evaluation of each methodology are presented in Table 1.

Table 1 – Results from Forecast Alternatives

For information purposes, table 2 provides the same tabulation but shows how each forecast has performed against the mean for each probability figure.

Table 2 – Results as a % above/below Mean

As noted in each of the blogs, except for the @Risk method the variances were large and therefore a high “P” number should be considered. For the purposes of this exercise, the P90 and P95 figures are suggested as the focal points for assessing the accuracy of the forecasts; table 3 shows the figures.

Table 3 – Probability Figures used to determine most accurate forecast

To compare the developed forecast figures against the current ACWP figures as of week 13, the W07 data point is taken, the forecast is placed on week 26, and a straight line is drawn between the two points. This allows a visual comparison of which forecast is trending closest to the data points to date. The first set of data points, for the P90 probability, is plotted on a chart with the week 13 information.
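The straight-line comparison described above is simple linear interpolation between the week-7 point and the forecast placed at week 26; a small sketch (the dollar figures are illustrative, not the report values):

```python
# Straight line from (week 7, ACWP at week 7) to (week 26, forecast value),
# evaluated at any intermediate week for comparison with actual ACWP data.
# The figures used below are illustrative placeholders.

def comparison_line(w7_acwp, w26_forecast, week):
    """Linearly interpolate between the week-7 actual and the week-26 forecast."""
    slope = (w26_forecast - w7_acwp) / (26 - 7)
    return w7_acwp + slope * (week - 7)

# e.g. the line value at week 13, to compare against the week-13 ACWP
line_w13 = comparison_line(14000, 49277, 13)
```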

Figure 1 – P90 Forecasts vs Week 13 ACWP (Full version)

To better see the comparison, a zoom-in view from week 7 through week 15 is shown on figure 2.

Figure 2 – P90 Forecasts vs Week 13 ACWP (Zoom-in)

For the P90 comparison it appears that the MS Excel Monte Carlo method is tracking close to the new data points from week 8 to 13. Let’s look at the P95 comparison figures.

Figure 3 – P95 Forecasts vs Week 13 ACWP (Full version)

And zooming in to better see the comparison.

Figure 4 – P95 Forecasts vs Week 13 ACWP (Zoom-in)

Again, the comparison appears to be closely following the new data points and it’s the same methodology, MS Excel Monte Carlo method.

Selection of Preferred Alternative

The analysis of the 4 alternatives has now been completed; the results have been tabulated, and the P90 and P95 probability figures have been plotted against the latest information from week 13 to provide a live comparison. The final selection will use the Multi-Attribute Decision Making technique to identify the most accurate method at this stage. Remember, this is a forecast and unforeseen risks can occur that change the current perspective, but at present there are no signs that such an issue will occur.

The feasible alternatives are already identified; all that is required is to identify the attributes: i) ease of use of the forecasting method, ii) average of the forecasts vs the PMB as a %, and iii) forecast performance against new data points. Table 4 shows the results of the MADM, along with the ranking of each method.

Table 4 – MADM results for the Forecasting methods

For this exercise, it appears that MS Excel Monte Carlo simulation method has provided the most accurate results, although the IEAC formulas were very close also.
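The MADM ranking used here can be sketched as a simple additive weighted-scoring model; the weights and scores below are hypothetical placeholders, not the values actually used in Table 4:

```python
# Additive weighted-scoring MADM sketch for ranking the four forecasting
# methods against the three attributes listed above. All weights and scores
# are hypothetical placeholders.

alternatives = ["IEAC", "Excel best-fit", "@Risk", "Excel Monte Carlo"]
weights = [0.2, 0.3, 0.5]   # ease of use, avg vs PMB %, fit to new data
scores = [                  # one row per alternative, scored 1 (poor) to 5 (good)
    [4, 3, 4],
    [3, 3, 2],
    [2, 4, 3],
    [4, 4, 5],
]

# Weighted total per alternative, then rank from best to worst
totals = {alt: sum(w * s for w, s in zip(weights, row))
          for alt, row in zip(alternatives, scores)}
ranking = sorted(totals, key=totals.get, reverse=True)
```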

Monitoring Post Evaluation Performance

As previously mentioned in the earlier blogs, there is no right or wrong forecasting method, and although one feasible alternative outranked the others this time, there is no guarantee that the same will be true next time. Projects are dynamic and forecasting methods vary, so it is recommended to review all feasible alternatives on a regular basis and identify any emerging trends: a continual process of checking, reviewing, and monitoring to ensure the correct method is being used to provide the most accurate result for management level decisions to be effective.

References

  • Guild of Project Controls. (2015, October 3). Guild of project controls Compendium and reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). Retrieved from http://www.planningplanet.com/guild/gpccar/assess-prioritize-and-quantify-risks-opportunities
  • Guild of Project Controls. (2015, October 3). 08.9.3.3 Monte Carlo Simulation – Guild of project controls compendium and reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). Retrieved August 13, 2017, from http://www.planningplanet.com/guild/gpccar/conducting-a-cost-risk-analysis
  • 5 Project performance forecasting – Guild of project controls compendium and reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). (2015, October 3). Retrieved July 20, 2017, from http://www.planningplanet.com/guild/gpccar/project-performance-forecasting
  • 3.3.04 Force Field or SWOT Analysis – Guild of project controls compendium and reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). (2015, October 3). Retrieved July 10, 2017 from http://www.planningplanet.com/guild/gpccar/identify-risks-opportunities
  • 3.3.7 Multi-Attribute Decision Making – Guild of project controls compendium and reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). (2015, October 3). Retrieved August 17, 2017 from http://www.planningplanet.com/guild/gpccar/managing-change-the-owners-perspective
  • National Defense Industrial Association. (2014). A Guide to managing programs using predictive measures. Author.
  • Verschuuren, G. (2013, December 19). Monte Carlo Simulations in Excel [Video file]. Retrieved from https://www.youtube.com/watch?v=UeGncSFijUM&t=8s
  • @Risk [Computer program] v7.5. 798 Cascadilla St, Ithaca, NY 14850, USA: Palisade Corporation (2016).
  • Bergeron, E. (n.d.). Standard normal probabilities. Retrieved from http://www.stat.ufl.edu/~athienit/Tables/Ztable.pdf
  • Theyerl, P. (2013, February 28). Excel – Multiple and varied trendlines [Video file]. Retrieved from https://www.youtube.com/watch?v=dsJnsuoVfA8
  • ExtendOffice. (2016). How to add best fit line/curve and formula in Excel? Retrieved from https://www.extendoffice.com/documents/excel/2642-excel-best-fit-line-curve-function.html#excel2013
  • Kyd, C. (2014). How to create normal curves with shaded areas in new excel. Retrieved from http://www.exceluser.com/excel_dashboards/normal-curve-new-excel.htm

Progress to date on the course

 

Well, week 12 is over and into week 13 – the course’s midpoint.

Progress, well let’s see.

  1. 11 weekly reports (weeks 2 thru’ 12) produced, submitted and accepted
  2. 3 papers compiled and accepted. Paper 1 “Scoring Model for GAO Best Practices” was published in the PM World Journal’s August edition http://pmworldjournal.net/article/scoring-model-scheduling/ , and Paper 2 “Retain or Over-ride – the question is logical” is being edited for publication in September’s edition. Paper 3 “Best in Class Dashboards Oil & Gas Projects – A research based approach” was completed last weekend.
  3. The 3 Papers have also been submitted to the Guild of Project Controls as part of the requirements to be accepted to sit the Expert level GPC examinations.
  4. Paper 4 “Final title still pending – A case study of Forecasting Models” is underway.
  5. 13 Blogs posted, some with a bit of rework, but it’s been a valuable experience, and I’m over halfway in getting these completed.
  6. Lots of ass-chewings from Dr Paul, as I’m old and don’t listen very well at times. These have been well deserved and, like teaching a chimp, I’m catching on quickly…

Well that’s about it for now. Check back in a few weeks for another update…

Steve

References:

  • Paterson, S. J. (2017, August). Scoring model for scheduling – Project Management World Journal. Retrieved from http://pmworldjournal.net/article/scoring-model-scheduling/

W13_SJP_Forecasts Part 5

 

Issue Identification and Appraisal

Blogs 9 thru’ 12 looked at several forecasting methods that can be applied to determine what the Estimate at Complete (EAC) would be for the sample project data we are using. This week’s blog finalizes the last method, continuing with the problem statement “Other than IEAC, what other forecasting method could have been used to provide the Estimate at Complete (EAC) range for the weekly report?”

Feasible Alternatives

The feasible alternatives are:

  • 6 IEAC methods outlined in the Guild of Project Controls Compendium and Reference (GPC) Module 4.4 ‘Assess, Prioritize and Quantify Risks/Opportunities’
  • MS Excel’s “Best Fit” regression analysis curve
  • Monte Carlo Simulation using @Risk
  • Monte Carlo Simulation using regular MS Excel (this was a late addition as research was showing options to perform same simulation without @Risk)

Each method is being evaluated in separate blogs, and then a conclusion blog will finish up the series. This week will evaluate Monte Carlo simulation using MS Excel.

Develop the Outcomes for each

6 IEAC methods – Evaluated in blogs 9 & 10, and will form part of the final analysis in the final blog of the series. P75 = $50,961 / P85 = $51,187 / P90 = $51,340 / P95 = $51,566 / P98 = $51,821

MS Excel “Best Fit” Curves – Uses data set of values (ACWP data points from week 1 thru’ week 7) then plots the best fit line or curve through the data points to week 26 completion point in the future. P75 = $48,776 / P85 = $51,249 / P90 = $52,923 / P95 = $55,407 / P98 = $58,201

Monte Carlo Simulation ‘A’ – using @Risk software to perform the Monte Carlo simulation, the following values were determined; P75 = $44,900 / P85 = $45,250 / P90 = $45,375 / P95 = $45,500 / P98 = $N/A..

Monte Carlo Simulation ‘B’ – Need to set-up Monte Carlo model in MS Excel to evaluate this option.

Selection Criteria

This time, a different approach was taken to the technique being applied.

A model (spreadsheet) was developed to evaluate the probability of meeting the original budget of $49,795, using 1,000 iterations generated by MS Excel, with each step allowing a plus/minus 0.5% deviation. In cell A1 the original budget value was input (A1 = $49,795), and in cell L1 the value $248.98 (0.5% of $49,795). In cell A2, a formula generating a random number from 0 to 1 was combined with the 0.5% value: if the number generated was greater than 0.5 it would add $248.98 to the figure above, otherwise it would subtract $248.98 from the figure above. This formula, “=IF(RAND()>0.5,A1+$L$1,A1-$L$1)”, was then copied down to cell A1000 to provide the number of iterations required and to ensure a broad range of results was obtained (see figure 1).

Figure 1 – Sample range of Data iterations using formula above.

MS Excel criteria used:

  1. Base figure of $49,795 budget
  2. 1,000-iteration sets x 5 sets, providing Average (Mean), Minimum, Maximum and Std. Dev. for each set
  3. Ensure all 5 sets are in range +/-10%
  4. Ensure one set has a figure close to 0% but no greater than +/-1%
  5. Results need to be 2 or 3 below and 2 or 3 above the base – splits of 1&4 or 5&0 are not acceptable
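For readers without Excel to hand, the spreadsheet model above can be mirrored in a few lines of Python (a sketch of the same +/- $248.98 coin-flip walk, not the author’s spreadsheet itself):

```python
import random

# Mirror of the spreadsheet formula "=IF(RAND()>0.5,A1+$L$1,A1-$L$1)":
# starting from the $49,795 budget, each of 1,000 iterations steps up or
# down by $248.98 (0.5% of the budget) with equal probability.

def run_walk(base=49795.0, step=248.98, n=1000, seed=None):
    rng = random.Random(seed)
    values, current = [], base
    for _ in range(n):
        current += step if rng.random() > 0.5 else -step
        values.append(current)
    return values

def summarize(values):
    """Average (mean), minimum, maximum and sample std. deviation of a set."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return {"mean": mean, "min": min(values), "max": max(values),
            "std_dev": var ** 0.5}

stats = summarize(run_walk(seed=1))   # one of the five sets
```

Re-running with different seeds gives the five independent sets the criteria call for; fixing the seed here corresponds to setting recalculation to manual mode in Excel.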

Analysis and Comparison of the Alternatives

Again, another first for the author; building the model was relatively easy – it’s a simplistic approach and all that is required is to meet the criteria set out for the analysis. As MS Excel recalculates every time a change is made, it is better to set the recalculation method to manual mode. Recalculate as required until the criteria guidelines are met; this may take a few keystrokes but should be achieved very quickly. Figure 2 shows the results obtained, which meet the required criteria and are used in this blog.

Figure 2 – Simulation results meeting criteria

Taking the information from row 3 (see arrows in figure 2) and inputting it into the normal distribution model, the following profile is obtained.

Figure 3 – Normal Distribution showing Mean

To see how good the number is, squaring the std. deviation will provide the variance. If the variance is small then we have good data to work with, but if the variance is high then there is a need to select a higher ‘P’ factor to provide a better level of confidence. Therefore: Variance = Sigma^2 = $1,697.12^2 = $2,880,228. This variance is large, therefore a higher P number needs to be selected.

Figure 4 – Normal Distribution showing Probability numbers

From figure 4 above, the following figures have been determined; P75 = $48,247 / P85 = $48,862 / P90 = $49,277 / P95 = $49,894 / P98 = $50,588.
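The probability figures above follow from the mean and standard deviation via one-sided standard-normal z multipliers. A sketch of that step (the mean of roughly $47,102 is an assumption inferred back from the published P figures, since figure 2 is not reproduced here; the z values come from a standard Z table):

```python
# One-sided standard-normal z multipliers, read from a standard Z table
Z = {75: 0.6745, 85: 1.0364, 90: 1.2816, 95: 1.6449, 98: 2.0537}

mean = 47102.0     # inferred from the published P figures (assumption)
sigma = 1697.12    # std. deviation from the simulation results above

# P-number = mean + z * sigma, rounded to whole dollars
p_values = {p: round(mean + z * sigma) for p, z in Z.items()}
```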

Selection of Preferred Alternative

This blog is the fifth in a series of six; next week’s blog will compare all the alternatives from each blog and then select a preferred method.

Monitoring Post Evaluation Performance

No matter which forecasting method is finally decided upon, projects are dynamic, and forecasting methods vary, it is recommended to review the feasible alternatives on a regular basis, determining any appearing trends.

This is a continual process of checking, reviewing, and monitoring to ensure the correct method is being used to provide the most accurate result for management level decisions to be effective.

References

  • Guild of Project Controls. (2015, October 3). Guild of Project Controls Compendium and Reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). Retrieved from http://www.planningplanet.com/guild/gpccar/assess-prioritize-and-quantify-risks-opportunities
  • Guild of Project Controls. (2015, October 3). 08.9.3.3 Monte Carlo Simulation – Guild of project controls compendium and reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). Retrieved August 13, 2017, from http://www.planningplanet.com/guild/gpccar/conducting-a-cost-risk-analysis
  • Verschuuren, G. (2013, December 19). Monte Carlo Simulations in Excel [Video file]. Retrieved from https://www.youtube.com/watch?v=UeGncSFijUM&t=8s

W12_SJP_Forecasts Part 4

 

Issue Identification and Appraisal

Blogs 9 thru’ 11 commenced looking at several forecasting methods that can be applied to determine what the Estimate at Complete (EAC) would be for the sample project data we are using. This week’s blog continues with the problem statement “Other than IEAC, what other forecasting method could have been used to provide the Estimate at Complete (EAC) range for the weekly report?”

Feasible Alternatives

The feasible alternatives are:

  • 6 IEAC methods outlined in the Guild of Project Controls Compendium and Reference (GPC) Module 4.4 ‘Assess, Prioritize and Quantify Risks/Opportunities’
  • MS Excel’s “Best Fit” regression analysis curve
  • Monte Carlo Simulation using @Risk
  • Monte Carlo Simulation using regular MS Excel (this was a late addition as research was showing options to perform same simulation without @Risk)

Each method is being evaluated in separate blogs, and then a conclusion blog will finish up the series. This week will evaluate Monte Carlo simulation using @Risk.

Develop the Outcomes for each

6 IEAC methods – Evaluated in blogs 9 & 10, and will form part of the final analysis in the final blog of the series. P75 = $50,961 / P85 = $51,187 / P90 = $51,340 / P95 = $51,566 / P98 = $51,821

 

MS Excel “Best Fit” Curves – Uses data set of values (ACWP data points from week 1 thru’ week 7) then plots the best fit line or curve through the data points to week 26 completion point in the future. P75 = $48,776 / P85 = $51,249 / P90 = $52,923 / P95 = $55,407 / P98 = $58,201

 

Monte Carlo Simulation ‘A’ – Requires @Risk software to perform the Monte Carlo simulation.

 

Monte Carlo Simulation ‘B’ – Need to set-up Monte Carlo model in MS Excel to evaluate.

 

Selection Criteria

To evaluate the “Monte Carlo simulation ‘A’”, the same set of data points that have been used for the other two cases will be used (the ACWP data points from week 1 through week 7 from the weekly report).

Table 1 – ACWP Data Points

@Risk criteria used:

  1. Table 1 data points
  2. Time Series Model using ARMA (autoregressive moving average) with weeks 8 to 26 as required outputs
  3. 10,000 iterations
  4. Use the results of the 26 points and run a curve fit for a normal distribution to get the ‘P’ values
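@Risk’s ARMA fitting is proprietary, but the underlying idea can be sketched as follows: simulate many paths of weekly ACWP increments that follow an ARMA(1,1) process, carry each path out to week 26, and read percentiles from the resulting totals. All parameters below (drift, ARMA coefficients, noise level, starting value) are assumed for illustration only:

```python
import random
import statistics

# Toy ARMA(1,1) Monte Carlo sketch of a time-series cost forecast. Weekly
# increments have an assumed drift plus an ARMA(1,1) deviation; percentiles
# are read from the simulated week-26 totals. All parameters are assumptions.

def simulate_totals(start=14000.0, weeks=19, n_paths=10000,
                    mu=1650.0, phi=0.5, theta=0.3, sd=150.0, seed=7):
    rng = random.Random(seed)
    totals = []
    for _ in range(n_paths):
        total, prev_dev, prev_eps = start, 0.0, 0.0
        for _ in range(weeks):
            eps = rng.gauss(0.0, sd)
            dev = phi * prev_dev + eps + theta * prev_eps  # ARMA(1,1) deviation
            total += mu + dev
            prev_dev, prev_eps = dev, eps
        totals.append(total)
    return sorted(totals)

totals = simulate_totals()
p75 = totals[int(0.75 * len(totals)) - 1]   # read percentiles off the sorted totals
p95 = totals[int(0.95 * len(totals)) - 1]
```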

Analysis and Comparison of the Alternatives

The first time the author used @Risk was in the early 1990s, providing input for a cost estimate risk evaluation, and he had not touched it again until this week. It has changed a lot since then and, to be honest, presented a steep learning curve. Requests for assistance from the software developer’s technical support were unfortunately not forthcoming; the author was instead directed to review hours of online tutorials in order to determine what needed to be developed. The software developer needs to listen to beginners’ needs and, if necessary, assist first-time users if they want to recruit new customers. This will probably be reflected in the evaluation when writing the final blog in the series.

Anyway, developing the model for this blog could be termed a “hit or miss” exercise, but as I had already performed two of the other forecasting options I had an idea of what was required.

Using the 7 data points in Table 1, inputting them into a “Time Series” model and using the ARMA methodology to generate the remaining 19 weeks provided the following curve. Please note that the model provided more data weeks than required, so only the additional 19 were selected.

Figure 1 – ARMA Time Series Forecast curve

The legend shows the outer pink/red bar above the mean line as being equal to 95% (P95), while the outer blue line is equal to 75% (P75). As this is a simulation there is no need for any further PERT calculations. It would have been nice to show the probability table, but after many unsuccessful attempts to generate it, this was given up as potentially beyond the author’s abilities at his current level with the software. A zoom-in snapshot (figure 2) gives the reader a better idea of how close the points are.

Figure 2 – Zoom-in to show P75 thru’ P95

It should be noted that these points appear to be closer together than in the previous two examples; therefore it appears that a smaller variance will be encountered, which suggests a lower probability number may be chosen.

From the above zoom in, the following figures have been determined; P75 = $44,900 / P85 = $45,250 / P90 = $45,375 / P95 = $45,500 / P98 = $N/A.

Selection of Preferred Alternative

This blog is the fourth in a series of six; the last blog will compare the alternatives from each blog and then select a preferred method.

Monitoring Post Evaluation Performance

No matter which forecasting method is finally decided upon, projects are dynamic, and forecasting methods vary, it is recommended to review the feasible alternatives on a regular basis, determining any appearing trends.

This is a continual process of checking, reviewing, and monitoring to ensure the correct method is being used to provide the most accurate result for management level decisions to be effective.

References

  • Guild of Project Controls. (2015, October 3). Guild of Project Controls Compendium and Reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). Retrieved from http://www.planningplanet.com/guild/gpccar/assess-prioritize-and-quantify-risks-opportunities
  • Guild of Project Controls. (2015, October 3). 08.9.3.3 Monte Carlo Simulation – Guild of project controls compendium and reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). Retrieved August 13, 2017, from http://www.planningplanet.com/guild/gpccar/conducting-a-cost-risk-analysis
  • @Risk [Computer program] v7.5. 798 Cascadilla St, Ithaca, NY 14850, USA: Palisade Corporation (2016).

W11.1_SJP_Forecasts Part 3

 

Issue Identification and Appraisal

In blogs 9 and 10, we commenced looking at several forecasting methods that can be applied to determine what the Estimate at Complete (EAC) would be for the sample project data we are using. Unfortunately, last week’s blog provided some erroneous information; Dr. Paul Giammalvo rightly pointed out that regardless of the method chosen, the analysis will rely on the PERT curve and Z table to determine the desired ‘P’ level. As the example last week was based on the IEAC methods, the results need to be attributed to those as well. The Z-table ‘P’ figures are provided below in the ‘Develop Outcomes’ section. This week’s blog continues with the problem statement “Other than IEAC, what other forecasting method could have been used to provide the Estimate at Complete (EAC) range for the weekly report?”

Feasible Alternatives

The feasible alternatives are:

  • 6 IEAC methods outlined in the Guild of Project Controls Compendium and Reference (GPC) Module 4.4 ‘Assess, Prioritize and Quantify Risks/Opportunities’
  • MS Excel’s “Best Fit” regression analysis curve
  • Monte Carlo Simulation

Each method is being evaluated in separate blogs, and then a conclusion blog will finish up the series. This week will evaluate MS Excel’s “Best Fit” curve.

Develop the Outcomes for each 

6 IEAC methods – Evaluated in blogs 9 & 10, and will form part of the final analysis in the final blog of the series. P75 = $50,961 / P85 = $51,187 / P90 = $51,340 / P95 = $51,566 / P98 = $51,821

MS Excel “Best Fit” Curves – Uses data set of values (ACWP data points from week 1 thru’ week 7) then plots the best fit line or curve through the data points to week 26 completion point in the future.

Monte Carlo Simulation – Requires @Risk software to perform the exercise. This option will be reviewed in a future blog in this series when the software is received.

Selection Criteria

To evaluate the “Best Fit” curve, there is a need to provide some data points, so the ACWP data points from week 1 through week 7 from the weekly report are going to be used.

Table 1 – ACWP Data Points

MS Excel uses data points to produce trendlines. As the data points are limited, the following regression methods will be used in the analysis to produce a forecast spread on which to apply the PERT analysis:

  • Linear Regression
  • Polynomial Regression (2nd order)
  • Logarithmic Regression

From the above trendline proposals, the highest value will represent the “Worst Case”, the lowest value will be the “Best Case”, and the mid value will provide the “Most Likely”.
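The linear-regression leg of this spread can be sketched with a closed-form ordinary-least-squares fit; the ACWP figures below are illustrative placeholders for Table 1, not the actual report data:

```python
# Ordinary least squares fit of y = a + b*x through the weekly ACWP points,
# extrapolated to week 26 - the "Linear Regression" trendline option above.
# The ACWP data points are illustrative placeholders for Table 1.

def linear_forecast(xs, ys, x_future):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))   # slope
    a = mean_y - b * mean_x                      # intercept
    return a + b * x_future

weeks = list(range(1, 8))
acwp = [1800, 3700, 5800, 7600, 9900, 11800, 14000]   # hypothetical ACWP
eac_week26 = linear_forecast(weeks, acwp, 26)
```

The polynomial and logarithmic trendlines follow the same least-squares idea, with transformed terms (x² or ln x) in the fit.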

Analysis and Comparison of the Alternatives

Plotting the 7 data points in Table 1 using a scatter chart provides the following; see figure 1.

Figure 1 – Weekly Report Data Points

Using these 7 data points and MS Excel’s “Linear regression” trendline facility, and trending them out to week 26, provides the chart in figure 2. The trendline has R² = 0.985.

Figure 2 – Linear Trendline

Again, using these 7 data points, but this time using MS Excel’s “Polynomial regression” trendline facility, and trending them out to week 26, provides the chart in figure 3. The trendline has R² = 0.9883.

Figure 3 – Polynomial (2nd) Trendline

Again, using these 7 data points, but this time using MS Excel’s “Logarithmic regression” trendline facility, and trending them out to week 26, provides the chart in figure 4. The trendline has R² = 0.9369.

Figure 4 – Logarithmic Trendline

Plotting all three trendlines on one chart (figure 5) reveals the range of the trendline forecasts.

Figure 5 – All Trendlines on one Chart

This allows the reader to visually identify the “Best Case”, “Most Likely Case” and “Worst Case”; refer to Table 2 below.

Table 2 – Trendline Forecasts

Therefore, the following values are assigned:

  • “Best Case” (Optimistic) = $26,000
  • “Worst Case” (Pessimistic) = $67,000

and

  • “Most Likely Case” = $43,000

Using the PERT formula:

Step 1 – PERT weighted Mean

(Optimistic + (4 x Most Likely) + Pessimistic) / 6

= $ ((26,000 + (4 x 43,000) + 67,000) / 6)

= $ 265,000 / 6

= $ 44,167

Step 2 – Standard Deviation

(Largest value – Smallest value) / 6

= $(67,000 – 26,000) / 6

= $ 41,000 / 6

= $ 6,833.33

Step 3 – Variance

Variance = Sigma^2

= $6,833.33^2

= $46,694,444

Figure 6 shows the normal distribution curve with the ‘Mean’ value shown.

Figure 6 – Normal distribution curve showing ‘Mean’ value

With sigma (σ) = $6,833.33, the variance $46,694,444 divided by 2 = $23,347,222, which if divided by σ gives +/- $3,417. This means the shape of the curve is a lot lower than the curve in figure 6, similar to the version shown in Blog 10.

Again, the result from Step 3 reveals that the very large variance means the number is risky, so a higher P number (P90, P95, P98?) needs to be considered when selecting one. However, for this blog we will show the P75, P85, P90, P95 and P98 values; refer to figure 7.

Figure 7 – Normal distribution curve showing P75, P85, P90, P95 & P98

So, for this forecasting exercise the following figures have been determined; P75 = $48,776 / P85 = $51,249 / P90 = $52,923 / P95 = $55,407 / P98 = $58,201.
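The whole pipeline above (PERT mean and standard deviation, then one-sided z multipliers from a standard Z table) can be condensed into a few lines; using the $26,000 / $43,000 / $67,000 figures from this blog, it reproduces the published P values to within a dollar of rounding:

```python
# PERT mean / std. deviation, then probability figures via one-sided
# standard-normal z multipliers from a Z table.

def pert(optimistic, most_likely, pessimistic):
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    sd = (pessimistic - optimistic) / 6
    return mean, sd

Z = {75: 0.6745, 85: 1.0364, 90: 1.2816, 95: 1.6449, 98: 2.0537}

mean, sd = pert(26000, 43000, 67000)   # mean ≈ $44,167, sd ≈ $6,833.33
p_values = {p: round(mean + z * sd) for p, z in Z.items()}
```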

Selection of Preferred Alternative

This blog is the third in a series of five; the last blog will compare the three alternatives from each blog and then select a preferred method.

Monitoring Post Evaluation Performance

No matter which forecasting method is finally decided upon, projects are dynamic, and forecasting methods vary, it is recommended to review the feasible alternatives on a regular basis, determining any appearing trends.

This is a continual process of checking, reviewing, and monitoring to ensure the correct method is being used to provide the most accurate result for management level decisions to be effective.

References

  • Bergeron, E. (n.d.). Standard normal probabilities. Retrieved from http://www.stat.ufl.edu/~athienit/Tables/Ztable.pdf
  • Guild of Project Controls. (2015, October 3). Guild of Project Controls Compendium and Reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). Retrieved from http://www.planningplanet.com/guild/gpccar/assess-prioritize-and-quantify-risks-opportunities
  • Theyerl, P. (2013, February 28). Excel – Multiple and varied trendlines [Video file]. Retrieved from https://www.youtube.com/watch?v=dsJnsuoVfA8
  • ExtendOffice. (2016). How to add best fit line/curve and formula in Excel? Retrieved from https://www.extendoffice.com/documents/excel/2642-excel-best-fit-line-curve-function.html#excel2013