W11_SJP_Forecasts Part 3

 

Issue Identification and Appraisal

In blogs 9 and 10, we began looking at several forecasting methods that can be applied to determine the Estimate at Complete (EAC) for the sample project data we are using. Unfortunately, last week's blog contained some erroneous information, which Dr. Paul Giammalvo rightly pointed out: regardless of the method chosen, the analysis still relies on the PERT curve and Z-table to determine the desired 'P' level, and as last week's example was based on the IEAC methods, the results need to be attributed to that approach as well. The corrected Z-table 'P' figures are provided below in the 'Develop the Outcomes' section. This week's blog continues with the problem statement "Other than IEAC, what other forecasting method could have been used to provide the Estimate at Complete (EAC) range for the weekly report?"

Feasible Alternatives

The feasible alternatives are:

  • 6 IEAC methods outlined in the Guild of Project Controls Compendium and Reference (GPC) Module 4.4 'Assess, Prioritize and Quantify Risks/Opportunities'
  • MS Excel's "Best Fit" regression analysis curve
  • Monte Carlo Simulation

Each method is being evaluated in a separate blog, and a concluding blog will finish up the series. This week evaluates MS Excel's "Best Fit" curve.

Develop the Outcomes for each 

6 IEAC methods – Evaluated in blogs 9 & 10; these will form part of the final analysis in the last blog of the series. P75 = $50,961 / P85 = $51,187 / P90 = $51,340 / P98 = $51,821

MS Excel "Best Fit" Curves – Uses a data set of values (the ACWP data points from week 1 through week 7) and plots the best-fit line or curve through the data points out to the week 26 completion point.

Monte Carlo Simulation – Requires @Risk software to perform the exercise. This option will be reviewed in a future blog in this series when the software is received.

Selection Criteria

To evaluate the "Best Fit" curve, some data points are needed, so the ACWP data points from week 1 through week 7 of the weekly report are going to be used.

Table 1 – ACWP Data Points

MS Excel uses data points to produce trendlines. As the data points are limited, the following selections will be used in the analysis in order to get a forecast spread to apply the PERT analysis to:

  • All 7 Data points
  • First 4 Data points
  • Last 4 Data points
  • Middle 5 Data points

From the above trend line proposals, the highest value will represent the “Worst Case”, the lowest value will be the “Best Case”, and the two mid values will be averaged to provide “Most Likely”.

Analysis and Comparison of the Alternatives

Plotting the 7 data points in Table 1 on a scatter chart provides the following; see figure 1.

Figure 1 – Weekly Report Data Points

Using these 7 data points, MS Excel's trendline facility can trend them out to week 26, which provides the chart in figure 2. The trendline formula is y = 2637.5x – 150, with R² = 0.9693.

Figure 2 – Trendline using all 7 Data Points
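For anyone wanting to reproduce the trendline outside of Excel, the sketch below fits the same least-squares straight line and extrapolates it to week 26. The ACWP values used are placeholders (the real figures sit in Table 1), so only the mechanics carry over; repeating the fit on the first four, last four and middle five points gives the other trendlines discussed below.

```python
import numpy as np

# Placeholder ACWP values for weeks 1 to 7 (the real figures are in Table 1).
weeks = np.arange(1, 8)
acwp = np.array([2500, 5200, 7900, 10400, 13100, 15700, 18300], dtype=float)

# Least-squares straight line, equivalent to Excel's linear trendline.
slope, intercept = np.polyfit(weeks, acwp, deg=1)

# R-squared, matching the value Excel reports alongside the trendline.
fitted = slope * weeks + intercept
ss_res = np.sum((acwp - fitted) ** 2)
ss_tot = np.sum((acwp - acwp.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

# Extrapolate the fitted line to the week 26 completion point.
forecast_week26 = slope * 26 + intercept
print(f"y = {slope:.1f}x + {intercept:.1f}, R^2 = {r_squared:.4f}")
print(f"Projected cost at week 26: ${forecast_week26:,.0f}")
```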

To get additional trendlines, various subsets of the 7 available data points are required; these were noted in the selection criteria above. Using the first four data points and trending them out to week 26 provides the chart in figure 3. The trendline formula is y = 2847.5x – 525, with R² = 0.9739.

Figure 3 – Trendline using first 4 Data Points

Using the last four data points and trending them out to week 26 provides the chart in figure 4. The trendline formula is y = 2075x + 2787.5, with R² = 0.9857.

Figure 4 – Trendline using Last 4 Data Points

Using the middle five data points and trending them out to week 26 provides the chart in figure 5. The trendline formula is y = 2508.9x + 282.14, with R² = 0.9856.

Figure 5 – Trendline using 5 Middle Data Points

Plotting all four trendlines on one chart (figure 6) reveals the range of the forecasts from the trendlines.

Figure 6 – All Trendlines on one Chart

This allows the "Best Case" and "Worst Case" to be seen visually and, by averaging the two middle trends, the "Most Likely Case". The forecast figures are shown in Table 2.

Table 2 – Trendline Forecasts

Therefore, the following values are assigned:

  • “Best Case” (Optimistic) = $56,738
  • “Worst Case” (Pessimistic) = $73,510

and

  • “Most Likely Case” = $66,969

Using the PERT formula:

Step 1 – PERT weighted Mean

(Optimistic + (4 x Most Likely) + Pessimistic) / 6

= $ ((56,738 + (4 x 66,969) + 73,510) / 6)

= $ 398,124 / 6

= $ 66,354

Step 2 – Standard Deviation

(Largest value – Smallest value) / 6

= $(73,510 – 56,738) / 6

= $ 16,772 / 6

= $ 2,795.33

Step 3 – Variance

Variance = Sigma^2

= 2,795.33^2

= 7,813,888
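The three steps above reduce to a few lines of arithmetic. As a quick check, the sketch below reproduces them with the trendline figures from Table 2:

```python
optimistic = 56_738   # "Best Case" trendline forecast
most_likely = 66_969  # "Most Likely Case" (average of the two middle trendlines)
pessimistic = 73_510  # "Worst Case" trendline forecast

# Step 1 - PERT weighted mean
mean = (optimistic + 4 * most_likely + pessimistic) / 6

# Step 2 - standard deviation (range divided by 6)
sigma = (pessimistic - optimistic) / 6

# Step 3 - variance (sigma squared)
variance = sigma ** 2

print(f"Mean = ${mean:,.0f}, Sigma = ${sigma:,.2f}, Variance = {variance:,.0f}")
# Mean = $66,354, Sigma = $2,795.33, Variance = 7,813,888
```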

Figure 7 shows the normal distribution curve with the ‘Mean’ value shown.

Figure 7 – Normal distribution curve showing ‘Mean’ value

With Sigma (σ) = 2,795.33, the variance of $7,813,888 divided by 2 = $3,906,944, which if divided by σ = +/- 1,398 σ. This means the shape of the curve is a lot lower than the curve in figure 7, similar to the version shown in Blog 10.

Again, the result from Step 3 reveals that the very large variance means the forecast is risky, so a higher P number (P90, P95, P98?) needs to be considered when selecting one; however, for this blog we will show the P75, P85, P90 and P98 values, refer to figure 8.

Figure 8 – Normal distribution curve showing P75, P85, P90 & P98 values

So, for this forecasting exercise the following figures have been determined; P75 = $68,239 / P85 = $69,251 / P90 = $69,936 / P98 = $72,095.
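These P-values are simply the PERT mean plus the Z-table value for the chosen confidence level multiplied by sigma. A minimal sketch using Python's standard normal distribution (statistics.NormalDist) reproduces the figures quoted above:

```python
from statistics import NormalDist

mean, sigma = 66_354, 2_795.33
z = NormalDist()  # standard normal distribution, i.e. the Z-table

for p in (0.75, 0.85, 0.90, 0.98):
    value = mean + z.inv_cdf(p) * sigma   # mean + (Z value for the P level) x sigma
    print(f"P{int(p * 100)} = ${value:,.0f}")

# Prints P75 = $68,239, P85 = $69,251, P90 = $69,936 and P98 = $72,095 (one per line),
# matching the figures quoted above.
```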

Selection of Preferred Alternative

This blog is the third in a series of five; the final blog will compare the three alternatives from each blog and then select a preferred method.

Monitoring Post Evaluation Performance

No matter which forecasting method is finally decided upon, projects are dynamic and forecasting methods vary, so it is recommended to review the feasible alternatives on a regular basis to determine any emerging trends.

This is a continual process of checking, reviewing, and monitoring to ensure the correct method is being used to provide the most accurate result for management level decisions to be effective.

References

  • Bergeron, E. (n.d.). Standard normal probabilities. Retrieved from http://www.stat.ufl.edu/~athienit/Tables/Ztable.pdf
  • Guild of Project Controls. (2015, October 3). GUILD OF PROJECT CONTROLS COMPENDIUM and REFERENCE (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). Retrieved from http://www.planningplanet.com/guild/gpccar/assess-prioritize-and-quantify-risks-opportunities
  • Theyerl, P. (2013, February 28). Excel – Multiple and varied trendlines [Video file]. Retrieved from https://www.youtube.com/watch?v=dsJnsuoVfA8

 

W10_SJP_Forecasts Part 2

 

Issue Identification and Appraisal

In Blog 9, the merits of choosing a forecasting method were discussed with reference to the Guild of Project Controls Compendium and Reference (GPC) Module 9.5 and NDIA's Guide to Managing Programs Using Predictive Measures, which provided a choice of five IEAC forecasts. This week's blog supplements the work performed last week, with the problem statement "Other than IEAC, what other forecasting method could have been used to provide the Estimate at Complete (EAC) for the weekly report?"

Feasible Alternatives

Review of the Guild of Project Controls Compendium and Reference (GPC) Module 4.4 'Assess, Prioritize and Quantify Risks/Opportunities' outlines two methods: i) Monte Carlo simulation, and ii) the Project or Program Evaluation and Review Technique (PERT) formula.

A third method is to utilize MS Excel's "Best Fit" regression curve to generate Best, Most Likely, and Worst case curves.

The alternatives are:

  • Monte Carlo Simulation
  • PERT Formula
  • MS Excel’s “Best Fit” regression analysis curve

These are three sound alternatives, so each method will be reviewed in a separate blog, and the exercise will conclude with a summarizing fifth and final blog. This week looks at the application of the PERT formula to last week's results.

Develop the Outcomes for each

  1. Monte Carlo Simulation

Requires @Risk or Pertmaster simulation software to perform the exercise. This option cannot be reviewed at present but will form part of a future blog posting on forecasting subject.

  2. PERT

PERT is a key tool/technique in determining 'Expected Monetary Value'. It enables the use of historic cost information to determine the Mean, Sigma and Variance.

  3. MS Excel "Best Fit" Curves

Uses a data set of values (the ACWP data points from week 1 through week 7) and plots the best-fit line or curve through the data points out to the week 26 completion point.

Selection Criteria

To evaluate the PERT formula, the five IEAC forecast figures from last week's blog will be used.

Figure 1 – Zoom-in of Forecast Comparisons

The five values are $48,494, $50,165, $50,322, $51,404 and $52,234.

Analysis and Comparison of the Alternatives

Using the criteria from figure 1, the Best Case, Worst Case, and Most Likely can be determined: the "Best Case" is the lowest value, the "Worst Case" the highest value, and the "Most Likely Case" the average of the three figures in the middle. Therefore, the following values are assigned:

  • “Best Case” (Optimistic) = $48,494
  • “Worst Case” (Pessimistic) = $52,234

and

  • “Most Likely Case” = average($51,404, $50,322, $50,165) = $50,630

Using the PERT formula:

Step 1 – PERT weighted Mean

(Optimistic + (4 x Most Likely) + Pessimistic) / 6

= $ ((48,494 + (4 x 50,630) + 52,234) / 6)

= $ 303,248 / 6

= $ 50,541

Step 2 – Standard Deviation

(Largest value – Smallest value) / 6

= $(52,234 – 48,494) / 6

= $ 3,740 / 6

= $ 623.33

Step 3 – Variance

Variance = Sigma^2

= $623.33^2

= $388,540

Figure 2 shows the normal distribution curve with the ‘Mean’ value shown.

Figure 2 – Normal distribution curve showing ‘Mean’ value

With Sigma (σ) = 623.33, the variance $388,540 divided by 2 = $194,270, which if divided by σ = +/- 312 σ. This means the shape of the curve is a lot lower than the curve in figure 2 and looks like a straight line across the chart; see figure 3 below, which uses the same 'X and Y' axis ranges.

Figure 3 – Snapshot of Distribution curve for Sigma +/- 312

Therefore, the result from Step 3 reveals that the very large variance means the forecast is very risky, and so a higher P number (P90, P95, P98?) needs to be considered; refer to figure 4.

Figure 4 – Normal Distribution curve with 90% to 98% Probability

Choosing the 90%-98% probability returns a forecast range of $51,340 to $51,821, with the latter being the closer forecast.

Selection of Preferred Alternative

As this blog is the second in a series of five, the last blog will compare the alternatives from each blog and then select a preferred method.

Monitoring Post Evaluation Performance

No matter which forecasting method is finally decided upon, projects are dynamic and forecasting methods vary, so it is recommended to review the feasible alternatives on a regular basis to determine any emerging trends.

This is a continual process of checking, reviewing, and monitoring to ensure the correct method is being used to provide the most accurate result for management level decisions to be effective.

References

  • Bergeron, E. (n.d.). Standard normal probabilities. Retrieved from http://www.stat.ufl.edu/~athienit/Tables/Ztable.pdf
  • Guild of Project Controls. (2015, October 3). GUILD OF PROJECT CONTROLS COMPENDIUM and REFERENCE (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). Retrieved from http://www.planningplanet.com/guild/gpccar/assess-prioritize-and-quantify-risks-opportunities
  • Kyd, C. (2014). How to create normal curves with shaded areas in new excel. Retrieved from http://www.exceluser.com/excel_dashboards/normal-curve-new-excel.htm

W09_SJP_Forecasts

 

Issue Identification and Appraisal

Part of the Certification Mentoring course that I'm currently embarked on requires that a weekly report is generated to measure progress achievement against the budgeted hours and costs developed at the commencement of the course. This week's problem statement is "What forecasting method should be used to provide the Estimate at Complete (EAC) for the weekly report?"

Feasible Alternatives

Review of the Guild of Project Controls Compendium and Reference (GPC) Module 9.5 ‘Project Performance Forecasting’ provides five independent EAC’s (IEAC), four coming from NDIA’s Guide to Managing Programs Using Predictive Measures, and one coming from the collective experience of the GPC authors.

The alternatives are:

  • IEAC1 = ACWP + ((BAC – BCWP) / CPI)
  • IEAC2 = ACWP + ((BAC – BCWP) / SPI)
  • IEAC3 = ACWP + ((BAC – BCWP) / (CPI * SPI))
  • IEAC4 = ACWP + ((BAC – BCWP) / ((0.2 * SPI) + (0.8 * CPI)))
  • IEAC5 = ACWP + ((BAC – BCWP) / (BCWP / ACWP))

We will review all five against a previous weekly report to determine the spread of the forecast, finally selecting the most accurate method.

Figure 1 – NDIA IEAC’s
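To make the five formulas concrete, here is a small sketch coding each IEAC as a function of the standard EVM inputs. The status values at the bottom are hypothetical and are only there to show the mechanics; the real week 7 figures appear in the tables that follow.

```python
def ieac1(acwp, bcwp, bac, cpi, spi):
    """Future work performed at the cumulative cost efficiency (CPI)."""
    return acwp + (bac - bcwp) / cpi

def ieac2(acwp, bcwp, bac, cpi, spi):
    """Future work performed at the cumulative schedule efficiency (SPI)."""
    return acwp + (bac - bcwp) / spi

def ieac3(acwp, bcwp, bac, cpi, spi):
    """Future work influenced by both cost and schedule performance."""
    return acwp + (bac - bcwp) / (cpi * spi)

def ieac4(acwp, bcwp, bac, cpi, spi):
    """Weighted composite index: 20% SPI, 80% CPI."""
    return acwp + (bac - bcwp) / (0.2 * spi + 0.8 * cpi)

def ieac5(acwp, bcwp, bac, cpi, spi):
    """Productivity based: remaining work divided by BCWP/ACWP."""
    return acwp + (bac - bcwp) / (bcwp / acwp)

# Hypothetical status values, for illustration only.
# Note: with CPI taken as BCWP/ACWP, IEAC1 and IEAC5 coincide.
acwp, bcwp, bac = 14_000, 13_500, 49_795
cpi, spi = bcwp / acwp, 0.95

for name, formula in [("IEAC1", ieac1), ("IEAC2", ieac2), ("IEAC3", ieac3),
                      ("IEAC4", ieac4), ("IEAC5", ieac5)]:
    print(f"{name} = ${formula(acwp, bcwp, bac, cpi, spi):,.0f}")
```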

Develop the Outcomes for each

NDIA and GPC provide an assumption and a comment for each forecasting method:

  1. IEAC1

Assumption: Future cost performance will be the same as all past cost performance.

Comment: "Best Case" when CPI is below 1.00 / "Worst Case" when CPI is above 1.00

  2. IEAC2

Assumption: Future cost performance will be influenced by past schedule performance.

Comment: Use with caution as SPI is diluted by LOE and loses accuracy over last third of the project.

  3. IEAC3

Assumption: Future cost performance will be influenced by past schedule and cost performance.

Comment: In contrast to IEAC1, this calculation typically yields the “Worst Case” (pessimistic) when the CPI and SPI are below 1.00.

  4. IEAC4

Assumption: Similar to IEAC3, except increased weight is placed on CPI.

Comment: More reliable than IEAC3 late in a project since less weight is given to SPI.

  5. IEAC5

Assumption: Future cost performance will be influenced by productivity

Comment: Uses actual costs along with actual productivity, plus/minus adjustments are made to cost budgets to reflect realistic ‘real’ or ‘true’ costs.

Selection Criteria

To evaluate the effect of the different IEAC’s, the figures from week 7’s report will be used to allow generation of each forecast.

Table 1 – Weekly Report 7 Status

Analysis and Comparison of the Alternatives

Using the week 7 status data in Table 1, each of the alternatives is individually evaluated.

IEAC1

Using the IEAC1 formula = ACWP + ((BAC-BCWP)/CPI), the combined forecast for all the WBSs totals $50,165, which is $370 (0.74%) over the current control budget of $49,795.

Table 2 – IEAC1 Forecast

IEAC2

Using the IEAC2 formula = ACWP + ((BAC-BCWP)/SPI), the combined forecast for all the WBSs totals $51,404, which is $1,609 (3.23%) over the current control budget of $49,795.

Table 3 – IEAC2 Forecast

IEAC3

Using the IEAC3 formula = ACWP + ((BAC-BCWP)/(CPI*SPI)), the combined forecast for all the WBSs totals $52,234, which is $2,439 (4.90%) over the current control budget of $49,795.

Table 4 – IEAC3 Forecast

IEAC4

Using the IEAC4 formula = ACWP + ((BAC-BCWP)/((0.2 * SPI) + (0.8 * CPI))), the combined forecast for all the WBSs totals $50,322, which is $527 (1.06%) over the current control budget of $49,795.

Table 5 – IEAC4 Forecast

IEAC5

Using the IEAC5 formula = ACWP + ((BAC-BCWP)/(BCWP/ACWP)), the combined forecast for all the WBSs totals $48,494, which is $1,301 (2.61%) under the current control budget of $49,795.

Table 6 – IEAC5 Forecast

Comparison of Forecasting Methods

Using the 5 forecasting methods on the same criteria selection provides a range from -$1,301 (-2.61%) to +$2,439 (+4.90%). As the project is in week 7 of a 26-week program, and is forecasting a 2-week delay to week 28, showing the forecasts on a chart gives a better idea of the spread of results.

Figure 2 – Comparison of Forecasts

Zooming-in to the final weeks gives a better view of the results spread.

Figure 3 – Zoom-in of Forecast Comparisons

The decision is which of the forecasts is most suited to be used going forward; there are no right or wrong methods here, just the question of which forecast is better suited for reporting over the remaining 19 weeks.

What should not happen is the forecast changing week-in, week-out, the saw-tooth forecasting that shifts each week. Forecasts need to be developed with some thought behind them, not constantly changed.

So, at Week 7, there are four forecasts above the control budget and one below it. Review of the trends to date reveals that there has been a positive step change, but for how long is the question. Would an individual forecast a budget underrun with confidence 25% of the way into a project, when the run to date shows only one good week out of three since the recovery schedule was implemented? Probably not, but what criteria should be used?

The program itself utilizes a +/-5% margin on the CPI & SPI, which if applied to the $49,795 provides a range between $47,305 and $52,285. If this were adopted, at this stage all five methods would fall within the +/- acceptable limits. Of these five, one forecasts an underrun while the other four forecast overruns. However, the problem statement posed asks for one forecast method to be selected for supporting the weekly report at this stage in the project.

Performing a SWOT analysis identifies three methodologies with strengths, and from these it is possible to select a suitable option for 'this stage' in the project. 'This stage' is mentioned because later on circumstances may dictate that a different methodology is better suited.

Figure 4 – SWOT analysis of methods

From the above SWOT analysis, the use of IEAC3 would tend to be favoured during the early stages in the project.

Selection of Preferred Alternative

Based on the results of the analysis, the preferred method for developing the forecast from the week 7 progress data is IEAC3.

Monitoring Post Evaluation Performance

As projects are dynamic and can quickly turn from good to bad, and vice-versa, it is recommended to review the recommended option regularly and determine the trends that are appearing; likewise, the same should be performed for the other four methodologies, again reviewing trends.

So, this needs to be a continual process of checking, reviewing, and monitoring to ensure the correct method is providing the right result for management level decisions to be effective.

References

  • 5 Project performance forecasting – Guild of project controls compendium and reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). (2015, October 3). Retrieved July 20, 2017, from http://www.planningplanet.com/guild/gpccar/project-performance-forecasting
  • 3.3.04 Force Field or SWOT Analysis – Guild of project controls compendium and reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). (2015, October 3). Retrieved July 10, 2017 from http://www.planningplanet.com/guild/gpccar/identify-risks-opportunities
  • National Defense Industrial Association. (2014). A Guide to managing programs using predictive measures. Author.

W08.1_SJP_The effect of Retained Logic and Progress Override on EV Calculations

 

Issue Identification and Appraisal

A subject that raises emotions within our industry is the decision on how to deal with schedule updates when out-of-sequence work has been performed. Having spent the past 20 years working for 'the Client', where our Project Controls procedures stipulated that contractors use 'retained logic' when scheduling, it was occasionally noted during that time that there were differences in Earned Value reporting. This week's problem statement is "What is the Impact of Retained Logic and Progress Override on the Earned Value?"

Feasible Alternatives

Within Oracle P6's schedule options there are three choices: 'Retained Logic', 'Progress Override' and 'Actual Dates'; see figure 1. We will review all three in this blog along with their effects on earned value calculations.

Figure 1 – P6 Schedule Options

Develop the Outcomes for each

The Oracle P6 User Guide mentions the schedule update methods only briefly: "When scheduling progressed activities use: Specify the type of logic used to schedule activities that are in progress. When you choose Retained Logic, the remaining duration of a progressed activity is not scheduled until all predecessors are complete. When you choose Progress Override, network logic is ignored and the activity can progress without delay. When you choose Actual Dates, backward and forward passes are scheduled using actual dates."

  1. Actual Dates

Using this method, the update uses the actual dates and the existing logic, so in effect there appears to be no difference from using the retained logic method (refer to figure 2). Further online research confirmed this in an article called 'Retained Logic and Progress Override in Primavera P6', which stated "The remainder of the activity is still treated the same as when we use Retained Logic. P6 will not allow the remainder of the activity to continue until its predecessor is complete."

Figure 2 – Schedule Update using Actual Dates

  2. Retained Logic

As the title suggests, this method retains the existing network logic, so for an out-of-sequence activity with a Finish-to-Start logic tie, the successor will have a gap in its bar until the predecessor finishes (refer to figure 3).

Figure 3 – Schedule Update using Retained Logic

Observation

Check the date calculations on 'Actual Dates' (figure 2) and 'Retained Logic' (figure 3) for activity A4020. Both methods provide the same result: start 03-Jul-17A and finish 05-Aug-17. So, it appears that in this example these two methods act in the same manner; later, the cost profiling should confirm whether or not they do.

  3. Progress Override

Progress override ignores the network logic and progresses out-of-sequence activities as if the Finish-to-Start logic tie did not exist. It does not change anything in the network; it simply ignores the tie and schedules the activities. This results in a lower float calculation than retained logic, as the activity is not gapped/pushed out to match completion of the predecessor (refer to figure 4).

Figure 4 – Schedule Update using Progress Override
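To illustrate the difference in the date calculations, the toy model below deliberately simplifies the situation to a single out-of-sequence successor with a Finish-to-Start tie to an unfinished predecessor. It is not P6, and the dates and durations are hypothetical; it only mimics the two scheduling rules described above.

```python
from datetime import date, timedelta

def resume_date(data_date, predecessor_finish, mode):
    """Earliest date the remaining work of the out-of-sequence successor can resume."""
    if mode == "retained_logic":
        # Remaining duration waits until the FS predecessor actually finishes,
        # which is what puts the gap in the successor's bar.
        return max(data_date, predecessor_finish + timedelta(days=1))
    if mode == "progress_override":
        # The FS tie is ignored; remaining work continues from the data date.
        return data_date
    raise ValueError(mode)

data_date = date(2017, 7, 10)            # status/update date (hypothetical)
predecessor_finish = date(2017, 7, 21)   # forecast finish of the FS predecessor (hypothetical)
remaining_days = 10

for mode in ("retained_logic", "progress_override"):
    start = resume_date(data_date, predecessor_finish, mode)
    finish = start + timedelta(days=remaining_days - 1)
    print(f"{mode:17s}: remaining work {start} -> {finish}")
```

On this toy model, 'Actual Dates' would return the same resume date as retained logic, mirroring the observation made above.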

Selection Criteria

To evaluate the effect Actual Dates, Retained Logic or Progress Override has on the EV calculation, a small resource-loaded schedule was developed and an out-of-sequence activity introduced. The schedule has 25 activities, 7 of which have been completed, leaving 18 remaining. There were no constraints and the schedule log is clean apart from the one out-of-sequence activity introduced for this evaluation.

Analysis and Comparison of the Alternatives

Before diving headlong into the analysis, it is worth briefly mentioning why out-of-sequence activities occur. These tend to crop up when an activity has been identified that can be started ahead of the sequence in which it had originally been planned. For example, there are two activities, A and B; the original network showed that A had to complete before B started. However, after some analysis it was determined that, since A had progressed past a certain point, B could commence before A finished. So, B is started, and when a P6 schedule update is performed it recognizes this and flags the out-of-sequence activity as an error on the Schedule Log it produces. This is probably due to the original schedule logic being defective when it was developed. For certain, such changes need to be fully evaluated prior to implementation so they do not affect the critical path activities.

By rights, the error should be addressed, the logic revised, and the schedule log left free from errors. Remember that any logic changes made should be put onto the schedule changes register, which the PCM and PM review and sign off weekly/monthly given the reporting cycles.

For the purposes of this blog, the out-of-sequence activity is still there, and how each method deals with the profiling of costs is what needs to be reviewed.

Actual Dates Earned Value Analysis

As demonstrated in figure 2, the actual dates method puts a gap in the bar. Review of the cost phasing of activity A4020 shows that the costs are distributed from week 6 to week 10 (a 5-week timeframe); see figure 5.

Figure 5 – Cost profile of activity A4020 using Actual Dates

Retained Logic Earned Value Analysis

As demonstrated in figure 3, the retained logic method puts a gap in the bar. Review of the cost phasing of activity A4020 shows that the costs are distributed from week 6 to week 10 (a 5-week timeframe); see figure 6.

Figure 6 – Cost profile of activity A4020 using Retained Logic

Progress Override Earned Value Analysis

Figure 4 above showed that the bar was intact with no gaps and the schedule had zero float, so it was still on track to complete on time. The costs for the activity were distributed from week 6 to week 9 (a 4-week timeframe); the cost profile is shown in figure 7.

Figure 7 – Cost profile of activity A4020 using Progress Override

To better see what is happening, refer to Table 1 which compares all three methods.

Table 1 – Comparison of Actual Dates, Retained Logic and Progress Override

Table 1 shows how, for the same activity, the earned value profiles change with the different methods. 'Actual Dates' and 'Retained Logic' have identical profiles and include the delay caused by the effect of the predecessor logic, while 'Progress Override' shows an unimpeded cost profile as it commences the activity by ignoring the network logic.
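The shift in the earned value profile in Table 1 can be reproduced with a simple linear spread of the activity's remaining budget over the weeks each method leaves available. The budget figure below is hypothetical; only the 4-week versus 5-week windows come from the figures above.

```python
# Remaining budget for activity A4020 (hypothetical figure, for illustration only).
remaining_budget = 5_000

def weekly_profile(first_week, last_week, budget):
    """Spread the budget evenly across the weeks the method makes available."""
    weeks = range(first_week, last_week + 1)
    per_week = budget / len(weeks)
    return {week: round(per_week, 2) for week in weeks}

# Progress Override: weeks 6-9 (4 weeks); Retained Logic / Actual Dates: weeks 6-10 (5 weeks).
print("Progress Override:", weekly_profile(6, 9, remaining_budget))
print("Retained Logic   :", weekly_profile(6, 10, remaining_budget))
```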

Table 2 – SWOT analysis of methods

From the above SWOT analysis there is a clear indication that the 'Retained Logic' method has many strengths. Out of the six strengths identified, retained logic lists four of them (67%), which is consistent with the legal "preponderance of the evidence" test, suggesting that the case for using 'Retained Logic' has a greater than 51% chance of being correct.

Selection of Preferred Alternative

Based on the analysis, 'Retained Logic' is the preferred method for reporting Earned Value against in-progress activities.

Monitoring Post Evaluation Performance

The issue of which method should be used, be it 'Progress Override', 'Retained Logic' or 'Actual Dates', is a much larger discussion than just the 'earned value' aspect covered here. There is a plethora of arguments out there regarding which method should be adopted.

The key to all this is ensuring sufficient detail is provided in the original planning, as outlined by the GAO Schedule Assessment Guide Best Practice 1 (capturing all activities). But no schedule is infallible, and there is always the potential for out-of-sequence work to occur. The Project Schedule Procedures therefore need a section outlining how such instances are addressed (fixing the defective logic that is causing the P6 errors) and documenting any changes individually on the schedule changes register.

References

  • Primavera P6 Enterprise Project Portfolio Management (Version 15.1) [Computer program]. Redwood Shores, CA, USA: Oracle (2016).
  • According to Oracle online P6 Professional Help (2017), “When scheduling progressed activities use: Specify the type of logic used to schedule activities that are in progress. When you choose Retained Logic, the remaining duration of a progressed activity is not scheduled until all predecessors are complete. When you choose Progress Override, network logic is ignored and the activity can progress without delay. When you choose Actual Dates, backward and forward passes are scheduled using actual dates.” (No page number)
  • According to R.Hendricks, “The remainder of the activity is still treated the same as when we use Retained Logic. P6 will not allow the remainder of the activity to continue until its predecessor is complete.”, retrieved from tepco.us website (August 2015, page 5 figure 1 lower comment note)
  • Woolf, M. B. (2012). CPM mechanics: The critical path method of modeling project execution strategy. Rochester, MI: ICS-Publications.
  • 3.3.7 Multi-Attribute Decision Making. (2015, November 2). Guild of project controls compendium and reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). (n.d.). Retrieved from http://www.planningplanet.com/guild/gpccar/managing-change-the-owners-perspective
  • 3.3.04 Force Field or SWOT Analysis. (2015, October 3). Guild of project controls compendium and reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). (n.d.). Retrieved from http://www.planningplanet.com/guild/gpccar/identify-risks-opportunities
  • United States. (2015). Best Practice Checklists. In GAO schedule assessment guide: Best practices for project schedules(pp. 25/26). U.S. Government Accountability Office.
  • United States. (2015). Best Practice 9: Updating the schedule using actual progress and logic. In GAO schedule assessment guide: Best practices for project schedules(pp. 121/134). U.S. Government Accountability Office.
  • Jackson and Wilson. (2013, August 28). Burdens of Proof- Preponderance of the Evidence vs Beyond a Reasonable Doubt – JACKSON & WILSON | Text or Call 800.661.7044. Retrieved from http://jacksonandwilson.com/burden-of-proof/

 

 

W08_SJP_The effect of Retained Logic and Progress Override on EV Calculations

 

Issue Identification and Appraisal

A subject that raises emotions within our industry is the decision on how to deal with schedule updates when out-of-sequence work has been performed. Having spent the past 20 years working for 'the Client', where our Project Controls procedures stipulated that contractors use 'retained logic' when scheduling, it was occasionally noted during that time that there were differences in Earned Value reporting. This week's problem statement is "What is the Impact of Retained Logic and Progress Override on the Earned Value?"

Feasible Alternatives

Within Oracle P6's schedule options there are three choices: 'Retained Logic', 'Progress Override' and 'Actual Dates'; see figure 1. We will review all three in this blog along with their effects on earned value calculations.

Figure 1 – P6 Schedule Options

Develop the Outcomes for each

The Oracle P6 User Guide mentions the schedule update methods only briefly: "When scheduling progressed activities use: Specify the type of logic used to schedule activities that are in progress. When you choose Retained Logic, the remaining duration of a progressed activity is not scheduled until all predecessors are complete. When you choose Progress Override, network logic is ignored and the activity can progress without delay. When you choose Actual Dates, backward and forward passes are scheduled using actual dates."

  1. Actual Dates

Using this method, the update uses the actual dates and the existing logic, so in effect there appears to be no difference from using the retained logic method (refer to figure 2). Further online research confirmed this in an article called 'Retained Logic and Progress Override in Primavera P6', which stated "The remainder of the activity is still treated the same as when we use Retained Logic. P6 will not allow the remainder of the activity to continue until its predecessor is complete."

Figure 2 – Schedule Update using Actual Dates

  2. Retained Logic

As the title suggests, this method retains the existing network logic, so for an out-of-sequence activity with a Finish-to-Start logic tie, the successor will have a gap in its bar until the predecessor finishes (refer to figure 3).

Figure 3 – Schedule Update using Retained Logic

  3. Progress Override

Progress override ignores the network logic and progresses out-of-sequence activities as if the Finish-to-Start logic tie did not exist. It does not change anything in the network; it simply ignores the tie and schedules the activities. This results in a lower float calculation than retained logic, as the activity is not gapped/pushed out to match completion of the predecessor (refer to figure 4).

Figure 4 – Schedule Update using Progress Override

In effect, there are only two methods to evaluate here, as the 'Actual Dates' method uses a similar methodology to the Retained Logic method, so it will be discarded from the rest of this topic.

Selection Criteria

To evaluate the effect Retained Logic or Progress Override has on the EV calculation, a small resource-loaded schedule was developed and an out-of-sequence activity introduced. The schedule has 25 activities, 7 of which have been completed, leaving 18 remaining. There were no constraints and the schedule log is clean apart from the one out-of-sequence activity introduced for this evaluation.

Analysis and Comparison of the Alternatives

Before diving headlong into the analysis, it is worth briefly mentioning why out-of-sequence activities occur. These tend to crop up when an activity has been identified that can be started ahead of the sequence in which it had originally been planned. For example, there are two activities, A and B; the original network showed that A had to complete before B started. However, after some analysis it was determined that, since A had progressed past a certain point, B could commence before A finished. So, B is started, and when a P6 schedule update is performed it recognizes this and flags the out-of-sequence activity as an error on the Schedule Log it produces. This is probably due to the original schedule logic being defective when it was developed. For certain, such changes need to be fully evaluated prior to implementation so they do not affect the critical path activities.

By rights, the error should be addressed, the logic revised, and the schedule log left free from errors. Remember that any logic changes made should be put onto the schedule changes register, which the PCM and PM review and sign off weekly/monthly given the reporting cycles.

For the purposes of this blog, the out-of-sequence activity is still there, and how each method deals with the profiling of costs is what needs to be reviewed.

Progress Override Earned Value Analysis

Figure 4 above showed that the bar was intact with no gaps and the schedule had zero float, so it was still on track to complete on time. The costs for the activity were distributed from week 6 to week 9 (a 4-week timeframe); the cost profile is shown in figure 5.

Figure 5 – Cost profile of activity A4020 using Progress Override

Retained Logic Earned Value Analysis

As demonstrated in figure 3, the retained logic method puts a gap in the bar. Review of the cost phasing of activity A4020 shows that the costs are distributed from week 6 to week 10 (a 5-week timeframe); see figure 6.

Figure 6 – Cost profile of activity A4020 using Retained Logic

To better see what is happening, refer to Table 1 which compares both methods.

Table 1 – Comparison of Progress Override vs Retained Logic

Table 1 shows how the earned value figures change, with two different profiles for the same activity depending on the method chosen. Progress Override shows an unimpeded cost profile, while the Retained Logic cost profile includes the delay caused by the effect of the predecessor logic.

Table 2 – SWOT analysis of methods

Selection of Preferred Alternative

Based on the analysis, 'Progress Override' is the preferred method for reporting Earned Value against in-progress activities.

Monitoring Post Evaluation Performance

The issue of using 'Progress Override' instead of 'Retained Logic' is a much larger discussion than just the 'earned value' aspect here. There is a plethora of arguments out there regarding the use of Progress Override due to its ignoring of network logic. The key to all this is ensuring sufficient detail is provided in the original planning, as outlined by the GAO Schedule Assessment Guide Best Practice 1 (capturing all activities). The backup to this is fixing the defective logic that is causing the P6 errors and documenting any changes individually on a schedule changes register.

References

  • Primavera P6 Enterprise Project Portfolio Management (Version 15.1) [Computer program]. Redwood Shores, CA, USA: Oracle (2016).
  • According to Oracle online P6 Professional Help (2017), “When scheduling progressed activities use: Specify the type of logic used to schedule activities that are in progress. When you choose Retained Logic, the remaining duration of a progressed activity is not scheduled until all predecessors are complete. When you choose Progress Override, network logic is ignored and the activity can progress without delay. When you choose Actual Dates, backward and forward passes are scheduled using actual dates.” (No page number)
  • According to R.Hendricks, “The remainder of the activity is still treated the same as when we use Retained Logic. P6 will not allow the remainder of the activity to continue until its predecessor is complete.”, retrieved from tepco.us website (August 2015, page 5 figure 1 lower comment note)
  • Woolf, M. B. (2012). CPM mechanics: The critical path method of modeling project execution strategy. Rochester, MI: ICS-Publications.
  • 3.3.7 Multi-Attribute Decision Making. (2015, November 2). Guild of project controls compendium and reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). (n.d.). Retrieved from http://www.planningplanet.com/guild/gpccar/managing-change-the-owners-perspective
  • 3.3.04 Force Field or SWOT Analysis. (2015, October 3). Guild of project controls compendium and reference (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). (n.d.). Retrieved from http://www.planningplanet.com/guild/gpccar/identify-risks-opportunities
  • United States. (2015). Best Practice Checklists. In GAO schedule assessment guide: Best practices for project schedules (pp. 25/26). U.S. Government Accountability Office.

W07_SJP_Going on a trip, what mode of transportation is most suitable?

 

Issue Identification and Appraisal
The past few blogs have been interesting to write but very work specific, so this week the multi-attribute decision making process is used to assess a personal issue. Faced with a 350+ kilometre trip to meet up with some former work colleagues for a few rounds of golf, the problem is: "What mode of transportation should I use to make my trip?"

Feasible Alternatives
There are five feasible alternatives: car, taxi, bus, train or ferry. All are acceptable alternatives to evaluate.

Develop the Outcomes for each 
A brainstorming session identified the following benefits of each alternative.

Table 1 Acceptable Alternatives Benefits

Selection Criteria
The priority and score method is used to determine a finalized weighted score for each alternative.
As this problem is a personal one, the criteria set needs to reflect personal and financial preferences.
Two methods were used to score the criteria: the normal method, where 'not meeting requirement' scored 0 and 'meeting requirement' scored 4, and, because two risks were also identified, a risk method using the opposite logic, meaning the greater the chance of the risk occurring, the lower the score, and the lower the chance of the risk, the higher the score.
Scoring Criteria:
Normal Method – Journey Time, Direct Route, Door to Door, Comfort Level, Cost
Risk Method – Risk of Traffic Jam, Risk of Accident
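A minimal sketch of the scoring arithmetic follows. The priorities and raw scores are made-up values for two of the five alternatives, but the two risk criteria show the inverted scoring described above, where a higher chance of the risk occurring earns a lower score.

```python
# criterion: (priority weight, {alternative: raw score 0-4})
# For the two risk criteria the raw score is already inverted:
# a high likelihood of the risk occurring gets a low score.
criteria = {
    "Journey Time":        (5, {"Car": 4, "Ferry": 2}),
    "Cost":                (3, {"Car": 2, "Ferry": 3}),
    "Comfort Level":       (3, {"Car": 3, "Ferry": 4}),
    "Risk of Traffic Jam": (5, {"Car": 1, "Ferry": 4}),
    "Risk of Accident":    (3, {"Car": 2, "Ferry": 4}),
}

totals = {}
for _criterion, (priority, scores) in criteria.items():
    for alternative, score in scores.items():
        totals[alternative] = totals.get(alternative, 0) + priority * score

for alternative, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{alternative}: weighted score {total}")
```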

Analysis and Comparison of the Alternatives
Applying the scoring method based on these criteria provided the following results for the five alternative modes of transportation.

Table 2 Analysis results of Alternatives

Selection of Preferred Alternative
Based on the analysis the preferred alternative out of the five suitable options would be to take the ferry.

Monitoring Post Evaluation Performance
The preferred alternative transportation mode has been organized, and the trip will be monitored to assess whether or not the criteria selection process was adequate. Concurrently, observing conditions, travel reports and news casts regarding the other alternatives will also provide a check to support the decision process.
Continual refinement of the criteria will follow by applying lessons learned.

References

W06_SJP_Applying 10 Best Practices Scoring Model to the GAO’s 4 Characteristics of a Reliable Schedule

 

Issue Identification and Appraisal

Over the past few weeks the blog has been about the development of a scoring model best suited to the GAO's Ten Best Practices. This week takes the subject one step further by applying our scoring model to the GAO's Table 7: Best Practices Entailed in the Four Characteristics of a Reliable Schedule.

This week’s problem is, “How to apply the GAO’s 10 Best Practices scoring model to their Four Characteristics of a reliable schedule”.

Feasible Alternatives

The challenge is going to be determining how to take the results from the 10 Best Practices and "roll up" into the 4 Characteristics. This revisits earlier research with regard to how to weight the information; there are three alternatives to assess:

  1. equal weightings i.e. 25% for each characteristic,
  2. assign weightings based on criticality of the four characteristics, or
  3. use the 10 Best Practice weightings and roll these up.

Develop the Outcomes for each

The table below shows the potential benefits for each weighting method.

Table 1: Benefits of each Alternative

Taking the benefits a step further, and demonstrating how the weightings will appear in a table, gives the reader a better idea of what to expect. Alternatives 1 and 3 are easy to depict, but alternative 2, the 'Ranked weighting', needs to be shown using potential ranges, as these would be chosen by the assessor.

Table 2: Weightings of each Alternative

Selection Criteria

The selection criteria used were the Hybrid data set developed during Blogs 3 to 5. It has 82 scoring points, all equally weighted (plus an additional 33 informational items, which assist in scoring the 82 items).

Table 3 Typical Scoring Sheet (showing BP 1)

Analysis and Comparison of the Alternatives

A simple analysis was performed using the same method adopted for the previous blogs: determining the priority scale for each of the 3 criteria and then applying a score against each alternative. The analysis of the alternatives is shown below.

Table 4 Analysis results of Alternatives

Selection of Preferred Alternative

The preferred alternative is to use the weights from the Ten Best Practices scoring template and roll them up into each characteristic, using the GAO's direction on which Best Practice is assigned to each characteristic.
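The roll-up described by the preferred alternative can be sketched as below: each Best Practice keeps its weight and score from the Ten Best Practices template, and a characteristic's score is the weighted average of the practices assigned to it. The weights, scores and mapping shown here are illustrative placeholders only and should be replaced with the real template values and the GAO Table 7 assignment.

```python
# Illustrative only: Best Practice weights (summing to 100) and assessed scores (0-100%).
bp_weight = {1: 12, 2: 10, 3: 9, 4: 11, 5: 10, 6: 9, 7: 10, 8: 9, 9: 10, 10: 10}
bp_score  = {1: 80, 2: 70, 3: 90, 4: 60, 5: 75, 6: 85, 7: 65, 8: 70, 9: 80, 10: 90}

# Placeholder mapping of Best Practices into the four characteristics
# (check this assignment against GAO Table 7 before using it in practice).
characteristic_map = {
    "Comprehensive":    [1, 3, 4],
    "Well-constructed": [2, 6, 7],
    "Credible":         [5, 8],
    "Controlled":       [9, 10],
}

for characteristic, bps in characteristic_map.items():
    total_weight = sum(bp_weight[bp] for bp in bps)
    rolled_up = sum(bp_weight[bp] * bp_score[bp] for bp in bps) / total_weight
    print(f"{characteristic:16s}: {rolled_up:.1f}% (characteristic weight {total_weight})")
```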

Monitoring Post Evaluation Performance

The roll-up of the scoring method from the Ten Best Practices to the Four Characteristics is now complete. So, the effective tools are available, but what will the acceptance criteria be? At what point can it be said that the schedule has passed: 70%, 75%, 80%, 90% or 100%? Bearing in mind that a schedule is dynamic and projects need to forge ahead, I don't think 100% is realistic. My thought is that somewhere in the region of 75% and upwards would be a good starting point.

Why do I feel this is a good start? If we demand too much, we are not being realistic and schedulers/planners will build the schedules around the criteria and not the work; if we demand too little, then we are not using the GAO's Best Practices in the spirit they were developed for.

We need to Plan, Assess, Learn, Re-evaluate, and continually refine as the need arises.

References

  • GAO (United States Governance Accountability Office), 2015, GAO-16-89G Schedule Assessment Guide
  • Guild of Project Controls. (2015, October). GUILD OF PROJECT CONTROLS COMPENDIUM and REFERENCE (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). Retrieved June 14, 2017, from http://www.planningplanet.com/guild/gpccar/managing-change-the-owners-perspective
  • Ishizaka, A., Nemery, P., & John Wiley & Sons. (2013). Multi-criteria decision analysis: Methods and software[Kindle].
  • W04_SJP_ Development of a Scoring Model for the GAO Ten Best Practices – Achieving Guild of Project Controls / AACE Certification BLOG [Web log post]. (n.d.). Retrieved from https://js-pag-cert-2017.com/w04_sjp_Development of a Scoring Model for the GAO Ten Best Practices/
  • W05_SJP_ Development of a Scoring Model for the GAO Ten Best Practices Part 2 – Achieving Guild of Project Controls / AACE Certification BLOG [Web log post]. (n.d.). Retrieved from https://js-pag-cert-2017.com/w05_sjp_Development of a Scoring Model for the GAO Ten Best Practices Part 2/

W05_SJP_Development of a Scoring Model for the GAO Ten Best Practices Part 2

 

Issue Identification and Appraisal

In the W04 blog, the scoring model best suited to the GAO's Ten Best Practices was determined; however, it left two items outstanding: i) the criteria to be used, and ii) what the acceptance criteria would be. This week is part two of our problem statement, "How to develop a scoring model based on the GAO's Ten Best Practices."

Feasible Alternatives

The challenge is determining the selection criteria to be used, as the 'acceptance criteria' should be uniform across the data ranges. There are four alternatives for the selection criteria: i) the individual checklists at the end of each Best Practice section, ii) Appendix II, iii) Appendix VI, and iv) a hybrid of i, ii & iii in a more concise form.

Develop the Outcomes for each

As the criteria sets all come from the same document, there is very little to choose between them other than looking at the potential benefits that might arise between the 4 alternatives. And while all could be used, a hybrid version would potentially have advantages.

Table 1: Benefits of each Alternative

Selection Criteria

The GAO Schedule Assessment Guide provides several criteria sets;

  • Individual checklists for the Ten Best Practices – these can be found on the following pages of the document; 25/26, 46/47, 61/62, 68/69, 73/74, 89, 96/97, 119/120, 132/133, 146/147.
  • Appendix II: An auditor’s key questions (pages 151 to 165).
  • Appendix VI: Standard Quantitative Measures for Assessing Schedule Health (pages 183 to 188).
  • A developed Hybrid from the three above.

Analysis and Comparison of the Alternatives

Using the existing data sets plus a Hybrid, a simple analysis was performed with regard to meeting the GAO guidance, ensuring we have the correct number of points to monitor (ideally 100 points, so that each criteria point can gain 1%).

An excerpt of the criteria showing all four data sets for Best Practice 2 is shown in the table below.

Table 2 Criteria for Alternatives showing Hybrid

The analysis of the alternatives is shown below; there were not too many variables to score against.

Table 3 Analysis results of Alternatives

Selection of Preferred Alternative

Due to the ambiguity of some item descriptions on the Checklists, Appendix II and Appendix VI, the preferred alternative is the use of the hybrid list.

Using Best Practice 2, Hybrid alternative, and adding the acceptance criteria, the initial sheet is represented in Table 4 below.

Table 4 Acceptance Criteria (Hybrid model)

Research on several professional software packages (Deltek, XER Toolkit, Schedule Analyzer) confirmed that the 'acceptance criteria' were within the ranges used by those tools, albeit they support the DCMA checks.

Monitoring Post Evaluation Performance

The scoring method is now complete, and the next step is to perform a detailed analysis on a schedule to determine the results. There is a need to ensure that the acceptance criteria continue to meet industry requirements, so there need to be periodic checks on the system.

Once the system has been fully tested and results verified, a demonstration to Company management needs to be organized, with the view to incorporating key metrics into future contracting strategies.

References

  • GAO (United States Governance Accountability Office), 2015, GAO-16-89G Schedule Assessment Guide
  • Guild of Project Controls. (2015, October). GUILD OF PROJECT CONTROLS COMPENDIUM and REFERENCE (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). Retrieved June 14, 2017, from http://www.planningplanet.com/guild/gpccar/managing-change-the-owners-perspective
  • Ishizaka, A., Nemery, P., & John Wiley & Sons. (2013). Multi-criteria decision analysis: Methods and software[Kindle].
  • W04_SJP_ Development of a Scoring Model for the GAO Ten Best Practices – Achieving Guild of Project Controls / AACE Certification BLOG [Web log post]. (n.d.). Retrieved from https://js-pag-cert-2017.com/w04_sjp_Development of a Scoring Model for the GAO Ten Best Practices/
  • DELTEK ACUMEN 8.0 [Computer software]. (2016)
  • SCHEDULE ANALYZER FOR THE ENTERPRISE Build 4.4.312 [Computer software]. (2016)
  • XER SCHEDULE TOOLKIT version 132-0-0-19 [Computer software]. (2016)

W04_SJP_Development of a Scoring Model for the GAO Ten Best Practices

 

Issue Identification and Appraisal

In my W03 blog, it was determined that a 'Weighted Scoring Method' was the preferred method to evaluate a schedule. As a follow-up, this week's problem statement is, "How to develop a scoring model based on the GAO's Ten Best Practices."

Feasible Alternatives

The scoring model that needs to be developed is quite complex, as it needs to weight the criteria and provide quantitative limits to assess the result as a pass or fail. For example, for Best Practice 1, 'capturing all activities', what determines the pass criteria to say the schedule met the GAO requirements: 100%, 90%… thoughts anybody? Initially, the problem to solve is how to achieve the vertical scoring, and then look at horizontal scoring thereafter. There are two suitable alternatives: i) a system that treats all the criteria as equal, or ii) a system that imposes rankings against the criteria.

Develop the Outcomes for each

Based on my early weeks of research on the topic, there does not appear to be a 'one rule suits all' method. As mentioned above, the system will eventually be complex in nature. The benefits of the two alternatives are outlined in table 1 below.

Table 1: Benefits of each Alternative

Selection Criteria

The GAO Schedule Assessment Guide provides several criteria sets;

  • Individual checklists for the Ten Best Practices – these can be found on the following pages of the document; 25/26, 46/47, 61/62, 68/69, 73/74, 89, 96/97, 119/120, 132/133, 146/147.
  • Appendix II: An auditor’s key questions (pages 151 to 165).
  • Appendix VI: Standard Quantitative Measures for Assessing Schedule Health (pages 183 to 188).

All are excellent criteria sets; however, the data sets range between 74 and 152 points depending on the set chosen. The GAO specifically states, "…they are not intended as a series of steps for developing the schedule", "…questions are related to the general policies in place and procedures undertaken to create and maintain the schedule" and "No "pass-or-fail" thresholds or tripwires are associated with the measures…. Moreover, severity of the errors or anomalies takes precedence over quantity because any error can potentially affect the reliability of the entire schedule network".

As the scoring model is ultimately going to be used to evaluate schedule quality, whichever set of GAO criteria is used will need to be reviewed in detail and acceptance criteria developed. However, for the purposes of this blog a comparison of the results from all three data sets will be provided.

Analysis and Comparison of the Alternatives

Comparing the alternatives using the information from the benefits table provided the assessment of which method to use. Please note that the lower two items (scoring flexibility / data manipulation potential) did not relate to A1 and scored zero (refer to table 2 below), which brought the results of A2 closer to A1.

 

Table 2 Analysis results of Alternatives

The next step in the process is to input all three data sets and compare the results: Best Practice Checklists (152 points), Appendix II (87 points) and Appendix VI (74 points). Table 3 below shows the ranges that can be determined for each Best Practice depending on the data set used.

Table 3 Best Practice weighting ranges
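The ranges in Table 3 follow mechanically from equal item weighting: a Best Practice's weight under a given data set is simply its share of that set's items. The per-practice item count below is a placeholder; only the 152/87/74 totals come from the text.

```python
# Total scoring items in each GAO data set (figures from the text).
set_totals = {"Best Practice Checklists": 152, "Appendix II": 87, "Appendix VI": 74}

# Hypothetical item counts for one Best Practice in each data set (placeholders only).
bp_items = {"Best Practice Checklists": 18, "Appendix II": 9, "Appendix VI": 6}

weights = {name: bp_items[name] / total * 100 for name, total in set_totals.items()}
for name, weight in weights.items():
    print(f"{name}: Best Practice weight = {weight:.1f}%")

print(f"Weighting range for this Best Practice: "
      f"{min(weights.values()):.1f}% to {max(weights.values()):.1f}%")
```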

The next step would be to determine the acceptance criteria for each Best Practice; however, due to the large dispersion of ranges, further analysis will be required to develop one set of criteria able to give credibility to the weightings.

Selection of Preferred Alternative

The preferred solution would be to implement a scoring system that weights all criteria equally.

Monitoring Post Evaluation Performance

The preferred alternative has been determined and the system will provide results. However, due to the wide range of weightings determined using the three different data sets, there is a need to re-assess the input criteria. Blog 5 will further the research and develop the input criteria and the acceptance criteria to finalize the scoring method.

References

  • GAO (United States Governance Accountability Office), 2015, GAO-16-89G Schedule Assessment Guide
  • Guild of Project Controls. (2015, October). GUILD OF PROJECT CONTROLS COMPENDIUM and REFERENCE (CaR) | Project Controls – planning, scheduling, cost management and forensic analysis (Planning Planet). Retrieved June 14, 2017, from http://www.planningplanet.com/guild/gpccar/managing-change-the-owners-perspective
  • Ishizaka, A., Nemery, P., & John Wiley & Sons. (2013). Multi-criteria decision analysis: Methods and software[Kindle].
  • W03_SJP_Scoring model to analyze a Baseline Schedule vs GAO requirements – Achieving Guild of Project Controls / AACE Certification BLOG [Web log post]. (n.d.). Retrieved from https://js-pag-cert-2017.com/w03_sjp_scoring-model-analyze-baseline-schedule-vs-gao-requirements/

W03_SJP_Scoring model to analyze a Baseline Schedule vs GAO requirements

 

Issue Identification and Appraisal

Having worked many years as a Client's Project Controls representative, one of the tasks regularly performed was reviewing Contractors' schedule submittals. Over the past two decades there has been a noticeable, steady decline in the quality of the schedules submitted, despite the increasing sophistication of planning and scheduling tools. Several published guidelines assist both schedule developers and reviewers regarding 'Best Practices':

  • DCMA Generally accepted scheduling practices.
  • GAO Schedule assessment guide.
  • NDIA Planning and scheduling excellence guide.

For the purposes of this blog, I have selected the GAO Schedule Assessment Guide, with this week's topic being "What scoring model should be used to develop a GAO checklist to analyze contractors' schedules?"

Feasible Alternatives

Online research led me to two articles written by the same author, D.A. Zimmer PMP, regarding the uses of the powerful but flexible 'weighted scoring method', a method he concluded required three scoring methods to provide credible results.

Another method is to develop a checklist of the GAO criteria using predetermined quantitative criteria to provide a Pass/Fail answer.

Develop the Outcomes for each

The table below shows that there are benefits to both alternatives; however, there are limitations to the Pass/Fail checklist with regard to the results provided. Both alternatives can tell the analyst whether the schedule being reviewed is good or bad, but the weighted scoring method could provide a better indication of the schedule risk index.

Table 1 : Benefits of each alternative

Selection Criteria

The six criteria items listed in Table 1 were used as the criteria for the analysis.

Analysis and Comparison of the Alternatives

A priority scale was first allocated to each criterion to determine the importance of the item: low importance (1) / important (3) / essential (5). Against each criterion, a 'meeting requirement' score based on the comments in Table 1 was applied: did not meet (0) / partially met (2) / met requirements (4) / exceeded requirements (6), giving a raw scoring range of 0 to 36 dependent on the requirements met.

Final results were achieved by determining the 'Weighted score', multiplying the 'Score' by the 'Priority Scale'.

Table 2 Analysis results of Alternatives
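As a minimal sketch of this calculation, the snippet below applies the priority scale (1/3/5) and meeting-requirement scores (0/2/4/6) described above; the criteria names and the scores assigned to each alternative are illustrative, not the actual Table 1 entries.

```python
# Priority scale: low importance = 1, important = 3, essential = 5.
# Meeting-requirement score: did not meet = 0, partially met = 2,
# met requirements = 4, exceeded requirements = 6.
# (criterion, priority, score for alternative A1, score for alternative A2)
criteria = [
    ("Indicates level of schedule risk", 5, 6, 2),   # illustrative criterion and scores
    ("Simple to apply",                  3, 4, 4),
    ("Granularity of results",           5, 4, 0),
]

totals = {"A1 - Weighted Scoring Method": 0, "A2 - Pass/Fail Checklist": 0}
for _criterion, priority, score_a1, score_a2 in criteria:
    totals["A1 - Weighted Scoring Method"] += priority * score_a1   # weighted score = score x priority
    totals["A2 - Pass/Fail Checklist"] += priority * score_a2

for alternative, total in totals.items():
    print(f"{alternative}: weighted score {total}")
```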

Selection of Preferred Alternative

Review of the analysis results concludes that the 'Weighted Scoring Method' is the preferred alternative.

Monitoring Post Evaluation Performance

The next step will be to apply the 'Weighted Scoring Method' to develop a model using the criteria from the GAO's Appendix VI, "Standard Quantitative Measurements for Assessing Schedule Health", Table 11 checklist items, to measure the quality of a contractor's baseline proposal submission.

This will ensure that all the GAO requirements for reviewing a baseline submission are incorporated into the model, and determine whether there are any areas that cannot be included, in which case there will need to be a re-assessment of the preferred choice.

There are 'off-the-shelf' professional schedule analysis software packages (e.g. Deltek Fuse, Schedule Analyzer, XER Toolkit, Zummer, etc.) that can perform these tasks instantaneously, providing a detailed analysis report. The choice of which package to use will be a subject for a future blog.

References

  • GAO (United States Governance Accountability Office), 2015, GAO-16-89G Schedule Assessment Guide
  • NDIA (National Defense Industrial Association), 2016, Planning & Scheduling Excellence Guide (PASEG), Version 3.0
  • DCMA (Defense Contract Management Agency), Date unknown, Generally Accepted Scheduling Practices (GASP).
  • Zimmer, D. A. (2011, January). What is the weighted scoring method? American Group. Retrieved June 10, 2017, from http://terms.ameagle.com/2011/01/david.html
  • Zimmer, D. A. (2014). Plugging the holes in the weighted scoring model. American Group, ameagle.com. Retrieved June 10, 2017.