Observations on Construction Submittals

Per the AIA (A201, General Conditions of the Contract for Construction), the purpose of submittals is “to demonstrate for those portions of Work for which submittals are required, the way by which the Contractor proposes to conform to the information given and the design concept expressed in the Contract Documents.”

Submittals are the confirmation of the contractor’s intent to comply with the design concept.  The importance of this compliance process is emphasized by the prerequisite condition stated in A201:  “The Contractor shall perform no portion of the Work for which the Contract Documents require submittal and review of Shop Drawings, Product Data, Samples, or similar submittals until the respective submittal has been approved by the Architect.”

At a minimum, designers should be generating construction documents that contain enough detail to clearly demonstrate the design intent.  Additional detail from designers (equipment or process specifics) is beneficial but not critical to project success, though omitting it carries a higher risk of cost overages.  For projects with expedited timelines (design-build, EPC), minimum designer input dictating general sizing, spatial relationships, and arrangement can be enough information to advance the project.  In these scenarios, the submittal process can be used by the designers to fine-tune a design and verify system coordination to ensure performance needs are met.

However, on standard or typical projects, submittals are not intended to be an opportunity for either designer or contractor to alter the design concept, but in reality they often function in that capacity.  This 2006 AIA article takes it a step further, describing the process as a game between the design and construction teams.  While most may treat it this way – a cat and dog fight – the analogy cuts to the root of many of the process’s problems.

Image courtesy of Looney Tunes

The reality is that submittals and the submittal process are critical to project success.  The process represents one of the last opportunities to make changes (large and small) without causing a compounded effect on cost.  For a seemingly typical and simple process, the problems are deeply rooted in construction culture and are still largely evident.  The workflow has improved recently with shared project management software, but there are still efficiency gains to be had.

A case study authored by Catarina Pestana and Thais Alves of San Diego State University, titled “Study of the Submittal Process Using Lean Construction Principles,” analyzed submittal cycle times for a 12-story, 220,000 sf, mixed-use, CIP concrete new construction project in San Diego, CA around 2010.  They were able to calculate actual cycle times at each process step (GC initial review, A or A/E review, GC distribution) and compare them against estimated times.  Most of the results were expected: actual lead times exceeded estimates, with an average and median of about 32 days.  A few of the surprising findings were:

  1. The GC distribution cycle time exceeded GC initial review cycle time by about 3 days.

    1. Submittal distribution is expected to be the least burdensome step as it should require less technical review than the other two.

  2. Shop drawing review lead times were about 10 days shorter than those for product data reviews.

    1. Shop drawings are generally more complex and thus the expectation would be for a longer lead time.

  3. Architect-only review lead times on average were 12 days longer than those also requiring the review of an engineer.

    1. Because an A/E review requires additional hand-off to/from the engineer, it would be expected that the cycle time would be longer.

Interestingly, it was the GC review and distribution that caused the difference in lead times for both 2 and 3 above; the cycle time for the design professional review was the same for both.  In this study, the longer distribution times were attributed by the contractor to “the architect finishing the design”, thus requiring change orders (part of the change order process was incorporated into the “distribution” step).
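
To make the cycle-time breakdown concrete, here is a minimal sketch of how the per-step cycle times could be pulled from a submittal log; the field names and dates below are hypothetical and are not taken from the study:

    from datetime import date

    # One hypothetical submittal record with a milestone date for the end of each step
    submittal_log = [
        {
            "control_no": "23 21 13-004",
            "received_from_sub": date(2010, 3, 1),      # sub issues to GC
            "sent_to_designer": date(2010, 3, 5),       # end of GC initial review
            "returned_by_designer": date(2010, 3, 19),  # end of A or A/E review
            "distributed_by_gc": date(2010, 3, 26),     # end of GC distribution
        },
    ]

    for s in submittal_log:
        gc_review = (s["sent_to_designer"] - s["received_from_sub"]).days
        design_review = (s["returned_by_designer"] - s["sent_to_designer"]).days
        distribution = (s["distributed_by_gc"] - s["returned_by_designer"]).days
        total = gc_review + design_review + distribution
        print(s["control_no"], gc_review, design_review, distribution, total)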

From my experience, here’s where I see waste in the submittal process:

  1. A submittal is received that did not follow the compliance requirements.

  2. A submittal is incomplete (covers only part of system/equipment).

  3. A submittal is poorly labeled; it is unclear what is being submitted on.

  4. A submittal is submitted in the wrong sequence (a sub-component to a larger component that has yet to be submitted on).

  5. A submittal is submitted that is not required.

  6. A submittal is disguised as a substitution request.

  7. A submittal has not been reviewed by the GC before being forwarded.

  8. A submittal is overlooked or forgotten (all parties guilty here).

  9. A submittal is not applicable to the project at-hand.

  10. A submittal is provided by a project team member deep down the contractor org chart (sub-vendor to a vendor to a sub-subcontractor to a subcontractor to a GC).


From my experience, here are a few solutions to minimize submittal process waste:

  1.  Require a standardized control number for all parties involved.

    1. Why?  This improves coordination, avoids confusion, and eliminates the unnecessary step of manually creating/adding/altering unique control numbers.

  2. Require the GC to generate a schedule of submittals prior to issuing submittals and have it reviewed by the design team for completeness.

    1. Why?  This gives the design professionals an opportunity to ensure all critical components/equipment/systems are accounted for.

  3. Specs should clearly describe what submittals are needed.

    1. Why?  While the general spec format is standardized, how specific submittals are requested is oftentimes determined by the architect (and is not standardized).

  4. Specs should clearly describe how submittals are to be replied to.

    1. Why?  Compliance statements clarify communication between contractor and designer.  The designer can verify that the contractor reviewed the specification and the contractor can explain why their product or drawing deviates from spec.

  5. If sub-subcontractors or sub-vendors are utilized, it should be the responsibility of the prime subcontractor to directly issue and manage all relevant submittals.

    1. Why?  Each document hand-off step is an opportunity for delay, increasing the probability of a longer cycle or lead time.

  6. Change the workflow such that obvious consultant-only items are sent directly to the consultant, bypassing the architect “review” step.

    1. Why?  Oftentimes submittals for engineer review get hung up with the architect for no reason other than they are busy.  Eliminating this step can decrease cycle and lead times.

  7. Change the workflow such that submittal reviews are two-step.  Step one is a cursory review for proper formatting (stamps, equipment labeling, compliance statements) and should have a 2-4 day lead time.  Step two would consist of the full technical review, which would carry that same 2-4 day lead time.

    1. Why?  Where is the value in waiting 8 days for a submittal response that will ultimately get rejected on a technicality after a 15-minute review?

  8. Designers to provide clear, listed responses that can be tracked over each issuance.

    1. Why?  The submittal comment format is not standardized and oftentimes comments are buried in the documentation.  Cleanly formatted lists ensure all comments will be visible to the receiver.

  9. Set a goal for no more than two re-submittals.  The third should be for record only.

    1. Why?  Goals help “set the tone” or set expectations for all project parties.

  10. Triage submittals by the categories “Ordinary”, “Semi-Custom”, and “Specialized” to indicate product lead times (a rough sketch of how these categories might be tracked in a shared submittal log follows this list).

    1. Why?  Categories can signal to the submittal reviewer approximate time to review or criticality of review.  For example, valves labeled “Ordinary” would signal a short review, whereas a pump labeled “Specialized” would signal long delivery lead time and expedited review.

  11. GC to expedite the first submittals for larger equipment that are anticipated to undergo multiple reviews.

    1. Why?  After the submittal log is generated, the first submittals prioritized should be those for larger/more complex equipment with long lead times.

  12. GC to reduce expected review times on revise-and-resubmit (R&R) resubmittals.

    1. Why?  The first submittal review on average should take longer than second or third reviews. The second or third review should be intended to verify earlier comments are addressed.
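
As a rough illustration of items 1 and 10 above (standardized control numbers and triage categories), here is a minimal sketch of the fields a shared submittal log entry might carry; the names and values are hypothetical and are not taken from any particular project management platform:

    from dataclasses import dataclass

    @dataclass
    class SubmittalLogEntry:
        control_no: str     # the single standardized number used by GC, subs, and designers
        spec_section: str
        description: str
        category: str       # "Ordinary", "Semi-Custom", or "Specialized"
        status: str         # e.g., "In GC Review", "With A/E", "Returned", "Closed"
        revision: int = 0   # goal: no more than two re-submittals

    entry = SubmittalLogEntry(
        control_no="23 21 23-004",
        spec_section="23 21 23",
        description="Base-mounted end-suction pumps",
        category="Specialized",   # long delivery lead time, so the review is expedited
        status="With A/E",
    )
    print(entry)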

Is Weather Becoming More Extreme? Let's look at the data...

There’s been a lot of talk on various media outlets about “extreme” weather events, their ferocity and frequency, and how this is the “new normal”. And of course these days you can’t talk about weather without also talking about climate change. Regardless of whether you’re a believer or a skeptic, I wanted to see what the data has to say regarding extreme weather: on average, is our (national) weather becoming more extreme, less extreme, or about the same as it was?

Fortunately, our government has a comprehensive website detailing extreme weather events dating back to 1910. The National Oceanic and Atmospheric Administration (NOAA) publishes all sorts of great data on weather events via the National Centers for Environmental Information (formerly the National Climatic Data Center).

Courtesy of the NOAA

In the graph above, the data compiled for each year (red columns) is “based on an aggregate set of conventional climate extreme indicators which include monthly maximum and minimum temperature, daily precipitation, monthly Palmer Drought Severity Index (PDSI), and landfalling tropical storm and hurricane wind velocity.” Additional background information on their methodology and data can be found here.

If we follow the 9-pt binomial filter (a recognized statistical smoothing technique), it’s apparent that extreme weather events have increased steadily since 1970 and have peaked in the last 10 years. We also notice that extreme weather events between 1910 and 1970 gradually decreased by 5-8 percentage points. For this post we’re not examining the root cause, only the resultant data, so how or why the events trend downward and then shoot up is a topic for another time. In any case, there is a clear spike in events starting in the mid-1990s.
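
For reference, a 9-point binomial filter simply smooths a series using weights proportional to the binomial coefficients. Here is a minimal sketch of the technique with made-up index values; this is not NOAA’s code or data, and NOAA’s handling of the series endpoints may differ:

    import numpy as np

    # 9-point binomial filter: weights are C(8, k) / 2^8 for k = 0..8
    weights = np.array([1, 8, 28, 56, 70, 56, 28, 8, 1]) / 256.0

    # Stand-in values for the annual extreme-weather index (percent of area affected)
    index = np.array([20.0, 18.0, 25.0, 22.0, 30.0, 28.0, 35.0, 33.0, 40.0, 38.0, 45.0])

    # "valid" mode drops the edges; each smoothed point is a weighted average of 9 years
    smoothed = np.convolve(index, weights, mode="valid")
    print(smoothed.round(1))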

So why does this matter? Increasing extreme weather events matter for various reasons, the most obvious of which is the cost to rebuild/reconstruct, a majority of which is paid by taxpayers and via insurance premiums (the greater the risk, the more we all pay). Other consequences are the loss of economic activity when priorities are shifted to reconstruction, the costs to reinforce existing infrastructure in preparation for future events, and an increase in uncertainty for strategic planners who rely on steady data to manage risk.

Oh, and if you were curious how much these extreme events were costing us, the NOAA was kind enough to graph that as well. 2011, 2017, and 2016 were the most expensive years ever recorded, and 2018 had already exceeded the fourth-largest total with three months left to go…

Courtesy of NOAA

Distributed Energy Resource (DER) Siting

Interesting article in the new SolarPro about how states (California) are pushing utilities to use their capacity and capital planning information to optimize the siting of PV and storage projects.  As many of us know, the interconnection process for large commercial and utility projects can be a game of chance.  What the line capacity is (and therefore how big the system can be) and whether upgrades are required are determined only after a lengthy review.  The initial responses generally include hefty cost estimates to proceed with what amounts to a nameplate size significantly less than what was proposed.

LNBA Demo B heat map

The CPUC (California Public Utility Commission) created two working groups to address the issue: the Integration Capacity Analysis (ICA) and the Locational Net Benefits Analysis (LNBA).  Above is the heat map for the LNBA demo.  Per CPUC, "the goal is to ensure DERs are deployed at optimal locations, times, and quantities so that their benefits to the grid are maximized and utility customer costs are reduced."

Why is this important?  Consider the $2.6 billion in planned transmission project upgrades in California that were recently revised down to account for higher forecasts of PV and energy efficiency projects.  And besides the avoided costs, don’t forget all the grid upgrades DER developers are paying for that benefit all consumers.  This is a key factor in the debate over whether PV owners pay their fair share, or rather whether non-DER owners are subsidizing DER projects.  With the federal investment tax credit decreasing to 26% in 2020, 22% in 2021, and 10% in 2022, the models generated in the CPUC exercise can be used to reduce development costs, defer distribution and transmission capital improvements, and lay the groundwork for incentivizing grid-beneficial siting.

Enron and PV Module Warranties

While reading an article in Wired about equipment failures, I came across an interesting website called Warranty Week, an amalgam of equipment warranty research and insight.  Written and hosted by Mr. Eric Arnum out of his home office in Forest Hills, NY, it offers deep dives into everything from extended warranty revenues, product claims, recalls, and federal and state regulation to (most important!) warranty reserves.  Also, who knew an Extended Warranty and Service Contract Innovation Forum existed?  Browse his headlines for current events or head straight to the solar equipment warranty page for the good stuff.

In a July 28, 2016 post on solar equipment warranties, Mr. Arnum writes that, "In general, what we're finding is that most of the manufacturers are financing their very long warranties properly, while most of the installers are playing for the short term, hoping that the manufacturers will be there to pay at least the cost of replacement parts."  So the good news is that, for owners large and small of PV systems, both workmanship and production warranty claims should be upheld.

Mr. Arnum can better explain the bad news: "But here's the central problem: none of the nine companies we're following have been financing warranty expenses since 2003. Four started in 2004, and one started in 2005. The rest have even less experience than that. And they really don't know what failure rates will look like in decades to come, nor do they have a good grip on repair or replacement costs in the year 2025 or beyond. So even the ones that are good at it are guessing."  From a failure rate perspective, at least as of 2016, nobody knows for sure just how long modules will last!

Check out the Wired article for more insight into how major manufacturers design and test components, and for more background on Mr. Arnum's research.  I'll be posting separately about this issue at a later time.

Also, why did I title this post Enron and PV?  Because the collapse of Enron led to changes in the Generally Accepted Accounting Principles (the rules that govern how companies write financial statements), which, as of November 2002, required companies to provide detailed information on their guarantees, warranty reserves, and warranty payments in quarterly and yearly filings.  It is these filings that are the foundation of Mr. Arnum's research.

The Software Apocalypse

There was a great article published in The Atlantic late last year, The Coming Software Apocalypse, that took a hard look at the crossroads of software ubiquity, safety, and subject expertise (does anyone really understand how anything works anymore?).  The evolution of technology has been so exhaustingly expeditious that for the average American it can be easy to forget both how amazingly complex software is and that technology once existed without it altogether.

In 2014, the entire State of Washington experienced a six-hour blackout of its 911 system.  During that time, if you dialed 911 you would have heard a busy signal, a frightening sound if, say, you were alone in your house during an (alleged) breaking and entering.  The story in fact cites exactly that as one example of why the 911 system going down is a bad thing (the homeowner actually called at least 37 times).  It was later discovered that the outage was caused by a glitch in the software code designed to keep a running count of incoming calls for recordkeeping.  It turns out the developers had set the counter’s upper limit to an arbitrary number in the millions, which just so happened to be reached that day.  Each new call was assigned a unique number, and once the upper limit was hit, calls were rejected because they could not be assigned one.  Insert chaos.

Photo courtesy of Paramount Pictures

The programmers of the software did not immediately understand the problem, in part because the counter was never deemed critical.  And because it was not deemed critical, it was never assigned an alarm.  There was a time when emergency calls were handled locally, by people.  The promise of innovation led the system to shift away from mechanical and human operation and to rely more and more on code.
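
Purely as an illustration of the failure pattern described above (the names and numbers are hypothetical, and this is not the actual dispatch software), the bug reduces to something like this:

    MAX_CALL_ID = 40_000_000   # hypothetical hard-coded ceiling chosen years earlier

    next_call_id = 0

    def assign_call_id():
        """Hand out a unique record-keeping ID, or None once the ceiling is reached."""
        global next_call_id
        if next_call_id >= MAX_CALL_ID:
            return None                # "can't happen" bookkeeping branch, with no alarm attached
        next_call_id += 1
        return next_call_id

    def route_call(caller_number):
        call_id = assign_call_id()
        if call_id is None:
            return "REJECTED"          # the caller hears a busy signal
        return f"DISPATCHED as record #{call_id}"

    print(route_call("555-0100"))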

“When we had electromechanical systems, we used to be able to test them exhaustively.  We used to be able to think through all the things it could do, all the states it could get into,” states Nancy Leveson, a professor of aeronautics and astronautics at MIT.  For a small system (a gravity-fed sewer wet well, an elevator, a railroad crossing), you can jot down all of its modes of operation, both likely and unlikely, on a single sheet of paper.  And you can visually inspect each one, observe it, and verify its appropriate (and inappropriate) responses to operating scenarios and externalities.
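
To make that contrast concrete, here is a toy enumeration of a small system’s states (my own simplified example, not from the article): a wet well with one pump and two float switches has only eight input/state combinations to think through:

    from itertools import product

    # Toy example: a gravity-fed sewer wet well with one pump and two float switches.
    # Every combination of float inputs and current pump state fits easily on one page.
    for high_float, low_float, pump_on in product([False, True], repeat=3):
        if high_float:
            next_pump = True        # start the pump on a high-level signal
        elif not low_float:
            next_pump = False       # stop once the level drops below the low float
        else:
            next_pump = pump_on     # between the floats: hold the current state
        note = " (physically unlikely)" if high_float and not low_float else ""
        print(f"high={high_float!s:5} low={low_float!s:5} pump={pump_on!s:5} "
              f"-> pump {'ON' if next_pump else 'OFF'}{note}")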

Software is different.  By editing the text in a file somewhere (it does not have to be local to the hardware), that same processor or controller can become an intelligent speaker, a self-driving car, or a logistics control system.  As the article states, “the flexibility is software’s miracle and its curse.  Because it can be changed cheaply, software is constantly changed; and because it is unmoored from anything physical (a program that is a thousand times more complex than another takes up the same actual space), it tends to grow without bound.”  “The problem is that we are building systems that are beyond our ability to intellectually manage,” says Leveson.

Because software is different, it is hard to say that it “broke” the way, say, an armature or a fitting breaks.  The idea of “failure” takes on a different meaning when applied to software.  Did the 911 system software fail, or did it do exactly what the code told it to do?  It failed because it was told to do the wrong thing.  Just as a bolt can be fastened wrong or a support arm can be designed wrong, wrong software will lead to a “failure”.

As software-based technology continues to advance, we as engineers need to keep all of this in the back of our minds.  It is challenging to be just a single-discipline engineer these days.  To really excel in your (our) field, you must be able to think beyond your specialty to fully grasp the true nature of your design decisions.