Is Weather Becoming More Extreme? Let's look at the data...

There’s been a lot of talk on various media outlets about “extreme” weather events, their ferocity and frequency, and how this is the “new normal”. And of course these days you can’t talk about weather without also talking about climate change. Regardless of whether you’re a believer or a skeptic, I wanted to see what the data has to say about extreme weather: on average, is our (national) weather becoming more extreme, less extreme, or staying about the same?

Fortunately, our government has a comprehensive website detailing extreme weather events dating back to 1910. The National Oceanic and Atmospheric Administration (NOAA) publishes all sorts of great data on weather events via the National Centers for Environmental Information (formerly the National Climatic Data Center).

Courtesy of NOAA

In the graph above, the data compiled for each year (red columns) is “based on an aggregate set of conventional climate extreme indicators which include monthly maximum and minimum temperature, daily precipitation, monthly Palmer Drought Severity Index (PDSI), and landfalling tropical storm and hurricane wind velocity.” Additional background information on their methodology and data can be found here.

If we follow the 9-point binomial filter (a recognized statistical smoothing technique), it’s apparent that extreme weather events have increased steadily since 1970 and have peaked in the last 10 years. We also notice that extreme weather events between 1910 and 1970 gradually decreased by 5–8 percentage points. For this post we’re not examining root causes, only the resulting data, so how and why the events trend downward and then shoot up is a topic for another time. In any case, there is a clear spike in events starting in the mid-1990s.
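As a side note, a binomial filter is just a weighted moving average whose weights come from Pascal’s triangle. Here is a minimal sketch (in Python, with made-up data; NOAA’s actual implementation may differ) of how the 9-point version works:

```python
import numpy as np

def binomial_filter(values, points=9):
    """Smooth a series with an n-point binomial filter.

    The weights are a row of Pascal's triangle, normalized to sum to 1.
    For 9 points: [1, 8, 28, 56, 70, 56, 28, 8, 1] / 256.
    """
    # Build the weights by repeatedly convolving [0.5, 0.5] with itself
    weights = np.array([1.0])
    for _ in range(points - 1):
        weights = np.convolve(weights, [0.5, 0.5])
    # mode="valid" drops the edges where the window doesn't fully fit
    return np.convolve(values, weights, mode="valid")

# Made-up stand-in for the NOAA series: percent of the country
# experiencing extreme conditions, one value per year
years = np.arange(1910, 2019)
pct_extreme = np.random.default_rng(0).uniform(10, 40, size=years.size)
smoothed = binomial_filter(pct_extreme)  # 8 values shorter than the input
```

The binomial weights approximate a Gaussian, which is why the filtered curve tracks the decade-scale trend while ignoring year-to-year noise.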

So why does this matter? Increasing extreme weather events matter for several reasons, the most obvious of which is the cost to rebuild and reconstruct, a majority of which is paid by taxpayers and through insurance premiums (the greater the risk, the more we all pay). Other consequences include the loss of economic activity when priorities shift to reconstruction, the cost of reinforcing existing infrastructure in preparation for future events, and increased uncertainty for strategic planners who rely on stable data to manage risk.

Oh, and if you were curious how much these extreme events are costing us, NOAA was kind enough to graph that as well. 2011, 2017, and 2016 were the most expensive years ever recorded, and 2018 had already exceeded the fourth most expensive with three months left to go…

Courtesy of NOAA

Enron and PV Module Warranties

While reading an article in Wired about equipment failures, I came across an interesting website called Warranty Week, an amalgam of equipment warranty research and insight.  Written and hosted by Mr. Eric Arnum out of his home office in Forest Hills, NY, the site does deep dives into everything from extended warranty revenues, product claims, recalls, and federal and state regulation to (most important!) warranty reserves.  Also, who knew an Extended Warranty and Service Contract Innovation Forum existed?  Browse his headlines for current events or head straight to the solar equipment warranty page for the good stuff.

In a July 28, 2016 post on solar equipment warranties, Mr. Arnum writes, "In general, what we're finding is that most of the manufacturers are financing their very long warranties properly, while most of the installers are playing for the short term, hoping that the manufacturers will be there to pay at least the cost of replacement parts."  So the good news is that for owners of PV systems, large and small, both workmanship and production warranty claims should be upheld.

Mr. Arnum can better explain the bad news: "But here's the central problem: none of the nine companies we're following have been financing warranty expenses since 2003. Four started in 2004, and one started in 2005. The rest have even less experience than that. And they really don't know what failure rates will look like in decades to come, nor do they have a good grip on repair or replacement costs in the year 2025 or beyond. So even the ones that are good at it are guessing."  From a failure rate perspective, at least as of 2016, nobody knows for sure just how long modules will last!
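To make the guessing concrete: a warranty reserve is, at its simplest, expected claims times expected cost per claim. Here is a purely illustrative back-of-the-envelope sketch; the failure rate and replacement cost below are invented, which is exactly Mr. Arnum's point:

```python
def warranty_reserve(units_sold, lifetime_failure_rate, cost_per_claim):
    """Expected warranty liability: units x claim probability x cost per claim."""
    return units_sold * lifetime_failure_rate * cost_per_claim

# Hypothetical numbers: 100,000 modules shipped, a guessed 2% lifetime
# failure rate, and $150 to replace each failed module
reserve = warranty_reserve(100_000, 0.02, 150)
print(f"Accrued reserve: ${reserve:,.0f}")  # -> Accrued reserve: $300,000
```

If the real failure rate in 2040 turns out to be 6% instead of 2%, the reserve is underfunded by a factor of three, and that is the risk buried in a 25-year warranty.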

Check out the Wired article for more insight into how major manufacturers design and test components, and for more background on Mr. Arnum's research.  I'll be posting separately about this issue at a later time.

Also, why did I title this post Enron and PV?  Because the collapse of Enron led to changes in the Generally Accepted Accounting Principles (the rules that govern how companies write financial statements), which, as of November 2002, required companies to provide detailed information on their guarantees, warranty reserves, and warranty payments in quarterly and yearly filings.  It is these filings that form the foundation of Mr. Arnum's research.

The Software Apocalypse

There was a great article published in The Atlantic late last year, The Coming Software Apocalypse, that took a hard look at the crossroads of software ubiquity, safety, and subject expertise (does anyone actually understand how anything works anymore?).  The evolution of technology has been so exhaustingly expeditious that the average American can easily forget both how amazingly complex software is and that technology once existed without it altogether.

In 2014, the entire State of Washington experienced a six-hour blackout of its 911 system.  During that time, if you dialed 911 you would have heard a busy signal, a frightening sound if, say, you were alone in your house during an (alleged) breaking and entering.  The story cites exactly that scenario as one example of why the 911 system going down is a bad thing (the homeowner called at least 37 times).  It was later discovered that the outage was caused by a glitch in software code designed to keep a running count of incoming calls for recordkeeping.  It turns out the developers had set the counter's upper limit to an arbitrary number in the millions, and with each new call assigned a unique number, that limit just so happened to be reached.  On that day, calls were rejected because they could not be assigned a unique number.  Insert chaos.
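The details of the actual code are not public, but the failure mode is easy to reproduce. A minimal sketch (in Python, with an invented limit):

```python
# Simplified sketch of the 911 counter bug; the real code and its
# ceiling are not public, so the limit here is invented.
MAX_CALLS = 40_000_000  # arbitrary upper bound chosen by the developers
next_id = 0

def assign_call_id():
    """Hand each incoming call a unique tracking number."""
    global next_id
    if next_id >= MAX_CALLS:
        return None  # no number left to assign: the call is dropped
    next_id += 1
    return next_id

# Years of normal operation... then one day the counter hits the ceiling
call_id = assign_call_id()
if call_id is None:
    print("Caller hears a busy signal")
```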

Photo courtesy of Paramount Pictures

The programmers of the software did not immediately understand the problem, in part because the counter was never deemed critical, and because it was not critical it was never assigned an alarm.  There was a time when emergency calls were handled locally, by people.  The promise of innovation led the system to shift away from mechanical and human operation and to rely more and more on code.

“When we had electromechanical systems, we used to be able to test them exhaustively.  We used to be able to think through all the things it could do, all the states it could get into,” states Nancy Leveson, a professor of aeronautics and astronautics at MIT.  For a small system (a gravity-fed sewer wet well, an elevator, a railroad crossing), you can jot down all of its modes of operation, both likely and unlikely, on a single sheet of paper.  And for each item on that sheet, you can visually inspect, observe, and verify appropriate (and inappropriate) responses to operating scenarios and externalities.
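To see how small that state space really is, here is a toy model (mine, not the article's) of a railroad crossing. Every state fits on one sheet of paper, and the unsafe ones are obvious at a glance:

```python
from enum import Enum
from itertools import product

class Train(Enum):
    APPROACHING = "approaching"
    AT_CROSSING = "at crossing"
    CLEAR = "clear"

class Gate(Enum):
    UP = "up"
    DOWN = "down"

# Exhaustive enumeration: 3 x 2 = 6 states, inspectable by hand
for train, gate in product(Train, Gate):
    unsafe = train is not Train.CLEAR and gate is Gate.UP
    status = "** UNSAFE **" if unsafe else "ok"
    print(f"train={train.value:<12} gate={gate.value:<4} {status}")
```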

Software is different.  By editing the text in a file somewhere (it does not even have to be local to the hardware), that same processor or controller can become an intelligent speaker, a self-driving car, or a logistics control system.  As the article states, “the flexibility is software’s miracle and its curse.  Because it can be changed cheaply, software is constantly changed; and because it is unmoored from anything physical (a program that is a thousand times more complex than another takes up the same actual space) it tends to grow without bound.”  “The problem is that we are building systems that are beyond our ability to intellectually manage,” says Leveson.

Because software is different, it is hard to say that it “broke” the way an armature or a fitting breaks.  The idea of “failure” takes on a different meaning when applied to software.  Did the 911 system software fail, or did it do exactly what the code told it to do?  It failed because it was told to do the wrong thing.  Just as a bolt can be fastened incorrectly or a support arm designed incorrectly, wrong software leads to a “failure”.

As software-based technology continues to advance, we engineers need to keep all of this in the back of our minds.  It is challenging to be just a single-discipline engineer these days.  To really excel in your field, you must be able to think beyond your specialty to fully grasp the true nature of your design decisions.