In this post I'll recap the full year 2016 in the continental US — as well as winter 2017 — using comfort scores, as a supplement to the many other high-quality summaries out there that use more-traditional metrics (e.g. average temperature). Essentially, I intend the score to capture the perceived pleasantness of a given 6-hour period, with temperatures between 70 F and 80 F considered ideal, and temperatures below 70 F or above 80 F accumulating 'discomfort' points in accordance with their wind chill or heat index, respectively. Points are apportioned so that, for example, a wind chill of 30 F is considered roughly equivalent in comfort to a heat index of 92 F. This definition is applied to NCEP reanalysis data to produce the figures and calculations below. Additional comfort-score maps can be found on the Recent Weather page.
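Since the exact point scheme isn't spelled out here, a minimal Python sketch of the idea might look like the following. The 70-80 F ideal band is from the text; the linear scoring and the `HOT_WEIGHT` calibration (chosen so a 30 F wind chill scores the same as a 92 F heat index, per the stated equivalence) are assumptions for illustration only.

```python
COLD_IDEAL = 70.0  # F: below this, wind chill accrues discomfort points
HOT_IDEAL = 80.0   # F: above this, heat index accrues discomfort points

# Calibrate so 40 degrees of cold (70 -> 30 F wind chill) equals
# 12 degrees of heat (80 -> 92 F heat index), as described above.
HOT_WEIGHT = (COLD_IDEAL - 30.0) / (92.0 - HOT_IDEAL)  # = 40/12

def discomfort_points(wind_chill_f, heat_index_f):
    """Hypothetical discomfort for one 6-hour period; 0 means ideal."""
    if wind_chill_f < COLD_IDEAL:
        return COLD_IDEAL - wind_chill_f
    if heat_index_f > HOT_IDEAL:
        return HOT_WEIGHT * (heat_index_f - HOT_IDEAL)
    return 0.0

# A day's score is the sum over its four 6-hour periods
# (each tuple is a made-up (wind chill, heat index) pair):
day_score = sum(discomfort_points(wc, hi)
                for wc, hi in [(30, 30), (55, 55), (72, 74), (85, 92)])
```

Under this sketch, a frigid morning (30 F wind chill) and a sweltering afternoon (92 F heat index) each contribute 40 points, matching the stated calibration.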
Averaged over the entire year (first figure below), it's clear that heat and humidity over the Gulf of Mexico combine to make it, and the areas immediately adjacent to it, quite uncomfortable. In fact, due to high heat indices, these areas lead the country in discomfort three seasons of the year. With its lower moisture, most of the West is considerably more comfortable than the Southeast despite high summer temperatures, with the sole exception of North-American-Monsoon-influenced southern Arizona. The entire Pacific coast, along with eastern New England, stands out as the most comfortable. In the northern tier of the country and into southern Canada, the annual-average discomfort is dominated by the winter months, and thus it increases farther north. In terms of individual cities, the most comfortable year-round are San Francisco (mild winters and cool summers) and Honolulu (mild winters and warm summers). The most uncomfortable are Brownsville, TX and Miami.
In the next figure, we can see the same data computed relative to the normal picture. This shows that 2016 was a warm year in nearly all of the country: the southern parts were less comfortable than normal (percentiles > 50), and the northern parts were more comfortable (percentiles < 50). Where the color scale maxes out in dark red — in the northern Gulf of Mexico and the western Atlantic — discomfort was record-high due to record high SSTs, whereas in the Pacific Northwest it was record-low, due mostly to a lack of cold wintertime temperatures in concurrence with the cloudiness and precipitation of the strong El Niño.
Plots for winter 2017 can be found on the Recent Weather page. In winter, moisture is low enough that it doesn't really play a role in affecting comfort, and so comfort is a temperature-only story. Thus, as is typical, the most comfortable areas were Hawaii and Florida, and the least comfortable the Upper Midwest across to the inland Northwest (with anomalous warmth in the inland Northeast taking it out of this competition in which it is normally a contender). The Southeast and lower Midwest were much more comfortable than normal due to persistent anomalous warmth that has also resulted in those areas having phenological spring arrive several weeks early. In contrast to last year, cool conditions have dominated the Pacific Northwest and made it less comfortable than normal (the conclusion also reached by the Weather Channel after a parade of storms there).
As in 2016, a recap and discussion of the past winter in terms of snowfall (also tracked regularly on the Recent Weather page) will happen once the snow stops falling at the highest ski resorts, probably sometime in May. Regular blog posts will of course occur in the interim, so keep an eye out for those!
The faintest ink is stronger than the strongest memory — a paraphrase of a Chinese proverb
In the world today, we have data galore, and yet many substantial issues of public policy, whether economic, climatic, or otherwise, are decided largely through the perceptions of the deciders, whether or not those perceptions are substantiated by the data. I write this post on the presupposition that the truth still matters, though some cynics may argue that narrative is now everything and reality is nothing.
Part of the problem is that data is meek and tends to be overwritten (even among the scientifically literate and open-minded) by anecdotal perceptions that are dominated by memorable events and are in many cases not representative of the climate at a particular time and place. As a plausible hypothetical: people remember the one year when there were two large snowstorms a week apart, not the five years in between with no such storms at all. One recent study showed that perceptions of weather 5-20 years prior were pretty far off (the only category where people had any skill at all was recent summer temperatures), except for flashbulb memories on personally momentous days, such as Danes' very accurate recollections, decades later, of the weather on the days of the German invasion and liberation of their country during WWII. For flashbulb memories, bias — where it exists — tends to be positive when moods are high (liberation) and negative when moods are low (invasion), termed the "pleasantness bias" by psychologists. In line with this, respondents from Colorado stated that the weather on Sep 11 was worse than average, although in actuality it was sunny and warm across nearly the whole of the United States. These misperceptions are quite likely a function of several factors, such as the human tendency to remember the most 'impressive' thing (as it makes a better story), and the survival-driven need to remember as much as possible about the conditions surrounding an extreme, so as to be prepared for its next occurrence. People, not being machines, are also subject to heuristics like availability (judging what is typical by what comes most easily to mind) and to extending recent trends into the future, and in fact we often have more-accurate memories when the weather is pleasant.
An example of the corruptibility of human perception of weather & climate issues: for people who are on the fence about anthropogenic climate change, the percent professing belief increases linearly with recent temperature anomalies (Source: Hamilton & Stampone 2013, "Short-Term Weather and Belief in Anthropogenic Climate Change").
As a result, memories will never be entirely representative; however, I think there are ways to leverage them to help people better understand their “personal climate history”, by linking their memories more clearly to the supporting data. For example, by pointing out where their memories and the data line up (and thus where their memories provide local color to the data), and, where they don't, musing about the reasons why they remember something that is on the whole different from what actually happened on the preponderance of days. Establishing a trustworthy and viscerally true-seeming linkage between data and perception is in my view key for determining whether climate events, whatever and wherever they are, will be responded to proportionately and appropriately for a given location.
One step in the direction of remedying this issue is the Common Sense Climate Index [CSCI], developed to quantify the 'feeling' people have as to whether or not the climate in their area has changed significantly during their lifetime. It is fairly simple in concept: it compares average temperatures and the frequency of extreme temperatures to the values they had in a baseline period, e.g. 1951-1980. When the index in a year exceeds one standard deviation relative to the baseline distribution, that means the year's temperatures would previously have been expected only about once every 6 years, and the devisers of the index presume this difference is large enough to be noticed by the casual observer. Four illustrative figures of the index are shown below; almost the whole world exceeded 1.0 in 2016, and it's been at or above 1.0 in recent years for the world, the United States, and many individual cities. Some areas, particularly those with high interannual temperature variability or land-atmosphere interactions, like the US High Plains, have yet to see a consistent upward trend.
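The standardization step at the heart of the index can be sketched in a few lines of Python. The baseline values and the `simple_index` function below are hypothetical illustrations, not the published CSCI formula; the point is only to show why exceeding +1 standard deviation corresponds to roughly a 1-in-6-year event under a normal baseline.

```python
import math
import statistics

# Hypothetical 1951-1980 baseline of annual-mean temperatures (deg C)
baseline = [9.4, 9.8, 10.1, 9.6, 10.3, 9.9, 10.0, 9.7, 10.2, 9.5,
            9.9, 10.4, 9.8, 10.0, 9.6, 10.1, 9.7, 10.2, 9.9, 10.0,
            9.5, 10.3, 9.8, 9.9, 10.1, 9.6, 10.0, 10.2, 9.7, 9.8]
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

def simple_index(year_mean_c):
    """A year's temperature anomaly, in baseline standard deviations."""
    return (year_mean_c - mu) / sigma

# For a normally distributed baseline, the chance of landing beyond
# +1 standard deviation in a given year is about 16%, i.e. roughly
# one year in six -- the threshold the CSCI's devisers take to be
# noticeable by a casual observer.
p_exceed = 0.5 * (1 - math.erf(1 / math.sqrt(2)))  # ~0.159
```

A recent warm year, say 10.6 C against this made-up baseline, would score well above 1 on such an index.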
Endeavors like the CSCI are no doubt beneficial in the communication of science at a gut level, but this is not to say that memories should be discounted or considered passé as a mechanism of understanding climate. On the contrary, they play an essential role by filling in where data could not be obtained any other way, or where data exists but is of uncertain quality. A study in Iceland gives the example of diarists recording glacial positions, or encountering polar bears where none now exist. As this type of information is inherently qualitative, it is especially valuable when it describes something binary, where the accuracy is hard to dispute — someone either saw a polar bear or they didn't, with no room for ambiguity.
Also, memories of climate are significantly better than those of weather, particularly among people who work the land and have a good sense of conditions in a place over multiple generations, and who have precise and unambiguous points of reference that they note as a matter of habit: dates of ice breakup, dates of spring planting, locations of glacial tongues, etc. Even so, quantitatively, the best correlation between recollection and fact is pretty imperfect.
I'll close this post with some thoughts on how an index like the CSCI could be improved. Very simply, breaking out the components would allow people to see, for example, how extremes or particular months have changed in their region, helping them gain insight into the present climate (e.g. 'this past December seemed very unusual — was it really?') and connect it with their own past. Another modification could be customizing the index based on a basket of indicators in each region that are regionally meaningful — special events, ski-resort opening/closing dates, days of snow cover, leaf-out dates, when the window A/C has to be brought out of the garage, the first autumn night requiring a winter coat, etc. Ideally, this would make the index maximally relevant to everyone's lives. Shifting the reference period for people of different ages, or those who have moved, would also help accomplish that goal. Finally, using more crowdsourced data, in the CSCI and in general, would likely stimulate greater interest and investment in local-climate problems, as people are always more emotionally tied to data they have a hand in producing and that they know matters to them. Scientists are trained to think that all data matters, but outside of science the relevance has to first be felt in order to be believed.
As humans we've been engaged in unplanned terraforming for many years, as I discussed in a post a few months ago. This has come in the form of direct changes to the land via agriculture, logging, and urbanization, and (more recently) indirect changes via greenhouse-gas emissions. The direct changes are more localized and intense, while the indirect ones are generally regional-to-global and comparatively modest. Within each category of change, a range of mechanisms is at play, from albedo feedback to land-atmosphere interaction effects (on soil moisture, dust, etc.). Inspired by a thoughtful piece from a few weeks ago, here I'll focus in particular on desertification, a focus that also helps set reasonable parameters on the scope of a single post.
One of the core problems with desertification is the phenomenon known as "dry get drier, wet get wetter", which is at its heart a consequence of a strengthening of the water cycle in a warmer climate. This concept, while well-supported theoretically, applies to observations only in certain areas of the globe (see figure below). However, much of this apparent disagreement depends on the exact definitions of 'wet' and 'dry' used: for example, the relatively dry subtropics, like the Mediterranean, are indeed expected to dry further as this century goes on, but are considered 'wet' by that paper's reckoning.
Observed trends in precipitation from the mid- to late 20th century, combined with climatological moistness/dryness information to categorize regions according to whether they obey the 'dry get drier, wet get wetter' paradigm. From Greve et al. 2014, "Global assessment of trends in wetting and drying over land."
If we know deserts are encroaching, on farmland and cities alike, what can we do about it? In a sense, the forces enabling us to "turn back the deserts" are the same as those that we leveraged to cause the changes in the first place, and they are familiarly modern: enormous sums of money, massive spatial coordination, and skillful biophysical and climate models that allow for an understanding of how to optimize resources (e.g. the most-strategic locations to plant trees or to construct irrigation systems). Many of the modifications of course involve the areas surrounding cities, which are much larger and often easier to change, as implementing a different tilling method or planting more trees involves relatively few people and little cost compared to dealing with thousands or millions of individual units in an urban area.
Perhaps the best-known example of what could be considered purposeful terraforming is the remarkably successful shift in agricultural practices in the US High Plains in the mid-20th century. Following suspicions that human mistakes were to blame for the severity of the Dust Bowl (later proved correct), agricultural-extension efforts spread the gospel of practices like not leaving freshly tilled soil exposed to the wind. These changes were tested in the 1950s drought, which based on the SST pattern alone should have been as bad as or worse than that of the 1930s, and the result: no sad families leaving Oklahoma in beat-up cars. Mission accomplished.
While desertification primarily affects rural residents, cities are of significant interest in the problem because a. they are centers of people and resources, which suffer the most in absolute numbers when dust chokes lungs or shuts down businesses, and b. they are inextricably connected to the surrounding landscape. As a neat illustration of the latter point, a study in Iran directly connected urbanization to desertification through the greater wealth of urban residents, and their correspondingly greater usage, and therefore wastage, of water. Cities are also in some sense useful laboratories for studying aspects of desertification in microcosm, given that they nearly always have higher temperatures and lower moisture than neighboring areas, due in part to the urban-heat-island effect and in part to the lack of permeable surfaces in which moisture can be retained.
Finally, thinking abstractly, cities are instructive because they represent the most complex systems we as a species have ever constructed, and their elaborate infrastructure and balancing of many interests can perhaps provide a framework for how to go about tweaking the analogously (but much more) complex climate system. This viewpoint has even made it to the stodgy folks at the United Nations Environment Programme. While the overall situation is serious, I believe it merits optimism. That being said, far-fetched plans such as towing icebergs to the Persian Gulf, or building mountains there to squeeze out moisture, will not happen in the foreseeable future: even gushers of oil money just aren't enough to terraform on those scales with anything like current technology. Besides, shifting cartography is a matter not just of physical geography, but of social and geopolitical geography, and this truism has inspired some very specific meditations on what it all means. A more realistic prediction is that in desertifying regions, cities themselves will change to match their surroundings and remain sustainable; as one small example, buildings could be constructed from naturally occurring (and naturally insulating) sand. Regardless, industrialization, deforestation, and urbanization will continue apace throughout the developing world, so as with greenhouse-gas emissions, the real question with terraforming is not whether it will occur to any substantial degree in our lifetime, but whether it will be planned or unplanned, and whether we will be prepared for the changes it will wreak.