Following Lorenz’s work using analogue pairs to establish 10-to-14-day predictability limits for synoptic weather regimes, we have estimated predictability limits for the Rex block, the long-wave wintertime ridge over the eastern Pacific Ocean and the western United States. This was accomplished using mid-latitude geopotential height reanalysis data over a period of 38 years, 1979–2016, and the associated 90-day winters (DJF). The metric used to define analogue pairs is the RMS difference assessed for the hemispheric 850, 500, and 200 hPa geopotential height fields. The resultant set of analogue pairs was used to estimate predictability with respect to both a single latitude circle (40° N) that passes through the Rex block and a multi-latitude swath (20–80° N). Our method yielded a range of results that vary with the choice of Fourier component, over wavenumbers 2 through 8. These results indicate that system predictability for low wavenumber components exceeds the 10-to-14-day limit implied by Lorenz’s results. Our results, which extend to 21 days, the maximum predictability limit value allowed by our method, do not preclude the possibility of system predictability beyond 21 days. The unique aspect of this work is the determination of predictability limits as a function of geopotential wave structure found through Fourier decomposition.

We report on a novel method of estimating limits of system predictability and describe this method and the consequent numerical results in detail. However, from the outset, we bring certain issues to the reader’s attention that connect our work to the work of Edward Lorenz [

In the early 1960s, an effort was made to estimate the doubling time of small errors in the initial conditions of three general circulation models (GCMs). Starting from two or more very similar initial states, it became common practice to measure the error by finding the root-mean-square (RMS) difference between solutions until the time when the separate solutions lost all resemblance to one another. At that point, the difference is comparable to the difference between solutions chosen at random. Results are summarized in Charney et al. [

As an alternative to studying error growth by perturbing the initial conditions in a numerical model [

Here, we explore the relationship between certain large-scale atmospheric structures and their inherent predictability. More specifically, we test whether the temporal and spatial character of certain large-scale structures renders them predictable past the canonical 10-to-14-day Lorenzian limit. To do this, we have combined Lorenz’s method of AP progressions (APPs) with low wavenumber Fourier components derived from a set of GPH fields provided by GPH reanalysis. Our results show the strong possibility that system predictability with respect to large-scale structures extends well beyond this limit.

The initial motivation for our study came from our interest in the strongly persistent patterns of wintertime high pressure over the eastern Pacific Ocean and western United States that are commonly called Rex blocks. Daniel Rex first described such structures in his pioneering study [

Although instances of Rex blocks vary in location and shape, they are by definition large in spatial and temporal extent. These gross characteristics allow them to be characterized by a small number of low wavenumber GPH Fourier components. The amplitudes and phase angles that emerge from the Fourier decomposition signify the strength and the longitudinal position of the Rex block, respectively. The decomposition thus allows us to clearly define the presence and strength of Rex blocks.

Little research has been conducted on this topic in recent decades, although GCM studies have indicated that long-wave structures are more predictable than shorter synoptic-scale structures [

While the work in Lorenz [

The logical connection between our numerical findings and the conclusions we may draw from them about system predictability has been based upon Lorenz’s work, but our approach is not identical to Lorenz’s. Our premise is that a large number of fairly good APs will describe system predictability better than a few exceptionally good APs.

Although we begin by generating a roster of APs in a manner very similar to Lorenz [

It was by finding a compromise between two opposing considerations that we arrived at the AP set GPH difference threshold value (TV) criterion of 100 m. The first consideration was that the range of difference values between a given AP set’s average difference value and the Esat value, the ‘error saturation’ value at which similarity is lost, should be as great as possible. This is so that our APP curve, built from the chosen set of APs, may traverse as much of the hypothetical ideal curve’s difference range (which ostensibly would extend to difference values close to zero) as possible, given our data. This consideration would have us choose only a few—possibly only one—of the lowest difference-valued APs.

The second consideration is that any very small set of APs will exhibit a substantial amount of variability in the resultant APP curve of averaged values. The effect of this type of variability is that such an APP curve will not rise smoothly to cross the Esat line at such a point as to give a clear indication where to demarcate the limit of predictability. For example, the ‘TV 75 m’ curve in

The minimum AP difference value found in the background comparison set is 72.7 m. The Esat value is 155.2 m. What value would be the optimum TV? In attempting to qualitatively reconcile these two considerations for choosing TV, we chose 100 m because this was judged to be the lowest difference value that gave us sufficiently large AP set sizes to smooth out the consequent APP curves. This choice gave us an AP set of size 22,287. As we shall see, due to the additional narrowing choices that our different tests required, this number will be reduced (in some cases to sets of only several hundred APs), thereby further limiting the AP set size from which to form APPs. While not perfectly smooth, we judge these resulting APP curves to be sufficiently smooth such that they yield reasonably sharp predictability limit results. A larger AP set would smooth these curves out even more, but we would thereby lose even more range at the low end by starting at a higher average difference value on day 1. This ‘low-end’ range of the average AP set difference values covered by the different TV choices is exactly as seen in

We admit that the 100 m TV criterion is to some extent an arbitrary choice. However, we judged that small changes in that value—perhaps 98 to 102 m—and the different AP sets chosen would not greatly affect our predictability limit results, but that large changes in the criterion value outside that range would no longer reconcile well the two considerations just discussed and would have deleterious effects on the predictability results. On this basis, we assert that the 100 m choice is close to optimum.

Our use of a large set of APs is a significant departure from Lorenz. We propose that we should not limit ourselves to pursuing only the very ‘best’ APs in assembling our roster. Rather, loss of similarity, as a general principle, is driven by the accumulation of differences between many somewhat similar, but not very similar, states. The use of many APs, and the APPs built from them, gives us a better view into the system as a whole.

We can construct and interpret the behavior of different ensembles of APPs according to different comparison regimes. These regimes can be narrowly chosen to characterize different, specific aspects of the system. We have chosen to follow the APPs constructed from single low wavenumber Fourier components of the 500 hPa GPH field, over the wavenumber range 2–8. We show by comparing the spatial extents of examples that some of these particular components are closely related to Rex blocks.

By following the behaviors of APPs formed from low wavenumber Fourier components and deriving system predictability limits thereby, we are testing the hypothesis that predictability limit is inversely related to component wavenumber. This hypothesis is a formal statement of the intuitively attractive idea that large, persistent structures, such as Rex blocks, should possess an inherently greater range of predictability in time than smaller and more ephemeral structures. It also allows us to draw a direct line from the reanalysis data to Rex block predictability.

We argue that wavenumber components 2–5 are a suitable representation for the large structures—that is, the strong ridging of Rex blocks—with which we are primarily concerned.

Panel (b) of

Another way of illustrating this issue is to compare the longitudinal extent of one peak each of wavenumbers 2–5 with the ridge shown by the GPH signal centered at longitude −130°.

The data used were obtained from the NOAA ESRL-PSD ‘NCEP-DOE Reanalysis 2’ data set, daily mean values [

As we proceed with describing in detail how we used these data, we wish first to make clear how our process involved several steps that progressively narrowed the number of two-state comparisons used. The fundamental divide among these steps was between (1) establishing a large set of APs, in which we adhered closely to Lorenz’s RMS comparison protocol, and (2) introducing Fourier decomposition to act upon this set of APs, in order to measure how system predictability varied among combinations of different wavenumber Fourier components and the presence or absence of ridging in A1.

The differencing method used by Lorenz [ ] defines the difference between two states, days $j$ and $k$, as the RMS difference of their geopotential height fields:

$$D_{jk} = \left[ \frac{1}{N} \sum_{i} \sum_{p} \left( Z_{ijp} - Z_{ikp} \right)^{2} \right]^{1/2},$$

where $Z_{ijp}$ is the geopotential height at horizontal grid point $i$ and pressure level $p$ on day $j$, and $N$ is the total number of grid point/level combinations.
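As a minimal sketch of this RMS comparison (the function and argument names are ours, not the study’s; the optional weights accommodate the kind of weighting Lorenz applied, which this study omits):

```python
import numpy as np

def rms_difference(z_a, z_b, weights=None):
    """RMS difference between two geopotential height fields of the
    same shape, optionally weighted (Lorenz applied such a weighting;
    unweighted corresponds to the plain RMS used here)."""
    z_a = np.asarray(z_a, dtype=float)
    z_b = np.asarray(z_b, dtype=float)
    w = np.ones_like(z_a) if weights is None else np.asarray(weights, dtype=float)
    # weighted mean of the squared differences, then the square root
    return float(np.sqrt(np.sum(w * (z_a - z_b) ** 2) / np.sum(w)))
```

Two days whose fields give a value below the chosen threshold would qualify as an analogue pair.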

We also chose to disallow intra-winter comparisons—that is, no APs were included in which both dates were chosen from the same winter—so as not to introduce the higher correlations of states due to persistence caused by close temporal association. No additional stipulations, such as the presence or absence of ridging over the eastern Pacific, were introduced at this step. Lorenz used a weighting to adjust his two-state comparisons, to account for drawing days from different seasons, since different seasons show differing variability in GPH range. As we have confined our inquiry to 90-day winters, we have omitted this type of adjustment. We have assumed instead that such variability within the winter season (i.e., between the forcing regimes expressed by 1 December versus 15 January versus 28 February) is not large enough to affect our results. Up to this point, we have not used any Fourier decomposition.
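The two restrictions described so far (a difference value below the threshold, and no intra-winter pairings) can be sketched as a simple selection routine; the names here are hypothetical, not the study’s code:

```python
import numpy as np
from itertools import combinations

def select_analogue_pairs(diffs, winters, tv=100.0):
    """Return the index pairs (i, j) forming the AP roster: RMS
    difference below the threshold value `tv`, with the two days drawn
    from different winters.  `diffs` is a symmetric n-by-n matrix of
    RMS differences; `winters[i]` labels the winter of day i."""
    n = diffs.shape[0]
    return [(i, j) for i, j in combinations(range(n), 2)
            if winters[i] != winters[j] and diffs[i, j] < tv]
```

With TV = 100 m applied to the full background comparison set, this kind of selection yields the AP100 roster described above.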

As mentioned above, the AP100 set of analogue pairs, calculated by a purely Lorenzian scheme of RMS differencing, is for the present study merely a starting point. The chief innovation of our study lies within our application of different comparison regimes to the initial AP100 set. The choices of comparison regime were based upon the parameters of wavenumber, phase angle, and amplitude that only become available upon the imposition of Fourier decomposition. By choosing comparison regimes based upon these parameters, we were able to construct APPs whose predictability results illustrated specific qualities of system predictability related to large-scale structures such as Rex blocks.

We now describe our methods for creating APPs that are based upon the 500 hPa GPH data from just one latitude circle at 40° N using Fourier component wavenumbers 2 through 8. Each component was treated separately, to make its own small family of analogue pair progressions.

The choice of using wavenumbers 2–8 was based upon two considerations. The first was that their amplitudes constitute most of the total wave amplitude. For example, on 23 January 2014, the summed amplitudes of wavenumbers 2–8 constituted 67% of the total signal amplitude. The second consideration was that the half-wavelength extent of these components’ waveforms covered the longitudinal range we expect for the size of ridging in A1. However, as we argued at the beginning of this section, it seems that wavenumbers 2–5 determine the structure of Rex blocks (

We decomposed the AP100 set as follows. For any day of interest, within our temporal range, the chosen latitude of 40° N resulted in a subset of the 500 hPa GPH reanalysis data with 144 entries. Fourier decomposition was performed on this set. Fourier analysis theory states that, for a given set, there is only one unique component for any single wavenumber (e.g., Ref. [

Wavenumber zero is the set mean value, a constant value around that latitude circle; its ‘amplitude’, as defined by the Fourier decomposition process, is the mean value of the GPH signal, its frequency is zero, and its phase angle is undefined. The rest of the Fourier components, numbers 1–143, are sine waves with zero mean value, with frequency, amplitude, and phase angle values provided by the decomposition. The variability of that set of 144 values lives within components 1–143. Or, more properly, we should stipulate that due to the Nyquist frequency of 144/2 = 72, the variability lives in wavenumbers 1 through the ‘fold’ component defined by the Nyquist frequency, wavenumber 72; the higher-wavenumber, trans-fold components (73–143) have amplitudes that exactly mirror those of their below-fold (1–71) conjugates. In keeping track of the component amplitudes, one must account for the splitting of amplitudes that this mirroring causes. Since these fold conjugate pairs are identical in amplitude, we must multiply any given wavenumber amplitude from this decomposition process by two to arrive at its correct value, except for wavenumber 72 itself [
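A minimal sketch of this bookkeeping, assuming NumPy’s `rfft` conventions (the function and variable names are ours): dividing by the signal length and doubling every amplitude except the mean and the Nyquist (wavenumber 72) term recovers the physical amplitudes.

```python
import numpy as np

def decompose_latitude_circle(gph):
    """Fourier-decompose one 144-point latitude circle of GPH.
    Returns (mean, amps, phases): amps[k-1] is the physical amplitude
    of wavenumber k for k = 1..72, with the factor-of-two correction
    for the folded conjugates applied (but not to wavenumber 72)."""
    gph = np.asarray(gph, dtype=float)
    n = gph.size                       # 144 longitudinal increments
    coeffs = np.fft.rfft(gph) / n      # wavenumbers 0..n/2 only
    mean = coeffs[0].real              # wavenumber 0: the circle mean
    amps = 2.0 * np.abs(coeffs[1:])    # double: each below-fold term
    amps[-1] /= 2.0                    #   shares amplitude with its
                                       #   trans-fold conjugate,
                                       #   except the Nyquist term
    phases = np.angle(coeffs[1:])      # phase angle per wavenumber
    return mean, amps, phases
```

For a pure wavenumber-3 signal of amplitude 120 m about a 5500 m mean, this returns a mean of 5500 and a wavenumber-3 amplitude of 120, with all other amplitudes near zero.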

In our analysis, each wavenumber family had three members, according to three different initial parsing regimes. These regimes were as follows: S1, parsed for ridging in both amplitude and phase angle (as defined in the paragraph below); S2, the analogue pairs left over from the S1 parsing; and S3, the total set of analogue pairs. Numerically, S1 + S2 = S3, and whereas S1 and S2 vary in size, S3 had a fixed size of 8502 analogue pairs. Note that S3, at 8502 pairs, is much smaller than AP100’s 22,287 pairs because we also imposed a separate parsing onto each set to address a temporal issue. We did not want our 21-day progressions extending past February and into March, and thus, we imposed the restriction of only allowing analogue pairs that satisfy this stricture; S1, S2, and S3 all conform to this restriction. We refer to this expedient as ‘date truncation’.

The initial parsing for ridging to form regime S1 used the restriction that, to be part of the set, both members of an analogue pair had to exhibit ridging in A1 and have an amplitude equal to or greater than the average amplitude of that wavenumber over our 38 winters. Ridging in A1 was defined by choosing phase angle ranges such that a wave peak for that wavenumber would lie within the longitude range of −160° to −120°. This range was chosen as a suitable subset of A1, in that we decided to exclude from our ridging definition those wave peaks lying near (within 10° of) the boundaries of A1. Set S2 was therefore composed of exactly those analogue pairs that failed the S1 parsing definition. Our choice of the S1 parsing was designed to be generally consistent with the structure of the high-pressure ridges of Eastern Pacific Rex blocks, in terms of wave peak position within A1 and wave amplitude.
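As an illustrative sketch of the S1 test (hypothetical function names; we assume a component of the form A·cos(k·λ − φ), so wave peaks sit at longitudes where k·λ equals φ modulo 360°):

```python
import numpy as np

def has_a1_ridge(k, amplitude, phase, mean_amplitude):
    """True if a day 'ridges in A1' for wavenumber k: amplitude at or
    above the 38-winter mean for that wavenumber, and one of the k
    wave peaks within -160 to -120 deg longitude."""
    if amplitude < mean_amplitude:
        return False
    # peaks of A*cos(k*lon - phase) repeat every 360/k degrees
    peaks = (np.degrees(phase) + 360.0 * np.arange(k)) / k
    peaks = (peaks + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    return bool(np.any((peaks >= -160.0) & (peaks <= -120.0)))

def in_s1(k, pair, mean_amplitude):
    """Both members of the analogue pair must ridge in A1."""
    return all(has_a1_ridge(k, amp, ph, mean_amplitude) for amp, ph in pair)
```

Pairs failing `in_s1` fall into S2, so S1 and S2 partition the full set S3.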

From the sets of analogue pairs defined by each initial parsing, we built 21-day progressions of the difference values between the analogue pairs as each stepped forward in time. Day 1 shows the difference values between the states of the analogue pairs themselves; day 2 shows the difference values between the states one day past each analogue pair member; and so on, up to 21 days. Since the analogue pairs were chosen specifically for their Lorenzian state similarity, we expected the progressions to show a gradual decrease in similarity, that is, an increase in difference values. Although the resulting curves are not monotonically increasing, they in general follow an increasing trajectory toward the Esat lines. We suspect that larger numbers of analogue pairs would tend to smooth the curves even more and reduce the effects of day-to-day variability.
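The progression bookkeeping and the limit read-off can be sketched as follows (hypothetical names; the limit is taken as one day before the averaged curve first reaches Esat, with ‘21+’ marking a curve that never crosses):

```python
import numpy as np

def app_curve(progressions):
    """Average a set of analogue-pair difference progressions
    (shape: n_pairs x n_days) into a single APP curve."""
    return np.asarray(progressions, dtype=float).mean(axis=0)

def predictability_limit(mean_diffs, esat, max_days=21):
    """Read a predictability limit off an APP curve: mean_diffs[d-1]
    is the average difference on day d.  Returns one day before the
    curve first reaches Esat, or '21+' if it never crosses."""
    for day, value in enumerate(mean_diffs[:max_days], start=1):
        if value >= esat:
            return day - 1
    return f"{max_days}+"
```

For example, a curve that first reaches Esat on day 19 implies a limit of 18 days.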

A progression length of 21 days was chosen for two reasons. First, we wanted to allow the possibility of trans-Lorenzian predictability limits (more than 14 days) to present themselves. Second, we did not want the progression length to be so long as to drastically limit the analogue pairs chosen. Had we chosen a progression length of, say, 45 days, while still requiring our progressions not to extend into March, only those analogue pairs whose members both started in December or the first half of January could have been chosen. An avenue of future research may be to chart the effects of employing longer progression lengths.

We note at this point that the phase angle and amplitude differences we are using to construct APPs are a natural evolution of the concept of error as used by Lorenz in his 1969 study. In that work, the pattern of difference growth between an analogue pair is analogous to the error that grows between forecasts and the true atmosphere. There is no reason that measuring the accumulation of this error should be confined only to RMS differences of the GPH field. In our work, we have constructed different distributions of two-state difference values for the chosen fields among the many day/states of our temporal domain. The subsequent patterns of variation within these distributions provide different views into the predictability of the system as a whole.

An extensive justification of this strategy seems unnecessary, since it is well known that many different fields are calculated separately for standard forecast model outputs. That is, within the predictive realm of atmospheric science, multiple kinds of inquiries and results are embraced as a matter of course. Our efforts should be seen in this same general light, of choosing to use specific analytical tools of Fourier analysis to focus narrowly on differing aspects of the atmospheric system.

We now turn to an application of the same method, but instead of just one latitude circle, we expanded the analysis to include a broad swath of the northern hemisphere, from 20° to 80° N. As in the previous section, our analysis included the wavenumber range of 2–8.

Our second APP study centered primarily on phase angles. The importance of these phase angles can best be understood when we consider which aspect of waves most characterizes an Eastern Pacific ridge: its longitudinal position, which, upon decomposition, is purely a function of the component phase angle. The method devised to include 25 latitude circles did not lend itself to including amplitude information, and our attempts to do so proved problematic. We hope a future iteration of this work can include both phase angle and amplitude information when expanding the analysis from single to multiple latitudes. It would, for example, be an improvement in the experimental design to impose an initial parsing regime that excludes those analogue pairs whose primary wavenumber components have low amplitude, despite lying in the correct phase angle ridging range.

We chose the latitude swath of the 25 increments from 20° to 80° N. In our data, these 25 complete latitude circles each have 144 longitudinal increments. Each latitude circle was decomposed as an independent signal containing 144 values, yielding 144 components (72 below-fold components) by Fourier decomposition. The choice of 20–80° N as the latitudinal range of the large swath study was arrived at via the following considerations. We wished to include a latitudinal range sufficient to cover the extent of the varying structure of Rex blocks. This must include most of the northern hemisphere, but we did not want the complication of including tropical and equatorial processes, nor those peculiar to the region immediately surrounding the North Pole. While this study investigates processes of the mid-latitudes, its design should not exclude potentially important information that may lie beyond a narrow reading of mid-latitude extent (perhaps 30–60° N). As shown in the two examples of

For the 25 latitude circles, the data for one day then yielded 25 triplets for each of the 144 components. As in the previous section, we developed each progression sequentially for each wavenumber 2–8, and thus, for each wavenumber, the only changing values were those of amplitude and phase angle. In effect, a day’s full decomposition of 25 latitudes by 144 component triplets was reduced to an array of 2-by-25 values: the given wavenumber’s amplitudes and phase angles for the 25 latitude circles. In order to compare day-states of the analogue pairs and their progression, we devised an averaging scheme to find phase angle centroid values relative to the center of ridging in A1, at longitude −140°.
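The paper’s exact averaging scheme is not reproduced here; as one plausible sketch (an assumption, not the authors’ method), a circular mean of the 25 per-latitude peak longitudes, referenced to −140°, avoids the ±180° wrap-around problem inherent in averaging longitudes:

```python
import numpy as np

def phase_centroid_offset(peak_lons_deg, center=-140.0):
    """Hypothetical phase-angle centroid: the circular mean of the
    wave-peak longitudes across the latitude circles, reported as a
    degree offset east of the A1 ridging center at -140 deg."""
    ang = np.radians(np.asarray(peak_lons_deg, dtype=float) - center)
    # circular mean: average the unit vectors, then take the angle
    return float(np.degrees(np.arctan2(np.sin(ang).mean(),
                                       np.cos(ang).mean())))
```

Peaks at −150° and −130°, for instance, average to an offset of 0°, i.e., centered on the A1 ridging center.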

To give a sense of the general pattern of these progressions, an example is shown in

Our results are summarized in

The variability within our results suggests a complex and nuanced relationship between wavenumber and predictability limit. To form a better summary illustration of our overall results, we calculated average predictability limit values from the two tables of results by averaging the results for each wavenumber. That is, six results per wavenumber came from

It is not a simple picture. The averaged predictability limit values for wavenumbers 2 and 4 seem anomalously low, when compared to the pattern shown by the 3 and 5 through 8 wavenumbers’ limit values. One possibility is that our method was somehow flawed. However, proceeding on the assumption that our averaged results indicated a genuine, general pattern, one way to understand this pattern is to posit that in it, we are catching a glimpse into how system uncertainty is partitioned among these Fourier components.

To that end, the pattern of values shown in

We offer this unproven and speculative idea as an example of an avenue by which to connect the narrow, abstract realm of our results to possible answers to the important general question: in both geometrical and dynamical terms, how do low wavenumber components find accord with the size, spatial and temporal, of Rex blocks and other large-scale structures? In this case, our surmise offers this connection. It may later be shown how system predictability is somehow partitioned among these families, according to as yet unknown rules likely dictated by system geometric and energetic constraints. If such rules exist, they have the potential to show why, in a rigorous dynamical argument, a Rex block at 40° N has the size, shape, and duration it exhibits.

While it has been noted in this review that our results overall are perhaps not surprising, in that Rex blocks are characterized by their tendency to persist, we draw the reader’s attention to the following implications of our numeric results. We have shown that the same Fourier components that are most closely associated with the structure of the Rex blocks in our examples are the very components that most exhibit extended predictability. We feel that our demonstration of this congruence of such specific structural detail with estimates of associated predictability limit values is a worthwhile new result.

The chief result of our work is evidence of trans-Lorenzian predictability for Fourier components of wavenumbers that are consistent with the scale of Rex block structures. The evidence, offered here for wavenumbers 2, 3, and 5, implies that prediction into the seasonal regime of forecasting is possible.

The possible connection between the wavenumbers, along with their families of fundamental and harmonic frequencies, as suggested by

One of our chief motivations was to investigate the predictability limit of Rex blocks. While our results only have a narrow applicability to this problem, our application of low wavenumber Fourier components lays a technical foundation for further investigating the predictability limit of such large-scale structures. As noted in

However, we have not taken the next step of parsing our general set of APPs for those progressions that display persistent A1 ridging throughout the 21 days of comparison. Our parsing only focused on day 1, the APs themselves. We did not track the extent to which the two paths in each progression stayed within an A1 ridging posture, but this omission points to a positive outcome of our work. It suggests many new lines of inquiry, based upon our general technical approach. We believe that our template holds much promise for further investigation into how extended predictability is partitioned within the atmospheric system.

It is worth noting the possibility, however, that higher wavenumber components may yet be shown to be significant aspects of the Rex block, in helping to initiate or maintain the patterns we have focused on. Therefore, while parsimony may imply beginning with the simplest version and lowest wavenumber components, it may well prove necessary to include some higher wavenumber components to adequately encompass a suitable level of complexity in order to represent this system.

Conceptualization, M.L., J.L. and H.M.; methodology, M.L., J.L. and H.M.; software, M.L.; validation, M.L., J.L. and H.M.; formal analysis, M.L., J.L. and H.M.; investigation, M.L., J.L. and H.M.; data curation, M.L.; writing—original draft preparation, M.L.; writing—review and editing, M.L., J.L. and H.M.; supervision, J.L. All authors have read and agreed to the published version of the manuscript.

Not applicable.

Not applicable.

In terms of data, this work is exclusively based on the NOAA ESRL-PSD ‘NCEP-DOE Reanalysis 2’ data set [

It is a pleasure to acknowledge helpful discussions with additional members of Marshall B. Liddle’s committee, including M. Green, E. N. Wilcox, and F. C. Harris, Jr.

The authors declare no conflict of interest.

Comparison of six different APPs, based upon different threshold value (TV) criterion choices. APP curves based upon larger sets of APs show less day-by-day variability than those based upon fewer APs. We argue that the smooth curve of the ‘TV 100 m’ APP (light blue datapoints) gives the best result in terms of indicating a predictability limit value.


Same data as

Daily average 500 hPa geopotential heights (GPH in m) data plotted over the geographic area A1 described in the text, for (

The ‘dispersal of the blob’: five time slices of the analogue pair progression (APP) for wavenumber 3, S1 parsing regime. Depicted are the aggregate patterns of amplitude difference and phase angle difference as a function of the day of the progression. This illustrates the progression of 3899 analogue pairs.

Analogue pair progression (APP) for mean amplitude difference, for wavenumber 3, parsing regime S1. The dashed, vertical blue line indicates the limit of predictability. The implied predictability limit is 19 − 1 = 18 days, since the curve crosses the Esat line at day 19. This illustrates the progression of 3899 analogue pairs. Red datapoints indicate values below Esat; black datapoints indicate values at or above Esat.

Analogue pair progression (APP) for mean phase angle difference, for wavenumber 3, parsing regime S1. The implied predictability limit exceeds the progression limit of 21 days, since the difference curve never crosses the Esat line. This illustrates the progression of 3899 analogue pairs—this and the previous figure drew from the same set.

Predictability limit results in days from

Predictability limit results in days from the single latitude analogue pair progression experiment, over wavenumbers 2–8, for initial parsings S1, S2, and S3, and for either component phase angle (PA) or amplitude (Amp).

Wavenumber | S1 PA | S1 Amp | S2 PA | S2 Amp | S3 PA | S3 Amp |
---|---|---|---|---|---|---|
2 | 21+ | 7 | 12 | 21+ | 13 | 21+ |
3 | 21+ | 19 | 21+ | 21+ | 21+ | 21+ |
4 | 8 | 2 | 9 | 8 | 9 | 7 |
5 | 21+ | 2 | 21+ | 4 | 21+ | 3 |
6 | 7 | 2 | 17 | 3 | 18 | 3 |
7 | 6 | 2 | 7 | 10 | 7 | 4 |
8 | 5 | 1 | 8 | 4 | 8 | 4 |

Phase angle (PA) predictability limit summary in days from the multi-latitude analogue pair progression study, for wavenumbers 2–8.

Wavenumber | PA Predictability Limit |
---|---|
2 | 16 |
3 | 14 |
4 | 12 |
5 | 19 |
6 | 9 |
7 | 6 |
8 | 6 |