Posit AI Blog: AO, NAO, ENSO: A wavelet analysis example



Recently, we showed how to use torch for wavelet analysis. A member of the family of spectral analysis methods, wavelet analysis bears some similarity to the Fourier Transform, and in particular, to its popular two-dimensional application, the spectrogram.

As explained in that book excerpt, though, there are significant differences. For the purposes of the current post, it suffices to know that frequency-domain patterns are discovered by having a little “wave” (that, really, can be of any shape) “slide” over the data, computing degree of match (or mismatch) in the neighborhood of every sample.
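
To make the “sliding” intuition concrete, here is a minimal, self-contained sketch (my illustration, not code from torchwavelets – the package works with a whole set of scales, and computes the transform in the Fourier domain): we correlate a little, localized wave with a toy signal around every sample, which amounts to a one-dimensional convolution.

library(torch)

# Toy signal: four full cycles of a sine wave.
x <- torch_sin(torch_linspace(0, 8 * pi, 256))

# A little, localized "wave" (a crude, real-valued, Morlet-like shape).
t <- torch_linspace(-4, 4, 33)
wavelet <- torch_exp(-t^2 / 2) * torch_cos(5 * t)

# "Slide" the wavelet over the data: one degree-of-match value
# for the neighborhood of every sample.
match_strength <- torch_conv1d(
  x$view(c(1, 1, -1)),
  wavelet$view(c(1, 1, -1)),
  padding = 16
)
match_strength$shape
[1]   1   1 256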

With this post, then, my goal is two-fold.

First, to introduce torchwavelets, a tiny, but useful package that automates all the essential steps involved. Compared to the Fourier Transform and its applications, the topic of wavelets is rather “chaotic” – meaning, it enjoys much less shared terminology, and much less shared practice. Consequently, it makes sense for implementations to follow established, community-embraced approaches, whenever such are available and well documented. With torchwavelets, we provide an implementation of Torrence and Compo’s 1998 “Practical Guide to Wavelet Analysis” (Torrence and Compo (1998)), an oft-cited paper that proved influential across a wide range of application domains. Code-wise, our package is mostly a port of Tom Runia’s PyTorch implementation, itself based on a prior implementation by Aaron O’Leary.

Second, to show an attractive use case of wavelet analysis in an area of great scientific interest and tremendous social importance (meteorology/climatology). Being in no way an expert myself, I’d hope this might be inspiring to people working in these fields, as well as to scientists and analysts in other areas where temporal data arise.

Concretely, what we’ll do is take three different atmospheric phenomena – El Niño–Southern Oscillation (ENSO), North Atlantic Oscillation (NAO), and Arctic Oscillation (AO) – and compare them using wavelet analysis. In each case, we also look at the overall frequency spectrum, given by the Discrete Fourier Transform (DFT), as well as a classical time-series decomposition into trend, seasonal components, and remainder.

Three oscillations

By far the best-known – the most notorious, I should say – among the three is El Niño–Southern Oscillation (ENSO), a.k.a. El Niño/La Niña. The term refers to a changing pattern of sea surface temperatures and sea-level pressures occurring in the equatorial Pacific. Both El Niño and La Niña can and do have catastrophic impact on people’s lives, most notably, for people in developing countries west and east of the Pacific.

El Niño occurs when surface water temperatures in the eastern Pacific are higher than normal, and the strong winds that normally blow from east to west are unusually weak. From April to October, this leads to hot, extremely wet weather conditions along the coasts of northern Peru and Ecuador, regularly resulting in major floods. La Niña, on the other hand, causes a drop in sea surface temperatures over Southeast Asia as well as heavy rains over Malaysia, the Philippines, and Indonesia. While these are the areas most gravely affected, changes in ENSO reverberate across the globe.

Less well-known than ENSO, but highly influential as well, is the North Atlantic Oscillation (NAO). It strongly affects winter weather in Europe, Greenland, and North America. Its two states relate to the size of the pressure difference between the Icelandic Low and the Azores High. When the pressure difference is high, the jet stream – those strong westerly winds that blow between North America and Northern Europe – is even stronger than usual, leading to warm, wet European winters and calmer-than-normal conditions in Eastern North America. With a lower-than-normal pressure difference, however, the American East tends to incur more heavy storms and cold-air outbreaks, while winters in Northern Europe are colder and more dry.

Finally, the Arctic Oscillation (AO) is a ring-like pattern of sea-level pressure anomalies centered on the North Pole. (Its Southern-hemisphere equivalent is the Antarctic Oscillation.) AO’s influence extends beyond the Arctic Circle, however; it is indicative of whether, and how much, Arctic air flows down into the middle latitudes. AO and NAO are strongly related, and might designate the same physical phenomenon at a fundamental level.

Now, let’s make these characterizations more concrete by looking at actual data.

Analysis: ENSO

We start with the best-known of these phenomena: ENSO. Data are available from 1854 onwards; however, for comparability with AO, we discard all records prior to January, 1950. For analysis, we pick NINO34_MEAN, the monthly average sea surface temperature in the Niño 3.4 region (i.e., the area between 5° South, 5° North, 190° East, and 240° East). Finally, we convert to a tsibble, the format expected by feasts::STL().

library(tidyverse)
library(tsibble)

download.file(
  "https://bmcnoldy.rsmas.miami.edu/tropics/oni/ONI_NINO34_1854-2022.txt",
  destfile = "ONI_NINO34_1854-2022.txt"
)

enso <- read_table("ONI_NINO34_1854-2022.txt", skip = 9) %>%
  mutate(x = yearmonth(as.Date(paste0(YEAR, "-", `MON/MMM`, "-01")))) %>%
  select(x, enso = NINO34_MEAN) %>%
  filter(x >= yearmonth("1950-01"), x <= yearmonth("2022-09")) %>%
  as_tsibble(index = x)

enso
# A tsibble: 873 x 2 [1M]
          x  enso
      <mth> <dbl>
 1 1950 Jan  24.6
 2 1950 Feb  25.1
 3 1950 Mar  25.9
 4 1950 Apr  26.3
 5 1950 May  26.2
 6 1950 Jun  26.5
 7 1950 Jul  26.3
 8 1950 Aug  25.9
 9 1950 Sep  25.7
10 1950 Oct  25.7
# … with 863 more rows

As already announced, we want to look at seasonal decomposition, as well. In terms of seasonal periodicity, what can we expect? Unless told otherwise, feasts::STL() will happily pick a window size for us. However, there will likely be several important frequencies in the data. (Not wanting to spoil the suspense, but for AO and NAO, this will definitely be the case!) Besides, we want to compute the Fourier Transform anyway, so why not do that first?

Here is the power spectrum:

In the below plot, the x axis corresponds to frequencies, expressed as “number of times per year.” We only display frequencies up to and including the Nyquist frequency, i.e., half the sampling rate, which in our case is 12 (per year).

library(torch)

# standardize, then move to the frequency domain (as for NAO and AO below)
fft <- torch_fft_fft(as.numeric(scale(enso$enso)))

num_samples <- nrow(enso)
nyquist_cutoff <- ceiling(num_samples / 2) # highest discernible frequency
bins_below_nyquist <- 0:nyquist_cutoff

sampling_rate <- 12 # per year
frequencies_per_bin <- sampling_rate / num_samples
frequencies <- frequencies_per_bin * bins_below_nyquist

df <- data.frame(f = frequencies, y = as.numeric(fft[1:(nyquist_cutoff + 1)]$abs()))
df %>% ggplot(aes(f, y)) +
  geom_line() +
  xlab("frequency (per year)") +
  ylab("magnitude") +
  ggtitle("Spectrum of Niño 3.4 data")
Frequency spectrum of monthly average sea surface temperature in the Niño 3.4 region, 1950 to present.

There is one dominant frequency, corresponding to about once a year. From this component alone, we’d expect one El Niño event – or equivalently, one La Niña – per year. But let’s locate the important frequencies more precisely. With not many other periodicities standing out, we may as well restrict ourselves to three:

strongest <- torch_topk(fft[1:(nyquist_cutoff/2)]$abs(), 3)
strongest
[[1]]
torch_tensor
233.9855
172.2784
142.3784
[ CPUFloatType{3} ]

[[2]]
torch_tensor
74
21
7
[ CPULongType{3} ]

What we have here are the magnitudes of the dominant components, as well as their respective bins in the spectrum. Let’s see which actual frequencies these correspond to:

important_freqs <- frequencies[as.numeric(strongest[[2]])]
important_freqs
[1] 1.00343643 0.27491409 0.08247423 

That’s once per year, once per quarter, and once every twelve years, approximately. Or, expressed as periodicity, in terms of months (i.e., how many months there are in a period):

num_observations_in_season <- 12/important_freqs  
num_observations_in_season
[1] 11.95890  43.65000 145.50000  

We now pass these to feasts::STL(), to obtain a five-fold decomposition into trend, seasonal components, and remainder:

library(feasts)
enso %>%
  model(STL(enso ~ season(period = 12) + season(period = 44) +
              season(period = 145))) %>%
  components() %>%
  autoplot()
Decomposition of ENSO data into trend, seasonal components, and remainder by feasts::STL().

According to Loess decomposition, there still is significant noise in the data – the remainder staying high despite our hinting at important seasonalities. In fact, there is no big surprise in that: Looking back at the DFT output, not only are there many low- and lowish-frequency components close to one another, but in addition, high-frequency components just won’t cease to contribute. And really, as of today, ENSO forecasting – tremendously important in terms of human impact – is focused on predicting oscillation state just a year in advance. This will be interesting to keep in mind for when we proceed to the other series – as you’ll see, it will only get worse.

By now, we’re well informed about how dominant temporal rhythms determine, or fail to determine, what actually happens in atmosphere and ocean. But we don’t know anything about whether, and how, those rhythms may have varied in strength over the time span considered. This is where wavelet analysis comes in.

In torchwavelets, the central operation is a call to wavelet_transform(), to instantiate an object that takes care of all required operations. One argument is required: signal_length, the number of data points in the series. And one of the defaults we need to override: dt, the time between samples, expressed in the unit we’re working with. In our case, that unit is the year, and, having monthly samples, we need to pass a value of 1/12. With all other defaults untouched, analysis will be done using the Morlet wavelet (available alternatives are Mexican Hat and Paul), and the transform will be computed in the Fourier domain (the fastest way, unless you have a GPU).

library(torchwavelets)
enso_idx <- enso$enso %>% as.numeric() %>% torch_tensor()
dt <- 1/12
wtf <- wavelet_transform(length(enso_idx), dt = dt)

A call to power() will then compute the wavelet transform:

power_spectrum <- wtf$power(enso_idx)
power_spectrum$shape
[1]  71 873

The result is two-dimensional. The second dimension holds measurement times, i.e., the months between January, 1950 and September, 2022. The first dimension warrants some more explanation.

Namely, what we have there is the set of scales the transform has been computed for. If you’re familiar with the Fourier Transform and its analogue, the spectrogram, you’ll probably think in terms of time versus frequency. With wavelets, there is an additional parameter, the scale, that determines the spread of the analysis pattern.

Some wavelets have both a scale and a frequency, in which case these can interact in complex ways. Others are defined such that no separate frequency appears. In the latter case, you directly end up with the time vs. scale layout we see in wavelet diagrams (scaleograms). In the former, most software hides the complexity by merging scale and frequency into one, leaving just scale as a user-visible parameter. In torchwavelets, too, the wavelet frequency (if existent) has been “streamlined away.” Consequently, we’ll end up plotting time versus scale, as well. I’ll say more when we actually see such a scaleogram.
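
In case you’d like to get a feel for those scales before plotting, you can query the transform object directly – a quick peek of mine, using the same accessors that reappear in the plotting code below:

# the set of scales the transform was computed for, in years
range(as.numeric(wtf$scales))

# corresponding Fourier periods; for the default Morlet wavelet, the
# period is close to the scale itself (about 1.03 times the scale,
# following Torrence and Compo (1998))
as.numeric(wtf$fourier_period(c(0.25, 1, 4, 16)))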

For visualization, we transpose the data and put it into a ggplot-friendly format:

times <- lubridate::year(enso$x) + lubridate::month(enso$x) / 12
scales <- as.numeric(wtf$scales)

df <- as_tibble(as.matrix(power_spectrum$t()), .name_repair = "universal") %>%
  mutate(time = times) %>%
  pivot_longer(!time, names_to = "scale", values_to = "power") %>%
  mutate(scale = scales[scale %>%
    str_remove("[.]{3}") %>%
    as.numeric()])
df %>% glimpse()
Rows: 61,983
Columns: 3
$ time  <dbl> 1950.083, 1950.083, 1950.083, 1950.083, 195…
$ scale <dbl> 0.1613356, 0.1759377, 0.1918614, 0.2092263,…
$ power <dbl> 0.03617507, 0.05985500, 0.07948010, 0.09819…

There is one more piece of information to be incorporated, however: the so-called “cone of influence” (COI). Visually, this is a shading that tells us which part of the plot reflects incomplete, and thus, unreliable and to-be-disregarded, data. Namely, the bigger the scale, the more spread-out the analysis wavelet, and the more incomplete the overlap at the borders of the series when the wavelet slides over the data. You’ll see what I mean in a second.

The COI gets its own data frame:
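
# x: time; y: the scale above which, at that time, values should not be trusted
coi <- wtf$coi(times[1], times[length(enso_idx)])
coi_df <- data.frame(x = as.numeric(coi[[1]]), y = as.numeric(coi[[2]]))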

And now we’re ready to create the scaleogram:

labeled_scales <- c(0.25, 0.5, 1, 2, 4, 8, 16, 32, 64)
labeled_frequencies <- round(as.numeric(wtf$fourier_period(labeled_scales)), 1)

ggplot(df) +
  scale_y_continuous(
    trans = scales::compose_trans(scales::log2_trans(), scales::reverse_trans()),
    breaks = c(0.25, 0.5, 1, 2, 4, 8, 16, 32, 64),
    limits = c(max(scales), min(scales)),
    expand = c(0, 0),
    sec.axis = dup_axis(
      labels = labeled_frequencies,
      name = "Fourier period (years)"
    )
  ) +
  ylab("scale (years)") +
  scale_x_continuous(breaks = seq(1950, 2020, by = 5), expand = c(0, 0)) +
  xlab("year") +
  geom_contour_filled(aes(time, scale, z = power), show.legend = FALSE) +
  scale_fill_viridis_d(option = "turbo") +
  geom_ribbon(data = coi_df, aes(x = x, ymin = y, ymax = max(scales)),
              fill = "black", alpha = 0.6) +
  theme(legend.position = "none")
Scaleogram of ENSO data.

What we see here is how, in ENSO, different rhythms have prevailed over time. Instead of “rhythms,” I could have said “scales,” or “frequencies,” or “periods” – all of these translate into one another. Since, to us humans, wavelet scales don’t mean that much, the period (in years) is displayed on an additional y axis, on the right.

So, we see that in the eighties, an (approximately) four-year period had exceptional influence. Thereafter, yet longer periodicities gained in dominance. And, in accordance with what we expect from prior analysis, there is a basso continuo of annual similarity.

Also, note how, at first glance, there seems to have been a decade where a six-year period stood out: right at the beginning of where (for us) measurement starts, in the fifties. However, the dark shading – the COI – tells us that, in this region, the data is not to be trusted.
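
Such readings are easy to cross-check against the raw power matrix, by the way. As a quick sketch (mine, not part of the original workflow), here is the time course of power at the scale closest to four years:

# power_spectrum has dimensions scales x times
scale_idx <- which.min(abs(scales - 4))
four_year_power <- as.numeric(power_spectrum[scale_idx, ])
plot(times, four_year_power, type = "l",
     xlab = "year", ylab = "power at the ~4-year scale")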

Summing up, the two-dimensional analysis nicely complements the more compressed characterization we got from the DFT. Before we move on to the next series, though, let me just quickly address one question, in case you were wondering (if not, just read on, since I won’t be going into details anyway): How is this different from a spectrogram?

In a nutshell, the spectrogram splits the data into several “windows,” and computes the DFT independently on all of them. To compute the scaleogram, on the other hand, the analysis wavelet slides continuously over the data, resulting in a spectrum-equivalent for the neighborhood of each sample in the series. With the spectrogram, a fixed window size means that not all frequencies are resolved equally well: The higher frequencies appear more often in the window than the lower ones, and thus, allow for better resolution. Wavelet analysis, in contrast, is done on a set of scales deliberately arranged so as to capture a broad range of frequencies theoretically detectable in a series of given length.
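
To make the contrast tangible, here is a deliberately crude sketch (my illustration, not part of the original analysis): a spectrogram-style look at the ENSO series, with the data chopped into non-overlapping windows of an arbitrarily chosen size, and one DFT computed per window.

# Fixed, non-overlapping windows of 64 months each; the window size
# is an arbitrary choice, made for illustration only.
window_size <- 64
n_windows <- floor(length(enso_idx) / window_size)

spec <- sapply(seq_len(n_windows), function(i) {
  from <- (i - 1) * window_size + 1
  win <- enso_idx[from:(from + window_size - 1)]
  as.numeric(torch_fft_fft(win)$abs()[1:(window_size / 2 + 1)])
})

# frequency bins x windows: frequency resolution is the same in every
# window, unlike the wavelet transform's scale-adaptive arrangement
dim(spec)
[1] 33 13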

Analysis: NAO

The data file for NAO comes in a fixed-width table format. After conversion to a tsibble, we have:

download.file(
 "https://crudata.uea.ac.uk/cru/data//nao/nao.dat",
 destfile = "nao.dat"
)

# needed for AO, as well
use_months <- seq.Date(
  from = as.Date("1950-01-01"),
  to = as.Date("2022-09-01"),
  by = "months"
)

nao <-
  read_table(
    "nao.dat",
    col_names = FALSE,
    na = "-99.99",
    skip = 3
  ) %>%
  select(-X1, -X14) %>% # keep only the twelve monthly columns
  as.matrix() %>%
  t() %>%
  as.vector() %>%
  .[1:length(use_months)] %>%
  tibble(
    x = use_months,
    nao = .
  ) %>%
  mutate(x = yearmonth(x)) %>%
  fill(nao) %>% # fill missing values (coded -99.99) with the previous value
  as_tsibble(index = x)

nao
# A tsibble: 873 x 2 [1M]
          x   nao
      <mth> <dbl>
 1 1950 Jan -0.16
 2 1950 Feb  0.25
 3 1950 Mar -1.44
 4 1950 Apr  1.46
 5 1950 May  1.34
 6 1950 Jun -3.94
 7 1950 Jul -2.75
 8 1950 Aug -0.08
 9 1950 Sep  0.19
10 1950 Oct  0.19
# … with 863 more rows

Like before, we start with the spectrum:

fft <- torch_fft_fft(as.numeric(scale(nao$nao)))

num_samples <- nrow(nao)
nyquist_cutoff <- ceiling(num_samples / 2)
bins_below_nyquist <- 0:nyquist_cutoff

sampling_rate <- 12 # per year
frequencies_per_bin <- sampling_rate / num_samples
frequencies <- frequencies_per_bin * bins_below_nyquist

df <- data.frame(f = frequencies, y = as.numeric(fft[1:(nyquist_cutoff + 1)]$abs()))
df %>% ggplot(aes(f, y)) +
  geom_line() +
  xlab("frequency (per 12 months)") +
  ylab("magnitude") +
  ggtitle("Spectrum of NAO knowledge")
Spectrum of NAO data, 1950 to present.

Have you been wondering, for a tiny moment, whether this was time-domain data – not spectral? It certainly does look a lot more noisy than the ENSO spectrum. And really, with NAO, predictability is much worse – forecast lead time usually amounts to just one or two weeks.

Proceeding as before, we pick dominant seasonalities (at least this still is possible!) to pass to feasts::STL():

strongest <- torch_topk(fft[1:(nyquist_cutoff/2)]$abs(), 6)
strongest
[[1]]
torch_tensor
102.7191
80.5129
76.1179
75.9949
72.9086
60.8281
[ CPUFloatType{6} ]

[[2]]
torch_tensor
147
99
146
59
33
78
[ CPULongType{6} ]
important_freqs <- frequencies[as.numeric(strongest[[2]])]
important_freqs
[1] 2.0068729 1.3470790 1.9931271 0.7972509 0.4398625 1.0584192
num_observations_in_season <- 12/important_freqs  
num_observations_in_season
[1]  5.979452  8.908163  6.020690 15.051724 27.281250 11.337662

Important seasonal periods are of length six, nine, eleven, fifteen, and twenty-seven months, approximately – quite close together indeed! No wonder that, in STL decomposition, the remainder is even more significant than with ENSO:

nao %>%
  mannequin(STL(nao ~ season(interval = 6) + season(interval = 9) +
              season(interval = 15) + season(interval = 27) +
              season(interval = 12))) %>%
  elements() %>%
  autoplot()
Decomposition of NAO data into trend, seasonal components, and remainder by feasts::STL().

Now, what will we see in terms of temporal evolution? Much of the code that follows is the same as for ENSO, repeated here for the reader’s convenience:

nao_idx <- nao$nao %>% as.numeric() %>% torch_tensor()
dt <- 1/12 # same sampling period as for ENSO
wtf <- wavelet_transform(length(nao_idx), dt = dt)
power_spectrum <- wtf$power(nao_idx)

times <- lubridate::year(nao$x) + lubridate::month(nao$x)/12 # also the same
scales <- as.numeric(wtf$scales) # will be the same, because both series have the same length

df <- as_tibble(as.matrix(power_spectrum$t()), .name_repair = "universal") %>%
  mutate(time = times) %>%
  pivot_longer(!time, names_to = "scale", values_to = "power") %>%
  mutate(scale = scales[scale %>%
    str_remove("[.]{3}") %>%
    as.numeric()])

coi <- wtf$coi(times[1], times[length(nao_idx)])
coi_df <- data.frame(x = as.numeric(coi[[1]]), y = as.numeric(coi[[2]]))

labeled_scales <- c(0.25, 0.5, 1, 2, 4, 8, 16, 32, 64) # same, since scales are the same
labeled_frequencies <- round(as.numeric(wtf$fourier_period(labeled_scales)), 1)

ggplot(df) +
  scale_y_continuous(
    trans = scales::compose_trans(scales::log2_trans(), scales::reverse_trans()),
    breaks = c(0.25, 0.5, 1, 2, 4, 8, 16, 32, 64),
    limits = c(max(scales), min(scales)),
    expand = c(0, 0),
    sec.axis = dup_axis(
      labels = labeled_frequencies,
      name = "Fourier period (years)"
    )
  ) +
  ylab("scale (years)") +
  scale_x_continuous(breaks = seq(1950, 2020, by = 5), expand = c(0, 0)) +
  xlab("year") +
  geom_contour_filled(aes(time, scale, z = power), show.legend = FALSE) +
  scale_fill_viridis_d(option = "turbo") +
  geom_ribbon(data = coi_df, aes(x = x, ymin = y, ymax = max(scales)),
              fill = "black", alpha = 0.6) +
  theme(legend.position = "none")
Scaleogram of NAO data.

That, indeed, is a much more colorful picture than with ENSO! High frequencies are present, and regularly dominant, over the whole time period.

Interestingly, though, we see similarities to ENSO, as well: In both, there is an important pattern, of periodicity four or slightly more years, that exerts influence during the eighties, nineties, and early two-thousands – only with ENSO, it shows peak impact during the nineties, whereas with NAO, its dominance is most visible in the first decade of this century. Also, both phenomena exhibit a strongly visible peak, of period two years, around 1970. So, is there a close(-ish) connection between both oscillations? This question, of course, is for the domain experts to answer. At least I found a recent study (Scaife et al. (2014)) that not only suggests there is, but uses one (ENSO, the more predictable one) to inform forecasts of the other:

Previous studies have shown that the El Niño–Southern Oscillation can drive interannual variations in the NAO [Brönnimann et al., 2007] and hence Atlantic and European winter climate via the stratosphere [Bell et al., 2009]. […] this teleconnection to the tropical Pacific is active in our experiments, with forecasts initialized in El Niño/La Niña conditions in November tending to be followed by negative/positive NAO conditions in winter.
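
Not that a quick correlation could settle such a question, but since both monthly series conveniently are in memory, here is a purely exploratory peek of mine (no substitute for the cited research): the cross-correlation between the standardized ENSO and NAO series.

# Peaks away from lag zero would hint at one series leading the other;
# interpretation, as said, is left to the domain experts.
ccf(
  as.numeric(scale(enso$enso)),
  as.numeric(scale(nao$nao)),
  lag.max = 48,
  main = "Cross-correlation: ENSO and NAO"
)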

Will we see a similar relationship for AO, our third series under investigation? We might expect so, since AO and NAO are closely related (or even, two sides of the same coin).

Analysis: AO

First, the data:

download.file(
 "https://www.cpc.ncep.noaa.gov/products/precip/CWlink/daily_ao_index/monthly.ao.index.b50.current.ascii.table",
 destfile = "ao.dat"
)

ao <-
  read_table(
    "ao.dat",
    col_names = FALSE,
    skip = 1
  ) %>%
  select(-X1) %>% # drop the year column
  as.matrix() %>% 
  t() %>%
  as.vector() %>%
  .[1:length(use_months)] %>%
  tibble(x = use_months,
         ao = .) %>%
  mutate(x = yearmonth(x)) %>%
  fill(ao) %>%
  as_tsibble(index = x) 

ao
# A tsibble: 873 x 2 [1M]
          x     ao
      <mth>  <dbl>
 1 1950 Jan -0.06 
 2 1950 Feb  0.627
 3 1950 Mar -0.008
 4 1950 Apr  0.555
 5 1950 May  0.072
 6 1950 Jun  0.539
 7 1950 Jul -0.802
 8 1950 Aug -0.851
 9 1950 Sep  0.358
10 1950 Oct -0.379
# … with 863 more rows

And the spectrum:

fft <- torch_fft_fft(as.numeric(scale(ao$ao)))

num_samples <- nrow(ao)
nyquist_cutoff <- ceiling(num_samples / 2)
bins_below_nyquist <- 0:nyquist_cutoff

sampling_rate <- 12 # per year
frequencies_per_bin <- sampling_rate / num_samples
frequencies <- frequencies_per_bin * bins_below_nyquist

df <- data.frame(f = frequencies, y = as.numeric(fft[1:(nyquist_cutoff + 1)]$abs()))
df %>% ggplot(aes(f, y)) +
  geom_line() +
  xlab("frequency (per 12 months)") +
  ylab("magnitude") +
  ggtitle("Spectrum of AO knowledge")
Spectrum of AO data, 1950 to present.

Well, this spectrum looks even more random than NAO’s, in that not even a single frequency stands out. For completeness, here is the STL decomposition:

strongest <- torch_topk(fft[1:(nyquist_cutoff/2)]$abs(), 5)

important_freqs <- frequencies[as.numeric(strongest[[2]])]
important_freqs
# [1] 0.01374570 0.35738832 1.77319588 1.27835052 0.06872852

num_observations_in_season <- 12/important_freqs  
num_observations_in_season
# [1] 873.000000  33.576923   6.767442   9.387097 174.600000 

ao %>%
  mannequin(STL(ao ~ season(interval = 33) + season(interval = 7) +
              season(interval = 9) + season(interval = 174))) %>%
  elements() %>%
  autoplot()
Decomposition of AO data into trend, seasonal components, and remainder by feasts::STL().

Finally, what can the scaleogram tell us about dominant patterns?

ao_idx <- ao$ao %>% as.numeric() %>% torch_tensor()
dt <- 1/12 # same sampling period as for ENSO and NAO
wtf <- wavelet_transform(length(ao_idx), dt = dt)
power_spectrum <- wtf$power(ao_idx)

times <- lubridate::year(ao$x) + lubridate::month(ao$x)/12 # also the same
scales <- as.numeric(wtf$scales) # will be the same, because all series have the same length

df <- as_tibble(as.matrix(power_spectrum$t()), .name_repair = "universal") %>%
  mutate(time = times) %>%
  pivot_longer(!time, names_to = "scale", values_to = "power") %>%
  mutate(scale = scales[scale %>%
    str_remove("[.]{3}") %>%
    as.numeric()])

coi <- wtf$coi(times[1], times[length(ao_idx)])
coi_df <- data.frame(x = as.numeric(coi[[1]]), y = as.numeric(coi[[2]]))

labeled_scales <- c(0.25, 0.5, 1, 2, 4, 8, 16, 32, 64) # same, since scales are the same
labeled_frequencies <- round(as.numeric(wtf$fourier_period(labeled_scales)), 1)

ggplot(df) +
  scale_y_continuous(
    trans = scales::compose_trans(scales::log2_trans(), scales::reverse_trans()),
    breaks = c(0.25, 0.5, 1, 2, 4, 8, 16, 32, 64),
    limits = c(max(scales), min(scales)),
    expand = c(0, 0),
    sec.axis = dup_axis(
      labels = labeled_frequencies,
      name = "Fourier period (years)"
    )
  ) +
  ylab("scale (years)") +
  scale_x_continuous(breaks = seq(1950, 2020, by = 5), expand = c(0, 0)) +
  xlab("year") +
  geom_contour_filled(aes(time, scale, z = power), show.legend = FALSE) +
  scale_fill_viridis_d(option = "turbo") +
  geom_ribbon(data = coi_df, aes(x = x, ymin = y, ymax = max(scales)),
              fill = "black", alpha = 0.6) +
  theme(legend.position = "none")
Scaleogram of AO data.

Having seen the overall spectrum, the lack of strongly dominant patterns in the scaleogram does not come as a big surprise. It is tempting – for me, at least – to see a reflection of ENSO around 1970, all the more since, by transitivity, AO and ENSO should be related in some way. But here, qualified judgment really is reserved to the experts.

Conclusion

Like I said in the beginning, this post is meant to be about inspiration, not technical detail or reportable results. And I hope that inspirational it has been, at least a little bit. If you’re experimenting with wavelets yourself, or plan to – or if you work in the atmospheric sciences, and would like to provide some insight on the above data/phenomena – we’d love to hear from you!

As always, thanks for reading!

Photo by ActionVance on Unsplash

Scaife, A. A., Alberto Arribas Herranz, E. Blockley, A. Brookshaw, R. T. Clark, N. Dunstone, R. Eade, et al. 2014. “Skillful Long-Range Prediction of European and North American Winters.” Geophysical Research Letters 41 (7): 2514–19. https://www.microsoft.com/en-us/research/publication/skillful-long-range-prediction-of-european-and-north-american-winters/.

Torrence, C., and G. P. Compo. 1998. “A Practical Guide to Wavelet Analysis.” Bulletin of the American Meteorological Society 79 (1): 61–78.
