I'm going to venture back into #scicomm and talk about gravitational wave detection. Four months ago I left behind my old research field and jumped into detecting gravitational waves.
Followup of GW170817, a binary neutron star merger, fundamentally changed astronomy
Most people who work in astrophysics are fairly familiar with 'high performance computing' or HPC. In most astronomy applications you take some data, keep it in computer memory for the entire computation, and perform millions of operations on it
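To caricature that offline pattern (purely illustrative, with random stand-in numbers rather than real detector output): load everything into memory up front, then grind over it for as long as you like.

import numpy as np

data = np.random.default_rng(42).normal(size=1_000_000)  # whole dataset held in memory
result = 0.0
for _ in range(100):                      # lots of operations, no rush to finish
    result += float(np.sum(data * data))
print(result)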
When you look for gravitational waves (online), things are slightly different. The goal is to send out an alert that you've seen a gravitational wave as soon as you detect it.
Online = taking in data from detectors as it is produced, and analyzing it immediately
We often talk about latency in online GW searches. Latency = the time between the arrival and detection of the gravitational wave and the sending of an alert, in the form of an upload to the database of events (GraceDB) and a GCN
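To make 'online' and 'latency' concrete, here's a minimal Python sketch using invented names (get_next_chunk, search_chunk and upload_event are placeholders, not any real pipeline's API): analyse each block of detector data as soon as it exists, and measure latency as alert time minus signal arrival time.

import time

def run_online_search(get_next_chunk, search_chunk, upload_event):
    while True:
        chunk = get_next_chunk()       # blocks until the next stretch of strain data arrives
        trigger = search_chunk(chunk)  # filter it immediately
        if trigger is not None:
            latency = time.time() - trigger["arrival_time"]  # seconds between signal and alert
            upload_event(trigger)      # e.g. push to GraceDB and send a GCN
            print(f"alert sent, latency ~ {latency:.1f} s")

In practice you'd wire those three placeholder functions to the real data stream and alert infrastructure; the loop structure is the point.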
The part of the LIGO Scientific Collaboration devoted to online detection and rapid followup of gravitational waves is called the Low Latency group.
When you design your GW detection pipeline to run online, you have to meet a lot of rigorous design criteria, including
- detects gravitational waves close* to when they happen
- doesn't have too many false detections (we don't want to send GCNs for glitches; see the sketch after this list)
*close ~ O(seconds)
(obviously there's a lot more, but they're more software architecture things)
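On the false-detection criterion, the logic boils down to 'only alert on candidates that are both loud and rare'. A toy version, with invented field names and thresholds (not the collaboration's real ones):

def should_send_alert(candidate, min_snr=8.0, max_far_per_year=2.0):
    loud_enough = candidate["network_snr"] >= min_snr                      # strong enough signal
    rare_enough = candidate["false_alarms_per_year"] <= max_far_per_year   # unlikely to be noise
    return loud_enough and rare_enough

print(should_send_alert({"network_snr": 12.3, "false_alarms_per_year": 0.01}))  # True
print(should_send_alert({"network_snr": 6.0, "false_alarms_per_year": 40.0}))   # False: probably a glitch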
There are only five online pipelines, four of which look for compact binary coalescence (CBC): MBTA (multiband template analysis), PyCBC (self-explanatory), GstLAL (GStreamer LIGO Algorithm Library) and SPIIR (summed parallel infinite impulse response)
All of these pipelines use a modelled search: we build huge banks of theoretical templates of what signals from CBCs look like. How we construct our banks differs, and how we use them also differs.
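A toy picture of what a modelled search does, assuming white noise and skipping the whitening and frequency-domain tricks real pipelines use: correlate the data against every template in a (tiny, made-up) bank and keep the loudest match.

import numpy as np

rng = np.random.default_rng(0)
n = 4096
bank = [np.sin(2 * np.pi * f * np.arange(n) / n) for f in (20, 30, 40)]  # stand-in 'templates'
data = 0.5 * bank[1] + rng.normal(0, 1, n)  # weak 'signal' buried in noise

best = max(
    (np.max(np.abs(np.correlate(data, t, mode="same"))) / np.linalg.norm(t), i)
    for i, t in enumerate(bank)
)
print(f"loudest template: #{best[1]}, matched-filter statistic ~ {best[0]:.1f}")

A real bank contains hundreds of thousands of waveform templates rather than three sine waves, which is why the computational cost matters so much.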
I work on SPIIR, which is related to GstLAL
SPIIR works a little like how we process language.
Suppose you are having a conversation in a loud room at a party. Someone says to you
'are you having a good time?'
It's so loud that you miss the first couple of words and don't immediately understand what is being said, but you catch enough of the last two words that your brain goes back to the beginning of the sentence and patches the meaning together
Your brain uses the smaller units of the sentence to interpret the context.
In SPIIR, we break our templates for the inspiral waveform (the whole sentence) into smaller approximated chunks, like words. We use GPUs to increase the number of words we can look for per second
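A rough sketch of the summed-parallel-IIR idea, with filter coefficients I've invented for illustration (the real pipeline fits a set of first-order filters and delays to each template): each 'word' is one cheap recursive filter, and summing their delayed outputs approximates the full matched-filter output for the template.

import numpy as np
from scipy.signal import lfilter

def spiir_like_output(data, filters):
    """filters: list of (b0, a1, delay) implementing y[n] = b0*x[n-delay] + a1*y[n-1]."""
    total = np.zeros(len(data), dtype=complex)
    for b0, a1, delay in filters:
        delayed = np.concatenate([np.zeros(delay), data[:len(data) - delay]])
        total += lfilter([b0], [1.0, -a1], delayed.astype(complex))  # one cheap recursive filter per 'word'
    return np.abs(total)  # |sum| plays the role of the SNR time series

rng = np.random.default_rng(1)
strain = rng.normal(0, 1, 2048)  # stand-in whitened strain
filters = [(0.10 + 0.05j, 0.95 * np.exp(2j * np.pi * 0.02), 0),
           (0.08 + 0.02j, 0.97 * np.exp(2j * np.pi * 0.01), 256)]
print(spiir_like_output(strain, filters).max())

Because each filter only needs its previous output and the current sample, the per-sample cost is tiny and the filters are independent of each other, which is what makes the method easy to spread across GPU threads.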
In principle, the latency of SPIIR (time between the GW signal coming in and us sending a GCN) in O2 and O3 was 0 s; in practice, because it takes real time to move information from computer to computer, we had a latency of ~ a few seconds
The faster we can send out alerts for GW events, and especially if we can send alerts pre-merger for NS mergers (where emission of EM signals is predicted to occur during the inspiral), the better the chances we have of detecting precursor and prompt emission
I will now take any questions you have on online GW searches