Here is a thread-based version of my talk at the @AeroSociety conference on 'Safeguarding Earth's Space Environment' that I hope gets some key points across about modelling #spacedebris & how it can help to identify the data we need to understand #SpaceSustainability (1/n)
Caveat: I use images as metaphors, to help with understanding of key concepts, so my slides have no words in them. (2/n)
Our models have two distinct roles: PREDICTION and UNDERSTANDING. Understanding can help us to design better models and gather more relevant data. Both of these roles are important in relation to #SST, #SpaceSafety and #SpaceSustainability (3/n)
Here is a prediction of the future space environment in terms of the number of objects >10 cm. The prediction is from our DAMAGE model, which uses Monte Carlo simulation. Here, the graph shows the average and the 1-sigma spread (uncertainty) (4/n)
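The Monte Carlo idea behind a plot like this can be sketched in a few lines. This is a toy illustration only: the growth model, starting count, and noise level below are invented for the example, and DAMAGE itself is a far more detailed physical simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_population(years, n0=20000, growth=0.002, noise=0.01):
    """One hypothetical Monte Carlo run: yearly stochastic growth of the
    >10 cm object count (a toy stand-in for a full debris model run)."""
    counts = [float(n0)]
    for _ in range(years):
        counts.append(counts[-1] * (1 + growth + noise * rng.standard_normal()))
    return np.array(counts)

# An ensemble of runs; the plotted curve is the per-year average,
# with the shaded band given by the 1-sigma spread across runs
runs = np.stack([simulate_population(100) for _ in range(200)])
mean = runs.mean(axis=0)
sigma = runs.std(axis=0)
```

Each run differs only through the random draws, so the spread across runs is a direct measure of the uncertainty in the prediction.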
The assumptions behind this prediction are many. The key ones are that future space activity is the same as recent space activity (no large constellations) and compliance with #spacedebris mitigation guidelines is widespread (5/n)
The prediction suggests that there is only a small growth of the #spacedebris population over the next 100 years. In fact, the growth RATE at the end of this period is less than the growth rate at the beginning. Does this mean that our current space activity is sustainable? (6/n)
What happens if I change the assumption about future space activity? Let's include 5 constellations in the future, operating from 2020 through 2065 (7/n)
Now, the prediction looks quite different for the years 2020-2065 (to be expected) but not so different for the remainder of the prediction. That's because we have assumed responsible behaviour by the constellation operators (8/n)
We have just assumed that the constellations will behave in the way that each operator has already described. Typically, this goes beyond what is currently recognised as good behaviour (i.e. following the #IADC debris mitigation guidelines) (9/n)
Our prediction still looks like we have a good outcome in the long term, after each constellation has finished its operations. So, can we consider this still to be sustainable? (10/n)
Let's set that question aside for the moment and instead look at what metric we are using to assess the state of the environment and determine whether our activity is sustainable: the number of objects in the orbital population (11/n)
This metric has been closely linked to ideas such as a "space debris index" or a "space environment carrying capacity" (the number of objects the environment can sustain without compromising its use for our vital space services) (12/n)
A slightly different metric is #spacedebris SPATIAL DENSITY: the number of objects inside a fixed volume of space (i.e. the number per cubic kilometre). This is still fundamentally based on a count of the number of objects (13/n)
For the DAMAGE prediction that includes the five satellite constellations, the #spacedebris spatial density at the end of the prediction period looks like this: (14/n)
Here, the count of the number of objects in 50 km "buckets" is used to calculate the spatial density. This type of representation of the space environment is very common. It shows some regions of high density (e.g. at 800 km and 1400 km) and regions of lower density (15/n)
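The spatial density calculation itself is straightforward: count objects per 50 km altitude "bucket" and divide by the volume of that spherical shell. A minimal sketch (the altitude range and bin width are typical choices, not the exact values used in the DAMAGE plot):

```python
import numpy as np

R_EARTH_KM = 6371.0

def spatial_density(altitudes_km, alt_min=200.0, alt_max=2000.0, bin_km=50.0):
    """Count objects in 50 km altitude buckets and divide by each shell's
    volume to get objects per cubic kilometre."""
    edges = np.arange(alt_min, alt_max + bin_km, bin_km)
    counts, _ = np.histogram(altitudes_km, bins=edges)
    r_lo = R_EARTH_KM + edges[:-1]       # inner radius of each shell
    r_hi = R_EARTH_KM + edges[1:]        # outer radius of each shell
    shell_volumes = 4.0 / 3.0 * np.pi * (r_hi**3 - r_lo**3)  # km^3
    return edges, counts / shell_volumes
```

Note that the result is still fundamentally a count: all the information about *where* objects sit within each 50 km shell is thrown away.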
Is this metric useful when we are thinking about #SpaceSustainability? Let's consider spatial density in a different context, one that is perhaps relevant for our current global circumstances: crowding on a train (16/n)
Here, people are in very close proximity to each other and, as we know, there are risks as a result. The same is true for #spacedebris. In regions of high debris spatial density the chances of a collision between objects tend to be higher (17/n)
Indeed, this correlation is the basis of most #spacedebris models including the ones created by Don Kessler and colleagues; models that were used to identify regions where the CRITICAL spatial density has been exceeded, where collision cascades are probably occurring (18/n)
The term "Kessler Syndrome" - the runaway growth of the #spacedebris population thanks to a collision cascade - has made its way into our vocabulary and is often used in connection to ideas of #SpaceSustainability and #CarryingCapacity (19/n)
Here is the number of catastrophic collisions predicted by our DAMAGE model overlaid with the spatial density. There is a good correlation between the two quantities over the LEO altitudes we considered (20/n)
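That correlation falls out of the kinetic-gas approach used in most debris models: the flux on a target scales with spatial density (flux = n × sigma × v_rel), so higher density means higher collision probability. A sketch, with invented but order-of-magnitude-plausible example values:

```python
import math

def collision_probability(density_per_km3, cross_section_km2, v_rel_km_s, years):
    """Kinetic-gas estimate: flux on a target is n * sigma * v_rel, so the
    probability of at least one collision over a period rises directly
    with spatial density."""
    seconds = years * 365.25 * 24 * 3600
    flux = density_per_km3 * cross_section_km2 * v_rel_km_s  # collisions/s
    return 1.0 - math.exp(-flux * seconds)

# Illustrative only: density ~1e-8 /km^3, ~10 m^2 cross-section, 10 km/s
p = collision_probability(1e-8, 1e-5, 10.0, 10)
```

Doubling the density (roughly) doubles the probability, which is why a spatial density plot and a collision count plot track each other so closely.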
But another assumption that is made is that the objects in the orbital population are uniformly distributed within their "buckets" (those volumes of space we consider when calculating things like the collision probability) (21/n)
What if that assumption is wrong? Our predictions might also be wrong. Our best mitigation measures might not be as effective as we think they ought to be (22/n)
What if the orbital population has more STRUCTURE than we have assumed? What if that structure means that the assumptions we rely on for calculating collision probability (for example) are also wrong? How will that affect our evaluation of sustainability? (23/n)
Let's set those questions aside and consider one last element of our prediction that plays an important role: time - the period we are considering in our prediction (24/n)
You will have seen that the first two predictions I presented were for a period of 100 years. Now, it doesn't seem sensible to base a prediction of this duration only on our recent space activity. Surely our space activity will be different in the future? (25/n)
Well, here we use our model and the long-time period to help us with our UNDERSTANDING (one of the key roles for the model I mentioned right at the start of the thread) (26/n)
If the environment changes slowly then we need a long time to understand those changes properly. Objects in LEO can take a VERY long time to decay and re-enter (due to atmospheric drag). This slow process is of huge importance in the models, hence long-term predictions (27/n)
Here is our first prediction but now extended over a further 100 years. As before, the number of objects doesn't appear to change by much even over this long period of time. So, can we say that the activity represented here is sustainable? Is 200 years enough? (28/n)
Let's go even further: 1000 years into the future with the same assumptions. Now we can see a very different type of response: an exponential growth in the number of objects with a doubling time of about 200 years. Not sustainable (29/n)
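For exponential growth, the doubling time comes straight out of two population counts: if N(t) = N0 * exp(r t), then T_double = ln(2) / r. The numbers below are illustrative, not taken from the DAMAGE run:

```python
import math

def doubling_time(n_start, n_end, years):
    """Infer the e-folding rate r from two counts, then convert to a
    doubling time via T_double = ln(2) / r."""
    r = math.log(n_end / n_start) / years
    return math.log(2.0) / r

# Example: a 16-fold increase over 800 years is 4 doublings -> 200 years
t = doubling_time(20000, 320000, 800)
```

The point is that this behaviour is invisible over 100 or 200 years: the early part of an exponential looks almost flat.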
If I had based my definition of #SpaceSustainability on the number of objects over a 200-year period (e.g. using a "carrying capacity") I probably would have made a mistake! (30/n)
But there is something else, something we have not considered. If I want to understand the environment as a whole, or the sustainability of our space activity in its entirety, then I cannot just look at the individual components (31/n)
I have to understand how the individual components - the satellites, rocket bodies, and fragments - interact with each other. I have to understand the structures that we have created in the environment. I need to know the RELATIONSHIPS that exist between all of the objects (32/n)
I have to understand why sometimes a high spatial density does not translate into a high collision rate, and why sometimes a seemingly low spatial density results in a high collision rate (as seen here in this spatial density plot from the 1000-year prediction) (33/n)
Taking a closer look using some very different metrics we start to see something surprising (perhaps). These metrics tell a story of increasing, then plateauing, homogeneity in the distribution of objects (34/n)
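One simple way to quantify homogeneity of this kind (purely illustrative; the actual metrics used in the talk are not specified here) is the normalised Shannon entropy of the bin occupancy: it is 1.0 when objects are spread evenly across bins and falls towards 0 as they concentrate:

```python
import numpy as np

def homogeneity(counts):
    """Normalised Shannon entropy of bin occupancy: 1.0 = perfectly even
    spread across bins, values near 0 = strong concentration in few bins.
    (An illustrative metric, not necessarily the one used in the talk.)"""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(counts)))
```

A metric like this tracks *how* objects are distributed rather than just how many there are, which is exactly the kind of structural information a raw count discards.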
They suggest that there is a *correlation* between the homogeneity and the triggering of the collision cascade. Something worthy of further exploration (35/n)
Coming back to the two distinct roles of models like DAMAGE: for better PREDICTIONS we use our UNDERSTANDING to identify RELEVANT data. Our modelling is telling us that we need to measure/analyse/assess the RELATIONSHIPS between objects in the space environment (36/n)
Cataloguing and characterising individual objects in the population might not be sufficient (37/n)
END
(I have tried to use free images where possible and/or credited their creators, but if I have missed one I apologise)
You can follow @ProfHughLewis.