Read the thread and consider software risk and system safety from this perspective. When designing for safety, ask "for whom?" https://twitter.com/michaelharriot/status/1340796465967476737
While drafting my legacy scientific code book proposal, I started fleshing out a section on risk, harm, and safety. Too focused or lazy to dig through my library to poach a list of harms from a real book on safety, I composed my own, planning to fill in gaps later
The first three are easy: death/injury, environmental damage, and financial loss. Those seem to be the core of engineering ethics: don't kill or injure people, don't destroy the environment (much), and serve those who write your paycheck.
Aside: I hear the tech industry has simplified that to a more efficient One Clause Code of Ethics...
But back to the list: What are other possible consequences of your scientific/engineering code being wrong? Your analysis is wrong, you write it up, send it to a journal or a client, and people make bad decisions because of it.
Here we've damaged the body of knowledge, the trust others place in it, and likely damaged our reputation, that of our peers, maybe our institution, or even our entire field. Suddenly that societal risk has become a lot more personal. Or vice versa.
These are secondary, indirect, and possibly recoverable harms but they're very serious in a scientific context. Follow @RetractionWatch and @MicrobiomDigest to see how this plays out. What role do our tools have in reducing the risk of publishing bad science?
You might argue that addressing secondary harm is outside the scope of software safety. I contend that most of the harms caused by scientific software are secondary; direct, primary harm is most often caused by embedded control systems (Therac-25, Patriot, etc.).
A textbook example of secondary harm is the Reinhart-Rogoff austerity paper. The TL;DR is that Reinhart & Rogoff published a paper touting the benefits of austerity (starving the public) based on their Very Serious Data-Driven Analysis. Thomas Herndon asked for the technical basis, got their Excel spreadsheet, and found errors that demolished their conclusions.
All well and good; errors were detected, the public record was corrected, and Science had prevailed. Except lawmakers and executives had used this paper in shaping policy, and that flawed policy still stood. The damage done by austerity was not addressed or redressed. It continued.
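To make that class of error concrete, here is a minimal, hypothetical sketch (made-up numbers, not R&R's actual spreadsheet): an average whose selection range silently drops part of the data, the same species of bug as an Excel formula that doesn't cover every row.

```python
# Illustrative only: fabricated growth figures and a deliberately buggy
# average, showing how a silent range/selection error skews a headline result.
growth_by_country = {
    "Australia": 3.8, "Austria": 2.4, "Belgium": 2.6, "Canada": 3.0,
    "Denmark": 2.1, "Finland": 2.7, "France": 2.3, "Germany": 2.0,
    "Greece": 2.9, "Ireland": 4.5, "Italy": 1.9, "Japan": 2.6,
}

countries = sorted(growth_by_country)

# Buggy version: the selection starts too late and silently excludes the
# first five countries alphabetically, analogous to an AVERAGE() formula
# whose cell range misses several rows.
included = countries[5:]
buggy_mean = sum(growth_by_country[c] for c in included) / len(included)

# Correct version: every observation included.
correct_mean = sum(growth_by_country.values()) / len(growth_by_country)

print(f"buggy mean:   {buggy_mean:.2f}")    # the number that gets published
print(f"correct mean: {correct_mean:.2f}")  # the number the data supports
```

Nothing crashes and nothing warns; the buggy number is just quietly wrong, and whoever reads the write-up has no way to know.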
You can argue that economics and policy have such a tenuous, gossamer relationship with science and reality already that fixing broken software is like rearranging deck chairs on the Titanic. Have fun making sure everyone gets a good view of the iceberg; that ship ain't turning
It's not just a matter of innocent ill-informed decision-making; bad data is often used to prop up motivated reasoning and intentionally biased and malicious policy, the kind that Michael (the OP) is addressing.
Consider the risks of misidentification, misattributed liability, loss of privacy, loss of dignity, etc. and consider how technical systems play into both inadvertent and intentionally biased and broken policy and implementation.
And there will be objections like "well, things like loss of dignity aren't real harms, they're just hurt feelings, and we can't possibly consider psychological harm as a legitimate risk."
Newsflash: the bulk of nuclear safety is devoted to preventing psychological harm. We spend a shitload of time, energy, effort, and money every year trying to make people feel safer without actually reducing harm. Policy says those people's feelings are worth assuaging.
I've worked on safety-related systems for about half my career; I'm constantly changing my understanding of safety and my role in reducing harm. Technical software is being used more and more in shaping and implementing policy. Ethical demands on practitioners are increasing
Ask what software gets used in analyzing and matching DNA sequences, how it's made, how it's tested, how it's reviewed and validated. Now ask what risks that software contributes to in a research lab, a medical facility, or a state criminology lab.
"Look, I'm a biochemist that's been drafted as a software developer; I'm not a medical or legal ethicist or a safety engineer - what do you honestly expect me to do?"
Just be aware of the downstream harms your code can contribute to and look for ways to address them. Consider intentional misuse or abuse of your code as a component in a larger downstream system. Maybe instead of code katas, Advent of Code, and dev efficiency, look around you.
How does this relate back to my focus on updating sad old FORTRAN? Code implements bias. Legacy code is painful and inconvenient to change. If you can't safely change the code, those original biases (consider them sociological 'bugs') stay encoded forever.
But even if you are so willfully obtuse as to believe your work is purely technical, there are still people downstream facing risks from your code, either as direct users or as those affected by the decisions made through its use, misuse, and abuse.
No idea how much of this thread/rant will make it into the book or the proposal. It's bigger-picture motivation for the work. There are so many problems we can see but can't immediately fix because there's so much preparatory work to do before we can address anything material.
But you have to start somewhere. You may have to move deck chairs around to make a path to the bridge to try to steer around the iceberg or get below decks to close the bulkhead doors. At least warn everyone down in steerage to get to the lifeboats.
You can follow @arclight.