what do you think the chances are that a denial-of-service exploit in a government website has been fixed in the 9 years since you last confirmed it existed?
there's a site which lets you download files, but it turns out the files are not stored on the same server you download them from. instead, when you select which file to download, it pauses for a moment while it connects to an FTP site and uploads the file to be downloaded
and that FTP site automatically deletes the files every week or two, because it has very limited disk space
so if one user were to try to download "too many" files in a row, they'd risk filling up the FTP server and blocking the site for all users
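I never saw this part of the code, but the flow as described works out to something like this (a made-up python sketch, every name in it is invented):

    # Hypothetical sketch of the download flow described above -- not the real code.
    import ftplib
    import uuid

    STAGING_FTP = "staging.example.gov"   # invented name; the real site isn't named here

    def publish_for_download(local_path):
        """Copy the requested file to the tiny staging FTP box, return a temporary link."""
        remote_name = "%s-%s" % (uuid.uuid4(), local_path.rsplit("/", 1)[-1])
        with ftplib.FTP(STAGING_FTP) as ftp, open(local_path, "rb") as f:
            ftp.login()                               # anonymous login, purely illustrative
            ftp.storbinary("STOR " + remote_name, f)
        # the staging box has very little disk and purges files every week or two,
        # so every single download eats space there until the next purge runs
        return "https://%s/staging/%s" % (STAGING_FTP, remote_name)   # "expires after 24 hours"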
I really want to scrape this site and download all the files but the FBI has already gotten mad at me regarding this exact site once before
also do you think I can get in trouble now for giving enough hints that someone could figure out which site it is I'm talking about? I've definitely mentioned enough parts that you could do some serious spelunking on my timeline, figure out which site it is, and DoS it.
in any case, don't.
if I build my "tantric archiver" to very slowly archive the site over a long period to avoid DoSing it, I gotta hope that no one else is doing the same, or we'd conflict
"This link will expire after 24 hours."

OK yeah they haven't touched the site in 8 years.
the denial of service exploit is still there.
BTW, this exploit is actually an improvement on how the site worked when I got there.
So, the site lets you select which file you want, then it does the ftp-publish step, right?
well the previous programmer (who was a Very Interesting Person I have told many stories about) made a mistake in how they implemented the select-a-file part.
see they were using Java (because Government) and the Spring Framework's tools for building "Wizard" style websites, where you have a bunch of steps with NEXT buttons. Like an install wizard... website.
and the spring framework has an object that is instantiated for the entire wizard itself, and a separate object that's per-user. So the "wizard object" is global to the whole webserver, and all users...
well the site has to load up a thread (yes, a website handler loading threads that persist between pages. yes this is a bad idea) that does the FTP-upload part, and it has to keep a reference to the thread... so he put it in the wizard object.
he was supposed to put it in the wizard's USER object, not the wizard object itself.

This meant the reference to the thread was shared between all users of the site
which meant that if two users were on the site at the same time, you'd select "give me file A", the other user would select "give me file B", and GUESS WHAT, you both get file B.
Amusingly because the selection parts were correctly scoped to the user object, it would give you file B but then TELL YOU IT WAS FILE A
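the original was Java and Spring and I'm not going to reconstruct that from memory, but the shape of the bug is easy to show in a little python sketch: per-user state stuffed into the one object every user shares.

    # Sketch of the bug (not the real Java/Spring code): the "wizard" object is a
    # single instance shared by the whole webserver, but it ended up holding per-user state.

    class PublishThread:
        def __init__(self, filename):
            self.filename = filename          # which file this upload is for

    class Wizard:
        """One instance, shared by every user of the site."""
        def __init__(self):
            self.current_upload = None        # BUG: per-user state on the shared object

    class WizardUserState:
        """One instance per user -- where the upload reference SHOULD have lived."""
        def __init__(self):
            self.selected_file = None         # this part WAS correctly per-user
            self.current_upload = None        # the fix: keep the thread reference here

    wizard = Wizard()                         # global, like the shared wizard object

    def select_file(user_state, filename):
        user_state.selected_file = filename               # scoped correctly: per user
        wizard.current_upload = PublishThread(filename)   # scoped wrong: shared by everyone

    # user A asks for file A, then user B asks for file B before A's download starts:
    alice, bob = WizardUserState(), WizardUserState()
    select_file(alice, "file_A.pdf")
    select_file(bob, "file_B.pdf")
    print(alice.selected_file, wizard.current_upload.filename)
    # -> file_A.pdf file_B.pdf: the UI tells Alice she's getting file A, but the shared
    #    upload reference now points at file B, so that's what she actually gets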
additional fun: at the time, some files on the site were not free.
which meant you had two fun scenarios:
1. you ask for a free file, and get a premium file instead
2. you pay for a premium file, but instead get a free file.
want the premium file? go pay for it again.
the whole site is free now (kinda) so at least that's less troublesome.
although there was a fun thing that happened once where we accidentally made the whole site free, but in a weird way:
it still asked you for your credit card details, it just didn't validate or charge them
this happened because the site itself doesn't handle any payment details. instead it passes off some info to a separate site (on the same government domain) which accepts all that info, then redirects the user back with a special "user_paid_for_the_thing"... form value.
and yes, this does mean that if you watched how the payment process worked in your browser's content inspector, you could figure out that the paid-for-the-thing request parameter was there and spoof it.
there was no server-side validation, because those systems couldn't talk to each other.
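the trust model, if you want to picture it, was roughly this (a python sketch; "user_paid_for_the_thing" is my stand-in name from above, not the real parameter):

    # Sketch of the broken trust model described above -- not the real code.
    # The payment site redirects the user's browser back to the file site with a
    # "yes they paid" form value, and the file site just believes whatever arrives.

    def handle_return_from_payment(params):
        # params = whatever came back in the redirect from the user's own browser
        if params.get("user_paid_for_the_thing") == "true":   # attacker-controlled!
            return "here's your premium file"
        return "please pay first"

    # Because the two systems couldn't talk to each other server-side, there was no
    # "ask the payment system to confirm" or "verify a signed token" step, so anyone
    # who spotted this parameter in their browser's inspector could simply send it:
    print(handle_return_from_payment({"user_paid_for_the_thing": "true"}))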
in any case, for development reasons, there were actually two copies of the payment-processing site.
one that was real, and one that was fake.
the fake one basically just skipped the part where it actually processed the payment, and always returned "YEP, PAYMENT PROCESSED!"
and for *mumble mumble* reasons, the fake one was (and probably still is) visible on the public internet.
And the configuration to switch between which one was used had to be done in the repo (subversion!)
which meant that after deploying the site, you then had to do another commit switching it back to the fake one, so you could keep testing locally. it was fun!
in any case, the Questionable Server Setup of the government agency meant that if you wanted to deploy a site you had to file a bugzilla issue and then wait for some review processes to finish and for someone else to manually deploy it
which wasn't a sign of us having a mature devops infrastructure with people who were experts in site deployment and such, no, it was just lots of red tape, and you got generic IT drones doing it and screwing it up
well one day the "IT techs are not deployment experts" and the "you have to make a commit to switch which payment site is used, even locally" parts collided.
see the IT people tended to not actually read the instructions you put in the ticket. they just assumed all sites worked the same. mine had explicit notes that said that the usual deployment didn't work, but about 1 out of 5 times they'd try the usual deployment steps anyway.
I ended up having to intentionally sabotage the ant script we used for deployment, because it could be run in two ways, like "ant ez_deploy" and "ant manual_deploy" and because of reasons, ez_deploy wouldn't work with my site
so my instructions explicitly said "run ant manual_deploy, not ez_deploy!" and still I'd get emails saying "TICKET CLOSED: your site has been deployed" and check the site and it was completely broken because SOMEONE DID EZ_DEPLOY
so I ended up editing the supposedly-organization-standard deployment ant script so that the ez_deploy step would instead just print "THIS SITE CANNOT BE EZ_DEPLOYED, DO IT THE MANUAL WAY LIKE THE FUCKING TICKET TOLD YOU TO"
anyway another key part of the ticket was that it would say, as part of the install steps, which tagged release to check out.
I'd specifically tag a release because if they just did an SVN checkout, they might get a later revision... in fact they'd pretty much always get one
because the weird "you have to make an SVN commit to switch back to the fake payment site" thing meant that the workflow was:
1. make a commit to switch to the real payment site
2. commit your changes, tag the release
3. make another commit to switch back to fake, for development
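I don't remember the exact mechanism anymore, but the "switch" amounted to a value checked into the repo, something like this (hypothetical, and python-flavored for consistency with the other sketches; the real thing was Java-side config):

    # Hypothetical illustration: the "which payment site" switch lived in source
    # control, so whatever happened to be committed was what got deployed.

    # as committed on trunk most of the time (the dev/testing value):
    PAYMENT_SITE_URL = "https://payments-fake.example.gov/"

    # the tagged release was the one commit where this line read:
    # PAYMENT_SITE_URL = "https://payments.example.gov/"
    #
    # deploy from the tag  -> users hit the real payment site
    # deploy from trunk    -> users hit the always-says-"YEP, PAYMENT PROCESSED!" fake one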
and one day the IT tech who happened to pick up the "please deploy this site" ticket just happened to ignore the whole thing about which tagged release to use, and just grabbed the trunk/master/main commit.
which of course included the "switch to fake payment site" commit
so for a couple of days back in 2009 or so, every publication on the site was free: it lied and told you your payment went through, it just never actually charged you for what you bought.
it was a couple days because:
1. my post-deployment checks didn't catch this mistake
2. the helpdesk people didn't notice for another day or two
3. IT always took a couple days to redeploy the site
yeah, DAYS.
we once had a Weekend Event where the site went down on Saturday, and it was a late Wednesday before we were able to get it fixed, because IT was busy with something else, apparently.
the Weekend Event was that the people who hosted the database the site was based on renamed a bunch of columns without warning on a Saturday.
and naturally the site wasn't super-happy with having all the pseudo-SQL failing, because it was doing things like "select station_name from stations where state=CA" and the station_name column was now named station__0032_name
that happened because the SQL was actually a weird abstraction layer built on top of sharepoint (which is itself built on an SQL database, but we weren't using that layer) and they did a system update over the weekend which somehow changed how invalid column names got normalized
and one of the columns was originally named "station name", with a space in it. The pseudo-SQL originally turned it into "station_name", just swapping the space out for an underscore.
but after the update, it did some kind of weird HTML-entity escaping first, so the column apparently became "station&#0032;name", and THEN that got run through the underscore filter, resulting in the "station__0032_name" madness.
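the mechanics, as best I can reconstruct them (the exact escape sequence is from memory and may be off):

    import re

    def normalize_old(name):
        # old behaviour: just swap illegal characters (like spaces) for underscores
        return re.sub(r"[^A-Za-z0-9]", "_", name)

    def normalize_new(name):
        # post-update behaviour, as far as we could tell: HTML-entity-escape the
        # space first, THEN run the same underscore filter over the result
        escaped = name.replace(" ", "&#0032;")    # numeric entity for a space
        return re.sub(r"[^A-Za-z0-9]", "_", escaped)

    print(normalize_old("station name"))   # station_name       <- what the pseudo-SQL expected
    print(normalize_new("station name"))   # station__0032_name <- what we suddenly got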
so I came in on monday to 900 emails from help desk about the site being completely borked and had a fun phone call with the subcontractor to figure out what the fuck happened and if it was correctable on their end and then put in a commit to "fix" it to the new names
marked high priority, please deploy immediately, our site is completely down, angry users are emailing us constantly...

and yeah, it got redeployed within 72 hours.
my favorite "joke" when I was working for the government is that everything about how they "work" makes sense once you realize that it's impossible for them to do their job so poorly that they go out of business
if a private company had been this frighteningly incompetent at core parts of their business, they would have been a smoking crater on the ground a few decades back.
but if failure has no meaning, there's no way to be so bad at doing what your organization is supposed to do that you have to stop doing it.
An individual will eventually run out of savings, a company will be out-competed or go bankrupt.

but the government is forever.
I did come in one day to find out that my entire department (like 40 people?) had just been laid off, except for 2 managers and me.

But it wasn't because of our incompetence or failures, it was because the government almost shut down because of budget things.
we didn't even have a government shutdown that year! this wasn't the 2013 government shutdown.
it was just that we got so close to a shutdown that we ran out of budget we knew for certain we'd have, so we couldn't pay for all the employees.
which is the fun part of being a government employee/contractor.
you lose your job not because of how well or badly you did it, but because the Senate is holding up the federal budget for political reasons.
anyway, the reason I was kept on was that our subcontractor (the one who renamed database columns on a saturday) was having their contract ended, and we needed someone to rescue the 50 million documents stored with them
they weren't de-contracted because of their clear incompetence over a decade, but because it turned out that they had been heavily defrauding the government and we were about to sue them
it turned out the government was paying them for like 10 projects with like 5 full-time employees working on each... but while they were billing for 50 employees, they only had like 10 employees.
and they were just having those 10 employees working on multiple projects at once, but were charging as if each was full time on each project.
these are also the people who tried to contract-lawyer us out of a hundred million dollars worth of data.
because we signed a contract with them which was like:
1. you will store all this data for us
2. we're gonna give you some servers for this project
3. if we end the contract, you give us the servers back
and the part our lawyers missed was that we weren't requiring them to actually store our data on the servers we gave them for this project
it turned out they were misleading us about the performance of their system, for Reasons, and it needed a lot more server power than we thought, and therefore (and because it let them hold our data hostage) our servers weren't hosting our data.
they were instead using the servers we gave them as database indexes/crawlers, with the actual files stored elsewhere (reportedly at least some of them were on a desktop machine, not a server, but I never confirmed that)
which meant that if we did pull the cord and got them to give us the servers back, they'd hand us a pile of useless database indexes to a database we wouldn't have
and they were hoping we'd discover this and the only solution would be to continue the contract.

which is pretty much what happened, they put the lawsuit on hold until we could rescue the data from them
which I managed, and the lawsuit continued.
they tried as hard as they could to delay the process, by refusing to pay for any extra hardware to expedite the process (knowing we couldn't spend money because Government)
you'd think "a company has 15tb of our data and we need it ASAP" has an easy solution, even back in 2011, when you couldn't go to amazon and get an 18tb drive tomorrow
1. go to your local best buy/frys/whatever
2. pick up 15tb of storage
3. copy all the data over
4. stick it in a fedex box
this is not what happened.
they instead wanted to buy one (ONE!) 2tb drive and we'd just mail it back and forth.
They'd load up 2tb, mail it to us, we'd copy it off, then mail it back, and the cycle repeated.
after an angry meeting my manager argued them up...
to two drives.
so they could be loading one up while the other one was in flight
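for a sense of scale, some rough numbers (the copy speed and trip times here are my guesses, not measurements):

    # back-of-the-envelope math on the two-drive shuttle (rough guesses throughout)
    total_tb = 15
    drive_tb = 2
    trips = -(-total_tb // drive_tb)       # ceil(15 / 2) = 8 loaded drives, minimum

    usb2_mb_per_s = 30                     # realistic-ish USB 2.0 throughput
    hours_per_copy = drive_tb * 1e6 / usb2_mb_per_s / 3600   # ~18.5 hours per 2 TB pass

    # each trip = load the drive, ship it, scan it, copy it off, ship it back, plus
    # the IT ticket dance... call it a week or two per round trip, add re-trips for
    # drives that arrive corrupted, and it's easy to see how this stretched across months.
    print(trips, round(hours_per_copy, 1))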
most of the reason I lost a lot of hair over the next 6 months comes down to how both organizations (the government and the fraudulent subcontractor) were basically trying to make this as hard as possible
like, several of the drives arrived with massive filesystem corruption and were useless, so we had to mail them back and get them to re-load that chunk of the data.
so you'd think this would be an easy thing to handle, right?
the drive comes in, we plug it into a computer and check that the files are there, and if not it goes back on the fedex truck.
nah, our organization managed to set it up so that we'd have, at best, like a 3 day delay before we'd even KNOW if the files were there.
and the problem was viruses.
the security department said that legally all usb devices coming into the organization have to be virus checked before being attached to any server
you know, in case your linux or solaris server gets a virus off a windows hard drive
anyway the way that worked was that you'd file a ticket, and a day or two later IT would come pick up the USB device, then a day or two later they'd get back to you with the scan results, and you could schedule a time for them to drop the drive back off
so I ended up in a lot of annoying meetings to argue IT into letting us avoid this huge stall in our data transfer pipeline.
I finally got a compromise:
when the drive arrived, I'd go into the locked IT section with it, and they'd give me a laptop to test it on
the laptop would have no ethernet connection and the wifi card specially removed.
after I ran my scans, I'd hand it back in, and they'd replace the hard drive in it and erase the old one, to prevent the possible viruses escaping into the network
so yeah, a key part of the whole process of getting data back from the subcontractor was "lock foone in a room with no communication with the outside world and let them write some python scripts on an XP laptop to test the files"
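those scripts were nothing fancy, roughly this kind of thing (a reconstruction, not the originals):

    # sketch of the kind of drive-checking script described above (not the original)
    import os

    def check_drive(root):
        """Walk the drive, force a full read of every file, report what's broken."""
        bad, total = [], 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                total += 1
                try:
                    with open(path, "rb") as f:
                        while f.read(1024 * 1024):   # actually reading the bytes is what
                            pass                     # catches the filesystem corruption
                except (OSError, IOError) as e:
                    bad.append((path, str(e)))
        return total, bad

    if __name__ == "__main__":
        total, bad = check_drive("E:\\")             # whatever drive letter XP handed out
        print(total, "files checked,", len(bad), "unreadable")
        for path, err in bad:
            print("BAD:", path, err)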
that was fun.
and by fun I mean terrible. IT hated me and also "accidentally" locked me out once.
because it turns out that IT's locked section has 24/7 guards on it and they change guards at 5pm and since I wasn't part of IT, they had to put me on a special list of users allowed into the locked IT section
but this was apparently the first time they'd needed a "special list of users with access" thing, and they only bothered telling the day-guards about the special list.
so one day I'm there at 4:30pm and I'm scanning the drive (2tb of files takes a long time to scan over usb 2.0) and it says it'll take an hour or two, so I go out and grab some dinner at a local restaurant.
I come back around 5:30 and they're like "who are you? you can't come in"
and I argue that I was just in there, that there's a laptop logged into my account sitting in one of the cubicles, and my library book is sitting there on the desk, half-read!
"NOPE, GO AWAY"
I had to leave my purse and everything in there, overnight.
To this day I'm just happy that I took my apartment key with me when I went to the restaurant, or I would have been locked out overnight.
they kinda-apologized for that later and said they fixed it, but I made a point never to test it. I didn't want to get locked out again.
the locked-in-the-IT part was FUN.
I had to rewrite some python libraries while I was there, because it turned out they gave us the database dump (the metadata, not the files themselves) as CSV files, and... well... CSV files
CSV is not a format. it's a vague idea of a format that everyone implements differently
and they wrote their own CSV exporter, apparently!
and some of the fields in the database:
1. contained commas
2. contained quotes
3. were multi-line
so I was stuck in a locked IT room with an XP laptop and no internet connection, having to rewrite the python CSV module (with a time crunch!) to handle these weird terrible CSV files they were shipping us
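I can't reproduce what I actually wrote from memory, but the shape of it was a little character-by-character parser, something like this (this sketch only handles the standard-ish cases: quoted fields with commas, doubled quotes, and embedded newlines; their exporter needed more special-casing than that):

    # A minimal hand-rolled CSV reader of the kind you end up writing when the
    # built-in dialects don't match whatever the exporter actually emitted.

    def parse_csv(text):
        rows, row, field = [], [], []
        in_quotes = False
        i, n = 0, len(text)
        while i < n:
            c = text[i]
            if in_quotes:
                if c == '"':
                    if i + 1 < n and text[i + 1] == '"':   # doubled quote -> literal "
                        field.append('"')
                        i += 1
                    else:
                        in_quotes = False
                else:
                    field.append(c)                        # commas/newlines kept as data
            else:
                if c == '"':
                    in_quotes = True
                elif c == ',':
                    row.append(''.join(field)); field = []
                elif c == '\n':
                    row.append(''.join(field)); field = []
                    rows.append(row); row = []
                elif c != '\r':
                    field.append(c)
            i += 1
        if field or row:                                   # trailing row without a newline
            row.append(''.join(field))
            rows.append(row)
        return rows

    sample = 'id,notes\n1,"multi-line\nfield, with a comma and a ""quote"""\n'
    print(parse_csv(sample))
    # [['id', 'notes'], ['1', 'multi-line\nfield, with a comma and a "quote"']]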
the fun part of this was that while I could bring files INTO the locked IT space, I couldn't bring them OUT
so I basically had to sit there, write a bunch of python code, then take a bunch of notes on how it worked...
then leave and write it again on my external computer, to bring in for next time
this was my life for a month or two.

and by the way, if this sounds insufficiently stupid, here's the punchline:
it turns out that all of this was worthless and pointless.
see, the reason IT was so insistent on their super-slow virus scanning process was because they had rules about what had to be done before external storage could be connected to their servers, right?

and it turns out these drives... were never attached to their servers.
because it turns out that we were using a relatively ancient version of linux for reasons related to retirement-ultimatums, and it didn't support NTFS
and naturally these 2tb drives were all NTFS formatted
so after IT fucked around for a week trying to mount them on one server or another, they handed them back to me saying "yeah we can't read these. just plug them into your desktop, and copy the files over using FTP"
and my desktop was running a virus checker and so was the network the files were being FTP'd over.
so these incredibly time-sensitive files ended up getting scanned FOUR TIMES before we could use them.
1. once when my locked-IT-room laptop scanned them
2. once when IT did their special scan pass
3. once when my desktop scanned them
4. once when the network scanned them during the FTP upload
naturally we abandoned this difficult and slow and annoying IT prescan/scan/hand back dance after it turned out to be unnecessary, right?

You must be new here
no, we kept it in place, because it had taken so long to negotiate with IT to get them to this point that it would have taken just as long to figure out a new approved process.
so we still did this stupid locked-room-airgapped-laptop dance for all the drives, over months.
and once we got all the data, the fun didn't end there.
see, this was all the data for our site which was supposed to talk to their sharepoint instance.
we didn't have access to their sharepoint instance, or their code, and we had no windows servers
so now the next step was "well I guess we need to build our own back-end site to power our site, to replace their sharepoint mess"
one of my managers estimated it would take a year to develop it.
which pissed me off so much that I went home and spent the weekend completing a 100% feature-complete proof-of-concept replacement for it.
not to "show off", but to go FUCK YOU THIS IS NOT GOING TO BE RUN UNDER THEIR INSANE TIMELINES.
This is not a year long project, because I'm not here just to leech money from the government. I want to Get Things Done.
in any case, because my prototype was not How Things Were Done (I dared use some upstart "python" technology to develop interactive web applications) I then spent the next 4 months or so redeveloping the same thing, but in Java.
the most amusing part of that replacement is that it legally had to be named "OldSite 2.0"
It was written by a different organization (me, rather than the fraudulent subcontractor), with a different backend (Oracle instead of Sharepoint), in a different language (Java instead of C#), but it was considered a direct continuation of the previous one
because we were explicitly banned from starting "new projects", in some attempt to limit government spending.

We could only do continuity and maintenance on existing projects.
so my 100% complete rewrite of the back-end server was included in our department's plans as a simple update on the existing project
Anyway, in thanks for my tireless work to almost single-handedly rescue a hundred million dollars worth of data that our department had collected over a decade...
they cut my salary by 5%
it wasn't because of anything I'd done, of course. this is the government, remember? failing or succeeding doesn't matter.
My contractor had been partially merged with another one and the budget changes meant they had to renegotiate a lot of stuff with the government.
so my salary was cut and I couldn't get a bonus, even though one had been promised to me by the government employees I worked with (I should have realized they have 0 control over that, they can just suggest it)
and with the budget changes that meant they cut all the salaries.... they definitely didn't have money for bonuses that year.
But don't worry, they might be able to give them next year!
and I don't mean "maybe we'll give you a bonus next year, for the stuff you did this year", but "maybe we'll give you a bonus next year, based on what you do next year!"
because yeah, it's every year that you do a once-in-a-decade rescue of an entire government department's data.
I'm sure that'll happen again!
anyway I made my displeasure known, both to the contractor and the government, and the government side was able to... get my name mentioned in the monthly email, saying something like "special thanks to foone turing for their work in rescuing data for the FOOBAR project"
which definitely helped with my salary going down 5%
in any case, I didn't get much time to complain about that.
see, remember how I said my department got laid off overnight except 2 managers and me?
well, the next budget cycle, that department didn't exist.
They moved everyone to other departments
so I ended up being part of another department, but still doing the same things. it was just a difference on paper, in terms of management and budget and such.
this also meant that I had a different manager, which was great because it meant I no longer had the person I'd been working with for 5 years at that point, who could maybe have helped advocate a bit better for my situation
but yeah, after two months of being in that situation where I was doing the same work but for a different agency, I get called in on a friday and they're like "ok yeah... there's a problem. see, you know how you moved departments but are still in the old department, kinda?"
it turned out because of that weirdness, the contractor I worked for had not included my position in the contract.

meaning they hadn't been billing the government for my work the last couple months.
and they weren't going to be able to easily fix that...
so they said "we'll talk to some other departments and try to figure out where we can get the money for your position, but you might not have a job after monday. Have a good weekend!"
So on Monday, I check: there's no update, no "hey, good news, you have a job" email.
I make the first of two "mistakes"
1. I compile all my documentation (a lot of this was on paper, because... government) and hand it over to my government manager, to ensure they'll have access to it when I'm not there anymore.
There's no guarantee they'd get that vital info if I just stopped working there.
and, you know, I want to ensure projects I'd worked on for 5 years can keep going after I leave. Professional pride and all that.
I hand it over and it turns out they have zero idea this is happening.
my contractor has not told the government about the fact they're about to... they're not firing me, or letting me go, exactly. I'm in a very Office Space situation. I'm being unhired, de-jobbed?
so yeah, this is the first anyone at the government knows about the situation.
My contractor clearly would have liked the government not to have known until after I just stopped showing up one day, apparently.
anyway, I pack up my desk and then get called to the contractor office, who go "What's this I hear about you quitting?"
and I'm like "uhh... you told me on friday that I might not have a job after Monday."
"well, I'm getting calls from the government, who are worried that you won't be here after today!"
"THAT'S WHAT YOU TOLD ME, TODAY WAS MY LAST DAY?"
and that was the second mistake...
Apparently they interpreted that as "Foone is quitting".
And I didn't realize that's how they interpreted it because I had been told that today was probably gonna be my last day.
so they think I'm quitting (in protest of being fired, I guess?) and I think I'm being fired, and... I walk out the door, in the confusion.
I end my 5 years working for the government, fittingly to how it had gone: confused, annoyed, and angry.
anyway, the story has a happy ending!
they denied my unemployment claim because I "quit" instead of being fired.
and then two months later, when I'm working at a new job, they re-list my position, exactly, in the local job search places.
I apply, and don't get hired.
I may have lied about it having a happy ending.
and if you want more stories of madness at the government, I have a whole section on my link-to-twitter-threads wiki page for my government job:
https://floppy.foone.org/w/Twitter_Info_Threads#My_Government_Job