Medical Accreditation System Rescue
The Challenge
A healthcare accreditation and verification organisation ran their compliance operations on a system built and maintained by a single development firm. That firm dissolved — suddenly, with no handover, no documentation transfer, and no warning. The organisation was left running mission-critical healthcare infrastructure with no one who understood how any of it worked. Within weeks, the consequences were concrete: accreditation application workflows stopped processing correctly, certificate generation began producing documents rejected by regulatory bodies, and a document management integration began failing silently. Accreditation renewals for medical providers — organisations that needed this body's verification to continue operating — ground to a halt. Regulatory penalties were on the table.
Our Solution
When we were brought in, we had no documentation to work from. No architectural diagram. No database schema. No deployment runbook. No inline comments in the code. We had the running system, the codebase, and three sets of symptoms — and we had to work backwards from all of them simultaneously.
Reading the system from the outside in
We started by mapping what we could observe: the web application's request flow, the database queries each action triggered, the Windows services running in the background, and the file paths each component read from and wrote to. Within two days we had a working picture of how the system was structured — not from any document, but from reading the code and watching the system behave.
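One observation technique of the kind described above can be sketched in a few lines (this is an assumed, illustrative tool, not the actual scripts used): snapshot a directory tree before and after triggering an action in the web application, then diff the snapshots to learn which paths a component writes to.

```python
# Illustrative sketch: discover which file paths an action touches by
# diffing directory snapshots taken before and after the action.
from pathlib import Path


def snapshot(root: Path) -> dict[str, float]:
    """Map every file under root to its last-modified time."""
    return {str(p): p.stat().st_mtime for p in root.rglob("*") if p.is_file()}


def changed_paths(before: dict[str, float], after: dict[str, float]) -> set[str]:
    """Paths that are new or were modified between the two snapshots."""
    return {path for path, mtime in after.items() if before.get(path) != mtime}
```

Run `snapshot` over a candidate directory, trigger the action in the application, snapshot again, and `changed_paths` tells you what was written. Repeating this across upload directories and share mounts is a cheap way to recover a component's file-path dependencies without documentation.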
Three distinct failure modes were active simultaneously.
Failure 1: The application submission pipeline
Accreditation applications were being submitted through the WebForms interface and appearing to succeed — the confirmation screen displayed, the submission timestamp was recorded — but applications were not appearing in the review queue. Staff were manually chasing applicants who believed their submissions had been received.
The root cause was a stored procedure with a hardcoded file server path. The firm had moved the file server before dissolving, updating their own internal reference but not the code. When the procedure attempted to write to the old path, it failed silently — partial data was committed to the database, but the record was never placed in the review queue. The application appeared submitted. It went nowhere.
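The shape of that failure can be reproduced in a minimal sketch (Python with an in-memory database standing in for the original T-SQL; table names, the path, and the application id are all hypothetical): the record is inserted, the file write to the moved share fails, the error is swallowed, and the partial state is committed anyway.

```python
# Sketch of the silent-failure pattern: a swallowed file-write error
# leaves a submission recorded but never queued for review.
import os
import sqlite3
import tempfile

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE submissions (id INTEGER PRIMARY KEY, received_at TEXT);
    CREATE TABLE review_queue (submission_id INTEGER);
""")

# The file server was moved; this path no longer exists.
OLD_FILE_SERVER = os.path.join(tempfile.mkdtemp(), "old-file-server")


def submit(app_id: int) -> None:
    db.execute("INSERT INTO submissions VALUES (?, datetime('now'))", (app_id,))
    try:
        # Write the application bundle to the (now missing) file server.
        with open(os.path.join(OLD_FILE_SERVER, f"{app_id}.pdf"), "wb") as f:
            f.write(b"...")
        # Only reached if the write succeeded.
        db.execute("INSERT INTO review_queue VALUES (?)", (app_id,))
    except OSError:
        pass  # the original procedure's flaw: no log, no rollback, no alert
    db.commit()  # partial state is committed: submitted, never queued


submit(101)
in_submissions = db.execute("SELECT COUNT(*) FROM submissions").fetchone()[0]
in_queue = db.execute("SELECT COUNT(*) FROM review_queue").fetchone()[0]
print(in_submissions, in_queue)  # prints: 1 0
```

The confirmation screen keys off the submissions insert, so the user sees success; the review queue keys off the second insert, which never ran.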
We corrected the path, fixed the silent failure mode so future errors would be logged and surfaced, and audited the database for incomplete submissions. We found 23 applications in a partially committed state and recovered all of them into the review queue.
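The audit that found the 23 stranded applications reduces to an anti-join: submissions with no matching review-queue entry. A hedged sketch of that logic, with illustrative schema and ids:

```python
# Sketch of the recovery audit: find submissions missing from the review
# queue, then re-queue them. Schema and data are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE submissions (id INTEGER PRIMARY KEY, received_at TEXT);
    CREATE TABLE review_queue (submission_id INTEGER);
    INSERT INTO submissions VALUES (1, 'day 1'), (2, 'day 2'), (3, 'day 3');
    INSERT INTO review_queue VALUES (1);  -- 2 and 3 were half-committed
""")

# Anti-join: every submission with no review-queue row.
orphaned = db.execute("""
    SELECT s.id FROM submissions s
    LEFT JOIN review_queue q ON q.submission_id = s.id
    WHERE q.submission_id IS NULL
    ORDER BY s.id
""").fetchall()

# Recover each orphan into the queue.
for (app_id,) in orphaned:
    db.execute("INSERT INTO review_queue VALUES (?)", (app_id,))
db.commit()

print([row[0] for row in orphaned])  # prints: [2, 3]
```

Running the anti-join again after recovery should return no rows, which is also a useful standing health check.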
Failure 2: Certificate generation
Certificates produced by the reporting layer were being rejected by regulatory bodies due to formatting errors — margins, font sizes, and layout had shifted enough to fall outside accepted standards.
The cause was a font dependency. The reporting installation referenced a font that had been installed on the development firm's server but was absent from the production server. When the font was unavailable, the renderer fell back to a system default that changed the layout. The fix was to embed the required font directly in the report definition, making it independent of any server's installed fonts. All pending certificates were regenerated correctly.
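A complementary safeguard (an assumed sketch, not the fix itself, which embedded the font in the report definition) is a preflight check that the render host actually has every font the reports need. Font names and directory are hypothetical:

```python
# Illustrative preflight check: verify required font files exist on the
# render host before generating certificates.
from pathlib import Path

REQUIRED_FONTS = ["CertSerif-Regular.ttf", "CertSerif-Bold.ttf"]


def missing_fonts(font_dir: Path) -> list[str]:
    """Return the required fonts that are not installed in font_dir."""
    installed = {p.name.lower() for p in font_dir.glob("*.ttf")}
    return [f for f in REQUIRED_FONTS if f.lower() not in installed]


# e.g. on a Windows render host: missing_fonts(Path(r"C:\Windows\Fonts"))
```

Wired into deployment, a non-empty result blocks the release instead of letting the renderer silently substitute a default font.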
Failure 3: Document management integration
A background service was responsible for moving uploaded documents from the web application's upload directory to the document management system. It had been failing silently — documents appeared saved from the user's perspective but were never reaching the document management system.
The cause was a network share permission change made during a routine IT security review. No one had known the service depended on that share. We reconfigured the service to run under a service account with the appropriate permissions, restarted it, and audited the document queue. Seven documents from the failure window were unrecoverable; the affected providers were contacted directly.
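The hardened mover follows the pattern sketched below (illustrative Python, standing in for the actual Windows service; paths and logger name are assumptions): permission failures are logged and counted rather than swallowed, and failed files stay in the upload directory where a later audit can find them.

```python
# Sketch of the hardened document mover: surface permission failures
# instead of swallowing them, and leave failed files in place.
import logging
import shutil
from pathlib import Path

log = logging.getLogger("doc-mover")


def move_uploads(upload_dir: Path, dms_dir: Path) -> tuple[int, int]:
    """Move every file from upload_dir to dms_dir; return (moved, failed)."""
    moved = failed = 0
    for doc in sorted(upload_dir.iterdir()):
        try:
            shutil.move(str(doc), str(dms_dir / doc.name))
            moved += 1
        except PermissionError:
            # The original service died here silently; now every failure
            # is logged and counted so monitoring can alert on it.
            log.error("cannot move %s: permission denied", doc.name)
            failed += 1
    return moved, failed
```

A non-zero failure count is the signal the original service never emitted: had it existed, the permission change from the security review would have been caught the day it was made, not weeks later.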
Documentation: building what should have existed
After resolving the three active failures, we spent three weeks creating documentation the organisation had never had: a complete architectural diagram, a database schema with field-level descriptions, deployment and configuration instructions for every component, and a failure runbook covering the most likely failure modes and their resolutions.
The outcome
All pending accreditations were completed within six weeks. The regulatory bodies involved accepted the explanation of the technical disruption — the documentation we produced was submitted as evidence — and waived the penalty provisions that had been triggered.
The organisation estimated that, compared with having a new firm reverse-engineer and rebuild the system from scratch, the engagement saved over 1,000 work hours. The system has run without incident since the engagement. For the first time in the organisation's history, any competent developer can open a document and understand what the system does and how to maintain it.