Friday, 22 May 2015

Obstacles to CDA adoption in Ontario

I'm really starting to wonder now if CDA will ever take hold in Ontario.

There was a time when I admired the adoption of CDA in the U.S. as a part of their impressive "Meaningful Use" initiative.  I have worked first hand with CDA documents from various EMR systems in the U.S. and have seen many successful Health Information Exchanges launch in the U.S. based on CCDA XDS repositories.

Despite the flurry of CDA activity south of the border, I see serious obstacles to adoption of CDA here in Ontario:
  1. The strongest case for CDA in Ontario is the abundance of CDA support and tooling in the U.S.  What's important to recognize, however, is that CDA encodes country-specific data types like postal codes and units of measure that differ between the U.S. and Canada.  So even if we wanted to take advantage of American CDA tools here in Canada, we would first need to modify those tools to use Canadian data types.  The cost of this will in many cases be prohibitive.
  2. The next case for CDA in Ontario is how naturally it would support continuity of care scenarios like eReferral, eConsult, hospital discharge, admission, etc.  The problem with this is that Ontario EMR vendors have already achieved OntarioMD 4.1 certification that requires supporting the import and export of patient data in the OntarioMD "Core Data Set" data format.  In hindsight, it's clear that Ontario should never have invented its own proprietary EMR data exchange format.  But now that we have it, the EMR vendors are going to prefer that we build on that capability rather than trying to add support for a completely new CDA format.
  3. Lastly, many people I speak with about CDA are quick to point out that despite all the HIEs and EMR support developed in the U.S., CDA has not come close to living up to its promise there.  In fact, the EMR backlash against CDA has prompted the formation of an industry association called the CommonWell Health Alliance that is promoting FHIR as the way forward for health data interoperability.  Every technical person I've spoken with who's seen both the CDA and FHIR specs has emphatically preferred FHIR.  Support for FHIR is snowballing everywhere.
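To make the data-type mismatch in point 1 concrete, here's a quick sketch in Python; the function name and regular expressions are mine, purely for illustration of why a U.S.-built validator chokes on Canadian addresses:

```python
import re

# Hypothetical illustration: a U.S.-built CDA validator typically expects
# 5-digit ZIP codes, while Canadian addresses use the "A1A 1A1" format.
US_ZIP = re.compile(r"^\d{5}(-\d{4})?$")
CA_POSTAL = re.compile(r"^[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d$")

def classify_postal_code(code: str) -> str:
    """Return which realm's format a postal code matches, if any."""
    if US_ZIP.match(code):
        return "US"
    if CA_POSTAL.match(code):
        return "CA"
    return "unknown"
```

A tool hard-coded to the first pattern simply rejects every Canadian address, which is the kind of quiet realm-specific assumption that makes "just reuse the American tooling" more expensive than it sounds.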

So it now feels like we're in an awkward in-between time for EMR interoperability in Ontario.  Support for CDA is waning, but the FHIR spec is still only half-baked.  It will be years before FHIR is released as a normative standard.

I will be watching how EMR interoperability unfolds south of the border with interest.  Momentum will either end up falling with CDA or FHIR, and it will be in Ontario's long-term best interest to follow whichever interoperability standard wins in the gigantic market to our south.


Thursday, 23 October 2014

Healthcare Interoperability in Canada: Perfection is the Enemy of Good

Yesterday, as co-chair of the ITAC Interoperability and Standards Committee, I presented opening comments for an ITAC Health workshop on Interoperability.  Details of the event can be found here.  Below is the text of my opening comments.

The Problem

I’m a software developer that got into healthcare about 10 years ago.  When I joined healthcare, I was surprised by a number of things I saw.  Things like:

  • Records are stored on paper and exchanged using paper fax.
  • The software behind the desk looks like it was written in the 1980s or 1990s.
  • The endless transcribing and repeated oral communication at every encounter is reminiscent of medieval monasteries:  In a week a patient can repeat their entire medical history to multiple clinicians and dump out their bag of drugs for each and every one of them.
  • Data exchange, if it happens at all, is often extracted directly from the EMR database (bad practice) and looks like lines of custom pipe-delimited text from my Dad’s generation.

In short:  Why hasn’t technology revolutionized healthcare like it has every other industry?  It feels like Canadian Healthcare is still stuck back in the last century.  Not much has changed in the last 10 years.

Healthcare IT in Canada is behind the rest of the world by most measures.  Even the U.S., which seems committed to doing everything the hard way, is years ahead of Canada when it comes to Healthcare IT.  How did we get here?  How can we fix it?

How did we get here?


You can’t blame Canada for lack of trying.  We have invested billions of dollars into major eHealth initiatives right across the country.  There has been a decade-long project to introduce new healthcare interoperability standards across Canada, organized under a Pan-Canadian EHR Blueprint to get everyone connected into centralized EHR repositories.  We were promised that everyone would have a shared electronic health record accessible by all providers by 2015.  We’re not going to make it.  What happened?

If I were to pick one overarching theme it would be this: Perfection is the enemy of Good.
I’ve seen numerous projects get derailed by intricate Privacy and Security tentacles that grow out of monstrous consent models.  Time and time again we have held up perfectly secure and functional eHealth initiatives because we’re pursuing an absolutely comprehensive and airtight privacy and security model around it.  These delays cost lives.  It’s too easy to indefinitely postpone a project over privacy and security hand waving.

Another issue I’ve seen hold Canada back is our fantasy that each province is a unique flower, requiring completely different infrastructure, software, and its own independent standards committees and EHR programs.  Get OVER yourselves.  We will all save a heck of a lot of money when the provinces just get together and present Canada as a single market to the international Healthcare vendor community, rather than as a balkanized collection of misfits.

From a software developer’s perspective, I can tell you one issue that contributed to delaying Canada’s eHealth agenda is the quality of our Interoperability Standards.  I’ve heard people say, “I don’t care what message standard you use to move your data around—the technology is irrelevant—the interoperability standard isn’t the problem.”  To this, I say “hogwash!”  I’ve seen good APIs and I’ve seen bad APIs.  The “P” in “API” stands for “Programming.”  If you want to know whether a proposed API is any good, you have to ask an experienced programmer.  If you take a look at the HL7v3 standard, it looks to me like they skipped this step.  If it costs 10 times as much effort to implement one API over another, that’s a sign there’s probably a problem with your API.

I think when the whole Canadian HL7v3 thing started out, there were a number of vendors involved in the process.  But one by one they dropped out, and the torch was left to be carried by committees of well-intentioned, but ultimately misguided information modellers.

We in the Canadian vendor community need to take some responsibility for letting this happen.
Smaller vendors didn’t get involved because they couldn’t afford to—many were just struggling to survive in the consolidating landscape.  The tragedy here is they will be the ones most affected by lack of interoperability standards.

Larger vendors arguably stand to benefit the most from a Wild West, devoid of easy-to-use interoperability standards where their Walled Fortress can be presented as the only fully interconnected show in town!

But simply falling into the arms of a handful of large vendors will have a cost for all of us in the long run.  That cost is innovation.  It’s in our best interest to start seriously thinking about supporting a manageable collection of simple, proven interoperability standards.

How can we fix it?

Vendors are the custodians of the most experienced technical minds in Canada.  We need to bring these minds together and take on this problem.  We can’t afford to continue complaining, wiping our hands of responsibility and expecting government to figure it out for us.  We need serious software engineers at the table, rolling up our sleeves, and getting this job done.

Now it’s easy to say that.  But what can we practically do to move this forward?  I recommend 3 things.
  1. We need something in Canada akin to the IHE working groups they have in the U.S.  A focal point for vendor input on the direction interoperability standards will take in Canada.  This needs to happen at the national level.
  2. We need to leverage infrastructure already deployed and we need to leverage standards that have already been successfully implemented in other parts of the world.  This will mean moving forward with a plurality of standards, such as IHE XDS, CDA, HL7v2 and HL7v3, and potentially even FHIR. 
  3. We need to strive for simple, clear and unambiguous interoperability standards.  It’s not enough to say you broadly support a standard like HL7v2.  You need to have very specific conformance processes to go along with it that ensure my HL7v2 messages have exactly the same Z segments and use exactly the same vocabulary as your HL7v2 messages.
A bit more on the last point.  Along with each standard, you need to have, at a minimum, content specifications and vocabulary bindings.  And by this I don’t mean a 400-page Word document that system integrators are expected to read through and implement.  I mean MACHINE READABLE software artifacts that completely specify the structure of how the data will be represented in bytes over the wire and how field values will be unambiguously interpreted.  Representing your specs in a machine readable format accelerates interoperability tooling by a considerable factor.  It’s the difference between building robots, and building robots that are able to build other robots.
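To make the machine-readable point concrete, here's a toy sketch in Python; the profile format, segment names and field positions below are invented for illustration and are not a real HAPI conformance profile:

```python
# The profile is data, not a 400-page document, so a validator can
# enforce it automatically.  Segment and field choices are invented.
PROFILE = {
    "required_segments": ["MSH", "PID", "ZPI"],
    "vocabulary": {("PID", 8): {"M", "F", "O", "U"}},  # administrative sex
}

def validate(message: str, profile: dict) -> list:
    """Return conformance errors for a pipe-delimited HL7v2-style message."""
    segments = [line.split("|") for line in message.strip().splitlines()]
    present = {seg[0] for seg in segments}
    errors = [f"missing segment {s}"
              for s in profile["required_segments"] if s not in present]
    for seg in segments:
        for (seg_id, field), allowed in profile["vocabulary"].items():
            if seg[0] == seg_id and len(seg) > field and seg[field] not in allowed:
                errors.append(f"{seg_id}-{field}: '{seg[field]}' not allowed")
    return errors
```

Once the spec is data like this, two vendors can run the exact same check against the exact same profile, which is precisely what a prose document can never guarantee.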

For different standards this means different things.
  • HL7v2.  Machine readable artifacts: conformance profiles, vocabulary dictionaries.  UHN has done some great work here with their machine readable HAPI conformance profiles.
  • HL7v3.  Machine readable artifacts: MIFs with vocabulary constraints.  Although I don’t see much of a future for HL7v3 here in Canada outside of pharmacy, and even there it’s not clear if that’s going to win in the long run.
  • CDA.  Machine readable artifacts: templates with terminology constraints.  I think the jury’s still out for level 3 CDA.  The Lantana Group has made a good start at organizing CDA templates, but this space still has a long way to go; I think it suffers from some of the same challenges that HL7v3 faces.
  • IHE.  Machine readable artifacts: Integration Profiles.  Diagnostic Imaging is the poster child for how an initiative like this can be successful.  DI is way ahead of other domains in Canada and we can credit the IHE for much of that progress; we need to consider building on the success of this approach in other domains.
  • FHIR.  Machine readable artifacts: resource schemas.  I have to say, given how new the FHIR standard is, it’s impressive how many online conformance test sandboxes are already publicly available; that’s a testimony to how committed FHIR is to machine readability, openness and simplicity.  Read Intelliware's assessment of FHIR here.
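To show what I mean about FHIR's simplicity, here's a minimal sketch of working with a FHIR Patient resource in Python; the sample resource is hand-written here rather than fetched from a live sandbox:

```python
import json

# FHIR resources are plain JSON with a published schema, which is a big
# part of why developers find them approachable.
patient_json = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter"]}],
  "birthDate": "1974-12-25"
}
"""

def display_name(resource: dict) -> str:
    """Build a human-readable name from a FHIR Patient resource."""
    name = resource["name"][0]
    return " ".join(name.get("given", []) + [name.get("family", "")])

patient = json.loads(patient_json)
print(display_name(patient))  # Peter Chalmers
```

Compare that to unpacking an HL7v3 RMIM: any programmer can read this resource on first contact, and that first-contact experience is what wins developer mindshare.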


In closing, I’m asking the vendors: give us your best engineers, and let’s work together to get serious about establishing some simple, functioning interoperable standards to get our healthcare data moving!

Friday, 27 June 2014

When off-shoring software to India, include code quality metrics as a part of the contract

I understand the appeal of off-shoring software development to India: low rates, scalable team size, and a process that has really matured over the years.  India is a serious and credible competitor for software development services.

I have personally been asked to maintain software written by large Indian off-shore companies.  While the software usually meets the functional requirements and passes manual QA testing, in my experience the quality of the code written overseas is often poor.  Specifically, the resulting code is not extensible and it is expensive to maintain.  I am not exaggerating when I say I have seen 2000-line methods within 6000-line classes that were copy/pasted multiple times.

Setting aside for a moment the implicit conflict-of-interest of writing code that is expensive to maintain, in fairness to the Indian offshore developers, when customers complain that it's expensive to change features and add new ones to the delivered system, the developers innocently respond, "well you never told us you were going to need those changes..."

There is a simple answer to this.  Ask for it up front.  And I don't mean ask for the system to be extensible and maintainable.  That's vague.  I mean require the developer to run a Continuous Integration server (such as Jenkins) with a code quality plugin such as SonarQube, and measure the specific code quality metrics that matter.

In my experience, measuring the following 4 metrics goes a long way towards ensuring the code you get back is extensible and maintainable.

  1. Package Tangle Index = 0 cycles.  This ensures the software is properly layered, essential for extensibility.
  2. Code Coverage between 60% and 80%.  This is essential for low maintenance costs.  This metric is about automated testing.  The automated unit tests quickly discover side-effects of future feature changes, allowing you to make changes to how the system behaves and get those changes into production with a minimum of manual regression testing.
  3. Duplication < 2%.  Any competent developer will maintain low code duplication as a basic pride of craft.  But I have been astonished at the amount of copy/paste code I've seen come back from India.  If you don't measure it, unscrupulous coders will take this shortcut and produce a system whose maintenance costs quickly spiral out of control.
  4. Complexity: < 2.0 / method and < 6.0 / class.  This metric plays a huge factor in extensibility.  Giant classes with giant methods make a system brittle and resistant to change.  Imagine a building made out of a few giant Lego blocks versus the same building made out of 10 times as many smaller Lego blocks.  The latter building will be far more flexible to reshape as business needs change.
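The four thresholds above can be expressed as a tiny quality gate; here's a hedged sketch in Python, where the metric names are mine and actually fetching the numbers from SonarQube is out of scope:

```python
# Contractual thresholds mirroring the four metrics recommended above.
# Each entry maps a metric name to a pass/fail predicate.
THRESHOLDS = {
    "package_tangle_index": lambda v: v == 0,
    "code_coverage": lambda v: 60.0 <= v <= 80.0,
    "duplication": lambda v: v < 2.0,
    "complexity_per_method": lambda v: v < 2.0,
    "complexity_per_class": lambda v: v < 6.0,
}

def quality_gate(metrics: dict) -> list:
    """Return the names of supplied metrics that fail their thresholds."""
    return [name for name, ok in THRESHOLDS.items()
            if name in metrics and not ok(metrics[name])]
```

The point of writing the gate as code is that it can fail the build automatically on every CI run, rather than being argued over at delivery time.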
A word of caution about using SonarQube.  Some developers, particularly those with a perfectionist bent, can get lost in a rabbit hole of trying to improve their code's "score" on many of the other metrics offered by the tool.  Violations, Rules Compliance, Technical Debt Score and LCOM4 are particularly tempting to undisciplined developers.  But in my experience, these metrics provide limited return on investment.  If you do decide to measure your code quality, I urge you to ignore these metrics.  While it can be a lot of fun spending weeks making your code "squeaky clean," the business value of these other metrics pales in comparison to what you get out of the 4 metrics I recommended.

So the next time you outsource a development project to India, protect yourself from getting back junk by requiring code quality metrics right in the contract.  It might add an extra 10% to the initial cost of the system, but that cost will be more than offset by the resulting extensibility and maintainability of the code you get back.

Friday, 28 June 2013

What Ontario can learn from Northern Europe

Earlier this month, I participated in an event hosted by the Canadian Foundation for Healthcare Improvement.  The goal of the event was to bring together thought leaders from seven countries to discuss and debate Canada's Healthcare Strategy.  Paul Martin, Deb Matthews, Don Drummond and Michael Guerriere were all there and it was an excellent discussion.  Details of the event can be found here.  Below are some of the ideas that caught my attention.


Startling Facts from Ontario

15% of prescriptions are not filled in Ontario because the patient can't afford the medication.

20% of hospital beds in the province are occupied by someone who shouldn't be in a hospital.  (This problem is often called the "ALC" problem--Alternate Level of Care.)  Hospital beds are the most expensive beds in our health care system.

How Sweden fixed ALC

Sweden had the same chronic ALC problem as Ontario until a couple of years ago when they introduced an innovative solution.  A problem that they couldn't solve for decades suddenly disappeared within 3 months. What Sweden did is split the jurisdictional responsibility of Hospital care from Long Term Care: The province kept the responsibility for acute care, but they moved responsibility for long term care (along with the funding) to the municipality.  And then, and here's the genius, the province charged the municipality a high daily hospital bed fee for every day a person was left waiting to be transferred from a hospital bed out to a long term care facility.  Since the cost of the long-term care bed was so much lower than the cost of the hospital bed, the problem resolved itself very quickly.  Now this is easier for Sweden to do because municipalities have income tax revenue, but I thought the idea of splitting responsibility to force efficiency was brilliant.  (As an aside, in the Swedish tax model, 15% of income tax goes to the federal government, 10% goes to the province, and 20% to municipalities.  No wonder they have such great transit over there!)
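To see the incentive at work, here's a back-of-envelope sketch; every dollar figure below is invented for illustration, since I don't have Sweden's actual numbers:

```python
# Hypothetical daily costs (invented figures, for illustration only).
HOSPITAL_BED_FEE_PER_DAY = 1200  # fee charged to the municipality per ALC day
LTC_BED_COST_PER_DAY = 250       # municipality's cost of a long-term care bed

def municipal_savings(days_waiting: int) -> int:
    """What the municipality saves by moving a patient out of hospital promptly."""
    return days_waiting * (HOSPITAL_BED_FEE_PER_DAY - LTC_BED_COST_PER_DAY)

print(municipal_savings(30))  # a 30-day delay costs the municipality 28500
```

With a gap that wide on every single ALC day, the municipality doesn't need to be told to act; the invoice does the persuading, which is exactly why the problem evaporated in months.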

How the Germans do it

Here in Ontario, OHIP is managed like a Big Government Program, with a heavy bureaucracy managing a lumbering public claims system funded by taxes.  In Germany, it is managed more like a tightly efficient, regulated crown corporation.  Patients pay their health insurance premiums directly to the insurer.  The government subsidizes these premiums for low wage earners, while higher wage earners pay a higher premium on a salary-based sliding scale.  Because it's managed as a separate financial institution (and because it's German) there is a tireless focus on efficiency and effectiveness, managed by teams of heavyweight quants.  People are categorized into 38 different groups, with compensation to providers based on the representation of these groups in their roster.  (Compare this to Ontario's roster compensation that has 2 categories: "normal," and "old.")  Treatment outcomes are measured and a national drug formulary establishes best practices to manage costs.  The Germans approach Health Insurance like a multi-billion-dollar industry and run it like a bank.

What accounts for rising Healthcare costs?

What surprised me about rising healthcare costs was how little of the increase was due to the ageing population we hear so much about.  10% of the increase can be accounted for by an ageing population.  The lion's share of increased cost is the increase of volume of activity.  More medications and more tests.  The consensus at the event was that the solution is to stop compensating providers for services and start compensating providers based on who's in their roster, and to reward outcomes; move to a capitation model.

What can Business Do?

Michael Guerriere (Telus Health) made a number of insightful observations about the role of business in improving Canada's healthcare landscape.

Different sectors respond to failure differently.  In the private sector, if a project is failing, the business will kill it quickly and decisively.  Whereas in the public sector, when a project is failing, governments have a tendency to, as Michael put it, "double down," throwing good money after bad.  His recommendation:  Rely more on private sector capital to solve healthcare problems.

The challenge with this in Canada, however, is that we have 14 little healthcare markets.  These little markets behave too differently from one another for a vendor to build a coherent national strategy, which explains why so few American healthcare vendors have much of a presence in Canada.  I'm painfully aware of this problem in my standards work--It astounds me that every Canadian province feels the need to define different message formats for exchanging healthcare data.  Yes you read that right, Canadian provinces are each defining different, incompatible technical specifications for exchanging health care data.  It's insane.

Primary care EMRs need better communication with the rest of the care community.  This is a topic near and dear to my heart and I will be writing a separate blog post on this topic.

Monday, 10 June 2013

eHealth 2013 impressions: A thousand points of light

I've been attending Canada's eHealth conference for about 5 years now.  This year felt different from previous years.

In previous years, there was a strong presence of large national and provincial initiatives.  This year, it felt more like a "thousand points of light".  Major jurisdictional initiatives have shrunk out of the limelight.  We saw terrific presentations from grass roots pilots at various healthcare organizations across the country, but gone were the ambitious blueprints and grand proclamations of EHR 2015.

A big part of this has got to be the current eHealth Ontario crisis.  Ontario is the largest Healthcare market in Canada by far, but it feels like the wheels have fallen off the eHealth Ontario bus.  Greg Reed announced three priorities when he took the helm in 2010: Diabetes Registry, Medication Management, and OLIS.  The first two projects have been cancelled, and we've seen an unprecedented exodus of top leadership from that organization this spring.

Moving away from ambitious provincial initiatives back to grass-roots projects is mostly a good thing.  Though I continue to feel that every jurisdiction needs, at a minimum, a single patient, provider, and location registry to have any hope of ever achieving shared electronic health records.  Why do we still not have these in Ontario?

The main question I kept asking myself at this conference was: "Wow, what this surgeon accomplished in her hospital pilot was fantastic!  How do we roll her solution out to everyone else?"  That, I think, is the biggest gap in our current eHealth ecosystem.  Every year we should pick the top three best eHealth pilots, scale those systems up, and roll them out to everyone.  We need a market for innovation in Healthcare.

Tuesday, 26 March 2013

Software Procurement in the Public Sector

Last week, I was a member of the panel at a workshop hosted by ITAC Health.  The goal of the workshop was to bring public sector buyers and sellers together to fix a broken procurement system.  I was a member of the vendor panel, representing mid-sized Canadian software companies.  The buyer side was well represented, including an Auditor General of Ontario and the Assistant Deputy Minister for Ontario Shared Services.  Details of the event can be found here.  Below is the talk I gave at the event and some suggestions vendors made for improving the public sector procurement process.


Building a Software System is not like Building a Subway System

As a Software Engineer, the main barrier I see to successful software delivery in the public sector is a misunderstanding about the nature of software.  Software projects in the public sector tend to be managed like major construction projects: like building a new subway system.  How do you manage risk when building a new subway system?  You do years of up front planning, specifying all the details of the entire project long before the first shovel breaks earth.  Why do you do this?  Because re-routing a subway tunnel is very, very expensive.  You have to get it right the first time.

The mistake we make in the public sector, is that we treat software systems the same way. Software is different.  Unlike a major construction project, it is in fact relatively inexpensive to alter a software system after it has been built.  Building a software system is more like building a successful political campaign platform.  Successful political campaigns commit very little up front: They throw out teasers and then poll intensively to suss out the public mood--which parts of the new platform does the electorate hate, which parts get the public excited--then based on this feedback, the direction of the campaign is altered.  It is not a subway tunnel, planned years in advance.  It is built incrementally, guided by constant feedback from the electorate.

To give a specific example, consider Ontario's Medication Management RFP.  Ontario started writing this RFP 6 years ago.  The RFP was finally issued 3 years ago.  Today it is still not awarded.  We've been planning this project for 6 years now, and all we have to show for it is a stack of paper.  If a middle-aged person arrives at an ER today unconscious, we have *no* way of knowing what medications they are on.  Why are we at this point?  It's because the scope of the Ontario Medication Management system is huge.  It's being procured like a new subway system: with massive scope and comprehensive specifications.  The scope includes real-time transactional integrations into pharmacy systems, real-time e-prescribing integration into physician systems, real-time patient lookup into patient registries etc.  These are ambitious plans.

Imagine if, instead of this, 6 years ago we decided to build the software incrementally.  We started with a nightly batch upload of all prescriptions from all pharmacies to a central database and then gave EMRs access to this database.  Pharmacies already batch upload their prescriptions to other partners, so adding a provincial drug database to the feed would be a simple project for them.  Had Ontario instead started with this scope, I believe we would have had a comprehensive province-wide prescription database within a year.  Sure the prescription data is a day old, but day-old data is a heck of a lot better than *no* data.  More significant than that, having a real system out there, in the hands of users, allows you to start polling your users--which parts of the new platform do the users hate?  Which parts get the users excited?  Based on this feedback, you can steer the evolution of the system, often in ways you could never have anticipated at the start.
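The nightly batch feed described above could start out as simple as this sketch; the field layout is invented here, and a real extract would of course follow an agreed spec:

```python
import csv
import io

# A toy pipe-delimited pharmacy extract (invented fields and values).
SAMPLE_FEED = """\
patient_id|din|drug_name|dispensed_date
1001|02242705|AMOXICILLIN 500MG|2013-03-25
1002|00586714|RAMIPRIL 5MG|2013-03-25
"""

def load_feed(feed: str) -> list:
    """Parse a pipe-delimited extract into dict records for a central DB load."""
    reader = csv.DictReader(io.StringIO(feed), delimiter="|")
    return list(reader)

records = load_feed(SAMPLE_FEED)
print(len(records))  # 2
```

That is the whole point of starting small: a feed this simple could have been live in every pharmacy within months, and the hard real-time integrations could have been layered on once the data was actually flowing.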

6 years is an eternity in the world of technology.  6 years ago, there were no iPhones.  3 years ago, when the Medication Management RFP hit the streets, there were no iPads.  Subway trains don't change that much.  But software changes a lot.  When you're planning a software project, particularly something as important as a public health system, if you plan too big for too long, your system will be obsolete by the time it's launched.  Start small, get feedback, and  incrementally improve.

Fix the Q&A Process

I also wanted to touch on an aspect of that process that I think is particularly broken, and that is the RFP Q&A process.  The intent of the RFP Q&A rules is to level the playing field--to ensure all vendors have a fair chance at winning.  The effect of the RFP Q&A rules is exactly the opposite.  Vendors on the inside already understand what the customer needs, and those on the outside have no way of finding out.

The rules require that any question a vendor has about an RFP must be submitted on paper on the public record, with the answers to the questions being shared with all bidders.  While this process looks good on paper, this is what actually happens:

  • Meaningful clarification questions are rarely asked for fear of giving away your requirements analysis advantage to your competitors.
  • On the rare occasion when meaningful clarification questions are asked, 9 times out of 10 the answers don't help.
Software requirements cannot be understood through a public Q&A process.  Delivering software is fundamentally different from delivering milk.  What's required is a conversation, with only the vendor and the customer in the room, where the vendor can ask insightful questions and the customer can answer honestly. It is very, very hard to successfully bid on a project when you don't understand what the customer actually needs, and a public Q&A process has proven that it can't get us there.

To make this work, there would need to be some sort of short-list qualifying process so there are a manageable number of meetings.  But often there are only 2 or 3 bidders.  Even a confidential Q&A would be more fair than what we currently have.

Contract Templates

We heard a lot at the workshop about how attaching a unique contract in each RFP adds delay to the bidding process, and how many clauses in these contracts are showstoppers for many vendors, especially clauses regarding unlimited liability, unlimited indemnification, and IP ownership.

Why not publish, say, 5 contract templates to be used for all public sector procurements, and then have the RFP simply refer to the contract template by name?  This would allow vendors to pre-qualify which of those templates they can bid on and which they can't.  Such pre-qualification could even be announced by vendors if they chose to, so that when selecting a contract template, governments would know in advance which vendors they are excluding.

RFP Star Rating

Those of us who regularly read RFPs know that there is a huge variety in the quality.  Vendors prefer RFPs that describe the business problem and leave the implementation details to the vendor.  Vendors prefer RFPs with clear scope and deliverables.

One of the things Marion Macdonald (ADM Ontario Shared Services) mentioned at the meeting is that Ontario plans to change the system it uses to manage the RFP process.  I would suggest to Marion that she consider allowing vendors to anonymously assign a star rating to posted RFPs.  I imagine a future where budding RFP authors download all the 5 star RFPs as guides for how to write a really good RFP.

References vs. Innovation

My co-panelist Michael Martineau raised an important point, which is that we make a lot of noise about fostering innovation, but then when it comes time to RFP, the RFP inevitably requires "three sites in Canada where the software has been installed for over a year" or some such reference requirement.  You're not going to introduce innovative solutions into the marketplace, particularly from other countries, so long as RFPs contain language like that.




Thursday, 24 March 2011

ePrescribing - Is "Safety Last" an option?

I had the privilege of attending an ePrescribing workshop recently. The provinces were well represented, as were pharmacies and Drug Information Systems vendors.

I particularly appreciated feedback from the physicians at the meeting who had actually used ePrescribing systems in various pilots across the country.  One of these physicians gave ePrescribing in its current form a big thumbs down.  The reason?  He values spending time with his patients.  It takes him 12 to 14 seconds to write a paper prescription.  Completing an ePrescription, on the other hand, takes him on the order of 4 minutes.  He calculated that this resulted in him seeing 3-4 fewer patients in a day.

I've heard many an eHealth idealist bemoan the dinosaur physician who refuses to get with the times, or who jealously guards his or her patient's data. But this is not the case here. This physician just wants to see his patients!

The bulk of this 4 minutes is spent responding to "Alerts" raised by the Drug Information System. The possibility of raising such alerts is the source of much excitement among eHealth proponents: drug interactions, drug allergies, adverse reactions etc could all be detected "at source". Who could say no to that?

The reality, unfortunately, is that the signal-to-noise ratio on those alerts is so bad that by the end of their first week, most physicians were dismissing the endless stream of useless alerts without even looking at them.  All three physicians at the meeting attested to this.

It made me wonder, should we consider focusing on adoption first, and then add alerts later? There are many benefits to ePrescribing beyond alerts: prescription accuracy, minimizing call-backs, etc. Alerts will be caught by the pharmacy system DUR anyways. Perhaps "Safety Last" is actually the best way forward for ePrescribing in Canada!