
OSCAR Crowdfunding

Today I found out through the OSCAR EMR mailing list that there are a number of projects open now for crowdfunding, including an upgrade to the prescription module and a billing module update. I have always found the billing module in OSCAR 12.1 a bit clunky – it works, but (as far as I know) it only supports billing through OHIP, and I would like to see some ability to track bills submitted to outside insurance companies or to the patient directly. If we all chip in a little bit, it will become a reality:

OSCAR EMR Crowdfunding projects

EMR Hardware part 2 – network connections

As I discovered in my quest to set up a functioning EMR, an electronic record does not run on a single computer alone. A client computer is needed to access the server, the two need a network connection between them, and there are security and reliability considerations to address. As the topic is broad, today I’ll focus on what I learned about network connections, and discuss client computers, security, and reliability later. A little background should make the networking details clearer.

For total beginners: EMR software usually runs on a server computer; the user interacts with the software using a client computer that connects to the server, rather than operating the server computer directly. This is similar to how your computer accesses a server to view this webpage. The nice thing about OSCAR is that it runs through a web browser; therefore, provided the client computer can connect to the server, any computer with a web browser can be used to access and operate OSCAR.

There are different ways the client computer can connect to OSCAR. One option is over a local network – the computers in the office connect to each other but not to the wider Internet; the server sits on-site and remote access is not possible. Another option is to connect through the Internet, which requires that the server be reachable from the wider Internet, but it can then be accessed from anywhere with an Internet connection. One could connect to OSCAR locally while in the office and over the Internet while at another location. Alternatively, OSCAR can be run on someone else’s server (through an Application Service Provider, or ASP). This last option is not really DIY – the server is in someone else’s hands, both physically and in terms of upkeep. Since I needed remote access but wanted some control over the setup, I opted to set up my OSCAR server at a central location so that I could access it from the various clinics where I work.

If one only plans to access the server locally (a very secure, but less convenient option), then a simple network switch connecting the computers should suffice. For remote access, Internet connections get involved and you will need a router.

When considering the Internet connection, the upload speed is very important. The server will be serving up files to the client computer, so especially if multiple people will be working in the EMR at once, the server needs a fast upload connection. The problem with the average high-speed Internet connection is that it is biased towards fast download speeds so that users can watch movies, download music, and so on. Uploading is much less of a priority, and understandably so – if everyone were running a file-sharing service or a web server at home it could suck up bandwidth pretty quickly. If the upload speed is too slow, it bottlenecks the server. I’ve been told that an upload speed of at least 3-5 Mbps is required, and in my experience that is sufficient.
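To make the bottleneck concrete, here is a quick back-of-the-envelope calculation (a sketch in Python; the page size and user count are assumptions for illustration, not measurements):

```python
# Back-of-the-envelope look at why upload speed is the bottleneck.
# All numbers below are assumptions for illustration, not measurements.
upload_mbps = 3            # advertised upload speed, in megabits per second
page_size_mb = 0.5         # assumed size of one EMR page plus attachments, in megabytes
concurrent_users = 3       # people working in the EMR at the same time

share_mbps = upload_mbps / concurrent_users          # each user's share of the pipe
seconds_per_page = (page_size_mb * 8) / share_mbps   # 8 bits per byte
print(f"~{seconds_per_page:.0f} s to deliver each page")   # about 4 s with these numbers
```

A few seconds per page is tolerable; halve the upload speed or double the users and it quickly stops being so.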

The router is basically a small computer that directs Internet traffic. You will need a router if you have more than one computer connected to the Internet. There is a huge range of routers on the market, priced anywhere from $20-30 to several hundred dollars.

Since my home Internet provider assigns a dynamic IP address, the router needed to support Dynamic DNS (DDNS). A computer’s IP address is like its postal address on the Internet; when one enters a URL into the browser, the Domain Name System (DNS) uses the URL to look up the correct IP address and directs the client computer there. If the server has a static IP address, then its “location” on the Internet is always the same. If, however, it has a dynamic IP, the address might change from time to time – the DNS records would go out of date and the server would become impossible to find remotely. The solution is Dynamic DNS – the router checks the current public IP address at regular intervals and feeds it to the DDNS service, so even when the IP changes, the same URL will get you there.

There are many DDNS services available, for example, the subscription services from Dyn. If you are really on a budget, Afraid.org offers a free DDNS service; you may need to modify your router firmware to use it (see below), as the commercial routers I’ve owned tend to support only a few of the larger subscription DDNS services.
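To make the mechanism concrete, here is a minimal sketch of what a DDNS update client does. The IP-check and update URLs below are placeholders – the exact update URL and authentication scheme vary by provider, so check your provider’s documentation. In practice the router firmware runs a loop like this for you; the sketch is only to show what is going on under the hood:

```python
# Minimal sketch of a DDNS update loop (placeholder URLs; real providers differ).
import time
import urllib.request

CHECK_IP_URL = "https://ifconfig.me/ip"  # any "what is my public IP" service
# Hypothetical update URL; substitute the one your DDNS provider gives you.
UPDATE_URL = "https://ddns.example.com/update?hostname=myclinic.example.com&token=SECRET"

last_ip = None

def public_ip():
    with urllib.request.urlopen(CHECK_IP_URL, timeout=10) as resp:
        return resp.read().decode().strip()

while True:
    try:
        ip = public_ip()
        if ip != last_ip:
            # Tell the DDNS service the new address so the hostname keeps resolving
            urllib.request.urlopen(UPDATE_URL + "&myip=" + ip, timeout=10)
            last_ip = ip
    except OSError:
        pass  # network hiccup; try again on the next cycle
    time.sleep(300)  # check every 5 minutes
```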

There are commercial-grade routers that have more advanced Wi-Fi encryption, virtual private network support, and faster connections, but in my experience it isn’t necessary to pay for most of these features. Given the slow upload speeds of most connections, a Gigabit router isn’t necessary – a standard 10/100 Ethernet router, even if it tops out at 100 Mbps, will not be the limiting factor. Almost any old $20 second-hand router will do; most of the “advanced” features on the more expensive routers have to do with the software installed on them rather than the hardware, and the software can be altered. This means that if your cheap router doesn’t support DDNS, the firmware can often be replaced with DD-WRT, an open-source router firmware that does. When my D-Link router died after about 5-6 years of service, I replaced it with a second-hand Linksys WRT54G (first released around 2002) and flashed the firmware so I could use DDNS. The instructions for doing this are widely available on the Internet; it takes a few hours, and some anxiety is involved because of the potential of “bricking” (rendering inoperable) the router if the procedure isn’t followed properly.

Now that we’ve discussed the connecting hardware, we can look at the computer that will do the connecting – the client computer.

EMR Hardware part 1 – to rack, or not to rack?

Having picked OSCAR as the software for the EMR experiment, I set out to find suitable computer hardware to install a trial version of the software on. My plan was to set something up initially as proof-of-concept and give the software a test run before making a commitment.

As an overview, a few pieces of hardware are needed to make an EMR work. There needs to be a server computer that runs the software, a client computer that allows the user to access the software, and network connections in between. Other accessories, like a battery backup, are not strictly essential but are very important – these will be discussed in a later post. Today we’ll take a look at the server hardware.

A perusal of the OSCAR user’s manual (available online) suggested that for a single-person clinic, a typical $500 desktop PC would likely be sufficient. The OSCAR service providers seem to routinely supply Mac Mini computers (or something comparable) for about $1000. I did find an anecdote on the PEI OSCAR blog suggesting it would be possible to set up OSCAR on an old consumer-grade PC (see the link for an excellent breakdown of the potential costs involved in setting up OSCAR). The idea is to use a computer that is obsolete for most people’s purposes but still working. People do this all the time to run web servers for online games or for serving up webpages – why not use it for OSCAR, a browser-based EMR?

I have yet to see any source of information that compares the performance of OSCAR on different server systems, so there is no way to know for sure (at least as far as I know) how little one can get away with in terms of hardware. I’ll provide my experience here in case it helps anyone else making a hardware decision.

I ultimately decided to use rack server hardware for my OSCAR server. I was able to find a first-generation IBM x3650 rack server on Kijiji for $250. It has an Intel Xeon 3.0 GHz dual-core processor, 4 GB of RAM, 8 x 73 GB hot-swappable hard drives, a hardware RAID controller, dual gigabit Ethernet adapters, and dual hot-swappable power supplies. This was surely hot stuff in 2004, but by today’s standards it is pretty dated (when a pretty basic desktop computer comes with a 1 TB hard drive). It was a good computer to experiment on, and so far, in my experience, it has enough power to do the job.

IBM x3650, Gen 1

The x3650, with a face full of hot-swappable hard drives

From an efficiency perspective, and based on what I managed to teach myself about computers, even an old rack server has enough processing power to handle requests from one or two users, with room to scale up to multiple users if needed – after all, this is what these machines were designed to do. There is minimal need for a graphics processor, so the server, which doesn’t have a fancy video card, does fine here, and one is not paying for extra hardware that won’t get used. Theoretically, server features like multiple hard drives arranged in RAID speed up data access; for an application like OSCAR I’m not sure it really matters. What I can say from my experience is that this machine was fine for me – one physician with a medical office assistant.

From a maintenance and reliability perspective, the server does have advantages over a converted PC. For one, there is a layer of redundancy: parts that commonly fail (fans, hard drives, power supplies) come in pairs. Light path diagnostics let one tell at a glance if one of these components has failed, and replacement is designed to be easy, to the point that many of the vulnerable parts are hot-swappable (they can be changed out while the computer is still running). With multiple hard drives configured in RAID (Redundant Array of Independent Disks), the contents of one drive can be mirrored onto another, so that if one drive fails the data are still safe on the other disk and the array can be rebuilt by replacing the failed drive.

In contrast, the Mac Mini, which seems to be somewhat of a current standard in OSCAR computers for solo practitioners, is hardly user-serviceable at all. If something breaks, repairing it oneself (if that is even possible) is a complicated job involving delicate dismantling, and the more likely outcome is that it will need to be taken in to Apple. For mission-critical equipment, it seems important to be able to fix it quickly. Server hardware is also (at least theoretically) built to run 24/7 and designed with reliability in mind, which may not apply to many inexpensive desktop computers.

Is server-grade hardware overkill, though? For Internet retail sites where every minute of up-time translates into a dollar value in sales, hot-swappable components are probably valuable. Can an outpatient psychiatrist function for a day without a medical record? Maybe. My experience says that after a while, it starts to become mission-critical – how do you manage if you don’t even know who is scheduled to come for the day?

From an operating cost perspective, power consumption is a consideration – I have not done the measurements myself, but from my research, an old rack server like the x3650 runs hot, has a lot of fans, and uses considerably more electricity than a little, efficient Mac Mini. In my building electricity is included with rent, so the cost to me is no higher than for any other computer I may have chosen to use, but the environmental costs are probably higher than necessary. If a component fails in a rack server and needs to be replaced, it is cheaper than buying a new computer, as might need to be done with a Mac Mini. (Although at $250, one could just buy another rack server!)
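For a rough sense of the difference, here is a back-of-the-envelope comparison of yearly electricity cost; the wattages and the electricity rate are assumptions for illustration, not measurements of my own hardware:

```python
# Rough yearly electricity comparison (assumed figures, not measurements).
rack_server_watts = 300     # assumed average draw for an old 2U rack server
mac_mini_watts = 20         # assumed average draw for a small machine like a Mac Mini
rate_per_kwh = 0.13         # assumed electricity rate, $/kWh

hours_per_year = 24 * 365

def yearly_cost(watts):
    kwh = watts * hours_per_year / 1000   # convert watt-hours to kilowatt-hours
    return kwh * rate_per_kwh

print(f"Rack server: ~${yearly_cost(rack_server_watts):.0f} per year")  # ~$340
print(f"Mac Mini:    ~${yearly_cost(mac_mini_watts):.0f} per year")     # ~$23
```

With numbers like these, the electricity difference over a few years approaches the purchase price of the server itself.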

There are two other important things about rack-mounted servers that are easy to learn from experience but hard to learn from reading the Internet. The most important is noise – not a widely advertised property of rack server hardware, but an important consideration. I was told that a 2U server like the x3650 is much quieter than a 1U model because it is larger and can therefore accommodate larger (and therefore quieter) fans. Even so, it is still loud – compared to a desktop computer, the x3650 sounds like a jet turbine when it fires up and becomes only slightly quieter afterwards. I would not recommend putting one of these machines where one works – especially if one intends to talk to people. Clearly these machines are designed to be housed in separate, purpose-built rooms.

Server rack for the OSCAR server.

The x3650 racked up in an XRackPro2 noise-insulated server cabinet. Even with the insulation and glass, this thing is loud!

Another important observation is the size of a rack-mounted computer. It doesn’t look imposing from the front, but the real bulk is in the depth. It is much deeper than one might suspect from looking at a picture – about 3 1/2 feet. Mounting it inside a rack cabinet further increases the required floor space. This is not a setup that one can easily tuck away inside a closet or under a desk. The server does not strictly need to be mounted in a rack – it could be left on the floor or stood up on its side (using a special mounting kit) – but there are reasons for racking it. Mounting inside a lockable cabinet provides some security, preventing unauthorized physical access to the machine. If the machine will be anywhere near people, a sound-insulated cabinet can reduce some of the noise. It also just looks better. The problem with server racks is that they are either monstrous (full-sized 42U racks can be found retired from data centres and cheap on Kijiji, but they might be 7 feet tall and need two people to move) or expensive (under-desk models are in the $500+ range). What I didn’t realize before I bought the computer is that many of the inexpensive 6U or 8U racks available from the neighbourhood computer store are not actually deep enough to mount a rack server; they must be designed with other hardware in mind, such as audio equipment. I ended up buying an XRackPro2 – not cheap, but it provides some sound insulation and it is lockable. At the end of the day, though, it is still louder than I would like, and it is not easily movable.

In conclusion, my experience with the rack server was a bit like owning a vintage motorcycle. It’s fun to set up and tinker with (and blog about), and the design appeals to a certain manly sensibility, but after the initial thrill wears off, one wants to trade it in for something quieter and more energy-efficient. If I were to do it again (set up an EMR with minimal expense, without external funding), I would only choose second-hand rack server hardware if there were a dedicated, separate space to set it up in and it were likely to see heavy use where reliability is an important factor. Otherwise, the benefits in redundancy and ease of servicing are outweighed by the noise, size, and power consumption. Also, it is difficult and/or expensive to obtain a proper server rack and move it into place.

A Mac Mini is still attractive because of the size and power efficiency factor, but it loses points in my eyes for not being easily user-serviceable, and it costs twice as much.

I might actually opt for a quiet $500 desktop computer in a tower configuration that can be tucked away under a desk or inside a closet. The priorities would be a fast multi-core CPU and redundant hard drives if possible, while skipping fancy video or multimedia cards. In terms of specifications, I’m not sure how little one can get away with for the CPU, but with respect to hard drive space I can say that notes in OSCAR take up very little room. After 5 months of use my notes take up about 5 MB. Scanned documents are another story, accumulating at a rate of perhaps 10-20 MB per month. At that rate it would take years to add up to even one GB, but it is also important to note that OSCAR stores daily backup files for a month, so multiply however much space you think you will use by a factor of 30. If there are extensive paper charts to be scanned, they will also require storage space; my rough estimate based on my experience would be about 10 MB per patient’s paper chart, per year. Hard drive space is cheap these days, so if the math is too much trouble, a 1 TB drive should be more than enough space and still affordable.
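To put the arithmetic in one place, here is a rough year-one estimate using the figures above; the number of existing charts to scan is hypothetical, so plug in your own numbers:

```python
# Rough year-one storage estimate using the figures from the text.
# charts_to_scan is a hypothetical number; substitute your own.
notes_mb_per_month = 1       # ~5 MB of notes after 5 months of use
scans_mb_per_month = 20      # scanned documents, upper end of 10-20 MB/month
paper_chart_mb = 10          # rough per-chart figure for back-scanned paper charts
charts_to_scan = 300         # hypothetical number of existing charts to scan in
backup_copies = 30           # OSCAR keeps roughly a month of daily backups

live_data_mb = 12 * (notes_mb_per_month + scans_mb_per_month) + paper_chart_mb * charts_to_scan
total_mb = live_data_mb * backup_copies   # pessimistic: assume each backup is a full copy
print(f"~{live_data_mb / 1024:.1f} GB of live data, ~{total_mb / 1024:.0f} GB including backups")
```

Even with the pessimistic factor of 30 for backups, the total comes out under 100 GB in this scenario – a 1 TB drive is mostly empty space.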

Next up – how does one go from computer-in-hand to running server? The server needs to be configured, of course.

EMR Software – why open source is important

The first step in setting up an Electronic Medical Record is to pick the software. There are many options to choose from, but the list can be narrowed significantly when considering that OntarioMD only considers certain software providers to be funding-eligible. In other words, in order to get money from the government, one needs to go with one of the options on their list, all of which have to meet a certain standard of functionality. At the time of this writing, there are at least 13 options on the OntarioMD funding-eligible list – still a lot to choose from.

Looking at software from the perspective of efficiency, it needs to have enough features to be functional, and it needs to be simple enough that using it does not take more time than keeping a paper chart. From a private practice psychiatrist’s perspective, those functions include appointment scheduling, record-keeping, prescriptions, creating and faxing consultation reports, and billing. Other functions are a bonus. I have not tried every package on the market (not even close), but I can say that when looking for software, it is important to make sure it does what is needed of it.

Functionality aside, let’s look at EMR software from the perspective of ease of maintenance and operating costs. These are longer-term issues, which I will attempt to outline below. In my mind, the biggest factor in both of these domains is vendor lock-in. Consider me paranoid, but using software that stores patient data in a proprietary format that would allow a software provider to hold the data hostage does not sound like a good idea. Keep in mind that physicians need records to defend against complaints and lawsuits, not to mention to keep track of the care we are providing. Therefore, access to those records even decades into the future is extremely important.

Another factor to consider in ongoing maintenance is the longevity of the product. Software out of the box is great at the time. If it doesn’t change in ten years – not so great. Can you imagine using record-keeping software that still runs on Windows 95? What about Windows 3.1, or DOS? Even if software meets all of our needs at present, standards of care and practice will change in the future. OHIP, for example, no longer accepts billings by diskette. In the future, all of our records may be connected by a network. It may become the standard of care to have clinical decision-making aids integrated into our software. We may need to make decisions based on individual patient parameters like genotype. If the software we use does not evolve, it will no longer be useful.

In order for an EMR to continue to evolve, it needs to be maintained. In order to be maintained, it either needs to be profitable (i.e. there is a market for it) so that a company will continue to work on it, or it needs to be backed by an enthusiastic user community. As I mentioned in my previous post, as long as the Ontario government is giving away money, there is an artificial market for EMR software. Doctors have money to throw away, so entrepreneurs are happy to develop software to collect it. After the money dries up, what happens? It seems to me that before signing on for any particular software, it would be a good idea to determine how many people use it, in how many places, and for how long.

With this in mind, I propose that an open-source option would meet the needs for a non-proprietary format and product longevity. Open source means that the software is usually free in the sense of being very inexpensive, and more importantly, free in the sense that anyone can look at the source code and contribute improvements. Even if the original developer becomes defunct, the users of the software could band together to make sure the software continues to be supported, and the users do not have to depend on a development company to access their data.

In my search for open-source EMR software, only one option stood out – OSCAR, developed at McMaster University in 2001. It is, to my knowledge, the only OntarioMD funding-eligible option that is open-source. Since OSCAR is open-source, changes and improvements can be made by anyone. There is also no licensing fee to use OSCAR, and since it runs on a MySQL database, the patient records are not tied up in any kind of proprietary format that a software company could use to hold one hostage. A fully-functional version of the software is freely available from the OSCAR EMR website – any interested party can download it, install it, and take it for a test drive. It therefore meets the needs for functionality, ease of maintenance, and low operating costs. It is also supported by a not-for-profit entity, OSCAR-EMR (similar, perhaps, to the way in which Canonical supports the development of Ubuntu). It seemed perfect for a self-maintained, do-it-yourself setup.
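As a concrete illustration of what it means for the data not to be held hostage, here is a minimal sketch of pulling records straight out of the underlying MySQL database with a few lines of Python. The table and column names are placeholders rather than OSCAR’s actual schema, so check the real schema before relying on anything like this:

```python
# Minimal sketch: export patient records from the MySQL database to CSV.
# Table and column names below are placeholders, not OSCAR's actual schema.
import csv
import mysql.connector   # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="oscar_user", password="********", database="oscar_db"
)
cursor = conn.cursor()
cursor.execute("SELECT last_name, first_name, date_of_birth FROM demographic")  # placeholder names

with open("patients.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["last_name", "first_name", "date_of_birth"])
    writer.writerows(cursor.fetchall())

cursor.close()
conn.close()
```

The point is not that one would manage records this way day to day, but that nothing stands between the physician and the raw data except a standard, well-documented database.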

Next, we’ll take a look at setting up hardware to run a basic OSCAR EMR system.

The Do-It-Yourself EMR Experiment

Five months ago I decided to try an experiment – to see whether it would be possible to set up an Electronic Medical Record (EMR) on a limited budget. OntarioMD provides government funding to physicians who are looking to switch from paper to electronic records, but the amount of funding is limited. (I applied in August 2013 and was approved almost a year later.) My question, therefore, was whether it is possible for a physician to set up an EMR independently of government funding. If EMR is the way of the future, why should it need to be heavily incentivized in order to get people to make the switch? If it really is better, then shouldn’t it be faster than paper, easy to maintain, and competitive with paper charts in terms of operating costs?

There are other reasons why I thought this experiment would have social value. Besides physicians who are new to practice, there are other professionals (e.g. naturopathic doctors, independent psychotherapists and counselors) who could use an EMR but who do not have access to funding.

In this series on EMRs, I’ll write about my experience trying to set up an EMR system keeping those three points in mind:

1. Efficiency
2. Operating costs
3. Maintenance

Efficiency is relevant because if the EMR, subsidized or not, is slower to operate than keeping paper charts in a filing cabinet, it does not make sense for the average doctor to make the switch. Yes, there are future visions of big medicine – connecting all of the EMRs in a network that would allow for information sharing and data mining (and government snooping, perhaps?). In the long term that may contribute to better care from a systems / population perspective, but from the perspective of most doctors, I would argue that we care primarily about whether it helps us provide better care to the patient in front of us right now. If the record-keeping system slows us down or does not add any short-term benefit, it is not very attractive. Spend a little time on the online self-help forums or talking to colleagues, and it does not take long to hear stories about the physician who stays a couple of extra hours at the end of the day to finish typing paperwork, whose appointments run over time because of the extra time it takes to figure out how the prescription module is supposed to work, or who (worst-case scenario) is not able to function at all because the computer is down.

Operating cost is also very important – a doctor could apply for the government funding and wait until it goes through before switching over, but the funding lasts only a number of years, and after that the burden of maintenance goes back to the physician. In my mind, that means the EMR had better be easy and cost-effective to maintain after the funding runs out, or else we would be foolish to jump on the bandwagon only to be saddled with the burden of maintaining aging computer hardware and continuing to pay service fees that mostly benefit the proprietary software companies and service providers that sprang up when the subsidy gravy was flowing.

Ease of maintenance is closely related to operating cost, but not exactly the same thing. Even if a doctor contracts out the maintenance of the EMR to a third party, the doctor is ultimately responsible for whether or not it works, because we are the ones who bear the consequences if it doesn’t. There are also factors to consider that a third-party service provider is not directly responsible for – the physician’s client computers (the ones used to access the server), the Internet connection, the physician’s time to learn how to use the software and ensure that staff know how to use it. Also, for a solo or small group practice that does not have in-house IT staff, the physician is more than likely going to be the one troubleshooting when there are small problems and therefore (in my opinion) should know how the system works. Consider, as an analogy, commuting to work on a bicycle. Of course, you can take your bike to a shop every time you want it tuned up. Even so, if it breaks down on the road, the rider’s tools and mechanical knowledge make the difference between calling a taxi or walking to work and getting up and riding again.

Next up, we’ll look at the first step in constructing the EMR – selecting the software – keeping the three above points in mind.

DIY Medicine

A big challenge that I’ve noticed in my work is how to get help for the people who need it – even when they take the step of asking for help, wait times are very long. However – it seems that a lot of psychiatry is not rocket science (as an example, see this article in The Guardian on the MANAS intervention, where lay people were trained to deliver effective psychotherapy).

This study really makes one think about how mental health care could be provided for all people, by all people. I believe, for example, it is quite reasonable to argue that mindfulness training and interpersonal skills should be in the hands of the average person. In a sense, they still are – there are many places where one can learn to meditate for free. At the same time, there seems to be a trend in our society for everything to become ultra-specialized, so that there is an expert for everything and less emphasis on disseminating skills for people to help themselves.

All of this brings me to the issue of how medical knowledge is disseminated. In the computer world we have open-source software – freely available, free to modify, free to disseminate. This is, in some ways, the opposite of proprietary software, which makes the user reliant on an expert company or institution for updates and licenses. Is mental health care a bit too much like Microsoft or Apple?

Psychiatrists could probably be doing a lot more to work with other people, like artists, designers, or software developers, to package health information in a way that is more easily disseminated and accessed. We could put more power in the hands of people to help themselves. OHIP doesn’t compensate us to do that (though we would get nicely compensated for working in a hospital or an ER). Still – would wellness promotion and education be a more economical use of our time?

Private Practice, DIY, & Quality Improvement

Over the past month I’ve been looking at some Electronic Medical Record systems to make my life a bit easier. I don’t have a busy private practice but I can see the benefits of cutting down on paperwork and having access to records from anywhere.

This process of quality improvement probably wouldn’t be happening if I didn’t have my own practice. Although it is a lot more work setting up my own office than working in an institution (I do both), there are certain benefits, and I think the main one is the mentality that it fosters. Looking back, I would encourage every psychiatrist to do at least some independent practice because it encourages a sense of responsibility for quality improvement and advancement. Working with patients and other doctors in the community encourages us to be responsive to the challenges the community faces, whereas in an institution it is all too easy to leave that work to whoever is in charge of the department.

I can definitely see benefits in how I use office information technology. Working on my own has led me to Internet-based phone systems and voice recognition software to improve efficiency and mobility. Investigating EMRs has taught me a lot about computer hardware and security. I’ve been able to help the family doctor I work with to use more technology in her practice. By being part of a community and taking responsibility for our medical practices, we can help each other a lot – so here’s to going out on your own.