Thursday, March 17, 2011

The 'Medical-Grade' Network

Cisco uses the term ‘Medical-Grade’ for its published reference architectures: what they consider to be a robust network platform capable of supporting healthcare-specific applications and patient-safety devices such as patient monitors, infusion pumps, oxygen monitors, etc. We have found ourselves in an interesting position, as our own device lifecycle has encouraged the adoption and standardization of many of these device types at the same time that manufacturers are beginning to embrace deploying these devices on IP networks rather than proprietary infrastructure.

By placing these devices on IP-based wired and wireless infrastructure we simplify cabling, enable mobility, facilitate data exchange with EHR systems, centralize server infrastructure into a properly regulated environment, provide remote access, and integrate with existing communications systems. There are certainly numerous other benefits depending on the type of device being utilized and the capabilities of individual vendors. We also introduce risk. Those of us in IT know that systems fail; clinical personnel, however, have become dependent on these devices and are accustomed to older, less complex infrastructure that provided reliable service. Appropriate and complete human processes need to be in place to handle hardware failures, upgrades, and configuration changes.

Not everyone runs an end-to-end Cisco network. I’ll give you my two cents on what we consider to be ‘Medical-Grade’ from the core outward, and perhaps follow up with another post specifically addressing the architecture that allows us to meet these recommendations.

Everything starts with appropriate datacenter facilities. Ideally we want to see redundancy in street power, generators, heat rejection, air handling, UPS, power distribution, and ultimately the power supplies within core network and server equipment. Where medical devices are concerned, the brains of the operation reside here: EHR interface servers, remote access devices, and centralized alarming. On the wireless side you will likely place redundant centralized controllers and other management services in this location.

On a typical campus you will extend from the datacenter (MDF) to your switch closet locations (IDF) via a campus-wide fiber deployment. Physical redundancy is the key here and is accomplished by having multiple paths to each and every IDF location. Additional considerations apply to large deployments requiring distribution-level switching. As we are a relatively small campus we do not have this requirement, but the considerations would be similar within that model.
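One way to sanity-check that physical redundancy is to treat the fiber plant as a graph and flag any single link whose failure would isolate an IDF from the MDF. A minimal sketch in Python — the node names and topology below are hypothetical, not our actual plant:

```python
from collections import defaultdict


def reachable(links, start):
    """Return the set of nodes reachable from start over the given links."""
    adj = defaultdict(set)
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen


def single_link_exposures(links, mdf, idfs):
    """List (link, cut-off IDFs) pairs where one fiber failure isolates an IDF."""
    exposures = []
    for i, link in enumerate(links):
        remaining = links[:i] + links[i + 1:]
        cut = [idf for idf in idfs if idf not in reachable(remaining, mdf)]
        if cut:
            exposures.append((link, cut))
    return exposures


# Hypothetical campus: IDF-A and IDF-B sit on a ring, IDF-C hangs off one run.
links = [("MDF", "IDF-A"), ("IDF-A", "IDF-B"), ("IDF-B", "MDF"), ("MDF", "IDF-C")]
print(single_link_exposures(links, "MDF", ["IDF-A", "IDF-B", "IDF-C"]))
```

The single-homed IDF-C shows up as an exposure; the ring members survive any one link failure.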

From the standpoint of logical redundancy, the goal is to enable redundant physical fiber links between MDF and IDF locations using a technology capable of extremely fast failover convergence should a single physical connection fail. There are a number of ways to accomplish this, but simple is best and KISS is never a bad idea. We will typically be bound to our chosen technology for the duration of its lifecycle, and the options are numerous: STP (yuck), routed access via OSPF/EIGRP, cross-chassis / cross-stack EtherChannel, etc.
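Whatever technology you pick, the behavior you are buying is the same: an active path and a standby that takes over the instant the active fails. As a toy illustration of that property only — this models none of the real protocol machinery, and the link names are made up — here is a sketch:

```python
class RedundantUplink:
    """Toy active/standby model of a redundant MDF-to-IDF uplink pair.

    In a real deployment this behavior comes from the switching layer
    (e.g. cross-stack EtherChannel or a routed access design); this class
    only illustrates the failover property we want from that layer.
    """

    def __init__(self, primary="fiber-1", backup="fiber-2"):
        self.links_up = {primary: True, backup: True}
        self.active = primary

    def link_event(self, link, is_up):
        """Record a link state change and converge onto a surviving link."""
        self.links_up[link] = is_up
        if self.active is None or not self.links_up[self.active]:
            survivors = [l for l, up in self.links_up.items() if up]
            self.active = survivors[0] if survivors else None
        return self.active


uplink = RedundantUplink()
print(uplink.link_event("fiber-1", False))  # traffic converges onto fiber-2
```

The real engineering question is how long that `link_event` takes end to end — seconds of STP reconvergence versus sub-second EtherChannel or routed failover — which is why the choice of technology matters so much here.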

At the IDF level we want to consider all the appropriate facilities for cooling and power. We want UPS units on emergency power capable of running PoE-class switches for a significant period of time should emergency power fail to fire. These units should be network-attached and centrally monitored. Ideally we want dual UPS units in each closet to shield the location from a UPS failure event.
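For sizing those UPS units, a back-of-the-napkin runtime estimate is a useful first pass. A sketch with made-up numbers, assuming a simple linear discharge model — real runtime curves are non-linear at high loads, so treat the vendor's runtime charts as authoritative:

```python
def ups_runtime_minutes(battery_wh, load_watts, inverter_efficiency=0.9):
    """Rough runtime estimate: usable battery energy divided by load draw.

    Linear model only; real UPS runtime falls off faster than this at
    high load, so use it for sanity checks, not procurement decisions.
    """
    if load_watts <= 0:
        raise ValueError("load must be positive")
    return battery_wh * inverter_efficiency / load_watts * 60


# Hypothetical closet: a 2000 Wh UPS feeding a PoE switch drawing 600 W.
print(round(ups_runtime_minutes(2000, 600)))  # 180 minutes under this model
```

Running the same numbers against a fully loaded PoE budget rather than the typical draw tells you whether "a significant period of time" actually holds when every port is powering a device.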

In regard to the IDF, switching equipment should allow for dual power supplies or an externally attached power supply unit capable of providing this functionality. As device types vary, a PoE-capable unit is desirable, and if you can deliver full gigabit at this level, please do. These units will drive your hardwired devices and wireless access points. Avoid grouping devices onto one particular physical switch so that subsets of devices remain online in the event of a single switch failure.
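Spreading a device class across switches can be as mechanical as a round-robin assignment at install time. A sketch — the device and switch names are hypothetical:

```python
from itertools import cycle


def spread_devices(devices, switches):
    """Round-robin devices across switches so a single switch failure
    takes down only a subset of any one device class."""
    assignment = {sw: [] for sw in switches}
    for device, sw in zip(devices, cycle(switches)):
        assignment[sw].append(device)
    return assignment


# Four patient monitors across the two switches in a closet.
monitors = ["mon-1", "mon-2", "mon-3", "mon-4"]
print(spread_devices(monitors, ["sw-1", "sw-2"]))
# {'sw-1': ['mon-1', 'mon-3'], 'sw-2': ['mon-2', 'mon-4']}
```

Losing either switch now leaves half the monitors online instead of none, which is the whole point of not stacking a device class onto one unit.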

The closer we get to the patient the more apparent our single points of failure are. Most wired devices have only a single interface leaving you exposed to device failure, physical link failure, switchport failure, and failure of an entire switching unit. Stock hot spares – or please let me know if you have a better idea.
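On sizing that spare stock: one common approach is to pick the smallest spare count such that the odds of burning through it during the vendor's replenishment lead time stay acceptably low, modeling failures as Poisson arrivals. A sketch — the failure rate, fleet size, and lead time below are made-up numbers, not ours:

```python
from math import exp, factorial


def spares_needed(annual_failure_rate, units, lead_time_days, confidence=0.95):
    """Smallest spare count s such that P(failures during lead time <= s)
    meets the confidence target, assuming Poisson-distributed failures."""
    lam = annual_failure_rate * units * lead_time_days / 365.0
    s, cumulative = 0, exp(-lam)  # Poisson CDF built up term by term
    while cumulative < confidence:
        s += 1
        cumulative += exp(-lam) * lam ** s / factorial(s)
    return s


# Hypothetical fleet: 200 edge devices, 5% annual failure rate,
# 30-day vendor lead time, 95% confidence of never running dry.
print(spares_needed(0.05, 200, 30))
```

Shortening the lead time (or an on-site vendor depot) drives the required spare count down faster than almost anything else in that formula.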

An interesting thought we have had recently: at a certain point wireless almost becomes more reliable. You eliminate the physical link, and you eliminate the dependency on a single physical switch if AP units are properly staggered. YMMV. We continue to stick to the ideal that wireless = mobile != wired replacement. Perhaps this will change in the near future.

I didn’t guess this would get so lengthy. I will address some human issues shortly.
