YOUR pWNED HEART
Hacking attacks can turn off heart monitors
by Richard Thurston / 12th March 2008
American researchers have proven it is possible to maliciously turn off individuals’ heart monitors through a wireless hacking attack. Many thousands of people across the world have the monitors, medically known as implantable cardiac defibrillators (ICDs), installed to help their hearts beat regularly. ICDs treat abnormal heart conditions; more recent models also incorporate the functions of a pacemaker. Their function is to speed up a heartbeat that is too slow, or to deliver an electrical shock to a heart that is beating too quickly. According to the research by the Medical Device Security Center – which is backed by Harvard Medical School among others – hackers would be able to intercept medical information on the patient, turn off the device, or, even worse, deliver an unnecessary electrical shock to the patient.
The hack takes advantage of the fact that the ICD possesses a radio which is designed to allow reprogramming by a hospital doctor. The ICD’s radio signals are not encrypted, the Security Center said. The Security Center demonstrated the hack on an ICD made by Medtronic using a PC, radio hardware and an antenna. The ICD was not in a patient at the time. The research is detailed in a report released today. The report reveals that a hacker could “render the ICD incapable of responding to dangerous cardiac events. A malicious person could also make the ICD deliver a shock that could induce ventricular fibrillation, a potentially lethal arrhythmia.”
The Security Center says manufacturers of ICDs could implement several measures to prevent the threat. These include making the IMD produce an audible alert when an unauthorised party tries to communicate with it. It also suggests employing cryptography to provide secure authentication for doctors. The researchers added that the risk facing patients is low. “We believe the risk to patients is low and that patients should not be alarmed,” they said in the report. “We do not know of a single case where an IMD patient has ever been harmed by a malicious security attack.” They added that hackers would need to be physically close to their intended victim and would need sophisticated equipment; the kit used in the demonstrated attack cost $30,000. The researchers omitted their methodology from the paper to help prevent such an attack from ever happening, they said. Medtronic said the chance of such an attack is “extremely low”. Future versions of its IMDs, which will transmit radio signals up to ten metres, will incorporate stronger security, it told reporters.
10. WHY IS MEDTRONIC LEGALLY RESPONSIBLE?
“Manufacturers of medical devices have a duty to patients to produce safe products. In lawsuits against Medtronic prepared by Lieff Cabraser, our clients allege that Medtronic misrepresented the safety of the Sprint Fidelis lead. Hundreds of injuries linked to Sprint Fidelis heart defibrillator wires had been reported to the FDA as of the end of 2006. The high and early failure rate of Medtronic Sprint Fidelis leads was also reported in a medical journal in 2006. Yet, Medtronic failed to issue a recall and instead continued to sell the devices.”
NO SMALL THREAT
On the Road With Cheney
by Deb Riechmann / March 31, 2008
He travels with a green duffel bag stuffed with nonfiction books about military campaigns and political affairs. He has an iPod and noise-canceling earphones to listen to oldies and some country-western. Oh, and he has two planes, including a C-17 military transport with a 40-foot silver trailer in its belly for his privacy and comfort, and round-the-clock bodyguards and medical staff. Aides pack the Diet Sprite — a Cheney favorite — keep the decaffeinated lattes flowing, and tune the tube to Fox News. Vice President Cheney is not your regular road warrior. Mr. Cheney returned Wednesday from a 10-day trip to Iraq, Oman, Afghanistan, Saudi Arabia, Israel, the Palestinian territory, and Turkey. The rigors of travel and ever-present security concerns make sightseeing difficult. Still, he squeezed in a little on his final stop in Istanbul. The vice president, his wife, Lynne, and daughter, Liz, saw Topkapi Palace, seat of the Ottoman sultans for almost 400 years. For all his globe-trotting, Mr. Cheney had never been to Istanbul, home of the Bosphorus Bridge that links Europe and Asia.
More often, Mr. Cheney’s days on the road are spent holed up in planes, helicopters, hotel rooms and stuffy government buildings. They are long, grueling days. His staffers say they have to run fast to keep up with a schedule that seems especially rigorous for a 67-year-old man who has had four heart attacks. Like the gadget inside his chest that makes sure his heart is beating in sync, Mr. Cheney paces himself. “Because he’s been doing it for so long, he has a pretty good sense of what’s important and what’s not important,” a former administration official, Liz Cheney, said. “He keeps his perspective, doesn’t let the little things get to him, you know. They sort of roll off, and he keeps his sense of humor,” she said in Saudi Arabia.
Pacemakers can be hijacked by radio / 22 March 2008
It gives new meaning to the term “heart attack”.
Last week researchers led by William Maisel at Harvard University used a commercially available radio transmitter to hijack the software on a device that acts as both a heart pacemaker and defibrillator. The device was not implanted in anyone, but the experiment raises the prospect of hackers being able to disrupt a person’s heartbeat or stealthily administer damaging shocks. Is the threat of a hacker-instigated heart attack imminent? “The chances of someone being harmed by malicious reprogramming of their device is remote,” says Maisel. However, implanted drug pumps and neurostimulators, which deliver electrical pulses to the brain, could be more vulnerable to such attacks in future as they increasingly have wireless capabilities built in.
William H. Maisel, M.D., M.P.H.
email : wmaisel [at] bidmc.harvard [dot] edu
Dr. William H. Maisel is director of the Medical Device Safety Institute at Beth Israel Deaconess Medical Center and assistant professor of medicine at Harvard Medical School. He has an active cardiology practice and also directs the Pacemaker and Defibrillator Service at Beth Israel Deaconess Medical Center. His research interests involve the safe and effective use of medical devices, and he has published extensively on the safety of pacemakers and defibrillators, drug-eluting stents and other cardiovascular devices. He received his M.D. from Cornell University, his MPH from the Harvard School of Public Health, and completed his internal medicine and cardiovascular training at Brigham and Women’s Hospital. Maisel is an FDA consultant and former Chairman of the FDA’s Circulatory System Medical Device Advisory Panel.
A better method (Score:5, Interesting) / by yamamushi
“The article details how the researchers had to be within 2 inches of the pacemaker, and several thousands of dollars worth of equipment. I suspect there is an easier way to deactivate a pacemaker, find out what frequency they operate at. I’ve got an FM radio blocker, that is basically just a 100mhz oscillator, a potentiometer, and a battery. It works by canceling out a given frequency, thus letting me silence my neighbors stereo from 50ft away. I know the technique works for the 2.4ghz band, for blocking out wireless phone signals and whatnot. I suppose finding an oscillator in the high ghz range would suffice for ‘killing’ a pacemaker.”
Re:Ah, the smart-arse non-sequiturs (Score:5, Informative) / by I_Love_Pocky!
“I appreciate your enthusiasm, but thank god you aren’t designing these devices. I work for one of the competitors to Medtronic (the company whose devices were studied). We have encryption in our RF communication. We DO take security into consideration, but there are trade offs that have to be considered. Battery life is generally the most important consideration. Every time surgery needs to be performed to physically access the device (usually because of a depleted battery) there is a risk of complications. These aren’t insignificant risks either. Keep in mind the people getting these devices have health problems of some sort or they wouldn’t be getting them. With that in mind, security solutions in this domain have to be very well thought out so as to avoid draining the battery significantly. So please, don’t for a second presume that we are a bunch of monkeys sitting around on our asses ignoring real concerns. The real issue is that there are far more concerns than you are aware of. We do evaluate these concerns and try to build the best devices possible with the fewest compromises.”
Hacking the VP (Score:5, Funny) / by tobiasly
“Yes, that’s a very real concern that the secret service has been terrified of for years. Most people know that Cheney has a pacemaker, but the real secret is that they forgot to turn off SSID broadcast and its password is ‘Linksys’.”
CYBORGS NO LONGER SAFE
Will the bionic man have virus protection?
by John Borland / August 09, 2007
Gadi Evron, a prominent Israeli network security expert, has some questions about a future when we let software into our bionic, cybernetic bodies. Say we really do start modifying ourselves, he asked a late-night crowd here at the Chaos Computer Camp. Presumably, that means a bit of hardware, a bit of software. And as any security consultant knows, every piece of software ever written by an actual human is riddled with flaws and bugs, which translate all too easily into security flaws. Suddenly a whole slew of problems familiar to the network security world appears. If someone finds a bug in a bionic body part, what are the ethical issues? Should it be reported widely? Just to the company producing the component? Hidden, or sold for profit? And what about patches? Will people line up at schools for heart-implant fixes, like for today’s flu shot? Will viruses distribute false patches, and infect body parts? Or an apocalyptic scenario: what if our cybernetic tools synchronized themselves with Outlook? With wireless connections, viruses could even spread, well, through the air. What kind of intellectual property issues could arise? Pirates, crackers, ransom-artists, virus-writers, all focused on the body instead of the laptop. In part, Evron uses these analogies to demonstrate issues in computer security to non-experts, for whom the idea of body-hacking might help illustrate problems. But it’s also a daunting take on strains in biology and genetics that are only barely still science fiction. A little of the computer security mindset would be a healthy thing as we enter the uncharted territory of body modification, he argues. “Biology needs to undergo a computer science infusion,” Evron said. “We need to reverse engineer genetics.”
SOFTWARE RADIO ATTACKS, and ZERO-POWER DEFENSES
A Heart Device Is Found Vulnerable to Hacker Attacks
by Barnaby J. Feder / March 12, 2008
To the long list of objects vulnerable to attack by computer hackers, add the human heart. The threat seems largely theoretical. But a team of computer security researchers plans to report Wednesday that it had been able to gain wireless access to a combination heart defibrillator and pacemaker. They were able to reprogram it to shut down and to deliver jolts of electricity that would potentially be fatal — if the device had been in a person. In this case, the researchers were hacking into a device in a laboratory. The researchers said they had also been able to glean personal patient data by eavesdropping on signals from the tiny wireless radio that Medtronic, the device’s maker, had embedded in the implant as a way to let doctors monitor and adjust it without surgery. The report, to be published at www.secure-medicine.org, makes clear that the hundreds of thousands of people in this country with implanted defibrillators or pacemakers to regulate their damaged hearts — they include Vice President Dick Cheney — have no need yet to fear hackers. The experiment required more than $30,000 worth of lab equipment and a sustained effort by a team of specialists from the University of Washington and the University of Massachusetts to interpret the data gathered from the implant’s signals. And the device the researchers tested, a combination defibrillator and pacemaker called the Maximo, was placed within two inches of the test gear.
Defibrillators shock hearts that are beating chaotically and dangerously back into normal rhythms. Pacemakers use gentle stimulation to slow or speed up the heart. Federal regulators said no security breaches of such medical implants had ever been reported to them. The researchers said they chose Medtronic’s Maximo because they considered the device typical of many implants with wireless communications features. Radios have been used in implants for decades to enable doctors to test them during office visits. But device makers have begun designing them to connect to the Internet, which allows doctors to monitor patients from remote locations. The researchers said the test results suggested that too little attention was being paid to security in the growing number of medical implants being equipped with communications capabilities. “The risks to patients now are very low, but I worry that they could increase in the future,” said Tadayoshi Kohno, a lead researcher on the project at the University of Washington, who has studied vulnerability to hacking of networked computers and voting machines. The paper summarizing the research is called “Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses.” The last part refers to defensive possibilities the researchers outlined that they say would enhance security without draining an implant’s battery. They include methods for warning a patient of tampering or requiring that an incoming signal be authenticated, using energy harvested from the incoming signals. But Mr. Kohno and Kevin Fu, who led the University of Massachusetts arm of the project, said they had not tried to test the defenses in an actual implant or to learn if anyone trying to use them might run afoul of existing patent claims.
Another participant in the project, Dr. William H. Maisel, a cardiologist who is director of the Medical Device Safety Institute at the Beth Israel Deaconess Medical Center in Boston, said that the results had been shared last month with the F.D.A., but not with Medtronic. “We feel this is an industry-wide issue best handled by the F.D.A.,” Dr. Maisel said.
The F.D.A. had already begun stepping up scrutiny of radio devices in implants. But the agency’s focus has been primarily on whether unintentional interference from other equipment might compromise the safety or reliability of the radio-equipped medical implants. In a document published in January, the agency included security in a list of concerns about wireless technology that device makers needed to address. Medtronic, the industry leader in cardiac regulating implants, said Tuesday that it welcomed the chance to look at security issues with doctors, regulators and researchers, adding that it had never encountered illegal or unauthorized hacking of its devices that have telemetry, or wireless control, capabilities. “To our knowledge there has not been a single reported incident of such an event in more than 30 years of device telemetry use, which includes millions of implants worldwide,” a Medtronic spokesman, Robert Clark, said. Mr. Clark added that newer implants with longer transmission ranges than Maximo also had enhanced security. Boston Scientific, whose Guidant division ranks second behind Medtronic, said its implants “incorporate encryption and security technologies designed to mitigate these risks.” St. Jude Medical, the third major defibrillator company, said it used “proprietary techniques” to protect the security of its implants and had not heard of any unauthorized or illegal manipulation of them. Dr. Maisel urged that patients not be alarmed by the discussion of security flaws. “Patients who have the devices are far better off having these devices than not having them,” he said. “If I needed a defibrillator, I’d ask for one with wireless technology.”
ASSASSINATION by BLUETOOTH?
Studies Show Reliability, Failure Rates for Cardiac Devices
Pacemakers and implantable cardioverter-defibrillators (ICDs) are among the most clinically important and technically complex medical devices in use today, but several recent high-profile device malfunctions have called into question their safety and reliability. Two reports in the April 26, 2006 issue of The Journal of the American Medical Association (JAMA) offer new insights into pacemaker and ICD performance by providing the most comprehensive analysis of malfunction data available to date. “Despite millions of pacemaker and ICD implants worldwide and their increasingly frequent use, surprisingly little is known about device reliability,” says the studies’ lead author William H. Maisel, MD, MPH, director of the Pacemaker and Device Service at Beth Israel Deaconess Medical Center (BIDMC) and Assistant Professor of Medicine at Harvard Medical School. The devices work to stabilize abnormal heart rhythms, pacemakers by treating hearts that beat too slowly and ICDs by treating heart rhythms that have become dangerously fast.
In the first study, which Maisel performed with colleagues at the U.S. Food and Drug Administration (FDA), he found that, between the years of 1990 and 2002, there were 2.25 million pacemakers and almost 416,000 ICDs implanted in the U.S. During this same time period, 17,323 devices (8,834 pacemakers and 8,489 ICDs) were surgically removed from patients due to a confirmed device malfunction. (Battery, capacitor and electrical abnormalities accounted for approximately half of the device failures.) In addition, 61 patient deaths were attributed to pacemaker or ICD malfunction during this 13-year period. “Overall, the annual ICD malfunction replacement rate of 20.7 per 1,000 implants was significantly higher than the pacemaker malfunction replacement rate of 4.6 per 1,000 implants,” notes Maisel. “While pacemakers became increasingly reliable during the study period, a marked increase in the ICD malfunction replacement rate was observed between 1998 and 2002, suggesting that ICDs may have become less reliable during this time period.”
In the second study (conducted by Maisel on non-FDA data), an analysis of international pacemaker and ICD registries involving hundreds of thousands of pacemakers and thousands of ICDs, the overall findings proved very similar to those reported in the analysis of the FDA data. “Specifically, in this second report, pacemaker reliability improved markedly during the study period while the ICD malfunction rate trended down during the first half of the 1990s, reaching its lowest level in the mid-to-late 1990s. And, once again, the ICD malfunction rate increased substantially between the years of 1998 and 2002.” But, he adds, this analysis showed a substantial improvement in ICD reliability in 2003 and 2004, years that were not included in the FDA analysis. “Pacemakers and implantable defibrillators are amazing devices that have saved many lives,” says Maisel. “But like any other complex device, they can and do malfunction. It appears that as ICDs became increasingly sophisticated [in the latter 1990s] there was an associated decrease in device reliability. Fortunately, the most recent defibrillator malfunction rates show a reassuring trend.”
Maisel stresses that patients do not need to take any action as a result of these studies, and that routine pacemaker and defibrillator checks remain the best way to monitor device performance in individual patients. “It’s important to remember that during the time periods we analyzed, there were tens of thousands of lives saved as a result of these devices,” he adds. “The chance of a person’s life being saved by a pacemaker or ICD is about 1,000 times greater than the chance of the device failing when it’s needed.” The analysis of FDA data (first study) was funded by the U.S. Food and Drug Administration, for which Maisel serves as a paid consultant and Chair of the FDA Circulatory System Medical Devices Advisory Panel. Study coauthors included Megan Moynahan, MS, Bram D. Zuckerman, MD, Thomas P. Gross, MD, MPH, Oscar H. Tovar, MD, Donna-Bea Tillman, PhD, MPA, and Daniel B. Schultz, MD, all of the FDA’s Center for Devices and Radiological Health, Rockville, MD. The analysis of registry data (second study) was conducted independently by Dr. Maisel without FDA financial support.
New data finds defibrillator recalls to be common / May 19, 2006
Data presented May 19, 2006 at the Heart Rhythm Society’s 27th Annual Scientific Sessions finds that during a 10-year study period more than one in five automated external defibrillators (AEDs) were recalled due to potential malfunction. The findings represent some of the first data available on the safety and reliability of the devices, which are used to resuscitate victims of cardiac arrest. “AEDs provide automated heart rhythm analysis, voice commands, and shock delivery and can be used by individuals with minimal training or experience,” explains the study’s lead author, William H. Maisel, M.D., M.P.H., director of the Pacemaker and Device Service at Beth Israel Deaconess Medical Center (BIDMC) and assistant professor of medicine at Harvard Medical School. “As a result, widespread installation of AEDs has occurred in recent years.” In fact, he adds, the annual number of devices distributed between 1996 and 2005 increased almost 10-fold, from fewer than 20,000 to nearly 200,000. “Public places such as airports, sports arenas and casinos are now routinely outfitted with AEDs and the U.S. Food and Drug Administration [FDA] has approved certain AED models for home use,” he says. “Unfortunately, as AED use has increased, so too has the number of recalled devices.” Maisel and his colleagues reviewed weekly FDA enforcement reports to identify recalls and safety alerts (collectively referred to as “advisories”) affecting AEDs. Enforcement reports are issued by the FDA to notify the public about potentially defective medical devices which may not function as intended. During the study period – beginning in 1996 and ending in 2005 – the authors found that the FDA issued 52 advisories involving either AEDs or critical AED accessories, affecting a total of 385,922 devices. “The results showed that during this 10-year study period, more than one in five AEDs were recalled due to a potential malfunction,” says Maisel.
Security researchers to unveil pacemaker, medical implant hacks
by Chris Soghoian / March 3, 2008
A team of respected security researchers known for their work hacking RFID radio chips has turned its attention to pacemakers and implantable cardiac defibrillators. The researchers will present their paper, “Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses,” during the “Attacks” session of the 2008 IEEE Symposium on Security and Privacy, one of the most prestigious conferences in the computer security field. The authors of the paper are listed as: Shane S. Clark, Benessa Defend, Daniel Halperin, Thomas S. Heydt-Benjamin, Will Morgan, Benjamin Ransford, Kevin Fu, Tadayoshi Kohno, William H. Maisel. Kevin Fu, an assistant professor at the University of Massachusetts Amherst, and the two graduate students who worked on the project with him gained significant attention for their past work attacking RFID-based credit cards and RFID (radio frequency identification) transit payment tokens. Kohno, a professor at the University of Washington, was the subject of worldwide media coverage for his work exposing flaws in Diebold voting machines back in 2003, and later for finding major privacy flaws in the RFID-based Nike+iPod Sport Kit.
When contacted by e-mail, Kohno told me that he and his colleagues could not currently comment on their latest project. Without the help of the authors, it is difficult to predict the contents of their research paper. However, it is possible to piece together other bits of information to try to learn more about the project. A previous research paper published by the same team noted that over 250,000 implantable cardiac defibrillators are installed in patients each year. An increasingly large percentage of these can be remotely controlled and monitored by specialized wireless devices in the patient’s home. The devices can be accessed at ranges of up to 5 meters. By reading between the lines (millions of implanted medical devices, able to administer electrical shocks to the heart, can be controlled remotely from distances of up to 5 meters, designed by people who know nothing about security), it is easy to predict the gigantic media storm that this paper will cause when the full details (and a YouTube video of a demo, no doubt) are made public. Just remember where you saw it first.
Q: What are implantable medical devices (IMDs)?
A: Implantable Medical Devices (IMDs) monitor and treat physiological conditions within the body, and can help patients lead normal and healthy lives. There are many different kinds of IMDs, including pacemakers, implantable cardiac defibrillators (ICDs), drug delivery systems, neurostimulators, swallowable camera capsules, and cochlear implants. These devices can help manage a broad range of ailments, including: cardiac arrhythmia; diabetes; chronic pain; Parkinson’s disease; obsessive compulsive disorder; depression; epilepsy; obesity; incontinence; and hearing loss. IMDs’ pervasiveness continues to swell, with approximately twenty-five million U.S. citizens currently benefiting from therapeutic implants.
Q: What are pacemakers and implantable cardiac defibrillators (ICDs)?
A: Pacemakers and ICDs are both designed to treat abnormal heart conditions. About the size of a pager, each device is connected to the heart via electrodes and continuously monitors the heart rhythm. Pacemakers automatically deliver low energy signals to the heart to cause the heart to beat when the heart rate slows. Modern ICDs include pacemaker functions, but can also deliver high voltage therapy to the heart muscle to shock dangerously fast heart rhythms back to normal. Pacemakers and ICDs have saved innumerable lives, and there are millions of pacemaker and ICD patients in the U.S. today.
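The division of labour described above — low-energy pacing for slow rhythms, high-voltage shocks for dangerously fast ones — can be caricatured as a simple decision rule. The sketch below is purely illustrative: the thresholds are invented, and real devices use programmable, per-patient parameters and far more sophisticated rhythm discrimination.

```python
def icd_therapy(heart_rate_bpm: float,
                brady_threshold: float = 60.0,
                tachy_threshold: float = 180.0) -> str:
    """Toy model of the therapy decision in a modern ICD, which
    includes pacemaker functions. Thresholds are hypothetical."""
    if heart_rate_bpm < brady_threshold:
        return "pace"     # low-energy pacing for a heart beating too slowly
    if heart_rate_bpm > tachy_threshold:
        return "shock"    # high-voltage therapy for a dangerously fast rhythm
    return "monitor"      # normal rhythm: deliver no therapy

print(icd_therapy(45), icd_therapy(72), icd_therapy(210))
```

The point of the caricature is that therapy settings are just stored parameters; the research described here showed that, on the device studied, an attacker could rewrite them over the air.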
Q: Where do you see the technologies for these devices heading in the future?
A: The technologies underlying implantable medical devices are rapidly evolving, and it’s impossible to predict exactly what such devices will be like in 5, 10, or 20 years. It is clear, however, that future devices may rely more heavily on wireless communications capabilities and advanced computation. IMDs may communicate with other devices in their environment, thereby enabling better care through telemedicine and remote patient health monitoring. There may also be multiple, inter-operating devices within a patient’s body. Given the anticipated evolution in IMD technologies, we believe that now is the right and critical time to focus on protecting the security and privacy of future implantable medical devices.
Q: Why is it important to study the security and privacy properties of existing implantable medical devices?
A: Despite recent large advances in IMD technologies, we still have little understanding of how medical device security and privacy interact with and affect medical safety and treatment efficacy. Established methods for providing safety and preventing unintentional accidents do not necessarily prevent intentional failures and other security and privacy problems. Balancing security and privacy with safety and efficacy will, however, become increasingly important as IMD technologies continue to evolve. Prior to our work, we were unaware of any rigorous public scientific investigation into the observable characteristics of a real, common commercial IMD. Such a study is necessary in order to provide a foundation for understanding and addressing the security, privacy, safety, and efficacy goals of future implantable devices. Our research provides such a study. The overall goals of our research were to: (1) assess the security and privacy properties of a real, common commercial IMD; (2) propose solutions to the identified weaknesses; (3) encourage the development of more robust security and privacy features for IMDs; and (4) improve the privacy and safety of IMDs for the millions of patients who enjoy their benefits.
Q: Can you summarize your findings with respect to the security and privacy of a common implantable cardiac defibrillator (ICD)?
A: As part of our research we evaluated the security and privacy properties of a common ICD. We investigated whether a malicious party could create his or her own equipment capable of wirelessly communicating with this ICD. Using our own equipment (an antenna, radio hardware, and a PC), we found that someone could violate the privacy of patient information and medical telemetry. The ICD wirelessly transmits patient information and telemetry without observable encryption. The adversary’s computer could intercept wireless signals from the ICD and learn information including: the patient’s name, the patient’s medical history, the patient’s date of birth, and so on. Using the same equipment, we found that someone could also turn off or modify therapy settings stored on the ICD. Such a person could render the ICD incapable of responding to dangerous cardiac events. A malicious person could also make the ICD deliver a shock that could induce ventricular fibrillation, a potentially lethal arrhythmia. For all our experiments our antenna, radio hardware, and PC were near the ICD. Our experiments were conducted in a computer laboratory and utilized simulated patient data. We did not experiment with extending the distance between the antenna and the ICD.
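The privacy risk described above follows directly from the absence of encryption: a passive listener who works out the frame layout can read everything. The Python sketch below illustrates the principle with a made-up, drastically simplified frame format; the researchers deliberately withheld the real protocol details, so every field, size, and value here is hypothetical.

```python
import struct

# Hypothetical plaintext frame layout, for illustration only:
# 2-byte device model, 4-byte serial number, 16-byte patient name,
# 1-byte heart rate in bpm. The real over-the-air format was not published.
FRAME_FORMAT = ">H I 16s B"

def parse_telemetry_frame(raw: bytes) -> dict:
    """Decode one captured frame. No key or handshake is needed,
    because nothing in this (hypothetical) frame is encrypted."""
    model, serial, name, rate = struct.unpack(FRAME_FORMAT, raw)
    return {
        "model": model,
        "serial": serial,
        "patient_name": name.rstrip(b"\x00").decode("ascii"),
        "heart_rate_bpm": rate,
    }

# An eavesdropper who has recorded this frame off the air learns the
# patient's identity and live telemetry immediately.
captured = struct.pack(FRAME_FORMAT, 0x0102, 12345678,
                       b"JANE DOE".ljust(16, b"\x00"), 72)
print(parse_telemetry_frame(captured))
```

With encryption, the same capture would be useless without the key; without it, interception and decoding collapse into a single parsing step.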
Q: Do other implantable medical devices have similar issues?
A: We only studied a single implantable medical device. We currently have no reason to believe that any other implantable devices are any more or less secure or private.
Q: Can you summarize your approaches for defending against the security and privacy issues that you raise?
A: Our previous research (IEEE Pervasive Computing, January-March 2008) highlights a fundamental tension between (1) security and privacy for IMDs and (2) safety and effectiveness. Another goal we tackle in our research is the development of technological mechanisms for providing a balance between these properties. We propose three approaches for providing this balance, and we experiment with prototype implementations of our approaches. Our approaches build on the WISP technology from Intel Research. Some IMDs, like pacemakers and ICDs, have non-replaceable batteries. When the batteries on these IMDs become low, the entire IMDs often need to be replaced. From a safety perspective, it is therefore critical to protect the battery life on these IMDs. Toward balancing security and privacy with safety and effectiveness, all three of our approaches use zero power: they do not rely on the IMD’s battery but rather harvest power from external radio frequency (RF) signals. Our first zero-power approach utilizes an audible alert to warn patients when an unauthorized party attempts to wirelessly communicate with their IMD. Our second approach shows that it is possible to implement cryptographic (secure) authentication schemes using RF power harvesting. Our third zero-power approach presents a new method for communicating cryptographic keys (“sophisticated passwords”) in a way that humans can physically detect (hear or feel). The latter approach allows the patient to seamlessly detect when a third party tries to communicate with their IMD. We do not claim that our defenses are final designs that IMD manufacturers should immediately incorporate into commercial IMDs. Rather, we believe that our research helps establish a potential foundation upon which the community can innovate other new defensive mechanisms for future IMD designs.
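The second approach above — cryptographically secure authentication — can be illustrated with a conventional challenge-response sketch. This is only a sketch of the general idea: the shared key, message sizes, and choice of HMAC-SHA256 are assumptions, and the paper’s actual contribution is running such logic on harvested RF power (on WISP hardware), which a desktop Python example cannot model.

```python
import hashlib
import hmac
import os

# Hypothetical symmetric key provisioned to both the implant and
# authorised programmers; key management is out of scope for this sketch.
SHARED_KEY = b"implant-demo-key"

def implant_challenge() -> bytes:
    """Implant side: emit a fresh random nonce. In a zero-power design,
    this logic would run on energy harvested from the incoming RF signal."""
    return os.urandom(16)

def programmer_response(key: bytes, nonce: bytes) -> bytes:
    """Programmer side: prove knowledge of the key by MACing the nonce.
    An attacker without the key cannot forge a valid response."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

def implant_verify(key: bytes, nonce: bytes, response: bytes) -> bool:
    """Implant side: accept the session only if the MAC checks out,
    using a constant-time comparison."""
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# A legitimate programmer session:
nonce = implant_challenge()
proof = programmer_response(SHARED_KEY, nonce)
print("session accepted:", implant_verify(SHARED_KEY, nonce, proof))  # prints: session accepted: True
```

Because each challenge is a fresh nonce, replaying an old recorded response fails, which is exactly the property an over-the-air reprogramming channel needs.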
Q: Where will these results be published?
A: Our results will be published at the IEEE Symposium on Security and Privacy in May 2008. The IEEE is a leading professional association for the advancement of technology. The IEEE Symposium on Security and Privacy is one of the top scholarly conferences in the computer security research community. This year the conference accepted 28 out of 249 submissions (11.2%). All papers were rigorously peer-reviewed by at least three members of the IEEE Security and Privacy committee.
Q: Should patients be concerned?
A: We strongly believe that nothing in our report should deter patients from receiving these devices if recommended by their physician. The implantable cardiac defibrillator is a proven, life-saving technology. We believe that the risk to patients is low and that patients should not be alarmed. We do not know of a single case where an IMD patient has ever been harmed by a malicious security attack. To carry out the attacks we discuss in our paper would require: malicious intent, technical sophistication, and the ability to place electronic equipment close to the patient. Our goal in performing this study is to improve the security, privacy, safety, and effectiveness of future IMDs.
Q: What have you done to ensure that these findings will not be used for malicious intent?
A: We specifically and purposefully omitted methodologic details from our paper, thereby preventing our findings from being used for anything other than improving patient security and privacy.
COMMENTS: “DENIAL of LIFE ATTACK”
by Kevin McMurtrie / posted 12th March 2008
“Their hacking equipment cost $30,000 because of that fancy oscilloscope shown. It wouldn’t surprise me if it cost $29,000. The paper states the frequency and encoding protocol. Hackers don’t need the fancy oscilloscope now. Taking into account what a hacker already owns, that cuts the cost down to maybe $50 for a short-range model. Boosting the range to a few city blocks would require maybe another $100 in parts. I bet Cheney goes in for an operation soon.”
RE: Scary Stuff
by Anonymous Coward / posted 12th March 2008
Denial of Life attack: What protection is there against *accidental* re-programming or DoS?
by Keith T / posted 12th March 2008
“Regarding the 30k price tag, it is a radio transceiver and a computer. Medtronic paid $30k for theirs. That doesn’t mean someone couldn’t put something together for less. In fact, my main worry would be someone accidentally re-programming or operating the IMD. What protection is there against that? Could it be done by a hacker with a laptop and a standard wireless NIC card? Would the wireless NIC card need to be modified? Could it be done with random noise from a faulty electric motor? And how can we be assured nobody has ever died due to their IMD being intentionally re-programmed? If the device was intentionally re-programmed, would the attacker revert the programming once the victim had died? Would anyone even check the state of the program in the IMD?”
Needs to be close by
by Herby / posted 12th March 2008
“Having done some work for one of these companies (it was a few years ago!) my understanding is that the “controller” (actually a laptop PC) needs to be in close proximity to the “subject”. They usually use “induction”, not radio frequency, to couple to the implanted device (at least that is what I saw). Yes, security is not something the device vendors, or the FDA, think about. Lots of medical devices have “unpatched” Windows environments because the vendors haven’t gone through the process of verification with the latest Windows patches. Most of the time these computers are not connected to a network (they usually don’t need to be!), but sometimes they do get connected, and then the malware arrives with evil intentions. On the ICD I did some work on they used a 65C02 processor, which they needed to get certified outside the normal supply chain (look at any datasheet for ICs and it usually says “not for life critical…”). Then they need to get ALL the software to pass FDA rules (lots of time and $$$). By the time everything is done, the development cost is HUGE. Then they deploy the stuff, and the added cost of a laptop per implantable device is “small potatoes”, so they just build it into the kit. In my book the big problem is the controlling box (laptop) used to program the implant to do its thing (parameters per subject). As usual, security isn’t a big consideration since most of the development is in an isolated environment. It was interesting how the company “solved” problems in the test environment. It ended up being 4 (yes, four) Windows boxes (it was W95) and a logic analyzer to test the ICD, which had a 65C02 processor (same as the Apple II). Need something, add more hardware! In order to get the timing for the network between the 4 CPUs right, they even incorporated a relay to cut off the network from outside the 4 CPUs. Oh, well. It was Windows, they didn’t even try anything else.”
A lot cheaper than 30K
by Anonymous Coward / posted 13th March 2008
“That kit may have cost 30K, but I am betting it can be done for under 1K, probably about $400. Well, it is a dog eat dog world; I wouldn’t put it past some young exec to put 1 and 1 together, and see that getting to the top may involve a bit of heartbreak. It used to be the case that the medical world was off limits to hackers, a sort of unwritten agreement, but with governments using the medical world to build the ID databases, that has sort of been rescinded. Bit like using the Red Cross for spying missions, they are now targets because of it. I would imagine that EMP devices would be on the up as well; there I would blame speed cameras, people are taking angle grinders to them, how much easier would it be to just zap them. And of course EMP could be used against a slew of modern security surveillance devices, with the side effect of knocking out the cyborgs with unprotected pacemakers.”
Defcon: Excuse me while I turn off your pacemaker
by Dean Takahashi / August 8th, 2008
The Defcon conference is the wild and woolly version of Black Hat for the unwashed masses of hackers. It always has its share of unusual hacks. The oddest so far is a collaborative academic effort where medical device security researchers have figured out how to turn off someone’s pacemaker via remote control. They previously disclosed the paper at a conference in May. But the larger point of the vulnerability of all wirelessly controlled medical devices remains a hot topic here at the show in Las Vegas. Let’s not have a collective heart attack, at least not yet. The people on the right side of the security fence are the ones who have figured this out so far. But this has very serious implications for the 2.6 million people who had pacemakers installed from 1990 to 2002 (the stats available from the researchers). It also presents product liability problems for the five companies that make pacemakers.
Kevin Fu, an associate professor at the University of Massachusetts at Amherst and director of the Medical Device Security Center, said that his team and researchers at the University of Washington spent two years working on the challenge. Fu presented at Black Hat while Daniel Halperin, a graduate student at the University of Washington, presented today at Defcon. Getting access to a pacemaker wasn’t easy. Fu’s team had to analyze and understand pacemakers for which there was no available documentation. Fu asked the medical device makers, explaining his cause fully, but didn’t get any help. William H. Maisel, a doctor at Beth Israel Deaconess Hospital and Harvard Medical School, granted Fu access for the project. Fu received an old pacemaker as the doctor installed a new one in a patient. The team had to use complicated procedures to take apart the pacemaker and reverse engineer its processes. Halperin said that the devices have a built-in test mechanism which turns out to be a bug that can be exploited by hackers. There is no cryptographic key used to secure the wireless communication between the control device and the pacemaker.
A computer acts as a control mechanism for programming the pacemaker so that it can be set to deal with a patient’s particular defibrillation needs. Pacemakers administer small shocks to the heart to restore a regular heartbeat. The devices have the ability to induce a fatal shock to a heart. Fu and Halperin said they used a cheap $1,000 system to mimic the control mechanism. It included a software radio, GNU Radio software, and other electronics. They could use that to eavesdrop on private data such as the identity of the patient, the doctor, the diagnosis, and the pacemaker instructions. They figured out how to control the pacemaker with their device. “You can induce the test mode, drain the device battery, and turn off therapies,” Halperin said.
Translation: you can kill the patient. Fu said that he didn’t try the attack on other brands of pacemakers because he just needed to prove the academic point. Halperin said, “This is something that academics can do now. We have to do something before the ability to mount attacks becomes easier.” The disclosure at Defcon wasn’t particularly detailed, though the paper has all of the information on the hack. The crowd here is mostly male, young, with plenty of shaved heads, tattoos and long hair. The conference is a cash-only event where no pictures are allowed without consent. It draws thousands more people from a much wider net of security researchers and hackers than the more exclusive Black Hat. Similar wireless control mechanisms are used for administering drugs to a patient or other medical devices. Clearly, the medical device companies have to start working on more secure devices. Other hackers have figured out how to induce epileptic seizures in people sensitive to light conditions. The longer I stay at the security conferences here in Las Vegas, the scarier it gets.
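The core weakness described above — commands accepted with no cryptographic key — can be sketched with a toy model. The device class, command frame, and byte layout below are invented for illustration; the point is only that an unauthenticated receiver cannot distinguish a replayed recording from a legitimate programmer.

```python
class UnsecuredIMD:
    """Toy model of a device that accepts any well-formed command frame."""

    def __init__(self):
        self.therapies_enabled = True

    def receive(self, frame: bytes) -> bool:
        # No key, no nonce: any frame with the right prefix is obeyed.
        if not frame.startswith(b"CMD:"):
            return False
        if frame == b"CMD:DISABLE_THERAPY":
            self.therapies_enabled = False
        return True

# An eavesdropper with a software radio records a legitimate transmission...
recorded_frame = b"CMD:DISABLE_THERAPY"

# ...and later replays the raw bytes. The device has no way to tell.
device = UnsecuredIMD()
device.receive(recorded_frame)
print(device.therapies_enabled)  # → False
```

Adding a per-session challenge, as the researchers’ authentication defense proposes, would make a recorded frame worthless on replay.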
email : kevinfu [at] cs [dot] umass [dot] edu
email : dhalperi [at] cs [dot] washington [dot] edu
“Our study analyzes the security and privacy properties of an implantable cardioverter defibrillator (ICD). Introduced to the U.S. market in 2003, this model of ICD includes pacemaker technology and is designed to communicate wirelessly with a nearby external programmer in the 175 kHz frequency range. After partially reverse-engineering the ICD’s communications protocol with an oscilloscope and a software radio, we implemented several software radio-based attacks that could compromise patient safety and patient privacy. Motivated by our desire to improve patient safety, and mindful of conventional trade-offs between security and power consumption for resource-constrained devices, we introduce three new zero-power defenses based on RF power harvesting. Two of these defenses are human-centric, bringing patients into the loop with respect to the security and privacy of their implantable medical devices (IMDs). Our contributions provide a scientific baseline for understanding the potential security and privacy risks of current and future IMDs, and introduce human-perceptible and zero-power mitigation techniques that address those risks. To the best of our knowledge, this paper is the first in our community to use general-purpose software radios to analyze and attack previously unknown radio communications protocols.”
SEE ALSO : DEFCON / BLACKHAT
PICTURES of DEFCON’S TEMPORARY NETWORK OPERATIONS CENTER
“Over 9,000 hackers, freaks, feds and geeks are gathered in Las Vegas for Defcon, the world’s largest computer security convention. The temporary wireless network that serves the Defcon attendees is the most hostile on the planet. Defcon’s network is put together and run by a group of dedicated volunteers, known as Goons. These red badge-sporting Network Goons work hard to make the network robust enough to handle the endless stream of dangerous traffic. Threat Level got the first ever photo tour of the Defcon Network Operations Center. Here are the photos for your viewing pleasure.”
DEFCON 16 ASSESSMENT
Defcon ends with researchers muzzled, viruses written
by Elinor Mills / August 10, 2008
The Defcon hacker conference ended its 16th year on Sunday, sending about 8,000 attendees home from a weekend of virus writing, discussion of Internet attacks, and general debauchery. The highlight was most definitely the restraining order which prevented three MIT students from presenting their research on how to hack the Boston subway system. The students attended the event and even gave a news conference after the order came down on Saturday, but did not present their highly anticipated talk. Instead, journalist and security expert Brenno de Winter took their empty spot and discussed how the cards used in the transit systems in the Netherlands and London can be hacked just like the ones used in Boston. Both systems, and many around the world, use the Mifare Classic chip technology, whose cryptography was cracked by researchers last year. “I was advised by several lawyers not to go into details of the Mifare Classic, but anybody who has access to Google…,” de Winter said. Breaking the rules is always a theme at Defcon, but while irreverence for established corporate and government protocols is condoned if not exactly encouraged, breaking Defcon rules definitely has its consequences. Defcon officials said they were considering banning film crews from future events after ejecting a team from the G4 cable network on Saturday for allegedly videotaping a crowd. Photographers and videographers are required to get permission to shoot anyone, even from behind, and are forbidden from shooting crowds.
There was a report that police were called in to investigate a Windows-based kiosk that was hacked to display pornographic images in the lobby. And the usual rowdiness and late-night drinking were a nightly, if not daily, activity. However, things did not seem to reach the level of tomfoolery they did in the early and mid-1990s, when elevators were hacked and cement was poured down toilets. Of course, many of the script kiddies from that era are now married with children. There were, of course, a range of sessions, including ones on evaluating the risks of “good viruses,” hijacking outdoor billboard networks, and compromising Windows-based Internet kiosks. Members of SecureState, a company that does penetration testing of corporate networks, gave a live demo in one session of an automated attack on a Microsoft SQL Server-based computer that left the machine vulnerable to attackers installing viruses and other malware. The team used new tools they are offering for download, SA Exploiter and Fast-Track.
One of the more controversial events was “Race to Zero,” in which teams modified samples of viruses and tested them against antivirus software. Four teams managed to complete all the levels and get through the antivirus software. There were less technical contests as well. “Mike” from Chicago won $3,000 for spending 30 straight hours listening to pitches and marketing buzz from security company Configuresoft and correctly answering questions on periodic quizzes on the presentations. After the announcement, he jumped out of his seat with his arms in the air. Asked how he felt, Mike, who declined to give a last name, said he “felt smelly.” The contest, called “Buzzword Survivor,” was not without scandal. Several contestants claimed–and submitted a cell phone photo as evidence to organizers–that one of the contestants had fallen asleep at one point. However, he was allowed to remain in the contest and made it to the very end with all the others, winning $200. The second prize was $1,000. Gartner analyst Paul Proctor came up with the idea on a whim. It was originally intended to have 10 contestants competing for 36 hours for a $10,000 prize, but the prize was reduced when only one sponsor stepped up. The contestants had 10-minute breaks every hour, but otherwise were in their seats listening to detailed talks about the company, its products, and the industry. “We’ve submitted them to pain,” Andrew Bird, a Configuresoft vice president who served as MC at the end of the contest, said mischievously. “We played recorded Webinars at 4 a.m.”
Defcon founder Jeff Moss aka “Dark Tangent” discusses ethics of hacking + disclosure issues that provoke debate, often lawsuits, at the event
One of the more popular gadgets from the previous two Defcons were the hackable convention badges. This year, we convinced Defcon’s founder Jeff Moss, aka Dark Tangent, and badge-designer Joe “Kingpin” Grand, to give Wired.com an exclusive sneak preview of the Defcon 16 badge. Keep in mind that the badge in the photo is a prototype; the actual badges will be a different color, won’t have the USB and debug ports soldered on, nor include an SD card (so bring one, seriously).
Threat Level: Defcon 15’s badge was exponentially more complicated and functional than Defcon 14’s badge. How does this year’s badge compare to the DC 15 badge?
Joe Grand: Last year’s badge was sort of an over-engineered project. We wanted to do something that was cool and different than the year before, but it ended up getting more complicated because of design problems along the way … I was really aiming to have something a little less complicated and a little less over-engineered than Defcon 15, but still more complicated than Defcon 14 and have enough hackable features to make it interesting enough for people. It’s simpler than last year but also more powerful.
TL: Why did you choose a Freescale microprocessor, and why did you choose the MC9S08JM60 over the MC9S08QG8?
JG: The guys at Freescale have been super supportive throughout both the Defcon 15 and this Defcon 16 badge. One of their things is that they have a lot of engineers who truly love engineering and they love coming up with new products that use their technologies. They understand that it is a hacker gathering and the hackers are ultimately the ones who are creating the cutting-edge products and they’re messing around with technology and doing things that haven’t been done before. They love the concept and they love being involved with Defcon … The JM60 was a new product that they just launched … We looked at the processor and said the JM60 has support for USB so let’s use USB. It has support for a Secure Digital card so let’s add SD in there … I rarely come across companies that are as passionate as I am about a project and these guys are, so it’s a total thrill to be able to work with them.
TL: What components and other fun stuff does the badge have?
JG: The artistic elements and the PC board design tricks that I did this year [are] some of my favorite parts of doing the badge. Ping and Dark Tangent don’t necessarily understand the engineering constraints of making circuit boards so they really push me … In turn I get to learn a lot of new techniques … We’re doing stuff that’s totally crazy and nonstandard for circuit boards.
TL: Are they the same batteries as last year?
JG: No, different batteries. One of the things I ran into last year that I was pretty embarrassed about was the battery life. Depending on how much you used the badge, the batteries didn’t even last the weekend. For me one of my major design goals is making sure that the badge lasts longer than Defcon. This year I went with a larger battery, something that’s way more robust and will just last a long time for people who really want to hack on the badge. It’s one of the CR123A batteries. These things will last a long time, weeks if not months. It’s a little bigger than I would have liked, but I placed it in a way that hopefully will not get too annoying for people.
TL: Did you see the RFID badges at The Last HOPE? Will yours also include some kind of unique ID for buddy/hacker tracking?
JG: I didn’t [see the HOPE badges] … We talked about the badges being able to either track each other or have some kind of unique identifier, but I think that shit is just way too big brother. Most people at Defcon don’t even use their real name. Forcing them to wear a badge that has features like that, to me is crap. I wouldn’t want to wear one of them.
TL: What were the biggest challenges in the badge development this time around?
JG: This badge … ended up taking 200 hours to design, versus the 170 from last year … Most of that was because I was trying new things I’d never done before … During the process every time I had an engineering problem or I stayed up late … I just kept thinking the pain’s going to be worth it. Once the badge is done and it gets into people’s hands and they just love the way it looks and they have fun with it and they hack on it it makes all of the trouble worthwhile to get people interested into this type of thing.
TL: How are you going to top this badge next year?
JG: I have a few ideas for what I want to do next year assuming we do it … I won’t say what they are yet, but it’s going to be cool.
TL: What other projects are you working on right now?
JG: I just started a new apparel line called Kingpin Empire … I am going to donate a portion of the proceeds to hacker-related charities and health-related charities: EFF, ACLU, American Heart Association. Things that have personally affected me or personally saved me in some way. It’s a way for me to spread the hacker message to the masses … to educate people as to what hacking is about, support hacker- and health-related causes and give back to the community that shaped my entire life.
TL: Anything else you want to say about the badge?
JG: I just hope people like it. It’s a labor of love. The more people that hack on it the better. I want people to modify it, I want people to fix any problems they might see with it and just make it their own. If I can inspire just one person out of the 8,500 people that have a badge to start hacking on things and maybe even become an engineer, then I’ve done my part.
WALL of SHEEP
Displays Detail Who Has Sent Readable Data Using Insecure Wireless Connections
by Robert McMillan / August 11, 2008
The Wall of Sheep has become a fixture of the Defcon hacker conference: a wall with a long list of details showing who at the conference has sent readable data using insecure wireless connections. For Brian Markus, better known to conference attendees as “Riverside,” it just may become a line of business. Last month, Markus and three of his fellow volunteers incorporated a company called Aries Security, which they bill as an education and security awareness consultancy that can come in and identify risky behavior on corporate networks. The company is still in an experimental state, meaning that none of the partners have actually quit their day jobs, Markus said. They don’t expect companies to start projecting their own Wall of Sheep displays in their lobbies, but they say the network analysis tools they’ve developed could be helpful when aimed at corporate networks. “We can go into a company if they need help with a security awareness program,” Markus said. “There are an amazing amount of things that we could see by watching the traffic go by.” Wall of Sheep got its start in 2002, when Markus and friends were sniffing wireless LAN traffic at Defcon. It turned out there were plenty of people putting their data out on those networks. “We were saying there are so many of them, they are everywhere.” Inspired by a T-shirt, they decided to call the people they could observe “sheep,” and they started sticking paper plates on the wall with some of the user details they’d found. They list login names, domain or Internet Protocol addresses and partial passwords. Hotel management wasn’t crazy about the idea of paper plates being stuck to the walls, so the Wall of Sheep was soon using a projector.
They’ve seen some pretty crazy stuff revealed on open wireless LANs over the years, including fake usernames and passwords, brand-new computer attacks, a tax return and what Markus calls “nontypical adult material.” Today the project attracts dozens of volunteers at the conference who spend hours hunched over computers analyzing data before it’s put up on the wall. “It’s a tremendous amount of human labor,” Markus said. Wall of Sheep made its first appearance ever at Defcon’s less chaotic sister conference Black Hat this year, and it got a lot of attention when French journalists tried to post sensitive information on the wall that was culled from a Black Hat network set up for reporters. Because the journalists had illegally sniffed the Black Hat network without permission, Markus refused, and eventually the journalists were ejected from the conference. “We said, ‘No way,'” he said. “It’s completely against what all of us are trying to do.”
About the Wall of Sheep
Our mission is to raise security awareness. Computer crime and identity theft loom large in most people’s unconscious fears because they do not know:
1. How they are at risk, and
2. The steps they can take to protect themselves.
We explain both, but the way we do it is unconventional . . .
What We Do
The Wall of Sheep is an interactive demonstration of what can happen when network users let their guard down. We passively observe the traffic on a network, looking for evidence of users logging into email, web sites, or other network services without the protection of encryption. Those we find get put on the Wall of Sheep as a good-natured reminder that a malicious person could do the same thing we did . . . with far less friendly consequences. More importantly, we strive to educate the “sheep” we catch—and anyone who wants to learn— how to use free, easy-to-use tools to prevent leaks in the future.
Nearly every time a network is accessed, an email account is checked, a web application is logged into, or a message is sent, some form of identification is passed between systems. By simply listening to this network traffic and sorting out the interesting bits, ill-intentioned third parties can often steal a password or other credentials with little to no effort. In reality, such eavesdroppers are, on average, infrequent, but that does not diminish the consequences if they are listening. Why take the chance when you don’t have to? Awareness and education are the key. The tools and knowledge to protect yourself are freely available. Most of the time, they are built into your current system.
The Wall of Sheep shows what happens when there are eavesdroppers on your network. If you access a network we are listening to without protecting yourself, we will see your username and password. Then we will post identifying elements* of your transaction on the Wall in front of all of your friends and colleagues. At that point, we hope you will come to us and learn how to avoid such mistakes in the future.
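A concrete example of the kind of credential leak the Wall of Sheep surfaces: HTTP Basic authentication sends the username and password Base64-encoded, not encrypted, so any passive observer on an open wireless network can recover them. The header and credentials below are made up for illustration.

```python
import base64

# A captured request header an eavesdropper might see on an open network.
# Base64 is an encoding, not encryption: anyone on the path can reverse it.
captured_header = "Authorization: Basic " + base64.b64encode(b"alice:hunter2").decode()

# Recovering the plaintext credentials takes two lines of standard library code.
scheme, blob = captured_header.split()[1:]
username, password = base64.b64decode(blob).decode().split(":", 1)
print(username, password)  # → alice hunter2
```

Wrapping the same login in TLS (HTTPS, IMAPS, and so on) is the free, built-in fix the Wall of Sheep volunteers teach.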
The Bottom Line
A potential attacker might maliciously and criminally use your mistakes against you. We do the opposite by raising security awareness and providing education on how to be defensive. It is very easy to become a “sheep,” but it is just as easy to learn how to avoid turning into one.
*but never the whole thing