Issue: BCMJ, Vol. 58, pages 348, 350. Letters
I read with interest Dr Hwang’s study on electronic wound monitoring after ambulatory breast cancer surgery [BCMJ 2016;58(8):448-453]. I do not believe the reported results support his conclusions.
The author concluded that the app improved the patient experience, but the retrospective control group was not surveyed, so no comparative conclusions about patient experience are possible. He also concluded that fewer unscheduled visits were necessary, yet three unscheduled follow-up visits in the app group were not counted: two patients required unscheduled e-visits and one an office visit. These visits were not scheduled when the patients left hospital, and their inclusion undermines the hypothesis that the app reduced unscheduled visits. I also question whether unplanned visits without complication are a meaningful endpoint when one group was provided an alternate avenue for care (the app). It is akin to giving smartphones to half a population and counting any unplanned pay-phone use as a treatment failure.
I would also be interested to know how much time was required to create and respond to the picture-and-text messages (140 in total if all 35 patients complied with four messages each), and whether that effort represents a cost benefit compared with the extra emergency department visits (four) and walk-in visits (two).
Regarding conclusions that were not made but can be observed, the study populations appear different: the conventional group was older, had more complex surgery (longer operative times), and had more advanced disease (zero cases of DCIS vs seven in the app group). I don’t think we can meaningfully compare nonwound complication rates, since remote, unreported complications (pressure sores or bradycardia, for instance) would not be detected by the app but might be at an in-person or synchronous visit. Perhaps most concerning, there were 3 wound complications in the conventional group (1 leaking drain and 2 infections) vs 11 in the app group (6 infections, 2 minor hematomas, and 1 each of edge necrosis, hemorrhage, and pneumonia). The Yates-corrected P value for this comparison (3 of 37 vs 11 of 35) approaches significance at P = .055. The difference may reflect overtreatment of possible infection in the app group, but it should be considered in future study design.
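For readers who wish to check such a comparison themselves, a chi-square test with the Yates continuity correction on a 2×2 table can be computed with Python’s standard library alone. This is a sketch, not the study’s analysis: the counts below are the wound-complication figures as quoted in this letter (3 of 37 conventional vs 11 of 35 app), and the P value obtained depends on which counts and test variant are used, so it may differ from the figure quoted above.

```python
import math

def yates_chi2(a, b, c, d):
    """Chi-square with Yates continuity correction for the 2x2 table
    [[a, b], [c, d]]; returns (chi-square statistic, two-sided P), 1 df."""
    n = a + b + c + d
    # Shortcut formula for a 2x2 table with the continuity correction.
    num = n * (abs(a * d - b * c) - n / 2) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    chi2 = num / den
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Conventional group: 3 complications, 34 without.
# App group: 11 complications, 24 without.
chi2, p = yates_chi2(3, 34, 11, 24)
print(f"chi2 = {chi2:.3f}, P = {p:.3f}")
```

With small expected cell counts such as these, Fisher’s exact test is often preferred over the chi-square approximation, which may also explain variation in reported P values.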
I do appreciate the considerable time and effort the study must have taken, and I applaud Dr Hwang as a clinical pioneer in this promising BYOD e-health initiative. I have used Medeo and think it’s a great platform, though I don’t think this study proves its value in this use case. I also found it awkward that a published, peer-reviewed study should include so many pictures of and references to a commercial product, particularly when the issue includes a full-page paid ad.
—Mike Figurski, MD