No Single Best Answer

As I looked out on the faces of the second-year students awaiting the Step Exam review, I couldn't help but think of the patient I had seen recently—a 71-year-old woman with a history of recent transient ischemic attack and a right carotid bruit who had refused to consider a potential angiogram and surgery. I knew what the right answer would be on a standardized exam. I did not know the right answer for reality. How much of a difference in statistical outcome—how low a number needed to treat—would make it imperative for me to push this reluctant patient to consider having a simple Doppler exam, an angiogram (at some risk), and possible surgery? And what of the cost-effectiveness of studies done and time spent in persuasion? There is no single best answer. What about the patient's fear of surgery and her preference for "natural" approaches? And "patient-centered communication and problem-solving"? Is it coercive to convince this patient, dead set against surgery, to examine her preferences? What difference in probable outcomes would make this the moral approach? And, of course, what is the benefit/risk ratio of invasive intervention over minimal intervention—and how does that balance against her actual concerns? And what of the grandchildren she keeps now so her daughter can work? Her situation is rich with particular, contextual details, but has no single best answer. Certainly, there are some wrong answers, but there are many equally good answers—depending on so many specific aspects.

From Iowa tests through SATs, ACTs, MCATs, and Step exams, we select our students by single best answer. Yet even compared to the long-stem, "context"-related cases of the Step exams, real patients are much more broadly contextually situated—family issues, cultural biases of the patients and the doctors, funding resources and changing rules, and on and on. We select our students by their skills in navigating a defined and calculable world, and expect them to adapt to a world of indefinites and ambiguity. Even our beloved database of statistics and prediction rules tells us what will happen to 100 people, but not the one person in front of us in the exam room.

My thoughts turned to the boards. Here too, there is a canned world of expected responses—a contrived, monocultural world where all diabetic patients are miraculously compliant with all diets, monitoring, and prescriptions, a world where there are no misunderstandings of plainly spoken English, where labs make no errors and orders are carried out as written. Yet students quickly discover in their third year that many of the challenges of medical practice are not addressed in the database we teach and test.

We select our students by single best answer for a world of complexity and ambiguity. We teach an intensive, detailed database that is testable by this format, and then struggle to assess and value the skills of communication and the attitude of service. Strangely enough, the students we have selected by single best answer flock to specialties with limited ambiguities, where they may feel smug serenity in their single best answers.

We have developed and honed an excellent testing and educational process to select for individuals who provide the single best answer. As noted by Berwick, "every system is perfectly designed to achieve the results it achieves." Perhaps if we want different results, we need to rethink and redirect our selection and training processes. In the interim, I teach "out of both sides of my mouth," making sure the students know the "single best answer" as well as the variety of right answers for real patients.
