
Real-Time Feedback in Clinical Education: Early Lessons from a Pain EPA Pilot Study


[Image: Physical therapy student performing a lower-extremity examination with a patient during an in-clinic assessment.]

Most Doctor of Physical Therapy (DPT) programs have experienced some version of this moment.

 

A student is “managing 50 percent of the caseload,” the rotation is moving fast, and everyone is doing their best. But when it is time for the midpoint or final evaluation, the feedback can feel blunt. Broad categories. Big judgments. Not always enough detail to answer the question the student actually cares about:

 

What can I do differently this week to improve?


That is the gap a new pilot study is trying to close. Building on recent work published in the Physical Therapy & Rehabilitation Journal, the study is the focus of an upcoming poster at the APTA Combined Sections Meeting (CSM) in Anaheim, Feb. 12–14, 2026:


Understanding Pain Entrustable Professional Activity Assessment in Doctor of Physical Therapy Student Clinical Education


How real-time feedback in clinical education changes clinical learning


One reason real-time feedback in clinical education matters is that many traditional assessment models were not designed to support frequent, actionable guidance.


Traditional workplace-based assessments (WBAs) often lean on caseload volume anchors: 25 percent, 50 percent, 75 percent. Those anchors have value, but they can also invite extraneous factors into the evaluation.


Clinical instructor (CI) experience, comfort with supervising, and even the CI–student relationship can shape how a student is scored, sometimes independently of the student’s actual performance. Many competencies are also difficult to assess in isolation because clinical performance is inherently contextual.


As Sorcha Martin, PT, DPT, EdD, FAAOMPT, Clinical Assistant Professor and Co-Director for Clinical Education at Boston University, put it, “So you can manage X number of patients per day does not mean you’re a good PT.”


What is a Pain EPA, in plain English?


Entrustable Professional Activities (EPAs) were developed to help educators assess real clinical work, not just isolated skills.

 

In everyday terms, an EPA asks: How much do I trust this learner to do a specific clinical task today, and with what level of supervision?


EPAs support both:

  • formative, in-the-moment feedback that helps day-to-day learning, and

  • summative decisions grounded in repeated observation over time.


For pain-specific EPAs, the goal is to capture meaningful tasks in pain care in a way that is observable, specific, and usable across clinical education settings.


What this pilot study tested

 

This pilot followed eight first- and second-year DPT students during a full-time clinical experience at one outpatient clinical site.


The team compared:

  • Clinical Performance Instrument (CPI) ratings, the profession’s standard caseload-based clinical education assessment, collected at the midpoint and final, and

  • Pain EPA ratings collected throughout the rotation.



[Image: Clinical instructor reviewing a real-time Pain EPA assessment on a mobile device in a DPT clinical education setting.]

The EPAs were grouped into four categories: Examination, Intervention, Patient Communication, and Collaboration.

 

Here is the practical difference: the Pain EPAs were collected using the Whitecoat Learning Platform, making frequent, low-friction, in-the-moment assessment possible.


As Martin explained, “Without the Whitecoat Learning Platform, this work would not have been possible. We tried other approaches, and they simply did not support the type of assessment we were trying to do.”



What they found so far

 

[Image: Mobile view of a Pain EPA assessment form used to support real-time feedback in clinical education.]

Despite the small sample, the signal was encouraging.

  • Students completed an average of four EPA assessment requests per week, with CIs providing feedback on one to two EPAs per assessment.

  • Of the 20 EPAs available, only six were not used, suggesting the set is largely relevant for entry-level DPT clinical education.

  • The most requested EPAs were in the Intervention category, followed by Examination.

  • Changes in EPA ratings over time were visible and correlated with changes in CPI domains related to pain.

  • Most students agreed the Pain EPAs were easy to use clinically.

  • Preference was mixed, which is informative rather than negative: 43 percent preferred using the Pain EPAs only, while 57 percent felt both tools were equally useful.


What students and CIs said

 

The most compelling insights came from people using the tool in real clinical settings.

 

Student perspectives

 

  • “Personally, I enjoy that the Pain EPA is done daily rather than CPI only being done at midterm and final.”

  • “The EPAs allowed me to have specific goals to shoot for.”

  • “I love the EPAs and liked getting the feedback so often. It was much better than waiting for the CPI.”

Clinical instructor perspectives

 

  • “Really user-friendly site, able to quickly give feedback in the moment on students.”

  • “The Pain EPA helped facilitate more frequent feedback and sparked discussion earlier in the clinical experience.”
  • “Our barrier was remembering to do it and making a new habit, but when we did it, it was easy and sparked conversation.”

 

Several CIs also highlighted how valuable it would be, over time, to have a longitudinal view of student feedback.


As they engaged more frequently with the EPAs, instructors naturally began thinking beyond individual moments and toward how feedback accumulates, progresses, and connects across a full clinical experience.


That observation underscores why this work matters. The study is not only exploring a new assessment framework but also revealing how increased frequency of assessment shifts expectations around reflection, continuity, and growth.


Those insights help inform how assessment tools and workflows can continue to evolve alongside competency-based education.


Why this matters now

 

Physical therapy education is moving toward competency-based education, and EPAs are increasingly viewed as a practical way to measure readiness in real clinical work.

 

But there is a catch. If capturing frequent, low-stakes feedback is not easy, it does not happen.


As Jeb Helms, PT, DPT, EdD, Clinical Associate Professor in the Department of Physical Therapy and Athletic Training at Northern Arizona University, explained, “If the assessment process is not designed to support frequent, low-stakes feedback, it simply does not happen. The Whitecoat Learning Platform made it feasible to collect meaningful data over time.”

Later, he added, “There is a huge emphasis on formative, ungraded assessment. Without a system designed to capture that feedback consistently, it is impossible to generate enough data to support meaningful entrustment decisions.”

 

If you are going to CSM, connect with the team


If you are attending the APTA Combined Sections Meeting in Anaheim, make time to visit Sorcha Martin’s poster presentation on Saturday, Feb. 14, between noon and 2 p.m. local time.


The session will be especially relevant for Directors of Clinical Education, clinical education faculty, and clinical instructors who are looking for practical ways to strengthen feedback quality without adding burden to clinical workflows.

 

Sorcha Martin is actively inviting Directors of Clinical Education and program leaders to connect with her at CSM if they are interested in participating in the next phase of the national Pain EPA study.


Sorcha and Jeb are specifically looking to engage programs and clinical partners who want to help shape how EPA-based assessment is studied and applied across different settings and models of clinical education.



Liam Woodard, CEO and co-founder of the Whitecoat Learning Platform, will also be on-site at CSM and available to talk through how Whitecoat can support EPA-based assessment alongside existing tools without disrupting current program workflows.

 

If you are a Director of Clinical Education or program leader who wants to explore:

  • how an EPA-based assessment could fit within their current clinical education model,

  • how to reduce friction for clinical instructors while increasing feedback frequency, and

  • what implementation could realistically look like at the program level,

you can schedule a brief discovery conversation with Liam to walk through your goals and questions.


