Introduction (Ongoing data collection – expected completion June 2019) Hands-on data acquisition describes the mechanisms at play when a tester's hands palpate an individual, using observation-based and touch-based methods to fulfil specific tasks such as manually assessing an individual's alignment and/or biomechanics, and/or placing markers. How a tester collects and interprets hands-on data acquisition methods into reliable, specific and sensitive data remains unknown. To the authors' knowledge, no study to date has explored whether the volunteers themselves may be contributing to the inconclusive data reported in inter-tester and intra-tester reliability studies, nor has any study deductively explored the mechanisms that underpin hands-on data acquisition. This suggests that replication of the prevailing inductive methodology has misled the narrative of these studies and driven further disparity within the evidence-based research. These disparate outcomes have had a two-fold effect within manual therapy: readers acting upon false-positive outcomes may in fact be causing harm to individuals in their care by delaying improvement, while the outcomes themselves strongly discourage the use of hands-on techniques for data acquisition.
Purpose/Aim This study aimed to deductively investigate the observation-based and touch-based methods a tester uses during hands-on data acquisition. Are we collecting data only from the subject?
Materials and Methods Seven repeated-measures design studies were conducted over a two-year period. One hundred and fourteen volunteers (all holding a recognised manual therapy qualification) have been recruited as testers, and one hundred and thirty-four healthy subjects have volunteered as patients. Paired testers had their observation-based and touch-based methods for hands-on data acquisition investigated in isolation and in integration, within and between three rounds of testing. Prior to the studies commencing, all volunteers' craniovertebral and upper thorax regions were assessed, volunteers were blinded to the outcomes, and any history of head trauma was noted by the examiner (Author). Following each experimental study, an inductive open interview was conducted with the testers. A convergent parallel mixed-methods research approach was employed, capitalising on the strengths of both qualitative and quantitative data. A Cohen's kappa coefficient analysis was used to explore agreement between and within testers and modalities of data acquisition (1). A Thematic Analysis (TA) was used to capture the intricacies of the testers' experiences, explicitly allowing, for the first time, social as well as psychological interpretations of the data collected (2).
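For reference, and although the formula is not reproduced in the abstract itself, Cohen's kappa quantifies inter-rater agreement corrected for chance and is conventionally defined as

\[ \kappa = \frac{p_o - p_e}{1 - p_e} \]

where \(p_o\) is the observed proportion of agreement between testers and \(p_e\) is the proportion of agreement expected by chance alone; values near 1 indicate near-perfect agreement and values near 0 indicate agreement no better than chance (1).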
Results Parietal and/or temporal head trauma correlates with unreliable volunteers, both as testers and as subjects within reliability studies. Testers with incongruent alignment and biomechanics between the craniovertebral and upper thorax regions were consistently unreliable. Whole-hand placement is necessary for hands-on data acquisition to be reliable.
Conclusion This study suggests there are specific volunteer idiosyncrasies that have acted as confounding variables in inter-tester and intra-tester reliability studies. A new methodology for screening volunteers is warranted to better inform human movement studies where hands-on data acquisition is used.
Keywords reliability studies, hands-on, manual screening, marker placement, data acquisition.
References:
- Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005;37(5):360-3.
- Braun V, Clarke V. What can “thematic analysis” offer health and wellbeing researchers? Int J Qual Stud Health Well-being. 2014;9:26152.