TESTS & DESCRIPTIONS*

The Bioethics Research Center (BRC) has developed a number of tests or measures that address a wide range of ethical, social, and professional issues in healthcare and health research. These measures were developed using best practices, including expert consultation during item generation, cognitive interviewing, determining the dimensions or factors in the measure, assessing the measure’s relationships with other measures, and calculating the measure’s reliability.

BRC supports the efforts of individuals and programs who wish to use these measures for non-profit educational or research purposes. Currently available measures are described below. To request testing services or access to the measures, please complete our testing services request form.

REQUEST TESTING SERVICES

PROFESSIONAL DECISION-MAKING IN RESEARCH (PDR)

The Professional Decision-Making in Research (PDR) measure is a 16-item vignette-based measure of decision-making in research contexts that comes in two parallel forms, making it suitable for pre- and post-testing. The test examines the decision-making strategies professionals use when confronted with challenging research issues, including human and animal subjects’ protections, personnel management, peer review, bias, and integrity. These issues are often characterized by challenges such as high levels of emotion, uncertainty or over-certainty, ambiguity, and complex power dynamics. The PDR measures the use of strategies that address these situational challenges, such as seeking help and managing emotions.

The PDR has adequate internal consistency (α = .84) and split-half reliability (r = .70), indicating that the test is stable and consistent in its measurement of decision-making in research. The measure is negatively correlated with narcissism, cynicism, moral disengagement, prior exposure to unprofessional data practices, and compliance disengagement, and was not correlated with social desirability, providing convergent and discriminant validity evidence for the measure. This means that the test accurately measures what it is intended to measure. The PDR can be used as a pre-post assessment of educational interventions for researchers, or it can be used as an instrument in studies of integrity in research.
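
The two reliability statistics quoted above can be computed directly from an item-score matrix. The sketch below is a minimal illustration, not the PDR’s actual scoring code; it assumes a respondents-by-items matrix of numeric scores, and the split-half estimate applies the Spearman–Brown correction to the correlation between odd- and even-item halves.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def split_half_reliability(items: np.ndarray) -> float:
    """Odd/even split-half correlation with the Spearman-Brown correction."""
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# Sanity check: perfectly parallel items (each column a shifted copy of the
# same scores) should yield alpha = 1 and split-half reliability = 1.
base = np.arange(1.0, 9.0)
demo = np.column_stack([base + j for j in range(4)])
alpha_demo = cronbach_alpha(demo)
split_demo = split_half_reliability(demo)
```

With real item data, values near the PDR’s reported α = .84 and r = .70 would indicate comparable internal consistency.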

Key Publications

Antes AL, English T, Baldwin KA, DuBois JM. (2019). What Explains Associations of Researchers’ Nation of Origin and Scores on a Measure of Professional Decision-Making? Exploring Key Variables and Interpretation of Scores. Science and Engineering Ethics, 1-32. PubMed link: https://www.ncbi.nlm.nih.gov/pubmed/30604356

Antes AL, Chibnall, J, Baldwin KA, Tait RC, Vander Wal JS, DuBois JM (2016). Making professional decisions in research: Measurement and key predictors. Accountability in Research, 23(5), 288-308. PubMed link: https://www.ncbi.nlm.nih.gov/pubmed/27093003

DuBois JM, Chibnall JT, Tait RC, Vander Wal JS, Baldwin KA, Antes AL & Mumford MD (2016). Professional decision-making in research (PDR): The validity of a new measure. Science and Engineering Ethics, 22(2), 391-416. PubMed link: https://www.ncbi.nlm.nih.gov/pubmed/26071940

PROFESSIONAL DECISION-MAKING IN MEDICINE (PDM)

The Professional Decision-Making in Medicine (PDM) measure is a 16-item vignette-based measure of decision-making in medical contexts that comes in two parallel forms, making it suitable for pre- and post-testing. The test examines decision-making strategies that physicians can use when confronted with challenging ethical situations in healthcare that require managing competing interests, accounting for what is clinically appropriate, considering patient values, and addressing conflicts between patients, families, and medical professionals. These issues are often characterized by challenges such as high levels of emotion, uncertainty or over-certainty, ambiguity, and complex power dynamics. The PDM measures the use of strategies that address these individual and situational challenges. The strategies include considering consequences and rules, seeking help, managing emotions, and questioning personal assumptions and motives.

In our preliminary study, Form A had a mean score of 12.04 (SD = 2.13) and Form B had a mean score of 11.86 (SD = 2.74). These means were not statistically different from one another (Wilcoxon signed-rank test: Z = -.564, p = .573), indicating that the two test forms are generally equivalent. The measure was negatively correlated with moral disengagement, positively correlated with peer ratings of professionalism, and was not correlated with social desirability. Taken together, this provides convergent, discriminant, and criterion-related validity evidence for the measure. This means that the test accurately measures what it is intended to measure and is associated with measures it should theoretically be associated with. The PDM can be used as a pre-post assessment of educational interventions for physicians, or it can be used as an instrument in studies of professionalism in medicine.
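
The form-equivalence check described above can be reproduced with standard statistical tools. The snippet below is a hypothetical illustration using SciPy’s `wilcoxon` on paired Form A/Form B totals; the scores here are invented for demonstration, not the study data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired totals for respondents who completed both forms
form_a = np.array([12, 11, 13, 12, 14, 10, 12, 13, 11, 15])
form_b = np.array([11, 12, 13, 12, 13, 11, 12, 14, 12, 14])

# A non-significant p-value is consistent with the two forms being parallel.
stat, p_value = wilcoxon(form_a, form_b)
```

In the preliminary study, the analogous test on real data gave Z = -.564, p = .573, i.e., no detectable difference between forms.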


Key Publications

Antes AL, Dineen KK, Bakanas E, et al. Professional decision-making in medicine: Development of a new measure and preliminary evidence of validity. PLoS One. 2020;15(2):e0228450. PubMed link: https://pubmed.ncbi.nlm.nih.gov/32032394/

GOOD CLINICAL PRACTICE (GCP) KNOWLEDGE TEST

The Good Clinical Practice (GCP) Knowledge Test is a 32-item measure that assesses a clinical research coordinator’s knowledge of good clinical practices in four competency domains: 1) clinical trial operations, 2) study site management, 3) ethical and participant safety considerations, and 4) data management and informatics. In a GCP validation study of 625 clinical research coordinators (CRCs), the sample had a mean score of 24.8 (SD = 3.8) and a median score of 25. The GCP is positively related to CRC experience and formal certification and training, providing evidence for the validity of the measure. That is, the GCP accurately measures CRC knowledge. The GCP has adequate internal consistency (α = .69), meaning that the test items reliably measure the knowledge of CRCs. The measure can be used to evaluate the effectiveness of GCP training programs and the knowledge of clinical research coordinators. It can also be used in research contexts where knowledge of GCP is a predictor or outcome.

Key Publications

DuBois JM, Mozersky JT, Antes AL, Baldwin KA, Jenkerson M. (2020) Assessing clinical research coordinator knowledge of good clinical practice: An evaluation of the state of the art and a test validation study. Journal of Clinical and Translational Science, 4(2), 141-145. DOI: 10.1017/cts.2019.440. PubMed link: https://www.ncbi.nlm.nih.gov/pubmed/31984765

Mozersky JT, Antes AL, Baldwin KA, Jenkerson M, DuBois JM. (2020) How do clinical research coordinators learn Good Clinical Practice? A mixed methods study of factors that predict uptake of knowledge. Clinical Trials. DOI: 10.1177/1740774519893301.

HOW I THINK ABOUT RESEARCH (HIT-Res)

The How I Think about Research (HIT-Res) is a measure that examines the use of cognitive distortions such as blaming others and minimizing or mislabeling to justify research compliance and integrity violations. Respondents are asked to rate the extent to which they agree or disagree with a set of statements, with higher scores indicating higher use of cognitive distortions. The HIT-Res has excellent alpha reliability (α = .91), meaning that the test items reliably measure cognitive distortions. Scores on the HIT-Res are positively correlated with measures of moral disengagement and cynicism and negatively correlated with professional decision-making skills, demonstrating construct validity. That is, the HIT-Res accurately measures cognitive distortions. This measure can be used to assess outcomes of research ethics training programs or to examine factors influencing research integrity.

Key Publications

DuBois JM, Chibnall JT, Gibbs J (2016). Compliance disengagement in research: Development and validation of a new measure. Science and Engineering Ethics, 22, 965-988. PubMed link: https://www.ncbi.nlm.nih.gov/pubmed/26174934

The HIT-Res has been used as an outcome measure in the following:

DuBois JM, Chibnall JT, Tait R, Vander Wal JS (2018). The Professionalism and Integrity in Research Program: Description and preliminary outcomes. Academic Medicine, 93(4), 586-592. PubMed link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5738297/

The HIT-Res has been used as a predictor variable in the following:

DuBois JM, Chibnall JT, Tait RC, Vander Wal JS, Baldwin KA, Antes AL, Mumford MD (2016). Professional decision-making in research (PDR): The validity of a new measure. Science and Engineering Ethics, 22(2), 391-416. PubMed link: https://www.ncbi.nlm.nih.gov/pubmed/26071940

Antes AL, Chibnall J, Baldwin KA, Tait RC, Vander Wal JS, DuBois JM (2016). Making professional decisions in research: Measurement and key predictors. Accountability in Research, 23(5), 288-308. PubMed link: https://www.ncbi.nlm.nih.gov/pubmed/27093003

VALUES IN SCIENTIFIC WORK (VSW)

The Values in Scientific Work (VSW) is a 35-item measure that assesses the level of importance scientists attach to the different intrinsic, extrinsic, and social values that motivate their work. During VSW development, items were factor analyzed, meaning that items measuring a similar construct were clustered together in the same factor. The VSW comprises eight factors: 1) autonomy, 2) research ethics, 3) social impact, 4) income, 5) collaboration, 6) innovation and growth, 7) conserving relationships, and 8) job security.
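
The factor-analytic step described above can be illustrated with a toy example. The sketch below is hypothetical (synthetic responses, not VSW data) and uses scikit-learn’s `FactorAnalysis` to show how items driven by the same underlying value tend to load on the same factor.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500

# Two latent "values"; six synthetic items, three driven by each value, plus noise
value_1 = rng.normal(size=n)
value_2 = rng.normal(size=n)
items = np.column_stack(
    [value_1 + 0.3 * rng.normal(size=n) for _ in range(3)]
    + [value_2 + 0.3 * rng.normal(size=n) for _ in range(3)]
)

fa = FactorAnalysis(n_components=2).fit(items)
loadings = fa.components_.T  # one row per item, one column per factor
```

Inspecting `loadings` shows each item’s association with each extracted factor; in scale development, items are assigned to the factor on which they load most strongly.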

The VSW has good internal consistency, with Cronbach’s alphas greater than .70 for seven of the eight factors. That is, the test is stable and reliable in its measurement of the intrinsic, extrinsic, and social values of scientists. Scores on the VSW are correlated with global values with which conceptual overlap is expected and are not correlated with global values that are theoretically more distinct, providing evidence for both convergent and discriminant validity. That is, the VSW accurately measures the values of scientists. The VSW can be used for professional development of scientists and in research where these values are a predictor or outcome variable of interest.

Key Publications

English T, Antes AL, Baldwin KA, DuBois JM (2018). Development and Preliminary Validation of a New Measure of Values in Scientific Work. Science and Engineering Ethics, 24, 393-418. PubMed link: https://www.ncbi.nlm.nih.gov/pubmed/28597222

KNOWLEDGE OF RESPONSIBLE CONDUCT OF RESEARCH (RCR)

The Responsible Conduct of Research (RCR) Knowledge Test is a 34-item measure that examines knowledge of RCR guidelines and rules relating to ten RCR topics: 1) research misconduct, 2) data management and ownership, 3) peer review, 4) authorship and publication practices, 5) mentor-trainee relationships, 6) collaboration, 7) conflicts of interest, 8) human subjects research, 9) animal subjects research, and 10) scientists as responsible members of society. Respondents are asked to read each multiple-choice question and select the best answer from four response options. This measure can be used to assess outcomes of research ethics training programs intended to improve trainee knowledge of RCR guidelines and rules.

Key Publications

In progress.

ATTITUDES TOWARD ARTIFICIAL INTELLIGENCE IN HEALTHCARE: ADULTS (AAIH-A)

The Attitudes toward Artificial Intelligence in Healthcare: Adults (AAIH-A) inventory is a 45-item measure that assesses a person’s attitudes toward AI-driven technology in healthcare. The inventory consists of two parts: general openness and concerns about AI technologies. General openness comprises 12 items and is focused on four AI-driven healthcare functions: diagnosis, risk prediction, treatment selection, and medical guidance. Concerns are assessed in seven subscales totaling 33 items, focused on factors people might find important when considering the use of AI-driven healthcare technologies: quality and accuracy, privacy, shared decision making, convenience, cost, the human element of care, and social justice.

The AAIH-A has very good internal consistency, with a Cronbach’s alpha of .92 for the general openness subscale and Cronbach’s alphas ranging from .74 to .90 for the concerns subscales. These alpha levels indicate that the inventory is stable and reliable in its measurement of attitudes toward AI in healthcare. Additionally, as expected, scores on the inventory concerns subscales correlate with measures of health system competency, health system trust, health system integrity, trust in technology, and faith in technology, which demonstrates evidence of convergent validity. That is, the AAIH-A inventory accurately measures attitudes toward AI in healthcare. The inventory can be used by other researchers and healthcare technology companies to evaluate openness and potential contributors to perceptions of AI technologies.

Key Publications

Sisk, B. A., Antes, A. L., Lin, S. C., Nong, P., & DuBois, J. M. (2024). Validating a novel measure for assessing patient openness and concerns about using artificial intelligence in healthcare. Learning Health Systems, e10429. doi: 10.1002/lrh2.10429.

ATTITUDES TOWARD ARTIFICIAL INTELLIGENCE IN HEALTHCARE: PEDIATRICS (AAIH-P)

The Attitudes toward Artificial Intelligence in Healthcare: Pediatrics (AAIH-P) inventory is a 45-item measure that assesses a person’s attitudes toward AI-driven technology in pediatric healthcare. The inventory consists of two parts: general openness and concerns about AI technologies. General openness comprises 12 items and is focused on four AI-driven healthcare functions: diagnosis, risk prediction, treatment selection, and medical guidance. Concerns are assessed in seven subscales totaling 33 items, focused on factors parents might find important when considering the use of AI-driven healthcare technologies: quality and accuracy, privacy, shared decision making, convenience, cost, the human element of care, and social justice.

The AAIH-P has very good internal consistency, with a Cronbach’s alpha of .92 for the general openness subscale and Cronbach’s alphas ranging from .69 to .87 for the concerns subscales. These alpha levels indicate that the inventory is stable and reliable in its measurement of attitudes toward AI in pediatric healthcare. Additionally, as expected, scores on the inventory concerns subscales correlate with measures of system trust, trust in technology, and faith in technology, which demonstrates evidence of convergent validity. That is, the AAIH-P inventory accurately measures attitudes toward AI in pediatric healthcare. The inventory can be used by other researchers and healthcare technology companies to evaluate openness and potential contributors to parental perceptions of AI technologies.

Key Publications

Sisk, B., Antes, A. L., Burrous, S., & DuBois, J. M. (2020). Parental attitudes toward artificial intelligence-driven precision medicine technologies in pediatric healthcare. Children, 7(9): 145. doi: 10.3390/children7090145.

LEADERSHIP AND MANAGEMENT PRACTICES IN SCIENCE (LAMPS) AND RESEARCH TEAM PRACTICES (RTP)

The Leadership and Management Practices in Science (LAMPS) inventory is a 28-item measure that assesses the frequency with which Principal Investigators (PIs) in research labs engage in various leadership and management behaviors. The inventory consists of two subscales: fostering team relationships and directing rigorous research. Fostering team relationships comprises 16 items and is focused on people-oriented behaviors such as providing support and encouragement and demonstrating respect and concern for the welfare of lab members. Directing rigorous research comprises 12 items and is focused on task-oriented behaviors such as defining expectations, setting standards, and supervising work procedures.

The LAMPS inventory has good internal consistency, with a Cronbach’s alpha of .95 for fostering team relationships and .92 for directing rigorous research. These alpha levels indicate that the inventory is stable and reliable in its measurement of leadership and management practices. Additionally, scores on the inventory subscales are correlated with other measures of leadership, including a measure of leadership behaviors and a measure of ethical leadership, which demonstrates evidence of convergent validity. That is, the LAMPS inventory accurately measures leadership and management practices in research labs. The inventory can be used by research administrators seeking information about lab leadership across an institution, or by researchers who study lab environments.

In addition to the 28-item LAMPS inventory, a separate but related set of 10 items comprises a Research Team Practices (RTP) inventory. These items assess the frequency of various management practices the respondent’s research group may use. They are not focused on the PI specifically, because these behaviors may not be carried out by the leader personally yet are still signs of leadership. For example, the item “The lab I work in holds regular meetings as a group” describes a practice that successful labs use but that may not be organized directly by the lab leader (e.g., the PI may have tasked a lab manager with organizing group meetings). The RTP inventory has good internal consistency, with a Cronbach’s alpha of .82. Additionally, it explained variance above and beyond the LAMPS subscales when predicting perceptions of an ethical lab climate.

Key Publications

Antes, A. L., English, T., Solomon, E. D., Wroblewski, M., McIntosh, T., Stenmark, C. K., & DuBois, J. M. (2024). Leadership, management, and team practices in research labs: Development and validation of two new measures. Accountability in Research, 1–28. doi: 10.1080/08989621.2024.2412772. PubMed link: https://pubmed.ncbi.nlm.nih.gov/39435976/

LAB CLIMATE FOR RESEARCH ETHICS (LCRE)

The Lab Climate for Research Ethics (LCRE) scale is a 3-item measure that assesses the degree to which members of a lab perceive that the group values and is committed to principles of research ethics. It serves as a brief assessment of global perceptions of the climate for research ethics in a lab environment.

The Lab Climate for Research Ethics scale has good internal consistency, with a Cronbach’s alpha of .91. That is, the test is stable and reliable in its measurement of climate for research ethics. Scores on the test are correlated with another measure of climate for research ethics and are not correlated with social desirability, providing convergent and discriminant validity evidence, respectively. That is, the Lab Climate for Research Ethics scale accurately measures the global climate for research ethics in labs. The test can be used by research administrators seeking information about climate within labs across an institution, or by researchers who study lab environments.

Key Publications

Solomon, E. D., English, T. E., Wroblewski, M., DuBois, J. M., & Antes, A. L. (2021). Assessing the climate for research ethics in labs: Development and validation of a brief measure. Accountability in Research: Policies and Quality Assurance. PubMed link: https://pubmed.ncbi.nlm.nih.gov/33517782/

ATTITUDES TOWARD GENOMICS AND PRECISION MEDICINE (AGPM)

The Attitudes toward Genomics and Precision Medicine (AGPM) is a 37-item measure that assesses perceived benefits of and concerns about precision medicine and genomics activities. During AGPM development, the measure was factor analyzed, a statistical method for determining which items group together into “factors.” The results showed that the AGPM comprises five factors: 1) benefits, 2) privacy concerns, 3) embryo and abortion concerns, 4) gene editing and nature concerns, and 5) social justice concerns. Benefit items are reverse scored; thus, higher overall scores on the AGPM indicate a greater level of concern with precision medicine activities.
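
Reverse scoring works by mirroring an item’s raw rating about the midpoint of its response scale, so that higher totals uniformly indicate more concern. The sketch below assumes a 5-point response scale purely for illustration; the AGPM’s actual response scale may differ.

```python
SCALE_MIN, SCALE_MAX = 1, 5  # assumed 5-point scale (illustration only)

def reverse_score(raw: int) -> int:
    """Mirror a rating about the scale midpoint: 1 -> 5, 2 -> 4, ..., 5 -> 1."""
    return SCALE_MIN + SCALE_MAX - raw

# A benefit item rated at the top of the scale (5) contributes only 1 after
# reversal, so strongly endorsing benefits lowers the total concern score.
```

The same transformation applies on any bounded rating scale: reversed = minimum + maximum - raw.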

The AGPM has excellent alpha reliability (α = .91), and each of the five subscales has alpha reliabilities between .71 and .90. This means that the items on the AGPM consistently measure individual perceptions of benefits and concerns about precision medicine and genomics activities. Overall scores on the AGPM are positively correlated with other related measures of attitudes toward genetic testing, scores on the Systems Trust Index, regular religious practice, and political orientation. Taken together, these correlations provide strong validity evidence for the measure, indicating that the test accurately measures what it is intended to measure. The AGPM can be used to assess patient attitudes in a clinical setting and can also be used in genomics research where patient attitudes are a predictor or outcome.

Key Publications

DuBois, J. M., Mozersky, J., Antes, A., English, T., Parsons, M. V., Baldwin, K. (2021). Attitudes toward genomics and precision medicine. Journal of Clinical and Translational Science. 5, e120, 1-9. PubMed link: https://pubmed.ncbi.nlm.nih.gov/34267947/

*The BRC Testing Service is funded in part by the National Center for Advancing Translational Sciences (UL1 TR002345).