TESTS & DESCRIPTIONS

The Bioethics Research Center (BRC) has developed a number of tests and measures that address a wide range of ethical, social, and professional issues in healthcare and health research. These measures were developed using best practices in measure development, including expert consultation and cognitive interviewing during item generation, factor analysis to determine the dimensions or factors in each measure, assessment of each measure’s relationships with related measures, and calculation of each measure’s reliability.

BRC supports the efforts of individuals and programs who wish to use these measures for non-profit educational or research purposes. Currently available measures are described below. To request testing services or access to the measures, please complete our testing services request form.

Currently available measures include:

  • Professional Decision-Making in Research (PDR)

  • Good Clinical Practice (GCP) Knowledge Test

  • Attitudes toward Genomics and Precision Medicine (AGPM)

  • How I Think about Research (HIT-Res)

  • Values in Scientific Work (VSW)

Measures to be available in 2019:

  • Knowledge of Responsible Conduct of Research

  • Professional Decision-Making in Medicine

  • Leadership Practices Inventory for Research Teams

PROFESSIONAL DECISION-MAKING IN RESEARCH (PDR)

The Professional Decision-Making in Research (PDR) measure is a 16-item vignette-based measure of decision-making in research contexts that comes in two parallel forms, making it suitable for pre- and post-testing. The test examines the decision-making strategies professionals use when confronted with challenging research issues, including human and animal subjects’ protections, personnel management, peer review, bias, and integrity. These issues are often characterized by challenges such as high levels of emotion, uncertainty or over-certainty, ambiguity, and complex power dynamics. The PDR measures the use of strategies that address these situational challenges, such as seeking help and managing emotions.

The PDR has adequate internal consistency (α = .84) and split-half reliability (r = .70), indicating that the test is stable and consistent in its measurement of decision-making in research. The measure is negatively correlated with narcissism, cynicism, moral disengagement, prior exposure to unprofessional data practices, and compliance disengagement, and is not correlated with social desirability, providing convergent and discriminant validity evidence for the measure. This means that the test accurately measures what it is intended to measure. The PDR can be used as a pre- and post-assessment of educational interventions for researchers, or it can be used as an instrument in studies of integrity in research.
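For readers curious how these reliability statistics are typically computed, the sketch below estimates Cronbach’s alpha and a Spearman-Brown-corrected split-half coefficient on simulated data. It assumes a 16-item, numerically scored form and invented respondent data; it is not the BRC’s scoring code.

```python
# Illustrative sketch only: Cronbach's alpha and split-half reliability for a
# hypothetical 16-item form, using simulated (not PDR) data.
import numpy as np

rng = np.random.default_rng(0)
# Simulated item scores: 200 hypothetical respondents x 16 items sharing a common factor
scores = rng.normal(size=(200, 1)) + rng.normal(scale=0.8, size=(200, 16))

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def split_half_reliability(items: np.ndarray) -> float:
    """Correlate odd- and even-item half scores, then apply the Spearman-Brown correction."""
    odd_half = items[:, ::2].sum(axis=1)
    even_half = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd_half, even_half)[0, 1]
    return 2 * r / (1 + r)

print(f"alpha = {cronbach_alpha(scores):.2f}")
print(f"split-half (Spearman-Brown) = {split_half_reliability(scores):.2f}")
```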

Key Publications

Antes, A. L., English, T., Baldwin, K. A., & DuBois, J. M. (2019). What Explains Associations of Researchers’ Nation of Origin and Scores on a Measure of Professional Decision-Making? Exploring Key Variables and Interpretation of Scores. Science and Engineering Ethics, 1-32. PubMed link: https://www.ncbi.nlm.nih.gov/pubmed/30604356

Antes, A. L., Chibnall, J., Baldwin, K. A., Tait, R. C., Vander Wal, J. S., & DuBois, J. M. (2016). Making professional decisions in research: Measurement and key predictors. Accountability in Research, 23(5), 288-308. PubMed link: https://www.ncbi.nlm.nih.gov/pubmed/27093003

DuBois, J. M., Chibnall, J. T., Tait, R. C., Vander Wal, J. S., Baldwin, K. A., Antes, A. L., & Mumford, M. D. (2016). Professional decision-making in research (PDR): The validity of a new measure. Science and Engineering Ethics, 22(2), 391-416. PubMed link: https://www.ncbi.nlm.nih.gov/pubmed/26071940

GOOD CLINICAL PRACTICE (GCP) KNOWLEDGE TEST

The Good Clinical Practice (GCP) Knowledge Test is a 32-item measure that assesses a clinical research coordinator’s knowledge of good clinical practices in four competency domains: 1) clinical trial operations, 2) study site management, 3) ethical and participant safety considerations, and 4) data management and informatics. In a GCP validation study of 625 clinical research coordinators (CRCs), the sample had a mean score of 24.8 (SD = 3.8) and a median score of 25. The GCP is positively related to CRC experience and formal certification and training, providing evidence for the validity of the measure. That is, the GCP accurately measures CRC knowledge. The GCP has adequate internal consistency (α = .69), meaning that the test items reliably measure the knowledge of CRCs. The measure can be used to evaluate the effectiveness of GCP training programs and the knowledge of clinical research associates. It can also be used in research contexts where knowledge of GCP is a predictor or outcome.
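As a rough illustration of how a training evaluator might use the validation-sample statistics above, the snippet below converts a hypothetical trainee’s raw GCP score into a percent-correct value and a z-score relative to the reported mean and SD. The one-point-per-item scoring format is an assumption made for this example.

```python
# Sketch only: comparing a hypothetical trainee's GCP total score with the
# validation-sample statistics reported above (mean 24.8, SD 3.8, 32 items).
VALIDATION_MEAN = 24.8
VALIDATION_SD = 3.8
N_ITEMS = 32  # assumes one point per correct item

def gcp_summary(n_correct: int) -> dict:
    """Return raw score, percent correct, and z-score vs. the validation sample."""
    pct = 100 * n_correct / N_ITEMS
    z = (n_correct - VALIDATION_MEAN) / VALIDATION_SD
    return {"raw": n_correct, "percent_correct": round(pct, 1), "z_vs_validation": round(z, 2)}

print(gcp_summary(28))  # {'raw': 28, 'percent_correct': 87.5, 'z_vs_validation': 0.84}
```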

Key Publications

DuBois, J. M., Mozersky, J. T., Antes, A. L., & Baldwin, K. A. (under review). Assessing Knowledge of Good Clinical Practice: An Evaluation of the State of the Art and a Test Validation Study.

Mozersky, J. T., Antes, A. L., Jenkerson, M., Baldwin, K., & DuBois, J. M. (under review). How do clinical research coordinators learn Good Clinical Practice? A mixed methods study of factors that predict uptake of knowledge.

ATTITUDES TOWARD GENOMICS AND PRECISION MEDICINE (AGPM)

The Attitudes toward Genomics and Precision Medicine (AGPM) is a 37-item measure that assesses perceived benefits of and concerns about precision medicine and genomics activities. During development, the AGPM was factor analyzed, a statistical method for determining which items group together into “factors.” The analysis showed that the AGPM comprises five factors: 1) benefits, 2) privacy concerns, 3) embryo and abortion concerns, 4) gene editing and nature concerns, and 5) social justice concerns. Higher scores on the AGPM indicate a greater level of concern with precision medicine activities.
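The sketch below shows the general kind of exploratory factor analysis described here: it fits a five-factor model to simulated 37-item responses and groups items by their largest rotated loading. It uses scikit-learn on invented data and is not the analysis conducted during AGPM development.

```python
# Sketch of an exploratory factor analysis on simulated 37-item responses
# (hypothetical data; not the AGPM dataset or the BRC's analysis code).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_respondents, n_items, n_factors = 500, 37, 5

# Simulate responses with a latent 5-factor structure plus noise
loadings = rng.normal(scale=0.7, size=(n_factors, n_items))
latent = rng.normal(size=(n_respondents, n_factors))
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
fa.fit(responses)

# Items whose largest rotated loading falls on the same factor "group together"
assignments = np.abs(fa.components_).argmax(axis=0)
for f in range(n_factors):
    print(f"factor {f}: items {np.where(assignments == f)[0].tolist()}")
```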

The AGPM has excellent alpha reliability (α = .91), and the five subscales have alpha reliabilities ranging from .71 to .90. This means that the items on the AGPM consistently measure individual perceptions of benefits and concerns about precision medicine and genomics activities. Overall scores on the AGPM are positively correlated with related measures of attitudes toward genetic testing, scores on the Systems Trust Index, regular religious practice, and political orientation. Taken together, these correlations provide strong validity evidence for the measure, indicating that the test accurately measures what it is intended to measure. The AGPM can be used to assess patient attitudes in a clinical setting and can also be used in genomics research where patient attitudes are a predictor or outcome.

Key Publications

Publications in progress.

HOW I THINK ABOUT RESEARCH (HIT-Res)

The How I Think about Research (HIT-Res) is a measure that examines the use of cognitive distortions such as blaming others and minimizing or mislabeling to justify research compliance and integrity violations. Respondents are asked to rate the extent to which they agree or disagree with a set of statements, with higher scores indicating higher use of cognitive distortions. The HIT-Res has excellent alpha reliability (α = .91), meaning that the test items reliably measure cognitive distortions. Scores on the HIT-Res are positively correlated with measures of moral disengagement and cynicism and negatively correlated with professional decision-making skills, demonstrating construct validity. That is, the HIT-Res accurately measures cognitive distortions. This measure can be used to assess outcomes of research ethics training programs or to examine factors influencing research integrity.
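To illustrate what the validity correlations described above look like in practice, the sketch below computes Pearson correlations between simulated HIT-Res scores and two simulated criterion measures, one expected to correlate positively and one negatively. All data and variable names are hypothetical.

```python
# Illustrative sketch of a construct validity check using simulated scores
# (hypothetical data; not the HIT-Res validation analysis).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 300
hit_res = rng.normal(size=n)
# Simulated criterion expected to correlate positively with cognitive distortions
moral_disengagement = 0.6 * hit_res + rng.normal(scale=0.8, size=n)
# Simulated criterion expected to correlate negatively (decision-making skill)
pdr_scores = -0.5 * hit_res + rng.normal(scale=0.9, size=n)

for name, other in [("moral disengagement", moral_disengagement),
                    ("professional decision-making", pdr_scores)]:
    r, p = pearsonr(hit_res, other)
    print(f"HIT-Res vs {name}: r = {r:.2f}, p = {p:.3f}")
```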

Key Publications

DuBois, J. M., Chibnall, J. T., & Gibbs, J. (2016). Compliance disengagement in research: Development and validation of a new measure. Science and Engineering Ethics, 22, 965-988. PubMed link: https://www.ncbi.nlm.nih.gov/pubmed/26174934

The HIT-Res has been used as an outcome measure in the following:

DuBois, J. M., Chibnall, J. T., Tait, R., & Vander Wal, J. S. (2018). The Professionalism and Integrity in Research Program: Description and preliminary outcomes. Academic Medicine, 93(4), 586-592. PubMed link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5738297/

The HIT-Res has been used as a predictor variable in the following:

DuBois, J. M., Chibnall, J. T., Tait, R. C., Vander Wal, J. S., Baldwin, K. A., Antes, A. L., & Mumford, M. D. (2016). Professional decision-making in research (PDR): The validity of a new measure. Science and Engineering Ethics, 22(2), 391-416. PubMed link: https://www.ncbi.nlm.nih.gov/pubmed/26071940

Antes, A. L., Chibnall, J., Baldwin, K. A., Tait, R. C., Vander Wal, J. S., & DuBois, J. M. (2016). Making professional decisions in research: Measurement and key predictors. Accountability in Research, 23(5), 288-308. PubMed link: https://www.ncbi.nlm.nih.gov/pubmed/27093003

VALUES IN SCIENTIFIC WORK (VSW)

The Values in Scientific Work (VSW) is a 35-item measure that assesses the level of importance scientists attach to the different intrinsic, extrinsic, and social values that motivate their work. During VSW development, items were factor analyzed, meaning that items measuring a similar construct were clustered together in the same factor. The VSW comprises eight factors: 1) autonomy, 2) research ethics, 3) social impact, 4) income, 5) collaboration, 6) innovation and growth, 7) conserving relationships, and 8) job security.

The VSW has good internal consistency, with Cronbach’s alphas greater than .70 for seven of the eight factors. That is, the test is stable and reliable in its measurement of the intrinsic, extrinsic, and social values of scientists. Scores on the VSW are correlated with global values with which shared conceptual overlap is expected and are not correlated with global values that are theoretically more distinct, providing evidence for both convergent and discriminant validity. That is, the VSW accurately measures the values of scientists. The VSW can be used for professional development of scientists and in research where these values are a predictor or outcome variable of interest.
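As a rough illustration of subscale-level reliability, the sketch below computes Cronbach’s alpha separately for a few hypothetical VSW subscales. The item-to-factor mapping and respondent data are invented for demonstration and do not reflect the actual VSW scoring key.

```python
# Sketch only: per-subscale Cronbach's alpha, given a hypothetical mapping of
# 35 items to factors (only three example subscales shown; indices invented).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(3)
responses = rng.normal(size=(250, 35)) + rng.normal(size=(250, 1))  # simulated respondents x items

subscales = {  # hypothetical item indices for illustration only
    "autonomy": [0, 1, 2, 3],
    "research ethics": [4, 5, 6, 7, 8],
    "social impact": [9, 10, 11, 12],
}
for name, idx in subscales.items():
    print(f"{name}: alpha = {cronbach_alpha(responses[:, idx]):.2f}")
```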

Key Publications

English, T., Antes, A. L., Baldwin, K. A., & DuBois, J. M. (2018). Development and Preliminary Validation of a New Measure of Values in Scientific Work. Science and Engineering Ethics, 24, 393-418. PubMed link: https://www.ncbi.nlm.nih.gov/pubmed/28597222