
Journal of Research Practice

Volume 12, Issue 2, Article V3, 2016


Viewpoints & Discussion:
Self-Reporting in Plagiarism Research: How Honest is This Approach?

Julia Colella-Sandercock
Faculty of Education, University of Windsor
Windsor, Ontario, N9B 3P4, CANADA
colell2@uwindsor.ca

Abstract

Plagiarism is a growing phenomenon in higher education institutions. The primary method used to measure student engagement in plagiarism is self-reporting. Self-reporting is problematic for a number of reasons, and with respect to investigating plagiarism rates, it does more harm than good. Inaccuracy in student self-reported plagiarism rates, limited student understanding of plagiarism, the inability to compare findings across studies, and requesting students self-report within a specified timeframe lead to reliability and validity concerns in these studies. This article addresses these issues and provides suggestions for researchers when investigating plagiarism rates.

Index Terms: plagiarism; higher education; self-reporting; academic integrity

Suggested Citation: Colella-Sandercock, J. (2016). Self-reporting in plagiarism research: How honest is this approach? Journal of Research Practice, 12(2), Article V3. Retrieved from http://jrp.icaap.org/index.php/jrp/article/view/558/456



1. Plagiarism Research in Higher Education

The literature on academic misconduct has been growing consistently, and studies of plagiarism, one form of academic misconduct, have multiplied over the past 20 years. Plagiarism research in higher education began in the 1960s (Cummings, Maddux, Harlow, & Dyas, 2002). Bowers (1964) was one of the first researchers to examine academic misconduct in higher education (Ballantine & McCourt Larres, 2010). Approximately 5,500 students, enrolled in different years of study at 99 higher education institutions in the United States, and 600 deans participated in the study (Stout, 2013). Bowers found that 50% of students self-reported having cheated at some point during their academic career and that few of these students were reported to authorities (Stout, 2013). Thirty years later, McCabe and Trevino (1996) obtained results similar to Bowers’. Their study, the largest plagiarism study to date, involved 6,000 student participants. Interestingly, McCabe and Trevino’s (1996) study was conducted in the Internet era, whereas Bowers’ (1964) study preceded the World Wide Web.

It is not uncommon for plagiarism studies to examine the plagiarism rate, which is typically measured through student self-reporting on surveys or in interviews (Park, 2003; Risquez, O’Dwyer, & Ledwith, 2013; Walker, 2010; Whitley, 1998). In these studies, students are presented with a list of behaviours and asked to report their engagement in each. A timeframe is also applied: students are usually asked to report their engagement in each behaviour during the previous year or during their post-secondary studies to date. For example: “In the previous 12 months, have you borrowed a sentence from an online source and failed to cite the material?”

Although questions similar to the example above are common in plagiarism studies, they are problematic for a number of reasons. Some of these reasons are outlined below, along with suggested alternatives for measuring plagiarism behaviour.

2. Issues in Measuring Plagiarism Behaviour

(a) Inaccuracy. Plagiarism rates in the literature may be under-reported (Culwin, 2006; Thurmond, 2010). Self-reporting is questionable in and of itself, and when it is the method used to collect plagiarism information, “it is even more challenging” (Scanlon & Neumann, 2002, p. 378; see also Kier, 2014; Youmans, 2011). It is not uncommon for students to under-report their engagement in dishonest behaviour, even when they are informed that their responses are anonymous.

(b) Limited Understanding of Plagiarism. Some studies ask students to report their overall engagement in plagiarism, for instance: “How often have you engaged in plagiarism within the previous year?” Such a question assumes that students understand what plagiarism is. Yet research suggests that students often engage in plagiarism precisely because of their limited knowledge of it. If students are self-reporting on a behaviour they do not fully understand, the results “cannot be taken as entirely reliable” (Power, 2009) and must be interpreted with extreme caution (Dahl, 2007; Park, 2003).

(c) Comparisons Across Studies. It is difficult to compare plagiarism rates across studies. Some studies focus on plagiarism within specific disciplines and then compare the rates they find to rates reported elsewhere. This leads to unreasonable conclusions: one cannot assume that participants across studies were provided with the same plagiarism information before completing the survey or interview. Some studies provide students with definitions of plagiarism, whereas others do not, and definitions also differ across institutions, further limiting comparability (Bennett, 2005). It is unreasonable, and perhaps unethical, for researchers to compare their findings with others’ when the underlying data are not comparable.

(d) Timeframes. Consider again: “How often have you engaged in plagiarism behaviours within the previous year?” Questions of this kind require students to report behaviours from the past 12 months, which assumes they can accurately remember those behaviours. It is doubtful that many students would, and this recall problem undermines the reliability of studies that rely on such timeframes.

Inaccuracy in self-reporting, limited understanding of plagiarism, the inability to compare plagiarism rates across studies, and requiring participants to report on behaviours within a specific timeframe are issues that need to be considered when assessing current plagiarism research. They also highlight the questionable validity of some of this research. Alternatives to student self-reporting need to be considered.

3. Suggested Alternatives

(a) Measure What Participants Do—Not What They Report They Do. This echoes Walker’s (2010) suggestion. Instead of asking participants to self-report engagement in past behaviours, request sample assignments and use them to investigate plagiarism directly. This method may be more time-consuming, but the results would be more credible.

(b) Generate Different Types of Data. Provide participants with the opportunity to discuss their engagement in plagiarism. Rich data obtained through focus groups or interviews can supplement quantitative data. A conversational setting may also reveal students’ attitudes towards and engagement in plagiarism as they develop rapport with the interviewer/focus group facilitator.

(c) Replicate the Study. Through replication, researchers can determine which components of their research design, including data collection, need revision. For example, if different groups of participants complete the same study measuring plagiarism and the results differ significantly, the method needs to be examined. Replicated results carry more credibility, particularly when a plagiarism measure is being developed or an existing measure is being modified.

The number of plagiarism studies is increasing, yet the means of collecting plagiarism data remain questionable and, in some cases, unethical. With limited time and resources, researchers may hesitate to rethink and change their current ways of collecting plagiarism data, and the continuing pressure to publish may make them reluctant to adopt new practices. Inaccuracy in reporting, limited participant understanding of plagiarism, the use of timeframes (such as “within the last 12 months”), and the comparison of plagiarism results across studies are all elements that researchers need to reconsider in future research. If these aspects are overlooked, the value of the research being published becomes questionable. The suggestions above can enhance the accuracy of results.

References

Ballantine, J., & McCourt Larres, P. (2010). Perceptions of authorial identity in academic writing among undergraduate accounting students: Implications for unintentional plagiarism. Accounting Education, 21(3), 289-306.

Bennett, R. (2005). Factors associated with student plagiarism in a post-1992 university. Assessment & Evaluation in Higher Education, 30(2), 137-162.

Culwin, F. (2006). An active introduction to academic misconduct and the measured demographics of misconduct. Assessment & Evaluation in Higher Education, 31(2), 167-182.

Cummings, R., Maddux, C., Harlow, S., & Dyas, L. (2002). Academic misconduct in undergraduate teacher education students and its relationship to their principled moral reasoning. Journal of Instructional Psychology, 29(4), 286-296.

Dahl, S. (2007). Turnitin®: The student perspective on using plagiarism detection software. Active Learning in Higher Education, 8(2), 173-191.

Kier, C. (2014). How well do Canadian distance education students understand plagiarism? The International Review of Research in Open and Distance Learning, 15(1), 227-248. Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/1684/2767

McCabe, D., & Trevino, L. (1996). What we know about cheating in college: Longitudinal trends and recent developments. Change, 28(1), 28-33.

Park, C. (2003). In other (people’s) words: Plagiarism by university students—Literature and lessons. Assessment & Evaluation in Higher Education, 28(5), 471-488.

Power, L. (2009). University students’ perceptions of plagiarism. The Journal of Higher Education, 80(6), 643-662.

Scanlon, P., & Neumann, D. (2002). Internet plagiarism among college students. Journal of College Student Development, 43(3), 374-385.

Risquez, A., O’Dwyer, M., & Ledwith, A. (2013). ‘Thou shalt not plagiarise’: From self-reported views to recognition and avoidance of plagiarism. Assessment & Evaluation in Higher Education, 38(1), 34-43.

Stout, D. (2013). Teaching students about plagiarism: What it looks like and how it is measured (Unpublished doctoral dissertation). Western Michigan University, Michigan.

Thurmond, B. (2010). Student plagiarism and the use of a plagiarism detection tool by community college faculty (Unpublished doctoral dissertation). Indiana State University, Indiana.

Walker, J. (2010). Measuring plagiarism: Researching what students do, not what they say they do. Studies in Higher Education, 35(1), 41-59.

Whitley, B. (1998). Factors associated with cheating among college students: A review. Research in Higher Education, 39(3), 235-274.

Youmans, R. (2011). Does the adoption of plagiarism-detection software in higher education reduce plagiarism? Studies in Higher Education, 36(7), 749-761.

 


Received 22 December 2016 | Accepted 17 January 2017 | Published 24 January 2017