
Journal of Research Practice

Volume 8, Issue 1, Article M2, 2012


Main Article:
Institutional Barriers to Research on Sensitive Topics: Case of Sex Communication Research Among University Students

Carey M. Noland
Department of Communication Studies, Northeastern University
204 Lake Hall, Boston, MA 02115, UNITED STATES
c.noland@neu.edu

Abstract

When conducting research on sensitive topics, it is challenging to use new methods of data collection, given the apprehensions of Institutional Review Boards (IRBs). This is especially worrying because sensitive research topics often require novel approaches. In this article, a brief personal history of navigating the IRB process for conducting sex communication research is presented, along with data from a survey that tested assumptions long held by many IRBs. The results support some of the assumptions IRBs hold about sex communication research, but not others.

Index Terms: research experience; research context; research ethics; research quality; institutional review board; peer interviewer; sensitive topic; sex communication research

Suggested Citation: Noland, C. M. (2012). Institutional barriers to research on sensitive topics: Case of sex communication research among university students. Journal of Research Practice, 8(1), Article M2. Retrieved from http://jrp.icaap.org/index.php/jrp/article/view/332/262



1. Institutional Review Boards: Role and Power

Whether quantitative or qualitative methods are employed, there are some topics for which data collection is more difficult. One of these sensitive topics is sex. Investigators have responded to these challenges by conceiving new methods and instruments that are better suited to certain topics or populations (Lee, 1993); however, Institutional Review Boards (IRBs) often do not know how to respond when presented with such research topics. In this article I discuss the challenges associated with conducting sensitive research and negotiating the IRB process. Data from a survey of 2,851 university students regarding their perceptions of a novel method of data collection in sex research--peer interviewing--are presented to address some of the concerns IRBs may have about this method.

I first came to understand qualitative research as the process of using lived experiences and socially constructed performances to collect real-time narratives and interactions that are then transcribed and translated into a metanarrative of knowledge. Key issues in qualitative research include gaining access to and recruiting participants and exploring and anticipating how participants will respond to the study. These questions are especially relevant when researching sensitive topics--taboo topics where the qualitative researcher relies on participants to offer in-depth responses to questions about how they have constructed or understood their experience--the thick description (Jackson, Drummond, & Camara, 2007).

Without dwelling on the benefits of qualitative research, let us acknowledge that scholars generally recognize that qualitative research methods address questions of “how and why” and emphasize the “inner world” of the values that motivate human behavior (Chesebro & Borisoff, 2007). This emphasis is what makes qualitative research invaluable for investigating human experience, including experiences relating to sensitive topics such as sex. Many IRBs understand this, yet they seem to have difficulty accurately assessing the potential harm involved in qualitative research on sex.

This article is born out of my on-going struggle as a social scientist to conduct research on sensitive topics in a manner that is acceptable to the review board of my home institution. As a sex researcher, like many other sex researchers, I am continuously denied approval or asked to compromise my research process so radically that the original study becomes untenable. While I fully acknowledge that the IRB is an important entity and that research subjects ought to be protected, I contend that when it comes to sensitive topics, many IRBs err on the side of caution, to the detriment of research quality.

Historically, the IRB was formed to protect research participants, giving particular care to vulnerable persons and populations, but what happens when the subject population is not vulnerable, yet the topic is perceived to be sensitive? Does the designation of sensitive or taboo topic alone move a non-vulnerable population of research subjects into the high-risk category? I contend that IRBs operate under this assumption.

In the last decade there has been a significant increase in the number of scholarly books and journal articles dedicated to questioning the role of IRBs in non-medical research. For example, in Ethical Imperialism: Institutional Review Boards and the Social Sciences, Zachary Schrag (2010) clearly documents how IRBs have overstepped their bounds, threatening academic freedom and disciplinary independence. Schrag argues that IRBs have been cowed by the threat of lawsuits and U.S. federal government policies. He clearly demonstrates that most IRBs, like the commission that issued the 1979 Belmont Report (current federal guidelines for research are based on this report), do not understand the difference between social science research and biomedical research. The result has been disastrous for social scientists in their pursuit of new understanding of people.

An entire issue of a journal in my field, Journal of Applied Communication Research (Volume 33, Issue 3, 2005), was devoted to understanding, from multiple perspectives, the role of IRBs and institutional power. In these articles the journal editors were “struck by the powerlessness expressed in many of the [researchers’] narratives” and “equally surprised by the limited use of resistance to power expressed in them. To be sure, a few narratives expressed forms of resistance ranging from covert strategies . . . but most authors recommended compliance and working within the system rather than resisting, subverting or changing the system” (Dougherty & Kramer, 2005b, p. 278). Indeed, the special issue documents many instances of abuse of IRB power--cases where the IRB restricted access to legitimate research or imposed extensive delays on studies that should have been exempt according to federal guidelines. Such delays often have economic implications, especially when grant monies are involved, and can significantly affect career trajectories, particularly for tenure-track faculty. It is therefore not surprising that most researchers recommended compliance.

In sum, the special issue highlighted three ways the IRB has become especially problematic. First, Dougherty and Kramer (2005b) claim there has been a shift in many IRBs from the protection of human subjects to the protection of the institution (Annas, 2001). Second, rather than supporting research, IRBs have increasingly begun to shape research by mandating changes in consent forms, questions asked, and methodology. And third, the IRB is “charged with overseeing the research process of the university, yet, ironically there is no-one assigned to oversee the IRB. This gives the IRB an unbridled ability to monitor and expand its oversight of research without anyone reining it in when it goes astray” (pp. 186-187). Furthermore, “there is little room to appeal what at times appear to be arbitrary decisions and directives” (p. 187). While there is no shortage of documentation and research detailing researchers’ concerns and problems with their IRBs, there has been little progress in circumventing the IRB process or in stemming the current, harmful trends many IRBs have adopted. Given that IRBs tend to protect their institutions and members of their institutions (e.g., students), it makes sense that they are even more inclined to be cautious when it comes to research on sensitive topics.

2. What are Sensitive Research Topics?

Sensitive topics of research are topics that participants may feel uncomfortable discussing. These include taboo topics, topics associated with shame or guilt, and topics that generally reside in the private spheres of our lives.

Researchers who investigate sex have differing opinions on the overall sensitivity of the topic. Many contend that it is not as sensitive as some other areas of research (see Ford & Norris, 1991; Johnson & DeLamater, 1976), while others contend that it is highly sensitive (see Crawford & Popp, 2003; Wiederman, 2004). However, perceptions of sensitivity are socially influenced, culturally determined, and can be highly subjective for each individual at any given point in time. There is a strong belief in popular culture--and certainly among numerous IRB members in the U.S.--that the topic of sex is more sensitive than other topics of research. Many people doing research on topics pertaining to sex or sexuality have reported difficulties obtaining IRB approval. When faced with sensitive research topics, many IRBs recommend the use of anonymous surveys, rather than face-to-face interviews, to protect subjects. In doing so, however, the vital qualitative insight that could be gained from in-depth interviews is lost.

A careful examination of the mission of IRBs makes clear that potential participants need to be adequately warned of a study’s potential risks so they can decide whether or not to participate. However, in my experience, IRBs are not willing to let non-vulnerable populations, such as medical doctors or college students, make those decisions. Unfortunately, as researchers who study IRBs note, the original mission of protecting human subjects may have been overtaken by institutional conservatism:

[O]ur concerns were more related to the role and function of the IRB, its constantly shifting and changing policies, and its powerful control of the research process that seemed to have as much to do with protecting universities and building bureaucracies to manage the review process as with protecting human subjects. (Dougherty & Kramer, 2005a, p. 184)

3. Sex Research and IRBs

As a teacher-researcher, I have students complete a primary research project as part of the undergraduate capstone course. While the research is sometimes quantitative, oftentimes the projects necessitate conducting qualitative interviews or focus group discussions. The research topics emerge organically from the class. Based on their coursework during their time at the university (e.g., many take courses in health communication, interpersonal communication, and sexual communication), the students brainstorm research projects, discuss them as a class, and then vote on which project to conduct. I often encourage the students to choose research topics in my area of expertise (communication about sex) for obvious reasons: it is easier for me to mentor them in the project if I have expertise in the area. In the ten years I have been doing this, whenever the class proposed doing anything related to sexual relationships, the protocol would go to the full IRB review process and rarely pass. Each time, the IRB recommended that, rather than have qualitative interviews conducted by peer interviewers (i.e., the students), we should consider an anonymous quantitative survey. It is important to note that none of the proposed projects asked students about their sexual behaviors; rather, the questions were entirely focused on communication about sex (e.g., Whom do you talk to about sex? Do you talk about safe sex?).

While the IRB members at my institution do not discount the importance of qualitative research, they seem to believe that only the Principal Investigator (PI)--the professor--should conduct interviews. They seem to think the potential harm to participants is too great to risk peer interviews, even after the peers have been trained and the participants have received full disclosure about potential harm. Even though university students are not classified as vulnerable subjects, the IRB has treated them as if they were unable to understand the risks of the research and to consent or decline to participate based on those perceived risks.

There is limited empirical data on how harmful participation in sex research is to participants. Indeed, as Kuyper et al. highlight, because of this, “IRBs’ and researchers’ decisions regarding social or psychological research proposals and protocols seem, due to a lack of sufficient empirical data, mostly based on worst-case scenarios, assumptions, and anecdotes” (Kuyper, de Wit, Adam, & Woertman, 2012, p. 497). This has certainly been my experience. In a proposed study of “friends with benefits” (friends who have a casual sexual relationship) involving students from my university, I was brought before the full IRB committee to answer questions on the protocol. Many of the concerns were worst-case scenarios: “What if a college student participant who was raped by a friend commits suicide because they were interviewed?” “What if parents find out we are doing this kind of research?” One member did not think “our” students participated in these kinds of casual sexual encounters. It seemed clear that the board was relying on personal opinions about undergraduate sexual practices, and on the fear that students’ parents might find out about the research, to deny the study.

In a recent and rare study on the potential harm to subjects, Kuyper et al. (2012) surveyed 899 young people (15-25 years) in the Netherlands and found that sex research was not harmful to them, even in cases where a research participant had suffered past abuse. Yeater et al. (2012) recently challenged a related assumption commonly held by IRBs: that questionnaires asking about “sensitive” topics (e.g., trauma and sex) pose more risk to respondents than seemingly innocuous measures (e.g., cognitive tests). They tested this assumption by asking 504 undergraduates to answer either surveys on trauma and sex or measures of cognitive ability, such as tests of vocabulary and abstract reasoning. Participants rated their positive and negative emotional reactions and the perceived benefits and mental costs of participating; they also compared their study-related distress with the distress arising from normal life stressors. The authors concluded that sex surveys are not riskier; in fact, students reported deriving greater value from participating in the sex research (Yeater, Miller, Rinehart, & Nason, 2012).

Yet, there are very few studies that quantify participants’ perceptions of sex research. In order to assess whether my IRB was operating under questionable, subjective assumptions about the undergraduate student body at my university, I designed a survey (which the IRB approved).

The survey was posted on the website of my university, a large private university in the U.S. Full-time registered students at the university were invited to participate by clicking on a link to a university-sponsored website where many surveys regarding university matters are administered. Approval from the IRB and the website administrators was obtained before posting the survey. Confidentiality and anonymity were explained to participants in detail. While the university could theoretically link a person to their survey results through their e-mail login, it was not possible for the PI to access this information; this was clearly explained in the electronic letter of consent. Participation was completely voluntary and no compensation was offered. The survey was available to students for two weeks. A page explaining the rationale for the research and providing contact information for the PI and the IRB preceded the survey, and it explained that consent was implied by participating in the confidential survey. Data were obtained from 2,851 completed surveys. While the resulting sample was slightly older than most college samples, its demographic profile is consistent with similar studies of college students.

To determine whether sex was a stressful research topic for participants, ten questions about the stress levels associated with different topics were administered using a three-level Likert-type scale: very stressful, a little stressful, and not stressful. Topics included dietary habits, TV shows, university courses, family, finances, drinking habits/drug use, and sexual practices. These were all topics of research surveys that had been posted on the website in recent months or topics of major research studies on campus. Results showed that participants expected interviews about sexual practices to be the most stressful of these topics, indicating that sex is indeed a sensitive topic. I was surprised by these results given the depth and frequency of talk about sex the participants reported in the survey. The IRB was correct: students perceived sex as a more stressful interview topic than other topics.

Questions were then asked concerning students’ honesty and comfort levels when interviewed by different categories of people. Participants answered seven questions about how honest and how comfortable they would be talking about sex with different people in a research setting. The categories of people were (a) friend, (b) nurse, (c) physician, (d) psychologist or mental health professional, (e) professor, (f) graduate student researcher, and (g) peer researcher. The survey provided this definition of a peer researcher: a peer researcher is an acquaintance (someone you know) who is approximately your age and attends your school and who has been trained to collect information for a research study in a professional manner. The three-level Likert scale for the honesty questions was: very honest, somewhat honest, and not honest; for the comfort questions it was: very comfortable, somewhat comfortable, and not comfortable. This was followed by an open-ended question: What concerns, if any, would you have about sharing information about your own sexual behavior with a peer researcher? Tables 1 and 2 present a summary of the responses.

Table 1. Honesty Levels of Interviewees With Different Interviewer Categories

Interviewer Category    Not Honest     Somewhat Honest   Very Honest     Total
Peer researcher         498 (20%)      1129 (45.3%)      864 (34.7%)     2491
Physician               77 (3.1%)      872 (35%)         1540 (61.9%)    2489
Friend                  24 (1%)        545 (21.9%)       1920 (77.1%)    2489
Nurse                   110 (4.4%)     973 (39.1%)       1404 (56.5%)    2487
Psychologist            138 (5.6%)     1000 (40.2%)      1347 (54.2%)    2485
Professor               986 (39.8%)    984 (39.7%)       507 (20.5%)     2477
Graduate student        587 (26.9%)    1066 (48.8%)      830 (38.0%)     2183

Table 2. Comfort Levels of Interviewees With Different Interviewer Categories

Interviewer Category    Not Comfortable   Somewhat Comfortable   Very Comfortable   Total
Peer researcher         1177 (47.2%)      1078 (43.2%)           238 (9.5%)         2493
Physician               518 (20.8%)       1485 (59.6%)           488 (19.6%)        2491
Friend                  59 (2.4%)         717 (28.8%)            1717 (68.8%)       2493
Nurse                   634 (25.5%)       1436 (57.7%)           419 (16.8%)        2489
Psychologist            636 (25.6%)       1394 (56.5%)           456 (18.3%)        2486
Professor               2128 (85.5%)      281 (11.3%)            79 (3.2%)          2488
Graduate student        1404 (56.4%)      884 (35.4%)            203 (8.1%)         2491

Results indicate that students would be most comfortable with friends, followed by physicians, psychologists, and nurses. The least acceptable interviewer was the professor. Here the IRB was operating under false assumptions. This was a significant finding, especially in light of the capstone course and my quest to have students acting as peer interviewers to complete data collection. When asked to choose a single category of interviewer for a sex-related interview, students overwhelmingly chose a physician (Table 3), followed by a peer researcher. A chi-square test was run to check for differences in interviewer preference, and the difference was significant (χ2[6] = 864, p = .001). Unlike the data presented in Tables 1 and 2, where participants could rate each interviewer category on levels of honesty and comfort, in this question they were asked to choose the one category of interviewer with whom they would feel most comfortable, and physician was clearly the most frequent choice.

Table 3. Students’ Choice of Interviewer Category*

Interviewer Category                        Frequency   Percentage
Physician                                   700         30.2%
Peer researcher                             471         20.3%
Psychologist/Mental health professional     409         17.7%
Nurse                                       279         12.0%
Graduate student researcher                 238         10.3%
None of these                               197         8.5%
Professor                                   23          1.0%
TOTAL                                       2317        100%

* Note. The category of “Friend” was not included as it is not ethically possible to have friends interviewing one another without training or guidance--technically, they would be peer researchers.
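
For readers who wish to check the chi-square statistic reported above, the following minimal sketch (in Python, and not the author's original analysis script) recomputes it from the Table 3 frequencies, assuming equal expected counts across the seven response categories, which is consistent with the reported six degrees of freedom.

from scipy.stats import chisquare

# Observed frequencies from Table 3 (N = 2317), in the order listed.
observed = [700, 471, 409, 279, 238, 197, 23]

# With no expected frequencies supplied, chisquare() tests the observed counts
# against equal expected counts across the seven categories (df = 6).
result = chisquare(observed)
print(f"chi-square(6) = {result.statistic:.1f}, p = {result.pvalue:.2e}")
# Prints chi-square(6) = 864.1, with a p value far smaller than .001.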

Given the results of the survey, it seems reasonable to conclude that if peer interviewers chose participants whom they consider friends, the participants would feel comfortable offering honest and in-depth responses to questions. These findings present a dilemma for IRBs: the more sensitive the topic, the more likely they are to recommend that the PI (a professor in most cases) conduct the interviews, perhaps because the PI is seen as the most qualified and trained. Students at my university indicated that participating in sex research would be more stressful than other kinds of research, yet the survey also demonstrated that professors are the least acceptable interviewers. Using professors as interviewers could therefore significantly limit the pool of participants and decrease the quality of the data.

In the past, I have tried numerous avenues to convince the IRB to allow me to use peer researchers. From what I understand, the board often has the university’s general counsel review my protocols and then comes back to me with outlandish worst-case scenarios. For example: “What if there is a future lawsuit involving one of your participants? Would you or the interviewee be required to testify and hand over your data?” Since all data would be anonymous to the PI (only the peer interviewer would know the name of the participant and would use a pseudonym for the interviewee), how could this be? It seems highly unlikely that anyone reading the resulting journal article could connect a narrative to a participant who happened to be involved in a lawsuit about that very topic. On one occasion, I met with the Vice-Provost of Research to argue that the sex research project I had proposed should be exempt according to federal guidelines; he agreed after looking at the protocol and said he would talk to the IRB. After six months of negotiating with the IRB, I dropped the study. In the past few years we have done only the most innocuous research in my capstone courses; for example, this semester we are doing a project on communication about nutrition. The limitations imposed by the IRB are unnecessarily depriving students of the experience of conducting in-depth interviews as part of a qualitative research process.

People talk about sex with their friends and are even more likely to share intimate details, such as their sexual histories or sexual likes and dislikes, with a friend than with their current sexual partner (Noland, 2006). By finding reliable and valid ways to incorporate peer researchers into data collection, we have the potential to increase the quality of our qualitative research endeavors. This is especially important for sensitive topics, as much of our research in these areas deals with serious, life-threatening issues that could be better understood and ameliorated through qualitative analysis, such as HIV and other sexually transmitted infections spread through unsafe sex practices. At some point, we must trust our participants--particularly the oft-employed college student population--to make their own decisions about the level of potential harm posed by participation in sex research. However, convincing IRBs of the appropriateness of this method will remain a challenge.

References

Annas, G. J. (2001). Reforming informed consent to genetic research. Journal of the American Medical Association, 286(18), 2326-2328.

Chesebro, J. W., & Borisoff, D. J. (2007). What makes qualitative research qualitative? Qualitative Research Reports in Communication, 8(1), 3-14.

Crawford, M., & Popp, D. (2003). Sexual double standards: A review and methodological critique of two decades of research. Journal of Sex Research, 40(1), 13-26.

Dougherty, D. S., & Kramer, M. W. (2005a). A rationale for scholarly examination of institutional review boards: A case study. Journal of Applied Communication Research, 33(3), 183-188.

Dougherty, D. S., & Kramer, M. W. (2005b). Organizational power and the institutional review board. Journal of Applied Communication Research, 33(3), 277-284.

Ford, K., & Norris, A. (1991). Methodological considerations for survey research on sexual behavior: Urban African American and Hispanic youth. Journal of Sex Research, 28(4), 539-555.

Jackson, R. L., Drummond, D. K., & Camara, S. (2007). What is qualitative research? Qualitative Research Reports in Communication, 8(1), 21-28.

Johnson, W. T., & DeLamater, J. D. (1976). Response effects in sex surveys. Public Opinion Quarterly, 40(2), 165-181.

Kuyper, L., de Wit, J., Adam, P., & Woertman, L. (2012). Doing more good than harm? The effects of participation in sex research on young people in the Netherlands. Archives of Sexual Behavior, 41(2), 497-506.

Lee, R. (1993). Doing research on sensitive topics. London: Sage.

Noland, C. M. (2006). Listening to the sound of silence: Gender roles and communication about sex in Puerto Rico. Sex Roles: A Journal of Research, 55(5 & 6), 283-294. Retrieved from http://link.springer.com/content/pdf/10.1007%2Fs11199-006-9083-2

Schrag, Z. (2010). Ethical imperialism: Institutional review boards and the social sciences. Baltimore, MD: Johns Hopkins University Press.

Wiederman, M. (2004). Methodological issues in studying sexuality in close relationships. In J. Harvey, A. Wenzel, & S. Sprecher (Eds.), The handbook of sexuality in close relationships (pp. 33-56). Mahwah, NJ: Lawrence Erlbaum.

Yeater, E., Miller, G., Rinehart, J., & Nason, E. (2012). Trauma and sex surveys meet minimal risk standards: Implications for institutional review boards. Psychological Science, 23(7), 780-787.



Received 10 September 2012 | Accepted 20 November 2012 | Published 24 November 2012