Research Design:
Reflections on the Use of Autovideography in an Undergraduate Education Context
Russell Butson & Carla Thomson
Higher Education Development Centre, 65-75 Union Place West, University of Otago
Dunedin, NEW ZEALAND
carla.thomson@otago.ac.nz
In a recent study of undergraduates’ use of information and communication technologies to support their academic work, we asked students to make video recordings of their personal study sessions. Our motivation was to capture their study practice as it occurred rather than relying solely on self-reports of their perceived or remembered practice. As we worked with the participant-created videos, we recognised their uniqueness as sources of evidence and their potential to reveal situated and authentic data. In this article, we identify some of the complex and problematic elements of this method as we trace its evolution in our research practice.
Index Terms: autovideography; visual methods; videography research
Suggested Citation: Butson, R., & Thomson, C. (2011). Reflections on the use of autovideography in an undergraduate education context. Journal of Research Practice, 7(2), Article D1. Retrieved [date of access], from http://jrp.icaap.org/index.php/jrp/article/view/243/236
As computer technologies become progressively more sophisticated, powerful, and ubiquitous, knowledge of how students integrate these technologies into their study practices is becoming vital to the development of higher education curricula. However, there has been little research to date which draws on students’ first-hand accounts, constructions, and experiences of the ways that they use computer technologies to support their learning (Sharpe, Benfield, Lessner, & DeCicco, 2005). This is particularly the case in relation to the learning which students undertake outside the formal instructional contexts of lectures, laboratories, tutorials, and field trips.
We set out to explore this under-researched field and planned a semester-long investigation of how undergraduate students at the University of Otago, New Zealand, used technology in their personal time to support and develop autonomous learning. Our primary methodological concern in this endeavour was to situate data collection as close as possible to students’ technological and academic practices. The challenge was to observe those private technological and academic practices which are ordinarily hidden from faculty and researcher gaze.
We knew from ad hoc discussions with students prior to the study that they found it difficult to articulate how they used technology and the role it played in supporting their study. Typically, students would frame their responses around their use of the institution’s learning management system, e-mail, Google, and Facebook. From these initial discussions it became clear that many students were unaware of the degree to which they employed their personal computers for study. This raised the challenge of observing their actual study sessions.
This article documents how we met this challenge through the use of autovideography. Autovideography allowed us to capture students’ practice of engaging in independent study sessions as it happened, rather than their perceived or remembered practice after the event.
In our planning, we considered (and consequently dismissed) several traditional data collection tools. In the first instance, we dismissed the traditional approaches of surveys and focus group discussions because they were restricted to students’ self-reports of their practice (what they think they did) rather than actual practice (what they actually did). Initially we were concerned about the likelihood of recollection errors occurring due to the tendency for these traditional data collection approaches to be contextually removed (both temporally and spatially) from the practice under investigation. In this, we found we were not alone. Patashnick and Rich (2005) and Starr and Fernandez (2007) concur that participant self-reports of past events and behaviours can be inaccurate and unreliable. In our case, this was likely to be compounded by our experience of students struggling to express even how they actually used their personal computers to support their learning.
Although we sought accurate and reliable data, we were not seeking to uncover “true data” in any positivist sense. We understood whatever data were collected would be someone’s (participants’ or researchers’) selection and/or presentation, someone’s take on reality. This would be true for any recording system whether transcripts of discussions, personal journals, or audio or video recordings. The simple act of allowing students to pick when and what to record in a journal, audio recording, or video capture meant they would have significant control over what we would eventually receive. However, we did not see this as an issue.
With this in mind, and wanting to ensure the inclusion of first-person perspectives in action, we contemplated using researcher observation and participant written journals. However, we eventually dismissed these. We judged observations to be invasive, likely to create “unnatural” experiences, and unable to capture first-person perceptions; we also considered that requiring participants to keep journals would create an unreasonable workload over and above their already busy study schedules.
Our preferred choice was to use participant-created video recordings. This method had the potential to capture both practice as it occurred (fly-on-the-wall view) and student perceptions of practice (as in video journaling) with only a modest degree of effort required by our participants. We envisaged that “giving the natives the camera” (Belk & Kozinets, 2005, p. 130) and distancing our researcher presence would “diminish the reactivity of participants to an outside observer” (Rich & Chalfen, 1999, p. 54), thereby enabling them to be more natural, candid, and self-directive in their behaviours.
The obvious caveat here is that the presence of a camera and potential viewers will affect behaviours. Although we planned to distance ourselves physically and from the locus of control, we did not imagine that we (as researchers and potential viewers) would cease to exist in our participants’ awareness. We were aware that our participants’ processes of video creation could entail them speaking to us. Indeed, we hoped that they would do this. During preliminary discussions with the students, they had asked us whether they should ignore the camera or talk to it. We replied that we were happy either way--whatever they felt comfortable doing. While we did not prescribe how they should undertake the recording process, we did ask them to act as normally as possible and suggested they not submit recordings until they felt they had overcome any concerns regarding the invasive nature of the camera.
Even if the participants did not directly address us, we anticipated some level of “performance.” We were acutely aware that they might perceive the camera as the audience and simply engage in a variety of performance-type behaviours. As Sunderland and Denny (2002, p. 21) note, “respondents are performers, practising culture with every gesture and emphasis--whether in a focus group, their home or producing video diaries.” Given that our intention was not to seek “objective” data, how students would interact with the camera and what they would decide to record were of considerable interest to us: performance was part of our dataset. We were hoping the process would capture a combination of subjective “perspectives of action” (accounts of behaviour) and objective evidence or “perspectives in action” (record of behaviour; Belk & Kozinets, 2005, p. 132). We wanted to know what students defined as “study” (what they selected to record), what they defined as a study period (the duration of the recordings), what they were thinking about as they studied (verbal interactions with the camera), and what they defined as the study space (capture angle).
It was pleasing to see that our respondents’ videos depicted instances of spontaneous and candid behaviour (where participants operated as if the camera was not present or occasionally glanced at the camera) alongside instances of “conscious self-presentation” where participants spoke directly into the camera (Belk & Kozinets, 2005, p. 131). We understood these as being located in contemporary cultural contexts, contexts within which “images are ubiquitous” (Banks, 2007, p. 3) and performance-type video diaries increasingly feature. Elements of our participants’ videos were clearly modelled on forms of popular culture such as reality and confessional television. Confessional, informational, entertainment, and soapbox approaches were all evident in the videos collected. These approaches are illustrated to some extent by the storyboard (a selection of still shots from a researcher-created composite of a number of participants’ videos) in Figure 1.
Figure 1. Approaches to video capture.
Asking research participants to “perform” themselves is not a new concept in a number of academic disciplines (e.g., anthropology, psychology, and education; Patashnick & Rich, 2005). Since Worth and Adair (1972) asked reservation Navajo to create audiovisual representations of their worlds, there have been many instances of “native produced imagery” (Harper, 2005, p. 756). Few of these instances, however, have involved the capture of undergraduate experiences, and to date there are very few instances involving the use of autovideography to capture common everyday practices.
Certainly, asking undergraduate students to point the video lens at their own life and academic behaviour was a new concept and practice for both our student participants and us. Although both researchers had worked with young people in research contexts (including using researcher-elicited “student voice”), we had never worked with video nor intentionally relinquished control of data collection to participants.
Reconceptualising our raw data had implications for our research practice. While we had decided to allow our participants to capture actual behaviours as they occurred through a form of technologically-mediated self-observation, we lacked a clear procedural and conceptual scheme. Our attempts to develop a framework prior to the study proved to be difficult given that we had decided not to impose conditions on our participants. We decided on an iterative approach, developing our method on the fly, through trial and error. While this required increased researcher input, we soon realised a number of benefits. It became clear that the lack of procedural conditions resulted in participants exhibiting a sense of control and ownership over their recordings. Because they did not have to comply with specific data capture requests, participants adopted a variety of self-surveillance approaches. We also noticed that these approaches reflected the personalities and perspectives of the participants. By allowing the participants to define what to capture based on their interpretation of the study, we believe we harvested data very different from what we would have gained if we had adopted a prescribed data capture approach. For instance, one student captured a series of group meetings while another produced a few opinion pieces in which he spoke to the camera about issues that bothered him concerning higher education. While the inclusion of these videos was surprising, it was clear that the participants were actively involved in the process of determining what was applicable in describing the role of computer technology in supporting their learning.
Naturally, embracing new approaches brings methodological issues that must be resolved. As we worked to resolve these dilemmas, we found guidance in the fields of visual methods (Banks, 2001, 2007; Harper, 2005; Holliday, 2004; Rich & Chalfen, 1999; Rose, 2007) and video research in the learning sciences (Goldman, Pea, Barron, & Derry, 2007), and in recent conceptualisations of autovideography developed by qualitative consumer research studies (Belk & Kozinets, 2005; Starr & Fernandez, 2007).
For example, in the early stages of the project, the work of Banks (2001, 2007) and Rose (2007) provided helpful insights regarding the use of visual materials that resonated with our study’s purpose. Specifically, these researchers highlighted the potential of visual methodologies beyond simply finding and using images created by others to illustrate some aspect of a research project. For us, visual method was about actively creating and using visual images. Our participants’ video-recordings were central to our inquiry, underpinning on-going dialogue and analysis and serving as a means to interrogate data we collected from a larger participant base through more conventional data collection tools (interviews and surveys).
We asked our participants to choose from a range of recording devices (digital audio recorders, several types of cassette video cameras, digital video cameras) and to create recordings of their study practices including their personal use of technology. As noted, we issued very little direction beyond this brief. The participants had control over how and what they wished to record. We also encouraged them to review and edit material prior to submitting it. In this way participants were contributing an artefact that they had reflected on and which they believed aligned with the study. It was interesting to note that the participants chose not to edit their records and instead submitted them as captured.
Each week the participants would exchange their video or audio recording media for new ones. While this process was time consuming, it was relatively straightforward and without technical problems. The same, however, cannot be said of the recording devices we used. Because we offered a variety of recording devices, we encountered a number of technical issues--in particular, inconsistent functionality across devices, variations in image quality, and diverse file formats--all of which were time consuming and challenging to resolve. In future we would use devices that produce the same file format, are easy to connect to a computer, and allow fast transfer of data from the device to the computer.
The study employed a grounded theory approach (Charmaz, 2006) and used the qualitative data analysis software NVivo to facilitate the transcribing, coding, and analysis of the video and audio files. However, this was not without its challenges. We soon learnt that NVivo did not handle large video files or certain video formats. This meant that a number of files had to be significantly compressed, or split and reformatted, before they could be added to NVivo. Once we had met these challenges, we found NVivo to be an ideal storage and analysis platform for videography research. Video, audio, pictures, transcripts, and interview and researcher notes could be stored, coded, and linked in one place. On reflection, it is difficult to see how we would have achieved the integration of multiple multimedia data sources in such a systematic fashion without NVivo.
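As an illustration of the kind of pre-processing this involved, the sketch below re-encodes each raw recording to a common format and splits it into fixed-length segments so that the resulting files fall within the limits of the analysis software. It is a minimal sketch under stated assumptions: the command-line tool (ffmpeg), the codec settings, the folder names, and the 20-minute segment length are illustrative choices on our part, not a record of the exact settings used in the study.

    import subprocess
    from pathlib import Path

    # Illustrative pre-processing: re-encode each raw recording to a common
    # format and split it into fixed-length segments. Tool, codec, and segment
    # length are assumptions for this sketch, not the study's actual settings.
    RAW_DIR = Path("raw_recordings")
    OUT_DIR = Path("prepared_for_nvivo")
    SEGMENT_SECONDS = 20 * 60  # 20-minute segments

    OUT_DIR.mkdir(exist_ok=True)

    for source in sorted(RAW_DIR.iterdir()):
        if source.suffix.lower() not in {".avi", ".mov", ".mpg", ".mp4"}:
            continue  # skip anything that is not a video file
        target_pattern = OUT_DIR / (source.stem + "_part%02d.mp4")
        subprocess.run(
            [
                "ffmpeg",
                "-i", str(source),         # input in whatever format the camera produced
                "-c:v", "libx264",         # re-encode video to one common codec
                "-crf", "28",              # higher CRF = stronger compression, smaller files
                "-c:a", "aac",             # re-encode audio
                "-f", "segment",           # write the output as a series of segments
                "-segment_time", str(SEGMENT_SECONDS),
                "-reset_timestamps", "1",  # each segment starts at time zero
                str(target_pattern),
            ],
            check=True,
        )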
Once the video sections were associated with themes, we created a number of theme-based video slices across each participant. These slices created a useful second-level dataset that allowed us to re-analyse at a meta-level. The sliced datasets also presented an opportunity for us to give something back to our participants. The idea came out of our view that the participants should receive something of value from engaging in the project; we as researchers were certainly gaining something from them. For this reason, we produced a DVD compilation of the theme-based clips from each student’s personal collection and presented it to them as something we thought they would like to look back on in the future.
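To make the idea of theme-based slices more concrete, the sketch below groups coded segments by theme across participants and cuts each coded span into its own clip. The segment list, participant codes, theme labels, and file names are hypothetical, and the cutting tool (ffmpeg) is again an assumption; our own coding and slicing were carried out within NVivo, and this sketch merely illustrates the kind of re-grouping the second-level dataset involved.

    import subprocess
    from collections import defaultdict
    from pathlib import Path

    # Hypothetical coded segments: (participant, theme, source file, start, end).
    # Times are in seconds; participants, themes, and file names are invented.
    coded_segments = [
        ("P01", "note-taking",    "P01_week3_part01.mp4", 120, 310),
        ("P02", "note-taking",    "P02_week1_part02.mp4",  45, 200),
        ("P01", "switching-apps", "P01_week5_part01.mp4", 600, 690),
    ]

    # Group the coded segments by theme so each theme becomes one "slice"
    # spanning several participants.
    by_theme = defaultdict(list)
    for participant, theme, source, start, end in coded_segments:
        by_theme[theme].append((participant, source, start, end))

    OUT = Path("theme_slices")
    OUT.mkdir(exist_ok=True)

    for theme, segments in by_theme.items():
        theme_dir = OUT / theme
        theme_dir.mkdir(exist_ok=True)
        for i, (participant, source, start, end) in enumerate(segments, start=1):
            clip = theme_dir / ("%s_%02d.mp4" % (participant, i))
            subprocess.run(
                [
                    "ffmpeg",
                    "-ss", str(start),       # seek to the coded start time
                    "-i", source,
                    "-t", str(end - start),  # keep only the coded duration
                    "-c", "copy",            # cut without re-encoding (snaps to keyframes)
                    str(clip),
                ],
                check=True,
            )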
The analysis of the video clips revealed aspects relevant to our study that we had not previously considered. These were incidental aspects of study practice that we would not normally have been conscious of. When we started viewing students’ video clips of their approach to study (recorded by them in the privacy of their rooms), we were astonished at the number of unexpected visual cues we were presented with. For instance, the camera captured the layout of the study area; in particular, the position of the computer in relation to books and paperwork revealed the prevailing prominence of paper-based material over digital. The video clips also disclosed a surprising degree of engagement in the consumption of information (reading) over the production of information (writing); the fluctuating concentration of students across different tasks; the points at which participants took breaks; the use of communication devices to contact others, and the timing of such contacts; and the interaction between digital and paper documents. These are aspects of our investigation that would not have been obvious areas of interest if we had not used video. We were also privy to behavioural sequences and routines that the participants were not conscious of, as became clear through later discussions.
The act of viewing behaviours as they happened, within a situated and meaningful context, revealed considerable differences among participants. This is in contrast to post-event data capture methods such as surveys or interviews, where the focus on predefined elements tends to blur individual differences. For instance, prior to this investigation, feedback from students on their study habits had led us to believe that most students approach their study in a similar manner. The use of autovideography in this investigation has, however, revealed a range of concealed processes, such as workflow processes relating to the completion of coursework and assessment tasks and the ways students mix technologies (such as audio players, word processors, mobile phones, instant messaging, and e-mail). One participant used music as part of her study practice. It was interesting to note the degree of confidence she displayed using a range of music software compared with her much more tentative use of her word-processing software (Figure 2).
Figure 2. Music while you work.
As we viewed and worked with the participant-created videos, such as the one depicted in Figure 2, we increasingly recognised their uniqueness as data. Not only did we find that they combined the power of still photographs and audio recording, we found that they captured a rich context (live situated practice) that we were not normally privy to. As Goldman, Pea, Barron, and Derry (2007, p. xi) note, one of the important dimensions along which video data add value to learning research is the “rhetorical power of viewing video of behaviours and interactions for understanding nuances of social relationships, kinesics, proxemics, prosodics and other situated parameters of human interactions.” It was access to these other dimensions that had a profound effect on the way we perceived our data and our participants. In addition to conveying the explicit actions we were interested in, the video clips revealed the important conjoined, tacit behaviours and personality that are present in all our experiences. Repeated viewing of the videos gave us a more perceptive understanding of our participants in relation to the activities being studied. It afforded us a contextually situated vantage point that was not possible through other methods.
We witnessed the potential of participant-created video to communicate understandings beyond the purely cognitive when we presented some edited video clips in an in-house research seminar attended by many teaching staff. The attendees’ reactions spoke to the power of these clips as data. The video data allowed teachers to view the layout of the students’ study areas, how they mixed digital and paper use, how they took notes from readings, and how they mused about the value of the topic or assignment being undertaken. It was clear, as the audience watched, listened to, and laughed at the student musings, that the video clips reflected behaviours that most of the audience had engaged in as students. It was, for us, further evidence that this approach was successful at capturing and communicating behaviours that were ordinarily hidden. We speculated that it reminded them of the students’ world, and allowed them to see activities that occur outside the lecture, the tutorial, or the assignment. We believe the video clips drew the audience into the students’ experience--that sense of being there--and the feelings and respect that go along with such shared experiences. The audience response was very different from that at presentations where we had previously presented graphs and tables from focus groups and surveys.
As interest rose and our own conviction of the method’s potential increased, we became keen to share both our experiences with the method and our findings more widely. At this point a number of ethical considerations emerged, associated with presenting visual artefacts that clearly identify the participants. As noted by Wiles et al. (2008), ethical issues in visual research are contextual and often emerge throughout the project.
We were comfortable showing these clips in an in-house context because our participants had given consent for them to be viewed by other participants and used for research purposes (as per the ethical consent requirements of the study). However, when we contemplated sharing these findings with a wider audience we grappled with issues of informed consent. We resolved consent issues as best we could by re-negotiating consent each time we wished to present findings in different contexts.
Although none of our participants expressed undue concern about their recordings being made public (some possibly even welcomed the publicity), we were concerned that the ease of replication of digital content meant we could not control the extent of distribution. In addition to participants’ informed consent, our major concerns were to do with anonymity (Harper, 2005) and privacy (Starr & Fernandez, 2007). These concerns were alleviated somewhat by insights from Rich and Chalfen’s (1999) work with young people who shared their experiences of asthma through video narrative. In particular, our position at this point is informed by their understanding that:
The major risk of participating in this research lies in the content and use of videotaped information. There is a potential loss of privacy inherent in revealing people’s lives on video. . . . Ultimately, it was determined that, because the participants had control over the content of their visual narratives, the VIA [Video Intervention/Prevention Assessment] process did not constitute undue surveillance or result in an invasion of privacy. (Rich & Chalfen, 1999, p. 66)
Our aim was to find a research method that would allow us to slip beneath the traditional self-reports of past practice and to observe behaviours as they occur. The challenge here was to find a procedure that was neither covert nor invasive. Inviting the participants to act as data gatherers was an obvious solution. We envisaged a first-person point of view in which students captured their behaviours on video, allowing us into their world in ways we had not previously been privy to. In this we agree with Starr and Fernandez’s view:
Comparing the videographer’s depiction to the subject’s experience, there are inherent differences in physical and emotional perspective, what is attended to or ignored, and relative emphasis of different elements in the situation. The ability to get closer to the original lived experience would clearly help us understand in new and valuable ways. (Starr & Fernandez, 2007, p. 170)
In our attempt to articulate this approach to our participants, the notion of technologically-mediated self-observation developed, and with it the idea that we were “giving the natives the camera” (Belk & Kozinets, 2005, p. 130) and distancing our “researcher presence” in the hope of eliciting natural, candid, and spontaneous behaviours not possible with self-reporting or perception-based approaches.
However, we soon became aware that it was actually difficult to define from whose perspective the behaviours were being captured. It was the student who decided where the camera would be situated, what would be captured, and when they were going to record. It would have been interesting to ask students how they decided these matters; unfortunately it was not until after the project that we became aware of the importance of these factors.
Our primary focus was on the behaviours once the video was recording and, as mentioned earlier, we found these behaviours could be categorised as either performance or surveillance. By performance we mean that the student was consciously aware of the camera’s presence and therefore controlled the data being captured. Surveillance was the term we employed when students mentioned that they had forgotten the camera was present. In these cases, on reviewing footage with the students, it became clear that the students were watching themselves in a new way--from a hypothetical vantage point from which to observe and reflect on themselves.
Using video as a data source meant we were producing an artefact that had significance to both ourselves as researchers and to the participants. As participants, the students were fascinated by their videos and pleased that we decided to create a personalised DVD collection of the theme-based slices. They each saw these DVDs as something of value: a stylised snapshot of their study habits in their last semester as an undergraduate.
As a result of these experiences, we have come to view video as a useful way to observe practice as it happens and within the relevant context, revealing things that could not be revealed otherwise. By engaging the participants as data collectors and overseers of the final dataset, we have gone some way towards mitigating our concerns about privacy and transparency. While there are always challenges when working with such large, open data flows, access to this type of observational data has enhanced the current research project beyond what was possible via post-event, self-report methods.
We are very grateful for the invaluable guidance provided by JRP Editor D. P. Dash and three peer reviewers.
Banks, M. (2001). Visual methods in social research. London: Sage.
Banks, M. (2007). Using visual data in qualitative research. London: Sage.
Belk, R. W., & Kozinets, R. V. (2005). Videography in marketing and consumer research. Qualitative Market Research: An International Journal, 8(2), 128-141. Retrieved October 11, 2011, from http://business.nmsu.edu/~mhyman/M610_Articles/Belk_QMR_2005.pdf
Charmaz, K. (2006). Grounded theory: Objectivist and constructivist methods. In N. Denzin & Y. Lincoln (Eds.), Handbook of qualitative research (pp. 509-535). Thousand Oaks, CA: Sage.
Goldman, R., Pea, R., Barron, B., & Derry, S. J. (Eds.). (2007). Video research in the learning sciences. Mahwah, NJ: Lawrence Erlbaum.
Harper, D. (2005). What’s new visually? In N. Denzin & Y. Lincoln (Eds.), The handbook of qualitative research (pp. 747-762). Thousand Oaks, CA: Sage.
Holliday, R. (2004). Reflecting the self. In C. Knowles & P. Sweetman (Eds.), Picturing the social landscape: Visual methods and the sociological imagination (pp. 49-64). London: Routledge.
Patashnick, J., & Rich, M. (2005). Researching human experience: Video intervention/prevention assessment (VIA). Australasian Journal of Information Systems, 12(2), 103-111. Retrieved October 11, 2011, from http://dl.acs.org.au/index.php/ajis/article/view/96/77
Rich, M., & Chalfen, R. (1999). Showing and telling asthma: Children teaching physicians with visual narrative. Visual Studies, 14(1), 51-71.
Rose, G. (2007). Visual methodologies. An introduction to the interpretation of visual materials (2nd ed.). London: Sage.
Sharpe, R., Benfield, G., Lessner, E., & DeCicco, E. (2005). Scoping study for the pedagogy strand of the JISC e-learning programme. Retrieved October 11, 2011, from http://www.jisc.ac.uk/uploaded_documents/scoping%20study%20final%20report%20v4.1.doc
Starr, R. G., & Fernandez, K. V. (2007). The Mindcam methodology: Perceiving through the native’s eye. Qualitative Market Research: An International Journal, 10(2), 168-182.
Sunderland, P. L., & Denny, R. M. (2002, November). Performers and partners: Consumer video documentaries in ethnographic research. Paper presented at the European Society for Opinion and Marketing Research (ESOMAR 2002) Qualitative Research Conference, November 10-12, Boston, MA. Retrieved October 11, 2011, from http://www.practicagroup.com/pdfs/Sunderland_and_Denny_Performers_and_Partners.pdf
Wiles, R., Prosser, J., Bagnoli, A., Clark, A., Davies, K., Holland, S., & Renold, E. (2008). Visual ethics: Ethical issues in visual research [ESRC National Centre for Research Methods Review Paper: NCRM/011]. Retrieved October 11, 2011, from http://eprints.ncrm.ac.uk/421/1/MethodsReviewPaperNCRM-011.pdf
Worth, S., & Adair, J. (1972). Through Navajo eyes: An exploration in film communication and anthropology. Bloomington, IN: Indiana University Press. Retrieved October 11, 2011, from http://isc.temple.edu/TNE/
Received 19 October 2010 | Accepted 11 October 2011 | Published 12 October 2011
Copyright © 2011 Journal of Research Practice and the authors