"I'm the first video Voicethread–it's pretty sweet, I'm pumped": Gender and Self-Expression on an Interactive Multimodal Platform

"Chuis la première video Voicethread – C’est trop génial, chuis ravi" : Genre et auto-expression sur une plate-forme multimodale interactive
Susan C. Herring and Bradford Demarest

Abstract

This study explores how male and female users of Voicethread.com, an interactive multimodal web 2.0 platform that allows asynchronous commenting via text, audio, and video, communicate and perform identity through self-expression in different semiotic modes. A quantitative computer-mediated discourse analysis of three public English-language Voicethreads found that in video and audio comments, both genders express more positive attitudes; they are also more self-conscious and ego-focused. The text comments express more neutral and negative attitudes, especially when written by males, but they are also more socially interactive. With few exceptions, female communication patterns resemble those for audio and video, while male communication patterns resemble those for text. We propose explanations for these findings and discuss their implications for identity performances in interactive multimodal environments.

1. Introduction

1Do people present themselves differently in different communicative modalities? Are they more sociable in "rich" modes such as face-to-face (FtF) communication and more impersonal and less polite in "lean" textual computer-mediated communication (CMC)? Such were the claims made in the 1980s, when the internet was new and the choice of modes was between offline (FtF) and synchronous or asynchronous CMC (eg, Daft & Lengel, 1984; Kiesler, Siegel & McGuire, 1984). In practice, of course, internet users soon adapted to communicating via text online, and the expressive range of textual communication expanded, such that almost any style of communication came to be at home in textual CMC. Now, however, text is no longer the only option: various CMC modes, including "richer" modes such as audio and video chat, are increasingly available on the internet. Thus, although the evolution of textual CMC seemed to disprove early theoretical claims about the effects of different semiotic modes on communication, the multimodal internet provides a new testing ground for those claims.

2This is especially true for web 2.0 environments that allow users to engage in communicative exchanges in more than one mode on the same platform, or what Herring (2015) calls interactive multimodal platforms (IMPs). YouTube users, for example, have the option to record and upload their own videos in response to posted videos, in addition to leaving text comments (eg, Pihlaja, 2011). Players of multiplayer online games can communicate with their teammates in real time via both text chat and voice chat (Newon, 2011), and users of the multimodal microblogging service Tumblr communicate via animated GIFs as well as textual posts (Bourlai & Herring, 2014). The IMP that is the focus of the present study, Voicethread.com, supports multimodal user-to-user communication on a larger scale: Users can comment asynchronously in three modes1–text, audio, and/or video–in addition to posting (and drawing on) mixed media slideshows.

3Figure 1 shows how an audio, a video, and a text comment, respectively, appear during playback in a Voicethread. Comments can be accessed either by clicking on the user profile pictures on the right and left sides of the central multimedia display (in the figure, a video clip about the effects of driving over the speed limit), or by clicking on segments of the playback bar at the bottom of the interface. The playback bar replays the comments sequentially in the order in which they were posted.

Figure 1–An audio comment (top left), a video comment (bottom left), and a text comment (right) in a Voicethread.

4By their very nature, multimodal IMPs such as Voicethread raise questions about why and how users choose to express themselves in CMC in a given mode, as well as how digital modes of communication function as vehicles for identity performance. "Identity" in this study is analyzed through the lens of user gender. Many studies have found that women's and men's textual online discourse differs stylistically, with men being more assertive, less polite, and contributing more and longer comments than women, who tend to be more attenuated and supportive (for an overview, see Herring & Stoerger, 2014). However, the relationship between gender and language use has not yet been systematically analyzed for multimodal CMC. Do stylistic gender differences in textual CMC manifest in audio and video CMC, as well? Do women and men prefer to communicate in different modes?

5In this study, we investigate how commenting mode and participant gender relate to participation and self-expression in three public Voicethreads. To do so, we employ a computer-mediated discourse analysis approach, which adapts linguistic methods to analyze online communication, and which involves "coding and counting" linguistic features (Herring, 2004). Specifically, to investigate whether richer modes are associated with greater sociability, we identified and counted all the words in the Voicethread comments that indicate social awareness and an orientation on the part of the commenter to the addressees, following Hyland (2005). To assess the effect of comment mode on attitude, we coded and counted expressions associated with affect, judgment, and appreciation, drawing on categories from Martin and White (2005), as well as coding the tone of the expressions, whether positive, negative, or neutral. Participation frequencies were also measured.

6The analyses revealed differences between male and female Voicethread users, as well as differences across modes. Males contributed more comments overall than females,2 consistent with previous findings for text-only CMC. Moreover, video comments, while few, were made overwhelmingly by males, consistent with previous research that found that males are more likely than females to be early adopters of new technology (eg, Venkatesh & Morris, 2000). Participants of both genders expressed more positive attitudes in video and audio comments than in text. They were also more self-conscious and ego-focused, especially in video, as illustrated by the quote in the title of this article. In contrast, the text comments expressed more neutral and negative attitudes, but they were also–unexpectedly–more socially interactive than the video or audio comments. These results are partially consistent with claims based on media richness, although they present a more complex picture. Interestingly, with few exceptions, the female communication patterns resemble those for audio and video, while the male communication patterns resemble those for text. We propose explanations for these associations that invoke the notions of sociability (for females and rich media) and distancing (for males and lean media), and conclude by discussing the implications of mode differences for identity performances in interactive multimodal online environments.

2. Background

7Voicethread.com was founded in Boca Raton, Florida by Benji Papell and Steve Muth and launched in March 2007.3 The platform has attracted the attention of educators, who praise it as an engaging learning and discussion tool (Weir, 2008) that enhances social presence, especially through the use of audio and video (Pacansky-Brock, 2010), and promotes participation, collaboration, and community (Ditkoff & Young, 2011). However, Ching and Hsu (2013) found that although students in a graduate course felt more connected to each other using Voicethread, it did not cause them to participate more. Moreover, when Millard (2010) analyzed user comments from 50 randomly-selected public Voicethreads, he found little evidence of interaction or collaboration; rather, the commenters tended to respond to the content featured in the multimedia slideshow. Aside from this pedagogically-oriented body of work, little scholarship has been conducted on Voicethreads to date.

8Voicethread is of broader interest, however, in that it offers users a choice of communication modes on the same platform–and, indeed, in the same interaction–allowing direct comparison of natural communication across modes by the same group of participants. Previously, this was not possible; instead, researchers typically studied mode effects by comparing communication on separate platforms in experimental settings, often with different subjects using each platform. Much of that research was based on early theoretical claims involving the degree of "presence" or "richness" of different media.

9Social Presence Theory (Short, Williams & Christie, 1976) posits that participants will perceive greater social presence via richer media such as video than via telephone or written communication. By degree of social presence is meant degree of awareness of the other person in a communication interaction. Media Richness Theory (originally known as Information Richness Theory) posits that richer media are better suited for tasks involving nuanced social communication, while leaner media are best suited for simple, routine tasks (Daft & Lengel, 1984). The main criteria for determining the richness of a medium, according to Daft and Lengel (1984), are immediacy of feedback and number and nature of channels of communication. Similar to Social Presence Theory, Daft and Lengel situate media along a continuum, with face-to-face (FtF) as the richest medium, telephone as less rich, and traditional written documents as among the leanest media, exceeded only by computer output in the form of numeric data. Although these two theories predated CMC and appear to assume one-to-one communication, they have been widely invoked in CMC research involving many-to-many communication (eg, Balaji & Chakrabarti, 2010).

10The CMC modes involved in the present study can be situated along a modified version of the media richness continuum, as shown in Figure 2. In Daft and Lengel's (1984) conceptualization, FtF communication is considered to be synchronous and written documents are asynchronous. (We exclude numeric data from Figure 2 because they normally do not serve as an interpersonal communication medium.) CMC modes can be either synchronous or asynchronous, with synchronous modes being richer, according to the criteria of Media Richness Theory, because they provide feedback in near-real time. The three Voicethread commenting modes (highlighted) are all asynchronous, but they can still be ranked along the continuum according to the number and nature of channels used by each (partial visual for textual comments; auditory for audio comments; auditory+visual for video comments).

Figure 2–Continuum of media richness (adapted from Daft and Lengel, 1984).4

11Kiesler et al. (1984) were among the first to study the effects of mode on communicative behavior. Their task-focused experiments revealed mode differences in disinhibition (as measured by swearing, insults, name-calling, and hostile behavior), decision shifts, and equality of participation, all of which occurred more often in textual CMC–both chat and email–than in FtF conditions. The first two findings led Kiesler and her colleagues to claim that textual CMC was ill-suited for social interaction. They explained their findings in terms of the paucity of social cues, such as facial expressions and prosody, in text-only CMC, resulting in a process of "depersonalization" whereby senders feel detached from (experience less social presence with) their addressees. This approach is referred to in the literature as the Cues Filtered Out theory (eg, Culnan & Markus, 1987).

12However, the explosion of naturally-occurring CMC on the internet soon seemed to give the lie to the predictions of the Social Presence, Media Richness, and Cues Filtered Out theories as regards textual CMC. Online communities quickly arose in which participants reported feeling a sense of belonging and social presence (Rheingold, 1993); reports of online romances soon followed (eg, Cooper & Sportolari, 1997). At the same time, a high incidence of hostile verbal behavior was observed to occur in some online forums (eg, Kim & Raja, 1990).

13To account for these contradictory behaviors, Walther (1996) theorized that textual, asynchronous CMC is "hyperpersonal," in that message recipients tend to overgeneralize from the limited social cues available in computer-mediated messages, with the result that both positive and negative perceptions of the sender are exaggerated. Such perceptions can fuel romantic feelings as well as feelings of hostility and aggression. However, Herring (1994, 1995) observed that verbal aggression is much more common in messages posted by males than by females, which should not be the case if the technology alone predisposes users toward expressing hostility. Herring (1994, 1995) invoked gender socialization to account for differences in male and female behavior online; these include a tendency for males to post more and longer messages in public forums, as well as to post in a more confrontational manner, whereas women's messages tend to express more social support.

14Studies of technology adoption in organizations have also argued for the importance of including gender as a factor in their models. These studies found that women perceive email as higher in social presence than men do (Gefen & Straub, 1997), are more likely to adopt new technologies for social reasons (such as usage by their peers), and are more concerned with ease of use, whereas men are motivated to adopt mainly by the affordances of the technology (Venkatesh & Morris, 2000). Even when motivated to adopt, however, women tend to do so more slowly.

15Because IMPs are relatively new, little research has yet focused on the adoption of, or verbal self-expression on, interactive platforms that support user-to-user communication in more than one mode. Nonetheless, there is some evidence that mode choice on IMPs affects the nature of communication. In her study of the multiplayer online game World of Warcraft, Newon (2011) found that voice chat was dominated by a few individuals, whereas text chat favored more democratic participation. Pihlaja's (2011) study of video responses to video prompts on YouTube showed them to be longer, more developed, and more interactive than text comments on the same prompts. Sindoni (2014) noted that interlocutors are more self-conscious in video chat than in written exchanges, and that observing themselves in the feedback image "produces psychological effects influencing the verbal and nonverbal features of the online exchange" (Sindoni, 2014: 333). Bourlai and Herring (2014) found that emotions expressed in animated gifs on Tumblr were more positive than emotions expressed in text comments; they attribute the greater negativity of text, in part, to the lack of paralinguistic cues in text compared to other modes, which can create a distancing effect. None of the research on multimodal platforms has compared the behavior of male and female users, however. The present research addresses this gap.

16The remainder of this paper is organized as follows. We first describe and present the results of a quantitative study of comments posted to three Voicethreads; this introduces the linguistic categories that are central to our analysis. The results of the quantitative study are then interpreted qualitatively and illustrated with comments from the Voicethread data. We conclude by discussing the contributions and limitations of the study.

3. Research Questions

17The overall research question that guides this study is the following: How do users express themselves in a given mode on Voicethread.com, an interactive web platform that supports commenting in multiple modes? Specifically:

18RQ1. What differences, if any, are there in sociability and attitude across the three commenting modes available on Voicethread.com?

RQ2. Are there gender differences in sociability and attitude, and if so, what are they?

RQ3. Are there gender differences in mode choice and amount of participation, and if so, what are they?

19Based on the research discussed in the preceding section, we expected to find more social communication in audio and video, the richer modes, and more impersonal and/or contentious communication in text, which is leaner. Previous research also led us to expect that males would express themselves in ways that are less social and more contentious than females, independent of mode choice, and that males, as early adopters of new technology, would participate on Voicethread.com more overall, as well as commenting more often than females in the asynchronous video and audio modes, which are relatively novel compared to asynchronous textual CMC.

4. Methodology

4.1. Data

20The data for this exploratory study are all of the comments posted in three extended public Voicethreads. These threads are prompt-triggered discussions that took place in pedagogical contexts, consistent with the most common use of the Voicethread platform. Because we had little indication of what to expect given the paucity of previous discourse-based research on Voicethreads, we sampled for diversity, while aiming to include substantial numbers of comments by both genders that employed, inasmuch as possible, all three modes of communication. The topic and the participants’ ages vary in the three threads: The first was produced mostly by a primary school class evaluating art in science fiction, the second was produced mostly by a secondary school driver's education class on the topic of the deleterious effects of speeding,5 and the third is a discussion among professional educators in response to the question: What does the network mean to you? Most of the commenters in the three threads appear to be based in the United States. The "Sci Fi" thread had 155 comments, the "Speeding" thread had 97 comments, and the "Network" thread had 111 comments at the time of our data collection in March 2011. Multimodal comments posted to the "Speeding" thread are illustrated in Figure 1 at the beginning of this article. The total corpus consists of 363 comments and 22,069 words.

21One advantage of studying naturally-occurring communication such as the Voicethreads we analyzed is that, compared to controlled studies in experimental settings, the data represent natural behavior and thus have greater real-world validity. A disadvantage is that with naturally-occurring data it is not possible to control for all of the variables that could potentially affect the findings of the study. Age and topic are conflated in our sample, since each topic is associated with a different age group. To minimize the potentially confounding effects of these two variables in this study, we only report quantitative patterns that are found in all three threads, as well as in the sample overall.

4.2. Analytical Methods

22For the quantitative analysis, we adopted a computer-mediated discourse analysis approach, applying "language-focused content analysis" to online discourse (Herring, 2004: 4). The audio and video comments were transcribed by the authors and entered into an Excel spreadsheet along with the text comments as they appeared in the Voicethread.com interface. Each comment was coded for the independent variables "thread," "mode," and participant "gender." Thread and mode were self-evident. Gender was evident from the participant's voice and/or appearance in most of the video and audio comments; for text comments, we determined gender from user IDs (which often appeared to be actual names) and profile images, when available. Participants whose gender could not be identified (because they had user IDs such as "xxx," "quill," or "1Sem7-1Team1" and no profile image) were classified as "unknown." The comments were also coded for three dependent variables: participation, attitude, and interactional metadiscourse, as described below.
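
To make the coding scheme concrete, the following sketch shows one possible way to represent a coded comment as a record combining the independent variables (thread, mode, gender) with the dependent variables described in the subsections below. All field names and sample values are hypothetical illustrations, not the authors' actual spreadsheet.

    # Minimal sketch of one coded comment record. Field names and values are
    # hypothetical; they illustrate the variables described in the text.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class CodedComment:
        thread: str   # independent variable: e.g., "SciFi", "Speeding", "Network"
        mode: str     # independent variable: "text", "audio", or "video"
        gender: str   # independent variable: "M", "F", or "unknown"
        text: str     # original or transcribed comment text
        attitudes: List[Tuple[str, str]] = field(default_factory=list)  # (attitude type, tone)
        metadiscourse: List[str] = field(default_factory=list)          # interactional metadiscourse terms

    example = CodedComment(
        thread="Speeding", mode="video", gender="M",
        text="It's pretty sweet, I'm pumped.",
        attitudes=[("appreciation", "positive"), ("affect", "positive")],
        metadiscourse=["pretty", "I'm"],
    )
    print(example.thread, example.mode, example.gender, len(example.attitudes))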

4.2.1. Participation

23The participation analysis consisted of a straightforward count of the number of comments and the number of words. The results of these measures are reported by gender, mode, and thread.
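
As a simple illustration (not the authors' code), comment and word counts per gender, mode, and thread can be tallied as in the sketch below; the sample rows are invented.

    # Sketch of the participation measure: tally comments and words by
    # gender, mode, and thread. Sample rows are invented for illustration.
    from collections import defaultdict

    rows = [
        {"thread": "Speeding", "mode": "video", "gender": "M",
         "text": "This is a very historic moment."},
        {"thread": "Network", "mode": "audio", "gender": "F",
         "text": "Hi, I teach computer classes."},
    ]

    tallies = defaultdict(lambda: {"comments": 0, "words": 0})
    for row in rows:
        for key in (row["gender"], row["mode"], row["thread"]):
            tallies[key]["comments"] += 1
            tallies[key]["words"] += len(row["text"].split())

    for key, t in sorted(tallies.items()):
        print(f"{key}: {t['comments']} comments, {t['words']} words")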

4.2.2. Attitude

24To assess participant attitude toward other participants and toward propositions in the discourse, we adapted attitude categories from Martin and White (2005). These are listed in Table 1 below (definitions are from Martin and White; examples are ours).

Table 1–Martin and White's (2005) attitude categories.

25In addition, we coded each expression of attitude for tone: positive, negative, or neutral. The unit of analysis could be a word, phrase, utterance, or chunk of a message that expressed a particular attitude, and more than one attitude could be coded per message, including the same attitude multiple times.

26Both authors participated in the coding. Attitude and tone codes were assigned manually by each author independently for a portion of the data, code assignments were compared, and disagreements were resolved through discussion. This process was repeated until all comments had been jointly coded with 100% agreement (for "Speeding" and "SciFi") or until better than 80% agreement was reached for both attitude and tone assignment (for "Network"); in the latter thread, the second author coded the remaining comments.
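
Percent agreement of the kind described here is simply the share of coded units to which both coders assigned the same label. The sketch below is a generic illustration with invented labels, not the authors' procedure.

    # Generic percent-agreement sketch for two coders' label sequences.
    # The labels are invented for illustration.
    def percent_agreement(labels_a, labels_b):
        assert len(labels_a) == len(labels_b), "coders must label the same units"
        matches = sum(a == b for a, b in zip(labels_a, labels_b))
        return 100.0 * matches / len(labels_a)

    coder1 = ["positive", "neutral", "positive", "negative", "positive"]
    coder2 = ["positive", "neutral", "negative", "negative", "positive"]
    print(f"{percent_agreement(coder1, coder2):.1f}% agreement")  # prints 80.0% agreement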

4.2.3. Metadiscourse

27This analysis was conducted to measure the amount of sociability in the comments in each mode. The comments were analyzed for the frequency of what Hyland (2005)6 calls metadiscourse–words or phrases that indicate a degree of social awareness and an orientation on the part of the commenter to the addressee(s).

28Hyland (2005) classifies metadiscourse into two broad types: interactive and interactional. Interactive metadiscourse mainly involves reference to other parts of the discourse, while interactional metadiscourse involves the interaction of the writer or speaker with the audience. We focus on the second type, as it is the most relevant to social interaction. Interactional metadiscourse has several subtypes, as summarized in Table 2 (the examples are ours).

Table 2–Hyland's (2005) interactional metadiscourse categories.

29In an appendix to his 2005 book, Hyland provides an extensive list of metadiscourse terms in English based on his studies of academic writing. We modified his list to fit our data. Specifically, we excluded punctuation and non-alphabetic symbols (on the grounds that these are not possible in speech), formal terms and conventions mainly found in written text (such as "the author," "the reader"), and common terms that do not function primarily as metadiscourse (such as "go"). We also manually reviewed the comments in our data sample and added terms that occurred there but were not in Hyland's list; these included spoken discourse phenomena such as contracted hedges (eg, "kinda," "sorta") and engagement markers (eg, "y'know"). Finally, we lemmatized terms with a common root (representing, eg, "seem/seems/seemed" as "seem*") to facilitate their retrieval from the corpus. The end result was a list of 266 lemmatized terms that we imported into a freely available concordancing program, CasualConc, which was used to sort and count the frequencies of each term. We manually filtered the results returned by CasualConc for each term to exclude instances that did not function as interactional metadiscourse.
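
The lemmatized wildcard terms can be approximated with regular expressions, where "seem*" matches seem, seems, seemed, and so on. The sketch below stands in for the concordancer step; the term list shown is a tiny invented subset, not the 266-term list used in the study, and raw matches would still need the manual filtering described above.

    # Sketch of counting lemmatized metadiscourse terms (e.g., "seem*") in a
    # corpus using regular expressions. The term list is a small invented
    # subset; matches would still require manual filtering.
    import re
    from collections import Counter

    terms = ["seem*", "kinda", "sorta", "y'know", "we"]

    def term_pattern(term):
        # "seem*" matches "seem", "seems", "seemed", ...; other terms match exactly
        if term.endswith("*"):
            return re.compile(r"\b" + re.escape(term[:-1]) + r"\w*", re.IGNORECASE)
        return re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)

    corpus = "It seems to me that we kinda agree, y'know, and it seemed pretty sweet."
    counts = Counter({t: len(term_pattern(t).findall(corpus)) for t in terms})
    print(counts.most_common())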

4.3. Quantification

30In order to allow for direct comparison of numerical values from subsamples of different sizes in our data set, we normalized the results for attitude counts and metadiscourse counts in relation to the total word count in each subsample (gender, mode, gender-mode), scaling them to counts per 1000 words. To further facilitate comparison, proportions of total words within each subsample are also provided as percentages. The participation results are presented both as raw numbers and normalized as percentages.
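
For example (with invented numbers), a subsample of 5,000 words containing 40 hedges yields 40 / 5000 × 1000 = 8 hedges per 1000 words, or 0.8% of words. A one-line helper makes the scaling explicit:

    # Normalization sketch: scale a raw feature count to occurrences per
    # 1000 words of a subsample. The numbers are invented.
    def per_1000_words(count, subsample_word_count):
        return 1000.0 * count / subsample_word_count

    print(per_1000_words(40, 5000))       # 8.0 per 1000 words
    print(per_1000_words(40, 5000) / 10)  # 0.8 (% of words)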

5. Results

5.1. Participation

31The largest number of comments was made via text, followed by audio. Video was a distant third; only 15 video comments were posted, and the "SciFi" thread contained no video comments. The audio comments accounted for the highest percentage of words in the corpus (67%); however, video comments were longest, with an average of 192 words per message, compared to 145 words for audio and 21 words for text. Overall, males contributed more comments and words than females did. Participation also varied across threads: "SciFi," the thread with primary school children, had the most comments but the fewest words (and favored text), while "Network," the thread with academic professionals, had the most words and longest comments (and favored audio). Table 3 summarizes the participation results by words and by comments for the corpus overall. (Percentages add up to 100% vertically for each column within each independent variable.)

Table 3–Participation by independent variable: gender, mode, thread7.

32Males posted more comments and more words than females in each of the three threads. The difference was largest in "SciFi" (M: 66% of comments and 72% of words vs F: 21% of comments and 17% of words), but males also contributed more in the "Speeding" thread (M: 48% of comments, 54% of words; F: 39% of comments, 40% of words) and the "Network" thread (M: 49% of comments, 56% of words; F: 44% of comments, 43% of words). In order to simplify the presentation, results are not broken down by thread in what follows, unless specially noted. Only patterns that hold across all three threads are presented as findings.

33The gender breakdown by mode resembles the overall pattern: Males posted more comments and more words in each of the three modes, except that the proportion of words in text comments is equal for males and females. Video comments, while few, especially favored males. The distribution of participation by gender and mode is shown in Table 4.

Table 4–Participation by mode and gender.

34Comments posted by participants of unknown gender are a minority in every thread and every mode, and they tend to be shorter than comments by gender-identifiable participants, which makes them harder to interpret. Therefore, we excluded those comments from further analysis.

5.2. Attitude and Tone

35Approximately one-third of all comments in the three threads expressed attitude. Those comments were overwhelmingly positive in tone. Females were more positive than males, whereas males were more neutral and negative than females. Also, text was more negative and neutral than video and audio, especially text comments posted by males, which were the most negative of all. Audio and video comments were mostly positive, especially those posted by females. However, text comments were not mostly negative overall; rather, they expressed both positive and negative sentiment. These patterns can be seen in Table 5, which displays the tone results for the overall attitude corpus.

Table 5–Overall tone of attitude expression by gender and mode. Per 1000 words and as (percentages).

36The results for type of attitude, broken down by tone, gender, and mode, are presented in Table 6. Of the three attitude types, appreciation was expressed most often, followed by judgment, then affect.

Table 6a–Affect by gender and mode. Per 1000 words and as (percentages).

Table 6b–Judgment by gender and mode. Per 1000 words and as (percentages).

Table 6c–Appreciation by gender and mode. Per 1000 words and as (percentages).

37Appreciation expressions were overwhelmingly positive, in keeping with the nature of appreciation, whereas judgments and affect were split between positive and negative tone, in ratios that differ according to participant gender. Male comments were more neutral and negative than female comments in all three attitude categories. Thus, for example, textual appreciation by males was mostly positive at 75.7%, but it was still less positive than by females (80.1%) (Table 6c). Female comments, in addition to being more positive than male comments, were more positive than negative in all three attitude categories.

38With regard to mode, text conveyed more neutrality and negativity than audio and video in all three attitude categories. Video comments were somewhat more positive than audio comments overall, especially in affect and judgment expression.

5.3. Interactional Metadiscourse

39The results of the interactional metadiscourse analysis are presented in Table 7.

Table 7a–Gender and interactional metadiscourse8.

Table 7b–Mode and interactional metadiscourse.

Table 7c–Mode and interactional metadiscourse of males.

Table 7d–Mode and interactional metadiscourse of females.

40Of Hyland's (2005) categories, self-mentions were used most frequently, followed by boosters, hedges, and engagement markers. Consistent with media richness theory, video and audio comments included more interactional metadiscourse than text comments did, especially self-mentions. However, text comments had more engagement markers and hedges. This is surprising, because text has traditionally been thought to be more impersonal and less engaged than speech, and because text (especially asynchronous CMC) allows for advance planning, whereas the use of hedges is often associated with unplanned speech production (Chafe & Danielewicz, 1987).

41The gender results were also somewhat surprising: In contrast to previous research that found that males used more boosters and females used more hedges (eg, Coates, 2003), in these three Voicethreads the male commenters used more hedges and engagement markers, and the female commenters used more boosters. Moreover, the gender-by-mode breakdown reveals that these gender differences are more pronounced in the spoken modes: Audio and video favor the use of engagement markers and hedges by males (compared to females), and females' use of boosters occurs mostly in audio. Females also use self-mentions especially often in audio and video. We suggest possible explanations for these results below.

6. Discussion

6.1. Sociability, Attitude, and Mode

42Our first research question asked: What differences, if any, are there in sociability and attitude across the three commenting modes available on Voicethread.com? We found more interactional metadiscourse terms in the audio and video comments than in the text comments overall, as media richness theory would predict, although this result is accounted for mainly by the much greater frequency of self-mentions in video and audio. Text comments actually contained more engagement markers (including 2nd-person pronouns) and hedges, as well as somewhat more attitude markers. If we consider interactional metadiscourse to be an indicator of sociability, text does not appear to lack sociability (cf. Daft & Lengel, 1984; Kiesler et al., 1984). If anything, the text commenters appear more other-aware, in contrast to the audio and video commenters, who appear more self-focused.

43The immediate reflection audio and video provide of the participants as they record their comments appears to increase their self-consciousness. Consider, for example, the frequent use of self-references (in boldface), along with meta-references to the video commenting technology itself, at the beginning of a video comment in the "Network" thread:

1) Hey Alec. [Firstname Lastname] here..9 um with eh.. [employer name] that's the day job.. And the night job, I run the net as [blog name]. And I'm speaking to you uh through Voicethread and this is the first time I've actually used the video feature very cool very cool and I'm actually in Victoria ... [M, video, "Network"]

44Sindoni (2011) observed that the videochat users in her study displayed self-consciousness, including arranging their hair as if their video image were a mirror, when communicating via video. Sindoni (2014: 10) notes that "casting a sidelong glance at oneself during a conversation may change, if not determine, the way one speaks, gesticulates, smiles and so on." Audio commenting also has a defamiliarizing effect in that it requires the commenter to speak out loud to a computer, which can feel unnatural.

45Expressions of attitude in the three Voicethreads mostly involved appreciation, followed by judgment and affect. These expressions were overwhelmingly positive overall, as might be expected in pedagogical contexts, which in our experience tend to be polite and nonconfrontational both online and FtF. Video and audio comments were more positive in tone than text, while text was more neutral or negative than video or audio. These latter results could be taken to support theories that consider text to be "disinhibited" and more contentious than richer CMC modes.

46However, inconsistent with those theories, commenters using text were more often positive than negative overall,10 as Table 5 shows, in keeping with the generally congenial tone of the three Voicethreads. Indeed, text was by far the preferred mode of commenting in these multimodal threads. It seems that text supports a wider range of expression than do the richer modes, a surprising finding in light of the aforementioned theories. However, one must take into consideration that textual CMC has been the primary means of communication online for most of the past 40 years, and users have extended its expressive potential out of necessity. As Walther (2011: 29, following Korzenny, 1978) observed, "the fewer one's choices of media, the more closeness [in this case, sociability] one may experience even through the lowest of bandwidths." We suggest, therefore, that rather than being a lean or impoverished mode, text has become the default mode of CMC, capable of supporting a wide range of expression, much like speech is the default mode of communication FtF. This is consistent with the observation of Walther (1996) and others that textual communication on the internet, in email, discussion forums, chatrooms, instant messaging, blogs, etc., can be highly social, even "hyperpersonal." The default status of text may eventually be replaced by one or more non-textual modes of CMC as they come into more common use, but as these findings show, that has not happened yet on Voicethread.com.

6.2. Sociability, Attitude, and Gender

47Our second research question asked about gender differences in sociability and attitude in the Voicethreads. We found that females expressed themselves significantly more positively than males in all modes and all attitude categories, whereas males tended to be more neutral or negative, especially in making judgments. This is consistent with previous findings that negativity is more frequent in comments by males than by females in discussion forums, newsgroups, and chat rooms, whereas females tend to be more supportive and polite toward their addressees (eg, Hall, 1996; Herring, 1994; Herring & Stoerger, 2014). The following is an example of an unambiguously negative judgment (in boldface) with negative affect that was posted via text by a boy in the "SciFi" thread:

2) The picture is painful and i hate it !!!!!!! [M, text, "SciFi"]

48In contrast, females favored positive expressions of appreciation (in italics), as in the following audio example from the same thread:

3) I completely agree with you ... his eyes, they are … human, but ... his body's .. just so different. [F, audio, "SciFi"]

49However, females' greater use of boosters and males' greater use of hedges is the opposite of what has been reported in the sociolinguistics literature for both spoken discourse (cf. Coates, 2003) and internet discussion forums (cf. Herring, 1995; Herring, Johnson & DiBenedetto, 1998). To understand these seemingly paradoxical results, we must consider patterns within particular threads. Most of the boosters by females were produced in the "Speeding" thread to agree empathetically that speeding was dangerous, and thus they supported the teacher's position, as in the following text example (boosters are in boldface).

4) It is entirely shocking that anyone would blatently [sic] risk getting caught speeding when you look at the blunt results in a chart like this. The amount of time saved is simply not worth the money, risk of safety, and other consequences. The most ridiculous thing is this chart states a traveling speed of 100 mph and that is a speed rarely reached by citizen drivers, so the time saved in a normal situation is even less. It really makes you wonder what's the point in speeding? [F, text, "Speeding"]

50In contrast, males in the "Speeding" thread more often emphasized their views by using a hedge word together with a strongly evaluative descriptive adjective, as in the following video comment (hedges are italicized.)

5) This is a .. very historic moment, because I am the first video voicethread out of our class. It's pretty sweet, I'm pumped. Um, this little thing right here in the voicethread's pretty interesting, uh, it's definitely not worth it to speed, because you're just gonna end up paying a lot more money and you're not gonna save much time, so speeding is bad, just don't do it, and that's pretty much all i have ... for today. [M, video, "Speeding"]

51Other examples of this strategy produced by males elsewhere in our data include: just fantastic, a bit staggering, kinda mindblowing, and pretty sick (ie, good). Rather than indicating uncertainty, these examples suggest a pragmatic strategy of downplaying one's enthusiasm while still making a strong evaluation, possibly to seem "cool," as though the speaker does not really care all that much. This interpretation is supported by the fact that the majority of such examples appear in the "Speeding" thread, where the participants are teenagers.

52Example 5 above is a video comment; however, hedging does not occur only in the spoken modes, where it might be expected, given that participants do not have the ability to edit as they speak.11 Rather, the incidence of hedging by males is roughly equally distributed between the spoken and text modes in the "Speeding" and "SciFi" threads, and males actually hedge more in text than in audio or video in the "Network" thread. This distribution supports the view that hedging is strategic, rather than a by-product of constraints on discourse production.

53Another unexpected finding is that males used more engagement markers than females–unexpected, because females have been reported in the literature to be more interpersonally interactive (eg, Coates, 2003). Engagement markers were used most frequently in the "SciFi" and "Network" threads, and include greetings, thanks, imperative verb forms, and, especially, 2nd person pronouns. Many instances of "you" were nonspecific in reference, as in "you have to wonder," but others addressed another participant in the thread or the group as a whole (especially in "Network") or the science fiction characters depicted in the multimedia slideshow that is the centerpiece of the Voicethread interface (in "SciFi"), as illustrated in the following examples (2nd-person address forms are in boldface):

6) [To all participants:] So hope you guys have a good time learning about this - this is an excellent topic to be exploring. [M, audio, "Network"]

7) [To a specific participant:] Hey Alec. It's [Firstname] … or the bass player. Em … I'm a student from Scotland .. as you'll know but .. a lot of people out there won't know. [M, audio, "Network"]

8) [To a character in a slide:] You hit me … I'm gonna shoot a laser beam … pkeew pkeew [shooting sounds] … you shall be blinded forever … Ah [whispering] pkeew, die! pkeew… [moans, in the role of the character who was shot] [M, audio, "SciFi"]

54Two thread-specific explanations suggest themselves for why both adult males and boys used more direct address forms. First, the "Network" thread was started by a male educator, Alec, who had recently conducted an FtF workshop on the same topic that many participants had attended. While participants of both genders greeted Alec, male participants may have felt more comfortable addressing him directly as "you" and referencing their shared knowledge, as the participant in example 7 did, due to their common gender identity. Second, many of the uses of "you" in the "SciFi" thread, which was started for primary school children (although a number of adults joined in), involved fantasy role-play with one of the characters portrayed in the slideshow, as in example 8. Only boys engaged in this activity. Girls simply evaluated the image in the slide, eg, "What an awesome piece of art" or "I think that's very interesting."

55Conversely, females exceeded males in the use of self-mentions, albeit only in the audio and video modes. The following example is from the "Network" thread (in response to the topic: What does the network mean to you?). Note the heavy use of 1st person singular pronouns (bolded).

9) Hi I'm [name] from [place], in [country]. I teach computer classes. My students are from five to fourteen years old. I also teach middle school math. Network learning means .. collaboration. It means the world is getting smaller. It means I'm attending conferences virtually .. that I never even knew existed. I can learn on my .. schedule, on car rides, in my living room, in my classroom at lunch. It means my students are having opportunities that I never would have brought to them on my own. I'm very new to all of this, but I know I can't go back to the way I worked before. I'm so happy I took .. the first steps six months ago to participate in these conversations, instead of just watching from the sidelines. [F, audio, "Network"]

56Men also use many 1st person singular pronouns in their response to the topic of the thread, especially in audio and video, but more of their clauses start with 3rd person subjects, such as "the network" and "students," than in the voice comments made by women. It is possible that audio and video commenting made the female participants feel especially self-conscious and thus triggered greater use of self-reference; this possibility could be explored in a future study that uses interview methods.

6.3. Gender, Mode, and Participation

57Our last research question asked about gender differences in mode choice and amount of participation. Males posted more and longer comments in each thread, consistent with the findings of previous CMC research (Herring & Stoerger, 2014). Males also chose to comment using video much more often than females did, although there was no gender difference in choice of audio. That almost all video commenters were male is consistent with the findings of Gefen and Straub (1997) and Venkatesh and Morris (2000) that males are more eager than females to adopt new computer technologies. Almost all the video comments included some reflection on the fact that the commenter was "trying out" the video mode for the sake of experimentation, because it was new (eg, examples 1 and 5). While choice of text or audio has no apparent gender connotations, it seems that choice of video mode indexes a certain (bold, exploratory) masculine gender identity in these Voicethreads.

58It should also be noted that the audio and video commenting modes on Voicethread are not as easy to use as the text mode, and a number of invalid comments were posted in video and audio as a consequence (see note 7). Previous research found that women are more concerned than men with ease of use in deciding whether to adopt a new technology (Venkatesh & Morris, 2000). The perceived or actual difficulty of video commenting could have been a specific factor discouraging the use of video commenting by females.

7. Conclusions

59This study systematically compared three modes of CMC on the same platform, applying a common set of linguistic categories to each. Our findings suggest that text commenting is the default on the multimodal Voicethread platform, as it is for CMC on the internet in general. Furthermore, video and audio behaviors tend to pattern together, in contrast to text. In our data, textual CMC is associated with neutral and/or negative expression and brevity, whereas video and audio commenting is associated with self-consciousness and self-focus.12 Whether the Voicethread commenters chose a mode because they wanted to express themselves in a particular way, or whether the modes affected how the commenters expressed themselves, is a question that awaits future investigation.

60This study is also a first contribution to understanding how gender identities interact with mode of communication in multimodal CMC. We found that some common gendered CMC patterns persist in Voicethreads independent of mode, such as those involving negativity vs positivity, judgment vs appreciation, and the greater willingness of males to try out novel technological modes. Other expected gender patterns were reversed, notably, the use of hedges, boosters, and engagement markers; interestingly, these patterns were more pronounced in the spoken modes, suggesting that textual CMC may actually minimize some gender differences in linguistic expression, contrary to what has been suggested previously (eg, Hall, 1996). Mode choice also interacts with gender and the tone of a message: Males are notably more negative in text than females are, or than either gender is in the other modes, providing support for early observations that most hostile or strongly negative message content in textual CMC on the internet is produced by males (Herring, 1994, 1995).

61These observed differences constitute gender identity performances (cf. Butler, 1990) that reproduce socially recognizable gendered identities–for example, the pleasant, supportive female and the forthright, daring, "cool" male. Even the apparently paradoxical uses of boosters by females and hedges by males contribute to these identities, as suggested in the discussion above.

62Another unexpected finding was that male behavior paralleled the patterns for text (neutral and negative attitude; use of hedges and engagement markers), and female behavior paralleled the patterns for audio and video (positive attitude; self-mentions).13 The notion that connects females, audio/video, and positivity appears to be "sociability," and the notion that connects males, text, neutrality/negativity, and hedges is arguably "distancing." That said, it remains somewhat of a mystery why text promotes greater use of engagement markers overall. This may be due not so much to the properties of text as to the self-consciousness-inducing effects of the two spoken modes, which focus users' attention more on themselves than on others.

63A limitation of this study is the relatively small and heterogeneous sample analyzed. Future Voicethread research should analyze more comments from all mode types while focusing on a single demographic and discussion type, in order to obtain the most internally consistent findings. Specific to mode, the relative lack of video comments in the sample overall as well as the unevenness of their distribution across the threads limits the significance of our findings for video. Having more video data would certainly be desirable. Another potential limitation is the rubrics used to measure sociability and attitude in this study; these variables could be operationalized in various ways, and other measures might produce somewhat different results. Finally, the fact that Voicethread.com is an emergent technology and that most users in our data were inexperienced with the platform means that the snapshot we have presented here may be ephemeral. Communication in the novel commenting modes, asynchronous audio and video, will likely evolve as internet users become more familiar with their use. Such familiarity can be expected to lead to more strategic usage that takes advantage of the inherent properties of each mode but that is also susceptible to influence from social and situational factors present in different contexts of use.

64Multimodality is a growing trend in digital media. It has advanced beyond users watching videos or playing graphical games to become part of CMC–user-to-user conversation–itself. It stands to reason that IMPs and multimodal interactive threads such as those analyzed in this study will become more commonplace, providing new resources for identity performances. Here we focused exclusively on verbal language, but physical appearance plays a role in video, as well, giving rise to different expressive resources (dress, gesture, facial expression, posture, etc.) that communicators can manipulate and that can be expected to vary by gender (see, eg, Kapidzic & Herring, 2014, on gender differences in social media profile photographs). Video and audio, being richer in social presence, also reveal more about a communicator's physical, offline identity. This richness has implications for the kinds of identity performances that are possible and the resources available to users to perform gender in those modes.

Bibliography

Balaji, M. S. & Chakrabarti, D. (2010). "Student interactions in online discussion forum: Empirical research from 'Media Richness Theory' perspective." Journal of Interactive Online Learning, vol. 9, n° 1. pp. 1-22.

Bourlai, E. & Herring S. C. (2014). "Multimodal communication on Tumblr: 'I have so many feels!'." Proceedings of the 2014 ACM conference on Web science. Bloomington, Indiana, USA. pp. 171-175. Available online: http://info.ils.indiana.edu/~herring/tumblr.pdf

Butler, J. (1990). Gender Trouble: Feminism and the Subversion of Identity. London: Routledge.

Chafe, W. & Danielewicz, J. (1987). "Properties of spoken and written language." In Horowitz, R. & Samuels, S. J. (eds.). Comprehending Oral and Written Language. San Diego, CA: Academic Press. pp. 83-112.

Ching, Y.-H. & Hsu, Y.-C. (2013). "Collaborative learning using VoiceThread in an online graduate course." Knowledge Management & E-Learning, vol. 5, n° 3. pp. 298-314.

Coates, J. (2003). Women, Men, and Language (2nd edition). London: Longman.

Cooper, A. & Sportolari, L. (1997). "Romance in cyberspace: Understanding online attraction." Journal of Sex Education and Therapy, vol. 22, n° 1. pp. 7-14.

Culnan, M. J. & Markus, M. L. (1987). "Information technologies." In Jablin, F., Putnam, L. L., Roberts, K. & Porter, L. (eds.). Handbook of Organizational Communication. London: Sage. pp. 420-443.

Daft, R. & Lengel, R. H. (1984). "Information richness: A new approach to managerial behavior and organizational design." Research in Organizational Behavior, vol 6. pp. 191-233.

Ditkoff, J. & Young, K. (2011). "Speak up! Using the VoiceThread to encourage participation and collaboration in library instruction." In Corrado, E. & Moulaison, H. (eds.). Getting Started with Cloud Computing: A LITA Guide. New York: Neal-Schuman Publishers. pp. 191-199.

Du Bois, J., Schuetze-Coburn, S., Cumming, S. & Paolino, D. (2014). "Outline of discourse transcription." In Edwards, J. A. & Lampert, M. D. (eds.). Talking Data: Transcription and Coding in Discourse Research. New York/London: Psychology Press. pp. 45-90.

Gefen, D. & Straub, D. W. (1997). "Gender differences in the perception and use of e-mail: An extension to the Technology Acceptance Model." MIS Quarterly, vol. 21, n° 4. pp. 389-400. Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.89.9847&rep=rep1&type=pdf

Hall, K. (1996). "Cyberfeminism." In Herring, S. C. (ed.). Computer-Mediated Communication: Linguistic, Social, and Cross-Cultural Perspectives. Amsterdam: John Benjamins. pp. 147-170. Available online: http://www.colorado.edu/linguistics/faculty/kira_hall/articles/HALL1996.pdf

Herring, S. C. (1994). "Politeness in computer culture: Why women thank and men flame." In Cultural Performances: Proceedings of the Third Berkeley Women and Language Conference. Berkeley, CA: Berkeley Women and Language Group. pp. 278-294. Available online: http://ella.slis.indiana.edu/~herring/politeness.1994.pdf

Herring, S. C. (1995). "Men's language on the Internet." Nordlyd, vol. 23. pp. 1-20. Available online: http://ella.slis.indiana.edu/~herring/men.1995.pdf

Herring, S. C. (2004). "Computer-mediated discourse analysis: An approach to researching online behavior." In Barab, S. A, Kling, R. & Gray, J. H. (eds.). Designing for Virtual Communities in the Service of Learning. New York: Cambridge University Press. pp. 338-376. Preprint available online: http://ella.slis.indiana.edu/%7Eherring/cmda.pdf

Herring, S. C. (2015). "New frontiers in interactive multimodal communication." In Georgakopoulou, A. & Spilioti, T. (eds.). The Routledge Handbook of Language and Digital Communication. London: Routledge. pp. 398-402.

Herring, S. C., Johnson, D. & DiBenedetto, T. (1998). "Participation in electronic discourse in a 'feminist' field." In Coates, J. (ed.). Language and Gender: A Reader. Oxford: Blackwell. pp. 197-210. Available online: http://ella.slis.indiana.edu/~herring/participation.1998.pdf

Herring, S. C. & Stoerger, S. (2014). "Gender and (a)nonymity in computer-mediated communication." In Ehrlich, S., Meyerhoff, M. & Holmes, J. (eds.). The Handbook of Language, Gender, and Sexuality. Chichester, UK: Wiley. pp. 567-586. Prepublication version available online: http://ella.slis.indiana.edu/~herring/herring.stoerger.pdf

Hyland, K. (2005). Metadiscourse. Exploring Interaction in Writing. Oxford: Continuum.

Kapidzic, S. & Herring, S. C. (2014). "Race, gender, and self-presentation in teen profile photographs." New Media & Society. DOI: 10.1177/1461444813520301. Prepublication version available online: http://ella.slis.indiana.edu/~herring/race_gender.photos.pdf

Kiesler, S., Siegel, J. & McGuire, T. W. (1984). "Social psychological aspects of computer-mediated communication." American Psychologist, vol. 39. pp. 1123-1134. Available online: https://pdfs.semanticscholar.org/3b2f/e281ae7bb3bf362db8e2ed7c045fe456da94.pdf

Kim, M.-S. & Raja, N. S. (1990). Verbal aggression and self-disclosure on computer bulletin boards. ERIC document (ED334620). Available online: http://files.eric.ed.gov/fulltext/ED334620.pdf

Korzenny, F. (1978). "A theory of electronic propinquity: Mediated communications in organizations." Communication Research, vol. 5, n° 1. pp. 3-24.

Martin, J. R. & White, P. R. R. (2005). The Language of Evaluation: Appraisal in English. Basingstoke: Palgrave Macmillan.

Millard, M. (2010). "Analysis of interaction in an asynchronous CMC environment." Web Science Conf. 2010, April 26-27, Raleigh, NC. Available online: http://journal.webscience.org/391/2/websci10_submission_106.pdf

Murray, D. E. (1988). "The context of oral and written language: A framework for mode and media switching." Language in Society, vol. 17, n° 3. pp. 351-373.

Newon, L. (2011). "Multimodal creativity and identities of expertise in the digital ecology of a World of Warcraft guild." In Thurlow, C. & Mroczek, K. (eds.). Digital Discourse: Language in the New Media. Oxford and New York: Oxford University Press. pp. 203-231.

Pacansky-Brock, M. (2010). "VoiceThread: Enhanced community, increased social presence and improved visual learning." Award winner: 2010 Sloan-C Effective Practice Award. http://sloanconsortium.org/effective_practices/VoiceThread-enhanced-community-increased-social-presence-and-improved-visual-lea

Pihlaja, S. (2011). "Cops, popes, and garbage collectors: Metaphor and antagonism in an atheist/Christian YouTube video thread." Language@Internet, vol. 8, article 1.

Rheingold, H. (1993). The Virtual Community: Homesteading on the Electronic Frontier. Reading, MA: Addison Wesley. Available online: http://www.rheingold.com/vc/book/

Short, J., Williams, E. & Christie, B. (1976). The Social Psychology of Telecommunications. London: Wiley.

Sindoni, M. G. (2011). "'Mode-switching': Speech and writing in videochats." Paper presented at GURT 2011: Discourse 2.0: Language and New Media, Washington, DC, March 11.

Sindoni, M. G. (2014). "Through the looking glass: A social semiotic and linguistic perspective on the study of video chats." Text & Talk, vol. 34, n° 3. pp. 325-347.

Venkatesh, V. & Morris, M. G. (2000). "Why don't men ever stop to ask for directions? Gender, social influence, and their role in technology acceptance and usage behavior." MIS Quarterly, vol. 24, n° 1. pp. 115-139. Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.361.4468&rep=rep1&type=pdf

Voicethread (n.d.). Amazing conversations about media. https://voicethread.com/

Walther, J. B. (1996). "Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction." Communication Research, vol. 23. pp. 1-43. Available online: https://blogs.commons.georgetown.edu/cctp-505-fall2009/files/computer-mediated-communication23.pdf

Walther, J. B. (2011). "Visual cues in computer-mediated communication: Sometimes less is more." In Kappas, A. & Krämer, N. C. (eds.). Face-to-Face Communication over the Internet: Emotions in a Web of Culture, Language, and Technology. Cambridge: Cambridge University Press. pp. 17-38.

Weir, L. (2008, April 16). "Voicethread extends the classroom with interactive multimedia albums." Edutopia. http://www.edutopia.org/voicethread-interactive-multimedia-albums


Notes

1 The term "mode" is used here to refer to a CMC option within a single platform, as distinct from the term "medium," which refers to, eg, CMC in contrast to face-to-face communication or mediated communication hosted on a non-internet platform (see also Murray, 1988).

2 Since our sample Voicethreads include participants of all ages, to avoid repetition of the cumbersome constructions "men and boys" and "women and girls," we refer henceforth to "males" and "females."

3 Source: http://www.appappeal.com/app/voicethread/

4 Daft and Lengel (1984) distinguish between formal and informal written documents, situating the former lower on the richness hierarchy than the latter. Because one could argue that informal written documents (such as personal letters) are richer than formal asynchronous CMC (such as emailed job announcements), we include only "formal written documents" on the scale in Figure 2 to avoid debate on this question, which is irrelevant to the present study. Similarly, telephone communication is omitted, to avoid debate as to its placement relative to audio and video CMC. Finally, no distinction is made in the figure between synchronous and asynchronous audio, video, and textual CMC, whose relative positions are the same regardless of synchronicity.

5 The Voicethreads were public, and anyone could post to them. Several comments in each thread were from individuals who appeared to be outside the original target group of participants.

6 More precisely, Hyland (2005: 37) defines metadiscourse as "self-reflective expressions used to negotiate interactional meanings, assisting the writer (or speaker) to … engage with readers".

7 Out of our data set of 363 comments, 43 comments were invalid because they were empty or contained nothing but noise. Invalid comments appeared to be the result of problems using the unfamiliar commenting technology; this is supported by the breakdown of invalid comments by mode: video (29%), audio (17%), text (7%). Moreover, the thread with the youngest participants, "SciFi," had the greatest proportion of invalid comments (20%), compared with the teen "Speeding" thread (8.4%) and the adult "Network" thread (1.8%).

8 Sixty-two metadiscourse terms used by commenters of unknown gender were excluded from this table; therefore, the combined n for males and females is less than the total number of metadiscourse terms.

9 In the transcribed spoken examples, two dots indicate a short pause and three dots indicate a medium-length pause, in keeping with common practice in spoken discourse transcription (eg, Du Bois et al., 2014).

10 Such was not the case in Bourlai and Herring's (2014) comparison of messages conveyed via text and images on Tumblr. In that study, text was more often negative than positive, while image content was more often positive than negative.

11 In principle, that is. Several commenters in the "Network" thread appear to have been reading from notes prepared in advance.

12 Differences were also observed across the modes that can be ascribed to differences between speaking and writing (cf. Chafe & Danielewicz, 1987). For example, audio and (especially) video comments, because they typically were not planned in advance, contained more false starts, pause fillers (um, hm, etc.), and phatic communication (such as greetings and closings) than did text comments, and voice comments tended to be more rambling and less concise.

13 This is so, even though more females than males commented in text, and more males than females commented in audio and video (see Table 4).


List of illustrations

Figure 1–An audio comment (top left), a video comment (bottom left), and a text comment (right) in a Voicethread.
http://journals.openedition.org/alsic/docannexe/image/3007/img-1.jpg (image/jpeg, 136k)
Figure 2–Continuum of media richness (adapted from Daft and Lengel, 1984).4
http://journals.openedition.org/alsic/docannexe/image/3007/img-2.jpg (image/jpeg, 16k)
Table 1–Martin and White's (2005) attitude categories.
http://journals.openedition.org/alsic/docannexe/image/3007/img-3.png (image/png, 127k)
Table 2–Hyland's (2005) interactional metadiscourse categories.
http://journals.openedition.org/alsic/docannexe/image/3007/img-4.png (image/png, 66k)
Table 3–Participation by independent variable: gender, mode, thread7.
http://journals.openedition.org/alsic/docannexe/image/3007/img-5.png (image/png, 109k)
Table 4–Participation by mode and gender.
http://journals.openedition.org/alsic/docannexe/image/3007/img-6.png (image/png, 134k)
Table 5–Overall tone of attitude expression by gender and mode. Per 1000 words and as (percentages).
http://journals.openedition.org/alsic/docannexe/image/3007/img-7.png (image/png, 198k)
Table 6a–Affect by gender and mode. Per 1000 words and as (percentages).
http://journals.openedition.org/alsic/docannexe/image/3007/img-8.png (image/png, 193k)
Table 6b–Judgment by gender and mode. Per 1000 words and as (percentages).
http://journals.openedition.org/alsic/docannexe/image/3007/img-9.png (image/png, 203k)
Table 6c–Appreciation by gender and mode. Per 1000 words and as (percentages).
http://journals.openedition.org/alsic/docannexe/image/3007/img-10.png (image/png, 203k)
Table 7a–Gender and interactional metadiscourse8.
http://journals.openedition.org/alsic/docannexe/image/3007/img-11.png (image/png, 75k)
Table 7b–Mode and interactional metadiscourse.
http://journals.openedition.org/alsic/docannexe/image/3007/img-12.png (image/png, 106k)
Table 7c–Mode and interactional metadiscourse of males.
http://journals.openedition.org/alsic/docannexe/image/3007/img-13.png (image/png, 109k)
Table 7d–Mode and interactional metadiscourse of females.
http://journals.openedition.org/alsic/docannexe/image/3007/img-14.png (image/png, 106k)

References

Electronic reference

Susan C. Herring and Bradford Demarest, "I'm the first video Voicethread–it's pretty sweet, I'm pumped": Gender and Self-Expression on an Interactive Multimodal Platform, Alsic [Online], Vol. 20, n° 1 | 2017, Online since 26 September 2017, connection on 29 March 2024. URL: http://journals.openedition.org/alsic/3007; DOI: https://doi.org/10.4000/alsic.3007


About the authors

Susan C. Herring

Susan C. Herring is Professor of Information Science and Linguistics and Director of the Center for Computer-Mediated Communication at Indiana University Bloomington. A pioneer in language-focused study of computer-mediated communication (CMC), she has been researching structural, pragmatic, interactional, and social phenomena in digital communication, especially in relation to gender, since the early 1990s. Her recent interests include online multilingualism, multimodal CMC, and telepresence robot-mediated communication. A former editor of the Journal of Computer-Mediated Communication and currently editor of the online journal Language@Internet, she has also edited or co-edited three volumes on CMC: Computer-Mediated Communication: Linguistic, Social and Cross-Cultural Perspectives (John Benjamins, 1996), The Multilingual Internet: Language, Culture, and Communication Online (Oxford University Press, 2007, with B. Danet), and The Handbook of Pragmatics of Computer-Mediated Conversation (Mouton, 2013, with D. Stein and T. Virtanen).
Affiliation: Department of Information & Library Science, Indiana University, Bloomington.
Email: herring@indiana.edu
Web: http://info.ils.indiana.edu/~herring/
Address:
Department of Information & Library Science, Wells Library 011, Indiana University, Bloomington, IN 47405, USA.

Bradford Demarest

Bradford Demarest is a Lecturer in the Department of Informatics in the School of Informatics, Computing, and Engineering at Indiana University, where he is currently earning his doctorate in Information Science. His research interests revolve around the ways that socio-cultural identities are expressed in, and interact with, discourse and technological aspects of communication – both in multimodal computer-mediated communication and in the more traditional written and spoken genres of scholarly communication.
Affiliation: Department of Information & Library Science, Indiana University, Bloomington.
Email: bdemares@indiana.edu
Address: Department of Information & Library Science, Wells Library 011, Indiana University, Bloomington, IN 47405, USA.


Copyright

CC-BY-NC-ND-4.0

Only the text may be used under the CC BY-NC-ND 4.0 licence. All other elements (illustrations, imported files) are "All rights reserved", unless otherwise stated.
