
    Chapter 6. What the Citation Project Tells Us about Information Literacy in College Composition

    Sandra Jamieson

    Drew University

    Introduction

    In the introduction to the Framework for Information Literacy for Higher Education (Framework for IL) (ACRL, 2015), the ACRL Board explains (through footnotes) that the thinking behind the Framework for IL is indebted to Thomas Mackey and Trudi Jacobson’s (2011, 2014) work on metaliteracy. That work, it notes,

    expands the scope of traditional information skills (i.e., determine, access, locate, understand, produce, and use information) to include the collaborative production and sharing of information in participatory digital environments (collaborate, produce, and share). This approach requires an ongoing adaptation to emerging technologies and an understanding of the critical thinking and reflection required to engage in these spaces as producers, collaborators, and distributors (footnote 7, citing Mackey & Jacobson, 2014).

    As writing teachers and librarians develop ways to help students acquire the dispositions identified in the Framework for IL, it is useful to look closely at the kind of researched writing produced before its introduction. In addition to providing a sense of the kinds of resources being consulted in response to specific writing contexts, such analysis provides a baseline from which instructors and librarians can work. We have learned a lot about attitudes, practices, and expectations from student interviews such as those by Project Information Literacy (Head & Eisenberg, 2009, 2010; Head, 2013) and from protocol analysis (see Blackwell-Starnes, Chapter 7, this collection), but analysis of the final product—the research paper—offers insight into how those various habits of mind play out. Review of student research papers produced before the Framework for IL reveals why the shift to a metacognitive information literacy (IL) is welcomed by many involved in IL instruction. It also demonstrates the impact of some of the limitations of the ACRL Information Literacy Competency Standards for Higher Education (IL Standards) (ACRL, 2000).

    As others have noted (see Norgaard & Sinkinson, Chapter 1, this collection), there are many institutional challenges preventing faculty and librarians from working together to develop shared IL pedagogy. Not the least of these is the common location of IL instruction in the required first-year writing (FYW) course, where IL assignments are too often product-based, focusing attention away from IL as a process (Blackwell-Starnes, Chapter 7, this collection) and obscuring the vision of IL as “a rich multifaceted literacy that is responsive to changing contexts and opportunities” (Norgaard & Sinkinson, Chapter 1, this collection). Yet the location of IL instruction is not likely to change with the introduction of the new Framework for IL, making it essential for those who would develop a more responsive IL pedagogy to understand what happens in this current context. For this reason, analysis of the research papers produced in FYW prior to 2011 is particularly instructive. FYW is the one college-level course that almost always includes a researched project (Hood, 2010). Whether the instruction takes the form of the “one shot” library visit (Gavin, 1995; Jacobs & Jacobs, 2009), a program-wide IL component (Holliday & Fagerheim, 2006; Jacobson & Mackey, 2007), embedded librarians (Deitering & Jameson, 2008; Kesselman & Watstein, 2009), or team-taught courses (Alvarez & Dimmock, 2007; Jacobson & Mackey, 2007), final papers are expected to reflect what students have learned about research writing (Howard & Jamieson, 2014) and IL. Because IL instruction is often formally or informally assessed based upon those papers, we can also use them to assess the IL Standards.

    Articles published by librarians and by writing teachers that focus on or include data on papers produced in FYW courses (Grimes & Boening, 2001; Carlson, 2006; McClure & Clink, 2009) or other lower-level introductory courses mostly populated by first-year students (Davis & Cohen, 2001; Jenkins, 2002; Davis, 2002, 2003; Carlson, 2006; Knight-Davis & Sung, 2008) can help us begin this assessment, but all report on single-institution studies, so they may be too limited to allow broad conclusions. Such single-site research has important local relevance, permitting the campus community to explore questions about the kinds of sources selected and retrieved at a specific moment in a specific place and develop responsive pedagogies and policies. Individually, though, they reveal little about national patterns or trends. These studies occur in isolation and are designed to address local concerns rather than being developed in response to other research. Indeed, single-institution studies are so inward-focused that very few replicate the methods or coding categories of other studies with which they might compare data. Lacking such overlap, existing studies cannot be easily aggregated as part of an evolving national picture of source use. The two exceptions to this are Project Information Literacy and the Citation Project, both of which are engaged in multi-institution study of student research practices and products.

    This chapter reports on data from a study of the Citation Project Source-Based Writing (CPSW) Corpus. The study in question explores the types of sources selected and cited in 800 pages of source-based writing by 174 students enrolled in FYW courses at 16 U.S. institutions, ranging from community colleges to Ivy Leagues. Source codes replicate coding categories of earlier studies (Carlson, 2006; McClure & Clink, 2009). Sub-codes allow the data to be broken out for comparison with other studies (Davis & Cohen, 2001; Jenkins, 2002; Davis, 2002, 2003; Knight-Davis & Sung, 2008), extending their reach and reinforcing some of their findings while challenging other oft-repeated claims. This research allows us to understand the limits of decontextualized and linearly focused IL instruction (Norgaard & Sinkinson, Chapter 1, this collection). The data indicate that, nationally, students are broadly able to identify, locate, and access information from apparently appropriate sources in sanctioned ways; however, a closer look at which texts are cited and the ways they are incorporated into the papers reveals the need to go beyond what has for many become a checklist mentality to what the Framework for IL describes as “an expanded definition of information literacy [that] emphasize[s] dynamism, flexibility, individual growth, and community learning” (“Introduction”).

    The Citation Project Data

    Citation Project Source-Based Writing (CPSW) Corpus

    As reported elsewhere (Jamieson & Howard, 2013; Jamieson, 2013), the Citation Project Source-Based Writing (CPSW) Corpus gathered research papers from 16 institutions distributed regionally across 12 U.S. states and also distributed across 2008-2010 Carnegie classifications (see Table 6.1). The papers were produced at the end of whatever the institution identified as its standard FYW course requiring a 7-10 page research paper using at least five sources. Only decontextualized final research papers were collected; Institutional Review Board (IRB) approvals required researchers to protect students and faculty from possible repercussions should plagiarism be detected, which prevented collection of any demographic information, syllabi, or assignments. The study focuses only on the finished product of the research process—the papers—gathered between Spring 2008 and Spring 2010, and the source use in a total of 50 pages from each participating institution was coded. In all, 174 papers and works cited lists were examined, along with the 1,911 citations they included and the 930 sources cited.

    Table 6.1. Institution types in the Citation Project Source-Use Study

    | Level | Control | Carnegie classification & description (2008-2010 data) | N |
    |---|---|---|---|
    | 2-year | Public | Assoc/Pub-R-M (Associate’s—Public Rural-serving Medium) | 2 |
    | 4-yr | Private | Bac/A&S (Baccalaureate Colleges—Arts & Sciences) | 2 |
    | 4-yr | Public | Bac/Diverse (Baccalaureate Colleges—Diverse Fields) | 1 |
    | 4-yr plus | Public | Master’s L (Master’s Colleges & Universities, larger programs) | 4 |
    | 4-yr plus | Public | DRU (Doctoral/Research Universities) | 1 |
    | 4-yr plus | Public | RU/H (Research Universities, high research activity) | 2 |
    | 4-yr plus | Private | RU/VH (Research Universities, very high research activity) | 2 |
    | 4-yr plus | Public | RU/VH (Research Universities, very high research activity) | 2 |
    | TOTAL | | | 16 |

    Table 6.1 continued

    | Carnegie classification | Papers available for study | Papers whose sources could not all be retrieved | Papers coded | Pages coded |
    |---|---|---|---|---|
    | Assoc/Pub-R-M | 54 | 31 | 23 | 100 |
    | Bac/A&S Private | 80 | 23 | 20 | 100 |
    | Bac/Diverse Public | 85 | 16 | 10 | 50 |
    | Master’s L Public | 136 | 43 | 40 | 200 |
    | DRU Public | 43 | 4 | 16 | 50 |
    | RU/H Public | 37 | 15 | 20 | 100 |
    | RU/VH Private | 58 | 23 | 20 | 100 |
    | RU/VH Public | 68 | 16 | 25 | 100 |
    | Total | 561 | 171 | 174 | 800 |

    Locating Sources Cited

    In order to arrive at the 174 papers used in this study, 171 papers were rejected (Table 6.1) because researchers were unable to retrieve all of the sources listed. In some cases, the irretrievable sources were part of localized databases set up by the participating institution but not accessible to researchers, or were part of larger collections behind a prohibitive paywall. In other cases, citations provided inadequate documentation, especially URLs containing typographical errors; many others pointed to URLs that no longer existed or had been overwritten. Other bibliographic studies spend considerable time discussing concerns about unretrievable Internet sources. Grimes and Boening (2001) note that 30% of the URLs cited in their sample could not be found “due to either student misreporting of the URL or inactive links” (p. 19); and when Davis (2003) checked URLs for “accuracy and persistence” six months after collecting the papers in his 2001 study, he found that 35% did not take him to the original source (p. 55).

    In general, these researchers attribute their difficulty locating cited sources to errors on the part of the students or imply that the difficulty reveals the inadequacy of the source itself, citing both as further evidence of the need to strengthen IL instruction. It is possible, though, that many of the URLs were correct when the students listed them. Lepore (2015) notes that “the average life of a webpage is 100 days,” and Zittrain, Albert, and Lessig (2014) found that 70% of the 1,002 sampled URLs cited in the Harvard Law Review, the Harvard Journal of Law and Technology, and the Harvard Human Rights Journal failed to send readers to the information originally cited (what they term “reference rot”). The same was true of 49.9% of the 555 URLs in all published United States Supreme Court opinions. These findings suggest that, rather than jumping too quickly to conclusions about the quality of the Internet sources students selected or blaming student honesty or IL skills for irretrievable sources, teachers and librarians should revise IL instruction to include discussion of the role of accurate citations, the problem of “reference rot,” and the importance of listing DOIs when they exist.

    The fact that Citation Project researchers were able to use the information provided to locate most of the Internet sources in the sample papers through the Internet Archive (http://archive.org/) suggests that reference rot might have been part of the problem in other studies as well. Successful retrieval of sources by Citation Project researchers was higher when papers used MLA-style works cited lists that note the access date, but approximate searches based on the date of the paper were also largely effective. The Internet Archive allowed Citation Project researchers to read Internet sources as they appeared the day the student consulted them.
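
    For readers who want to check or retrieve archived copies of cited URLs in a similar way, the Internet Archive’s Wayback Machine offers a public “availability” endpoint that returns the capture closest to a given date. The Python sketch below is only an illustration of that lookup, not part of the Citation Project’s method; the example URL and access date are hypothetical.

```python
import json
import urllib.parse
import urllib.request

def closest_snapshot(url, timestamp):
    """Return the Wayback Machine capture of `url` closest to `timestamp`
    (YYYYMMDD), or None if the Internet Archive has no capture."""
    query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
    with urllib.request.urlopen(
        "https://archive.org/wayback/available?" + query
    ) as response:
        data = json.load(response)
    return data.get("archived_snapshots", {}).get("closest")

# Hypothetical example: look for a capture near a paper's reported access date.
snapshot = closest_snapshot("http://www.cdc.gov/obesity/", "20100405")
if snapshot and snapshot.get("available"):
    print("Archived copy:", snapshot["url"], "captured", snapshot["timestamp"])
else:
    print("No archived capture found near that date.")
```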

    Coding Categories

    Data on source use has been reported elsewhere, along with a discussion of methods (Jamieson & Howard, 2013; Jamieson, 2013). This chapter focuses on the sources themselves, which were classified into one of 14 types (see Table 6.2). The category “book” is uniformly described across studies of student source use (Davis & Cohen, 2001; Jenkins, 2002; Davis, 2002, 2003; Carlson, 2006; Knight-Davis & Sung, 2008; McClure & Clink, 2009); Citation Project coding replicated this category and, like other studies, included books accessed in any format. The definition of journal article as peer-reviewed and written for an academic audience is also quite standard, although Citation Project coding replicated Jake Carlson’s (2006) language: “written by an academic expert in the field, incorporating scholarly perspectives such as theory or research, and having an intended audience of other individuals knowledgeable in the field” (p. 16). The category “Specialized news source and other periodicals” is defined by Citation Project coding the way Carlson defines “Magazine article” (“reporting an event, opinion, or other issue from a non-scholarly perspective … written in a way that would be accessible to a general audience,” p. 16), and includes articles from publications such as The Economist, Nature, Mother Jones, The New Yorker, and Harper’s, regardless of how they were accessed. In the case of encyclopedias, dictionaries, and government documents, again no distinction was made between those consulted electronically and those consulted in print, although almost all of the citations indicated that they were consulted online. The category “General news source” includes traditional newspapers that appear in print and electronically, as well as news delivered by television and radio and related websites (where broadcast news and related information is repeated and updated), and via apps, social media, and email and text updates. Neither the reputation nor the politics of the news source was noted, although sources were also coded using a slight modification of the categories developed by McClure and Clink (2009): “information (apparently without bias),” “opinion,” “advocacy,” “commercial,” and “self-help.”

    While Davis (2003) does note that some websites in his studies would probably be deemed sufficiently informational to be included in student papers, he does not break out URLs in this way, nor do Carlson (2006) or Knight-Davis and Sung (2008). McClure and Clink (2009) do make that distinction, and Citation Project research followed their lead and used the categories they developed. Informational Internet sites are defined as sources that seem to be presenting information without bias or commercial backing, such as the sites of the American Cancer Society and the CDC. Researchers also coded an additional category not included by McClure and Clink, “Internet, multiple-author”: special-interest websites or eZines that include articles by a number of contributors but do not have a print version and are not associated with any news or entertainment organizations (most notably fan sites for sports, collectables, or activities). This category includes commercially produced multi-user fandom sites associated with films, books, or television shows, although such sites were also coded as commercial following McClure and Clink.

    Data were analyzed using SPSS (Statistical Package for the Social Sciences) 14.0.
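
    The frequency, percent, and cumulative percent columns reported in Tables 6.2 through 6.6 are simple tabulations of the coded source list. The short pandas sketch below reproduces that arithmetic for an invented handful of coded source types; it is offered only as an illustration of the calculation, not as the Citation Project’s SPSS procedure.

```python
import pandas as pd

# Invented sample data: one coded source type per cited source.
coded_sources = pd.Series(
    ["journal", "book", "public internet", "journal",
     "general news", "public internet", "journal", "book"]
)

# Count how often each source type occurs (descending order).
freq = coded_sources.value_counts()

# Build a frequency table like Tables 6.2-6.6: frequency, percent,
# and cumulative percent of all coded sources.
table = pd.DataFrame({
    "Frequency": freq,
    "Percent": (freq / freq.sum() * 100).round(2),
})
table["Cumulative Percent"] = table["Percent"].cumsum()

print(table)
print("TOTAL:", int(freq.sum()))
```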

    Limitations of the Study

    Although it includes students from 16 institutions, this study only provides a snapshot of first-year college students in the U.S. as they leave a specific course in a given year. As Carlson (2006) notes about his own study, the data are potentially skewed by the fact that the instructors from whose classes the papers were drawn and the students who submitted their papers were volunteers rather than being randomly selected (randomizing occurred after papers had been submitted). Instructors who felt they were doing a good job teaching research skills were probably more likely to volunteer than those who had doubts; experienced instructors were also probably more likely to volunteer. As for the students, those who were misusing sources or who had very low confidence in their research and citation skills probably selected out of the study. Papers were drawn from the standard FYW course, which is sometimes the second writing course for those deemed weaker writers, which stronger writers at some institutions place out of, and which English Language Learners and multilingual writers are sometimes tracked out of. These factors provided a fairly evenly prepared pool of writers for the study, but they also limit what might be learned from outliers.

    Findings

    Types of Sources Selected and Retrieved

    Surveys of faculty expectations for the FYW research paper (see Howard & Jamieson, 2014) and conversations with librarians about sources they recommend indicate that, in general, books, journal articles, government documents, and specialized news sources (accessed either electronically or in print) are considered appropriate sources for FYW research papers. As Head and Eisenberg (2010) among others have found, many assignments still require that students use a specific number of types of sources, often one book and at least two journal articles. Once those requirements are satisfied, many faculty also consider newspaper articles appropriate, although of course this depends on the context and the nature of the topic selected. The majority of FYW research papers address general interest topics selected by the student, and 85% of the assignments studied by Head and Eisenberg (2010) either expected students to generate their own topic or provided acceptable topics from which they could choose (p. 8). This also appears to describe the papers in the CPSW, whose topics are frequently abortion, gun control, Title IX, global warming, marijuana (legalization/health benefits), and Internet privacy. Some papers focus on literature or discipline-specific topics requiring specialized sources, but these are the minority.

    Table 6.2. Categories of sources selected and used at least once

    | Source type | Frequency | Percent | Cumulative Percent |
    |---|---|---|---|
    | Book (single author or anthology) | 128 | 13.76 | 13.76 |
    | Journal * | 219 | 23.55 | 37.31 |
    | Specialized news source or periodical * | 105 | 11.29 | 48.60 |
    | Government document or publication * | 63 | 6.77 | 55.37 |
    | General news source * | 141 | 15.17 | 70.54 |
    | Visual images (still and moving) | 7 | 0.75 | 71.29 |
    | Encyclopedia * | 18 | 1.93 | 73.22 |
    | Dictionary * | 10 | 1.07 | 74.29 |
    | Informal print or oral (email, text messages, etc.) | 4 | 0.43 | 74.72 |
    | Public Internet | 235 | 25.28 | 100.00 |
    | TOTAL | 930 | 100.00 | |

    * accessed electronically or in print

    As Table 6.2 shows, source types that fit the category “appropriate for FYW” as described above dominate the list of sources selected, retrieved, and cited at least once in the 800 pages coded. Of the 930 sources cited in coded extracts written by the 174 student participants, 55% fall into this category. Of those, books make up 14%; articles from scholarly journals, 24%; specialized news sources and periodicals, 11%; and government documents, 7%. If general news sources (15%) and visual images, mostly films (0.75%), are added to the list of generally acceptable source types, as they often are in FYW courses, the percentage of the 930 sources that would be considered appropriate types for FYW rises to 71%. Relatively few students cite encyclopedias and dictionaries (2% and 1%) or informal print or oral sources (0.5%), although this does not mean they do not use them; only cited sources were studied.

    A Closer Look at Sources

    Where sources were available in print or electronically, they were coded by source type (e.g., journal article) rather than method of retrieval, but sources that listed a URL were also coded as “Internet” following previous research. Table 6.3 reveals more about the 235 Internet sources cited at least once within the coded pages. More than half (54%) are informational websites, which is 14% of all of the sources cited at least once. Such informational sources (including the American Cancer Society and the CDC) are generally also acceptable in FYW courses, raising the percentage of the 930 sources that would be considered appropriate types of sources by most writing teachers today to 85%.

    Of the 107 Internet sites not classified as “informational,” Table 6.4 shows that 21 (9% of all 235 Internet sites) fit McClure and Clink’s (2009) classification of “advocacy website,” and 7 (3%) are websites with clearly commercial motivation. While many websites are sponsored or include advertising, this category focuses on websites whose purpose is directly or indirectly commercial (selling or promoting an item, brand, person, location, activity, etc.). These findings are significantly lower than McClure and Clink’s, revealing again the influence of context on single-institution data.

    Table 6.3. Categories of public internet sources selected and used at least once

    | Category | Frequency | Percent of all internet sources (n=235) | Percent of all 930 sources (Table 6.2) | Cumulative percent of all sources (n=930) |
    |---|---|---|---|---|
    | Informational website | 128 | 54.47 | 13.76 | 13.76 |
    | Personal website incl. social media | 5 | 2.13 | 0.54 | 14.30 |
    | Blog (personal or professional) | 14 | 5.96 | 1.50 | 15.80 |
    | Multiple-author (eZine, wiki, etc.) | 32 | 13.62 | 3.45 | 19.25 |
    | Other (not classified above) | 56 | 23.82 | 6.03 | 25.28 |
    | TOTAL | 235 | 100.00 | 25.28 | |

    Table 6.4. Sponsorship categories of public internet sources selected and used at least once

    | Category | Frequency | Percent of all internet sources (n=235) | Percent of all 930 sources (Table 6.2) | Cumulative percent of all sources (n=930) |
    |---|---|---|---|---|
    | Informational (no obvious bias) | 128 | 54.47 | 13.76 | 13.76 |
    | Advocacy | 21 | 8.94 | 2.26 | 16.02 |
    | Personal | 19 | 8.08 | 2.04 | 18.06 |
    | Company or commercial | 7 | 2.98 | 0.75 | 18.81 |
    | Online journal (unsponsored) | 32 | 13.62 | 3.45 | 22.26 |
    | Other (not classified above) | 28 | 11.91 | 3.02 | 25.28 |
    | TOTAL | 235 | 100.00 | 25.28 | |

    Table 6.5. Categories of books selected and used at least once

    | Category | Frequency | Percent of all books (n=128) | Percent of all 930 sources (see Table 6.2) | Cumulative percent of all sources (n=930) |
    |---|---|---|---|---|
    | Fiction | 16 | 12.50 | 1.72 | 1.72 |
    | Drama | 2 | 1.56 | 0.22 | 1.94 |
    | Poetry | 2 | 1.56 | 0.22 | 2.16 |
    | Creative nonfiction | 1 | 0.78 | 0.10 | 2.26 |
    | Literary criticism | 12 | 9.38 | 1.29 | 3.55 |
    | Information (non-academic) | 17 | 13.29 | 1.83 | 5.38 |
    | Information (curated collections) | 15 | 11.71 | 1.61 | 6.99 |
    | Information (scholarly books and edited collections) | 63 | 49.22 | 6.77 | 13.76 |
    | TOTAL | 128 | 100.00 | 13.76 | |

    A Closer Look at Books

    When the category “book” is sub-divided, the question of acceptable source type is further complicated, as Table 6.5 shows. Of the 128 books selected, retrieved, and used at least once in the 800 coded pages, 49% are the traditional scholarly texts probably imagined as the result of the instruction to “include at least one book.” Such texts make up 7% of the total 930 sources. A further 16% of the 128 books cited are works of literature, and 9% are literary criticism focusing on them. Of the remaining books, 13% are non-academic (self-help or popular press books that do not cite sources or include notes regarding sources), and 12% are curated collections, such as the Opposing Viewpoints and At Issue series, and short single-topic textbooks that arrange extracts from longer texts into a “conversation” for the students.

    Perhaps unsurprisingly, while some books are cited frequently in literature-based papers, of the 930 sources only 14% are cited more than three times, and 56% are cited only once (Table 6.6).

    Table 6.6. Frequency of citation for each of the 930 sources

    | Times cited | Frequency | Percent | Cumulative Percent |
    |---|---|---|---|
    | Once | 525 | 56.45 | 56.45 |
    | Twice | 185 | 19.89 | 76.34 |
    | Three times | 89 | 9.57 | 85.91 |
    | Four times | 48 | 5.16 | 91.07 |
    | Five times | 34 | 3.66 | 94.73 |
    | Six times | 19 | 2.04 | 96.77 |
    | Seven times | 10 | 1.08 | 97.85 |
    | Eight times | 9 | 0.97 | 98.82 |
    | Nine times | 4 | 0.43 | 99.25 |
    | Ten or more times | 7 | 0.75 | 100.00 |
    | TOTAL | 930 | 100.00 | |

    Discussion

    Identifying, Locating, and Retrieving Sources

    Bibliographic coding provides information about the kinds and combinations of sources used in each of the 174 papers and also aggregate data about the 930 sources used within the coded pages (sources not used in the coded pages were not retrieved or coded). It also reveals the frequency of use of each source across the 1,911 citations in the sample, allowing comparison between what was selected for the works cited list and what was actually used by the student to build an argument within the paper itself. This allowed researchers to track what percentage of citations drew on scholarly and non-scholarly sources and also to explore the relationships among the sources cited.

    On the face of it, the data in Tables 6.2 to 6.5 indicate that students seem to be able to retrieve types of sources that meet faculty requirements and that would probably be recommended by librarians, thereby demonstrating an ability to identify, locate, and access source types appropriate for their academic projects. In spite of the inclusion of obviously non-academic sources and commercial and advocacy websites, Tables 6.2 to 6.5 reveal that the majority of the sources being cited at least once are of a type that most instructors of FYW courses would consider “acceptable.” Tables 6.2 and 6.3 reveal that 85% of the 930 sources (791) would appear to be of acceptable types (although this does not mean that the individual sources would be acceptable to support or build the argument in question). Even when adjusted to remove non-academic books (see Table 6.5), that number is still 80% (774 sources).

    While sources classified as “magazines” (such as Business Week, The Economist, and National Geographic) may not be appropriate for research papers in courses like those in economics studied by Davis and Cohen (2001) and Davis (2002, 2003), following McClure and Clink’s (2009) lead, the Citation Project classifies “specialized news sources or periodicals” as generally appropriate for FYW. These differences are, of course, institution-specific (and sometimes instructor-specific), and lacking assignments and handouts, researchers cannot speak to what was acceptable in each case. At the same time, although literature-based and discipline-specific courses skew the data a little, the overall pattern described here persists across sites.

    Because they classify sources by type (book, journal) rather than by content, most scholars who study source use in student researched writing classify any source that can be accessed freely from the public Internet as a “website” or “web” source if a URL is listed, with no further subdivision. This means that the category includes everything from self-help blogs to sites that would be considered appropriate for a paper in a FYW course, such as government- or university-sponsored websites. Just as the category “book” tends to be considered appropriate without analysis of content, so “web” tends to be considered inappropriate, and “sources from the Internet” are still forbidden or limited by some faculty, particularly beyond the first year. While Davis (2003) does note that some Internet sources would probably be sufficiently informational to be included in first-year economics papers, his study does not break out URLs in this way, nor do those of Carlson (2006) or Knight-Davis and Sung (2008). McClure and Clink (2009) focus most of their attention on the 48% of sources they coded as “websites,” offering a more granular classification, and when Citation Project sources are coded using those classifications (Table 6.4), it is obvious that not all Internet sources should be considered unacceptable for source-based papers in FYW courses. The decision to exclude sources found “online” from studies, and still from some classes, seems increasingly limited given the ubiquity of the Internet, the quality of sources available through it, and the growing sophistication of the so-called “digital natives.” Where earlier studies found cause for concern in the types of Internet sources selected, the research reported in this chapter echoes Carlson’s (2006) observation that the quality of sources revealed in single-institution studies does not appear to be generalizable to a national level. The national snapshot provided by the Citation Project suggests less cause for concern about the Internet and also records a high percentage of sources classified as informational (websites from national organizations such as the American Cancer Society, or from government-sponsored sites like the Centers for Disease Control and Prevention).

    Perhaps because they were distracted by the Internet, earlier researchers have not focused on the books students select, apparently imagining “books” by definition to be scholarly. In fact, data regarding this category (Table 6.5) suggests the need to revisit the traditional instruction to “include at least one book.” Some of the courses in the sample were literature-focused, producing papers using literary criticism (including SparkNotes and Cliff’s Notes, which are frequently used as criticism) to discuss literary texts. Another subset of courses aids the “research” process by selecting curated collections such as the Opposing Viewpoints and At Issue series or single-topic textbooks. While these collections include scholarly sources, they do most of the IL work for students, arranging extracts from longer texts into a “conversation” rather than asking students to conduct research in order to discover possible conversations themselves. Such collections may help students understand the Framework for IL threshold concept Scholarship as Conversation, and many of the papers also draw on other sources that the students may have selected. The use of curated collections in FYW alongside additional student-selected sources is worth further consideration as part of IL pedagogy, especially if students learn to develop source networks from the works cited lists of texts in those collections (Laskin & Haller, Chapter 11, this collection). Books make up only 14% of the 930 sources, and of those, half are monographs and edited collections that are sufficiently scholarly, although a closer look at the books that would not be acceptable is instructive.

    Overall, the Citation Project research indicates that in the area of traditional sources (books and journals) and in non-traditional sources (websites), first-year students are mostly able to identify, find, access, and cite sources in ways that would satisfy traditional bibliographic instruction. If the purpose of IL instruction is to help students navigate library databases and stacks along with the Internet, select sources of an appropriate type on a specific topic, and access and cite them correctly, then it seems to have succeeded. Studies of student bibliographies produced in intermediate- and upper-level courses report a high percentage of books and appropriate scholarly journals (Hovde, 2000; Jenkins, 2002; Kraus, 2002; Carlson, 2006; Mill, 2008), confirming that this skill appears to transfer beyond the first year. While Davis (2003) found that students were using more Internet sources, he reports that they did not do so at the expense of traditional sources but as part of an increase in the total number of sources cited (p. 47). Of course, the IL Standards go far beyond simple information retrieval, calling for a sophisticated understanding of sources and listing characteristics of an equally sophisticated “information literate student” within the performance indicators and outcomes. It is on this level that the Citation Project data suggests the IL Standards have been less than successful in transforming both practice and habits of mind. This observation reinforces critiques that the IL Standards present a set of “neatly packaged skills which result in a successful product” (Norgaard & Sinkinson, Chapter 1, this collection) rather than encouraging a process-oriented pedagogy. Such a recursive, rhetorically based IL pedagogy is, of course, harder to measure using the kinds of checklists so common in FYW and IL assessment, yet it is more in line with the process-oriented pedagogy called for by Writing and Composition theorists, and perhaps a similar focus on process in the Framework for IL will encourage radical pedagogical revision in both areas.

    A Deeper Look at Source Use

    A closer look at Tables 6.5 and 6.6 suggests that there is still much IL work to be done. The fact that the papers in this study demonstrate an ability to retrieve appropriate types of sources does not mean students can do that work alone. When instructed to include a certain number of books and peer-reviewed articles on their works cited list, most students will comply; however, unless those sources are part of a brief literature review, a single citation of each is unlikely to be what the professor intended. Overall, 56% of the 930 sources and 50% of the 128 books were cited only once, although the 21 works of literature were cited with greater frequency. This finding suggests, again, that students need to be exposed to a more sophisticated model of IL that teaches them to retrieve information as needed from types and numbers of sources appropriate to the topic at hand, not to satisfy a decontextualized checklist.

    Other studies (Sherrard, 1986; Hull & Rose, 1990; Pecorari, 2003; Howard, Serviss, & Rodrigue, 2010) reveal that students working with sources write from sentences within those sources rather than summarizing extended passages, and in this expanded study they were found to do so in 94% of the citations (Jamieson & Howard, 2013; Jamieson, 2013). Together, these data suggest that the focus of concern should not be the sources per se but the ways students engage with them and use them to trace connections and create a conversation among them. As Maid and D’Angelo (Chapter 2, this collection) observe, IL is a “contextualized and situated concept,” and by replacing the “prescriptive and de-contextualized set of skills” with a deeper attention to metaliteracy, the new Framework for IL may begin to address this.

    Information Literacy Dispositions and Habits of Mind

    Because most of the students used the citation guidelines included in the 7th edition of the MLA Handbook for Writers of Research Papers (2009), which require the inclusion of the medium (print, Web, DVD) and the date a source was consulted, it is possible to track the order in which sources were retrieved and the ways papers were constructed. Jenkins (2002), Davis (2003), and Carlson (2006) all caution that there is no “average bibliography,” and that observation holds for the Citation Project papers as well; there are, though, patterns within the kinds of sources selected and the ways those sources are used. A look at papers that use “acceptable source types” reveals an often tortuous research process and paper-writing formula that is far from the “set of integrated abilities encompassing the reflective discovery of information, the understanding of how information is produced and valued, and the use of information in creating new knowledge and participating ethically in communities of learning” identified as a goal of the Framework for IL.

    One such paper, Z24, while not typical, is representative of the many struggles revealed in the coded papers and reflects many of the concerns that indicate the need to move beyond a checklist model of IL. A review of the works cited list (Appendix A) suggests that few of us would find a problem with it unless we looked at the sources themselves. It includes two books, one scholarly journal article, two additional journal/periodical articles, one government website, two informational sources from the Internet, an article from The New York Times, and a website whose title at least would appear appropriate to the paper topic (obesity).

    “Authority Is Constructed and Contextual”

    A closer look reveals the problems that can result from an over-dependence on source type to assess authority. Neither of the two books cited in this paper is scholarly, the first obviously so from the title—Skinny Bitch. The second, Compulsive Overeating by Judith Peacock, is 64 pages long but may have appeared reliable to the student because s/he used LexisNexis Academic to access the (8-page) chapter “Who is at Risk?” cited in the paper. The publisher, Life Matters, explains that “each book defines the problem, describes its effects, discusses dilemmas teens may face, and provides steps teens can take to move ahead,” giving it a Citation Project classification of “book-self help.” It could have been a useful source for a paper analyzing the kinds of advice given to teens suffering from eating disorders; however, that was not the focus of the paper, and it appears that the student did not assess the contextual nature of the data in this source.

    One other source was also retrieved from LexisNexis Academic. Listed as being from Biotech Business Week, the title is “Salad Bars in Every School: United Fresh Applauds New Child Nutrition Bill,” and no author is included by the student, although the author is actually listed as the United Fresh Produce Association. Biotech Business Week notes on its “about” page that it publishes “News and information from pharmaceutical and biotechnology companies, with a focus on business trends and analysis.” The article reports on the United Fresh president’s visit to Capitol Hill to thank legislators for including fresh produce in the New Child Nutrition Bill, crediting Congressman Farr and United Fresh’s lobbying for this legislation. The student introduces unattributed data about the importance of fresh produce from the article with “research shows....” This source should have raised flags about authority when the student found it (Citation Project researchers classified it as “company or commercial”). A student with a deeper understanding of IL might have considered the role of context and purpose and rejected it.

    While Z24 is an outlier in some ways, it reflects the findings of other researchers that students tend to trust the authority of all sources they find “through the library” (Tolar-Burton & Chadwick, 2000). Head and Eisenberg (2009) found that 84% of students reported that scholarly research databases were the library resource they used most frequently (p. 22) and 78% reported that such databases “contain more credible content than the internet” (p. 27). Four of the sources in Z24 were retrieved via Academic Search Complete, which notes that it “provides complete coverage of multidisciplinary academic journals … [and] supports high-level research in the key areas of academic study by providing peer-reviewed journals, full-text periodicals, reports, books, and more” (ebscohost). Two others came from LexisNexis Academic, whose “about” page notes that it allows researchers to “quickly and easily search full-text documents from over 15,000 credible sources of information and pinpoint relevant information for a wide range of academic research projects … [from] comprehensive, authoritative news content, including current coverage and deep archives” (LexisNexis). A final source was retrieved from a third “database,” about.com (“the largest source for Expert content on the Internet that helps users answer questions, solve problems, learn something new or find inspiration,” according to its “about” page). In fact, about.com led the student to a government document published by the Center for Nutrition Policy and Promotion, which is part of the USDA, but the student does not even mention that this is government research. Do students know the difference between “peer-reviewed,” “credible,” and “expert”? This student appears not to have, nor to have been able to make any judgment call about these different databases when looking at the sources themselves, or to have seen the difference between a government document and a company-sponsored promotion.

    The student does introduce Skinny Bitch as a “New York Times Best Selling book,” but aside from that does not appear to consider the authority of the sources selected, and only cites one 23-page chapter entitled “Have no Faith: Government Agencies Don’t Give a Shit About Your Health.” (While this might have been used in dialogue with the government data, it was not.) Source assessment has always been part of IL instruction and is central to the IL Standards, but as Head and Eisenberg’s (2010) data show, if it is not accompanied by a larger metacognitive discussion of the role of research and what we hope students will learn through the process of researching a topic, they are unlikely to incorporate such assessment into their normal research habits or the way they think about information in general.

    Information Creation as a Process and Information Has Value

    Each of the sources in Z24 is used to introduce a reason why people might be obese (eating disorders, high cost of healthy food, low cost of burgers, lack of government concern), but the very different kinds of sources and the lack of acknowledgement of their context or purpose undermine the argument. While some of the sources selected are appropriate in terms of content and type, the remaining sources are isolated voices on the general topic of the student’s paper, and the information in them is created and presented for a very different audience and purpose. One such source is a self-help blog, “Eat without Guilt,” which promises to help readers “make peace with food, your body, and your weight.” A second comes from The New York Times Well blog and is entitled “A High Price for Healthy Food,” and a third is the book chapter “Compulsive Overeating,” which is cited twice in the paper, with both quotations taken from the same page. Blogs and self-help sources would be appropriate for some kinds of papers (an analysis of the rhetoric of “help,” or the kinds of resources available for people with eating disorders), but to mix self-help with academic research in a paper whose stated aim is to explore the causes of obesity is a questionable decision, and to do so without acknowledgment of the different contexts or recognition of the different scholarly weight assigned to each reveals a student who still needs to develop IL skills. If students are able to “assess the fit between an information product’s creation process and a particular information need,” they are also on the way to developing “an understanding that their choices impact the purposes for which the information product will be used and the message it conveys” (Framework for IL). Such an awareness may have guided this student to different source choices.

    Searching as Strategic Exploration and Scholarship as Conversation

    Perhaps more interesting than the student’s inability to evaluate the sources s/he cites is the story that the works cited list suggests about the process of constructing the paper. The student accessed four sources on April 5, 2010, all four of them via Academic Search Complete. Two come from academic journals (Pediatrics and New Scientist), and the other two are from the Proceedings of the National Academy of Sciences of the United States of America and the Tufts University Health & Nutrition Letter. The New Scientist article, introduced by the student as “a recent study by New Scientist,” is a book review of Supersize Me. If one looks at these four sources, a story appears to be emerging, moving from comments in Supersize Me about corn to “Corn Content of French Fry Oil from National Chain vs. Small Business Restaurants” (Proceedings), and from there to “Who’s Losing the Burger Battle?” (Tufts) and “Nutrition Labeling May Lead to Lower-Calorie Restaurant Meal Choices for Children” (Pediatrics). All of these sources were published in 2010, making it impossible for them to cite or reference each other; however, all seem appropriate to begin an exploration of the topic of obesity, and all are in some ways part of a larger dialogue on the given topic. It would appear that on April 5 the student spent some quality time with a reliable database, beginning the process of strategically exploring an evolving topic and entering an ongoing conversation on that topic. Perhaps the class visited the library to start their research or a librarian came to the classroom; certainly someone introduced the student to this database and perhaps helped focus the research question. So far, so good.

    But the next reported access of source material is not for another three weeks. On April 25 the Biotech article on salad bars was added to the list, appearing to continue the conversation begun on April 5; however, by April 27, when the four remaining sources were accessed, the paper seems to be on its way to its final disconnected list of possible causes of obesity, one per paragraph, each supported by a different source. Two of the sources accessed on April 5 (and still listed as “works cited”) do not appear in the final paper (the Proceedings article and the Pediatrics article). Instead, on April 27 the student lists accessing the blog “Eat without Guilt,” The New York Times Well blog, the book chapter “Compulsive Overeating,” and the report on fruit and vegetables from the USDA. The book Skinny Bitch is listed as having been read in print, so it has no access date. It is cited four times in the paper in two different paragraphs, drawing from four pages in one 23-page chapter. None of the authors of these sources is directly cited by the others, none of the student’s sources is drawn from the works cited lists of other sources, and no citations from one source appear in any of the other sources.

    Because the sources selected in late April do not explicitly respond to other sources about obesity or healthy living, it is easier for them to be used to provide “evidence” for one item on the list of causes of obesity that ultimately organize the paper. Perhaps the need for a list-like outline came externally, but had the student been reading the sources as part of a conversation on the topic of healthy eating and explicitly creating source networks (Laskin & Haller, Chapter 11, this collection), paper Z24 might have evolved very differently. The student concludes by stating agreement with the author of Skinny Bitch that ultimately Americans are responsible for the “obesity epidemic” by remaining naive and ill-informed, but this claim contradicts many of the “causes” that organize the paper. The student neither counters that lack of information by adding to the conversation about obesity, nor makes the argument expressed in the conclusion.

    It is this concept of research, and of academic sources, as part of an ongoing conversation that seems most missing from the papers in the Citation Project study. The organization of Z24 with one source per paragraph (and one paragraph per source) makes it typical. When each source is a discrete item, and even a “type” to be checked off, the possibility of those sources entering into conversation is slim. If assignments do not specify why we do research or how research questions and conversations evolve, as Head (2010) found to be the case, and if IL instruction emphasizes identifying, finding, and assessing discrete sources rather than developing metacritical frameworks for thinking about research (Kleinfeld, 2011), it should be no surprise that the resulting papers tend to be “information dumps” rather than forays into academic conversation and the intellectual work the research paper is imagined to be (Robinson & Schlegl, 2004). Diametrically opposed to the image of scholarship as a conversation is the model of research as formulaic, demanding particular types of sources and “killer quotes,” which can mostly be extracted from the first page of the source. This latter version of “The Research Paper” characterizes the papers in the CPSW corpus.

    Conclusion

    The data generated by the Citation Project’s multi-institution research suggests that students who received IL instruction in the era of the IL Standards have adopted a limited checklist mode of research rather than the nuanced appreciation of sources and source selection included in that document. This suggests that the IL Standards themselves did not totally displace the kind of bibliographic instruction that preceded them in 2000. The papers in the CPSW corpus demonstrate that students have the ability to select, retrieve, and cite the right kinds of sources, many of them appropriately academic for first-year papers; however, the papers evidence little or no relationship among those sources. Analysis of the incorporation of information from the sources into the papers reveals students working at the sentence level and segregating each source into one paragraph, mostly by including quotation or paraphrase rather than by summarizing larger ideas in a text or comparing arguments with those of other sources. Furthermore, most of the references to sources are to the first one or two pages, and most of the sources are cited only once or twice, suggesting little engagement with them as part of a broader scholarly conversation.

    It may be, as I have argued elsewhere (Howard & Jamieson, 2014), that the first-year research paper itself bears more responsibility for this state of affairs than the IL instruction that tries to support it. It is certainly the case that shorter source-based papers could be used to introduce students to IL practices and habits of mind; however, it is also the case that the introduction of threshold concepts in writing pedagogy (Adler-Kassner, Majewski, & Koshnick, 2012) and in the Framework for IL increases the likelihood that students may gain a “vision of information literacy as an overarching set of abilities in which students are consumers and creators of information who can participate successfully in collaborative spaces” (“Introduction”). Had the students writing the papers studied by the Citation Project evidenced the kinds of critical self-reflection described in the Framework for IL and embedded in the description of what constitutes IL, it is difficult to believe they could have produced the papers in the sample. We might hope that with the revised instruction associated with threshold concepts, future papers will show evidence of the “dynamism, flexibility, individual growth, and community learning” described in the Framework for IL document (“Introduction”).

    This optimism is unlikely to bear fruit unless we ask some difficult questions of IL instruction and the FYW courses where it so frequently becomes ghettoized. What is the role of real IL (not bibliographic instruction) in FYW? How can we ensure that the skills and habits of mind are transferable to other courses (and to work, life, etc.)? When included amongst many other elements of a writing course, how can threshold concepts be introduced without overwhelming the students—or the course? And if institutions recognize that IL cannot be “delivered” in one library visit, assignment, or even semester, how can it be advanced programmatically or throughout a student’s education (and beyond, to lifelong learning)? Finally, how can the Framework for IL be introduced to all faculty (including library faculty), administrators, and students in a way that will help us all to recognize our shared responsibility for IL and our shared stake in successful IL pedagogy? Unless these questions are addressed, I fear that the 2019 FYW research papers will not look significantly different from those produced in 1999 or 2010.

    References

    About.com. http://www.about.com/

    Academic Search Complete. https://www.ebscohost.com/academic/a...earch-complete

    Adler-Kassner, L., Majewski, J., & Koshnick, D. (2012). The value of troublesome knowledge: Transfer and threshold concepts in writing and history. Composition Forum, 26.

    Alvarez, B., & Dimmock, N. (2007). Faculty expectations of student research. In N. Foster & S. Gibbons (Eds.), Studying students: The undergraduate research project at the University of Rochester (pp. 1-7). Chicago: Association of College and Research Libraries.

    Association of College and Research Libraries. (2000). Information Literacy Competency Standards for Higher Education. Retrieved from http://www.ala.org/acrl/standards/in...racycompetency

    Association of College and Research Libraries. (2015). Framework for Information Literacy for Higher Education. Retrieved from http://www.ala.org/acrl/standards/ilframework

    Blackwell-Starnes, K. (2016). Preliminary paths to information literacy: Introducing research in core courses. In B. J. D’Angelo, S. Jamieson, B. Maid, & J. R. Walker (Eds.), Information literacy: Research and collaboration across disciplines. Fort Collins, CO: WAC Clearinghouse and University Press of Colorado.

    Carlson, J. (2006). An examination of undergraduate student citation behavior. Journal of Academic Librarianship, 32(1), 14-22.

    Davis, P. (2002). The effect of the web on undergraduate citation behavior: A 2000 update. College & Research Libraries, 63(1), 53–60.

    Davis, P. (2003). Effects of the web on undergraduate citation behavior: Guiding student scholarship in a networked age. portal: Libraries and the Academy, 3(1), 41-51.

    Davis, P., & Cohen, S. (2001). The effect of the web on undergraduate citation behavior 1996–1999. Journal of the American Society for Information Science and Technology, 52(4), 309–14.

    Deitering, A., & Jameson, S. (2008). Step-by-step through the scholarly conversation: A collaborative library/writing faculty project to embed information literacy and promote critical thinking in first-year composition at Oregon State University. College and Undergraduate Libraries, 15(1-2), 47-59.

    Gavin, C. (1995). Guiding students along the information highway: Librarians collaborating with composition instructors. Journal of Teaching Writing, 13(1/2), 225–236.

    Grimes, D., & Boening, C. (2001). Worries with the web: A look at student use of web resources. College & Research Libraries, 62(1), 11-23.

    Head, A. (2013). Learning the ropes: How freshmen conduct course research once they enter college. Project Information Literacy Research Report. p. 1-48.

    Head, A., & Eisenberg, M. (2009). Lessons learned: How college students seek information in the digital age. Project Information Literacy Progress Report: University of Washington Information School. p. 1-42.

    Head, A., & Eisenberg, M. (2010). Assigning inquiry: How handouts for research assignments guide today’s college students. Project Information Literacy Progress Report: University of Washington Information School. p. 1-41.

    Holliday, W., & Fagerheim, B. (2006). Integrating information literacy with a sequenced English composition curriculum. portal: Libraries and the Academy, 6(2), 169-184.

    Hood, C. (2010). Ways of research: The status of the traditional research paper assignment in first-year writing/composition courses. Composition Forum, 22.

    Hovde, K. (2000). Check the citation: Library instruction and student paper bibliographies. Research Strategies, 17(1), 3-9. DOI: 10.1016/S0734-3310(00)00019-7

    Howard, R., & Jamieson, S. (2014). Research writing. In G. Tate, A. Rupiper-Taggart, B. Hessler, & K. Schick (Eds.), A guide to composition pedagogies (2nd ed.) (p. 231-247). New York: Oxford University Press.

    Howard, R., Serviss, T., & Rodrigue, T. (2010). Writing from sources, writing from sentences. Writing & Pedagogy, 2(2), 177-192.

    Hull, G., & Rose, M. (1990). “This wooden shack place”: The logic of unconventional reading. College Composition and Communication, 41(3), 286-298.

    Internet Archive. http://archive.org/

    Jacobs, H., & Jacobs, D. (2009). Transforming the library one-shot into pedagogical collaboration: Information literacy and the English composition class. Reference & User Services Quarterly, 49(1), 72–82.

    Jacobson, T., & Mackey, T. (Eds.). (2007). Information literacy collaborations that work. New York: Neal-Schuman.

    Jamieson, S. (2013). Reading and engaging sources: What students’ use of sources reveals about advanced reading skills. Across the Disciplines, 10. Retrieved from http://wac.colostate.edu/atd/reading/jamieson.cfm

    Jamieson, S., & Howard, R. (2013). Sentence-mining: Uncovering the amount of reading and reading comprehension in college writers’ researched writing. In R. McClure & J. Purdy (Eds.), The new digital scholar: Exploring and enriching the research and writing practices of nextgen students (p. 111-133). Medford, NJ: American Society for Information Science and Technology.

    Jenkins, P. (2002). They’re not just using web sites: A citation study of 116 student papers. College and Research Libraries News, 63(3), 164.

    Kesselman, M., & Watstein, S. (2009). Creating opportunities: Embedded librarians. Journal of Library Administration, 49(4), 383-400.

    Kleinfeld, E. (2011). Writing Centers, ethics, and excessive research. Computers and Composition Online. Retrieved from www.bgsu.edu/departments/english/cconline/ethics_special_issue/Kleinfeld/

    Knight-Davis, S., & Sung, J. (2008). Analysis of citations in undergraduate papers. College & Research Libraries, 69(5), 447-458. DOI: 10.5860/crl.69.5.447

    Kraus, J. (2002). Citation patterns of advanced undergraduate students in biology, 2000–2002. Science & Technology Libraries, 22(3-4), 161-179. DOI: 10.1300/J122v22n03_13

    Laskin, M., & Haller, C. (2016). Up the mountain without a trail: Helping students use source networks to find their way. In B. J. D’Angelo, S. Jamieson, B. Maid, & J. R. Walker (Eds.), Information literacy: Research and collaboration across disciplines. Fort Collins, CO: WAC Clearinghouse and University Press of Colorado.

    Lepore, J. (2015, January 15). Can the Internet be archived? The New Yorker.

    LexisNexis. Retrieved from www.lexisnexis.com/en-us/abou.../about-us.page

    Mackey, T., & Jacobson, T. (2011). Reframing information literacy as a metaliteracy. College and Research Libraries, 72(1), 62-78.

    Mackey, T., & Jacobson, T. (2014). Metaliteracy: Reinventing information literacy to empower learners. Chicago: Neal-Schuman.

    Maid, B., & D’Angelo, B. (2016). Threshold concepts: Integrating and applying information literacy and writing instruction. In B. J. D’Angelo, S. Jamieson, B. Maid, & J. R. Walker (Eds.), Information literacy: Research and collaboration across disciplines. Fort Collins, CO: WAC Clearinghouse and University Press of Colorado.

    McClure, R., & Clink, K. (2009). How do you know that? An investigation of student research practices in the digital age. portal: Libraries and the Academy, 9(1), 115-132.

    Mill, D. (2008). Undergraduate information resource choices. College & Research Libraries, 69(4), 342-355. DOI: 10.5860/crl.69.4.342.

    Modern Language Association. (2009). MLA handbook for writers of research papers (7th ed.). New York: MLA.

    Norgaard, R., & Sinkinson, C. (2016). Writing information literacy: A retrospective and a look ahead. In B. J. D’Angelo, S. Jamieson, B. Maid, & J. R. Walker (Eds.), Information literacy: Research and collaboration across disciplines. Fort Collins, CO: WAC Clearinghouse and University Press of Colorado.

    Pecorari, D. (2003). Good and original: Plagiarism and patchwriting in academic second language writing. Journal of Second Language Writing, 12, 317-345.

    Robinson, A., & Schlegl, K. (2004). Student bibliographies improve when professors provide enforceable guidelines for citations. portal: Libraries and the Academy, 4(2), 275–290.

    Sherrard, C. (1986). Summary writing: A topographical study. Written Communication, 3, 324-343.

    Tolar-Burton, V., & Chadwick, S. (2000). Investigating the practices of student researchers: Patterns of use and criteria for use of Internet and library sources. Computers and Composition, 17(3), 309–28. DOI: 10.1016/S8755-4615(00)00037-2

    Zittrain, J., Albert, K., & Lessig, L. (2014). Perma: Scoping and addressing the problem of link and reference rot in legal citations. Legal Information Management, 14(2), 88-99. DOI: 10.1017/S1472669614000255

    Appendix: Works Cited List for Paper Z24 (as submitted)

    Citation Project

    Z24

    Works Cited

    Biotech Business Week. “Salad Bars in Every School: United Fresh Applauds New Child Nutrition Bill.” News Rx 4 Jan. 2010. LexisNexis Academic. Web. 25 Apr. 2010

    Center for Nutrition Policy and Promotion. “Eating Fruits and Vegetables.” About.com Pediatrics. Medical Review Board. 26 Jan. 2008. Web. 27 Apr. 2010

    Dineen. Eat Without Guilt. N.p. 29 Mar. 2010. Web. 27 Apr. 2010

    Freedman, Rory, and Kim Barnouin. Skinny Bitch. Pennsylvania: Running Press Book Publishers, 2005. Print.

    Jahren, A. Hope, & Brian A. Schubert. “Corn Content of French Fry Oil from National Chain vs. Small Business Restaurants.” Proceedings of the National Academy of Sciences of he United States of America 107.5 (2010): 2099-210. Academic Search Complete. Web. 5 Apr. 2010.

    Motluk, Allison. “Supersize Me.” New Scientist 184.2471 (2004): 46. Academic Search Complete. Web. 5 Apr. 2010.

    Parker-Pope, Tara. “A High Price for Healthy Food.” Well Blog. The New York Times. 5 Dec. 2010. Web. 27 Apr. 2007.

    Peacock, Judith. “Chapter #2: Who is at Risk?” Compulsive Overeating 2000: 12+. LexisNexis Academic. Web. 27 Apr. 2007.

    Tandon, Pooja, et al. “Nutrition Labeling May Lead to Lower-Calorie Restaurant Meal Choices for Children.” Pediatrics 125.2 (2010): 244-248. Academic Search Complete. Web. 5 Apr. 2010.

    “Who’s Losing the Burger Battle?” Tufts University & Nutrition Letter 27.12 (2010): 6. Academic Search Complete. Web. 5 Apr. 2010.


