6.7: The Evaluation of Conference Paper Proposals in Linguistics



    Françoise Boch, Fanny Rinck, and Aurélie Nardy

    Université Grenoble, CNRS/Université Paris Ouest Nanterre La Défense, and Université Grenoble

    This chapter falls within the scope of work on scientific writing.1 From the 1980s onwards, many studies have focused upon describing the characteristics of scientific discourse according to genre, discipline or language.2 Genres studied include articles and PhD dissertations, but also, to a lesser degree, proposals for conference papers. In this chapter, we focus indirectly on the latter, analyzing their evaluation by conference peer review panels. The genre of the proposal evaluation—insofar as it can be labelled a genre—has in fact been the subject of very little study. However, in our view, it presents features that make it a particularly rich type of writing. Indeed, analysing this genre can provide valuable information regarding both the linguistic practices of researchers under evaluation and the criteria retained by those conducting this evaluation.3

    First, it is interesting to examine the practices of the researchers constituting the community of experts from a linguistic point of view. We are referring here to a strand in discourse analysis that focuses upon the linguistic functions of scientific writing so as to highlight the specificities of the scientific community,4 or, in other words, its manières de faire (“ways of doing” things) (Maingueneau, 1992). We will examine the rhetoric of evaluation in a corpus of evaluations. Strictly speaking, the latter are not scientific writing but they nonetheless reflect researchers’ ways of doing things. We will look in particular at how the reviewer addresses the author and whether these forms of address vary according to the verdict pronounced on the proposal.

    Second, the study of such a genre can provide information regarding the norms in place within a given discipline. What criteria are retained today in order to deem a proposal acceptable or not? Is there a consensus regarding these criteria within a group of experts or are these criteria heterogeneous and linked to subjective perceptions of what constitutes a “good” or “bad” proposal? We will consider the extent to which the evaluative discourse of evaluations enables us to grasp the institutional requirements and expectations for proposals. Recent studies (cf. in particular Fløttum, 2007) have highlighted substantial differences within the field of the humanities. We shall therefore focus specifically on one discipline—linguistics—whilst also remaining aware of the variations that exist within the different domains of this field (such as psycholinguistics, sociolinguistics or the didactics of language).

    Methodological Aspects

    The corpus studied is composed of 284 evaluations by reviewers in linguistics examining proposals submitted to a conference for “young researchers” (i.e., doctoral students or recent doctors) in language sciences. This conference took place in France in July 2006.

    Each proposal (142 in total) was evaluated by two anonymous reviewers who provided both a commentary evaluating the proposal and a verdict: accepted, refused or to be revised.

    The breakdown of verdicts was as follows: 60% accepted, 30% to be revised and 10% rejected.5

    After obtaining the consent of each of the reviewers, the entire corpus was processed and placed on a publicly available online platform (scientext.msh-alpes.fr) created for this purpose. The platform included linguistic search functions6 and these tools allowed us to examine the corpus using, in part, automatic searches (see detail below). The results were then checked manually with a view to disambiguation. Finally, qualitative analysis was carried out based on the observation of phenomena highlighted by the raw data.
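
    To give a concrete idea of what such an automatic search can look like, the short sketch below (in Python) pulls keyword-in-context lines for a query so that each hit can then be checked and disambiguated by hand. It is purely illustrative: the corpus layout (one plain-text file per evaluation in a local directory) and the query are assumptions, not a description of the Scientext platform's actual interface.

        import re
        from pathlib import Path

        def kwic(corpus_dir, pattern, window=40):
            """Yield (file name, left context, match, right context) for manual review."""
            regex = re.compile(pattern, re.IGNORECASE)
            for path in sorted(Path(corpus_dir).glob("*.txt")):
                text = path.read_text(encoding="utf-8")
                for m in regex.finditer(text):
                    left = text[max(0, m.start() - window):m.start()]
                    right = text[m.end():m.end() + window]
                    yield path.name, left, m.group(0), right

        if __name__ == "__main__":
            # hypothetical directory of evaluation texts; hits are then sorted by hand
            for hit in kwic("evaluations/", r"\bvous\b"):
                print(" | ".join(hit))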

    Results

    Rhetoric of Evaluative Discourse in Proposal Evaluations

    Markers of the Reviewer and Addresses to the Author

    The aim of this initial analysis was to identify the markers indicating both the reviewer’s presence and their addresses to the author. Fløttum et al. (2006) developed a typology of the roles of the author in scientific writing (more specifically, in the research article). This typology was based on a large study of all pronominal markers of the author, and the associated verbs and other lexical items. This Norwegian team thus identified three roles taken on by authors: the writer (in this section, I shall present … ), the researcher (the study we conducted) and the arguer (I would defend the idea that). Here we will only focus upon the “I” of the arguer because it refers explicitly to the reviewer and the way in which he positions himself personally in his evaluation of the proposal, thus raising a number of interesting questions. Does the presence of this “I” vary according to the verdict given on the proposal? Does the reviewer assert himself more or, on the contrary, adopt a self-effacing position when giving a negative evaluation? Similarly, how does he address the author of the proposal? Does the reviewer use “you” (the most direct form of address possible) in the same way depending on whether he accepts or rejects the proposal?

    We took the number of texts7 including reviewer-arguer “I”s and “you”s referring to the author of the proposal and cross-tabulated them with the verdict given on the proposal, distinguishing three possibilities: proposal accepted, to be revised or rejected. Figure 1 provides a synthesis of the results obtained.

    As Figure 1 illustrates, the use of personal pronouns remains relatively low in this type of evaluation corpus: “I” appears on average in 8.3% of texts and “you” in 25.4%, across all sub-corpora combined. In general, the reviewer tends not to draw attention to himself as someone putting forward an argument, and, to a lesser degree, tends to give preference to depersonalized utterances when addressing the author (we find more utterances such as “the methodological aspects should be looked at in more depth” as opposed to “you should develop the methodological aspects further”). In keeping with the canons of usage in place in the scientific community, there is an effort to keep the debate centered upon the object in question—the proposal—rather than upon the people in question.
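
    As a rough indication of how figures of this kind can be produced, the sketch below computes, for each verdict category, the share of evaluation texts containing at least one reviewer “I” (je / j’) or one “you” (vous). The file layout and column names are hypothetical, the marker detection is deliberately crude, and, as in the study, counting is done per text rather than per occurrence (cf. note 7); in practice every hit still needs to be disambiguated manually.

        import csv
        import re
        from collections import defaultdict

        I_MARKER = re.compile(r"\b(je\b|j’|j')", re.IGNORECASE)
        YOU_MARKER = re.compile(r"\bvous\b", re.IGNORECASE)

        def presence_by_verdict(rows):
            """rows: dicts with 'verdict' and 'text' keys (hypothetical layout)."""
            totals = defaultdict(int)
            with_i = defaultdict(int)
            with_you = defaultdict(int)
            for row in rows:
                verdict = row["verdict"]  # e.g. 'accepted', 'to_revise', 'rejected'
                totals[verdict] += 1
                # count texts containing the marker, not occurrences, to neutralize
                # the personal style of individual reviewers (see note 7)
                if I_MARKER.search(row["text"]):
                    with_i[verdict] += 1
                if YOU_MARKER.search(row["text"]):
                    with_you[verdict] += 1
            return {v: (100 * with_i[v] / totals[v], 100 * with_you[v] / totals[v])
                    for v in totals}

        if __name__ == "__main__":
            with open("evaluations.csv", newline="", encoding="utf-8") as f:  # hypothetical file
                print(presence_by_verdict(csv.DictReader(f)))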

    However, in the evaluations where these pronouns do appear, their use differs depending on the verdict. While the “you”s and the “I”s appear in a balanced fashion throughout the “proposal accepted” corpus, the gap is far greater in the “to be revised” corpus, and becomes quite substantial in the “proposal rejected” corpus, where the “I” tends to disappear (it is present in only 3% of texts). In order to be understood by the author, it would seem that the quite powerful act of refusing a proposal must go hand-in-hand with an objective argument, grounded in facts. And such an argument, it seems, does not allow for the marked linguistic presence of the reviewer, which would give a subjective tone to the evaluation.

    However, this progressive disappearance of the “I” in negative decisions gives way to the increasingly marked presence of the “you.” This raises the following question: if we accept the hypothesis that researchers wish to make their evaluations objective, should we not expect negative evaluations to be characterized by impersonal utterances regarding both the reviewer and the author? The observation of the contexts in which these pronouns appear will help us refine this hypothesis.

    We can indeed note that in the proposals accepted (where the “I” is most present and the “you” least present), the reviewer’s “I” is used recurrently in speech acts of the type “masked advice” (example 1) or indirect criticism (example 2).

    Example 1: En biblio j’aurais rajouté P. Charaudeau, Grammaire du sens et de l’expression et Riegel, Pellat et Rioul, Grammaire méthodique du français (A169)

    [In the bibliography, I would have added P. Charaudeau, Grammaire du sens et de l’expression and Riegel, Pellat & Rioul, Grammaire méthodique du français]

    Example 2: Je me demande si le concept de Vion d’histoire interactionnelle est vraiment intéressant dans ce cadre (A155)

    [I wonder whether Vion’s concept of interactional history is really interesting in this context]

    In these cases, the exchanges do not seem particularly hierarchical. The reviewer is addressing a peer and indicating possible improvements to the proposal in the form of personal suggestions (of the type “this is what I would do in your place, fellow researcher”). By contrast, in the corpus of rejected proposals (where the “you” is dominant and the “I” disappears) the tone is no longer that of an exchange between peers: the presence of “you” is most often found in correlation with an explicit and barely modalized criticism (example 3) or with the highlighting of shortcomings (example 4).

    Figure 1: Markers of address

    Example 3: le concept de “ faute” que vous semblez utiliser sans distance mériterait d’être précisé (Rj173)

    [the concept of “mistake” that you seem to be using without any distance would be worth clarifying]

    Example 4: vous ne dites pas clairement quelles sont vos données, comment vous les avez recueillies et traitées (Rj123)

    [you don’t state clearly what your data are nor how you collected and processed them.]

    Thus, while most reviewers use few personal forms, when the “you” does appear, it is mainly in the context of negative evaluations. In these cases, it most often serves the purpose of putting the author in the hot seat by highlighting his errors or weaknesses, thus allowing the reviewer—taking on a superior, dominant position—to justify his own verdict.

    Yet in most cases, according to our observations, the reviewers’ qualitative evaluation is linguistically modalized, showing a common desire to allow the author to save face.

    Allowing the Author to Save Face

    The tendency of reviewers to try to be tactful and not offend the authors is evident in their use of negation and evaluative adjectives.

    Use of Negation. Figure 2 shows that the markers expressing negation (ne … pas in French) are gradually more present when the evaluation is mixed (proposals to be revised) or definitive (proposals rejected). In the latter case, negation markers are present in almost three out of four evaluations of the corpus, and yet barely exceed one-third in the positive evaluations (proposals accepted).

    In many cases, the negation concerns adjectives, adverbs or positive verbs of evaluation. Examples 5 (rejected proposal) and 6 (proposal to be revised) are representative of this tendency.

    Example 5: Il n’est pas vrai (cf. votre point 3) que l’on ne peut pas distinguer deux verbes … (Rj63)

    [It is not true (cf. your point 3) that one cannot distinguish between two verbs …]

    Example 6: Votre proposition mériterait cependant d’être remaniée car en l’état la problématique n’est pas suffisamment claire. (Rs193)

    [However, your proposal would benefit from being reworked as in its current state, the key research question is not sufficiently clear.]

    Qualitative observation of the utterances in which negation frames an adjective shows that these are characteristic patterns, indicating a ritualized rhetoric: as with examples 5 & 6, criticism is expressed through the negation of a positive term rather than the foregrounding of a weakness. Reviewers prefer to qualify an aspect by saying it “n’est pas développé” [is not developed] or it is “pas clair” [not clear] rather than stating it is “flou” [vague] or “lacunaire” [lacking].

    In other words, rather than expressing how the proposal is “bad,” the reviewers indicate how it is “not good.” We could, therefore, hypothesize that this is a way of softening the criticism directed at the author, while still providing ways in which to improve the proposal even in cases where it has been rejected.

    Evaluative Adjectives. This hypothesis is strengthened by another result, which could seem somewhat surprising if the presence of negation markers were not taken into account: the analysis of evaluative adjectives in the corpus shows that the five most cited adjectives in all three sub-corpora are positive adjectives—interesting, clear, original, good, relevant.

    As we can see, the number of these positive adjectives is subject to relatively little variation depending on the sub-corpus. We could have expected a different distribution, with greater use of positive adjectives in the proposals accepted than in those refused. The fact that these adjectives are sometimes framed by negation8 can explain this tendency in part. Only in part, however, for this tendency towards modalization, and more generally towards softening criticism with a view to allowing the author to save face, can also be observed throughout the corpus through the use of another linguistic process characteristic of argumentative writing: the dynamic of concession/refutation. This consists in granting value to some element of the point of view defended by the author (approval) and then highlighting a weakness (disapproval) with a counter-argument.
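
    Under the same (assumed) data layout as in the previous sketch, the two counts at stake here can be approximated as follows: for each sub-corpus, the share of texts containing each of the five adjectives, and the share in which the adjective falls within the scope of a negation (of the type “pas clair,” “n’est pas suffisamment claire”). The morphological patterns are simplified and the negation test is only a crude stand-in for the manual coding carried out in the study.

        import re
        from collections import defaultdict

        ADJECTIVES = {
            "intéressant": r"intéressant(e)?s?",
            "clair": r"clair(e)?s?",
            "original": r"origina(l|le|les|ux)",
            "bon": r"bon(ne)?s?",
            "pertinent": r"pertinent(e)?s?",
        }

        def adjective_profile(rows):
            """rows: dicts with 'verdict' and 'text' keys (hypothetical layout)."""
            totals = defaultdict(int)
            present = defaultdict(int)  # (verdict, adjective) -> texts containing it
            negated = defaultdict(int)  # (verdict, adjective) -> texts with a negated use
            for row in rows:
                verdict, text = row["verdict"], row["text"].lower()
                totals[verdict] += 1
                for adj, form in ADJECTIVES.items():
                    if re.search(rf"\b{form}\b", text):
                        present[(verdict, adj)] += 1
                    # crude test: "pas" at most three words before the adjective
                    if re.search(rf"\bpas\b\W+(\w+\W+){{0,3}}{form}\b", text):
                        negated[(verdict, adj)] += 1
            return totals, present, negated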

    Figure 2: Negation as a tool in allowing the author to save face

    Process of Concession/Refutation

    This process can be examined through the example of the most used adjective in the corpus, the adjective “interesting.” Figure 3 shows that this adjective is used in relatively similar proportions in all three sub-corpora: approximately 38% for the “accepted proposals” corpus and 28% if we combine the “to be revised” and “proposal rejected” corpora.

    The interest of a study is therefore the quality most subject to evaluation in our corpus. Furthermore, a proposal being “interesting” is not incompatible with its rejection. Qualitative analysis of the context in which “interesting” appears shows that it is always used positively: in contrast with all the other evaluative adjectives, there are no occurrences of “not interesting,” or “not very interesting,” even in the proposals rejected or those subject to a somewhat reticent judgment. It is as if it were impossible to officially indicate to a researcher that his study lacks interest, no doubt because this type of judgment—that is highly subjective—does not fall within the remit of academic judgment. We can suppose, on the contrary, that in the sphere of research, any subject can be considered of potential interest, as Bourdieu (1993, p. 911) states, “Everything is interesting, provided you look at it long enough.” So, while it is clear that the absence of interest of a study cannot be highlighted, it is, on the contrary, very common to underscore its interest. In our corpus of proposals rejected or to be revised, while this quality in itself seems to be off limits for criticism, it can nonetheless offer a springboard for the latter. Often preceded by “Certes” —an archetypal concession marker in French (which could best be translated as “admittedly” or “no doubt” depending on the context) and often followed by “but” (example 7)—the interest of the research is sometimes reduced to certain aspects (example 8) or diminished by specifications such as “de prime abord” [at first glance] (example 9).

    Figure 3: Distribution of most cited adjectives in the three sub-corpora

    Example 7: Ce texte présente un certain nombre de principes pédagogiques, certes intéressants mais qui ne sont ni questionnés ni inscrits dans une recherche de terrain. [Rj-191]

    [This text presents a certain number of pedagogical principles that are no doubt interesting (certes intéressants) but are not called into question nor situated within any field work.]

    Example 8: Certains concepts présentés par l’auteur sont intéressants et aptes à apporter une contribution efficace à la recherche en didactique des langues (Rs182)

    [Certain concepts presented by the author are interesting and could make an efficient contribution to research in the didactics of language]

    Example 9: Le thème de cette proposition de communication est de prime abord intéressant: en effet, on dispose de peu d’informations sur l’intégration du parler de jeunes urbains (Rs126)

    [The topic of this proposal seems at first glance to be interesting: indeed, relatively little information is available regarding the integration of the speech of urban youths.]

    This dynamic of concession and then refutation, which is very visible through the recurrent use of the “interesting”-“but” 9 pair, is therefore one of the subtle processes put into play in our corpus by the reviewers to express their criticism tactfully to the authors and lead to a negative verdict.10 But above and beyond this argumentative function, the routine use of this linguistic pattern also seems to play a simple pragmatic role of initiation for the reviewer—along the lines of a verbal tic of argumentative writing—that simply helps him lead into his evaluative discourse, in particular when it is critical. Indeed, the context of certain utterances shows that this adjective can be followed by a very hard comment against the author; is it still possible to speak of “allowing the author to save face” in the following examples, concerning a proposal rejected (example 10) and to be revised (example 11)?

    Example 10: L’idée d’une telle étude en sémantique est intéressante mais le cadre théorique est inexistant et la problématique reste trop vague (Rj-114)

    [The idea of such a study in semantics is interesting but the theoretical framework is nonexistent and the research question remains too vague.]

    Example 11: Recherche intéressante mais qui en l’état n’apporte rien de bien nouveau dans le champ. (Rs-283)

    [The research is interesting but in its current state does not bring anything new to the field.]
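
    For readers wishing to trace this pattern systematically, the small sketch below pulls out the “interesting … but” stretches within a sentence. It only approximates the concession/refutation dynamic (a human reading remains indispensable) and assumes raw evaluation texts as input.

        import re

        CONCESSION = re.compile(
            r"(certes\s+)?intéressant(e)?s?\b[^.!?]{0,120}?\bmais\b[^.!?]*",
            re.IGNORECASE,
        )

        def concession_snippets(text):
            """Return the 'intéressant ... mais' stretches found in one evaluation."""
            return [m.group(0) for m in CONCESSION.finditer(text)]

        # Applied to example 11 above, this returns:
        # ["intéressante mais qui en l'état n'apporte rien de bien nouveau dans le champ"]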

    In conclusion to this initial part of this paper, we can note that the rhetoric of evaluation does vary depending on the verdict addressed to the author. In our view, the most salient point, which we shall now develop further, concerns the difference between the tone adopted by the reviewer when he accepts or rejects a proposal. In the “proposals accepted” corpus, the reviewers tend to underline the positive points of the proposal and to suggest improvements to the author in the mode of an exchange between peers. In the “rejected proposal” corpus, however, the exchanges are linguistically more hierarchical, and while there is a tendency to try to enable the author to save face—in particular through the use of concession/refutation and through negation—the evaluative discourse often seems to be limited in its pedagogical scope. The following part of this study, focusing on identifying the criteria for success in proposals, will offer some details backing up this observation.

    Norms in Place: The Criteria for a Successful Conference Paper Proposal

    In order to approach this question of the criteria for successful proposals, we first calculated the degree of agreement between the two reviewers responsible for evaluating each proposal: Table 1 summarizes the results obtained by cross-matching the verdicts of the two reviewers.

    Table 1: Distribution of proposals according to each reviewer’s verdict

    The degree of agreement (the sum of the figures in bold) barely exceeds half of all cases (52.6% in total) and mainly concerns the proposals accepted. At the same time, a not insignificant number of proposals (13.5%, or 19 proposals, and 38 evaluations) were accepted by one reviewer and rejected by the other. The degree of homogeneity between evaluations is therefore relatively weak and the subjective nature of the evaluation is clearly apparent. In order to back up this observation, we carried out a more detailed analysis of these cases where the evaluations differed greatly (proposal accepted vs. rejected) by questioning the type of qualitative criteria called upon by each reviewer. Two possible explanations appear, concerning, on the one hand, the low level of convergence between the criteria retained, and, on the other hand, the different weighting given to common criteria.
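
    The agreement figure itself is a simple cross-tabulation of the two verdicts given to each proposal. The sketch below shows one way to compute it, assuming a hypothetical CSV with one line per proposal and columns verdict_1 and verdict_2; it is not the authors’ actual procedure.

        import csv
        from collections import Counter

        def verdict_crosstab(path):
            """Cross-tabulate the two reviewers' verdicts and return the agreement rate."""
            pairs = Counter()
            with open(path, newline="", encoding="utf-8") as f:
                for row in csv.DictReader(f):
                    # order each pair so that symmetric disagreements
                    # (accepted/rejected and rejected/accepted) are pooled together
                    pairs[tuple(sorted((row["verdict_1"], row["verdict_2"])))] += 1
            total = sum(pairs.values())
            agreement = sum(n for (v1, v2), n in pairs.items() if v1 == v2) / total
            return pairs, agreement  # in the corpus studied here, agreement is 52.6%

        if __name__ == "__main__":
            pairs, agreement = verdict_crosstab("verdicts.csv")  # hypothetical file
            print(f"{agreement:.1%} of proposals received the same verdict from both reviewers")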

    Low level of convergence between criteria retained. The analysis of the corpus of 38 highly divergent evaluations consisted in identifying the qualitative criteria highlighted by each of the 19 pairs of reviewers in order to determine what differed between the two evaluations (we could imagine, for example, that one might consider the proposal to be original and the other not). However, there are very few cases in which this analysis is possible. More often than not, the two reviewers do not base their verdict on common criteria. In other words, each reviewer foregrounds different criteria to justify their verdict. The following two examples concerning the same proposal offer an illustration of this:

    Example 12: une problématique claire et bien circonscrite (prop. accepted)

    [a clear and well defined research question]

    Example 13: l’exposé ne dit rien de l’arrière plan théorique de ce travail (prop. rejected)

    [the presentation says nothing of the theoretical background of this work]

    In the first evaluation, no reference is made to the theoretical background identified as lacking by the second reviewer. Similarly, this second reviewer does not mention the research question, which was underscored as being “clear and well defined” by the first reviewer. This type of focus on different criteria in each evaluation can no doubt be explained in many ways. In this case, as the research question is essentially the product of an in-depth knowledge of the theoretical background, it could be imagined that reviewer 1 (who gave a very positive evaluation of the proposal) is familiar with this background, the implicit nature of which could, on the contrary, pose a problem for reviewer 2, who is perhaps not a specialist in the author’s domain. However, whatever the reasons for the discrepancy, one might wonder how the young researcher is likely to interpret these differing views of his work and how they will be of use to him in progressing. Perhaps it is necessary to question the sacrosanct practice of “blind” reviewing, which is so widespread (at least in the context of humanities in France). Could we not imagine that each reviewer, after having written an initial version of his evaluation, should then be made aware of the evaluation written by his counterpart? He would then have the option of rethinking his own selection filter and altering his evaluation (or even his verdict) if necessary before it is sent to the author. This additional stage in the process could well benefit both the homogeneity of the two evaluations and the conscious awareness of practices.

    It should be noted, however, that one criterion is evaluated in opposing terms by both reviewers: clarity. Thus, in examples 14 and 15, the clarity of the proposal is considered to be one of its strengths by the first reviewer, whose verdict is positive (“exposé clair quoiqu’un peu abstrait” [the outline is clear, if somewhat abstract]), whereas it is called into question by the second (“propos confus” [argument unclear]).

    Example 14: L’outil que vous décrivez répond à un besoin réel … . L’exposé de l’architecture globale de la plateforme est clair quoiqu’un peu abstrait (prop. accepted).

    [The tool that you describe corresponds to a real need … . The outline of the overall architecture of the platform is clear, if somewhat abstract.]

    Example 15: Cette proposition est difficile à lire. Elle comporte trop de faiblesses aussi bien du point de vue du fond que du point de vue de la forme. Le propos est confus dans son organisation générale … . (prop. rejected)

    [This proposal is difficult to read. It has too many weaknesses both in terms of content and form. The argument is unclear in its general organization.]

    Three other pairs of evaluations reveal contradictory points of view concerning this criterion. And yet clarity is an omnipresent requirement in French pedagogical discourse whether it be in the words of teachers or in those of writing manuals, where the instruction “be clear!” abounds. Are we sure that we actually know what the “clarity” of a text means? Does this notion correspond to the same reality for everyone, whether reviewer or author? Our observations lead us to doubt that this is the case. Analysis of our corpus highlights the relative nature of the discernment at work when we evaluate our peers’ scientific production. We would thus argue in favour of multiple or cross-referenced evaluations of work, produced in such a way as to call into question—and thus reduce—the bias introduced by our individual filters of judgement.

    Different Weighting Given to the Same Criteria. Qualitative analysis of the corpus of differing evaluations highlighted a criterion for which there was a consensus amongst the reviewers who mention it. However, it does not seem to carry the same weight for each of them. The criterion in question is that of methodological aspects. In our corpus, methodology is always brought up in terms of being lacking, whether the proposal is accepted or refused, as in examples 16 and 17.

    Example 16: il reste à apporter des précisions de nature méthodologique (qu’est ce qui caractérise les deux versions du récit, comment les gestes sont-ils caractérisés, y a-t-il un traitement quantitatif des données) (prop. accepted)

    [methodological details remain to be given (what characterizes the two versions of the narrative; how are gestures characterized, is there quantitative processing of data)]

    Example 17: des précisions seraient nécessaires concernant la méthodologie. Quels sont les facteurs situationnels considérés ? Les analyses ont-elles porté sur deux récits différents ou sur un même conte dans deux situations différentes ? Comment le récit a-t-il été analysé ? Comment les facteurs situationnels et internes ont-ils été intégrés dans l’analyse de la gestualité ? S’agit-il d’analyses qualitatives ou quantitatives ? … (prop. rejected)

    [further details would be necessary concerning the methodology. What are the situational factors considered? Did the analyses focus upon two different narratives or upon the same tale in two different situations? How was the narrative analyzed? How were the situational and internal factors integrated into the analysis of gesture? Are these analyses quantitative or qualitative? …]

    The importance given to one criterion is therefore different depending on the reviewer, whose position (in terms of verdict) has a clear influence on the way the questions are formulated. The first questions (example 16) are intended to allow the young researcher to progress and are clearly pedagogical: one gets the impression that the reviewer believes in the proposal’s potential for improvement. The second questions, on the other hand (example 17), fall more within the scope of self-justification than pedagogy—and one could suppose that their length and number would produce a fairly discouraging effect for their reader. In sum, this type of evaluation seems to be directed more towards the organizing committee than towards the author.

    These brief analyses raise the question of the didactic scope of the evaluative commentary, which can sometimes appear limited for a young researcher who is still unfamiliar with the workings of scientific writing and only just discovering the institutional expectations of the domain. It should be noted that, given the specificity of the group of authors in question, the organizing committee had explicitly requested that reviewers provide constructive comments to help the young researchers improve their practice. When the proposal was accepted, in particular, there was a tendency to respect this instruction: as we have seen here, the reviewer expressed praise and made suggestions to the author for improvements. Conversely, when the proposal was rejected, this request was not always followed: in other words, it is when the proposals show the most weaknesses and when the young researchers would most benefit from constructive comments that they are least likely to receive them. It is true that proposals are sometimes very far from meeting expectations, which could serve to discourage the reviewer in his intention to comment helpfully and lead him to simply justify his decision to reject the proposal with a comment expressing a definitive and irrevocable judgment. This raises the question of the doctoral student’s supervision. Indeed, it seems important to us to provide support for doctoral students not only in the writing of their PhD dissertation but equally in the necessary dissemination of their work; in other words, through the submission of articles and conference papers. More generally, we need to make the effort (which would take time but no doubt be highly beneficial to young researchers’ training) to produce precise and constructive comments when we find ourselves in the position of evaluating young researchers’ proposals for conference papers or articles.

    Conclusion and Further Perspectives

    As mentioned in the introduction, studies focusing on the evaluation of conference paper proposals are only just beginning. This chapter can thus be considered as an initial foray in the field, with a view to opening up avenues for further analyses. The first conclusion to be drawn from this study is methodological: a larger corpus would enable more interesting observations to be carried out. The corpus in question here is too limited to allow some of our hypotheses to be validated or to provide sufficient evidence for certain comparisons, which nonetheless seem promising. In particular, our analysis of evaluative adjectives warrants further development. The adjective “clair” (clear) seems to be used far more with negation than “original” or “interesting”; it would be useful to carry out a similar analysis for the other three most prevalent adjectives in our corpus, i.e., “pertinent” (relevant), “important” (important), and “bon” (good).

    Similarly, in terms of the linguistic routines used by researchers, it would be interesting to examine further the differences according to the verdict given on the proposal. The number of cases here is too limited to allow any reliable tendencies to be observed.

    We have seen that the use of the personal pronouns “I” and “you” depended greatly upon the viewpoint of the reviewer on the proposal in question. A high number of “you”s addressed to the author seemed to correlate with a negative evaluation of the proposal, allowing the reviewer to justify his rejection by highlighting the author’s weaknesses and shortcomings. Conversely, a high number of “I”s seems to correlate with positive evaluations: it appears that the reviewer wants to offer advice to the author from a personal standpoint. However, given that personal pronouns were absent from the majority of texts in our corpus, this hypothesis would need to be refined and tested upon a larger corpus. This would enable greater analysis of the differences between the “to be revised” and “rejected” sub-corpora, from the point of view of indicators of didactic intention. It could be thought that the most advice and constructive criticism would be found in the sub-corpus regarding proposals “to be revised” with a view to enabling the author to go on to achieve a positive evaluation. This hypothesis could be tested by counting the number of conditional verbs leading into a suggestion present in both corpora (e.g., “il faudrait développer … ”/“… should be developed;” “il vaudrait mieux préciser … ”/“it would be better to specify … ”; “il serait judicieux de … ”/“it would be wise to … ”). Another possibility would be to analyze in more detail the types of questions asked by reviewers. Indeed, some are in fact of an advisory nature (of the type “pourquoi ne pas … ”/“why don’t you … ”) while others are in fact simply critical (of the type “où sont les données?” “where are the data?”). In either case, statistical analysis alone would obviously not suffice and would need to be supplemented by qualitative analysis.
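
    The text-level counting logic sketched earlier carries over directly to this comparison; what changes is the pattern being searched for. A minimal, purely illustrative version of the conditional-suggestion test might look as follows; the three patterns come from the examples just given and would need to be extended and checked manually.

        import re

        SUGGESTION = re.compile(
            r"\bil\s+(faudrait|vaudrait\s+mieux|serait\s+judicieux\s+de)\b",
            re.IGNORECASE,
        )

        def contains_suggestion(text):
            """True if an evaluation contains at least one conditional suggestion marker."""
            return SUGGESTION.search(text) is not None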

    We would argue in favour of the pooling of resources of a large variety of evaluation corpora. This would pave the way for further, more ambitious, studies looking at comparisons between disciplines or languages in the same way as existing studies on other types of scientific writing. Studies could also consider differences depending upon the status of the author of the text being evaluated (young or experienced researcher) and upon the institutional context of the evaluation (evaluation of a conference paper, an article or a funding proposal).

    The comparison between languages strikes us as a particularly promising avenue. The researcher’s native linguistic culture has been shown to be of limited influence in the case of research articles (cf. Fløttum et al., 2006), but is this also the case for evaluations? More specifically, it would be interesting to determine the extent to which the phenomena in question are specifically French. Can similar observations be made concerning English, for example? To take this even further, it is worth considering the possible variations linked to non-native use of English given that this is now the dominant language for the dissemination of scientific research. Do non-native speakers of English bring to bear their own cultural specificities upon their language use, or does the language itself shape usage in this field?

    In sum, although this study remains merely exploratory, in our view it opens up a vast number of possible avenues for further analysis. The methodological tools offered by the Scientext platform (which is freely accessible to all), adapted for the purposes of this study to the type of corpus in question, can enable linguistic analysis of substantial corpora.11 These future studies, of which we hope there will be many, will allow us to better understand our own habitus in terms of evaluation. They may also allow us to become more aware of the linguistic routines specific to our scientific communities and to take a more critical view of the (more or less explicit) criteria that we bring to bear upon our evaluations of our peers.

    Notes

    1. By the term “scientific writing,” we refer to the pieces of writing produced by researchers (doctoral students or professional researchers) that have as their aim the building and dissemination of scientific knowledge. In the Francophone context, contrary to the Anglophone one, the label “scientific writing” does not only cover the physical and natural sciences, but equally social sciences and the humanities.
    2. For two overviews of the state of the art in this field, see Hyland & Bondi (2006) in English, and Rinck (2010) in French.
    3. This contribution is part of a research project entitled Écrits Universitaires: Inventaires, Pratiques, Modèles (2007-2010), and funded by the Agence Nationale de la Recherche in France.
    4. These studies (cf. for example Hyland, 2002; Harwood, 2005; Grossmann & Rinck, 2004; Boch & Rinck, 2010) have shown that far from being neutral and objective, scientific writing includes a form of subjectivity and a persuasive aim, and that this dimension varies according to the context: by studying research articles in medicine, linguistics and economics in three languages (English, French and Norwegian), Fløttum (2007) demonstrated that the disciplinary parameter was in fact more decisive than the national culture (the language) of the researcher.
    5. This breakdown refers to the verdict given by the reviewers and not the final decision, which came down to the organizing committee when there was disagreement between the reviewers.
    6. This platform was created as part of Scientext, another project in the laboratoire LIDILEM (address: scientext.msh-alpes.fr, directed by F. Grossmann and A. Tutin), which includes three large corpora that can be consulted online:
      • A pluridisciplinary corpus of scientific writing in French representing a variety of genres and containing just under five million words.
      • A corpus of learners’ English including long pieces of work by students studying English as a foreign language (1.1 million words).
      • An English corpus of scientific writing, taken from the BMC corpus, mainly in the fields of biology and medicine, that comes close to 13 million words and is the subject of lexicological study (Williams & Million, in press).
    7. All our calculations take into account the number of texts in which the term studied appears and not the number of occurrences of the term. This allows us to neutralize any bias caused by the personal style of the reviewer, who might use “you” or “I” excessively and thus skew the averages.
    8. Due to a lack of occurrences of adjectives preceded by negations (none for “interesting” and 11 for “clear”) it was not possible to carry out a comparative analysis of the three sub-corpora.
    9. This analysis also applies to the adjective “original,” more present in the “to be revised” and “rejected” corpora than in the “proposal accepted” corpus (cf. Figure 3). While utterances of the type “le sujet est peu original” [the subject is not very original] can be found, this adjective is often used in a positive manner in the form of a concession followed by a “but” introducing an element that requires further work (“thématique originale mais aspects théoriques insuffisamment développés” [the topic is original but the theoretical aspects are insufficiently developed]). For a detailed analysis of evaluative adjectives in scientific discourse, see Tutin (2010).
    10. On this subject, it should be noted that in the “proposals accepted” corpus, “interesting” is never followed by “but” or any other marker of refutation. It would seem that the adjective takes on its full meaning again, moving away from the recurrent argumentative role that it plays in the case of proposals refused/to be revised.
    11. Given that nowadays conference paper proposals are more often than not evaluated using online electronic tools, collecting evaluations seems far more feasible than before. The greatest difficulty lies in obtaining permission from the reviewers to use these evaluations.

    References

    Boch, F., & Rinck, F. (Eds.). (2010). Enonciation et rhétorique dans l’écrit scientifique. Lidil, 41. Retrieved from http://lidil.revues.org/index3001.html

    Bourdieu, P. (1993). La misère du monde. Paris: Seuil.

    Fløttum, K. (2007). Language and discipline perspectives on academic discourse. Cambridge, UK: Cambridge Scholars.

    Fløttum, K., Dahl, T., & Kinn, T. (Eds.). (2006). Academic voices: Across languages and disciplines. Amsterdam/Philadelphia: John Benjamins.

    Grossmann, F., & Rinck, F. (2004). La surénonciation comme norme du genre: L’exemple de l’article de recherche et du dictionnaire en linguistique. Langages, 156, 34-50.

    Harwood, N. (2005). We do not seem to have a theory. … The theory I present here attempts to fill this gap: Inclusive and exclusive pronouns in academic writing. Applied Linguistics, 26(3), 343-375.

    Hyland, K. (2002). Authority and invisibility: Authorial identity in academic writing. Journal of Pragmatics, 34, 1091-1112.

    Hyland, K., & Bondi, M. (Eds.). (2006). Academic discourse across disciplines. Linguistic Insights, 42. Bern: Peter Lang.

    Maingueneau, D. (1992). Le tour ethnolinguistique de l’analyse du discours. Langages, 26(105), 114-125.

    Rinck, F. (2010). L’analyse linguistique des enjeux de connaissance dans le discours scientifique: Un état des lieux. Revue d’anthropologie des connaissances, 3(4), 427-450. Retrieved from www.cairn.info/revue-anthropo...3-page-427.htm

    Tutin, A. (2010). Evaluative adjectives in academic writing in the humanities and social sciences. In R. Lorés-Sanz, P. Mur-Dueñas, & E. Lafuente-Millán (Eds.), Constructing interpersonality: Multiple perspectives on written academic genres (pp. 219-240). Cambridge, UK: Cambridge Scholars.

    Williams, G., & Million, C. (2009). The general and the specific: Collocational resonance of scientific language. In M. Mahlberg, V. González-Díaz, & C. Smith (Eds.), Proceedings Corpus Linguistics 2009. Liverpool: University of Liverpool. Retrieved from http://ucrel.lancs.ac.uk/publication..._FullPaper.doc.