A guide for social sciences and humanities applicants.
Funding decisions in the social sciences and humanities (SSH) often appear opaque from the applicant’s perspective. Evaluation criteria are public, but applications are most often considered in relation to one another rather than against a fixed set of standards.
This guide treats committee review as a situated practice shaped by disciplinary norms, comparative ranking, and time constraints. It draws on empirical studies of peer review, qualitative accounts of panel deliberation, and Canadian analyses of adjudication, including how proposals are discussed and ranked.
A committee does not operate from a single perspective. Standards vary by field even when a funder uses common criteria, and members do not all read applications in the same way. Although instructional materials are provided, reviewers engage with them inconsistently (Sims Gould et al. 2025, 4).
This guide does not aim to provide a template. It describes how evaluative attention tends to function in practice and how applications can be designed for legibility under those conditions.
A committee does more than assess merit. It judges relative strength under conditions of limited funding. During review, very different kinds of work are being made comparable on a common scale and then ranked. As SSHRC’s Blue Ribbon Panel notes, committee members weigh the “relative merit, quality and significance” of proposals in contrast with many others (SSHRC 2008, 2).
Committee work often proceeds in two phases. Initial assessments are typically formed by a subset of committee members before proposals enter comparative discussion. The committee then weighs proposals against one another, focusing discussion on points of disagreement or where applications fall into a competitive middle range (SSHRC 2008, 47).
This comparative phase creates a predictable problem. Many applications are written as if they will be judged in isolation, but committees score proposals in relation to others competing for the same pool of funds.
Application guidance often presents criteria as a single list. In practice, committees evaluate different parts of a proposal using different criteria. Hug and Aeschbach distinguish between the evaluated entity and the evaluation criterion. The entity is what is judged, such as the research question, methods, plan, budget, or applicant achievements. The criterion is the dimension of judgment, such as originality, relevance, rigour, clarity, or feasibility (Hug and Aeschbach 2020, 4).
Proposals frequently develop some components more fully than others. A rich theoretical framework may be paired with an underdeveloped plan. A strong research question may be accompanied by limited methodological detail.
Hug and Aeschbach further group evaluation into aims, means, and outcomes. Aims and anticipated outcomes are judged for originality and relevance. Means, including research process and use of resources, are judged for rigour, appropriateness, coherence, and clarity of description. Resources as well as the applicant’s experience and track record are judged in terms of feasibility (Hug and Aeschbach 2020, 23–24).
Strong applications show precisely where originality is situated and how the proposed means can actually realize their aims.
Even when the same criteria are used, committees do not share a single standard of excellence. Michèle Lamont’s study of multidisciplinary panels shows that reviewers bring discipline-specific standards and often justify them in discussion. Differences are managed through what she calls “customary rules of deliberation” (Lamont 2009, 6).
One key rule is cognitive contextualization. Evaluators interpret shared criteria in ways that are compatible with the discipline of the proposal. This presumes methodological pluralism and recognition that different methods serve different purposes (Lamont 2009, 133). It often involves recognizing that those closest to the field have the strongest basis for accurate evaluation.
Two consequences follow for SSH applicants. Proposals are read by mixed audiences that include non-specialists who rely on signals of intelligibility and plausibility. At the same time, deference to expertise can constrain the discussion. Lamont describes a case where a panelist suspected a proposal was being dismissed unfairly as “dilettantish” but hesitated to challenge the judgment outside their expertise, resulting in more limited deliberation (Lamont 2009, 119).
A sympathetic reader may not be able to advocate for a project if its value cannot be articulated in terms that the committee recognizes. This requires writing that remains specific within a field but also intelligible and accessible to non-specialists.
Committees operate under practical constraints. Several features of their working conditions are especially relevant.
Reviewers often describe learning to review “on the job,” with limited standardized training. Even when guidance exists, it is not internalized or applied evenly. Reviewers therefore rely on personal experience and their own philosophies of merit (Sims Gould et al. 2025, 4).
Review does not become arbitrary under these conditions, but interpretive practice begins to vary. When shared assumptions cannot be relied on, clarity and explicitness in the writing carry much of the evaluative weight.
Reviewers frequently struggle to differentiate proposals of similar strength, particularly within the “meritorious middle” (Sims Gould et al. 2025, 7). When distinctions are difficult to draw, judgments may be guided by personal topic preferences or perceptions of what is fashionable (Sims Gould et al. 2025, 5).
In these cases, proposals benefit from stating clearly what should count as their main points of distinction, framed in terms committees already assess: specific claims to originality, the significance of the contribution, the methodological choices that make the work viable, and the plan and resources that make completion feasible.
Under time constraints, committee discussion centres on elements that directly inform evaluation and scoring. Recurring focal points include the clarity of aims and research questions, the appropriateness of the methods, the feasibility of the plan and budget, and the applicant’s track record.
Because these elements anchor deliberation from the outset, proposals benefit from articulating them clearly and making them easy to locate through framing and section structure.
Deliberation is not simply an accumulation of individual reviews. It is a process through which individual assessments are translated into a collective ranking of proposals.
Lamont shows that even when panels aim to apply universalistic standards, deliberation unfolds in a social setting shaped by collegial norms and intellectual affinities. These norms help panels function but can also limit how far disagreement is pursued (Lamont 2009, 120–121). Not every hesitation about a proposal becomes a sustained critique in conversation.
This is significant because committee discussion does not always analyze a proposal’s strengths and weaknesses in full. Early characterizations can therefore carry weight in how a project is understood relative to others.
For applicants, this means that parts of a proposal will circulate in discussion rather than being reread in full. The proposal should be formulated so that claims remain accurate when retold by others and visibly grounded in the written document.
Grant applications are evaluated structurally as well as intellectually. Committees look for specific components that allow for comparison and judgment. The task is to make these components easy to locate and clear enough to avoid misinterpretation.
Aims include the topic, research questions, goals, and scope. They are judged primarily for originality and relevance.
A common problem is treating a topic or theme as if it were a research aim. Committees need to know what will be clarified, revised, argued, tested, or produced within the proposed framework.
Aims should be stated in ways that allow committees to distinguish one project from another. When many proposals claim to produce new insight, specificity about what becomes newly thinkable or doable allows distinctions to emerge.
Means include theory, method, analysis, and research process. They are judged for rigour, appropriateness, coherence or justification, and clarity of description.
A strong theoretical framing does not compensate for an under-specified research process. Committees may value the orientation and still judge the means as underdeveloped.
The research process needs to be described in a way that makes the sequence of steps and expected outputs visible. This is the level at which methodology is evaluated.
Outcomes include expected results and anticipated contributions to the field. They are judged for originality and relevance, including extra-academic relevance when applicable.
Outcomes describe what will change if the work succeeds. In SSH this may involve conceptual clarification, empirical insight, methodological contribution, archival work, or knowledge mobilization. Committees also attend to the significance of these contributions: how they advance understanding, reframe existing questions, or open productive lines of inquiry.
Resources include the budget, timeline, and institutional context. These are evaluated in terms of feasibility.
For project grants, budgets often receive sustained attention in deliberation. In fellowship competitions, where funding amounts are fixed, attention shifts to whether the proposed work is feasible within the time and support available.
If there is a budget to consider, the aim is alignment between the proposed work and the requested funds. A budget should function as a diagram of the project. If it is presented as a separate object, it invites scrutiny that goes beyond the core evaluation of the project.
Track record is read as evidence of an applicant’s ability to carry the proposed work to completion. Reviewers often look for signs of follow-through on related projects and outputs that match prior commitments. What they are assessing is the alignment between an applicant’s documented abilities and the scope of the proposal.
A record is most persuasive when it reinforces the proposal’s feasibility and overall coherence. It is important to ensure that the track record demonstrates prior follow-through, relevant research outputs, and experience managing work of comparable scale.
Reviewers often struggle to distinguish among proposals of similar strength, especially in what Sims Gould et al. describe as the “meritorious middle,” referring to “applications that are considered ‘fundable’ if the funding pool was larger” (Sims Gould et al. 2025, 7). Many highly competent and feasible applications end up in this zone, where small differences in clarity, scope, or articulation can carry disproportionate weight.
This requires making points of comparison available. Instead of relying on strong claims alone, a proposal can provide clear points of reference in forms compatible with SSH norms: a named debate or body of scholarship the project advances, a specified corpus, archive, or set of sites, and a stated scale and kind of contribution.
Committee reading typically moves through four stages.
The first stage establishes what kind of project this is and where it belongs. Early sentences often do this work, signaling the field, the topic, and the scale.
In the second stage, attention turns to what will be produced and why it constitutes an original and significant contribution. This is where the proposal’s claims and stakes need to be made explicit rather than implied.
In the third, reviewers look for a process that is coherent, appropriate for the project, and clearly described. Methods and research design should make the path from aims to outcomes visible.
Finally, the proposal is considered in relation to others. Distinctions become easier to draw when contributions, scope, and plans are stated in comparable terms.
Writing with this sequence in mind makes the project’s key elements visible at points where they inform evaluation.
Proposals are read comparatively. How a project is articulated matters as much as what it proposes to do.
Strong SSH applications present clear aims and research questions, means and methods, outcomes and contributions, and overall feasibility. They give reviewers language that travels into deliberation without losing specificity appropriate to the field. They anticipate the comparative zone where distinctions are hardest to evaluate and supply defensible bases for comparison.
Applicants should always consult the specific evaluation criteria and instructions for the program to which they are applying, since these define how proposals are formally assessed. At the same time, assessment is not mechanical, and meeting criteria alone does not determine how a proposal is received. Committees interpret proposals within practical and procedural constraints, and members bring varied experiences and perspectives to their reading. Evaluation also unfolds relationally in discussion, where proposals are scrutinized and characterized comparatively.
An application is therefore not only a statement of research but a carefully designed document calibrated for these conditions. It should be crafted to be conducive not only to reading but to discussion, maintaining its precision as it moves from individual review into collective deliberation.
Hug, Sven E., and Mirjam Aeschbach. 2020. “Criteria for Assessing Grant Applications: A Systematic Review.” Palgrave Communications 6 (30). https://doi.org/10.1057/s41599-020-0412-9.
Lamont, Michèle. 2009. How Professors Think: Inside the Curious World of Academic Judgment. Cambridge, MA: Harvard University Press.
Sims Gould, Joanie, Anne M. Lasinsky, Adrian Mota, Karim M. Khan, and Clare L. Ardern. 2025. “Threats to Grant Peer Review: A Qualitative Study.” BMJ Open 15: e091666. https://doi.org/10.1136/bmjopen-2024-091666.
Social Sciences and Humanities Research Council of Canada (SSHRC). 2008. Promoting Excellence in Research: An International Blue Ribbon Panel Assessment of Peer Review Practices at the Social Sciences and Humanities Research Council of Canada. Ottawa: SSHRC.