AI saves researchers 10 to 15 hours per week on literature synthesis and data interpretation, if you use the right prompts. The researchers who are not saving that time are either avoiding AI entirely or using it for the wrong tasks, such as asking general-purpose models to generate citations from memory, which produces hallucinated references at an unacceptable rate. This guide covers the research workflow tasks where AI delivers genuine productivity gains, with prompts designed for the actual research process.
| Research task | AI usefulness | Time saved | Best tool |
|---|---|---|---|
| Literature synthesis | High | 8 to 12 hrs per review | Elicit + Claude |
| Qualitative coding | High | 4 to 6 hrs per dataset | Claude Sonnet |
| Grant writing | High | 3 to 6 hrs per section | ChatGPT / Claude |
| Data interpretation | Moderate to High | 2 to 4 hrs per analysis | ChatGPT Advanced Data Analysis |
| Discussion writing | Moderate to High | 2 to 4 hrs per paper | Claude / ChatGPT |
| Peer review responses | Moderate | 1 to 3 hrs per revision | Claude / ChatGPT |
| Citation finding | Low (without verified sources) | Minimal; use verified databases instead | Elicit, Consensus, Semantic Scholar |
| Hypothesis generation | Low | Limited; use for brainstorming only | Researcher judgment primary |
The cardinal rule of AI-assisted literature review: never ask AI to generate citations from memory. General-purpose language models will produce plausible-looking references that do not exist. The safe workflow separates sourcing from synthesis.
Qualitative coding is the research task with the steepest time cost. A full round of interviews with 15 to 20 participants, each transcript 60 to 90 minutes long, can take weeks to code manually. AI reduces the time to first-pass coding dramatically when the prompts are structured to mirror rigorous qualitative methodology.
Read this interview transcript. Perform open coding: identify all themes that emerge from what the participant says. For each theme: give it a descriptive code name, quote the specific text that illustrates it (verbatim), and note approximately how many times it appears. Do not impose an existing theoretical framework. Generate codes from the data. Treat repeated ideas as one code. [Paste transcript]
Apply the codebook below to this interview transcript. For each code, identify every passage in the transcript that fits and quote the relevant text verbatim. Note any passages that do not fit existing codes and suggest new codes where appropriate. If a passage fits multiple codes, assign all that apply. Codebook: [paste your codebook]. Transcript: [paste transcript]
Grant narratives follow predictable structures that AI drafts well when given the right inputs. The difference between a generic AI draft and a competitive one is the specificity of your inputs: actual data, named prior work, and clear statements of what is novel.
Write a specific aims page for a grant proposal. Research gap: [describe the gap with supporting evidence]. Proposed approach: [describe your methodology]. Innovation: [what is new: method, question, population, or theoretical contribution]. Impact: [who benefits and in what timeframe]. Funding body: [agency name]. Review criteria emphasize: [innovation / impact / approach / investigator, in priority order]. Under 800 words. No undefined acronyms. Write for a mixed reviewer panel including non-specialists in my specific subfield.
Write a significance and innovation section. The core argument: [your claim about why this research matters]. Scale or prevalence data supporting this: [your data points]. Current state of the literature: [summarize in 2 to 3 sentences]. What existing approaches fail to do: [the gap your research fills]. What our approach does differently: [specific innovations]. 400 to 600 words. Ground every claim in evidence; do not use vague importance language like 'this will greatly advance the field.'
Literature synthesis is the highest-leverage application. Manually reading, summarizing, and finding patterns across 30 to 50 papers takes days. With AI, you can paste abstracts or paper sections and extract themes, contradictions, and gaps in hours. Interview and qualitative data coding is second: AI can apply a codebook to transcripts consistently and at scale, reducing the time to first-pass coding by 70 to 80 percent. Third is writing: translating findings into clear prose, drafting grant narratives, and converting technical results into executive or policy summaries. The tasks AI is weakest at in research contexts are: original hypothesis generation (it reproduces existing knowledge patterns), statistical analysis requiring judgment about model assumptions, and any task requiring access to proprietary or paywalled data it has not seen.
Most institutions in 2026 have moved from blanket prohibition to conditional acceptance with disclosure requirements. The ethical framework that has emerged centers on three principles: transparency (disclose AI use in methods sections, specifying which tools and how they were used), verification (AI-assisted work requires the same human verification as any other method; you are responsible for the accuracy of AI-generated summaries), and integrity (AI should not be used to fabricate data, misrepresent sources, or generate citations that do not exist). The specific risks to manage: AI hallucinating citations (always verify every reference independently), AI producing plausible-sounding but incorrect summaries of papers (always cross-check against the original), and AI generating language in a way that constitutes plagiarism if it reproduces copyrighted training data verbatim. Check your institution's current policy before use, as guidelines are evolving.
The rule that eliminates hallucination in literature review contexts: never ask AI to generate citations from memory. AI will produce plausible-looking references that do not exist or that misattribute claims. The safe workflow: use AI to help you structure and synthesize, not to source. Step 1: Find papers using Google Scholar, Semantic Scholar, Elicit, or Consensus (tools with verified databases). Step 2: Export abstracts or paste paper sections into AI. Step 3: Ask AI to synthesize themes, identify contradictions, and note gaps across the papers you have provided. Step 4: Ask AI to draft narrative sections that reference specific papers by the author and title you provide, not by generating new references. Prompt: 'Based only on the papers I have provided, identify the three main themes in this literature, any methodological controversies, and the most commonly cited gaps. Do not add references not in the list I provided.'
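The constraint in step 4 can also be audited mechanically after the fact: compare every citation the AI names against the list of papers you actually supplied. A minimal sketch, assuming citations appear in a simple (Author, Year) format; the regex and helper names are illustrative, not part of any tool named above:

```python
import re

def extract_citations(text):
    """Find author-year citations like (Smith, 2021) or (Lee et al., 2019)."""
    return set(re.findall(r"\(([A-Z][A-Za-z]+(?: et al\.)?), (\d{4})\)", text))

def unverified_citations(draft, provided):
    """Return citations in the AI draft that were not among the papers supplied."""
    allowed = {(author, year) for author, year in provided}
    return extract_citations(draft) - allowed

draft = "Prior work shows X (Smith, 2021) and Y (Jones, 2020)."
provided = [("Smith", "2021")]
# (Jones, 2020) was never given to the model, so it must be verified by hand
flagged = unverified_citations(draft, provided)
```

Anything this flags goes back to a verified database before it goes anywhere near your reference list.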
Qualitative coding with AI works best with a two-pass approach. First pass (open coding): paste a transcript and prompt: 'Read this interview transcript. Identify all themes that emerge from what the participant says. For each theme: give it a descriptive code name, quote the specific text that illustrates it, and note how many times it appears. Do not impose a theoretical framework; generate codes from the data.' Second pass (applying a codebook): once you have an initial codebook from the first few transcripts, apply it to the remaining transcripts: 'Apply this codebook to the interview transcript below. For each code, identify every passage that fits and quote the relevant text. Note any passages that do not fit existing codes and suggest new codes where appropriate. Codebook: [paste your codebook].' The two-pass structure mirrors standard qualitative methodology and maintains the rigor of researcher-generated codes.
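The second pass can be spot-checked without AI at all: a crude keyword matcher tells you which passages no code ever touched, so you know where to audit the AI's assignments. This is a coverage check only, not a substitute for coding; the example codebook and keywords below are invented:

```python
def apply_codebook(codebook, passages):
    """First-cut keyword matching: assign each passage every code whose
    keywords appear in it, and collect passages that no code matched."""
    coded, uncoded = {code: [] for code in codebook}, []
    for passage in passages:
        text = passage.lower()
        hits = [c for c, kws in codebook.items() if any(k in text for k in kws)]
        for c in hits:
            coded[c].append(passage)
        if not hits:
            uncoded.append(passage)
    return coded, uncoded

# Hypothetical codebook: code name -> trigger keywords
codebook = {"workload": ["overtime", "hours"], "autonomy": ["decide", "control"]}
passages = ["I work long hours.", "Nobody lets me decide.", "The cafeteria is nice."]
coded, uncoded = apply_codebook(codebook, passages)
```

Passages landing in `uncoded` are exactly the ones worth re-reading yourself or feeding back with the 'suggest new codes' instruction.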
Grant writing is where AI saves the most time for researchers because the structure of most grant sections is predictable and AI produces strong first drafts when given the right inputs. The specific requirements section: 'Write a specific aims page for a grant proposal. Research gap: [describe the gap]. Proposed approach: [describe your approach]. Innovation: [what is new about this]. Impact: [who benefits and how]. The funding body is [agency name] and the review criteria weight [innovation / impact / approach]. Under 800 words. No jargon beyond what the field requires.' The significance section: 'Write a significance section explaining why this research matters. The audience is [describe reviewers: specialists / mixed panel]. The main argument is [your core claim]. Support this with: the prevalence or scale of the problem [your data], the current state of the field [summarize], and what solving this would enable. 300 to 500 words.' AI cannot tell you what to research or generate novel ideas, but it dramatically reduces the time from research plan to polished prose.
The research stack that covers most use cases: Elicit for systematic literature search and paper extraction with verified citations. Consensus for finding scientific consensus on specific questions across peer-reviewed literature. Perplexity Deep Research for comprehensive research synthesis with source attribution. NotebookLM for uploading papers and having a conversation that is grounded entirely in those documents (preventing hallucination). Claude (3.7 Sonnet) for long-document analysis, grant writing, and any task requiring extended reasoning over large text. ChatGPT (GPT-4o) for tasks that benefit from web search integration and code execution for data analysis. Semantic Scholar for citation analysis and finding the most-cited work in a field. The tools that are not useful: asking any general-purpose LLM to find you papers from memory; hallucination rates for specific citations are unacceptably high.
AI combined with code execution (ChatGPT's Advanced Data Analysis or Claude with tools) can run statistical analyses, generate visualizations, and interpret results. The most reliable approach: upload your dataset and ask AI to describe what it sees before asking it to interpret. Prompt: 'Describe the structure of this dataset: number of observations, variables, data types, missing values, and notable distributions. Do not interpret yet.' Then: 'What descriptive statistics are most relevant for answering the question of [your research question]? Calculate them and explain what each tells us.' Then: 'Given these descriptive findings, what statistical approach would you recommend for testing [hypothesis]? Explain the assumptions and whether our data meets them.' Asking AI to explain its reasoning at each step catches errors before they propagate. For complex analyses, use AI to write the code in R or Python, run it yourself, and paste the output back for interpretation; this gives you the audit trail that peer review will require.
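The 'describe before interpreting' step can equally be run locally and the output pasted into the chat. A stdlib-only sketch; the column names, CSV layout, and the "NA" missing-value convention are placeholders for your own dataset:

```python
import csv
import io
import statistics

def describe(csv_text, numeric_cols):
    """Summarise structure: observation count, variables, and per-column
    missing counts, mean, and sample standard deviation."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    summary = {"n_observations": len(rows),
               "variables": list(rows[0].keys()) if rows else []}
    for col in numeric_cols:
        values = [float(r[col]) for r in rows if r[col] not in ("", "NA")]
        summary[col] = {"missing": len(rows) - len(values),
                       "mean": round(statistics.mean(values), 2),
                       "sd": round(statistics.stdev(values), 2)}
    return summary

# Toy data standing in for an uploaded file
data = "id,score\n1,10\n2,12\n3,NA\n4,14\n"
print(describe(data, ["score"]))
```

Pasting output like this back into the chat, rather than the raw file, keeps you in control of what the model sees and what it claims.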
After results writing, the discussion section is where AI helps the most. The common failure mode is a discussion that just restates the results without situating them in the literature or addressing limitations honestly. Prompt: 'Write a discussion section for these research results: [paste your key findings]. The research question was: [question]. The key prior work this extends or contradicts includes: [list 3 to 5 studies with authors and findings]. What is novel about our findings compared to this prior work: [your description]. Limitations of our study: [list them]. Implications for theory / practice / policy: [your description]. Structure: interpretation of findings, relationship to prior literature, theoretical contribution, practical implications, limitations, future directions. Under 800 words.' The limitations section is the most important part to write carefully and honestly; reviewers are more suspicious of papers that understate limitations than of ones that are transparent about them.
Responding to peer reviews is one of the most frustrating tasks in academic research, and AI is genuinely useful for structuring the response. Workflow: paste the reviewers' comments and prompt: 'List every distinct criticism or request from these peer review comments, numbered. For each, classify it as: factual question about the data, methodological concern, request for additional analysis, writing or clarity issue, or philosophical disagreement with our approach.' Once categorized, tackle each category. For methodological concerns: 'Reviewer 2 raises this concern about our methodology: [paste]. Our approach was [explain]. Draft a response that: acknowledges the concern, explains the rationale for our choice, notes any limitations this creates, and if we can address it, describes what we would add.' Do not use AI to dismiss legitimate methodological concerns. Use it to help you articulate your response more precisely and to identify which concerns actually warrant changes to the manuscript.
Translating technical research for policy, press, or public audiences is a task AI handles well when given clear target audience specifications. Prompt: 'Summarize this research paper for [target audience: policymakers / journalists / the general public]. The paper is: [paste abstract or full paper]. Translate all technical terms into plain language. Focus on: why this research was done, what was found, what it means for [policy / practice / daily life]. Avoid all statistical notation. Length: [one paragraph for a press release, 300 words for a policy brief, 150 words for a social media post]. Do not overstate the findings; be accurate about what the evidence does and does not show.' The last instruction is the most important. AI has a tendency to make findings sound more definitive than the paper claims when writing for non-specialist audiences. Explicitly asking for accuracy about the limits of the evidence counteracts this.
Expert prompts for academic researchers: literature review, paper synthesis, research writing, data analysis, methodology, and grant applications.
Synthesise findings across multiple papers
I've read the following papers on [topic]: [paste summaries or abstracts] Help me synthesise them by: 1. Identifying the major themes or findings that appear across multiple papers 2. Noting where papers agree and where they conflict, with specific details 3. Identifying methodological patterns (what approaches dominate? what's rarely used?) 4. Mapping the chronological development of ideas in this area 5. Identifying what questions remain unanswered across all these studies Present this as a synthesis, not a list of individual paper summaries.
Identify research gaps for a proposal
Based on this overview of the literature on [topic]: [paste your summary] Help me identify research gaps suitable for a new study: 1. Empirical gaps: what phenomena or populations haven't been studied? 2. Methodological gaps: what methods have been underused or not applied to this context? 3. Theoretical gaps: what theoretical questions remain untested? 4. Contextual gaps: what settings, cultures, or time periods are missing? 5. Practical gaps: what knowledge would most benefit practitioners in this field? For each gap: explain why it matters and what filling it would contribute.
Thematic organisation of literature
I have read [X] papers on [topic]. Here are brief summaries of each: [paste your summaries] Help me organise them into a thematic structure for my literature review: 1. Suggest 4-6 overarching themes that cut across the papers 2. Map each paper to its primary theme(s) 3. For each theme, suggest a 2-3 sentence summary of what the literature shows 4. Suggest the logical order to present these themes (which builds on which?) 5. Identify which themes are most contested and would benefit from deeper treatment
Critical annotation of a paper
I'm annotating this paper for my literature review: [paste abstract and key sections or describe the paper] Help me critically assess: 1. Research question clarity: is it well-defined and answerable? 2. Methodology appropriateness: does the method match the question? Key limitations? 3. Evidence quality: how strong is the evidence for the conclusions? 4. Claims vs evidence: does the paper overstate or understate its findings? 5. Contribution: what does this paper add that wasn't known before? 6. Relevance to my research: [describe your research focus]. How does this relate, and how closely?
Improve clarity and argument flow
Review this section of my academic paper for clarity and argument strength: [paste section] Field: [discipline] Target journal: [journal name or type] Feedback I need: 1. Is the central argument clear in the first paragraph? 2. Does each paragraph have a clear topic sentence? 3. Where is the argument weakest or least supported? 4. Are there places where I make claims without sufficient evidence or citation? 5. Where is the writing needlessly complex, with jargon that could be clearer? Please provide specific suggestions, not general comments.
Write or refine a paper abstract
Write a structured abstract for my paper on [topic]. Key information: - Research question / objective: [state it] - Methods: [brief description of approach] - Key findings: [2-3 main results] - Conclusions / implications: [what this means for the field] - Keywords: [5-7 terms] Structure the abstract as: Background (1-2 sentences) → Objective (1 sentence) → Methods (2-3 sentences) → Results (2-3 sentences) → Conclusions (1-2 sentences). Target length: [150-250 words]. Field conventions: [any specific requirements].
Structure a research introduction
Help me structure the introduction for my paper on [topic]. My research fills this gap: [describe the gap] My argument or hypothesis: [state it] Write an introduction outline that follows the "funnel" structure: 1. Opening hook: a compelling framing of why this research area matters 2. Current state of knowledge: what do we know (2-3 key points) 3. The gap: what we don't yet know or understand 4. Why this gap matters: theoretical or practical significance 5. This paper's contribution: what I do and why it helps 6. Roadmap: one sentence describing the paper structure Then write the first paragraph in full, applying your outline.
Draft a research discussion section
Help me draft the discussion section for my paper. My findings: [describe your key results] Research question: [state it] Prior literature I should engage with: [list key papers/theories] The discussion should: 1. Restate the key findings in relation to the research question (not just list results again) 2. Interpret what the findings mean: why did this happen? 3. Compare to prior literature: where do findings confirm, challenge, or extend existing work? 4. Address limitations honestly (not defensively) 5. Explain theoretical and practical implications 6. Suggest specific future research directions (not generic "more research needed") Avoid starting every sentence with "The findings show..."; vary the sentence structure.
Interpret statistical results
Help me interpret these statistical results from my study: Analysis type: [e.g., regression, ANOVA, chi-square] Results: [paste your output table or describe results: F/t/χ² value, p-value, effect size, confidence intervals] Research question: [what were you testing?] Help me: 1. Explain what these results mean in plain language 2. Assess statistical significance and practical significance separately 3. Identify whether the effect size is meaningful for this type of research 4. Note any assumptions I should check given these results 5. Suggest how to report these results in the appropriate format for my field 6. Flag any interpretive claims I should avoid making with this data
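For point 2, it helps to compute the effect size yourself before asking for interpretation, so the model is reasoning from a number you verified. A sketch of Cohen's d for two independent groups, assuming the equal-variance pooled form; swap in Hedges' g or another estimator if your field prefers it, and note the sample data here is invented:

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference scaled by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Illustrative data only: two small independent samples
control = [4.0, 5.0, 6.0, 5.0]
treatment = [6.0, 7.0, 8.0, 7.0]
d = cohens_d(treatment, control)  # positive: treatment scored higher
```

Pasting the computed d alongside the p-value lets the model address statistical and practical significance as the separate questions they are.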
Qualitative coding framework development
I'm developing a coding framework for qualitative analysis of [data type: interviews / documents / observations] on [topic]. My research question: [state it] Theoretical framework: [e.g., grounded theory / thematic analysis / discourse analysis] Here are excerpts from my data: [paste 3-5 illustrative quotes or descriptions] Help me: 1. Develop initial codes that describe what's happening in the data 2. Suggest higher-order themes that might group related codes 3. Check whether the codes are mutually exclusive and collectively exhaustive 4. Identify tensions or contradictions in the data that resist neat coding 5. Note what my theoretical framework would predict vs what I'm actually seeing
Plan data visualisation for a paper
I need to present this data visually in an academic paper: Data type: [quantitative / qualitative / mixed] Key findings to visualise: [describe 3-4 results] Journal requirements: [colour / black-and-white / figure limit] For each finding, recommend: 1. The most appropriate chart/figure type and why 2. What to put on each axis (for quantitative) or how to structure it (for qualitative) 3. What to emphasise visually to make the finding clear 4. Whether one combined figure or multiple separate figures is clearer 5. The figure caption text (should describe what to see, not just what the figure is) 6. Common mistakes to avoid for this type of visualisation in academic journals
Write a rigorous methods section
Help me write the methods section for my study on [topic]. Study design: [e.g., RCT / case study / survey / experiment] Participants/sample: [how selected, n=, inclusion/exclusion criteria] Procedure: [what you did, in order] Measures/instruments: [what you measured and how] Analysis approach: [how data was analysed] The methods section should: 1. Be detailed enough to be replicated by another researcher 2. Justify key methodological choices (why this design, why this sample size) 3. Address potential validity threats and how you mitigated them 4. Note ethical approval and consent procedures 5. Follow conventions for [your discipline/journal style] Flag any gaps in the information I've given you that a reviewer would question.
Research proposal narrative
Help me write a compelling research proposal narrative for [grant type / funder]. My research: [brief description] Research question: [state it] Why it matters: [significance, theoretical and practical] My approach: [brief methodology] My qualifications: [why I'm the right person to do this] Write the narrative sections: 1. Significance (why does this matter to this funder specifically?) 2. Innovation (what's genuinely new about this?) 3. Approach (concise, credible, addresses likely reviewer concerns) 4. Impact (specific outcomes, not vague claims) Funder priorities: [what this grant scheme values; check the funding call] Word limit per section: [if specified]
Research aims and objectives
Help me write specific, measurable research aims and objectives for my study on [topic]. Overall purpose: [broad goal] My research question(s): [list 1-3] My approach: [brief description of methods] Write: 1. One overarching aim (broad statement of purpose) 2. 3-4 specific objectives (concrete, measurable, achievable steps that together achieve the aim) 3. For each objective: the action verb, what you'll do, and what you'll produce 4. Check: do the objectives collectively address the research question? Any gaps? 5. Identify any objective that might be too ambitious for the scope and timeline
Craft a research impact statement
Write a research impact statement for my work on [topic].
Research summary: [brief description of what you're doing]
Primary audience for impact: [academic / policy / practice / public]
Potential beneficiaries: [who benefits from this knowledge?]
Timeframe: [when might impacts be realised?]
The impact statement should:
1. Describe academic impact: how does this advance the field?
2. Describe practical/societal impact: who benefits and how?
3. Be specific about mechanisms: how does knowledge get used? By whom?
4. Avoid generic claims ("will benefit society"); name specific actors and decisions
5. Be honest about uncertainty: what has to happen for the impact to be realised?
Format for: [REF impact case study / grant application / public engagement / all three]
Respond to peer review comments
Help me respond to these peer review comments on my paper: [paste reviewer comments] My paper is about: [brief description] For each comment, help me: 1. Categorise it: major revision, minor revision, or clarification needed 2. Assess validity: is the reviewer correct, partially correct, or missing something important? 3. Draft a response: acknowledge the point, state what change I'll make (or why not), and explain how the paper is now better 4. Write the revision text for comments that require new content Key principle: reviewers are rarely wrong that something is confusing; even if you disagree with their solution, address the underlying confusion.
Plain language summary of research
Write a plain language summary of my research for a non-specialist audience. My research: [paste abstract or key findings] Target audience: [general public / policy makers / practitioners / journalists] Key message I want to convey: [the main takeaway] The summary should: 1. Open with why someone who doesn't work in this field should care 2. Explain the research question in one sentence without jargon 3. Describe the approach in terms of what you did, not methodological categories 4. State the findings in plain English; no statistics unless essential 5. Explain what this means in practice (what should people do differently?) Maximum: 300 words. No passive voice. No disciplinary jargon without explanation.
Conference abstract submission
Write a conference abstract for [conference name / type] on my research about [topic]. Paper title (working): [title] Research question: [state it] Method: [brief] Key findings: [2-3 results] Contribution: [what this adds] Word limit: [X words] Format required: [structured with headings / unstructured paragraph] The abstract should: 1. Hook the programme committee in the first sentence 2. Clearly state what you did and found (not just what you aimed to do) 3. Articulate the contribution specifically: why does this conference need to hear this? 4. Use the keywords that will attract the right audience 5. End with a clear statement of theoretical or practical implications
Research blog post for broader audience
Write a research blog post based on my paper on [topic]. Core finding: [your main result in plain language] Why it matters: [practical or theoretical significance] Target readers: [academics in adjacent fields / practitioners / policy audience] Length: 600-800 words. Tone: accessible but substantive. Structure: 1. Opening hook: a concrete scenario, surprising fact, or question that draws them in 2. The puzzle: what question does the research address and why is it hard to answer? 3. What we did: brief, jargon-free methods 4. What we found: the key result with one illustrative example 5. Why it matters: who should change what because of this? 6. Limitations and next steps: honest and brief 7. Closing: memorable takeaway, not generic summary
Academic social media posts
Write social media posts about my research on [topic] for these platforms: 1. Twitter/X thread (3-5 tweets): Hook tweet → key finding → implication → where to read more 2. LinkedIn post (200-250 words): Professional narrative with context, finding, and call to action 3. Bluesky post (280 characters): Most surprising or counterintuitive finding 4. Research summary for ResearchGate / Academia.edu (100 words): Clear summary for academic peers For each platform, match the tone to the audience. Use hashtags where appropriate. Key finding: [state your main result] Link to paper: [URL if available]
Efficient paper reading strategy
I have [X] papers to read on [topic] in [timeframe]. Help me read them efficiently. Papers: [list titles/authors or paste abstracts] My goal: [what do you need from this reading: background, specific methods, evidence for argument?] Create a reading strategy: 1. Categorise: which papers are essential vs useful vs background? 2. Reading depth: which to read fully vs skim vs read abstract only? 3. Reading order: which to read first to build the right foundation? 4. What to extract from each: create a note-taking template tailored to my goal 5. Time estimate: how long each depth level takes and whether [timeframe] is realistic
Structured research note template
Create a research note template for papers I'm reading on [topic]. My research question: [state it] Fields to capture for each paper: - Full citation (formatted for [citation style]) - Research question and context - Methodology (in 2-3 sentences) - Key findings (3-5 bullet points) - Methodological limitations - Relevance to my research: [how does this connect?] - Quotable passages (with page numbers) - Questions raised / follow-up reading suggested - My critical assessment (in 2-3 sentences) Format this as a Markdown template I can use for every paper.
Academic writing schedule and accountability
Help me build a realistic writing schedule for [academic project β paper / thesis / book chapter]. Deadline: [date] Current state: [draft stage / outline / notes only] Available writing time: [hours per week, on which days] My biggest writing obstacle: [e.g., procrastination, perfectionism, unclear argument] Create: 1. A milestone plan working backwards from the deadline (first draft, revisions, final) 2. Weekly writing targets that are specific (e.g., "draft methods section, 800 words") 3. Strategies for my specific obstacle 4. A daily writing routine recommendation (when, for how long, in what conditions) 5. What to do when I miss a scheduled session β how to recover without derailing
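The backwards milestone plan in step 1 is simple date arithmetic you can sanity-check yourself before committing to it. A sketch with invented stage names, durations, and deadline:

```python
from datetime import date, timedelta

def milestones(deadline, stages):
    """Work backwards from the deadline, allotting each stage its weeks
    so the last stage ends exactly on the deadline."""
    plan, end = [], deadline
    for name, weeks in reversed(stages):
        start = end - timedelta(weeks=weeks)
        plan.append((name, start, end))
        end = start
    return list(reversed(plan))

# Stage names and durations are illustrative; adjust to your project
plan = milestones(date(2026, 6, 1),
                  [("first draft", 4), ("revisions", 3), ("final polish", 1)])
```

If the computed start date for the first stage is already in the past, the plan (not the deadline) is what needs renegotiating.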
Prepare for a supervision or advisor meeting
Help me prepare for my [PhD supervision / advisor meeting / thesis committee meeting]. Meeting date: [date] Current work status: [brief description of where I am] Progress since last meeting: [what I've done] Current roadblocks: [what I'm stuck on] Questions I need answered: [list them] Prepare: 1. A one-page progress summary my supervisor can read in 3 minutes 2. A clear agenda for the meeting (what decisions or feedback I need) 3. How to frame the roadblocks constructively (not as complaints β as specific questions) 4. What I should have ready to show or demonstrate 5. The single most important thing I need from this meeting
Most journals and institutions now require disclosure of AI tool use. Draft a disclosure statement early and adapt it to your publisher's requirements.
Always check the specific AI policy of your target journal, conference, or institution before submission. Policies vary widely and are evolving rapidly.
| Research task | Recommended tools |
|---|---|
| Literature search and summarisation | Elicit, Consensus, Perplexity, ChatGPT with web browsing |
| Writing and editing manuscripts | Claude, ChatGPT, Grammarly, Writefull for academic style |
| Statistical analysis and coding | ChatGPT Code Interpreter, Claude, GitHub Copilot for R/Python |
| Grant writing and funding | Claude, ChatGPT for drafting sections, Grantable for grant-specific language |
| Citation management | Zotero with AI plugins, Connected Papers for visual citation mapping |
| Research communication and outreach | Claude for plain-language summaries, Canva AI for research posters |