
Can I Use ChatGPT for SBIR Grants? NIH's New AI Ban & What It Means for DoD, NSF Applications

  • josh84483
  • Jul 25, 2025
  • 10 min read

Updated: Jul 28, 2025


[Image: A robotic hand holding a fountain pen hovers over an NIH SBIR application form stamped 'REJECTED' in bold red letters. Image generated by ChatGPT.]

The landscape of federal grant writing changed forever on a quiet Tuesday in July 2025. The National Institutes of Health dropped a bombshell that sent shockwaves through the SBIR community: effective September 25, 2025, AI-generated content is banned from NIH grant proposals, and Principal Investigators are now limited to just six applications per calendar year.


But here's what most people are missing: this isn't just about the NIH. It could signal a seismic shift across all federal agencies in how SBIR and STTR proposals are written, reviewed, and won.


If you've been using ChatGPT, Claude, or other generative AI tools to write your grant proposals, it's time to reevaluate how you prepare your applications. Your company's entire approach to SBIR funding may need significant adjustments in this evolving landscape.



The NIH Bombshell: Understanding the New AI Policy

The NIH's announcement in Notice NOT-OD-25-132 doesn't mince words. The agency observed Principal Investigators submitting over 40 distinct applications in a single submission round—something virtually impossible without AI assistance. The response was swift and decisive.


What's Actually Banned

The NIH policy states that applications "substantially developed by AI" or containing "sections substantially developed by AI" will not be considered original ideas of applicants. But what does "substantially developed" mean in practice?


While the NIH hasn't provided exact percentages, the policy clearly targets proposals where AI tools like ChatGPT generate significant portions of the narrative, technical approach, or other core content. The agency is investing heavily in AI detection technology and warns that post-award detection could trigger research misconduct investigations.
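

The NIH hasn't said how its detection tooling works. For intuition, one common heuristic in the AI-detection literature is perplexity scoring: prose that a language model finds unusually predictable is more likely to be machine-generated. Here's a toy sketch of that idea; the model choice and threshold are illustrative assumptions, not NIH's method, and real detectors combine many signals and still misfire:

```python
# Toy perplexity heuristic for AI-text detection (illustrative only;
# NIH's actual methods are not public). Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing labels=input_ids makes the model return cross-entropy loss
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

sample = "Our approach leverages cutting-edge methods to address this critical gap."
# Unusually low perplexity is one weak signal of machine generation;
# the 20.0 cutoff here is a made-up illustration, not a real threshold.
print("flag for review" if perplexity(sample) < 20.0 else "no flag")
```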


The Six-Application Limit

Perhaps equally impactful is the new restriction limiting each Principal Investigator to six new, renewal, resubmission, or revision applications per calendar year across all council rounds. This applies to all activity codes except T activity codes and R13 Conference Grant Applications. (More on NIH Activity Codes)
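

To make the counting rule concrete, here's a minimal sketch of the logic as described in the notice. The data structures and names are my own for illustration, not anything NIH publishes:

```python
from dataclasses import dataclass

PI_ANNUAL_LIMIT = 6          # per NOT-OD-25-132
EXEMPT_CODES = {"R13"}       # R13 conference grants are exempt...
EXEMPT_PREFIX = "T"          # ...as are all T activity codes

@dataclass
class Application:
    pi: str
    activity_code: str  # e.g. "R01", "R43", "T32", "R13"
    kind: str           # "new", "renewal", "resubmission", or "revision" (all count)
    year: int           # calendar year of submission

def counts_toward_limit(app: Application) -> bool:
    return (not app.activity_code.startswith(EXEMPT_PREFIX)
            and app.activity_code not in EXEMPT_CODES)

def can_submit_another(apps: list[Application], pi: str, year: int) -> bool:
    """True if `pi` is still under the six-application cap for `year`."""
    n = sum(1 for a in apps
            if a.pi == pi and a.year == year and counts_toward_limit(a))
    return n < PI_ANNUAL_LIMIT
```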


For most researchers, this won't matter—the NIH notes that relatively few PIs currently exceed this threshold. But for those who do, it forces a complete strategic rethinking of their grant portfolio approach.


Enforcement and Consequences

The NIH has outlined specific enforcement measures for AI policy violations. If AI-generated content is detected after an award has been made, the agency may refer the matter to the Office of Research Integrity to determine whether research misconduct occurred.


Simultaneously, NIH may take enforcement actions including:

  • Disallowing costs

  • Withholding future awards

  • Grant suspension (wholly or in part)

  • Possible grant termination


These are the same enforcement tools NIH uses for other grant compliance violations, representing serious administrative and financial consequences that can significantly impact funded research projects.


Can I Use ChatGPT for SBIRs at Other Agencies?

So can I use ChatGPT for SBIR submissions to other agencies? The NIH announcement didn't happen in a vacuum; it appears to be part of a broader federal response to AI proliferation that's already showing signs across multiple agencies.


NSF Already Requires AI Disclosure

The National Science Foundation was actually ahead of the curve. They already require researchers to disclose any use of AI tools in their proposals—a policy that many interpreted as a warning shot about future restrictions.


NASA's Suspicious Disqualifications

Industry insiders have noted an unusual number of company disqualifications in recent NASA SBIR cycles. While NASA hasn't explicitly cited AI detection, the timing coincides with increased scrutiny of rapidly-submitted, similar-sounding proposals.


DoD's AI Ethical Framework

The Department of Defense adopted five AI Ethical Principles in 2020 that may conflict with AI-generated proposal content:


  1. Responsible: Exercise judgment and care while remaining responsible for AI system development and deployment

  2. Equitable: Minimize unintended bias in AI capabilities

  3. Traceable: Maintain transparent and auditable methodologies and data sources

  4. Reliable: Ensure safety, security, and effectiveness through comprehensive testing

  5. Governable: Design systems to detect and avoid unintended consequences


These principles suggest the DoD values human oversight and accountability—qualities that AI-generated proposals inherently lack. However, these principles likely need updating to address AI use in proposal writing itself. The DoD faces the same reviewer capacity challenges as other agencies amid the rising volume of SBIR submissions, but it also has additional national security concerns: the technologies it funds directly impact defense capabilities and often involve classified or sensitive information.


Similar policies may soon follow from the Department of Energy, Department of Agriculture, and other major funding agencies, because the underlying pressures are shared across the federal government:

  • Review system overload: The exponential increase in proposals is straining review capacity across all agencies

  • Quality degradation: AI-generated proposals often lack the depth, originality, and technical rigor that human experts provide

  • National security concerns: Sensitive technical information shared with AI tools creates potential security vulnerabilities



Why Federal Agencies Are Taking This Stand


The Review Cost Crisis

Federal agencies are drowning in proposals. The NIH alone receives over 80,000 research project grant applications annually, and that number has been growing rapidly. When Principal Investigators can suddenly submit 40+ applications per cycle using AI tools, the review system becomes strained.


Some agencies review proposals using peer review panels with multiple expert reviewers, requiring detailed technical evaluation and coordination across busy academic and industry schedules. As application volumes increase, both the cost and the time required for thorough review grow substantially, forcing agencies to either expand their review capacity or find ways to manage application volume.


Merit vs. Prompt Engineering

SBIR and STTR programs exist to identify and fund the most innovative technologies and capable research teams. When AI tools enable any company to generate professional-sounding proposals regardless of their actual technical expertise, the programs may struggle to distinguish between genuine innovation and sophisticated marketing.


The concern isn't just about fairness—it's about national competitiveness. Federal agencies need to fund the researchers and companies most likely to deliver breakthrough technologies, not those with the best ChatGPT prompts.

National Security and IP Protection

For agencies like DoD and NASA, many funded technologies fall under International Traffic in Arms Regulations (ITAR) or contain Controlled Unclassified Information (CUI). When researchers input sensitive technical details into commercial AI tools, they may inadvertently expose controlled or proprietary information to foreign adversaries.

Large language models retain training data and can potentially reproduce sensitive information in responses to other users. A notable example occurred in 2023 when Samsung engineers inadvertently leaked sensitive corporate data in three separate incidents within a month, including source code, internal meeting notes, and hardware-related data, after employees used ChatGPT to debug code and generate meeting minutes. For national security agencies, this risk is unacceptable.


Preserving Innovation Integrity

SBIR programs fund over $4 billion annually in early-stage research and development. These programs have historically been remarkably successful at identifying and nurturing breakthrough technologies, from GPS to modern internet infrastructure to many of today's AI innovations.


When AI tools commoditize proposal writing, agencies may struggle to identify the researchers and companies with genuine expertise and innovative thinking. The result could be funding allocation based on smooth writing rather than technical merit.


Why Your Company Should Care: The Hidden Risks of AI-Generated Proposals

Beyond the policy implications, companies using AI for proposal writing face significant business risks that extend far beyond potential rejection. Understanding these risks is crucial for protecting your intellectual property, reputation, and long-term funding prospects.


Intellectual Property Exposure

SBIR proposals require detailed discussions of your innovation and its underlying intellectual property to demonstrate technical merit and feasibility. When you input this proprietary technical information into ChatGPT, Claude, or similar tools to help articulate these concepts, you're potentially sharing your intellectual property with the broader AI training ecosystem. While most commercial AI services claim not to use customer data for training, the terms of service often contain exceptions, and data breaches remain possible.


For SBIR applicants developing cutting-edge technologies, this exposure could undermine patent applications, enable competitor intelligence gathering, or compromise trade secrets.
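

If your team does use AI tools for the legitimate research tasks covered later in this article, one cheap safeguard is a pre-flight scan that blocks text containing sensitive markers before it reaches any external API. A minimal sketch; the patterns and names are hypothetical, and no keyword filter substitutes for human review:

```python
import re

# Hypothetical markers; tailor this list to your own programs and data.
# A keyword scan catches only obvious leaks and never replaces human review.
SENSITIVE_PATTERNS = [
    r"\bCUI\b",
    r"\bITAR\b",
    r"\bexport[- ]controlled\b",
    r"\bproprietary\b",
    r"\bpatent[- ]pending\b",
]

def safe_for_external_ai(text: str) -> bool:
    """False if `text` matches any sensitive marker (case-insensitive)."""
    return not any(re.search(p, text, re.IGNORECASE) for p in SENSITIVE_PATTERNS)

query = "Summarize the market for our proprietary cryocooler design."
if not safe_for_external_ai(query):
    raise ValueError("Blocked: possible sensitive content in AI query")
```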


Reputation and Trust Damage

Grant reviewers are technical experts in their fields. They can often identify AI-generated content through subtle linguistic patterns, generic technical descriptions, or lack of deep domain insight. Being perceived as an applicant who relies on AI tools rather than genuine expertise can damage your credibility with review panels.


Security Vulnerabilities

Beyond IP concerns, AI-generated proposals may inadvertently include information that creates security vulnerabilities. AI tools don't understand classification levels, export control restrictions, or operational security requirements. They may suggest approaches or reveal capabilities that compromise sensitive programs.


Detection and Misconduct Consequences

AI detection technology is advancing rapidly, and agencies are investing heavily in these capabilities. Being caught submitting AI-generated content isn't just about proposal rejection—it can trigger formal misconduct investigations that destroy research careers and institutional relationships.


The NIH has made clear that post-award detection can result in grant termination and referral to the Office of Research Integrity. Similar consequences likely await at other agencies as they implement comparable policies.



Smart AI Usage: Research Tool, Not Writing Tool

This doesn't mean AI has no place in your grant development process. When used appropriately, AI tools can significantly enhance your research and preparation without violating agency policies.


Legitimate AI Applications


  1. Market and Technology Research: AI excels at helping you analyze your target market, assess the scope of commercial opportunity, and map potential commercialization pathways. Use it to research early adopters, identify key end-users and their specific pain points, understand stakeholder ecosystems, and evaluate market size and growth potential for your technology.

  2. Competitor Intelligence: AI can help you research competing technologies, identify key players in your space, and understand the current state of the art. This background research strengthens your positioning and helps you emphasize your technology's competitive edge over similar companies.

  3. End-User Analysis: Understanding your target customers and their specific needs is crucial for SBIR success. AI can help you research potential end-users, their pain points, and how they currently address these challenges.

  4. Solicitation Analysis: AI tools can help you parse complex solicitation requirements, identify key evaluation criteria, and understand what agencies are seeking. This ensures your proposal addresses all requirements and aligns with agency priorities (see the sketch after this list).

  5. Outline and Structure Development: AI can help you create proposal outlines and structure your arguments logically. This planning-phase use doesn't involve AI generating actual proposal content.

  6. Image Generation: For early-stage technologies that don't yet have professional photography or detailed schematics, AI image generation tools can create conceptual illustrations and diagrams.
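

As a concrete example of the solicitation-analysis use case above, here's a minimal sketch using the OpenAI Python SDK to turn a public solicitation into planning notes. The model name and prompt are illustrative assumptions; the point is that the AI output feeds your research, and humans still write every sentence of the proposal:

```python
# Sketch: AI as a planning aid, never a writing tool.
# Assumes the OpenAI Python SDK (pip install openai) with an API key in
# the OPENAI_API_KEY environment variable. Only feed it public
# solicitation text; never proprietary or controlled material.
from openai import OpenAI

client = OpenAI()

def summarize_criteria(solicitation_text: str) -> str:
    """Bullet-point summary of a public solicitation's evaluation criteria."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": ("Extract the evaluation criteria and key requirements "
                         "from this SBIR solicitation as bullet points.")},
            {"role": "user", "content": solicitation_text},
        ],
    )
    return response.choices[0].message.content
```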


What AI Cannot Replace


Authentic Voice and Expertise: Grant reviewers are looking for evidence that your team has deep domain knowledge and genuine insight into the technical challenges you're addressing. AI-generated content lacks the authentic voice and nuanced understanding that comes from living and breathing the solution you are building.


Original Research Insights: Successful SBIR proposals often include novel approaches, unexpected connections between fields, or innovative applications of existing technologies. These insights come from human creativity and expertise, not AI pattern matching.


Stakeholder Relationships: As AI restrictions spread, shift your emphasis from high-volume proposal output to the human side. Build relationships with potential customers and stakeholders at the agency you're applying to for funding, and spend time meeting with them to understand their challenges so you can tailor a solution that hits their pain points head-on. These human connections, and the insights they provide, cannot be generated by AI tools.



Best Practices for the New Landscape


Strategic Application and Development Approach

With the NIH's six-application limit now in effect and similar restrictions likely coming to other agencies, the days of "spray and pray" grant applications are over. Strategic planning becomes crucial for maximizing your funding opportunities while ensuring quality and compliance.


This means fundamentally shifting from volume to precision, focusing your efforts on opportunities where you have genuine competitive advantages and strong technical approaches. However, to maintain a robust pipeline despite individual agency limits, you can extend your reach across multiple agencies.


Focus on the narrative and real-life examples that only you can bring as a human domain expert—authentic experiences and insights rooted in reality.


Akela Consulting: Human Excellence in Grant Writing

At Akela Consulting, we've always believed that the best proposals come from the intersection of deep technical knowledge, end-user understanding, and expert communication. Our human-centered approach focuses on understanding what makes your technology unique and translating it into compelling narratives that win funding.


Ready to navigate the new landscape of federal grant writing? Contact Akela Consulting to help win SBIR funding for your company.




Frequently Asked Questions (FAQ)


Can I use ChatGPT to write my NIH grant proposal?

No. As of September 25, 2025, the NIH explicitly prohibits applications that are "substantially developed by AI" or contain "sections substantially developed by AI." Using ChatGPT to write significant portions of your proposal text violates this policy and can result in serious consequences including research misconduct investigations.

Is it allowed to use AI tools for SBIR applications?

It depends on the agency and how you use the tools. The NIH has banned AI-generated content, and other agencies are likely to follow. However, using AI for research, analysis, and planning—rather than content generation—may still be acceptable. Always check current agency policies before using any AI tools.

What are the rules for using generative AI in DoD or NSF grant submissions?

The DoD hasn't issued explicit AI restrictions yet, but its AI Ethical Principles suggest similar policies may be coming. The NSF currently requires disclosure of AI tool usage. Both agencies may well implement restrictions similar to the NIH's in the near future.

How do I avoid violating CUI rules when using AI for SBIR?

Never input Controlled Unclassified Information (CUI) or any sensitive technical details into commercial AI tools. Use AI only for general research and planning activities that don't involve proprietary or sensitive information. When in doubt, consult your security officer or avoid AI tools entirely.

Can I use AI to write a Phase I SBIR proposal?

Not for NIH proposals as of September 25, 2025. Other agencies haven't implemented explicit bans yet, but similar restrictions are expected. Even where not explicitly prohibited, using AI for proposal writing carries significant risks including detection, reputation damage, and potential IP exposure.

What are the restrictions on AI-generated grant text?

The NIH prohibits applications or sections "substantially developed by AI." While they haven't defined exact percentages, the policy clearly targets proposals where AI generates significant narrative, technical, or other core content. Other agencies are expected to implement similar restrictions.

Will my SBIR application be rejected if I use ChatGPT?

For NIH applications submitted after September 25, 2025, yes—AI-generated content violates their explicit policy. For other agencies, AI use may not result in immediate rejection but carries risks including detection, credibility damage, and potential future policy violations as restrictions spread.

What are the best practices for using AI in federal proposal writing?

Use AI for research, analysis, and planning only—never for content generation. Focus on market research, competitor analysis, solicitation requirements review, and structural planning. Always keep sensitive or proprietary information away from AI tools, and ensure all proposal content comes from human experts.

 
 