USPTO Clarifies §101 Examination for Software, AI & ML: A Playbook for Stronger Claims


Last updated: August 13, 2025
Author: Marcus Julius Zanon — IP & Compliance Counsel (MJZanon)


The Aug 4, 2025 USPTO memorandum reinforces practical §101 analysis for software—including AI/ML.

On August 4, 2025, the USPTO issued a memorandum to examiners in TCs 2100, 2600, and 3600 reinforcing how to evaluate
subject-matter eligibility under 35 U.S.C. § 101 for software inventions, including AI and ML. The memo does
not change policy; it emphasizes recurring issues in the Step 2A analysis of the Alice/Mayo framework and reminds examiners
when a §101 rejection is appropriate.

What the memo emphasizes (without changing policy)

  • “Mental process” is bounded: claim limitations that cannot practically be performed in the human mind should not be treated as mental steps.
  • “Recites” vs “involves” an exception: a claim that merely involves math (e.g., training a model) is different from one that recites specific mathematical relationships/algorithms.
  • Prong Two must assess the claim as a whole: focus on how limitations interact to integrate any exception into a practical application.
  • Technical improvement vs “apply it”: eligibility favors a particularized technological solution, not just instructions to apply an idea on a generic computer.
  • “Close-call” standard: a §101 rejection should be made only when ineligibility is more likely than not, with compact prosecution across §§101/102/103/112 in the first action.

How to leverage the guidance—strategy first

Treat Step 2A as an opportunity to articulate the invention's technical improvement and to demonstrate system-level interaction. Align the
specification and claims so the improvement is apparent to a person of ordinary skill, even if it is not spelled out verbatim in the claim text.

1) Keep AI/ML claims out of the “mental process” bucket

  • Highlight limitations that require non-mental operations: high-dimensional tensors, device-resident kernels, real-time constraints, network actions.
  • Draft (and argue) that the claimed operations cannot practically be performed in the human mind.

2) Use “recites vs involves” to avoid unnecessary abstraction

  • At Prong One, claims that involve machine learning at a functional level (e.g., “training a neural network”) need not recite a mathematical concept.
  • Avoid name-dropping specific algorithms (e.g., backpropagation with gradient descent) unless essential—doing so tends to recite math and invites abstraction analysis.

3) Win at Prong Two: show integration into a practical application

  • Explain how elements work together to deliver a concrete improvement—e.g., lower latency, higher throughput, improved accuracy, network security actions.
  • Describe non-generic data structures, schedulers, buffers, DMA/NIC actions, or device constraints that effectuate the improvement.

4) Use the “close-call” standard during prosecution

  • When a §101 rejection is tentative or conclusory, invoke the memo’s “more likely than not” threshold and insist on a preponderance-based rationale.
  • Keep the record compact by addressing §§102/103/112 alongside §101 and ensure all dependent claims are examined in the first action.

AI Examples to cite (July 2024)

The USPTO’s AI-focused Examples (Nos. 47–49) illustrate how Prong One and Prong Two apply to ML scenarios:

  • Example 47, Claim 2 (ineligible): expressly recites algorithms (e.g., backpropagation, gradient descent) and is treated as mathematical calculations with generic "apply it" steps.
  • Example 47, Claim 3 (eligible): integrates the abstract ideas into a practical application that improves network security (e.g., automatically dropping malicious packets and blocking traffic in real time).

Drafting & response checklist (copy/paste)

  • Claim the technical improvement: tie limitations to quantifiable effects (latency, memory/IO, accuracy, security) under explicit device or real-time constraints.
  • Show interactions: model ↔ scheduler ↔ buffer ↔ device/NIC/cache; emphasize how these combined interactions deliver the improvement.
  • Prefer structures over equations: describe data layouts and pipelines; avoid math names unless indispensable.
  • Dependent sets: hardware binding; scheduling/quantization modes; resilience/security actions; auditability and logs (for governance).
  • OA responses: (i) rebut “mental process” by explaining impracticability; (ii) for Prong Two, walk through the claim as a whole and the specific mechanisms that effectuate the improvement; (iii) use the “close-call” standard where applicable.

Primary sources (official)

  • Open the USPTO Memorandum (PDF)
