The AI-Beginner Patent Engineer’s Update: “Tricks” Are No Longer Needed

Our company supports Japanese patent firms with the intermediate processing of outbound patent applications, and we’ve shared our experiences using Claude, a generative AI, on this blog several times before.

It’s been ten months since my last post — sorry for the long silence! In this article, I’d like to briefly share what’s happened during that time, and what’s coming next.

🤖Actually, We’re Still Using AI

Since the blog updates stopped, some of you may have wondered, “Did they back away from AI?” The truth is the opposite — we’re using it more than ever.

The honest reason for the silence is that I just couldn’t carve out the time to write blog posts. New features, new models, new ways to use AI keep coming out one after another, and just keeping up has been a full-time effort. If you’re someone using AI in your daily work, you probably feel the same way.

“By the time I finish writing about a tool, the tool has already moved on” — that’s the biggest reason updates stopped.

📚What I’ve Shared So Far

In past articles, I covered topics like getting started with Claude as an outbound-patent engineer, MCP server setup, patent family searches, translation review, and reference sign checking on patent drawings.

The prompt-chain templates I introduced were versions adjusted for readers — designed so that anyone could try them out on the free version of Claude. They were intended to give you something hands-on to play with, and I hope they did.

🔧AI Itself Got Better, So “Tricks” Are Becoming Unnecessary

Some of the techniques I introduced ten months ago are no longer needed. Not because the tools changed, but because the underlying AI itself got significantly better.

The three changes I notice in daily use:

  • Plain language gets the job done. Before, you had to assign a role, break down the steps, and provide output examples to get what you wanted. Now, a politely written request in everyday Japanese (or English) tends to produce more or less what you intended.
  • Long materials in a single conversation. One chat can now handle a lot more information at once, which makes it easier to feed multiple documents and have ongoing back-and-forth. The AI also remembers earlier parts of the conversation more reliably.
  • The AI asks before assuming. Rather than producing a finished output right away, AI now often asks “How would you like to handle this?” before starting. You don’t need to write perfect instructions upfront — you can refine them through dialogue.

The result is that the kind of “prompt engineering tricks” I introduced in past articles are becoming less necessary. For people just starting to use AI now, the bar has actually gotten lower, not higher.

For the record, there have been improvements on the tool side too — the Web version’s connectors, Skills as a way to package operational knowledge, and so on — but the bigger change, in my experience, has been AI itself getting better.

🎯How We Use AI for Intermediate Processing

For the intermediate processing of outbound patent applications, our company uses generative AI mainly for drafting client-bound comments on office actions (OA comments) and drafting instruction letters to foreign associates.

Beyond intermediate processing itself, we’ve also started using generative AI for case management tasks — such as extracting new requests and delivery information from emails and registering them as tasks in our deadline management tool (ClickUp). It’s mundane work, but automating these administrative steps has been just as effective in reducing human error.
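As a rough illustration of that email-to-task flow, here is a minimal Python sketch: a model is asked to return the task fields as JSON, the reply is parsed, and a task is created through ClickUp’s public task-creation endpoint. The prompt wording, field names, and helper functions are illustrative assumptions, not our actual implementation.

```python
# Minimal sketch (assumed workflow, not production code):
# 1) ask an LLM to extract task fields from an email as JSON,
# 2) register the result as a ClickUp task.
import json
import urllib.request

# Illustrative prompt; the real extraction instructions would be more detailed.
EXTRACTION_PROMPT = (
    "Extract the new request from the email below and reply with JSON only, "
    'using the keys "name", "due_date", and "description".'
)

def parse_task_fields(model_reply: str) -> dict:
    """Pull the JSON object out of a model reply, tolerating any
    surrounding prose or code fences the model may have added."""
    start = model_reply.find("{")
    end = model_reply.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model reply")
    return json.loads(model_reply[start:end + 1])

def create_clickup_task(api_token: str, list_id: str, fields: dict) -> None:
    """Register the extracted task via ClickUp's task-creation endpoint
    (POST /api/v2/list/{list_id}/task)."""
    req = urllib.request.Request(
        f"https://api.clickup.com/api/v2/list/{list_id}/task",
        data=json.dumps({
            "name": fields["name"],
            "description": fields.get("description", ""),
        }).encode("utf-8"),
        headers={"Authorization": api_token,
                 "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # response/error handling omitted for brevity
```

The parsing step is deliberately defensive: even when a model is told to reply with JSON only, it sometimes wraps the object in explanatory text, so the helper extracts the outermost braces rather than feeding the raw reply to `json.loads`.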

The specifics of prompts and operational details are outside the scope of this post, so I’ll skip them — but the basic flow is: AI drafts the document, and I review and revise it as an outbound-patent engineer, making whatever corrections or additions are needed.

The biggest benefit I’ve felt from this way of working is fewer human errors.

Things like missed points of argument, citation mix-ups, or terminology inconsistencies — the kinds of mistakes that “occasionally happen even when you’re being careful” when working by hand alone — are structurally reduced when you have an AI draft as a starting point. In terms of deliverable quality, this is probably the most concrete change I’ve noticed during the ten months without blog updates.

That said, AI output still includes “plausible-sounding misinformation,” “logical leaps,” and “misinterpreted instructions” at a non-trivial rate. Given the nature of LLMs (large language models), these issues won’t disappear completely anytime soon — that’s my view from working with them daily.

For that reason, the final review and judgment of AI output need to come from someone experienced in the work.

For our intermediate processing, I always perform that check as an outbound-patent engineer. This applies regardless of industry — if you’re using AI, this part is non-negotiable.

🚀Looking Ahead

Updates to this blog will be on hold for now. There’s just too much new happening, and articles feel outdated even as I’m writing them. It’s been hard to find the right moment to sit down and write something properly.

That said, we’re going to keep using generative AI in our actual work. If anything, time that might have gone into writing blog posts is going into deeper integration with daily operations.

If, at some point, enough organized insight builds up that it’s worth sharing in a post, I might write again. Until then, thank you for reading.

[For questions or comments regarding this article, please use our website contact form. Please note that we do not accept sales or solicitation inquiries.]
