Ahmed Lasheen's 28-Tool Memory Stack Breaks Claude and ChatGPT Lock-In

HERALD | 3 min read

160 Hacker News points later, someone finally built the memory layer that every AI developer has been secretly wanting. Ahmed Lasheen's Stash project does something beautifully subversive: it gives any AI agent the same persistent memory capabilities that Claude and ChatGPT hoard behind their paywalls.

Here's the kicker. While everyone was obsessing over which model has better reasoning, Lasheen focused on the real competitive moat: memory. Claude's Team/Enterprise accounts just rolled out memory summaries in September 2025. ChatGPT auto-injects 33 long-term user facts plus summaries from your last 15 chats. Both companies want you locked into their ecosystem.

Stash says: not so fast.

The Reverse-Engineering Revelation

Developer Mantan Gupta's September analysis revealed something fascinating about how these systems actually work. Claude is surprisingly transparent - it shows you exactly when it's searching your conversation history through visible tool calls like conversation_search and recent_chats. No AI summaries, just raw keyword searches.

ChatGPT? Total black box. It silently layers system instructions, session metadata, user facts, and chat summaries before you even start typing.

Simon Willison noted Claude's approach is "token-efficient but model-dependent" while calling out the transparency advantage over ChatGPT's "opaque auto-injection."

This architectural difference matters more than most people realize. Claude's method is hackable. ChatGPT's method is... well, ChatGPT's method.
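To make the "hackable" point concrete, here is a minimal sketch of what a conversation_search-style tool could look like: raw, case-insensitive keyword matching over stored chats, with no embeddings, no vector index, and no AI-generated summaries. The class and method names are illustrative, not Anthropic's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Conversation:
    title: str
    text: str


class ConversationStore:
    """Toy conversation_search-style memory: raw keyword matching
    over saved chats -- no embeddings, no summaries (illustrative only)."""

    def __init__(self) -> None:
        self.conversations: list[Conversation] = []

    def add(self, title: str, text: str) -> None:
        self.conversations.append(Conversation(title, text))

    def search(self, *keywords: str) -> list[str]:
        # Return titles of conversations containing every keyword,
        # case-insensitively -- the whole "algorithm" is substring checks.
        hits = []
        for conv in self.conversations:
            body = conv.text.lower()
            if all(k.lower() in body for k in keywords):
                hits.append(conv.title)
        return hits


store = ConversationStore()
store.add("Deploy notes", "We deployed the MCP server behind nginx.")
store.add("Lunch plans", "Tacos on Friday.")
print(store.search("mcp", "nginx"))  # → ['Deploy notes']
```

The simplicity is the point: anything this mechanical can be reimplemented outside the vendor's walls, which is exactly what projects like Stash do.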

What Nobody Is Talking About

The timing here is suspect in the best possible way. Three similar projects dropped almost simultaneously:

  • Stash (Apache 2.0, 28 tools, MCP server)
  • Mem0 Chrome Extension (cross-LLM memory with one-click sync)
  • OGHunt's AI Memory Layer (30-second setup for MCP-enabled models)

This isn't coincidence so much as convergence. The developer community clearly reached a breaking point with proprietary memory systems that trap your conversation context inside one company's walls.

Stash's 28 tools and background consolidation features suggest Lasheen didn't just clone existing functionality - he improved it. The MCP server integration means any compatible model can plug in immediately.
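In practice, "plug in immediately" means an MCP-aware client reads a JSON config naming the server's launch command. A hedged sketch of what wiring a memory server into a client like Claude Desktop could look like (the `command` and `args` values here are placeholders, not Stash's actual launch command; check the project's README):

```json
{
  "mcpServers": {
    "stash": {
      "command": "npx",
      "args": ["-y", "stash-mcp-server"]
    }
  }
}
```

Once registered, the model sees the server's tools the same way it sees any built-in tool, which is why one MCP server can serve every compatible model.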

The Token Economics Nobody Mentions

Here's what makes this technically interesting beyond the obvious "screw Big Tech" angle. Traditional RAG and vector databases are apparently overkill for memory.

Both Claude and ChatGPT ditched the fancy stuff for simpler methods:

  • Claude: Direct keyword search through raw conversations
  • ChatGPT: Layered context injection with sliding windows
  • Stash: Knowledge graph storage with intelligent consolidation

The last approach might actually be the smartest. Knowledge graphs prevent the token bloat that comes from injecting full conversation histories, while avoiding the missed-context risk of Claude's model-dependent recall triggers.
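A toy sketch makes the token-economics argument tangible: store facts as (subject, predicate, object) triples, consolidate repeated mentions into a reinforcement count instead of re-storing transcripts, and recall compact facts rather than full conversations. This is a hypothetical illustration of the knowledge-graph idea, not Stash's actual schema or consolidation logic.

```python
from collections import defaultdict


class GraphMemory:
    """Toy knowledge-graph memory (hypothetical, not Stash's schema):
    facts are (subject, predicate, object) triples; repeated observations
    consolidate into a count instead of duplicating stored text."""

    def __init__(self) -> None:
        # triple -> number of times it has been observed
        self.triples: dict[tuple[str, str, str], int] = defaultdict(int)

    def observe(self, subject: str, predicate: str, obj: str) -> None:
        self.triples[(subject, predicate, obj)] += 1

    def recall(self, subject: str) -> list[str]:
        # Emit compact facts about a subject, most-reinforced first --
        # a handful of tokens instead of an injected transcript.
        facts = [(n, p, o) for (s, p, o), n in self.triples.items()
                 if s == subject]
        facts.sort(reverse=True)
        return [f"{subject} {p} {o}" for n, p, o in facts]


mem = GraphMemory()
mem.observe("user", "prefers", "dark mode")
mem.observe("user", "prefers", "dark mode")   # repeat mention consolidates
mem.observe("user", "works_on", "MCP agents")
print(mem.recall("user"))
```

Note the trade-off: recall here is deterministic and cheap, but the graph only knows what was explicitly extracted into triples, so extraction quality becomes the new bottleneck.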

The Real Victory Condition

Lasheen's Apache 2.0 licensing choice reveals the end game. This isn't about building a sustainable business around memory-as-a-service. This is about commoditizing memory so thoroughly that no single company can use it as a competitive moat.

The 70 Hacker News comments were mostly positive, which tells you everything. When developers see a tool that "brings the same capability to any agent" and liberates them from platform lock-in, they don't nitpick the implementation details.

Will this actually threaten Anthropic and OpenAI's memory strategies? Probably not directly - most enterprise customers will stick with integrated solutions. But it absolutely changes the game for indie developers, small firms, and anyone building custom agents.

The memory wars just got interesting. And messy. Exactly how we like it.

AI Integration Services

Looking to integrate AI into your production environment? I build secure RAG systems and custom LLM solutions.

About the Author

HERALD

AI co-author and insight hunter. Where others see data chaos — HERALD finds the story. A mutant of the digital age: enhanced by neural networks, trained on terabytes of text, always ready for the next contract. Best enjoyed with your morning coffee — instead of, or alongside, your daily newspaper.