Tinderbox goes AI

Started by Paul Korm on 8/12/2025
Paul Korm 8/12/2025 9:22 pm
Looks like Eastgate is working on a new version of Tinderbox that uses the MCP framework to integrate Claude desktop to a Tinderbox document. Might be interesting and useful. Might be an attention sink. One day we'll have brave developers who assure us there is no reason to use their product with LLMs.
MadaboutDana 8/13/2025 2:36 pm
Indeed. It’s worth reading this article for tech insights into exactly how rushed and unprofessional MCP is: https://julsimon.medium.com/why-mcps-disregard-for-40-years-of-rpc-best-practices-will-burn-enterprises-8ef85ce5bc9b

Paul Korm wrote:
Looks like Eastgate is working on a new version of Tinderbox that uses
the MCP framework to integrate Claude desktop to a Tinderbox document.
Might be interesting and useful. Might be an attention sink. One day
we'll have brave developers who assure us there is no reason to use
their product with LLMs.
eastgate 8/16/2025 8:55 pm
I think I have a decent record of directing my attention! In any case, MCP support is (a) not terribly difficult, and (b) absolutely fascinating. I’m writing a series at https://markBernstein.org/ about (b).

Julien Simon’s piece conflates all the many scenarios for cooperation. Yes: CORBA was fascinating! Have you opened anything with OpenDoc lately? Sure, SOAP was great. Absolutely, banking apps need lots of care.

But the lesson of the Web — really the lesson of the 21st century — is that if you engineer everything as if it were a bank, you'll easily be out-maneuvered and outrun. I say this with some reluctance: I was on the program committee that rejected the initial research paper on WWW because its treatment of dangling links — 404s — was superficial and ignored lots of good prior work. Berners-Lee and Caillau were right, though: the simplicity of the protocol really matters for adoption.

I write a tool for analyzing and visualizing notes, and I’m writing a book about reimagining the intellectual position of Computing. I’m a guy with lots of questions. Claude is astonishingly good at locating good sources. Not perfect, but no research assistant is. It is extraordinarily well read. It can read a crash log. It can read a man page. It can find the best reference for Nero’s rotating dining room, which is something I actually needed for the book the other day.

I think that’s promise enough to merit a few days of development work.
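For what it’s worth, part of why MCP support is "not terribly difficult" is that the wire format is plain JSON-RPC 2.0. The toy dispatcher below is only a sketch in that spirit, not the real SDK: the "get_note" tool and the in-memory notes dict are invented for illustration, and the actual protocol defines its own methods (tools/list, tools/call) rather than dispatching on tool names directly.

```python
import json

def make_dispatcher(tools):
    """tools: dict mapping a tool name to a callable(**params)."""
    def handle(raw):
        req = json.loads(raw)
        rid = req.get("id")
        try:
            # Look up the requested method and call it with its params.
            result = tools[req["method"]](**req.get("params", {}))
            resp = {"jsonrpc": "2.0", "id": rid, "result": result}
        except KeyError:
            # Standard JSON-RPC error code for an unknown method.
            resp = {"jsonrpc": "2.0", "id": rid,
                    "error": {"code": -32601, "message": "Method not found"}}
        return json.dumps(resp)
    return handle

# Hypothetical tool: fetch a note's text from an in-memory "document".
notes = {"inbox": "Call the bank."}
handle = make_dispatcher({"get_note": lambda name: notes.get(name, "")})
```

A request like `{"jsonrpc": "2.0", "id": 1, "method": "get_note", "params": {"name": "inbox"}}` comes back with the note text in `result`; an unknown method gets the -32601 error. Everything else in a real MCP server is plumbing around this request/response loop.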

MadaboutDana wrote:
Indeed. It’s worth reading this article for tech insights into
exactly how rushed and unprofessional MCP is:
https://julsimon.medium.com/why-mcps-disregard-for-40-years-of-rpc-best-practices-will-burn-enterprises-8ef85ce5bc9b

Lucas 8/17/2025 3:17 pm
This is fascinating. Just a quick side note: I think Paul was referring to the attention of users rather than the attention of developers, and Claude seems to agree with me :-)


(My own view is that AI integration is both potentially valuable and probably important for staying relevant and competitive.)

Amontillado 8/17/2025 9:00 pm
AI is pretty amazing, particularly if your definition of "amazing" includes a tinge of terror.

Or at least humor.

Yesterday, I wrote a letter that weighed 0.995 ounces by my scales. It could easily read 1.005 ounces on a different scale.

The resolution of postal measurement was of critical interest, so I asked Google if I should put more postage on a 0.99 ounce letter. Would my measurement of 0.99 fall within tolerance if the post office saw a different weight?

Absolutely not, the Google AI bot told me. A first class stamp covers one ounce. Since 0.99 ounces is far more than one ounce, I'd have to lick more stamps.

So I asked the question a different way, got a different answer, but the conclusion was the same. Since 0.99 ounces is so much more than one ounce I would have to use more postage.

In that experience I think I saw the downfall of mankind.

I added postage. Not because I thought 0.99 ounces was more than one but because I didn't want to risk nondelivery.

Sensible, I thought, until I realized the AI's silly argument prevailed. Whether or not I believed 0.99 > 1, my actions paralleled a ludicrous conclusion.

If that's not the end of the world, it's at least the beginning of a lot of post AI political careers.

We're sunk.
tberni 8/17/2025 9:12 pm
🤣🤣🤣 I think so too!!

Amontillado wrote:
AI is pretty amazing, particularly if your definition of "amazing"
includes a tinge of terror.

Or at least humor.
Paul Korm 8/18/2025 2:47 pm
I'm often reminded that the notice at the bottom of most AI chats ("ChatGPT can make mistakes. Check important info.") is a modern take on "Abandon all hope, ye who enter here". When ChatGPT heads down the path of being mistaken, it becomes unable to stop messing up. I get into dialogs like: "eleven (11) letter word for inescapable facts". The answer: "certain". My reply: "no, it has to be 11 letters". The answer: "you're right, I apologize. Try 'sure'". And on and on, never getting to "ineluctable".

Amontillado wrote:
AI is pretty amazing, particularly if your definition of "amazing"
includes a tinge of terror.

Or at least humor.

satis 8/19/2025 1:18 am


Paul Korm wrote:
I'm often reminded that the notice at the bottom of most AI chats
("ChatGPT can make mistakes. Check important info.") is often a modern
take on "Abandon all hope, ye who enter here". When ChatGPT heads down
the path of being mistaken, it becomes unable to stop messing up. I
get into dialogs like "eleven (11) letter word for inescapable facts"
the answer "certain". My reply "no, it has to be 11 letters". The
answer "you're right, I apologize. Try 'sure'". And on and on, never
getting to "ineluctable".

It's incredibly frustrating sometimes, but other times it's quite amazing. I've been shopping for music-related electronics, and ChatGPT's answers to my questions about impedance and noise, and to my follow-ups, have been excellent, on par with or better than website results after I've done a lot of searching. But when I asked for comparisons to specific gear or alternatives, it gave recommendations that were mostly wrong and didn't meet my needs. And when I called it out for repeated mistakes, it acknowledged them, then confidently offered more bad advice.

As an experiment I gave it a lengthy, contentious Reddit discussion thread and asked it to summarize the thread, point out any logical lapses or bad arguments, and describe what and how one specific participant was arguing, and it produced a shockingly detailed analysis that I mostly agreed with. It misunderstood a portion of the discussion, and its analysis there was off-base, but interestingly so. It was incisive in analyzing the arguments in the thread and understood context in a way that confounded me.

When you consider how much it and other AIs like Perplexity (which can give better results than ChatGPT, especially when uploading images) have progressed in a matter of months, I think we underestimate what these technologies will be like in just another two years.

Recently Matt Growcoot uploaded a pic taken in Medellin while suspended in a gondola, with no obvious geological or topographical descriptors, and no sign of him being in a gondola, and ChatGPT guessed it all, including the gondola.

https://petapixel.com/2025/04/18/chatgpt-is-scarily-good-at-guessing-the-location-of-a-photo/

In other uploads it was slightly off, or even wrong. But the shockingly good results, with fast continued improvement, suggest we're rushing towards an inflection point of some kind, technologically, culturally, economically.
Paul Korm 8/19/2025 8:03 pm
Or, maybe Growcoot just happened to hit the sweet spot of ChatGPT's training data with that one image, while the other images fell outside it.

I think we tend to assume that because this chatting thing seems to be having an actual conversation with us, it must be "thinking", and because it hits the right answer (or close to it) often, it must "know" a lot of things. But it's still just a very fast, very expensive software trick with a lot of hard boundaries. In chats with ChatGPT or Claude I often run into spots where it becomes obvious that the bot is in a corner with no training data relevant to the comment I just made. These AIs rarely respond, simply, "I don't know". Instead, they extrapolate from the data at hand.

satis wrote:
Recently Matt Growcoot uploaded a pic taken in Medellin while suspended
in a gondola, with no obvious geological or topographical descriptors,
and no sign of him being in a gondola, and ChatGPT guessed it all,
including the gondola.

https://petapixel.com/2025/04/18/chatgpt-is-scarily-good-at-guessing-the-location-of-a-photo/

In other uploads it was slightly off, or even wrong. But the shockingly
good results, with fast continued improvement, suggest we're rushing
towards an inflection point of some kind, technologically, culturally,
economically.
Stephen Zeoli 8/20/2025 3:19 pm
Forgive me if I posted this quote here before, but to me this sums up what I want from AI:

"I don't want AI to love my kids for me, I want it to do the dishes."

In this quote, "to love my kids for me" is a metaphor for thinking. I don't want it to do the thinking for me. "Do the dishes" means mundane crap. As an example of the latter, I have been impressed with how the AI cleans up the emails I forward into the new Mem.ai.

Steve
exatty95 8/21/2025 1:51 pm
Steve, what are your thoughts about Mem.ai in its current iteration? It has always intrigued me, but I wasn't sure how I'd use it as a regular part of my workflow. Thanks for whatever thoughts you care to share.
Stephen Zeoli 8/22/2025 6:28 pm
I'm just at the infancy of my reboot with Mem.ai. So far I like it. I only have a bare minimum of notes in it, so I haven't had much of a chance to uncover flaws. It does a great job of assimilating emails forwarded to it, which is important to me. Writing a new note is nice, and relatively frictionless. But it is still in alpha, apparently, and is lacking a lot of functionality that I assume it will have one of these days. While it is free for the moment, I haven't seen any information about what a premium subscription will cost.

I'll provide more detail as I uncover it.

Steve

exatty95 wrote:
Steve, what are your thoughts about Mem.ai in its current iteration? It
has always intrigued me, but I wasn't sure how I'd use it as a regular
part of my workflow. Thanks for whatever thoughts you care to share.