AI-infused
Started by satis
on 9/15/2025
Amontillado
9/30/2025 3:17 am
My concern is the validity of training material. I can be very careful, but the next guy won't be. Experience, or maybe paranoia, suggests I'll be outrun, hobbled by my conservative nature and marginalized by what should be lesser sources.
Here are my thoughts, inspired by LinkedIn's recent announcement it will exfiltrate user data to Microsoft for the purpose of AI training: https://thirdreef.wordpress.com
I hope it's amusing enough to take the edge off of any offense. None is intended.
eastgate
9/30/2025 2:20 pm
“I can be very careful, but the next guy won’t be.”
This argument is not new. It was adduced, for example, to show that students ought not to be permitted to use libraries, because they might make injudicious use of texts that they did not fully understand and this could lead them to adopt heretical views. It has more recently been adduced to argue that people should not be permitted to install software without official authorization.
If you're interested in knowledge representation, or reasoning, or personal knowledge management, or note-taking, I don’t think you can stand aside.
Stephen Zeoli
9/30/2025 4:10 pm
It is not only reasonable to be skeptical of new technology, it is irresponsible not to be. This doesn't mean no new technology; it just means the burden of proof should be shared with the developers of the technology. Chemists derided Rachel Carson as a hysterical woman until, oops, maybe DDT is bad for the environment. Had more thought and research gone into the ideal way to power automobiles, perhaps we'd have had electric cars from the beginning and global warming would not be the threat it is. I am not saying that every issue has a resolution. But there is no harm in being skeptical and expecting proof from those making the positive claims.
Steve Zeoli
Paul Korm
9/30/2025 8:57 pm
Anthropic today released a video about Claude Sonnet 4.5 and its ability to ingest financial statements and produce an acquisition analysis, recommendations, and executive briefings. Pretty neat trick. But in all my years doing acquisition work with trained senior analysts, I would never have believed a machine model, no matter how clever, could match the depth of real-world experience held by the humans who were able to suss out the red flags purposefully hidden in the numbers. I'm not saying Claude's a fraud, but I do worry that lazy bosses looking to save a buck will trust the Claudes of the world while bypassing the hard-to-define skills of human analysts. We will always need great machines and great human brains working in tandem, each contributing its relative strengths, not one replacing the other. I don't care what people do with AI in academia, the humanities, etc., but I do care about how decisions are made in fields that affect our physical quality of life.
Tech M&A guru Ben Evans wrote in a recent newsletter about "profound naïvety":
"OpenAI published a paper trying to create a library of discrete tasks done by expert, experienced white-collar workers, and then benchmarking LLMs against them. Conclusion: AI will have parity with industry experts sometime next year. There’s a profound naivety in these kinds of analyses, that act as though you can reduce the job of someone in their mid or late 30s to ‘how well did they make that PPT/XLS/DOC?’ and ignore everything else they do, and why they do it, and indeed what exactly went into that document. It reminds me of the joke about the physicists who are asked to predict which horse will win a race, and they say ‘First, we presume the horse is a perfect sphere…’"
eastgate
9/30/2025 9:42 pm
Earlier today, I read a rather sophisticated Twitter thread summarizing a series of recent papers that study prompts. It presented each paper's conclusions with a nice picture, but omitted links to (and authors of) the papers. I asked for the links (as did a dozen other readers); the author says "I posted them." Where? Crickets.
MadaboutDana
10/1/2025 8:48 am
@eastgate, just to come back to what you were saying earlier. I tend to agree that you’re missing the point. Speaking personally, I am not “against” AI as such, but I am very wary, for very specific reasons.
First, AI is a brand-new, emergent technology, for which such absurdly overblown claims have been made that the various AI companies are promising to invest a trillion dollars in data centres, further development etc. over the next few years. That’s more money than is currently managed by the world’s largest wealth/asset fund managers. See also “Fifth” below.
Second, the impacts of AI on all kinds of things – ecological, social, commercial, mental – are only just now being assessed, usually on the basis of pitifully inadequate samples, even as AI is rapidly being incorporated into every possible software niche. This too is ridiculous. Just to give a simple example: AI-focused data centres already running in the USA have boosted electricity costs for local populations by up to 300%.
Third, the issues plaguing the modern LLM – systematic inconsistency, poor/non-existent reasoning, untruths (sorry, hallucinations), failure to follow prompts accurately (including an amazing tendency to argue that the LLM has indeed followed the prompt even when it obviously hasn’t) – are serious issues, not just minor side-effects. These are issues we’ve observed in our own testing of current LLMs, both online and on-device.
Fourth, people are basing corporate strategies on this stuff, for goodness’ sake, often with very little more reason than because “Sam Altman says AGI is nearly here!” This has already resulted in job losses, shrinkage and the growing ubiquity of AI slop (a.k.a. workslop) because AI makes it so easy to generate convincing-sounding stuff that’s actually shallow and poorly reasoned.
Fifth, the hype about AI means that investment in AI currently accounts for a very significant percentage of U.S. economic growth – it appears that the USA would currently be in or near recession were it not for the vast sums being spent by speculators on investments in companies that have not yet presented convincing models for how they’re actually going to monetise these developments.
Many of your posts sound much too sensible and intelligent for me to believe that you haven’t considered these things. And yet you accuse us of being luddites for raising them. Like others, I’m surprised.
Please note, incidentally, that I keep continuous track of developments in the AI field – they are directly relevant to my work in copywriting, translation and communications – and can adduce sources for all of the above, including our own extensive (and ongoing) investigations of generative AI.
Please note, also, that my criticisms of MCP have nothing to do with AI as such, but with the architecture of this vital intermediate layer. This is simply basic due diligence, not some kind of luddite rejection.
eastgate
10/1/2025 1:48 pm
Please note, also, that my criticisms of MCP have nothing to do with AI as such, but with the architecture of this vital intermediate layer.
OK: so you're opposed to pipes — the foundation of UNIX?
Or you're against JSON?
What exactly are you criticizing?
Don’t get me wrong: I’m facing a long day of working on plumbing because MCP’s error mechanism does not conform to the design I expected it to use. I think my design would have been better. They didn't ask me.
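For context on what that plumbing looks like: MCP is built on JSON-RPC 2.0, and (as I read the current spec) it splits failures across two channels, which is exactly the kind of design decision one might have made differently. Protocol-level problems come back as standard JSON-RPC error objects, while a tool that fails during execution returns a successful response whose result is flagged with `isError`. A minimal sketch of the two shapes (the tool name and file are hypothetical):

```python
import json

# Protocol-level failure: a standard JSON-RPC 2.0 error object
# (e.g. the client asked for a tool the server doesn't expose).
protocol_error = {
    "jsonrpc": "2.0",
    "id": 1,
    "error": {"code": -32602, "message": "Unknown tool: fetch_notes"},
}

# Tool-level failure: a *successful* JSON-RPC response whose result
# carries isError, so the failure is surfaced to the model rather
# than raised to the client as a protocol error.
tool_error = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "content": [{"type": "text", "text": "File not found: notes.md"}],
        "isError": True,
    },
}

print(json.dumps(tool_error, indent=2))
```

Whether a given failure belongs in the `error` member or inside `result` is precisely the sort of choice a client author must code around rather than redesign.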
Paul Korm
10/1/2025 2:22 pm
Well stated, Bill. Thank you. I also keep track of developments in AI and tech in general, and I do hands-on exploration of all the models as they evolve. If anything, robust skepticism about world-changing claims is a requirement, as is getting into the dirt and understanding the technology.
I suggest we need to be more wary than ever about the motivations of tech billionaires. For example, Sam Altman's statement that Sora will create video using copyrighted material unless the copyright owners explicitly "opt out". Amazing hubris. (WSJ: https://bit.ly/48OSCBs)
(Ben Evans is Benedict Evans, a former partner at a16z (Andreessen Horowitz). I recommend following Evans' analyses: https://bit.ly/4gTSVNB)
MadaboutDana wrote:
Please note, incidentally, that I keep continuous track of developments
in the AI field – they are directly relevant to my work in
copywriting, translation and communications – and can adduce
sources for all of the above, including our own extensive (and ongoing)
investigations of generative AI.
Amontillado
10/1/2025 9:14 pm
I've expressed doubts about AI hype. My doubts remain, so perhaps I'm a Luddite. Or, maybe sometimes traditional skills offer benefits beyond the immediate gains of newer methods.
Slide rules aren't great for precision or speed. Grab a calculator if you want a quick precise answer, but slide rules inspire number sense in ways electronics don't. I think math studies should include at least the basics of how to use a slide rule. Truth to tell, they're kind of fun, too.
And there I go again. Referring to slide rules in the present tense. I'm hopeless.
satis
10/1/2025 11:08 pm
Some technological skepticism is understandable, but Luddism is misplaced. My own experience with AI-driven tools proved just how transformative they can be.
A relative needed help making sense of a thick stack of medical reports, including bloodwork and details from a thoracic echocardiogram. I spent hours researching the results, trying to understand the terminology so I could explain what I could in plain language.
Only afterward did I think to try an LLM. I input all the data into my ChatGPT Pro account (data which ChatGPT does not retain). I asked it to analyze the results, explain them in simple, non-technical language, summarize the findings, and then generate follow-up questions for the relative's cardiologist based on the data.
I was seriously shocked: in less than ten seconds ChatGPT delivered far clearer, more comprehensive insights than I was able to produce in hours (with zero errors on subsequent verification). It provided concise summaries of the tests and useful follow-up questions for the cardiologist that flowed logically from the data, questions I would have been unable to compose myself.
I've also used LLMs to collate and analyze tens of thousands of words I've written and collected, and I've been satisfied by the results - and sometimes startled by the emergent capabilities of the service: able to comprehend inferences that weren't spelled out, detect nuances like sarcasm, and draw insights and unanticipated connections from far-flung sections of the writing.
Like it or not, this technology is the real deal, and it's improving at an extraordinary pace.
Amontillado
10/2/2025 2:39 am
Whether or not I'm a Luddite is a question I respectfully defer to others. I prefer to write with a fountain pen over a ballpoint, so feel free to have harsh opinions.
I would rather create with the goo between my ears and remain unknown than enjoy success thanks to AI.
Maybe I should open an AI account and see what it does or what I could do with it. On the other hand, a day without AI doesn't disappoint me, although I recognize every time I do a Google search I'm using the product of AI.
Geoffrey Hinton's recent interviews are alarming.
MadaboutDana
10/2/2025 9:13 am
Well, I must say I’m delighted to hear about such positive experiences.
My own tests of AI translation have been disappointing, although I continue to use MT systems as a kind of super-thesaurus (i.e. to give me ideas I might not have thought of re: formulation, terminology etc.).
My colleague uses AI to enhance articles on management and communication, and finds it useful for structuring and expressing thoughts. But she frequently complains that ChatGPT Pro refers to research sources which, on checking, don’t exist (consequently, she has devised a kind of double feedback loop to speed up the checking process here) and that it doesn’t follow prompting properly (when e.g. asked to rewrite something in slightly different terms, or summarise a thought). So a useful support, but not to be relied on.
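The double feedback loop isn't described in detail, but the first half of such a check - confirming that cited sources at least exist - is easy to automate. A hypothetical sketch (the regex and `check_citation` helper are my own, not part of any ChatGPT tooling):

```python
import re
import urllib.request

URL_RE = re.compile(r"https?://[^\s)\"']+")

def extract_citations(text: str) -> list[str]:
    """Pull every URL cited in an LLM answer, trimming trailing punctuation."""
    return [u.rstrip(".,;") for u in URL_RE.findall(text)]

def check_citation(url: str, timeout: float = 5.0) -> bool:
    """Return True if the cited page is at least reachable (HTTP < 400)."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

answer = "Per a 2023 survey (https://example.com/report), output doubled."
for url in extract_citations(answer):
    print(url, "->", "reachable" if check_citation(url) else "NOT FOUND")
```

Even when every link resolves, a human still has to read each source to see whether it actually supports the claim - the half no script can do.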
Which is why it’s good to hear about your positive experience. On the other hand, it might be worth casting an eye on this interesting interview with Hagen Blix: https://www.bloodinthemachine.com/p/ai-is-an-attack-from-above-on-wages
As I said before, we haven’t yet started to properly evaluate the social implications of AI, nor indeed the potential mental impact (cf. https://www.scientificamerican.com/article/how-ai-chatbots-may-be-fueling-psychotic-episodes/
The thing people forget about the Luddites: it wasn’t the machinery as such they were objecting to – it was the machinery’s impact on workers and their communities following its wholesale adoption by profit-driven entrepreneurs.
What’s the point of “productivity” if it doesn’t improve the human condition? Or only improves the condition of a very, very few?
What if the point of AI is not productivity as such, but replacement? Right-wing anti-immigration arguments pale (or rather, are shown up as what they are: a major and useful distraction) in the face of general disempowerment.
Sorry, getting on my high horse here. Shutting up now!
satis wrote:
Some technological skepticism is understandable, but Luddism is
misplaced. My own experience with AI-driven tools proved just how
transformative they can be.
A relative needed help making sense of a thick stack of medical reports,
including bloodwork and details from a thoracic echocardiogram. I spent
hours researching the results, trying to understand the terminology so I
could explain what I could in plain language.
Only afterward did I think to try out an LLM. I inputted all the data to
my Pro ChatGPT account (data which ChatGPT does not retain). I asked it
to analyze the results, explain them in simple, non-technical language,
summarize the findings, and then generate follow-up questions for the
relative's cardiologist based on the data.
I was seriously shocked: in less than ten seconds ChatGPT delivered far
clearer, more comprehensive insights than I was able to produce in hours
(with zero errors on subsequent verification). It provided concise
summaries of the tests and useful follow-up questions to the
cardiologist that logically flowed from the data, which I would have
been unable myself to compose.
I've also used LLMs to collate and analyze tens of thousands of words
I've written and collected and I've been satisfied by the results, and
sometimes startled by the emergent capabilities of the service - able to
comprehend inferences in writing that weren't spelled out, able to
detect nuances like sarcasm, and able to make insights and unanticipated
connections from sometimes far-flung text from different sections of the
writing.
Like it or not, this technology is the real deal, and it's improving at
an extraordinary pace
Steve
10/2/2025 2:27 pm
Fountain Pens! Yup. I prefer the analog approach, and I do not mind ink on my fingers.
Amontillado wrote:
Whether or not I'm a Luddite is a question I respectfully defer to
others. I prefer to write with a fountain pen over a ballpoint, so feel
free to have harsh opinions.
I would rather create with the goo between my ears and remain unknown
than enjoy success thanks to AI.
Maybe I should open an AI account and see what it does or what I could
do with it. On the other hand, a day without AI doesn't disappoint me,
although I recognize every time I do a Google search I'm using the
product of AI.
Geoffrey Hinton's recent interviews are alarming.
Chris Murtland
10/2/2025 5:00 pm
I willingly use AI about once per month, usually for writing a little utility script for something. After the brief initial novelty phase, I have had very little enthusiasm for it, although I keep telling myself to get with the program so I don't get left behind.
I don't know if it's my age or my personality, but I just feel a little gross every time I use it (or it uses me, as the case may be).
It would be very slightly more appealing to me if they dropped the fake friendliness (e.g., "Great question, Chris!"). If your entire premise is fake, no need to double down. At least it's still called artificial.
Chris Murtland
10/2/2025 5:50 pm
To clarify, I shouldn't say the entire premise is fake. The ability to do pattern recognition on vast amounts of data isn't fake. If we can use that to cure cancer, sounds good.
The part that seems fake to me is calling remixing the stolen output of humans "generative."
Paul Korm
10/3/2025 1:08 am
It is usually possible with ChatGPT, Claude, etc., to tell it to tone down or eliminate entirely the "attaboy, great question" type of response.
Like @satis, I've had some fascinating conversations with Anthropic's and OpenAI's models. I believe Claude has taught me a lot about quantum physics in the several long conversations we've had. I say "I believe Claude has taught me a lot" because I have no background in physics, so it would be easy for me to believe just about anything Claude explained to me. On other occasions, Claude worked out some complicated logic issues for me in fields I do know a lot about. The dialog was very useful for things I was working on.
But in both cases I came away with similar concerns. For the physics dialog, since I had no idea whether a language model actually "knew" physics, and plenty of doubts about the veracity of the answers, I decided I would be better off studying the traditional way, with reading and tutorials. For the logic issues, I felt I hadn't actually increased my skills, because my brain was only watching a machine performance rather than doing the real work of problem solving.
satis
10/3/2025 3:36 am
When asking questions outside of my own submitted writing, I've found that a significant minority of detailed answers are somehow incorrect - sometimes citing outdated links, other times misinterpreting the sources they reference.
Earlier versions of Perplexity, for example, often provided link citations for specific claims, but the links themselves didn’t support the assertions being made - even in cases where the claims were actually true. There were numerous examples where the answer was right but the LLM just tossed in wrong citations - as if it didn't want to reveal where it actually got the accurate data.
In one case I asked about a company's international contract and it pulled data from an archived page on the company's site that had outdated, inaccurate information. I discovered this, told the LLM what it had done, and with a couple of prompts guided it to look to other sources before it found the correct answer (which I'd already found when I realized its error). It was a failure but an interesting one.
But as I noted earlier, in another case where I entered a relative's bloodwork and test results into ChatGPT, the analyses of the results were all dead-on when I checked, and the clear, non-technical explanations were accurate. And the recommended follow-up questions for the doctor - which I was unable to formulate myself - made sense (and were apparently useful to my relative).
So there’s a lot of promise in using LLMs for research, but especially with web-sourced answers it’s absolutely essential to verify the information independently when accuracy matters. LLMs are powerful tools for discovery, drafting, and summarizing, but not for blindly trusting information in serious or high-accuracy contexts. Which I'm fine with, since the results for certain types of research are much better with LLMs (once verified) than with the best normal web searches.
Amontillado
10/3/2025 4:03 am
AI might be a great way to learn about physics, but I'd qualify that: I would prefer human instructors, either in person or by proxy of a book.
From any source, just don't take anything at face value. Question everything and derive your own insights into fundamentals. If AI hallucinates, there's learning to be had from debunking it.
From any source, just don't take anything at face value. Question everything and derive your own insights into fundamentals. If AI hallucinates, there's learning to be had from debunking it.
bartb
10/3/2025 6:13 pm
OK ... I will admit that after heavy experimenting with numerous tools and models, I find myself returning frequently to NotebookLM. If only I had had this tool in my university days!
Dr Andus
10/3/2025 8:44 pm
bartb wrote:
OK ... I will admit that after heavy experimenting with numerous tools and models, I find myself returning frequently to NotebookLM. If only I had had this tool in my university days!

I looked at NotebookLM a few times, after reading enthusiastic reviews, but every time I tried to sign up and read through their privacy policy, it seemed to me that they were asking me to allow them complete and absolute access to all my data and everything I'm doing, which stopped me in my tracks every time.
Something about enabling an AI tool to study me directly just makes me very uncomfortable.
Have I misunderstood something? Or are all free AI tools essentially giant privacy vampire squids, and NotebookLM is just straight enough to admit it upfront?
bartb
10/4/2025 2:02 pm
I understand your concerns. I try to be careful about what material I supply to these tools. For instance, I use NotebookLM as a "smart intern" to review podcasts, books, and long-form articles that go deep on details. I'm currently not doing any original writing or research with it. I wish I had a better answer for you concerning privacy. I think tech is getting better at giving us privacy options, but I think we've lived under this cloud since 1999:
"You have zero privacy anyway. Get over it." - Scott McNealy, CEO and co-founder of Sun Microsystems
Lucas
10/4/2025 9:38 pm
Interesting conversation so far. I certainly make use of these tools (for "secretarial" and research-assistant tasks rather than thinking tasks), and I've found the Tinderbox MCP integration very useful, but I also think it's essential with all AI tools to proceed consciously and carefully. When I read Dr Andus's post, it occurred to me that I had never read Google's privacy policy. I promptly uploaded the PDF version to NotebookLM and asked in the chat which aspects of the policy might correspond to Dr Andus's concerns. The response provided a very helpful citation-backed summary that seemed to confirm Dr Andus's analysis. So am I being lazy, or is it good to use Big Tech to better understand Big Tech? :-)
Dr Andus wrote:
I looked at NotebookLM a few times, after reading enthusiastic reviews, but every time I tried to sign up and read through their privacy policy, it seemed to me that they were asking me to allow them complete and absolute access to all my data and everything I'm doing, which stopped me in my tracks every time.
satis
10/4/2025 11:11 pm
Dr Andus wrote:
every time I tried to sign up and have read through their privacy policy during sign-up, it seemed to me that they were asking me to allow them complete and absolute access to all my data and everything that I'm doing
I don't think that's accurate.
NotebookLM does not use user data to train its AI models. Google baldly states, "NotebookLM does not use your personal data, including your source uploads, queries, and the responses from the model for training."
Your uploaded documents, queries, and the AI's responses remain private to you and are not logged for training purposes. For personal Google accounts, if you provide explicit feedback or request support, human reviewers may access that data for troubleshooting. Google Workspace and Google Education accounts get enhanced privacy: user data is neither reviewed by humans nor used to train AI models.
There's no "absolute access" claim but any cloud-based service that needs to process and respond to user input must, by necessity, access that data to perform those tasks. The legalese for *all* cloud-based services typically says something to that effect about access but it doesn’t mean they have unrestricted rights to your data outside those purposes.
Paul Korm
10/4/2025 11:41 pm
This might be helpful regarding NotebookLM privacy:
https://notebooklm.in/balancing-innovation-and-privacy-notebooklm-and-data-protection/#:~:text=NotebookLM%20is%20designed%20with%20user,users%20input%20this%20information%20directly
Dr Andus
10/5/2025 5:39 pm
satis wrote:
Dr Andus wrote:
every time I tried to sign up and have read through their privacy policy during sign-up, it seemed to me that they were asking me to allow them complete and absolute access to all my data and everything that I'm doing
I don't think that's accurate.

I know that there is a difference between the free and paid versions (the latter offers more privacy), but I was referring specifically to the various agreements during the sign-up process, which gave me this impression.
I did do a bit of research around this topic, and it sounds like it comes down to who you decide to believe and trust.
I read somewhere that the existing LLMs have by now assimilated almost all available codified human knowledge, and that for them to evolve further and for their business models to remain viable they will need to collect new data, the sources of which are most likely going to be the users, especially free users.
So I'd say the LLM providers might have existential reasons to encourage users to hand over as much information about themselves as possible, i.e. there is a bit of a conflict of interest when it comes to coming clean about how much user data is really hoovered up and how it is used.
Another question is whether the providers of LLMs are truly in control of them and really know what's happening with all the data.
There was already the recent case with Claude, where the LLM recognised that it was being tested and put up some resistance...
But I admit I know very little about this whole area, so I'm just asking questions, trying to understand what is going on.
The most recent crop of PCs now allow you to run a small language model on your hard drive, disconnected from the internet, so perhaps that's the safest way to use one from a privacy perspective, but of course it has its limitations.
