The dirty secret of white-collar and white-collar-adjacent professional life is that one has to be very good at mediocre reading and writing (regarding the latter, just look at this blog! HARDEE HAR HAR!). These are important skills. One often has to quickly read something to gain a surface-level understanding (the goal sometimes being “Do I need to read this in depth?”). Likewise, there is plenty of writing one must do that needs to be intelligible, but, frankly, not very good. You just need to convey the point, nothing more (e.g., an email requesting something).
Over the years, like many, I have become very good at mediocre reading and writing, and can do both very quickly. Asking an LLM to write an email for me (and then checking it) or prompting a written summary of an article and then reading it wouldn’t save me time. It would be as fast to do it myself.
But in professional life, there are many who are bad at either reading or writing (or both). This includes people with technical skills and hoity-toity degrees. Not mediocre, but bad. They have a hard time reading and understanding various texts, and their writing is both laborious and unintelligible. For them, I could see ‘AI’ being a godsend. AI is pretty good at being mediocre, and, often, that’s all that’s needed.
Of course, when high-quality reading and writing are required (that is, when you need to understand something very well or convey something clearly, effectively, and accurately), AI isn’t good at that. The legal profession, along with Kennedy’s MAHA clique, has learned this the hard way. And for coding, it seems to be mediocre (though it can be bad for certain applications).
If I’m making a decision on whether some data should be included in a ‘gold-standard’* reference database, a mediocre summary of the article describing the data isn’t going to cut it. I need a detailed assessment, and from what I’ve seen of scientific article summaries, the tools simply aren’t up to snuff (my previous workplace encouraged exploring various tools, often supposedly better than what is publicly available, and, nope, they didn’t cut it).
So the promise of AI, such as it is, is mediocrity**, which sometimes or often can be enough. But that’s not good or high-quality, even though there is considerable financial pressure to pretend otherwise.
*I hate that Bhattacharya, Trump, and other science deniers have tarnished (SWIDT?) the phrase ‘gold-standard.’
**Between ‘ship it, then fix it’, and using earnings, not product quality, as the primary value function, mediocrity as a goal really shouldn’t surprise people, corporate propaganda notwithstanding.

I am reminded of Feynman’s account of his experience teaching physics in Brazil, in “Surely You’re Joking, Mr. Feynman!”. The Stepford physics teachers in Brazil taught students the sentences, but the students had no idea what the sentences meant. AI can regurgitate good medical information (and sometimes bad medical information), but if one had AI analyze the Biden presidency, AI would probably produce word salad.