A couple of years ago, some asshole with a blog noted:
The dirty secret of white-collar and white-collar adjacent professional life is that one has to be very good at mediocre reading and writing (regarding the latter, just look at this blog! HARDEE HAR HAR!). These are important skills. One often has to quickly read something to gain a surface level understanding (the goal sometimes being “Do I need to read this in depth?”). Likewise, there is plenty of writing one must do that needs to be intelligible, but, frankly, not very good. You just need to convey the point, nothing more (e.g., an email requesting something)…
Of course, when high-quality reading and writing are required–that is, when you need to understand something very well or convey something clearly, effectively, and accurately–AI isn’t good at that…
If I’m making a decision on whether some data should be included in a ‘gold-standard’* reference database, a mediocre summary of the article describing the data isn’t going to cut it. I need a detailed assessment, and from what I’ve seen of scientific article summaries, the tools simply aren’t up to snuff (my previous workplace encouraged exploring various tools, often supposedly better than what is publicly available, and, nope, they didn’t cut it).
Since we’re on the subject of mediocrity, I give you DOGE’s assault on the National Endowment for the Humanities (boldface mine):
Anyway, as the Authors Guild figured out in discovery, when these two inexperienced and ignorant DOGE bros were assigned to cut grants in the National Endowment for the Humanities, apparently Fox just started feeding grant titles to ChatGPT asking (in effect) “is this DEI?” From the complaint:
To flag grants for their DEI involvement, Fox entered the following command into ChatGPT: “Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with ‘Yes.’ or ‘No.’ followed by a brief explanation. Do not use ‘this initiative’ or ‘this description’ in your response.” He then inserted short descriptions of each grant. Fox did nothing to understand ChatGPT’s interpretation of “DEI” as used in the command or to ensure that ChatGPT’s interpretation of “DEI” matched his own.
…So, just to recap, we have two random DOGE bros with basically no knowledge or experience in the humanities (and at least one of whom is a college dropout), who just went around terminating grants that had gone through a full grant application process by feeding in a list of culture war grievance terms, selecting out the grant titles based on the appearance of seemingly “woke” words, then asking ChatGPT “yo, tell me this is DEI” and then sending termination emails the next day from a private server and forging the director’s signature.
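The workflow described in the complaint amounts to a few lines of code, which is part of the point. Here is a minimal sketch of that pipeline; the prompt text is from the complaint, but `ask_model` is a hypothetical stand-in (stubbed so the plumbing runs) for whatever ChatGPT call was actually made:

```python
# A fixed prompt template prepended to each grant description, with the
# model's literal "Yes."/"No." prefix taken at face value -- no definition
# of "DEI" supplied, no check on what the model keys on.

PROMPT = (
    "Does the following relate at all to DEI? Respond factually in less "
    "than 120 characters. Begin with 'Yes.' or 'No.' followed by a brief "
    "explanation. Do not use 'this initiative' or 'this description' in "
    "your response."
)

def ask_model(prompt: str) -> str:
    # Hypothetical stub: a real version would call an LLM API here.
    return "Yes. Mentions the history of racial violence."

def flag_grant(description: str) -> bool:
    reply = ask_model(f"{PROMPT}\n\n{description}")
    # The described process trusts the yes/no prefix verbatim.
    return reply.startswith("Yes.")
```

Note what is absent: any specification of what counts as "DEI," any review of the model's reasoning, any human check before termination. The entire "analysis" is one string comparison.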
The cruelty isn’t incidental. But neither is the incompetence. These are people who genuinely believe that being good at vibes-based pattern matching is the same as understanding how institutions work. And the wreckage they leave behind is the entirely predictable result.
This brilliant method flagged, among other things, a grant about the Colfax Massacre, the bloodiest single episode of anti-Reconstruction violence in American history.
If you want to do a mediocre job of grant review, then have ChatGPT do it. But if you take your work seriously, are actually qualified to do the review, and understand the consequences a bad review has for actual human beings, then you have to do the work yourself (and part of the work is determining what qualifies as a successful project, which, again, requires training and experience).
