Ben Makuh writes:
My consternation with how we talk about LLMs does not come from a desire to virtue signal or to sound holier than thou. It comes from a belief that the way we use language shapes our behavior in real and meaningful ways.…
If I’m training myself to think about LLMs as unpaid interns, workers who never take lunch breaks, cheap/free replacements for my coworkers, etc., then I’m doing grave damage to myself. Now, whether you think my comparison between LLMs and slavery holds any merit is up to you to decide. I’m not even suggesting that we shouldn’t use LLMs, just that there are far better ways for us to approach their use than are currently in vogue, and those better ways don’t even really require that much of us.…
When I prompt ChatGPT with a string of text, I’m prodding the model to run in a certain way that is hopefully useful toward my ends. Nothing in that action necessitates thinking of ChatGPT as an “unpaid intern.” If I use Cursor or Zencoder to generate code in response to a prompt, nothing in that action requires me to think of it as an “engineer who never takes lunch breaks.” I can easily interact with LLMs in a manner quite similar to the way the vast majority of the world has interacted with a traditional search engine over the past couple of decades: as a tool that does a job.