We have spent the last few years training LLMs on human conversation data with the explicit goal of making them sound like us, but we have also been training ourselves on them in return. I have spent a lot of hours talking to claude over the past couple of months, my juniors grew up doing their assignments with gpt, and there are kids in undergrad right now whose first real intellectual sparring partner was a language model instead of another human. This is not a one-way street. The gradients are running in both directions, they are doing SGD on us and we are doing something like SFT on them, and the effect is small enough per interaction that you don’t notice it until one day you hear “Yes absolutely, I can help you with that” from friends and realize you have never heard them say that phrase in your life. There is a USC study that measures this drift in spoken youtube videos after chatgpt, words like delve and meticulous and underscore spiking in frequency in actual human speech, not copy-pasted text but mouths, and the comments under the post are full of people confessing that they now say “you’re absolutely right” out loud in real conversations, which is maybe the most claude-coded sentence a human being can utter, and also the kind of thing you would only notice if someone else said it to you first

The usual response I get when I bring this up with friends is that tools have always shaped the people using them, the printing press did it, radio did it, standardized spelling did it, twitter did it, so what is the big deal about this one. And I think the big deal is that the loop is closed now and the loop is short. Previous tools did not update, the printing press in 1500 was the same printing press in 1600, twitter is roughly the same twitter it was five years ago. But claude 4.6 is not claude 3.5 and it will not be claude 5 in six months, and every retraining cycle ingests fresh human interaction, and the 2026 internet is written by humans who have spent the last two years being subtly finetuned by the 2024 version of the model, which was itself trained on humans who had been talking to the 2022 version. The model distribution and the human distribution are, idk, sort of converging, and maybe at some point the reference you would use to even measure the drift stops existing because nobody is untouched anymore. I don’t know if this converges into something stable or diverges into something strange, but either way the concept of “what unassisted human thinking sounds like” quietly erodes, and I don’t think we are going to notice the day that happens

I heard the other day that people have stopped open-sourcing work they would have happily published a few years ago, specifically because they know it will end up inside a training corpus and the edge they got from writing it will evaporate. The funny second-order thing here is that the models got good in the first place because people were generous in public, and the models being good is now the exact reason people are no longer generous in public. The commons is being fed into a thing that makes continuing to contribute to the commons feel like a bad trade, and I don’t see any version of that where the commons ends up richer

The other thought that came to me while writing this post is a quieter shift in what humans are actually for in this whole arrangement. A couple of weeks ago I met a second-year undergrad building a startup, smart kid, and he walked me through his workflow with real pride. He maintains this enormous knowledge base about his company and the agent acts as the CEO in his framing, and he is the hands, the agent drafts the email and he clicks send, the agent preps the talking points and he goes and says them to the investor, the agent reads the contract and tells him where to sign, and I could not figure out in the moment how to tell him that from where I was sitting he had just described himself as a peripheral, an io device with legs, a face and a bank account rented out to something else’s cognition. And the same shape is showing up in people who route feedback and delegation and even performance reviews through a model, so your manager is not writing your review and your response will also not be written by you, and the two models are going to have a little conversation about your career while the two humans pretend to have opinions in the meeting afterwards. The core limiting factor of most industries used to be intelligence, and the story of the last century is basically the story of that factor getting commoditized in stages, first physical labour, then access to knowledge, and now the parsing of that knowledge itself, which is the thing we used to call being smart. What you cannot hand off yet is the legal identity, the bank account, the face on the zoom call, the body that walks into the room, and so that is quietly what humans are being optimized into in this new stack. Not the thinker anymore, but the thing that makes the agent’s output count as a real action in the real world, maybe humans just become an extended set of tools for interacting with the world

What if all this becomes the norm and we all shrug and go along with it. In an earlier post I wrote about asking claude to apply for my visa, and I framed it as a fun anecdote because it genuinely was, but there is a darker version of that same story, the one where my juniors are firing up a claude and asking it to apply to 400 jobs a night while recruiters on the other side are firing up their own claude to screen the same 400 applications, and the only humans in the loop are the ones who wrote the prompts on either end. I already see it happening, my juniors are not landing interviews and I do not think they are less capable than I was at their age, the signal-to-noise ratio of the application pool has just collapsed because the noise is free to generate and the signal costs the same it always did. I do not want to live in a world where two models negotiate the hiring decision in the middle while the humans on both sides wait around to be told the outcome, and I am not sure we get a vote on this