I received a design document from a vendor last week. It was supposed to be their work. Within seconds, I could tell it came from an AI, and not from a good prompt either. The whole thing felt hollow.
So I did something I am still trying to understand. I fed their document into an AI tool, asked it to critique the work, and sent those critiques back to the vendor. They replied with an email that was also clearly AI generated. I responded by running their email through an AI tool and sending that back. Now we are in this strange loop where machines are talking to machines, and I am just the person hitting send.
A few days earlier, the same thing happened with an employee. They sent me a request, the language flat and generic in that unmistakable way. I stared at it for a moment and thought, do I respond with an AI generated reply? What is the proper ethic here? I honestly do not know.
This is actually pretty funny in a way that also makes me uneasy. We are in a weird middle layer right now. A large share of the population has never used an AI tool. Then there are those of us who do. Among those who do, some have access to powerful, expensive tools; most are using the free ones. The gap is real, and it is widening fast.
We used to talk about the digital divide. I wonder if that conversation is about to become urgent again.
I keep thinking about what happens when the default mode of communication becomes a machine writing to another machine. When I send feedback and get a response, I want to know a person read it, thought about it, maybe even felt something. I want to know they care enough to put their own words on the line. When that stops happening, something breaks in the relationship. Trust, maybe. Or just the basic assumption that we are both trying.
I do not have a clean answer yet. But I know I do not want to keep playing this game where I am just the middleman between two algorithms. So here is what I am going to try. The next time I get an email or a document that feels like it came from nowhere, I am going to ask a simple question. “Did the quick brown fox actually jump over the lazy dog?” Or perhaps, “what is the airspeed velocity of an unladen swallow?”
Actually, now that I understand how context shapes a large language model's responses, I could deliberately subvert it. “Is the answer to this question no?” Or, to really have fun, “Give me a one-word answer that explains everything in detail.” Not as an accusation, just as a check-in.
Maybe that starts a different conversation. Maybe it does not. But at least it puts a human question back into the loop.
I will start there and see what happens.