At an event last night, I noticed that many conversations centered on the impact of artificial intelligence tools in daily work. One person would describe something they were trying, and the other would say, "That is neat, maybe I should try that." I heard that phrase so often that I began to reflect on what sits underneath it. For most of our lives we have treated software as reactive. We buy a program to solve a problem we already have, or we install a system to make a current process easier. We judge tools by the workload they can replace. Hence the common refrain: "maybe I should try that."
Yet people are not actually trying it. The excitement around artificial intelligence is loud, and the growth charts seem dramatic, but adoption is nowhere near as widespread as the headlines would have you believe. The learning curve is real. Practical uses are elusive, especially when you attempt to wedge a mostly good solution into a rigid process. Experiments often end up in the hands of one or two artificial intelligence enthusiasts inside an organization, and whatever they learn trickles out slowly. People are busy. The prospect of learning yet another tool, then babysitting it, remains the most common complaint.
This leads me to a different conclusion. The next era of software will be more predictive than reactive. We already see glimmers of this. Social platforms attempt to predict what will hold our attention. Recommendation systems curate what we will most likely read, watch, or purchase. Predictive ideas are entering everyday conversation as well. The question people ask me starts like this: "Now, what if artificial intelligence could do…" The sentence that follows is almost always the same: could the software guess what I want, then just do it for me? If the system can anticipate my need, why bother asking? Just serve it up.
Of course, there are social risks and dangers, and the pragmatic among us are quick to raise them. Caution is warranted. We talk about artificial general intelligence as if it is just around the corner, but it remains out of reach for now. For software to feel less artificial, it must show that it can understand, learn, and apply knowledge across a wide range of tasks. That requires a leap from reactive to truly proactive: the system must act with initiative, not only when we feed it step-by-step instructions, but when we give it purpose and permission.
If that is where we are headed, then our responsibility grows. We need clearer intent. We need better guardrails. We need to decide where prediction ends and human choice begins. The promise is not a world where tools do our living for us. The promise is a world where tools meet us one step earlier, where they offer the first draft, the first nudge, the first move toward what we already value. When that happens, we will say something better than "maybe I should try that." We will say, "This is helping me become who I said I would be."