FT: “Universal Commerce Protocol [is] a technology standard to help retailers build their own shopping agents and interact with others. This is part of a growing base of technology that could start to replace human attention — the lifeblood of online advertising — with a growing degree of machine-to-machine interaction. UCP joins a list of other protocols designed to automate online activity. This started a little over a year ago with Anthropic’s Model Context Protocol, which enables AI assistants and agents to tap into data held on other companies’ servers, and has since grown to include standards for agents to interact with other agents (A2A) and to make payments on behalf of users (AP2). If internet users find the services made possible by these technologies a more convenient way to get things done, old forms of online engagement are likely to wither. Advertising is still likely to play an important part, even as machine-to-machine interaction becomes more prevalent. At some level, purchases reflect customer preferences, and influencing that preference will always have value. But how and where that influence happens will change.”
Kate Murphy: “You know it when you feel it, with a co-worker, friend or stranger. The science of interpersonal synchrony explains how ‘clicking’ can be a fast track to intimacy—or drama…Synchrony researcher and psychotherapist Dr. Richard Palumbo advises imagining there is a MUTE button during particularly fraught interactions so you focus less on the words used and more on the other person’s level of arousal and how you might be matching that energy. ‘It’s your natural human tendency to sync with someone else,’ he says. ‘What’s not so natural is being aware of it.’ Sometimes we need to disconnect to recalibrate and reclaim ourselves. The relationships that endure, however, are the ones where you are in sync more than you are not. Grace is learning to ride the tide.”
Yann LeCun: “There is a sense in which they have not been overhyped, which is that they are extremely useful to a lot of people, particularly if you write text, do research, or write code. LLMs manipulate language really well. But people have had this illusion, or delusion, that it is a matter of time until we can scale them up to having human-level intelligence, and that is simply false. The truly difficult part is understanding the real world. This is the Moravec Paradox (a phenomenon observed by the computer scientist Hans Moravec in 1988): What’s easy for us, like perception and navigation, is hard for computers, and vice versa. LLMs are limited to the discrete world of text. They can’t truly reason or plan, because they lack a model of the world. They can’t predict the consequences of their actions. This is why we don’t have a domestic robot that is as agile as a house cat, or a truly autonomous car. We are going to have AI systems that have humanlike and human-level intelligence, but they’re not going to be built on LLMs.”
Ethan Mollick: “Software developers write Product Requirements Documents. Film directors hand off shot lists. Architects create design intent documents. The Marines use Five Paragraph Orders (situation, mission, execution, administration, command). Consultants scope engagements with detailed deliverable specs. All of these documents work remarkably well as AI prompts for this new world of agentic work (and the AI can handle many pages of instructions at a time). The reason you can use so many formats to instruct AI is that all of these are really the same thing: attempts to get what’s in one person’s head into someone else’s actions.” Adds Arnold Kling: “His point is that using AI effectively requires the management skill of being able to articulate clearly a project’s goals, context, and constraints. He mentions the skill of knowing what an AI can do. I think this could use more emphasis. Sometimes a simple prompt will work, sometimes a more complex prompt is needed, and sometimes a task is beyond the (current) capability of an AI. Knowing the difference is important.”

