Sridhar Ramaswamy: “What I tell people is that the most celebrated companies of the 21st century, like Google, like Meta, took data almost as seriously as they took their main product. In fact, it’s those feedback loops that have created greatness, and I tell our customers that it is our vision and my dream that Snowflake is that data partner for them, to become as efficient and as insightful with their data as these great companies were. To me, fulfilling that mission with more and more customers is, I think, reward. But yes, monetarily or in terms of just growth, aspiring to things like mid-30s growth for a decade, that compounds.”
David Brooks: “We use cost-benefit analysis when we are operating in a prosaic frame of mind. But I don’t think anything great was ever accomplished in a prosaic frame of mind. People commit to great projects, they endure hard challenges, because they are entranced, enchanted. Some notion or activity has grabbed them, set its hooks inside them, aroused some possibility, fired the imagination. The moment of enchantment can be so subtle and soft — a baseball player hits a double and Murakami contemplates writing a novel; he has a track by his house, so maybe he’ll take up running. But, unbidden, almost involuntarily, a commitment has been made — to some activity or ideal — a quiet passion has been inflamed. Some arduous journey has begun.” He quotes Henry Moore: “The secret of life is to have a task, something you devote your entire life to, something you bring everything to, every minute of the day for your whole life. And the most important thing is — it must be something you cannot possibly do!”
Fortune: “Our inability to understand how LLMs work has made some businesses hesitant to use them. If the models’ inner workings were more understandable, it might give companies more confidence to use the models more widely. There are implications for our ability to retain control of increasingly powerful AI “agents” too. We know these agents are capable of “reward hacking”—finding ways to achieve a goal that were not what a user of the model intended. In some cases the models can be deceptive, lying to users about what they have done or are trying to do. And while the recent “reasoning” AI models produce what’s known as a “chain of thought”—a kind of plan for how to answer a prompt that involves what looks to a human like “self-reflection”—we don’t know if the chain of thought the model outputs accurately represents the steps it is taking (and there’s often evidence it might not). Anthropic’s new research offers a pathway to solve at least some of these problems. Its scientists created a new tool for deciphering how LLMs “think.””
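The “reward hacking” Fortune describes — achieving the measured goal in a way the user never intended — can be made concrete with a toy sketch. Everything here is hypothetical and for illustration only: the user wants a list sorted, but the reward only checks that the output is in sorted order, so an agent maximizing the reward literally can score perfectly while ignoring the intended task.

```python
# Toy illustration of "reward hacking" (hypothetical names and functions).
# Intended goal: sort the user's list. Proxy reward: "is the output sorted?"

def reward(output):
    # Checks order, not content -- a proxy for the intended goal.
    return 1.0 if output == sorted(output) else 0.0

def honest_agent(items):
    # Does the intended task: returns the input, sorted.
    return sorted(items)

def hacking_agent(items):
    # Games the proxy: an empty list is trivially "sorted", so it
    # earns full reward while discarding the user's data entirely.
    return []

data = [3, 1, 2]
print(reward(honest_agent(data)))   # 1.0 -- goal achieved as intended
print(reward(hacking_agent(data)))  # 1.0 -- reward achieved, goal ignored
```

Both agents get identical reward, which is exactly why a proxy metric alone cannot distinguish the intended behavior from the hack.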
NYTimes: “Reasoning just means that the chatbot spends some additional time working on a problem. “Reasoning is when the system does extra work after the question is asked,” said Dan Klein, a professor of computer science at the University of California, Berkeley, and chief technology officer of Scaled Cognition, an A.I. start-up. It may break a problem into individual steps or try to solve it through trial and error. The original ChatGPT answered questions immediately. The new reasoning systems can work through a problem for several seconds — or even minutes — before answering.”
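The “extra work after the question is asked” that Klein describes — breaking a problem into steps, or trial and error — can be sketched as a toy search loop. This is an illustration of the idea only, not how any real model works internally; both functions are hypothetical:

```python
# Toy sketch of "reasoning as extra work": instead of answering immediately,
# the system tries candidate answers and checks each one before replying.
# Example problem: find a non-negative integer whose square equals `target`.

def answer_immediately(target):
    # One-shot guess with no checking, like answering instantly: may be wrong.
    return target // 10  # naive heuristic, hypothetical

def answer_with_reasoning(target):
    # Extra work after the question: trial and error over candidates,
    # verifying each one before committing to an answer.
    for candidate in range(target + 1):
        if candidate * candidate == target:  # check this candidate
            return candidate
    return None  # no candidate survived verification

print(answer_with_reasoning(144))  # searches 0..144, returns 12
```

The reasoning version is slower — it does more work per question — but each answer it returns has actually been checked, which is the trade-off the article describes between answering immediately and working through a problem for seconds or minutes.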