Larry Culp on advice he received from Warren Buffett: “First, and most important, pick the right leaders—not only for their ability, but for their values. And second, set ambitious but not unreasonable expectations.”
Tim Koller: “What we have found is that when people are overly focused on what’s going on in the world and forget about the principles, they make poor decisions…we encourage companies to have the courage to be more long-term oriented—to focus more on growth and innovation, for example, rather than just increasing profits by cutting costs. Cutting costs is never a way to succeed. I’ve never seen a company in all my years that has successfully cut costs. They may be able to do it four, five, six times, but eventually it comes back to haunt them. They find that their growth is below their market, and then they end up scrambling to figure out how to fix it—and sometimes they find that they can’t fix it. The focus on earnings, particularly short-term earnings, is still the biggest misconception we battle all the time.”
WSJ: “Looking to cut through the hype and find restaurants you’ll actually love? The founders of Beli were. Now, their app uses your restaurant ratings to connect you with dining discoveries tailored to your own tastes.”
Rohit Krishnan: “LLMs inherently are probabilistic. No matter how much you might want it, there is no perfect verifiability of what it produces. Instead what’s needed is to find ways to deal with the fact that occasionally it will get things wrong. This is unlike the code we’re used to running. That’s why using an LLM can be so cool, because they can do different things. But the cost of it being able to read and understand badly phrased natural language questions is that it’s also liable to go off the rails occasionally. This is true whether you’re asking the LLM to answer questions from context, like RAG, or asking it to write Python, or asking it to use tools. It doesn’t matter; perfect verifiability doesn’t exist. This means you have to add evaluation frameworks and human-in-the-loop processes, design for graceful failure, use LLMs for probabilistic guidance rather than deterministic answers, or all of the above, and hope they catch most of what you care about, but know things will still slip through.”
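Krishnan's point about designing for graceful failure can be made concrete. Here is a minimal sketch of the pattern, assuming a hypothetical `fake_llm` stand-in for a real model call (which, true to the quote, sometimes returns malformed output): validate every response against an expected schema, retry a bounded number of times, and fall back to a safe default rather than crash.

```python
import json

# Hypothetical stand-in for a real LLM API call. Its first response is
# malformed on purpose, mimicking the model "going off the rails".
_calls = {"n": 0}

def fake_llm(prompt: str) -> str:
    _calls["n"] += 1
    if _calls["n"] == 1:
        return "Sorry, I can't help with that."  # not the JSON we asked for
    return json.dumps({"answer": 42})

def validate(raw: str):
    """Check the raw output against the expected shape; None if invalid."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    return data if isinstance(data, dict) and "answer" in data else None

def robust_query(prompt: str, retries: int = 3, fallback=None):
    """Retry on invalid output; degrade gracefully instead of raising."""
    for _ in range(retries):
        parsed = validate(fake_llm(prompt))
        if parsed is not None:
            return parsed
    # Graceful failure: return a safe default, or escalate to a human.
    return fallback

result = robust_query("What is the answer?", fallback={"answer": None})
print(result)
```

The same skeleton generalizes to the cases the quote names: for RAG the validator checks that the answer cites the provided context, for generated Python it runs the code in a sandbox, for tool use it checks the tool arguments. The names and the stubbed model here are illustrative assumptions, not a specific library's API.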