Thinks 1857

SaaStr: “The market has split into two very different worlds. In one world, companies are riding incredible tailwinds – raising at premium valuations, growing at unprecedented rates with lean teams, and accessing budgets 10x larger than traditional SaaS. In the other world, companies are fighting for scraps, dealing with flat-to-negative budgets after price increases, and watching their valuations compress. Which world you’re in is largely up to you. The technology is available. The budgets are available. The customers are in market right now. The question is whether you’re building something that deserves to win.”

WSJ: ““A properly regulated system of AI-powered choice engines could produce massive welfare benefits,” concludes Cass Sunstein in “Imperfect Oracle,” his study of what artificial intelligence can do for humanity. “It could make life less nasty, less brutish, and less short—and less hard.” Many people today see great potential in large language models and other, more ambitious, AI applications. But what does he mean by “AI-powered choice engines”? Mr. Sunstein…identifies the real benefit of AI as its capacity to overcome human “cognitive biases.” Deeply influenced by the field of behavioral economics, he argues that people tend to value avoiding losses rather than pursuing equivalent gains, pay too much attention to the examples of outcomes that are most familiar to them, and tend to be “unrealistically optimistic.” They use “heuristics” that humans evolved for making snap decisions but that can mislead them at other times. “People tend to focus on the short term, not the long term,” he notes. We trust our intuitions when we should rely on rational calculation. “Intuitions and impressions should be replaced by computations,” Mr. Sunstein concludes.”

Physical Intelligence: “One of the most exciting (and perhaps controversial) phenomena in large language models is emergence. As models and datasets become bigger, some capabilities, such as in-context learning and effective chain-of-thought reasoning, begin to appear only above a particular scale. One of the things that can emerge at scale with LLMs is the ability to more effectively leverage data, both through compositionality and generalization, and by utilizing other data sources, such as synthetic data produced via RL. As we scale up foundation models, they become generalists that can soak up diverse data sources in ways that smaller models cannot. In this post, we’ll discuss some of our recent results showing that transfer from human videos to robotic tasks emerges in robotic foundation models as we scale up the amount of robot training data. Based on this finding, we developed a method for using ego-centric data from humans to improve our models, providing a roughly 2x improvement on tasks where robot data is limited.”
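To make the co-training idea concrete, here is a minimal sketch of mixing robot demonstrations with human ego-centric video in each training batch; the dataset names, fields, and the 0.3 mixing ratio are illustrative assumptions, not Physical Intelligence's actual recipe:

```python
# Illustrative sketch only: co-training batches drawn from both robot
# demonstrations (with action labels) and human ego-centric video (no robot
# actions). All names and the HUMAN_RATIO value are assumptions for
# illustration, not the method described in the excerpt.
import random

robot_data = [{"obs": f"robot_frame_{i}", "action": f"action_{i}"} for i in range(1000)]
human_data = [{"obs": f"ego_video_frame_{i}", "action": None} for i in range(5000)]

HUMAN_RATIO = 0.3  # assumed fraction of each batch drawn from human video

def sample_batch(batch_size=32):
    """Mix human video and robot examples: the human data supplies extra
    visual/semantic supervision, while the robot data grounds the actions."""
    n_human = int(batch_size * HUMAN_RATIO)
    batch = random.sample(human_data, n_human) + random.sample(robot_data, batch_size - n_human)
    random.shuffle(batch)
    return batch

batch = sample_batch()
print(sum(ex["action"] is None for ex in batch), "human examples in a batch of", len(batch))
```

Per the excerpt, the benefit of such human-video co-training only emerges once the robot dataset itself is large enough, which is where the reported roughly 2x gain on data-limited tasks comes from.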

Jason Furman: “I’m more worried about the financial valuation bubble than I am about a technological bubble…To justify financial valuations, you basically need two things: the technology works really, really well, and you can make a profit from that. The two threats to valuations are that we hit diminishing returns and a lot of the different scaling laws that have applied to date don’t apply in the future. Moreover, I don’t know that every scaling law translates economically. Every time the microchip in your computer gets two times as fast, you don’t write Word documents two times as fast or respond to emails two times as fast. In fact, a lot of that is almost like excess capacity that is building up in our computers, and that could be what happens in AI, even if it follows the law. The second thing is the current valuations assume enormous ability to monetize, which requires products that people will buy and being able to build moats so that people won’t switch to cheaper products. It’s not like I’m sure at all that there’s not an AI technology bubble — I change my thoughts on this by the day — but it’s the valuations I’m much more worried about.”
