Thinks 1927

Noah Smith: “In the late 20th century, we invented three things that utterly changed the game. These three inventions were the lithium-ion battery, the rare-earth electric motor, and power electronics. A little over a year ago, I wrote a post about why these three inventions were such game-changers: Basically, these three things allow electric motors to replace combustion engines (and steam boilers) over a wide variety of applications. Batteries make it possible to store and transport electrical energy very compactly and extract that energy very quickly. Rare-earth motors make it possible to use electrical energy to create very strong torques — for example, the torque that turns the axles of a Tesla. And power electronics make it possible to exert fine control over large amounts of electric power — stopping and starting it, rerouting it, repurposing it for different uses, and so on. With these three technologies, combustion’s main advantages vanish in many domains. Whether it’s cars, drones, robots, or household appliances, electric technology now has both the power and the portability that only combustion technology used to enjoy.”

Siddharth Pai on OpenClaw: “The most useful term in this debate is the ‘lethal trifecta,’ popularized by Simon Willison. The three parts are precise. First, the agent has access to private or sensitive data. Second, it is exposed to untrusted content such as text, images or other material that an attacker can influence, whether through a webpage, email, document or bug report. Third, it can communicate externally; for example, by sending a message, calling an API or writing outside its trust boundary. The phrase ‘lethal trifecta’ doesn’t mean the software is evil, but that the architecture is dangerous. Private data supplies the prize, untrusted content supplies the attack path and external communication the escape route. If these features co-exist in one agent, prompt injection can turn a helpful assistant into an unwitting exfiltration channel.”
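Willison’s three conditions compose as a simple conjunction; here is a minimal sketch (all names hypothetical, not from any real agent framework) that flags an agent configuration exhibiting the full trifecta:

```python
from dataclasses import dataclass

@dataclass
class AgentCapabilities:
    reads_private_data: bool          # e.g. email, files, credentials
    ingests_untrusted_content: bool   # e.g. webpages, inbound messages
    can_communicate_externally: bool  # e.g. HTTP calls, outbound mail

def has_lethal_trifecta(caps: AgentCapabilities) -> bool:
    # The architecture is dangerous only when all three co-exist;
    # removing any one capability breaks the exfiltration chain.
    return (caps.reads_private_data
            and caps.ingests_untrusted_content
            and caps.can_communicate_externally)
```

The point of the conjunction is the mitigation it implies: dropping any single capability, not making the model "smarter", is what closes the exfiltration path.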

Ben Thompson: “Many of the biggest flaws from the original ChatGPT have been substantially mitigated, at least for verifiable use cases like coding: LLMs are much more likely to be right the first time, they reason over their results to increase their chances, and now agents actively verify the results without humans needing to be in the loop. That leaves one flaw: actually figuring out what to use these for.”

Asymco: “Apple turned 2 billion devices into the data center. Every iPhone, Mac, iPad gets distributed AI at a scale no server farm can match. While its rivals burn cash, Apple is doing the opposite. $90.7 billion in stock buybacks last fiscal year. Its competitors? Combined buybacks collapsed 74% from their peak. Apple didn’t miss the AI revolution. It just bet that the winners won’t be the ones who build the infrastructure. They’ll be the ones who own the customer and no one else on Earth owns the best customers.”

Thinks 1926

WSJ: “You can think of AI as a restaurant. The model is the chef. After it undergoes a period of intensive training, learning hundreds (or billions) of recipes and techniques, it is ready to begin taking orders. Inference is the day-to-day operation of the restaurant. Diners place their orders (often in the form of a query to a chatbot) and the chef prepares their meals (the chatbot’s response). Inference consists of two phases, known as prefill and decode. Prefill happens when a user enters a prompt, forcing the model to interpret the query by processing each word, symbol or image it contains. Decode is the process by which the model, using all it has learned in training, spits out a response to the query. The two phases of inference require different attributes from chips: Prefill demands more processing power, while decode requires more memory, in part because it has to draw on all the knowledge it has accumulated to serve up nice, piping-hot tokens to the user.”
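The compute/memory asymmetry the WSJ piece describes can be made concrete with a toy cost model (numbers illustrative, not vendor figures): prefill processes all prompt tokens in parallel and is dominated by raw compute, while decode must stream the full weight set from memory for every generated token and is dominated by memory bandwidth.

```python
def prefill_flops(prompt_tokens: int, params: int) -> int:
    # Prefill: roughly 2 * params FLOPs per token, with all prompt
    # tokens processed in parallel -- compute-bound.
    return 2 * params * prompt_tokens

def decode_bytes_moved(gen_tokens: int, params: int,
                       bytes_per_param: int = 2) -> int:
    # Decode: each new token requires streaming the weights from
    # memory once (fp16 here) -- memory-bandwidth-bound.
    return params * bytes_per_param * gen_tokens

# A hypothetical 7B-parameter model, 1,000-token prompt, 200 generated tokens:
P = 7_000_000_000
total_compute = prefill_flops(1_000, P)      # prefill cost in FLOPs
total_traffic = decode_bytes_moved(200, P)   # decode cost in bytes moved
```

This is why the two phases favor different chips: prefill rewards processors with high peak FLOPs, while decode rewards large, fast memory close to the compute.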

Tyler Cowen: “If strong AI will lower the value of your human capital, your current wage is relatively high compared to your future wage.  That is an argument for working harder now, at least if your current and pending pay can rise with greater effort (not true for all jobs). If strong AI can at least potentially boost the value of your human capital, you should be investing in learning AI skills right now.  No need to fall behind on something so important.  You also might have the chance to use that money and buy into the proper capital and land assets. So…WORK HARDER!”

FT: “Human intellect rests on three pillars: seeing (observing the world), doing (intervening in it) and imagining (simulating what might happen under different choices). Right now, artificial intelligence inhabits only one of these pillars. Expanding existing frontier AI models will not address this problem. The breakthrough that set off today’s frenzy was the transformer architecture, developed at Google and scaled up into large language models trained on much of the public internet and used to write text and code. Then came agents that stitch these models together into automated workflows. Now the focus is on “world models”, which try to capture the physical environment from vast streams of video and other inputs. World models are an important evolution from LLMs. This so-called spatial intelligence is being used to develop technology that can enable driverless cars and robotic factory workers.”

WSJ: ““We are, hands down, the sweatiest animal on the planet,” writes Bill Gifford in “Hotwired.” His splendid book is devoted to a topic—the health benefits derived from exposure to high temperatures—that could be tedious in the wrong hands…He begins “Hotwired” with an accessible tour through the biology and evolutionary history of perspiration. There are two kinds of sweat glands in mammals: apocrine and eccrine. The vast majority of those in humans are the eccrine variety; adults have between two and four million in all. These glands are concentrated on the hands and feet and are missing from only a few places on the human body (our lips, for example). Eccrine sweat glands perform a critical function: cooling the body via evaporation. Mr. Gifford characterizes this adaptation, which dates back millions of years, as “evolution’s nuclear weapon.””

Thinks 1925

Dylan Patel: “The biggest bottleneck is compute. For that, the longest lead time supply chains are not power or data centers. They’re actually the semiconductor supply chains themselves. It switches back from power and data centers as a major bottleneck to chips. In the chip supply chain, there’s a number of different bottlenecks. There’s memory, logic wafers from TSMC, and the fabs themselves. Construction of the fabs takes two to three years, versus a data center which takes less than a year. We’ve seen Amazon build data centers in as fast as eight months. There’s a big difference in lead times because of the complexity of building the fab that actually makes the chips. The tools also have really long lead times. The bottlenecks, as we’ve scaled, have shifted based on what the supply chain is currently not able to do. It was CoWoS, power, and data centers, but those were all shorter lead time items. CoWoS is a much simpler process of packaging chips together. Power and data centers are ultimately way simpler than the actual manufacturing of the chips. There’s been some sliding of capacity across mobile or PC to data center chips, which has been somewhat fungible.”

Tim Wu: “Until now, companies like Meta and Google have relied on some powerful legal defenses. They do not deny that their products can be highly absorbing. But so, they contend, is a good novel, and no one suggests that a beach thriller is a public health hazard; a novel is speech protected by the First Amendment. Moreover, the companies note, unlike the publisher of a novel, which can still be responsible for defamation, social media companies cannot be held responsible for what appears on their platforms, thanks to Section 230 of the Communications Decency Act of 1996, which was intended to protect platforms from being destroyed by tort lawsuits. But changing circumstances have undercut these arguments. For one thing, if the platforms in the 1990s and 2000s were passive carriers of others’ content (albeit filtered by human moderators), they are now active purveyors. The platforms use aggressive tactics to keep users compulsively engaged — algorithmic recommendations, infinite scroll, auto video play and intermittent reinforcement (in which likes, comments and refreshed content are rewarded unpredictably rather than consistently). This goes far beyond merely hosting and moderating third-party content.”

Tom Rothman: “Windows, in entertainment lexicon, refers to a period during which a product — whether a film, TV show or sporting event — is available exclusively to the public in one place. For movies, these windows occur sequentially: first in movie theaters, then on home video, then on pay TV and streaming, then eventually on free TV. This system was meant to ensure that if you made a good film at an appropriate budget and the audience liked it, it would usually be profitable. Of course, if you made a bad film, all the windows in the world won’t save you.”

Jason Lemkin: “Let me ask you a simple question. When was the last time you were genuinely excited to buy a pre-AI SaaS tool? Not “fine with it.” Not “it gets the job done.” Genuinely excited. The way you were excited the first time you saw Slack replace email threads. Or the first time Figma made you forget Sketch existed. Or the first time Notion made you feel like your entire company’s knowledge actually lived somewhere…Now let me ask you a harder one: When was the last time you felt like you were overpaying on a renewal for a pre-AI SaaS tool — and seriously thought about cancelling?…Now ask yourself.  How do your customers feel about both these points?  Be brutally honest…Your product probably isn’t magical anymore. And your customers know it.”

Thinks 1924

Roger Rosenblatt: “I try to let at least five people a day know that I’m thinking of them and wishing them well. Sometimes it’s in person; often I send a little email. I don’t say much more than that to my five. Thinking of you. Hope you’re thriving. That sort of thing. Small talk, I know. Yet small talk powers human activity and human feelings much of the time. The key to good small talk is to believe, if only for a moment, that it is just as urgent and consequential as any philosophical conundrum or national event. At the core of this belief lie kindness and tenderness. People with whom you make small talk are made aware that for at least one moment in their lives, they have a safe home with you, a place where they are welcome just as they are. They do not need to earn your attention. They receive it simply by existing.”

WSJ: “What always struck [Jamie Dimon] about Buffett’s writing, Dimon said, was his talent for explaining complex financial concepts in plain English. “I write it for people like my sisters,” Buffett told the Journal in 2016. “They’re smart, they read a lot, they have a lot invested in the company. They don’t know all the financial jargon, but they don’t want to be treated like 5-year-olds.” “I’ve always tried to emulate that,” Dimon said. Buffett’s letters could continue for more than a dozen pages, and their readership extended beyond Berkshire shareholders. Indeed, many of the Oracle of Omaha’s oft-quoted aphorisms found in past annual letters are applicable to investors in just about anything. His wise words included, “We simply attempt to be fearful when others are greedy and to be greedy only when others are fearful,” and “never bet against America,” among others.”

NYTimes: “As wearable devices like Fitbits and Apple Watches proliferated over the past decade or two, many people began calibrating their health and wellness regimens to improve how their bodies function. But in recent years, more and more gadget geeks have focused such biohacking efforts on their performance not just at the gym but at the office as well. The once eccentric quest for immortality is becoming a fixture of the 9-to-5 hustle. The trend has spawned a cottage industry of coaches and gurus who train white-collar workers to raise their H.R.V., or help companies train employees to boost theirs. Software makers have created dashboards that allow a company’s H.R.V. coach to track and analyze employees’ data, and to share teamwide averages with company managers.”

WSJ: “Research has long indicated that men generally recover faster from pain and are less likely than women to develop chronic pain. Now scientists have a better idea why. When you get injured, your immune system sends certain white blood cells to calm pain-sensing neurons and inflammation. In men, those white blood cells are more likely to produce a pain-resolving molecule that can quickly quell the ache, according to a recent study in the journal Science Immunology. The testosterone hormone is probably what drives the increased production of that pain-resolving molecule, known as interleukin-10, the researchers found.”

Thinks 1923

Nathan Lambert: “The path forward for open [AI] models is to solve different problems than the frontier labs, to find places where open models are effectively free alternatives, to show ways of using specialized models that the closed labs cannot offer. The world of open models needs to embrace creativity, before building powerful AI systems grows too expensive and prices out many of the prized open labs of today.”

Ben Thompson interview with Nvidia CEO Jensen Huang about Accelerated Computing. “I think that that’s the big idea, that we need to help customers not just build chips, but build systems and then after we build systems, not just build systems, but build AI factories. AI Factories has a lot of software inside, it’s not just our software, it’s a ton of software for cooling management and electricals and things like that, and redundancies and a lot of it is over-designed, it’s over-designed because nobody talked to each other.” More: “[AI] is [a] five-layer cake: Energy → chips → infrastructure → models → applications.”

FT: “The market for personal travel planners, high-end club memberships, private wealth managers and educational consultants has in recent years grown by high single to double digits. Fractional aviation subscription services (think NetJets) are growing by about 10 per cent a year. Those with Clear (the airport service that speeds you through security if you must fly commercial) have tripled since 2022. It’s all part of a burgeoning “concierge” economy that caters to affluent consumers who don’t wait — or want — for anything…Concierge services are about convenience and access, but they are also about bringing ease and luxury to areas that have become digital commodities or suffer from high levels of consumer dissatisfaction, such as healthcare or financial services.”

Ajay Shah: “We in India have locked down one question. We know that we want control of our own monetary policy with an inflation target. Monetary policy — the short-term interest rate of the economy — will be devoted only to the pursuit of consumer price index stability at 4 per cent. Everyone in India has got this point. To see the inflation-targeting reform through, we need the other two pieces. The government has to step out of activities on the exchange rate, and it has to step out of interference in the capital account. Both kinds of interference create contradictions, induce mistakes by firms, and hinder Indian economic growth. Embracing the automatic stabiliser of the open economy will give us a long-term, stable, harmonious arrangement that is best-suited to foster Indian success.” 

Thinks 1922

John Cochrane: “What should government do about rising energy prices? Nothing. Or, more concretely, get out of the way, ease restrictions, and let the market work its magic of sending energy to the most economically important uses while encouraging others to save, substitute or provide new energy. Keep inflation under control, and don’t induce financial problems.”

NYTimes: “The new business models have made it more difficult for investors to evaluate the businesses of software companies, Ms. Saiprasad said. Unlike revenue from seat-based pricing, the money that other models bring in is less predictable, she added. The shift has spooked Wall Street. Since October, anxieties over A.I. have erased roughly $3 trillion in market capitalization from software companies, or a third of the value of the S&P 500 index’s software sector. So far this year, shares of Salesforce and ServiceNow have fallen about 30 percent. “This is an industry that people believed is extremely durable and that, once you get a customer, you have that relationship for the next one, three, five, 10 years,” Mr. Zukin said. “Now, with the changes happening, you don’t even know what’s going to happen in two months.””

OpenAI: “OpenAI was the fastest technology platform to reach 10 million users, the fastest to 100 million users, and soon the fastest to 1 billion weekly active users. Within a year of launching ChatGPT, we reached $1B in revenue. By the end of 2024 we were generating $1B per quarter. We are now generating $2B in revenue per month. At this stage, we are growing revenue four times faster than the companies who defined the Internet and mobile eras, including Alphabet and Meta.”

Jordi Visser: “For more than a decade, equity markets were built around a simple premise: durable franchises deserved durable multiples. Investors weren’t just buying earnings. They were buying time. Time to compound. Time before meaningful competition arrived. Time protected by scale, distribution, switching costs, and capital intensity. Time was the moat. The entire architecture of modern markets reinforced that belief. Passive flows concentrated into the largest platforms. Growth indices tilted toward scalable digital economics. Valuation frameworks stretched duration assumptions further into the future. A narrow cohort absorbed more and more of the index because the math appeared rational. Scale begot scale. But something subtle has changed. AI does not simply disrupt business models. It compresses time…AI lowers the cost of building businesses. But it raises the bar for sustaining advantage. More companies can start. Fewer can dominate.”

Thinks 1921

Arnold Kling: “The AI models find patterns that a human would not have spotted. That is why it is wrong to think of them as like a child savant who studies the encyclopedia. As AI models improve, they are going to be better able to find patterns that we as humans would have found. In addition, they will find patterns that we would not have found, and increasingly these will be interesting. At the same time, they will hallucinate less. It is as if their acid trips come with greater and greater clarity over time.”

WSJ: “People who would never post an Instagram video to hawk nutritional supplements or teeth-whitening strips are increasingly striking deals with brands nonetheless. Just don’t call them influencers. They are the “alternatively influential,” according to Figures, a new representation firm for public thinkers and tastemakers who have real clout in their own demesnes despite only modest internet followings—in comparison to the massive online pull of celebrities and big-time creators, the company says.”

Bloomberg: “Decades of research on how markets react to layoff announcements have established a consistent pattern: Investors punish companies that frame cuts as a response to problems. But when a company frames the same cuts as proactive restructuring, the penalty disappears. The stated reason for the layoff matters more than the fact of the layoff. AI has become the most powerful proactive frame available. “We’re restructuring around AI” is a growth signal. “We over-hired during the pandemic and revenue softened” is an accountability signal. In a market where artificial intelligence is the black hole around which everything orbits, swathing your cuts in AI-labeled wrapping paper lets you tap the valuation boost of an AI adoption story. The technology doesn’t need to work if the belief that it will does.”

FT: “Given the speed of recent rollouts, China will probably be both the testing ground and a leading indicator for agentic AI. In the US, the different parts needed to run AI systems are often controlled by separate companies. AI model developers, cloud providers and apps are separated as are payments, commerce and messaging services. A similar dynamic exists in Europe, where regulatory constraints can make integration harder. That fragmentation makes agentic AI harder to deploy at scale, as systems must navigate across multiple providers. Until now, much of the conversation about who leads the AI race has focused on model capability: who scores highest on controlled benchmarks. The US still holds the lead in models. But once AI begins to act, benchmark scores matter less than the ability to get things done. By that standard, China may already have an edge.”

Thinks 1919

Aaron Levie: “Now, the path forward is to make software that agents want. While the biggest users of agents tend to be developers or at least highly technical users that often will have their own preferences of tools, in a world of agents doing any type of task for knowledge workers, this type of preference will slowly drift away. Short of an enterprise already having a standard, agents will then be in the driver’s seat for what gets adopted for any particular workflow. This could mean the tools they sign up for, the code that they write, the libraries that they use, the skills they leverage, and so on. The platforms that are easier for agents to adopt, and solve the agent (and user’s) problems the best, will get ahead far faster than those that don’t. Agents won’t be going to your webinar or seeing your ad; they’re just going to use the best tool for the job, and you’ll want it to be yours.”

NYTimes: “At a moment when faith in markets is fraying and faith in governments is strained, [Adam] Smith’s message is neither to worship the invisible hand nor to wish it away. It is to discipline power, defend competition and keep the focus where he always insisted it belonged: on improving the lives of ordinary people.”

Andy Kessler: “Think of agents as autonomous digital bots that roam up and down a company probing and executing its business process. How items are sold, deals are closed, or inputs are procured. The dream is to have successful agents that efficiently and automatically restructure the organization to optimize the business constantly. Possible? Eventually. But first agents need to understand how the company really works. They need the “context”—a company’s living, breathing ecosystem with “decision traces,” the history of every decision made, every prospect considered, every process used or discarded. Things like “we were a close second and lost that deal but are ready to step in.” Where is that snippet stored today? In someone’s memory. A context graph captures the sequence of decisions—the why. Not a snapshot like an org chart, but a movie with millions of potential plots.”

NYTimes: “Now coding itself is being automated. To outsiders, what programmers are facing can seem richly deserved, and even funny: American white-collar workers have long fretted that Silicon Valley might one day use A.I. to automate their jobs, but look who got hit first! Indeed, coding is perhaps the first form of very expensive industrialized human labor that A.I. can actually replace. A.I.-generated videos look janky, artificial photos surreal; law briefs can be riddled with career-ending howlers. But A.I.-generated code? If it passes its tests and works, it’s worth as much as what humans get paid $200,000 or more a year to compose. You might imagine this would unsettle and demoralize programmers. Some of them, certainly. But I spoke to scores of developers this past fall and winter, and most were weirdly jazzed about their new powers.”

Thinks 1918

NYTimes: “Despite or even because of its omnipresence, social media is evolving. Eric Goldman, a professor at Santa Clara University School of Law, anticipates a future where social media is transformed into a thousand channels broadcasting at you. It would be reminiscent of cable television circa 1995: ubiquitous and a little bland. “The whole point of social media is talking to each other,” Mr. Goldman said. “If that becomes too legally risky, it will still be media. It just won’t be social.” All future engagement will be with a machine. On Facebook, content generated by artificial intelligence is already being prioritized over friends and family.”

Business Standard: “Consider this. India now has over 900 TV channels, thousands of newspapers and over 860 radio channels. We make more than 1,600 films in a normal year. It has been over a decade since streaming took off and six years since short videos did. The last two years have added micro-dramas to the list. With more than 60 video streaming apps and a dozen music streaming ones, there is now an obscenely rich spread on tap. Here’s a sense of the scale: YouTube uploads 500 hours of video every minute. This column only talks of the 523 million Indians who use broadband internet-connected laptops, TVs or phones, making for an over-served, pampered market…How do you tell a story to this audience?”

The Top 100 Gen AI Consumer Apps: “ChatGPT leads but the race for the “default AI” is on. ChatGPT is still far and away the largest consumer AI product. On web, it is 2.7x larger than the #2, Gemini (measured by monthly traffic) — and on mobile, it is 2.5x larger (measured by monthly active users). ChatGPT has seen weekly active users grow by 500 million people over the past year to 900 million today. This is especially impressive given growth is difficult to maintain at scale — over 10% of the global population now utilizes ChatGPT every week.”

WSJ: “In their current form, tokenized stocks are digital tokens that represent shares of publicly traded companies on the blockchain. By design, each token is equivalent to a single share of stock. Most of the tokens trading today are technically derivatives and not stocks, at least at the moment, and thus don’t confer the holder all of the rights of ownership that shares provide—even if they track those shares’ prices. In the future, though, tokens are expected to grant those rights, including dividend payouts and the ability to vote on shareholder proposals.”

Thinks 1917

WSJ: “Instead of paying humans to join focus groups and complete surveys, Aaru uses thousands of AI agents, or bots, to simulate human responses. It feeds demographic and psychographic information into its models to create human profiles that match clients’ needs, and the results those bots spit out are being used for product development, pricing, identifying new customers and political polling.”

Arnold Kling: “The human should not have to learn how to prompt the AI. The AI should learn how to prompt the human.”

TheMaxSource: “Eighty one percent of consumers need to trust a brand before they’ll consider buying from it. Not interested. Not aware. Trust first, transaction later. The math gets sharper when you look at what drives that trust. User generated content gets 28% higher engagement than branded content. Videos about your product from actual customers get viewed ten times more than your official ads on YouTube. Translation: people trust other people talking about your stuff more than they trust you talking about your stuff.”

Sandeep Goyal: “Marketing has survived print-to-broadcast, broadcast-to-digital, desktop-to-mobile. Each shift created winners and casualties. This one goes further. It does not merely change the channel. It changes the decision-maker. Yes, AI is upending marketing. But the real upheaval is this: The future customer may not blink. May not feel. May not be persuaded by nostalgia. And yet, paradoxically, the brands that will thrive are those that double down on the one thing machines cannot manufacture — meaning. AI isn’t just upending marketing: It’s rewriting who the customer is.”