Thinks 1937

Reid Hoffman: “What is genuinely true (and exciting) is that software must now incorporate AI generativity as a core feature of its value proposition. The new competitive moat is built from how well a software system’s AI is tuned to the specific needs of its category. A CRM company that ships a deeply intelligent set of agents that iteratively refine your sales workflow, that understand your pipeline more comprehensively than any human analyst, and that come with powerful backend libraries purpose-built for that domain has an extremely well-crafted moat. The incumbents who understand this will evolve. The ones who don’t will be the ones who actually die. But even they will die more slowly than most assume.”

NYTimes: “A long time ago, in England as well as America, people understood a constitution to be like a garment, tailored to fit the body of a nation and intended to “align the character of the land and people it governs with an appropriate frame of government.” This old understanding was universal among the framers, whatever else they disagreed about. So, too, [Mark] Peterson reminds us, was the belief that when a constitutional relationship goes awry — when the garment no longer fits the body — the people have the power, right and responsibility to alter it. Whether we possess the political will to create a new constitutional order better suited to address the challenges of our time seems entirely less certain.”

WSJ: “Since the 1970s, engineers have speculated this might allow humans to store vast quantities of energy more or less indefinitely. Two problems: At the time, renewable energy cost too much to make it affordable, and adding water usually turns quicklime into an unwieldy goop. A 10-person startup called Cache Energy, working out of a 10,000-square-foot facility in Champaign, Ill., says it has figured out how to make such a cement battery durable, efficient and affordable. The company’s approach is to form cement into tiny balls, each about the size of a kernel of corn. Its engineers add a binding agent—secret though widely available, they say—to keep the balls in shape during the discharge and recharge process. Recharging the pellets requires heat, generated from electricity. When it’s time to discharge that stored energy, adding the right amount of water causes the pellets to release enough heat to generate temperatures up to 1,000 degrees Fahrenheit, says Cache’s founder, Arpit Dwivedi.”

Julia Angwin: “Compensating people for the harm caused by their products is just the silver lining. The real win would be if the social media giants were finally forced to design less harmful products. I’m talking about features like infinite scroll, which entices people with seemingly endless content, and autoplay, which automatically starts videos before our eyes. And of course, there are the algorithms that spread misinformation and amplify outrage. These are all techniques Big Tech uses to keep us staring at the screen for as long as possible. Too bad if its profitable practices exact a terrible cost on its users and on our society.”

Thinks 1936

Donald Boudreaux: “It’s no astonishing coincidence that Wealth of Nations appeared in the same year as America’s Declaration of Independence, Thomas Jefferson’s manifesto, which Milton and Rose Friedman described as “the political twin of Smith’s economics” (1989). Both works are products of the liberalism that was just then beginning to free humankind from its ages-old self-imprisonment within an ideology that treats most individuals as inferior to the nobility, treats commerce with contempt, and treats innovations that threaten traditional economic arrangements as intolerable. The fruits of Smith’s and Jefferson’s quills not only reflected the liberalism of the age; they also nourished that liberalism. Indeed, perhaps no other single work—except maybe John Locke’s Second Treatise on Government—has done as much as have Wealth of Nations and the Declaration to advance the cause of liberalism.”

Tyler Cowen book “The Marginal Revolution: Rise and Decline, and the Pending AI Revolution”: “Tyler traces the birth, triumph, and quiet decline of marginalism — the idea that made modern economics possible. Beginning with the 1871 Marginal Revolution and ending with the AI tools transforming research today, this is a book about how ideas are born, why they take so long to arrive, and what happens when machines begin to see around corners that humans cannot.”

WSJ: “The question isn’t whether AI is cognitively dangerous, the question is whether you’re using it as a crutch or as a coach. Memory consolidates through a process called elaborative encoding. The deeper and more actively you process information, the stronger the trace. Shallow engagement (skimming, passively consuming, letting AI summarize for you) leaves faint impressions. Active struggle—retrieving, connecting, questioning—builds durable knowledge. This is sometimes called the “desirable difficulty” principle, and it’s one of the most robust findings in cognitive psychology. Easy learning is often the least sticky. Struggle is what makes knowledge hold. AI, when used badly, is a desirable-difficulty machine running in reverse. It removes friction, smooths edges, and hands you the answer before you’ve had a chance to reach for it yourself. Every time you ask it to summarize a book you haven’t read, recall a fact you could have retrieved yourself, or draft a thought you were about to form, you’ve skipped a cognitive rep.”

FT: “Private credit is distinct from the public market credit plotted out in our treemap. Both are credit, but the public bit is typically tradeable bonds that pay fixed interest rates. That’s all the red stuff in the chart. Meanwhile private credit doesn’t trade (at least not much), usually pays a floating rate — say, 5 percentage points above benchmark interest rates — and is covered in stickers saying how senior it is in the borrower’s capital structure. In other words, it ranks more highly if the borrower collapses. It might even have some bespoke terms and conditions attached to protect lenders, and include specific ring-fenced assets supposedly backing the loan.”

Thinks 1935

Bloomberg: “In the early 1960s, the RAND Corporation, a think tank based in Santa Monica, California, popularized the idea of red and blue teams. The red team had to think like the Soviet military and probe the US for weaknesses. The home team, or blue team, had to counter the red team, with the two often engaged in prolonged war games. The idea is simple: To fight a formidable adversary we need to think like the adversary. Think red or be red. Since then, red teaming has spread to government, business and beyond. The CIA created a new red cell in response to the September 11 attacks, and the US military extended its use of red teaming after the failures of the Iraq War. Red teaming is standard in the world of cybersecurity, where companies use internal hackers to probe their digital infrastructure. Google operates a dedicated AI red team to stress-test large language models for vulnerabilities such as the theft of sensitive data (“data exfiltration”) or using prompts to ignore ethical guardrails (“jailbreaking”). It is time to apply the methodology of red teams to the key institutions of liberalism.”

Cal Newport: “The growth of A.I. has brought new cognitive concerns. A study from January, based on surveys and interviews with more than 600 participants, revealed a “significant negative correlation between frequent A.I. tool usage and critical thinking abilities.” Another recent study, which tracked the brain activity of research subjects who were writing with the help of large language models, found that “brain connectivity systematically scaled down with the amount of external support.” The loss of our ability to think is a big deal. Close to 40 percent of the U.S. gross domestic product comes from so-called knowledge and technology-intensive industries, from aerospace manufacturing to software development to financial and information services. Companies in these fields alchemize advanced human thought into revenue; as we weaken our brains, we also threaten to weaken our economy. It is notable that productivity growth in the private business sector stagnated during the same 2010s period when technology became measurably more distracting. A diminished ability to use our brains also has concerning personal impacts. Thinking is what lets us make sense of information in a complicated world.”

WSJ: “Agents shouldn’t have human names. They shouldn’t be on org charts. And they shouldn’t be given a specific job title, Nickle LaMoreaux, chief human resources officer at IBM, said…at the WSJ Leadership Institute’s Chief People Officer Summit in Menlo Park, Calif. “We learned this the hard way,” she said. IBM used to have a series of agents that went by names like Harry, Hermione, Charlie and Sherlock. But it fell into a trap of focusing too much on each agent’s individual use cases rather than using them for more impactful large-scale process re-engineering. “Too many CPOs are getting so hung up on: what does this agent do, what does this AI do?” she said. The biggest bang for your buck, she said, isn’t in individual assistant-type agents that, say, help write emails. It’s in integrating AI into enterprise workflows.”

FT: “Since February, Chinese AI models made by groups such as DeepSeek and MiniMax have overtaken US rivals in token consumption, according to OpenRouter data, which tracks these units of text, code or data processed by large language models.  The shift points to a deeper change in the AI race, with Nvidia’s Jensen Huang saying this month that the production and use of the digital units will drive the AI economy. Because developers are charged per token, it doubles as both a proxy for adoption of models and a pricing battleground between AI companies. As AI agents, such as those built on the open-source platform OpenClaw, consume vastly more tokens than earlier chatbots, the ability to cheaply produce tokens is reshaping global competition — and giving China a new edge.”

Thinks 1934

GeekWire: “Two decades [after its launch], AWS generates nearly $129 billion a year in revenue. That’s enough to rank in the top 40 of the Fortune 500 if it were a standalone company, ahead of the likes of Comcast, AT&T, Tesla, Disney, and PepsiCo. Companies such as Netflix, Airbnb, Slack, Stripe and thousands more have built massive businesses on its platform. When AWS goes down, it ripples across the web, taking down apps, websites, and services that most users never knew were on a common infrastructure. But the business that defined cloud computing — bankrolling Amazon’s expansion into everything from streaming to same-day delivery — is now grappling with the most significant challenge since it launched. The rise of AI has upended the industry, empowering Microsoft, Google and others, and creating competitive dynamics that seem to change every month. For the first time, AWS faces questions about its long-term ability to lead the market it created.”

WSJ: “Airlines are retrofitting their passenger jets or buying new ones that have a larger share of premium seats. Their goal is to squeeze more revenue out of each seat flown, catering to travelers willing to pay up for lie-flat and extra legroom seats…Since January 2020, the number of scheduled business and first-class seats on domestic flights has grown 27%, according to research from aviation data firm Visual Approach Analytics. That is nearly three times the growth of scheduled economy seats, which rose just 10% over the same span.”

NotBoring: “The world is a place where unexpected futures unfold, but in somewhat predictable ways. As humans, we can envision almost all of them with roughly the same amount of effort and time given to each thought. Computers can’t. It’s no wonder traditional computing struggles with this complexity. Imagine anticipating and coding each and every action, as well as the interactions between all of those actions. Mathematically, in a traditional engine, simulating N fans is at least an O(N) or O(N²) problem. Each person, flag, chair, and ball must be explicitly calculated — and really, the interactions between them need to be calculated, too. In robotics, machines must respond to situations in the real world in the same amount of time, regardless of their complexity, even though, in traditional computing, different situations can take wildly different amounts of time to simulate. This has been a major bottleneck for robotics and embodied AI progress. World Models are a solution to that problem.”
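The scaling point in the quote can be made concrete with a toy sketch (mine, not NotBoring’s): in a traditional engine, every pairwise interaction among N entities must be computed explicitly, so the work per timestep grows quadratically.

```python
# Toy illustration (not from the article): counting the pairwise
# interactions a traditional engine must compute explicitly each
# timestep for N entities -- the O(N^2) term in the quote.

def pairwise_interactions(n: int) -> int:
    """Number of unordered pairs among n entities: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, pairwise_interactions(n))
# 10 entities -> 45 pairs; 1,000 entities -> 499,500 pairs
```

Going from 10 fans to 1,000 multiplies the entity count by 100 but the interaction count by roughly 10,000, which is the bottleneck the quote attributes to traditional computing.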

NYTimes: “Experts are increasingly finding that having a powerful posterior isn’t just about looking good in jeans. The glutes are the largest muscles in our body and are closely tied to stability, balance and aging well. They act like shock absorbers when we walk or climb stairs, and building a strong butt can help prevent and manage back pain at any age and reduce the risk of falling for older adults.”

Thinks 1933

FT: “It’s hard to think of many other chief executives who would so vehemently deny the lure of lucre, even in the unlikely event they had won a Nobel Prize. Then again, it is hard to think of many people quite as singular as [Demis] Hassabis: the London-born son of a Greek Cypriot father and a Chinese Singaporean mother who emerged as a child chess prodigy; a teenage video games creator who turned down a £500,000 offer to skip university; a successful entrepreneur who netted $136mn from selling his AI lab to Google in 2014; and the first Nobel laureate in modern times to win the prize for research conducted at a company they co-founded. Impressive though these accomplishments may be, they are not enough for the 49-year-old Hassabis, who is still driven to achieve a lot, lot more. He believes his most significant accomplishment, his life’s mission, still lies ahead of him. It is to achieve what his DeepMind co-founder Shane Legg has called artificial general intelligence: human-level AI across all cognitive tasks, which would be an extraordinary milestone in human history if it were ever accomplished. As the British mathematician IJ Good once argued, the creation of such an “ultraintelligent” machine might mark humanity’s “last invention” because machines would then be more capable of inventing everything else.”

WSJ: “New studies demonstrate what should be obvious: Universal basic income programs kill initiative…[Recently], economists Kevin Corinth and Hannah Mayhew of the American Enterprise Institute released a survey of 122 basic-income pilots that took place between 2017 and 2025 in 33 states and the District of Columbia. They reported mixed results. Employment increased in some programs and decreased in others, and the role of the pandemic was difficult to assess. The pilot programs varied “in their designs, data collection and study quality,” and only 30 of them provided employment outcomes. Hence, the authors counsel against sweeping policy conclusions based on the results. Most experiments were small, and the evaluations “rely exclusively on survey data and are thus subject to reporting bias and non-response bias.””

Brian Doherty: “Libertarianism is based in economic theory, as economic science teaches how workable order can arise from the seeming chaos of free actions uncoordinated by a single outside intelligence, and how government intervention is apt to upset that balance. It is based in moral theory, positing what is or is not right when it comes to a human being, or group of human beings, using force or coercion on another. It is based in political theory, exploring the likely effects of granting human beings power over others. It is ultimately a delicate ecological balance of all these, with history in the mix as well, to further understand how the constant struggle of liberty versus power tends to play out in the real world.” [via CafeHayek]

WSJ: “The new weapons of global power are oil, rare earths and microchips.”

Chris Walker: “AI is not only unlikely to automate the deepest science anytime soon; it is actively reshaping the incentive landscape of the science we have, tilting effort toward well-explored territory and away from the data-sparse questions most likely to produce genuinely new scientific theories. This exploitation trap, and the simultaneous diffusion of AI tools to practitioners outside the academy, sets up the case for Mokyr. His framework suggests that the answer depends less on how powerful the AI becomes than on whether the right institutional infrastructure exists to channel AI’s capabilities into positive feedback loops. What that infrastructure looks like in practice (open data channels, incentives to share failures and surprises, mechanisms for connecting practitioners to researchers) is the subject of the essay’s second half. The original Industrial Enlightenment, as Mokyr calls it, did not merely produce discoveries. It produced the sustained, compounding growth in useful knowledge that transformed medicine, agriculture, manufacturing, and living standards. A second one could do the same, faster. But it requires deliberate construction, and the existing incentive structures that AI is reinforcing (who shares data, who hoards it, which questions get funded) will only harden with time.”

Thinks 1932

WSJ: “The number of people we consider close friends changes over time, peaking in our teens and early 20s and shrinking as we get busier with kids, work and aging parents. With less free time, we tend to become more selective about who we share it with, focusing on the most meaningful connections. Many of us lose friends over time. People drift away, physically and emotionally. Jeffrey Hall, professor of communications studies at the University of Kansas, doesn’t have a magic friend number, but says there are downsides to extremes. Having no friends can make a person terribly lonely. Having only one friend that you depend on for everything can leave a person floundering if something happens to that person. On the other hand, “too many to keep track of and care for thins out your time for everyone,” he says…On average, Americans have between three and five close friends.”

Bloomberg: “It is tempting to believe that AI is the ultimate revolution because we are in the middle of it. But every technology that ever crossed a hidden threshold felt unique to the people living through it. The S-curve for AI will eventually flatten, because all S-curves do. And when it does, the next revolution will not be more AI. It will be something else, something currently sitting on the flat bottom of its own S-curve, where adoption looks negligible and the technology looks like a niche curiosity. Its builders will be looking at moderate projections and assuming that they understand their market. They won’t. The demand will be latent below a threshold that, when crossed, will make their forecasts look like a rounding error. They will not see it coming. The people building the next revolution never do.”

WSJ: “Quantum computing is today’s Manhattan Project… The field of quantum computing could make it possible to crack encryption, create innovative technologies, and discover new drugs. Classical computers process information as ones or zeros in bits. A quantum computer uses qubits—shorthand for quantum bits—to harness the behavior of subatomic particles, which can exist in multiple states at once. That lets it explore vast numbers of possibilities simultaneously, a capability that could accelerate artificial intelligence development.”
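A rough illustration of the bit/qubit contrast in the quote (my sketch, not the WSJ’s): an n-bit classical register holds exactly one of 2^n values at a time, while describing an n-qubit state requires 2^n complex amplitudes at once, which is why the state space grows so quickly.

```python
# Rough illustration (not from the article): the state space for
# n bits or n qubits has 2**n points. A classical register occupies
# one point at a time; a qubit register is described by an amplitude
# for every point simultaneously.

def state_space(n: int) -> int:
    """Number of basis states for an n-bit or n-qubit register."""
    return 2 ** n

for n in (8, 32, 64):
    print(n, state_space(n))
# 8 -> 256; 64 -> 18,446,744,073,709,551,616
```

The exponential growth on the qubit side is the “vast numbers of possibilities” the quote gestures at, and also why classically simulating large quantum machines becomes infeasible.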

Mint: “The business of media and advertising services—buying and planning ad inventory, creating ads, building campaigns, and other projects for a brand’s sales and marketing—has been upended over the last decade. Advertising is dominated by digital channels, which are multiplying every few days. Consumer attention is ever-fragmented, spread between hundreds of forms of media and distribution channels. Finally, the work that a traditional advertising agency did is slowly getting commoditized or co-opted by newer rivals, including information technology (IT) services and consulting rivals such as Accenture Song, Capgemini Invent, Deloitte Digital and Infosys’ Aster.”

Thinks 1931

WSJ: “In any business, focus is an edge. But it’s especially valuable in the business where the possibilities are infinite and the stakes feel existential. And when companies can build almost anything, the trickiest thing is knowing what not to build.”

R Gopalakrishnan: “Thirty-five years ago, I assumed chairmanship of Unilever Arabia. I learned about leadership intuition during that stint. Unilever had expended $15 million in researching a product to wrest market share from its competitor in Arabia. I was impressed with the overpoweringly rational plan. Upon execution, my company would have to spend another $100 million in fresh capital and marketing investment. Then I explored the truthiness of the plan by walking the Arab bazaars — from Gizaan to Tabuk, Buraidah to Hofuf. My intuition revealed weaknesses of the mathy plan. I discussed anew with colleagues, and, together, we modified the plan!  That experience emphasised that if niyat is right, then truthiness matters. I captured the lessons in my first book, The Case of the Bonsai Manager! Will truthiness be even more important in the future? Is there an essential prerequisite for truthiness to have a better chance of success? Yes, first, niyat must be right and transparent.”

CNBC: “The perks of working in Silicon Valley have long included high salaries. Now, some engineers may be offered a new incentive: artificial intelligence tokens. Nvidia CEO Jensen Huang…floated a novel compensation model that would give engineers a token budget on top of their base salary, effectively paying them to deploy AI agents as productivity multipliers. Tokens, or units of data used by AI systems, can be spent to run tools and automate tasks and are becoming “one of the recruiting tools in Silicon Valley,” Huang said…”I’m going to give them probably half of that on top of [their base pay] as tokens … because every engineer that has access to tokens will be more productive.””

Peter Diamandis: “Software engineers have always been the rate-limiting factor for every startup I’ve invested in. You can never hire enough. The Fortune 500 barely gets any – they all flow to Silicon Valley. Now you can buy intelligence on a metered basis. Pay per token. No recruiting, no vetting, no retention, no equity. Just intelligence as a utility. Consumers pay $20/month. Enterprise power users pay $200/month. And companies are spending millions per year because the ROI is there.” [via Arnold Kling]

Thinks 1930

WSJ: “The most basic way to counter AI sycophancy is to ask open-ended questions. If you ask an AI, “What energy drink will keep me awake all night so I can finish this report?” it will likely recommend a pantry’s worth of caffeine-laden soft drinks, never questioning the plan. Ask in a way that keeps several options on the table—“How can I complete a big report by tomorrow?”—and you’re much less likely to receive an endorsement for your plan to power through…You can push the AI even further by making a habit of asking for several options whenever you’re getting help on a decision. Ask for three different outlines for a presentation you’re developing; ask for 20 ways to divide up household responsibilities. Then resist the urge to zero in on the option that confirms your own instincts. Instead, get the AI to compare the pros and cons of your go-to path with an option that is the opposite of your usual inclination.”

Forbes: “At Nvidia’s annual developer conference, Huang dubbed 2026 the year of AI inference—the process of using AI rather than training it. He then announced a new product to integrate Groq’s specialized LPU chips—known for doing inference extra fast—with Nvidia’s newest Vera Rubin generation of GPUs. The subtext: the “GPU does everything” era is colliding with the part of the market that cares less about training bragging rights and more about cost, latency, and throughput at scale. This is Nvidia, the patron saint of the GPU monoculture, effectively blessing a heterodox idea: sometimes you want something that isn’t a GPU.”

Jerry Neumann: “In 1973, the evolutionary biologist Leigh Van Valen proposed what he called the Red Queen hypothesis: in any ecosystem, when one species evolves an advantage at the expense of another, the disadvantaged species will evolve to offset that improvement. The name comes from Lewis Carroll’s Through the Looking-Glass, in which the Red Queen tells Alice, “it takes all the running you can do to keep in the same place.” Species must constantly innovate with numerous and varied strategies just to survive the innovative strategies of their rivals. Similarly, when new startup methods are quickly adopted by everyone, no one gains a relative advantage, and success rates stay flat. To win, startups must develop novel, differentiating strategies and build sustainable barriers to imitation before competitors can catch up. This tends to mean that winning strategies are either built in-house (rather than found in published works that anyone can read), or they are so idiosyncratic that no one else would think to copy them.”

Andy Weir: “When I wrote “Project Hail Mary,” it was believed that there was an exoplanet very, very close to a star system called 40 Eridani. (Now it looks like the planet might not actually be there at all.) For the plot, I needed life based on liquid water. So I asked, how can you have liquid water on a planet so close to its star? The water would boil off unless there was a high atmospheric pressure. Drive up the pressure, and you drive up the boiling point of water. So I knew the planet had to have a thick atmosphere and really hot water. A star will blast the atmosphere off a planet that’s too close. It helps if the atmosphere is made of heavy molecules, like Venus with its carbon dioxide. For Erid, I decided on ammonia. I also decided that Erid had a magnetic field. Both of those keep the atmosphere from blasting off.”

Thinks 1929

FT: “[Jensen] Huang’s take on AI economics is based around the production, consumption and monetisation of tokens. These are the most basic units of output from large language models: it takes about 1,300 tokens to generate 1,000 words of text. The key metric, he argues, is the cost per token of output. And as the main input into AI-powered services, he adds, tokens translate directly into revenue.”
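Huang’s arithmetic can be sketched directly from the figure in the quote, roughly 1,300 tokens per 1,000 words of text. The per-million-token price below is my illustrative assumption; the article does not give one.

```python
# Sketch of the token economics in the quote: ~1,300 tokens per
# 1,000 words. The price per million tokens is an assumed,
# illustrative figure, not a number from the article.

TOKENS_PER_1000_WORDS = 1300

def tokens_for_words(words: int) -> float:
    """Approximate token count for a given word count."""
    return words * TOKENS_PER_1000_WORDS / 1000

def revenue(words: int, price_per_million_tokens: float) -> float:
    """Tokens produced times price per token: Huang's key metric."""
    return tokens_for_words(words) / 1_000_000 * price_per_million_tokens

print(tokens_for_words(1000))   # 1300.0 tokens for 1,000 words
print(revenue(100_000, 10.0))   # 100k words at an assumed $10/M tokens
```

Under these assumptions, 100,000 words of output is about 130,000 tokens, so driving down the cost per token is what turns the same output into margin, which is the point the quote attributes to Huang.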

WSJ: “Claws are autonomous agents that can plan and execute tasks on their own and, critically, spin up their own subagents to tackle specialized tasks, access files and themselves delegate tasks to other subagents. They represent a big leap beyond question-and-answer-style AI chatbots as well as recent iterations of AI agents, which typically have narrow use cases and run for a set amount of time—although claws also come with a new set of security concerns. For claws to work as a true personal assistant, they need access to all of a user’s data. So what are people using them for today?”

WSJ: “AI tools like Anthropic’s Claude Code, Cursor and OpenAI’s Codex can now write and debug software, unlocking huge new sources of revenue. That success is pushing their makers toward a bigger ambition: automating our entire lives. What began as a way to autocomplete code quickly evolved into semiautonomous AI bots, or “agents,” that can work for hours on end with little human oversight. We can tell a bot to create a presentation for work, coordinate the family’s schedules and pick a March Madness bracket, all while it learns our personal preferences, no coding needed…The shift has permanently changed the lives of coders and sparked a $1 trillion market selloff as investors and executives contemplate the technology’s potential to reshape industries, including finance, legal and healthcare. Tens of thousands of job cuts have already been attributed to AI.”

Pete Boettke: “In the late 19th century, Italian economist Vilfredo Pareto (1848-1923) expanded on this point, observing that co-ordinating even a modest economy and matching resources to uses and preferences would soon cause an explosion in the number of equations to be solved. But today’s computers can handle quintillions of computations per second, more than Pareto could possibly have imagined. Doesn’t that make a difference? This is where Nobel laureate economist Friedrich Hayek (1899-1992) comes in. Hayek explained that the problem is not merely that the relevant knowledge is decentralized — spread out across millions of individuals — but that it is often tacit. Local shopkeepers’ understanding of their customers’ buying habits cannot be translated into one data point to feed into an AI or any other kind of model. Nor can we predict the emergence of an entrepreneur dreaming up a product that did not exist before…Prices are not lying around in the wild, waiting to be harvested and fed into an algorithm. Rather, they are the result of constantly evolving discovery. Without this process of discovery, the knowledge embedded in a price simply doesn’t come into existence…As powerful and helpful a tool as AI can be to improve logistics, better manage inventories and analyze markets, it remains just that, a tool. It can help us gain a better understanding of markets but only markets themselves can predict and co-ordinate the results of the billions and billions of voluntary exchanges that take place every day.”


Thinks 1928

Forbes: “As CEO, you face challenges every day. Whether it’s a meeting that spirals, a missed deadline or rising tension, it’s easy to react negatively. I’ve found that instead, it helps to pause and ask a simple question: “Isn’t this interesting?” It’s one of the most effective tools I use to stay grounded, make better decisions in fast-moving environments and build a culture that does the same. When you’re observing, you’re not in fight-or-flight. You’re not reaching for control or trying to protect your ego. You’re just seeing what’s there, reframing your frustration into curiosity. That clarity leads to smarter decisions.”

WSJ: “Getting fooled into thinking that AI is thinking is what I call the Turing Trap. Alan Turing, godfather of modern computing and AI, proposed a simple test to determine whether a computer had attained human-level intelligence: If a person chatting with a bot couldn’t tell if it was human, it might as well be declared intelligent. What became known as the Turing Test doesn’t stipulate how a machine achieves this. At the time, language was thought to be closely associated with reasoning, but modern neuroscience shows us that it’s a separate process. Speaking isn’t the same as thinking, let alone being. Rather than demonstrating that machines have achieved intelligence, the Turing Test shows that linguistic fluency is possible even in its absence.”

Naomi Klein: “A ‘doppelgänger’ is a German word that means, literally translated, a ‘doublegoer’ or a ‘double walker.’ It’s the idea that out there, somewhere, you could bump into somebody who looks just like you — but isn’t you. It’s that uncanny vertigo that addresses the strangeness of that which is most familiar — which is yourself. ‘Mirror world’ is a term I use to describe the relationship between the liberal left world and the far right world, and the ways in which, when people are ejected from our world, they end up in a world that is the exact mirror of where we live — in replica social media platforms, the same but different doppelgänger publishing worlds, doppelgänger narratives of the narratives that we tell ourselves.”

WSJ: “The core tenets of somatics are a series of slow movements designed to release tension that leads to pain and hinders flexibility and mobility. The practice proposes something more rare than perfectly toned arms: un-jangled nervous systems.”