Gary Marcus: “I would emphasize having some kind of AI agency for the United States. It should be a cabinet-level position because AI is changing so fast. It’s affecting so many aspects of society. It’s just as important as having a cabinet-level thing for defense or health and so forth. Also at the top of my list would be some kind of FDA-like process to approve things that are released at large scale. The third thing I would prioritize is monitoring once things are out. So, for example, it should be possible for well-qualified scientists to say, ‘I want to study the degree to which this particular large language model might discriminate against people.’ And how is it actually being used in practice, in job decisions, jail sentences and so forth. There should be some auditing that has government backing to allow independent scientists to ask legitimate questions of this sort. We should have some kind of liability, especially if something very seriously goes wrong. The big tech AI companies right now are basically trying to privatize the profits and socialize the costs.”
Neil Lawrence: “Will AI totally displace the human? Or is there any form, a core, an irreducible element of human attention that the machine cannot replace? If so, this would be a robust foundation on which to build our digital futures. I call this kernel the “atomic human”. Unfortunately, when we seek it out, we are faced with a form of uncertainty principle. Machines rely on measurable outputs, meaning any aspect of human ability that can be quantified is at risk of automation. But the most essential aspects of humanity are the hardest to measure. We won’t find the atomic human in the percentage of A grades that our children are achieving at schools or the length of waiting lists we have in our hospitals. It sits behind all this. We see the atomic human in the way a nurse spends an extra few minutes ensuring a patient is comfortable or a bus driver pauses to allow a pensioner to cross the road or a teacher praises a struggling student to build their confidence.”
FT: “While the big AI companies, like OpenAI, Google, Amazon and Meta, are developing general-purpose agents that can be used by anyone, a small army of start-ups is working on more specialised AI agents for business. At present, generative AI systems are mostly seen as co-pilots that augment human employees, helping them write better code, for example. Soon, AI agents may become autonomous autopilots to replace business teams and functions altogether.”
NYTimes: “For some crucial A.I. tasks, Nvidia’s rivals are proving they can deliver much faster speed, and at prices that are much lower, said Daniel Newman, an analyst at Futurum Group. “That’s what everybody has known is possible, and now we’re starting to see it materialize,” he said. The shift is being driven by an array of tech companies — from large competitors such as Amazon and AMD to smaller start-ups — that have started tailoring their chips for a particular phase of A.I. development that is becoming increasingly important. That process, called “inferencing,” happens after companies use chips to train A.I. models. It allows them to carry out tasks such as serving up answers with A.I. chatbots. “The real commercial value comes with inference, and inference is starting to gain scale,” said Cristiano Amon, the chief executive of Qualcomm, a mobile chip maker that plans to use Amazon’s new chips for A.I. tasks. “We’re starting to see the beginning of the change.” Nvidia’s rivals have also started taking a leaf out of the company’s playbook in another way. They have begun emulating Nvidia’s tactic of building complete computers — and not just the chips — so that customers can wring the maximum power and performance out of the chips for A.I. purposes.”
Terence Reilly: “Youth culture has driven culture for time immemorial, but more than ever before, female youth culture drives culture.”