Thinks 1484

FT: “Creating what is in essence a new layer of digital plumbing between apps and websites like this hardly sounds like the most sexy use of AI. But it could end up causing important changes to how people use technology and shifting the balance of power in the tech industry. AI agents that act on behalf of their users are the fad of the moment. Giving them the power to operate across different apps, web sites and digital services could have far-reaching effects.”

WSJ: “[Jensen Huang] doesn’t want information that has already made its way through layers of management. What he wants is “information from the edge,” he said…The way he solved this problem was by asking roughly 30,000 employees at every level of the company to send regular emails to their teams and executives that even the CEO can access. Which he does—every single day. They’re usually brief and include a few bullet points, and glancing at them gives Huang a snapshot of what’s happening inside Nvidia, Kim writes. It might just be the only way he can get the sort of unvarnished truth that nobody wants to give the CEO but every CEO needs to get. After all, Nvidia’s employees are not telling Huang what they think he wants to hear. They’re just telling him things. T5T (Top-5 Things) emails became a “crucial feedback channel” for Huang, Kim writes, because they allowed him to pick up on trends that were obvious to junior employees, even when top executives were completely oblivious. “I’m looking to detect the weak signals,” he says, according to Kim. “It’s easy to pick up the strong signals, but I want to intercept them when they are weak.””

Nick Bostrom: “There may well exist a normative structure, based on the preferences or concordats of a cosmic host, and which has high relevance to the development of AI. In particular, we may have both moral and prudential reason to create superintelligence that becomes a good cosmic citizen—i.e. conforms to cosmic norms and contributes positively to the cosmopolis. An exclusive focus on promoting the welfare of the human species and other terrestrial beings, or an insistence that our own norms must at all cost prevail, may be objectionable and unwise. Such attitudes might be analogized to the selfishness of one who exclusively pursues their own personal interest, or the arrogance of one who acts as if their own convictions entitle them to run roughshod over social norms—though arguably they would be worse, given our present inferior status relative to the membership of the cosmic host. An attitude of humility may be more appropriate.”

FT: “Anyone wanting to take on Nvidia must contend not just with its chip nous, but its bountiful profit. Huang’s company generates earnings equivalent to about 60 per cent of its revenue, higher than Apple, Microsoft, Alphabet and Intel have managed any time this century, according to LSEG data. That gives Huang a lot to play with. He can invest Nvidia’s riches to keep its products out in front, acquire companies in adjacent markets, or even put a lid on prices. He can also pay fines or sacrifice some Chinese revenue — if it comes to that — without breaking a sweat. Given the financial goodies Nvidia enjoys as a result of its dominance, it’s no surprise that ants are gathering on all sides. They should be easy to hold at bay.”

Microsoft AI CEO Mustafa Suleyman: “To me, AGI is a general-purpose learning system that can perform well across all human-level training environments. So, knowledge work, by the way, that includes physical labor. A lot of my skepticism has to do with the progress and the complexity of getting things done in robotics. But yes, I can well imagine that we have a system that can learn — without a great deal of handcrafted prior prompting — to perform well in a very wide range of environments. I think that is not necessarily going to be AGI, nor does that lead to the singularity, but it means that most human knowledge work in the next five to 10 years could likely be performed by one of the AI systems that we develop. And I think the reason why I shy away from the language around singularity or artificial superintelligence is because I think they’re very different things. The challenge with AGI is that it’s become so dramatized that we sort of end up not focusing on the specific capabilities of what the system can do. And that’s what I care about with respect to building AI companions, getting them to be useful to you as a human, work for you as a human, be on your side, in your corner, and on your team. That’s my motivation and that’s what I have control and influence over to try and create systems that are accountable and useful to humans rather than pursuing the theoretical super intelligence quest.”

Published by

Rajesh Jain

An entrepreneur based in Mumbai, India.