Sam Altman on OpenAI’s hardware plans: “Two big revolutions in computer use have occurred: the mouse and keyboard, and the idea of the monitor displaying this sort of windowed system. That was a breakthrough, for sure. Then we had the touch devices, which adapted that, taking out the mouse and letting you use your finger, making it a very personal device. This was huge. Fundamentally, we have never had something as powerful as AI. Computers really can understand what we want, can think, which has let us reimagine what it could mean to use a computer. So we’re still exploring. It’ll take us quite some time. Don’t expect anything very soon. But over time, I expect we’ll make a small family of devices. They will look good, for sure, but that’s not the main thing. I hope that if we do a really great job, they will change what it means to use a computer, how you do your work, and how you play and live your life. But there’s a lot of work and a lot to explore between here and there.”
Richard Sutton: “First, the large language models are surprising. It’s surprising how effective artificial neural networks are at language tasks. That was a surprise; it wasn’t expected. Language seemed different. So that’s impressive. There’s a long-standing controversy in AI about simple basic-principle methods, the general-purpose methods like search and learning, compared to human-enabled systems like symbolic methods. In the old days, it was interesting because things like search and learning were called weak methods, because they’re just using general principles; they’re not using the power that comes from imbuing a system with human knowledge. Those were called strong. I think the weak methods have just totally won. That was the biggest question from the old days of AI: what would happen. Learning and search have just won the day. There’s a sense in which that was not surprising to me, because I was always hoping or rooting for the simple basic principles. Even with the large language models, it’s surprising how well it worked, but it was all good and gratifying. AlphaGo was surprising, how well that was able to work, AlphaZero in particular. But it’s all very gratifying because, again, simple basic principles are winning the day.”
Andy Kessler: “Author Kyla Scanlon divides Generation Z into ‘safety seekers’ and ‘digital gamblers.’ Plausible, except careers that were once safe are now risky: graphic designers, marketers, some programmers, maybe even lawyers. And surfing the waves of progress to where the world is headed is less risky than you think. Progress comes via surprises, not rules, with inventions no one thought possible. The telescope opened the skies. The microscope illuminated the unseeable. Both surprises. So was Edison’s Kinetograph movie camera. Quantum theory was heretical, until it wasn’t. It enables entire industries, including semiconductors. Gene editing was hard until Crispr technology simplified it. Machine learning was researched for decades with little result, until back-propagation allowed voice and facial recognition. And it’s been less than three years since ChatGPT shocked the world with what it could do. None of these were invented by following the rules, but by coloring outside the lines…Take risks. Risk leads to reward. Ignore those who tell you to take ‘calculated risks.’ It’s the magnitude of risk that provides the potential reward. And we need a new name for entrepreneurs. It’s too French. Maybe ‘risk agents’ or ‘productivity creators’ or, hmmm, ‘no rulers.’”
Ashu Garg: “Your culture is defined by the people you hire and the behaviors you reinforce. The pace of AI startups is so fast that you need people who will truly do ‘whatever it takes.’ To build this high-intensity culture, hire for an ownership mentality. Jonathan actively screens for what he calls ‘raw founder energy.’ He hires ‘less for experience, more for exceptional ability…someone that’s really hungry, intense, hardworking.’ In practice, this means Turing might pass on a polished veteran if a scrappier candidate shows more intensity and an eagerness to own outcomes. You want team members who care so deeply that they never consider any problem above or beneath them. That creates a culture where everyone pushes hard and actively works to make things better, rather than saying ‘that’s not my job.’”