NYTimes: ““Cerebral Entanglements” sprawls across many different fields that investigate how we (and our busy brains) engage with the world: neuroscience, endocrinology, history, culture, psychology, moral philosophy — well, you get the idea. New technologies allow us to visually illuminate the brain as it encounters myriad issues and challenges. This means, Hamilton argues, that we are the “first generation to be able to image and quantify human thought” — an idea to justify some exuberance, you might say — and in the 300-odd pages of his book, he does his best not to miss a single thought of importance.”
Tyler Cowen: “I have a very concrete and specific proposal for teaching people how to work with AI. It is also a proposal for how to reform higher education. Give them some topics to investigate, and have them run a variety of questions, exercises, programming, paper-writing tasks — whatever — through the second- or third-best model, or some combination of slightly lesser models. Have the students grade and correct the outputs of those models. The key is to figure out where the AIs are going wrong. Then have the best model grade the grading of the students. The professor occasionally may be asked to contribute here as well, depending on how good the models are in absolute terms. In essence, the students are learning how to grade and correct AI models.”
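To make the mechanics of Cowen's proposal concrete, here is a minimal Python sketch of the grade-the-grader loop. Everything in it is an illustrative assumption: `call_model` stands in for whatever LLM API a course would actually use, and the model names are placeholders rather than real endpoints.

```python
def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real LLM API call; wire this to any provider."""
    raise NotImplementedError("connect to your model provider of choice")

def assignment_round(topic: str, task: str, student_grade_fn) -> dict:
    # 1. A deliberately lesser model produces the first attempt.
    draft = call_model("second-best-model", f"{task}\nTopic: {topic}")

    # 2. The student grades and corrects it: where did the AI go wrong?
    student_critique = student_grade_fn(draft)

    # 3. The best available model then grades the student's grading.
    meta_prompt = (
        "Below are an AI-written draft and a student's critique of it.\n"
        f"DRAFT:\n{draft}\n\nCRITIQUE:\n{student_critique}\n\n"
        "Assess the critique: which errors did the student catch, which "
        "did they miss, and which 'corrections' are themselves wrong?"
    )
    meta_grade = call_model("best-model", meta_prompt)

    # 4. Per the proposal, the professor is consulted only for the
    #    cases this automated audit leaves unresolved.
    return {"draft": draft, "critique": student_critique, "meta_grade": meta_grade}
```

The division of labor mirrors the quote: the lesser model supplies instructive errors, the student supplies corrections, and the best model audits the student's corrections.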
The New Atlantis: “Today, Farming-2.0-style agriculture — which began with innovations in field crops like wheat but spread to other parts of farming, such as cattle ranching and chicken-raising — is by almost any measure the world’s most critical industry. It is directly responsible for our daily bread. But despite its overwhelming importance, Farming 2.0 is in many ways unknown to most of us, because it has been so smoothly successful that we have almost no picture of the underpinnings of the vast system that provides us with breakfast, lunch, and dinner. Too few have any sense of its scope, what brought it into existence, and in what ways it will need to change.”
WSJ: “The biggest risk of AI transcription is how it affects our ability to trust one another. When colleagues are relaxed and open in their conversations, it builds the relationships that support everything we do. If a meeting digresses into a short conversation about a new lunch spot near the office, it isn’t a waste of time: Once you and the guy in the next cubicle bond over your shared love of Ethiopian food, you may be less irritable about his too-loud phone voice. But those trust-building processes don’t happen if we’re so conscious of AI transcription that we don’t make room to talk about that great new restaurant. Perhaps more unnerving, AI transcription tools could sow the seeds of dissent, misunderstanding or simple error. Studies of AI transcription have found that while accuracy is gradually improving, AIs sometimes make things up in transcripts, just like they hallucinate in chats and research tasks. If an AI inserts an invented event or line item into the transcript of a conference-planning meeting, an employee may end up booking a speaker or room that nobody wanted.”
Zahir Mirza: “A [handwritten] letter is devoid of AI prompts which interfere with self-reflection and correct erroneous spellings or grammar. It’s okay to have a few mistakes that you strike out, or write in Hinglish if that’s the language in which you can best express yourself. Your handwriting communicates your personality in a way that Helvetica or Calibri cannot.”