I thought Tyler Cowen (with whom I often don’t agree) did an excellent job of defining what we do know about AI and then what questions to ask. I thought this was the most edifying of the EconTalk episodes on AI – not because it had the answers but because it had the questions.

I doubt that the machines will “come for us.” What concerns me is a world in which a human has no function, no vocation. The last vocations to be rendered unnecessary will probably be jobs like auto repair that require a wide range of practical knowledge along with the ability to manipulate tools in physically demanding situations, but even they will yield, all too soon, to the economics, robotics, and superior analytical abilities of advanced AI. I suspect that, in such a world of pampered welfare and lack of any sort of meaningful work, most people will simply slip into insanity.

Roberts and Cowen both speak casually about people having to adapt to a world with chatbots or even AGI. They seem to believe that their particular vocations as educators and writers will remain untouched by these developments. At the current rate of AI development, I strongly suspect that their services will no longer be required in these fields or in most others.

And, how like an economist – the way we should debate this is with competing “models.” Dr. Cowen sneers at this model, while offering nothing different. Has he developed a model? If he were present when Gutenberg produced the first printed page, could he have produced a predictive model of any utility at all? The model concerned people are offering is in the rhetoric they produce, both written and oral.

Cowen’s lack of imagination is startling, but what really worries me is his feeling of casual superiority. It’s the kind of arrogance that Aeschylus wrote about at the dawn of civilization, the hubris that has caused so much suffering over the millennia. As our tools grow ever more powerful, that hubris grows ever more dangerous.
To be sympathetic to Team Eliezer here, I think there’s a relatively conservative version of his story that is still pretty scary. Here are two claims that I actually don’t think are too controversial: (1) General AI has a significant chance of being an existential threat to the human race. (2) We are profoundly ignorant of how GAIs work, what makes them more or less of a threat, and how to control them if they are a threat. Taken together, those two points make it seem as if we are inventing the atomic bomb without understanding enough physics to know what sets it off or how big the explosion will be. I think Eliezer would say the most relevant historical event is the evolution of humans, but 1–10 million times faster. So what’s the relevant historical model or data you could bring to bear? I think it’s hard to model something where reasonable people can disagree on whether it’s more like the invention of the printing press or more like the Great Oxidation Event.

The fallacies in Cowen’s arguments are so overwhelming that it’s difficult to respond to them all. The potential power and ubiquity of AI in this era are unparalleled – they are qualitatively different from any other breakthrough in human affairs. Using the impact of the printing press as an analog to the potential impact of AI? Really? Not even the splitting of the atom is comparable. The only other potentially similar line of technological development at the moment is gene editing, which carries risks of a similar magnitude. I’m deeply concerned about the impact of AI, but Cowen is ready to dismiss my concerns out of hand unless I develop a “model” of potential impact.