As an AI researcher, I've often been asked when I think we will have sufficient computational power for strong AI. I always answer, "About ten years ago." And I've been giving that answer for over ten years. I agree with the author that AI is a software problem rather than a hardware problem. But I think he misframes the limitations. In AI, the issue isn't whether we can brute-force a particular transformation (that is the ML approach); it's whether we can create a self-organizing system that recognizably approximates human cognition. Growing a redwood versus trying to build one, so to speak. Not an easier problem, but a different one, whose limitations have not yet been defined.