A2) Because it's about 1,000x to 10,000x more efficient.
An awful lot of the "ills" of modern development practices boil down to the lack of ingrained rules of thumb related to performance. The difference between a local function call -- virtual or not -- and a network call can easily be a factor of a million (a few nanoseconds versus a millisecond or more).
This just isn't in the mental model of most developers. The terms "nanoseconds" or "clocks" are not in their vocabulary.
I grew up and learnt programming in an era where OO was considered extravagantly wasteful because virtual function calls had an extra indirection! Those precious instructions -- and more importantly -- the lost opportunity for inlining or CPU pipelining were considered brutal performance hits.
These days, people throw Python into Docker containers and run them remotely on the network to invoke what amounts to a page of code. They call this "modern".
Then they go on Y Combinator News and complain about how OO is "bad" somehow. Quite a few of these people have probably never written a class hierarchy from scratch themselves.
I literally just spent a day talking to some full-time developers with years of experience, explaining how to implement a simple "storage abstraction" OO hierarchy. You know, you have a base interface or abstract class with a bunch of implementations like "S3BucketStorage", "ZipFileStorage", "LocalFilesStorage", or whatever... and then you have the meta-implementations that combine them, such as "UnionStorage", "CacheStorage", and "RetryStorage", each of which takes the abstract interface as a constructor parameter. So you can have local files act as a cache for S3 buckets (with retry) that override a local zip file of static content. Or whatever! Combine implementations to suit your whims. (There's a sketch of this a little further down.)
They looked at me like I had grown a second head that started speaking Greek while the other spoke Latin.
Then they wrote some spaghetti code of functions with hard-coded parameters, checked that garbage into the repo, and then dutifully sent out an email to management saying "job done".
Is OO bad, or are most developers bad? I suspect the latter...
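For the record, here's the sketch I promised above. It's a minimal version in Rust using trait objects; the names mirror the hypothetical ones from the anecdote, and the in-memory backing is just a stand-in so the example runs without any real I/O:

```rust
use std::collections::HashMap;

// The abstract interface: anything that can fetch a blob of bytes by key.
// (Sketch only; a real version would return proper errors, support writes, etc.)
trait Storage {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
}

// A concrete "leaf" implementation, standing in for S3BucketStorage,
// ZipFileStorage, LocalFilesStorage and friends. Backed by a map here
// purely to keep the sketch self-contained.
struct LocalFilesStorage {
    files: HashMap<String, Vec<u8>>,
}

impl Storage for LocalFilesStorage {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.files.get(key).cloned()
    }
}

// Meta-implementation: ask the primary first, fall back to the secondary.
struct UnionStorage {
    primary: Box<dyn Storage>,
    secondary: Box<dyn Storage>,
}

impl Storage for UnionStorage {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.primary.get(key).or_else(|| self.secondary.get(key))
    }
}

// Meta-implementation: retry the inner storage a fixed number of times.
struct RetryStorage {
    inner: Box<dyn Storage>,
    attempts: u32,
}

impl Storage for RetryStorage {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        (0..self.attempts).find_map(|_| self.inner.get(key))
    }
}

fn main() {
    // Compose to taste: retry around a union of a "remote" store and local static content.
    let remote = LocalFilesStorage { files: HashMap::new() };
    let local = LocalFilesStorage {
        files: HashMap::from([("logo.png".to_string(), vec![1, 2, 3])]),
    };
    let storage = RetryStorage {
        inner: Box::new(UnionStorage {
            primary: Box::new(remote),
            secondary: Box::new(local),
        }),
        attempts: 3,
    };
    assert_eq!(storage.get("logo.png"), Some(vec![1, 2, 3]));
}
```

The point isn't the particular trait; it's that the "meta" implementations only ever see the abstract interface, so any stack of backends can be composed at construction time.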
OO runtime dispatch as implemented by C++ via "vtables" of function pointers doesn't even map well to CPUs! The indirection via a data pointer that can change unpredictably is terrible for pipelined architectures. Similarly, this approach generally prevents inlining, especially across dynamic library boundaries.
However, many languages -- and even C++ with modern compilers -- can pull tricks to mitigate this. For example, static analysis can often replace virtual calls with direct ones (devirtualisation). Similarly, calls can be inlined in many cases, such as when the call site can see the concrete ("leaf") type in the hierarchy.
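To put the same idea in Rust terms (an illustrative sketch only -- whether any particular call actually gets devirtualised is an optimisation detail, not a guarantee): a call that is dynamic in the source can still be resolved statically when the compiler can see the concrete type.

```rust
trait Greeter {
    fn hello(&self) -> &'static str;
}

struct English;

impl Greeter for English {
    fn hello(&self) -> &'static str {
        "hello"
    }
}

// Dynamic dispatch in the source: in general this call goes through a vtable.
fn greet(g: &dyn Greeter) -> &'static str {
    g.hello()
}

fn main() {
    // Here the concrete type is visible to the optimiser, so an optimised
    // build is free to turn the vtable lookup into a direct, inlinable call.
    println!("{}", greet(&English));
}
```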
Languages with "virtual machine" runtimes such as Java and C# can potentially optimise even dynamic scenarios. Java's HotSpot, for example, speculatively devirtualises and inlines call sites that turn out to be monomorphic at runtime.
I think the ideal OO framework would actually be more like what Rust does with traits. Have static dispatch as the default at runtime, but with the traditional OO model of interfaces, classes, derived classes, etc... Dynamic dispatch should be an option, but not the default. Ideally, dynamic dispatch should be used only on the boundary of binary modules such as DLL files or kernel-to-user-mode ABIs.
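For comparison, here's roughly what that split looks like in today's Rust, using a hypothetical Shape trait: generics are monomorphised and statically dispatched by default, and you opt into dynamic dispatch explicitly with "dyn".

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Square { s: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

impl Shape for Square {
    fn area(&self) -> f64 { self.s * self.s }
}

// Static dispatch: monomorphised per concrete type, trivially inlinable.
fn total_area_static<S: Shape>(shapes: &[S]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

// Dynamic dispatch: one compiled body, every call goes through a vtable.
fn total_area_dyn(shapes: &[Box<dyn Shape>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let circles = vec![Circle { r: 1.0 }, Circle { r: 2.0 }];
    println!("{}", total_area_static(&circles));

    // A heterogeneous collection is where you genuinely need dynamic dispatch.
    let mixed: Vec<Box<dyn Shape>> =
        vec![Box::new(Circle { r: 1.0 }), Box::new(Square { s: 2.0 })];
    println!("{}", total_area_dyn(&mixed));
}
```

The generic version compiles a separate, inlinable copy per concrete type; the "dyn" version compiles once and pays an indirect call -- exactly the trade-off you'd want to confine to module boundaries.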
Note that OO was also designed to reduce compilation times by decoupling implementation from use. So if developer A updates an implementation (class/struct) of an interface/trait, developer B, whose code depends only on the interface, doesn't need to recompile that code at all -- incremental compilation can skip it entirely. This saves a lot of time for large code bases.
One reason Rust is notoriously slow to compile is that generic code is monomorphised into the crates that use it, so changing an implementation tends to force recompiling the usage side as well as the implementation -- the interface doesn't insulate downstream code the way it can in classic OO.
Again, a hybrid approach could work: dynamic dispatch by default in debug builds to keep the edit-compile-test loop fast, and static dispatch by default in release builds, trading longer build times for runtime performance.