> The Bellman equations (exactly as written above) are not found in ML libraries. This is because they assume you know a model of the data. Most real-world RL is model-free RL.
Q-learning (the usual application of the Bellman equation) is generally model-free. It is also commonly found in reinforcement learning libraries.
What libraries usually ship is deep Q-learning, where you approximate Q with a neural network, which I alluded to in one of my later paragraphs (the one on approximation).
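To make "function-approximating Q" concrete, here is a minimal sketch (not from the thread): a linear model over one-hot features stands in for the neural network, and the tiny two-state environment is entirely made up for illustration.

```python
import random

# Sketch of function-approximated Q-learning. Assumptions for
# illustration: a linear model over one-hot (state, action) features
# stands in for the neural network, and the two-state environment
# below is hypothetical.

def features(state, action):
    f = [0.0] * 4
    f[state * 2 + action] = 1.0  # one-hot over (state, action) pairs
    return f

def q_value(w, state, action):
    return sum(wi * fi for wi, fi in zip(w, features(state, action)))

def step(state, action):
    # Hypothetical dynamics: action 1 in state 0 yields reward 1.
    if state == 0 and action == 1:
        return 1, 1.0
    return 0, 0.0

alpha, gamma = 0.1, 0.9
w = [0.0] * 4  # weights of the Q approximator
random.seed(0)
state = 0
for _ in range(2000):
    action = random.choice([0, 1])  # off-policy random exploration
    next_state, reward = step(state, action)
    # TD target from the Bellman equation, then a gradient step
    # on the squared TD error
    target = reward + gamma * max(q_value(w, next_state, a) for a in (0, 1))
    td_error = target - q_value(w, state, action)
    for i, fi in enumerate(features(state, action)):
        w[i] += alpha * td_error * fi
    state = next_state
```

Swapping the linear model for a deep network (plus replay buffers and target networks) is essentially what DQN implementations in RL libraries do.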
Model-free RL doesn't mean you aren't training a model. It means you aren't explicitly building a model of the environment's transition function f(s, a) = (s', r), as world-model methods like Dreamer do.
Q-learning only approximates the Q-value function, not the full state-transition function, so it is model-free.
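A minimal tabular sketch of that point (illustrative only; the two-state environment is hypothetical): the agent never learns f(s, a) = (s', r), it only updates Q(s, a) from sampled transitions.

```python
import random

def step(state, action):
    # Hypothetical dynamics, known only to the environment, never
    # to the agent: taking action 1 in state 0 pays reward 1.
    if state == 0 and action == 1:
        return 1, 1.0
    return 0, 0.0

alpha, gamma, eps = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}

random.seed(0)
state = 0
for _ in range(500):
    # epsilon-greedy action selection
    if random.random() < eps:
        action = random.choice([0, 1])
    else:
        action = max((0, 1), key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Bellman-style update from a single sampled (s, a, r, s') tuple;
    # no transition probabilities are ever estimated
    best_next = max(Q[(next_state, a)] for a in (0, 1))
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state
```

The update uses the Bellman equation as a target, but everything it needs comes from experienced transitions, which is exactly the model-free distinction.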