Just a short note: a very simple model of a graphics card is "a thing made of *many* weak little units, each of which can do a tiny bit of calculation". By contrast, a CPU consists of a few powerful such units (which can do complex pieces of calculation, including interrupting the code "flow" to respond to user input).
Nvidia card design in particular follows "many weak ones are better than a few strong ones", with ATI using somewhat more complex designs.
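To make the "many weak units" picture concrete, here is a minimal CUDA sketch (the kernel name `scale` and its parameters are made up for illustration, not taken from any real driver). Each GPU thread plays the role of one weak little unit: its entire job is one trivial arithmetic step on one array element.

```cuda
// Minimal CUDA kernel: thousands of these run in parallel, and each
// one does only a tiny bit of calculation -- no user input, no
// interrupts, just one trivial step on one element.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)
        data[i] *= factor;  // the entire "program" of one weak unit
}
```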
Now, the thing with both graphics and physics (graphics are, at bottom, a sort of physics) is that you do not model very complex things -- the code actually executed at any sub-step is trivial; only the total is fairly complex -- but you model a *lot* of them (high resolutions, large dimensions). The code sent to your graphics card never has to handle user input, for instance; it couldn't possibly do so, because the design doesn't allow it. But a graphics card can do a lot of small, simple things at a time (i.e. in parallel; i.e. batch computing), which suits an animation (or a physics/graphics model) perfectly well, as long as the interval between batch input (the device driver sends data from main memory to the graphics card) and batch output (the graphics card emits the result back to the driver, which places it in main memory again) stays small enough that the FPS remains high. A sketch of that round trip follows below.
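On the host side, that batch input/output cycle might look something like the following (again a hypothetical sketch, reusing the made-up `scale` kernel from above; a real driver does far more). Keeping this whole round trip short per frame is what keeps the FPS up.

```cuda
#include <cuda_runtime.h>

// Hypothetical per-frame batch cycle: send the data over, let many
// tiny computations run in parallel, fetch the result back.
void run_frame(float *host_data, int n)
{
    float *dev_data;
    size_t bytes = n * sizeof(float);

    cudaMalloc(&dev_data, bytes);
    // Batch input: main memory -> graphics card.
    cudaMemcpy(dev_data, host_data, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(dev_data, 1.5f, n);  // many tiny steps at once

    // Batch output: graphics card -> main memory again.
    cudaMemcpy(host_data, dev_data, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dev_data);
}
```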