First off, the question:
As the topic says, how is it handled?
1) Do the GPU(s) treat it as one huge resolution that is rendered as a whole and then split up and sent to each monitor?
2) Or do the GPU(s) do the calculations as three separate 1920*1080 renders? (See the quick sanity check below.)
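Just to put numbers on the two cases, here is a quick sanity check (a rough sketch; the only real data here are the resolutions, the rest is just arithmetic):

```python
# Pixel counts for the two cases: the total is identical either way,
# so the question is really how the work and the buffers get split up.
huge = 5760 * 1080            # case 1: one wide surface
per_monitor = 1920 * 1080     # case 2: three separate surfaces
print(huge, 3 * per_monitor)  # 6220800 6220800
```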
The background
The need for VRAM on the GPU goes up as the resolution increases and as textures, AA, etc. get more detailed. There are a lot of benchmarks out today where you see a large performance drop at resolutions higher than 1920*1080, for instance 2560*1440. I would also guess that the drop is not proportional to the increase in resolution. It's more like the drop stays fairly small as resolution increases until you cross a line where performance falls through the floor.
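To illustrate the kind of scaling I mean, here is a rough estimate of plain framebuffer memory at a few resolutions (a minimal sketch, assuming 32-bit colour plus a 32-bit depth buffer per pixel and nothing else, so the absolute numbers are guesses; the linear growth is the point):

```python
# Rough per-frame buffer memory, assuming 4 bytes colour + 4 bytes depth
# per pixel (an assumption; ignores textures, AA and any extra buffers).
def framebuffer_mb(width, height, bytes_per_pixel=8):
    return width * height * bytes_per_pixel / (1024 ** 2)

for w, h in [(1920, 1080), (2560, 1440), (5760, 1080)]:
    print(f"{w}x{h}: ~{framebuffer_mb(w, h):.0f} MB")
# 1920x1080: ~16 MB, 2560x1440: ~28 MB, 5760x1080: ~47 MB
```

The plain buffers are tiny next to 1 GB, so presumably most VRAM goes to textures and AA, which would explain why the drop is not simply proportional to the pixel count until you actually run out of memory.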
Without knowing for sure: if the GPUs handle this as one huge resolution, like in case 1 above, which would be the same as playing on one huge monitor with a resolution of 5760*1080, I would bet the performance drop would be insane.
This leads me to the main issue. I have a 2x5970 quadfire setup (4 GPUs on 2 cards) with 2 GB VRAM per card, that is 1 GB per GPU. If Eyefinity is handled as in case 2 above, I think the 2x5970 setup will be able to handle most games in Eyefinity, and the "lack of" 2 GB VRAM will not be an issue. Basically, you have 4 GPUs that have to cooperate on calculating three 1920*1080 renders in parallel.
However, if case 1 is true, then there may be a bigger problem, since it is a single image calculated at a resolution of 5760*1080, and the cooperation between the GPUs may not be as good.
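As a worst-case guess for my setup: if the GPUs do not pool their memory and each one has to hold a full set of 5760*1080 render targets in its own 1 GB (that is my assumption here, not something I know), the raw targets would still fit easily; the real question is how much textures and AA add on top:

```python
# Rough per-GPU budget check, assuming each GPU keeps a complete set of
# 5760x1080 render targets in its own 1 GB (an assumption) with 4x MSAA.
width, height = 5760, 1080
msaa = 4                              # assumed 4x MSAA
colour  = width * height * 4 * msaa   # 32-bit colour samples
depth   = width * height * 4 * msaa   # 32-bit depth samples
resolve = width * height * 4          # resolved back buffer
total_mb = (colour + depth + resolve) / (1024 ** 2)
print(f"~{total_mb:.0f} MB of a 1024 MB GPU")  # roughly 214 MB
```

That would leave something like 800 MB per GPU for textures and everything else, and whether that is enough is exactly what I am not sure about.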