You're missing a simple point. The application loads into memory as much data as it thinks it needs, whether that data is actually used or not. If the GPU itself is weak, performance is dragged down by the GPU. Problems only appear when the GPU is powerful and the memory is insufficient, not the other way around.
Here's what Nvidia itself says:
"There are a number of hardware monitoring tools and utilities that claim to report GPU Memory Usage. Note that these tools do not report actual GPU memory usage. Instead, they report memory allocation, the amount of memory requested by the application.
This number can vary for a variety of reasons, and should not be used as an indicator of the amount of GPU memory that the application is actually using. In fact, in some cases the application may request all of the GPU’s memory, whether it actually needs it or not. To illustrate this issue, we have included memory allocation logs running Tom Clancy’s The Division 2. In this example running on a GeForce RTX 2080 SUPER (8GB) and RTX 2080 Ti (11GB) using the same settings (4K with the Ultra preset), the game allocates almost all of each GPU’s framebuffer:
RTX 2080 SUPER (8GB): AVG 41 fps, MEM 7393 MB
RTX 2080 Ti (11GB):   AVG 49 fps, MEM 10531 MB
As you can see, both GPUs are running at high frames (41 fps vs 49 fps, with both GPUs running smoothly), with the game allocating as much memory as it can, which resulted in memory consumption of 7.3 GB on the RTX 2080 Super and 10.5 GB on the RTX 2080 Ti."
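The behavior Nvidia describes, a game sizing its allocation to whatever framebuffer is available rather than to what it strictly needs, can be sketched with a toy model. This is just an illustration of the principle (the function and its numbers are hypothetical, not how The Division 2 or any real engine actually budgets memory):

```python
# Toy model: an engine that reserves nearly the whole framebuffer
# up front, regardless of how much it actually touches.

def plan_allocation(vram_mb: int, required_mb: int, headroom_mb: int = 512) -> dict:
    """Return what a monitoring tool would report (allocated)
    versus what the app actually uses (working set)."""
    # Grab almost everything the card reports, minus some headroom,
    # but never less than the real requirement and never more than the card has.
    allocated = min(vram_mb, max(required_mb, vram_mb - headroom_mb))
    used = min(required_mb, allocated)  # the actual working set
    return {"allocated_mb": allocated, "used_mb": used}

# Same game, same settings, two cards of different VRAM sizes:
print(plan_allocation(vram_mb=8192, required_mb=6000))   # 8 GB card
print(plan_allocation(vram_mb=11264, required_mb=6000))  # 11 GB card
```

Run it and the "allocated" figure scales with the card's VRAM while the working set stays the same, which is exactly why the MB numbers from monitoring tools track framebuffer size rather than the game's real requirement.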
In effect, they've just given themselves away, because memory consumption grows together with GPU performance. Now we're about to get an RTX 3070 with roughly 2080 Ti-level performance; that same 2080 Ti allocates 10.5 GB of memory to deliver 49 fps, while the RTX 3070 will have only 8 GB available to hit about the same 49 fps. We'll see what comes out of that in the reviews.
The same goes for AMD; it's no accident they decided to put 16GB of VRAM on their higher-tier cards.