
Nvidia GeForce GTX 590: The Dual-GF110 Beast

The GTX 590 is a dual-GPU GF110 video card capable of playing all the latest video games, including some of the greatest 3D titles, while maintaining reasonable temperatures and very quiet operation.


Nvidia GeForce GTX 590 – The Dual GF110 video card

Nvidia’s long-awaited dual-GPU video card is finally here. The GeForce GTX 590 is designed for enthusiast gamers who demand the highest graphics performance and image quality from a single video card. The GTX 590 is a DX11-based card that lets users turn up their graphics settings, including resolution, anti-aliasing, and image quality, without the excessive performance drops that would otherwise make games unplayable. With full control over Nvidia’s driver game presets and other graphics features, users can also customize image quality up to 64x anti-aliasing with a single GTX 590, and up to 128xAA with two GTX 590s combined in Quad SLI. To do so, the user overrides the application’s AA setting through the Nvidia drivers; once that is done, multi-GPU 64x AA is possible with a single GTX 590.

The new Nvidia GeForce GTX 590 uses two GF110 GPUs on a single board, providing 1024 CUDA cores (512 per GPU) and 1.5GB of GDDR5 memory per GPU (3GB total). The memory subsystem is very similar to that of the older single-GPU GTX 580, with six 64-bit memory controllers (384-bit) per GPU. This also means that with a single GTX 590, games can use up to 1.5GB of memory (as in a two-GTX 580 configuration), and with two GTX 590s, games can take advantage of all 3GB during gameplay. Clock speeds have also changed on the GTX 590. Unfortunately, the two GF110 GPUs do not run at the GTX 580’s stock specs, so those who waited for a card that performs exactly like two GTX 580s in SLI might be slightly disappointed. Instead, Nvidia lowered the GPU clock from 772 MHz to 607 MHz, the shader clock from 1544 MHz to 1215 MHz, and the memory clock from 4008 MHz to 3414 MHz. We believe Nvidia had to take this route to keep the GTX 590 quiet under full load while also maintaining reasonable temperatures. We will have additional dB(A) readings later in the review comparing the Nvidia GTX 590 to the AMD HD6990. From these numbers, the raw per-GPU performance is roughly that of a GTX 570, so the card should perform close to two GTX 570s in SLI.
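To put the downclock in perspective, here is a quick back-of-the-envelope sketch in Python; the clock figures are the ones quoted above, and the percentages are simply derived from them:

```python
# Clock reductions from a stock GTX 580 (per GPU) to the GTX 590,
# using the figures quoted above.
clocks_580 = {"core_mhz": 772, "shader_mhz": 1544, "memory_mhz": 4008}
clocks_590 = {"core_mhz": 607, "shader_mhz": 1215, "memory_mhz": 3414}

for name in clocks_580:
    drop = 1 - clocks_590[name] / clocks_580[name]
    print(f"{name}: {drop:.1%} lower")
```

Core and shader clocks drop by roughly 21%, and memory by about 15%, which lines up with the GTX 570-like per-GPU performance estimate.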

As far as GPU architecture goes, since the GTX 590 uses the same GF110 chips as the GTX 580, there are no changes in die size, and it is still built on a 40nm process with 3.0 billion transistors per GPU. We have a more detailed explanation of the GF110 architecture in our GTX 580 review from November 2010. Note that the Graphics Processing Clusters, Streaming Multiprocessors, CUDA cores, texture units, and ROP units have all doubled over the GTX 580 because the GTX 590 carries two GF110 GPUs; however, the card will only utilize 1536MB of GDDR5 memory and 768KB of L2 cache unless two GTX 590s are installed, which enables the full 3072MB of GDDR5 video memory and 1536KB of L2 cache.

(Expected performance tests between the GTX 580 and GTX 590 set forth by the Nvidia Team)

When the AMD HD6990 launched, we were severely disappointed by its acoustics: the card would run its fan at high RPM, producing a lot of noise during gameplay. Thankfully, Nvidia paid close attention to the acoustic noise of its cooling and optimized the GTX 590 to maintain good temperatures as well. The GTX 580 had fantastic acoustic levels, and while the GTX 590 is rated at slightly higher dB levels, the difference should not be too big for gamers, especially those playing with headsets or loud speakers. As mentioned earlier, it seems that in order to maintain good temperatures along with low acoustics, Nvidia had to downclock the card a bit. This also has to do with its power design: the card draws from two PCI-E 8-pin power connectors, and on certain motherboards, such as the ASUS Rampage series or the GIGABYTE G1.Killer series, users can feed additional power to the PCI-E lanes through Molex connectors from the PSU. With that said, overclocking should not be a problem once tools like MSI Afterburner support voltage tweaking for the GTX 590. We will look at overclocking later in the review, and we expect GPU clock speeds somewhere around 700-750 MHz.

There are several features that set the GTX 590 apart from previous Nvidia video cards.

  1. Users no longer need two video cards in order to support NVIDIA Surround/3D Vision Surround. Users can enjoy 3 displays with NVIDIA Surround/3D Vision Surround from a single card, as the GTX 590 works as two video cards in SLI.
  2. The GTX 590 can also dedicate one of its GPUs solely to PhysX processing, though this requires exiting SLI mode. It is possible to use both GPUs in SLI mode, but PhysX calculations will then be limited, as graphics rendering and PhysX have to share the GPUs; it’s the same idea as a single-GPU video card both rendering 3D graphics and calculating PhysX. If needed, users can also dedicate a separate PhysX card and keep both GPUs on the GTX 590 solely for graphics rendering.
  3. Quad-SLI is much easier with two GTX 590s. Users no longer need to buy four video cards to achieve Quad-SLI performance. With two GTX 590s, users can enjoy Quad-SLI performance in ultimate gaming systems. This setup would be ideal for 3D Vision Surround gamers who want to push the gaming envelope as far as they can. We’ll talk more about Nvidia-certified Quad-SLI components on the next page.

The GTX 590 carries a $699 MSRP, though we can expect more expensive models, especially as Nvidia partners release custom cooler designs and perhaps higher-clocked cards. We also have word of waterblock-cooled GTX 590s for water cooling enthusiasts.

GTX 590 Certified Hardware

To make choosing the right hardware easier for gamers, Nvidia certified several motherboards, Quad-SLI capable PSUs, and Quad-SLI ready chassis. While the list provided below only shows hardware officially certified by Nvidia as Quad-SLI capable, it is still possible to run Quad-SLI on other hardware as well.

Motherboard Requirements: Nvidia made it clear to us that having at least one expansion slot space between two GTX 590s is necessary for proper video card ventilation. This way users do not risk damaging their cards, especially during summer, when even the air outside the case can be very hot. The listed Quad-SLI certified motherboards all support spacing between their PCI-Express slots in order to maintain Nvidia’s standards of proper cooling.

PSU Requirements: Because the GTX 590 uses two 8-pin PCI-E power connectors, each rated for up to 150W, the card can draw a total of 300W from the PCI-E power connectors alone. The PCI-Express expansion slot provides an additional 75W if needed. While it is extremely unlikely that users will draw the full 375W per card, it is good to have headroom not just in wattage, but also in the amperage provided on each rail of the power supply. The first requirement for Quad-SLI is that the PSU supply four 8-pin PCI-E power connectors. Second, the PSU must meet a minimum of 30A on the 12V rails that the cards are connected to. Users with 1100W+ power supplies that have a single rail rated at 80A+ should not worry about having enough power to feed two power-hungry GTX 590s.
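The arithmetic behind these requirements can be sketched quickly; the wattage limits below are the per-connector and per-slot figures mentioned above:

```python
# Worst-case power budget for a GTX 590, per the connector limits above.
EIGHT_PIN_W = 150  # maximum draw per 8-pin PCI-E power connector
SLOT_W = 75        # maximum draw from the PCI-E x16 slot

per_card_w = 2 * EIGHT_PIN_W + SLOT_W  # 375 W worst case per card
quad_sli_w = 2 * per_card_w            # two cards in Quad-SLI
amps_12v = per_card_w / 12             # current on the 12V rails per card

print(per_card_w, quad_sli_w, round(amps_12v, 2))
```

375 W per card works out to just over 31 A at 12 V, which is where the 30 A minimum rail requirement comes from.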

Chassis Requirements: There are several chassis out there that have plenty of cooling possibilities, though Nvidia only certifies cases that have pleasant acoustics in addition to proper cooling potential. These chassis can also fit the 11-inch GTX 590 without any problem.

These products are qualified Quad-SLI products, guaranteed to work fine with the Nvidia GTX 590s in Quad-SLI. Of course, the final choice of hardware is ultimately up to the end-user. Nvidia will update this list on their website as they certify more hardware.

Nvidia GeForce GTX 590 vs. AMD HD6990

Acoustics: One of the biggest differences between the Nvidia GeForce GTX 590 and the AMD Radeon HD6990 dual-GPU graphics cards is acoustics. The GTX 590 manages to stay in a quieter 48-49dB(A) range, while the HD6990 runs up to about 58dB(A). Every 10dB increase is perceived by the ear as roughly a doubling of loudness, so at full load the GTX 590 is about half as loud as the HD6990.
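That rule of thumb (each 10 dB increase sounds roughly twice as loud) can be expressed as a simple formula; here is a small sketch using the load readings quoted above:

```python
# Perceived loudness ratio between two dB(A) readings, using the common
# rule of thumb that a +10 dB increase sounds roughly twice as loud.
def loudness_ratio(db_a: float, db_b: float) -> float:
    return 2 ** ((db_a - db_b) / 10)

# Load readings from above: ~58 dB(A) for the HD6990, ~48.5 for the GTX 590.
print(round(loudness_ratio(58.0, 48.5), 2))
```

A 9.5 dB gap works out to a ratio of about 1.9, i.e. the HD6990 is close to twice as loud at full load.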

Acoustics Testing

When we test acoustics for each video card in our system, we minimize ambient noise by running each test after 2AM. This prevents outside noise from cars and other sources; most electronics and other hardware are off at night as well, so it’s the best possible time to perform acoustic testing. We also minimize system noise by using low-RPM fans. Since the dB(A) sound meter only records the loudest noise coming from the system, as long as the other fans in the system are quieter than the video card, their minimal extra noise should not be a problem in our testing. We set up the sound meter on a small tripod exactly 12 inches away from the video card, making sure the meter pointed at the card’s fan. Then we ran the system and recorded idle fan noise in Windows. To record the highest noise coming from the fan, we ran Unigine Heaven 2.1 for 15 minutes before taking the maximum reading.

We used an Extech Instruments 407730 Sound Meter on a monkey rubber tripod.

Acoustic Levels (Fan Noise)

Video Card              | Idle dB(A) | Load dB(A)
GIGABYTE Radeon HD 6990 | 45.4       | 58.6
NVIDIA GeForce GTX 590  | 45.7       | 48.5

From the table above, we can tell that the GIGABYTE Radeon HD 6990 is roughly twice as loud as the Nvidia GTX 590 at load, since every 10dB increase sounds about twice as loud to our ears. Here is a quick video that gives an idea of how loud each card is:

Our video documents the noise level testing; however, we could not include the 100% fan speed footage for the HD 6990 because the clip appeared to be corrupted in Premiere Pro CS5. The application froze every time we added the clip to our timeline, so we deleted that part. We have never heard a video card as loud as the HD 6990: it measured 73dB at 100% fan speed, versus the 57dB we recorded with the GTX 590.

Length: The GTX 590’s board measures 11 inches in length, a great advantage over the AMD Radeon HD6990’s 12.2 inches. The problem with dual-GPU video cards is that they grow so long that they become impossible to install in mid-tower chassis. The GTX 590 should fit fine in most gaming mid-towers, but the HD6990 will only fit cases with at least 12.3 inches (about 310mm) of clearance for expansion cards. A mid-tower case like the Zalman Z9 Plus, with 300mm of clearance, would fit the GTX 590 but not the HD6990.
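The clearance check described above is straightforward to sketch (card lengths in inches, chassis clearance in millimeters; the Zalman figure is the one from the text):

```python
# Does a graphics card fit a chassis' expansion-card clearance?
MM_PER_INCH = 25.4

def fits(card_length_in: float, clearance_mm: float) -> bool:
    return card_length_in * MM_PER_INCH <= clearance_mm

# Zalman Z9 Plus: ~300 mm of clearance.
print(fits(11.0, 300))   # GTX 590, 11 in is about 279 mm
print(fits(12.2, 300))   # HD6990, 12.2 in is about 310 mm
```

The GTX 590 clears with room to spare; the HD6990 misses by about a centimeter.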


Here we can see three images. The first shows the GTX 590 next to the GIGABYTE HD6990 (top); the GTX 590 is about 1.2 inches shorter. The other two pictures compare the GTX 590 to the same HD6990 and to an ASUS GTX 580. The length difference is clearly visible in these two pictures, which also show the cooler implementation on each card.

Cooler Design: While the HD6990 and the GTX 590 both use the same general vapor chamber cooling approach, the acoustics really come down to GPU design, Thermal Design Power, and of course the fan design on the video card. The HD6990 uses a dense, small-fin fan design, whereas the GTX 590 uses a standard 9-blade fan that provides enough airflow for each vapor chamber heatsink to cool the GPUs and other components efficiently and quietly.

Supersample Anti-Aliasing and Image Quality: One very interesting long-standing option on Nvidia video cards is that they can take anti-aliasing further than AMD cards, with higher AA settings. While the AMD HD6990 only allows up to 8xAA in in-game graphics settings, Nvidia cards provide up to 32xAA. With the GTX 590, this limit is doubled to a maximum of 64xAA, and to 128xAA with two GTX 590s.

With two GF110 GPUs, the tessellation engines have also doubled. With 32 tessellation engines, the GTX 590 should deliver phenomenal performance in heavily tessellated scenes. Here is a picture from a new game coming out this year from Epic Games, which uses Unreal Engine 3 and DirectX 11 tessellation to create a morphing effect of the face in real time.

What is CUDA?

CUDA is NVIDIA’s parallel computing architecture that enables dramatic increases in computing performance by harnessing the power of the GPU (graphics processing unit).

With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are finding broad-ranging uses for CUDA, including image and video processing, computational biology and chemistry, fluid dynamics simulation, CT image reconstruction, seismic analysis, ray tracing and much more.

Background

Computing is evolving from “central processing” on the CPU to “co-processing” on the CPU and GPU. To enable this new computing paradigm, NVIDIA invented the CUDA parallel computing architecture that is now shipping in GeForce, ION, Quadro, and Tesla GPUs, representing a significant installed base for application developers.

In the consumer market, nearly every major consumer video application has been, or will soon be, accelerated by CUDA, including products from Elemental Technologies, MotionDSP and LoiLo, Inc.

CUDA has been enthusiastically received in the area of scientific research. For example, CUDA now accelerates AMBER, a molecular dynamics simulation program used by more than 60,000 researchers in academia and pharmaceutical companies worldwide to accelerate new drug discovery.

In the financial market, Numerix and CompatibL announced CUDA support for a new counterparty risk application and achieved an 18X speedup. Numerix is used by nearly 400 financial institutions.

An indicator of CUDA adoption is the ramp of the Tesla GPU for GPU computing. There are now more than 700 GPU clusters installed around the world at Fortune 500 companies ranging from Schlumberger and Chevron in the energy sector to BNP Paribas in banking.

And with the recent launches of Microsoft Windows 7 and Apple Snow Leopard, GPU computing is going mainstream. In these new operating systems, the GPU will not only be the graphics processor, but also a general purpose parallel processor accessible to any application.


 

 

PhysX

Some Games that use PhysX (Not all inclusive)

Batman: Arkham Asylum
Watch Arkham Asylum come to life with NVIDIA® PhysX™ technology! You’ll experience ultra-realistic effects such as pillars, tile, and statues that dynamically destruct with visual explosiveness. Debris and paper react to the environment and the force created as characters battle each other; smoke and fog will react and flow naturally to character movement. Immerse yourself in the realism of Batman Arkham Asylum with NVIDIA PhysX technology.
Darkest of Days
Darkest of Days is a historically based FPS in which gamers travel back and forth through time to experience history’s “darkest days”. The player uses period and future weapons while fighting through some of the epic battles in history. The time travel aspects of the game lead the player on missions where, at times, they must fight on both sides of a war.
Sacred 2 – Fallen Angel
In Sacred 2 – Fallen Angel, you assume the role of a character and delve into a thrilling story full of side quests and secrets that you will have to unravel. Breathtaking combat arts and sophisticated spells are waiting to be learned. A multitude of weapons and items will be available, and you will choose which of your character’s attributes you will enhance with these items in order to create a unique and distinct hero.
Dark Void
Dark Void is a sci-fi action-adventure game that combines an adrenaline-fuelled blend of aerial and ground-pounding combat. Set in a parallel universe called “The Void,” players take on the role of Will, a pilot dropped into incredible circumstances within the mysterious Void. This unlikely hero soon finds himself swept into a desperate struggle for survival.
Cryostasis
Cryostasis puts you in 1968, at the Arctic Circle in the Russian North. The main character, meteorologist Alexander Nesterov, is incidentally caught inside the old nuclear icebreaker North Wind, frozen in the ice desert for decades. Nesterov’s mission is to investigate the mystery of the ship captain’s death, or, as it may well be, a murder.
Mirror’s Edge
In a city where information is heavily monitored, agile couriers called Runners transport sensitive data away from prying eyes. In this seemingly utopian paradise of Mirror’s Edge, a crime has been committed and now you are being hunted.

What is NVIDIA PhysX Technology?
NVIDIA® PhysX® is a powerful physics engine enabling real-time physics in leading-edge PC games. PhysX software is widely adopted, with over 150 games using it and more than 10,000 developers working with it. PhysX is optimized for hardware acceleration by massively parallel processors. GeForce GPUs with PhysX provide a dramatic increase in physics processing power, taking gaming physics to the next level.

What is physics for gaming and why is it important?
Physics is the next big thing in gaming. It’s all about how objects in your game move, interact, and react to the environment around them. Without physics, objects in many of today’s games just don’t act the way you’d want or expect them to in real life. Currently, most of the action is limited to pre-scripted or “canned” animations triggered by in-game events like a gunshot striking a wall. Even the most powerful weapons can leave little more than a smudge on the thinnest of walls, and every opponent you take out falls in the same pre-determined fashion. Players are left with a game that looks fine, but is missing the sense of realism necessary to make the experience truly immersive.

With NVIDIA PhysX technology, game worlds literally come to life: walls can be torn down, glass can be shattered, trees bend in the wind, and water flows with body and force. NVIDIA GeForce GPUs with PhysX deliver the computing horsepower necessary to enable true, advanced physics in the next generation of game titles making canned animation effects a thing of the past.

Which NVIDIA GeForce GPUs support PhysX?
The minimum requirement to support GPU-accelerated PhysX is a GeForce 8-series or later GPU with a minimum of 32 cores and a minimum of 256MB dedicated graphics memory. However, each PhysX application has its own GPU and memory recommendations. In general, 512MB of graphics memory is recommended unless you have a GPU that is dedicated to PhysX.

How does PhysX work with SLI and multi-GPU configurations?
When two, three, or four matched GPUs are working in SLI, PhysX runs on one GPU, while graphics rendering runs on all GPUs. The NVIDIA drivers optimize the available resources across all GPUs to balance PhysX computation and graphics rendering. Therefore users can expect much higher frame rates and a better overall experience with SLI.

A new configuration that’s now possible with PhysX is 2 non-matched (heterogeneous) GPUs. In this configuration, one GPU renders graphics (typically the more powerful GPU) while the second GPU is completely dedicated to PhysX. By offloading PhysX to a dedicated GPU, users will experience smoother gaming.

Finally, we can put the above two configurations into one PC: SLI plus a dedicated PhysX GPU. As in the two heterogeneous GPU case, graphics rendering takes place on the GPUs connected in SLI while the non-matched GPU is dedicated to PhysX computation.

Why is a GPU good for physics processing?
The multithreaded PhysX engine was designed specifically for hardware acceleration in massively parallel environments. GPUs are the natural place to compute physics because, like graphics, physics processing is driven by thousands of parallel computations. Today’s NVIDIA GPUs have as many as 480 cores, so they are well-suited to take advantage of PhysX software. NVIDIA is committed to making the gaming experience exciting, dynamic, and vivid. The combination of graphics and physics impacts the way a virtual world looks and behaves.

 

Direct Compute

DirectCompute Support on NVIDIA’s CUDA Architecture GPUs

Microsoft’s DirectCompute is a new GPU computing API that runs on NVIDIA’s current CUDA architecture under both Windows Vista and Windows 7. DirectCompute is supported on current DX10-class and DX11 GPUs. It allows developers to harness the massive parallel computing power of NVIDIA GPUs to create compelling computing applications in consumer and professional markets.

As part of the DirectCompute presentation at the Game Developers Conference (GDC) in March 2009 in San Francisco, CA, NVIDIA showed three demos running on a currently available NVIDIA GeForce GTX 280 GPU (see links below).

As a processor company, NVIDIA enthusiastically supports all languages and APIs that enable developers to access the parallel processing power of the GPU. In addition to DirectCompute and NVIDIA’s CUDA C extensions, other programming models are available, including OpenCL™. A Fortran language solution is also in development and is available in early access from The Portland Group.

NVIDIA has a long history of embracing and supporting standards, since a wider choice of languages improves the number and scope of applications that can exploit parallel computing on the GPU. With C and Fortran language support here today, and OpenCL and DirectCompute available this year, GPU computing is now mainstream. NVIDIA is the only processor company to offer this breadth of development environments for the GPU.

 

OpenCL

 

 

OpenCL (Open Computing Language) is a new cross-vendor standard for heterogeneous computing that runs on the CUDA architecture. Using OpenCL, developers can harness the massive parallel computing power of NVIDIA GPUs to create compelling computing applications. As the OpenCL standard matures and is supported on processors from other vendors, NVIDIA will continue to provide the drivers, tools, and training resources developers need to create GPU-accelerated applications.

OpenCL was submitted to the Khronos Group by Apple, in partnership with NVIDIA, in the summer of 2008, with the goal of forging a cross-platform environment for general purpose computing on GPUs. NVIDIA has chaired the industry working group that defines the OpenCL standard since its inception and shipped the world’s first conformant GPU implementation for both Windows and Linux in June 2009.

OpenCL for GPU Nbody Demo

NVIDIA has been delivering OpenCL support in end-user production drivers since October 2009, supporting OpenCL on all 180,000,000+ CUDA architecture GPUs shipped since 2006.

NVIDIA’s Industry-leading support for OpenCL:

2010

March – NVIDIA releases updated R195 drivers with the Khronos-approved ICD, enabling applications to use OpenCL on NVIDIA GPUs and other processors at the same time 

January – NVIDIA releases updated R195 drivers, supporting developer-requested OpenCL extensions for Direct3D9/10/11 buffer sharing and loop unrolling 

January – Khronos Group ratifies the ICD specification contributed by NVIDIA, enabling applications to use multiple OpenCL implementations concurrently 

2009 

November – NVIDIA releases R195 drivers with support for optional features in the OpenCL v1.0 specification such as double precision math operations and OpenGL buffer sharing 

October – NVIDIA hosts the GPU Technology Conference, providing OpenCL training for an additional 500+ developers 

September – NVIDIA completes OpenCL training for over 1000 developers via free webinars 

September – NVIDIA begins shipping OpenCL 1.0 conformant support in all end user (public) driver packages for Windows and Linux 

September – NVIDIA releases the OpenCL Visual Profiler, the industry’s first hardware performance profiling tool for OpenCL applications 

July – NVIDIA hosts first “Introduction to GPU Computing and OpenCL” and “Best Practices for OpenCL Programming, Advanced” webinars for developers 

July – NVIDIA releases the NVIDIA OpenCL Best Practices Guide, packed with optimization techniques and guidelines for achieving fast, accurate results with OpenCL 

July – NVIDIA contributes source code and specification for an Installable Client Driver (ICD) to the Khronos OpenCL Working Group, with the goal of enabling applications to use multiple OpenCL implementations concurrently on GPUs, CPUs and other types of processors

June – NVIDIA releases the industry’s first OpenCL 1.0 conformant drivers and developer SDK

April – NVIDIA releases the industry’s first OpenCL 1.0 GPU drivers for Windows and Linux, accompanied by the 100+ page NVIDIA OpenCL Programming Guide, an OpenCL JumpStart Guide showing developers how to port existing code from CUDA C to OpenCL, and OpenCL developer forums

2008

December – NVIDIA shows off the world’s first OpenCL GPU demonstration, running on an NVIDIA laptop GPU at SIGGRAPH Asia 

June – Apple submits the OpenCL proposal to the Khronos Group; the OpenCL Working Group is formed, with NVIDIA volunteering to chair it

2007 

December – NVIDIA Tesla product wins PC Magazine Technical Excellence Award 

June – NVIDIA launches the Tesla C870, the first GPU designed for High Performance Computing 

May – NVIDIA releases first CUDA architecture GPUs capable of running OpenCL in laptops & workstations

2006 

November – NVIDIA releases the first CUDA architecture GPU capable of running OpenCL

Pictures & Impressions

Due to the structural complexity of the Nvidia GTX 590, we took an extensive set of pictures covering every plausible angle to give our readers a complete overview of the card. While the top cover looks deceptively simple, the placement of the fan in the middle of the board is no coincidence. Compared to the GTX 570 and 580, the airflow is completely different. The obvious reason is the introduction of two GPUs onto the PCB, which is bound to significantly raise the overall temperature of the card. To accommodate proper heat dissipation, the GTX 590 has been designed with two vapor chambers. With openings on both the front and back end of the card, as well as the exhaust fan, heat is able to escape effectively without creating a lot of noise. The ability to run quietly is yet another distinct feature of the GTX 590. Aside from air cooling, the GTX 590 is also equipped with a back plate to absorb heat trapped in the PCB around the GPUs, where temperatures could easily hit 100 degrees Celsius. Both plates are layered over the GF110 GPUs, by far the hottest parts of this graphics card. Symmetry can also be observed in the layout of the components on the back of the PCB. The voltage chips in the middle of the card are lined up symmetrically on both sides, as are other components, which suggests that this card is essentially the fusion of two GTX 580s with lower GPU and memory clock speeds.

 

The Nvidia GTX 590 has 4 video outputs: 3 DVI ports and a single mini DisplayPort, making this card a suitable contender for running games in panoramic view (Nvidia 3D Vision Surround). In addition, the card is SLI compatible. Note that each card has an estimated TDP of 365W and will require a 1100W+ power supply with high amperage on the 12V rails to fully support a Quad-SLI configuration. Due to the high power requirements, the Nvidia GTX 590 is equipped with two 8-pin power connectors. Each 8-pin connector can draw up to 150W, providing a total of 300W; the other 75W are provided by the PCI-E slot.
 
The GTX 590 is equipped with twin vapor chamber coolers. A vapor chamber works much like a conventional heatsink: the bottom of the chamber is a copper plate that lies directly on the components, in particular the GF110 chip, while the top of the chamber connects to aluminum fins that serve as the secondary stage of heat absorption. Heat radiates out of the card as the air stream crosses through the aluminum fins. This highly efficient system allows cooling capability similar to that of the GTX 580.

During the basic overview we pointed out that the card looked symmetrical in terms of component placement on the back of the PCB. After removing the heatsink, the symmetry suggested earlier is confirmed: on the inside, the GTX 590 looks essentially like two GTX 580s fused onto the same board. Each side of the PCB carries a GF110 chip along with its surrounding memory chips. Altogether there are 12x 256 MB memory chips, allowing the card to utilize up to 3 GB of memory. The space between the two GPUs holds 10-phase advanced digital power controllers as well as 4 capacitors for each part. These power controllers allow for over-volting, though extra precaution is needed. The green chip between the two GPUs is an NF200 chip, previously unseen in the 500 series of cards. It provides a x16 PCI-E lane connection to each GPU for maximum performance, and is the same component found on most 3-Way SLI compatible motherboards.

Finally, all the components sit on a 12-layer PCB that uses two ounces of copper for each of the board’s power and ground layers, which helps maximize signal integrity between the components. This two-ounce copper design is very similar to GIGABYTE’s Ultra Durable 3 PCBs.

Testing & Methodology

The OS we use is Windows 7 Pro SP1 64bit with all patches and updates applied. We also use the latest drivers available for the motherboard and any devices attached to the computer. We do not disable background tasks or tweak the OS or system in any way. We turn off drive indexing and daily defragging. We also turn off Prefetch and Superfetch. This is not an attempt to produce bigger benchmark numbers. Drive indexing and defragging can interfere with testing and produce confusing numbers. If a test were to be run while a drive was being indexed or defragged, and then the same test was later run when these processes were off, the two results would be contradictory and erroneous. As we cannot control when defragging and indexing occur precisely enough to guarantee that they won’t interfere with testing, we opt to disable the features entirely.

Prefetch tries to predict what users will load the next time they boot the machine by caching the relevant files and storing them for later use. We want to see how a program runs without any of its files cached, and disabling the feature means we do not have to clear the prefetch cache before each test run to get accurate numbers. Lastly, we disable Superfetch. Superfetch loads often-used programs into memory, and is one of the reasons Windows Vista occupies so much memory: Vista fills the memory in an attempt to predict what users will load. Having one test run with files cached and another with files un-cached would produce inaccurate numbers. Again, since we can’t control its timing precisely enough, we turn it off. Because these four features can potentially interfere with benchmarking and are out of our control, we disable them. We do not disable anything else.

We ran each test a total of three times and report the average of the three scores. Benchmark screenshots are of the median result. Anomalous results were discarded and the benchmarks rerun.
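As a quick illustration of how the reporting above works, the snippet below averages three runs and identifies the median run (the one whose screenshot we would show). The numbers are made up for illustration and are not actual results:

```python
from statistics import mean, median

# Three hypothetical benchmark runs in frames per second (illustrative only)
runs = [112.4, 110.8, 113.1]

reported = round(mean(runs), 1)  # the score we report: the average of all three
screenshot_run = median(runs)    # the median run, used for the benchmark screenshot

print(reported)        # -> 112.1
print(screenshot_run)  # -> 112.4
```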

Please note that due to new driver releases with performance improvements, we re-benchmarked every card shown in the results section. The results here will differ from previous reviews because of the driver performance increases.

Acoustics Testing

When we test acoustics for each video card in our system, we minimize ambient noise by running each test after 2 AM. This prevents outside noise from cars and other sources; most electronics and other hardware are also off at night, making it the best possible time to perform acoustic testing. We further minimize noise by using low-RPM case fans. Since the dB(A) sound meter only records the highest noise coming from the system, as long as the case fans are quieter than the video card, their minimal extra noise should not be a problem in our testing. We set up the sound meter on a small tripod exactly 9 inches away from the video card, pointing it at the card’s fan. We then run the system and record idle fan noise while just running Windows. To record the highest fan noise, we run Unigine Heaven 2.1 for 15 minutes before taking a maximum reading.

We used an Extech Instruments 407730 Sound Meter on a monkey rubber tripod.

Temperature and Power Consumption Testing

To measure the temperature of the video card, we used MSI Afterburner and ran Metro 2033 for 10 minutes to find the Load temperatures for the video cards. The highest temperature was recorded. After playing for 10 minutes, Metro 2033 was turned off and we let the computer sit at the desktop for another 10 minutes before we measured the idle temperatures.

To get our power consumption numbers, we plugged in our Kill A Watt power measurement device and took the idle reading after leaving the system at the desktop for about 15 minutes during our temperature testing. We then ran Metro 2033 for a few minutes and recorded the highest power consumption.

Test Rig

Test Setup
Case Silverstone Temjin TJ10
CPU

Intel Core i7 2600K @ 4.8GHz

Motherboard

ASUS P8P67 WS Revolution

Ram

Patriot Gamer 2 Series 1600 MHz Dual-Channel 16GB (4x4GB) Memory Kit

CPU Cooler

Heatblocker Rev 3.0 LGA 1156 CPU Waterblock

Thermochill 240 Radiator

Black Ice Extreme 120 Radiator

Laing D5 Variable Speed Pump

3x Quiet Zalman ZM-F3 FDB 120mm Fans

Hard Drives

4x Seagate Cheetah 600GB 10K 6Gb/s Hard Drives

2x Western Digital RE3 1TB 7200RPM 3Gb/s Hard Drives

SSD 1x Zalman SSD0128N1 128GB SandForce SSD
Optical ASUS DVD-Burner
GPU

Nvidia GeForce GTX 590 (Dual-GPU Video Card)

GIGABYTE Radeon HD6990 (Dual-GPU Video Card)

2x Nvidia GeForce GTX580 in 2-Way SLI

Nvidia GeForce GTX 580 1536MB

ASUS ENGTX580 1536MB

Nvidia GeForce GTX 570 1536MB

Nvidia GeForce GTX 560 Ti

GIGABYTE GeForce GTX 480 SOC 1536MB

Galaxy GeForce GTX 480 1536MB

2x Palit GeForce GTX460 Sonic Platinum 1GB in 2-Way SLI

Palit GeForce GTX460 Sonic Platinum 1GB

ASUS Radeon HD6870

AMD Radeon HD5870

Case Fans

1x Quiet Zalman Shark’s Fin ZM-SF3 120mm Fan – Top

1x Silverstone 120mm fan – Front

1x Quiet Zalman ZM-F3 FDB 120mm Fan – Hard Drive Compartment

Additional Cards
LSI 3ware SATA + SAS 9750-8i 6Gb/s RAID Card
PSU

Sapphire PURE 1250W Modular Power Supply

Mouse Razer Mamba
Keyboard Logitech G15

Synthetic Benchmarks & Games

We will use the following applications to benchmark the performance of the Nvidia GeForce GTX 590 video card.

Synthetic Benchmarks & Games
3DMark Vantage
3DMark 11
Metro 2033
Lost Planet 2
Civilization V
Dirt 2
HAWX 2
Crysis Warhead
Just Cause 2
Unigine Heaven 2.1

Overclocking the GTX 590

Overclocking the Nvidia GTX 590 is a snap, though there is currently no full support for voltage tweaking. We expect full voltage tweaking options in the coming days; however, Nvidia strongly discourages overvolting the card, especially with its current cooler. They wrote:

At stock voltage the GTX 590 still provides lots of headroom for OC’ing; generally the boards have 10-15% of OC’ing headroom, with some boards going even further on stock voltage.

If you do want to overvolt, we would highly recommend liquid cooling.

For OC’ers who want to overvolt, we’ve worked with a number of partners who will be providing GTX 590 waterblocks (Danger Den, Coolit, Koolance, EK Waterblocks), and EVGA and our system builders should have liquid-cooled solutions on the market shortly.

 

Without much trouble, we were able to achieve a stable overclock of 709 MHz GPU clock, 3812 MHz memory clock (data rate), and 1418 MHz shader clock. While these gains might not seem like much, they were achieved at stock voltage. With overvolting of the GPUs, higher frequencies are possible, which may even surpass two GTX 580s in SLI.
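As a sanity check on the clocks above, the overclock works out to roughly a 17% gain on the core and shader domains and about 12% on the memory, comfortably above Nvidia’s quoted 10-15% typical headroom:

```python
# Stock vs. achieved overclock frequencies for our GTX 590 sample, in MHz
stock = {"GPU": 607, "Shader": 1215, "Memory": 3414}
oc    = {"GPU": 709, "Shader": 1418, "Memory": 3812}

for domain in stock:
    gain = (oc[domain] - stock[domain]) / stock[domain] * 100
    print(f"{domain}: +{gain:.1f}%")
# -> GPU: +16.8%, Shader: +16.7%, Memory: +11.7%
```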

TEMPERATURES


Video Cards – Temperatures (Ambient 23C)     Idle         Load (Fan Speed)
ASUS GeForce GTX 580                         38C          73C (66%)
NVIDIA GeForce GTX 590 (GPU 1 / GPU 2)       40C / 42C    85C / 85C
GIGABYTE Radeon HD 6990 (GPU 1 / GPU 2)      48C / 56C    84C / 84C

The load temperatures of the GTX 590 and HD 6990 were extremely close, but the difference showed in the acoustics tests, which can be found on Page 2 of this review. Also, while the GIGABYTE Radeon HD 6990 was quieter at idle, this came at the cost of a higher idle temperature and a wider temperature gap between its two GPUs. Overall the GTX 590 ran only 1 degree Celsius hotter than the HD 6990 at full load, but at half the acoustic noise. This is very impressive from Nvidia.

POWER CONSUMPTION



The power consumption results did not surprise us; a lower-RPM fan and a different GPU architecture can both affect power draw. While the GTX 590 had a higher idle power consumption, its load power consumption was 3W lower. The difference is minimal, so neither card has a clear advantage in power consumption.

3DMark Vantage

The newest video benchmark from the gang at Futuremark. This utility is still a synthetic benchmark, but one that more closely reflects real world gaming performance. While it is not a perfect replacement for actual game benchmarks, it has its uses. We tested our cards at the ‘Performance’ setting.

Let’s start with our trusty 3DMark Vantage. The NVIDIA GTX 590 scored 8% higher than the HD 6990 and 6% lower than a pair of GTX 580s.
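For readers curious how we express these deltas, the percentages are simple relative differences between overall scores. A small sketch, using illustrative scores rather than our actual Vantage numbers:

```python
def pct_faster(a: float, b: float) -> float:
    """How much faster score a is than score b, as a percentage (negative = slower)."""
    return (a - b) / b * 100

# Illustrative scores only, chosen to mirror the relative results in the text
gtx590, hd6990, sli580 = 27000, 25000, 28700

print(round(pct_faster(gtx590, hd6990)))  # GTX 590 vs HD 6990 -> 8
print(round(pct_faster(gtx590, sli580)))  # GTX 590 vs two GTX 580s in SLI -> -6
```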

3DMark 11

“3DMark 11 is the latest version of the world’s most popular benchmark for measuring the graphics performance of gaming PCs. Designed for testing DirectX 11 hardware running on Windows 7 and Windows Vista the benchmark includes six all new benchmark tests that make extensive use of all the new features in DirectX 11 including tessellation, compute shaders and multi-threading. After running the tests 3DMark gives your system a score with larger numbers indicating better performance. Trusted by gamers worldwide to give accurate and unbiased results, 3DMark 11 is the best way to test DirectX 11 under game-like loads.”

3DMark 11 shows the GTX 590 is 5% slower than the HD 6990 in the graphics test, but 8% faster in the Physics test. NVIDIA cards are often much better at Physics than AMD cards, so it is not surprising that the GTX 590 comes out ahead there. However, it was interesting to see the GTX 590 fall short of the HD 6990 in a DirectX 11 graphics benchmark.

The Extreme test shows the same result as the Performance test. Once again the GTX 590 showed a higher Physics test score, while the rest of the 3D tests were lower than the HD 6990.

Unigine Heaven 2.1

Unigine Heaven is a benchmark program based on Unigine Corp’s latest engine, Unigine. The engine features DirectX 11, Hardware tessellation, DirectCompute, and Shader Model 5.0. All of these new technologies combined with the ability to run each card through the same exact test means this benchmark should be in our arsenal for a long time.

 
 
At 1680×1050, the HD 6990 comes in ahead of the GTX 590 with 2% greater performance with normal tessellation.
 

At higher resolution the HD 6990 still maintains the lead with normal tessellation. However, under extreme tessellation, we can see the GTX 590 is 13% faster than the HD6990. As tessellation becomes increasingly important in game graphics, the GTX 590 will most likely be a better investment to provide good frame rates at extremely high levels of visual detail.

CRYSIS WARHEAD

Crysis Warhead is the standalone expansion to Crysis, featuring an updated CryENGINE™ 2 with better optimization. It was one of the most anticipated titles of 2008.

The Settings we use for benchmarking Crysis Warhead
 
 
The GTX 590 narrowly edges out the HD6990 in Crysis Warhead. Without any visual enhancement, the card is about 3% faster than the HD6990.
 
 
 
However, as the visual details are increased, the difference between the two cards narrows considerably, and the HD6990 even outperforms the GTX 590 by a few frames per second at one point.
 
 
At 1920×1200, the HD6990 edges out the GTX 590. The HD6990 seems to do a better job with AA and AF enabled in Crysis Warhead.
 

Just Cause 2

“Just Cause 2 is an open world action-adventure video game developed by Swedish studio Avalanche Studios, published by Eidos Interactive and Square Enix, and released in North America on March 23, 2010. It is the sequel to the 2006 video game Just Cause.

Just Cause 2 employs the Avalanche Engine 2.0, an updated version of the engine used in Just Cause. The game is set on the other side of the world from the original Just Cause, on the fictional island of Panau in Southeast Asia. Panau has varied terrain, from desert to alpine to rainforest. Rico Rodriguez returns as the protagonist, aiming to overthrow the evil dictator Pandak “Baby” Panay and confront his former mentor, Tom Sheldon.”

The GTX 590 dominates this benchmark with as much as 15% higher performance over the HD6990.

Lost Planet 2

“Lost Planet 2 is a third-person shooter video game developed and published by Capcom. The game is the sequel to Lost Planet: Extreme Condition, taking place ten years after the events of the first game, on the same fictional planet.”

This is another test where the GTX 590 is the clear winner, coming in about 30% faster than the HD 6990.

Metro 2033

Metro 2033 is an action-oriented video game blending survival horror and first-person shooter elements. The game is based on the novel Metro 2033 by Russian author Dmitry Glukhovsky. It was developed by 4A Games in Ukraine and released in March 2010 for the Xbox 360 and Microsoft Windows. In March 2009, 4A Games announced a partnership with Glukhovsky to collaborate on the game. The game was announced a few months later at the 2009 Games Convention in Leipzig; a first trailer came along with the announcement. When the game was announced, it had the subtitle “The Last Refuge,” but this subtitle is no longer being used.

The two dual-GPU cards traded places in this test, with the GTX 590 taking the lead at 1680×1050 resolution and the HD6990 taking the lead at 1920×1200. It would be hard to tell which card performs better through gameplay, and users will most likely not see a noticeable difference between the two cards in this game.

HAWX 2

Tom Clancy’s H.A.W.X. 2 plunges fans into an explosive environment where they can become elite aerial soldiers in control of the world’s most technologically advanced aircraft. The game will appeal to a wide array of gamers as players will have the chance to control exceptional pilots trained to use cutting edge technology in amazing aerial warfare missions.

Developed by Ubisoft, H.A.W.X. 2 challenges you to become an elite aerial soldier in control of the world’s most technologically advanced aircraft. The aerial warfare missions enable you to take to the skies using cutting edge technology.

 

NVIDIA cards traditionally do very well in HAWX 2, and the GTX 590 is no exception.

Dirt 2

The GTX 590 again shows a very nice result in Dirt 2, coming in about 40% faster than the HD6990, with 8xAA enabled at 1920×1200.

Civilization V

With DirectX 11 adding support for tessellation, the GTX 590 clearly is able to exercise its full potential. The GTX 590 is again able to perform about 25% faster than the HD6990.

Conclusion

The Nvidia GeForce GTX 590 has impressed us in many ways, including its extremely quiet operation at full load, reaching only 48 dB(A) compared to the GIGABYTE Radeon HD 6990’s 58 dB(A). The overall PCB design, with its two ounces of copper, and the component layout were impressive as well. Nvidia proved that it is possible to fit 3GB of memory and two GF110 GPUs on a single PCB while keeping the card only 11 inches long, compared to the full-length 12.2-inch HD 6990. The AMD card’s length is especially discouraging for customers with mid-tower enclosures, as very few mid-tower cases have the requisite 310mm of clearance. The Zalman Z9 Plus, a case specifically designed with high PCI-E slot clearance, offers only 300mm, meaning it would fit the GTX 590 but not the HD 6990.
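The clearance figures are easy to verify with a unit conversion: the HD 6990’s 12.2 inches works out to about 310mm, while the GTX 590’s 11 inches is roughly 279mm, comfortably inside the Zalman Z9 Plus’s 300mm limit:

```python
MM_PER_INCH = 25.4

hd6990_mm = round(12.2 * MM_PER_INCH)  # HD 6990 length -> 310 mm
gtx590_mm = round(11 * MM_PER_INCH)    # GTX 590 length -> 279 mm

print(hd6990_mm, gtx590_mm)
print(gtx590_mm <= 300 < hd6990_mm)    # only the GTX 590 fits a 300mm case -> True
```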

Giving gamers the option of Nvidia 3D Vision Surround from a single video card is also very impressive, though those who want to experience such a setup while pushing the gaming envelope will most likely need a second GTX 590 for a Quad SLI setup. Additionally, even though Nvidia advertises the card as having 3GB of memory, each GPU can only address its own 1.5GB, so the card effectively has the same usable memory as two GTX 580s in SLI.

Though the Nvidia GTX 590 is a nice piece of hardware, the AMD Radeon HD 6990 still holds a performance advantage in some games. However, the GTX 590 has its own advantage in tessellation: in tessellation-heavy games and benchmarks, it leaves the HD 6990 in the dust. This matters for upcoming titles, as tessellation is growing in importance in modern games, a trend that is sure to continue.

It is interesting to note that even though, design-wise, this card resembles two GTX 580s on one board, in terms of performance this is hardly the case. The GTX 590 falls just short of two GTX 580s in SLI in several areas due to its downclocked specifications. The tradeoff is less noise from the video card, though the importance of noise versus performance varies from user to user; those who wear headsets or use loud speakers while gaming may not notice fan noise. Nonetheless, this card still poses serious competition to two GTX 580s in SLI, staying within 10 FPS of the SLI setup in many benchmarks and even outperforming it in some. Considering that two GTX 580s in SLI retail for around $1000 and require significantly better ventilation, because the bottom card tends to dissipate heat onto the top card, the GTX 590 may be a good option for those who want that level of performance without the steep price.

The overclocking potential of this card is not yet fully known, since no overclocking tools yet support overvolting the GTX 590, but overclocking at stock voltage is fairly easy. It does not yield a great improvement on its own, but overvolting may, especially once Nvidia partners start producing cards with aftermarket coolers and companies like EVGA, CoolIT, Danger Den and EK start selling waterblocks for the GTX 590.

On a final note, even though both the Nvidia GTX 590 and the AMD HD 6990 are very nice cards, the lower noise and shorter card length make us lean towards the GTX 590, especially knowing that GTX 590s will start at $699, just under the HD 6990’s current $709.99 price at Newegg.com.
