The NVIDIA GTX 1060 6GB Review
Share:
Author: SKYMTL
Date: July 18, 2016
Product Name: GTX 1060 6GB Founders Edition
Warranty: 3 Years
NVIDIA’s GeForce lineup has been experiencing something of a renaissance as of late due to the relative strengths of their Pascal architecture. The GTX 1080 initially shocked the gaming market with its ability to move the high end performance yardsticks forward by a country mile while the GTX 1070 proved to be an awesome, more affordable encore presentation. Given the fact those two cards were geared towards customers who could afford leading-edge solutions and AMD has been making positive inroads with their substantially more affordable RX480, it was time for NVIDIA to launch their own salvo into the sub $300 market. Enter the GeForce GTX 1060 6GB.
I’ve said it once but it bears repeating: while everyone loves reading about expensive flagship products since they set standards and expectations for lower end products, most won’t actually buy them. Rather, they’ll settle upon a solution that offers an optimal blend of performance and price which is why the $199 to $299 segment has historically been so popular. That’s where the GTX 1060 factors into the equation since it is a $249 graphics card that’s supposed to take up the mantle from NVIDIA’s own GTX 960, one of the most popular GPUs of all time.
Despite the many veins of similarity between the GTX 960 and its replacement, the GTX 1060 is somewhat unique in the way its launch has been handled. Whereas the gap between a new core architecture's debut and the launch of its more efficient derivatives is typically four to six months, it's been barely two months since the GTX 1080 was introduced. They say the greatest innovations are borne out of necessity, and the GTX 1060 is certainly needed to combat the RX480.
At the GTX 1060’s heart beats the aforementioned GP106 core. If you read our full architecture overview in the initial Pascal launch article, you’ll understand what’s going on behind the pretty block diagram but if you haven’t…head over there immediately.
Consisting of 4.4 billion transistors, the GP106 core highlights the advantages of moving to a 16nm FinFET manufacturing process; despite featuring nearly 50% more transistors than the Maxwell-based GM206, it is actually about 20% smaller at just 200mm². Essentially, this denser design allowed NVIDIA to cram additional components into a strictly limited die space. In its fully enabled form, the GP106 within NVIDIA’s GTX 1060 features ten SMs (two more than the GTX 960), 1.5MB of shared L2 cache, six ROP partitions each containing eight ROPs, and six 32-bit memory controllers.
One item that is conspicuous by its absence is SLI compatibility, something NVIDIA feels isn’t particularly relevant for this price range. I beg to differ with this stance, but with DX12’s numerous developer-side multi-GPU controls, we could see vendor-specific technologies like Crossfire and SLI go the way of the dodo. Unfortunately, that may take time and for now, buyers of a GTX 1060 need to be aware they won’t be able to boost their in-game performance if they’re ever able to afford a second card.
Like its larger siblings, the GTX 1060 6GB has been designed from the ground up to be a significant improvement over its predecessor. It will also come in two different versions: there’s a Founders (or reference) Edition that retails for $299 while board partners will have custom versions that go for anywhere between $249 and $299. It should also be noted that unlike other recent NVIDIA launches, the GTX 1060 Founders Edition will ONLY be available directly from NVIDIA, while board partners will have plenty of custom offerings right at launch. With that being said, from a specifications perspective there’s a wide gulf separating this $249 / $299 card from its larger, more capable siblings.
With 1280 CUDA cores and 80 texture units, the GTX 1060 is in a completely different league from the GTX 960 it is meant to replace, boasting on-paper stats that are closer to those of a GTX 970. When you add in the increased clock speeds granted by the 16nm manufacturing process and triple the amount of memory (not taking into account the more expensive GTX 960 4GB, of course), it isn’t hard to see why NVIDIA claims this new card will compete against the GTX 980. That’s mightily impressive given the GTX 1060’s meager TDP of just 120W.
The specs and performance potential here are admittedly impressive, but there are two areas in which people may question NVIDIA’s decisions for the GTX 1060: price and memory allotment. Let’s tackle the latter first since there’s very little to worry about. 6GB of GDDR5 operating across a 192-bit memory interface represents a pretty significant step forward for NVIDIA’s offerings in this category when you consider how well the GTX 960 performed even when saddled with a comparatively paltry amount of memory bandwidth. This card even compares quite favorably to the GTX 980, largely due to the higher GDDR5 speed bin. However, in a segment that is quite vulnerable to marketable features taking precedence over actual performance metrics, the RX480’s 8GB framebuffer presents a golden opportunity for the next AMD PR campaign.
Regardless of perception, NVIDIA has taken several steps to mitigate any memory bottlenecks. Not only is the GTX 1060 meant to live in a world where the comparatively memory-light 1080P resolution is common, but there are several technologies built into the foundation of the Pascal architecture which optimize the memory subsystem’s theoretical throughput. For example, enhanced delta color compression algorithms can boost effective bandwidth by more efficiently utilizing onboard resources.
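To make the idea concrete, here is a toy Python sketch of delta encoding; this is not NVIDIA's proprietary algorithm (whose details aren't public), just an illustration of the principle: store a tile's first pixel as-is and only the differences between neighbours afterwards, so smooth regions compress into values that need far fewer bits.

```python
# Toy delta compression sketch: keep the first pixel of a tile verbatim,
# then store only the difference between each pixel and its predecessor.
def delta_encode(tile):
    deltas = [tile[0]]
    for prev, cur in zip(tile, tile[1:]):
        deltas.append(cur - prev)
    return deltas

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

tile = [118, 119, 119, 120, 121, 121, 122, 122]  # a smooth gradient
encoded = delta_encode(tile)
assert delta_decode(encoded) == tile
# Every delta after the first entry is 0 or 1, so each would fit in a
# couple of bits instead of a full 8-bit value: that gap is where the
# bandwidth saving comes from on compressible framebuffer content.
```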
As for the GTX 1060’s pricing structure, there shouldn’t be any surprises here, but that doesn’t mean gamers will take it sitting down. Offering a Founders Edition with a $50 (or 20%) premium in a budget-focused category results in extremely poor optics no matter which way you look at things. NVIDIA is obviously committed to testing the waters with its Founders Edition initiative, but that fifty bucks speaks volumes in this price range, particularly when a $299 reference board is compared against a $239 RX480. $299 also makes this card significantly more expensive than the GTX 960’s initial launch price.
While NVIDIA seems to be aligning themselves with the RX480’s cost structure rather than attempting to replicate the costing of legacy solutions, the GTX 1060 will undoubtedly be an extremely strong contender. That’s because there will be many suitable board partner alternatives retailing for $249 or a mere $10 more than AMD’s competitor, all of which will be available right at launch.
The $200 to $300 price range is a cluttered place and the GTX 1060 6GB has some huge shoes to fill. While it is more than obvious this card is being launched in an effort to steal some of AMD’s thunder, the focus of this review, NVIDIA’s Founders Edition, could face an uphill battle given its $50 premium even if it can equal the GTX 980’s performance.
A Closer Look at the GTX 1060 Founders Edition
The GTX 1060 Founders Edition follows the general design guidelines of its predecessors with a stylized heatsink shroud that boasts a tasteful combination of black and aluminum. The main difference this time around is the omission of an acrylic “window” and slightly lower-end materials to ensure a lower price point. It still looks pretty good, but that was a given considering you’ll pay a hefty $50 premium for this particular design. You should also know that the black corrugated area is actually the heatsink’s top portion. Will it get hot to the touch? We’ll look at that a bit later.
Despite the similarities with NVIDIA’s higher-end cards, the GTX 1060 utilizes a much more compact design at just 9 ¾” long and will thus be a much better fit for small form factor systems. There should be several board partner options that are even smaller.
Flip the Founders Edition over and you can see just how compact the card really is. While the blower-style cooler necessitates a longer overall footprint, the actual PCB is just 6 ¾” long. This actually mirrors the dimensions of NVIDIA’s reference GTX 960.
Another thing to note here is the lack of a backplate. Once again this was done to lower BOM costs plus, due to the high efficiency of NVIDIA’s GP106 core, there’s really no need to have additional heat dissipation in this area.
Along the card’s side is a single 6-pin power connector which, when combined with the 75W available through the PCI-E slot, should provide more than enough headroom for some impressive overclocking achievements. Remember, the GTX 1060 has a TDP of just 120W (which doesn’t necessarily correlate to actual power consumption but gives a general idea) and, based on the still-limited amount of Power Limit adjustment NVIDIA allows, there should be no reason to worry.
Along this edge you’ll also find the usual illuminated GeForce logo which is a must if a build warrants clean NVIDIA badging rather than the somewhat contrived brand-positioned approaches some board partners have taken with their LED setups.
One thing you may have noticed from the last few pictures is the oddball positioning of the 6-pin relative to the PCB. In order to keep a clean design that matches up well with the cable routing positioning in most cases, NVIDIA treated the power input connector as a hard wired extension with wires that run from the PCB to a header. While this certainly isn’t the first time I’ve seen this kind of layout, it is exceedingly rare since the flexible solder points could be viewed as a problem waiting to happen. Luckily the extension happens within a completely contained space so any potential concerns are completely unwarranted.
Other than the unique treatment given to the power connector, there isn’t much interesting going on below the GTX 1060’s beltline. The heatsink itself is a straightforward all-aluminum / copper affair that doesn’t require a full vapor chamber like the GTX 1080 and GTX 1070.
The GTX 1060 Founders Edition utilizes a basic 3+1 phase PWM along with six GDDR5 modules (one per memory controller) placed strategically around the small GP106 core. We can expect board partners to market heavily modified and upgraded power designs but, as with other cards, it’s debatable whether they’ll actually benefit overclocking headroom or long term stability.
When compared against the reference AMD RX480, NVIDIA has taken a much more open approach with their display output selection. Instead of consigning the ubiquitous DVI connector to the dustbin of history, they’ve included one alongside the usual HDMI 2.0 and DisplayPort 1.4 connectors. This ensures native compatibility with existing and legacy monitors, so a user won’t have to purchase a DisplayPort to DVI adapter in addition to a new graphics card.
Test System & Setup
Processor: Intel i7 5960X @ 4.3GHz
Memory: G.Skill Trident X 32GB @ 3000MHz 15-16-16-35-1T
Motherboard: ASUS X99 Deluxe
Cooling: NH-U14S
SSD: 2x Kingston HyperX 3K 480GB
Power Supply: Corsair AX1200
Monitor: Dell U2713HM (1440P) / Acer XB280HK (4K)
OS: Windows 10 Pro
Drivers:
AMD Radeon Software 16.7.2
NVIDIA 368.14 WHQL
NVIDIA 368.146 Beta (GTX 1060)
*Notes:
– All games tested have been patched to their latest version
– The OS has had all the latest hotfixes and updates installed
– All scores you see are the averages after 3 benchmark runs
– All IQ settings were adjusted in-game and all GPU control panels were set to use application settings
The Methodology of Frame Testing, Distilled
How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement from FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.
Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing single second intervals which took the longest and shortest amount of time to render. Combining these values together gives an accurate, albeit very narrow snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.
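Assuming a log of how many frames completed in each second of a run, the FRAPS-style summary above falls out of a few lines of Python:

```python
# FRAPS-style summary from per-second frame counts: the average divides
# total frames by elapsed seconds, while min/max are simply the worst
# and best single one-second intervals of the run.
def fps_summary(frames_per_second):
    total = sum(frames_per_second)
    avg = total / len(frames_per_second)
    return avg, min(frames_per_second), max(frames_per_second)

# A short 5-second stand-in log; a real run would hold one entry per
# second of the benchmark (e.g. 60 entries for a 60-second sequence).
counts = [60, 72, 55, 68, 70]
avg, lo, hi = fps_summary(counts)  # → (65.0, 55, 72)
```

Note how three numbers stand in for the whole run, which is exactly the narrowness the text describes: everything that happened inside each second is invisible.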
FCAT, on the other hand, has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real-world applications a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can report them. So what actually happens within each second? A whole lot, since each second of gameplay can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool gets factored into this equation.
Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond by millisecond evaluation of frame times rather than averaging things out over a full second. The larger the amount of time, the longer each frame takes to render. This detailed reporting just isn’t possible with standard benchmark methods.
We are now using FCAT for ALL benchmark results in DX11.
DX12 Benchmarking
For DX12 many of these same metrics can be utilized through a simple program called PresentMon. Not only does this program have the capability to log frame times at various stages throughout the rendering pipeline but it also grants a slightly more detailed look into how certain API and external elements can slow down rendering times.
Since PresentMon throws out massive amounts of frametime data, we have decided to distill the information into slightly more easy-to-understand graphs. Within them, we have taken several thousand datapoints (in some cases tens of thousands), converted the frametime milliseconds over the course of each benchmark run to frames per second and then graphed the results. This gives us a straightforward framerate-over-time graph. Meanwhile, the typical bar graph averages out every data point as it’s presented.
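The distillation described above essentially amounts to bucketing frame times by elapsed wall-clock second; a simplified Python version (not the exact script we use) looks like this:

```python
# Collapse a raw frame-time log (milliseconds per frame) into an
# FPS-over-time series by counting how many frames complete within
# each elapsed second of the run.
def fps_over_time(frame_times_ms):
    buckets = {}
    elapsed_ms = 0.0
    for t in frame_times_ms:
        elapsed_ms += t
        second = int(elapsed_ms // 1000)
        buckets[second] = buckets.get(second, 0) + 1
    return [buckets.get(s, 0) for s in range(max(buckets) + 1)]

# Roughly two seconds of 10 ms frames followed by one second of 40 ms
# frames: the series drops from ~100 FPS to ~25 FPS at the transition.
series = fps_over_time([10.0] * 200 + [40.0] * 25)
```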
One thing to note is that our DX12 PresentMon results cannot and should not be directly compared to the FCAT-based DX11 results. They should be taken as a separate entity and discussed as such.
Performance Consistency Over Time
Modern graphics card designs make use of several advanced hardware- and software-facing algorithms in an effort to hit an optimal balance between performance, acoustics, voltage, power and heat output. Traditionally this leads to maximized clock speeds within a given set of parameters. Conversely, if one of those last two metrics (heat and power consumption) steps into the equation in a negative manner, it is quite likely that voltages and the resulting core clocks will be reduced to ensure the GPU remains within design specifications. We’ve seen this happen quite aggressively on some AMD cards, while NVIDIA’s reference cards also tend to fluctuate their frequencies. To be clear, in most situations this is a feature by design rather than a problem.
In many cases clock speeds won’t be touched until the card in question reaches a preset temperature, whereupon the software and onboard hardware will work in tandem to carefully regulate other areas such as fan speeds and voltages to ensure maximum frequency output without an overly loud fan. Since this algorithm typically doesn’t kick into full force within the first few minutes of gaming, the “true” performance of many graphics cards won’t be revealed by a typical 1-3 minute benchmarking run. Hence our use of a 10-minute warm-up period before all of our benchmarks.
While we don’t have any concerns about a mere 120W core causing issues for a pretty capable heatsink, NVIDIA will still have to finely balance temperatures, core speeds and acoustics so throttling doesn’t occur. Remember the critique we laid at the GTX 1080’s feet when its performance was pushed downwards as the lethargic fan speed profile failed to keep up with rising temperatures? Yeah, hopefully that doesn’t happen here…
The first results we are seeing here are quite heartening, with temperatures barely reaching the 70°C mark after an intensive 15 minutes of gameplay. Something else to take note of is the actual decrease in heat as time goes on, as the fan / clock speed dance finds a happy medium where both can coexist while being minimally invasive upon the gaming experience.
Fan speeds ramp up in a fairly linear fashion so the noise output won’t be distracting, finally leveling out around 1900 RPM. That’s actually quite impressive since it doesn’t represent too much of an increase from the fan’s normal “idle” speed.
NVIDIA claims the Boost Clock on the GTX 1060 should be around 1708MHz and, after an initial minor step-down, our sample achieved frequencies well above that mark. In most applications it leveled out at 1850MHz, while titles that pushed the card a bit closer to its TDP limit brought the core speed down to about 1800MHz. Regardless of the situation, the GTX 1060 was simply unflappable, exhibiting perfectly consistent frequencies.
As we add custom cards to this chart things will certainly become a bit more interesting but for the time being the GTX 1060 Founders Edition delivers constant framerates without any perceivable throttling.
Doom (OpenGL)
Not many people saw a new Doom as a possible Game of the Year contender but that’s exactly what it has become. Not only is it one of the most intense games currently around but it looks great and is highly optimized. In this run-through we use Mission 6: Into the Fire since it features relatively predictable enemy spawn points and a combination of open air and interior gameplay.
Fallout 4
The latest iteration of the Fallout franchise is a great looking game with all of its details turned to their highest levels, but it also requires a huge amount of graphics horsepower to properly run. For this benchmark we complete a run-through from within a town, shoot up a vehicle to test performance when in combat and finally end atop a hill overlooking the town. Note that VSync has been forced off within the game’s .ini file.
Far Cry 4
This entry in Ubisoft’s Far Cry series takes up where the others left off by boasting some of the most impressive visuals we’ve seen. In order to emulate typical gameplay we run through the game’s main village, head out through an open area and then transition to the lower areas via a zipline.
Grand Theft Auto V
In GTA V we take a simple approach to benchmarking: the in-game benchmark tool is used. However, due to the randomness within the game itself, only the last sequence is actually used since it best represents gameplay mechanics.
Hitman (2016)
The Hitman franchise has been around in one way or another for the better part of a decade and this latest version is arguably the best looking. Adjustable to both DX11 and DX12 APIs, it has a ton of graphics options, some of which are only available under DX12.
For our benchmark we avoid using the in-game benchmark since it doesn’t represent actual in-game situations. Instead the second mission in Paris is used. Here we walk into the mansion, mingle with the crowds and eventually end up within the fashion show area.
Overwatch
Overwatch happens to be one of the most popular games around right now and while it isn’t particularly stressful upon a system’s resources, its Epic setting can provide a decent workout for all but the highest end GPUs. In order to eliminate as much variability as possible, for this benchmark we use a simple “offline” Bot Match so performance isn’t affected by outside factors like ping times and network latency.
Rise of the Tomb Raider
Another year and another Tomb Raider game. This time Lara’s journey continues through various beautifully rendered locales. Like Hitman, Rise of the Tomb Raider has both DX11 and DX12 API paths and incorporates a completely pointless built-in benchmark sequence.
The benchmark run we use is within the Soviet Installation level, where we start at about the midpoint, run through a warehouse and then finish inside a fenced-in area during a snowstorm.
Star Wars Battlefront
Star Wars Battlefront may not be one of the most demanding games on the market but it is quite widely played. It also looks pretty good due to it being based upon Dice’s Frostbite engine and has been highly optimized.
The benchmark run in this game is pretty straightforward: we use the AT-ST single player level since it has predetermined events and it loads up on many in-game special effects.
The Division
The Division has some of the best visuals of any game available right now even though its graphics were supposedly downgraded right before launch. Unfortunately, actually benchmarking it is a challenge in and of itself. Due to the game’s dynamic day / night and weather cycle it is almost impossible to achieve a repeatable run within the game itself. With that taken into account we decided to use the in-game benchmark tool.
Witcher 3
Other than being one of 2015’s most highly regarded games, The Witcher 3 also happens to be one of the most visually stunning as well. This benchmark sequence has us riding through a town and running through the woods; two elements that will likely take up the vast majority of in-game time.
Ashes of the Singularity
Ashes of the Singularity is a real-time strategy game on a grand scale, very much in the vein of Supreme Commander. While this game is best known for its asynchronous compute workloads through the DX12 API, it also happens to be pretty fun to play. While Ashes has a built-in performance counter alongside its built-in benchmark utility, we found it to be highly unreliable, often posting substantial run-to-run variation. With that in mind we still used the onboard benchmark since it eliminates the randomness that arises when actually playing the game, but utilized the PresentMon utility to log performance.
Hitman (2016)
The Hitman franchise has been around in one way or another for the better part of a decade and this latest version is arguably the best looking. Adjustable to both DX11 and DX12 APIs, it has a ton of graphics options, some of which are only available under DX12.
For our benchmark we avoid using the in-game benchmark since it doesn’t represent actual in-game situations. Instead the second mission in Paris is used. Here we walk into the mansion, mingle with the crowds and eventually end up within the fashion show area.
Quantum Break
Years from now people likely won’t be asking if a GPU can play Crysis, they’ll be asking if it was up to the task of playing Quantum Break with all settings maxed out. This game was launched as a horribly broken mess but it has evolved into an amazing looking tour de force for graphics fidelity. It also happens to be a performance killer.
Though finding an area within Quantum Break to benchmark is challenging, we finally settled upon the first level where you exit the elevator and find dozens of SWAT team members frozen in time. It combines indoor and outdoor scenery along with some of the best lighting effects we’ve ever seen.
Rise of the Tomb Raider
Another year and another Tomb Raider game. This time Lara’s journey continues through various beautifully rendered locales. Like Hitman, Rise of the Tomb Raider has both DX11 and DX12 API paths and incorporates a completely pointless built-in benchmark sequence.
The benchmark run we use is within the Soviet Installation level, where we start at about the midpoint, run through a warehouse and then finish inside a fenced-in area during a snowstorm.
Thermal Imaging
This is one cool-running card and while there are some minor heat spots near the GTX 1060’s bank of VRMs, that’s to be expected. From what’s visible there aren’t any points of concern here. Of special note is the heatsink that peeks through the plastic / aluminum shroud; it barely gets warm to the touch even after plenty of gameplay time.
Acoustical Testing
What you see below are the baseline idle dB(A) results attained for a relatively quiet open-case system (specs are in the Methodology section) sans GPU along with the attained results for each individual card in idle and load scenarios. The meter we use has been calibrated and is placed at seated ear-level exactly 12” away from the GPU’s fan. For the load scenarios, Hitman Absolution is used in order to generate a constant load on the GPU(s) over the course of 15 minutes.
While this isn’t exactly the quietest card we’ve tested, it is one of the least invasive reference designs around. Simply put, the fan doesn’t require high RPMs to achieve an optimal core thermal level.
System Power Consumption
For this test we hooked up our power supply to a UPM power meter that logs the power consumption of the whole system twice every second. In order to stress the GPU as much as possible we used 15 minutes of Unigine Valley running on a loop, while letting the card sit at a stable Windows desktop for 15 minutes to determine peak idle power consumption.
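Deriving the peak and average figures from such a log is simple arithmetic; the Python sketch below assumes the meter's samples have already been exported as plain watt readings (the UPM meter's actual file format is not shown here):

```python
# Derive the headline power figures from a system-wide watt log
# sampled at 2 Hz: the peak is the single highest reading, while the
# average smooths out momentary spikes across the run.
def power_figures(samples_watts):
    peak = max(samples_watts)
    average = sum(samples_watts) / len(samples_watts)
    return peak, average

# A 15-minute run at 2 samples/sec yields 1,800 readings; a short
# hypothetical stand-in log:
load_log = [212.0, 215.5, 230.0, 224.5, 218.0]
peak, avg = power_figures(load_log)  # → (230.0, 220.0)
```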
NVIDIA’s Pascal architecture has impressed us again and again with the efficiency it puts on display and the GTX 1060 continues that tradition in a big way. While our sample did consume a bit more power than a GTX 960 2GB, its power requirements relative to AMD’s RX480 are nothing short of shocking considering each card’s performance metrics.
It looks like AMD’s Polaris architecture requires almost 40W more to achieve framerates that are, at best, equal to the GTX 1060’s. Not only does that reflect poorly upon Polaris’ ability to scale upwards into higher end products, but it also highlights the inroads NVIDIA continues to make towards optimal efficiency.
Overclocking Results
In the Performance Over Time section we saw that NVIDIA’s GTX 1060 Founders Edition has the ability to boost itself to between 1800MHz and 1850MHz under normal gaming conditions. That’s a significant amount when you consider its average Boost Frequency should be around 1700MHz. But how does that affect overclocking headroom? Has this card sacrificed additional clock speed headroom in an effort to deliver optimal out-of-box performance? The answer to those questions is simple: there’s still some room left to play around with.
First of all, the Founders Edition isn’t limited by its heatsink in the least. Even when overclocked, the GP106 core runs cool enough that setting the fan to 60% not only results in low temperatures but also avoids any distracting acoustical characteristics.
In terms of actual achievable frequencies, things leveled out at 2075MHz even though for the first few minutes (as evidenced by the 15-minute GPU-Z readout above) 2088MHz and even 2123MHz looked perfectly stable. After a bit of time those higher speeds were curtailed as they approached the card’s TDP limit. Nonetheless, an additional speed boost of nearly 200MHz will certainly be beneficial to in-game framerates.
Memory didn’t fare quite as well but it still boasted a good amount of overhead without any voltage changes. An approximate 10% boost to ~4800MHz puts it a bit behind some of the other 8Gbps GDDR5 setups we’ve tested but not by all that much.
All in all, I’m extremely excited to see what board partners have in store for this card. It seems to be a willing little overclocker and the low heat output guarantees that cooling shouldn’t be a problem. As usual, it will be Power and Voltage limits which hold things back.
Conclusion
I had some seriously high expectations for NVIDIA’s GTX 1060. The Pascal architecture has thus far proven to be highly efficient and able to deliver an almighty performance wallop within each respective price point it’s been launched into. Indeed the GTX 1070 and GTX 1080 were some of the most impressive graphics cards I’ve reviewed to date. For the most part this new GTX 1060 Founders Edition carries down the trail blazed by its more expensive siblings by delivering great framerates and efficiency. However, unlike the higher end GeForce cards it doesn’t exist in a vacuum that’s devoid of any competition and AMD’s RX480 has proven to be an extremely competent and well priced alternative.
Let’s get right to performance since that’s what you are all here for. Against current and previous generation GeForce offerings the GTX 1060 delivers impressive numbers within DX11 applications, particularly when you compare it against the GTX 960 2GB. Anyone with a 960 or 760 will experience dramatic performance increases with a move to Pascal; this new card is easily able to match a GTX 980’s framerates. Considering this 75% to 90% uplift versus the GTX 960 2GB was accomplished in just one generation, NVIDIA needs to be commended for an achievement of epic proportions. The GTX 1060 also maintains a “safe” distance from the GTX 1070 so as not to directly compete against one of NVIDIA’s premium offerings.
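For readers who want to run the same math on their own cards, uplift figures like those quoted above are simply the percentage increase of the newer card’s average FPS over the older one’s. The FPS inputs in this sketch are illustrative placeholders, not this review’s benchmark data.

```python
# How an uplift figure like "75% to 90%" is derived: the percentage increase
# of the newer card's average FPS over the older card's average FPS.
# The numbers below are hypothetical placeholders, not measured results.

def uplift_pct(new_fps, old_fps):
    return (new_fps / old_fps - 1.0) * 100.0

# Hypothetical example: 63 FPS on the new card vs 35 FPS on the old one.
print(f"{uplift_pct(63.0, 35.0):.1f}% uplift")
```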
Against the AMD competition things look pretty good as well with the 1060 trading blows with the R9 390X and maintaining a healthy lead over the RX480 8GB at 1080P. The only hiccup here is that once again we see an NVIDIA card which loses some ground at higher resolutions. At 1440P the RX480, 390 and 390X are all able to take some serious bites out of the GTX 1060’s lead. When you consider the Founders Edition costs a whopping $60 (or about 25%) more than the RX480 8GB, NVIDIA is facing an uphill battle on the price / performance front but more on that a bit later.
With the GTX 1080 and GTX 1070, NVIDIA’s DX12 performance was somewhat obscured by the massive horsepower propping up those two cards against earlier GeForce offerings and a lack of any AMD competition at their respective price points. Not so with the GTX 1060. Whereas AMD’s RX480 obviously has challenges in the performance per watt field, the GTX 1060 has some very real problems delivering consistent DX12 framerates. So much so that its significant lead over the RX480 completely evaporates when Microsoft’s increasingly popular next gen API is used.
Not only are the results above the polar opposite of this card’s DX11 positioning, but they also raise some questions about how well the GTX 1060 will age as more games launch with DX12 support. It should be mentioned that we are still in the early days of this API and a sample size of four games is paltry at best, allowing outliers to skew the overall results, but there is certainly a noteworthy trend here. Only time will tell whether the old adage of “where there’s smoke, there’s fire” applies to NVIDIA’s future DX12 performance.
Against the GTX 980, GTX 970 and GTX 960 this newcomer exhibits all the hallmarks of Pascal’s DX12 performance benefits. For example the GTX 1060 goes from tying the GTX 980’s framerates in DX11 to outstripping it in DX12. I do however think this highlights how lackluster Maxwell’s DX12 support was rather than exemplifying the strengths of Pascal. Nonetheless, it is heartening to see a new architecture extend its lead in next generation applications.
I also need to mention those epic numbers against the GTX 960, a card that has gone from a price / performance champion to one that delivers disappointing framerates in both DX11 and DX12. Not only is its 2GB framebuffer completely inadequate for the settings I chose, but the amount of on-die resources it has just isn’t up to the task of keeping up in DX12. If you want to maintain your system’s performance while moving laterally within NVIDIA’s lineup, the GTX 1060 is the way to go for DX12.
After more than a year of hiatus, our price / performance charts are making a comeback and none too soon it seems. In the sub-$300 category every dollar relative to displayable on-screen frames counts.
Simply put, in DX11 the $299 GTX 1060 Founders Edition doesn’t deliver a convincing $/FPS ratio against the RX480, and in DX12 environments its wheels fall off. The problem here isn’t the GTX 1060 per se since it has the potential to be one of the best GPUs around; it’s the Founders Edition that rolls snake eyes. When a $249 price point is entered into our calculator this product becomes a screaming deal, but at $299 it doesn’t live up to expectations from a value standpoint.
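The dollars-per-FPS metric behind those charts is easy to reproduce at home. In this sketch the prices are the launch figures discussed in this review, but the FPS values are placeholders, not our measured benchmark averages.

```python
# Sketch of the price / performance ("$ per FPS") comparison used in our
# charts. Prices are the launch figures quoted in this review; the FPS
# values are illustrative placeholders, not measured benchmark results.

def dollars_per_fps(price_usd, avg_fps):
    """Lower is better: how many dollars buy each frame per second."""
    return price_usd / avg_fps

cards = {
    "GTX 1060 Founders Edition": (299, 60.0),  # placeholder FPS
    "GTX 1060 (custom, MSRP)":   (249, 60.0),  # placeholder FPS
    "RX480 8GB":                 (239, 55.0),  # placeholder FPS
}

for name, (price, fps) in cards.items():
    print(f"{name}: ${dollars_per_fps(price, fps):.2f} per FPS")
```

At identical framerates, dropping the price from $299 to $249 alone improves the ratio by roughly 17%, which is why the Founders Edition premium matters so much in this segment.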
We’re not reviewing a board partner’s $249 card here but rather the reference board, and when a much higher end custom $379 GTX 1070 (if you can find one, that is!) has a better value quotient than your premium reference design, there’s obviously a major miss in product positioning.
NVIDIA believes the Founders Edition warrants a premium and that folks will gladly pay for it. That’s either clever marketing or, if you believe what some folks say, the ultimate example of arrogance since the 1060 6GB Founders Edition is bewilderingly overpriced. There’s absolutely no reason to charge a 20% markup on this product, a premium that turns an awesome value into a solution that simply doesn’t compete with AMD’s RX480 from a price per FPS standpoint. But then again that seems to be the modus operandi of every Founders Edition to date. NVIDIA can highlight the GTX 1060’s dominating efficiency numbers all they want, but I think mid-range buyers are predominantly focused on optimizing their investments rather than fixated on a few watts.
Luckily the GTX 1060 launch is going about its business in a very different way from the Founders Edition-focused GTX 1080 and GTX 1070 releases. This is what’s called a “virtual” launch, with the board partners shouldering the responsibility of making their custom boards available at retailers from day one. Meanwhile, the Founders Edition I’ve been lambasting over the last few paragraphs will be exclusively available through GeForce.com and you won’t see it on retail shelves. After reaching out to several AIBs, not only does it look like there will be a huge amount of product in the channel but there should also be plenty of options around the $249 and $259 price points. That brings a whole new perspective to NVIDIA’s GTX 1060, doesn’t it?
I alluded to the effect of a $249 GTX 1060 a little while ago but I need to reiterate things here again: it sets a new high water mark in the price / performance battle. When combined with its significantly lower power consumption the GTX 1060 can really put the screws to AMD’s RX480 8GB while highlighting all of Pascal’s strengths in one compact, efficient package.
Past the items I’ve mentioned above, there’s one other wrinkle in the GTX 1060’s fabric: its lack of SLI support. Personally I don’t think this is such a big deal since potentially paying six hundred bucks for two of these things seems preposterous. For that kind of cash a single GTX 1080 would provide about the same amount of performance and you won’t need to worry about those pesky multi card profiles for optimal framerates. That doesn’t mean I’m entirely behind NVIDIA’s decision to nuke SLI on this card. There are benefits to amortizing the cost of higher performance by staggering purchases of two cards across several months, and with this generation of more affordable GeForce products that will no longer be possible.
Going into this review I really thought the end result would be a foregone conclusion: the GTX 1060 would prove to be the best option for value-focused gamers. Now that I’ve crunched the numbers the outcome isn’t so clear-cut. While the Pascal architecture delivers awesome performance per watt benefits, the Founders Edition’s $299 price puts the brakes on any hopes of it being considered a shoo-in over the RX480 8GB. As a matter of fact I’d consider the RX480 to be a more versatile option due to its relative strengths in DX12 and at higher resolutions, not to mention its lower price. Factor in that $249 price though and suddenly the advantage swings to NVIDIA’s favor from nearly every conceivable perspective.
So where does this all leave the GTX 1060 6GB Founders Edition? What we have here is a simple yin and yang situation. At $299 the Founders Edition costs too much. Period. Meanwhile a $249 price makes this one of the best deals around if you are looking for a quick and inexpensive drop-in upgrade. With the RX480 in short supply and the GTX 1060 just being introduced, now may not be the best time to take the plunge due to inevitable retailer markups for a card that will understandably be a hot commodity. However, if pricing is maintained those less expensive custom boards will really, really deserve your undivided attention.