
How to Choose the Right Graphics Card

How to Buy the Best GPU for Gaming

If you’re a PC gamer or a content creator who lives and dies by the speed of your graphics-accelerated software, your video card is the engine that powers what you can do—or how lustily you can brag.
Our guide will help you sort through the best video-card options for your desktop PC, what you need to know to upgrade a system, and how to evaluate whether a particular card is a good buy. We’ll also touch on some upcoming trends that could affect which card you choose. After all, consumer video cards range from under PKR 7,000 to well over PKR 200,000 ($50 to over $1,000). It’s easy to overpay or underbuy. (We won’t let you do that, though.)

Who’s Who in GPUs: AMD vs. Nvidia

First off, what does a graphics card do? And do you really need one?

If you’re looking at any given pre-built desktop PC on the market, unless it’s a gaming-oriented machine, PC makers will de-emphasize the graphics card in favor of promoting CPU, RAM, or storage options. Indeed, sometimes that’s for good reason; a low-cost PC may not have a graphics card at all, relying instead on the graphics-acceleration silicon built into its CPU (an “integrated graphics processor,” commonly called an “IGP”). There’s nothing inherently wrong with relying on an IGP—most business laptops, inexpensive consumer laptops, and budget-minded desktops have them—but if you’re a gamer or a creator, the right graphics card is crucial.

A modern graphics solution, whether it’s a discrete video card or an IGP, handles the display of 2D and 3D content, drawing the desktop, and decoding and encoding video content in programs and games. All of the discrete video cards on the consumer market are built around large graphics processing chips designed by one of two companies: AMD or Nvidia. These processors are referred to as “GPUs,” for “graphics processing units,” a term that is also applied, confusingly, to the graphics card itself. (Nothing about graphics cards…ahem, GPUs…is simple!)

The two companies work up what are known as “reference designs” for their video cards, a standardized version of a card built around a given GPU. Sometimes these reference-design cards are sold directly to consumers by Nvidia (or, less often, by AMD). More often, though, they are duplicated by third-party card makers (companies referred to in industry lingo as AMD or Nvidia “board partners”), such as Asus, EVGA, MSI, Gigabyte, Sapphire, XFX, and Zotac.

Depending on the graphics chip in question, these board partners may sell their own self-branded versions of the reference card (adhering to the design and specifications set by AMD or Nvidia), or they may fashion their own custom products, with different cooler designs, slight factory overclocking, or features such as LED mood illumination. Some board partners will do both—that is, sell reference versions of a given GPU as well as their own, more radical designs.

Who Needs a Discrete GPU?

We mentioned integrated graphics (IGPs) above. IGPs are capable of meeting the needs of most general users today, with three broad exceptions…

Professional Workstation Users. These folks, who work with CAD software or in video and photo editing, will still benefit greatly from a discrete GPU. Some of their key applications can transcode video from one format to another or perform other specialized operations using resources from the GPU instead of (or in addition to) those of the CPU. Whether this is faster will depend on the application in question, which specific GPU and CPU you own, and other factors.

Productivity-Minded Users With Multiple Displays. People who need a large number of displays can also benefit from a discrete GPU. Desktop operating systems can drive displays connected to the IGP and discrete GPUs simultaneously. If you’ve ever wanted five or six displays hooked up to a single system, you can combine an IGP and a discrete GPU to get there.

That said, you don’t necessarily need a high-end graphics card to do that. If you’re simply displaying business applications, multiple browser windows, or lots of static windows across multiple displays (i.e., not demanding PC games), all you need is a card that supports the display specifications, resolutions, monitor interfaces, and number of panels you need. If you’re showing four web browsers across four display panels, a GeForce RTX 2080 Super card, say, won’t confer any greater benefit than a GeForce GTX 1660 card with the same supported outputs.

Gamers. And of course, there’s the gaming market, for whom the GPU is arguably the most important component. RAM and CPU choices both matter, but if you had to pick between a top-end system circa 2016 with a 2019 GPU or a top-end system today using the highest-end GPU you could buy in 2016, you’d want the former.

Graphics cards fall into two distinct classes: consumer cards meant for gaming and light content creation work, and dedicated cards meant for professional workstations and geared toward scientific computing, calculations, and artificial intelligence work. This guide, and our reviews, will focus on the former, but we’ll touch on workstation cards a little later on. The key sub-brands you need to know across these two fields are Nvidia’s GeForce and AMD’s Radeon RX (on the consumer side of things), and Nvidia’s Titan and Quadro, as well as AMD’s Radeon Pro and Radeon Instinct (in the pro workstation field). As recently as 2017, Nvidia had the very high end of the consumer graphics-card market more or less to itself, and it still dominates there.

We’ll focus here on the consumer cards. Nvidia’s consumer card line in late 2019/early 2020 is broken into two distinct classes, both united under the long-running GeForce brand: GeForce GTX, and GeForce RTX. AMD’s consumer cards, meanwhile, comprise the Radeon RX and (now fading) Radeon RX Vega families, as well as the Radeon VII.

Before we get into the individual lines in detail, though, let’s outline a very important consideration for any video-card purchase.

Target Resolution: Your First Consideration

Resolution is the horizontal-by-vertical pixel count at which your video card will drive your monitor. It has a huge bearing on which card to buy, and how much you need to spend, when you evaluate a video card from a gaming perspective.

If you are a PC gamer, a big part of what you’ll want to consider is the resolution(s) at which a given video card is best suited for gaming. Nowadays, even low-end cards will display everyday programs at lofty resolutions like 3,840 by 2,160 pixels (a.k.a. 4K). But for strenuous PC games, those cards will not have nearly the power to drive smooth frame rates at high resolutions like those. In games, the video card is what calculates positions, geometry, and lighting, and renders the onscreen image in real time. The higher the in-game detail level and monitor resolution you’re running, the more graphics-card muscle is required.

The three most common resolutions at which today’s gamers play are 1080p (1,920 by 1,080 pixels), 1440p (2,560 by 1,440 pixels), and 2160p or 4K (3,840 by 2,160 pixels). Generally speaking, you’ll want to choose a card suited for your monitor’s native resolution. (The “native” resolution is the highest supported by the panel and the one at which the display looks the best.) You’ll also see ultra-wide-screen monitors with in-between resolutions (3,440 by 1,440 pixels is a common one); you can gauge these versus 1080p, 1440p, and 2160p by calculating the raw number of pixels for each (multiply the vertical number by the horizontal one) and seeing where that screen resolution fits in relative to the common ones. (See our targeted roundups of the best graphics cards for 1080p play and the best graphics cards for 4K gaming.)
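
To make that comparison concrete, here is a minimal sketch of that arithmetic in Python (the handful of resolutions listed is just a sampling for illustration):

```python
# Compare common gaming resolutions by raw pixel count.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "ultrawide 1440p": (3440, 1440),
    "4K": (3840, 2160),
}

for name, (width, height) in sorted(resolutions.items(),
                                    key=lambda item: item[1][0] * item[1][1]):
    print(f"{name}: {width * height:,} pixels")
```

The output shows where each panel falls: a 3,440-by-1,440 ultra-wide works out to roughly 4.95 million pixels, noticeably more than standard 1440p (about 3.7 million) but well short of 4K (about 8.3 million), so treat it as a step above 1440p when sizing up a card.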

Now, of course, you can always dial down the detail levels for a game to make it run acceptably at a higher-than-recommended resolution, or dial back the resolution itself. But to an extent, that defeats the purpose of a graphics card purchase. The highest-end cards are meant for 4K play or for playing at very high refresh rates at 1080p or 1440p; you don’t have to spend $1,000 or even $500 to play more than acceptably at 1080p. A secondary consideration nowadays, though, is running games at ultra-high frame rates to take advantage of the extra-fast refresh abilities of some new monitors; more on that later. Let’s look at the graphics card makers’ lines first, and see which ones are suited for what gaming resolutions.

Meet the Radeon and GeForce Families

The GPU lines of the two big graphics-chip makers are constantly evolving, with low-end models suited to low-resolution gameplay ranging up to elite-priced models for gaming at 4K and/or very high refresh rates. Let’s look at Nvidia’s first.

A Look at Nvidia’s Lineup

The company’s current line is split between cards based on last-generation (a.k.a. “10-series”) GPUs, dubbed the “Pascal” line, and the newer GTX 1600- and RTX 2000-series lines, based on GPUs built on an architecture called “Turing.” Its Titan cards are outliers; more on them in a bit.

Here’s a quick rundown of the currently relevant card classes in the Pascal and Turing families, their rough pricing, and their use cases…

If you are a keen observer of the market, you may notice that many of the familiar GeForce GTX Pascal cards, like the GTX 1070 and GTX 1080, are not listed above. They are being allowed to sell through and are largely going off the market in 2019 and 2020 in favor of their GeForce RTX successors. We expect this to happen soon for the GeForce GTX 1060, due to the release of the GeForce GTX 1660 and 1660 Ti, and eventually for the lesser Pascal cards.

We’d class the GT 1030 to GTX 1050 as low-end cards, coming in at under $100 or a little above. The GTX 1650/1650 Super to GTX 1660 Ti cards make up Nvidia’s current midrange, spanning from about $150 to $300, or a little more.

With apologies to nu-soul and nu-metal, the end-of-life GTX 1080-class cards, as well as the GeForce RTX 2060 and RTX 2070 (both the originals and the newer “Super” variants), constitute what we’d call the “nu-high end,” as they take the place of the old top-end GeForces in the $350-to-$700 range. The RTX 2080 Super and RTX 2080 Ti cards, finally, form what we’d call a new “elite class.”

As for the Titan cards, these are essentially stripped-down workstation cards that bridge the pro graphics and high-end/4K gaming worlds. For most gamers, the Titans won’t be of interest due to their pricing. But know that the Titan Xp (older, around $1,200) and newer Titan RTX ($2,500) and Titan V ($2,999) cards are options for Powerball-winning gamers, machine-learning pioneers, AI developers, or folks involved in pro/academic GPU-bound calculation work.

A Look at AMD’s Lineup

As for AMD’s card classes, as 2020 dawns the company is stronger than it has been for some time, competing ably enough with Nvidia’s low-end and mainstream cards. It’s weaker at the high end, though, and it puts up no resistance against the elite class…

The aging Radeon RX 550 and 560 comprise the low end, while the Radeon RX 570 to RX 590 make up the midrange and are ideal for 1080p gaming, though their time is likely limited, given the company’s newest addition to its 1080p-play arsenal, the Radeon RX 5500 XT. The Radeon RX 580 (a great-value 1080p card) and the RX Vega 56 and RX Vega 64 (both good for 1080p and 1440p play) were hit particularly hard by the cryptocurrency craze of 2017-2018, which inflated card prices to the sky; they have since come back down to earth as they sell through and fade away in favor of newer AMD cards.

Indeed, the 1080p and especially the 1440p AMD cards have seen a shakeup. The company released the first of its new, long-awaited line of 7nm-based “Navi” midrange graphics cards in July 2019, based on a whole new architecture AMD calls RDNA. The first three cards are the Radeon RX 5700, the Radeon RX 5700 XT, and the limited-run Radeon RX 5700 XT Anniversary Edition. All of these cards have their sights set on the 1440p gaming market, and each indeed powers even the most demanding AAA titles above 60fps at that resolution.

The Radeon VII is AMD’s sole player in the elite bracket; it trades blows with the GeForce RTX 2080 at 4K but generally performs less well at lower resolutions in games. It’s nearing its end of life. We suspect that AMD will replace it with a higher-end Navi/RDNA card before too long.

Graphics Card Basics: Understanding the Core Specs

Now, our comparison charts above should give you a good idea of which card families you should be looking at, based on your monitor and your target resolution. A few key numbers are worth keeping in mind when comparing cards, though: the graphics processor’s clock speed, the onboard VRAM (that is, how much video memory it has), and—of course!—the pricing. And then there’s adaptive sync.

Clock Speed

When comparing GPUs from the same family, a higher base clock speed (that is, the speed at which the graphics core works) and more cores signify a faster GPU. Again, though: That’s only a valid comparison between cards in the same product family. For example, the base clock on the GeForce GTX 1080 is 1,733MHz, while the base clock is 1,759MHz on a (factory overclocked) Republic of Gamers Strix version of the GTX 1080 from Asus in its out-of-the-box Gaming Mode.

Note that this base clock measure is distinct from the graphics chip’s boost clock. The boost clock is the speed to which the graphics chip can accelerate temporarily when under load, as thermal conditions allow. It can vary from card to card in the same family, depending on the robustness of the card’s cooling hardware and how aggressive the manufacturer’s factory settings are. The top-end partner cards with giant multi-fan coolers tend to have the highest boost clocks for a given GPU.

This is to say nothing of AMD’s new category, the “game clock.” According to the company, the game clock represents the “average clock speed gamers should expect to see across a wide range of titles,” a number that the company’s engineers gathered by testing 25 different titles on the new Navi lineup of cards. We mention this so that you don’t compare game clocks with boost or base clocks; they aren’t the same measure.

Onboard Memory

The amount of onboard video memory (sometimes referred to by the rusty term “frame buffer”) is usually matched to the requirements of the games or programs that the card is designed to run. In a certain sense, from a PC-gaming perspective, you can count on a video card to have enough memory to handle current demanding games at the resolutions and detail levels that the card is suited for. In other words, a card maker generally won’t overprovision a card with more memory than it can realistically use; that would inflate the pricing and make the card less competitive. But there are some wrinkles to this.
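
To see why “frame buffer” reads as rusty, a quick back-of-the-envelope calculation helps; the figures below (32-bit color, triple buffering) are illustrative assumptions, not the spec of any particular card:

```python
# Illustrative: how little VRAM the literal frame buffer occupies at 4K.
width, height = 3840, 2160   # 4K resolution
bytes_per_pixel = 4          # assuming 32-bit RGBA color
buffers = 3                  # assuming triple buffering

frame_buffer_mb = width * height * bytes_per_pixel * buffers / 1024**2
print(f"{frame_buffer_mb:.0f} MB")  # roughly 95 MB, vs. 8,192 MB on an 8GB card
```

The literal frame buffer is a rounding error on a modern card; the bulk of VRAM goes to textures, geometry, and intermediate render targets, which is why demanding games want gigabytes, not megabytes.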

A card designed for gameplay at 1,920 by 1,080 pixels (1080p) these days will generally be outfitted with 4GB or 6GB of RAM, while cards geared more toward play at 2,560 by 1,440 pixels (1440p) or 3,840 by 2,160 pixels (2160p, or 4K) tend to deploy 8GB or more. Usually, all of the cards based on a given GPU carry a standard amount of memory. The wrinkles: In some isolated but important cases, card makers offer versions of a card with the same GPU but different amounts of VRAM. Some key ones to know nowadays are cards based on the Radeon RX 5500 XT and RX 580 (4GB versus 8GB versions of each). Both are GPUs you’ll find in popular midrange cards a bit above or below $200, so mind the memory amount on these; the cheaper versions will have less.

Now, if you’re looking to spend $150 or more on a video card, with the idea of all-out 1080p gameplay, a card with at least 4GB of memory really shouldn’t be negotiable. Both AMD and Nvidia now outfit their $200-plus GPUs with more RAM than this. (AMD has stepped up to 8GB on its RX-series cards, with 16GB on its Radeon VII, while Nvidia is using 6GB or 8GB on most, with 11GB on its elite GeForce RTX 2080 Ti.) Either way, sub-4GB cards should only be used for secondary systems, gaming at low resolutions, or simple or older games that don’t need much in the way of hardware resources.

Memory bandwidth is another spec you will see. It refers to how quickly data can move into and out of the GPU. More is generally better, but again, AMD and Nvidia have different architectures and sometimes different memory bandwidth requirements, so these numbers are not directly comparable.
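
The headline bandwidth figure is simple arithmetic: the memory’s effective data rate per pin, multiplied by the bus width in bits, divided by eight to convert bits to bytes. A minimal sketch, using the GeForce RTX 2080’s published 14Gbps GDDR6 and 256-bit bus as the worked example:

```python
# Peak memory bandwidth = effective data rate (Gbps per pin) * bus width (bits) / 8.
def peak_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

# The GeForce RTX 2080 pairs 14Gbps GDDR6 with a 256-bit bus:
print(peak_bandwidth_gb_s(14, 256))  # 448.0 GB/s
```

That works out to the 448GB/s figure you’ll see on the RTX 2080’s spec sheet.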

Pricing: How Much Should You Spend?

Generations of cards come and go, but the price bands have stayed fairly constant for years—at least when the market was not distorted, as it was in 2017-18, by cryptocurrency miners. Now that the rush has abated, AMD and Nvidia are both targeting light 1080p gaming in the $100-to-$180 price range, higher-end 1080p and entry-level 1440p with cards between $200 and $300, and light-to-high-detail 1440p gaming between $300 and $400.

If you want a card that can handle 4K handily, you’ll need to spend more than $400 at the least; a GPU that can push 4K gaming at high detail levels will cost $500 to $1,200. Cards in the $150-to-$350 range generally offer performance improvements in line with their additional cost: if one card costs a certain amount more than another, the performance increase is usually roughly proportional to the price increase. In the high-end and elite-level card stacks, though, this rule falls away; spending more money yields diminishing returns.
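
One easy way to visualize those diminishing returns is cost per frame: divide each card’s price by its average benchmark frame rate at your target resolution. A minimal sketch, with entirely hypothetical prices and frame rates (not benchmark results):

```python
# Hypothetical prices and frame rates, for illustration only (not benchmark results).
cards = {
    "Midrange card": {"price_usd": 230, "avg_fps_1440p": 60},
    "High-end card": {"price_usd": 450, "avg_fps_1440p": 100},
    "Elite card": {"price_usd": 1200, "avg_fps_1440p": 140},
}

for name, card in cards.items():
    cost_per_frame = card["price_usd"] / card["avg_fps_1440p"]
    print(f"{name}: ${cost_per_frame:.2f} per frame at 1440p")
```

With numbers like these, the elite card delivers each frame at more than double the cost per frame of the midrange card, which is the diminishing-returns pattern described above.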

Looking Forward: Graphics Card Trends

Nvidia has been in the consumer video card driver’s seat for a few years now, but 2020 should see more moves to shake things up between the two big players than any year in recent memory.

Source: PCMag, Gigabyte

Sohaib Asghar

Founder, Web developer, and Cinematographer. Loves to Play with Cameras and Tech Products. Runs On CHAI.