A bit over 8 months after it all began, the tail-end of NVIDIA’s GeForce Turing product stack launch is finally in sight. This morning the company is rolling out its latest and cheapest GeForce Turing video card, the GeForce GTX 1650. Coming in at $149, the newest member of the GeForce family is set to bring up the rear of the GeForce product stack, offering NVIDIA’s latest architecture in a low-power, 1080p-with-compromises gaming video card with a budget-friendly price to match.

In very traditional NVIDIA fashion, the Turing launch has been a top-to-bottom affair. After launching the four RTX 20 series cards early in the cycle, NVIDIA’s efforts in the last two months have been focused on filling in the back end of their product stack. Central to this is a design variant of NVIDIA’s GPUs, the TU11x series – what I’ve been dubbing Turing Minor – which are intended to be smaller, easier to produce chips that retain the all-important core Turing architecture, but do away with the dedicated ray tracing (RT) cores and the AI-focused tensor cores as well. The end result of this bifurcation has been the GeForce GTX 16 series, which is designed to be a leaner and meaner set of Turing GeForce cards.

To date, the GTX 16 series has consisted solely of the GTX 1660 family of cards – the GTX 1660 (vanilla) and the GTX 1660 Ti – both of which are based on the TU116 GPU. Today, however, the GTX 16 series family is expanding with the introduction of the GTX 1650 and the new Turing GPU powering NVIDIA’s junior-sized card: TU117.


Unofficial TU117 Block Diagram

Whereas the GeForce GTX 1660 Ti and the underlying TU116 GPU served as our first glimpse at NVIDIA’s mainstream product plans, the GeForce GTX 1650 is a much more pedestrian affair. The TU117 GPU beneath it is for all practical purposes a smaller version of the TU116 GPU, retaining the same core Turing feature set, but with fewer resources all around. Altogether, coming from the TU116 NVIDIA has shaved off one-third of the CUDA cores, one-third of the memory channels, and one-third of the ROPs, leaving a GPU that’s smaller and easier to manufacture for this low-margin market. Still, at 200mm2 in size and housing 4.7B transistors, TU117 is by no means a simple chip. In fact, it’s exactly the same die size as GP106 – the GPU at the heart of the GeForce GTX 1060 series – so that should give you an idea of how performance and transistor counts have (slowly) cascaded down to cheaper products over the last few years.
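
To put rough numbers to that one-third haircut, here’s a quick back-of-the-envelope sketch in Python, going by the commonly cited full-chip configurations of the two GPUs (the shipping cards, of course, don’t always enable everything):

```python
# Full-chip resource counts for TU116 and TU117 (commonly cited figures,
# not the cut-down configurations that ship in any particular card).
tu116 = {"CUDA cores": 1536, "memory bus (bits)": 192, "ROPs": 48}
tu117 = {"CUDA cores": 1024, "memory bus (bits)": 128, "ROPs": 32}

for resource, tu116_count in tu116.items():
    cut = 1 - tu117[resource] / tu116_count
    print(f"{resource}: {tu116_count} -> {tu117[resource]} ({cut:.0%} removed)")

# Each resource comes out to exactly one-third less than TU116's.
```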

At any rate, TU117 will be going into numerous NVIDIA products over time. But for now, things start with the GeForce GTX 1650.

NVIDIA GeForce Specification Comparison
                         GTX 1650           GTX 1660           GTX 1050 Ti        GTX 1050
CUDA Cores               896                1408               768                640
ROPs                     32                 48                 32                 32
Core Clock               1485MHz            1530MHz            1290MHz            1354MHz
Boost Clock              1665MHz            1785MHz            1392MHz            1455MHz
Memory Clock             8Gbps GDDR5        8Gbps GDDR5        7Gbps GDDR5        7Gbps GDDR5
Memory Bus Width         128-bit            192-bit            128-bit            128-bit
VRAM                     4GB                6GB                4GB                2GB
Single Precision Perf.   3 TFLOPS           5 TFLOPS           2.1 TFLOPS         1.9 TFLOPS
TDP                      75W                120W               75W                75W
GPU                      TU117 (200 mm2)    TU116 (284 mm2)    GP107 (132 mm2)    GP107 (132 mm2)
Transistor Count         4.7B               6.6B               3.3B               3.3B
Architecture             Turing             Turing             Pascal             Pascal
Manufacturing Process    TSMC 12nm "FFN"    TSMC 12nm "FFN"    Samsung 14nm       Samsung 14nm
Launch Date              4/23/2019          3/14/2019          10/25/2016         10/25/2016
Launch Price             $149               $219               $139               $109
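
As a side note, the single precision throughput figures in the table fall straight out of the CUDA core counts and boost clocks, since each core can retire one FMA (two floating point operations) per clock. A quick sketch of that arithmetic:

```python
# Peak FP32 throughput = CUDA cores x 2 FLOPs (one FMA per clock) x boost clock.
cards = {
    "GTX 1650":    (896,  1.665e9),
    "GTX 1660":    (1408, 1.785e9),
    "GTX 1050 Ti": (768,  1.392e9),
    "GTX 1050":    (640,  1.455e9),
}

for name, (cores, boost_hz) in cards.items():
    tflops = cores * 2 * boost_hz / 1e12
    print(f"{name}: {tflops:.2f} TFLOPS")

# The GTX 1650 works out to ~2.98 TFLOPS, about 60% of the GTX 1660's ~5.03.
```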

Right off the bat, it’s interesting to note that the GTX 1650 is not using a fully-enabled TU117 GPU. Relative to the full chip, the version that’s going into the GTX 1650 has had a TPC fused off, which means the chip loses 2 SMs/64 CUDA cores. The net result is that the GTX 1650 is a very rare case where NVIDIA doesn’t put their best foot forward from the start – the company is essentially sandbagging – which is a point I’ll loop back around to in a bit.

Within NVIDIA’s historical product stack, it’s somewhat difficult to place the GTX 1650. Officially it’s the successor to the GTX 1050, which itself was a similarly cut-down card. However the GTX 1050 launched at $109, whereas the GTX 1650 launches at $149, a hefty 37% generation-over-generation price increase. Consequently, you could be excused for thinking that the GTX 1650 feels a lot more like the GTX 1050 Ti’s successor, as the $149 price tag is much closer to the GTX 1050 Ti’s $139 launch price. Either way, generation-over-generation, Turing cards have been more expensive than the Pascal cards they replaced, and the low price of these budget cards really amplifies the difference.

Diving into the numbers then, the GTX 1650 ships with 896 CUDA cores enabled, spread over 2 GPCs. On paper this is not all that big of a step up from the GeForce GTX 1050 series, but Turing’s architectural changes and effective increase in graphics efficiency mean that the little card should pack a bit more of a punch than the raw specifications suggest. The CUDA cores themselves are clocked a bit lower than usual for a Turing card, however, with the reference-clocked GTX 1650 boosting to just 1665MHz.

Rounding out the package are 32 ROPs, which are part of the card’s 4 ROP/L2/memory clusters. This means the card is being fed by a 128-bit memory bus, which NVIDIA has paired with GDDR5 memory clocked at 8Gbps. Conveniently enough, this gives the card 128GB/sec of memory bandwidth, about 14% more than the last-generation GTX 1050 series cards got. Thankfully, while NVIDIA hasn’t done much to boost memory capacities on the other Turing cards, the same is not true for the GTX 1650: the minimum here is now 4GB, instead of the very constrained 2GB found on the GTX 1050. Not that 4GB is particularly spacious in 2019, but the card shouldn’t be quite so desperate for memory as its predecessor was.
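
For those keeping score, the bandwidth math here is straightforward: bus width (in bytes) times the per-pin data rate. A minimal sketch:

```python
# Memory bandwidth = bus width (bits) / 8 x per-pin data rate (Gbps) -> GB/s.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

gtx_1650 = bandwidth_gb_s(128, 8.0)  # 128 GB/s
gtx_1050 = bandwidth_gb_s(128, 7.0)  # 112 GB/s
print(f"GTX 1650: {gtx_1650:.0f} GB/s vs. GTX 1050: {gtx_1050:.0f} GB/s "
      f"(+{gtx_1650 / gtx_1050 - 1:.0%})")  # ~+14%
```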

Overall, on paper the GTX 1650 is set to deliver around 60% of the performance of the next card up in NVIDIA’s product stack, the GTX 1660. In practice I expect the two to be a little closer than that – GPU performance scaling isn’t quite 1-to-1 – but that is the ballpark area we’re looking at right now until we can actually test the card.

Meanwhile, when it comes to power consumption, the smallest member of the GeForce Turing stack is also the lowest power. NVIDIA has held their GTX xx50 cards at 75W (or less) for a few generations now, and the GTX 1650 continues this trend. This means that, at least for cards operating at NVIDIA’s reference clocks, an additional PCIe power connector is not necessary and the card can be powered solely off of the PCIe bus. This satisfies the need for a card that can be put in basic systems where a PCIe power cable isn’t available, or in low-power systems where a more power-hungry card isn’t appropriate. And while discrete video cards aren’t quite as popular as they once were for HTPCs, for HTPC builders looking to go that route, the GTX 1650 is going to be the GTX 1050 series’ replacement in that market as well.

Reviews, Product Positioning, & The Competition

Shifting gears to business matters, let’s talk about product positioning and hardware availability.

The GeForce GTX 1650 is a hard launch for NVIDIA, meaning that cards are shipping from retailers and in OEM systems starting today. As is typical for low-end NVIDIA cards, there are no reference cards or reference designs to speak of, so NVIDIA’s board partners will be doing their own thing with their respective product lines. Notably, these will include factory overclocked cards that offer more performance, but which will also require an external PCIe power connector in order to meet the cards’ greater power needs.

Despite this being a hard launch however, in a very unorthodox (if not outright underhanded) move, NVIDIA has opted not to allow the press to test GTX 1650 cards ahead of time. Specifically, NVIDIA has withheld the driver necessary to test the card, which means that even if we had been able to secure a card in advance, we wouldn’t have been able to run it. We do have cards on the way and we’ll be putting together a review in due time, but for the moment we have no more hands-on experience with GTX 1650 cards than you, our readers, do.

NVIDIA has always treated low-end card launches as a lower-key affair than their high-end wares, and the GTX 1650 is no different. In fact this generation’s launch is particularly low-key: we have no pictures or even a press deck to work with, as NVIDIA opted to inform us of the card over email. And while there’s little need for extensive fanfare at this point – it’s a Turing card, and the Turing architecture/feature set has been covered to excess – it’s rare that a card based on a new GPU launches without reviewers getting an early crack at it. And that’s for a good reason: reviewers offer neutral, third-party analysis of the card and its performance. So it’s generally not in buyers’ best interests to cut out reviewers – and when it happens, it can raise some red flags – but nonetheless here we are.

At any rate, while I’d suggest that buyers hold off for a week or so for reviews to be put together, Turing at this point is admittedly a known quantity. As we mentioned earlier, the on-paper specifications put the GTX 1650 at around 60% of the GTX 1660’s performance, and real-world performance will probably be a bit higher. NVIDIA for their part is primarily pitching the card as an upgrade for the GeForce GTX 950 and its same-generation AMD counterparts, which is the same upgrade cadence we’ve seen throughout the rest of the GeForce Turing family. NVIDIA is saying that performance should be 2x (or better) that of the GTX 950, and this is something that should be easily achieved.

While we’re waiting to get our hands on a card to run benchmarks, broadly speaking the GTX xx50 series of cards are meant to be 1080p-with-compromises cards, and I’m expecting much the same for the GTX 1650 based on what we saw with the GTX 1660. The GTX 1650 should be able to run some games at 1080p at maximum image quality – think DOTA2 and the like – but in more demanding games I expect it to have to drop back on some settings to stay at 1080p with playable framerates. One advantage that it does have here, however, is that with its 4GB of VRAM, it shouldn’t struggle nearly as much on more recent games as the 2GB GTX 950 and GTX 1050 do.

Strangely enough, NVIDIA is also offering a game bundle (of sorts) with the GTX 1650. Or rather, the company has extended their ongoing Fortnite bundle to cover the new card, along with the rest of the GeForce GTX 16 lineup. The bundle itself isn’t much to write home about – some game currency and skins for a game that’s free to begin with – but it’s an unexpected move since NVIDIA wasn’t offering this bundle on the other GTX 16 series cards when they launched.

Meanwhile, looking at the specs of the GTX 1650 and how NVIDIA has opted to price the card, it’s clear that NVIDIA is holding back a bit. Normally the company launches two low-end cards at the same time – a card based on a fully-enabled GPU and a cut-down card – which they haven’t done this time. This means that NVIDIA is sitting on the option of rolling out a fully-enabled TU117 card in the future if they want to. And while the actual CUDA core count difference between the GTX 1650 and a theoretical GTX 1650 Ti is quite limited, to the point where a few more CUDA cores alone would probably not be worth it, NVIDIA also has another ace up its sleeve in the form of GDDR6 memory. If the conceptually similar GTX 1660 Ti is anything to go by, a fully-enabled TU117 card with a small bump in clockspeeds and 4GB of GDDR6 could probably pull far enough ahead of the vanilla GTX 1650 to justify a new card, perhaps at $179 or so to fill the gap in NVIDIA’s current product stack.
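
To illustrate the kind of headroom we’re talking about, here’s a rough sketch of how such a hypothetical fully-enabled, GDDR6-equipped TU117 card would compare on paper. To be clear, the 12Gbps GDDR6 data rate below is purely my assumption (it’s simply a common GDDR6 speed grade), not anything NVIDIA has announced:

```python
# Hypothetical "GTX 1650 Ti" sketch: full TU117 (1024 CUDA cores) plus GDDR6.
# The 12Gbps GDDR6 speed is an assumption (a common bin), not an announced spec.
cores_uplift = 1024 / 896 - 1      # ~14% more CUDA cores than the GTX 1650
bw_gddr5 = 128 / 8 * 8.0           # GTX 1650 today: 128 GB/s
bw_gddr6 = 128 / 8 * 12.0          # hypothetical GDDR6 card: 192 GB/s
print(f"Cores: +{cores_uplift:.0%}, memory bandwidth: +{bw_gddr6 / bw_gddr5 - 1:.0%}")
```

Even before any clockspeed bump, the memory side alone would be a 50% step up on paper, which is the sort of gap that could comfortably separate a Ti model from the vanilla card.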

Finally, as for the competition, AMD of course is riding out the tail-end of the Polaris-based Radeon RX 500 series, so this is what the GTX 1650 will be up against. AMD is trying very hard to set up the Radeon RX 570 8GB against the GTX 1650, which makes for a very interesting battle. Based on what we saw with the GTX 1660, the RX 570 should perform rather well versus the GTX 1650, and the 8GB of VRAM would be the icing on the cake. However I’m not sure AMD and its partners can necessarily hold 8GB card prices to $149 or less, in which case the competition may end up being the 4GB RX 570 instead.

Ultimately AMD’s position is going to be that while they can’t match the GTX 1650 on features or power efficiency – and bear in mind that the RX 570 is rated to draw almost twice as much power here – they can match it on pricing and beat it on performance. As long as AMD wants to hold the line there, this is going to be a favorable matchup for AMD on a pure price/performance basis for current-generation games. Though to see just how favorable, we’ll of course need to benchmark the GTX 1650, so be sure to stay tuned for that.

Q2 2019 GPU Pricing Comparison
AMD                      Price     NVIDIA
                         $349      GeForce RTX 2060
Radeon RX Vega 56        $279      GeForce GTX 1660 Ti
Radeon RX 590            $219      GeForce GTX 1660
Radeon RX 580 (8GB)      $189      GeForce GTX 1060 3GB (1152 cores)
Radeon RX 570            $149      GeForce GTX 1650

Comments

  • flyingpants265 - Wednesday, April 24, 2019 - link

    Jeez, first time I see someone complaining about high-end GPU cooling.
  • jharrison1185 - Tuesday, April 23, 2019 - link

    While only rumours, Navi 12 is supposed to be a 75W or less solution. Let's hope it comes sooner rather than later as a new AMD 7nm solution should blow this
    P.O.S. out of the water.
  • malukkhel - Tuesday, April 23, 2019 - link

    Please let Navi be better; I'm tired of the Nvidia/Intel monopoly
  • Opencg - Tuesday, April 23, 2019 - link

    Navi will likely be a lot better. Its main design target is an improved performance/production cost ratio, and Navi 20 will target the high end according to AMD. This is an indication that their multi-die GPU tech is a success. They will likely do more than break even against current offerings in the value market this year with Navi 10.
  • thetelkingman - Tuesday, April 23, 2019 - link

    AMD needs to step up their game in driver support and software. Nvidia has some amazing software that comes with their graphics cards, and Intel has pretty much all the features in terms of new technologies.
  • jharrison1185 - Tuesday, April 23, 2019 - link

    One can hope
  • ballsystemlord - Saturday, April 27, 2019 - link

    The current thinking is that Navi will not be very performant, but it will be cheaper -- much cheaper.
    Navi is, after all, just another refinement of GCN. Ignore the Arcturus and other myths.
    AMD's GCN was designed to be a general purpose compute and GFX arch, and it scaled in clock frequency in line with the cards of its time. Nvidia's new generation was Maxwell, which was so much faster because it clocked so much higher.
    AMD's original clocks were 0.8GHz, Maxwell's were 1.2GHz. That's over 34% faster clock for clock. And that's also before overclocking, which AMD's cards were terrible at and Maxwell was very good at.
    AMD later clocked their cards up to 1050MHz, but were still having a hard time of it as Nvidia's Pascal arch (an improvement on Maxwell, mostly in video codecs and RAM speed) was clocked at over 1.7GHz. AMD then released Polaris, which runs at ~1.3GHz.
    Even if you clocked GCN up, it's still just a little slower. It's not an amazing arch. It was their first try at something different.
    It's also been rather memory BW starved. What works better, overclocking Vega 64's HBM or its cores?
    I've heard, and believe, that AMD management was not so good either, which would place GCN as being underfunded and possibly mismanaged.
    There's something holding them back, I don't know what. Not the clocks now with Radeon VII, not the memory with Radeon VII. The ROPs aren't a likely candidate either, with Vega 64 not benefiting much from overclocking -- even after you speed up the HBM.
    I suspect that there is a small piece of Vega -- the warp scheduler (from memory) seems most likely -- that is particularly bad at what it does, hence the ~20% performance increase when doing parallel GPU processing (think Mantle), resulting in poor performance. Perhaps overclocking the cores does not even apply to that piece.

    I currently own only AMD HW (Ok, some ARM stuff too, but that does not count), so I'm not writing this to shill. This is their story. Albeit I don't know all of it.

    https://www.anandtech.com/show/7728/battlefield-4-...
    https://www.anandtech.com/show/10446/the-amd-radeo...
    https://www.anandtech.com/show/11180/the-nvidia-ge...
    https://www.anandtech.com/show/8526/nvidia-geforce...
    https://www.anandtech.com/show/5881/amd-announces-...
  • psychobriggsy - Tuesday, April 23, 2019 - link

    So 1/3rd more theoretical performance than the 1050Ti or 1050. Yet only 10% more memory bandwidth (possibly better bandwidth compression features in Turing?).
    But at a higher cost (10-35%), despite being launched 2.5 years later.

    This would be a good generational release if the price wasn't this high. This pricing (because of the 200mm^2 die no doubt) really allows AMD to position the 4GB 570 as a viable competitor until they launch Navi 10 Lite, or Navi 12, against it.
  • Yojimbo - Tuesday, April 23, 2019 - link

    The theoretical performance doesn't matter for price/performance arguments. You really have to wait for reviews to decide how it stacks up.
  • PeachNCream - Tuesday, April 23, 2019 - link

    Although it's good to see NV responding to the market by offering closer-to-reasonable prices on Turing-based GPUs, it would have been really interesting to see what the 20x0 GPUs could have been without the flubbed bet on ray-tracing and tensor cores. Power consumption and performance could have both been better, and the generational improvement over the 10x0 cards in terms of performance per CUDA core maybe could have been significant. The 16-series shows us what could have been in the price-performance-cost balance had RT not been mistakenly pushed onto a disinterested, cost-sensitive set of game publishers that were never going to pursue the capability, given the sub-par performance even the highest-end 20x0 cards delivered using said feature.
