
  • ravyne - Thursday, October 26, 2017 - link

    The prices for the SSG and WX9100 are swapped in the table.
  • Nate Oh - Thursday, October 26, 2017 - link

    Thanks! It's been fixed
  • Space Jam - Thursday, October 26, 2017 - link

    The MSRPs in the table are backwards. SSG is $6999, WX 9100 is $2199
  • ddriver - Thursday, October 26, 2017 - link

    The SSG is completely unnecessary; it's an act of desperation, a shot in the dark hoping to hit something meaningful.

    4 TB of storage would hold maybe 12 minutes of 8K footage - woot! Now if you are going to be streaming and buffering over the PCIe bus anyway, you might as well do it directly, without strapping hot-running SSDs onto a hot-running GPU. And save $5-6k while you are at it.
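
    For reference, a rough back-of-the-envelope check of that figure (a sketch; the uncompressed format, bit depth, and frame rate are illustrative assumptions, not from the article):

        # Approximate capacity math for uncompressed 8K footage.
        # Assumed (hypothetical): 4 bytes/pixel, 30 fps, 4 TB usable.
        width, height = 7680, 4320
        bytes_per_pixel = 4
        fps = 30
        capacity_bytes = 4e12  # 4 TB

        rate = width * height * bytes_per_pixel * fps  # bytes per second
        minutes = capacity_bytes / rate / 60
        print(f"{rate / 1e9:.1f} GB/s -> {minutes:.0f} minutes")
        # ~4.0 GB/s, so roughly 17 minutes; a heavier 16-bit format at the
        # same frame rate lands near the 12 minutes quoted above.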

    The real-world gain is unfortunately very close to zero. Having the GPU in an x16 slot means you can push ~16 GB/s; with some basic data streaming and buffering you can keep the GPU well fed 100% of the time, at the negligible penalty of a single initial buffer fill.

    Which means you can plug more adequate storage into the system, say an array of 16 HDDs, which will give you both the bandwidth and the capacity required for high-resolution video.

    But even for workloads less demanding than 8K video, this product is pointless.

    I'd much rather see AMD launch ECC-enabled GPUs with good FP64 rates at decent prices than this nonsense. This ain't it, with its measly 0.7 TFLOPS. A three-year-old FirePro W8100 thrashes that with its 2.1 TFLOPS, and sells for only $1,000. It defies logic that AMD is surrendering that market, where they had a huge value advantage over Nvidia...
  • mdriftmeyer - Thursday, October 26, 2017 - link

    Tell that to RED, Hollywood, and anyone working in 8K. This is for pre- and post-production houses.
  • ddriver - Thursday, October 26, 2017 - link

    The SSD cache is pointless; it is not what makes this work, which is why Nvidia was able to put on an identical demo without SSDs on its graphics cards. Your system can either meet the bandwidth requirement, in which case the caching is pointless, or it cannot, in which case the cache gives you 10 minutes of footage after preloading and then it is back to stuttering.

    All it really takes is streaming into a double buffer sized according to the computational throughput; the double buffering hides the extra latency completely, save for the initial buffer fill, which takes milliseconds. So it all boils down to one question: is shaving a few milliseconds off your render job worth an additional $7,500? I don't think so, because in such scenarios you are always computationally bottlenecked, and spending that $7,500 on more GPUs will give you actual, significant performance improvements.
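
    A minimal sketch of the double-buffering scheme being described (the file name, chunk size, and process_chunk() are hypothetical placeholders; real code would upload each chunk to the GPU):

        # Overlap reading the next chunk with processing the current one,
        # so only the very first read stalls the pipeline.
        from concurrent.futures import ThreadPoolExecutor

        CHUNK = 256 * 1024 * 1024  # 256 MB per buffer (assumed size)

        def process_chunk(data):
            pass  # stand-in for the GPU upload + compute step

        with open("footage.raw", "rb") as f, ThreadPoolExecutor(1) as reader:
            pending = reader.submit(f.read, CHUNK)      # one-time buffer fill
            while True:
                chunk = pending.result()                # waits only if I/O lags
                if not chunk:
                    break
                pending = reader.submit(f.read, CHUNK)  # prefetch next chunk...
                process_chunk(chunk)                    # ...while computing this one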

    The shame here is that this product is snake oil, and AMD is not in a market position where it can make money on snake oil; this ain't something a devoted fanboy will rush out to buy. That means the R&D on this is completely wasted, which is another position AMD cannot afford to be in. It hurts the company financially and engineering-wise, which makes it less competitive, which is bad for consumers, and that is why I have a problem with the whole thing. It is a completely misguided effort to make something new for the sake of being new, not because it actually makes sense to make it.
  • Samus - Thursday, October 26, 2017 - link

    I don't know, man... having 2 TB of PCIe x16 storage embedded in a GPU pushing 12-24 TFLOPS sounds like a hell of a useful product for multiple (niche) applications.
  • ddriver - Friday, October 27, 2017 - link

    Sounds? How exactly? NAND is so slow that it literally makes no difference whether it sits on the GPU board or elsewhere on the PCIe bus. If anything, the SSG solution introduces an additional step in the pipeline, and one subject to rather fast wear at that. Keep in mind that these are standard PCIe NVMe drives; they do not somehow magically connect to the GPU core itself. They still sit on a PCIe bus, still carry the overheads of NAND, NVMe, and PCIe, and any notion of direct addressing is emulated. They don't magically fill with data either; it still has to arrive from somewhere.

    AMD's true motive here is to create an illusion of innovation and sell some SSDs at double their price. Because those "WTF Radeon SSDs" were such a success :)

    But imagine how stupid it will look, for both AMD and the chumps who get duped into buying this, when Nvidia cards turn out to be capable of the same feats without integrating SSDs.

    AMD really needs to do better. I realize the R&D on this was minimal, but it is still a waste, and the company is not in a position to either waste resources or play shenanigans.
  • mode_13h - Friday, October 27, 2017 - link

    Wow, you really don't get it.

    SSG is not about bandwidth - it's about latency. That's not something you can fix with double buffering or any amount of queuing. Several of the examples AMD highlights involve multi-gigabyte GIS datasets and other workloads that necessitate random access.
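
    A rough model of that distinction (a sketch; the latency and bandwidth figures are illustrative assumptions, and real drives would overlap some requests via queue depth):

        # Sequential streaming vs. serialized random reads, order of magnitude.
        reads = 1_000_000        # random 4 KB touches into a large dataset
        nvme_latency = 100e-6    # ~100 us per NVMe random read (assumed)
        bandwidth = 16e9         # PCIe 3.0 x16, ~16 GB/s

        streamed = reads * 4096 / bandwidth  # one big sequential transfer
        random_io = reads * nvme_latency     # each read waits on latency
        print(f"streamed: {streamed:.2f} s, random: {random_io:.0f} s")
        # ~0.26 s vs ~100 s: under random access the pattern, not bandwidth,
        # dominates - and a double buffer cannot predict which block a
        # random lookup will need next.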
  • Lord of the Bored - Saturday, October 28, 2017 - link

    Is it bad that I mostly read the comments to see ddriver's latest nonsense, I mean industry expertise, and everyone else dropping knowledge-bombs on him?
  • Tams80 - Sunday, October 29, 2017 - link

    The comments here wouldn't be the same without ddriver.

    Long posts written to sound knowledgeable, but rarely so. Top entertainment.
  • Kevin G - Sunday, October 29, 2017 - link

    I'm not entirely sure there is a raw latency win either. I think AMD went this route because they knew it would actually work in the end. PCIe devices are permitted to communicate directly with each other, so at the hardware level it is conceptually possible for a brand-spanking-new Intel Optane 900p drive to be used directly by the Radeon SSG as cache storage. The catch is the software side, which would require hypervisor, OS, and driver support (both Intel's and AMD's) for it all to work. This should conceptually be cheaper and faster (the entire PCIe x16 bus on the Radeon SSG could be used for transfers).
  • edzieba - Friday, October 27, 2017 - link

    The Radeon SSG uses a Broadcom PEX8747 PCIe switch chip to take the x16 link from the GPU and split it into an x8 link (bifurcated into two x4 links to the two internal M.2 drives) and an x16 link for the card-edge connector (bandwidth-limited to x8 when the drives are in use).

    If you instead put the GPU in one slot and an x8 SSD host card in another, you could use DMA to get the same result. Or use a host card with an x16 link (four x4 SSDs rather than two) for greater performance. Both Nvidia and AMD cards can already use DMA, and have been able to for many, many years.

    Having the M.2 drives on the same card means you pay extra for an expensive lane-switching chip, and you only see an actual benefit if the physical volume of your chassis keeps you from adding the extra host cards.
  • Kevin G - Sunday, October 29, 2017 - link

    By putting them on board, AMD has the option of hiding them from the host system for other purposes and configurations. Using NVMe drives as you describe is technically possible but requires every party involved to behave as expected, including the user, who would have to configure the NVMe drive as a Radeon SSG cache. Insert the inevitable issue where a scientist assigns the nice NVMe boot drive to a Radeon SSG cache and nukes the system install.

    You're also mistaken about the PEX8747 chip. It has 48 lanes total, of which 16 go to the host system and 16 to the Radeon GPU, which leaves 16 lanes available for SSDs. The chip would only be a bottleneck if the Radeon GPU could saturate its link to the host system while simultaneously needing access to the onboard SSDs.
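
    That lane budget as arithmetic (a sketch; the ~0.985 GB/s per lane figure is the usual PCIe 3.0 number after 128b/130b encoding):

        # PEX8747 lane budget as described above (PCIe 3.0).
        total_lanes = 48
        host_lanes, gpu_lanes = 16, 16
        ssd_lanes = total_lanes - host_lanes - gpu_lanes  # 16 left over
        per_lane = 0.985  # GB/s per lane, PCIe 3.0

        print(f"SSD lanes: {ssd_lanes}, "
              f"host link: {host_lanes * per_lane:.1f} GB/s, "
              f"SSD side: {ssd_lanes * per_lane:.1f} GB/s")
        # ~15.8 GB/s each way; contention arises only if the GPU saturates
        # the host link and hits the onboard SSDs at full rate at once.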
