Further Image Quality Improvements: SSAA LOD Bias and MLAA 2.0

The Southern Islands launch has been a bit atypical in that AMD has been continuing to introduce new AA features well after the hardware itself has shipped. The first major update to the 7900 series drivers brought with it super sample anti-aliasing (SSAA) support for DX10+, and starting with the Catalyst 12.3 beta later this month AMD is turning their eye towards further improvements for both SSAA and Morphological AA (MLAA).

On the SSAA side of things, since Catalyst 9.11 AMD has implemented an automatic negative Level Of Detail (LOD) bias in their drivers that gets triggered when using SSAA. As SSAA oversamples every aspect of a scene – including textures – it can filter out high frequency details in the process. A negative LOD bias counteracts this by causing the renderer to select higher resolution mipmaps at any given distance from the viewer, which is how AMD combats this effect.
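As a rough illustration of how such a bias scales, a common rule of thumb (an assumption here for illustration, not AMD's published formula) is to sharpen mipmap selection by half a mip level per doubling of the sample count, i.e. -0.5 × log2(N) for N-sample supersampling:

```python
import math

def ssaa_lod_bias(sample_count: int) -> float:
    """Rule-of-thumb negative mipmap LOD bias for N-sample supersampling.

    Each doubling of the sample count allows textures to be sampled half a
    mip level sharper without aliasing, hence -0.5 * log2(N) overall.
    """
    if sample_count < 1:
        raise ValueError("sample_count must be >= 1")
    return -0.5 * math.log2(sample_count)

# By this rule: 2x SSAA -> -0.5, 4x SSAA -> -1.0, 8x SSAA -> -1.5
for n in (2, 4, 8):
    print(f"{n}x SSAA: LOD bias {ssaa_lod_bias(n):+.1f}")
```

The actual bias values AMD's drivers apply are not documented; the sketch only shows why the bias grows with the sampling rate.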

With AMD’s initial release of DX10+ SSAA support for the 7900 series they enabled SSAA in DX10+ games, but they did not completely port over every aspect of their DX9 SSAA implementation. In this case, while there was a negative LOD bias for DX9, no such bias was in place for DX10+. Starting with Catalyst 12.3, AMD’s drivers apply a similar negative LOD bias for DX10+ SSAA, bringing it fully on par with their DX9 SSAA implementation.

As far as performance and image quality go, the impact on both is generally minimal. The negative LOD bias slightly increases the use of higher resolution textures, and thereby the number of texels to be fetched, but in our tests the performance difference was non-existent. For that matter, in our tests image quality didn’t significantly change due to the LOD bias. It definitely makes textures a bit sharper, but it’s a very subtle effect.


[Image comparison: original uncropped screenshots – 4x SSAA vs. 4x SSAA w/ LOD bias]

Moving on, AMD’s other AA change is to Morphological AA, their post-process pseudo-AA method. AMD first introduced MLAA back in 2010 with the 6800 series, and while they were breaking ground in the PC space with a post-process AA filter, game developers quickly took the initiative in 2011 to implement post-process AA directly into their games, which allowed it to be applied before HUD elements were drawn, avoiding the blurring of those elements.

Since then AMD has been working on refining their MLAA implementation; the refined version is being launched as MLAA 2.0 and will replace MLAA 1.0. In short, MLAA 2.0 is supposed to be both faster and better looking than MLAA 1.0, reflecting the very rapid pace of development for post-process AA over the last year and a half.

As far as performance goes, the performance claims are definitely true. We ran a quick selection of our benchmarks with MLAA 1.0 and MLAA 2.0, and the performance difference between the two is staggering at times. Whereas MLAA 1.0 had a significant (20%+) performance hit in all 3 games we tested, MLAA 2.0 has virtually no performance hit (<5%) in 2 of the 3 games, and in the 3rd game (Portal 2) the performance hit is still noticeably reduced. This largely mirrors what we’ve seen from games that implement their own post-process AA methods: post-process AA is nearly free in most games.

Radeon HD 7970 MLAA Performance (frames per second)

                     4x MSAA    4x MSAA + MLAA 1.0    4x MSAA + MLAA 2.0
Crysis: Warhead      54.7       43.5                  53.2
DiRT 3               85.9       49.5                  78.5
Portal 2             113.1      88.3                  92.0
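From those frame rates the relative cost of each mode can be computed directly; a quick sketch using the numbers from the table above:

```python
# Frame rates from the table above (Radeon HD 7970, 4x MSAA baseline):
# (4x MSAA, 4x MSAA + MLAA 1.0, 4x MSAA + MLAA 2.0)
results = {
    "Crysis: Warhead": (54.7, 43.5, 53.2),
    "DiRT 3": (85.9, 49.5, 78.5),
    "Portal 2": (113.1, 88.3, 92.0),
}

def perf_hit(baseline: float, with_mlaa: float) -> float:
    """Percentage of frame rate lost relative to the 4x MSAA baseline."""
    return (1.0 - with_mlaa / baseline) * 100.0

for game, (msaa, mlaa1, mlaa2) in results.items():
    print(f"{game}: MLAA 1.0 hit {perf_hit(msaa, mlaa1):.1f}%, "
          f"MLAA 2.0 hit {perf_hit(msaa, mlaa2):.1f}%")
```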

As for image quality, that’s not quite as straightforward. Since MLAA does not have access to any depth data and operates solely on the rendered image, it’s effectively a smart blur filter. Consequently, like any post-process AA method, there is a need to balance the blurring of aliased edges against the unintentional blurring of textures and other objects, so quality is largely a product of how much blurring you’re willing to put up with for any given amount of de-aliasing. In other words, it’s largely subjective.
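To make that concrete, a filter of this kind starts by detecting discontinuities in the final image, since no depth or geometry data is available. A minimal sketch of such an edge-detection pass follows (the Rec. 601 luminance weights and the 0.1 threshold are illustrative assumptions; AMD has not published MLAA 2.0’s internals):

```python
def luma(rgb):
    """Perceptual luminance of an (r, g, b) tuple in [0, 1] (Rec. 601 weights)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def detect_edges(image, threshold=0.1):
    """First pass of a morphological AA filter: flag discontinuities.

    image is a 2D list of (r, g, b) tuples. Returns, per pixel, a
    (left_edge, top_edge) pair marking whether the pixel differs from its
    left/top neighbour by more than `threshold` in luminance. A real
    implementation would next classify edge shapes and blend along them.
    """
    h, w = len(image), len(image[0])
    edges = [[(False, False)] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            l = luma(image[y][x])
            left = x > 0 and abs(l - luma(image[y][x - 1])) > threshold
            top = y > 0 and abs(l - luma(image[y - 1][x])) > threshold
            edges[y][x] = (left, top)
    return edges
```

The threshold is exactly the blur-versus-de-aliasing trade-off described above: raise it and fewer texture details get smeared, but more genuine aliasing survives.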


[Image comparison: original uncropped screenshots – MLAA 1.0 vs. MLAA 2.0 in Batman: Arkham City (#1, #2), Crysis: Warhead, and Portal 2]

From our tests, the one thing that MLAA 2.0 is clearly better at is identifying HUD elements in order to avoid blurring them – Portal 2 in particular showcases this well. Otherwise it’s a tossup; overall MLAA 2.0 appears to be less overbearing, but looking at Portal 2 again it ends up leaving aliasing that MLAA 1.0 resolved. Again this is purely subjective, but MLAA 2.0 appears to cause less image blurring at a cost of less de-aliasing of obvious aliasing artifacts. Whether that’s an improvement or not is left as an exercise to the reader.

173 Comments

  • fingerbob69 - Tuesday, March 6, 2012 - link

    It ain't, the 7870 is faster by 25-33% depending on the res. Price wise it's about 30% more (UK) but that fits with the bump in performance. So, you're wrong.
  • Houdani - Monday, March 5, 2012 - link

    Hey! My mom was born on Pitcairn. It's the top of a blown off volcano, only 1x2 miles large. No correlation, I'm sure. Interesting.
  • AlB80 - Monday, March 5, 2012 - link

    It beats the 6950, 6970 and 7850. Is that correct?
  • haukionkannel - Monday, March 5, 2012 - link

    Well these are good cards even at this moment! Of course we can hope for cheaper prices, but that needs at least two competitors and at this moment there are none...
    And I would not wonder if Kepler will be priced accordingly. Those Kepler chips are bigger, if the leaks are true, so they should be faster and they definitely will be more expensive (not counting those renamed low end cards that AMD is also releasing this time).
    AMD is not making a profit (in total) and Nvidia has a lot of new stuff going on that needs a lot of money to develop, so there seems to be zero reason for either company to reduce prices... a pity but true.
    If you have a good 5000 or 6000 series card you don't need these (just as if you have a good 6600 series CPU you don't need Ivy...) at this moment. But if you need a lot of power for little power usage these are extremely good, and also, as someone said, these are very small chips! So there is a lot of room for something a little bigger in the 8000 series. Tick tock... Seems a lot like Intel Ivy vs Haswell. Ivy does not offer much compared to Sandy, only lower power usage and a little better speed. Like someone else said, a very similar situation.
  • Hubb1e - Monday, March 5, 2012 - link

    The upgrade from a 5800 to a 7800 may be only 20-40% on stock clocks, but add in the extra headroom the 7800 has when overclocking and you're looking at a decent upgrade. Once the prices come down on these I'm sure you'll see quite a few folks dropping their 5800 for a 7800.
  • PurpleMoose - Monday, March 5, 2012 - link

    The 7850 (usually) slightly outperforms the 6950 despite having only 1024 shaders compared to 1408, with a ~7% core overclock (and a slight memory underclock). Even being conservative, that would make the GCN shaders about a third more efficient than the VLIW4 ones. But if we assume that a VLIW4 cluster performs more or less the same as a VLIW5 cluster, as does seem to be the case, then we can compare a hypothetical VLIW4-based 5770 with 640 shaders to the 7770. In this case the 7770 outperforms the 5770 basically by its clock speed difference, in other words clock for clock, shader for shader, VLIW4/5 vs GCN seems to be a wash.

    So why doesn't the 7700 series show as much (ie any) improvement?

    The most obvious deficiency is the memory bus and memory bandwidth, but if that's the case why not add more? Alternatively, if you're happy with the performance as is, why not cut away a few more shader groups as it seems the card really can't use them, and save even more space? I had a very brief look for overclocked results and couldn't really find any - what I'd find really interesting is if anyone has benched a stock 7770 against a 7750 running at 7770 frequencies. I wonder how much the loss of shaders would hurt.
  • jesh462 - Monday, March 5, 2012 - link

    Whenever I read an article on the new 7xxx series, I can't help but wonder if people remember what they're looking at. AMD moved to 28nm with this series. They also introduced a completely new architecture. They did so with no complications and without going overtime on the release date.
    This hasn't been done before. Even Intel doesn't attempt to do this with their CPUs. Tick, then tock, right?
    Not only did AMD manage to get their new line up out, but the new cards have performance that exceeds their Nvidia counterparts on both the gaming and compute levels, in most cases. People who buy actual retail samples of the 7xxx series are pleased with the great overclocking headroom. It's obvious that there is a lot of room, even in the 7xxx series current iteration, for growth.

    Despite all this, I still see people talk about how a 7xxx card isn't worth it, and how AMD is a sh*t company. Really? Ok.

    Disclaimer, I own an i7 laptop with a geforce 560 blah blah.. fanboy whatever. Just think about this before you post. Yeah the new cards could use a price drop. We all know they will, sooner or later. That's why it's called the waiting game.
  • arjuna1 - Monday, March 5, 2012 - link

    a 7xxx card is not worth it and AMD is not a sh*t company.

    I tend to agree with you for the most part, but there are no NVIDIA counterparts for the 7xxx series yet, and when there are, the 7xxx will come down in price and then its value will increase.
  • CeriseCogburn - Thursday, March 8, 2012 - link

    I'm sorry, we were promised southern islands for the 6000 series, and then, all that changed...
    What we really have here is a release that is like 2 years late.
    Apparently once AMD re-announces its new release schedule after admitting it missed its last release target... all you people suddenly get a gigantic case of perfect amnesia.
    To put it simply this is AN ENTIRE GENERATION LATE ON THE PROMISED RELEASE.
  • mattgmann - Monday, March 5, 2012 - link

    Where is this misconception that the pricing is anywhere near acceptable on these new parts coming from? So they fit right in with the current price/performance ratio. So what? AMD has basically put out a new line of cards that match their competitor's previous generation and cost SLIGHTLY less.

    Aren't technologies supposed to get better? What's the point in upgrading if you get basically the same amount of performance for your dollar today as when you bought your last part?

    Intel's new top end processors cost the same as last generation's, and the generation before that. New products replace old ones in pricing structures. AMD is raking in cash on these cards. They're less expensive to produce than last generation and retail for MORE money.

    AMD is taking full advantage of their current market position, and instead of passing on ANYTHING to the consumer, is milking every profitable drop.

    These cards' performance is impressive when compared apples to apples against last generation's equivalents. But since they basically all occupy a price slot a full tier higher than their predecessors, the comparison is moot.

    Too bad the only 2 companies in the graphics card race are so ill equipped to advance the industry. AMD, Nvidia, get a clue.
