They've yet to fail me. I think I may take the plunge with Ivy Bridge. My 920 X58 build has served me well, but I have the itch and I've avoided scratching it since late 2008. I don't think I can hold off any longer!!!
Ha ha. We all get the rational or irrational urge to upgrade, whether we 'need' it or not. I would never stop someone from enjoying a new build. But that being said, the Core i7 920 still holds its own pretty well. However, I suppose Quick Sync alone could be worth the upgrade if it is a feature you use heavily. That is one area where the performance gains are just phenomenal.
I had a 920 X58 setup and decided to rebuild to an i7 2700K Z68. So far I've regretted the whole thing. I'm going to pick up one of these ASUS Z77 Deluxe boards but I don't think it's going to make a big difference.
The X58s/i7s were rock solid and performance monsters. I honestly see very little in terms of performance gains. Supposedly Ivy Bridge processors are only going to give you about a 15% to 20% increase in performance. If that's the case I think I'm going to stick with the 2700K until the next architecture change.
I actually would like a comparison with the ZOTAC Z77-ITX WiFi, as I'm leaning toward the Zotac mobo due to the mSATA compatibility gained by removing the wifi/bt module.
I'm very curious about this too. On page two, Ian says "Within the hybrid system, the integrated GPU takes over two of the tasks for the GPU – snooping for required frames, and display output. This requires a system to run in i-Mode, where the display is connected to the integrated GPU."
This is a super interesting feature, and I hope it performs as well in reality as it sounds like on paper. And with a triple screen setup it would be bliss.
I have a three monitor setup working just fine on a Z68 board with all monitors attached to a 6970. Virtu gives the option of which you want to be primary - the video card or the integrated graphics. So for me (with the video card primary) this works kind of backwards from a power saving point, but good for performance since it still allows for quick sync video transcoding, etc.
I know this doesn't address the new Virtu MVP, but I can't see them taking a step backwards when something similar works on the old version. Especially since if you are running in Eyefinity mode it is just seen by the system as one big wide monitor and not three separate screens that each get their own render. Hopefully they can pull it off because I like my three screen setup and would hate to lose features because of that.
I'm curious how Skyrim behaves with this Lucid technology, since physics and framerate are linked for whatever reason... (if you disable Skyrim's 60 fps cap and point the camera in a direction that gives you 150+ FPS for example, everything that is moveable nearby starts to rattle and fall off shelves...)
Games with their own framerate limits should not be affected, as long as that limit is preserved. They already simply 'pause' internally if the machine is too fast. It might go absolutely crazy though if you do forcefully disable that mechanism.
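For anyone wondering why that happens, the usual culprit is a physics step driven by the raw frame delta rather than a fixed timestep. Here is a minimal sketch of the difference (hypothetical C++ for illustration, not Bethesda's actual engine code):

```cpp
#include <chrono>

// Hypothetical illustration only. When the simulation is stepped by the raw
// frame delta, per-step integration error varies with framerate; at 150+ FPS
// tiny contact jitter accumulates and resting objects start to "rattle".
// A fixed timestep decouples the simulation rate from the render rate.

void updatePhysics(double dt) { /* advance rigid bodies by dt seconds */ (void)dt; }
void renderFrame()            { /* submit draw calls for the current state */ }

// Frame-coupled update: step size changes with FPS -> unstable when uncapped.
void frameCoupledLoop(double frameDelta) {
    updatePhysics(frameDelta);
    renderFrame();
}

// Fixed-timestep update: always simulate at 60 Hz, render as fast as the GPU allows.
void fixedTimestepLoop() {
    using clock = std::chrono::steady_clock;
    constexpr double kStep = 1.0 / 60.0;
    double accumulator = 0.0;
    auto previous = clock::now();

    for (;;) {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        while (accumulator >= kStep) {   // run 0..N fixed steps per rendered frame
            updatePhysics(kStep);
            accumulator -= kStep;
        }
        renderFrame();
    }
}
```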
Am I reading that right? Z75 offers most of what even enthusiasts would want?
So... why are there a crapton of Z77 boards in here and no Z75s?
Z75 supports 2 way crossfire / SLI, overclocking, 6Gbps SATA, native USB3.0... these are the features all but a tiny handful of users should be interested in.
By all rights Z75 should be the definitive chipset for the average enthusiast. Unless I'm missing something major, I hope there is significant attention paid to the Z75 chipset in reviews, because I'm failing to see why any but the most extreme users and those with money to burn would choose the Z77.
Actually the ones with less money to burn might choose Z77 over Z75 to avoid investing in a huge SSD. The difference between chipset prices are usually small (what the motherboard manufacturer makes out of this is another question entirely).
I find SSD caching to be a desperate dinosaur attempt, mainly fueled by HDD makers. Hybrid HDDs are in the same basket. And yes, pick one, Z77 or Z75. The other one makes no sense.
Z77, Z75 and H77 chipsets are priced at US$48, US$40 and US$43.
If SRT basically costs $8, then it's time for it to go away as the stillborn tech it is.
The difference is mainly software and development, and should go away. SSD caching is a great idea that, imo, should optimally be on the filesystem level, not on the block level.
*Looks at OS / filesys developers* (ZFS has some of it)
And oh, I expect lower end derivatives to come out eventually.
I was really excited about this motherboard - was kind of disappointed to see it is not part of this roundup. But I guess there will be other tests. Thanks for the preview - I look forward to the benchmarks, particularly if asus' memory technology works well, and if memory bandwidth plays more of a role on Ivy Bridge.
Things like overclocking being restricted to specific chipsets is really disappointing. AM3+ boards are generally cheaper than equivalent Intel boards and they don't lock features like this.
I didn't see your comment and posed a similar question, but I'm fairly certain the answer is that there are actually 4 SuperSpeed ports provided by the 7-series chipsets. If you look at the block diagram shown for the Intel DZ77GA-70K motherboard it looks like they clearly used two 3-port USB 3.0 hub chips to arrive at a total of 8 USB 3 ports. Which also brings up the point that by leveraging the integrated USB 3.0 capabilities, motherboard manufacturers can add as many USB 3.0 ports as they like by using far less expensive hub chips instead of full blown controllers which also require a PCIe lane apiece.
And speaking of PCIe, I also wonder about where Ian says, "These are known as PCIe 3.0 PLX PXE chips..." I'm guessing that he's referring to PLX's PEX 874x Gen 3 PCIe switches, but it's stated a bit oddly.
Yeah, the WiFi connection(s) on the ASUS Pro and Deluxe depends on a small WiFi module that plugs into a particular slot. These will be interchangeable AND I have seen ASUS showing off a WiFi/ 60GB SSD drive going into that slot. The Pro looks like just a receiver while the Deluxe has a WiFi router built in but I am REALLY guessing on this. I think I will go with the Deluxe just because it has a couple of features I like and the WiFi router combo will be just gravy.
Our MCombo card (which is on the Maximus V Gene and upcoming Formula) should not be confused with the solution on our channel boards (standard -V, Pro or Deluxe). The MCombo will allow you to install any MiniPCI-E or mSATA cards into their corresponding slots.
The module which connects to the back I/O panel can be opened. While not promoted as being DIY there is nothing stopping you from installing your own mini PCI-E wireless controller.
I know I've asked for this before, but if you're going to do a big roundup of all these motherboards (which I'm looking forward to, as I plan to upgrade to Ivy Bridge on release), then please please test the boot / POST times to compare between the boards!
Just the time it takes from hitting the power switch to when it starts to actually run the bootloader off disk. Or until it displays the "please insert boot media" - the actual time the bios contributes to the total boot time. This is something that can really differentiate between different bios implementations and would be really useful to know when choosing.
I know having RAID and on board devices turned on make a big difference, so a baseline of everything non-essential turned off, or just those devices that are present on all boards would make sense.
Especially with the 6 Gbps Marvell controllers. Those damn controllers can take over 10 seconds to boot. That can easily be as much as the entire rest of the system.
What is the actual significant difference between Asus P8Z77-V and P8Z77-V Pro? The only things I noticed were a few more phases in power supply (not significant).
The Pro offers:
- higher quality back I/O bracing for the display connections
- a higher phase count (12+4)
- an additional front USB 3 header
- eSATA via a bracket
- an additional fan header

Otherwise all other key features and the hardware implementation are the same.
All 6-series chipsets are said to support PCIe 3 in this table. Would that work with Ivy Bridge and a proper GPU? It would be the first time I've heard about this. And since PCIe 3 is supposed to be a new feature of the 7-series, I suppose it's a typo.
Well, a lot of 6-series boards support PCIe 3.0 because the controller is on the CPU (Ivy Bridge). Essentially the only hurdle besides the BIOS for PCIe 3.0 support on the 6 series is the added switches for splitting the PCIe x16 into two x8. That's also why the basic 6-series boards have a better chance of PCIe 3.0 support.
How does it work together with NVIDIA's adaptive V-Sync, which debuted with the GTX 680? And which, IMO, looked quite promising (lowering average power consumption a lot while gaming).
"Predicting which frames (or rendering tasks) will never be shown and taking them out of the pipeline so the GPU can work on what is needed"
Along the lines of power consumption (and the ever-important side effect, heat), I would be very interested in seeing an article on MVP vs power consumption/heat on a power-hungry dGPU (ala GTX 480).
Not sure why most of these boards bother including the non-Intel USB 3.0 controllers any more.
Not many people have several USB 3.0 devices, so they could be saving costs. Or instead bringing back some of the things the article mentions were taken away - like DVI on an ASUS board.
I think board makers are getting desperate for ways to add value.
Fewer and fewer components to change and diversify with.
And it's not getting better in the future. Haswell on LGA1150 basically removes the entire VRM section from the motherboard. No more 32 phases or whatever.
A $50 board performs identically to a $200 board if you don't overclock. Actually the $50 board might be more reliable (fewer components to fail) and more power efficient.
$50 boards are probably going to at least boot faster. The driver load and POST times with many 3rd party controllers, SATA especially, are atrocious. I've seen some boards that take nearly 3x longer if the external controllers aren't disabled.
Mostly marketing I would guess. Intel only supports 4 3.0 ports, so if you add another chip you now have a checklist item to justify the premium price of your motherboard and possibly differentiate it from lower-end versions in the same product line.
Second, ASUS owns ASMedia, who makes a lot of the (mediocre, IMHO) USB 3 and Bluetooth controllers on their boards, so there's probably an incentive to use them in their own products to help prop up their production volumes.
Third, a lot of the third party controllers have special modes to support things like high-current charging for iPads or charging when the computer is turned off. Does anyone know if it's possible to add these features to Intel-based ports? If not, that would be an incentive to include a secondary chip.
ASMedia does not produce or design a Bluetooth controller. Additionally, you are correct in that add-in controllers do offer support for specialized modes of operation/functionality (like charging).
The Intel controller under Windows 7 does not offer UASP mode. With UASP, the ASMedia add-in controller can provide superior performance, especially at higher queue depths. The USB 3 Boost package is offered for both the Intel controller and the ASMedia: Intel gets support for changing from BOT mode to SCSI (Turbo) mode, and the ASMedia controller gets support for BOT, SCSI (Turbo), as well as UASP.
The P8Z77-M is the one I am waiting for, for my Micro ATX build. It has none of those extra useless controllers, which are all going to be inferior to the native Intel ones, so I won't be overpaying for anything.
However, I am planning to overclock so I wish to know how it will perform in this regard! Hoping to see these ASUS Micro ATX boards on Anandtech asap :)
It features a Digi+ VRM with robust VRM components. Overall in our testing the -M Pro provided comparable scaling to that of our standard board. You will see this information from ASUS soon. Solid board!
What happened to the boards with 10 SATA ports? There were some from tradeshows earlier in the year, but none listed here (apart from the one with 8 + 2 eSATA).
Obviously this isn't a comprehensive lineup, but most of these seem to be fairly high end boards, yet no 10 SATA ports.
Very few users use that many ports; it makes more sense to prioritize expansion lane support for add-ons that will actually be used. Keep in mind that even with a board with that number of ports, you may not actually have that much usable bandwidth.
Should you really need that many, you should consider an add-on controller card. Things like SATA ports, while important, are not the only way to distinguish true attention to design in higher end boards.
The article says Virtu MVP has an i-Mode and a d-Mode, but which one is the better one? I kinda didn't get the difference, except where you connect your display.
For regular Virtu, d-Mode is the mode that was better from a performance perspective, since at times performance under i-Mode dipped due to having to send frames out to the iGPU.
For MVP, there's not going to be all that much of a difference. Regardless of the mode used Virtual V-Sync needs frames passed from the dGPU to the iGPU. The only difference is which display output is used, since a copy of the frame is on both GPUs (i.e. while you have to send frames to the iGPU, you don't have to send them back even in d-Mode).
Well, it depends (it's the same, by the way, for H77, Z75, Z77, etc.). You need to have two DisplayPorts if I remember correctly. So if you have, for example, DVI, HDMI and DP, then two screens only. If you have DVI, DP, DP, then three. Or if you daisy-chain the DP?
Listing fan header count and layout is useless if you do not test the functionality. Do they report RPM? Can they control fan speed? Does the fan control work with SpeedFan etc. or only the software from the motherboard manufacturer? Several years ago many people including myself brought this to Anandtech's attention. At the time Anandtech stated they would include this missing info, which they did. Trouble is the info was only included in one or two reviews before it was dropped. When I purchase a new motherboard I want to know this info. I have spent many hours searching for this info for each new build as most manufacturers do not give detailed info even in the motherboard manual.
IMO if a fan header exists it must have full functionality. If not, the header should not be on the board. Motherboard manufacturers need to pull their heads out of their asses. If Anandtech reported this info in reviews and gave negative reviews to boards with poor fan support, the manufacturers would get the hint.
The primary CPU headers (CPU and CPU OPT) are fully controllable for 4-pin fans, as the majority of CPU coolers are PWM. The chassis headers (1-4) all allow for both 3-pin and 4-pin fan control.
This has been noted in the last couple of reviews. Specifically for ASUS, we have spent considerable time putting quality fan controls on our boards: all headers allow for 3 presets as well as min and max rotation and target temperatures. In addition, with our software for this generation we offer full calibration per header, which can sense the min and max rotation, provide this information, and sync this data to the profiles. Overall it is quite extensive; make sure to check out our upcoming videos, which show it in great depth.
@dubyadubya - Look on the bright side: at least one manufacturer (Asus) takes fans seriously, and at least one reviewer (Anandtech) is even mentioning the fact.
I have the same wants as you do, and have made the same requests, but be reasonable. These aren't full motherboard reviews! They don't even have the boards operable, much less any hands-on time with BIOS details.
And when they do have all that, a higher priority will be PCIe lanes and how many graphics cards can be stuffed in. That's because you can't run any modern games with only one card.
But then they might talk a little more about the fan controls... Let's hope. Again - be glad that even one vendor is paying attention and has included some controls to be talked about.
With a slightly more than passing knowledge of rendering, and having spent a fair amount of time handling input in a game engine, I'm curious as to how Lucid came to the responsiveness numbers in the chart on page 3. The concept seems valid at first glance, but the numbers strike me as pure marketing fodder as opposed to solid and testable results.
Also, this sort of technology seems far better suited to residing in the driver layer as opposed to yet another piece of middleware for PC gamers to contend with. We're already effectively blocked from the hardware, and forced to go through third-party graphics APIs (Direct3D/OpenGL).
Maybe it's a "you have to see it (feel it)" kind of thing, but from here you can color me skeptical.
"handling input in a game engine" means nothing here. What matters is when your input is reflected in a rendered image and displayed on your monitor. That involves the entire package. Lucid basically prevents GPUs from rendering an image that won't get displayed in its entirety, allowing the GPU to begin work on the next image, effectively narrowing the gap from your input to the screen.
I am sure he knows that. He was just giving a bit of detail as to his exact experience, which I would bet is far more than most people on here have. You have to be very aware of things such as latency and delay when you are handling input in a game engine. I agree with the OP and am skeptical also. The bit that makes me most curious is the transfer of the fully rendered screens from one framebuffer to the other; that has to add some latency, and probably enough to make the entire process worthless. It's not like Lucid has a good track record on stuff like this - I mean, we all know how their cross-platform SLI/CF took off and worked so well....
Why would you need to physically copy framebuffers?? I'm sure pointers are used...
I have no idea if this has tangible benefits, but theoretically it does. None of us know until we can test it. I'm more inclined to discredit the people already discrediting Lucid, despite Lucid's track record. That's what you call hating.
Personally, I'm absolutely uninterested in anything 'high-performance', especially fancy gaming stuff. Not to say that I don't think that's a valid market niche, but I see other possibilities.
I'm really looking forward to new thin ITX boards with built-in DC-DC converter (i.e. running directly off a 19V brick), and I am especially wondering whether Intel (or Zotac, possibly) is going to build a golden board this time around. Last time, they made DH61AG which was a nice board, but lacked an msata port (kind of a must for a truly thin computer) and 'only' had an H61 chipset.
With H77, I expect it will be possible to make a thin ITX board with USB 3.0 and a fast on-board SSD option, combining this with an HD 4000 equipped processor would enable users to build a truly thin (sub-4 inch thick) computer that fits on the back of their monitor but still provides ample computing power.
It sounds to me that Lucid Virtual V-Sync is just glorified triple buffering with a lot of marketing and a bit of overhead for transferring frames and powering two video cards instead of one. I'm very skeptical on the HyperFormance too.
It seems a bit more involved than triple buffering, more like having 2 buffers where the back buffer is not flipped until it is fully rendered. Seems like this would lead to more stuttering, and given the number of times they asked Mr. Cutress to reiterate that this would be a bug, it may be something they are seriously concerned with.
Thinking about it a little more, I'm not sure what advantages this system would have over a system with separated input and rendering modules. The academic side of me is extremely interested and hopeful, but the practical developer side of me is going to require a lot more to be brought on board.
Separate input and rendering modules, as I stated in an earlier post, means nothing. They allow for a responsive mouse cursor, for instance. But, when you actually provide input that alters the RENDERED WORLD, you have to wait for that input to reflect on screen. It doesn't matter how perfectly the software solution is architected, you still have to wait for the rendering of the image after your input.
Lucid simply prevents renders that never get displayed in their entirety, allowing the GPU to work on the NEXT image, shortening the time from your input to the screen.
The comment was to indicate that while I have experience writing input systems, rendering is still relatively new to me; simply a qualifier of my impression and opinion.
The way I am understanding Lucid, it is attempting to preempt displaying a frame that is not fully rendered in time for the next screen refresh. By presenting a virtual interface to both the GPU and the application, the application believes the frame has been rendered (displaying user input at that time) and proceeds to render the next frame. Thinking more about it, would this reduce the time interval between input reflected in frame one (which was preempted) and frame two (which will be displayed), so that rather than having input sampled at a fixed rate (say 60 Hz) and displayed at a variable rate, input would be more closely tied to the frame for which it is intended?
My interest is rising, but it still seems like a rather complex solution to a problem that I either haven't experienced, or which doesn't really bother me.
it's not preemptively doing anything, except determining if a frame added to the queue will finish rendering in time... if not, it >>>>DOESN'T LET THE GPU RENDER IT<<<< and places the previously rendered image in its place, allowing the GPU to immediately begin work on the FOLLOWING frame... that's it... it cuts unneeded frames from queues
as for your input sampling rate question, that's entirely based on how the application is coded to handle input; Lucid has nothing to do with this...
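In rough code terms, the idea as described in the article would look something like the following. This is a speculative sketch of the concept only - the function names and the frame-time estimate are made up, and none of this is Lucid's actual driver logic:

```cpp
#include <deque>

// Speculative sketch of the HyperFormance idea: estimate whether a queued
// frame can finish before the next display refresh; if it cannot, skip the
// rendering work and re-present the last completed frame so the GPU can start
// on the following frame immediately. All names and numbers are illustrative.

struct Frame { int id; };

static std::deque<double> recentFrameTimesMs = {12.0, 13.5, 12.8};  // sample history

double estimateRenderTimeMs(const Frame&) {
    // naive estimate: average of recently observed frame times
    double sum = 0.0;
    for (double t : recentFrameTimesMs) sum += t;
    return sum / recentFrameTimesMs.size();
}

double timeUntilNextRefreshMs() {
    return 9.0;   // placeholder: remaining portion of a 16.7 ms slice at 60 Hz
}

void renderAndPresent(const Frame&) { /* issue draw calls, then flip */ }
void presentPreviousFrame()         { /* re-show the last completed frame */ }

void submit(const Frame& f) {
    if (estimateRenderTimeMs(f) <= timeUntilNextRefreshMs()) {
        renderAndPresent(f);        // it will be ready in time
    } else {
        presentPreviousFrame();     // it would miss the refresh, so drop it
        // the GPU is now free to begin work on the *following* frame
    }
}
```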
Do you even know what it means to preempt a frame? Cavalcade is describing the technology correctly. He is explaining pretty much the same thing as you are but you just don't get it..
Also separate input and rendering modules means a lot. Typically a game engine will have a big loop that will check input, draw the frame, and restart (amongst other things of course) but to split that into two independent loops is what he is talking about.
You really should look up "preemption." This is not what is happening... CLOSE, but not quite. Preemption is not the right word at all. This makes him incorrect and I kindly tried explaining. You are incorrect in backing him up and then accusing me of being inept. Guess what that makes you?
On top of that, he's also not talking about splitting input and rendering into two loops. Not even close. How did you come up with this idea? He's asking how the input polling is affected with this technology. It is not, and can not, unless polling is strictly tied to framerate.
I want to be clear that I'm not for this technology. I think it won't offer any tangible benefits, especially if you're already over 100 fps, and they want to power up a second GPU in the process... I'm just trying to help explain how it's supposed to work.
"handling input in a game engine" means nothing here. What matters is when your input is reflected in a rendered image and displayed on your monitor. That involves the entire package. Lucid basically prevents GPUs from rendering an image that won't get displayed in its entirety, allowing the GPU to begin work on the next image, effectively narrowing the gap from your input to the screen.
Triple buffering as we know it - with 2 back buffers and the ability to disregard a buffer if it's too old - doesn't exist in most DirectX games and can't be forced by the video card. Triple buffering as implemented for most DirectX games is a 3 buffer queue, which means every frame drawn is shown, and the 3rd buffer adds another frame of input lag.
On paper (note: I have yet to test this), Virtual V-Sync should behave exactly like triple buffering. The iGPU back buffer allows Lucid to accept a newer frame regardless of whether the existing frame has been used or not, as opposed to operating as a queue. This has the same outcome as triple buffering, primarily that the GPU never goes idle due to full buffers and there isn't an additional frame of input lag.
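To make the distinction concrete, here is an illustrative sketch of the two behaviors (my own simplification, not actual DirectX or Lucid code):

```cpp
#include <deque>
#include <optional>

struct Frame { int id; };

// DirectX-style render-ahead queue: every submitted frame is shown in order,
// so a full 3-deep queue means the displayed frame is up to 3 frames old.
struct RenderAheadQueue {
    std::deque<Frame> queue;                          // pending frames, oldest first
    bool full() const { return queue.size() >= 3; }   // GPU stalls when this is true
    void submit(const Frame& f) { queue.push_back(f); }
    std::optional<Frame> presentOnVsync() {
        if (queue.empty()) return std::nullopt;
        Frame f = queue.front();                      // oldest frame -> extra input lag
        queue.pop_front();
        return f;
    }
};

// "Classic" triple buffering: a newer completed frame simply replaces the older
// waiting one, so the display always gets the newest frame and the GPU never
// blocks on a full queue.
struct TripleBuffer {
    std::optional<Frame> pending;                     // at most one waiting frame
    void submit(const Frame& f) { pending = f; }      // overwrite, never block
    std::optional<Frame> presentOnVsync() {
        auto f = pending;                             // newest complete frame only
        pending.reset();
        return f;
    }
};
```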
The overhead of course remains to be seen. Lucid seems confident, but this is what benchmarking is for. But should it work, I'd be more than happy to see the return of traditional triple buffering.
HyperFormance is another matter of course. Frame rendering time prediction is very hard. The potential for reduced input lag is clear, but this is something that we need to test.
Lucid was very confident in their Hydra solution, but it never performed even close to SLI/CrossFire, and after much initial hype being echoed by the tech press it just disappeared. I'll believe they have something working well when I see it, but not before.
Page 8 quote: "The VRM power delivery weighs in at 6 + 4 phase, which is by no means substantial (remember the ASRock Z77 Extreme4 was 8 + 4 and less expensive)." Yet: the "Conclusions" chart (page 14) shows the same board having 10 + 4 power. Which is correct?
I'm bummed that ASUS didn't include mSATA connectors. Small mSATA SSDs would make for great cache or boot drives with no installation hassles and they're pretty cheap and available at the low capacities you'd want for a cache drive. That's a feature I will be looking for with my next mobo purchase.
Ditching USB 2.0 is also one of the next steps I'll be looking for. Not having to spend a second thinking about which port to plug something in to will be nice once USB 2.0 is finally laid to rest. Having only 4 USB 3.0 ports is stupidly low this long after the release of the standard, and it's hampering the development of USB 3.0 devices.
Finally, I've been repeatedly impressed by my Intel NICs over the last decade. They simply perform faster and more reliably than the other chips. I look for an Intel NIC when I shop for mobos.
This is similar to what happened with the USB 1 -> 2 transition. The newer controller is significantly bigger (read: more expensive) and very few people have more than one or two devices using it per computer. I suspect the 8-series (Haswell) chipsets will be mixed as well, simply because the total number of ports on the chipset is so much higher than it was a decade ago (versus older boards, where all but the lowest end models added more USB from 3rd party controllers).
mSATA currently has very little penetration in the market, and cost-wise you can purchase a larger standard cache SSD for the same or lower cost. We would prefer to focus on bringing implementations that offer immediate value to users.
As for the Intel NICs, all of our launch ATX boards (standard and above) feature Intel LAN; we have been leading in this regard for a couple of generations.
In regard to USB 3, we offer more than the standard on many boards, but keep in mind many users only have one USB 3 device.
Maybe I missed something from an earlier post, but could someone please tell me why these don't have light peak? Are they waiting to go optical and it is not ready yet? Having my USB3 controlled by Intel instead of another chip is not enough to make me want to upgrade my Z68 board...
Thunderbolt controllers are relatively expensive ($20-30) and their value is fairly limited on a system using a full size ATX motherboard that has multiple PCIe slots. Including two digital display outputs, an x4 and a couple x1 PCIe slots on a motherboard provides essentially all the same functionality as Thunderbolt but at a way lower cost.
Almost all of our boards feature a special TB header which allows you to easily equip our boards with a Thunderbolt add-on card, which we will release at the end of the month. Expect an approximate cost of $40; this card will connect to the TB header and install in an x4 slot, providing you with Thunderbolt should you want it. A great option for those who want it, and those who do not don't pay for it.
I've made it this far on my venerable OC Q6600, but I can't wait any longer. I do wish they weren't so stingy on the 6 core as I could use it, but I just can't justify the price differential (w 3 kids that is.)
USB 3.0 descriptions and depictions are contradictory. The platform summary table says there are 4. The Intel diagram shows up to 4 on front and back (and the diagram is itself very confusing, because there are 4 USB 3.0 ports indicated on the chipset, and then they show 2 going to hubs, and 2 going directly to the jacks.) The text of the article says there can only be 2 USB 3.0 ports.
I think there are 2 real ports (full bandwidth ports) and the Intel solution uses 2 additional chips that act like "hubs", splitting each real port into 4 separate ports.
Basically the bandwidth of each real port gets split if there are several devices connected to the same hub.
A hub, as far as I know, means that whatever the hub receives is sent to all four ports (and then the device at the end of each port ignores the data if it's not for it). This would be different from a switch, which has the brains to send the data packets only to the proper port.
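Either way, the practical upshot of hanging devices off a hub is shared upstream bandwidth, while separate native ports each get their own link. A back-of-the-envelope illustration (assumed round numbers only):

```cpp
#include <cstdio>

// Rough numbers: SuperSpeed signals at 5 Gbps per host port, which is about
// 500 MB/s after 8b/10b line coding (less again after protocol overhead).
// Devices behind one hub share that single upstream link.
int main() {
    const double usablePerPortMBs = 500.0;   // approximate, pre-protocol-overhead
    for (int devices = 1; devices <= 4; ++devices) {
        std::printf("%d active device(s) behind one hub: ~%.0f MB/s each\n",
                    devices, usablePerPortMBs / devices);
    }
    return 0;
}
```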
The DZ77GA-70K makes the DX79SI look like a bad joke (which it really is).
LGA 2011 turned into an epic fail and the DZ77GA-70K is the proof. I have a 1366 system and I have zero will to get an LGA 2011 system thanks to the crappy tech decisions somebody made there. Six cores is the top? Again? An old 32nm process? Really? A chipset with nothing new inside but troubles? Since 1366 something strange has been going on and Intel fails to see it. The end user can get better manufacturing tech for the video card than for the CPU. First it was a 45nm CPU with a 40nm GPU, and now a 28nm GPU and a 32nm CPU, and Intel calls that high end? Really?
Everything that DX79SI should have been you can find inside DZ77GA-70K.
1. The DZ77GA-70K has a high quality TI 1394 FireWire controller, while the DX79SI has a cheap VIA one that no audio pro would ever want to deal with.
2. The DZ77GA-70K has the next best SATA controller after Intel's, from Marvell, to get 2 more SATA 6 Gbps ports and eSATA, versus zero extra SATA and, hard to believe, no eSATA at all on the DX79SI.
3. Intel USB 3.0 vs. crappy Renesas.
DZ77GA-70K has everything to impress, including the two Intel LANs vs the Realtek that everyone else is using.
The DZ77GA-70K fails in only one thing - it had to be LGA 2011, not 1155, which will be just 4 cores forever and has zero future.
"The other long awaited addition found on Panther Point is the native implementation of USB 3.0 that comes directly from the chipset. The chipset will only provide two USB 3.0 ports." Given this statement on the first page, why does the chart indicate that the Panther Point chipsets provide 4 USB 3.0 ports?
"ASUS have a lot to live up to with its Ivy Bridge Pro board."
You do realize that you're mixing a plural verb and singular pronoun for the same damn thing...Asus in this case. First, you use a plural verb talking about Asus and then use a singular pronoun for Asus in the same sentence. You cannot do both; well, I guess you can, but you show you have no clue about English grammar and look like you dropped out of third grade.
Get a copy editor! How can anyone take this site as professional when the writing borders on illiterate?
You do realize that... you're the illiterate one, don't you?
"ASUS have" is perfectly legitimate English, and is in fact what you will hear in England itself. "ASUS" is a company of people and can be taken as singular or plural.
For me, the AT editors just made major points right in this set of comments by correcting another ignoramus, who was misusing "begs the question".
no, there are errors STILL all over the place in this article... it's horrid... when your site is 99% words, please make them as easy as possible to comprehend...
PLEASE LEARN TO WRITE LIKE ANAND, THX!
Anand, for the love of god, pay a little more to hire a little more education (SEE WHAT I DID THAR??)
I do not understand why non-Express PCI slots are still on boards. The only one to see the light is MSI, and if they had a bit better performance I would switch from ASUS for my next mobo in a heartbeat. Also, why do these boards have a VGA connector (D-sub)? Intel HD graphics can only support 2 displays max, and if you have more then you should get a dedicated graphics card anyway, and probably already have one. I don't see the point.
Another thing, when will OEMs start putting the USB hub at the bottom of the board facing down and not away from the board. If you have multiple cards on the board then you can get really cramped really fast when you are trying to use those.
I'm sorely tempted to just wait another year or so till there is a board with these features and over 50% SATA 6G/s, but we'll see if that even comes out in that short of time.
1) Some customers are asking for them. Customer demand was why a few boards started sporting floppy controllers again last summer. Legacy PCI demand is almost certainly much higher.
2) Intel doesn't have enough PCIe lanes on the southbridge for well featured ATX boards.
2.1) This means a bridge chip of some sort.
2.2) PCIe devices are used to being able to count on the full 250/500MBps bandwidth.
2.3) Legacy PCI devices are used to sharing their bandwidth (133MBps).
2.4) 2.2 and 2.3 combined mean there's less risk of compatibility problems in filling out a few slots with legacy PCI slots.
This is probably going to remain an issue until either:
A) Intel increases the number of lanes they offer on their boards by a half dozen or so (bridges are also used for on board devices).
B) Intel integrates a lot more stuff into the southbridge so it doesn't need PCIe lanes: more USB 3, SATA 6 Gbps, audio, Ethernet.
C) A new version of PCIe allows sizes other than powers of two. Fitting everything on would be much easier if a board maker could fall back on just offering 13/1x or 7/7x on the main gfx slots.
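To put rough numbers on point 2, here is an illustrative lane budget for a Z77-class PCH with 8 PCIe 2.0 lanes (the device mix is a hypothetical example, not any specific board's layout):

```cpp
#include <cstdio>

int main() {
    const int pchLanes = 8;                  // PCIe 2.0 lanes on a Z77-class PCH

    // Typical onboard consumers, one lane each (illustrative counts only).
    const int extraUsb3Controller = 1;       // third-party USB 3.0 controller
    const int extraSataController = 1;       // e.g. Marvell 6 Gbps / eSATA
    const int gigabitNic          = 1;       // PCIe-attached Ethernet
    const int firewire            = 1;       // if PCIe-attached (many are PCI)
    const int wifiMiniPcie        = 1;

    const int onboard = extraUsb3Controller + extraSataController +
                        gigabitNic + firewire + wifiMiniPcie;

    std::printf("Lanes left for x1/x4 slots: %d of %d\n", pchLanes - onboard, pchLanes);
    // Spending one more lane on a PCIe-to-PCI bridge fans out to several legacy
    // PCI slots, since those devices already expect to share 133 MB/s.
    return 0;
}
```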
Almost all of the current motherboards are using PCI connected Firewire chips. Even the ones that have PCIe connected firewire use TI chips, which in turn are still PCI firewire, with an internal PCI to PCIe translator.
After some research the only native PCIe firewire controller I've found is from LSI. Does anyone else know of another solution? This is an interesting "dirty secret" that I never really paid any attention to.
It doesn't matter; FireWire maxes out at 800 Mbps (about 100 MB/s), which the PCI standard can easily handle at 133 MB/s.
Of course, shared bandwidth is an issue, but reworking designs / buying the PCIe design raises issues of cost and reduces the number of PCIe lanes available for other devices that can better use the bandwidth.
Seeing the layer at which Virtu is running and reading about what it claims to be capable of, is there any reason this could not cure, once and for all, the micro-stutter associated with multiple video cards?
It's good to know my Z68 (Asus Maximus IV Extreme) still hasn't been bested. I see nothing from any of these boards that beats what I've been running for some time now. Plus I have more USB 3.0 ports (12 to be exact).
For my own PC, I'm interested in seeing what mATX sized Z77 boards there are. Often they have weird expansion slot combinations or positions. Looking at what Z68 boards are out now, it surprises me how many still have PCI slots. I would've thought they'd be replaced by PCIe 1x's by now.
Also, anything about UEFI bios implementations? Was the promise of fast booting UEFI bioses ever fulfilled?
I'm planning on making my next build a mATX one, and would be interested in seeing some reviews of high quality boards with lots of features. The Gigabyte mentioned in this article sadly seems to be a cheaper model.
I don't know if this helps or if you want to put up an update, but Newegg has the boards out and prices on them (I am not sure if they sell them at MSRP or not). Just a thought if you want to update those "TBC" prices.
I am leaving my AMD FX-60, 3 GB DDR, Asus 939 Deluxe, Win XP, Raptor 150 HDD for Ivy Bridge pastures!!!
I am all for ASUS's 16+4 power and multiple USB 2.0 and 3.0 ports on the back panel. I also like the multiple 4-pin fan plug-ins, MemOK!, LED problem indicator, switches, 4 SATA 6 Gbps connectors and heat pipes connecting the aluminum fins.
What I want to see is x16/x16, not x8/x8, with dual video cards on a Z77 board. ASUS, don't skimp for a measly $30! I hate cheap companies, and don't make me think you are just being cheap!!!
Hey, all you mobo companies: don't get cheap with the Z77 boards and leave out x16/x16 on the PCIe 3.0 slots!!!! Come on, add what you need to and pass the $30 on to me!!!!
"ASUS as a direct standard are now placing Intel NICs on all their channel motherboards. This is a result of a significant number of their user base requesting them over the Realtek solutions." Um... ASUS P8Z77-V LX has Realtek! and...ASUS P8H77-M PRO has Realtek! There are more.
Dual E5-2690 - so far the best I have got, though I burned a lot of $$$ to get this right. My last build was with an i7 990X; I got itchy in Oct 2011 with some minor issue and decided to change my PC. Got an i7 2700K - it did not meet my expectations. Built an i7 3960X - it still failed many of my requirements, and I regret my PC change from the 990X. Finally, with all my pain and wasted $$, I got my new build that so far performs better than my 990X build. My advice: do not get carried away by fancy new i7 releases; they are just a little benefit over the P4 and a waste of time. I was shocked that they released the P4 with the 1155 socket; it had the same performance as the 2700K, not much change, and in fact it was cheaper too.
I am not an expert, just an average system builder, but my advice from the bottom of my heart is to just go for an E5 build if you are really looking for performance and some benefits. You may spend some extra $$ on the motherboard, CPU, case, etc., but it is worth it in the long run and works out cheaper than any fancy high-end gaming rig with water cooling etc. - that is all just shit tech advice. You never get Ferrari performance from a modded Toyota.
With the third PCIe slot on the Z77 boards I have come across, almost all manufacturers say "1 x PCI Express 2.0 x16 (x4 mode), only available if a Gen 3 CPU is used". Does this mean that the slot is PCIe 2.0 at x16 but works in PCIe 3.0 x4 mode if an IVB processor is installed and the other two PCIe 3.0 slots are populated, giving x8/x4/x4 speeds with PCIe 3.0 compliant cards? Also, what will happen if I put PCIe 2.0 GPUs in the first two PCIe 3.0 x16 slots and a PCIe 2.0 compliant RAID card (RR2720SGL) in the third slot? Will it give me an effective PCIe 2.0 bandwidth of x16/x8/x8 or not? Damn, these are so confusing!! I wish Anandtech would do an extensive review on just the PCIe lanes covering all sorts of scenarios, and I think NOW would be the best time to do this, as the transition from PCIe 2.0 to PCIe 3.0 will happen slowly (maybe years), so the majority of end-users will still be keeping their PCIe 2.0 compliant devices!!
Any plans on putting up some detailed reviews of these units? I'm especially interested in the Gigabyte GA-Z77M-D3H, since it seems to be the cheapest Z77 board I've found so far.
Hi Anand, I found your short review of the Asus P8Z77-V Deluxe mobo very interesting, especially the small details which other reviews don't bother to explain to laymen such as myself.
Anyway, what I wanted to know more about was what you said concerning the PLX. Because the Deluxe is using an older PLX chip, what exactly does this mean?
What does that do exactly? Does it mean it would be possible to use a PCI card in a PCI Express slot? Is that what the bridge thing does :d ?
The reason I ask is because I'm stuck between keeping my current Deluxe model, or trading it in for a Premium. I'm a bit stuck deciding what to do at this point, so Anand, I could use your sound advice please :{
Does Virtu MVP really help with monitors that are GeForce 3D Vision ready? I have such a monitor, and it "appears" not to have much of an effect on performance. If, however, I set the refresh rate to 60 Hz, I think there is an increase in performance, as my frame rates (70-85 fps) are above the refresh rate of the monitor. But my frame rates remain unchanged (70-85) when I increase the refresh rate to 120 Hz. This statement really confuses me; at the least it should make mention of monitors that are 3D Vision ready: "If your setup (screen resolution and graphics settings) perform better than your refresh rate of your monitor (essentially 60 FPS for most people). If you have less than this, then you will probably not see any benefit." Any comments?
LancerVI - Sunday, April 8, 2012 - link
They've yet to fail me. I think I may take the plunge with Ivy Bridge. My 920 X58 build has served me well, but I have the itch and I've avoided scratching it since late 2008. I don't think I can hold off any longer!!!I feel the need for a new build.....NOW!!!
TrackSmart - Sunday, April 8, 2012 - link
Ha ha. We all get the rational or irrational urge to upgrade, whether we 'need' it or not. I would never stop someone from enjoying a new build. But that being said, the core i7 920 still holds its own pretty well. However, I suppose quicksync alone could be worth the upgrade if it is a feature you use heavily. That is one area where the performance gains are just phenomenal.ImSpartacus - Sunday, April 8, 2012 - link
Shoot, I'm still rolling with an E8400. You're 920 has some legs.Unless you have a killer app in Ivy Bridge, just sit on your Nahalem machine.
LancerVI - Sunday, April 8, 2012 - link
Believe me fellas. This is all irrational and I'm not ashamed to admit it!mgl888 - Wednesday, April 11, 2012 - link
Like :)I'm torn between waiting for Haswell next or upgrading to Ivy Bridge now.
I'm on a E7200. Hahaha
Artifex28 - Monday, April 16, 2012 - link
...and I am burning this E6750. :DI give up. Time to upgrade. :)
prophet001 - Monday, April 9, 2012 - link
Hmm...Rockin the Core 2 on the 975x with an 8800 gtx here :D
LancerVI - Monday, April 9, 2012 - link
That's awesome! That's a great setup! The 8800GTX is on par, in my mind, with the 9700 Pro of yore.jbuiltman - Tuesday, April 10, 2012 - link
920 X58 being slow???? I have an AMD FX-60 Dual core with 2GB of DDR ram....That is slow.... :)LancerVI - Thursday, April 12, 2012 - link
Who said anything about being slow?? All I'm saying is I haven't built a new machine since 2008 and I have the itch.I realize it's a slight up grade or maybe even considered a side-grade, but it's an itch. I'll probably wait unitl Ivy Bridge-E and then see.
t4murphy - Wednesday, April 18, 2012 - link
That was a good cpu for me before I went to the 920. I still ran MS FS9 with good results along with my GTX 8800. Im not laughing:)rocknrob - Thursday, April 12, 2012 - link
I had a 920 X58 setup and decided to rebuild to an i7 2700K Z68. So far I've regretted the whole thing. I'm going to pick up one of these ASUS Z77 Deluxe boards but I don't think it's going to make a big difference.The X58's/i7's were rock solid and performance monsters. I honestly see very little in terms of performance gains. Supposedly Ive Bridge processors are only going to give you about 15% to 20% increase in performance. If that's the case I think I'm going to stick with the 2700K until the next architecture change.
457R4LDR34DKN07 - Sunday, April 8, 2012 - link
you need to get a asus P8Z77-I DELUXE review.Mitxplease - Sunday, April 8, 2012 - link
Hells yes.GreenEnergy - Sunday, April 8, 2012 - link
I only found one (tiny) review sofar:http://vr-zone.com/articles/first-look-asus-p8z77-...
457R4LDR34DKN07 - Wednesday, April 11, 2012 - link
I actually would like a comparison with ZOTAC Z77-ITX WiFi, as I'm leaning toward the zotac mobo due to the msata compatibility by removing the wifi/bt module.ViperV990 - Sunday, April 8, 2012 - link
Does the Virtu MVP stuff work with an Eyefinity or NV Surround setup?martinw89 - Sunday, April 8, 2012 - link
I'm very curious about this too. On page two, Ian says "Within the hybrid system, the integrated GPU takes over two of the tasks for the GPU – snooping for required frames, and display output. This requires a system to run in i-Mode, where the display is connected to the integrated GPU."But on page 3, Lucid's own slide makes it sound like these new features are monitor configuration independent: http://images.anandtech.com/doci/5728/Lucid1.png
This is a super interesting feature, and I hope it performs as well in reality as it sounds like on paper. And with a triple screen setup it would be bliss.
jimnicoloff - Sunday, April 8, 2012 - link
I have a three monitor setup working just fine on a Z68 board with all monitors attached to a 6970. Virtu gives the option of which you want to be primary - the video card or the integrated graphics. So for me (with the video card primary) this works kind of backwards from a power saving point, but good for performance since it still allows for quick sync video transcoding, etc.I know this doesn't adress the new Virtu MVP, but I can't see them taking a step backwards when something similar works on the old version. Especially since if you are running in eyefinity mode it is just seen by the system as one big wide monitor and not three separate screens that each get their own render. Hopefully they can pull it off because I like my three screen setup and would hate to lose features because of that.
Zoomer - Tuesday, April 10, 2012 - link
I'm leading no:"This requires a system to run in <b>i-Mode, where the display is connected to the integrated GPU.</b>"
Iketh - Sunday, April 8, 2012 - link
I'm curious how Skyrim behaves with this Lucid technology, since physics and framerate are linked for whatever reason... (if you disable Skyrim's 60 fps cap and point the camera in a direction that gives you 150+ FPS for example, everything that is moveable nearby starts to rattle and fall off shelves...)Xale - Sunday, April 8, 2012 - link
Games with their own framerate limits should not be affected, as long as that limit is preserved. They already simply 'pause' internally if the machine is too fast. It might go absolutely crazy though if you do forcefully disable that mechanism.Concillian - Sunday, April 8, 2012 - link
Am I reading that right? Z75 offers most of what even enthusiasts would want?So... why are there a crapton of Z77 boards in here and no Z75s?
Z75 supports 2 way crossfire / SLI, overclocking, 6Gbps SATA, native USB3.0... these are the features all but a tiny handful of users should be interested in.
By all rights Z75 should be the definitive chipset for the average enthusiast. Unless I'm missing something major, I hope there is significant attention paid to the Z75 chipset in reviews, because I'm failing to see why any but the most extreme users and those with money to burn would choose the Z77.
MrSpadge - Sunday, April 8, 2012 - link
Actually the ones with less money to burn might choose Z77 over Z75 to avoid investing in a huge SSD. The difference between chipset prices are usually small (what the motherboard manufacturer makes out of this is another question entirely).GreenEnergy - Sunday, April 8, 2012 - link
I find SSD caching to be some desperate dinosaur attempt. Mainly fueled by HD makers. Hybrid HDs are in the same basket. And yes, pick one, Z77 or Z75. The other one makes no sense.Z77, Z75 and H77 chipsets are priced at US$48, US$40 and US$43.
If SRT basicly cost 8$. Then its time for it to go away as the stillborn tech it is.
Zoomer - Tuesday, April 10, 2012 - link
The difference is mainly software and development, and should go away. SSD caching is a great idea that, imo, should optimally be on the filesystem level, not on the block level.*Looks at OS / filesys developers* (ZFS has some of it)
And oh, I expect lower end derivatives to come out eventually.
Nje - Sunday, April 8, 2012 - link
I was really excited about this motherboard - was kind of disappointed to see it is not part of this roundup. But I guess there will be other tests. Thanks for the preview - I look forward to the benchmarks, particularly if asus' memory technology works well, and if memory bandwidth plays more of a role on Ivy Bridge.Articuno - Sunday, April 8, 2012 - link
Things like overclocking being restricted to specific chipsets is really disappointing. AM3+ boards are generally cheaper than equivalent Intel boards and they don't lock features like this.MrSpadge - Sunday, April 8, 2012 - link
I suppose it's being done to make power delivery cheaper on these boards. Personally I don't like it either.GreenEnergy - Sunday, April 8, 2012 - link
I think you should visit Newegg. LGA1155 and AM3+ boards are just as cheap. And CPU wise...its just a disaster for AMD.tyger11 - Sunday, April 8, 2012 - link
The article says only 2 USB 3.0 ports, but the table indicates 4. Which is correct?repoman27 - Sunday, April 8, 2012 - link
I didn't see your comment and posed a similar question, but I'm fairly certain the answer is that there are actually 4 SuperSpeed ports provided by the 7-series chipsets. If you look at the block diagram shown for the Intel DZ77GA-70K motherboard it looks like they clearly used two 3-port USB 3.0 hub chips to arrive at a total of 8 USB 3 ports. Which also brings up the point that by leveraging the integrated USB 3.0 capabilities, motherboard manufacturers can add as many USB 3.0 ports as they like by using far less expensive hub chips instead of full blown controllers which also require a PCIe lane apiece.And speaking of PCIe, I also wonder about where Ian says, "These are known as PCIe 3.0 PLX PXE chips..." I'm guessing that he's referring to PLX's PEX 874x Gen 3 PCIe switches, but it's stated a bit oddly.
landerf - Sunday, April 8, 2012 - link
Is the wifi card on the deluxe going to be accessible? Be nice to get a killer 1102 in there.johnpombrio - Sunday, April 8, 2012 - link
Yeah, the WiFi connection(s) on the ASUS Pro and Deluxe depends on a small WiFi module that plugs into a particular slot. These will be interchangeable AND I have seen ASUS showing off a WiFi/ 60GB SSD drive going into that slot. The Pro looks like just a receiver while the Deluxe has a WiFi router built in but I am REALLY guessing on this. I think I will go with the Deluxe just because it has a couple of features I like and the WiFi router combo will be just gravy.ASUSTechMKT - Monday, April 9, 2012 - link
Our MCombo Card ( which is on the Maximus V Gene and upcoming Formula should not be confused with the solution on our channel boards ( Standard -V, Pro or Deluxe ). The MCombo will allow you to install any MiniPCI-E or MSATA cards into their corresponding slots.ASUSTechMKT - Monday, April 9, 2012 - link
The module which connects to the back I/O panel can be opened. While not promoted as being DIY there is nothing stopping you from installing your own mini PCI-E wireless controller.AlexIsAlex - Sunday, April 8, 2012 - link
I know I've asked for this before, but if you're going to do a big roundup of all these motherboards (which I'm looking forward to, as I plan to upgrade to Ivy Bridge on release), then please please test the boot / POST times to compare between the boards!Just the time it takes from hitting the power switch to when it starts to actually run the bootloader off disk. Or until it displays the "please insert boot media" - the actual time the bios contributes to the total boot time. This is something that can really differentiate between different bios implementations and would be really useful to know when choosing.
I know having RAID and on board devices turned on make a big difference, so a baseline of everything non-essential turned off, or just those devices that are present on all boards would make sense.
Nihility - Sunday, April 8, 2012 - link
Seriously, please test POST times!Especially with the 6 Gbps Marvel controllers. Those damn controllers can take over 10 seconds to boot. That can easily be as much as the entire system.
risa2000 - Sunday, April 8, 2012 - link
I would be quite interested in this one.eXces - Sunday, April 8, 2012 - link
would be very pleased for an ITX review! Especially Asus P8Z77-I DeluxeThx
GreenEnergy - Sunday, April 8, 2012 - link
Indeed. Personally im looking at the Intel DH77DF:http://www.intel.com/content/dam/www/public/us/en/...
I dont see the need for anything bigger than mATX anymore. And for most mITX will do everything and abit more.
Personally I got a hint from another person. So I will be looking at the DH77DF, 2x8GB, GTX 680 and a i5 3570 and put it into a Silverstone SG08.
hybrid2d4x4 - Monday, April 9, 2012 - link
Agreed. mATX or smaller for me plz!Paapaa125 - Sunday, April 8, 2012 - link
What is the actual significant difference between Asus P8Z77-V and P8Z77-V Pro? The only things I noticed were a few more phases in power supply (not significant).Byte - Sunday, April 8, 2012 - link
The only diff I see is the 12 vs 8 power phases, extra USB3.0 header, and pictures look like it includes a usb and esata pcie 1x card.ASUSTechMKT - Monday, April 9, 2012 - link
Pro offer higher quality back I/O bracing for the display connections,High phase count 12+4
Additional front USB3 front header
ESATA via bracket
Additional fan header
Otherwise all other key features and hardware implementation is the same.
repoman27 - Sunday, April 8, 2012 - link
Given this statement on the first page, why does the chart indicate that the Panther Point chipsets provide 4 USB 3.0 ports?Knifeshade - Sunday, April 8, 2012 - link
First page, the paragraphs detailing the various features. You spelled Cougar Point in place of where I think you mean to say Panther Point....DesktopMan - Sunday, April 8, 2012 - link
PCIe / PCI info in the last page table would be appreciated. Good overview. Quite curious to see if the memory stuff from Asus actually does anything.MrSpadge - Sunday, April 8, 2012 - link
All 6-series chipsets are said to support PCIe 3 in this table. Would that work with Ivi and a proper GPU? Would be the first time I heard about this. And since PCIe 3 is supposed to be a new feature of the 7-series I suppose it's a typo.GreenEnergy - Sunday, April 8, 2012 - link
Well, alot of 6 series boards supports PCIe 3.0 due to the controller is on the CPU (Ivy Bridge). Essentially the only reason besides BIOS for PCIe 3.0 support on the 6 series, is the added switches for splitting the PCIe x16 into two x8. Thats also why the basic 6 series boards got a bigger chance of PCIe 3.0 support.MrSpadge - Sunday, April 8, 2012 - link
"multiples of 15 Hz (15, 30, 35, 60, 75)" on page 2Ryan Smith - Sunday, April 8, 2012 - link
Got it. Thanks.MrSpadge - Sunday, April 8, 2012 - link
How does it work together with nVidias adaptive VSync, which debuted with the GTX680? And which, IMO, looked quite promising (lowering average power consumption a lot while gaming).Ryan Smith - Sunday, April 8, 2012 - link
You wouldn't use MVP with Adaptive V-Sync. It only makes sense to use MVP by itself.Thaine - Thursday, August 16, 2012 - link
"Predicting which frames (or rendering tasks) will never be shown and taking them out of the pipeline so the GPU can work on what is needed"Along the lines of power consumption (and the ever-important side effect, heat), I would be very interested in seeing an article on MVP vs power consumption/heat on a power-hungry dGPU (ala GTX 480).
primonatron - Sunday, April 8, 2012 - link
Not sure why most of these boards bother including the non-Intel USB 3.0 controllers any more.Not many people have several USB 3.0 devices, so they could be saving costs. Or instead bringing back some of the things the article mention were taken away - like DVI on an ASUS board.
GreenEnergy - Sunday, April 8, 2012 - link
I think board makers are getting desperate for ways to add value.Less and less components to change and diversify with.
And its not getting better in the future. Haswell on LGA1150 basicly removes the entire VRM part on the motherboard. No more 32 phases or whatever.
A 50$ board performs identical to a 200$ board if you dont overclock. Actually the 50$ board might be more reliable (less components to fail) and more power efficient.
Xale - Sunday, April 8, 2012 - link
50 dollar boards are probably going to at least boot faster. The driver load and POST times with many 3rd party controllers, SATA especially, is atrocious. Seen some boards that take nearly 3x longer if the external controllers aren't disabled.Metaluna - Sunday, April 8, 2012 - link
Mostly marketing I would guess. Intel only supports 4 3.0 ports, so if you add another chip you now have a checklist item to justify the premium price of your motherboard and possibly differentiate it from lower-end versions in the same product line.Second, ASUS owns ASMedia, who makes a lot of the (mediocre, IMHO) USB 3 and Bluetooth controllers on their boards, so there's probably an incentive to use them in their own products to help prop up their production volumes.
Third, a lot of the third party controllers have special modes to support things like high-current charging for iPads or charging when the computer is turned off. Does anyone know if it's possible to add these features to Intel-based ports? If not, that would be an incentive to include a secondary chip.
GreenEnergy - Sunday, April 8, 2012 - link
I saw that the DH77DF I'm looking at myself has high-current charging. So that doesn't seem to be related to the controller at all.
ASUSTechMKT - Monday, April 9, 2012 - link
ASMedia does not produce or design a Bluetooth controller. Additionally you are correct in that add-in controllers do offer support for specialized modes of operation and functionality (like charging).
ASUSTechMKT - Monday, April 9, 2012 - link
The Intel controller under Windows 7 does not offer operation in UASP mode. With UASP the ASMedia add-in controller can provide superior performance, especially at higher queue depths. The USB 3 Boost package is offered for both the Intel controller and the ASMedia one (the Intel controller supports changing from BOT mode to SCSI (Turbo) mode, and the ASMedia controller supports BOT, SCSI (Turbo) as well as UASP).
MrMaestro - Sunday, April 8, 2012 - link
I didn't know a motherboard could be kitsch until I saw the ECS.
XSCounter - Sunday, April 8, 2012 - link
P8Z77-M is the one I am waiting for for my Micro ATX build. It has none of those extra useless controllers, which are all going to be inferior to the native Intel ones, so I won't be overpaying for anything. However, I am planning to overclock, so I wish to know how it will perform in this regard! Hoping to see these ASUS Micro ATX boards on Anandtech asap :)
http://uk.asus.com/Motherboards/Intel_Socket_1155/...
ASUSTechMKT - Monday, April 9, 2012 - link
It features a Digi+ VRM with robust VRM components. Overall, in our testing the -M Pro provides comparable scaling to that of our standard board. You will see this information from ASUS soon. Solid board!
Lonyo - Sunday, April 8, 2012 - link
What happened to the boards with 10 SATA ports? There were some from tradeshows earlier in the year, but none listed here (apart from the one with 8 + 2 eSATA).
Obviously this isn't a comprehensive lineup, but most of these seem to be fairly high end boards, yet no 10 SATA ports.
ASUSTechMKT - Monday, April 9, 2012 - link
Very few users use this many ports; it makes more sense to prioritize expansion lane support for add-ons that will actually be used. Keep in mind that even with a board with that number of ports you may not actually have that much usable bandwidth. Should you really need that many, you should consider an add-on controller card. Things like SATA ports, while important, are not the only way to distinguish true attention to design in higher-end boards.
aranyagag - Sunday, April 8, 2012 - link
"four SATA 6 Gbps also from the PCH," shouldn't that be SATA 3 Gbps in the MSI GD65 board?
ConVuzius - Sunday, April 8, 2012 - link
The article says Virtu MVP has an i-Mode and a d-Mode, but which one is the better one? I kinda didn't get the difference, except for where you connect your display.
Ryan Smith - Sunday, April 8, 2012 - link
For regular Virtu, d-Mode is the mode that was better from a performance perspective, since at times performance under i-Mode dipped due to having to send frames out to the iGPU.
For MVP, there's not going to be all that much of a difference. Regardless of the mode used Virtual V-Sync needs frames passed from the dGPU to the iGPU. The only difference is which display output is used, since a copy of the frame is on both GPUs (i.e. while you have to send frames to the iGPU, you don't have to send them back even in d-Mode).
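Put roughly in pseudocode terms (my own simplification of the above, not anything from Lucid's software - only the i-Mode/d-Mode split is real):

```python
# Simplified model of where a frame travels in each Virtu MVP mode.
# My own illustration; only the i-Mode/d-Mode distinction comes from Lucid.
def frame_path(mode):
    # In both modes the dGPU renders, and a copy of the frame ends up on the
    # iGPU so Virtual V-Sync can pick which completed frame to show.
    path = ["dGPU renders frame", "frame copied to iGPU buffer"]
    if mode == "i-Mode":
        path.append("iGPU output drives the display")
    else:  # d-Mode
        path.append("dGPU output drives the display (no copy back needed)")
    return path

for mode in ("i-Mode", "d-Mode"):
    print(mode + ": " + " -> ".join(frame_path(mode)))
```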
Zoomer - Tuesday, April 10, 2012 - link
So when is AMD buying them out and integrating this in their gfx cards / platform?
neo55 - Sunday, April 8, 2012 - link
Will Z77 support two or three monitors simultaneously?
GreenEnergy - Sunday, April 8, 2012 - link
You mean from the Ivy Bridge IGP? Well, it depends. (It's the same, btw, for H77, Z75, Z77 etc.) You need to have 2 DisplayPort outputs if I remember correctly. So if you have, for example, DVI, HDMI and DP, then 2 screens only. If you have DVI, DP, DP then 3. Or if you daisy-chain the DP?
dubyadubya - Sunday, April 8, 2012 - link
Listing fan header count and layout is useless if you do not test the functionality. Do they report RPM? Can they control fan speed? Does the fan control work with SpeedFan etc. or only the software from the motherboard manufacturer? Several years ago many people including myself brought this to Anandtech's attention. At the time Anandtech stated they would include this missing info, which they did. Trouble is the info was only included in one or two reviews before it was dropped. When I purchase a new motherboard I want to know this info. I have spent many hours searching for this info for each new build, as most manufacturers do not give detailed info even in the motherboard manual.
IMO if a fan header exists it must have full functionality. If not, the header should not be on the board. Motherboard manufacturers need to pull their heads out of their asses. If Anandtech reported this info in reviews and gave negative reviews to boards with poor fan support, the manufacturers would get the hint.
Nje - Sunday, April 8, 2012 - link
Yeah I would love to know this, particularly if the fan headers can control 3 pin fans as well (ie vary the voltage).
ASUSTechMKT - Monday, April 9, 2012 - link
The primary CPU headers (CPU and CPU OPT) are fully controllable for 4-pin fans, as the majority of CPU coolers are PWM; the chassis headers (1-4) all allow for 3-pin and 4-pin fan control.
Zoomer - Tuesday, April 10, 2012 - link
Asus, can they be used with speedfan or is it BIOS/Asus software only?
ASUSTechMKT - Monday, April 9, 2012 - link
This has been noted in the last couple of reviews. Specifically for ASUS, we have spent considerable time putting quality fan controls on our boards: all headers allow for 3 presets as well as min and max rotation and target temperatures. In addition, with our software for this generation we offer full calibration for each header, which can sense the min and max rotation, provide this information, and sync this data to the profiles. Overall it is quite extensive; make sure to check out our upcoming videos, which show it in great depth.
Arbie - Monday, April 9, 2012 - link
@dubyadubya - Look on the bright side: at least one manufacturer (Asus) takes fans seriously, and at least one reviewer (Anandtech) is even mentioning the fact.
I have the same wants as you do, and have made the same requests, but be reasonable. These aren't full motherboard reviews! They don't even have the boards operable, much less any hands-on time with BIOS details.
And when they do have all that, a higher priority will be PCIe lanes and how many graphics cards can be stuffed in. That's because you can't run any modern games with only one card.
But then they might talk a little more about the fan controls... Let's hope. Again - be glad that even one vendor is paying attention and has included some controls to be talked about.
Cavalcade - Sunday, April 8, 2012 - link
With a slightly more than passing knowledge of rendering, and having spent a fair amount of time handling input in a game engine, I'm curious as to how Lucid came to the responsiveness numbers in the chart on page 3. The concept seems valid at first glance, but the numbers strike me as pure marketing fodder as opposed to solid and testable results.
Also, this sort of technology seems far better suited to residing in the driver layer as opposed to yet another piece of middleware for PC gamers to contend with. We're already effectively blocked from the hardware, and forced to go through third-party graphics APIs (Direct3D/OpenGL).
Maybe it's a "you have to see it (feel it)" kind of thing, but from here you can color me skeptical.
Iketh - Sunday, April 8, 2012 - link
"handling input in a game engine" means nothing here. What matters is when your input is reflected in a rendered image and displayed on your monitor. That involves the entire package. Lucid basically prevents GPUs from rendering an image that won't get displayed in its entirety, allowing the GPU to begin work on the next image, effectively narrowing the gap from your input to the screen.
extide - Tuesday, April 10, 2012 - link
I am sure he knows that. He was just giving a bit of detail as to his exact experience, which I would bet is far more than most people here have. You have to be very aware of things such as latency and delay when you are handling input in a game engine. I agree with the OP and am skeptical also. The bit that makes me most curious is the transfer of the fully rendered screens from one framebuffer to the other; that has to add some latency, and probably enough to make the entire process worthless. It's not like Lucid has a good track record on stuff like this - I mean, we all know how their cross-platform SLI/CF took off and worked so well....
Iketh - Wednesday, April 11, 2012 - link
Why would you need to physically copy framebuffers?? I'm sure pointers are used...
I have no idea if this has tangible benefits, but theoretically it does. None of us know until we can test it. I'm more inclined to discredit the people already discrediting Lucid, despite Lucid's track record. That's what you call hating.
Iketh - Wednesday, April 11, 2012 - link
excuse me, you're right... it has to copy the frame from gpu to igpu... what kind of crap tech is this???
ssj3gohan - Sunday, April 8, 2012 - link
Personally, I'm absolutely uninterested in anything 'high-performance', especially fancy gaming stuff. Not to say that I don't think that's a valid market niche, but I see other possibilities.
I'm really looking forward to new thin ITX boards with a built-in DC-DC converter (i.e. running directly off a 19V brick), and I am especially wondering whether Intel (or Zotac, possibly) is going to build a golden board this time around. Last time, they made the DH61AG, which was a nice board, but it lacked an mSATA port (kind of a must for a truly thin computer) and 'only' had an H61 chipset.
With H77, I expect it will be possible to make a thin ITX board with USB 3.0 and a fast on-board SSD option, combining this with an HD 4000 equipped processor would enable users to build a truly thin (sub-4 inch thick) computer that fits on the back of their monitor but still provides ample computing power.
Senti - Sunday, April 8, 2012 - link
It sounds to me like Lucid Virtual V-Sync is just glorified triple buffering with a lot of marketing and a bit of overhead for transferring frames and powering two video cards instead of one. I'm very skeptical about HyperFormance too.
Cavalcade - Sunday, April 8, 2012 - link
It seems a bit more involved than triple buffering, more like having 2 buffers where the back buffer is not flipped until it is fully rendered. Seems like this would lead to more stuttering, and given the number of times they asked Mr. Cutress to reiterate that this would be a bug, it may be something they are seriously concerned with.
Thinking about it a little more, I'm not sure what advantages this system would have over a system with separated input and rendering modules. The academic side of me is extremely interested and hopeful, but the practical developer side of me is going to require a lot more to be brought on board.
Iketh - Sunday, April 8, 2012 - link
Separate input and rendering modules, as I stated in an earlier post, mean nothing. They allow for a responsive mouse cursor, for instance. But when you actually provide input that alters the RENDERED WORLD, you have to wait for that input to be reflected on screen. It doesn't matter how perfectly the software solution is architected, you still have to wait for the rendering of the image after your input.
Lucid simply prevents renders that never get displayed in their entirety, allowing the GPU to work on the NEXT image, shortening the time from your input to the screen.
Cavalcade - Monday, April 9, 2012 - link
The comment was to indicate that while I have experience writing input systems, rendering is still relatively new to me; simply a qualifier of my impression and opinion.
The way I am understanding Lucid, it is attempting to preempt displaying a frame that is not fully rendered in time for the next screen refresh. By presenting a virtual interface to both the GPU and the application, the application believes the frame has been rendered (displaying user input at that time) and proceeds to render the next frame. Thinking more about it, would this reduce the time interval between input reflected in frame one (which was preempted) and frame two (which will be displayed), so that rather than having input sampled at a fixed rate (say 60Hz) and displayed at a variable rate, input would be more closely tied to the frame for which it is intended?
My interest is rising, but it still seems like a rather complex solution to a problem that I either haven't experienced, or which doesn't really bother me.
Iketh - Tuesday, April 10, 2012 - link
It's not preemptively doing anything, except determining if a frame added to the queue will finish rendering in time... if not, it >>>>DOESN'T LET THE GPU RENDER IT<<<< and places the previously rendered image in its place, allowing the GPU to immediately begin work on the FOLLOWING frame... that's it... it cuts unneeded frames from the queue.
As for your input sampling rate question, that's entirely based on how the application is coded to handle input; Lucid has nothing to do with this...
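Roughly, the decision I'm describing looks like this (made-up pseudocode of my own, not anything from Lucid's driver - their actual render-time prediction is the part they keep to themselves):

```python
# Made-up illustration of the "skip frames that won't finish in time" idea.
# Nothing here is Lucid code; the render-time prediction is a naive stand-in.

def predict_render_ms(recent_frame_times):
    # naive guess: the next frame will cost about what recent frames did
    return sum(recent_frame_times) / len(recent_frame_times)

def handle_present(recent_frame_times, ms_until_refresh, last_shown_frame, new_frame):
    """Return (frame to display, whether the GPU actually renders new_frame)."""
    if predict_render_ms(recent_frame_times) > ms_until_refresh:
        # This render would miss the refresh and never be shown in its entirety,
        # so skip it: show the previous frame again and let the GPU start on
        # the following frame right away.
        return last_shown_frame, False
    return new_frame, True

# Example: frames have been taking ~20 ms but only 10 ms remain before the refresh
frame, rendered = handle_present([19.5, 20.1, 20.3], 10.0, "frame N-1", "frame N")
print(frame, rendered)  # -> frame N-1 False (the GPU skips ahead to the next frame)
```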
extide - Tuesday, April 10, 2012 - link
Do you even know what it means to preempt a frame? Cavalcade is describing the technology correctly. He is explaining pretty much the same thing as you are, but you just don't get it.
Also, separate input and rendering modules mean a lot. Typically a game engine will have a big loop that will check input, draw the frame, and restart (amongst other things of course), but splitting that into two independent loops is what he is talking about.
Iketh - Wednesday, April 11, 2012 - link
You really should look up "preemption." This is not what is happening... CLOSE, but not quite. Preemption is not the right word at all. This makes him incorrect and I kindly tried explaining. You are incorrect in backing him up and then accusing me of being inept. Guess what that makes you?
On top of that, he's also not talking about splitting input and rendering into two loops. Not even close. How did you come up with this idea? He's asking how the input polling is affected with this technology. It is not, and cannot be, unless polling is strictly tied to framerate.
I want to be clear that I'm not for this technology. I think it won't offer any tangible benefits, especially if you're already over 100 fps, and they want to power up a second GPU in the process... I'm just trying to help explain how it's supposed to work.
Iketh - Sunday, April 8, 2012 - link
"handling input in a game engine" means nothing here. What matters is when your input is reflected in a rendered image and displayed on your monitor. That involves the entire package. Lucid basically prevents GPUs from rendering an image that won't get displayed in its entirety, allowing the GPU to begin work on the next image, effectively narrowing the gap from your input to the screen.
Iketh - Sunday, April 8, 2012 - link
mistake post, sorry
Ryan Smith - Sunday, April 8, 2012 - link
The bug comment is in regards to HyperFormance. Virtual V-Sync is rather simple (it's just more buffers) and should not introduce rendering errors.
Ryan Smith - Sunday, April 8, 2012 - link
Virtual V-Sync is totally a glorified triple buffering, however this is a good thing.
http://images.anandtech.com/reviews/video/triplebu...
Triple buffering as we know it - with 2 back buffers and the ability to disregard a buffer if it's too old - doesn't exist in most DirectX games and can't be forced by the video card. Triple buffering as implemented for most DirectX games is a 3 buffer queue, which means every frame drawn is shown, and the 3rd buffer adds another frame of input lag.
On paper (note: I have yet to test this), Virtual V-Sync should behave exactly like triple buffering. The iGPU back buffer allows Lucid to accept a newer frame regardless of whether the existing frame has been used or not, as opposed to operating as a queue. This has the same outcome as triple buffering, primarily that the GPU never goes idle due to full buffers and there isn't an additional frame of input lag.
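As a rough illustration of the difference (simplified pseudocode of my own, not Lucid's or DirectX's actual behavior):

```python
from collections import deque

# "Render-ahead queue" (what most DirectX games call triple buffering):
# every completed frame is queued and will eventually be displayed, so a
# full queue stalls the GPU and adds a frame of input lag.
def present_to_queue(queue, new_frame, max_depth=3):
    if len(queue) >= max_depth:
        return False          # queue full - the GPU has to wait
    queue.append(new_frame)
    return True

# Classic triple buffering: two back buffers, and a newer completed frame
# simply replaces the stale one that hasn't been flipped yet, so the GPU
# never waits and the display always flips to the most recent frame.
def present_to_backbuffer(buffers, new_frame):
    buffers["pending"] = new_frame   # overwrite rather than queue
    return True                      # the GPU can always start the next frame

queue = deque()
for f in ["f1", "f2", "f3", "f4"]:
    print("queue accepts", f, present_to_queue(queue, f))

buffers = {"front": "f0", "pending": None}
for f in ["f1", "f2", "f3", "f4"]:
    present_to_backbuffer(buffers, f)
print("next flip shows", buffers["pending"])   # always the newest frame
```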
The overhead of course remains to be seen. Lucid seems confident, but this is what benchmarking is for. But should it work, I'd be more than happy to see the return of traditional triple buffering.
HyperFormance is another matter of course. Frame rendering time prediction is very hard. The potential for reduced input lag is clear, but this is something that we need to test.
DanNeely - Monday, April 9, 2012 - link
Lucid was very confident in their Hydra solution, but it never performed even close to SLI/CrossFire, and after much initial hype being echoed by the tech press it just disappeared. I'll believe they have something working well when I see it, but not before.
JNo - Monday, April 9, 2012 - link
This
vailr - Sunday, April 8, 2012 - link
Page 8 quote: "The VRM power delivery weighs in at 6 + 4 phase, which is by no means substantial (remember the ASRock Z77 Extreme4 was 8 + 4 and less expensive)."
Yet the "Conclusions" chart (page 14) shows the same board having 10 + 4 power.
Which is correct?
flensr - Sunday, April 8, 2012 - link
I'm bummed that ASUS didn't include mSATA connectors. Small mSATA SSDs would make for great cache or boot drives with no installation hassles and they're pretty cheap and available at the low capacities you'd want for a cache drive. That's a feature I will be looking for with my next mobo purchase.
Ditching USB 2.0 is also one of the next steps I'll be looking for. Not having to spend a second thinking about which port to plug something in to will be nice once USB 2.0 is finally laid to rest. Having only 4 USB 3.0 ports is stupidly low this long after the release of the standard, and it's hampering the development of USB 3.0 devices.
Finally, I've been repeatedly impressed by my Intel NICs over the last decade. They simply perform faster and more reliably than the other chips. I look for an Intel NIC when I shop for mobos.
DanNeely - Monday, April 9, 2012 - link
This is similar to what happened with the USB 1 -> 2 transition. The newer controller is significantly bigger (read: more expensive) and very few people have more than one or two devices using it per computer. I suspect the 8-series (Haswell) chipset will be mixed as well, simply because the total number of ports on the chipset is so much higher than it was a decade ago (whereas on older boards all but the lowest-end models added more USB from 3rd party controllers).
ASUSTechMKT - Monday, April 9, 2012 - link
mSATA currently has very little penetration in the market, and cost-wise you can purchase a larger cache SSD for the same or lower cost. We would prefer to focus on bringing implementations that offer immediate value to users.
As for the Intel NICs, all our launch boards from standard ATX and above feature Intel LAN; we have been leading in this regard for a couple of generations.
In regards to USB 3, we offer more than the standard number of ports on many boards, but keep in mind many users only have 1 USB 3 device.
jimnicoloff - Sunday, April 8, 2012 - link
Maybe I missed something from an earlier post, but could someone please tell me why these don't have Light Peak? Are they waiting to go optical and it is not ready yet? Having my USB 3 controlled by Intel instead of another chip is not enough to make me want to upgrade my Z68 board...
repoman27 - Sunday, April 8, 2012 - link
Thunderbolt controllers are relatively expensive ($20-30) and their value is fairly limited on a system using a full size ATX motherboard that has multiple PCIe slots. Including two digital display outputs, an x4 and a couple x1 PCIe slots on a motherboard provides essentially all the same functionality as Thunderbolt but at a way lower cost.
ASUSTechMKT - Monday, April 9, 2012 - link
Almost all of our boards feature a special TB header which allows you to easily equip our boards with a Thunderbolt add-on card, which we will release at the end of the month. Expect an approximate cost of $40. This card will connect to the TB header and install in an x4 slot, providing you with Thunderbolt should you want it. A great option for those who want it, and those who do not don't pay for it.
DanNeely - Tuesday, April 10, 2012 - link
Sounds like a reasonable choice for something that's still rather expensive and a very niche product.
Am I correct in thinking that the mobo header is to bring in the DisplayPort out channel without impacting bandwidth available for devices?
jimwatkins - Sunday, April 8, 2012 - link
I've made it this far on my venerable OC Q6600, but I can't wait any longer. I do wish they weren't so stingy on the 6 core as I could use it, but I just can't justify the price differential (w 3 kids that is.)
androticus - Sunday, April 8, 2012 - link
USB 3.0 descriptions and depictions are contradictory. The platform summary table says there are 4. The Intel diagram shows up to 4 on front and back (and the diagram is itself very confusing, because there are 4 USB 3.0 ports indicated on the chipset, and then they show 2 going to hubs, and 2 going directly to the jacks.) The text of the article says there can only be 2 USB 3.0 ports.
What is the correct answer?
mariush - Sunday, April 8, 2012 - link
I think there are 2 real ports (full-bandwidth ports) and the Intel solution uses 2 additional chips that act like "hubs", splitting each real port into 4 separate ports.
Basically the bandwidth of each real port gets split if there are several devices connected to the same hub.
A hub, as far as I know, means that whatever the hub receives gets sent to all four ports (and then the devices at the end of each port ignore the data if it's not for them).
This would be different from a switch, which has the brains to send the data packets only to the proper port.
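As a back-of-the-envelope example of what that sharing means (my own rough numbers, assuming about 500 MB/s of usable payload from one 5 Gbps port):

```python
# Rough arithmetic for devices sharing one upstream USB 3.0 port through a hub.
# 5 Gbps nominal; after 8b/10b encoding and protocol overhead, call it ~500 MB/s.
USABLE_MB_S = 500

def worst_case_share(active_devices):
    """Bandwidth each device gets if all of them transfer at the same time."""
    return USABLE_MB_S / active_devices

for n in (1, 2, 4):
    print(f"{n} active device(s): ~{worst_case_share(n):.0f} MB/s each")
```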
plamengv - Sunday, April 8, 2012 - link
DZ77GA-70K makes the DX79SI look like a bad joke (which it really is).
LGA 2011 turns into an epic fail and the DZ77GA-70K is the proof. I have a 1366 system and I have zero will to get an LGA 2011 system, thanks to the crappy tech decisions somebody made there. Six cores is the top? Again? An old 32nm process? Really? A chipset with nothing new inside but troubles? Since 1366 something strange has been going on and Intel fails to see it. The end user can get better manufacturing tech for the video card than for the CPU. First it was a 45nm CPU with a 40nm GPU, and now a 28nm GPU with a 32nm CPU, and Intel calls that high end? Really?
Everything that DX79SI should have been you can find inside DZ77GA-70K.
1. The DZ77GA-70K has a high quality TI 1394 FireWire controller, while the DX79SI has a cheap VIA one that no audio pro would ever want to deal with.
2. The DZ77GA-70K has the next best SATA controller after Intel's, from Marvell, adding 2 more SATA 6 Gbps ports and eSATA, versus zero extra SATA and, hard to believe, no eSATA at all on the DX79SI.
3. Intel USB 3.0 vs crappy Renesas.
DZ77GA-70K has everything to impress, including the two Intel LANs vs the Realtek that everyone else is using.
The DZ77GA-70K fails in only one thing - it should have been LGA 2011, not 1155, which will be just 4 cores forever and has zero future.
Wake up INTEL!
Springf - Sunday, April 8, 2012 - link
Quote: Native USB 3.0
The other long awaited addition found on Panther Point is the native implementation of USB 3.0 that comes directly from the chipset. The chipset will only provide two USB 3.0 ports,
------- end quote
I think Z77 natively supports 4 USB 3.0 ports.
http://www.intel.com/content/dam/www/public/us/en/...
C'DaleRider - Sunday, April 8, 2012 - link
When you write sentences like this:
"ASUS have a lot to live up to with its Ivy Bridge Pro board."
You do realize that you're mixing a plural verb and singular pronoun for the same damn thing...Asus in this case. First, you use a plural verb talking about Asus and then use a singular pronoun for Asus in the same sentence. You cannot do both; well, I guess you can, but you show you have no clue about English grammar and look like you dropped out of third grade.
Get a copy editor! How can anyone take this site as professional when the writing borders on illiterate?
sausagestrike - Sunday, April 8, 2012 - link
You should higher a sand removal specialist to take a look at you're twat.
Arbie - Monday, April 9, 2012 - link
@C'DaleRider - You do realize that... you're the illiterate one, don't you?
"ASUS have" is perfectly legitimate English, and is in fact what you will hear in England itself. "ASUS" is a company of people and can be taken as singular or plural.
For me, the AT editors just made major points right in this set of comments by correcting another ignoramus, who was misusing "begs the question".
Now, can we get back to fan headers?
Iketh - Tuesday, April 10, 2012 - link
no, there are errors STILL all over the place in this article... it's horrid... when your site is 99% words, please make them as easy as possible to comprehend...PLEASE LEARN TO WRITE LIKE ANAND, THX!
Anand, for the love of god, pay a little more to hire a little more education (SEE WHAT I DID THAR??)
nz_nails - Monday, April 9, 2012 - link
"Biostar have unfortunately put much effort in here, with only three to play with..."
Should be a "not" in there I suppose.
s1lencerman - Monday, April 9, 2012 - link
I do not understand why non-Express PCI slots are still on boards. The only one to see the light is MSI, and if they had a bit better performance I would switch from ASUS for my next mobo in a heartbeat. Also, why do these boards have a VGA connector (D-sub)? Intel HD graphics can only support 2 displays max, and if you have more than that you should get a dedicated graphics card anyway, and probably already have. I don't see the point.
Another thing: when will OEMs start putting the USB headers at the bottom of the board facing down and not away from the board? If you have multiple cards on the board then you can get really cramped really fast when you are trying to use those.
I'm sorely tempted to just wait another year or so till there is a board with these features and more than 50% of the SATA ports at 6 Gb/s, but we'll see if that even comes out in that short a time.
DanNeely - Monday, April 9, 2012 - link
1) Some customers are asking for them. Customer demand was why a few boards started sporting floppy controllers again last summer. Legacy PCI demand is almost certainly much higher.
2) Intel doesn't have enough PCIe lanes on the southbridge for well-featured ATX boards (a rough lane tally is sketched after this list).
2.1) This means a bridge chip of some sort.
2.2) PCIe devices are used to being able to count on the full 250/500MBps bandwidth.
2.3) Legacy PCI devices are used to sharing their bandwidth (133MBps).
2.4) 2.2 and 2.3 combined mean there's less risk of compatibility problems in filling out a few slots with legacy PCI slots.
This is probably going to remain an issue until either:
A) Intel increases the number of lanes they offer on their boards by a half dozen or so (bridges are also used for on board devices).
B) Intel integrates a lot more stuff into the southbridge so it doesn't need PCIe lanes: More USB3, Sata6GB, audio, ethernet.
C) A new version of PCIe allows sizes other than powers of two. Fitting everything on would be much easier if a board maker could fall back on just offering 13/1x or 7/7x on the main gfx slots.
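A rough tally for a hypothetical feature-heavy Z77 board shows the squeeze (the device list and lane counts below are illustrative, not any specific product):

```python
# Illustrative PCH lane budget for a hypothetical feature-heavy Z77 board.
# The Z77 PCH exposes 8 PCIe 2.0 lanes; the device list below is made up.
PCH_LANES = 8

onboard = {
    "extra USB 3.0 controller": 1,
    "extra SATA 6 Gbps controller": 1,
    "gigabit Ethernet": 1,
    "FireWire": 1,
    "three PCIe x1 slots": 3,
    "one PCIe x4 slot": 4,
}

wanted = sum(onboard.values())
print(f"lanes wanted: {wanted}, lanes available: {PCH_LANES}, shortfall: {wanted - PCH_LANES}")
# A shortfall like this is exactly why boards add bridge/switch chips or
# disable some slots and controllers when others are in use.
```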
jfelano - Monday, April 9, 2012 - link
Great to see ASRock finally stepping up with the warranty; great products.
James5mith - Monday, April 9, 2012 - link
Something I realized by reading this roundup: almost all of the current motherboards are using PCI-connected FireWire chips. Even the ones that have PCIe-connected FireWire use TI chips, which in turn are still PCI FireWire, with an internal PCI to PCIe translator.
After some research the only native PCIe firewire controller I've found is from LSI. Does anyone else know of another solution? This is an interesting "dirty secret" that I never really paid any attention to.
Zoomer - Tuesday, April 10, 2012 - link
It doesn't matter; FireWire maxes out at 800 Mbps (roughly 100 MB/s), which the PCI standard can easily handle at 133 MB/s.
Of course, shared bandwidth is an issue, but reworking designs / buying the PCIe design raises issues of cost and reduces the number of PCIe lanes for other devices that can better use the bandwidth.
prophet001 - Monday, April 9, 2012 - link
Seeing the layer at which Virtu is running and reading about what it claims to be capable of, is there any reason this could not cure once and for all the micro-stutter associated with multiple video cards?
jonyah - Monday, April 9, 2012 - link
It's good to know my Z68 (ASUS Maximus IV Extreme) still hasn't been bested. I see nothing from any of these boards that beats what I've been running for some time now. Plus I have more USB 3.0 ports (12 to be exact).
flashbacck - Monday, April 9, 2012 - link
For my own PC, I'm interested in seeing what mATX-sized Z77 boards there are. Often they have weird expansion slot combinations or positions. Looking at what Z68 boards are out now, it surprises me how many still have PCI slots. I would've thought they'd be replaced by PCIe x1's by now.
Also, anything about UEFI BIOS implementations? Was the promise of fast-booting UEFI BIOSes ever fulfilled?
Aruneh - Friday, April 13, 2012 - link
I'm planning on making my next build a mATX one, and would be interested in seeing some reviews of high quality boards with lots of features. The Gigabyte mentioned in this article sadly seems to be a cheaper model.
CharonPDX - Monday, April 9, 2012 - link
"...including 8800 and 2400 series..."
What, are we back in 2007?
Oh, wait... AMD 8800 and nVidia 2400, not the other way around...
Wait, that's not right either. What's the 2400 referring to?
Ryan Smith - Tuesday, April 10, 2012 - link
Radeon HD 2400; AMD's low end series for the HD 2000 generation in 2007.
extide - Tuesday, April 10, 2012 - link
You had it right the first time... That phrase in the article was specifically pointing out the broad range of compatibility of the Lucid solution.
extide - Tuesday, April 10, 2012 - link
The first time, as in 8800 = nVidia and 2400 = AMD.
kristof007 - Monday, April 9, 2012 - link
I don't know if this helps or if you want to put up an update, but Newegg has the boards out and prices on them (I am not sure if they sell them at MSRP or not). Just a thought if you want to update those "TBC" prices.
mechjman - Monday, April 9, 2012 - link
I don't remember seeing PCIe 3.0 support straight from P6x series chipsets.
http://www.intel.com/content/www/us/en/chipsets/ma...
If this is regarding use with a PLX chip, it might be good to state so.
extide - Tuesday, April 10, 2012 - link
It's actually when the boards DON'T use a PLX chip, or if they use 3.0-capable ones. It's only the boards that use 2.0 chips that are limited to 2.0.
GameLifter - Tuesday, April 10, 2012 - link
I am very curious to see how this technology will affect the overall performance of the RAM. If it works well, I may have to get the P8Z77-V Pro.
jbuiltman - Tuesday, April 10, 2012 - link
I am leaving my AMD FX-60, 3 GB DDR, ASUS 939 Deluxe, Win XP, Raptor 150 HDD for Ivy Bridge pastures!!!
I am all for ASUS 16+4 power and multiple USB 2.0 and 3.0 ports on the back panel. I also like the multiple 4-pin fan plugs, MemOK!, the LED problem indicator, the switches, 4 SATA 6 Gbps connectors, and heat pipes connecting the aluminum fins.
What I want to see is 16x/16x, not 8x/8x, with dual video cards on a Z77 board. ASUS, don't skimp for a measly $30! I hate cheap companies, so don't make me think you are just being cheap!!!
jbuiltman - Tuesday, April 10, 2012 - link
Hey all you MoBo companies. Don't get cheap with the Z77 boards and not include 16x/16x on the PCIe 3.0!!!! Come on, add what you need to and pass the $30 on to me!!!!
ratbert1 - Wednesday, April 11, 2012 - link
"ASUS as a direct standard are now placing Intel NICs on all their channel motherboards. This is a result of a significant number of their user base requesting them over the Realtek solutions."
Um... ASUS P8Z77-V LX has Realtek!
and...ASUS P8H77-M PRO has Realtek!
There are more.
ratbert1 - Wednesday, April 11, 2012 - link
I meant P8Z77-M PRO, but the H77 has it as well.
lbeyak - Sunday, April 15, 2012 - link
I would love a detailed review of the Gigabyte G1.Sniper 3 Z77 board when it becomes available.
Keep up the good work!
csrikant - Sunday, April 22, 2012 - link
Dual E5-2690
So far it's the best I have got; I burned a lot of $$$ to get this right.
My last build was with an i7 990X. I got itchy in Oct 2011 over some minor issue and decided to change my PC; I got an i7 2700K and it did not meet my expectations.
Then I built an i7 3960X; it still failed many of my requirements, and I regret changing my PC from the 990X.
Finally, after all my pain and wasted $$, I got my new build, which so far performs better than my 990X build.
My advice: do not get carried away by a fancy new i7 release; they are just a little benefit over the P4 and just a waste of time. I was shocked that they released a P4 with the 1155 socket; it had the same performance as the 2700K, not much change, and in fact it was cheaper too.
I am not an expert, just an average system builder, but my advice from the bottom of my heart is to just go for an E5 build if you are really looking for performance and some benefits. You may spend some extra $$ on the MB, CPU, casing etc., but it is worth it in the long run and works out cheaper than any fancy high-end gaming rig with water cooling etc. - all of that is just shit tech advice. You never get Ferrari performance from a modded Toyota.
mudy - Monday, April 23, 2012 - link
With the third PCIe slot on the Z77 boards I have come across, almost all manufacturers say "1xPCI Express 2.0 x16 (x4 Mode) & only available if a Gen 3 CPU are used". Does this mean that the slot is PCIe 2.0 at x16 but works in PCIe 3.0 x4 mode if an IVB processor is connected, and that with the other two PCIe 3.0 slots populated it gives x8/x4/x4 speeds with PCIe 3.0 compliant cards? Also, what will happen if I put PCIe 2.0 GPUs in the first two PCIe 3.0 x16 slots and a PCIe 2.0 compliant RAID card (RR2720SGL) in the third PCIe slot? Will it give me an effective PCIe 2.0 bandwidth of x16/x8/x8 or not? Damn, these are so confusing!! I wish Anandtech would do an extensive review on just the PCIe lanes covering all sorts of scenarios, and I think NOW would be the best time to do this as the transition from PCIe 2.0 to PCIe 3.0 will happen slowly (maybe over years), so the majority of end-users will still be keeping their PCIe 2.0 compliant devices!!
Thanks
SalientKing - Friday, April 27, 2012 - link
Any plans on putting up some detailed reviews on these units? I'm especially interested in the Gigabyte GA-Z77M-D3H, since it seems to be the cheapest Z77 I've found so far.
Moogle Stiltzkin - Wednesday, May 2, 2012 - link
Hi Anand, I found your short review of the ASUS P8-Z77 Deluxe mobo very interesting, especially the small details which other reviews don't bother to explain to laymen such as myself.
Anyway, what I wanted to know more about was what you said concerning the PLX. So because the Deluxe is using an older PLX chip, what exactly does this mean?
You mentioned that as such it doesn't have the PCI to PCI-Express bridge feature.
http://www.plxtech.com/products/devicedefinitions#...
What does that do exactly ? Does it mean it would be possible to use a PCI card using a PCI-express slot ? Is that what the bridge thing does :d ?
The reason I ask is because I'm stuck between keeping my current Deluxe model or trading it in for a Premium. I'm a bit stuck deciding what to do at this point, so Anand, I could use your sound advice please :{
swindmill - Monday, May 14, 2012 - link
The LogMeIn Mirror driver seems to break LucidLogix's Virtu software, as detailed in this blog post:
http://blog.ampx.net/2012/05/lucidlogix-virtu-and-...
Has anyone else experienced this issue?
kcblair - Thursday, June 28, 2012 - link
Does Virtu MVP really help with monitors that are GeForce 3D Vision ready? I have such a monitor, and it "appears" not to have much of an effect on performance. If, however, I set the refresh rate to 60 Hz, I think there is an increase in performance, as my frame rates are above the refresh rate of the monitor (70-85 fps). But my frame rates remain unchanged (70-85) when I increase my refresh rate to 120 Hz. This statement really confuses me; at least it should make mention of monitors that are 3D Vision ready: "If your setup (screen resolution and graphics settings) perform better than your refresh rate of your monitor (essentially 60 FPS for most people). If you have less than this, then you will probably not see any benefit." Any comments?
MrSpockTech - Thursday, February 28, 2013 - link
I read this article in French here:
http://www.hardware.fr/articles/858-1/lucidlogix-v...
And what I can say is that LucidLogix CHEATS and thinks people are monkeys.
ATI and NVIDIA cheated too in the past to boost their benchmark results.
It's really a shame that this stuff exists again.
And it's even funnier that Intel supports LucidLogix!!!
For me now LucidLogix = C**P !!!