153 Comments

  • Xcellere - Wednesday, April 6, 2011 - link

    It's too bad the lower-capacity drives aren't performing as well as the 240 GB version. I don't have a need for a single high-capacity drive, so the added expense for extra space isn't worth it to me. Oh well, that's what you get for wanting bleeding-edge tech all the time.
  • Kepe - Wednesday, April 6, 2011 - link

    If I've understood correctly, they're using 1/2 of the NAND devices to cut drive capacity from 240 GB to 120 GB.
    My question is: why don't they use the same number of NAND devices with 1/2 the capacity each instead? Again, if I have understood correctly, that way the performance would be identical to the higher-capacity model.
    Is NAND produced in only one package capacity, or is there some other reason not to use NAND devices of differing capacities?
  • dagamer34 - Wednesday, April 6, 2011 - link

    Because of price scaling, it's more cost-effective to use fewer, denser chips than many smaller, less dense ones: the more of a given chip is made, the cheaper it eventually becomes.

    Like Anand said, this is why you can't just ask for a 90nm CPU today; it's too old and not worth making anymore. This is also why older memory gets more expensive once it's no longer mass-produced.
  • Kepe - Wednesday, April 6, 2011 - link

    But couldn't they just make smaller dies? Just like there are different sized CPU/GPU dies for different amounts of performance. Cut the die size in half, fit 2x the dies per wafer, sell for 50% less per die than the large dies (i.e. get the same amount of money per wafer).
  • A5 - Wednesday, April 6, 2011 - link

    No reason for IMFT to make smaller dies - they sell all of the large dies coming out of the fab (whether to themselves or 3rd parties), so why bother making a smaller one?
  • vol7ron - Wednesday, April 6, 2011 - link

    You're missing the point on economies of scale.

    Having one size means you don't have leftover parts, or have to pay for a completely different process (which includes quality control).

    These things are already expensive; adding logistical complexity would only drive prices up, especially since there are noticeable differences in the manufacturing process.

    I guess they could take the poorer-performing silicon and re-market it, like how Anand mentioned they take poorer-performing GPUs and just sell them at a lower clock rate/memory capacity. But it could be that NAND production is more refined and doesn't have that large a spread.

    Regardless, I think you mentioned the big point: inner RAIDs improve performance. Why 8 chips, why not more? Perhaps heat has something to do with it, and (of course) power would be the other reason, but it would be nice to see higher-performing, more power-hungry SSDs. There may also be a performance benefit to larger chips, too, sort of like DRAM, where 1x2GB may perform better than 2x1GB (not interleaved).

    I'm still waiting for the manufacturers to get fancy, perhaps with multiple controllers and speedier DRAM. Where's the Vertex 3 Colossus?
  • marraco - Tuesday, April 12, 2011 - link

    Smaller dies would improve yields, and since they could enable full speed, the product would be more competitive.

    A flaw in a bigger chip may invalidate the whole die, but if it were divided into two smaller chips, part of it could be recovered.

    On the other hand, yields are probably not that big a problem, since bad sectors can be replaced with good ones by the controller.
  • Kepe - Wednesday, April 6, 2011 - link

    Anand, I'd like to thank you on behalf of pretty much every single person on the planet. You're doing an amazing job of making companies actually care about their customers and do what is right.
    Thank you so much, and keep up the amazing work.

    - Kepe
  • dustofnations - Wednesday, April 6, 2011 - link

    Thank God for a consumer advocate with enough clout for someone important to listen to them.

    All too often, valid and important complaints fall at the first hurdle due to dumb PR/CS people who filter out useful information. Maybe this is because they assume their customers are idiots, or that it's too much hassle, or perhaps they don't have the requisite technical knowledge to act sensibly on complex complaints.
  • Kepe - Wednesday, April 6, 2011 - link

    I'd say the reason is usually that once a company has sold you its product, they suddenly lose all interest in you until they come up with a new product to sell. Apple used to be a very good example with its battery policy. "So, your battery died? We don't sell new batteries or replace dead ones, but you can always buy the new, better iPod."
    It's this kind of disregard for consumers that is absolutely appalling, and Anand is doing a great job of fighting for consumers' rights. He should get some sort of an award for all he has done.
  • Super - Friday, April 8, 2011 - link

    ...perhaps the Nobel Peace Prize? I've seen someone win it for a whole lot less *cough* Obama
  • A5 - Wednesday, April 6, 2011 - link

    Agreed - glad they listen to Anand.

    The real question is why they didn't do anything until Anand bitched to the CEO directly. It's not like they weren't aware of the issue - the Storage Review article came out several months ago...
  • darckhart - Wednesday, April 6, 2011 - link

    It just goes to show that companies are not customer-focused. Unless they get shoved hard enough, or see that the bottom line will be greatly affected, they just hope you'll give up after being mired in a revolving email chain or sent through five-level-deep phone support.

    Thanks Anand for reminding companies that some of us are still capable of making informed decisions and aren't afraid to express our dissatisfaction with our dollars.
  • 789427 - Thursday, April 7, 2011 - link

    It's not about being customer-focused or not. Quite frankly, what percentage of upgraders will go into this level of detail?

    Furthermore, 25nm sounds better than 34nm to most people, and that includes salesmen.

    After all that, it's a victory for transparency for a tiny few.

    In terms of marketing, there's little you can do except re-brand the entire product range.

    e.g. Silver and Silver Pro for the lower capacities, Gold and Gold Pro for the higher capacities, and explain on the box that fewer chips generally means lower performance.

    The problem here is that this is the cutting edge of technology and that in 12 months time, it will be surpassed. Then how do you re-vamp the line?

    Graphics cards have this problem too and the model numbers are baffling for 99% of first-time buyers.

    What I would advocate is a sticker on the product, valid for 3 months, that tells you where the product you're buying sits in terms of performance, plus a URL you can visit to check for an update.

    e.g.

    Your product: xyz 300-35
    is better than xyz 300-24
    but is worse than 300-ii

    Check Real performance figures here: URL

    Then it would be nice for salesmen to allow customers to verify this.
    cb
  • cactusdog - Wednesday, April 6, 2011 - link

    Yep, at least OCZ have made a commitment not to use slow Hynix NAND and are being more transparent about real-world performance, but it's all too little, too late.

    Branding drives with a 25 or 34nm prefix is redundant now that all (or most) NAND being produced is 25nm. OCZ made no real attempt to fix the problem when they needed to, and continued to sell the drives even after the consumer backlash.

    I disagree with Anand that other manufacturers of SandForce-controller drives hid the specs as OCZ did. Corsair rebranded their 25nm drives from the start. Non-SandForce drives from Intel were also rebranded at 25nm.

    It's true that many companies use different components under the same branding, but rarely does performance vary by as much as 30%. 30% is huge, and not acceptable for high-end, expensive parts.

    It's a pity Anand didn't really have anything to add on the SpecTek issue that hasn't already been said. I find it hard to believe a company like Micron would sell very expensive NAND cheaper through SpecTek unless there were some problem with it.

    Saying SpecTek NAND must be OK because it is still rated at 3000 cycles doesn't sound very thorough or tell us the whole story. The cycle rating could be based on very different testing standards at Micron and SpecTek.

    I would have thought it would be easy for someone like Anand to ask Micron or SpecTek whether the SpecTek NAND is tier-1 NAND or not. I wouldn't trust OCZ's response given their track record.

    Overall, though, thanks Anand for sticking up for consumers.
  • Powerlurker - Wednesday, April 6, 2011 - link

    According to their corporate website, "SpecTek began at Micron in 1988 as a component-recovery group," which leads me to believe they're Micron's low-end brand for disposing of lower-performing dies.
  • Xneb - Thursday, April 7, 2011 - link

    That is correct. Testing is the same, though, so end users should not be able to tell the difference between SpecTek and IMFT NAND in these drives.
  • sleepeeg3 - Thursday, April 7, 2011 - link

    You can't fault him for reporting honestly. There is no concrete data showing SpecTek NAND is inferior to Micron's.
  • Alkapwn - Wednesday, April 6, 2011 - link

    Ditto! Keep up the great work! We all appreciate it greatly!
  • Mr Perfect - Thursday, April 7, 2011 - link

    Yes, thank you for addressing the Vertex 2 issue.

    The sad part is that if OCZ had used their new, transparent labeling scheme from day one, they would have been praised for their transparency and all of the other companies would have been expected to rise to their standard. Instead, they waited through months of consumer and press outcry, meaning this fair and honest SKU system is merely re-earning lost trust.
  • pfarrell77 - Sunday, April 10, 2011 - link

    Great job Anand!
  • ARoyalF - Wednesday, April 6, 2011 - link

    For keeping them honest!
  • magreen - Wednesday, April 6, 2011 - link

    Intro page: "It's also worth nothing that 3000 cycles is at the lower end for what's industry standard..."

    I can't figure out your intent here. Is it worth noting or is it worth nothing?
  • Anand Lal Shimpi - Wednesday, April 6, 2011 - link

    Noting, not nothing. Sorry :)

    Take care,
    Anand
  • magreen - Wednesday, April 6, 2011 - link

    Hey, it was nothing.

    :)
  • vol7ron - Wednesday, April 6, 2011 - link

    Lmao. Magreen, I like how you addressed that.
  • Shark321 - Thursday, April 7, 2011 - link

    On many workstations in my company we have daily SSD usage of at least 20 GB, and this is not really exceptional. One hibernation in the evening writes 8 GB (the amount of RAM) to the SSD. And no, Windows does not write only the used RAM, but the whole 8 GB. One of the features of Windows 8 will be that it no longer writes the whole RAM contents when hibernating. Windows 7 disables hibernation by default on systems with >4GB of RAM for that very reason! Several of the workstations use RAM disks, which write a 2 or 3 GB image on shutdown/hibernate. Since we use VMware heavily, 1-2 GB is written constantly throughout the day as snapshots. Add some backup snapshots of Visual Studio projects to that and you have another 2 GB.

    Writing 20 GB a day is nothing unusual, and this happens on at least 30 workstations. Some may even go to 30-40 GB.

    Only 3000 write cycles per cell is the reason why we have had several complete failures of SSDs: three of them from OCZ, one Corsair, one Intel.
  • Pessimism - Thursday, April 7, 2011 - link

    Yours is a usage scenario that would benefit from running a pair of drives: one SSD and one large conventional hard drive. The conventional drive could take all your giant writes (slowness won't matter because you're hitting shut down and walking away), while the SSD holds Windows and the applications themselves.
  • Shark321 - Friday, April 8, 2011 - link

    HDD slowness does matter! A lot! Loading a VMware snapshot from a Raptor HDD takes at least 15 seconds, compared to about 6-8 with an SSD. Shrinking the image once a month takes about 30 minutes on an SSD and 3 hours on an HDD!

    Since time is money, HDDs are not an option, except as a backup medium.
  • Per Hansson - Friday, April 8, 2011 - link

    How can you be so sure it is due to the 20GB of writes per day?
    If you run out of NAND cycles the drives should not simply die (which is what I infer you mean from your description).
    When an SSD runs out of write cycles you have, if memory serves, about one year (for consumer drives) before data retention is no longer guaranteed.

    What that means is that the data will be readable, but not writeable.
    This of course does not mean that drives cannot fail in other ways, like controller failure or the like.
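    A quick back-of-envelope sketch makes the point; the capacity and write-amplification figures below are assumptions for illustration only:

```python
# Back-of-envelope NAND endurance estimate -- all figures are
# illustrative assumptions, not specs of any particular drive.
capacity_gb = 120        # assumed drive capacity
pe_cycles = 3000         # rated program/erase cycles per cell
host_writes_gb_day = 20  # daily host writes, as described above
write_amp = 10           # pessimistic guess; SandForce claims far lower

total_nand_writes_gb = capacity_gb * pe_cycles  # 360,000 GB of raw NAND writes
lifetime_days = total_nand_writes_gb / (host_writes_gb_day * write_amp)
print(f"~{lifetime_days / 365:.1f} years to exhaust the rated cycles")  # ~4.9 years
```

    Even with a pessimistic write-amplification guess, 20 GB/day takes years to burn through 3000 cycles, so a sudden total failure points more toward the controller than worn-out NAND.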

    Intel has a failure rate of ca. 0.6%, Corsair ca. 2%, and OCZ ca. 3%:

    http://www.anandtech.com/show/4202/the-intel-ssd-5...
  • dagamer34 - Wednesday, April 6, 2011 - link

    Any idea when these are going to ship out into the wild? I've got a 120GB Vertex 2 in my 2011 MacBook Pro that I'd love to stick into my Windows 7 HTPC so it's more responsive.
  • Ethaniel - Wednesday, April 6, 2011 - link

    I just love how Anand puts OCZ on the grill here. It seems they'll just have to step it up. I was expecting some huge numbers coming from the Vertex 3. So far, meh.
  • softdrinkviking - Wednesday, April 6, 2011 - link

    "OCZ insists that there's no difference between the Spectek stuff and standard Micron 25nm NAND"

    Except for the fact that SpecTek is 34nm, I am assuming?
    There surely must be some significant difference in performance between 25 and 34, right?
  • softdrinkviking - Wednesday, April 6, 2011 - link

    Sorry, I think that wasn't clear.
    What I mean is, it seems like you're saying the difference in process nodes is purely about capacity, but isn't there some performance advantage to going smaller as well?
  • softdrinkviking - Wednesday, April 6, 2011 - link

    Okay, forget it. I looked back through and found the part where you write about the 25nm being slower.

    That's weird and backwards. I wonder why it gets slower as it gets smaller, when CPUs supposedly get faster as the process shrinks?

    Are there any semiconductor engineers reading this article who know?
    Are the fabs making a conscious trade-off, giving up performance at the reduced node for cost benefits, in an attempt to increase die capacities and lower end-user costs?
  • lunan - Thursday, April 7, 2011 - link

    I think it's because the chips get larger but the IO interface to the controller stays the same (the inner RAID). Instead of addressing 4GB of NAND, one device may now hold 8GB or 16GB.

    In the case of 8 interfaces:
    4x8GB = 32GB of NAND, but 8x8GB = 64GB, and 8x16GB = 128GB.

    The smaller the shrink, the bigger each NAND device, but I think there are still only 8 IO interfaces to the controller, so access times also increase with every shrink.

    A CPU or GPU is quite different because they implement different IO controllers; the base architecture actually changes to accommodate a process shrink.

    They would have to change the base architecture with every NAND generation to achieve the same throughput, or add a second controller....

    I think... I may not be right >_<
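    The parallelism argument above can be sketched roughly like this; the die sizes, channel count, interleave depth, and per-die speed are illustrative assumptions, not real Vertex specifications:

```python
# Sketch: fewer, denser NAND dies give the controller less parallelism.
# All numbers are illustrative assumptions.

def drive_throughput_mbps(total_gb, die_gb, channels, interleave, per_die_mbps):
    """Aggregate bandwidth is limited by how many dies the controller
    can keep busy at once: at most channels * interleave."""
    num_dies = total_gb // die_gb
    active_dies = min(num_dies, channels * interleave)
    return active_dies * per_die_mbps

# The same 64GB capacity built from 4GB dies vs. denser 8GB dies
# (assumed 8-channel controller, 2-way interleave, 40 MB/s per die):
print(drive_throughput_mbps(64, 4, 8, 2, 40))  # 16 dies active -> 640
print(drive_throughput_mbps(64, 8, 8, 2, 40))  #  8 dies active -> 320
```

    Halving the die count halves the number of dies the controller can keep busy, which is why a lower-capacity drive built from denser dies loses speed even though the controller is identical.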
  • lunan - Thursday, April 7, 2011 - link

    For example, the Vertex 3 has 8GB NAND devices with 16 connections (8 front and 8 back) to the controller. Now imagine the NAND devices are 16GB or 32GB and the interface is still only 16 connections with 1 controller.

    Maybe the CPU approach could be applied to this problem: if you want to double performance and storage, you go dual-core (one CPU core beside the other)....

    Again... maybe....
  • softdrinkviking - Friday, April 8, 2011 - link

    Thanks for your reply. When I read it, I didn't realize those figures were referring to the capacity of the die.

    As soon as I re-read it, I had the same reaction about redesigning the controller; it seems the obvious thing to do,
    so I can't believe the controller manufacturers haven't thought of it.
    There must be something holding them back, probably $$.
    The major SSD players all appear to be trying to pull down the cost of drives to encourage widespread adoption.

    Perhaps this is being done at the expense of obvious performance increases?
  • Ammaross - Thursday, April 7, 2011 - link

    I think if you re-reread (yes, twice), you'll note that with the die shrink, the page size was upped from 4KB to 8KB. That is twice the space to be programmed or erased per write. This is where the performance disappears, regardless of the number of dies in the drive.
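    The effect of a larger page on small random writes can be sketched in a few lines; the simple whole-page-program model below is an illustrative assumption, since real firmware can buffer and coalesce writes:

```python
# Sketch: cost of small host writes when the NAND page doubles from 4KB to 8KB.
# Assumes (for illustration) that a sub-page write still programs a whole page.

def nand_kb_written(host_kb, page_kb):
    """NAND programs whole pages, so a host write of host_kb
    costs at least one full page's worth of programming."""
    pages = max(1, -(-host_kb // page_kb))  # ceiling division
    return pages * page_kb

print(nand_kb_written(4, 4))  # 4KB host write on 4KB pages -> 4KB programmed
print(nand_kb_written(4, 8))  # same write on 8KB pages -> 8KB programmed, 2x the work
```

    Under this model a 4KB random write does twice the programming work on 8KB-page NAND, independent of how many dies the drive has.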
  • Anand Lal Shimpi - Wednesday, April 6, 2011 - link

    Sorry I meant Micron 34nm NAND. Corrected :)

    Take care,
    Anand
  • jjj - Wednesday, April 6, 2011 - link

    Any chance of a comparison soon of the new-gen SSDs running on P67 vs. the non-native SATA 3 controllers out there (the Marvell controller on many 1366 and 1155 boards and/or some cheap PCIe SATA 3 cards), and maybe an AMD system too?
  • A5 - Wednesday, April 6, 2011 - link

    I think they did that comparison in the P67 article. The P67 controller is the fastest, followed by AMD's (it's within a few %), and then the 3rd-party controllers are a good bit slower.
  • Movieman420 - Wednesday, April 6, 2011 - link

    What more can I say? I've been chomping at the bit over this issue ever since SR broke the story. As a long-time OCZ customer (ok... fanboy.. lol) I couldn't believe OCZ was behaving like that. The max speed rating using the fastest test available is excusable... like you said, if OCZ had gone the altruistic route, the competition would have taken full advantage in about 1 millisecond. After finding out about the inevitable switch to 25nm, I quickly ordered another drive for my existing array from a lesser-known vendor that I hoped was still selling older stock. I received the drive and, to my dismay, it was a 25nm/64Gb piece. Adding this drive to my existing array of 34nm/32Gb drives would have a definite negative effect. Which brings me to my point.

    "After a dose of public retribution OCZ agreed to allow end users to swap 25nm Vertex 2s for 34nm drives, they would simply have to pay the difference in cost. OCZ realized that was yet another mistake and eventually allowed the swap for free."

    This is only partially true. Replacements were offered based on drives that formatted below IDEMA capacity. If your drive formatted to the correct size, you were not eligible to swap. The problem is that the 64Gb dies were also used in Vertex 2/Agility 2 drives that feature 28 percent over-provisioning (i.e. the 50, 100, and 200GB models). In that case the decreased capacity was 'hidden,' for lack of a better term. This is where I locked horns with them.

    The exchange was only offered for the 60'E' and 120'E' drives, even though many others suffered the same performance issue for the same reason. I had to raise a bit of hell before they agreed to replace my 25nm/64Gb 'non-E' drive with a 34nm replacement. At first they would only swap for another 25nm drive, and I stated that my issue was with performance, NOT die size. They ended up replacing my drive with a 34nm model only because a 25nm one would have put a hurting on my existing RAID array of 34nm drives... they made it clear that this was an exception since I had a RAID array that would be negatively affected.

    So anyone who bought a 28 percent OP drive with 64Gb NAND chips was DENIED any sort of exchange unless a RAID array was involved. As far as I know, that policy still stands, unless Ryan or Alex decides to make good on the exchange for 28-percent-OP, non-'E', 64Gb-die drives, which are internally identical to the 'E' drives, just with a different amount of OP set by the firmware. While I may have been 'lucky,' if you will, because I had an array involved, there are people out there who purchased a high-OP model which, if anything, should be a slightly better performer, and instead it's the complete opposite. Charge a premium for the more expensive NAND? Absolutely! Just don't offer a half-hearted exchange that doesn't cover all models affected... not just the ones whose OP doesn't hide the issue.
  • CloudFire - Wednesday, April 6, 2011 - link

    Thanks Anand! Really glad you put some pressure on OCZ. I hope other companies will follow suit as well. Here's hoping you'll continue to do the right thing for us consumers in the future! :D
  • Dennis.Huang - Wednesday, April 6, 2011 - link

    Thank you for the review and for your actions on behalf of customers. This was a great review for me as a new person to SSDs. Do you have any thoughts of the performance of the 480GB version of the Vertex 3 and/or do you plan to do a review on that version too?
  • kensiko - Thursday, April 7, 2011 - link

    I saw some numbers on the OCZ forum, I think from Ryder, for the 480GB, and it performs even better than the 240GB.
  • kensiko - Thursday, April 7, 2011 - link

    Here:
    IOMeter 2008 (QD=1) on P67 SATA III, in MB/s:

                         120GB     240GB     480GB
    4KB Random READ      16.31     15.58     17.77
    4KB Random WRITE     14.45     14.97     15.99
    128KB Seq. READ     190.23    255.17    355.89
    128KB Seq. WRITE    345.21    342.99    313.98
  • bennymankin - Wednesday, April 6, 2011 - link

    Please include Vertex 2 120GB, as it is probably one of the most popular drives out there.
    Thank you.
  • kensiko - Thursday, April 7, 2011 - link

    The F120 covers that, though true, it's not the 25nm version.
  • Shark321 - Friday, April 8, 2011 - link

    I concur. The Vertex 2 120GB should be compared to the Vertex 3 120GB. I suspect the differences will be minimal on SATA II. It's basically the same product, with slight controller and firmware changes.
  • SolidSteel144 - Wednesday, April 6, 2011 - link

    Why weren't other controllers tested?
    AMD's SB850 should also be able to handle these drives at full speed.
  • A5 - Wednesday, April 6, 2011 - link

    If you go back and look at the Sandy Bridge launch article (http://www.anandtech.com/show/4083/the-sandy-bridg... you'll see that the Intel and AMD controllers have essentially identical performance. No reason to double his benchmark time for a 1% difference.
  • acripps - Wednesday, April 6, 2011 - link

    Newegg should have one to my door tomorrow......The last drop of my yule spending authorization. It will spend the next few years drifting through various machine incarnations....till it passes out of the pool in a give-away pc....somewhere around 2014.
  • watzupken - Wednesday, April 6, 2011 - link

    Following the issue I had with them, there won't be another OCZ product for me. Anand rightly pointed out that this issue is far from over, since OCZ left buyers like myself and others out in the cold on the exchange: other than the 60 and 120GB drives, no other models are eligible. Worse, I got an affected drive back through an exchange when my earlier drive failed. I returned a fast drive and got a slow drive back. How nice.
  • devlabz - Wednesday, April 6, 2011 - link

    In the last few articles I've ended up wondering why random read speed on SF controllers is slower than random write. I may have missed some important article explaining all that, though I've read all of them. Isn't flash technology supposed to favor read speeds? Or does it have something to do with lookups for the random data chunks?

    Most likely this will be the year when I'll finally get an SSD, and since my main reason is to reduce the compilation times of my projects, I think my biggest gain will be from the drive with the highest random read IOPS? Am I wrong here? Will it matter that much in practice?
  • FunBunny2 - Wednesday, April 6, 2011 - link

    I've read, I don't remember where, that the IMFT 25nm NAND has on-die ECC circuitry. So:
    - did you find such circuitry?
    - is OCZ, or anyone, exercising it?
  • Movieman420 - Wednesday, April 6, 2011 - link

    Yeah... Toshiba also just introduced their 'built-in ECC' NAND.
    http://www.techpowerup.com/143619/Toshiba-Debuts-S...
    From what I understand, this NAND will take the ECC burden off the controller. The thing is, though, that SandForce controllers actually excel at ECC duties vs. other controllers. This is a major selling point, because as the die process continues to shrink, the ECC burden will continue to increase. So I guess I'm saying I'm not too sure that more expensive ECC NAND would be practical when the controller doesn't suffer from the increasing ECC issue. Someone with more knowledge about how the SF controller works could probably answer the question best... cough*Anand*cough. ;)
  • Movieman420 - Wednesday, April 6, 2011 - link

    The dismal performance of the Hynix NAND was news to me. It does, however, explain why there were many users with horrid performance posting on the OCZ forums. I suspect these were the ones who were told that the problem was with their PC/lappy. It was never once mentioned on the forum that some drives might have low-performing NAND inside. No wonder they kept reminding folks not to open their drives 'due to potential warranty issues.' It seems OCZ was being less than forthcoming even before the whole 25nm NAND thing blew up.

    I really, really, REALLY hope that OCZ puts an end to the shady business we've seen for the last few months... they are a great company with a great product. Omission and/or deception isn't gonna fly, especially when you cater to enthusiasts, who are not exactly stupid. It's those same 'enthusiasts' who made OCZ's early success possible in the first place. I know things have since changed and the vast majority of their sales are now to commercial and enterprise customers. They'd never think of pulling this with those customers, but they'll do it to the very people who made their early success possible?

    This post and my previous one come from the perspective of a die-hard customer who also happens to be an OCZ shareholder. Just wish I could afford enough shares to actually have a say in the way things go down. :P
  • xboxist - Wednesday, April 6, 2011 - link

    Anand,

    I'm a very casual hardware enthusiast, and admittedly most of the technical aspects discussed in this article elude me.

    With that said, I don't need to understand everything to continue to be impressed by your enthusiasm for the products in your industry, and the way you carry yourself as an ambassador for all of your users. The way you went after OCZ here has to be applauded.
  • fixxxer0 - Wednesday, April 6, 2011 - link

    After being disappointed in some way with just about every (large) company I've dealt with, whether it be insurance, auto makers, electronics, appliances, you name it... I am glad to see one finally accepting responsibility and doing the right thing.

    I do not expect 100% perfection from every company at all times. I know sometimes things are DOA, or defective, or flawed. But to actually have a company take that extra step and make it right without you having to sue them is commendable.

    Personally, when it comes time to decide which drive to go with, it will mainly be on the numbers, but OCZ's ethics will definitely give them the edge if there is a toss-up.
  • kensiko - Thursday, April 7, 2011 - link

    It's true, I've never seen any big company let customers have so much impact on them. The forum is really the big thing here.
  • lukechip - Wednesday, April 6, 2011 - link

    I've just bought an 80GB Vertex 2. OCZ state that only "E" parts are affected, but at StorageReview, they show that they had a non "E" part which contained 25nm NAND. Also, OCZ say that the only parts affected are the 60 GB and 120 GB models.

    I've just purchased an 80 GB model, and have no idea what is inside it, nor whether I'd prefer it to be an 'old' one or a 'new' one.

    The new SKUs that Anand listed indicate that moving forwards, all 80, 160 and 200 GB Vertex 2 units will be 25nm only, and all 60, 120 and 240 GB Vertex 2 units will be 34nm only. I can't imagine they can keep this up for long, as 34nm runs out and they have to move the 60, 120 and 240 GB models to 25 nm.

    What I suspect is that prior to 25 nm NAND becoming available, all 80 GB units used the Hynix 32 nm NAND. Based on Anand's tests, I suspect this mean they were the worst performing units in the line up. 80 GB units built using the new 25 nm NAND would actually perform better than those built with Hynix 32 nm NAND.

    So whereas 60 GB and 120 GB customers really want to have a unit based on 34 nm NAND, 80 GB customers like me really want to have a drive based on 25 nm NAND. Hence OCZ are not offering replacements for 80 GB units. A new 80 GB unit is better than an old 80 GB unit, even though it is not as good as an old 60 GB unit

    So my questions are:

    1/ Is what I am suggesting above true ?
    2/ How can I tell what NAND I've got? I updated the firmware on my 80 GB unit soon after buying it, so using the firmware version to determine NAND type doesn't seem too reliable to me.

    Personally, I find my unit plenty fast enough. And I understand that OCZ and other SSD vendors must accommodate what their suppliers present them with. However, the lack of transparency and the "lucky dip" approach that we have to take when buying an SSD from OCZ lead me to conclude that they

    1/ don't respect their customers and/or
    2/ are very naive and stupid to expect that customers won't notice them pulling a 'bait and switch'
  • B3an - Thursday, April 7, 2011 - link

    Anand... you seem to have forgotten something in your conclusion. You say it's best to go for the 240GB if torn between it and the 120GB. But given that two 120GB Vertex 3s are only slightly more expensive than the 240GB version, wouldn't it make more sense to just get two 120GBs for RAID 0? You'd get considerably better performance than the 240GB, considering how well SSDs scale in RAID 0.

    Really great and interesting review BTW.
  • Alopex - Thursday, April 7, 2011 - link

    I'd really like to see this question addressed as well. According to several tests, SSDs scale in pretty much all categories beyond a minimal queue depth. It seems like random reads are the 120GB model's Achilles' heel here, but given the linearity of the scaling, it might be safe-ish to assume that 2x 120GB in RAID 0 will equal 1x 240GB. For nearly the same price, you'd then get the same storage size, fix the discrepancy between the two models, and hopefully see significant performance gains in the other categories, like sequential read/write.

    I'm building a new computer at the moment, and in light of this article, I'm still planning to go with 2x 120GB Vertex 3s in RAID 0, unless someone can provide a convincing argument to do otherwise. The only thing that really makes me hesitate is waiting to see what the other vendors have planned for "next-gen" SSD performance. Then again, with that attitude I'd be waiting forever ;-)

    Many thanks for the article, though!
  • casteve - Thursday, April 7, 2011 - link

    No TRIM available in RAID.
  • B3an - Thursday, April 7, 2011 - link

    Not a big problem. I've had 3 different SSD sets in RAID 0 over the years, and I've not needed TRIM. And a certain crappy OS with a fruity theme doesn't even support TRIM without a hack job.
  • ComputerNovice22 - Thursday, April 7, 2011 - link

    You wrote "
    In the worst case comparison the F120 we have here is 30% faster than your 34nm Hynix Vertex 2."

    I believe you meant 32nm Hynix, I'm not sure I'm right or not and I'm not trying to be one of those people that just likes to be right either, just wanted to let you know just in-case.

    On another note though, I LOVE the article. I bought a Vertex 2 recently and I was very angry with OCZ after I hooked it up and realized it was a 25nm SSD ... I ended up just buying a 120GB 510 (Elm Crest).
  • Lux88 - Thursday, April 7, 2011 - link

    1. Thank you for investigating NAND performance so thoroughly.
    2. Thank you for benching drives with "common" capacities.
    3. Thank you for protecting consumer interests.

    Great article. Great site. Fantastic Anand.
  • sor - Thursday, April 7, 2011 - link

    I worked at a Micron test facility years ago. I can only speak for DRAM, but I imagine NAND is much the same. Whenever someone drops a tray of chips and they go sprawling all over the floor... SpecTek. Whenever a machine explodes and starts crunching chips... SpecTek. I used to laugh when I saw PNY memory in BestBuy with a SpecTek mark on its chips selling for 2x what good RAM at Newegg would cost.

    Basically anything that's dropped, damaged, or doesn't meet spec somehow, gets put into SpecTek and re-binned according to what it's now capable of. It's a brand that allows Micron to make money off of otherwise garbage parts, without diluting their own brand. On the good end the part may have just had some bent leads that needed to be fixed, on the bad end the memory can be sold and run at much slower specs or smaller capacity (blowing fuses in the chip to disable bad parts), or simply scrapped altogether.
  • sleepeeg3 - Thursday, April 7, 2011 - link

    Thanks for the info, but IMO the bottom line is if it works reliably and it allows them to deliver something at a lower price, I am all for it. If it backfires on them and they get massive failure rates, consumers will respond by buying someone else's product. That's the beauty of capitalism.
  • sor - Thursday, April 7, 2011 - link

    Oh sure, I agree that the bottom line is whether or not it still works, that's why they do the binning and have grades of product within that brand. If OCZ can use cheaper flash and the controller takes care of the increased failures, or the users never reach the failure threshold, then who cares, as long as the product works the same?

    I can't speak for the testing procedures within SpecTek or their tolerances, as I only worked for a facility that tested parts for Micron, and in the process generated the bad parts and did some of the binning before sending them to SpecTek. Much of the stuff that went to them failed our tests but was otherwise not physically damaged.

    There's a reason why those parts are sold under the SpecTek brand at a discount, it shows that even the manufacturer doesn't trust them to be sold under the good brand after testing.
  • Panlion - Thursday, April 7, 2011 - link

    I wonder if OCZ will produce a 7mm 2.5 inch drive. The newer notebooks from Lenovo are starting to demand that format; it would be nice to have some option other than an Intel SSD.
  • sleepeeg3 - Thursday, April 7, 2011 - link

    Maintaining integrity while sticking up for the little guy, instead of bending over backward to write glowing articles for every vendor sponsor. That's what has made this site succeed.

    I wish you could also take OCZ to task on the SandForce controller's strange tendency to lock up and vanish from a system, due to its built-in encryption. They are in complete denial that it is an issue, despite dozens of reports on their user forums.
  • edfcmc - Thursday, April 7, 2011 - link

    Thank you Anand for this very informative and in-depth review of the OCZ issue and their latest 120gb vertex 3 product; especially since the 120gb products are within my price range and the size I am looking to purchase. On a side note, I have been reading your reviews since your review of the FIC PA-2007 many years ago and I love the evolution of this site and your dedication to keeping us consumers informed.

    p.s. Please consider asking Asus/Nvidia to update the Nvidia driver on their UL80 series, as nothing new has been released since I purchased the UL80vt based on this site's recommendation. Asus/Nvidia seem to be a little non-responsive to us folks who have been requesting an update for quite some time.
  • ekerazha - Thursday, April 7, 2011 - link

    Can't wait for reviews of SSDs (Intel G3, Crucial m4) with comparable size (120 GB).
  • Chloiber - Thursday, April 7, 2011 - link

    Anand:
    Just a quick note: in the newest SF firmware, there is also still a bug with Hynix flash. You can see it here, under "Open Issues":
    http://www.ocztechnologyforum.de/forum/showthread....

    "Under benchmarking scenarios with IOMETER 2006, 60GB drives that use Hynix32nm MLC (1024 blocks, 8KB pages) can impose long latencies"

    Just FYI.
  • MarcHFR - Thursday, April 7, 2011 - link

    Dear OCZ, Dear Anand,

    In the past, it was simple :

    Vertex : always the same NAND
    Agility : NAND could change

    I know that the Vertex name is a best seller for OCZ, but I think it would be simpler to go back to this.
  • strikeback03 - Friday, April 8, 2011 - link

    That is what I was wondering, I thought the point of the Agility line was that they would use the good controller but possibly cheaper NAND.
  • Adul - Thursday, April 7, 2011 - link

    Why not make use of QR codes, so a shopper can just scan the code to be taken to a page with more detailed information?
  • miscellaneous - Thursday, April 7, 2011 - link

    Given this particularly insidious paragraph:
    "OCZ will also continue to sell the regular Vertex 2. This will be the same sort of grab-bag drive that you get today. There's no guarantee of the NAND inside the drive, just that OCZ will always optimize for cost in this line."

    Will these "grab-bag" drives be using the same SKU(s)/branding as the original - well reviewed - Vertex 2? If so, how is using the _old_ SKU(s) to identify the _new_ "grab-bag" drives, whilst introducing _new_ SKU(s) to help identify drives with the _old_ level of performance a satisfactory solution?
  • erple2 - Friday, April 8, 2011 - link

    I believe that the issue is scale. It would not be possible financially for OCZ to issue a massive recall to change the packaging on all existing drives in the marketplace. Particularly given that while the drives have different performance characteristics (I'd like to see what the real world differences are, not just some contrived benchmark), it's not like one drive fails while another works.

    So it sounds to me like they're doing more or less what's right, particularly given the financial difficulty of a widespread recall.
  • Dorin Nicolaescu-Musteață - Thursday, April 7, 2011 - link

    IOmeter results for the three NAND types are the same for both compressible and incompressible data in "The NAND Matrix". Yet, the text suggests the opposite.
  • gentlearc - Thursday, April 7, 2011 - link

    The Vertex 3 is slower
    It doesn't last as long
    Performance can vary

    Why would you write an entire article justifying a manufacturers decisions without speaking about how this benefits the consumer?

    The real issue is price and you make no mention of it. If I'm going to buy a car that doesn't go as fast, has a lower safety rating, and the engine can be any of 4 different brands, the thing better be cheaper than what's currently on the market. If the 25nm process allows SSDs to break a price barrier, then that should be the focal point of the article. What is your focal point?

    "Why not just keep using 34nm IMFT NAND? Ultimately that product won't be available. It's like asking for 90nm CPUs today, the whole point to Moore's Law is to transition to smaller manufacturing processes as quickly as possible."

    Pardon? This is not a transistor count issue; it's further down the road. I am surprised you would cite Moore's Law as a reason why we should expect worse from the new generation of SSDs. The inability of a company to address the complications of a die shrink is not the fault of Moore's Law; it's the fault of the company. As you mentioned in your final words, the 240GB will probably be able to take better advantage of the die shrink. Please don't justify manufacturers trying to continue using a one-size-fits-all approach without showing how we, the consumers (your readership), are benefited.
  • erple2 - Friday, April 8, 2011 - link

    I think that you've missed the point entirely. The reason why you can't get 34nm IMFT NAND going forwards, is that Intel is ramping that production down in favor of the smaller manufacturing process. They may already have stopped manufacturing those products in bulk. Therefore, the existing 34nm NAND is "dying off". They won't be available in the future.

    The point about Moore's Law - I think Anand may be stretching the meaning of Moore's Law, but ultimately the reason why we get faster, smaller chips is because of cost. It's unclear to me what the justification behind Moore's law is, but ultimately, that's not important to the actual Law itself. It is simply a reflection of the reality of the industry.

    I believe transistor count IS the issue. The more transistors Intel (or whomever) can pack in to a memory module for the same cost to them (thereby increasing capacity), the more likely they are to do that. It is a business, after all. Higher density can be sold to the consumer at a higher price (more GB's = more $'s). Intel (the manufacturer of the memory) doesn't care whether the performance of the chips is lower to some end user. As you say, it's up to the controller manufacturer to figure out how to take into account the "issues" involved in higher density, smaller transistor based memory. If you read the article again, Anand isn't justifying anything - he's simply explaining the reasons behind why RIGHT NOW, 25nm chips are slower on existing SF drives than 34nm chips are.

    It's more an issue of the manufacturers trying to reuse "old" technology for the current product line, until the SF controller optimizations catch up to the smaller NAND.
  • gentlearc - Saturday, April 9, 2011 - link

    Once again, why do an article explaining a new product that is inferior to the previous generation with no reason why we should be interested? AMD's Radeon HD 6790 was titled "Coming Up Short At $150" because regardless of the new technology, it offers too little for too much. Where is the same conclusion?

    Yes, this article was an explanation. Anand does a 14-page explanation, saving a recommendation for the future.

    "The performance impact the 120GB sees when working with incompressible data just puts it below what I would consider the next-generation performance threshold."

    The question remains: why should the 120GB Vertex 3 debut at $90 more than its better-performing older brother?
  • mpx999 - Sunday, April 10, 2011 - link

    If you have a problem with the speed of flash memory, then a good choice for you is a drive with SLC memory, which doesn't have as many speed limitations. Unfortunately manufacturers severely overprice them: SLC drives cost much more than 2x MLC ones at the same capacity, despite the fact that the flash is only 2x more expensive. You can buy reasonably priced (2x the MLC version's price) SDHC cards with SLC flash, but you can't get a reasonably priced (2x the MLC version's price) SSD with SLC flash.
  • taltamir - Thursday, April 7, 2011 - link

    "After a dose of public retribution OCZ agreed to allow end users to swap 25nm Vertex 2s for 34nm drives"

    Actually OCZ lets customers swap their 25nm 64Gbit drives for 25nm 32Gbit drives. There are no swaps to the 32nm 32Gbit drives.
  • garloff - Thursday, April 7, 2011 - link

    Anand -- thanks for your excellent coverage on SSDs -- it's the best that I know of. And I certainly appreciate your work with the vendors, pushing them for higher standards -- something from which everybody benefits.

    One suggestion to write power consumption:
    I can see drives that write faster consume more power -- that's no surprise, as they write to more chips (or the SF controller has to compress more data ...) and it's fair. They are done sooner, going back to idle.
    Why don't you actually publish a Ws/GB number, i.e. write a few Gigs and then measure the energy consumed to do that? That would be very meaningful AFAICT.

    (As a second step, you could also do a mix, by having a bench run for 60s, writing a fixed amount of data and then comparing energy consumption -- faster drives will spend longer in idle than slower ones ... that would also be meaningful, but that's maybe a second step. Or you could measure the energy consumed in your AS bench, assuming that it transfers a fixed amount of data as opposed to running for a fixed amount of time ...)
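
    The Ws/GB idea suggested above can be sketched in a few lines. This is a hypothetical illustration, not a real test harness: the power samples and byte count are made-up inputs, and a real run would log them from a power meter while the drive performs a timed write.

    ```python
    # Sketch of a Ws/GB (joules per gigabyte written) metric, as proposed
    # above. Inputs are hypothetical; a real test would capture (time, watts)
    # samples from a power meter during the write and count bytes written.

    def energy_joules(samples):
        """Integrate (time_s, watts) samples with the trapezoidal rule."""
        total = 0.0
        for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
            total += (p0 + p1) / 2.0 * (t1 - t0)
        return total

    def ws_per_gb(samples, bytes_written):
        """Energy consumed per decimal gigabyte written."""
        return energy_joules(samples) / (bytes_written / 1e9)

    # Example: a drive drawing a steady 3 W for 10 s while writing 2 GB
    samples = [(0.0, 3.0), (5.0, 3.0), (10.0, 3.0)]
    print(ws_per_gb(samples, 2e9))  # 30 J over 2 GB -> 15.0 Ws/GB
    ```

    A faster drive that finishes the same write sooner integrates power over a shorter interval, so its Ws/GB number naturally rewards the "race to idle" behavior described above.
    
    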
  • Nihil688 - Thursday, April 7, 2011 - link

    Hello all,
    I am kinda new to all this, and since I am about to get a new 6Gb/s SATA 3 system I need to ask this.

    The main two SSDs that I am considering are the Micron's C400 or the OCZ Vertex3 120' version.
    I can see that their sequential speeds in both write and read are completely different with V3 winning
    but their Random IOPSs (always comparing the 120GB V3 and the 128GB C400) differ with C400 winning in reads and V3 winning with big difference in writes.
    I must say I am planning to install my windows 7 OS in this new SSD I am getting and what I would
    consider doing is the following:
    -Compiling
    -Installing 1 game at a time, playing, erasing, redo
    -Maybe Adobe work: Photoshop etc

    So I have other hard drives to store stuff but the SSD would make my work and gaming quite faster.
    The question is, C400 gives 40K of read which is more important for an OS whilst V3 gives better overall stats and is only lacking in random reads. What would be more important for me? Thanks!
  • PaulD55 - Thursday, April 7, 2011 - link

    Connected my 120 Gig Vertex 3 (purchased from Newegg), booted and saw that it was not recognized by the BIOS. I then noticed the drive was flashing red and green. Contacted OCZ and was told the drive was faulty and should be returned. Newegg claims they have no idea when these will be back in stock.
  • GrizzledYoungMan - Thursday, April 7, 2011 - link

    Thank you Anand for your vigilance and consumer advocacy. OCZ's disorganization remains a problem for their customers (and I'm one of them, running OCZ SSDs in all my systems).

    Still, I am disappointed by the fact that your benchmarks continue to exaggerate the difference between SSDs, instead of realistically portraying the difference between SSDs that a user might notice in daily operation. Follow my thinking:

    1. The main goal of buying an SSD, or upgrading to an SSD from another SSD, is to improve system responsiveness as it appears to the user.
    2. No user particularly cares about the raw performance of their drives as much as how much performance is really available in real-world use.
    3. Thus, tests should focus on timing and comparing common operations, in both solo tasking and multi tasking scenarios (like booting, application loading, large catalog/edit files/database loading and manipulation for heavy duty desktop content creation applications and so on).
    4. In particular, Sandforce is a huge concern when comparing benchmarks to real world use. Sure, they kill in the benchmarks everyone uses, but many of the most resource intensive (and especially disk intensive) desktop tasks are content creation related (photo and video, primarily) which use incompressible files. How is it that no one has investigated the performance of Sandforce in these situations?

    Users here have complained that if we did only #3, only a small difference between SSDs would be apparent. But to my eyes, THAT IS EXACTLY WHAT WE NEED TO KNOW. If the performance delta between generations of SSDs is not really significant, and the price isn't moving, then this is a problem for the industry and consumers alike.

    However, creating the perception with unrealistically heavy trace programs that SSDs have significant performance differences (or that different flash types and processes have significant performance differences) when you haven't yet demonstrated that there are real world performance differences in terms of system responsiveness (if anything, you've admitted the opposite on a few occasions) strikes me as a well intentioned but ultimately irresponsible testing method.

    I'm sure it's exciting to stick it to OCZ. But really, they are one manufacturer among many, and not the core issue. The core issue is this charade we're all participating in, in which we pretend to understand how SSDs really improve the user experience when we have barely scratched the surface of this issue (or are even heading in the wrong direction).
  • GrizzledYoungMan - Thursday, April 7, 2011 - link

    Wow, typos galore there. Too early, too much going on, too little coffee. Sorry.
  • kmmatney - Thursday, April 7, 2011 - link

    The Anand Storage Bench 2010 "Typical workload" is about as close as you can get (IMO) to a real-world test. Maybe it's a heavier multitasking scenario than most of us would use, but I think it's the best test out there for giving a real-world assessment of SSDs. Just read the description of the test - I think it already has what you are asking for:

    "The first in our benchmark suite is a light/typical usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader among other applications. With Firefox we browse web pages like Facebook, AnandTech, Digg and other sites. Outlook is also running and we use it to check emails, create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet, graphs and save the document. The same goes for Word 2007. We open and step through a presentation in PowerPoint 2007 received as an email attachment before saving it to the desktop. Finally we watch a bit of a Firefly episode in Windows Media Player 11."
  • GrizzledYoungMan - Friday, April 8, 2011 - link

    Actually, the storage bench is the opposite of what I'm asking for. I've written about this a couple of times, but my complaint is basically that benchmarks exaggerate the difference between SSDs, that in real world use, it might be impossible to tell one apart from another.

    The Anand Storage Benches might be the worst offenders in this regard, since they dutifully exaggerate the difference between SSD generations while giving the appearance of a highly precise way to test "real world" workloads.

    In particular, the Sandforce architecture is an area of concern. Sure, it blows away everyone in the benchmarks, but the fact that it becomes HDD-slow when given an incompressible workload really has to be explored further. After all, the most disk-intensive desktop workloads all involve manipulating highly compressed (i.e., not further compressible) image files, video files and to a lesser degree audio files. On more than one occasion, I've seen people use Sandforce drives as scratch disks for this sort of thing (given their high sequential writes, it would seem ideal) and be deeply disappointed by the resulting performance.

    No response yet from Anand on this. But I'll keep posting. It's nothing personal - if anything, I'm posting here out of respect for Anand's leadership in testing.
  • KenPC - Thursday, April 7, 2011 - link

    Nice write up. And - excellent results getting OCZ to grow up a little bit more.

    As a consumer, the solution of SKU's based on NAND will be confusing and complicated. How the heck am I supposed to know if the xxx.34 or the xxx.25 or some future xxxx.Hyn34 or xxxx.IMFT25 is the one that will meet one of the many performance levels offered?
    A complicating factor that you mentioned in the article, is that for a specific manufacturer and process size, there can be varying levels of NAND performance.

    I strongly urge you to consider working with OCZ to 'bin' the drives with established benchmarks that focus on BOTH random and TRUE non-compressible data rates. SKU suffixes would then describe the binned performance.

    You also have the opportunity to help set SSD 'industry standard benchmarks' here!

    Then give OCZ the license to meet those binned performance levels with the best/lowest cost methods they can establish.

    But until OCZ comes up with some 'assured performance level', OCZ is just off of my SSD map.

    KenPC
  • KenPC - Thursday, April 7, 2011 - link

    Yes, a reply to my own post......

    But how about a unique and novel idea?

    What if.. a Vertex 2 is a Vertex 2 is a Vertex 2, as measured by ALL of the '4 pillars' of SSD performance?

    Vertex 3's are Vertex 3's, and so on......

    If different nand/fw/controller results in any of the parameters 'out of spec', then that version never ships as a 'Vertex 2'.

    After all, varying levels of performance is why there is a vertex, a vertex 2, and an onyx and an agility, and an onyx2, and an agility2, and etc etc within the OCZ SSD line.

    Why should the consumer need to have to look a second tier of detail to know the product performance?

    KenPC
  • strikeback03 - Friday, April 8, 2011 - link

    So any time Sandforce/OCZ upgrades the firmware you need a new product name? If something happened in the IMFT process and they had to buy up Samsung NAND instead, new product? And of course everyone wants to wait for reviews of the new drives before buying.

    I personally don't mind them changing stuff as necessary, so long as they maintain some minimum performance that they advertise. The real-world benchmarks in the Storage Review articles showed a 2-5% difference; to me that is within the margin of error and not a problem for anyone not benchmarking for fun. The Hynix NAND performing at only ~70% of the old parts is a problem, not so much the 25nm ones.
  • semo - Thursday, April 7, 2011 - link

    You've done well. I hope you continue to do this kind of work as it benefits the general public and in this particular case, keeps the bad PR away from a very promising technology.

    The OCZ Core and other jmicron drives did plenty to slow down the progress of SSD adoption in to the mainstream. You caught the problem earlier than anyone else and fixed it. This time around it took you longer because of other high priority projects. I think your detective and lobbying work are what keeps us techies checking AT daily. In my opinion, the Vertex 2 section of this article deserves home page space and a catchy title!

    Finally, let's not forget that OCZ has not yet fixed this issue. People may still have 25nm drives without knowing it, or without being able to understand the problems. OCZ must issue a recall of all mislabeled drives.
  • Shadowmaster625 - Thursday, April 7, 2011 - link

    It is ridiculous to expect a company to release so many SKUs based on varying NAND types. It costs a company big money to release and keep track of all those SKUs. When you look at the actual real world differences between the different NAND types, it only comes down to a few percentage points of difference. It is like comparing different types of motherboard RAM. It is a waste of time and money to even bother looking at one vs another. OCZ should just tell you all to go pound sand. I suspect they will eventually, if you keep nitpicking like this. The 25nm Vertex 2 is virtually identical to the 34nm version. If you run a complete battery of real world and synthetic tests, you clearly see that they are within a few % of each other. There is no reason for OCZ to waste any more time or money trying to placate a nitpicking nerd mob.
  • semo - Thursday, April 7, 2011 - link

    The real issue was that it wasn't just a few % difference. Some V2 drives were nowhere near the rated capacity with the 25nm NAND. So if you bought 2 V2 drives and they happened to be different versions, RAID wouldn't work. There is still no way to confirm whether the V2 you are trying to buy is one of the affected drives, as OCZ hasn't issued a recall or taken the affected drives off retail shelves. The best way to avoid unnecessary hassle is not to buy OCZ at all. Corsair did a much better job of informing the customer about the transition:
    http://www.corsair.com/blog/force25nm/

    The performance difference was higher than a few % as well.
  • casteve - Thursday, April 7, 2011 - link

    Great review! Thanks for carrying the 120GB torch :)

    I'd love to see a couple of HDDs added to the 2011 bench (like they are in the 2010 bench) to keep the perspective in play.* Most people are still moving from an HDD to an SSD and not just upgrading their SSD's.

    * stuff a 120GB SSD in yer laptop for $200 to replace a 5400rpm HDD and improve gaming IOPS by 5x is more impressive than replacing an existing 120GB SSD with a newer one for $250 and improve gaming IOPS by 10%. Extreme example...but you get the idea.
  • Gasaraki88 - Thursday, April 7, 2011 - link

    I want to thank you for writing this article and keeping the companies honest. Without smart people like you, companies will overstate performance and to the common person it will look fine because we don't have the proper tools to test.

    I've been reading AnandTech for 11 years now. The quality is still top notch, unlike some other sites I used to go to.
  • cknobman - Thursday, April 7, 2011 - link

    Dont buy OCZ products.

    Anand and countless consumer reviews from Newegg have proved that OCZ does not put out a consistent and reliable level of product.

    I'm not one for rolling the crap dice with my hundreds of hard-earned dollars.
  • hackztor - Thursday, April 7, 2011 - link

    LOL, okay, go spend your hard-earned money on last year's performance in Intel's new drives.
  • seapeople - Thursday, April 7, 2011 - link

    Wow you got him there! Yeah, why doesn't he just buy an Intel drive. That would be funny because then he would have to wait 5.6 seconds to open Photoshop instead of 5.2 seconds. Or it would take a full 33 seconds to reboot instead of 35 seconds. I bet he's so unintelligent that he would actually accept crappy last-generation solid state drive performance like that at a lower price.
  • semo - Friday, April 8, 2011 - link

    Corsair, patriot and many other SSD makers use SF controllers. You just have to be sure you know what firmware you're getting. You have to be much more careful with OCZ as you can't trust them to sell you what they claim on the box.
  • Stargrazer - Thursday, April 7, 2011 - link

    It's awesome that you're reviewing the 120GB version first. It's the version that I believe most people would be most interested in getting, so it's great that we'll be able to see how it performs, rather than only seeing the higher numbers of a ~256GB version that's so expensive that most people would never get it. It's fantastic even. Did I mention that it's awesome?

    Unfortunately, since the ~128GB versions haven't always been reviewed in the past, this also means that we don't really have much to compare the numbers to. How do we know if the 120GB Vertex 3 is competitive if we don't know the performance of its competitors?

    I can understand if it might take a while to get the numbers for comparable versions of the 510, 310 and m4 (though I really hope that in the future you continue to press on for getting ~128GB versions in time for the initial reviews), but would it at least be possible to get the complete numbers for the 128GB version of the RealSSD C300? For some reason it doesn't seem to be in the IOmeter tests.

    Oh. Isn't it time that you stopped using "I didn't expect to have to debut this so soon" in the introduction to the 2011 Storage Bench? :)
  • KenPC - Thursday, April 7, 2011 - link

    A humorous thought of little relevance, but... if OCZ rebrands the Vertex 2 as the 2b to solidify performance specs, then as I shop I will be thinking....

    Is this OCZ drive a 2b or not 2b, that is the question.....
  • Omid.M - Thursday, April 7, 2011 - link

    They saw Anand's Vibrams and knew he meant business (casual).

    :)
  • sethm1 - Thursday, April 7, 2011 - link

    I was looking forward to the Vertex 3 as being the next best thing.
    And so was hoping for a more positive review (but yes appreciate the candor in the review).
    My question is, after all is said and done, is the Vertex 3 still better than the Vertex 2 (120GB versions)?
    Should I go out and get a version 2?
  • kmmatney - Thursday, April 7, 2011 - link

    The answer is pretty easy, I think. Anand's own storage bench is a great test of real world performance, especially the "typical workload"

    http://www.anandtech.com/show/4256/the-ocz-vertex-...

    The bottom line: Version 3 is better than Version 2, although not by an amazing amount.
  • sunbear - Thursday, April 7, 2011 - link

    "3) Finally, are you willing to commit, publicly and within a reasonable period of time, to exchanging any already purchased product for a different configuration should our readers be unhappy with what they've got?"

    The problem is that it is not straightforward for a customer to know "what they've got" without opening up the SSD and voiding their warranty. OCZ provides the "OCZ Toolbox" that tells you whether your SSD contains 32Gb or 64Gb NAND chips, but they don't currently provide any tool to determine whether you have the dreaded Hynix flash or the superior IMFT flash.

    I asked in the OCZ forum and their response was to do a secure erase and run the AS SSD benchmark. I have no idea what numbers from the AS SSD benchmark would indicate Hynix versus IMFT.
  • cptcolo - Friday, April 8, 2011 - link

    Hats off to both Anand and OCZ for fixing the Vertex 2 issue. I am really impressed by both Anand and Alex Mei. Anand, thanks for being proactive and presenting OCZ with the problem, and thanks to Alex Mei and Ryan for taking care of the problem 100% (via the change in name and SKUs). You are both true altruists.
  • B0GiE - Friday, April 8, 2011 - link

    I just cancelled my order of the 120GB OCZ Vertex 3. It says on the Scan webpage that it does 550MB/s read and 500MB/s write.

    Due to this review I'm not sure I believe it. I will wait for further reviews before I purchase a new SSD.

    I am interested in game load times for the Vertex 3 such as Black Ops but Anandtech does not show any???
  • gietrzy - Friday, April 8, 2011 - link

    I've just cancelled my 120GB Vertex 3 order. I have no time to investigate whether or not my drive performs as promised.
    I also have a 60GB Vertex 2, I think the "E" version - how do I check if it's affected?

    My scenario is #2 at this page http://www.anandtech.com/show/4256/the-ocz-vertex-...
    I also have lots of 1080p avchd videos and even more raw files from my camera so I think I will wait for Intel 510 120 GB review and buy Intel.

    One thing's for sure: I will never buy OCZ again.

    Thanks Anand, thanks guys!
  • mattcpa - Friday, April 8, 2011 - link

    I ordered the 120GB Vertex 3 from Computers4Sure on the morning before you published this review... :(
    I also picked up an optical bay HDD caddy to put my MBPro's 750GB HDD there, and plan to put the 120GB drive in the 6Gbps SATA bay.
    I use the Macbook Pro 15" 2.2 SBP for laptop DJ work along with handbraking movies and such; sprinkle in some random gaming.

    Hopefully for these processes, it appears this drive will still be near the top of the pack in terms of performance, as I feel I perform many read functions daily rather than performing constant writes. If someone has an opinion, let me know if I am wrong...
  • Affectionate-Bed-980 - Friday, April 8, 2011 - link

    Come on. You HAVE to compare against last generation's Vertex 2. It's selling for $169 at Newegg, and you don't even bench against it. Sigh. It's fine if you miss out on some of the other drives, say the Kingston, but skipping the Vertex 2 is a major /facepalm.
  • Shark321 - Friday, April 8, 2011 - link

    Yes, Vertex 2 and Agility 2 benchmarks compared to Vertex 3 would be really helpful here.
  • db808 - Friday, April 8, 2011 - link

    Hi Anand,

    First, let me join the others in complimenting you on your excellent article.

    I saw some interesting data hidden in the information describing the IO access patterns of your new IO benchmarks. I was very surprised that the IO size was so small, and that you mentioned that a majority of the IO was sequential.

    Some of this can be explained by the multi-threaded nature of the tests. Two applications, each doing sequential IO, running against each other, result in interleaved IOs going to the disk, with a result that is very non-sequential. Some of this may be explained by the application runtime actually requesting 4kb IO, and Windows not having time to do "read aheads".

    Windows does have the capability to do larger IO than was requested by the application (opportunistic IOs), as well as read-aheads and write-behinds (which are often coalesced into larger IOs) ... but SSDs may actually be so fast that the Windows IO optimization algorithms don't have enough time to "think".

    You also pointed out that SSD IO performance increases very quickly as the IO size increases above 4kb. It appears that most modern controllers stripe IO in parallel across multiple channels, wear-leveling notwithstanding. So an 8kb IO is 2 parallel 4kb IOs, for example (ignoring SandForce compression behavior).

    The simplest way to cajole the large share of 4kb IOs up to 8kb or larger is simply to increase the NTFS cluster size. This has been a performance optimization technique used with high-performance storage arrays for many years. Many Unix systems actually default to 8kb or larger block sizes, and EMC internally uses a 32kb block size, as examples.

    There is a small negative tradeoff ... some additional slack space at the end of every file. The average slack space per file is 1/2 the cluster size, or 2kb for the default 4kb cluster. Increasing the cluster to 8kb, increases the slack space to 4kb per file ... for a 64kb cluster, it would be 32kb slack per file. The JAM Software "Treesize" utility will actually compute the total slack space for you. With TreeSize Pro, you can even do "what if" analysis and see the impact of changing the cluster size on total slack space.

    In summary, slack space overhead represents only a few percentage points of disk capacity. For example, on my business laptop, my C: drive has about 262K files, and my total wasted space is ~644 MB. Increasing the cluster size to 8kb would roughly double my wasted space ... an additional 644 MB. Not much.

    On my hard-disk based systems that are also memory rich, I regularly run NTFS cluster sizes of 8kb and 16kb ... 64kb for temp file systems. I am pro-actively trading a few percentage points of disk space for higher performance levels. The cost of a few GB of extra overhead on a 1TB disk is a no brainer.
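    The slack-space arithmetic above can be sketched in a few lines (a hypothetical illustration; the file sizes below are made up, not TreeSize output):

    ```python
    def slack_bytes(file_size: int, cluster_size: int) -> int:
        """Bytes wasted in the last, partially filled cluster of one file."""
        remainder = file_size % cluster_size
        return 0 if remainder == 0 else cluster_size - remainder

    def total_slack(file_sizes, cluster_size):
        """Sum of per-file slack; on average roughly cluster_size / 2 per file."""
        return sum(slack_bytes(size, cluster_size) for size in file_sizes)

    # Made-up file sizes, purely illustrative: a file ending exactly on a
    # cluster boundary wastes nothing; doubling the cluster size roughly
    # doubles the expected waste per file.
    files = [100, 4096, 5000, 70_000]
    print(total_slack(files, 4096))  # 4kb clusters -> 10916
    print(total_slack(files, 8192))  # 8kb clusters -> 19108
    ```

    Scaling the same estimate to the ~262K files mentioned above is what gives slack totals in the hundreds of megabytes.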

    But SSDs are a lot more expensive, and space is a lot tighter. I use a SSD as a boot disk on one PC, and I've filled it about 1/2 full, with the OS, applications, page, hibernate, and temps. Performance is great, and the 40%-ish free space is a form of over-provisioning.

    My performance was so good, I had not yet experimented with increasing my cluster size, because I was not able to quantify what the IO size profile looked like. Your IO size statistics from your IO storage benchmark was very enlightening as it shows the (unexpected) large amount of small IO.

    On Sandforce-based SSDs, the controller would compress away all the slack space at the end a file, since Windows pads the last cluster in a file with zeros. So with a larger cluster size, your file system would look fuller under Windows, but all the extra slack space would be compressed on the SSD ... with little detriment to the over-provisioning headroom.

    I know you are exceedingly busy, but it would be extremely interesting to be able to re-run your controlled test environment with the Anand IO Storage 2011 tests on systems that were built with different cluster sizes. I suspect that using a larger cluster size would improve performance on all SSDs, with SSDs with weaker performance showing the most relative gain. From what I have read, increasing the cluster size beyond 16kb (for Sandforce controllers) will have diminishing (but still positive) returns.

    Increasing a Windows 7 boot disk's cluster size from 4kb to 16 kb would increase the wasted space about 4-fold. On my system that would be less than 3GB. It could be a worthwhile trade for performance.

    Another reason to explore larger cluster sizes is the fact that the new 25nm Flash chips typically have page sizes of 8kb, not the smaller 4kb used in the 32/34nm Flash chips. When Windows does 4kb IO on these new 25nm Flash SSDs, it is actually doing sub-page IO, causing the controller to perform a read/modify/write and increasing the write amplification effect. The impact would be similar to doing 2kb IO on SSDs with 4kb page sizes.

    If you assume that the typical compression factor is 2:1 for Sandforce controllers, a 16kb NTFS cluster would often be compressed to fit in a single 8kb page ... sounds like a sweet spot.

    Using a larger cluster size also decreases the amount of work needed to append to a file, as fewer clusters need to be allocated. The cluster size also defines the lower limit of contiguousness. This could be important on SSDs, since we normally don't run defrag utilities on them, so we know that fragmentation will only get worse over time.

    I will point out that using larger cluster sizes may increase memory usage for the kernel buffer pool, and/or reduce the effective number of buffers for a buffer pool of a given size. I only recommend increasing cluster sizes on systems in a "memory rich" environment.

    Again, thank you for your excellent report. Exploring the impact of larger cluster sizes, especially on 25nm-based SSDs, could add an additional dimension to your analysis. 8kb and larger cluster sizes could further improve real-world SSD performance and mask some of the performance drop from using the 25nm chips.

    db
  • mpx999 - Sunday, April 10, 2011 - link

    That's a big limitation on the total number of I/Os. E.g. over 300MB/s SATA-II you'd be limited to ~37.5k IOPS with 8kB transfers, less than some SSDs are capable of, while the limit with 4kB clusters is twice as high, which is still beyond current SSDs for random transfers.
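    The interface-limit arithmetic above is just bandwidth divided by transfer size. A minimal sketch, using decimal units (1 kB = 1000 bytes), which matches the ~37.5k figure; this is an idealized ceiling that ignores protocol overhead:

    ```python
    def interface_iops_limit(bandwidth_bytes_per_s: float, transfer_bytes: float) -> float:
        """Idealized IOPS ceiling for a link: bandwidth / transfer size."""
        return bandwidth_bytes_per_s / transfer_bytes

    SATA2_BANDWIDTH = 300_000_000  # ~300 MB/s usable on SATA-II (decimal units)

    print(interface_iops_limit(SATA2_BANDWIDTH, 8_000))  # 8 kB transfers -> 37500.0
    print(interface_iops_limit(SATA2_BANDWIDTH, 4_000))  # 4 kB transfers -> 75000.0
    ```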

    4kB clusters are a perfect match for x86 processors, which use a hardware 4kB page size, as each memory page corresponds to one block on disk. This is especially important for pagefile reads, which tend to be random by nature, rather than pagefile writes, which are mostly sequential dumps of memory content. Some Unix systems may use 8kB disk block sizes because that's the default page size for SPARC and Itanium processors. For Power and ARM, 4kB is the default, but 64kB can also be used. So I'd advise against using large (larger than the hardware page size) block sizes on a system/boot partition.

    8kB disk block sizes can be useful on partitions dedicated to SQL Server, as its default database pages are 8kB, so it's doing 8kB transfers anyway. Oracle supports multiple block sizes, and their advice is:

    http://www.dba-oracle.com/t_multiple_blocksizes_su...

    "Oracle recommends smaller Oracle Database block sizes (2 KB or 4 KB) for online transaction processing (OLTP) or mixed workload environments and larger block sizes (8 KB, 16 KB, or 32 KB) for decision support system (DSS) workload environments."

    32kB cluster sizes are the default on flash cards for digital cameras, as they mostly see large sequential writes of big pictures.

    BTW, the slow speed of both the Hynix and Intel 25nm versions of the Vertex 2 may be because its aging controller cannot deal with 8kB flash pages.
  • martixy - Monday, April 11, 2011 - link

    So... the SSD market is shaping up to be just about the most confusing and volatile market out there.
    At least that's the impression I get from the articles here. I mean, you'd probably need your very own market research team to get a good deal on an SSD.
    Meh...
  • gixxer - Monday, April 11, 2011 - link

    So, for those of you who have read all the comments up to this point on the OCZ versus Intel debate:

    Where would you spend your money?

    A Vertex 3, an Intel 320, or an Intel 510?
  • MamiyaOtaru - Tuesday, April 12, 2011 - link

    It's not scientific, but after looking at the Newegg user review averages, I'm not touching anything other than Intel.
  • tech6 - Monday, April 11, 2011 - link

    Thank you Anand - you're a real asset to the tech community!

    While OCZ has a potentially great product, they are really proving to be their own worst enemy. Until they demonstrate some maturity I will choose an Intel 320 instead. It may not be the newest or fastest but the G1/G2/G3 series drives have so far proven to be reasonably reliable and perform as advertised.
  • ClagMaster - Monday, April 11, 2011 - link

    Seems to me the Intel 510 offers better mainstream performance than the Vertex 3.

    And I also think Intel does a better job with balancing firmware with memory technology, and has better configuration control of what memory is used for their SSD's.

    I think suffering a 20% risk of getting a Vertex 3 SSD with slower memory is too high for what I pay for such a device.
  • qax - Wednesday, April 13, 2011 - link

    This sort of commitment can make me want to buy OCZ next time, that's for sure.
    Although they shipped slow drives, they accept the responsibility, and that's a big thing in my world.
    I've totally stopped buying from some vendors that are too cheap, resulting in useless/nonexistent support.
    Same reason why I always buy from a physical shop and never from internet shops.
    I need a physical address not too far from my own, where I can turn in a faulty product.
    For me an SSD will always be used for the OS, programs and games. For bulk storage I would have an HDD.
    So space on the SSD is no concern.
  • javishd - Wednesday, April 13, 2011 - link

    I think I'm not alone here, waiting to buy until there are some real comparisons of the $300 120GB range. We look to you for help with the decision! Thanks for your long-term commitment to SSDs. I've been on board since the X25-M G1, and I really appreciate all the info from you guys. I'll keep checking every day, hoping...
  • alexb1 - Wednesday, April 13, 2011 - link

    Anand, THANK YOU VERY VERY MUCH!

    Honestly, there is NO ONE ELSE in the IT industry advocating for enthusiast consumers like you do... kudos!

    I am A VICTIM of OCZ's marketing of the Vertex 2, and recently got an 80GB drive that performs EXTREMELY POORLY compared to ALL benchmarks. To top it all off, it is NOT part of the *recall* drives, as its size wasn't affected by the 25nm transition... so I am just about to return it and eat a 15% restocking fee.

    Now, my question is... should I even bother looking for a 34nm drive, or would one of the newer 25nm drives do OK as a boot drive in Win7? My BIGGEST CONCERNS are reliability and longevity.

    I can either get an F60-A (25nm), an F60 (34nm), or an OCZ Vertex 2 (25nm)... the 25nm drives being $30-40 CHEAPER!
  • faster - Thursday, April 14, 2011 - link

    Today the Intel 510 250GB drive mentioned in these benchmarks can be had at Newegg for $615 (with a $40-off promo until 4/19, $575).

    The Egg also has the RevoDrive X2 240GB at $570 (was $680).

    So we as consumers have the new 250GB 6Gbps SATA III SSDs vs. a 240GB PCIe x4 integrated bootable RAID 0 card within $5 of the same price point.

    Certainly a bootable add in card is not a straight comparison to a single SSD drive, but at the same price point, in the cutting edge overpriced enthusiast level, it is a sensible comparison.

    AnandTech should put the RevoDrive X2 in these benchmark charts to show how they measure up. It would be more interesting than comparing against a WD Raptor, represented by tiny slivers in the performance comparisons. I believe, generally speaking, that the Revo would come away with faster read speeds and be neck and neck with the fastest SSDs on write speeds. AnandTech reviewed a RevoDrive in the past. Is that thing still lying around?
  • daidaloss - Thursday, April 14, 2011 - link

    @faster
    I second your petition to Anand to put the RevoDrive X2 on the charts, so that we real power users would have an idea of how SSDs compare with PCIe RAID cards.

    Also, it sure would be interesting to see how SSDs compare to RAM drives like the HyperDrive5. Supposedly this thing boots up in 4 seconds. It would be interesting to compare such a system with a modern SSD.
  • soltys - Thursday, April 14, 2011 - link

    Looking at the past few articles, I was wondering: what exactly do SSDs do such that random writes are significantly faster than random reads (and, looking at the tables above, 2x-3x faster)?

    Even considering magic firmware + spare area + caching, sooner or later a read-erase-modify-write (R-E-M-W) will have to be performed. And random patterns with random data should emphasize that.

    Any insights or pointers?
  • Norrin - Friday, April 15, 2011 - link

    Hi Anand,

    I have the vertex 3 installed in a 2011 macbook pro.
    I'm having a horrible problem where the OS locks up for about 10 seconds every 30 minutes or so.

    What was the problem that caused OCZ to delay their March 3rd launch?

    What changes were made (firmware version numbers)? How can the firmware on a Vertex 3 be checked, and where can the latest version be downloaded and installed?

    I suspect the problem I'm seeing is the same one which delayed their launch. Maybe they now have a firmware update available which can be installed on the drive I currently have...

    Thanks so much!
  • jammmet - Tuesday, April 19, 2011 - link

    I am experiencing exactly the same issue - did you find a workaround? Also, do you have a spinning HD in your machine too?
  • typofonic - Monday, April 18, 2011 - link

    Wouldn't a Vertex 3 120GB be a really bad choice for a boot drive when it has such low random read performance compared to the older Force F120/Vertex 2, even if I have a new SATA III MacBook Pro?

    I can imagine that launching applications, booting the system etc. would be much slower with this compared to a Vertex 2/Corsair Force F120. Yes, the sequential performance is much better, but wouldn't the older drives seem snappier in normal everyday use?

    Even if the Vertex 3 120GB cost the same as a Vertex 2/Force F120, wouldn't the older drives still be a much better choice for normal use because of their higher random read/write performance? I can't decide if I should go for the Vertex 3 or the Force F120/Vertex 2.

    Anybody who knows more about this?
  • rgbxyz - Wednesday, April 20, 2011 - link

    I own a 120GB Vertex. I've been thinking about adding another one. However, it will not be an OCZ. Not with word coming that it seems, and I stress *seems*, that OCZ once again cannot be trusted. And this time around it's an even bigger issue.

    From the just released report: "OCZ has parlayed investor and market excitement for solid state drives (SSDs) into an amazing story. From a low of $1.79 last summer, OCZ's stock has steadily climbed more than 350% on a feel good tale told by its CEO. But there is a much darker and sinister side that has been well hidden. It is our opinion that OCZ has misrepresented its SSD growth and has financial irregularities that are nearly impossible to reconcile. We believe that some form of a restatement may be required and that the auditors tick and tie review has some substantial inconsistencies. As such, we have sent our findings to the Securities and Exchange Commission asking for clarification on the multiple sets of numbers that we have uncovered. We believe OCZ's Board has the fiduciary responsibility to form a special committee to examine these discrepancies." The bottom line for those curious where this short-seller sees the stock: "If OCZ trades in-line with the comp group, a generous assumption given OCZ's limited asset value, differentiation, and minimal profitability, a reasonable price target would be between $2.58 and $4.98 per share."

    http://www.scribd.com/doc/53435574/OCZ-The-Master-...
  • la taupe - Friday, April 22, 2011 - link

    http://www.scribd.com/doc/53435574/OCZ-The-Master-...
  • geroj - Saturday, April 23, 2011 - link

    It would be interesting to see if 2 120GB SSDs in RAID 0 would be a better choice over a 240GB Vertex 3 or Intel 510 (performance- and cost-wise).

    I'm thinking of putting 2x 120GB Crucial C300s in RAID 0; it would cost two-thirds of a 240GB Vertex 3, but a thorough test would be nice before deciding.

    2x 64GB in RAID 0 is also enough for me (and, as I see, for a lot of us), but what about the performance?
  • ekerazha - Wednesday, April 27, 2011 - link

    New "Vertex 3 Max IOPS" series released.

    120 GB
    Read IOPS: 20,000 -> 35,000
    Write IOPS: 60,000 -> 75,000

    240 GB
    Read IOPS: 40,000 -> 55,000
    Write IOPS: 60,000 -> 65,000
    Max Write: 520 MB/s -> 500 MB/s (a decrease)
  • sor - Friday, April 29, 2011 - link

    Yeah, what the hell is this all about? Anand mentioned in his review that there was supposed to be some sort of firmware cap on IOPS according to SandForce, but that his test Vertex 3 didn't have it, and that OCZ promised the performance of the shipping drive would be identical. It turns out they apparently had TWO versions they were going to ship, and everyone was led to believe that the review sample was the same performer everyone has been snapping up as fast as OCZ can ship them. I think we've been duped.
  • spensar - Saturday, April 30, 2011 - link

    Love the real-world benchmarks, and would like to see the Vertex 2 120GB numbers put up in the comparisons as well.
  • jharmon - Monday, May 2, 2011 - link

    It's been almost a month since an update in the SSD world from AnandTech. Are there any reviews on the near horizon? You said one might want to hold off on a purchase until you got some of the lower-capacity drives in to test against the Vertex 3.

    Thanks for all your hard work in analysis, Anand!
  • jharmon - Monday, May 2, 2011 - link

    Maybe also address the issue of the hardware limitation of the Marvell 91xx controllers. If I understand it correctly, these will not be able to achieve 500 MB/s. If that performance cannot be achieved, is the Vertex 3 worth the premium?
  • mrkimrkonja - Wednesday, May 4, 2011 - link

    From these SSD articles I learned that TRIM is very important with SSDs.
    I read somewhere that when you use SSDs in RAID you lose TRIM support, among other things.
    Is this true?
    I now have 2 old first-generation ADATA SSDs in RAID 0; they have 2x the performance compared to a single drive, but they do not have TRIM anyway.
    I already bought one 60GB Vertex 2 and thought about buying another one.
    I always thought more smaller disks in RAID beat one big one.
    If this is true, I am thinking I would lose more over time without TRIM support than I would gain with RAID 0.

    Can you help me with some info or first-hand tests?
  • Foochey - Wednesday, May 4, 2011 - link

    I would be very interested to see how the 480GB stacks up against these drives. Since these are parallel processing devices, you would think the 480GB would perform even better than the 240GB. Anand, any way you can get a 480GB and test it? OCZ's specs look the same, so I'm guessing they are using smaller-die chips or doing something different with the way they write to the array. Any ideas? Price certainly makes the 480GB out of the question for most of us, but I sure would be interested in its performance.
  • jeffburg - Thursday, May 26, 2011 - link

    OCZ just launched the Max IOPS version. Is it worth the extra $10? What's the difference between the two?
  • Palen - Thursday, June 2, 2011 - link

    Thank you for another great article.

    In the conclusion of this article it's advised to wait a couple of weeks to see how the Vertex 3 120GB goes against the other 3rd-generation 120GB SSDs. Is there any indication that AnandTech will be receiving/testing any of these drives in the near future?

    Another thing is RAID: I've been doing some digging on SSDs, since I didn't know much about them until last week. The general picture is starting to make sense now. Bigger is usually faster (within certain parameters). However, I'm getting conflicting information about RAID. According to some it's worth it, while others urge people to stay away from RAID (no TRIM support, etc.). It would be nice to have a well-respected opinion amid the mass of conflicting information. How would, for instance, 2x 60GB SSDs do against a same type/brand 120GB SSD? And how does the RAID controller fit into this equation? Onboard vs. a professional solution?

    My gut says "stay away from RAID!" because SSDs already scale in performance with their storage size, plus there's the strain an onboard controller will put on your CPU. And paying for a professional RAID card just seems silly, given the range of PCIe SSD storage solutions. But then again, what do I know?
  • paul-p - Saturday, October 22, 2011 - link

    After 6 months of waiting for OCZ and SandForce to fix their firmware freezes and BSODs, I can finally say it is fixed. No more freezes, no more BSODs; performance is what is expected. And just to make sure all of the other suggestions were 100% a waste of time, I updated the firmware and DID NOT DO anything else except reboot my machine, and magically everything became stable. So, after all these months of OCZ and SandForce blaming everything under the sun, including:

    the CMOS battery, OROMs, Intel drivers, Intel chipsets, Windows, LPM, hot-swap, and god knows what else, it turns out that none of those had anything to do with the real problem, which was the firmware.

    While I'm happy that this bug is finally fixed, SandForce and OCZ have irreparably damaged their reputation for a lot of users on this forum.

    Here is a list of terrible business practices that OCZ and Sandforce have done over the last year...

    That OCZ did not stand behind their product when it was clearly malfunctioning is horrible.
    That OCZ did not allow refunds while KNOWING the product is defective is ridiculous.
    Neither OCZ nor SandForce even acknowledged that this was a problem, and they steadfastly maintained it affected less than 1% of users.
    The claim that this bug affected 1% of users is ridiculous. We now know it affected 100% of the drives out there; most users just aren't aware enough to know why their computer froze or blue-screened.
    OCZ made their users beta test the firmwares to save money on their own testing.
    OCZ did not have a solution, but expected users to wipe drives, restore from backups, secure erase, and do a million other things in order to "tire out" the user into giving up.
    OCZ deletes and moves threads in order to do "damage control and PR spin".

    But the worst sin of all is the fact that it took almost a year to fix such a MAJOR bug.

    I really hope that OCZ learns from this experience, because I'm certain that users will be wary of Sandforce and OCZ for some time. It's a shame, because now that the drive works, I actually like it.
  • paul-p - Saturday, October 22, 2011 - link

    I just want to thank AnandTech for being the ONLY site out there that called out OCZ and SandForce for having defective products. While every other hardware site was kissing OCZ's and SandForce's butts and saying that the Vertex 3 SSD was the best thing since sliced bread, AnandTech was one of the only sites that actually acknowledged there was a major bug with the SandForce controllers.

    I can't believe all the review sites out there were praising the Vertex 3 for almost a year when the drive had major BSOD and freezing issues. A review should be more than running a benchmark on the drive; it should check how the drive performs and make sure it is stable. In my eyes, every other review site showed me that they care more about sponsors and advertising dollars than the users who visit their site. So, once again, thanks AnandTech for speaking the truth when others wouldn't.
  • danwat12345 - Sunday, November 13, 2011 - link

    I think I'll keep my 80GB Intel X25-M G2 SSD. From your benchmarks it looks like the 120GB Vertex 3 over SATA II isn't that much better at random read operations than my trusty ole' 80GB Intel.

    I am curious whether your AnandTech Storage Bench 2011 tests were done on completely incompressible data? The random read numbers of the Vertex 3 120GB over SATA II weren't very impressive.
  • chainspell - Saturday, December 10, 2011 - link

    you had me at page 4...

    tell Alex Mei (the CEO) he has my business, I just bought 2 of these on Newegg!
  • rarmstrongtaeus - Tuesday, May 8, 2012 - link

    Hello, I am looking for any OCZ, or other SSD for that matter, that has the Hynix H27UBG8T2ATR NAND devices as shown in the image above. I am willing to pay a bounty of $500 plus the original cost of the drive for one of these drives as long as it is still functional. Please contact me at 719-306-5539
  • jamesearlywine - Monday, October 22, 2012 - link

    I bought a Vertex 2, and within 3 months the drive just stopped working. I was able to get a replacement from OCZ because the drive was under warranty, but I lost data.

    I bought a Vertex 3 about 8 months ago; a couple of months ago the drive began disappearing. My laptop would freeze, and when it rebooted, it didn't detect the Vertex 3.

    I've heard there's a firmware upgrade that might fix this, but I have to back up the drive, install the firmware, then restore from a system image.

    If you buy an OCZ Vertex 2 or 3, make sure to back up your data regularly. I have been severely disappointed by how unreliable these drives are; I bought two, and both purchases were bad experiences.
  • axelsp76 - Thursday, June 5, 2014 - link

    Hi Anand and everyone! Thanks very much for this site. The amount of articles/comments here is mesmerizing!
    I'm having an issue with data transfer rates, and I can't get my head around it.
    I really wanted better transfer rates on my desktop. I'm not looking at a RAID config at this stage.
    I was hoping to get way above the 130 MB/s mark, especially using the SATA III ports on my onboard Intel controller.
    The motherboard is an EVGA X79 Dark. And although I see spikes of 350 MB/s in the first 3 seconds of the transfer (per the Windows 8.1 copy details), it's safe to say that most of the transfer is done at 130 MB/s.
    The BIOS has AHCI and ACPI enabled, and I checked that prior to the OS install.
    Partition alignment on both drives seems to be OK (offset divisible by 4096), and both drives are at about 1/3 of their total capacity. Both are 'twin' 120GB OCZ Vertex 3s, purchased in one lot (but already about 3 years old).
    The test I did was to transfer an 8GB folder from one SSD to the other. The difference I noted between copying an 8GB folder and an 8GB file is that, in my case, the first 3 seconds are stable at 300+ MB/s for the folder transfer, then it quickly drops to ~120 MB/s until the end of the transfer.
    After reading this article, I'm still unsure if I should go for another SSD brand (Samsung 840 Pro or EVO) or try an external SATA controller.
    Thanks!
