29 Comments

  • Adramtech - Monday, November 9, 2020 - link

    30% smaller die? Wow.
  • LiKenun - Monday, November 9, 2020 - link

    Maybe we’ll finally get those 2TB SDXC cards or even a 4TB SDUC card. 4TB would come roughly 13 years after the 4GB SDHC card first hit the market.
  • nandnandnand - Monday, November 9, 2020 - link

    The 16-die stacked package is 1 TB. So if they can fit two of those in or make larger 1 Tb dies, then a 2 TB microSDXC card is easily possible.

    I get the impression that the market only cares about microSD. Despite the fact that a full size SD card is several times larger than microSD, the largest capacity available of each type is still 1 TB. Devices that need a full size SD card can use a microSD card inside of an adapter. Still, we are not that far away from having 4 TB microSDUC Express cards, and similar UFS capacities.
  • martinpw - Monday, November 9, 2020 - link

    What are the practical limits on layer count? Curious how much further it is likely to go...
  • Billy Tallis - Monday, November 9, 2020 - link

    We're pretty close to the limits of what can be done with a single stack of layers. Almost everyone is now making 3D NAND as two stacks of layers (string stacking); e.g., this one is 88+88. That means a lot of manufacturing steps that needed to be done once for early 3D NAND now need to be done twice, plus there's an interface between stacks that needs to be properly aligned and connected. Every roadmap for several hundred layers calls for doing more string stacking of several decks instead of just two. That allows per-die capacity and density to continue increasing, but the manufacturing costs don't scale as dramatically as they did going from, e.g., the 32-layer through 96-layer nodes.
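The cost argument above can be illustrated with a toy model (all numbers here are my own illustrative assumptions, not real fab data): if wafer processing cost grows roughly linearly with deck count while bit capacity grows with total layer count, cost per bit can keep falling as decks are added.

```python
# Toy string-stacking cost model (illustrative assumptions only):
# each extra deck repeats the deck-formation steps, so wafer cost
# grows linearly with deck count, while capacity grows with layers.
def relative_cost_per_bit(layers_per_deck, decks,
                          base_cost=1.0, cost_per_deck=0.8):
    total_layers = layers_per_deck * decks
    wafer_cost = base_cost + cost_per_deck * decks
    return wafer_cost / total_layers

single = relative_cost_per_bit(88, 1)   # one 88-layer deck
double = relative_cost_per_bit(88, 2)   # 88+88, as in this die
print(single, double)
```

Under these assumed constants, the two-deck die is still cheaper per bit than the single-deck one, but the improvement is smaller than a straight doubling would give.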
  • raywin - Monday, November 9, 2020 - link

    and this is why i come to this website, thank you
  • stephenbrooks - Monday, November 9, 2020 - link

    I was quite curious at what point the whole die height would be filled with solid cells. The statistics on thickness suggest that is quite close. E.g. 176*16 = 2816 layers in a full 16-stack, they're 250nm thick so that's 0.7mm.
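The arithmetic in the comment above checks out; as a quick sanity check (using the 250nm layer-thickness figure quoted there):

```python
layers_per_die = 176
dies_per_package = 16
layer_thickness_nm = 250   # figure quoted in the comment

total_layers = layers_per_die * dies_per_package          # 2816
stack_thickness_mm = total_layers * layer_thickness_nm / 1e6  # nm -> mm
print(total_layers, stack_thickness_mm)                   # 2816, ~0.7 mm
```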
  • Adramtech - Monday, November 9, 2020 - link

    Micron says it's 1/5 the thickness of a piece of paper. https://www.micron.com/products/nand-flash/176-lay...
  • azfacea - Tuesday, November 10, 2020 - link

    yes, but they still save steps, they save circuitry, and they will find new ways to save steps. Engineering roadmaps get created as they are used; nobody knew about immersion lithography until people knew about immersion. Who's to say they won't discover new materials and new processes along the way that reduce steps? Who's to say they won't exploit multi-deck stacks to get performance the likes of which has never been seen before? Who's to say they won't revisit the layer limit with the kind of capital and talent they are going to get from this exponential expansion?

    The point is that the roadmap and pipeline are not dry at the moment; they are well lubricated, and no other boom has really known its roadmap ten years in advance. That is not true for HDDs, whose pipeline has been pretty dry and dead; just compare HDD growth in the 2000s to the 2010s. Not to mention that SSDs are shooting at HDDs but HDDs are not shooting back; they are just trying to hold on to what they can.
  • rrinker - Tuesday, November 10, 2020 - link

    At some point there will be a practical limit as those interface bits start consuming more and more space as the number of layers grows. At some point, the tradeoff of an additional layer won't make up for the loss on each lower layer due to those interconnects, no matter how good the manufacturing process. Question is, is that limit before or after the stack gets too thick to manage for the subsequent stages of the manufacturing process.
  • Billy Tallis - Tuesday, November 10, 2020 - link

    The CMOS under Array design seems to provide plenty of space for peripheral logic at the moment. The interface between stacks of NAND doesn't take up any extra horizontal space, and is only as thick as a few layers of memory cells, so it's not a big concern in that respect—but the impact on yield is something to worry about. And no matter how we get to higher total layer counts, we still have to do something to address the die space taken up by the staircase required to expose every active layer so it can be wired up.
  • azfacea - Tuesday, November 10, 2020 - link

    YESSSSSSSSSSSSSSS
    This is the revenge of Moore's law. Where you at, HDD fanbois??? I see your RCS is getting smaller as time goes by. Two years ago, when I started exposing HDDs for the disaster they are, I would get instant hate from 5000 people. Now the frequency and magnitude of the hate is dropping.
    (excluding you npz).
  • Holliday75 - Tuesday, November 10, 2020 - link

    Seek help.
  • Tams80 - Tuesday, November 10, 2020 - link

    Urrrghh. Is nothing sacred? Does everything have to be some sort of battle?

    I second you needing to seek help.
  • ballsystemlord - Tuesday, November 10, 2020 - link

    IDK why you have the "HDDs are a disaster" mentality. I agree with the other 2. Seek help.

    If you don't feel like seeking help, how about reading this to give you some perspective?
    Currently, HDDs are faster at full-drive sequential writes than QLC SSDs. It's true that SSDs are getting cheaper and they are smaller. But they are not close to HDDs in price per TB. By storing more bits per cell, they give up decent endurance (~200 TBW) and decent sustained speed -- although the SLC cache certainly is fast. SSDs for us normal people also don't have the capacity of HDDs.
    Furthermore, HDDs don't require 4 PCIe 4.0 lanes. With AMD having only 24 PCIe 4.0 lanes on their CPUs (the chipset takes 4, so really only 20), and most motherboards having only one PCIe 4.0 x4 M.2 slot, there's not enough bandwidth or physical slots for normal people to replace their HDDs with M.2 SSDs. We could use SATA 3 SSDs, but then you lose speed and, in the case of TLC SSDs, endurance as well.
    I'd love to get cost effective SSDs, with amazing speeds, mind boggling capacity, and great endurance all connected to my PCIe 4.0 CPU. But it's not happening any time soon.
  • Calin - Wednesday, November 18, 2020 - link

    "Furthermore, HDDs don't require 4 PCIe 4.0 lanes"
    Neither do SSDs. As long as you are happy with HDD access speeds, you can run multiple SSDs on a single PCIe lane.
    The problem is that you _want_ the speed from the SSD/NVRAM, so you MUST use enough PCIe lanes.
  • alphasquadron - Tuesday, November 10, 2020 - link

    HDD fanboi here. You wait, we're not done yet. We're making bigger HDDs than ever before, you wait, it's gonna be the biggest HDD you've ever seen. Suck it, wooooooooooo.
  • azfacea - Tuesday, November 10, 2020 - link

    I take that back, the hate is still strong.
  • ballsystemlord - Tuesday, November 10, 2020 - link

    Hate? -> "I'd love to get cost effective SSDs..."
  • Dizoja86 - Thursday, November 12, 2020 - link

    It's not the debate about SSDs vs HDDs that people hate here. It's your needlessly obnoxious approach to the discussion.
  • yeeeeman - Tuesday, November 10, 2020 - link

    just...go to 1k layers so we can reach the 1pb mark tomorrow.
  • TheWereCat - Tuesday, November 10, 2020 - link

    3 slot NAND?
  • nandnandnand - Tuesday, November 10, 2020 - link

    They could probably put 1 PB in a 3.5" drive soon if they wanted to (3.5" HDD volume ÷ microSD volume ≥ 2000). Whenever it is available, you will be using 16 TB, at best. Cost is the problem, not capacity.
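The volume comparison above can be reproduced with nominal form-factor dimensions (the exact drive envelope varies slightly by model, so these are assumed typical values):

```python
# Nominal form-factor dimensions in mm (assumed typical values)
hdd_volume = 147 * 101.6 * 26.1     # 3.5" HDD envelope
microsd_volume = 15 * 11 * 1.0      # microSD card
ratio = hdd_volume / microsd_volume
print(round(ratio))                 # well over 2000
```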
  • AlexDaum - Friday, November 13, 2020 - link

    Also, the memory controller and interface speed are a problem. For a useful 1 PB SSD you would need a fast controller; it won't be useful at all at SATA or even SAS speeds. With SATA and 600 MB/s sequential write speed, it would take over 19 days to fill that drive!
    There is already the Nimbus ExaDrive 100 TB 3.5" SSD; getting another 10x density improvement might be possible with new high-layer-count NAND chips, thinner PCBs, and tighter stacking.
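The fill-time figure is easy to verify (using decimal units for the 1 PB capacity):

```python
capacity_bytes = 1e15            # 1 PB (decimal)
sata_write_bps = 600e6           # 600 MB/s sequential write
fill_days = capacity_bytes / sata_write_bps / 86400
print(fill_days)                 # ~19.3 days
```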
  • RSAUser - Saturday, November 14, 2020 - link

    1 PB / 32 GB/s ≈ 8.7 hours on a maxed-out PCIe 4.0 x16 link.
    Don't think there's really a market for that yet.
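The same calculation at PCIe 4.0 x16 bandwidth (taking a round 32 GB/s; real usable throughput is slightly lower):

```python
capacity_bytes = 1e15        # 1 PB (decimal)
pcie4_x16_bps = 32e9         # ~32 GB/s, rounded
fill_hours = capacity_bytes / pcie4_x16_bps / 3600
print(fill_hours)            # ~8.7 hours
```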
  • rpg1966 - Tuesday, November 10, 2020 - link

    Question from a dummy:

    What's the difference between "stacking of two 88-layer decks" into a 176-layer die, versus stacking 16 of those dies into a package?
  • AlexDaum - Friday, November 13, 2020 - link

    As I understand it, each deck has a continuous channel + SiN charge-trap layer for storing the data, with separate gates. It seems they can't make that channel tall enough for all the layers, so they fab it in two steps, but still on the same wafer, probably with some kind of metal interconnect between the decks.

    For stacking the dies in a package the wafer is first sawed into dies and those are connected to each other with some kind of interconnect, probably using TSVs (Through silicon vias). After stacking all the dies, the chip is then bonded onto some kind of carrier for the BGA package and molded in plastic.
  • Rictorhell - Wednesday, February 17, 2021 - link

    Can anyone speculate IF Micron or Crucial were to release a single or double sided 2230 m.2 ssd, using this 176 layer technology, what the potential capacity of either one of those drives MIGHT be?
  • Rictorhell - Wednesday, February 17, 2021 - link

    Assuming that they were willing to max out the capacity of said SSD, using this current technology?
