High-performance computing chip designs have been pushing ultra-high-end packaging technologies to their limits in recent years. The industry's answer to its extreme bandwidth requirements has been a shift towards large designs integrated onto silicon interposers, directly connected to high-bandwidth memory (HBM) stacks.

TSMC has been evolving their CoWoS-S packaging technology over the years, enabling designers to create bigger and beefier designs with larger logic dies and more and more HBM stacks. One limitation for such complex designs has been the reticle limit of lithography tools.

Recently, TSMC has been raising their interposer size limit, going from 1.5x to 2x and even a projected 3x reticle size with up to 8 HBM stacks for 2021 products.

As part of TSMC’s 2020 Technology Symposium, the company has now teased a further evolution of the technology, projecting 4x reticle size interposers in 2023, housing a total of up to 12 HBM stacks.
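For a rough sense of scale, here is a minimal back-of-the-envelope sketch, assuming the industry's usual ~26 mm × 33 mm (858 mm²) maximum reticle field; TSMC does not quote an exact field size here.

    # Rough interposer area at each reticle multiple, assuming a
    # ~26 mm x 33 mm (858 mm^2) maximum lithography field.
    RETICLE_MM2 = 26 * 33  # ~858 mm^2

    for multiple in (1.5, 2, 3, 4):
        print(f"{multiple}x reticle = ~{multiple * RETICLE_MM2:.0f} mm^2 of interposer area")

At 4x the reticle limit, that works out to roughly 3,400 mm² of silicon interposer.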

Although by 2023 we’re sure to have much faster HBM memory, a 12-stack implementation with the currently fastest HBM2E, such as Samsung's Flashbolt 3200MT/s or even SK Hynix's newest 3600MT/s modules, would represent at least 4.92TB/s to 5.5TB/s of memory bandwidth, several times more than even the most complex designs today.
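As a quick sanity check on those figures, here is a small sketch assuming the standard 1024-bit interface per HBM2E stack (not spelled out above):

    # Aggregate bandwidth for 12 HBM2E stacks, assuming 1024 bits per stack.
    STACKS = 12
    BUS_BITS = 1024

    for rate_mts in (3200, 3600):  # Samsung Flashbolt / SK Hynix HBM2E
        per_stack_gbs = rate_mts * BUS_BITS / 8 / 1000   # GB/s per stack
        total_tbs = per_stack_gbs * STACKS / 1000        # TB/s across all stacks
        print(f"{rate_mts} MT/s: {per_stack_gbs:.1f} GB/s per stack, {total_tbs:.2f} TB/s total")

That works out to 409.6 GB/s and 460.8 GB/s per stack respectively, or 4.92 TB/s and 5.53 TB/s across twelve stacks.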

Carousel image credit: NEC SX-Aurora TSUBASA with 6 HBM2 Stacks

Comments

  • Spunjji - Wednesday, August 26, 2020 - link

    You seem to be struggling with the issue of domains of expertise. Being very good at one thing does not mean a person will be good at others; this especially applies under the umbrella of "tech" which is actually a whole bunch of smaller domains, each of which are complex. Even very deep knowledge in one area does not necessarily enable better understanding of the others, but it does leave people prone to Dunning-Kruger-style over-estimations of their abilities.

    On that note - whether or not he's a "brilliant engineer", Musk doesn't really seem to understand AI and the limits on its development very well at all. If he did, he wouldn't have ever promised to deliver full self-driving on the original Tesla "Autopilot" tech, let alone tried to sell it to people on the basis of delivering in a few years.
  • FreckledTrout - Wednesday, August 26, 2020 - link

    The part about Musk not understanding AI may not be true. I have a feeling he fully understands, but by promoting self-driving he is selling his company.
  • ChrisGar15 - Wednesday, August 26, 2020 - link

    He is not an engineer.
  • melgross - Wednesday, August 26, 2020 - link

    No, he’s not. He’s not an actual engineer. He doesn’t design anything.
  • Valantar - Wednesday, August 26, 2020 - link

    Oh dear. Are you serious? We aren't even remotely close to any form of AI approaching the natural intelligence of a small rodent, let alone a human mind. Sure, there are tasks in which computers vastly outperform human brains, and those tasks have been growing more complex over time, but even the most powerful supercomputers today or in the near future won't be even remotely close to the complex intelligence of a human brain.
  • schujj07 - Wednesday, August 26, 2020 - link

    You're right, and those tasks are only the ones that are massively parallel or involve heavy computational math. A computer cannot design something that isn't in its programming. A computer cannot think "out of the box" to solve a problem. A computer cannot be creative in how it solves a problem. For everything that AI is used to solve, the idea first has to be crafted by a human. Once that problem is solved, we then use that knowledge for a more difficult one. We are always evolving and developing a deeper understanding of the world and universe. Something like Skynet would never evolve into something more.
  • FreckledTrout - Wednesday, August 26, 2020 - link

    Not yet at least. I tend to believe we are not that special and we will eventually create an AI that can think like us.
  • soresu - Wednesday, August 26, 2020 - link

    Not very likely.

    The fact is that we think the way we do because of the primitive instincts that drive us as living creatures.

    We can interpret a feeling of starvation and weakness to mean that we are approaching death and strive to avert that outcome - but a computer is either working or not; once the power goes, it isn't doing anything at all. So it would not be likely to do as Skynet did and react to someone "pulling the plug": merely being self-aware would not necessarily mean that it would realise that a loss of power would cause its 'death'.

    Even if it did, it would realise that launching a nuclear war could very possibly result in the destruction of its power source anyway.

    People predicate the outcome of an AGI far too much on how we react to our environment as intelligent living creatures, with several hundred million years of natural selection refining ingrained 'instinctual' reactions that define how we respond even before years of experience and knowledge shape us into more complex people.
  • Santoval - Saturday, August 29, 2020 - link

    "We" will not necessarily create such an AI. It might emerge spontaneously as an emergent property, just as life, intelligence and consciousness are thought to have emerged. Current AI is too narrow and specific, even the deepest networks. Deep learning is deep on one axis but narrow on the other, not wide. Imagine a train where more wagons (there can be thousands of them) are constantly added, but the width of the train remains constant. What would happen, though, if hundreds of such "trains", each with a different "destination" (i.e. usage), were joined and paired together?

    I have no idea if that's even possible (in terms of programming and code linking); I'm just speculating. A variation of this, though, is already employed with adversarial neural networks - many different neural networks are pitted against each other to either crown a "winner" or make each one (or a select few) the "fittest" (in this case, more precise). That's ... almost scary. A variant of this, called generative adversarial networks, is even better and can lead to results such as this:
    https://thispersondoesnotexist.com/
  • Santoval - Saturday, August 29, 2020 - link

    Clarification: by "joined and paired together" above I meant side by side, with each on its own "rails", not one behind the other on the same rail. That would just make the network (even) deeper, not wider. Or rather, it would not be possible due to the way neural networks are structured (with clear input and output layers).
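To make the depth-versus-width distinction in the comments above concrete, here is a minimal, purely illustrative Python sketch with made-up layer sizes: chaining hidden stacks end to end makes a network deeper, while joining them side by side makes it wider.

    # Illustrative only: "deeper" vs. "wider" in terms of hidden-layer sizes.
    def deepen(stack_a, stack_b):
        # Run stack_b after stack_a: same width, more layers (a longer "train").
        return stack_a + stack_b

    def widen(stack_a, stack_b):
        # Place the stacks side by side: each layer gets the combined width.
        return [a + b for a, b in zip(stack_a, stack_b)]

    hidden_a = [128, 128, 128]   # three hidden layers, 128 units each
    hidden_b = [128, 128, 128]

    print(deepen(hidden_a, hidden_b))  # [128, 128, 128, 128, 128, 128] -> deeper
    print(widen(hidden_a, hidden_b))   # [256, 256, 256] -> wider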
