John Barth, Invecas

June 6, 2016, Full Video Transcript

I’m going to tell you a little bit about how we actually do some of these variation-aware designs.

So I’ll start off real quickly with what our mission is at Invecas; basically I’m going to try to describe to you why we do what we do.

[Slide: JBarth-1]

Basically we provide leading-edge memories and hardware-proven yield and performance to enable first-rate execution. I want to point out that throughout our mission statement, we go back to things like yield, which speaks specifically to the need for variation-aware design.

We try to differentiate our circuits through innovation. You need to be able to verify these innovations before you commit them to silicon.

And so these innovations need variation-aware design facilities before you commit them to hardware. So it’s very important there as well.

We basically do power/performance-optimized memories and memory compilers – and there’s a wide variety of compilers that we offer at Invecas. And we implement read and write assist features to enable you to go to very low voltages and hit some of these IoT markets as well.

We also have complicated dual-rail supplies that enable the peripheral logic to go to a much lower supply level. This introduces an additional level of complexity that needs to be addressed during the analysis phase.

We also build in self-test engines so that we can actually verify that the hardware is doing what we expect when it gets to the tester platform.
And we do this in three different technologies at this point in time for GlobalFoundries. They are:

  • 22 nanometer Fully-Depleted SOI technology — very interesting technology. I would encourage you all to take a look at it because it’s a very unique technology.
  • 14 nanometer FinFET
  • And then we’re developing 7 nanometer in the very near future

So once again – I’m not going to take you through our entire guiding principles here – but I just wanted to point out what we feel our differentiation points are and how they speak specifically to the need for a variation-aware type of design tool.

We talk about maximizing yield, we talk about analyzing innovative circuits. We need to be best-of-breed, so we use best-of-breed tools.

An additional thing is that from a resource perspective we want to maximize the reuse of our designs, and when you do that, you want to make sure that the circuits that you’re propagating to all those other pieces of IP are very strong in their foundation.

So you want to spend the extra time and effort to make sure you get that core foundation design correct right up front. It’s worth that extra time and energy to get those circuits right, so you don’t propagate a bad design.

[Slide: JBarth-2]

So this chart just demonstrates some of the complexities associated with a dual-rail supply.

  • On the left-hand axis, we have the bit-cell voltage, so there’s a separate rail applied to the bit cell.
  • On the horizontal axis is the voltage that’s applied to the periphery of the memory. Typically this is the same voltage that’s also applied to the logic of the chip.

And so the SRAM bit itself has a minimum operating voltage that tends to be higher than the minimum operating voltage of the peripheral logic.

So in order to enable these high performance chips to minimize their power envelope, we want to enable them to drive their operating voltage as low as possible below the minimum operating voltage of the bit cell.

And this creates a very large window of operation that we have to cover for our memories. We have to look at all these different corners. So we need the tools to help us accelerate the amount of simulation we do, across not only process, voltage and temperature, but also the multiple voltage ranges of two different power supplies and how they play off each other.

In some cases the bit cell voltage can be higher than the peripheral voltage, but it can also be lower as well. So you have a positive differential voltage and a negative differential voltage that you have to worry about.
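To give a feel for how quickly that corner space grows, here is a minimal sketch of enumerating process, temperature and the two independent rails; the corner names and voltage levels are made up for illustration, not Invecas’s actual values.

```python
# Illustrative sketch only (not Invecas's actual flow): enumerating the corner
# space for a dual-rail memory where the bit-cell rail and the periphery rail
# move independently. All corner names and voltage values are made up.
from itertools import product

process_corners = ["TT", "FF", "SS", "FS", "SF"]   # global process corners
temps_c = [-40, 25, 125]                           # junction temperatures (degC)
v_cell = [0.70, 0.80, 0.90]                        # bit-cell rail (V), hypothetical
v_periph = [0.50, 0.65, 0.80, 0.90]                # periphery/logic rail (V), hypothetical

corners = []
for p, t, vc, vp in product(process_corners, temps_c, v_cell, v_periph):
    corners.append({
        "process": p,
        "temp_c": t,
        "v_cell": vc,
        "v_periph": vp,
        # The rail differential can be positive (cell above periphery) or
        # negative (periphery above cell); both directions have to be covered.
        "v_diff": round(vc - vp, 2),
    })

print(f"{len(corners)} PVT x dual-rail corner combinations to cover")
```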

So let me talk a little bit more about why we focus on variation-aware design. Now who out here worries about a one-in-a-million problem, right? Typically nobody worries about a one-in-a-million problem in their day-to-day lives.

But when we have a chip out there with 300 megabits of SRAM, guess what? You’ve got 300 of those one-in-a-million problems on every chip.

So you do have to worry about it right up front. And I don’t know how many people play Powerball, but just so you know, I didn’t just pick one-in-300-million out of the air – it turns out those are the same odds you have of winning the Powerball.

So in this case you have to win the Powerball every time you want to build a good chip. You don’t want to play those odds. So you really have to focus very deep into the sigma curve.
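To put a rough number on “deep into the sigma curve,” here is a quick back-of-the-envelope calculation; the 300-megabit figure comes from above, while the chip-level yield target is purely an assumption for illustration.

```python
# Back-of-the-envelope: how deep into the sigma curve 300 megabits of SRAM
# pushes the per-bit requirement. The chip yield target is an assumption.
from statistics import NormalDist

n_bits = 300e6                  # ~300 Mb of SRAM per chip (from the talk)
target_chip_yield = 0.99        # assume bit fails may cost at most ~1% of chips

# Chip yield ~= (1 - p_bit)^n_bits, so solve for the allowed per-bit fail rate:
p_bit = 1 - target_chip_yield ** (1 / n_bits)
sigma = NormalDist().inv_cdf(1 - p_bit)    # single-sided Gaussian-equivalent sigma

print(f"break-even (one failing bit per chip) odds: 1 in {n_bits:.0e}")
print(f"allowed per-bit fail probability          : {p_bit:.1e}")
print(f"equivalent single-sided sigma             : {sigma:.1f}")
```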

So you really do need very high-sigma analysis. We also have a very intense drive towards performance, power and area. So in order to win business, you have to press the boundaries. You have to provide specifications that are very attractive to your customers.

That’s how you win business. But that’s not the only part of the picture. If you want to stay in business, you have to deliver and you have to be able to deliver those yields at very high-sigma values.

You have this tradeoff you’re always trying to play with: being as aggressive as you can to win business, yet conservative enough that you don’t end up with a manufacturing yield problem – because then you’ll be out of business.

To do this in a reasonable amount of time, you could go after Monte Carlo, but that’s very cost-prohibitive when you try to get to those extreme sigma values.
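As a rough illustration of why brute-force Monte Carlo is cost-prohibitive out there, this sketch estimates how many samples you would need just to observe a handful of failures at around six sigma; the per-simulation runtime is an assumption.

```python
# Rough cost of brute-force Monte Carlo at high sigma (illustrative numbers).
from statistics import NormalDist

sigma_target = 6.0
p_fail = 1 - NormalDist().cdf(sigma_target)     # ~1e-9 single-sided fail probability
fails_wanted = 10                               # want ~10 observed fails for confidence
n_samples = fails_wanted / p_fail               # expected number of samples required

seconds_per_sample = 1.0                        # assumed cost of one SPICE run
cpu_years = n_samples * seconds_per_sample / (3600 * 24 * 365)

print(f"samples needed for ~{fails_wanted} observed fails: {n_samples:.1e}")
print(f"roughly {cpu_years:.0f} CPU-years at {seconds_per_sample:.0f} s per simulation")
```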

So we need a high sigma tool and in this case we’ve chosen to use Solido for that tool. Now when you talk about these types of statistics, when you’re talking about one in 300 million, it’s very easy to get lost in the math. The nice thing about the Solido tool is that it has a very innovative approach.

It’s easy for someone like me, who doesn’t have a strong background in statistics, to understand how the tool works and actually verify it is working the way I expect it to.

In terms of bonus features, it has more than just this high sigma capability. It gives you very good feedback in terms of what specific devices and parameters are actually driving your yield losses, and it enables you to focus in and optimize specifically on those particular devices.

And whenever you do this analysis, it’s all about driving change in the design. How are you going to make your design better? To do that, you really do need to know not only what the yield is, but what is driving your yield to that point.

And we get some very valuable information in terms of which device is actually determining that yield. They also have very good queuing management, which is necessary when you’re dealing with a lot of jobs. And we can do PVT sweeping in a really nice graphical user interface.

So just really quickly, here’s a snapshot from their tool. You know, the nice thing about their tool is that it doesn’t use any fancy math to actually calculate the sigma. It actually rolls the dice as many times as necessary. So it’ll roll it 300 million times or 600 million times – whatever you need to get to that sigma value.

[Slide: JBarth-3]

The nice part about the tool is that it basically does a two-step simulation.

  • First, it runs a random sampling.
  • It uses the data from that sampling to build a model.
  • Then it focuses all of your simulations on the tail of the distribution that you’re most worried about (there’s a rough sketch of this kind of flow a little further below).

So I don’t want to necessarily get too much into what this picture’s all about, but basically your general sampling is in the gray. And the purple sampling is where it has determined that these points are actually on the tail of the distribution.

Now, how do you know it has built the right model and is picking the right points in the tail of the distribution? Well, it builds an ordered list of simulation points, so it simulates the one that it thinks is going to be the worst first, and then the next one second, and the next one third.

And you can see on the next verification slide whether it’s doing that correctly or not. I’ll talk about that briefly in a second.
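The sketch below is only a generic illustration of that sample-then-model-then-order-worst-first idea, not Solido’s actual algorithm; the spice_measure function is a hypothetical stand-in for a real SPICE measurement.

```python
# Generic illustration of the flow described above (NOT Solido's algorithm):
# 1) random sampling, 2) fit a cheap model, 3) order a large candidate pool by
# predicted badness and simulate worst-first so effort lands on the tail.
import numpy as np

rng = np.random.default_rng(0)

def spice_measure(x):
    # Hypothetical stand-in for a SPICE measurement (e.g. a read margin) as a
    # function of normalized device-parameter variations x.
    return 1.0 - 0.3 * x[0] - 0.2 * x[1] + 0.05 * x[0] * x[1]

n_params = 2
train_x = rng.standard_normal((1000, n_params))             # step 1: random sampling
train_y = np.array([spice_measure(x) for x in train_x])

# Step 2: build a simple linear surrogate model y ~ X.w + b with least squares.
A = np.hstack([train_x, np.ones((len(train_x), 1))])
w, *_ = np.linalg.lstsq(A, train_y, rcond=None)

# Step 3: score a huge candidate pool cheaply, then simulate only the worst ones.
candidates = rng.standard_normal((1_000_000, n_params))
scores = np.hstack([candidates, np.ones((len(candidates), 1))]) @ w
order = np.argsort(scores)                                   # lowest margin first
tail_points = candidates[order[:200]]
tail_results = np.array([spice_measure(x) for x in tail_points])

print("worst simulated margin in the tail:", tail_results.min())
```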

The important thing is that the normal way you would do this without the tool is to run a small number of samples using regular Monte Carlo, get a sigma value, and then project or extrapolate that.

And what you can see on the bottom part of this graph on the horizontal axis is basically a measured parameter, and on the vertical axis is a sigma value.

And if you were just to use regular Monte Carlo and extrapolate, you’d see that this plot is not actually a straight line; it’s a curve. And if you were to do that extrapolation, you’d probably find that you would have some error associated with it, because things don’t behave the same way out on the tail of the curve as they do across the flat part of the curve.

This would introduce error. That error would either come in the form of yield loss, or pessimism in the design point that makes you less competitive. So it’s very important to have a tool that actually simulates the points out in that regime where you’re worried.
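As a toy demonstration of that pitfall, the sketch below fits a Gaussian to a small Monte Carlo sample of a deliberately skewed quantity and compares the six-sigma extrapolation against the true tail; the lognormal choice and all of the numbers are illustrative only.

```python
# Toy demonstration of the extrapolation pitfall: Gaussian extrapolation from a
# small Monte Carlo run versus the true tail of a skewed (lognormal) quantity.
from math import exp
import numpy as np

rng = np.random.default_rng(1)
SIGMA_LN = 0.25

mc_sample = rng.lognormal(mean=0.0, sigma=SIGMA_LN, size=3000)   # small MC run

mu, sd = mc_sample.mean(), mc_sample.std()
gaussian_6s = mu + 6 * sd            # extrapolate assuming the bulk behavior holds

true_6s = exp(6 * SIGMA_LN)          # exact 6-sigma quantile of this lognormal

print(f"Gaussian extrapolation to 6 sigma: {gaussian_6s:.2f}")
print(f"true 6-sigma tail value:           {true_6s:.2f}")
# Here the extrapolation is optimistic; on silicon that error shows up as yield
# loss (or, erring the other way, as an over-margined, less competitive design).
```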

[Slide: JBarth-4]

This is the verification modeling. Basically, on the horizontal axis is the ordered simulation point, and on the vertical axis is the measured parameter. And you can see on the far left that the very first simulation is picking the worst-case point, which gives you some confidence that their model is correct.

And as you move to the right, as long as you see a decreasing value in the parameter you’re measuring, that means that the model is relatively correct in terms of picking the points.

So now you know that the small number of samples you’re actually simulating are on the tail of the distribution.
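If you wanted to apply that same sanity check numerically, a minimal sketch might look like this; the measured values are made up and simply mimic a worst-first ordering.

```python
# Minimal sketch of the verification idea: if the model ordered the simulation
# points well, the measured parameter should trend downward (worst first) as
# you walk through the ordered list. These values are made up.
import numpy as np

measured = np.array([1.41, 1.38, 1.39, 1.35, 1.33, 1.30, 1.31, 1.27])  # ordered results

downward_pairs = int(np.sum(np.diff(measured) <= 0))
print(f"{downward_pairs}/{len(measured) - 1} adjacent pairs trend downward as expected")
```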

Very, very powerful. So with that, I’ll close.