Solido Technology


Solido has built a portfolio of the world’s most advanced, production-proven variation-aware design technologies. We do all of our own research and are continuously pushing the envelope of what is possible. These technologies are often imitated in marketing hype, but are never duplicated in practice. Solido’s technology is decidedly the best in the world. It simply works the best, both in theory and in practice.

Our core technology is a suite of machine learning algorithms designed from the ground up to solve the types of problems found in the semiconductor industry. We have invested over 100 person-years into developing novel methods that deliver fast, accurate, reliable, and verifiable results. This technology is trusted by >1000 designers at >35 companies, and has been used in production for years. It is the standard for variation-aware design.

The following sections give a brief overview of some of Solido’s main technologies. If you would like to take a deeper dive, please contact us – we are very open with our customers about how our technology works under the hood, and we are always enthusiastic to talk about it (we love this stuff!). You can meet the creators of our technology and ask all of the hard questions – we want you to, as it builds trust and leads to new ideas.

Statistical PVT


Statistical PVT solves two key problems that have not been treated correctly in traditional variation-aware design flows:

  • Insufficient PVT and statistical variation coverage: PVT variation and statistical variation have historically been analyzed separately. For example, you might run your PVT corners, then run Monte Carlo in a separate run for one or two of your worst-case PVT corners. This two-step process is inefficient, and it can cause accuracy problems because correlations between PVT and statistical variation are missed. Statistical PVT provides unprecedented speed, accuracy, and coverage of PVT conditions and statistical process variation together in an easy-to-use package. It provides the same coverage as running thousands of Monte Carlo samples at every single PVT corner, but uses just hundreds of total simulations (see the back-of-the-envelope comparison after this list).
  • Digital corners for non-delay measurements: Digital corners (e.g. FF, SSG) are typically 3-sigma delay corners for NMOS and PMOS transistors. They are designed to bound time-domain measurements. As such, it has never made sense to run non-time-based measurements (e.g. gain on an opamp) at digital corners. Statistical PVT solves this problem by finding design-specific, output-specific statistical corners that actually bound what you are measuring on your design at your target sigma. For example, it can find 3-sigma corners for gain and bandwidth on your opamp.
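
To put the coverage claim above in concrete terms, here is a back-of-the-envelope comparison. The corner and sample counts below are illustrative assumptions, not a benchmark:

```python
# Illustrative comparison of brute-force coverage versus the Statistical PVT
# approach described above. The counts below are assumptions for the sake of
# the example, not measured numbers.

pvt_corners = 45               # e.g. 5 process corners x 3 voltages x 3 temperatures
mc_samples_per_corner = 3000   # a typical 3-sigma Monte Carlo run

brute_force_sims = pvt_corners * mc_samples_per_corner
statistical_pvt_sims = 500     # "hundreds of total simulations"

print(f"Brute force: {brute_force_sims:,} simulations")
print(f"Statistical PVT: ~{statistical_pvt_sims:,} simulations")
print(f"Reduction: ~{brute_force_sims // statistical_pvt_sims}x")
```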

Statistical PVT has a number of Solido-developed technologies under the hood that combine to deliver unprecedented coverage in just hundreds (instead of tens of thousands) of simulations. Here are a few of them:

  • Fast Monte Carlo sampling and accurate density estimation: Solido is now on its third generation of novel Fast Monte Carlo methods. We have developed a number of new sampling methods and density estimation methods that deliver more accurate statistical results with fewer Monte Carlo samples. Statistical PVT uses this as a basis for quickly building an accurate probability density function (PDF) with far fewer samples than regular Monte Carlo methods. It can then use this PDF to identify accurate output values up to 4-sigma.
  • Statistical corner extraction: Statistical corners are design-specific, output-specific corners – think of them as full statistical corners that are custom-made at runtime to bound what you are measuring on your design at your target sigma. These custom-tailored corners incorporate both process (global) variation and mismatch (local) variation. They are created by identifying a process point (a set of values for both global and local statistical variables) that sits exactly at the target sigma in output space. The method is actually pretty simple – it takes a sample that is close to the target sigma and tunes it until it sits at exactly the target sigma (a minimal sketch of this idea follows this list). The harder parts are quickly and accurately identifying the 3- or 4-sigma values in output space and deciding what to do with the corners once you have them.
  • Fast statistical PVT analysis: Now that we have a set of accurate design-specific and output-specific statistical corners, we need to explore them across voltage, temperature, load, bias, etc. There can be a lot of combinations, and Statistical PVT has a fast machine learning method for exploring this full space with verification-quality accuracy. It works by simulating only a carefully selected subset of the space, modeling the space with this data, predicting the rest, and adaptively iterating until the model and predictions are extremely accurate and all of the worst cases are simulated in SPICE.
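
As a minimal sketch of the corner-tuning idea referenced in the corner extraction bullet, here is a toy illustration. The analytic simulate function, the assumed 3-sigma output target, and the bisection-style tuning are stand-ins for SPICE and for Solido's actual algorithms:

```python
import numpy as np

# Minimal sketch of the corner-tuning idea from the list above: take a Monte
# Carlo sample whose output is close to the target-sigma output value, then
# scale that process point until its output sits exactly at the target.
# A toy analytic function stands in for a SPICE simulation, and the 3-sigma
# output target is assumed to be known already (in the real flow it comes
# from the fast Monte Carlo / density estimation step).

rng = np.random.default_rng(0)

def simulate(point):
    """Stand-in for a SPICE measurement (e.g. opamp gain) at a process point."""
    return 60.0 - 1.5 * point[0] + 0.8 * point[1]

target_output = 54.0                         # assumed 3-sigma gain value
samples = rng.standard_normal((2000, 2))     # normalized process/mismatch variables

# Start from the sample whose output is closest to the target...
start = min(samples, key=lambda p: abs(simulate(p) - target_output))

# ...then bisect on a scale factor along that direction until the output
# matches the target, giving a process point that bounds the output at sigma.
def g(scale):
    return simulate(scale * start) - target_output

lo, hi = 0.0, 3.0                            # g(lo) > 0 and g(hi) < 0 for this toy output
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid

corner = 0.5 * (lo + hi) * start
print("statistical corner:", corner, "output:", simulate(corner))
```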

Statistical PVT also includes a suite of technologies for identifying the source of variation problems across both PVT and Monte Carlo. It doesn’t just reveal design problems – it shows you where and why they occur.

Fast PVT


When we have long simulation times and lots of PVT combinations, we have to guess at which PVTs are worth running (sometimes we know, often we do not) and hope we guessed right. We also end up running a lot of simulations for corners that are not actually very important, which wastes valuable SPICE licenses and cluster time, and causes product delays.

Fast PVT is all about getting comprehensive PVT coverage without needing to run all of the SPICE simulations. It is conceptually fairly simple – it works as follows:

  • Choose a small subset of carefully selected PVT combinations
  • Simulate those in SPICE
  • Build predictive models for all of the outputs based on the SPICE data
  • Predict the output values for all of the remaining PVT combinations

From there, a machine learning loop runs strategic additional PVTs to tighten the predictive models in key areas, determines which PVTs could be the worst cases for each output, and simulates anything that could be a worst case. The end result is perfect SPICE accuracy for all of the worst-case corners for each measurement, and excellent predictions for every PVT combination that is not simulated.
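
Here is a minimal sketch of this adaptive loop, assuming a toy analytic measurement in place of SPICE and a plain least-squares model with interaction terms in place of Fast PVT's proprietary modeling. The grid, the stand-in simulate function, and the stopping rule are illustrative assumptions:

```python
import numpy as np

# Sketch of the Fast PVT flow: simulate a small subset of the PVT grid, fit a
# model, predict the rest, then adaptively simulate predicted worst cases
# until every candidate worst case has been verified in "SPICE".

rng = np.random.default_rng(1)

# Full PVT space: 3 process corners x 7 voltages x 7 temperatures = 147 combinations
process = np.array([-1.0, 0.0, 1.0])            # encoded SS / TT / FF
voltage = np.linspace(0.72, 0.88, 7)
temperature = np.linspace(-40.0, 125.0, 7)
grid = np.array([(p, v, t) for p in process for v in voltage for t in temperature])

def simulate(x):
    """Stand-in for a SPICE measurement (e.g. delay) at one PVT combination."""
    p, v, t = x
    return 100.0 - 40.0 * (v - 0.8) + 0.05 * t - 8.0 * p + 0.2 * p * t  # toy

def features(X):
    p, v, t = X[:, 0], X[:, 1], X[:, 2]
    # main effects, pairwise interactions, and squared terms
    return np.column_stack([np.ones(len(X)), p, v, t, p * v, p * t, v * t, v * v, t * t])

simulated = {}                                   # index -> measured value
for i in rng.choice(len(grid), size=20, replace=False):
    simulated[i] = simulate(grid[i])             # initial carefully chosen subset

while True:
    idx = np.array(list(simulated))
    coeffs, *_ = np.linalg.lstsq(features(grid[idx]),
                                 np.array([simulated[i] for i in idx]), rcond=None)
    predictions = features(grid) @ coeffs
    # candidate worst cases: highest predicted values (worst direction assumed)
    candidates = np.argsort(predictions)[-5:]
    new = [i for i in candidates if i not in simulated]
    if not new:                                  # all predicted worst cases are simulated
        break
    for i in new:
        simulated[i] = simulate(grid[i])         # adaptively refine the model

worst = max(simulated, key=simulated.get)
print(f"simulated {len(simulated)} of {len(grid)} PVT combinations")
print("worst-case PVT:", grid[worst], "value:", simulated[worst])
```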

This may (or may not) sound easy, depending on how familiar you are with this space and with design of experiments, regression modeling, and machine learning. Some of the hard problems we have solved are:

  • Predicting complex variable interactions: Modeling independent effects (e.g. temperature separately from voltage) is really easy. Non-linear modeling is also pretty easy. Where things get tricky is when we start looking at interaction effects, where variables cause non-additive effects when changing together. Doing this across many variables is really hard to get right. We do it really well, and it is critical in the semiconductor domain, where strong interactions are very common (see the sketch after this list).
  • Dealing with hard-to-model problems: The right behaviour when things are hard to model is definitely not to give the wrong answer. Sometimes we want to run more PVTs in SPICE to get the right answer; other times we want to provide best effort answers and make it clear to the designer what the error margins may be. Fast PVT has a suite of sensible, designed behaviours for handling these cases easily.
  • How to scale this to lots (e.g. 15) of variables: Designers want to look at more than just process, voltage, and temperature – they want to combine these with on/off flags, multiple supplies, load, bias, etc. Every time a variable is added, the number of PVT combinations explodes multiplicatively. Fast PVT can handle millions of combinations.
  • What to do with n-ary and multi-modal outputs: When there is no gradual trend that can be modeled, we need completely different modeling technology that can classify n-ary or multi-modal behaviours, which we see all the time in this domain.
  • How to prove that the predictions are right: Fast PVT includes a suite of technologies for verifying its work. It is error-aware, with bounds on prediction accuracy, and it demonstrates machine learning convergence through intuitive plots that show that the technology knows what it is doing (and reveal when it does not!). To help build designer confidence, it even has a built-in verification mode that runs Fast PVT, then simulates the rest of the corners, and automatically compares accuracy and runtime.
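
The sketch below illustrates the interaction-effect point from the first bullet: a purely additive model cannot capture an output whose temperature sensitivity depends on the process corner, while a model with an interaction term can. The output function is an illustrative assumption, not real silicon behaviour:

```python
import numpy as np

# Toy demonstration of interaction effects: the temperature slope flips sign
# with the process corner, which an additive model cannot represent.

rng = np.random.default_rng(2)

n = 200
process = rng.choice([-1.0, 1.0], size=n)        # SS / FF, encoded
temperature = rng.uniform(-40.0, 125.0, size=n)
# non-additive interaction: temperature sensitivity depends on the corner
output = 50.0 + 2.0 * process + 0.08 * process * temperature

def fit_and_rmse(X):
    coeffs, *_ = np.linalg.lstsq(X, output, rcond=None)
    return np.sqrt(np.mean((X @ coeffs - output) ** 2))

additive = np.column_stack([np.ones(n), process, temperature])
with_interaction = np.column_stack([additive, process * temperature])

print(f"additive-only model RMSE:   {fit_and_rmse(additive):.3f}")
print(f"with interaction term RMSE: {fit_and_rmse(with_interaction):.3f}")
```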

There are lots of others – the point is that Fast PVT has some heavy-hitting technology under the hood for solving the very hard and very real problems that occur in our unforgiving domain. It is a mature and reliable tool that has been in production for >5 years. It is absolutely the best way to verify across many PVT corners quickly.

Fast Monte Carlo


Fast Monte Carlo is a robust tool used to verify yield and analyze statistical performance up to 4-sigma. Its intelligent distribution fitting allows accurate distribution and yield information to be extracted for any output, including non-Gaussian outputs. If output specifications are set in Variation Designer, designers can select an option for Fast Monte Carlo to finish early once the target yield has been verified. Another option allows Variation Designer to automatically extract the corner at the target sigma when the Fast Monte Carlo task has completed. In all cases, a summary of the distribution and yield information is available, along with verification plots that outline how the algorithm arrived at the results.
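
As a rough illustration of how distribution and yield information can be read off a limited Monte Carlo run, here is a minimal sketch. A simple Gaussian fit stands in for Fast Monte Carlo's intelligent distribution fitting (which also handles non-Gaussian outputs), and the sample data and spec limit are assumed values:

```python
import numpy as np
from math import erf, sqrt

# Sketch: fit a distribution to a limited set of Monte Carlo measurements and
# report estimated yield against a spec. Sample data and spec are assumptions.

rng = np.random.default_rng(3)

samples = rng.normal(loc=62.0, scale=1.4, size=400)   # e.g. 400 gain measurements
spec_min = 57.5                                        # lower spec limit (assumed)

mu, sigma = samples.mean(), samples.std(ddof=1)

def normal_cdf(x):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

fail_prob = normal_cdf(spec_min)          # probability of violating the lower spec
yield_est = 1.0 - fail_prob

print(f"fitted distribution: mu={mu:.2f}, sigma={sigma:.2f}")
print(f"estimated yield: {yield_est * 100:.4f}%  (fail prob {fail_prob:.2e})")
print(f"spec margin: {(mu - spec_min) / sigma:.2f} sigma")
```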

Interactive Environment


Variation Designer’s interactive environment consists of a netlist editor, design history manager, and nominal simulation environment. It allows designers to build, test, and modify their designs without having to struggle with a bunch of command line utilities and sketchy Linux text editors. The environment automatically determines the design hierarchy and project files, updating those views as changes are made.

The netlist editor features syntax highlighting as well as hierarchy ascent and descent, allowing for quick and easy navigation through designs. Design History tracks the revisions and the tasks associated with each, allowing for comparison and reversion between revisions. The nominal simulation environment can quickly simulate the current netlist at the selected project corners, which include foundry process corners, extracted statistical corners, and extracted worst-case environmental corners.

High-Sigma Monte Carlo


High-sigma parts are inherently difficult to verify because it is difficult to measure the effects of variation on high-sigma designs quickly and accurately. With only a few defects in a very large number of samples, Monte Carlo (MC) sampling takes prohibitively long to obtain accurate information in the extreme tail of the distribution where the defects occur. Other methods, such as extrapolating from fewer MC samples or importance sampling, have their own drawbacks: long runtimes, poor accuracy, or being effective only on trivial examples that do not scale to the needs of production designs.
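
The arithmetic behind this is straightforward. For a one-sided Gaussian tail, the expected failure rate and the number of samples needed to observe even a modest number of failures grow like this (the "roughly 100 observed failures" target is just an illustrative rule of thumb):

```python
from math import erf, sqrt

# Back-of-the-envelope illustration of why plain Monte Carlo breaks down at
# high sigma, assuming a one-sided Gaussian tail.

def tail_prob(n_sigma):
    """One-sided Gaussian tail probability beyond n_sigma."""
    return 0.5 * (1.0 - erf(n_sigma / sqrt(2.0)))

for n_sigma in (3, 4, 5, 6):
    p = tail_prob(n_sigma)
    print(f"{n_sigma}-sigma: fail prob ~{p:.2e}, "
          f"~{100 / p:,.0f} samples for ~100 observed failures")
```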

The result is that the actual sigma of high-sigma designs is often unknown, so designers add additional margin to compensate for this uncertainty. This in turn sacrifices power, performance, and area. Still, some designs may fail to meet their high-sigma goals, resulting in poor yields and expensive re-spins.

High-Sigma Monte Carlo (HSMC) is the world’s most advanced and production-proven technology for high-sigma analysis. It started life in 2009 as a fundamentally new approach to high-sigma analysis. HSMC’s first generation was quite limited: it could only scale to 1M samples (i.e. ~4-sigma), it had a capacity of a few hundred devices, and it only worked with continuous distributions. But it had one big differentiator compared with all other methods before it – it gave the right answer consistently. This was the first technology that was Monte Carlo and SPICE accurate, and fully verifiable. This quickly attracted the attention of a number of top memory designers, and in spite of its early limitations, it quickly found a path to production.

Since then, Solido has invested heavily into furthering HSMC technology with generations of speed, capacity, and feature breakthroughs. Today’s HSMC includes some very cool technology that can do things we previously thought to be impossible, such as:

  • Support trillions of samples: This gives HSMC the ability to deliver perfect Monte Carlo and SPICE verification to 7 sigma and beyond. Yes, it actually generates trillions of samples. And yes, it’s still fast (a generic sketch of how a huge sample set can be screened appears after this list).
  • Work on really big stuff: HSMC supports >100K process variables – that’s >20K active devices varying at once. As such, you can use it to run complex cells like big analog components, memory slices (see Hierarchical Monte Carlo below), smaller full memory instances, and substantial macros.
  • Generate full PDFs: A single HSMC run can find not just the tail of the distribution with perfect Monte Carlo and SPICE accuracy, but the entire distribution, just as you would if you ran millions or billions of Monte Carlo samples in SPICE.
  • Support n-ary and multi-modal outputs: Many measurements are naturally binary or multi-modal. For example, if you want to measure whether a bit wrote or not, passes are at one end, failures are at the other, and there is not much in between. This is a hard problem because at high sigma, failures are rare, and you cannot simply run a bunch of simulations, get a bunch of 1s, then model that to somehow find 0s. HSMC’s technology for addressing this very hard problem is totally novel and works great.
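
The sketch below, referenced in the first bullet, illustrates the general idea of screening a huge sample set with a cheap model and simulating only the most suspicious candidates in SPICE. It is a generic illustration only – it is not a description of HSMC's proprietary algorithms – and it uses a toy analytic output and far fewer than trillions of samples:

```python
import numpy as np

# Generic model-guided screening for rare failures: draw a large Monte Carlo
# sample set, rank the samples with a cheap model trained on a small number of
# "simulations", then simulate and verify only the candidates most likely to
# fail. A toy analytic output stands in for SPICE.

rng = np.random.default_rng(4)

def simulate(X):
    """Stand-in for a SPICE measurement; failure is output < 0 (toy)."""
    return 5.0 - X[:, 0] - 0.7 * X[:, 1] + 0.1 * X[:, 2] * X[:, 3]

n_total = 2_000_000                              # trillions in the real tool
samples = rng.standard_normal((n_total, 4))

# Train a cheap linear surrogate on a small simulated subset.
train_idx = rng.choice(n_total, size=500, replace=False)
X_train = np.column_stack([np.ones(500), samples[train_idx]])
coeffs, *_ = np.linalg.lstsq(X_train, simulate(samples[train_idx]), rcond=None)

# Rank all samples by predicted output and simulate only the most suspicious.
predicted = np.column_stack([np.ones(n_total), samples]) @ coeffs
candidates = np.argsort(predicted)[:2000]        # 2000 worst predicted samples
measured = simulate(samples[candidates])
failures = int(np.sum(measured < 0.0))

print(f"simulated {500 + len(candidates):,} of {n_total:,} samples")
print(f"verified failures found: {failures}")
print(f"estimated failure rate: {failures / n_total:.2e}")
```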

HSMC is production-proven to be fast, accurate, scalable, verifiable, reliable, and easy to use. It has been trusted by hundreds of designers at top semiconductor companies for years, through many production cycles. If you are doing high-sigma analysis and not using HSMC, we should really talk.

Cell Optimizer


Understanding the interactions between large numbers of design parameters and determining the optimal combination across multiple outputs is a huge challenge faced by designers. Using Solido’s Cell Optimizer, designers can determine the best overall combination of a multitude of parameters in far fewer simulations than brute-force methods. Optimization is performed for a user-defined goal function, giving the designer ultimate control over the end result of their designs.

Cell Optimizer uses modeling and intelligent output predictions to quickly target the best possible combinations without simulating all of them, drastically reducing the number of simulations, especially when many parameters are being swept. Once an optimal design is found, the integration with Variation Designer’s interactive environment allows designers to quickly apply the parameters to their netlist.
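
As a rough sketch of model-guided parameter optimization, the example below fits a cheap model to a small set of simulated combinations, repeatedly simulates the best predicted untried combination, and stops when the model's favourite has already been verified. The parameter grid, the toy simulate function, and the goal-function weights are illustrative assumptions, not Cell Optimizer internals:

```python
import numpy as np

# Sketch of model-guided optimization over a discrete parameter grid with a
# user-defined goal function. All numbers and functions are toy assumptions.

rng = np.random.default_rng(5)

widths = np.linspace(1.0, 4.0, 13)        # hypothetical device width sweep (um)
lengths = np.linspace(0.10, 0.40, 13)     # hypothetical device length sweep (um)
grid = np.array([(w, l) for w in widths for l in lengths])   # 169 combinations

def simulate(x):
    """Stand-in for SPICE: returns (power, delay) for one parameter combination."""
    w, l = x
    power = 0.5 * w / l
    delay = 10.0 + 3.0 * l / w + 0.4 * (w - 2.5) ** 2
    return power, delay

def goal(power, delay):
    """User-defined goal function to minimize (weights are assumptions)."""
    return delay + 2.0 * power

def quad_features(X):
    w, l = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), w, l, w * l, w * w, l * l])

tried = {i: goal(*simulate(grid[i]))
         for i in rng.choice(len(grid), size=10, replace=False)}

while True:
    idx = np.array(list(tried))
    coeffs, *_ = np.linalg.lstsq(quad_features(grid[idx]),
                                 np.array([tried[i] for i in idx]), rcond=None)
    best_predicted = int(np.argmin(quad_features(grid) @ coeffs))
    if best_predicted in tried:
        break                                    # model's favourite already verified
    tried[best_predicted] = goal(*simulate(grid[best_predicted]))

best = min(tried, key=tried.get)
print(f"simulated {len(tried)} of {len(grid)} combinations")
print("best parameters (w, l):", grid[best], "goal:", round(tried[best], 3))
```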

Hierarchical Monte Carlo


Full-chip memory statistical verification is a really hard problem. It is not even close to possible to run Monte Carlo on the whole memory. Bleeding edge simulation technology enables small memory instances to run with fairly long runtimes, but we are a long way off from being able to run Monte Carlo on the full-chip memory. In the meantime, we need to stick to memory slices and critical paths for our Monte Carlo analysis.

Every method we have ever seen for applying statistical variation to these reduced memory structures has been deeply flawed. There are lots of wrong ways to do this that lead to various levels of over-design. The simplest is to just run a small (e.g. 3K) number of Monte Carlo samples on the whole structure, then extrapolate the result to the target sigma for the bit cell. This produces terrible results, as it rolls the random variation dice far too many times on the sense amps, control logic, and global statistical variation, leading to a very pessimistic result. There are other, more elaborate methods, like taking a 6-sigma bit cell, a 5-sigma sense amp, a 4-sigma control block, and a 3-sigma global statistical sample, combining them all together into a critical path netlist, then simulating. Although the target sigmas may be right for each component, combining them all at the same time is massively pessimistic, as they are simply never going to all occur in the same sample. Our customers have measured the error from these types of methods, and it is huge – typically in the 30-60% range. That leads to a lot of over-design.
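
A quick back-of-the-envelope calculation shows just how pessimistic stacking per-component worst cases is. Assuming the component variations are independent, the probability that a single Monte Carlo sample is simultaneously at each component's worst case is the product of the individual tail probabilities:

```python
from math import erf, sqrt

# How unlikely is it that one sample is simultaneously at 6-sigma for the bit
# cell, 5-sigma for the sense amp, 4-sigma for the control block, and 3-sigma
# for global variation? (Independence is assumed for this illustration.)

def tail_prob(n_sigma):
    return 0.5 * (1.0 - erf(n_sigma / sqrt(2.0)))

combined = 1.0
for name, n_sigma in [("bit cell", 6), ("sense amp", 5), ("control", 4), ("global", 3)]:
    p = tail_prob(n_sigma)
    combined *= p
    print(f"{name:10s} {n_sigma}-sigma tail probability: {p:.2e}")

# combined is on the order of 1e-23 – roughly a 10-sigma event, far beyond any
# realistic chip-level target, which is why the stacked corner is so pessimistic.
print(f"probability of all occurring in the same sample: {combined:.1e}")
```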

Hierarchical Monte Carlo solves this problem with perfect statistical accuracy. It works by doing a full virtual statistical reconstruction of the full on-chip memory many times. It is statistically identical to running Monte Carlo on the full-chip memory, but it runs quickly. It is also easy to use, and does away with the need to convert architectures into sigma targets: just map the chip architecture into the tool and it takes care of the rest.
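
Here is a minimal sketch of the virtual-reconstruction idea, assuming toy Gaussian component distributions, illustrative instance counts, and a simple margin-based failure criterion in place of SPICE-accurate component data. It is meant only to illustrate the concept, not Hierarchical Monte Carlo's actual implementation:

```python
import numpy as np
from scipy.stats import norm

# Sketch: statistically rebuild many virtual copies of the full memory from
# per-component distributions, take the worst instance of each replicated
# component in every virtual chip, and read chip-level yield from the results.
# Architecture counts, distributions, and the failure criterion are assumptions.

rng = np.random.default_rng(6)

n_chips = 100_000                 # virtual chips to reconstruct
architecture = {                  # component: (instances per chip, mean, std) of a margin metric
    "bit cell":  (1_000_000, 6.0, 1.0),
    "sense amp": (1_024,     5.0, 1.0),
    "control":   (1,         4.5, 1.0),
}

worst_margin = np.full(n_chips, np.inf)
for name, (count, mu, sigma) in architecture.items():
    # Worst (minimum) of `count` i.i.d. Gaussian draws per chip, sampled directly
    # from the distribution of the minimum instead of drawing `count` values.
    u = rng.random(n_chips)
    worst = norm.ppf(1.0 - u ** (1.0 / count), loc=mu, scale=sigma)
    worst_margin = np.minimum(worst_margin, worst)

failures = int(np.sum(worst_margin < 0.0))
print(f"reconstructed {n_chips:,} virtual chips, failures: {failures}")
print(f"estimated chip yield: {(1.0 - failures / n_chips) * 100:.3f}%")
```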

Hierarchical Monte Carlo builds on the same production-proven algorithmic technology developed for High-Sigma Monte Carlo, ensuring the same reliability and speed, and like High-Sigma Monte Carlo, the results are inherently verifiable and brute-force accurate.

Hierarchical Monte Carlo works great for our production customers, answering variability questions that previously simply could not be answered. It removes large uncertainties, and ultimately saves a lot of over-design. If statistical verification of full-chip memories is a pain point, you should definitely get in touch with us – we can help you get this right.