By Rohit Sharma
Constant media coverage of an invigorating topic like machine intelligence often prompts us to consider its use in EDA technology. As is often the case, many myths and falsehoods consume our time and effort when we try to apply machine intelligence to EDA. This article aims to dispel those myths and to provide practical advice on applying machine intelligence to your EDA project or product.
Value Proposition
First, there needs to be a clear value proposition for adding machine intelligence to an EDA product. Using machine intelligence to create a me-too product adds no value. EDA customers are too busy to understand or care about an EDA tool’s underlying technology. They just want to use the tool and get results. If the tool delivers value, if it delivers tangible benefits, then they’ll use it. Otherwise, they won’t.
EDA tool developers are already experimenting with AI and machine intelligence without considering this fundamental truth, that is, without a higher-level objective. AI must deliver something better or new: a speed advantage, a performance advantage, new features, new insights, or perhaps even something pleasantly surprising. Before you write a single line of AI-enhanced code, you need to understand clearly how AI will enhance the product. What is the value proposition?
Use Model
There’s a major barrier to customer adoption of AI and machine intelligence technology for EDA tools: EDA users are averse to making decisions based on probabilistic results. Instead, half a century of EDA tool use has conditioned them to expect deterministic outcomes from their tools.
Back in 2003, a prominent visionary and EDA investor was quoted in an interview as saying: “If I open my eyes five years from now, all static analysis in VLSI will be statistical.” Many EDA luminaries have since been proven wrong for betting that EDA users would accept statistical results. As enthusiastic as I am about using machine intelligence to improve EDA tools, I must urge caution based on the history of failed EDA tools that employed a probabilistic use model. Decision-makers and EDA tool users want to see deterministic answers to questions about yield or slack, not probabilistic ones.
Our experience at Paripath in developing the PASER (Paripath Accelerated Simulation Environment) tool bears this out. We discovered that delivering results 50x faster, but with only 92% accuracy, was simply not good enough for end users. EDA users started to use PASER only when its answers became at least 98% accurate. To be adopted in a production flow, the tool had to deliver 99% accuracy.
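The article doesn’t specify how these accuracy figures were measured; one plausible metric is the fraction of predictions that land within a small relative tolerance of golden (for example, SPICE-simulated) values. The Python sketch below uses that assumed definition, with purely illustrative numbers:

```python
import numpy as np

def accuracy_vs_golden(predicted, golden, tolerance=0.02):
    """Fraction of predictions within a relative tolerance of golden values."""
    predicted = np.asarray(predicted, dtype=float)
    golden = np.asarray(golden, dtype=float)
    rel_error = np.abs(predicted - golden) / np.maximum(np.abs(golden), 1e-12)
    return float(np.mean(rel_error <= tolerance))

# Illustrative delays (ns): fast ML estimates vs. golden simulation results.
golden = np.array([0.101, 0.254, 0.399, 0.512])
predicted = np.array([0.100, 0.256, 0.395, 0.514])
print(f"{accuracy_vs_golden(predicted, golden) * 100:.0f}% within 2% of golden")
```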
Data Engineering
There are specific ways to achieve these accuracy goals. The first is data engineering. Machine intelligence is a new approach to EDA tool development, and its models must be trained on a data set. If the data is poor or incomplete, training will produce an inaccurate model. Fundamental software-development rules still apply: garbage in, garbage out.
Without good training data, there’s no way to build good neural-network models. You must cleanse the data before you use it for training; otherwise, the model will draw inaccurate conclusions and customers will not use your tool. The model is not to blame here; the model’s not wrong. The problem lies in poor data engineering, poor data cleansing, and a lack of discipline in preparing the input data.
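As a minimal sketch of what such cleansing might look like, the Python snippet below scrubs a hypothetical table of cell-delay samples; the file name and column names are illustrative, not taken from any real flow:

```python
import pandas as pd

# Hypothetical training table of cell-delay samples; the file name and
# column names (slew_ns, load_ff, delay_ns) are illustrative only.
df = pd.read_csv("delay_samples.csv")

# Remove rows with missing values and exact duplicates.
df = df.dropna().drop_duplicates()

# Discard physically implausible samples: negative delays, zero loads.
df = df[(df["delay_ns"] > 0) & (df["load_ff"] > 0)]

# Drop gross outliers: delays more than five standard deviations out.
mean, std = df["delay_ns"].mean(), df["delay_ns"].std()
df = df[(df["delay_ns"] - mean).abs() <= 5 * std]

df.to_csv("delay_samples_clean.csv", index=False)
```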
High Dimensionality
Next, machine intelligence has a unique ability to solve problems of high dimensionality quickly. EDA problems often have high dimensionality, and over the years EDA developers have perfected the art of segmenting them into sequences of lower-dimensional subproblems. Machine intelligence technology can handle problems with thousands of dimensions, but you need to be careful when tackling them: too many dimensions can produce confused or inaccurate results from AI and deep-learning technology.
It helps to visualize the problem and to analyze the data set before using the data to train an AI-enhanced EDA tool. Several visualization methods can help. For example, t-SNE (t-Distributed Stochastic Neighbor Embedding) reduces a data set’s dimensionality from a very large number to a much smaller one. Figure 1 shows a data set with a dimensionality of 2,000 reduced to a dimensionality of 3.
Reducing a data set’s dimensionality to 3 with t-SNE and visualizing the result lets you quickly see whether the data set defines an easy or a difficult problem. If the problem is difficult, you’ll likely need to lower the dimensionality of both the problem and the data set before using the data to train a neural network.
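A minimal sketch of this workflow, using scikit-learn’s t-SNE implementation on synthetic stand-in data (the real data set is an assumption here):

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Stand-in for a real data set: 1,000 samples with 2,000 features each.
rng = np.random.default_rng(0)
X = rng.random((1000, 2000)).astype("float32")

# Embed into 3 dimensions for visualization. For very wide data it is
# common practice to pre-reduce with PCA before running t-SNE.
X3 = TSNE(n_components=3, perplexity=30, random_state=0).fit_transform(X)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(X3[:, 0], X3[:, 1], X3[:, 2], s=4)
ax.set_title("t-SNE embedding (2000 dims to 3 dims)")
plt.show()
```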
Technology Selection
One factor that determines whether it will be easy or difficult to incorporate machine intelligence into your EDA tool is your choice of AI development tools. AI researchers have developed a long list of frameworks, libraries, and languages for building AI and machine-learning software. Frameworks and libraries such as TensorFlow, Caffe, and MXNet are among the most popular for developing deep-learning models.
However, these tools are not yet popular with the EDA development community. The languages of choice in the EDA community are traditionally C and C++ for development and Tcl for prototyping and creating user interfaces, while the rest of the software world has moved on to newer languages such as Python, Java, and R. Moreover, machine-learning development splits into two distinct processes: training (generating the model) and inference (using the model).
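The sketch below illustrates that split with TensorFlow’s Keras API: a tiny stand-in network is trained and saved in one phase, then reloaded for inference in another. The network, data, and file names are all placeholders, not PASER’s actual model:

```python
import numpy as np
import tensorflow as tf

# --- Training: generate the model (offline, compute-intensive) ---
X = np.random.rand(1000, 8).astype("float32")   # placeholder features
y = np.random.rand(1000, 1).astype("float32")   # placeholder targets

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
model.save("delay_model.keras")

# --- Inference: use the model (inside the shipping tool) ---
loaded = tf.keras.models.load_model("delay_model.keras")
prediction = loaded.predict(X[:1], verbose=0)
```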
Another question to consider is where to generate the model – at the vendor site or the customer site?
Consequently, fitting AI and deep-learning development into an EDA development environment can feel like fitting a square peg into a round hole; you may need to carve corners into the hole.
EDA is a very small player in the overall software market, and relatively few software developers are familiar with writing EDA tools. It’s best to select AI and deep-learning development tools that provide an interface compatible with EDA’s development tools of choice. Some AI frameworks have lower-level C and C++ interface layers that give experienced EDA developers a familiar entry point.
At Paripath, we chose TensorFlow for exactly this reason: it has a lower-level C/C++ interface. Although this approach makes the development path longer, it’s a more familiar path for EDA developers and therefore one that can ultimately lead your EDA development team to success. A detailed comparison of these frameworks appears in the book Machine Intelligence in Design Automation.
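One way to bridge the two worlds, sketched below under the assumption of a recent TensorFlow release, is to train in Python and export a SavedModel directory that a C/C++ host application can load through TensorFlow’s C API:

```python
import tensorflow as tf

# Stand-in for a trained Keras model, such as the one in the previous
# sketch; a tiny untrained network keeps this snippet self-contained.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Write a TensorFlow SavedModel directory (model.export() is available
# in recent TF releases; older TF 2.x code used tf.saved_model.save).
model.export("delay_savedmodel")

# A C/C++ host application can then load this directory through the
# TensorFlow C API (TF_LoadSessionFromSavedModel, then TF_SessionRun),
# keeping the shipping EDA binary in C/C++.
```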
Integration into Legacy Systems
When you understand the value that you expect machine intelligence to add to your new EDA tool, when you’ve cleansed and analyzed the data set, and when you’ve selected an appropriate set of development tools, you’re finally ready to add machine intelligence to your EDA development. There are two use models for AI-enhanced EDA tools. The first uses a trained model to guide the EDA tool’s decision-making. In this use case, the trained neural network doesn’t change; the software’s accuracy doesn’t improve with use unless the company that developed the EDA tool retrains the underlying neural network. This follows the familiar use model of EDA tools built on deterministic algorithms.
In the second use case, the end user can retrain the underlying neural network, which allows the EDA tool to produce better, more accurate results over time. This produces a win/win situation: end users can hone their tools and improve them over time without help from the EDA tool vendor’s application engineers, and if the retrained models are sent back to the EDA developer for incorporation into newer versions of the tool, all users benefit from one another’s training data.
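A minimal sketch of such customer-side retraining, assuming the vendor ships a Keras model file like the one saved earlier (all names and shapes are illustrative):

```python
import numpy as np
import tensorflow as tf

# Customer-side fine-tuning: start from the vendor-shipped model and
# continue training on proprietary data that never leaves the site.
model = tf.keras.models.load_model("delay_model.keras")

X_local = np.random.rand(200, 8).astype("float32")   # stand-in for local data
y_local = np.random.rand(200, 1).astype("float32")

# A small learning rate nudges the model toward the local data without
# discarding what it learned from the vendor's training set.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
model.fit(X_local, y_local, epochs=3, verbose=0)
model.save("delay_model_local.keras")
```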
It’s not clear how you’d support this second use case in the current EDA business environment, where most data sets are proprietary and carefully guarded; most large EDA tool customers want to keep their data in house under tight control. Even in this somewhat restrictive situation, however, EDA tools benefit from the incorporation of machine intelligence because each customer can customize the tool and improve its results.
Machine intelligence has much to add to EDA tools’ capabilities. Only time will tell if the customers want and will accept these new capabilities.
Rohit Sharma, founder and CEO of Paripath Inc., is an engineer, author, and entrepreneur. He has published many papers in international conferences and journals and has contributed to the electronic design automation domain for over 20 years, learning, improvising, and designing solutions. He is passionate about many technical topics, including machine learning, analysis, characterization, and modeling; this passion led him to architect guna, advanced characterization software for modern nodes.
Sharma has written a book titled “Machine Intelligence in Design Automation.” You can download code examples and other information here.
This originally appeared on the SEMI blog.