Advanced Techniques in Logic Synthesis, Optimizations and Applications

Since that time, Synopsys has dominated the logic synthesis market for ASICs, although other tools from Synplicity (acquired by Synopsys) and Exemplar (acquired by Mentor) had success in the FPGA market. As implementation geometries shrank, delays associated with wires grew while gate delays shrank.

Subcenters A brief history of logic synthesis. A brief history of design We start with schematics and end with ESL. A brief history of logic simulation Important events in the history of logic simulation. A brief history of logic synthesis Early development associated with logic synthesis. Acronyms Commonly and not-so-commonly used acronyms. Advanced Smart Fill At newer nodes, more intelligence is required in fill because it can affect timing and signal integrity, and fill may be required for all layers.

Advanced Packaging A collection of approaches for combining chips into packages, resulting in lower power and lower cost. Agile An approach to software development focusing on continual delivery and flexibility to changing requirements. Agile Hardware Development How Agile applies to the development of hardware systems. Air Gap A way of improving the insulation between various components in a semiconductor by creating empty space. Ambient Intelligence A collection of intelligent electronic environments.

Analog Semiconductors that measure real-world conditions. Analog circuits Analog integrated circuits are integrated circuits that make a representation of continuous signals in electrical form. Analog Design and Verification The design and verification of analog components. Application specific integrated circuit ASIC A custom, purpose-built integrated circuit made for a specific task or product.

Artificial Intelligence AI Using machines to make decisions based upon stored knowledge and sensory input. Assertion Code that looks for violations of a property. Automatic Test Pattern Generation The generation of tests that can be used for functional or manufacturing verification. Automotive Issues dealing with the development of automotive electronics. Avalanche Noise Noise in reverse biased junctions.

AVM Verification methodology created by Mentor. Band gap. Batteries Devices that chemically store energy. BEOL Backend-of-line processes. Biometrics Security based on scans of fingerprints, palms, faces, eyes, DNA or movement. Blech Effect A reverse force to electromigration. Bluetooth Low Energy Also known as Bluetooth 4. BSIM Transistor model. Built-in self-test BiST On-chip logic to test a design. Bus Functional Model Interface model between testbench and device under test. Cache Coherent Interconnect for Accelerators CCIX Interconnect standard which provides cache coherency for accelerators and memory expansion peripheral devices connecting to processors.

CAN bus Automotive bus developed by Bosch. Cell-Aware Test Fault model for faults within cells. Checker Testbench component that verifies results.

Chip Design Design is the process of producing an implementation from a conceptual form. Chip Design and Verification The design, verification, implementation and test of electronics systems into integrated circuits. Clock Gating Dynamic power reduction by gating the clock. Clock Tree Optimization Design of clock trees for power reduction. CMOS Fabrication technology.

Cobalt Cobalt is a ferromagnetic metal key to lithium-ion batteries. Code Coverage Metrics related to the amount of code executed in functional verification. Combinatorial Equivalence Checking Verify that functionality between registers remains unchanged after a transformation. Communications The plumbing on chip, among chips and between devices, that sends bits of data and manages that data. Communications systems. Compiled-code Simulation Faster form for logic simulation. Contact The structure that connects a transistor with the first layer of copper interconnects.

Coverage Completion metrics for functional verification. Crosstalk Interference between signals. Crypto processors Crypto processors are specialized processors that execute cryptographic algorithms within hardware. Dark Silicon A method of conserving power in ICs by powering down segments of a chip when they are not in use. Data Analytics. Data processing Data processing is when raw data has operands applied to it via a computer or server to process data into another usable form.


De Facto Standards A standard that comes about because of widespread acceptance or adoption. Debug The removal of bugs from a design. Deep Learning DL Deep learning is a subset of artificial intelligence where data representation is based on multiple layers of a matrix. Design for Manufacturing DFM Actions taken during the physical design stage of IC development to ensure that the design can be accurately manufactured.

Design for Test DFT Techniques that reduce the difficulty and cost associated with testing an integrated circuit. Design Patent Protection for the ornamental design of an item. Design Rule Checking DRC A physical design process to determine if a chip satisfies rules defined by the semiconductor manufacturer.

Design Rule Pattern Matching Locating design rules using pattern matching techniques. Device Noise Sources of noise in devices. Diamond Semiconductors A wide-bandgap synthetic material. Digital Oscilloscope Allowed an image to be saved digitally. DNA Chips Using deoxyribonucleic acid to make chips hacker-proof. Double Patterning A patterning technique using multiple passes of a laser. Double Patterning Methodologies Colored and colorless flows for double patterning. E-Beam Lithography using a single beam e-beam tool.

Edge Computing. Educational Establishments Educational establishments from which technology has been spawned into the EDA field. Electromigration Electromigration (EM) due to power densities. Emulation Special purpose hardware used for logic verification. Energy Harvesting Capturing energy from the environment. Environmental Noise Noise caused by the environment. Epitaxy A method for growing or depositing monocrystalline films on a substrate. Ethernet Ethernet is a reliable, open standard for connecting devices by wire.

Fan-Outs A way of including more features that normally would be on a printed circuit board inside a package. Fault Simulation Evaluation of a design under the presence of manufacturing defects. Femtocells The lowest power form of small cells, used for home WiFi networks. Fill The use of metal fill to improve planarity and to manage electrochemical deposition (ECD), etch, lithography, stress effects, and rapid thermal annealing.

FinFET A three-dimensional transistor. Flash Memory non-volatile, erasable memory. Flicker Noise Noise related to resistance fluctuation. Formal Verification Formal verification involves a mathematical proof to show that a design adheres to a property. Functional Coverage Coverage metric used to indicate progress in verifying functionality. Functional Design and Verification Functional Design and Verification is currently associated with all design and verification functions performed before RTL synthesis.

Functional Verification Functional verification is used to determine if a design, or unit of a design, conforms to its specification. Gate-Level Power Optimizations Power reduction techniques available at the gate level. Generation-Recombination Noise noise related to generation-recombination. Graphene 2D form of carbon in a hexagonal lattice.

Graphics processing unit GPU An electronic circuit designed to handle graphics and video. Guard Banding Adding extra circuits or software into a design to ensure that if one part doesn't work the entire system doesn't fail. Hardware Assisted Verification Use of special purpose hardware to accelerate verification. Hardware Modeler Historical solution that used real chips in the simulation process.

Heat Dissipation Power creates heat and heat affects power. High-Bandwidth Memory HBM A dense, stacked version of memory with high-speed interfaces that can be used in advanced packaging. IC Types. Impact of lithography on wafer costs Wafer costs across nodes. Implementation Power Optimizations Power optimization techniques for physical implementation. Induced Gate Noise Thermal noise within a channel. Integrated Circuits ICs Integration of multiple devices onto a single piece of semiconductor.

Puri, A. Bjorksten, T. Rosser Domino logic is one of the most popular dynamic circuit configurations for implementing high-performance logic designs. Since domino logic is inherently non-inverting, it imposes a fundamental constraint: logic functions must be implemented without any intermediate inversions. Removing intermediate inverters requires logic duplication to generate both the negative and positive signal phases, which results in significant area overhead.

This area overhead can be substantially reduced by selecting an optimal output phase assignment, which results in a minimum logic duplication penalty for obtaining inverter-free logic. In this paper, we present this previously unaddressed problem of output phase assignment for minimum area duplication in dynamic logic synthesis. We give both optimal and heuristic algorithms for minimizing logic duplication.
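
As a toy illustration of the phase-assignment idea (not the authors' algorithm), the sketch below models an inverter-free network in which required phases are propagated from the outputs toward the inputs by De Morgan's laws; an internal node required in both phases must be duplicated. The netlist, node names, and cost model are invented for illustration, and the search is a brute-force enumeration over output phases.

```python
from itertools import product

# A tiny netlist: each node maps to a list of (fanin, inverted?) edges.
# Primary inputs have no entry. The structure and values are hypothetical.
NETLIST = {
    "g1": [("a", False), ("b", True)],
    "g2": [("g1", True), ("c", False)],
    "g3": [("g1", False), ("d", False)],
    "f1": [("g2", False)],                 # output f1
    "f2": [("g2", True), ("g3", False)],   # output f2
}
OUTPUTS = ["f1", "f2"]

def required_phases(output_phases):
    """Propagate required phases from outputs to all nodes.

    A node required in phase p through a non-inverting edge requires its
    fanin in phase p; through an inverting edge it requires the opposite
    phase (the inverter is absorbed by De Morgan's laws)."""
    need = {}  # node -> set of required phases (True = positive)
    stack = [(out, ph) for out, ph in zip(OUTPUTS, output_phases)]
    while stack:
        node, phase = stack.pop()
        if phase in need.setdefault(node, set()):
            continue
        need[node].add(phase)
        for child, inverted in NETLIST.get(node, []):
            stack.append((child, phase ^ inverted))
    return need

def duplication_cost(output_phases):
    """Count internal nodes that must be realized in both phases."""
    need = required_phases(output_phases)
    return sum(1 for n, phases in need.items()
               if n in NETLIST and len(phases) == 2)

best = min(product([True, False], repeat=len(OUTPUTS)), key=duplication_cost)
print("best output phases:", dict(zip(OUTPUTS, best)),
      "duplicated nodes:", duplication_cost(best))
```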

Xiaoqing, K. Saluja, A. Huang, J. Jou, W. First, it finds an area-optimized, performance-aware initial network using a modified area optimization technique. Then, an iterative algorithm consisting of several resynthesis techniques is applied to gracefully trade area for performance in the network. Experimental results show that this approach can provide a complete set of mapping solutions, from the area-optimized one to the performance-optimized one, for the given design.

Furthermore, these two extreme solutions, the area-optimized one and the performance-optimized one, produced by our algorithm outperform the results of most existing algorithms. Therefore, our algorithm is very useful for timing-driven FPGA synthesis. Zheng, Q. Zhang, M. Nakhla, R. Achar This paper describes a new moment-generation algorithm for efficient simulation of linear subnetworks characterized by measured or tabulated data using moment-matching techniques. The subnetwork moments are computed by performing an integration in the time domain on the measured data.

The proposed technique is more accurate because it relies on integration, as compared to previously published approaches, which depend on differentiating measured data in the frequency domain to compute moments. Using the new moment-generation technique, the CFH (Complex Frequency Hopping) algorithm has been extended to handle measured subnetworks. A generalized stencil for measured data has also been presented for inclusion in circuit simulators and to facilitate efficient moment generation.
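
For intuition, the moments m_k of a transfer function H(s) = sum_k m_k s^k can be obtained from the impulse response h(t) as m_k = ((-1)^k / k!) * integral of t^k h(t) dt, so tabulated time-domain data can be integrated rather than differentiated. The sketch below approximates the first few moments by trapezoidal integration; the data and formulation are placeholders and this is not the paper's CFH-based algorithm.

```python
import numpy as np
from math import factorial

def moments_from_impulse_response(t, h, num_moments=4):
    """Approximate transfer-function moments m_k from tabulated impulse
    response samples h(t), using m_k = ((-1)**k / k!) * integral t**k h(t) dt
    and trapezoidal integration over the measured window."""
    return [((-1) ** k / factorial(k)) * np.trapz((t ** k) * h, t)
            for k in range(num_moments)]

# Placeholder "measured" data: impulse response of a single RC stage,
# h(t) = (1/RC) * exp(-t/RC), whose exact moments are m_k = (-RC)**k.
RC = 1e-9
t = np.linspace(0.0, 20 * RC, 2001)
h = np.exp(-t / RC) / RC
print(moments_from_impulse_response(t, h))   # approx [1, -1e-9, 1e-18, -1e-27]
```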

Examples and comparison with conventional simulations are provided. The method is accurate while it is faster than the conventional approach by 1 to 2 orders of magnitude. Corey, A. Yang An approach is presented for modeling board-level, package-level, and MCM substrate-level interconnect circuitry based on measured time domain reflectometry data. The time-domain scattering parameters of a multiport system are used to extract a SPICE netlist which uses standard elements to match the behavior of the device up to a user-specified cutoff frequency.

Linear or nonlinear circuits may be connected to the model ports, and the entire circuit simulated in a standard circuit simulator. Two-port and four-port example microstrip circuits are characterized, and the simulation results are compared with measured data. Delay, reflection, transmission, and crosstalk are accurately modeled in each case.

Kahng, S. Muddu, K. Masuko Elmore delay has been widely used as an analytical estimate of interconnect delays in the performance-driven synthesis and layout of VLSI routing topologies. We develop new analytical delay models based on the first and second moments of the interconnect transfer function when the input is a ramp signal with finite rise time. Evaluation of our analytical models is several orders of magnitude faster than simulation using SPICE.

We also describe extensions of our approach for estimation of source-sink delays in arbitrary interconnect trees. Chen, H. Zhou, D. Wong We consider non-uniform wire-sizing for general routing trees under the Elmore delay model. Three minimization objectives are studied: (1) total weighted sink-delays; (2) total area subject to sink-delay bounds; and (3) maximum sink-delay.

We first present an algorithm NWSA-wd for minimizing total weighted sink-delays based on iteratively applying the wire-sizing formula in [1]. We show that NWSA-wd always converges to an optimal wire-sizing solution. Experimental results show that our algorithms are efficient both in terms of runtime and storage. For example, NWSA-wd, with linear runtime and storage, can solve a wire-segment routing-tree problem using about 1.
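
Both abstracts rely on the Elmore delay model, in which the delay from the driver to a sink of an RC tree is the sum, over the edges on the driver-to-sink path, of the edge resistance times the total capacitance downstream of that edge. A minimal sketch (the tree topology, names, and values are invented):

```python
# RC tree: node -> (parent, resistance of edge from parent, node capacitance).
# The root "src" is the driver; all values are placeholders.
TREE = {
    "src": (None, 0.0, 0.0),
    "n1":  ("src", 10.0, 0.5e-12),
    "n2":  ("n1", 20.0, 0.3e-12),
    "s1":  ("n2", 15.0, 1.0e-12),   # sink 1
    "s2":  ("n1", 25.0, 0.8e-12),   # sink 2
}

def downstream_cap(node, children):
    """Total capacitance in the subtree rooted at node."""
    return TREE[node][2] + sum(downstream_cap(c, children) for c in children[node])

def elmore_delay(sink):
    """Sum of R_edge * C_downstream over the path from the root to the sink."""
    children = {n: [] for n in TREE}
    for n, (p, _, _) in TREE.items():
        if p is not None:
            children[p].append(n)
    delay, node = 0.0, sink
    while TREE[node][0] is not None:
        parent, r, _ = TREE[node]
        delay += r * downstream_cap(node, children)
        node = parent
    return delay

for sink in ("s1", "s2"):
    print(sink, elmore_delay(sink))
```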

Okamoto, J. Cong This paper presents an efficient algorithm for buffered Steiner tree construction with wire sizing. Given a source and n sinks of a signal net, with given positions and a required arrival time associated with each sink, the algorithm finds a Steiner tree with buffer insertion and wire sizing so that the required arrival time or timing slack at the source is maximized.

The unique contribution of our algorithm is that it performs Steiner tree construction, buffer insertion, and wire sizing simultaneously, considering both critical delay and total capacitance minimization, by combining performance-driven A-tree construction with dynamic-programming-based buffer insertion and wire sizing. In the past, tree construction and the other delay minimization techniques were carried out independently.
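
The dynamic-programming step can be pictured with the classic van Ginneken-style recurrence: candidate (downstream capacitance, required arrival time) pairs are propagated from the sink toward the source, a buffer is optionally inserted at each legal position, and dominated candidates are pruned. The simplified single-path sketch below uses a lumped RC wire model with invented electrical parameters; the paper's algorithm additionally constructs the Steiner tree and sizes the wires.

```python
# Simplified van Ginneken-style buffer insertion on a single wire path.
# All electrical parameters are illustrative placeholders.
R_UNIT, C_UNIT = 0.1, 0.2                 # wire R and C per unit length
BUF_R, BUF_C, BUF_DELAY = 1.0, 0.5, 2.0   # buffer output R, input C, intrinsic delay
SINK_C, SINK_RAT = 1.0, 100.0             # sink load and required arrival time
SEGMENTS = [5.0, 3.0, 4.0, 2.0]           # wire segment lengths, sink side first

def prune(cands):
    """Keep only non-dominated (cap, rat) pairs: lower cap and higher rat win."""
    cands.sort(key=lambda x: (x[0], -x[1]))   # ascending cap, descending rat
    best, best_rat = [], float("-inf")
    for cap, rat in cands:
        if rat > best_rat:
            best.append((cap, rat))
            best_rat = rat
    return best

def buffer_insertion(segments):
    cands = [(SINK_C, SINK_RAT)]          # candidates seen from the sink
    for length in segments:               # walk from sink toward source
        r, c = R_UNIT * length, C_UNIT * length
        # add the wire segment (resistance drives half its own cap plus downstream)
        cands = [(cap + c, rat - r * (c / 2.0 + cap)) for cap, rat in cands]
        # optionally insert a buffer at the upstream end of the segment
        buffered = [(BUF_C, rat - BUF_DELAY - BUF_R * cap) for cap, rat in cands]
        cands = prune(cands + buffered)
    return max(rat for _, rat in cands)   # best required arrival time at the source

print("source RAT:", buffer_insertion(SEGMENTS))
```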

Experimental results show the effectiveness of our approach. Lehther, S. Sapatnekar While designing interconnect for MCMs, one must take into consideration the distributed RLC effects, due to which signals may display nonmonotonic behavior and substantial ringing. This paper considers the problem of designing clock trees for MCMs. A fully distributed RLC model is utilized for AWE-based analysis and synthesis, and appropriate measures are taken to ensure adequate signal damping and to insert buffers that satisfy constraints on the clock signal slew rate.

Experimental results, verified by SPICE simulations, show that this method can be used to build clock trees with near-zero skews.

Cao, D. Pradhan A sequential redundancy identification procedure is presented. Based on uncontrollability analysis and recursive learning techniques, this procedure identifies c-cycle redundancies in large circuits, without simplifying assumptions or state transition information. The proposed procedure can identify redundant faults which require conflicting assignments on multiple lines. In this sense, it is a generalization of FIRES, a state-of-the-art redundancy identification algorithm. A modification of the proposed procedure is also presented for identifying untestable faults.

Experimental results on ISCAS benchmarks demonstrate that these two procedures can efficiently identify a large portion of c-cycle redundant and untestable faults. Hartanto, V. Boppana, W. Fuchs State justification is a time-consuming operation in test generation for sequential circuits. In this paper, we present a technique to rapidly identify state elements (flip-flops) that are either difficult to set or unsettable. This is achieved by performing test generation on certain transformed circuits to identify state elements that are not settable to specific logic values.

Two applications that benefit from this identification are sequential circuit test generation and partial scan design. The knowledge of the state space is shown to be useful in creating early backtracks in deterministic test generation. Partial scan selection is also shown to benefit from the knowledge of the difficult-to-set flip-flops. Experiments on the ISCAS89 circuits are presented to show the reduction in time for test generation and the improvements in the testability of the resulting partial scan circuits. Rudnick, J. Patel Simulation-based techniques for dynamic compaction of test sequences are proposed.

The first technique uses a fault simulator to remove test vectors from the partially-specified test sequence generated by a deterministic test generator if the vectors are not needed to detect the target fault, considering that the circuit state may be known. The second technique uses genetic algorithms to fill the unspecified bits in the partially-specified test sequence in order to increase the number of faults detected by the sequence.
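
The genetic-algorithm fill step can be pictured as evolving assignments of 0/1 values to the unspecified (X) bits so that the filled sequence detects more faults. The skeleton below is illustrative only: count_detected_faults is a placeholder where a real flow would call a fault simulator, and the test sequence and GA parameters are invented.

```python
import random

TEST_SEQUENCE = ["1X0X", "XX10", "0X1X"]        # partially specified vectors (placeholder)
X_POSITIONS = [(v, i) for v, vec in enumerate(TEST_SEQUENCE)
               for i, bit in enumerate(vec) if bit == "X"]

def apply_fill(fill):
    """Substitute a 0/1 fill (one bit per X position) into the test sequence."""
    vectors = [list(vec) for vec in TEST_SEQUENCE]
    for (v, i), bit in zip(X_POSITIONS, fill):
        vectors[v][i] = bit
    return ["".join(vec) for vec in vectors]

def count_detected_faults(vectors):
    """Placeholder fitness: a real implementation would fault-simulate the
    filled sequence and return the number of detected faults."""
    return sum(vec.count("1") for vec in vectors)   # stand-in metric only

def genetic_fill(pop_size=20, generations=30, mutation_rate=0.05):
    population = [[random.choice("01") for _ in X_POSITIONS] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda f: count_detected_faults(apply_fill(f)),
                        reverse=True)
        survivors = scored[: pop_size // 2]                   # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(X_POSITIONS))       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit if random.random() > mutation_rate else random.choice("01")
                     for bit in child]                        # mutation
            children.append(child)
        population = survivors + children
    best = max(population, key=lambda f: count_detected_faults(apply_fill(f)))
    return apply_fill(best)

print(genetic_fill())
```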

Significant reductions in test set sizes were observed for all benchmark circuits studied. Fault coverages improved for many of the circuits, and execution times often dropped as well, since fewer faults had to be targeted by the computation-intensive deterministic test generator. Lee, A. Pardo, J. Jang, G. Hachtel, F. Somenzi In this paper we present the tearing paradigm as a way to automatically abstract behavior to obtain upper and lower bound approximations of a reactive system. We also give an algorithm for false negative or false positive resolution for verification based on a theory of a lattice of approximations.

We show that there exists a bipartition of the lattice set based on positive versus negative verification results. Our resolution methods are based on determining a pseudo-optimal shortest path from a given, possibly coarse but tractable approximation, to a nearest point on the contour separating one set of the bipartition from the other. Iwashita, T. Nakata, F. Hirose We present a CTL model checking algorithm based mainly on forward state traversal, which can check many realistic CTL properties without doing backward state traversal.
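
As background, forward state traversal computes the reachable states as a least fixpoint of the image operator applied to the initial states, and a property such as AG p can then be checked by testing whether every reachable state satisfies p. The toy explicit-state sketch below stands in for the BDD-based implicit traversal used in such tools; the transition system is made up.

```python
# Toy explicit-state forward traversal; a real tool would represent state sets
# and the transition relation implicitly with BDDs.
TRANSITIONS = {            # state -> successor states (hypothetical example)
    "s0": {"s1", "s2"},
    "s1": {"s1", "s3"},
    "s2": {"s3"},
    "s3": {"s0"},
    "s4": {"s4"},          # unreachable from s0
}
INIT = {"s0"}

def image(states):
    """Successors of a set of states under the transition relation."""
    return set().union(*(TRANSITIONS[s] for s in states))

def forward_reachable(init):
    """Least fixpoint: keep adding the image until nothing new appears."""
    reached = set(init)
    frontier = set(init)
    while frontier:
        frontier = image(frontier) - reached
        reached |= frontier
    return reached

def check_AG(prop_states):
    """AG p holds iff every forward-reachable state satisfies p."""
    return forward_reachable(INIT) <= prop_states

print(forward_reachable(INIT))                 # {'s0', 's1', 's2', 's3'}
print(check_AG({"s0", "s1", "s2", "s3"}))      # True
print(check_AG({"s0", "s1"}))                  # False
```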

This algorithm is effective in many situations where backward state traversal is more expensive than forward state traversal. We combine it with BDD-based state traversal techniques using partitioned transition relations. Experimental results show that our method can verify actual CTL properties of large industrial models which cannot be handled by conventional model checkers. Pradhan, D. Paul, M. Chatterjee This paper presents a new framework for formal logic verification. What is depicted here is fundamentally different from previous approaches.

In earlier approaches, the circuit is either not changed during the verification process, as in OBDD or implication-based methods, or the circuit is progressively reduced during verification. In our approach, by contrast, we actually enlarge the circuits by adding gates during the verification process. Specifically introduced here is a new technique that transforms the reference circuit as well as the circuit to be verified, so that the similarity between the two is progressively enhanced.

In the process, we reduce the dissimilarity between the two circuits, which makes it easier to verify the circuits. In this paper, we first introduce a method to identify parts of the two circuits which are dissimilar. We use the number of implications that exist between the nodes of one circuit and the nodes of the other circuit as a metric of similarity. As demonstrated, this can be a very useful metric. We formulate transformations that can reduce the dissimilarity. These are performed on those parts of the circuits which are found to be dissimilar.

These admissible transformations are functionality-preserving and based on certain Boolean difference formulations. The dissimilarity reduction transformations introduce new logical relationships between the two circuits that did not previously exist. These logical relationships are extracted as new implications, which are then used to reduce the complexity of the verification problem. These two steps are repeated in succession until the verification process is complete.

A complete procedure is presented which demonstrates the power of our logic verification technique. The concept presented in this paper can be useful in accelerating verification frameworks which rely on structural methods. Shin, K. Choi Latency tolerance is one of the main problems of software synthesis in the design of mixed hardware-software systems. This paper presents a methodology for speeding up systems through latency tolerance, which is obtained by decomposition of tasks and generation of an efficient scheduler.

Scheduling of the decomposed tasks is performed in a mixed static and dynamic fashion. Experimental results show the significance of our approach. Zhao, C. Papachristou Design with cores has become popular recently because it can decrease the design time and ease the complexity of the design process. This paper presents a new method for the design of DSP cores based on multiple behaviors. This method uses a redesign technique based on reallocation transformations to extract those RTL components in an initial RTL structure which are highly reusable, and uses them to construct a DSP core.

Experimental results are provided to illustrate the high reusability of the core, extracted from given behaviors, when it accommodates new behaviors.


Leupers, P. Marwedel This paper presents DSP code optimization techniques, which originate from dedicated memory address generation hardware. We define a generic model of DSP address generation units. Based on this model, we present efficient heuristics for computing memory layouts for program variables, which optimize utilization of parallel address generation units. Improvements and generalizations of previous work are described, and the efficacy of the proposed algorithms is demonstrated through experimental evaluation.
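
A classical example of such a layout heuristic is simple offset assignment: variables that are accessed back to back are placed at adjacent addresses so that an auto-increment/decrement address generation unit can step between them without extra address-register loads. The greedy path-merging sketch below is in the spirit of well-known offset-assignment heuristics, not necessarily the authors' exact algorithm, and the access sequence is invented.

```python
from collections import Counter

ACCESSES = list("abcadacbad")   # hypothetical variable access sequence

def access_graph(seq):
    """Edge weight = how often two variables are accessed back to back."""
    return Counter(frozenset(p) for p in zip(seq, seq[1:]) if p[0] != p[1])

def greedy_layout(seq):
    """Greedily merge the heaviest edges into memory 'chains' (a path cover)."""
    graph = access_graph(seq)
    chains = {v: [v] for v in set(seq)}       # each variable starts alone
    for edge, _ in graph.most_common():
        a, b = tuple(edge)
        ca, cb = chains[a], chains[b]
        # merge only if a and b are endpoints of different chains
        if ca is not cb and a in (ca[0], ca[-1]) and b in (cb[0], cb[-1]):
            if ca[0] == a:
                ca.reverse()                  # make a the tail of its chain
            if cb[-1] == b:
                cb.reverse()                  # make b the head of its chain
            merged = ca + cb
            for v in merged:
                chains[v] = merged
    seen, layout = set(), []
    for v in seq:                             # concatenate chains into one layout
        chain = tuple(chains[v])
        if chain not in seen:
            seen.add(chain)
            layout.extend(chain)
    return layout

def address_register_loads(seq, layout):
    """Accesses whose next address is not +/-1 away need an explicit load."""
    pos = {v: i for i, v in enumerate(layout)}
    return sum(abs(pos[a] - pos[b]) > 1 for a, b in zip(seq, seq[1:]))

layout = greedy_layout(ACCESSES)
print("layout:", layout, "extra address loads:", address_register_loads(ACCESSES, layout))
```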

Yalcin, J. Hayes, K. Sakallah We present a novel timing analysis method (ACD) that computes an approximate value for the delay of datapath circuits. Based on the conditional delay matrix (CDM) formalism we introduced earlier, the ACD method exploits the fact that most datapath signals are directed by a small set of control inputs. The signal propagation conditions are restricted to a set of predefined control inputs, which results in significant reductions in the size of the conditions as well as computation time.

Our results demonstrate up to three orders of magnitude speedup in computation time over exact methods, with little or no loss in accuracy. C. Narayanan, B. Chappell, B. Fleischer Static timing analysis techniques [1, 2] are widely used to verify the timing behavior of large digital designs [11] implemented predominantly in conventional static CMOS.

These techniques, however, are not sufficient to completely verify the dynamic circuit families now finding favor in high-performance designs [11]. Due to the circuit structure employed in SRCMOS, designs naturally decompose into a hierarchy of gates and macros; timing analysis must address and preferably exploit this hierarchy.

At the gate level, three categories of constraints on pulse timing arise from considering the effects of pulse width, overlap, and collisions. Timing analysis is performed at the macro level by (a) performing timing tests at macro boundaries and (b) using macro-level delay models.

We define various macro-level timing tests which ensure that fundamental gate-level timing constraints are satisfied. We extend the standard delay model to handle leading and trailing edges of signal pulses, across-chip variations, tracking of signals, and slow and fast operating conditions.
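
For orientation, block-based static timing analysis propagates the latest arrival time through the netlist in topological order and reports slack as required time minus arrival time at each endpoint. The generic sketch below uses invented gates and delays and omits the pulse-width, overlap, and collision tests specific to the pulsed-logic methodology described above.

```python
from graphlib import TopologicalSorter

# Gate -> (fanin list, gate delay); primary inputs have no entry. Placeholder data.
GATES = {
    "g1": (["a", "b"], 1.0),
    "g2": (["g1", "c"], 2.0),
    "g3": (["g1", "g2"], 1.5),
    "out": (["g3"], 0.5),
}
INPUT_ARRIVAL = {"a": 0.0, "b": 0.2, "c": 0.1}
REQUIRED = {"out": 6.0}

def arrival_times():
    """Latest arrival time at every node, computed in topological order."""
    deps = {g: set(fanins) for g, (fanins, _) in GATES.items()}
    arrival = dict(INPUT_ARRIVAL)
    for node in TopologicalSorter(deps).static_order():
        if node in GATES:
            fanins, delay = GATES[node]
            arrival[node] = max(arrival[f] for f in fanins) + delay
    return arrival

at = arrival_times()
for endpoint, req in REQUIRED.items():
    print(endpoint, "arrival", at[endpoint], "slack", req - at[endpoint])
```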

We have developed an SRCMOS timing analyzer based on this approach; the analyzer was implemented as extensions to a standard static timing analysis program, thus facilitating its integration into an existing design system and methodology. Van Campenhout, T. Mudge, K. Sakallah Two methods are presented for static timing verification of sequential circuits implemented as a mix of static and domino logic.

Constraints for proper operation of domino gates are derived. An important observation is that input signals to domino gates may start changing near the end of the evaluate phase. The first method models domino gates explicitly, similar to latches. The second method treats domino gates only during pre- and post-processing steps. This method is shown to be more conservative, but easier to compute. Lehmann, B. Wunder, K. The increasing reuse of design data is widely recognized as a promising technique for mastering future design complexities.

Since the intellectual property of a design is increasingly kept in software-like hardware description languages (HDLs), successful reuse depends on the availability of suitable HDL reverse engineering tools. This paper introduces new concepts for an integrated HDL reverse engineering tool-set and presents an implemented evaluation prototype for VHDL designs. Starting from an arbitrary collection of HDL source code files, several graphical and textual views of the design description are automatically generated.

The tool-set provides novel hypertext techniques, expressive graphical code representations, a user-defined level of abstraction, and interactive configuration mechanisms in order to facilitate the analysis, adoption and upgrade of existing HDL designs.

Johnson, J. Brockman, R. Vigeland As design processes continue to increase in complexity, it is important to base process improvements on quantitative analysis. In this paper we develop an analytical approach to analyze sequential design processes using sensitivity analysis. Two applications illustrate this approach, one involving a Pareto analysis of an ASIC design process and the other an optimization of a software design process to determine the lower bound of the process completion time. Ho, M. Horowitz The functional validation of a state-of-the-art digital design is usually performed by simulation of a register-transfer-level model.

The degree to which the test-vector suite covers the important tests is known as the coverage of the suite. Previous coverage metrics have relied on measures such as the number of simulated cycles or number of toggles on a circuit node, which are indirect metrics at best. This paper proposes a new method of analyzing coverage based on projecting a minimized control finite-state graph onto control signals for the datapath part of the design to yield a meaningful metric and provide detailed feedback about missing tests.

The largest hurdle is state-space explosion. We describe two methods of dealing with this in a practical manner and give results of applying this coverage analysis to parts of the node controller of the Stanford FLASH multiprocessor. A. Juan, D. Gajski, V. Chaiyakul In interactive behavioral synthesis, the designer can control the design process at every stage, including modifying the schedule of the design to improve its performance. In this paper, we present a methodology for performance optimization in interactive behavioral synthesis. Also proposed are several quality metrics and hints that can assist the user in utilizing the proposed methodology.

When the user is optimizing the performance of the design, one important decision is the selection of a clock period. To facilitate clock selection by the user, we have developed an algorithm to estimate the effect of different clock periods on the execution time of the design. We have tested our methodology on several benchmarks. The experimental results support the proposed methodology by demonstrating an average improvement of

Raghunathan, S. Dey, N. Jha We present techniques for estimating switching activity and power consumption in register-transfer level (RTL) circuits.

Previous work on this topic has ignored the presence of glitching activity at various data path and control signals, which can lead to significant underestimation of switching activity. For data path blocks that operate on word-level data, we construct piecewise linear models that capture the variation of output glitching activity and power consumption with various word-level parameters like mean, standard deviation, spatial and temporal correlations, and glitching activity at the block's inputs.

For RTL blocks that operate on data that need not have an associated word-level value, we present accurate bit-level modeling techniques for glitching activity as well as power consumption. This allows us to perform accurate power estimation for control-flow intensive circuits, where most of the power consumed is dissipated in non-arithmetic components like multiplexers, registers, vector logic operators, etc. Since the final implementation of the controller is not available during high-level design iterations, we develop techniques that estimate glitching activity at control signals using control expressions and partial delay information.
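
The quantity being estimated is, at bottom, average dynamic power P = 0.5 * Vdd^2 * f_clk * sum_i C_i * alpha_i, where alpha_i is the switching (plus glitching) activity of net i per cycle. A minimal sketch with invented capacitances and activities, just to fix the formula; the hard part addressed by the paper is modeling the activities, including glitching, accurately.

```python
# Net name -> (load capacitance in farads, switching activity per cycle).
# Values are illustrative placeholders; glitching would inflate the activities.
NETS = {
    "bus_a":   (250e-15, 0.40),
    "bus_b":   (250e-15, 0.25),
    "mux_sel": (40e-15, 0.10),
    "reg_q":   (60e-15, 0.50),
}
VDD = 1.2        # volts
F_CLK = 200e6    # hertz

def dynamic_power(nets, vdd, f_clk):
    """P = 0.5 * Vdd^2 * f_clk * sum(C_i * alpha_i)."""
    return 0.5 * vdd ** 2 * f_clk * sum(c * a for c, a in nets.values())

print(f"{dynamic_power(NETS, VDD, F_CLK) * 1e6:.2f} uW")
```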

Mehra, J. Rabaey Current-day behavioral-synthesis techniques produce architectures that are power-inefficient in the interconnect. We present a novel approach targeted at the reduction of power dissipation in interconnect elements (buses, multiplexors, and buffers). The scheduling, assignment, and allocation techniques presented in this paper exploit the regularity and common computational patterns in the algorithm to reduce the fan-outs and fan-ins of the interconnect wires, resulting in reduced bus capacitances and a simplified interconnect structure.

Conn, P. Coulman, R. Haring, G. Morrill, C. Visweswariah Optimization of a circuit by transistor sizing is often a slow, tedious and iterative manual process which relies on designer intuition. Circuit simulation is carried out in the inner loop of this tuning procedure. Automating the transistor sizing process is an important step towards being able to rapidly design high-performance, custom circuits.

JiffyTune is a new circuit optimization tool that automates the tuning task. Each weighted target can be either a constraint or an objective function. Minimax optimization is supported. Transistors can be ratioed and similar structures grouped to ensure regular layouts. Bounds on transistor widths are supported.

Simple bounds are handled explicitly and trust region methods are applied to minimize a composite objective function. In the inner loop of the optimization, the fast circuit simulator SPECS is used to evaluate the circuit. SPECS is unique in its ability to efficiently provide time-domain sensitivities, thereby enabling gradient-based optimization. These interfaces automate the specification of the optimization task, the running of the optimizer and the back-annotation of the results onto the circuit schematic.

JiffyTune has been used to tune over circuits for a custom, high-performance microprocessor that makes use of dynamic logic circuits. Circuits with over tunable transistors have been successfully optimized. Automatic circuit tuning has been found to facilitate design reuse. The designer's focus shifts from solving the optimization problem to specifying it correctly and completely.

This paper describes the algorithms of JiffyTune, the environment in which it is used, and presents a case study of the application of JiffyTune to individual circuits of the microprocessor. Cong, L. We define a class of optimization problems as CH-posynomial programs and reveal a general dominance property for all CH-posynomial programs (Theorem 1). We show that the STIS problems under a number of transistor delay models are CH-posynomial programs and propose an efficient and near-optimal STIS algorithm based on the dominance property. When used to solve the transistor sizing problem, it achieves a smooth area-delay trade-off.
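
A cartoon of gradient-based sizing: with a simple switch-level RC delay model, path delay is a posynomial in the device widths, so a composite objective of delay plus weighted area can be minimized by projected gradient descent within width bounds. The chain model, parameter values, and step size below are all invented; tools like the ones described above rely on circuit simulation or much richer delay models and on trust-region rather than plain gradient methods.

```python
import numpy as np

R, C_IN, C_PAR = 1.0, 1.0, 0.5    # per-unit-width drive resistance, input cap, parasitic cap
C_LOAD, AREA_WEIGHT = 20.0, 0.05  # final load and area penalty (all values invented)
W_MIN, W_MAX = 1.0, 16.0

def path_delay(w):
    """RC delay of a gate chain: stage i drives stage i+1 (or the final load)."""
    loads = np.append(C_IN * w[1:], C_LOAD) + C_PAR * w
    return np.sum((R / w) * loads)

def objective(w):
    return path_delay(w) + AREA_WEIGHT * np.sum(w)

def size_chain(num_stages=4, steps=2000, lr=0.05, eps=1e-6):
    w = np.full(num_stages, 2.0)
    for _ in range(steps):
        # central-difference gradient of the composite objective
        grad = np.array([(objective(w + eps * e) - objective(w - eps * e)) / (2 * eps)
                         for e in np.eye(num_stages)])
        w = np.clip(w - lr * grad, W_MIN, W_MAX)   # projected gradient step
    return w

w = size_chain()
print("widths:", np.round(w, 2), "delay:", round(path_delay(w), 3))
```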

Charbon, P. Miliozzi, E. Malavasi, A. Sangiovanni-Vincentelli In a constraint-driven layout synthesis environment, parasitic constraints are generated and implemented in each phase of the design process to meet a given set of performance specifications. The success of the synthesis phase depends in great part on the effectiveness and the generality of the constraint generation process. None of the existing approaches to the constraint generation problem, however, is suitable for a number of parasitic effects in active and passive devices due to non-deterministic process variations.

To address this problem, a novel methodology is proposed based on the separation of all variables associated with non-deterministic parasitics, thus allowing the translation of the problem into an equivalent one in which conventional constrained optimization techniques can be used. The requirements of the method are a well-defined set of statistical properties for all parasitics and a reasonable degree of linearity of the performance measures relevant to design.

Dutt, W. This class of algorithms is of the local improvement type. They generate relatively high-quality results for small and medium-sized circuits. However, as VLSI circuits become larger, these algorithms are not as effective on them as direct partitioning tools. The new algorithms significantly improve partition quality while preserving the advantage of time efficiency.

This demonstrates the potential of iterative improvement algorithms in dealing with the increasing complexity of modern VLSI circuitry. Zien, M. Schlag, P. Chan This paper presents a new spectral partitioning formulation which directly incorporates vertex size information. The new formulation results in a generalized eigenvalue problem, and this problem is reduced to the standard eigenvalue problem.
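
The formulation can be pictured as follows: with the graph Laplacian L and a diagonal matrix S of vertex sizes, a size-weighted cut relaxation leads to the generalized eigenvalue problem L x = lambda S x, and the eigenvector for the second-smallest lambda (the Fiedler vector) is thresholded to obtain a bipartition. The toy sketch below uses an invented graph and dense linear algebra and is only meant to illustrate the generalized eigenproblem, not the paper's exact formulation or its reduction to a standard eigenproblem.

```python
import numpy as np
from scipy.linalg import eigh

# Undirected graph as an adjacency matrix, plus a size per vertex (all invented).
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
sizes = np.array([2.0, 1.0, 1.0, 1.0, 3.0, 1.0])

L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
S = np.diag(sizes)                    # vertex-size matrix

# Generalized eigenproblem L x = lambda S x; take the second (Fiedler) eigenvector.
eigvals, eigvecs = eigh(L, S)
fiedler = eigvecs[:, 1]

# Split at the size-weighted median of the Fiedler vector to balance part sizes.
order = np.argsort(fiedler)
cumulative = np.cumsum(sizes[order])
cut = np.searchsorted(cumulative, cumulative[-1] / 2.0)
part = np.zeros(len(sizes), dtype=int)
part[order[cut + 1:]] = 1
print("partition:", part)
```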

To evaluate the new method for use in multi-level partitioning, we combine the partitioner with a multi-level bottom-up clustering algorithm and an iterative improvement algorithm for partition refinement. Experimental results show that our new spectral algorithm is more effective than the standard spectral formulation and other partitioners in the multi-level partitioning of hypergraphs. Mak, D. Wong Logic replication has been shown to be very effective in reducing the number of cut nets in partitioned circuits. Liu et al. In general, there are many possible partitioning solutions with the minimum cut size, and the difference in their required amounts of replication can be significant.

Since there is a size constraint on each component of the partitioning in practice, it is desirable to also minimize the amount of replication.