Free download of IEEE Std 1735-2014™ for “Recommended Practice for Encryption and Management of Electronic Design Intellectual Property (IP)”

Sponsored by Accellera, IEEE Std 1735-2014 can be downloaded at no cost from http://standards.ieee.org/getieee/1735/download/1735-2014.pdf.

Slice a SystemVerilog interface in the receiving modules

A common use pattern for SystemVerilog interfaces is that one server is connected to N clients, and the interconnect is like an N-element array of structs, where the server uses the entire array, but each client uses only one element of the array. In practice, it’s easier to express the interconnect as several arrays, one for each struct field.

I made this problem too complicated in “How to slice a SystemVerilog interface”, because the server and the clients were each being passed different interface instances, and the instance passed to the server was even of a different interface type than the instances passed to the clients.

The more natural way is to pass them all the same instance, and restrict access to single elements inside the clients. As in the earlier entry, this restriction is done with a second interface. But the trick here is that the client must be passed its own index.

`define _ import GLOBAL_PARAMETERS::*;
package GLOBAL_PARAMETERS;
  localparam type requestType = byte;
  localparam type responseType = int;
  localparam int N = 16;
endpackage:GLOBAL_PARAMETERS

module testMod `_ (/*...*/);
  wire clk, rst;
  IFC U(clk, rst);
  for (genvar INDEX = 0; INDEX != N; ++INDEX) begin:GEN
    clientMod#(INDEX) client(U.clientMp);
  end
  serverMod server(U.serverMp);
endmodule:testMod

module clientMod `_ #(INDEX)(IFC.clientMp bigifc);
  IFC_SLICE#(INDEX) U(bigifc);
  always_ff @(posedge U.clk, negedge U.rst) begin
    if (!U.rst) 
      U.requestWrite(0);
    else
      U.requestWrite(1);
  end
  // ...
endmodule:clientMod

module serverMod `_ (IFC.serverMp bigifc);
  // ...
endmodule:serverMod

interface automatic IFC `_ (input clk, rst);
  var requestType Requests[N-1:0];
  var responseType Responses[N-1:0];

  function requestType requestRead(int index);
    return Requests[index];
  endfunction

  function responseType responseRead(int index);
    return Responses[index];
  endfunction

  function void requestWrite(int index, requestType request);
    Requests[index] <= request;
  endfunction

  function void responseWrite(int index, responseType response);
    Responses[index] <= response;
  endfunction

  modport clientMp(output Requests, input Responses,
                   import requestWrite, responseRead,
                   input clk, rst);

  modport serverMp(input Requests, output Responses,
                   import requestRead, responseWrite,
                   input clk, rst);
endinterface:IFC

interface automatic IFC_SLICE `_ #(INDEX)(IFC.clientMp bigifc);
  wire clk = bigifc.clk;
  wire rst = bigifc.rst;

  function void requestWrite(requestType request);
    bigifc.requestWrite(INDEX, request);
  endfunction

  function responseType responseRead();
    return bigifc.responseRead(INDEX);
  endfunction
endinterface:IFC_SLICE
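
As an aside, here is a hypothetical sketch (not in the original post) of how the server body might use its modport: it simply echoes each request back as a response, calling only the functions imported through serverMp.

module serverModExample `_ (IFC.serverMp bigifc);
  // Hypothetical body: on reset clear all responses, otherwise echo each
  // request back as its response (the requestType value widens implicitly
  // to responseType).
  always_ff @(posedge bigifc.clk, negedge bigifc.rst) begin
    if (!bigifc.rst)
      for (int i = 0; i != N; ++i)
        bigifc.responseWrite(i, 0);
    else
      for (int i = 0; i != N; ++i)
        bigifc.responseWrite(i, bigifc.requestRead(i));
  end
endmodule:serverModExample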

Copyright © 2016 Brad Pierce

Don’t use $unit and module parameters when SystemVerilog package import can do the job

In a SystemVerilog design many basic types and sizes are shared. Passing them down as Verilog-style module parameters through endless levels of instantiation hierarchy isn’t the best way to keep SystemVerilog module definitions generic.

SystemVerilog added two lexical scoping mechanisms beyond module definitions for this purpose: the compilation-unit scope ($unit) and packages. I recommend packages.

An old objection, no longer accurate, was that types and sizes from packages could not be used in the declarations of module ports without either fully qualified package references (such as type_package::T) or wildcard imports into $unit. That was fixed in IEEE Std 1800-2009, which allows package imports directly after the module name in a definition.

A new objection is that package imports after the module name clutter up the code, because, when they are used to share global types and values, they are needed in almost every module and interface definition. But here I show an easy way to get rid of the clutter with an unobtrusive macro.

`define _ import global_parameters::*, type_package::*;

module test `_
( input var T in[N]
, output var T out[N]
);
  always_comb out = in;
endmodule 
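
The contents of global_parameters and type_package are not shown here; as an assumption, for the example to compile they would define T and N, for instance like this:

package global_parameters;
  localparam int N = 8;   // hypothetical value, for illustration only
endpackage

package type_package;
  typedef logic [7:0] T;  // hypothetical type, for illustration only
endpackage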

Type-checking SystemVerilog interfaces using a class signature

As I wrote here,

Unlike class specializations, interface specializations cannot be used in a module port declaration. For example, the following is disallowed:

module m #(parameter N) (IFC#(N) ifc, ...);

Steven Sharp followed up here that perhaps it was an oversight rather than a decision. Either way, it is still disallowed. He discusses some of the problems this causes for separate compilation. Happily, it is possible to get the effect of at least type-checking SystemVerilog interfaces by combining class specializations (8.25) and interface-based typedefs (6.18) using a couple of macros.

`define SIGNATURE_DEFINE(Params) \
   typedef SIGNATURE Params \%SIGNATURE ;

`define SIGNATURE_CHECK(Params, Port) \
  if (1) begin \
    typedef SIGNATURE Params Expected; \
    typedef Port.\%SIGNATURE Actual; \
    if (type(Expected) != type(Actual)) begin \
      $fatal("Mismatch"); \
    end \
  end 

Here’s a simple example:

virtual class SIGNATURE#(int N, type T);
endclass

interface IFC#(int N, type T);
  `SIGNATURE_DEFINE(#(N,T));
  T a[N], z[N];
  modport mp (input a, output z);
endinterface

module top;
  IFC#(8,int) ifc_inst();
  test#(32,byte) test_inst(ifc_inst.mp); // mismatch with IFC#(8,int): SIGNATURE_CHECK reports a fatal error at elaboration
endmodule

module test#(int N, type T)(IFC.mp ifc_mp);
  `SIGNATURE_CHECK(/*IFC*/#(N,T), ifc_mp);
  // ...
endmodule
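
For comparison, a matching instantiation (a hypothetical variant of top, not from the original example) would pass the check and elaborate cleanly:

module top_ok;
  IFC#(8,int) ifc_inst();
  test#(8,int) test_inst(ifc_inst.mp); // signatures match, so SIGNATURE_CHECK stays silent
endmodule
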
Copyright © 2016 Brad Pierce

Semiconductor device engineering — the neglected design importance of reducing variation

According to Scotten Jones

One really interesting point in this talk that was repeated at the Coventor event at IEDM was the importance of reducing variation. Device engineers focus on improving the mean but designers are more concerned with the distribution tails. Reducing variation is better even if the mean is lower! It was also noted that many of the proposed future devices will likely have more variability and therefore their actual performance may be less impressive than originally expected.

ANSYS to acquire Mentor Graphics?

According to CharlieD

Is this an active rumor or is ANSYS really acquiring Mentor Graphics?

Apache used to be a preferred vendor for us but after the ANSYS acquisition they seemed to have lost their zest for life. Honestly I have not heard much from them in the FinFET world. ANSYS has a 2x market cap over MENT? Do they have the cash to make an outright buy or would it be a stock deal?

According to Daniel Nenni

Let’s call it wishful thinking… I do think it would be good for EDA as I mentioned previously:

The dark horse here of course is ANSYS if they acquire Mentor for example. That would certainly shake things up a bit. Not only would that take Mentor into a whole new level of exposure outside traditional EDA, it would get ANSYS securely inside the semiconductor ecosystem and give Synopsys and Cadence cause for concern, absolutely.

Why chip designers won’t risk tool changes

According to Tom Simon

Years ago I thought that chip design companies would embrace the latest technology and be eager to adopt new tools. What I learned was that the people implementing and managing design projects were taking a lot of risks with almost every aspect of their projects. What they most wanted was to minimize risk from the design process – especially from design tool changes.

The reluctance to change goes much deeper. In the middle of a project a design team would never be willing to change tools, or even tool versions. Even minor updates from vendors can have subtle algorithmic changes that affect results. Beyond the obvious possibility of an outright bug, there can be variations in results that affect every downstream step. This is true for implementation and sign-off tools.

Chip companies spend significant resources on correlation and validation of tools. In some cases, known bugs in software are compensated for, and if a tool vendor were suddenly to fix the bug, it could break the flow. Pretty much the only reason a design team will change any tool or tool version is to fix a show-stopper issue.

Innovation and standards

According to Dave Rich

The whole point of having a standard is recording common practice.

[We] have a long history that other end users and implementers of the standard do not have. It’s easy for us to understand the intent, and search the LRM for justifications, but those other people do not have that benefit. And all the people in that environment (training, support, maintenance, AEs), are faced with these problems every day (and many times each day).

According to Brad Pierce

There are more answers to “Why Standards?” than just “recording common practice”.

http://www.happyabout.com/bookinfo/Ten_Commandments_for_Effective_Standards_wp.pdf

For example, “Standards can fuel innovation by providing a common starting point.”

Importing Java-style interface classes into SystemVerilog was an excellent addition to the language, but they were not common practice in SystemVerilog tools.

According to Dave

I’m not trying to say there’s no place for innovation in standards, it’s just that you can’t forget the underlying principle that brought everybody together in the first place.

I know that if you get a lot of engineers in a room they all will want to work on new and exciting ideas, but you can’t forget about the foundation. My favorite analogy: you can build the most advanced aircraft in the world, but it will never take off unless someone designs the rubber tires.

According to Brad

I’m not objecting to investing some resource into perfecting our rubber tires, I just don’t think the ones we have now are so defective that we can only dare taxi around the airport.

There ought to be a balance between maintenance and innovation, if only to fire up the enthusiasm of the participants. I don’t know anyone that wants to be on projects that have fallen completely into maintenance mode. Maintenance is an honorable and necessary function, and we all do some of it, but it’s thin gruel that should only be eaten as part of a balanced diet.

SystemVerilog: Backward compatibility with Verilog was a key to its success

According to Dave Rich

Superlog was originally designed to be a complete remake of Verilog using more modern programming concepts. My role as the company’s first application engineer was simply to convince people to use it. However, after working at four different Verilog start-ups, I realized how important it was to support legacy code and that migration to a new language must come in the form of evolution, not revolution. So we changed the design of Superlog to be 100% backward compatible with Verilog, as SystemVerilog is today.

UVM beyond SystemVerilog

According to Chris Edwards in “Expanding role of UVM takes center stage at DVCon Europe”

[DVCon Europe general chair Martin] Barnasconi says UVM provides a useful framework that can be extended beyond its home of SystemVerilog-based IP verification […]

“We need to think beyond SystemVerilog,” says Barnasconi. “That’s where the challenge is if you ask me. System-level people might use SystemC, C++ or Matlab Simulink. The methodology concept behind UVM is something we should build upon to make it more applicable to other disciplines.

“In the conference there is a tutorial on UVM in SystemC. Teams are trying to bring the methodology to different languages. It underlines the ‘U’ in universal in my view. We also have the trend towards software-driven verification. We need to enable this software layer [so it] can be used within sequences defined in UVM.”