
Al Williams

Dr. Dobb's Bloggers

On the Bench

March 24, 2014

I enjoy creating FPGA designs, mostly in Verilog. What I don't enjoy is writing repetitive test benches for the little subcomponents that I use in a larger design. If you haven't done much FPGA development, a test bench is just a program that exercises your simulated design and possibly checks for expected outputs.

Writing FPGA "code" in Verilog or VHDL looks like conventional programming, but it isn't. When you describe a circuit in one of these hardware description languages, you are really writing requirements in a computer-readable language. The tools (analogous to a compiler and a linker) will eventually convert those requirements into hardware on the FPGA. It is customary, though, to simulate the design to get things working before you commit it to the actual hardware.

Because of this, hardware description languages have a bit of a split personality. A subset of the language is synthesizable — the tools can convert this subset into hardware. The rest of the language can't be synthesized; it is for writing simulation code like the test bench.
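
Here's a quick, contrived illustration of the split: the first module below is synthesizable, while the second uses constructs (delays and system tasks) that only mean something to a simulator:

module and2(input a, b, output y);  // synthesizable: describes a real gate
assign y = a & b;
endmodule

module sim_only;           // simulation-only: drives and observes the gate
reg a, b;
wire y;
and2 u1(a, b, y);
initial begin
  a = 0; b = 1;
  #5 a = 1;                // # delays have no hardware equivalent
  #5 $display("y=%b", y);  // neither do system tasks like $display
  $finish;
end
endmodule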

Here's a simple 4-bit parity generator in Verilog:

module par4(input [3:0] bits, output parity);
assign parity = ^bits;  // reduction XOR of all the input bits
endmodule

The expression ^bits is a reduction operator: it exclusive ORs all of the bits in the input word together (the same as bits[3]^bits[2]^bits[1]^bits[0]). It is a pretty easy bet this will work, and a test bench for it is pretty simple:

module test;
reg [3:0] testvector;
wire parity;
par4 dut(testvector, parity);  // instantiate device under test

initial begin
  $dumpfile("dump.vcd");       // output file
  $dumpvars;                   // dump all variables
  // cycle test vector from F down to 1
  for (testvector = 4'b1111; testvector != 4'b0000; testvector = testvector - 1) #1;
  // exit simulator
  #10 $finish;
end
endmodule

You could run the test using any Verilog simulator, such as ModelSim or Icarus Verilog. However, if you don't have one handy, you can use the very cool cloud-based system provided by EDA Playground. That link will take you to the code and test bench, ready to run. Click on the Open EPWave After Run checkbox and you should see this result:

This is simple enough, but it is a pain when you have lots of small modules, and, frankly, it isn't very inspiring. When I used ModelSim, I sometimes used the wave editor to create input vectors and expected outputs. ModelSim can then export the waveform as a testbench. Unfortunately, with open-source tools like Icarus and GTKWave (or EDA Playground, which can use ModelSim as a backend but doesn't provide the wave editor), that's not an option.

If you've read my blog for long, you know I'm pretty lazy about building user interfaces. However, I was thinking that if I had the wave data, it wouldn't be that hard to automatically generate a testbench.
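
Just to sketch the idea (this is not my actual solution, and the vectors.txt file name and format are invented for the example), a generated bench for par4 could be as mechanical as reading input and expected-output pairs from a file and complaining about mismatches:

module autotest;
reg [4:0] vec [0:14];  // each entry holds {bits, expected parity}
reg [3:0] stim;
reg expected;
wire parity;
integer i;
par4 dut(stim, parity);
initial begin
  $readmemb("vectors.txt", vec);  // one 5-bit binary vector per line
  for (i = 0; i < 15; i = i + 1) begin
    {stim, expected} = vec[i];
    #1 if (parity !== expected)
      $display("FAIL: bits=%b parity=%b expected=%b", stim, parity, expected);
  end
  $finish;
end
endmodule

The bench itself is boilerplate; the interesting part is producing the vectors.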

I'll show you my solution next time, but here's a little picture to give you a hint as to my approach:

Before you jump to any conclusions, the spreadsheet is pretty ordinary (although I am using the XWave font). There are no exotic macros or anything. Next time, I'll share the secret.

Comments

Thanks for the example! And how cool is edaplayground? ;-) Made my tablet much more useful.


Thanks for the response.

One thing I've seen is the scope-creep of a throwaway testbench. For example, what starts out as a designer knocking something up to check whether it works gradually grows into something that's not quite throwaway but also not very maintainable - I see this a lot on FPGA projects. That's why I'm heavily in favour of "proper" testbenches ;)

Just out of curiosity, I made a testbench for your example, though you can see it doesn't really offer any saving:

Your example in the article is probably below the threshold of what I'd consider to be worth testing at all. My design style is to work at a higher level of abstraction; it would take far too long to build anything of substance by instantiating individual flops or parity primitives!

Regarding test discovery - Cocotb doesn't auto-generate stimulus. It automatically discovers tests and runs them consecutively. There is also a mechanism for generating tests by varying different parameters. The goal is to capture every test into a regression - by simply decorating a function with "@cocotb.test()", it runs automatically and the result is tracked, so previously throwaway tests become part of a regression suite.

You are correct that Cocotb requires Python knowledge, and that probably complicates matters when teaching. I would argue, though, that the students of today really need to be comfortable working with software (and all the standard software development tools like distributed revision control, diffs and patches, Makefiles, Jenkins, etc.). Without these skills they are at a serious disadvantage in the modern world. Real verification is entirely software!

I'm sure your tool is useful (and I'm very glad to hear it generates self-checking testbenches!). I still think that to keep up with the development pace of the modern world we need to move away from waveforms and schematics. Each to his own though - if you're happy with Excel then it's great you've created a tool and turned it into a flow. Personally, I'll stick with Python :)


I don't disagree that a high-quality, sophisticated test bench is going to be better than a waveform-based approach. However, there are at least two cases where I just want to do a simple test. The first is when I'm building very small building blocks that will be tied together to make a system. Your point that the testbenches are throwaways is exactly right. So if I can just pop out a quick bench for the little registers and latches and get them working, then I start with a palette of good "primitives" when I go to do the real thing.

The second case is when you are teaching an HDL. It is useful for students to build simple test benches without having to resort to non-synthesizable HDL or Python or some other thing you have to teach them in addition to what you are trying to focus on.

By the way, don't be so sure that my Excel tool requires inspection. It also generates self-checking testbenches from the input and output waveforms. I'll be the first to agree that I wouldn't want to build a test for something even moderately complex that way -- it is like the difference between drawing a schematic and writing an HDL description. But I do think quick automatic generation of (yes, self-checking) test benches is a worthwhile tool to have in your arsenal.

Thanks for the comment. Do you have a pointer to a walk through of using Cocotb to automatically discover tests? I'm trying to visualize how that would work in a complex case. In a combinatorial circuit, sure. You just cycle through all the input patterns. But for a sequential case, I can't visualize it. And you still have to write Python, right? At least last time I looked at it.

Stay tuned and see what you think after the next installment.


Unless your testbenches are self-checking, they are essentially throwaway work! A testbench should also adequately cope with parameters that may affect port widths or latency, for example.
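
To sketch what I mean (hypothetical code, in the article's Verilog for consistency): if the DUT's width is a parameter, the bench should pick up the same parameter so that nothing else has to change:

module parN #(parameter WIDTH = 4)(input [WIDTH-1:0] bits, output parity);
assign parity = ^bits;
endmodule

module test_parN;
parameter WIDTH = 6;  // change this one number; the checks adapt
reg [WIDTH-1:0] stim;
wire parity;
integer i;
parN #(.WIDTH(WIDTH)) dut(stim, parity);
initial begin
  for (i = 0; i < (1 << WIDTH); i = i + 1) begin
    stim = i;         // exhaustive stimulus for any width
    #1 if (parity !== ^stim)
      $display("FAIL: bits=%b parity=%b", stim, parity);
  end
  $finish;
end
endmodule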

When we created Cocotb, one of the design goals was to make trivial, low-overhead testbenches useful. There is no wrapper code required - you don't even have to instantiate your DUT or any wiring! Tests are automatically discovered, and results are output in JUnit form to allow Jenkins integration.

Following the above process, we have turned the first bring-up tests, which previously would have very quickly bit-rotted, into a regression suite that continually provides feedback about the quality of the codebase.

I must say I think the industry is generally better off moving away from waveforms. For anything worth testing, the overhead of inspection (or of creating a waveform in Excel) is actually higher than that of creating a simple self-checking testbench, which will also have greater value in the long run.