The basic function of the scoreboard is to check the correctness of the output data of the design under test. The scoreboard you create should derive from uvm_scoreboard; however, uvm_scoreboard currently adds no functionality beyond uvm_component.
You may be wondering why it is worth mentioning a class that adds no functionality. What matters is how the scoreboard retrieves its data for comparison. To better understand how this is done, let's further examine analysis ports and analysis exports.
If you recall from the monitor, there was an analysis port used to broadcast the collected data.
An analysis port is a TLM communication port that has a write function. In the monitor, we collected the data, cloned it, and used the port’s write function to broadcast this data to any subscribers. The subscriber in this case is the scoreboard. It will pick up this broadcasted data via its analysis export.
The analysis export of the subscriber or scoreboard must implement the write function. One way to do this is by using a uvm_tlm_analysis_fifo. The benefit of using the FIFO is that it has an analysis export, implements the needed write function, and has an unbounded queue for storing transactions. Let’s review the declarations for this FIFO.
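The declarations below are the ones used in the scoreboard class that follows:

```systemverilog
// Unbounded analysis FIFOs, parameterized with the data_packet transaction type.
// Each FIFO provides an analysis export and implements the required write function.
uvm_tlm_analysis_fifo #(data_packet) input_packets_collected;
uvm_tlm_analysis_fifo #(data_packet) output_packets_collected;
```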
The first line declares an analysis FIFO called input_packets_collected that is parameterized with the data_packet type. This FIFO collects transactions from the input monitor; similarly, output_packets_collected collects the output data.
Let’s look at the entire scoreboard implementation and discuss it.
class pipe_scoreboard extends uvm_scoreboard;

  uvm_tlm_analysis_fifo #(data_packet) input_packets_collected;
  uvm_tlm_analysis_fifo #(data_packet) output_packets_collected;

  data_packet input_packet;
  data_packet output_packet;

  `uvm_component_utils(pipe_scoreboard)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction: new

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    input_packets_collected  = new("input_packets_collected", this);
    output_packets_collected = new("output_packets_collected", this);
    input_packet  = data_packet::type_id::create("input_packet", this);
    output_packet = data_packet::type_id::create("output_packet", this);
    `uvm_info(get_full_name(), "Build Stage Complete", UVM_LOW)
  endfunction: build_phase

  virtual task run_phase(uvm_phase phase);
    watcher();
  endtask: run_phase

  virtual task watcher();
    forever begin
      input_packets_collected.get(input_packet);
      output_packets_collected.get(output_packet);
      compare_data();
    end
  endtask: watcher

  virtual task compare_data();
    bit [15:0] exp_data1;
    bit [15:0] exp_data2;
    if ((input_packet.data_in1 == 16'h0000) || (input_packet.data_in1 == 16'hFFFF)) begin
      exp_data1 = input_packet.data_in1;
    end
    else begin
      exp_data1 = input_packet.data_in1 * input_packet.cf;
    end
    if ((input_packet.data_in2 == 16'h0000) || (input_packet.data_in2 == 16'hFFFF)) begin
      exp_data2 = input_packet.data_in2;
    end
    else begin
      exp_data2 = input_packet.data_in2 * input_packet.cf;
    end
    if (exp_data1 != output_packet.data_out1) begin
      `uvm_error(get_type_name(), $sformatf("Actual output data %0h does not match expected %0h", output_packet.data_out1, exp_data1))
    end
    if (exp_data2 != output_packet.data_out2) begin
      `uvm_error(get_type_name(), $sformatf("Actual output data %0h does not match expected %0h", output_packet.data_out2, exp_data2))
    end
  endtask: compare_data

endclass: pipe_scoreboard
The first portion of this code should look familiar. I have declared the analysis FIFOs and an input and output packet, and I have the constructor and the build phase to create the objects. Please note that the uvm_tlm_analysis_fifos are instantiated using their constructor, with this as the parent, rather than through the factory.
The run phase simply calls a task named watcher, which contains a forever loop. The watcher task first waits for an input packet using the blocking get function of the uvm_tlm_analysis_fifo, which returns the transaction in input_packet. Once it has the input_packet, it blocks until it has the output_packet. It then calls compare_data, which compares the output data against the expected output computed from the DUT's algorithm.
As with the example DUT, this is a simple scoreboard, but it illustrates the communication between an analysis port and an export. Let's review a second example with a coverage object.
The coverage object will extend the uvm_subscriber class and be parameterized with the data_packet. Since the object is of type uvm_subscriber, it has an analysis_export and must implement the write function. Let's review a simple example.
class pipe_coverage extends uvm_subscriber #(data_packet);

  data_packet pkt;
  int count;

  `uvm_component_utils(pipe_coverage)

  covergroup cg;
    option.per_instance = 1;
    cov_cf:   coverpoint pkt.cf;
    cov_en:   coverpoint pkt.enable;
    cov_in1:  coverpoint pkt.data_in1;
    cov_in2:  coverpoint pkt.data_in2;
    cov_out1: coverpoint pkt.data_out1;
    cov_out2: coverpoint pkt.data_out2;
    cov_dly:  coverpoint pkt.delay;
  endgroup: cg

  function new(string name, uvm_component parent);
    super.new(name, parent);
    cg = new();
  endfunction: new

  function void write(data_packet t);
    pkt = t;
    count++;
    cg.sample();
  endfunction: write

  virtual function void extract_phase(uvm_phase phase);
    `uvm_info(get_type_name(), $sformatf("Number of Coverage Packets Collected = %0d", count), UVM_LOW)
    `uvm_info(get_type_name(), $sformatf("Current Coverage = %0f", cg.get_coverage()), UVM_LOW)
  endfunction: extract_phase

endclass: pipe_coverage
The pipe_coverage class has a typical covergroup with coverpoints for the elements in the data packet. The write function receives an instance of data_packet from the monitor through the analysis export. It assigns that instance to the class member pkt, increments the received count, and calls the covergroup's sample function.
During the extract phase, which occurs after the run phase has completed, I retrieve how many packets were sampled and the coverage data. The get_coverage function will give you the percentage covered.
You must instantiate this coverage class in your environment and use the connect function to enable communication between the analysis port and export.
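A minimal sketch of those connections in the environment's connect_phase is shown below. The instance names (in_monitor, out_monitor, sb, cov) and the monitor port name item_collected_port are assumptions for illustration; substitute the names used in your environment.

```systemverilog
// Wire the monitors' analysis ports to the scoreboard's FIFO exports
// and to the coverage subscriber. Instance names are illustrative.
function void connect_phase(uvm_phase phase);
  super.connect_phase(phase);
  in_monitor.item_collected_port.connect(sb.input_packets_collected.analysis_export);
  out_monitor.item_collected_port.connect(sb.output_packets_collected.analysis_export);
  in_monitor.item_collected_port.connect(cov.analysis_export);
endfunction: connect_phase
```

An analysis port may be connected to any number of subscribers, which is why the same monitor port can feed both the scoreboard and the coverage object.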
If you have planned at the beginning and written stimulus for the coverage goals, then this number should be fairly high. You can use your simulator’s coverage analysis tool to examine the holes you have missed and grow your test library. In the next chapter, we will construct a test library using the sequences we developed earlier.
One of the most frequently asked UVM questions is “How do I run a test?” To begin answering that question, let's define a test. A test instantiates the environment. Each test is a class that derives from uvm_test, and a test library is simply a collection of tests that stimulate the DUT. When building a test library, I prefer to start with a base test from which other tests can derive. This base test includes elements that are required by all tests, such as the environment.
Let’s review an example.
class base_test extends uvm_test;

  `uvm_component_utils(base_test)

  dut_env env;
  uvm_table_printer printer;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction: new

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    env = dut_env::type_id::create("env", this);
    printer = new();
    printer.knobs.depth = 5;
  endfunction: build_phase

  virtual function void end_of_elaboration_phase(uvm_phase phase);
    `uvm_info(get_type_name(), $sformatf("Printing the Test Topology : \n%s", this.sprint(printer)), UVM_DEBUG)
  endfunction: end_of_elaboration_phase

  virtual task run_phase(uvm_phase phase);
    phase.phase_done.set_drain_time(this, 1500);
  endtask: run_phase

endclass: base_test
In my build_phase function, I instantiate the env and a uvm_table_printer that prints the test topology in the end_of_elaboration_phase. Printing the topology can be great for debugging your hierarchy. I have set its verbosity to UVM_DEBUG so that it prints only when I need to debug.
Finally, in the run_phase of the base test, we set a drain time. This is adding simulation time to allow all elements to complete after the final objection has been lowered. We will examine objections momentarily.
Let’s take a look at the first test to derive from the base test.
class random_test extends base_test;

  `uvm_component_utils(random_test)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction: new

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
  endfunction: build_phase

  virtual task run_phase(uvm_phase phase);
    random_sequence seq;
    super.run_phase(phase);
    phase.raise_objection(this);
    seq = random_sequence::type_id::create("seq");
    seq.start(env.penv_in.agent.sequencer);
    phase.drop_objection(this);
  endtask: run_phase

endclass: random_test
In the run_phase of this test, we first declare a handle to a sequence object of type random_sequence. The random_sequence class simply creates a random data_packet and sends it to the sequencer.
After calling super.run_phase, we raise an objection with the raise_objection method. The objection mechanism is used to communicate when it is safe to end a phase; raising an objection indicates that the phase is still in progress. After the objection is raised, the sequence is created using the factory and then launched with the start method. Notice that the argument to start is the sequencer on which this particular sequence runs. Once the sequence has completed, drop_objection is called, indicating it is now safe to end the phase.
You may have noticed that we deviated from the norm here by creating our sequence object in the run phase and not the build phase. Sequences do not have phases and are not elements that need to persist throughout the simulation. Although you can create them in the build phase, it is more appropriate to do so in the run phase so that they can be created and destroyed as needed.
STARTING A TEST
You now know how to create a test. To actually start the test, a task called run_test is called from the initial block in your top-level module.
This task either takes the test name as a string argument or, more commonly, you specify the test name on the command line with +UVM_TESTNAME.
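Both options look like this. Note that when a hard-coded name and the plusarg are both present, the +UVM_TESTNAME value on the command line takes precedence:

```systemverilog
initial begin
  // Option 1: hard-code the test name as a fallback
  run_test("random_test");

  // Option 2: call run_test() with no argument and select the test
  // on the simulator command line with +UVM_TESTNAME=random_test
end
```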
For example: +UVM_TESTNAME=random_test
Let’s review the top level for our testbench example.
module top;

  import uvm_pkg::*;
  import pipe_pkg::*;

  bit clk;
  bit rst_n;

  pipe_if ivif(.clk(clk), .rst_n(rst_n));
  pipe_if ovif(.clk(clk), .rst_n(rst_n));

  pipe pipe_top(.clk(clk),
                .rst_n(rst_n),
                .i_cf(ivif.cf),
                .i_en(ivif.enable),
                .i_data1(ivif.data_in1),
                .i_data2(ivif.data_in2),
                .o_data1(ovif.data_out1),
                .o_data2(ovif.data_out2));

  always #5 clk = ~clk;

  initial begin
    #5  rst_n = 1'b0;
    #25 rst_n = 1'b1;
  end

  assign ovif.enable = ivif.enable;

  initial begin
    uvm_config_db#(virtual pipe_if)::set(uvm_root::get(), "*.agent.*", "in_intf", ivif);
    uvm_config_db#(virtual pipe_if)::set(uvm_root::get(), "*.monitor*", "out_intf", ovif);
    run_test();
  end

endmodule
In module top, I have imported the uvm package and the pipe package which contains all the class declarations needed for simulation. I’ve instantiated the input and output interfaces as well as the DUT. In the initial block, the configuration database is used to store the interfaces.
As a review, the input interface is made available to both the driver and monitor since they are instantiated by the agent. The output interface is only available to the monitor.
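On the receiving side, a component retrieves its interface during build_phase with uvm_config_db::get. A minimal sketch, assuming the monitor stores the handle in a virtual-interface member named vif (the member name is an assumption; the field name must match the "in_intf" string used in the set call):

```systemverilog
// In pipe_monitor: fetch the virtual interface stored by module top.
virtual pipe_if vif;

function void build_phase(uvm_phase phase);
  super.build_phase(phase);
  if (!uvm_config_db#(virtual pipe_if)::get(this, "", "in_intf", vif))
    `uvm_fatal(get_type_name(), "Virtual interface 'in_intf' not found in config DB")
endfunction: build_phase
```

Checking the return value of get and issuing a `uvm_fatal on failure catches wiring mistakes at build time rather than as null-handle errors during the run phase.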
Finally, we have the call to run_test, which creates the test based on its name and then builds the component hierarchy top-down through the various build_phase methods.
As noted above, the pipe package contains all the class declarations needed for simulation. You might wonder about test_lib.sv and pipe_sequence_lib.sv: these are simply all of the tests and sequences collected into single files for better readability.
package pipe_pkg;

  import uvm_pkg::*;
  `include "uvm_macros.svh"

  `include "data_packet.sv"
  `include "pipe_driver.sv"
  `include "pipe_monitor.sv"
  `include "pipe_sequencer.sv"
  `include "pipe_agent.sv"
  `include "pipe_scoreboard.sv"
  `include "pipe_coverage.sv"
  `include "pipe_env.sv"
  `include "dut_env.sv"
  `include "pipe_sequence_lib.sv"
  `include "test_lib.sv"

endpackage: pipe_pkg
You now have a fully functional UVM testbench! I hope this example, and the walkthrough of how the testbench is built, helps you understand how a UVM architecture comes together. You may come across concepts in this testbench that are new to you, so please refer to my other UVM blog posts to understand them. Keep on learning and keep on growing. See ya, and stay safe 🙂