A 2018 study revealed that 84% of FPGA design projects (including some safety-critical designs) suffered from non-trivial bugs escaping into production, with 10% having four or more bugs released into production.
In this webinar, MathWorks engineers will demonstrate a series of techniques that FPGA design teams in industry are using today to verify correct performance of FPGA designs using MATLAB and Simulink.
MathWorks engineers will demonstrate these techniques using example designs.
Mark Lin is an advanced application engineer supporting ASIC/FPGA workflows who specializes in digital design verification. Mark was a verification engineer at Broadcom for eight years, where he developed full-chip test environments. He earned a BS degree in electrical engineering from California State University, Los Angeles.
Eric Cigan is the principal product marketing manager for ASIC and FPGA verification at MathWorks. Prior to joining MathWorks, he held technical marketing roles at MathStar, AccelChip, and Mentor Graphics. Eric earned BS and MS degrees in mechanical engineering from the Massachusetts Institute of Technology.
Recorded: 25 Aug 2020
Hello, and welcome, everyone. My name is Eric Cigan, and I'm with MathWorks, here in Natick, Massachusetts, USA. I'm presenting today along with my colleague on the west coast, Mark Lin. Today, Mark and I are here to discuss the topic of FPGA verification. This is timely and important information, because FPGA design projects are getting ever larger, and design teams are looking for better tools and methods to help make sure FPGA implementations meet specs and quality requirements.
During this webinar, we will use an image processing example to show how you can use MATLAB and Simulink as golden references for FPGA implementations, how you can use your in-house HDL simulator along with MATLAB and Simulink, how you can take the next step and use your FPGA or SoC development board along with MATLAB and Simulink, and then how you can generate components from your MATLAB or Simulink system testbench for use by downstream hardware design and verification teams.
But first, let's look at some data showing exactly how serious a problem FPGA verification has become. For the last decade, Mentor Graphics has worked with Wilson Research Group to track issues and trends in the area of functional verification with their biennial survey, which regularly gets between 1,000 and 2,000 responses from design teams around the world.
I find this first chart to be really concerning. It shows how often non-trivial bugs escape into production FPGA designs. Think about it: in six out of seven FPGA projects that go to production, there are non-trivial bugs that have been deployed. Worse yet, in one out of 10 projects, there are four or more non-trivial bugs. And if you compare the results for 2016 and 2018 here, you see the trend is going the wrong way. Actually, I'm eager to see Mentor's next report, published soon, to see if this trend is continuing.
If we dig deeper and look at the root causes of functional flaws, we'll see the top three causes are design errors, changes in specifications, and incorrect or incomplete specifications. All three of these causes suggest that the main issue isn't in hardware design. It's in the specifications, the system-level specs that hardware designers rely upon. Using MATLAB or Simulink to evaluate the design and create executable specifications is a good way to head off these sorts of issues.
So here's what we'll cover in the rest of our time here. We'll illustrate FPGA verification workflows with MATLAB and Simulink. We'll demonstrate interactive verification using cosimulation between HDL simulators and Simulink. Then we'll take the next step, using an FPGA development board to perform hardware-based verification with Simulink as the system testbench.
At that point, I'll hand off to Mark. Mark has extensive experience in design verification, so he will be covering how to generate verification components from MATLAB and Simulink and then use them in RTL simulation, typical of what you'd see in a production verification environment. Then we'll do a brief wrap-up. So let's look at one of these workflows. We'll start off looking at a more conventional, typical workflow.
So, here, what we see is that there are two teams within FPGA projects: those focused on R&D, and those focused on hardware design, verification, and implementation. Typically, these teams are divided and really communicate only via manually written specs. Hardware designers manually write their RTL based on these written specs.
And then verification engineers manually create testbench environments to ensure the design meets specs. These may include the verification architecture, stimulus, golden reference models to check against, and system models that are external to the design under test. Often, they spend all this time verifying, and at the end of it, they find out the design doesn't even work in the system context. This can take long iteration cycles to fix.
Now, HDL Verifier, a product from MathWorks, brings together the system context and hardware design. HDL Cosimulation simulates the design in an HDL simulator that's connected to the system-level test environment in MATLAB or Simulink. FPGA-in-the-loop does the same, except the design is loaded into actual hardware for high-fidelity simulation, often with acceleration.
And SystemVerilog DPI component generation exports golden reference models, stimulus, and system models from MATLAB and Simulink to speed verification environment creation, provided that the MATLAB code or Simulink model is compatible with C code generation, which underlies all this.
Finally, the newest method we support is actually generating UVM testbenches and UVM components, again, directly from Simulink. This speeds up verification using the Universal Verification Methodology, which is increasing in popularity.
So let's get to the demo. The demo design is a simple video processing design, a blurring filter. Because it has a lot of data to process, we will drop the resolution down from 1080p to 240p. That'll simulate much faster at this low resolution. Once the design has been proved out, we can then scale it up to 1080p.
Now, let's bring up the model in Simulink. But before we get started, we probably should take a look at the video we're going to be watching in a moment. It's just two seconds of video, but we should see what it looks like for reference.
All right, got it. So let's get on with the demo. All right, so here's the model in Simulink. The 240p video source is on the far left, and it feeds into a behavioral specification model across the middle of the canvas and a hardware oriented implementation model across the bottom of the canvas.
Inside the behavioral model is a 2D FIR filter that will do the image blurring. We look inside the mask, and we can see the filter coefficients in MATLAB syntax. Next, in the implementation model, let's look at the filter that will be our design under test, or DUT. This subsystem also uses a 2D FIR filter, in this case from the Vision HDL Toolbox. The difference between these two is that the blocks in the Vision HDL Toolbox use a streaming pixel interface.
Serial processing is efficient for hardware designs, because less on-chip and DDR memory is required to store pixel data for computation. The streaming interface allows the block to operate independently of image size and format, and makes the design more resilient to video timing errors. Then, on the output from the DUT, we've used a block that deserializes the output of the design, which will then allow us to view the video out and compare it to the behavioral specification model's video out.
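As a rough illustration of what the behavioral blur is doing, here is a minimal Python sketch of a 2-D FIR filter with zero padding at the image borders. The 3-by-3 box-blur kernel below is an assumption for illustration only; the actual coefficients live in the Simulink mask and are not reproduced here.

```python
# Illustrative sketch of a behavioral 2-D FIR blur (pure Python).
# ASSUMPTION: a simple 3x3 averaging kernel; the webinar model uses
# its own coefficients, which are not shown in the transcript.

KERNEL = [[1 / 9] * 3 for _ in range(3)]  # box-blur coefficients

def blur_2d(image):
    """Apply the 3x3 FIR filter with zero padding at the borders."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            acc = 0.0
            for kr in range(3):
                for kc in range(3):
                    rr, cc = r + kr - 1, c + kc - 1
                    if 0 <= rr < rows and 0 <= cc < cols:
                        acc += KERNEL[kr][kc] * image[rr][cc]
            out[r][c] = acc
    return out

# A single bright pixel (impulse) spreads into a uniform 3x3 patch:
impulse = [[0] * 5 for _ in range(5)]
impulse[2][2] = 9
blurred = blur_2d(impulse)
print(round(blurred[2][2], 6))  # center of the blurred patch -> 1.0
```

The impulse test is a quick sanity check: the filter's response to a single pixel is just the kernel itself, scaled by the pixel value.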
We click on the Run button to start simulating. As you can see, we have viewers for the video input, the behavioral model, and the implementation model. You can see the images on the right have been subjected to the blurring filter. Oh, yeah, I took this video a few days ago in my backyard. And it shows my dog, Milo, pretty much begging for food.
Our first verification step is going to be HDL Cosimulation. So just to review that with this diagram, here, we're really keying in on this portion here. And let's go ahead and start. We actually launch the Questa simulator to start. And we bring up the [INAUDIBLE]. And we can do some things to shape that into the form we want. We can do some work on the wave window, shorten the names, get the columns all set up, so that we're easily able to view.
And we can launch the simulation here. And as you see, we're getting the same views as before. But a key thing to keep in mind is there'll be some differences here, because now we've expanded out the ports. So now we're seeing all the detail control signals as well.
And the nice thing is we can actually compare how those are performing between the original reference and the cosimulation model. So you can see the bottom graph on each of these is showing us the difference between the reference model and the cosimulation model. And you can see they're coming out to zero. So that gives us confidence that the RTL implementation is tracking the specification model in Simulink.
So we'll put those aside for a moment. And as you can see, here, we're only up to 9%. OK, so while that's running, let's go ahead and just review what we're doing with HDL Cosimulation. You're verifying your HDL with your MATLAB and Simulink test environment. Usually that's RTL level that you're verifying at this point.
The typical times you would use this would be if you're on one of the systems R&D teams and you don't have very good access to the hardware team; you want to be doing your interactive early verification on your own. The other case is where you do have tight collaboration between your systems team and your hardware design team; in that case, this really gives you a common way to verify at this interactive level between system-level design and the hardware implementation.
Just to add a little bit of color on that: when you're doing cosimulation with a commercial HDL simulator, what you're doing is reusing your existing MATLAB or Simulink testbench, as we saw here, and running all that HDL code in the supported simulator. We've listed the supported simulators there: the Mentor Graphics ModelSim and Questa simulators, and the Cadence Incisive and Xcelium simulators.
What HDL Verifier does is generate the entire cosimulation infrastructure with handshaking. An important point to make here is that you can be running this on a single machine, with the HDL simulator and Simulink running there together. Or, as is often the case, it can be across a network. Say you've got a server farm running under Linux, and you're running your HDL simulators there; you can connect to that using HDL Verifier.
And now let's go back and see how this simulation is doing. OK, so now we can see that we're closing in on being done. We're up to about 95%, 96% now. And we're seeing the last few frames click by. And what to look for here, as we finish up, is that we can get more detailed results looking at the individual signals on the pins.
So here we're looking at the pixel output from the DUT, as well as from one of the control signals coming out. And, again, we're seeing here the cosimulation results versus the reference model. And you can see the differential on the bottom of the graphs on each of those. And it's coming out to zero. So that's for the whole course of the simulation that we just completed now.
OK so that gives us a bit of a look at HDL Cosimulation. The next part of the process we want to do is look at FPGA-in-the-loop verification. I'll show you how to get set up right now.
OK, so we're ready to go with FPGA-in-the-loop. So here's our model set up for FPGA-in-the-loop with the Xilinx VC707 board. That's a Virtex-7 board. In this model, we have the same behavioral model as before. Now, though, we can see the FPGA-in-the-loop block. And we can see this block was actually created by HDL Verifier. It provides the communication channel between the Simulink session and the FPGA board. We used HDL Verifier to set up this communication over Ethernet.
I would show you the board here in my home office right now. But given our work situation, I'm actually running this test on a VC707 board in a lab back at our Lakeside campus in Natick. So, as we saw in the case of HDL Cosimulation, the FPGA-in-the-loop block uses pixel streaming. And the bus with the video control signals has been broken out into its components, serialized at the input and deserialized at the output, and, again, output to the viewer there, so we can see how it compares with the reference.
Before I can actually run FPGA-in-the-loop, though, I still, first, need to program the board with a bitstream. I've already generated the bitstream ahead of time. So we can use the mask for the FPGA-in-the-loop block and program the FPGA on the board.
OK, so once that has been loaded on board, we're all ready to run the simulation or run the FPGA-in-the-loop. So I'll go ahead and kick off the FPGA-in-the-loop run. And we'll see it starting to count on the bottom here. And we'll see, very slowly, of course, movement of Milo here, as he's going through his paces.
So, again, it's like as before. This will take a while. So why don't we break away, and let's go look at a bit of an overview, here, of that FPGA-in-the-loop. So what's the use case overall? It's a way to prototype your algorithm on FPGA that's connected to your MATLAB or Simulink session.
And that's a very important point, subtle point as well. Because really it requires, with FPGA-in-the-loop, that all the inputs coming into your board have to be coming from the MATLAB or Simulink session. You can't have some other video stream coming into the board from elsewhere. It's really all going to be coming from sources in MATLAB or Simulink.
OK, that said, it does address a number of use cases fairly well. Certainly, having it on hardware, as opposed to running it on a logic simulator, is the closest thing you're going to get, in many cases, to actual execution before you've actually really deployed the full design to an FPGA board.
It also provides a proof of concept for an algorithm. Think of showing an algorithm to your management, to your customer. This really provides a great way to show that interactively and gives you a way to visualize the results. So it's a good showpiece to provide an example of an algorithm. And also, in many cases, it can actually accelerate these compute-intensive algorithms, relative to HDL Cosimulation.
In this particular design, that is not the case. It actually runs slightly slower with FPGA-in-the-loop than it does in cosimulation. But that's always going to be a function of the type of design. And if you're interested in learning more about that, follow up with me or with your local MathWorks HDL AE. They can provide more insight on your particular design.
So let's go in a little bit more detail here. So the key benefit for this is being able to reuse your existing MATLAB or Simulink testbench, the same testbench all the way through from initial Simulink simulation, in this case, through HDL Cosimulation now and FPGA-in-the-loop, same all the way through.
We provide the automation to help you program bitstreams into Xilinx, Intel, and Microsemi boards. And to that end, we support quite a variety of commercial off-the-shelf boards right out of the box. We have hardware support packages that ship with HDL Verifier, actually three of them, one each for Xilinx, Intel, and Microsemi. And those have a number of predefined boards. We'll point you to a list of those a bit later in the webinar.
And the communication can be done through gigabit Ethernet, JTAG, or PCI Express. We support relatively few boards with PCIe, because of the development effort involved. But it does perform considerably faster than the other two mechanisms, with gigabit Ethernet being in the middle and JTAG being the slowest.
And speaking of that, let's actually go back to our FPGA-in-the-loop example and see how that's performing. So as you can see, we're getting really down to the wire here, down to really the last few frames of this simulation run. And we're counting up. See a nice little bit of movement here on the part of the dog. And now it's completed.
So the nice thing now is that, once again, as we did with HDL Cosimulation, HDL Verifier provides these nice comparison charts, so you can actually see the reference versus the FPGA-in-the-loop data, at this point for the pixel and for one of the control signals.
So, again, that's really building your confidence. Now you've gone through cosimulation, gone through FPGA-in-the-loop. You're getting good results.
So at this point, what we're ready to do is turn this over to Mark Lin. Again, Mark Lin is an experienced design verification engineer. And what his role here today is to take us to the next step, which is once you've got this design verified from an algorithmic point of view, once you've done this interactive verification-- now, at some point that's going to have to get transferred into a design verification team. They're going to be using a whole different verification environment. Mark's going to tell you about that. Take it over, Mark.
Thank you, Eric. As FPGA designs continue to increase in complexity and size, the need for RTL-level verification becomes more important. Having full design visibility enables us to capture bugs that are otherwise difficult to capture with traditional probing techniques. However, this visibility does come at a cost, in terms of the effort required to create the testbench, the test cases, the checking mechanism, and the execution of the test cases themselves.
As Eric has mentioned, the root causes of non-trivial bugs stem from design and specification errors. However, by using Simulink, we can easily create tests and checking mechanisms to capture these bugs and validate their fixes. This effort can then be exported down to RTL-level simulation for verification, reducing the effort required for this particular test.
DV, or design verification, is the number one time sink in any design project. For ASICs, you can expect about 50% of project time spent on design verification, and for FPGAs, about 46% to 47% of your time spent on DV. However, in practical experience, I have found this number to be dramatically higher. I have experienced about 75% to 80% of the project time spent just doing DV.
And my DV effort really breaks down this way: 37% of my time is spent doing debug; 24% of my time is spent just creating tests and running my simulations; 22% of my time is spent creating the testbench and debugging my test environment, making sure that it works the way I expect it to; and 14% I would spend on test planning with my colleagues and other people within the project.
The way Simulink would like to help is by contributing to or speeding up your testbench development process, making your test creation a little easier, and running your test cases a little faster. The way we would like to do this is really by the concept of Shift Left. The idea is this: as you go down from high-level abstraction into low-level implementation, going from spec to modeling to RTL-level capture and all the way down to netlist, you'll find that your design space becomes larger and larger.
Your ability to maneuver around that space, capture bugs, and react to problems becomes slower and slower. So the idea is this: we would like to be able to do our design verification and debug at the model level, shifting left to the highest level of abstraction possible.
Therefore, we will leverage HDL Verifier's ability to generate C for the direct programming interface, or DPI, which is a SystemVerilog standard for importing foreign-language code into an RTL simulation, as well as HDL Verifier's UVM testbench generation capabilities.
OK, continuing with our image processing model, I have made some additions and changes to get it ready for DPI-C code generation. Now, at the time I was working on this model, Milo was not available. So I was forced to record my own pet in the backyard, Rhino, instead. And you can see that, visually, we can easily detect that Rhino has been sufficiently blurred by our blurring filter here in the red.
However, if you want to do this programmatically with SystemVerilog, it's actually quite difficult to do. Now, MATLAB and Simulink have lots of functions, subsystems, and analytical tools that allow us to do image quality checks or various other forms of analysis that we can leverage.
In this case, I happened to choose the simple peak signal-to-noise ratio as my pass/fail criterion. Now, I arbitrarily chose 17 as the threshold to say whether my image has been sufficiently blurred. The point here is I don't have to go and write a peak signal-to-noise ratio function by hand. I simply export this functionality into my RTL-level environment, and I'm ready to go.
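The PSNR check described here can be sketched in a few lines. This is a hedged illustration, not the generated code: the 17 dB threshold mirrors the arbitrary cutoff chosen above, and the function names and sample images are invented for the example.

```python
# Sketch of a peak signal-to-noise ratio (PSNR) pass/fail check.
# ASSUMPTION: 8-bit grayscale images as nested lists; names illustrative.
import math

def psnr(reference, test, peak=255.0):
    """PSNR in dB between two equally sized grayscale images."""
    flat_ref = [p for row in reference for p in row]
    flat_tst = [p for row in test for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_tst)) / len(flat_ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak * peak / mse)

def sufficiently_blurred(reference, blurred, threshold=17.0):
    # A low PSNR vs. the original means the image changed a lot,
    # i.e. the blur actually did something.
    return psnr(reference, blurred) < threshold

original = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]
flattened = [[128] * 3 for _ in range(3)]  # heavily blurred stand-in
print(sufficiently_blurred(original, flattened))
```

An unchanged image has infinite PSNR and therefore fails the "blurred enough" check, which matches the intent described in the transcript.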
In addition, just because I have lots of functions and capabilities at my disposal from MATLAB and Simulink, it's also important to architect my model in a way that makes sense for RTL-level verification. For example, here on my left-hand side, I have three blocks that are contributing to the inputs of my blur filter. I want to group them into a subsystem and name it my sequence block.
And the rest, here, that are consuming the output of my filter, I will group into a subsystem and name it my scoreboard. OK.
Now I am architecturally ready to generate C code. So, for example, I want to generate DPI-C for just the sequence block. What I would like to do here, then, is make sure that my code generation target has been set to SystemVerilog DPI, by going into the code generation system target file and selecting the SystemVerilog DPI target. Once that is done, it is a matter of right-clicking my block of interest, going down to C code, and building for the subsystem.
Now, any variable used inside my sequence block will be detected automatically, and we will either hard-code the value into the C code, or there is a mode where we will generate a function so you can tune these values during the execution of your DPI-C code in your simulation. So by clicking Build, the tool will then go ahead and generate the C code.
OK, now once the C code is generated, what you'll get is a set of functions that represent the functionality of your sequence block. The first function that we generate is our initialization function. This function handles the memory allocation of all the space the simulator will require to model the behavior of the Simulink model.
The reset function is meant to be called during the reset phase of our design. Whenever reset is asserted, this function will reset all the outputs of the C model. The output function, when called, will deliver the actual calculation of the C model. And then, called immediately after, is the update function, which samples the inputs to our C model, getting it ready for the next output calculation.
Terminate is the reverse of initialization. It basically deallocates any memory in the simulator that we have used so far. This is useful when you are running your tests in series over and over again, and you want to put a period behind that sentence so that you can start the test anew.
Here's an example of how these functions can typically be integrated into a SystemVerilog module. During the initial block, we call our initialization function, getting our memory locations and spaces set up. At our final stage, where our test is ready to exit, we go ahead and do the cleanup of our memory locations.
During reset, we call our reset function, so that all the outputs of our DPI-C model are cleared to their default values. And when the clock is running, and once the system is enabled, we call the output function to deliver the calculation as done inside the C model, and then immediately call the update function, so that we can sample the next inputs for the upcoming calculation.
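The call ordering described above (initialize once, reset while reset is asserted, output followed by update on every enabled clock, terminate at the end) can be mimicked in a small conceptual sketch. This Python class is purely illustrative; the real generated functions are C code called from SystemVerilog through the DPI, and the one-cycle-delay component modeled here is an invented example.

```python
# Conceptual sketch of the generated DPI-C function lifecycle:
# initialize / reset / output / update / terminate. Names illustrative.

class DpiStyleModel:
    """Models a one-cycle-delay (registered) pass-through component."""

    def initialize(self):          # allocate state ("memory") for the model
        self.state = 0
        return self

    def reset(self):               # drive outputs back to their defaults
        self.state = 0

    def output(self):              # deliver the calculation for this cycle
        return self.state

    def update(self, new_input):   # sample inputs, ready for the next cycle
        self.state = new_input

    def terminate(self):           # free resources (a no-op in Python)
        del self.state

model = DpiStyleModel().initialize()
results = []
for stimulus in [3, 7, 11]:        # one "clock cycle" per loop iteration
    results.append(model.output()) # output first ...
    model.update(stimulus)         # ... then update, as in the SV template
model.terminate()
print(results)  # registered one-cycle delay: [0, 3, 7]
```

The output-then-update ordering is the point of the sketch: outputs for the current cycle are delivered before new inputs are sampled, which is what gives the model its registered, cycle-accurate behavior.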
And from this model alone, we see that we can generate DPI-C code to serve as the stimulus-generating portion of our design, using it to stimulate our RTL. But in addition, you can also generate a DPI-C model from the filter itself. For example, the C model generated from the blur filter can serve as your reference model for, maybe, a legacy RTL or hand-coded RTL that you have in your arsenal, right? You can use it to check whether RTL that was written by somebody else is functionally correct.
And finally, for the scoreboard, you can leverage that checking mechanism, the peak signal-to-noise ratio, to check the validity of the functionality of your blur filter, all through code generation, saving you the time of manually coding this functionality into your environment, OK?
Now, a minute ago, I mentioned the use of the Universal Verification Methodology, or UVM. This is typically an ASIC form of verification. It is really designed to do robustness testing, and it is designed with reuse in mind, meaning that any portion of this way of writing verification tests can be reused from project to project, regardless of how different these projects are from each other, given that they're written correctly.
Now, another benefit of using UVM is that it can be used to automatically create test cases by leveraging the constrained-random features of SystemVerilog, so that you can hit these test spaces and design features without having to actually write the code yourself, right? However, all this benefit comes at a very, very high initial cost for engineers to build these types of testbenches properly and then to execute and debug them, right?
The test case creation is also very heavy. Oftentimes, DV engineers spend most of their time, after building their testbench, creating test sequences and creating the scoreboard to validate their design. In addition, the environment itself can get very, very complicated, because it is an object-oriented framework written in SystemVerilog.
You have a lot of moving parts working together to test your environment. And especially when your design is large, you may have many, many environments and many, many components and moving parts working together to do your testing.
So, with that, what we'll do here is try to alleviate some of that complexity by automatically generating these components and giving you an executable UVM test case out of the gate, so you don't have to worry about building the test case and the environment all from scratch.
Right at RTL drop, you can actually deliver an executable UVM environment that you can either use yourself or give to your design verification engineer. And then they can go and extend from that test case to do even more than what you have already done, right?
So the way we integrate here is, remember that sequence block earlier that I generated DPI-C code for? Well, when we generate UVM, underneath the hood we're really just leveraging DPI-C. So that sequence block that I generated DPI-C for, I'm going to bury that into what we call a UVM sequence, here. The sequence is, essentially, the sequence of stimulus that will be driven into your design, your DUT.
The scoreboard block that you saw we can generate DPI-C for, we will embed that DPI-C into the UVM scoreboard, which is another name for the checking mechanism that validates the response of your DUT, right?
Now, these two blocks alone are not enough to be a UVM test. There are other blocks involved in a typical UVM test. There is the sequencer, the arbiter of all the types of stimulus going into your DUT; the driver, the mechanism that actually drives the transactions into your design; and then the monitor, which samples the response and brings it back into the environment, all encapsulated within the UVM environment itself.
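To make those roles concrete, here is a toy analogue of the sequence, driver, monitor, and scoreboard flow. It is a sketch only: real UVM components are SystemVerilog classes with phasing and TLM ports, and the doubler DUT with its injected bug is invented purely for illustration.

```python
# Toy analogue of the UVM roles: sequence -> driver -> DUT -> monitor
# -> scoreboard. Purely illustrative; real UVM is SystemVerilog.

def sequence_items():
    """Sequence: generates the stimulus transactions."""
    yield from [1, 2, 3, 4]

def dut(transaction):
    """Stand-in DUT: a doubler with a deliberately injected bug on input 3."""
    return transaction * 2 if transaction != 3 else 99

def reference_model(transaction):
    """Golden reference the scoreboard checks against."""
    return transaction * 2

failures = []
for item in sequence_items():          # sequencer arbitrates items ...
    response = dut(item)               # ... driver drives them into the DUT
    expected = reference_model(item)   # monitor samples the response,
    if response != expected:           # scoreboard compares vs. reference
        failures.append((item, expected, response))

print(failures)  # -> [(3, 6, 99)]
```

The scoreboard's job in this loop is exactly what the transcript describes: compare each sampled DUT response against the golden reference and record anything that disagrees.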
Now, the way we generate this environment is very similar to what we have shown earlier with the image processing model. Again, like the image processing model, say we have a counter here, you have to tell us what your stimulus-generating components are and what your verification components are. Group them into subsystems, exactly like we have already done with our image processing model. And from there, we will be able to generate the UVM components for you, OK? And this is what it will look like in a demo.
Let me demo this with the image processing model. What you will get is an executable UVM environment that looks like this. You provide the design under test, and we automatically generate a SystemVerilog interface file for that design under test.
We automatically create the driver and the sequence based on your Simulink model, automatically create the monitors that will be sampling the response of your DUT, automatically wrap them up in an agent component for reusability, generate the scoreboard component based on your Simulink model, and wrap it all up in an environment component for reusability, because the environment could differ from one project to another, or between revisions.
And then, finally, we deliver a fully executable UVM test case that you could either execute yourself or give to your verification engineers to do the actual verification workload.
Let's take a look at what this would look like in practice with our image processing model. Going back to our image processing model, we will be leveraging HDL Verifier's uvmbuild command to generate the UVM testbench. To use this command, we have to tell it where our design under test block is, where our sequence block is, and where we kept our scoreboard.
Here, I have captured the hierarchical paths from the model for our DUT, sequence, and scoreboard and passed them to the uvmbuild command. And once we execute this, the system will go and build the UVM architecture that I described earlier.
OK, now that it's done, let's take a look at what we got generated. You will find all the UVM artifacts within a directory suffixed with UVM build, under the testbench. And here is the directory hierarchy for the scoreboard.
Let's take a look at what's inside the sequence. Inside the sequence, we'll see that this is a fully functional UVM component, where the actual heavy lifting is done by these two DPI-C functions. These DPI-C functions are generated from your Simulink model, exactly as you have put them down in the image processing model.
We simply leverage that C code to generate the sequence of transactions going down to the driver. We call the DPI, send the transaction, and wait for the driver to deliver that transaction into the design under test.
In the scoreboard, things are done in reverse. Transactions are pulled from the monitor components. These transactions are then sent to the DPI-C model of the scoreboard. The scoreboard then returns a pass/fail result, determined by a case statement here in these lines, and indicates to the simulator whether something has passed or failed.
By default, the behavior here is "no news is good news." If the DPI-C model does not detect any error, then no message is printed, reducing the amount of clutter inside your log file, which will be your primary mode of debug: you look for these pass and fail messages inside the log file.
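The pass/fail check described above boils down to something like this sketch. The DPI-C function name and the status encoding are hypothetical; the generated code defines its own.

```systemverilog
// Illustrative sketch of the generated scoreboard check.
// DPI_scoreboard_output and the status codes are placeholder names.
function void check_transaction(my_txn actual);
  int status;
  status = DPI_scoreboard_output(actual.data);  // compare against the Simulink reference model
  case (status)
    0: ;  // pass: "no news is good news," nothing is printed
    default:
      `uvm_error("SCOREBOARD", "Mismatch against Simulink reference model")
  endcase
endfunction
```

Keeping passes silent means any `uvm_error` in the log is immediately meaningful.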
Now, one thing that we've done inside the UVM code generation, besides creating these DPI-C functions and embedding them into the UVM environment, is that we handle a lot of the timing synchronization when comparing transactions from the reference model with the design. Sometimes the sequence can produce output much, much faster than what your RTL code is capable of doing.
For example, you might have a design with a certain amount of latency, while the behavioral functional model might not have that latency. The way we address that is that we automatically create a FIFO representation within the UVM environment, and we only compare when there's something in that FIFO, effectively removing the timing element from the testing.
So, in a nutshell, any UVM test case that we generate is timing-agnostic, enabling you to reuse it across different projects that might have different clocking requirements.
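The latency-decoupling idea can be sketched as a queue of expected results that waits for the DUT to catch up. All names here are illustrative, and `compare` stands in for the DPI-C scoreboard check.

```systemverilog
// Sketch of FIFO-based timing decoupling: expected outputs from the
// reference model queue up until the DUT produces an actual output,
// so absolute latency never enters the comparison.
my_txn expected_q[$];  // FIFO of reference-model outputs

function void write_expected(my_txn t);
  expected_q.push_back(t);  // the reference model may run far ahead of the RTL
endfunction

function void write_actual(my_txn t);
  if (expected_q.size() > 0)
    compare(expected_q.pop_front(), t);  // compare only when both sides have data
endfunction
```

Because comparisons fire on data availability rather than on clock cycles, the same test case works regardless of pipeline depth or clock frequency.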
After generating the UVM testbench environment, we also automatically generate the command lines required to run these test cases in your particular simulator. We support Cadence Incisive, Cadence Xcelium, Mentor Graphics Questa Sim, and Synopsys VCS. Here, I'm showing an example of how one would go about executing the generated UVM test case in the Cadence Incisive tool.
We simply say, this is my unit of testing, automatically generated from your Simulink model. We pass the DPI-C libraries that were automatically generated and compiled by Simulink to the simulator, along with their corresponding SystemVerilog files, and away we go. From this point on, the simulator will run exactly what you have captured inside Simulink, in the form of UVM.
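An invocation along these lines is what that looks like on the command line. The file names, library name, and test name below are placeholders for the artifacts uvmbuild generates; the exact flags come from the generated run script.

```shell
# Illustrative Cadence Incisive (irun) invocation; all file, library,
# and test names are hypothetical placeholders.
#   top.sv, dpi_component.sv : generated UVM testbench sources
#   dpi_component.so         : DPI-C library compiled from the Simulink model
irun -64bit -uvm -access +r \
     top.sv dpi_component.sv \
     -sv_lib dpi_component.so \
     +UVM_TESTNAME=my_dut_test
```

The same structure applies to the other supported simulators; only the tool name and option syntax change.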
Let's come back and take a look at the architecture of the UVM test case that we generated. By looking at this, one question people might have is: how do you know what interface is being used by the design? It could be AXI, AHB, PCIe, or SPI. How would we know from a Simulink perspective? The answer is, we don't. We do not know what the user's interface will be, so we fully expect users to replace that driver with something they have created themselves.
The driver and the monitor are the only UVM components that we generate that we expect a user to replace with something specific to their design. If you are using AXI, you will write an AXI driver that understands the transactions coming down the pipe. Or, if you move to a different project that uses, say, a PCIe interface, you will rewrite that driver to understand PCIe. Everything else that gets generated remains the same and is completely reusable.
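A replacement driver might look something like the skeleton below. The class name, transaction type, interface, and drive task are all hypothetical; only the standard UVM driver structure is fixed.

```systemverilog
// Sketch of a user-written, protocol-specific driver that replaces the
// generated placeholder. axi_driver, my_txn, axi_if, and
// drive_axi_write are illustrative names.
class axi_driver extends uvm_driver #(my_txn);
  `uvm_component_utils(axi_driver)

  virtual axi_if vif;  // your board- or protocol-specific interface

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);  // generated sequence still feeds us
      drive_axi_write(req);              // protocol-specific pin wiggling
      seq_item_port.item_done();
    end
  endtask
endclass
```

Because the driver consumes the same transactions the generated sequence produces, swapping protocols means rewriting only this class and the matching monitor.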
Now, in terms of randomization: I mentioned that randomization is a big part of UVM, because it allows DV engineers to create additional tests to exercise the design under test and test the robustness of the design. What we do here is, for any MATLAB variable that you use in your Simulink environment, we automatically generate a function with the setparam suffix. This function can be called anywhere in your UVM classes to change the value of that variable as you set it in the Simulink model.
Specifically, if a DV engineer has created a random constraint that says, "I want to sweep the values between one and a thousand," or has a set of specific values to use for this parameter, they can create that as a random constraint and pass the randomized value to the variable via its setparam function. The automatically generated test case will then take on that new parameter, producing a whole different set of behaviors to test your design.
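In sketch form, a test that randomizes a tunable parameter might look like this. The base test, parameter name, and generated setter name are all hypothetical placeholders for what uvmbuild emits for your model.

```systemverilog
// Sketch of randomizing a Simulink tunable parameter from a UVM test.
// my_generated_test, gain, and gain_setparam are illustrative names;
// the generated setter follows the <variable>_setparam pattern
// described above.
class randomized_test extends my_generated_test;
  `uvm_component_utils(randomized_test)

  rand int unsigned gain;
  constraint c_gain { gain inside {[1:1000]}; }

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    assert(this.randomize());  // pick a constrained-random value
    gain_setparam(gain);       // push it into the DPI-C reference model
    super.run_phase(phase);    // run the generated test with the new value
  endtask
endclass
```

Each run with a different seed then exercises the design, and the reference model, with a different parameter value.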
And that's how we can accelerate the development of your UVM environment.
All right, so we covered everything we wanted to cover here today. We talked about verification workflows for FPGAs with HDL Verifier. We looked at cosimulation with Mentor Questa. We showed FPGA-in-the-loop testing with a Xilinx development board. And then I handed it off to Mark, who covered how you can take that system testbench and leverage it in downstream verification environments doing various forms of verification.
So now where can you learn more? The best single place to go to is MathWorks.com/verify. That's the HDL Verifier product page. And there you'll see videos, examples, lists of the supported FPGA and SOC development boards, as well as the ability to reach out to sales and request a trial.
Let's take a look at the product page. Right up here at the top, you'll see an index to the different sections. You'll see information on HDL cosimulation, information on generating UVM and SystemVerilog verification components, more on FPGA-in-the-loop and other forms of hardware testing, and how to use HDL Verifier with HDL Coder, which is basically what we did here today. And then there's more information on other features that we didn't have a chance to get into.
But all the way down here at the bottom, you'll see a link to documentation. And let's just go ahead and take a look at that. So this is the HDL Verifier documentation set. Anybody can go and check it anytime they want. And this really gets into all the different forms of verification that we got into here today.
One thing I particularly want to point to is the Getting Started page. This will take you to some basic tutorials and examples. But an important thing here is the list of supported EDA tools, verification tools, and FPGA boards. If you scroll all the way down, you'll see all this information presented very cleanly: a very long list of boards from Xilinx, Intel, and Avnet, as well as Microsemi.
So thank you for your time today. Once again, visit MathWorks.com/verify, and hopefully we can address all your verification needs. Thank you very much.