Directed verification with a set of hand-written tests is extremely time-consuming, and the test suite becomes difficult to maintain as designs grow more complex. Directed tests cover only the scenarios the verification team anticipated while going through the specification. Missed scenarios can lead to costly re-spins, and even then there is a real risk of missing the time-to-market window, which is extremely painful.
In directed verification, engineers spend a good amount of time understanding the design's functionality and identifying the verification scenarios needed to cover it. Once the scenarios are identified, they define the directed testbench architecture. Traditionally, Verification IP (VIP) works in a directed test environment by acting on specific testbench commands such as read, write, or other operations, generating transactions for a specific protocol. This type of directed testing verifies that an interface behaves as expected in response to valid and invalid transactions. The bigger risk is that directed tests check only predicted behavior, so scenarios missed during the identification phase can surface later as extremely costly bugs found in silicon.
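To make the contrast concrete, a directed test is essentially a fixed script of stimulus: every command, address, and data value is hand-picked. A minimal sketch in Python, using a hypothetical toy `Dut` model with `write`/`read` methods (the names are illustrative, not from any real VIP or simulator API):

```python
class Dut:
    """Toy register-file model standing in for the real design under test."""
    def __init__(self):
        self.mem = {}

    def write(self, addr, data):
        self.mem[addr] = data

    def read(self, addr):
        # Unwritten addresses return 0 in this toy model.
        return self.mem.get(addr, 0)


def directed_test_write_then_read():
    """A directed test: every scenario is anticipated and hard-coded."""
    dut = Dut()
    dut.write(0x10, 0xAB)          # anticipated scenario 1: simple write
    assert dut.read(0x10) == 0xAB  # check the data reached its destination
    assert dut.read(0x20) == 0     # anticipated scenario 2: unwritten address
    # Any scenario NOT listed here (e.g. back-to-back writes to the same
    # address) is simply never exercised -- the risk described above.


directed_test_write_then_read()
print("directed test passed")
```

Every additional scenario means another hand-written function like this, which is exactly why maintenance cost grows with design complexity.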
Constrained random verification provides an effective way to reach coverage goals faster and, most importantly, helps in finding corner-case problems. The advantage is that engineers do not have to write many test cases: a smaller set of constrained-random scenarios plus a few fully random tests is usually enough to fulfill both functional and code coverage goals.
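The idea can be sketched in plain Python (in real testbenches this is what SystemVerilog `rand` fields and `constraint` blocks do): instead of hand-picking each transaction, randomize within legal bounds and let many combinations be exercised per run. The address window, data width, and read/write weighting below are invented for illustration:

```python
import random

random.seed(1)  # a fixed seed makes a random run reproducible

def random_transaction():
    """One constrained-random transaction: fields are random but
    constrained to legal values, like a SystemVerilog constraint block."""
    return {
        # constraint: address stays inside a 1 KB window, word-aligned
        "addr": random.randrange(0, 1024, 4),
        # constraint: data is any 8-bit value
        "data": random.randrange(0, 256),
        # constraint: roughly 80% writes, 20% reads (weighted distribution)
        "kind": random.choices(["write", "read"], weights=[80, 20])[0],
    }

# A short constrained-random run exercises far more combinations than
# the same number of hand-written directed tests.
txns = [random_transaction() for _ in range(1000)]
assert all(t["addr"] % 4 == 0 and t["addr"] < 1024 for t in txns)
print("write ratio:", sum(t["kind"] == "write" for t in txns) / len(txns))
```

One generator replaces dozens of directed test functions; changing the constraints retargets the whole run.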
Based on my experience, constrained random verification usually follows a layered architecture (for a better understanding of layered architecture, read the VMM/UVM user manuals by Synopsys), in which the test layer has control over the whole verification environment and its components. Most of this control is exposed to the user, so the same test suite can be run with different configurations when required to achieve the coverage goals.
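The "test layer controls the environment" point can be sketched like this; the class names and configuration fields are hypothetical stand-ins for the UVM test/env layers, not a real UVM API:

```python
class Env:
    """Environment layer: builds components from a configuration it is given."""
    def __init__(self, cfg):
        self.num_agents = cfg.get("num_agents", 1)
        self.coverage_on = cfg.get("coverage_on", True)


class BaseTest:
    """Test layer: owns the configuration and constructs the environment."""
    cfg = {"num_agents": 1, "coverage_on": True}

    def run(self):
        return Env(self.cfg)


class FourAgentTest(BaseTest):
    """Same environment code, different configuration -- the test layer
    retargets the run without touching any lower layer."""
    cfg = {"num_agents": 4, "coverage_on": True}


assert BaseTest().run().num_agents == 1
assert FourAgentTest().run().num_agents == 4
print("test layer reconfigured the environment")
```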
In a constrained random approach, scoreboards are used to verify that data has successfully reached its destination, while monitors snoop the interfaces to provide coverage information. New or revised constraints focus verification on the uncovered scenarios to meet coverage goals. As verification progresses, the simulation tool identifies the best seeds, which are retained as regression tests, building up a set of scenarios, constraints, and seeds. With this approach you end up with fewer test cases, yet enough to achieve the coverage goals. I have also observed one place where directed tests remain genuinely useful within random verification, described next.
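The scoreboard idea above can be sketched as follows: a reference model predicts each result, and the scoreboard compares predictions against what the monitor actually observed on the interface. The class and method names here are illustrative, not the UVM scoreboard API:

```python
class Scoreboard:
    """Compares observed DUT transactions against reference-model predictions."""
    def __init__(self):
        self.expected = []    # predictions queued by the reference model
        self.mismatches = 0

    def predict(self, txn):
        """Reference model calls this with the expected transaction."""
        self.expected.append(txn)

    def observe(self, txn):
        """Monitor calls this for every transaction seen on the interface."""
        want = self.expected.pop(0)
        if txn != want:
            self.mismatches += 1


sb = Scoreboard()
for data in [3, 7, 7, 1]:    # reference model predicts pass-through data
    sb.predict(data)
for data in [3, 7, 7, 1]:    # monitor reports what the DUT actually sent
    sb.observe(data)
assert sb.mismatches == 0
print("scoreboard: all transactions matched")
```

Because the checking is automatic, the same scoreboard validates every random seed without any per-test expected-value tables.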
Use directed tests after the regression cycles of random verification. Random regressions usually leave a few corner scenarios uncovered, and functional and code coverage analysis will identify them. Write directed tests with specific constraints to hit exactly those scenarios; this is how the last coverage holes are closed. Constrained random verification is very popular nowadays for many reasons, and above I have tried to cover some of the differences and advantages of the two techniques.
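Closing a coverage hole this way often just means tightening the existing constraints around the missed corner and pinning a seed so the regression replays it deterministically. A hedged sketch, continuing the toy Python generator style (the specific hole, a write to the top-of-range address, is invented for illustration):

```python
import random

def transaction(rng, addr_range=(0, 1024)):
    """Normal constrained-random transaction generator (word-aligned)."""
    lo, hi = addr_range
    return {"addr": rng.randrange(lo, hi, 4), "data": rng.randrange(256)}


# Coverage analysis showed the top-of-range address was never hit, so the
# "directed" test is simply the same generator with its address constraint
# narrowed to that one corner, plus a retained seed for the regression list.
rng = random.Random(42)                       # retained seed -> reproducible
corner = transaction(rng, addr_range=(1020, 1024))
assert corner["addr"] == 1020                 # the constraint forces the corner
print("corner-case transaction:", corner)
```

Reusing the random generator with tighter constraints keeps the directed test inside the same environment, so the scoreboard and monitors check it for free.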