Code Coverage Results Study :: FAQs

Q. Why are you providing line coverage?
  • Line coverage data is the most basic information; it helps QA quickly identify coverage gaps and come up with suitable test case scenarios.

Q. What about function coverage? Why is function coverage less than 100% for files that have 100% line coverage? Look at the following scenario to prove my point!! There is 100% line coverage but only 49.3% function coverage.
  • As you can see from the details here (CLICK), this is an artifact of multi-threaded applications combined with runtime-generated functions.
  • If more than one instance of a function is generated on the stack, and one of them acquires the lock on the thread while the other goes out of scope before it can acquire the lock, the losing instance shows zero coverage.
  • Function coverage is the least reliable of all modes of code coverage. For instance, take a function with 1000 lines of code [this is an exaggeration] and an if condition in the first 10 lines that throws the execution pointer out of the function block when the condition is not met. Any such ejection from the 'if' branch still shows 100% function coverage, whereas in reality we have barely scratched the surface of that function.
Q. OK, smarty pants!! What about branch coverage? Is it not the most meaningful of all modes of coverage?
  • You are right that branch coverage provides the truest sense of coverage.
  • Branch coverage data helps identify integration test gaps, functional test gaps, etc.
  • I have prototyped a C/C++ branch coverage strategy using 'zcov'.
  • I can present the findings from branch coverage to interested parties one on one.
  • I do not have plans in place for general availability of branch coverage results this quarter.
Q. In what format are you presenting the code coverage results analysis?
  • The data would look like this:

  • For each file in the Firefox executable, you can see the automated tests code coverage %, the manual tests code coverage %, the number of fixed regression bugs, the number of fixed bugs [as calculated from the change log of the source control system], and the number of times the file has been modified in HG source control.
  • The data can be grouped by component/sub-component and ordered by top files w.r.t. changes, top files w.r.t. regressions, top files w.r.t. general bug fixes, etc.
  • Depending on the criteria, you can select the top 10 or 20 files from any component of your interest and check the code coverage details [line coverage] from the FOLLOWING LINK.
  • DEV & QA can work together to identify suitable scenarios and write test cases that fill the coverage gaps efficiently and effectively.
Q. Which components do you recommend for the first round of study?
  • Based on component size, Content, Layout, Intl, xpcom, netwerk, and js are the components to study.
  • Our goal is to identify 10 files in each of these components to improve code coverage.
Q. You wrote in previous posts that code coverage runs on the instrumented executable do not complete test runs due to hangs and crashes. What exactly is crashing? I don't see any inherent reason why an instrumented build would be "more sensitive to problems", unless the instrumentation itself is buggy!!

  • One example: instrumentation changes the timing, which can change thread scheduling, which can expose latent threading bugs. We see this with Valgrind relatively often -- bugs that occur natively but not under Valgrind, or vice versa.
  • Pick any dynamic instrumentation tool of your choice; you will typically find more bugs with it than with an optimized build.
Q. OK!! Show me how you arrived at these conclusions. How can I do it myself?

  • I would be glad to explain that. Buy me a beer :)
