
[BUG] Upgrading v6.0.1 to v6.0.2 increases instrumentation time #1649

Closed
erichiller opened this issue Apr 11, 2024 · 10 comments
Labels: bug (Something isn't working), Solved (The issue is solved and can be closed)

Comments

@erichiller

After upgrading from v6.0.1 to v6.0.2, the maximum time it takes to instrument increased from 26 seconds in v6.0.1 to more than 1.5 minutes. (Note: I'm measuring from when the test command begins to when the first test starts.)
This occurs when changing only the coverlet.collector package's version (the repo is otherwise identical).
Normally this would just be annoying, but since I'm using --blame-hang-timeout 90s when testing, the test host crashes before any tests begin. The problem is compounded because I'm using GitHub Actions and, for whatever reason, the process crash isn't detected, so the test run keeps going for as long as the test runner lets it.

Going through the differences between v6.0.1 and v6.0.2, it looks like the use of System.Text.Json was reverted. I didn't see any specific code that appeared to be the culprit; however, I am not familiar with coverlet's code.

Let me know if you'd like any other information.

Environment: .NET 8.0
Coverlet: v6.0.1 / v6.0.2
OS: Ubuntu 22.04
Arch: x64
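
For context, a test invocation combining the coverlet collector with the hang timeout mentioned above looks roughly like this; the project path is a placeholder, not taken from the report:

```sh
# Illustrative only: coverlet.collector is driven through `dotnet test`,
# and --blame-hang-timeout enables hang detection with the given timeout
# (it implies --blame and --blame-hang).
dotnet test ./tests/MyProject.Tests.csproj \
  --collect:"XPlat Code Coverage" \
  --blame-hang-timeout 90s
```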

@github-actions github-actions bot added the untriaged To be investigated label Apr 11, 2024
@daveMueller
Collaborator

@erichiller thanks for reporting this. There is another performance issue, #1646, for which a PR already exists. Maybe it has the same root cause. I'll let you know once it is merged so you can give it a try with the nightly.

@daveMueller
Collaborator

@erichiller The PR I mentioned is now merged and can be consumed via our nightly build. Maybe you can give it a try. But I suspect this is a different issue, and as you have already noticed, there weren't many changes between v6.0.1 and v6.0.2.
We would really appreciate it if somebody could provide a repro that shows the different instrumentation times; this would speed up the work on this issue.

@WyrmUK

WyrmUK commented May 2, 2024

We've seen the same performance issue, so this should be fairly easy to replicate. It's significant for large solutions. I would hazard a guess that the change back from System.Text.Json to Newtonsoft is the culprit (Newtonsoft is a lot slower for large and numerous objects). You may want to consider providing a specific .NET 8 version of the tool that uses the v8 System.Text.Json, and then having the build target determine which version to run depending on the installed framework.

@daveMueller
Collaborator

The issue mentions increased instrumentation time, but we don't use JSON serialization/deserialization for the instrumentation at all. In fact, we only use it in one or two places, for generating reports and for reading the runtimeconfig file. Even if I don't want to rule it out yet, I somehow think the problem lies elsewhere.
I tried to create a repro today, but it doesn't seem to be easy. What do the affected solutions look like? @WyrmUK, what characterizes a large application: the amount of source code, the number of source files, the number of assemblies?

@daveMueller daveMueller added the needs repro Needs repro to be investigated, cannot repro in local label May 18, 2024
@piccit

piccit commented May 19, 2024

> What do the affected solutions look like? @WyrmUK, what characterizes a large application: the amount of source code, the number of source files, the number of assemblies?

In my own experience, it's files/SLOC. Our pipeline runs tests per project, so I don't think the number of assemblies is a factor. I'll update with specific file and SLOC counts when I get back to my desk, but I was seeing about 3-5 minutes added to overall execution time (per project).

@WyrmUK

WyrmUK commented May 20, 2024

> What do the affected solutions look like? @WyrmUK, what characterizes a large application: the amount of source code, the number of source files, the number of assemblies?

In our case we have a solution containing 6 projects with 12,000 lines, of which 6,000 are coverable/covered, and 1,200 branches. Each project has an associated unit test project.
We found the time taken to run the tests increased from 1 min 15 s to 2 min 40 s.
We haven't looked any further into whether it is the instrumentation or the report generation/merge.

@Thomathan

I'm having the same issue. I have multiple solutions and 25+ projects overall, with up to 85k coverable lines, the large majority of which are covered. Similar to the above, we also have one unit test project per project.

When running the complete set of tests with clean solutions on 6.0.2, it takes around 7 minutes.
With 6.0.1 it only takes 4 minutes.
The tests themselves only take about 3 minutes when not generating code coverage.

I also attempted to test the nightly build. However, it doesn't seem to be collecting any coverage at all.

@Bertk
Collaborator

Bertk commented Aug 16, 2024

@daveMueller I used BenchmarkDotNet and analyzed a number of commits. It looks like the performance impact starts with the commit "Add regex evaluation timeout (#1630)".

Without the commit: [benchmark results screenshot]
With commit #1630: [benchmark results screenshot]
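
For illustration only, a BenchmarkDotNet comparison in this spirit could look like the sketch below; the input string, filter pattern, and timeout value are made up, not taken from coverlet's code or from the benchmark above:

```csharp
using System;
using System.Text.RegularExpressions;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Hypothetical micro-benchmark: static Regex.IsMatch without a timeout vs.
// matching with an explicit evaluation timeout, roughly the kind of
// difference the regex-timeout commit introduced.
public class RegexTimeoutBenchmark
{
    private const string Input = "Coverlet.Core.Instrumentation.Instrumenter";
    private const string Pattern = @"^Coverlet\..*";

    [Benchmark(Baseline = true)]
    public bool WithoutTimeout() => Regex.IsMatch(Input, Pattern);

    [Benchmark]
    public bool WithTimeout() =>
        Regex.IsMatch(Input, Pattern, RegexOptions.IgnoreCase, TimeSpan.FromSeconds(10));
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<RegexTimeoutBenchmark>();
}
```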

@drdamour

Hm, the docs on that constructor (https://learn.microsoft.com/en-us/dotnet/api/system.text.regularexpressions.regex.-ctor?view=net-8.0#system-text-regularexpressions-regex-ctor(system-string-system-text-regularexpressions-regexoptions-system-timespan)) say:

> We recommend that you set the matchTimeout parameter to an appropriate value, such as two seconds.

and

> If you disable time-outs by specifying InfiniteMatchTimeout, the regular expression engine offers slightly better performance.

I guess we've been banking on that slightly better performance?
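
For reference, the constructions being contrasted look like this; the pattern is a placeholder, not coverlet's actual filter regex:

```csharp
using System;
using System.Text.RegularExpressions;

const string pattern = @"\[.*\]Coverlet\..*";  // placeholder filter pattern

// With an explicit match timeout (the recommendation quoted above):
var withTimeout = new Regex(pattern, RegexOptions.None, TimeSpan.FromSeconds(2));

// Timeouts explicitly disabled, which the docs say is slightly faster:
var infinite = new Regex(pattern, RegexOptions.None, Regex.InfiniteMatchTimeout);

// No timeout parameter at all (the shape before the timeout change):
var noTimeout = new Regex(pattern, RegexOptions.None);
```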

@Bertk
Collaborator

Bertk commented Aug 16, 2024

I reverted the updates in IsLocalMethod and IsTypeFilterMatch, and the performance is back to its previous level.

[benchmark results screenshot]

Using Regex.InfiniteMatchTimeout did not improve the performance.

A regex with a timeout should not be used in the IsLocalMethod and IsTypeFilterMatch methods. The RegexOptions.IgnoreCase configuration should be removed as well.
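
A rough sketch of the two call shapes involved, under the assumption that the filters are matched via Regex.IsMatch; the method names come from the comment above, but the bodies and the timeout value are illustrative, not coverlet's actual source:

```csharp
using System;
using System.Text.RegularExpressions;

internal static class FilterMatchSketch
{
    // Assumed timeout value, for illustration only.
    private static readonly TimeSpan s_matchTimeout = TimeSpan.FromSeconds(10);

    // Shape introduced with the regex-timeout change: explicit evaluation
    // timeout plus RegexOptions.IgnoreCase.
    public static bool IsTypeFilterMatchWithTimeout(string typeName, string filterRegex) =>
        Regex.IsMatch(typeName, filterRegex, RegexOptions.IgnoreCase, s_matchTimeout);

    // Reverted shape that benchmarked faster: no timeout, no IgnoreCase.
    public static bool IsTypeFilterMatch(string typeName, string filterRegex) =>
        Regex.IsMatch(typeName, filterRegex);
}
```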

@Bertk Bertk added bug Something isn't working tenet-performance Performance related issue and removed untriaged To be investigated needs repro Needs repro to be investigated, cannot repro in local labels Aug 17, 2024
@Bertk Bertk added Solved The issue is solved and can be closed and removed tenet-performance Performance related issue labels Sep 4, 2024
@Bertk Bertk closed this as completed Sep 6, 2024