Too few variables tested for ModelicaTest.Media.TestsWithFluid.MediaTestModels.Incompressible.Essotherm650? #4367
Same question for ModelicaTest.Media.TestsWithFluid.MediaTestModels.Incompressible.Essotherm650?
@beutlich The problem of "false" counting in ModelicaTest.Media.TestAllProperties.IncompleteMedia.ReferenceMoistAir is a problem of CSVCompare.
I am afraid I cannot confirm your observations, i.e., the …
@MatthiasBSchaefer csv-compare v2.0.4 fixed the need for counting failed values by printing the total number.
This is due to the smart heuristics in our testing tool. Only if there are no common states do we take all variables appearing in both result files. For the general case this is a smart way to handle it, but we are thinking about some exception for MSL testing.
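To make the described heuristic concrete, here is a minimal sketch in Python. All names are purely illustrative — this is not the actual csv-compare code — but it captures the stated rule: prefer the states common to both result files, and only when there are no common states fall back to all variables present in both files.

```python
def select_comparison_variables(ref_vars, res_vars, ref_states, res_states):
    """Sketch of the variable-selection heuristic described above.

    Prefer the state variables common to both result files; only when
    there are no common states, fall back to every variable that appears
    in both files. Names are illustrative, not the csv-compare API.
    """
    common_states = set(ref_states) & set(res_states)
    if common_states:
        return common_states
    # No common states: compare all variables present in both files.
    return set(ref_vars) & set(res_vars)


# Example: no common states, so all shared variables are compared.
print(sorted(select_comparison_variables(
    ref_vars={"T", "p", "h"}, res_vars={"T", "p", "d"},
    ref_states={"h"}, res_states={"d"})))  # ['T', 'p']
```

Such a fallback explains how a model with 32 comparison signals can end up with only a single tested variable when the state sets happen to overlap in just one place.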
Just to put down explicitly what we talked about during the MAP-LIB meeting yesterday: for this MSL regression testing, if the set of variables in the reference file is not exactly the same as the set of variables in the comparison-signals file, this should be considered a hard error.
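A hedged sketch of that proposed hard-error check (the function name and error type are my own, not part of any existing tool): the run fails outright if the reference and comparison-signal variable sets differ in either direction.

```python
def check_signal_sets(reference_vars, comparison_vars):
    """Fail hard when the two variable sets are not exactly equal.

    Sketch of the MAP-LIB proposal above; names are illustrative.
    """
    missing = set(reference_vars) - set(comparison_vars)  # in reference only
    extra = set(comparison_vars) - set(reference_vars)    # in comparison only
    if missing or extra:
        raise ValueError(
            f"Signal-set mismatch: missing={sorted(missing)}, "
            f"extra={sorted(extra)}"
        )
```

Under such a rule, a case where the reference carries 32 comparison signals but only one variable is tested would surface as an explicit failure rather than a passing 0/1.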
That's unfortunate, and not the impression I got from @GallLeo during the meeting yesterday.
For the record, these are the existing OSS testing frameworks I am aware of:
@MatthiasBSchaefer I wonder whether LTX ReSim uses csv-compare v2.0.4. It's closed source, and I never got any confirmation that v2.0.4 actually improved the automatic workflows.
In this report https://www.ltx.de/download/MA/Compare_MSL_v4.1.0/Compare/ModelicaTest/testrun_report.html I see 0/1 in the last two columns for the model ModelicaTest.Media.TestsWithFluid.MediaTestModels.Incompressible.Essotherm650. I'm interpreting that as "one variable was tested and none was incorrect". However, the model has 32 comparison signals in both the library and the reference result repository. Am I misunderstanding something?