Add _ALWAYS versions of TEST_MSG and TEST_DUMP macros #57
base: master
Conversation
Does it make sense to log such extra messages even on success and make the "everything is alright" output more cluttered? And if it is "only" for diagnostics of a previously failed check, we already have TEST_MSG and TEST_DUMP. (It's true those are likely not propagated into the XML right now, as XML output was only added later in a somewhat hacky and incomplete way. But that could be implemented similarly to what you do in this PR.)
I really don't want to add a macro which only outputs stuff into the XML. The default/main output is stdout (without any output-altering options). So even if you persuade me we need a new macro to output additional stuff always (unlike TEST_MSG, which outputs only on failure), it has to write to stdout as well, not only into the XML.
The specific use case we want to use this for is performance checking of specific APIs within our code. Within an individual test, we wrap the call to the API with some timing code and wish to capture the results of that. This needs to be output on success (and is less significant on failure). Our plan is to use the XML output to capture the performance of the APIs of interest during our CI testing, recording how performance changes as we amend the internals of the API, to ensure no significant degradation. I'm happy to have the output on stdout as well as in the XML.
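A minimal sketch of such a timing wrapper, assuming the TEST_MSG_ALWAYS macro this PR proposes (api_under_test() is a hypothetical stand-in for the API being measured):

    #include <time.h>
    #include "acutest.h"

    void test_api_performance(void)
    {
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        int ret = api_under_test();   /* hypothetical API call being timed */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double elapsed = (end.tv_sec - start.tv_sec)
                       + (end.tv_nsec - start.tv_nsec) / 1e9;

        TEST_CHECK(ret == 0);
        /* Unlike TEST_MSG, this would be emitted on success too. */
        TEST_MSG_ALWAYS("api_under_test: %f seconds", elapsed);
    }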
Just to second @ndptech's point. The current output abilities are fine if the only thing you want from the test suite is a boolean pass or fail for each individual test, but they don't work if there are metrics you need to track over multiple test runs. Static performance criteria (i.e. fail if API requests/s < N) end up being quite brittle and not overly useful. It's better to alert on large deviations from the mean, or to warn of a continuing upward trend over many test runs. acutest itself is not suited to this kind of analysis, as it has no capability to process historical data, but it is useful as a framework to execute these performance tests and to make ancillary data available to other systems which can track and alert.
Would this set of changes make the PR more appealing?
Ok, thanks for the explanation. I can now understand your use case and agree it's worth adding. (As a side note, perhaps you might achieve the same via ….) The only remaining "problem", as I see it, is how the new macro(s) should be named, to avoid confusion and to keep the API as easy to understand/use as possible. I think having …. Also, if we do this, maybe we should have a variant of TEST_DUMP too. Consider for example something like this, which would be far more explicit:
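One possible explicit naming scheme, purely as an illustrative sketch (these names are stand-ins, not a quote of the original proposal):

    TEST_MSG_ON_FAILURE(...)   /* current TEST_MSG behavior: output only when the check failed */
    TEST_MSG_ON_SUCCESS(...)   /* the complementary case, for completeness */
    TEST_MSG_ALWAYS(...)       /* output unconditionally */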
(That's the best I came up with. Yet I admit I'm not happy with those names at all, so I am open to any suggestions. Any ideas?)
Hah, a race condition :-) And you were faster. I see "output only on a failure" as a sane default for most use cases; I think your use case is needed less frequently. So the short macro variants should imho do that. And I'm not sure whether "output only on success" makes much sense, but we could have it too for the sake of completeness.
Those macros look good. I agree on an always-dumping version of TEST_DUMP. The thing that occurred to me is that @ndptech's …
So that'd give us:
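Presumably a set along these lines (names inferred from the surrounding discussion, not quoted from the original comment):

    TEST_MSG(...)           /* message, output only on failure (current behavior) */
    TEST_MSG_ALWAYS(...)    /* message, output unconditionally */
    TEST_DUMP(...)          /* data dump, output only on failure (current behavior) */
    TEST_DUMP_ALWAYS(...)   /* data dump, output unconditionally */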
I'd quite like the output of all of these to be available in the XML output as well as written to stderr/stdout, again just to make ingestion by another system easier. What do you think?
Ouch, I was just looking at how the Xunit support is currently implemented in Acutest. Actually it's severely limited, in that the report is only generated by the main process, using only the exit codes of the child processes. I.e. a naive implementation would work only with --no-exec. Also, I overlooked that the patch actually generates a custom tag in the report. I'm not sure whether allowing that is a good idea. Well, AFAIK Xunit is a big mess and there is actually no real standard for the report format. It seems many frameworks use more or less restricted dialects of it. The problem is that many products generating or consuming it specify the expected report format with an XSD file, and they may well validate any report against it; i.e. they would likely choke on XML with any custom tag. A relatively safe, widely used common denominator is likely this: https://stackoverflow.com/a/9410271/917880. So, if possible, we should try hard to stick to that and encode any custom output into some tag it already provides.
Wait, the patch actually seems to implement the redirection. Nice, so disregard that one point. (Sorry, that happens when doing other things in parallel.)
Yes, the mechanism @ndptech implemented should allow serialization of data going from the forked processes back to the main process. It's not quite the same as stderr/stdout redirection, though; that would require more work. I looked over the XSD and there's no real support for providing structured data in a testcase or testsuite node, so it doesn't work for easily emitting stats data. We would need some kind of custom extension, maybe:

<xs:element name="testcase">
  <xs:complexType>
    <xs:sequence>
      <xs:element ref="skipped" minOccurs="0" maxOccurs="1"/>
      <xs:element ref="error" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element ref="failure" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element ref="system-out" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element ref="system-err" minOccurs="0" maxOccurs="unbounded"/>
      <xs:element name="data" minOccurs="0" maxOccurs="unbounded">
        <xs:complexType>
          <xs:attribute name="name" type="xs:string" use="required"/>
          <xs:attribute name="value" type="xs:string" use="required"/>
        </xs:complexType>
      </xs:element>
    </xs:sequence>
    <xs:attribute name="name" type="xs:string" use="required"/>
    <xs:attribute name="assertions" type="xs:string" use="optional"/>
    <xs:attribute name="time" type="xs:string" use="optional"/>
    <xs:attribute name="classname" type="xs:string" use="optional"/>
    <xs:attribute name="status" type="xs:string" use="optional"/>
  </xs:complexType>
</xs:element>

likely not valid XSD, but hopefully communicates the point :)
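A report instance using that extension might then look roughly like this (values purely illustrative):

    <testcase name="test_api_performance" time="0.012">
      <data name="api_calls_per_sec" value="128000"/>
    </testcase>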
Maybe it communicates the point to some particular CI engine, but not to others. Some products are really touchy when the XML cannot be validated against their XSD and refuse it altogether; see https://www.google.com/search?q=invalid+XML+xunit for a brief list of many such problems.
I've started working on passing the existing messages from the child processes to the parent. I think pipes are better than stdout/stderr redirection, as that allows the child to still output its existing data to the console as well.
Hmmm, after some thinking, I would split the problem into a few steps:
@ndptech Do you mind if I "steal" code from your patch as a basis for step 1? I think it needs some heavy tinkering to expand it into what I have in mind, but it would help. (Or, if you prefer, feel free to update the PR to remove the stuff related to step 2; I would merge it into a dev branch and start from there. This way you'd still have the credit for it in the git log.)
As part of using acutest for CI testing, it can be useful to record additional data beyond a simple success / failure. The addition of _ALWAYS versions of the TEST_MSG and TEST_DUMP macros allows extra data to be captured during the running of tests, which is then output to stdout. Also, data captured by use of the TEST_MSG* and TEST_DUMP* macros is stored in log entries which are passed from the child test processes to the parent, to then be included in the XML output produced when the -x / --xml-output option is used. The output is presented in <system-out> nodes within each <testcase> node as per the jUnit XSD specification.
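So, for example, a call such as TEST_MSG_ALWAYS("requests/sec: %d", rps); would presumably surface in the report along these lines (exact formatting illustrative):

    <testcase name="test_api_performance" time="0.012">
      <system-out>requests/sec: 128000</system-out>
    </testcase>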
@mity I didn't see your comments until after I'd done my re-work and force push. Feel free to take any bits of my code you think are suitable - or I can re-work further.
To my knowledge there is no good way to capture raw stdout/stderr data within the same thread or process. You can use a loopback pipe, but you risk losing all the output (after you last drained the pipe) if the process gets a fatal signal. The way you generally capture stdout/stderr for child processes on UNIX-like OSs is to create a pipe and dup the write end of the pipe into the stderr slot (e.g. with dup2()). For acutest the order of operations would be something like the sketch below:
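A minimal sketch of that sequence (error handling elided; the function names are illustrative, not acutest internals):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Run one test in a child process, capturing its stderr via a pipe. */
    void run_test_captured(void (*test_func)(void))
    {
        int fds[2];
        pipe(fds);

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: route stderr into the pipe, then run the test. */
            close(fds[0]);
            dup2(fds[1], STDERR_FILENO);
            close(fds[1]);
            test_func();
            _exit(0);
        }

        /* Parent: drain the pipe until the child closes its end. */
        close(fds[1]);
        char buf[4096];
        ssize_t n;
        while ((n = read(fds[0], buf, sizeof(buf))) > 0) {
            fwrite(buf, 1, (size_t)n, stdout);  /* or buffer it for the XML report */
        }
        close(fds[0]);

        int status;
        waitpid(pid, &status, 0);  /* the exit code still drives pass/fail */
    }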
So it's doable, but it's work. The simpler way is to not attempt capturing everything, and just grab the messages emitted by the logging macros using the patch @ndptech submitted.
@mity any more thoughts on this PR?
Nope, not at the moment. I'm slowly reworking the log infrastructure in a local branch. It just does not progress as fast as I would like because I'm also busy with other things. It's certainly not forgotten.
As part of using acutest for CI testing, it can be useful to record
additional data beyond a simple success / failure.
The addition of a TEST_LOG() macro allows for extra data to be captured
during the running of tests which is then reported as additional
elements in the XML output produced when the -x / --xml-output option is
used.
It can be used within tests such as:
TEST_LOG("count", "%d", count);
which adds a tag to the XML as:
<data name="count" value="n"/>
(where n is the formatted value of count)
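In context, such an element would presumably appear inside the corresponding testcase node, along the lines of the custom extension discussed above (illustrative):

    <testcase name="test_counting" time="0.001">
      <data name="count" value="42"/>
    </testcase>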