
Add _ALWAYS versions of TEST_MSG and TEST_DUMP macros #57

Open · wants to merge 1 commit into master

Conversation

@ndptech commented Feb 2, 2021

As part of using acutest for CI testing, it can be useful to record
additional data beyond a simple success / failure.

The addition of a TEST_LOG() macro allows for extra data to be captured
during the running of tests which is then reported as additional
elements in the XML output produced when the -x / --xml-output option is
used.

It can be used within tests such as:

TEST_LOG("count", "%d", count);

which adds a tag to the XML such as:

<count>n</count>

@mity (Owner) commented Feb 2, 2021

As part of using acutest for CI testing, it can be useful to record
additional data beyond a simple success / failure.

Does it make sense to log such extra messages even on success and make the "everything is alright" output more cluttered? And if it is "only" for diagnostics of a previous failed check, we already have TEST_MSG for that purpose.

(It's true those are likely not propagated into the XML right now as that was only added later in a little bit hacky and incomplete way. But that could be implemented similarly as you do in this PR.)

The addition of a TEST_LOG() macro allows for extra data to be captured
during the running of tests which is then reported as additional
elements in the XML output produced when the -x / --xml-output option is
used.

I really don't want to add a macro which only outputs into the XML. The default/main output is stdout (without any output-altering options). So even if you persuade me we need a new macro which outputs additional stuff always (unlike TEST_MSG, which only does so if the preceding check failed), it has to output to stdout in the first place.

@ndptech (Author) commented Feb 2, 2021

The specific use case we want this for is performance checking of specific APIs within our code. So, within an individual test, we wrap the call to the API with some timing code and wish to capture the results. This needs to be output on success (and is less significant on failure).

Our plan is to use the XML output to capture performance of the APIs of interest during our CI testing to record how performance changes as we amend the internals of the API to ensure no significant degradation of performance.

I'm happy to have the output also on stdout as well as in the XML.

@arr2036 (Contributor) commented Feb 2, 2021

Just to second @ndptech's point. The current output capabilities are fine if the only thing you want from the test suite is a boolean pass/fail for each individual test, but they don't work if there are metrics you need to track over multiple test runs.

Static performance criteria (i.e. fail if API requests/s < N) end up being quite brittle and not overly useful. It's better to alert on large deviations from the mean, or to warn of a continuing upwards trend over many test runs. acutest itself is not suited to this kind of analysis as it has no capability to process historical data, but it is useful as a framework for executing these performance tests and making ancillary data available to other systems which can track and alert.

@arr2036 (Contributor) commented Feb 2, 2021

Would this set of changes make the PR more appealing?

  • Alias TEST_MSG to TEST_FAIL_MSG (or similar)
  • Rename TEST_LOG to TEST_PASS_MSG (or similar).
  • Modify TEST_MSG to use the same code path/mechanism so that both failure and success messages were (optionally) available in the XML output.
  • Add toggles to control whether TEST_PASS_MSG and/or TEST_FAIL_MSG output was included in the XML output in addition to being sent to stdout/stderr.

@mity (Owner) commented Feb 2, 2021

OK, thanks for the explanation. I can now understand your use case and agree it's worth adding.

(As a side note, perhaps you might achieve the same via --time command line option. Or do you need to measure duration of smaller chunks than just whole unit tests?)

The only remaining "problem", as I see it, is how the new macro(s) should be named to avoid confusion and keep the API as easy to understand/use as possible. I think having both TEST_LOG and TEST_MSG is a recipe for trouble: people would easily be confused about how they differ and when to use which one.

Also, if we do this, maybe we should have a variant of TEST_DUMP which dumps always and not just on a failure? Or is that bogus?

Consider for example something like this, that would be far more explicit:

  1. Rename TEST_MSG to TEST_MSG_ON_FAIL
  2. Rename TEST_DUMP to TEST_DUMP_ON_FAIL
  3. Add TEST_MSG_ALWAYS
  4. Add TEST_DUMP_ALWAYS
  5. Add aliases TEST_MSG and TEST_DUMP for the ON_FAIL variants for backward compatibility.

(That's the best I came up with. Yet I admit I'm not happy with those names at all, so I am open to any suggestions. Any ideas?)

@mity (Owner) commented Feb 2, 2021

Hah, a race condition :-) And you were faster.

I see the "output only on a failure" as a sane default for most use cases. I think your use case is less frequently needed. So the short macro variants should imho do that.

And I'm not sure whether "output only on success" makes sense, but whatever, we could have it too for the sake of completeness.

@arr2036 (Contributor) commented Feb 2, 2021

Those macros look good. I agree on an always-dumping version of TEST_DUMP. I know it's probably bad practice, but sometimes the output of one of our tests gets used as the input to another. There, it's useful to be able to see the data when a previous test case passes.

The thing that occurred to me is that @ndptech's TEST_LOG macro also includes an identifier for the message, so that it always appears in the same XML node. Seems like that's almost another logging macro type (TEST_DATA?), so maybe there should be:

  • TEST_MSG
  • TEST_DUMP
  • TEST_DATA

So that'd give us:

  • TEST_MSG_ON_FAIL (aliased as TEST_MSG)
  • TEST_DUMP_ON_FAIL (aliased as TEST_DUMP)
  • TEST_DATA_ON_FAIL
  • TEST_MSG_ALWAYS
  • TEST_DUMP_ALWAYS
  • TEST_DATA_ALWAYS

I'd quite like the output of all of these to be available in the XML output as well as being written to stderr/stdout, again, just to make ingestion easier by another system.

What do you think?

@mity (Owner) commented Feb 2, 2021

Ouch, I was just looking at how the xUnit support is currently implemented in Acutest. Actually it's severely limited: the report is only generated by the main process, using only the exit code of child processes.

I.e. a naive implementation would only work with --no-exec. Doing it properly would likely require implementing some stdout/stderr redirection to the parent process or some other means of IPC (which would actually allow implementing some other nice features in the future too, but it would be much more work than the patch in its current form).

Also, I overlooked that the patch actually generates a custom tag in the report. I'm not sure whether allowing that is a good idea. Well, AFAIK xUnit is a big mess and there is actually no real standard for the report format. It seems many frameworks use more or less restricted dialects of it. The problem is that many products generating or consuming it specify the expected report format with an XSD file and may validate any report against it, i.e. they would likely choke on an XML report with any custom tag.

A relatively safe and widely used common denominator is likely this: https://stackoverflow.com/a/9410271/917880. So, if possible, we should try hard to stick to that and encode any custom output into some tag it already provides.

@mity (Owner) commented Feb 2, 2021

Wait, the patch actually seems to implement the redirection. Nice, so disregard that point. (Sorry, that happens when doing other things in parallel.)

@arr2036 (Contributor) commented Feb 2, 2021

Yes, the mechanism @ndptech implemented should allow serialization of data going from the forked processes back to the main process. It's not quite the same as stderr/stdout redirection, though; that would require more work.

I looked over the XSD and there's no real support for providing structured data in a test case or test suite node, so it doesn't work for easily emitting stats data. We would need some kind of custom extension, maybe:

<xs:element name="testcase">
    <xs:complexType>
        <xs:sequence>
            <xs:element ref="skipped" minOccurs="0" maxOccurs="1"/>
            <xs:element ref="error" minOccurs="0" maxOccurs="unbounded"/>
            <xs:element ref="failure" minOccurs="0" maxOccurs="unbounded"/>
            <xs:element ref="system-out" minOccurs="0" maxOccurs="unbounded"/>
            <xs:element ref="system-err" minOccurs="0" maxOccurs="unbounded"/>
            <xs:element name="data" minOccurs="0" maxOccurs="unbounded">
                <xs:complexType>
                    <xs:attribute name="name" type="xs:string" use="required"/>
                    <xs:attribute name="value" type="xs:string" use="required"/>
                </xs:complexType>
            </xs:element>
        </xs:sequence>
        <xs:attribute name="name" type="xs:string" use="required"/>
        <xs:attribute name="assertions" type="xs:string" use="optional"/>
        <xs:attribute name="time" type="xs:string" use="optional"/>
        <xs:attribute name="classname" type="xs:string" use="optional"/>
        <xs:attribute name="status" type="xs:string" use="optional"/>
    </xs:complexType>
</xs:element>

Likely not valid XSD, but hopefully it communicates the point :)

@mity (Owner) commented Feb 2, 2021

Maybe it communicates the point with some particular CI engine, but not with others. Some products are really touchy when the XML cannot be validated against their XSD and refuse it altogether: see https://www.google.com/search?q=invalid+XML+xunit for a brief list of many such problems.

@ndptech (Author) commented Feb 3, 2021

I've started working on passing the existing messages from the child processes to the parent. I think pipes are better than stdin / stdout redirection, as that allows the child to still output its existing data to the console as well.
I'll look at a schema that fits with the xUnit XSD.

@mity (Owner) commented Feb 3, 2021

Hmmm, after some thinking, I would split the problem into a few steps:

  1. Implement some infrastructure to send all the output from the worker processes to the parent (when needed) in some structured way. The parent could then process it in a much saner way into one or multiple output formats, according to the options. This would allow both TAP and xUnit output to become first-class citizens and to provide more info (hopefully everything the default output provides, except maybe what those formats cannot absorb).

  2. Reimplement all the output code to always use the gathered structured data when generating the desired output. This would allow outputting any format into a file or to stdout, it would be more easily expandable to new output formats, and it would be much cleaner code. I really don't like the hacky TAP & xUnit support right now.

  3. Leave the customization of the xUnit report as a separate problem for now. I'll take a closer look at this when steps 1 and 2 are done. Maybe we could then support several dialects of it, for compatibility with various 3rd-party products. That would address my compatibility concerns: when piping the XML into a product which would choke on the custom tag, a stricter dialect filtering such stuff out could be used.

@ndptech Do you mind if I "steal" code from your patch as a basis for step 1? I think it needs some heavy tinkering to expand it into what I have in mind, but it would help. (Or, if you prefer, feel free to update the PR to remove the stuff related to step 2; I would merge it into a dev branch and start from there. This way you'd still have the credit for it in the git log.)

As part of using acutest for CI testing, it can be useful to record
additional data beyond a simple success / failure.

The addition of _ALWAYS versions of the TEST_MSG and TEST_DUMP macros
allows for extra data to be captured during the running of tests, which
is then output to stdout.

Also, data captured by use of the TEST_MSG* and TEST_DUMP* macros is
stored in log entries which are passed from child test processes to the
parent, to then be included in the XML output produced when the
-x / --xml-output option is used.

The output is presented in <system-out> nodes within each <testcase>
node, as per the JUnit XSD specification.
@ndptech changed the title from "Add TEST_LOG macro for recording additional test results" to "Add _ALWAYS versions of TEST_MSG and TEST_DUMP macros" on Feb 3, 2021
@ndptech (Author) commented Feb 3, 2021

@mity I didn't see your comments until after I'd done my rework and force push. Feel free to take any bits of my code you think are suitable, or I can rework it further.

@arr2036 (Contributor) commented Feb 3, 2021

To my knowledge there is no good way to capture raw stdout/stderr data within the same thread or process. You can use a loopback pipe, but you risk losing all the output (after you last drained the pipe) if the process gets a fatal signal.

The way you generally capture stdout/stderr for child processes on UNIX-like OSs is to create a pipe and dup the write end of the pipe into the stderr slot (STDERR_FILENO) or stdout slot (STDOUT_FILENO) for the process. You then use another process or thread to read from the pipe and process the log output.

For acutest the order of operations would be something like:

  1. Create two pipes for stderr/stdout.
  2. Create a logging thread
  3. In the logging thread, insert the read ends of the stderr/stdout pipes into a select loop (or the equivalent for Windows). Add a read callback which writes the data to stdout/stderr and appends a copy of each line to a buffer.
  4. In the test execution process(es) dup the write end of the stderr pipe to STDERR_FILENO and the write end of the stdout pipe to STDOUT_FILENO.

So it's doable, but it's work. The simpler way is to not attempt to capture everything and just grab the messages emitted by the logging macros, using the patch @ndptech submitted.

@arr2036 (Contributor) commented Feb 11, 2021

@mity any more thoughts on this PR?

@mity (Owner) commented Feb 11, 2021

any more thoughts on this PR?

Nope, not at the moment. I'm slowly reworking the log infrastructure in a local branch. It just isn't progressing as fast as I would like because I'm also busy with other things. It's certainly not forgotten.
