Sail Softfloat Replacement Verification #41
As discussed in yesterday's meeting, it's not clear to me how this work differs from the ACT work. Both should ensure complete coverage of the floating point implementations in RISC-V Sail, even if they rely on additional test suites to do so. More discussion is likely needed here. I'll leave this to @billmcspadden-riscv and @allenjbaum in the context of the Golden Model meeting and the Arch Test SIG meeting.
With respect to the use of ACTs: In the dev-partners meeting on 2024-4-2, @allenjbaum made the comment that it should be possible to make use of ACTs to verify the functionality of SailFloat. This would be done by manually creating the expected signature and then comparing the signature from the Sail model with the expected signature. This would effectively make the ACT a self-checking test. This seems reasonable to me. But there are questions as to the implementation. Here are some of the questions:
1. The riscof framework does the signature check. The ACT does not perform the check (that would be a self-checking test); that is the function of the riscof framework. It is a post-processing step from the point of view of the ACT, but part of the framework flow. The usual flow of riscof is:
   a. figure out which tests should be run,
   b. run them with a DUT model, producing a DUT signature,
   c. run them with a reference model, producing a reference signature,
   d. compare the signatures and generate a test report (pass/fail for each test).
   There are options to start and end this flow at most of those intermediate points.
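Step (d) amounts to a line-by-line comparison of the two signature files. A minimal sketch of that comparison is below; the one-word-per-line signature format and function name are illustrative assumptions, and riscof's actual report generation is considerably richer:

```python
# Sketch of riscof's step (d): compare a DUT signature file against a
# reference signature file. The signature format (one hex word per line)
# is an assumption for illustration.

def compare_signatures(dut_path: str, ref_path: str) -> list[tuple[int, str, str]]:
    """Return a list of (line_number, dut_word, ref_word) mismatches."""
    with open(dut_path) as d, open(ref_path) as r:
        dut_lines = [ln.strip() for ln in d]
        ref_lines = [ln.strip() for ln in r]
    mismatches = []
    # Compare word by word; line numbers are 1-based for reporting.
    for i, (dw, rw) in enumerate(zip(dut_lines, ref_lines), start=1):
        if dw != rw:
            mismatches.append((i, dw, rw))
    # A length mismatch is also a failure; report it as pseudo-line 0.
    if len(dut_lines) != len(ref_lines):
        mismatches.append((0, f"{len(dut_lines)} lines", f"{len(ref_lines)} lines"))
    return mismatches
```

A test passes when the returned mismatch list is empty; anything else is reported as a failure for that test.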
2. You can create the signature by, e.g., running the tests with any other simulator and stopping after step b (using riscof configuration options). But a real self-checking test has to generate the constants embedded in the code. How would that be done? Same problem.
3. How hard would it be for a self-checking test? The correct answers need to come from somewhere.
4. The reference signatures are produced by running a reference model, which could be softfloat! The easiest way to do that is to run the tests through the standard riscof flow with Sail configured with softfloat as the reference model and Sail configured with SailFloat as the DUT, I would think. (Contrary to my musings in the meeting, that is a really good reason to keep a softfloat option.)
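Sketching that back-to-back flow concretely: each test would be run once against each build of the model, producing two signatures to compare. The simulator name `sail_riscv_sim` and the `--fp-impl` / `--signature` flags below are purely hypothetical placeholders for however the two builds are actually selected and configured:

```python
# Sketch of the proposed back-to-back flow: for each test, run a Sail
# build using SoftFloat (reference) and a build using SailFloat (DUT),
# each dumping a signature file for later comparison. The binary name
# and flags are hypothetical; substitute the real build/config mechanism.

def make_run_cmds(test_elf: str, sig_dir: str) -> dict[str, list[str]]:
    """Build the two command lines (as argv lists) for one test."""
    return {
        "ref": ["sail_riscv_sim", "--fp-impl=softfloat",
                f"--signature={sig_dir}/ref.sig", test_elf],
        "dut": ["sail_riscv_sim", "--fp-impl=sailfloat",
                f"--signature={sig_dir}/dut.sig", test_elf],
    }
```

In a riscof setup, the two variants would instead be exposed as the reference-model plugin and the DUT plugin in the framework's configuration, and riscof would drive the runs and the signature diff itself.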
On Wed, Apr 3, 2024 at 6:52 AM Bill McSpadden wrote:
1. Who does the signature check? Does the ACT itself perform the check? In other words, do you embed the signature as a data item within the test itself and then perform the comparison at the end of the test? Or is it a post-processing step? If it is a post-processing step, it will need to fit into the methodology described at https://github.com/riscv-software-src/riscv-tests, or a new methodology for post-sim checking will need to be developed.
2. Who generates the manual signature? How can we be sure that it is correct? Will an error be caught when Sail and Spike are cross-checked?
3. Will the manual generation of the signature be a maintenance problem? That is, when the ACT is changed, how hard will it be to update the signature?
4. For ACTs that are automatically generated, will the expected signature also be auto-generated? If not, how difficult will it be to manually generate the signature?
The Golden Model SIG is presently thinking about this. We will await their guidance, @billmcspadden-riscv.
Technical Group: Applications & Tools HC
ratification-pkg: Technical Debt
Technical Liaison: Bill McSpadden
Task Category: SAIL model
Task Sub Category:
Ratification Target: 3Q2023
Statement of Work (SOW)
Component names:
Sail RISC-V Model
Requirements:
This SOW is dependent on completion of the Sail Softfloat Replacement (#40), which describes the work to be done to port the SoftFloat library (written in C) to the Sail language.
This SOW describes the work needed to verify that the Sail implementation is correct, a not inconsequential effort.
These two SOWs are defined separately so that at least two sets of eyes evaluate the implementation.
The particular requirements for the verification phase of this work, in this SOW, are:
- Create a set of “smoke tests” that check the basic functionality of all public interfaces to the library. This set of smoke tests is intended to be used as part of the CI of the package. Run time for this set of tests should be less than 30 minutes.
- Leverage existing FP test packages for more rigorous testing of the library. The implementor should evaluate the following, at a minimum:
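As an illustration of what one smoke-test vector might look like, the expected result bits for an operation can be precomputed on the host and embedded in the test as self-checking constants. The sketch below uses Python's `struct` module to derive the binary32 bit pattern a single-precision add should produce; the function names are illustrative and nothing here assumes the Sail library's actual interface:

```python
# Illustrative host-side generation of expected values for FP smoke
# tests: compute the IEEE-754 binary32 result of an operation and
# record the bit pattern the Sail FP library should produce.
import struct

def f32_bits(x: float) -> int:
    """Round a host double to binary32 and return its 32-bit pattern."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def f32_from_bits(b: int) -> float:
    """Reinterpret a 32-bit pattern as a binary32 value."""
    return struct.unpack("<f", struct.pack("<I", b))[0]

def expected_fadd_s(a_bits: int, b_bits: int) -> int:
    """Expected result bits of a single-precision add, round-to-nearest.
    Caveat: computing in double then rounding to binary32 can double-round
    in rare cases; a true softfloat reference avoids that entirely."""
    return f32_bits(f32_from_bits(a_bits) + f32_from_bits(b_bits))

# Example vector: 1.0f (0x3f800000) + 2.0f (0x40000000) -> 3.0f (0x40400000)
```

Special values (NaN payloads, signed zeros, subnormals, and the exception flags) are exactly where a host-float shortcut like this breaks down, which again argues for generating the golden constants from a trusted softfloat build.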
Deliverables:
Acceptance Criteria:
Projected timeframe: (best guess date)
SOW Signoffs: (delete those not needed)
Waiver
Pull Request Details: No response