
Sail Softfloat Replacement Verification #41

Open
jjscheel opened this issue Apr 1, 2024 · 4 comments
jjscheel commented Apr 1, 2024

Technical Group

Applications & Tools HC

ratification-pkg

Technical Debt

Technical Liaison

Bill McSpadden

Task Category

SAIL model

Task Sub Category

  • gcc
  • binutils
  • gdb
  • intrinsics
  • Java
  • KVM
  • ld
  • llvm
  • Linux kernel
  • QEMU
  • Spike

Ratification Target

3Q2023

Statement of Work (SOW)

Component names:
Sail RISC-V Model

Requirements:
This SOW is dependent on completion of Sail Softfloat Replacement (#40), which describes the work to port the SoftFloat library (written in C) to the Sail language.

This SOW describes the work needed to verify that the Sail implementation is correct, a substantial effort in its own right.

These two SOWs are defined separately for the following reasons:

  1. Design and verification of a significant software package should be kept separate, to ensure that at
    least two sets of eyes evaluate the implementation.
  2. Manpower: both the design and the verification of the Sail SoftFloat port are significant projects.

The particular requirements for the verification phase of this work, in this SOW, are:

  1. Create a set of “smoke tests” that checks the basic functionality of all public interfaces to the library.
    This set of smoke tests is intended to be used as part of the CI of the package. Run time for the full set
    should be less than 30 minutes.

  2. Leverage existing FP test packages for more rigorous testing of the library. The implementor should evaluate the
    following, at a minimum:

    1. The TestFloat package that comes with the C SoftFloat package.
    2. The IBM FP testsuite. These tests form the basis for the ACTs. They are self-checking (or can easily be made so).
    3. Explore the possibility of using formal analysis methods to evaluate the Sail implementation. (Note: a weakness of this approach is that it depends on yet another FP model in the Sail toolchain tree. Alasdair mentioned that the SMT support is a bit buggy and will need some work.)
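The smoke tests in requirement 1 come down to bit-exact checks of a handful of operations and encodings. A minimal sketch of such checks in Python, using the host's IEEE-754 binary32 arithmetic via `struct` round-trips purely as a stand-in; a real suite would instead call the Sail library's public entry points, whose names this sketch does not assume:

```python
import math
import struct

def f32_bits(x: float) -> int:
    """Bit pattern of x rounded to IEEE-754 binary32."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def f32_add(a: float, b: float) -> int:
    """binary32 addition; host arithmetic stands in for the library under test."""
    a32 = struct.unpack("<f", struct.pack("<f", a))[0]
    b32 = struct.unpack("<f", struct.pack("<f", b))[0]
    return f32_bits(a32 + b32)

# Exact arithmetic: 1.5 + 2.25 = 3.75 is representable in binary32.
assert f32_add(1.5, 2.25) == f32_bits(3.75)

# Rounding: 0.1 is inexact; its nearest binary32 encoding is 0x3DCCCCCD.
assert f32_bits(0.1) == 0x3DCCCCCD

# Special-value encodings: signed zero and infinity.
assert f32_bits(-0.0) == 0x80000000
assert f32_bits(math.inf) == 0x7F800000

# NaN production: inf + (-inf) must yield a NaN
# (exponent all ones, non-zero significand).
nan = f32_add(math.inf, -math.inf)
assert (nan >> 23) & 0xFF == 0xFF and nan & 0x7FFFFF != 0

print("smoke tests passed")
```

A real smoke suite would sweep every public interface (add, mul, div, sqrt, conversions, comparisons) across each rounding mode, but over a small fixed operand set so the whole run stays well inside the 30-minute CI budget.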
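Whichever packages are adopted, TestFloat-style checking is at heart differential: feed identical operands to a trusted reference and to the implementation under test, then compare results bit for bit (NaNs compared only for NaN-ness, since payloads may legitimately differ). A sketch of such a harness; here both sides are stubbed with host binary32 arithmetic, whereas in practice one side would call Berkeley SoftFloat and the other the Sail port (those bindings are assumed, not shown):

```python
import math
import struct

def f32(x: float) -> float:
    """Round a double to the nearest binary32 value."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

def bits(x: float) -> int:
    return struct.unpack("<I", struct.pack("<f", x))[0]

def is_nan_bits(b: int) -> bool:
    return (b >> 23) & 0xFF == 0xFF and b & 0x7FFFFF != 0

# Stand-ins: in the real harness these would be SoftFloat's f32_add and
# the Sail port's addition, respectively (hypothetical bindings).
def reference_add(a, b): return f32(f32(a) + f32(b))
def candidate_add(a, b): return f32(f32(a) + f32(b))

# Edge-case operands every FP test suite hammers on.
SPECIALS = [
    0.0, -0.0, 1.0, -1.0, math.inf, -math.inf, math.nan,
    struct.unpack("<f", struct.pack("<I", 0x00000001))[0],  # min denormal
    struct.unpack("<f", struct.pack("<I", 0x7F7FFFFF))[0],  # max finite
]

mismatches = []
for a in SPECIALS:
    for b in SPECIALS:
        rb, cb = bits(reference_add(a, b)), bits(candidate_add(a, b))
        # Any NaN matches any NaN; otherwise require exact bit equality.
        if not (is_nan_bits(rb) and is_nan_bits(cb)) and rb != cb:
            mismatches.append((a, b, hex(rb), hex(cb)))

print(f"{len(mismatches)} mismatches")
```

The operand set above is deliberately tiny; TestFloat and the IBM suite are valuable precisely because they provide far denser coverage of rounding boundaries, denormals, and exception flags.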

Deliverables:

  • An addition to the Sail-SoftFloat GitHub repository that contains the FP tests and infrastructure needed to test the library.

Acceptance Criteria:

  • TBD. Likely will involve running the verification test suite to completion against both the SoftFloat and Sail implementations.

Projected timeframe: (best guess date)

SOW Signoffs: (delete those not needed)

  • Development partner sign-off
  • ACT SIG sign-off (if ACT work)
  • Golden Model SIG sign-off (if SAIL work)

Waiver

  • Freeze
  • Ratification

Pull Request Details

No response

@jjscheel jjscheel changed the title SoftFloat Sail Verification Sail Softfloat Replacement Verification Apr 1, 2024
jjscheel commented Apr 3, 2024

As discussed in yesterday's meeting, it's not clear to me how this work differs from the ACT work. Both should ensure complete coverage of the floating point implementations in RISC-V Sail, even if they rely on additional test suites to do so.

More discussion is likely needed here. I'll leave this to @billmcspadden-riscv and @allenjbaum in the context of the Golden Model meeting and the Arch Test SIG meeting.

billmcspadden-riscv commented Apr 3, 2024

With respect to the use of ACTs:

In the dev-partners meeting on 2024-4-2, @allenjbaum made the comment that it should be possible to make use of ACTs to verify the functionality of SailFloat. This would be done by manually creating the expected signature and then comparing the signature from the Sail model with the expected signature. This would effectively make the ACT a self-checking test.

This seems reasonable to me. But there are questions as to the implementation. Here are some of the questions:

  1. Who does the signature check? Does the ACT itself perform it? In other words, do you embed the expected signature as a data item within the test and perform the comparison at the end of the test? Or is it a post-processing step? If it is a post-processing step, it will need to fit into the methodology described at riscv-tests, or a new methodology for post-simulation checking will need to be developed.
  2. Who generates the manual signature? How can we be sure that it is correct? Will an error be caught when Sail and Spike are cross-checked?
  3. Will the manual generation of the signature be a maintenance problem? That is, when the ACT is changed, how hard will it be to update the signature?
  4. For ACTs that are automatically generated, will the expected signature also be auto-generated? If not, how difficult will it be to manually generate the signature?
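On the post-processing variant in question 1: architecture tests conventionally dump the signature memory region as one hex word per line, so the check itself can be a small line-by-line diff of the dumped signature against the expected one. A sketch (the word-per-line format follows the riscv-arch-test signature convention; the function name is my own):

```python
def compare_signatures(expected_lines, actual_lines):
    """Compare two signature dumps (one 32-bit hex word per line).

    Returns a list of (index, expected, actual) mismatches; an empty
    list means the test passed.
    """
    exp = [l.strip().lower() for l in expected_lines if l.strip()]
    act = [l.strip().lower() for l in actual_lines if l.strip()]
    if len(exp) != len(act):
        return [("length", len(exp), len(act))]
    return [(i, e, a) for i, (e, a) in enumerate(zip(exp, act)) if e != a]

# Example: one corrupted word at index 2.
expected = ["deadbeef", "00000000", "3f800000"]
actual   = ["deadbeef", "00000000", "3f800001"]
print(compare_signatures(expected, actual))
```

On questions 3 and 4, the maintenance burden then reduces to regenerating the expected file whenever a test changes, which argues for producing it automatically from a trusted simulator rather than by hand.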

allenjbaum commented Apr 3, 2024 via email

@jjscheel

The Golden Model SIG is presently thinking about this. We will await their guidance, @billmcspadden-riscv.

Project status: As-planned · 3 participants · no branches or pull requests