
[FIX] small fixes in createDefaultStatsModel.m, getAcquisitionTime.m & bidsResults.m #1248

Merged · 11 commits · Jul 1, 2024

Conversation

@d-ni374 (Contributor) commented Jun 18, 2024

Small fixes in 3 scripts:

  • createDefaultStatsModel.m: changed the elseif condition that defines GroupBy for dataset-level nodes.
  • getAcquisitionTime.m: modified the condition that verifies the compatibility of acquisition time and slice timing values. The modified version adds a fixed buffer of 0.01 s, while the original added UP TO 0.01 s (by rounding up). This way all datasets are treated equally: the size of the added time buffer no longer depends on coincidence, which should make the function less prone to throw an error. In tests on the public OpenNeuro dataset ds000224, the computed acquisitionTime was 2.139 s, but the maximum slice timing is 2.145 s; since the ceil function only added 0.001 s, an error was thrown.
  • bidsResults.m: the regular expression used to identify run-level contrasts was changed from _[0-9]*$ to _[0-9]+$ so that a contrast name must actually end with a number (which seems to be the definition of a run-level contrast). Otherwise, any contrast name ending with an underscore would be treated as a run-level contrast. Apart from that, the spelling of contrastsNamesList was corrected.
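The last two fixes can be illustrated with a minimal Python sketch (bidspm itself is MATLAB/Octave, but the arithmetic and the regex semantics carry over; the numbers are the ds000224 values quoted above, and the contrast name is just an example):

```python
import math
import re

# Acquisition-time buffer (ds000224 values quoted above):
acq_time = 2.139           # computed acquisition time, in seconds
max_slice_timing = 2.145   # maximum slice timing, in seconds

# Old behaviour: round up to the next 0.01 s, which here adds only 0.001 s.
old_buffered = math.ceil(acq_time * 100) / 100
print(old_buffered >= max_slice_timing)  # False -> an error was thrown

# New behaviour: always add a fixed 0.01 s buffer.
print(acq_time + 0.01 >= max_slice_timing)  # True

# Run-level contrast regex: '+' requires at least one trailing digit,
# so a name ending in a bare underscore is no longer treated as run level.
print(bool(re.search(r"_[0-9]*$", "listening_")))   # True (old pattern: false positive)
print(bool(re.search(r"_[0-9]+$", "listening_")))   # False (new pattern)
print(bool(re.search(r"_[0-9]+$", "listening_1")))  # True (new pattern)
```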

@sourcery-ai bot (Contributor) commented Jun 18, 2024

🧙 Sourcery has finished reviewing your pull request!



@sourcery-ai bot (Contributor) left a comment
We've reviewed this pull request using the Sourcery rules engine. If you would also like our AI-powered code review then let us know.

@Remi-Gau (Contributor) commented Jul 1, 2024

Thanks @d-ni374 for the PR!!!! And sorry for being slow at responding.

I will add a few tests to your PR to try to make sure that those problems do not come back in the future.

@Remi-Gau (Contributor) commented Jul 1, 2024

@all-contributors please add @d-ni374 for bug, code


@Remi-Gau

I've put up a pull request to add @d-ni374! 🎉

@@ -93,6 +93,10 @@ authors:
affiliation: "University of Gent"
orcid: "https://orcid.org/0000-0001-8279-6118"

- family-names: Huber
given-names: Daniel
affiliation: University of Innsbruck

@d-ni374
is this your affiliation? (got this from github?)


I will merge the PR, but do not hesitate to open another PR to amend this

@d-ni374 (Contributor, Author) replied:

Yes, the affiliation is correct. Thank you for adding!

@Remi-Gau merged commit bf2e35b into cpp-lln-lab:main on Jul 1, 2024
27 of 30 checks passed
@Remi-Gau (Contributor) commented Jul 7, 2024

To check: it seems that the results workflow may have been affected by this:

https://github.com/cpp-lln-lab/bidspm/actions/runs/9805840957/job/27076371017?pr=1258#step:17:1284

[09:15:02] bidspm - INFO				printProcessingSubject
  PROCESSING SUBJECT No.: 1 SUBJECT LABEL : 01
  List of contrast in this SPM file
  
  	- listening_1
  	- listening_inf_baseline_1
  
  Warning:
  [09:15:02] bidspm - WARNING				getContrastNb
  
  This SPM file
  /tmp/tp9fcf8405_d635_4ef5_9125_7a7133955b2f/outputs/derivatives/bidspm-stats/sub-01/task-auditory_space-individual_FWHM-6/SPM.mat
   does not contain a contrast named:
   "listening_1_[0-9]+"

@Remi-Gau (Contributor) commented Jul 7, 2024

OK, this happens on both Octave and MATLAB, and I can reproduce it locally.
I must have missed something and don't have proper tests to detect this.

Will open a new issue to track this new bug.

@Remi-Gau (Contributor) commented Jul 8, 2024

Still missed something:

warning: 
[08:14:52] bidspm - WARNING                             getContrastNb

This SPM file
/outputs/ds000001/derivatives/bidspm-stats/sub-02/task-balloonanalogrisktask_space-MNI152NLin2009cAsym_FWHM-0/SPM.mat
 does not contain a contrast named:
 "cash_demean_[0-9]+"

https://app.circleci.com/pipelines/github/cpp-lln-lab/bidspm/887/workflows/2e3f5eb5-8af7-4644-a4f2-8f6ed55c4845/jobs/2733

@d-ni374 (Contributor, Author) commented Jul 9, 2024

I've seen that the contrast names for ds000001 end with '_run-01'. Maybe changing the regular expression for identifying run level contrasts in the function endsWithRunNumber to '[_-][0-9]+\${0,1}$' may solve the problem.
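Checked quickly in Python's re module (the regex constructs used here behave the same as in MATLAB's regexp; the contrast names are examples taken from the logs above):

```python
import re

# Proposed pattern: a separator (_ or -), at least one digit, optional literal '$'
proposed = re.compile(r"[_-][0-9]+\${0,1}$")

for name in ["listening_1", "cash_demean_run-01", "cash_demean"]:
    print(name, bool(proposed.search(name)))
# listening_1 True
# cash_demean_run-01 True
# cash_demean False
```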

@Remi-Gau (Contributor) commented Jul 9, 2024

> I've seen that the contrast names for ds000001 end with '_run-01'. Maybe changing the regular expression for identifying run level contrasts in the function endsWithRunNumber to '[_-][0-9]+\${0,1}$' may solve the problem.

yup having a look at it now

also fixing some other issues I introduced in other PRs...

@Remi-Gau (Contributor) commented Jul 9, 2024

OK, things were a bit more complicated, because contrasts need to be named differently
depending on whether the dataset uses sessions or run entities:

so I updated the documentation to explain it: see this PR
https://bidspm--1273.org.readthedocs.build/en/1273/stats/bids_stats_model.html#run-level

The regex got more complex: .*(_[0-9]+\$?$|_(?:ses|run)-[0-9]*$|_ses-[0-9]*_run-[0-9]*$)

To make it more clear, here are the test cases I am using:

  % no run label or session label could be inferred from bids names
  assertEqual(endsWithRunNumber('foo_1'), true);
  assertEqual(endsWithRunNumber('foo_10'), true);

  % session label but not run label could be inferred from bids names
  % the session has only one run with no run label in the file name
  assertEqual(endsWithRunNumber('foo_ses-1'), true);
  assertEqual(endsWithRunNumber('foo_ses-29'), true);

  % only run label could be inferred from bids names
  % there is only one session
  % but it is not used explicitly in the filenames
  assertEqual(endsWithRunNumber('foo_run-1'), true);
  assertEqual(endsWithRunNumber('foo_run-05'), true);

  % subject level contrast names should have nothing appended
  assertEqual(endsWithRunNumber('foo'), false);
  assertEqual(endsWithRunNumber('foo_1_'), false);
  assertEqual(endsWithRunNumber('foo_1_a'), false);
  assertEqual(endsWithRunNumber('foo_'), false);

  % there should be an underscore
  assertEqual(endsWithRunNumber('foo1'), false);
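As a quick cross-check (not the bidspm implementation), the same test cases pass against the new regex in Python, whose re module agrees with MATLAB's regexp on the constructs used here:

```python
import re

# Final pattern quoted above
ends_with_run_number = re.compile(
    r".*(_[0-9]+\$?$|_(?:ses|run)-[0-9]*$|_ses-[0-9]*_run-[0-9]*$)"
)

matches = ["foo_1", "foo_10", "foo_ses-1", "foo_ses-29", "foo_run-1", "foo_run-05"]
non_matches = ["foo", "foo_1_", "foo_1_a", "foo_", "foo1"]

assert all(ends_with_run_number.search(n) for n in matches)
assert not any(ends_with_run_number.search(n) for n in non_matches)
print("all cases agree")
```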
