Currently, many of the SPA unit tests (e.g. working memory, thalamus) break easily from changes that should actually improve the network. This defeats the purpose of the unit tests: they no longer tell whether the code is working, and it takes quite a bit of background knowledge to judge whether a failure means something is broken or the tests simply need adjusting.
Each test should somehow define a bound on the permitted error, so that any improvement in accuracy still leaves the tests passing. I assume the main difficulty is figuring out how to determine that error bound in each case.
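A minimal sketch of what that pattern could look like, using synthetic data instead of a real SPA model. The `rmse` helper and the `0.2` tolerance are illustrative assumptions, not existing Nengo test code; the point is only that the assertion checks an explicit error bound rather than an exact trajectory:

```python
import numpy as np


def rmse(actual, ideal):
    """Root-mean-square error between a decoded signal and its ideal value."""
    actual, ideal = np.asarray(actual), np.asarray(ideal)
    return np.sqrt(np.mean((actual - ideal) ** 2))


def test_error_stays_within_bound():
    rng = np.random.RandomState(0)
    ideal = np.sin(np.linspace(0, 2 * np.pi, 1000))
    # Stand-in for the decoded output of a working-memory or thalamus network.
    actual = ideal + rng.normal(scale=0.05, size=ideal.shape)
    # Any change that improves accuracy keeps this green; only a regression
    # beyond the agreed bound makes the test fail.
    assert rmse(actual, ideal) < 0.2  # permitted error bound, chosen per test
```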
It would be cool if those error values were saved for every commit, so that we can track improvements and regressions over time.
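A rough sketch of how per-commit error values could be recorded for later comparison; the `records.json` file name and the `record_error` helper are hypothetical, not part of Nengo's test infrastructure:

```python
import json
import subprocess
from pathlib import Path


def current_commit():
    """Return the abbreviated hash of the checked-out commit."""
    return subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], text=True
    ).strip()


def record_error(test_name, error, path="records.json"):
    """Append an error measurement under the current commit."""
    records_path = Path(path)
    records = json.loads(records_path.read_text()) if records_path.exists() else {}
    records.setdefault(current_commit(), {})[test_name] = float(error)
    records_path.write_text(json.dumps(records, indent=2, sort_keys=True))
```

A test (or a pytest hook) could then call something like `record_error("test_thalamus", 0.12)` after computing its error, and the resulting file would show how each test's error evolves from commit to commit.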
(original issue nengo/nengo#512)