
[WIP] - Feature train orb #31

Draft: wants to merge 23 commits into base: dev

Conversation

@kalwalt kalwalt commented Nov 20, 2022

In this pull request I will try to develop some utilities to train an ORB pattern, partially following the jsfeat code but also the OpenCV implementation, as that was the original one.

@kalwalt kalwalt self-assigned this Nov 20, 2022
@kalwalt kalwalt marked this pull request as draft November 20, 2022 22:10
@kalwalt kalwalt changed the title **WIP** - Feature train orb *WIP* - Feature train orb Nov 20, 2022
@kalwalt kalwalt changed the title *WIP* - Feature train orb [WIP] - Feature train orb Nov 20, 2022

// orb.describe(lev_img.get(), lev_corners[0], corners_num, lev_descr.get());
// This probablly will work in a near future
// orb.describe(lev_img.get(), lev_corners[0], corners_num, &pattern_descriptors[0]);
@kalwalt (Owner Author) Nov 20, 2022

orb.describe cannot be used here yet, because its first parameter is a uintptr_t and its second parameter is an emscripten::val, which cannot be handled from here. I should create a new method in the Orb class:
orb.describe_internal(Matrix_t* mat, KeyPoints* kp, int num_corners, Matrix_t* descr)
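
A minimal sketch of how such an internal overload could be declared, assuming the Matrix_t and KeyPoints classes from webarkit-jsfeat-cpp; the signature below is only the one proposed above, not an existing API:

// Sketch only: Matrix_t and KeyPoints are the webarkit-jsfeat-cpp classes,
// forward-declared here; describe_internal is the method proposed above.
class Matrix_t;
class KeyPoints;

class Orb {
  public:
    // the embind-facing describe(uintptr_t, emscripten::val, ...) stays as it is;
    // this internal variant works on raw pointers so it can be called from
    // C++ code such as train_orb_pattern_internal.
    void describe_internal(Matrix_t* mat, KeyPoints* kp, int num_corners, Matrix_t* descr);
};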

@@ -149,13 +149,17 @@ void train_orb_pattern_internal(const char* filename) {
ext);
free(ext);
}
webarkitLOGi("Image done!");

JSLOGi("Starting detection routine...");
@kalwalt (Owner Author) Nov 20, 2022

These two print statements work; they print these messages:

Image done!
Starting detection routine...

but at the end of the code they fail to print to the console. I would like to understand why this happens... see the comment above.

// This probablly will work in a near future
// orb.describe(lev_img.get(), lev_corners[0], corners_num, &pattern_descriptors[0]);

// console.log("train " + lev_img.cols + "x" + lev_img.rows + " points: " + corners_num);
@kalwalt (Owner Author)

...continuing from below: these two print statements instead do nothing. I will open an issue as a reminder.

auto count = kpc.count;
// sort by score and reduce the count if needed
if (count > max_allowed) {
// qsort_internal<KeyPoint_t, bool>(corners.kpoints, 0, count - 1, [](KeyPoint_t i, KeyPoint_t j){return (i.score < j.score);});
@kalwalt (Owner Author)

I'm not sure about this; maybe it's better to use a slightly different approach. I'm looking at the OpenCV code in the ORB implementation and there is another possibility.

@kalwalt (Owner Author)

retainBest is taken from OpenCV, but I need to figure out whether this is correct.

@kalwalt

kalwalt commented Nov 20, 2022

Speaking about sorting the keypoints, OpenCV uses neither qsort nor sort but another sorting algorithm, probably because it is more efficient? The OpenCV KeyPoint class and KeyPoint_t from jsfeat are quite similar, so we can probably adapt the OpenCV code to our needs. If we look at the ORB code, in the computeKeyPoints method the keypoints close to the border are removed and then the rest are filtered:

// Remove keypoints very close to the border
KeyPointsFilter::runByImageBorder(keypoints, img.size(), edgeThreshold);

// Keep more points than necessary as FAST does not give amazing corners
KeyPointsFilter::retainBest(keypoints, scoreType == ORB_Impl::HARRIS_SCORE ? 2 * featuresNum : featuresNum);

https://github.com/opencv/opencv/blob/64aad34cb4abfb6a2603f3f4ecae7f4f0ba1414d/modules/features2d/src/orb.cpp#L853-L857

This is the retainBest method:

// takes keypoints and culls them by the response
void KeyPointsFilter::retainBest(std::vector<KeyPoint>& keypoints, int n_points)
{
    //this is only necessary if the keypoints size is greater than the number of desired points.
    if( n_points >= 0 && keypoints.size() > (size_t)n_points )
    {
        if (n_points==0)
        {
            keypoints.clear();
            return;
        }
        //first use nth element to partition the keypoints into the best and worst.
        std::nth_element(keypoints.begin(), keypoints.begin() + n_points - 1, keypoints.end(), KeypointResponseGreater());
        //this is the boundary response, and in the case of FAST may be ambiguous
        float ambiguous_response = keypoints[n_points - 1].response;
        //use std::partition to grab all of the keypoints with the boundary response.
        std::vector<KeyPoint>::const_iterator new_end =
        std::partition(keypoints.begin() + n_points, keypoints.end(),
                       KeypointResponseGreaterThanOrEqualToThreshold(ambiguous_response));
        //resize the keypoints, given this new end point. nth_element and partition reordered the points inplace
        keypoints.resize(new_end - keypoints.begin());
    }
}

https://github.com/opencv/opencv/blob/64aad34cb4abfb6a2603f3f4ecae7f4f0ba1414d/modules/features2d/src/keypoint.cpp#L68-L90
Can we implement this inside webarkit-jsfeat-cpp? We have to try!
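
As a starting point, here is a rough sketch of how retainBest could be adapted to jsfeat-style keypoints, assuming a KeyPoint_t with a score member and a plain std::vector as storage (the real KeyPoints container in webarkit-jsfeat-cpp may differ):

#include <algorithm>
#include <vector>

// assumed fields, based on the snippets in this PR
struct KeyPoint_t {
    float x, y, score;
    int level;
};

// keep the n_points best keypoints by score, plus any that tie with the boundary score
static void retain_best(std::vector<KeyPoint_t>& kpoints, int n_points) {
    if (n_points >= 0 && kpoints.size() > (size_t)n_points) {
        if (n_points == 0) {
            kpoints.clear();
            return;
        }
        // partition so the n_points highest-scoring keypoints come first
        std::nth_element(kpoints.begin(), kpoints.begin() + n_points - 1, kpoints.end(),
                         [](const KeyPoint_t& a, const KeyPoint_t& b) { return a.score > b.score; });
        // boundary score; as in OpenCV this may be ambiguous for FAST
        float ambiguous_score = kpoints[n_points - 1].score;
        // grab all remaining keypoints whose score ties with the boundary
        auto new_end = std::partition(kpoints.begin() + n_points, kpoints.end(),
                                      [ambiguous_score](const KeyPoint_t& k) { return k.score >= ambiguous_score; });
        kpoints.resize(new_end - kpoints.begin());
    }
}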

@kalwalt

kalwalt commented Nov 22, 2022

While testing with Docker and emsdk 3.1.25 I got this error, and of course the build fails:

/home/kalwalt/kalwalt-github/webarkit-jsfeat-cpp/emscripten/WebARKitLib/lib/SRC/KPM/FreakMatcher/framework/image.h:89:37: error: ISO C++17 does not allow dynamic exception specifications [-Wdynamic-exception-spec]
                   size_t channels) throw(Exception);
                                    ^~~~~~~~~~~~~~~~
/home/kalwalt/kalwalt-github/webarkit-jsfeat-cpp/emscripten/WebARKitLib/lib/SRC/KPM/FreakMatcher/framework/image.h:89:37: note: use 'noexcept(false)' instead
                   size_t channels) throw(Exception);
                                    ^~~~~~~~~~~~~~~~
                                    noexcept(false)
1 error generated.
emcc: error: '/home/kalwalt/emsdk/upstream/bin/clang++ -target wasm32-unknown-emscripten -fignore-exceptions -fvisibility=default -mllvm -combiner-global-alias-analysis=false -mllvm -enable-emscripten-sjlj -mllvm -disable-lsr -DEMSCRIPTEN -Werror=implicit-function-declaration -I/home/kalwalt/emsdk/upstream/emscripten/cache/sysroot/include/SDL --sysroot=/home/kalwalt/emsdk/upstream/emscripten/cache/sysroot -Xclang -iwithsysroot/include/compat -I/home/kalwalt/kalwalt-github/webarkit-jsfeat-cpp/emscripten/WebARKitLib/include -I/home/kalwalt/kalwalt-github/webarkit-jsfeat-cpp/build/ -I/home/kalwalt/kalwalt-github/webarkit-jsfeat-cpp/emscripten/ -I/home/kalwalt/kalwalt-github/webarkit-jsfeat-cpp/emscripten/WebARKitLib/lib/SRC/KPM/FreakMatcher -Oz -D HAVE_NFT /home/kalwalt/kalwalt-github/webarkit-jsfeat-cpp/emscripten/WebARKitLib/lib/SRC/KPM/FreakMatcher/detectors/DoG_scale_invariant_detector.cpp -c -o /tmp/emscripten_temp_qwriigpt/DoG_scale_invariant_detector_94.o' failed (returned 1)

exec error: 1
Built at Tue Nov 22 19:10:49 CET 2022
Done!

I need to open an issue on WebARKitLib; in the meanwhile I will continue to use emsdk 3.1.20.
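
For reference, this is a minimal illustration of the change the compiler note asks for; the Image declaration below is a placeholder, not the real one from WebARKitLib's image.h:

#include <cstddef>

struct Exception {};

struct Image {
    // rejected by clang in C++17 mode (dynamic exception specifications were removed):
    // Image(std::size_t channels) throw(Exception);

    // accepted replacement, as suggested by the compiler note:
    Image(std::size_t channels) noexcept(false);
};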



JSLOGi("Resampling image...");

@kalwalt (Owner Author)

Resampling is not needed in our case, because we provide the image at the right size. The code was taken from the jsfeat sample_orb example, where we simply resampled the image taken from the canvas (webcam) to a smaller size. Anyway, the resample function has some issues; in fact the console log "Image resampled, starting pyramid now..." cannot be printed with this function enabled (just comment it out and recompile to test).

// prepare preview
std::unique_ptr<Matrix_t> pattern_preview = std::make_unique<Matrix_t>(jpegImage->xsize >> 1, jpegImage->ysize >> 1, ComboTypes::U8C1_t);
imgproc.pyrdown_internal(lev0_img.get(), pattern_preview.get());

Array<KeyPoints> lev_corners;
Array<KeyPoints*> lev_corners(4);
@kalwalt (Owner Author)

I think we should provide an array of pointers and pre-initialize them.
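
A small sketch of the idea, using std::vector and std::unique_ptr as a stand-in for the project's Array<KeyPoints*>; the KeyPoints fields below are assumptions based on the snippets in this PR:

#include <memory>
#include <vector>

// stand-in for the webarkit-jsfeat-cpp KeyPoints class
struct KeyPoint_t { float x = 0, y = 0, score = 0; };
struct KeyPoints {
    int size = 0;
    std::vector<KeyPoint_t> kpoints;
};

// pre-initialize one KeyPoints instance per pyramid level, so that later
// code can safely set sizes and fill lev_corners[lev]
std::vector<std::unique_ptr<KeyPoints>> make_lev_corners(int num_train_levels) {
    std::vector<std::unique_ptr<KeyPoints>> lev_corners;
    for (int lev = 0; lev < num_train_levels; ++lev) {
        lev_corners.push_back(std::make_unique<KeyPoints>());
    }
    return lev_corners;
}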

@@ -186,22 +190,30 @@ void train_orb_pattern_internal(const char* filename) {
// preallocate corners array
i = (new_width * new_height) >> lev;
while (--i >= 0) {
lev_corners[lev].set_size(i);
lev_corners[lev]->set_size(i);
lev_corners[lev]->allocate();
@kalwalt (Owner Author)

We need to set the size and allocate the entire set.

}
std::cout << "Num. of level: " << lev << std::endl;
@kalwalt (Owner Author)

This prints 4 levels in the console:
Num. of level: 0 and so on...

pattern_descriptors.push_back(std::unique_ptr<Matrix_t>(new Matrix_t(32, max_per_level, ComboTypes::U8C1_t)));
}

imgproc.gaussian_blur_internal(lev0_img.get(), lev_img.get(), 5, 0.2); // this is more robust
//std::cout << "Size of first lev_corners: " << lev_corners[0]->kpoints.size() << std::endl;
@kalwalt (Owner Author)

This is not printed...

//std::cout << "Size of first lev_corners: " << lev_corners[0]->kpoints.size() << std::endl;

imgproc.gaussian_blur_internal(lev0_img.get(), lev_img.get(), 5, 2); // this is more robust

@kalwalt (Owner Author)

This is OK, it is printed...

@@ -35,6 +35,12 @@ class KeyPoints {
this->size = kp.size;
this->kpoints = kp.kpoints;
}

@kalwalt (Owner Author)

I added the allocate function because if you initialize a KeyPoints with the default constructor it will not initialize the kpoints.
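
A rough sketch of what such an allocate() helper might look like; field names and defaults are assumptions based on the snippets in this PR, not the actual implementation:

#include <vector>

struct KeyPoint_t {
    float x = 0, y = 0, score = 0;
    int level = -1;
};

class KeyPoints {
  public:
    void set_size(int s) { size = s; }

    // fill kpoints with default-constructed keypoints so that a KeyPoints
    // built with the default constructor is immediately usable
    void allocate() { kpoints.assign(size, KeyPoint_t{}); }

    int size = 0;
    std::vector<KeyPoint_t> kpoints;
};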

@@ -84,6 +84,7 @@ class Yape06 {
}
}
}
@kalwalt (Owner Author)

This is not printing anything...

@kalwalt (Owner Author)

Now it displays some value: Count inside Yape06 detect_internal: 64192

@kalwalt

kalwalt commented Nov 23, 2022

Somewhere the code is broken; in fact in the Yape06 detect function the compute_laplacian function doesn't print anything (just put a cout there to print the dst). It means that either some of the args are empty or there is a memory leak. It would probably be helpful to add some specific flags for debugging.

@kalwalt

kalwalt commented Nov 24, 2022

I discovered that I missed an important step: the pattern image imported by ar2ReadJpegImage should be converted to grayscale (this is the missing step), but I can't apply that here because all the methods are specific to emscripten and are not meant to be used internally. First of all I have to make the imgproc methods more flexible and rearrange the code a bit. I will do this in another specific PR.
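
For reference, a minimal sketch of the missing grayscale step, assuming the JPEG image is an interleaved RGB buffer (the actual layout of the image returned by ar2ReadJpegImage may differ):

#include <cstddef>
#include <cstdint>
#include <vector>

// rgb: interleaved R,G,B bytes, width * height * 3 in size
std::vector<std::uint8_t> rgb_to_gray(const std::uint8_t* rgb, std::size_t width, std::size_t height) {
    std::vector<std::uint8_t> gray(width * height);
    for (std::size_t i = 0; i < width * height; ++i) {
        const std::uint8_t r = rgb[3 * i + 0];
        const std::uint8_t g = rgb[3 * i + 1];
        const std::uint8_t b = rgb[3 * i + 2];
        // BT.601 integer approximation of 0.299 R + 0.587 G + 0.114 B
        gray[i] = static_cast<std::uint8_t>((299 * r + 587 * g + 114 * b) / 1000);
    }
    return gray;
}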

@kalwalt kalwalt mentioned this pull request Nov 24, 2022
std::cout << i << std::endl;
while (--i >= 0) {
lev_corners[lev]->set_size(i);
//lev_corners[lev]->allocate();
@kalwalt (Owner Author) Nov 27, 2022

No need to allocate if we create the vector: Array<KeyPoints*> lev_corners(num_train_levels);

@kalwalt

kalwalt commented Nov 27, 2022

OK, with the latest commit 6fe0b12 we receive a count number from detect_keypoints. I will make some comments in the code to use as a future reference. The main problem was adapting the JavaScript code to C++. Look at the while cycle and you will understand what I mean.

@kalwalt

kalwalt commented Nov 27, 2022

Things to do:

  • Use lev_corners as unique_ptr as tried before.
  • Solve orb.describe.

@kalwalt

kalwalt commented Mar 1, 2023

At this point of the code we get this from the output log:

jsfeatES6cpp_debug.js:1667 JSLOG [info] Level 0 with 3352576 keypoints.
put_char @ jsfeatES6cpp_debug.js:1667
jsfeatES6cpp_debug.js:1667 JSLOG [info] Level 1 with 1676288 keypoints.
put_char @ jsfeatES6cpp_debug.js:1667
jsfeatES6cpp_debug.js:1667 JSLOG [info] Level 2 with 838144 keypoints.
put_char @ jsfeatES6cpp_debug.js:1667
jsfeatES6cpp_debug.js:1667 JSLOG [info] Level 3 with 419072 keypoints.
put_char @ jsfeatES6cpp_debug.js:1667
jsfeatES6cpp_debug.js:1655 Size of first lev_corners: 3352576
jsfeatES6cpp_debug.js:1667 JSLOG [info] After Gaussian blur
put_char @ jsfeatES6cpp_debug.js:1667
jsfeatES6cpp_debug.js:1667 JSLOG [debug] deleting data_t
put_char @ jsfeatES6cpp_debug.js:1667
jsfeatES6cpp_debug.js:1655 Count inside Yape06 detect_internal: 64192
jsfeatES6cpp_debug.js:1667 JSLOG [debug] deleting data_t
put_char @ jsfeatES6cpp_debug.js:1667
jsfeatES6cpp_debug.js:1655 Count inside detect_keypoints: 64192
jsfeatES6cpp_debug.js:1667 JSLOG [info] train 1637 x 2048 points: 300

Instead, from jsfeatNext:

train 512x384 points: 300
sample_orb.html:195 train 362x271 points: 300
sample_orb.html:195 train 255x191 points: 261
sample_orb.html:195 train 181x135 points: 139

What is wrong with the C++ code?

@kalwalt

kalwalt commented Mar 1, 2023

JSLOG [info] train 1637 x 2048 points: 300 displays only one line because we only print for the first corners level. I will fix this.

@kalwalt

kalwalt commented Mar 3, 2023

With commit 4a0cf97 I added the training for the other levels, but it doesn't calculate correctly. I think there is an issue with the resample method; I'm not sure if it ever worked correctly.

- using standard lib to speed up