[WIP] - Feature train orb #31

Draft
wants to merge 23 commits into base: dev
Changes from 1 commit
2 changes: 1 addition & 1 deletion build/jsfeatES6cpp.js (large diff not rendered by default)
383 changes: 363 additions & 20 deletions build/jsfeatES6cpp_debug.js (large diff not rendered by default)
2 changes: 1 addition & 1 deletion build/jsfeatcpp.js (large diff not rendered by default)
383 changes: 363 additions & 20 deletions build/jsfeatcpp_debug.js (large diff not rendered by default)

32 changes: 24 additions & 8 deletions emscripten/webarkitJsfeat.cpp
@@ -149,13 +149,17 @@ void train_orb_pattern_internal(const char* filename) {
ext);
free(ext);
}
webarkitLOGi("Image done!");

JSLOGi("Starting detection routine...");
@kalwalt (Owner, Author) commented on Nov 20, 2022:

These two print statements work; they print these messages:

Image done!
Starting detection routine...

But at the end of the function they fail to print to the console, and I would like to understand why this happens. See the related comment.


Orb orb;
Imgproc imgproc;
detectors::Detectors detectors;
std::unique_ptr<Matrix_t> lev0_img = std::make_unique<Matrix_t>(jpegImage->xsize, jpegImage->ysize, ComboTypes::U8C1_t);
std::unique_ptr<Matrix_t> lev_img = std::make_unique<Matrix_t>(jpegImage->xsize, jpegImage->ysize, ComboTypes::U8C1_t);
Array<std::unique_ptr<Matrix_t>> pattern_corners;

auto sc0 = std::min(max_pattern_size / jpegImage->ysize, max_pattern_size / jpegImage->xsize);
new_width = (jpegImage->ysize * sc0) | 0;
new_height = (jpegImage->xsize * sc0) | 0;
@@ -168,25 +172,37 @@ void train_orb_pattern_internal(const char* filename) {
imgproc.resample(img_u8.get(), lev0_img.get(), new_width, new_height);

// prepare preview
// pattern_preview = new jsfeat.matrix_t(new_width >> 1, new_height >> 1, jsfeat.U8_t | jsfeat.C1_t);
std::unique_ptr<Matrix_t> pattern_preview = std::make_unique<Matrix_t>(jpegImage->xsize >> 1, jpegImage->ysize >> 1, ComboTypes::U8C1_t);
imgproc.pyrdown_internal(lev0_img.get(), pattern_preview.get());

Array<KeyPoints> lev_corners;
Array<std::unique_ptr<Matrix_t>> pattern_descriptors;

for (lev = 0; lev < num_train_levels; ++lev) {
//pattern_corners[lev] = [];
//lev_corners = pattern_corners[lev];
// what should we do with this code?
// pattern_corners[lev] = [];
// lev_corners = pattern_corners[lev];

// preallocate corners array
i = (new_width * new_height) >> lev;
while (--i >= 0) {
//lev_corners[i] = new jsfeatCpp.keypoint_t(0, 0, 0, 0, -1);
lev_corners[lev].set_size(i);
}

// pattern_descriptors[lev] = new jsfeatCpp.matrix_t(32, max_per_level, jsfeat.U8_t | jsfeat.C1_t);
pattern_descriptors.push_back(std::unique_ptr<Matrix_t>(new Matrix_t(32, max_per_level, ComboTypes::U8C1_t)));
}

imgproc.gaussian_blur_internal(lev0_img.get(), lev_img.get(), 5, 0.2); // this is more robust
corners_num = detectors.detect_keypoints(lev_img.get(), lev_corners[0], max_per_level);

// orb.describe(lev_img.get(), lev_corners[0], corners_num, lev_descr.get());
// This will probably work in the near future
// orb.describe(lev_img.get(), lev_corners[0], corners_num, &pattern_descriptors[0]);
@kalwalt (Owner, Author) commented on Nov 20, 2022:

orb.describe cannot be used here yet, because its first parameter is a uintptr_t and its second is an emscripten::val, neither of which can be managed here. I should create a new method in the Orb class:

orb.describe_internal(Matrix_t* mat, KeyPoints* kp, int num_corners, Matrix_t* descr)

// console.log("train " + lev_img.cols + "x" + lev_img.rows + " points: " + corners_num);
@kalwalt (Owner, Author) commented:

...continuing from the other comment: these two print statements instead do nothing. I will open an issue as a reminder.

JSLOGi("train %i x %i points: %i\n", lev_img.get()->get_cols(), lev_img.get()->get_rows(), corners_num);
std::cout << "train " << lev_img.get()->get_cols() << " x " << lev_img.get()->get_rows() << " points: " << corners_num << std::endl;
free(ext);
free(jpegImage);
};

void train_orb_pattern(std::string filename) {
78 changes: 78 additions & 0 deletions src/feature_detection/detectors.h
@@ -0,0 +1,78 @@
#ifndef DETECTORS_H
#define DETECTORS_H

#include <keypoint_t/keypoint_t.h>
#include <keypoints/keypoints.h>
#include <math/math.h>
#include <matrix_t/matrix_t.h>
#include <types/types.h>
#include <yape06/yape06.h>

namespace jsfeat {

namespace detectors {

class Detectors : public Yape06, public Math {
public:
Detectors() {}
~Detectors() {}

int detect_keypoints(Matrix_t* img, KeyPoints corners, int max_allowed) {
// detect features
auto kpc = detect_internal(img, &corners, 17);
auto count = kpc.count;
// sort by score and reduce the count if needed
if (count > max_allowed) {
// qsort_internal<KeyPoint_t, bool>(corners.kpoints, 0, count - 1, [](KeyPoint_t i, KeyPoint_t j){return (i.score < j.score);});
@kalwalt (Owner, Author) commented:

I'm not sure about this; maybe a slightly different approach would be better. I'm looking at the OpenCV code in the ORB implementation, and there is another possibility.

@kalwalt (Owner, Author) commented:
retainBest is taken from OpenCV, but I need to figure out whether this is correct.
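For reference, the retainBest idea mentioned in the comment above can be sketched with std::nth_element, which partitions around the n-th best score instead of fully sorting, as the commented-out qsort_internal call would. The KeyPoint struct and retain_best name here are minimal stand-ins for the PR's KeyPoint_t and are not the real types:

```cpp
#include <algorithm>
#include <vector>

// Minimal stand-in for the PR's KeyPoint_t.
struct KeyPoint { float x, y, score; };

// Keep only the n_points highest-scoring keypoints (OpenCV-style retainBest,
// roughly). nth_element places the n_points best scores in the front of the
// vector without sorting the rest, which is O(n) on average instead of
// O(n log n) for a full sort.
static void retain_best(std::vector<KeyPoint>& pts, size_t n_points) {
    if (n_points >= pts.size()) return;
    std::nth_element(pts.begin(), pts.begin() + n_points, pts.end(),
                     [](const KeyPoint& a, const KeyPoint& b) { return a.score > b.score; });
    pts.resize(n_points);
}
```

Note that OpenCV's own retainBest also handles ties at the cut-off score; this sketch simply truncates.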

count = max_allowed;
}

// calculate dominant orientation for each keypoint
for (auto i = 0; i < count; ++i) {
corners.kpoints[i].angle = ic_angle(img, corners.kpoints[i].x, corners.kpoints[i].y);
}

return count;
}

private:
// function(a, b) { return (b.score < a.score); }
// bool myfunction(KeyPoint_t i, KeyPoint_t j) { return (i.score < j.score); }
// central difference using image moments to find dominant orientation
// var u_max = new Int32Array([15, 15, 15, 15, 14, 14, 14, 13, 13, 12, 11, 10, 9, 8, 6, 3, 0]);
float ic_angle(Matrix_t* img, int px, int py) {
Array<u_int> u_max{15, 15, 15, 15, 14, 14, 14, 13, 13, 12, 11, 10, 9, 8, 6, 3, 0};
auto half_k = 15; // half patch size
auto m_01 = 0, m_10 = 0;
auto src = img->u8;
auto step = img->get_cols();
auto u = 0, v = 0, center_off = (py * step + px) | 0;
auto v_sum = 0, d = 0, val_plus = 0, val_minus = 0;

// Treat the center line differently, v=0
for (u = -half_k; u <= half_k; ++u)
m_10 += u * src[center_off + u];

// Go line by line in the circular patch
for (v = 1; v <= half_k; ++v) {
// Proceed over the two lines
v_sum = 0;
d = u_max[v];
for (u = -d; u <= d; ++u) {
val_plus = src[center_off + u + v * step];
val_minus = src[center_off + u - v * step];
v_sum += (val_plus - val_minus);
m_10 += u * (val_plus + val_minus);
}
m_01 += v * v_sum;
}

return std::atan2(m_01, m_10);
}
};

} // namespace detectors

} // namespace jsfeat

#endif
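As a sanity check on the ic_angle implementation above, here is a standalone version of the same intensity-centroid moment computation (same u_max table, same loops) operating on a plain row-major U8 buffer, so the behaviour can be exercised outside the Detectors class. The free-function form and the name ic_angle_demo are illustrative, not part of the PR:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Standalone copy of Detectors::ic_angle: image moments m_10 and m_01 over a
// circular patch of radius 15 around (px, py), giving the dominant orientation
// as atan2(m_01, m_10). `step` is the row stride of the U8 image `src`.
static float ic_angle_demo(const std::vector<uint8_t>& src, int step, int px, int py) {
    static const int u_max[] = {15, 15, 15, 15, 14, 14, 14, 13, 13, 12, 11, 10, 9, 8, 6, 3, 0};
    const int half_k = 15;  // half patch size
    int m_01 = 0, m_10 = 0;
    const int center_off = py * step + px;

    // Treat the center line differently, v = 0
    for (int u = -half_k; u <= half_k; ++u)
        m_10 += u * src[center_off + u];

    // Go line by line in the circular patch, summing symmetric row pairs
    for (int v = 1; v <= half_k; ++v) {
        int v_sum = 0;
        const int d = u_max[v];
        for (int u = -d; u <= d; ++u) {
            const int val_plus  = src[center_off + u + v * step];
            const int val_minus = src[center_off + u - v * step];
            v_sum += (val_plus - val_minus);
            m_10  += u * (val_plus + val_minus);
        }
        m_01 += v * v_sum;
    }
    return std::atan2((float)m_01, (float)m_10);
}
```

On a purely horizontal intensity ramp the vertical moment m_01 vanishes, so the angle is 0; on a vertical ramp m_10 vanishes instead and the angle is pi/2.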
1 change: 1 addition & 0 deletions src/jsfeat.h
@@ -1,3 +1,4 @@
#include <feature_detection/detectors.h>
#include <imgproc/imgproc.h>
#include <keypoint_t/keypoint_t.h>
#include <keypoints/keypoints.h>