Android – how to optimize multiple image stitching?

I am doing multi-image stitching in Visual Studio 2012 with C++. I modified stitching_detailed.cpp to fit my requirements, and it gives high-quality results. The problem is that execution takes too much time: for 10 images, about 110 seconds.

This is what takes most of the time:

1) Pairwise matching – 55 seconds for 10 images! I'm using ORB to find feature points. This is the code:

vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(false, 0.35);
matcher(features, pairwise_matches);
matcher.collectGarbage();

I try to use this code because I already know the image sequence:

vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(false, 0.35);

Mat matchMask(features.size(), features.size(), CV_8U, Scalar(0));
for (int i = 0; i < num_images - 1; ++i)
    matchMask.at<uchar>(i, i + 1) = 1;  // only match consecutive pairs
matcher(features, pairwise_matches, matchMask);

matcher.collectGarbage();

It does reduce the time (to 18 seconds), but it does not produce the desired result: only six images are stitched. The last four are dropped because the feature points of image 6 and image 7 do not match well enough, so the chain of matches breaks there.
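One way to keep the speed advantage of the mask while making the chain less fragile is to match each image not only against its immediate neighbour but against the next few images. The sketch below builds such a band mask with plain arrays; `band` and `makeMatchMask` are names I made up for illustration, and in the real code the result would be copied into the `CV_8U` `matchMask` before calling the matcher.

```cpp
#include <cstddef>
#include <vector>

// Build an upper-triangular band mask: image i is matched against
// images i+1 .. i+band. band = 1 reproduces the consecutive-pairs
// mask above; band = 2 gives each image a fallback partner if its
// immediate neighbour fails to match.
std::vector<std::vector<unsigned char>> makeMatchMask(std::size_t numImages,
                                                      std::size_t band)
{
    std::vector<std::vector<unsigned char>> mask(
        numImages, std::vector<unsigned char>(numImages, 0));
    for (std::size_t i = 0; i < numImages; ++i)
        for (std::size_t j = i + 1; j <= i + band && j < numImages; ++j)
            mask[i][j] = 1;
    return mask;
}
```

With `band = 2` and 10 images this matches 17 pairs instead of 9, so it is roughly twice as slow as the strict chain but still far cheaper than all 45 pairs, and the panorama can survive one weak consecutive pair.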

2) Compositing – 38 seconds for 10 images! This is the code:

for (int img_idx = 0; img_idx < num_images; ++img_idx)
{
    printf("Compositing image #%d\n",indices[img_idx]+1);

    // Read image and resize it if necessary
    full_img = imread(img_names[img_idx]);

    Mat K;
    cameras[img_idx].K().convertTo(K, CV_32F);

    // Warp the current image
    warper->warp(full_img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);

    // Warp the current image mask
    mask.create(full_img.size(), CV_8U);
    mask.setTo(Scalar::all(255));
    warper->warp(mask, K, cameras[img_idx].R, INTER_NEAREST, BORDER_CONSTANT, mask_warped);

    // Compensate exposure
    compensator->apply(img_idx, corners[img_idx], img_warped, mask_warped);

    img_warped.convertTo(img_warped_s, CV_16S);
    img_warped.release();
    full_img.release();
    mask.release();

    dilate(masks_warped[img_idx], dilated_mask, Mat());
    resize(dilated_mask, seam_mask, mask_warped.size());
    mask_warped = seam_mask & mask_warped;

    // Blend the current image
    blender->feed(img_warped_s, mask_warped, corners[img_idx]);
}

Mat result, result_mask;
blender->blend(result, result_mask);

The original image resolution is 4160 × 3120. I didn't downscale the images during compositing because it reduces the quality; I used downscaled images in the rest of the pipeline.
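For reference, stitching_detailed.cpp controls the resolution of each stage with its `*_megapix` parameters, computing a scale factor that caps the image area. A minimal sketch of that arithmetic (the 6-megapixel cap below is just an example value I chose, not from the original code):

```cpp
#include <algorithm>
#include <cmath>

// Scale factor that caps an image at `megapix` megapixels, roughly as
// stitching_detailed.cpp does with its work/seam/compose_megapix options.
double capScale(int width, int height, double megapix)
{
    if (megapix <= 0)
        return 1.0;  // non-positive cap means "keep full resolution"
    double s = std::sqrt(megapix * 1e6 /
                         (static_cast<double>(width) * height));
    return std::min(1.0, s);  // never upscale
}
```

A 4160 × 3120 image is about 13 MP, so capping compositing at 6 MP would resize by roughly 0.68 in each dimension, cutting the pixel count (and warp/blend time) by more than half.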

As you can see, I have already modified the code and reduced the time, but I still want to reduce it as much as possible.

3) Feature finding – using ORB, 10 seconds for 10 images, detecting at most 1530 feature points per image.

55 + 38 + 10 = 103 seconds, plus 7 seconds for the remaining code = 110 seconds.

When I run this code on Android, it needs almost all of the smartphone's RAM to execute. How can I reduce the time and memory consumption on Android devices? (My Android device has 2 GB of RAM.)

I've already optimized the rest of the code. Thanks for any help!

Edit 1: I used image downscaling in the compositing step, and the time went from 38 seconds down to 16 seconds. I also managed to reduce the time of the rest of the code.

So now it's down from 110 to 85 seconds. What would help most is reducing the pairwise-matching time, and I don't know how!

Edit 2: I found the pairwise-matching code in matchers.cpp and created my own function in my main code to optimize the time. For the compositing step, I downscale as far as possible without the final image losing clarity. For feature finding, I detect features on a downscaled copy of each image. Now I can easily stitch up to 50 images.
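When features are detected on a downscaled copy, their coordinates live in the downscaled frame. In the OpenCV pipeline this is normally absorbed through the camera-scale aspect ratios, but the underlying idea is just a coordinate rescale, sketched here with a made-up `Pt` struct rather than `cv::KeyPoint`:

```cpp
#include <vector>

struct Pt { float x, y; };

// Map keypoints detected on an image downscaled by `scale`
// (0 < scale <= 1) back to full-resolution coordinates.
std::vector<Pt> rescaleKeypoints(const std::vector<Pt>& kps, float scale)
{
    std::vector<Pt> out;
    out.reserve(kps.size());
    for (const Pt& p : kps)
        out.push_back({p.x / scale, p.y / scale});
    return out;
}
```

Detection cost grows with pixel count, so finding features at half scale touches a quarter of the pixels; the rescale afterwards is essentially free.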

Answer:

Since 55 down to 18 seconds is already a pretty good improvement, maybe you can take more control over the matching process. First, if you haven't already, I recommend debugging each step so you know exactly why a given image fails to match. You can also control, for example, the number of ORB features you detect. In some cases you can limit them and still get good results, which speeds things up (not just feature detection, but the matching step as well, since there are fewer descriptors to compare).
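Limiting features usually means keeping only the strongest detections. A minimal stdlib-only sketch of that filter, using a made-up `Kp` struct in place of `cv::KeyPoint`:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Kp { float x, y, response; };

// Keep only the `maxFeatures` keypoints with the highest detector
// response; fewer descriptors means faster description and matching.
void retainBest(std::vector<Kp>& kps, std::size_t maxFeatures)
{
    if (kps.size() <= maxFeatures)
        return;
    std::nth_element(kps.begin(), kps.begin() + maxFeatures, kps.end(),
                     [](const Kp& a, const Kp& b) {
                         return a.response > b.response;
                     });
    kps.resize(maxFeatures);
}
```

In practice you would not need to write this yourself: `cv::ORB` takes an `nfeatures` limit at construction, and `cv::KeyPointsFilter::retainBest` does the same filtering for `cv::KeyPoint` vectors.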

Hopefully this will also let you detect the situation you mentioned, where the loop breaks. Then you can respond accordingly: keep matching only the consecutive pairs in the loop to save time, but when you detect a problem with a specific pair, force the program to continue (or change the parameters and try to match that pair again).
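Each `MatchesInfo` in `pairwise_matches` carries a `confidence` value, so the broken-chain case can be detected by scanning the consecutive pairs before compositing. A stdlib-only sketch of that check (`weakPairs` is a hypothetical helper; the 0.35 threshold echoes the matcher threshold used in the question):

```cpp
#include <vector>

// Given the match confidence of each consecutive pair (i, i+1),
// return the indices of pairs below `minConf`, so they can be
// re-matched with relaxed parameters instead of silently breaking
// the panorama chain.
std::vector<int> weakPairs(const std::vector<double>& pairConf,
                           double minConf)
{
    std::vector<int> weak;
    for (int i = 0; i < static_cast<int>(pairConf.size()); ++i)
        if (pairConf[i] < minConf)
            weak.push_back(i);
    return weak;
}
```

For each reported index you could, for example, re-run the matcher on just that pair with a lower confidence threshold or more ORB features, which keeps the fast masked matching for all the pairs that already work.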

I don't think there is much room for improvement in the compositing step, since you don't want to lose quality. If I were you, I would look into whether threading and parallel computing could help.
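Since each iteration of the compositing loop processes one image independently, the warping work is a natural candidate for threads. A minimal `std::thread` sketch of a parallel-for (the OpenCV blender's `feed` is not documented as thread-safe, so in practice you would warp in parallel, store the per-image results, and feed them sequentially afterwards):

```cpp
#include <functional>
#include <thread>
#include <vector>

// Run work(i) for i in [0, n) across numThreads worker threads.
// Safe as long as work(i) only writes to slot i of its outputs.
void parallelFor(int n, int numThreads,
                 const std::function<void(int)>& work)
{
    std::vector<std::thread> pool;
    for (int t = 0; t < numThreads; ++t) {
        pool.emplace_back([t, n, numThreads, &work] {
            for (int i = t; i < n; i += numThreads)
                work(i);  // strided partition of the index range
        });
    }
    for (auto& th : pool)
        th.join();
}
```

Be aware that on a 2 GB phone, holding several full-resolution warped images in flight at once may trade the time problem for a memory one, so two or three workers is probably the practical limit.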

This is an interesting and broad problem – if you can speed this up without giving up quality, you should call LG or Google, because the stitching on my Nexus is poor quality :) slow and inaccurate.
