Efficiently implementing a Java Native Interface webcam feed
I am developing a project that receives video input from a webcam and displays areas of motion to the user. My "beta" attempt at this project was to use the Java Media Framework (JMF) to read the webcam. Through some utility functions, JMF conveniently returns webcam frames as BufferedImages, and I have built a significant amount of the application around handling them. However, I soon realized that JMF is no longer well supported by Sun/Oracle, and some higher webcam resolutions (720p) are not accessible through the JMF interface.
I want to continue processing frames as BufferedImages, and use OpenCV (C++) to source the video. Using OpenCV's framework alone, I found that OpenCV efficiently returns HD webcam frames and draws them to the screen.
I figured it would be fairly simple to feed this data to Java and achieve the same efficiency. I have just finished writing a JNI DLL that copies this data into a BufferedImage and returns it to Java. However, I found that the data copying I am doing is really hindering performance. My goal is 30 FPS, but it takes roughly 100 milliseconds just to copy the data from the char array returned by OpenCV into a Java BufferedImage. Instead, I am seeing about 2-5 FPS.
When a frame capture returns, OpenCV provides a pointer to a 1D char array. This data needs to be handed to Java, and apparently I do not have the time to copy it.
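For context (my own illustration, not from the post), that 1D array holds interleaved B, G, R bytes with a row stride. The address arithmetic the JNI code further down uses (imageData + widthStep*y + x*nChannels) corresponds to:

```java
public class BgrLayout {
    // Sketch: flat index of channel c (0 = B, 1 = G, 2 = R) of pixel (x, y)
    // in an interleaved BGR buffer. widthStep is the row stride in bytes
    // (OpenCV's IplImage.widthStep), which may exceed width * nChannels
    // because of row padding.
    static int pixelIndex(int x, int y, int c, int widthStep, int nChannels) {
        return y * widthStep + x * nChannels + c;
    }
}
```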
I need a better solution for getting these frames into a BufferedImage. Here are the solutions I am considering, none of which I think are very good (I am fairly sure they would also perform poorly):
(1) Override BufferedImage and return pixel data from the various BufferedImage methods by making native calls into the DLL. Instead of copying the whole array at once, I would return individual pixels as requested by the calling code. Note that the calling code typically needs every pixel in the image in order to draw or process it, so the individual pixel-grab operations would end up inside a 2D for loop.
(2) Instruct the BufferedImage to use a java.nio.ByteBuffer to somehow directly access the data in the char array returned by OpenCV. Any tips on how to do this would be appreciated.
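For option (2), one common pattern (an assumption on my part, not from the original post) is to have the native side wrap the frame memory with JNI's NewDirectByteBuffer, then do a single bulk transfer on the Java side instead of per-pixel JNI calls. A sketch of the Java side, where a hypothetical native method would supply the direct ByteBuffer; here an allocateDirect buffer stands in for the native frame memory:

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.nio.ByteBuffer;

public class DirectBufferFrame {
    // Sketch: bulk-copy a direct ByteBuffer of interleaved BGR pixels into
    // a BufferedImage whose backing array uses the same B, G, R byte order.
    // Assumes tightly packed rows (widthStep == width * 3).
    static BufferedImage toImage(ByteBuffer frame, int width, int height) {
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
        byte[] dst = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
        frame.rewind();
        frame.get(dst); // one bulk copy instead of width*height JNI calls
        return img;
    }

    public static void main(String[] args) {
        // Stand-in for the native frame: one blue pixel (B=255, G=0, R=0).
        ByteBuffer frame = ByteBuffer.allocateDirect(3);
        frame.put(new byte[] { (byte) 0xFF, 0, 0 });
        BufferedImage img = toImage(frame, 1, 1);
        System.out.println(Integer.toHexString(img.getRGB(0, 0))); // ff0000ff (opaque blue)
    }
}
```

The key point is that the loop over pixels disappears entirely; the JNI boundary is crossed once per frame rather than once per pixel.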
(3) Do everything in C++ and forget Java. OK, yes, this sounds like the most logical solution, but I do not have the time to start this multi-month project from scratch.
As it stands, my JNI code has been written to return the BufferedImage, but at this point I am willing to accept the return of a 1D char array and then put it into a BufferedImage.
By the way, the question here is: what is the most efficient way to copy a 1D char array of image data into a BufferedImage?
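For what it is worth, the usual fast-path answer to this question (my own addition, not part of the original post) is to create the BufferedImage as TYPE_3BYTE_BGR, which matches OpenCV's byte order, and copy the whole frame with one System.arraycopy into the image's backing array rather than pixel by pixel:

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class FrameCopy {
    // Sketch: one bulk copy of an interleaved BGR byte array (as a JNI
    // method could return via SetByteArrayRegion) into a BufferedImage.
    // Assumes tightly packed rows (widthStep == width * 3).
    static BufferedImage toImage(byte[] bgr, int width, int height) {
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
        byte[] dst = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
        System.arraycopy(bgr, 0, dst, 0, dst.length);
        return img;
    }
}
```

Because TYPE_3BYTE_BGR stores pixels in the same B, G, R order OpenCV uses, no per-pixel channel swap is needed; if widthStep includes row padding, copy row by row instead.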
Provided below is the (inefficient) code that I use to copy from the OpenCV source image into a BufferedImage:
JNIEXPORT jobject JNICALL Java_graphicanalyzer_ImageFeedOpenCV_getFrame
  (JNIEnv *env, jobject jThis, jobject camera)
{
    // get the memory address of the CvCapture device, the value of which is encapsulated in the camera jobject
    jclass cameraClass = env->FindClass("graphicanalyzer/Camera");
    jfieldID fid = env->GetFieldID(cameraClass, "pCvCapture", "I");

    // get the address of the CvCapture device
    int a_pCvCapture = (int)env->GetIntField(camera, fid);

    // get a pointer to the CvCapture device
    CvCapture *capture = (CvCapture*)a_pCvCapture;

    // get a frame from the CvCapture device
    IplImage *frame = cvQueryFrame( capture );

    // get a handle on the BufferedImage class
    jclass bufferedImageClass = env->FindClass("java/awt/image/BufferedImage");
    if (bufferedImageClass == NULL)
    {
        return NULL;
    }

    // get a handle on the BufferedImage(int width, int height, int imageType) constructor
    jmethodID bufferedImageConstructor =
        env->GetMethodID(bufferedImageClass, "<init>", "(III)V");

    // get the field ID of BufferedImage.TYPE_INT_RGB
    jfieldID imageTypeFieldID =
        env->GetStaticFieldID(bufferedImageClass, "TYPE_INT_RGB", "I");

    // get the int value from the BufferedImage.TYPE_INT_RGB field
    jint imageTypeIntRGB =
        env->GetStaticIntField(bufferedImageClass, imageTypeFieldID);

    // create a new BufferedImage
    jobject ret = env->NewObject(bufferedImageClass, bufferedImageConstructor,
                                 (jint)frame->width, (jint)frame->height, imageTypeIntRGB);

    // get a handle on the method BufferedImage.getRaster()
    jmethodID getWritableRasterID =
        env->GetMethodID(bufferedImageClass, "getRaster", "()Ljava/awt/image/WritableRaster;");

    // call the BufferedImage.getRaster() method
    jobject writableRaster = env->CallObjectMethod(ret, getWritableRasterID);

    // get a handle on the WritableRaster class
    jclass writableRasterClass = env->FindClass("java/awt/image/WritableRaster");

    // get a handle on the WritableRaster.setPixel(int x, int y, int[] rgb) method
    jmethodID setPixelID =
        env->GetMethodID(writableRasterClass, "setPixel", "(II[I)V");

    // iterate through the frame we got above and set each pixel within the WritableRaster
    jintArray rgbArray = env->NewIntArray(3);
    jint rgb[3];
    char *px;
    for (jint x = 0; x < frame->width; x++)
    {
        for (jint y = 0; y < frame->height; y++)
        {
            px = frame->imageData + (frame->widthStep * y + x * frame->nChannels);
            rgb[0] = abs(px[2]); // OpenCV returns BGR bit order
            rgb[1] = abs(px[1]); // OpenCV returns BGR bit order
            rgb[2] = abs(px[0]); // OpenCV returns BGR bit order

            // copy the jint array into the jintArray
            env->SetIntArrayRegion(rgbArray, 0, 3, rgb);

            // call setPixel(); this is a copy operation
            env->CallVoidMethod(writableRaster, setPixelID, x, y, rgbArray);
        }
    }

    return ret; // return the BufferedImage
}
Solution
If you want to make your code really fast and still use Java, there is another option. The AWT windowing toolkit has a direct native interface you can use to draw to an AWT surface from C or C++. Thus, there would be no need to copy anything to Java, as you could render directly from the buffer in C or C++. I am not sure of the specifics of how to do this because I have not looked at it, but I know that it is included in the standard JRE distribution. Using this method, you could probably approach the FPS limit of the camera if you wished, rather than struggling to reach 30 FPS.
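Separately from the AWT native interface, if you still want BufferedImages on the Java side, note that a BufferedImage can also be constructed as a view over an existing byte array, so the only copy left is the single native-to-Java transfer. This is my own sketch, not part of the original answer:

```java
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.awt.image.ColorModel;
import java.awt.image.ComponentColorModel;
import java.awt.image.DataBuffer;
import java.awt.image.DataBufferByte;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class WrappedFrame {
    // Sketch: wrap an existing interleaved BGR byte array in a
    // BufferedImage with no pixel copying. widthStep is the row
    // stride in bytes, as reported by OpenCV.
    static BufferedImage wrap(byte[] bgr, int width, int height, int widthStep) {
        DataBufferByte buffer = new DataBufferByte(bgr, bgr.length);
        int[] bandOffsets = { 2, 1, 0 }; // R, G, B offsets within each BGR pixel
        WritableRaster raster = Raster.createInterleavedRaster(
                buffer, width, height, widthStep, 3, bandOffsets, null);
        ColorModel cm = new ComponentColorModel(
                ColorSpace.getInstance(ColorSpace.CS_sRGB),
                false, false, Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
        return new BufferedImage(cm, raster, false, null);
    }
}
```

One caveat: a BufferedImage built this way may not be a "managed" image, so drawing it can be slower than drawing a standard TYPE_3BYTE_BGR image even though construction is free.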
If you want to research this further, I would start here and @L_403_2@.
Happy programming!