Android face detection API – stored video files

I want to use the Android Vision FaceDetector API to perform face detection/tracking on stored video files (for example, an MP4 in the user's gallery). I can find many examples of using the CameraSource class to perform face tracking on the stream coming directly from the camera (for example, in the android-vision GitHub samples), but nothing for video files.

I tried to look at the source code of CameraSource through Android Studio, but it is obfuscated, and I can't find the original online. I imagine there are many similarities between using the camera and using a file. Perhaps I just need to play the video file on a Surface and feed that into the pipeline?

Alternatively, I can see that Frame.Builder has setImageData and setTimestampMillis methods. If I can read the video in as a ByteBuffer, how would I pass that to the FaceDetector API? I guess this question is similar, but it has no answer. Alternatively, I could decode the video into Bitmap frames and pass each one to setBitmap?
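The Bitmap route can be sketched roughly as follows. This is a minimal, untested outline, assuming frames are sampled at a fixed interval with MediaMetadataRetriever (the helper name VideoFaceScanner and the intervalMs parameter are my own, not from any library); it never touches the screen, which matches the "no rendering" requirement:

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.media.MediaMetadataRetriever;
import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;

public class VideoFaceScanner {
    // intervalMs: how far apart the sampled frames are (an assumption;
    // MediaMetadataRetriever cannot cheaply decode every single frame).
    public static void scan(Context context, String videoPath, long intervalMs) {
        FaceDetector detector = new FaceDetector.Builder(context)
                .setTrackingEnabled(true) // keep face ids stable across frames
                .build();
        MediaMetadataRetriever retriever = new MediaMetadataRetriever();
        try {
            retriever.setDataSource(videoPath);
            long durationMs = Long.parseLong(retriever.extractMetadata(
                    MediaMetadataRetriever.METADATA_KEY_DURATION));
            int frameId = 0;
            for (long t = 0; t < durationMs; t += intervalMs) {
                // getFrameAtTime takes MICROseconds, not milliseconds
                Bitmap bitmap = retriever.getFrameAtTime(
                        t * 1000, MediaMetadataRetriever.OPTION_CLOSEST);
                if (bitmap == null) continue;
                Frame frame = new Frame.Builder()
                        .setBitmap(bitmap)
                        .setId(frameId++)
                        .setTimestampMillis(t)
                        .build();
                SparseArray<Face> faces = detector.detect(frame);
                // ... use faces (positions, landmarks, tracking ids) here ...
            }
        } finally {
            retriever.release();
            detector.release();
        }
    }
}
```

For higher throughput you would decode with MediaCodec instead of MediaMetadataRetriever, but the Frame.Builder/detect() part stays the same.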

Ideally, I don't want to render the video to the screen, and processing should run as fast as the FaceDetector API is capable of.

Solution:

Simply call SparseArray<Face> faces = detector.detect(frame);. The detector must be created like this:

FaceDetector detector = new FaceDetector.Builder(context)
   .setProminentFaceOnly(true)
   .build();
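Putting the two pieces together, a minimal sketch of running one decoded frame through that detector might look like this (assuming bitmap holds a frame decoded from the video file and timestampMs is its position in the video; both names are placeholders):

```java
Frame frame = new Frame.Builder()
        .setBitmap(bitmap)               // frame decoded from the video file
        .setTimestampMillis(timestampMs) // the frame's position in the video
        .build();

SparseArray<Face> faces = detector.detect(frame);
for (int i = 0; i < faces.size(); i++) {
    Face face = faces.valueAt(i);
    // face.getPosition(), face.getWidth(), face.getHeight(), ...
}
```

Note that setProminentFaceOnly(true) restricts detection to a single large face; drop it if you need every face in the frame.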
