Java – Camera2 API equivalent of Camera.Parameters.getHorizontalViewAngle() and Camera.Parameters.getVerticalViewAngle()
It's mostly in the title: in the now-deprecated Android Camera API there are two methods, Camera.Parameters.getHorizontalViewAngle() and Camera.Parameters.getVerticalViewAngle().
In the current Camera2 API there seems to be no corresponding entry in the documentation. I assume this is because the field of view is more complex and subtle than simple horizontal and vertical values, but I can't find any information online on how to calculate the overall field of view of an Android device with the newer Camera2 API.
Solution:
The basic formulas are:
FOV.x = 2 * atan(SENSOR_INFO_PHYSICAL_SIZE.x / (2 * LENS_FOCAL_LENGTH))
FOV.y = 2 * atan(SENSOR_INFO_PHYSICAL_SIZE.y / (2 * LENS_FOCAL_LENGTH))
This is an approximation assuming an ideal lens and so on, but it is usually good enough.
It calculates the FOV of the entire sensor pixel array.
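For reference, a minimal Java sketch of this basic formula; the keys used are the real Camera2 keys, but the helper name and structure are mine, and `chars` is assumed to be a CameraCharacteristics obtained from CameraManager.getCameraCharacteristics():

import android.hardware.camera2.CameraCharacteristics;
import android.util.SizeF;

// Sketch: {horizontal, vertical} FOV in degrees for the whole pixel array,
// using the ideal-lens approximation above.
static double[] fullArrayFov(CameraCharacteristics chars) {
    SizeF physicalSize = chars.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE);            // mm
    float[] focalLengths = chars.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);  // mm
    float focalLength = focalLengths[0]; // on zoom lenses, prefer CaptureResult.LENS_FOCAL_LENGTH

    double fovX = 2 * Math.atan(physicalSize.getWidth() / (2 * focalLength));
    double fovY = 2 * Math.atan(physicalSize.getHeight() / (2 * focalLength));
    return new double[] { Math.toDegrees(fovX), Math.toDegrees(fovY) };
}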
However, the actual field of view for a given output will be smaller. First, the readout area of the sensor is usually smaller than the full pixel array, so instead of using PHYSICAL_SIZE directly, you first want to scale it by the ratio of the active array pixel count to the full pixel array pixel count (SENSOR_INFO_ACTIVE_ARRAY_SIZE / SENSOR_INFO_PIXEL_ARRAY_SIZE). A sketch of this scaling step follows below.
Then, the field of view depends on the aspect ratio of the output you configure (a 16:9 FOV will differ from a 4:3 FOV) relative to the aspect ratio of the active array, and on the aspect ratio of the crop region (digital zoom) if it is smaller than the full active array.
Each output buffer will be the result of minimally further cropping the cropRegion of the corresponding capture request to reach the right output aspect ratio (http://source.android.com/devices/camera/camera3_crop_reprocess.html has diagrams).
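As a sketch of the scaling step mentioned above (same assumed `chars` as before; the keys are the real Camera2 keys):

import android.graphics.Rect;
import android.util.Size;
import android.util.SizeF;

Rect activeArray = chars.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
Size pixelArray  = chars.get(CameraCharacteristics.SENSOR_INFO_PIXEL_ARRAY_SIZE);
SizeF physical   = chars.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE);

// Physical dimensions of the readout (active) area rather than the full pixel array.
float activePhysicalWidth  = physical.getWidth()  * activeArray.width()  / (float) pixelArray.getWidth();
float activePhysicalHeight = physical.getHeight() * activeArray.height() / (float) pixelArray.getHeight();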
So let's say we have a sensor with a pixel array of (120, 120), and an active array rectangle of (10, 10) - (110, 110), so a width/height of 100, 100.
We configure two outputs: output A is (40, 30) and output B is (50, 50). Let's leave the crop region at its maximum, (0, 0) - (100, 100).
The horizontal FOV of outputs A and B will be the same, since the maximum-size crop region results in both outputs using the full active array width:
output_physical_width = SENSOR_INFO_PHYSICAL_SIZE.x * ACTIVE_ARRAY.w / PIXEL_ARRAY.w
FOV_x = 2 * atan(output_physical_width / (2 * LENS_FOCAL_LENGTH))
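As a concrete check with hypothetical numbers (say SENSOR_INFO_PHYSICAL_SIZE.x = 6.0 mm and LENS_FOCAL_LENGTH = 4.0 mm, neither of which comes from the original example):

output_physical_width = 6.0 mm * 100 / 120 = 5.0 mm
FOV_x = 2 * atan(5.0 / (2 * 4.0)) ≈ 64 degrees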
The vertical FOVs will differ, though: output A only uses 3/4 of the vertical space due to the aspect ratio mismatch:
active_array_aspect = ACTIVE_ARRAY.w / ACTIVE_ARRAY.h
output_a_aspect = output_a.w / output_a.h
output_b_aspect = output_b.w / output_b.h
output_a_physical_height = SENSOR_INFO_PHYSICAL_SIZE.y * ACTIVE_ARRAY.h / PIXEL_ARRAY.h * active_array_aspect / output_a_aspect
output_b_physical_height = SENSOR_INFO_PHYSICAL_SIZE.y * ACTIVE_ARRAY.h / PIXEL_ARRAY.h * active_array_aspect / output_b_aspect
FOV_a_y = 2 * atan(output_a_physical_height / (2 * LENS_FOCAL_LENGTH))
FOV_b_y = 2 * atan(output_b_physical_height / (2 * LENS_FOCAL_LENGTH))
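Continuing the same hypothetical numbers (SENSOR_INFO_PHYSICAL_SIZE.y = 6.0 mm, LENS_FOCAL_LENGTH = 4.0 mm):

active_array_aspect = 100 / 100 = 1.0
output_a_aspect = 40 / 30 ≈ 1.33
output_b_aspect = 50 / 50 = 1.0
output_a_physical_height = 6.0 mm * 100 / 120 * 1.0 / 1.33 = 3.75 mm
output_b_physical_height = 6.0 mm * 100 / 120 * 1.0 / 1.0  = 5.0 mm
FOV_a_y = 2 * atan(3.75 / (2 * 4.0)) ≈ 50 degrees
FOV_b_y = 2 * atan(5.0 / (2 * 4.0)) ≈ 64 degrees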
The above works when the output aspect ratio is >= the active array aspect ratio (letterboxing); if that's not the case, the output horizontal dimension is reduced and the vertical dimension covers the entire active array (pillarboxing). The scale factor in the horizontal direction is then output_aspect / active_array_aspect. If you want to calculate the FOV of a zoomed-in view, substitute the crop region dimensions / aspect ratio for the active array dimensions / aspect ratio.
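Putting it all together, here is a hedged Java sketch; the method is my own construction, not part of the Camera2 API, and it assumes square pixels, a crop region expressed in active-array coordinates, and the ideal-lens approximation, while handling both the letterbox and pillarbox cases:

import android.graphics.Rect;
import android.hardware.camera2.CameraCharacteristics;
import android.util.Size;
import android.util.SizeF;

// Hypothetical helper: {horizontal, vertical} FOV in degrees for one configured output.
static double[] outputFov(CameraCharacteristics chars, Size output, Rect cropRegion) {
    SizeF physical    = chars.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE);
    Size pixelArray   = chars.get(CameraCharacteristics.SENSOR_INFO_PIXEL_ARRAY_SIZE);
    float focalLength = chars.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS)[0];

    // Physical size of the crop region (digital zoom); with no zoom this is the
    // physical size of the full active array.
    double cropPhysicalWidth  = physical.getWidth()  * cropRegion.width()  / (double) pixelArray.getWidth();
    double cropPhysicalHeight = physical.getHeight() * cropRegion.height() / (double) pixelArray.getHeight();

    double cropAspect   = cropRegion.width() / (double) cropRegion.height();
    double outputAspect = output.getWidth()  / (double) output.getHeight();

    // Minimal further crop to reach the output aspect ratio.
    double usedPhysicalWidth, usedPhysicalHeight;
    if (outputAspect >= cropAspect) {
        // Output is wider than the crop region: full width, reduced height (letterboxing).
        usedPhysicalWidth  = cropPhysicalWidth;
        usedPhysicalHeight = cropPhysicalHeight * cropAspect / outputAspect;
    } else {
        // Output is narrower: full height, reduced width (pillarboxing).
        usedPhysicalWidth  = cropPhysicalWidth * outputAspect / cropAspect;
        usedPhysicalHeight = cropPhysicalHeight;
    }

    double fovX = 2 * Math.atan(usedPhysicalWidth  / (2 * focalLength));
    double fovY = 2 * Math.atan(usedPhysicalHeight / (2 * focalLength));
    return new double[] { Math.toDegrees(fovX), Math.toDegrees(fovY) };
}

With no digital zoom, you would pass the full active array rectangle (CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE) as cropRegion.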