To suppress an unnatural movement of a moving object at a connection point when a plurality of pieces of video data are connected to each other. A frame acquisition unit 22 acquires pieces of frame data captured at the same time, from the pieces of video data input from cameras C1 and C2. A prohibited-region setting unit 24 sets a prohibited region that is not set as a calculation target when a seam is calculated, based on a position of an object detected in an overlap region between adjacent pieces of frame data and a movement direction of the object. A seam calculation unit 25 calculates a seam between pieces of frame data adjacent to each other, without setting a pixel included in the prohibited region as the calculation target of seam calculation. A connection-frame output unit 26 connects pieces of frame data in accordance with the seam and outputs connection frame data 15.
3. An image processing method by an image processing device for stitching a plurality of pieces of image data obtained by capturing portions of an image capturing region with an overlap, the image processing method comprising:
acquiring a plurality of pieces of frame data captured at the same time, from the plurality of pieces of image data;
detecting a target object in an overlap region in which pieces of the frame data adjacent to each other overlap each other;
setting, in the overlap region, a prohibited region through which a seam does not pass; and
outputting connection frame data by connecting the plurality of pieces of frame data to each other in accordance with the seam,
wherein setting the prohibited region comprises: (i) determining, in the overlap region, a first region that encompasses the detected target object, (ii) determining a second region which spans from a side of the first region to a side of the overlap region on an entrance direction side of the target object, and (iii) setting the prohibited region as including both of the first region and the second region.
5. A non-transitory computer readable storage medium having stored thereon an image processing program for causing one or more processors of an image processing device to perform an image processing method for stitching a plurality of pieces of image data obtained by capturing portions of an image capturing region with an overlap, the image processing method comprising:
acquiring a plurality of pieces of frame data captured at the same time, from the plurality of pieces of image data;
detecting a target object in an overlap region in which pieces of the frame data adjacent to each other overlap each other;
setting, in the overlap region, a prohibited region through which a seam does not pass; and
outputting connection frame data by connecting the plurality of pieces of frame data to each other in accordance with the seam,
wherein setting the prohibited region comprises: (i) determining, in the overlap region, a first region that encompasses the detected target object, (ii) determining a second region which spans from a side of the first region to a side of the overlap region on an entrance direction side of the target object, and (iii) setting the prohibited region as including both of the first region and the second region.
1. An image processing device that stitches a plurality of pieces of image data obtained by capturing portions of an image capturing region with an overlap, the image processing device comprising:
a frame acquisition unit, including one or more processors, configured to acquire a plurality of pieces of frame data captured at the same time, from the plurality of pieces of image data;
an object detection unit, including one or more processors, configured to detect a target object in an overlap region in which pieces of the frame data adjacent to each other overlap each other;
a prohibited-region setting unit, including one or more processors, configured to set, in the overlap region, a prohibited region through which a seam does not pass;
a seam calculation unit, including one or more processors, configured to calculate the seam not to pass through the prohibited region; and
a connection-frame output unit, including one or more processors, configured to connect the plurality of pieces of frame data to each other in accordance with the seam and output connection frame data obtained by the connection,
wherein the prohibited-region setting unit is configured to: (i) determine, in the overlap region, a first region that encompasses the detected target object, (ii) determine a second region which spans from a side of the first region to a side of the overlap region on an entrance direction side of the target object, and (iii) set the prohibited region as including both of the first region and the second region.
2. The image processing device according to claim 1,
wherein the prohibited-region setting unit is configured to set, as the prohibited region, a region from the position of the target object to a side of the overlap region on a traveling direction side of the target object after detected positions of the target object on the frame data in the overlap region are substantially the same.
4. The image processing method according to claim 3,
wherein, in setting the prohibited region, a region from the position of the target object to a side of the overlap region on a traveling direction side of the target object is set as the prohibited region after detected positions of the target object on the frame data in the overlap region are substantially the same.
6. The non-transitory computer readable storage medium according to claim 5,
wherein, in setting the prohibited region, a region from the position of the target object to a side of the overlap region on a traveling direction side of the target object is set as the prohibited region after detected positions of the target object on the frame data in the overlap region are substantially the same.
This application is a National Stage application under 35 U.S.C. § 371 of International Application No. PCT/JP2020/015299, having an International Filing Date of Apr. 3, 2020, which claims priority to Japanese Application Serial No. 2019-079285, filed on Apr. 18, 2019. The disclosure of the prior application is considered part of the disclosure of this application, and is incorporated in its entirety into this application.
The present invention relates to a technique for stitching a plurality of images captured such that portions of an image capturing region overlap each other.
In Non Patent Literature 1, a surround image showing the entire state of a field game having a wide competition space is composited in such a manner that images captured by a plurality of cameras are stitched to each other in real time in horizontal and vertical directions. When images adjacent to each other are stitched to each other, a seam is set in an overlap region in which the adjacent images overlap each other. A composite image is obtained by cutting out and connecting adjacent images at the seam.
In Patent Literature 1, a prohibited region through which the seam does not pass is set in order to make the seam unnoticeable when a plurality of images are stitched to each other. For example, by setting a prohibited region for a moving object such as a person, it is possible to avoid a defect or an elongation of the moving object that occurs when the seam overlaps the moving object.
With movement of the moving object, the seam may change greatly to avoid the prohibited region set for the moving object. In this case, the moving object unfortunately appears to move unnaturally. For example, suppose that the seam is set on the right side of a moving object in one frame but on its left side in the next frame. Then the composite image uses the moving object captured in the left image in the first frame and the moving object captured in the right image in the next frame. Because of the parallax between the camera that captures the left image and the camera that captures the right image, the positions of the moving object in the left and right images are shifted from each other in the overlap region, so the moving object appears to move suddenly, to move in a direction opposite to its traveling direction, or to stop.
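The magnitude of this artifact can be illustrated with toy numbers (all values below are hypothetical and serve only to illustrate the parallax effect described above; Python is used purely for illustration):

```python
def apparent_position(true_x, parallax_shift, seam_x):
    """Position at which the composite image shows the object.

    The left camera sees the object at true_x; due to parallax the
    right camera sees it at true_x + parallax_shift. The composite
    uses the left image for pixels left of the seam and the right
    image otherwise.
    """
    if true_x < seam_x:
        return true_x                  # left image used
    return true_x + parallax_shift     # right image used
```

With a parallax shift of -8 pixels, an object that advances from x = 100 to x = 110 while the seam flips from its right side (x = 120) to its left side (x = 105) is shown at 100 and then at 102: it appears almost stopped even though it actually advanced 10 pixels.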
The present invention has been made in view of the above circumstances, and an object of the present invention is to suppress an unnatural movement of a moving object at a connection point when a plurality of pieces of image data are connected to each other.
According to an aspect of the present invention, there is provided an image processing device that stitches a plurality of pieces of image data obtained by capturing portions of an image capturing region with an overlap. The image processing device includes: a frame acquisition unit configured to acquire a plurality of pieces of frame data captured at the same time, from the plurality of pieces of image data; a prohibited-region setting unit configured to set a prohibited region through which a seam does not pass in an overlap region in which pieces of the plurality of pieces of frame data adjacent to each other overlap each other; a seam calculation unit configured to calculate the seam not to pass through the prohibited region; and a connection-frame output unit configured to connect the plurality of pieces of frame data to each other in accordance with the seam and output connection frame data obtained by the connection. The prohibited-region setting unit sets the prohibited region based on a position and a movement direction of an object when the object detected from the plurality of pieces of frame data enters into the overlap region.
According to an aspect of the present invention, there is provided an image processing method by an image processing device that stitches a plurality of pieces of image data obtained by capturing portions of an image capturing region with an overlap. The image processing method includes: acquiring a plurality of pieces of frame data captured at the same time, from the plurality of pieces of image data; setting a prohibited region through which a seam does not pass in an overlap region in which pieces of the plurality of pieces of frame data adjacent to each other overlap each other; calculating the seam not to pass through the prohibited region; and outputting connection frame data by connecting the plurality of pieces of frame data to each other in accordance with the seam. In setting the prohibited region, the prohibited region is set based on a position and a movement direction of an object when the object detected from the plurality of pieces of frame data enters into the overlap region.
According to an aspect of the present invention, there is provided an image processing program causing a computer to operate as the units of the image processing device.
According to the present invention, it is possible to suppress an unnatural movement of a moving object at a connection point when a plurality of pieces of image data are connected to each other.
Hereinafter, an embodiment of the present invention will be described with reference to the drawings. In the description of the drawings below, the same or similar components are designated by the same or similar reference signs.
A configuration of a wide viewing-angle remote monitoring system using video processing according to the present invention will be described with reference to
The wide viewing-angle remote monitoring system in
The composition processing server 100 includes a composition processing unit 110, an encoding processing unit 120, and an object detection and tracking processing unit 130. The composition processing server 100 receives a video and audio from each of a plurality of imaging systems (for example, a 4 K camera) as inputs, composites a panoramic video by stitching videos, and performs detection and tracking of a target object from each video.
The composition processing unit 110 composites a panoramic video by stitching the plurality of input videos in real time. The composition processing unit 110 dynamically changes a seam that stitches the images. When obtaining the seam, the composition processing unit 110 uses video processing according to the present invention to improve the quality of the composition. The video processing uses tracking results of a moving object. Details of the video processing according to the present invention will be described later.
The encoding processing unit 120 encodes the panoramic video composited by the composition processing unit 110 and the audio data, converts them into MMTP streams, and transmits the MMTP streams to the decoding server 300.
The object detection and tracking processing unit 130 detects and tracks a target object from each image. The object detection and tracking processing unit 130 transmits the result (tracking information) obtained by performing processing on each image, to the object information integration server 200 and the composition processing unit 110.
The object information integration server 200 converts, for the tracking information on each video input from the object detection and tracking processing unit 130, the coordinates of the object on each video into coordinates on the panoramic video. The object information integration server 200 integrates the tracking information of objects that appear in the overlap region in which videos overlap each other when the objects are estimated to be the same. The object information integration server 200 converts object information, in which additional information is added to the tracking information, into MMTP packets and then transmits the MMTP packets to the integrated object information receiving server 400. The additional information may be acquired by querying an external server.
The decoding server 300 decodes the MMTP stream received from the composition processing server 100 and outputs a panoramic video and audio.
The integrated object information receiving server 400 receives the MMTP packets of the object information from the object information integration server 200, and outputs the object information.
In a display system (for example, a panoramic screen), the object information output from the integrated object information receiving server 400 is superimposed on the panoramic video output from the decoding server 300, and the superimposed result is displayed.
(Configuration of Video Processing Device)
The configuration of the video processing device 1 according to the embodiment will be described with reference to
The video processing device 1 illustrated in
The video processing device 1 illustrated in
The storage device 10 is, for example, a read only memory (ROM), a random access memory (RAM), or a hard disk, and stores various kinds of data, such as input data, output data, and intermediate data, required for the processing device 20 to execute processing.
The processing device 20 is a central processing unit (CPU) or a graphics processing unit (GPU). The processing device 20 performs the processing of the video processing device 1 by reading and writing data stored in the storage device 10 and by inputting and outputting data to and from the input/output interface 30.
The input/output interface 30 receives an input from an input device I such as a keyboard and a mouse and the plurality of cameras C1 and C2, and inputs the input data to the processing device 20. The input/output interface 30 further outputs the processing results from the processing device 20 to the display device D such as a display. When transmitting the connection frame data of the processing result, the input/output interface 30 may transmit the encoded connection frame data via a network. The input device I, the plurality of cameras C1 and C2, and the display device D may be connected to the video processing device 1 via a communication interface and a communication network. Instead of the plurality of cameras C1 and C2, a recorder or a storage device that stores a plurality of pieces of captured data captured in advance may be connected to the video processing device 1, and the video processing device 1 may process the plurality of pieces of captured data captured in advance.
The storage device 10 stores setting data 11, frame information data 12, prohibited region data 13, seam data 14, and connection frame data 15. The storage device 10 may store the video processing program executed by the computer.
The setting data 11 refers to setting information such as parameters required for processing of the video processing device 1. The setting data 11 includes, for example, the number of pieces of video data to be input to the video processing device 1, the sequence of the video data, and a parameter used to calculate the prohibited region data 13.
The frame information data 12 refers to information of each piece of frame data captured at the same time in the plurality of pieces of video data output by the plurality of cameras C1 and C2. The frame information data 12 refers to data in which information such as an identifier of the camera, a pixel value, a frame rate, and a luminance value is associated with the frame data.
The prohibited region data 13 refers to data indicating a prohibited region that is not set as a calculation target during the seam calculation. The prohibited region is set by the prohibited-region setting unit 24 described later.
The seam data 14 refers to data of a result obtained in such a manner that the seam calculation unit 25 described later calculates the seam of the frame data.
The connection frame data 15 refers to connected frame data obtained by combining the plurality of pieces of frame data captured at the same time in accordance with the seam data 14. The connection frame data 15 is one piece of frame data forming the video data output by the video processing device 1.
The processing device 20 includes a setting acquisition unit 21, a frame acquisition unit 22, an object detection unit 23, a prohibited-region setting unit 24, a seam calculation unit 25, and a connection-frame output unit 26.
The setting acquisition unit 21 acquires parameters required for the processing of the video processing device 1 and stores the parameters in the setting data 11. The setting acquisition unit 21 acquires the parameters in accordance with the information input from the input device I by the user. The setting acquisition unit 21 may acquire the parameters by analyzing each piece of video data or the like input from the cameras C1 and C2.
The frame acquisition unit 22 acquires pieces of frame data captured at the same time, from the pieces of video data input from cameras C1 and C2. The frame acquisition unit 22 generates and stores frame information data 12 for each piece of acquired frame data. The frame acquisition unit 22 synchronizes pieces of video data when receiving an input of pieces of video data from the cameras C1 and C2, and compares timestamps of pieces of frame data. The frame acquisition unit 22 may perform correction processing and color correction processing in order to reduce an influence of parallax caused by image capturing by the plurality of cameras C1 and C2.
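The timestamp comparison described above can be sketched as follows (a minimal Python illustration; the function name, the `(timestamp, frame)` stream format, and the tolerance parameter are assumptions of this sketch, not taken from the disclosure):

```python
def pair_synchronized_frames(stream1, stream2, tolerance=0.001):
    """Pair frames from two camera streams by timestamp.

    Each stream is a list of (timestamp, frame) tuples sorted by
    time. Frames whose timestamps differ by more than `tolerance`
    seconds are skipped rather than paired.
    """
    pairs, i, j = [], 0, 0
    while i < len(stream1) and j < len(stream2):
        t1, f1 = stream1[i]
        t2, f2 = stream2[j]
        if abs(t1 - t2) <= tolerance:
            pairs.append((f1, f2))     # captured at the same time
            i += 1
            j += 1
        elif t1 < t2:
            i += 1                     # no counterpart; skip
        else:
            j += 1
    return pairs
```

Frames without a counterpart within the tolerance are dropped, which keeps the two streams aligned even when one camera drops a frame.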
The object detection unit 23 corresponds to the object detection and tracking processing unit 130 in
The prohibited-region setting unit 24 sets a prohibited region based on the position of the object detected in an overlap region between adjacent pieces of frame data and the movement direction of the object. Similar to Patent Literature 1, the prohibited-region setting unit 24 may set the prohibited region based on the luminance value of the frame data. Details of a method of setting the prohibited region based on the detected object will be described later.
The seam calculation unit 25 acquires a plurality of pieces of frame data captured at the same time, and calculates the seam data 14 that indicates the seam between pieces of frame data adjacent to each other, in a state where a pixel included in the prohibited region indicated by the prohibited region data 13 is not set as the calculation target during the seam calculation. More specifically, the seam calculation unit 25 calculates a pixel having the smallest difference in feature value from a pixel of the seam calculated immediately before, among pixels in a search range for the adjacent line, as the seam in this line. At this time, the seam calculation unit 25 does not include the pixel included in the prohibited region, in the search range. The seam calculation unit 25 may calculate the seam by using reduced frame data obtained by reducing frame data. In this case, the seam calculation unit 25 reduces and applies the prohibited region.
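The per-line search can be sketched as follows (an illustrative Python sketch; the `diff` map of per-pixel feature differences between the overlapping frames, the search radius, and all names are assumptions):

```python
def calculate_seam(diff, prohibited, search_radius=1):
    """Trace a seam down an overlap region, row by row.

    diff:       2-D list of per-pixel feature differences between
                the two overlapping frames (lower = better cut).
    prohibited: boolean mask of the same shape; True pixels are
                excluded from the search range.
    Returns one column index per row.
    """
    h, w = len(diff), len(diff[0])
    # Start from the best non-prohibited pixel of the first row.
    start = min((c for c in range(w) if not prohibited[0][c]),
                key=lambda c: diff[0][c])
    seam = [start]
    for y in range(1, h):
        prev = seam[-1]
        best_x, best_cost = None, float('inf')
        # Search only columns adjacent to the previous seam pixel.
        for x in range(max(0, prev - search_radius),
                       min(w, prev + search_radius + 1)):
            if prohibited[y][x]:
                continue  # prohibited pixels are not calculation targets
            if diff[y][x] < best_cost:
                best_x, best_cost = x, diff[y][x]
        # Fallback when every candidate is prohibited.
        seam.append(best_x if best_x is not None else prev)
    return seam
```

The fallback for the case where every candidate in the search range is prohibited is not specified in the text, so keeping the previous column there is purely an assumption of this sketch.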
The connection-frame output unit 26 connects a plurality of pieces of frame data in accordance with the seam data 14 calculated by the seam calculation unit 25, and outputs the connection frame data 15 in which pieces of frame data do not overlap each other.
(Operation of Video Processing Device)
Next, a video processing method by the video processing device 1 will be described with reference to
In Step S1, the frame acquisition unit 22 acquires pieces of frame data captured at the same time point, from the video data input from each of the cameras C1 and C2.
In Step S2, the frame acquisition unit 22 associates feature points of the frame data acquired in Step S1 to generate overlap frame data.
In Step S3, the frame acquisition unit 22 sets the overlap region in the overlap frame data generated in Step S2. In
In Step S4, the prohibited-region setting unit 24 sets a prohibited region based on an object detected from the frame data F1 and F2.
In Step S5, the seam calculation unit 25 calculates the seam by sequentially specifying the pixels forming the seam in a direction perpendicular to the direction in which the pieces of frame data adjoin, without setting the pixels included in the prohibited region as calculation targets during the seam calculation. For example, in
In Step S6, the connection-frame output unit 26 connects pieces of frame data to each other in accordance with the seam to generate the connection frame data 15.
In Step S7, the connection-frame output unit 26 outputs the connection frame data 15 generated in Step S6. The connection frame data 15 refers to frame data of video data output by the video processing device 1.
(Setting of Prohibited Region)
Next, setting processing of the prohibited region will be described with reference to
In Step S41, the object detection unit 23 detects and tracks a target object for each piece of frame data. The prohibited-region setting unit 24 acquires position information and the movement direction of the object detected in the overlap region, on the frame data. The process of Step S41 may be performed before the prohibited region is set. For example, the frame data is acquired from each piece of the video data in Step S1, and then the process of Step S41 may be performed. Another apparatus may perform object detection and tracking, and the video processing device 1 may receive the detection result of the object.
The object detection unit 23 detects the same object O in the frame data F1 and F2 in the overlap region R. Due to the parallax between the cameras C1 and C2, the detected position of the object O in the frame data F1 may be shifted from its detected position in the frame data F2. Because the object O moves from the frame data F1 toward the frame data F2, the object detection unit 23 may use the detected position of the object O in the frame data F1 as the position of the object O. Alternatively, the detected positions of the object in the frame data F1 and F2 may be converted into coordinates on the connection frame data, and the converted coordinates may be used as the position of the object O. In this case, even if the detected positions of the object O in the frame data F1 and F2 are slightly shifted from each other, the detected positions are set to the same coordinates as long as the detections are determined to be the same object.
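The choice of a single position for an object detected in both pieces of frame data can be sketched as follows (a hypothetical helper; it assumes F1 is anchored at the connection frame's origin and that `offset_f2`, F2's origin in the connection frame, is known from registration):

```python
def unify_object_position(pos_f1, pos_f2, offset_f2):
    """Pick one connection-frame position for an object detected in
    two adjacent pieces of frame data F1 and F2.

    pos_f1, pos_f2: (x, y) detections in each frame's own
    coordinates, or None if the object was not detected there.
    """
    # The object moves from F1 toward F2, so F1's detection is
    # preferred; parallax shifts in F2's detection are then ignored.
    if pos_f1 is not None:
        return pos_f1
    # Otherwise shift F2's detection into connection-frame coordinates.
    x, y = pos_f2
    return (x + offset_f2[0], y + offset_f2[1])
```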
In Step S42, the prohibited-region setting unit 24 sets a predetermined region including the object detected in the overlap region as the prohibited region, so that the seam does not cross the object.
In Step S43, the prohibited-region setting unit 24 additionally sets, as the prohibited region, a region between the prohibited region set in Step S42 and the side of the overlap region, based on the parameter of the setting data 11 and the movement direction of the object. In the example of
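Steps S42 and S43 together can be sketched with axis-aligned rectangles (a simplification; the function name, the `(x0, y0, x1, y1)` rectangle convention, the margin parameter, and the 'left'/'right' entrance sides are all assumptions of this sketch):

```python
def build_prohibited_region(obj_box, overlap_box, entrance_side, margin=0):
    """Build the prohibited rectangle for one detected object.

    obj_box, overlap_box: (x0, y0, x1, y1) rectangles.
    entrance_side: 'left' or 'right' -- the side of the overlap
    region from which the object entered (its entrance-direction side).
    """
    ox0, oy0, ox1, oy1 = obj_box
    rx0, ry0, rx1, ry1 = overlap_box
    # First region (Step S42): the object's box plus a margin,
    # clipped to the overlap region, so the seam cannot cross it.
    fx0, fx1 = max(rx0, ox0 - margin), min(rx1, ox1 + margin)
    fy0, fy1 = max(ry0, oy0 - margin), min(ry1, oy1 + margin)
    # Second region (Step S43): extend to the overlap-region side
    # on the entrance-direction side of the object.
    if entrance_side == 'left':
        fx0 = rx0
    else:
        fx1 = rx1
    return (fx0, fy0, fx1, fy1)
```

For an object that entered from the left, the returned rectangle spans from the left side of the overlap region to the right edge of the first region, covering both the first and second regions.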
The prohibited-region setting unit 24 may further perform processing of setting the prohibited region in a portion where the luminance value of the frame data is higher than a predetermined threshold value. The threshold value and the like can be adjusted by the setting data 11. In some settings, the prohibited region may not be set based on the luminance value.
As illustrated in
With the processing of setting the prohibited region described above, as illustrated in
If a plurality of objects enter the overlap region and the objects move in the same direction, the union of their regions may be set as the prohibited region. For example, objects that move in the same direction are handled collectively and set as a single prohibited region. When the movement directions of the objects vary, the prohibited region may be set based on the movement direction of each object. Alternatively, the prohibited region may be set by prioritizing certain objects.
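The collective handling by movement direction can be sketched as follows (hypothetical names; rectangles are `(x0, y0, x1, y1)` and directions are plain strings in this sketch):

```python
def merge_prohibited_regions(objects):
    """Merge per-object prohibited rectangles by movement direction.

    objects: list of (direction, (x0, y0, x1, y1)) pairs. Objects
    moving in the same direction are covered collectively by one
    bounding rectangle; each direction yields its own region.
    """
    by_dir = {}
    for direction, (x0, y0, x1, y1) in objects:
        if direction not in by_dir:
            by_dir[direction] = [x0, y0, x1, y1]
        else:
            box = by_dir[direction]
            # Grow the bounding rectangle to include this object.
            box[0], box[1] = min(box[0], x0), min(box[1], y0)
            box[2], box[3] = max(box[2], x1), max(box[3], y1)
    return {d: tuple(b) for d, b in by_dir.items()}
```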
As described above, according to the embodiment, the frame acquisition unit 22 acquires frame data captured at the same time, from video data input from the cameras C1 and C2. The prohibited-region setting unit 24 sets the prohibited region that is not set as the calculation target during the seam calculation, based on the position of the object detected in the overlap region of pieces of frame data adjacent to each other, and the movement direction of the object. The seam calculation unit 25 calculates the seam of pieces of frame data adjacent to each other without setting the pixel included in the prohibited region as the calculation target of the seam calculation. Then, the connection-frame output unit 26 connects frame data in accordance with the seam and outputs the connection frame data 15. In this manner, it is possible to suppress the rapid change of the seam and to suppress the unnatural movement of the object.
Ono, Masato, Matsubara, Yasushi, Minami, Kenichi, Hoshide, Takahide, Fukatsu, Shinji
Patent | Priority | Assignee | Title |
10917565 | Mar 08, 2019 | GOPRO, INC | Image capture device with a spherical capture mode and a non-spherical capture mode
11341614 | Sep 24, 2019 | Ambarella International LP | Emirror adaptable stitching
US 20160307350
US 20170140791
US 20170178372
US 20180095533
US 20190082103
JP 2017102785
JP 2017103672
JP 2019036906
Executed on | Assignor | Assignee | Conveyance | Reel/Frame
Apr 03, 2020 | | Nippon Telegraph and Telephone Corporation | (assignment on the face of the patent) |
Feb 24, 2021 | ONO, MASATO | Nippon Telegraph and Telephone Corporation | Assignment of assignors interest (see document for details) | 057634/0453
Feb 26, 2021 | FUKATSU, SHINJI | Nippon Telegraph and Telephone Corporation | Assignment of assignors interest (see document for details) | 057634/0453
Feb 26, 2021 | MINAMI, KENICHI | Nippon Telegraph and Telephone Corporation | Assignment of assignors interest (see document for details) | 057634/0453
Mar 03, 2021 | MATSUBARA, YASUSHI | Nippon Telegraph and Telephone Corporation | Assignment of assignors interest (see document for details) | 057634/0453
Mar 05, 2021 | HOSHIDE, TAKAHIDE | Nippon Telegraph and Telephone Corporation | Assignment of assignors interest (see document for details) | 057634/0453