An image analyzing method is applied to a camera and the camera is configured to monitor a monitored area. The image analyzing method includes steps of driving the camera to monitor the monitored area; sampling a plurality of field of views when the camera monitors the monitored area, so as to obtain a plurality of time information and a plurality of position information corresponding to the field of views, wherein each of the time information is corresponding to one of the position information; recording the time information and the position information; and generating a monitoring strength distribution corresponding to the monitored area according to the time information and the position information.
1. An image analyzing method applied to a camera, the camera being configured to monitor a monitored area, the image analyzing method comprising steps of:
driving the camera to monitor the monitored area;
sampling a plurality of field of views of the camera with a first time interval when the camera monitors the monitored area, so as to obtain a plurality of time information and a plurality of position information corresponding to the field of views, wherein each of the time information is corresponding to one of the position information;
sampling a plurality of current field of views of the camera within a second time interval according to the first time interval, wherein the first time interval is shorter than the second time interval;
merging a plurality of time information and a plurality of position information of the current field of views;
recording the time information and the position information; and
generating a monitoring strength distribution corresponding to the monitored area according to the time information and the position information.
9. A camera configured to monitor a monitored area, the camera comprising:
an image capturing module; and
a processor electrically connected to the image capturing module, the processor driving the image capturing module to monitor the monitored area, the processor sampling a plurality of field of views of the camera with a first time interval when the image capturing module monitors the monitored area, so as to obtain a plurality of time information and a plurality of position information corresponding to the field of views, wherein each of the time information is corresponding to one of the position information, the processor sampling a plurality of current field of views of the camera within a second time interval according to the first time interval and merging a plurality of time information and a plurality of position information of the current field of views, the first time interval being shorter than the second time interval, the processor recording the time information and the position information, the processor generating a monitoring strength distribution corresponding to the monitored area according to the time information and the position information.
2. The image analyzing method of claim 1, further comprising a step of:
sampling a current field of view of the camera when the current field of view is motionless.
3. The image analyzing method of claim 1, further comprising a step of:
recording at least one of a sampling manner, an operating mode and an operating person corresponding to each of the field of view.
4. The image analyzing method of
5. The image analyzing method of
6. The image analyzing method of claim 1, further comprising a step of:
generating an approximate circle corresponding to each of the field of view;
wherein each of the position information comprises a viewing center coordinate and a diameter or a radius of the approximate circle.
7. The image analyzing method of claim 1, wherein the step of generating the monitoring strength distribution corresponding to the monitored area comprises steps of:
accumulating a stay time of each of the position information; and
generating the monitoring strength distribution corresponding to the monitored area according to the stay time of each of the position information;
wherein the stay time represents a total time accumulated for each of the position information appearing in the field of view.
8. The image analyzing method of claim 1, wherein the step of generating the monitoring strength distribution corresponding to the monitored area comprises steps of:
accumulating a number of recording times of each of the position information; and
generating the monitoring strength distribution corresponding to the monitored area according to the number of recording times of each of the position information.
10. The camera of
11. The camera of
12. The camera of
13. The camera of
14. The camera of
15. The camera of
16. The camera of
The invention relates to an image analyzing method and a camera and, more particularly, to an image analyzing method and a camera capable of generating a monitoring strength distribution corresponding to a monitored area.
As safety awareness rises, people pay increasing attention to surveillance applications. In many public and non-public places, one or more cameras are installed for safety surveillance. In general, a camera is configured to monitor a monitored area. A user can set a monitored path or select a plurality of regions of interest (ROI) in the monitored area, so as to monitor the monitored path or the regions of interest. When the monitored area is monitored in this manner, a dead spot may exist due to the specific operation of the user, such that safety cannot be ensured at the dead spot.
An objective of the invention is to provide an image analyzing method and a camera capable of generating a monitoring strength distribution corresponding to a monitored area, so as to solve the aforesaid problems.
According to an embodiment of the invention, an image analyzing method is applied to a camera and the camera is configured to monitor a monitored area. The image analyzing method comprises steps of driving the camera to monitor the monitored area; sampling a plurality of field of views when the camera monitors the monitored area, so as to obtain a plurality of time information and a plurality of position information corresponding to the field of views, wherein each of the time information is corresponding to one of the position information; recording the time information and the position information; and generating a monitoring strength distribution corresponding to the monitored area according to the time information and the position information.
According to another embodiment of the invention, a camera is configured to monitor a monitored area. The camera comprises an image capturing module and a processor, wherein the processor is electrically connected to the image capturing module. The processor drives the image capturing module to monitor the monitored area. The processor samples a plurality of field of views when the image capturing module monitors the monitored area, so as to obtain a plurality of time information and a plurality of position information corresponding to the field of views, wherein each of the time information is corresponding to one of the position information. The processor records the time information and the position information. The processor generates a monitoring strength distribution corresponding to the monitored area according to the time information and the position information.
As mentioned in the above, when driving the image capturing module of the camera to monitor the monitored area, the invention samples a plurality of field of views to obtain a plurality of time information and a plurality of position information. Then, the invention generates the monitoring strength distribution corresponding to the monitored area according to the time information and the position information. A user can obtain information including coverage, hot zone, dead spot, and so on related to the monitored area through the monitoring strength distribution corresponding to the monitored area, so as to know well the monitored condition of the monitored area.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
Referring to the accompanying figures, the camera 1 comprises an image capturing module 10 and a processor electrically connected to the image capturing module 10, and the camera 1 is configured to monitor a monitored area 3. As shown in the figures, the image analyzing method of the invention is applied to the camera 1 and comprises the steps described below.
First of all, the method of the invention drives the image capturing module 10 of the camera 1 to monitor the monitored area 3 (step S10). The method then samples a plurality of field of views of the camera 1 while the camera 1 monitors the monitored area 3, so as to obtain a plurality of time information and a plurality of position information corresponding to the field of views (step S12), records the time information and the position information (step S14), and generates a monitoring strength distribution corresponding to the monitored area 3 according to the time information and the position information.
In step S10, the invention may drive the camera 1 to monitor the monitored area 3 according to different operating modes, wherein the operating mode may be a patrol mode, an auto-tracking mode, a normal control mode, and so on.
In step S12, the sampling manner may be "polling sample" or "stable sample". Polling sample means that the method of the invention samples the field of views of the camera 1 with a first time interval, i.e. the method samples a current field of view of the camera 1 once every first time interval (e.g. every 0.5 second). Stable sample means that the method of the invention samples a current field of view of the camera 1 when the current field of view is motionless.
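For illustration, the two sampling manners can be sketched as a simple polling loop. The Python sketch below is not part of the patent: `get_current_fov` is a hypothetical stand-in for querying the camera's pan-tilt-zoom state, and 0.5 second is the example first time interval mentioned above.

```python
import time

FIRST_TIME_INTERVAL = 0.5  # example first time interval from the text, in seconds

def get_current_fov():
    """Hypothetical helper: query the camera for its current field of view."""
    return (0, 0, 0, 0)  # dummy placeholder value

def sample_loop(manner, record):
    """Poll the camera every FIRST_TIME_INTERVAL seconds.

    "polling": record every sampled field of view.
    "stable":  record only when the field of view is motionless, here
               approximated as two identical consecutive samples; a real
               implementation would debounce so each stop is recorded once.
    """
    prev = None
    while True:
        fov = get_current_fov()
        if manner == "polling" or (manner == "stable" and fov == prev):
            record(time.time(), fov)
        prev = fov
        time.sleep(FIRST_TIME_INTERVAL)
```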
In step S14, in addition to the time information and the position information, the method of the invention may further selectively record at least one of a sampling manner (e.g. polling sample or stable sample), an operating mode (e.g. patrol mode, auto-tracking mode or normal control mode) and an operating person (e.g. an identification code of the operating person) corresponding to each field of view. The time information and the position information may be designed in various formats according to practical applications; exemplary embodiments are depicted in detail in the following.
In embodiment 1, the time information may be a time stamp and the position information may be a grid coordinate. As shown in table 1 below, the monitored area 3 may be defined as 5×5 grids and a sampled field of view may comprise nine grids with corresponding grid coordinates (0,1), (1,1), (2,1), (0,2), (1,2), (2,2), (0,3), (1,3), (2,3), wherein the numeral “0” following the grid coordinate represents that the grid is not located within the field of view and the numeral “1” following the grid coordinate represents that the grid is located within the field of view. Accordingly, step S14 may record the information shown in table 2 below.
TABLE 1
(0, 0):0 | (1, 0):0 | (2, 0):0 | (3, 0):0 | (4, 0):0
(0, 1):1 | (1, 1):1 | (2, 1):1 | (3, 1):0 | (4, 1):0
(0, 2):1 | (1, 2):1 | (2, 2):1 | (3, 2):0 | (4, 2):0
(0, 3):1 | (1, 3):1 | (2, 3):1 | (3, 3):0 | (4, 3):0
(0, 4):0 | (1, 4):0 | (2, 4):0 | (3, 4):0 | (4, 4):0
TABLE 2
Time stamp | Grid coordinate | Sampling manner | Operating mode | Operating person
Oct. 25, 2016 13:15:20 | 0000011100111001110000000 | 0 (Polling sample) | 0 (Normal control mode) | ID0
. . . | . . . | . . . | . . . | . . .
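As a sketch of embodiment 1's format, the following snippet encodes a rectangular field of view into the row-by-row bit string of table 2; the rectangle representation (x_min, y_min, x_max, y_max) in grid coordinates is an assumption for illustration.

```python
GRID_W, GRID_H = 5, 5  # the monitored area defined as 5x5 grids

def encode_grid(x_min, y_min, x_max, y_max):
    """Encode a field of view as a bit string, scanning grids row by row:
    '1' if the grid lies within the field of view, '0' otherwise."""
    return "".join(
        "1" if x_min <= x <= x_max and y_min <= y <= y_max else "0"
        for y in range(GRID_H)
        for x in range(GRID_W)
    )

# The nine-grid field of view of table 1 covers columns 0-2 of rows 1-3,
# which yields exactly the grid coordinate string recorded in table 2.
assert encode_grid(0, 1, 2, 3) == "0000011100111001110000000"
```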
In embodiment 2, the time information may be a time stamp and the position information may comprise at least two viewing coordinates. Accordingly, step S14 may record the information shown in table 3 below. Since a video is usually displayed on a rectangular screen, the range of the field of view can be obtained by recording an upper left viewing coordinate and a lower right viewing coordinate. If the field of view is displayed on a non-rectangular screen, the invention may record a plurality of apex coordinates, or record a center coordinate and a radius, according to the geometrical pattern of the non-rectangular screen, so as to define the range of the field of view. Furthermore, the viewing coordinates may be screen pixel coordinates or viewing angle coordinates. When recording screen pixel coordinates, the user must be informed of the pixel size of the base map corresponding to the screen pixel coordinates. When recording viewing angle coordinates, the user must be informed of the horizontal angle range and the vertical angle range covered by the monitored area 3. Compared to the recording format shown in table 2, the recording format shown in table 3 can reduce the recorded data amount.
TABLE 3
Time stamp | Upper left viewing coordinate | Lower right viewing coordinate | Sampling manner | Operating mode | Operating person
Oct. 25, 2016 13:15:22 | (X1, Y1) | (X2, Y2) | 0 (Polling sample) | 0 (Normal control mode) | ID1
. . . | . . . | . . . | . . . | . . . | . . .
In embodiment 3, the time information may be a time stamp and the position information may comprise a viewing center coordinate and a viewing size. Accordingly, step S14 may record the information shown in table 4 below. Since the range of the monitored area 3 is necessarily larger than the range of a single field of view, the invention may replace one viewing coordinate with the viewing size, so as to further reduce the data amount.
TABLE 4
Time stamp | Viewing center coordinate | Viewing size | Sampling manner | Operating mode | Operating person
Oct. 25, 2016 13:15:24 | (Xc, Yc) | Width × Height | 1 (Stable sample) | 0 (Normal control mode) | ID3
. . . | . . . | . . . | . . . | . . . | . . .
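The formats of embodiments 2 and 3 carry the same information, and a short sketch of the conversion between them is given below. The coordinate convention (x growing rightward, y growing downward, as in screen pixels) is assumed for illustration.

```python
def corners_to_center_size(x1, y1, x2, y2):
    """Upper left and lower right viewing coordinates (embodiment 2)
    -> viewing center coordinate and viewing size (embodiment 3)."""
    return (x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1

def center_size_to_corners(xc, yc, width, height):
    """Viewing center coordinate and viewing size (embodiment 3)
    -> upper left and lower right viewing coordinates (embodiment 2)."""
    return xc - width / 2, yc - height / 2, xc + width / 2, yc + height / 2

# Round trip: a 9 x 7 field of view centered at (5, 6), as in table 6 below.
assert corners_to_center_size(*center_size_to_corners(5, 6, 9, 7)) == (5, 6, 9, 7)
```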
In step S12, when sampling each field of view, the method of the invention may generate an approximate circle corresponding to the field of view, wherein the approximate circle may be a circumscribed circle or an inscribed circle. Therefore, in embodiment 4, the time information may be a time stamp and the position information may comprise a viewing center coordinate and a diameter or a radius of the approximate circle. Accordingly, step S14 may record the information shown in table 5 below.
TABLE 5
Time stamp | Viewing center coordinate | Diameter of approximate circle | Sampling manner | Operating mode | Operating person
Oct. 25, 2016 13:15:26 | (Xc, Yc) | R | 1 (Stable sample) | 1 (Auto-tracking mode) | ID2
. . . | . . . | . . . | . . . | . . . | . . .
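For a rectangular field of view, the approximate circle of embodiment 4 can be computed as sketched below: the circumscribed circle's diameter equals the rectangle's diagonal, and the inscribed circle's diameter equals its shorter side. The rectangular field of view given as center plus size is an assumption for illustration.

```python
import math

def approximate_circle(xc, yc, width, height, inscribed=False):
    """Return (center, diameter) of the approximate circle of a
    rectangular field of view given by its center and size."""
    if inscribed:
        diameter = min(width, height)         # largest circle inside the rectangle
    else:
        diameter = math.hypot(width, height)  # diagonal: smallest circle around it
    return (xc, yc), diameter

center, d = approximate_circle(5, 6, 9, 7)
print(center, round(d, 2))  # (5, 6) 11.4 -- circumscribed circle diameter
```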
In embodiment 5, the time information may be a time range. Accordingly, step S14 may record the information shown in table 6 below.
TABLE 6
Time range | Viewing center coordinate (Xc, Yc) | Viewing size (Width × Height) | Sampling manner | Operating mode | Operating person
Oct. 25, 2016 13:15:22 to Oct. 25, 2016 13:15:24 | (5, 6) | 9 × 7 | 0 (Polling sample) | 0 (Normal control mode) | ID0
. . . | . . . | . . . | . . . | . . . | . . .
In step S12, the invention may sample a plurality of current field of views of the camera 1 within a second time interval according to the first time interval. Then, the invention may merge a plurality of time information and a plurality of position information of the current field of views, wherein the first time interval is shorter than the second time interval. For example, the invention may sample the field of view once every 0.5 second (i.e. the first time interval) and merge the data sampled within every 2 seconds (i.e. the second time interval) to form one updated data record. Accordingly, the time range of the updated data record may be 2016/10/25 13:15:22 to 2016/10/25 13:15:24. The aforesaid first time interval and second time interval may be set by the user.
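A minimal sketch of this merge, assuming each sampled field of view is an axis-aligned rectangle and that the samples gathered within one second time interval are merged into their bounding box (as the example of table 7 below suggests; table 11 later shows that non-contiguous regions may also be kept instead):

```python
def merge_samples(samples):
    """samples: list of (timestamp, (x1, y1, x2, y2)) gathered within one
    second time interval. Returns one merged record whose time information
    is a time range and whose position information is the bounding box."""
    times = [t for t, _ in samples]
    xs1, ys1, xs2, ys2 = zip(*(fov for _, fov in samples))
    merged_fov = (min(xs1), min(ys1), max(xs2), max(ys2))
    return (min(times), max(times)), merged_fov
```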
Referring to the figures and to table 7 below, the sampled field of views S1 to S4 and the merged field of views C1, C2 are defined as follows.
TABLE 7
Field of view | Center coordinate | Size
S1 | (2, 8) | 3 × 3
S2 | (4, 4) | 3 × 3
S3 | (2, 4) | 3 × 3
S4 | (7, 6) | 5 × 5
C1 | (3, 6) | 5 × 7
C2 | (5, 6) | 9 × 7
It is assumed that the field of view S1 is sampled at 0.5 second, the field of view S2 is sampled at 1 second, the field of view S3 is sampled at 1.5 seconds, the field of view S4 is sampled at 2 seconds, and the time range of recording is 2 seconds. After sampling the field of views S1, S2, the invention merges and updates the field of views S1, S2 to form the field of view C1. After sampling the field of views S3, S4, the invention merges and updates the field of views C1, S3, S4 to form the field of view C2, and the merged field of view C2 is recorded with the corresponding time range.
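The numbers in table 7 can be checked with a few lines of code. The sketch below converts each field of view from center-plus-size form to corner form, merges by bounding box, and converts back, reproducing C1 = (3, 6), 5 × 7 and C2 = (5, 6), 9 × 7; the grid coordinate convention is assumed.

```python
def to_corners(xc, yc, w, h):
    return xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2

def to_center_size(x1, y1, x2, y2):
    return (x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1

def merge(*fovs):
    """Bounding box of fields of view given as (center x, center y, w, h)."""
    xs1, ys1, xs2, ys2 = zip(*(to_corners(*f) for f in fovs))
    return to_center_size(min(xs1), min(ys1), max(xs2), max(ys2))

S1, S2, S3, S4 = (2, 8, 3, 3), (4, 4, 3, 3), (2, 4, 3, 3), (7, 6, 5, 5)
C1 = merge(S1, S2)      # (3.0, 6.0, 5.0, 7.0): center (3, 6), size 5 x 7
C2 = merge(C1, S3, S4)  # (5.0, 6.0, 9.0, 7.0): center (5, 6), size 9 x 7
```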
Furthermore, when the operating mode or the operating person is changed during the process of sampling the field of view, the invention will add a record correspondingly. For example, if the time range of recording is 2 seconds and the operating mode is changed at 1.5 seconds, the previous time range of recording will be 1.5 seconds rather than 2 seconds.
When the recording table comprises a field of sampling manner, the invention may record an individual record for the stable sample. For example, if the time range of recording is 2 seconds and the stable sample is triggered at 1.5 seconds, the invention will add a record for the stable sample. Accordingly, the previous time range of recording for the polling sample will be 1.5 seconds, as shown in table 8 below.
TABLE 8
Time range | Viewing center coordinate (Xc, Yc) | Viewing size (Width × Height) | Sampling manner
Oct. 25, 2016 13:15:22 to Oct. 25, 2016 13:15:23.5 | (5, 6) | 9 × 7 | 0 (Polling sample)
Oct. 25, 2016 13:15:23.5 to Oct. 25, 2016 13:15:23.5 | (7, 6) | 5 × 5 | 1 (Stable sample)
Oct. 25, 2016 13:15:23.5 to Oct. 25, 2016 13:15:25.5 | (5, 6) | 5 × 5 | 0 (Polling sample)
. . . | . . . | . . . | . . .
If the recording table does not comprise the field of sampling manner, the invention may analyze and recognize whether the sampling manner is polling sample or stable sample according to the time range and the other fields (e.g. operating mode, operating person, and so on). Table 9 below records five entries: the first entry is a polling sample over 2 seconds; the second entry is a polling sample over 1 second; the third entry is a polling sample over 1 second, newly added because the operating mode is changed from auto-tracking mode to normal control mode; the fourth entry is a stable sample, newly added because the system detects that the user inputs a stop command; and the fifth entry is a polling sample over 2 seconds. Accordingly, the sampling manner can be recognized as stable sample when two conditions occur: first, the beginning time of the time range is identical to the ending time of the time range; second, the operating mode or the other fields are identical to those of the previous entry.
TABLE 9
Time range | Viewing center coordinate (Xc, Yc) | Viewing size (Width × Height) | Operating mode
Oct. 25, 2016 13:15:21 to Oct. 25, 2016 13:15:23 | (5, 6) | 9 × 7 | 1 (Auto-tracking mode)
Oct. 25, 2016 13:15:23 to Oct. 25, 2016 13:15:24 | (5, 6) | 9 × 7 | 1 (Auto-tracking mode)
Oct. 25, 2016 13:15:24 to Oct. 25, 2016 13:15:25 | (6, 6) | 5 × 5 | 0 (Normal control mode)
Oct. 25, 2016 13:15:25 to Oct. 25, 2016 13:15:25 | (6, 6) | 3 × 3 | 0 (Normal control mode)
Oct. 25, 2016 13:15:25 to Oct. 25, 2016 13:15:27 | (6, 6) | 3 × 3 | 1 (Auto-tracking mode)
. . . | . . . | . . . | . . .
It should be noted that if no other fields can be used to recognize whether the sampling manner is polling sample or stable sample (as shown in table 10 below), the sampling manner can be recognized as stable sample when the beginning time of the time range is identical to the ending time of the time range, e.g. the third entry in table 10 below.
TABLE 10
Time range | Viewing center coordinate (Xc, Yc) | Viewing size (Width × Height)
Oct. 25, 2016 13:15:21 to Oct. 25, 2016 13:15:23 | (5, 6) | 9 × 7
Oct. 25, 2016 13:15:23 to Oct. 25, 2016 13:15:25 | (5, 6) | 9 × 7
Oct. 25, 2016 13:15:25 to Oct. 25, 2016 13:15:25 | (6, 6) | 3 × 3
Oct. 25, 2016 13:15:25 to Oct. 25, 2016 13:15:27 | (6, 6) | 3 × 3
. . . | . . . | . . .
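The recognition rule of tables 9 and 10 can be sketched as below; records are assumed, for illustration, to be dictionaries holding a (begin, end) time range plus whatever metadata fields (operating mode, operating person) the table happens to have.

```python
META_FIELDS = ("operating_mode", "operating_person")

def is_stable_sample(record, previous):
    """A record is recognized as a stable sample when its time range is
    instantaneous (begin == end) and, if metadata fields exist, they are
    identical to those of the previous record (a differing mode or person
    would mean the record was added for that change instead)."""
    begin, end = record["time_range"]
    if begin != end:
        return False
    if any(f in record for f in META_FIELDS):
        return all(record.get(f) == previous.get(f) for f in META_FIELDS)
    return True  # no other field, as in table 10: the time range alone decides

# Entries 3 and 4 of table 9: same normal control mode, but only the
# fourth entry's time range is instantaneous -> stable sample.
r3 = {"time_range": ("13:15:24", "13:15:25"), "operating_mode": 0}
r4 = {"time_range": ("13:15:25", "13:15:25"), "operating_mode": 0}
assert not is_stable_sample(r3, {}) and is_stable_sample(r4, r3)
```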
As shown in table 11 below, the monitored area 3 may be defined as 5×5 grids and a field of view sampled by the merging manner of the aforesaid embodiment 5 may comprise twelve grids with corresponding grid coordinates (0,1), (1,1), (2,1), (4,1), (0,2), (1,2), (2,2), (4,2), (0,3), (1,3), (2,3), (4,3), wherein the numeral "0" following a grid coordinate represents that the grid is not located within the field of view and the numeral "1" represents that the grid is located within the field of view. In other words, the sampled field of view may consist of two non-contiguous data regions.
TABLE 11
(0, 0):0 | (1, 0):0 | (2, 0):0 | (3, 0):0 | (4, 0):0
(0, 1):1 | (1, 1):1 | (2, 1):1 | (3, 1):0 | (4, 1):1
(0, 2):1 | (1, 2):1 | (2, 2):1 | (3, 2):0 | (4, 2):1
(0, 3):1 | (1, 3):1 | (2, 3):1 | (3, 3):0 | (4, 3):1
(0, 4):0 | (1, 4):0 | (2, 4):0 | (3, 4):0 | (4, 4):0
Therefore, according to the aforesaid embodiments, the invention may record the information in tables 12 to 14 below.
TABLE 12
Time range | Upper left viewing coordinate (X1, Y1) | Lower right viewing coordinate (X2, Y2)
Oct. 25, 2016 13:15:21 to Oct. 25, 2016 13:15:23 | (0, 1) | (4, 3)
TABLE 13
Time range | Viewing center coordinate (Xc, Yc) | Viewing size (Width × Height)
Oct. 25, 2016 13:15:21 to Oct. 25, 2016 13:15:23 | (2, 2) | 5 × 3
TABLE 14
Time range | Grid coordinate
Oct. 25, 2016 13:15:21 to Oct. 25, 2016 13:15:23 | 0000011101111011110100000
Furthermore, the grid coordinate shown in table 14 may be compressed by run length encoding (RLE) to form the format shown in table 15 below. It should be noted that, in addition to RLE, the invention may compress the grid coordinate by another compression algorithm.
TABLE 15
Time range | Grid coordinate
Oct. 25, 2016 13:15:21 to Oct. 25, 2016 13:15:23 | N5Y3N1Y4N1Y4N1Y1N5
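A sketch of the run length encoding of table 15, using "Y" for runs of grids inside the field of view and "N" for runs outside, as the table's letters suggest:

```python
from itertools import groupby

def rle_encode(bits):
    """Compress a grid coordinate bit string such as the table 14 record."""
    return "".join(
        ("Y" if bit == "1" else "N") + str(sum(1 for _ in run))
        for bit, run in groupby(bits)
    )

def rle_decode(code):
    """Expand an encoded string back into the raw bit string."""
    out, i = [], 0
    while i < len(code):
        bit = "1" if code[i] == "Y" else "0"
        j = i + 1
        while j < len(code) and code[j].isdigit():
            j += 1
        out.append(bit * int(code[i + 1:j]))
        i = j
    return "".join(out)

bits = "0000011101111011110100000"               # table 14
assert rle_encode(bits) == "N5Y3N1Y4N1Y4N1Y1N5"  # table 15
assert rle_decode(rle_encode(bits)) == bits
```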
After recording the time information and the position information of the sampled field of views according to the aforesaid embodiments, the invention can generate a monitoring strength distribution corresponding to the monitored area 3 according to the time information and the position information. The monitoring strength distribution may be generated in two manners, depicted in the following.
The first manner is to accumulate a stay time of each of the position information and generate the monitoring strength distribution corresponding to the monitored area 3 according to the stay time of each of the position information, wherein the stay time represents a total time accumulated for each of the position information appearing in the field of view. In other words, the longer the stay time of a field of view is, the more often the field of view is monitored.
The second manner is to accumulate a number of recording times of each of the position information and generate the monitoring strength distribution corresponding to the monitored area 3 according to the number of recording times of each of the position information. In other words, the larger the number of recording times of a field of view is, the more often the field of view is monitored.
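Both manners reduce to a per-grid accumulator over the recorded data; a heat map is then simply a rendering of the resulting matrix. The following sketch assumes merged records in the form (duration in seconds, grid rectangle), which is one possible format among the embodiments above.

```python
def strength_distribution(records, grid_w=5, grid_h=5, manner="stay_time"):
    """records: list of (duration, (x_min, y_min, x_max, y_max)).
    Returns a grid_h x grid_w matrix holding, per grid, either the
    accumulated stay time (first manner) or the number of recording
    times (second manner)."""
    strength = [[0.0] * grid_w for _ in range(grid_h)]
    for duration, (x1, y1, x2, y2) in records:
        weight = duration if manner == "stay_time" else 1
        for y in range(max(0, y1), min(grid_h, y2 + 1)):
            for x in range(max(0, x1), min(grid_w, x2 + 1)):
                strength[y][x] += weight
    return strength

# Two overlapping records: grids with strength 0 are dead spots, grids
# covered by both records form the hot zone.
for row in strength_distribution([(2.0, (0, 1, 2, 3)), (1.5, (2, 1, 4, 3))]):
    print(row)
```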
The invention may allow the user to use different conditions (e.g. sampling manner, operating mode, operating person, stay time, number of recording times, etc.) to look up the monitoring strength distribution of the monitored area 3. In practical applications, the invention may render the monitoring strength distribution of the monitored area 3 as a heat map, pie chart, coordinate distribution chart, histogram or other chart according to the stay time or the number of recording times of the position information. For the heat map, the invention may generate the heat map of the monitoring strength distribution on a planar or fisheye panorama of the monitored area 3 for the user to view. It should be noted that the method for generating the heat map is well known by one skilled in the art, so it is not depicted herein.
As mentioned in the above, when driving the image capturing module of the camera to monitor the monitored area, the invention samples a plurality of field of views to obtain a plurality of time information and a plurality of position information. Then, the invention generates the monitoring strength distribution corresponding to the monitored area according to the time information and the position information. A user can obtain information including coverage, hot zone, dead spot, and so on related to the monitored area through the monitoring strength distribution corresponding to the monitored area, so as to know well the monitored condition of the monitored area.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.