A motion detection method includes acquiring a raw image, detecting a motion object image according to the raw image by using a motion detector, cropping the raw image to generate a sub-image according to the motion object image, and inputting the sub-image to a processor for determining if a motion object of the sub-image matches with a detection category. The processor includes a neural network. The shape of the sub-image is a polygonal shape.
1. A motion detection method comprising:
acquiring a raw image;
detecting a motion object image according to the raw image;
acquiring processing time of a motion detection process;
acquiring computational complexity of the motion detection process;
updating the motion object image by adjusting an aspect ratio of the motion object image after the motion object image is detected according to the processing time and the computational complexity;
cropping the raw image to generate a sub-image according to the motion object image; and
inputting the sub-image to a processor for determining if a motion object of the sub-image matches with a detection category;
wherein the processor comprises a neural network, and a shape of the sub-image is a polygonal shape.
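The steps of claim 1 can be sketched end to end as a small program. This is a minimal illustrative sketch, not the patented implementation: the frame-difference detector, the aspect-ratio adjustment policy, and the time/complexity budget are all hypothetical stand-ins, and a trivial callable stands in for the neural-network classifier.

```python
# Hypothetical sketch of the claimed pipeline. The stub detector, the
# adjustment policy, and the budget value are illustrative assumptions.

def detect_motion_bbox(raw, prev):
    """Stub motion detector: bounding box of pixels that changed
    between two frames (2D lists of intensities)."""
    changed = [(y, x) for y, row in enumerate(raw)
               for x, v in enumerate(row) if v != prev[y][x]]
    if not changed:
        return None
    ys = [y for y, _ in changed]
    xs = [x for _, x in changed]
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

def adjust_aspect_ratio(bbox, processing_time, complexity, budget=1.0):
    """Claim 1 step: update the box when the time/complexity budget is
    exceeded; the exact policy is not specified, so this is a guess."""
    x, y, w, h = bbox
    if processing_time * complexity > budget:
        w = max(1, int(w * 0.8))  # example policy: tighten the crop
    return (x, y, w, h)

def crop(raw, bbox):
    x, y, w, h = bbox
    return [row[x:x + w] for row in raw[y:y + h]]

def run_pipeline(raw, prev, classify):
    bbox = detect_motion_bbox(raw, prev)
    if bbox is None:
        return False
    bbox = adjust_aspect_ratio(bbox, processing_time=0.5, complexity=3.0)
    sub = crop(raw, bbox)   # polygonal (here rectangular) sub-image
    return classify(sub)    # the neural-network stage stands in here
```

Only the (usually small) cropped sub-image reaches the classifier, which is where the claimed complexity saving comes from.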
11. A motion detection system comprising:
an image capturing device configured to acquire a raw image;
a memory configured to save image data; and
a processor coupled to the memory;
wherein after the image capturing device acquires the raw image, a motion object image is detected according to the raw image, the processor acquires processing time and computational complexity of a motion detection process, the processor updates the motion object image by adjusting an aspect ratio of the motion object image after the motion object image is detected according to the processing time and the computational complexity, the processor crops the raw image to generate a sub-image according to the motion object image and transmits the sub-image to the memory, the processor determines if a motion object of the sub-image matches with a detection category, the processor comprises a neural network, and a shape of the sub-image is a polygonal shape.
2. The method of
performing at least one image processing operation to process a plurality of pixels of the motion object image for generating complete and continuous image pixel information;
wherein an aspect ratio of the sub-image and the aspect ratio of the motion object image are identical, and the at least one image processing operation comprises an erosion processing operation, a dilation processing operation, and/or a connected component processing operation.
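The erosion and dilation operations named in the claim can be illustrated on a binary grid. The patent names the operations but not an implementation, so the 4-neighbour structuring element below is an assumption; production code would typically use a library such as OpenCV.

```python
# Illustrative binary morphology on a 0/1 grid; the 4-neighbour
# structuring element is an assumption, not the patented choice.

def dilate(img):
    """Set a pixel if it or any 4-neighbour is set (fills small gaps,
    helping produce continuous pixel information)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nbrs = [img[y][x]]
            if y > 0: nbrs.append(img[y - 1][x])
            if y < h - 1: nbrs.append(img[y + 1][x])
            if x > 0: nbrs.append(img[y][x - 1])
            if x < w - 1: nbrs.append(img[y][x + 1])
            out[y][x] = 1 if any(nbrs) else 0
    return out

def erode(img):
    """Keep a pixel only if it and all 4-neighbours are set
    (removes isolated noise pixels)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all((img[y][x], img[y - 1][x], img[y + 1][x],
                    img[y][x - 1], img[y][x + 1])):
                out[y][x] = 1
    return out
```

Eroding then dilating (an opening) suppresses speckle noise in the motion mask; connected-component labelling would then group the surviving pixels into object regions.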
3. The method of
inputting the raw image; and
partitioning the raw image into the motion object image and a background image;
wherein the motion object image belongs to a foreground image of the raw image.
4. The method of
5. The method of
filtering out at least one motion object image from the raw image by using an image filtering process when the at least one motion object image and the detection category are mismatched;
wherein the image filtering process is performed by the processor according to an aspect ratio and/or a default resolution of the motion object image.
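The aspect-ratio/resolution filtering described above can be sketched as a simple pass over candidate boxes. The thresholds below are illustrative assumptions; the patent does not fix concrete values.

```python
# Hypothetical filtering pass matching the claim: reject candidate
# boxes whose aspect ratio or pixel count falls outside illustrative
# bounds. All threshold values are assumptions.

def filter_candidates(bboxes, min_ratio=0.2, max_ratio=1.0, min_pixels=64):
    """Keep only (w, h) boxes plausible for the detection category,
    e.g. a standing person is taller than wide."""
    kept = []
    for w, h in bboxes:
        ratio = w / h
        if min_ratio <= ratio <= max_ratio and w * h >= min_pixels:
            kept.append((w, h))
    return kept
```

Discarding implausible boxes before the neural-network stage avoids spending classifier time on mismatched candidates.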
7. The method of
acquiring two-dimensional coordinates of a vertex of a rectangular range of the motion object image;
acquiring a width of the rectangular range and a height of the rectangular range of the motion object image; and
determining a range of the sub-image cropped from the raw image according to the two-dimensional coordinates of the vertex, the width of the rectangular range, and the height of the rectangular range.
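Claim 7's crop-range computation — a top-left vertex plus a width and height defining the rectangle cut from the raw image — can be written directly. The clamping to the image bounds is an added assumption, not part of the claim.

```python
# Sketch of claim 7's crop-range computation; clamping to the raw
# image bounds is an illustrative assumption.

def crop_range(vertex, width, height, img_w, img_h):
    """Return (x0, y0, x1, y1) of the sub-image, clamped to the raw image."""
    x, y = vertex
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(img_w, x + width), min(img_h, y + height)
    return x0, y0, x1, y1

def crop_image(raw, vertex, width, height):
    """Cut the rectangular sub-image out of a 2D raw image."""
    x0, y0, x1, y1 = crop_range(vertex, width, height,
                                len(raw[0]), len(raw))
    return [row[x0:x1] for row in raw[y0:y1]]
```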
8. The method of
setting the detection category by the processor; and
training the neural network of the processor according to the detection category;
wherein after the neural network is trained, the processor has a capability of determining if the motion object of the sub-image matches with the detection category.
9. The method of
10. The method of
12. The system of
13. The system of
14. The system of
15. The system of
17. The system of
18. The system of
19. The system of
20. The system of
The present disclosure illustrates a motion detection method and a motion detection system, and more particularly, a motion detection method and a motion detection system having low computational complexity and providing high detection accuracy.
With the rapid development of technology, consumer electronic products can process at least one video stream and perform intelligent video analytics (IVA). IVA can be applied to a security control system, and when it is, humanoid detection is an important and indispensable technology. The video stream can be inputted to a processor for determining if a human is present during at least one frame period.
However, since the resolution and transmission bandwidth requirements of video streams have increased in recent years, it is hard to perform humanoid detection with high accuracy in real time. In general, when a humanoid detection system is required to provide high detection accuracy and an instant detection result (i.e., a very short processing time), high computational complexity and improved image processing algorithms are required for efficiently processing floating-point numbers. However, increasing the computational complexity of the humanoid detection system or improving its image processing algorithms may require additional hardware design costs and/or software testing costs. Therefore, developing a humanoid detection method with high detection accuracy for a programmable system with low computing capability or an embedded system is an important issue for IVA.
In an embodiment of the present disclosure, a motion detection method is disclosed. The motion detection method comprises acquiring a raw image, detecting a motion object image according to the raw image by using a motion detector, cropping the raw image to generate a sub-image according to the motion object image, and inputting the sub-image to a processor for determining if a motion object of the sub-image matches with a detection category. The processor comprises a neural network. The shape of the sub-image is a polygonal shape.
In another embodiment of the present disclosure, a motion detection system is disclosed. The motion detection system comprises an image capturing device, a motion detector, a memory, and a processor. The image capturing device is configured to acquire a raw image. The motion detector is coupled to the image capturing device. The memory is configured to save image data. The processor is coupled to the motion detector and the memory. After the image capturing device acquires the raw image, the motion detector detects a motion object image according to the raw image. The processor crops the raw image to generate a sub-image according to the motion object image and transmits the sub-image to the memory. The processor determines if a motion object of the sub-image matches with a detection category. The processor comprises a neural network. A shape of the sub-image is a polygonal shape.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
In the motion detection system 100, after the image capturing device 10 acquires the raw image, the motion detector 11 can detect the motion object image according to the raw image. Then, the processor 13 can crop the raw image to generate the sub-image according to the motion object image and can transmit the sub-image to the memory 12. Further, the processor 13 can determine if a motion object of the sub-image matches with a detection category. In the motion detection system 100, the processor 13 can include a neural network. For example, the processor 13 can include a convolutional neural network (CNN)-based humanoid detector. A shape of the sub-image cropped from the raw image can be a polygonal shape. Therefore, the motion detection system 100 can be regarded as an input-output (I/O) system. The motion detection system 100 can receive the raw image, analyze the raw image, and determine if the motion object image is present. Finally, the motion detection system 100 can output a detection result. Details of a motion detection method performed by the motion detection system 100 are illustrated below.
As previously mentioned, the motion detection system 100 can use the memory 12 for saving the image data. The image data can be digital image data. For example, a range and a position of the motion object image ObjIMG can be digitized as the image data, as illustrated below.
As previously mentioned, the user can use the processor 13 for setting the detection category of the motion detection system 100. After the detection category is set by the processor 13, the neural network of the processor 13 can be trained according to the detection category. Further, after the neural network is trained, the processor 13 has the capability of determining if the motion object of the sub-image SIMG matches with the detection category. In other words, after information of the sub-image SIMG is received by the processor 13, the processor 13 can use the "trained" neural network for analyzing the sub-image SIMG in order to determine if the motion object of the sub-image SIMG matches with the detection category.
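The "set a category, train, then decide" flow described above can be illustrated with a deliberately tiny stand-in for the CNN: a one-feature classifier that learns a brightness threshold from labelled sub-images. Everything here is a placeholder assumption; it only mirrors the train-then-classify structure, not the patented detector.

```python
# Toy stand-in for "train, then decide". The single brightness feature
# and threshold rule are illustrative assumptions replacing the CNN.

def mean_intensity(img):
    """Average pixel value of a 2D sub-image."""
    pixels = [v for row in img for v in row]
    return sum(pixels) / len(pixels)

def train(samples):
    """samples: list of (sub_image, matches_category) pairs.
    Returns a decision threshold halfway between the class means."""
    pos = [mean_intensity(s) for s, label in samples if label]
    neg = [mean_intensity(s) for s, label in samples if not label]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def matches_category(img, threshold):
    """After training, decide whether a sub-image matches the category."""
    return mean_intensity(img) > threshold
```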
Details of steps S601 to S607 are illustrated above and thus omitted here. Further, the motion detection method of the motion detection system 100 is not limited to steps S601 to S607. For example, steps S603 and S604 can be omitted to reduce computational complexity. Moreover, the execution order of steps S603 and S604 can be interchanged. The motion detection system 100 can also detect all motion object images simultaneously. Any reasonable technology modification falls within the scope of the present invention. By using steps S601 to S607, the computational complexity of the motion detection system 100 can be reduced. Therefore, when the motion detection system 100 is applied to humanoid detection, it can provide a real-time humanoid detection result with high reliability.
A principle of performing the motion detection method by the motion detection system 100 with low computational complexity is illustrated below. In the motion detection system 100, the motion detector 11 can be coupled to a front end of the neural network-based humanoid detector (the processor 13). Therefore, the processor 13 can identify the motion object without analyzing a full-resolution raw image. In other words, the processor 13 can analyze the sub-image including the motion object detected by the motion detector 11. Since the number of pixels in the sub-image including the motion object is much smaller than in the full-resolution raw image, the computational complexity of each subsequent image processing mechanism of the motion detection system 100 can be greatly reduced.
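A back-of-envelope calculation makes the stated saving concrete. The resolutions and the linear per-pixel cost model below are illustrative assumptions, not figures from the disclosure.

```python
# Illustration of the saving from classifying only the motion
# sub-image instead of the full frame. Resolutions and the cost model
# are assumptions.

def classifier_cost(width, height, ops_per_pixel=1_000):
    """Toy cost model: work grows linearly with pixel count."""
    return width * height * ops_per_pixel

full_cost = classifier_cost(1920, 1080)  # full-resolution raw frame
sub_cost = classifier_cost(128, 256)     # cropped motion sub-image
speedup = full_cost / sub_cost           # roughly 63x under this model
```

Under these assumptions the classifier touches about 63 times fewer pixels, which is why the claimed front-end motion detector makes real-time operation feasible on low-capability hardware.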
To sum up, the present disclosure illustrates a motion detection method and a motion detection system. Since the motion detection system has low computational complexity, it can be implemented with any programmable hardware with a low computing capability. In the motion detection system, since a motion detector can be introduced for detecting a sub-image including a motion object from the raw image, the motion detection system only analyzes the sub-image detected by the motion detector for determining if a motion object of a detection category is present. Therefore, unlike a conventional motion detection system that analyzes the full-resolution raw image, the motion detection system of the present disclosure can reduce the computational complexity, thereby providing a real-time and accurate motion detection result.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
Chen, Shih-Tse, Wu, Chun-Chang, Chan, Shang-Lun, Yang, Chao-Hsun
Assignment: On Jan 15, 2020, Shang-Lun Chan, Chao-Hsun Yang, Chun-Chang Wu, and Shih-Tse Chen each assigned their interest to Realtek Semiconductor Corp (Reel/Frame 051529/0216). The application was filed by Realtek Semiconductor Corp on Jan 16, 2020.
Maintenance Fee Events: Jan 16, 2020 — entity status set to Undiscounted.
Maintenance Schedule:
Year 4: fee payment window opens Dec 14, 2024; 6-month grace period (with surcharge) starts Jun 14, 2025; patent expires Dec 14, 2025 if unpaid; revivable for unintentional abandonment until Dec 14, 2027.
Year 8: fee payment window opens Dec 14, 2028; grace period starts Jun 14, 2029; expiry Dec 14, 2029; revivable until Dec 14, 2031.
Year 12: fee payment window opens Dec 14, 2032; grace period starts Jun 14, 2033; expiry Dec 14, 2033; revivable until Dec 14, 2035.