An operation detection device includes: first and second illuminations that irradiate illumination light from different positions onto an operation surface on which a user performs an operation; a camera that captures the operation surface together with an operation part (finger) of the user; a shadow region extraction unit that extracts first and second shadows of the operation part of the user from a captured image obtained by the camera; a contour detection unit that detects contours of each of the first and second shadows extracted; and a touch point detection unit that detects a touch point of the operation part of the user on the operation surface from the distance between the contours.
4. An operation detection method that detects an operation performed by a finger of a user with respect to an operation surface, comprising the steps of:
irradiating illumination light from different positions onto the operation surface by a first illumination and a second illumination;
capturing, by a camera, together with the finger of the user, the operation surface onto which the illumination light has been irradiated;
extracting a first shadow of the finger of the user produced by the first illumination and a second shadow of the finger of the user produced by the second illumination on the basis of a captured image obtained by the camera; and
detecting a touch point of the finger of the user on the operation surface on the basis of the first shadow and the second shadow extracted,
wherein corresponding substantially linear line segments are extracted as contours from within contours of the first shadow and the second shadow extracted, and
wherein it is determined that the finger of the user has touched the operation surface when the distance between the two contours extracted has become equal to or less than a predetermined threshold value.
6. A projector comprising:
a video projection unit that projects video;
a first illumination and a second illumination that irradiate illumination light from different positions onto an operation surface at least part of which overlaps a video surface projected by the video projection unit;
a camera that captures, together with the finger of the user, the operation surface onto which the illumination light has been irradiated;
a shadow region extraction unit that, on the basis of a captured image obtained by the camera, extracts a first shadow of the finger of the user produced by the first illumination and a second shadow of the finger of the user produced by the second illumination;
a touch point detection unit that, on the basis of the first shadow and the second shadow extracted, detects a touch point of the finger of the user on the operation surface; and
a contour detection unit that detects a contour of the first shadow and a contour of the second shadow extracted,
wherein the touch point detection unit detects a touch point of the finger of the user on the operation surface from the distance between the contour of the first shadow and the contour of the second shadow,
wherein the shadow region extraction unit compares the brightness of the captured image with a predetermined threshold value, and discerns and extracts the first shadow of the finger of the user and the second shadow of the finger of the user,
wherein the contour detection unit extracts, as contours, corresponding substantially linear line segments from within the contour of the first shadow and the contour of the second shadow, and
wherein the touch point detection unit determines that the finger of the user has touched the operation surface when the distance between the two contours extracted has become equal to or less than a predetermined threshold value.
1. An operation detection device that detects an operation performed by a finger of a user with respect to an operation surface, the operation detection device comprising:
a first illumination and a second illumination that irradiate illumination light from different positions onto the operation surface;
a camera that captures, together with the finger of the user, the operation surface onto which the illumination light has been irradiated;
a shadow region extraction unit that, on the basis of a captured image obtained by the camera, extracts a first shadow of the finger of the user produced by the first illumination and a second shadow of the finger of the user produced by the second illumination;
a touch point detection unit that, on the basis of the first shadow and the second shadow extracted, detects a touch point of the finger of the user on the operation surface; and
a contour detection unit that detects a contour of the first shadow and a contour of the second shadow extracted,
wherein the touch point detection unit detects a touch point of the finger of the user on the operation surface from the distance between the contour of the first shadow and the contour of the second shadow,
wherein the shadow region extraction unit compares the brightness of the captured image with a predetermined threshold value, and discerns and extracts the first shadow of the finger of the user and the second shadow of the finger of the user,
wherein the contour detection unit extracts, as contours, corresponding substantially linear line segments from within the contour of the first shadow and the contour of the second shadow, and
wherein the touch point detection unit determines that the finger of the user has touched the operation surface when the distance between the two contours extracted has become equal to or less than a predetermined threshold value.
2. The operation detection device according to claim 1, wherein
the first illumination and the second illumination irradiate in a temporally alternating manner,
the camera captures the operation surface in accordance with irradiation timings of each of the irradiation of the first illumination and the irradiation of the second illumination, and
the shadow region extraction unit extracts the first shadow from an image captured by the first illumination, and extracts the second shadow from an image captured by the second illumination temporally separated from the image captured by the first illumination.
3. The operation detection device according to claim 1, wherein
the first illumination and the second illumination are installed in such a way that the illumination directions thereof are oriented toward substantially the same side as the imaging direction of the camera.
5. The operation detection method according to claim 4, wherein
the first illumination and the second illumination are made to irradiate in a temporally alternating manner,
the operation surface is captured by the camera in accordance with irradiation timings of each of the irradiation of the first illumination and the irradiation of the second illumination, and
the first shadow is extracted from an image captured by the first illumination, and the second shadow is extracted from an image captured by the second illumination temporally separated from the image captured by the first illumination.
7. The projector according to claim 6, wherein
the first illumination and the second illumination are made to irradiate in a temporally alternating manner,
the camera captures the operation surface in accordance with irradiation timings of each of the irradiation of the first illumination and the irradiation of the second illumination, and
the shadow region extraction unit extracts the first shadow from an image captured by the first illumination, and extracts the second shadow from an image captured by the second illumination temporally separated from the image captured by the first illumination.
8. The projector according to claim 6, wherein
the first illumination and the second illumination are installed in such a way that the illumination directions thereof are oriented toward substantially the same side as the imaging direction of the camera.
This application claims priority from Japanese Patent Application No. 2013-048305, filed on Mar. 11, 2013, the contents of which are incorporated herein by reference in their entirety.
1. Field of the Invention
The present invention relates to an operation detection device and an operation detection method that detect a finger operation of a user.
2. Description of the Related Art
A technology has been proposed that, as a means of user operation input on the projection surface (screen) of a projection-type video display device (projector), captures an image of an operation part (finger) of the user and extracts the shadows thereof to detect a finger touch operation, without using a special device such as a touch sensor.
JP-2008-59283-A discloses an operation detection device including: a means for causing an imaging means to capture an image of an operator in a state lit by an illumination means; a means for detecting a region of a specific site of the operator on the basis of image data of the operator obtained by the imaging means; a means for extracting shadow portions from the detected region of the specific site of the operator; and a means for detecting, from among the extracted shadow portions, a plurality of line segments in which edges form straight lines, detecting points where the detected line segments intersect at acute angles, and detecting these intersecting points as pointing positions in the region of the specific site of the operator.
Furthermore, JP-2011-180712-A discloses a projection-type video display device including: a projection unit that projects video onto a screen; an imaging unit for capturing an image of a region including at least the video projected onto the screen; an actual image detection unit that detects an actual image of a predetermined object that moves above the screen, from the image captured by the imaging unit; a shadow detection unit that detects a shadow of the predetermined object produced by projection light from the projection unit, from the image captured by the imaging unit; a touch determination unit that determines that the predetermined object is touching the screen if the distance between the actual image of the predetermined object and a corresponding point of the shadow is equal to or less than a predetermined threshold value; and a coordinate determination unit that outputs the coordinates of the predetermined object as a pointing position with respect to the video when it has been determined by the touch determination unit that there is touching.
In JP-2008-59283-A, a shadow portion is extracted from the image data of the operator obtained by the imaging means, and points where the edges of the shadow intersect at acute angles are detected as pointing positions. However, when a hand is open and hovering, the shadow of a certain finger is overlapped by the other fingers, so a plurality of points where the edges of the shadow intersect at acute angles are expected to occur, and there is therefore a risk that a point different from the true pointing position will be erroneously detected as a pointing position. Consequently, this method is not suitable for simultaneously detecting the pointing positions of a plurality of fingers when a hand is open, that is, for the detection of a multi-touch operation.
Furthermore, in JP-2011-180712-A, it is determined that a predetermined object (finger) is touching a screen if the distance between an actual image of the predetermined object and a corresponding point of the shadow is equal to or less than a predetermined threshold value. However, when a hand is open, some of the shadows of the other fingers are hidden by the actual image of a certain finger, and it is therefore difficult to detect the actual image of the finger and a corresponding point of its shadow. In addition, because the distance between the actual image of a finger and its shadow increases when the hand is open and hovering, the tip end sections of the shadows of other fingers approach the tip end section of the actual image of a certain finger, for example, and there is a risk that the distance between the actual image of that finger and a corresponding shadow point will appear to have become equal to or less than the threshold value, so that the finger is erroneously determined to be touching the screen. Consequently, this method is also not suitable for the detection of a multi-touch operation.
The present invention takes the aforementioned problems into consideration, and an object thereof is to provide an operation detection device and an operation detection method that correctly detect each of the touch positions of a plurality of fingers even when a hand is open, and handle multi-touch operations.
A configuration described in the claims for example is adopted in order to solve the aforementioned problems.
The present application includes a plurality of units in order to solve the aforementioned problems, and, to give one example thereof, an operation detection device of the present invention includes: first and second illuminations that irradiate illumination light from different positions onto an operation surface; a camera that captures, together with an operation part of a user, the operation surface onto which the illumination light has been irradiated; a shadow region extraction unit that extracts first and second shadows of the operation part of the user from a captured image obtained by the camera; a contour detection unit that detects contours of each of the first and second shadows extracted; and a touch point detection unit that detects a touch point of the operation part of the user on the operation surface from the distance between the contours. The shadow region extraction unit compares the brightness of the captured image with a predetermined threshold value, and discerns and extracts the first and second shadows of the operation part of the user projected by the first and second illuminations, the contour detection unit extracts, as contours, corresponding substantially linear line segments from within the contours of the first and second shadows, and the touch point detection unit determines that the operation part of the user has touched the operation surface when the distance between the two extracted contours has become equal to or less than a predetermined threshold value.
According to the present invention, it is possible to correctly detect the touch positions of a plurality of fingers on an operation surface, and to realize a highly accurate multi-touch operation, without providing a touch sensor or the like on the operation surface.
The embodiments are described hereafter using the drawings.
In a first embodiment, a description is given with respect to an operation detection device that uses one camera and two illuminations arranged in different positions to detect a touch point where an operation part (finger) of a user touches an operation surface.
The operation detection device 1 is attached to the upper section of the wall surface 21, and the two illuminations 101 and 102 are arranged offset in different positions in the horizontal direction on the wall surface 21, on either side of the camera 100.
Next, the operations of the units are described. The camera 100 is configured from an image sensor, a lens, and so forth, and captures an image including the finger 30 that constitutes the operation part of the user 3. The two illuminations 101 and 102 are configured from light-emitting diodes, circuit boards, lenses, and so forth, irradiate illumination light onto the operation surface 22 and the finger 30 of the user 3, and project shadows of the finger 30 into the image captured by the camera 100. It should be noted that the illuminations 101 and 102 may be infrared-light illuminations and the camera 100 may be an infrared-light camera; an infrared-light image captured by the camera 100 can thereby be acquired separately from the visible-light video projected by the operation target device 2 (projector).
The shadow region extraction unit 104 is configured from a circuit board or software or the like, and extracts shadows from an image obtained by the camera 100 to generate a shadow image. For example, the background image of the operation surface 22, captured in advance, is subtracted from an image captured during the detection of an operation to generate a difference image, the brightness of the difference image is binarized using a predetermined threshold value Lth, and regions at or below the threshold value are taken as shadow regions. In addition, so-called labeling processing is performed, in which extracted shadow regions that are not mutually connected are each discerned as separate shadows. As a result of the labeling processing, it is possible to identify which fingers the plurality of extracted shadows correspond to.
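As a rough illustration of this shadow extraction, the following Python sketch (using OpenCV and NumPy, which the embodiment does not mandate; the function name, the grayscale inputs, and the value of Lth are assumptions) performs the background subtraction, binarization with Lth, and labeling just described:

```python
import cv2
import numpy as np

L_TH = -40  # placeholder for the brightness threshold Lth (shadows darken the image)

def extract_shadow_regions(frame_gray, background_gray, l_th=L_TH):
    """Label shadow regions in a grayscale camera frame.

    The pre-captured background image is subtracted from the current frame,
    the difference image is binarized with the threshold Lth (pixels at or
    below the threshold are taken as shadow), and unconnected shadow regions
    are separated by labeling (connected-components analysis).
    """
    diff = frame_gray.astype(np.int16) - background_gray.astype(np.int16)
    shadow_mask = (diff <= l_th).astype(np.uint8) * 255
    num_labels, labels = cv2.connectedComponents(shadow_mask)
    # label 0 is the non-shadow background; labels 1..num_labels-1 are separate shadows
    return num_labels, labels
```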
The contour detection unit 105 is configured from a circuit board or software or the like, and extracts the contours of the shadow regions from the shadow image. For example, contours are obtained by scanning within the shadow image in a constant direction (from the top-left to the bottom-right) to determine a starting pixel for contour tracking, and then tracking the neighboring pixels of the starting pixel in a counterclockwise manner. A method for detecting contours is described in more detail later.
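OpenCV's built-in contour follower walks region borders in a comparable way, so a hypothetical helper for obtaining the border pixels of one labeled shadow might look as follows (the helper name and the use of findContours in place of the tracking described above are assumptions of this sketch):

```python
import cv2
import numpy as np

def detect_shadow_contours(labels, label_id):
    """Return the border pixel coordinates of one labeled shadow region."""
    mask = (labels == label_id).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return np.empty((0, 2), dtype=np.int32)
    # the largest border of the mask corresponds to the shadow region itself
    return max(contours, key=cv2.contourArea).reshape(-1, 2)
```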
The touch point detection unit 106 is configured from a circuit board or software or the like, and, on the basis of the shapes and positions of the contours, determines the touch state of the finger 30 with respect to the operation surface 22 and also detects the touch point (coordinates). A method for detecting a touch point is described later.
The control unit 120 is configured from a circuit board or software or the like, and controls the illuminations 101 and 102, the shadow region extraction unit 104, the contour detection unit 105, the touch point detection unit 106, and the output unit 130 on the basis of the operation captured in the images obtained by the camera 100.
The output unit 130 is an interface that outputs the detection result data 150 to the operation target device (projector) 2, which constitutes the operation target, and is configured from a network connection, a Universal Serial Bus (USB) connection, an ultrasound unit, an infrared-ray communication device, or the like. The detection result data 150 includes touch state information indicating whether or not the finger 30 is touching the operation surface 22, together with the coordinates of the touch point.
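As one illustration only, the detection result data 150 could be modeled by a small structure such as the following (the class and field names are hypothetical and are not taken from the embodiment):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DetectionResult:
    """Illustrative stand-in for the detection result data 150."""
    touching: bool                                       # whether the finger 30 touches the operation surface 22
    touch_point: Optional[Tuple[float, float]] = None    # touch point coordinates on the operation surface 22
```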
(The drawings referenced in the following description show (a) a state in which the finger 30 is not touching the operation surface 22 and (b) a state in which the finger 30 is touching it.)
In state (a), in which the finger 30 is not touching the operation surface 22, the two shadows 401 and 402 are separate from each other, so the shortest distance d between the two contours 501 and 502 is also a large value. Meanwhile, in state (b), in which the finger 30 is touching the operation surface 22, the two shadows 401 and 402 have approached each other, and the two contours 501 and 502 have also approached each other; the shortest distance therebetween becomes the smallest value d0 at the end sections 501a and 502a on the fingertip side. Consequently, by defining a predetermined threshold value dth (here, dth>d0) and determining whether or not the shortest distance d between the contours 501 and 502 is within the threshold value dth, it is possible to determine whether or not the finger 30 is touching the operation surface 22. In this example, because the contours 501 and 502 are extracted from the outer side portions of the two shadows 401 and 402, the change in the distance between the contours is more pronounced than when extraction is performed from the inner side portions, and touch detection accuracy improves as a result.
If it has been determined that the finger 30 has touched the operation surface 22, a center point P of the end sections 501a and 502a of the contours 501 and 502 is determined as the touch point, and the coordinate values of the touch point on the operation surface 22 are calculated. It should be noted that, in order to accurately calculate the coordinates of the touch point, a correction to offset the coordinates of the center point P in the fingertip direction by a predetermined amount may be implemented as required.
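Assuming the two contours are available as arrays of pixel coordinates, the distance test and the touch point estimate described above might be sketched as follows (the values of dth and of the fingertip offset, and all names, are placeholders rather than parameters given in the text):

```python
import numpy as np

D_TH = 6.0              # placeholder for the distance threshold dth (pixels)
FINGERTIP_OFFSET = 3.0  # placeholder for the correction toward the fingertip (pixels)

def detect_touch(contour1, contour2, d_th=D_TH):
    """Return (touching, touch_point) from two shadow contours.

    contour1/contour2: (N, 2) arrays of (x, y) points for the contours 501
    and 502.  The finger is judged to be touching when the shortest distance
    between the contours is at or below dth, and the touch point P is taken
    as the midpoint of the closest pair of points (roughly the end sections
    501a and 502a on the fingertip side).
    """
    # pairwise distances between every point of contour1 and every point of contour2
    dists = np.linalg.norm(contour1[:, None, :].astype(float) -
                           contour2[None, :, :].astype(float), axis=2)
    i, j = np.unravel_index(np.argmin(dists), dists.shape)
    if dists[i, j] > d_th:
        return False, None
    p = (contour1[i] + contour2[j]) / 2.0
    p[1] -= FINGERTIP_OFFSET  # optional offset toward the fingertip (assumes the fingertip points up)
    return True, (float(p[0]), float(p[1]))
```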
The left-side contour 50L is obtained, for example, by detecting the presence or absence of shadow pixels along scanning lines going from the left to the right side within the screen 90; likewise, when the right-side contour is to be obtained, the presence or absence of shadow pixels is detected along scanning lines 92 (dotted lines) going from the right to the left side within the screen 90. The right-side contour 50R depicted in (c) is obtained in this way. Curved portions such as fingertips are then removed from the contours 50L and 50R obtained in this way, and contours made up of substantially linear line segments are detected. It should be noted that the aforementioned method is an example, and another algorithm may be used for the detection of contours.
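A minimal sketch of this horizontal scanning, assuming a binary shadow mask and leaving the removal of the curved fingertip portion out of scope, could be:

```python
import numpy as np

def side_contour(shadow_mask, side="right"):
    """Collect one contour point per image row by horizontal scanning.

    For the right-side contour each row is scanned from right to left and the
    first shadow pixel found is recorded (and vice versa for the left side),
    which yields the outer border of the shadow on that side.
    """
    points = []
    for y in range(shadow_mask.shape[0]):
        xs = np.flatnonzero(shadow_mask[y])
        if xs.size:
            x = xs.max() if side == "right" else xs.min()
            points.append((int(x), y))
    return np.array(points)
```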
In S1000, the operation detection device 1 starts processing for detecting the touch point of a finger. Illumination light is irradiated from the two illuminations 101 and 102 due to an instruction from the control unit 120, and the operation surface is captured by the camera 100.
In S1001, the shadow region extraction unit 104 subtracts the background from the image captured by the camera 100 to obtain a difference image, and extracts portions where the brightness is equal to or less than the threshold value Lth as shadow regions. In S1002, the shadow region extraction unit 104 performs so-called labeling processing, in which extracted shadow regions that are not mutually connected are each discerned as separate shadows.
In S1003, the contour detection unit 105 detects contours with respect to the shadows that have been subjected to labeling processing, for example by the scanning method described above.
In S1004, the touch point detection unit 106 determines whether or not there is a place where the shortest distance d between the detected contours 501 and 502 is equal to or less than the threshold value dth. This threshold value dth is defined in such a way that the distance d0 between the end sections 501a and 502a of the contours when a finger is touching can be discriminated.
In S1005, the touch point detection unit 106 determines, as a touch point, the center point P of the place (place where the shortest distance d is equal to or less than the threshold value dth) detected in S1004, and calculates the coordinate values of the touch point on the operation surface 22. The output unit 130 outputs the calculated touch point coordinates as detection result data 150.
In S1006, it is determined whether the touch point detection is to be continued due to an instruction or the like from the user, and if the touch point detection is to be continued, processing returns to S1001 and the aforementioned processing is repeated.
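Chaining the helper sketches shown earlier, one plausible main loop corresponding to S1001 through S1006 could look like the following; the camera and output objects are purely hypothetical, and treating labels 1 and 2 as the first and second shadows of a single finger is a simplification of this sketch:

```python
def detection_loop(camera, output, background, keep_running):
    """Hypothetical main loop mirroring steps S1001 to S1006."""
    while keep_running():                                       # S1006: continue on user instruction
        frame = camera.capture()                                # capture with both illuminations on
        _, labels = extract_shadow_regions(frame, background)   # S1001-S1002: extract and label shadows
        c1 = detect_shadow_contours(labels, 1)                  # S1003: contour of the first shadow
        c2 = detect_shadow_contours(labels, 2)                  #        and of the second shadow
        if len(c1) and len(c2):
            touching, point = detect_touch(c1, c2)              # S1004-S1005: distance test and touch point
        else:
            touching, point = False, None
        output.send(DetectionResult(touching, point))           # output unit 130 emits the result data 150
```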
As described above, the operation detection device of the first embodiment uses one camera and two illuminations to detect the contours of two shadows projected by the two illuminations. A touch point is then detected from a place where the shortest distance between the contours has approached within a predetermined distance, and the coordinates of the touch point are output. In this method, because the touches of a plurality of fingers can each be independently detected when the hand is open, it is possible for detection to be performed correctly even with respect to a multi-touch operation.
The operation target device 2 has been described using the example of a projector; however, the operation target device 2 may also be a general display, a head-mounted display, or the like. Furthermore, the operation surface is not restricted to a screen, and may be any type of surface, such as a wall surface or a table.
In a second embodiment, a configuration is implemented in which the touch point of a finger is detected by alternately turning on the two illuminations 101 and 102 of the operation detection device 1.
A feature of the second embodiment is that, because shadows are extracted while the illuminations are turned on alternately, the two shadows can be temporally separated and detected even if they partially overlap. Therefore, a touch point can be correctly detected even if the two illuminations 101 and 102 are installed on the same side of the camera 100 and the two shadows are formed on the same side of the finger and partially overlap. In the following example, a description is given with respect to the case where the illuminations 101 and 102 are installed on the same side of the camera 100. It goes without saying that the second embodiment is also effective when the two illuminations 101 and 102 are installed on mutually opposite sides of the camera 100.
When the illumination 101 is turned on in frame 1, a shadow 401 of the finger 30 is formed, and when the illumination 102 is turned on in frame 2, the shadow 402 is formed. Both of the shadows 401 and 402 are on the same side (the right side in the drawings) of the finger 30.
The contours 501 and 502 of the shadows in this case are both detected on the right side of the respective shadows 401 and 402, as described in the processing flow below.
In S1100, the operation detection device 1 starts processing for detecting the touch point of a finger.
In S1101, at the timing of frame 1, the illumination 101 is turned on due to an instruction from the control unit 120, and an image is captured by the camera 100. The shadow region extraction unit 104 extracts, from the captured image, the shadow 401 formed on the right side of the finger 30. In S1102, the contour detection unit 105 detects the contour 501 on the right side of the shadow 401.
In S1103, at the timing of frame 2, the illumination 102 is turned on due to an instruction from the control unit 120, and an image is captured by the camera 100. The shadow region extraction unit 104 extracts, from the captured image, the shadow 402 formed on the right side of the finger 30. In S1104, the contour detection unit 105 detects the contour 502 on the right side of the shadow 402.
In S1105, the touch point detection unit 106 determines whether or not there is a place where the shortest distance d′ between the detected contours 501 and 502 is equal to or less than the threshold value dth′. This threshold value dth′ is defined in such a way that the distance d0′ between the contours when a finger is touching can be discriminated.
In S1106, the touch point detection unit 106 determines, as the touch point, a point P′ in the vicinity (on the left side) of the place detected in S1105 (the place where the shortest distance d′ is equal to or less than the threshold value dth′), and calculates the coordinate values of the touch point on the operation surface 22. The output unit 130 outputs the calculated touch point coordinates as the detection result data 150.
In S1107, it is determined whether the touch point detection is to be continued due to an instruction or the like from the user, and if the touch point detection is to be continued, processing returns to S1101 and the aforementioned processing is repeated.
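One way to sketch the frame-alternating capture of S1101 through S1106 is shown below; the lamp and camera interfaces are hypothetical, `extract` stands for any routine returning a binary shadow mask, `side_contour` is the scanline helper sketched earlier, and the threshold and left-side offset values are placeholders:

```python
import numpy as np

D_TH2 = 6.0  # placeholder for the distance threshold dth'

def detect_touch_alternating(camera, lamp1, lamp2, background, extract, d_th=D_TH2):
    """Hypothetical single pass through the flow S1101 to S1106.

    The shadows 401 and 402 are captured in two consecutive frames, one per
    illumination, so that they can be separated even if they would overlap.
    """
    lamp1.on(); lamp2.off()
    shadow1 = extract(camera.capture(), background)   # frame 1: shadow 401 from illumination 101
    lamp1.off(); lamp2.on()
    shadow2 = extract(camera.capture(), background)   # frame 2: shadow 402 from illumination 102
    c1 = side_contour(shadow1, side="right")          # contour 501 on the right side of shadow 401
    c2 = side_contour(shadow2, side="right")          # contour 502 on the right side of shadow 402
    if len(c1) == 0 or len(c2) == 0:
        return None
    dists = np.linalg.norm(c1[:, None, :].astype(float) - c2[None, :, :].astype(float), axis=2)
    i, j = np.unravel_index(np.argmin(dists), dists.shape)
    if dists[i, j] > d_th:
        return None                                   # not touching
    p = (c1[i] + c2[j]) / 2.0
    p[0] -= 1.0   # placeholder nudge toward the left-side vicinity P'; the exact offset is not specified
    return float(p[0]), float(p[1])
```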
In the aforementioned processing flow, a description has been given with respect to the case where the two illuminations 101 and 102 are installed on the same side of the camera 100; however, when the two illuminations 101 and 102 are installed on opposite sides of the camera 100, it is preferable for the contour 501 on the left side of the shadow 401 and the contour 502 on the right side of the shadow 402 to be detected, as in the first embodiment.
As described above, the operation detection device of the second embodiment uses one camera and two illuminations and alternately turns on the two illuminations to thereby detect the contours of two shadows projected by each of the two illuminations. A place where the shortest distance between the contours has approached within a predetermined distance is then determined as a touch point, and the coordinates of the touch point are output. In the second embodiment, because the two shadows are able to be temporally separated and extracted, it is possible for detection to be performed correctly even if the two illuminations are installed on the same side of the camera. Therefore, the degree of freedom with regard to the installation of the illuminations increases.
In a third embodiment, a configuration is implemented in which the touch point of a finger is detected by using a plurality of illuminations for the illuminations of the operation detection device 1.
As depicted in (b), when the finger 30 is touching the operation surface 22, the plurality of shadows 401 to 408 concentrate at the fingertip 30a. As a result, a portion in which the plurality of shadows overlap is produced in the vicinity of the fingertip 30a. When the shadows overlap, the density (darkness) of the shadows increases in accordance with the number of overlapping shadows, and in the regions 409′, all (four shadows on each side) of the shadows overlap and the shadow density becomes the maximum. Maximum shadow density sections 409′ are extracted by defining a brightness threshold value Lth′ based on the fact that the shadow brightness in the maximum shadow density sections 409′ is the lowest.
In (c), the maximum shadow density sections 409′ (where the shadow brightness is equal to or less than the threshold value Lth′) are extracted on both sides of the finger, and a region 410 surrounding these is defined as a touch region. The highest fingertip-side section P″ of the touch region 410 is then determined as the touch point of the finger.
In S1200, the operation detection device 1 turns on the plurality of illuminations 103, and starts processing for detecting the touch point of a finger.
In S1201, the shadow region extraction unit 104 extracts a plurality of shadows from an image captured by the camera 100. At this time, the shadow region extraction unit 104 sets the brightness threshold value Lth′ used for shadow extraction in such a way that the maximum shadow density sections 409′, in which a plurality of shadows overlap, are extracted.
In S1202, the shadow region extraction unit 104 determines whether or not the maximum shadow density sections 409′ have been extracted. If the sections 409′ have been extracted, processing advances to S1203; if not, processing returns to S1201.
In S1203, the contour detection unit 105 determines the region 410 that surrounds the maximum shadow density sections 409′ extracted in S1202. This region 410 becomes the finger touch region.
In S1204, the touch point detection unit 106 determines, as a touch point, the highest fingertip-side section P″ of the region 410 extracted in S1203, and calculates the coordinate values of the touch point on the operation surface 22.
In S1205, it is determined whether the touch point detection is to be continued due to an instruction or the like from the user, and if the touch point detection is to be continued, processing returns to S1201 and the aforementioned processing is repeated.
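A minimal sketch of the density-based flow S1201 through S1204, assuming a grayscale frame in which darker pixels mean more overlapping shadows, a fingertip pointing toward the top of the image, and a placeholder value for Lth′, could be:

```python
import numpy as np

L_TH_DENSE = 30  # placeholder for the brightness threshold Lth' (maximum shadow density)

def detect_touch_by_density(frame_gray, l_th=L_TH_DENSE):
    """Return the touch point P'' or None, following S1201 to S1204.

    Pixels at or below Lth' are taken as the maximum shadow density sections
    409'; the bounding region 410 around them is the touch region, and its
    topmost, fingertip-side point is returned as the touch point.
    """
    dense = frame_gray <= l_th
    ys, xs = np.nonzero(dense)
    if ys.size == 0:
        return None                                    # S1202: no maximum-density section extracted
    top, left, right = ys.min(), xs.min(), xs.max()    # region 410 as a bounding box
    # P'': horizontal center of the region at its topmost row (an assumption of this sketch)
    return (left + right) / 2.0, float(top)
```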
As described above, the operation detection device of the third embodiment uses one camera and a plurality of illuminations to detect the portion in which the density of the shadows projected by the plurality of illuminations is at its maximum, and a touch point can be determined from this portion. This procedure can be applied whenever a plurality of shadows overlap, and the number of illuminations may be an arbitrary number equal to or greater than two. In the present embodiment, because the touch point can be detected merely by evaluating the density (brightness) of the shadows, few processing steps are required and the detection speed increases.
In a fourth embodiment, a description is given with respect to the configuration of a projector that has the aforementioned operation detection device 1 incorporated therein.
The central processing unit 201 is configured from a semiconductor chip such as a central processing unit (CPU) and software such as an operating system (OS), and, on the basis of user operations and so forth detected by the operation analysis unit 202, controls the input and output of information to the memory 203 and each of the units such as the video control unit 204 and the video projection unit 210.
The operation analysis unit 202 is configured from a circuit board or software or the like, and on the basis of the coordinates of a touch point obtained from the output unit 130 of the operation detection device 1, detects a user operation with respect to projected video by determining the correspondence between the video being projected and the touch point.
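The text does not state how the correspondence between the camera coordinates of a touch point and the projected video is established; one common way to realize such a mapping, given purely as an assumption, is a perspective (homography) transform calibrated from the four corners of the projected video 23 as seen by the camera:

```python
import cv2
import numpy as np

def make_camera_to_video_mapper(corners_in_camera, video_width, video_height):
    """Build a hypothetical camera-to-video coordinate mapper.

    corners_in_camera: the four corners of the projected video 23 as they
    appear in the camera image, ordered top-left, top-right, bottom-right,
    bottom-left (obtained by a calibration step not described in the text).
    """
    src = np.float32(corners_in_camera)
    dst = np.float32([[0, 0], [video_width, 0],
                      [video_width, video_height], [0, video_height]])
    homography = cv2.getPerspectiveTransform(src, dst)

    def to_video(point):
        p = np.float32([[point]])                             # shape (1, 1, 2) as perspectiveTransform expects
        return tuple(cv2.perspectiveTransform(p, homography)[0, 0])

    return to_video
```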
The memory 203 is configured from a semiconductor and so forth, and stores information required for calculations and control performed by the central processing unit 201, and video information and so forth that is displayed as projected video.
The video control unit 204 is configured from a circuit board and so forth, performs calculation processing required for drawing video information, in accordance with control performed by the central processing unit 201, and outputs drawing information made up of a set of pixels, in a format suitable for input to the video projection unit 210.
The video projection unit 210 is configured from a light source such as a lamp, optical components such as a lens and a reflection mirror, and a liquid crystal panel and so forth, modulates beams of light emitted from the light source, forms image light corresponding to the drawing information sent from the video control unit 204, and expands and projects the image light onto a projection surface such as a screen.
When the user 3 touches an arbitrary place of the projected video 23 with a finger 30, the operation detection device 1 detects a touch point from a shadow image of the finger, and sends the detection data to the operation analysis unit 202 by way of the central processing unit 201. The operation analysis unit 202 analyses the operation content with respect to the projected video 23, and the central processing unit 201 executes processing such as a video alteration corresponding to the user operation. By incorporating the operation detection device 1 inside the projector 2a in this way, the user is able to efficiently perform an operation with respect to projected video, and, particularly in the present embodiment, is able to suitably perform a multi-touch operation.
Furthermore, an illumination 101 and an illumination 102 are attached to both ends of the spectacles-type housing, and a camera 100 is attached to the center of the housing; this makes it possible to irradiate the operation surface 22 that is in the line of sight of the user, and also to capture a finger operation performed by the user on the operation surface 22 and to detect the touch point thereof.
Therefore, the video 23 projected onto the lens surfaces of the spectacles by the small projector main body 20 and the operation surface 22 on which the user performs an operation overlap in the field of vision of the user, so that the video appears to be displayed on the operation surface 22. In other words, when the small projector main body 20 displays video, the user is able to perform a multi-touch operation with respect to the displayed video by touching the operation surface 22 with a fingertip.
As described above, by incorporating the operation detection device inside the projector, an effect is obtained in that it is possible for projected video to be operated in a multi-touch manner without providing a sensor or the like on a video projection surface.
It should be noted that the present embodiments described above are exemplifications for describing the present invention, and are not intended to restrict the scope of the present invention to only the embodiments.
Matsubara, Takashi, Mori, Naoki, Narikawa, Sakiko
Cited By:
| Patent | Priority | Assignee | Title |
| 10438370 | Jun 14, 2016 | Disney Enterprises, Inc. | Apparatus, systems and methods for shadow assisted object recognition and tracking |

References Cited:
| Patent | Priority | Assignee | Title |
| 4468694 | Dec 30, 1980 | International Business Machines Corporation | Apparatus and method for remote displaying and sensing of information using shadow parallax |
| 7242388 | Jan 08, 2001 | VKB Inc. | Data input device |
| 7893924 | Jan 08, 2001 | VKB Inc. | Data input device |
| US 2004/0108990 | | | |
| US 2008/0060854 | | | |
| US 2012/0249422 | | | |
| US 2013/0033614 | | | |
| US 2013/0088461 | | | |
| US 2014/0015950 | | | |
| US 2014/0253512 | | | |
| US 2014/0267031 | | | |
| US 2015/0102993 | | | |
| US 2015/0261385 | | | |
| JP 2008-59283 A | | | |
| JP 2011-180712 A | | | |
Assignments:
| Executed on | Assignor | Assignee | Conveyance | Reel/Frame |
| Feb 24, 2014 | — | Hitachi Maxell, Ltd. | Assignment on the face of the patent | — |
| Feb 25, 2014 | MATSUBARA, TAKASHI | Hitachi Maxell, Ltd. | Assignment of assignors interest (see document for details) | 032443/0514 |
| Feb 25, 2014 | NARIKAWA, SAKIKO | Hitachi Maxell, Ltd. | Assignment of assignors interest (see document for details) | 032443/0514 |
| Feb 25, 2014 | MORI, NAOKI | Hitachi Maxell, Ltd. | Assignment of assignors interest (see document for details) | 032443/0514 |
| Oct 01, 2017 | Hitachi Maxell, Ltd. | MAXELL, LTD. | Assignment of assignors interest (see document for details) | 045142/0208 |
| Oct 01, 2021 | MAXELL, LTD. | MAXELL HOLDINGS, LTD. | Merger (see document for details) | 058255/0579 |
| Oct 01, 2021 | MAXELL HOLDINGS, LTD. | MAXELL, LTD. | Change of name (see document for details) | 058666/0407 |
Maintenance Fee Events:
| Date | Event |
| Dec 02, 2019 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity |
| Dec 06, 2023 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity |

Maintenance Schedule:
| Anniversary | Fee payment window opens | 6-month grace period (with surcharge) starts | Patent expiry if fee not paid | Deadline to revive if unintentionally abandoned |
| Year 4 | Jun 14, 2019 | Dec 14, 2019 | Jun 14, 2020 | Jun 14, 2022 |
| Year 8 | Jun 14, 2023 | Dec 14, 2023 | Jun 14, 2024 | Jun 14, 2026 |
| Year 12 | Jun 14, 2027 | Dec 14, 2027 | Jun 14, 2028 | Jun 14, 2030 |