A handheld projecting device that illuminates a visible image comprised of one or more video objects that appear to interact, connect, and/or form paths with physical objects and/or other video objects within a user's environment. In at least one embodiment, a projecting device animates and projects one or more video objects that appear to responsively adapt to the position and orientation of at least one physical object for unique graphic and tactile effects. In some embodiments, a projecting device may connect together physical objects and video objects to form a user-defined object path, such as a racetrack or adventure trail. Video objects may be rendered and illuminated as three-dimensional objects to simulate real world objects.
8. A computer-implemented method for presenting video frames projected by a handheld projecting device, comprising the steps of:
obtaining a spatial view located in front of the handheld projecting device;
determining a position and orientation of at least one physical object in the spatial view by the handheld projecting device; and
rendering one or more video frames comprised of one or more video objects that are projected by the handheld projecting device, wherein the one or more video objects appear to responsively adapt to the position and orientation of the at least one physical object in the spatial view.
17. A computer-readable storage medium comprised of computer-executable instructions that, when processed by a control unit, perform an operation for presenting video frames projected by a handheld projecting device, the operation comprising:
obtaining a spatial view located in front of the handheld projecting device;
determining a position and orientation of at least one physical object in the spatial view by the handheld projecting device; and
rendering one or more video frames comprised of one or more video objects that are projected by the handheld projecting device, wherein the one or more video objects appear to responsively adapt to the position and orientation of the at least one physical object in the spatial view.
1. An interactive image projecting system, comprising:
at least one physical object; and
a handheld projecting device including:
an outer housing;
a control unit within the outer housing;
an image projector operatively coupled to the control unit and operable to project a visible image comprised of one or more video objects generated by the control unit;
an image sensor operatively coupled to the control unit and operable to observe a spatial view including the at least one physical object located on a remote surface; and
an object tracker operable to analyze the observed spatial view including the at least one physical object and determine a position and orientation of the at least one physical object from the observed spatial view,
wherein the control unit modifies the visible image comprised of one or more video objects based upon the position and orientation of the at least one physical object relative to the projected visible image, such that the one or more video objects adapt to the position and orientation of the at least one physical object.
2. The device of
3. The device of
4. The device of
5. The device of
6. The device of
7. The device of
9. The method of
10. The method of
11. The method of
12. The method of
13. The method of
15. The method of
16. The method of
18. The computer-readable storage medium of
19. The computer-readable storage medium of
20. The computer-readable storage medium of
21. The computer-readable storage medium of
22. The computer-readable storage medium of
23. The computer-readable storage medium of
24. The computer-readable storage medium of
25. The computer-readable storage medium of
The present disclosure generally relates to handheld projecting devices. In particular, the present disclosure relates to handheld projecting devices that illuminate a visible image comprised of one or more video objects that appear to interact, connect, and/or form paths with physical objects and/or other video objects within a user's environment.
Presently, there are many types of handheld video devices that enable a player/user to control a display image. One type of highly popular video console is the Wii game machine and controller manufactured by Nintendo, Inc. of Japan. This game system enables a user to interact with an animated image shown on a display, such as a TV screen, by waving a wireless controller through the air. However, the system does not enable its graphic image to extend beyond the display's boundaries and connect with physical objects found in the user's environment.
Other types of electronic display devices are available, such as tablet computers and handheld game devices. However, such displays are likewise incapable of extending and connecting a graphic display to household objects in the user's environment.
Projected image displays are also available, as manufacturers are embedding compact image projectors (often called “pico” projectors) into handheld devices, such as cameras and mobile phones. However, the present focus of these projecting devices is simply to project images, rather than to use the projectors for illuminating video objects (such as characters, avatars, etc.) that connect and form paths with physical objects and other video objects.
Furthermore, typical handheld projecting devices, when held obliquely to a remote surface, produce unrealistic images that suffer from shape and keystone distortion, as well as brightness hotspots.
Therefore, an opportunity exists for the use of handheld projecting devices that illuminate video objects that appear to interact, connect, and form paths with physical objects and other video objects within a user's environment, creating a uniquely realistic, visual and tactile experience.
The present disclosure relates to handheld projecting devices that illuminate a visible image comprised of one or more video objects that appear to interact, connect, and/or form paths with physical objects and/or other video objects within a user's environment.
In some embodiments, a handheld projecting device is moved through space while projecting a visible image comprised of one or more video objects onto a remote surface, creating a virtual world. A video object may be comprised of any type of graphically represented object—such as a character, avatar, vehicle, item, path, etc.—which may be animated and projected by the projecting device. Moreover, the projecting device may sense the surrounding three-dimensional (3D) real world comprised of physical objects, such as walls, furniture, people, and other items. Such a capability enables the projecting device to illuminate one or more video objects that appear to interact, connect, and/or form paths with physical objects and/or other video objects in the user's environment.
In some embodiments, a projecting device is comprised of a control unit (such as a microprocessor) that provides computing abilities for the device, supported by one or more executable programs contained in a computer readable storage media, such as a memory.
The projecting device is further comprised of an image projector, such as a compact pico projector. In some embodiments, the image projector may illuminate a “full-color” visible image comprised of one or more video objects to the delight of a user.
The device is further comprised of an image sensor, such as a camera-based image sensor, capable of observing a spatial view of one or more physical objects within the user's environment.
The projecting device may also include a sound generator and a haptic generator to create audible sound and vibration effects in response to the movement of one or more video objects and physical objects within the environment.
In certain embodiments, an interactive image projecting system may be comprised of at least one handheld projecting device and one or more physical objects. The physical objects may be stationary and/or moveable objects, including handheld physical objects. Examples of stationary physical objects include, but are not limited to, a wall, ceiling, floor, and tabletop. Examples of moveable physical objects may include a rolling vehicle, floating balloon, 2D picture, token, or playing card, while examples of moveable handheld physical objects may include a wristwatch, book, toothbrush, or key.
In some embodiments, physical objects are affixed with object tags to create tagged physical objects that are identifiable. Examples of object tags include barcodes, infrared emitting markers, and other optically discernible fiducial markers.
In certain embodiments, a handheld projecting device and its image sensor may observe a tagged physical object such that the device determines (e.g., using computer vision techniques) the position and/or orientation of the tagged physical object in 3D space. Whereupon, the projecting device can illuminate one or more video objects that appear to adapt to the position and orientation of the physical object. For example, in one embodiment, a handheld projecting device illuminates a video object of a dog near a physical object of a food dish. Whereby, the device connects the video object (of the dog) to the physical object (of the food dish) such that the dog appears to eat from the food dish. That is, the video object (of the eating dog) appears to remain substantially fixed to the physical object (of the food dish) irrespective of device movement.
Video objects can also connect to moveable physical objects. In one embodiment, a projecting device projects a first video object of a dog having its teeth brushed. When a user moves a handheld toothbrush back and forth in the vicinity, a second video object of bubbles connects and remains affixed to the moving toothbrush, cleaning the dog's white teeth.
In at least one embodiment, a handheld projecting device illuminates a video object of a motorcycle. Physical objects, such as a ramp and a “360 degree” loop, may be arbitrarily located and oriented (e.g., upside down, tilted, or right side up) on a wall, ceiling, floor, table top, etc. Then in operation, when the projecting device is moved, the video object of the motorcycle adapts and is oriented according to the ramp orientation, moving up the incline and leaping into the air. When the projecting device is moved over the “360 degree” loop, the motorcycle connects to the loop, spinning the motorcycle around the physical loop, before disconnecting and exiting from the loop.
In some embodiments, a handheld projecting device may connect a plurality of video objects and physical objects to form an object path. In one embodiment, a first and a second handheld projecting device each illuminate a video object of a motorcycle. A racing game may then be constructed of physical objects (e.g., a ramp, a hoop of fire, and a 360-degree loop) that are placed on multiple surfaces. In operation, a user/player may move the first device to trace out a user-defined object path to connect the physical objects for the racing game. Then during the race, players move their projecting devices and motorcycles along the object path between and over the physical objects. If a player drives off course, the projecting device presents video text reading “Disqualified Rider”, and the player loses the race.
Some embodiments of the invention will now be described by way of example with reference to the accompanying drawings:
One or more specific embodiments will be discussed below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that when actually implementing embodiments of this invention, as in any product development process, many decisions must be made. Moreover, it should be appreciated that such a design effort could be quite labor intensive, but would nevertheless be a routine undertaking of design and construction for those of ordinary skill having the benefit of this disclosure. Some helpful terms used in this discussion are defined below:
The terms “a”, “an”, and “the” refer to one or more items. Where only one item is intended, the term “one”, “single”, or similar language is used. Also, the term “includes” means “comprises”. The term “and/or” refers to any and all combinations of one or more of the associated list items.
The terms “adapter”, “analyzer”, “application”, “circuit”, “component”, “control”, “interface”, “method”, “module”, “program”, and like terms are intended to include hardware, firmware, and/or software.
The term “barcode” refers to any optical machine-readable representation of data, such as one-dimensional (1D) or two-dimensional (2D) barcodes, or symbols.
The term “computer readable medium” or the like refers to any kind of medium for retaining information in any form or combination of forms, including various kinds of storage devices (e.g., magnetic, optical, and/or solid state). The term “computer readable medium” may also include transitory forms of representing information, including various hardwired and/or wireless links for transmitting the information from one point to another.
The terms “connect an object”, “connecting objects”, “object connection”, and like terms refer to an association formed between or among objects, and do not always imply that the objects physically connect or overlap.
The term “haptic” refers to tactile stimulus presented to a user, often provided by a vibrating or haptic device when placed near the user's skin. A “haptic signal” refers to a signal that activates a haptic device.
The terms “key”, “keypad”, “key press”, and like terms are meant to broadly include all types of user input interfaces and their respective actions, such as, but not limited to, a gesture-sensitive camera, a touch pad, a keypad, a control button, a trackball, and/or a touch sensitive display.
The term “multimedia” refers to media content and/or its respective sensory action, such as, but not limited to, video, graphics, text, audio, haptic, user input events, program instructions, and/or program data.
The term “operatively coupled” refers to, but not limited to, a wireless and/or a wired means of communication between items, unless otherwise indicated. Moreover, the term “operatively coupled” further refers to a direct coupling between items and/or an indirect coupling between items via an intervening item or items (e.g., an item includes, but not limited to, a component, a circuit, a module, and/or a device). The term “wired” refers to any type of physical communication conduit (e.g., electronic wire, trace, or optical fiber).
The term “optical” refers to any type of light or usage of light, both visible (e.g., white light) and/or invisible light (e.g., infrared light), unless specifically indicated.
The term “video” generally refers to the creation or projection of images for a projected display, typically a sequence of still images that create an animated image.
The term “video frame” refers to a single still image.
The term “video object” refers to an object (e.g., character, avatar, vehicle, item, path, etc.) that is graphically represented within an image or animated sequence of images.
The present disclosure illustrates examples of operations and methods used by the various embodiments described. Those of ordinary skill in the art will readily recognize that certain steps or operations described herein may be eliminated, taken in an alternate order, and/or performed concurrently. Moreover, the operations may be implemented as one or more software programs for a computer system and encoded in a computer readable medium as instructions executable on one or more processors. The software programs may also be carried in a communications medium conveying signals encoding the instructions. Separate instances of these programs may be executed on separate computer systems. Thus, although certain steps have been described as being performed by certain devices, software programs, processes, or entities, this need not be the case and a variety of alternative implementations will be understood by those having ordinary skill in the art.
The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings identify the same or similar elements.
Handheld Projecting Device
So turning first to
Thereshown in
Turning now to
The outer housing 162 may be of handheld size (e.g., 70 mm wide×110 mm deep×20 mm thick) and made of, for example, easy to grip plastic. The housing 162 may be constructed in any shape, such as a rectangular shape (as in
Affixed to a front end 164 of device 100 is the image projector 150, which may be operable to, but not limited to, project a “full-color” (e.g., red, green, blue) image of visible light. Projector 150 may be of compact size, such as a micro or pico projector. The image projector 150 may be comprised of a digital light processor (DLP)-, a liquid-crystal-on-silicon (LCOS)-, or a laser-based image projector, although alternative image projectors may be used as well. Projector 150 may be operatively coupled to the control unit 110 such that the control unit 110, for example, can generate and transmit image graphic data to projector 150 for display.
The infrared image sensor 156 may be affixed to the front end 164 of device 100, wherein sensor 156 may be operable to observe a spatial view of the environment and capture one or more image view frames. Image sensor 156 may be operatively coupled to control unit 110 such that control unit 110, for example, may receive and process captured image data. Sensor 156 may be comprised of at least one of a photo diode-, a photo detector-, a photo detector array-, a complementary metal oxide semiconductor (CMOS)-, a charge coupled device (CCD)-, or an electronic camera-based image sensor that is sensitive to at least infrared light, although other types of image sensors may be considered. In some embodiments, sensor 156 may be a 3D depth camera, often referred to as a ranging, lidar, time-of-flight, stereo pair, or RGB-D camera, which creates a 3-D spatial depth view frame. In the current embodiment, image sensor 156 may be comprised of a CMOS- or a CCD-based video camera that is sensitive to at least infrared light. Moreover, image sensor 156 may optionally include an infrared pass-band filter, such that only infrared light is sensed (while other light, such as visible light, is blocked from view). The image sensor 156 may optionally include a global shutter or high-speed panning shutter for reduced image motion blur.
The infrared light emitter 158 may be optionally included (as denoted by a dashed line) with device 100, enhancing infrared illumination for low ambient light conditions, such as a dark or unlit room. Emitter 158 may be operatively coupled to control unit 110 such that control unit 110, for example, may modulate the emitter's 158 generation of infrared light. The infrared illuminating emitter 158 may be comprised of, but not limited to, at least one of an infrared light emitting diode, an infrared laser, or an infrared light source, such that emitter 158 generates at least infrared light. In some alternative embodiments, the light emitter 158 may be integrated with the image projector 150.
The motion sensor 120 may be affixed to device 100, providing inertial awareness. Whereby, motion sensor 120 may be operatively coupled to control unit 110 such that control unit 110, for example, may receive spatial position and/or movement data. Motion sensor 120 may be operable to detect spatial movement and transmit a movement signal to control unit 110. Moreover, motion sensor 120 may be operable to detect a spatial position and transmit a position signal to control unit 110. The motion sensor 120 may be comprised of, but not limited to, at least one of an accelerometer, a magnetometer (e.g., electronic compass), a gyroscope, a spatial triangulation sensor, and/or a global positioning system (GPS) receiver. Advantages exist for motion sensing in 3D space; wherein, a 3-axis accelerometer and/or 3-axis gyroscope may be utilized.
The user interface 116 provides a means for a user to input information to the device 100. For example, the user interface 116 may generate one or more user input signals when a user actuates (e.g., presses, touches, taps, or hand gestures) the user interface 116. The user interface 116 may be operatively coupled to control unit 110 such that control unit 110 may receive one or more user input signals and respond accordingly. User interface 116 may be comprised of, but not limited to, one or more control buttons, keypads, touch pads, rotating dials, trackballs, touch-sensitive displays, and/or hand gesture-sensitive devices.
The communication interface 118 provides wireless and/or wired communication abilities for device 100. Communication interface 118 is operatively coupled to control unit 110 such that control unit 110, for example, may receive and transmit data from/to other remote devices, such as other handheld projecting devices. Communication interface 118 may be comprised of, but not limited to, a wireless transceiver, data transceivers, processing units, codecs, and/or antennae, as illustrative examples. For wired communication, interface 118 provides one or more wired interface ports (e.g., universal serial bus (USB) port, a video port, a serial connection port, an IEEE-1394 port, an Ethernet or modem port, and/or an AC/DC power connection port). For wireless communication, interface 118 may use modulated electromagnetic waves of one or more frequencies (e.g., RF, infrared, etc.) and/or modulated audio waves of one or more frequencies (e.g., ultrasonic, etc.). Interface 118 may use various wired and/or wireless communication protocols (e.g., TCP/IP, WiFi, Zigbee, Bluetooth, Wireless USB, Ethernet, Wireless Home Digital Interface (WHDI), Near Field Communication, and/or cellular telephone protocol).
The sound generator 112 may provide device 100 with audio or sound generation capability. Sound generator 112 may be operatively coupled to control unit 110, such that control unit 110, for example, can control the generation of sound from device 100. Sound generator 112 may be comprised of, but not limited to, audio processing units, audio codecs, audio synthesizer, and/or at least one sound generating element, such as a loudspeaker.
The haptic generator 114 provides device 100 with haptic signal generation and output capability. Haptic generator 114 may be operatively coupled to control unit 110 such that control unit 110, for example, may control and enable vibration effects of device 100. Haptic generator 114 may be comprised of, but not limited to, vibratory processing units, codecs, and/or at least one vibrator (e.g., mechanical vibrator).
The memory 130 may be comprised of computer readable medium, which may contain, but not limited to, computer readable instructions. Memory 130 may be operatively coupled to control unit 110 such that control unit 110, for example, may execute the computer readable instructions. Memory 130 may be comprised of RAM, ROM, Flash, Secure Digital (SD) card, and/or hard drive, although other types of memory in whole, part, or combination may be used, including fixed and/or removable memory, volatile and/or nonvolatile memory.
Data storage 140 may be comprised of computer readable medium, which may contain, but not limited to, computer related data. Data storage 140 may be operatively coupled to control unit 110 such that control unit 110, for example, may read data from and/or write data to data storage 140. Storage 140 may be comprised of RAM, ROM, Flash, Secure Digital (SD) card, and/or hard drive, although other types of memory in whole, part, or combination may be used, including fixed and/or removable, volatile and/or nonvolatile memory. Although memory 130 and data storage 140 are presented as separate components, some embodiments of the projecting device may use an integrated memory architecture, where memory 130 and data storage 140 may be wholly or partially integrated. In some embodiments, memory 130 and/or data storage 140 may be wholly or partially integrated with control unit 110.
The control unit 110 may provide computing capability for device 100, wherein control unit 110 may be comprised of, for example, one or more central processing units (CPU) having appreciable processing speed (e.g., 2 GHz) to execute computer instructions. Control unit 110 may include one or more processing units that are general-purpose and/or special-purpose (e.g., multi-core processing units, graphic processors, and/or related chipsets). The control unit 110 may be operatively coupled to, but not limited to, sound generator 112, haptic generator 114, user interface 116, communication interface 118, motion sensor 120, memory 130, data storage 140, image projector 150, image sensor 156, and light emitter 158. Although an architecture to connect components of device 100 has been presented, alternative embodiments may rely on alternative bus, network, and/or hardware architectures.
Finally, device 100 may include a power source 160, providing energy to one or more components of device 100. Power source 160 may be comprised, for example, of a portable battery and/or a power cable attached to an external power supply. In the current embodiment, power source 160 is a rechargeable battery such that device 100 may be mobile.
Computing Modules of the Projecting Device
The operating system 131 may provide the device 100 with basic functions and services, such as read/write operations with hardware components, including projector 150 and image sensor 156.
The object tracker 132 may enable the device 100 to track the position, orientation, and/or velocity of video objects and/or tagged physical objects in the vicinity of device 100. That is, tracker 132 may capture one or more view frames from the image sensor 156 and analyze the view frames using computer vision techniques. Whereupon, the object tracker 132 may detect and identify object tags and corresponding physical objects in the environment. Tracker 132 may be comprised of, but not limited to, a tag reader, a barcode reader, and/or an optical light signal demodulator to identify object tags comprised of optical machine-readable representation of data.
The object connector 133 may enable the device 100 to connect one or more illuminated video objects to one or more physical objects and/or other video objects. Whereby, the object connector 133 may manage object relationships, such as, but not limited to, the spatial proximity and movement of the illuminated video objects relative to physical objects and other video objects.
The object path maker 134 may enable the device 100 to create an object path among physical objects and/or video objects. The object path may be auto-generated or user-defined, such that a user can create a desired path in 3D space. Once defined, the device 100 may generate a visible path that a user can follow, such as for a racing or adventure application.
The graphics engine 136 may be operable to render computer graphics of one or more video objects within data storage in preparation for projecting a visible image.
The application 138 is representative of one or more user applications, such as, but not limited to, electronic games and/or educational programs. Application 138 may be comprised of operations and data for video objects and physical objects.
For example, the application 138 may include object descriptions 127, which is a library of video object descriptions and/or physical object descriptions. Object descriptions may include information that describes both object behavior and object attributes, such as the spatial orientation, shape, and size, etc. of the objects.
The application 138 may further include object actions 129, which is a library of video object actions and/or physical object actions. Object actions 129 may describe the causal conditions that need to be met before objects can interact. In addition, object actions 129 may describe the resultant effects of object interactivity. That is, once interactivity is allowed, the object actions 129 may enable the generation of multimedia effects (e.g., graphics, sound, and haptic) for one or more video objects presented by the projecting device 100.
Finally, the application 138 may further include object multimedia 128, which is a library of video object graphics for rendering and animating a visible image comprised of one or more video objects. Object multimedia 128 may also include sound and haptic data for creating sound and haptic effects for the one or more video objects.
Computer Readable Data of the Projecting Device
For example, the view data 142 may retain one or more captured image view frames from image sensor 156 for pending view analysis.
The projector data 142 may provide storage for video frame data for the image projector 150. For example, the control unit 110 executing application 138 may render off-screen graphics of one or more video objects in projector data 142, such as a pet dog or cat, prior to visible light projection by projector 150.
The motion data 143 may store spatial motion data collected and analyzed from the motion sensor 120. Motion data 143 may define, for example, in 3D space the spatial acceleration, velocity, position, and/or orientation of the projecting device 100.
Object data 145 may provide general data storage for one or more video objects and/or physical objects, where each object instance may be defined with attributes, such as, but not limited to, an object identifier (e.g., object ID=“344”), an object type (e.g., video or physical), etc.
Object tracking data 146 may provide spatial tracking storage for one or more video objects and/or physical objects, where each object instance may be defined with attributes, such as, but not limited to, an object position, object orientation, object shape, object velocity, etc.
Object connect data 147 may define object connections made among a plurality of video objects and/or physical objects, where each object instance may be defined with attributes, such as, but not limited to, a first connection object ID, a second connection object ID, etc. Various object connections may be formed such that video objects and/or physical objects can interact and connect in different ways.
Object path data 148 may define one or more object paths of video objects and/or physical objects located in the environment, such as for racing or exploration applications. Object path data 148 may be comprised of, for example, path positions, connected object IDs of connected objects along a path, and path video object IDs that define an illuminated path.
Configuration of Light Projection and Viewing
Turning now to
Further affixed to device 100, the image sensor 156 may have a predetermined light view angle VA of the observed spatial view of view region 230 on remote surface 224 within view field VF. As illustrated, the image sensor's 156 view angle VA may be substantially larger than the image projector's 150 visible light projection angle PA. The image sensor 156 may be implemented, for example, using a wide-angle camera lens or fish-eye lens. In some embodiments, the image sensor's 156 view angle VA (e.g., 70 degrees) may be at least twice as large as the image projector's 150 visible light projection angle PA (e.g., 30 degrees). Such a configuration enables the device to observe the remote surface 224 and physical objects (not shown) substantially outside of the visible light projection field PF, enhancing the light sensing capabilities of device 100.
Passive Object Tags
Turning now to
Observing a Tagged Physical Object
Presented in
During a tag sensing operation, device 100 may optionally activate infrared light emitter 158 (to enhance the view of tag 280 in low light conditions) and enable image sensor 156 to capture at least one view frame of the spatial view of tag 280. Then using computer vision techniques adapted from current art (e.g., barcode reading, camera pose estimation, etc.), the device 100 may analyze the at least one view frame and detect, identify, and determine the position and orientation of the object tag 280 affixed to the physical object 250 in 3D space. For example, the object tag 280 may be an optical machine-readable 2D barcode tag that represents data comprised of, for example, a tag identifier (e.g., tag ID=“123”). Moreover, the object tag 280 may be a 2D barcode tag that has a one-fold rotational symmetry (similar to the tag 276-1 of
In the current embodiment, the object tag 280 may be of a predetermined physical size such that the device 100 can readily determine a position and orientation of the tag 280 in 3D space using computer vision techniques (e.g., camera pose estimation). However, in alternate embodiments, object tag 280 may be of arbitrary physical size and have an optical machine-readable barcode that represents data comprised of a tag identifier (ID) (e.g., tag ID=“123”) and a tag dimension (e.g., tag size=“10 cm×10 cm”) such that device 100 can dynamically determine the position and orientation of tag 280 using computer vision methods. In some embodiments, there may be a plurality of object tags affixed to a physical object, where each object tag may provide a unique tag identifier (e.g., first tag ID=“10”, second tag ID=“11”, etc.).
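For illustration only, the following minimal sketch shows one conventional way such a camera pose estimation could be performed for a square tag of known physical size, using the standard OpenCV solvePnP routine. The corner ordering and the calibrated camera parameters (camera_matrix, dist_coeffs) are assumptions of this sketch, not elements of the described embodiments.

```python
# Illustrative sketch only (not the claimed method): estimating a square
# tag's 3D position and orientation from one camera view, assuming a known
# tag side length and a calibrated camera.
import numpy as np
import cv2

def estimate_tag_pose(corners_px, tag_size_m, camera_matrix, dist_coeffs):
    """corners_px: 4x2 array of tag corner pixels in a known, fixed order."""
    half = tag_size_m / 2.0
    # Tag corners expressed in the tag's own frame (tag lies flat, z = 0).
    object_pts = np.array([[-half,  half, 0.0],
                           [ half,  half, 0.0],
                           [ half, -half, 0.0],
                           [-half, -half, 0.0]], dtype=np.float32)
    image_pts = np.asarray(corners_px, dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)   # 3x3 orientation (TRX, TRY, TRZ)
    return tvec.ravel(), rotation       # translation gives tag position TP / distance TD
```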
Visible and/or Hidden Object Tags on a Physical Object
Turning now to
Whereby, in
Start-up the Handheld Projecting Device
Referring briefly to
High-level Method of Operation for the Projecting Device
In
Beginning with step S100, the device's control unit initializes the projecting device's operating state, such as setting data storage (reference numeral 140 of
Then in step S102, the control unit may receive one or more movement signals from the motion sensor (reference numeral 120 of
In step S104, the control unit enables the image sensor (reference numeral 156 of
Continuing to step S105, the control unit then creates and tracks a geometrical surface plane for each detected object tag (from step S104). Thus, each surface plane coincides with the position and orientation of each detected object tag in 3D space. Moreover, the control unit combines the surface planes to create and track a 3D surface model that represents the surrounding remote surfaces in three dimensions of space.
Next, in step S106, the control unit creates a projection region, which is a geometrical region that defines a full-sized, projected image as it would appear on the 3D surface model (from step S105). The control unit may track the position, orientation, shape, and velocity of the projection region, as it moves across one or more remote surfaces when the projecting device is moved.
In step S108, the control unit detects the physical objects in the environment. That is, the control unit may take each identified object tag (e.g., tag ID=“123” from step S104) and search a library of physical object descriptions (reference numeral 127 of
Whereupon, in step S116, the control unit enables the active video objects and physical objects to interact and connect by analyzing the current activity of each object. For example, a video object may only choose to interact and connect with a physical object when certain conditions are met, such as having a specific video object position, orientation, velocity, size, shape, color, etc. relative to the physical object. Further, video objects may interact and connect with other video objects. The control unit may rely on a library of object actions (reference numeral 129 of
Then in step S116, the control unit modifies and tracks one or more video objects, as managed by the application (reference numeral 138 of
Continuing to step S120, the control unit generates or modifies a visible image comprised of one or more video objects (from step S116) such that the one or more video objects appear to adapt to the position and orientation of at least one physical object (from step S108). To generate or modify the visible image, the control unit may retrieve graphic data (e.g., image file in step S122) from at least one application (reference numerals 138 and 128 of
Also, the control unit generates or modifies a sound effect such that the sound effect is based upon the type, position, orientation, shape, size, and velocity of one or more video objects (from step S116) relative to at least one physical object (from step S108). To generate a sound effect, the control unit may retrieve audio data (e.g., MP3 file in step S122) from at least one application (reference numerals 138 and 128 of
Also, the control unit generates or modifies a haptic effect such that the haptic effect is based upon the type, position, orientation, shape, size, and velocity of one or more video objects (from step S116) relative to at least one physical object (from step S108). To generate a haptic effect, the control unit may retrieve haptic data (e.g., wave data in step S122) from at least one application (reference numeral 138 and 128 of
In step S124, the control unit updates clocks and timers so the projecting device operates in a time-coordinated manner.
Finally, in step S126, if the control unit determines that the next video frame needs to be presented (e.g., once every 1/30 of a second), then the method loops to step S102 to repeat the process. Otherwise, the method returns to step S124 to wait for the clocks to update, assuring smooth video frame animation.
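By way of a hedged illustration, the per-frame flow of steps S100-S126 might be organized as a simple loop like the sketch below; every identifier on the device object is a hypothetical placeholder rather than an actual interface of the projecting device.

```python
# Hedged, highly simplified sketch of the frame loop in steps S100-S126.
FRAME_PERIOD = 1.0 / 30.0                  # S126: roughly 30 frames per second

def run(device):
    device.initialize()                                  # S100
    last_frame = device.clock()
    while device.powered_on():
        motion = device.read_motion_sensor()             # S102
        view = device.capture_view_frame()               # S104
        tags = device.detect_object_tags(view)
        surfaces = device.build_surface_model(tags)      # S105
        region = device.compute_projection_region(surfaces, motion)  # S106
        physical = device.identify_physical_objects(tags)            # S108
        device.interact_connect_and_update_video_objects(physical)   # S116
        device.render_and_project_frame(region)          # S120 / S122
        device.update_sound_and_haptics()
        device.update_clocks()                           # S124
        while device.clock() - last_frame < FRAME_PERIOD:
            device.update_clocks()                       # wait for next frame
        last_frame = device.clock()
```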
Detecting and Locating Object Tags and Surface Planes
Turning now to
In an example tag sensing operation, device 100 may optionally activate the light emitter 158, which illuminates surface 224 with infrared light. Whereupon, the infrared image sensor 156 captures one or more view frames of a view region 230 within the environment. Using computer vision analysis (e.g., segmentation, barcode recognition, etc.), the device 100 may detect and identify the optical machine-readable tag 280 within the captured view frame of the view region 230.
As illustrated, the tag 280 appears in perspective (with a distorted shape) in view region 230. Whereby, using computer vision techniques (e.g., camera pose estimation, homography, projective geometry, etc.), the projecting device 100 computes a tag position TP and a tag distance TD in 3D space relative to the device 100. The projecting device 100 further observes a tag rotation angle TRO and determines a tag orientation comprised of tag rotational orientations TRX, TRY, and TRZ in 3D space relative to the device 100.
The reader may note that the object tag 280, remote surface 224, and physical object 250 substantially reside (or lay flat) on a common geometric plane. As a result, the device 100 may create and determine a position and orientation of a surface plane SPL1, where the object tag 280 resides on the surface plane SPL1 with a surface normal vector SN1. In this case, the surface plane SPL1 represents the remote surface 224.
Moreover, a plurality of surface planes may be generated. Understandably, for each detected object tag, the device can generate a surface plane that coincides with the position and orientation of each object tag. For example, the device 100 may further observe within its captured view frame of view region 230 an object tag 281 affixed to a physical object 251 located on a remote surface 226. Wherein, the device 100 generates a surface plane SPL2 comprised of a surface normal vector SN2, where surface plane SPL2 represents remote surface 226.
Subsequently, the device 100 may use geometric methods (e.g., projective geometry, etc.) to combine and intersect a plurality of surface planes, such as surface plane SPL1 with surface plane SPL2, to generate a 3D surface model that represents one or more remote surfaces 224 and 226. Boundaries of non-parallel surface planes SPL1 and SPL2 may be determined by extending the surface planes until the planes intersect, forming a surface edge 197. Hence, the device 100 can determine the position and orientation of surface edges in 3D space, thereby, determining the shape of surface planes SPL1 and SPL2.
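A minimal geometric sketch of these two operations, deriving a surface plane from a detected tag pose and intersecting two non-parallel planes to obtain a surface edge, is given below; the coordinate convention (tag z-axis taken as the surface normal) is an assumption for illustration.

```python
# Illustrative geometry only (assumed conventions, not the exact routine).
import numpy as np

def plane_from_tag(tag_position, tag_rotation):
    """tag_rotation: 3x3 matrix from pose estimation; returns (normal, point)."""
    normal = tag_rotation[:, 2]
    return normal / np.linalg.norm(normal), np.asarray(tag_position, float)

def intersect_planes(n1, p1, n2, p2):
    """Line where two planes meet, e.g. where a wall plane meets a ceiling plane."""
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < 1e-6:
        return None                      # planes are (nearly) parallel
    # Solve for one point lying on both planes; the third row merely pins
    # the solution to a unique point along the edge.
    A = np.stack([n1, n2, direction])
    b = np.array([np.dot(n1, p1), np.dot(n2, p2), 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)
```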
The device 100 can also determine each type of surface plane and remote surface defined in 3D space, whether belonging to a floor, wall, or ceiling. The device 100 may utilize its motion sensor (e.g., a 3-axis accelerometer included with motion sensor 120 of
Method for Detecting and Tracking Object Tags
Turning to
Beginning with step S140, the control unit enables the image sensor (reference numeral 156 of
Then in step S142, using computer vision techniques (e.g., segmentation, pattern recognition, etc.), the control unit analyzes the at least one view frame and may detect one or more object tags in the view frame.
In step S144, for each detected object tag (from step S142), the control unit initially selects a first detected object tag to analyze.
In step S146, the control unit identifies the selected object tag by reading its optical machine-readable pattern that represents data, such as a 1D or 2D barcode. Computer vision methods (e.g., image-based barcode reader, etc.) may be used to read the object tag to retrieve a tag ID of binary data, such as tag ID=“123”. In addition, the control unit determines the spatial position and orientation of the tag in 3D space (such as a 6-tuple having three positional coordinates and three rotational angles in x-y-z space) relative to the projecting device. Such a computation may be made using computer vision techniques (e.g., camera pose estimation, homography, projective geometry, etc.) adapted from current art. The control unit may further reduce spatial data noise (or jitter) by, for example, computing a moving average of the position and orientation of object tags continually collected in real-time. The detected tag information (e.g., tag ID, position, orientation, etc.) is then stored in data storage (reference 140 of
Finally, in step S148, if there are any more detected object tags to identify, the control unit selects the next detected object tag, and the method returns to step S146. Otherwise the method ends.
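As one hedged example of the jitter reduction mentioned in step S146, a simple exponential moving average over the tag position could be used, as sketched below; smoothing of orientation would normally require rotation-aware interpolation (e.g., quaternions) and is omitted here.

```python
# Hedged example of jitter reduction: exponential moving average of position.
import numpy as np

class PoseSmoother:
    def __init__(self, alpha=0.3):
        self.alpha = alpha            # 0 < alpha <= 1; smaller means smoother
        self.position = None

    def update(self, measured_position):
        p = np.asarray(measured_position, dtype=float)
        if self.position is None:
            self.position = p
        else:
            self.position = self.alpha * p + (1.0 - self.alpha) * self.position
        return self.position
```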
Method for Creating Surface Planes and a 3D Surface Model
Turning to
Starting with step S160, for each detected object tag (as determined by the method of
In step S162, the control unit generates a geometrical surface plane that coincides with the position and orientation of the detected object tag (as if the object tag lays flat on the surface plane). Moreover, the surface plane's orientation in 3D space may be defined with a surface normal vector. The surface plane is stored in data storage (reference numeral 140 of
The control unit can also determine the type of surface plane defined in 3D space, whether belonging to a floor, wall, or ceiling. The control unit can accomplish this by, but not limited to, the following steps: 1) determine the orientation of a surface plane relative to the projecting device; 2) read the motion sensor (e.g., a 3-axis accelerometer) and determine the orientation of the projecting device relative to the downward acceleration of gravity; and 3) transform the device orientation and surface plane orientation to determine the type of surface plane, such as a floor-, wall-, or ceiling-surface plane.
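The three-step classification above might look like the following sketch, assuming the surface normal and the measured gravity direction are both expressed in the device frame; the 30-degree tolerance is an illustrative value.

```python
# Sketch of the surface-type classification (floor, wall, or ceiling).
import numpy as np

def classify_surface(surface_normal_device, gravity_device, tol_deg=30.0):
    n = np.asarray(surface_normal_device, float)
    g = np.asarray(gravity_device, float)
    n = n / np.linalg.norm(n)
    up = -g / np.linalg.norm(g)                 # opposite the pull of gravity
    angle = np.degrees(np.arccos(np.clip(np.dot(n, up), -1.0, 1.0)))
    if angle < tol_deg:
        return "floor"      # normal points up (floor or table top)
    if angle > 180.0 - tol_deg:
        return "ceiling"    # normal points down
    return "wall"           # normal is roughly horizontal
```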
Then in step S164, if there are any more detected object tags to analyze, the control unit selects the next detected object tag, and the method returns to step S162. Otherwise the method continues to step S166.
In step S166, the control unit analyzes the generated surface planes (from steps S160-164) for redundancy. As can be surmised, a surface plane exists for each detected object tag. So the control unit may ignore or remove any surface plane that substantially coincides with another surface plane, since these are redundant.
The control unit then combines the remaining surface planes to form a 3D surface model, which represents the surrounding 3D space of one or more remote surfaces, such as walls, a floor, a ceiling, and/or a table top. Geometric functions (e.g., projective geometry, etc.) may be utilized to combine the surface planes to form a three-dimensional model of the environment. Whereby, non-parallel surface planes will converge and intersect, forming a surface edge, such as where a wall surface meets a ceiling surface. The surface edges define the shape or boundaries of one or more surface planes. Hence, the control unit can determine the 3D position and orientation of one or more surface edges of remote surfaces and, thereby, determine the shape of one or more surface planes in 3D space. The control unit then stores the 3D surface model, surface planes, and surface edges in data storage (reference numeral 140 of
Since the computed 3D surface model of surface planes represents one or more remote surfaces in an environment, the control unit can determine the type, position, orientation, and shape of one or more remote surfaces in the user's environment.
Creating a Projection Region for Video Objects
Turning now to
Hence, the device 100 can pre-compute (e.g., prior to video frame projection) the 3D position, orientation, and shape of the projection region 210 using input parameters that may include, but not limited to, the predetermined light projection angles and the position and orientation of one or more surface planes relative to device 100. As shown, the device 100 computes the 3D positions of perimeter points P1-P4 of projection region 210 residing on surface plane SPL1 and remote surface 224.
Moreover, the device 100 can determine a video object position OP and illuminate at least one video object 240 (such as the playful dog) based upon the video object position OP located on the surface plane SPL1 and remote surface 224.
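For illustration, one way to pre-compute the perimeter points P1-P4 is to intersect the four corner rays of the projector's light cone with the tracked surface plane, as in the sketch below; the ray directions and plane parameters are assumed to be known from the predetermined projection angles and the surface plane pose.

```python
# Illustrative sketch (assumed math): corner rays of the light cone
# intersected with the surface plane to obtain perimeter points P1-P4.
import numpy as np

def projection_region(proj_origin, corner_dirs, plane_normal, plane_point):
    """corner_dirs: four unit vectors along the projection frustum corners."""
    o = np.asarray(proj_origin, float)
    n = np.asarray(plane_normal, float)
    p = np.asarray(plane_point, float)
    corners = []
    for d in corner_dirs:
        d = np.asarray(d, float)
        denom = np.dot(n, d)
        if abs(denom) < 1e-6:
            return None                       # ray parallel to the surface
        t = np.dot(n, p - o) / denom
        if t <= 0:
            return None                       # surface lies behind the device
        corners.append(o + t * d)             # one of P1..P4 on the plane
    return np.array(corners)
```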
Computing a Velocity of Projection Region, Video Objects, and Physical Objects
Continuing with
Moreover, the device 100 may further compute the position, orientation, and velocity of one or more physical objects, such as physical object 250. Since the device 100 can identify and determine the position, orientation, and velocity of one or more object tags 280, the device can further identify and determine the position and orientation of one or more physical objects 250 associated with the object tag 280.
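A minimal sketch of such a velocity computation, differencing recently recorded positions over time, follows; the sample format is a placeholder for the tracking data actually stored by the device.

```python
# Finite-difference velocity estimate for a tracked object or the projection region.
import numpy as np

def estimate_velocity(history):
    """history: list of (time_seconds, position_xyz) samples, oldest first."""
    if len(history) < 2:
        return np.zeros(3)
    (t0, p0), (t1, p1) = history[0], history[-1]
    dt = t1 - t0
    if dt <= 0:
        return np.zeros(3)
    return (np.asarray(p1, float) - np.asarray(p0, float)) / dt
```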
So turning briefly to
Then in
Method for Creating and Tracking a Projection Region
Beginning with step S180, the control unit determines a geometric projection region (e.g., position, orientation, shape, and size) for a full-sized projected image that coincides on the 3D surface model (as created earlier in
Then in step S182, the control unit takes previously recorded projection region data (e.g., in the past one second) and integrates over time such that a projection region velocity in 3D space is determined. The device's motion sensor (reference numeral 120 of
Method for Identifying and Tracking the Physical Objects
Turning to
Beginning with step S200, for each detected object tag (as determined by the method of
In step S202, the control unit retrieves a detected tag ID, which was previously stored in data storage (reference numeral 140 of
Thus, in step S204, the control unit can determine and track the position, orientation, shape, size, and velocity of the physical object; wherein, the control unit stores such information in the physical object data (reference numeral 145 of
Finally, in step S206, if there are any more detected object tags to analyze, the control unit selects a next detected object tag, and the method returns to step S202. Otherwise the method ends.
Determining when Objects can Interact and Connect
Turning now to
So turning briefly to
So turning back to
Next, the device 100 searches in the object actions library for the physical object 250 of a food dish and available response actions (in
Next, the device 100 determines if object interactivity should occur by determining if the request action region 260 and response action region 270 overlap using a collision detect method (e.g., 2D polygon overlap detection, etc.). In this case, as illustrated, there exists an action collision region ARI where the action regions 260 and 270 overlap. Whereby, the device 100 enables interactivity to occur between the video object 240 and physical object 250.
In cases where the action regions 260 and 270 do not overlap, no interaction would occur, as when the video object 240 (of a dog) is located too far away from the physical object 250 (of a food dish). In more sophisticated interactions between objects 240 and 250, additional action conditions may be analyzed, such as determining if the video object 240 has the correct position, orientation, velocity, color, shape, and/or size before the video object 240 and physical object 250 can interact.
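The collision test described above could, in the simplest case, be an axis-aligned rectangle overlap check once both action regions are mapped onto a common surface plane, as sketched below; the description itself refers more generally to 2D polygon overlap detection, and the numeric regions shown are illustrative only.

```python
# Hedged sketch of the action-collision test for rectangular action regions.
def regions_overlap(region_a, region_b):
    """Each region is (min_x, min_y, max_x, max_y) in surface coordinates."""
    ax0, ay0, ax1, ay1 = region_a
    bx0, by0, bx1, by1 = region_b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

# Illustrative values only: a "request" region around the dog and a
# "response" region around the food dish that partly overlap, so the device
# would enable interactivity between the two objects.
request_region  = (0.00, 0.00, 0.30, 0.20)
response_region = (0.25, 0.10, 0.45, 0.30)
assert regions_overlap(request_region, response_region)
```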
Understandably, in alternative embodiments, a plurality of action types, action regions, and action conditions may be defined for a physical object. This allows the device 100 to simulate interaction with real-world objects with various effects. For example, an alternative physical object 250 may simulate a combined food and water dish (e.g., two bowls side by side), where two action regions are defined: a “food” action region; and juxtaposed, a “water” action region. Then during operation, the device maps both the food and water action regions relative to physical object 250 on at least one remote surface 224. Hence, the dog video object 240 can interact and connect to the combined food and water dish in different ways, either eating food or drinking water, depending on the position and orientation of the dog video object 240 relative to the physical object 250. Whereby, the device 100 may project an image of the dog video object 240 eating or drinking from the physical object 250. The device 100 may further generate sound and haptic effects of the eating or drinking dog video object 240.
Connecting Video Objects to Physical Objects
Turning now to a perspective view in
For example, the device 100 may connect one or more video objects 240 (of a dog) to at least one physical object 250 (of a food dish). To designate a connection, the video object 240 and physical object 250 may form an association in data storage (e.g., reference numerals D110-D114 of
In an example operation of connecting objects, device 100 may connect the video object 240 to the physical object 250 by completing, but not limited to, the following operations: detecting the object tag 280 on the physical object 250, determining the object tag position TP and tag orientation, identifying the tag and related physical object 250, retrieving from memory the video object displacement offsets OXD and OYD and the object rotational offset relative to tag 280, adjusting the video object position OP and the video object orientation ORO relative to tag 280, graphically rendering a video frame of the video object 240 at video object position OP and video object orientation ORO, and projecting the video frame as a visible image 220 comprised of the video object 240 such that the video object 240 appears to substantially remain in the vicinity of the physical object 250.
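As a hedged sketch of the offset computation in this connection operation, the video object pose could be derived from the tag pose and the stored offsets roughly as follows; the matrix convention (tag z-axis as the surface normal, offsets expressed in the tag plane) is an assumption for illustration.

```python
# Placing the video object at displacement (OXD, OYD) and a rotational
# offset relative to the detected tag, so it stays near the physical object.
import numpy as np

def video_object_pose(tag_position, tag_rotation, offset_xy, rot_offset_rad):
    """tag_rotation: 3x3 matrix; offset_xy = (OXD, OYD) in the tag plane."""
    local_offset = np.array([offset_xy[0], offset_xy[1], 0.0])
    object_position = np.asarray(tag_position, float) + tag_rotation @ local_offset
    c, s = np.cos(rot_offset_rad), np.sin(rot_offset_rad)
    spin = np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])
    object_orientation = tag_rotation @ spin     # rotate about the tag normal
    return object_position, object_orientation   # video object position OP / ORO
```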
Interactive Video Objects and Physical Objects
Continuing with
The device 100 may modify the visible image 220 comprised of one or more video objects 240 such that the one or more video objects 240 appear to interact with at least one physical object 250. The device 100 may modify the visible image 220 comprised of one or more video objects 240 such that the one or more video objects 240 appear to adapt to the position, orientation, shape, and size of at least one physical object 250. The device 100 may modify the visible image 220 comprised of one or more video objects 240 such that the one or more video objects 240 appear to remain substantially in the vicinity of at least one physical object 250. For example in operation, the device 100 animates the visible image 220 comprised of the video object 240 of a dog eating from the physical object 250 of a food dish (e.g., as defined by an object action, reference numeral D124 of
Also, the device 100 may generate or modify a sound effect such that the sound effect adapts to the position and orientation of one or more video objects 240 relative to at least one physical object 250. For example in operation, the device 100 generates a “crunching” sound (e.g., as defined by an object action, reference numeral D126 of
Also, the device 100 may generate or modify a haptic effect such that the haptic effect adapts to the position and orientation of one or more video objects 240 relative to at least one physical object 250. For example in operation, the device 100 generates a haptic vibration (e.g., as defined by an object action, reference numeral D128 of
Method for Determining when Objects can Interact
Turning to
Beginning with step S250, for each object action for each object A defined in object data (reference numeral 145 of
Then in step S254, for each object action for each object B defined in object data (reference numeral 145 of
In step S258, if the control unit determines that object A is not object B and the action types are agreeable to objects A and B, then continue to step S260. Otherwise, the method skips to step S270. For example, to determine if action types of objects A and B are agreeable, the control unit may compare the action types in data storage (e.g., reference numerals D116 and D152 of
In step S260, if the control unit determines the action regions overlap for objects A and B, then continue to step S262. Otherwise, the method skips to step S270. For example, to determine if action regions overlap for objects A and B, the control unit may construct action regions in 3D space (as discussed in
In step S262, if the control unit determines that the action conditions are acceptable for objects A and B, then continue to step S264. Otherwise, the method skips to step S270. For example, to determine if action conditions are acceptable for objects A and B, the control unit may analyze the current attributes of objects A and B, such as position, orientation, velocity, color, shape, and/or size.
In step S264, if the control unit determines that objects A and B want to connect or disconnect, then continue to step S266. Otherwise, the method skips to step S268.
In step S266, the control unit connects or disconnects objects A and B. For example, to connect objects A and B, the control unit may associate objects A and B in data storage (e.g., reference numeral D112 of
Then in step S268, the control unit enables interactivity by activating the object action of object A and the object action of object B. For example, to activate an object action, the control unit may activate object actions in data storage (e.g., reference numerals D110 and D114 of
In step S270, if there are any more object actions for each object B to analyze, the control unit selects a next object action for each object B, and the method returns to step S258. Otherwise the method continues to step S272.
Finally, in step S272, if there are any more object actions for each object A to analyze, the control unit selects a next object action for each object A, and the method returns to step S254. Otherwise the method ends.
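A compact sketch of the pairwise evaluation in steps S250-S272 is shown below; the attribute names (actions, agrees_with, conditions_met, and so on) are hypothetical placeholders for the object data and object action libraries described earlier.

```python
# Hedged sketch of the nested interaction check in steps S250-S272.
def evaluate_interactions(objects):
    for a in objects:
        for action_a in a.actions:                               # S250 / S272
            for b in objects:
                for action_b in b.actions:                       # S254 / S270
                    if a is b or not action_a.agrees_with(action_b):      # S258
                        continue
                    if not action_a.region.overlaps(action_b.region):     # S260
                        continue
                    if not (action_a.conditions_met(a, b)
                            and action_b.conditions_met(b, a)):           # S262
                        continue
                    if action_a.wants_connect_or_disconnect(action_b):    # S264
                        a.connect_or_disconnect(b)                        # S266
                    action_a.activate(a, b)                               # S268
                    action_b.activate(b, a)
```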
Method for Modifying and Tracking the Video Objects
Turning to
Beginning with step S278, the control unit can determine the position, orientation, and velocity of the projecting device in 3D space. This may be accomplished by transforming the collected motion data (from step S102 of
As an optional operation, dependent on the application, the control unit may create instances of one or more video objects in memory by the following: retrieving data from a library of video object descriptions (reference numeral 127 of
As an optional operation, dependent on the application, the control unit may also remove unneeded instances of one or more video objects from object data (reference numeral 145 of
Then continuing in step S280, for each video object defined in object data (reference numeral 145 of
In step S282, depending on the application, the control unit modifies the position, orientation, shape, size, and/or velocity of the video object based upon, but not limited to, the following: 1) the position, orientation, and/or velocity of the projecting device; 2) the position, orientation, and/or shape of at least one remote surface; 3) the position, orientation, and/or shape of at least one physical object; 4) the object descriptions; and/or 5) the activated object actions.
For item 1) above, the control unit may analyze the device's motion data (as created earlier in step S278 of
In step S284, if the control unit determines that the video object is connected to at least one physical object, then continue to step S285. Otherwise, the method skips to step S286.
In step S285, depending on the application, the control unit modifies the position, orientation, shape, size, and/or velocity of the video object such that the video object appears to remain substantially in the vicinity of the at least one physical object. Moreover, the control unit may further modify the position, orientation, shape, size, and/or velocity of the video object such that the video object appears to remain substantially affixed to the at least one physical object.
In step S286, if the control unit determines that the video object is connected to at least one other video object, then the method continues to step S288. Otherwise, the method skips to step S290.
In step S288, depending on the application, the control unit modifies the position, orientation, shape, size, and/or velocity of the video object such that the video object appears to substantially remain in the vicinity of the at least one other video object. Moreover, the control unit may further modify the position, orientation, shape, size, and/or velocity of the video object such that the video object appears to remain substantially affixed to the at least one other video object.
Then in step S290, the control unit determines and tracks attributes including, but not limited to, the position, orientation, shape, size, and velocity of the video object, storing such data in object tracking data (reference numeral 146 of
Finally, in step S292, if there are any more video objects to analyze, the control unit selects a next video object defined in object data (reference numeral 145 of
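A simplified per-frame update reflecting steps S280 through S290 might look like the following sketch. The VideoObject fields, the offset handling, and the treatment of device motion are assumptions made for illustration; a full implementation would also consult remote surfaces, object descriptions, and activated object actions as described above.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class VideoObject:
    position: np.ndarray
    orientation: float = 0.0
    velocity: np.ndarray = field(default_factory=lambda: np.zeros(3))
    size: float = 1.0
    offset: np.ndarray = field(default_factory=lambda: np.zeros(3))
    connected_physical: object = None      # a tracked physical object, if any
    connected_video: object = None         # another video object, if any
    tracked_state: dict = field(default_factory=dict)

def update_video_objects(video_objects, device_translation_delta, dt):
    """One pass over the video objects, roughly following steps S280-S292."""
    for vo in video_objects:                                  # step S280
        # Step S282 (simplified): advance the object by its own velocity and by
        # the measured motion of the handheld projecting device.
        vo.position = vo.position + vo.velocity * dt + device_translation_delta

        if vo.connected_physical is not None:                 # steps S284/S285
            vo.position = vo.connected_physical.position + vo.offset   # remain affixed
            vo.orientation = vo.connected_physical.orientation

        if vo.connected_video is not None:                    # steps S286/S288
            vo.position = vo.connected_video.position + vo.offset

        vo.tracked_state = {                                  # step S290: record for next frame
            "position": vo.position.copy(),
            "orientation": vo.orientation,
            "velocity": vo.velocity.copy(),
            "size": vo.size,
        }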
Projection of Video Objects with Reduced Distortion
Turning briefly back to
For example, the non-visible outline of the projection region 210 (as denoted by dashed lines) shows keystone or wedge-like distortion on surface 224. Yet the device 100 can modify the visible image 220 comprised of one or more video objects 240 such that the one or more video objects 240 appear substantially undistorted and/or substantially uniformly lit in the vicinity of at least one physical object 250. Moreover, the device 100 can modify the visible image 220 comprised of one or more video objects such that the size of the one or more video objects 240 is based upon the size of at least one physical object 250 in the vicinity of the one or more video objects 240.
Method of Projection of Video Objects with Reduced Distortion
Turning now to
Beginning with step S300, the control unit creates an empty video frame located in projector data (reference numeral 142 of
In step S304, the control unit can optionally modify the shape of the graphic image comprised of one or more video objects such that the shape of the graphic image comprised of one or more video objects creates a visual effect when projected. For example, the control unit may modify the shape of the graphic image by clipping out a circular image and painting the background black to create a circular shaped visible image comprised of one or more video objects when projected (e.g., as shown by visible image 220 in
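For illustration only, such a circular clipping might be sketched as follows, assuming an OpenCV/NumPy pipeline; the centered circle and radius choice are assumptions for the sketch rather than details of the disclosure.

import cv2
import numpy as np

def clip_to_circle(frame: np.ndarray) -> np.ndarray:
    """Keep a centered circular region of the frame and paint the rest black,
    so the projected visible image appears circular."""
    h, w = frame.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.circle(mask, (w // 2, h // 2), min(h, w) // 2, 255, thickness=-1)
    return cv2.bitwise_and(frame, frame, mask=mask)   # pixels outside the circle stay black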
In step S306, to reduce image distortion, the control unit can pre-warp (or inverse warp) the graphic image comprised of one or more video objects such that the one or more video objects appear substantially undistorted when projected on at least one remote surface in the environment. Wherein, the control unit may pre-warp the image based upon the determined position and orientation of the at least one remote surface (as discussed in
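The pre-warping step might be sketched as follows, assuming the footprint of the projected frame on the remote surface has already been estimated as four corner points; the corner format and the use of OpenCV's perspective-transform routines are assumptions for illustration, not the device's actual algorithm.

import cv2
import numpy as np

def prewarp_frame(frame: np.ndarray, projected_corners) -> np.ndarray:
    """Inverse-warp the frame so it appears rectangular once projected.

    projected_corners: 4x2 points giving where the frame's corners land on the
    remote surface (expressed in the frame's own pixel units), ordered
    top-left, top-right, bottom-right, bottom-left.
    """
    h, w = frame.shape[:2]
    ideal = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Map the keystoned footprint back onto the ideal rectangle, then apply
    # that mapping to the frame before it is sent to the image projector.
    H = cv2.getPerspectiveTransform(np.float32(projected_corners), ideal)
    return cv2.warpPerspective(frame, H, (w, h), borderValue=0)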
In step S308, to ensure substantially uniform image brightness, the control unit can modify the brightness of the graphic image comprised of one or more video objects such that the one or more video objects appear substantially uniformly lit when projected on at least one remote surface in the environment, substantially irrespective of the orientation of the at least one remote surface. The control unit may adjust the image brightness according to the position and orientation of the at least one remote surface (as discussed earlier in
scalar = 1 / (maximum distance to all pixels P)²
for each pixel P in the video frame...
    pixel brightness(P) = (surface distance D to pixel P)² × scalar × pixel brightness(P)
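A vectorized sketch of this brightness adjustment is shown below, assuming a per-pixel map of surface distances is available; the array shapes and the clipping step are illustrative choices.

import numpy as np

def compensate_brightness(frame: np.ndarray, surface_distance: np.ndarray) -> np.ndarray:
    """Scale pixel brightness by squared surface distance so the projected
    image appears substantially uniformly lit.

    frame: H x W x 3 image (float, 0..1). surface_distance: H x W distances D.
    """
    scalar = 1.0 / (surface_distance.max() ** 2)
    gain = (surface_distance ** 2) * scalar     # the farthest pixel keeps full brightness
    return np.clip(frame * gain[..., None], 0.0, 1.0)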
Then in step S310, the graphic effects are presented. The control unit transfers the graphic image comprised of the one or more video objects within the video frame to the image projector (reference numeral 150 of
Sound effects may be presented as well. In step S312, the control unit can retrieve audio data based upon the one or more video objects and transfer the audio data to the sound generator (reference numeral 112 of
Haptic effects may be presented as well. In step S314, the control unit can retrieve haptic data based upon the one or more video objects and transfer the haptic data to the haptic generator (reference numeral 114 of
Whereby, in one aspect, a projecting device can modify a visible image comprised of one or more video objects such that the one or more video objects appear to adapt to the position, orientation, shape, and/or size of at least one physical object located on at least one remote surface.
A projecting device can modify a visible image comprised of one or more video objects such that the one or more video objects appear to adapt to the position, orientation, shape, and/or size of at least one remote surface.
A projecting device can modify a visible image comprised of one or more video objects such that the size of the one or more video objects is based upon the size of at least one physical object in the vicinity of the one or more video objects.
A projecting device can modify the shape of a visible image comprised of one or more video objects such that the shape of the visible image is substantially circular.
A projecting device can modify the shape of a visible image comprised of one or more video objects such that the shape of the visible image appears to adapt to the position, orientation, and/or shape of at least one remote surface.
A projecting device can modify a visible image comprised of one or more video objects such that the one or more video objects appear substantially uniformly lit in the vicinity of at least one physical object when the projecting device is projecting light onto the remote surface.
A projecting device can modify a visible image comprised of one or more video objects such that the one or more video objects appear substantially uniformly lit on at least one remote surface.
A projecting device can modify a visible image comprised of one or more video objects such that the one or more video objects appear substantially undistorted in the vicinity of at least one physical object.
A projecting device can modify a visible image comprised of one or more video objects such that the one or more video objects appear substantially undistorted on at least one remote surface.
Video Frames of Connecting Video Objects and Physical Objects
Turning now to
Example of a Video Object that Connects to a Physical Object
So turning specifically to
Whereby, in one aspect, a projecting device can determine the position and/or orientation of at least one physical object located on at least one remote surface, connect one or more video objects to the at least one physical object, and generate a visible image comprised of the one or more video objects such that the one or more video objects appear to remain substantially in the vicinity of the at least one physical object.
In another aspect, a projecting device can determine the position and/or orientation of at least one physical object located on at least one remote surface, connect one or more video objects to the at least one physical object, and generate a visible image comprised of the one or more video objects such that the one or more video objects appear to remain substantially affixed to the at least one physical object.
Example of a Video Object Already Connected to a Physical Object
Turning now to
Example of a First Video Object that Connects to a Second Video Object
Turning now to
Whereby, in one aspect, a projecting device can determine the position and/or orientation of at least one physical object located on at least one remote surface, connect a first video object to a second video object, and generate a visible image comprised of the first video object and the second video object such that the first video object appears to remain substantially in the vicinity of the second video object.
In another aspect, a projecting device can determine the position and/or orientation of at least one physical object located on at least one remote surface, connect a first video object to a second video object, and generate a visible image comprised of the first video object and the second video object such that the first video object appears to remain substantially affixed to the second video object.
Video Objects that Connect to Handheld Physical Objects
Turning now to
Turning now to
Video Objects that Interact and Connect Using a Plurality of Object Actions
However, if the device 100 is moved at a slower rate of movement or from a different direction, the video object 340 may interact quite differently with the physical object 306. For example, if the device 100 is moved in the opposite direction, the device 100 may animate a video object of a motorcycle crashing near the ramp physical object 306, with generated “crash” sound and haptic effects.
Turning to
The handheld projecting device 100 may enable such precise interaction and connection of video objects and physical objects by utilizing a plurality of object actions, action regions, and action conditions for the video objects and physical objects (as discussed earlier in
Hence, the projecting device 100 can generate graphic, sound, and haptic effects based upon one or more object action conditions of the one or more video objects and at least one physical object. The projecting device 100 can modify a visible image comprised of one or more video objects based upon object action conditions of the one or more video objects and at least one physical object. The projecting device 100 can modify a visible image comprised of one or more video objects based upon the position, orientation, speed, and/or direction of the one or more video objects relative to at least one physical object.
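One way such speed- and direction-dependent behavior could be selected is sketched below; the threshold value and the "jump"/"crash" action names are assumptions for illustration, not the device's defined object actions.

import math

def choose_ramp_action(rel_velocity, ramp_forward, jump_speed=0.5):
    """Return 'jump' when the video object approaches the ramp fast enough and
    from the ramp's forward direction; otherwise return 'crash'."""
    speed = math.hypot(rel_velocity[0], rel_velocity[1])
    heading = rel_velocity[0] * ramp_forward[0] + rel_velocity[1] * ramp_forward[1]
    approaching_from_front = heading > 0
    return "jump" if (speed >= jump_speed and approaching_from_front) else "crash"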
Creating an Object Path
Turning now to
Now turning to
During construction of the user-defined path 326, the device 100 may acquire the identity of each physical object in sequence to create the object ID path data containing object identifiers (e.g., “300”, “302”, “306”, etc.). Moreover, the device 100 may acquire a sequence of object positions OP to create the position path data that describes the object path 326 in 3D space.
Method for Creating a User-Defined Object Path
Turning now to
Beginning with step S360, the control unit initializes an instance of object path data (reference numeral 148 of
Continuing to step S362, the control unit determines a Video Object Position of an illuminated video object (as discussed earlier in
In step S364, the control unit then adds the Video Object Position (from step S362) to the Position Path Data (reference numeral D302 of
Whereby, in step S366, the control unit determines an Object ID of a physical object nearest to the Video Object Position. The nearest physical object may be determined by comparing the positions and computed distances of the detected physical objects to the Video Object Position (from step S362). The Object ID may be retrieved from the physical object data (reference numeral 145 of
Then in step S368, if the control unit determines that the Object ID is not equal to the variable Previous ID, the method continues to step S370. Otherwise the method skips to step S372.
In step S370, the control unit adds the Object ID (from step S366) to the Object ID Path Data (reference numeral D300 of
Whereby, in step S372, the control unit may wait for a predetermined period of time (e.g., 1/30 second) to limit the rate of data acquisition.
In step S374, if the control unit determines that a user is done creating the user-defined object path, the method continues to step S376. Otherwise, the method loops back and continues at step S362. The control unit can determine if the user is done by, but not limited to, detecting at least two matching Object IDs in the Object ID Path Data, which indicates a closed path or racetrack; and/or detecting a user interface signal (e.g., when the user pushes a button on the device) indicating that the user-defined object path is complete.
Finally, in step S376, the control unit may create one or more path video objects (defined in reference numeral D304 of
The control unit may further adjust the path video objects (as created above) and object path data (as created in step S364 and S370), such as, but not limited to, smoothing the path video objects and object path data to remove jagged corners to form a smooth path or paths, removing substantially similar path locations, and/or joining unconnected path ends (e.g., such as the start point SP and end point EP of
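A condensed sketch of the path-recording loop of steps S360 through S374 follows; the callable hooks for position tracking, nearest-object lookup, and the done signal are placeholders assumed for this sketch, and the step S376 creation of path video objects is only noted in a comment.

import time

def record_object_path(get_video_object_position, nearest_object_id, user_done,
                       sample_period=1.0 / 30):
    position_path, object_id_path = [], []          # step S360: empty path data
    previous_id = None
    while not user_done():                          # step S374
        pos = get_video_object_position()           # step S362
        position_path.append(pos)                   # step S364
        obj_id = nearest_object_id(pos)             # step S366
        if obj_id != previous_id:                   # step S368
            object_id_path.append(obj_id)           # step S370
            previous_id = obj_id
        time.sleep(sample_period)                   # step S372: limit the acquisition rate
    # Step S376 (not shown): create path video objects and smooth the recorded path.
    return object_id_path, position_path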
Moreover, by utilizing the communication interface (reference numeral 118 of
In some alternative methods and applications, a user operating a projecting device may be able to construct a plurality of user-defined object paths within an environment.
Example of Object Path Data
Turning now to
Application with an Object Path
So presented in
During an example operation, the users/players 200 and 201 move their respective video objects 340 and 341 of motorcycles around the racetrack defined by the object path 326. The video objects 340 and 341 of the motorcycles interact in a realistic manner with the physical objects 300-318. For example, as the first motorcycle video object 340 passes over the hoop physical object 308, an animated fire video object is created and connected to the hoop physical object 308. Hence, as device 100 is moved, the fire video object remains substantially affixed to the hoop physical object 308, while the motorcycle video object 340 moves beyond the hoop object 308. Further, as device 101 is moved, the second motorcycle video object 341 appears to jump over the ramp object 316. Sound and haptic effects may be further generated by devices 100 and 101 to enhance the game.
Winning the racing game may require the user 200 or 201 to be the first to move the motorcycle video object 340 or 341 around the racetrack for a certain number of laps, simulating a real-world motorcycle race. That is, every time a motorcycle video object 340 or 341 goes past the gate physical object 300, another lap is counted by device 100 or 101, respectively. Devices 100 and 101 may each retain a history of laps made by a user around the path 326. Game information, such as lap count, speed, etc., may be shared by the devices 100 and 101 using their communication interfaces (reference numeral 118 of
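A lap counter along these lines might be sketched as follows; the gate identifier, lap total, and sharing callback are illustrative assumptions rather than details of the disclosed game.

def count_laps(pass_events, total_laps=3, gate_id="300", on_lap=None):
    """Count laps from a stream of physical-object ids that the motorcycle
    video object passes; invoke on_lap(laps) after each completed lap."""
    laps = 0
    for obj_id in pass_events:
        if obj_id == gate_id:
            laps += 1
            if on_lap:
                on_lap(laps)   # e.g., share the lap count over the communication interface
            if laps >= total_laps:
                break
    return laps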
Moreover, if a player, such as user 200 holding the device 100, veers the motorcycle video object 340 too far off course (as defined by the object path 326) or skips a physical object on the course, the device 100 responds by disqualifying the player and displaying text reading “Disqualified Rider” on its projected visible image.
Application with an Illuminated Object Path
Turning now to
In the current embodiment, the path video objects 344, 344′, 345, and 346′ are created during the construction of the user-defined object path (as shown by the flowchart step S376 of
During an example operation, the users/players 200 and 201 move their respective video objects 340 and 341 of motorcycles around the racetrack. As illustrated, path video objects 344′ and 346′ (as denoted by dashed lines) are not yet illuminated by the visible images 220 and 221 projected by devices 100 and 101. Thus, the illuminated path video objects 344 and 345 provide a visual cue to the users 200 and 201 as to the direction of the path or racetrack beyond the projected images 220 and 221. In this way, the projecting device 100 or 101 can generate one or more video objects 340 or 341 that appear to move along the illuminated path video objects 344 or 345, respectively.
Understandably, in alternative embodiments, a projecting device may operate other types of racing, exploration, search and rescue, and adventure applications that utilize an object path, such as a closed path or racetrack, or even a plurality of object paths forming a network of paths. Whereby, the projecting device may project a visible image comprised of one or more video objects of characters, people, airplanes, cars, boats, spaceships, trains, etc. for the user to interactively control with respect to the object path.
In some embodiments, a projecting device is operable to construct one object path that forms a path among one or more video objects and/or physical objects, such as an open path or a closed path. In other embodiments, a projecting device is operable to construct a plurality of object paths that form a network of paths among a plurality of video objects and/or physical objects. One or more object paths may extend across one or more remote surfaces, such as from a ceiling, to a wall, to a floor, to a table top. Wherein, a user can control and move video objects along the object paths throughout the user's environment.
In some embodiments, a projecting device may project a visible image comprised of one or more path video objects that connect to other video objects and/or physical objects. One or more path video objects of various types may be displayed on the visible image. For example, a path video object may be graphically represented as, but not limited to, a racetrack (e.g., for race cars), a stone covered road (e.g., for bicycles), a mud track (e.g., for horse racing), a snow trail (e.g., for slalom skiing), a city street (e.g., for ferrying taxis), a light beam (e.g., for spaceship travel), a river (e.g., for boats), a sidewalk (e.g., for skateboarders), a trail of flowers (e.g., for exploration), or a railroad track (e.g., for model trains).
Whereby, in one aspect, a projecting device can generate a visible image comprised of one or more video objects such that the one or more video objects appear to adapt to the position and/or orientation of the object path.
A projecting device can generate a visible image comprised of one or more video objects such that the position and/or orientation of the one or more video objects are based upon the position and/or orientation of the object path.
A projecting device can generate a visible image comprised of one or more path video objects that graphically define an object path.
A handheld projecting device can generate a visible image comprised of at least one video object that can be moved along an object path when the handheld projecting device is moved.
Application with a Plurality of Object Paths
Turning now to
Then turning to
Unlike the previous applications, an auto-generated object path may be created by the projecting device 100 to connect a plurality of video objects and/or physical objects. In this case, the auto-generated object path comprises path video objects 447, 446, and 446′ that are connected to physical objects 400-408.
Moreover, the projecting device 100 may connect one video object, such as path video object 446, to a plurality of physical objects 400, 402, 406, and 408. Such a capability enables device 100 to create an object path that forms a network of paths.
In an example operation, the device 100 may be moved by a user (not shown) such that the animated horse video object 440 moves along an object path comprised of the first path video object 447, over the bridge physical object 408, and across the second path video object 446.
Method for Creating Auto-Generated Object Paths
Turning now to
Beginning with step S400, the control unit detects, identifies, and tracks the position and orientation of physical objects in the environment.
Then in step S402, if the control unit determines that a user is done surveying the environment with a handheld projecting device, the method continues to step S404. Otherwise, the method loops back to step S400 to detect more physical objects. The control unit may determine when a user is done surveying the environment by, but not limited to, detecting a user interface signal (e.g., a button press) indicating that the projecting device is done sensing the environment.
In step S404, the control unit iterates over each physical object that was previously identified (in step S400), selecting the first physical object.
In step S406, the control unit then searches for the nearest physical object, that is, the previously identified physical object located nearest to the selected physical object. Spatial distance between physical objects can be readily determined using geometric functions.
Then in step S408, if the control unit determines that the physical object and nearest physical object (from step S406) are already connected, the method skips to step S414. Otherwise, the method continues to step S410. Connections between physical objects may be determined by analyzing the object connect data (reference numeral 147 of
In step S410, the control unit creates and connects a path video object to two objects: the physical object and the nearest physical object (from step S406). To create a path video object, the control unit may rely on a library of path video object descriptions (reference numeral 127 of
In step S412, the control unit can create an instance of object path data (reference numeral 148 of
Then in step S414, if the control unit determines that there are more physical objects to analyze (as identified by step S400), the method loops back to step S406 with the next physical object to analyze. Otherwise, the method continues to step S416.
Finally, in step S416, the control unit adjusts the path video objects (as created in step S410) and object path data (as created in step S412), such as, but not limited to, smoothing the path video objects and object path data to remove jagged corners to form a smooth path or paths, removing substantially similar path locations, and/or joining unconnected path ends to form a closed path. This may be completed, for example, by analyzing the 3D spatial geometry and adjusting path positions of the path video objects and object path data (reference numeral 148 of
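The nearest-neighbor connection of steps S404 through S414 might be sketched as follows; the object fields and the make_path_video_object() helper are hypothetical names introduced for illustration, and the step S416 smoothing is omitted.

import math

def auto_generate_paths(physical_objects):
    """Connect each physical object to its nearest neighbor with a path video object."""
    connections = set()     # frozensets of already-connected object id pairs
    paths = []
    for obj in physical_objects:                               # step S404
        nearest = min(
            (o for o in physical_objects if o is not obj),     # step S406
            key=lambda o: math.dist(o.position, obj.position),
            default=None,
        )
        if nearest is None:
            continue
        pair = frozenset((obj.object_id, nearest.object_id))
        if pair in connections:                                # step S408: already connected
            continue
        paths.append(make_path_video_object(obj, nearest))     # step S410
        connections.add(pair)                                  # step S412: record path data
    return paths                                               # step S416 smoothing not shown

def make_path_video_object(a, b):
    # e.g., a straight path segment between the two object positions
    return {"start": a.position, "end": b.position, "connects": (a.object_id, b.object_id)}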
Then utilizing the communication interface (reference numeral 118 of
Token Physical Object with 3D Video Object
Turning now to
So turning to
So
Understandably, alternative embodiments of token physical objects may be constructed, each comprising at least one object tag that has a unique tag identifier. Whereupon, the projecting device 100 may generate a visible image comprised of one or more video objects based upon the one or more token physical objects identified by the projecting device 100. As a result, the projecting device may display other types of 2D and/or 3D video objects (e.g., cat, dog, fish, automobile, truck, spaceship, cottage, house, shelf, dirt path, stone path, river path, or other types of characters, vehicles, paths, buildings, items, etc.) in the vicinity of one or more token physical objects.
Whereby, in one aspect, a projecting device can identify at least one physical object located on at least one remote surface, connect one or more video objects to the at least one physical object, and generate a visible image comprised of the one or more video objects such that the types of the one or more video objects being displayed are based upon the at least one physical object identified by the projecting device.
A projecting device can identify at least one physical object located on at least one remote surface, connect one or more 3D video objects to the at least one physical object, and generate a visible image comprised of the one or more 3D video objects such that the types of the one or more 3D video objects being displayed are based upon the at least one physical object identified by the projecting device.
A projecting device can identify at least one physical object located on at least one remote surface, determine the position and orientation of the at least one physical object, connect one or more 3D video objects to the at least one physical object, and generate a visible image comprised of the one or more 3D video objects such that the appearance of the one or more 3D video objects is based upon the position and orientation of the at least one physical object.
A projecting device can identify at least one physical object located on at least one remote surface, determine the position and orientation of the at least one physical object, connect one or more 3D video objects to the at least one physical object, and generate a visible image comprised of the one or more 3D video objects such that the one or more 3D video objects appear to remain substantially in the vicinity of the at least one physical object.
A projecting device can identify at least one physical object located on at least one remote surface, determine the position and orientation of the at least one physical object, connect one or more 3D video objects to the at least one physical object, and generate a visible image comprised of the one or more 3D video objects such that the one or more 3D video objects appear to remain substantially affixed to the at least one physical object.
A projecting device can identify at least one physical object located on at least one remote surface, determine the position and orientation of the at least one physical object, connect one or more 3D video objects to the at least one physical object, and generate a visible image comprised of the one or more 3D video objects such that the position and orientation of the one or more 3D video objects is based upon the position and orientation of the remote surface.
Plurality of Token Physical Objects with 3D Video Objects and Paths
Turning now to
Now turning to
Some of the presented video objects appear to adapt to the orientation of the remote surfaces and a surface edge (as detected by the method of
Now turning to
Whereby, in one aspect, a handheld projecting device may present animated video frames, comprising: identifying at least one physical object located on at least one remote surface, connecting one or more video objects to the at least one physical object, determining the position and orientation of the at least one physical object on the at least one remote surface, determining the position and orientation of at least one surface edge of the at least one remote surface, and generating at least one video frame comprised of one or more video objects, wherein the one or more video objects appear to responsively adapt to the position and orientation of the at least one surface edge of the remote surface.
A handheld projecting device may present animated video frames, comprising: connecting at least one video object to a path video object, determining the position and orientation of the path video object on at least one remote surface, and generating at least one video frame comprised of the at least one video object and the path video object, wherein the at least one video object appears to responsively adapt to the position and orientation of the path video object.
A handheld projecting device may present animated video frames, comprising: connecting at least one video object to a path video object, determining the position and orientation of the path video object on at least one remote surface, and generating at least one video frame comprised of the at least one video object and the path video object, wherein the at least one video object appears to remain substantially in the vicinity of the path video object when the handheld projecting device is moved in an arbitrary direction.
A handheld projecting device may present animated video frames, comprising: connecting at least one video object to a path video object, determining the position and orientation of the path video object on at least one remote surface, and generating at least one video frame comprised of the at least one video object and the path video object, wherein the at least one video object appears to remain substantially affixed to the path video object when the handheld projecting device is moved in an arbitrary direction.
Token Physical Objects and Textured-Surface Video Objects
Turning now to
Moreover, in
Understandably, other types of textured-surface video objects may be projected by the projecting device, such as a coffered ceiling, surface of a pond, wood flooring, sheet metal paneling, etc. Moreover, as the projecting device is moved through 3D space, the visual perspective of the surface video object can change, as if the surface video object is affixed to the associated remote surface.
Whereby, in one aspect, a projecting device can identify at least one physical object located on at least one remote surface, connect at least one video object to the at least one physical object, determine the position and orientation of the at least one physical object on the at least one remote surface, determine the position, orientation, and shape of the at least one remote surface, and generate a visible image comprised of the at least one video object such that the at least one video object appears to be substantially contained within at least a portion of the shape of the remote surface.
In another aspect, a projecting device can identify at least one physical object on at least one remote surface, connect at least one video object to the at least one physical object, determine the position and orientation of at least one surface edge of the at least one remote surface, and generate a visible image comprised of the at least one video object such that the at least one video object appears to responsively adapt to the at least one surface edge of the at least one remote surface.
Token Physical Objects and a Moving 3D Video Object
Turning now to
Then in an example operation, shown in
In
Whereby, in one aspect, a handheld projecting device may present animated video frames, comprising: identifying at least one physical object located on at least one remote surface, determining the position and orientation of the at least one remote surface, and generating at least one video frame comprised of one or more video objects that is projected by the handheld projecting device, wherein the animated movement of the one or more video objects appears to adapt to the position and orientation of the at least one remote surface.
In another aspect, a handheld projecting device may present animated video frames, comprising: identifying at least one physical object located on at least one remote surface, determining the position and orientation of the at least one remote surface, determining the position and orientation of at least one surface edge of the at least one remote surface, and generating at least one video frame comprised of one or more video objects, wherein the animated movement of the one or more video objects appears to adapt to the at least one surface edge of the at least one remote surface.
In another aspect, a handheld projecting device may present animated video frames, comprising: identifying at least one physical object located on at least one remote surface, determining the position and orientation of the at least one remote surface, and generating at least one video frame comprised of one or more video objects that is projected by the handheld projecting device, wherein the speed and/or direction of the animated movement of the one or more video objects appears to adapt to the position and orientation of the at least one remote surface.
Token Physical Objects, Surface Video Objects, and 3D Video Objects
Turning now to
Then continuing to
A coffered ceiling textured-surface object 542 appears in the vicinity of the token physical object 543 on the ceiling remote surface 226. Moreover, the textured-surface object 542 adapts and extends to the surface edges 197 and 198 of the ceiling remote surface 226, but no further.
A shelf 3D video object 556 appears in the vicinity of the token physical object 546. Wherein, the shelf video object 556 contains a burning candle.
Finally, a window 3D video object 558 with a dragon looking in appears in the vicinity of token physical objects 548 and 549.
So turning to
Also, the window 3D video object 558 utilizes two physical objects 548 and 549 to define the position of window corners on remote surface 225. For example, if a user moves the token physical objects 548 and 549 closer together, the window 3D video object 558 will get smaller in size. Whereby, a user can adjust the position, size, and/or shape of the video object 558 by adjusting the locations of a plurality of physical objects 548 and 549 on one or more remote surfaces. In other embodiments, a generated video object may utilize a plurality of physical objects to define various attributes of the video object, such as its position, orientation, shape, size, color, rendered perspective, etc.
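Deriving a video object's placement from a plurality of physical objects could be sketched as simply as the following, where the two corner positions stand in for the tracked token physical objects 548 and 549; the rectangle parameterization is an assumption for illustration.

def window_rect_from_tokens(corner_a, corner_b):
    """Return (x, y, width, height) of a window video object whose opposite
    corners track two token physical objects on the remote surface."""
    x0, y0 = corner_a
    x1, y1 = corner_b
    x, y = min(x0, x1), min(y0, y1)
    return x, y, abs(x1 - x0), abs(y1 - y0)   # moving the tokens closer together shrinks the window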
Understandably, alternate embodiments may be comprised of other types of video objects, 3D video objects, and textured-surface video objects that are connected to physical objects within an environment. The entire environment (e.g., ceiling, walls, floor, etc.) can be affixed with physical objects such that a projecting device can render a full-sized virtual world. Moreover, such a virtual world is user-configurable, simply by rearranging one or more physical objects in the environment.
Visible Light Handheld Projecting Device
Turning now to
So in detail in
Most of these components may be constructed and function similarly to the previous embodiment's components (as defined in
Visible Image Sensor
As shown in
Visible Object Tags
As shown in
Operations of the Visible Light Handheld Projecting Device
The operations and capabilities of the visible light handheld projecting device 600, shown in
For example, the handheld projecting device 600 of
Infrared Receiver Handheld Projecting Device
Turning now to
So in detail in
Most of these components may be constructed and function similarly to the previous embodiment's components (as defined in
Infrared Receiver
As shown in
Active Infrared Object Tags
Turning now to
So turning first to
Then in operation, shown in
Turning now to
Then in operation, the emitters 710 and 711 may toggle in light intensity or duration such that the object tag 722 creates a modulated, data-encoded light pattern that has a one-fold rotational symmetry or is asymmetrical.
Finally, turning to
Then in operation, a plurality of infrared light emitters 710-712 modulate infrared light such that the object tag 724 creates a modulated, data-encoded light pattern that has a one-fold rotational symmetry or is asymmetrical.
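Purely as a speculative illustration, a data-encoded emitter schedule for a three-emitter tag might be generated as follows; the framing, bit order, and roles assigned to the emitters are assumptions for this sketch and not the tag protocol of the disclosure.

def emitter_schedule(tag_id: int, bits: int = 8):
    """Yield (emitter0, emitter1, emitter2) on/off states, one tuple per frame.

    Emitter 0 stays on as an asymmetric anchor point, emitter 1 carries the
    identifier bits most-significant first, and emitter 2 blinks as a clock.
    """
    for i in reversed(range(bits)):
        data_bit = (tag_id >> i) & 1
        clock = i % 2
        yield (1, data_bit, clock)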
Operations of the Infrared Receiver Handheld Projecting Device
The operations and capabilities of the infrared receiver handheld projecting device 700, shown in
For example, the handheld projecting device 700 of
Summary of Handheld Projecting Devices
Design advantages of the handheld projecting device (as shown in
Although projectors and image sensors may be affixed to the front end of projecting devices, alternative embodiments of the projecting device may locate the image projector and/or image sensor at the device top, side, and/or other device location.
Some embodiments of the handheld projecting device may be integrated with and made integral to a mobile telephone, a tablet computer, a laptop, a handheld game device, a video player, a music player, a personal digital assistant, a mobile TV, a digital camera, a robot, a toy, an electronic appliance, or any combination thereof.
Finally, the handheld projecting device embodiments disclosed herein are not necessarily mutually exclusive in their construction and operation, for some alternative embodiments may be constructed that combine, in whole or part, aspects of the disclosed embodiments.
Various alternatives and embodiments are contemplated as being within the scope of the following claims particularly pointing out and distinctly claiming the subject matter regarded as the invention.