A computer of an information processing apparatus sets an α value of each pixel in accordance with a depth value (Z value) of the pixel of a reference image. The α value is set such that a synthesizing ratio of the reference image is higher for a pixel having a depth value closer to a predetermined reference value. Next, the computer increases the α value which is set for a pixel having a smaller α value among two adjacent pixels which have an α value difference of a predetermined value or greater. Then, the computer synthesizes the reference image and a blurred image corresponding to the reference image based on the α value which is set for each pixel after being processed by the increasing processing.
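The pipeline in the abstract above can be illustrated with a minimal sketch on a one-row image. All names (`set_alpha`, `REF_Z`, `THRESHOLD`) and the specific formulas are illustrative assumptions, not the patent's actual implementation:

```python
REF_Z = 0.5        # assumed predetermined reference depth (e.g. the focal plane)
THRESHOLD = 0.3    # assumed "predetermined value" for the alpha difference

def set_alpha(z):
    # Higher reference-image ratio for a depth value closer to the reference value.
    return max(0.0, 1.0 - abs(z - REF_Z))

def increase_small_alphas(alphas):
    # For each adjacent pair whose alpha difference is >= THRESHOLD,
    # raise the smaller alpha toward the larger one (here: to their mean).
    out = list(alphas)
    for i in range(len(alphas) - 1):
        a, b = alphas[i], alphas[i + 1]
        if abs(a - b) >= THRESHOLD:
            if a < b:
                out[i] = max(out[i], (a + b) / 2)
            else:
                out[i + 1] = max(out[i + 1], (a + b) / 2)
    return out

def synthesize(ref_row, blur_row, alphas):
    # alpha is the synthesizing ratio of the reference image;
    # (1 - alpha) is the ratio of the blurred image.
    return [a * r + (1 - a) * b for r, b, a in zip(ref_row, blur_row, alphas)]
```

Raising the smaller α of a high-contrast pair before blending is what softens the border between a focused and an unfocused region.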
18. An image processing system for generating an image, comprising:
a processing system configured to:
set an α value of each pixel of a reference image having a depth value, the α value being set in accordance with the depth value of a respective pixel;
increase the α value of a pixel which has a smaller α value among two adjacent pixels each having a depth value equal to or smaller than a reference value, wherein the reference value is based on a focal length of a virtual camera;
assign the α value of the pixel to a first component value when the depth value of the pixel is smaller than the reference value;
assign the α value of the pixel to a second component value when the depth value of the pixel is larger than the reference value;
determine the α value of the pixel based on the first component value and the second component value; and
synthesize the reference image and a blurred image corresponding to the reference image based on the determined α value for the pixel after the α value of the pixel is determined based on the first component value and the second component value.
16. A computer implemented method for generating an image on an image processing apparatus, the method comprising:
calculating a reference value from a focal length of a virtual camera;
setting an α value of each pixel of a reference image having a depth value for each pixel, the α value being set in accordance with the depth value of the respective pixel;
increasing the α value which is set for a pixel having a smaller α value among two adjacent pixels each having a depth value equal to or smaller than the reference value;
assigning the α value of the pixel to a first component value when the depth value of the pixel is smaller than the reference value;
assigning the α value of the pixel to a second component value when the depth value of the pixel is larger than the reference value;
determining the α value of the pixel based on the first component value and the second component value; and
synthesizing the reference image and a blurred image corresponding to the reference image based on the α value for the pixel after the α value of the pixel is determined based on the first component value and the second component value.
14. An image processing apparatus for generating an image, comprising:
a processing system configured to:
set an α value of a pixel of a reference image, the pixel having a depth value, the α value being set in accordance with the depth value;
increase the α value of the pixel when the α value is smaller than an adjacent α value of an adjacent pixel, both the pixel and the adjacent pixel each having a depth value equal to or smaller than a reference value;
assign the α value of the pixel to a first parameter value when the depth value of the pixel is smaller than the reference value;
assign the α value of the pixel to a second parameter value when the depth value of the pixel is larger than the reference value;
determine the α value of the pixel based on the first parameter value and the second parameter value; and
synthesize the reference image and a blurred image corresponding to the reference image based on the α value after the α value of the pixel is determined based on the first parameter value and the second parameter value,
wherein the reference value is based on a focal length of a virtual camera controlled by the processing system.
1. A non-transitory computer-readable storage medium having stored thereon an image processing program to be executed by a computer of an image processing apparatus for generating an image, the image processing program configured to cause the computer to:
set an α value of each pixel of a reference image having a depth value, the α value being set in accordance with the depth value of the respective pixel;
increase the α value for a pixel having a smaller α value among two adjacent pixels each having a depth value equal to or smaller than a reference value, the reference value being based on a focal length of a virtual camera;
assign the α value of the pixel to a first parameter value when the depth value of the pixel is smaller than the reference value;
assign the α value of the pixel to a second parameter value when the depth value of the pixel is larger than the reference value;
determine the α value of the pixel based on the first parameter value and the second parameter value; and
synthesize the reference image and a blurred image corresponding to the reference image based on the α value after determining the α value of the pixel based on the first parameter value and the second parameter value.
19. An image processing apparatus for generating an image, comprising:
a processing system configured to:
derive a predetermined depth value from a focal length of a virtual camera;
correct a pixel having a depth value closer to the predetermined depth value among two adjacent pixels of a reference image that has a depth value for each pixel, the depth value for each pixel being smaller than or equal to the predetermined depth value, the pixel being corrected such that the depth value thereof becomes closer to the depth value of the other pixel;
set an α value of each pixel in accordance with the depth value of the respective pixel after being corrected;
assign the α value of the pixel to a first component value when the depth value of the pixel is smaller than the reference value;
assign the α value of the pixel to a second component value when the depth value of the pixel is larger than the reference value;
determine the α value of the pixel based on the first component value and the second component value; and
synthesize the reference image and a blurred image corresponding to the reference image based on the α value of the pixel after the α value of the pixel is determined based on the first component value and the second component value.
15. An image processing apparatus for generating an image, comprising:
a processing system configured to:
calculate a predetermined reference value based on a focal length of a virtual camera;
correct a pixel having a depth value closer to the predetermined reference value among two adjacent pixels of a reference image having a depth value for each pixel, the depth value for each pixel being smaller than or equal to the predetermined reference value, the pixel being corrected such that the depth value thereof becomes closer to the depth value of the other pixel;
set an α value of each pixel in accordance with the depth value of the respective pixel after being corrected;
assign the α value of the pixel to a first parameter value when the depth value of the pixel is smaller than the reference value;
assign the α value of the pixel to a second parameter value when the depth value of the pixel is larger than the reference value;
determine the α value of the pixel based on the first parameter value and the second parameter value; and
synthesize the reference image and a blurred image corresponding to the reference image based on the α value of the pixel after the α value of the pixel is determined based on the first parameter value and the second parameter value.
17. A computer implemented method for generating an image on an image processing apparatus, the method comprising:
correcting a pixel having a depth value closer to a predetermined reference value among two adjacent pixels of a reference image having a depth value for each pixel, the depth value for each pixel being smaller than or equal to the predetermined reference value, the pixel being corrected such that the depth value thereof becomes closer to the depth value of the other pixel;
setting an α value of each pixel in accordance with the depth value of the respective pixel;
assigning the α value of the pixel to a first component value when the depth value of the pixel is smaller than the reference value;
assigning the α value of the pixel to a second component value when the depth value of the pixel is larger than the reference value;
determining the α value of the pixel based on the first component value and the second component value; and
synthesizing the reference image and a blurred image corresponding to the reference image based on the α value for the pixel after the α value of the pixel is determined based on the first component value and the second component value,
wherein the predetermined reference value is based on a focal length of a virtual camera.
21. An image processing system for generating an image, comprising:
a processing system configured to:
set an α value of each pixel of a reference image having a depth value, the α value being set in accordance with the depth value of a respective pixel;
increase the α value of a pixel which has a smaller α value among two adjacent pixels each having a depth value equal to or smaller than a reference value, wherein the reference value is based on a focal length of a virtual camera;
assign the α value of the pixel to a first component value when the depth value of the pixel is smaller than the reference value or assign the α value of the pixel to a second component value when the depth value of the pixel is larger than the reference value;
determine the α value of the pixel based on the first component value and the second component value; and
synthesize the reference image and a blurred image corresponding to the reference image based on the determined α value for the pixel after the α value of the pixel is determined based on the first component value and the second component value, wherein the processing system is further configured to:
smooth the first component value; and
set a sum of the first component value and the second component value as the α value of the pixel.
10. A non-transitory computer-readable storage medium having stored thereon an image processing program to be executed by a computer of an image processing apparatus for generating an image, the image processing program configured to cause the computer to:
correct a pixel having a depth value closer to a predetermined reference value among two adjacent pixels of a reference image having a depth value for each pixel, the depth value for each pixel being smaller than or equal to the predetermined reference value, the pixel being corrected such that the depth value thereof becomes closer to the depth value of the other pixel;
set an α value of each pixel in accordance with the depth value of the respective pixel;
assign the α value of the pixel to a first component value when the depth value of the pixel is smaller than the reference value;
assign the α value of the pixel to a second component value when the depth value of the pixel is larger than the reference value;
determine the α value of the pixel based on the first component value and the second component value; and
synthesize the reference image and a blurred image corresponding to the reference image based on the α value after the α value of the pixel is determined based on the first component value and the second component value,
wherein the predetermined reference value is derived from a focal length of a virtual camera.
4. A non-transitory computer-readable storage medium having stored thereon an image processing program to be executed by a computer of an image processing apparatus for generating an image, the image processing program configured to cause the computer to:
set an α value of each pixel of a reference image having a depth value, the α value being set in accordance with the depth value of the respective pixel;
increase the α value for a pixel having a smaller α value among two adjacent pixels each having a depth value equal to or smaller than a reference value, the reference value being based on a focal length of a virtual camera;
assign the α value of the pixel to a first component value when the depth value of the pixel is smaller than the reference value or assign the α value of the pixel to a second component value when the depth value of the pixel is larger than the reference value;
determine the α value of the pixel based on the first component value and the second component value; and
synthesize the reference image and a blurred image corresponding to the reference image based on the α value after determining the α value of the pixel based on the first component value and the second component value, wherein the first component value and the second component value are components of a two-dimensional vector and the image processing program is further configured to cause the computer to:
smooth a value of the first component of the two-dimensional vector;
set a sum of the first component value and the second component value of the two-dimensional vector as the α value for each pixel; and
synthesize the reference image and the blurred image after setting the sum.
2. A non-transitory computer-readable storage medium according to
3. A non-transitory computer-readable storage medium according to
5. A non-transitory computer-readable storage medium according to
6. A non-transitory computer-readable storage medium according to
7. A non-transitory computer-readable storage medium according to
select a pixel having an α value which is smaller than the α value of an adjacent pixel and is different from the α value of the adjacent pixel by a predetermined value or greater; and
increase the α value of the selected pixel.
8. A non-transitory computer-readable storage medium according to
select a pixel having an α value which is smaller than the α value of an adjacent pixel and is equal to or smaller than a predetermined value; and
increase the α value of the selected pixel.
9. A non-transitory computer-readable storage medium according to
11. A non-transitory computer-readable storage medium according to
12. A non-transitory computer-readable storage medium according to
13. A non-transitory computer-readable storage medium according to
20. The apparatus of
The disclosure of Japanese Patent Application No. 2007-062193, filed on Mar. 12, 2007, is incorporated herein by reference.
The technology herein relates to a game processing program and an image processing apparatus, and more particularly to a storage medium having stored thereon an image processing program for generating an image focused in accordance with a distance from a viewpoint and an image processing apparatus.
Conventionally, there is a technology for generating an image of a virtual world focused in accordance with a distance from a viewpoint (depth). The expression “image focused in accordance with a distance from a viewpoint” refers to an image in which an object located closer to the focal point of a virtual camera in the depth direction (viewing direction) is shown with a clear outline and an object located farther from the focal point of the virtual camera in the depth direction is shown with a blurred outline. Such an image can represent the distance of different parts of the image from the viewpoint more realistically.
Patent document 1 (Japanese Laid-Open Patent Publication No. 2001-175884) describes an image generation system for generating an image focused as described above. According to this image generation system, an original image and a blurred image are generated, and a focused image is generated by synthesizing the original image and the blurred image based on an α value which is set for each pixel. The α value is in the range of 0 ≤ α ≤ 1, and represents the synthesizing ratio of the original image and the blurred image. In the above-described image generation system, the α value is set in accordance with a depth value of each pixel of the original image. Since the synthesizing ratio (α value) of the original image and the blurred image changes in accordance with the depth value of each pixel, an image focused in accordance with the distance from the viewpoint can be generated.
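The per-pixel synthesis described above is a linear blend. The sketch below assumes the convention that α weights the original image (α = 1 fully in focus); the function name is illustrative:

```python
def blend_pixel(original, blurred, alpha):
    # alpha in [0, 1] is the synthesizing ratio of the original image:
    # alpha = 1 keeps the original (in focus); alpha = 0 keeps the blur.
    assert 0.0 <= alpha <= 1.0
    return alpha * original + (1.0 - alpha) * blurred

# e.g. a pixel halfway out of focus
print(blend_pixel(200.0, 100.0, 0.5))  # 150.0
```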
According to the method described in patent document 1, the α value is set to 0 for the pixels of the object 92 located at the focal point. For the pixels in the vicinity of a border L between the objects 91 and 92 shown overlapping each other, the α value is set to 0 for the pixels of the object 92 and the original image is reflected. Therefore, as shown in
According to the method of patent document 1, the α value is simply set in accordance with the depth value of each pixel. For this reason, where pixels having significantly different depth values are closely adjacent to each other, the border between the focused pixels and the unfocused pixels is presented clearly. The resultant image appears unnatural.
Therefore, certain example embodiments provide an image processing program and an image processing apparatus for generating a more realistic image.
The reference numerals, additional descriptions and the like in parentheses in this section of the specification indicate the correspondence with the embodiments described later for easier understanding of certain example embodiments, and are not limiting in any way.
A first aspect of certain example embodiments is directed to a computer-readable storage medium having stored thereon an image processing program (game program 60) to be executed by a computer (CPU 10 and/or GPU 11b) of an image processing apparatus (game apparatus 3) for generating an image. The image processing program causes the computer to execute an α value setting step (S4), an increasing step (S5, S6), and a synthesis step (S7). The α value setting step sets an α value (α information) of each pixel of a reference image (
According to a second aspect of certain example embodiments, the increasing step may be performed only where the two adjacent pixels have an α value difference of a predetermined value or greater.
According to a third aspect of certain example embodiments, in the α value setting step, the computer may set the α value such that a ratio of the reference image is higher for a pixel having a depth value closer to a predetermined reference value.
According to a fourth aspect of certain example embodiments, in the α value setting step, the computer may set a two-dimensional vector for each pixel by setting the α value as a first component value (n component value) of the two-dimensional vector for a pixel having a depth value smaller than the predetermined reference value, and by setting the α value as a second component value (f component value) of the two-dimensional vector for a pixel having a depth value larger than the predetermined reference value (
According to a fifth aspect of certain example embodiments, in the α value setting step, the computer may set 0 as the second component value of the two-dimensional vector for a pixel having a depth value smaller than the predetermined reference value, and may set 0 as the first component value of the two-dimensional vector for a pixel having a depth value larger than the predetermined reference value.
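The two-dimensional vector of the fourth and fifth aspects can be sketched as follows. The names (`n`/`f` for the first/second components, `REF_Z`) follow the parenthetical hints above but are otherwise assumptions:

```python
REF_Z = 0.5  # assumed predetermined reference depth value

def alpha_vector(z, alpha):
    # Fourth and fifth aspects: store alpha in the first (n) component for a
    # pixel nearer than the reference value, and in the second (f) component
    # for a farther pixel; the other component is set to 0.
    if z < REF_Z:
        return (alpha, 0.0)   # depth smaller than the reference value
    else:
        return (0.0, alpha)   # depth larger than the reference value

def alpha_from_vector(vec):
    # The final alpha is determined from both components; a plain sum works
    # because exactly one component is nonzero for each pixel.
    n, f = vec
    return n + f
```

Keeping the near and far contributions in separate components allows the later smoothing to be applied to only one of them.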
According to a sixth aspect of certain example embodiments, in the increasing step, the computer may process only the pixels having a depth value which is equal to or smaller than a predetermined value.
According to a seventh aspect of certain example embodiments, in the increasing step, the computer may smooth the α value of each pixel.
According to an eighth aspect of certain example embodiments, in the increasing step, the computer may select a pixel having an α value which is smaller than the α value of an adjacent pixel and is different from the α value of the adjacent pixel by a predetermined value or greater, and may increase the α value of the selected pixel.
According to a ninth aspect of certain example embodiments, in the increasing step, the computer may select a pixel having an α value which is smaller than the α value of an adjacent pixel and is equal to or smaller than a predetermined value, and may increase the α value of the selected pixel.
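The two selection rules of the eighth and ninth aspects differ only in their second condition. A sketch, with both threshold values assumed for illustration:

```python
DIFF_THRESHOLD = 0.3   # eighth aspect: minimum alpha difference (assumed)
ABS_THRESHOLD = 0.4    # ninth aspect: maximum alpha of the selected pixel (assumed)

def selected_by_difference(alpha, neighbor_alpha):
    # Eighth aspect: the pixel's alpha is smaller than the adjacent pixel's
    # AND differs from it by the predetermined value or greater.
    return alpha < neighbor_alpha and neighbor_alpha - alpha >= DIFF_THRESHOLD

def selected_by_magnitude(alpha, neighbor_alpha):
    # Ninth aspect: the pixel's alpha is smaller than the adjacent pixel's
    # AND is itself equal to or smaller than a predetermined value.
    return alpha < neighbor_alpha and alpha <= ABS_THRESHOLD
```

A pixel satisfying either rule would then have its α value increased in the increasing step.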
According to a tenth aspect of certain example embodiments, the image processing program may cause the computer to further execute a blurred image generation step (S3) of generating the blurred image by smoothing a color value of each pixel of the reference image.
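The smoothing of step S3 could be as simple as a box filter over each pixel's neighborhood; the one-dimensional sketch below is an assumption, not the patent's actual kernel:

```python
def box_blur_row(colors):
    # Average each color value with its immediate neighbors,
    # clamping the window at the row edges.
    blurred = []
    for i in range(len(colors)):
        lo = max(0, i - 1)
        hi = min(len(colors), i + 2)
        window = colors[lo:hi]
        blurred.append(sum(window) / len(window))
    return blurred

print(box_blur_row([0.0, 90.0, 0.0]))  # [45.0, 30.0, 45.0]
```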
An eleventh aspect of certain example embodiments is directed to a computer-readable storage medium having stored thereon an image processing program (game program 60) to be executed by a computer (CPU 10 and/or GPU 11b) of an image processing apparatus (game apparatus 3) for generating an image. The image processing program causes the computer to execute a depth value correction step (S11), an α value setting step (S12), and a synthesis step (S7). The depth value correction step corrects a pixel having a depth value closer to a predetermined reference value among two adjacent pixels of a reference image (
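The depth value correction of the eleventh aspect can be sketched for a single adjacent pair. The reference value, the minimum depth difference, and the correction amount (moving to the pair's mean) are all assumptions:

```python
REF_Z = 0.5    # assumed predetermined reference value
Z_DIFF = 0.3   # assumed minimum depth difference between the pair

def correct_depths(z_pair):
    # Of two adjacent pixels whose depth difference is large enough, correct
    # the pixel whose depth is closer to the reference value, moving its
    # depth toward the other pixel's depth.
    a, b = z_pair
    if abs(a - b) < Z_DIFF:
        return (a, b)
    if abs(a - REF_Z) < abs(b - REF_Z):
        return ((a + b) / 2, b)   # a is closer to the reference: correct a
    else:
        return (a, (a + b) / 2)   # b is closer to the reference: correct b
```

Because the α value is later set from the corrected depth, this has substantially the same softening effect on borders as correcting the α value directly.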
According to a twelfth aspect of certain example embodiments, the depth value correction step may be performed only where the two adjacent pixels have a depth value difference of a predetermined value or greater.
According to a thirteenth aspect of certain example embodiments, in the α value setting step, the computer may set the α value such that a ratio of the reference image is higher for a pixel having a depth value closer to the predetermined reference value.
According to a fourteenth aspect of certain example embodiments, the depth value correction step may be performed on only the pixels having a depth value which is equal to or smaller than the predetermined value.
Certain example embodiments may be provided in the form of an image processing apparatus having functions equivalent to those of an image processing apparatus for executing the steps in the first through fourteenth aspects.
According to the first aspect, for a pixel having a smaller α value among two adjacent pixels which have an α value difference of a predetermined value or greater, the α value which is set for the pixel is increased in accordance with the depth value. Thus, an image can be generated with the outline of a border portion between the two pixels being blurred. Therefore, an unnatural image in which only a part of the outline of an object is clear is prevented from being generated, and a more realistic image can be generated.
According to the second aspect, the α value correction (increase) is performed only when the α value difference is a predetermined value or greater. Thus, the pixels which need to be corrected can be corrected with certainty.
According to the third aspect, an object corresponding to a pixel having a depth value closer to a reference value is shown with a clear outline, whereas an object corresponding to a pixel having a depth value farther from the reference value is shown with a blurred outline. Thus, an image with realistic focusing can be generated.
According to the fourth aspect, the α value correction (increase) is performed at least on a pixel having a depth value smaller than the reference value. Therefore, the problem of the conventional technology that the border between a focused object and an unfocused object is clearly presented is solved.
According to the fifth aspect, the α value correction (increase) is performed only on a pixel having a depth value smaller than the reference value. A pixel having a depth value larger than the reference value is not a target of correction. Therefore, an image, in which the border between a focused object and an object located closer to the viewpoint than the focused object is blurred and the border between the focused object and an object located farther from the viewpoint than the focused object is clear (
According to the sixth aspect, the α value correction (increase) is performed only on a pixel having a depth value equal to or smaller than the predetermined value. A pixel having a depth value larger than the predetermined value is not a target of correction. Therefore, by setting the reference value to, for example, a predetermined value, an image, in which the border between a focused object and an object located closer to the viewpoint than the focused object is blurred and the border between the focused object and an object located farther from the viewpoint than the focused object is clear (
According to the seventh aspect, smoothing is performed on the α value of each pixel. Thus, for a pixel, the α value of which is to be increased (i.e., for a pixel having a smaller α value among two adjacent pixels which have an α value difference of a predetermined value or greater), the α value can be easily increased.
According to the eighth aspect, a pixel having an α value which is smaller than the α value of an adjacent pixel and is different from the α value of the adjacent pixel by a predetermined value or greater is selected. Thus, the pixel, the α value of which is to be increased, can be specified with certainty. Therefore, an image with realistic focusing can be generated without fail.
According to the ninth aspect, a pixel having an α value which is smaller than the α value of an adjacent pixel and is a predetermined value or smaller is selected. Thus, the pixel, the α value of which is to be increased, can be specified with certainty. Therefore, an image with realistic focusing can be generated without fail.
According to the tenth aspect, a blurred image can be easily generated from the reference image.
According to the eleventh aspect, for a pixel having a depth value closer to a predetermined reference value among two adjacent pixels which have a depth value difference of a predetermined value or greater, the depth value is corrected so as to be closer to the depth value of the other pixel. The α value is set in accordance with the post-correction depth value. Therefore, substantially the same effect as that of the first aspect of correcting the α value is provided. Namely, an image can be generated with the outline of the border between the two pixels being blurred. Thus, a more realistic image can be generated.
According to the twelfth aspect, the depth value is corrected only when the depth value difference between two adjacent pixels is a predetermined value or greater. Thus, the pixels which need to be corrected can be corrected with certainty.
According to the thirteenth aspect, an object corresponding to a pixel having a depth value closer to a reference value is shown with a clear outline, whereas an object corresponding to a pixel having a depth value farther from the reference value is shown with a blurred outline. Thus, an image with realistic focusing can be generated.
According to the fourteenth aspect, the depth value correction is performed only on a pixel having a depth value smaller than the reference value. A pixel having a depth value larger than the reference value is not a target of correction. Therefore, by setting the reference value to, for example, a predetermined value, an image, in which the border between a focused object and an object located closer to the viewpoint than the focused object is blurred and the border between the focused object and an object located farther from the viewpoint than the focused object is clear (
These and other objects, features, aspects and advantages of certain example embodiments will become more apparent from the following detailed description of certain example embodiments when taken in conjunction with the accompanying drawings.
(Overall Structure of the Game System)
With reference to
On the game apparatus 3 as an exemplary image processing apparatus according to certain example embodiments, the optical disc 4 is detachably mountable as an exemplary information storage medium exchangeably usable for the game apparatus 3. The optical disc 4 has stored thereon a game program to be executed by the game apparatus 3. The game apparatus 3 has an insertion opening for mounting the optical disc 4 on a front surface thereof. The game apparatus 3 reads and executes the game program stored on the optical disc 4 inserted into the insertion opening, and thus performs the game processing.
The game apparatus 3 is connected to the TV 2 as an exemplary display device via a connection cord. The TV 2 displays a game image obtained as a result of the game processing executed by the game apparatus 3. The marker section 6 is provided in the vicinity of a display screen of the TV 2 (above the display screen in
The controller 5 is an input device for providing the game apparatus 3 with operation data representing the particulars of the operation made thereon. The controller 5 and the game apparatus 3 are connected with each other via wireless communication. In this embodiment, the controller 5 and the game apparatus 3 are communicable to each other by, for example, the Bluetooth (registered trademark) technology. In other embodiments, the controller 5 and the game apparatus 3 may be connected with each other in a wired manner.
(Internal Structure of the Game Apparatus 3)
Next, with reference to
The CPU 10 performs the game processing by executing a game program stored on the optical disc 4, and acts as a game processor. The CPU 10 is connected to the system LSI 11. The system LSI 11 is connected to the CPU 10 and also to the external main memory 12, the ROM/RTC 13, the disc drive 14 and the AV-IC 15. The system LSI 11, for example, controls data transfer between the elements connected thereto, generates images to be displayed, and obtains data from external devices. An internal structure of the system LSI 11 will be described later. The external main memory 12, which is of a volatile type, has stored thereon programs, including a game program read from the optical disc 4 or a game program read from a flash memory 17, as well as various other data. The external main memory 12 is used as a work area or a buffer area of the CPU 10. The ROM/RTC 13 includes a ROM having a program for starting the game apparatus 3 incorporated thereon (so-called boot ROM) and a clock circuit for counting time (RTC: Real Time Clock). The disc drive 14 reads program data, texture data or the like from the optical disc 4 and writes the read data onto an internal main memory 11e or the external main memory 12.
The system LSI 11 includes an input/output processor (I/O processor) 11a, a GPU (Graphics Processor Unit) 11b, a DSP (Digital Signal Processor) 11c, a VRAM 11d, and the internal main memory 11e. Although not shown, these elements 11a through 11e are connected with each other via an internal bus.
The GPU 11b is a part of drawing means and generates an image in accordance with a graphics command (a command to draw an image) from the CPU 10. The VRAM 11d stores data necessary for the GPU 11b to execute the graphics command (polygon data, texture data or other data). The GPU 11b uses the data stored on the VRAM 11d to generate an image.
The DSP 11c acts as an audio processor and generates audio data using sound data or sound wave (sound tone) data stored on the internal main memory 11e or the external main memory 12.
The image data and the audio data generated as described above are read by the AV-IC 15. The AV-IC 15 outputs the read image data to the TV 2 via an AV connector 16, and outputs the read audio data to a speaker 2a built in the TV 2. Thus, the image is displayed on the TV 2 and also the sound is output from the speaker 2a.
The input/output processor (I/O processor) 11a transmits or receives data to or from the elements connected thereto, or downloads data from external devices. The input/output processor 11a is connected to the flash memory 17, a wireless communication module 18, a wireless controller module 19, an expansion connector 20, and an external memory card connector 21. The wireless communication module 18 is connected to an antenna 22, and the wireless controller module 19 is connected to an antenna 23.
The input/output processor 11a is connected to a network via the wireless communication module 18 and the antenna 22, and thus can communicate with other game apparatuses or various servers also connected to the network. The input/output processor 11a periodically accesses the flash memory 17, and detects whether or not there is data which needs to be transmitted to the network. When there is such data, the input/output processor 11a transmits such data to the network via the wireless communication module 18 and the antenna 22. The input/output processor 11a also receives data transmitted from other game apparatuses or data downloaded from a download server via the network, the antenna 22 and the wireless communication module 18, and stores the received data on the flash memory 17. The CPU 10 executes the game program and thus reads the data stored on the flash memory 17 to be used for the game program. The flash memory 17 may have stored therein data saved as a result of playing the game using the game apparatus 3 (data after or in the middle of the game) as well as the data to be transmitted to, or data received from, the other game apparatuses or various servers.
The input/output processor 11a receives operation data which is transmitted from the controller 5 via the antenna 23 and the wireless controller module 19 and stores the operation data in a buffer area of the internal main memory 11e or the external main memory 12 (temporary storage).
The input/output processor 11a is connected to the expansion connector 20 and the external memory card connector 21. The expansion connector 20 is a connector for an interface such as USB, SCSI or the like. The expansion connector 20 may be connected to a medium such as an external storage medium or the like, may be connected to a peripheral device such as another controller or the like, or may be connected to a wired communication connector, to communicate with the network instead of the wireless communication module 18. The external memory card connector 21 is a connector for an external storage medium such as a memory card or the like. For example, the input/output processor 11a can access an external storage medium via the expansion connector 20 or the external memory card connector 21 to store or read data.
The game apparatus 3 has a power button 24, a reset button 25, and an eject button 26. The power button 24 and the reset button 25 are connected to the system LSI 11. When the power button 24 is turned on, the elements of the game apparatus 3 are provided with power via an AC adaptor (not shown). When the reset button 25 is pressed, the system LSI 11 restarts a starting program of the game apparatus 3. The eject button 26 is connected to the disc drive 14. When the eject button 26 is pressed, the optical disc 4 is dismounted from the disc drive 14.
(Structure of the Controller 5)
With reference to
As shown in
The housing 31 has a plurality of operation buttons. As shown in
On a rear surface of the housing 31, a connector 33 is provided. The connector 33 is used for connecting the controller 5 with another device (for example, another controller).
In a rear part of the top surface of the housing 31, a plurality of LEDs (in
The controller 5 includes an imaging information calculation section 35 (
On the top surface of the housing 31, sound holes 31a are formed between the first button 32b and the home button 32f for releasing the sound outside from a speaker 49 (
With reference to
As shown in
As shown in
On the bottom main surface of the substrate 30, the microcomputer 42 and a vibrator 48 are provided. The vibrator 48 may be, for example, a vibration motor or a solenoid, and is connected to the microcomputer 42 via lines provided on the substrate 30 and the like. The controller 5 is vibrated by an actuation of the vibrator 48 based on an instruction from the microcomputer 42, and the vibration is conveyed to the hand of the player holding the controller 5. Thus, a so-called vibration-responsive game is realized. In this embodiment, the vibrator 48 is provided slightly forward with respect to the center of the housing 31. Since the vibrator 48 is provided closer to a front end than the center of the controller 5, the vibration of the vibrator 48 can vibrate the entire controller 5 more significantly. The connector 33 is attached at a rear edge of the main bottom surface of the substrate 30. In addition to the elements shown in
The shape of the controller 5, the shape of the operation buttons, and the number, position or the like of the acceleration sensor and the vibrator shown in
The operation section 32 includes the above-described operation buttons 32a through 32i, and outputs data representing an input state of each of the operation buttons 32a through 32i (whether each of the operation buttons 32a through 32i has been pressed or not) to the microcomputer 42 of the communication section 36.
The imaging information calculation section 35 is a system for analyzing image data taken by the imaging means, distinguishing an area having a high brightness in the image data, and calculating the center of gravity, the size and the like of the area. The imaging information calculation section 35 has, for example, a maximum sampling period of about 200 frames/sec., and therefore can trace and analyze even a relatively fast motion of the controller 5.
The imaging information calculation section 35 includes the infrared filter 38, the lens 39, the imaging element 40 and the image processing circuit 41. The infrared filter 38 allows only infrared light to pass therethrough, among light incident on the front surface of the controller 5. The lens 39 collects the infrared light which has been transmitted through the infrared filter 38 and causes the infrared light to be incident on the imaging element 40. The imaging element 40 is a solid-state imaging device such as, for example, a CMOS sensor or a CCD sensor. The imaging element 40 receives the infrared light collected by the lens 39 and outputs an image signal. The markers 6R and 6L of the marker section 6 located in the vicinity of the screen of the TV 2 each include an infrared LED for outputting infrared light forward from the TV 2. The provision of the infrared filter 38 allows the imaging element 40 to receive only the infrared light transmitted through the infrared filter 38 to generate image data. Therefore, the image of each of the markers 6R and 6L can be taken more accurately. Hereinafter, an image taken by the imaging element 40 will be referred to as a “taken image”. The image data generated by the imaging element 40 is processed by the image processing circuit 41. The image processing circuit 41 calculates the positions of imaging targets (the markers 6R and 6L) in the taken image. The image processing circuit 41 outputs a coordinate representing the calculated position to the microcomputer 42 of the communication section 36. The data on the coordinate is transmitted to the game apparatus 3 from the microcomputer 42 as operation data. Hereinafter, this coordinate will be referred to as a “marker coordinate”. The marker coordinate changes in accordance with the direction (posture) or the position of the controller 5 itself, and therefore the game apparatus 3 can calculate the direction or the position of the controller 5 using the marker coordinate.
The acceleration sensor 37 detects an acceleration (including a gravitational acceleration) of the controller 5. Namely, the acceleration sensor 37 detects a force (including the force of gravity) applied to the controller 5. The acceleration sensor 37 detects a value of the acceleration in a linear direction along a sensing axis (linear acceleration) among the accelerations acting on a detection section of the acceleration sensor 37. For example, in the case of a multi-axial (at least two-axial) acceleration sensor, an acceleration of a component along each axis is detected as an acceleration acting on the detection section of the acceleration sensor. For example, a three-axial or two-axial acceleration sensor 37 may be available from Analog Devices, Inc. or STMicroelectronics N.V. The acceleration sensor 37 is, for example, an electrostatic capacitance type, but may be of any other system.
In this embodiment, the acceleration sensor 37 detects a linear acceleration in each of an up-down direction with respect to the controller 5 (Y-axis direction shown in
Data representing the acceleration detected by the acceleration sensor 37 (acceleration data) is output to the communication section 36. Since the acceleration detected by the acceleration sensor 37 changes in accordance with the direction (posture) or the motion of the controller 5 itself, the game apparatus 3 can calculate the direction or the motion of the controller 5 using the acceleration data. Namely, the game apparatus 3 calculates the posture or the motion of the controller 5 based on the acceleration data and the marker coordinate data described above.
The communication section 36 includes the microcomputer 42, a memory 43, the wireless module 44 and the antenna 45. The microcomputer 42 controls the wireless module 44 for wirelessly transmitting the data obtained by the microcomputer 42 to the game apparatus 3 while using the memory 43 as a storage area during processing.
Data which is output from the operation section 32, the imaging information calculation section 35, and the acceleration sensor 37 to the microcomputer 42 is temporarily stored on the memory 43. Such data is transmitted to the game apparatus 3 as the operation data. Namely, at the transmission timing to the wireless controller module 19, the microcomputer 42 outputs the operation data stored on the memory 43 to the wireless module 44. The wireless module 44 modulates a carrier wave of a predetermined frequency with the operation data and radiates the resultant very weak radio signal from the antenna 45, using, for example, the Bluetooth (registered trademark) technology. Namely, the operation data is modulated into a very weak radio signal by the wireless module 44 and transmitted from the controller 5. The very weak radio signal is received by the wireless controller module 19 on the side of the game apparatus 3. The received very weak radio signal is demodulated or decoded, so that the game apparatus 3 can obtain the operation data. The CPU 10 of the game apparatus 3 executes the game processing based on the obtained operation data and the game program. The wireless communication from the communication section 36 to the wireless controller module 19 is performed at a predetermined cycle. Since game processing is generally performed at a cycle of 1/60 sec. (i.e., at a cycle of one frame time), the wireless transmission is preferably performed at a cycle of a shorter time period. The communication section 36 of the controller 5 outputs the operation data to the wireless controller module 19 of the game apparatus 3 at a rate of, for example, once per 1/200 seconds.
By using the controller 5, the player can perform an operation of instructing an arbitrary position on the screen using the controller 5 or moving the controller 5 itself, in addition to a conventional general game operation of pressing the operation buttons.
In this embodiment, the game apparatus 3 uses the controller 5 as an input device. The information processing apparatus according to certain example embodiments is not limited to such a game apparatus, and may be any apparatus capable of generating an image in a virtual world and displaying such an image on a display device connected thereto. The input device may be anything, for example, a keyboard or a mouse, and the information processing apparatus may not include an input device.
(Overview of Image Generation Processing)
Hereinafter, with reference to
The reference image is obtained by performing perspective transformation of a three-dimensional virtual space based on the position of the virtual camera. As shown in
The game apparatus 3 generates a reference image and a blurred image, and also sets an α value for each pixel of the reference image. The α value represents the ratio of the blurred image with respect to a post-synthesis image obtained by synthesizing the blurred image and the reference image (also referred to as a “blending ratio”). Specifically, the α value is in the range of 0 ≤ α ≤ 1. When α=0, the ratio of the blurred image in the post-synthesis image is 0% (the ratio of the reference image is 100%). When α=1, the ratio of the blurred image in the post-synthesis image is 100% (see expression (9) described later).
In this embodiment, the α value is provisionally set in accordance with the depth value (i.e., Z value) of each pixel. Specifically, for a pixel having a Z value equal to the distance from the viewpoint to the focal point of the virtual camera (herein, such a distance will be referred to as a “focal length”), α is set as α=0. For the other pixels, the α value is set to be larger as the difference between the Z value and the focal length is larger. By synthesizing the reference image and the blurred image using such α values, an image, in which an object having a depth closer to the focal length is shown with a clearer outline and an object having a depth farther from the focal length is shown with a blurred outline, can be generated. In other words, an image, in which an object having a depth closer to the focal length is more focused and an object having a depth farther from the focal length is more unfocused, can be generated.
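The provisional depth-to-α mapping described above can be sketched as follows. The linear ramp and the normalization constant `max_depth_range` are illustrative assumptions; the text only requires that α grow as the difference between the Z value and the focal length grows:

```python
def provisional_alpha(z, focal_length, max_depth_range):
    # alpha is 0 when the pixel's depth equals the focal length and grows
    # toward 1 as the depth moves away from it; the linear ramp and the
    # normalization constant max_depth_range are illustrative assumptions.
    diff = abs(z - focal_length)
    return min(diff / max_depth_range, 1.0)
```

With this mapping, a pixel at the focal length stays fully focused (α=0) and pixels far from it approach a fully blurred result (α=1).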
However, when the synthesis is performed using the α values which are set in accordance with the Z values of the respective pixels as they are, as shown in
(Details of the Image Generation Processing)
Hereinafter, with reference to
The game program 60 is an exemplary image processing program according to this embodiment. At an appropriate timing after the power of the game apparatus 3 is turned on, the game program 60 is partially or entirely read onto the main memory from the optical disc 4. The game program 60 includes a program for causing the CPU 10 to execute the image generation processing shown in
The image processing data 61 is data used in the image generation processing (
The reference image data 62 represents the reference image described above. Specifically, the reference image data 62 represents a color value and a Z value of each pixel of the reference image. The blurred image data 63 represents the blurred image described above. Specifically, the blurred image data 63 represents a color value of each pixel of the blurred image. The reference image data 62 and the blurred image data 63 may be stored on a frame buffer included in the VRAM 11d.
The α information data 64 represents α information which is set for each pixel of the reference image. The α information represents an α value provisionally set for calculating a final α value. In this embodiment, the α information is represented by a two-dimensional vector. Specifically, such a two-dimensional vector includes an n component and an f component, and is represented as (n, f). As described later in detail, in this embodiment, an α value is set for either the n component or the f component depending on whether the Z value of the respective pixel is larger or smaller than the focal length, and “0” is set for the other component. The corrected α information represents α information obtained by correcting the α information represented by the α information data 64 by the correction processing. The correction processing will be described later in detail.
The α value data 66 represents an α value set for each pixel. The α value is calculated based on the α information represented by the corrected α information data 65.
Now, with reference to
Referring to
In step S2, the CPU 10 generates a reference image in accordance with the settings performed in step S1. The reference image is generated by executing perspective transformation such that an image representing the virtual space seen from the position of, and in the direction of, the virtual camera is generated. Namely, a color value and a Z value of each pixel of such an image are calculated. Data representing the reference image generated in step S2 (i.e., data representing the color value and the Z value of each pixel of the reference image) is stored on the main memory as the reference image data 62.
In step S3, the CPU 10 generates a blurred image from the reference image. In this embodiment, the blurred image is generated by smoothing the color value of each pixel of the reference image represented by the reference image data 62 stored on the main memory. Hereinafter, with reference to
PA′=(PA+PB+PC+PD+PE)/5 (1)
The CPU 10 executes the processing represented by expression (1) for each pixel of the reference image, so that the color value of each of the pixel of the blurred image is obtained. The resultant data representing the blurred image (i.e., data representing the color value of each pixel of the blurred image) is stored on the main memory as the blurred image data 63.
In other embodiments, expression (1) may be replaced with expression (2), (3) or (4).
PA′=(PA+PF+PG+PH+PI)/5 (2)
PA′=(PA+PB+PC+PD+PE+PF+PG+PH+PI)/9 (3)
PA′=(PA+PB+PC+PD+PE+PJ+PK+PL+PM)/9 (4)
Expression (2) represents that the color value of pixel A, which is a processing target, of the blurred image is obtained as an average of the color value PA of pixel A of the reference image and the color values PF through PI of pixels F through I which are two pixels away from pixel A in the up, down, right and left directions of the reference image. When expression (2) is used, the resultant image is more blurred than when expression (1) is used. Expression (3) represents that the color value of pixel A (processing target) of the blurred image is obtained as an average of the color value PA of pixel A of the reference image and the color values PB through PI of pixels B through I which are located within two pixels from pixel A in the up, down, right and left directions of the reference image. When expression (3) is used, the resultant image is more blurred than when expression (1) is used. Expression (4) represents that the color value of pixel A (processing target) of the blurred image is obtained as an average of the color value PA of pixel A of the reference image and the color values PB through PE and PJ through PM of pixels B through E and J through M which are located within one pixel from pixel A in the up, down, right, left, and four oblique (upper right, upper left, lower right and lower left) directions of the reference image. When expression (4) is used, the resultant image is more uniformly blurred than when expression (1) is used.
As described above, any method is usable to generate a blurred image. The CPU 10 may generate a blurred image by smoothing the color values of the reference image using a Gauss filter, by once enlarging and then reducing the reference image and thus roughening the outline thereof, or by performing bilinear filtering or trilinear filtering on the reference image.
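Expression (1) amounts to a small box filter over a pixel and its four direct neighbors. A minimal sketch for a single-channel image stored as a flat row-major list; clamping at the image edges is an assumption the text does not specify:

```python
def blur_image(ref, w, h):
    # Expression (1): each output color is the average of the pixel and its
    # four direct neighbors; edge pixels are clamped (an assumption).
    out = [0.0] * (w * h)
    for y in range(h):
        for x in range(w):
            total = 0.0
            for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                nx = min(max(x + dx, 0), w - 1)
                ny = min(max(y + dy, 0), h - 1)
                total += ref[ny * w + nx]
            out[y * w + x] = total / 5.0
    return out
```

Expressions (2) through (4) change only the neighbor offsets in the inner tuple; a wider or denser neighborhood yields a stronger or more uniform blur.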
Returning to
Upon calculating the α value, the CPU 10 determines whether the α value is set for the n component or the f component based on whether the Z value is larger or smaller than the focal length Z1. Specifically, when the Z value is smaller than the focal length Z1, the α value is set for the n component. Namely, a two-dimensional vector in which the value of the n component is the α value and the value of the f component is 0 is calculated. When the Z value is larger than the focal length Z1, the α value is set for the f component. Namely, a two-dimensional vector in which the value of the n component is 0 and the value of the f component is the α value is calculated. In other words, for the pixels of the object located closer to the viewpoint of the virtual camera than the focal length, a two-dimensional vector of (α, 0) (α represents the α value) is set; whereas for the pixels of the object located farther from the viewpoint of the virtual camera than the focal length, a two-dimensional vector of (0, α) is set. When the Z value is equal to the focal length Z1, a two-dimensional vector of (0, 0) is set. In this embodiment, a two-dimensional vector calculated in this manner is the α information. Data representing the α information of each pixel is stored on the main memory as the α information data 64.
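The per-pixel branching that fills the (n, f) vector might look like the following sketch; `alpha` is the provisional α value already computed from the Z value:

```python
def alpha_info(z, focal_length, alpha):
    # The alpha value goes into the n component for pixels nearer than the
    # focal length, into the f component for pixels farther away, and the
    # vector is (0, 0) for a pixel exactly at the focal length.
    if z < focal_length:
        return (alpha, 0.0)
    if z > focal_length:
        return (0.0, alpha)
    return (0.0, 0.0)
```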
Returning to
αA′=(αA+αB+αC+αD+αE)/5×G (5)
In expression (5), G is a constant, and is preferably set to a value equal to or greater than 1. The CPU 10 executes the processing represented by expression (5) on the α information set for each pixel of the reference image. The resultant data representing the post-correction α information of each pixel is stored on the main memory as the corrected α information data 65.
In other embodiments, expression (5) may be replaced with expression (6), (7) or (8).
αA′=(αA+αF+αG+αH+αI)/5×G (6)
αA′=(αA+αB+αC+αD+αE+αF+αG+αH+αI)/9×G (7)
αA′=(αA+αB+αC+αD+αE+αJ+αK+αL+αM)/9×G (8)
When expression (6) is used, the post-correction n component value αA′ of pixel A, which is a processing target, is obtained by first obtaining an average of the pre-correction n component value αA of pixel A and the pre-correction n component values αF through αI of pixels F through I which are two pixels away from pixel A in the up, down, right and left directions, and then multiplying the average by the predetermined value (G). When expression (7) is used, the post-correction n component value αA′ of pixel A (processing target) is obtained by first obtaining an average of the pre-correction n component value αA of pixel A and the pre-correction n component values αB through αI of pixels B through I which are located within two pixels from pixel A in the up, down, right and left directions, and then multiplying the average by the predetermined value (G). When expression (8) is used, the post-correction n component value αA′ of pixel A (processing target) is obtained by first obtaining an average of the pre-correction n component value αA of pixel A and the pre-correction n component values αB through αE and αJ through αM of pixels B through E and J through M which are located within one pixel from pixel A in the up, down, right, left, and four oblique (upper right, upper left, lower right and lower left) directions, and then multiplying the average by the predetermined value (G). The averaging of the n component values may be performed using a Gauss filter or using bilinear filtering or trilinear filtering.
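Expressions (5) through (8) all smooth the n component and then scale it by G. A sketch of expression (5) on a flat single-channel buffer; edge clamping, the final clamp of the result to 1, and the default gain value are assumptions not taken from the text:

```python
def correct_n(n_vals, w, h, gain=1.5):
    # Expression (5): smooth the n component over the pixel and its four
    # neighbors, then multiply by the constant G (here `gain`; the text
    # prefers G >= 1). Edge clamping, the clamp of the result to 1, and
    # the default gain value are assumptions.
    out = [0.0] * (w * h)
    for y in range(h):
        for x in range(w):
            total = 0.0
            for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                nx = min(max(x + dx, 0), w - 1)
                ny = min(max(y + dy, 0), h - 1)
                total += n_vals[ny * w + nx]
            out[y * w + x] = min(total / 5.0 * gain, 1.0)
    return out
```

Because the averaging spreads nonzero n components into neighboring pixels and G ≥ 1 keeps already-large values large, pixels bordering a high-α region gain an increased n component, which is the intended effect of the correction.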
Hereinafter, with reference to
When the correction processing in step S5 is performed on pixels P1 through P7 shown in
The range of the pixels, the n component of which is increased by the correction processing, varies depending on the particulars of the correction processing. In the example of
By the correction processing in step S5, as shown in
Returning to
By the processing in step S6, the final α value is set for each pixel. As a result, the α value of each pixel in the area 72 in
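The text does not spell out the exact combination used in step S6, so the following is only a plausible reading offered as an assumption: because at most one of the two components is nonzero before the correction, summing the corrected components (clamped to the valid range) recovers a single final α value:

```python
def final_alpha(n, f):
    # Sum of the corrected n and f components, clamped so that the final
    # value stays in 0 <= alpha <= 1. The exact step-S6 rule is an
    # assumption here, not reproduced from the text.
    return min(n + f, 1.0)
```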
In step S7, the CPU 10 synthesizes the reference image and the blurred image based on the α value. Specifically, the CPU 10 refers to the reference image data 62, the blurred image data 63, and the α value data 66 to calculate a color value C of each pixel of the display image in accordance with the following expression (9).
C=C1×(1−α)+C2×α (9)
In expression (9), variable C1 represents the color value of the reference image, and variable C2 represents the color value of the blurred image. The CPU 10 calculates the color value of each pixel of the display image using expression (9), so that data on the display image is obtained.
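Expression (9) is ordinary alpha blending, applied per color channel:

```python
def synthesize(ref_color, blur_color, alpha):
    # Expression (9): C = C1 * (1 - alpha) + C2 * alpha, where C1 is the
    # reference-image color and C2 the blurred-image color; alpha = 0
    # keeps the reference image, alpha = 1 yields the blurred image.
    return ref_color * (1.0 - alpha) + blur_color * alpha
```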
The pixels in the area 72 shown in
In step S8, the CPU 10 draws the display image obtained in step S7. Namely, the CPU 10 writes the data on the display image obtained in step S7 onto the frame buffer included in the VRAM 11d. Thus, the display image is displayed on the TV 2.
In step S9, the CPU 10 determines whether or not to terminate the image generation processing. The determination is made in accordance with, for example, whether or not the player has made an instruction to terminate the play. Until it is determined that the image generation processing is to be terminated in step S9, the processing in steps S1 through S9 is repeated. When the determination result in step S9 is positive, the CPU 10 terminates the image generation processing shown in
As described above, in this embodiment, an α value of each pixel of the reference image is first provisionally set, and then such a provisional α value is corrected (smoothed). By such an arrangement, an unfocused object (first object 51) is prevented from being shown with a clear outline in the border with a focused object (second object 52) (i.e., the image as shown in
In this embodiment, the provisional α value (α information) is represented by a two-dimensional vector, so that the α value is corrected (step S5) for only pixels having a Z value smaller than the focal length. The α value is not corrected for the pixels having a Z value larger than the focal length. Therefore, as shown in
(Modification Regarding the α Information (1))
In other embodiments, the α information may not be represented by a two-dimensional vector and may be treated as one value (scalar value). Namely, in step S4, the α value calculated in accordance with the Z value may be used as the α information. In this case, the correction processing in step S5 is performed on all the pixels. According to this arrangement, the data amount of the α information can be reduced, and the processing of calculating the α value from the two-dimensional vector (step S6) is not necessary. As a result, the image generation processing can be simplified.
(Modification Regarding the α Information (2))
In other embodiments, an α value with a positive or negative sign may be used as the α information.
(Modification Regarding the α Information (3))
In the above embodiment, the two-dimensional vector is used as the α information, and such a two-dimensional vector is set such that at least one of the components is 0 (step S4). Specifically, when the Z value is larger than the focal length Z1, the n component value is set to 0 and the f component value is set to the α value. In other embodiments, even when the Z value is larger than the focal length Z1, the n component value may be set to a predetermined value which is not 0 (the f component value is set to the α value). In this case, in step S5, the CPU 10 performs the correction processing on the pixels having an n component value which is not 0 as well as the pixels having a Z value smaller than the focal length Z1. According to such an arrangement, the designer can easily designate the pixels to be the targets of the correction processing. For example, in the case where the predetermined value is set to a value in accordance with the f component value (e.g., a value of x% of the f component value), the border L2 between the second object 52 and the third object 53 can be shown as being slightly blurred. Alternatively, for the pixels having a Z value smaller than a predetermined value (which may be a value larger than the focal length Z1), the n component value may be set to the α value. The two-dimensional vector may be used as the α information in this manner, so that the pixels to be corrected can be easily designated.
(Modification Regarding the Correction Processing)
In the above embodiment, the CPU 10 corrects the α value by smoothing the n component value of each pixel in step S5. The method of the correction processing in step S5 is not limited to smoothing. As long as the α value is increased for the pixel having a smaller α value among two adjacent pixels which have an α value difference of a predetermined value or greater, any method is usable.
Specifically, in step S5, the CPU 10 first selects pixels to be the target of correction from the pixels of the reference image. For example, the CPU 10 selects pixels fulfilling the conditions that the α value of the pixel is smaller than that of either one of the adjacent pixels and that the α value difference is a predetermined value or greater. In addition to the above conditions, another condition that the α value is equal to or smaller than a predetermined value may be added. In other embodiments, the conditions may be that the α value of the pixel is smaller than that of either one of the adjacent pixels and is equal to or smaller than a predetermined value. Then, the CPU 10 performs the correction processing such that the α value of each pixel selected as the target of correction is increased. Specifically, the CPU 10 may correct the α value by adding a predetermined constant to the pre-correction value, or by equalizing the pre-correction value to the α value of the adjacent pixel.
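One concrete form of this selection-based correction, sketched on a 1-D row of pixels and using the "equalize to the adjacent pixel" option mentioned above (the 1-D layout is an illustrative simplification):

```python
def correct_by_selection(alphas, threshold):
    # A pixel whose alpha is smaller than an adjacent pixel's by
    # `threshold` or more is raised to that neighbor's alpha
    # (the "equalizing" option; adding a constant is the other option).
    out = list(alphas)
    for i, a in enumerate(alphas):
        for j in (i - 1, i + 1):
            if 0 <= j < len(alphas) and alphas[j] - a >= threshold:
                out[i] = max(out[i], alphas[j])
    return out
```

A sharp α step between adjacent pixels is thus widened by one pixel toward the low-α side, while gradual transitions below the threshold are left untouched.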
(Modification Regarding the Flow of the Image Generation Processing)
In the above embodiment, the game apparatus 3 provisionally sets the α value in accordance with the Z value set for each pixel of the reference image, and corrects the provisional α value to obtain the final α value. In other embodiments, the Z value set for each pixel of the reference image may be corrected and an α value in accordance with the corrected Z value may be calculated.
In the modification shown in
According to one specific method for this correction processing, the Z value of each pixel is smoothed. In the case where the smoothing processing is performed only on the pixels having a Z value equal to or smaller than the focal length, substantially the same display image as that in the above embodiment (
According to another specific method for this correction processing, pixels as the target of correction are selected from the pixels of the reference image, and the Z value of each selected pixel is corrected. Specifically, the CPU 10 selects pixels having a Z value which is within a predetermined range including the focal length (e.g., a Z value which is different from the focal length by a value within a predetermined range) and is different from the Z value of an adjacent pixel by a predetermined value or greater. The CPU 10 then, for example, increases the Z value of each selected pixel by a predetermined value, or equalizes the Z value of each selected pixel to the Z value of the adjacent pixel. Thus, the post-correction Z value is obtained.
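The selection-based Z correction in this modification might be sketched as follows, again on a 1-D row of pixels and with parameter names that are assumptions: a pixel whose Z value lies within `near_range` of the focal length and differs from an adjacent pixel's Z by `threshold` or more is equalized to that neighbor.

```python
def correct_z(z_vals, focal_length, near_range, threshold):
    # Only pixels whose Z lies within `near_range` of the focal length are
    # candidates; a candidate whose Z differs from an adjacent pixel's Z by
    # `threshold` or more is equalized to that neighbor (one of the two
    # correction options the text mentions).
    out = list(z_vals)
    for i, z in enumerate(z_vals):
        if abs(z - focal_length) > near_range:
            continue
        for j in (i - 1, i + 1):
            if 0 <= j < len(z_vals) and abs(z_vals[j] - z) >= threshold:
                out[i] = z_vals[j]
                break
    return out
```

A focused pixel at a border to a distant object thereby inherits the distant Z value, so the α derived from it in step S12 becomes large and the blur extends into the focused side of the border, matching the effect of the main embodiment.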
In step S12 after step S11, the CPU 10 sets the α value of each pixel in accordance with the Z value obtained in step S11. Specifically, when the Z value is equal to the focal length, the α value is set as α=0. As the difference between the Z value and the focal length is larger, the α value is larger with the maximum being α=1. After step S12, the processing in steps S7 through S9 is executed. In this modification also, substantially the same display image as in the above embodiment (
In the above embodiment, the image generation processing for generating a focused image is performed during the game processing executed by the game apparatus 3. The present invention is not limited to being used for a game. The present invention is applicable to various image processing apparatuses for generating an image of a three-dimensional space.
Certain example embodiments may relate to and be usable for, for example, game apparatuses, programs and the like in order to generate a focused image more realistically.
While the invention has been described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is understood that numerous other modifications and variations can be devised without departing from the scope of the invention.
Assignment: Apr 24, 2007 — Takuhiro Dohta to Nintendo Co., Ltd. (assignment of assignors interest; Reel 019326, Frame 0298). Application filed May 03, 2007 by Nintendo Co., Ltd.