An information presentation apparatus creates a three-dimensional animation of a specific object in a three-dimensional virtual space on the basis of the human characteristic of paying more attention to a moving object. A user's attention can thus be drawn to a specific object, such as a destination building, in the virtual space displayed on the screen. Irrespective of whether the specific object is selected by the user or designated by the system as an object to which the user's attention is to be drawn, the user can easily detect the attention-drawing object.
2. An information presentation method in a three-dimensional virtual space, comprising:
selecting an object by specifying an object to be displayed at a location in a three-dimensional map image, the location being found on a basis of an information search result in accordance with a search condition input by a user;
creating an animation by deforming the object shape; and
playing an animation by showing the deforming object shape in the three-dimensional virtual space,
wherein the selecting further includes obtaining the location of the object satisfying the search condition, and the playing further includes showing the animation at the location.
1. An information presentation apparatus in a three-dimensional virtual space, comprising:
object selecting means for specifying an object to be displayed at a location in a three-dimensional map image, the location being found on a basis of an information search result in accordance with a search condition input by a user;
animation creating means for creating a deformed animation of the object shape; and
animation playing means for playing the deformed animation of the object shape in the three-dimensional virtual space,
wherein the object selecting means is further configured to obtain the location of the object satisfying the search condition, and the animation playing means is further configured to play the deformed animation at the location.
10. A computer program written in a computer-readable format to perform on a computer system a process for presenting information in a three-dimensional virtual space, comprising:
selecting an object by specifying an object to be displayed at a location in a three-dimensional map image, the location being found on the basis of an information search result in accordance with a search condition input by a user;
creating an animation by deforming the object shape; and
playing an animation by showing the animation of the object shape in the three-dimensional virtual space,
wherein the selecting step further includes obtaining the location of the object satisfying the search condition, and the playing step further includes showing the animation at the location.
3. An information presentation method according to
4. An information presentation method according to
an animation of the object appearing from a ground upward in the three-dimensional virtual space,
an animation of the object vertically expanding and contracting,
an animation of the object swaying from side to side, and/or
an animation of the object disappearing into the ground in the three-dimensional virtual space.
5. An information presentation method according to
in the playing step, the animation is shown at a corresponding place in the three-dimensional virtual space on a basis of map data.
6. An information presentation method according to
the playing further includes switching from one animation being played to another animation, in accordance with a result of the selecting step.
7. An information presentation method according to
wherein, in the selecting step, an information presentation intensity showing a goodness of fit with respect to the search condition is added to the search result, in the creating step, a plurality of patterns of deformed animations of the object shape is created, and in the playing step, an emphasized animation pattern is selected according to the level of the information presentation intensity.
8. An information presentation method according to
9. An information presentation method according to
The present application claims priority based on JP Application No. 2002-123500 and JP Application No. 2002-123511, the entire contents of which are incorporated herein by reference.
1. Field of the Invention
The present invention relates to information presentation apparatuses and methods and computer programs therefor in three-dimensional virtual spaces, and more particularly relates to an information presentation apparatus and method and a computer program therefor in a three-dimensional virtual space including a plurality of architectural structures such as buildings and houses.
More particularly, the present invention relates to an information presentation apparatus and method and a computer program therefor for drawing a user's attention to a displayed object serving as a target in a three-dimensional virtual space, and more specifically relates to an information presentation apparatus and method and a computer program therefor for arousing a user's attention while preventing an architectural structure serving as a target from being hidden in many structure objects including buildings and houses.
2. Description of the Related Art
Owing to improvements in the operation speed and rendering functions of current computer systems, research and development of so-called “computer graphics” (CG) technology, which uses computer resources to create and process graphics and images, have been conducted intensively. Furthermore, such computer graphics technology has been put into practice.
For example, three-dimensional graphics technology generates a more realistic three-dimensional-like two-dimensional high-definition color image by expressing, in terms of a mathematical model, an optical phenomenon that is observed when a three-dimensional object is irradiated with light by a predetermined light source and, on the basis of the mathematical model, shading the surface of the object or pasting a pattern onto the surface of the object. Computer graphics has been increasingly used in CAD (Computer-Aided Design) and CAM (Computer-Aided Manufacturing), which are fields of application of science, engineering, and manufacturing, and in various other fields under development.
Nowadays, computer graphics has been applied to create a three-dimensional virtual space or an augmented reality space. For example, conversations among many logged-in users in a virtual society developed in a three-dimensional virtual space enable the users to have a more realistic, exciting virtual experience. In navigation systems and the like, a stereographic, three-dimensional map image is displayed, taking into consideration the relief of the ground, landscape, and buildings on the ground. As a result, high-quality map information display services are offered. Such three-dimensional map information may be applied to public services such as flood control simulation or systems using virtual spaces.
In navigation systems, more faithful representations of architectural structures such as buildings and houses on the ground in a three-dimensional virtual space are highly demanded. In a three-dimensional virtual space that realizes a society shared with other users, arranging a variety of architectural structures makes the virtual experience more realistic and immersive.
On the other hand, detection of a target object from a three-dimensional image containing many displayed objects such as buildings is difficult or complicated.
For example, in a navigation system, when a building serving as a destination is behind an architectural structure or is tangled with another displayed object, the effect of destination guidance is degraded.
When a displayed building that actually requires a user's attention is hidden among other displayed objects in a shared virtual society, the user's attention is diverted. As a result, the user cannot have a satisfactory virtual experience.
Accordingly, it is an object of the present invention to provide an excellent information presentation apparatus and method and a computer program therefor for offering suitable information presentation services in a three-dimensional virtual space including a plurality of architectural structures such as buildings and houses.
It is another object of the present invention to provide an excellent information presentation apparatus and method and a computer program therefor for suitably drawing a user's attention to a displayed object serving as a target in a three-dimensional virtual space.
It is a further object of the present invention to provide an excellent information presentation apparatus and method and a computer program therefor for arousing a user's attention while preventing an architectural structure serving as a target from being hidden in many architectural structure objects such as buildings and houses.
It is yet another object of the present invention to provide an excellent information presentation apparatus and method and a computer program therefor for efficiently generating, when creating a three-dimensional virtual space, a stereographic, three-dimensional map image by taking into consideration the relief of the ground, landscape, and buildings on the ground.
In order to achieve the foregoing objects, according to a first aspect of the present invention, an information presentation apparatus or method in a three-dimensional virtual space is provided including an object selecting unit or step of specifying an object to be displayed in the three-dimensional virtual space; an animation creating unit or step of creating a deformed animation of the object shape; and an animation playing unit or step of playing the animation of the object shape in the three-dimensional virtual space.
The information presentation apparatus or method according to the first aspect of the present invention utilizes the human characteristic of paying more attention to a moving object. Creating a three-dimensional animation of a specific three-dimensional object in the virtual space arouses the user's attention.
Specifically, the user's attention is drawn to a specific object, such as a destination building, in the three-dimensional virtual space displayed on a screen. Irrespective of whether the specific object is selected by the user or designated by the system as an object to which the user's attention is to be drawn, the user can easily detect the attention-drawing object.
The object selecting unit or step may select and obtain the object on the basis of an information search result in accordance with a search condition input by a user.
The animation creating unit or step may create a control solid defined by many control points that are arranged in a lattice structure around the object shape and may create the deformed animation in a relatively easy manner by mapping a deformation of the control solid onto a deformation of the object shape.
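The specification does not name a particular deformation scheme, but the control-solid description corresponds to classical free-form deformation (FFD). The following is a minimal sketch, assuming a trivariate Bernstein control lattice as in Sederberg-Parry FFD; the function and variable names are illustrative and not taken from the specification.

```python
import numpy as np
from math import comb

def ffd(points, lattice, bbox_min, bbox_max):
    """Map object vertices through a deformed control lattice (free-form deformation).

    points   : (N, 3) object vertices lying inside the undeformed control solid
    lattice  : (l+1, m+1, n+1, 3) deformed positions of the control points
    bbox_min, bbox_max : opposite corners of the undeformed control solid
    """
    l, m, n = (d - 1 for d in lattice.shape[:3])
    # Local (s, t, u) coordinates of each vertex inside the control solid.
    stu = (points - bbox_min) / (bbox_max - bbox_min)
    out = np.zeros(points.shape)
    for i in range(l + 1):
        bi = comb(l, i) * stu[:, 0] ** i * (1 - stu[:, 0]) ** (l - i)
        for j in range(m + 1):
            bj = comb(m, j) * stu[:, 1] ** j * (1 - stu[:, 1]) ** (m - j)
            for k in range(n + 1):
                bk = comb(n, k) * stu[:, 2] ** k * (1 - stu[:, 2]) ** (n - k)
                # Weighted sum of control points gives the deformed vertex positions.
                out += (bi * bj * bk)[:, None] * lattice[i, j, k]
    return out
```

Swaying or stretching the building then amounts to displacing a few layers of control points per frame and re-evaluating the vertices, which is the mapping from a deformation of the control solid onto a deformation of the object shape described above.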
The animation creating unit or step may create an animation of the object appearing from the ground upward in the three-dimensional virtual space, an animation of the object vertically expanding and contracting, an animation of the object swaying from side to side, or an animation of the object disappearing into the ground in the three-dimensional virtual space.
The word “object” refers to the shape of an architectural structure such as a building or a house and has a location such as a lot number at which the object is disposed in the three-dimensional space. The animation playing step or unit may play the deformed animation at a corresponding place in the three-dimensional virtual space on the basis of map data.
The animation creating unit or step may create a plurality of patterns of deformed animations of the object shape. The animation playing unit or step may switch from one deformed animation being played to another in accordance with the object selection result.
For example, when many objects are obtained by an information search, animations of all objects satisfying the search condition are first generated and played so that the objects appear in the three-dimensional virtual space. Subsequently, for an object with a high presentation intensity level, a deformed animation for emphasized display, such as one swaying from side to side or one vertically expanding and contracting, is played. Alternatively, an animation of an object with a low presentation intensity level disappearing from the three-dimensional virtual space is played.
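As a rough sketch of this pattern switching, assuming the two intensity levels used in the example above (the pattern names and thresholds are illustrative):

```python
def pick_animation(intensity: int) -> str:
    """Choose a deformation pattern from an object's information presentation intensity."""
    if intensity >= 2:
        return "sway_side_to_side"    # emphasized display for a close match
    if intensity == 1:
        return "appear_from_ground"   # plain presentation for a category match
    return "sink_into_ground"         # object has dropped out of the result set
```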
According to a second aspect of the present invention, a computer program written in a computer-readable format to perform on a computer system a process for presenting information in a three-dimensional virtual space is provided. The computer program includes an object selecting step of specifying an object to be displayed in the three-dimensional virtual space; an animation creating step of creating a deformed animation of the object shape; and an animation playing step of playing the deformed animation of the object shape in the three-dimensional virtual space.
The computer program according to the second aspect of the present invention defines a computer program written in a computer-readable format to realize a predetermined process on a computer system. In other words, installing the computer program according to the second aspect of the present invention into a computer system exhibits a cooperative operation on the computer system, thereby achieving advantages similar to those of the information presentation apparatus or method according to the first aspect of the present invention.
According to a third aspect of the present invention, a three-dimensional model generating system or method is provided that generates a building model on the basis of elevation data generated by mapping altitude information onto each observation point on a two-dimensional plane. The system or method includes a cell generating unit or step of dividing an observation area in which observation points reside on the two-dimensional plane into a mesh of separate cells and registering a set of observation points that have been fitted to a plane in each of the separate cells; a plane integration unit or step of integrating adjacent cells into a plane; a boundary line extracting unit or step of extracting a boundary line of each integrated plane; and a three-dimensional plane creating unit or step of creating a three-dimensional plane defined by a set of ridges included in the boundary between adjacent planes.
According to the three-dimensional model generating system or method as set forth in the third aspect of the present invention, an observation area is divided into rectangular cells. Subsequently, when adjacent cells can be fitted to a single plane, the adjacent cells are integrated into a plane. As a result, a set of planes is obtained. Each plane's boundary line is extracted as a polyline with a small number of bends. A vertical face is inserted at the boundary between adjacent planes. Accordingly, a building made of three-dimensional polygons is generated.
When generating a building model based on elevation data that is generated by mapping altitude information onto each observation point on the two-dimensional plane, no building map is necessary. Also, observation positions may be unevenly distributed.
Specifically, according to the three-dimensional model generating system or method as set forth in the third aspect of the present invention, a three-dimensional model of a building on the ground is generated only on the basis of altitude information distributed on the plane, which is obtained by irradiating the ground with laser light from space using an airplane or a satellite.
The elevation data obtained by irradiating the ground with laser light from space using an airplane or the like has no regularly distributed observation points. The cell generating unit or step may normalize the position of each observation point from the measurement coordinate system to the processing coordinate system and then divide the observation area into a mesh.
The cell generating unit or step may divide the observation area into a mesh of smallest cells constructing initial planes. Since the observation points are not distributed regularly, a smallest cell in which no observation point resides may be integrated with the surrounding cell that has the lowest average altitude. This is based on the empirical rule that, in laser measurement, many of the cells containing no observation point lie behind buildings, and it is thus considered most appropriate to integrate such a portion with the portion at the lowest elevation.
Alternatively, the cell generating unit or step need not divide the mesh into smallest cells. The cell generating unit or step may repeatedly divide each observation area in which observation points reside into a mesh of cells until a set of observation points that can be fitted to a plane is obtained. Accordingly, the largest possible cells that can be fitted to a plane remain in the observation area.
The plane integration unit or step may estimate regression planes on the basis of altitude information at each observation point included in the adjacent cells. When a correlation coefficient exceeds a predetermined value, the plane integration unit or step may integrate the adjacent cells into a plane.
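A sketch of this fitting test, assuming the contribution rate is the coefficient of determination (R^2) of a least-squares regression plane and that the threshold value is illustrative:

```python
import numpy as np

def fit_regression_plane(points):
    """Fit z = ax + by + c to observation points of shape (N, 3) by least squares.

    Returns the plane coefficients and the contribution rate (R^2) of the fit.
    """
    xy1 = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    z = points[:, 2]
    coeffs, *_ = np.linalg.lstsq(xy1, z, rcond=None)
    residuals = z - xy1 @ coeffs
    ss_res = float(np.sum(residuals ** 2))
    ss_tot = float(np.sum((z - z.mean()) ** 2))
    r2 = 1.0 if ss_tot == 0.0 else 1.0 - ss_res / ss_tot
    return coeffs, r2

def can_integrate(points_a, points_b, threshold=0.9):
    """Adjacent cells or planes may be merged when their combined points still fit one plane."""
    _, r2 = fit_regression_plane(np.vstack([points_a, points_b]))
    return r2 >= threshold
```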
The plane integration unit or step may link the vertices shared with the plane to be integrated and create a new ridge structure for the integrated plane.
The boundary line extracting unit or step may try to fit each ridge to a straight line. The boundary line extracting unit or step may insert a bend into a ridge that cannot be fitted to a straight line and divide the ridge into at least two ridges.
The bend may be obtained at a position at which the ridge cannot be approximated as a single straight line.
For a looped ridge, a circle may be drawn around the center of gravity of vertices constructing the loop. The radius of the circle is reduced to shrink the scan field down to include some midpoints, which are then adopted as bends.
For a ridge that is not a loop and that has endpoints, a perpendicular bisector of the ridge may be drawn. The ridge may be scanned with an arc that has the center on the perpendicular bisector and that passes through the endpoints to extract a bend.
The three-dimensional plane creating unit or step may insert a vertical face in accordance with the altitude difference at a boundary line between two planes adjacent to each other across the boundary line.
When two planes adjacent to each other across a ridge share the ridge, no vertical face may be inserted at the ridge.
When two planes adjacent to each other across ridges share only one endpoint of each ridge, a vertical face defined by the shared endpoint and the remaining unshared endpoints may be inserted between the ridges.
When two planes adjacent to each other across ridges share no endpoint of the ridges, a vertical face defined by the endpoints of the ridges is inserted between the ridges.
According to a fourth aspect of the present invention, a computer program is provided, which is written in a computer-readable format to perform on a computer system a process for generating a building model based on elevation data that is generated by mapping altitude information onto each observation point on a two-dimensional plane. The computer program includes a cell generating step of dividing an observation area in which observation points reside on the two-dimensional plane into a mesh of separate cells and registering a set of observation points that have been fitted to a plane in each of the separate cells; a plane integration step of integrating adjacent cells into a plane; a boundary line extracting step of extracting a boundary line of each integrated plane; and a three-dimensional plane creating step of creating a three-dimensional plane defined by a set of ridges included in the boundary between adjacent planes.
The computer program according to the fourth aspect of the present invention defines a computer program written in a computer-readable format to realize a predetermined process on a computer system. In other words, installing the computer program according to the fourth aspect of the present invention into a computer system exhibits a cooperative operation on the computer system, thereby achieving advantages similar to those of the three-dimensional model generating system or method according to the third aspect of the present invention.
According to the present invention, an excellent information presentation apparatus and method and a computer program therefor are provided that offer suitable information presentation services in a three-dimensional virtual space including a plurality of architectural structures such as buildings and houses.
According to the present invention, an excellent information presentation apparatus and method and a computer program therefor are provided that suitably draw a user's attention to a displayed object serving as a target in a three-dimensional virtual space.
According to the present invention, an excellent information presentation apparatus and method and a computer program therefor are provided that arouse a user's attention while preventing an architectural structure serving as a target from being hidden in many architectural structure objects such as buildings and houses.
According to the present invention, on the basis of the human characteristic of paying more attention to a moving object, the user's attention is aroused by creating a three-dimensional animation of a specific three-dimensional object in the virtual space. In other words, the user's attention can be drawn to a specific object, such as a destination building, in the three-dimensional virtual space displayed on the screen. Irrespective of whether the specific object is selected by the user or designated by the system as an object to which the user's attention is to be drawn, the user can easily detect the attention-drawing object.
According to the present invention, an excellent three-dimensional model generating system and method and a computer program therefor are provided that suitably generate a three-dimensional model of a building on the ground on the basis of altitude information distributed on a plane, which is obtained by irradiating the ground with laser light from space using an airplane or a satellite.
According to the present invention, an excellent three-dimensional model generating system and method and a computer program therefor are provided that suitably generate a three-dimensional model that represents the outer appearance of a building having a three-dimensional shape on the basis of altitude information that is distributed unevenly on a two-dimensional plane.
According to the present invention, an excellent three-dimensional model generating system and method and a computer program therefor are provided that suitably generate a three-dimensional model of a building including a roof (top face) and an external wall (vertical face) on the basis of altitude information that is distributed unevenly on a two-dimensional plane.
According to the present invention, an excellent three-dimensional model generating system and method and a computer program therefor are provided that suitably generate a three-dimensional model of a building on the ground using only elevation data that is generated by mapping altitude information onto each observation point on the ground, without using additional information such as a building map.
According to the present invention, information can be obtained on the shape of the individual buildings serving as simply-connected polygon objects, each of which is constructed of planes, while reducing the effects of errors of a boundary line of each plane extracted on the basis of elevation data. In generation of a three-dimensional virtual space based on data obtained from the real world, more accurate building data can be generated.
Further objects, features, and advantages of the present invention will become apparent from the following description of the preferred embodiments with reference to the attached drawings.
With reference to the drawings, embodiments of the present invention will be described in detail.
The central control unit 10 is a main controller of the information presentation apparatus 1 and includes, for example, a CPU (Central Processing Unit), a RAM (Random Access Memory), and a ROM (Read Only Memory). The central control unit 10 controls the overall operation of the information presentation apparatus 1 by executing a program under an execution environment that is provided by an operating system (OS). The program executed by the central control unit 10 includes, for example, a display application for displaying a three-dimensional virtual space such as a three-dimensional map image including buildings on the ground.
The display unit 11 and the input unit 12 provide a user interface of the information presentation apparatus 1. The display unit 11 includes a CRT (Cathode Ray Tube) display or an LCD (Liquid Crystal Display) and is used to display a three-dimensional virtual space, such as a three-dimensional map image including buildings on the ground, other input data, and data processing results. The input unit 12 includes input units such as a keyboard, a mouse, and a touch panel and accepts data and commands input by a user.
The recording unit 13 includes a high-capacity external storage unit, such as a hard disk drive, and is used to install therein a program to be executed by the central control unit 10 and to store other program files and data files. The program installed in the recording unit 13 includes a display application for displaying a three-dimensional virtual space, such as a three-dimensional map image including buildings on the ground. The data stored in the recording unit 13 includes geometric data on a displayed object used to generate an image displayed in the three-dimensional virtual space and other data required for rendering.
The communication unit 16 includes a network interface card (NIC) and connects to a wide-area computer network 3, such as the Internet, via a LAN (Local Area Network) and a public telephone network. On the computer network 3, various information providing services are offered using an information search system, such as a WWW (World Wide Web). For example, a server 2 that provides, free of charge or on payment, map data and three-dimensional building data required to display a three-dimensional virtual space, such as a three-dimensional map image including buildings on the ground, is configured on the computer network 3.
The three-dimensional virtual space displaying application, which is to be executed by the central control unit 10, provides high-quality navigation services by representing architectural structures including buildings and houses on the ground in the three-dimensional virtual space on the display screen of the display unit 11. If necessary, the map data and the three-dimensional building data required to display the three-dimensional virtual space are appropriately obtained via the communication unit 16 from the server 2 on the computer network 3.
Detection of a target object from a three-dimensional image in which many objects including buildings are displayed imposes a heavy burden on the user. In this embodiment, the human characteristic of paying more attention to a moving object is utilized, and the user's attention is aroused by creating a three-dimensional animation of a specific three-dimensional object in the virtual space.
Specifically, the user's attention can be drawn to a specific object, such as a destination building, in the three-dimensional virtual space displayed on the screen. Irrespective of whether the specific object is selected by the user or designated by the system as an object to which the user's attention is to be drawn, the user can easily detect the attention-drawing object.
The animation creating unit 14 and the animation playing unit 15 are each activated in response to an instruction from the central control unit 10. The animation creating unit 14 creates animation data on a displayed object, such as a building to which the user's attention is desired to be drawn. The created animation data is accumulated in, for example, the recording unit 13. The animation playing unit 15 plays the animation data, which is created by the animation creating unit 14, in the three-dimensional virtual space displayed on the screen. If necessary, the animation playing unit 15 may change the playing speed, that is, the speed at which the animation moves.
The process obtains three-dimensional data on an object, such as a building to be emphatically displayed (step S1).
The process creates a control solid around the building (step S2).
The animation creating unit 14 is activated. The animation creating unit 14 creates a deformed animation for emphasizing the object (step S3). The possible object deforming methods include creating animations such as those of the displayed object rotating around a predetermined rotation axis, the displayed object's upper portion swaying from side to side with the lower portion thereof fixed (see
The created emphasis animation is stored in the recording unit 13 (step S4). The animation playing unit 15 is activated at a predetermined time and plays the emphasis animation in the three-dimensional virtual space displayed on the screen (step S5).
The three-dimensional virtual space including the emphasis animation can be used by, for example, a navigation system. In other words, the user's attention can be aroused by displaying a deformed animation of a destination building that is specified by the user or a building that corresponds to the information search result obtained by a query expression input by the user.
Referring to
The central control unit 10 conducts an information search based on the input query expression and obtains the location of a building that satisfies the search condition and the building's three-dimensional building data from the server 2 via the network. A deformed animation of the corresponding building is created and played in the three-dimensional virtual space.
In the example shown in
The search results are classified in such a manner that those satisfying the query expression have an intensity level of two and those belonging to a category to which the query expression belongs have an intensity level of one. The three-dimensional virtual space may be represented in accordance with the information presentation intensity. In the example shown in
The process obtains two-dimensional map data from the server 2 via the computer network 3 or from the recording unit 13 and develops the map data on the ground in the three-dimensional virtual space on the display unit 11 (step S11).
The user inputs a desired query expression using the input unit 12 (step S12). The user may input a query expression on a text basis or may form a query expression by operating a menu or an icon under a GUI (Graphical User Interface) environment.
The central control unit 10 conducts an information search based on the input query expression and obtains information on buildings that satisfy the search condition (step S13). The search result may be obtained by the internal processing, or a server on the computer network 3 may be designated via the communication unit 16 to conduct an information search.
For example, the goodness of fit with respect to the search condition, that is, the information presentation intensity, is added to the information search result. The process selects and obtains a set of buildings having a predetermined information presentation intensity (=1) or greater (step S14).
The animation creating unit 14 is activated, and the animation creating unit 14 creates a deformed animation of each extracted building appearing in the virtual space (see
The animation playing unit 15 is activated, and the animation playing unit 15 plays the animation data created by the animation creating unit 14 in the three-dimensional virtual space displayed on the screen (step S16). The deformed animation of each corresponding building, such as that shown in
The process selects and obtains a set of buildings having an information presentation intensity of two or greater (step S17). The animation creating unit 14 is activated, and the animation creating unit 14 creates a deformed animation for emphatically displaying each extracted building (step S18). A control solid including control points that are arranged in a lattice is created around the corresponding building data, and the interior of the control solid is treated as a three-directional parameter space. By mapping a deformation of the control solid onto a deformation of the building shape, an animation is easily created (as described above).
The animation playing unit 15 is activated, and the animation playing unit 15 plays the animation data created by the animation creating unit 14 in the three-dimensional virtual space displayed on the screen (step S19). The deformed animation of each building shape, such as that shown in
The animations in the three-dimensional virtual space may be switched in accordance with the search results. In the example shown in
The central control unit 10 conducts an additional information search based on the input query expression and obtains information on buildings that satisfy the narrowed search condition (step S22). The search results may be obtained by the internal processing, or a server on the computer network 3 may be designated via the communication unit 16 to conduct an information search.
The goodness of fit with respect to the search condition, that is, the information presentation intensity, is added to the information search results. The process detects a set of buildings whose information presentation intensity has been reduced from level 2 to level 1 as a result of narrowing of the search condition (step S23).
An instruction to the animation playing unit 15 is issued to stop the emphasis animations of the obtained set of buildings that sway from side to side (see
The process detects a set of buildings whose information presentation intensity has fallen below level one as a result of narrowing of the search condition (step S25).
The animation creating unit 14 is activated, and the animation creating unit 14 creates a deformed animation of each extracted building disappearing from the virtual space (see
The animation playing unit 15 is activated, and the animation playing unit 15 plays the animation data, which is created by the animation creating unit 14, in the three-dimensional virtual space displayed on the screen (step S27). The deformed animation of each corresponding building, such as that shown in
Generation of Three-dimensional Map
In the above-described embodiment, the server 2 can provide map data and three-dimensional building data required to display a three-dimensional virtual space, such as a three-dimensional map image including buildings on the ground. Creation of a three-dimensional model that can be provided by the server 2 will now be described.
Three-dimensional model generation according to an embodiment of the present invention obtains the shape of a building on the ground in terms of a set of three-dimensional polygons on the basis of altitude information distributed on a plane, which is obtained by irradiating the ground with laser light from space using an airplane or the like.
Altitude information measured from space is compensated for geographical errors, and the altitude information is made into an ortho-image on the basis of the accurate geographic information. As a result, the altitude information is mapped to each observation point on a map.
Since the altitude information is measured from space in an unstable measurement environment susceptible to wind and other climate conditions, observation positions tend to be distributed irregularly over the ground in accordance with the flight path of the airplane. In this embodiment, the assumption is made that there is no regularity in the positions of the observation points distributed on the plane and that the distribution thereof is uneven. With regard to the actual data, a distribution of observation points depends on the flight path. A component face of a building that is in front of the flight path, i.e., a component face that is not behind a building, contains many observation points, and hence the observation accuracy becomes high. In contrast, no observation point is distributed behind a building with respect to the flight path (see
The three-dimensional model is generated by the following procedure:
(1) Normalize the position of each observation point.
(2) Divide the area in which the observation points reside into a mesh of cells.
(3) Read a set of observation points and register the set of observation points in a cell.
(4) Register the smallest cell as a plane.
(5) Integrate adjacent cells into a plane.
(6) Create each plane's boundary line having the smallest number of bends.
(7) Represent the boundary line in terms of three dimensions.
(8) Create a vertical face in accordance with the altitude difference between planes adjacent to each other across the boundary line.
(9) Store a polygon created by the above-described steps.
A. Normalization of Position of Observation Point
In this embodiment, the assumption is made that the observation points are distributed irregularly (as described above). An area in which the observation points reside is divided into a mesh of mesh cells, and each set of observation points is registered in the corresponding mesh cell. When the range of the observation area is not specified, the pre-processing is performed to detect an area in which the observation points reside.
A-1. Obtaining Statistical Information
The process obtains statistical information, that is, the number of observation points and the distribution of their positions. From this, the process generates adjustment information that is used to normalize each observation position in the main processing.
The process flow shown in
A-2. Normalization of Observation Position
The process normalizes each observation point that is mapped onto the two-dimensional plane from the measurement coordinate system to the processing coordinate system.
In contrast, when the observation area is free-form, as shown in
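A minimal sketch of this normalization, assuming a simple translate-and-scale of the observation positions into a unit processing square; the uniform scale and the returned adjustment information are illustrative choices.

```python
import numpy as np

def normalize_observations(points_xy):
    """Translate and scale measured (x, y) positions into the processing coordinate system.

    Returns the normalized positions plus the adjustment information (offset and
    scale) needed to map processing results back to measurement coordinates.
    """
    lo = points_xy.min(axis=0)
    hi = points_xy.max(axis=0)
    scale = max(float((hi - lo).max()), 1e-12)   # uniform scale preserves the aspect ratio
    return (points_xy - lo) / scale, (lo, scale)
```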
B. Registration in Mesh Cell
After the observation area has been normalized as described above, the observation area is divided into a rectangular mesh of cells. The individual cell serves as the minimum unit of processing and is registered as an initial plane.
In this embodiment, the observation points are distributed unevenly. As is clear from
C. Creation of Initial Plane to be Integrated
A plane serving as the smallest unit of integration is allocated to each mesh cell.
Created planes are registered in a plane list. In the plane list, the planes are arranged in ascending order of area.
When creating the plane list, not all the areas need be divided into the smallest unit cells. Specifically, an area having a high contribution rate relative to a regression plane (see
On the other hand, if the correlation coefficient of the estimated regression plane falls below the predetermined value, it is determined that the observation area cannot be fitted to a single plane. As shown in
As shown in
The process treats the entire observation area as a cell (step S111) and registers the cell in a cell list (step S112). The process extracts one cell at a time from the cell list (step S113). The following steps are repeatedly performed on each cell until the cell list becomes empty (step S114).
The process determines whether the extracted cell contains zero observation points (step S115). When the extracted cell contains no observation point, the cell is registered in the plane list (step S119). Then, the process returns to step S113 and performs the processing on the next cell in the cell list.
In contrast, when the extracted cell contains at least one observation point, a regression plane is estimated by the least squares method on the basis of the altitude information at each observation point in the cell (step S116). Whether the cell can be fitted to this plane is determined on the basis of whether or not the contribution rate (the correlation coefficient of the estimated regression plane) falls below a predetermined value (step S117).
When the contribution rate is within the range defined by the predetermined value (that is, when the contribution rate is at or above the predetermined value), the corresponding cell is registered in the plane list (step S119). Then, the process returns to step S113 and performs the processing on the next cell in the cell list.
In contrast, when the contribution rate is outside the defined range, it is considered that the corresponding cell cannot be fitted to the regression plane. The corresponding cell is divided into four cells by bisecting the horizontal and vertical sides (step S118). The process returns to step S112 and registers the cells in the cell list.
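The loop of steps S111 to S119 can be sketched as a recursive split into quarter cells, reusing the fit_regression_plane helper sketched earlier; the contribution-rate threshold, the minimum cell size, and the data layout are illustrative assumptions.

```python
def build_initial_planes(points, bounds, min_size, threshold=0.9, plane_list=None):
    """Recursively split a cell until its observation points fit a single regression plane.

    points   : (N, 3) observation points (x, y, altitude)
    bounds   : (x0, y0, x1, y1) extent of the current cell
    min_size : smallest cell edge allowed, so that the recursion terminates
    """
    if plane_list is None:
        plane_list = []
    x0, y0, x1, y1 = bounds
    inside = points[(points[:, 0] >= x0) & (points[:, 0] < x1) &
                    (points[:, 1] >= y0) & (points[:, 1] < y1)]
    # Empty cells and cells at the minimum size are registered directly as initial planes.
    if len(inside) == 0 or (x1 - x0) <= min_size:
        plane_list.append((bounds, inside))
        return plane_list
    _, r2 = fit_regression_plane(inside)
    if r2 >= threshold:
        plane_list.append((bounds, inside))      # the cell fits one plane: register it
        return plane_list
    xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0    # bisect the horizontal and vertical sides
    for sub in ((x0, y0, xm, ym), (xm, y0, x1, ym),
                (x0, ym, xm, y1), (xm, ym, x1, y1)):
        build_initial_planes(points, sub, min_size, threshold, plane_list)
    return plane_list
```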
D. Plane Integration
In plane integration, the least squares method is performed on the basis of altitude information at observation points included in a corresponding plane and a plane to be integrated with the corresponding plane (hereinafter referred to as a partner plane) to estimate corresponding regression planes (see
The process obtains a plane from the plane list (step S121). The planes in the plane list are arranged in ascending order of area. When planes have the same area, the process obtains the older one first. Accordingly, the plane integration process is designed to process the planes, starting with that with the smallest area first.
As indices of the plane integration, the process calculates the dot products of the normal vector of the plane and the normal vectors of a group of adjacent planes (step S122).
Excluding planes located along the border of the observation area and planes having ridges that are determined to be not capable of plane integration, the process creates a candidate list for the plane integration, which contains candidate planes arranged in descending order of the dot product (step S123), and the process registers the candidate planes (step S124). Since cells located at the border of the observation area contain fewer observation points and are susceptible to noise, these cells are not integrated with the other planes. Accordingly, noise propagation is prevented.
The process extracts candidates one at a time from the candidate list, starting with that having the largest index of plane integration (step S125), and conducts an integration test to determine whether or not the candidate can be approximated as a single plane (step S126). The integration test is conducted by estimating regression planes by the least squares method on the basis of altitude information at each observation point included in the corresponding plane and the partner plane and determining whether or not a correlation coefficient thereof exceeds a predetermined value.
When the plane is integrated with the partner plane (step S127), the partner plane is deleted from the plane list, and the resultant plane is added to the plane list (step S128). When the plane cannot be integrated with any other plane in the plane list (step S125), the plane is registered in a final plane list (step S129).
In this manner, all the planes in the plane list are integrated as much as possible, and the resultant planes are registered in the final plane list (step S129).
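The loop of steps S121 to S129 can be sketched as a greedy merge that ranks neighbouring planes by the dot product of their regression-plane normals. The fit_regression_plane and can_integrate helpers are the ones sketched earlier, the adjacency relation is assumed to be supplied, and the point count stands in for plane area; all of these are illustrative assumptions rather than the patented procedure itself.

```python
import numpy as np

def plane_normal(pts):
    """Unit normal of the regression plane z = ax + by + c, i.e. a vector along (-a, -b, 1)."""
    (a, b, _), _ = fit_regression_plane(pts)
    n = np.array([-a, -b, 1.0])
    return n / np.linalg.norm(n)

def integrate_planes(planes, adjacency, threshold=0.9):
    """Greedily merge adjacent planes while their combined points still fit one regression plane.

    planes    : dict plane_id -> (N, 3) observation points of that plane
    adjacency : dict plane_id -> set of neighbouring plane_ids
    """
    work = dict(planes)
    final = {}
    while work:
        pid = min(work, key=lambda p: len(work[p]))   # smallest plane first (point count as area proxy)
        pts = work[pid]
        # Candidate partners, best-aligned normals first.
        candidates = sorted((q for q in adjacency.get(pid, set()) if q in work),
                            key=lambda q: float(np.dot(plane_normal(pts), plane_normal(work[q]))),
                            reverse=True)
        for q in candidates:
            if can_integrate(pts, work[q], threshold):
                work[pid] = np.vstack([pts, work.pop(q)])   # absorb the partner plane
                adjacency[pid] = (adjacency.get(pid, set()) | adjacency.pop(q, set())) - {pid, q}
                break
        else:
            final[pid] = work.pop(pid)   # no neighbour can be merged: register as a final plane
    return final
```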
A cell has its vertices v and cell boundary e. A plane is described in terms of the structure of ridges generated by linking vertices, using the vertices v and cell boundary e of a cell constructing the plane. When a plane is integrated with an adjacent cell or plane, the plane links the vertices of the adjacent cell or plane to create a new ridge structure.
When planes sharing the fixed boundary are subjected to integration, the fixed boundary cannot be released to integrate the planes with each other.
E. Extraction of Boundary Line (2D)
For each plane registered in the final plane list, a polyline that corresponds to the fixed boundary obtained in the previous section D and that has the minimum number of bends is obtained. At this point, the polyline specifies a two-dimensional area, and each vertex is two-dimensional.
Due to the plane integration, vertices shared with the partner planes are linked (described above). In the example shown in
Vertices v2, v6 and v8 are endpoints of the corresponding ridges and in contact with three or more planes. Therefore, vertices v2, v6 and v8 serve as the endpoints or vertices of a boundary line (2D) to be obtained by this process. Vertices v1, v3, v4, v5, and v7 other than the endpoints are treated as “midpoints” of the corresponding ridges.
The boundary line between the planes may be a loop such as that shown in
In contrast, as shown in
The process obtains a boundary line from a set of boundary lines constructing a plane (see
The process obtains the ridge constructing the boundary line (step S132) and processes the ridge, depending on whether the ridge is a loop (see
When the ridge is a loop as in the former case, the process searches for bends (step S134), divides the ridge at the bends into a plurality of ridges (step S135), and registers the obtained ridges (step S136).
A method for searching a looped ridge for bends will now be described with reference to
Two or more bends are obtained from the looped ridge. In this case, it is preferable that the bends be extracted at positions that are far from each other. An example of a process of searching a looped ridge for two or more bends will now be described with reference to
A looped ridge consists of a ridge and a plurality of midpoints on the ridge (see
When the ridge is not a loop as in the latter case, it is determined whether or not the ridge can be approximated as a single straight line (step S137). When the ridge can be approximated as a single straight line, the approximated straight line is registered as a ridge (2D) (step S141).
When the ridge cannot be approximated as a single straight line, at least one bend is obtained on the ridge (step S138). The obtained bend is inserted into the ridge to divide the ridge into a plurality of ridges (step S139). The obtained ridges are registered (step S140).
In the example shown in
In this manner, all ridges included in one boundary line are fitted to straight lines. When this is completed, the resultant plane is registered (step S142).
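As a sketch of the repeated line fitting and bend insertion, the following uses a maximum-deviation split (the Douglas-Peucker criterion) in place of the circle- and arc-scanning searches described above; the tolerance is an illustrative parameter.

```python
import numpy as np

def fit_ridge_to_segments(vertices, tolerance=0.5):
    """Split an ordered 2D ridge polyline into ridges that each fit a straight line.

    If every midpoint lies within `tolerance` of the chord joining the endpoints, the
    ridge is accepted as a single straight line; otherwise a bend is inserted at the
    most deviating midpoint and both halves are processed again.
    """
    v = np.asarray(vertices, dtype=float)
    if len(v) <= 2:
        return [v]
    p0, p1 = v[0], v[-1]
    chord = p1 - p0
    length = float(np.linalg.norm(chord))
    if length == 0.0:
        return [v]
    rel = v[1:-1] - p0
    # Perpendicular distance of every midpoint from the endpoint-to-endpoint line.
    dist = np.abs(chord[0] * rel[:, 1] - chord[1] * rel[:, 0]) / length
    worst = int(np.argmax(dist))
    if dist[worst] <= tolerance:
        return [np.vstack([p0, p1])]     # accepted as a single straight ridge
    bend = worst + 1                     # insert a bend at the most deviating midpoint
    return (fit_ridge_to_segments(v[:bend + 1], tolerance) +
            fit_ridge_to_segments(v[bend:], tolerance))
```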
Taking the actual building as an example, a process of extracting straight lines constructing a boundary line will now be described with reference to
Referring to
The four ridges obtained by the process shown in
Accordingly, the upper ridge is again searched for an additional bend and repeatedly divided. As a result, as shown in
The two ridges obtained by the process shown in
Subsequently, three of the four ridges obtained by the process shown in
Subsequently, the ridges obtained by the process shown in
As discussed above, repetition of the fitting of ridges to straight lines, bend detection, and ridge division consequently results in, as shown in
F. Creation of Three-dimensional Plane
In the previous section E, when the observation area is viewed from above, the area is divided so that a set of observation points constructs as large a single plane as possible. Each plane has an altitude state. When two planes adjacent to each other across the obtained boundary line are different in altitude at the boundary line portion, a three-dimensional plane creating process of inserting a vertical face at this portion is performed.
When a plane has boundary lines corresponding to a plurality of “islands”, as shown in
The process obtains a ridge (2D) constructing a boundary line (step S152), obtains the altitude of two planes adjacent to each other across the boundary line (step S153), and creates a corresponding three-dimensional ridge (step S154).
When the two planes adjacent to each other across the boundary line are different in altitude, a vertical face is created at the boundary line portion (step S155), and the vertical face is registered in a set of faces (step S156).
A ridge (3D) having altitude information is registered (step S157). The process then returns to step S152 and processes the next ridge included in the boundary line.
When all ridges included in the boundary line are processed (step S152), a group of ridges (3D) is registered as a face (step S158). The process then returns to step S151 and processes the next boundary line.
When all the boundary lines included in the set of boundary lines are processed (step S151), the entire process routine is completed.
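The vertical-face rule can be sketched as follows, assuming each plane is available as z = ax + by + c coefficients and the shared boundary ridge is known in two dimensions; the partially shared endpoint cases listed in the summary are omitted, and the names are illustrative.

```python
import numpy as np

def vertical_face(ridge_2d, plane_a, plane_b, eps=1e-6):
    """Create the wall polygon between two planes that meet along a 2D boundary ridge.

    ridge_2d         : ((x0, y0), (x1, y1)) endpoints of the shared boundary ridge
    plane_a, plane_b : (a, b, c) coefficients of z = ax + by + c on either side
    Returns a list of 3D corners, or None when the planes meet at the same altitude.
    """
    def height(plane, x, y):
        a, b, c = plane
        return a * x + b * y + c

    (x0, y0), (x1, y1) = ridge_2d
    heights = [(height(plane_a, x, y), height(plane_b, x, y))
               for x, y in ((x0, y0), (x1, y1))]
    # No wall is needed when the two planes share the ridge at the same altitude.
    if all(abs(ha - hb) < eps for ha, hb in heights):
        return None
    (ha0, hb0), (ha1, hb1) = heights
    return [np.array([x0, y0, ha0]), np.array([x1, y1, ha1]),
            np.array([x1, y1, hb1]), np.array([x0, y0, hb0])]
```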
A case in which a ridge (2D) to be processed has at least one bend will now be described.
When a ridge has at least one bend, the connection state shown in
The three-dimensional model can be generated by various methods other than that described in the above-described embodiment. For example, JP Application No. 2002-089967 (filed on Mar. 27, 2002) discloses three-dimensional modeling of a building portion using information on ground areas, information on other areas that can be distinguished from the ground, and elevation data; and JP Application No. 2002-089966 (filed on Mar. 27, 2002) discloses three-dimensional modeling of a terrain portion using information on ground areas, information on other areas that can be distinguished from the ground, and elevation data. Both of the JP applications are applicable to three-dimensional modeling of the embodiment. Under the law, the entire contents of the JP applications are incorporated herein by reference.
While the present invention has been described with reference to the specific embodiments, it is to be understood that modifications and substitutions can be made by those skilled in the art without departing from the scope of the present invention. In other words, the present invention has been described using the embodiments only for illustration purposes and should not be interpreted in a limited manner. The scope of the present invention is to be determined solely by the appended claims.