A method for rendering results of an audit includes receiving data corresponding to the results of the audit. The data includes an image to be rendered on a display screen of an electronic computing device. The data includes one or more insights derived from the results of the audit. A user of the electronic computing device is identified. The image is rendered on the display screen. One or more insights derived from the results of the audit are rendered on top of the image on the display screen. A content of the one or more insights derived from the results of the audit that are rendered on top of the image on the display screen is dependent upon the identity of the user of the electronic computing device.
12. A method implemented on a server computing device for compiling results of an audit of one or more physical objects, the method comprising:
receiving, at the server computing device, audit data from one or more electronic devices;
processing the audit data received to obtain one or more results of the audit and one or more insights derived from the one or more results;
obtaining an identity, including an authorization level that is defined by a role of a user, of the user to whom at least some of the one or more results is to be displayed;
obtaining a current location of the user via a global positioning sensor associated with a display device of the user;
determining a security level of the current location of the user; and
creating content for an audit report to be rendered on top of one or more images of the one or more physical objects being audited as an overlay in augmented reality on the display device of the user, the content including a monetary estimate of a current value of the one or more physical objects, a level of detail of the content based on the one or more results of the audit, the identity and the authorization level of the user, and the security level of the current location of the user,
wherein the one or more insights derived from the results of the audit that are rendered on top of the one or more images on the display device as the overlay in augmented reality are different for each user authorization level;
wherein a level of detail of the displayed one or more insights includes additional financial data that is provided in the content when the current location of the user is secure.
1. A method implemented on an electronic computing device for rendering results of an audit, the method comprising:
on the electronic computing device, receiving data corresponding to the results of the audit, the data including an image of one or more physical objects being audited and one or more insights derived from the results of the audit;
identifying an identity of a user of the electronic computing device, including an authorization level of the user that is defined by a role of the user;
obtaining a current location of the user via a global positioning sensor on the electronic computing device;
determining a security level of the current location of the user;
rendering the image on a display screen of the electronic computing device; and
creating content for an audit report to be displayed on top of the image on the display screen as an overlay in augmented reality, the content based on: the one or more insights derived from the results of the audit, including a monetary estimate of a current value of the one or more physical objects, the authorization level of the user, and the current location of the user of the electronic computing device;
wherein the one or more insights derived from the results of the audit that are rendered on top of the image on the display screen as the overlay in augmented reality are different for each user authorization level;
wherein a level of detail of the one or more insights derived from the results of the audit that are rendered on top of the image on the display screen as the overlay in augmented reality is dependent upon the security level of the current location of the user of the electronic computing device, wherein a level of detail of the displayed insights includes additional financial data that is provided in the content when the current location of the user of the electronic computing device is secure.
19. An electronic computing device comprising:
at least one processor; and
system memory, the system memory including instructions which, when executed by the at least one processor, cause the electronic computing device to:
receive data corresponding to an audit of a plurality of physical objects, the data including an image to be rendered on a display screen of the electronic computing device, and the data including one or more insights derived from results of the audit;
identify a user of the electronic computing device, including an authorization level of the user that is defined by a role of the user;
obtain a job title of the user;
obtain a current location of the user via a global positioning sensor on the electronic computing device;
determine a security level of the current location of the user;
render the image on the display screen; and
create content for an audit report to be displayed on top of the image on the display screen as an overlay in augmented reality, the content based on: the one or more insights derived from the results of the audit, the authorization level of the user, the current location of the user of the electronic computing device and from contextual information derived from a pre-audit of the physical objects and other data regarding the physical objects, including a monetary estimate of a current value of the physical objects,
wherein the one or more insights derived from the results of the audit that are rendered on top of the image on the display screen as the overlay in augmented reality are different for each user authorization level;
wherein a level of detail of the one or more insights that are rendered on top of the image on the display screen as the overlay in augmented reality is dependent upon the security level of the current location of the user, including:
wherein a level of detail of the displayed one or more insights includes additional financial data that is provided in the content when the current location of the user of the electronic computing device is a private location.
2. The method of
3. The method of
4. The method of
5. The method of
6. The method of
7. The method of
8. The method of
receiving information regarding a time of day,
wherein the content rendered to the user is dependent on the time of day.
9. The method of
10. The method of
11. The method of
13. The method of
receiving one or more of images of the one or more physical objects being audited; and
receiving telemetry data from the one or more physical objects being audited.
14. The method of
15. The method of
16. The method of
17. The method of
18. The method of
obtaining a job title of the user,
wherein the content to be rendered on the display device is based on the job title of the user and the current location of the user.
Audits can be performed for a variety of purposes, including verifying assets used to secure a loan and verifying possession of documents for security purposes and to conform with compliance regulations. Other types of audits can also be performed.
After the audits are performed, reports can be generated that summarize data obtained from the audits. However, when large amounts of data are obtained as a result of the audits, the reports can be complex and difficult to read and understand.
Embodiments of the disclosure are directed to a method implemented on an electronic computing device for rendering results of an audit, the method comprising: on the electronic computing device, receiving data corresponding to the results of the audit, the data including an image to be rendered on a display screen of the electronic computing device, the data including one or more insights derived from the results of the audit; identifying a user of the electronic computing device; rendering the image on the display screen; and rendering on top of the image on the display screen the one or more insights derived from the results of the audit, wherein a content of the one or more insights derived from the results of the audit that are rendered on top of the image on the display screen is dependent upon the identity of the user of the electronic computing device.
In another aspect, a method implemented on a server computing device for compiling results of an audit comprises: receiving, at the server computing device, audit data from one or more electronic devices; processing the audit data received to obtain one or more results of the audit; obtaining an identity of a user to whom at least some of the one or more results is to be displayed; obtaining a current location of the user; and creating content to be rendered on a display device of the user, the content based on the one or more results of the audit, the identity of the user and the current location of the user.
In yet another aspect, an electronic computing device comprises: at least one processor; and system memory, the system memory including instructions which, when executed by the at least one processor, cause the electronic computing device to: receive data corresponding to an audit of a plurality of physical objects, the data including an image to be rendered on a display screen of the electronic computing device, the data including one or more insights derived from results of the audit; identify a user of the electronic computing device; obtain a job title of the user; obtain a current location of the user; render the image on the display screen; and render on top of the image on the display screen the one or more insights derived from the results of the audit and from contextual information derived from a pre-audit of the physical objects and other data regarding the physical objects, wherein a content of the one or more insights that are rendered on top of the image on the display screen is dependent upon the job title of the user and the current location of the user.
The details of one or more techniques are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of these techniques will be apparent from the description, drawings, and claims.
The present disclosure is directed to systems and methods for using contextual information for audited objects to provide visual summaries of audit data. The visual summaries can simplify reporting of complex audit data. The contextual information can tailor a display of the audit data to specific individuals and to specific locations. In addition, the contextual information can be used to provide potential insights regarding the audit data.
The contextual information can include data such as a name and job title of the specific individuals, information regarding a purpose of an audit and a weighting that can indicate a degree to which sensitive information can be displayed to the specific individuals. The contextual information can also include pre-audit information that can provide a baseline for data obtained during the audit. Other contextual information can be provided, as discussed in more detail later herein.
Audit information can be obtained via imaging devices such as cameras and scanners and via manual input from an auditor. Some or all of the imaging devices can be controlled by the auditor, for example via a smartphone. Some of the imaging devices can be mounted in external devices, such as cameras mounted to building structures and drones that can fly over audited objects and use cameras mounted in the drones to obtain images of the audited objects. In addition, audit information can be obtained via telemetry devices that can be attached to or located near audited objects. The telemetry devices can include radio beacons, radio frequency identification (RFID) devices and global positioning system (GPS) trackers. Other telemetry devices are possible.
In example implementations, one or more insights can be derived from the audit information and from the contextual information. For example, results of the audit can identify one or more physical objects being audited, for example farm equipment. The contextual information can indicate a number of physical objects detected during a pre-audit. An insight can be that one or more of the items of farm equipment are currently missing, damaged, or moved to a different location. The contextual information can also provide information regarding purchased insurance for categories of farm equipment. An insight can be how the purchased insurance for the categories of farm equipment has changed over time. The contextual information can also provide damage estimates for farm equipment or property as a result of an accident, a storm or a fire. An insight can be how a damage estimate compares with an initial estimate of a value for the farm equipment or property.
One example audit that can be performed using the systems and methods can be an asset validation audit. When a loan is obtained from a financial institution to purchase hard assets, for example farm tractors and harvesters, the financial institution has an interest in knowing that the purchased assets are in the possession of the purchaser and have not been damaged. In an example implementation, a drone can fly over a field in which the farm tractors and harvesters are being used. The drone can include a camera that can obtain images of any machinery in the field. In addition, one or more items of purchased equipment can contain radio beacon transmitters that can help identify the purchased assets. The images and telemetry data from the purchased equipment can be analyzed and a determination can be made as to whether the purchased equipment is still in the possession of the purchaser.
Another example audit can be a security audit in which confidential or secret documents can be audited. An auditor using a smartphone can obtain images of documents on an individual's desk or work area, including a file cabinet. Each confidential or secret document can have an identifier, for example a quick response (QR) barcode or RFID embedded on or attached to the document. When the identifier is scanned using the smartphone, a determination can be made as to whether all required documents can be accounted for.
A third example audit can be a damage assessment audit. For example, a drone can image structures on a farm or other business to assess any damage to the structures as a result of a storm. The images of the structures after the storm can be compared to baseline images to assess the damage. Other types of audits are possible.
As discussed in more detail later herein, a report of an audit can be displayed to a user on the user's mobile device, such as a smartphone. Data that is presented in the report can be compiled from the images obtained from the audit, images obtained from a pre-audit and stored contextual information for the audited objects. The data that is presented to the user in the report can be tailored to an authorization level of the user. For example, regarding the damage assessment audit, data presented to a loan executive can include a dollar estimate of a structure loss and an inventory loss. Data presented to a loan officer can include a dollar loss estimate of specific structures, for example structures in a specific area of a farm, rather than an estimate of loss for the entire farm.
The systems and methods disclosed herein are directed to a computer technology that can receive results of an audit of physical objects, derive insights from the results and from contextual information related to the physical objects and display different content regarding the audit based on an identity of an individual who will view the content and a location of the individual. Thus, the systems and methods permit a volume of audit data to be analyzed and content to be displayed that is tailored to an individual viewing the content and to where the individual is viewing the content. The systems and methods provide efficiencies and enhanced security in displaying results of an audit because rather than displaying content that may not be meaningful to the individual and that should not be viewed by the individual, the systems and methods can efficiently direct meaningful content to the individual that can be displayed in a secure manner. The display of contextual information tailored to the individual along with audit data of physical objects streamlines complex audit data into a meaningful and efficient presentation on a display screen of an electronic computing device.
The example objects being audited 102 refer to one or more objects that can be audited. The objects can include large assets such as farm machinery, buildings, furniture, electronic equipment and other assets such as documents. The objects can be audited via manual inspection and automatically via visual imaging. More, fewer or different objects can be audited.
The example telemetry devices 104 are electronic devices that can identify a location of one or more of the objects being audited 102. The telemetry devices 104 can be attached to or embedded in the one or more objects being audited. Example telemetry devices are radio beacons, RFID devices and GPS trackers. Other telemetry devices 104 are possible.
The example imaging devices 106 are electronic devices that can provide an electronic image of the objects being audited 102. The imaging devices 106 can include cameras that can be physically mounted near the objects being audited 102 or attached to drones, other aircraft or vehicles that can obtain images of the objects being audited 102. The imaging devices 106 can also include scanners and cameras that are contained within portable electronics, such as smartphones. A user of the smartphone can use the scanners and cameras in the smart phone to scan in a document or to take images of the objects being audited 102.
The example sensing devices 107 can be one or more devices that can sense different aspects of the objects being audited 102. Sensing devices 107 can include a global positioning system (GPS) sensor, micro-electro-mechanical systems (MEMS) sensors, sensors that can detect speech, gestures and sound, and haptic sensors. The MEMS sensors can include one or more of a gyroscope, an accelerometer and a temperature sensor. The haptic sensors can detect forces, vibrations or motions of the user with respect to a sense of touch. Sensing devices 107 can obtain data related to the objects being audited 102 that can determine a current status of the objects being audited 102. More, fewer or different sensing devices 107 can be used.
The example organization electronic computing device 108 is an electronic computing device of an employee of an organization conducting the audit. The employee can use organization electronic computing device 108 to obtain and send contextual information to organization server computing device 112 regarding one or more of objects being audited 102. The contextual information can include pre-audit information and additional information that can provide insights regarding data obtained during the audit. The organization electronic computing device 108 can include one or more of a desktop electronic computing device, a laptop electronic computing device or a mobile electronic computing device such as a smartphone. The smartphone can include a camera and document scanner software application and can be one of imaging devices 106.
The example third party organization computing devices are typically server computing devices of third party organizations that can provide information pertinent to the audit. For example, a third party organization can be an insurance company that can provide data regarding purchased insurance for objects being audited, damage assessment data for objects being audited that have been damaged by a fire or a storm, and historical information regarding previous damage claims made against the objects being audited. As another example, a third party organization can be an equipment supplier that can provide information regarding a purchase price paid for objects being audited and a replacement cost for the objects being audited.
The example network 110 is a computer network and can be any type of wireless network, wired network and cellular network, including the Internet. Telemetry devices 104, imaging devices 106 and organization electronic computing device 108 can communicate with organization server computing device 112 using network 110.
The example organization server computing device 112 is a server computer of an organization that can conduct audits for the objects being audited 102. For example, the organization can be hired to perform the audit by another organization that owns the objects being audited 102 or by an organization, for example a financial institution that provided loans to purchase one or more of the objects being audited 102. In another example implementation, the organization can be a government organization that conducts compliance auditing. In still another implementation, the organization can be one that owns one or more of the objects being audited 102 and that is conducting an internal audit. Other example organizations are possible.
The example object auditing engine 114 processes audit data received from imaging devices 106 and telemetry devices 104 and generates one or more reports based on the audit data. The reports can be displayed on a display device, for example as an overlay on an augmented reality device. Object auditing engine 114 can also process contextual information related to the objects being audited 102. The contextual information can include pre-audit information that can be compared with current audit information to determine an audit status. Object auditing engine 114 is described in more detail later herein.
The example database 116 is a database associated with the organization of organization server computing device 112. Database 116 can include a contextual information store. Database 116 can store pre-audit, current audit and contextual information for objects being audited 102. The pre-audit and current information can include such items as identification, location and status information for objects being audited 102. The contextual information can include such items as detailed identification information for the organization and for objects being audited 102 and items such as a purchase date, price and current value for the objects being audited 102, sales data for categories of equipment related to the objects being audited 102, damage assessments and other information. Organization server computing device 112 can store all or parts of the information in database 116. Database 116 can be distributed over a plurality of databases. Organization server computing device 112 can be programmed to query (e.g. using Structured Query Language, SQL) database 116 to obtain the stored audit and contextual information.
An example schema including, but not limited to, inventory information stored in the database is shown below:
The above schema permits the database to be queried for data such data as assets being audited and a current status of these assets.
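As a sketch of how such a schema and query pattern might look, the following uses an in-memory SQLite database; the table and column names here are illustrative assumptions, since the actual schema is not specified above.

```python
import sqlite3

# Hypothetical inventory table for illustration only; the actual schema
# used by database 116 is not reproduced in this description.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE audited_assets (
        property_id TEXT,
        asset_name  TEXT,
        location    TEXT,
        status      TEXT
    )""")
conn.executemany(
    "INSERT INTO audited_assets VALUES (?, ?, ?, ?)",
    [("prop_1", "Grain bin", "North field", "fire damaged"),
     ("prop_1", "Water tank", "North field", "fire damaged"),
     ("prop_1", "Tractor", "South field", "good")])

# Query the assets being audited and their current status, as described above.
rows = conn.execute(
    "SELECT asset_name, status FROM audited_assets WHERE property_id = ?",
    ("prop_1",)).fetchall()
```

A query of this shape returns each audited asset together with its current status, which the server can then fold into the report content.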
As an example, the following messaging format can be used between the organization server computing device 112 and the database 116 to obtain status information for a specific property item.
Property ID | Status
As an example, the database 116 can use the following messaging format in responding to such a request.
Property ID | Property Name | Building 1 Name | Building 1 Location | Building 1 Status | Building 2 Name | . . .
The response message can include the property ID, the property name and information for buildings located on the property. Information for each building can include the building name, the building location and the building status. For example, building status can be excellent, good, fair or poor and can include qualifiers such as fire damaged or storm damaged. More, fewer, or different fields can be used. Other examples are possible.
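The request and response fields described above can be modeled as plain data structures; the class and field names below are illustrative assumptions, not the actual message format.

```python
from dataclasses import dataclass, field

# Hypothetical message classes mirroring the request/response fields
# described above; names are assumptions for illustration.
@dataclass
class BuildingInfo:
    name: str
    location: str
    status: str  # e.g. "excellent", "good", "fair", "poor", "storm damaged"

@dataclass
class StatusResponse:
    property_id: str
    property_name: str
    buildings: list = field(default_factory=list)

# A response carrying per-building status for one property.
response = StatusResponse(
    property_id="prop_1",
    property_name="Example Farm",
    buildings=[
        BuildingInfo("Building 1", "North field", "storm damaged"),
        BuildingInfo("Building 2", "South field", "good"),
    ])
```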
The example contextual information module 202 receives, processes and updates contextual information for the objects being audited 102. The contextual information can include identification information, pre-audit information and current audit information for the objects being audited 102. The identification information can include items such as a name, location, purchase price and purchase date of the objects being audited 102. The pre-audit information can include a status of the objects being audited 102. The status can include a description of a condition of the item during the pre-audit.
The example audit data processing module 204 processes data received as a result of the audit. The data can include imaging data received from cameras or other imaging devices that can be hand-held, hard-mounted or located in drones, aircraft or vehicles, telemetry data received from beacons, object identification information received from RFID devices, and data manually provided by auditors. Audit data processing module 204 can compare data received during an audit with data obtained during a pre-audit and with data obtained from one or more databases. Audit data processing module 204 can then make a determination as to whether any physical objects are missing and a determination regarding a condition of the physical objects being audited.
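The comparison that audit data processing module 204 performs can be sketched as a set difference between pre-audit object identifiers and identifiers detected during the current audit; the function and identifier names here are illustrative.

```python
# Sketch of comparing pre-audit data with current audit data to find
# missing physical objects; names are assumptions for illustration.
def find_missing_objects(pre_audit_ids, current_audit_ids):
    """Return object IDs recorded in the pre-audit but not detected now."""
    return sorted(set(pre_audit_ids) - set(current_audit_ids))

pre_audit = ["tractor_1", "tractor_2", "harvester_1"]
current = ["tractor_1", "harvester_1"]  # e.g. from drone images and RFID scans
missing = find_missing_objects(pre_audit, current)
```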
The example audit report module 206 generates reports from the audit data processed by audit data processing module 204. Audit report module 206 then determines how to present the report to a recipient of the audit. As discussed in more detail later herein, data can be presented in a plurality of ways, including hard copy, on a display screen of an electronic computing device, as an email message or as a text message. The display screen can be a display screen of a conventional display monitor or the display screen of an augmented reality (AR) computing device. Audit report module 206 can also tailor the amount and type of data to be displayed based on an identity of a recipient of the audit. For example, as discussed in more detail later herein, different audit data can be displayed to high-level executives than to lower-level employees, for example sales personnel or loan officers.
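The role-based tailoring described above, where an executive sees an aggregate loss figure while a loan officer sees per-structure detail, can be sketched as follows; the role names and dollar figures are illustrative assumptions.

```python
# Minimal sketch of tailoring report detail to the viewer's role, per the
# damage assessment example above; roles and figures are illustrative.
def tailor_report(losses_by_structure, role):
    """Return report content appropriate to the viewer's authorization level."""
    if role == "loan_executive":
        # High-level view: one aggregate dollar estimate for the whole property.
        return {"total_loss": sum(losses_by_structure.values())}
    if role == "loan_officer":
        # Detailed view: dollar loss estimates for specific structures.
        return dict(losses_by_structure)
    return {}  # unrecognized roles receive no audit content

losses = {"grain bin": 40000, "water tank": 15000}
```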
For
As shown in
GUI 300 also shows example statistical information 314 related to the audit. The statistical information 314 can be obtained from one or more databases, for example database 116, and can provide various types of information. In an example implementation, a user, for example an individual on the farm property can determine what statistical information is to be displayed. As shown in
As shown in
The example subject-in-action 502 field is a text field that identifies a person who can view a visual report of the audit results. The identification typically includes a name and a job title of the person. The job title of the person can define an authorization level that determines a level of detail of the audit results that can be viewed by the person.
The example marker information 504 field is a text field for a marker that can identify a physical object being audited. For example, a document to be audited can contain a red mark that can identify the document. As another example, the marker can comprise one or more items of equipment that are focused on by a camera.
The marker information 504 field can also indicate a location at which the person can view the audit results. The job title of the user and the location at which the user views the audit results can determine the level of detail of the audit results that the person can view at the location. In addition, the marker information 504 field can identify a type of audit information that can be displayed to the person at the location. For example, one type of information that can be displayed can be loan status information.
The example relevance 506 field specifies a weighting for the marker associated with the marker information 504 field. For example, the weighting can be high, medium or low. Other weightings are possible. The weighting can indicate an importance of the marker.
The example other parameters 508 field can indicate a time during which the audit results can be displayed to the person. In some implementations, the other parameters 508 field can be left blank and not used.
The example delivery mode 510 field can specify how the audit results are displayed to the user. Example delivery modes can include a display screen of an augmented reality (AR) device, a text message and whether insights for the audit results are included. Other delivery modes are possible.
The example infoblock name 512 field is a text field that includes a name that identifies the information block.
Table 1, below, is an example of information block data that can be obtained from a contextual information store, for example from database 116. As shown in Table 1, the subject-in-action for the first two rows of Table 1 is Joe the CEO. The first row shows that when Joe is in the office, information block data from the "SouthWest/$2 million" information block can be displayed to Joe. The second row shows that when Joe is at the airport, data from a different information block, "SouthWest," can be displayed to Joe. In both locations, the information is displayed on an AR device. The "SouthWest" information block may not include some financial data from the "SouthWest/$2 million" information block because viewing audit result data at the airport can be risky from a security standpoint, whereas the financial data can be viewed more securely in Joe's office. For example, a bystander could view the data on the AR display screen at the airport. For all rows in the table, the "other parameters" field is not used because, in the examples for Table 1, there is no time restriction for viewing the audit results.
The red graph marker element refers to an audited document that includes data regarding the southwest account. The document can include a red marker to identify the document to be audited or to identify a section of the document being audited. A graph element can be created by scanning with a camera the section of the document identified by the red marker. The graph element can be combined with contextual information related to the section of the document to form an image for viewing on the AR display screen.
The second from last and the last rows of Table 1 show information block data for an incident (incident_23) in which a loan (loan_1234) is involved. In this example, a loan was taken out to purchase farm equipment, and physical structures on the farm, for example a grain bin and a water tank, were used to secure the loan (loan_1234). Both the grain bin and the water tank were damaged during a fire (incident_23). The second from last row shows the information block for Sam the Loan Officer, named "Status: Site visit tomorrow 3 PM." The last row shows the information block for Rita the Executive, named "Status: Waiting for field report." The delivery mode for the audit reports of both information blocks is AR device. A description of differences between a display of audit results for both Sam and Rita is discussed with regard to
TABLE 1

Subject-In-Action | Marker Information | Relevance | Other Parameters | Delivery Mode | InfoBlockName
Joe the CEO | Red graph element; Office | High | N/A | AR | "SouthWest/$2 Million"
Joe the CEO | Red graph element; Airport | High | N/A | AR | "SouthWest"
Sam the Loan Officer | Loan_1234_status; Incident_23 | High | N/A | AR | "Status: Site visit tomorrow 4 PM"
Rita the Executive | Loan_1234_status; Incident_23 | High | N/A | AR | "Status: Waiting for field visit report"
At operation 902, contextual information is prepared for the audit. The contextual information can include known data regarding the audit, such as a name of a property, identification, price, current condition, insurance and other descriptive information regarding physical structures on the property, identification of documents to be audited, GPS location information and other items. The contextual information can also include information from a pre-audit of the assets on the property. The pre-audit information can include a count of the assets, a current condition of the assets and a physical location of the assets. Other contextual information is possible.
At operation 904, the contextual information is stored in a data store. For method 900 the data store is database 116.
At operation 906, an audit is performed for the assets of the property. The audit determines a current status of the assets. A detailed description of the audit is provided later herein, with regard to
At operation 908, the contextual information stored in database 116 is updated as a result of the audit. Example updates can include changes in a number of the assets, for example reflecting whether any of the assets are missing, changes of location of the assets and changes in the physical condition of the assets, for example due to damage from storms or fire.
At operation 910, an audit report is prepared from data obtained from the audit. The audit report summarizes the results of the audit. The audit report can be tailored to an individual who is to view the report. A more detailed description of operation 910 is provided with regard to
At operation 912, the audit report is rendered on a display device. For method 900, the display device is an augmented reality (AR) display device. The content of the audit report that is rendered on the display device is dependent on the job title of the individual who is to view the audit report and on a current location of the individual. In some implementations, the content is also dependent on the time of day at which the audit report is to be viewed. A more detailed description of operation 912 is provided with regard to
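The sequence of operations 902 through 912 can be summarized in a short sketch. All function and field names below are hypothetical stand-ins for the described steps, with the audit itself stubbed out; this is not an implementation from the disclosure.

```python
# Illustrative sketch of the method-900 flow (operations 902-912).
# Names and data shapes are hypothetical; the audit is stubbed.
def run_audit_method(property_id, viewer, store):
    # Operations 902/904: prepare contextual information and store it.
    store[property_id] = {"assets": {"grain bin": "good",
                                     "water tank": "good"}}
    # Operation 906: perform the audit (stubbed as observed conditions).
    audit_results = {"grain bin": "fire damage",
                     "water tank": "fire damage"}
    # Operation 908: update the stored contextual information.
    store[property_id]["assets"].update(audit_results)
    # Operation 910: tailor the report to the viewer's job title.
    detail = ("full" if viewer["job_title"] in ("CEO", "Executive")
              else "summary")
    # Operation 912: return the content that would be rendered in AR.
    return {"viewer": viewer["name"], "detail": detail,
            "assets": store[property_id]["assets"]}
```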
At operation 1002, imaging data is received of one or more physical objects being audited. The physical objects can be structural assets such as buildings, farm silos, water tanks, desks, chairs, tables and file cabinets, equipment such as electronic equipment, farm machinery, and vehicles, and other assets such as documents and books. The imaging data can be received from one or more cameras that are physically mounted near the equipment, from hand-held cameras, such as stand-alone cameras or cameras that are included in a smartphone and from cameras mounted in vehicles or aircraft such as drones. The cameras can focus on individual physical objects or groups of physical objects.
At operation 1004, location data of the physical objects is received. For some objects, the location data can be obtained from a GPS location of the object. The GPS location of the object can be obtained using GPS software on an electronic device, such as a smartphone, near the object, for example on a smartphone camera that is used to photograph an image of the object or a smartphone carried by an individual near the object. For other objects, the location data can be obtained from a telemetry device, such as a beacon, located on or near the object. For other objects, the location data can be obtained from GPS software on an aircraft, such as a drone, flying over the object, or on a vehicle near the object.
At operation 1006, financial data is received regarding the physical objects. The financial data can be obtained from the contextual information data store or from data obtained from or derived from the audit. For example, the contextual information store can have data such as an estimated value of the asset and a dollar amount for which the object is insured. As another example, when image data of the object obtained from the audit shows that the object has been damaged due to a storm or fire, an estimate of a dollar amount of the damage can be made and obtained from the contextual information data store. For example, an insurance claim could have been filed and results from a damage assessment could have been stored in the contextual information data store.
At operation 1008, audit data is compiled from the data received from the physical objects. The audit data can include such information as a current status of the physical object, for example whether the physical object can be found, a current condition status of the physical object, and a current location of the physical object. The audit data can be stored in the contextual information store.
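The compilation of imaging, location, and financial inputs into a single audit record (operations 1002-1008) can be sketched as follows. The field names and the `compile_audit_record` helper are hypothetical illustrations, not part of the disclosure.

```python
# Hypothetical sketch of compiling an audit record (operations 1002-1008).
# Field names are illustrative only.
def compile_audit_record(object_id, image_found, gps, financials):
    """Combine imaging, location, and financial data for one object."""
    return {
        "object_id": object_id,
        # Operations 1002/1008: imaging determines whether the object
        # can be found and its current status.
        "status": "found" if image_found else "missing",
        # Operation 1004: location data (e.g., a GPS fix).
        "location": gps,
        # Operation 1006: financial data from the contextual store.
        "estimated_value": financials.get("value"),
        "insured_amount": financials.get("insured"),
    }
```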
At operation 1102, a job title of an individual who can view the audit report is obtained. The job title can be obtained from the contextual information store. As discussed earlier herein, the job title can determine a level of detail that can be included in the audit report for the individual.
At operation 1104, a current location of the individual is obtained. The current location can be obtained, for example, via GPS software on a smartphone of the individual. As discussed earlier herein, the current location of the individual, in conjunction with the job title can determine the level of detail that can be included in the audit report for the individual.
At operation 1106, the audit report is tailored for the individual at the current location. Different levels of detail can be provided depending on the current location of the individual. For example, some information that can be displayed to the individual in the individual's office may not be displayed in a public place, such as an airport, for security reasons.
At operation 1108, the tailored audit report is sent to an electronic computing device, for example a smartphone or a laptop computer of the individual, to be displayed on a display screen of the electronic computing device. In an example implementation, audit report data is rendered in augmented reality on the display screen of the smartphone. In some implementations, information regarding where to render the data from the audit report on the electronic device is also sent to the electronic computing device.
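The tailoring of report detail by job title and current location (operations 1102-1106) can be sketched as a redaction rule. The specific locations, job titles, and field names below are hypothetical illustrations of the principle that financial detail is withheld in insecure locations; they are not rules from the disclosure.

```python
# Hypothetical sketch of tailoring an audit report by job title and
# current location (operations 1102-1106). Locations, titles, and
# field names are illustrative only.
SECURE_LOCATIONS = {"office"}

def tailor_report(report, job_title, location):
    """Redact sensitive detail outside secure locations or roles."""
    tailored = dict(report)
    if location.lower() not in SECURE_LOCATIONS:
        # In a public place such as an airport, a bystander could view
        # the AR display, so monetary figures are stripped.
        tailored.pop("estimated_value", None)
        tailored.pop("insured_amount", None)
    if job_title not in ("CEO", "Executive", "Loan Officer"):
        # Roles without authorization see no status detail.
        tailored.pop("status", None)
    return tailored
```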
At operation 1202, an image is rendered on the display screen of the display device. The image is typically one related to the audit. For example, the image can be one of a physical property and can show one or more buildings on the physical property. An example of such an image is shown in
At operation 1204, content from the audit report is rendered over the image on the display screen. As shown in the example of
As illustrated in the example of
The mass storage device 1314 is connected to the CPU 1302 through a mass storage controller (not shown) connected to the system bus 1322. The mass storage device 1314 and its associated computer-readable data storage media provide non-volatile, non-transitory storage for the organization server computing device 112. Although the description of computer-readable data storage media contained herein refers to a mass storage device, such as a hard disk or solid state disk, it should be appreciated by those skilled in the art that computer-readable data storage media can be any available non-transitory, physical device or article of manufacture from which the central display station can read data and/or instructions.
Computer-readable data storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable software instructions, data structures, program modules or other data. Example types of computer-readable data storage media include, but are not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROMs, digital versatile discs (“DVDs”), other optical storage media, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the organization server computing device 112.
According to various embodiments of the invention, the organization server computing device 112 may operate in a networked environment using logical connections to remote network devices through the network 110, such as a wireless network, the Internet, or another type of network. The organization server computing device 112 may connect to the network 110 through a network interface unit 1304 connected to the system bus 1322. It should be appreciated that the network interface unit 1304 may also be utilized to connect to other types of networks and remote computing systems. The organization server computing device 112 also includes an input/output controller 1306 for receiving and processing input from a number of other devices, including a touch user interface display screen, or another type of input device. Similarly, the input/output controller 1306 may provide output to a touch user interface display screen or other type of output device.
As mentioned briefly above, the mass storage device 1314 and the RAM 1310 of the organization server computing device 112 can store software instructions and data. The software instructions include an operating system 1318 suitable for controlling the operation of the organization server computing device 112. The mass storage device 1314 and/or the RAM 1310 also store software instructions and software applications 1316 that, when executed by the CPU 1302, cause the organization server computing device 112 to provide the functionality of the organization server computing device 112 discussed in this document. For example, the mass storage device 1314 and/or the RAM 1310 can store software instructions that, when executed by the CPU 1302, cause the organization server computing device 112 to display received data on the display screen of the organization server computing device 112.
Although various embodiments are described herein, those of ordinary skill in the art will understand that many modifications may be made thereto within the scope of the present disclosure. Accordingly, it is not intended that the scope of the disclosure in any way be limited by the examples provided.
Rao, Abhijit, Sellers, Robert Louis, Kakita, Neil Yoshihisa
Executed on | Assignor | Assignee | Conveyance | Reel/Frame
Jun 13 2018 | | Wells Fargo Bank, N.A. | (assignment on the face of the patent) |
Jun 13 2018 | KAKITA, NEIL YOSHIHISA | WELLS FARGO BANK, N.A. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 046175/0935
Jun 13 2018 | SELLERS, ROBERT LOUIS | WELLS FARGO BANK, N.A. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 046175/0935
Jun 19 2018 | RAO, ABHIJIT | WELLS FARGO BANK, N.A. | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 046175/0935