This disclosure relates to systems and methods for performing inclusive indoor navigation. State of the art systems and methods require extra hardware and fail to provide accurate localization and navigation with sufficient precision. The method of the present disclosure obtains nested environment data of a facility and estimates the current spatial location of a user in the nested environment using a surrounding recognition machine learning model. An optimal path, categorized as a convenient path, a shortest path, or a multi-destination path, from the current spatial location to a destination is determined. The current spatial location of the user is tracked on the optimal path using an augmented reality technique when navigation starts. The optimal path is dynamically updated based on feedback obtained from one or more user interaction modalities. The present disclosure provides user navigation with last-meter precision and no dependency on extra hardware or internet connectivity.
6. A system comprising:
one or more data storage devices operatively coupled to one or more hardware processors and configured to store instructions configured for execution via the one or more hardware processors to:
obtain nested environment data of a facility for indoor navigation performed by a user, wherein
the nested environment data is stored on and retrieved from a server, and
the nested environment data of the facility is obtained by:
evaluating a plurality of floor exit maps of each floor of a plurality of floors against an evaluation criterion, wherein the evaluation criterion is used to check real world architectural details of the facility;
creating, based on a mismatch between the plurality of floor exit maps and the evaluation criterion, a two-dimensional digital map of each floor among the plurality of floors of the facility;
sequentially arranging the created two-dimensional digital map of each floor of the plurality of floors of the facility;
determining, based on the sequential arrangement, a nested map of the facility; and
performing map labelling to localize and capture information of a plurality of landmarks in the nested map, wherein
the map labelling refers to adding details to the plurality of landmarks with respect to the created two-dimensional digital map along with capturing surrounding information of the plurality of landmarks,
each landmark of the plurality of landmarks is tagged to reference surroundings based on one or more parameters in the created two-dimensional digital map,
the one or more parameters include images of the plurality of landmarks, text or signages of the plurality of landmarks, a direction of the plurality of landmarks, and a wireless fidelity (Wi-Fi) signal strength and a magnetic field intensity of the plurality of landmarks;
receive a destination location within the facility from the user;
estimate, using a surrounding recognition machine learning model, a current spatial location of the user with a predefined precision range at centimeter (cm) level by identifying (i) a user specific area in the facility using the surrounding recognition machine learning model trained with a plurality of real world images of the facility and (ii) the current spatial location of the user with respect to the identified user specific area by triangulating input data received from a plurality of sensors;
determine an optimal path from the current spatial location to the destination location using the nested environment data, wherein the optimal path from the current spatial location to the destination location is categorized as (i) a convenient path, (ii) a multi-destination path, or (iii) a shortest path in accordance with one or more user constraints, wherein
the convenient path is a path with multiple destinations,
the one or more user constraints include user profiling information,
the convenient path is selected based on the one or more user constraints, a minimum time from the current spatial location to the destination location, and a minimum distance from the current spatial location to the destination location,
the multi-destination path is a path that is longest with respect to a time and a distance and covers all important landmarks within the facility,
the shortest path is a path that is shortest with respect to the time and the distance from the current spatial location to the destination location, and
the user profiling information refers to details of physical appearance of the user and is indicative of whether the user is a physically fit person without any disability;
track, using an augmented reality technique, the current spatial location of the user while the user navigates on the optimal path from the current spatial location to the destination location; and
detect one or more obstacles present on the optimal path using an obstacle detector, wherein the obstacle detector detects the one or more obstacles using:
(i) computer vision techniques utilizing data continuously captured by a camera to detect the one or more obstacles with a first range of view, and
(ii) one or more ultrasonic sensors to detect the one or more obstacles with a second range of view, wherein the second range of view refers to a small area range around the user to provide immediate alerts for obstacles at a short distance, and the small area range is 2 meters to 3 meters;
determine, based on the data that is continuously captured by the camera, a pattern of the one or more obstacles;
provide the determined pattern as input to the one or more ultrasonic sensors;
predict, via the one or more ultrasonic sensors, one or more incoming obstacles based on the provided pattern, wherein the one or more incoming obstacles are different from the detected one or more obstacles; and
dynamically update the optimal path from the tracked current spatial location of the user to the destination location based on a feedback obtained from one or more user interaction modalities.
11. One or more non-transitory machine readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors causes:
obtaining nested environment data of a facility for indoor navigation performed by a user, wherein
the nested environment data is stored on and retrieved from a server, and
the nested environment data of the facility is obtained by:
evaluating a plurality of floor exit maps of each floor of a plurality of floors against an evaluation criterion, wherein the evaluation criterion is used to check real world architectural details of the facility;
creating, based on a mismatch between the plurality of floor exit maps and the evaluation criterion, a two-dimensional digital map of each floor among the plurality of floors of the facility;
sequentially arranging the created two-dimensional digital map of each floor of the plurality of floors of the facility;
determining, based on the sequential arrangement, a nested map of the facility; and
performing map labelling to localize and capture information of a plurality of landmarks in the nested map, wherein
the map labelling refers to adding details to the plurality of landmarks with respect to the created two-dimensional digital map along with capturing surrounding information of the plurality of landmarks,
each landmark of the plurality of landmarks is tagged to reference surroundings based on one or more parameters in the created two-dimensional digital map,
the one or more parameters include images of the plurality of landmarks, text or signages of the plurality of landmarks, a direction of the plurality of landmarks, and a wireless fidelity (Wi-Fi) signal strength and a magnetic field intensity of the plurality of landmarks;
receiving a destination location within the facility from the user;
estimating, using a surrounding recognition machine learning model, a current spatial location of the user with a predefined precision range at centimeter (cm) level by identifying (i) a user specific area in the facility using the surrounding recognition machine learning model trained with a plurality of real world images of the facility and (ii) the current spatial location of the user with respect to the identified user specific area by triangulating input data received from a plurality of sensors;
determining an optimal path from the current spatial location to the destination location using the nested environment data, wherein the optimal path from the current spatial location to the destination location is categorized as (i) a convenient path, (ii) a multi-destination path, or (iii) a shortest path in accordance with one or more user constraints, wherein
the convenient path is a path with multiple destinations,
the one or more user constraints include user profiling information,
the convenient path is selected based on the one or more user constraints, a minimum time from the current spatial location to the destination location, and a minimum distance from the current spatial location to the destination location,
the multi-destination path is a path that is longest with respect to a time and a distance and covers all important landmarks within the facility,
the shortest path is a path that is shortest with respect to the time and the distance from the current spatial location to the destination location, and
the user profiling information refers to details of physical appearance of the user and is indicative of whether the user is a physically fit person without any disability;
tracking, using an augmented reality technique, the current spatial location of the user while the user navigates on the optimal path from the current spatial location to the destination location; and
detecting one or more obstacles present on the optimal path using an obstacle detector, wherein the obstacle detector detects the one or more obstacles using a combination of:
(i) computer vision techniques utilizing data continuously captured by a camera to detect the one or more obstacles with a first range of view, and
(ii) one or more ultrasonic sensors to detect the one or more obstacles with a second range of view, wherein the second range of view refers to a small area range around the user to provide immediate alerts for obstacles at a short distance, and the small area range is 2 meters to 3 meters;
determining, based on the data that is continuously captured by the camera, a pattern of the one or more obstacles;
providing the determined pattern as input to the one or more ultrasonic sensors;
predicting, via the one or more ultrasonic sensors, one or more incoming obstacles based on the determined pattern, wherein the one or more incoming obstacles are different from the detected one or more obstacles; and
dynamically updating the optimal path from the tracked current spatial location of the user to the destination location based on a feedback obtained from one or more user interaction modalities.
1. A processor implemented method, comprising:
obtaining, by one or more hardware processors, nested environment data of a facility for indoor navigation performed by a user, wherein
the nested environment data is stored on and retrieved from a server, and
the nested environment data of the facility is obtained by:
evaluating a plurality of floor exit maps of each floor of a plurality of floors against an evaluation criterion, wherein the evaluation criterion is used to check real world architectural details of the facility;
creating, based on a mismatch between the plurality of floor exit maps and the evaluation criterion, a two-dimensional digital map of each floor among the plurality of floors of the facility;
sequentially arranging the created two-dimensional digital map of each floor of the plurality of floors of the facility;
determining, based on the sequential arrangement, a nested map of the facility; and
performing map labelling to localize and capture information of a plurality of landmarks in the nested map, wherein
the map labelling refers to adding details to the plurality of landmarks with respect to the created two-dimensional digital map along with capturing surrounding information of the plurality of landmarks,
each landmark of the plurality of landmarks is tagged to reference surroundings based on one or more parameters in the created two-dimensional digital map,
the one or more parameters include images of the plurality of landmarks, text or signages of the plurality of landmarks, a direction of the plurality of landmarks, and a wireless fidelity (Wi-Fi) signal strength and a magnetic field intensity of the plurality of landmarks;
receiving, by the one or more hardware processors, a destination location within the facility from the user;
estimating, using a surrounding recognition machine learning model implemented by the one or more hardware processors, a current spatial location of the user with a predefined precision range at centimeter (cm) level by identifying (i) a user specific area in the facility using the surrounding recognition machine learning model trained with a plurality of real world images of the facility and (ii) the current spatial location of the user with respect to the identified user specific area by triangulating input data received from a plurality of sensors;
determining, by the one or more hardware processors, an optimal path from the current spatial location to the destination location using the nested environment data, wherein the optimal path from the current spatial location to the destination location is categorized as (i) a convenient path, (ii) a multi-destination path, or (iii) a shortest path in accordance with one or more user constraints, wherein
the convenient path is a path with multiple destinations,
the one or more user constraints include user profiling information,
the convenient path is selected based on the one or more user constraints, a minimum time from the current spatial location to the destination location, and a minimum distance from the current spatial location to the destination location,
the multi-destination path is a path that is longest with respect to a time and a distance and covers all important landmarks within the facility,
the shortest path is a path that is shortest with respect to the time and the distance from the current spatial location to the destination location, and
the user profiling information refers to details of physical appearance of the user and is indicative of whether the user is a physically fit person without any disability;
tracking, using an augmented reality technique implemented by the one or more hardware processors, the current spatial location of the user while the user navigates on the optimal path from the current spatial location to the destination location; and
detecting one or more obstacles present on the optimal path using an obstacle detector coupled with the one or more hardware processors, wherein the obstacle detector detects the one or more obstacles using a combination of:
(i) computer vision techniques utilizing data continuously captured by a camera to detect the one or more obstacles with a first range of view, and
(ii) one or more ultrasonic sensors to detect the one or more obstacles with a second range of view, wherein the second range of view refers to a small area range around the user to provide immediate alerts for obstacles at a short distance, and the small area range is 2 meters to 3 meters;
determining, by the one or more hardware processors, based on the data that is continuously captured by the camera, a pattern of the one or more obstacles;
providing, by the one or more hardware processors, the determined pattern as input to the one or more ultrasonic sensors;
predicting, via the one or more ultrasonic sensors coupled with the one or more hardware processors, one or more incoming obstacles based on the determined pattern, wherein the one or more incoming obstacles are different from the detected one or more obstacles; and
dynamically updating, by the one or more hardware processors, the optimal path from the tracked current spatial location of the user to the destination location based on a feedback obtained from one or more user interaction modalities.
2. The method of
3. The method of
4. The method of
detecting a deviation in a direction of the user from a preplanned direction on the optimal path; and
detecting an obstacle in the optimal path of the user based on the detected deviation in the direction of the user from the preplanned direction on the optimal path.
5. The method of
7. The system of
8. The system of
9. The system of
detect a deviation in a direction of the user from a preplanned direction on the optimal path; and
detect an obstacle in the optimal path of the user based on the detected deviation in the direction of the user from the preplanned direction on the optimal path.
10. The system of
This U.S. patent application claims priority under 35 U.S.C. § 119 to: Indian provisional patent application no. 202021001718, filed on Jan. 14, 2020. The entire contents of the aforementioned application are incorporated herein by reference.
The disclosure herein generally relates to the field of indoor positioning systems, and more particularly to a system and method for performing inclusive indoor navigation for assisting physically challenged subjects.
Indoor navigation, which includes locating a user with high precision and finding the way to a desired destination in a new environment, is a challenging task. Users with disabilities face additional limitations because of their physical challenges, and hence the need for navigational assistance is greater for disabled users than for mainstream users. The Global Positioning System (GPS), the most popular solution for outdoor navigation, fails to provide accurate navigation due to precision issues, and the problem is even more challenging indoors because GPS cannot be used in an indoor environment. Traditional systems, including Indoor Positioning Systems (IPS) that provide turn-by-turn directional assistance using Bluetooth beacons, RFID tags, Wi-Fi signatures, GPS, and geographic information systems (GIS) with IoT, may fail to provide accurate localization and navigation in real time, as especially required by persons with disabilities.
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. In an aspect, there is provided a processor implemented method, the method comprising: obtaining, by one or more hardware processors, nested environment data of a facility under consideration for indoor navigation performed by a user, wherein the nested environment data is stored on and retrieved from a server, and wherein the nested environment data of the facility is obtained by: creating a two-dimensional digital map for each floor among a plurality of floors of the facility, determining a nested map of the facility by sequentially arranging the two-dimensional digital maps created for each floor of the facility, and performing map labelling to localize and capture information of a plurality of landmarks in the nested map; receiving, by the one or more hardware processors, a destination within the facility from the user; estimating, using a surrounding recognition machine learning model implemented by the one or more hardware processors, a current spatial location of the user with a predefined precision range at centimeter (cm) level by identifying (i) a user specific area in the facility using the surrounding recognition machine learning model trained with a plurality of real world images of the facility and (ii) the current spatial location of the user with respect to the identified user specific area by triangulating input data received from a plurality of sensors; determining, by the one or more hardware processors, an optimal path from the current location to the destination using the nested environment data, wherein the optimal path from the current location to the destination is categorized as at least one of (i) a convenient path, (ii) a multi-destination path, or (iii) a shortest path in accordance with one or more user constraints; tracking, using an augmented reality technique implemented by the one or more hardware processors, the current spatial location of the user while the user navigates on the optimal path from the current location to the destination; and dynamically updating, by the one or more hardware processors, the optimal path from the tracked current spatial location of the user to the destination based on feedback obtained from one or more user interaction modalities.
In another aspect, there is provided a system, the system comprising: one or more data storage devices operatively coupled to one or more hardware processors and configured to store instructions configured for execution via the one or more hardware processors to: obtain nested environment data of a facility under consideration for indoor navigation performed by a user, wherein the nested environment data is stored on and retrieved from a server, and wherein the nested environment data of the facility is obtained by: creating a two-dimensional digital map for each floor among a plurality of floors of the facility, determining a nested map of the facility by sequentially arranging the two-dimensional digital maps created for each floor of the facility, and performing map labelling to localize and capture information of a plurality of landmarks in the nested map; receive a destination within the facility from the user; estimate, using a surrounding recognition machine learning model implemented by the one or more hardware processors, a current spatial location of the user with a predefined precision range at centimeter (cm) level by identifying (i) a user specific area in the facility using the surrounding recognition machine learning model trained with a plurality of real world images of the facility and (ii) the current spatial location of the user with respect to the identified user specific area by triangulating input data received from a plurality of sensors; determine an optimal path from the current location to the destination using the nested environment data, wherein the optimal path from the current location to the destination is categorized as at least one of (i) a convenient path, (ii) a multi-destination path, or (iii) a shortest path in accordance with one or more user constraints; track, using an augmented reality technique implemented by the one or more hardware processors, the current spatial location of the user while the user navigates on the optimal path from the current location to the destination; and dynamically update the optimal path from the tracked current spatial location of the user to the destination based on feedback obtained from one or more user interaction modalities.
In yet another aspect, there is provided one or more non-transitory machine readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause the one or more hardware processors to: obtain nested environment data of a facility under consideration for indoor navigation performed by a user, wherein the nested environment data is stored on and retrieved from a server, and wherein the nested environment data of the facility is obtained by: creating a two-dimensional digital map for each floor among a plurality of floors of the facility, determining a nested map of the facility by sequentially arranging the two-dimensional digital maps created for each floor of the facility, and performing map labelling to localize and capture information of a plurality of landmarks in the nested map; receive a destination within the facility from the user; estimate, using a surrounding recognition machine learning model implemented by the one or more hardware processors, a current spatial location of the user with a predefined precision range at centimeter (cm) level by identifying (i) a user specific area in the facility using the surrounding recognition machine learning model trained with a plurality of real world images of the facility and (ii) the current spatial location of the user with respect to the identified user specific area by triangulating input data received from a plurality of sensors; determine an optimal path from the current location to the destination using the nested environment data, wherein the optimal path from the current location to the destination is categorized as at least one of (i) a convenient path, (ii) a multi-destination path, or (iii) a shortest path in accordance with one or more user constraints; track, using an augmented reality technique implemented by the one or more hardware processors, the current spatial location of the user while the user navigates on the optimal path from the current location to the destination; and dynamically update the optimal path from the tracked current spatial location of the user to the destination based on feedback obtained from one or more user interaction modalities.
In accordance with an embodiment of the present disclosure, the one or more user constraints used for categorization of the optimal path include user profiling information, information of a landmark of interest, a distance from an initial location of the user to the destination, and a time to reach the destination.
In accordance with an embodiment of the present disclosure, the predefined precision range of the current spatial location of the user is 10 cm-20 cm.
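By way of illustration only, the two-stage estimation described in the aspects above (recognition of a user specific area from camera imagery, followed by refinement against labelled landmarks using sensor readings) could be sketched as follows. This is not the disclosed implementation: the classifier interface, the landmark fields such as `xy` and `wifi_dbm`, and the weighted-centroid triangulation are assumptions chosen purely to make the idea concrete.

```python
# Illustrative sketch of the two-stage localization (not from the disclosure):
# (1) a surrounding recognition model classifies the user specific area from a
#     camera frame, and
# (2) the position inside that area is refined by triangulating observed signal
#     readings against the landmarks labelled in the nested map.
import numpy as np

def identify_area(classifier, camera_frame):
    """Stage 1: predict which labelled area of the facility the frame shows."""
    probs = classifier.predict(camera_frame)   # hypothetical trained model interface
    return int(np.argmax(probs))

def triangulate(landmarks, observed_strengths):
    """Stage 2: weighted centroid of landmark positions, weighted by how closely
    the observed readings match each landmark's stored Wi-Fi signature."""
    positions = np.array([lm["xy"] for lm in landmarks], dtype=float)      # N x 2
    stored = np.array([lm["wifi_dbm"] for lm in landmarks], dtype=float)   # N
    weights = 1.0 / (1.0 + np.abs(stored - np.array(observed_strengths)))
    weights /= weights.sum()
    return positions.T @ weights               # (x, y) estimate on the floor map
```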
In accordance with an embodiment of the present disclosure, the step of tracking includes avoiding disorientation and side wall bumping of the user based on a deviation in direction of the user from a preplanned direction on the optimal path.
In accordance with an embodiment of the present disclosure, the feedback obtained from the one or more user interaction modalities for dynamically updating the optimal path includes feedback obtained from the obstacle detector, haptic feedback, voice instructions-based feedback, and visual interactions-based feedback.
In accordance with an embodiment of the present disclosure, the method further comprises detecting one or more obstacles present on the optimal path using an obstacle detector, wherein the obstacle detector detects the one or more obstacles using a combination of (i) computer vision techniques utilizing a plurality of data continuously captured by a camera to detect the one or more obstacles with a first range of view and (ii) one or more ultrasonic sensors to detect the one or more obstacles with a second range of view, and wherein the plurality of data continuously captured by the camera helps in determining a pattern of the one or more obstacles and is provided as input to the one or more ultrasonic sensors for prediction of one or more incoming obstacles based on the determined pattern.
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles.
It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems and devices embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the scope being indicated by the following claims.
The embodiments herein provide a system and method for performing inclusive indoor navigation. In multiple scenarios, whenever a user visits an unfamiliar environment, it is a challenge to find the way to a desired destination. For example, a person who is visiting a mall for the first time and wants to go to a specific store has to know where the store is, whether it is open, and whether the store is too crowded. Similarly, in the case of an airport, a person visiting for the first time may want to explore the airport and at the same time keep track of the flight to catch. Also, in the case of a book exhibition where the user wants to go to a particular author's stall, he needs to know one or more informative factors such as where the stall is, until when it is open, and whether the stall is too crowded, so that he can explore the rest of the book exhibition. Similarly, in the case of an educational institute which might have multiple wings of different departments, including physics and chemistry, a person may need to go to a kinematics lab on the second floor of the physics department building. In the case of a hospital, if the user enters an unfamiliar hospital and goes to the OPD (Outpatient Department), then as he/she shares his/her case papers, he/she should be automatically guided to the respective doctor or to a lab for further checkup. However, different types of challenging and complex situations need to be addressed for different types of users in indoor navigation. For example, for a visually challenged user, indoor navigation becomes more challenging since the visually challenged user may require specific directional assistance with the highest precision to stick to a path and avoid deviating from the path to reach the destination, along with surrounding information and obstacle detection. Similarly, a wheelchair-bound user may need a path wide enough for the wheelchair to pass, and the guided path should always avoid hurdles like staircases, steps, and the like, even if the path is not the shortest. Also, for an elderly user, a proactive approach is needed to guide the user through a path that addresses his old age limitations.
State of the art systems and methods use Bluetooth beacon based indoor positioning system (IPS) technology, which gives an accuracy of about 2 meters for localization. This might be negligible for a mainstream user, but for a person with disability (e.g., visually impaired) it creates a huge difference, as his/her whole turn would be shifted by two meters while the route requires precision in centimeters; such systems also require strategic positioning of beacons and create dependency on extra hardware. Further, RFID based systems give an accuracy of 20-50 cm and have a lower proximity range, wherein the user needs to carry the RFID tag along. Furthermore, conventional Wi-Fi signature technology-based systems rely on signal strength at different locations in the indoor environment, need extensive data collection, and give an accuracy of about 2-3 meters. Traditional systems also utilize augmented reality (AR) for the purpose of navigation, wherein the starting point is kept fixed and localization is done using markers, which may affect the look and feel of the indoor environment. However, for visually impaired users, use of augmented reality (AR) alone may not suffice in guiding the user to stick to a path without deviating and moving close to the walls. Thus, state of the art methods lack last-meter precision and accurate localization, and require extra hardware or internet dependency.
The system of the present disclosure enhances user experience and solves challenges of people with disabilities while abiding by law and creating a differential experience. The system of the present disclosure is an indoor positioning system (IPS) which is user centric and inclusive, and solves the last-meter localization problem with no hardware dependency, which means there is no need for extra hardware to be set up in a target indoor environment. In the context of the present disclosure, the term ‘inclusive' is used because of the broader reach of the system of the present disclosure, addressing physical limitations of a person with disability such as wheelchair-bound, visually impaired, or elderly people, and the like. Further, the system of the present disclosure includes mainstream users as well and brings them to a level ground in terms of navigation in an environment. Further, the system of the present disclosure performs indoor navigation with on-device path planning and user localization without internet, by utilizing capabilities of augmented reality (AR), computer vision (CV), machine learning (ML), and artificial intelligence (AI) based techniques. The device herein may be a personal digital assistant (PDA), a tablet, a mobile phone, or the like. The augmented reality (AR) based technique provides an integration of accelerometer, gyroscope, and camera which is used for identifying the spatial position and tracking of the user in real time as he/she moves in the indoor environment. The computer vision (CV) and machine learning (ML) based techniques are used for map creation, localization, and obstacle detection. In the proposed disclosure, the artificial intelligence (AI) based techniques are used for creating user profiling information (alternatively referred to as a user persona) with real time user attributes and pre-fed basic details of the user such as name, age, and the like. Further, the artificial intelligence based techniques are used for determining an optimal path, including a convenient path that can have multiple destinations, for routing a disabled user (e.g., routing a wheelchair-bound user along a path wide enough for smooth wheelchair passage, avoiding all the hurdles like staircases and steps). The convenient path for a mainstream user in a crowded place takes the user from one destination to the other, keeping reference of crowd from minimum to maximum. Further, the optimal path includes a multi-destination path (in case of museums or amusement parks) and a shortest path.
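One way to picture the path categorization just described is a constrained graph search: segments unusable for the user's profile (e.g., staircases for a wheelchair-bound user) are removed before searching, so the convenient path is simply the shortest path in the constrained graph. The sketch below is illustrative only and is not the disclosed algorithm; the edge format and the `wheelchair` profile key are assumptions.

```python
import heapq

def plan_path(edges, start, goal, user_profile):
    """Dijkstra over a filtered graph: edges marked as stairs are dropped for a
    wheelchair-bound user, so the returned path avoids such hurdles even if a
    shorter path through them exists. (Illustrative sketch, not the disclosure.)"""
    graph = {}
    for u, v, length, is_stairs in edges:
        if user_profile.get("wheelchair") and is_stairs:
            continue                                   # skip inaccessible segments
        graph.setdefault(u, []).append((v, length))
        graph.setdefault(v, []).append((u, length))

    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                                   # stale queue entry
        for nxt, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))

    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]                              # KeyError if goal unreachable
    path.append(start)
    return list(reversed(path))

# Hypothetical facility graph: the stairs edge is shorter but is avoided.
edges = [("lobby", "lift", 20.0, False), ("lobby", "stairs_1", 12.0, True),
         ("lift", "ward_3", 30.0, False), ("stairs_1", "ward_3", 25.0, False)]
print(plan_path(edges, "lobby", "ward_3", {"wheelchair": True}))
# -> ['lobby', 'lift', 'ward_3']
```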
Referring now to the drawings, and more particularly to
The I/O interface 104 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The interfaces 104 may include a variety of software and hardware interfaces, for example, interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, a camera device, and a printer. The interfaces 104 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For the purpose, the interfaces 104 may include one or more ports for connecting a number of computing systems with one another or to another server computer. The I/O interface 104 may include one or more ports for connecting a number of devices to one another or to another server.
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, one or more modules (not shown) of the system 100 can be stored in the memory 102. The one or more modules (not shown) of the system 100 stored in the memory 102 may include routines, programs, objects, components, data structures, and so on, which perform particular tasks or implement particular (abstract) data types. In an embodiment, the memory 102 includes a data repository 110 for storing data processed, received, and generated by the one or more hardware processors 104.
The data repository 110, amongst other things, includes a system database and other data. In an embodiment, the data repository 110 may be external (not shown) to the system 100 and accessed through the I/O interfaces 104. The memory 102 may further comprise information pertaining to input(s)/output(s) of each step performed by the processor 104 of the system 100 and methods of the present disclosure. In an embodiment, the system database stores information being processed at each step of the proposed methodology. The other data may include, data generated as a result of the execution of the one or more hardware processors 104 and the one or more modules (not shown) of the system 100 stored in the memory 102.
The system 100 further includes a plurality of sensors 108 which includes an accelerometer, a gyroscope, a magnetometer, a camera, an ultrasonic sensor, a Bluetooth, a GPS (global positioning system), an obstacle detector, a Wi-Fi sensor, and the like. In an embodiment, the obstacle detector refers to a small device that could be connected to a handheld device such as a smartphone and can be carried by the user in a pocket.
In an embodiment, the system 100 of the present disclosure can be configured to reduce the manual intervention. A detailed description of the above-described system and method for performing inclusive indoor navigation is shown with respect to illustrations represented with reference to
Referring to
Further, as depicted in step 204 of
In an embodiment, the GPS location of the user is determined before the navigational session starts. The GPS is used to determine the location of the user in order to load the corresponding nested environment data to the server for indoor navigation, but it is not used during the indoor navigational session as it is inaccurate indoors. In other words, the GPS serves the purpose of giving an approximate location of the user in order to determine which indoor area the user is entering, which is further used as a reference to load the respective nested environment data to the server or the local device. For example, in a large academic institute, the GPS location is used before the user enters the campus to load the respective nested environment data of the whole campus; then, as buildings in large campuses are kilometres away from each other, once the user reaches close to the destination building based on the GPS, the respective building's nested environment data is loaded.
In an embodiment, the nested environment implementation can be further understood with the example of a hotel where the user wants to go from the ground floor to a room on the third floor of the hotel. First, as per the GPS coordinates, the nested environment data is loaded (locally or via internet) to the server or the local device, from which the ground floor's environment data is loaded. Then, as the navigational session starts from the ground floor, the user is guided to the stairs or a lift to reach the third floor. On reaching the third floor, its environment data is loaded for further navigation to a specific room. Thus, the combined environment data received from the ground floor to the third floor is referred to as the nested environment data. In an embodiment, use of GPS is not confined to a location before the user enters the environment; it has a broader use to establish a relationship, wherein the relationship could be use of GPS with respect to a country, a city in the country, a campus in the city, a building in the campus, and so on. The above-mentioned relationship between country, city, campus, and building is facilitated by using the GPS, for example, railways or big corporates at country level, a multiple-campuses-within-a-city relationship, or a multiple-buildings-in-a-campus relationship. Further, the GPS helps in scaling a project to as big as country level or city level. For example, in the case of railways, the platform where the user is present is determined by a floor using the nested environment data, and the city where the user is present is determined by the GPS. Also, back tracing of the two-dimensional digital map of the indoor environment of the facility under consideration can be performed. For example, the back tracing can be performed from the two-dimensional digital map of the indoor environment of the facility, then to the respective building, then to the city, and then to the country, based on the relationship established by using the GPS.
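A minimal sketch of how the nested environment data and the GPS-gated loading described above might be organized is given below. All class, field, and function names (`Landmark`, `FloorMap`, `load_nested_data`, etc.) are hypothetical; they merely mirror the sequentially arranged floor maps, the landmark labelling parameters, and the coarse GPS selection discussed in this disclosure, and are not taken from it.

```python
# Illustrative data layout for nested environment data (names are assumptions):
# a facility holds floor maps in sequence, and each floor carries labelled
# landmarks with the parameters used for localization.
from dataclasses import dataclass, field
from math import hypot

@dataclass
class Landmark:
    name: str
    xy: tuple             # position on the 2-D floor map
    image_refs: list      # reference images of the landmark
    signage_text: str     # text or signage of the landmark
    facing_deg: float     # direction of the landmark
    wifi_dbm: float       # Wi-Fi signal strength at the landmark
    magnetic_ut: float    # magnetic field intensity at the landmark

@dataclass
class FloorMap:
    level: int
    landmarks: list = field(default_factory=list)

@dataclass
class Facility:
    name: str
    gps_latlon: tuple                               # coarse outdoor fix only
    floors: list = field(default_factory=list)      # sequentially arranged 2-D maps

def load_nested_data(facilities, user_latlon, max_deg=0.01):
    """Pick the facility whose coarse GPS position is nearest to the user; the
    returned object is then used for the whole indoor session without GPS."""
    nearest = min(facilities,
                  key=lambda f: hypot(f.gps_latlon[0] - user_latlon[0],
                                      f.gps_latlon[1] - user_latlon[1]))
    dist = hypot(nearest.gps_latlon[0] - user_latlon[0],
                 nearest.gps_latlon[1] - user_latlon[1])
    return nearest if dist <= max_deg else None
```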
Referring back to
Further, as depicted in step 210 of
In an embodiment, the step of tracking includes avoiding disorientation and side wall bumping of the user based on a deviation in the direction of the user from a pre-planned direction on the optimal path. In other words, once the navigational session starts, during tracking of the current spatial location of the user it is determined, using a magnetometer of the mobile device, whether the user starts moving in the direction of the optimal path; the direction the user is facing is captured and compared. In normal scenarios, the pre-planned direction on the optimal path should help the user remain in the centre of the optimal path. If the user falls out of a range of direction deflection, then he/she tends to bump into the side walls. As a measure to avoid the disorientation and side wall bumping, an alert is generated immediately to get the user (especially visually challenged users) back in the direction the path is leading towards.
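The direction-deflection check described above can be pictured as a simple heading comparison. The sketch below is illustrative only: the 15-degree tolerance and the alert strings are assumptions, not values from the disclosure.

```python
def heading_deviation_alert(magnetometer_heading_deg, planned_bearing_deg,
                            tolerance_deg=15.0):
    """Compare the direction the user is facing (magnetometer) with the pre-planned
    bearing of the current path segment, and return a corrective alert when the
    user drifts far enough to risk bumping into a side wall."""
    # signed smallest difference between two compass headings, in (-180, 180]
    diff = (magnetometer_heading_deg - planned_bearing_deg + 180.0) % 360.0 - 180.0
    if abs(diff) <= tolerance_deg:
        return None                       # within the allowed deflection range
    return "veer left" if diff > 0 else "veer right"

# e.g. the user faces 95 degrees while the segment bearing is 70 degrees -> "veer left"
```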
Referring to
Now, as the user moves in the indoor environment, using the plurality of data captured by the camera, the one or more obstacles are added as upcoming obstacles even before they come into the proximity of the user. In the case of the one or more ultrasonic sensors, if any immediate obstacle comes into proximity of the user (e.g., something falls in the way, or a person suddenly comes in front), it is detected. In other words, when the user moves, any obstacle present on the optimal path is captured even before the user physically arrives near the obstacle, and the system of the present disclosure would have a priori knowledge that such an obstacle would fall at a given distance from where it was captured. This way, comprehensive feedback covering upcoming obstacle awareness and immediate obstacle alerts is provided, along with the distance from the obstacle and information about the obstacle. In an embodiment, the obstacle detector could be a standalone small device connected to the mobile device implementing the system 100 and can be carried by the user in a pocket.
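An illustrative fusion of the two feedback sources described above (camera-based upcoming-obstacle awareness and ultrasonic immediate alerts within the 2-3 meter range) might look as follows. The function name and its input format are assumptions for illustration, not the disclosed implementation.

```python
def fuse_obstacle_feedback(cv_detections, ultrasonic_distance_m, alert_range_m=2.5):
    """Combine the two detectors: camera-based detections announce upcoming
    obstacles well ahead of the user, while the ultrasonic reading triggers an
    immediate alert inside the short range around the user.
    `cv_detections` is a list of (label, distance_m) pairs (illustrative format)."""
    messages = []
    for label, distance in sorted(cv_detections, key=lambda d: d[1]):
        messages.append(f"upcoming {label} in about {distance:.0f} m")
    if ultrasonic_distance_m is not None and ultrasonic_distance_m <= alert_range_m:
        messages.insert(0, f"stop: obstacle {ultrasonic_distance_m:.1f} m ahead")
    return messages

# e.g. fuse_obstacle_feedback([("chair", 6.0), ("trolley", 12.0)], 1.4)
# -> ['stop: obstacle 1.4 m ahead', 'upcoming chair in about 6 m',
#     'upcoming trolley in about 12 m']
```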
In an embodiment, the feedback obtained from the one or more user interaction modalities for dynamically updating the optimal path includes feedback obtained from the obstacle detector, haptic feedback, voice instructions-based feedback, and visual interactions-based feedback. For example, a deaf user is not given voice instructions-based feedback; the selected user interaction modalities for giving feedback to a deaf user may include visual interaction-based feedback, haptic feedback such as a beep or vibrations, and feedback obtained from the obstacle detector for obstacle avoidance if any obstacle is present on the optimal path. Similarly, the selected user interaction modalities for giving feedback to a blind user may include voice instructions-based feedback, haptic feedback such as a beep or vibrations, and feedback obtained from the obstacle detector.
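A minimal sketch of how the interaction modalities might be selected from the user profiling information, following the deaf-user and blind-user examples above, is shown below; the profile keys and modality names are illustrative assumptions rather than the disclosed logic.

```python
def select_modalities(user_profile):
    """Map user profiling information to the feedback modalities used during
    navigation: visual and haptic for a deaf user, voice and haptic for a blind
    user, and all modalities otherwise. (Illustrative sketch.)"""
    modalities = {"obstacle_detector"}      # obstacle feedback is always included
    if user_profile.get("deaf"):
        modalities |= {"visual", "haptic"}
    elif user_profile.get("blind"):
        modalities |= {"voice", "haptic"}
    else:
        modalities |= {"visual", "voice", "haptic"}
    return modalities
```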
In an embodiment, the system of the present disclosure can be extended with different features and utilities keeping the core navigation solution intact for any indoor environment, without any changes to be made to the indoor environment. A few high-impact, non-limiting use cases include retail, public transport, and public buildings, wherein retail further includes malls, supermarkets, and theatres. Similarly, public transport includes railway stations, airports, and bus stations. Public buildings may include hospitals, wherein the system of the present disclosure can be integrated with a hospital system. For example, a user enters a hospital and, as per the user's medical case, he/she is guided to the respective doctor. Further, as the doctor updates the required tests in his/her system, the system of the present disclosure guides the user to the respective labs. The public buildings use cases may further include government offices, museums, amusement parks, and/or the like. Since the system of the present disclosure tracks the user in the indoor space, for a fire evacuation use case there could be two ways of helping, including proactive support and rescue team help: firstly, the system of the present disclosure can direct the user to the nearest escape, and secondly, it may help rescue teams to reach and identify the user within the indoor space for rescue.
In an embodiment, the system of the present disclosure provides the user with an additional favourites functionality in which the user can mark a few destinations in his/her favourites list. For example, a user can save his/her cabin in his/her favourites list as ‘work', which may allow him/her to avoid entering his/her cabin location as the destination every time; rather, he/she can directly ask for the route to ‘work'.
The present disclosure for performing inclusive indoor navigation is independent of the internet as it performs all the processing on a mobile or handheld device itself. This reduces the time of the whole navigational experience, as real time data is used and processed on the mobile device itself, without first going to a server, processing the data there, and then returning the output to the mobile device. The system of the present disclosure provides companion guidance, which is an additional feature to give a human touch to the system, creating higher user engagement. The system of the present disclosure includes a virtual companion which communicates with the user.
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means, and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims (when included in the specification), the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope indicated by the following claims.