A method and system for analyzing a queue comprising: obtaining a first image acquired at a first time of a first position within a queue; obtaining a second image acquired at a second time of a second position within the queue; detecting a queue member within the first image; detecting a queue member within the second image; determining that the queue member detected within the second image is the same as the queue member detected within the first image; and determining a trajectory of the queue member within the queue based on a difference between the first time and the second time.
1. A computer-implemented method of analysing a queue comprising the steps of:
(a) obtaining, by a computer processor, a first image acquired at a first time of a first position within a queue;
(b) obtaining, by the computer processor, a second image acquired at a second time of a second position within the queue;
(c) detecting, by the computer processor, a queue member within the first image;
(d) detecting, by the computer processor, a queue member within the second image;
(e) determining, by the computer processor, that the queue member detected within the second image is the same as the queue member detected within the first image; and
(f) determining, by the computer processor, a trajectory of the queue member within the queue based on a difference between the first time and the second time.
15. A system for analysing a queue comprising:
a processor configured to:
(a) receive a first image acquired at a first time of a first position within a queue;
(b) receive a second image acquired at a second time of a second position within the queue;
(c) detect a queue member within the first image;
(d) detect a queue member within the second image;
(e) determine that the queue member detected within the second image is the same as the queue member detected within the first image; and
(f) determine a trajectory of the queue member within the queue based on a difference between the first time and the second time.
20. One or more non-transitory computer readable media storing computer readable instructions that, when executed, cause a system to perform a method of analyzing a queue by:
obtaining a first image acquired at a first time of a first position within a queue;
obtaining a second image acquired at a second time of a second position within the queue;
detecting a queue member within the first image;
detecting a queue member within the second image;
determining that the queue member detected within the second image is the same as the queue member detected within the first image; and
determining a trajectory of the queue member within the queue based on a difference between the first time and the second time.
This application claims priority to United Kingdom Application No. 1306313.6, filed Apr. 8, 2013, to Raja et al., which is hereby incorporated by reference in its entirety for all purposes.
The present invention relates to a system and method for analysing queues and in particular for determining queue properties from image and video data.
Many organisations dealing with the public must manage the length of time people queue for services, for example customers in a bank or passengers going through security checks at an airport. In order to conduct such queue management more effectively, these organisations require statistical estimates of queuing times experienced by people within different time slots throughout the day, so as to optimise their operations and minimise queuing times accordingly.
Currently, techniques employed for monitoring queue lengths and estimating queuing times in airports involve a combination of laser-based passenger counting systems (which cannot track specific individuals) and Bluetooth signal-tracking (which requires passengers to have an active Bluetooth device about their person). Such systems cannot be installed in all areas and are highly inaccurate. Most visual sensor based systems require the ability to accurately track all individuals in a queue in order to derive useful statistics, an unreasonable and unworkable assumption in even modest real-world scenarios where imaging conditions and crowd behaviour are relatively unrestricted.
Therefore, there is required a system and method that overcomes these problems.
Queue members (in particular, but not limited to, people) move on a trajectory through a queue, usually as new queue members join the queue at the back and the longest-queuing members leave the queue at the front once the purpose of the queue has been fulfilled for them.
Images are obtained or acquired from two or more positions within the queue. These may be still or moving images. As individual queue members move on their trajectory through the queue then they will be imaged at different times and appear in different images (different positions). The images are analysed and one or more individual queue members are identified in the images. When the same individual is identified in different images at different times then an inference may be made as to their movement and trajectory through the queue.
In general, each queue member will have substantially the same trajectory (although the queue may move faster or slower at different times). Therefore, determining the trajectory of one particular queue member provides an approximation of the progress or progression of the queue for a particular time period. Therefore, information about the queue, such as the total queuing time or the time taken to pass from one point in the queue to another, may be determined. Following different queue members through the queue may provide additional information and queuing statistics, such as the queue time at a particular time of day, for example.
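By way of illustration only, the following Python sketch extrapolates a total queue time from a single member's observed transit between two positions; the function name, timestamps and distances are hypothetical, chosen merely to make the reasoning above concrete.

from datetime import datetime

def estimate_queue_time(t_first, t_second, d_between, d_total):
    # t_first/t_second: detection times at the first and second positions.
    # d_between/d_total: queuing distance between the two positions and the
    # full length of the queue, in any consistent unit (metres or people).
    transit = (t_second - t_first).total_seconds()
    speed = d_between / transit        # queue progress per second
    return d_total / speed             # seconds to traverse the whole queue

# A member seen at 10:00:00 and again 4 minutes later, having advanced
# 12 m along a 60 m queue, implies a total queue time of about 20 minutes.
total = estimate_queue_time(datetime(2013, 4, 8, 10, 0, 0),
                            datetime(2013, 4, 8, 10, 4, 0), 12.0, 60.0)
print(f"estimated total queue time: {total / 60:.1f} min")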
Improved queue time estimation or inference may be achieved by randomly, pseudo-randomly or otherwise sampling two or more queue members (preferably visually stable queue members) to detect and track in order to yield a combined queue time estimation. Therefore, not all queue members need be detected and tracked (they may not all be visually distinct or stable due to occlusion and/or non-distinctive clothing).
In accordance with a first aspect there is provided a method of analysing a queue comprising the steps of:
(a) obtaining a first image acquired at a first time of a first position within a queue;
(b) obtaining a second image acquired at a second time of a second position within the queue;
(c) detecting a queue member within the first image;
(d) detecting a queue member within the second image;
(e) determining that the queue member detected within the second image is the same as the queue member detected within the first image; and
(f) determining a trajectory of the queue member within the queue based on a difference between the first time and the second time.
Optionally, the method may further comprise the steps of obtaining further images of further positions within the queue and detecting the queue member within these further images. Additional images and queue member detections may improve the characterisation of the queue and its progress.
Preferably, the images overlap. In other words, the view of each image overlaps with at least one other image so that each portion of the queue may be monitored and analysed.
Preferably, positions within the queue are selected from the front of the queue, back of the queue, and the middle of the queue. There may be several positions within or at different stages along the queue. The actual or physical positions or locations in the queue may be measured and recorded.
Optionally, pairs or further pairs of images may be obtained and used to detect the queue member and to determine their trajectory, wherein each pair of images includes at least one common point in the queue. The pairs of images may overlap or be separated.
Optionally, determining the trajectory may further include the step of forming the trajectory from two or more other trajectories. These other trajectories may be tracklets or partial trajectories within the queue.
Optionally, determining that the queue member detected within the second image is the same as the queue member detected within the first image may be based on similarities of visual appearances of the queue member within each image. Other criteria may be used.
Advantageously, determining a trajectory of the queue member within the queue may further comprise the step of applying the Munkres (also known as the Hungarian) assignment algorithm to the two or more other trajectories and the similarities of the visual appearances of the queue member within each image. Other algorithms may be used.
Preferably, detecting the queue member within an image further comprises selecting a group of pixels containing the queue member. In other words, a procedure may be executed to box or bound the queue member and isolate their image from the rest of the image in order to restrict processing or comparison to that particular portion.
Preferably, the images may be video images. Individual frames within the video images may be analysed, or the analysis may be carried out on groups of frames or on the moving images. Therefore, movement of individuals may be used as a further identifier of that individual or used in the comparison steps.
Preferably, determining a trajectory of the queue member within the queue may be further based on the queuing distance between the first position and the second position. In other words, as well as the time being taken into consideration in determining movement of queue members within the queue, the actual distance moved (and perhaps the total length of the queue) may also be considered. Distance may be in absolute terms (metres) but also in number of queue members (e.g. 10 people).
Optionally, the method may further comprise the step of determining from the trajectory a total queuing time for the queue member or queue. Other statistics may be determined or collected.
Optionally, the method may further comprise iterating steps (a) to (f) to determine a plurality of trajectories. This improves accuracy as it facilitates averaging and other statistical techniques. Each of the plurality of trajectories may relate to a different queue member, for example.
Optionally, the method may further comprise determining an average trajectory from the plurality of trajectories.
Optionally, the method may further comprise the step of determining an average queue time from the average trajectory.
Preferably, the trajectory may include data describing how the queue member moves through the queue over time.
According to a second aspect, there is provided a system for analysing a queue comprising a processor configured to:
(a) receive a first image acquired at a first time of a first position within a queue;
(b) receive a second image acquired at a second time of a second position within the queue;
(c) detect a queue member within the first image;
(d) detect a queue member within the second image;
(e) determine that the queue member detected within the second image is the same as the queue member detected within the first image; and
(f) determine a trajectory of the queue member within the queue based on a difference between the first time and the second time.
Preferably, the system may further comprise a first and a second camera arranged to capture the first and second images, respectively. The images may be acquired from different sources including live images, recorded images, multiple cameras and the same camera.
Optionally, the first and second cameras may be arranged to have partially overlapping views of the queue.
Preferably, the system may further comprise a user interface configured to receive a signal indicating the first and the second positions of the queue on the first and second images. Therefore, the system may receive user input defining points or positions within the queue of interest or to define the extent of the queue.
Optionally, the first or second positions of the queue may be selected from: the start of the queue, one or more intermediate positions in the queue, and the end of the queue.
The methods described above may be implemented as a computer program comprising program instructions to operate a computer. The computer program may be stored on a computer-readable medium or transmitted as a signal.
It should be noted that any feature described above may be used with any particular aspect or embodiment of the invention.
The following numbered clauses provide illustrative examples:
1. A method of analysing a queue comprising the steps of:
(a) obtaining a first image acquired at a first time of a first position within a queue;
(b) obtaining a second image acquired at a second time of a second position within the queue;
(c) detecting a queue member within the first image;
(d) detecting a queue member within the second image;
(e) determining that the queue member detected within the second image is the same as the queue member detected within the first image; and
(f) determining a trajectory of the queue member within the queue based on a difference between the first time and the second time.
The present invention may be put into practice in a number of ways and embodiments will now be described by way of example only and with reference to the accompanying drawings, in which:
It should be noted that the figures are illustrated for simplicity and are not necessarily drawn to scale.
In one example implementation, the system and method may provide automated visual analysis of queuing statistics within user-defined queuing regions spanning multiple and preferably contiguous camera views.
The system and method allows analysis of people queuing and queue management. More specifically, the system facilitates interactive and automated analysis of video streams depicting people in queues for the derivation of queuing statistics (e.g. average queuing times), along with real-time graphical visualisation for human users.
The method and system may combine interactive components with an automated queue analysis system configured to derive more accurate and useful statistics for queue management in a non-invasive manner. The method and system may operate by automatically detecting and tracking individuals within video streams from video cameras encompassing the length or substantially the length of a queue. The queue may be automatically or manually segmented into a fixed number of visual regions. Person tracks involve multiple associated individual detections (video frames) of the individual concerned, with transition times computed between the individual detections within a track and associated with the defined queue regions. The observed transition times for individual queue regions are then aggregated over time and overall queue estimates derived accordingly by integrating over individual queue region estimates. Corresponding plots may be generated for denoting observed overall queuing times within different time slots for human visualisation.
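As a minimal sketch of the aggregation just described, and assuming invented per-region timing values, the following Python fragment averages observed transition times for each queue region and integrates over the regions to yield an overall queue time estimate for a time slot.

from statistics import mean

# Hypothetical observations: region index -> transition times (seconds)
# computed between the individual detections within person tracks.
region_times = {
    0: [110, 130, 125],   # region at the back of the queue
    1: [240, 260],        # intermediate region
    2: [95, 100, 90],     # region at the front of the queue
}

# Aggregate each region independently, then integrate over the regions.
overall = sum(mean(times) for times in region_times.values())
print(f"estimated overall queuing time: {overall:.0f} s")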
Such estimates may be obtained in this way by relaxing the requirements imposed by prior art visual sensor based systems to accurately track every individual in the queue for the duration of their presence; i.e. prior art techniques require the detection of a queue member in every video frame and then generate a complete track characterising their movement along the queue in its entirety.
In contrast, the present method and system only requires two or more detections per individual in the form of partial “tracklets” encompassing a portion of the whole queue (i.e. less than the whole of the queue). This may be sufficient to derive useful statistics over time, and provides an advantage in enabling usability of the system in larger and more crowded environments where queuing individuals may at times undergo temporary occlusion.
An interactive component may enable a human user to define the position and extent of a queue spanning multiple camera views, as well as to specify time slots for real-time visualisation of generated queuing statistics.
The method may be arranged into several component steps. This is illustrated schematically as a process flow between different system components or modules in
An interactive graphical user interface (GUI) based component 1 enables users to graphically delineate the position and extent of a queue spanning multiple partially-overlapping camera views. The interface 1 presents successive pairs of camera views on screen, which the user may mark with the mouse or other pointing device to indicate queue position landmarks corresponding to, for example: (a) the back of the queue (for the first camera view); (b) one or more connecting regions denoting the same physical position in intermediate pairs of camera views to characterise the “joins” between those views; and (c) the front of the queue (for the last camera view). Other landmarks may be defined. The resulting set of queue position landmarks indicates a region of interest for further analysis by describing a preferably seamless position and extent of the queue across the multiple camera views. This is illustrated further in
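The embodiment does not prescribe a storage format for these landmarks; one possible representation, sketched in Python with hypothetical camera names and pixel coordinates, might be:

# Queue position landmarks delineated by the user across three partially
# overlapping camera views; each landmark is an (x, y) pixel coordinate.
queue_definition = {
    "cam_1": {"back_of_queue": (612, 431), "join_with_cam_2": (40, 455)},
    "cam_2": {"join_with_cam_1": (980, 470), "join_with_cam_3": (35, 460)},
    "cam_3": {"join_with_cam_2": (990, 450), "front_of_queue": (510, 120)},
}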
A processing component may consist of four modules for performing a four-stage person detection and analysis procedure.
A module comprising a person detector 2 may denote the positions of individual people in each camera view. These denotations may take the form of a “bounding box” indicating a rectangular (or other shaped) region of pixels that make up a detected individual. Reference [1] illustrates a person detector 2 based on mixtures of multiscale deformable part models, although other techniques may be used. The single component person model described with reference to FIG. 1 of reference [1] illustrates one type of person detector that may be used with the present system and method and the remainder of reference [1] provides technical details of an example person detector and associated mathematical functions. Reference [1] therefore provides the skilled person with the material and examples necessary to build a software-based person detector.
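For illustration, a person detector yielding bounding boxes may be sketched in a few lines of Python. The embodiment cites the deformable part model detector of reference [1]; OpenCV's stock HOG pedestrian detector is substituted here purely as a simpler stand-in that likewise returns rectangular pixel regions.

import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("camera_view.jpg")   # hypothetical input frame
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
for (x, y, w, h) in boxes:              # one bounding box per detection
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)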
A module for merging “tracklets” on the basis of similarity in appearance and space-time consistency (module 5) may also be provided. This module 5 may be used to combine multiple tracklets (or small portions of the queue) that correspond to the same individual to form a larger track or trajectory. More specifically, two or more tracklets may be merged by combining several measures of similarity, for example: (a) similarity of appearance; (b) consistency of position; and (c) consistency of speed [6]. If the similarity exceeds a fixed or predetermined threshold, then the tracklets may be merged into one. Such merged tracklets may be further combined if necessary. During system operation, each newly generated tracklet may be compared with a “pool” of all existing tracklets (
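A minimal sketch of such a merging step follows, assuming each tracklet carries a normalised appearance descriptor, start/end pixel positions and a speed estimate; the field names, weights and threshold are illustrative assumptions rather than values taken from the embodiment.

import numpy as np

def merge_score(t1, t2, w=(0.5, 0.3, 0.2)):
    # Combined similarity from (a) appearance, (b) consistency of position
    # and (c) consistency of speed; each term lies in [0, 1].
    appearance = float(np.dot(t1["descriptor"], t2["descriptor"]))
    position = 1.0 / (1.0 + np.linalg.norm(t1["last_xy"] - t2["first_xy"]))
    speed = 1.0 / (1.0 + abs(t1["speed"] - t2["speed"]))
    return w[0] * appearance + w[1] * position + w[2] * speed

THRESHOLD = 0.7   # fixed or predetermined merge threshold (illustrative)

def try_merge(new_tracklet, pool):
    # Compare a newly generated tracklet against the pool of existing
    # tracklets; merge with the best match if sufficiently similar.
    best = max(pool, key=lambda t: merge_score(new_tracklet, t), default=None)
    if best is not None and merge_score(new_tracklet, best) > THRESHOLD:
        best["detections"].extend(new_tracklet["detections"])
    else:
        pool.append(new_tracklet)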
An interactive processing system 7 is shown
These components enable more robust interactive real-time queue statistical analysis and graphical visualisation for human users in large crowded areas. The system operates by automatically detecting and tracking individuals within video streams from video cameras encompassing the length of a queue, which may be automatically or manually segmented into a fixed number of visual regions. Person or queue member tracks may involve multiple associated individual detections (video frames) of the individual concerned, with transition times computed between the individual detections within a track and associated with the defined queue regions. The observed transition times for individual queue regions may then be aggregated over time and overall queue estimates derived accordingly by integrating over individual queue region estimates. Corresponding plots are generated for denoting observed overall queuing times within different time slots for human visualisation. This process continues indefinitely with real-time refinement of statistics and visualisation.
In one example implementation (others may be used), detection, extraction and matching of queue members may use a parts-based person detector [1] to detect individuals (queue members) and produce bounding boxes indicating the position and size of individuals in image frames. Image patches within these bounding boxes may then each be split into a number of equal horizontal segments (for example six). Within each segment, a comprehensive set of types of visual features are extracted (for example 29 types), encompassing the colour and texture appearance of individuals for matching [2]. More specifically, these colour features incorporate different colour spaces including RGB, Hue-Saturation and YCrCb, with texture features derived from Gabor wavelet responses at eight different scales and orientations, as well as thirteen differently parameterised Schmid Filters [9], for example. Normalised histograms are generated capturing the statistics of these features for each horizontal segment, and then concatenated into a single feature vector. Given 16 bins for the histogram corresponding to each of the 29 feature types for each of the six horizontal strips, we thus have a 2784-dimensional feature vector per bounding box, which may be used as an appearance descriptor. Rather than consider each feature type equally in terms of relevance, the system may dynamically learn the importance of each of these feature types to more strongly weight those features most relevant for matching across different cameras [2, 3, 10]. The resulting model may take the form of a support vector machine (SVM) known as RankSVM [8, 7]. This model may be used to compute matching scores between the appearance descriptors of detected individuals.
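A simplified Python sketch of this descriptor construction is given below; for brevity it histograms only the three RGB channels of each horizontal strip rather than the full set of 29 colour and texture feature types, so it yields a 288-dimensional vector (6 strips x 3 channels x 16 bins) rather than the 2784-dimensional descriptor described above.

import cv2
import numpy as np

def appearance_descriptor(patch, n_strips=6, n_bins=16):
    # Split the bounding-box patch into equal horizontal strips, compute a
    # normalised histogram per strip per channel, and concatenate them all
    # into a single feature vector used as the appearance descriptor.
    parts = []
    for strip in np.array_split(patch, n_strips, axis=0):
        for channel in range(3):
            hist = cv2.calcHist([strip], [channel], None, [n_bins], [0, 256])
            parts.append(cv2.normalize(hist, None).flatten())
    return np.concatenate(parts)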
The Munkres Assignment algorithm, also known as the Hungarian algorithm [5, 4], may be employed as part of a multi-target tracking scheme to locally group detections in different frames as likely belonging to the same person or queue member. This process may yield tracklets encompassing individual detections over multiple frames. An individual queue member D is accordingly represented as a tracklet T_D = {α_(D,1), . . . , α_(D,J)} comprising J individual detections with appearance descriptors α_(D,j).
More precisely, tracklets may be built up incrementally over time, with an incomplete set updated after each frame by assigning individual detections from that frame to a tracklet according to their appearance similarity and spatial proximity. That is, given: (1) a set S = {α_(1,f), . . . , α_(M,f)} of M appearance descriptors for detections in frame f with corresponding pixel locations {β_(1,f), . . . , β_(M,f)}; and (2) a current set of N incomplete tracklets R = {T̂_1, . . . , T̂_N} with their most recently added appearance descriptors {α̂_(n,f_n)} and predicted pixel locations {β̂_(n,f)} for frame f, an M×N cost matrix C may be constructed with entries:

C_(m,n) = ω_1·|α̂_(n,f_n) − α_(m,f)| + ω_2·|β̂_(n,f) − β_(m,f)|
In essence, this cost is computed as a weighted combination of appearance descriptor dissimilarity and physical pixel distance. Predicted pixel locations β̂_(n,f) for frame f are estimated by assuming constant linear velocity from the last known location and velocity. The Munkres Assignment algorithm maps rows to columns in C so as to minimise the total cost, with each detection added accordingly to its mapped incomplete tracklet. Surplus detections are used to initiate new tracklets. In practice, an upper bound is placed on cost, with assignments exceeding the upper bound retracted and the detection concerned treated as surplus. Additionally, tracklets which have not been updated for a length of time may be treated as complete.
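The assignment step itself may be sketched as follows, using SciPy's linear_sum_assignment (an implementation of the Munkres/Hungarian algorithm); the weights and the cost upper bound are illustrative assumptions.

import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_detections(det_desc, det_xy, trk_desc, trk_xy_pred,
                      w1=1.0, w2=0.05, max_cost=10.0):
    # One update step of the incremental tracklet builder: construct the
    # M x N cost matrix C described above and solve the assignment.
    M, N = len(det_desc), len(trk_desc)
    C = np.zeros((M, N))
    for m in range(M):
        for n in range(N):
            C[m, n] = (w1 * np.linalg.norm(det_desc[m] - trk_desc[n])
                       + w2 * np.linalg.norm(det_xy[m] - trk_xy_pred[n]))
    rows, cols = linear_sum_assignment(C)
    # Retract assignments exceeding the upper bound; those detections are
    # treated as surplus and would initiate new tracklets.
    matched = [(m, n) for m, n in zip(rows, cols) if C[m, n] <= max_cost]
    assigned = {m for m, _ in matched}
    surplus = [m for m in range(M) if m not in assigned]
    return matched, surplus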
As will be appreciated by the skilled person, details of the above embodiment may be varied without departing from the scope of the present invention, as defined by the appended claims.
Many combinations, modifications, or alterations to the features of the above embodiments will be readily apparent to the skilled person and are intended to form part of the invention. Any of the features described specifically relating to one embodiment or example may be used in any other embodiment by making the appropriate changes.