Techniques allow a computer to search much faster and more responsively for graph shapes similar to a user-selected graph shape. Data can be pre-processed and stored as vectors, along with an index. The index can be used to find, in a computationally efficient manner, similar vectors that represent graph shapes similar to a user-selected shape. Vectors of multiple resolutions can be used to anticipate the different sizes of graph sections that a user can select, and comparisons can be repeated and refined. When a satisfactorily small number of candidate vectors is determined, more computationally intensive distance calculations can be performed on data reconstructed from the vectors.
1. A system comprising:
one or more hardware computer processors configured to execute computer executable instructions to cause the system to:
generate a first plurality of vectors that represent first sections of stored series data;
generate a second plurality of vectors that represent the stored series data at a finer resolution than represented by the first plurality of vectors;
cause display, on a user computer, of an interactive graphical user interface including a selectable visualization of at least a portion of the stored series data;
receive, from the user computer and via the interactive graphical user interface, a selection of first series data from the selectable visualization;
determine, by reference to both the first plurality of vectors and the second plurality of vectors, at least a first candidate section of the stored series data having a similarity with the selection of first series data, wherein the second plurality of vectors are used to refine the determination of the first candidate section; and
transmit, for display on the user computer and in the interactive graphical user interface, at least an indication of the first candidate section in the selectable visualization.
13. A computer-implemented method comprising:
by one or more hardware computer processors executing computer executable instructions:
generating a first plurality of vectors that represent first sections of stored series data;
generating a second plurality of vectors that represent the stored series data at a finer resolution than represented by the first plurality of vectors;
causing display, on a user computer, of an interactive graphical user interface including a selectable visualization of at least a portion of the stored series data;
receiving, from the user computer and via the interactive graphical user interface, a selection of first series data from the selectable visualization;
determining, by reference to both the first plurality of vectors and the second plurality of vectors, at least a first candidate section of the stored series data having a similarity with the selection of first series data, wherein the second plurality of vectors are used to refine the determination of the first candidate section; and
transmitting, for display on the user computer and in the interactive graphical user interface, at least an indication of the first candidate section in the selectable visualization.
2. The system of
determining a first vector from the first plurality of vectors and representing at least a first portion of the selection of first series data; and
performing first one or more comparisons to determine at least the first candidate section of the stored series data, the first one or more comparisons including at least a first comparison of some of the first plurality of vectors against the first vector.
3. The system of
determining a second vector from the second plurality of vectors and representing at least a second portion of the selection of first series data; and
performing second one or more comparisons to further determine at least the first candidate section of the stored series data, the second one or more comparisons including at least a second comparison of some of the second plurality of vectors against the second vector.
4. The system of
determining a subset of the second plurality of vectors that are adjacent to a vector from the first plurality of vectors.
5. The system of
6. The system of
generate the index based at least in part on a nearest neighbor computation or distance computation.
7. The system of
coefficients of results of a mathematical transformation of the first sections of stored series data; and
a normalization index.
8. The system of
9. The system of
perform a reverse transform of the mathematical transform to construct an approximation of at least the first candidate section using vector data.
11. The system of
compare the selection of the first series data to a candidate section; and
compare the selection of the first series data to an offset section, wherein the offset section begins at a shifted time that is offset from a beginning time of the candidate section, and the shifted time is less than a time span of the candidate section.
12. The system of
comparing the selection of the first series data to the candidate section comprises calculating a first distance, deviation, or other statistical metric; and
comparing the selection of the first series data to the offset section comprises calculating a second distance, deviation, or other statistical metric.
14. The computer-implemented method of
determining a first vector from the first plurality of vectors and representing at least a first portion of the selection of first series data; and
performing first one or more comparisons to determine at least the first candidate section of the stored series data, the first one or more comparisons including at least a first comparison of some of the first plurality of vectors against the first vector.
15. The computer-implemented method of
determining a second vector from the second plurality of vectors and representing at least a second portion of the selection of first series data; and
performing second one or more comparisons to further determine at least the first candidate section of the stored series data, the second one or more comparisons including at least a second comparison of some of the second plurality of vectors against the second vector.
16. The computer-implemented method of
determining a subset of the second plurality of vectors that are adjacent to a vector from the first plurality of vectors.
17. The computer-implemented method of
by the one or more hardware computer processors executing computer executable instructions:
generating the index based at least in part on a nearest neighbor computation or distance computation.
18. The computer-implemented method of
coefficients of results of a mathematical transformation of the first sections of stored series data; and
a normalization index.
19. The computer-implemented method of
20. The computer-implemented method of
by one or more hardware computer processors executing computer executable instructions:
performing a reverse transform of the mathematical transform to construct an approximation of at least the first candidate section using vector data.
This application references various features of and is a continuation of U.S. patent application Ser. No. 15/997,548, filed on Jun. 4, 2018, which application claims the benefit of priority to U.S. provisional patent application No. 62/593,815, filed on Dec. 1, 2017, the entirety of which is hereby made a part of this specification as if set forth fully herein and is incorporated by reference herein for all purposes, for all that it contains.
The present disclosure relates to techniques for improving a computer's speed of making visual comparisons. More specifically, the present disclosure relates to improving computing speeds for comparing a selected section of one or more graphs against other graphs.
Computers have limited processing power. Although a typical CPU may have a frequency of several gigahertz, the CPU can still be too slow to perform certain tasks or may take an impractically long time to do so. For example, a computer may receive a stream of new data, but the CPU may not be fast enough to perform computations at the speed that new data is received and may fall farther and farther behind in performing the computations. As another example, a user may attempt to use a computer to process a large amount of data, but the computer may take too long to respond, thereby frustrating the user. As a result, current computer systems are unable to process certain quantities of data within limited time frames. Similarly, many applications that involve large-scale data processing are impossible for people to perform by hand.
While digital computers are designed to perform basic mathematical and logical operations, digital computers have great difficulty performing basic visual analysis. In many cases, computers cannot perform basic visual analysis at all. Indeed, a common technique to discern if a user is a computer or a human is to present a CAPTCHA picture requiring simple visual analysis, such as identifying pictures of animals or street signs. Even when visual analysis by computers is possible, computers may do so very slowly or require a specially designed system with massive amounts of resources. There remains room for improvement in enabling computers to perform visual analysis and in enabling computers to do so faster with fewer resources.
Systems and methods for faster processor comparisons of visual graph features are disclosed herein. An aspect features a computer system that includes one or more hardware computer processors configured to execute computer executable instructions in order to cause the system to: generate a first plurality of vectors that represent first sections of stored time series data; transmit, to a user computer, data for displaying a graph of a first time series data; receive, from the user computer, an indication of a user selection of the first time series data; determine a first vector representing at least a first portion of the user-selected section of the first time series data; perform one or more comparisons to determine candidate sections of the stored time series data, the one or more comparisons including at least a first comparison of some of the first plurality of vectors against the first vector to determine first candidate sections of the stored time series data; and transmit, for display on the user computer, results of the one or more comparisons, the results including an indication of at least one of the candidate sections.
The computer system of the preceding paragraph can include one, any combination, or all of the following features of this paragraph. The first plurality of vectors include: coefficients of results of a mathematical transformation of the first sections of stored time series data; and a normalization index. Generating the first plurality of vectors comprises performing a mathematical transform that includes at least one of: a Fourier transform, Chebyshev transform, or polynomial approximation. The one or more hardware computer processors are further configured to execute computer executable instructions in order to cause the system to: perform a reverse transform of the mathematical transform to construct an approximation of at least one of the candidate sections using vector data. The first comparison is performed by referencing an index. The one or more hardware computer processors are further configured to execute computer executable instructions in order to cause the system to, before receiving the indication of the user selection: generate an index based at least in part on a nearest neighbor computation or distance computation. The one or more hardware computer processors are further configured to execute computer executable instructions in order to cause the system to: generate a second plurality of vectors that represent the stored time series data at a finer resolution than represented by the first plurality of vectors. The one or more hardware computer processors are further configured to execute computer executable instructions in order to cause the system to: convert at least a second portion of the user-selected section of the first time series data into a second vector; determine a subset of the second plurality of vectors that are at least partially included in a candidate section of the first candidate sections and adjacent to a vector from the first plurality of vectors; and perform a comparison of the subset of the second plurality of vectors against the second vector to determine second candidate sections of the stored time series data, where the second candidate sections are more similar to the user-selected section of the first time series data than the first candidate sections that are not included in the second candidate sections. The first time series data can be the stored time series data. The first time series data is different from the stored time series data, and both the first time series data and the stored time series data are stored in a database. The stored time series data is transmitted to the system as streaming data from a sensor. The one or more hardware computer processors are further configured to execute computer executable instructions in order to cause the system to: compare the user-selected section to a candidate section; and compare the user-selected section to an offset section, wherein the offset section begins at a shifted time that is offset from a beginning time of the candidate section, and the shifted time is less than a time span of the candidate section. Comparing the user-selected section to a candidate section comprises calculating a first distance, deviation, or other statistical metric; and comparing the user-selected section to the offset section comprises calculating a second distance, deviation, or other statistical metric.
The one or more hardware computer processors are further configured to execute computer executable instructions in order to cause the system to: perform one or more comparisons of at least a part of the user-selected section to a second plurality of vectors generated based at least in part on a second time series data that is different from the stored time series data, wherein the second plurality of vectors represent sections of the second time series data having time ranges that are included in time ranges of the candidate sections of the stored time series data.
An aspect features a system comprising one or more hardware computer processors configured to execute computer executable instructions in order to cause the system to: generate a first plurality of vectors that represent first sections of stored series data; transmit, to a user computer, data for displaying a graph of a first series data; receive, from the user computer, an indication of a user selection of the first series data; determine a first vector representing at least a first portion of the user-selected section of the first series data; perform one or more comparisons to determine candidate sections of the stored series data, the one or more comparisons including at least a first comparison of some of the first plurality of vectors against the first vector to determine first candidate sections of the stored series data; and transmit, for display on the user computer, results of the one or more comparisons, the results including an indication of at least one of the candidate sections.
The computer system of the preceding paragraph can include one, any combination, or all of the following features of this paragraph. The first plurality of vectors include: coefficients of results of a mathematical transformation of the first sections of stored series data. The one or more hardware computer processors are further configured to execute computer executable instructions in order to cause the system to: perform a reverse transform of the mathematical transform to construct an approximation of at least one of the candidate sections using vector data. The first comparison is performed by referencing an index. The one or more hardware computer processors are further configured to execute computer executable instructions in order to cause the system to: generate a second plurality of vectors that represent the stored series data at a finer resolution than represented by the first plurality of vectors; convert at least a second portion of the user-selected section of the first series data into a second vector; determine a subset of the second plurality of vectors that are at least partially included in a candidate section of the first candidate sections and adjacent to a vector from the first plurality of vectors; and perform a comparison of the subset of the second plurality of vectors against the second vector to determine second candidate sections of the stored series data, where the second candidate sections are more similar to the user-selected section of the first series data than the first candidate sections that are not included in the second candidate sections. The one or more hardware computer processors are further configured to execute computer executable instructions in order to cause the system to: compare the user-selected section to a candidate section; and compare the user-selected section to an offset section, wherein the offset section begins at a shifted phase that is offset from a beginning domain of the candidate section, and the shifted phase is less than a domain span of the candidate section.
Accordingly, in various embodiments, large amounts of data are automatically and dynamically calculated interactively in response to user inputs, and the calculated data is efficiently and compactly presented to a user by the system. Thus, in some embodiments, the user interfaces described herein are more efficient as compared to previous user interfaces in which data is not dynamically updated and compactly and efficiently presented to the user in response to interactive inputs.
Further, as described herein, the system may be configured and/or designed to generate user interface data useable for rendering the various interactive user interfaces described. The user interface data may be used by the system, and/or another computer system, device, and/or software program (for example, a browser program), to render the interactive user interfaces. The interactive user interfaces may be displayed on, for example, electronic displays (such as touch-enabled displays).
Additionally, it has been noted that design of computer user interfaces “that are useable and easily learned by humans is a non-trivial problem for software developers.” (Dillon, A. (2003) User Interface Design. MacMillan Encyclopedia of Cognitive Science, Vol. 4, London: MacMillan, 453-458.) The various embodiments of interactive and dynamic user interfaces of the present disclosure are the result of significant research, development, improvement, iteration, and testing. This non-trivial development has resulted in the user interfaces described herein which may provide significant cognitive and ergonomic efficiencies and advantages over previous systems. The interactive and dynamic user interfaces include improved human-computer interactions that may provide reduced mental workloads, improved decision-making, reduced work stress, and/or the like, for a user. For example, user interaction with the interactive user interfaces described herein may provide an optimized display of time-varying report-related information and may enable a user to more quickly access, navigate, assess, and digest such information than previous systems.
In some embodiments, data may be presented in graphical representations, such as visual representations like charts and graphs, where appropriate, to allow the user to comfortably review the large amount of data and to take advantage of humans' particularly strong pattern recognition abilities related to visual stimuli. In some embodiments, the system may present aggregate quantities, such as totals, counts, and averages. The system may also utilize the information to interpolate or extrapolate, e.g., forecast, future developments.
Further, the interactive and dynamic user interfaces described herein are enabled by innovations in efficient interactions between the user interfaces and underlying systems and components. For example, disclosed herein are improved methods of receiving user inputs, translation and delivery of those inputs to various system components, automatic and dynamic execution of complex processes in response to the input delivery, automatic interaction among various components and processes of the system, and automatic and dynamic updating of the user interfaces. The interactions and presentation of data via the interactive user interfaces described herein may accordingly provide cognitive and ergonomic efficiencies and advantages over previous systems.
Various embodiments of the present disclosure provide improvements to various technologies and technological fields. For example, as described above, existing data storage and processing technology (including, e.g., in-memory databases) is limited in various ways (e.g., manual data review is slow, costly, and less detailed; data is too voluminous), and various embodiments of the disclosure provide significant improvements over such technology. As another example, the solutions described herein can improve alerting technology. A system allows users to select graph features without defining mathematical equations for those features, and alerts can be generated if real-time streaming data is determined to be sufficiently visually similar to the selected graph features. Additionally, various embodiments of the present disclosure are inextricably tied to computer technology. In particular, various embodiments rely on detection of user inputs via graphical user interfaces, calculation of updates to displayed electronic data based on those user inputs, automatic processing of related electronic data, and presentation of the updates to displayed images via interactive graphical user interfaces. Such features and others (e.g., processing and analysis of large amounts of electronic data) are intimately tied to, and enabled by, computer technology, and would not exist except for computer technology. For example, the interactions with displayed data described below in reference to various embodiments cannot reasonably be performed by humans alone, without the computer technology upon which they are implemented. Further, the implementation of the various embodiments of the present disclosure via computer technology enables many of the advantages described herein, including more efficient interaction with, and presentation of, various types of electronic data.
Generally described, aspects of the present disclosure are directed to a system and method for finding sections of graphs that can have visual features similar to a user-selected section of a graph. A user can select a section of the graph, and the system can automatically search through the same or other time series to find similar sections. The system can perform the search in a computationally efficient manner, and the system can do so without the user providing an equation or mathematical definition of the data included in the user-selected section.
Uses and User Interfaces
A sensor 103 can be used to collect data for a variety of monitored devices 101. For example, sensors can be used to collect temperature, pressure, size, volume, current, voltage, power, magnetic field, force, humidity, energy, speed, acceleration, flow rates, stress, tension, mass, weight, battery life, etc. It can be useful to monitor various devices including computer systems, machinery, cars, planes, boats, factories, power plants, electric grids, oil rigs, data centers, buildings, fields, production centers, infrastructure, delivery equipment, weather, etc. In some embodiments, the sensors can provide readings as time series data. Time series data can include measurements about the monitored device made each second or over a certain period of time, and the readings can be provided through a network 105 to the datacenter 109. The data can additionally or alternatively be provided to a user system 107 from the datacenter 109 or from the sensors 103.
As data is collected, the collected sensor data can be stored in a data store 121. The collected data can also be graphed and displayed through a display device 113 of the user system 107. Sometimes, a user can use a computer 111 to retrieve stored data from the data store 121 for display. Sometimes, sensor data can be streamed from the sensor 103 to the user system 107 and displayed in real time. Some techniques disclosed herein enable streaming sensor data to be processed and compared for visual features at a rate that can keep up with the rate that the streaming sensor data is received. A data store can include any computer readable storage medium and/or device (or collection of data storage mediums and/or devices). Examples of data stores include, but are not limited to, optical disks (e.g., CD-ROM, DVD-ROM, etc.), magnetic disks (e.g., hard disks, floppy disks, etc.), memory circuits (e.g., solid state drives, random-access memory (RAM), etc.), and/or the like. Another example of a data store is a hosted storage environment that includes a collection of physical data storage devices that may be remotely accessible and may be rapidly provisioned as needed (commonly referred to as “cloud” storage).
Vast amounts of sensor data can be collected and stored in a first database 123. For example, the first database 123 can store time series data from sensor S1, including data taken at different times for different lengths of time. The first database 123 can also store time series data from sensor S2, and so on. The first database 123 can store data for tens, hundreds, or thousands of sensors, or more. The first database 123 can store data for many different intervals, such as the sensor data collected for different intervals of seconds, minutes, hours, days, weeks, etc. The first database can store intervals of data collected over days, weeks, months, years, or other long periods of time. The sensors 103 can provide multiple data readings per second, such as 10 Hz, 100 Hz, 1 kHz, 1 MHz, 1 GHz, or faster. The amount of data collected can quickly exceed many gigabytes, terabytes, petabytes, or other amounts beyond what a processor can responsively analyze in response to a user command. A database can include any data structure (and/or combinations of multiple data structures) for storing and/or organizing data, including, but not limited to, relational databases (e.g., Oracle databases, MySQL databases, etc.), non-relational databases (e.g., NoSQL databases, etc.), in-memory databases, spreadsheets, comma separated values (CSV) files, eXtensible Markup Language (XML) files, TeXT (TXT) files, flat files, spreadsheet files, and/or any other widely used or proprietary format for data storage. Databases are typically stored in one or more data stores. Accordingly, each database referred to herein (e.g., in the description herein and/or the figures of the present application) is to be understood as being stored in one or more data stores.
A second database 125 can store vector data. The vector data can include a first plurality of S1 vectors (e.g., as further discussed with respect to
Users may collect and analyze sensor data for a variety of uses. In some cases, users may view a graph of sensor data (e.g., time series data from sensor S1) on the display device 113. For example, a sensor 103 on a battery device can take five voltage readings per second and report [0.55V, 0.54V, 0.55V, 0.56V, 0.57V] over the course of one second. As another example, a sensor 103 on a generator may report temperature data [(0.0 seconds, 50° C.); (0.7 seconds, 50° C.), (1.0 seconds, 51° C.), (1.5 seconds, 51° C.), etc.]. Other sensors 103 can report time series data at different consistent or variable rates, including hundreds or thousands of times per second. Other sensors can report other types of data besides time series data. In some systems, there can be many different sensors (e.g., sensor S1, sensor S2, . . . , sensor SN) monitoring one device 101, or there can be many different sensors 103 monitoring many different devices.
After viewing the graph of sensor data, the user can select a section of the sensor data, such as a section of the presented S1 Time Series Data, in a user interface of the user system 107. In the user interface, the user can select to perform a search for other sections of the sensor data that look similar, such as further described with respect to
In various embodiments, the user can select a section 201 of a first graph of data (e.g., sensor data collected on Jan. 1, 2020 from a first sensor S1) and then perform a search for similarly looking sections in any of: the same graph being displayed, data from the same sensor (e.g., searching available data from the first sensor S1), or any other data (including data from other sensors such as S2 through SN). The user can use the menu 205 to select which data to perform the search on. In some embodiments, the user can also specify additional search options, such as a number of matches to find, a section of the sensor data to search, which sensors or types of data to search, whether magnitude or shape is more important, and other options.
As explained further below, the search for a similarly looking feature can be performed by a computer system. Vector representations of sensor data can be determined, and an index indicating the similarity of vectors can be pre-computed before a user performs a search. In response to a search for a user selected section 201, the user selected section 201 can be converted into one or more vectors that represent subsets of the user selected section 201, and the index can be referenced to compare the subsets of the user selected section 201 to other vectors in a computationally efficient manner. As further discussed with respect to
How to Process the User-Selected Section of a Graph for Comparison
A section of a graph can have a combination of peaks, valleys, swings, increases, decreases, and/or curves, which a user may be interested in. However, the user (or even a computer) may be unable to write a mathematical equation of the shape for a computer to search against. Although a computer can compare an equation to other data in the data store 121, real-world sensor data rarely, if ever, fits into easily expressible algebraic functions such as y=mx+b. For example,
Additionally, unlike humans, computers lack the inherent ability to make visual comparisons of one graph to another. Although a computer's instruction set architecture (such as the X86 architecture, a MIPS architecture, a RISC architecture, an ARM architecture) may include operations such as addition, subtraction, multiplication, read, write, and logical comparisons (e.g., is equal to, is greater than, is less than), computers are unable to evaluate an instruction of, “Does this first shape look similar to this second shape?”
Accordingly, techniques are disclosed for comparing a user-selected section of a graph to other data without needing a provided mathematical equation of the selected shape.
A first technique for a computer to compare a user-selected section of a graph to other data without a provided algebraic expression of the user-selected section of a graph is to perform a distance calculation. For example, the Euclidean distance between two 2-dimensional (x, y) points can be expressed as √(Δx² + Δy²). The Euclidean distance between time series S1 (y1, y2, y3, etc.) and time series S2 (z1, z2, z3, etc.) can be expressed as √((y1−z1)² + (y2−z2)² + (y3−z3)² + . . . ) (e.g., as shown in
However, when the amount of data becomes large, such a computationally intensive analysis becomes impractically slow. A typical 3-GHz processor, with an unrealistic 100% pipeline and prediction efficiency, can perform about 3 billion pipeline operations per second. Each Euclidean term includes three operations: a subtraction of two values (e.g., y1−z1) to determine a difference, a squaring or multiplication of the difference to yield a product, and an addition of the product to the next term. In an example, a user selects a 10 second time series data selection sampled at 1,000 Hz for 10,000 total data points, and a Euclidean distance comparison is performed against time series data that includes 1,000 samples per second for 24 hours, or about 86.4 million data values. Each 10,000 data point comparison would take about 30,000 operations, and this comparison would be repeated at each phase offset, about 86,390,000 times. The processor could complete the query in about 864 seconds, or 14.4 minutes (unrealistically ignoring any additional read/write latency). To search data collected over one year, this would take 365 times as long, or about 87.6 hours. If 10 sensors were included, then the computation would take about 5.2 weeks to complete. For the same or similar reasons, when the quantity of data is sufficiently large, a processor may not be able to perform a real-time analysis on a stream of data, such as sensor data, without the improved techniques described herein.
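For concreteness, the brute-force approach can be sketched in a few lines of Python. This is an illustrative sketch only, not part of the disclosure; the function name and the top_k parameter are hypothetical:

```python
import numpy as np

def brute_force_search(query: np.ndarray, stored: np.ndarray, top_k: int = 5):
    """Slide the query across the stored series, computing a Euclidean
    distance at every phase offset: O(len(stored) * len(query)) work."""
    n, m = len(stored), len(query)
    distances = np.empty(n - m + 1)
    for offset in range(n - m + 1):
        window = stored[offset:offset + m]
        distances[offset] = np.sqrt(np.sum((window - query) ** 2))
    # The smallest distances correspond to the most similar-looking sections.
    best = np.argsort(distances)[:top_k]
    return [(int(o), float(distances[o])) for o in best]
```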
How to Process the Search More Responsively
A second technique can be used to perform computations in a way that responds more quickly when users want to perform a search for similarly looking sections of a graph. The second technique can also provide more accurate search results in terms of graphical similarity by being less sensitive to shifts in time or value. A classifier technique can be used, and a search index for the classifier can be pre-computed. Although the examples below are discussed with respect to the kth nearest neighbor (“kNN”) technique, the teachings disclosed herein can extend to any similar classifier technique.
The kNN technique can be used to find similar points. Distances between nearby (or all) points can be determined, and a search index (such as a table, tree, or other data structure) to facilitate kNN comparisons can be pre-computed before a user performs a search. In response to a user request for a search, the index can be referenced to determine nearest neighbors (which can indicate similar-looking sections of graphs) substantially more responsively than by brute-force distance computation. The kNN technique has been used to provide approximate solutions to finding nearest neighbors, but the answers are not necessarily correct. Realizations included herein are that the kNN technique or other techniques can be used to find similar vectors that correspond to similarly looking sections of graphs, and that techniques can be used to overcome the accuracy problem.
A first kNN search is performed to find the k=1 nearest neighbor of the selected data point 401. A plurality of lines 409 indicate divisions of the X-Y space indicating which point is nearest. The division boundaries 409 can be pre-computed and stored as a search index, such as a table, tree, or other data structure. The user-selected data point 401 is located in the division including point 408, and therefore is closest to point 408. Accordingly, in response to a user query seeking the nearest neighbor of user selected point 401, the search index can be referenced to determine the nearest neighbor instead of computing the distance between point 401 and all other points.
The pre-computed index for assisting searches and comparisons can be extended to find the k=5 nearest neighbors or other numbers of nearest neighbors. In some implementations, accuracy can suffer when the “k” value increases, the dimensions increase, or if some dimensions are weighted differently than others. A line 403 includes an example index result, where 5 nearby neighbors of user-selected point 401 are determined. However, the results may sometimes incorrectly exclude point 405 and include point 407.
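As an illustration of an index-assisted kNN lookup, the following sketch uses a k-d tree, one possible pre-computed search structure; the disclosure does not mandate any particular index, and the data here is random placeholder data:

```python
import numpy as np
from scipy.spatial import cKDTree

# Pre-compute: build the index once, before any user search is requested.
rng = np.random.default_rng(0)
points = rng.random((100_000, 8))   # e.g., 8-term vector representations
index = cKDTree(points)

# Query time: find the k=5 nearest neighbors of a user-derived vector by
# walking the tree instead of computing 100,000 distances directly.
query = rng.random(8)
distances, neighbor_ids = index.query(query, k=5)
```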
As disclosed further below, the comparisons, such as a kNN search, can be used to find graph sections that look similar in a computationally efficient manner. It is realized that the 2-dimensional, XY coordinate example shown in
Smaller Vector Form
Techniques can be used to reduce the complexity of the vectors. A smaller vector form can reduce the storage space for the vectors in a computer system, and the vectors can be processed faster. Sections of the time series data can be mathematically transformed (such as by using a Fourier transform, a Fast Fourier transform, a Chebyshev transform, etc.), and a smaller vector representation of the sections of the time series data can be created. The vector representation can also include a normalization index (e.g., a ymin and a ymax). For example, time series data (y1, y2, y3, . . . , yn) can have a vector representation including one or both normalization indexes ymin, ymax and the coefficients (e.g., C0, C1, C2, . . . , Cm) associated with the resulting transformation, and “m” can be a number smaller than “n” such that the number of terms in the vector representation is less than the number of terms in the time series data.
As an example, a user-selected section of data can include (1, 1, 3, 5, 2, 3, 3, 2, 5, 1, 2, 3, 5, 1, 3). It can be identified that the minimum value is 1 and the maximum value is 5. Accordingly, ymin=1 and ymax=5 terms can be included as numbers in the vector to indicate the range. This can preserve data about the range of data and magnitude of changes. Then, the data can be normalized (e.g., scaled and/or shifted) to a normalized range (e.g., from 0 to 1, from −1 to 1, or any other range) so that later on, coefficients of shapes in a normalized range can be more accurately compared to each other. In some embodiments, a normalizing range of the data can be equivalently captured by yrange=4, and a shift from the normalized minimum can be reflected with yshift (which can be 1 to indicate that the minimum value is shifted by 1 as compared to an example normalization range beginning at 0). Any of the example normalization values can be included in a vector representation of a section.
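A minimal sketch of this normalization step, for illustration only (the helper name is hypothetical):

```python
import numpy as np

def normalize_section(ys):
    """Record the range, then scale the section to [0, 1]. The
    (ymin, ymax) pair preserves the magnitude information that
    normalization would otherwise discard."""
    ys = np.asarray(ys, dtype=float)
    ymin, ymax = ys.min(), ys.max()
    span = (ymax - ymin) or 1.0   # avoid division by zero on flat sections
    return ymin, ymax, (ys - ymin) / span

ymin, ymax, normalized = normalize_section(
    [1, 1, 3, 5, 2, 3, 3, 2, 5, 1, 2, 3, 5, 1, 3])
# ymin=1.0, ymax=5.0; the normalized values now lie in [0, 1]
```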
A user-selected section (y1, y2, y3, . . . , yn) of time series data can also be mathematically transformed. For example, a Chebyshev polynomial can be fitted to a user-selected section (y1, y2, y3, . . . , yn) of time series data, and the coefficients of the expansion terms in the resulting transform (see Eq. 1) can be included in a vector representation (see Eq. 2).
(y1, y2, y3, . . . , yn) ≈ C0T0(x) + C1T1(x) + C2T2(x) + . . . + CmTm(x)   Eq. 1
Vector = [C0, C1, C2, . . . , Cm]   Eq. 2
The Tm terms are the Chebyshev basis polynomials. Accordingly, any section of time series data can be represented with a vector including normalization values and/or the coefficients from a transform. An example vector can include [ymin, ymax, C0, C1, C2, . . . , Cm]. A vector of “m” terms can represent (approximately) a data series having significantly more than “m” terms, especially if the frequency of the data series is less than m/2. Although a Chebyshev transform is used as an example, it should be understood that the coefficients of other mathematical transforms, such as a Fourier transform (cosine, sine, fast, or other), Taylor series expansion, Maclaurin expansion, Laplace transform, geometric series, arithmetic series, polynomial series, or any other similar approximation, expansion, or transform can be used. More generally, any (X, Y) data series can be approximated as in Eq. 3 below and represented by the vector in Eq. 2.
(y1, y2, y3, . . . , yn) ≈ C0 + C1f1(x) + C2f2(x) + . . . + Cmfm(x)   Eq. 3
Instead of comparing the full time series data, smaller vectors that represent the time series data can be used to make comparisons or generate indexes. For example, to directly compare two vectors, the Euclidean distance formula can be applied to the vector terms. By using the smaller vectors, smaller quantities of data can be compared much faster than the actual data. The vectors and search indexes for facilitating searching and comparisons of vectors can be pre-computed and stored in the second database 125 shown in
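One hypothetical realization of Eq. 1 and Eq. 2, sketched here with NumPy's Chebyshev least-squares fit; the function names and the choice of m are illustrative assumptions, not mandated by the disclosure:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def section_to_vector(ys, m=8):
    """Represent an n-point section as [ymin, ymax, C0, ..., Cm]:
    two normalization terms plus m+1 Chebyshev coefficients."""
    ys = np.asarray(ys, dtype=float)
    ymin, ymax = ys.min(), ys.max()
    normalized = (ys - ymin) / ((ymax - ymin) or 1.0)
    xs = np.linspace(-1.0, 1.0, len(ys))        # Chebyshev domain
    coeffs = cheb.chebfit(xs, normalized, deg=m)
    return np.concatenate(([ymin, ymax], coeffs))

def vector_distance(v1, v2):
    """Euclidean distance applied directly to the short vector terms."""
    return float(np.sqrt(np.sum((np.asarray(v1) - np.asarray(v2)) ** 2)))
```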
When a user selects a section of a graph such as shown in
When the actual computations for similarity are performed, certain terms of the vector can sometimes be given more or less weight. For example, if a user indicates that a shape of a graph is more important than the actual values, then the normalization indexes and C0 term (which can indicate an average in some series) can be ignored or given less weight.
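For example, a weighted comparison might look like the following sketch; the weighting factor and the vector layout are illustrative assumptions:

```python
import numpy as np

def weighted_vector_distance(v1, v2, shape_over_magnitude=True):
    """Compare vectors laid out as [ymin, ymax, C0, C1, ..., Cm].
    If shape matters more than values, down-weight the normalization
    terms and the C0 (average) term before measuring distance."""
    v1, v2 = np.asarray(v1), np.asarray(v2)
    weights = np.ones(len(v1))
    if shape_over_magnitude:
        weights[:3] = 0.1   # hypothetical down-weighting factor
    diff = (v1 - v2) * weights
    return float(np.sqrt(np.sum(diff ** 2)))
```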
Variable Time Sections
Two vectors can be compared against each other for similarity when they represent the same amount of data or the same x-axis domains. In the case of time series data, two vectors can be compared when they represent the same amounts of time. The stored time series data (e.g., the S1 time series data) in the first database 123 can be processed to determine corresponding vectors and search indexes (e.g., S1 Vectors and Index) that can be stored in a second database 125 as shown in
The vectors and search index can be pre-computed and stored in the second database 125 before a user indicates a selected section of a graph for comparison. When the user selects a section 201 of a graph (as shown in
The vector data and search index can be pre-processed and made available before a user search is performed. The vectors can then be compared against user selections representing equal lengths of time. However, the user can select any arbitrary length of time (or amount of data). Therefore, it is unknown what variable-length vectors and indexes to pre-compute until the user wants to perform a comparison and interacts with the computer 111 of
Additional techniques can be used to achieve the improved response speeds associated with pre-computed vector data of smaller sizes and indexes despite not knowing what the user will select ahead of time, such as a particular section of a graph, an amount of time, or quantity of data. The techniques disclosed herein can also apply to unknown quantities or domains of other types of data selected by a user.
Pre-computing all possible vectors representing all possible sizes of selectable data at every phase-shifted offset can take more storage space than the actual sensor data and/or can take an unreasonably long time. A technique discussed with respect to
After receiving a user selection of data, the vectors shown in
The vector data 505 includes vectors that represent different contiguous amounts of the time series data 503. Each vector in the vector data 505 is drawn as a block that is aligned with a section of the time series data 503 represented by the vector. The example vector data 505 is organized into resolution levels, with vectors representing the longest amounts of time series data 503 labeled with “½” resolution, and vectors representing increasingly shorter amounts of time series data 503 in the finer resolution levels.
The vector data 505 can include a vector (not shown) representing the entire span of time series data 503. The vector data 505 can also include vectors representing first-sized subsets of the time series data 503. In the example, two vectors at the “½” resolution level represent respective halves of the time series data 503. The vector data also includes vectors representing smaller subsets of the time series data 503, including quarters of the time series data 503 indicated by ¼, eighths of the time series data indicated by ⅛, and so on.
Although the example shown in
Pre-processing sections of time series data to create vectors can cause phase information to be lost, which can cause accuracy to be lost. To preserve accuracy, different, phase-shifted versions of the pre-processed vectors can be created, and the comparisons repeated against phase-shifted versions of the vectors.
In addition to the vectors shown in
For example, vector data 505 can include the V/2 vectors phase-shifted by T=50, T=100, T=150, T=200, T=250, T=300, T=350, T=400, and T=450. As another example, the vector data 505 can include T=2, T=4, T=6, T=8, T=10, T=12, and T=14 phase-shifted versions of the V/64 vectors. In other examples, the phase shifts within resolution levels can be different values, vectors within a resolution level can be phase-shifted by different increments, and/or the phase shifts at a first resolution level can be larger, smaller, or the same as vectors in other resolution levels.
The vector data 505, including the vector representations of different contiguous amounts of the time series data 503 at different resolution levels, and including the phase-shifted versions thereof, can be stored in the second database 125 shown in
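Taken together, the multi-resolution and phase-shifted vectors can be pre-computed in a single pass, as in the following illustrative sketch; the level denominators and the number of shifts per level are arbitrary example choices, and section_to_vector is the transform sketched earlier:

```python
def build_vector_pyramid(series, levels=(2, 4, 8, 16, 32, 64),
                         shifts_per_level=4):
    """Pre-compute vectors for halves, quarters, eighths, ... of the
    series, plus phase-shifted versions of each, keyed by
    (denominator, start index)."""
    n = len(series)
    pyramid = {}
    for denom in levels:
        width = n // denom
        step = max(1, width // shifts_per_level)   # phase-shift increment
        for start in range(0, n - width + 1, step):
            pyramid[(denom, start)] = section_to_vector(
                series[start:start + width])
    return pyramid
```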
A user can select section 601, which includes a peak shape followed by an average section. The user wants to quickly search for other similarly looking sections within the same graph. In other examples, the user can also search for more complex shapes in other graphs. The amount of time series data included in the user-selected section 601 may not exactly align with any division of the time series data represented by any of the vectors 602.
In such a case, a first comparison can be performed based at least in part on a first vector 603 at a first resolution level (e.g., at the 1/16 resolution) to determine similar vectors. In the example shown in
The nearest (or similar, or mathematically close) neighbors of the first vector 603 can be determined. The nearest neighbors can be determined by comparing the first vector 603 to other vectors of the same resolution, such as by performing a distance comparison. The nearest neighbors can also be determined by referencing an index to perform an index-assisted comparison. As used herein, a comparison can refer to either a direct comparison or an index-assisted comparison if the context allows for both. By referencing an index (e.g., a pre-computed index stored in the second database 125), the nearest neighbors of first vector 603 can be determined to include vectors 605, 607, and 609 in a computationally efficient manner. The index can, for example, already list that the nearest neighbors of vector 603 include vectors 605, 607, and 609. Accordingly, the index-assisted comparison of vector 603 can include looking up the comparison results in the index.
Based at least in part on identifying vectors 605, 607, and 609, initial candidate sections A, B, and C can be determined as sections of the graph 619 that will likely look visually similar to a user-selected section 601. The initial candidate sections A, B, and C include sections of the graph that are represented by vectors 605, 607, and 609 and extend to cover the same amount of time as the user-selected section 601. Each candidate section can span from a starting point to an ending point of a section of the graph 619.
Indeed, initial candidate sections A, B, and C represent sections of the graph that include above-average data points and at least a portion of a peak. In some embodiments, an initial similarity ranking can be performed, such as determining that the initial candidate sections B, A, and C are in order from most to least similar to the section of the graph represented by vector 603. The order can be based on indexed information about which vectors are most similar to the first vector 603, based on computed distances between the first vector 603 and vectors 605, 607, and 609, or based on computed distances between the user-selected section 601 and the candidate sections A, B, and C.
A second comparison can be performed based at least in part on a vector 611 of a second resolution level (e.g., at the 1/32 resolution). The second vector 611 for comparison can be a vector representing a second section of the graph that is included in the user-selected section 601. The second vector 611 may be the next largest vector that can fit within the user-selected section 601 and/or that is adjacent to the first vector 603. As shown, vector 611 represents a section of the graph after the vector 603, so vector 611 can be compared to vectors 613, 615, and 617 that represent sections of the graph after vectors 605, 607, and 609, respectively. The comparisons can be direct comparisons or facilitated by the index.
The second comparison can be used to confirm or re-rank the initial candidate sections. For example, the graph section represented by vector 617 (a decreasing section of the graph 619) may not look sufficiently close or be mathematically close to the section of the graph represented by vector 611 (a relatively flat, average section of the graph 619). Accordingly, an accurate index (e.g., the index in database 125 of
The comparisons can be performed additional times, such as for vectors representing even more granular resolution levels of time, for vectors representing sections of the graph 619 preceding the section represented by vector 603, and/or for vectors representing sections of the graph 619 after the section represented by vector 613.
After a number of comparisons of vectors (which can include comparisons of the starting point vector 603, comparisons of smaller or neighboring vector 611, further comparisons of smaller or neighboring sections, and/or comparisons of phase-shifted versions of the vectors), a list of final candidate vectors can be determined. The final candidate sections can look visually similar to the user-selected section 601. The final candidate vectors can represent sections of the graph that can be more visually similar to the user-selected section 601 than other sections of the graph represented by vectors that are not included in the final candidate vectors.
Although the example second comparison is shown to be one resolution level below the first comparison, the second comparison can be at a different resolution level (e.g., two or more resolution levels below the first comparison). The second comparison can also be performed for a vector representing a section of the graph 619 preceding the section represented by vector 603. Although both of the vectors 603 and 611 represent sections wholly included in the user-selected section 601, in some embodiments, the vectors can represent sections partially included in the user-selected section 601.
The comparison of vector 611 to vectors 613, 615, and 617 can be performed with reference to an index (e.g., the index in database 125 of
Although
In some embodiments, a larger number of initial candidates can be selected based on the vector at the first resolution level, and the number of initial candidates can be subsequently refined to determine smaller numbers of candidates based on increasingly strict similarity criteria. For example, an initial search can be performed for vectors similar to the starting point first vector 603. The first vector 603 can include [ymin, ymax, C0, C1, C2, . . . , Cm]. The first comparison may find other vectors having a similar range of ymin to ymax (e.g., determine which other vectors have a ymin to ymax range within a threshold distance from the ymin to ymax range of the first vector 603) or find other vectors having a similar C0 term (e.g., determine which other vectors have a C0 term within a threshold distance from the C0 term of the first vector 603), which is indicative of an average value. The other coefficient terms may play a lesser role or no role. When vector 611 is compared to vectors 613, 615, and 617 to narrow down the initial candidate sections, the other coefficients (e.g., C1, C2, . . . , Cm) can be used or weighted more heavily in comparisons.
The comparisons can be performed any number of times based on additional vectors (e.g., the vectors at different resolution levels or vectors that represent different sections of time) to confirm, refine, and/or re-rank the candidate sections.
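This coarse-to-fine refinement can be sketched as follows. The sketch is illustrative only: the index object follows the k-d tree query API from the earlier sketch, adjacent_fine_vector is a hypothetical lookup from a coarse candidate to the finer vector next to it, the candidate counts are arbitrary, and vector_distance is from the earlier sketch:

```python
def refine_candidates(query_coarse, query_fine, coarse_index,
                      adjacent_fine_vector, initial_k=50, final_k=5):
    """Two-stage search: a generous nearest-neighbor pass at the coarse
    resolution, then re-ranking of each candidate by comparing the
    finer-resolution vector adjacent to it."""
    _, candidate_ids = coarse_index.query(query_coarse, k=initial_k)
    rescored = sorted(
        (vector_distance(query_fine, adjacent_fine_vector(cid)), cid)
        for cid in candidate_ids)
    return [cid for _, cid in rescored[:final_k]]
```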
The comparisons can also be repeated for phase-shifted vectors selected at different offsets 621, 623 as another way of preserving phase information. The offsets can be at different offset amounts from a beginning or end of the user selected section 601. The offsets can be any amount and may not exactly align with any division of the time series data represented by any of the vectors 602. At each offset, a largest vector to the left and/or right of the offset that is still included in or fits in the user selected section 601 can be selected as a vector for comparison.
For example, a size ⅛ vector could fit within the user-selected section 601. To perform a comparison against other vectors of size ⅛, a vector (not shown) can be computed, where the computed vector represents a portion of the selected section 601 beginning at no phase offset from the beginning of the selected section 601 and extending for ⅛ of the time series data. The computed vector can then be compared to other ⅛ vectors as described above.
At phase offset 621, a size 1/16 vector is a largest vector that could extend from phase offset 621 and still fit within the user selected section 601. To perform a comparison against vectors of size 1/16, another vector (not shown) can be computed, where the computed vector represents a portion of the selected section 601 beginning at phase offset 621 and extending for 1/16 of the time series data. The computed vector can then be compared to other vectors representing 1/16 sections of the data as described above.
At phase offset 623, a size 1/16 vector is still the largest vector that could extend from phase offset 623 and still fit within the user selected section 601. To perform a comparison against vectors of size 1/16, another vector (not shown) can be computed, where the computed vector represents a portion of the selected section 601 beginning at phase offset 623 and extending for 1/16 of the time series data. The computed vector can then be compared to other vectors representing 1/16 sections of the data as described above. Accordingly, vectors representing different sizes of the user selected section 601 can be computed at a plurality of phase offsets, wherein each computed vector is a largest size that extends from the phase offset and still fits within the user selected section 601, and the computed vectors can be used for comparison against respectively sized vectors 602.
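The size selection at each offset can be sketched as a simple search over the pre-computed resolution levels; this is an illustrative helper, and the level denominators are example values:

```python
def largest_fitting_denominator(selection_len, offset, series_len,
                                levels=(2, 4, 8, 16, 32, 64)):
    """Return the coarsest pre-computed vector size (expressed as a
    denominator of the full series length) that still fits inside the
    user selection when starting at the given offset."""
    remaining = selection_len - offset
    for denom in levels:                 # coarsest (widest) level first
        if series_len // denom <= remaining:
            return denom
    return None                          # selection too short for any level

# e.g., a 1/8-width vector may fit at offset 0, while only a 1/16-width
# vector fits once the offset consumes part of the selection.
```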
Computation of Final Distances
As discussed above, a number of final candidate sections can be determined. Each candidate section can include a section of the graph 619 represented by at least one vector. For example, a first candidate section can be a section of the graph represented by at least vector 607 and vector 615, and a second candidate section can be a section of the graph represented by at least vector 605 and vector 613. The final candidate sections can additionally or alternatively include candidates represented by phase-shifted vectors. Final candidate sections of the graph 619 represented by phase-shifted vectors may overlap, at least partially, with other candidate sections.
The number of final candidate sections can be relatively small compared to the total number of possible options. In some embodiments, the number of final candidate sections can be tens, hundreds, or thousands instead of millions, billions, trillions or more. In some embodiments, the number of final candidate sections can be at least one, two, three, four, or five orders of magnitude fewer than the total number of possible options.
In view of the relatively smaller number of final candidate sections, the data included in the final candidate sections can be read from a database (e.g., the first database 123 shown in
There can be difference measurements (d1, d2, . . . , dn) taken between S1 and S2 at various intervals. The smaller the interval, the more accurate the distance calculation can be. The larger the interval, the faster the computation can be. The Euclidean distance formula, shown in
A direct distance calculation, such as the Euclidean distance, Mahalanobis distance, Minkowski distance, Jaccard similarity, cosine similarity, etc., can be performed on all of the final candidate sections. In various embodiments, any statistical metric can be used. The final candidates can be ranked based at least in part on the distance calculation, and the smallest distances can indicate the closest-looking graph sections.
The final candidates can also be confirmed. For example, if a distance calculation for one of the final candidates does not satisfy a threshold distance, then that final candidate can be removed.
Any final candidates representing overlapping sections of a graph can also be de-duplicated. For example, a first section of a graph and a second section of a graph may be included as final candidates, where the second section partially overlaps with the first section. This can occur, for example, if a first vector and a phase-shifted version of the first vector both represent graph sections that are similar to a user-selected section of a graph. Whichever among the first section and the second section that has a closer distance calculation can be kept among the final candidates, and the other can be “de-duplicated” or removed from the list of final candidates.
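The ranking, threshold confirmation, and de-duplication steps can be combined, as in this illustrative sketch; the overlap rule and the distance threshold are hypothetical simplifications:

```python
import numpy as np

def euclidean(a, b):
    return float(np.sqrt(np.sum((np.asarray(a) - np.asarray(b)) ** 2)))

def finalize_candidates(query, sections, max_distance, overlap_fraction=0.5):
    """sections: list of (start, values) pairs for the final candidates.
    Rank by direct distance, drop candidates beyond a threshold, then
    de-duplicate overlapping sections, keeping the closer match."""
    scored = sorted((euclidean(query, values), start)
                    for start, values in sections)
    window = len(query) * overlap_fraction
    kept = []
    for dist, start in scored:
        if dist > max_distance:
            break                        # remaining candidates are farther
        if all(abs(start - s) >= window for _, s in kept):
            kept.append((dist, start))   # not a duplicate of a closer match
    return kept
```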
The final candidate sections can then be listed, displayed, or otherwise indicated in a user interface, for example by highlighting them in the displayed graph.
Faster Responses with Vector Reconstruction
As mentioned above, in some embodiments, the sections of data included in the final candidate sections can be read from a database (e.g., the first database 123) for the final distance computations.
For people, looking up the answer to a math computation is normally the fastest way to determine the answer (instead of performing the computation). With a computer, however, a counter-intuitive result can occur: it can be faster to perform a computation to determine the candidate section data, by performing an inverse mathematical transform of the vector data, than to read the original series data for the candidate sections from a database. There can be a latency cost for reading the sections of data included in the final candidate sections from a database (e.g., the first database 123).
As a faster alternative to reading the information from the first database, vector data can be read from memory (such as reading the S2 vectors cached in the memory 119), and the data for the candidate sections can be approximately reconstructed from those vectors.
For example, if a candidate section is represented by a vector of Chebyshev coefficients, the Chebyshev expression can be evaluated using the cached coefficients (and the normalization index) to approximately reconstruct the original section of data for the final distance computation.
Accuracy
As mentioned above, some of the techniques disclosed herein, including techniques for creating search indexes and vector-reconstructed approximations of original data, may not be completely accurate. However, in view of the repeated number of comparisons performed (e.g., direct distance comparisons, index-assisted comparisons, comparisons for vectors at different resolution levels, and repeated comparisons and computations for phase-shifted versions of vectors), any errors will likely be caught at one of the many stages, or will at least be sufficiently uncommon that visually satisfactory results can be provided to users.
Example Flowchart
At block 801, data is stored into a first database (e.g., the first database 123).
At block 803, pre-computation and indexing can be performed. In some embodiments, the pre-computation and indexing can be performed before receiving a user request to perform a search. In some embodiments, the pre-computation and indexing can be performed on the data stored in the first database before new streaming data is received.
At block 805, different vectors representing transforms of different sections of the data can be determined. The vectors can include a normalization index. The vectors can include terms (such as coefficients) generated based at least in part on a mathematical transform. In some embodiments, each vector can represent a continuous section of the data. The vectors can represent the data at different resolutions. The vectors can also represent the data starting at different phase shifts.
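A hypothetical realization of this pre-computation step follows; the Chebyshev transform, the power-of-two resolutions, and the [ymin, ymax, c0, ..., cn] vector layout are assumptions of this sketch rather than requirements of the description above. Phase-shifted vectors could be generated by also iterating over shifted start positions.

import numpy as np
from numpy.polynomial import chebyshev

def precompute_vectors(series, resolutions=(2, 4, 8, 16), n_coeffs=8):
    """Pre-compute transform vectors for contiguous sections of the data.

    For each resolution r, the series is split into sections of length
    len(series)//r; each vector stores a normalization index (ymin, ymax)
    followed by Chebyshev coefficients of the normalized section."""
    vectors = {}
    for r in resolutions:
        size = len(series) // r
        x = np.linspace(-1.0, 1.0, size)
        for start in range(0, len(series) - size + 1, size):
            window = series[start:start + size]
            ymin, ymax = float(window.min()), float(window.max())
            scale = (ymax - ymin) or 1.0          # guard flat sections
            normalized = 2.0 * (window - ymin) / scale - 1.0
            coeffs = chebyshev.chebfit(x, normalized, n_coeffs - 1)
            vectors[(r, start)] = np.concatenate(([ymin, ymax], coeffs))
    return vectors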
At block 807, the vectors can be stored in a database, such as the second database 125.
At block 809, indexes that facilitate vector comparisons can be computed and stored. The indexes can be any format, such as a table, tree, or other data structure. An index can, for example, be used to facilitate a search or similarity comparison. The indexes can improve the processing speed of computations performed in blocks 819 and 827.
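For example, a k-d tree could serve as such an index. This sketch assumes the `vectors` dictionary from the previous example and an already-computed `query_vector`, and anticipates the 3NN lookup discussed at blocks 819 and 821 below:

import numpy as np
from scipy.spatial import cKDTree

# Stack the pre-computed vectors of one size into a matrix and build a
# k-d tree over them; the tree serves as the comparison index.
keys = [k for k in vectors if k[0] == 16]            # all size-1/16 vectors
matrix = np.vstack([vectors[k] for k in keys])
index = cKDTree(matrix)

# Later (blocks 819/821): a 3NN query against the index returns the three
# stored vectors closest to the query vector, i.e., the candidate sections.
distances, rows = index.query(query_vector, k=3)
candidates = [keys[r] for r in np.atleast_1d(rows)]  # (resolution, start) pairs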
At block 811, the data can be transmitted for display as a graph. In some embodiments, the data can be read from the first database 123 and transmitted to a user system 107 for display in an interactive graphical user interface.
At block 813, a user can select a section of a graph. For example, a user of the user system 107 can select the section 601 of the graph discussed above.
At block 815, vectors from the second database (e.g., the second database 125) can be read and cached in memory (e.g., the memory 119).
At block 817, a first vector representing at least a portion of the user-selected section can be determined and used as a first vector for comparison. For example, the first vector can be a vector representing the largest section that begins at the start of the user-selected section and still fits within it.
At block 819, a comparison of the first vector to other vectors can be performed. In some embodiments, the comparison can be a direct distance comparison of two vectors, such as determining the Euclidean distance or another statistical metric between two vectors. In some embodiments, the comparison can be inherently performed by referencing an index that indicates results or rankings of comparisons.
At block 821, as a result of the comparison, candidate sections can be found. For example, if a 3NN search technique is used, the three stored vectors nearest to the first vector can be identified, and the sections of data represented by those vectors can be the candidate sections.
In some embodiments, the candidate vectors and candidate sections determined at block 821 are final candidate sections or final candidate vectors, and block 821 can proceed to block 843. In some embodiments, the candidate vectors and candidate sections are initial candidates that can be further processed and refined in blocks 823 and/or 831.
At block 823, the candidate sections can be refined, and the process can be repeated. In some embodiments, block 823 can be repeated for different vectors (e.g., phase-shifted vectors, finer resolution vectors, and neighboring vectors) that represent graph sections at least partially included in a user-selected section, and the resulting candidate sections can be further processed as shown in block 831 or displayed as results in block 843. In some embodiments, blocks 823 and 831 can be performed together, and then blocks 823 and 831 can be repeated together for different vectors.
At block 825, a new vector representing a different portion of the user-selected section can be determined. In some embodiments, the new vector can be a phase-shifted vector. For example, the new vector can represent a portion of the user-selected section beginning at a phase offset, such as the phase offset 623 discussed above.
At block 827, comparisons of the new vector against other vectors can be performed. For example, the new vector can be compared against other similarly sized vectors, either directly or by referencing an index, as described with respect to block 819.
At block 829, the candidate sections can be confirmed, added to, and/or re-ranked.
As an example of confirming candidates, an initial candidate section determined at block 821 can be kept as a candidate when a comparison of the new vector at block 827 indicates that a corresponding portion of that candidate section is also similar to the user-selected section.
As an example of re-ranking, initial candidate sections determined at block 821 can be re-ordered when the comparisons at block 827 indicate that a lower-ranked candidate section is more similar to the user-selected section than a higher-ranked candidate section.
As an example of adding candidates, three initial candidate sections can be determined at block 821 based on a comparison of a first vector. At block 827, a new vector (different from the first vector) is compared against other vectors, and a fourth vector is determined to be very similar to the new vector. A fourth candidate section that includes a section represented by the fourth vector is added to a list of candidate sections, which can include the three initial candidate sections and the fourth candidate section.
At block 831, direct distance computations can be performed. Although direct computations may use relatively more processing power, the computations can be performed on a relatively small number of candidate sections. Accordingly, a processor can still responsively perform the computations. In some embodiments, the processor can still perform the computations as fast as streaming data is received. To perform a distance computation, the data can be read from a database (e.g., in block 837) or (approximately) reconstructed from the cached vector data in memory (e.g., in blocks 833 and 835). Reading cached vector data from memory can be faster than reading the original data from the first database.
At block 837, original data can be read from the first database. The original data can be the data included in the candidate sections.
At block 833, cached vectors can be read from memory. The memory can be, for example, random access memory, L1 cache memory on a processor die, L2 cache memory on a processor die, etc. The vectors that are read from memory can be the vectors that represent sections of data that are included in the candidate sections.
At block 835, reverse-transform data can be calculated based at least in part on the cached vectors read from memory. For example, if a Chebyshev transform was used to generate a vector, the Chebyshev expression can be evaluated by multiplying the coefficients with the Chebyshev terms (and scaling and shifting based on the normalization index) to generate the reverse-transform data. As another example, if a Fourier transform was used to generate a vector, then an inverse Fourier transform can be used to generate the reverse-transform data.
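Continuing the earlier Chebyshev-based sketch (the [ymin, ymax, c0, ..., cn] vector layout is an assumption of these examples, not a requirement of the disclosure), block 835 could look like:

import numpy as np
from numpy.polynomial import chebyshev

def reconstruct_section(vector, size):
    """Approximately reconstruct a section from its cached vector.

    Evaluating the Chebyshev expression and undoing the normalization
    yields the reverse-transform data without touching the database."""
    ymin, ymax, coeffs = vector[0], vector[1], vector[2:]
    x = np.linspace(-1.0, 1.0, size)
    normalized = chebyshev.chebval(x, coeffs)        # evaluate Chebyshev terms
    scale = (ymax - ymin) or 1.0
    return (normalized + 1.0) / 2.0 * scale + ymin   # shift and scale back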
Accordingly, either block 837 or blocks 833 and 835 can be used to obtain the data for the candidate sections.
At block 839, the distances between the candidate sections and the user-selected section can be computed. A Euclidean distance calculation, such as the example discussed above, can be used.
At block 841, the final candidate sections can be confirmed, re-ranked, and/or de-duplicated based at least in part on the calculated distances. If any final candidate sections overlap (for example, if both a first section and a slightly phase-shifted version of the first section are similar to the user-selected section), then the overlapping candidate section that is closest to the user-selected section can remain in the list of final candidate sections while the farther overlapping candidate section is removed from the list of final candidate sections.
At block 843, indications of the final candidate sections can be transmitted to a user for display. For example, the final candidate sections can be listed or highlighted in the interactive graphical user interface on the user system 107.
Multi-Graph Searching
The techniques disclosed herein can be used to search for other sections of a graph that look similar to a user-selected section of the graph. The techniques disclosed herein can also be used to search for other sections of different graphs that look similar to a user-selected section of a first graph.
The techniques disclosed herein can further be used to search for a pair (or triplet, or other plurality) of sections of graphs that look similar to user-selected sections of a pair (or triplet, or other plurality) of graphs. For example, an electrician may want to automatically monitor a circuit that previously experienced problems shortly after a voltage reading showed a first waveform and a current reading simultaneously showed a second waveform. However, the circuit did not experience problems when only one, and not both, of the voltage and current readings showed the first or second waveform, respectively. The electrician may set up streaming sensors to monitor the voltage and current of the device and desire to be alerted when both the voltage and current of the circuit show patterns similar to the first and second waveforms, respectively.
A user can add a plurality of graphs plotting one or more data series to the user interface 900. Then, the user can select a plurality of sections (such as sections 905, 907) in the plurality of graphs. In the example shown, the graphs are of time series data, and the selected sections cover the same time domain from about 4:22 to 4:48. In various embodiments, the x-axis domain can be the same or different for the plurality of graphs.
A menu 913 can be used to select pairs of graphs to search for pairs of sections similar to the selected pair of sections 905, 907. A first pair of dropdown menus labeled with “1” allows a user to select among data series stored in a database, such as the first database 123.
The user can then click the button 915 to search the selected pairs of data series (or category pairs of data series, or data series within the date ranges, etc.). In response to the user search request, the techniques disclosed above can be extended to search a plurality of graphs. For example, candidate sections of a first graph can be found that match the user-selected section 905, and candidate sections of a second graph can be found that match the user-selected section 907, and the results can include candidate sections common to both the first graph and the second graph.
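A simple way to combine the per-graph results is sketched below, assuming each per-graph search returns (start, end) x-axis ranges; the overlap rule and the tolerance parameter are illustrative choices, not details from this disclosure.

def common_candidates(cands_a, cands_b, tolerance=0):
    """Keep candidate ranges found in *both* graphs.

    `cands_a` / `cands_b` are lists of (start, end) ranges from the
    per-graph searches; a pair matches when the ranges overlap (within an
    optional tolerance), mirroring the voltage/current example above."""
    results = []
    for a_start, a_end in cands_a:
        for b_start, b_end in cands_b:
            if a_start <= b_end + tolerance and b_start <= a_end + tolerance:
                results.append(((a_start, a_end), (b_start, b_end)))
    return results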
In some embodiments, when performing searches on more than one graph at a time, each graph can be individually analyzed against a respective user-selected section using the techniques described above, and the individual results can then be combined.
The user can also set alerts using the alert settings menu 917. For example, at a power plant, the menu 913 can be used to select a first data stream for voltage and a second data stream for current. The voltage and current sensors may sample data at hundreds or thousands of readings per second, or at speeds too fast for a computer to keep up with when performing direct comparisons without applying the teachings disclosed herein. Using the techniques disclosed above, the processor can process and compare the streaming voltage and current data to the user-selected sections 905 and 907 while keeping up with the rate of the data stream in a computationally efficient manner. If sections that look similar to the user-selected sections 905, 907 are detected together, then an alert can be generated to warn the user, or other actions can be taken.
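A bare-bones monitoring loop along these lines might look as follows; it reuses the hypothetical `euclidean_distance` helper from above and, for brevity, skips the vector/index pre-filtering that would normally run before any direct distance computation.

from collections import deque
import numpy as np

def monitor(stream_v, stream_i, template_v, template_i, threshold):
    """Yield the sample index whenever the latest voltage AND current
    windows both look similar to the user-selected sections (905, 907)."""
    win_v = deque(maxlen=len(template_v))
    win_i = deque(maxlen=len(template_i))
    for n, (v, i) in enumerate(zip(stream_v, stream_i)):
        win_v.append(v)
        win_i.append(i)
        if len(win_v) == win_v.maxlen and len(win_i) == win_i.maxlen:
            close_v = euclidean_distance(np.asarray(win_v), template_v) < threshold
            close_i = euclidean_distance(np.asarray(win_i), template_i) < threshold
            if close_v and close_i:      # both waveforms match together
                yield n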
In some embodiments, streamed data can be cached in a temporary database (not shown) before being stored into the first database and processed as described above.
In some embodiments, the techniques discussed herein can be used, after receiving a user-selected section of a graph, to search about 4 million data points and find five similar-looking sections in less than two seconds. In various embodiments, a user can select to perform a search for similar-looking sections that are the closest in time to a user-selected section. In response to such a search option, candidate sections can be ranked based at least in part on a time difference (or x-axis difference) between the candidate sections and the user-selected section.
In some embodiments, the user can select other search options. For example, if a user is more interested in the shape of a graph rather than its values, the user can select a shape-focused search option. In processing such a search request, comparisons of vector terms such as ymin, ymax, and C0 can be given reduced weighting or ignored completely.
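Under the vector layout assumed in the earlier sketches ([ymin, ymax, c0, c1, ...]), a shape-only comparison could simply skip the leading terms:

import numpy as np

def shape_distance(vec_a, vec_b, ignore=3):
    """Compare vectors by shape only, as in the search option above.

    Dropping the first three terms (the normalization index and the
    constant Chebyshev term C0) makes the comparison insensitive to the
    vertical offset and scale of the underlying sections."""
    return float(np.linalg.norm(np.asarray(vec_a)[ignore:] - np.asarray(vec_b)[ignore:]))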
In some embodiments, a user can search for various numbers of results, such as searching for 5, 10, or 100 visually similar graph sections. The searches can be handled using different criteria by the back-end system. For example, in a single graph, to find the five closest-looking sections, the candidates can be refined (e.g., at block 829 or 841) to drop candidates that do not fall within minimum similarity thresholds. However, when searching a single graph to find the 100 closest-looking sections, there may not be enough remaining final candidates if the minimum similarity thresholds are applied at blocks 829 and 841. Accordingly, confirming that candidate sections fall within minimum similarity thresholds can be skipped. In some embodiments, various blocks in the flowchart can be skipped, repeated, or adjusted based at least in part on the search options.
In some embodiments, a search can sort the search results in order of the most to least similar, or in order from closest in time (or x-axis variable) to the user-selected section to farthest.
In some embodiments, the graphs shown in the user interface can be updated as new data is received.
In some embodiments, data can be repeatedly received. For example, streaming data can be repeatedly received from sensors, stored, and processed as described above.
A first time series data can be compared against a second time series data that has data taken at different frequencies (e.g., 2 times per second and 1 time per second) by adjusting one of the frequencies. In some embodiments, data values for the slower-sampled data series can be interpolated to the higher frequency. In some embodiments, the faster-sampled data series can be resampled at the slower frequency. In some embodiments, the techniques described for time series data can be applied to a series of any other type of data, such as any X, Y coordinate data.
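A minimal sketch of both adjustment strategies, assuming each series comes with its own timestamp array (the function name and parameters are illustrative); `np.interp` performs the linear interpolation/resampling:

import numpy as np

def align_frequencies(t_fast, y_fast, t_slow, y_slow, use_slower=True):
    """Put two series sampled at different rates on a common time base.

    Either resample the faster series onto the slower timestamps, or
    interpolate the slower series up to the faster timestamps, matching
    the two approaches described above."""
    if use_slower:
        return np.interp(t_slow, t_fast, y_fast), y_slow   # downsample fast series
    return y_fast, np.interp(t_fast, t_slow, y_slow)       # upsample slow series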
In an implementation, the data center 109 (or one or more aspects of the data center 109) may comprise, or be implemented in, a “virtual computing environment”. As used herein, the term “virtual computing environment” should be construed broadly to include, for example, computer readable program instructions executed by one or more processors (e.g., one or more processors of the example computer system 1000 described below) to implement one or more aspects of the modules and/or functionality described herein.
Implementing one or more aspects of the data center 109 as a virtual computing environment may advantageously enable executing different aspects or modules of the system on different computing devices or processors, which may increase the scalability of the system. Implementing one or more aspects of the data center 109 as a virtual computing environment may further advantageously enable sandboxing various aspects, data, or modules of the system from one another, which may increase security of the system by preventing, e.g., malicious intrusion into the system from spreading. Implementing one or more aspects of the data center 109 as a virtual computing environment may further advantageously enable parallel execution of various aspects or modules of the system, which may increase the scalability of the system. Implementing one or more aspects of the data center 109 as a virtual computing environment may further advantageously enable rapid provisioning (or de-provisioning) of computing resources to the system, which may increase scalability of the system by, e.g., expanding computing resources available to the system or duplicating operation of the system on multiple computing resources. For example, the system may be used by thousands, hundreds of thousands, or even millions of users simultaneously, and many megabytes, gigabytes, or terabytes (or more) of data may be transferred or processed by the system, and scalability of the system may enable such operation in an efficient and/or uninterrupted manner.
Various embodiments of the present disclosure may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or mediums) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
For example, the functionality described herein may be performed as software instructions are executed by, and/or in response to software instructions being executed by, one or more hardware processors and/or any other suitable computing devices. The one or more hardware processors can include one or more processors in a computing device, one or more processors distributed in plurality of computing devices, and/or one or more processors in cloud computing devices. The software instructions and/or other executable code may be read from a computer readable storage medium (or mediums).
The computer readable storage medium can be a tangible device that can retain and store data and/or instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device (including any volatile and/or non-volatile electronic storage devices), a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a solid state drive, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions (also referred to herein as, for example, “code,” “instructions,” “module,” “application,” “software application,” and/or the like) for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. Computer readable program instructions may be callable from other instructions or from themselves, and/or may be invoked in response to detected events or interrupts. Computer readable program instructions configured for execution on computing devices may be provided on a computer readable storage medium, and/or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution) that may then be stored on a computer readable storage medium. Such computer readable program instructions may be stored, partially or fully, on a memory device (e.g., a computer readable storage medium) of the executing computing device, for execution by the computing device. The computer readable program instructions may execute entirely on a user's computer (e.g., the executing computing device), partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown, or discussed, including substantially concurrently or in reverse order, depending on the functionality involved as would be understood by those skilled in the art.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart(s) and/or block diagram(s) block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer may load the instructions and/or modules into its dynamic memory and send the instructions over a telephone, cable, or optical line using a modem. A modem local to a server computing system may receive the data on the telephone/cable/optical line and use a converter device including the appropriate circuitry to place the data on a bus. The bus may carry the data to a memory, from which a processor may retrieve and execute the instructions. The instructions received by the memory may optionally be stored on a storage device (e.g., a solid state drive) either before or after execution by the computer processor.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In addition, certain blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate.
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. For example, any of the processes, methods, algorithms, elements, blocks, applications, or other functionality (or portions of functionality) described in the preceding sections may be embodied in, and/or fully or partially automated via, electronic hardware such as application-specific processors (e.g., application-specific integrated circuits (ASICs)), programmable processors (e.g., field programmable gate arrays (FPGAs)), application-specific circuitry, and/or the like (any of which may also combine custom hard-wired logic, logic circuits, ASICs, FPGAs, etc. with custom programming/execution of software instructions to accomplish the techniques).
Any of the above-mentioned processors, and/or devices incorporating any of the above-mentioned processors, may be referred to herein as, for example, “computers,” “computer devices,” “computing devices,” “hardware computing devices,” “hardware processors,” “processing units,” and/or the like. Computing devices of the above-embodiments may generally (but not necessarily) be controlled and/or coordinated by operating system software, such as Mac OS, iOS, Android, Chrome OS, Windows OS (e.g., Windows XP, Windows Vista, Windows 7, Windows 8, Windows 10, Windows Server, etc.), Windows CE, Unix, Linux, SunOS, Solaris, Blackberry OS, VxWorks, or other suitable operating systems. In other embodiments, the computing devices may be controlled by a proprietary operating system. Conventional operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide user interface functionality, such as a graphical user interface (“GUI”), among other things.
For example, the computer system 1000 can include a bus 1002 or other communication mechanism for communicating information, and one or more hardware processors 1004 coupled with the bus 1002 for processing information.
The computer system 1000 also includes a main memory 1006, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus 1002 for storing information and instructions to be executed by processor 1004. Main memory 1006 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1004. Such instructions, when stored in storage media accessible to processor 1004, render computer system 1000 into a special-purpose machine that is customized to perform the operations specified in the instructions.
The computer system 1000 further includes a read only memory (ROM) 1008 or other static storage device coupled to bus 1002 for storing static information and instructions for processor 1004. A storage device 1010, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 1002 for storing information and instructions.
The computer system 1000 may be coupled via bus 1002 to a display 1012, such as a cathode ray tube (CRT) or LCD display (or touch screen), for displaying information to a computer user. An input device 1014, including alphanumeric and other keys, is coupled to bus 1002 for communicating information and command selections to processor 1004. Another type of user input device is cursor control 1016, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 1004 and for controlling cursor movement on display 1012. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.
The computer system 1000 may include a user interface module to implement a GUI that may be stored in a mass storage device as computer executable program instructions that are executed by the computing device(s). Computer system 1000 may further, as described below, implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 1000 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 1000 in response to processor(s) 1004 executing one or more sequences of one or more computer readable program instructions contained in main memory 1006. Such instructions may be read into main memory 1006 from another storage medium, such as storage device 1010. Execution of the sequences of instructions contained in main memory 1006 causes processor(s) 1004 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
Various forms of computer readable storage media may be involved in carrying one or more sequences of one or more computer readable program instructions to processor 1004 for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 1000 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 1002. Bus 1002 carries the data to main memory 1006, from which processor 1004 retrieves and executes the instructions. The instructions received by main memory 1006 may optionally be stored on storage device 1010 either before or after execution by processor 1004.
The computer system 1000 also includes a communication interface 1018 coupled to bus 1002. Communication interface 1018 provides a two-way data communication coupling to a network link 1020 that is connected to a local network 1022. For example, communication interface 1018 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 1018 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, communication interface 1018 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
The network link 1020 typically provides data communication through one or more networks to other data devices. For example, network link 1020 may provide a connection through local network 1022 to a host computer 1024 or to data equipment operated by an Internet Service Provider (ISP) 1026. ISP 1026 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 1028. Local network 1022 and Internet 1028 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 1020 and through communication interface 1018, which carry the digital data to and from computer system 1000, are example forms of transmission media.
The computer system 1000 can send messages and receive data, including program code, through the network(s), network link 1020 and communication interface 1018. In the Internet example, a server 1030 might transmit a requested code for an application program through Internet 1028, ISP 1026, local network 1022 and communication interface 1018.
The received code may be executed by processor 1004 as it is received, and/or stored in storage device 1010, or other non-volatile storage for later execution.
As described above, in various embodiments certain functionality may be accessible by a user through a web-based viewer (such as a web browser) or other suitable software program. In such implementations, the user interface may be generated by a server computing system and transmitted to a web browser of the user (e.g., running on the user's computing system). Alternatively, data (e.g., user interface data) necessary for generating the user interface may be provided by the server computing system to the browser, where the user interface may be generated (e.g., the user interface data may be executed by a browser accessing a web service and may be configured to render the user interfaces based on the user interface data). The user may then interact with the user interface through the web browser. User interfaces of certain implementations may be accessible through one or more dedicated software applications. In certain embodiments, one or more of the computing devices and/or systems of the disclosure may include mobile computing devices, and user interfaces may be accessible through such mobile computing devices (for example, smartphones and/or tablets).
Many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure. The foregoing description details certain embodiments. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the systems and methods can be practiced in many ways. As is also stated above, it should be noted that the use of particular terminology when describing certain features or aspects of the systems and methods should not be taken to imply that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the systems and methods with which that terminology is associated.
Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
The term “substantially” when used in conjunction with the term “real-time” forms a phrase that will be readily understood by a person of ordinary skill in the art. For example, it is readily understood that such language will include speeds in which no or little delay or waiting is discernible, or where such delay is sufficiently short so as not to be disruptive, irritating, or otherwise vexing to a user.
Conjunctive language such as the phrase “at least one of X, Y, and Z,” or “at least one of X, Y, or Z,” unless specifically stated otherwise, is to be understood with the context as used in general to convey that an item, term, etc. may be either X, Y, or Z, or a combination thereof. For example, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present.
The term “a” as used herein should be given an inclusive rather than exclusive interpretation. For example, unless specifically noted, the term “a” should not be understood to mean “exactly one” or “one and only one”; instead, the term “a” means “one or more” or “at least one,” whether used in the claims or elsewhere in the specification and regardless of uses of quantifiers such as “at least one,” “one or more,” or “a plurality” elsewhere in the claims or specification.
The term “comprising” as used herein should be given an inclusive rather than exclusive interpretation. For example, a general purpose computer comprising one or more processors should not be interpreted as excluding other computer components, and may possibly include such components as memory, input/output devices, and/or network interfaces, among others.
While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it may be understood that various omissions, substitutions, and changes in the form and details of the devices or processes illustrated may be made without departing from the spirit of the disclosure. As may be recognized, certain embodiments of the inventions described herein may be embodied within a form that does not provide all of the features and benefits set forth herein, as some features may be used or practiced separately from others. The scope of certain inventions disclosed herein is indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.