Systems and methods are disclosed for displaying data on a display device. An example method of displaying data on a display device includes computing a texture based on a difference between a high quality (HQ) tile and a corresponding low quality (LQ) tile. The method also includes storing the texture into an alpha channel of the LQ tile. The method further includes compositing the LQ tile onto the display device when an attribute of the alpha channel satisfies a threshold.
1. A method of displaying data on a display device, comprising:
computing a difference between an average intensity of a plurality of pixels in a high quality (HQ) version of a first tile and an average intensity of one or more pixels in a corresponding low quality (LQ) version of the first tile, the first tile having an alpha channel;
storing the difference into an attribute of the alpha channel; and
compositing the LQ version of the first tile onto the display device when the attribute of the alpha channel satisfies a threshold.
18. A non-transitory computer-readable medium having stored thereon computer-executable instructions for performing operations, comprising:
computing an average of differences between a plurality of pixels in a high quality (HQ) version of a first tile and one or more pixels in a corresponding low quality (LQ) version of the first tile, the first tile having an alpha channel;
storing the average of differences into an attribute of the alpha channel; and
compositing the LQ version of the first tile onto a display device when the attribute of the alpha channel satisfies a threshold.
10. A system for displaying data on a display device, comprising:
a display device;
a memory; and
one or more processors coupled to the memory and the display device, wherein the one or more processors are configured to:
compute an average of differences between a plurality of pixels in a high quality (HQ) version of a first tile and one or more pixels in a corresponding low quality (LQ) version of the first tile, the first tile having an alpha channel;
store the average of differences into an attribute of the alpha channel; and
composite the LQ version of the first tile onto the display device when the attribute of the alpha channel satisfies a threshold.
2. The method of
compositing the HQ version of the first tile onto the display device when the attribute of the alpha channel does not satisfy the threshold.
3. The method of
4. The method of
computing a first texture based on the difference;
computing a second texture based on a difference between the plurality of pixels in the HQ version of the first tile and the one or more pixels in the LQ version of the first tile, each pixel in the second texture having a single scalar value representing a difference in pixel intensity between the HQ version of the first tile and the corresponding LQ version of the first tile; and
down-sampling the second texture to a resolution of the LQ version of the first tile, wherein the computed first texture is the down-sampled second texture.
5. The method of
6. The method of
computing a first texture based on the difference, the first texture including a single scalar value per pixel in the LQ version of the first tile.
7. The method of
8. The method of
9. The method of
11. The system of
12. The system of
13. The system of
14. The system of
15. The system of
16. The system of
17. The system of
19. The non-transitory computer-readable medium of
compositing the HQ version of the first tile onto the display device when the attribute of the alpha channel does not satisfy the threshold.
20. The non-transitory computer-readable medium of
compositing the HQ version of the first tile onto the display device when the attribute of the alpha channel does not satisfy the threshold.
The present disclosure generally relates to computing systems, and more particularly to rendering content in a graphics processing system.
Computing devices may be equipped with one or more high-performance graphics processing units (GPUs) that provide high performance for computations and graphics rendering. Computing devices may use a GPU to accelerate the rendering of graphics data for display. Examples of such computing devices may include computer workstations, mobile phones (e.g., smartphones), embedded systems, personal computers, tablet computers, and video game consoles.
Rendering generally refers to the process of converting a three-dimensional (3D) graphics scene, which may include one or more 3D graphics objects, into two-dimensional (2D) rasterized image data. In particular, GPUs may include a 3D rendering pipeline to provide at least partial hardware acceleration for the rendering of a 3D graphics scene. The 3D graphics objects in the scene may be subdivided by a graphics application into one or more 3D graphics primitives (e.g., points, lines, triangles, patches, etc.), and the GPU may convert the 3D graphics primitives of the scene into 2D rasterized image data.
Systems and methods are disclosed for displaying data on a display device using low quality (LQ) tiles to reduce memory bandwidth. Users may fast-scroll webpages with minimal degradation in the display quality or information content of the webpages.
According to some embodiments, a method for displaying data on a display device includes computing a texture based on a difference between a high quality (HQ) tile and a corresponding low quality (LQ) tile. The method also includes storing the texture into an alpha channel of the LQ tile. The method further includes compositing the LQ tile onto the display device when an attribute of the alpha channel satisfies a threshold.
According to some embodiments, a system for displaying data on a display device includes a display device and a memory. The system also includes one or more processors coupled to the memory and display device. The one or more processors read the memory and are configured to compute a texture based on a difference between a HQ tile and a corresponding LQ tile. The processors are also configured to store the texture into an alpha channel of the LQ tile. The processors are further configured to composite the LQ tile onto the display device when an attribute of the alpha channel satisfies a threshold.
According to some embodiments, a computer-readable medium has stored thereon computer-executable instructions for performing operations including: computing a texture based on a difference between a HQ tile and a corresponding LQ tile; storing the texture into an alpha channel of the LQ tile; and compositing the LQ tile onto the display device when an attribute of the alpha channel satisfies a threshold.
According to some embodiments, an apparatus for displaying data on a display device includes means for computing a texture based on a difference between a HQ tile and a corresponding LQ tile. The apparatus also includes means for storing the texture into an alpha channel of the LQ tile. The apparatus further includes means for compositing the LQ tile onto the display device when an attribute of the alpha channel satisfies a threshold.
The accompanying drawings, which form a part of the specification, illustrate embodiments of the invention and together with the description, further serve to explain the principles of the embodiments. In the drawings, like reference numbers may indicate identical or functionally similar elements. The drawing in which an element first appears is generally indicated by the left-most digit in the corresponding reference number.
Embodiments of the present disclosure and their advantages are best understood by referring to the detailed description that follows.
I. Overview
II. Example System Architectures
III. Render Content onto a Display Device
IV. Example Method
I. Overview
It is to be understood that the following disclosure provides many different embodiments, or examples, for implementing different features of the present disclosure. Some embodiments may be practiced without some or all of these specific details. Specific examples of components, modules, and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting.
Webpages are full of rich multimedia content that may include graphics, videos, images, text, etc. During webpage rendering, web browsers may partition webpages into tiles. The webpage content inside each tile may be rasterized into a bitmap that is then loaded into a texture for the GPU to access. Each bitmap may correspond to a tile that covers a portion of the screen. To display the webpage, the GPU composites the tiles onto the screen. As a user scrolls the webpage frame, new tiles may appear in the browser window and old tiles may disappear from the browser window.
The GPU may generate tiles having different resolutions. A low quality (LQ) tile is a lower resolution version of a corresponding high quality (HQ) tile. While HQ tiles may have the same resolution as the screen, LQ tiles are scaled-down versions of the information content they overlap. LQ tiles are relatively fast to render compared to fully rendered HQ tiles and may be used to quickly convey a thumbnail sketch of the webpage content they overlap.
During fast scrolling, not all of the HQ tiles of a frame may be rendered before a new frame appears in the browser window. To allow smooth scrolling of webpages in web browsers, a frame rate of about 60 frames per second (FPS) may be desirable. Unfortunately, this frame rate typically requires high memory bandwidth. If the user fast scrolls the webpage frame and HQ tiles of the webpage exposed on the screen have not been rendered yet, the user may see blank areas, which may be distracting and degrade overall user experience. Due to the high cost in rendering HQ tiles, corresponding LQ tiles may be generated and composited onto the screen such that a lower resolution version of the webpage can be displayed during scrolling, thus reducing the occurrence of blanking during scroll. The LQ tiles may be rendered into HQ tiles to fully display the information content.
For high resolution devices, a large amount of memory bandwidth may be required to display the entire webpage. Compositing a HQ tile onto the screen may consume a large amount of memory bandwidth and power as well as degrade performance compared to compositing a corresponding LQ tile. It may be desirable to reduce the memory bandwidth in order to improve performance and reduce power consumption. Conventional techniques that reduce the memory bandwidth include performing hardware texture compression. GPUs can perform hardware texture compression, but this technique may be undesirable because it requires hardware support and may be expensive. Alternatively, software techniques for texture compression can also be used. Software texture compression may be undesirable, however, because of the amount of central processing unit (CPU) processing required.
Techniques of the present disclosure may provide solutions that overcome these disadvantages while enabling web browsers to quickly render frames of webpages with minimal degradation in display quality or information content during fast scrolling of the webpages. Systems and methods are disclosed for GPUs to composite either a HQ tile or its corresponding LQ tile onto a display device. A GPU may composite the LQ tile rather than the corresponding HQ tile (without replacing the LQ tile with the HQ tile) if the LQ and HQ tiles are similar enough to not degrade the user's experience. LQ tiles are smaller and consume less memory space than their corresponding HQ tiles. By compositing the LQ tile rather than the HQ tile, the amount of memory accessed during composition by the GPU is reduced. Thus, using LQ tiles may reduce the memory bandwidth required during tile composition.
In some embodiments, the GPU generates a HQ tile and a corresponding LQ tile and computes a texture based on a difference between the HQ tile and LQ tile. Each pixel in the LQ tile may have three color channels and an alpha channel. The alpha channel typically has an attribute describing the degree of opacity of an object fragment for a given pixel. Rather than store the degree of opacity, the GPU may store the texture into the alpha channel of the LQ tile. By doing so, memory space may be conserved. The texture may include a single scalar value per pixel in the LQ tile. The single scalar value corresponding to a pixel in the LQ tile is the difference between the pixel and a plurality of pixels in the corresponding HQ tile, and may be stored as the value of the attribute of the alpha channel.
The GPU may composite the LQ tile onto the display device when an attribute of the alpha channel satisfies a threshold. In an example, an attribute that is below the threshold satisfies the threshold. Such an attribute may indicate that the LQ and HQ tiles are similar enough to each other such that compositing the LQ tile instead of the HQ tile onto the display device will not degrade the user's experience. Alternatively, the GPU may composite the HQ tile onto the display device when the attribute does not satisfy the threshold. An attribute that is not below the threshold may indicate that the LQ and HQ tiles are not similar enough to each other to composite the LQ tile rather than the HQ tile. Accordingly, the HQ tile should be composited onto the display device instead of the corresponding LQ tile.
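As a sketch of this decision, the threshold test described above can be expressed in plain Python. All names here (`choose_tile`, `THRESHOLD`, tiles as 2-D lists of RGBA tuples) are illustrative assumptions, not part of the disclosure; the attribute is taken here to be the largest per-pixel difference stored in the tile's alpha channel, which is one reasonable aggregate:

```python
# Sketch of the compositing decision described above. The alpha
# component (index 3) of each LQ pixel is assumed to hold a
# difference value rather than an opacity value. All names are
# hypothetical.

THRESHOLD = 0.1  # maximum tolerated LQ/HQ difference (assumed 0..1 scale)

def choose_tile(lq_tile, hq_tile, threshold=THRESHOLD):
    """Return the tile to composite onto the display device."""
    # One reasonable aggregate attribute: the largest per-pixel
    # difference stored in the LQ tile's alpha channel.
    attribute = max(pixel[3] for row in lq_tile for pixel in row)
    # An attribute below the threshold satisfies it: the tiles are
    # similar enough that showing the LQ tile is unnoticeable.
    return lq_tile if attribute < threshold else hq_tile
```

In an actual implementation this test would run on the GPU during composition rather than on the CPU, but the control flow is the same: small stored differences select the LQ tile, large ones select the HQ tile.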
II. Example System Architectures
As illustrated in the example of
CPU 106 may include a general-purpose or a special-purpose processor that controls operation of computing device 102. A user may provide input to computing device 102 to cause CPU 106 to execute one or more software applications. The software applications that execute on CPU 106 may include, for example, an operating system, a software application 122 (e.g., a word processor application, an email application, a spread sheet application, a media player application, a video game application, a graphical user interface (GUI) application, or a browser), or another program. The user may provide input to computing device 102 via one or more input devices (not shown) such as a keyboard, a mouse, a microphone, a touch pad or another input device that is coupled to computing device 102 via user interface 104.
Software application 122 may include one or more graphics rendering instructions that instruct GPU 112 to render graphics data to display device 118. In some examples, the software instructions may conform to a graphics application programming interface (API), such as an Open Graphics Library (OpenGL®) API, an Open Graphics Library Embedded Systems (OpenGL ES) API, a Direct3D API, an X3D API, a RenderMan API, a WebGL API, or any other public or proprietary standard graphics API. To process the graphics rendering instructions, CPU 106 may issue one or more graphics rendering commands to GPU 112 to cause it to render all or some of the graphics data. The graphics data to be rendered may include a list of graphics primitives, e.g., points, lines, triangles, quadrilaterals, triangle strips, etc.
Memory controller 108 facilitates the transfer of data going into and out of system memory 110. For example, memory controller 108 may receive memory read and write commands, and service such commands with respect to system memory 110 in order to provide memory services for the components in computing device 102. Memory controller 108 is communicatively coupled to system memory 110. Although memory controller 108 is illustrated in the example computing device 102 of
System memory 110 may store program modules and/or instructions that are accessible for execution by CPU 106 and/or data for use by the programs executing on CPU 106. For example, system memory 110 may store user applications and graphics data associated with the applications. System memory 110 may additionally store information for use by and/or generated by other components of computing device 102. For example, system memory 110 may act as a device memory for GPU 112 and may store data to be operated on by GPU 112 as well as data resulting from operations performed by GPU 112. For example, system memory 110 may store any combination of texture buffers, depth buffers, stencil buffers, vertex buffers, frame buffers, or the like. In addition, system memory 110 may store command streams for processing by GPU 112. System memory 110 may include one or more volatile or non-volatile memories or storage devices, such as, for example, random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), Flash memory, a magnetic data media or an optical storage media.
GPU 112 may be configured to perform graphics operations to render one or more graphics primitives to display device 118 and to texture map an image to a pixel for display. When software application 122 executing on CPU 106 requires graphics processing, CPU 106 may provide graphics commands and graphics data to GPU 112 for rendering to display device 118. The graphics commands may include draw call commands, GPU state programming commands, memory transfer commands, general-purpose computing commands, kernel execution commands, etc. In some examples, CPU 106 may provide the commands and graphics data to GPU 112 by writing the commands and graphics data to system memory 110, which may be accessed by GPU 112. In an example, graphics data may include a texture that is stored in system memory 110 and used by GPU 112 to determine the color for a pixel on display device 118. In some examples, GPU 112 may be further configured to perform general-purpose computing for applications executing on CPU 106.
GPU 112 may, in some instances, be built with a highly-parallel structure that provides more efficient processing of vector operations than CPU 106. For example, GPU 112 may include a plurality of processing units that are configured to operate on multiple vertices, control points, pixels and/or other data in a parallel manner. The highly parallel nature of GPU 112 may, in some instances, allow GPU 112 to render graphics images (e.g., GUIs and two-dimensional (2D) and/or three-dimensional (3D) graphics scenes) onto display device 118 more quickly than rendering the images using CPU 106. In addition, the highly parallel nature of GPU 112 may allow it to process certain types of vector and matrix operations for general-purpose computing applications more quickly than CPU 106.
GPU 112 may, in some instances, be integrated into a motherboard of computing device 102. In other instances, GPU 112 may be present on a graphics card that is installed in a port in the motherboard of computing device 102 or may be otherwise incorporated within a peripheral device configured to interoperate with computing device 102. In further instances, GPU 112 may be located on the same microchip as CPU 106 forming a system on a chip (SoC). GPU 112 may include one or more processors, such as one or more microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), digital signal processors (DSPs), or other equivalent integrated or discrete logic circuitry.
GPU 112 may be directly coupled to GPU cache 114. Thus, GPU 112 may read data from and write data to GPU cache 114 without necessarily using bus 120. In other words, GPU 112 may process data locally using a local storage, instead of off-chip memory. This allows GPU 112 to operate in a more efficient manner by reducing the need of GPU 112 to read and write data via bus 120, which may experience heavy bus traffic. GPU cache 114 may include one or more volatile or non-volatile memories or storage devices, such as, e.g., random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), etc. In some instances, however, GPU 112 may not include a separate cache, but instead use system memory 110 via bus 120.
CPU 106 and/or GPU 112 may store rendered image data in a frame buffer that is allocated within system memory 110. The software application that executes on CPU 106 may store the image data (e.g., texel colors, width, height, and color depth) in system memory 110. Display interface 116 may retrieve the data from the frame buffer and configure display device 118 to display the image represented by the rendered image data. In some examples, display interface 116 may include a digital-to-analog converter (DAC) that is configured to convert the digital values retrieved from the frame buffer into an analog signal consumable by display device 118. In other examples, display interface 116 may pass the digital values directly to display device 118 for processing.
Display device 118 may include a monitor, a television, a projection device, a liquid crystal display (LCD), a plasma display panel, a light emitting diode (LED) array, a cathode ray tube (CRT) display, a surface-conduction electron-emitted display (SED), a laser television display, a nanocrystal display or another type of display unit. Display device 118 may be integrated within computing device 102. For instance, display device 118 may be a screen of a mobile telephone handset or a tablet computer. Alternatively, display device 118 may be a stand-alone device coupled to computer device 102 via a wired or wireless communications link. For instance, display device 118 may be a computer monitor or flat panel display connected to a personal computer via a cable or wireless link.
Bus 120 may be implemented using any combination of bus structures and bus protocols including first, second, and third generation bus structures and protocols, shared bus structures and protocols, point-to-point bus structures and protocols, unidirectional bus structures and protocols, and bidirectional bus structures and protocols. Examples of different bus structures and protocols that may be used to implement bus 120 include, e.g., a HyperTransport bus, an InfiniBand bus, an Advanced Graphics Port bus, a Peripheral Component Interconnect (PCI) bus, a PCI Express bus, an Advanced Microcontroller Bus Architecture (AMBA), an Advanced High-performance Bus (AHB), an AMBA Advanced Peripheral Bus (APB), and an AMBA Advanced eXtensible Interface (AXI) bus. Other types of bus structures and protocols may also be used.
CPU 106 is configured to execute software such as a browser 224, a graphics API 226, a GPU driver 228, and an operating system 230. Browser 224 may include one or more instructions that cause graphics images to be displayed and/or one or more instructions that cause a non-graphics task (e.g., a general-purpose computing task) to be performed by GPU 112. Browser 224 may include or implement a plurality of hardware components and/or software components that operate to perform various methodologies in accordance with the described embodiments.
A user may point browser 224 to a uniform resource locator (URL) of a webpage. Browser 224 may load a hypertext markup language (HTML) file referenced by the URL and render the webpage on the screen (e.g., display device 118 in
During webpage rendering, browser 224 partitions webpage 300 into a plurality of tiles 302. Webpage 300 is divided into tiles 302 of three columns and four rows for a total of 12 tiles. Tiles 302 may overlap with graphics, text 304, images 306 and 308, icons, links for videos etc. that convey information. Browser 224 rasterizes the webpage content inside each tile into a bitmap. Each bitmap corresponds to a tile that covers a portion of display device 118.
Browser 224 may rasterize one or more versions of one or more tiles of plurality of tiles 302. In an example, browser 224 rasterizes a LQ version and a HQ version of each tile. Browser 224 may rasterize a LQ version of tile 302 into an LQ bitmap 260, and rasterize a corresponding HQ version of tile 302 into an HQ bitmap 262. Browser 224 may rasterize the LQ version of the tile into a smaller bitmap than the HQ version of the tile. It may take CPU 106 less time to rasterize the content in the LQ version of tile 302 because it is smaller and contains less information than the corresponding HQ version. As such, in some embodiments, browser 224 generates LQ bitmap 260 before HQ bitmap 262. HQ bitmap 262 may be generated in the background while LQ bitmap 260 is being generated or after LQ bitmap 260 has been generated.
The bitmaps may be inaccessible to GPU 112. To provide GPU 112 with access to the bitmaps, browser 224 may upload them into texture memory 264. In an example, browser 224 uploads LQ bitmap 260 into an LQ tile 270, and uploads HQ bitmap 262 into an HQ tile 272. LQ tile 270 corresponds to HQ tile 272 and is a lower resolution version of HQ tile 272. In an example, HQ tile 272 may map to a 512×512 texel region of display device 118, and corresponding LQ tile 270 may map to a 32×32 texel region of display device 118. A texel region includes one or more texels, and a texel is a pixel in texture memory 264.
Due to the high cost in rasterizing and compositing HQ tiles, their corresponding LQ tiles may be used to render a lower resolution version of webpage 300 during scrolling, thus reducing the occurrence of blanking during scroll as well as the memory bandwidth. Although one LQ bitmap and one HQ bitmap are illustrated in
Browser 224 may issue instructions to graphics API 226, which may translate the instructions received from browser 224 into a format that is consumable by GPU driver 228. GPU driver 228 receives the instructions from browser 224, via graphics API 226, and controls the operation of GPU 112 to service the instructions. For example, GPU driver 228 may formulate one or more commands 240, place the commands 240 into system memory 110 (e.g., in texture memory 264), and instruct GPU 112 to execute commands 240. In some examples, GPU driver 228 may place commands 240 into system memory 110 and communicate with GPU 112 via operating system 230, e.g., via one or more system calls.
System memory 110 may store one or more commands 240. Commands 240 may be stored in one or more command buffers (e.g., a ring buffer) and include one or more state commands and/or one or more draw call commands. A state command may instruct GPU 112 to change one or more of the state variables in GPU 112, such as the draw color. A draw call command may instruct GPU 112 to render a geometry defined by a group of one or more vertices (e.g., defined in a vertex buffer) stored in system memory 110 or to draw content of a texture (e.g., LQ tile 270 or HQ tile 272) onto display device 118.
GPU 112 includes a command engine 232 and one or more processing units 234. Command engine 232 retrieves and executes commands 240 stored in system memory 110. In response to receiving a state command, command engine 232 may be configured to set one or more state registers in GPU 112 to particular values based on the state command. In response to receiving a draw call command, command engine 232 may be configured to cause processing units 234 to render the geometry represented by vertices based on primitive type data stored in system memory 110. Command engine 232 may also receive shader program binding commands, and load particular shader programs into one or more of the programmable processing units 234 based on the shader program binding commands.
Processing units 234 may include one or more processing units, each of which may be a programmable processing unit or a fixed-function processing unit. A programmable processing unit may include, for example, a programmable shader unit that is configured to execute one or more shader programs downloaded onto GPU 112 from CPU 106. A shader program, in some examples, may be a compiled version of a program written in a high-level shading language, such as an OpenGL Shading Language (GLSL), a High Level Shading Language (HLSL), a C for Graphics (Cg) shading language, etc.
In some examples, a programmable shader unit may include a plurality of processing units that are configured to operate in parallel, e.g., an SIMD pipeline. A programmable shader unit may have a program memory that stores shader program instructions and an execution state register, e.g., a program counter register that indicates the current instruction in the program memory being executed or the next instruction to be fetched. The programmable shader units in processing units 234 may include, for example, vertex shader units, pixel shader units, geometry shader units, hull shader units, domain shader units, compute shader units, and/or unified shader units. The one or more processing units 234 may form a 3D graphics rendering pipeline, which may include one or more shader units that are configured to execute a shader program. Browser 224 may send different shader programs to GPU 112.
III. Render Content onto a Display Device
In an example, commands 240 include a command to render webpage 300. Processing units 234 include a fragment shader or pixel shader 237 that may, during the composition stage of the rendering process, composite at most one of LQ tile 270 and HQ tile 272 onto display device 118. Pixel shader 237 may also compute and set colors for pixels covered by a texture object (e.g., a texture image) displayed on display device 118. The terms “fragment” and “pixel” may be used interchangeably in the disclosure.
Each pixel of display device 118 may have associated information. In some examples, each pixel has three color channels and an alpha channel. Each color channel carries a specific color component of the pixel, typically a red, green, or blue (RGB) component. Accordingly, a pixel may have a red channel, green channel, blue channel, and alpha channel. The combination of these three colors at different intensities may represent a full range of the visible spectrum for each pixel. Additionally, the alpha channel may have an attribute indicating the degree of opacity of each pixel. When the attribute is examined in a compositing program, an attribute value of one (white) represents 100 percent opaqueness and entirely covers the pixel's area of interest. In contrast, an attribute value of zero (black) represents 100 percent transparency.
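For contrast, the conventional role of the alpha attribute can be sketched with the standard source-over blend, in which the attribute acts as opacity; the disclosed scheme repurposes this channel to hold a difference value instead. The function name and pixel layout below are illustrative, not from the disclosure:

```python
# Conventional use of the alpha attribute during compositing: the
# standard source-over blend. Channel values are floats in [0, 1];
# an alpha of 1.0 fully covers the destination pixel, and an alpha
# of 0.0 leaves the destination unchanged.

def source_over(src, dst):
    """Blend an RGBA source pixel over an RGB destination pixel."""
    r, g, b, a = src
    return tuple(c * a + d * (1.0 - a) for c, d in zip((r, g, b), dst))
```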
A. HQ Tile is Unavailable and LQ Tile is Available
In some embodiments, during the composition stage of the rendering process, GPU 112 may composite either LQ tile 270 or HQ tile 272 onto display device 118. LQ tiles are smaller and consume less memory space than their corresponding HQ tiles. LQ tile 270 may include content (e.g., graphics, text, images, icons, links for videos etc.) having a lower resolution than HQ tile 272. Accordingly, it may be quicker for GPU 112 to composite LQ tile 270 onto display device 118 instead of HQ tile 272 because LQ tile 270 contains less information than HQ tile 272.
As the user scrolls the webpage frame, new tiles may appear in the browser window and old tiles may disappear from the browser window. During fast scrolling, not all of the HQ tiles of a frame may be available. LQ tile 270 may be available before HQ tile 272 is available, for example, because it may be quicker to generate LQ tile 270 compared to HQ tile 272. Here, LQ tile 270 may be composited onto display device 118 to avoid the user seeing blank areas where the unavailable HQ tile would be displayed. Blank areas may be distracting and degrade overall user experience.
B. HQ Tile and LQ Tile are Available
Alternatively, if both LQ tile 270 and HQ tile 272 are available, it may be desirable to composite LQ tile 270 onto display device 118 rather than HQ tile 272 (without replacing LQ tile 270 with HQ tile 272) if the LQ and HQ tiles are similar enough to each other such that it is unnoticeable or not distracting to the user to see the LQ tile. GPU 112 may determine whether to composite HQ tile 272 or LQ tile 270 onto display device 118.
1. Compute a Texture “DLOW” Having the Same Resolution as the LQ Tile
In an action 402, a texture is computed based on a difference between HQ tile 272 and corresponding LQ tile 270. In an example, browser 224 sends instructions to GPU 112 via graphics API 226 to compute the texture DLOW. GPU 112 computes the difference between two images (e.g., corresponding to HQ tile 272 and LQ tile 270) having different resolutions. In an example, pixel shader 237 determines the degree of similarity between HQ tile 272 and LQ tile 270 by computing a texture DLOW based on a difference between the tiles.
A pixel in LQ tile 270 may be referred to as an LQ pixel, and a pixel in HQ tile 272 may be referred to as an HQ pixel. The HQ and LQ tiles have a different number of pixels. Each LQ pixel in LQ tile 270 may be mapped to a plurality of HQ pixels in HQ tile 272. In an example, LQ tile 270 is a 32×32 pixel region that maps to a 512×512 pixel region in HQ tile 272. For each LQ pixel in LQ tile 270, the texture DLOW may include a difference value indicating the difference between the LQ pixel and its mapped plurality of HQ pixels in HQ tile 272. The texture DLOW has the same resolution as LQ tile 270. Each pixel in the texture DLOW may be associated with a single scalar value representing a difference between an LQ pixel and its mapped plurality of HQ pixels. GPU 112 may compute the texture DLOW efficiently because GPU 112 can process the pixels in parallel. In an example, each instance of pixel shader 237 may process one pixel of the browser window.
GPU 112 may calculate the texture DLOW in a variety of ways. In some examples, GPU 112 calculates the texture DLOW in one pass. In an example, for each LQ pixel in LQ tile 270, GPU 112 identifies a corresponding pixel region in HQ tile 272 (e.g., 16×16 HQ pixels). GPU 112 may compute an attribute of the pixel region. The attribute of the pixel region may be the average of the pixel region. In an example, the intensities of the pixels in the pixel region are averaged. Each pixel may include an RGB color. For example, each pixel may include three scalar values, where a first scalar value corresponds to the red ("R") value, a second scalar value corresponds to the green ("G") value, and a third scalar value corresponds to the blue ("B") value. The pixel intensity may be a function of the three color values. In an example, the intensity is a linear combination of the red, green, and blue values. GPU 112 may then compute a difference between the attribute of the pixel region and the LQ pixel. This difference may be a value in the texture DLOW. In an example, the LQ pixel corresponds to a pixel in the texture DLOW storing the difference between the attribute of the pixel region and the LQ pixel. This difference may be computed for each LQ pixel in LQ tile 270, and the texture DLOW may include each of these computed differences.
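For illustration, the one-pass computation described above can be sketched as follows. The function and helper names are illustrative assumptions, and the Rec. 601 luma weights are merely one choice of "linear combination of the red, green, and blue values"; the disclosure does not prescribe either:

```python
def intensity(rgb):
    # One assumed linear combination of R, G, B (Rec. 601 luma weights).
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def compute_d_low_one_pass(lq, hq, scale):
    """One-pass D_LOW: per LQ pixel, difference between the LQ pixel's
    intensity and the average intensity of its mapped scale x scale HQ region.

    lq: H x W grid of (r, g, b); hq: (H*scale) x (W*scale) grid of (r, g, b).
    """
    h, w = len(lq), len(lq[0])
    d_low = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Attribute of the mapped HQ pixel region: its average intensity.
            region = [intensity(hq[y * scale + dy][x * scale + dx])
                      for dy in range(scale) for dx in range(scale)]
            avg = sum(region) / len(region)
            # Single scalar per D_LOW pixel (same resolution as the LQ tile).
            d_low[y][x] = abs(avg - intensity(lq[y][x]))
    return d_low
```

On a GPU each D_LOW pixel would be produced by an independent pixel-shader instance; the serial loops here stand in for that parallelism.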
In another example, for each LQ pixel in LQ tile 270, GPU 112 identifies a corresponding pixel region in HQ tile 272 and computes a difference between the LQ pixel and each pixel in the pixel region. GPU 112 may then compute an average of the one or more computed differences. The average may be a value in the texture DLOW. In an example, the LQ pixel corresponds to a pixel in the texture DLOW storing the average. This average may be computed for each LQ pixel in LQ tile 270, and the texture DLOW may include each of these computed averages.
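The alternative above (difference first, then average) can be sketched as follows, assuming pixel intensities have already been computed as scalar grids; the function name is an illustrative assumption:

```python
def compute_d_low_diff_then_average(lq_i, hq_i, scale):
    """Per LQ pixel: compute per-HQ-pixel absolute intensity differences
    first, then average them over the mapped scale x scale region."""
    h, w = len(lq_i), len(lq_i[0])
    d_low = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            diffs = [abs(hq_i[y * scale + dy][x * scale + dx] - lq_i[y][x])
                     for dy in range(scale) for dx in range(scale)]
            d_low[y][x] = sum(diffs) / len(diffs)
    return d_low
```

Note the variants are not interchangeable once absolute values are used: for an LQ intensity of 150 over an HQ region {100, 100, 200, 200}, averaging first gives a difference of 0, while averaging the per-pixel differences gives 50, so this variant penalizes high-frequency detail that a region average would hide.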
In some examples, GPU 112 calculates the texture DLOW in more than one pass. In an example, for each LQ pixel in LQ tile 270, GPU 112 identifies a corresponding pixel region in HQ tile 272 and computes a texture DHIGH based on a difference between the LQ pixel and the pixel region. The resolution of the texture DHIGH may be the same as HQ tile 272's resolution. Each pixel in the texture DHIGH may be associated with a single scalar value representing a difference in pixel intensity between HQ tile 272 and LQ tile 270. In a separate pass, GPU 112 may down-sample the texture DHIGH to the resolution of LQ tile 270. The texture DLOW may be the texture DHIGH down-sampled to the resolution of LQ tile 270. In an example, the LQ pixel corresponds to a pixel in the texture DLOW storing the down-sampled difference between the LQ pixel and the pixel region. The texture DHIGH may be stored in one or more temporary buffers and may be discarded after the texture DLOW is computed. This down-sampled difference may be computed for each LQ pixel in LQ tile 270, and the texture DLOW may include each of these down-sampled differences.
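The two-pass variant can be sketched as follows, again on scalar intensity grids; the function name is an illustrative assumption, and a box-filter average stands in for whatever down-sampling filter an implementation chooses:

```python
def compute_d_low_two_pass(lq_i, hq_i, scale):
    """Pass 1 builds D_HIGH at HQ resolution; pass 2 down-samples it to the
    LQ resolution to produce D_LOW. D_HIGH is a temporary buffer."""
    h, w = len(lq_i), len(lq_i[0])
    # Pass 1: per-HQ-pixel intensity difference against the LQ pixel each
    # HQ pixel maps back to.
    d_high = [[abs(hq_i[y][x] - lq_i[y // scale][x // scale])
               for x in range(w * scale)] for y in range(h * scale)]
    # Pass 2: box-filter down-sample D_HIGH to LQ resolution; after this,
    # d_high can be discarded.
    d_low = [[sum(d_high[y * scale + dy][x * scale + dx]
                  for dy in range(scale) for dx in range(scale)) / (scale * scale)
              for x in range(w)] for y in range(h)]
    return d_low
```

With this choice of filter, the two-pass result matches the diff-then-average single-pass variant; the trade-off is an extra full-resolution temporary buffer in exchange for two simpler shader passes.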
2. Store the Texture “DLOW” into an Alpha Channel of the LQ Tile
In an action 404, the texture is stored into an alpha channel of the LQ tile. In an example, browser 224 sends instructions to GPU 112 to store the texture DLOW into an alpha channel of LQ tile 270. Accordingly, GPU 112 may store the texture DLOW into an alpha channel of LQ tile 270. Content of the texture DLOW, which stores a single scalar value per pixel, may be written to the alpha channel of LQ tile 270. In particular, each LQ pixel in LQ tile 270 may have an alpha channel having an attribute that describes the degree of opacity of an object fragment for the LQ pixel. The attribute of the alpha channel typically has a value of one, indicating that the tile is opaque. Because tiles are typically opaque, the attribute of the alpha channel may be used to store information different from the opaqueness of the tile. For example, the attribute may indicate a similarity (or difference) between LQ tile 270 and HQ tile 272 to determine whether compositing LQ tile 270 is sufficient without compositing HQ tile 272 onto display device 118. The alpha channel may be used to store the texture DLOW to save memory space.
In some embodiments, each value in the texture DLOW is stored in an attribute of an alpha channel of an LQ pixel in LQ tile 270. The attribute is based on a difference between the LQ pixel and its mapped pixel region in HQ tile 272. Each difference value stored in the alpha channel of LQ tile 270 may be an average of multiple differences in HQ tile 272. A difference value provides an indication of whether to composite HQ tile 272 or LQ tile 270 onto display device 118. In an example, the difference value may be compared to a threshold to determine how similar LQ tile 270 is to HQ tile 272. The threshold may depend on various factors such as the resolution of the screen and webpage. For example, the threshold may indicate a particular percentage difference in pixel values (e.g., 20 percent difference in pixel intensity).
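A minimal sketch of action 404, assuming 8-bit (0-255) difference values normalized into a [0, 1] alpha attribute; the function name and the normalization constant are assumptions, not part of the disclosure:

```python
def store_d_low_in_alpha(lq_rgba, d_low, max_diff=255.0):
    """Write each D_LOW value into the alpha component of the matching
    LQ pixel. Tiles are normally fully opaque (alpha = 1), so the alpha
    channel is free to carry the similarity metric instead of opacity."""
    for y in range(len(lq_rgba)):
        for x in range(len(lq_rgba[0])):
            r, g, b, _old_alpha = lq_rgba[y][x]
            lq_rgba[y][x] = (r, g, b, min(d_low[y][x] / max_diff, 1.0))
    return lq_rgba
```

Reusing the existing alpha channel this way means no extra texture allocation is needed to keep the difference data alongside the LQ color data.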
3. Difference Between the LQ and HQ Tiles Based on a Threshold
In an action 406, the LQ tile is composited onto the display device when an attribute of the alpha channel satisfies a threshold. Accordingly, color values from texels in LQ tile 270 may be used. In an example, browser 224 sends instructions to GPU 112 to read from the attribute of the alpha channel and compare the attribute to the threshold via graphics API 226. GPU 112 may composite LQ tile 270 onto display device 118 when an attribute of the alpha channel satisfies a threshold. The attribute of the alpha channel may satisfy the threshold if the attribute is less than (or equal to) the threshold, which may indicate how similar LQ tile 270 is to HQ tile 272. An attribute that is below the threshold may indicate that LQ tile 270 is similar enough to HQ tile 272 to not degrade the user's experience. Accordingly, LQ tile 270 may be composited onto display device 118 instead of HQ tile 272.
A webpage may contain low frequency data (e.g., blanks, constant color, or slow changing gradient or images) such that if LQ tile 270 is composited onto display device 118 rather than HQ tile 272, the end result is good enough for a user to view without degrading the user's experience. LQ tile 270 may be similar to HQ tile 272 if the HQ tile contains low frequency data.
GPU 112 may cache LQ tile 270 in GPU cache 114 for later retrieval. If GPU 112 accesses LQ tile 270, GPU 112 may retrieve LQ tile 270 from GPU cache 114 rather than accessing texture memory 264. Many of the browser window's pixels may (inversely) map to the same LQ tile. Accordingly, instances of pixel shader 237 may fetch the same LQ tile, which may be more likely to be in GPU 112's texture cache. This may result in lowered memory bandwidth for tiles containing pixels having similar color values between HQ bitmap 262 and LQ bitmap 260, as in the case where the web page contains blank areas, slow changing gradients, etc.
Additionally, it may be unnecessary for GPU 112 to access HQ tile 272 at all. Rather, GPU 112 may read the alpha channel of LQ tile 270 to determine whether the difference between LQ tile 270 and HQ tile 272 is so small that GPU 112 can composite LQ tile 270. If the attribute of the alpha channel satisfies the threshold, GPU 112 is saved from accessing HQ tile 272, thus reducing memory bandwidth.
In contrast, in an action 408 the HQ tile is composited onto the display device when the attribute of the alpha channel does not satisfy the threshold. Accordingly, color values from texels in HQ tile 272 may be used. GPU 112 may composite HQ tile 272 onto display device 118 when the attribute of the alpha channel does not satisfy the threshold. In an example, the attribute does not satisfy the threshold if the attribute is not less than the threshold (e.g., greater than or equal to the threshold). An attribute that is not below the threshold may indicate that LQ tile 270 is not similar enough to HQ tile 272 to be displayed instead of the HQ tile. Accordingly, HQ tile 272 may be composited onto display device 118 instead of LQ tile 270. A webpage may contain high frequency data such that if LQ tile 270 is composited onto display device 118 rather than HQ tile 272, the end result is distracting for a user to view.
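Actions 406 and 408 together amount to a per-tile selection. A sketch, assuming the per-tile attribute is the worst-case (maximum) per-pixel difference carried in the alpha channel, and that the HQ tile is fetched lazily so a satisfied threshold avoids the HQ memory traffic entirely (both the summary statistic and the function names are assumptions):

```python
def composite_tile(lq_rgba, fetch_hq_tile, threshold):
    """Return the tile to composite onto the display.

    lq_rgba: grid of (r, g, b, a) pixels whose alpha carries the difference.
    fetch_hq_tile: callable that fetches the HQ tile only when needed.
    """
    # One assumed per-tile attribute: the worst-case per-pixel difference.
    attr = max(px[3] for row in lq_rgba for px in row)
    if attr <= threshold:
        return lq_rgba          # action 406: LQ tile is similar enough
    return fetch_hq_tile()      # action 408: fall back to the HQ tile
```

Deferring the HQ fetch behind a callable mirrors the bandwidth argument above: when the threshold is satisfied, the HQ tile is never read.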
GPU 112 may composite HQ tile 272 onto display device 118 in a variety of ways. In an example, GPU 112 accesses the high-resolution tile and copies texels from HQ tile 272 into a frame buffer for display on display device 118. In another example, LQ tile 270 may already be composited onto display device 118. In this example, GPU 112 may add back the difference between HQ tile 272 and LQ tile 270 to obtain HQ tile 272.
In some embodiments, actions 402-408 may be performed for any number of LQ tiles. It is also understood that additional actions may be performed before, during, or after actions 402-408 discussed above. It is also understood that one or more of the actions of method 400 described herein may be omitted, combined, or performed in a different sequence as desired.
Techniques disclosed in the present disclosure do not reduce the number of texture fetches, and may actually increase it. For example, if the attribute of the alpha channel does not satisfy the threshold, GPU 112 accesses both LQ tile 270 and HQ tile 272. Despite this, embodiments of the disclosure may improve performance significantly by reducing memory bandwidth if GPU 112 uses texels from LQ tiles often.
In some embodiments, a processing unit (e.g., GPU 112 and/or CPU 106) varies the threshold on-the-fly to reduce memory bandwidth. The processing unit may relax the threshold (i.e., raise the difference value below which the LQ tile is considered acceptable) so that LQ tiles may be used more often. In an example, during a fast scroll, the user is less likely to notice the difference between LQ and HQ tiles. Accordingly, if the processing unit detects a fast scroll, it may relax the threshold. In another example, computing device 102 may be in a low-battery mode (e.g., less than 15 percent battery left), and it may be desirable to reduce power consumption. Accordingly, if the processing unit detects that the computing device is in the low-battery mode, it may relax the threshold.
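Given the test in action 406 (an attribute below the threshold selects the LQ tile), using LQ tiles more often corresponds to raising the difference threshold. A hypothetical policy sketch; the function name, scaling factors, and the 15 percent cutoff are illustrative assumptions:

```python
def adjust_threshold(base, fast_scroll=False, battery_pct=100):
    """Loosen the compositing threshold when LQ tiles should win more often."""
    t = base
    if fast_scroll:
        t *= 2.0   # differences are harder to notice mid-scroll
    if battery_pct < 15:
        t *= 1.5   # trade fidelity for reduced bandwidth and power
    return t
```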
In an example, instructions for compositing a LQ tile or a HQ tile may be stored in a computer readable medium of system memory 110. Processors may execute the instructions to compute a texture DLOW based on a difference between a HQ tile and a corresponding LQ tile and to store the texture DLOW into an alpha channel of the LQ tile. Processors may also execute the instructions to composite the LQ tile onto display device 118 when an attribute of the alpha channel satisfies a threshold. Processors may also execute the instructions to composite the HQ tile onto display device 118 when the attribute of the alpha channel does not satisfy the threshold.
Where applicable, various embodiments provided by the present disclosure may be implemented using hardware, software, firmware, or combinations thereof. Also where applicable, the various hardware components, software components, and/or firmware components set forth herein may be combined into composite components including software, firmware, hardware, and/or all without departing from the spirit of the present disclosure. Where applicable, the various hardware components, software components, and/or firmware components set forth herein may be separated into sub-components including software, firmware, hardware, or all without departing from the spirit of the present disclosure. In addition, where applicable, it is contemplated that software components may be implemented as hardware components, and vice-versa. Where applicable, the ordering of various steps or actions described herein may be changed, combined into composite steps or actions, and/or separated into sub-steps or sub-actions to provide features described herein.
Although embodiments of the present disclosure have been described, these embodiments illustrate but do not limit the disclosure. It should also be understood that embodiments of the present disclosure should not be limited to these embodiments but that numerous modifications and variations may be made by one of ordinary skill in the art in accordance with the principles of the present disclosure and be included within the spirit and scope of the present disclosure as hereinafter claimed.
Inventors: Shiu Wai Hui, Veluppillai Arulesan, Yida Wang
Assignee: Qualcomm Incorporated (assignment on the face of the patent, Feb 20, 2015; assignments from Hui, Arulesan, and Wang recorded Mar 18, 2015)