A method includes provisioning a virtual volume from at least one storage pool of a storage array, designating at least one virtual volume segment of the virtual volume for mapping a virtual volume range to a virtual drive range, organizing the virtual volume range into a plurality of clusters, measuring a data load on each of the plurality of clusters and comparing the data load on each of the plurality of clusters to activity of the virtual volume, and reconfiguring the at least one virtual volume segment to contain a hot-spot.

Patent: 8874867
Priority: Nov 21 2008
Filed: Nov 21 2008
Issued: Oct 28 2014
Expiry: Apr 22 2031
Extension: 882 days
Entity: Large
Status: currently ok
1. A method for managing a storage array, the storage array including a plurality of storage pools including at least a first storage pool and a second storage pool, each of the plurality of storage pools including at least one storage volume, each of the at least one storage volume including a plurality of virtual volume segments, each of the plurality of virtual volume segments including at least one cluster, the method comprising:
measuring data loads on each of a plurality of clusters of the at least one virtual volume of the first storage pool;
determining that one or more clusters of the plurality of clusters of the at least one virtual volume of the first storage pool have measured data loads which exceed a threshold of activity;
designating the one or more clusters of the plurality of clusters as one or more hot-spots upon determining that the one or more clusters of the plurality of clusters of the at least one virtual volume of the first storage pool have measured data loads which exceed the threshold of activity;
reconfiguring one or more particular virtual volume segments to contain the one or more hot-spots; and
transferring the one or more particular virtual volume segments, which contain the one or more clusters having been designated as the one or more hot-spots, from the first storage pool to the second storage pool, wherein the second storage pool is associated with a storage tier having a higher performance relative to a storage tier associated with the first storage pool.
17. A method for managing a storage array, the storage array including a plurality of storage pools including at least a first storage pool and a second storage pool, each of the plurality of storage pools including at least one storage volume, each of the at least one storage volume including a plurality of virtual volume segments, each of the plurality of virtual volume segments including a plurality of sub-segments, each of the plurality of sub-segments including at least one cluster, the method comprising:
measuring data loads on each of a plurality of clusters of the at least one virtual volume of the first storage pool;
determining that one or more clusters of the plurality of clusters of the at least one virtual volume of the first storage pool have measured data loads which exceed a threshold of activity;
designating the one or more clusters of the plurality of clusters as one or more hot-spots upon determining that the one or more clusters of the plurality of clusters of the at least one virtual volume of the first storage pool have measured data loads which exceed the threshold of activity;
reconfiguring one or more particular sub-segments to contain the one or more hot-spots; and
transferring the one or more particular sub-segments, which contain the one or more clusters having been designated as the one or more hot-spots, from the first storage pool to the second storage pool, wherein the second storage pool is associated with a storage tier having a higher performance relative to a storage tier associated with the first storage pool.
9. A non-transitory computer-readable medium having computer-executable instructions for performing a method for managing a storage array, the storage array including a plurality of storage pools including at least a first storage pool and a second storage pool, each of the plurality of storage pools including at least one storage volume, each of the at least one storage volume including a plurality of virtual volume segments, each of the plurality of virtual volume segments including at least one cluster, said method comprising:
measuring data loads on each of a plurality of clusters of the at least one virtual volume of the first storage pool;
determining that one or more clusters of the plurality of clusters of the at least one virtual volume of the first storage pool have measured data loads which exceed a threshold of activity;
designating the one or more clusters of the plurality of clusters as one or more hot-spots upon determining that the one or more clusters of the plurality of clusters of the at least one virtual volume of the first storage pool have measured data loads which exceed the threshold of activity;
reconfiguring one or more particular virtual volume segments to contain the one or more hot-spots; and
transferring the one or more particular virtual volume segments, which contain the one or more clusters having been designated as the one or more hot-spots, from the first storage pool to the second storage pool, wherein the second storage pool is associated with a storage tier having a higher performance relative to a storage tier associated with the first storage pool.
2. The method of claim 1, wherein measuring data loads on each of a plurality of clusters of the at least one virtual volume of the first storage pool includes collecting performance statistics for each of the plurality of clusters.
3. The method of claim 1, wherein each of the plurality of clusters is defined as at least one of a proportion of the at least one virtual volume or a fixed size.
4. The method of claim 1, wherein each of the plurality of virtual volume segments includes a plurality of sub-segments.
5. The method of claim 4, wherein each of the plurality of sub-segments includes more than one cluster.
6. The method of claim 1, wherein any virtual volume segment of the first storage pool that does not contain a cluster designated as the one or more hot-spots remains in the first storage pool.
7. The method of claim 1, wherein the second storage pool is implemented on one or more solid state drives.
8. The method of claim 1, wherein the first storage pool is implemented on one or more hard disk drive devices and wherein the second storage pool is implemented on one or more solid state drives.
10. The non-transitory computer-readable medium of claim 9, wherein measuring data loads on each of a plurality of clusters of the at least one virtual volume of the first storage pool includes collecting performance statistics for each of the plurality of clusters.
11. The non-transitory computer-readable medium of claim 9, wherein each of the plurality of clusters is defined as at least one of a proportion of the at least one virtual volume or a fixed size.
12. The non-transitory computer-readable medium of claim 9, wherein each of the plurality of virtual volume segments includes a plurality of sub-segments.
13. The non-transitory computer-readable medium of claim 12, wherein each of the plurality of sub-segments includes more than one cluster.
14. The non-transitory computer-readable medium of claim 9, wherein any virtual volume segment of the first storage pool that does not contain a cluster designated as the one or more hot-spots remains in the first storage pool.
15. The non-transitory computer-readable medium of claim 9, wherein the second storage pool is implemented on one or more solid state drives.
16. The non-transitory computer-readable medium of claim 9, wherein the first storage pool is implemented on one or more hard disk drive devices and wherein the second storage pool is implemented on one or more solid state drives.
18. The method of claim 17, wherein the steps of measuring data loads on each of a plurality of clusters of the at least one virtual volume of the first storage pool; determining that one or more clusters of the plurality of clusters of the at least one virtual volume of the first storage pool have measured data loads which exceed a threshold of activity; designating the one or more clusters of the plurality of clusters as one or more hot-spots; and transferring one or more particular sub-segments, which contain the one or more clusters having been designated as the one or more hot-spots, from the first storage pool to the second storage pool are performed on a continuous basis.
19. The method of claim 17, wherein any sub-segment of the first storage pool that does not contain a cluster designated as the one or more hot-spots remains in the first storage pool.
20. The method of claim 17, wherein the first storage pool is implemented on one or more hard disk drive devices and wherein the second storage pool is implemented on one or more solid state drives.

The present disclosure generally relates to the field of storage systems, and more particularly to methods for partitioning data loads in virtual volumes.

A storage system may group storage devices into tiers based on various characteristics, including performance, cost, and the like. Data may be stored in the grouped storage devices to utilize specific capabilities of the storage devices. Such grouping may be referred to as storage tiering or storage tiers. A storage array may comprise multiple storage tiers with significantly different performance characteristics. For instance, higher performing storage tiers typically include relatively expensive storage devices such as Solid State Drives (SSDs), whereas lower performing storage tiers typically include relatively inexpensive storage devices such as Serial ATA (SATA) Hard Disk Drives (HDDs). A user may prefer the higher performing storage tiers to contain data with a high load/activity, whereas the remaining data may be stored in the lower performing storage tiers.

A method for identifying and containing performance hot-spots in virtual volumes includes provisioning a virtual volume from at least one storage pool of a storage array, designating at least one virtual volume segment of the virtual volume for mapping a virtual volume range to a virtual drive range, organizing the virtual volume range into a plurality of clusters, measuring a data load on each of the plurality of clusters and comparing the data load on each of the plurality of clusters to activity of the virtual volume. A data load exceeding a threshold of activity may be defined as a hot-spot. The method further includes reconfiguring the at least one virtual volume segment to contain the hot-spot.

A computer-readable medium having computer-executable instructions for performing a method for partitioning data loads in virtual volumes includes provisioning a virtual volume from at least one storage pool of a storage array, designating at least one virtual volume segment of the virtual volume for mapping a virtual volume range to a virtual drive range, organizing the virtual volume range into a plurality of clusters, measuring a data load on each of the plurality of clusters and comparing the data load on each of the plurality of clusters to activity of the virtual volume. A data load exceeding a threshold of activity is defined as a hot-spot. The method further includes reconfiguring the at least one virtual volume segment to contain the hot-spot.

A method for identifying and containing performance hot-spots in virtual volumes includes provisioning at least one virtual volume from at least one storage pool of a storage array and designating at least one virtual volume segment of the at least one virtual volume for mapping a virtual volume range to a virtual drive range. The virtual volume range comprises a plurality of clusters. The method further includes measuring a data load on each of the plurality of clusters and comparing the data load on each of the plurality of clusters to activity of the at least one virtual volume. A data load exceeding a threshold of activity is defined as a hot-spot. The method additionally includes reconfiguring the at least one virtual volume segment to contain the hot-spot and transferring the at least one virtual volume segment to a corresponding storage pool.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not necessarily restrictive of the present disclosure. The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate subject matter of the disclosure. Together, the descriptions and the drawings serve to explain the principles of the disclosure.

The numerous advantages of the disclosure may be better understood by those skilled in the art by reference to the accompanying figures in which:

FIG. 1A displays a block diagram of a virtual volume segmentation of a storage array;

FIG. 1B displays a block diagram of a storage array;

FIG. 2 is a block diagram of another virtual volume segmentation of a storage array including a plurality of clusters;

FIG. 3 is a block diagram of a further virtual volume segmentation of a storage array;

FIG. 4 is a flow chart illustrating a method for identifying and containing performance hot-spots; and

FIG. 5 is a flow chart illustrating a method for identifying and containing performance hot-spots.

Reference will now be made in detail to the subject matter disclosed, which is illustrated in the accompanying drawings.

Referring to FIGS. 1A and 1B, a block diagram of a virtual volume segmentation 100 of a storage array and a block diagram of a storage array 101 are displayed. In a storage array, a host-visible Small Computer System Interface (SCSI) Logical Unit (LU) may be mapped to a virtual volume 102, for instance, when a storage virtualization manager (SVM) is deployed in the storage array. Virtual volume 102 may be derived from a capacity of one or more storage pools 104 in the storage array 101. The one or more storage pools 104 may correspond to a storage tier of the storage array 101. For example, each storage pool 104 may correspond to one storage tier of the storage array 101. Storage pools 104 may include one or more virtual drives 106. In a particular embodiment, virtual drives 106 correspond to redundant array of independent disks (RAID) volumes. When virtual volume 102 is derived from storage pools 104, a virtual volume segment 108 is created for each mapping of a virtual volume logical block address (LBA) range to a virtual drive LBA range. It is appreciated that any number of virtual volume segments 108 may be utilized to map virtual volume 102 to virtual drives 106; thus, the depiction of three virtual volume segments 108 in FIG. 1A is not limiting. Note that the virtual volume LBA range may be completely mapped to virtual drive LBA ranges, i.e., the virtual volume may be fully provisioned. This mapping may require one or more segments to cover all the LBAs of the virtual volume. The grouping of virtual volume segments 108 of virtual volume 102 may be referred to as a virtual volume segment configuration.
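The segment mapping described above can be sketched in code. The following Python fragment is illustrative only; the class and function names, field layout, and LBA values are invented and are not part of the disclosure. Each segment maps a contiguous virtual volume LBA range to a range on a backing virtual drive, and a lookup walks the segment list.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VolumeSegment:
    vol_start_lba: int    # first LBA of the mapped range within the virtual volume
    length: int           # number of blocks covered by this segment
    drive_id: int         # virtual drive (e.g., RAID volume) backing the range
    drive_start_lba: int  # first LBA of the backing range on that drive

def resolve(segments, vol_lba):
    """Translate a virtual volume LBA to (drive_id, drive_lba)."""
    for seg in segments:
        if seg.vol_start_lba <= vol_lba < seg.vol_start_lba + seg.length:
            offset = vol_lba - seg.vol_start_lba
            return seg.drive_id, seg.drive_start_lba + offset
    raise ValueError(f"LBA {vol_lba} not mapped (volume not fully provisioned)")

# A fully provisioned volume: three segments together cover LBAs 0..2999.
segments = [
    VolumeSegment(0,    1000, drive_id=0, drive_start_lba=5000),
    VolumeSegment(1000, 1000, drive_id=1, drive_start_lba=0),
    VolumeSegment(2000, 1000, drive_id=0, drive_start_lba=9000),
]
print(resolve(segments, 1500))  # → (1, 500)
```

Because every volume LBA falls inside exactly one segment, the lookup either returns a backing location or signals a provisioning gap, mirroring the "fully provisioned" property noted above.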

A Dynamic Storage Tiering (DST) module 103 of the storage array may be used to dynamically move data to optimal storage tiers as loads on the storage array 101 vary throughout a given period. For example, as specific data are accessed frequently during peak periods, the DST module 103 may move the data to higher performing storage tiers. As the load decreases after the peak period (i.e., during a non-peak period), the DST module 103 may move the data back to lower performing storage tiers. Further, the DST module 103 may identify performance hot-spots (e.g., LBA ranges experiencing high loads/activity) in virtual volume 102 and reconfigure virtual volume segments 108 to ensure each performance hot-spot is contained in one or more virtual volume segments 108. The virtual volume segments 108 may then be moved to appropriate corresponding storage pools 104. For instance, virtual volume segments 108 containing performance hot-spots may be transferred to storage pools 104 which correspond to high performing storage tiers. How the DST module 103 accomplishes the above-mentioned tasks will be discussed in detail below.

A performance hot-spot of virtual volume 102 may be contained in a single virtual volume segment 108 (as depicted in FIG. 2) or may span multiple virtual volume segments 108 (as depicted in FIG. 3). In one embodiment of the present disclosure, the virtual volume segments 108 may be subdivided to provide a relatively small base size in order to contain the size of a performance hot-spot. Thus, when a virtual volume segment 108 containing a performance hot-spot is transferred to a corresponding storage pool 104, the amount of data moved may be substantially limited to that of the performance hot-spot. Where a performance hot-spot is relatively large, multiple virtual volume segments 108 and/or subdivided segments may be utilized to contain the performance hot-spot.

Referring now to FIGS. 2 and 3, block diagrams of virtual volume segmentations 200, 300 of a storage array are displayed. As described above, a virtual volume 102 may be segmented into virtual volume segments 208, 308 for mapping a virtual volume range to a virtual drive range. The virtual volume range may be further organized by a plurality of clusters 202, 302. In one embodiment, the clusters 202, 302 are smaller than the virtual volume segments 208, 308, such that, for example, a virtual volume segment 208, 308 may comprise multiple clusters 202, 302. The clusters 202, 302 may be organized such that each cluster is of a substantially equivalent size. For instance, clusters 202, 302 may be defined as a certain percentage or proportion of the overall virtual volume 102 capacity, or may be defined by a fixed size, such as a fixed number of bytes. Alternatively, clusters 202, 302 may be of a varying size without departing from the scope of the disclosure. A minimum size of clusters 202, 302 may be defined by the storage array, which may influence the total number of clusters 202, 302 available on the virtual volume 102.
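The two cluster-sizing policies mentioned above, a fixed size or a proportion of the volume, together with an array-imposed minimum cluster size, amount to simple arithmetic. The sketch below is a hypothetical illustration; the function name and the 1 MiB minimum are assumptions, not values from the disclosure.

```python
def cluster_count(volume_bytes, fixed_size=None, proportion=None, min_size=1 << 20):
    """Return the number of clusters a volume is organized into.

    Either a fixed cluster size in bytes or a proportion of the volume
    capacity may be given; the array-imposed minimum (assumed 1 MiB here)
    caps how many clusters the volume can be divided into.
    """
    if fixed_size is not None:
        size = fixed_size
    elif proportion is not None:
        size = int(volume_bytes * proportion)
    else:
        raise ValueError("specify fixed_size or proportion")
    size = max(size, min_size)       # enforce the array's minimum cluster size
    return -(-volume_bytes // size)  # ceiling division

one_tib = 1 << 40
print(cluster_count(one_tib, fixed_size=64 << 20))  # 64 MiB clusters → 16384
print(cluster_count(one_tib, proportion=1 / 1024))  # 1/1024 of 1 TiB → 1024
```

Note how the minimum size bounds the cluster population: requesting a 1-byte cluster on a 1 GiB volume still yields at most 1024 clusters under the assumed 1 MiB floor.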

The storage array may measure the data load/activity on each cluster of the plurality of clusters 202, 302 on a continuous or on-going basis. For example, performance statistics may be collected and/or tracked for each individual cluster within the plurality of clusters 202, 302. To identify performance hot-spots, the data load/activity that was measured for an individual cluster may be compared to the activity of the virtual volume 102 as a whole. In one embodiment, the data load/activity that was measured for an individual cluster may be compared to the average data load on the virtual volume 102. Alternatively, the data load/activity that was measured for an individual cluster may be compared to the activity of multiple virtual volumes, for instance, when many virtual volumes are managed by a DST module. Further, the data load/activity that was measured for an individual cluster may be compared to adjacent clusters or to other clusters within one or more virtual volumes. Where a data load on an individual cluster exceeds a threshold value, that cluster may contain a performance hot-spot, either in whole or in part. The storage array may assign a threshold value to a particular virtual volume. A performance hot-spot exceeds the threshold value and may span one or more clusters. Thus, a cluster or a plurality of adjacent clusters with activity that exceeds an assigned threshold may indicate a performance hot-spot within virtual volume 102. Further, multiple performance hot-spots may be present in the storage array at one time, for example, as dictated by multiple clusters 202, 302 spaced apart which exceed the threshold, or by different groupings of adjacent clusters exceeding the threshold.
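As a rough illustration of the identification logic described above, the following Python sketch compares each cluster's load to the volume-wide average scaled by a threshold factor, and groups runs of adjacent hot clusters into distinct hot-spots. The function name, the threshold multiplier, and the sample loads are all invented for the example.

```python
def find_hot_spots(cluster_loads, threshold_factor=2.0):
    """Return (first_cluster, last_cluster) index ranges of hot-spots.

    A cluster is "hot" when its load exceeds the volume average scaled by
    threshold_factor; adjacent hot clusters are merged into one hot-spot.
    """
    average = sum(cluster_loads) / len(cluster_loads)
    threshold = average * threshold_factor
    hot = [load > threshold for load in cluster_loads]

    spots, start = [], None
    for i, is_hot in enumerate(hot):
        if is_hot and start is None:
            start = i                     # a new run of hot clusters begins
        elif not is_hot and start is not None:
            spots.append((start, i - 1))  # run ended at the previous cluster
            start = None
    if start is not None:
        spots.append((start, len(hot) - 1))
    return spots

loads = [10, 12, 11, 90, 95, 88, 9, 10, 70, 11]
print(find_hot_spots(loads, threshold_factor=1.5))  # → [(3, 5), (8, 8)]
```

The two returned ranges correspond to the case noted above of multiple hot-spots coexisting: one group of adjacent hot clusters and one isolated hot cluster, separated by clusters below the threshold.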

Upon identifying the hot-spots, the virtual volume segments 208, 308 may be reconfigured (i.e., the virtual volume segment configuration may be changed) to contain the hot-spot within one or more virtual volume segments 208, 308. For example, where more than one virtual volume segment 208, 308 is required to contain a hot-spot, multiple virtual volume segments 208, 308 may be used, such as when they are adjacent to each other. Referring to FIG. 2, a hot-spot 204 is contained within a single virtual volume segment (segment 1). In this instance, the virtual volume segment 208 may be subdivided into sub-segments 206, such as to reduce the size of the segment to be moved to a corresponding storage pool. Segment 1 is divided into two sub-segments, namely segment 11 and segment 12. Segment 12 comprises multiple clusters, which in this exemplary description are clusters i, i+1, and i+2. Since the hot-spot 204 is contained entirely within segment 12 via clusters i, i+1, and i+2, segment 12 may be moved to a corresponding storage pool. For instance, the corresponding storage pool may correspond to a high-performing storage tier.

In an alternative embodiment, a hot-spot may span multiple virtual volume segments. Referring to FIG. 3, a hot-spot 304 spans two virtual volume segments (segments 1 and 2). In this instance, the virtual volume segments 308 may each be subdivided into sub-segments 306, such as to reduce the sizes of any segments to be moved to a corresponding storage pool. Segment 1 is divided into two sub-segments, namely segment 11 and segment 12, whereas segment 2 is divided into two sub-segments, namely segment 21 and segment 22. In this exemplary embodiment, the hot-spot is contained within segments 12 and 21. Together, segments 12 and 21 comprise clusters i, i+1, and i+2, and therefore contain the hot-spot. Accordingly, segments 12 and 21 may be moved to a corresponding storage pool. For instance, the corresponding storage pool may correspond to a high-performing storage tier. Where more than one segment is to be moved to a storage pool (such as when a hot-spot spans more than one virtual volume segment), the segments may be combined into a single segment. In an embodiment, the segments are combined into a single segment upon transfer to the storage pool.
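A hedged sketch of the subdivision step illustrated by FIGS. 2 and 3: given a segment's cluster range and a hot-spot's cluster range, split the segment at cluster boundaries so that one sub-segment holds exactly the overlapping clusters, keeping the amount of data moved close to the hot-spot's size. The interface below is invented for illustration and operates on inclusive cluster indices.

```python
def split_segment(seg_first, seg_last, hot_first, hot_last):
    """Split [seg_first, seg_last] around its overlap with [hot_first, hot_last].

    Returns (cold_before, hot_sub_segment, cold_after); absent parts are None.
    Only hot_sub_segment would be transferred to the faster storage pool.
    """
    lo, hi = max(seg_first, hot_first), min(seg_last, hot_last)
    if lo > hi:
        return (seg_first, seg_last), None, None  # no overlap: nothing to move
    before = (seg_first, lo - 1) if lo > seg_first else None
    after = (hi + 1, seg_last) if hi < seg_last else None
    return before, (lo, hi), after

# FIG. 2 style scenario: one segment holds the whole hot-spot (clusters 4..6);
# only the sub-segment (4, 6) needs to move to the faster pool.
print(split_segment(0, 9, 4, 6))   # → ((0, 3), (4, 6), (7, 9))

# FIG. 3 style scenario: the hot-spot (clusters 3..7) spans two segments;
# each segment yields its own hot sub-segment, which may later be recombined.
print(split_segment(0, 4, 3, 7))   # → ((0, 2), (3, 4), None)
print(split_segment(5, 9, 3, 7))   # → (None, (5, 7), (8, 9))
```

In the spanning case, the two hot sub-segments (3, 4) and (5, 7) are adjacent, matching the note above that such segments may be combined into a single segment upon transfer.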

It is contemplated that multiple performance hot-spots may be present within a virtual volume 102. The above disclosure may be utilized for identifying and containing the multiple performance hot-spots within virtual volume segments, which are transferred to storage pools of a storage array.

Referring now to FIG. 4, a flow chart is displayed illustrating a method 400 for identifying and containing performance hot-spots in virtual volumes. Method 400 may derive a virtual volume from at least one storage pool of a storage array (step 402). Method 400 may designate at least one virtual volume segment of the virtual volume for mapping a virtual volume range to a virtual drive range (step 404). Method 400 may organize the virtual volume range into a plurality of clusters (step 406). Method 400 may measure a data load on each of the plurality of clusters (step 408). Method 400 may compare the data load on each of the plurality of clusters to activity of the virtual volume (step 410). A data load exceeding a threshold of activity may be defined as a hot-spot. Method 400 may reconfigure the at least one virtual volume segment to contain the hot-spot (step 412).

Measuring a data load on each of the plurality of clusters of method 400 may include collecting performance statistics for each of the plurality of clusters. Each of the plurality of clusters may be defined as at least one of a proportion of the virtual volume or a fixed size. The at least one virtual volume segment may include a plurality of sub-segments. Method 400 may further include transferring at least one of the plurality of sub-segments to a storage pool of the storage array. The at least one of the plurality of sub-segments may contain the hot-spot. The at least one of the plurality of sub-segments may include multiple clusters of the plurality of clusters.

Referring now to FIG. 5, a flow chart is displayed illustrating an alternative method 500 for identifying and containing performance hot-spots in virtual volumes. Method 500 may derive a virtual volume from at least one storage pool of a storage array (step 502). Method 500 may designate at least one virtual volume segment of the virtual volume for mapping a virtual volume range to a virtual drive range (step 504). The virtual volume may comprise a plurality of clusters. The virtual volume may be fully provisioned. Method 500 may measure a data load on each of the plurality of clusters (step 506). Method 500 may compare the data load on each of the plurality of clusters to activity of the virtual volume (step 508). A data load that exceeds a threshold of activity may be defined as a hot-spot. Method 500 may reconfigure the at least one virtual volume segment to contain the hot-spot (step 510). Method 500 may transfer the at least one virtual volume segment to a corresponding storage pool (step 512).

The corresponding storage pool of method 500 may correspond to a high-performing storage tier. The high-performing storage tier may comprise one or more solid state drives. The steps of measuring a data load on each of the plurality of clusters, comparing the data load on each of the plurality of clusters to activity of the at least one virtual volume, reconfiguring the at least one virtual volume segment to contain the hot-spot, and transferring the at least one virtual volume segment to a corresponding storage pool of method 500 may be performed on a continuous basis. Method 500 may further include combining the at least one virtual volume segment into a single virtual volume segment once transferred to the corresponding storage pool. A plurality of hot-spots may be defined and stored in a plurality of virtual volume segments.

In the present disclosure, the methods disclosed may be implemented as sets of instructions or software readable by a device. Such a software package may be a computer program product which employs a computer-readable storage medium including stored computer code which is used to program a computer to perform the disclosed function and process of the present disclosure. The computer-readable medium may include, but is not limited to, any type of conventional floppy disk, optical disk, CD-ROM, magnetic disk, hard disk drive, magneto-optical disk, ROM, RAM, EPROM, EEPROM, magnetic or optical card, or any other suitable media for storing electronic instructions.

Further, it is understood that the specific order or hierarchy of steps in the methods disclosed are examples of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the method can be rearranged while remaining within the disclosed subject matter. The accompanying method claims present elements of the various steps in a sample order, and are not necessarily meant to be limited to the specific order or hierarchy presented.

It is believed that the present disclosure and many of its attendant advantages will be understood by the foregoing description, and it will be apparent that various changes may be made in the form, construction and arrangement of the components without departing from the disclosed subject matter or without sacrificing all of its material advantages. The form described is merely explanatory, and it is the intention of the following claims to encompass and include such changes.

Jess, Martin

Patent Priority Assignee Title
10768821, Aug 08 2017 SK Hynix Inc. Memory system and method of operating the same
Patent Priority Assignee Title
6145067, Nov 06 1997 NEC Corporation Disk array device
6728831, Oct 23 1998 ORACLE INTERNATIONAL CORPORATION OIC Method and system for managing storage systems containing multiple data storage devices
6874061, Oct 23 1998 ORACLE INTERNATIONAL CORPORATION OIC Method and system for implementing variable sized extents
7185163, Sep 03 2003 Veritas Technologies LLC Balancing most frequently used file system clusters across a plurality of disks
7191304, Sep 06 2002 Hewlett Packard Enterprise Development LP Efficient and reliable virtual volume mapping
7363453, Nov 29 2004 EMC IP HOLDING COMPANY LLC Method and apparatus for improving storage device performance by selective volume swapping based on hot spot analysis
7565487, Apr 12 2006 Hitachi, Ltd. Computer system for managing data among virtual storage systems
7603529, Mar 22 2006 EMC IP HOLDING COMPANY LLC Methods, systems, and computer program products for mapped logical unit (MLU) replications, storage, and retrieval in a redundant array of inexpensive disks (RAID) environment
7711916, May 11 2005 Oracle International Corporation Storing information on storage devices having different performance capabilities with a storage system
7730259, Dec 28 2006 Hitachi, Ltd. Method, computer and system for managing a storage subsystem configuration
7886111, May 24 2006 DELL INTERNATIONAL L L C System and method for raid management, reallocation, and restriping
8380674, Jan 09 2008 NetApp, Inc System and method for migrating lun data between data containers
20040225662,
20060010290,
20060075191,
20060095706,
20060259686,
20070130423,
20070266053,
20080077762,
EP1158395,
JP200358326,
JP2004272324,
JP2006127143,
JP200624024,
JP200779787,
TW245989,
TW263903,
WO2008126202,
Executed on | Assignor | Assignee | Conveyance | Reel/Frame/Doc
Nov 21 2008 | | LSI Corporation | (assignment on the face of the patent) |
Nov 21 2008 | JESS, MARTIN | LSI Corporation | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 022005/0361 pdf
May 06 2014 | LSI Corporation | DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT | PATENT SECURITY AGREEMENT | 032856/0031 pdf
May 06 2014 | Agere Systems LLC | DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT | PATENT SECURITY AGREEMENT | 032856/0031 pdf
Aug 14 2014 | LSI Corporation | AVAGO TECHNOLOGIES GENERAL IP SINGAPORE PTE LTD | ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS) | 035390/0388 pdf
Feb 01 2016 | AVAGO TECHNOLOGIES GENERAL IP SINGAPORE PTE LTD | BANK OF AMERICA, N A, AS COLLATERAL AGENT | PATENT SECURITY AGREEMENT | 037808/0001 pdf
Feb 01 2016 | DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT | LSI Corporation | TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS (RELEASES RF 032856-0031) | 037684/0039 pdf
Feb 01 2016 | DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT | Agere Systems LLC | TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS (RELEASES RF 032856-0031) | 037684/0039 pdf
Jan 19 2017 | BANK OF AMERICA, N A, AS COLLATERAL AGENT | AVAGO TECHNOLOGIES GENERAL IP SINGAPORE PTE LTD | TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS | 041710/0001 pdf
May 09 2018 | AVAGO TECHNOLOGIES GENERAL IP SINGAPORE PTE LTD | AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED | MERGER (SEE DOCUMENT FOR DETAILS) | 047229/0408 pdf
Sep 05 2018 | AVAGO TECHNOLOGIES GENERAL IP SINGAPORE PTE LTD | AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED | CORRECTIVE ASSIGNMENT TO CORRECT THE PATENT NUMBER 9,385,856 TO 9,385,756 PREVIOUSLY RECORDED AT REEL: 47349 FRAME: 001. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER | 051144/0648 pdf
Sep 05 2018 | AVAGO TECHNOLOGIES GENERAL IP SINGAPORE PTE LTD | AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE LIMITED | CORRECTIVE ASSIGNMENT TO CORRECT THE EFFECTIVE DATE PREVIOUSLY RECORDED ON REEL 047229 FRAME 0408. ASSIGNOR(S) HEREBY CONFIRMS THE EFFECTIVE DATE IS 09/05/2018 | 047349/0001 pdf
Date Maintenance Fee Events
Mar 22 2018 | M1551: Payment of Maintenance Fee, 4th Year, Large Entity.
Apr 25 2022 | M1552: Payment of Maintenance Fee, 8th Year, Large Entity.


Date Maintenance Schedule
Oct 28 2017 | 4 years fee payment window open
Apr 28 2018 | 6 months grace period start (w/ surcharge)
Oct 28 2018 | patent expiry (for year 4)
Oct 28 2020 | 2 years to revive unintentionally abandoned end (for year 4)
Oct 28 2021 | 8 years fee payment window open
Apr 28 2022 | 6 months grace period start (w/ surcharge)
Oct 28 2022 | patent expiry (for year 8)
Oct 28 2024 | 2 years to revive unintentionally abandoned end (for year 8)
Oct 28 2025 | 12 years fee payment window open
Apr 28 2026 | 6 months grace period start (w/ surcharge)
Oct 28 2026 | patent expiry (for year 12)
Oct 28 2028 | 2 years to revive unintentionally abandoned end (for year 12)