US20090106518A1 - Methods, systems, and computer program products for file relocation on a data storage device
- Publication number
- US20090106518A1 (application Ser. No. 11/875,191)
- Authority
- US
- United States
- Prior art keywords
- file
- storage device
- data storage
- region
- relocating
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0643—Management of files
- G06F3/0613—Improving I/O performance in relation to throughput
- G06F3/0676—Magnetic disk device
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B5/012—Recording on, or reproducing or erasing from, magnetic disks
Definitions
- the present disclosure relates generally to computer system data storage management, and, in particular, to file relocation management for reducing data access time on a data storage device.
- File management systems, such as a log-structured file system (LFS), may store data as a circular log, writing data sequentially to the log. This approach attempts to maximize write throughput on a data storage device by avoiding costly seeks, based on an assumption that repositioning of the read/write head used to access the data storage device is not required prior to beginning a new write cycle due to sequential file locations. One reason this approach is expected to be efficient is that a higher percentage of accesses to the data storage device, such as a hard disk drive (HDD), are assumed to be writes, with frequently read data held in a local cache memory. However, this assumption can break down when data stored in the file system are read more frequently than anticipated.
- Embodiments of the invention include a method for file relocation on a data storage device. The method includes initiating file relocation in response to invoking a cleaner function for a data storage device. The method also includes examining metadata associated with a file on the data storage device to determine an access frequency of the file, and classifying the file as a function of the access frequency. The method further includes relocating the file to a fast region of the data storage device when the file is classified as frequently accessed, and relocating the file to a slow region of the data storage device when the file is classified as infrequently accessed.
- Additional embodiments include a system for file relocation on a data storage device. The system includes a storage controller in communication with a data storage device, and a cleaner function accessing the data storage device via the storage controller to relocate a file on the data storage device. The cleaner function examines metadata associated with the file on the data storage device to determine an access frequency of the file, and classifies the file as a function of the access frequency. The cleaner function also relocates the file to a fast region of the data storage device when the file is classified as frequently accessed, and relocates the file to a slow region of the data storage device when the file is classified as infrequently accessed.
- FIG. 1 depicts a system for file relocation on a data storage device in accordance with exemplary embodiments
- FIG. 2 depicts a hard disk drive for storing data in accordance with exemplary embodiments
- FIG. 3 depicts a platter partitioned into storage regions in accordance with exemplary embodiments
- FIG. 4 depicts an exemplary process for file relocation on a data storage device
- FIG. 5 depicts another system for file relocation on a data storage device in accordance with exemplary embodiments.
- Exemplary embodiments provide file relocation on a data storage device. In exemplary embodiments, a file system manager periodically initiates a cleaner function to relocate files on the data storage device and identify available space for future writes. Metadata associated with the files can be used to determine an access frequency for each file or a subset of files. The files may be classified in any number of groupings as a function of access frequency, such as a slow/infrequent access frequency, an intermediate access frequency, and/or a fast/frequent access frequency. Files with insufficient or unknown access frequency information can be assigned an intermediate access frequency until sufficient data is available to more accurately classify the file as infrequently or frequently accessed. For example, access frequency information for a newly written file is unavailable until either the file is accessed or a sufficient amount of time has elapsed to classify the file as infrequently accessed.
- Mapping physical locations of the data storage device as a function of access speed into storage regions enables the files to be relocated to regions best suited to their respective access frequencies. For example, it may be faster to access files closer to the exterior perimeter of a disk as compared to the interior perimeter of the disk. Therefore, placing frequently accessed files in a fast region of the disk, near the exterior disk perimeter, can improve average access time of the files. Similarly, placing infrequently accessed files in a slow region of the disk near the interior disk perimeter frees more space toward the exterior perimeter for storing files that are more frequently accessed, which also improves the average access time of the files.
- Turning now to FIG. 1 , there is a block diagram of a system 100 upon which file relocation on a data storage device is implemented in exemplary embodiments.
- the system 100 of FIG. 1 includes a host system 102 in communication with user systems 104 over a network 106 .
- the host system 102 is a high-speed processing device (e.g., a mainframe computer, a desktop computer, a laptop computer, or the like) including at least one processing circuit (e.g., a CPU) capable of reading and executing instructions, and handling numerous interaction requests from the user systems 104 as a shared physical resource.
- the host system 102 is an application specific computer, such as a digital video recorder (DVR).
- the host system 102 may perform as a file server for storing and accessing files.
- the host system 102 can also run other applications, and may serve as a Web server, applications server, and/or a database server.
- the user systems 104 comprise desktop, laptop, general-purpose computer devices, and/or I/O devices, such as keyboard and display devices, which provide an interface for communicating with the host system 102 .
- the user systems 104 represent one or more remote control devices sending commands to the host system 102 (e.g., a remote control for a DVR, with visual information displayed on a television screen). Users can initiate various tasks on the host system 102 via the user systems 104 , such as accessing and storing files.
- the single host system 102 may also represent a cluster of hosts collectively performing processes as described in greater detail herein.
- the network 106 may be any type of communications network known in the art.
- the network 106 may be an intranet, extranet, or an internetwork, such as the Internet, or a combination thereof.
- the network 106 can include wireless, wired, and/or fiber optic links.
- the host system 102 accesses and stores data in a data storage device 108 via a storage controller 110 .
- the data storage device 108 refers to any type of computer readable storage medium and may comprise a secondary storage element, e.g., hard disk drive (HDD), tape, or a storage subsystem that is internal or external to the host system 102 .
- Types of data that may be stored in the data storage device 108 include, for example, various files and databases. It will be understood that the data storage device 108 shown in FIG. 1 is provided for purposes of simplification and ease of explanation and is not to be construed as limiting in scope. To the contrary, there may be multiple data storage devices 108 utilized by the host system 102 .
- the storage controller 110 may be internal or external to the host system 102 .
- the storage controller 110 and the data storage device 108 can be packaged together in an HDD module.
- the storage controller 110 can be a card, assembly, or circuitry within the host system 102 .
- the data storage device 108 includes a file system 112 .
- the file system 112 may be organized in a variety of configurations, such as a log-structured file system (LFS), depending upon an operating system implementation on the host system 102 .
- the file system 112 can include numerous files 114 of varying sizes and types.
- the file system 112 tracks and stores information about the files 114 as file system metadata 116 .
- the file system metadata 116 may include information such as file name, physical location on the data storage device 108 , size, time and date data, access frequency, and other such information associated with the files 114 .
- the host system 102 executes various applications, including a file system manager 118 that controls read and write accesses to the file system 112 on the data storage device 108 via the storage controller 110 .
- the file system manager 118 determines when data to store 120 can be written to the data storage device 108 .
- the data to store 120 may represent an update to one of the existing files 114 or a new file to write to the file system 112 .
- the data to store 120 can originate from activities performed by a user of the user systems 104 .
- the file system manager 118 applies storage policies 122 to assist in determining where the data to store 120 should be written within the file system 112 , such as physical address locations on the data storage device 108 .
- the storage policies 122 may also include partitioning information for the data storage device 108 that define address ranges of varying speed regions of the data storage device 108 .
- the storage policies 122 can define a slow region partition and a fast region partition to assist in determining where to locate less frequently and more frequently accessed files, as determined relative to access threshold values.
- Access threshold values in the storage policies 122 may assist in classifying the files 114 based on their associated metadata in the file system metadata 116 . For example, a file may be classified as infrequently accessed when the file system metadata 116 indicates that the file has been accessed once within the past week, while a file accessed several times per minute can be classified as frequently accessed.
- Specific values defining access threshold values (number of accesses per unit of time) may be configured within the storage policies 122 to optimize system performance.
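The threshold-based classification described above can be sketched in code. The following is an illustrative sketch only, not the patent's implementation; the class name, field names, threshold units (accesses per day), and default values are all assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StoragePolicies:
    """Hypothetical access thresholds, expressed as accesses per day."""
    infrequent_threshold: float = 1.0   # at or below: infrequently accessed
    frequent_threshold: float = 100.0   # at or above: frequently accessed

def classify(accesses_per_day: Optional[float], policies: StoragePolicies) -> str:
    # A file with no access history yet defaults to the intermediate class
    # until sufficient data accumulates, as described above for new files.
    if accesses_per_day is None:
        return "intermediate"
    if accesses_per_day >= policies.frequent_threshold:
        return "frequent"
    if accesses_per_day <= policies.infrequent_threshold:
        return "infrequent"
    return "intermediate"
```

Because the thresholds live in the policy object, they can be tuned to optimize system performance without changing the classification logic itself.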
- a cleaner function 124 is periodically initiated to reallocate the files 114 on the data storage device 108 .
- the cleaner function 124 may examine the file system 112 to determine specific locations on the data storage device 108 that are in use and identify free space. While prior art cleaners may simply reorder the files 114 sequentially to remove unused space between the files 114 , the cleaner function 124 applies the storage policies 122 to organize files according to their respective access frequency.
- the access frequency of the files 114 is stored in the file system metadata 116 .
- the file system manager 118 and/or the storage controller 110 may update and maintain the file system metadata 116 , tracking accesses to the files 114 over a period of time.
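The kind of per-file bookkeeping the file system manager or storage controller might maintain could look like the sketch below. The field layout and the accesses-per-day metric are illustrative assumptions, not the actual format of the file system metadata 116:

```python
import time

class FileAccessRecord:
    """Sketch of per-file access tracking; fields are illustrative."""

    def __init__(self, now=None):
        self.created = now if now is not None else time.time()
        self.access_count = 0
        self.last_access = None

    def record_access(self, now=None):
        # Called on each read or write to the file.
        self.access_count += 1
        self.last_access = now if now is not None else time.time()

    def accesses_per_day(self, now=None):
        # Average access rate since the file was created.
        now = now if now is not None else time.time()
        elapsed_days = max((now - self.created) / 86400.0, 1e-9)
        return self.access_count / elapsed_days
```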
- Turning now to FIG. 2 , an HDD 200 including platters 202 for storing data that rotate about a spindle 204 is depicted.
- the HDD 200 represents an exemplary embodiment of the data storage device 108 upon which file relocation can be performed.
- Data can be written to and read from the HDD 200 from either side (top or bottom) of the platters 202 using a head stack assembly 206 .
- the head stack assembly 206 may include any number of arms, such as top arms 208 and bottom arms 210 .
- a top arm 208 and a bottom arm 210 are allocated to the top and bottom sides of each platter 202 respectively.
- a read/write head 212 is coupled to each of the top and bottom arms 208 and 210 ; however, only the read/write heads 212 coupled to the top arms 208 are visible in FIG. 2 .
- the read/write heads 212 can either read or write data to the platters 202 .
- the storage controller 110 of FIG. 1 may control the physical movement of the top and bottom arms 208 and 210 , aligning the read/write heads 212 to specifically targeted tracks, such as track 214 . Tracks, such as the track 214 , can be further subdivided in clusters, sectors, bytes, and bits (not depicted).
- Files such as the files 114 of FIG. 1 can be stored on a common platter 202 or distributed across multiple platters 202 of the HDD 200 . While FIG. 2 depicts a vertical stack of four platters 202 , it will be understood that numerous configurations are possible, including horizontal stacks, single-sided platters 202 , and a variable number of platters 202 .
- Turning now to FIG. 3 , a top view of one of the platters 202 of FIG. 2 rotating about the spindle 204 is depicted.
- the platter 202 of FIG. 3 can be partitioned into multiple regions according to the storage policies 122 of FIG. 1 .
- The regions include a fast region 302 , an intermediate region 304 , and a slow region 306 .
- Each of the regions can be established by programmable partition values, such as a fast region partition 308 and a slow region partition 310 . While the fast region partition 308 and the slow region partition 310 may be configurable values stored in the storage policies 122 of FIG. 1 , the regions can also be delimited by physical boundaries of the platter 202 , such as the exterior perimeter 312 and interior perimeter 314 . Since the amount of time to move one of the read/write heads 212 of FIG. 2 is greater towards the interior perimeter 314 , the slow region 306 can be defined as storage locations delimited by boundaries of the interior perimeter 314 and the slow region partition 310 . Similarly, since there is less delay in positioning one of the read/write heads 212 of FIG. 2 towards the exterior perimeter 312 , the fast region 302 can be defined as storage locations delimited by boundaries of the exterior perimeter 312 and the fast region partition 308 . Thus, the intermediate region 304 is defined as storage locations delimited by the fast region partition 308 and the slow region partition 310 .
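The partitioning of a platter into fast, intermediate, and slow regions can be illustrated with a small sketch that maps a track number to a region. Treating track 0 as the exterior perimeter and expressing the partitions 308 and 310 as fractions of the total track count are simplifying assumptions for illustration:

```python
def region_for_track(track: int, total_tracks: int,
                     fast_partition: float = 0.3,
                     slow_partition: float = 0.7) -> str:
    """Map a track to a region. Track 0 is assumed to lie at the exterior
    perimeter (fastest access); higher track numbers move toward the
    interior perimeter. Partition fractions are illustrative stand-ins
    for the configurable fast/slow region partitions."""
    position = track / total_tracks  # 0.0 = exterior, approaching 1.0 = interior
    if position < fast_partition:
        return "fast"
    if position < slow_partition:
        return "intermediate"
    return "slow"
```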
- Active files, such as active file 316 , can reside in any of the storage regions. The cleaner function 124 of FIG. 1 determines where each active file 316 should be located based on access frequency data associated with the active file 316 .
- the access frequency data may be held in file metadata 318 associated with the active file 316 , where the file metadata 318 is part of the file system metadata 116 of FIG. 1 .
- the active file 316 can be classified as frequently accessed when the file metadata 318 indicates that the active file 316 has been accessed more often than a frequent access threshold value defined in the storage policies 122 of FIG. 1 .
- the active file 316 can be classified as infrequently accessed when the file metadata 318 indicates that the active file 316 has been accessed less often than an infrequent access threshold value defined in the storage policies 122 of FIG. 1 .
- If the active file 316 has an access frequency between the infrequent access threshold value and the frequent access threshold value, then the active file 316 is classified as an intermediate access frequency file.
- Once the cleaner function 124 of FIG. 1 classifies the active file 316 , the active file 316 is relocated to the region that most closely matches the classification, e.g., frequently accessed files are moved to the fast region, while infrequently accessed files are moved to the slow region. It will be understood that any number of regions may be defined for a corresponding number of access frequency classifications, e.g., five regions.
- the file system manager 118 invokes the cleaner function 124 to initiate file relocation on the data storage device 108 .
- the file system manager 118 may invoke the cleaner function 124 at a fixed periodic interval, upon a specific request, or as a function of activity level. Activity level may be gauged relative to the amount of processing being performed on the host system 102 and/or the volume of read/write transactions initiated through the storage controller 110 , so as to avoid access contention and minimize delays.
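One possible gating rule combining a periodic interval with an activity-level check might look like the following sketch. The interval and busy threshold values are illustrative assumptions, not values from the disclosure:

```python
def should_invoke_cleaner(seconds_since_last_run: float,
                          recent_io_per_second: float,
                          interval: float = 3600.0,
                          busy_threshold: float = 100.0) -> bool:
    """Run the cleaner once the periodic interval has elapsed, but defer
    while the storage controller is busy, to avoid access contention.
    All parameter values here are hypothetical."""
    if seconds_since_last_run < interval:
        return False  # not yet due
    return recent_io_per_second < busy_threshold  # defer if I/O is heavy
```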
- the cleaner function 124 examines metadata associated with a file on the data storage device 108 to determine an access frequency of the file, such as the active file 316 of FIG. 3 .
- Metadata, such as the file metadata 318 of FIG. 3 , can express the access frequency in terms of reads per unit time, writes per unit time, or a combined metric.
- the cleaner function 124 classifies the file as a function of the access frequency.
- the classification may be performed relative to the storage policies 122 .
- File classification can be with respect to reads, writes, reads plus writes, or read/write access ratios. For example, a file that is “read heavy” is subjected to a larger number of read accesses relative to write accesses (e.g., a static configuration file), while a file that is “write heavy” experiences a smaller number of read accesses relative to write accesses (e.g., an unused log file). Classifying a read heavy file as frequently accessed and a write heavy file as infrequently accessed provides an additional organization scheme for relocating files to the fast and slow regions 302 and 306 . In alternate exemplary embodiments, the fast and slow regions 302 and 306 are further subdivided to group read and write heavy files within each region.
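A read/write-ratio classification along these lines might be sketched as follows; the function name and the heavy_ratio cutoff are illustrative assumptions:

```python
def rw_classification(reads: int, writes: int, heavy_ratio: float = 4.0) -> str:
    """Classify a file by its read/write mix. A file whose reads exceed
    its writes by heavy_ratio is 'read heavy' (e.g. a static configuration
    file); the inverse is 'write heavy' (e.g. a log that is appended but
    rarely read). max(..., 1) avoids division issues when a count is zero."""
    if reads >= heavy_ratio * max(writes, 1):
        return "read heavy"
    if writes >= heavy_ratio * max(reads, 1):
        return "write heavy"
    return "balanced"
```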
- the cleaner function 124 relocates the file to the fast region 302 of the data storage device 108 via the storage controller 110 when the file is classified as frequently accessed.
- the cleaner function 124 relocates the file to the slow region 306 of the data storage device 108 via the storage controller 110 when the file is classified as infrequently accessed. If the file is classified as intermediate access frequency, and the file is presently located in the intermediate region 304 , relocation need not be performed. When a file is relocated, the space previously occupied by the file on the data storage device 108 may be marked as available so the cleaner function 124 can reclaim the unused space.
- the cleaner function 124 may perform file relocation iteratively, operating on groups of multiple files 114 when the cleaner function 124 is invoked. If there are files classified as frequently accessed and other files classified as infrequently accessed, the cleaner function 124 may perform relocation of the infrequently accessed files first to provide more storage space on a faster portion of the data storage device 108 . Thus, relocating a file to the slow region 306 occurs prior to relocating a previously identified frequently accessed file to the fast region 302 when relocation of the previously identified frequently accessed file is pending.
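The ordering rule above, moves into the slow region before moves into the fast region, could be encoded as a simple sort, as in this sketch. The tuple representation of pending relocations and the priority table are assumptions made for illustration:

```python
def relocation_order(pending):
    """Order pending relocations so that files bound for the slow region
    move first, freeing space on the faster portion of the device before
    frequently accessed files are moved in. `pending` is a list of
    (file_name, target_region) tuples."""
    priority = {"slow": 0, "intermediate": 1, "fast": 2}
    # sorted() is stable, so ties keep their original relative order.
    return sorted(pending, key=lambda item: priority[item[1]])
```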
- the fast region 302 , intermediate region 304 , and slow region 306 can be defined on a per platter 202 basis. Additionally, the amount of storage space allocated to each region can vary between platters 202 and between the top and bottom side of each platter 202 . Files can be moved between each of the regions as the associated access data changes over time. Accordingly, a file that is located in the fast region 302 can sequentially migrate to the intermediate region 304 and then the slow region 306 as time elapses with minimal to no accesses of the file after a period of frequent accesses, e.g., a word processing document after a period of heavy editing. Thus, file relocation is a dynamic process that can establish and maintain an optimized file organization to minimize access delays in response to usage pattern.
- Turning now to FIG. 5 , a block diagram of a system 500 is depicted upon which file relocation on a data storage device is implemented in exemplary embodiments.
- the system 500 includes many of the same elements as the system 100 of FIG. 1 , performing substantially the same functions, including a host system 102 interconnected to user systems 104 via a network 106 .
- The storage controller 502 of FIG. 5 differs from the storage controller 110 of FIG. 1 in that the storage controller 502 has enhanced processing capabilities.
- the storage controller 502 performs the cleaner function 124 using the storage policies 122 independent of the host system 102 .
- the host system 102 is offloaded from tasks of executing the cleaner function 124 and directly managing the storage policies 122 .
- When the storage controller 502 is packaged together with the data storage device 108 , for example as an HDD module, the combined module can incorporate manufacturer specific information in the storage policies 122 without revealing internal details of specific fast and slow locations on the data storage device 108 .
- the storage controller 502 provides registers or other virtual address mapping features to support an address translation from the file system manager 118 to the physical addresses internal to the data storage device 108 .
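A minimal sketch of such a logical-to-physical translation table follows. It is illustrative only and does not reflect the register interface of any actual controller; the method names are hypothetical:

```python
class TranslationTable:
    """Sketch of the mapping a storage controller could maintain: the file
    system manager addresses stable logical blocks, and the controller
    remaps them when a file is relocated, without exposing the device's
    physical geometry."""

    def __init__(self):
        self._map = {}  # logical block -> physical block

    def bind(self, logical_block: int, physical_block: int) -> None:
        self._map[logical_block] = physical_block

    def relocate(self, logical_block: int, new_physical_block: int) -> None:
        # The logical address stays stable across relocation, so the host
        # side needs no update when the cleaner moves the underlying data.
        self._map[logical_block] = new_physical_block

    def resolve(self, logical_block: int) -> int:
        return self._map[logical_block]
```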
- the storage policies 122 may also be visible and/or modifiable as memory mapped registers through the storage controller 502 .
- the process 400 of FIG. 4 can be applied to a mixed memory device storage system, such as a Flash, EEPROM, and/or NOVRAM system that has different read/write times per device or per partitions associated with each device.
- the process 400 of FIG. 4 may be applied to a solid-state data storage device that includes internal partitions of differing access times.
- exemplary embodiments include relocating files on a data storage device dynamically to optimize access time.
- moving infrequently accessed files to a region of the data storage device with a slower access time, such as closer to the interior perimeter of an HDD platter creates a larger storage volume for files that are accessed at a fast and intermediate frequency.
- Performing file relocation periodically as a background task, e.g., via a cleaner function, provides enhanced functionality without spawning additional tasks or delaying each file write to perform reallocation at file write time.
- Incorporating a portion or all of the logic associated with file allocation into a storage controller for a data storage device can provide additional benefits, such as reducing the processing workload of a host system that stores files on the data storage device.
- embodiments can be embodied in the form of computer-implemented processes and apparatuses for practicing those processes.
- the invention is embodied in computer program code executed by one or more network elements.
- Embodiments include computer program code containing instructions embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, universal serial bus (USB) flash drives, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention.
- Embodiments include computer program code, for example, whether stored in a storage medium, loaded into and/or executed by a computer, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention.
- When implemented on a general-purpose microprocessor, the computer program code segments configure the microprocessor to create specific logic circuits.
Description
- Additional complications can arise when the data being read via the file system is read too infrequently or in such a large quantity that cache memory is ineffective. Using a strict sequential approach to writing data ignores potential delays that can occur when files are accessed at different frequencies at non-sequential locations. For example, new files may be written progressively in a sequential manner, but reads to other locations on the data storage device can occur at any location. Thus, frequent read/write head movement can still occur as the read/write head moves between read and write locations. Since a larger degree of movement causes a greater access delay, frequent read/write head movements can lead to significant access delays, even in an LFS.
- In order to remain competitive, computer system manufacturers are constantly looking for ways to improve system response time by reducing delays. Therefore, it would be beneficial to develop an approach to manage file locations on a data storage device that improves system responsiveness. Accordingly, there is a need in the art for file relocation on a data storage device to reduce data access time.
- Further embodiments include a computer program product for file relocation on a data storage device. The computer program product includes a storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for implementing a method. The method includes initiating file relocation in response to invoking a cleaner function for a data storage device. The method also includes examining metadata associated with a file on the data storage device to determine an access frequency of the file, and classifying the file as a function of the access frequency. The method further includes relocating the file to a fast region of the data storage device when the file is classified as frequently accessed, and relocating the file to a slow region of the data storage device when the file is classified as infrequently accessed.
- Other systems, methods, and/or computer program products according to embodiments will be or become apparent to one with skill in the art upon review of the following drawings and detailed description. It is intended that all such additional systems, methods, and/or computer program products be included within this description, be within the scope of the present invention, and be protected by the accompanying claims.
- The subject matter which is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 5 depicts another system for file relocation on a data storage device in accordance with exemplary embodiments. - The detailed description explains the preferred embodiments of the invention, together with advantages and features, by way of example with reference to the drawings.
- Exemplary embodiments provide file relocation on a data storage device. In exemplary embodiments, a file system manager periodically initiates a cleaner function to relocate files on the data storage device and identify available space for future writes. Metadata associated with the files can be used to determine an access frequency for each file or a subset of files. The files may be classified in any number of groupings as a function of access frequency, such as a slow/infrequent access frequency, an intermediate access frequency, and/or a fast/frequent access frequency. Files with insufficient or unknown access frequency information can be assigned as an intermediate access frequency file until sufficient data is available to more accurately classify the file as infrequently or frequently accessed. For example, access frequency information for a newly written file is unavailable until either the file is accessed or a sufficient amount of time has elapsed to classify the file as infrequently accessed.
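As an illustrative sketch only — the threshold values, time units, and function name below are assumptions for explanation, not part of the claimed embodiments — the grouping described above might look like:

```python
def classify_file(accesses_per_day, frequent_threshold=100.0, infrequent_threshold=1.0):
    """Group a file by access frequency; unknown history defaults to intermediate."""
    if accesses_per_day is None:
        # No usable access history yet (e.g., a newly written file): hold the
        # file in the intermediate class until it is accessed, or until enough
        # time elapses to classify it as infrequently accessed.
        return "intermediate"
    if accesses_per_day >= frequent_threshold:
        return "frequent"
    if accesses_per_day <= infrequent_threshold:
        return "infrequent"
    return "intermediate"
```

Any number of groupings could be produced the same way by adding thresholds, e.g., five classes for five storage regions.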
- Mapping physical locations of the data storage device as a function of access speed into storage regions enables the files to be relocated to regions best suited to their respective access frequencies. For example, it may be faster to access files closer to the exterior perimeter of a disk as compared to the interior perimeter of the disk. Therefore, placing frequently accessed files in a fast region of the disk, near the exterior disk perimeter, can improve average access time of the files. Similarly, placing infrequently accessed files in a slow region of the disk near the interior disk perimeter frees more space toward the exterior perimeter for storing files that are more frequently accessed, which also improves the average access time of the files.
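To make the mapping concrete, a minimal region lookup might key off a track's radial position, with larger radii (nearer the exterior perimeter) mapping to the fast region. The radii, partition values, and names here are invented for illustration and correspond to nothing in the figures:

```python
def region_for_track(radius_mm, fast_partition_mm=35.0, slow_partition_mm=25.0,
                     exterior_mm=47.0, interior_mm=15.0):
    """Map a track's radial position to a storage region (larger radius = faster)."""
    if not interior_mm <= radius_mm <= exterior_mm:
        raise ValueError("track radius outside the recordable band")
    if radius_mm >= fast_partition_mm:
        return "fast"          # exterior perimeter in to the fast region partition
    if radius_mm <= slow_partition_mm:
        return "slow"          # slow region partition in to the interior perimeter
    return "intermediate"      # between the two partitions
```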
- Turning now to the drawings, it will be seen that FIG. 1 depicts a block diagram of a system 100 upon which file relocation on a data storage device is implemented in exemplary embodiments. The system 100 of FIG. 1 includes a host system 102 in communication with user systems 104 over a network 106. In exemplary embodiments, the host system 102 is a high-speed processing device (e.g., a mainframe computer, a desktop computer, a laptop computer, or the like) including at least one processing circuit (e.g., a CPU) capable of reading and executing instructions, and handling numerous interaction requests from the user systems 104 as a shared physical resource. In alternative exemplary embodiments, the host system 102 is an application-specific computer, such as a digital video recorder (DVR). The host system 102 may perform as a file server for storing and accessing files. The host system 102 can also run other applications, and may serve as a Web server, applications server, and/or a database server. - In exemplary embodiments, the
user systems 104 comprise desktop, laptop, general-purpose computer devices, and/or I/O devices, such as keyboard and display devices, which provide an interface for communicating with the host system 102. In alternate exemplary embodiments, the user systems 104 represent one or more remote control devices sending commands to the host system 102 (e.g., a remote control for a DVR, with visual information displayed on a television screen). Users can initiate various tasks on the host system 102 via the user systems 104, such as accessing and storing files. - While only a
single host system 102 is shown in FIG. 1, it will be understood that multiple host systems can be implemented, each in communication with one another via direct coupling or via one or more networks. For example, multiple host systems may be interconnected through a distributed network architecture. The single host system 102 may also represent a cluster of hosts collectively performing processes as described in greater detail herein. - The
network 106 may be any type of communications network known in the art. For example, the network 106 may be an intranet, extranet, or an internetwork, such as the Internet, or a combination thereof. The network 106 can include wireless, wired, and/or fiber optic links. - In exemplary embodiments, the
host system 102 accesses and stores data in a data storage device 108 via a storage controller 110. The data storage device 108 refers to any type of computer readable storage medium and may comprise a secondary storage element, e.g., hard disk drive (HDD), tape, or a storage subsystem that is internal or external to the host system 102. Types of data that may be stored in the data storage device 108 include, for example, various files and databases. It will be understood that the data storage device 108 shown in FIG. 1 is provided for purposes of simplification and ease of explanation and is not to be construed as limiting in scope. To the contrary, there may be multiple data storage devices 108 utilized by the host system 102. The storage controller 110 may be internal or external to the host system 102. For example, the storage controller 110 and the data storage device 108 can be packaged together in an HDD module. Alternatively, the storage controller 110 can be a card, assembly, or circuitry within the host system 102. - In exemplary embodiments, the
data storage device 108 includes a file system 112. The file system 112 may be organized in a variety of configurations, such as a log-structured file system (LFS), depending upon an operating system implementation on the host system 102. The file system 112 can include numerous files 114 of varying sizes and types. The file system 112 tracks and stores information about the files 114 as file system metadata 116. The file system metadata 116 may include information such as file name, physical location on the data storage device 108, size, time and date data, access frequency, and other such information associated with the files 114. - In exemplary embodiments, the
host system 102 executes various applications, including a file system manager 118 that controls read and write accesses to the file system 112 on the data storage device 108 via the storage controller 110. The file system manager 118 determines when data to store 120 can be written to the data storage device 108. For example, the data to store 120 may represent an update to one of the existing files 114 or a new file to write to the file system 112. The data to store 120 can originate from activities performed by a user of the user systems 104. In exemplary embodiments, the file system manager 118 applies storage policies 122 to assist in determining where the data to store 120 should be written within the file system 112, such as physical address locations on the data storage device 108. The storage policies 122 may also include partitioning information for the data storage device 108 that defines address ranges of varying speed regions of the data storage device 108. For example, the storage policies 122 can define a slow region partition and a fast region partition to assist in determining where to locate less frequently and more frequently accessed files, as determined relative to access threshold values. Access threshold values in the storage policies 122 may assist in classifying the files 114 based on their associated metadata in the file system metadata 116. For example, a file may be classified as infrequently accessed when the file system metadata 116 indicates that the file has been accessed once within the past week, while a file accessed several times per minute can be classified as frequently accessed. Specific values defining access threshold values (number of accesses per unit of time) may be configured within the storage policies 122 to optimize system performance. - In exemplary embodiments, a
cleaner function 124 is periodically initiated to reallocate the files 114 on the data storage device 108. The cleaner function 124 may examine the file system 112 to determine specific locations on the data storage device 108 that are in use and identify free space. While prior art cleaners may simply reorder the files 114 sequentially to remove unused space between the files 114, the cleaner function 124 applies the storage policies 122 to organize files according to their respective access frequency. In exemplary embodiments, the access frequency of the files 114 is stored in the file system metadata 116. The file system manager 118 and/or the storage controller 110 may update and maintain the file system metadata 116, tracking accesses to the files 114 over a period of time. - Turning now to
FIG. 2, an HDD 200 including platters 202 for storing data that rotate about a spindle 204 is depicted. The HDD 200 represents an exemplary embodiment of the data storage device 108 upon which file relocation can be performed. Data can be written to and read from the HDD 200 from either side (top or bottom) of the platters 202 using a head stack assembly 206. The head stack assembly 206 may include any number of arms, such as top arms 208 and bottom arms 210. In exemplary embodiments, a top arm 208 and a bottom arm 210 are allocated to the top and bottom sides of each platter 202 respectively. A read/write head 212 is coupled to each of the top and bottom arms 208 and 210, although only those on the top arms 208 are visible in FIG. 2. As the platters 202 rotate about the spindle 204, the read/write heads 212 can either read or write data to the platters 202. The storage controller 110 of FIG. 1 may control the physical movement of the top and bottom arms 208 and 210 to position the read/write heads 212 over a desired track, such as the track 214. Tracks, such as the track 214, can be further subdivided into clusters, sectors, bytes, and bits (not depicted). Files, such as the files 114 of FIG. 1, can be stored on a common platter 202 or distributed across multiple platters 202 of the HDD 200. While FIG. 2 depicts a vertical stack of four platters 202, it will be understood that numerous configurations are possible, including horizontal stacks, single-sided platters 202, and a variable number of platters 202. - Turning now to
FIG. 3, a top view of one of the platters 202 of FIG. 2 rotating about the spindle 204 is depicted. The platter 202 of FIG. 3 can be partitioned into multiple regions according to the storage policies 122 of FIG. 1. In exemplary embodiments, regions include a fast region 302, an intermediate region 304, and a slow region 306. Each of the regions can be established by programmable partition values, such as a fast region partition 308 and a slow region partition 310. While the fast region partition 308 and the slow region partition 310 may be configurable values stored in the storage policies 122 of FIG. 1, the regions can also be delimited by physical boundaries of the platter 202, such as the exterior perimeter 312 and interior perimeter 314. Since the amount of time to move one of the read/write heads 212 of FIG. 2 is greater towards the interior perimeter 314, the slow region 306 can be defined as storage locations delimited by boundaries of the interior perimeter 314 and the slow region partition 310. Similarly, since there is less delay in positioning one of the read/write heads 212 of FIG. 2 towards the exterior perimeter 312, the fast region 302 can be defined as storage locations delimited by boundaries of the exterior perimeter 312 and the fast region partition 308. Thus, the intermediate region 304 is defined as storage locations delimited by the fast region partition 308 and the slow region partition 310. - As the
cleaner function 124 of FIG. 1 examines the files 114 on the data storage device 108, numerous files may be encountered. Active files, such as active file 316, are files 114 that have been created but not deleted in the file system 112 of FIG. 1. The cleaner function 124 of FIG. 1 determines where each active file 316 should be located based on access frequency data associated with the active file 316. The access frequency data may be held in file metadata 318 associated with the active file 316, where the file metadata 318 is part of the file system metadata 116 of FIG. 1. The active file 316 can be classified as frequently accessed when the file metadata 318 indicates that the active file 316 has been accessed more often than a frequent access threshold value defined in the storage policies 122 of FIG. 1. Alternatively, the active file 316 can be classified as infrequently accessed when the file metadata 318 indicates that the active file 316 has been accessed less often than an infrequent access threshold value defined in the storage policies 122 of FIG. 1. When the active file 316 has an access frequency between the infrequent access threshold value and the frequent access threshold value, then the active file 316 is classified as an intermediate access frequency file. Once the cleaner function 124 of FIG. 1 classifies the active file 316, the active file 316 is relocated to the region that most closely matches the classification, e.g., frequently accessed files are moved to the fast region, while infrequently accessed files are moved to the slow region. It will be understood that any number of regions may be defined for a corresponding number of access frequency classifications, e.g., five regions. - Turning now to
FIG. 4, a process 400 for file relocation on the data storage device 108 will now be described in accordance with exemplary embodiments, and in reference to FIGS. 1-3. At block 402, the file system manager 118 invokes the cleaner function 124 to initiate file relocation on the data storage device 108. The file system manager 118 may invoke the cleaner function 124 at a fixed periodic interval, upon a specific request, or as a function of activity level. Activity level may be gauged relative to the amount of processing being performed on the host system 102 and/or the volume of read/write transactions initiated through the storage controller 110, so as to avoid access contention and minimize delays. - At
block 404, the cleaner function 124 examines metadata associated with a file on the data storage device 108 to determine an access frequency of the file, such as the active file 316 of FIG. 3. Metadata, such as the file metadata 318 of FIG. 3, may be examined from a larger collection of metadata, e.g., the file system metadata 116. The access frequency can be in terms of reads per unit time, writes per unit time, or a combined metric. - At
block 406, the cleaner function 124 classifies the file as a function of the access frequency. The classification may be performed relative to the storage policies 122. File classification can be with respect to reads, writes, reads plus writes, or read/write access ratios. For example, a file that is "read heavy" is subjected to a larger number of read accesses relative to write accesses (e.g., a static configuration file), while a file that is "write heavy" experiences a smaller number of read accesses relative to write accesses (e.g., an unused log file). Classifying a read heavy file as frequently accessed and a write heavy file as infrequently accessed provides an additional organization scheme for relocating files to the fast and slow regions 302 and 306. - At
block 408, the cleaner function 124 relocates the file to the fast region 302 of the data storage device 108 via the storage controller 110 when the file is classified as frequently accessed. At block 410, the cleaner function 124 relocates the file to the slow region 306 of the data storage device 108 via the storage controller 110 when the file is classified as infrequently accessed. If the file is classified as intermediate access frequency, and the file is presently located in the intermediate region 304, relocation need not be performed. When a file is relocated, the space previously occupied by the file on the data storage device 108 may be marked as available so the cleaner function 124 can reclaim the unused space. The cleaner function 124 may perform file relocation iteratively, operating on groups of multiple files 114 when the cleaner function 124 is invoked. If there are files classified as frequently accessed and other files classified as infrequently accessed, the cleaner function 124 may perform relocation of the infrequently accessed files first to provide more storage space on a faster portion of the data storage device 108. Thus, relocating a file to the slow region 306 occurs prior to relocating a previously identified frequently accessed file to the fast region 302 when relocation of the previously identified frequently accessed file is pending. - If the
data storage device 108 is an HDD, such as the HDD 200 of FIG. 2, the fast region 302, intermediate region 304, and slow region 306 can be defined on a per platter 202 basis. Additionally, the amount of storage space allocated to each region can vary between platters 202 and between the top and bottom side of each platter 202. Files can be moved between each of the regions as the associated access data changes over time. Accordingly, a file that is located in the fast region 302 can sequentially migrate to the intermediate region 304 and then the slow region 306 as time elapses with minimal to no accesses of the file after a period of frequent accesses, e.g., a word processing document after a period of heavy editing. Thus, file relocation is a dynamic process that can establish and maintain an optimized file organization to minimize access delays in response to usage patterns. - Turning now to
FIG. 5, a block diagram of a system 500 is depicted upon which file relocation on a data storage device is implemented in exemplary embodiments. The system 500 includes many of the same elements as the system 100 of FIG. 1, performing substantially the same functions, including a host system 102 interconnected to user systems 104 via a network 106. However, the storage controller 502 of FIG. 5 differs from the storage controller 110 of FIG. 1 in that the storage controller 502 has enhanced processing capabilities. In exemplary embodiments, the storage controller 502 performs the cleaner function 124 using the storage policies 122 independent of the host system 102. Thus, the host system 102 is offloaded from tasks of executing the cleaner function 124 and directly managing the storage policies 122. When the storage controller 502 is packaged together with the data storage device 108, for example as an HDD module, the combined module can incorporate manufacturer-specific information in the storage policies 122 without revealing internal details of specific fast and slow locations on the data storage device 108. In exemplary embodiments, the storage controller 502 provides registers or other virtual address mapping features to support an address translation from the file system manager 118 to the physical addresses internal to the data storage device 108. The storage policies 122 may also be visible and/or modifiable as memory mapped registers through the storage controller 502. - While exemplary embodiments have been described in reference to a hard disk drive, the scope of the invention is not so limited. The inventive principles disclosed herein may apply to any data storage device where access time varies as a function of physical placement location on the data storage device. For example, the
process 400 of FIG. 4 can be applied to a mixed memory device storage system, such as a Flash, EEPROM, and/or NOVRAM system that has different read/write times per device or per partitions associated with each device. Alternatively, the process 400 of FIG. 4 may be applied to a solid-state data storage device that includes internal partitions of differing access times. - Technical effects of exemplary embodiments include relocating files on a data storage device dynamically to optimize access time. By moving frequently accessed files to a region of the data storage device with a faster access time, such as closer to the exterior perimeter of an HDD platter, average access time of the data storage device may be decreased. Similarly, moving infrequently accessed files to a region of the data storage device with a slower access time, such as closer to the interior perimeter of an HDD platter, creates a larger storage volume for files that are accessed at a fast and intermediate frequency. Performing file relocation periodically as a background task (e.g., a cleaner function) allows for optimizing present file placement as well as future file placement, since space is recovered from both deleted and reallocated files for future storage needs. In systems that include a simple periodically executing cleaner function, the addition of file reallocation to the cleaner function provides enhanced functionality without spawning additional tasks or delaying each file write to perform reallocation at file write time. Incorporating a portion or all of the logic associated with file allocation into a storage controller for a data storage device can provide additional benefits, such as reducing the processing workload of a host system that stores files on the data storage device.
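Pulling the pieces together, a single cleaner pass over the steps of the process 400 might be sketched as follows; everything here (names, thresholds, the shape of the metadata) is an illustrative assumption rather than the claimed implementation. Note that moves into the slow region are emitted before moves into the fast region, matching the ordering described for blocks 408-410:

```python
def cleaner_pass(file_metadata, frequent_threshold=100.0, infrequent_threshold=1.0):
    """One cleaner invocation: classify each file, then order the relocations.

    file_metadata: mapping of file name -> accesses per day (None = unknown).
    Returns an ordered list of (name, destination_region) moves; intermediate
    and unknown-frequency files are left where they are.
    """
    to_slow, to_fast = [], []
    for name, rate in file_metadata.items():
        if rate is not None and rate <= infrequent_threshold:
            to_slow.append((name, "slow"))
        elif rate is not None and rate >= frequent_threshold:
            to_fast.append((name, "fast"))
    # Relocate infrequently accessed files first, freeing space in the
    # faster portion of the device before the frequently accessed files move.
    return to_slow + to_fast

moves = cleaner_pass({"static.cfg": 500.0, "old.log": 0.1, "notes.txt": 10.0,
                      "new.dat": None})
```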
- As described above, embodiments can be embodied in the form of computer-implemented processes and apparatuses for practicing those processes. In exemplary embodiments, the invention is embodied in computer program code executed by one or more network elements. Embodiments include computer program code containing instructions embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, universal serial bus (USB) flash drives, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. Embodiments include computer program code, for example, whether stored in a storage medium, loaded into and/or executed by a computer, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. When implemented on a general-purpose microprocessor, the computer program code segments configure the microprocessor to create specific logic circuits.
- While the invention has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but that the invention will include all embodiments falling within the scope of the appended claims. Moreover, the use of the terms first, second, etc. do not denote any order or importance, but rather the terms first, second, etc. are used to distinguish one element from another. Furthermore, the use of the terms a, an, etc. do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced item.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/875,191 US8112603B2 (en) | 2007-10-19 | 2007-10-19 | Methods, systems, and computer program products for file relocation on a data storage device |
Publications (2)
Publication Number | Publication Date |
---|---|
US20090106518A1 true US20090106518A1 (en) | 2009-04-23 |
US8112603B2 US8112603B2 (en) | 2012-02-07 |
Family
ID=40564659
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/875,191 Expired - Fee Related US8112603B2 (en) | 2007-10-19 | 2007-10-19 | Methods, systems, and computer program products for file relocation on a data storage device |
Country Status (1)
Country | Link |
---|---|
US (1) | US8112603B2 (en) |
Cited By (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090043831A1 (en) * | 2007-08-11 | 2009-02-12 | Mcm Portfolio Llc | Smart Solid State Drive And Method For Handling Critical Files |
US20090271776A1 (en) * | 2008-04-25 | 2009-10-29 | Microsoft Corporation | Dynamic management of operating system resources |
US20090287751A1 (en) * | 2008-05-16 | 2009-11-19 | International Business Machines Corporation | Method and system for file relocation |
US20110087837A1 (en) * | 2009-10-10 | 2011-04-14 | International Business Machines Corporation | Secondary cache for write accumulation and coalescing |
US20120023145A1 (en) * | 2010-07-23 | 2012-01-26 | International Business Machines Corporation | Policy-based computer file management based on content-based analytics |
US20120203809A1 (en) * | 2009-11-03 | 2012-08-09 | Pspace Inc. | Apparatus and method for managing a file in a distributed storage system |
US8341339B1 (en) | 2010-06-14 | 2012-12-25 | Western Digital Technologies, Inc. | Hybrid drive garbage collecting a non-volatile semiconductor memory by migrating valid data to a disk |
US8429343B1 (en) | 2010-10-21 | 2013-04-23 | Western Digital Technologies, Inc. | Hybrid drive employing non-volatile semiconductor memory to facilitate refreshing disk |
US8427771B1 (en) | 2010-10-21 | 2013-04-23 | Western Digital Technologies, Inc. | Hybrid drive storing copy of data in non-volatile semiconductor memory for suspect disk data sectors |
US8510528B2 (en) | 2010-12-29 | 2013-08-13 | Teradata Us, Inc. | Differential data storage based on predicted access frequency |
US8560759B1 (en) | 2010-10-25 | 2013-10-15 | Western Digital Technologies, Inc. | Hybrid drive storing redundant copies of data on disk and in non-volatile semiconductor memory based on read frequency |
US8612798B1 (en) | 2010-10-21 | 2013-12-17 | Western Digital Technologies, Inc. | Hybrid drive storing write data in non-volatile semiconductor memory if write verify of disk fails |
US20140006401A1 (en) * | 2012-06-30 | 2014-01-02 | Microsoft Corporation | Classification of data in main memory database systems |
US8630056B1 (en) | 2011-09-12 | 2014-01-14 | Western Digital Technologies, Inc. | Hybrid drive adjusting spin-up profile based on cache status of non-volatile semiconductor memory |
US8639872B1 (en) | 2010-08-13 | 2014-01-28 | Western Digital Technologies, Inc. | Hybrid drive comprising write cache spanning non-volatile semiconductor memory and disk |
US8670205B1 (en) | 2010-09-29 | 2014-03-11 | Western Digital Technologies, Inc. | Hybrid drive changing power mode of disk channel when frequency of write data exceeds a threshold |
US8683295B1 (en) | 2010-08-31 | 2014-03-25 | Western Digital Technologies, Inc. | Hybrid drive writing extended error correction code symbols to disk for data sectors stored in non-volatile semiconductor memory |
US20140095790A1 (en) * | 2012-10-02 | 2014-04-03 | International Business Machines Corporation | Management of data using inheritable attributes |
US8699171B1 (en) | 2010-09-30 | 2014-04-15 | Western Digital Technologies, Inc. | Disk drive selecting head for write operation based on environmental condition |
US8775720B1 (en) | 2010-08-31 | 2014-07-08 | Western Digital Technologies, Inc. | Hybrid drive balancing execution times for non-volatile semiconductor memory and disk |
US8782334B1 (en) | 2010-09-10 | 2014-07-15 | Western Digital Technologies, Inc. | Hybrid drive copying disk cache to non-volatile semiconductor memory |
US8825976B1 (en) | 2010-09-28 | 2014-09-02 | Western Digital Technologies, Inc. | Hybrid drive executing biased migration policy during host boot to migrate data to a non-volatile semiconductor memory |
US8825977B1 (en) | 2010-09-28 | 2014-09-02 | Western Digital Technologies, Inc. | Hybrid drive writing copy of data to disk when non-volatile semiconductor memory nears end of life |
US8904091B1 (en) | 2011-12-22 | 2014-12-02 | Western Digital Technologies, Inc. | High performance media transport manager architecture for data storage systems |
US8909889B1 (en) | 2011-10-10 | 2014-12-09 | Western Digital Technologies, Inc. | Method and apparatus for servicing host commands by a disk drive |
US8917471B1 (en) | 2013-10-29 | 2014-12-23 | Western Digital Technologies, Inc. | Power management for data storage device |
US8959284B1 (en) | 2010-06-28 | 2015-02-17 | Western Digital Technologies, Inc. | Disk drive steering write data to write cache based on workload |
US8959281B1 (en) | 2012-11-09 | 2015-02-17 | Western Digital Technologies, Inc. | Data management for a storage device |
US8972680B2 (en) | 2012-01-23 | 2015-03-03 | International Business Machines Corporation | Data staging area |
US8977804B1 (en) | 2011-11-21 | 2015-03-10 | Western Digital Technologies, Inc. | Varying data redundancy in storage systems |
US8977803B2 (en) | 2011-11-21 | 2015-03-10 | Western Digital Technologies, Inc. | Disk drive data caching using a multi-tiered memory |
US20150127955A1 (en) * | 2008-02-27 | 2015-05-07 | Samsung Electronics Co., Ltd. | Method and apparatus for inputting/outputting virtual operating system from removable storage device on a host using virtualization technique |
US9058280B1 (en) | 2010-08-13 | 2015-06-16 | Western Digital Technologies, Inc. | Hybrid drive migrating data from disk to non-volatile semiconductor memory based on accumulated access time |
US9069475B1 (en) | 2010-10-26 | 2015-06-30 | Western Digital Technologies, Inc. | Hybrid drive selectively spinning up disk when powered on |
US9070379B2 (en) | 2013-08-28 | 2015-06-30 | Western Digital Technologies, Inc. | Data migration for data storage device |
US9141176B1 (en) | 2013-07-29 | 2015-09-22 | Western Digital Technologies, Inc. | Power management for data storage device |
US9146875B1 (en) | 2010-08-09 | 2015-09-29 | Western Digital Technologies, Inc. | Hybrid drive converting non-volatile semiconductor memory to read only based on life remaining |
US9268499B1 (en) | 2010-08-13 | 2016-02-23 | Western Digital Technologies, Inc. | Hybrid drive migrating high workload data from disk to non-volatile semiconductor memory |
US9268701B1 (en) | 2011-11-21 | 2016-02-23 | Western Digital Technologies, Inc. | Caching of data in data storage systems by managing the size of read and write cache based on a measurement of cache reliability |
US20160085643A1 (en) * | 2013-05-15 | 2016-03-24 | Amazon Technologies, Inc. | Managing contingency capacity of pooled resources in multiple availability zones |
US9323467B2 (en) | 2013-10-29 | 2016-04-26 | Western Digital Technologies, Inc. | Data storage device startup |
US20160285918A1 (en) * | 2015-03-29 | 2016-09-29 | Whitebox Security Ltd. | System and method for classifying documents based on access |
US20160366097A1 (en) * | 2014-02-27 | 2016-12-15 | Fujitsu Technology Solutions Intellectual Property Gmbh | Working method for a system and system |
US9542125B1 (en) * | 2012-09-25 | 2017-01-10 | EMC IP Holding Company LLC | Managing data relocation in storage systems |
US9785561B2 (en) | 2010-02-17 | 2017-10-10 | International Business Machines Corporation | Integrating a flash cache into large storage systems |
WO2018089085A1 (en) * | 2016-11-08 | 2018-05-17 | Micron Technology, Inc. | Data relocation in hybrid memory |
US20180286010A1 (en) * | 2017-04-01 | 2018-10-04 | Intel Corporation | Cache replacement mechanism |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8291186B2 (en) * | 2008-08-21 | 2012-10-16 | International Business Machines Corporation | Volume record data set optimization apparatus and method |
US9020892B2 (en) * | 2011-07-08 | 2015-04-28 | Microsoft Technology Licensing, Llc | Efficient metadata storage |
US9424864B2 (en) * | 2014-07-02 | 2016-08-23 | Western Digital Technologies, Inc. | Data management for a data storage device with zone relocation |
US10305979B2 (en) * | 2015-06-12 | 2019-05-28 | International Business Machines Corporation | Clone efficiency in a hybrid storage cloud environment |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5422762A (en) * | 1992-09-30 | 1995-06-06 | Hewlett-Packard Company | Method and apparatus for optimizing disk performance by locating a file directory on a middle track and distributing the file allocation tables close to clusters referenced in the tables |
US5799324A (en) * | 1996-05-10 | 1998-08-25 | International Business Machines Corporation | System and method for management of persistent data in a log-structured disk array |
US5991257A (en) * | 1996-02-02 | 1999-11-23 | Sony Corporation | Disk with zones of tracks segmented into data frames, with tracks closer to the disk edge having more frames, and a data recording/reproducing method and apparatus using such disk |
US6026463A (en) * | 1997-09-10 | 2000-02-15 | Micron Electronics, Inc. | Method for improving data transfer rates for user data stored on a disk storage device |
US6070225A (en) * | 1998-06-01 | 2000-05-30 | International Business Machines Corporation | Method and apparatus for optimizing access to coded indicia hierarchically stored on at least one surface of a cyclic, multitracked recording device |
US6327638B1 (en) * | 1998-06-30 | 2001-12-04 | Lsi Logic Corporation | Disk striping method and storage subsystem using same |
US6658201B1 (en) * | 1999-06-24 | 2003-12-02 | Sony Electronics, Inc. | Data storage device having multiple heads traveling in opposite directions for capacity and throughput optimization |
US6674598B2 (en) * | 2001-05-14 | 2004-01-06 | Hitachi Global Technologies | Radial positioning of data to improve hard disk drive reliability |
US7539820B2 (en) * | 2004-04-20 | 2009-05-26 | Hitachi Global Storage Technologies Netherlands B.V. | Disk device and control method for cache |
Application Events

- 2007-10-19: US application US11/875,191 filed; granted as patent US8112603B2; status: not active (Expired - Fee Related)
Cited By (70)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090043831A1 (en) * | 2007-08-11 | 2009-02-12 | Mcm Portfolio Llc | Smart Solid State Drive And Method For Handling Critical Files |
US20150127955A1 (en) * | 2008-02-27 | 2015-05-07 | Samsung Electronics Co., Ltd. | Method and apparatus for inputting/outputting virtual operating system from removable storage device on a host using virtualization technique |
US9164919B2 (en) * | 2008-02-27 | 2015-10-20 | Samsung Electronics Co., Ltd. | Method and apparatus for inputting/outputting virtual operating system from removable storage device on a host using virtualization technique |
US20090271776A1 (en) * | 2008-04-25 | 2009-10-29 | Microsoft Corporation | Dynamic management of operating system resources |
US8578364B2 (en) * | 2008-04-25 | 2013-11-05 | Microsoft Corporation | Dynamic management of operating system resources |
US20090287751A1 (en) * | 2008-05-16 | 2009-11-19 | International Business Machines Corporation | Method and system for file relocation |
US9710474B2 (en) | 2008-05-16 | 2017-07-18 | International Business Machines Corporation | Method and system for file relocation |
US9256272B2 (en) * | 2008-05-16 | 2016-02-09 | International Business Machines Corporation | Method and system for file relocation |
US8549225B2 (en) * | 2009-10-10 | 2013-10-01 | International Business Machines Corporation | Secondary cache for write accumulation and coalescing
US8255627B2 (en) * | 2009-10-10 | 2012-08-28 | International Business Machines Corporation | Secondary cache for write accumulation and coalescing |
US20110087837A1 (en) * | 2009-10-10 | 2011-04-14 | International Business Machines Corporation | Secondary cache for write accumulation and coalescing |
US20120203809A1 (en) * | 2009-11-03 | 2012-08-09 | Pspace Inc. | Apparatus and method for managing a file in a distributed storage system |
US8700684B2 (en) * | 2009-11-03 | 2014-04-15 | Pspace Inc. | Apparatus and method for managing a file in a distributed storage system |
CN102687112A (en) * | 2009-11-03 | 2012-09-19 | 皮斯佩斯有限公司 | Apparatus and method for managing a file in a distributed storage system |
US9785561B2 (en) | 2010-02-17 | 2017-10-10 | International Business Machines Corporation | Integrating a flash cache into large storage systems |
US8341339B1 (en) | 2010-06-14 | 2012-12-25 | Western Digital Technologies, Inc. | Hybrid drive garbage collecting a non-volatile semiconductor memory by migrating valid data to a disk |
US8959284B1 (en) | 2010-06-28 | 2015-02-17 | Western Digital Technologies, Inc. | Disk drive steering write data to write cache based on workload |
US20120023145A1 (en) * | 2010-07-23 | 2012-01-26 | International Business Machines Corporation | Policy-based computer file management based on content-based analytics |
US9146875B1 (en) | 2010-08-09 | 2015-09-29 | Western Digital Technologies, Inc. | Hybrid drive converting non-volatile semiconductor memory to read only based on life remaining |
US8639872B1 (en) | 2010-08-13 | 2014-01-28 | Western Digital Technologies, Inc. | Hybrid drive comprising write cache spanning non-volatile semiconductor memory and disk |
US9268499B1 (en) | 2010-08-13 | 2016-02-23 | Western Digital Technologies, Inc. | Hybrid drive migrating high workload data from disk to non-volatile semiconductor memory |
US9058280B1 (en) | 2010-08-13 | 2015-06-16 | Western Digital Technologies, Inc. | Hybrid drive migrating data from disk to non-volatile semiconductor memory based on accumulated access time |
US8683295B1 (en) | 2010-08-31 | 2014-03-25 | Western Digital Technologies, Inc. | Hybrid drive writing extended error correction code symbols to disk for data sectors stored in non-volatile semiconductor memory |
US8775720B1 (en) | 2010-08-31 | 2014-07-08 | Western Digital Technologies, Inc. | Hybrid drive balancing execution times for non-volatile semiconductor memory and disk |
US8782334B1 (en) | 2010-09-10 | 2014-07-15 | Western Digital Technologies, Inc. | Hybrid drive copying disk cache to non-volatile semiconductor memory |
US8825977B1 (en) | 2010-09-28 | 2014-09-02 | Western Digital Technologies, Inc. | Hybrid drive writing copy of data to disk when non-volatile semiconductor memory nears end of life |
US8825976B1 (en) | 2010-09-28 | 2014-09-02 | Western Digital Technologies, Inc. | Hybrid drive executing biased migration policy during host boot to migrate data to a non-volatile semiconductor memory |
US8670205B1 (en) | 2010-09-29 | 2014-03-11 | Western Digital Technologies, Inc. | Hybrid drive changing power mode of disk channel when frequency of write data exceeds a threshold |
US9117482B1 (en) | 2010-09-29 | 2015-08-25 | Western Digital Technologies, Inc. | Hybrid drive changing power mode of disk channel when frequency of write data exceeds a threshold |
US8699171B1 (en) | 2010-09-30 | 2014-04-15 | Western Digital Technologies, Inc. | Disk drive selecting head for write operation based on environmental condition |
US8612798B1 (en) | 2010-10-21 | 2013-12-17 | Western Digital Technologies, Inc. | Hybrid drive storing write data in non-volatile semiconductor memory if write verify of disk fails |
US8427771B1 (en) | 2010-10-21 | 2013-04-23 | Western Digital Technologies, Inc. | Hybrid drive storing copy of data in non-volatile semiconductor memory for suspect disk data sectors |
US8429343B1 (en) | 2010-10-21 | 2013-04-23 | Western Digital Technologies, Inc. | Hybrid drive employing non-volatile semiconductor memory to facilitate refreshing disk |
US8560759B1 (en) | 2010-10-25 | 2013-10-15 | Western Digital Technologies, Inc. | Hybrid drive storing redundant copies of data on disk and in non-volatile semiconductor memory based on read frequency |
US9069475B1 (en) | 2010-10-26 | 2015-06-30 | Western Digital Technologies, Inc. | Hybrid drive selectively spinning up disk when powered on |
US8510528B2 (en) | 2010-12-29 | 2013-08-13 | Teradata Us, Inc. | Differential data storage based on predicted access frequency |
US8630056B1 (en) | 2011-09-12 | 2014-01-14 | Western Digital Technologies, Inc. | Hybrid drive adjusting spin-up profile based on cache status of non-volatile semiconductor memory |
US8909889B1 (en) | 2011-10-10 | 2014-12-09 | Western Digital Technologies, Inc. | Method and apparatus for servicing host commands by a disk drive |
US8977803B2 (en) | 2011-11-21 | 2015-03-10 | Western Digital Technologies, Inc. | Disk drive data caching using a multi-tiered memory |
US9268657B1 (en) | 2011-11-21 | 2016-02-23 | Western Digital Technologies, Inc. | Varying data redundancy in storage systems |
US9268701B1 (en) | 2011-11-21 | 2016-02-23 | Western Digital Technologies, Inc. | Caching of data in data storage systems by managing the size of read and write cache based on a measurement of cache reliability |
US9898406B2 (en) | 2011-11-21 | 2018-02-20 | Western Digital Technologies, Inc. | Caching of data in data storage systems by managing the size of read and write cache based on a measurement of cache reliability |
US8977804B1 (en) | 2011-11-21 | 2015-03-10 | Western Digital Technologies, Inc. | Varying data redundancy in storage systems |
US8904091B1 (en) | 2011-12-22 | 2014-12-02 | Western Digital Technologies, Inc. | High performance media transport manager architecture for data storage systems |
US8972680B2 (en) | 2012-01-23 | 2015-03-03 | International Business Machines Corporation | Data staging area |
US9152575B2 (en) | 2012-01-23 | 2015-10-06 | International Business Machines Corporation | Data staging area |
US20140006401A1 (en) * | 2012-06-30 | 2014-01-02 | Microsoft Corporation | Classification of data in main memory database systems |
US9514174B2 (en) * | 2012-06-30 | 2016-12-06 | Microsoft Technology Licensing, Llc | Classification of data in main memory database systems |
US9892146B2 (en) | 2012-06-30 | 2018-02-13 | Microsoft Technology Licensing, Llc | Classification of data in main memory database systems |
US9542125B1 (en) * | 2012-09-25 | 2017-01-10 | EMC IP Holding Company LLC | Managing data relocation in storage systems |
US9026730B2 (en) * | 2012-10-02 | 2015-05-05 | International Business Machines Corporation | Management of data using inheritable attributes |
US9015413B2 (en) * | 2012-10-02 | 2015-04-21 | International Business Machines Corporation | Management of data using inheritable attributes |
US20140095790A1 (en) * | 2012-10-02 | 2014-04-03 | International Business Machines Corporation | Management of data using inheritable attributes |
US8959281B1 (en) | 2012-11-09 | 2015-02-17 | Western Digital Technologies, Inc. | Data management for a storage device |
US10474547B2 (en) | 2013-05-15 | 2019-11-12 | Amazon Technologies, Inc. | Managing contingency capacity of pooled resources in multiple availability zones |
US9529682B2 (en) * | 2013-05-15 | 2016-12-27 | Amazon Technologies, Inc. | Managing contingency capacity of pooled resources in multiple availability zones |
US20160085643A1 (en) * | 2013-05-15 | 2016-03-24 | Amazon Technologies, Inc. | Managing contingency capacity of pooled resources in multiple availability zones |
US9141176B1 (en) | 2013-07-29 | 2015-09-22 | Western Digital Technologies, Inc. | Power management for data storage device |
US9070379B2 (en) | 2013-08-28 | 2015-06-30 | Western Digital Technologies, Inc. | Data migration for data storage device |
US9323467B2 (en) | 2013-10-29 | 2016-04-26 | Western Digital Technologies, Inc. | Data storage device startup |
US8917471B1 (en) | 2013-10-29 | 2014-12-23 | Western Digital Technologies, Inc. | Power management for data storage device |
US9923868B2 (en) * | 2014-02-27 | 2018-03-20 | Fujitsu Technology Solutions Intellectual Property Gmbh | Working method for a system and system |
US20160366097A1 (en) * | 2014-02-27 | 2016-12-15 | Fujitsu Technology Solutions Intellectual Property Gmbh | Working method for a system and system |
US20160285918A1 (en) * | 2015-03-29 | 2016-09-29 | Whitebox Security Ltd. | System and method for classifying documents based on access |
WO2018089085A1 (en) * | 2016-11-08 | 2018-05-17 | Micron Technology, Inc. | Data relocation in hybrid memory |
CN109923530A (en) * | 2016-11-08 | 2019-06-21 | Micron Technology, Inc. | Data relocation in hybrid memory
US10649665B2 (en) | 2016-11-08 | 2020-05-12 | Micron Technology, Inc. | Data relocation in hybrid memory |
US20180286010A1 (en) * | 2017-04-01 | 2018-10-04 | Intel Corporation | Cache replacement mechanism |
US10713750B2 (en) * | 2017-04-01 | 2020-07-14 | Intel Corporation | Cache replacement mechanism |
US11373269B2 (en) | 2017-04-01 | 2022-06-28 | Intel Corporation | Cache replacement mechanism |
Also Published As
Publication number | Publication date |
---|---|
US8112603B2 (en) | 2012-02-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8112603B2 (en) | Methods, systems, and computer program products for file relocation on a data storage device | |
JP7089830B2 (en) | Devices, systems, and methods for write management of non-volatile memory data | |
JP6729914B2 (en) | Solid state storage drive, system, and method | |
US10346081B2 (en) | Handling data block migration to efficiently utilize higher performance tiers in a multi-tier storage environment | |
US8909887B1 (en) | Selective defragmentation based on IO hot spots | |
US6988165B2 (en) | System and method for intelligent write management of disk pages in cache checkpoint operations | |
US7409522B1 (en) | Method and system for reallocating data in a file system | |
US9477431B1 (en) | Managing storage space of storage tiers | |
KR101246982B1 (en) | Using external memory devices to improve system performance | |
US8627035B2 (en) | Dynamic storage tiering | |
US9658957B2 (en) | Systems and methods for managing data input/output operations | |
US8560801B1 (en) | Tiering aware data defragmentation | |
US8825980B2 (en) | Consideration of adjacent track interference and wide area adjacent track erasure during disk defragmentation | |
US10365845B1 (en) | Mapped raid restripe for improved drive utilization | |
US8069299B2 (en) | Banded indirection for nonvolatile memory devices | |
WO2014209234A1 (en) | Method and apparatus for hot data region optimized dynamic management | |
US11461287B2 (en) | Managing a file system within multiple LUNS while different LUN level policies are applied to the LUNS | |
US20130254508A1 (en) | Consideration of adjacent track interference and wide area adjacent track erasure during block allocation | |
US10198180B2 (en) | Method and apparatus for managing storage device | |
KR20180086120A (en) | Tail latency aware foreground garbage collection algorithm | |
US20130346724A1 (en) | Sequential block allocation in a memory | |
CN112988627A (en) | Storage device, storage system, and method of operating storage device | |
US20080270742A1 (en) | System and method for storage structure reorganization | |
JP2005196793A (en) | Method, system and product for reserving memory | |
US6948009B2 (en) | Method, system, and article of manufacture for increasing processor utilization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DOW, ELI M.;REEL/FRAME:019987/0291 Effective date: 20071018 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20200207 |