WO2014036078A2 - Dynamic central cache memory - Google Patents

Dynamic central cache memory

Info

Publication number
WO2014036078A2
WO2014036078A2 (PCT application PCT/US2013/056980)
Authority
WO
WIPO (PCT)
Prior art keywords
memory
module
cache module
request
resources
Prior art date
Application number
PCT/US2013/056980
Other languages
French (fr)
Other versions
WO2014036078A3 (en)
Inventor
Kimmo J. MYLLY
Original Assignee
Memory Technologies Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Memory Technologies Llc filed Critical Memory Technologies Llc
Publication of WO2014036078A2 publication Critical patent/WO2014036078A2/en
Publication of WO2014036078A3 publication Critical patent/WO2014036078A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0873 Mapping of cache memory to specific storage devices or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023 Free address space management
    • G06F12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0871 Allocation or management of cache space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72 Details relating to flash memory management
    • G06F2212/7207 Details relating to flash memory management management of metadata or control data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72 Details relating to flash memory management
    • G06F2212/7208 Multiple device management, e.g. distributing data over multiple flash devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory

Definitions

  • In a next step, the cache module receives from a memory module a request to use memory resource(s) in the cache device for a specific purpose.
  • In a next step 88, it is determined (by the cache module) whether the memory resource(s) were reserved for that network device. If that is the case, in a next step 90, the cache module would use the at least one reserved resource for implementing the request. If, however, it is determined that the memory resource(s) were not reserved for the memory module, in a step 92, the cache module would identify at least one resource of the available memory resources in the cache module to use for implementing the request, or could reply with a rejection message due to a lack of available resources.
  • The various exemplary embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof.
  • Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto.
  • While various aspects of the exemplary embodiments of this invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
  • The integrated circuit or circuits may comprise circuitry (as well as possibly firmware) for embodying at least one or more of a data processor or data processors, a digital signal processor or processors, baseband circuitry and radio frequency circuitry that are configurable so as to operate in accordance with the exemplary embodiments of this invention.

Abstract

The specification and drawings present a new apparatus, method and software related product for using a cache/central cache module/device (instead of, e.g., system DRAM) which can serve multiple memory modules/devices. Each memory/IO module/device connected to the same memory network (e.g., via hub, bus, etc.) may utilize memory resources of this cache module/device either in a fixed manner using pre-set allocation of resources per the memory module/device, or dynamically using run-time allocation of new resources to an existing module/device per its request or to a new module/device connecting to the memory network (e.g., comprised in a host device) and possibly requesting memory resources.

Description

DYNAMIC CENTRAL CACHE MEMORY
RELATED APPLICATIONS: This patent application claims priority to U.S. Utility patent application entitled "Dynamic Central Cache Memory" with Serial No. 13/596,480, filed August 28, 2012, which is fully incorporated herein by reference.
TECHNICAL FIELD:
The exemplary and non-limiting embodiments of this invention relate generally to memory storage systems, and, more specifically, relate to a central cache memory implementation.
BACKGROUND:
This section is intended to provide a background or context to the invention that is recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived, implemented or described. Therefore, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.
The following abbreviations that may be found in the specification and/or the drawing figures are defined as follows:
CPU central processing unit
D2D device-to-device
DRAM dynamic random access memory
eMMC embedded MultiMediaCard
FTL flash translation layer
HC host controller
HCI host controller interface
HW hardware
ID identification (number)
I/O, IO input/output
JEDEC joint electron device engineering council
LAN local area network
LTE long term evolution
LTE-A long term evolution advanced
MMM mass memory module
MMC MultiMediaCard
MRAM magnetic random access memory
NFC near field communication
NVM non-volatile memory (e.g., NAND)
OS operating system
P2L physical to logical
PCRAM phase change random access memory
PDA personal digital assistant
RAM random access memory
RRAM resistive random access memory
SATAIO serial advanced technology attachment international organization
SD secure digital (microSD is just one form factor)
SRAM static random access memory
SSD solid state disk
SW software
UFS universal flash storage
VM volatile memory
Various types of flash-based mass storage memories currently exist. A basic premise of so-called managedNAND mass storage memory is to hide the flash technology complexity from the host system. A technology such as eMMC is one example. A managedNAND type of memory can be, for example, an eMMC, SSD, UFS or a microSD.
Figure 1A reproduces Figure 2 from JEDEC Standard, Embedded MultiMediaCard (eMMC) Product Standard, High Capacity, JESD84-A42, June 2007, JEDEC Solid State Technology Association, and shows a functional block diagram of an eMMC. The JEDEC eMMC includes, in addition to the flash memory itself, an intelligent on-board controller that manages the MMC communication protocol. The controller also handles block-management functions such as logical block allocation and wear leveling. The interface includes a clock (CLK) input. Also included is a command (CMD), which is a bidirectional command channel, used for device initialization and command transfers. Commands are sent from a bus master to the device, and responses are sent from the device to the host. Also included is a bidirectional data bus (DAT[7:0]). The DAT signals operate in push-pull mode. By default, after power-up or RESET, only DAT0 is used for data transfer. The memory controller can configure a wider data bus for data transfer using either DAT[3:0] (4-bit mode) or DAT[7:0] (8-bit mode).
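As a rough illustration of the bus-width behaviour just described, the following C sketch models the three DAT configurations; the enum, helper name and bitmask convention are illustrative assumptions and are not taken from the JEDEC standard.

```c
#include <stdint.h>

/* Illustrative model of the eMMC data-bus widths described above; the names
 * and the bitmask convention are assumptions, not part of the JEDEC standard. */
typedef enum {
    EMMC_BUS_WIDTH_1 = 1,   /* default after power-up or RESET: only DAT0 */
    EMMC_BUS_WIDTH_4 = 4,   /* 4-bit mode: DAT[3:0]                       */
    EMMC_BUS_WIDTH_8 = 8    /* 8-bit mode: DAT[7:0]                       */
} emmc_bus_width_t;

/* Bitmask of DAT lines that carry data for a given width (bit n = DATn). */
static uint8_t emmc_active_dat_lines(emmc_bus_width_t width)
{
    return (uint8_t)((1u << width) - 1u);   /* 0x01, 0x0F or 0xFF */
}
```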
One non-limiting example of a flash memory controller construction is described in "A NAND Flash Memory Controller for SD/MMC Flash Memory Card", Chuan-Sheng Lin and Lan-Rong Dung, IEEE Transactions on Magnetics, Vol. 43, No. 2, February 2007, pp. 933-935 (hereafter referred to as Lin et al.). Figure 1B reproduces Figure 1 of Lin et al., and shows an overall block diagram of the NAND flash controller architecture for an SD/MMC card. The particular controller illustrated happens to use a w-bit parallel Bose-Chaudhuri-Hocquengham (BCH) error-correction code (ECC) designed to correct random bit errors of the flash memory, in conjunction with a code-banking mechanism.
Performance of the mass memory device and of the host device utilizing the mass memory device is highly dependent on the amount of resources that are available for the memory functions. Such resources have traditionally been the central processing unit (CPU), random access memory (RAM) and also non-volatile memory such as, for example, non-volatile execution memory (NOR) or non-volatile mass memory (NAND). Resource availability also affects reliability and usability of the mass memory device. Most host/mass memory systems currently in commerce utilize a fixed allocation of resources. In traditional memory arrangements the CPU has some means to connect to the RAM and to the non-volatile memory, and these memories themselves have the resources needed for their own internal operations. But since that paradigm became prevalent the variety of resources has greatly increased; for example, it is now common for there to be multi-core CPUs, main/slave processors, graphics accelerators, and the like.
In the managedNAND type of memory (such as eMMC, SSD, UFS, microSD) the memory controller can take care of the flash management functions like bad block management and wear leveling. In a typical low-cost implementation there is only a small IO buffer/work memory (SRAM) in the managedNAND, embedded in the controller. For example, in higher-end managedNANDs like SSDs there may be tens to hundreds of megabits of discrete DRAM as cache, but in the future some new memory technology like MRAM could serve as a very fast non-volatile cache as well.
Co-owned US Patent Application 12/455,763 (filed June 4, 2009) details an example in which there is one NAND where the NAND flash translation layer (FTL, a specification by the Personal Computer Memory Card International Association PCMCIA which provides for a P2L mapping table, wear leveling, etc.) runs side by side with the main CPU. Co-owned US Patent Application 13/358,806 (filed January 26, 2012) details examples in which eMMC and UFS components could also use system dynamic random access memory (DRAM) for various purposes, in which case the system CPU would not do any relevant memory processing.
SUMMARY
According to a first embodiment of the invention, a method, comprising: reserving by at least one cache module of a host device, or receiving a reservation from the host device for, memory resource allocations in the at least one cache module individually reserved for one or more of a plurality of memory modules comprised in the host device; receiving by the at least one cache module from at least one module of the plurality of memory modules a request to use memory resources available in the at least one cache module; implementing the request by the at least one cache module using at least one resource of the memory resources in the at least one cache module, wherein the at least one resource of the memory resources in the at least one cache module is previously reserved for the at least one module, or dynamically identified by the at least one cache module.
According to a second embodiment of the invention, a method, comprising: reserving by a host device during manufacturing stage of the host device a memory resource allocation in at least one cache module individually for one or more memory modules comprised in the host device using information about each of the one or more memory modules comprising at least one or more of: a device identification, a device class identification and a device memory type of the at least one module; and providing by the host device the reserved memory resource allocation to the at least one cache module and corresponding individual memory resource allocations along with an identification of the at least one cache module individually to each of the one or more memory modules.
According to a third embodiment of the invention, an apparatus comprising: at least one controller and a memory optionally storing a set of computer instructions, in which the controller and the memory optionally storing the computer instructions are configured to cause the apparatus to: reserve by the apparatus in a host device, or receive a reservation from the host device for, memory resource allocations in the apparatus individually reserved for one or more of a plurality of memory modules comprised in the host device; receive from at least one module of the plurality of memory modules a request to use memory resources available in the apparatus;
implement the request using at least one resource of the memory resources in the apparatus, wherein the at least one resource of the memory resources in the apparatus is previously reserved for the at least one module, or dynamically identified by the apparatus.
According to a fourth embodiment of the invention, an apparatus comprising: at least one processor and a memory storing a set of computer instructions, in which the processor and the memory storing the computer instructions are configured to cause the apparatus to: reserve during manufacturing stage of the apparatus a memory resource allocation in at least one cache module individually for one or more memory modules comprised in the apparatus using information about each of the one or more memory modules comprising at least one or more of: a device identification, a device class identification and a device memory type of the at least one module; and provide the reserved memory resource allocation to the at least one cache module and corresponding individual memory resource allocations along with an identification of the at least one cache module individually to each of the one or more memory modules.
BRIEF DESCRIPTION OF THE DRAWINGS
In the attached Drawing Figures:
Figure 1A reproduces Figure 2 from JEDEC Standard, Embedded MultiMediaCard (eMMC) Product Standard, High Capacity, JESD84-A42, June 2007, JEDEC Solid State Technology Association, and shows a functional block diagram of an eMMC;
Figure 1B reproduces Figure 1 of Lin et al., and shows an example of an overall block diagram of a NAND flash controller architecture for an SD/MMC card;
Figure 2 is a block diagram of a managedNAND memory module inside a host mobile device;
Figure 3 is a block diagram of a host mobile device with a (central) cache module and a plurality of memory/IO modules inside of the host mobile device, according to an exemplary embodiment;
Figure 4 is a memory map of a cache memory, according to an exemplary embodiment;
Figure 5 is a table of different configuration phase options with a cache module according to selected embodiments;
Figures 6a-6d are diagrams of different topology options using a cache module/device, according to exemplary embodiments; and
Figure 7 is a flow chart demonstrating an exemplary embodiment performed by a cache module.
DETAILED DESCRIPTION
Frequently the embedded memory in the controller is not sufficient to store all the run time data needed by the memory module and thus some portion of the run time data is stored/mirrored in a non-volatile memory (e.g., NAND) of the module. This is also necessary to avoid loss of critical (operation) data in case of sudden power down. The non-volatile memory, such as NAND, is very slow for storing/reading such data compared to typical execution memories like SRAM/DRAM. This causes delay to operation of the memory module and wears out the mass memory faster. For example, after power up the whole mass memory subsystem needs to be re-initialized from NAND and this may take up to 1 second (e.g., in the eMMC, SD, SATAIO devices). The issue is even more troublesome in case of introduction of networked memory (or other, like IO) devices connected to some port in a host chipset, each memory/IO device lacking some resources.
Figure 2 shows an exemplary block diagram of a conventionally managed NAND memory module (mass memory module) 20 inside a host mobile device 10.
A portion of the system RAM (e.g., DRAM) 14 (having a control logic 15) may be allocated for use by the mass memory module 20 (described here in a non-limiting embodiment as a UFS memory module or a memory module). The host device 10 includes an application processor that can be embodied as a CPU 12.
Included with or coupled to the application processor 12 may be a DRAM controller 11 for the DRAM 14 (communication with the DRAM 14 is through an interface 18). Also present are the above-mentioned mass memory module 20 (e.g., UFS module) and a host controller 13. The host controller 13 can be embodied as the CPU 12 or it can be embodied as a separate device. The mass memory module (MMM) 20 may be connected to the host device through an interface 16, e.g., via a bus (e.g., the mass storage memory bus). Also the memory module 20 can be a part of the host device 10 as shown in Figure 2 or it may be a separate device (e.g., a memory card).
Furthermore, the memory module 20 may comprise a non-volatile memory NAND 26 (or in general mass memory, flash memory, etc.) with a portion 26A allocated for the memory controller and a memory controller 22 with an SRAM 24. It should be noted that an execution memory 24 of the memory controller 22 and/or the host system memory 14 could be a non-volatile memory such as MRAM, PCRAM and/or RRAM.
A new method, apparatus and software related product (e.g., a computer readable memory) are presented for using a cache/central cache module/device (instead of, e.g., system DRAM) which can serve multiple memory modules/devices. Each memory/IO module/device connected to the same memory network (e.g., via hub, bus, etc.) may utilize memory resources of this cache module/device either in a fixed manner using pre-set allocation of resources per the memory module/device, or dynamically using run-time allocation of new resources to an existing module/device per its request or to a new module/device connecting to the memory network (e.g., comprised in a host device) and possibly requesting memory resources. The host device may be a computer, a cellular phone, a digital camera, wireless or wired device, a gaming device or a PDA, as several non-limiting examples.
Figure 3 illustrates an exemplary non-limiting embodiment of a wireless host device 10a (such as a mobile phone) comprising a central cache memory module/device (cache module) 70. An RF module 17 (with an RF antenna) can provide wireless communication for the host device 10a. It is noted that those components described in reference to Figure 2 are numbered accordingly.
The cache device 70 may comprise a cache memory 76 (e.g., comprising volatile and/or non-volatile memory, DRAM/MRAM, etc.) and a cache memory controller 72 (which may be a small processor, a logical circuit or the like). The cache module 70 can communicate through a hub 60 such as a memory bus with a plurality of memory modules/devices 20, 30 and 40 (e.g., UFS modules/devices) and with a memory host controller 13 comprised in the host device processor 12 as shown in Figure 3.
The UFS memory module 20 is described in reference to Figure 2 with an exception that, for example, managing operational state data for this module is provided by the cache module 70 (e.g., through the hub 60) and not by the host controller 13/system DRAM 14 (as in Figure 2).
Another memory module/device served by the cache module 70 according to embodiments described herein may be an IO memory module 30 which can have an IO controller 32 with an SRAM 34 and radio antenna/capabilities to provide wireless communications (in the network of memory/IO devices) in the host device 10a with other wireless devices/networks. It is noted that the module/device 30 may be a general purpose I/O module/device (having or not having a separate memory) in the host device 10a, but since the module/device 30 is served by the cache module 70, it is a part of a memory network comprising modules 20, 40 and 70. Therefore for the purpose of this invention all devices 20, 30 and 40 can be identified as memory modules/devices. In other words the term "memory/IO modules/devices" is equivalent to the term "memory modules/devices". Yet a further memory module shown in Figure 3 served by the cache module 70 according to embodiments described herein may be a memory (removable) card 40 inserted in a card slot 50 of the host device 10a. Furthermore, the memory card 40 may comprise a non-volatile NAND memory 46 (or mass memory, flash memory etc.) with a portion 46A allocated for the memory controller and a memory controller 42 with an SRAM 44. It should be noted that an execution memory 44 of the memory controller 42 could be a non-volatile memory such as MRAM, PCRAM and/or RRAM.
Each of the modules 20, 30, 40 and the central cache device 70 in the network 10a shown in Figure 3 has a device ID number (device identification). Any new device connected to the network will have a device ID. Every device may also have a ClassID (device class identification) so that the type of device can be recognized (e.g., ClassID1 is mass storage, ClassID2 is an IO module/device, ClassID9 is a central cache module/device, etc.). Yet another ID could relate to a memory type, for example MemoryTypeID1 is a non-volatile memory, MemoryTypeID2 is a volatile memory. The cache memory 76 may comprise the volatile memory and/or the non-volatile memory.
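To make the identification scheme concrete, the following C sketch encodes the DeviceID/ClassID/MemoryTypeID triple described above. Only the example values given in the text (ClassID1, ClassID2, ClassID9, MemoryTypeID1, MemoryTypeID2) come from the description; the type names and the 16-bit DeviceID width are assumptions.

```c
#include <stdint.h>

/* Hypothetical encoding of the identifiers described above. Only the example
 * values (ClassID1/2/9, MemoryTypeID1/2) come from the text; the rest is assumed. */
typedef uint16_t device_id_t;           /* unique per device on the memory network */

typedef enum {
    CLASS_ID_MASS_STORAGE  = 1,         /* ClassID1: mass storage module  */
    CLASS_ID_IO_MODULE     = 2,         /* ClassID2: IO module/device     */
    CLASS_ID_CENTRAL_CACHE = 9          /* ClassID9: central cache module */
} class_id_t;

typedef enum {
    MEM_TYPE_NON_VOLATILE = 1,          /* MemoryTypeID1 */
    MEM_TYPE_VOLATILE     = 2           /* MemoryTypeID2 */
} memory_type_id_t;

/* Identity record a module could present during an identity interrogation. */
typedef struct {
    device_id_t      device_id;
    class_id_t       class_id;
    memory_type_id_t memory_type;
} module_identity_t;
```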
In one embodiment, a pre-set allocation of memory resources for one or more memory modules (e.g., modules 20 and 30 of the host device 10a in Figure 3) may be done in the host device 10a during the manufacturing stage. For example, at the time of production of a mobile device the components connected to the memory network (of memory/IO devices) are known, as are their Device IDs, Class IDs and Type IDs. Therefore the cache module/device 70 could be configured (e.g., by the manufacturing programmer/SW via the host controller 13) already in the production stage of the mobile device 10a so that different memory module/device IDs known to the cache module 70 would have a certain allocation of resources in the cache module 70 (i.e., in the cache memory 76). Correspondingly, the modules/devices 20 and 30 may be configured so that they know the resources which have been allocated to them and at which device ID address they can find their resources (e.g., this information may be stored in a register to which the corresponding memory module has access).
Thus one or more memory resources in at least one cache module 70 may be reserved for one or more of the plurality of memory modules 20 and 30 of the host device 10a during manufacturing stage of the host device. The configuring of the one or more modules (e.g., modules 20 and 30) may be done by the host device at the manufacturing stage and then optionally confirmed at first powering of the host device 10a (which may not be necessary if the cache ID and allocation information is stored in the memory devices during manufacturing stage, as described herein). In other words, at the first powering of the host device, the at least one cache module 70 may receive the one or more identity interrogation requests from the one or more of the plurality of memory modules such as the memory/IO modules 20 and 30. The cache module may respond to the identity interrogation requests by handshakes with the one or more of the plurality of memory modules which can include providing the identification of the at least one cache module and may further include providing an identity of the reserved one or more memory resources to the corresponding one or more of the plurality of memory modules (modules 20 and 30).
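A minimal sketch of how the cache memory controller 72 might hold the production-time reservations and answer identity interrogations at first powering is given below; the table layout, sizes, the assumed cache DeviceID and the function names are illustrative assumptions only.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define CACHE_DEVICE_ID  0x70u    /* assumed DeviceID of the cache module 70 */
#define MAX_RESERVATIONS 8u

/* One pre-set allocation in the cache memory 76, keyed by the owner's DeviceID,
 * as it might be written by the manufacturing programmer/SW via host controller 13. */
typedef struct {
    uint16_t owner_device_id;     /* module the area is reserved for (0 = unused slot) */
    uint32_t offset;              /* start of the reserved area inside cache memory 76 */
    uint32_t length;              /* size of the reserved area in bytes                */
    bool     non_volatile;        /* true if the area lies in a non-volatile sector    */
} cache_reservation_t;

static cache_reservation_t reservation_table[MAX_RESERVATIONS];

/* Handshake reply to an identity interrogation: return the cache module's own
 * DeviceID and, if one exists, the interrogating module's pre-set reservation. */
static const cache_reservation_t *
handle_identity_interrogation(uint16_t requester_id, uint16_t *cache_id_out)
{
    *cache_id_out = CACHE_DEVICE_ID;
    for (size_t i = 0; i < MAX_RESERVATIONS; i++) {
        if (reservation_table[i].owner_device_id == requester_id)
            return &reservation_table[i];
    }
    return NULL;                  /* no pre-set allocation for this module */
}
```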
In another embodiment, if pre-allocation/reservation of the memory resource in the cache module 70 is not performed during the manufacturing stage of the host device, it can be performed by the cache module 70 when requested by the memory modules in the network, typically at first powering or when cache memory assistance is needed. In this case, the at least one cache module 70 may receive the one or more identity interrogation requests from the one or more of the plurality of memory modules such as memory/IO modules 20 and 30. Then the cache module can make appropriate resource reservations for the one or more memory modules and respond to the identity interrogation requests by handshakes with the one or more of the plurality of memory modules, which can include providing the identification of the at least one cache module 70 and may further include providing an identity of the reserved one or more memory resources to the corresponding one or more of the plurality of memory modules (modules 20 and 30) as described herein.
In a further embodiment, a dynamic run-time allocation of memory resources in the cache module 70 for a particular memory module/device in the memory network may be performed in response to a specific request to use memory resources available in the cache module 70 from at least one memory module/device (e.g., module 20 or 30). The request may comprise at least a device identification, a device class identification and/or a device memory type of the at least one network device. Also the request may comprise an indication of a memory type (e.g., volatile or nonvolatile) which is needed.
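One possible shape for such a request is sketched below in C. The field set follows the items listed in the text (device identification, class identification, device memory type, needed memory type), while the operation code, length and location fields anticipate the write/read cases discussed next; the exact wire format is not specified by the description and is assumed here.

```c
#include <stdint.h>

/* Hypothetical resource-use request sent to the cache module 70. The wire
 * format is an assumption; the fields mirror the items named in the text. */
typedef enum {
    CACHE_OP_ALLOC = 0,            /* just reserve/identify resources          */
    CACHE_OP_WRITE = 1,            /* write data into a cache resource         */
    CACHE_OP_READ  = 2             /* read back previously written data        */
} cache_op_t;

typedef struct {
    uint16_t   device_id;          /* requesting module's DeviceID             */
    uint8_t    class_id;           /* requesting module's ClassID              */
    uint8_t    own_memory_type;    /* requesting module's device memory type   */
    uint8_t    wanted_memory_type; /* memory type needed in the cache (NVM/VM) */
    cache_op_t op;                 /* requested operation                      */
    uint32_t   length;             /* number of bytes involved                 */
    uint32_t   location;           /* optional location/identity of a resource */
} cache_request_t;
```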
The request processing by the cache memory controller 72 of the cache module 70 may comprise determining whether one or more resources of the available memory resources were previously reserved for the at least one memory module making the request, and if that is the case, using at least one reserved resource of the one or more reserved resources for implementing the request.
However, if no memory resources were previously reserved for the at least one memory module (e.g., like it may be for the memory card 40 as further discussed herein), then the cache module 70 may identify at least one available resource or multiple resources (not reserved or used) of the memory resources in the cache module 70 for responding to the request from the at least one memory module.
Also according to another embodiment, if the one or more resources were previously reserved for the at least one module but are already all used or are not sufficient for implementing the request, the cache module 70 may identify at least one available resource (or more than one resource if needed) of the memory resources not previously reserved, and use the identified resource(s) for implementing the request.
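The decision just described (use a pre-set reservation when it exists and suffices, otherwise fall back to resources not previously reserved, otherwise reject) can be sketched as follows; the table, the bump-style free pool standing in for the unreserved region, and all sizes are assumptions made for illustration only.

```c
#include <stdint.h>
#include <stddef.h>

typedef enum { CACHE_OK_RESERVED, CACHE_OK_DYNAMIC, CACHE_REJECT_NO_RESOURCES } cache_status_t;

/* Pre-set allocations (e.g., areas 76A-76C) plus a free region standing in for
 * area 76D; the layout and the bump allocation are illustrative assumptions. */
typedef struct { uint16_t owner; uint32_t offset, length, used; } reserved_area_t;

#define N_RESERVED 4u
static reserved_area_t reserved[N_RESERVED];

static uint32_t free_base = 0x30000u;   /* assumed start of the unreserved region */
static uint32_t free_top  = 0x40000u;   /* assumed end of the unreserved region   */

static cache_status_t serve_request(uint16_t device_id, uint32_t length, uint32_t *offset_out)
{
    /* First choice: a resource previously reserved for this module, if it still fits. */
    for (size_t i = 0; i < N_RESERVED; i++) {
        reserved_area_t *r = &reserved[i];
        if (r->owner == device_id && r->length - r->used >= length) {
            *offset_out = r->offset + r->used;
            r->used += length;
            return CACHE_OK_RESERVED;
        }
    }
    /* Otherwise: dynamically identify resources not previously reserved. */
    if (free_top - free_base >= length) {
        *offset_out = free_base;
        free_base += length;
        return CACHE_OK_DYNAMIC;
    }
    return CACHE_REJECT_NO_RESOURCES;   /* lack of available resources */
}
```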
Moreover, the request from the at least one memory module/device may comprise a writing operation, so that the writing operation can be performed by the cache module 70 using the at least one resource (reserved or identified) at the cache module. Also the request may optionally comprise a location or an identity of at least one memory resource in the cache module to use for the requested writing operation.
Also, the request from the at least one memory device may comprise a reading operation, and an identity and/or a location of the requested information (previously written) in the cache memory 76, so that the cache memory controller 72 can identify one or more resources where the requested information is stored and perform the reading operation.
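A minimal sketch of the corresponding write and read paths into the cache memory 76 follows; the flat byte array, its size and the function names are stand-ins chosen only for illustration.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define CACHE_SIZE (256u * 1024u)          /* assumed size; stands in for cache memory 76 */
static uint8_t cache_mem[CACHE_SIZE];

/* Writing operation: data lands in the resource (reserved or dynamically
 * identified) whose location the requester may have supplied. */
static bool cache_write(uint32_t offset, const void *data, uint32_t length)
{
    if (length > CACHE_SIZE || offset > CACHE_SIZE - length)
        return false;
    memcpy(&cache_mem[offset], data, length);
    return true;
}

/* Reading operation: the requester identifies previously written data by its
 * identity/location, and the controller copies it back out. */
static bool cache_read(uint32_t offset, void *out, uint32_t length)
{
    if (length > CACHE_SIZE || offset > CACHE_SIZE - length)
        return false;
    memcpy(out, &cache_mem[offset], length);
    return true;
}
```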
According to a further embodiment, when a new memory module/device (removable module/device) like the memory card 40 is added to the host device 10a as shown in Figure 3, this added memory card 40 can send to the cache module 70 an identity interrogation request to identify the cache memory module/device available in the memory network. Then only the cache module 70 will respond to the identity request from the memory card 40 by a handshake with the added memory card 40, which may include providing identification of the cache module 70 to the added memory card 40. Also optionally, the cache module 70 may reserve at least one further resource (or multiple resources) of the available memory resources in the cache memory 76 for the added memory card 40 and provide an identity of these reserved resource(s) to the added memory card 40.
It is further noted that optionally the identity interrogation request sent by the added memory card 40 may also comprise a request to use memory resources in the cache module 70. In this case, the cache module 70 may identify at least one available resource or multiple resources (not reserved or used) of the memory resources in the cache module 70 for responding to the request from the added memory card 40. These resources used for responding to the request may then be reserved permanently in the cache module 70 for the added memory card 40 for future transactions until the memory card 40 is removed from the host device 10a.
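The hot-plug case for a removable card can be sketched as below; the transport functions are placeholders, and the DeviceID, sizes and offsets are assumptions, since the description does not fix any of these details.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical hot-plug flow for a newly inserted card such as memory card 40.
 * The transport functions are placeholders; names and values are assumptions. */

typedef struct { uint16_t cache_device_id; uint32_t offset; uint32_t length; } cache_grant_t;

/* Placeholder: a real implementation would broadcast the identity interrogation
 * on the memory network (hub 60) and parse the cache module's handshake reply. */
static bool broadcast_identity_interrogation(uint16_t card_device_id, uint32_t wanted_bytes,
                                             cache_grant_t *grant_out)
{
    (void)card_device_id;
    grant_out->cache_device_id = 0x70u;     /* assumed DeviceID of cache module 70 */
    grant_out->offset          = 0x30000u;  /* assumed area carved from region 76D */
    grant_out->length          = wanted_bytes;
    return true;
}

/* Placeholder: tell the cache module the card is gone so its reservation can be freed. */
static void release_cache_grant(const cache_grant_t *grant) { (void)grant; }

static cache_grant_t card_grant;

static void on_card_inserted(uint16_t card_device_id)
{
    /* Only the cache module answers; the handshake may include resources
     * reserved for this card until it is removed from the host device. */
    if (broadcast_identity_interrogation(card_device_id, 4096u, &card_grant)) {
        /* card_grant.offset / card_grant.length now back the card's run-time data */
    }
}

static void on_card_removed(void)
{
    release_cache_grant(&card_grant);
}
```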
It is further noted that a device like the cache module/device 70 may be a standalone component in the memory network/host device. Also it can have different functions, e.g., acting as a switch in the network. Moreover, it could be integrated into any of the other memory modules/devices (like a mass memory device or an IO device).
It is further noted that the host device could inform the memory/IO modules connected in the memory network about the existence of the cache module (e.g., confirming the existence of the CacheDeviceID) during the initialization phase of the memory/IO module. This would remove the requirement of interrogating the CacheDeviceID by the memory/IO modules/devices connected in the memory network. Further, the host device could also configure both the memory/IO modules/devices and the cache module during the initialization, e.g., by configuring/linking the DeviceIDs and corresponding cache memory resources (e.g., address ranges).
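For example, the host-driven configuration could be captured by a table linking each DeviceID to a cache memory address range, roughly as in the following sketch; the identifiers and address ranges shown are arbitrary assumptions.

CACHE_DEVICE_ID = 0x70   # assumed identifier the host distributes at initialization

# Hypothetical DeviceID -> (start address, length) of the region reserved in the cache memory.
cache_map = {
    0x20: (0x0000, 0x4000),   # e.g., a mass memory module
    0x30: (0x4000, 0x2000),   # e.g., an I/O module
    0x40: (0x6000, 0x2000),   # e.g., a removable memory card
}

def configure_module(module_id):
    """Initialization message the host could push to a module, so the module
    never needs to interrogate for the CacheDeviceID itself."""
    start, length = cache_map[module_id]
    return {"cache_device_id": CACHE_DEVICE_ID,
            "reserved_range": {"start": start, "length": length}}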
Figures 4, 5, 6a-6d and 7 further demonstrate different embodiments described herein. Figure 4 is an example of the cache memory 76 shown in Figure 3 with three allocated areas 76A, 76B and 76C for the corresponding modules 20, 30 and 40. A memory area 76D represents additional resources which can be used in "emergencies", i.e., when the allocated resource(s) are not sufficient to meet a particular request from the memory module 20, 30 or 40. Area 76D may also be used for further resource allocation/reservation for added memory devices/modules such as the removable memory card 40.
Furthermore, each memory area 76A, 76B, 76C or 76D may comprise volatile and non-volatile sectors which may be used according to the need/request from the memory modules in the memory network. Alternatively, there may be more than one cache memory module like module 70 in the host device 10a. For example, one cache module may comprise a volatile memory and another cache module may comprise a non-volatile memory, so that their corresponding controllers will coordinate their performance.
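One non-limiting way to picture the partitioning of Figure 4, including the optional volatile/non-volatile split, is a small descriptor per area as sketched below; the sizes, owner identifiers and the fallback rule are illustrative assumptions only.

# Illustrative partitioning mirroring Figure 4; sizes and the volatile/non-volatile
# split are made-up examples, not taken from the description.
cache_areas = {
    "76A": {"owner": 0x20, "volatile_kib": 256, "nonvolatile_kib": 256},
    "76B": {"owner": 0x30, "volatile_kib": 128, "nonvolatile_kib": 128},
    "76C": {"owner": 0x40, "volatile_kib": 128, "nonvolatile_kib": 0},
    "76D": {"owner": None, "volatile_kib": 512, "nonvolatile_kib": 512},  # shared "emergency" area
}

def pick_area(owner, need_nonvolatile=False):
    """Prefer the requester's own area; fall back to the shared area 76D."""
    key = "nonvolatile_kib" if need_nonvolatile else "volatile_kib"
    for name, area in cache_areas.items():
        if area["owner"] == owner and area[key] > 0:
            return name
    return "76D"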
Figure 5 briefly summarizes three different modes of operation of the cache module: during production (first column), during power boot up (second column) and during connecting a new memory device (third column).
Figures 6a-6d illustrate exemplary topology options utilizing a central cache module in a UFS memory network. The topology shown in Figure 6a corresponds to the topology of Figure 3 described herein. Figure 6b shows a topology similar to Figure 6a, but with the cache module placed in the UFS hub. Figure 6c demonstrates a serial/chain connection of different memory modules/devices, and Figure 6d shows a mixed topology incorporating features of Figures 6a and 6c.
Figure 7 shows a logic flow diagram that illustrates the operation of a method involving the (central) cache module in a memory network, and a result of execution of computer program instructions embodied on a computer readable memory, further in accordance with the exemplary embodiments of the invention as described herein. It is noted that the order of steps shown in Figure 7 is not absolutely required, so in principle, the various steps may be performed out of the illustrated order. Also certain steps may be skipped, different steps may be added or substituted, or selected steps or groups of steps may be performed in a separate application.
In a method according to the exemplary embodiments, as shown in Figure 7, in a first step 80, one or more memory resources in a cache module are reserved by a host device for one or more of the plurality of memory modules comprised in the host device during the manufacturing stage of the host device (an optional step).
In a next step 82, the cache module (serving multiple memory modules/devices) receives a message/broadcasting (interrogation) message (comprising at least a device ID, a device class ID and/or a device memory type) from at least one memory module/device to identify the cache module and possibly (optionally) reserve memory resource allocation in the cache module. In a next step 84, the cache module responds to such message/broadcasting message with its own identity information and possibly (optionally) information on the memory resource allocations (e.g., in NVM and/or VM) reserved by the cache module for that memory module/device.
In a next step 86, the cache module receives from a memory module a request to use memory resource(s) in the cache device for a specific purpose. In a next step 88, it is determined (by the cache module) whether the memory resource(s) were reserved for that network device. If that is the case, in a next step 90, the cache module would use the at least one reserved resource for implementing the request. If, however, it is determined that the memory resource(s) were not reserved for the memory module, in a step 92, the cache module would identify at least one resource of the available memory resources in the cache module to use for implementing the request, or could reply with a rejection message due to a lack of available resources.
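Tying the steps of Figure 7 together, a compact walk-through using the illustrative helpers sketched earlier in this section might look as follows; the step numbering in the comments refers to Figure 7, and the device identifiers are again hypothetical.

def figure7_flow(ctrl, storage):
    """Walk through steps 80-92 with the illustrative helpers defined above."""
    # Step 80 (optional): the host reserves resources per module at manufacturing.
    ctrl.reserve(0x20, 16)
    ctrl.reserve(0x30, 8)

    # Steps 82/84: a module interrogates the cache module, which replies with its
    # identity and (optionally) the allocation reserved for that module.
    reply = on_identity_interrogation(
        ctrl, {"device_id": 0x40, "request_resources": True, "count": 4})

    # Steps 86-92: the request is served from the reservation if one exists,
    # otherwise from unreserved resources, or rejected for lack of resources.
    result = handle_request(
        ctrl, storage, {"op": "write", "module_id": 0x40, "payload": b"data"})
    return reply, result

# Example usage (all values illustrative):
# ctrl = CacheController(total_blocks=64)
# reply, result = figure7_flow(ctrl, storage={})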
In general, the various exemplary embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the exemplary embodiments of this invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
It should thus be appreciated that at least some aspects of the exemplary embodiments of the inventions may be practiced in various components such as integrated circuit chips and modules, and that the exemplary embodiments of this invention may be realized in an apparatus that is embodied as an integrated circuit. The integrated circuit, or circuits, may comprise circuitry (as well as possibly firmware) for embodying at least one or more of a data processor or data processors, a digital signal processor or processors, baseband circuitry and radio frequency circuitry that are configurable so as to operate in accordance with the exemplary embodiments of this invention.
Various modifications and adaptations to the foregoing exemplary embodiments of this invention may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings. However, any and all modifications will still fall within the scope of the non-limiting and exemplary embodiments of this invention.
It is noted that various non-limiting embodiments described herein may be used separately, combined or selectively combined for specific applications.
Further, some of the various features of the above non-limiting embodiments may be used to advantage without the corresponding use of other described features. The foregoing description should therefore be considered as merely illustrative of the principles, teachings and exemplary embodiments of this invention, and not in limitation thereof.
It is to be understood that the above-described arrangements are only illustrative of the application of the principles of the present invention. Numerous modifications and alternative arrangements may be devised by those skilled in the art without departing from the scope of the invention, and the appended claims are intended to cover such modifications and arrangements.

Claims

CLAIMS
What is claimed is:
1. A method, comprising:
reserving by at least one cache module of a host device, or receiving a reservation from the host device for, memory resource allocations in the at least one cache module individually reserved for one or more of a plurality of memory modules comprised in the host device;
receiving by the at least one cache module from at least one module of the plurality of memory modules a request to use memory resources available in the at least one cache module;
implementing the request by the at least one cache module using at least one resource of the memory resources in the at least one cache module,
wherein the at least one resource of the memory resources in the at least one cache module is previously reserved for the at least one module, or dynamically identified by the at least one cache module.
2. The method of claim 1, wherein implementing the request comprises:
determining by the at least one cache module whether one or more resources of the memory resources were previously reserved for the at least one module.
3. The method of claim 2, wherein implementing the request further comprises: if the one or more resources were previously reserved for the at least one module, using the at least one resource of the one or more reserved resources for implementing the request, and
if the one or more resources were not previously reserved for the at least one module, identifying by the at least one cache module at least one resource of the memory resources not previously reserved in the at least one cache module to use for implementing the request.
4. The method of claim 2, wherein implementing the request further comprises: if the one or more resources were previously reserved for the at least one module but all used or not sufficient for implementing the request,
identifying by the at least one cache module at least one resource of the memory resources not previously reserved in the at least one cache module to use for implementing the request.
5. The method of claim 1, wherein the host device is a mobile device for wireless communications.
6. The method of claim 1, wherein the plurality of memory modules comprise a mass memory module, a removable memory module and an input/output memory module.
7. The method of claim 1, wherein the request to use the memory resources available in the at least one cache module comprises one or more of: a device identification, a device class identification and a device memory type of the at least one module.
8. The method of claim 1, wherein the cache module comprises:
volatile memory,
non-volatile memory, or
both volatile and non-volatile memory, wherein the request to use the memory resources available in the at least one cache module comprises an indication of a volatile or non-volatile memory type in the at least one cache module.
9. The method of claim 1, wherein the request to use the memory resources available in the at least one cache module comprises a writing operation, so that the writing operation is performed by the at least one cache module using the at least one resource of the memory resources in the at least one cache module, where the request optionally comprises a location or an identity of at least one memory resource in the at least one cache module to use for the writing operation.
10. The method of claim 1, wherein the request to use the memory resources available in the at least one cache module comprises a reading operation, and an identity or a location of the requested information, so the at least one cache module is configured to identify the at least one resource of the memory resources where the requested information is stored.
11. The method of claim 1, wherein at first powering of the host device, the method comprising:
receiving by the at least one cache module one or more identity interrogation requests from the one or more of the plurality of memory modules;
responding to the identity interrogation requests by handshakes with the one or more of the plurality of memory modules including providing an identification of the at least one cache module and further providing an identity of the reserved one or more memory resources to the corresponding one or more of the plurality of memory modules.
12. The method of claim 1, wherein before receiving the request to use memory resources available in the at least one cache module, the method comprises:
providing by the at least one cache module an identification of the at least one cache module to the one or more of the plurality of memory modules.
13. The method of claim 12, wherein before receiving the request to use memory resources available in the at least one cache module, the method further comprises: receiving by the at least one cache module an identity interrogation request from a memory module added to the host device, the memory module being a removable module; and
responding to the identity interrogation request from the memory module by a handshake with the added memory module including said providing the identification of the at least one cache module to the added memory module.
14. The method of claim 13, wherein the handshake further comprises:
reserving by at least one cache module at least one further resource of the available memory resources for the added memory module; and providing an identity of the reserved at least one further memory resource to the added memory module.
15. The method of claim 13, wherein the added memory module is the at least one module making said request to use memory resources available in the at least one cache module.
16. The method of claim 13, wherein the added memory module is the at least one module, and the identity interrogation request comprises said request to use memory resources available in the at least one cache module.
17. A method, comprising:
reserving by a host device during manufacturing stage of the host device a memory resource allocation in at least one cache module individually for one or more memory modules comprised in the host device using information about each of the one or more memory modules comprising at least one or more of: a device identification, a device class identification and a device memory type of the at least one module; and
providing by the host device the reserved memory resource allocation to the at least one cache module and corresponding individual memory resource allocations along with an identification of the at least one cache module individually to each of the one or more memory modules.
18. An apparatus comprising:
at least one controller and a memory optionally storing a set of computer instructions, in which the controller and the memory optionally storing the computer instructions are configured to cause the apparatus to:
reserve by the apparatus in a host device, or receiving a reservation from the host device for, memory resource allocations in the apparatus individually reserved for one or more of a plurality of memory modules comprised in the host device;
receive from at least one module of the plurality of memory modules a request to use memory resources available in the apparatus; implement the request using at least one resource of the memory resources in the apparatus,
wherein the at least one resource of the memory resources in the apparatus is previously reserved for the at least one module, or dynamically identified by the apparatus.
19. An apparatus comprising:
at least one processor and a memory storing a set of computer instructions, in which the processor and the memory storing the computer instructions are configured to cause the apparatus to:
reserve during manufacturing stage of the apparatus a memory resource allocation in at least one cache module individually for one or more memory modules comprised in the apparatus using information about each of the one or more memory modules comprising at least one or more of: a device identification, a device class identification and a device memory type of the at least one module; and
provide the reserved memory resource allocation to the at least one cache module and corresponding individual memory resource allocations along with an identification of the at least one cache module individually to each of the one or more memory modules.
PCT/US2013/056980 2012-08-28 2013-08-28 Dynamic central cache memory WO2014036078A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/596,480 2012-08-28
US13/596,480 US9116820B2 (en) 2012-08-28 2012-08-28 Dynamic central cache memory

Publications (2)

Publication Number Publication Date
WO2014036078A2 true WO2014036078A2 (en) 2014-03-06
WO2014036078A3 WO2014036078A3 (en) 2014-05-01

Family

ID=50184615

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/056980 WO2014036078A2 (en) 2012-08-28 2013-08-28 Dynamic central cache memory

Country Status (3)

Country Link
US (1) US9116820B2 (en)
TW (1) TW201432452A (en)
WO (1) WO2014036078A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021088423A1 (en) * 2019-11-08 2021-05-14 苏州浪潮智能科技有限公司 Memory management method and system for raid io, terminal and storage medium

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8307180B2 (en) 2008-02-28 2012-11-06 Nokia Corporation Extended utilization area for a memory device
US8874824B2 (en) 2009-06-04 2014-10-28 Memory Technologies, LLC Apparatus and method to share host system RAM with mass storage memory RAM
US9417998B2 (en) 2012-01-26 2016-08-16 Memory Technologies Llc Apparatus and method to provide cache move with non-volatile mass memory system
US9311226B2 (en) 2012-04-20 2016-04-12 Memory Technologies Llc Managing operational state data of a memory module using host memory in association with state change
US9164804B2 (en) 2012-06-20 2015-10-20 Memory Technologies Llc Virtual memory module
US9575884B2 (en) * 2013-05-13 2017-02-21 Qualcomm Incorporated System and method for high performance and low cost flash translation layer
KR20180038109A (en) * 2016-10-05 2018-04-16 삼성전자주식회사 Electronic device including monitoring circuit and storage device included therein
TW201818248A (en) 2016-11-15 2018-05-16 慧榮科技股份有限公司 Memory managing method for data storage device
US10754783B2 (en) * 2018-06-29 2020-08-25 Intel Corporation Techniques to manage cache resource allocations for a processor cache

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040203670A1 (en) * 1998-09-16 2004-10-14 Openwave Systems Inc. Wireless mobile devices having improved operation during network unavailability
US7181574B1 (en) * 2003-01-30 2007-02-20 Veritas Operating Corporation Server cluster using informed prefetching
US20080127131A1 (en) * 2006-09-13 2008-05-29 Yaoqing Gao Software solution for cooperative memory-side and processor-side data prefetching
US20090222629A1 (en) * 2008-03-01 2009-09-03 Kabushiki Kaisha Toshiba Memory system
US20100005281A1 (en) * 2008-07-01 2010-01-07 International Business Machines Corporation Power-on initialization and test for a cascade interconnect memory system
US20110264860A1 (en) * 2010-04-27 2011-10-27 Via Technologies, Inc. Multi-modal data prefetcher
US20120210326A1 (en) * 2011-02-14 2012-08-16 Microsoft Corporation Constrained Execution of Background Application Code on Mobile Devices

Family Cites Families (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS59135563A (en) 1983-01-24 1984-08-03 Hitachi Ltd Computer system having disk cache device
JPH0679293B2 (en) 1990-10-15 1994-10-05 富士通株式会社 Computer system
US5586291A (en) 1994-12-23 1996-12-17 Emc Corporation Disk controller with volatile and non-volatile cache memories
US5845313A (en) 1995-07-31 1998-12-01 Lexar Direct logical block addressing flash memory mass storage architecture
US5802069A (en) 1995-11-13 1998-09-01 Intel Corporation Implementing mass storage device functions using host processor memory
US5924097A (en) 1997-12-23 1999-07-13 Unisys Corporation Balanced input/output task management for use in multiprocessor transaction processing system
US6067300A (en) 1998-06-11 2000-05-23 Cabletron Systems, Inc. Method and apparatus for optimizing the transfer of data packets between local area networks
US7702831B2 (en) 2000-01-06 2010-04-20 Super Talent Electronics, Inc. Flash memory controller for electronic data flash card
US6513094B1 (en) 1999-08-23 2003-01-28 Advanced Micro Devices, Inc. ROM/DRAM data bus sharing with write buffer and read prefetch activity
US6665747B1 (en) 1999-10-22 2003-12-16 Sun Microsystems, Inc. Method and apparatus for interfacing with a secondary storage system
US7552251B2 (en) 2003-12-02 2009-06-23 Super Talent Electronics, Inc. Single-chip multi-media card/secure digital (MMC/SD) controller reading power-on boot code from integrated flash memory for user storage
US20060075395A1 (en) 2004-10-01 2006-04-06 Lee Charles C Flash card system
US6804763B1 (en) 2000-10-17 2004-10-12 Igt High performance battery backed ram interface
US6801994B2 (en) 2000-12-20 2004-10-05 Microsoft Corporation Software management systems and methods for automotive computing devices
US6934254B2 (en) 2001-01-18 2005-08-23 Motorola, Inc. Method and apparatus for dynamically allocating resources in a communication system
US6510488B2 (en) 2001-02-05 2003-01-21 M-Systems Flash Disk Pioneers Ltd. Method for fast wake-up of a flash memory system
US6842829B1 (en) 2001-12-06 2005-01-11 Lsi Logic Corporation Method and apparatus to manage independent memory systems as a shared volume
US6754129B2 (en) 2002-01-24 2004-06-22 Micron Technology, Inc. Memory module with integrated bus termination
US7085866B1 (en) 2002-02-19 2006-08-01 Hobson Richard F Hierarchical bus structure and memory access protocol for multiprocessor systems
AU2002304404A1 (en) * 2002-05-31 2003-12-19 Nokia Corporation Method and memory adapter for handling data of a mobile device using non-volatile memory
JP2004021669A (en) 2002-06-18 2004-01-22 Sanyo Electric Co Ltd Transfer control system and transfer controller and recording device and transfer control method
CN1689312B (en) 2002-10-08 2010-04-14 皇家飞利浦电子股份有限公司 Integrated circuit and method for establishing transactions
US20050071570A1 (en) 2003-09-26 2005-03-31 Takasugl Robin Alexis Prefetch controller for controlling retrieval of data from a data storage device
US7321958B2 (en) 2003-10-30 2008-01-22 International Business Machines Corporation System and method for sharing memory by heterogeneous processors
JP4402997B2 (en) * 2004-03-26 2010-01-20 株式会社日立製作所 Storage device
CN100538691C (en) 2004-04-26 2009-09-09 皇家飞利浦电子股份有限公司 Be used to send integrated circuit, data handling system and the method for affairs
US7480749B1 (en) 2004-05-27 2009-01-20 Nvidia Corporation Main memory as extended disk buffer memory
US7334107B2 (en) 2004-09-30 2008-02-19 Intel Corporation Caching support for direct memory access address translation
US8843727B2 (en) 2004-09-30 2014-09-23 Intel Corporation Performance enhancement of address translation using translation tables covering large address spaces
US20060288130A1 (en) 2005-06-21 2006-12-21 Rajesh Madukkarumukumana Address window support for direct memory access translation
US7610445B1 (en) 2005-07-18 2009-10-27 Palm, Inc. System and method for improving data integrity and memory performance using non-volatile media
US7571295B2 (en) 2005-08-04 2009-08-04 Intel Corporation Memory manager for heterogeneous memory control
JP4433311B2 (en) 2005-09-12 2010-03-17 ソニー株式会社 Semiconductor memory device, electronic device, and mode setting method
KR100673013B1 (en) 2005-09-21 2007-01-24 삼성전자주식회사 Memory controller and data processing system with the same
JP4903415B2 (en) * 2005-10-18 2012-03-28 株式会社日立製作所 Storage control system and storage control method
JP2007115382A (en) 2005-10-24 2007-05-10 Renesas Technology Corp Semiconductor integrated circuit, storage device, and control program
US7951008B2 (en) 2006-03-03 2011-05-31 Igt Non-volatile memory management technique implemented in a gaming machine
US7753281B2 (en) 2006-06-01 2010-07-13 Hewlett-Packard Development Company, L.P. System and method of updating a first version of a data file in a contactless flash memory device
US20080082714A1 (en) 2006-09-29 2008-04-03 Nasa Hq's. Systems, methods and apparatus for flash drive
TWM317043U (en) 2006-12-27 2007-08-11 Genesys Logic Inc Cache device of the flash memory address transformation layer
EP2122473B1 (en) * 2007-01-10 2012-12-05 Mobile Semiconductor Corporation Adaptive memory system for enhancing the performance of an external computing device
CA2686313C (en) 2007-05-07 2012-10-02 Vorne Industries, Inc. Method and system for extending the capabilities of embedded devices through network clients
US8527691B2 (en) 2007-07-31 2013-09-03 Panasonic Corporation Nonvolatile memory device and nonvolatile memory system with fast boot capability
US8166238B2 (en) 2007-10-23 2012-04-24 Samsung Electronics Co., Ltd. Method, device, and system for preventing refresh starvation in shared memory bank
US8185685B2 (en) 2007-12-14 2012-05-22 Hitachi Global Storage Technologies Netherlands B.V. NAND flash module replacement for DRAM module
US8880483B2 (en) 2007-12-21 2014-11-04 Sandisk Technologies Inc. System and method for implementing extensions to intelligently manage resources of a mass storage system
JP4533968B2 (en) 2007-12-28 2010-09-01 株式会社東芝 Semiconductor memory device, control method therefor, controller, information processing device
US8892831B2 (en) 2008-01-16 2014-11-18 Apple Inc. Memory subsystem hibernation
US8209463B2 (en) 2008-02-05 2012-06-26 Spansion Llc Expansion slots for flash memory based random access memory subsystem
US7962684B2 (en) 2008-02-14 2011-06-14 Sandisk Corporation Overlay management in a flash memory storage device
JP4672742B2 (en) 2008-02-27 2011-04-20 株式会社東芝 Memory controller and memory system
US8099522B2 (en) 2008-06-09 2012-01-17 International Business Machines Corporation Arrangements for I/O control in a virtualized system
US8166229B2 (en) 2008-06-30 2012-04-24 Intel Corporation Apparatus and method for multi-level cache utilization
US8181046B2 (en) 2008-10-29 2012-05-15 Sandisk Il Ltd. Transparent self-hibernation of non-volatile memory system
US8316201B2 (en) 2008-12-18 2012-11-20 Sandisk Il Ltd. Methods for executing a command to write data from a source location to a destination location in a memory device
US8094500B2 (en) 2009-01-05 2012-01-10 Sandisk Technologies Inc. Non-volatile memory and method with write cache partitioning
US8832354B2 (en) 2009-03-25 2014-09-09 Apple Inc. Use of host system resources by memory controller
US8180981B2 (en) 2009-05-15 2012-05-15 Oracle America, Inc. Cache coherent support for flash in a memory hierarchy
US8874824B2 (en) 2009-06-04 2014-10-28 Memory Technologies, LLC Apparatus and method to share host system RAM with mass storage memory RAM
CN102576333B (en) * 2009-10-05 2016-01-13 马维尔国际贸易有限公司 Data cache in nonvolatile memory
KR101638061B1 (en) 2009-10-27 2016-07-08 삼성전자주식회사 Flash memory system and flash defrag method thereof
WO2011148223A1 (en) 2010-05-27 2011-12-01 Sandisk Il Ltd Memory management storage to a host device
US8938574B2 (en) * 2010-10-26 2015-01-20 Lsi Corporation Methods and systems using solid-state drives as storage controller cache memory
TWI417727B (en) 2010-11-22 2013-12-01 Phison Electronics Corp Memory storage device, memory controller thereof, and method for responding instruction sent from host thereof
US8719464B2 (en) 2011-11-30 2014-05-06 Advanced Micro Device, Inc. Efficient memory and resource management
US20130145055A1 (en) 2011-12-02 2013-06-06 Andrew Kegel Peripheral Memory Management
US8930633B2 (en) 2012-06-14 2015-01-06 International Business Machines Corporation Reducing read latency using a pool of processing cores
US9164804B2 (en) 2012-06-20 2015-10-20 Memory Technologies Llc Virtual memory module


Also Published As

Publication number Publication date
WO2014036078A3 (en) 2014-05-01
TW201432452A (en) 2014-08-16
US9116820B2 (en) 2015-08-25
US20140068140A1 (en) 2014-03-06

Similar Documents

Publication Publication Date Title
US9116820B2 (en) Dynamic central cache memory
US11782647B2 (en) Managing operational state data in memory module
US11797180B2 (en) Apparatus and method to provide cache move with non-volatile mass memory system
US11733869B2 (en) Apparatus and method to share host system RAM with mass storage memory RAM
TW201230049A (en) Dynamic allocation of power budget for a system having non-volatile memory
WO2017084565A1 (en) Storage data access method, related controller, device, host, and system
US9164804B2 (en) Virtual memory module
US9164703B2 (en) Solid state drive interface controller and method selectively activating and deactivating interfaces and allocating storage capacity to the interfaces
US10719333B2 (en) BIOS startup method and apparatus
US9948336B2 (en) Communication circuit chip and electronic device configured to communicate with plural memory cards
JP2016026345A (en) Temporary stop of memory operation for shortening reading standby time in memory array
WO2016040189A1 (en) System and method for sharing a solid-state non-volatile memory resource

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13833258

Country of ref document: EP

Kind code of ref document: A2

122 Ep: pct application non-entry in european phase

Ref document number: 13833258

Country of ref document: EP

Kind code of ref document: A2