US20150169598A1 - System and method for providing a persistent snapshot of a running system in a distributed data grid - Google Patents

System and method for providing a persistent snapshot of a running system in a distributed data grid

Info

Publication number
US20150169598A1
Authority
US
United States
Prior art keywords
data grid
distributed data
snapshot
distributed
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/271,161
Inventor
Robert H. Lee
Jason John Howes
Mark Falco
Gene Gleyzer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oracle International Corp
Original Assignee
Oracle International Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 13/352,203 (US 9,063,787 B2)
Application filed by Oracle International Corp
Priority to US 14/271,161
Assigned to ORACLE INTERNATIONAL CORPORATION. Assignment of assignors interest (see document for details). Assignors: LEE, ROBERT H.; GLEYZER, GENE; FALCO, MARK; HOWES, JASON JOHN
Publication of US 2015/0169598 A1
Status: Abandoned


Classifications

    • G06F 16/182: Distributed file systems
    • G06F 17/30088
    • G06F 11/1425: Saving, restoring, recovering or retrying at system level; reconfiguring to eliminate the error by reconfiguration of node membership
    • G06F 11/1438: Saving, restoring, recovering or retrying at system level; restarting or rejuvenating
    • G06F 11/1448: Point-in-time backing up or restoration of persistent data; management of the data involved in backup or backup restore
    • G06F 11/1464: Management of the backup or restore process for networked environments
    • G06F 11/1469: Backup restoration techniques
    • G06F 11/1482: Generic software techniques for error detection or fault masking by means of middleware or OS functionality
    • G06F 11/16: Error detection or correction of the data by redundancy in hardware
    • G06F 16/128: Details of file system snapshots on the file-level, e.g. snapshot creation, administration, deletion
    • G06F 16/1858: Parallel file systems, i.e. file systems supporting multiple processors
    • G06F 16/1865: Transactional file systems
    • G06F 16/219: Managing data history or versioning
    • G06F 16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5061: Partitioning or combining of resources
    • G06F 2201/84: Using snapshots, i.e. a logical point-in-time copy of the data
    • G06F 2209/505: Clust

Definitions

  • the present invention is generally related to computer systems, and is particularly related to supporting persistence in a distributed data grid.
  • Modern computing systems, particularly those employed by larger organizations and enterprises, continue to increase in size and complexity.
  • In areas such as Internet applications, there is an expectation that millions of users should be able to simultaneously access an application, which effectively leads to an exponential increase in the amount of content generated and consumed by users, and in the transactions involving that content.
  • Such activity also results in a corresponding increase in the number of transaction calls to databases and metadata stores, which have a limited capacity to accommodate that demand.
  • This is the general area that embodiments of the invention are intended to address.
  • Described herein are systems and methods that can support persistence in a distributed data grid, such as providing a persistent snapshot of a running system.
  • the system allows one or more cache services to run on a plurality of cluster members in the distributed data grid.
  • the system can collect a catalogue of state information associated with said one or more cache services from the plurality of cluster members in the distributed data grid, and create a snapshot for said one or more cache services running on the distributed data grid.
  • FIG. 1 is an illustration of a data grid cluster in accordance with various embodiments of the invention.
  • FIG. 2 shows an illustration of supporting persistence in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 3 shows an illustration of using a shared storage to support persistence in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 4 shows an illustration of using distributed local disks to support persistence in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 5 shows an illustration of supporting distributed persistent store recovery in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 6 shows an illustration of coordinating persistent store recovery in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 7 shows an illustration of supporting consistent partition recovery in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 8 illustrates an exemplary flow chart for supporting distributed persistent store recovery in a distributed data grid in accordance with an embodiment of the invention.
  • FIG. 9 shows an illustration of supporting persistent store versioning in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 10 shows an illustration of supporting persistent store integrity in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 11 shows an illustration of restoring the persisted partitions in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 12 illustrates an exemplary flow chart for supporting persistent store versioning and integrity and in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 13 shows an illustration of providing a persistent snapshot of a running system in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 14 illustrates an exemplary flow chart for providing a persistent snapshot of a running system in a distributed data grid in accordance with an embodiment of the invention.
  • Described herein are systems and methods that can support persistence in a distributed data grid.
  • a “data grid cluster”, or “data grid”, is a system comprising a plurality of computer servers which work together to manage information and related operations, such as computations, within a distributed or clustered environment.
  • the data grid cluster can be used to manage application objects and data that are shared across the servers.
  • a data grid cluster should have low response time, high throughput, predictable scalability, continuous availability and information reliability. As a result of these capabilities, data grid clusters are well suited for use in computationally intensive, stateful middle-tier applications.
  • Some examples of data grid clusters can store the information in-memory to achieve higher performance, and can employ redundancy in keeping copies of that information synchronized across multiple servers, thus ensuring resiliency of the system and the availability of the data in the event of server failure.
  • Coherence provides replicated and distributed (partitioned) data management and caching services on top of a reliable, highly scalable peer-to-peer clustering protocol.
  • An in-memory data grid can provide the data storage and management capabilities by distributing data over a number of servers working together.
  • the data grid can be middleware that runs in the same tier as an application server or within an application server. It can provide management and processing of data and can also push the processing to where the data is located in the grid.
  • the in-memory data grid can eliminate single points of failure by automatically and transparently failing over and redistributing its clustered data management services when a server becomes inoperative or is disconnected from the network. When a new server is added, or when a failed server is restarted, it can automatically join the cluster and services can be failed back over to it, transparently redistributing the cluster load.
  • the data grid can also include network-level fault tolerance features and transparent soft re-start capability.
  • the functionality of a data grid cluster is based on using different cluster services.
  • the cluster services can include root cluster services, partitioned cache services, and proxy services.
  • each cluster node can participate in a number of cluster services, both in terms of providing and consuming the cluster services.
  • Each cluster service has a service name that uniquely identifies the service within the data grid cluster, and a service type, which defines what the cluster service can do.
  • the services can be either configured by the user, or provided by the data grid cluster as a default set of services.
  • FIG. 1 is an illustration of a data grid cluster in accordance with various embodiments of the invention.
  • a data grid cluster 100, e.g. an Oracle Coherence data grid, includes a plurality of cluster members (or server nodes) such as cluster nodes 101 - 106 , having various cluster services 111 - 116 running thereon. Additionally, a cache configuration file 110 can be used to configure the data grid cluster 100 .
  • the distributed data grid can provide recoverable persistent storage for different types of cache content and can prevent data loss after the distributed data grid is shut down.
  • FIG. 2 shows an illustration of supporting persistence in a distributed data grid, in accordance with an embodiment of the invention.
  • a distributed data grid 200 can include various types of cache content 211 - 213 in an in-memory data store 202 .
  • the distributed data grid 200 can use a persistence layer 201 to persist the cache content 211 - 213 in a persistent storage 203 .
  • the persistence layer 201 allows the persistent storage 203 to use different physical topologies.
  • the persistence layer 201 can store the cache content in a central location, such as a storage area network (SAN) 221 , where all members in the distributed data grid 200 can share the same visibility.
  • the persistence layer 201 can store the cache content into different local disks 222 , where members of the distributed data grid 200 may have only local visibility.
  • the persistence layer 201 can be agnostic to the choice of the physical topology (e.g. a SAN 221 or distributed local disks 222 ).
  • the distributed data grid 200 can take advantage of multiple SANs or multiple SAN mount points.
  • the distributed data grid 200 can take advantage of a physical topology that includes multiple SANs that are not shared by the plurality of members.
  • the physical topology may include multiple SANs exporting storage locations, or may include hybrid deployments of local disks and SANs.
  • the persistence layer 201 can support partition-wide atomicity of persisted data/metadata, and can provide a transaction guarantee after a restart of the distributed data grid 200 . Also, the persistence layer 201 can minimize performance impact and reduce the recovery time needed to restart the distributed data grid 200 .
  • FIG. 3 shows an illustration of using a shared storage to support persistence in a distributed data grid, in accordance with an embodiment of the invention.
  • a distributed data grid 300 which includes a plurality of members (e.g. the members 301 - 305 on the machines A-C 311 - 313 ), can support various cache services 320 .
  • the distributed data grid 300 can use a shared persistent storage, such as a storage area network (SAN) 310 , to store the cache content for the cache services 320 in a central location.
  • the different members 301 - 305 on the machines A-C 311 - 313 can share the same visibility, and can all have access to the persisted partitions 322 in the SAN 310 .
  • the system can recover the persisted cache content and prevent data loss, when the distributed data grid 300 is restarted after a shutdown.
  • FIG. 4 shows an illustration of using distributed local disks to support persistence in a distributed data grid, in accordance with an embodiment of the invention.
  • a distributed data grid 400 which includes a plurality of members (e.g. the members 401 - 405 on the machines A-C 411 - 413 ), can support various cache services 420 .
  • the distributed data grid 400 can store the cache content for the cache services 420 into the local disks on different machines.
  • the members 401 - 402 can store the related cache content into the local disk A 431 on machine A 411 (e.g. the persisted partitions 421 ).
  • the members 403 - 404 can store the related cache content into the local disk B 432 on the machine B 412 (e.g. the persisted partitions 422 ), and the machine C 413 can store the related cache content into the local disk C 433 on the machine C 413 (e.g. the persisted partitions 423 ).
  • the distributed data grid 400 can support the automatic recovery of various types of cache content in a distributed fashion, and prevent data loss during the restart of the distributed data grid 400 .
  • the distributed data grid can support persistent store recovery in a distributed fashion.
  • FIG. 5 shows an illustration of supporting distributed persistent store recovery in a distributed data grid, in accordance with an embodiment of the invention.
  • a distributed data grid 500 can include a plurality of members, e.g. members 501 - 505 , and can persist the cache content using the distributed local disks, e.g. local disks A-C 511 - 513 .
  • each member in the distributed data grid 500 may only have visibility to the partitions persisted in the local disk.
  • the member 501 and the member 502 may only be aware of the persisted partitions 521 in the local disk A 511
  • the member 503 and the member 504 may only be aware of the persisted partitions 522 in the local disk B 512
  • the member 505 may only be aware of the persisted partitions 523 in the local disk C 513 .
  • the distributed data grid 500 can use an internal protocol to discover the persisted partitions 521 - 523 on different local disks A-C 511 - 513 .
  • the discovery protocol supports the persistent store recovery during both the cluster cold-start/restart scenario and the multiple-node failure scenario (e.g. with a loss of a primary owner of a partition and/or one or more backup owners of the partition).
  • the distributed data grid 500 can use a coordinator member 510 to coordinate the recovery of various persisted partitions 521 - 523 in the distributed data grid 500 .
  • the coordinator member 510 can send a distributed query to other members 501 - 505 in the distributed data grid 500 in order to obtain a complete list of persisted partitions 521 - 523 .
  • the coordinator member 510 can use a pluggable partition assignment strategy component 520 to determine the partition recovery assignment 540 .
  • the system can go down the list of partitions and examine which members can see a version of each partition. Then, the system can determine which member should be used to recover which partition, based on a synchronized partition ownership view 530 .
  • the system can minimize the performance impact caused by adding persistence support to the distributed data grid 500 .
  • the system can use an asynchronous messaging process in the distributed data grid 500 for implementing the write operation to a persistent store.
  • the system allows multiple input/output (I/O) operations to be performed concurrently.
  • the coordinator member 510 can avoid using only one or a few members in the distributed data grid 500 for performing the recovery, which would be prone to creating a performance bottleneck.
  • the system can use a recovery quorum to ensure that all persisted partitions are visible prior to the recovery in order to prevent data loss due to recovery.
  • the distributed data grid 500 can automatically carry out a recovery of persisted cache contents in a distributed fashion during a restart of the distributed data grid 500 .
  • FIG. 6 shows an illustration of coordinating persistent store recovery in a distributed data grid, in accordance with an embodiment of the invention.
  • a coordinator member 610 in a distributed data grid 600 can coordinate the recovery of the persisted partitions from the distributed local disks.
  • the coordinator member 610 can direct a member 620 to recover persisted partitions from a local disk 630 .
  • the coordinator 610 can instruct the member 620 (and all other members in the distributed data grid 600 concurrently) to prepare for restoring persisted partitions. Then, at step 602 , the member 620 (possibly along with each other member in the distributed data grid 600 ) can provide a local partition ownership back to the coordinator member 610 .
  • the coordinator member 610 can synchronize a view of the overall partition ownership, after obtaining the partition ownership information from the different members in the distributed data grid 600 .
  • the coordinator 610 can instruct the member 620 to prepare for recovering the persisted partitions based on the view of the overall partition ownership.
  • the member 620 can check for the persisted partitions in the local disk 630 .
  • the member 620 can report the persisted partitions (e.g. the persisted partition IDs) in the local disk 630 to the coordinator member 610 .
  • the coordinator member 610 can make a decision on how to configure the recovery process, such as determining a recovery assignment.
  • the coordinator 610 can provide the partition recovery assignment (e.g. the recover partition IDs) to each member in the distributed data grid 600 .
  • the different members in the distributed data grid 600 can carry out the recovery of the persisted partitions based on the received partition recovery assignment.
  • FIG. 7 shows an illustration of supporting consistent partition recovery in a distributed data grid, in accordance with an embodiment of the invention.
  • a distributed data grid 700 can include a plurality of members, e.g. members 701 - 705 , each of which may only have visibility to the partitions persisted in the local disk.
  • a coordinator member 710 can coordinate the recovery of various persisted partitions 721 - 723 from the distributed local disks A-C 711 - 713 . Also, the coordinator member 710 can use a pluggable partition assignment strategy component 720 to determine which member should be used to recover which partition.
  • the system can promote in-memory backups to in-memory primaries.
  • the system can create a new persisted partition on disk and can also create one or more in-memory backups on other members from the data in memory.
  • the system can recover a new in-memory primary from the persisted version on disk, when there is a member having visibility to the disk.
  • the distributed data grid 700 can rebalance itself.
  • the distributed data grid 700 can promote a back-up partition which is persisted in either the local disk B 712 or the local disk C 713 as the primary partition.
  • the distributed data grid 700 can ensure that the system always restores the most recent valid partition.
  • the persisted partitions 722 in the local disk B 712 may contain a newer version of the partition, since the persisted partitions 721 in the local disk A 711 may not have been updated correctly, or an older version of the partition may exist due to the death of the prior owner of the partition.
  • the distributed data grid 700 can use a recovery quorum for supporting the discovery and/or the recovery of the persisted partitions 721 - 723 .
  • By using the recovery quorum, the recovery from persistence can be gated or protected.
  • the distributed data grid 700 can ensure that no data is lost, even when the number of members that are lost exceeds the in-memory redundancy target.
  • the distributed data grid 700 can ensure that all persisted partitions are visible prior to recovery.
  • the recovery quorum can be configured such that it guarantees visibility to all of the possible storage locations (such as local disks and/or SANs within the cluster).
  • the distributed data grid 700 can recover orphaned partitions from the persistent store and assign them as empty partitions.
  • the distributed data grid 700 can establish different recovery policies based on the recovery quorum. For example, the distributed data grid 700 can establish SAN/shared-storage policies that focus on capacity. Also, the distributed data grid 700 can establish distributed/shared-nothing storage policies that ensure all storage locations are reachable. Also, the distributed data grid 700 can establish various policies based on the configured membership size and the host-list.
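  • As a rough illustration of the distributed/shared-nothing recovery policy described above, the following minimal Java sketch gates recovery on every configured storage host being reachable. The class and method names (SharedNothingRecoveryQuorum, allowRecovery) are illustrative assumptions, not the actual Coherence API.

```java
import java.util.Set;

/**
 * Hypothetical sketch of a recovery quorum policy: recovery is allowed
 * only when every configured storage location is reachable. All names
 * here are illustrative, not the actual Coherence API.
 */
public class SharedNothingRecoveryQuorum {

    private final Set<String> requiredHosts;   // hosts that own local disks

    public SharedNothingRecoveryQuorum(Set<String> requiredHosts) {
        this.requiredHosts = requiredHosts;
    }

    /**
     * Gate recovery: with distributed local disks, every host in the
     * configured host list must be present, or persisted partitions on
     * the missing hosts could be silently lost.
     */
    public boolean allowRecovery(Set<String> reachableHosts) {
        return reachableHosts.containsAll(requiredHosts);
    }

    public static void main(String[] args) {
        SharedNothingRecoveryQuorum quorum = new SharedNothingRecoveryQuorum(
            Set.of("machineA", "machineB", "machineC"));
        System.out.println(quorum.allowRecovery(Set.of("machineA", "machineB")));             // false
        System.out.println(quorum.allowRecovery(Set.of("machineA", "machineB", "machineC"))); // true
    }
}
```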
  • the system allows various members 701 - 705 in the distributed data grid 700 to be shut down (and/or restarted) in an orderly fashion, and allows for a graceful suspend/resume of a service or the entire cluster. Additionally, the system can prevent partition transfers and persistent store movements during the shutdown of the distributed data grid. For example, a quiesced service/cluster may not join new members, may not restore partitions from backup, may not recover orphaned partitions from the persistent store, may not assign empty orphaned partitions, and may not perform partition distribution.
  • FIG. 8 illustrates an exemplary flow chart for supporting distributed persistent store recovery in a distributed data grid in accordance with an embodiment of the invention.
  • the system allows a plurality of members in the distributed data grid to persist a plurality of partitions, associated with one or more cache services, in a persistent storage.
  • a coordinator can synchronize a view of partition ownership among the plurality of members in the distributed data grid.
  • the distributed data grid can form a distributed consensus on which partition can be recovered from which member in the distributed data grid.
  • FIG. 9 shows an illustration of supporting persistent store versioning in a distributed data grid, in accordance with an embodiment of the invention.
  • a distributed data grid 900 can use various partitions (e.g. a partition 901 ) in an in-memory data store 920 to support different cache services.
  • the distributed data grid 900 can use a persistent store (e.g. a persisted partition 911 ) to persist the partition 901 in the distributed local disks 910 .
  • the system can provide a unique identifier (ID), or a unique version number 906 , for each persisted partition in the distributed local disks 910 .
  • a member 902 in the distributed data grid 900 can generate a globally unique identifier (GUID) 921 for the persisted partition 911 .
  • GUID 921 can contain various types of information using a special naming format.
  • the GUID 921 can include at least a partition number (or a partition ID 903 ) and a partition version number 906 associated with the partition 901 . Additionally, the GUID 921 can contain a member ID 904 , which indicates that the member 902 generated the GUID 921 .
  • the GUID 921 can include other information, such as a time stamp 905 that indicates the time when the partition 901 is first persisted.
  • the time stamp 905 is a stamp of logical time (e.g. a stamp of a vector clock per partition), instead of a global wall clock.
  • the system can guarantee that the GUID stamps move monotonically forward in the face of any kind of failure or transfer scenario.
  • the distributed data grid 900 can maintain the version number 906 for each persisted partition in a monotonically increasing order.
  • the system can account for the data mutation at any member or ownership changes in the distributed data grid 900 .
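  • To make the GUID structure concrete, the following minimal Java sketch encodes the fields named above (partition number, monotonically increasing version, generating member ID, logical timestamp) in a single string. The exact naming format is an assumption; the text only requires that these fields be carried by the GUID.

```java
/**
 * Illustrative sketch of a GUID for a persisted partition, encoding a
 * partition number, a monotonically increasing version, the generating
 * member's id, and a logical timestamp. The naming format is hypothetical.
 */
public record PartitionGuid(int partitionId, long version, int memberId, long logicalTime)
        implements Comparable<PartitionGuid> {

    /** Render as a single string, e.g. "7-42-3-1000017". */
    @Override
    public String toString() {
        return partitionId + "-" + version + "-" + memberId + "-" + logicalTime;
    }

    /** Parse a GUID string back into its fields. */
    public static PartitionGuid parse(String guid) {
        String[] parts = guid.split("-");
        return new PartitionGuid(Integer.parseInt(parts[0]), Long.parseLong(parts[1]),
                                 Integer.parseInt(parts[2]), Long.parseLong(parts[3]));
    }

    /** Order GUIDs for the same partition by version, newest last. */
    @Override
    public int compareTo(PartitionGuid other) {
        return Long.compare(this.version, other.version);
    }
}
```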
  • FIG. 10 shows an illustration of supporting persistent store integrity in a distributed data grid, in accordance with an embodiment of the invention.
  • a persistent store 1001 in a distributed data grid 1000 can contain cache content from different caches A-C 1011 - 1013 , each of which is associated with a cache ID 1021 - 1023 .
  • the system can apply a seal operation 1002 on the persistent store 1001 .
  • the seal operation 1002 can ensure that the persistent store 1001 is fully initialized and is eligible to be recovered.
  • the system can apply a validation operation 1003 on the persistent store 1001 .
  • the validation operation 1003 can check whether the persistent store 1001 has been sealed. For example, the system may decide that the cache content in the persistent store 1001 is not valid if the persistent store 1001 is not sealed.
  • the system can ensure that the distributed data grid 1000 always restores a valid persisted partition and avoids recovering a partial copy that may be caused by cascading cluster failures.
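  • A minimal sketch of how the seal and validation operations could be realized, assuming a hypothetical file-marker layout; the SEAL_MARKER name and the on-disk layout are illustrative, not the actual persistent store format.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Hypothetical sketch of the seal/validate protocol for a persistent store:
 * a store is only eligible for recovery once a seal marker has been written
 * after all content is fully persisted.
 */
public class PersistentStoreSeal {

    private static final String SEAL_MARKER = "store.sealed";

    /** Seal operation: mark the store as fully initialized and recoverable. */
    public static void seal(Path storeDir) throws IOException {
        Files.createFile(storeDir.resolve(SEAL_MARKER));
    }

    /**
     * Validation operation: a store without the seal marker may be a partial
     * copy left behind by a cascading failure, so it must not be recovered.
     */
    public static boolean isValid(Path storeDir) {
        return Files.exists(storeDir.resolve(SEAL_MARKER));
    }
}
```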
  • FIG. 11 shows an illustration of restoring the persisted partitions in a distributed data grid, in accordance with an embodiment of the invention.
  • a distributed data grid 1100 can store various persisted partitions 1111 - 1113 in distributed local disks 1110 .
  • Each persisted partition 1111 - 1113 stored in the distributed local disks 1110 can be associated with a globally unique identifier (GUID), e.g. GUID 1141 - 1143 .
  • GUIDs 1141 - 1143 can contain different types of information that includes at least a partition number (i.e. a partition-id) and a version number.
  • the members 1101 - 1102 in the distributed data grid 1100 may have different visibility to the persisted partitions 1111 - 1113 in the distributed local disks 1110 .
  • the system can configure the GUIDs 1141 - 1143 to contain information on which member may have visibility to a particular persisted partition 1111 - 1113 .
  • each of the members 1101 - 1102 in the distributed data grid 1100 can report the GUIDs 1141 - 1143 (which can include the partition numbers and other information) for each of the persisted partitions that are found.
  • each member 1101 - 1102 in the distributed data grid 1100 can collect a list of available GUIDs 1121 - 1122 from the distributed local disks 1110 based on local visibility. Then, each member 1101 - 1102 can provide (or register) the list of available GUIDs 1121 - 1122 to a resolver 1103 in the distributed data grid 1100 , and the resolver 1103 can determine the newest GUIDs 1130 for different partitions based on the partition number and version number information encoded in the GUIDs 1141 - 1143 .
  • the distributed local disks 1110 may contain multiple different versions of the same partition.
  • the resolver 1103 may receive multiple GUIDs that contain the same partition number and different version numbers.
  • the resolver 1103 can obtain the version number from each GUID associated with the same partition, and determine which GUID has the most recent version number. Also, the distributed data grid 1100 can ensure that the persisted partition with the most recent version number is valid based on performing the seal operation and validation operation.
  • the resolver 1103 can determine which member 1101 - 1102 in the distributed data grid 1100 is responsible for recovering a particular persisted partition 1111 - 1113 , based on the member ID information encoded in the GUIDs 1141 - 1143 .
  • the resolver 1103 can provide the partition recovery assignment, which may include a list of the newest GUIDs 1131 - 1132 , to each different member 1101 - 1102 . Accordingly, the members 1101 - 1102 can carry out the actual operation that restores the persisted partitions 1111 - 1113 .
  • the system can ensure that the distributed data grid 1100 always restores the newest valid version of any persisted partition, and can avoid recovering a partial copy that may be caused by cascading cluster failures.
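  • The resolver logic described above can be sketched as follows, reusing the illustrative PartitionGuid record from the earlier sketch; the GuidResolver name and its methods are assumptions.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Hypothetical sketch of the GUID resolver: each member registers the GUIDs
 * it can see on its local disk, and the resolver keeps, per partition, the
 * GUID with the highest version, remembering which member reported it.
 */
public class GuidResolver {

    /** partition id -> newest GUID seen so far */
    private final Map<Integer, PartitionGuid> newest = new HashMap<>();
    /** partition id -> member that reported the newest GUID (recovery assignee) */
    private final Map<Integer, Integer> assignee = new HashMap<>();

    /** Called by each member with the GUIDs visible on its local disk. */
    public synchronized void register(int reportingMemberId, List<PartitionGuid> visible) {
        for (PartitionGuid guid : visible) {
            PartitionGuid current = newest.get(guid.partitionId());
            if (current == null || guid.compareTo(current) > 0) {
                newest.put(guid.partitionId(), guid);
                assignee.put(guid.partitionId(), reportingMemberId);
            }
        }
    }

    /** Recovery assignment: which member restores which (newest) store. */
    public synchronized Map<Integer, Integer> recoveryAssignment() {
        return Map.copyOf(assignee);
    }
}
```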
  • FIG. 12 illustrates an exemplary flow chart for supporting persistent store versioning and integrity in a distributed data grid, in accordance with an embodiment of the invention.
  • the system can receive a plurality of identifiers (e.g. the GUIDs) from one or more members of the distributed data grid, wherein each said identifier is associated with a persisted partition in a persistent storage for the distributed data grid.
  • the system can select an identifier for each partition, wherein each selected identifier is associated with a most recent valid version of a partition.
  • the system can determine a member in the distributed data grid that is responsible for recovering said partition from a persisted partition associated with the selected identifier.
  • FIG. 13 shows an illustration of providing a persistent snapshot of a running system in a distributed data grid, in accordance with an embodiment of the invention.
  • a distributed data grid 1300 can support various cache services 1320 using an in-memory data store 1302 .
  • the system allows a user to use a management tool 1310 to take a snapshot 1301 of the running system on the in-memory data store 1302 that supports the cache services 1320 on-demand, at any particular time.
  • the snapshot 1301 can be used to make a backup of the running system overnight.
  • the system can suspend the cache services 1320 , prior to taking the snapshot 1301 .
  • the system can provide a consistent point in time for taking the snapshot 1301 .
  • the cache service 1320 can be resumed after the snapshot 1301 is taken.
  • the snapshot 1301 can provide a consistent view of each partitioned cache service 1320 .
  • the snapshot 1301 can provide a catalogue of state information of the running system, including metadata 1311 and cache data 1312 for the partitioned cache services 1320 .
  • the system can store the snapshot 1301 either in a central location (e.g. a SAN 1321 ) or in distributed local disks 1322 .
  • the system can use a pluggable (or portable) archiver 1303 to retrieve the persisted state information of the snapshot 1301 from the distributed local disks 1322 , and can create a single archive unit 1330 , which can be used for auditing or other purposes.
  • the system allows a user to take a snapshot of the state of a partitioned cache service in a distributed data grid 1300 , instead of persisting the cache content in the distributed data grid 1300 in a continuing fashion.
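  • A minimal sketch of the suspend/snapshot/resume sequence described above; the SnapshotAware interface and its method names are hypothetical stand-ins, not the actual management API.

```java
/**
 * Hypothetical sketch of the on-demand snapshot flow: suspend the cache
 * service to get a consistent point in time, persist a catalogue of state
 * (metadata plus cache data), then resume the service.
 */
public class SnapshotTool {

    /** Minimal view of a suspendable, persistable cache service. */
    public interface SnapshotAware {
        void suspend();                   // block mutations, drain in-flight requests
        void persistStateTo(String dir);  // write the metadata + cache data catalogue
        void resume();                    // accept mutations again
    }

    /** Take a consistent snapshot of a running service, e.g. for a nightly backup. */
    public static void takeSnapshot(SnapshotAware service, String snapshotDir) {
        service.suspend();
        try {
            service.persistStateTo(snapshotDir);
        } finally {
            service.resume();             // always resume, even if persistence fails
        }
    }
}
```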
  • FIG. 14 illustrates an exemplary flow chart for providing a persistent snapshot of a running system in a distributed data grid in accordance with an embodiment of the invention.
  • the system allows one or more cache services to run on a plurality of cluster members in the distributed data grid.
  • the system can collect a catalogue of state information associated with said one or more cache services from the plurality of cluster members in the distributed data grid.
  • the system can create a snapshot for said one or more cache services running on the distributed data grid.
  • the present invention may be conveniently implemented using one or more conventional general purpose or specialized digital computer, computing device, machine, or microprocessor, including one or more processors, memory and/or computer readable storage media programmed according to the teachings of the present disclosure.
  • Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
  • the present invention includes a computer program product which is a storage medium or computer readable medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the present invention.
  • the storage medium can include, but is not limited to, any type of disk including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.

Abstract

A system and method can support persistence in a distributed data grid, such as providing a persistent snapshot of a running system. The system allows one or more cache services to run on a plurality of cluster members in the distributed data grid. Furthermore, the system can collect a catalogue of state information associated with said one or more cache services from the plurality of cluster members in the distributed data grid, and create a snapshot for said one or more cache services running on the distributed data grid.

Description

    CLAIM OF PRIORITY
  • This application claims priority on U.S. Provisional Patent Application No. 61/915,912, entitled “SYSTEM AND METHOD FOR SUPPORTING PERSISTENCE IN A DISTRIBUTED DATA GRID” filed Dec. 13, 2013, which application is herein incorporated by reference.
  • CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is related to the following patent application(s), each of which is hereby incorporated by reference in its entirety:
  • U.S. patent application titled “SYSTEM AND METHOD FOR SUPPORTING SERVICE LEVEL QUORUM IN A DATA GRID CLUSTER”, application Ser. No. 13/352,203, filed on Jan. 17, 2012 (Attorney Docket No. ORACL-05131US2);
  • U.S. patent application titled “SYSTEM AND METHOD FOR SUPPORTING PERSISTENCE PARTITION DISCOVERY IN A DISTRIBUTED DATA GRID”, application Ser. No. ______, filed ______, 2014 (Attorney Docket No. ORACL-05462US0); and
  • U.S. patent application titled “SYSTEM AND METHOD FOR SUPPORTING PERSISTENT STORE VERSIONING AND INTEGRITY IN A DISTRIBUTED DATA GRID”, application Ser. No. ______, filed ______, 2014 (Attorney Docket No. ORACL-05463US1).
  • COPYRIGHT NOTICE
  • A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
  • FIELD OF INVENTION
  • The present invention is generally related to computer systems, and is particularly related to supporting persistence in a distributed data grid.
  • BACKGROUND
  • Modern computing systems, particularly those employed by larger organizations and enterprises, continue to increase in size and complexity. Particularly, in areas such as Internet applications, there is an expectation that millions of users should be able to simultaneously access an application, which effectively leads to an exponential increase in the amount of content generated and consumed by users, and transactions involving that content. Such activity also results in a corresponding increase in the number of transaction calls to databases and metadata stores, which have a limited capacity to accommodate that demand. This is the general area that embodiments of the invention are intended to address.
  • SUMMARY
  • Described herein are systems and methods that can support persistence in a distributed data grid, such as providing a persistent snapshot of a running system. The system allows one or more cache services to run on a plurality of cluster members in the distributed data grid. Furthermore, the system can collect a catalogue of state information associated with said one or more cache services from the plurality of cluster members in the distributed data grid, and create a snapshot for said one or more cache services running on the distributed data grid.
  • BRIEF DESCRIPTION OF THE FIGURES
  • FIG. 1 is an illustration of a data grid cluster in accordance with various embodiments of the invention.
  • FIG. 2 shows an illustration of supporting persistence in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 3 shows an illustration of using a shared storage to support persistence in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 4 shows an illustration of using distributed local disks to support persistence in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 5 shows an illustration of supporting distributed persistent store recovery in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 6 shows an illustration of coordinating persistent store recovery in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 7 shows an illustration of supporting consistent partition recovery in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 8 illustrates an exemplary flow chart for supporting distributed persistent store recovery in a distributed data grid in accordance with an embodiment of the invention.
  • FIG. 9 shows an illustration of supporting persistent store versioning in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 10 shows an illustration of supporting persistent store integrity in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 11 shows an illustration of restoring the persisted partitions in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 12 illustrates an exemplary flow chart for supporting persistent store versioning and integrity in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 13 shows an illustration of providing a persistent snapshot of a running system in a distributed data grid, in accordance with an embodiment of the invention.
  • FIG. 14 illustrates an exemplary flow chart for providing a persistent snapshot of a running system in a distributed data grid in accordance with an embodiment of the invention.
  • DETAILED DESCRIPTION
  • Described herein are systems and methods that can support persistence in a distributed data grid.
  • Distributed Data Grid
  • In accordance with an embodiment, as referred to herein a “data grid cluster”, or “data grid”, is a system comprising a plurality of computer servers which work together to manage information and related operations, such as computations, within a distributed or clustered environment. The data grid cluster can be used to manage application objects and data that are shared across the servers. Preferably, a data grid cluster should have low response time, high throughput, predictable scalability, continuous availability and information reliability. As a result of these capabilities, data grid clusters are well suited for use in computationally intensive, stateful middle-tier applications. Some examples of data grid clusters, e.g., the Oracle Coherence data grid cluster, can store the information in-memory to achieve higher performance, and can employ redundancy in keeping copies of that information synchronized across multiple servers, thus ensuring resiliency of the system and the availability of the data in the event of server failure. For example, Coherence provides replicated and distributed (partitioned) data management and caching services on top of a reliable, highly scalable peer-to-peer clustering protocol.
  • An in-memory data grid can provide the data storage and management capabilities by distributing data over a number of servers working together. The data grid can be middleware that runs in the same tier as an application server or within an application server. It can provide management and processing of data and can also push the processing to where the data is located in the grid. In addition, the in-memory data grid can eliminate single points of failure by automatically and transparently failing over and redistributing its clustered data management services when a server becomes inoperative or is disconnected from the network. When a new server is added, or when a failed server is restarted, it can automatically join the cluster and services can be failed back over to it, transparently redistributing the cluster load. The data grid can also include network-level fault tolerance features and transparent soft re-start capability.
  • In accordance with an embodiment, the functionality of a data grid cluster is based on using different cluster services. The cluster services can include root cluster services, partitioned cache services, and proxy services. Within the data grid cluster, each cluster node can participate in a number of cluster services, both in terms of providing and consuming the cluster services. Each cluster service has a service name that uniquely identifies the service within the data grid cluster, and a service type, which defines what the cluster service can do. Other than the root cluster service running on each cluster node in the data grid cluster, there may be multiple named instances of each service type. The services can be either configured by the user, or provided by the data grid cluster as a default set of services.
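  • As a small illustration of this service model, the sketch below represents a service identity as a unique name plus a type; the enum mirrors the service types named above, while the ServiceId record itself is an assumption rather than the actual Coherence API.

```java
/**
 * Illustrative data-structure sketch: a cluster service is identified by a
 * unique service name and typed by what it can do.
 */
public class ClusterServices {

    public enum ServiceType { ROOT_CLUSTER, PARTITIONED_CACHE, PROXY }

    /** A named service instance; multiple named instances may share a type. */
    public record ServiceId(String serviceName, ServiceType type) { }

    public static void main(String[] args) {
        // Other than the root cluster service, each type may have many named instances.
        ServiceId orders = new ServiceId("OrdersCache", ServiceType.PARTITIONED_CACHE);
        ServiceId trades = new ServiceId("TradesCache", ServiceType.PARTITIONED_CACHE);
        System.out.println(orders + " " + trades);
    }
}
```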
  • FIG. 1 is an illustration of a data grid cluster in accordance with various embodiments of the invention. As shown in FIG. 1, a data grid cluster 100, e.g. an Oracle Coherence data grid, includes a plurality of cluster members (or server nodes) such as cluster nodes 101-106, having various cluster services 111-116 running thereon. Additionally, a cache configuration file 110 can be used to configure the data grid cluster 100.
  • Persistent Storage of Cache Contents
  • In accordance with an embodiment of the invention, the distributed data grid can provide recoverable persistent storage for different types of cache content and can prevent data loss after the distributed data grid is shut down.
  • FIG. 2 shows an illustration of supporting persistence in a distributed data grid, in accordance with an embodiment of the invention. As shown in FIG. 2, a distributed data grid 200 can include various types of cache content 211-213 in an in-memory data store 202. Furthermore, the distributed data grid 200 can use a persistence layer 201 to persist the cache content 211-213 in a persistent storage 203.
  • The persistence layer 201 allows the persistent storage 203 to use different physical topologies. For example, the persistence layer 201 can store the cache content in a central location, such as a storage area network (SAN) 221, where all members in the distributed data grid 200 can share the same visibility. Alternatively, the persistence layer 201 can store the cache content into different local disks 222, where members of the distributed data grid 200 may have only local visibility.
  • Furthermore, the persistence layer 201 can be agnostic to the choice of the physical topology (e.g. a SAN 221 or distributed local disks 222). For example, the distributed data grid 200 can take advantage of multiple SANs or multiple SAN mount points. Also, the distributed data grid 200 can take advantage of a physical topology that includes multiple SANs that are not shared by the plurality of members. Alternatively, the physical topology may include multiple SANs exporting storage locations, or may include hybrid deployments of local disks and SANs.
  • Additionally, the persistence layer 201 can support partition-wide atomicity of persisted data/metadata, and can provide a transaction guarantee after a restart of the distributed data grid 200. Also, the persistence layer 201 can minimize performance impact and reduce the recovery time needed to restart the distributed data grid 200.
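  • One way the topology-agnostic persistence layer could be expressed, as a sketch: callers resolve a store location per partition without knowing whether it is backed by a SAN mount or a member-local disk. All names here are illustrative assumptions, not the actual Coherence persistence interface.

```java
import java.nio.file.Path;

/**
 * Hypothetical sketch of a topology-agnostic persistence layer: the same
 * interface serves both a shared SAN and distributed local disks.
 */
public interface PersistentStoreProvider {

    /** Resolve the directory that holds the persisted store for a partition. */
    Path storeLocation(int partitionId);

    /** Shared-visibility topology: every member resolves the same SAN mount. */
    class SanProvider implements PersistentStoreProvider {
        private final Path sanMount;
        public SanProvider(Path sanMount) { this.sanMount = sanMount; }
        @Override public Path storeLocation(int partitionId) {
            return sanMount.resolve("partition-" + partitionId);
        }
    }

    /** Local-visibility topology: each member resolves its own local disk. */
    class LocalDiskProvider implements PersistentStoreProvider {
        private final Path localRoot;
        public LocalDiskProvider(Path localRoot) { this.localRoot = localRoot; }
        @Override public Path storeLocation(int partitionId) {
            return localRoot.resolve("partition-" + partitionId);
        }
    }
}
```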
  • FIG. 3 shows an illustration of using a shared storage to support persistence in a distributed data grid, in accordance with an embodiment of the invention. As shown in FIG. 3, a distributed data grid 300, which includes a plurality of members (e.g. the members 301-305 on the machines A-C 311-313), can support various cache services 320.
  • Furthermore, the distributed data grid 300 can use a shared persistent storage, such as a storage area network (SAN) 310, to store the cache content for the cache services 320 in a central location. As shown in FIG. 3, the different members 301-305 on the machines A-C 311-313 can share the same visibility, and can all have access to the persisted partitions 322 in the SAN 310.
  • Thus, the system can recover the persisted cache content and prevent data loss, when the distributed data grid 300 is restarted after a shutdown.
  • FIG. 4 shows an illustration of using distributed local disks to support persistence in a distributed data grid, in accordance with an embodiment of the invention. As shown in FIG. 4, a distributed data grid 400, which includes a plurality of members (e.g. the members 401-405 on the machines A-C 411-413), can support various cache services 420.
  • Furthermore, the distributed data grid 400 can store the cache content for the cache services 420 into the local disks on different machines. For example, the members 401-402 can store the related cache content into the local disk A 431 on machine A 411 (e.g. the persisted partitions 421). Also, the members 403-404 can store the related cache content into the local disk B 432 on the machine B 412 (e.g. the persisted partitions 422), and the machine C 413 can store the related cache content into the local disk C 433 on the machine C 413 (e.g. the persisted partitions 423).
  • Thus, the distributed data grid 400 can support the automatic recovery of various types of cache content in a distributed fashion, and prevent data loss during the restart of the distributed data grid 400.
  • Distributed Persistent Store Recovery
  • In accordance with an embodiment of the invention, the distributed data grid can support persistent store recovery in a distributed fashion.
  • FIG. 5 shows an illustration of supporting distributed persistent store recovery in a distributed data grid, in accordance with an embodiment of the invention. As shown in FIG. 5, a distributed data grid 500 can include a plurality of members, e.g. members 501-505, and can persist the cache content using the distributed local disks, e.g. local disks A-C 511-513.
  • Furthermore, each member in the distributed data grid 500 may only have visibility to the partitions persisted in the local disk. For example, the member 501 and the member 502 may only be aware of the persisted partitions 521 in the local disk A 511, while the member 503 and the member 504 may only be aware of the persisted partitions 522 in the local disk B 512 and the member 505 may only be aware of the persisted partitions 523 in the local disk C 513.
  • In accordance with an embodiment of the invention, the distributed data grid 500 can use an internal protocol to discover the persisted partitions 521-523 on different local disks A-C 511-513. For example, the discovery protocol supports the persistent store recovery during both the cluster cold-start/restart scenario and the multiple-node failure scenario (e.g. with a loss of a primary owner of a partition and/or one or more backup owners of the partition).
  • As shown in FIG. 5, the distributed data grid 500 can use a coordinator member 510 to coordinate the recovery of various persisted partitions 521-523 in the distributed data grid 500. The coordinator member 510 can send a distributed query to other members 501-505 in the distributed data grid 500 in order to obtain a complete list of persisted partitions 521-523.
  • In accordance with an embodiment of the invention, the coordinator member 510 can use a pluggable partition assignment strategy component 520 to determine the partition recovery assignment 540. For example, the system can go down the list of partitions and examine which members can see a version of each partition. Then, the system can determine which member should be used to recover which partition, based on a synchronized partition ownership view 530.
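  • A minimal sketch of such a pluggable assignment strategy, under stated assumptions: given the synchronized view of which members can see which persisted partitions, it picks a least-loaded visible member per partition, which also spreads the recovery work across the grid. All names are illustrative.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Hypothetical sketch of a pluggable partition assignment strategy for
 * recovery: spread the persisted partitions over the members that can see
 * them, so no single member becomes a recovery bottleneck.
 */
public class SpreadingAssignmentStrategy {

    /**
     * @param visibility partition id -> members that can see a version of it
     * @return partition id -> member assigned to recover it
     */
    public Map<Integer, Integer> assign(Map<Integer, List<Integer>> visibility) {
        Map<Integer, Integer> assignment = new HashMap<>();
        Map<Integer, Integer> load = new HashMap<>();     // member -> partitions assigned
        for (Map.Entry<Integer, List<Integer>> e : visibility.entrySet()) {
            // Among the members with visibility, choose the least-loaded one.
            int chosen = e.getValue().stream()
                .min((a, b) -> Integer.compare(load.getOrDefault(a, 0), load.getOrDefault(b, 0)))
                .orElseThrow();   // no visible member: the recovery quorum should have blocked recovery
            assignment.put(e.getKey(), chosen);
            load.merge(chosen, 1, Integer::sum);
        }
        return assignment;
    }
}
```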
  • Furthermore, the system can minimize the performance impact caused by adding persistence support to the distributed data grid 500. For example, the system can use an asynchronous messaging process in the distributed data grid 500 for implementing the write operation to a persistent store. Also, the system allows multiple input/output (I/O) operations to be performed concurrently.
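  • A sketch of the asynchronous write path under stated assumptions: the in-memory write returns immediately while a small I/O pool performs the persistent store updates concurrently. The class and its methods are illustrative, not the actual implementation.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Hypothetical sketch of asynchronous persistence writes: the cache write
 * path hands the persistent store update to a pool, so multiple partition
 * stores can perform I/O concurrently without stalling in-memory operations.
 */
public class AsyncPersistenceWriter {

    private final ExecutorService ioPool = Executors.newFixedThreadPool(4);

    /** Fire-and-track write of one entry to a partition's persistent store. */
    public CompletableFuture<Void> writeAsync(int partitionId, byte[] key, byte[] value) {
        return CompletableFuture.runAsync(
            () -> writeToStore(partitionId, key, value), ioPool);
    }

    private void writeToStore(int partitionId, byte[] key, byte[] value) {
        // Placeholder for the actual disk write (e.g. append to the partition's store file).
    }

    public void shutdown() { ioPool.shutdown(); }
}
```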
  • Additionally, the coordinator member 510 can avoid using only one or a few members in the distributed data grid 500 for performing the recovery, which would be prone to creating a performance bottleneck.
  • Also, the system can use a recovery quorum to ensure that all persisted partitions are visible prior to the recovery in order to prevent data loss due to recovery.
  • Additional descriptions of various embodiments of supporting service level quorum in a distributed data grid 500 are provided in U.S. patent application titled “SYSTEM AND METHOD FOR SUPPORTING SERVICE LEVEL QUORUM IN A DATA GRID CLUSTER”, application Ser. No. 13/352,203, filed on Jan. 17, 2012 (Attorney Docket No. ORACL-05131US2), which application is herein incorporated by reference.
  • Thus, the distributed data grid 500 can automatically carry out a recovery of persisted cache contents in a distributed fashion during a restart of the distributed data grid 500.
  • FIG. 6 shows an illustration of coordinating persistent store recovery in a distributed data grid, in accordance with an embodiment of the invention. As shown in FIG. 6, a coordinator member 610 in a distributed data grid 600 can coordinate the recovery of the persisted partitions from the distributed local disks. For example, the coordinator member 610 can direct a member 620 to recover persisted partitions from a local disk 630.
  • At step 601, the coordinator 610 can instruct the member 620 (and all other members in the distributed data grid 600 concurrently) to prepare for restoring persisted partitions. Then, at step 602, the member 620 (possibly along with each other member in the distributed data grid 600) can provide its local partition ownership information back to the coordinator member 610.
  • At step 603, the coordinator member 610 can synchronize a view of the overall partition ownership, after obtaining the partition ownership information from the different members in the distributed data grid 600.
  • Furthermore, at step 604, the coordinator 610 can instruct the member 620 to prepare for recovering the persisted partitions based on the view of the overall partition ownership. At step 605, the member 620 can check for the persisted partitions in the local disk 630. Then, at step 606, the member 620 can report the persisted partitions (e.g. the persisted partition IDs) in the local disk 630 to the coordinator member 610.
  • At step 607, after obtaining information about the persisted partitions from the different members in the distributed data grid 600, the coordinator member 610 can make a decision on how to configure a recovery process, such as determining a recovery assignment.
  • Then, at step 608, the coordinator 610 can provide the partition recovery assignment (e.g. the recover partition IDs) to each member in the distributed data grid 600. Finally, at step 609, the different members in the distributed data grid 600 (including the member 620) can carry out the recovery of the persisted partitions based on the received partition recovery assignment.
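  • The following sketch outlines the coordinator's side of the dialogue of steps 601-609, reusing the hypothetical PartitionAssignmentStrategy interface from the earlier sketch. The Member interface and its methods are assumptions made for illustration; in a real grid these would be asynchronous messages rather than direct method calls.

    import java.util.*;

    // Hypothetical coordinator-side outline of the recovery dialogue in
    // FIG. 6; the Member interface stands in for remote peers and all
    // names are illustrative.
    class RecoveryCoordinator {
        interface Member {
            String name();
            Map<Integer, String> localOwnership();          // steps 601-602
            Set<Integer> reportPersistedPartitions();       // steps 604-606
            void recover(Set<Integer> partitionIds);        // steps 608-609
        }

        void coordinate(List<Member> members, PartitionAssignmentStrategy strategy) {
            // Steps 601-603: collect local ownership from every member and
            // synchronize a view of the overall partition ownership.
            Map<Integer, String> ownershipView = new HashMap<>();
            for (Member m : members) {
                ownershipView.putAll(m.localOwnership());
            }

            // Steps 604-606: each member checks its local disk and reports
            // the persisted partition IDs it can see.
            Map<String, Set<Integer>> visible = new HashMap<>();
            for (Member m : members) {
                visible.put(m.name(), m.reportPersistedPartitions());
            }

            // Step 607: decide the recovery assignment (a real strategy
            // could also consult the synchronized ownership view).
            Map<Integer, String> assignment = strategy.assign(visible);

            // Steps 608-609: hand each member its share of the recovery.
            for (Member m : members) {
                Set<Integer> mine = new TreeSet<>();
                assignment.forEach((p, owner) -> {
                    if (owner.equals(m.name())) mine.add(p);
                });
                m.recover(mine);
            }
        }
    }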
  • FIG. 7 shows an illustration of supporting consistent partition recovery in a distributed data grid, in accordance with an embodiment of the invention. As shown in FIG. 7, a distributed data grid 700 can include a plurality of members, e.g. members 701-705, each of which may only have visibility to the partitions persisted in the local disk.
  • Furthermore, a coordinator member 710 can coordinate the recovery of various persisted partitions 721-723 from the distributed local disks A-C 711-713. Also, the coordinator member 710 can use a pluggable partition assignment strategy component 720 to determine which member should be used to recover which partition.
  • In accordance with an embodiment of the invention, when a machine in the distributed data grid 700 is lost, the system can promote in-memory backups to in-memory primaries. As part of this process, the system can create a new persisted partition on disk and can also create one or more in-memory backups on other members from the data in memory.
  • Additionally, when in-memory data loss occurs due to two or more (depending on the backup count) member processes dying simultaneously, the system can recover a new in-memory primary from the persisted version on disk, when there is a member having visibility to the disk.
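  • The per-partition decision described in the preceding two paragraphs can be summarized by a small sketch (illustrative names only, not a real API): promote a surviving in-memory backup when one exists, fall back to the persisted copy on disk otherwise, and defer to the recovery quorum when neither is available.

    // Illustrative per-partition failure handling: prefer promoting a
    // surviving in-memory backup; fall back to the persisted copy on disk
    // only when every in-memory copy of the partition is gone.
    class PartitionFailover {
        enum Action { PROMOTE_BACKUP, RECOVER_FROM_DISK, DATA_UNAVAILABLE }

        static Action onOwnerLoss(int inMemoryBackupsLeft, boolean diskCopyVisible) {
            if (inMemoryBackupsLeft > 0) {
                return Action.PROMOTE_BACKUP;     // then re-persist and re-create backups
            }
            if (diskCopyVisible) {
                return Action.RECOVER_FROM_DISK;  // new in-memory primary from disk
            }
            return Action.DATA_UNAVAILABLE;       // gated by the recovery quorum
        }
    }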
  • As shown in FIG. 7, when a machine that is associated with the local disk A 711 is lost, the persisted partitions 721 may become unavailable. In such a case, the distributed data grid 700 can rebalance itself. For example, the distributed data grid 700 can promote a back-up partition, which is persisted in either the local disk B 712 or the local disk C 713, to be the primary partition.
  • In accordance with an embodiment of the invention, the distributed data grid 700 can ensure that the system always restores the most recent valid partition. For example, the persisted partitions 722 in the local disk B 712 may contain a newer version of the partition, since the persisted partitions 721 in the local disk A 711 may not have been updated correctly, or may contain an older version of the partition, due to the death of the prior owner of the partition.
  • In accordance with an embodiment of the invention, the distributed data grid 700 can use a recovery quorum for supporting the discovery and/or the recovery of the persisted partitions 721-723. By using the recovery quorum, the recovery from persistence can be gated or protected. Thus, the distributed data grid 700 can ensure that no data is lost, even when the number of members that are lost exceeds the in-memory redundancy target.
  • Also, the distributed data grid 700 can ensure that all persisted partitions are visible prior to recovery. For example, the recovery quorum can be configured such that it guarantees visibility to all of the possible storage locations (such as local disks and/or SANs within the cluster). Additionally, the distributed data grid 700 can recover orphaned partitions from the persistent store and assign them as empty partitions.
  • Furthermore, the distributed data grid 700 can establish different recovery policies based on the recovery quorum. For example, the distributed data grid 700 can establish SAN/shared-storage policies that focus on capacity. Also, the distributed data grid 700 can establish distributed/shared-nothing storage policies that ensure all storage locations are reachable. Additionally, the distributed data grid 700 can establish various policies based on the configured membership size and the host-list.
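  • As a minimal sketch of such a shared-nothing recovery policy, the following illustrative check (an assumption for illustration, not an actual product API) gates recovery until every configured storage location is reachable:

    import java.util.Set;

    // Hypothetical shared-nothing recovery policy: recovery may proceed
    // only when every configured storage location is reachable, so that
    // no persisted partition can be silently left behind.
    class RecoveryQuorumPolicy {
        private final Set<String> configuredLocations;

        RecoveryQuorumPolicy(Set<String> configuredLocations) {
            this.configuredLocations = configuredLocations;
        }

        boolean recoveryAllowed(Set<String> reachableLocations) {
            return reachableLocations.containsAll(configuredLocations);
        }
    }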
  • In accordance with an embodiment of the invention, the system allows various members 701-705 in the distributed data grid 700 to be shut down (and/or restarted) in an orderly fashion, and allows for a graceful suspend/resume of a service or the entire cluster. Additionally, the system can prevent partition transfers and persistent store movements during the shutdown of the distributed data grid. For example, a quiesced service/cluster may not admit new members, may not restore partitions from backup, may not recover orphaned partitions from the persistent store, may not assign empty orphaned partitions, and may not perform partition distribution.
  • FIG. 8 illustrates an exemplary flow chart for supporting distributed persistent store recovery in a distributed data grid, in accordance with an embodiment of the invention. As shown in FIG. 8, at step 801, the system allows a plurality of members in the distributed data grid to persist a plurality of partitions, which are associated with one or more cache services, in a persistent storage. Then, at step 802, a coordinator can synchronize a view of partition ownership among the plurality of members in the distributed data grid. Furthermore, at step 803, the distributed data grid can form a distributed consensus on which partition can be recovered from which member in the distributed data grid.
  • Persistent Store Versioning and Integrity
  • FIG. 9 shows an illustration of supporting persistent store versioning in a distributed data grid, in accordance with an embodiment of the invention. As shown in FIG. 9, a distributed data grid 900 can use various partitions (e.g. a partition 901) in an in-memory data store 920 to support different cache services.
  • Furthermore, the distributed data grid 900 can use a persistent store (e.g. a persisted partition 911) to persist the partition 901 in the distributed local disks 910.
  • The system can provide a unique identifier (ID), or a unique version number 906, for each persisted partition in the distributed local disks 910. As shown in FIG. 9, a member 902 in the distributed data grid 900 can generate a globally unique identifier (GUID) 921 for the persisted partition 911. The GUID 921 can contain various types of information using a special naming format.
  • For example, the GUID 921 can include at least a partition number (or a partition ID 903) and a partition version number 906 associated with the partition 901. Additionally, the GUID 921 can contain a member ID 904, which indicates that the member 902 generated the GUID 921.
  • Additionally, the GUID 921 can include other information, such as a time stamp 905 that indicates the time when the partition 901 is first persisted. The time stamp 905 is a stamp of logical time (e.g. a stamp of a vector clock per partition), instead of a global wall clock. Thus, the system can guarantee that the GUID stamps move monotonically forward in the face of any kind of failure or transfer scenario.
  • In accordance with an embodiment of the invention, the distributed data grid 900 can maintain the version number 906 for each persisted partition in a monotonically increasing order. Thus, the system can account for data mutation at any member and for ownership changes in the distributed data grid 900.
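  • One possible encoding of such a GUID is sketched below in Java; the field layout and the dash-separated naming format are assumptions made for illustration, not the actual format used. The version field increases monotonically per partition, and the time stamp is a logical-clock value rather than wall-clock time.

    // Illustrative GUID layout; the dash-separated format is an assumption,
    // not the actual naming format. The version is maintained in a
    // monotonically increasing order per partition.
    final class PartitionGuid {
        final int partitionId;     // the partition number
        final long version;        // monotonically increasing version number
        final int memberId;        // the member that generated this GUID
        final long logicalTime;    // logical time stamp, not wall-clock time

        PartitionGuid(int partitionId, long version, int memberId, long logicalTime) {
            this.partitionId = partitionId;
            this.version = version;
            this.memberId = memberId;
            this.logicalTime = logicalTime;
        }

        String encode() {
            return partitionId + "-" + version + "-" + memberId + "-" + logicalTime;
        }

        static PartitionGuid decode(String guid) {
            String[] f = guid.split("-");
            return new PartitionGuid(Integer.parseInt(f[0]), Long.parseLong(f[1]),
                                     Integer.parseInt(f[2]), Long.parseLong(f[3]));
        }
    }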
  • FIG. 10 shows an illustration of supporting persistent store integrity in a distributed data grid, in accordance with an embodiment of the invention. As shown in FIG. 10, a persistent store 1001 in a distributed data grid 1000 can contain cache content from different caches A-C 1011-1013, each of which is associated with a cache ID 1021-1023.
  • Furthermore, the system can apply a seal operation 1002 on the persistent store 1001. The seal operation 1002 can ensure that the persistent store 1001 is fully initialized and is eligible to be recovered.
  • Additionally, the system can apply a validation operation 1003 on the persistent store 1001. The validation operation 1003 can check whether the persistent store 1001 has been sealed. For example, the system may decide that the cache content in the persistent store 1001 is not valid if the persistent store 1001 is not sealed.
  • Thus, the system can ensure that the distributed data grid 1000 always restores a valid persisted partition and avoids recovering a partial copy that may be caused by cascading cluster failures.
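  • A minimal sketch of the seal and validation operations, assuming a directory-per-store layout (illustrative only): the seal marker is the last artifact written, so its presence implies the store was fully initialized, and an unsealed store is treated as an invalid partial copy during recovery.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;

    // Sketch of the seal and validation operations over an assumed
    // directory-per-store layout: the seal marker is written last, so its
    // presence implies the store was fully initialized.
    class PersistentStoreIntegrity {
        private static final String SEAL_FILE = "SEALED";

        static void seal(Path storeDir) throws IOException {
            Files.write(storeDir.resolve(SEAL_FILE),
                        "sealed".getBytes(StandardCharsets.UTF_8));
        }

        // A store that was never sealed (e.g. a partial copy left behind by
        // a cascading failure) is treated as invalid and skipped on recovery.
        static boolean validate(Path storeDir) {
            return Files.exists(storeDir.resolve(SEAL_FILE));
        }
    }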
  • FIG. 11 shows an illustration of restoring the persisted partitions in a distributed data grid, in accordance with an embodiment of the invention. As shown in FIG. 11, a distributed data grid 1100 can store various persisted partitions 1111-1113 in distributed local disks 1110.
  • Each persisted partition 1111-1113 stored in the distributed local disks 1110 can be associated with a globally unique identifier (GUID), e.g. GUID 1141-1143. The GUIDs 1141-1143 can contain different types of information that includes at least a partition number (i.e. a partition-id) and a version number.
  • In accordance with an embodiment of the invention, the members 1101-1102 in the distributed data grid 1100 may have different visibility to the persisted partitions 1111-1113 in the distributed local disks 1110. The system can configure the GUIDs 1141-1143 to contain information on which member may have visibility to a particular persisted partition 1111-1113.
  • Additionally, as a result of a cascading failure in the distributed local disks 1110, multiple versions of the same persisted partitions 1111-1113 may be present on the different members 1101-1102 of the distributed data grid 1100. In order to disambiguate these different versions, each of the members 1101-1102 in the distributed data grid 1100 can report the GUIDs 1141-1143 (which can include the partition numbers and other information) for each of the persisted partitions that are found. In accordance with an embodiment of the invention, only the members reporting the presence of the most recent GUID for a partition can be considered for recovery.
  • As shown in FIG. 11, each member 1101-1102 in the distributed data grid 1100 can collect a list of available GUIDs 1121-1122 from the distributed local disks 1110 based on local visibility. Then, each member 1101-1102 can provide (or register) the list of available GUIDs 1121-1122 to a resolver 1103 in the distributed data grid 1100, and the resolver 1103 can determine the newest GUIDs 1130 for different partitions based on the partition number and version number information encoded in the GUIDs 1141-1143.
  • Furthermore, due to the distributed nature of the system, the distributed local disks 1110 may contain multiple different versions of the same partition. In other words, the resolver 1103 may receive multiple GUIDs that contain the same partition number and different version numbers.
  • In such a case, the resolver 1103 can obtain the version number from each GUID associated with the same partition, and determine which GUID has the most recent version number. Also, the distributed data grid 1100 can ensure that the persisted partition with the most recent version number is valid based on performing the seal operation and validation operation.
  • Additionally, the resolver 1103 can determine which member 1101-1102 in the distributed data grid 1100 is responsible for recovering a particular persisted partition 1111-1113, based on the member ID information encoded in the GUIDs 1141-1143.
  • Then, the resolver 1103 can provide the partition recovery assignment, which may include a list of the newest GUIDs 1131-1132, to each different member 1101-1102. Accordingly, the members 1101-1102 can carry out the actual operation that restores the persisted partitions 1111-1113.
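  • The resolver logic can be sketched as follows, reusing the hypothetical PartitionGuid class from the earlier sketch (all names are illustrative): group the reported GUIDs by partition number, keep the one with the highest version number, and then derive each member's recovery assignment from the member IDs encoded in the winning GUIDs.

    import java.util.*;

    // Hypothetical resolver: keep only the most recent version of each
    // partition among the reported GUIDs, then derive the per-member
    // recovery assignment from the member IDs encoded in the winners.
    class GuidResolver {
        // partition ID -> GUID carrying the highest version number seen
        Map<Integer, PartitionGuid> resolveNewest(Collection<String> reportedGuids) {
            Map<Integer, PartitionGuid> newest = new HashMap<>();
            for (String s : reportedGuids) {
                PartitionGuid g = PartitionGuid.decode(s);
                PartitionGuid cur = newest.get(g.partitionId);
                if (cur == null || g.version > cur.version) {
                    newest.put(g.partitionId, g);
                }
            }
            return newest;
        }

        // member ID -> partitions that member is responsible for recovering
        Map<Integer, Set<Integer>> assignment(Map<Integer, PartitionGuid> newest) {
            Map<Integer, Set<Integer>> byMember = new HashMap<>();
            newest.values().forEach(g ->
                byMember.computeIfAbsent(g.memberId, k -> new TreeSet<>())
                        .add(g.partitionId));
            return byMember;
        }
    }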
  • Thus, the system can ensure that the distributed data grid 1100 always restores the newest valid version of any persisted partition, and can avoid recovering a partial copy that may be caused by cascading cluster failures.
  • FIG. 12 illustrates an exemplary flow chart for supporting persistent store versioning and integrity in a distributed data grid, in accordance with an embodiment of the invention. As shown in FIG. 12, at step 1201, the system can receive a plurality of identifiers (e.g. the GUIDs) from one or more members of the distributed data grid, wherein each said identifier is associated with a persisted partition in a persistent storage for the distributed data grid. Then, at step 1202, the system can select an identifier for each partition, wherein each selected identifier is associated with a most recent valid version of a partition. Furthermore, at step 1203, the system can determine a member in the distributed data grid that is responsible for recovering said partition from a persisted partition associated with the selected identifier.
  • Persistent Snapshot of a Running System
  • FIG. 13 shows an illustration of providing a persistent snapshot of a running system in a distributed data grid, in accordance with an embodiment of the invention. As shown in FIG. 13, a distributed data grid 1300 can support various cache services 1320 using an in-memory data store 1302.
  • Furthermore, the system allows a user to use a management tool 1310 to take, on demand at any particular time, a snapshot 1301 of the running system on the in-memory data store 1302 that supports the cache services 1320. For example, the snapshot 1301 can be used to make a backup of the running system overnight.
  • In accordance with an embodiment of the invention, the system can suspend the cache services 1320 prior to taking the snapshot 1301. Thus, the system can provide a consistent point in time for taking the snapshot 1301. Then, the cache services 1320 can be resumed after the snapshot 1301 is taken.
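  • The suspend/snapshot/resume ordering can be sketched as follows; the service interface shown is an assumption for illustration, not an actual API. Suspending first yields a consistent point in time, and the finally block ensures the cache services are resumed even if the snapshot fails.

    // Illustrative ordering only; the interface below is an assumption.
    interface SnapshotableService {
        void suspend();
        void snapshot(String name);
        void resume();
    }

    class SnapshotRunner {
        static void takeConsistentSnapshot(SnapshotableService svc, String name) {
            svc.suspend();
            try {
                svc.snapshot(name);   // sees no in-flight mutations
            } finally {
                svc.resume();         // resume even if the snapshot fails
            }
        }
    }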
  • Additionally, the snapshot 1301 can provide a consistent view of each partitioned cache service 1320. For example, the snapshot 1301 can provide a catalogue of state information of the running system, including metadata 1311 and cache data 1312 for the partitioned cache services 1320. Additionally, the system can store the snapshot 1301 either in a central location (e.g. a SAN 1321) or in distributed local disks 1322.
  • Furthermore, when various artifacts in a snapshot 1301 are created and stored in the distributed local disks 1322, the system can use a pluggable (or portable) archiver 1303 to retrieve the persisted state information of the snapshot 1301 from the distributed local disks 1322, and can create a single archive unit 1330, which can be used for auditing or other purposes.
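  • A sketch of such a pluggable archiver is shown below, assuming the retrieved snapshot artifacts are presented as local files and using a zip file as the single archive unit (all names are illustrative assumptions):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.file.*;
    import java.util.List;
    import java.util.zip.*;

    // Sketch of a pluggable archiver under assumed inputs: the snapshot
    // artifacts retrieved from the distributed local disks are combined
    // into a single zip file that serves as the archive unit.
    class SnapshotArchiver {
        static void archive(List<Path> snapshotArtifacts, Path archiveFile)
                throws IOException {
            try (OutputStream out = Files.newOutputStream(archiveFile);
                 ZipOutputStream zip = new ZipOutputStream(out)) {
                for (Path artifact : snapshotArtifacts) {
                    // entry named after the artifact file (illustrative)
                    zip.putNextEntry(new ZipEntry(artifact.getFileName().toString()));
                    Files.copy(artifact, zip);
                    zip.closeEntry();
                }
            }
        }
    }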
  • Thus, the system allows a user to take a snapshot of the state of a partitioned cache service in a distributed data grid 1300, instead of persisting the cache content in the distributed data grid 1300 in a continuing fashion.
  • FIG. 14 illustrates an exemplary flow chart for providing a persistent snapshot of a running system in a distributed data grid in accordance with an embodiment of the invention. As shown in FIG. 14, at step 1401, the system allows one or more cache services to run on a plurality of cluster members in the distributed data grid. Then, at step 1402, the system can collect a catalogue of state information associated with said one or more cache services from the plurality of cluster members in the distributed data grid. Furthermore, at step 1403, the system can create a snapshot for said one or more cache services running on the distributed data grid.
  • The present invention may be conveniently implemented using one or more conventional general purpose or specialized digital computer, computing device, machine, or microprocessor, including one or more processors, memory and/or computer readable storage media programmed according to the teachings of the present disclosure. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
  • In some embodiments, the present invention includes a computer program product which is a storage medium or computer readable medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the present invention. The storage medium can include, but is not limited to, any type of disk including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
  • The foregoing description of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations will be apparent to the practitioner skilled in the art. The modifications and variations include any relevant combination of the described features. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications that are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.

Claims (20)

What is claimed is:
1. A method for supporting persistence in a distributed data grid, comprising:
allowing one or more cache services to run on a plurality of cluster members in the distributed data grid;
collecting a catalogue of state information associated with said one or more cache services from the plurality of cluster members in the distributed data grid; and
creating a snapshot for said one or more cache services running on the distributed data grid.
2. The method according to claim 1, further comprising:
receiving an on-demand request for the snapshot from a user.
3. The method according to claim 1, further comprising:
providing a consistent view for each said cache service running on the distributed data grid based on the collected information.
4. The method according to claim 1, further comprising:
allowing the state information to include both metadata and cache data for said one or more cache services.
5. The method according to claim 1, further comprising:
storing the snapshot in a central location.
6. The method according to claim 1, further comprising:
storing the snapshot in one or more distributed local disks.
7. The method according to claim 6, further comprising:
allowing each distributed local disk to be only visible to one or more members in the distributed data grid.
8. The method according to claim 6, further comprising:
retrieving persisted state information in the snapshot from the plurality of cluster members in the distributed data grid.
9. The method according to claim 8, further comprising:
creating a single archive unit based on the retrieved persisted state information in the snapshot.
10. The method according to claim 1, further comprising:
persisting cache content for said one or more cache services in a persistent storage associated with the distributed data grid.
11. A system for supporting persistence in a distributed data grid, comprising:
one or more microprocessors;
a distributed data grid running on the one or more microprocessors, wherein the distributed data grid includes a plurality of server nodes that are interconnected with one or more communication channels, and wherein the distributed data grid operates to perform the steps comprising
allowing one or more cache services to run on a plurality of cluster members in the distributed data grid;
collecting a catalogue of state information associated with said one or more cache services from the plurality of cluster members in the distributed data grid; and
creating a snapshot for said one or more cache services running on the distributed data grid.
12. The system according to claim 11, wherein:
the distributed data grid operates to receive an on-demand request for the snapshot from a user.
13. The system according to claim 11, wherein:
the snapshot provides a consistent view for each said cache service running on the distributed data grid based on the collected information.
14. The system according to claim 11, wherein:
the state information includes both metadata and cache data for said one or more cache services.
15. The system according to claim 11, wherein:
the distributed data grid operates to store the snapshot in a central location.
16. The system according to claim 11, wherein:
the distributed data grid operates to store the snapshot in one or more distributed local disks.
17. The system according to claim 16, wherein:
each distributed local disk is only visible to one or more members in the distributed data grid.
18. The system according to claim 16, wherein:
an archiver operates to retrieve persisted state information from the plurality of cluster members in the distributed data grid.
19. The system according to claim 18, wherein:
the archiver operates to create a single archive unit based on the retrieved persisted state information in the snapshot.
20. A non-transitory machine readable storage medium having instructions stored thereon that when executed cause a system to perform the steps comprising:
allowing one or more cache services to run on a plurality of cluster members in the distributed data grid;
collecting a catalogue of state information associated with said one or more cache services from the plurality of cluster members in the distributed data grid; and
creating a snapshot for said one or more cache services running on the distributed data grid.
US14/271,161 2012-01-17 2014-05-06 System and method for providing a persistent snapshot of a running system in a distributed data grid Abandoned US20150169598A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/271,161 US20150169598A1 (en) 2012-01-17 2014-05-06 System and method for providing a persistent snapshot of a running system in a distributed data grid

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US13/352,203 US9063787B2 (en) 2011-01-28 2012-01-17 System and method for using cluster level quorum to prevent split brain scenario in a data grid cluster
US201361915912P 2013-12-13 2013-12-13
US14/271,161 US20150169598A1 (en) 2012-01-17 2014-05-06 System and method for providing a persistent snapshot of a running system in a distributed data grid

Publications (1)

Publication Number Publication Date
US20150169598A1 true US20150169598A1 (en) 2015-06-18

Family

ID=53368673

Family Applications (4)

Application Number Title Priority Date Filing Date
US14/271,142 Active 2035-03-15 US10706021B2 (en) 2012-01-17 2014-05-06 System and method for supporting persistence partition discovery in a distributed data grid
US14/271,161 Abandoned US20150169598A1 (en) 2012-01-17 2014-05-06 System and method for providing a persistent snapshot of a running system in a distributed data grid
US14/271,150 Active 2036-02-24 US10176184B2 (en) 2012-01-17 2014-05-06 System and method for supporting persistent store versioning and integrity in a distributed data grid
US16/227,877 Active 2034-08-11 US10817478B2 (en) 2013-12-13 2018-12-20 System and method for supporting persistent store versioning and integrity in a distributed data grid

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/271,142 Active 2035-03-15 US10706021B2 (en) 2012-01-17 2014-05-06 System and method for supporting persistence partition discovery in a distributed data grid

Family Applications After (2)

Application Number Title Priority Date Filing Date
US14/271,150 Active 2036-02-24 US10176184B2 (en) 2012-01-17 2014-05-06 System and method for supporting persistent store versioning and integrity in a distributed data grid
US16/227,877 Active 2034-08-11 US10817478B2 (en) 2013-12-13 2018-12-20 System and method for supporting persistent store versioning and integrity in a distributed data grid

Country Status (5)

Country Link
US (4) US10706021B2 (en)
EP (2) EP3080698A1 (en)
JP (2) JP6483699B2 (en)
CN (2) CN105830033B (en)
WO (2) WO2015088916A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10191817B2 (en) * 2015-12-28 2019-01-29 Veritas Technologies Llc Systems and methods for backing up large distributed scale-out data systems
CN109640693A (en) 2016-06-14 2019-04-16 谱赛科美国股份有限公司 Steviol glycoside composition, production method and purposes
CN107885671B (en) * 2016-09-30 2021-09-14 华为技术有限公司 Nonvolatile memory persistence method and computing device
US10769019B2 (en) * 2017-07-19 2020-09-08 Oracle International Corporation System and method for data recovery in a distributed data computing environment implementing active persistence
CN110764940A (en) * 2018-07-26 2020-02-07 北京国双科技有限公司 Processing method and device for service exception of distributed system
US11100086B2 (en) * 2018-09-25 2021-08-24 Wandisco, Inc. Methods, devices and systems for real-time checking of data consistency in a distributed heterogenous storage system
CN111352878B (en) * 2018-12-21 2021-08-27 达发科技(苏州)有限公司 Digital signal processing system and method
CN109947375B (en) * 2019-04-04 2021-05-14 江南大学 Distributed storage system optimization method based on partition processing consensus algorithm
CN110309128B (en) * 2019-07-05 2020-07-17 广东铭太信息科技有限公司 Oracle backup file automatic importing device, implementation method thereof and method for importing backup file by using device
CN110795605B (en) * 2020-01-03 2020-05-12 北京东方通科技股份有限公司 Data storage system based on distributed memory grid
US11438224B1 (en) 2022-01-14 2022-09-06 Bank Of America Corporation Systems and methods for synchronizing configurations across multiple computing clusters
CN116361389B (en) * 2023-03-17 2024-03-08 国网江苏省电力有限公司营销服务中心 Data synchronization link method and system based on national network marketing acquisition system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040153615A1 (en) * 2003-01-21 2004-08-05 Koning G. Paul Distributed snapshot process
US20080077622A1 (en) * 2006-09-22 2008-03-27 Keith Robert O Method of and apparatus for managing data utilizing configurable policies and schedules
US20110071981A1 (en) * 2009-09-18 2011-03-24 Sourav Ghosh Automated integrated high availability of the in-memory database cache and the backend enterprise database

Family Cites Families (114)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5819272A (en) 1996-07-12 1998-10-06 Microsoft Corporation Record tracking in database replication
US5784569A (en) 1996-09-23 1998-07-21 Silicon Graphics, Inc. Guaranteed bandwidth allocation method in a computer system for input/output data transfers
US5940367A (en) 1996-11-06 1999-08-17 Pluris, Inc. Fault-tolerant butterfly switch
US6233601B1 (en) 1996-11-14 2001-05-15 Mitsubishi Electric Research Laboratories, Inc. Itinerary based agent mobility including mobility of executable code
US6125368A (en) 1997-02-28 2000-09-26 Oracle Corporation Fault-tolerant timestamp generation for multi-node parallel databases
US5933818A (en) 1997-06-02 1999-08-03 Electronic Data Systems Corporation Autonomous knowledge discovery system and method
US5991894A (en) 1997-06-06 1999-11-23 The Chinese University Of Hong Kong Progressive redundancy transmission
US5999712A (en) 1997-10-21 1999-12-07 Sun Microsystems, Inc. Determining cluster membership in a distributed computer system
US6605120B1 (en) 1998-12-10 2003-08-12 International Business Machines Corporation Filter definition for distribution mechanism for filtering, formatting and reuse of web based content
US6453426B1 (en) 1999-03-26 2002-09-17 Microsoft Corporation Separately storing core boot data and cluster configuration data in a server cluster
US6693874B1 (en) 1999-05-26 2004-02-17 Siemens Information & Communication Networks, Inc. System and method for enabling fault tolerant H.323 systems
US7020695B1 (en) 1999-05-28 2006-03-28 Oracle International Corporation Using a cluster-wide shared repository to provide the latest consistent definition of the cluster (avoiding the partition-in time problem)
US6871222B1 (en) 1999-05-28 2005-03-22 Oracle International Corporation Quorumless cluster using disk-based messaging
US6553389B1 (en) 1999-09-13 2003-04-22 Hewlett-Packard Company Resource availability determination mechanism for distributed data storage system
US6487622B1 (en) 1999-10-28 2002-11-26 Ncr Corporation Quorum arbitrator for a high availability system
AU2001259402A1 (en) 2000-05-02 2001-11-12 Sun Microsystems, Inc. Cluster membership monitor
US20020035559A1 (en) 2000-06-26 2002-03-21 Crowe William L. System and method for a decision engine and architecture for providing high-performance data querying operations
US6915391B2 (en) 2000-12-15 2005-07-05 International Business Machines Corporation Support for single-node quorum in a two-node nodeset for a shared disk parallel file system
JP4637382B2 (en) * 2001-02-13 2011-02-23 サイボウズ株式会社 Data backup system
US7792977B1 (en) 2001-02-28 2010-09-07 Oracle International Corporation Method for fencing shared resources from cluster nodes
US20040179471A1 (en) 2001-03-07 2004-09-16 Adisak Mekkittikul Bi-directional flow-switched ring
US20020169738A1 (en) 2001-05-10 2002-11-14 Giel Peter Van Method and system for auditing an enterprise configuration
US7113980B2 (en) 2001-09-06 2006-09-26 Bea Systems, Inc. Exactly once JMS communication
US7376953B2 (en) 2001-10-29 2008-05-20 Hewlett-Packard Development Company, L.P. Apparatus and method for routing a transaction to a server
US6904448B2 (en) 2001-12-20 2005-06-07 International Business Machines Corporation Dynamic quorum adjustment
AU2003219835A1 (en) 2002-02-22 2003-09-09 Mission Critical Linux, Inc. Clustering infrastructure system and method
US7139925B2 (en) 2002-04-29 2006-11-21 Sun Microsystems, Inc. System and method for dynamic cluster adjustment to node failures in a distributed data system
US6952758B2 (en) 2002-07-31 2005-10-04 International Business Machines Corporation Method and system for providing consistent data modification information to clients in a storage system
US7206836B2 (en) 2002-09-23 2007-04-17 Sun Microsystems, Inc. System and method for reforming a distributed data system cluster after temporary node failures or restarts
US20040153558A1 (en) 2002-10-31 2004-08-05 Mesut Gunduc System and method for providing java based high availability clustering framework
US7451359B1 (en) 2002-11-27 2008-11-11 Oracle International Corp. Heartbeat mechanism for cluster systems
KR100553920B1 (en) 2003-02-13 2006-02-24 인터내셔널 비지네스 머신즈 코포레이션 Method for operating a computer cluster
WO2004077280A2 (en) 2003-02-27 2004-09-10 Bea Systems, Inc. System and method for communications between servers in a cluster
US20040176968A1 (en) 2003-03-07 2004-09-09 Microsoft Corporation Systems and methods for dynamically configuring business processes
US7958026B2 (en) 2003-04-29 2011-06-07 Oracle International Corporation Hierarchical transaction filtering
US20050021737A1 (en) 2003-05-01 2005-01-27 Ellison Carl M. Liveness protocol
US20040267897A1 (en) 2003-06-24 2004-12-30 Sychron Inc. Distributed System Providing Scalable Methodology for Real-Time Control of Server Pools and Data Centers
JP5068000B2 (en) 2003-07-31 2012-11-07 富士通株式会社 Information processing method and program in XML driven architecture
US8234517B2 (en) * 2003-08-01 2012-07-31 Oracle International Corporation Parallel recovery by non-failed nodes
US7551552B2 (en) 2003-10-17 2009-06-23 Microsoft Corporation Method for providing guaranteed distributed failure notification
US7260698B2 (en) 2003-10-30 2007-08-21 International Business Machines Corporation Method and system for page initialization using off-level worker thread
US7464378B1 (en) 2003-12-04 2008-12-09 Symantec Operating Corporation System and method for allowing multiple sub-clusters to survive a cluster partition
US7779386B2 (en) 2003-12-08 2010-08-17 Ebay Inc. Method and system to automatically regenerate software code
US7299378B2 (en) 2004-01-15 2007-11-20 Oracle International Corporation Geographically distributed clusters
US7712077B2 (en) 2004-02-27 2010-05-04 International Business Machines Corporation Method and system for instantiating components conforming to the “COM” specification in custom contexts
US7428733B2 (en) 2004-05-13 2008-09-23 Bea Systems, Inc. System and method for custom module creation and deployment
US7386753B2 (en) 2004-09-02 2008-06-10 International Business Machines Corporation Subscription-based management and distribution of member-specific state data in a distributed computing system
US7640339B1 (en) 2005-02-14 2009-12-29 Sun Microsystems, Inc. Method and apparatus for monitoring a node in a distributed system
US7530059B2 (en) 2005-02-18 2009-05-05 International Business Machines Corporation Method for inlining native functions into compiled java code
US7613774B1 (en) 2005-03-01 2009-11-03 Sun Microsystems, Inc. Chaperones in a distributed system
US7979457B1 (en) 2005-03-02 2011-07-12 Kayak Software Corporation Efficient search of supplier servers based on stored search results
US7698390B1 (en) 2005-03-29 2010-04-13 Oracle America, Inc. Pluggable device specific components and interfaces supported by cluster devices and systems and methods for implementing the same
US7739677B1 (en) 2005-05-27 2010-06-15 Symantec Operating Corporation System and method to prevent data corruption due to split brain in shared data clusters
US7870230B2 (en) 2005-07-15 2011-01-11 International Business Machines Corporation Policy-based cluster quorum determination
US7720971B2 (en) 2005-09-12 2010-05-18 Microsoft Corporation Arbitrating an appropriate back-end server to receive channels of a client session
US20070118693A1 (en) 2005-11-19 2007-05-24 International Business Machines Cor Method, apparatus and computer program product for cache restoration in a storage system
US7627584B2 (en) 2005-11-30 2009-12-01 Oracle International Corporation Database system configured for automatic failover with no data loss
US7882079B2 (en) 2005-11-30 2011-02-01 Oracle International Corporation Database system configured for automatic failover with user-limited data loss
US7756924B2 (en) 2005-12-21 2010-07-13 Microsoft Corporation Peer communities
JP2007219609A (en) 2006-02-14 2007-08-30 Hitachi Ltd Snapshot management device and method
EP2002634B1 (en) 2006-03-27 2014-07-02 Telecom Italia S.p.A. System for enforcing security policies on mobile communications devices
US7676628B1 (en) * 2006-03-31 2010-03-09 Emc Corporation Methods, systems, and computer program products for providing access to shared storage by computing grids and clusters with large numbers of nodes
US8570857B2 (en) 2006-04-07 2013-10-29 At&T Intellectual Property I, Lp Resilient IP ring protocol and architecture
US7975288B2 (en) 2006-05-02 2011-07-05 Oracle International Corporation Method and apparatus for imposing quorum-based access control in a computer system
US20070271584A1 (en) 2006-05-16 2007-11-22 Microsoft Corporation System for submitting and processing content including content for on-line media console
US7953861B2 (en) 2006-08-10 2011-05-31 International Business Machines Corporation Managing session state for web applications
US8775402B2 (en) 2006-08-15 2014-07-08 Georgia State University Research Foundation, Inc. Trusted query network systems and methods
US7814248B2 (en) 2006-12-07 2010-10-12 Integrated Device Technology, Inc. Common access ring/sub-ring system
US9111276B2 (en) 2006-12-08 2015-08-18 Sap Se Secure execution environments for process models
US8104080B2 (en) 2007-01-26 2012-01-24 Microsoft Corporation Universal schema for representing management policy
US9026655B2 (en) 2007-01-31 2015-05-05 Oracle America, Inc. Method and system for load balancing
JP5036041B2 (en) 2007-04-25 2012-09-26 アズビル株式会社 RSTP processing method
US8745584B2 (en) 2007-05-03 2014-06-03 International Business Machines Corporation Dependency injection by static code generation
US20080281959A1 (en) 2007-05-10 2008-11-13 Alan Robertson Managing addition and removal of nodes in a network
US20100312861A1 (en) 2007-11-30 2010-12-09 Johan Kolhi Method, network, and node for distributing electronic content in a content distribution network
US8397227B2 (en) 2007-12-04 2013-03-12 International Business Machines Corporation Automatic deployment of Java classes using byte code instrumentation
US8401994B2 (en) 2009-09-18 2013-03-19 Oracle International Corporation Distributed consistent grid of in-memory database caches
US20090228321A1 (en) 2008-03-04 2009-09-10 Oracle International Corporation Accessing an Enterprise Calendar and Scheduling Group Meetings Using a Mobile Device
US7990850B2 (en) 2008-04-11 2011-08-02 Extreme Networks, Inc. Redundant Ethernet automatic protection switching access to virtual private LAN services
US20090265449A1 (en) 2008-04-22 2009-10-22 Hewlett-Packard Development Company, L.P. Method of Computer Clustering
US7543046B1 (en) 2008-05-30 2009-06-02 International Business Machines Corporation Method for managing cluster node-specific quorum roles
US8719803B2 (en) 2008-06-04 2014-05-06 Microsoft Corporation Controlling parallelization of recursion using pluggable policies
JP5557840B2 (en) * 2008-10-03 2014-07-23 テレフオンアクチーボラゲット エル エム エリクソン(パブル) Distributed database monitoring mechanism
JP5425448B2 (en) * 2008-11-27 2014-02-26 インターナショナル・ビジネス・マシーンズ・コーポレーション Database system, server, update method and program
US8402464B2 (en) 2008-12-01 2013-03-19 Oracle America, Inc. System and method for managing contention in transactional memory using global execution data
US7917596B2 (en) * 2009-01-07 2011-03-29 Oracle International Corporation Super master
US8595714B1 (en) 2009-03-04 2013-11-26 Amazon Technologies, Inc. User controlled environment updates in server cluster
US8626552B2 (en) 2009-03-26 2014-01-07 International Business Machines Corporation Quorum management of appointment scheduling
US8209307B2 (en) 2009-03-31 2012-06-26 Commvault Systems, Inc. Systems and methods for data migration in a clustered file system
US20100268571A1 (en) 2009-04-16 2010-10-21 Mitel Networks Corporation System and method for determining availibility of a group to communicate with a user
GB2472620B (en) * 2009-08-12 2016-05-18 Cloudtran Inc Distributed transaction processing
CN101997823B (en) * 2009-08-17 2013-10-02 联想(北京)有限公司 Distributed file system and data access method thereof
US8108734B2 (en) 2009-11-02 2012-01-31 International Business Machines Corporation Intelligent rolling upgrade for data storage systems
US8578038B2 (en) 2009-11-30 2013-11-05 Nokia Corporation Method and apparatus for providing access to social content
US9286369B2 (en) 2009-12-30 2016-03-15 Symantec Corporation Data replication across enterprise boundaries
US9135268B2 (en) 2009-12-30 2015-09-15 Symantec Corporation Locating the latest version of replicated data files
US8417899B2 (en) 2010-01-21 2013-04-09 Oracle America, Inc. System and method for controlling access to shared storage device
US8725951B2 (en) 2010-04-12 2014-05-13 Sandisk Enterprise Ip Llc Efficient flash memory-based object store
JP5691306B2 (en) * 2010-09-03 2015-04-01 日本電気株式会社 Information processing system
US8600944B2 (en) * 2010-09-24 2013-12-03 Hitachi Data Systems Corporation System and method for managing integrity in a distributed database
US8639758B2 (en) 2010-11-09 2014-01-28 Genesys Telecommunications Laboratories, Inc. System for determining presence of and authorizing a quorum to transact business over a network
US9558256B2 (en) 2010-11-16 2017-01-31 Linkedin Corporation Middleware data log system
US20120158650A1 (en) 2010-12-16 2012-06-21 Sybase, Inc. Distributed data cache database architecture
US9355145B2 (en) 2011-01-25 2016-05-31 Hewlett Packard Enterprise Development Lp User defined function classification in analytical data processing systems
US9262229B2 (en) 2011-01-28 2016-02-16 Oracle International Corporation System and method for supporting service level quorum in a data grid cluster
US20120254118A1 (en) 2011-03-31 2012-10-04 Microsoft Corporation Recovery of tenant data across tenant moves
US9703610B2 (en) 2011-05-16 2017-07-11 Oracle International Corporation Extensible centralized dynamic resource distribution in a clustered data grid
US9609060B2 (en) 2011-08-02 2017-03-28 Nec Corporation Distributed storage system and method
US8584136B2 (en) 2011-08-15 2013-11-12 Sap Ag Context-aware request dispatching in clustered environments
US9621409B2 (en) * 2011-09-15 2017-04-11 Oracle International Corporation System and method for handling storage events in a distributed data grid
US8868546B2 (en) * 2011-09-15 2014-10-21 Oracle International Corporation Query explain plan in a distributed data management system
WO2013141308A1 (en) 2012-03-22 2013-09-26 日本電気株式会社 Distributed storage system, storage control method and program
US9311014B2 (en) 2012-11-29 2016-04-12 Infinidat Ltd. Storage system and methods of mapping addresses of snapshot families
US20140278573A1 (en) * 2013-03-15 2014-09-18 State Farm Mutual Automobile Insurance Company Systems and methods for initiating insurance processing using ingested data

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170093746A1 (en) * 2015-09-30 2017-03-30 Symantec Corporation Input/output fencing optimization
US10320703B2 (en) 2015-09-30 2019-06-11 Veritas Technologies Llc Preventing data corruption due to pre-existing split brain
US10320702B2 (en) * 2015-09-30 2019-06-11 Veritas Technologies, LLC Input/output fencing optimization
US10341252B2 (en) 2015-09-30 2019-07-02 Veritas Technologies Llc Partition arbitration optimization
US11550820B2 (en) * 2017-04-28 2023-01-10 Oracle International Corporation System and method for partition-scoped snapshot creation in a distributed data computing environment

Also Published As

Publication number Publication date
EP3080697A1 (en) 2016-10-19
JP6483699B2 (en) 2019-03-13
JP2016540312A (en) 2016-12-22
US20150169653A1 (en) 2015-06-18
US10817478B2 (en) 2020-10-27
JP2017504880A (en) 2017-02-09
US20150169718A1 (en) 2015-06-18
WO2015088916A1 (en) 2015-06-18
US10176184B2 (en) 2019-01-08
CN105814544A (en) 2016-07-27
CN105830033A (en) 2016-08-03
JP6491210B2 (en) 2019-03-27
US20190121790A1 (en) 2019-04-25
CN105814544B (en) 2020-03-24
CN105830033B (en) 2020-03-24
EP3080698A1 (en) 2016-10-19
US10706021B2 (en) 2020-07-07
WO2015088918A1 (en) 2015-06-18

Similar Documents

Publication Publication Date Title
US10817478B2 (en) System and method for supporting persistent store versioning and integrity in a distributed data grid
US11755415B2 (en) Variable data replication for storage implementing data backup
US8954391B2 (en) System and method for supporting transient partition consistency in a distributed data grid
US20200068038A1 (en) Managing cloud-based storage using a time-series database
US11470146B2 (en) Managing a cloud-based distributed computing environment using a distributed database
US8856091B2 (en) Method and apparatus for sequencing transactions globally in distributed database cluster
US10585599B2 (en) System and method for distributed persistent store archival and retrieval in a distributed computing environment
US20070061379A1 (en) Method and apparatus for sequencing transactions globally in a distributed database cluster
US10423643B2 (en) System and method for supporting resettable acknowledgements for synchronizing data in a distributed data grid
US11567837B2 (en) Journaling data received in a cloud-based distributed computing environment
US11550820B2 (en) System and method for partition-scoped snapshot creation in a distributed data computing environment
US9424147B2 (en) System and method for supporting memory allocation control with push-back in a distributed data grid
WO2007028249A1 (en) Method and apparatus for sequencing transactions globally in a distributed database cluster with collision monitoring

Legal Events

Date Code Title Description
AS Assignment

Owner name: ORACLE INTERNATIONAL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEE, ROBERT H.;HOWES, JASON JOHN;FALCO, MARK;AND OTHERS;SIGNING DATES FROM 20140410 TO 20140417;REEL/FRAME:032834/0539

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION