CA2277981A1 - Shared memory control using multiple linked lists with pointers, status flags, memory block counters and parity - Google Patents


Info

Publication number
CA2277981A1
Authority
CA
Canada
Prior art keywords
block
link list
pointer
data
storage locations
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002277981A
Other languages
French (fr)
Inventor
Joseph C. Lau
Subhash C. Roy
Dirk L. M. Callaerts
Ivo Edmond Nicole Vandeweerd
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Transwitch Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CA2277981A1 publication Critical patent/CA2277981A1/en
Abandoned legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/04Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q11/0428Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
    • H04Q11/0478Provisions for broadband connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F5/00Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F5/06Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
    • G06F5/065Partitioned buffers, e.g. allowing multiple independent queues, bidirectional FIFO's
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2205/00Indexing scheme relating to group G06F5/00; Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F2205/06Indexing scheme relating to groups G06F5/06 - G06F5/16
    • G06F2205/064Linked list, i.e. structure using pointers, e.g. allowing non-contiguous address segments in one logical buffer or dynamic buffer space allocation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5678Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5681Buffer or queue management

Abstract

Apparatus and methods for allocating shared memory utilizing linked lists (LLs) use a management RAM which controls the flow of data to/from a shared memory (RAM), and stores information regarding a number of LLs and a free link list (FLL) in the RAM, and a block pointer to unused RAM locations. A head pointer (HP), tail pointer (TP), block counter and empty flag (EF) are stored for each data link list. The HP and TP each include a block pointer and a position counter. The block counter contains the number of blocks used in the particular queue. An EF indicates an empty queue. The FLL includes a HP, a block counter, and an EF. Each page of RAM receiving the incoming data includes locations for storing data. The last location of the last page in a block stores a next-block pointer plus parity information, and in the last block of a queue, is set to all ones. An independent agent used in the background monitors the integrity of the LL structure.

Description

SHARED MEMORY CONTROL USING MULTIPLE LINKED LISTS WITH POINTERS, STATUS FLAGS, MEMORY
BLOCK COUNTERS AND PARITY
This application is related to co-owned U.S. Serial No.
08/650,910, filed May 17, 1996, which is hereby incorporated by reference herein in its entirety.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to memory management. More particularly, the present invention relates to apparatus and methods of managing a plurality of data queues stored in linked lists in a shared common memory. The invention has particular application to the use of a very large scale integrated circuit (VLSI) for the buffering of telecommunications information such as ATM data, although it is not limited thereto.
2. State of the Art
In high speed communication networks, the management of buffer resources is one mechanism of increasing network performance.
One group of methods of managing buffer resources is known as sharing, where a single RAM is simultaneously utilized as a buffer by a plurality of different channels. Various sharing methods are known (see Velamuri, R. et al., "A Multi-Queue Flexible Buffer Manager Architecture", IEEE Document No. 0-7803-0917-0/93) and each has inherent advantages coupled with inherent disadvantages in terms of blocking probability, utilization, throughput, and delay. What is common to all sharing methods, however, is that a mechanism is required to direct data into appropriate locations in the RAM in a desired order so that the data can be retrieved from the RAM appropriately. One such mechanism which is well known is the use of link lists which are used to manage multiple queues sharing a common memory buffer.
Typically, a link list comprises bytes of data, where each byte has at least one pointer (forward and/or backward) attached to it, thereby identifying the location of the next byte of data in the queue. The link list typically includes extensive initialization and self-check procedures which are carried out by a microprocessor on a non-real-time basis. Thus, the use of standard prior art link list structures to manage multiple queues sharing a common memory is not readily adaptable for VLSI
implementation, and is likewise not particularly suited to the handling of very high speed telecommunications information where processing and handling are dictated by the data rate of the real-time telecommunications signal.
SUMMARY OF THE INVENTION
It is therefore an object of the present invention to provide an apparatus and method for control of memory allocation.
It is another object of the invention to provide a new link list structure for managing queues in a shared memory.
It is a further object of the invention to provide a single VLSI which utilizes a link list structure for managing queues of high speed real time data in a shared memory.
It is an additional object of the invention to provide a link list apparatus and method for controlling the flow of Asynchronous Transfer Mode (ATM) telecommunications data into and out of a shared buffer.
Another object of the invention is to provide an apparatus and method for VLSI control of ATM data into and out of a shared RAM
by utilizing a separate RAM containing information related to the plurality of link lists in the shared RAM.
In accord with the objects of the invention a management RAM
contained within a VLSI is provided for controlling the flow of data into and out of a shared memory (data RAM). The management RAM is preferably structured as an x by y bit RAM which stores information regarding y-2 data link lists in the shared RAM, a free link list in the shared RAM, and a block pointer to unused shared RAM locations. Information stored in the x bits for each data link list includes a head pointer, a tail pointer, a block counter and an empty flag. In a preferred embodiment particularly applicable to the control of ATM data, the head and tail pointers are each composed of a block pointer and a position counter, with the position counter indicating a specific page in a block which is made up of a set of contiguous pages of memory, and the block pointer pointing to the block number. Regardless of how constituted, the head pointer contains the address of the first word of the first memory page of the link list, and the tail pointer preferably contains the address of the first word of the last memory page in the link list. The block counter contains the number of blocks used in the particular queue, and has a non-zero value if at least one page is used in the queue.
The empty flag indicates whether the queue is empty such that the content of the link list should be ignored if the queue-empty flag indicates that the queue is empty.
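By way of illustration only, the per-queue information described above might be represented in C roughly as follows; the type names, field names and field widths are assumptions made for this sketch and are not taken from the specification:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative layout of one management-RAM entry (one per data link list).
 * The specification only states that each entry holds a head pointer, a tail
 * pointer, a block counter and a queue-empty flag, with each pointer split
 * into a block pointer and a position (page) counter. */
typedef struct {
    uint16_t block;      /* block number in the shared data RAM               */
    uint8_t  position;   /* page within the block (two bits suffice for four
                            pages per block)                                  */
} queue_pointer_t;

typedef struct {
    queue_pointer_t head;        /* first word of the first page of the queue */
    queue_pointer_t tail;        /* first word of the last page of the queue  */
    uint16_t        block_count; /* number of blocks used by this queue       */
    bool            empty;       /* when set, the other fields are ignored    */
} llist_info_t;
```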
Information stored in the management RAM for the free link list includes a head pointer, a block counter, and an empty flag, but does not need to include a tail pointer as free blocks are added to the top of the free list according to the preferred embodiment of the invention. As is discussed below in more detail, as data from different channels is directed into blocks of the data RAM, a link list is kept for each channel. As data is read out of the data RAM, blocks become available to receive new data. It is these freed blocks which are added to the free list. Block space can be assigned from the free list before or after the unused blocks (discussed below) are used.
To avoid excessive initialization requirements, an unused-block pointer is provided in the management RAM, as discussed above, and provides a pointer to the next unused block in memory.
Initially all link lists, including the free list, are empty, and the unused block pointer is set to the number of blocks in the memory. As data is written to a block of shared RAM memory, the unused block pointer is decremented. When the unused block pointer equals zero, all of the cell blocks are included in the link lists (including the free link list).
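A minimal sketch of how a new block might be obtained under this scheme, taking a block from the free list when one is available and otherwise drawing on the unused region (the specification allows either order); the function names are hypothetical and build on the structure sketched above, with read_link_word() standing in for the shared-RAM access that recovers the next free block from the M'th word of the removed block's last page:

```c
extern uint16_t read_link_word(uint16_t block);  /* stub: reads the M'th word of a block's last page */

/* Hypothetical block allocator; returns a block number, or -1 when the
 * shared RAM is exhausted. */
int get_free_block(llist_info_t *free_list, uint16_t *unused_blocks)
{
    if (!free_list->empty) {
        uint16_t block = free_list->head.block;
        free_list->block_count--;
        if (free_list->block_count == 0)
            free_list->empty = true;
        else
            free_list->head.block = read_link_word(block) & 0x7FFFu; /* next free block */
        return block;
    }
    if (*unused_blocks > 0)
        return (*unused_blocks)--;   /* unused blocks are handed out from the top down */
    return -1;                       /* shared RAM exhausted */
}
```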
According to a preferred aspect of the invention, each memory page of the shared data RAM receiving the incoming data (which RAM is managed by the management RAM) is composed of M contiguous memory addresses. Depending on the memory type, each address location can be of size B bits. The most common sizes are eight bits (byte), sixteen bits (word), thirty-two bits, and sixty-four bits. The first M-1 locations in the page are used to store data.
The last (M'th) location of the last page in the block preferably is used to store the address of the first location of the next block of the queue plus an odd parity bit; i.e., the M'th location of the last page in the block stores a next block pointer plus parity information. If there are no more blocks in the queue, the M'th location in the last page is set to all ones.
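The link word stored in the M'th location might be formed and checked as in the following sketch, assuming the sixteen-bit words of the preferred embodiment; the bit assignment (pointer in the low bits, odd parity in the top bit) is an assumption, and the all-ones terminator is recognized by callers before the parity test is applied:

```c
#define END_OF_QUEUE 0xFFFFu   /* all ones marks the last block of a queue */

/* Form the 16-bit word stored in the M'th location of a block's last page:
 * the next-block pointer in the low bits plus an odd-parity bit on top. */
uint16_t make_link_word(uint16_t next_block)
{
    uint16_t parity = 1;                        /* start at 1 => odd overall parity */
    for (uint16_t v = next_block; v; v >>= 1)
        parity ^= (v & 1u);
    return (uint16_t)((parity << 15) | (next_block & 0x7FFFu));
}

/* Verify a link word read back from the shared RAM: the total number of one
 * bits (pointer plus parity bit) must be odd. */
bool link_word_ok(uint16_t word)
{
    unsigned ones = 0;
    for (uint16_t v = word; v; v >>= 1)
        ones += (v & 1u);
    return (ones & 1u) == 1u;
}
```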
According to another aspect of the invention, an independent agent is utilized in the background to monitor the integrity of the link list structure. The independent agent monitors the sum of the count of all of the link list block counters plus the unused blocks to ensure that it equals the total number of memory blocks in the common RAM. If not, an error is declared.
Likewise, the independent agent checks each link list stored in the management RAM for the following error conditions: head and tail pointers are equal and the block counter is not of value one; head and tail pointers are different and the block counter is one; and, block counter equals zero. If desired, the independent agent can also monitor the block pointers stored in the M'th location of the last page of each block to determine parity errors and/or to determine errors using parity or CRC.
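A hedged sketch of this background integrity check, covering the block-count sum and the pointer/counter consistency conditions just listed; the argument names are illustrative, and the sketch builds on llist_info_t from the earlier sketch:

```c
/* Returns false if any of the error conditions described above is detected. */
bool monitor_link_lists(const llist_info_t *lists, int num_lists,
                        const llist_info_t *free_list,
                        uint16_t unused_blocks, uint16_t total_blocks)
{
    uint32_t sum = unused_blocks;
    if (!free_list->empty)
        sum += free_list->block_count;

    for (int i = 0; i < num_lists; i++) {
        if (lists[i].empty)
            continue;                             /* contents ignored when empty      */
        sum += lists[i].block_count;

        bool same_block = (lists[i].head.block == lists[i].tail.block);
        if (same_block && lists[i].block_count != 1)
            return false;                         /* equal pointers, counter != 1     */
        if (!same_block && lists[i].block_count == 1)
            return false;                         /* different pointers, counter == 1 */
        if (lists[i].block_count == 0)
            return false;                         /* non-empty queue with zero blocks */
    }
    return sum == total_blocks;                   /* every block must be accounted for */
}
```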
Using the methods and apparatus of the invention, four operations are defined for ATM cell management: cell write, cell read, queue clear, and link list monitoring. In the cell write operation, a cell is stored into a queue. More particularly, when an ATM cell is received at a port w so that it is to be stored in queue number n (which stores cells of priority v for port w), a determination is first made as to whether the queue is empty. If it is not empty, the queue status (i.e., the tail pointer and position counter stored in management RAM) is obtained, and a determination is made as to whether a new block will be needed to be added to the queue. If a new block is not required, the cell is written to the location indicated by the tail pointer position, and the tail pointer position counter for that queue in the management RAM is updated. If this is the last page of a block, the M'th location of the page (in the shared memory) is set to all ones. If a new block is required, either because the queue was empty or because a previous cell had been written into the last page of a block, a block must be obtained.
If it is a first block of a queue, initial queue parameters are stored. If it is not the first block of the link list, a block is obtained from the free list and the free list is updated; or the block is obtained from the unused blocks and the block pointer for the unused blocks is updated. Then, the cell is written to the queue, and the tail pointer, position counter, and block counter for the queue are all updated in the management RAM.
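In rough C terms, the cell write flow just described might look as follows; the shared-RAM access routines are stand-in stubs, a four-page block is assumed as in the ATM example, error handling for an exhausted RAM is omitted, and the sketch builds on the structures and helpers sketched earlier:

```c
#define PAGES_PER_BLOCK 4   /* assumed block size: four pages per block */

/* shared-RAM access stubs assumed to exist elsewhere */
extern void write_cell_to_page(uint16_t block, uint8_t page, const uint8_t *cell);
extern void write_link_word(uint16_t block, uint16_t word);

void cell_write(llist_info_t *q, llist_info_t *free_list,
                uint16_t *unused_blocks, const uint8_t *cell)
{
    if (q->empty) {
        /* first block of the queue: store initial queue parameters */
        int blk = get_free_block(free_list, unused_blocks);
        q->head.block = q->tail.block = (uint16_t)blk;
        q->head.position = q->tail.position = 0;
        q->block_count = 1;
        q->empty = false;
    } else if (q->tail.position == PAGES_PER_BLOCK - 1) {
        /* previous cell filled the last page: link in a new tail block */
        int blk = get_free_block(free_list, unused_blocks);
        write_link_word(q->tail.block, make_link_word((uint16_t)blk));
        q->tail.block = (uint16_t)blk;
        q->tail.position = 0;
        q->block_count++;
    } else {
        q->tail.position++;              /* next page of the current tail block */
    }
    write_cell_to_page(q->tail.block, q->tail.position, cell);
    if (q->tail.position == PAGES_PER_BLOCK - 1)
        write_link_word(q->tail.block, END_OF_QUEUE);  /* provisional all-ones terminator */
}
```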
The cell read operation is utilized where a cell is to be read from a queue. In the cell read operation, the cell indicated by the head pointer and head pointer position counter for that queue is read from the queue. After reading the cell from the queue a determination is made as to whether the cell was either the last cell in a block and/or the last cell in the queue. If it is neither, then the queue status is updated (i.e., the head pointer position counter is changed), and another cell read operation is awaited. If the cell is the last cell in the block, then the queue status preferably is checked for correctness by verifying the parity of the pointer (using a parity bit), and is updated by changing the head pointer and head pointer position counter. The free list is updated by adding the freed block to the head of the free list, and the free list and link list block counters are updated. If the cell is the last cell in the queue, the procedure for the last cell in the block is followed, and the queue empty flag is set.
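A corresponding sketch of the cell read flow, including the return of a freed block to the top of the free list and the parity check on the next-block pointer; again the shared-RAM access routines are stand-ins and the sketch builds on the earlier ones:

```c
extern void read_cell_from_page(uint16_t block, uint8_t page, uint8_t *cell);

/* Hypothetical cell-read path for queue q; returns false on a pointer parity
 * error (treated as catastrophic in the preferred embodiment). */
bool cell_read(llist_info_t *q, llist_info_t *free_list, uint8_t *cell)
{
    read_cell_from_page(q->head.block, q->head.position, cell);

    bool last_in_block = (q->head.position == PAGES_PER_BLOCK - 1);
    bool last_in_queue = (q->head.block == q->tail.block) &&
                         (q->head.position == q->tail.position);

    if (!last_in_block && !last_in_queue) {
        q->head.position++;                   /* next read stays in the same block */
        return true;
    }

    /* the head block is now free: push it onto the top of the free list */
    uint16_t freed = q->head.block;
    uint16_t link  = read_link_word(freed);   /* next-block pointer, saved before relinking */
    if (free_list->empty) {
        write_link_word(freed, END_OF_QUEUE); /* freed block becomes the only free block */
        free_list->block_count = 1;
    } else {
        write_link_word(freed, make_link_word(free_list->head.block));
        free_list->block_count++;
    }
    free_list->head.block = freed;
    free_list->empty = false;

    if (last_in_queue) {
        q->empty = true;                      /* remaining queue fields are now ignored */
        return true;
    }
    if (!link_word_ok(link))
        return false;                         /* parity error on the next-block pointer */
    q->head.block    = link & 0x7FFFu;        /* follow the pointer to the next block */
    q->head.position = 0;
    q->block_count--;
    return true;
}
```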

The queue clear operation is a microprocessor command provided for the purpose of clearing a queue. When the queue clear operation is presented, the queue status is updated by setting the queue empty flag, and the blocks in the queue are added to the head of the free list which is likewise updated.
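The queue clear command might be sketched as follows, walking the queue's own next-block pointers and pushing each block onto the head of the free list before marking the queue empty; as before, the helpers come from the earlier sketches:

```c
void queue_clear(llist_info_t *q, llist_info_t *free_list)
{
    if (q->empty)
        return;
    uint16_t block = q->head.block;
    for (uint16_t n = 0; n < q->block_count; n++) {
        uint16_t next = read_link_word(block);      /* saved before the block is relinked */
        if (free_list->empty) {
            write_link_word(block, END_OF_QUEUE);
            free_list->block_count = 1;
        } else {
            write_link_word(block, make_link_word(free_list->head.block));
            free_list->block_count++;
        }
        free_list->head.block = block;
        free_list->empty = false;
        block = next & 0x7FFFu;                     /* next block of the cleared queue */
    }
    q->empty = true;                                /* pointers and counter are now ignored */
}
```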
The link list monitoring operation is the agent which monitors the integrity of the link list structure whenever the cell write, cell read, and queue clear operations are not running. As set forth above, the link list monitoring operation monitors the linked lists for errors by checking that the sum of the count of all of the link list block counters plus the unused blocks equals the total number of memory blocks in the common RAM, that when head and tail pointers are equal the block counter is set to one, that when head and tail pointers are different the block counter is not set to one, etc.
Additional objects and advantages of the invention will become apparent to those skilled in the art upon reference to the detailed description taken in conjunction with the provided figures.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a block diagram of an apparatus incorporating the link list memory management RAM of the invention.
Figure 2 is a chart showing the structure of the memory management RAM of Figure 1.
Figure 3a is a diagram of an example of the shared data memory of the apparatus of Figure 1.
Figure 3b is a diagram of the details of a page of one of the blocks shown in Figure 3a.
Figure 3c is a diagram of an example of the information contained in the memory management RAM of Fig. 1 for managing the shared data memory example of Figure 3a.

Figures 4a - 4d are flow charts for the write, read, queue clear, and link list monitoring operations carried out by the flow controller of the apparatus of Figure 1.
Figures 5a-5d are state machine diagrams for a write, read, clear, and monitor state machine according to the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The invention will now be described with reference to the physical layer VLSI portion of an ATM destination switch described in parent U.S. Serial No. 08/650,910, although it is not limited thereto. As seen in Fig. 1, and as discussed in the parent application, the physical layer portion 130 of the ATM destination switch 100 preferably includes a UTOPIA interface 150, a managing RAM 162, a flow controller 166, a microprocessor interface 167, channel interface buffers 170, and a RAM interface 175. The flow controller 166 is coupled to the UTOPIA interface 150, the managing RAM 162, the microprocessor interface 167, the channel interface buffers 170, and the RAM interface 175. The UTOPIA interface generally receives cells of ATM data in a byte-wide format, and passes them to the flow controller 166. Based on the destination of the cell (as discussed in the parent application), and the priority of the cell, the flow controller 166 writes the cell into an appropriate output buffer 170. The output buffer is preferably capable of storing at least two ATM cells so that one cell can be read out of the buffer as another is being read into the buffer without conflict. If buffer space is not available for a particular cell at a particular time, the flow controller 166 forwards the ATM cell via the RAM interface 175 to a desired location in a shared RAM 180 (which may be on or off chip) based on information contained in the managing RAM 162 as discussed in more detail below. When room becomes available in the output buffer 170 for the cell, the flow controller 166 reads the data out of the shared RAM 180, and places it in the buffer 170. In the background, when not receiving data from the UTOPIA interface, and when not reading data from or writing data to the shared RAM 180 or writing data to the buffers, the flow controller 166 monitors the integrity of the link list structure contained in the managing RAM, as is described in more detail below. In addition, the flow controller 166 can perform various functions in response to microprocessor commands received via the microprocessor interface 167.
The managing RAM 162 may serve various functions, including providing information for assisting in the processing of the header of the ATM cell as discussed in the parent application hereto. For purposes of this invention, however, the managing RAM 162, or at least a portion thereof, is preferably provided as an x bit by y word RAM for the purpose of managing y-2 link lists which are set up in the shared RAM 180 (y-2 equalling the product of w ports times v priorities). Thus, as seen in Fig. 2, a link list information structure for y-2 data queues includes: a head pointer, a tail pointer, a block counter, and a queue empty flag for each of the y-2 data queues; a free list block pointer, block counter, and queue empty flag for a free list; and a block pointer for the unused blocks of memory. Each head pointer and tail pointer preferably includes a block pointer and a position counter, with the block pointer used for pointing to a block in the memory, and the position counter being used to track pages within a block of memory. Thus, for example, where ATM cells of fifty-three bytes of data are to be stored in the shared memory, and each cell is to be stored on a "page", a block having four contiguous pages may be arranged with the position counter being a two bit counter for referencing the page of a block. The block counter for each queue is used to reference the number of blocks contained within the queue. The queue empty flag when set indicates that the queue is empty, and that the pointers contained within the queue as well as the block count can be ignored.
As suggested above, the head pointer for each link list queue contains the address of the first word of the first memory page of the queue in the shared memory. The tail pointer for each link list queue contains the address of the first word of the last memory page in the queue. Each memory page of the shared memory is composed of M contiguous memory addresses. Depending on the memory type, each address location can be of size B bits, with common sizes being eight bits (byte), sixteen bits (word), thirty-two bits, or sixty-four bits. In accord with the preferred embodiment of the invention, the address locations are sixteen bits in length with the first M-1 locations in a page containing the stored information. The M'th location of a last page in a block is used to store a next block pointer which is set to the first location of the next block plus an odd parity bit. Where the block is the last block in the queue, the M'th location of the last page in the last block is set to all ones.
Where the page is neither the last page of the block, nor the last block in the queue, the M'th location of the page is not utilized. In the preferred embodiment of the invention used with respect to ATM telecommunications data, each page is thirty-two words in length (i.e., M = 32), with each word being sixteen bits. Thus, an ATM cell of fifty-three bytes can be stored on a single page with room to spare. It should be appreciated that in some applications, only the data payload portion of the ATM cell (i.e., forty-eight bytes), and not the overhead portion (five bytes) will be stored in the shared memory. In other applications, such as in switches where routing information is added, cells of more than fifty-three bytes may be stored. Regardless, with a thirty-two word page, system addressing is simplified.
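For the thirty-two word page and four-page block of the preferred embodiment, translating a (block, page) pair from a head or tail pointer into a shared-RAM word address might look as follows; the assumption that blocks are numbered from 1 and packed contiguously from address 0 is made only for this sketch:

```c
#define WORDS_PER_PAGE   32    /* M = 32 sixteen-bit words per page           */
#define PAGES_PER_BLOCK   4    /* one ATM cell per page, four pages per block */

/* Word address in the shared RAM of the first word of a given page. */
uint32_t page_address(uint16_t block, uint8_t page)
{
    return ((uint32_t)(block - 1) * PAGES_PER_BLOCK + page) * WORDS_PER_PAGE;
}

/* The next-block pointer of a block lives in the last (M'th) word of the
 * block's last page. */
uint32_t link_word_address(uint16_t block)
{
    return page_address(block, PAGES_PER_BLOCK - 1) + (WORDS_PER_PAGE - 1);
}
```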
An example of the memory organization of the shared memory is seen in Fig. 3a. In Fig. 3a, two active link list data queues are represented, as well as a free list queue and an Unused block. In particular, memory blocks 512, 124, and 122 are shown linked together for a first queue, memory blocks 511, 125, and 123 are linked together for a second queue, memory blocks 510 - 126 are linked together for the free list queue, and memory blocks 121 - 1 are Unused. It will be appreciated that in the preferred embodiment of the invention, each page contains thirty-two sixteen bit words. Thus, the thirty-second (M'th) word of memory block 512 (seen in more detail in Fig. 3b) contains a pointer (the ten least significant bits) which points to memory block 124, the thirty-second word of memory block 124 contains a pointer which points to memory block 122, and the thirty-second word of memory block 122 contains all ones, thereby indicating the last word in the queue. Likewise, the thirty-second word of memory block 511 contains a pointer which points to memory block 125, the thirty-second word of memory block 125 contains a pointer which points to memory block 123, and the thirty-second word of memory block 123 contains all ones, thereby indicating the last word of that queue.
The free list of Fig. 3a is seen extending from block 510 to block 126. The unused blocks run from block 121 to block 1.
Turning to Fig. 3c, specifics are seen of the management RAM
which would be associated with managing the shared memory in the state of Fig. 3a. In particular, information for link list #1 is seen with a head pointer having a block pointer having a value equal to 512 and a position counter set at "00" to indicate a first page of memory in the block storing data. The tail pointer of the link list #1 information has a block pointer having a value equal to 122 and a position counter set to "11" to indicate that all pages of block 122 are being used. The block counter of the information for link list #1 is set to a value of three, and the queue empty flag is not set (i.e., equals zero). Information for link list #2 is seen with a head pointer having a block pointer having a value equal to 511 and a position counter set at "01" to indicate that the data first occurs at a second page of the block (i.e., the first page already having been read from the block). The tail pointer of the link list #2 information has a block pointer having a value equal to 123 and a position counter set at "10" which indicates that there is no data in the last page of the block. The block counter of the link list #2 information is also set to a value of three, and the queue empty flag is not set. The value of the head and tail pointers and block count for the information of link list #N are not indicated, as the queue empty flag of link list #N is set (equals one), thereby indicating that the pointers and block counter do not store valid data. Likewise, while details of information for other link lists are not shown, the only data of interest would be that the queue empty flags related to all of those link lists would equal one to indicate that no valid data is being stored with reference to those link lists. The head pointer of the free list information has a block pointer set to a value 510, and a block count of 385. The queue empty flag of the free list is not set, as the free list contains data. Finally, the block pointer relating to the Unused queue is shown set to a value of 121. It is noted that in order to increase performance, the free list head pointer and block counter information is preferably implemented in a series of flip-flops, and is thus readily available for purposes discussed below with reference to Figs.
4a-4d. The queue empty flags are also preferably similarly implemented.
It should be appreciated that by providing the queue empty flags and an Unused block pointer, excessive initialization requirements are eliminated. As suggested above, the queue empty flag indicates that there is no valid data for a link list and that the head and tail pointers for that link list and the block counter of that link list can be ignored. The Unused block pointer is provided to point to the next unused block in shared memory. As memory pages are written or used, the Unused block pointer is decremented until a value of zero is reached. At that point, all cell blocks are included in the link lists (including the free list). As previously mentioned, when a block is read from the shared memory, the available block is added to the free list. When a new block is required for adding to a link list, the block space may be taken from either the free list or from the Unused blocks, and available blocks from the free list may be taken either before or after the Unused blocks are used.
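A short sketch of why initialization is inexpensive under this scheme: only the empty flags, the free-list state and the unused-block pointer need be written, and no shared-RAM location is touched; the function name is hypothetical and builds on llist_info_t from the earlier sketch:

```c
void management_init(llist_info_t *lists, int num_lists,
                     llist_info_t *free_list,
                     uint16_t *unused_blocks, uint16_t total_blocks)
{
    for (int i = 0; i < num_lists; i++)
        lists[i].empty = true;         /* pointers and counters may hold garbage */
    free_list->empty = true;
    free_list->block_count = 0;
    *unused_blocks = total_blocks;     /* unused blocks handed out from here down */
}
```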
Turning now to Figure 4a, a flow chart of operations of the flow controller 166 of the apparatus 100 of Figure 1 is seen with respect to writing data to the shared memory. It is noted that while the operations are shown in flow chart form, in accord with the preferred embodiment of the invention, the operations are carried out in hardware. When the flow controller 166 determines that it is receiving an ATM cell which cannot be written into a buffer directly, the flow controller makes a determination at 200 (by checking the management RAM queue empty flag associated with that queue) as to whether the queue which should receive that cell is empty. If the queue is not empty, at 202 the queue status (i.e., the tail pointer and position counter) for that queue is obtained, and at 204 a determination is made as to whether a new block will be needed to be added to the queue (i.e., is the position counter equal to "11"). If a new block is not required, at 206 the cell is written to the shared RAM
location indicated by the tail pointer position counter for that queue (stored in management RAM), and at 208 the tail pointer position counter for that queue is updated. At 210, a determination is made as to whether the cell is being written into the last page of a block. If so, at 212 the flow controller writes a word of all ones into the M'th location of the page (in the shared memory).
If it is determined that a new block of shared RAM is required to store the incoming cell because at 200 the queue was empty, at 214, a block is obtained from either the free list or from unused RAM. If the block is obtained from the free list, at 216, the free list information is updated by changing the head pointer of the free list (i.e., setting the head pointer to the value stored in the M'th location of the last page of the obtained block), and by decrementing the block counter associated with the free list. If the block is obtained from the unused RAM, the block pointer for the unused RAM is decremented at 216.
Regardless, at 218, the cell is written to the queue, and at 220, the tail pointer and block counter for the queue are both updated in the management RAM (with the block counter being set to the value one), and the queue empty flag is changed.
If it is determined that a new block of shared RAM is required to store the incoming cell because at 204 the tail pointer position counter of the link list indicated that the entire tail block is storing data, at 222, a block is obtained from either the free list or from unused RAM. If the block is obtained from the free list, at 224, the free list is updated by changing the head pointer of the free list (i.e., setting the head pointer to the value stored in the M'th location of the last page of the obtained block), and by decrementing the block counter associated with the free list. If the free list becomes empty because a block is removed, the queue empty flag of the free list is set. If the block is obtained from the unused RAM, the block pointer for the unused RAM is decremented at 224.
Regardless, at 228, the cell is written to the queue, and at 230, the tail pointer and block counter for the queue are both updated in the management RAM.
The details of the flow controller operation with respect to a cell read operation (i.e., where a cell is to be read from a queue because a buffer is available to receive the cell) is seen in Fig. 4b. In particular, when a data buffer becomes available, the flow controller at 250 reads the head pointer and tail pointer in the management RAM for the link list associated with the available data buffer. Then, at 252, the flow controller reads from shared memory the cell at the location in the shared memory indicated by the head pointer, and provides the cell to the data buffer. After the data has been read, the flow controller determines at 254 (based on the head pointer and tail pointer) whether the cell was the last cell in the queue, and at 256 (based on the head pointer position counter) whether the cell was the last cell in a block. If it is neither, then at 258 the queue status is updated (i.e., the head pointer position counter is changed), and another cell read operation is awaited. If at 254 it is determined that the cell is the last cell in the queue, at 260, the head pointer for the free list (obtained from the management RAM) is inserted into the last word of the last page of the freed block. Then at 262, the free list in the management RAM is updated by adding the freed block to the head of the free list; i.e., by updating the free list block pointer and block counter. At 264, the queue empty flag is set for the link list which now has no blocks. If the free list was empty prior to adding the freed block, the free list must be initialized (with appropriate head pointer and block counter) and the queue empty flag changed at 264. In addition, in the case where the free list was empty prior to adding the freed block, the last word in the freed block in the shared RAM should be set to all ones.
If at 256 it is determined that the cell which has been read out of shared memory is the last in a block, then at 266, the head pointer for the free list as obtained from the management RAM is inserted into the last word of the last page of the freed block. Then, at 268, the queue status for the link list is updated by changing the block pointer and position counter of the head pointer (to the value contained in the last word of the page of memory being read out of the shared memory), and by decrementing the block counter. Again, it is noted that if the free list was empty prior to adding the freed block, the free list must be initialized (with appropriate head pointer and block counter) and the queue empty flag changed, and the last word in the freed block in the shared RAM should be set to all ones. It is also noted, that upon obtaining the pointer in the M'th location of the last page of the block, according to the preferred embodiment of the invention, at 270, a parity check is done on the pointer. At 272, the calculated parity value is compared to the parity bit stored along with the pointer. Based on the comparison, at 274, a parity error condition can be declared, and sent as an interrupt message via the microprocessor interface port 167 (Fig. 1) to the microprocessor (not shown).
Preferably, when a parity error is found, the microprocessor treats the situation as a catastrophic error and reinitializes the management and data RAMs.
Figure 4c sets out the operation with respect to the queue clear microprocessor command (received via the microprocessor interface 167). When the queue clear operation is presented, at 270 the queue status for the link list is updated by setting the queue empty flag, and at 272, the blocks in the queue are added to the head of the free list which is updated in a manner discussed above (Fig. 4b) with reference to the cell read operation.

The link list monitoring operation seen in Fig. 4d is the hardware agent which monitors the integrity of the link list structure whenever the cell write, cell read, and queue clear operations are not running. The link list monitoring operation preferably monitors four different error conditions. In particular, at 280, the counts of all of the link list block counters (including the free list) where the queue empty flags for those link lists are not set are summed together with the unused blocks and compared to the total number of memory blocks in the common RAM. If the sum does not equal the total number of memory blocks in the common RAM, at 281, an error condition is declared by triggering a microprocessor interrupt bit. At 282, the head and tail block pointers of each link list are compared. If at 284 the head and tail block pointers are determined to be equal, at 286 the block counter is checked, and if not equal to one, at 287 an error condition is declared. If the head and tail block pointers are not equal when compared at 284, at 288 the block counter is checked, and if the block count is equal to one, at 289 an error condition is declared. At 290, the block counter for each link list whose queue empty flag is not set is checked;
and if the block counter equals zero, at 291 an error condition is declared.
According to the preferred embodiment of the invention, the write, read, clear, and monitoring operations of the flow controller are carried out in hardware which may be generated by using HDL code to synthesize hardware gates via use of a VHDL
compiler. Figures 5a-5d are state machine diagrams corresponding to the HDL code, including a write state machine (Fig. 5a), a read state machine (Fig. 5b), a clear state machine (Fig. 5c), and a monitoring state machine (Fig. 5d). The gates created using the code may be standard cell technology or gate array technology.
It should be appreciated that the invention is not intended to be limited to a strictly hardware implementation, but is also intended to apply to memory management utilizing a microprocessor with associated firmware (e.g., a ROM).

There have been described and illustrated herein an apparatus and method for management of shared memory. While particular embodiments of the invention have been described, it is not intended that the invention be limited thereto, as it is intended that the invention be as broad in scope as the art will allow and that the specification be read likewise. Thus, while the invention has been described with reference to VLSI implemented ATM equipment, it will be appreciated that the invention has broader applicability. Also, while specific details of RAM
sizes, etc. have been disclosed, it will be appreciated that the details could be varied without deviating from the scope of the invention. For example, while a management RAM of size x bits by y words has been described for managing y-2 link lists of data, it will be appreciated that the management RAM could assume different sizes. Thus, for example, instead of using a separate word for the unused block pointer, the unused block pointer could be located in the "tail pointer" location of the free list (which itself does not use a tail pointer), thereby providing a management RAM of x bits by y words for managing y-1 link lists of data. In addition, rather than providing the information related to the link lists with the head pointer, tail pointer, block counter, and queue empty flag in that order, the variables of the link list could be reordered. Similarly, instead of providing a shared memory having pages of thirty-two words in depth, each word being sixteen bits in length, it will be appreciated that memories of different lengths and depths could be utilized. Also, rather than locating the pointer to the next block in the last word of the last page of a previous block, it will be appreciated that the pointer could be located in a different location. Further yet, while specific flow charts have been disclosed with respect to various operations, it will be appreciated that various aspects of the operations can be conducted in different orders. In addition, while particular code has been disclosed for generating gate arrays which conduct the operations in hardware, it should be appreciated by those skilled in the art that other code can be utilized to generate hardware, and that hardware and/or firmware can be generated in different manners. Furthermore, while the invention was described with respect to separate RAMs for the management RAM
and the shared data RAM, it will be appreciated that both memories may be part of a larger single memory means. It will therefore be appreciated by those skilled in the art that yet other modifications could be made to the provided invention without deviating from its spirit and scope as so claimed.

Claims (23)

Claims:
1. Apparatus for managing the storage of data in a memory, comprising:
a) a shared memory means having a plurality of data storage locations;
b) control means for receiving said data and forwarding said data to desired of said plurality of data storage locations in said shared memory means, wherein said data is stored in said plurality of data storage locations in the form of a plurality of link lists, each link list having a head;
c) management memory means for storing information regarding each of said plurality of link lists, said information including a head pointer and a queue empty flag for each link list, said head pointer for each particular respective link list pointing to a location of a respective said head of that particular link list, and said queue empty flag for a link list indicating that that link list has no valid data contained therein.
2. An apparatus according to claim 1, wherein:
said control means reads data from said shared memory means, at least a plurality of said data storage locations are in the form of a free link list, said free link list relating to data storage locations from which data has been read by said control means, and said management memory means includes a pointer and a queue empty flag for said free link list.
3. An apparatus for managing the storage of data in a memory, comprising:
a) a shared memory means having a plurality of data storage locations;
b) control means for receiving said data and forwarding said data to desired of said plurality of data storage locations in said shared memory means, and for reading data from said shared memory means, wherein said data is stored in said plurality of data storage locations in the form of a plurality of link lists, each link list having a head;

c) management memory means for storing information regarding each of said plurality of link lists, said information including a head pointer for each link list queue, said head pointer for each particular respective link list pointing to a location of a respective said head of that particular link list, wherein upon initialization, at least a plurality of said data storage locations of said shared memory means are unused, and after utilization, at least a plurality of said data storage locations are in the form of a free link list, said free link list relating to data storage locations from which data has been read by said control means, and wherein said management memory means includes a pointer to at least one of said unused data storage locations, and said management memory means includes a pointer for said free link list.
4. An apparatus according to any preceding claim, wherein:
at least upon initialization, at least a plurality of said data storage locations of said shared memory means are unused, and said management memory means includes a pointer to at least one of said unused data storage locations.
5. An apparatus according to any previous claim, wherein:
said shared memory means is arranged in a plurality of blocks with each block having a plurality of said data storage locations, and said information stored in said management memory means regarding each of said plurality of link list queues includes a block counter for each of said plurality of link list queues, each block counter counting the number of blocks contained in that link list queue.
6. An apparatus according to claim 5, wherein:
each of said plurality of blocks is arranged as a plurality of contiguous pages with each page having a plurality of said data storage locations, and each said head pointer comprises a block pointer which points to a block and a page counter which points to a page in said block.
7. An apparatus according to claim 5, wherein:
each block storing data includes at least one location containing one of (i) a pointer to a next block in the link list, and (ii) an indicator which indicates that the block is the last block in the link list.
8. An apparatus according to claim 7, wherein:
said pointer to a next block in the link list includes a parity bit for said pointer.
9. An apparatus according to claim 6, wherein:
each block storing data includes at least one location in a last page of that block containing one of (i) a pointer to a next block in the link list, and (ii) an indicator which indicates that the block is the last block in the link list.
10. An apparatus according to any previous claim, wherein:
said information includes a tail pointer for each link list containing said data.
11. An apparatus according to claim 6, wherein:
said information includes a tail pointer for each link list containing said data, each of said plurality of blocks is arranged as a plurality of contiguous pages with each page having a plurality of said data storage locations, each said head pointer comprises a first block pointer which points to a block and a page counter which points to a page in said block, and each said tail pointer comprises a second block pointer which points to a tail block and a page counter which points to a page in said tail block.
12. An apparatus according to claim 6, wherein:
said data comprises ATM data received in cell format, and each said page includes enough of said data storage locations to store all of the data contained in an ATM cell.
13. An apparatus according to claim 12, wherein:
each page includes thirty-two sixteen bit word locations.
14. An apparatus according to claim 5, wherein:
said control means reads data from said shared memory means, at least a plurality of said data storage locations are in the form of a free link list, said free link list relating to data storage locations from which data has been read by said control means, and said management memory means includes a pointer, a block counter, and a queue empty flag for said free link list, at least a plurality of said data storage locations of said shared memory means are unused, and said management memory means includes a pointer to said at least one of said unused data storage locations, and said control means includes means for comparing a sum of counts of said block counters of each link list containing data, said free link list, and said unused pointer to the number of blocks in said shared memory means.
15. An apparatus according to claim 14, wherein:
said control means further comprises means for generating an error signal if said sum of counts does not equal said number of blocks in said shared memory means.
16. An apparatus according to claim 10, wherein:
said control means includes means for comparing, for each link list containing data, said tail pointer to said head pointer.
17. An apparatus according to claim 16, wherein:
said control means further comprises means for generating an error signal if said tail pointer and said head pointer for a link list containing data point to an identical block, and said block counter for said link list does not equal one.
18. An apparatus according to claim 16, wherein:
said control means further comprises means for generating an error signal if said tail pointer and said head pointer for a link list containing data point to different blocks, and said block counter for said link list equals one.
19. An apparatus according to claim 5, wherein:
said control means further comprises means for checking the count of each block counter of a link list where the queue empty flag is not set, and for generating an error signal if the count is zero and the queue empty flag is not set.
20. An apparatus according to any preceding claim, wherein:
said control means and said management memory means are contained on a single integrated circuit.
21. An apparatus according to claim 5, wherein:
said management memory means includes said pointer, a block counter, and a queue empty flag for said free link list, and said control means includes means for comparing a sum of counts of said block counters of each link list containing data, said free link list, and said unused pointer to the number of blocks in said shared memory means, and means for generating an error signal if said sum of counts does not equal said number of blocks in said shared memory means.
22. An apparatus according to claim 10, wherein:
said control means includes means for comparing, for each link list containing data, said tail pointer to said head pointer, and means for generating an error signal if either (i) said tail pointer and said head pointer for a link list containing data point to an identical block, and said block counter for said link list does not equal one, or (ii) said tail pointer and said head pointer for a link list containing data point to different blocks, and said block counter for said link list equals one.
23. A method of managing the storage of data utilizing a controller, a shared memory having a plurality of data storage locations, and a management memory, said method comprising:
a) using said controller to forward received data to desired of the plurality of data storage locations in the shared memory, wherein the data is stored in the plurality of data storage locations in the form of a plurality of link lists, each link list having a head; and b) storing information regarding each of the plurality of link lists in the management memory, said information including a head pointer and a queue empty flag for each link list, said head pointer for each particular respective link list pointing to a location of a respective said head of that particular link list, and said queue empty flag for a link list indicating that that link list has no valid data contained therein.
CA002277981A 1997-02-05 1998-02-05 Shared memory control using multiple linked lists with pointers, status flags, memory block counters and parity Abandoned CA2277981A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US08/796,085 US5893162A (en) 1997-02-05 1997-02-05 Method and apparatus for allocation and management of shared memory with data in memory stored as multiple linked lists
US08/796,085 1997-02-05
PCT/US1998/002131 WO1998036357A1 (en) 1997-02-05 1998-02-05 Shared memory control using multiple linked lists with pointers, status flags, memory block counters and parity

Publications (1)

Publication Number Publication Date
CA2277981A1 true CA2277981A1 (en) 1998-08-20

Family

ID=25167251

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002277981A Abandoned CA2277981A1 (en) 1997-02-05 1998-02-05 Shared memory control using multiple linked lists with pointers, status flags, memory block counters and parity

Country Status (6)

Country Link
US (1) US5893162A (en)
EP (1) EP1036360A1 (en)
JP (1) JP2001511281A (en)
CA (1) CA2277981A1 (en)
IL (1) IL130834A (en)
WO (1) WO1998036357A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220382464A1 (en) * 2021-04-07 2022-12-01 Samsung Electronics Co., Ltd. Semiconductor memory device and memory system including the same

Families Citing this family (102)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7188352B2 (en) 1995-07-11 2007-03-06 Touchtunes Music Corporation Intelligent digital audiovisual playback system
EP0786121B1 (en) 1994-10-12 2000-01-12 Touchtunes Music Corporation Intelligent digital audiovisual playback system
JP3222083B2 (en) * 1997-03-21 2001-10-22 沖電気工業株式会社 Shared memory controller
US6065104A (en) * 1997-07-23 2000-05-16 S3 Incorporated Method of embedding page address translation entries within a sequentially accessed digital audio data stream
FR2769165B1 (en) 1997-09-26 2002-11-29 Technical Maintenance Corp WIRELESS SYSTEM WITH DIGITAL TRANSMISSION FOR SPEAKERS
US6341342B1 (en) * 1997-11-04 2002-01-22 Compaq Information Technologies Group, L.P. Method and apparatus for zeroing a transfer buffer memory as a background task
US5941972A (en) * 1997-12-31 1999-08-24 Crossroads Systems, Inc. Storage router and method for providing virtual local storage
USRE42761E1 (en) 1997-12-31 2011-09-27 Crossroads Systems, Inc. Storage router and method for providing virtual local storage
US6253237B1 (en) * 1998-05-20 2001-06-26 Audible, Inc. Personalized time-shifted programming
US6445680B1 (en) * 1998-05-27 2002-09-03 3Com Corporation Linked list based least recently used arbiter
FR2781591B1 (en) 1998-07-22 2000-09-22 Technical Maintenance Corp AUDIOVISUAL REPRODUCTION SYSTEM
FR2781580B1 (en) 1998-07-22 2000-09-22 Technical Maintenance Corp SOUND CONTROL CIRCUIT FOR INTELLIGENT DIGITAL AUDIOVISUAL REPRODUCTION SYSTEM
US8028318B2 (en) 1999-07-21 2011-09-27 Touchtunes Music Corporation Remote control unit for activating and deactivating means for payment and for displaying payment status
FR2787600B1 (en) 1998-12-17 2001-11-16 St Microelectronics Sa BUFFER MEMORY ASSOCIATED WITH MULTIPLE DATA COMMUNICATION CHANNELS
US6529519B1 (en) * 1998-12-22 2003-03-04 Koninklijke Philips Electronics N.V. Prioritized-buffer management for fixed sized packets in multimedia application
US6240498B1 (en) * 1999-01-06 2001-05-29 International Business Machines Corporation Object oriented storage pool apparatus and method
US6246682B1 (en) 1999-03-05 2001-06-12 Transwitch Corp. Method and apparatus for managing multiple ATM cell queues
US6570877B1 (en) 1999-04-07 2003-05-27 Cisco Technology, Inc. Search engine for forwarding table content addressable memory
FR2796482B1 (en) 1999-07-16 2002-09-06 Touchtunes Music Corp REMOTE MANAGEMENT SYSTEM FOR AT LEAST ONE AUDIOVISUAL INFORMATION REPRODUCING DEVICE
US6615302B1 (en) * 1999-09-15 2003-09-02 Koninklijke Philips Electronics N.V. Use of buffer-size mask in conjunction with address pointer to detect buffer-full and buffer-rollover conditions in a CAN device that employs reconfigurable message buffers
US6493287B1 (en) 1999-09-15 2002-12-10 Koninklijke Philips Electronics N.V. Can microcontroller that utilizes a dedicated RAM memory space to store message-object configuration information
FR2805377B1 (en) 2000-02-23 2003-09-12 Touchtunes Music Corp EARLY ORDERING PROCESS FOR A SELECTION, DIGITAL SYSTEM AND JUKE-BOX FOR IMPLEMENTING THE METHOD
FR2805072B1 (en) 2000-02-16 2002-04-05 Touchtunes Music Corp METHOD FOR ADJUSTING THE SOUND VOLUME OF A DIGITAL SOUND RECORDING
FR2805060B1 (en) 2000-02-16 2005-04-08 Touchtunes Music Corp METHOD FOR RECEIVING FILES DURING DOWNLOAD
US6598140B1 (en) * 2000-04-30 2003-07-22 Hewlett-Packard Development Company, L.P. Memory controller having separate agents that process memory transactions in parallel
US6611906B1 (en) * 2000-04-30 2003-08-26 Hewlett-Packard Development Company, L.P. Self-organizing hardware processing entities that cooperate to execute requests
FR2808906B1 (en) 2000-05-10 2005-02-11 Touchtunes Music Corp DEVICE AND METHOD FOR REMOTELY MANAGING A NETWORK OF AUDIOVISUAL INFORMATION REPRODUCTION SYSTEMS
US20020031133A1 (en) * 2000-05-25 2002-03-14 Mcwilliams Patrick Embedded communication protocol using a UTOPIA-LVDS bridge
US20020031141A1 (en) * 2000-05-25 2002-03-14 Mcwilliams Patrick Method of detecting back pressure in a communication system using an utopia-LVDS bridge
US20020031132A1 (en) * 2000-05-25 2002-03-14 Mcwilliams Patrick UTOPIA-LVDS bridge
JP3593954B2 (en) * 2000-05-31 2004-11-24 株式会社島津製作所 Electronic balance
US6829647B1 (en) * 2000-06-09 2004-12-07 International Business Machines Corporation Scaleable hardware arbiter
US6735207B1 (en) 2000-06-13 2004-05-11 Cisco Technology, Inc. Apparatus and method for reducing queuing memory access cycles using a distributed queue structure
FR2811175B1 (en) 2000-06-29 2002-12-27 Touchtunes Music Corp AUDIOVISUAL INFORMATION DISTRIBUTION METHOD AND AUDIOVISUAL INFORMATION DISTRIBUTION SYSTEM
FR2811114B1 (en) 2000-06-29 2002-12-27 Touchtunes Music Corp DEVICE AND METHOD FOR COMMUNICATION BETWEEN A SYSTEM FOR REPRODUCING AUDIOVISUAL INFORMATION AND AN ELECTRONIC ENTERTAINMENT MACHINE
US7406547B2 (en) 2000-08-09 2008-07-29 Seagate Technology Llc Sequential vectored buffer management
FR2814085B1 (en) 2000-09-15 2005-02-11 Touchtunes Music Corp ENTERTAINMENT METHOD BASED ON MULTIPLE CHOICE COMPETITION GAMES
US6851000B2 (en) * 2000-10-03 2005-02-01 Broadcom Corporation Switch having flow control management
US6977941B2 (en) * 2000-11-08 2005-12-20 Hitachi, Ltd. Shared buffer type variable length packet switch
US7113516B1 (en) 2000-11-28 2006-09-26 Texas Instruments Incorporated Transmit buffer with dynamic size queues
US20020108094A1 (en) * 2001-02-06 2002-08-08 Michael Scurry System and method for designing integrated circuits
US7215672B2 (en) * 2001-03-13 2007-05-08 Koby Reshef ATM linked list buffer system
US7062702B2 (en) * 2001-03-14 2006-06-13 Hewlett-Packard Development Company, L.P. Efficient parity operations
US6766480B2 (en) * 2001-03-14 2004-07-20 Hewlett-Packard Development Company, L.P. Using task description blocks to maintain information regarding operations
US6728857B1 (en) * 2001-06-20 2004-04-27 Cisco Technology, Inc. Method and system for storing and retrieving data using linked lists
US7225281B2 (en) * 2001-08-27 2007-05-29 Intel Corporation Multiprocessor infrastructure for providing flexible bandwidth allocation via multiple instantiations of separate data buses, control buses and support mechanisms
US7216204B2 (en) * 2001-08-27 2007-05-08 Intel Corporation Mechanism for providing early coherency detection to enable high performance memory updates in a latency sensitive multithreaded environment
US7487505B2 (en) * 2001-08-27 2009-02-03 Intel Corporation Multithreaded microprocessor with register allocation based on number of active threads
US6868476B2 (en) * 2001-08-27 2005-03-15 Intel Corporation Software controlled content addressable memory in a general purpose execution datapath
US7158964B2 (en) 2001-12-12 2007-01-02 Intel Corporation Queue management
US7107413B2 (en) * 2001-12-17 2006-09-12 Intel Corporation Write queue descriptor count instruction for high speed queuing
US7269179B2 (en) * 2001-12-18 2007-09-11 Intel Corporation Control mechanisms for enqueue and dequeue operations in a pipelined network processor
US20030120966A1 (en) * 2001-12-21 2003-06-26 Moller Hanan Z. Method for encoding/decoding a binary signal state in a fault tolerant environment
US7895239B2 (en) 2002-01-04 2011-02-22 Intel Corporation Queue arrays in network devices
US7181573B2 (en) * 2002-01-07 2007-02-20 Intel Corporation Queue array caching in network devices
US7610451B2 (en) * 2002-01-25 2009-10-27 Intel Corporation Data transfer mechanism using unidirectional pull bus and push bus
US7149226B2 (en) * 2002-02-01 2006-12-12 Intel Corporation Processing data packets
US7239651B2 (en) * 2002-03-11 2007-07-03 Transwitch Corporation Desynchronizer having RAM based shared digital phase locked loops and SONET high density demapper incorporating same
US6765867B2 (en) 2002-04-30 2004-07-20 Transwitch Corporation Method and apparatus for avoiding head of line blocking in an ATM (asynchronous transfer mode) device
US7337275B2 (en) * 2002-08-13 2008-02-26 Intel Corporation Free list and ring data structure management
US10373420B2 (en) 2002-09-16 2019-08-06 Touchtunes Music Corporation Digital downloading jukebox with enhanced communication features
US7822687B2 (en) 2002-09-16 2010-10-26 Francois Brillon Jukebox with customizable avatar
US8332895B2 (en) 2002-09-16 2012-12-11 Touchtunes Music Corporation Digital downloading jukebox system with user-tailored music management, communications, and other tools
US8103589B2 (en) 2002-09-16 2012-01-24 Touchtunes Music Corporation Digital downloading jukebox system with central and local music servers
US8584175B2 (en) 2002-09-16 2013-11-12 Touchtunes Music Corporation Digital downloading jukebox system with user-tailored music management, communications, and other tools
US11029823B2 (en) 2002-09-16 2021-06-08 Touchtunes Music Corporation Jukebox with customizable avatar
US9646339B2 (en) 2002-09-16 2017-05-09 Touchtunes Music Corporation Digital downloading jukebox system with central and local music servers
US7028023B2 (en) * 2002-09-26 2006-04-11 LSI Logic Corporation Linked list
US20040151170A1 (en) * 2003-01-31 2004-08-05 Manu Gulati Management of received data within host device using linked lists
JP5089167B2 (en) * 2003-04-22 2012-12-05 Agere Systems Inc. Method and apparatus for shared multi-bank memory
CA2426619A1 (en) * 2003-04-25 2004-10-25 IBM Canada Limited - IBM Canada Limitée Defensive heap memory management
US7827375B2 (en) * 2003-04-30 2010-11-02 International Business Machines Corporation Defensive heap memory management
JP2005078481A (en) * 2003-09-02 2005-03-24 Toshiba Corp Semiconductor system
US7093065B2 (en) * 2003-12-15 2006-08-15 International Business Machines Corporation Random access memory initialization
US7213099B2 (en) * 2003-12-30 2007-05-01 Intel Corporation Method and apparatus utilizing non-uniformly distributed DRAM configurations and to detect in-range memory address matches
US7752355B2 (en) * 2004-04-27 2010-07-06 International Business Machines Corporation Asynchronous packet based dual port link list header and data credit management structure
US7613213B2 (en) * 2004-08-23 2009-11-03 Transwitch Corporation Time multiplexed SONET line processing
US7899557B2 (en) * 2005-03-01 2011-03-01 ASM Japan K.K. Input signal analyzing system and control apparatus using same
US7251552B2 (en) * 2005-04-22 2007-07-31 Snap-On Incorporated Diagnostic display unit including replaceable display protector
US9171419B2 (en) 2007-01-17 2015-10-27 Touchtunes Music Corporation Coin operated entertainment system
US8332887B2 (en) 2008-01-10 2012-12-11 Touchtunes Music Corporation System and/or methods for distributing advertisements from a central advertisement network to a peripheral device via a local advertisement server
US10290006B2 (en) 2008-08-15 2019-05-14 Touchtunes Music Corporation Digital signage and gaming services to comply with federal and state alcohol and beverage laws and regulations
JP2009104497A (en) * 2007-10-25 2009-05-14 Nec Access Technica Ltd Memory management system and memory management method
WO2009083027A1 (en) * 2007-12-27 2009-07-09 Nokia Corporation Method and system for managing data in a memory
GB2460217B (en) * 2008-04-03 2012-05-09 Broadcom Corp Controlling a read order of asynchronously written data segments in the same queue
US8849435B2 (en) 2008-07-09 2014-09-30 Touchtunes Music Corporation Digital downloading jukebox with revenue-enhancing features
CN102449658A (en) 2009-03-18 2012-05-09 Touchtunes Music Corporation Entertainment server and associated social networking services
US10719149B2 (en) 2009-03-18 2020-07-21 Touchtunes Music Corporation Digital jukebox device with improved user interfaces, and associated methods
US9292166B2 (en) 2009-03-18 2016-03-22 Touchtunes Music Corporation Digital jukebox device with improved karaoke-related user interfaces, and associated methods
US10564804B2 (en) 2009-03-18 2020-02-18 Touchtunes Music Corporation Digital jukebox device with improved user interfaces, and associated methods
KR101446403B1 (en) 2010-01-26 2014-11-04 Touchtunes Music Corporation Digital jukebox device with improved user interfaces, and associated methods
GB2522772B (en) 2011-09-18 2016-01-13 Touchtunes Music Corp Digital jukebox device with karaoke and/or photo booth features, and associated methods
BR112014014414A2 (en) * 2011-12-14 2017-06-13 Optis Cellular Tech Llc Temporary storage resource management method and telecommunication equipment
US11151224B2 (en) 2012-01-09 2021-10-19 Touchtunes Music Corporation Systems and/or methods for monitoring audio inputs to jukebox devices
US20140250252A1 (en) * 2013-03-04 2014-09-04 Silicon Graphics International Corp. First-in First-Out (FIFO) Modular Memory Structure
US9436634B2 (en) * 2013-03-14 2016-09-06 Seagate Technology Llc Enhanced queue management
US9921717B2 (en) 2013-11-07 2018-03-20 Touchtunes Music Corporation Techniques for generating electronic menu graphical user interface layouts for use in connection with electronic devices
US10193831B2 (en) * 2014-01-30 2019-01-29 Marvell Israel (M.I.S.L) Ltd. Device and method for packet processing with memories having different latencies
JP6777545B2 (en) 2014-03-25 2020-10-28 Touchtunes Music Corporation Digital jukebox devices with an improved user interface and related methods
CN108846045B (en) * 2018-05-30 2022-10-18 Hangzhou Jiji Intellectual Property Operation Co., Ltd. Storage method and system for drinking water data of intelligent water cup
CN112134805B (en) * 2020-09-23 2022-07-08 Army Engineering University of the Chinese People's Liberation Army Fast route updating circuit structure and updating method based on hardware implementation
CN114817091B (en) * 2022-06-28 2022-09-27 Jingxin Microelectronics Technology (Tianjin) Co., Ltd. FWFT FIFO system based on linked list, implementation method and equipment
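Several of the citing patents listed above turn on the same underlying data structure — for example US6728857B1 (storing and retrieving data using linked lists), US7028023B2 (linked list), and CN114817091B (linked-list based FWFT FIFO): a shared memory carved into fixed-size blocks, with per-block link pointers chaining the blocks into a free list and one FIFO list per queue. Purely as an illustration of that general structure, and not the method claimed by this or any cited patent, a minimal C sketch might look as follows; the block count, queue count, and all names are assumptions made for the example.

```c
/*
 * Minimal sketch only: a shared memory divided into fixed-size blocks,
 * managed as multiple linked lists -- one free list plus one FIFO list
 * per queue, with a per-queue block counter.  Sizes and names below are
 * assumptions made for this illustration.
 */
#include <stdio.h>

#define NUM_BLOCKS 8            /* total blocks in the shared memory */
#define NUM_QUEUES 2            /* independent FIFO queues           */
#define NIL       (-1)          /* end-of-list marker                */

static int next[NUM_BLOCKS];    /* per-block link pointer            */
static int payload[NUM_BLOCKS]; /* stand-in for the block's data     */
static int free_head;           /* head of the free-block list       */
static int q_head[NUM_QUEUES], q_tail[NUM_QUEUES];
static int q_count[NUM_QUEUES]; /* per-queue block counters          */

static void pool_init(void)
{
    for (int i = 0; i < NUM_BLOCKS; i++)            /* chain every block */
        next[i] = (i + 1 < NUM_BLOCKS) ? i + 1 : NIL;
    free_head = 0;
    for (int q = 0; q < NUM_QUEUES; q++) {
        q_head[q] = q_tail[q] = NIL;
        q_count[q] = 0;
    }
}

/* Take a block from the free list and link it onto the tail of queue q. */
static int enqueue(int q, int value)
{
    if (free_head == NIL)
        return -1;                       /* shared memory exhausted */
    int blk = free_head;
    free_head = next[blk];
    payload[blk] = value;
    next[blk] = NIL;
    if (q_tail[q] == NIL)
        q_head[q] = blk;                 /* queue was empty */
    else
        next[q_tail[q]] = blk;
    q_tail[q] = blk;
    q_count[q]++;
    return 0;
}

/* Unlink the oldest block of queue q and return it to the free list. */
static int dequeue(int q, int *value)
{
    int blk = q_head[q];
    if (blk == NIL)
        return -1;                       /* queue empty */
    *value = payload[blk];
    q_head[q] = next[blk];
    if (q_head[q] == NIL)
        q_tail[q] = NIL;
    next[blk] = free_head;               /* push the block back onto the free list */
    free_head = blk;
    q_count[q]--;
    return 0;
}

int main(void)
{
    int v;
    pool_init();
    enqueue(0, 10); enqueue(1, 20); enqueue(0, 30);
    while (dequeue(0, &v) == 0) printf("queue 0 -> %d\n", v);  /* 10, 30 */
    while (dequeue(1, &v) == 0) printf("queue 1 -> %d\n", v);  /* 20     */
    return 0;
}
```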

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5123101A (en) * 1986-11-12 1992-06-16 Xerox Corporation Multiple address space mapping technique for shared memory wherein a processor operates a fault handling routine upon a translator miss
US5446726A (en) * 1993-10-20 1995-08-29 LSI Logic Corporation Error detection and correction apparatus for an asynchronous transfer mode (ATM) network device
US5390175A (en) * 1993-12-20 1995-02-14 AT&T Corp Inter-cell switching unit for narrow band ATM networks

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220382464A1 (en) * 2021-04-07 2022-12-01 Samsung Electronics Co., Ltd. Semiconductor memory device and memory system including the same
US11947810B2 (en) * 2021-04-07 2024-04-02 Samsung Electronics Co., Ltd. Semiconductor memory device and memory system including the same

Also Published As

Publication number Publication date
US5893162A (en) 1999-04-06
JP2001511281A (en) 2001-08-07
IL130834A0 (en) 2001-01-28
EP1036360A1 (en) 2000-09-20
IL130834A (en) 2003-10-31
WO1998036357A1 (en) 1998-08-20

Similar Documents

Publication Publication Date Title
US5893162A (en) Method and apparatus for allocation and management of shared memory with data in memory stored as multiple linked lists
US6246682B1 (en) Method and apparatus for managing multiple ATM cell queues
EP0702500B1 (en) Method of multicasting and multicast system
US5561807A (en) Method and device of multicasting data in a communications system
US8081654B2 (en) Bandwidth division for packet processing
US7089346B2 (en) Method of operating a crossbar switch
EP0792081A2 (en) A system and method for an efficient ATM adapter/device driver interface
JP2923427B2 (en) ATM switching device having memory
EP0828403B1 (en) Improvements in or relating to an ATM switch
WO1997042737A1 (en) Asynchronous transfer mode cell processing system with multiple cell source multiplexing
JPH07321815A (en) Shared buffer type ATM switch and its multi-address control method
Dittia et al. Design of the APIC: A high performance ATM host-network interface chip
US20070237151A1 (en) Reordering Sequence Based Channels
US20100290466A1 (en) Routing of data streams
US7126959B2 (en) High-speed packet memory
US6310875B1 (en) Method and apparatus for port memory multicast common memory switches
US7362751B2 (en) Variable length switch fabric
US6636524B1 (en) Method and system for handling the output queuing of received packets in a switching hub in a packet-switching network
US5668798A (en) Multiplexed TC sublayer for ATM switch
EP0618709A2 (en) Memory manager for a multichannel network interface
EP0917783B1 (en) Addressable, high speed counter array
US6647477B2 (en) Transporting data transmission units of different sizes using segments of fixed sizes
EP0966174B1 (en) Address release method, and common buffering device for ATM switching system which employs the same method
US7009981B1 (en) Asynchronous transfer mode system for, and method of, writing a cell payload between a control queue on one side of a system bus and a status queue on the other side of the system bus
JP3144386B2 (en) Back pressure control method and device
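Several of the similar documents above deal with multicast out of a common buffer — e.g. US5561807A (multicasting data in a communications system), US6310875B1 (port memory multicast common memory switches), and JPH07321815A (multi-address control of a shared-buffer ATM switch). The recurring bookkeeping problem is that one stored block may be linked onto several output queues at once, so it can only go back to the free list after the last port has read it. The fragment below sketches just that reference-count bookkeeping in the same assumed style as the earlier example; all names and sizes are illustrative only, not taken from any of the listed documents.

```c
/*
 * Illustrative sketch only (assumed names and sizes): reference-counted
 * blocks in a shared buffer, as used when one stored cell is queued to
 * several output ports (multicast).  A block is returned to the free
 * list only after the last port has read it.
 */
#include <stdio.h>

#define NUM_BLOCKS 4

static int refcnt[NUM_BLOCKS];  /* outstanding copies per block */

/* Called once for every output queue the block is linked onto. */
static void block_addref(int blk)
{
    refcnt[blk]++;
}

/* Called when one port has consumed the block; returns 1 when the
 * block may be handed back to the free list. */
static int block_release(int blk)
{
    if (refcnt[blk] > 0)
        refcnt[blk]--;
    return refcnt[blk] == 0;
}

int main(void)
{
    int blk = 0;
    block_addref(blk);   /* queued to port A */
    block_addref(blk);   /* queued to port B as a multicast copy */
    printf("free after A reads: %d\n", block_release(blk));  /* 0 */
    printf("free after B reads: %d\n", block_release(blk));  /* 1 */
    return 0;
}
```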

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued