WO2010047706A1 - Decompressing a video stream on a first computer system, and scaling and displaying the video stream on a second computer system - Google Patents


Info

Publication number
WO2010047706A1
Authority
WO
WIPO (PCT)
Prior art keywords
video stream
format
processor
uncompressed video
computer system
Prior art date
Application number
PCT/US2008/080870
Other languages
French (fr)
Inventor
Lee B. Hinkle
Kent E. Biggs
Original Assignee
Hewlett-Packard Development Company, L.P.
Thomas, Andrew
Priority date
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P. and Thomas, Andrew
Priority to PCT/US2008/080870
Priority to TW098133808A
Publication of WO2010047706A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STBs; Communication protocols; Addressing
    • H04N 21/643 Communication protocols
    • H04N 21/64322 IP
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Adaptive coding
    • H04N 19/102 Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N 19/134 Adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/164 Feedback from the receiver or from the transmission channel
    • H04N 19/169 Adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 Adaptive coding where the coding unit is an image region, e.g. an object
    • H04N 19/172 Adaptive coding where the region is a picture, frame or field
    • H04N 19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N 19/60 Transform coding
    • H04N 19/61 Transform coding in combination with predictive coding

Definitions

  • the video stream (in a first digital format (e.g., MPEG)) is decompressed or decoded, as illustrated by block 18.
  • the decompression likewise may be performed in a variety of ways.
  • the video stream is decompressed by a software-based compression-decompression (CODEC) system executing on the main processor of a computer system.
  • the decompression may be performed by a hardware CODEC or hardware decoder within the computer system, the hardware component specifically designed to perform decompression (i.e., an application specific integrated circuit (ASIC)), and which hardware decoder may itself have an internal processor that executes software.
  • decompression may be accomplished by a combination of software on the main processor and the hardware decoder. Regardless of the precise physical implementation of the decompression, the decompression step receives as input the video stream in the first digital format (e.g., MPEG) and creates an uncompressed video stream.
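The decompression step can be sketched in miniature. The sketch below uses zlib purely as a stand-in codec (the formats named here are MPEG-family codecs, whose decoders are far more involved), and the function name is hypothetical:

```python
import zlib

def decompress_frame(compressed_frame: bytes) -> bytes:
    # Block 18 of Figure 3: input is one portion of the stream in the
    # "first digital format"; output is an uncompressed buffer of
    # raw sample values.
    return zlib.decompress(compressed_frame)
```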
  • the compressed video stream is decompressed to a YUV color space. That is, the compressed video stream is turned into a stream of YUV values, with each set of YUV values applicable to a single spot (e.g., a pixel) on the display.
  • the Y value is a luma component
  • U and V are chrominance components.
  • the compressed video stream is decompressed to a Y':Cb:Cr color space. That is, the compressed video stream is turned into a stream of Y':Cb:Cr values, with each set of Y':Cb:Cr values applicable to a single spot (e.g., a pixel) on the display.
  • the Y value is a luminance component
  • Cb and Cr are chrominance components.
  • Other color spaces (e.g., Y:Pb:Pr, or other packet-based systems based on Red-Green-Blue (RGB)) may be equivalently used.
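As an illustration of the Y':Cb:Cr representation above, a single RGB pixel can be converted using the BT.601 full-range coefficients (an assumption for the sketch; the text names no specific matrix):

```python
def rgb_to_ycbcr(r: int, g: int, b: int):
    # BT.601 full-range: Y' carries the luma, Cb/Cr carry the
    # chrominance, centered at the neutral value 128 for 8-bit samples.
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

For white (255, 255, 255) this yields Y' = 255 with both chrominance components at the neutral value 128.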
  • the uncompressed video stream may be subjected to color space depth conversion, as shown in block 20 of Figure 3.
  • each set of values of the uncompressed video stream represents the luminance and/or chrominance of a particular spot (e.g., pixel) on the screen, and each value may span a certain number of bits.
  • the display device on which the uncompressed video is to be displayed may not have the same color space depth (i.e., number of bits) as the uncompressed video stream.
  • color space depth conversion involves changing and/or adjusting the number of bits each value spans to match or substantially match the capabilities of the computer system on which the uncompressed video is to be displayed.
  • the Y', Cb and Cr values in the MPEG standards span as many as 32 bits each, yet a display device on which the uncompressed video is to be displayed may only have 8 bits of resolution.
  • the various components of the uncompressed video stream are color space depth converted. When the color space depth of the uncompressed video stream and of the computer system on which the uncompressed video stream is to be displayed are substantially the same, color space depth conversion may be omitted.
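A minimal sketch of such a depth conversion, rescaling one sample value from a wider to a narrower bit width (the function name is hypothetical):

```python
def convert_depth(value: int, src_bits: int, dst_bits: int) -> int:
    # Map full-scale in the source depth to full-scale in the
    # destination depth (e.g., 10-bit 1023 -> 8-bit 255).
    return round(value * ((1 << dst_bits) - 1) / ((1 << src_bits) - 1))
```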
  • the uncompressed video stream may be scaled in size, as illustrated by block 22.
  • the uncompressed video stream may have a particular size (aspect ratio) in which the uncompressed video stream was recorded and/or rendered.
  • the size of the display device and/or the size of the display area to be used for the uncompressed video stream on the display device may not match that of the uncompressed video stream as recorded and/or rendered.
  • the video may need size scaling to meet the expected display size.
  • each illustrative Y':Cb:Cr value may be scaled to be applicable to a plurality of pixels.
  • a plurality of illustrative Y':Cb:Cr values may be combined to be applicable to a single pixel.
  • the various components of the uncompressed video stream are scaled prior to displaying the video.
  • the scaling may be omitted.
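The size scaling described above can be sketched with nearest-neighbor sampling, the simplest scheme that both enlarges (one value applied to several pixels) and shrinks (several values collapsed to one); production scalers typically filter rather than merely sample:

```python
def scale_nearest(frame, out_w: int, out_h: int):
    # frame is a row-major grid of sample values (e.g., Y' values);
    # each output pixel copies the nearest input sample.
    in_h, in_w = len(frame), len(frame[0])
    return [[frame[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]
```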
  • the video is displayed on the display device, as shown by block 24.
  • the decryption, decompressing, color space depth conversion and scaling may be a continuous process, possibly operating on a frame-by-frame basis during streaming and display of the video.
  • in one operating philosophy, the central computing devices (e.g., a plurality of high-end servers) perform all illustrative video processing steps (e.g., decrypting, decompressing, color space depth conversion and scaling).
  • the client machines are provided a video stream ready for display.
  • Such an operating philosophy centralizes and limits the number of software licenses and/or specialty hardware devices to just the servers.
  • one server may hold a single license for a particular software CODEC, but provide uncompressed video to a plurality of client computer systems that are not licensed for the particular software CODEC.
  • one server may implement a specialty hardware decoder, and provide video to a plurality of client computer systems that do not implement the hardware decoder.
  • video processing is computationally intensive, and having the server perform all the video processing limits the number of users that can be serviced by the server and/or the number of other tasks that may be performed.
  • the central computing device provides only the compressed video stream, and the client machines perform all the illustrative video processing steps (e.g., decrypting, decompression, color space depth conversion and scaling).
  • Such an operating philosophy removes significant computing load from the servers, but dictates that each client machine be licensed and/or provided with sufficient software (e.g., a software CODEC) and/or hardware (e.g., a hardware decoder) to perform the decompression, and that each client machine have sufficient computing power to perform the decrypting, color depth conversion and scaling.
  • Most client computer systems, while having limited computing power compared to the high-end servers, have sufficient computing power to perform all or a portion of the video processing steps, such as color depth conversion and/or scaling. Moreover, color depth conversion and scaling do not require proprietary software applications and/or specialty hardware.
  • the server systems can serve a greater number of clients than if all the video processing steps are performed at the server level, while maintaining the ability to retain the proprietary software (e.g., software CODECs) and/or hardware at the central locations.
  • managing the CODECs at the server end gives the information technologist the ability to control, to some extent, what video the clients may access.
  • the server 30 performs a portion of the video processing, and each client 32 performs the remaining portion of the video processing.
  • the server 30 performs the video processing steps above the dashed line 36 in Figure 3 (i.e., decryption and decompression) and sends the decrypted and uncompressed video stream to the client 32.
  • the client 32 performs the video processing steps below the dashed line 36 in Figure 3 (i.e., color space depth conversion and size scaling), and then displays the video stream.
  • Dividing the video processing tasks in this way limits and centralizes the expensive and/or license-based processes on the server 30. For example, a limited number of software CODECs and/or hardware decoders may be resident within the server 30, rather than in each client computer 32. Moreover, distribution of a portion of the video processing duties to the clients 32 enables each server 30 to provide video to more clients 32. Color space depth conversion and/or size scaling may not require proprietary software and/or hardware, permitting thin clients and terminals having limited computing power to perform these tasks at the client 32 level. Moreover, dividing video processing tasks between the server 30 and client 32 can address one or more quality issues, such as erratic, pixelated, or jerky video and/or inconsistent audio.
  • Figure 4 illustrates a method in accordance with at least some embodiments.
  • the method starts in block 400.
  • a first computer system obtains a video stream in a first digital format.
  • the video stream is decompressed by the first computer system creating a decompressed video stream in a second digital format.
  • the decompressed video stream is sent to a second computer system.
  • the second computer system can be incapable of decompressing the first digital format, thereby requiring the decompression of the first digital format by the first computer system in block 408.
  • the uncompressed video stream can be transmitted using one or more transfer protocols, for example the uncompressed video stream can be broken into a series of packets, and sent as transmission control protocol-internet protocol (TCP-IP) packets. Moreover, if security is of concern, the illustrative TCP-IP packets may implement a security protocol to ensure only a particular client or set of clients may access the video stream.
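A sketch of the length-prefixed packetization this implies; the function names and the 1400-byte payload limit are assumptions for illustration, and real transport would ride in TCP-IP sockets, possibly under a security protocol:

```python
import struct

def packetize(frame_bytes: bytes, max_payload: int = 1400):
    # Break one uncompressed frame into length-prefixed chunks
    # sized to fit typical Ethernet payloads.
    packets = []
    for off in range(0, len(frame_bytes), max_payload):
        chunk = frame_bytes[off:off + max_payload]
        packets.append(struct.pack("!I", len(chunk)) + chunk)
    return packets

def reassemble(packets) -> bytes:
    # Inverse operation on the receiving (client) side.
    out = bytearray()
    for p in packets:
        (n,) = struct.unpack("!I", p[:4])
        out += p[4:4 + n]
    return bytes(out)
```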
  • the second computer system may or may not have the software CODEC(s) and/or hardware decoders required to decompress the video stream. Still referring to Figure 4, in block 416, the uncompressed video stream is processed by the second computer system, the processing comprising color space depth conversion and scaling.
  • the uncompressed video stream may be sent from the first computer system with a 32-bit luma and/or chrominance component; however, the display device coupled to the client 32 may be capable of only 8-bit resolution.
  • a color space depth conversion can be performed by the client 32 in block 416 to ensure the video matches the resolution of the display device.
  • such scaling may be used if the size and/or aspect ratio of the display device coupled to the second computer system differs from the size and/or aspect ratio of the video supplied by the first computer system.
  • uncompressed video is displayed by the second computer system at the appropriate resolution, size and/or aspect ratio. The method then terminates in block 424.
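The flow of Figure 4 can be summarized in a short sketch; the callables (decompress, send, receive, and so on) are placeholders for the mechanisms the text leaves open:

```python
def first_system(video_stream, decompress, send):
    # Blocks 404-412: obtain the stream in the first digital format,
    # decompress each portion, and send the uncompressed result.
    for portion in video_stream:
        send(decompress(portion))

def second_system(receive, convert_depth, scale, display):
    # Blocks 416 onward: convert color space depth, scale the frame,
    # and display it on the second computer system.
    for frame in receive():
        display(scale(convert_depth(frame)))
```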

Abstract

Decompressing a video stream on a first computer system, and scaling and displaying the video stream on a second computer system. At least some of the illustrative embodiments are methods comprising obtaining a video stream in a first digital format in a first computer system, decompressing the video stream from the first digital format to a second digital format creating an uncompressed video stream (the decompressing by the first computer system), then sending the uncompressed video stream to a second computer system, processing the uncompressed video stream by the second computer system, wherein the processing comprises converting color space depth of the uncompressed video stream and scaling the size of the uncompressed video stream, and displaying the uncompressed video stream on a display device.

Description

DECOMPRESSING A VIDEO STREAM ON A
FIRST COMPUTER SYSTEM, AND SCALING AND
DISPLAYING THE VIDEO STREAM ON A SECOND COMPUTER SYSTEM
BACKGROUND
[0001] Several operational philosophies exist with respect to distribution of computing power within organizations (e.g., a corporation with hundreds or thousands of employees). In one operational philosophy, the bulk of the computing power resides at a central location (e.g., a plurality of high-end computer systems acting as servers), and the end-user computer systems have limited computing power (i.e., "thin" clients). In most of the "thin" client situations, the end-user computer systems act merely as terminal devices. In another operational philosophy, the end-user computer systems have significant computing power, and the central servers operate as mere file servers.

[0002] With respect to video, the operating philosophy has likewise dictated how video is handled. In the case of the bulk of the computing power residing in a central location, all video processing steps are performed at the server (e.g., decrypting, decompressing, color space depth conversion and scaling). The computer system of the end-user receives the video ready for display. In the case of the end-user machines having significant computing power, the server merely provides the end-user computer system the encrypted/compressed video, and the end-user machine is responsible for video processing (e.g., decrypting, decompressing, color space depth conversion and scaling).

BRIEF DESCRIPTION OF THE DRAWINGS
[0003] For a detailed description of exemplary embodiments, reference will now be made to the accompanying drawings in which:
[0004] Figure 1 shows illustrative steps of video processing in accordance with an embodiment;
[0005] Figure 2 shows a system in accordance with an embodiment;

[0006] Figure 3 shows a computer system in accordance with another embodiment; and

[0007] Figure 4 shows a method in accordance with an embodiment.
NOTATION AND NOMENCLATURE
[0008] Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, companies may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function.

[0009] In the following discussion and in the claims, the terms "including" and "comprising" are used in an open-ended fashion, and thus should be interpreted to mean "including, but not limited to...." Also, the term "couple" or "couples" is intended to mean either an indirect or direct connection. Thus, if a first device couples to a second device, that connection may be through a direct connection, or through an indirect connection via other devices and connections.

[0010] The terms "decompress" and "decode" are used interchangeably, and the terms "compress" and "encode" are used interchangeably as well.

[0011] "Hardware decoder" shall mean a hardware device specifically designed to perform compressing and/or decompression operations with respect to a video stream. The fact that a hardware decoder executes firmware on an internal processor shall not negate its status as a hardware decoder.
DETAILED DESCRIPTION
[0012] The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted, or otherwise used, as limiting the scope of the disclosure, including the claims. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any embodiment is meant only to be exemplary of that embodiment, and not intended to intimate that the scope of the disclosure, including the claims, is limited to that embodiment.
[0013] The various embodiments are directed to dividing tasks of video processing between the server computer systems and the end-user or client computer systems to leverage the capabilities of the end-user devices. In order to fully describe the division of tasks, the specification first discusses an illustrative system, an illustrative computer system, and tasks associated with video processing (and how bunching the video processing tasks adversely affects overall system performance), and then turns to the various embodiments of dividing the video processing tasks. While the discussion is with respect to "video" or a "video stream," it will be understood the various tasks take place on discrete portions of the video (e.g., on a frame-by-frame basis) on a continuous basis while the video is streaming.
[0014] Figure 1 shows a computer system acting as server 30, coupled to a plurality of client computer systems 32 by way of a computer network 34. The server 30 may be a single server, or the server 30 may be associated with a plurality of other servers in a central location (e.g., a plurality of "blade" servers in a rack-mounted system). Each client 32 is likewise a computer system; however, the computing power of each client 32 is, in most cases, less or significantly less than the computing power of each server 30. The computer network 34 is any network that enables the server 30 to communicate with each client 32, such as a local area network (LAN), a wide area network (WAN), a hardwired network (e.g., Ethernet® network), or a wireless network (e.g., cellular-based broadband, an IEEE 802.11(b), (g), (n) compliant wireless network, BLUETOOTH®).

[0015] Figure 2 shows a server 30 in greater detail. In particular, the server 30 comprises a processor 40 coupled to a memory device 42 by way of a bridge device 44. Although only one processor 40 is shown, multiple processor systems, and systems where the "processor" has multiple processing cores, may be equivalently implemented. The processor 40 couples to the bridge device 44 by way of a processor bus 46, and the memory 42 couples to the bridge device 44 by way of a memory bus 48. Memory 42 is any volatile or non-volatile memory device, or array of memory devices, such as random access memory (RAM) devices, dynamic RAM (DRAM) devices, synchronous DRAM (SDRAM) devices, double data rate DRAM (DDR DRAM) devices, or magnetic RAM (MRAM) devices.

[0016] The bridge device 44 comprises a memory controller (not shown) that asserts control signals for reading and writing the memory 42, the reading and writing both by processor 40 and by other devices coupled to the bridge device 44 (i.e., direct memory access (DMA)).
The memory 42 is the working memory for the processor 40, which stores programs executed by the processor 40 and which stores data structures used by the programs executed on the processor 40. In some cases, the programs held in memory 42 are copied from other devices (e.g., hard drive 52, discussed below) prior to execution.
[0017] Bridge device 44 not only bridges the processor 40 to the memory 42, but also bridges the processor 40 and memory 42 to other devices. For example, server 30 comprises a super input/output (I/O) controller 50. The super I/O controller 50 interfaces various I/O devices, if present, to the server computer system 30. In the server 30 of Figure 2, the super I/O controller 50 enables coupling and use of a non-volatile memory device 52 (such as a hard drive (HD)), a pointing device or mouse 54, and a keyboard 56. The super I/O controller 50 may also enable use of other devices not specifically shown (e.g., compact disc read only memory (CDROM) drives, Universal Serial Bus (USB) ports), and is referred to as "super" because of the many I/O devices for which it enables use.

[0018] Still referring to Figure 2, the bridge device 44 further bridges the processor 40 and memory 42 to a graphics adapter 58 and a network adapter 60. Graphics adapter 58 is any suitable graphics adapter for reading display memory and driving a monitor 62 with the graphics images represented in the display memory. In some embodiments, the graphics adapter 58 internally comprises a memory area to which graphics primitives are written by the processor 40 and/or by way of DMA writes between the memory 42 and the graphics adapter 58. The graphics adapter 58 couples to the bridge device 44 by way of any suitable bus system, such as a peripheral components interconnect (PCI) bus or an advanced graphics port (AGP) bus. In some embodiments, the graphics adapter 58 is integral with the bridge device 44. In some cases (e.g., "blade" type servers), the graphics adapter and/or the display device may be omitted.

[0019] Network adapter 60 enables the server 30 to communicate with other computer systems over a computer network.
In some embodiments, the network adapter 60 provides access to a local area network (LAN) or wide area network (WAN) by way of a hardwired connection (e.g., an Ethernet network), and in other embodiments the network adapter 60 provides access to the LAN or WAN through a wireless networking protocol (e.g., IEEE 802.11(b), (g), or (n)). In yet still other embodiments, the network adapter 60 provides access to the internet through a wireless broadband connection, such as a cellular-based wireless broadband internet connection. Thus, the client computer systems 32 (Figure 1) may be locally coupled (i.e., within a few feet), or may be many miles from the server 30. While Figure 2 is discussed in reference to a server 30, the description is equally applicable to any of the computer systems 32.
[0020] Figure 3 illustrates a series of steps or tasks performed such that a video stream may be displayed on a computer system. In particular, the video may be stored on a non-volatile device, such as a Digital Versatile Disk (DVD) 10. The video is stored on the DVD 10 in a binary format, such as under Eight-to-Fourteen Modulation (EFM). In other embodiments, the video is stored on other types of non-volatile memory, such as hard drive 52 of server 30 (Figure 2). In some cases, the video on the illustrative hard drive 52 may have been previously copied from the DVD 10 to the hard drive 52 (as shown by dashed line 14). The video may be compressed or encoded in one of many video compression schemes, such as the Moving Picture Experts Group (MPEG) MPEG-2 and MPEG-4 formats, Windows Media Video format (WMV), Real Media format (RM), Advanced Streaming Format (ASF), Quicktime format, and AVI format. Further, in some cases the compressed video may also be encrypted. [0021] Regardless of the source, and assuming the video stream is encrypted, the encrypted video stream is first decrypted, as illustrated by block 16. The decryption may be performed in a variety of ways. For example, in some embodiments the encrypted video stream is decrypted by software executing on the main processor of a computer system. In other embodiments, the decryption may be performed by a hardware component within the computer system, the hardware component specifically designed to perform decryption (e.g., an application specific integrated circuit (ASIC)), and which hardware component may itself have an internal processor that executes software. In yet still other embodiments, decryption may be accomplished by a combination of software on the main processor and the hardware component. In situations where the video is not encrypted, decryption may be omitted.
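The software-decryption path of block 16 can be illustrated with a deliberately simplified sketch. The XOR stream cipher below is a toy stand-in (the patent does not name a cipher, and real disc encryption schemes such as CSS or AACS are far more involved):

```python
from itertools import cycle

def xor_decrypt(ciphertext, key):
    # Toy XOR "stream cipher": each byte of the stream is XORed with a
    # repeating key byte. XOR is its own inverse, so the same routine
    # both encrypts and decrypts. Illustrative only -- not secure.
    return bytes(b ^ k for b, k in zip(ciphertext, cycle(key)))
```

Because XOR is involutive, applying `xor_decrypt` twice with the same key returns the original stream, which makes the round trip easy to verify.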
[0022] Still referring to Figure 3, next the video stream (in a first digital format (e.g., MPEG)) is decompressed or decoded, as illustrated by block 18. The decompression likewise may be performed in a variety of ways. For example, in some embodiments the video stream is decompressed by a software-based compression-decompression (CODEC) system executing on the main processor of a computer system. In other embodiments, the decompression may be performed by a hardware CODEC or hardware decoder within the computer system, the hardware component specifically designed to perform decompression (e.g., an application specific integrated circuit (ASIC)), and which hardware decoder may itself have an internal processor that executes software. In yet still other embodiments, decompression may be accomplished by a combination of software on the main processor and the hardware decoder. Regardless of the precise physical implementation of the decompression, the decompression step receives as input the video stream in the first digital format (e.g., MPEG) and creates an uncompressed video stream.
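The contract of block 18 — compressed stream in, uncompressed stream out — can be mimicked with a general-purpose compressor. Here `zlib` stands in for a real video CODEC (MPEG, WMV, etc.), which the patent leaves to known CODEC implementations:

```python
import zlib

def decode(compressed_stream):
    # Stand-in for a software CODEC executing on the main processor:
    # input is the stream in a first (compressed) format, output is the
    # uncompressed stream, mirroring the contract block 18 describes.
    return zlib.decompress(compressed_stream)
```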
[0023] In accordance with at least some embodiments, the compressed video stream is decompressed to a YUV color space. That is, the compressed video stream is turned into a stream of YUV values, with each set of YUV values applicable to a single spot (e.g., a pixel) on the display. The Y value is a luma component, and U and V are chrominance components. In yet still other embodiments, the compressed video stream is decompressed to a Y':Cb:Cr color space. That is, the compressed video stream is turned into a stream of Y':Cb:Cr values, with each set of Y':Cb:Cr values applicable to a single spot (e.g., a pixel) on the display. The Y' value is a luminance component, and Cb and Cr are chrominance components. Other color spaces (e.g., Y:Pb:Pr or other packet-based systems based on Red-Green-Blue (RGB)) may be equivalently used. [0024] Next, the uncompressed video stream may be subjected to color space depth conversion, as shown in block 20 of Figure 3. In particular, each set of values of the uncompressed video stream represents the luminance and/or chrominance of a particular spot (e.g., pixel) on the screen, and each value may span a certain number of bits. However, the display device on which the uncompressed video is to be displayed may not have the same color space depth (i.e., number of bits) as the uncompressed video stream. As the name implies, color space depth conversion involves changing and/or adjusting the number of bits each value spans to match or substantially match the capabilities of the computer system on which the uncompressed video is to be displayed. For example, the Y', Cb and Cr values in the MPEG standards span as many as 32 bits each, yet a display device on which the uncompressed video is to be displayed may only have 8 bits of resolution. Thus, in some embodiments, prior to displaying the video, the various components of the uncompressed video stream are color space depth converted.
When the color space depth of the uncompressed video stream and that of the computer system on which the uncompressed video stream is to be displayed are substantially the same, color space depth conversion may be omitted.
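The color decomposition of paragraph [0023] and the depth conversion of paragraph [0024] can be sketched together. The matrix coefficients below are the standard ITU-R BT.601 values; the shift-based depth conversion is one simple approach among several (rounding or dithering variants also exist):

```python
def rgb_to_ycbcr(r, g, b):
    # ITU-R BT.601 full-range decomposition: Y' is the luma component,
    # Cb and Cr are the blue- and red-difference chrominance components.
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)

def convert_depth(value, src_bits, dst_bits):
    # Color space depth conversion (block 20): rescale one component
    # value so it spans dst_bits instead of src_bits. When the depths
    # already match, the value passes through unchanged, mirroring the
    # "conversion may be omitted" case of paragraph [0024].
    if src_bits > dst_bits:
        return value >> (src_bits - dst_bits)
    return value << (dst_bits - src_bits)
```

For example, a 10-bit component value of 1023 (full scale) converts to 255 at 8 bits of display resolution.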
[0025] Next, the uncompressed video stream may be scaled in size, as illustrated by block 22. In particular, the uncompressed video stream may have a particular size (aspect ratio) in which it was recorded and/or rendered. However, the size of the display device and/or the size of the display area to be used for the uncompressed video stream on the display device may not match that of the uncompressed video stream as recorded and/or rendered. Thus, prior to actually displaying the uncompressed video, the video may need size scaling to meet the expected display size. For example, for increasing the size, each illustrative Y':Cb:Cr value may be scaled to be applicable to a plurality of pixels; for decreasing the size, a plurality of illustrative Y':Cb:Cr values may be combined to be applicable to a single pixel. Thus, in some embodiments, prior to displaying the video, the various components of the uncompressed video stream are scaled. When scaling is not needed, it may be omitted.
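The size scaling of block 22 can be sketched with nearest-neighbour sampling, the simplest scheme that both enlarges (one value applied to a plurality of pixels) and shrinks (a plurality of values collapsed to one pixel). Real systems typically use bilinear or better filtering; this is a minimal sketch:

```python
def scale_nearest(frame, new_w, new_h):
    # frame is a list of rows, each row a list of pixel values (e.g. the
    # Y':Cb:Cr tuples of paragraph [0025]). Each output pixel samples the
    # nearest source pixel, so upscaling repeats values and downscaling
    # drops them.
    old_h, old_w = len(frame), len(frame[0])
    return [[frame[y * old_h // new_h][x * old_w // new_w]
             for x in range(new_w)]
            for y in range(new_h)]
```

Doubling a 2x2 frame repeats each value over a 2x2 block of output pixels; halving it back recovers the original values.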
[0026] Finally, after the decryption (if any), decompression, color space depth conversion (if any) and scaling (if any), the video is displayed on the display device, as shown by block 24. For displaying video, the decryption, decompression, color space depth conversion and scaling may be a continuous process, possibly operating on a frame-by-frame basis during streaming and display of the video.
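The continuous, frame-by-frame character of the pipeline in paragraph [0026] can be sketched as a loop; the stage functions are placeholders for whatever implementations (software or hardware) perform blocks 16 through 22:

```python
def play(stream, display, decrypt=None, depth_convert=None, resize=None,
         decompress=lambda f: f):
    # One pass over blocks 16-24 per frame. The optional stages model the
    # "(if any)" qualifiers: a stage left as None is simply skipped.
    for frame in stream:
        if decrypt is not None:
            frame = decrypt(frame)        # block 16
        frame = decompress(frame)         # block 18
        if depth_convert is not None:
            frame = depth_convert(frame)  # block 20
        if resize is not None:
            frame = resize(frame)         # block 22
        display(frame)                    # block 24
```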
[0027] In some related art systems and/or methodologies, the central computing devices (e.g., a plurality of high-end servers) perform all illustrative video processing steps (e.g., decrypting, decompressing, color space depth conversion and scaling). The client machines are provided a video stream ready for display. Such operating philosophy centralizes and limits the number of software licenses and/or specialty hardware devices to just the servers. For example, one server may hold a single license for a particular software CODEC, but provide uncompressed video to a plurality of client computer systems that are not licensed for the particular software CODEC. As yet another example, one server may implement a specialty hardware decoder, and provide video to a plurality of client computer systems that do not implement the hardware decoder. However, video processing is computationally intensive, and having the server perform all the video processing limits the number of users that can be serviced by the server and/or the number of other tasks that may be performed.
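A back-of-the-envelope calculation shows why a server performing all processing is quickly saturated. The figures below (standard-definition 720x480 video, 30 frames per second, 4:2:0 chroma subsampling) are illustrative assumptions, not values from the patent:

```python
width, height, fps = 720, 480, 30   # assumed SD stream parameters
bytes_per_pixel = 1.5               # 4:2:0: one Y' per pixel, Cb/Cr shared by four
rate = width * height * bytes_per_pixel * fps
print(rate / 1e6)   # about 15.6 MB/s of uncompressed video -- per client
```

Even at this modest resolution, fully processing and forwarding a handful of such streams consumes substantial processor and network capacity at the server.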
[0028] In other related art operating methodologies, the central computing device provides only the compressed video stream, and the client machines perform all the illustrative video processing steps (e.g., decrypting, decompression, color space depth conversion and scaling). Such operating philosophy removes significant computing load from the servers, but dictates that each client machine be licensed and/or provided with sufficient software (e.g., software CODEC) and/or hardware (e.g., hardware decoder) to perform the decompression, and that each client machine have sufficient computing power to perform the decrypting, color depth conversion and scaling. [0029] Most client computer systems, while having limited computing power compared to the high-end servers, have sufficient computing power to perform all or a portion of the video processing steps, such as color depth conversion and/or scaling. Moreover, color depth conversion and scaling do not require proprietary software applications and/or specialty hardware. Thus, by offloading portions of the video processing to the client computer systems, the server systems can serve a greater number of clients than if all the video processing steps are performed at the server level, while maintaining the ability to retain the proprietary software (e.g., software CODECs) and/or hardware at the central locations. Moreover, managing the CODECs at the server end gives the information technologist the ability to control, to some extent, what video the clients may access.
[0030] Returning to Figure 1, in accordance with the various embodiments, and with respect to displaying video on one or more of the client computer systems 32, the server 30 performs a portion of the video processing, and each client 32 performs the remaining portion. In particular, the server 30 performs the video processing steps above the dashed line 36 in Figure 3 (i.e., decryption and decompression) and sends the decrypted and uncompressed video stream to the client 32. The client 32, in turn, performs the video processing steps below the dashed line 36 in Figure 3 (i.e., color space depth conversion and size scaling), and then displays the video stream.
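The split at dashed line 36 can be expressed as two cooperating routines. The stage functions below are placeholders; only the placement of work (decryption and decompression on the server 30, depth conversion and scaling on the client 32) reflects the embodiment:

```python
# Placeholder stages; real implementations are described in blocks 16-22.
def decrypt(f):             return f
def decompress(f):          return f
def convert_color_depth(f): return f
def scale(f):               return f

def server_side(frame):
    # Server 30: steps above dashed line 36 of Figure 3.
    return decompress(decrypt(frame))

def client_side(frame, display):
    # Client 32: steps below dashed line 36, then display (block 24).
    display(scale(convert_color_depth(frame)))
```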
[0031] Dividing the video processing tasks in this way limits and centralizes the expensive and/or license-based processes on the server 30. For example, a limited number of software CODECs and/or hardware decoders may be resident within the server 30, rather than in each client computer 32. Moreover, distribution of a portion of the video processing duties to the clients 32 enables each server 30 to provide video to more clients 32. Color space depth conversion and/or size scaling may not require proprietary software and/or hardware, permitting thin clients and terminals having limited computing power to perform these tasks at the client 32 level. Moreover, dividing video processing tasks between the server 30 and client 32 can address one or more quality issues, such as erratic, pixelated, or jerky video and/or inconsistent audio. [0032] Figure 4 illustrates a method in accordance with at least some embodiments. First, the method starts in block 400. In block 404, a first computer system obtains a video stream in a first digital format. In block 408, the video stream is decompressed by the first computer system, creating a decompressed video stream in a second digital format. Thereafter, in block 412, the decompressed video stream is sent to a second computer system. In some embodiments, the second computer system can be incapable of decompressing the first digital format, thereby requiring the decompression of the first digital format by the first computer system in block 408. The uncompressed video stream can be transmitted using one or more transfer protocols; for example, the uncompressed video stream can be broken into a series of packets and sent as transmission control protocol-internet protocol (TCP-IP) packets. Moreover, if security is of concern, the illustrative TCP-IP packets may implement a security protocol to ensure only a particular client or set of clients may access the video stream.
The second computer system may or may not have the software CODEC(s) and/or hardware decoders required to decompress the video stream. [0033] Still referring to Figure 4, in block 416, the uncompressed video stream is processed by the second computer system, the processing comprising color space depth conversion and scaling. For example, the uncompressed video stream may be sent from the first computer system with a 32 bit luma and/or chrominance component; however, the display device coupled to the client 32 may only be capable of 8 bit resolution. Thus, a color space depth conversion can be performed by the client 32 in block 416 to ensure the video matches the resolution of the display device. As for scaling, such scaling may be used if the size and/or aspect ratio of the display device coupled to the second computer system is different than the size and/or aspect ratio of the video supplied by the first computer system. In block 420, uncompressed video is displayed by the second computer system at the appropriate resolution, size and/or aspect ratio. The method then terminates in block 424. [0034] From the description provided herein, those skilled in the art are readily able to combine software created as described with appropriate general-purpose or special-purpose computer hardware to create a computer system and/or computer subcomponents in accordance with the various embodiments, to create a computer system and/or computer subcomponents for carrying out the methods of the various embodiments, and/or to create a computer-readable media for storing a software program to implement the method aspects of the various embodiments.
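The TCP-IP transport of paragraph [0032] implies breaking each uncompressed frame into packets. A minimal packetizer might look like the following (the 1400-byte payload is an assumption sized to fit a typical Ethernet MTU; real systems would use an established streaming protocol):

```python
def packetize(frame_bytes, payload_size=1400):
    # Split one uncompressed frame into payload-size chunks; the final
    # chunk may be shorter. Reassembly is simple concatenation.
    return [frame_bytes[i:i + payload_size]
            for i in range(0, len(frame_bytes), payload_size)]
```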
[0035] The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.

Claims

What is claimed is:
1. A method comprising: obtaining a video stream in a first digital format in a first computer system; decompressing the video stream from the first digital format to a second digital format creating an uncompressed video stream, the decompressing by the first computer system; then sending the uncompressed video stream to a second computer system; processing the uncompressed video stream by the second computer system, wherein the processing comprises converting color space depth of the uncompressed video stream and scaling the size of the uncompressed video stream; and displaying the uncompressed video stream on a display device.
2. The method of claim 1 wherein the first digital format is at least one selected from a group consisting of: Moving Picture Expert Group (MPEG) format, Windows Media Video format (WMV), Real Media format (RM), Advanced Streaming Format (ASF), Quicktime format, and AVI format.
3. The method of claim 1 wherein decompressing comprises creating an uncompressed video stream in at least one format selected from a group consisting of: YUV format; and Y:Cr:Cb format.
4. The method of claim 1 wherein sending comprises sending the uncompressed video stream as a series of TCP-IP packets.
5. The method of claim 1 wherein decompressing further comprises decompressing by way of at least one selected from the group consisting of: software executed on a processor that also executes an operating system for the first computer system; and a hardware decoder.
6. The method of claim 1 further comprising decrypting the digital video stream prior to decompressing the digital video stream.
7. A system comprising: a server comprising: a processor; a memory coupled to the processor; a decompression subsystem implemented by the server, the decompression subsystem configured to obtain a video stream in a first format, and to decompress the video stream to produce an uncompressed video stream, the server configured to send the uncompressed video stream to a client computer over a network; and a client computer coupled to the server, the client computer comprising: a processor; a memory coupled to the processor; and a display device coupled to the processor, the client computer configured to receive the uncompressed video stream, scale the uncompressed video stream, and display the uncompressed video stream on the display device.
8. The system of claim 7 wherein the client computer is further configured to convert color space depth of the uncompressed video stream.
9. The system of claim 7 wherein the decompression subsystem of the server is at least one selected from the group consisting of: a software compression/decompression system executed on the processor; and a hardware decoder.
10. The system of claim 7 further comprising a decryption system prior to the decompression system wherein the video stream is decrypted prior to the decompression system.
11. The system of claim 7 wherein the uncompressed video stream comprises YUV format or Y:Cr:Cb format.
12. A computer-readable media storing a program that, when executed by a processor, causes the processor to: obtain a digital video stream in a first digital format; decompress the digital video stream from the first digital format to an uncompressed video stream in a second digital format; refrain from the performance of color space depth conversion, and also refrain from scaling the video stream to have a picture size different than the decoded video stream; and transmit the uncompressed video stream to a second computer system.
13. The computer-readable media of claim 12 wherein the program directs the processor to obtain the digital video stream from a source connectively coupled to the processor, the source comprising a compact disc (CD), a digital versatile/video disc (DVD), a non-volatile memory device, or any combination thereof.
14. The computer-readable media of claim 12 wherein the program directs the processor to decompress the digital video stream using a software-based decompression process.
15. The computer-readable media of claim 12 wherein the program directs the processor to decompress the digital video stream using a hardware-based decompression process.
PCT/US2008/080870 2008-10-23 2008-10-23 Decompressing a video stream on a first computer system, and scaling and displaying the video stream on a second computer system WO2010047706A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/US2008/080870 WO2010047706A1 (en) 2008-10-23 2008-10-23 Decompressing a video stream on a first computer system, and scaling and displaying the video stream on a second computer system
TW098133808A TW201029471A (en) 2008-10-23 2009-10-06 Decompressing a video stream on a first computer system, and scaling and displaying the video stream on a second computer system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2008/080870 WO2010047706A1 (en) 2008-10-23 2008-10-23 Decompressing a video stream on a first computer system, and scaling and displaying the video stream on a second computer system

Publications (1)

Publication Number Publication Date
WO2010047706A1 true WO2010047706A1 (en) 2010-04-29

Family

ID=42119562

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2008/080870 WO2010047706A1 (en) 2008-10-23 2008-10-23 Decompressing a video stream on a first computer system, and scaling and displaying the video stream on a second computer system

Country Status (2)

Country Link
TW (1) TW201029471A (en)
WO (1) WO2010047706A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5262875A (en) * 1992-04-30 1993-11-16 Instant Video Technologies, Inc. Audio/video file server including decompression/playback means
WO2002097584A2 (en) * 2001-05-31 2002-12-05 Hyperspace Communications, Inc. Adaptive video server
JP2007004301A (en) * 2005-06-21 2007-01-11 Sony Corp Computer, data processing method, program and communication method


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2786590A4 (en) * 2011-12-02 2015-07-15 Hewlett Packard Development Co Video clone for a display matrix
US9794650B2 (en) 2013-04-05 2017-10-17 Media Global Links Co., Ltd. IP uncompressed video encoder and decoder
US10104451B2 (en) 2013-04-05 2018-10-16 Media Global Links Co., Ltd. IP uncompressed video encoder and decoder

Also Published As

Publication number Publication date
TW201029471A (en) 2010-08-01

Similar Documents

Publication Publication Date Title
US8736760B2 (en) Picture processing apparatus, picture processing method, picture data storage medium and computer program
US6222885B1 (en) Video codec semiconductor chip
US7627886B2 (en) Systems and methods for displaying video streams
US20030185302A1 (en) Camera and/or camera converter
US20050195205A1 (en) Method and apparatus to decode a streaming file directly to display drivers
JP5156655B2 (en) Image processing device
JP2012508485A (en) Software video transcoder with GPU acceleration
US20150103086A1 (en) Display device with graphics frame compression and methods for use therewith
US7312800B1 (en) Color correction of digital video images using a programmable graphics processing unit
US20060164328A1 (en) Method and apparatus for wireless display monitor
JP2007506305A (en) Adaptive management of video storage resources
WO2006073830A1 (en) Image rotation via jpeg decompression according to an order different from the encoding block scanning order
JP2010529567A (en) How to share a computer display over a network
JP2006197535A (en) Method for transcoding compressed data, and storage medium
US20240048738A1 (en) Methods, apparatuses, computer programs and computer-readable media for processing configuration data
US10304213B2 (en) Near lossless compression scheme and system for processing high dynamic range (HDR) images
US7483037B2 (en) Resampling chroma video using a programmable graphics processing unit to provide improved color rendering
US20120033727A1 (en) Efficient video codec implementation
JP2002524007A (en) Image compression method and apparatus
WO2010047706A1 (en) Decompressing a video stream on a first computer system, and scaling and displaying the video stream on a second computer system
WO2010069059A1 (en) Video decoder
CN106954073B (en) Video data input and output method, device and system
WO2023223401A1 (en) Real-time editing system
JP6990172B2 (en) Determination of luminance samples to be co-located with color component samples for HDR coding / decoding
US20210076048A1 (en) System, apparatus and method for data compaction and decompaction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08877627

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08877627

Country of ref document: EP

Kind code of ref document: A1