US20080072256A1 - System and method for real-time media searching and alerting - Google Patents

Publication number
US20080072256A1
Authority
US
United States
Prior art keywords
media
video
text
monitoring system
search
Legal status
Granted
Application number
US11/947,460
Other versions
US8015159B2
Inventor
Trevor Boicey
Christopher Johnson
Current Assignee
DNA13 Inc
Original Assignee
DNA13 Inc
Application filed by DNA13 Inc
Priority to US 11/947,460
Publication of US20080072256A1
Application granted
Publication of US8015159B2
Status: Expired - Fee Related
Adjusted expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7844Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using original textual content or text extracted from visual content or transcript of audio data

Definitions

  • Generally, the present invention provides a method and system for continually storing and cataloguing streams of broadcast content, allowing real-time searching and real-time results display of all catalogued video. A bank of video recording devices stores and indexes all video content on any number of broadcast sources. This video is stored along with associated program information such as program name, description, airdate and channel. A parallel process obtains the text of the program, either from the closed captioning data stream or by using a speech-to-text system. Once the text is decoded, stored, and indexed, users can perform searches against the text and view matching video immediately, along with its associated text and broadcast information.
  • An alerting mechanism scans all content in real time and can be configured to notify users by various means upon the occurrence of specified search criteria in the video stream. The system is preferably designed to be used on publicly available broadcast video content, but can also be used to catalog private video, such as conference speeches, or audio-only content such as radio broadcasts.
  • The system non-selectively records all video/audio applied to it, and allows user searches to review all video on the system. Only under user control is a presentation clip prepared and retrieved. Furthermore, searches can be performed at any time to examine archived video, rather than searches being the basis on which video is saved. Only video that the user specifically requests is shown, and any editing can be done under user control.
  • A general block diagram of a media monitoring system according to an embodiment of the present invention is shown in FIG. 1.
  • Media monitoring system 100 comprises two major component groups: the first is the media management system 102, and the second is the user access system 104.
  • The media management system 102 is responsible for receiving and archiving video and its corresponding audio. The video and audio data is continuously received and stored.
  • Video streams are tuned using media sources 106, such as satellite receivers and other signal receiving devices such as cable boxes, antennas, VCRs, or DVD players. Media sources 106 can also include non-video media sources, such as digital radio sources. In some embodiments, the media sources 106 receive digital signals.
  • The video signal, its corresponding audio, and any corresponding closed-captioned text are captured by video/audio capture hardware and software on media servers 108. The video/audio data can be stored in segments of any size, such as one-hour segments, with software to later extract any desired segment by channel, start time, and end time. Media management system 102 can include any number of media servers 108, and each media server can be in communication with any number of media sources 106. If storage space is limited, the media servers can compress the digital data from the media sources 106 into smaller file sizes. The data from media sources 106 are stored in consecutive segments onto a mass storage device using uniquely generated filenames that encode the channel and airdate of the video segment, as sketched below.
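A minimal sketch of one such storage scheme in Python; the directory layout, file extension, and one-hour block size are illustrative assumptions, since the description specifies only that filenames encode channel and airdate:

```python
from datetime import datetime, timedelta
from pathlib import Path

ARCHIVE_ROOT = Path("/archive")   # hypothetical mount point
SEGMENT = timedelta(hours=1)      # one-hour storage blocks

def segment_path(channel: str, start: datetime) -> Path:
    """Build a unique filename encoding channel and airdate,
    e.g. /archive/CBC-Ottawa/20040224-1700.mpg."""
    return ARCHIVE_ROOT / channel / f"{start:%Y%m%d-%H%M}.mpg"

def segments_covering(channel: str, start: datetime, end: datetime):
    """Yield the hour-block files that overlap a requested clip range."""
    block = start.replace(minute=0, second=0, microsecond=0)
    while block < end:
        yield segment_path(channel, block)
        block += SEGMENT
```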
  • Closed-captioned text is extracted from the video stream and stored in web servers 114 as searchable text, as will be discussed later. The extracted closed-captioned text is indexed to its corresponding video/audio clips stored in the media servers 108. The media management system 102 stores all of the text associated with the video stream; in most cases, this text is obtained from the closed-captioning signal encoded into the video.
  • The media management system 102 can further include a closed-captioned text detector for detecting the absence of closed-captioned text in the data stream, in order to alert the system administrator that closed-captioned text has not been detected for a predetermined amount of time, so that the operator can take appropriate action to resolve the problem. A sketch of such a watchdog appears below. In some cases, the stream may not be a digital stream, and the system can include a speech-to-text system to convert the audio signals into text. These sub-systems can be executed within each media server 108.
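A minimal sketch of the caption watchdog described above, assuming a polling loop and a notification callback; the timeout value stands in for the "predetermined amount of time" and is an assumption:

```python
import time

CAPTION_TIMEOUT = 300  # seconds without captions before alerting (assumed threshold)

class CaptionWatchdog:
    """Raises an operator alert when no closed-captioned text has been
    decoded for a predetermined amount of time."""
    def __init__(self, notify):
        self.notify = notify              # callback, e.g. an email/pager hook
        self.last_seen = time.monotonic()
        self.alerted = False

    def on_caption(self, text: str):
        """Called whenever the decoder emits caption text."""
        if text.strip():
            self.last_seen = time.monotonic()
            self.alerted = False

    def poll(self):
        """Called periodically; fires the alert once per outage."""
        if not self.alerted and time.monotonic() - self.last_seen > CAPTION_TIMEOUT:
            self.alerted = True
            self.notify(f"No closed-captioned text detected for {CAPTION_TIMEOUT} s")
```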
  • The extracted text is broken into small sections, preferably one-minute segments. Each clip is then stored in a database along with the program name, channel, and airdate of the clip. The text is also pushed into an indexing engine of index servers 110, which allows it to be searched.
  • The closed captioned text spanning a preset time received by index servers 110 is converted to XML format, bundled, and sent to web servers 114 for global storage via network 116, as sketched below.
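A sketch of how one-minute caption units might be bundled into XML for transfer; the element and attribute names are assumptions, as the description does not specify a schema:

```python
import xml.etree.ElementTree as ET

def bundle_captions(units):
    """Bundle one-minute caption units into a single XML document for
    transfer to the web servers. Each unit is a dict with channel,
    start, end, and text keys (an assumed structure)."""
    root = ET.Element("caption_bundle")
    for u in units:
        seg = ET.SubElement(root, "segment",
                            channel=u["channel"],
                            start=u["start"], end=u["end"])
        seg.text = u["text"]
    return ET.tostring(root, encoding="unicode")

# Example:
# bundle_captions([{"channel": "CBC-Ottawa", "start": "2004-02-24T17:55",
#                   "end": "2004-02-24T17:56", "text": "..."}])
```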
  • Web servers 114 can execute the searches for matches between user specified terms and the stored closed captioned text, via a web-based search interface.
  • Alternatively, the closed captioned text can be stored in index servers 110. The channel and airdate fields of the text segment allow it to be matched to a video clip stored by the media management system 102 as needed. Further details of media management system 102 will be described later.
  • The media management system 102 includes an alerting system. This system watches each closed captioned segment as it is indexed and cross-references it against the stored list of user defined alerts. Any matches will trigger user alerts to notify the user that a match has occurred. Alerts can include in-system alerts, mobile device activation, pager activation, and automatic email generation, all of which can be generated from web servers 114.
  • The user access system 104 can include access devices such as a computer workstation 118, or mobile computing devices such as a laptop computer 120 and a PDA 122; other wireless devices such as mobile phones can also be used. These web enabled access devices can communicate with the web servers 114 via the Internet 126, wirelessly through Bluetooth or WiFi network systems, or through traditional wired systems. Optionally, users can dial up directly to the network 116 with a non-web search interface enabled computer 124. As will be shown later in FIG. 3, the user access system 104 further includes an alternate data transfer path for transferring video data to the access devices, to reduce congestion within media management system 102. As previously discussed, each web server 114 can store identical copies of the closed captioned text bundle received from index servers 110. This configuration facilitates searches conducted by users, since the text data is quickly accessible and search results consisting of closed captioned text can be quickly forwarded to the user's access device.
  • The user can search for occurrences of keywords, retrieve video by date and time, store alert parameters, and so on. The user interface software can take the form of a web interface, locally run software, a mobile device interface, or any other interactive form. The back end portion of the web interface maintains a connection to the text database, as well as to the index of video streams. The user interface software can be used to stream video to the user, or alternatively, to direct the user to an alternate server where the video will be presented.
  • Networks 112 and 116 can be implemented as a local area network (LAN), such as in an office building, which typically provides high bandwidth operation. Alternatively, media monitoring system 100 can be deployed across a wide network, meaning that the components of the system can be geographically dispersed, making networks 112 and 116 wide area networks (WAN). The bandwidth of a WAN is generally smaller than that of a LAN, and those of skill in the art will understand that the presently described system can be implemented with a combination of WAN and LAN.
  • Media servers 108 and their corresponding media sources 106 can be geographically distributed to collect and store local video, which is then shared within the system. "Pods" of media servers 108 and their corresponding media sources 106 can be located in different cities and in different countries, so the server the user is connected to may not physically be at the location where the video streams are being recorded. The distributed media server pods are considered remotely connected to index servers 110, since they are connected via a WAN.
  • An advantage of the present invention is that the monitoring and notification speed remains fast regardless of the network configuration of the media monitoring system 100. This is because the small closed captioned text can be rapidly transferred within the system, and more particularly, between the media servers 108 and the user access devices. Only when requested is the larger video data accessed and sent to the user. Due to the size of the video, it is preferable to avoid congesting networks 112, 116 and 126 and limiting performance for all users. However, video may be transferred to the user in an all-LAN environment with satisfactory speed.
  • In one embodiment, the user access device connects to index servers 110, which function as the conductor of traffic between media servers 108 and the user access device. Alternatively, according to another embodiment of the invention, requested video can be sent directly from the appropriate media server 108 to the video-enabled user access device.
  • FIG. 2 illustrates the configuration of the media monitoring system 100 when video data is to be transferred to a user access device in a geographically distributed system.
  • One media server 108 and its corresponding media sources 106 represent a single video processing unit of a pod of video processing units 130, which may be deployed in a particular city, geographically distant from index servers 110 and network 112. The pod 130 remains in communication with remote access devices 118, 120 and 122 via LAN/WAN network 132, which may itself be geographically distant from pod 130.
  • Media server 108 can include a parser for providing the requested video clip that corresponds with the time-indexed closed captioned text. Since the video clips are received through a path outside of the media management system 102 and user access system 104, the potential for congestion of data traffic within the system is greatly reduced. At the same time, multiple users can receive their respective requested video clips rapidly.
  • The index servers will search the archived closed captioned text and notify the user if any matches have occurred. Matches are displayed with the relevant bibliographic information, such as air date and channel. The user then has the option of viewing and hearing a time segment of the videos containing the matched terms, the time segment being selectable by the user. The search of key terms can extend to future broadcasts, such that the search is conducted dynamically in real time. Thus, the user can be notified shortly after a search term has been matched in a current broadcast. Since the video broadcast is recorded, the user can selectively view the entire broadcast, or any portion thereof.
  • FIG. 3 illustrates a block diagram of the general functional components of media monitoring system 100 shown in FIG. 1 .
  • The media monitoring system 100 converts a video signal to an indexed series of digital files on a mass storage system, which can then be retrieved by specifying the desired channel, start time, and end time. This capability is then used to supply the actual video that matches the search result from the user interface component. Video is archived at a specified quality, depending on operator configuration. Higher quality settings allow for larger video frames, higher frame rates, and greater image detail, at the penalty of greater file storage requirements. All parameters are configurable by the operator at the system level. The video/audio signal to be archived is made available from an external source; in practice, this usually consists of an antenna, or a satellite receiver or cable feed supplied by a signal provider.
  • Closed captioning data is typically carried in the Vertical Blanking Interval (VBI) of the video signal.
  • The video/audio signal is applied to the input of a video capture device 200, which, either through a hardware or a software compression system 202, converts the video signal to a digital stream. Video capture device 200 and compression system 202 can be implemented in media servers 108. The exact format of this stream can be specified by the operator, but is typically chosen to be a compressed stream in a standard format such as MPEG or AVI formatted video. The video capture process outputs a continuous stream of video, which is then divided into manageable files. According to an embodiment of the present invention, the files are preferably limited to one-hour blocks of video.
  • Mass storage system 204 locally stores the video/audio data for its corresponding media sources 106.
  • Video clips can be retrieved from mass storage system 204 in response to retrieval requests from permitted machines. These requests would be generated from servers that are serving users who have requested a video clip. From the user's standpoint, this video clip is chosen by its content, but the system would know it as belonging to a specified channel for a given period of time. Most user clip requests are for small segments of video, an example being "CBC-Ottawa, 5:55 pm-5:58 pm".
  • Using the channel and the date required, the archive system first deduces which large file the video segment is located in. It then parses the video file to locate and extract the stream data representing the selected segment. The stream data is then re-encapsulated to convert it to a stand-alone video file, and the result is returned to the calling machine, ultimately to be delivered to the user. A sketch of this retrieval step follows.
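The description does not name a parsing tool; as one possible realization, the following sketch re-encapsulates a clip from a stored one-hour file using the ffmpeg command line (stream copy, so no re-encoding; assumes ffmpeg is installed):

```python
import subprocess
from datetime import datetime

def extract_clip(hour_file: str, clip_start: datetime,
                 file_start: datetime, duration_s: int, out_path: str):
    """Parse a stored one-hour file and re-encapsulate the requested
    segment as a stand-alone video file."""
    offset = (clip_start - file_start).total_seconds()
    subprocess.run([
        "ffmpeg", "-y",
        "-ss", str(offset),      # seek to the clip start within the file
        "-i", hour_file,
        "-t", str(duration_s),   # clip length in seconds
        "-c", "copy",            # re-encapsulate without re-encoding
        out_path,
    ], check=True)
```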
  • The system can continuously replace the oldest video streams in its archive with the newest, ensuring that as much video as possible is retained. Additional storage can be added or removed as needed. A pruning sketch follows.
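A minimal pruning sketch, under the assumption that oldest-first replacement is driven by a free-space low-water mark (the threshold and file extension are illustrative):

```python
import shutil
from pathlib import Path

MIN_FREE_BYTES = 50 * 2**30   # assumed low-water mark: keep 50 GiB free

def prune_oldest(archive_root: str):
    """Delete the oldest stored segments until enough space is free,
    so the newest video continuously replaces the oldest."""
    files = sorted(Path(archive_root).rglob("*.mpg"),
                   key=lambda p: p.stat().st_mtime)
    while files and shutil.disk_usage(archive_root).free < MIN_FREE_BYTES:
        files.pop(0).unlink()
```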
  • Media monitoring system 100 can include self-monitoring functions to ensure robust operation and to minimize potential errors. For example, the video digitizing process has the ability to detect the lack of video present at its input. This condition will raise an operator alert to allow the operator to locate the cause of the outage; in the field, this can be attributed to cabling problems, weather phenomena, hardware failure, upstream problems, and so on. The system can also be configured to attempt an automatic repair, by restarting or re-initializing a process or external device.
  • The text associated with the video is preferably extracted from the closed captioning stream in the video signal, or obtained from an associated speech-to-text device. If closed captioning data is available in the video signal, the signal is applied to a decoder 206, typically located in each media server 108, that can read the VBI stream. The decoder 206 extracts the closed captions that are encoded into the video signal. In practice, this can be the same device performing the video compression, and the extraction can be done in software. Alternatively, the audio stream is fed into a speech-to-text device instead of decoder 206, and the resulting text is fed into the system. This option can be used if the content is not a video signal, such as a commercial radio stream or recorded speech.
  • The decoder 206 includes a buffer, into which text accumulates at "human reading" speed. After a short increment of time, preferably one minute, the text buffer is stored into text database 208 along with the channel and time information associated with the clip. This database 208 then contains a complete record of all text that has flowed through the system, sorted by channel and airdate. As previously mentioned, database 208 can be located within either index servers 110 or web servers 114; in either case, database 208 functions as global storage of the decoded closed captioned text. A sketch of the buffering step appears below.
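A sketch of the one-minute buffering step; SQLite and the table layout are illustrative stand-ins for text database 208, not the patent's implementation:

```python
import sqlite3
from datetime import datetime, timedelta

FLUSH_INTERVAL = timedelta(minutes=1)   # one-minute text units, per the description

class CaptionBuffer:
    """Accumulates decoded caption text and flushes one-minute units
    into a text database keyed by channel and time."""
    def __init__(self, db_path: str, channel: str):
        self.db = sqlite3.connect(db_path)
        self.db.execute("CREATE TABLE IF NOT EXISTS captions "
                        "(channel TEXT, start TEXT, text TEXT)")
        self.channel = channel
        self.window_start = datetime.utcnow()
        self.pending = []

    def feed(self, text: str, now: datetime):
        """Add decoded text; flush the buffer once a minute has elapsed."""
        self.pending.append(text)
        if now - self.window_start >= FLUSH_INTERVAL:
            self.db.execute("INSERT INTO captions VALUES (?, ?, ?)",
                            (self.channel, self.window_start.isoformat(),
                             " ".join(self.pending)))
            self.db.commit()
            self.pending, self.window_start = [], now
```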
  • Indexing engine 210, implemented in index servers 110, receives a block of text, which in this case represents a small unit of video transcript (typically one minute), and stores it in a format that is optimized for full text searches. For practical implementation purposes, standard off-the-shelf products can be employed for the indexing function. According to the presently described embodiments, the video captions are indexed by channel and time. The formatted text is stored in index database 212, which can be located in index servers 110 or web servers 114. Database 212 can also function as global storage of all the formatted text.
  • The user's search string is submitted to a full text search engine that searches database 212. Any results returned from this engine also contain indexes to the corresponding channel and time of the airing. Since the entire text is stored in database 208, it can be retrieved using standard techniques to search on the channel and air time. In other words, database 212 is used for full text searching, while database 208 is ordered by time and channel to facilitate look-up by time and channel. The sketch below illustrates the two access paths.
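As an illustration of the two access paths, the following sketch uses SQLite's FTS5 extension (assuming it is compiled into the Python build) as a stand-in for the off-the-shelf full text engine the description mentions:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE idx USING fts5(channel, start, text)")
db.execute("INSERT INTO idx VALUES ('CBC-Ottawa', '2004-02-24T17:55', "
           "'surprise merger of Company A and Company B')")

# Full text search path (database 212): find where a term aired.
hits = db.execute("SELECT channel, start FROM idx "
                  "WHERE idx MATCH 'merger'").fetchall()

# Time/channel path (database 208): retrieve the transcript for a slot.
text = db.execute("SELECT text FROM idx WHERE channel=? AND start=?",
                  hits[0]).fetchone()
```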
  • User-defined searches can be executed through user access system 104. Operating upon each access device is a user search interface that provides the functionality of the system. The interface is designed to allow users with minimal training to perform text searches, examine the program text that matches, and selectively view or archive the video streams where the captioning appeared. While the reference application is a web-based system, the system can also be searched through other means, such as mobile WiFi devices, Bluetooth-enabled devices, and locally running software.
  • FIG. 4 shows a flow chart of the process executed by the media monitoring system 100, and FIGS. 5-8 are examples of user interface screens that prompt the user for information and display results to the user.
  • The process begins at step 300, where the user logs into the interface with the goal of researching a topic's appearance in the recent media. The user is presented with a screen that allows them to enter the search terms that would match their desired content. Common search parameters are provided, such as specifying phrases that must appear as typed, words that should appear within a certain distance of each other, boolean queries, and so on. The query can also be limited to only return results from specific broadcast channels. FIG. 5 is an example user interface for prompting the search parameters from the user.
  • The search parameters provided by the user are first groomed at step 302. Grooming is an optional step that refers to optimization of the search parameters, especially if the user's search parameters are malformed. For example, the user may enter "red blue" in the MUST CONTAIN THESE WORDS search field and "GREEN" in the MAY CONTAIN search field; the grooming process then optimizes the search parameters to "GREEN RED AND BLUE". A grooming sketch follows.
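The description leaves the grooming rules unspecified; the following sketch is one reading that reproduces the "GREEN RED AND BLUE" example:

```python
def groom(must_contain: str, may_contain: str) -> str:
    """Combine the form fields into a single normalized boolean query.
    The exact grooming rules are not spelled out in the description,
    so this simply uppercases terms and joins the required words with
    AND, mirroring the example above."""
    required = [w.upper() for w in must_contain.split()]
    optional = [w.upper() for w in may_contain.split()]
    return " ".join(optional + [" AND ".join(required)])

# groom("red blue", "GREEN") -> "GREEN RED AND BLUE"
```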
  • The groomed search parameters are compared to database 208, which stores all the closed captioned text. The user is presented with a match results page at step 304, itemizing the results obtained, the programs they appeared in, and a score that represents how strong the match was. The results can be sorted in numerous ways, such as by date, by program name, or by score. A compact example results page is shown in FIG. 6, and a more detailed version is shown in FIG. 7. The user can select any row to view further details of that program segment.
  • The results pages shown in FIGS. 6 and 7 may list consecutive segments belonging to the same broadcast, since the search term appears in each segment. For example, the results may return "Channel Y, 6:00 pm to 6:01 pm", "Channel Y, 6:01 pm to 6:02 pm" and "Channel Y, 6:02 pm to 6:03 pm" as separate program segment items. The system can optimize the results by recognizing that the three segments are chronological segments of Channel Y, and collapse them into a simplified description such as "Channel Y, 6:00 pm to 6:03 pm", as sketched below.
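A sketch of this collapsing step; the (channel, start, end) tuple layout is an assumption:

```python
def collapse(segments):
    """Collapse chronologically adjacent result segments of the same
    channel into one range. Segments are (channel, start, end) tuples,
    assumed sorted by channel and start time."""
    merged = []
    for ch, start, end in segments:
        if merged and merged[-1][0] == ch and merged[-1][2] == start:
            merged[-1] = (ch, merged[-1][1], end)   # extend the open range
        else:
            merged.append((ch, start, end))
    return merged

# collapse([("Channel Y", "6:00 pm", "6:01 pm"),
#           ("Channel Y", "6:01 pm", "6:02 pm"),
#           ("Channel Y", "6:02 pm", "6:03 pm")])
# -> [("Channel Y", "6:00 pm", "6:03 pm")]
```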
  • Upon selecting a program segment at step 306, the user is presented with a caption viewing screen showing the matching captioning and timing information, as shown in FIG. 8. This screen gives the user the option of viewing the clip associated with the shown extracted closed captioned text. The user is also presented with a navigation system that allows them to move forward or backward in the video stream beyond the matched segment, to peruse the context in which the clip was presented. The caption viewing screen also features controls to compose a video clip that consists of several consecutive units of video. More specifically, the user has the ability to define the start and end points of a video clip, and then view or save that clip. This is suitable for preparing a salient clip that is to be saved for future reference.
  • Alternatively, the process can return to step 300 to restart the search, or to step 304 to permit the user to view the results page and select a different program segment.
  • When the user chooses to view a clip, the system determines if the video clip is stored locally at step 308. It is important to note that a locally stored video clip refers to one that is accessible via a high bandwidth network, typically available in a local area network such as an office environment. In contrast, remotely stored video clips are generally available only through a low bandwidth network, one whose bandwidth is too low for a copy of all video to be sent to it continuously. As previously discussed, the user can access the video remotely over a low bandwidth connection.
  • The process provides a video access method optimized according to whether or not the user is accessing the system remotely. If the video clip is stored locally, i.e., on a high bandwidth connection suitable for streaming video, then the system proceeds to step 310, where the video clip is retrieved and assembled from the appropriate video segments, and then displayed for the user at step 312. The video clip can be played with the user's preferred video playing software. Alternately, if the video clip is not stored locally at step 308, the system proceeds to step 314, where a query is sent to the specific remote server that will return the video that the user is asking for. The video clip is retrieved from the remote system at step 316, and finally displayed for the user at step 312. Once the clip has ended, the user has the option of returning to step 304 to view another program segment, or returning to step 300 to initiate a new search.
  • Alternatively, the video clip can be ordered through the user interface, to be delivered to the user via email, via a link to a web site, or on a physical medium such as a DVD, CD or video cassette. This service is suitable for clients requiring a permanent copy of especially important video segments.
  • The previously described manual interactive operation method of FIG. 4 is effective for searching and viewing archived video. In addition, the media monitoring system 100 can concurrently operate in an automatic scanning mode to match user defined terms with closed captioned text extracted in real time. The user can selectively activate the alerting system to provide notification for specific terms. Searches can be stored by users so that they are executed on all incoming text corresponding to real-time recorded video. Any matches will selectively generate an immediate alert, which can be communicated to the user by various means; selective generation refers to the fact that the user can set specific search terms to trigger an alert when matched.
  • The stored search terms are archived in a search term database, preferably located on web servers 114, including parameters reflecting the desired level of alerting the user has requested. Examples of such alerting levels can include "never alert me", "alert me by putting a message in the product", and "alert me urgently by sending an email to my mobile device".
  • The automatic scanning mode of operation of the media monitoring system 100 is described with reference to FIG. 9. It is assumed that the following process operates upon each stored unit of program text after the text is stored and indexed; the index is then searched again with the stored terms to detect whether anything new appears. It is further assumed that the user has previously defined his or her search terms and stored them in a search term database 404, which can be physically located on web server 114.
  • The process begins at step 400, where the text for the unit is retrieved from index database 212. At step 402, a search term is retrieved from the user's search term database 404, and it is compared to the stored unit of program text at step 406. If there is no match, the system checks at step 408 whether there are any further search terms to check against the stored unit of program text. If there are no more search terms, the process ends at step 410; otherwise, the system loops back to step 402 to fetch the next search term. If a match is found at step 406, the system proceeds to step 412 to store the match information in a results database 414. This results database is preferably located in web server 114, and is local to the user's portal; the results summarize matches between the search terms and the video clips for the user when they log in to their portal. At step 416, the system checks if the user has activated an alert for the present search term. If an alert has been activated, the system generates a notification message for the user at step 418, in accordance with their desired alert level. Depending on settings and system configuration, this alert can be delivered using a number of methods, including but not limited to alerts in the interface, email, and mobile and wireless devices. Following the matched search result processing at step 418, the system proceeds to step 408 to determine if there are any further search terms. This process is executed for each unit of program text stored in the index. A sketch of the scanning loop follows.
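A compact sketch of this scanning loop; the data structures and the plain substring match are simplifying assumptions (a real deployment would query the full text index):

```python
def scan_unit(unit_text: str, search_terms, results_db, send_alert):
    """One pass of the FIG. 9 loop (steps 400-418): compare a stored
    unit of program text against every stored search term, record
    matches, and fire an alert when the term has an alerting status.
    `search_terms` yields (term, alert_level) pairs."""
    text = unit_text.lower()
    for term, alert_level in search_terms:        # steps 402-408
        if term.lower() in text:                  # step 406: match?
            results_db.append((term, unit_text))  # step 412: store match
            if alert_level != "never":            # step 416: alert active?
                send_alert(term, alert_level)     # step 418: notify user
```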
  • When a user adds a new search term, the media monitoring system of the present invention can immediately search the archives to identify any prior program segments that match the term, and then monitor new program segments for further occurrences. The system described in this application stores all video from all channels, allowing searches to be refined or changed at will with instant results; insights from the results of one query can be incorporated into a new search, again with instant results. This improves the user experience by storing and indexing all recent video and captions, allowing not only unlimited queries with real-time results, but also new searches inspired by those results to be performed immediately.
  • Web server 114 can include a duplicate video clip detector to mark matching video clip results that are essentially the same. This function can be executed in web servers 114 as search results are returned to the user: the text of the returned search results is scanned, and duplicates are marked as such. This feature allows the user to view one video clip and dismiss those marked as duplicates very quickly, without opening and viewing each one. The duplicate video clip detector can be implemented on web server 114, but can alternatively be executed in index servers 110. In operation, a first matching result is added to the database, and then fuzzy matching is executed to determine whether further matches are essentially the same as the first stored instance; if so, the duplicates are marked for the user's convenience. An essential match between two clips is one where a substantial percentage of the content is the same, and this percentage can be preset by the system administrator. A fuzzy-matching sketch follows.
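A fuzzy-matching sketch using Python's standard difflib as a stand-in for whatever matcher the system actually uses; the threshold plays the role of the administrator-preset percentage:

```python
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.85   # assumed administrator-preset percentage

def mark_duplicates(results):
    """Fuzzy-match each result's caption text against earlier results
    and flag essential duplicates. `results` is a list of dicts with a
    'text' key; a 'duplicate_of' key is added to flagged entries."""
    kept = []
    for r in results:
        for k in kept:
            ratio = SequenceMatcher(None, r["text"], k["text"]).ratio()
            if ratio >= SIMILARITY_THRESHOLD:
                r["duplicate_of"] = k["text"][:40]  # mark, don't delete
                break
        else:
            kept.append(r)  # first instance of this content
    return results
```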

Abstract

A method and system for continually storing and cataloguing streams of broadcast content, allowing real-time searching and real-time results display of all catalogued video. A bank of video recording devices stores and indexes all video content on any number of broadcast sources. This video is stored along with the associated program information such as program name, description, airdate and channel. A parallel process obtains the text of the program, either from the closed captioning data stream, or by using a speech-to-text system. Once the text is decoded, stored, and indexed, users can then perform searches against the text, and view matching video immediately along with its associated text and broadcast information. Users can retrieve program information by other methods, such as by airdate, originating station, program name and program description. An alerting mechanism scans all content in real-time and can be configured to notify users by various means upon the occurrence of specified search criteria in the video stream. The system is preferably designed to be used on publicly available broadcast video content, but can also be used to catalog private video, such as conference speeches, or audio-only content such as radio broadcasts.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 11/063,559 filed on Feb. 24, 2005, which claims priority from U.S. Provisional Patent Application Ser. No. 60/546,954, filed Feb. 24, 2004, which is incorporated herein by reference in its entirety.
  • FIELD OF THE INVENTION
  • The present invention relates generally to media monitoring systems. More particularly, the present invention relates to video media searching and alerting systems.
  • BACKGROUND OF THE INVENTION
  • Many businesses and organizations have an interest in what is being broadcast, but the volume of information available makes it prohibitive to monitor completely.
  • The overwhelming majority of broadcast sources include closed captions, which have been used successfully to identify the subject matter of a video stream. Systems have been developed to monitor and act upon the closed captioned text. For example, such systems trigger on the basis of keywords and selectively record video for later viewing. However, no refinement or cross-referencing could be performed on past video, and new searches would only be applied to subsequent video broadcasts.
  • U.S. Pat. No. 5,481,296 is directed to a scanning method of monitoring video content using a predefined set of keywords. Based on a keyword, the system has the ability to monitor multiple streams and to return reception devices in real-time to selectively capture the matching video. The described system also attempts to selectively save video that has matched while removing segments that have not matched. The goal is to selectively record only the video that is desired.
  • U.S. Pat. No. 5,986,692 is directed to a system for generating a custom-tailored video stream. The system is designed to work unattended, watching video signals, extracting and collating those that are deemed to be of interest to a specific user. The system also defines filters that attempt to detect and discern specific components of a video signal that are unwanted. For example, opening credits are video components that are typically undesired.
  • U.S. Pat. No. 6,061,056 is directed to a system that automatically monitors a video stream for desired content. Users enter their search parameters, and the system watches the applied video streams for matches. However, this system only records video when a match occurs. The user is then presented with a series of clips that were saved based on their matches. Any new searches or refinements to the query only take effect for future searches. As well, any desired content that was not caught by the programmed search is lost forever. As an example, a user search for “Company A” may produce a result announcing a surprise merger of “Company A” and “Company B”. With the system as described in U.S. Pat. No. 6,061,056, new searches for “Company B” will only take effect on video occurring after the user adds this search. Therefore, the system is incapable of searching for any records prior to the new search being executed, such as recent happenings leading up to the merger.
  • U.S. Pat. No. 6,266,094 is directed to a system of aggregating and distributing closed caption text over a distributed system. The system focuses on extensive scrubbing and preparation of closed caption text to enhance usability. However, the described system has no facility for archiving the video associated with the clip, nor does it present the program text to the user.
  • It is, therefore, desirable to provide a media monitoring system that can dynamically search archived media content and real-time media content with unlimited queries.
  • SUMMARY OF THE INVENTION
  • It is an object of the present invention to obviate or mitigate at least one disadvantage of previous media monitoring systems. In particular, it is an object of the present invention to provide a system and method for conducting real-time searches of recorded video, by comparing extracted closed captioned text of the video to predefined search parameters. Selected video segments time indexed to closed captioned text segments can be selectively viewed. The system searches real-time video and archived video.
  • In a first aspect, the present invention provides a media monitoring system for receiving at least one video channel having corresponding closed captioned text in real time. The media monitoring system includes a media management system and a user access system. The media management system continuously stores all the data of the at least one video channel locally and extracts the corresponding closed captioned text into decoded text. The decoded text is provided to a global storage database. The media management system further includes a search engine for comparing the decoded text against search terms to provide matching results, and an indexing engine for indexing units of the decoded text by time. The user access system receives and displays the matching results, and transmits a request for stored data corresponding to specific units of the decoded text from the media management system. The media management system then provides said stored data corresponding to specific units of the decoded text in response to the request.
  • According to embodiments of the first aspect, the media management system can include a media server pod, an index server and a web server. The media server pod receives the at least one video channel and locally stores the data of the at least one video channel. The media server pod can include a closed caption decoder for extracting the corresponding closed captioned text into the decoded text. The index server receives the decoded text from the media server pod over a first network, and includes the indexing engine. The web server includes the global storage database for storing the decoded text received from the index server over a second network. The web server can include the search engine and a search term database for storing the search terms. The media server pod can include at least one media source for providing the at least one video channel, and a media server in direct communication with the at least one media source. The media server receives the at least one video channel, and has a decoder for extracting the corresponding closed captioned text into decoded text from a vertical blanking interval of the at least one video channel. The media server can further include mass storage media for storing the data of the at least one video channel.
  • In aspects of the present embodiment, the media server can include a parser for generating the stored data corresponding to specific units of the decoded text, and the media server pod can include a plurality of media sources for providing a corresponding number of video channels. The media server can include a video/audio compression system for compressing the data of the at least one video channel prior to storage onto the mass storage media.
  • According to further aspects of the present embodiment, the media server can include a speech-to-text system for converting audio signals corresponding to the at least one video channel into text, and a text detector for detecting an absence of the corresponding closed captioned text, such that the text detector generates an alert indicating the absence of the corresponding closed captioned text. In yet other aspects, the media source can include one of a satellite receiver, a cable box, an antenna, and a digital radio source. The index server can include the global storage database, or the web server can include the global storage database.
  • According to yet another embodiment of the present aspect, the first network can include a wide area network. The media management system can further include a second media server pod for receiving data of a different video channel, where the second media server pod is in communication with the first network. The media server pod and the second media server pod can be geographically distant from each other.
  • In further embodiments of the present aspect, the user access system can include a duplicate video clip detector for identifying the matching results that are duplicates of each other, and a user access device in communication with the web server over a third network for receiving and displaying the matching results. The user access device can provide the search terms to the media management system. In an aspect of the present embodiments, the user access system can include a fourth network in communication with the user access device and the media server pod, where the user access device receives said stored data corresponding to specific units of the decoded text in response to the request over the fourth network.
  • In a second aspect, the present invention provides a method for searching video data corresponding to at least one video channel collected and stored in a media monitoring system. The method includes (a) providing search terms; (b) comparing the search terms to stored closed captioned text corresponding to the video data, the closed captioned text being indexed by channel and time; (c) displaying matching results from the step of comparing; (d) requesting selected video data corresponding to one of the matching results; and (e) providing the selected video data corresponding to one of the matching results.
  • In an embodiment of the present aspect, the step of requesting includes selecting a time indexed segment of the closed captioned text of one of the matching results, the step of selecting includes setting a video start time and a video end time for the selected time indexed segment, and the step of providing the selected video data includes parsing the video data to correspond with the selected time indexed segment to provide the selected video data. The step of providing search terms can include storing the search terms. In yet another embodiment of the present aspect, the search terms are provided to a web server over a first network, where the web server executes the step of comparing and providing the matching results to a user access device for display over the first network. The step of providing the video data includes transferring the video data over a second network to the user access device, and the step of providing the video data can include parsing the video data to provide the portion of the video data.
  • In a third aspect, the present invention provides a method for automatic identification of video clips matching stored search terms. The method includes (a) continuously receiving and locally storing video data corresponding to at least one video channel in real time; (b) extracting and globally storing the closed captioned text from the video data; (c) indexing the closed captioned text by channel and time; (d) comparing the stored closed captioned text to the stored search terms; and, (e) providing match results of the closed captioned text matching the search terms, each match result having an optionally viewable video clip.
  • According to an embodiment of the present aspect, the method can further include the steps of displaying the match results on a user access device, requesting the video clip corresponding to a selected match result, and displaying the video clip on the user access device. The step of requesting includes viewing the closed captioned text corresponding to the selected match result with time indices, setting a video start time and a video end time, and, providing a request having the video start time and the video end time, and channel information corresponding to the selected match result. The step of displaying the video clip includes receiving the request, and parsing the video data to provide the video clip having the video start time and the video end time.
• According to another embodiment of the present aspect, the video data is compressed prior to being stored, the extracted closed captioned text is transmitted over a first network to an indexing server for indexing the closed captioned text by channel and time, the closed captioned text is transmitted over a second network for storage on a web server, and the step of comparing is executed on the web server. The match results can be transmitted over a third network to a user access device, and the step of displaying the video clip includes transmitting the video clip over a fourth network to the user access device.
  • According to yet another embodiment of the present aspect, the step of comparing includes (i) providing a segment of the closed captioned text, (ii) iteratively obtaining search terms from the stored search terms for comparing to the segment of the closed captioned text until all of the stored search terms have been compared to the segment of the closed captioned text, and (iii) storing details of all matches to the stored search terms as the match results. The step of storing includes selectively generating an alert when the segment of the closed captioned text matches the stored search terms, the step of selectively generating includes generating the alert only when the matching search term has an associated alerting status, and the alert can include one of in-system alerts, mobile device activation, pager activation and automatic email generation.
  • According to a further embodiment of the present aspect, the step of extracting includes detecting an absence of the closed captioned text from the video data, and generating an alert message when no closed captioned text is detected.
  • Other aspects and features of the present invention will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying figures.
BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the present invention will now be described, by way of example only, with reference to the attached Figures, wherein:
  • FIG. 1 is a schematic of the media monitoring system according to an embodiment of the present invention;
  • FIG. 2 is a schematic of the media monitoring system according to another embodiment of the present invention;
  • FIG. 3 is a block diagram of the functional components of the media monitoring system shown in FIG. 1;
  • FIG. 4 is a flow chart illustrating a manual operation mode of the media monitoring system of the present invention;
  • FIG. 5 is a computer screen user interface for prompting search parameters from a user;
  • FIG. 6 is a computer screen user interface showing compact example results from a search;
  • FIG. 7 is a computer screen user interface showing detailed example results from a search;
  • FIG. 8 is a computer screen user interface showing matching captioning and timing information; and,
  • FIG. 9 is a flow chart illustrating an automatic scanning mode of the media monitoring system of the present invention.
DETAILED DESCRIPTION
• Generally, the present invention provides a method and system for continually storing and cataloguing streams of broadcast content, allowing real-time searching and real-time results display of all catalogued video. A bank of video recording devices stores and indexes all video content on any number of broadcast sources. This video is stored along with the associated program information, such as program name, description, airdate and channel. A parallel process obtains the text of the program, either from the closed captioning data stream or by using a speech-to-text system. Once the text is decoded, stored, and indexed, users can then perform searches against the text, and view matching video immediately along with its associated text and broadcast information.
• Users can also retrieve program information by other methods, such as by airdate, originating station, program name and program description. Additionally, an alerting mechanism scans all content in real time and can be configured to notify users by various means upon the occurrence of a specified search criterion in the video stream. The system is preferably designed to be used on publicly available broadcast video content, but can also be used to catalogue private video, such as conference speeches, or audio-only content such as radio broadcasts.
• The system according to the embodiments of the present invention non-selectively records all video/audio applied to it, and allows user searches to review all video on the system. Only under user control is a presentation clip prepared and retrieved. Furthermore, searches can be performed at any time to examine archived video, rather than searches being the basis by which video is saved. Only video that the user specifically requests is shown, and any editing is done under user control.
  • A general block diagram of a media monitoring system according to an embodiment of the present invention is shown in FIG. 1. Media monitoring system 100 comprises two major component groups. The first is the media management system 102, and the second is a user access system 104.
• The media management system 102 is responsible for receiving and archiving video and its corresponding audio. Preferably, the video and audio data is continuously received and stored. Video streams are tuned using media sources 106 such as satellite receivers, other signal receiving devices such as cable boxes, antennas, and VCRs or DVD players. Alternately, media sources 106 can include non-video media sources, such as digital radio sources, for example. Preferably, the media sources 106 receive digital signals. The video signal, corresponding audio, and any corresponding closed-captioned text are captured by video/audio capture hardware/software stored on media servers 108. The video/audio data can be stored in any sized segment, such as in one-hour segments, with software code to later extract any desired segment of any size by channel, start time, and end time. Those of skill in the art will understand that the video/audio data can be stored in any suitable format. As shown in FIG. 1, media management system 102 can include any number of media servers 108, and each media server can be in communication with any number of media sources 106. If storage space is limited, the media servers can compress the digital data from the media sources 106 into smaller file sizes. The data from media sources 106 is stored in consecutive segments onto a mass storage device using uniquely generated filenames that encode the channel and airdate of the video segment. If the data is a video stream, closed-captioned text is extracted from the video stream and stored in web servers 114 as searchable text, as will be discussed later. The extracted closed-captioned text is indexed to its corresponding video/audio clips stored in the media servers 108.
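• As an illustration only, the following Python sketch shows one plausible filename scheme of this kind. The one-hour segment length matches the example above, but the directory layout and helper names are assumptions rather than the literal encoding used by media servers 108.

```python
from datetime import datetime
from pathlib import Path

def segment_filename(channel: str, block_start: datetime) -> Path:
    # Encode channel and airdate so a clip can later be located from
    # nothing more than (channel, time).
    stamp = block_start.strftime("%Y%m%d-%H00")
    return Path("/archive") / channel / f"{channel}_{stamp}.mpg"

def segment_for(channel: str, t: datetime) -> Path:
    # Round the requested time down to the one-hour block containing it.
    block_start = t.replace(minute=0, second=0, microsecond=0)
    return segment_filename(channel, block_start)

# Example: the block holding "CBC-Ottawa, 5:55 pm" on 24 Feb 2004
print(segment_for("CBC-Ottawa", datetime(2004, 2, 24, 17, 55)))
# -> /archive/CBC-Ottawa/CBC-Ottawa_20040224-1700.mpg
```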
• The media management system 102 stores all of the text associated with the video stream. In most cases, this text is obtained from the closed-captioning signal encoded into the video. The media management system 102 can further include a closed-captioned text detector for detecting the absence of closed-captioned text in the data stream, in order to alert the system administrator that closed-captioned text has not been detected for a predetermined amount of time. In a situation where closed-captioned text is undetected, the aforementioned alert can notify the system operator to take appropriate action in order to resolve the problem. In some cases, the stream may not be a digital stream, and the system can include a speech-to-text system to convert the audio signals into text. Accordingly, these sub-systems can be executed within each media server 108. The extracted text is broken into small sections, preferably into one-minute segments. Each text segment is then stored in a database along with the program name, channel, and airdate of the corresponding clip. The text is also pushed into an indexing engine of index servers 110, which allows it to be searched. In a preferred embodiment, the closed captioned text spanning a preset time received by index servers 110 is converted to XML format, bundled, and sent to web servers 114 for global storage via network 116. Web servers 114 can execute the searches for matches between user specified terms and the stored closed captioned text, via a web-based search interface. Alternately, the closed captioned text can be stored in index servers 110. The channel and airdate fields of the text segment allow it to be matched to a video clip stored by the media management system 102 as needed. Further details of media management system 102 will be described later.
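• A minimal sketch of such a bundle follows. The element names and fields are invented for illustration; the patent does not specify the XML schema.

```python
import xml.etree.ElementTree as ET

def bundle_segments(segments):
    """Bundle decoded caption segments into one XML document for
    transfer from the index servers to the web servers."""
    root = ET.Element("caption_bundle")
    for seg in segments:
        item = ET.SubElement(root, "segment",
                             channel=seg["channel"],
                             start=seg["start"],
                             program=seg["program"])
        item.text = seg["text"]
    return ET.tostring(root, encoding="unicode")

print(bundle_segments([{
    "channel": "CBC-Ottawa",
    "start": "2004-02-24T17:55",
    "program": "Evening News",
    "text": "...one minute of decoded closed captioned text...",
}]))
```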
• Although not shown in FIG. 1, the media management system 102 includes an alerting system. This system watches each closed captioned segment that is indexed and cross-references it against the stored list of user defined alerts. Any match triggers an alert notifying the user that a match has occurred. Alerts can include in-system alerts, mobile device activation, pager activation, and automatic email generation, all of which can be generated from web servers 114.
• The user access system 104 can include access devices such as a computer workstation 118, or mobile computing devices such as a laptop computer 120 and a PDA 122. Of course, other wireless devices such as mobile phones can also be used. These web enabled access devices can communicate with the web servers 114 via the Internet 126, wirelessly through Bluetooth or WiFi network systems, or through traditional wired systems. Optionally, users can dial up directly to the network 116 with a computer 124 enabled with a non-web search interface. As will be shown later in FIG. 3, the user access system 104 further includes an alternate data transfer path for transferring video data to the access devices to reduce congestion within media management system 102. As previously discussed, each web server 114 can store identical copies of the closed captioned text bundle received from index servers 110. This configuration facilitates searches conducted by users, since the text data is quickly accessible and search results consisting of closed captioned text can be quickly forwarded to the user's access device.
  • From user access system 104, the user can search for occurrences of keywords, retrieve video by date and time, store alert parameters, etc. The user interface software can take the form of a web interface, locally run software, mobile device interface, or any other interactive form.
  • The back end portion of the web interface maintains a connection to the text database, as well as to the index of video streams. Depending on configuration, the user interface software can be used to stream video to the user, or alternatively, to direct the user to an alternate server where video will be presented.
  • The previously described embodiment of the invention can be deployed locally, at a single site for example, to monitor all the media channels of interest. Therefore, networks 112 and 116 can be implemented as a local area network (LAN), such as in an office building for example. Local area networks typically provide high bandwidth operation. Alternately, media monitoring system 100 can be deployed across a wide network, meaning that the components of the system can be geographically dispersed, making networks 112 and 116 wide area networks (WAN). The bandwidth of a WAN is generally smaller than that of a LAN. Of course, those of skill in the art will understand that the presently described system can be implemented with a combination of WAN and LAN.
• In a wide deployment embodiment of the invention, media servers 108 and their corresponding media sources 106 can be geographically distributed to collect and store local video, which is then shared within the system. For example, “pods” of media servers 108 and their corresponding media sources 106 can be located in different cities, and in different countries. As such, it is advantageous to store the relatively large video/audio data locally within respective media servers 108. In such an embodiment, the server the user is connected to may not physically be at the location where the video streams are being recorded. In the present context, the distributed media server pods are considered remotely connected to index servers 110, since they are connected via a WAN. However, an advantage of the present invention is that the monitoring and notification speed remains fast regardless of the network configuration of the media monitoring system 100. This is due to the fact that the small-sized closed captioned text can be rapidly transferred within the system, and more particularly, between the media servers 108 and the user access devices.
• Once the user desires to view the corresponding video, the larger video data is accessed and sent to the user. Due to the size of the video, it is preferable to avoid congesting the networks 112, 116 and 126 and limiting performance for all users. However, video may be transferred to the user in an all-LAN environment with satisfactory speed. In a system implementation with relatively high bandwidth, the user access device connects to index servers 110, which function as the conductor of traffic between media servers 108 and the user access device. Therefore, according to another embodiment of the invention, requested video can be sent directly from the appropriate media server 108 to the video enabled user access device.
• FIG. 2 illustrates the configuration of the media monitoring system 100 when video data is to be transferred to a user access device in a geographically distributed system. In the present example, one media server 108 and its corresponding media sources 106 represent a single video processing unit of a pod of video processing units 130, which may be deployed in a particular city, geographically distant from index servers 110 and network 112. The pod 130 remains in communication with remote access devices 118, 120 and 122 via LAN/WAN network 132, which may itself be geographically distant from pod 130. Hence, once a user requests a particular video clip, the request is sent directly to the appropriate media server 108, which then transfers the requested video clip, parsed as requested by the user, to their access device via LAN/WAN network 132. Media server 108 can include a parser for providing the requested video clip that corresponds with the time-indexed closed captioned text. Since the video clips are received through a path outside of the media management system 102 and user access system 104, the potential for congestion of data traffic within the system is greatly reduced. At the same time, multiple users can receive their respective requested video clips rapidly.
  • In general operation, when a user specifies key search terms through their computer or wireless device, the index servers will search the archived closed captioned text, and notify the user if any matches have occurred. Matches are displayed with the relevant bibliographic information such as air date and channel. The user then has the option of viewing and hearing a time segment of the videos containing the matched terms, the time segment being selectable by the user. The search of key terms can extend to future broadcasts, such that the search is conducted dynamically in real-time. Thus, the user can be notified shortly after a search term has been matched in a current broadcast. Since the video broadcast is recorded, the user can selectively view the entire broadcast, or any portion thereof.
  • FIG. 3 illustrates a block diagram of the general functional components of media monitoring system 100 shown in FIG. 1.
• The media monitoring system 100 converts a video signal to an indexed series of digital files on a mass storage system, which can then be retrieved by specifying the desired channel, start time, and end time. This capability is then used to supply the actual video that matches the search result from the user interface component. Video is archived at a specified quality, depending on operator configuration. Higher quality settings allow for larger video frames, higher frame rates, and greater image detail, but with a penalty of greater file storage requirements. All parameters are configurable by the operator at the system level. As previously mentioned, the video/audio signal to be archived is made available from an external source. In practice, this usually consists of an antenna, or a satellite receiver or cable feed supplied by a signal provider. Any standard video signal may be used, although the originating device preferably supports encoding of closed-captions in the Vertical Blanking Interval (VBI), the interval during which the display's electron beam, having finished scanning at the bottom of the screen, returns to the top. The system can also be configured to store audio-only content should the signal not have a video component.
• The video/audio signal is applied to the input of a video capture device 200, which, either through a hardware or a software compression system 202, converts the video signal to a digital stream. In FIG. 1, video capture device 200 and software compression system 202 can be implemented in media servers 108. The exact format of this stream can be specified by the operator, but is typically chosen to be a compressed stream in a standard format such as MPEG or AVI formatted video. The video capture process outputs a continuous stream of video, which is then divided into manageable files. According to an embodiment of the present invention, the files are preferably limited to one-hour blocks of video. These files are then stored on a mass storage system 204 within their respective media servers 108, indexed by the channel they represent and the block of time during which the recording was made. Accordingly, mass storage system 204 locally stores the video/audio data for its corresponding media sources 106.
• Video clips can be retrieved from mass storage system 204 in response to retrieval requests from permitted machines. These requests would be generated from servers that are serving users who have requested a video clip. From the user's standpoint, this video clip is chosen by its content, but the system knows it as belonging to a specified channel for a given period of time. Most user clip requests are for small segments of video, an example being “CBC-Ottawa, 5:55 pm-5:58 pm”. The archive system, using the channel and the date required, first deduces which large file the video segment is located in. It then parses the video file to locate and extract the stream data representing the segment selected. The stream data is then re-encapsulated to convert it to a stand-alone video file, and the result is returned to the calling machine, ultimately to be delivered to the user.
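• The sketch below illustrates this retrieval step under stated assumptions: it reuses the hypothetical segment_for() helper from the earlier filename sketch, assumes the requested span lies within a single one-hour block, and uses the stock ffmpeg command-line tool as a stand-in for the parser/re-encapsulator, which the patent does not name.

```python
import subprocess
from datetime import datetime

def extract_clip(channel: str, start: datetime, end: datetime, out: str):
    """Locate the hour block holding the clip, then copy out just the
    requested span, re-encapsulated as a stand-alone file."""
    source = segment_for(channel, start)   # hour file, from the earlier sketch
    offset = (start - start.replace(minute=0, second=0, microsecond=0)).total_seconds()
    duration = (end - start).total_seconds()
    subprocess.run([
        "ffmpeg",
        "-ss", str(offset),    # seek to the clip start within the block
        "-i", str(source),
        "-t", str(duration),   # keep only the requested span
        "-c", "copy",          # re-encapsulate without re-encoding
        out,
    ], check=True)

# "CBC-Ottawa, 5:55 pm-5:58 pm"
extract_clip("CBC-Ottawa",
             datetime(2004, 2, 24, 17, 55),
             datetime(2004, 2, 24, 17, 58),
             "clip.mpg")
```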
  • Since storage space is finite, the system can continuously replace the oldest video streams in its archive with the newest. This ensures that as much video is stored as possible. Additional storage can be added or removed as needed.
  • Media monitoring system 100 can include self monitoring functions to ensure robust operation, and to minimize potential errors. For example, the video digitizing process has the ability to detect the lack of video present at its input. This condition will raise an operator alert to allow the operator to locate the cause of the outage. In the field, this can be attributed to cabling problems, weather phenomena, hardware failure, upstream problems, etc. In certain cases the system can be configured to attempt an automatic repair, by restarting or re-initializing a process or external device.
• The closed captioned text associated with the video is preferably extracted from the closed captioning stream in the video signal, or produced by an associated speech-to-text device. If closed captioning data is available in the video signal, the signal is applied to a decoder 206, typically located in each media server 108, that can read the VBI stream. The decoder 206 extracts the closed captions that are encoded into the video signal. In practice, this can be the same device performing the video compression, and the extraction can be done in software. If closed captioning data is not available, the audio stream is fed into a speech-to-text device instead of decoder 206, and the resulting text is fed into the system. This option can also be used if the content is not a video signal, such as a commercial radio stream or recorded speech. The decoder 206 includes a buffer, into which text accumulates at “human reading” speed. After a short increment of time, preferably one minute, the text buffer is stored into text database 208 along with the channel and time information associated with the clip. This database 208 then contains a complete record of all text that has flowed through the system, sorted by channel and airdate. As previously mentioned, database 208 can be located within either index servers 110 or web servers 114. In either case, database 208 functions as global storage of the decoded closed captioned text.
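• The buffer-and-flush behaviour can be sketched as below. The class, its interface, and the database call are invented for illustration; only the one-minute flush interval and the channel/time tagging come from the description above.

```python
import time

class CaptionBuffer:
    """Accumulates decoded caption text and flushes it to the text
    database in one-minute units, tagged with channel and airtime."""

    def __init__(self, channel, db, interval=60.0):
        self.channel = channel
        self.db = db              # any object exposing an insert() method
        self.interval = interval
        self.pieces = []
        self.window_start = time.time()

    def feed(self, decoded_text: str):
        # Text arrives at "human reading" speed from the decoder.
        self.pieces.append(decoded_text)
        if time.time() - self.window_start >= self.interval:
            self.flush()

    def flush(self):
        self.db.insert(channel=self.channel,
                       airtime=self.window_start,
                       text=" ".join(self.pieces))
        self.pieces = []
        self.window_start = time.time()
```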
• To facilitate and accelerate searching, the program text is provided to an indexing engine 210. Indexing engine 210, implemented in index servers 110, receives a block of text, which in this case represents a small unit of video transcript (typically one minute), and stores it in a format that is optimized for full text searches. For practical implementation purposes, standard off-the-shelf products can be employed for the indexing function. According to the presently described embodiments, the video captions are indexed by channel and time, for example. The formatted text is stored in index database 212, which can be located in index servers 110 or web servers 114. Database 212 can also function as global storage of all the formatted text.
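• An off-the-shelf engine would normally supply this function; the toy inverted index below only sketches the idea of indexing one-minute units by channel and time. The names and whitespace tokenization are illustrative assumptions.

```python
from collections import defaultdict

index = defaultdict(set)   # term -> set of (channel, airtime) unit keys

def index_unit(channel, airtime, text):
    # One unit is roughly one minute of transcript.
    for term in text.lower().split():
        index[term].add((channel, airtime))

def search(query):
    # Require every query term to appear in the same one-minute unit.
    terms = query.lower().split()
    hits = set(index.get(terms[0], set()))
    for term in terms[1:]:
        hits &= index.get(term, set())
    return sorted(hits)

index_unit("CBC-Ottawa", "2004-02-24T17:55", "city council budget vote tonight")
index_unit("CTV", "2004-02-24T18:01", "federal budget tabled today")
print(search("budget"))   # both units, each carrying channel and airtime
```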
• For searching the text database 208, the user's search string is submitted to a full text search engine that searches database 212. Any results returned from this engine also contain indexes to the corresponding channel and time of the airing. Furthermore, since the entire text is stored in database 208, it can be retrieved using standard techniques to search on the channel and air time. It is noted that database 212 is used for full text searching, while database 208 is ordered by time and channel to facilitate lookups on those fields.
  • Due to the small size of text streams, all extracted text could be retained for as long as required, even after its corresponding video clip has been deleted. The cleanup thread of the text system removes the captions from the database and the search index as they expire from the archival service. Alternatively, they may be retained as long as desired but are flagged to indicate that the associated video is no longer available. Additional search options allow searches to include this “archived” text if desired.
  • Once video data has been received, processed and archived in media management system 102 as previously described, user-defined searches can be executed through user access system 104. Operating upon each access device is a user search interface that provides the functionality of the system. The interface is designed to allow users with minimal training to be able to perform text searches, examine the program text that matches, and selectively view or archive the video streams where the captioning appeared. While the reference application is a web-based system, the system can be searched through other means, such as mobile WiFi devices, Bluetooth-enabled devices, and locally running software, for example.
  • Following is an example of a common interactive mode of operation between a user and the media monitoring system 100 shown in FIG. 1. FIG. 4 shows a flow chart of the process executed by the media monitoring system 100, while FIGS. 5-8 are examples of user interface screens that prompt the user for information and display results to the user.
  • The process begins at step 300, where the user logs into the interface with the goal of researching a topic's appearance in the recent media. The user is presented with a screen that allows them to enter the search terms that would match their desired content. Common search parameters are provided, such as specifying phrases that must appear as typed, words that should appear within a certain distance of each other, boolean queries, etc. As well, the query can be limited to only return results from specific broadcast channels. FIG. 5 is an example user interface for prompting the search parameters from the user.
• Upon submitting the form, the search parameters provided by the user are first groomed at step 302. Grooming is an optional step, which refers to optimization of the search parameters, especially if the user's search parameters are malformed or inefficiently expressed. For example, the user may enter “red blue” in the MUST CONTAIN THESE WORDS search field, and “GREEN” in the MAY CONTAIN search field. The grooming process then optimizes the search parameters to “GREEN RED AND BLUE”. The groomed search parameters are compared to database 208, which stores all the closed captioned text. The user is presented with a match results page at step 304, itemizing the results obtained, the programs they appeared in, and a score that represents how strong the match was. The results can be sorted in numerous ways, such as by date, by program name, or by score. A compact example results page is shown in FIG. 6, and a more detailed version is shown in FIG. 7. In both the compact and detailed results pages, the user can select any row to view further details of that program segment. The results pages shown in FIGS. 6 and 7 may list consecutive segments belonging to the same broadcast, since the search term appears in each segment. For example, the results may return “Channel Y, 6:00 pm to 6:01 pm”, “Channel Y, 6:01 pm to 6:02 pm” and “Channel Y, 6:02 pm to 6:03 pm” as separate program segment items. The system can optimize the results by recognizing that the three segments are chronological segments of Channel Y, and collapse the results into a simplified description, such as “Channel Y, 6:00 pm to 6:03 pm”, as sketched below.
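• The collapsing step amounts to merging contiguous per-minute hits on the same channel. The data shapes below are assumptions; the patent describes only the behaviour.

```python
def collapse(segments):
    """Merge results whose spans chain together on the same channel
    into single, simplified spans."""
    merged = []
    for channel, start, end in sorted(segments):
        if merged and merged[-1][0] == channel and merged[-1][2] == start:
            merged[-1][2] = end            # extend the previous span
        else:
            merged.append([channel, start, end])
    return [tuple(m) for m in merged]

hits = [("Channel Y", "6:00 pm", "6:01 pm"),
        ("Channel Y", "6:01 pm", "6:02 pm"),
        ("Channel Y", "6:02 pm", "6:03 pm")]
print(collapse(hits))   # [('Channel Y', '6:00 pm', '6:03 pm')]
```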
• Upon selecting a program segment at step 306, the user is presented with a caption viewing screen showing the matching captioning and timing information, as shown in FIG. 8. This screen gives the user the option of viewing the clip associated with the displayed extracted closed captioned text. From the caption viewing screen, the user is also presented with a navigation system that allows the user to move forward or backward in the video stream beyond the matched segment, to peruse the context that the clip was presented in. The caption viewing screen also features controls to compose a video clip that consists of several consecutive units of video. More specifically, the user has the ability to define the start and end points of a video clip, and then view or save that clip. This is suitable for preparing a salient clip that is to be saved for future reference.
• If the user chooses not to view the corresponding video clip at step 306, the process can return to step 300 to restart the search. Optionally, the process can return to step 304 to permit the user to view the results page and select a different program segment. If the user chooses to view the corresponding video clip, then the system determines if the video clip is stored locally at step 308. It is important to note that a locally stored video clip refers to one that is accessible via a high bandwidth network, which is typically available in a local area network, such as in an office environment. In contrast, remotely stored video clips are generally available only through a low bandwidth network, or one whose bandwidth is too low for a copy of all video to be sent to it continuously. As previously discussed, the user can access the video remotely over a low bandwidth connection. Therefore, the process provides a video access method optimized according to whether or not the user is accessing the system remotely. If the video clip is stored locally, i.e. on a high bandwidth connection suitable for streaming video, then the system proceeds to step 310. At step 310, the video clip is retrieved and assembled with the appropriate video segments, and then displayed for the user at step 312. The video clip can be played with the user's preferred video playing software. Alternately at step 308, if the video clip is not stored locally, the system proceeds to step 314, where a query is sent to the specific remote server that will return the video that the user is asking for. The video clip is retrieved from the remote system at step 316, and finally displayed for the user at step 312. Once the clip has ended, the user has the option of returning to step 304 to view another program segment. Alternately, the user may return to step 300 to initiate a new search.
• Some installations and user devices (such as WiFi or Bluetooth wireless devices) do not have the ability to view video clips. In this scenario, the video clip can be ordered through the user interface, where it will be delivered to the user via email, via a link to a web site, or on a physical medium such as a DVD, CD or video cassette, for example. This service is suitable for clients requiring a permanent copy of especially important video segments.
  • The previously described manual interactive operation method of FIG. 4 is effective for searching and viewing archived video. According to an embodiment of the present invention, the media monitoring system 100 can concurrently operate in an automatic scanning mode to match user defined terms with real time extracted closed captioned text. The user can selectively activate the alerting system to provide notification for specific terms.
• As previously described, searches can be stored by users so that they are executed on all incoming text corresponding to real-time recorded video. Any matches will selectively generate an immediate alert, which can be communicated to the user by various means. Selective generation of an alert refers to the fact that the user can set specific search terms to trigger an alert when matched. The stored search terms are archived in a search term database, preferably located on web servers 114, including parameters reflecting the desired level of alerting the user has requested. Examples of such alerting levels can include “never alert me”, “alert me by putting a message in the product”, and “alert me urgently by sending an email to my mobile device”.
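• A stored search term and its alerting level might be modelled as follows; the field names are invented, and the enumeration values simply paraphrase the example levels above.

```python
from dataclasses import dataclass
from enum import Enum

class AlertLevel(Enum):
    NEVER = "never alert me"
    IN_SYSTEM = "put a message in the product"
    URGENT_EMAIL = "email my mobile device urgently"

@dataclass
class StoredSearch:
    user: str
    terms: str                               # e.g. "city council budget"
    alert_level: AlertLevel = AlertLevel.NEVER

watchlist = [
    StoredSearch("alice", "city council budget", AlertLevel.URGENT_EMAIL),
    StoredSearch("bob", "weather warning", AlertLevel.IN_SYSTEM),
]
```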
• The automatic scanning mode method of operation of the media monitoring system 100 is described with reference to FIG. 9. It is assumed that the following process operates upon each stored unit of program text after the text is stored and indexed; the index is then searched with the stored terms to detect whether anything new has appeared. It is further assumed that the user has previously defined his/her search terms and stored them in a search term database 404, which can be physically located on web server 114. The process begins at step 400, where the text from index database 212 for the unit is retrieved. At step 402, a search term from the user's search term database 404 is retrieved and compared to the stored unit of program text at step 406. If there is no match, the system proceeds to step 408, where the system checks if there are any further search terms to check against the stored unit of program text. If there are no more search terms, the process ends at step 410. Otherwise, the system loops back to step 402 to fetch the next search term.
• If a match was found at step 406, the system proceeds to step 412 to store the match information in a results database 414. This results database is preferably located in web server 114, and is local to the user's portal. The results summarize matches between the search terms and the video clips for the user when they log in to their portal. At step 416, the system checks if the user has activated an alert for the present search term. If an alert has been activated for the present search term, the system generates a notification message for the user at step 418, in accordance with their desired alert level. Depending on settings and system configuration, this alert/notification can be delivered using a number of methods, including but not limited to alerts in the interface, via email, and via mobile and wireless devices. Once the user has been alerted at step 418, or if no alert has been activated for the present search term at step 416, the system proceeds to step 408 to determine if there are any further search terms. This process is executed for each unit of program text stored in the index, as sketched below.
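• Putting the FIG. 9 steps together, a hedged sketch of the scan loop, reusing the hypothetical StoredSearch record above. The match test is simplified to a case-insensitive substring check, whereas the actual system queries the full text index.

```python
def scan_unit(channel, airtime, text, watchlist, results_db, notifier):
    """Run every stored search against one newly indexed unit of
    program text (steps 400-418 of FIG. 9, simplified)."""
    for search in watchlist:                        # steps 402/408: iterate terms
        if search.terms.lower() in text.lower():    # step 406: compare (simplified)
            results_db.append({                     # step 412: store match details
                "user": search.user,
                "channel": channel,
                "airtime": airtime,
                "terms": search.terms,
            })
            if search.alert_level is not AlertLevel.NEVER:   # step 416
                notifier(search.user, search.alert_level,    # step 418
                         f"'{search.terms}' matched on {channel} at {airtime}")

results = []
scan_unit("CBC-Ottawa", "2004-02-24T17:55",
          "City council budget vote is tonight...", watchlist, results,
          lambda user, level, msg: print(f"[{level.name}] to {user}: {msg}"))
```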
  • Therefore, should the user add a new search term to his/her search term database at a later time, the media monitoring system of the present invention can immediately search the archives to identify any prior program segments that match the new search term, and monitor new program segments for occurrences of the new search term.
• The system described in this application stores all video from all channels, allowing searches to be refined or changed at will with instant results. As well, insights from the results of one query can be incorporated into a new search, again with instant results.
• This invention improves the user experience by storing and indexing all recent video and captions. This allows not only unlimited queries with real-time results, but also new searches, inspired by earlier results, to be performed immediately.
• The aforementioned embodiments of the present invention record and store video/audio clips that are broadcast across any number of channels. There are instances where the same video clips are broadcast by affiliated channels; an example is all those channels affiliated with CTV. Hence, there is a great likelihood that a user's search parameters will return duplicate video clips. In an enhancement to the embodiments of the present invention, web server 114 can include a duplicate video clip detector to mark matching video clip results that are essentially the same. This function can be executed in web servers 114 as search results are returned to the user. For example, the text of the returned search results can be scanned such that the duplicates are marked as such. This feature allows the user to view one video clip and dismiss those marked as duplicates very quickly, without opening each one and viewing the clip. Preferably, the duplicate video clip detector is implemented on web server 114, but it can also be executed in index servers 110. Generally, a first matching result is added to the database, and then fuzzy matching is executed to determine whether further matches are essentially the same as the first stored instance. If so, the duplicates are marked as such for the users' convenience. Those of skill in the art should understand that an essential match between two clips is one where a substantial percentage of the content is the same. Naturally, this percentage can be preset by the system administrator.
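• The fuzzy-match idea can be sketched with Python's standard difflib. The 0.9 threshold stands in for the administrator-preset percentage and, like the record layout, is an assumption.

```python
from difflib import SequenceMatcher

THRESHOLD = 0.9   # administrator-preset similarity; illustrative value

def mark_duplicates(results):
    """Compare each result's caption text against earlier, unique
    results and flag essential matches as duplicates."""
    kept = []
    for result in results:
        result["duplicate"] = any(
            SequenceMatcher(None, result["text"], first["text"]).ratio() >= THRESHOLD
            for first in kept)
        if not result["duplicate"]:
            kept.append(result)
    return results

hits = [{"text": "Premier announces new highway funding today."},
        {"text": "Premier announces new highway funding today!"},  # affiliate rebroadcast
        {"text": "Storm warning issued for the valley region."}]
for h in mark_duplicates(hits):
    print(h["duplicate"], "-", h["text"][:44])
```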
  • The above-described embodiments of the present invention are intended to be examples only. Alterations, modifications and variations may be effected to the particular embodiments by those of skill in the art without departing from the scope of the invention, which is defined solely by the claims appended hereto.

Claims (18)

1. A media monitoring system for receiving at least one video channel having corresponding closed captioned text in real time, comprising:
a storage medium for containing at least one of stored search terms for conducting an alert search and provided search terms for conducting an archival search;
a media management system for continuously storing all the data of the at least one video channel locally and for extracting the corresponding closed captioned text into decoded text, the decoded text being provided to a global storage database, the media management system having
a search engine for comparing the decoded text against at least one of the stored search terms and the provided search terms to provide matching results during the alert search and the archival search, and
an indexing engine for indexing units of the decoded text by time; and
a user access system for receiving and displaying the matching results, the user access system transmitting a request for stored data corresponding to specific units of the decoded text from the media management system, the media management system providing said stored data corresponding to specific units of the decoded text in response to the request.
2. The media monitoring system of claim 1, wherein the media management system includes
a media server pod for receiving the at least one video channel and for locally storing the data of the at least one video channel, the media server pod including a closed caption decoder for extracting the corresponding closed captioned text into the decoded text,
an index server for receiving the decoded text from the media server pod over a first network, the index server having the indexing engine, and
a web server including the global storage database for storing the decoded text received from the index server over a second network, the web server having the search engine and a search term database for storing the at least one of stored search terms and provided search terms.
3. The media monitoring system of claim 2, wherein the media server pod includes
at least one media source for providing the at least one video channel, and
a media server in direct communication with the at least one media source for receiving the at least one video channel, the media server having a decoder for extracting the corresponding closed captioned text into decoded text from a vertical blanking interval of the at least one video channel, the media server including mass storage media for storing the data of the at least one video channel.
4. The media monitoring system of claim 3, wherein the media server includes a parser for generating the stored data corresponding to specific units of the decoded text.
5. The media monitoring system of claim 3, wherein the media server pod includes
a plurality of media sources for providing a corresponding number of video channels.
6. The media monitoring system of claim 3, wherein the media server includes a video/audio compression system for compressing the data of the at least one video channel prior to storage onto the mass storage media.
7. The media monitoring system of claim 6, wherein the media server includes a speech-to-text system for converting audio signals corresponding to the at least one video channel into text.
8. The media monitoring system of claim 6, wherein the media server includes a text detector for detecting an absence of the corresponding closed captioned text, the text detector generating an alert indicating the absence of the corresponding closed captioned text.
9. The media monitoring system of claim 3, wherein the media source includes one of a satellite receiver, a cable box, an antenna, and a digital radio source.
10. The media monitoring system of claim 2, wherein the index server includes the global storage database.
11. The media monitoring system of claim 2, wherein the web server includes the global storage database.
12. The media monitoring system of claim 3, wherein the first network includes a wide area network.
13. The media monitoring system of claim 12, wherein the media management system further includes a second media server pod for receiving data of a different video channel, the second media server pod being in communication with the first network.
14. The media monitoring system of claim 13, wherein the media server pod and the second media server pod are geographically distant from each other.
15. The media monitoring system of claim 1, wherein the user access system includes a duplicate video clip detector for identifying the matching results that are duplicates of each other.
16. The media monitoring system of claim 2, wherein the user access system includes a user access device in communication with the web server over a third network, for receiving and displaying the matching results.
17. The media monitoring system of claim 16, wherein the user access system includes a fourth network in communication with the user access device and the media server pod, the user access device receiving said stored data corresponding to specific units of the decoded text in response to the request over the fourth network.
18. The media monitoring system of claim 16, wherein the user access device provides the at least one of the stored search terms and the provided search terms to the media management system.
US11/947,460 (priority 2004-02-24, filed 2007-11-29): System and method for real-time media searching and alerting. Granted as US8015159B2 (en). Status: Expired - Fee Related.

Priority Applications (1)

US11/947,460 (granted as US8015159B2): priority date 2004-02-24, filing date 2007-11-29, System and method for real-time media searching and alerting

Applications Claiming Priority (3)

US54695404P: priority date 2004-02-24, filing date 2004-02-24
US11/063,559 (published as US20050198006A1): priority date 2004-02-24, filing date 2005-02-24, System and method for real-time media searching and alerting
US11/947,460 (granted as US8015159B2): priority date 2004-02-24, filing date 2007-11-29, System and method for real-time media searching and alerting

Related Parent Applications (1)

US11/063,559 (continuation; published as US20050198006A1): priority date 2004-02-24, filing date 2005-02-24, System and method for real-time media searching and alerting

Publications (2)

US20080072256A1: published 2008-03-20
US8015159B2: published 2011-09-06

Family ID: 34886282

Family Applications (2)

US11/063,559 (Abandoned; published as US20050198006A1): priority date 2004-02-24, filing date 2005-02-24, System and method for real-time media searching and alerting
US11/947,460 (Expired - Fee Related; granted as US8015159B2): priority date 2004-02-24, filing date 2007-11-29, System and method for real-time media searching and alerting

Family Applications Before (1)

US11/063,559 (Abandoned; published as US20050198006A1): priority date 2004-02-24, filing date 2005-02-24, System and method for real-time media searching and alerting

Country Status (2)

US (2): US20050198006A1 (en)
CA (1): CA2498364C (en)

Cited By (97)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070204285A1 (en) * 2006-02-28 2007-08-30 Gert Hercules Louw Method for integrated media monitoring, purchase, and display
US20070203945A1 (en) * 2006-02-28 2007-08-30 Gert Hercules Louw Method for integrated media preview, analysis, purchase, and display
US20080091513A1 (en) * 2006-09-13 2008-04-17 Video Monitoring Services Of America, L.P. System and method for assessing marketing data
US20080313146A1 (en) * 2007-06-15 2008-12-18 Microsoft Corporation Content search service, finding content, and prefetching for thin client
US20090043818A1 (en) * 2005-10-26 2009-02-12 Cortica, Ltd. Signature generation for multimedia deep-content-classification by a large-scale matching system and method thereof
US20090240674A1 (en) * 2008-03-21 2009-09-24 Tom Wilde Search Engine Optimization
US20090313305A1 (en) * 2005-10-26 2009-12-17 Cortica, Ltd. System and Method for Generation of Complex Signatures for Multimedia Data Content
US20090319365A1 (en) * 2006-09-13 2009-12-24 James Hallowell Waggoner System and method for assessing marketing data
US20100262609A1 (en) * 2005-10-26 2010-10-14 Cortica, Ltd. System and method for linking multimedia data elements to web pages
US20110211814A1 (en) * 2006-05-01 2011-09-01 Yahoo! Inc. Systems and methods for indexing and searching digital video content
US20110262105A1 (en) * 2007-02-14 2011-10-27 Candelore Brant L Transfer of Metadata Using Video Frames
US8266185B2 (en) 2005-10-26 2012-09-11 Cortica Ltd. System and methods thereof for generation of searchable structures respective of multimedia data content
US20120240177A1 (en) * 2011-03-17 2012-09-20 Anthony Rose Content provision
US8935713B1 (en) * 2012-12-17 2015-01-13 Tubular Labs, Inc. Determining audience members associated with a set of videos
US9031999B2 (en) 2005-10-26 2015-05-12 Cortica, Ltd. System and methods for generation of a concept based database
US9087049B2 (en) 2005-10-26 2015-07-21 Cortica, Ltd. System and method for context translation of natural language
CN104866404A (en) * 2015-05-19 2015-08-26 北京控制工程研究所 Universal data monitoring method
US20150310107A1 (en) * 2014-04-24 2015-10-29 Shadi A. Alhakimi Video and audio content search engine
US9191626B2 (en) 2005-10-26 2015-11-17 Cortica, Ltd. System and methods thereof for visual analysis of an image on a web-page and matching an advertisement thereto
US9218606B2 (en) 2005-10-26 2015-12-22 Cortica, Ltd. System and method for brand monitoring and trend analysis based on deep-content-classification
US9235557B2 (en) 2005-10-26 2016-01-12 Cortica, Ltd. System and method thereof for dynamically associating a link to an information resource with a multimedia content displayed in a web-page
US9256668B2 (en) 2005-10-26 2016-02-09 Cortica, Ltd. System and method of detecting common patterns within unstructured data elements retrieved from big data sources
US9286623B2 (en) 2005-10-26 2016-03-15 Cortica, Ltd. Method for determining an area within a multimedia content element over which an advertisement can be displayed
US9330189B2 (en) 2005-10-26 2016-05-03 Cortica, Ltd. System and method for capturing a multimedia content item by a mobile device and matching sequentially relevant content to the multimedia content item
US9372940B2 (en) 2005-10-26 2016-06-21 Cortica, Ltd. Apparatus and method for determining user attention using a deep-content-classification (DCC) system
US9396435B2 (en) 2005-10-26 2016-07-19 Cortica, Ltd. System and method for identification of deviations from periodic behavior patterns in multimedia content
US9418389B2 (en) 2012-05-07 2016-08-16 Nasdaq, Inc. Social intelligence architecture using social media message queues
US9466068B2 (en) 2005-10-26 2016-10-11 Cortica, Ltd. System and method for determining a pupillary response to a multimedia data element
US9477658B2 (en) 2005-10-26 2016-10-25 Cortica, Ltd. Systems and method for speech to speech translation using cores of a natural liquid architecture system
US9489431B2 (en) 2005-10-26 2016-11-08 Cortica, Ltd. System and method for distributed search-by-content
US9529984B2 (en) 2005-10-26 2016-12-27 Cortica, Ltd. System and method for verification of user identification based on multimedia content elements
US9558449B2 (en) 2005-10-26 2017-01-31 Cortica, Ltd. System and method for identifying a target area in a multimedia content element
US9639532B2 (en) 2005-10-26 2017-05-02 Cortica, Ltd. Context-based analysis of multimedia content items using signatures of multimedia elements and matching concepts
US9646005B2 (en) 2005-10-26 2017-05-09 Cortica, Ltd. System and method for creating a database of multimedia content elements assigned to users
US9747420B2 (en) 2005-10-26 2017-08-29 Cortica, Ltd. System and method for diagnosing a patient based on an analysis of multimedia content
US20170255619A1 (en) * 2005-10-26 2017-09-07 Cortica, Ltd. System and methods for determining access permissions on personalized clusters of multimedia content elements
US9767143B2 (en) 2005-10-26 2017-09-19 Cortica, Ltd. System and method for caching of concept structures
US9953032B2 (en) 2005-10-26 2018-04-24 Cortica, Ltd. System and method for characterization of multimedia content signals using cores of a natural liquid architecture system
US10180942B2 (en) 2005-10-26 2019-01-15 Cortica Ltd. System and method for generation of concept structures based on sub-concepts
US10193990B2 (en) 2005-10-26 2019-01-29 Cortica Ltd. System and method for creating user profiles based on multimedia content
US10191976B2 (en) 2005-10-26 2019-01-29 Cortica, Ltd. System and method of detecting common patterns within unstructured data elements retrieved from big data sources
US10304036B2 (en) 2012-05-07 2019-05-28 Nasdaq, Inc. Social media profiling for one or more authors using one or more social media platforms
US10331737B2 (en) 2005-10-26 2019-06-25 Cortica Ltd. System for generation of a large-scale database of hetrogeneous speech
US10360253B2 (en) 2005-10-26 2019-07-23 Cortica, Ltd. Systems and methods for generation of searchable structures respective of multimedia data content
US10372746B2 (en) 2005-10-26 2019-08-06 Cortica, Ltd. System and method for searching applications using multimedia content elements
US10380623B2 (en) 2005-10-26 2019-08-13 Cortica, Ltd. System and method for generating an advertisement effectiveness performance score
US10380164B2 (en) 2005-10-26 2019-08-13 Cortica, Ltd. System and method for using on-image gestures and multimedia content elements as search queries
US10380267B2 (en) 2005-10-26 2019-08-13 Cortica, Ltd. System and method for tagging multimedia content elements
US10387914B2 (en) 2005-10-26 2019-08-20 Cortica, Ltd. Method for identification of multimedia content elements and adding advertising content respective thereof
WO2019183436A1 (en) * 2018-03-23 2019-09-26 nedl.com, Inc. Real-time audio stream search and presentation system
US10535192B2 (en) 2005-10-26 2020-01-14 Cortica Ltd. System and method for generating a customized augmented reality environment to a user
US10585934B2 (en) 2005-10-26 2020-03-10 Cortica Ltd. Method and system for populating a concept database with respect to user identifiers
US10607355B2 (en) 2005-10-26 2020-03-31 Cortica, Ltd. Method and system for determining the dimensions of an object shown in a multimedia content item
US10614626B2 (en) 2005-10-26 2020-04-07 Cortica Ltd. System and method for providing augmented reality challenges
US10621988B2 (en) 2005-10-26 2020-04-14 Cortica Ltd System and method for speech to text translation using cores of a natural liquid architecture system
US10635640B2 (en) 2005-10-26 2020-04-28 Cortica, Ltd. System and method for enriching a concept database
US10691642B2 (en) 2005-10-26 2020-06-23 Cortica Ltd System and method for enriching a concept database with homogenous concepts
US10698939B2 (en) 2005-10-26 2020-06-30 Cortica Ltd System and method for customizing images
US10733326B2 (en) 2006-10-26 2020-08-04 Cortica Ltd. System and method for identification of inappropriate multimedia content
US10742340B2 (en) 2005-10-26 2020-08-11 Cortica Ltd. System and method for identifying the context of multimedia content elements displayed in a web-page and providing contextual filters respective thereto
US10748022B1 (en) 2019-12-12 2020-08-18 Cartica Ai Ltd Crowd separation
US10748038B1 (en) 2019-03-31 2020-08-18 Cortica Ltd. Efficient calculation of a robust signature of a media unit
US10776585B2 (en) 2005-10-26 2020-09-15 Cortica, Ltd. System and method for recognizing characters in multimedia content
US10776669B1 (en) 2019-03-31 2020-09-15 Cortica Ltd. Signature generation and object detection that refer to rare scenes
US10789535B2 (en) 2018-11-26 2020-09-29 Cartica Ai Ltd Detection of road elements
US10789527B1 (en) 2019-03-31 2020-09-29 Cortica Ltd. Method for object detection using shallow neural networks
US10796444B1 (en) 2019-03-31 2020-10-06 Cortica Ltd Configuring spanning elements of a signature generator
US10839694B2 (en) 2018-10-18 2020-11-17 Cartica Ai Ltd Blind spot alert
US10848590B2 (en) 2005-10-26 2020-11-24 Cortica Ltd System and method for determining a contextual insight and providing recommendations based thereon
US10846544B2 (en) 2018-07-16 2020-11-24 Cartica Ai Ltd. Transportation prediction system and method
US10949773B2 (en) 2005-10-26 2021-03-16 Cortica, Ltd. System and methods thereof for recommending tags for multimedia content elements based on context
US11019161B2 (en) 2005-10-26 2021-05-25 Cortica, Ltd. System and method for profiling users interest based on multimedia content analysis
US11029685B2 (en) 2018-10-18 2021-06-08 Cartica Ai Ltd. Autonomous risk assessment for fallen cargo
US11032017B2 (en) 2005-10-26 2021-06-08 Cortica, Ltd. System and method for identifying the context of multimedia content elements
US11037015B2 (en) 2015-12-15 2021-06-15 Cortica Ltd. Identification of key points in multimedia data elements
US11126869B2 (en) 2018-10-26 2021-09-21 Cartica Ai Ltd. Tracking after objects
US11126870B2 (en) 2018-10-18 2021-09-21 Cartica Ai Ltd. Method and system for obstacle detection
US11132548B2 (en) 2019-03-20 2021-09-28 Cortica Ltd. Determining object information that does not explicitly appear in a media unit signature
US11181911B2 (en) 2018-10-18 2021-11-23 Cartica Ai Ltd Control transfer of a vehicle
US11195043B2 (en) 2015-12-15 2021-12-07 Cortica, Ltd. System and method for determining common patterns in multimedia content elements based on key points
US11216498B2 (en) 2005-10-26 2022-01-04 Cortica, Ltd. System and method for generating signatures to three-dimensional multimedia data elements
US11222069B2 (en) 2019-03-31 2022-01-11 Cortica Ltd. Low-power calculation of a signature of a media unit
US11285963B2 (en) 2019-03-10 2022-03-29 Cartica Ai Ltd. Driver-based prediction of dangerous events
US11361014B2 (en) 2005-10-26 2022-06-14 Cortica Ltd. System and method for completing a user profile
US11386139B2 (en) 2005-10-26 2022-07-12 Cortica Ltd. System and method for generating analytics for entities depicted in multimedia content
US11403336B2 (en) 2005-10-26 2022-08-02 Cortica Ltd. System and method for removing contextually identical multimedia content elements
US11590988B2 (en) 2020-03-19 2023-02-28 Autobrains Technologies Ltd Predictive turning assistant
US11593662B2 (en) 2019-12-12 2023-02-28 Autobrains Technologies Ltd Unsupervised cluster generation
US11604847B2 (en) 2005-10-26 2023-03-14 Cortica Ltd. System and method for overlaying content on a multimedia content element based on user interest
US11620327B2 (en) 2005-10-26 2023-04-04 Cortica Ltd System and method for determining a contextual insight and generating an interface with recommendations based thereon
US11643005B2 (en) 2019-02-27 2023-05-09 Autobrains Technologies Ltd Adjusting adjustable headlights of a vehicle
US11694088B2 (en) 2019-03-13 2023-07-04 Cortica Ltd. Method for object detection using knowledge distillation
US11758004B2 (en) 2005-10-26 2023-09-12 Cortica Ltd. System and method for providing recommendations based on user profiles
US11756424B2 (en) 2020-07-24 2023-09-12 AutoBrains Technologies Ltd. Parking assist
US11760387B2 (en) 2017-07-05 2023-09-19 AutoBrains Technologies Ltd. Driving policies determination
US11827215B2 (en) 2020-03-31 2023-11-28 AutoBrains Technologies Ltd. Method for training a driving related object detector
US11899707B2 (en) 2017-07-09 2024-02-13 Cortica Ltd. Driving policies determination

Families Citing this family (78)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7111009B1 (en) 1997-03-14 2006-09-19 Microsoft Corporation Interactive playlist generation using annotations
US7295752B1 (en) 1997-08-14 2007-11-13 Virage, Inc. Video cataloger system with audio track extraction
CN1867068A (en) 1998-07-14 2006-11-22 联合视频制品公司 Client-server based interactive television program guide system with remote server recording
US6833865B1 (en) * 1998-09-01 2004-12-21 Virage, Inc. Embedded metadata engines in digital capture devices
US8171509B1 (en) 2000-04-07 2012-05-01 Virage, Inc. System and method for applying a database to video multimedia
US7260564B1 (en) 2000-04-07 2007-08-21 Virage, Inc. Network video guide and spidering
US7962948B1 (en) 2000-04-07 2011-06-14 Virage, Inc. Video-enabled community building
US7222163B1 (en) 2000-04-07 2007-05-22 Virage, Inc. System and method for hosting of video content over a network
KR20120032046A (en) 2000-10-11 2012-04-04 유나이티드 비디오 프로퍼티즈, 인크. Systems and methods for delivering media content
US7912827B2 (en) * 2004-12-02 2011-03-22 At&T Intellectual Property Ii, L.P. System and method for searching text-based media content
KR100782810B1 (en) * 2005-01-07 2007-12-06 삼성전자주식회사 Apparatus and method of reproducing an storage medium having metadata for providing enhanced search
TWI323456B (en) * 2005-01-07 2010-04-11 Samsung Electronics Co Ltd Storage medium storing metadata for providing enhanced search function
JP2006262034A (en) * 2005-03-17 2006-09-28 Hitachi Ltd Broadcast receiver terminal and information processing apparatus
KR100772857B1 (en) * 2005-06-28 2007-11-02 삼성전자주식회사 Apparatus and method for playing content according to number-key input
US20070027844A1 (en) * 2005-07-28 2007-02-01 Microsoft Corporation Navigating recorded multimedia content using keywords or phrases
US8156114B2 (en) * 2005-08-26 2012-04-10 At&T Intellectual Property Ii, L.P. System and method for searching and analyzing media content
NO327155B1 (en) 2005-10-19 2009-05-04 Fast Search & Transfer Asa Procedure for displaying video data within result presentations in systems for accessing and searching for information
WO2007073347A1 (en) * 2005-12-19 2007-06-28 Agency For Science, Technology And Research Annotation of video footage and personalised video generation
US20070154171A1 (en) * 2006-01-04 2007-07-05 Elcock Albert F Navigating recorded video using closed captioning
US20070154176A1 (en) * 2006-01-04 2007-07-05 Elcock Albert F Navigating recorded video using captioning, dialogue and sound effects
US20070174326A1 (en) * 2006-01-24 2007-07-26 Microsoft Corporation Application of metadata to digital media
CN1859142A (en) * 2006-03-08 2006-11-08 华为技术有限公司 Broadcasting method and system
US7954049B2 (en) 2006-05-15 2011-05-31 Microsoft Corporation Annotating multimedia files along a timeline
US20070276852A1 (en) * 2006-05-25 2007-11-29 Microsoft Corporation Downloading portions of media files
US8577889B2 (en) * 2006-07-18 2013-11-05 Aol Inc. Searching for transient streaming multimedia resources
KR100792261B1 (en) * 2006-07-19 2008-01-07 삼성전자주식회사 System for managing video based on topic and method using the same and method for searching video based on topic
US7962937B2 (en) * 2006-08-01 2011-06-14 Microsoft Corporation Media content catalog service
KR100778314B1 (en) * 2006-08-21 2007-11-22 한국전자통신연구원 System and method for processing continuous integrated queries on both data stream and stored data using user-defined shared trigger
US8214374B1 (en) * 2011-09-26 2012-07-03 Limelight Networks, Inc. Methods and systems for abridging video files
US8966389B2 (en) 2006-09-22 2015-02-24 Limelight Networks, Inc. Visual interface for identifying positions of interest within a sequentially ordered information encoding
US9015172B2 (en) 2006-09-22 2015-04-21 Limelight Networks, Inc. Method and subsystem for searching media content within a content-search service system
US8396878B2 (en) 2006-09-22 2013-03-12 Limelight Networks, Inc. Methods and systems for generating automated tags for video files
US8832742B2 (en) * 2006-10-06 2014-09-09 United Video Properties, Inc. Systems and methods for acquiring, categorizing and delivering media in interactive media guidance applications
AU2013201160B2 (en) * 2006-10-06 2016-09-29 Rovi Guides, Inc. Systems and Methods for Acquiring, Categorizing and Delivering Media in Interactive Media Guidance Applications
AU2016277714B2 (en) * 2006-10-06 2018-07-05 Rovi Guides, Inc. Systems and Methods for Acquiring, Categorizing and Delivering Media in Interactive Media Guidance Applications
US8381249B2 (en) 2006-10-06 2013-02-19 United Video Properties, Inc. Systems and methods for acquiring, categorizing and delivering media in interactive media guidance applications
EP1976297A1 (en) * 2007-03-29 2008-10-01 Koninklijke KPN N.V. Method and system for automatically selecting television channels
US7930420B2 (en) * 2007-06-25 2011-04-19 University Of Southern California Source-based alert when streaming media of live event on computer network is of current interest and related feedback
US8781996B2 (en) 2007-07-12 2014-07-15 At&T Intellectual Property Ii, L.P. Systems, methods and computer program products for searching within movies (SWiM)
US8238669B2 (en) * 2007-08-22 2012-08-07 Google Inc. Detection and classification of matches between time-based media
US9178957B2 (en) * 2007-09-27 2015-11-03 Adobe Systems Incorporated Application and data agnostic collaboration services
US9420014B2 (en) 2007-11-15 2016-08-16 Adobe Systems Incorporated Saving state of a collaborative session in an editable format
US20090281897A1 (en) * 2008-05-07 2009-11-12 Antos Jeffrey D Capture and Storage of Broadcast Information for Enhanced Retrieval
US8239359B2 (en) * 2008-09-23 2012-08-07 Disney Enterprises, Inc. System and method for visual search in a video media player
US7945622B1 (en) 2008-10-01 2011-05-17 Adobe Systems Incorporated User-aware collaboration playback and recording
US9294291B2 (en) 2008-11-12 2016-03-22 Adobe Systems Incorporated Adaptive connectivity in network-based collaboration
US10063934B2 (en) 2008-11-25 2018-08-28 Rovi Technologies Corporation Reducing unicast session duration with restart TV
US8914829B2 (en) * 2009-09-14 2014-12-16 At&T Intellectual Property I, Lp System and method of proactively recording to a digital video recorder for data analysis
US8910232B2 (en) * 2009-09-14 2014-12-09 At&T Intellectual Property I, Lp System and method of analyzing internet protocol television content for closed-captioning information
US8938761B2 (en) * 2009-09-14 2015-01-20 At&T Intellectual Property I, Lp System and method of analyzing internet protocol television content credits information
US20110072456A1 (en) * 2009-09-24 2011-03-24 At&T Intellectual Property I, L.P. System and Method for Substituting Broadband Delivered Advertisements for Expired Advertisements
CN101720028A (en) * 2009-12-01 2010-06-02 北京中星微电子有限公司 Method and system for realizing voice broadcast during video monitoring
US20110173270A1 (en) * 2010-01-11 2011-07-14 Ricoh Company, Ltd. Conferencing Apparatus And Method
US9190109B2 (en) * 2010-03-23 2015-11-17 Disney Enterprises, Inc. System and method for video poetry using text based related media
US8688679B2 (en) 2010-07-20 2014-04-01 Smartek21, Llc Computer-implemented system and method for providing searchable online media content
US8688667B1 (en) * 2011-02-08 2014-04-01 Google Inc. Providing intent sensitive search results
US9473614B2 (en) * 2011-08-12 2016-10-18 Htc Corporation Systems and methods for incorporating a control connected media frame
US20130066633A1 (en) * 2011-09-09 2013-03-14 Verisign, Inc. Providing Audio-Activated Resource Access for User Devices
US8805418B2 (en) 2011-12-23 2014-08-12 United Video Properties, Inc. Methods and systems for performing actions based on location-based rules
US8972262B1 (en) 2012-01-18 2015-03-03 Google Inc. Indexing and search of content in recorded group communications
US20130291019A1 (en) * 2012-04-27 2013-10-31 Mixaroo, Inc. Self-learning methods, entity relations, remote control, and other features for real-time processing, storage, indexing, and delivery of segmented video
US8521719B1 (en) 2012-10-10 2013-08-27 Limelight Networks, Inc. Searchable and size-constrained local log repositories for tracking visitors' access to web content
US8782721B1 (en) 2013-04-05 2014-07-15 Wowza Media Systems, LLC Closed captions for live streams
US8782722B1 (en) 2013-04-05 2014-07-15 Wowza Media Systems, LLC Decoding of closed captions at a media server
KR102121534B1 (en) 2015-03-10 2020-06-10 삼성전자주식회사 Method and device for determining similarity of sequences
US9667640B2 (en) 2015-04-28 2017-05-30 Splunk Inc. Automatically generating alerts based on information obtained from search results in a query-processing system
US9922097B2 (en) * 2015-04-28 2018-03-20 Splunk Inc. Facilitating configuration of alerts based on information obtained from search results in a query-processing system
GB2546797A (en) * 2016-01-29 2017-08-02 Waazon (Holdings) Ltd Automated search method, apparatus and database
US11868445B2 (en) 2016-06-24 2024-01-09 Discovery Communications, Llc Systems and methods for federated searches of assets in disparate dam repositories
US10452714B2 (en) 2016-06-24 2019-10-22 Scripps Networks Interactive, Inc. Central asset registry system and method
US10372883B2 (en) 2016-06-24 2019-08-06 Scripps Networks Interactive, Inc. Satellite and central asset registry systems and methods and rights management systems
US10845956B2 (en) * 2017-05-31 2020-11-24 Snap Inc. Methods and systems for voice driven dynamic menus
US10891100B2 (en) 2018-04-11 2021-01-12 Matthew Cohn System and method for capturing and accessing real-time audio and associated metadata
US11569921B2 (en) 2019-03-22 2023-01-31 Matthew Cohn System and method for capturing and accessing real-time audio and associated metadata
US11531712B2 (en) 2019-03-28 2022-12-20 Cohesity, Inc. Unified metadata search
US10795699B1 (en) * 2019-03-28 2020-10-06 Cohesity, Inc. Central storage management interface supporting native user interface versions
US11463507B1 (en) * 2019-04-22 2022-10-04 Audible, Inc. Systems for generating captions for audio content
CN111898510B (en) * 2020-07-23 2023-07-28 合肥工业大学 Cross-modal pedestrian re-identification method based on progressive neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9504376D0 (en) 1995-03-04 1995-04-26 Televitesse Systems Inc Automatic broadcast monitoring system

Patent Citations (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3760275A (en) * 1970-10-24 1973-09-18 T Ohsawa Automatic telecasting or radio broadcasting monitoring system
US4792864A (en) * 1985-09-03 1988-12-20 Video Research Limited Apparatus for detecting recorded data in a video tape recorder for audience rating purposes
US5157491A (en) * 1988-10-17 1992-10-20 Kassatly L Samuel A Method and apparatus for video broadcasting and teleconferencing
US4975770A (en) * 1989-07-31 1990-12-04 Troxell James D Method for the enhancement of contours for video broadcasts
US5313297A (en) * 1991-09-19 1994-05-17 Costem Inc. System for providing pictures responding to users' remote control
US5231494A (en) * 1991-10-08 1993-07-27 General Instrument Corporation Selection of compressed television signals from single channel allocation based on viewer characteristics
US5717878A (en) * 1994-02-25 1998-02-10 Sextant Avionique Method and device for distributing multimedia data, providing both video broadcast and video distribution services
US5636346A (en) * 1994-05-09 1997-06-03 The Electronic Address, Inc. Method and system for selectively targeting advertisements and programming
US20020133816A1 (en) * 1994-06-21 2002-09-19 Greene Steven Bradford System for collecting data concerning received transmitted material
US6606128B2 (en) * 1995-11-20 2003-08-12 United Video Properties, Inc. Interactive special events video signal navigation system
US5892554A (en) * 1995-11-28 1999-04-06 Princeton Video Image, Inc. System and method for inserting static and dynamic images into a live video broadcast
US6061056A (en) * 1996-03-04 2000-05-09 Telexis Corporation Television monitoring system with automatic selection of program material of interest and subsequent display under user control
US5999970A (en) * 1996-04-10 1999-12-07 World Gate Communications, Llc Access system and method for providing interactive access to an information source through a television distribution system
US6160988A (en) * 1996-05-30 2000-12-12 Electronic Data Systems Corporation System and method for managing hardware to control transmission and reception of video broadcasts
US6157809A (en) * 1996-08-07 2000-12-05 Kabushiki Kaisha Toshiba Broadcasting system, broadcast receiving unit, and recording medium used in the broadcasting system
US5986692A (en) * 1996-10-03 1999-11-16 Logan; James D. Systems and methods for computer enhanced broadcast monitoring
US6188436B1 (en) * 1997-01-31 2001-02-13 Hughes Electronics Corporation Video broadcast system with video data shifting
US6226030B1 (en) * 1997-03-28 2001-05-01 International Business Machines Corporation Automated and selective distribution of video broadcasts
US6320917B1 (en) * 1997-05-02 2001-11-20 Lsi Logic Corporation Demodulating digital video broadcast signals
US5847760A (en) * 1997-05-22 1998-12-08 Optibase Ltd. Method for managing video broadcast
US6546556B1 (en) * 1997-12-26 2003-04-08 Matsushita Electric Industrial Co., Ltd. Video clip identification system unusable for commercial cutting
US6266094B1 (en) * 1999-06-14 2001-07-24 Medialink Worldwide Incorporated Method and apparatus for the aggregation and selective retrieval of television closed caption word content originating from multiple geographic locations
US20010049820A1 (en) * 1999-12-21 2001-12-06 Barton James M. Method for enhancing digital video recorder television advertising viewership
US20050273828A1 (en) * 1999-12-21 2005-12-08 Tivo Inc. Method for enhancing digital video recorder television advertising viewership
US6397041B1 (en) * 1999-12-22 2002-05-28 Radio Propagation Services, Inc. Broadcast monitoring and control system
US20020056093A1 (en) * 2000-02-02 2002-05-09 Kunkel Gerard K. System and method for transmitting and displaying targeted information
US20010023498A1 (en) * 2000-03-15 2001-09-20 Michel Cosmao Process for displaying broadcast and recorded transmissions possessing a common characteristic and associated receiver
US20020120925A1 (en) * 2000-03-28 2002-08-29 Logan James D. Audio and video program recording, editing and playback systems using metadata
US20020129381A1 (en) * 2000-04-21 2002-09-12 Barone Samuel T. System and method for merging interactive television data with closed caption data
US20020152463A1 (en) * 2000-11-16 2002-10-17 Dudkiewicz Gil Gavriel System and method for personalized presentation of video programming events
US20020157113A1 (en) * 2001-04-20 2002-10-24 Fred Allegrezza System and method for retrieving and storing multimedia data
US20030033603A1 (en) * 2001-07-03 2003-02-13 Canon Kabushiki Kaisha Receiving apparatus, program notifying method, recording medium, and program
US20030140342A1 (en) * 2001-12-20 2003-07-24 Pioneer Corporation System and method for preparing a TV viewing schedule
US20030163830A1 (en) * 2002-02-23 2003-08-28 Nam Ho Jun Method for automatically searching cable TV band
US20030229900A1 (en) * 2002-05-10 2003-12-11 Richard Reisman Method and apparatus for browsing using multiple coordinated device sets
US20040031058A1 (en) * 2002-05-10 2004-02-12 Richard Reisman Method and apparatus for browsing using alternative linkbases

Cited By (155)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10380623B2 (en) 2005-10-26 2019-08-13 Cortica, Ltd. System and method for generating an advertisement effectiveness performance score
US9292519B2 (en) 2005-10-26 2016-03-22 Cortica, Ltd. Signature-based system and method for generation of personalized multimedia channels
US11758004B2 (en) 2005-10-26 2023-09-12 Cortica Ltd. System and method for providing recommendations based on user profiles
US11620327B2 (en) 2005-10-26 2023-04-04 Cortica Ltd System and method for determining a contextual insight and generating an interface with recommendations based thereon
US20090043818A1 (en) * 2005-10-26 2009-02-12 Cortica, Ltd. Signature generation for multimedia deep-content-classification by a large-scale matching system and method thereof
US20090112864A1 (en) * 2005-10-26 2009-04-30 Cortica, Ltd. Methods for Identifying Relevant Metadata for Multimedia Data of a Large-Scale Matching System
US20090216761A1 (en) * 2005-10-26 2009-08-27 Cortica, Ltd. Signature Based System and Methods for Generation of Personalized Multimedia Channels
US11604847B2 (en) 2005-10-26 2023-03-14 Cortica Ltd. System and method for overlaying content on a multimedia content element based on user interest
US20090282218A1 (en) * 2005-10-26 2009-11-12 Cortica, Ltd. Unsupervised Clustering of Multimedia Data Using a Large-Scale Matching System
US20090313305A1 (en) * 2005-10-26 2009-12-17 Cortica, Ltd. System and Method for Generation of Complex Signatures for Multimedia Data Content
US11403336B2 (en) 2005-10-26 2022-08-02 Cortica Ltd. System and method for removing contextually identical multimedia content elements
US20100262609A1 (en) * 2005-10-26 2010-10-14 Cortica, Ltd. System and method for linking multimedia data elements to web pages
US11386139B2 (en) 2005-10-26 2022-07-12 Cortica Ltd. System and method for generating analytics for entities depicted in multimedia content
US11361014B2 (en) 2005-10-26 2022-06-14 Cortica Ltd. System and method for completing a user profile
US8112376B2 (en) 2005-10-26 2012-02-07 Cortica Ltd. Signature based system and methods for generation of personalized multimedia channels
US8266185B2 (en) 2005-10-26 2012-09-11 Cortica Ltd. System and methods thereof for generation of searchable structures respective of multimedia data content
US11216498B2 (en) 2005-10-26 2022-01-04 Cortica, Ltd. System and method for generating signatures to three-dimensional multimedia data elements
US8312031B2 (en) 2005-10-26 2012-11-13 Cortica Ltd. System and method for generation of complex signatures for multimedia data content
US11032017B2 (en) 2005-10-26 2021-06-08 Cortica, Ltd. System and method for identifying the context of multimedia content elements
US8326775B2 (en) 2005-10-26 2012-12-04 Cortica Ltd. Signature generation for multimedia deep-content-classification by a large-scale matching system and method thereof
US8386400B2 (en) 2005-10-26 2013-02-26 Cortica Ltd. Unsupervised clustering of multimedia data using a large-scale matching system
US8799195B2 (en) 2005-10-26 2014-08-05 Cortica, Ltd. Method for unsupervised clustering of multimedia data using a large-scale matching system
US8799196B2 (en) 2005-10-26 2014-08-05 Cortica, Ltd. Method for reducing an amount of storage required for maintaining large-scale collection of multimedia data elements by unsupervised clustering of multimedia data elements
US8818916B2 (en) 2005-10-26 2014-08-26 Cortica, Ltd. System and method for linking multimedia data elements to web pages
US8868619B2 (en) 2005-10-26 2014-10-21 Cortica, Ltd. System and methods thereof for generation of searchable structures respective of multimedia data content
US8880539B2 (en) 2005-10-26 2014-11-04 Cortica, Ltd. System and method for generation of signatures for multimedia data elements
US8880566B2 (en) 2005-10-26 2014-11-04 Cortica, Ltd. Assembler and method thereof for generating a complex signature of an input multimedia data element
US11019161B2 (en) 2005-10-26 2021-05-25 Cortica, Ltd. System and method for profiling users interest based on multimedia content analysis
US8959037B2 (en) 2005-10-26 2015-02-17 Cortica, Ltd. Signature based system and methods for generation of personalized multimedia channels
US8990125B2 (en) 2005-10-26 2015-03-24 Cortica, Ltd. Signature generation for multimedia deep-content-classification by a large-scale matching system and method thereof
US9009086B2 (en) 2005-10-26 2015-04-14 Cortica, Ltd. Method for unsupervised clustering of multimedia data using a large-scale matching system
US9031999B2 (en) 2005-10-26 2015-05-12 Cortica, Ltd. System and methods for generation of a concept based database
US9087049B2 (en) 2005-10-26 2015-07-21 Cortica, Ltd. System and method for context translation of natural language
US9104747B2 (en) 2005-10-26 2015-08-11 Cortica, Ltd. System and method for signature-based unsupervised clustering of data elements
US11003706B2 (en) * 2005-10-26 2021-05-11 Cortica Ltd System and methods for determining access permissions on personalized clusters of multimedia content elements
US10949773B2 (en) 2005-10-26 2021-03-16 Cortica, Ltd. System and methods thereof for recommending tags for multimedia content elements based on context
US9191626B2 (en) 2005-10-26 2015-11-17 Cortica, Ltd. System and methods thereof for visual analysis of an image on a web-page and matching an advertisement thereto
US10902049B2 (en) 2005-10-26 2021-01-26 Cortica Ltd System and method for assigning multimedia content elements to users
US9218606B2 (en) 2005-10-26 2015-12-22 Cortica, Ltd. System and method for brand monitoring and trend analysis based on deep-content-classification
US9235557B2 (en) 2005-10-26 2016-01-12 Cortica, Ltd. System and method thereof for dynamically associating a link to an information resource with a multimedia content displayed in a web-page
US10380164B2 (en) 2005-10-26 2019-08-13 Cortica, Ltd. System and method for using on-image gestures and multimedia content elements as search queries
US9256668B2 (en) 2005-10-26 2016-02-09 Cortica, Ltd. System and method of detecting common patterns within unstructured data elements retrieved from big data sources
US9286623B2 (en) 2005-10-26 2016-03-15 Cortica, Ltd. Method for determining an area within a multimedia content element over which an advertisement can be displayed
US10380267B2 (en) 2005-10-26 2019-08-13 Cortica, Ltd. System and method for tagging multimedia content elements
US9330189B2 (en) 2005-10-26 2016-05-03 Cortica, Ltd. System and method for capturing a multimedia content item by a mobile device and matching sequentially relevant content to the multimedia content item
US9372940B2 (en) 2005-10-26 2016-06-21 Cortica, Ltd. Apparatus and method for determining user attention using a deep-content-classification (DCC) system
US9396435B2 (en) 2005-10-26 2016-07-19 Cortica, Ltd. System and method for identification of deviations from periodic behavior patterns in multimedia content
US10848590B2 (en) 2005-10-26 2020-11-24 Cortica Ltd System and method for determining a contextual insight and providing recommendations based thereon
US9449001B2 (en) 2005-10-26 2016-09-20 Cortica, Ltd. System and method for generation of signatures for multimedia data elements
US9466068B2 (en) 2005-10-26 2016-10-11 Cortica, Ltd. System and method for determining a pupillary response to a multimedia data element
US9477658B2 (en) 2005-10-26 2016-10-25 Cortica, Ltd. Systems and method for speech to speech translation using cores of a natural liquid architecture system
US9489431B2 (en) 2005-10-26 2016-11-08 Cortica, Ltd. System and method for distributed search-by-content
US9529984B2 (en) 2005-10-26 2016-12-27 Cortica, Ltd. System and method for verification of user identification based on multimedia content elements
US9558449B2 (en) 2005-10-26 2017-01-31 Cortica, Ltd. System and method for identifying a target area in a multimedia content element
US9575969B2 (en) 2005-10-26 2017-02-21 Cortica, Ltd. Systems and methods for generation of searchable structures respective of multimedia data content
US9639532B2 (en) 2005-10-26 2017-05-02 Cortica, Ltd. Context-based analysis of multimedia content items using signatures of multimedia elements and matching concepts
US9646006B2 (en) 2005-10-26 2017-05-09 Cortica, Ltd. System and method for capturing a multimedia content item by a mobile device and matching sequentially relevant content to the multimedia content item
US9646005B2 (en) 2005-10-26 2017-05-09 Cortica, Ltd. System and method for creating a database of multimedia content elements assigned to users
US9652785B2 (en) 2005-10-26 2017-05-16 Cortica, Ltd. System and method for matching advertisements to multimedia content elements
US9672217B2 (en) 2005-10-26 2017-06-06 Cortica, Ltd. System and methods for generation of a concept based database
US9747420B2 (en) 2005-10-26 2017-08-29 Cortica, Ltd. System and method for diagnosing a patient based on an analysis of multimedia content
US20170255619A1 (en) * 2005-10-26 2017-09-07 Cortica, Ltd. System and methods for determining access permissions on personalized clusters of multimedia content elements
US9767143B2 (en) 2005-10-26 2017-09-19 Cortica, Ltd. System and method for caching of concept structures
US9792620B2 (en) 2005-10-26 2017-10-17 Cortica, Ltd. System and method for brand monitoring and trend analysis based on deep-content-classification
US9798795B2 (en) 2005-10-26 2017-10-24 Cortica, Ltd. Methods for identifying relevant metadata for multimedia data of a large-scale matching system
US9886437B2 (en) 2005-10-26 2018-02-06 Cortica, Ltd. System and method for generation of signatures for multimedia data elements
US9940326B2 (en) 2005-10-26 2018-04-10 Cortica, Ltd. System and method for speech to speech translation using cores of a natural liquid architecture system
US9953032B2 (en) 2005-10-26 2018-04-24 Cortica, Ltd. System and method for characterization of multimedia content signals using cores of a natural liquid architecture system
US10180942B2 (en) 2005-10-26 2019-01-15 Cortica Ltd. System and method for generation of concept structures based on sub-concepts
US10193990B2 (en) 2005-10-26 2019-01-29 Cortica Ltd. System and method for creating user profiles based on multimedia content
US10191976B2 (en) 2005-10-26 2019-01-29 Cortica, Ltd. System and method of detecting common patterns within unstructured data elements retrieved from big data sources
US10210257B2 (en) 2005-10-26 2019-02-19 Cortica, Ltd. Apparatus and method for determining user attention using a deep-content-classification (DCC) system
US10831814B2 (en) 2005-10-26 2020-11-10 Cortica, Ltd. System and method for linking multimedia data elements to web pages
US10331737B2 (en) 2005-10-26 2019-06-25 Cortica Ltd. System for generation of a large-scale database of heterogeneous speech
US10360253B2 (en) 2005-10-26 2019-07-23 Cortica, Ltd. Systems and methods for generation of searchable structures respective of multimedia data content
US10372746B2 (en) 2005-10-26 2019-08-06 Cortica, Ltd. System and method for searching applications using multimedia content elements
US10698939B2 (en) 2005-10-26 2020-06-30 Cortica Ltd System and method for customizing images
US10776585B2 (en) 2005-10-26 2020-09-15 Cortica, Ltd. System and method for recognizing characters in multimedia content
US10742340B2 (en) 2005-10-26 2020-08-11 Cortica Ltd. System and method for identifying the context of multimedia content elements displayed in a web-page and providing contextual filters respective thereto
US10387914B2 (en) 2005-10-26 2019-08-20 Cortica, Ltd. Method for identification of multimedia content elements and adding advertising content respective thereof
US10706094B2 (en) 2005-10-26 2020-07-07 Cortica Ltd System and method for customizing a display of a user device based on multimedia content element signatures
US10430386B2 (en) 2005-10-26 2019-10-01 Cortica Ltd System and method for enriching a concept database
US10535192B2 (en) 2005-10-26 2020-01-14 Cortica Ltd. System and method for generating a customized augmented reality environment to a user
US10552380B2 (en) 2005-10-26 2020-02-04 Cortica Ltd System and method for contextually enriching a concept database
US10585934B2 (en) 2005-10-26 2020-03-10 Cortica Ltd. Method and system for populating a concept database with respect to user identifiers
US10607355B2 (en) 2005-10-26 2020-03-31 Cortica, Ltd. Method and system for determining the dimensions of an object shown in a multimedia content item
US10614626B2 (en) 2005-10-26 2020-04-07 Cortica Ltd. System and method for providing augmented reality challenges
US10621988B2 (en) 2005-10-26 2020-04-14 Cortica Ltd System and method for speech to text translation using cores of a natural liquid architecture system
US10635640B2 (en) 2005-10-26 2020-04-28 Cortica, Ltd. System and method for enriching a concept database
US10691642B2 (en) 2005-10-26 2020-06-23 Cortica Ltd System and method for enriching a concept database with homogenous concepts
US20070204285A1 (en) * 2006-02-28 2007-08-30 Gert Hercules Louw Method for integrated media monitoring, purchase, and display
US20070203945A1 (en) * 2006-02-28 2007-08-30 Gert Hercules Louw Method for integrated media preview, analysis, purchase, and display
US20110211814A1 (en) * 2006-05-01 2011-09-01 Yahoo! Inc. Systems and methods for indexing and searching digital video content
US9196310B2 (en) * 2006-05-01 2015-11-24 Yahoo! Inc. Systems and methods for indexing and searching digital video content
US20080091513A1 (en) * 2006-09-13 2008-04-17 Video Monitoring Services Of America, L.P. System and method for assessing marketing data
US20090319365A1 (en) * 2006-09-13 2009-12-24 James Hallowell Waggoner System and method for assessing marketing data
US10733326B2 (en) 2006-10-26 2020-08-04 Cortica Ltd. System and method for identification of inappropriate multimedia content
US9241134B2 (en) * 2007-02-14 2016-01-19 Sony Corporation Transfer of metadata using video frames
US20110262105A1 (en) * 2007-02-14 2011-10-27 Candelore Brant L Transfer of Metadata Using Video Frames
US20080313146A1 (en) * 2007-06-15 2008-12-18 Microsoft Corporation Content search service, finding content, and prefetching for thin client
US8312022B2 (en) * 2008-03-21 2012-11-13 Ramp Holdings, Inc. Search engine optimization
US20090240674A1 (en) * 2008-03-21 2009-09-24 Tom Wilde Search Engine Optimization
US20120240177A1 (en) * 2011-03-17 2012-09-20 Anthony Rose Content provision
US11803557B2 (en) 2012-05-07 2023-10-31 Nasdaq, Inc. Social intelligence architecture using social media message queues
US9418389B2 (en) 2012-05-07 2016-08-16 Nasdaq, Inc. Social intelligence architecture using social media message queues
US10304036B2 (en) 2012-05-07 2019-05-28 Nasdaq, Inc. Social media profiling for one or more authors using one or more social media platforms
US11100466B2 (en) 2012-05-07 2021-08-24 Nasdaq, Inc. Social media profiling for one or more authors using one or more social media platforms
US11086885B2 (en) 2012-05-07 2021-08-10 Nasdaq, Inc. Social intelligence architecture using social media message queues
US11847612B2 (en) 2012-05-07 2023-12-19 Nasdaq, Inc. Social media profiling for one or more authors using one or more social media platforms
US8935713B1 (en) * 2012-12-17 2015-01-13 Tubular Labs, Inc. Determining audience members associated with a set of videos
US20150310107A1 (en) * 2014-04-24 2015-10-29 Shadi A. Alhakimi Video and audio content search engine
CN104866404A (en) * 2015-05-19 2015-08-26 北京控制工程研究所 Universal data monitoring method
US11037015B2 (en) 2015-12-15 2021-06-15 Cortica Ltd. Identification of key points in multimedia data elements
US11195043B2 (en) 2015-12-15 2021-12-07 Cortica, Ltd. System and method for determining common patterns in multimedia content elements based on key points
US11760387B2 (en) 2017-07-05 2023-09-19 AutoBrains Technologies Ltd. Driving policies determination
US11899707B2 (en) 2017-07-09 2024-02-13 Cortica Ltd. Driving policies determination
WO2019183436A1 (en) * 2018-03-23 2019-09-26 nedl.com, Inc. Real-time audio stream search and presentation system
US10824670B2 (en) 2018-03-23 2020-11-03 nedl.com, Inc. Real-time audio stream search and presentation system
US10846544B2 (en) 2018-07-16 2020-11-24 Cartica Ai Ltd. Transportation prediction system and method
US11282391B2 (en) 2018-10-18 2022-03-22 Cartica Ai Ltd. Object detection at different illumination conditions
US11126870B2 (en) 2018-10-18 2021-09-21 Cartica Ai Ltd. Method and system for obstacle detection
US11029685B2 (en) 2018-10-18 2021-06-08 Cartica Ai Ltd. Autonomous risk assessment for fallen cargo
US11718322B2 (en) 2018-10-18 2023-08-08 Autobrains Technologies Ltd Risk based assessment
US11673583B2 (en) 2018-10-18 2023-06-13 AutoBrains Technologies Ltd. Wrong-way driving warning
US10839694B2 (en) 2018-10-18 2020-11-17 Cartica Ai Ltd Blind spot alert
US11087628B2 (en) 2018-10-18 2021-08-10 Cartica Ai Ltd. Using rear sensor for wrong-way driving warning
US11181911B2 (en) 2018-10-18 2021-11-23 Cartica Ai Ltd Control transfer of a vehicle
US11685400B2 (en) 2018-10-18 2023-06-27 Autobrains Technologies Ltd Estimating danger from future falling cargo
US11126869B2 (en) 2018-10-26 2021-09-21 Cartica Ai Ltd. Tracking after objects
US11700356B2 (en) 2018-10-26 2023-07-11 AutoBrains Technologies Ltd. Control transfer of a vehicle
US11244176B2 (en) 2018-10-26 2022-02-08 Cartica Ai Ltd Obstacle detection and mapping
US11373413B2 (en) 2018-10-26 2022-06-28 Autobrains Technologies Ltd Concept update and vehicle to vehicle communication
US11270132B2 (en) 2018-10-26 2022-03-08 Cartica Ai Ltd Vehicle to vehicle communication and signatures
US11170233B2 (en) 2018-10-26 2021-11-09 Cartica Ai Ltd. Locating a vehicle based on multimedia content
US10789535B2 (en) 2018-11-26 2020-09-29 Cartica Ai Ltd Detection of road elements
US11643005B2 (en) 2019-02-27 2023-05-09 Autobrains Technologies Ltd Adjusting adjustable headlights of a vehicle
US11285963B2 (en) 2019-03-10 2022-03-29 Cartica Ai Ltd. Driver-based prediction of dangerous events
US11694088B2 (en) 2019-03-13 2023-07-04 Cortica Ltd. Method for object detection using knowledge distillation
US11755920B2 (en) 2019-03-13 2023-09-12 Cortica Ltd. Method for object detection using knowledge distillation
US11132548B2 (en) 2019-03-20 2021-09-28 Cortica Ltd. Determining object information that does not explicitly appear in a media unit signature
US11741687B2 (en) 2019-03-31 2023-08-29 Cortica Ltd. Configuring spanning elements of a signature generator
US10789527B1 (en) 2019-03-31 2020-09-29 Cortica Ltd. Method for object detection using shallow neural networks
US10748038B1 (en) 2019-03-31 2020-08-18 Cortica Ltd. Efficient calculation of a robust signature of a media unit
US10776669B1 (en) 2019-03-31 2020-09-15 Cortica Ltd. Signature generation and object detection that refer to rare scenes
US11488290B2 (en) 2019-03-31 2022-11-01 Cortica Ltd. Hybrid representation of a media unit
US11481582B2 (en) 2019-03-31 2022-10-25 Cortica Ltd. Dynamic matching a sensed signal to a concept structure
US10846570B2 (en) 2019-03-31 2020-11-24 Cortica Ltd. Scale invariant object detection
US10796444B1 (en) 2019-03-31 2020-10-06 Cortica Ltd Configuring spanning elements of a signature generator
US11275971B2 (en) 2019-03-31 2022-03-15 Cortica Ltd. Bootstrap unsupervised learning
US11222069B2 (en) 2019-03-31 2022-01-11 Cortica Ltd. Low-power calculation of a signature of a media unit
US11593662B2 (en) 2019-12-12 2023-02-28 Autobrains Technologies Ltd Unsupervised cluster generation
US10748022B1 (en) 2019-12-12 2020-08-18 Cartica Ai Ltd Crowd separation
US11590988B2 (en) 2020-03-19 2023-02-28 Autobrains Technologies Ltd Predictive turning assistant
US11827215B2 (en) 2020-03-31 2023-11-28 AutoBrains Technologies Ltd. Method for training a driving related object detector
US11756424B2 (en) 2020-07-24 2023-09-12 AutoBrains Technologies Ltd. Parking assist

Also Published As

Publication number Publication date
US8015159B2 (en) 2011-09-06
CA2498364A1 (en) 2005-08-24
US20050198006A1 (en) 2005-09-08
CA2498364C (en) 2012-05-15

Similar Documents

Publication Publication Date Title
US8015159B2 (en) System and method for real-time media searching and alerting
US6061056A (en) Television monitoring system with automatic selection of program material of interest and subsequent display under user control
US6266094B1 (en) Method and apparatus for the aggregation and selective retrieval of television closed caption word content originating from multiple geographic locations
US8589973B2 (en) Peer to peer media distribution system and method
CA2214605C (en) Automatic broadcast monitoring system
US8533210B2 (en) Index of locally recorded content
US8285701B2 (en) Video and digital multimedia aggregator remote content crawler
US9047375B2 (en) Internet video content delivery to television users
US8566872B2 (en) Broadcasting system and program contents delivery system
KR100889986B1 (en) System for providing interactive broadcasting terminal with recommended keyword, and method for the same
US20020170068A1 (en) Virtual and condensed television programs
US20030074671A1 (en) Method for information retrieval based on network
KR100807745B1 (en) Method for providing electronic program guide information and system thereof
US7518657B2 (en) Method and system for the automatic collection and transmission of closed caption text
KR20140010992A (en) Method and device for optimizing storage of recorded video programs
US20020044219A1 (en) Method and system for the automatic collection and conditioning of closed caption text originating from multiple geographic locations
CN1976430B (en) Method for realizing previewing mobile multimedia program in terminal
WO2004043029A2 (en) Multimedia management
US20070136257A1 (en) Information processing apparatus, metadata management server, and metadata management method
KR100878909B1 (en) System and Method of Providing Interactive DMB Broadcast
JP5105109B2 (en) Search device and search system
TWI700925B (en) Digital news film screening and notification methods
IES83424Y1 (en) Multimedia management
KR20060037106A (en) The system for unmanned automatic recording on broadcasting and the method of operating thereof and storage media having program thereof

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20190906