WO2002001548A1 - System for characterizing pieces of music - Google Patents

System for characterizing pieces of music Download PDF

Info

Publication number
WO2002001548A1
WO2002001548A1 PCT/US2001/019970 US0119970W WO0201548A1 WO 2002001548 A1 WO2002001548 A1 WO 2002001548A1 US 0119970 W US0119970 W US 0119970W WO 0201548 A1 WO0201548 A1 WO 0201548A1
Authority
WO
WIPO (PCT)
Prior art keywords
music
pieces
musical
profile
listener
Prior art date
Application number
PCT/US2001/019970
Other languages
French (fr)
Inventor
Brian James Adams
Original Assignee
Music Buddha, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Music Buddha, Inc. filed Critical Music Buddha, Inc.
Priority to AU2001271384A priority Critical patent/AU2001271384A1/en
Publication of WO2002001548A1 publication Critical patent/WO2002001548A1/en

Links

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00Details of electrophonic musical instruments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63Querying
    • G06F16/632Query formulation
    • G06F16/634Query by example, e.g. query by humming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63Querying
    • G06F16/635Filtering based on additional data, e.g. user or group profiles
    • G06F16/637Administration of user profiles, e.g. generation, initialization, adaptation or distribution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63Querying
    • G06F16/638Presentation of query results
    • G06F16/639Presentation of query results using playlists
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/686Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2210/00Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/031Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H2210/036Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal of musical genre, i.e. analysing the style of musical pieces, usually for selection, filtering or classification
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/075Musical metadata derived from musical analysis or for use in electrophonic musical instruments
    • G10H2240/081Genre classification, i.e. descriptive metadata for classification or selection of musical pieces according to style
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/121Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
    • G10H2240/131Library retrieval, i.e. searching a database or selecting a specific musical piece, segment, pattern, rule or parameter set
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10HELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H2240/00Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H2240/171Transmission of musical instrument data, control or status information; Transmission, remote access or control of music data for electrophonic musical instruments
    • G10H2240/281Protocol or standard connector for transmission of analog or digital data to or from an electrophonic musical instrument
    • G10H2240/295Packet switched network, e.g. token ring
    • G10H2240/305Internet or TCP/IP protocol use for any electrophonic musical instrument data or musical parameter transmission purposes

Definitions

  • This invention relates in general to systems for finding information with subjective content that is likely to appeal to a user or a group of users, and, in particular, to a system for finding information with subjective content such as music or other multimedia information that is likely to appeal to users based on subjective criteria.
  • the conventional systems described above have many disadvantages.
  • the suggestions made by these systems are not based on accurate information on the particular listener to whom these suggestions are made; instead, the suggestions are based on the statistical purchasing data or the website owner's perception as to what songs are roughly similar in style without any further analysis as to what makes two songs similar. Consequently, the music suggestions made by means of the conventional systems are less than satisfactory.
  • In U.S. Patent No. 5,616,876, an interactive network providing music to subscribers is described.
  • the purpose of the network is to allow a listener to interactively find more songs that are like a current song being played to the listener; no attempt is made to actually construct a listener profile ahead of time or to accurately match such profile that has been constructed to a song database.
  • style tables are prepared ahead of time for artists with different weightings given to different subjective style categories.
  • a style slider is then provided to allow a listener or user to locate more music that is like the current song being played. If a particular weighting in a style table for a particular artist is at least as high as the position of the style slider, a suggestion will then be made that the listener listen to the artist's works. While the system suggested in U.S. Patent No. 5,616,876 does attempt to identify other songs that are similar to the seed song in subjective content, this is done on the basis of similarity according to a single style element, so that the suggestion so made may still be inaccurate.
  • the invention is based on the recognition that it is possible to predict more accurately pieces of music that would appeal to a listener (or group of listeners) if the system utilizes more than one style element to construct a listener profile and uses such listener profile to find pieces of music that would match the profile. Applicant found that by using more style elements to construct the user profile, and to characterize each piece of music in a database used in the matching, the accuracy of prediction is increased exponentially.
  • the system of the invention can be extended to find other types of information having subjective content that would appeal to a user or a group of users.
  • the style or other elements employed include ones that correspond to subjective musical traits of the pieces of music.
  • This system can be further extended to find pieces of music that match the profile of a particular piece of music, or more generally, a musical profile, which can be one of a piece of music or of a listener or a group of listeners.
  • a database of user profiles is first obtained that reflects the taste of a cross-section of users.
  • This user profile is compiled pertaining to preference of users with respect to a set of elements corresponding to subjective characteristics of the information, such as music or other multimedia information.
  • the item of information is analyzed using the same elements.
  • the system finds a match between the item of information (such as music and multimedia works) and a profile of the user or users in a database.
  • the above-described database of user profiles may be stored together with instructions for performing the above-described matching in a storage medium.
  • the above are features in the parent application.
  • a play list may be created by using the profiles of the pieces of music found to match a musical profile as seed profiles. These seed profiles are then matched against the profiles of music in a database to find additional pieces of music for the play list.
  • the play list may include all of or fewer than all of the pieces of music in the set of seed profiles.
  • locators are provided for locating the pieces of music on the play list, so that such pieces can be retrieved as a stream of audio data to a listener.
  • the play list can serve as a play list for the device, which in turn can retrieve the pieces on the play list as a stream of audio data to a listener.
  • a profile of the listener or group of listeners is provided. Pieces of music from the storage are then obtained that match the profile. Preferably this is done by matching the listener profile against the profiles of music in a database, and finding pieces of music in the database that match such profile, and finding locations of such pieces in the storage, if such pieces can be located in the storage.
  • the above-described database may be stored together with instructions for performing the above-described matching in a storage medium.
  • Fig. 1 is a block diagram of a system for finding music or other information with subjective content that is likely to appeal to a user or a group of users to illustrate the preferred embodiment of the invention.
  • Fig. 2 is a dataflow diagram illustrating the operation of the system of Fig. 1.
  • Fig. 3 is a graphical illustration of a web page to illustrate the operation of the system of Fig. 1.
  • Fig. 4 is a schematic diagram illustrating a storage medium storing a database of user profiles together with instructions for performing matching of information to the database.
  • Fig. 5 is a data flow diagram illustrating a process for generating a play list to illustrate an embodiment of the invention.
  • Fig. 6 is a work flow diagram illustrating a process for generating a play list to illustrate the embodiment of Fig. 5.
  • Fig. 7 is a schematic diagram illustrating a process for retrieving music according to a play list as a stream of audio data to illustrate another embodiment of the invention.
  • Fig. 8 is a schematic diagram illustrating a process for identifying pieces of music in a storage to form a play list.
  • the invention envisions that an expert in each particular area of music would develop a list of musical elements corresponding to the traits that they believe best distinguish one particular piece of music or song from another in that area.
  • traits include the existence of drum rolls, transitions and speed.
  • traits include the degree of similarity to the "40's cotton club sound.”
  • A list of musical elements proposed by Applicant is attached hereto as Appendix A and made a part of this application.
  • the different musical elements may be classified in four different categories: genre (or style), instrument, miscellaneous (or feel) and mood. Not all of the musical elements are necessarily accorded equal weight for the purpose of identifying pieces of music that are most likely to appeal to a particular listener or a group of listeners.
  • the musical elements classified as genre are particularly important for the purpose of identifying pieces of music that are most likely to appeal to a particular listener or a group of listeners.
  • the musical elements described above are then used as the basis for characterizing pieces of music such as songs and the musical tastes of listeners or groups of listeners by analyzing a piece of music such as a song. Using these elements, a profile of the piece of music is produced. To enable a match of listener preferences with the music profile, a profile of a listener is constructed using the same set of musical elements. As discussed in detail below, in order to obtain a profile of a listener, a set of definition music clips is first constructed by choosing one section of a piece of music, where that section highlights one or more defining characteristics of the musical elements in the set. Such definition clips are then played to a listener and the listener is asked to rate each of the definitional clips in a test or quiz.
  • a database of pieces of music may be constructed as follows: pieces of music input to the system are distributed to individuals such as experts for classifying the pieces of music according to the set of musical elements above. The ratings of the experts for the same song or piece of music are combined into a music profile vector. After a number of pieces of music have been so analyzed, a matrix of music profile vectors may be compiled. Thus, if n is the number of musical elements and m is the number of pieces of music such as songs in the database, an n x m matrix may then be constructed consisting of m rows of music profile vectors and n columns of musical elements. The matrix may appear as follows:
  • the first piece of music (e.g. Song 1) comprises the music profile vector V11 V12 V13 ... V1n.
  • the ith piece of music (Song i) is characterized by the music profile vector Vi1 Vi2 Vi3 ... Vin and the mth piece of music (Song m) is characterized by the vector Vm1 Vm2 Vm3 ... Vmn.
  • the same n musical elements are then used for the selection of definition music clips for construction of the listener profile. For example, in the simplest case, n definition clips are employed, each definitional clip corresponding to one musical element. These n definition clips are then played sequentially or as selected to a listener and the listener is requested to rate each of the definition clips.
  • the n ratings P1 P2 P3 ... Pn corresponding to the n definition clips by the listener constitute the listener profile or the listener profile vector.
  • This listener profile vector is then matched against the above n x m matrix to discover the piece of music that is the closest match to the listener profile.
  • the piece of music that is the next closest match to the listener profile can also be found and so on until a predetermined number of pieces of music have been discovered from the matrix that are likely to appeal to the listener with such listener profile. It is possible that for some musical elements, it may not be possible to find a definition clip that highlights only such element without highlighting also, at least to some extent, aspects of other musical elements.
  • a piece of instrumental music chosen for its genre does, at least to some extent, also highlight the instrument that is used.
  • a rule-based system may be employed to resolve any resulting conflicts. For example, if two definition clips both highlight musical elements a, b, but to different extents, the rating of a particular listener with respect to element a may be a weighted average of the two ratings given by the same listener to the two definition clips. The same may be done for the rating with respect to element b.
  • other types of rules may be employed to resolve any apparent inconsistencies.
  • a difference vector may be constructed for each song i in the matrix, where the difference vector Di is given by (P1 - w1Vi1, P2 - w2Vi2, P3 - w3Vi3, P4 - w4Vi4, ..., Pn - wnVin), where w1, w2, w3, ..., wn are the difference weights given to the n different musical elements.
  • the closest match is found by finding the song that has the smallest sum of the different components of the difference vector.
  • the genre category of elements is particularly important.
  • no match is found unless the sum of the difference vector elements corresponding to the category of genre elements is smaller than a threshold, no matter how closely the listener profile matches the other musical elements, such as the instrument musical elements. Obviously, there can be more than one closest match, and songs that are not the closest match but are also close to the listener profile may be deemed to be useful to be presented to the listener.
  • Fig. 1 is a block diagram of a system for finding music or other information with subjective content that would appeal to a user or a group of users or listeners to illustrate the preferred embodiment of the invention.
  • system 10 includes four servers.
  • An external website server 12 provides an external website for receiving full length pieces of music or songs. These pieces of music are then submitted to the central data repository server 14 which, in turn, sends these pieces of music to editorial managers 16, preferably simultaneously for their analysis. By allowing the experts to rate the songs in parallel, much time is saved.
  • Server 14 also sends the full length songs or pieces of music to audio clips server 18 where these pieces of music are stored. Alternatively, server 14 may simply send the pieces of music to server 18 which, in turn, sends the pieces to managers 16 for analysis. While two servers 14, 18 are shown, their functions may be combined and performed by a single server.
  • each editorial manager is an expert with respect to a particular category as shown in Appendix A, or with respect to a particular group of musical elements, so that there will be no conflict between the analysis by different editorial managers of the same piece of music.
  • one expert or editorial manager may analyze the piece of music and provide ratings with respect to musical elements 1 through j, so that such expert will provide the ratings Vi1 Vi2 ... Vij.
  • a different expert may analyze the same piece of music and provide the ratings Vi,j+1 Vi,j+2 ... Vi,j+k, a third expert may analyze the same piece of music and provide the ratings Vi,j+k+1 Vi,j+k+2 ... Vi,n, and so on.
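  • As a rough illustration of how several experts' partial ratings might be folded into a single music profile vector, the sketch below assumes each expert rates a contiguous range of the n musical elements; the names ExpertRating and combine_expert_ratings, and the averaging rule for overlapping ratings, are illustrative assumptions rather than details from the patent.

```python
# Minimal sketch: merging per-expert ratings into one music profile vector.
# Assumes each expert rates a contiguous subrange of the n musical elements;
# all names here are illustrative only.
from dataclasses import dataclass
from typing import List


@dataclass
class ExpertRating:
    """Ratings by one expert for a contiguous range of element indices."""
    start_element: int            # index of the first element rated (0-based)
    values: List[float]           # ratings V_i,start .. V_i,start+len-1


def combine_expert_ratings(ratings: List[ExpertRating], n_elements: int) -> List[float]:
    """Concatenate the partial ratings into a full n-element profile vector."""
    profile = [0.0] * n_elements
    seen = [False] * n_elements
    for r in ratings:
        for offset, value in enumerate(r.values):
            idx = r.start_element + offset
            if seen[idx]:
                # Two experts rated the same element: average their ratings.
                profile[idx] = (profile[idx] + value) / 2.0
            else:
                profile[idx] = value
                seen[idx] = True
    return profile


# Example: three experts cover elements 0-2, 3-5 and 6-7 of an 8-element scheme.
song_profile = combine_expert_ratings(
    [ExpertRating(0, [4, 2, 5]), ExpertRating(3, [1, 0, 3]), ExpertRating(6, [2, 4])],
    n_elements=8,
)
print(song_profile)   # [4, 2, 5, 1, 0, 3, 2, 4]
```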
  • Audio clips server serves the function of storing incoming full length pieces of music and allowing simultaneous access of the same piece of music by different experts and by server 14.
  • the experts are also asked to select sections of the pieces of music in the matrix as definition clips that highlight particular musical elements.
  • 30 second song clips are selected from the full length song.
  • These definition clips are stored in server 14 and provided to listeners through the public website server 20.
  • Server 20 may also request ratings of each of the definition clips by the listener and transmits the ratings back to server 14. From these ratings, server 14 then constructs listener profiles which are stored in a user profile database.
  • the listener need not be a single individual. It may be a group of people, an audience of a radio station or any aggregate demographical unit. In addition, more than one profile may be recorded for each listener, where the listener has one musical preference profile at work and a different musical preference profile at home, or different profiles when the listener is in different moods.
  • Music preference profiles may be compiled for a large number of individuals or groups of individuals, and the resulting listener preference profile vectors may be compiled into a matrix much in the same way the music matrix is compiled as described above.
  • Such listener preference profile vectors may be compiled to form a matrix such as the one shown below for k listeners or groups of listeners:
  • the above matrix represents k listener preference profiles representing the musical tastes of k individuals or groups of individuals with respect to the same n musical elements used for characterizing the pieces of music by the experts.
  • the profile or music profile vector of a particular piece of music may be matched against the listener preference profiles in the above listener profile database to predict the individuals or groups of individuals to whom the particular piece of music or song would be likely to appeal. This is particularly useful when a new song or piece of music is to be introduced. For a new piece of music or song, there will be no purchasing data to rely on, so that the above-described conventional systems relying on purchasing data will not be useful. In the invention, however, prediction is possible by first sending the new piece of music to experts or other individuals to characterize the piece of music according to the n music elements so that the music profile vector of this new piece of music can be constructed.
  • this vector can then be matched against the musical preference profiles of individuals or groups of individuals in the above listener preference profile database to find a match or a number of matches in order to predict the individuals or groups of individuals to whom this new piece of music is likely to appeal.
  • demographic information concerning the individuals or groups of individuals having musical preference profile vectors that match the music profile vector of the new piece of music may then be provided to, for example, promoters of a new piece of music so that an effective marketing strategy may be formulated for the new piece of music. This can greatly assist the formulation of an effective marketing plan.
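  • A minimal sketch of this reverse process, under assumed data shapes: a new song's profile vector is compared against a table of listener preference vectors, and the demographics of the closest matches are tallied into a simple report. The distance measure, the "age_group" field and all identifiers are illustrative, not taken from the patent.

```python
# Sketch of the reverse process: match a new song's profile vector against a
# matrix of listener preference profiles and summarize the demographics of the
# closest matches. Distance metric and field names are assumptions.
from typing import Dict, List, Tuple


def closest_listeners(song_profile: List[float],
                      listener_matrix: Dict[str, List[float]],
                      top_k: int = 3) -> List[Tuple[str, float]]:
    """Return the top_k listener ids whose preference vectors differ least."""
    scored = []
    for listener_id, prefs in listener_matrix.items():
        diff = sum(abs(p - s) for p, s in zip(prefs, song_profile))
        scored.append((listener_id, diff))
    return sorted(scored, key=lambda pair: pair[1])[:top_k]


def demographic_report(matches: List[Tuple[str, float]],
                       demographics: Dict[str, Dict[str, str]]) -> Dict[str, int]:
    """Count matched listeners by an illustrative 'age_group' attribute."""
    counts: Dict[str, int] = {}
    for listener_id, _ in matches:
        group = demographics.get(listener_id, {}).get("age_group", "unknown")
        counts[group] = counts.get(group, 0) + 1
    return counts


listeners = {"u1": [5, 1, 0, 4], "u2": [4, 2, 1, 4], "u3": [0, 5, 5, 1]}
demo = {"u1": {"age_group": "18-24"}, "u2": {"age_group": "18-24"}, "u3": {"age_group": "35-44"}}
new_song = [5, 1, 1, 4]
matches = closest_listeners(new_song, listeners, top_k=2)
print(matches)                             # [('u1', 1), ('u2', 2)]
print(demographic_report(matches, demo))   # {'18-24': 2}
```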
  • the above-described process can also be applied to assist radio stations in creating play lists.
  • the radio station itself receives a user profile, either by profiling all of the songs that are popular on its existing play list, by a program manager taking the quizzes in a manner he/she feels corresponds to a typical listener, or by having the radio station's listeners actually take the quiz by playing sample songs and receiving feedback from its users.
  • the radio station may establish multiple profiles for different shows or programming times. Once the listener profiles are established, the system of the invention can suggest new songs for the radio station play list that are compatible with the radio station's identity.
  • Fig. 2 is a dataflow diagram illustrating the operation of the system of Fig. 1.
  • Fig. 3 is a graphical illustration of a webpage to illustrate the operation of the system of Fig. 1.
  • the oblong-shaped boxes are data repositories, large circles indicate results that are displayed or sent, and the small circles are interfaces.
  • the dotted lines 80 are database notifications, the broken lines 82 indicate file transfers, and the solid lines represent COM calls.
  • the small rectangles 86 next to larger boxes indicate that the large boxes initiate action.
  • the right-hand portion of the dataflow diagram illustrates the backend operation of server 14 and its interaction with the two website servers 12 and 20.
  • full length pieces of music such as songs are submitted to server 14 through external website 12 or by other means such as by mailing a compact disk or tape.
  • Each incoming song is coded into server 14 using a process called the "keyhole" music profiling application. This involves several substeps:
  • the song is shipped to location of server 14, either by transporting the physical medium or by electronic submission;
  • the song is uploaded from its distribution medium into a server 14 in digital format;
  • pertinent information is copied, coded, calculated, or automatically read from the song, record jacket, and so on (e.g. song length, artist, title, promotional text, cover and other images).
  • the full-length piece of music is usually submitted together with text information such as title, artist and other data.
  • stripper application 102 strips such information and sends only the full-length piece of music to auto-encode 104, in which the digital song file (and/or the 30-second sample) is encoded into various common formats.
  • server 14 uses software to encode songs into Microsoft Media Player and Beatnik formats (RealAudio or MP3 formats could also be used).
  • the song profile vectors are combined into a master matrix, which is stored in the result clips database 108 in server 14.
  • Server 14 then combines the encoded digital music file and information submitted through web submit 106 and stores the combined file in result clips database 108.
  • the full-length piece of music may also be sent from experts with rating information for constructing the music profile vector of the music sent.
  • the rating information is detached by rating application 112 and only the full-length music is sent to auto encode 104 for encoding as described above.
  • the rating and other information is sent by rating application 112 to result clips 108 and combined with the encoded digital music file into a combined file and stored in result clips database 108.
  • the rating application 112 recognizes the different vector elements Vij as the weightings given by a particular expert or individual with respect to the jth musical element in regard to the ith song or piece of music submitted. These are then combined with similar weightings given by other experts in regard to other musical elements into a music profile vector for the particular ith piece of music submitted, where such music profile vector is sent to result clips database 108, combined with the digital music file from auto encode 104, and stored as a combined file in database 108.
  • server 14 or server 18 sends full-length pieces of music to experts or other individuals 16 for characterizing the piece of music with respect to the n musical elements. After the experts or individuals 16 have done so, they then submit their ratings to server 14 and rating application 112 then compiles the music profile vectors of the different pieces of music so rated by the experts or individuals. In this manner, the above-described matrix of music profile vectors of different pieces of music is compiled as a matrix and stored in result clips database 108.
  • the experts or individuals 16 are also asked to select a short segment of each piece of music in database 108 to be the definition clip corresponding to a particular musical element of the n musical elements, or if that is not possible, a definition clip corresponding to two or more of the n musical elements.
  • the selected definition clips together with their corresponding musical elements are sent by experts or individuals 16 and received at server 14, where rating application 112 batches the clips and their corresponding musical elements and sends them to definition clips database 132.
  • rating application 112 may send the definition clips to auto encode 104 which auto encodes the definition clips into the appropriate format and sends the digital definition clip files to database 132.
  • the above-described process is controlled by work flow engine 138.
  • the definition clips database 132 and the result clips database 108 receive from time to time information concerning new pieces of music or definition clips submitted by experts or individuals 16, as well as music files from auto encode 104, stripper application 102 and rating application 112.
  • Work flow engine 138 would query databases 108 and 132 periodically.
  • the appropriate database will send a notification to engine 138.
  • Engine 138 instructs stripper application 102 or rating application 112 to process the piece of music received in the manner described above so that the digitized definition clip or full-length music may be combined with information through web submit 106 and stored as a combined file in result clips database 108 or definition clip database 132.
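  • The dispatch role of work flow engine 138 might be sketched as below, assuming a simple polling loop and an in-memory queue in place of real database notifications; the handler names stand in for stripper application 102 and rating application 112 and are not from the patent.

```python
# Rough sketch of the work flow engine's dispatch loop: new submissions are
# routed to the stripper application (plain music submissions) or the rating
# application (submissions carrying expert ratings). The queue and polling
# approach are assumptions standing in for database notifications.
import queue
import time

submissions = queue.Queue()   # stands in for notifications sent to engine 138


def process_plain_submission(item):      # stands in for stripper application 102
    print("stripping metadata and encoding:", item["title"])


def process_rated_submission(item):      # stands in for rating application 112
    print("building profile vector and encoding:", item["title"])


def workflow_engine(poll_seconds=0.1, max_cycles=3):
    """Poll for new items and dispatch each to the appropriate application."""
    for _ in range(max_cycles):
        try:
            item = submissions.get_nowait()
        except queue.Empty:
            time.sleep(poll_seconds)
            continue
        if "ratings" in item:
            process_rated_submission(item)
        else:
            process_plain_submission(item)


submissions.put({"title": "New Song", "ratings": [4, 2, 5]})
submissions.put({"title": "Another Song"})
workflow_engine()
```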
  • definition clips database 132 now stores a number of definition clips, each corresponding to one or more of the n musical elements.
  • Result clips database 108 now contains a matrix of the music profile vectors of a number of pieces of music or songs.
  • Each listener preference profile vector is a list of ratings corresponding to how important a given musical element is to the listener (or alternatively, the amount of the given characteristic that is ideal for that particular listener).
  • server 20 displays a webpage such as page 200 shown in Fig. 3 to present the listener with a series of musical quizzes or tests. As shown in Fig. 3, seven tests or quizzes are presented to the user as seven different entry points. By clicking on any one of the seven boxes 202, a number of musical definition clips are shown. When the listener clicks on any one of the seven circles 204, the corresponding definition clip will be played and the listener is asked to select any one of five choices 206, shown in Fig. 3.
  • the five choices for scoring the definition clip are as follows: the user may select "not my style"
  • the various quizzes are linked to other quizzes for related genres of music. For example, the basic "rock" quiz is linked to "girls who rock" and to another quiz entitled "tattoos and pool cues." After taking each quiz, the listener is given a choice of several other linked quizzes. If a listener takes more than one quiz to establish a given profile, the accuracy of the profile increases exponentially. When the listener is finished, the responses from the various quizzes are compiled using a "preprocessor" to convert the list of clip ratings into a user profile.
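  • A minimal sketch of such a preprocessor, assuming each definition clip is tagged with the musical elements it highlights and a weight for each: element ratings are taken as weighted averages of the relevant clip ratings, following the rule-based combination described earlier. The clip names, weights and rating scale are illustrative only.

```python
# Minimal sketch of the "preprocessor" step: converting a listener's ratings of
# definition clips into a profile vector over the n musical elements. A clip
# may highlight more than one element with different weights, in which case the
# element rating is a weighted average of the clip ratings. Data shapes and
# names are illustrative assumptions.
from typing import Dict, List, Tuple

# clip id -> list of (element index, weight) pairs the clip highlights
CLIP_ELEMENTS: Dict[str, List[Tuple[int, float]]] = {
    "clip_piano": [(0, 1.0)],             # highlights element 0 only
    "clip_blues": [(1, 0.7), (0, 0.3)],   # mostly element 1, partly element 0
    "clip_tempo": [(2, 1.0)],
}


def build_listener_profile(clip_ratings: Dict[str, float], n_elements: int) -> List[float]:
    """Fold quiz clip ratings into an n-element listener preference vector."""
    totals = [0.0] * n_elements
    weights = [0.0] * n_elements
    for clip_id, rating in clip_ratings.items():
        for element, weight in CLIP_ELEMENTS.get(clip_id, []):
            totals[element] += weight * rating
            weights[element] += weight
    # Weighted average per element; elements never highlighted stay at 0.
    return [totals[i] / weights[i] if weights[i] else 0.0 for i in range(n_elements)]


profile = build_listener_profile({"clip_piano": 4, "clip_blues": 2, "clip_tempo": 5}, n_elements=3)
print(profile)   # approximately [3.54, 2.0, 5.0]; element 0 mixes the piano and blues ratings
```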
  • the seven definition clips may be randomly selected at the time the tests or quizzes are run; however, each of the songs or pieces of music for a given test question tests for more-or-less the same thing. This means that a listener can take the same quiz multiple times and receive a different experience each time, thus avoiding monotony.
  • Each of the entry points or tests corresponds to a category of musical elements (e.g. a genre).
  • the seven definition clips shown in Fig. 3 may correspond to the seven musical elements in the category "instrument.”
  • the dynamic test generator 152 selects seven definition clips from database 132, each of the definition clips highlighting one or more defining characteristics of the particular instrument (e.g. piano or pedal steel guitar or saxophone).
  • a definition clip highlighting piano is then played and the user is asked to rate the clip according to the five choices.
  • graphical user interface templates 154 are combined with definition clips from generator 152 and integrated by integrator 156 into the webpage and presented as a test page 200.
  • the ratings of the listener when one of the five choices is made by the listener are presented by server 20 to fingerprint preprocessor 162 in server 14.
  • Preprocessor 162 compiles the listener preference profile vector based on the ratings by the listener and provides the vector to search engine 164 in server 14.
  • Search engine 164 matches the listener profile vector against the matrix of the different pieces of music in result clips database 108 and finds one or more matches to the listener profile and presents such pieces of music so found to web integration 166.
  • Integration engine 166 then integrates such information with graphical user interface templates 168 and presents the results found to result webpage 170 which is then transmitted to server 20 to inform the listener. If so desired, a listener can retake the test by clicking box 208 and server 20 will inform server 14 so that generator 152 will generate a new set of definition clips for the same test for the same listener so that the listener will hear different definition clips to avoid monotony when the test is retaken.
  • the reverse process of matching listener profiles to a particular piece of music will now be described.
  • the listener preference profile vector compiled by preprocessor 162 and presented as the result webpage 170 may be stored in a user profile/ personalization database 172.
  • the above-described listener preference profile vector matrix may be compiled and stored in database 172.
  • when a new piece of music such as a new song is to be promoted, such song is rated by experts or other individuals 16 in the same manner as that described above to construct a music profile vector which is stored in database 108.
  • This vector is retrieved by search engine 174 which compares this vector to the listener preference profile vectors in the matrix in database 172 and finds the listener preference profile vectors that match such music profile vector by means of a difference algorithm in the same manner as that described above where the match is found between a particular listener preference profile vector and the matrix in database 108.
  • the less close matches may also be useful, and all the matches are presented to data warehouse reporting engine 176. If demographic data corresponding to the listener preference profile vectors are available, engine 176 then combines such data with the matching listener profiles so found to compile a demographic match to new song report 180, which is presented to server 20.
  • the invention can assist radio stations in creating play lists. This can be done by profiling all of the songs that are popular on its existing play lists stored in the station's spin list database 182, and the profiling can be performed in the same manner as that described above to construct musical profile vectors. One or more of such musical profile vectors can then be matched in the same reverse process as that described above for the new song or music to find the matching listener profiles. Such listener profiles can be used in turn to find more or different matching songs or music by matching such profiles with the matrix of database 108 in the manner discussed above. The result is then compiled as a station's songs to play listing report 186 and presented to server 20. This process can be repeated to enlarge the play list as well as the roster of listener profiles. Search engine 174 may also compile a targeted direct mailing list 183 based on the matching listener profiles so found.
  • the above-described system of the invention can be used also for finding multimedia works such as advertisements that may appeal to a user or a group of users in the same manner as that described above for music.
  • the multimedia works will contain subjective content just as is the case with music so that multimedia elements, instead of musical elements, may be identified for characterizing different multimedia works such as advertisements.
  • the above-described system can then be used for creating a matrix comprising multimedia profile vectors stored in the matrix and the database such as database 108.
  • user profile vectors can be derived and stored in a database such as database 172. Since multimedia works also include images, the tests or quizzes taken by the user would also include video images.
  • the multimedia elements may be the same as the musical elements as shown in Appendix A, so that in the quizzes taken by users through test page 200, the visual component will have an influence on the choices made by the user.
  • entirely visual subjective elements may be formulated in addition to the sound-based elements of Appendix A.
  • the invention of the application can be applied to information with subjective content other than music or multimedia works, such as literature, dating services or any other type of information with subjective content.
  • Elements useful for characterizing or related to subjective characteristics of information may then be used in the same manner as that described above for characterizing items of information with subjective content. These elements may then be used for analyzing one or more items of information with subjective content.
  • User profiles may also be constructed pertaining to preferences of users with respect to such set of elements. Matching can then be performed between one or more items of information with subjective content and a profile of a user or users in a database in the same manner as that described above. In this manner, items of information may be found with subjective content that are likely to appeal to a user or a group of users. The reverse process is also true. Given an item of information with subjective content, it is possible to discover the profile of a user or a group of users to whom such information would be likely to appeal.
  • the user preference profile database and the instructions for performing the various operations illustrated in Figs. 1, 2 and 3 may be stored in the storage medium 300 shown in Fig. 4. Then, when a computer loads the instructions 304 and uses the instructions to access the database 302, the above-described processes can be performed.
  • Fig. 5 is a data flow diagram illustrating a process for generating a play list to illustrate an embodiment of the invention.
  • Fig. 6 is a work flow diagram illustrating a process for generating a play list to illustrate the embodiment of Fig. 5. It may be useful to generate a play list based on the profile of a listener or a group of listeners, or the profile of a piece of music. As shown in Figs. 5 and 6, the profile 312 is provided and matched against the profiles of the pieces of music in the results clip 108 of Fig. 2 using the subjective recommend engine 314 illustrated in more detail in reference to Figs. 1 and 2.
  • the pieces 316 of music so found are then used as the seed profiles and again matched against the profiles in results clip database 108 to find additional pieces of music to form the play list 318, preferably in the form of a matrix, where the list may include all or fewer than all of the music pieces that served as the seed music for the play list.
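  • A rough sketch of this seed-based expansion, assuming a simple sum-of-absolute-differences distance between profile vectors; the function names and toy data are illustrative and not taken from the patent.

```python
# Sketch of the play-list expansion in Figs. 5 and 6: pieces matching the
# starting profile become seed profiles, and each seed is matched against the
# music profile matrix again to pull in additional pieces. The distance metric
# and names are assumptions for illustration.
from typing import Dict, List


def nearest_pieces(profile: List[float],
                   music_matrix: Dict[str, List[float]],
                   top_k: int) -> List[str]:
    """Return the top_k song ids whose profile vectors are closest to `profile`."""
    distance = lambda v: sum(abs(a - b) for a, b in zip(v, profile))
    ranked = sorted(music_matrix, key=lambda song: distance(music_matrix[song]))
    return ranked[:top_k]


def build_play_list(start_profile: List[float],
                    music_matrix: Dict[str, List[float]],
                    seeds_wanted: int = 3,
                    per_seed: int = 2) -> List[str]:
    """Match once to get seeds, then match each seed profile to extend the list."""
    seeds = nearest_pieces(start_profile, music_matrix, seeds_wanted)
    play_list = list(seeds)
    for seed in seeds:
        for song in nearest_pieces(music_matrix[seed], music_matrix, per_seed):
            if song not in play_list:
                play_list.append(song)
    return play_list


matrix = {"s1": [5, 1, 0], "s2": [4, 1, 1], "s3": [0, 5, 5], "s4": [5, 0, 0], "s5": [1, 4, 5]}
print(build_play_list([5, 1, 0], matrix, seeds_wanted=2, per_seed=3))
# ['s1', 's4', 's2']  (seeds plus their own nearest neighbours, minus duplicates)
```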
  • a listener or a radio station may be able to retrieve the pieces of music on a play list compiled as described above in a streaming audio and play the music in real time as the data is retrieved, without having to first store the data. In this manner, the listener or the station does not have to store the music on the play list in any storage before the music on the list is played. This is illustrated in Fig. 7 described below.
  • locators may be provided for the music so that the pieces of music may be retrieved conveniently through the locators. These locators may be obtained when the pieces of music are first entered as described above.
  • the locators may be uniform resource locators (URLs), although other locators may also be used for other types of networks and are within the scope of the invention.
  • the matrix of songs may be converted into a play list format 320, where the format includes, in addition to the information such as title, author, artist, and also the locators.
  • the play list 320 may be in the asx format, so that it can be read by a multimedia player 322, such as the Microsoft Media Player.
  • the multimedia player 322 then retrieves the pieces of music one after another, according to the order on the play list, using the locators from a database 330 of music as a streaming audio stream, and plays each piece of music in real time as the audio data is retrieved. In this manner, the listener or the radio station does not need to provide storage to store any of the music on the play list.
  • the play list may be stored in a device, such as a portable music device (e.g. the iPAQ from Compaq), equipped with a multimedia player, which can retrieve the pieces of music one by one as streaming audio without having to store any of the pieces.
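  • Since the asx format is mentioned above as one possible play list format, the sketch below shows one way a play list of (title, locator) pairs might be written out in a minimal ASX layout readable by a media player; the URLs are placeholders, and the exact fields used by the patented system are not specified here.

```python
# Sketch: emitting the play list in a minimal .asx layout so that a player such
# as the Microsoft Media Player can stream each piece through its locator (URL).
# The entry fields shown (a title plus a ref href) follow a common minimal ASX
# layout; the URLs are placeholders, not locators from the patent.
from xml.sax.saxutils import escape


def write_asx(entries, path):
    """entries: iterable of (title, url) pairs; writes a simple ASX play list."""
    lines = ['<asx version="3.0">']
    for title, url in entries:
        lines.append("  <entry>")
        lines.append(f"    <title>{escape(title)}</title>")
        lines.append(f'    <ref href="{escape(url)}"/>')
        lines.append("  </entry>")
    lines.append("</asx>")
    with open(path, "w", encoding="utf-8") as fh:
        fh.write("\n".join(lines) + "\n")


write_asx(
    [("Example Song 1", "http://media.example.com/song1.asf"),
     ("Example Song 2", "http://media.example.com/song2.asf")],
    "playlist.asx",
)
```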
  • a listener or a radio station may have a collection of pieces of music, such as a collection of compact disks ("CDs").
  • First, a profile of a listener or a group of listeners or of a piece of music is obtained, either by means of the rating process described in reference to Figs. 1-3, or from the user profile vector of the listener or the group of listeners in the database 172 of Fig. 2, where the vector is derived previously by means of a similar process.
  • the collection may be managed by an intelligent device such as a jukebox 404. Responding to a user's input from an input device 402, the jukebox 404 logs in (line 406) to system 408, such as the system described above in reference to Figs. 1-3, and requests the user's profile from database 172, or requests a session in which the user's profile may be constructed as described above.
  • the profile is retrieved or constructed (line 410).
  • the jukebox then sends information concerning the collection of music to the system. Where the collection of music is in the form of compact disks, the jukebox 404 provides locations of the CDs (e.g. slot numbers) and imprints of the CDs, such as the number of tracks, the lengths of the tracks and the order of the tracks, to system 408.
  • System 408 then requests, based on the CD imprints provided by the jukebox, further information concerning the collection, such as title, artist, author of each track of the CDs in the collection from a database 410, such as the CDDB at www.cddb.com.
  • System 408 matches the user profile with the profiles in results clip 108 to find the pieces of music that match the profile.
  • System 408 searches the matching pieces of music from the collection and provides to jukebox 404 the slot numbers of the CDs and track numbers on the CDs of such pieces in the collection.
  • the jukebox may then play the pieces of music. Where a play list has been constructed in the manner described above, system 408 provides the list to jukebox 404. The jukebox then plays the music on the list.
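  • The lookup of matching pieces within the listener's own collection (Fig. 8) might be sketched as follows, assuming the CD imprints have already been resolved to artist and title metadata: recommended pieces are mapped to the slot and track numbers the jukebox reported. All keys and identifiers are illustrative.

```python
# Sketch of the Fig. 8 step where the system maps matched pieces back to the
# jukebox's collection: given CD metadata looked up from the imprints
# (slot number, track number, artist, title), return play instructions for the
# tracks whose metadata matches the recommended pieces. All keys are illustrative.
from typing import Dict, List, Tuple

# (artist, title) -> (CD slot number, track number), as reported by the jukebox
collection_index: Dict[Tuple[str, str], Tuple[int, int]] = {
    ("Artist A", "Song One"): (1, 3),
    ("Artist B", "Song Two"): (2, 7),
    ("Artist C", "Song Three"): (4, 1),
}


def locate_in_collection(recommended: List[Tuple[str, str]],
                         index: Dict[Tuple[str, str], Tuple[int, int]]
                         ) -> List[Tuple[int, int]]:
    """Return (slot, track) pairs for recommended pieces present in the collection."""
    found = []
    for artist, title in recommended:
        location = index.get((artist, title))
        if location is not None:
            found.append(location)
    return found


# Only pieces actually in the collection come back with a slot/track to play.
print(locate_in_collection([("Artist A", "Song One"), ("Artist Z", "Missing")],
                           collection_index))   # [(1, 3)]
```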
  • system 408 including databases such as those illustrated in Fig. 2 and instructions for carrying out the above described processes may be stored in a storage medium. Then when a computer loads the instructions and uses the instructions to access the databases, the above-described processes can then be performed.

Abstract

Profiles of songs or multimedia works may be constructed with respect to a set of musical or multimedia elements for characterizing subjectively a number of pieces of music or multimedia works. A user preference profile may be constructed by presenting definition clips (132) of the music or multimedia works to a user and having the user rate (112) the clip, where the definition clip highlights one or more defining characteristics of the elements. A match is then found between the user profile and the database to discover the type of song or multimedia work that would appeal to a particular user, or to discover a profile of the user or users to whom a particular song or multimedia work would appeal.

Description

SYSTEM FOR CHARACTERIZING PIECES OF MUSIC
CROSS REFERENCE TO RELATED APPLICATION
This application is a continuation of United States application Serial No. 09/709,928 filed 09 November 2000, which in turn is a continuation-in- part of United States application Serial No. 09/602,953 filed 23 June 2000.
BACKGROUND OF THE INVENTION
This invention relates in general to systems for finding information with subjective content that is likely to appeal to a user or a group of users, and, in particular, to a system for finding information with subjective content such as music or other multimedia information that is likely to appeal to users based on subjective criteria.
Many music lovers listen to music not because of a particular artist, song or band, but because of the feeling one experiences when listening to the music. To many people, listening to music can be a very emotional experience.
To this date, there is no entirely satisfactory system for retrieving or finding music based on the subjective feeling the listener may experience. Most of the existing systems in use for finding and retrieving music are text based, so that when the user asks the system to "show me more artists like this," existing conventional systems will merely list a group of musicians that has been hand-coded by website owners or others as being similar in style.
The few systems that perform intelligent searches generally use "collaborative filtering." In these systems, purchasing data is analyzed to discover statistical correlation between purchases, such as what songs tend to be purchased together, or by purchasers of a particular educational or ethnic background. This correlation is then used to suggest other songs the purchaser may like based on prior purchases already made.
The conventional systems described above have many disadvantages. The suggestions made by these systems are not based on accurate information on the particular listener to whom these suggestions are made; instead, the suggestions are based on the statistical purchasing data or the website owner's perception as to what songs are roughly similar in style without any further analysis as to what makes two songs similar. Consequently, the music suggestions made by means of the conventional systems are less than satisfactory.
In U.S. Patent No. 5,616,876, an interactive network providing music to subscribers is described. The purpose of the network is to allow a listener to interactively find more songs that are like a current song being played to the listener; no attempt is made to actually construct a listener profile ahead of time or to accurately match such profile that has been constructed to a song database. In the preferred embodiment of this patent, style tables are prepared ahead of time for artists with different weightings given to different subjective style categories. A style slider is then provided to allow a listener or user to locate more music that is like the current song being played. If a particular weighting in a style table for a particular artist is at least as high as the position of the style slider, a suggestion will then be made that the listener listen to the artist's works. While the system suggested in U.S. Patent No. 5,616,876 does attempt to identify other songs that are similar to the seed song in subjective content, this is done on the basis of similarity according to a single style element, so that the suggestion so made may still be inaccurate.
All of the above-described prior art systems require the presence of a seed song as the beginning point. These systems then attempt to identify more songs that are like the seed song. In circumstances where there is no seed song to begin with, it may be difficult or impossible to identify songs or other music that may appeal to a particular listener or a group of listeners. This is especially a problem for promoters of new songs or for advertising agencies which try to predict the type of viewers a new kind of advertisement would appeal to. This is also true for radio stations attempting to develop play lists.
None of the above-described prior art systems is entirely satisfactory. It is, therefore, desirable to provide an improved system in which many of the above-described difficulties are overcome.
SUMMARY OF THE INVENTION
The invention is based on the recognition that it is possible to predict more accurately pieces of music that would appeal to a listener (or group of listeners) if the system utilizes more than one style element to construct a listener profile and uses such listener profile to find pieces of music that would match the profile. Applicant found that by using more style elements to construct the user profile, and to characterize each piece of music in a database used in the matching, the accuracy of prediction is increased exponentially. The system of the invention can be extended to find other types of information having subjective content that would appeal to a user or a group of users. Preferably, the style or other elements employed include ones that correspond to subjective musical traits of the pieces of music. This system can be further extended to find pieces of music that match the profile of a particular piece of music, or more generally, a musical profile, which can be one of a piece of music or of a listener or a group of listeners. According to another aspect of the invention, a database of user profiles is first obtained that reflects the taste of a cross-section of users. This user profile is compiled pertaining to preference of users with respect to a set of elements corresponding to subjective characteristics of the information, such as music or other multimedia information. In order to discover what types of users a particular item of information would appeal to, the item of information is analyzed using the same elements. The system then finds a match between the item of information (such as music and multimedia works) and a profile of the user or users in a database.
The above-described database of user profiles may be stored together with instructions for performing the above-described matching in a storage medium. The above are features in the parent application.
According to one aspect of the invention in the present application, a play list may be created by using the profiles of the pieces of music found to match a musical profile as seed profiles. These seed profiles are then matched against the profiles of music in a database to find additional pieces of music for the play list. The play list may include all of or fewer than all of the pieces of music in the set of seed profiles. Preferably locators are provided for locating the pieces of music on the play list, so that such pieces can be retrieved as a stream of audio data to a listener. When downloaded to a musical device, such as one that is portable, the play list can serve as a play list for the device, which in turn can retrieve the pieces on the play list as a stream of audio data to a listener. When it is desirable to identify music from a storage that is likely to appeal to a listener or a group of listeners, a profile of the listener or group of listeners is provided. Pieces of music from the storage are then obtained that match the profile. Preferably this is done by matching the listener profile against the profiles of music in a database, and finding pieces of music in the database that match such profile, and finding locations of such pieces in the storage, if such pieces can be located in the storage.
The above-described database may be stored together with instructions for performing the above-described matching in a storage medium.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a block diagram of a system for finding music or other information with subjective content that is likely to appeal to a user or a group of users to illustrate the preferred embodiment of the invention.
Fig. 2 is a dataflow diagram illustrating the operation of the system of Fig. 1.
Fig. 3 is a graphical illustration of a web page to illustrate the operation of the system of Fig. 1.
Fig. 4 is a schematic diagram illustrating a storage medium storing a database of user profiles together with instructions for performing matching of information to the database.
Fig. 5 is a data flow diagram illustrating a process for generating a play list to illustrate an embodiment of the invention.
Fig. 6 is a work flow diagram illustrating a process for generating a play list to illustrate the embodiment of Fig. 5.
Fig. 7 is a schematic diagram illustrating a process for retrieving music according to a play list as a stream of audio data to illustrate another embodiment of the invention.
Fig. 8 is a schematic diagram illustrating a process for identifying pieces of music in a storage to form a play list.
For simplicity in description, identical components are labeled by the same numerals in this application.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The invention envisions that an expert in each particular area of music would develop a list of musical elements corresponding to the traits that they believe best distinguish one particular piece of music or song from another in that area. For example, in the "electronica" genre, traits include the existence of drum rolls, transitions and speed. In the jazz genre, traits include the degree of similarity to the "40's cotton club sound." These traits are referred to herein as musical elements that embody the subjective distinctive traits of music. While it is possible to add or remove musical elements from time-to-time during updates, the list of elements can be thought of as static for most purposes.
A list of musical elements proposed by Applicant is attached hereto as Appendix A and made a part of this application. As shown in Appendix A, the different musical elements may be classified in four different categories: genre (or style), instrument, miscellaneous (or feel) and mood. Not all of the musical elements are necessarily accorded equal weight for the purpose of identifying pieces of music that are most likely to appeal to a particular listener or a group of listeners. The musical elements classified as genre are particularly important for the purpose of identifying pieces of music that are most likely to appeal to a particular listener or a group of listeners.
The musical elements described above are then used as the basis for characterizing pieces of music such as songs and the musical tastes of listeners or groups of listeners by analyzing a piece of music such as a song. Using these elements, a profile of the piece of music is produced. To enable a match of listener preferences with the music profile, a profile of a listener is constructed using the same set of musical elements. As discussed in detail below, in order to obtain a profile of a listener, a set of definition music clips is first constructed by choosing one section of a piece of music, where that section highlights one or more defining characteristics of the musical elements in the set. Such definition clips are then played to a listener and the listener is asked to rate each of the definitional clips in a test or quiz. The ratings by the listener are then recorded and analyzed to construct a listener profile. A database of pieces of music may be constructed as follows: pieces of music input to the system are distributed to individuals such as experts for classifying the pieces of music according to the set of musical elements above. The ratings of the experts for the same song or piece of music are combined into a music profile vector. After a number of pieces of music have been so analyzed, a matrix of music profile vectors may be compiled. Thus, if n is the number of musical elements and m is the number of pieces of music such as songs in the database, an n x m matrix may then be constructed consisting of m rows of music profile vectors and n columns of musical elements. The matrix may appear as follows:
SONG PROFILE

PIECE      RATINGS WITH RESPECT TO MUSICAL ELEMENTS
Song 1     V11  V12  V13  ...  V1n
Song 2     V21  V22  V23  ...  V2n
...
Song i     Vi1  Vi2  Vi3  ...  Vin
...
Song m     Vm1  Vm2  Vm3  ...  Vmn
Thus, the first piece of music (e.g. Song 1) is characterized by the music profile vector V11 V12 V13 ... V1n. The ith piece of music (Song i) is characterized by the music profile vector Vi1 Vi2 Vi3 ... Vin, and the mth piece of music (Song m) is characterized by the vector Vm1 Vm2 Vm3 ... Vmn. The same n musical elements are then used for the selection of definition music clips for construction of the listener profile. For example, in the simplest case, n definition clips are employed, each definition clip corresponding to one musical element. These n definition clips are then played sequentially or as selected to a listener, and the listener is requested to rate each of the definition clips. The n ratings P1 P2 P3 ... Pn given by the listener to the n definition clips constitute the listener profile, or the listener profile vector. This listener profile vector is then matched against the above m x n matrix to discover the piece of music that is the closest match to the listener profile. Obviously, the piece of music that is the next closest match to the listener profile can also be found, and so on, until a predetermined number of pieces of music have been discovered from the matrix that are likely to appeal to the listener with such listener profile. For some musical elements, it may not be possible to find a definition clip that highlights only such element without also highlighting, at least to some extent, aspects of other musical elements. Thus, a piece of instrumental music chosen for its genre does, at least to some extent, also highlight the instrument that is used. In such event, there may be an apparent conflict between the ratings by the same listener of two definition clips, and a rule-based system may be employed to resolve any resulting conflicts. For example, if two definition clips both highlight musical elements a and b, but to different extents, the rating of a particular listener with respect to element a may be a weighted average of the two ratings given by the same listener to the two definition clips. The same may be done for the rating with respect to element b. Obviously, other types of rules may be employed to resolve any apparent inconsistencies.
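As an illustration only, the following Python sketch shows one way the music profile matrix and a listener profile vector could be represented, including a weighted-average rule for reconciling definition-clip ratings that touch the same musical element. The element names, example ratings, clip weights and the neutral default value are hypothetical; the application does not prescribe any particular implementation.

import numpy as np

# Hypothetical subset of the n musical elements (see Appendix A for the proposed list).
ELEMENTS = ["Acid Grooves", "Piano", "Jazzy", "Melancholy"]

# m x n matrix of music profile vectors: one row per song, one column per element.
# The ratings are illustrative values on a 0-9 scale.
music_matrix = np.array([
    [7, 2, 5, 1],   # Song 1
    [1, 8, 6, 4],   # Song 2
    [3, 3, 2, 9],   # Song 3
], dtype=float)

def listener_profile(clip_ratings, clip_weights):
    """Combine definition-clip ratings into one rating per musical element.

    clip_ratings: {clip_id: rating} given by the listener in the quiz.
    clip_weights: {clip_id: {element_index: weight}} describing how strongly
                  each clip highlights each element (a clip may touch several).
    Conflicting ratings for the same element are reconciled by a weighted average.
    """
    n = len(ELEMENTS)
    num = np.zeros(n)
    den = np.zeros(n)
    for clip_id, rating in clip_ratings.items():
        for elem, w in clip_weights[clip_id].items():
            num[elem] += w * rating
            den[elem] += w
    # Elements never highlighted by any rated clip stay at a neutral default of 5
    # (an assumption made here for completeness).
    return np.where(den > 0, num / np.maximum(den, 1e-9), 5.0)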
To find the closest match, a difference vector may be constructed for each song i in the matrix, where the difference vector Di is given by (P1 - w1*Vi1, P2 - w2*Vi2, P3 - w3*Vi3, P4 - w4*Vi4, ..., Pn - wn*Vin), where w1, w2, w3, ..., wn are the difference weights given to the n different musical elements. In one embodiment, the closest match is found by finding the song that has the smallest sum of the components of the difference vector. As noted above, the genre category of elements is particularly important. In one embodiment, no match is found unless the sum of the difference vector components corresponding to the genre elements is smaller than a threshold, no matter how closely the listener profile matches the other musical elements, such as the instrument elements. Obviously, there can be more than one closest match, and songs that are not the closest match but are also close to the listener profile may also be usefully presented to the listener.
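The sketch below is one possible reading of this matching step. The per-element weights, the genre threshold value, and the use of absolute component values when summing the difference vector are assumptions made for illustration, not requirements of the text.

import numpy as np

def closest_matches(profile, music_matrix, weights, genre_idx,
                    genre_threshold=6.0, top_k=3):
    """Rank songs by the difference vector Di = (P1 - w1*Vi1, ..., Pn - wn*Vin).

    profile:       listener profile vector P of length n
    music_matrix:  m x n matrix of music profile vectors V
    weights:       difference weights w1..wn for the n musical elements
    genre_idx:     column indices of the genre elements
    A song is excluded unless the genre portion of its difference vector sums
    (in absolute value) to less than genre_threshold, reflecting the special
    importance given to the genre category.
    """
    diff = profile - weights * music_matrix            # broadcast over the m rows
    genre_gap = np.abs(diff[:, genre_idx]).sum(axis=1)
    total_gap = np.abs(diff).sum(axis=1)
    eligible = np.flatnonzero(genre_gap < genre_threshold)
    ranked = eligible[np.argsort(total_gap[eligible])]
    return ranked[:top_k]                              # indices of the matching songs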
Fig. 1 is a block diagram of a system for finding music or other information with subjective content that would appeal to a user or a group of users or listeners, to illustrate the preferred embodiment of the invention. As shown in Fig. 1, system 10 includes four servers. An external website server 12 provides an external website for receiving full-length pieces of music or songs. These pieces of music are then submitted to the central data repository server 14 which, in turn, sends these pieces of music to editorial managers 16, preferably simultaneously, for their analysis. By allowing the experts to rate the songs in parallel, much time is saved. Server 14 also sends the full-length songs or pieces of music to audio clips server 18 where these pieces of music are stored. Alternatively, server 14 may simply send the pieces of music to server 18 which, in turn, sends the pieces to managers 16 for analysis. While two servers 14, 18 are shown, their functions may be combined and performed by a single server.
Preferably, each editorial manager is an expert with respect to a particular category as shown in Appendix A, or with respect to a particular group of musical elements, so that there will be no conflict between the analyses by different editorial managers of the same piece of music. Thus, for the ith piece of music (Song i), one expert or editorial manager may analyze the piece of music and provide ratings with respect to musical elements 1 through j, so that such expert provides the ratings Vi1 Vi2 ... Vij. A different expert may analyze the same piece of music and provide the ratings Vi,j+1 Vi,j+2 ... Vi,j+k, a third expert may analyze the same piece of music and provide the ratings Vi,j+k+1 Vi,j+k+2 ... Vi,n, and so on. Thus, all the ratings provided by all the experts make up the music profile vector for the ith piece of music (Song i). Through this process, the m x n matrix is constructed and stored in server 14. Audio clips server 18 serves the function of storing incoming full-length pieces of music and allowing simultaneous access to the same piece of music by different experts and by server 14. For each of the musical elements, the experts are also asked to select sections of the pieces of music in the matrix as definition clips to highlight such element. In one embodiment proposed by the Applicant, 30-second song clips are selected from the full-length song. As noted above, for some musical elements, it may be difficult to select a definition clip that highlights only such element without also highlighting, at least to some degree, one or more of the other musical elements. These definition clips are stored in server 14 and provided to listeners through the public website server 20. Server 20 may also request ratings of each of the definition clips by the listener and transmits the ratings back to server 14. From these ratings, server 14 then constructs listener profiles which are stored in a user profile database.
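A minimal sketch of how ratings from experts who each cover a distinct slice of the element list might be stitched into one music profile vector follows. The slice boundaries, example values and data layout are assumptions made purely for illustration.

import numpy as np

def combine_expert_ratings(expert_ratings, n_elements):
    """Merge per-expert ratings into one music profile vector for a song.

    expert_ratings: list of (element_indices, values) pairs, one per expert,
                    where each expert covers a distinct slice of the n elements
                    (e.g. one expert rates genre elements, another instruments).
    Returns a length-n vector; elements rated by no expert default to 0 here.
    """
    vector = np.zeros(n_elements)
    for indices, values in expert_ratings:
        vector[list(indices)] = values
    return vector

# Hypothetical example: 6 elements split between two experts.
song_i = combine_expert_ratings(
    [((0, 1, 2), [7, 4, 9]),    # genre expert rates elements 1..j
     ((3, 4, 5), [2, 8, 5])],   # instrument expert rates elements j+1..n
    n_elements=6,
)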
The listener need not be a single individual. It may be a group of people, an audience of a radio station or any aggregate demographic unit. In addition, more than one profile may be recorded for each listener; for example, the listener may have one musical preference profile at work and a different one at home, or different profiles for different moods. Musical preference profiles may be compiled for a large number of individuals or groups of individuals, and the resulting listener preference profile vectors may be compiled into a matrix in much the same way the music matrix is compiled as described above, such as the one shown below for k listeners or groups of listeners:
LISTENER PROFILE

LISTENER     RATINGS WITH RESPECT TO MUSICAL ELEMENTS
Listener 1   P11  P12  P13  ...  P1n
Listener 2   P21  P22  P23  ...  P2n
Listener 3   P31  P32  P33  ...  P3n
...
Listener i   Pi1  Pi2  Pi3  ...  Pin
...
Listener k   Pk1  Pk2  Pk3  ...  Pkn
The above matrix represents k listener preference profiles, representing the musical tastes of k individuals or groups of individuals with respect to the same n musical elements used for characterizing the pieces of music by the experts.
As will be described below, it is also possible to perform a reverse process, in which the profile or music profile vector of a particular piece of music is matched against the listener preference profiles in the above listener profile database to predict the individuals or groups of individuals to whom that particular piece of music or song is likely to appeal. This is particularly useful when a new song or piece of music is to be introduced. For a new piece of music or song, there is no purchasing data to rely on, so that the above-described conventional systems relying on purchasing data will not be useful. In the invention, however, prediction is possible by first sending the new piece of music to experts or other individuals to characterize the piece of music according to the n musical elements so that the music profile vector of this new piece of music can be constructed. Once the music profile vector of the new piece of music has been constructed, this vector can be matched against the musical preference profiles of individuals or groups of individuals in the above listener preference profile database to find a match or a number of matches, in order to predict the individuals or groups of individuals to whom this new piece of music is likely to appeal. As discussed below, demographic information concerning the individuals or groups of individuals having musical preference profile vectors that match the music profile vector of the new piece of music may then be provided to, for example, promoters of the new piece of music. This can greatly assist the formulation of an effective marketing plan.
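The following sketch illustrates this reverse matching step under the same assumptions as the earlier matching sketch (weighted difference vectors summed in absolute value). The demographic records and their fields are hypothetical placeholders.

import numpy as np

def matching_listeners(song_profile, listener_matrix, weights, top_k=10):
    """Reverse process: match one song profile against the k x n matrix of
    listener preference profile vectors and return the indices of the
    listeners (or listener groups) whose profiles are closest."""
    diff = listener_matrix - weights * song_profile
    gap = np.abs(diff).sum(axis=1)
    return np.argsort(gap)[:top_k]

def demographic_report(matched_indices, demographics):
    """Compile a 'demographic match to new song' style report from hypothetical
    demographic records aligned with the rows of the listener matrix."""
    return [demographics[i] for i in matched_indices]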
The above-described process can also be applied to assist radio stations in creating play lists. Here the radio station itself establishes a listener profile, either by profiling all of the songs that are popular on its existing play list, by having a program manager take the quizzes in a manner he or she feels corresponds to a typical listener, or by having the radio station's listeners actually take the quizzes by playing sample songs and providing feedback. The radio station may establish multiple profiles for different shows or programming times. Once the listener profiles are established, the system of the invention can suggest new songs for the radio station play list that are compatible with the radio station's identity.
A more detailed description of the operation of system 10 of Fig. 1 will now be set forth in reference to Figs. 2 and 3. Fig. 2 is a dataflow diagram illustrating the operation of the system of Fig. 1. Fig. 3 is a graphical illustration of a webpage to illustrate the operation of the system of Fig. 1.
In reference to Fig. 2, the oblong-shaped boxes are data repositories, the large circles indicate results that are displayed or sent, and the small circles are interfaces. The dotted lines 80 are database notifications, the broken lines 82 indicate file transfers, and the solid lines indicate COM communication. The small rectangles 86 next to the larger boxes indicate that the large boxes initiate action.
In reference to Fig. 2, the right-hand portion of the dataflow diagram illustrates the backend operation of server 14 and its interaction with the two website servers 12 and 20. Thus, full length pieces of music such as songs are submitted to server 14 through external website 12 or by other means such as by mailing a compact disk or tape.
Each incoming song is coded into server 14 using a process called the "keyhole" music profiling application. This involves several substeps:
(i) the song is shipped to the location of server 14, either by transporting the physical medium or by electronic submission; (ii) the song is uploaded from its distribution medium into server 14 in digital format; (iii) pertinent information is copied, coded, calculated, or automatically read from the song, record jacket, and so on (e.g. song length, artist, title, promotional text, cover and other images). In some cases the text information (e.g. title, artist) is entered by the editorial experts 16 via a custom-designed web front-end 106 accessible via the Internet, and in others by entry personnel on an Internet or Intranet application. The full-length piece of music is usually submitted with other information such as title and artist. When received by server 14, stripper application 102 strips such information and sends only the full-length piece of music to auto-encode 104, in which the digital song file (and/or the 30-second sample) is encoded into various common formats. Currently, server 14 uses software to encode songs into Microsoft Media Player and Beatnik formats (RealAudio or MP3 formats could be used). The song profile vectors are combined into a master matrix, which is uploaded to the result clips database 108 in server 14. Server 14 then combines the encoded digital music file and the information submitted through web submit 106 and stores the combined file in result clips database 108. The full-length piece of music may also be sent from experts with rating information for constructing the music profile vector of the music sent. The rating information is detached by rating application 112 and only the full-length music is sent to auto-encode 104 for encoding as described above. The rating and other information is sent by rating application 112 to result clips database 108, combined with the encoded digital music file, and stored as a combined file in result clips database 108.
As described above, the rating application 112 treats each vector element Vij as the weighting given by a particular expert or individual with respect to the jth musical element for the ith song or piece of music submitted. This weighting is combined with the weightings given by other experts with respect to the other musical elements into a music profile vector for the ith piece of music submitted, and such music profile vector is sent to result clips database 108, combined with the digital music file from auto-encode 104, and stored as a combined file in database 108.
Thus, as described above in reference to Fig. 1, server 14 or server 18 sends full-length pieces of music to experts or other individuals 16 for characterizing each piece of music with respect to the n musical elements. After the experts or individuals 16 have done so, they submit their ratings to server 14, and rating application 112 then compiles the music profile vectors of the different pieces of music so rated. In this manner, the above-described matrix of music profile vectors of different pieces of music is compiled and stored in result clips database 108. The experts or individuals 16 are also asked to select a short segment of each piece of music in database 108 to be the definition clip corresponding to a particular musical element of the n musical elements, or if that is not possible, a definition clip corresponding to two or more of the n musical elements. The selected definition clips together with their corresponding musical elements are sent by experts or individuals 16 and received at server 14, where rating application 112 batches the clips and their corresponding musical elements and sends them to definition clips database 132. Alternatively, rating application 112 may send the definition clips to auto-encode 104, which encodes the definition clips into the appropriate format and sends the digital definition clip files to database 132.
The above-described process is controlled by work flow engine 138. Thus, the definition clips database 132 and the result clips database 108 receive, from time to time, information concerning new pieces of music or definition clips submitted by experts or individuals 16, and music files from auto-encode 104, stripper application 102 and rating application 112. Work flow engine 138 queries databases 108 and 132 periodically. When a definition clip has been received by database 132, or a new piece of music or its related information has been received by database 108, the appropriate database sends a notification to engine 138 upon the periodic query. Engine 138 then instructs stripper application 102 or rating application 112 to process the piece of music received in the manner described above, so that the digitized definition clip or full-length music may be combined with the information from web submit 106 and stored as a combined file in result clips database 108 or definition clips database 132.
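A minimal sketch of this polling pattern is given below. The database and application objects, their method names, and the polling interval are hypothetical stand-ins for the components labeled 108, 132, 102 and 112 in Fig. 2; the application does not specify any particular interface.

import time

def run_workflow_engine(result_clips_db, definition_clips_db,
                        stripper_app, rating_app, poll_seconds=60):
    """Periodically query both databases and dispatch newly arrived items."""
    while True:
        for item in result_clips_db.pending_items():      # new full-length music
            if item.has_ratings:
                rating_app.process(item)                  # detach ratings, then encode
            else:
                stripper_app.process(item)                # strip metadata, then encode
        for clip in definition_clips_db.pending_items():  # new definition clips
            rating_app.process(clip)
        time.sleep(poll_seconds)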
From the above, it will be apparent that the definition clips database 132 now stores a number of definition clips, each corresponding to one or more of the n musical elements, and the result clips database 108 now contains a matrix of the music profile vectors of a number of pieces of music or songs.
Each listener preference profile vector is a list of ratings corresponding to how important a given musical element is to the listener (or alternatively, the amount of the given characteristic that is ideal for that particular listener).
To derive the vector, the user is guided through a series of musical quizzes by means of a public website operated by server 20. In reference to Figs. 1 and 3, server 20 displays a webpage such as page 200 shown in Fig. 3 to present the listener with a series of musical quizzes or tests. As shown in Fig. 3, seven tests or quizzes are presented to the user as seven different entry points. By clicking on any one of the seven boxes 202, a number of musical definition clips are shown. When the listener clicks on any one of the seven circles 204, the corresponding definition clip will be played and the listener is asked to select any one of five choices 206, shown in Fig. 3. The five choices for scoring the definition clip are as follows: the user may select "not my style"
(0 points), "less my style" (2 points), "don't know" (5 points, or reverts to unrated), "more my style" (7 points), or "completely my style" (9 points). The various quizzes are linked to other quizzes for related genres of music. For example, the basic "rock" quiz is linked to "girls who rock" and to another quiz entitled "tattoos and pool cues." After taking each quiz, the listener is given a choice of several other linked quizzes. If a listener takes more than one quiz to establish a given profile, the accuracy of the profile improves with each additional quiz. When the listener is finished, the responses from the various quizzes are compiled using a "preprocessor" to convert the list of clip ratings into a user profile, as sketched below.
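The following sketch shows how such a preprocessor might convert quiz choices into clip ratings that can then feed the profile construction sketched earlier. The point values follow the text; treating "don't know" as unrated, and the overall data layout, are assumptions.

# Point values for the five quiz choices, as given in the text.
CHOICE_POINTS = {
    "not my style": 0,
    "less my style": 2,
    "don't know": None,        # treated here as unrated (an assumption)
    "more my style": 7,
    "completely my style": 9,
}

def preprocess_quiz(responses):
    """Convert raw quiz responses {clip_id: choice string} into clip ratings,
    dropping unrated clips; the result can feed the listener_profile() sketch."""
    ratings = {}
    for clip_id, choice in responses.items():
        points = CHOICE_POINTS[choice]
        if points is not None:
            ratings[clip_id] = points
    return ratings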
The seven definition clips may be randomly selected at the time the tests or quizzes are run; however, each of the songs or pieces of music for a given test question tests for more or less the same thing. This means that a listener can take the same quiz multiple times and receive a different experience each time, thus avoiding monotony.
Each of the entry points or tests corresponds to a category of musical elements (e.g. a genre). Thus, the seven definition clips shown in Fig. 3 may correspond to the seven musical elements in the category "instrument." The dynamic test generator 152 then selects seven definition clips from database 132, each of the definition clips highlighting one or more defining characteristics of the particular instrument (e.g. piano, pedal steel guitar or saxophone). When the listener clicks on the circle 204 corresponding to the musical element, a definition clip highlighting piano is then played and the user is asked to rate the clip according to the five choices. To construct the webpage 200 as shown in Fig. 3, graphical user interface templates 154 are combined with definition clips from generator 152 and integrated by integrator 156 into the webpage, presented as test page 200. The ratings made by the listener, when one of the five choices is selected, are presented by server 20 to fingerprint preprocessor 162 in server 14. Preprocessor 162 compiles the listener preference profile vector based on the ratings by the listener and provides the vector to search engine 164 in server 14. Search engine 164 then matches the listener profile vector against the matrix of the different pieces of music in result clips database 108, finds one or more matches to the listener profile, and presents the pieces of music so found to web integration engine 166. Integration engine 166 then integrates such information with graphical user interface templates 168 and presents the results found as result webpage 170, which is then transmitted to server 20 to inform the listener. If so desired, a listener can retake the test by clicking box 208, and server 20 will inform server 14 so that generator 152 will generate a new set of definition clips for the same test for the same listener; the listener thus hears different definition clips when the test is retaken, avoiding monotony.

The reverse process of matching listener profiles to a particular piece of music will now be described. The listener preference profile vector compiled by preprocessor 162 and presented in result webpage 170 may be stored in a user profile/personalization database 172. After a large number of listeners having a cross-section of different backgrounds have taken the tests, the above-described listener preference profile vector matrix may be compiled and stored in database 172. Thus, when a new piece of music such as a new song is to be promoted, such song is rated by experts or other individuals 16 in the same manner as that described above to construct a music profile vector which is stored in database 108. This vector is retrieved by search engine 174, which compares the vector to the listener preference profile vectors in the matrix in database 172 and finds the listener preference profile vectors that match such music profile vector by means of a difference algorithm, in the same manner as that described above where a match is found between a particular listener preference profile vector and the matrix in database 108. In addition to the closest match, the less close matches may also be useful, and all the matches are presented to data warehouse reporting engine 176. If demographic data corresponding to the listener preference profile vectors is available, engine 176 then combines such data with the matching listener profiles so found to compile a demographic match to new song report 180, which is presented to server 20.
The invention can assist radio stations in creating play lists. This can be done by profiling all of the songs that are popular on a station's existing play lists stored in the station's spin list database 182, where the profiling is performed in the same manner as that described above to construct music profile vectors. One or more of such music profile vectors can then be matched in the same reverse process as that described above for a new song or piece of music, to find the matching listener profiles. Such listener profiles can be used in turn to find more or different matching songs or music by matching such profiles with the matrix of database 108 in the manner discussed above. The result is then compiled as a station's songs to play listing report 186 and presented to server 20. This process can be repeated to enlarge the play list as well as the roster of listener profiles. Search engine 174 may also compile a targeted direct mailing list 183 based on the matching listener profiles so found.
The above-described system of the invention can also be used for finding multimedia works, such as advertisements, that may appeal to a user or a group of users, in the same manner as that described above for music. The multimedia works contain subjective content just as music does, so that multimedia elements, instead of musical elements, may be identified for characterizing different multimedia works such as advertisements. The above-described system can then be used for creating a matrix of multimedia profile vectors stored in a database such as database 108. In a similar manner, user profile vectors can be derived and stored in a database such as database 172. Since multimedia works also include images, the tests or quizzes taken by the user would also include video images. These images are stored in advertisement profile database 190 and are supplied to ad engine 192 and to integrator 156 so that the visual images may be integrated with the sound and presented as the test page 200. The multimedia elements may be the same as the musical elements shown in Appendix A, so that in the quizzes taken by users through test page 200, the visual component will have an influence on the choices made by the user. Alternatively, entirely visual subjective elements may be formulated in addition to the sound-based elements of Appendix A. Such and other variations are within the scope of the invention.

The invention of the application can also be applied to information with subjective content other than music or multimedia works, such as literature, dating services or any other type of information with subjective content. Elements useful for characterizing, or related to, the subjective characteristics of the information may then be defined in the same manner as that described above and used for analyzing one or more items of information with subjective content. User profiles may also be constructed pertaining to preferences of users with respect to such set of elements. Matching can then be performed between one or more items of information with subjective content and a profile of a user or users in a database, in the same manner as that described above. In this manner, items of information with subjective content may be found that are likely to appeal to a user or a group of users. The reverse process is also possible: given an item of information with subjective content, it is possible to discover the profile of a user or a group of users to whom such information is likely to appeal.
The user preference profile database and the instructions for performing the various processes illustrated in Figs. 1, 2 and 3 may be stored in the storage medium 300 shown in Fig. 4. When a computer loads the instructions 304 and uses the instructions to access the database 302, the above-described processes can be performed.
Fig. 5 is a data flow diagram illustrating a process for generating a play list to illustrate an embodiment of the invention. Fig. 6 is a work flow diagram illustrating a process for generating a play list to illustrate the embodiment of Fig. 5. It may be useful to generate a play list based on the profile of a listener or a group of listeners, or on the profile of a piece of music. As shown in Figs. 5 and 6, the profile 312 is provided and matched against the profiles of the pieces of music in the result clips database 108 of Fig. 2 using the subjective recommend engine 314, illustrated in more detail in reference to Figs. 1 and 2. The pieces 316 of music so found are then used as seed profiles and again matched against the profiles in result clips database 108 to find additional pieces of music to form the play list 318, preferably in the form of a matrix, where the list may include all or fewer than all of the music pieces that served as the seed music for the play list.
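A sketch of this two-stage, seed-based expansion is given below. The stage sizes and the matching function are assumptions; the matching function could be, for example, the closest_matches() sketch shown earlier with its weights and genre threshold bound in.

def build_play_list(profile, music_matrix, match_fn, seed_count=3, per_seed=5):
    """Two-stage play-list generation.

    match_fn(profile, music_matrix, top_k) returns indices of the closest songs.
    Stage 1 finds seed songs for the input profile; stage 2 expands the list by
    matching each seed song's own profile against the database, while avoiding
    duplicates. Seed songs may or may not remain on the final list.
    """
    seeds = list(match_fn(profile, music_matrix, top_k=seed_count))
    play_list = list(seeds)
    for s in seeds:
        for idx in match_fn(music_matrix[s], music_matrix, top_k=per_seed):
            if idx not in play_list:
                play_list.append(idx)
    return play_list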
It may be desirable for a listener or a radio station to be able to retrieve the pieces of music on a play list compiled as described above as streaming audio, and to play the music in real time as the data is retrieved, without having to first store the data. In this manner, the listener or the station does not have to store the music on the play list in any storage before the music on the list is played. This is illustrated in Fig. 7, described below.
When the matrix 318 of pieces of music is formed for the play list, locators may be provided for the music so that the pieces of music may be retrieved conveniently through the locators. These locators may be obtained when the pieces of music are first entered as described above. When the music is retrieved through the Internet, the locators may be uniform resource locators (URLs), although other locators may also be used for other types of networks and are within the scope of the invention. Thus, the matrix of songs may be converted into a play list format 320, where the format includes, in addition to information such as title, author and artist, the locators. As one example, the play list 320 may be in the asx format, so that it can be read by a multimedia player 322, such as the Microsoft Media Player. An example of the asx format is shown in Appendix B, attached hereto and made part of this application. The multimedia player 322 then retrieves the pieces of music one after another, according to the order on the play list, using the locators, from a database 330 of music as a streaming audio stream, and plays each piece of music in real time as the audio data is retrieved. In this manner, the listener or the radio station does not need to provide storage to store any of the music on the play list. The play list may be stored in a device, such as a portable music device (e.g. the iPAQ from Compaq), equipped with a multimedia player, which can retrieve the pieces of music one by one as streaming audio without having to store any of the pieces.
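As a rough sketch, an ASX-style play list file could be written from the play list entries as shown below. The entry fields and URLs are placeholders, and the full parameter set of the example in Appendix B need not be reproduced for a player to read the list.

from xml.sax.saxutils import escape

def write_asx(entries, path):
    """Write a minimal ASX play list readable by a multimedia player.

    entries: list of dicts with 'title', 'author' and 'href' (the locator/URL
    pointing to the streaming audio source); all values here are placeholders.
    """
    lines = ['<ASX version = "3.0">']
    for e in entries:
        lines += [
            "  <Entry>",
            "    <Title>" + escape(e["title"]) + "</Title>",
            "    <Author>" + escape(e["author"]) + "</Author>",
            '    <Ref href = "' + escape(e["href"]) + '" />',
            "  </Entry>",
        ]
    lines.append("</ASX>")
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines))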
Where a listener or a radio station has a collection of pieces of music, such as a collection of compact disks ("CDs"), it may be desirable for the play list to be based on music in the collection. This may be accomplished by means of the system illustrated in Fig. 8. First, a profile of a listener or a group of listeners, or of a piece of music, is obtained, either by means of the rating process described in reference to Figs. 1-3, or from the user profile vector of the listener or the group of listeners in the database 172 of Fig. 2, where the vector was derived previously by means of a similar process. This is done preferably by means of an intelligent device such as a jukebox. Responding to a user's input from an input device 402, the jukebox 404 logs in (line 406) to system 408, such as the system described above in reference to Figs. 1-3, and requests the user's profile from database 172, or requests a session in which the user's profile may be constructed as described above. The profile is retrieved or constructed (line 410). The jukebox then sends information concerning the collection of music to the system. Where the collection of music is in the form of compact disks, the jukebox 404 provides locations of the CDs (e.g. slot numbers) and imprints of the CDs, such as the number of tracks, the lengths of the tracks and the order of the tracks, to system 408. System 408 then requests, based on the CD imprints provided by the jukebox, further information concerning the collection, such as the title, artist and author of each track of the CDs in the collection, from a database 410, such as the CDDB at www.cddb.com. System 408 then matches the user profile with the profiles in result clips database 108 to find the pieces of music that match the profile. System 408 then searches the collection for the matching pieces of music and provides to jukebox 404 the slot numbers of the CDs, and the track numbers on the CDs, of such pieces in the collection. The jukebox may then play the pieces of music. Where a play list has been constructed in the manner described above, system 408 provides the list to jukebox 404, and the jukebox then plays the music on the list.
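A sketch of the final mapping step follows: once the matching pieces of music are known (and enriched with title and artist information from a CD database lookup), they are located in the listener's CD collection so the jukebox can be told which slot and track to play. The record layouts and field names are hypothetical.

def locate_in_collection(matched_songs, collection):
    """Map matched songs onto a jukebox CD collection.

    matched_songs: list of dicts with 'title' and 'artist' for each matching piece.
    collection:    list of dicts with 'slot', 'track', 'title', 'artist'
                   describing the discs loaded in the jukebox.
    Returns (slot, track) pairs the jukebox can play, in play list order;
    pieces not present in the collection are simply skipped.
    """
    index = {(c["title"].lower(), c["artist"].lower()): (c["slot"], c["track"])
             for c in collection}
    locations = []
    for song in matched_songs:
        key = (song["title"].lower(), song["artist"].lower())
        if key in index:
            locations.append(index[key])
    return locations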
In the same manner as that described above for the parent application in reference to Fig. 4, system 408, including databases such as those illustrated in Fig. 2 and instructions for carrying out the above-described processes, may be stored in a storage medium. When a computer loads the instructions and uses the instructions to access the databases, the above-described processes can be performed.
While the invention has been described above by reference to various embodiments, it will be understood that changes and modifications may be made without departing from the scope of the invention, which is to be defined only by the appended claims and their equivalents.
APPENDIX A
MUSICAL ELEMENT            CATEGORY
1970's/1980's Icons        Genre
20/30 something            Genre
2 Step                     Genre
80's Synth                 Genre
Lite Rock Vocals           Genre
Adult Lilith Folk          Genre
Abstract/Tech              Genre
AC Country                 Genre
Acid Grooves               Genre
Pedal Steel Guitar         Instrument
Percussion/drums           Instrument
Piano                      Instrument
Sampled Drums              Instrument
Saxophone                  Instrument
Slide Guitar               Instrument
Spoken Female Vocal        Instrument
Funky                      Miscellaneous
Jazzy                      Miscellaneous
Tempo                      Miscellaneous
Epic                       Mood
Happy                      Mood
Hypnotic                   Mood
Melancholy                 Mood
APPENDIX B
ASX_playlist.txt

<ASX version = "3.0">
<Entry>
<Title>"Jack is back"</Title>
<AUTHOR>Fiorella</AUTHOR>
<ABSTRACT>Full of Flamenco strings and a kicking baseline...Ibiza...need I say more?</ABSTRACT>
<StartTime value = "00:00:0.000" />
<Param Name = "Media/MediaProperties/MediaType" Value = "audio" />
<Param Name = "type" Value = "downloaded" />
<Param Name = "Is_Trusted" Value = "true" />
<Param Name = "Is_Protected" Value = "false" />
<Param Name = "Label" Value = "Universal" />
<Param Name = "Title" Value = "Azurro" />
<Param Name = "Album" Value = "Kiss In Ibiza 2000" />
<Param Name = "Author" Value = "Fiorella" />
<Param Name = "SlugLine" Value = "Full of Flamenco strings and a kicking baseline...Ibiza...need I say more?" />
<Param Name = "CoverArt" Value = "http://ibeta.mubu.com/art/3135-ART.jpg" />
<Ref href = "http://listen.mubu.com/master/asFull/6 20-FULL.asf" />
</Entry>
<Entry>
<Title>Opium Scumbagz</Title>
<AUTHOR>Olav Basoski</AUTHOR>
<ABSTRACT>Latin flavored house track, from the Dutch Masta blasta.</ABSTRACT>
<StartTime value = "00:00:0.000" />
<Param Name = "Media/MediaProperties/MediaType" Value = "audio" />
<Param Name = "type" Value = "downloaded" />
<Param Name = "Is_Trusted" Value = "true" />
<Param Name = "Is_Protected" Value = "false" />
<Param Name = "Label" Value = "Universal" />
<Param Name = "Title" Value = "Opium Scumbagz" />
<Param Name = "Album" Value = "Kiss In Ibiza 2000" />
<Param Name = "Author" Value = "Olav Basoski" />
<Param Name = "SlugLine" Value = "Latin flavored house track, from the Dutch Masta blasta." />
<Param Name = "CoverArt" Value = "http://ibeta.mubu.com/art/313 -ART.jpg" />
</Entry>
</ASX>
Claims

WHAT IS CLAIMED IS:
1. A method for finding a play list of music, comprising: providing a musical profile, wherein said musical profile includes preference data pertaining to one or more elements of a set of musical elements for characterizing a number of pieces of music in a database, said elements corresponding to subjective musical traits of the pieces, wherein said database includes a matrix of musical profile data of the pieces of music compiled using said set of musical elements; finding musical profiles of a first number of pieces of music in the database that match the musical profile; and creating the play list by finding musical profiles of a second number of pieces of music in the database that match the musical profiles of the first number of pieces of music, wherein said first and second number of pieces of music are not the same.
2. The method of claim 1, said second number greater than the first number.
3. The method of claim 1, wherein said musical profile provided is likely to appeal to a listener or a group of listeners of a radio station.
4. The method of claim 1, wherein said providing includes playing at least one music clip to a listener and compiling the listener profile according to listener response to requests for ratings in a quiz concerning the at least one clip, said at least one clip being taken from a corresponding piece of music in the database and shorter than such piece.
5. The method of claim 1, said creating further comprising providing resource locators that point to data sources for the pieces of music in the play list.
6. The method of claim 5, said locators being uniform resource locators.
7. The method of claim 1, said creating further comprising placing the play list in a format that is readable by a media player.
8. A method for finding a play list of music and playing it, comprising: providing a musical profile, wherein said musical profile includes preference data pertaining to at least two elements of a set of musical elements for characterizing a number of pieces of music in a database, said elements corresponding to subjective musical traits of the pieces, wherein said database includes a matrix of musical profile data of the pieces of music compiled using said set of musical elements and locators pointing to data sources for the pieces of music; finding musical profiles of a first number of pieces of music in the database that match the musical profile; creating the play list by finding musical profiles of a second number of pieces of music in the database that match the musical profiles of the first number of pieces of music, wherein said first and second number of pieces of music are not the same; and retrieving the pieces of music on the play list using the locators.
9. A storage medium storing a database comprising a matrix of profile data with respect to a set of elements related to subjective characteristics of information with subjective content, wherein said matrix is arrived at by an analysis of a plurality of pieces of music with respect to said set of elements, and wherein said matrix also includes locators pointing to data sources for the pieces of music.
10. The medium of claim 9, wherein said locators include uniform resource locators.
11. A method for identifying music from a storage that is likely to appeal to a listener or a group of listeners, comprising: providing a listener profile of a listener or a group of listeners, wherein said listener profile includes listener data pertaining to one or more elements of a set of musical elements for characterizing a number of pieces of music in a database, said set of elements corresponding to subjective musical traits of the pieces, wherein said database includes a matrix of profile data of the pieces of music compiled using said set of musical elements; obtaining a plurality of pieces of music in the database where the pieces of music match the listener profile; and finding locations in the storage of music that matches the plurality of pieces of music.
12. The method of claim 11, wherein said obtaining further includes searching information relating to the plurality of pieces of music in a second database to uniquely identify music in the storage matching the plurality of pieces of music.
13. The method of claim 12, said storage storing compact disks, wherein said finding includes providing location information of disks in the storage and track numbers of such disks where the plurality of pieces of music are located.
14. The method of claim 13, wherein said obtaining further includes searching name and artist information of music related to tracks on the disks.
15. A storage medium storing a database and a set of instructions for finding a play list of music, said database including a matrix of musical profile data of pieces of music compiled using a set of musical elements for characterizing a number of pieces of music in the database, said elements corresponding to subjective musical traits of the pieces, wherein said set of instructions causes a computer to: provide a musical profile, wherein said musical profile includes preference data pertaining to one or more elements of the set of musical elements; find musical profiles of a first number of pieces of music in the database that match the musical profile; and create the play list by finding musical profiles of a second number of pieces of music in the database that match the musical profiles of the first number of pieces of music, wherein said first and second number of pieces of music are not the same.
16. The medium of claim 15, said second number greater than the first number.
17. The medium of claim 15, wherein said musical profile provided is likely to appeal to a listener or a group of listeners of a radio station.

18. The medium of claim 15, wherein said providing includes playing at least one music clip to a listener and compiling the listener profile according to listener response to requests for ratings in a quiz concerning the at least one clip, said at least one clip being taken from a corresponding piece of music in the database and shorter than such piece.

19. The medium of claim 15, said creating further comprising providing resource locators that point to data sources for the pieces of music in the play list.

20. The medium of claim 19, said locators being uniform resource locators.

21. The medium of claim 15, said creating further comprising placing the play list in a format that is readable by a media player.
22. A storage medium storing a database and a set of instructions for identifying music from a storage that is likely to appeal to a listener or a group of listeners, said database including a matrix of musical profile data of pieces of music compiled using a set of musical elements for characterizing a number of pieces of music in the database, said elements corresponding to subjective musical traits of the pieces, wherein said set of instructions causes a computer to: provide a listener profile of a listener or a group of listeners, wherein said listener profile includes listener data pertaining to one or more elements of the set of musical elements; obtain a plurality of pieces of music in the database where the pieces of music match the listener profile; and find locations in the storage of music that matches the plurality of pieces of music.
23. The medium of claim 22, wherein said obtaining further includes searching information relating to the plurality of pieces of music in a second database to uniquely identify music in the storage matching the plurality of pieces of music.
24. The medium of claim 23, said storage storing compact disks, wherein said finding includes providing location information of disks in the storage and track numbers of such disks where the plurality of pieces of music are located.
25. The medium of claim 24 wherein said obtaining further includes searching name and artist information of music related to tracks on the disks.
26. A method for identifying a piece of music likely to appeal to a potential listener which comprises: (i) providing a set of at least two elements for characterizing a piece of music;
(ii) separately applying at least two elements from said step (i) set of elements to each of a plurality of pieces of music, wherein a music piece profile of said plurality of pieces of music is obtained;
(iii) providing a data base comprising at least one music piece profile obtained in step (ii); (iv) applying said at least two elements of step (i) to at least one potential listener wherein a listener profile is provided;
and
(v) comparing said listener profile of step (iv) with said data base of music piece profiles of step (iii), wherein a match between said listener profile and a music piece profile in said data base is indicative of a music piece likely to appeal to said listener.
27. The method of claim 26 wherein step (iv) comprises applying a plurality of elements from said step (i) set of elements to a plurality of potential listeners.
PCT/US2001/019970 2000-06-23 2001-06-22 System for characterizing pieces of music WO2002001548A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001271384A AU2001271384A1 (en) 2000-06-23 2001-06-22 System for characterizing pieces of music

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US60295300A 2000-06-23 2000-06-23
US09/602,953 2000-06-23
US70992800A 2000-11-09 2000-11-09
US09/709,928 2000-11-09

Publications (1)

Publication Number Publication Date
WO2002001548A1 true WO2002001548A1 (en) 2002-01-03

Family

ID=27084287

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/019970 WO2002001548A1 (en) 2000-06-23 2001-06-22 System for characterizing pieces of music

Country Status (2)

Country Link
AU (1) AU2001271384A1 (en)
WO (1) WO2002001548A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5864868A (en) * 1996-02-13 1999-01-26 Contois; David C. Computer control system and user interface for media playing devices
US6201176B1 (en) * 1998-05-07 2001-03-13 Canon Kabushiki Kaisha System and method for querying a music database
US5969283A (en) * 1998-06-17 1999-10-19 Looney Productions, Llc Music organizer and entertainment center

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1376583A3 (en) * 2002-06-19 2004-12-22 Microsoft Corporation System and method for automatically authoring video compositions using video clips
US7222300B2 (en) 2002-06-19 2007-05-22 Microsoft Corporation System and method for automatically authoring video compositions using video cliplets
EP1533786A1 (en) * 2003-11-21 2005-05-25 Pioneer Corporation Automatic musical composition classification device and method
US7250567B2 (en) 2003-11-21 2007-07-31 Pioneer Corporation Automatic musical composition classification device and method
EP1612706A3 (en) * 2004-06-30 2006-05-24 Sony Corporation Content storage device

Also Published As

Publication number Publication date
AU2001271384A1 (en) 2002-01-08


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA JP US US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP