US20060040718A1 - Audio-visual games and game computer programs embodying interactive speech recognition and methods related thereto - Google Patents

Audio-visual games and game computer programs embodying interactive speech recognition and methods related thereto

Info

Publication number
US20060040718A1
Authority
US
United States
Prior art keywords
game
player
gaming device
dialog
game player
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/182,269
Inventor
Ian Davis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mad Doc Software LLC
Original Assignee
Mad Doc Software LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mad Doc Software LLC
Priority to US11/182,269
Assigned to LAWRENCE SAVINGS BANK: security interest (see document for details); assignor: MAD DOC SOFTWARE, LLC
Assigned to MAD DOC SOFTWARE, LLC: assignment of assignors interest (see document for details); assignor: DAVIS, IAN L.
Publication of US20060040718A1
Legal status: Abandoned

Classifications

    • G10L 15/26: Speech to text systems (Speech recognition; Speech analysis or synthesis)
    • A63F 13/215: Input arrangements for video game devices comprising means for detecting acoustic signals, e.g. using a microphone
    • A63F 13/335: Interconnection arrangements between game servers and game devices using wide area network [WAN] connections, using Internet
    • A63F 13/35: Details of game servers
    • A63F 13/533: Controlling the output signals based on the game progress, involving additional visual information for prompting the player, e.g. by displaying a game menu
    • A63F 13/54: Controlling the output signals based on the game progress, involving acoustic signals
    • A63F 13/87: Communicating with other players during game play, e.g. by e-mail or chat
    • A63F 2300/1081: Input via voice recognition
    • A63F 2300/407: Data transfer via internet
    • A63F 2300/50: Details of game servers
    • A63F 2300/572: Communication between players during game play of non-game information, e.g. e-mail, chat, file transfer, streaming of audio and streaming of video
    • A63F 2300/6081: Sound processing generating an output signal, e.g. under timing constraints, for spatialization
    • A63F 2300/807: Role playing or strategy games

Definitions

  • The present invention relates to audio-visual video games implemented on dedicated gaming systems, on computer systems, or via the Internet. More particularly, it relates to such audio-visual games that embody speech recognition methods and devices for use in the conduct of the game, and more specifically to such games embodying interactive techniques and methods that allow one or more players to interact or actively participate in the play of the game, including simulating natural-language interactions with non-player characters of the game, where such interactive participation is accomplished using speech recognition methods and devices.
  • UI control systems for interactive computer and video games typically consist of a keyboard and/or other hardware controller (e.g., joystick, mouse), which the game player manipulates by hand to control characters and game action.
  • Traditional hardware-controller UI limits immersive game-play, not least because it is an imperfect replacement for the most common and intuitive means humans use to interact with one another in the real world: spoken language.
  • Speech recognition for Command & Control links a spoken word or short phrase to a single “hotkey” or other existing hardware-controller (e.g., keyboard, joystick) game command, for example, “run,” or “shoot”.
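A minimal sketch of this command-and-control style of speech input, in which a recognized word or short phrase is linked directly to a single existing "hotkey" game command. The phrase table and event names below are invented for illustration; the patent does not specify them:

```python
# Hypothetical command & control speech dispatch: each recognized phrase
# maps to exactly one existing game command, just as a hotkey would.
COMMAND_MAP = {
    "run": "KEY_W",        # invented hotkey event names
    "shoot": "MOUSE_LEFT",
    "jump": "KEY_SPACE",
}

def dispatch(recognized_phrase):
    """Translate a recognized phrase into its equivalent hotkey event,
    or None when the phrase is not a known command."""
    return COMMAND_MAP.get(recognized_phrase.strip().lower())

print(dispatch("Shoot"))  # MOUSE_LEFT
```

Because each utterance resolves to a single pre-existing command, this style adds no dialog capability, which is exactly the shortfall the following bullets describe.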
  • This application of speech recognition still falls short of immersive game play, because the communication is one-way, from player to game character.
  • An essential aspect of the immersive game experience, dialog between the human player and the computer game characters, has not been addressed, either by traditional UI via hardware-controller or by Command and Control speech recognition. Examples of such speech recognition command and control type of systems and methods are found in U.S. Pat. No. 6,529,875 and U.S. Pat. No. 5,893,064 and U.S. Pat. No. 6,456,977.
  • It would be desirable to provide gaming programs, systems and gaming methods that allow the game player to become an interactive participant in the game, including but not limited to directly using his/her voice to role-play a simulated character in the game, and thus be better immersed in the gameplay than with prior art techniques and programs, in which spoken words are limited to phrases or commands that in effect implement a command corresponding to the functionality of a key or button of a remote control device. It also would be desirable to provide such gaming programs, systems and gaming methods in which the speech recognition training is integrated into the play of the game. Such gaming programs, systems and methods preferably create an immersive gaming experience for the human player as well as improving his/her control of the game.
  • the present invention features game applications, programs, software, systems and methods that are such as to provide an interactive gaming experience for one or more game players by using speech recognition techniques in combination with interactive techniques and functionalities so as to create an interface that allows the spoken words and/or spoken phrases of a game player to initiate any one of a number of functionalities, actions or informational outputs in connection with the game.
  • Such methods, programs, and systems can further include creating an interface that allows a game player to initiate a function or action to be performed by the game player's character, to initiate a dialog or interaction between a player's character and a non-player game character appearing in a game or to request information from a non-player character or information about a phase of the game in general as part of the play of the game.
  • Such methods, programs and systems further include creating an interface that allows the game player to directly interact with a non-player character as well as to directly control the play of the game (e.g., directly casting a spell in a game involving magic).
  • the process of dialoging with a non-player character includes displaying one or more text messages of various possible responses in response to the interactive input of the game player; having the game player provide a voice input to select a specific one of the one or more text messages and creating a game program output that corresponds or relates to the game player's response selected via the player's voice or speech input to the game.
  • the non-player character provides a voice output of the one or more text messages being displayed and/or of the response being outputted.
  • the process of dialoging includes displaying a text message, which can be in the form of a statement by or query from the non-player character, along with the one or more text messages, and/or having the non-player character provide a voice output of the message.
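The dialog loop described in the bullets above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the candidate responses are invented, and a real system would constrain the recognizer with a grammar rather than match word overlap after the fact:

```python
# Sketch of the dialog process: the game displays candidate text responses,
# the player speaks, and the spoken input selects one response, which then
# drives the game program's output.
RESPONSES = [
    "Yes, I will help you.",
    "No, leave me alone.",
    "Tell me more about the quest.",
]

def select_response(spoken_text, responses):
    """Pick the displayed response whose words best overlap the player's
    recognized speech; return None when nothing matches."""
    spoken = set(spoken_text.lower().split())
    best, best_overlap = None, 0
    for r in responses:
        overlap = len(spoken & set(r.lower().rstrip(".?!").split()))
        if overlap > best_overlap:
            best, best_overlap = r, overlap
    return best

print(select_response("tell me more", RESPONSES))  # Tell me more about the quest.
```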
  • Such games are those implemented in any of a number of formats or mediums known to those skilled in the art, including but not limited to computer games installed and executed on a conventional personal computer, games installed and executed on dedicated gaming machines or apparatuses (e.g., consoles, video games, or handheld devices such as cellphones or handheld gaming devices), or games that are installed and executed at a remotely located computer system or apparatus, where communications between the game player and the remotely located computer system or apparatus are effected over wide area networks, local area networks and/or the Internet (e.g., via an Internet Service Provider—ISP).
  • the dialoging process includes identifying a specific phrase, term or word of each of the text messages being displayed, which phrase, term or word is a keyword for selecting a given text message and thus the corresponding response to be outputted.
  • the phrase, term or word is located at the beginning of the displayed text message (e.g., the first word of the text message).
  • the initiating phrase, term or word is highlighted in the text message being displayed but is not necessarily located at the beginning of the text message.
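The keyword-selection variant above can be sketched as below. The keywords and response texts are invented for the example; in the patent the keyword is simply the first or a highlighted word of each displayed message:

```python
# Sketch of keyword-based response selection: each displayed response
# carries a designated keyword, and speaking that keyword alone selects
# the response.
KEYWORD_RESPONSES = {
    "Agree": "Agree to join the quest.",
    "Refuse": "Refuse and walk away.",
    "Ask": "Ask about the reward.",
}

def select_by_keyword(spoken_word):
    """Return the response whose keyword matches the spoken word."""
    return KEYWORD_RESPONSES.get(spoken_word.strip().capitalize())

print(select_by_keyword("refuse"))  # Refuse and walk away.
```

Keying on a single distinctive word keeps the recognizer's active vocabulary tiny, which is what makes this approach robust in a noisy living-room setting.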
  • the dialoging process is created so as to allow an open dialog between the game player acting as the playing character and the non-player character, such open dialog being initiated by the game player at any time during the game or in response to the non-player character's actions or words.
  • the open dialog process is created so as to allow the game player to provide a request (e.g., requesting location or other information) using natural native language to the non-player character or gaming program.
  • the game player as the playing character can ask the non-player character where an artifact (e.g., amulet, scroll, weapon) is located using natural language as they would to another game player or person (e.g., Where is the sword?).
  • the dialog process would further include developing a response or action responsive to the dialog initiated by the game player.
  • such an open dialog process can further include reviewing one or more of game environmental context information and character history to arrive at the response or responsive action.
  • the dialoging process can map nouns, concepts and other language elements to the appropriate game event, action or object (i.e., the game context), thereby significantly enhancing the ability of the speech recognition system to derive the player's intent, desired action, question or statement more accurately, because reference also is being made to what has happened or is happening in the play of the game.
  • Also featured are computer systems including gaming applications programs embodying the methods and techniques of the present invention for execution thereon, and storage media on which such applications programs are stored.
  • Player character or player game character shall be understood to be any one of a number of simulated game characters in human, animal or inanimate form, for example, or a device under the control of the player's character, that appears during the conduct of a game and is under the control of both the gaming software or applications program and the player of the game.
  • the game player can provide an input to the game via a device (e.g., joystick) or voice that causes the player character to initiate an action or output a reply within the parameters allowed by the game.
  • the gaming software and applications program in response to such input causes the player character to initiate the action or reply.
  • the joystick can be used to cause the playing character (e.g., person or automobile) to move in a given direction and the gaming software causes the character to move in response to the input.
  • a non-player character or a non-player game character shall be understood to be any one of a number of simulated game characters in human, animal or inanimate form, for example, that appear during the conduct of a game and which are under the sole control of the gaming software or applications program and not under the specific control of the game player.
  • the gaming software or applications program can cause the non-player character to initiate an action or reply resulting from an action or input of a game player. For example, if the non-playing and player characters are battling using swords, the gaming software can cause the non-player character to parry and block the sword actions of the player character as well as create offensive actions.
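The sword-fight example above can be sketched as a simple reaction table under sole control of the gaming software. The action and reaction names are invented for illustration:

```python
# Hypothetical sketch of a non-player character reacting to a player-
# character action, as in the parry/block example above. The NPC is driven
# entirely by the gaming software, not by any player input.
import random

REACTIONS = {
    "sword_attack": ["parry", "block"],  # defensive options
    "retreat": ["advance"],              # press the attack
}

def npc_react(player_action):
    """Choose the NPC's response to the player character's action."""
    options = REACTIONS.get(player_action, ["idle"])
    return random.choice(options)
```

A production game would replace this table with a behavior tree or state machine, but the division of control is the same: the player drives the player character, and the software alone drives the NPC.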
  • a computer readable medium shall be understood to mean any article of manufacture that contains data that can be read by a computer or a carrier wave signal carrying data that can be read by a computer.
  • Such computer readable media include but are not limited to magnetic media, such as a floppy disk, a flexible disk, a hard disk, reel-to-reel tape, cartridge tape, cassette tape or cards; flash memory devices including NVRAM; optical media such as CD-ROM, DVD and writeable compact disc; magneto-optical media in disc, tape or card form; paper media, such as punched cards and paper tape; or a carrier wave signal received through a network, wireless network or modem, including radio-frequency signals and infrared signals.
  • voice when used in connection with the player character as an input shall be understood to include or mean speech and the words, phrases, phonemes, or terms spoken by the game player as well as in particular the words, phrases, phonemes, or terms spoken by the game player in the game player's natural language.
  • FIG. 1A is a block diagram of an exemplary computer system embodying the methodology and software of the present invention
  • FIG. 1B is a block diagram of another exemplary computer system being implemented as a computer network
  • FIG. 1C is an external view of an exemplary gaming system using a video gaming machine embodying the methodology and software of the present invention
  • FIG. 2A is a high level flow diagram generally illustrating the overall process of a game methodology according to the present invention
  • FIG. 2B is a flow diagram illustrating the dialog process between the player and a non-player character (NPC)
  • FIG. 2C is a flow diagram illustrating the process of the game player casting a spell
  • FIG. 2D is a flow diagram illustrating the in-game learning process
  • FIG. 3 is an illustrative screen shot illustrating an NPC statement/query and a response list
  • FIG. 4 is a schematic view illustrating interactions in the open dialog process.
  • FIG. 5 is an illustration of a language neutral phoneme set.
  • FIG. 1A is a block diagram of a computer system 100 embodying and implementing the gaming programs and gaming methodology of the present invention.
  • the following discussion describes the structure of such a computer system 100 but the discussion of the applications program that embodies the gaming methodology of the present invention is described elsewhere herein.
  • Such a computer system 100 includes a computer 110 , a display 120 , a speaker 130 and one or more input devices.
  • the display 120 is any of a number of devices known to those skilled in the art for displaying images responsive to output signals from the computer 110 .
  • Such devices include but are not limited to cathode ray tubes (CRTs), liquid crystal displays (LCDs), plasma screens and the like.
  • the speaker 130 is any of a number of auditory signal generating devices or auditory signal output devices known to those skilled in the art that generate an auditory signal or auditory output responsive to output signals from the computer 110 . It should be recognized that the signals being outputted from the computer 110 to the speaker 130 can originate from any of a number of devices, including PCI sound boards or cards located within the computer that are operably coupled to the microprocessor 112 and the speaker 130 . Also, although an external speaker 130 is illustrated, this shall not be construed as limiting the invention, as the speaker 130 can be disposed internally within the housing or case of the computer 110 as well as being a speaker or a system of speakers external to the computer.
  • Although a computer system 100 can include the speaker 130 , the speaker may not be essential to practicing the present invention (i.e., where an auditory output is not needed).
  • the speaker's function can be disabled, for example, if other auditory devices are being used (e.g., headset 140 b with one or more earphones provided) or the output by the speaker may be limited to auditory output during the conduct of the game (e.g., sounds of battle, background sounds associated with the displayed game environs, etc.).
  • the one or more input devices include any of a number of devices known to those skilled in the art which can be used to provide input signals to the computer for control of applications programs and other programs, such as the operating system, being executed within the computer.
  • one input device 140 a preferably comprises a switch, a slide, a mouse, a track ball, a glide point or a joystick or other such device (e.g., a keyboard having an integrally mounted glide point or mouse) by which a user such as game player can input control signals other than by means of a keyboard.
  • Another input device comprises any one of a number of audio input devices known to those skilled in the art and in an illustrative embodiment, the input device comprises a headset 140 b having a microphone for use by the game player.
  • the audio input device or headset 140 b is operably coupled to the computer 110 , more specifically the speech recognition functionalities being executed on the microprocessor 112 , so that spoken words, sounds, phrases, terms or phonemes of the game player can be inputted into the game program. It should be recognized that the number and type of input devices is not particularly limited to that shown and described herein.
  • the computer 110 typically includes a central processing unit 112 including one or more micro-processors such as those manufactured by Intel or AMD, random access memory (RAM) 113 , mechanisms and structures for performing I/O operations (not shown), a storage medium such as a magnetic hard disk drive(s) 114 , a device 116 for reading from and/or writing to removable computer readable media and an operating system for execution on the central processing unit.
  • the hard disk drive 114 is provided for purposes of booting and storing the operating system and the applications or systems that are to be executed on the computer, and for paging and swapping between the hard disk and the RAM and the like.
  • the game applications program 152 according to the present invention including the programming instructions and the data containing the text, auditory and visual informational data associated therewith and the speech recognition program 154 are stored in a removable computer readable medium 150 such as a CD or DVD type of media that is inserted into a device 116 for reading and/or writing to the removable computer readable media.
  • the reading/writing device 116 is any of a number of devices known to those skilled in the art for reading from and/or writing to the particular medium on which the applications program is stored.
  • In an alternative embodiment, the gaming applications program, including the portion 152 containing the programming instructions and the associated text, auditory and visual informational data, and the speech recognition program or module 154 , is stored on the hard drive 114 .
  • the game player or user accesses the gaming program directly from the hard drive 114 , for example, by entering appropriate commands via the keyboard or by other appropriate action of the input device (e.g., positioning the cursor over an icon on the desktop and clicking a mouse button).
  • Referring to FIG. 1B , there is shown a network based computer system 200 according to the present invention that includes a server 210 , an external storage device 260 and a network infrastructure 310 that operably couples a plurality of client computer systems 100 ′ to the server 210 .
  • the client computer systems 100 ′ are typically configured like the computer system of FIG. 1A except that in use the game player or user would access the game applications program and related data for a given game from the server 210 and upload such information temporarily onto the client computer system.
  • the network infrastructure 310 can comprise, for example, a local area network (LAN), a wide area network (WAN) or the Internet where access to the Internet is typically provided by an ISP.
  • the server 210 is any of a number of servers known to those skilled in the art that are intended to be operably connected to a network so as to operably link a plurality or more of client computers via the network to the server and thus also to the external storage device 260 .
  • the server 210 typically includes a central processing unit including one or more microprocessors such as those manufactured by Intel or AMD, random access memory (RAM), mechanisms and structures for performing I/O operations, a storage medium such as a magnetic hard disk drive(s), and an operating system for execution on the central processing unit.
  • the hard disk drive of the server typically is not used for storing data and the like utilized by client applications being executed on the client computers. Rather the hard disk drive(s) of the server 210 are typically provided for purposes of booting and storing the operating system, other applications or systems that are to be executed on the server, paging and swapping between the hard disk and the RAM.
  • Data and the like being used in connection with the execution of client applications, such as the game applications program and the information and/or data related thereto, on client computers is stored in the external storage device 260 that is operably interconnected to the server 210 using any of a number of techniques and related devices or cabling known to those skilled in the art.
  • such an interconnection is implemented using a small computer systems interface (SCSI) technique(s) or via a fiber optic cable or other high-speed type of interconnection.
  • the external storage device 260 comprises a disk assembly typically made up of one or more hard disks that are configured and arranged so the external storage medium functionally appears to the servers 210 as a single hard disk.
  • Such an external storage medium is further configured and arranged to implement any of a number of storage schemes, such as mirroring data on a duplicate disk (RAID level 1) or providing a mechanism by which data on a disk that has become lost or inaccessible can be reconstructed from the other disks comprising the storage medium (RAID level 5).
  • each of the client computers 100 ′ includes one or more I/O ports that are operably connected to the microprocessor 112 and which are configured and arranged for the transfer of the data and program instructions between and amongst the client computers and the server 210 using any of a number of non-wireless techniques or wireless techniques known to those skilled in the art.
  • non-wireless techniques include for example any of a number of network infrastructures 300 known to those skilled in the art such as Ethernet, token ring, FDDI, ATM, Sonet, X.25 and Broadband.
  • the I/O ports of the client computers are configured so as to include a transceiver as is known to those skilled in the art for wireless network transmission systems.
  • a wireless network device is operably coupled to the computer (e.g., via a USB I/O port), which device includes the transceiver.
  • An exemplary wireless network technique includes those systems embodying a transceiver or transmitter complying with IEEE-802.11, sometimes referred to as a Bluetooth chip.
  • the transceiver operably coupled to the client computer is configured and arranged so as to establish a communications link between the client computer and a receiver or transceiver remote from the location of the client computer that is in turn operably coupled to the server 210 .
  • the corresponding remotely located transceiver/receiver would be located within about 100 meters of the location of the client computer and operate at a frequency of about 2.4 GHz.
  • the server 210 in turn could be coupled to the remotely located transceiver/receiver using non-wireless or wireless techniques.
  • the game machine main unit 402 includes the circuit boards, microprocessors, and circuitry for the game processing including the processing of the input signals from game players and outputting signals to the display 404 .
  • the main unit 402 includes one or more input ports or connectors 410 which are operably coupled to a mechanical input type of input device such as a pad 420 .
  • the main unit includes another input connector or port 412 that is operably coupled to an audio input device, such as a headset 140 b (see also FIG. 1A ).
  • the main unit 402 also includes a video output port or connector and an audio output port or connector that operably couple the main unit 402 to the display 404 .
  • the display can be any of a number of audio-visual displays known to those skilled in the art including but not limited to a television receiver or monitor.
  • the main unit 402 also can include one or more different types of drives, including but not limited to cartridge drives, CD-ROM drives and DVD-ROM drives for respectively operably connecting read only cartridges or read only CDs or DVDs to the circuit boards, processors and the like of the main unit 402 that execute the programs and the like stored on the storage medium.
  • FIGS. 2 A-D there are shown various high-level flow diagrams illustrating the gaming applications program and functionalities thereof that embody various aspects and embodiments of the gaming methodology of the present invention.
  • FIGS. 1A or 1 B for features of the computer systems embodying the software/applications programs and methodology of the present invention not otherwise shown on FIGS. 2 A-D.
  • the game applications program or game software of the present invention to be accessed via a removable computer readable medium, from a remote server connected to a client computer system, or from dedicated storage of a gaming machine or a computer system.
  • the following discussion is limited to a description of the software, flow of information and methodology in the case where the software is already stored on a hard drive of a computer system.
  • FIG. 2A there is shown a high level flow diagram generally illustrating the flow of the game being executed on a computer system 100 .
  • the player starts the game using any of a number of techniques appropriate for the operating system and game applications program, Step 500 .
  • the game player would double click the desktop icon to load and start the program.
  • the game player would load the cartridge/CD/DVD after starting the machine so that the game program is loaded and started.
  • the start process can further include other functions such as logging onto the program as an identified game player.
  • the game program can be configured and arranged using any of a number of known techniques to store selected information or particularities of the player so that they can be automatically reloaded when the player returns to the game at a later time.
  • such techniques can allow a game player to stop the game but without exiting the game so the game player can re-start the game where he/she left off.
  • the game applications program of the present invention will embody or use any of a number of speech recognition programs as is known to those skilled in the art in combination with the game specific applications program of the present invention.
  • speech recognition programs are configured and arranged so as to convert the words or sounds spoken through an audio input device such as the microphone of the headset 140 b into digital signals that are inputted into the game applications program.
  • the game applications program and methodology to be configured and arranged so as to include the instructions, criteria and data so as to be capable of implementing or executing a speech recognition function.
  • the gaming methodology can further embody an initial speech recognition learning protocol that would be implemented prior to starting the gaming process.
  • the method can further include determining if such a speech recognition learning protocol should be initiated or conducted, step 502 . It should be recognized, however, that if the game designer does not include an initial speech recognition learning protocol (NO, step 502 ), then the illustrated process essentially proceeds from step 500 directly to step 506 . It also is contemplated that in such a case, steps 502 and 504 could be excluded from the flow of the game or gaming method of the present invention. The specifics of the learning protocol of steps 502 , 504 are discussed further below.
  • the methodology can allow a player to take various actions, such as initiating a dialog process with non-player characters ( FIG. 2B ) or casting a spell ( FIG. 2C ), including role-playing by the game player.
  • the methodology also contemplates a process for controlling speech learning operations so any such learning is integrated into the conduct of the game or game play.
  • the speech recognition module is utilized to simulate natural language interactions or dialog with simulated characters, or non-player characters in the game. This is in contrast with that of conventional systems where interactions with non-player characters have been limited to simple interactions such as shooting at the character or scrolling through a list of responses provided by the game designer that is linked to some text or voice-over that is played by the character in the game.
  • the dialog mode step 600 is initiated. Thereafter, and according to the methodology of the present invention, there is displayed a statement or query by the non-player character, step 602 . Such a display also can further include a listing of a series of responses for the game player. There is shown in FIG. 3 an illustrative screen shot that illustrates a non-player character query and a list of possible responses.
  • the non-player character also plays a voice-over or audio message corresponding to the statement or query via the speaker 130 or other auditory output device such as the headphones of the headset 140 b.
  • the game player speaks a keyword, phrase or term into the microphone of the headset 140 b that corresponds to the response the game player has selected, step 604 .
  • the player speaks the first word of the desired response and thus selects the dialog line.
  • the selected response provides all of the information from the response to the non-player character thus allowing more dialog or action corresponding to the selected dialog to ensue.
  • the game designer specifies a word, phrase or term in each response listing as the keyword, key phrase or key term.
  • the keyword, phrase or term is appropriately highlighted on the display (e.g., in color, bold, italics) to facilitate the game player's identification.
  • the foregoing also creates a faster, easier and more intuitive game play user interface (UI) that also makes the gaming experience more immersive. Allowing the designer to select the keyword is particularly advantageous because it provides the designer with the opportunity to create a more natural-sounding or naturally phrased dialog line or response. The foregoing also is particularly advantageous as compared to conventional game techniques where the player needs to scroll through the list of responses or find a button on the keyboard or controller that corresponds to one of the responses.
  • After the game player speaks the keyword, phrase or term, the speech recognizer converts the spoken input to an output signal corresponding to a text string output, step 606 . The output is then evaluated to determine the keyword match, step 608 . In performing this evaluation reference is made to a stored set of keywords, phrases and/or terms, step 610 . If there is no keyword match (NO, step 608 ), the process returns to having the player again provide a spoken response, step 604 . In more particular embodiments, a message is outputted to the game player requesting the player to again provide a response. This message can be another pop-up type of message on the display and/or an audio message apparently coming from the non-player character.
  • the non-player character could indicate that the player's reply was not correct or unclear (e.g., output another query such as "What did you say?") and essentially request the game player to reply again to the query. If there is a matching of the keyword (YES, step 608 ), then the process exits back to the game, step 612 , and the game continues based on the selected response.
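The keyword-match dialog loop of steps 604 through 612 can be summarized with a short sketch. The response texts and keywords below are invented for illustration and are not taken from the patent:

```python
# Keyword -> full dialog line, as a game designer might specify (step 610).
RESPONSES = {
    "sword": "Where is the Sword of Kings hidden?",
    "defeated": "I have defeated the Giant in the hills.",
    "farewell": "Farewell, old friend.",
}

def match_keyword(recognized_text, keywords):
    """Return the first keyword found in the recognizer's text output,
    or None when no keyword matches (step 608)."""
    words = recognized_text.lower().split()
    for keyword in keywords:
        if keyword in words:
            return keyword
    return None

def dialog_turn(recognized_text):
    """Map a spoken reply to a selected response; on failure the NPC
    re-prompts the player (NO branch of step 608)."""
    key = match_keyword(recognized_text, RESPONSES)
    if key is None:
        return None, "What did you say?"
    return key, RESPONSES[key]

key, line = dialog_turn("uh the sword I think")
assert key == "sword"
```

Note that the match is on the keyword alone, so the player need not recite the whole displayed line, which is what makes the interface fast and forgiving in practice.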
  • the game player also can initiate an open dialog with the non-player character, either when initiating the dialog or when responding to the non-player character's words or actions.
  • the player can say anything they desire in their natural language, such as "Where is the Sword?", or the player may inform the non-player character of some event, such as "I have defeated the Giant."
  • the open dialog methodology of the present invention further includes using the game's environmental context information and stored character history to enhance the Speech Recognition System's success rate, and to map nouns, concepts, and other language elements to the appropriate game event, action, or object.
  • the recognizer can have trouble distinguishing between words such as "bit" and "pit", but in a game it is known whether there is a pit in the room because of the game's environmental information, or whether the playing character was recently bitten by a dog/person/monster/etc. Thus, the recognizer can take advantage of this environmental context information to recognize the correct words more reliably.
  • the Natural Language Processor in other non-game applications can have trouble mapping words to the needed concepts. But in a game, the successfully matched words can be taken and matched to the characters' histories and the current context information. From this, the player's intent, desired action, question, or statement can be inferred from the information of what's happened and what's happening.
  • There is shown in FIG. 4 a schematic view that illustrates these interactions in the open dialog process. Particularly preferable aspects of this dialog process are the back and forth interactions between the Speech Recognizer, Natural Language Processor, Environmental Context Information System, and the Character/Game History Module.
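One simple way to realize the "bit"/"pit" disambiguation described above is to rescore the recognizer's competing hypotheses against the game's environmental context. The scoring scheme and all names below are illustrative assumptions, not the patent's implementation:

```python
def rerank(hypotheses, context):
    """Pick the hypothesis whose words are best supported by the game's
    environmental context information (objects present, recent events)."""
    def score(hyp):
        base = hyp["confidence"]
        # Each word that matches a known game object or event earns a bonus.
        bonus = sum(0.2 for w in hyp["text"].split() if w in context)
        return base + bonus
    return max(hypotheses, key=score)

hypotheses = [
    {"text": "there is a bit in the room", "confidence": 0.51},
    {"text": "there is a pit in the room", "confidence": 0.49},
]
# The current room actually contains a pit, so context tips the choice
# even though the acoustic confidence slightly favors "bit".
context = {"pit", "torch", "skeleton"}
best = rerank(hypotheses, context)
assert best["text"] == "there is a pit in the room"
```

The same rescoring idea extends to the character history module: words matching recent events ("bitten", "Giant") would receive a similar bonus.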
  • FIG. 2C there is shown a flow diagram illustrating the process of the game player casting a spell in the conduct of the game.
  • the game player role-plays the player's character in the game.
  • the player would issue a command that directs the player's character to create and cast the spell.
  • this methodology is easily adaptable for other types of games. For example, if the game player was a captain of a submarine, the game player could issue an order directing non-player characters to take a specific action. Such an order for example could be the reloading of certain kinds of weapons in the torpedo tubes, changing headings and the like.
  • the methodology is adaptable so the non-player character responds or reacts in the same way as if it were a real world situation. For example, if the captain of a submarine gave a command to change course, the appropriate non-player character (e.g., helmsman) would repeat the command in the same way as it would be in a real world situation, thereby making the level of play more immersive and real world like.
  • the game player while role-playing their character recites the spell-language of the spell to be cast or says the spell words, step 700 .
  • the game player will utter the actual spoken language to cast Magic Spells (and likewise for battle cries, poems, songs) in a game setting.
  • the game player is not issuing a command to the character, but instead is actually Role Playing the playing character by speaking as the playing character.
  • the spoken word representative of the spell is inputted to the speech recognizer, step 700 .
  • the speech recognizer converts the spoken input to an output signal corresponding to a text string output, step 702 .
  • the text output string from the speech recognizer is then inputted to a spell recognizer, step 710 .
  • This text output string is processed through a spell word matching module to determine and establish a match between the spoken words and spell words, step 712 . If the words do not match those spell words for the game, then no further action is taken. If the words do match those applicable for the game, then spell words are inputted into a spell grammar integrator module.
  • the spell grammar integrator module takes the sequences of spell words and decodes the sequence of spell words into a game action, step 714 . Thereafter the process returns or exits to the game, step 720 and thereafter the game action per the recognized spell is executed in the game.
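The spell-recognition pipeline of steps 700 through 720 — recognizer text filtered through the spell word matching module, then decoded by the spell grammar integrator into a game action — can be sketched as follows. The spell words, grammar, and actions are invented for illustration:

```python
# Valid spell vocabulary for the game (step 712 reference set).
SPELL_WORDS = {"ignis", "volo", "aqua", "magnus"}

# Spell grammar: a sequence of spell words decodes to one game action.
SPELL_GRAMMAR = {
    ("ignis",): "small_flame",
    ("magnus", "ignis"): "fireball",
    ("aqua", "volo"): "water_jet",
}

def recognize_spell(text):
    """Decode the speech recognizer's text output into a game action,
    or None when the utterance is not a spell."""
    # Step 712: keep only valid spell words from the recognized text.
    words = tuple(w for w in text.lower().split() if w in SPELL_WORDS)
    if not words:
        return None              # no spell words; take no further action
    # Step 714: the grammar integrator decodes the sequence.
    return SPELL_GRAMMAR.get(words)

assert recognize_spell("magnus ignis") == "fireball"
assert recognize_spell("hello there") is None
```

Because decoding keys on the ordered sequence, "magnus ignis" and "ignis" produce different actions, which is what lets spoken spell-language map directly onto distinct game events.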
  • the language spoken by the game player can be directly turned into game events and commands for casting spells and causing other special events to happen.
  • the methodology of the present invention contemplates development of game language such as the language used for casting spells to be created using what is sometimes referred to as phoneme neutral language(s).
  • Computer software, movies, and written and aural stories are often sold in many countries. Since different languages may be spoken and read in the different markets, with conventional techniques the products being sold need to go through a process called Localization.
  • the methodology of the present invention embodies a technique of constructing a language for use in the product that is constructed entirely of phonemes found in all of the languages that are targets for Localization of the product. That is, the languages (e.g., for spells in a game concerned with magic, as one example) will use only phonemes found in Chinese, English, and German if the game is to be sold in China, the US, and Germany. This makes the speech recognition during play easier by using only sounds that all users of the product can naturally speak because all of the effort can be focused on one set of words. Also, a common language spoken by all users of the game (and the corresponding marketing potential) enables cross-language playing and building a more unified consumer base. There is shown in FIG. 5 an illustration of a language neutral phoneme set.
  • when the spell words being used are developed using a phoneme neutral language, the spell recognizer would further include referring to the spell words of the language neutral phoneme set of spell words, step 730 . From this the spell recognizer would determine as herein described what game action to take based on the resultant decoded sequence of spell words.
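A designer's tooling for such a language might validate candidate spell words against the shared phoneme set before they are admitted into the game. The sketch below uses letters as a stand-in for phonemes; the inventory is invented, and a real system would segment words into actual phonemes:

```python
# Hypothetical shared phoneme inventory, i.e. sounds present in all
# target Localization languages (cf. the set illustrated in FIG. 5).
NEUTRAL_PHONEMES = {"k", "a", "m", "i", "n", "o", "s", "t", "u"}

def is_phoneme_neutral(word, phonemes=NEUTRAL_PHONEMES):
    """True when the word uses only units from the shared phoneme set,
    so speakers of every target language can pronounce it naturally."""
    return all(p in phonemes for p in word)

assert is_phoneme_neutral("kamina")
assert not is_phoneme_neutral("thorn")  # "h", "r" absent from the set
```

Restricting the vocabulary this way is also what simplifies recognition: the acoustic models only ever need to discriminate within one small, universally pronounceable sound set.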
  • speech recognition programs typically include data files for converting the spoken words or sounds to digital signals for processing, so that pre-use training of the speech recognition program for a game player is not a necessary element to the use of the speech recognition program or function.
  • pre-use training may be advantageous because it could improve upon the capability of the speech recognition program to perform the identification and conversion process.
  • the speech recognition programs can be configured and arranged so as to implement a pre-use training or learning function or protocol, where a user would speak some selected phrases that are read and learned by the speech recognition program and the results of this learning process are stored in an individual specific data file.
  • Step 502 if it is determined that the learning protocol should be initiated (YES, Step 502 ) then the game applications program/speech recognition module initiates the learn process, Step 504 .
  • the gaming process or game is started, step 506 , and the player carries out whatever roles, functions, objectives and the like are involved as part of the conduct of the game.
  • such determining if the learning protocol should be initiated can further include a determination if this is the first time the player is playing the game.
  • the results of an initial speech recognition learning process are typically stored in an individual specific data file. As such, once a game player goes through or performs the initial training or learning process, it is not necessary to repeat the process. Thus, if this is not the first time the game player has played the game, then the speech recognition learn process need not be performed (NO, step 502 ) and the process would continue the playing of the game, Step 506 .
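The first-time-player check of steps 502 through 506 amounts to looking for the individual specific data file before deciding whether to train. A minimal sketch, with an invented file layout and a stand-in training function:

```python
import json
import os
import tempfile

def load_or_train(player_id, profile_dir, train_fn):
    """Return the player's speech profile, running the learning
    protocol (step 504) only when no stored profile exists."""
    path = os.path.join(profile_dir, f"{player_id}.json")
    if os.path.exists(path):          # NO at step 502: skip training
        with open(path) as f:
            return json.load(f)
    profile = train_fn()              # YES at step 502: run step 504
    with open(path, "w") as f:
        json.dump(profile, f)
    return profile

profile_dir = tempfile.mkdtemp()
trained = []
train = lambda: trained.append(1) or {"model": "v1"}

# First session: training runs and the profile is saved.
profile = load_or_train("johnny", profile_dir, train)
# Returning player: the stored profile is reloaded; training is skipped.
profile = load_or_train("johnny", profile_dir, train)
assert len(trained) == 1
```

Storing the profile per player id is also what supports the earlier point about reloading a player's particularities when they return to the game later.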
  • the initial learning process (Step 504 ) is created so as to make this process a part of the game playing experience.
  • This is advantageous as the game player is not aware that he/she is training or teaching the program to better recognize the speaker's voice but believes that this is just another element of the gaming experience.
  • the player will be prompted and asked to read a document, for example an oath, or to repeat what a non-player character says.
  • the player preferably will think that this learning process is part of the gameplay experience. For example, if the player graduates from a police academy, the player can be asked to recite their oath of duty (e.g., I, Johnny Goodguy, do solemnly swear to uphold the law . . .).
  • the computer or game machine would use the recitation of this oath to improve the recognition and conversion process for a given player.
  • the player will be asked to recite a set of words that are known ahead of time, but the words being recited will be made a part of the game playing experience.
  • the methodology of the present invention further includes speech recognition learning or training during the conduct of the game, preferably at times where the learning experience can be integrated so as to appear to the game player to be part of the gaming experience.
  • the game designer can include an in-game learning process in the flow of the game, by prompting the player to recite or repeat a new magical word, a specific phrase to take an action in the game (e.g., to open a door) or to ask the player to speak a pre-established phrase, word or term that arises in the conduct of the game, Step 800 .
  • a wizard or other player may ask a player to repeat a new magical word the player was just taught by the wizard or other player.
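Because the game script knows the exact phrase the player is about to speak (an oath, a new magical word), each such utterance doubles as a supervised adaptation sample. The recognizer interface below is hypothetical, sketching only how the game would hand the expected transcript and captured audio to the learning process of step 800:

```python
class AdaptiveRecognizer:
    """Stand-in for a speech recognizer supporting speaker adaptation."""

    def __init__(self):
        self.samples = []  # (expected_phrase, audio) adaptation pairs

    def adapt(self, expected_phrase, audio):
        """Record a supervised sample; a real recognizer would use it to
        update its acoustic model for this speaker."""
        self.samples.append((expected_phrase, audio))

recognizer = AdaptiveRecognizer()

# The wizard teaches the player a new magical word; since the game knows
# the expected transcript, the recitation is fed in as training data
# without the player perceiving it as a training step.
recognizer.adapt("kamino", b"<captured audio bytes>")
assert len(recognizer.samples) == 1
```

Collecting samples this way is what lets the training remain invisible: from the player's perspective it is simply a scripted moment of gameplay.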

Abstract

Featured are game applications programs, software, systems and methods that provide an interactive gaming experience for one or more game players by using speech recognition techniques in combination with interactive techniques and functionalities, thereby creating an interface that allows spoken words and/or phrases of a game player to initiate any one of a number of functionalities, actions or informational outputs in connection with the conduct of the game. Such methods, programs, and systems can further include creating an interface that allows a game player to initiate a function or action to be performed by the game player's character, to initiate a dialog or interaction between a player's character and a non-player character appearing in a game or to request information from a non-player character or about a phase of the game in general as part of the play of the game.

Description

  • This application claims the benefit of U.S. Provisional Application Ser. No. 60/588,703 filed Jul. 15, 2004, the teachings of which are incorporated herein by reference in their entirety.
  • FIELD OF INVENTION
  • The present invention relates to audio-visual video games implemented for dedicated gaming systems, computer systems or via the Internet, and more particularly to such audio-visual games that embody speech recognition methods and devices for use in the conduct of the game and more specifically such audio-visual games embodying interactive techniques and methods that allow one or more players to interact or actively participate in the play of the game, including simulating natural language interactions with non-player characters of the game, where such interactive participation is accomplished using speech recognition methods and devices.
  • BACKGROUND OF THE INVENTION
  • User Interface (UI) control systems for interactive computer and video games typically consist of a keyboard and/or other hardware controller (e.g., joystick, mouse), which the game player manipulates by hand to control characters and game action. Traditional hardware-controller UI limits immersive game-play, not least because it is an imperfect replacement for the most common and intuitive means humans use to interact with one another in the real world: spoken language.
  • Game developers recently have applied Speech Recognition technologies to allow simple, one-way “Command and Control” of the game via the player's voice. Speech recognition for Command & Control links a spoken word or short phrase to a single “hotkey” or other existing hardware-controller (e.g., keyboard, joystick) game command, for example, “run,” or “shoot”. This application of speech recognition, however, still falls short of immersive game play, because it is one-way, from player to game character. An essential aspect of the immersive game experience, dialog between the human player and the computer game characters, has not been addressed, either by traditional UI via hardware-controller or by Command and Control speech recognition. Examples of such speech recognition command and control type of systems and methods are found in U.S. Pat. No. 6,529,875 and U.S. Pat. No. 5,893,064 and U.S. Pat. No. 6,456,977.
  • There also is described in U.S. Pat. No. 5,393,073, talking video games that provide a simulated voice dialog between human players and animated characters on a TV screen. In that system, as the game is played an animated character talks to the human game player and waits for a response. Each game player has a hand-held controller that displays two or more phrases or sentences and the player responds by pushing a button on the hand-held controller next to the selected phrase. The animated character then responds to the selected phrase as if the game player had spoken.
  • It thus would be desirable to provide new gaming programs, systems and gaming methods for game play that employ speech recognition techniques and methods that provide a mechanism by which one or more players of a given game can interact with or become immersed in and control the play of the game. It would be particularly desirable to provide such a gaming program and methods that create or establish an interface between the game player, in particular the game player's voice, and the method and/or program implementing the game functionalities. It also would be particularly desirable to provide such gaming programs and methods that would allow one or more game players to use his/her voice to engage in dialog with non-player characters quickly, naturally, and intuitively, and thereby further the progress of the game.
  • It also would be desirable to provide such gaming programs, systems and gaming methods, that allow the game player to become an interactive participant in the game, including but not limited to directly using his/her voice to role-play a simulated character in the game, and thus be better immersed in the gameplay in comparison to prior art techniques and programs, in which spoken words are limited to phrases or commands that in effect implement a command that corresponds to the functionality of a key or button of a remote control device. It also would be desirable to provide such gaming programs, systems and gaming methods in which the speech recognition training is integrated into the play of the game. Such gaming programs, systems and methods preferably create an immersive gaming experience for the human player as well as improving his/her control of the game.
  • SUMMARY OF THE INVENTION
  • The present invention features game applications, programs, software, systems and methods that are such as to provide an interactive gaming experience for one or more game players by using speech recognition techniques in combination with interactive techniques and functionalities so as to create an interface that allows the spoken words and/or spoken phrases of a game player to initiate any one of a number of functionalities, actions or informational outputs in connection with the game. Such methods, programs, and systems can further include creating an interface that allows a game player to initiate a function or action to be performed by the game player's character, to initiate a dialog or interaction between a player's character and a non-player game character appearing in a game or to request information from a non-player character or information about a phase of the game in general as part of the play of the game. In more particular aspects, such methods, programs and systems further include creating an interface that allows the game player to directly interact with a non-player character as well as to directly control the play of the game (e.g., such as directly casting a spell in a game involving magic).
  • In further aspects or embodiments, the process of dialoging with a non-player character includes displaying one or more text messages of various possible responses in response to the interactive input of the game player; having the game player provide a voice input to select a specific one of the one or more text messages and creating a game program output that corresponds or relates to the game player's response selected via the player's voice or speech input to the game. In a more particular embodiment, the non-player character provides a voice output of the one or more text messages being displayed and/or of the response being outputted. In a further embodiment, the process of dialoging includes displaying a text message, that can be in the form of a statement by or query from the non-player character, along with the one or more text messages and/or having the non-player character provide a voice output of the message. Such games are those implemented by any of a number of formats or mediums known to those skilled in the art, including but not limited to computer games installed and executed on a conventional personal computer, games installed and executed on dedicated gaming machines or apparatuses (e.g., console, video games, or hand held devices such as cellphones or handheld gaming devices) or games that are installed and executed at a remotely located computer system or apparatus where communications between the game player and the remotely located computer system or apparatus is effected over wide area networks, local area networks and/or the Internet (e.g., via Internet Service Provider—ISP).
  • In further embodiments, the dialoging process includes identifying a specific phrase, term or word of each of the text messages being displayed, which phrase, term or word is a keyword for selecting a given text message and thus the corresponding response to be outputted. In one particular embodiment, the phrase, term or word is located at the beginning of the displayed text message (e.g., the first word of the text message). In another particular embodiment, the initiating phrase, term or word is highlighted in the text message being displayed but is not necessarily located at the beginning of the text message. Thus, the designer of the game can create text messages that are more naturally phrased while still providing a mechanism by which the game player can initiate a dialog with the non-player character using speech or the spoken natural language of the game player. Such naturally phrased text messages also are advantageous in that the game player can quickly, readily and naturally initiate and carry on such dialog with the non-player character.
  • In yet further embodiments, the dialoging process is created so as to allow an open dialog between the game player acting as the playing character and the non-player character, such open dialog being initiated by the game player at any time during the game or in response to the non-player character's actions or words. Preferably, the open dialog process is created so as to allow the game player to provide a request (e.g., requesting location or other information) using natural native language to the non-player character or gaming program. For example, the game player as the playing character can ask the non-player character where an artifact (e.g., amulet, scroll, weapon) is located using natural language as they would to another game player or person (e.g., Where is the sword?). The dialog process would further include developing a response or action responsive to the dialog initiated by the game player.
  • In more particular embodiments, such an open dialog process can further include reviewing one or more of game environmental context information and character history to arrive at the response or responsive action. In this way, the dialoging process can map nouns, concepts and other language elements to the appropriate game event, action or object (i.e., game context) so as to thereby significantly enhance the ability of the speech recognition system to more accurately derive the player's intent, desired action, question or statement because reference also is being made to what has happened or is happening in the play of the game.
  • Also featured are computer systems including gaming application programs embodying the methods and techniques of the present invention for execution thereon and storage mediums on which are stored such applications programs.
  • Other aspects and embodiments of the invention are discussed below.
  • DEFINITIONS
  • The instant invention is most clearly understood with reference to the following definitions:
  • Player character or player game character shall be understood to be any one of a number of simulated game characters in human, animal or inanimate form, for example, or a device under the control of the player's character, that appears during the conduct of a game and which is/are under the control of the gaming software or applications program and also is under the control of the player of the game. In other words, the game player can provide an input to the game via a device (e.g., joystick) or voice that causes the player character to initiate an action or output a reply within the parameters allowed by the game. The gaming software and applications program in response to such input causes the player character to initiate the action or reply. For example, the joystick can be used to cause the playing character (e.g., person or automobile) to move in a given direction and the gaming software causes the character to move in response to the input.
  • A non-player character or a non-player game character (hereinafter “NPC”) shall be understood to be any one of a number of simulated game characters in human, animal or inanimate form, for example, that appear during the conduct of a game and which are under the sole control of the gaming software or applications program and not under the specific control of the game player. The gaming software or applications program can cause the non-player character to initiate an action or reply resulting from an action or input of a game player. For example, if the non-playing and player characters are battling using swords, the gaming software can cause the non-player character to parry and block the sword actions of the player character as well as create offensive actions.
• A computer readable medium shall be understood to mean any article of manufacture that contains data that can be read by a computer or a carrier wave signal carrying data that can be read by a computer. Such computer readable media includes but is not limited to magnetic media, such as a floppy disk, a flexible disk, a hard disk, reel-to-reel tape, cartridge tape, cassette tape or cards; flash memory devices including NVRAM; optical media such as CD-ROM, DVD and writeable compact disc; magneto-optical media in disc, tape or card form; paper media, such as punched cards and paper tape; or a carrier wave signal received through a network, wireless network or modem, including radio-frequency signals and infrared signals.
  • The term voice when used in connection with the player character as an input shall be understood to include or mean speech and the words, phrases, phonemes, or terms spoken by the game player as well as in particular the words, phrases, phonemes, or terms spoken by the game player in the game player's natural language.
  • BRIEF DESCRIPTION OF THE DRAWING
• For a fuller understanding of the nature and desired objects of the present invention, reference is made to the following detailed description taken in conjunction with the accompanying drawing figures wherein like reference characters denote corresponding parts throughout the several views and wherein:
  • FIG. 1A is a block diagram of an exemplary computer system embodying the methodology and software of the present invention;
  • FIG. 1B is a block diagram of another exemplary computer system being implemented as a computer network;
  • FIG. 1C is an external view of an exemplary gaming system using a video gaming machine embodying the methodology and software of the present invention;
  • FIG. 2A is a high level flow diagram generally illustrating the overall process of a game methodology according to the present invention;
  • FIG. 2B is a flow diagram illustrating the dialog process between the player and a non-player character (NPC);
  • FIG. 2C is a flow diagram illustrating the process of the game player casting a spell;
  • FIG. 2D is a flow diagram illustrating the in-game learning process;
  • FIG. 3 is an illustrative screen shot illustrating an NPC statement/query and a response list;
  • FIG. 4 is a schematic view illustrating interactions in the open dialog process; and
  • FIG. 5 is an illustration of a language neutral phoneme set.
  • DESCRIPTION OF THE PREFERRED EMBODIMENT
  • Referring now to the various figures of the drawing, wherein like reference characters refer to like parts, there is shown in FIG. 1A a block diagram of a computer system 100 embodying and implementing the gaming programs and gaming methodology of the present invention. The following discussion describes the structure of such a computer system 100 but the discussion of the applications program that embodies the gaming methodology of the present invention is described elsewhere herein.
• Such a computer system 100 includes a computer 110, a display 120, a speaker 130 and one or more input devices. The display 120 is any of a number of devices known to those skilled in the art for displaying images responsive to output signals from the computer 110. Such devices include but are not limited to cathode ray tubes (CRT), liquid crystal displays (LCDs), plasma screens and the like. Although a simplified block diagram is illustrated, such illustration shall not be construed as limiting the present invention to the illustrated embodiment. It should be recognized that the signals being outputted from the computer can originate from any of a number of devices including PCI or AGP video boards or cards mounted within the housing of the computer 110 that are operably coupled to the microprocessor 112 and the display 120.
• The speaker 130 is any of a number of auditory signal generating devices or auditory signal output devices known to those skilled in the art that generate an auditory signal or auditory output responsive to output signals from the computer 110. It should be recognized that the signals being outputted from the computer 110 to the speaker 130 can originate from any of a number of devices including PCI sound boards or cards located within the computer that are operably coupled to the microprocessor 112 and the speaker 130. Also, and although an external speaker 130 is illustrated, this shall not be construed as limiting the invention as the speaker 130 can be disposed internally within the housing or case of the computer 110 as well as being a speaker or a system of speakers external to the computer.
  • It should be recognized that although a computer system 100 can include the speaker 130 this may not be essential to practicing the present invention (i.e., an auditory output is not needed). Thus, the speaker's function can be disabled, for example, if other auditory devices are being used (e.g., headset 140 b with one or more earphones provided) or the output by the speaker may be limited to auditory output during the conduct of the game (e.g., sounds of battle, background sounds associated with the displayed game environs, etc.).
• The one or more input devices include any of a number of devices known to those skilled in the art which can be used to provide input signals to the computer for control of applications programs and other programs such as the operating system being executed within the computer. In illustrative embodiments, one input device 140 a preferably comprises a switch, a slide, a mouse, a track ball, a glide point or a joystick or other such device (e.g., a keyboard having an integrally mounted glide point or mouse) by which a user such as a game player can input control signals other than by means of a keyboard. Another input device comprises any one of a number of audio input devices known to those skilled in the art and in an illustrative embodiment, the input device comprises a headset 140 b having a microphone for use by the game player. The audio input device or headset 140 b is operably coupled to the computer 110, more specifically the speech recognition functionalities being executed on the microprocessor 112, so that spoken words, sounds, phrases, terms or phonemes of the game player can be inputted into the game program. It should be recognized that the number and type of input devices is not particularly limited to that shown and described herein.
  • The computer 110 typically includes a central processing unit 112 including one or more micro-processors such as those manufactured by Intel or AMD, random access memory (RAM) 113, mechanisms and structures for performing I/O operations (not shown), a storage medium such as a magnetic hard disk drive(s) 114, a device 116 for reading from and/or writing to removable computer readable media and an operating system for execution on the central processing unit.
• According to one embodiment, the hard disk drive 114 is provided for purposes of booting and storing the operating system, applications or systems that are to be executed on the computer, paging and swapping between the hard disk and the RAM and the like. In this embodiment, the game applications program 152 according to the present invention, including the programming instructions and the data containing the text, auditory and visual informational data associated therewith, and the speech recognition program 154 are stored in a removable computer readable medium 150, such as a CD or DVD type of media, that is inserted into a device 116 for reading and/or writing to the removable computer readable media. As such the reading/writing device 116 is any of a number of devices known to those skilled in the art for reading from and/or writing to the particular medium on which the applications program is stored.
• According to another embodiment of the present invention, the gaming applications program, including the portion 152 containing the programming instructions and the speech recognition program or module 154 with the text, auditory and visual informational data associated therewith, is stored on the hard drive 114. In use, the game player or user accesses the gaming program directly from the hard drive 114, for example, by entering appropriate commands via the keyboard or by other appropriate action of the input device (e.g., positioning the cursor over an icon on the desktop and clicking a mouse button).
• Referring now to FIG. 1B there is shown a network based computer system 200 according to the present invention that includes a server 210, an external storage device 260 and a network infrastructure 310 that operably couples a plurality of client computer systems 100′ to the server 210. The client computer systems 100′ are typically configured like the computer system of FIG. 1A except that in use the game player or user would access the game applications program and related data for a given game from the server 210 and upload such information temporarily onto the client computer system. The network infrastructure 310 can comprise, for example, a local area network (LAN), a wide area network (WAN) or the Internet where access to the Internet is typically provided by an ISP.
• The server 210 is any of a number of servers known to those skilled in the art that are intended to be operably connected to a network so as to operably link a plurality of client computers via the network to the server and thus also to the external storage device 260. As illustrated, the server 210 typically includes a central processing unit including one or more microprocessors such as those manufactured by Intel or AMD, random access memory (RAM), mechanisms and structures for performing I/O operations, a storage medium such as a magnetic hard disk drive(s), and an operating system for execution on the central processing unit. The hard disk drive of the server typically is not used for storing data and the like utilized by client applications being executed on the client computers. Rather the hard disk drive(s) of the server 210 are typically provided for purposes of booting and storing the operating system, other applications or systems that are to be executed on the server, paging and swapping between the hard disk and the RAM.
  • Data and the like being used in connection with the execution of client applications, such as the game applications program and the information and/or data related thereto, on client computers is stored in the external storage device 260 that is operably interconnected to the server 210 using any of a number of techniques and related devices or cabling known to those skilled in the art. In an illustrative embodiment, such an interconnection is implemented using a small computer systems interface (SCSI) technique(s) or via a fiber optic cable or other high-speed type of interconnection.
  • In an illustrative, exemplary embodiment, the external storage device 260 comprises a disk assembly typically made up of one or more hard disks that are configured and arranged so the external storage medium functionally appears to the servers 210 as a single hard disk. Such an external storage medium is further configured and arranged to implement any of a number of storage schemes such as mirroring data on a duplicate disk (RAID level 1) or providing a mechanism by which data on one disk, which disk has become lost or inaccessible, can be reconstructed from the other disks comprising the storage medium (RAID level 5). Although reference is made to a disk assembly and hard disks, this is for illustration and shall not be construed as being a limitation on the particular form of the devices or mechanism that makes up the external storage device 260 or the medium comprising such a device.
• In addition, each of the client computers 100′ includes one or more I/O ports that are operably connected to the microprocessor 112 and which are configured and arranged for the transfer of the data and program instructions between and amongst the client computers and the server 210 using any of a number of non-wireless techniques or wireless techniques known to those skilled in the art. Such non-wireless techniques include for example any of a number of network infrastructures 310 known to those skilled in the art such as Ethernet, token ring, FDDI, ATM, Sonet, X.25 and Broadband.
• In the case of wireless techniques, the I/O ports of the client computers are configured so as to include a transceiver as is known to those skilled in the art for wireless network transmission systems. Alternatively, a wireless network device is operably coupled to the computer (e.g., via a USB I/O port), which device includes the transceiver. An exemplary wireless network technique includes those systems embodying a transceiver or transmitter complying with IEEE-802.11. In each case, the transceiver operably coupled to the client computer is configured and arranged so as to establish a communications link between the client computer and a receiver or transceiver remote from the location of the client computer that is in turn operably coupled to the server 210. For example, with a client computer 100′ having an IEEE-802.11b or 802.11g compliant transceiver, the corresponding remotely located transceiver/receiver would be located within about 100 meters or so of the location of the client computer device and operating at a frequency of about 2.4 GHz. The server 210 in turn could be coupled to the remotely located transceiver/receiver using non-wireless or wireless techniques.
• As indicated herein, it also is contemplated and thus within the scope of the present invention for the methodology and software of the present invention to be implemented/executed on a dedicated audio-visual gaming machine as is known to those skilled in the art. As such, there is shown in FIG. 1C an external view of an exemplary gaming system 400 using a video gaming machine embodying the methodology and software of the present invention. In the illustrated embodiment, the game machine main unit 402 includes the circuit boards, microprocessors, and circuitry for the game processing including the processing of the input signals from game players and outputting signals to the display 404. The main unit 402 includes one or more input ports or connectors 410 which are operably coupled to a mechanical input type of input device such as a pad 420. The main unit includes another input connector or port 412 that is operably coupled to an audio input device, such as a headset 140 b (see also FIG. 1A).
  • Although not shown, the main unit 402 also includes a video output port or connector and an audio output port or connector that operably couple the main unit 402 to the display 404. The display can be any of a number of audio-visual displays known to those skilled in the art including but not limited to a television receiver or monitor.
• The main unit 402 also can include one or more different types of drives, including but not limited to cartridge drives, CD-ROM drives and DVD-ROM drives for respectively operably connecting read only cartridges or read only CDs or DVDs to the circuit boards, processors and the like of the main unit 402 that execute the programs and the like stored on the storage medium.
• The foregoing computer systems and dedicated gaming machines are exemplary and are not inclusive of every configuration or device by which one can play a game. For example, such games can be played on cellphones as well as handheld gaming machines. As such, it is contemplated and thus within the scope of the present invention for the herein described gaming methodology to be adapted so as to be embodied in games to be played on such other devices.
  • Referring now to FIGS. 2A-D there is shown various high-level flow diagrams illustrating the gaming applications program and functionalities thereof that embody various aspects and embodiments of the gaming methodology of the present invention. In the following discussion, reference shall be made to FIGS. 1A or 1B for features of the computer systems embodying the software/applications programs and methodology of the present invention not otherwise shown on FIGS. 2A-D.
• As indicated above, it is within the scope of the present invention for the game applications program or game software of the present invention to be accessed via a removable computer readable medium, from a remote server connected to a client computer system, or from the dedicated storage of a computer system. However, for simplicity, the following discussion is limited to a description of the software, flow of information and methodology in the case where the software is already stored on a hard drive of a computer system. It should be recognized that this shall not be construed as a limitation on the manner and means by which the software/program of the present invention can be accessed, as it is within the scope of the present invention to adapt the software and methodology of the present invention so as to be useable with any of a number of techniques known to those skilled in the art, such as the above-described network and server embodiment as well as a dedicated gaming machine.
• Referring now to FIG. 2A there is shown a high level flow diagram generally illustrating the flow of the game being executed on a computer system 100. The player starts the game using any of a number of techniques appropriate for the operating system and game applications program, step 500. For example, in the case where the program is pre-installed on a computer system 100 running a Microsoft Windows operating system, the game player would double click the desktop icon to load and start the program. For a dedicated video gaming machine, the game player would load the cartridge/CD/DVD after starting the machine so that the game program is loaded and started.
  • After the program is loaded and started, the start process can further include other functions such as logging onto the program as an identified game player. As game programs become more sophisticated and powerful, the game program can be configured and arranged using any of a number of known techniques to store selected information or particularities of the player so that they can be automatically reloaded when the player returns to the game at a later time. In addition, such techniques can allow a game player to stop the game but without exiting the game so the game player can re-start the game where he/she left off.
  • It is contemplated that the game applications program of the present invention, will embody or use any of a number of speech recognition programs as is known to those skilled in the art in combination with the game specific applications program of the present invention. Such speech recognition programs are configured and arranged so as to convert the words or sounds spoken through an audio input device such as the microphone of the headset 140 b into digital signals that are inputted into the game applications program. It also is contemplated, and thus within the scope of the present invention for the game applications program and methodology to be configured and arranged so as to include the instructions, criteria and data so as to be capable of implementing or executing a speech recognition function.
• In particular embodiments of the present invention, the gaming methodology can further embody an initial speech recognition learning protocol that would be implemented prior to starting the gaming process. As such, the method can further include determining if such a speech recognition learning protocol should be initiated or conducted, step 502. It should be recognized, however, if the game designer does not include an initial speech recognition learning protocol (NO, step 502), then the illustrated process essentially proceeds from step 500 directly to step 506. It also is contemplated that in such a case, steps 502 and 504 could be excluded from the flow of the game or gaming method of the present invention. The specifics of the learning protocol of steps 502, 504 are discussed further below.
  • In the following discussion, the game applications program and method of the present invention is described in terms of a game that involves magic such as casting of spells and the like. This shall not be construed as limiting the program and methodology of the present invention to this particular form of game as it is contemplated and thus within the scope of the present invention for the described program and methodology to be adapted for use in connection with any of a number of other types of games such as martial arts and those involving combat or war including warships and planes. Also, and although not specifically stated in the following, it shall be understood that applications programs embodying the methodology of the present invention shall include code segments, instructions and criteria including data as well as audio and visual data, so the applications program can carry out the below described functions/methodology.
  • For clarity, the following discussion excludes those parts of the game being implemented beyond that needed to understand the methodology of the present invention. For example, the following does not describe the initial processes for selecting the character being played or selecting various inanimate objects, mechanisms and the like that would or could be used during the game (e.g., attire, weapons). It is well within the skill of those knowledgeable in the game arts to develop and design these aspects of games and game programs and to adapt these elements so as to embody the herein described methodology.
  • In the present invention, and at times during the conduct of the game, the methodology can allow a player to take various actions, such as initiating a dialog process with non-player characters (FIG. 2B) or casting a spell (FIG. 2C), including role-playing by the game player. In addition, the methodology also contemplates a process for controlling speech learning operations so any such learning is integrated into the conduct of the game or game play.
• Now with reference to FIG. 2B there is shown a flow diagram illustrating the dialog process between the non-player character and the game player/playing character. According to aspects of the present invention, the speech recognition module is utilized to simulate natural language interactions or dialog with simulated characters, or non-player characters in the game. This is in contrast with that of conventional systems where interactions with non-player characters have been limited to simple interactions such as shooting at the character or scrolling through a list of responses provided by the game designer that is linked to some text or voice-over that is played by the character in the game.
• When a player or the corresponding playing character encounters a non-player character, the dialog mode, step 600, is initiated. Thereafter, and according to the methodology of the present invention, there is displayed a statement or query by the non-player character, step 602. Such a display also can further include a listing of a series of responses for the game player. There is shown in FIG. 3 an illustrative screen shot that illustrates a non-player character query and a list of possible responses. In yet further embodiments, the non-player character also plays a voice-over or audio message corresponding to the statement or query via the speaker 130 or other auditory output device such as the headphones of the headset 140 b.
  • According to an aspect of the present invention, the game player speaks a keyword, phrase or term into the microphone of the headset 140 b that corresponds to the response the game player has selected, step 604. In one embodiment of the present invention, the player speaks the first word of the desired response and thus the dialog line. The selected response provides all of the information from the response to the non-player character thus allowing more dialog or action corresponding to the selected dialog to ensue. In another embodiment, the game designer specifies a word, phrase or term in each response listing as the keyword, key phrase or key term. In more particular embodiments, the keyword, phrase or term is appropriately highlighted on the display (e.g., in color, bold, italics) to facilitate the game player's identification. The foregoing also creates a faster, easier and more intuitive game play user interface (UI) that also makes the gaming experience more immersive. Allowing the designer to select the keyword is particularly advantageous because it provides the designer with the opportunity to create a more naturally sounding or phrased dialog line or response. The foregoing also is particularly advantageous as compared to conventional game techniques where the player needs to scroll through the list of responses or find a button on the keyboard or controller that corresponds to one of the responses.
• After speaking the key words/phrase/terms, the speech recognizer converts the spoken input to an output signal corresponding to a text string output, step 606. The output is then evaluated to determine the keyword match, step 608. In performing this evaluation reference is made to a stored set of keywords, phrases and/or terms, step 610. If there is no keyword match (NO, step 608), the process returns to having the player again provide a spoken response, step 604. In more particular embodiments, a message is outputted to the game player requesting the player to again provide a response. This message can be another pop-up type of message on the display and/or an audio message apparently coming from the non-player character. For example, the non-player character could indicate that the player's reply was not correct or unclear (e.g., output another query such as "What did you say?") and essentially request the game player to reply again to the query. If there is a keyword match (YES, step 608), then the process exits back to the game, step 612, and the game continues based on the selected response.
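The keyword-matching branch of FIG. 2B (steps 604-612) can be sketched as follows. This is an illustrative sketch only: the response texts, keywords, and dialog-branch labels are invented for the example and are not taken from the patent.

```python
# Hypothetical keyword table a game designer might specify (step 610):
# keyword -> (full dialog line, next dialog branch). All entries are invented.
RESPONSES = {
    "sword": ("I seek the Sword of Kings.", "quest_sword"),
    "rest": ("I only wish to rest here tonight.", "inn_scene"),
    "farewell": ("Farewell, stranger.", "exit_dialog"),
}

def match_keyword(recognized_text):
    """Evaluate the recognizer's text-string output (step 606) against the
    stored keyword set (steps 608/610). Returns the selected dialog branch,
    or None for the NO branch of step 608 (triggering a re-prompt such as
    the NPC asking "What did you say?")."""
    for word in recognized_text.lower().split():
        word = word.strip(".,!?")
        if word in RESPONSES:
            return RESPONSES[word][1]
    return None

# The player need only speak the highlighted keyword, not the whole line.
branch = match_keyword("sword")  # selects the "quest_sword" branch
```

Because selection hinges on a single designer-chosen keyword, the player can speak a naturally phrased line and still land on the intended response, which is the UI advantage the passage above describes.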
• In yet further embodiments, it is within the scope of the methodology of the present invention for the game player to initiate an open dialog with the non-player character either in initiating the dialog or when responding to the non-player character's words or actions. For example, the player (either in initiating dialog or responding to the NPC's words or actions) can say anything they desire in their natural language, such as, "Where is the Sword?" or the player may inform the non-player character of some event, such as "I have defeated the Giant." In addition, the open dialog methodology of the present invention further includes using the game's environmental context information and stored character history to enhance the Speech Recognition System's success rate, and to map nouns, concepts, and other language elements to the appropriate game event, action, or object.
  • The techniques of using the game's environmental information and the histories of the characters to enhance the speech recognition system's success rate are possible because of the self-contained nature of a game. In general dictation applications for Speech Recognition, the recognizer can have trouble distinguishing between words such as “bit” and “pit”, but in a game, it is known if there's a pit in the room because of the game's environmental information, or if the playing character was recently bit by a dog/person/monster/etc. Thus, the recognizer can take advantage of this environmental context information to simply recognize the correct words better.
  • Additionally, the Natural Language Processor in other non-game applications can have trouble mapping words to the needed concepts. But in a game, the successfully matched words can be taken and matched to the characters' histories and the current context information. From this, the player's intent, desired action, question, or statement can be inferred from the information of what's happened and what's happening. There is shown in FIG. 4 a schematic view that illustrates these interactions in the open dialog process. Particularly preferable aspects of this dialog process are the back and forth interactions between the Speech Recognizer, Natural Language Processor, Environmental Context Information System, and the Character/Game History Module.
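The "bit" vs. "pit" disambiguation described above can be illustrated with a minimal re-ranking sketch. The scoring scheme, bonus value, and context representation are assumptions made for illustration; the patent does not specify a particular algorithm.

```python
def rerank(hypotheses, context):
    """Re-rank acoustically similar recognizer hypotheses using game
    environmental context. hypotheses: list of (word, acoustic_score)
    pairs; context: set of words naming objects or events present in the
    game world or in the characters' histories. A word grounded in the
    current game context receives an assumed fixed bonus."""
    def score(hyp):
        word, acoustic = hyp
        return acoustic + (0.5 if word in context else 0.0)
    return max(hypotheses, key=score)[0]

# The game's environmental information says the room contains a pit, so
# "pit" is selected even though its raw acoustic score is slightly lower.
context = {"pit", "torch", "door"}
best = rerank([("bit", 0.8), ("pit", 0.7)], context)
```

This captures the back-and-forth the passage describes: the Environmental Context Information System and Character/Game History Module feed the recognizer information that a general-purpose dictation system would not have.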
• Now referring to FIG. 2C there is shown a flow diagram illustrating the process of the game player casting a spell in the conduct of the game. It should be recognized that according to this aspect of the present invention, the game player role-plays the player's character in the game. In contrast, in conventional games, the player would issue a command that directs the player's character to create and cast the spell. It also should be recognized that this methodology is easily adaptable for other types of games. For example, if the game player was a captain of a submarine, the game player could issue an order directing non-player characters to take a specific action. Such an order for example could be the reloading of certain kinds of weapons in the torpedo tubes, changing headings and the like.
• It also should be recognized that the methodology is adaptable so the non-player character responds or reacts in the same way as if it were a real world situation. For example, if the captain of a submarine gave a command to change course, the appropriate non-player character (e.g., helmsman) would repeat the command in the same way as it would be in a real world situation, thereby making the level of play more immersive and real world like.
  • Thus, and using the spell casting example, the game player while role-playing their character recites the spell-language of the spell to be cast or says the spell words, step 700. In other words, the game player will utter the actual spoken language to cast Magic Spells (and likewise for battle cries, poems, songs) in a game setting. Thus, and in contrast to conventional techniques the game player is not issuing a command to the character, but instead is actually Role Playing the playing character by speaking as the playing character.
• This comparison can be better understood from the following example that refers to a magic type of software game program. For example, the game player does not say "Cast Fireball", as if instructing his/her character to cast a fireball spell. Instead, and according to the methodology of the present invention, the game player uses the power word for "attack", the power word for "fire", and the power word for "ball", thereby directly casting the spell. Besides creating a fundamentally different end-user experience (i.e., the player learns individual words and can even combine them to create new spells), this system avoids combinatorial problems with pure "hotkey" command systems. For example, the player may learn a spell word that means "double damage"; in a traditional "hotkey" style system, applying this to every possible spell would involve adding a new "hotkey" for every spell that incorporated this word.
  • Referring back to FIG. 2C, the spoken word representative of the spell is inputted to the speech recognizer, step 700. As indicated herein the speech recognizer converts the spoken input to an output signal corresponding to a text string output, step 702. The text output string from the speech recognizer is then inputted to a spell recognizer, step 710.
  • This text output string is processed through a spell word matching module to determine and establish a match between the spoken words and spell words, step 712. If the words do not match the spell words for the game, then no further action is taken. If the words do match those applicable for the game, then the spell words are inputted into a spell grammar integrator module.
  • The spell grammar integrator module takes the sequence of spell words and decodes it into a game action, step 714. Thereafter the process returns or exits to the game, step 720, and the game action per the recognized spell is executed in the game. Thus, in this way, the language spoken by the game player can be directly turned into game events and commands for casting spells and causing other special events to happen.
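The flow of steps 712 and 714 of FIG. 2C might be sketched as follows. The vocabulary, grammar table, and function names are illustrative assumptions, not the disclosed implementation:

```python
# Illustrative sketch only: recognizer text -> spell word matching (step 712)
# -> spell grammar integration into a game action (step 714).

SPELL_WORDS = {"attack", "fire", "ball", "ice", "bolt"}
SPELL_GRAMMAR = {
    ("attack", "fire", "ball"): "cast_fireball",
    ("attack", "ice", "bolt"): "cast_ice_bolt",
}

def match_spell_words(text):
    """Step 712: accept the utterance only if every token is a spell word."""
    tokens = text.lower().split()
    if not tokens or any(t not in SPELL_WORDS for t in tokens):
        return None          # not spell language; no further action taken
    return tuple(tokens)

def integrate_spell_grammar(words):
    """Step 714: decode the spell-word sequence into a game action."""
    return SPELL_GRAMMAR.get(words)

def process_recognizer_output(text):
    """End-to-end: text string from the speech recognizer to a game action."""
    words = match_spell_words(text)
    if words is None:
        return None
    return integrate_spell_grammar(words)
```

A sequence of valid spell words that decodes to no known spell simply produces no action, matching the "no further action is taken" branch above.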
  • In further aspects, the methodology of the present invention contemplates development of game language, such as the language used for casting spells, using what is sometimes referred to as a phoneme-neutral language. Computer software, movies, and written and aural stories are often sold in many countries. Since different languages may be spoken and read in the different markets, with conventional techniques the products being sold need to go through a process called Localization.
  • The methodology of the present invention embodies a technique of constructing a language for use in the product that is built entirely of phonemes found in all of the languages that are targets for Localization of the product. That is, the language (e.g., for spells in a game concerned with magic, as one example) will use only phonemes found in Chinese, English, and German if the game is to be sold in China, the US, and Germany. This makes speech recognition during play easier, because only sounds that all users of the product can naturally speak are used, and all of the development effort can be focused on one set of words. Also, a common language (and corresponding marketing potential) spoken by all users of the game enables cross-language playing and builds a more unified consumer base. There is shown in FIG. 5 an illustration of a language neutral phoneme set.
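The construction of such a phoneme-neutral vocabulary might be sketched as follows. The phoneme inventories below are toy data, and the function names are illustrative assumptions; a real inventory would come from linguistic reference data for each target language:

```python
# Illustrative sketch only: a spell word is admitted to the game language
# only if it is built entirely from phonemes common to every target
# localization language.

PHONEMES = {
    "english": {"k", "a", "t", "s", "m", "n", "o", "r"},
    "german":  {"k", "a", "t", "s", "m", "n", "o", "l"},
    "chinese": {"k", "a", "t", "s", "m", "n", "o", "w"},
}

def neutral_phoneme_set(languages):
    """Intersect the phoneme inventories of all target languages."""
    return set.intersection(*(PHONEMES[lang] for lang in languages))

def is_pronounceable(word_phonemes, neutral):
    """A candidate spell word is usable only if all its phonemes are neutral."""
    return set(word_phonemes) <= neutral
```

With the toy inventories above, a word using only the shared phonemes passes, while a word containing a phoneme unique to one language is rejected.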
  • Now with reference to FIG. 2C, in further embodiments, when the spell words being used are developed using a phoneme-neutral language, the spell recognizer would further refer to the language-neutral phoneme set of spell words, step 730. From this the spell recognizer would determine, as herein described, what game action to take based on the resultant decoded sequence of spell words.
  • As indicated hereinabove and with reference to steps 502, 504 of FIG. 2A, speech recognition programs typically include data files for converting the spoken words or sounds to digital signals for processing, so that pre-use training of the speech recognition program for a game player is not a necessary precondition to use of the speech recognition program or function. There can be cases, however, where pre-use training may be advantageous because it could improve the capability of the speech recognition program to perform the identification and conversion process. In such cases, the speech recognition programs can be configured and arranged so as to implement a pre-use training or learning function or protocol, where a user speaks some selected phrases that are read and learned by the speech recognition program, and the results of this learning process are stored in an individual-specific data file.
  • As such, in the case where a pre-use learning protocol is embodied in the gaming methodology of the present invention, if it is determined that the learning protocol should be initiated (YES, Step 502), then the game applications program/speech recognition module initiates the learn process, Step 504. Upon completion of the learning process the gaming process or game is started, step 506, and the player carries out whatever roles, functions, objectives and the like are involved as part of the conduct of the game.
  • In further embodiments, such determining if the learning protocol should be initiated (Step 502) can further include a determination of whether this is the first time the player is playing the game. As indicated above, the results of an initial speech recognition learning process are typically stored in an individual-specific data file. As such, once a game player goes through or performs the initial training or learning process, it is not necessary to repeat the process. Thus, if this is not the first time the game player has played the game, then the speech recognition learn process need not be performed (NO, step 502) and the process would continue with the playing of the game, Step 506.
  • In more particular embodiments, the initial learning process (Step 504) is created so as to make this process a part of the game-playing experience. This is advantageous as the game player is not aware that he/she is training or teaching the program to better recognize the speaker's voice, but believes that this is just another element of the gaming experience. Thus, the player will be prompted and asked to read a document, for example an oath, or to repeat what a non-player character says. In this way, the player preferably will think that this learning process is part of the gameplay experience. For example, if the player graduates from a police academy, the player can be asked to recite their oath of duty (e.g., I, Johnny Goodguy, do solemnly swear to uphold the law . . . ). The computer or game machine would use the recitation of this oath to improve the recognition and conversion process for a given player. In sum, the player will be asked to recite a set of words that are known ahead of time, but the words being recited will be made a part of the game-playing experience.
  • In further embodiments and with reference to FIG. 2D, the methodology of the present invention further includes speech recognition learning or training during the conduct of the game, preferably at times where the learning experience can be integrated so as to appear to the game player to be part of the gaming experience. Thus, the game designer can include an in-game learning process in the flow of the game by prompting the player to recite or repeat a new magical word or a specific phrase to take an action in the game (e.g., to open a door), or by asking the player to speak a pre-established phrase, word or term that arises in the conduct of the game, Step 800. For example, a wizard or other character may ask a player to repeat a new magical word the player was just taught by the wizard or other character.
  • A determination is then made to see if the spoken word is acceptable for speech recognition purposes, step 802. If it is determined that the repeated word, phrase or term is acceptable for speech recognition purposes (YES, step 802), the result is stored in the speech file library or data files for the specific player and the process exits to the game, step 806. If it is determined that the repeated word, phrase or term is not acceptable for speech recognition purposes (NO, step 802), then the game player is prompted to repeat the word or phrase again as described hereinabove.
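The FIG. 2D loop (Steps 800, 802, and the exit at 806) might be sketched as follows. The acceptability test, capture function, and retry limit are illustrative assumptions; the disclosure does not specify how acceptability is judged:

```python
# Illustrative sketch only: prompt the player in-fiction (Step 800), test
# the captured utterance (Step 802), store it on success, reprompt on failure.

def in_game_learning(prompt_phrase, capture, is_acceptable,
                     speech_library, max_retries=3):
    """Run one in-game training exchange; returns True once a usable
    speech sample has been stored in the player's speech file library."""
    for _ in range(max_retries):
        sample = capture(prompt_phrase)      # Step 800: player repeats phrase
        if is_acceptable(sample):            # Step 802: acceptable capture?
            speech_library.append(sample)    # store in player-specific data
            return True                      # exit to the game (step 806)
    return False                             # give up after repeated failures
```

In use, `capture` would record the player's microphone input and `is_acceptable` would apply whatever quality check the recognizer requires; here both are stand-ins.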
  • Although a preferred embodiment of the invention has been described using specific terms, such description is for illustrative purposes only, and it is to be understood that changes and variations may be made without departing from the spirit or scope of the following claims.
  • INCORPORATION BY REFERENCE
  • All patents, published patent applications and other references disclosed herein are hereby expressly incorporated by reference in their entireties.
  • EQUIVALENTS
  • Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents of the specific embodiments of the invention described herein. Such equivalents are intended to be encompassed by the following claims.

Claims (27)

1. A method for a game comprising the step of:
initiating an interactive dialog between a non-player character and a game player, wherein the response from the game player is an auditory input.
2. The game method of claim 1, wherein said initiating a dialog includes:
the game player selecting a response from one or more possible responses;
the game player providing an auditory reply that corresponds to the selected response.
3. The game method of claim 2, further comprising the step of:
creating a game program output corresponding to the selected response of the game player.
4. The game method of any of claims 2 or 3, wherein said initiating a dialog includes:
displaying a text message from the non-player character for the game player.
5. The game method of claim 4, wherein said initiating a dialog includes the non-player character providing an auditory output corresponding to the text message from the non-player character for the game player.
6. The game method of any of claims 2 or 3, wherein said initiating a dialog includes the non-player character providing an auditory output corresponding to the text message from the non-player character for the game player.
7. The game method of claim 4, wherein the text message being displayed is in the form of one of a statement or a query to the game player.
8. The game method of claim 7, further comprising the step of:
providing one of a computer, a dedicated gaming device, a handheld gaming device, a video game machine and a cellphone; and
executing said initiating a dialog on said one of the computer, a dedicated gaming device, a handheld gaming device, a video game machine and the cellphone.
9. The game method of any of claims 1-3, further comprising the step of:
providing one of a computer, a dedicated gaming device, a handheld gaming device, a video game machine and a cellphone; and
executing said initiating a dialog on said one of the computer, a dedicated gaming device, a handheld gaming device, a video game machine and the cellphone.
10. The game method of claim 4, further comprising the step of:
providing one of a computer, a dedicated gaming device, a handheld gaming device, a video game machine and a cellphone; and
executing said initiating a dialog on said one of the computer, a dedicated gaming device, a handheld gaming device, a video game machine and the cellphone.
11. The game method of claim 6, further comprising the step of:
providing one of a computer, a dedicated gaming device, a handheld gaming device, a video game machine and a cellphone; and
executing said initiating a dialog on said one of the computer, a dedicated gaming device, a handheld gaming device, a video game machine and the cellphone.
12. The game method of claim 2, wherein:
said initiating a dialog includes arranging each of the one or more possible responses so as to include a key portion, and
wherein the auditory input from the game player corresponds to the key portion.
13. A method for initiating a dialog between a non-player character of a game and a game player, said dialoging method comprising the steps of:
displaying one or more text messages from the non-player character for the game player and displaying one or more possible responses of the game player to the one or more text messages;
the game player selecting a response from the one or more possible responses;
the game player providing an auditory reply that corresponds to the selected response; and
creating a game program output corresponding to the selected response of the game player.
14. The dialoging method of claim 13, further comprising the step of the non-player character providing an auditory output corresponding to the one or more text messages from the non-player character for the game player.
15. The dialoging method of claim 14, wherein the text message being displayed is in the form of one of a statement or a query to the game player.
16. The dialoging method of any of claims 13-15, further comprising the step of:
providing one of a computer, a dedicated gaming device, a handheld gaming device, a video game machine and a cellphone; and
executing said steps of displaying, the game player selecting a response, the game player providing an auditory reply, and creating a game program output on said one of the computer, a dedicated gaming device, a handheld gaming device, a video game machine and the cellphone.
17. The dialoging method of claim 13, further comprising the step of
arranging each of the one or more possible responses so as to include a key portion; and
wherein said providing an auditory input includes providing an auditory input from the game player that corresponds to the key portion.
18. A method for a game comprising the step of:
initiating an open dialog between a non-player character and a game player, wherein the dialog is initiated by the game player at one of any time during the conduct of the game or following one of a non-player character's actions or words.
19. The game method of claim 18, wherein said initiating a dialog includes:
the game player providing an auditory input corresponding to one of a request for game information or a desired action in the game; and
determining from the game player's auditory input a response to the game player auditory input; and
creating an output so as to one of provide the requested game information to the game player or initiate the desired game action.
20. The game method of claim 19, wherein said determining includes determining a response based on one or more of game environmental context information and character history.
21. The game method of claim 19, wherein said determining includes determining a response based on one or more of what has happened in the game and what is happening in the game as of the time the game player provided the auditory input.
22. The game method of claim 19, wherein said determining includes deriving the game player's one of desired action or request for information by mapping language elements contained in the auditory input to the game context.
23. An interactive method for a game, comprising the step of:
the game player providing an auditory input such that the game player plays the role of the player character in the conduct of the game.
24. The interactive game method of claim 23, further comprising the step(s) of:
the game player determining a desired game action responsive to actions or words of one or more non-player characters in the game; and wherein
the provided auditory input is an instruction inputted into the game by the game player so as to cause the game to perform or carry out the desired game action.
25. The interactive game method of claim 24, wherein the instruction is a natural language statement to perform the desired action and wherein said interactive game method further comprises the steps of:
determining from the natural language statement of the game player the desired action to be performed in the play of the game.
26. The interactive game method of claim 25, wherein:
the game is a magic game,
the desired action is to cast a spell,
the statement is the language spoken to cast the spell,
said determining includes determining from the statement the spell that is to be cast; and
the interactive method further comprises casting the spell.
27. A method for a game that uses speech recognition function to convert auditory input from a game player to an action in the play of the game; said game method comprising the step(s) of:
initiating a learn protocol of the speech recognition function at one or more times during the play of the game,
prompting the game player at each of the one or more times using language and expression consistent with the game and for the specific time the learning protocol is being initiated, and
requesting the game player to recite a predetermined phrase, word or passage, the predetermined phrase, word or passage being selected so as to be consistent with the play of the game at the specific time the learning protocol is being initiated.
US11/182,269 2004-07-15 2005-07-15 Audio-visual games and game computer programs embodying interactive speech recognition and methods related thereto Abandoned US20060040718A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/182,269 US20060040718A1 (en) 2004-07-15 2005-07-15 Audio-visual games and game computer programs embodying interactive speech recognition and methods related thereto

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US58870304P 2004-07-15 2004-07-15
US11/182,269 US20060040718A1 (en) 2004-07-15 2005-07-15 Audio-visual games and game computer programs embodying interactive speech recognition and methods related thereto

Publications (1)

Publication Number Publication Date
US20060040718A1 true US20060040718A1 (en) 2006-02-23

Family

ID=35907916

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/182,269 Abandoned US20060040718A1 (en) 2004-07-15 2005-07-15 Audio-visual games and game computer programs embodying interactive speech recognition and methods related thereto

Country Status (2)

Country Link
US (1) US20060040718A1 (en)
WO (1) WO2006019987A2 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101648077A (en) * 2008-08-11 2010-02-17 巍世科技有限公司 Voice command game control device and method thereof
EP2154678A1 (en) * 2008-08-13 2010-02-17 Weistech Technology Co., Ltd. Voice command game controlling apparatus and method of the same

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4104625A (en) * 1977-01-12 1978-08-01 Atari, Inc. Apparatus for providing facial image animation
US4692941A (en) * 1984-04-10 1987-09-08 First Byte Real-time text-to-speech conversion system
US4852168A (en) * 1986-11-18 1989-07-25 Sprague Richard P Compression of stored waveforms for artificial speech
US4884972A (en) * 1986-11-26 1989-12-05 Bright Star Technology, Inc. Speech synchronized animation
US5111409A (en) * 1989-07-21 1992-05-05 Elon Gasper Authoring and use systems for sound synchronized animation
US5377997A (en) * 1992-09-22 1995-01-03 Sierra On-Line, Inc. Method and apparatus for relating messages and actions in interactive computer games
US5393073A (en) * 1990-11-14 1995-02-28 Best; Robert M. Talking video games
US5893064A (en) * 1997-05-14 1999-04-06 K2 Interactive Llc Speech recognition method and apparatus with voice commands and associated keystrokes
US6456977B1 (en) * 1998-10-15 2002-09-24 Primax Electronics Ltd. Voice control module for controlling a game controller
US20020198170A1 (en) * 1999-12-20 2002-12-26 Charanjit Bountra Formulations of adenosine a1 agonists
USRE37957E1 (en) * 1994-06-22 2003-01-07 Wizards Of The Coast, Inc. Trading card game method of play
US6529875B1 (en) * 1996-07-11 2003-03-04 Sega Enterprises Ltd. Voice recognizer, voice recognizing method and game machine using them
US6612931B2 (en) * 2000-03-15 2003-09-02 Konami Corporation Game system provided with message exchange function, game apparatus used in the game system, message exchange system, and computer readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI282941B (en) * 2001-03-15 2007-06-21 Toshiba Corp Entrance management apparatus and entrance management method by using face features identification


Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150066484A1 (en) * 2007-03-06 2015-03-05 Mark Stephen Meadows Systems and methods for an autonomous avatar driver
US20080221892A1 (en) * 2007-03-06 2008-09-11 Paco Xander Nathan Systems and methods for an autonomous avatar driver
US10133733B2 (en) * 2007-03-06 2018-11-20 Botanic Technologies, Inc. Systems and methods for an autonomous avatar driver
US8622831B2 (en) 2007-06-21 2014-01-07 Microsoft Corporation Responsive cutscenes in video games
US20080318676A1 (en) * 2007-06-21 2008-12-25 Microsoft Corporation Responsive Cutscenes in Video Games
US9579576B2 (en) 2008-07-22 2017-02-28 Daybreak Game Company Llc System and method for providing persistent character personalities in a simulation
US20100029382A1 (en) * 2008-07-22 2010-02-04 Sony Online Entertainment Llc System and method for providing persistent character personalities in a simulation
US10293256B2 (en) 2008-07-22 2019-05-21 Daybreak Game Company Llc System and method for providing persistent character personalities in a simulation
US20100088097A1 (en) * 2008-10-03 2010-04-08 Nokia Corporation User friendly speaker adaptation for speech recognition
US20100112528A1 (en) * 2008-10-31 2010-05-06 Government Of The United States As Represented By The Secretary Of The Navy Human behavioral simulator for cognitive decision-making
US20110106536A1 (en) * 2009-10-29 2011-05-05 Rovi Technologies Corporation Systems and methods for simulating dialog between a user and media equipment device
US10642873B2 (en) 2014-09-19 2020-05-05 Microsoft Technology Licensing, Llc Dynamic natural language conversation
US11077361B2 (en) 2017-06-30 2021-08-03 Electronic Arts Inc. Interactive voice-controlled companion application for a video game
CN109200578A (en) * 2017-06-30 2019-01-15 电子技术公司 The adjoint application that interactive voice for video-game controls
US11120113B2 (en) 2017-09-14 2021-09-14 Electronic Arts Inc. Audio-based device authentication system
WO2019070254A1 (en) * 2017-10-04 2019-04-11 Ford Global Technologies, Llc Natural speech data generation systems and methods
US11318373B2 (en) * 2017-10-04 2022-05-03 Ford Global Technologies, Llc Natural speech data generation systems and methods
US11157703B2 (en) 2018-04-19 2021-10-26 Sg Gaming, Inc. Systems and methods for natural language processing in gaming environments
US11675982B2 (en) 2018-04-19 2023-06-13 Lnw Gaming, Inc. Systems and methods for natural language processing in gaming environments
CN108881610A (en) * 2018-04-27 2018-11-23 努比亚技术有限公司 A kind of terminal control method, terminal and computer readable storage medium
US10926173B2 (en) * 2019-06-10 2021-02-23 Electronic Arts Inc. Custom voice control of video game character
US11410649B2 (en) * 2019-10-31 2022-08-09 International Business Machines Corporation Voice commands to facilitate in-game communication
US20220072425A1 (en) * 2020-09-10 2022-03-10 Holland Bloorview Kids Rehabilitation Hospital Customizable user input recognition systems
US11878244B2 (en) * 2020-09-10 2024-01-23 Holland Bloorview Kids Rehabilitation Hospital Customizable user input recognition systems
US20230071358A1 (en) * 2021-09-07 2023-03-09 Nvidia Corporation Event information extraction from game logs using natural language processing

Also Published As

Publication number Publication date
WO2006019987A2 (en) 2006-02-23
WO2006019987A3 (en) 2006-04-27

Similar Documents

Publication Publication Date Title
US20060040718A1 (en) Audio-visual games and game computer programs embodying interactive speech recognition and methods related thereto
US7627536B2 (en) Dynamic interaction menus from natural language representations
JP2021518197A (en) Voice help system using artificial intelligence
US9403088B2 (en) Method of controlling computer device, storage medium, and computer device
Allison et al. Design patterns for voice interaction in games
US7734562B1 (en) Voice to text conversion with keyword parse and match to semantic and transactional concepts stored in a brain pool state machine using word distance to generate character model interaction in a plurality of dramatic modes
KR20100020006A (en) Responsive cutscenes in video games
Wardrip-Fruin Playable media and textual instruments
CN109817244B (en) Spoken language evaluation method, device, equipment and storage medium
EP1172132A2 (en) Entertainment system, recording medium
Jones et al. Acoustic emotion recognition for affective computer gaming
WO2022257502A1 (en) Interaction method, interaction apparatus, electronic device, and readable storage medium
US20220023756A1 (en) Method for game service and computing device for executing the method
CN112423093B (en) Game video generation method, device, server and storage medium
Milam et al. Looking at the Interactive Narrative Experience through the Eyes of the Participants
US10255916B1 (en) Methods, systems, and media for presenting interactive audio content
US11691076B2 (en) Communication with in-game characters
EP3964271A1 (en) User input method and apparatus
KR101962421B1 (en) Apparatus and method of feedback on chatting and operation method of terminal
KR20200130225A (en) Output system, sever and output method for game effect sound
KR20210032838A (en) Game apparatus and method using emotion
US20240029725A1 (en) Customized dialogue support
Kiiski Voice Games: The History of Voice Interaction in Digital Games
JP7330518B2 (en) GAME SYSTEM, GAME SYSTEM CONTROL METHOD, AND GAME PROGRAM
US20240009559A1 (en) Communication with in-game characters

Legal Events

Date Code Title Description
AS Assignment

Owner name: LAWRENCE SAVINGS BANK, MASSACHUSETTS

Free format text: SECURITY INTEREST;ASSIGNOR:MAD DOC SOFTWARE, LLC;REEL/FRAME:017243/0566

Effective date: 20051107

AS Assignment

Owner name: MAD DOC SOFTWARE, LLC, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DAVIS, IAN L.;REEL/FRAME:017231/0793

Effective date: 20051028

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION