WO2010141403A1 - Separately portable device for implementing eye gaze control of a speech generation device - Google Patents


Info

Publication number
WO2010141403A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
generation device
speech generation
eye
menu
Prior art date
Application number
PCT/US2010/036805
Other languages
French (fr)
Inventor
Bob Cunningham
Dan Sweeney
Jason Mccullough
Rick Severa
Jeff Holt
Mike Salandro
Mike Zaffuto
David Brunecz
Brent Weatherly
Rob Cantine
Pierre Musick
Linnea Mcafoose
Original Assignee
Dynavox Systems, Llc
Priority date
Filing date
Publication date
Application filed by Dynavox Systems, Llc
Publication of WO2010141403A1


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/113: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for determining or recording eye movement
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00: Speech synthesis; Text to speech systems
    • G10L 13/02: Methods for producing synthetic speech; Speech synthesisers
    • G10L 13/04: Details of speech synthesis systems, e.g. synthesiser structure or memory management
    • G10L 13/047: Architecture of speech synthesisers

Definitions

  • the present invention relates to devices that can be operated using the gaze of the user's eye and in particular to speech generation devices that can be operated using the gaze of the user's eye.
  • Speech generation devices are known and are used by persons with either total or partial speech impairments. Such persons use speech generation devices to communicate audibly with their environments. Many classes of persons desiring to use such speech generation devices have very limited motor control over most parts of their bodies. Moreover, many such persons experience diminishing motor control with the passage of time. Accordingly, where once such a user was able to control a speech generation device with the user's fingers, over time this capability diminishes and finally is lost. This sort of diminishing motor control is especially acute among sufferers of conditions such as Lou Gehrig's disease (amyotrophic lateral sclerosis) and myasthenia gravis. However, control over one's eye movements and/or head movements tends to persist over long periods of time.
  • Conventional speech generating devices having eye gaze controllers typically employ a camera and can be differentiated on the basis of whether the user has sufficient access to the camera's lens to be able to focus the camera. Those that allow user access to the camera's lens risk having the user, who typically has limited motor control skills, accidentally hit the lens of the camera so as to throw it out of the proper focus. Because of the user's limited motor control, the user likely will be unable to manipulate the lens to recover the proper focus. Moreover, if the user is permitted access to the lens, a further problem arises from the possibility of the user's bodily fluids (saliva, vomit, etc.) contaminating the lens and/or the camera.
  • Conventional speech generation devices with eye gaze controllers consume valuable screen real estate due to their dependence on using part of the computer/device display to show the user if the user's eyes are being tracked.
  • if a conventional gaze-access controller determines that the user is gazing anywhere on an object on the input display screen, that time is counted toward the "dwell time" setting that is used to activate the object on the screen.
  • users who suffer from uncontrollable head movements or poor eye control may unintentionally have their gaze leave the object that they desire to select. In such instances, the user would lose the accumulated dwell time and need to start over in the user's attempt to select that object. This result can cause significant frustration.
  • Another source of frustration for users of conventional speech generation devices with eye gaze controllers is the amount of time it takes the user to compose a message. The frustration arises from the resulting disruption in the flow of the user's conversation, writing, and thought process.
  • an advantage of some embodiments of the present invention is to provide, for a speech generation device, an eye gaze controller that is separately portable and detachable from the speech generation device.
  • an eye gaze controller that can be oriented in more than one position relative to the speech generation device.
  • an eye gaze controller that has a universal serial bus (USB) port by which the eye gaze controller can be connected to the speech generation device.
  • a yet further advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller having its own on-board processor and its own source of power in the form of a battery that is separate from any battery that provides power to the speech generation device.
  • for a speech generation device, an eye gaze controller that includes a fixed-focus camera and yet is a fully portable eye gaze controller.
  • an eye gaze controller that includes a fixed-focus camera and yet prevents access by the user to the camera and to the lens of the camera.
  • a fully portable eye gaze controller that includes a fixed-focus camera while preventing access by the user to the camera and to the lens of the camera and using only two sets of LEDs - each set of LEDs being spaced linearly apart from the other set of LEDs.
  • a further advantage of some embodiments of the present invention is to provide for a speech generation device, a fixed-focus type eye gaze controller that avoids the drawbacks of conventional devices.
  • A further principal object of the present invention is to provide a speech generation device with a "fixed-focus" type eye gaze controller that can be positioned less than 17 inches from the screen.
  • Another advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller that incorporates a preprogrammed chip for remote control of other devices in the user's environment without having to rely on others for lengthy programming of the system and that has pre-programmed pages that let the user's eye gaze control all consumer electronics in the user's environment.
  • An additional advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller that enables the user to automatically dial (including 911), speed-dial, receive calls, talk and listen over the telephone, and perform every function that one can perform using a conventional telephone.
  • a yet further advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller that incorporates an eBook reader whereby a user can completely control with eye-tracking the normal uses of an eBook reader, including reading the books, changing the voices used to read the books, and obtaining the books from bookshare.org, all without the need of intervention from a caregiver.
  • a still further advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller that allows the user to reliably and immediately switch the access method from eye-tracking to another method with a single selection, and thus independently and without requiring intervention from a caregiver.
  • Yet another additional advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller having indicator lights, positioned near its camera housing, that tell the user whether the user's eyes are being tracked.
  • Still another additional advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller having settings that enable the user to choose to retain the "accumulated dwell time" if the user's gaze leaves the object that the user desires to select.
  • One such setting relates to the duration of time before the user loses the "accumulated dwell time".
  • another such setting relates to the rate at which the user loses the "accumulated dwell time".
  • a yet additional advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller having access to tools that minimize necessary navigation to save time and energy during message composition.
  • a still additional advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller having access to a special "On Screen Keyboard" with larger buttons for eye-tracking and other access methods, whereby the user can enter, from that keyboard, the numeric indicator that some web browsers place beside every link on a webpage and thereby more easily select a desired link.
  • Still another additional advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller having control over a special 'split' message window in which part of the window contains the message being composed by the user but cannot be activated, while the remainder of the 'split' message window (its size defined by the user) is a 'Speak Message Window' button that speaks the contents of the message window when the user is satisfied with the final composition of the message.
  • an additional advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller having access to a 'Dashboard Hotspot' that the user can locate in any section of the screen and with any desired size and that enables the user to launch a 'popup window' containing the critical items that can be selected by the user by making no more than two selections.
  • a yet further additional advantage of some embodiments of the present invention is to provide, for users who are blind or have very poor vision, a speech generation device with an eye gaze controller having audio eye-tracking that provides audio cues to inform the user when the user's eyes are focused on an area of the screen that the user might want to select, thereby enabling such users to employ eye-tracking as a communication and computer access method.
  • One exemplary embodiment of the disclosed technology concerns a portable eye gaze controller comprising a first housing, an eye tracker, a first battery and a first universal serial bus (USB) socket.
  • the eye tracker is disposed within the first housing.
  • the first battery is also disposed within the first housing and is electrically connected to the eye tracker to provide power to operate the eye tracker.
  • the first universal serial bus (USB) socket is carried by the first housing and is electrically connected to the eye tracker.
  • Another exemplary embodiment of the disclosed technology concerns a speech generation device including a portable eye gaze controller.
  • the portable eye gaze controller includes a first housing, an eye tracker disposed within the first housing and a first universal serial bus (USB) socket carried by the first housing and electrically connected to the eye tracker.
  • the speech generation device further includes a second housing, a processor disposed within the second housing, an input screen also disposed within the second housing, and a second universal serial bus (USB) socket carried by the second housing and connected to the processor.
  • the portable eye gaze controller is coupled to the processor via a connection established between the first USB socket and the second USB socket.
  • a further exemplary embodiment of the disclosed technology concerns an eye tracker including a housing, first and second light sources, a video camera and a focusing lens.
  • the first and second light sources are disposed within the housing such that light is directed outwardly from the housing towards the eyes of a user.
  • the video camera is disposed within said housing and is configured to detect light reflected from the eyes of a user.
  • the focusing lens is disposed in front of the video camera and is aligned with a central opening that is defined in said housing.
  • the eye tracker includes first and second light sources comprising LED arrays that are disposed respectively to the right and left of the video camera within the housing.
  • the eye tracker further includes first and second indicator lights configured to illuminate when the eye tracker has acquired the location of the user's eye associated with that indicator light.
  • a speech generation device including an input screen, an eye tracker, a processor and related computer-readable medium for storing instructions executable by the processor, and speakers.
  • the input screen is configured for displaying selectable pages to a viewer.
  • the eye tracker includes at least one light source and at least one photosensor that detects light reflected from the viewer's eyes to determine where the viewer is looking relative to the input screen.
  • the instructions stored on the computer-readable medium configure the speech generation device to generate output signals for establishing communication with a separate device or network.
  • the speakers provide audio output of signals received from the separate device or network.
  • the separate device or network comprises a telephone.
  • the instructions stored on the computer readable medium initiate the display on the input screen of a keypad with numbers for dialing the telephone that are selectable by a user's gaze detected by the eye tracker.
  • the instructions stored on the computer-readable medium more particularly configure the speech generation device to connect to the internet such that a user can navigate web pages displayed on the input screen with the selection control of said eye tracker.
  • the instructions stored on the computer-readable medium more particularly configure the speech generation device to download an e-book from over the established internet connection.
  • Yet further exemplary embodiments of the subject technology relate to a method of changing the access method of an electronic device interfaced with an eye gaze controller from an eye tracking access method to at least one other access control protocol.
  • a selection method navigator is displayed on an input screen for a user, wherein the selection method navigator displays a plurality of access methods for interfacing with the electronic device.
  • a user's gaze is detected with the eye gaze controller as the user's eyes are focused on an area of the input screen depicting the desired access method for subsequent operation of said electronic device.
  • the access method of the electronic device is switched from an eye gaze tracking access method to the desired access method selected by the user's gaze.
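  • By way of illustration only (not part of the original disclosure), the following minimal Python sketch shows how such a single-selection switch of access methods might be wired up; the class, enumeration values and the set_access_method call are hypothetical names, not the patent's API.

```python
from enum import Enum, auto

class AccessMethod(Enum):
    EYE_TRACKING = auto()
    TOUCH = auto()
    HEAD_TRACKING = auto()
    EXTERNAL_SWITCH = auto()

class SelectionMethodNavigator:
    """Shows the available access methods; applies the one the user selects by gaze."""

    def __init__(self, device):
        self.device = device               # the electronic device being controlled
        self.options = list(AccessMethod)  # one on-screen target per access method

    def on_gaze_selection(self, chosen: AccessMethod) -> None:
        # Called by the eye gaze controller when the user's gaze dwells on (or
        # blinks at) the on-screen area depicting the desired access method.
        self.device.set_access_method(chosen)  # single selection -> immediate switch
```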
  • Another exemplary embodiment concerns a method for determining user selection of an object on a display screen using eye tracking.
  • a dwell time setting is electronically established that defines the duration of time for which a user's eyes must gaze on an object on a display screen to trigger selection of the object by an eye gaze selection system.
  • An eye gaze controller electronically tracks the amount of time a user's gaze remains upon a given object on the display screen. The user's accumulated dwell time is retained for a predetermined amount of time even after a user's gaze leaves the given object. Selection of the given object is electronically implemented if the accumulated dwell time exceeds the electronically established dwell time setting.
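  • The following Python sketch (not taken from the patent; all parameter values are hypothetical defaults) illustrates one way the retained "accumulated dwell time" described above, including the duration and rate settings mentioned earlier, could be implemented.

```python
class DwellSelector:
    """Dwell selection with a retained 'accumulated dwell time'."""

    def __init__(self, dwell_time=1.0, retention_time=0.5, decay_rate=0.0):
        self.dwell_time = dwell_time          # seconds of gaze needed to select
        self.retention_time = retention_time  # grace period after the gaze leaves
        self.decay_rate = decay_rate          # accumulated seconds lost per second away
        self.accumulated = 0.0
        self.time_away = 0.0

    def update(self, gaze_on_object: bool, dt: float) -> bool:
        """Advance by dt seconds; return True when the object should be selected."""
        if gaze_on_object:
            self.accumulated += dt
            self.time_away = 0.0
        else:
            self.time_away += dt
            if self.time_away > self.retention_time:
                self.accumulated = 0.0        # grace period expired: start over
            else:
                self.accumulated = max(0.0, self.accumulated - self.decay_rate * dt)
        return self.accumulated >= self.dwell_time
```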
  • a first step involves electronically displaying an interface on an input screen of the speech generation device, the interface comprising a message window in which a message may be composed by a user and ultimately spoken, and input buttons by which a user selects one or more of words, characters and symbols.
  • the message composed by a user in the message window is electronically tracked. Based on the tracked message being composed within the message window, selected ones of the input buttons are electronically changed to include predictor buttons.
  • the message composed by a user in the message window comprises a phrase including one or more slot placeholders within the phrase, and wherein said predictor buttons comprise one or more corresponding filler words for selection by a user to populate the one or more slot placeholders.
  • the message window comprises a composing window and a separate speak message window, the composing window being configured to display the message being composed by a user, and the speak message window being a separate display area within the message window such that the message within the composing window can only be selected to have it spoken by directing a user's eye gaze to the speak message window and not to the composing window.
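  • As a purely illustrative sketch (the slot syntax and filler lists below are hypothetical examples, not content from the patent), predictor buttons offering filler words for slot placeholders might be derived from the tracked message as follows.

```python
SLOT_FILLERS = {                      # hypothetical concept -> filler word lists
    "{drink}": ["water", "coffee", "juice"],
    "{person}": ["Mom", "Dad", "my nurse"],
}

def predictor_buttons_for(message: str, max_buttons: int = 6) -> list:
    """Return filler words for the first unfilled slot in the composed message."""
    for slot, fillers in SLOT_FILLERS.items():
        if slot in message:
            return fillers[:max_buttons]
    return []                         # no open slot: keep the ordinary input buttons

# predictor_buttons_for("I would like some {drink} please")
# -> ["water", "coffee", "juice"]
```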
  • Still further exemplary embodiments of the disclosed technology concern a method for implementing display of a dashboard hotspot on a display screen using eye tracking.
  • a first step involves electronically establishing a predetermined area defined relative to an input screen for corresponding to a gaze location for implementing a dashboard hotspot.
  • a user's gaze is electronically tracked with an eye gaze controller to determine when a user's gaze is within the predetermined area.
  • a popup window is displayed to a user, the popup window containing a plurality of predetermined critical interface features.
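  • A minimal sketch, assuming a rectangular hotspot and hypothetical function names, of the gaze test that would trigger the dashboard popup described above:

```python
def gaze_in_hotspot(gaze_xy, hotspot):
    """hotspot = (x, y, width, height) in screen pixels; gaze_xy = (x, y)."""
    gx, gy = gaze_xy
    x, y, w, h = hotspot
    return x <= gx <= x + w and y <= gy <= y + h

def update_dashboard(gaze_xy, hotspot, show_popup):
    # show_popup() opens the popup window of predetermined critical items,
    # each reachable by the user in no more than two further selections.
    if gaze_in_hotspot(gaze_xy, hotspot):
        show_popup()
```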
  • a method for assisting a user with control of an electronic device using eye tracking includes a step of electronically tracking a user's gaze with an eye gaze controller to determine when a user's eyes get close to focusing on a given object provided on a display screen associated with the electronic device. An audio signal is then generated for the user once the user's gaze is determined by the eye gaze controller to be within a predetermined distance from the given object.
  • a related electronic device includes an input screen, an eye tracker, speakers, a processor and related computer-readable medium for storing instructions executable by the processor. The input screen displays interface pages to a user.
  • the eye tracker includes at least one light source for illuminating the eyes of a user and at least one photosensor that detects light reflected from the user's eyes to determine where the user is looking relative to the input screen.
  • the speakers provide audio output of signals.
  • the instructions stored on the computer-readable medium configure the electronic device to electronically track a user's gaze with said eye tracker to determine when a user's eyes get close to focusing on a given object provided within an interface page on the input screen, and to generate an audio signal via the speakers once the user's gaze is determined by the eye tracker to be within a predetermined distance from the given object.
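  • The proximity test behind such an audio cue could look like the following sketch (illustrative only; the distance threshold and the play_cue callable are assumptions, not elements of the patent):

```python
import math

def maybe_play_audio_cue(gaze_xy, object_center_xy, cue_distance_px, play_cue):
    """play_cue is any callable that emits the audio cue through the speakers."""
    if math.dist(gaze_xy, object_center_xy) <= cue_distance_px:
        play_cue()   # e.g. a short tone identifying the nearby selectable object
```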
  • Fig. 1 is a head-on plan view of an embodiment of an eye gaze controller in accordance with the present invention
  • Fig. 2 is an elevated perspective view of an embodiment of an eye gaze controller in accordance with the present invention seen from the rear right hand side;
  • Fig. 3 is an elevated perspective view of various disassembled components of an embodiment of an eye gaze controller in accordance with the present invention from the front left side;
  • Fig. 4 is an elevated perspective view of various disassembled components of an embodiment of an eye gaze controller in accordance with the present invention taken from the rear right side;
  • Fig. 5 is a schematic diagram of components of an embodiment of the portable eye gaze controller in accordance with the present invention;
  • Fig. 6 is an elevated perspective view of an embodiment of an eye gaze controller in accordance with the present invention shown attached to an embodiment of a speech generation device seen from the front right hand side;
  • Fig. 7 is a right side plan view of a schematic representation of an embodiment of a portable eye gaze controller in accordance with the present invention shown connected to an embodiment of a speech generation device;
  • Fig. 8 is a side-on view of an embodiment of the right side of an eye gaze controller in accordance with the present invention shown attached to a part of a speech generation device;
  • Fig. 9 is a front plan view of an embodiment of a portable eye gaze controller in accordance with the present invention shown attached to an embodiment of a speech generation device;
  • Fig. 10 is a left side plan view of the embodiment shown in Fig. 1;
  • Fig. 11 is a rear plan view of the embodiment shown in Figs. 1 and 2;
  • Fig. 12 is an exploded cross-sectional view taken in the direction of the arrows labeled 12 - 12 from above the view shown in Fig. 2;
  • Fig. 13 is a schematic diagram of electronic components of an embodiment of the portable eye gaze controller in accordance with the present invention.
  • Fig. 14A schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention
  • Fig. 14B schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention
  • Fig. 15A schematically illustrates components of an embodiment of the speech generator and telephone in accordance with the present invention
  • Fig. 15B schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention
  • Fig. 16A schematically illustrates components of an embodiment of the portable eye gaze controller and associated speech generation device in accordance with the present invention
  • Fig. 16B schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention
  • Fig. 17 schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention
  • Fig. 18 schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention
  • Fig. 19 schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention
  • Fig. 20 schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention
  • Fig. 21 schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention
  • Fig. 22 schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention
  • Fig. 23A schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention
  • Fig. 23B schematically illustrates components of an embodiment of the speech generator and with activated dashboard hotspot in accordance with the present invention
  • Fig. 24 provides an embodiment of a graphical user interface menu provided via software features for providing an exemplary remote control framework for customization
  • Fig. 25 provides an embodiment of a graphical user interface menu provided via software features for providing a remote control framework having buttons with programmed environmental control behaviors
  • Fig. 26 provides an embodiment of a graphical user interface menu provided via software features for providing a My Remote Controls menu
  • Fig. 27 provides an embodiment of a graphical user interface menu provided via software features for providing a Test Standard IR Codes menu
  • Fig. 28 provides an embodiment of a graphical user interface menu provided via software features for providing a Select Remote Control menu
  • Fig. 29 provides an embodiment of a graphical user interface menu provided via software features for providing an IR Browser menu
  • Figs. 30A-30C respectively, provide embodiments of a graphical user interface via software features for performing IR Learning functionality
  • Fig. 31 provides an embodiment of a graphical user interface menu provided via software features for providing an eBook download menu
  • Fig. 32 provides an embodiment of a graphical user interface menu provided via software features for providing eBook content details
  • Fig. 33 provides an embodiment of a graphical user interface menu provided via software features for providing a periodical download menu
  • Fig. 34 displays additional aspects of an embodiment of a graphical user interface menu provided via software features for providing a periodical download menu;
  • Fig. 35 provides an embodiment of a graphical user interface menu provided via software features for providing a favorite searches menu;
  • Fig. 36 provides an embodiment of a graphical user interface menu provided via software features for providing a periodical ID menu
  • Fig. 37 provides another embodiment of a graphical user interface menu provided via software features for providing a favorite searches menu
  • Fig. 38 provides an embodiment of a graphical user interface menu provided via software features for providing an eBook Reader menu
  • Fig. 39 provides an embodiment of a graphical user interface menu provided via software features for selecting an eBook file menu;
  • Fig. 40 provides an embodiment of a graphical user interface menu provided via software features for providing an Available Bookmarks menu;
  • Fig. 41 provides an embodiment of a graphical user interface menu provided via software features for providing a Modify eBook Viewer menu
  • Fig. 42 provides an embodiment of a graphical user interface menu provided via software features for providing a Modify eBook Table of Contents menu
  • Fig. 43 provides an embodiment of a graphical user interface menu provided via software features for providing a Scroll Behavior Settings menu
  • Fig. 44 provides an embodiment of a graphical user interface menu provided via software features for providing an eBook Reader tools toolbar;
  • Fig. 45 provides a partial view of an embodiment of a graphical user interface menu provided via software features for providing a system keyboard, with message window and predictor button features;
  • Fig. 52 provides a partial view of an embodiment of a graphical user interface menu provided via software features for entering an abbreviation;
  • Fig. 53 provides a partial view of an embodiment of a graphical user interface menu provided via software features for displaying an expanded abbreviation;
  • Fig. 54 provides an embodiment of a graphical user interface menu provided via software features for providing a Concept Browser menu;
  • Fig. 55 provides an embodiment of a graphical user interface menu provided via software features for providing a Concept Slot Fillers menu;
  • Fig. 56 provides an embodiment of a graphical user interface menu provided via software features for providing a Title bar toolbar;
  • Fig. 62 provides a second view of an embodiment of a graphical user interface menu provided via software features for implementing slots and fillers;
  • Fig. 63 provides an embodiment of a graphical user interface menu provided via software features for providing a Select Concept for Slot menu;
  • Fig. 64 provides an embodiment of a graphical user interface menu provided via software features for providing a Select a Symbol menu;
  • Fig. 65 provides an embodiment of a graphical user interface menu provided via software features for providing a Select Slot Filler menu;
  • Fig. 66 provides an embodiment of a graphical user interface menu provided via software features for providing a Behavior Editor menu;
  • Fig. 67 provides an embodiment of a graphical user interface menu provided via software features for providing another Select a Symbol menu;
  • Fig. 68 provides exemplary orientations for mounting of an eye gaze controller in accordance with aspects of the present invention;
  • Fig. 69 provides an embodiment of a graphical user interface menu provided via software features for providing an Eye Tracking Settings Menu;
  • Fig. 70 provides an embodiment of a graphical user interface menu provided via software features for providing a Blink Settings Menu;
  • Fig. 71 provides an embodiment of a graphical user interface menu provided via software features for providing a Dwell Settings menu;
  • Fig. 72 provides an embodiment of a graphical user interface menu provided via software features for providing a Switch Settings menu;
  • Fig. 73 provides an embodiment of a graphical user interface menu provided via software features for providing a Target Settings menu;
  • Fig. 77 provides an embodiment of a graphical user interface menu provided via software features for showing a fill type example, where fill is indicated in a contract format;
  • A presently preferred embodiment of the portable eye gaze controller in accordance with the present invention is shown in Figs. 1-3 and is represented generally by the numeral 20.
  • the eye gaze controller 20 includes a housing, which desirably is defined by a front shell 21 and an opposing rear shell 22.
  • the front shell 21 and the rear shell 22 desirably are detachably connected to one another as by selectively removable mechanical fasteners such as screws 23.
  • the rear shell 22 of the housing carries a universal serial bus (USB) connector in the form of a USB socket 24, which also is shown in Fig. 4 for example.
  • This USB connector 24 enables the portable eye gaze controller 20 to be connected to a microprocessor.
  • the microprocessor can be in any one of a number of different types of devices, including a personal computer for example.
  • the microprocessor desirably forms part of a speech generation device that is to be controlled using the portable eye gaze controller 20.
  • a separate microprocessor dedicated to operation of the portable eye gaze controller 20 can be provided in the housing of the portable eye gaze controller 20.
  • the eye gaze controller 20 includes a main board 28 on which the integrated circuits are mounted along with the USB connector 24, which is electrically connected to the integrated circuits.
  • the USB connector 24 enables the portable eye gaze controller 20 to be connected with any computer device, and in particular with any computer device that forms part of a speech generation device, which is indicated generally in Fig. 6 for example by the designating numeral 30.
  • the selection software that implements the user's decision to select an object displayed on the input screen 33 must be provided with the capability of using inputs from an eye gaze controller to effect the selection of the objects displayed on the input screen 33 of the speech generation device.
  • the selection software runs on a microprocessor of the speech generation device 30 or runs on a dedicated microprocessor of the eye gaze controller 20.
  • the selection software includes an algorithm that enables the eye gaze controller 20 to deal effectively with images of the user's eye that are slightly out of focus.
  • the eye gaze controller 20 permits the user to employ one or more selection methods to select an object on the display screen 33 of the speech generation device 30 by taking some action with the user's eyes.
  • Optional selection methods that can be activated using the eye gaze controller 20 to interact with the display screen 33 of the speech generation device 30 include blink, dwell, blink/dwell, blink/switch and external switch.
  • a selection will be performed when the user gazes at an object on the input screen 33 of the speech generation device 30 and then blinks for a specific length of time.
  • the system also can be set to interpret as a "blink" a set duration of time during which the camera 50 cannot see the user's eye.
  • the dwell method of selection is implemented when the user's gaze is stopped on an object on the input screen 33 of the speech generation device 30 for a specified length of time.
  • the blink/dwell selection combines the blink and dwell selection so that the object on the input screen 33 of the speech generation device 30 can be selected either when the user's gaze is focused on the object for a specified length of time or if before that length of time elapses, the user blinks an eye.
  • an object is selected when the user gazes on the object for a particular length of time and then closes an external switch.
  • the blink/switch selection combines the blink and external switch selection so that the object on the input screen 33 of the speech generation device 30 can be selected when the user blinks while gazing on the object and then closes an external switch.
  • the user can make direct selections instead of waiting for a scan that highlights the individual object on the input screen 33 of the speech generation device 30.
  • the system that uses the eye gaze controller 20 to interact with the input screen 33 of the speech generation device 30 can be set (at the user's discretion) to track both eyes or can be set to track only one eye.
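  • The following sketch (not from the patent; the threshold values are hypothetical) summarizes how the blink, dwell, blink/dwell, switch and blink/switch selection methods described above might be evaluated for a gazed-at object.

```python
def is_selected(method, dwell_s, blink_s, switch_closed=False,
                dwell_setting=1.0, blink_setting=0.3):
    """dwell_s: seconds of gaze on the object; blink_s: duration of the last
    blink made while gazing at it; switch_closed: state of an external switch."""
    if method == "dwell":
        return dwell_s >= dwell_setting
    if method == "blink":
        return blink_s >= blink_setting
    if method == "blink/dwell":
        return dwell_s >= dwell_setting or blink_s >= blink_setting
    if method == "switch":
        return dwell_s >= dwell_setting and switch_closed
    if method == "blink/switch":
        return blink_s >= blink_setting and switch_closed
    return False
```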
  • As shown in Fig. 6, USB connectors in the form of USB plugs 26 on each opposite end of a USB cable 25 can be used to connect the speech generation device 30 and the portable eye gaze controller 20.
  • a type B USB connector 26 can be plugged into a corresponding type A USB connector socket 24 (Fig. 8 for example), which can be carried on the right side of the housing for the eye gaze controller 20.
  • a type B USB connector plug 26a can be plugged into a corresponding type A USB connector socket 24a, which can be carried on the right side of the housing for the speech generation device 30.
  • the portable eye gaze controller further comprises an eye tracker device.
  • Eye tracker devices are known and are commercially available in several different operating configurations. Suitable eye tracker devices are available from Eye Tech Digital Systems, Inc. of Mesa, Arizona and include both hardware and selection software, which desirably includes an algorithm that enables the eye tracker to deal effectively with images of the user's eye that are slightly out of focus.
  • a basic eye tracker device employs a light source and a photosensor that detects light reflected from the viewer's eyes.
  • a video-based gaze tracking system contains a processing unit which executes image processing routines such as detection and tracking algorithms employed to accurately estimate the centers of the subject's eyes, pupils and corneal-reflexes (known as glint) in two- dimensional images generated by a mono-camera near infrared system.
  • the gaze measurements are computed from the pupil and glint (reference point).
  • a mapping function - usually a second order polynomial function - is employed to map the gaze measurements from the two-dimensional image space to the two-dimensional coordinate space of the input display 33 of the speech generation device 30.
  • the coefficients of this mapping function are estimated during a standard interactive calibration process in which the user is asked to look consecutively at a number of points displayed (randomly or not) on the input display 33.
  • Known calibration techniques for passive eye monitoring may use a number of calibration points ranging, for example, from one to sixteen points.
  • a test desirably is conducted as follows. The user is asked again to look at some points on the input display 33, the gaze points are estimated using the mapping function, and an average error (in pixels) is computed between the actual points and the estimated ones. If the error is above a threshold, then the user needs to re-calibrate.
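  • As an illustrative sketch of the calibration just described (assuming NumPy and a simple six-term quadratic feature vector; the error threshold used here is a hypothetical value, not one stated in the patent), the mapping coefficients and the re-calibration test might be computed as follows.

```python
import numpy as np

def quad_features(v):
    x, y = v
    return np.array([1.0, x, y, x * y, x * x, y * y])

def calibrate(pupil_glint_vectors, screen_points):
    """Least-squares fit of the screen = F(pupil-glint vector) mapping."""
    A = np.array([quad_features(v) for v in pupil_glint_vectors])  # N x 6
    B = np.array(screen_points, dtype=float)                       # N x 2
    coeffs, *_ = np.linalg.lstsq(A, B, rcond=None)                 # 6 x 2
    return coeffs

def estimate_gaze(coeffs, pupil_glint_vector):
    return quad_features(pupil_glint_vector) @ coeffs              # (x, y) on screen

def needs_recalibration(coeffs, test_vectors, test_points, threshold_px=40.0):
    """Average pixel error over verification points, per the test described above."""
    errors = [np.linalg.norm(estimate_gaze(coeffs, v) - np.array(p, dtype=float))
              for v, p in zip(test_vectors, test_points)]
    return float(np.mean(errors)) > threshold_px
```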
  • eye tracker devices are known, and any of them can be employed in accordance with the present invention. Examples of eye tracker devices are disclosed in U.S.
  • Examples of suitable eye tracker devices also are disclosed in U.S.
  • Figs. 3, 4 and 5 schematically show the arrangement of several of the main components of an embodiment of a portable eye gaze controller 20 in accordance with the present invention.
  • the eye tracker of the portable eye gaze controller 20 desirably can include a USB video camera 50, a focusing lens 40, a left infrared LED array 41 and a right infrared LED array 42.
  • a suitable video camera 50 is available from Sony in the form of the Sony® 1.3MP 1/3" ICX445 EXview HAD CCD® video camera, which is a 1.3 megapixel video camera having a resolution of 1296 x 964 at 18 frames per second (FPS) and a USB 2.0 5-pin Mini-B digital interface. Each pixel measures 3.75 microns by 3.75 microns.
  • the USB video camera 50 should have a signal-to-noise ratio of at least about 5 in order to be able to distinguish image features with 100% certainty (according to the Rose criterion).
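  • One crude way to check the Rose criterion mentioned above is sketched below; this simple contrast-to-noise estimate is an assumption for illustration only, as the patent does not specify how the signal-to-noise ratio is measured.

```python
import numpy as np

def rose_snr(feature_pixels: np.ndarray, background_pixels: np.ndarray) -> float:
    """Contrast-to-noise ratio of an image feature (e.g. a corneal glint)."""
    contrast = float(feature_pixels.mean() - background_pixels.mean())
    noise = float(background_pixels.std())
    return contrast / noise if noise > 0 else float("inf")

def meets_rose_criterion(feature_pixels, background_pixels, min_snr: float = 5.0) -> bool:
    return rose_snr(feature_pixels, background_pixels) >= min_snr
```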
  • the focusing lens 40 is mounted in an adjustable lens housing 40a and disposed in front of the video camera 50.
  • the adjustable lens housing 40a desirably can be mechanically locked into position so that the focus of the lens 40 does not change with vibration or drops.
  • a high quality Tamron brand lens having a 16MM focal length with an iris range of F/1.4 to 16 provides a suitable lens 40 and housing 40a.
  • the focusing lens 40 and video camera 50 are aligned with a central opening 21a that is defined in the front shell 21 of the housing of the eye gaze controller 20.
  • As shown schematically, an O-ring 40b desirably is disposed against the front surface of the periphery of a lens cover 40c formed of transparent glass so that the lens cover 40c can be sealed against the back of the front shell 21 of the housing of the eye gaze controller 20. In this way, the lens 40 is protected against tampering, soiling or other undesirable environmental conditions.
  • the eye tracker of the portable eye gaze controller 20 desirably includes a left infrared LED array 41 and a right infrared LED array 42.
  • Each of the light emitting diodes (LED) 41a, 42a in each respective infrared LED array 41 , 42 desirably emits at a wavelength of about 880 nanometers, which is the shortest wavelength that was deemed suitable for use without distracting the user (the shorter the wavelength, the more sensitive the sensor, i.e., video camera 50, of the eye tracker).
  • LEDs 41a, 42a operating at wavelengths other than about 880 nanometers easily can be substituted and may be desirable for certain users and/or certain environments.
  • each of the light emitting diodes (LED) 41a, 42a desirably is a 5 mm diode.
  • each array 41, 42 desirably contains seven staggered vertical columns of LEDs 41a, 42a with six LEDs 41a, 42a in each column.
  • a respective transparent protective cover 41b, 42b for each of the infrared LED arrays 41, 42 is disposed against the back of the front shell 21 of the housing of the eye gaze controller 20 and in front of each respective infrared LED array 41, 42.
  • As shown in Fig. 4 for example, the USB video camera 50 is mounted to the back of the rear shell 22 of the housing of the eye gaze controller 20.
  • each of the infrared LED arrays 41, 42 desirably is mounted to the back of the front shell 21 of the housing of the eye gaze controller 20.
  • the camera 50, the lens 40a and the central opening 21a of the front shell 21 are disposed centrally between the left infrared LED array 41 and the right infrared LED array 42. Moreover, the camera 50, the lens 40a and the central opening 21a of the front shell 21 desirably are disposed aligned in a straight line with the infrared LED arrays 41, 42 such that a straight line horizontally bisects the central opening 21a as well as each of the infrared LED arrays 41, 42.
  • each of the LED arrays 41, 42 is disposed tilted toward the central opening 21a at an angle of about eleven degrees from the horizontal plane, to a degree that maximizes the depth range over which movements of the user's eyes can be detected by the eye tracker for the separation S shown in Fig. 1.
  • each of the LED arrays 41, 42 is therefore disposed tilted toward the central opening 21a at an angle of about seventy-nine degrees from the central axis 51 of the USB video camera 50.
  • with a tilt angle of about 8.1 degrees, the depth range of the eye tracker extends over a range of about 16.5 inches to about 28 inches when, as schematically shown in Fig. 1, the separation S between the vertical centerlines of the two LED arrays 41, 42 in the same plane as the plane of the lens 40a is about 9.4 inches.
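  • As a rough geometric aid only (this is not the patent's calculation, and the usable depth range also depends on LED beam spread and the camera's field of view), the distance at which the tilted LED array axes cross the camera's central axis 51 can be estimated from the separation S and the tilt angle:

```python
import math

def axis_crossing_distance(separation_in: float, tilt_deg: float) -> float:
    """Distance (same units as separation_in) at which the two tilted LED
    array axes cross the camera's central axis."""
    return (separation_in / 2.0) / math.tan(math.radians(tilt_deg))

# axis_crossing_distance(9.4, 11.0) -> roughly 24 inches
# axis_crossing_distance(9.4, 8.1)  -> roughly 33 inches
```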
  • a rigid mounting bracket 45 desirably is provided to attach the portable eye gaze controller 20 to the speech generation device 30.
  • the proximal portion 45a of the mounting bracket 45 must be configured and disposed to be rigidly attached, as by threaded screws 45c, to the bottom panel 32 of the housing for the speech generation device 30.
  • the proximal portion 45a of the mounting bracket 45 is configured and disposed to function as a replacement door that closes the battery compartment of the speech generation device 30.
  • the distal section 45b of the mounting bracket 45 desirably can be rigidly connected, as by threaded screws 45d, to the rear shell 22 of the housing for the portable eye gaze controller 20. Moreover, the distal section 45b of the mounting bracket 45 desirably is disposed at an angle with respect to the proximal section 45a of the mounting bracket such that the plane of the lens 40 in front of the video camera 50 of the eye gaze controller 20 is disposed at an angle of about 160 degrees with respect to the plane in which the input display 33 of the speech generation device 30 is disposed.
  • the outputs of the USB video camera 50 and the two infrared LED arrays 41, 42 of the eye gaze controller 20 are the outputs of the eye tracker that are provided as inputs to the speech generation device 30 via the type A USB 2.0 connector socket 24. These inputs are provided to the microprocessor of the speech generation device 30 and are processed to generate control signals for controlling operation of the speech generation device 30 by the user's eye movements. Alternatively, a separate microprocessor that is dedicated to operation of the portable eye gaze controller 20 can be provided in the housing of the portable eye gaze controller 20.
  • the outputs from the USB video camera 50 and the two infrared LED arrays 41, 42 would then be provided to and processed by the dedicated microprocessor of the eye gaze controller 20 to generate control signals for controlling operation of the speech generation device 30 by the user's eye movements.
  • two spaced apart indicator lights 21b, 21c desirably are disposed beneath the central opening 21a defined in the front shell 21.
  • the eye gaze controller 20 is configured to illuminate each indicator light 21b, 21c when the eye tracker has acquired the location of the user's eye associated with that indicator light.
  • the eye tracker's acquisition of the location of the user's eye may require using the processing power of either the microprocessor in the speech generation device 30 or of a dedicated microprocessor in the eye gaze controller 20, as the case may be.
  • when the eye tracker has acquired the location of the user's left eye, the eye gaze controller 20 is configured to illuminate the left indicator light 21b.
  • when the eye tracker has acquired the location of the user's right eye, the eye gaze controller 20 is configured to illuminate the right indicator light 21c.
  • the portable eye gaze controller 20 further comprises a self-contained power supply that powers the eye tracker and that is separate from any power source for the speech generation device that is being controlled by the eye gaze controller 20.
  • the power supply is provided in the form of a pack 27a of six lithium-ion batteries 27.
  • Each battery 27 desirably is a rechargeable lithium ion battery having a target life of at least about six hours and a nominal voltage of about 3.7 volts.
  • the batteries 27 are configured with two batteries 27 electrically connected in series and three batteries 27 electrically connected in parallel.
  • the pack 27a of six batteries 27 electrically connected in this way provides a nominal voltage of about 7.4 volts.
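  • A short worked check of the pack arrangement described above (two cells in series, three such strings in parallel); the cell capacity used in the comment is a hypothetical example, as the patent does not state one.

```python
def pack_voltage(cell_voltage: float = 3.7, cells_in_series: int = 2) -> float:
    """Nominal pack voltage: series cells add their voltages."""
    return cell_voltage * cells_in_series          # 3.7 V x 2 = 7.4 V nominal

def pack_capacity(cell_capacity_mah: float, strings_in_parallel: int = 3) -> float:
    """Pack capacity: parallel strings add their capacities."""
    return cell_capacity_mah * strings_in_parallel

# pack_voltage() -> 7.4
# pack_capacity(2200) -> 6600   (assuming hypothetical 2200 mAh cells)
```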
  • the batteries 27 provide electric power in the form of direct current to the USB video camera 50 and to the two infrared LED arrays 41, 42 through a battery charger 43.
  • a complete constant-current/constant-voltage charger 43 for lithium batteries 27 is available from Linear Technology Corporation of Milpitas, California under the tradename LTC® 4006 for example.
  • an AC/DC transformer 43a can be connected to the portable eye gaze controller 20, which can be connected to the speech generation device 30 in order to charge simultaneously each battery 27 of the portable eye gaze controller 20 and each battery of the speech generation device 30.
  • the AC/DC transformer 43a is connected to the battery charger 43 (Fig. 5).
  • the AC/DC transformer 43a can be connected directly to the speech generation device 30 to charge only the battery in the speech generation device 30 or connected directly to the portable eye gaze controller 20 to charge only the batteries 27 in the portable eye gaze controller 20.
  • the identical AC/DC transformer 43a can be used for each of the portable eye gaze controller 20 and the associated speech generation device 30, whether to charge batteries and/or to power the respective device.
  • a power output port 35a and a charger port 36a are carried by the housing of the eye gaze controller 20 and mounted on the main board 28 (Figs. 3 and 4).
  • the power output port 35a of the eye gaze controller 20 is configured to be connected to a charger port 35b of the speech generation device 30 via suitable connectors 35c on the opposite ends of a suitable power cable 35d.
  • the charger port 36a of the eye gaze controller 20 is configured to be connected to an AC/DC transformer 43a via a suitable connector 36b on the opposite end of a suitable charger cable 36c.
  • a power indicator LED 37a and a charging indicator LED 37b are provided and carried by the housing of the eye gaze controller 20.
  • the eye gaze controller 20 is configured to illuminate the power indicator LED 37a when the eye gaze controller 20 is receiving power and is operating.
  • the power indicator LED 37a can be covered with a sleeve that desirably is green in color so that when illuminated, the power indicator LED 37a will be seen as a green indicator light.
  • the eye gaze controller 20 is configured to illuminate the charging indicator LED 37b when the batteries of the eye gaze controller 20 and the speech generation device 30 are being charged.
  • the charging indicator LED 37b can be covered with a sleeve that desirably is amber in color or of a different color than the color of the sleeve that covers the power indicator light 37a so that when illuminated, the charging indicator LED 37b will be seen as a different color than the color of the power indicator light 37a.
  • the eye gaze controller 20 is configured to stop illuminating the charging indicator LED 37b when the batteries have been fully charged.
  • the eye gaze controller 20 desirably is configured to include a preprogrammed chip for remote control of electronic devices found in the user's environment. A suitable remote control chip is provided by Universal Electronics, Inc.
  • Fig. 13 schematically illustrates how the remote control chip is integrated into suitable electronic components carried for this purpose on the main board 28 (Figs. 3 and 4) of an embodiment of the eye gaze controller 20.
  • in order to configure the eye gaze controller 20 to control devices outside of North America, the eye gaze controller 20 can be provided with an additional pre-programmed remote control chip.
  • this remote control chip desirably is similarly integrated into suitable electronic components carried for this purpose on the main board 28 (Figs. 3 and 4) of an embodiment of the eye gaze controller 20.
  • the eye gaze controller 20 is configured to associate each of these sets of commands on each of these chips with a separate page displayed on the input display 33 of the speech generation device 30 and corresponding to the consumer electronic device that is to be controlled by the user. For example, from the menu containing consumer electronics such as TV, DVD, VCR, radio, etc., the user can select a particular Sony® television, and the buttons for the remote control for that particular Sony® television will appear on the display screen 33 of the speech generation device 30.
  • when the user uses the eye gaze controller 20 to select the desired button on the display screen 33, the remote control chip enables the eye gaze controller 20 to control the speech generation device 30 to emulate the remote control signal associated with that selected button, as if the user were using the actual remote control for that Sony® television.
  • the eye gaze controller 20 affords the user greater control over the user's environment, while not having to rely on others for lengthy programming of the eye gaze controller 20, which already has been pre-programmed with appropriate pages for environmental control of consumer electronics.
  • the microprocessor of the speech generation device 30 is programmed to map the information on the remote control chip for the chosen electronic device (such as Sony TV) that is to be remotely controlled, to buttons on pre-made pages that will appear on the input display 33 of the speech generation device 30.
  • if the eye gaze controller 20 has its own dedicated microprocessor, then the information can instead be mapped on that dedicated microprocessor of the eye gaze controller 20.
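  • The page-to-IR-command mapping described above might be organized as in the following sketch; all appliance names, button labels and code identifiers are hypothetical placeholders, not values taken from the remote control chip.

```python
REMOTE_PAGES = {
    "Sony TV": {"Power": "IR_SONY_TV_POWER", "Volume Up": "IR_SONY_TV_VOL_UP"},
    "DVD player": {"Play": "IR_DVD_PLAY", "Stop": "IR_DVD_STOP"},
}

def build_remote_page(appliance: str, send_ir):
    """Return on-screen buttons whose gaze selection emits the matching IR code.

    send_ir is whatever callable ultimately drives the IR emitter.
    """
    commands = REMOTE_PAGES[appliance]
    return {label: (lambda code=code: send_ir(code)) for label, code in commands.items()}

# page = build_remote_page("Sony TV", send_ir=print)
# page["Power"]()   # emits (here: prints) the code mapped to the Power button
```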
  • Figs. 14A and 14B schematically illustrate flow charts for an exemplary program that can be provided for the eye gaze controller 20 (to be run on the microprocessor dedicated to the controller 20 or on the microprocessor of the associated speech generation device 30) so that the user can operate the eye gaze controller 20 to define the desired remote control in the user's environment and use it to activate the desired appliance that responds to that remote control.
  • exemplary speech generation devices having remote control capabilities include the provision of software tools with which a user may program a remote control or other infrared (IR) command, or to access a computer using a USB connection or Bluetooth communications link.
  • Such software-enabled functionality may be provided as software stored on the microprocessor or other dedicated memory device associated with the speech generation device 30.
  • Such software tools including the data defining the graphical screenshots and menu interfaces for displaying to a user, may be stored as instructions in a computer-readable medium within a memory element. The microprocessor within the speech generation device 30 or other processor or controller device may then execute such software instructions to provide these and other software tools and features of the present invention.
  • A particular example of how to program a speech generation device to send remote control signals is now presented with reference to Figs. 24-28.
  • As shown in Fig. 15A, at least one IR emitter 30a is provided within the speech generation device 30 to send infrared (IR) signals to any appliance that can be used with an IR remote control.
  • software features are provided that enable a user to both program a speech generation device to function with remote control capability for specified appliances as well as use environmental control behaviors to add remote control commands to buttons on a user's displayed communication page.
  • buttons shown in the user interface menu of Fig. 24 feature labels such as "TV On/Off," "Volume Up" and "Volume Down."
  • a user may use an existing remote control to teach a command (for example, the signal for turning on the TV in a user's living room) to the speech generation device 30. Then, the user may use an environmental control behavior to add the new remote control command to a button on the user's page.
  • buttons on other environmental control pages are already programmed with environmental control behaviors.
  • a name for each appropriate remote control command, for example "Family Room TV Channel Down" or "Family Room TV Power", has already been created in the IR Browser menu.
  • a user can simply select the command name in the IR Browser menu and use his remote control to teach the command to the speech generation device 30.
  • the microprocessor within the speech generation device 30, in conjunction with the additional circuitry such as illustrated in Fig. 13 in the eye gaze controller 20 may be programmed with software that provides a number of default remote controls that a user can program to use as the remote control for the user's electronic appliance.
  • a user simply selects the default remote in the software that matches a given appliance (e.g., TV, VCR, DVD) and then uses a remote control wizard in the software to program the default remote for the appliance.
  • a default remote is programmed by the following interactive steps, available from within a user interface such as the "My Remote Controls" menu displayed in Fig. 26.
  • In the viewport of Fig. 26, a user may select the default remote that he wants to program.
  • the "Program the selected remote control” button will be activated, and a remote control wizard will open.
  • a first portion of the viewport shows the steps involved in programming the selected default remote, with each step being highlighted as it is performed.
  • the user may select the manufacturer of the appliance in a second portion of the main viewport.
  • the software wizard will display a number of possible standard codes that may be valid for the given appliance.
  • the wizard may display the steps for a user to perform to "learn" each IR command individually.
  • a Test Standard IR Codes menu may open on the speech generation device 30, an example of which is shown in Fig. 27. [00170] Referring still to Fig. 27, a user may press the POWER button while aiming the IR output port of the IR emitter 30a of the speech generation device 30 at the appliance. The software interface will show the current code being tested. If the appliance shuts off, then the appliance successfully received the proper IR signal from speech generation device 30.
  • a user may then select the "Yes - button works as expected” button.
  • the wizard will inform the user that he has successfully programmed his equipment with the appropriate IR commands. If the appliance does not shut off, it means that the appliance did not receive the proper IR signal, and a user may select the "No - button does not work as expected” button. If none of the standard codes works as expected (i.e. by shutting off the electronic appliance), then a user may select the "No - button does not work as expected” button after the last standard code is tested. [00171] The software wizard will give a user the option to "discover" a non- standard code that may work with the appliance.
  • the user selects the "Discover the right code” button to attempt to discover a non-standard code that will control the user's appliance.
  • a Discover Non-Standard IR Codes menu will open and display the total number of non-standard codes that can be tested.
  • the software automatically begins testing the first ten (or other predetermined number) non-standard codes. If one of the codes successfully shuts off the appliance, then a user should select the "Yes - one of the commands did what was expected" button.
  • the Test Non-Standard IR Codes menu opens, allowing a user to find the specific code that powered off the electronic appliance.
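As a rough illustration of the code-testing flow just described, the sketch below walks a hypothetical code list in the same order: standard codes first, then non-standard codes in batches of ten, asking the user after each pass whether the appliance responded. The code lists and the send_ir() and ask_user() helpers are invented stand-ins for the device's IR driver and the wizard's on-screen buttons, not the patent's actual implementation.

```python
"""Minimal sketch of the standard/non-standard IR code testing flow (assumed names)."""

from typing import Optional

STANDARD_CODES = ["std-001", "std-002", "std-003"]          # hypothetical code database
NON_STANDARD_CODES = [f"alt-{i:03d}" for i in range(40)]     # hypothetical code database
BATCH_SIZE = 10                                              # "first ten" codes per batch


def send_ir(code: str) -> None:
    """Stand-in for emitting the POWER command using a candidate code."""
    print(f"emitting POWER using code {code}")


def ask_user(prompt: str) -> bool:
    """Stand-in for the wizard's Yes/No buttons."""
    return input(prompt + " [y/n] ").strip().lower() == "y"


def find_working_code() -> Optional[str]:
    # First pass: try each standard code individually.
    for code in STANDARD_CODES:
        send_ir(code)
        if ask_user("Yes - button works as expected?"):
            return code

    # Fallback: "discover" non-standard codes in batches.
    for start in range(0, len(NON_STANDARD_CODES), BATCH_SIZE):
        batch = NON_STANDARD_CODES[start:start + BATCH_SIZE]
        for code in batch:
            send_ir(code)
        if ask_user("Yes - one of the commands did what was expected?"):
            # Narrow down to the single code that powered off the appliance.
            for code in batch:
                send_ir(code)
                if ask_user(f"Did code {code} work?"):
                    return code
    return None


if __name__ == "__main__":
    print("Working code:", find_working_code())
```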
  • the software displays the steps to perform in the remote control wizard. For example, to manually learn each IR command, the following steps may be implemented: (1) Obtain the remote control that belongs to the given appliance. (2) Turn on the given electronic appliance. (3) Aim the remote control at the IR port on the speech generation device 30. (4) Select the "Start learning each command” button. The IR Learning popup will open. Select the "Start IR Learning” button on the input screen 33 of the speech generation device 30 and then press the appropriate button on the remote control. The user will be automatically prompted for each command that the device must learn from the remote control. (5) Select the "Stop IR Learning” button when finished. If the device did not receive a signal from the remote control, a window will inform the user that no signal was detected. Select the "Try again” button to send the signal again.
  • the IR command may have a maximum time interval of 20 seconds.
  • a user may select the type of electronic appliance that the custom remote will control.
  • a text box will prompt the user to enter a name for the new remote control.
  • a user may then select the text box to open the system keyboard and enter a name for the custom remote control (e.g., Justin's DVD Player, Kitchen TV).
  • the name entered on the system keyboard is displayed in the text box.
  • the "Pick the manufacturer” step is highlighted in the left viewport. From this point, similar steps as implemented in the above-described "Program a Default Remote" procedure may be used to finish creating a custom remote control.
  • Additional software features may be available per some embodiments of the present invention for assigning a remote control to a page.
  • a custom remote may be assigned to a remote control page by following these steps: (1) Select the Main Menu button in the title bar. (2) Select Setup in the main drop-down menu. (3) Select Page Navigator in the second drop-down menu. (4) In the left viewport, select the folder that contains the remote control page desired for use. (5) In the right viewport, select the remote control page desired for use to control an appliance. (6) Select the Go to Page button. The selected remote control page will open.
  • Behaviors box The Behavior Editor menu will open. (15d) View the behaviors displayed in the Steps viewport. If the Set Active Remote behavior is displayed, proceed to step 16. If the Set Active Remote behavior is not displayed, proceed to step 17. (16) If the Set Active Remote behavior is displayed in the Steps viewport, perform the following steps: (16a) Select the Set Active Remote behavior in the Steps viewport.
  • the Select Remote Control menu will open, an example of which is shown in Fig. 28.
  • the name of the remote control you selected is displayed in parentheses beside Set Active Remote behavior in the Steps viewport of the Behavior Editor menu.
  • if a user wants to use the speech generation device 30 to remotely control a device other than a standard electronic appliance (e.g., a remote-controlled ceiling fan, X-10 light, toy, etc.), the user first adds a name for the command to the IR Browser menu.
  • user selections open up an IR Browser menu, an example of which is provided in Fig. 29.
  • a user may select the New button.
  • the system keyboard will open, and a user can enter a name for the new IR remote control command.
  • a user can use the actual remote control unit (such as a TV remote control) to teach the appropriate IR signal to the speech generation device 30. To do this, a user can open the IR Browser menu and select the name of the command the user wants to edit. The scroll bar may be needed to see all the stored command names.
  • an IR Learning window (an example of which is shown in Fig. 30) will open and display the name of the command that is being learned.
  • the user should then aim the remote control at the IR port on the speech generation device 30, select the Start IR Learning button on the device and press the appropriate button on the remote control. At this point, the Start IR Learning button changes to a Stop IR Learning button (as shown in the exemplary display of Fig. 30B), which button may be selected upon completion of the command learning.
  • the IR learning window will let the user know that the command learning is complete as shown in the exemplary display of Fig. 30C.
  • by selecting the OK button in the IR learning window and also the OK buttons in the IR Browser and Tools menus, the process can be completed. If an environmental control behavior and the name for this new command already have been assigned to a button, then the button can now be successfully used for remote control of an electronic appliance.
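The IR learning step described above can be summarized, very loosely, as a capture loop with a bounded listening window. The snippet below is a sketch under assumed names: capture_ir_signal() stands in for the driver call that reads a raw burst from the IR receiver, and the 20-second window mirrors the maximum interval mentioned above.

```python
"""Illustrative sketch of "IR learning" with a bounded capture window (assumed names)."""

import time
from typing import Dict, Optional

LEARN_TIMEOUT_S = 20.0                      # maximum capture window
learned_commands: Dict[str, bytes] = {}     # command name -> raw IR signal


def capture_ir_signal(timeout: float) -> Optional[bytes]:
    """Hypothetical stand-in for reading a raw IR burst from the receiver (stub)."""
    time.sleep(0.1)                          # pretend to wait on hardware
    return b"\x12\x34\x56"                   # canned signal for the sketch


def learn_command(name: str) -> bool:
    """Mimics Start IR Learning -> press remote button -> Stop IR Learning."""
    print(f"Learning '{name}': aim the remote at the IR port and press its button.")
    deadline = time.monotonic() + LEARN_TIMEOUT_S
    signal = capture_ir_signal(timeout=deadline - time.monotonic())
    if signal is None:
        print("No signal was detected. Select 'Try again' to send the signal again.")
        return False
    learned_commands[name] = signal
    print(f"Command '{name}' learned.")
    return True


if __name__ == "__main__":
    learn_command("Family Room TV Power")
```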
  • Select the Modify button in the title bar; the button will turn red after it is selected.
  • the Modify Button menu will open.
  • the Behavior Editor menu will open.
  • the Behaviors viewport will display the available environmental control behaviors. (6) Select the
  • Steps viewport in the Behavior Editor menu (10) Select the OK button to close the Behavior Editor menu. (11) Select the OK button to close the Modify Button menu. The button that has been selected now has an environmental control behavior and a remote control command.
  • the speech generation device 30 desirably is provided with pre-programmed content that the user can select using the eye gaze controller 20 in order to be able to make and receive telephone calls with an eye-tracking access method. Additionally, as schematically shown in Fig. 16A, a special arrangement must be made to allow communication between the speech generation device 30 and a telephone 29 that connects to the plain old telephone service 29a.
  • Another option schematically shown in Fig. 15A for this arrangement is the provision of any of several available infrared-controlled telephones 29, each of which is potentially controllable in the manner described above as an electronic appliance in the user's environment.
  • the speech generation device 30 also is provided with an infrared emitter 30a and an infrared receiver 30b. In this way, as schematically shown in Fig. 15A, the telephone 29 is enabled to receive infrared commands 39a from the speech generation device 30 and to send voice transmissions via infrared signals 39b to the speech generation device 30.
  • the speech generation device 30 is programmed to display, on the input screen 33, a menu containing the various telephone functions, which the user can perform by eye selection via the eye gaze controller 20.
  • the menu displays buttons, the selection of which generates the desired logic or sequences for telephone communication. Some of these buttons simply represent the numbers that are being dialed on the telephone. Another button desirably collects the string of numbers so they can be dialed at once. Other buttons are provided to represent telephone commands like "hang up,” “answer,” “automatically dial,” “speed-dial,” “program speed-dial,” “receive calls,” “dial 911 ,” “talk with the party who is listening over the phone” and “listen to the party who is talking over the phone” as well.
  • Fig. 15B schematically illustrates a software protocol desirably used by the speech generation device 30 under the control of the eye gaze controller 20 to place a telephone call, converse with the party called, and hang up the telephone call.
  • the user can control the speech generation device 30 to speak over the telephone 29, and as schematically shown in Fig. 15A, the user can hear the caller's voice via the speakers 30c that are provided as a component of the speech generation device 30.
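A minimal sketch of the dialing logic implied by the telephone menu buttons follows. The PhoneController class and the send_ir_command() helper are hypothetical; the point is only that number buttons accumulate digits that are then dialed at once, while other buttons map to single commands such as answer or hang up.

```python
"""Sketch of the telephone menu logic described above (assumed names)."""

from typing import List


def send_ir_command(command: str) -> None:
    """Hypothetical stand-in for the IR link to the infrared-controlled telephone."""
    print(f"IR -> phone: {command}")


class PhoneController:
    def __init__(self) -> None:
        self.collected_digits: List[str] = []

    def press_digit(self, digit: str) -> None:
        """A number button simply appends a digit to the collected string."""
        self.collected_digits.append(digit)

    def dial_collected(self) -> None:
        """The 'dial the collected number at once' button sends the whole string."""
        number = "".join(self.collected_digits)
        send_ir_command(f"DIAL {number}")
        self.collected_digits.clear()

    def answer(self) -> None:
        send_ir_command("ANSWER")

    def hang_up(self) -> None:
        send_ir_command("HANG_UP")


if __name__ == "__main__":
    phone = PhoneController()
    for d in "5551234":
        phone.press_digit(d)
    phone.dial_collected()
    phone.hang_up()
```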
  • [00181] eBook Reader.
  • the speech generation device 30 desirably can be provided with pre-programmed content and/or selectably downloaded or imported content that the user can select using the eye-tracking access method provided by the eye gaze controller 20 in order to be able to order, download and read so-called e-books.
  • the speech generation device 30 desirably is provided with an internet browser 30d, a high speed modem 30e and a high speed internet connection 30f by which the speech generation device 30 can access websites from which e-books can be selected and downloaded.
  • the speech generation device 30 desirably is provided with an e-book reader 30g, which desirably is a software package that runs on the microprocessor of the speech generation device 30 (or alternatively on the dedicated microprocessor of the eye gaze controller 20).
  • the user can employ the eye gaze controller 20 to select from a menu on the speech generation device 30 so that various e-book functions can be performed by eye selection on the e-book screen displayed on the input screen 33 of the speech generation device 30.
  • These functions include reading the e-books aloud to the user via the speakers 30c (Fig. 16A) of the speech generation device 30, changing the voices used to read the e-books, and obtaining the e-books from internet sites such as bookshare.org for example.
  • the eye gaze controller 20 enables the user to perform these functions without the need of intervention from a caregiver.
  • FIG. 16B schematically illustrates a software protocol desirably used by the speech generation device 30 under the control of the eye gaze controller 20 to download an e-book and read the e-book aloud to the user via the speakers 30c (Fig. 16A) that are provided as a component of the speech generation device 30.
  • exemplary eBook Reader embodiments may include software tools stored on the microprocessor or other dedicated memory device associated with the speech generation device 30.
  • Such software tools, including the data defining the graphical screenshots and menu interfaces for displaying to a user, may be stored as instructions in a computer-readable medium within a memory element.
  • the microprocessor within the speech generation device 30 or other processor or controller device may then execute such software instructions to provide these and other software tools and features of the present invention.
  • features are provided that enable a user to interface with: (A) an eBook Downloader menu to search for and download eBooks, (B) an eBook Reader menu to read an eBook and (C) an eBook Actions Behaviors menu to create a new eBook interface page.
  • An eBook ("electronic book") is a digital representation of a printed publication. Many different formats of eBooks have emerged in the past several years, including the DAISY (Digital Accessible Information System) format. The DAISY format was developed to provide published information in an easy-to-navigate format for people with print disabilities. eBook tools in accordance with the present invention fully support eBooks in the DAISY format, although other formats may also be used, including but not limited to BRF (Braille Refreshable Format) and others.
  • eBook Downloader software tools enable a user to set up a subscription with an online eBook repository, for example the Bookshare website available at www.bookshare.org.
  • Bookshare is a nonprofit Internet-based organization that provides digital talking books to the visually impaired or print-disabled. Bookshare maintains an online library of over 45,000 eBooks, including both books and periodicals.
  • the eBook Downloader software tool provides a direct link that lets the user search for and download eBooks directly from the eBook repository.
  • the eBook Downloader menu gives a user direct access to an online eBook repository.
  • a user can search for books or periodicals, with search options available to search by author, title, keyword, or periodical ID number.
  • a "favorite searches" feature may be available for quick and easy access to a user's favorite periodicals, authors, or keyword search parameters.
  • EBooks from the online eBook repository can be downloaded directly into an eBooks folder on the speech generation device 30, where they may be immediately available for reading via the eBook Reader menu or a custom eBook page created by a user.
  • An example of an eBook Downloader menu with which a user may interface is provided in Fig. 31.
  • the procedures for searching for and downloading a book and searching for and downloading a periodical may be the same or slightly different, and examples of both will now be presented.
  • steps for downloading a book using the eBook Downloader menu of Fig. 31 include: (1) Open the Bookshare Download menu if it is not already open. (2) Check to make sure that Book View is displayed in the Current View area of the Searching group box. If Periodical View is displayed, select the Toggle View button to change to Book View. (3) Select the Search text box. The system keyboard will open. (4) Enter the title, author, or a keyword in the system keyboard and select the
  • Appropriate text boxes may be selected, which open the system keyboard on the speech generation device so that a user can provide his user name and password, and then the user selects the Login button. (10) The book will automatically download into the folder shown in the Download Location group box. (11) A software prompt will appear, asking whether the user wants to open the book. Select No to return to the Bookshare Download menu.
  • a user may select the Content Details button in the Actions group box to open the Bookshare Content Details window (see below). Select the Close button when finished.
  • a user may want to download the book directly from this window by selecting the Download button on the Bookshare Content Details window.
  • steps for downloading a periodical involve interfacing with a slightly different eBook Downloader menu as shown in Fig. 33. Steps include: (1)
  • eBooks can be downloaded directly to the speech generation device 30, or to other dedicated memory/storage devices such as but not limited to a USB flash drive, CompactFlash card, or other memory device from which the eBook can later be imported to the speech generation device 30.
  • dedicated memory/storage devices such as but not limited to a USB flash drive, CompactFlash card, or other memory device from which the eBook can later be imported to the speech generation device 30.
  • Fig. 35 shows an interface menu also available in some embodiments of the download process, which allows a user to save search criteria so that the user can quickly access favorite periodicals, authors, or subject matter. Separate favorite searches for both books and periodicals may be provided.
  • a user may interface with the menu shown in Fig. 35 using the following steps: (1) Open the Bookshare Download menu if it is not already open. (2) Select the View Favorites button in the Searching group box. The Favorite Searches menu will open. (3) Select the Create New text box. If a user is in Book View, the system keyboard will open. If the user is in Periodical View, the Periodical ID menu shown in Fig. 36 will open. (4) Enter search criteria (author, title, or keyword) on the system keyboard for a book search (then select the OK button), or enter the Periodical ID number on the Periodical ID menu for a periodical search (and then select the OK button). (5) Select the Search and Save button. This will begin a search of the repository library for the books that meet a user's search criteria (or for the periodical ID number). Search criteria will be saved as a Search Favorite. (6) Select the book or the edition of the periodical desired for download. (7) Select the Download button in the Actions group box. (8) The book or periodical will automatically download into the folder shown in the Download Location group box.
  • a software prompt will appear, asking whether the user wants to open the book or periodical. Select No. (10) Repeat steps 2 through 9 above, creating new favorite searches, up to a certain number (e.g., three favorite book searches and three favorite periodical searches). These Favorite Searches may be replaced in chronological order by subsequent searches (the oldest search will be replaced first).
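The "replaced in chronological order, oldest first" behavior for favorite searches maps naturally onto a bounded queue. The sketch below is illustrative only: the three-item cap and the save_favorite() helper are assumptions, since the text above only fixes the replacement order, not the exact limit.

```python
"""Sketch of favorite-search bookkeeping: oldest entry replaced first (assumed cap)."""

from collections import deque

MAX_FAVORITES = 3   # e.g., three favorite book searches (assumed)

# A bounded deque drops the oldest entry automatically once the cap is reached.
favorite_book_searches: deque = deque(maxlen=MAX_FAVORITES)


def save_favorite(criteria: str) -> None:
    """Save search criteria; when full, the oldest favorite is replaced first."""
    favorite_book_searches.append(criteria)


if __name__ == "__main__":
    for term in ["Mark Twain", "gardening", "space travel", "poetry"]:
        save_favorite(term)
    # "Mark Twain" has been displaced by the fourth, newer search.
    print(list(favorite_book_searches))
```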
  • once search criteria are saved as described above, a user may search for a book or periodical using the Favorite Searches menu, an example of which is shown in Fig. 37.
  • the following exemplary steps may be used for interfacing with the menu of Fig. 37: (1) Open the Bookshare Download menu if it is not already open. (2) Select the View Favorites button in the Searching group box. The Favorite Searches menu will open. (3) Select one of the buttons in the Search Favorites group box. (A designated maximum number of Favorite Searches may be saved in both the book and periodical Favorite Searches menus.) This will begin a search of the Bookshare library for the books that meet the search criteria (or for the periodical ID number). (4) Select the book or the edition of the periodical desired for download. (5) Select the Download button in the Actions group box. (6) The book or periodical will automatically download into the folder shown in the Download Location group box. (7) A software prompt will appear, asking whether the user wants to open the book or periodical. Select No. (8)
  • Software tools in accordance with an exemplary embodiment of a speech generation device 30 enable a user to use a current selection method to read eBooks. After an eBook has been downloaded, a user can use the same access methods for making selections on the speech generation device 30 to scroll through the pages of an eBook, speak and highlight text on the eBook page, symbolate each page of the eBook, or bookmark a place on the page of an eBook.
  • A specific example of an eBook Reader menu with which a user may interface is provided in Fig. 38. Exemplary steps for interacting with such menu are: (1) Select the Load eBook button on the eBook Reader menu or on the eBook page.
  • Select an eBook File menu will open.
  • An example of this menu is provided in Fig. 39.
  • the eBooks folder will open, and the subfolder(s) for the downloaded eBooks will be displayed.
  • the file it contains will be displayed in the right viewport.
  • Select the OK button. The first page of the book will appear in the eBook Viewer pane, and a page list will appear in the eBook Table of Contents Viewer pane. [00197] Additional functionality for reading an eBook as provided by the user interface menu of Fig. 38 is now presented.
  • the next page will be displayed in the eBook Viewer pane.
  • by selecting the Previous Page button on the eBook Reader menu, the previous page will be displayed in the eBook Viewer pane.
  • a user can move within the current page in the eBook Viewer by selecting a "Page Down" button on the eBook Reader menu.
  • in the menu of Fig. 38, features by which a user may symbolate the eBook page are provided when a user selects the "Symbolate" button on the eBook Reader menu. Symbolation involves displaying symbols for as many words as possible for the text on the current eBook page. Some users find this option helpful, especially if they are more comfortable reading symbols instead of plain text.
  • when the "Symbolate" button is selected by a user, a message will appear indicating that the eBook page is symbolating. The page will be symbolated, and the Symbolate button will toggle to Desymbolate. To desymbolate the current eBook page, select the Desymbolate button on the eBook Reader menu. Symbols will disappear from the page, and the Desymbolate button will toggle back to Symbolate.
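At its core, symbolation is a lookup from words on the page to symbols in a library, with desymbolation simply restoring the plain text. The toy sketch below assumes a simple word-to-image-name mapping; the real device renders symbol graphics above the words rather than returning pairs.

```python
"""Toy sketch of symbolating/desymbolating a page of text (assumed symbol library)."""

from typing import List, Optional, Tuple

SYMBOL_LIBRARY = {"dog": "dog.png", "ran": "run.png", "home": "house.png"}  # hypothetical


def symbolate(page_text: str) -> List[Tuple[str, Optional[str]]]:
    """Return (word, symbol) pairs; words without a matching symbol keep None."""
    return [(word, SYMBOL_LIBRARY.get(word.lower().strip(".,!?")))
            for word in page_text.split()]


def desymbolate(pairs: List[Tuple[str, Optional[str]]]) -> str:
    """Toggle back to plain text by dropping the symbols."""
    return " ".join(word for word, _ in pairs)


if __name__ == "__main__":
    pairs = symbolate("The dog ran home.")
    print(pairs)
    print(desymbolate(pairs))
```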
  • a still further feature provided on the interface menu of Fig. 38 enables a user to create a bookmark.
  • a user may first navigate to the location on the eBook page where he wants to place a bookmark, then select the "Create Bookmark" button on the eBook Reader menu.
  • the system keyboard will open.
  • a name for the bookmark may be entered on the system keyboard (for example, a point in the story), after which the OK button is selected.
  • the bookmark will automatically be inserted at the specified location. Such steps may be repeated as desired for creation of additional bookmarks.
  • Bookmarks may be named by the user or automatically numbered with a default procedure.
  • a user may interact with another interactive menu as shown in Fig. 40 to go to a bookmark. After selecting the "View Bookmarks" button on the eBook Reader menu of Fig. 38, the Available Bookmarks menu of Fig. 40 will open.
  • a list of all bookmarks in the currently loaded eBook will appear in the Bookmarks viewport.
  • a user may use the scroll buttons on the right side of the viewport to move to the bottom of the viewport.
  • a user may then select a bookmark to which he wants to move.
  • An "X" will appear in the check box next to the bookmark's name.
  • the Available Bookmarks menu will close, and the bookmarked eBook page will appear in the eBook Viewer.
  • a user may also use the View Bookmarks menu to rename or delete bookmarks.
  • a user may also create his own eBook Reader page with buttons using the Jump to Bookmark and Jump to Specific Bookmark behaviors.
  • software features may configure a speech generation device 30 to speak the page of the eBook when a user selects the "Speak Page" button on the eBook Reader menu.
  • the current eBook page will be spoken, and the Speak Page button will toggle to Stop Speaking.
  • the speech will stop.
  • a user can select a different reading voice from the voice normally used for communication. To choose a reading voice, select "Modify Viewer" on the eBook Reader menu. Use the Reading Voice drop-down menu to select a reading voice.
  • a user can also automatically enable the eBook page to speak when it is selected.
  • Select "Modify Viewer" on the eBook Reader menu. Use the When Selected drop-down menu to select Speak eBook.
  • the current eBook page will automatically speak when selected and stop speaking when it is selected a second time.
  • the Speak Page button desirably is configured to toggle automatically between Speak Page and Stop Speaking each time the text is selected.
  • a still further feature desirably is provided for a user to highlight the words on the eBook Page by selecting the "Highlight” box on the eBook Reader menu of Fig. 38.
  • the words on the current eBook page will be highlighted as they are spoken.
  • the Highlight box will remain selected as the user moves from page to page, and the words will continue to be highlighted as they are spoken until the Highlight box is deselected.
  • the words on the current eBook page will no longer be highlighted as they are spoken.
  • Additional software features desirably provide options by which a user can change the characteristics of the eBook Viewer pane and of the eBook TOC (Table of Contents) pane.
  • a user may be provided with an interface menu as shown in Fig. 41.
  • a user can select the "Edit” button in the Background Color group box in the Modify Viewer menu to open a Color Selector menu by which the background color of the eBook Viewer can be changed.
  • a user can select a text size from the Text Size drop-down menu in the Modify Viewer menu.
  • a user can turn the eBook Speech behavior on and off.
  • a user may be provided with an interface menu as shown in Fig. 42.
  • the user selects the "Modify TOC" button on the eBook Reader menu of Fig. 38.
  • the Modify TOC menu of Fig. 42 will open.
  • a user can select the Edit button in the Background Color group box in the Modify TOC menu to open the Color Selector menu and change the background color of the Table of Contents.
  • a user can select a text size from the Text Size drop-down menu in the Modify TOC menu to change the size of the text in the eBook Table of Contents.
  • a user can select the Edit button in the Text Color group box in the Modify TOC menu to open the Color Selector menu and change the color of the text in the eBook Table of Contents.
  • Select OK to close the Modify TOC menu.
  • Yet another feature available on the eBook Reader menu of Fig. 38 allows a user to unload an eBook by selecting the "Unload eBook" button on the eBook Reader menu. The current eBook will be unloaded from the eBook Viewer and eBook Table of Contents panes.
  • eBook Actions Behaviors Menu. After a user becomes familiar with the eBook Reader menu and the many unique eBook behaviors, a user may select options for creating a custom eBook page. Creating a custom eBook page will allow a user to use all of the eBook Actions behaviors such as downloading an eBook, defining scrolling behaviors, "jumping to" bookmarks, and sending eBook pages to the Message Window.
  • the eBook Actions category of behaviors allows a user to program a custom page for loading and reading eBooks.
  • Exemplary behaviors available for selection include the following:
  • Assign Loaded eBook to a Button Assigns the currently loaded eBook to a specific button.
  • Desymbolate eBook Page Removes the symbols from the current page of the eBook.
  • Download eBook Opens the Bookshare Download menu, which enables you to search for and download an eBook from an online repository. (You must have an active Internet connection to use the Download eBook behavior.)
  • Load eBook Opens the Select an eBook File menu and loads the selected eBook into the eBook Reader menu.
  • Next Page Displays the next page of the current eBook in the eBook Viewer.
  • Open eBook Opens a specific eBook that you can preselect to be loaded each time this button is selected.
  • Play/Pause/Resume eBook Speech Toggles among speaking, pausing speech, and resuming speech on the current page of the loaded eBook.
  • Scroll eBook Opens the Scroll Behavior Settings menu (an example of which is shown in Fig. 43), which allows a user to scroll through an eBook by a desired amount.
  • the drop-down menus on the Scroll Behavior Settings menu allow a user to choose: Object to scroll (Viewer or Table of Contents); Type of
  • [00230] the Page Editor feature should be opened on a speech generation device 30.
  • a "New Page” option may be available from a drop-down menu that will toggle the system keyboard on a user's screen.
  • the new page may be named, for example, "My eBook Reader Page” which then opens up a blank page in the Page Editor tool.
  • the eBook Viewer tool in the Tools palette of the Page Editor title bar can be used to draw out a large rectangle for the eBook Viewer pane by following exemplary steps as follows: (1) Select the eBook
  • the eBook Table of Contents tool in the Tools palette of the Page Editor title bar may be used to draw out a second, smaller rectangle to contain the eBook Table of Contents pane.
  • the following steps may be used: (1) Select the eBook Table of Contents tool in the Tools palette. (2) Select the location on the page where you want to place one corner of the eBook Table of Contents pane. Do not release the selection. (3) Continue to maintain the selection while you drag out the cursor to form a second rectangle. An outline of the eBook Table of Contents pane you are drawing will appear on the page. (4) Move the cursor to adjust the size and shape of the rectangle. Do not release the selection until the eBook Table of Contents pane is the size and shape you want. (5) Release the selection.
  • the tools in the Tools palette of the Page Editor may be used to add the buttons that a user wants on his eBook page.
  • Suggested basic buttons may include buttons that will use the Load eBook, Unload eBook, Play/Pause/Resume eBook Speech, and the Page Up and Page Down scrolling behaviors.
  • a user may select the Modify button to add labels and behaviors to the created buttons. If a user wants to use the eBook Page to Message Window behavior, a Message Window should be added to the page. When finished building the new page, select the Main Menu button in the title bar.
  • [00233] Ability to change access methods. There are scenarios where it is beneficial to change methods of controlling the speech generation device 30. For progressive conditions - such as ALS/MND - there is a natural progression from direct selection to other control methods - such as scanning to mouse to trackball to joystick to head control to eye gaze tracking (and then sometimes back to scanning). For other conditions, there are situational changes: such as using eye-tracking while in a wheelchair, but when riding in a vehicle where the user cannot remain in a wheelchair, the user may move to a different method of controlling the speech generation device 30. In conventional systems, the minimum number of selection steps that the user must perform in order to implement such transitions was six.
  • the eye gaze controller 20 and associated speech generation device 30 desirably are configured to allow such transitions to be implemented immediately - with a single button selection requiring only two steps to be performed by the user. Moreover, the eye gaze controller 20 and associated speech generation device 30 desirably are configured to allow the user to implement reliably such transitions between eye-tracking and another control protocol without requiring intervention from a caregiver.
  • Fig. 17 schematically illustrates a software protocol, which is provided as a component of the speech generation device 30 and desirably used by the speech generation device 30 under the control of the eye gaze controller 20 to implement reliably with a single button selection such transitions between eye- tracking and another control protocol without requiring intervention from a caregiver.
  • the user can change from the gaze control selection method to the scanning control selection and then return to the gaze control selection method.
  • the user can change from one selection method to another desired selection method by simply focusing the user's eyes on the area of the screen depicting the desired selection method and using the eye gaze controller 20 to select that desired selection method.
  • the desired control device (headmouse, joystick, switches, etc.) for the desired selection method must be operatively connected to the speech generation device 30.
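To make the single-selection transition between access methods concrete, the sketch below models each method as an object that can be started and stopped, with one switch_method() call standing in for the single on-screen button selection. The class names and the registry dictionary are illustrative, not the patent's implementation.

```python
"""Sketch of switching access methods with a single on-screen selection (assumed names)."""

from typing import Dict


class AccessMethod:
    name = "base"

    def start(self) -> None:
        print(f"{self.name} input now controls the device")

    def stop(self) -> None:
        print(f"{self.name} input stopped")


class EyeGaze(AccessMethod):
    name = "eye gaze"


class Scanning(AccessMethod):
    name = "scanning"


METHODS: Dict[str, AccessMethod] = {"eye gaze": EyeGaze(), "scanning": Scanning()}
current = METHODS["eye gaze"]


def switch_method(selected: str) -> None:
    """Called when the user dwells on the button for the desired access method."""
    global current
    current.stop()
    current = METHODS[selected]
    current.start()


if __name__ == "__main__":
    switch_method("scanning")   # one selection; no menus to traverse
    switch_method("eye gaze")   # and back again, without caregiver intervention
```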
  • Conventional software designed for an eye gaze selection system maps the input screen into discrete regions that are the so-called "objects" of the gaze of the user's eyes.
  • when the system determines that the user is gazing anywhere on an object, the system starts a clock that records a duration of time that is compared to the "dwell time" setting, which is the duration of gaze that triggers selection of the object by the eye gaze selection system.
  • the eye gaze controller 20 and associated speech generation device 30 desirably are configured to provide settings that enable the user to choose to retain the user's "accumulated dwell time” if the user's eye gaze leaves the object.
  • One of these "retain object” settings allows the user to set the amount of time that the user's eye gaze spends away from the object before the user's already accumulated dwell time is lost and must be restarted.
  • the other of these "retain object” settings allows the user to set the rate (per unit of time) at which the user's already accumulated dwell time will be decremented after the user's eye gaze has moved away from the object.
  • Fig. 18 schematically illustrates a software protocol, which is provided as a component of the speech generation device 30 and desirably used by the speech generation device 30 under the control of the eye gaze controller 20 to govern how the "accumulated dwell time" is affected when the user's eye gaze leaves the object.
  • the accumulated dwell time for an object on the input screen 33 of the speech generation device 30 is desirably indicated visually by the degree of contrast of the object being considered for selection relative to other objects on the input screen 33.
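A compact way to see how the accumulated dwell time and the two "retain object" settings interact is sketched below. The numeric defaults and the update() loop are invented for illustration, and the sketch applies both retention settings together even though the device presents them as separate user-configurable options; only the concepts of a dwell threshold, a time-away limit, and a decay rate come from the text above.

```python
"""Sketch of accumulated dwell time with the "retain object" settings (assumed defaults)."""

DWELL_TIME_S = 1.0          # gaze duration that triggers selection
RETAIN_AWAY_LIMIT_S = 0.5   # time away before accumulated dwell is lost entirely
DECAY_PER_S = 0.8           # rate at which accumulated dwell is decremented while away


class DwellTracker:
    def __init__(self) -> None:
        self.accumulated = 0.0
        self.time_away = 0.0

    def update(self, gaze_on_object: bool, dt: float) -> bool:
        """Advance by dt seconds; return True when the object is selected."""
        if gaze_on_object:
            self.time_away = 0.0
            self.accumulated += dt
        else:
            self.time_away += dt
            if self.time_away >= RETAIN_AWAY_LIMIT_S:
                self.accumulated = 0.0        # retention window exceeded: restart
            else:
                self.accumulated = max(0.0, self.accumulated - DECAY_PER_S * dt)
        return self.accumulated >= DWELL_TIME_S


if __name__ == "__main__":
    tracker = DwellTracker()
    # Gaze on, briefly away, then back on: the earlier dwell is partly retained.
    pattern = [True] * 8 + [False] * 2 + [True] * 4
    for sample in pattern:
        if tracker.update(sample, dt=0.1):
            print("object selected")
            break
```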
  • Rate Enhancement. The rate at which a selection device enables the user to compose a message to be spoken by the speech generation device is a key measure of the desirability of the system that combines the selection device and the speech generation device.
  • the speech generation device 30 that is controllable by the eye gaze controller 20 has been provided with certain capabilities that enhance the rate at which the eye gaze controller 20 enables the user to compose a message to be spoken by the speech generation device 30.
  • Software features may be provided with speech generation device 30 that offer rate enhancement for quicker and more efficient communication. Such features can reduce the number of selections that are required to perform a task or create a message, resulting in a faster, more efficient communication rate. Examples of such communication features include: (1) Word Prediction; (2) Abbreviation Expansion; (3) Concept Grouping; (4) Phrase Customization and Prediction; and (5) Concept Slots. Additional details of such exemplary rate enhancement features will now be presented in respective order. [00240] (1) Word Prediction: Word prediction can be used with keyboard pages that include predictor buttons. As a message is composed, the prediction feature anticipates word choices and displays vocabulary from a device dictionary for quick selection. These options are displayed in predictor buttons, as shown in the exemplary interface screen of Fig.
  • the software predicts the word a user is trying to compose, the user can conserve his efforts and save time by selecting the predictor button that features the correct word. This will immediately send the word to the Message Window and add a space (to prepare for another word), allowing a user to simply move on to the next word in a message.
  • the word prediction feature draws selections from either an internal or online software dictionary. A user can also make his own personal vocabulary (including names, single words, multiple word phrases and full sentences) available for word prediction by adding these items to the dictionary. Users may find it helpful to create dictionary entries for the names of family, friends, businesses, towns, hobbies, foods, movies or other things that they often talk about.
  • Word prediction features may be particularly advantageous for individuals who have good literacy skills, who need help with spelling out words but can recognize them on sight, who use alternate access methods that make it inefficient to completely spell out words, and/or who can spell the first few letters of words and then must rely on symbols to identify words.
  • Word prediction software is useful for such users and others because it increases spelling speed, can help improve literacy skill by enabling users to spell a few letters and then rely on word recognition or symbols to get the right option, and decreases user fatigue by reducing the number of necessary keystrokes.
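A minimal sketch of the word-prediction behavior follows: candidates are drawn from a dictionary, ranked, and offered on a fixed number of predictor buttons, with a trailing space appended when one is accepted. The small dictionary, its frequency values, and the button count are invented for illustration; the real device draws candidates from its internal or online dictionary and the user's own added vocabulary.

```python
"""Minimal word-prediction sketch over a frequency-ranked dictionary (assumed data)."""

from typing import Dict, List

DICTIONARY: Dict[str, int] = {       # word -> frequency (1-100)
    "hello": 60, "help": 80, "helicopter": 20,
    "home": 70, "hospital": 40,
}
PREDICTOR_BUTTONS = 3


def predict(prefix: str, limit: int = PREDICTOR_BUTTONS) -> List[str]:
    """Return the highest-frequency dictionary words starting with the typed prefix."""
    matches = [w for w in DICTIONARY if w.startswith(prefix.lower())]
    matches.sort(key=lambda w: DICTIONARY[w], reverse=True)
    return matches[:limit]


def accept(word: str, message_window: str) -> str:
    """Selecting a predictor button sends the word plus a trailing space."""
    return message_window + word + " "


if __name__ == "__main__":
    print(predict("he"))                 # ['help', 'hello', 'helicopter']
    print(accept("help", "I need "))     # 'I need help '
```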
  • a first related variation to word prediction software features includes character prediction, which involves providing character predictor buttons that predict single characters based on the letters that a user is typing.
  • a character prediction feature may be useful for individuals who use alternate access methods and/or who need a faster and less fatiguing way to find words through spelling. Such features are helpful to make keyboard communication quicker or less physically taxing and to generate longer words more quickly and with less effort, especially when combined with word prediction software tools.
  • a second related variation to word prediction software features involves context prediction. Such feature anticipates word selection based on the grammatical structure of the sentence that a user is creating. Such feature may be helpful for users who use alternate access methods and/or who need to maximize prediction. Such features may be useful for a user to help see words that might logically come next in a sentence, and/or to increase the number of grammatically correct sentences since words that may be left out (i.e., a, an, the, is, etc.) may be predicted. [00244] In accordance with the disclosed prediction software options, it may be possible for a user to activate and deactivate software prediction features via a Prediction Settings menu. An example of such a Prediction Settings menu is shown in the interface display of Fig.
  • a user may select the check boxes in the Prediction Settings group box to activate or deactivate various prediction features. For example, a user may select the Prediction check box to activate basic prediction features (including word prediction, phrase prediction and character prediction). To deactivate prediction, a user makes sure that the check box is not selected.
  • buttons may be available for user selection in the Predictions Settings menu of Fig. 46.
  • a Flexible Abbreviation check box may be available to activate the flexible abbreviation feature, which will be described later in more detail. To deactivate this feature, make sure the check box is not selected.
  • a "Don't Predict Words Already on Buttons" box may be selected for configuring a speech generation device to not predict a word that is already on a button on the page. Only words that do not appear on the page will be predicted. When this check box is not selected, a word may appear in a predictor button even if it appears on the page.
  • the speech generation device 30 may be configured to examine words as they are added to the Message Window. When the software discovers a word that is not in the dictionary, it will automatically add it to the dictionary. To deactivate this feature, this check box should not be selected. [00247] A "Context Prediction" check box may be provided to activate/deactivate the context prediction feature.
  • phrases will be predicted based on the beginning of the phrase, rather than any matching characters (For example, "can you” would match “Can you help me?" but not "How can you tell?"). If this check box is not selected, then phrases will be predicted based on any part of the phrase, not just based on the beginning of the phrase. [00252] If a user wants the selected prediction features to predict vocabulary only after having typed a specific number of letters, select the "Predict After _ Letters" dropdown menu and select one of the available options. The drop-down menu will close and display the chosen option.
  • (a) Alphabetical - Vocabulary items are presented in alphabetical order; (b) Frequency - The vocabulary items that are used most often are presented first; (c) Length - The longest vocabulary items are presented first. The drop-down menu will close and display the chosen option. [00254] If a user wants symbols to be presented with vocabulary in the predictor buttons, select the "Symbol Prediction" check box in the Presentation Settings group box. If a user wants only text to be presented in the predictor buttons, make sure the check box is not selected. [00255] If a user wants to maximize the size of a symbol within the predictor button, select the "Symbols on the Left" check box in the Presentation Settings group box.
  • a dictionary for use in a speech generation device 30 is an alphabetized catalog of every word, name and phrase that is stored in the software's vocabulary database. This dictionary can be customized easily and, since rate enhancement on the devices is based on dictionary vocabulary, additional dictionary entries can be added for names, questions, statements, and the like.
  • Dictionary entries can be created and edited in a Dictionary Browser menu, an example of which is shown in Fig. 47. To add a word, name or phrase to the dictionary, the "New" button can be selected in the interface of Fig. 47.
  • an Edit Word menu such as shown in Fig. 48 will open.
  • By selecting the Word text box, the system keyboard will open and a user can enter the word, name or phrase desired for adding to the dictionary. After selecting the OK button, the system keyboard will close and the new dictionary entry will be displayed in the Word text box.
  • a user may select the Part of Speech drop-down menu and then select the option that best applies to the new dictionary entry. If the "Kind of" drop-down menu is available, a user may add a more specific definition to the part of speech that a user has assigned to the new dictionary entry. For example, a noun may be further defined as a proper noun. To adjust this setting, select the "Kind of" drop-down menu and then select one of the available options.
  • a user may review any word form variations that apply to the new dictionary entry (for example, "colder” and "coldest” for the adjective "cold”).
  • the Variant drop-down menu offers a list of variation types that are associated with the part of speech that is assigned to the new vocabulary item.
  • the Word Form text box displays an example of the dictionary entry that is changed to reflect the variant form that is selected in the Variant drop-down menu. If one of the examples in the Word Forms text box must be corrected, a user may select the Word Form text box to open the system keyboard, use the system keyboard to enter the corrected form of the dictionary entry, and select the OK button to close the system keyboard. The change will be displayed in the Word Form text box.
  • Software features may be available for a user to select a frequency that is assigned to a dictionary entry. Such chosen frequency affects how quickly the entry is predicted by rate enhancement. To assign a frequency to the new dictionary entry, select the Frequency button and then complete the rest of the steps .
  • a user may accept a default frequency (e.g., 10) or select a frequency number within a range (e.g., between one and 100, with 100 generally used for items that will be used the most often).
  • a frequency keypad may be used to enter such new frequency number, which will then be displayed in the Frequency button.
  • a user may scroll through the Select Concepts menu viewport (see, e.g., Fig. 50) to find a concept.
  • Each main concept is represented by a folder icon.
  • Concepts that contain smaller sub-concepts are indicated by an expansion box (with a [+]).
  • Select the expansion box to view the available sub-concepts.
  • select the check box next to each name.
  • the selected concepts will be added to the Concepts group box in the Edit Word menu.
  • the new dictionary entry will be added to the viewport in the Dictionary Browser menu, and the dictionary entry will be available for the user that is currently active.
  • the abbreviation expansion feature When activated, the abbreviation expansion feature lets a user define specific abbreviations for longer words and phrases. This feature can save a great deal of effort and time if using a keyboard page to compose a message. When a user enters the abbreviation and then adds a space, the software will automatically expand an abbreviation into the full word or phrase. [00262] In order to create an abbreviation expansion, it should be recognized that an abbreviation expansion consists of two parts: the abbreviation and the expansion. The abbreviation is the combination of characters that a user wants to enter (i.e., "INY").
  • the expansion text is the word or phrase that the abbreviation represents (i.e., "It's nice to meet you"). Once both parts are saved in the Abbreviation Browser menu, the words "It's nice to meet you." will be automatically sent to the Message Window anytime a user enters "INY" and then adds a space.
  • a user may interact with an interface menu such as the exemplary Abbreviation Browser shown in Fig. 51.
  • This menu can be used to select the New button, at which point the system keyboard will open.
  • a user can then enter the abbreviation and select the OK button to close the system keyboard.
  • the system keyboard may be configured to open again automatically for a user to enter the expansion text. Again, selection of the OK button will close the system keyboard.
  • the abbreviation expansion that you just created should be visible in the viewport of the Abbreviation Browser menu.
  • a user may apply the following steps to use an abbreviation expansion.
  • a user simply types an abbreviation that was saved (in this example, "PP"). A user then adds a space after the abbreviation, and the abbreviation will be immediately expanded, as shown in Fig. 53.
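The space-triggered expansion described above amounts to checking the last token in the Message Window against a stored abbreviation table. The sketch below assumes a simplified table and a plain string Message Window; the "PP" expansion text is hypothetical, since only the "INY" example is spelled out in the text.

```python
"""Sketch of abbreviation expansion triggered by a trailing space (assumed table)."""

ABBREVIATIONS = {
    "INY": "It's nice to meet you.",
    "PP": "Please pass the pepper.",   # hypothetical expansion for the "PP" example
}


def on_space(message_window: str) -> str:
    """When a space is added, expand the preceding token if it is a stored abbreviation."""
    words = message_window.rstrip().split(" ")
    last = words[-1] if words else ""
    if last in ABBREVIATIONS:
        words[-1] = ABBREVIATIONS[last]
    return " ".join(words) + " "


if __name__ == "__main__":
    text = "INY"
    text += " "                 # the user adds a space after the abbreviation
    print(on_space(text))       # "It's nice to meet you. "
```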
  • (2) Concept Grouping: Software features desirably configure a speech generation device to function with instructions for editing a concept. For example, software may use concepts to provide structure and organization for various elements of the software, including symbols, dictionary entries, slots and phrases. Concepts are designed to group similar items or ideas together, making it more efficient to search for a particular item or idea.
  • a Concept Browser menu (e.g., as seen in Fig. 54) enables a user to view and edit the list of concepts. Any changes made in the Concept Browser menu will be seen anywhere concepts are used. This includes the Symbol Browser menu, the Dictionary Browser menu and the My Phrases menu, as well as in the Select Slot Filler menu for slots.
  • each main concept is represented by a folder-shaped icon. If a concept contains smaller sub- concepts, the concept folder will have an expansion box (with a [+]) beside it. If you select an expansion box, the concept will expand to display all of the smaller sub- concepts. Each sub-concept is represented by a gray dot icon. When a main concept is open, the expansion box will contain a [-]. To close the concept, select the expansion box again. A user may need to use the scroll bar on the right side of the viewport to see all of the available concepts and sub-concepts.
  • the Concept Browser menu of Fig. 54 also includes a Search button and text box, enabling a user to search for a concept by name.
  • Other buttons in the Concept Browser menu enable a user to create a new concept, change the organization of concepts within the viewport, rename a concept or edit the words that are available within a concept. If a user wants to see the individual words that are associated with a concept or sub-concept, a user may select the concept (or sub-concept) that he wants to see, and select the Edit Slot Fillers button.
  • the Concept Slot Fillers menu, an example of which is shown in Fig. 55, will open. Every word that is assigned to the selected concept will be visible in the viewport at the top of this menu.
  • the Concept Slot Fillers menu also provides options for editing and rearranging the words that are available in the selected concept. When you select a slot, words will be presented in the same order in which they are shown here.
  • using phrases is one of the best ways to speed up communication. Exemplary software embodiments enable a user to store phrases for future use. When a user is communicating, he can quickly access and use a phrase in just a few simple steps.
  • phrases can drastically reduce the number of selections that are required to compose a message, since the user no longer has to create the phrase word by word when he wants to use it. Phrases also save time since they can be accessed from any point within the page set; and a user does not need to navigate to a particular page or popup to use a phrase.
  • a user may be provided access to a customizable menu called the "My Phrases" Menu, which is designed to give a user immediate access to the phrases that are used frequently in everyday conversation (e.g., comments, statements and questions that are used frequently).
  • the My Phrases menu may be opened, for example, by toggling the My Phrases button available on the Title Bar, as shown in Fig. 56. By selecting the Modify button in the title bar, the My Phrases button will turn red. By selecting this button, the My Phrases menu will open, an example of which is shown in Fig. 57.
  • phrases may be organized by concept. Sorting phrases into concepts is one good way to make them faster and easier to use, since it allows a user to search through small groups of phrases instead of the whole collection.
  • phrases may include general topics like the following: [00272] Greetings - How's it going? Hi there! Hey. [00273] Closings - I'll see you around. See ya! Have a nice day. [00274] Agree - Yeah, I know. Absolutely. Of course.
  • in the Select Concepts menu, find a concept by either selecting the Search text box and entering the name of the concept you want to use, or scrolling through the Select Concepts menu viewport to find a concept.
  • Each main concept is represented by a folder icon, with smaller sub-concepts organized as previously described.
  • a user can then select the OK button to close the Select Concepts menu, at which point the selected concept will be added to the Concepts viewport in the New Phrase menu. If a user wants to remove a concept from this phrase, select the concept in the Concepts group box and then select the Delete button. The concept will still exist, but will no longer be associated with this phrase.
  • Software features may be available for a user to assign a frequency to a phrase. The frequency that is assigned to a phrase affects the way the phrase is predicted by rate enhancement. To assign a frequency to the new phrase, select the Frequency button on the New Phrase menu.
  • An Enter Frequency menu, such as shown in the exemplary interface of Fig. 59, may be displayed to a user, by which the user may use the keypad to enter a new frequency number.
  • a frequency may be within some predetermined range, for example from 1-100, with 10 being a default setting and 100 being a maximum level for items that are expected to be used the most often.
  • Features may also be available for a user to assign a symbol to a phrase to help a user recognize it more quickly (or for use in predictor buttons).
  • To assign a symbol to the new phrase, select the Symbol button on the New Phrase menu (see, e.g., Fig. 58) and then select the Search text box in the Select a Symbol menu.
  • the system keyboard will open, and a user can enter the name of the symbol he wants to find.
  • if the software finds any symbols for the word you entered, they will be presented in the right viewport of the Select a Symbol menu. A user may then select the symbol that he wants to use. The Select a Symbol menu will close automatically and the new symbol will be displayed inside the Symbol button in the New Phrase menu. After closing the New Phrase menu, the new phrase is now available in the My Phrases menu under the All Phrases concept, as well as under any other concepts a user may have assigned or created. If a symbol was added for that phrase, it will be displayed beside the phrase. The new phrase can now be used for communication by the current user, no matter where the user is in the page set. It may also be presented by phrase predictor buttons on keyboard pages in the current user. [00280] To quickly access phrases created by a user, an interface such as the Select a Phrase menu shown in Fig. 60 may be available. Such menu may allow a user to specify how he wants to use the phrase by selecting one (or both) of the appropriate check boxes in the bottom left corner. If a user wants to speak the phrase as soon as it is selected, select the Speak Phrase check box. If a user wants to send the phrase to the Message Window as soon as it is selected, select the Insert Phrase check box. (If the Speak Phrase check box is not also selected, the phrase will not be spoken until a user selects the Message Window.) In the Concepts box, a user may select the concept that contains the phrase he wants to use. If the desired concept is not visible, a user can use the Prev and Next buttons to scroll through the list of concepts that contain phrases.
  • a user may select the phrase he wants to use. Again, if the phrase is not visible, the user may need to use the Prev and Next buttons to scroll through the phrases in the selected category.
  • speech generation software will act according to the selected check boxes. The possibilities are: (i) If the Speak Phrase check box is selected, then the device will immediately speak the phrase; (ii) If the Insert Phrase check box is selected, the phrase will be sent to the Message Window; and (iii) If the Close on Selection check box is selected, the Select a Phrase menu will close as soon as a user chooses a phrase. [00281]
  • a related software feature available within some embodiments of a speech generation device provides phrase prediction tools based on the above-defined phrases.
  • phrase predictor buttons may be available to predict phrases from the My Phrases menu, based on the letters or words that a user is typing. Phrase predictor buttons can predict phrases from the entire My Phrases menu, or they can be assigned to one phrase concept. Such a phrase prediction feature may be useful for individuals who use alternate access methods, who are able to use a keyboard, and/or who consistently use any number of phrases. Phrase prediction software may offer advantages by enabling individuals to communicate common phrases more quickly because entire phrases can be accessed by selecting only the first few letters in the phrase. Phrases can be completed with novel information, as well.
  • phrase “I would like it if you" can be completed in a variety of ways depending on the situation, thus enabling individuals to clearly communicate their wants and needs quickly and easily.
  • phrase prediction provides the user with the ability to select from a menu of certain phrases that have been populated into the menu based on what the user already has typed in composing a message.
  • phrase prediction protocol predicts an entire phrase that the user may try to be typing (instead of just predicting the next word or next character).
  • the phrase prediction protocol presents one or more areas of the display, called buttons, which are filled with phrases as the user enters text into a document.
  • the phrase prediction protocol matches the partially entered text to an internal database of text phrases and presents those phrases that have starting characters matching the partially entered text.
  • Each phrase also has associated with it a priority rating. In the case where more phrases match the partially entered text than there are buttons to fill, those phrases with the highest priority ratings are shown to the user.
  • the user is provided with the capability to add phrases to the phrase database and to delete phrases from the phrase database.
  • the phrases may optionally have pictures associated with them, and in such cases those pictures can be used to augment the display of the phrases on the buttons.
  • the phrases also may contain slots with their associated fillers.
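To illustrate the phrase prediction protocol described above, the following is a minimal Python sketch of a prefix-matching predictor with priority ratings and add/delete capability. It is offered only as an illustration under stated assumptions: the names (`Phrase`, `PhraseDatabase`, `predict`), the in-memory list storage and the example phrases are hypothetical and are not taken from the disclosed software.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Phrase:
    text: str                      # full phrase text, may contain slot markers
    priority: int = 0              # higher-priority phrases win when buttons are scarce
    picture: Optional[str] = None  # optional symbol used to augment the button display

class PhraseDatabase:
    """In-memory store of user phrases supporting prefix-based prediction."""

    def __init__(self) -> None:
        self._phrases: List[Phrase] = []

    def add(self, phrase: Phrase) -> None:
        self._phrases.append(phrase)

    def delete(self, text: str) -> None:
        self._phrases = [p for p in self._phrases if p.text != text]

    def predict(self, partial_text: str, button_count: int) -> List[Phrase]:
        """Return up to `button_count` phrases whose starting characters match
        the partially entered text, highest priority first."""
        partial = partial_text.lower()
        matches = [p for p in self._phrases if p.text.lower().startswith(partial)]
        matches.sort(key=lambda p: p.priority, reverse=True)
        return matches[:button_count]

# Example: predictor buttons refreshed as the user types.
db = PhraseDatabase()
db.add(Phrase("I would like it if you opened the window", priority=5))
db.add(Phrase("I would like a drink of water", priority=9))
db.add(Phrase("I am feeling tired", priority=3))
print([p.text for p in db.predict("I would", button_count=2)])
```

As the user enters text, the highest-priority matching phrases would be assigned to the available predictor buttons on the page.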
  • a still further rate enhancement feature that may be provided via software used with a subject speech generation device embodiment is the concept slot (also called "slot").
• a slot is a variable placeholder that can be included in button text, button labels and phrases. Specific interface options may be provided for a user to create a phrase with slots, add slots to buttons, and work with slots in a Message Window.
  • the capability embodied in the so-called “slots" protocol provides the user with the ability to fill in certain words in otherwise static text.
• the slots protocol provides the user with easy access to common words that can be used to complete a message in a variety of settings and situations. For example, in the phrase: "Can we have dinner now?", the slot is "dinner". Other commonly-used words like "breakfast" or "coffee" are slots that can be used interchangeably to complete similar messages like: "Can we have breakfast now?". Slots are additional tools that minimize necessary navigation to save the user's time and energy during message composition.
  • slots are designed to provide a variety of vocabulary options while reducing the number of selections that a user must make to create a whole message. Slots also help to conserve space on the touch screen. Slots provide a user with easy access to all of the words associated with a particular vocabulary concept (or category). When a user selects a slot, the user can choose to replace the word that is currently filling the slot with another word from the same concept. Rather than build a dynamic message one word at a time, a user can create sentences that contain slots in key locations. When the phrase is added to the Message Window, a user can then select the slots (which are visually indicated in some fashion, for example displayed as blue underlined words) and replace the current words with different options.
  • the software provides mechanisms to insert a "slot" or placeholder in a text phrase.
  • Each one of these placeholders is associated with a list of "fillers” that can potentially fill in this place in the text.
  • the list of slots and their associated fillers are stored in a database internal to the software.
• Another variation on the slots protocol allows a user to declare that a text phrase containing one or more slots should be spoken, at which point the user is prompted to specify the filler values for each slot; the entire text phrase, with the filler values chosen by the user, is then spoken.
  • Slot fillers can optionally have pictures associated with them in which case those pictures can be used to augment the display of the filler value.
• the message in the Message Window is the label text for a button.
  • the first slot is associated with the "breakfast" concept and the second slot is associated with the "fruit” concept.
  • the slots allow a user to create dynamic messages with a reduced number of selections. By selecting the slots and changing the filler text, the example phrase "I want oatmeal and a banana for breakfast” can quickly and easily be changed to read as follows: "I want toast and a nectarine for breakfast”.
  • a user may be able to add slots to his customized phrase database - My Phrases. Adding slots to phrases is one way to maximize the potential of both rate enhancement features. This technique provides a user with rapid access to complete statements, while still enabling the user to vary what he is going to say. For example, if a user tells an assistant what he wants to wear every morning, then he may want to create a phrase to say "I want to wear my jeans today.” Then, simply turn the word “jeans" into a slot that accesses the "clothing" concept.
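The slot mechanism described above can be pictured with the following minimal Python sketch, in which a message is a sequence of literal text and slot placeholders and each slot draws its fillers from a vocabulary concept. The `CONCEPTS` dictionary, the `Slot` class and the `render` helper are illustrative assumptions, not the actual implementation.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical concept database mapping a vocabulary concept to its fillers.
CONCEPTS: Dict[str, List[str]] = {
    "breakfast": ["oatmeal", "toast", "cereal"],
    "fruit": ["banana", "nectarine", "apple"],
}

@dataclass
class Slot:
    concept: str   # vocabulary concept the slot draws its fillers from
    current: str   # word currently filling the slot

def render(template: List[object]) -> str:
    """Join literal text and slot fillers into the message shown in the Message Window."""
    return "".join(part.current if isinstance(part, Slot) else part for part in template)

# "I want oatmeal and a banana for breakfast" with two slots.
message = ["I want ", Slot("breakfast", "oatmeal"), " and a ", Slot("fruit", "banana"), " for breakfast"]
print(render(message))

# Selecting a slot offers the other fillers from the same concept;
# swapping them changes the whole message with only two selections.
message[1].current = "toast"
message[3].current = "nectarine"
print(render(message))
```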
• When an eye gaze controller 20 is connected to the speech generation device 30, a user can dwell on the button and move the pointer until all of the desired text is highlighted.
  • the Select Concept for Slot menu will open and display any concepts that are associated with the selected word.
  • An example of the Select Concept for Slot menu is shown in Fig. 63.
  • the Select Concept for Slot menu enables a user to choose a vocabulary concept for the slot he is creating. This concept will determine the type of vocabulary that is presented whenever a user selects the slot. If the word chosen as the slot is associated with any existing concepts, the concepts will be displayed in the buttons at the top of the menu. [00290] There may be different ways for a user to choose a concept for a new slot:
  • the selected concept(s) will be added to the Concepts viewport in the New Phrase menu.
  • (14) If a user wants to create a new concept for this phrase, select the Add New Concept button in the Concepts group box (the system keyboard will open) and the user can enter the name of the concept he wants to create.
  • the created concept will be added to the Concepts viewport in the New Phrase menu.
  • the created concept will also automatically be added (as a sub-concept) to the My Phrases concept.
• If a user wants to remove a concept from this phrase, select the concept in the Concepts group box and then select the Delete button. The concept will still exist, but will no longer be associated with this phrase.
  • a frequency may also be assigned to a phrase, for example as previously described with reference to the Enter Frequency menu of Fig. 59.
  • a user may choose to assign a symbol to the phrase to help recognize it more quickly. To assign a symbol to the new phrase, select the Symbol button (the Select a Symbol menu will open) and then use the system keyboard to enter the name of the symbol you want to find. If the software finds any symbols for the entered word, they will be presented in the right viewport of the Select a Symbol menu. Once a symbol is selected, the Select a Symbol menu will close automatically and the new symbol will be displayed inside the Symbol button in the New Phrase menu. The new phrase is now available in the My Phrases menu.
• buttons to scroll through the available options. As soon as you select a word, the Select Slot Filler menu will close. In the Message Window, the word in the slot will be replaced with the word just chosen.
  • Software features may be available for a user to add slots to button labels. For example, a user may select an "Insert Label” option, which will send the button label to the Message Window. The user can then select the slot to open the Select Slot Filler menu and choose a new word for the slot. A user can also choose an "Insert Label, Fill Slots" option, which will send the button label to the Message Window, and then automatically open the Select Slot Filler menu.
  • the following steps may be followed to add a slot to a button's label: (1) Select the green Modify button in the title bar. The button will turn red when it is selected. (2) Select the button desired for modification. The Modify Button menu will open. (3) Select the Behaviors button. The Behavior Editor menu will open, an example of which is shown in Fig. 66. (4) Select the Behaviors drop-down menu. The menu will expand to display all the behavior categories. (5) Select the
• Behavior Editor menu. The new behavior will be displayed by the Behaviors button in the Modify Button menu.
  • (9) Select the Label text box.
  • the system keyboard will open.
  • (10) Enter the desired text for the button label.
  • (11) Highlight the word desired for use as a slot. A user can make a selection on the touch screen at the beginning of the word and drag the selection until the whole word is highlighted, or may use an external mouse to perform a similar function.
  • the Select Concept for Slot menu will open.
  • the Select Concept for Slot menu (e.g., Fig. 63) enables a user to choose a vocabulary concept for the slot that is created. This concept will determine the type of vocabulary that is presented whenever the user selects the slot.
  • the concepts will be displayed in the buttons at the top of the menu. (14)
  • a user may scroll through the viewport at the top of the menu, or create a new concept for the slot by using the system keyboard to enter a name for the new concept and then select the
  • the new slot will be shown as a blue, underlined word.
• Software features may be provided that automatically search for a symbol that corresponds to a user's label. If there is no symbol to match the label, then only the label will be added to the button. If the label matches one symbol, then the symbol will be automatically added to the button. If the label matches more than one symbol, the Select a Symbol menu (e.g., as shown in Fig. 67) will open to display all of the corresponding symbols. A user can then select the symbol he wants to use, and the selected symbol will be added to the button. If a user does not want to use one of these symbols, select the Cancel button to close the Select a Symbol menu without choosing a symbol.
• Still further software features may be provided for a user to add slots to a button's text message.
• a user may proceed with the following steps: (1) Select the green Modify button in the title bar. The button will turn red when it is selected. (2) Select the button that you want to modify. The Modify Button menu will open. See, for example, the menu shown in Fig. 66. (3) Select the Behaviors button.
  • the Behavior Editor menu will open. (4) Select the Behaviors drop-down menu. The menu will expand to display all the behavior categories. (5) Select the Message Window Operations option (you will need to use the scroll bar on the right side of the drop-down menu). The drop-down menu will close and display this category. (6) Select Insert Text, Fill Slots or Insert Text in the Behaviors viewport. Select the Add button.
• the system keyboard will open. (8) Use the system keyboard to enter the desired text. (9) Highlight the word that the user wants to use as a slot. (10) Select the Make Slot button in the bottom row of the system keyboard.
  • the Select Concept for Slot menu will open. An example of such menu is shown in Fig. 63. (11)
  • the Select Concept for Slot menu enables a user to choose a vocabulary concept for the slot he is creating. This concept will determine the type of vocabulary that is presented whenever a user selects the slot. If the word you chose as the slot is associated with any existing concepts, the concepts will be displayed in the buttons at the top of the menu. If a user wants to search through the existing concepts, the user can select the Select Concept button (the Select Concepts menu will open - see, e.g., Fig.
  • the rate at which a selection device enables the user to use the internet and select links from a webpage is another key measure of the desirability of the system that combines the selection device and the speech generation device.
  • the speech generation device 30 that is controllable by the eye gaze controller 20 has been provided with certain capabilities that enhance the rate at which the eye gaze controller 20 enables the user to select links from a webpage being displayed on the input screen 33 of the speech generation device 30.
• the speech generation device 30 that is controllable by the eye gaze controller 20 is configured to use the eye gaze controller 20 in conjunction with special accessible features provided by Mozilla Firefox to allow the users to directly select a link with the user's eyes without having to be accurate enough to hit the actual link on the webpage.
• the speech generation device 30 desirably is provided with a Firefox® internet browser 30d available from Mozilla software, a high speed modem 30e and a high speed internet connection 30f by which the speech generation device 30 can access websites using the browser 30d.
• the Firefox® internet browser 30d has a feature that inserts a numeric indicator beside every link on any webpage accessed by the browser.
  • the speech generation device 30 is provided with a special "On Screen Keyboard" for eye-tracking (and other access methods). The special "On Screen Keyboard"
• Screen Keyboard is configured to display to the user on the input screen 33, relatively larger buttons having numbers corresponding to each numeric indicator that has been assigned by the browser 30d beside every link on any webpage accessed by the browser 30d. As schematically shown in Fig. 21, to access a desired link, the user operates the eye gaze controller 20 to select the number displayed on the "On Screen
• the eye gaze controller 20 is configured to enable the user to select that number by focusing the user's eyes on the larger button of the special "On Screen Keyboard" to activate that link and call for the associated pages to be retrieved to the browser 30d from the website's server.
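As a rough illustration of how the numeric link indicators could be tied to the larger buttons of the special "On Screen Keyboard", consider the following Python sketch. The mapping, the URLs and the function name are hypothetical; the actual device would hand the selection to the browser rather than print a message.

```python
from typing import Dict

# Hypothetical mapping built when a page loads: the browser's numeric
# indicator for each link -> the link's target URL.
numbered_links: Dict[int, str] = {
    1: "https://example.org/news",
    2: "https://example.org/weather",
    3: "https://example.org/mail",
}

def on_keyboard_button_selected(number: int) -> None:
    """Called when the eye gaze controller activates a large numbered
    button on the special On Screen Keyboard."""
    url = numbered_links.get(number)
    if url is None:
        print(f"No link is numbered {number} on this page")
        return
    # In the real device this step would ask the browser to follow the link;
    # here we only report what would be retrieved.
    print(f"Retrieving {url}")

on_keyboard_button_selected(2)
```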
• Message Window Division: Conventional communication software provided in a speech generation device typically works for creative conversation by providing a "message window" in which the user composes the message to be spoken by the speech generation device. When the user wants to send or "speak" the composed message, the user selects the message window.
  • the user often selects the message for speaking by just looking at the message window too long during review of some aspect of the message.
• the "dwell" selection option governs activation in a conventional eye-tracking controller.
  • the user cannot afford to take up more than the dwell time when reading a given region of the input screen, else the user will select objects on the screen that the user only meant to read.
  • This annoying aspect associated with the dwell selection option particularly arises when the user wants to review what is in the message window to ensure it is correct before selecting the message to have it spoken by the speech generation device. The user may want to perform such a review either at some point during composition of the message or directly before choosing to select the message for being spoken by the speech generation device.
  • the speech generation device 30 and the eye gaze controller 20 are configured to provide simultaneously on the input screen 33 of the speech generation device, a "composing window” and a “speak message window” separate from the "composing window.”
  • the speech generation device 30 and the eye gaze controller 20 are configured to give the user the option of setting the system to this "split" message window.
  • the "composing window” part of the message window contains the message but cannot be activated by the user's gaze focused in the "composing window.”
  • the remaining part of the message window is then a "speak message window” button, and the message in the "composing window” part of the message window only will be spoken by the speech generation device 30 when the user's gaze focuses on the "speak message window” button for the pre-set dwell time.
• the speech generation device 30 and the eye gaze controller 20 are configured to permit the user to define the relative size of the "speak message window" button.
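One way the split message window behavior might be modeled is sketched below in Python: a dwell completed over the composing region is ignored, while a dwell completed over the speak button region triggers speech. The class name, the region labels and the size fraction are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class SplitMessageWindow:
    text: str = ""
    speak_button_fraction: float = 0.25  # user-defined share of the window given to the speak button

    def on_dwell(self, region: str) -> None:
        """Handle a completed dwell on either part of the split message window."""
        if region == "composing":
            # The composing window only displays the message; gazing at it
            # never triggers speech, so the user can review it safely.
            return
        if region == "speak":
            self.speak()

    def speak(self) -> None:
        print(f"SPEAKING: {self.text}")

window = SplitMessageWindow(text="Can we have dinner now?")
window.on_dwell("composing")  # reviewing the message: nothing happens
window.on_dwell("speak")      # deliberate selection: the message is spoken
```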
• Dashboard Hotspot: Some conventional communication software contains many thousands of pages. Additionally, conventional communication software typically relies on some very critical features to which a user desirably should always have quick access, and these features include: Pause; Alarm; and the Selection option. However, space on the input screen of a conventional speech generation device typically is at a premium, and it thus is inefficient to display these critical features on every page being viewed on the input screen. Moreover, it also is critical to have these features available to be selected by the user at the times when the user is not viewing any of the communication pages.
  • the speech generation device 30 and the eye gaze controller 20 are configured to provide a "Dashboard Hotspot".
  • the speech generation device 30 and the eye gaze controller 20 are configured so that when the Dashboard
  • Hotspot is selected, a "popup window" appears on the input screen 33 of the speech generation device 30, and all of the critical features appear within the "popup window” for selection by the user. Thus, the user only needs to make two selections to activate any of the critical features.
• An example of such a dashboard popup window is shown in Fig. 81.
  • the speech generation device 30 and the eye gaze controller 20 are configured to permit the user to locate this Dashboard Hotspot in any user-defined section of the input screen 33, at the user's option. As schematically shown in Fig. 23B, the most popular location for the Dashboard Hotspot 38 is in one of the corners of the input screen 33. For example, a user may select the dashboard hotspot in the exemplary visual display of Fig. 80 by selecting the bottom left corner of the display.
  • the speech generation device 30 and the eye gaze controller 20 also are configured to permit the user to choose the size of the area on the input screen 33 that is occupied by the Dashboard Hotspot. Moreover, as schematically presented in Fig.
  • the speech generation device 30 and the eye gaze controller 20 also are configured to provide for the Dashboard Hotspot an extended area 38a beyond the display area of the input screen 33 in order to make it easier for the user to employ the eye gaze controller 20 to select the Dashboard Hotspot.
  • the eye gaze controller 20 is configured to look for the user's gaze in the extended area 38a, which extends beyond the boundary of the input screen 33, in order to enable the user to focus the user's gaze in that extended area 38a and still be able to select the Dashboard Hotspot 38.
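A simple way to picture the extended selection area is the following Python sketch, in which the hotspot's hit test grows the on-screen corner region by a margin so that gaze points just beyond the screen boundary still count as selections. The coordinate system, the sizes and the class names are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px <= self.x + self.width and self.y <= py <= self.y + self.height

@dataclass
class DashboardHotspot:
    on_screen: Rect    # corner region drawn on the input screen
    extension: float   # extra margin, beyond the screen edge, that also counts as the hotspot

    def hit(self, gaze_x: float, gaze_y: float) -> bool:
        """True if the gaze point falls on the hotspot, including the
        extended area just past the boundary of the input screen."""
        extended = Rect(self.on_screen.x - self.extension,
                        self.on_screen.y - self.extension,
                        self.on_screen.width + 2 * self.extension,
                        self.on_screen.height + 2 * self.extension)
        return extended.contains(gaze_x, gaze_y)

# Hotspot in the bottom left corner of a 1024 x 768 screen, extended 50 units
# past the screen edges (coordinates increase rightward and downward).
hotspot = DashboardHotspot(on_screen=Rect(0, 668, 100, 100), extension=50)
print(hotspot.hit(-20, 790))   # gaze slightly off-screen still selects the hotspot
print(hotspot.hit(500, 300))   # gaze in the middle of the page does not
```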
• users may be provided with software features to define the dashboard settings. For example, an interface menu such as that shown in Fig. 82 provides a Dashboard Hotspot Settings menu. A user may select the Show
• the Position drop-down menu may be chosen to choose the corner where a user wants the Dashboard Hotspot to be displayed (e.g., bottom left or bottom right).
  • the Size drop-down menu may be chosen to select the size of the Dashboard Hotspot (e.g., Normal, Bigger, Biggest). If a user wants to change the popup that will open when the Dashboard Hotspot is selected, the Dashboard Popup button may be selected, and a user can navigate through a directory by searching, scrolling or other means to find the desired popup.
  • a user can select the Dashboard Onscreen Keyboard button to make a selection.
• Audio-Eyetracking: Conventional eye gaze communication software is configured to illuminate the object on the input screen of the speech generation device when the user's eyes focus on the object.
• this way of indicating to the user where the user's eyes are focusing does not work for users who are blind, nor does it work very well for users who have very poor vision.
  • the speech generation device 30 and the eye gaze controller 20 desirably are configured with audio-eyetracking software that generates an audio signal to the user as the user's eyes get close to focusing on an object on the input screen 33 of the speech generation device 30.
  • the audio-eyetracking software protocol is configured to cause the speech generation device 30 to speak the name of the object to tell the user what it is.
  • the audio signal controlled by the audio-eyetracking software protocol can change as the user's eye gaze focuses closer to the object or farther from the object.
• the audio-eyetracking software protocol of the present invention desirably is configured to cause the speech generation device 30 to tell the user whether the user is focusing the user's gaze above, below, to the left or to the right of the object and how far away the user's gaze is focusing from the object in that direction.
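The direction-and-distance feedback described above might be computed along the lines of the following Python sketch, which names the object when the gaze is on it and otherwise reports roughly how far, and in which direction, the gaze lies from it. The function name, the pixel units and the 30-pixel "on target" radius are illustrative assumptions only.

```python
import math

def audio_feedback(gaze_x: float, gaze_y: float,
                   target_x: float, target_y: float,
                   target_name: str, on_radius: float = 30.0) -> str:
    """Spoken cue describing where the gaze is relative to a target object.

    Screen coordinates are assumed to increase rightward and downward."""
    dx = gaze_x - target_x
    dy = gaze_y - target_y
    distance = math.hypot(dx, dy)
    if distance <= on_radius:
        # Gaze is on the object: speak its name so the user knows what it is.
        return target_name
    if abs(dx) >= abs(dy):
        direction = "to the right of" if dx > 0 else "to the left of"
    else:
        direction = "below" if dy > 0 else "above"
    return f"You are looking about {int(distance)} pixels {direction} the {target_name}"

print(audio_feedback(200, 200, 500, 210, "Speak button"))  # far to the left of the object
print(audio_feedback(495, 205, 500, 210, "Speak button"))  # on the object: its name is spoken
```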
• For the audio-eyetracking feature of the present invention, there would be a setting within the software menus as to what message the user wants to hear as the "audio-feedback" for each given button being displayed on the input screen of the speech generation device 30.
  • the software menus would cause the messages to be spoken by the speech generation device 30 when the user was in this set-up mode of the audio-eyetracking feature.
  • the menu options of the voice and volume of the audio feedback and whether the audio feedback is to be provided via private means (earphones) or public means (speakers) are spoken by the speech generation device 30 when the user was in the set-up mode.
  • Fitting a standard DAESSY® wheelchair mount determines the location and size for the clamp that is attached to the frame of the wheelchair and the correct lengths and bends for the stainless steel tubes that support the mount. These dimensions are determined by the relationship between the position of the mounted device, the location on the wheelchair where the clamp will be attached to the frame and the position of the user's head when using the wheelchair.
  • a communication device located for scanning or head pointer access must be higher and further away from the user.
  • Prior eye gaze controlled communication systems had set a minimum focus range at 20 inches, which is often too far for safely mounting the system onto smaller and/or lighter wheelchairs.
  • close mounting is within a range of about 12-20 inches from a display screen to a user's face/eyes.
  • a range of about 15-17 inches is employed, with a particular example of 16.5 inches desirable for some applications, such as for mounting on a pediatric wheelchair.
  • Other wider ranges such as but not limited to a distance range of between about 17-28 inches, or between about 20-24 inches may also be used.
• Such desirable ranges, including those affording close proximity to a user, are accomplished in part by providing a fixed-focus type eye tracking device, which allows a user to focus the camera portion of the tracking device by various calibration procedures.
  • a configuration with improved (i.e., closer) mounting locations can provide more security to the user and mounting options for a wide range of wheelchairs, including pediatric wheelchairs, as well as mounting options for desks and walls.
• the closer mounting ability of the subject system helps avoid potential problems when a system is mounted so far away from the user that it makes the balance of the wheelchair unstable.
  • Particular proximity for a user also allows users, especially those with visual impairments, to view the screen more clearly and thus function more efficiently.
• An exemplary view of acceptable and unacceptable mounting orientations, including views of exemplary positioning for height, distance, angle settings, tilt, and accommodations for users with glasses relative to their position in a wheelchair, is shown in Fig. 68.
  • a microprocessor associated with speech generation device 30, with eye gaze controller 20, and/or a separate processor/controller may provide software storage and execution functionality by which a user can establish certain preferences and capabilities of the eye tracker portion of the system that includes a speech generation device 30 that the user operates with an eye gaze controller 20 in accordance with an embodiment of the present invention.
  • an eye tracking settings menu may be provided to a user. A user may select the "Select With" drop-down menu and then choose one of the available options previously described as a selection method.
  • the "Blink” option sets the software to register a selection when the user gazes at an object and then blinks within a specific length of time. (There is an adjustable minimal time setting to avoid false activations from naturally-occurring blinks.)
  • the "Dwell” option sets the software so that if the user's gaze is stopped on an object for a specified length of time, the highlighted object is selected.
  • the "Blink/Dwell” option sets the software so that if the user's gaze is stopped on an object for a specified length of time, the highlighted object is selected. The object may also be selected if the user blinks on it before the time elapses.
  • the "External Switch” option sets the software to select the highlighted object when an external switch is activated.
  • buttons may be provided for a user to perform a secondary action when a user maintains the blink for a specified length of time. This may be activated by selecting the
  • the "Requires Both Eyes to Select" check box may be enabled by default in one example, with the user being able to clear the check box if he wants to blink only one eye to trigger a selection.
  • Sliders may be adjusted to increase or decrease the time frames for each of the selection options. For example, a user may select and drag the Blink Time slider to adjust the time that a user must maintain the blink to make a selection (the primary action). A user may select and drag the Secondary Time slider to adjust the additional time that a user must maintain the blink to trigger the secondary action.
  • a user may select and drag the Cancel Time slider to adjust the total time that the user must maintain the blink to cancel all actions (primary and/or secondary).
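The cumulative blink thresholds controlled by the Blink Time, Secondary Time and Cancel Time sliders could be applied as in the following Python sketch; the default durations are arbitrary placeholders, not values taken from the device.

```python
def classify_blink(blink_duration: float,
                   blink_time: float = 0.4,
                   secondary_time: float = 0.6,
                   cancel_time: float = 1.5) -> str:
    """Classify a completed blink according to cumulative time thresholds.

    blink_time      -- time the blink must be held to trigger the primary action
    secondary_time  -- additional time (beyond blink_time) to trigger the secondary action
    cancel_time     -- total time beyond which all actions are cancelled
    """
    if blink_duration < blink_time:
        return "ignored"            # treated as a naturally occurring blink
    if blink_duration >= cancel_time:
        return "cancelled"          # blink held too long: cancel primary and secondary actions
    if blink_duration >= blink_time + secondary_time:
        return "secondary action"
    return "primary action"

for duration in (0.2, 0.5, 1.2, 2.0):
    print(duration, "->", classify_blink(duration))
```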
• the color-coded time frame bar at the bottom of the exemplary Blink Settings menu of Fig. 70 displays the cumulative time periods for each of the selection options. [00315] If a user selects the "Dwell" option as a desired user selection method, a user may be provided with another interface menu, such as the Dwell Settings menu of
  • This interface may be provided with a slider feature.
  • the dwell time slider can be used to adjust the length of time a user's eyes must pause on an object to make a selection. Select the slider thumb and drag it to the right to increase the dwell time, or drag it to the left to decrease the dwell time. It should be appreciated that if a user selects a "Blink/Dwell" option, the user may want to define a longer dwell setting here to allow more time to blink.
  • a user may be provided with an interface menu, such as the switch settings menu of Fig. 72.
  • This interface menu may be used to configure use of a computer keyboard as an external switch. For example, a user can select the "acts as switch 1" drop-down menu and then select the key that the user wants to act as a switch input.
  • a user may select the highlight target check box if the user wants to highlight the area of the touch screen that is being selected. Otherwise, the user can make sure the highlight target check box is not selected.
  • a highlight rules button may be selected to customize the style and appearance of the screen highlight, as discussed later in more detail relative to Fig. 75.
  • Many individuals who use eye tracking equipment depend on seeing the screen cursor. Referring still to the menu of Fig. 69, features may be available for a user to select the Show Cursor check box if the user wants the cursor to be visible on the input screen 33. Otherwise, make sure the Show Cursor check box is not selected.
  • a user can select the Click check box if he wants his speech generation device 30 to make an audible sound when it selects an object.
  • a volume slider may be used to increase or decrease the volume of the click. If a user does not want to use audio feedback for object selection, the user will make sure that the Click check box is not selected.
  • a user can select the Number of Targets drop-down menu to choose the number of screen targets used for calibrating the eye gaze controller 20 (EyeMax) accessory (the higher the number of targets, the more accurate the calibration, and the longer the calibration procedure will take to complete).
  • a user may choose the "Target Settings" button in the eye tracking settings menu of Fig. 73 to select the visual target used in calibration (and customize its settings).
  • a user can select the Target Image drop-down menu to choose the image he wants the user to focus his gaze on during the calibration process. The chosen option is provided in a display box on the left. If a user wants to use multiple images during the calibration process, the user can select the Randomize Targets check box. If a user wants to display the focal point (the actual spot on the graphic that the user should be watching during calibration), the user can select the Show Focal Point check box. The focal point will appear in the display box as a light green region. A user may use the Target Speed slider to adjust the speed of the target.
  • the user can select the slider thumb and drag it to the left to slow the target down, or drag it to the right to speed the target up.
• the display square above the slider will update to reflect the current setting. If a user wants the software to display animation on the touch screen in-between displaying the calibration targets, the user can select the Animate Between Targets check box.
  • the animation will be shown in the display box underneath the check box.
  • a user may use the Animation Speed slider to adjust the speed of the "in-between" animation.
  • the user can select the slider thumb and drag it to the left to slow the animation down, or drag it to the right to speed the animation up.
  • the display box above the slider will update to reflect the current setting.
• Additional features are provided in the eye tracking settings menu of Fig. 69 for a user to select the Background Color drop-down menu to choose the scheme that is closest to that of the page(s) the user will most often use.
  • Exemplary options are Navigator Yellow (appropriate for any page set dominated by light-colored buttons), Black, or Grey (appropriate for page sets with a darker color scheme).
  • selection boxes are available for a user to choose which eye(s) to perform calibration procedures on.
  • the software may use both of the user's eyes for calibration (this usually results in a more accurate calibration). If one of the user's eyes is compromised, the user can select the check box that corresponds to the compromised eye (Calibrate Left Eye or Calibrate Right Eye) to clear the selection. Clearing the selection means that the software will not use that eye for calibration.
  • the Eye Track Status menu (for example, see Fig. 74) will display a blue box, with a dynamic picture of the user's eyes.
• When properly calibrated and positioned, the eyes should both appear in the blue box, and a green cross-hair should appear on the eye(s) that the software is set to track.
  • a user may toggle the image in this menu to display either the live camera image or only the eye glints (green crosshairs that signify the pupil of each eye).
  • the user can select the triangular button in the lower right corner of the viewing field.
• When the image is displaying the live camera feed, the button symbol will be green crosshairs. When the image is displaying only the eye glints, the button symbol will change to an eye. A user may also select the Please Guide Me button to launch the Eye Tracking Wizard, which provides an explanation about the calibration process, as well as a demonstration video.
  • a user may be provided access to an interface menu that enables the user to modify highlight rules settings, as shown in the screenshot of Fig. 75.
  • a "Type" control may be selected by using the Type drop-down menu to select the type of highlight: Invert or Outline.
  • Outline Color a user can use the Outline Color button to open the Color Selector menu and select (or create) the desired color for the outline. When a user has chosen the desired color, select the OK button to close the Color Selector menu.
  • Outline Width a user can select the Thicker button to increase the width of the outline, or select the Thinner button to decrease the width of the outline.
  • the "Preview Button” control enables a user to select the Preview Button to see an example of how the current highlight rule settings will appear.
• the "Fill Type" control enables a user to use the Fill Type drop-down menu to select the type of fill the user wants to use for the currently highlighted object: None, Bottom Up, or Contract, which is from the outside edges in.
  • a "Drain” control enables a user to select the Drain check box if the user wants an object that he hovered over (but did not select) to retain its fill for a brief time before draining (this will enable the Drain Delay and the Drain Time sliders).
  • the "Drain Delay” control enables a user to use the Drain Delay slider to set the time interval that the software will wait before it starts to drain the fill from a screen object.
  • the "Drain Time” control enables a user to use the Drain Time slider to set the time interval required to completely drain the fill from a screen object.
  • the "OK/Cancel” feature may be selected to either accept the current settings or close the Highlight Rules menu without accepting any changes. [00325] When the Fill Type drop-down menu is set to either Bottom Up or Contract, the software may highlight the object that is about to be selected, and will "fill” it with the highlight color. The examples shown in Figs. 76 and 77 show the difference between the two fill types.
  • Drain check box If the Drain check box is selected, screen objects will not lose their fill immediately after the user moves the cursor off of them.
  • the Drain Delay slider indicates how long an object will maintain its fill before it starts to drain.
  • the Drain Time slider indicates how long it will take screen objects to completely lose their fill.
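The fill and drain timing governed by the Drain Delay and Drain Time sliders might be computed as in the following Python sketch, which holds an object's partial fill for the drain delay and then empties it linearly over the drain time. The function names and the example timings are assumptions for illustration only.

```python
def fill_fraction(hover_time: float, dwell_time: float) -> float:
    """Fraction of the highlight fill shown while the gaze rests on an object."""
    return min(1.0, max(0.0, hover_time / dwell_time))

def drained_fraction(fill_at_exit: float, time_since_exit: float,
                     drain_delay: float, drain_time: float) -> float:
    """Remaining fill after the gaze leaves an object.

    The fill is held for `drain_delay` seconds and then drains linearly
    to zero over `drain_time` seconds."""
    if time_since_exit <= drain_delay:
        return fill_at_exit
    draining = time_since_exit - drain_delay
    return max(0.0, fill_at_exit * (1.0 - draining / drain_time))

# An object hovered for 0.6 s with a 1.0 s dwell time is 60% filled...
fill = fill_fraction(0.6, 1.0)
# ...holds its fill for a 0.5 s drain delay, then empties over 1.0 s.
for t in (0.2, 0.5, 1.0, 1.5):
    print(t, round(drained_fraction(fill, t, drain_delay=0.5, drain_time=1.0), 2))
```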
  • a user may be provided with an exemplary interface for modifying advanced eye tracking settings.
  • a user can indicate for software to perform a specific action if the user's eyes have been "lost" (out of calibration) for a set amount of time.
  • the timeout duration can be selected, as well as the particular action (e.g., display the Dashboard popup, sound an audible alarm to request help from a caregiver, both or other options).
  • the particular alarm sound may also be selectable by a user.
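A possible shape for the "lost eyes" timeout described above is sketched below in Python: a small watchdog that fires a user-configured action once the eyes have been undetected for the selected duration. The class name and callback are hypothetical and not drawn from the disclosed software.

```python
import time

class EyeLossWatchdog:
    """Fire a configured action if the user's eyes stay "lost" (out of
    calibration) for longer than a user-selected timeout."""

    def __init__(self, timeout_seconds: float, on_timeout) -> None:
        self.timeout = timeout_seconds
        self.on_timeout = on_timeout   # e.g. show the Dashboard popup and/or sound an alarm
        self.lost_since = None
        self.fired = False

    def update(self, eyes_detected: bool) -> None:
        """Call periodically with the current eye-detection status."""
        if eyes_detected:
            self.lost_since = None
            self.fired = False
            return
        if self.lost_since is None:
            self.lost_since = time.monotonic()
        elif not self.fired and time.monotonic() - self.lost_since >= self.timeout:
            self.on_timeout()
            self.fired = True

watchdog = EyeLossWatchdog(timeout_seconds=10.0,
                           on_timeout=lambda: print("Eyes lost: showing Dashboard popup and sounding alarm"))
```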
  • a user may be provided with an exemplary interface for modifying additional desktop settings for a speech generation device when it is extended to a Windows desktop or other computer interface.
  • a user may be able to select the Show Dwell Time Animation check box if the user wants to display the dwell time animation when making a selection on the Windows desktop.
  • the Dwell Box Size group box may be used to define an area in which the user's eyes must remain for the duration of the dwell time for the selection to register. In one embodiment, small, medium or large check boxes may be available, or other ranges of areas.
• When a user stops his gaze on the Windows desktop (and if he has defined a dwell time), the software may be configured to show an animation (e.g., circular sweep) that indicates the dwell time. If a user keeps his gaze within the dwell box during the entire animation, then the selection will take place.

Abstract

A portable eye gaze controller comprises a universal serial bus (USB) that is configured and arranged to enable the controller to be connected with any computer device, and in particular one that forms part of a speech generation device. The portable eye gaze controller further comprises an eye tracker and a battery that powers the eye tracker and that is separate from any power source for the speech generation device that is being controlled by the eye gaze controller.

Description

TITLE OF THE INVENTION
SEPARATELY PORTABLE DEVICE FOR IMPLEMENTING EYE GAZE CONTROL OF
A SPEECH GENERATION DEVICE
PRIORITY CLAIM
[0001] This application claims the benefit of previously filed U.S. Provisional Patent Application entitled "SEPARATELY PORTABLE DEVICE FOR IMPLEMENTING EYE GAZE CONTROL OF A SPEECH GENERATION DEVICE," assigned USSN 61/217,536, filed June 1 , 2009, and which is fully incorporated herein by reference for all purposes.
BACKGROUND OF THE INVENTION
[0002] The present invention relates to devices that can be operated using the gaze of the user's eye and in particular to speech generation devices that can be operated using the gaze of the user's eye.
[0003] Speech generation devices are known and are used by persons with either total or partial speech impairments. Such persons use speech generation devices to communicate audibly with their environments. Many classes of persons desiring to use such speech generation devices have very limited motor control over most parts of their bodies. Moreover, many such persons experience diminishing motor control with the passage of time. Accordingly, where once such user was able to control a speech generating device with the user's fingers, over time this capability diminishes and finally is lost. This sort of diminishing motor control is especially acute among sufferers from Lou Gehrig's disease, myasthenia gravis. However, control over one's eye movements and/or head movements tends to persist over long periods of time. Accordingly, technology that detects the location of the focus of one's eyes has been used to generate signals that can be used to control the features of the speech generation device that once was controlled by operation of the user's fingers for example. [0004] Conventional devices with such eye gaze control include the MyTobii P10 device from Tobii AT of Sweden for example. However, the addition of the eye gaze control technology increases the power drain of the speech generation device. To compensate for this added power requirement, the speech generation device can be connected to a power supply. But this solution in effect tethers the user to an electrical outlet. The need for proximity to an electrical outlet imposes limitations on the user's freedom of movement. Nor has this limitation been overcome by providing a battery powered speech generation device, as the addition of the eye gaze control technology increases the required frequency of recharging the battery. This fact again places its own limitations on the user's freedom of movement by limiting the time during which the speech generating device can be used without recharging. Moreover, calibrating a conventional eye gaze control to a specific user is time consuming and tedious. [0005] Another problem with conventional speech generating devices having eye gaze controllers is the fixed location of the eye gaze controller's input field with respect to the rest of the speech generating device. This fixed architecture can be cumbersome depending upon the requirements of the potential user. The relative location of the input field of the eye gaze controller with respect to the rest of the speech generating device may be less than ideal for particular users. In order to take full advantage of the eye gaze controller, the user may be subjected to a less than completely comfortable body position.
[0006] Conventional speech generating devices having eye gaze controllers typically employ a camera and can be differentiated on the basis of whether the user has sufficient access to the camera's lens to be able to focus the camera, Those that allow user access to the camera's lens risk having the user, who typically has limited motor control skills, accidentally hit the lens of the camera so as to throw it out of the proper focus. Because of the user's limited motor control, the user likely will be unable to manipulate the lens to recover the proper focus. Moreover, if the user is permitted access to the lens, a further problem arises from the possibility of the user's bodily fluids (saliva, vomit, etc) contaminating the lens and/or the camera. [0007] While so-called 'fixed-focus' type systems prevent user access to the lens of the camera and thus avoid the above-mentioned problems, such conventional 'fixed- focus' type devices have their disadvantages. Chief among them is the loss in flexibility because the eye-gaze controller is built into the system in a fixed architecture. Another is their requirement of at least four separate light arrays. Still another disadvantage is the need for the user to be positioned at least 20 inches from the screen for the camera to be able to focus on the user's eye gaze, and 20 inches is too far for safely mounting the system onto smaller and/or lighter wheelchairs such as pediatric wheelchairs and sometimes too far for users with visual impairments to be able to view the screen comfortably.
[0008] While some speech generation devices have capabilities to enable the user to control some peripheral devices in the user's environment, they are cumbersome to set up and require the steady devotion of more time than the typical patience of the user population will tolerate. [0009] There are scenarios where it is beneficial to change how the user is able to access and use the speech generation device. For progressive conditions - such as ALS/MND - there is a natural progression from direct selection by the user to other access methods - such as scanning/mouse/trackball/joystick/head control - to eye gaze tracking (and then sometimes back to scanning). For other conditions, there are situational changes: such as using eye-tracking while in a wheelchair, but when riding in a vehicle where the user can't be in a wheelchair the user may move to a different access method of operating the speech generating device. Unfortunately, conventional speech generation devices with eye gaze control require the user to go through a minimum of six different steps in order to change access methods for operating the speech generation device. These six steps include: 1) select menus; 2) select setup area; 3) select access menu; 4) select new access type; 5) choose access type; 6) click OK. Also, several of these six steps are very hard to do with eye-tracking because of the target size or because the user is in the scanning mode and the menu does not fit into the scan pattern, which is necessary if the user wanted to switch from scanning back to eye-tracking. Thus, often the user requires intervention from a caregiver in order to switch access methods.
[0010] Conventional speech generation devices with eye gaze controllers consume valuable screen real estate due to their dependence on using part of the computer/device display to show the user if the user's eyes are being tracked. [0011] When a conventional gaze-access controller determines that the user is gazing anywhere on an object on the input display screen, that time is counted toward the "dwell time" setting that is used to activate the object on the screen. Unfortunately, users who suffer with uncontrollable head-movements or poor eye-control may unintentionally have their gaze leave the object that they desire to select. In such instances, the user would lose the accumulated dwell time and need to start over in the user's attempt to select that object. This result can cause significant frustration. [0012] Another source of frustration for users of conventional speech generation devices with eye gaze controllers is the amount of time it takes the user to compose a message. The frustration arises from the resulting disruption in the flow of the user's conversation, writing, and thought process.
[0013] Within communication software of conventional speech generation devices, there are an increasing number of pages, which can number in the thousands. And yet there are also some very critical items that always should be readily available for the user's selection. However, screen space in a conventional speech generation device is at a premium, and it is thus inefficient to put these critical items on the screen for every page of the communication software. It is also desirable to have these critical items available at the times when the user is not using the communication pages. [0014] Finally, conventional eye gaze controllers are not useful for a potential user who is blind or has visual acuity below a certain minimum threshold.
SUMMARY OF THE INVENTION
[0015] It is an advantage of some embodiments of the present invention to provide a speech generation device with an eye gaze controller overcoming the draw backs of the conventional speech generation device technology that incorporates an eye gaze controller.
[0016] It is another advantage of some embodiments of the present invention to provide for a speech generation device, an eye gaze controller that is separately portable and detachable from the speech generation device. [0017] It is a further advantage of some embodiments of the present invention to provide for a speech generation device, an eye gaze controller that can be oriented in more than one position relative to the speech generation device. [0018] It is still another advantage of some embodiments of the present invention to provide for a speech generation device, an eye gaze controller that has an universal series bus (USB) port by which the eye gaze controller can be connected to the speech generation device.
[0019] It is yet another advantage of some embodiments of the present invention to provide for a speech generation device, an eye gaze controller wherein the controller has its own portable power supply separate from the power supply of the speech generation device.
[0020] It is an additional advantage of some embodiments of the present invention to provide for a speech generation device, an eye gaze controller wherein the controller has its own source of power in the form of a battery that is separate from any battery that provides power to the speech generation device.
[0021] A yet further advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller having its own on-board processor and its own source of power in the form of a battery that is separate from any battery that provides power to the speech generation device.
[0022] It is still a further advantage of some embodiments of the present invention to provide for a speech generation device, an eye gaze controller that includes a fixed- focus camera and yet is a fully portable eye gaze controller. [0023] It is yet a further advantage of some embodiments of the present invention to provide for a speech generation device, an eye gaze controller that includes a fixed- focus camera and yet prevents access by the user to the camera and to the lens of the camera.
[0024] It is an additional further advantage of some embodiments of the present invention to provide for a speech generation device, a fully portable eye gaze controller that includes a fixed-focus camera while preventing access by the user to the camera and to the lens of the camera and using only two sets of LEDs - each set of LEDs being spaced linearly apart from the other set of LEDs.
[0025] A further advantage of some embodiments of the present invention is to provide for a speech generation device, a fixed-focus type eye gaze controller that avoids the drawbacks of conventional devices.
[0026] Another further principal object of the present invention is to provide a speech generation device with a "fixed-focus" type eye gaze controller that can be positioned less than 17 inches from the screen. [0027] Another advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller that incorporates a preprogrammed chip for remote control of other devices in the user's environment without having to rely on others for lengthy programming of the system and that has pre-programmed pages that let the user's eye gaze control all consumer electronics in the user's environment. [0028] An additional advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller that enables the user to automatically dial (including 911), speed-dial, receive calls, talk over the telephone and listen as well and perform every function that one can perform using a conventional telephone.
[0029] A yet further advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller that incorporates an eBook reader whereby a user can completely control with eye-tracking the normal uses of an eBook reader, including reading the books, changing the voices used to read the books, and obtaining the books from bookshare.org, all without the need of intervention from a caregiver.
[0030] A still further advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller that allows the user to reliably and immediately switch the access method between eye-tracking to another method with a single selection and thus independently and without requiring intervention from a caregiver.
[0031] Yet another additional advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller having positioned near its camera housing, indicator lights that tell the user if the user's eyes are being tracked.
[0032] Still another additional advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller having settings that enable the user to choose to retain the "accumulated dwell time" if the user's gaze leaves the object that the user desires to select. One such setting relates to the duration of time before the user loses the "accumulated dwell time", and another such setting relates to the rate at which the user loses the "accumulated dwell time". [0033] A yet additional advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller having access to tools that minimize necessary navigation to save time and energy during message composition. These would include Slots: whereby the user's selection of a common word would cause a commonly used phrase containing such word to be provided in an object that the user can select with the user's eye gaze. These also would include Phrase Prediction: whereby from what the user already has typed, an entire phrase (instead of just the next word or next character) is provided in an object that the user can select with the user's eye gaze.
[0034] A still additional advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller having access to a special "On Screen Keyboard" using larger buttons for eye-tracking and other access methods whereby the user can enter from the special "On Screen Keyboard," the numeric indicator that some web browsers place beside every link on a webpage and so more easily select a desired link to a webpage. [0035] Still another additional advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller having control over a special 'split' message window in which part of the message window contains the message undergoing composition by the user but cannot be activated, and the remainder of the 'split' message window (size defined by the user) is then a 'Speak Message Window' button that will speak the contents of the message window when the user is satisfied with the final composition of the message.
[0036] Yet an additional advantage of some embodiments of the present invention is to provide a speech generation device with an eye gaze controller having access to a 'Dashboard Hotspot' that the user can locate in any section of the screen and with any desired size and that enables the user to launch a 'popup window' containing the critical items that can be selected by the user by making no more than two selections.
[0037] A yet further additional advantage of some embodiments of the present invention is to provide for users that are blind or have very poor vision, a speech generation device with an eye gaze controller having audio-eye-tracking that provides audio cues to the user to inform the user when the user's eyes are focused on an area of the screen that the user might want to select and thereby enables such users to employ eye-tracking as a communication and computer access method. [0038] Additional aspects and advantages of the invention will be set forth in part in the description that follows, and in part will be obvious from the description, or may be learned by practice of the invention. The aspects and advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the descriptions that follow.
[0039] One exemplary embodiment of the disclosed technology concerns a portable eye gaze controller comprising a first housing, an eye tracker, a first battery and a first universal serial (USB) socket. The eye tracker is disposed within the first housing. The first battery is also disposed within the first housing and is electrically connected to the eye tracker to provide power to operate the eye tracker. The first universal serial (USB) socket is carried by the first housing and is electrically connected to the eye tracker. [0040] Another exemplary embodiment of the disclosed technology concerns a speech generation device including a portable eye gaze controller. The portable eye gaze controller includes a first housing, an eye tracker disposed within the first housing and a first universal serial bus (USB) socket carried by the first housing and electrically connected to the eye tracker. The speech generation device further includes a second housing, a processor disposed within the second housing, an input screen also disposed within the second housing, and a second universal serial bus (USB) socket carried by the second housing and connected to the processor. The portable eye gaze controller is coupled to the processor via a connection established between the first USB socket and the second USB socket.
[0041] Another exemplary embodiment of the disclosed technology concerns an eye tracker including a housing, first and second light sources, a video camera and a focusing lens. The first and second light sources are disposed within the housing such that light illuminates outwardly from the housing towards the eyes of a user. The video camera is disposed within said housing and is configured to detect light reflected from the eyes of a user. The focusing lens is disposed in front of the video camera and is aligned with a central opening that is defined in said housing. In some more particular eye tracker embodiments, the eye tracker includes first and second light sources comprising LED arrays that are disposed respectively to the right and left of the video camera within the housing. In other more particular eye tracker embodiments, the eye tracker further includes first and second indicator lights configured to illuminate when the eye tracker has acquired the location of the user's eye asociated with that indicator light.
[0042] Other embodiments of the present subject matter concern a speech generation device including an input screen, an eye tracker, a processor and related computer-readable medium for storing instructions executable by the processor, and speakers. The input screen is configured for displaying selectable pages to a viewer. The eye tracker includes at least one light source and at least one photosensor that detects light reflected from the viewer's eyes to determine where the viewer is looking relative to the input screen. The instructions stored on the computer-readable medium configure the speech generation device to generate output signals for establishing communication with a separate device or network. The speakers provide audio output of signals received from the separate device or network. In some more particular embodiments, the separate device or network comprises a telephone, and wherein the instructions stored on the computer readable medium initiate the display on the input screen of a keypad with numbers for dialing the telephone that are selectable by a user's gaze detected by the eye tracker. In other more particular embodiments, [0043] the instructions stored on the computer-readable medium more particularly configure the speech generation device to connect to the internet such that a user can navigate web pages displayed on the input screen with the selection control of said eye tracker.
[0044] In still further particular embodiments, the instructions stored on the computer-readable medium more particularly configure the speech generation device to download an e-book over the established internet connection. [0045] Yet further exemplary embodiments of the subject technology relate to a method of changing the access method of an electronic device interfaced with an eye gaze controller from an eye tracking access method to at least one other access control protocol. In accordance with such an exemplary method, a selection method navigator is displayed on an input screen for a user, wherein the selection method navigator displays a plurality of access methods for interfacing with the electronic device. A user's gaze is detected with the eye gaze controller as the user's eyes are focused on an area of the input screen depicting the desired access method for subsequent operation of said electronic device. Finally, the access method of the electronic device is switched from an eye gaze tracking access method to the desired access method selected by the user's gaze.
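By way of illustration only, a minimal Python sketch of such a selection method navigator is given below; the set of access methods and the class names are assumptions, since the disclosure specifies only that an eye-gaze-selected choice switches the device to another access method.

```python
# Minimal sketch (assumed names): a selection method navigator that lets the
# active access method be switched away from eye tracking via a gaze selection.
from enum import Enum, auto

class AccessMethod(Enum):
    EYE_TRACKING = auto()
    TOUCH = auto()
    SWITCH_SCANNING = auto()
    HEAD_TRACKING = auto()

class SelectionMethodNavigator:
    def __init__(self):
        self.active_method = AccessMethod.EYE_TRACKING

    def available_methods(self):
        """Access methods displayed on the input screen for the user to choose from."""
        return list(AccessMethod)

    def switch_to(self, chosen: AccessMethod):
        # The choice itself is made with the eye gaze controller; afterwards,
        # subsequent input is interpreted according to the newly selected method.
        self.active_method = chosen

navigator = SelectionMethodNavigator()
navigator.switch_to(AccessMethod.SWITCH_SCANNING)
```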
[0046] Another exemplary embodiment concerns a method for determining user selection of an object on a display screen using eye tracking. In accordance with such an exemplary method, a dwell time setting that defines the duration of time for which a user's eyes must gaze on an object on a display screen to trigger selection of the object by an eye gaze selection system is electronically established. An eye gaze controller electronically tracks the amount of time a user's gaze remains upon a given object on the display screen. The user's accumulated dwell time is retained for a predetermined amount of time even after a user's gaze leaves the given object. Selection of the given object is electronically implemented if the accumulated dwell time exceeds the electronically established dwell time setting. Alternatively, the accumulated dwell time is restarted if a user's gaze leaves the given object for longer than the predetermined amount of time during which the accumulated dwell time is retained. [0047] Another exemplary embodiment of the present technology concerns a method for enhancing the rate of message composition within a message window of a speech generation device. A first step involves electronically displaying an interface on an input screen of the speech generation device, the interface comprising a message window in which a message may be composed by a user and ultimately spoken, and input buttons by which a user selects one or more of words, characters and symbols. The message composed by a user in the message window is electronically tracked. Based on the tracked message being composed within the message window, selected ones of the input buttons are electronically changed to include predictor buttons. In some more particular exemplary embodiments, the message composed by a user in the message window comprises a phrase including one or more slot placeholders within the phrase, and the predictor buttons comprise one or more corresponding filler words for selection by a user to populate the one or more slot placeholders. In other more particular exemplary embodiments, the message window comprises a composing window and a separate speak message window, the composing window being configured to display the message being composed by a user, and the speak message window being a separate display area within the message window such that the message within the composing window can be selected to be spoken only by directing a user's eye gaze to the speak message window and not to the composing window.
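Purely as an illustrative sketch and not part of the original disclosure, the following Python fragment shows one way the accumulated-dwell logic of paragraph [0046] could be implemented; the timing values, names and sampling rate are assumptions.

```python
# Minimal sketch (assumed names and units): accumulated dwell time that is
# retained for a short period after the gaze leaves an object. Times in seconds.
class DwellAccumulator:
    def __init__(self, dwell_setting=1.0, retention_period=0.25):
        self.dwell_setting = dwell_setting        # dwell needed to trigger selection
        self.retention_period = retention_period  # how long accumulated dwell is kept
        self.accumulated = 0.0
        self.time_away = 0.0

    def update(self, gaze_on_object: bool, dt: float) -> bool:
        """Advance by dt seconds; return True when the object should be selected."""
        if gaze_on_object:
            self.time_away = 0.0
            self.accumulated += dt
            if self.accumulated >= self.dwell_setting:
                self.accumulated = 0.0
                return True
        else:
            self.time_away += dt
            if self.time_away > self.retention_period:
                # Gaze stayed away too long: restart the accumulated dwell.
                self.accumulated = 0.0
        return False

acc = DwellAccumulator()
# Gaze rests on the object, briefly drifts off, then returns and completes the dwell.
frames = [True] * 20 + [False] * 4 + [True] * 40   # 50 Hz samples, dt = 0.02 s
selected = any(acc.update(on_object, dt=0.02) for on_object in frames)
```

Because the brief excursion away from the object is shorter than the retention period, the previously accumulated dwell is kept and the selection still triggers.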
[0048] Still further exemplary embodiments of the disclosed technology concern a method for implementing display of a dashboard hotspot on a display screen using eye tracking. A first step involves electronically establishing a predetermined area defined relative to an input screen that corresponds to a gaze location for implementing a dashboard hotspot. A user's gaze is electronically tracked with an eye gaze controller to determine when a user's gaze is within the predetermined area. Upon determination that a user's gaze is within the predetermined area, a popup window is displayed to a user, the popup window containing a plurality of predetermined critical interface features.
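For illustration only, the following Python sketch shows one way the hotspot test and popup trigger could be arranged; the hotspot region, the list of interface features and the callback name are assumptions and not taken from the disclosure.

```python
# Minimal sketch (assumed names): showing a dashboard popup when the user's
# gaze enters a predetermined hotspot region of the input screen.
HOTSPOT = (0, 0, 120, 60)   # x, y, width, height of the hotspot, in screen pixels

CRITICAL_FEATURES = ["Volume", "Battery status", "Calibrate", "Speak message"]

def gaze_in_hotspot(gaze_x, gaze_y, region=HOTSPOT):
    x, y, w, h = region
    return x <= gaze_x < x + w and y <= gaze_y < y + h

def maybe_show_dashboard(gaze_x, gaze_y, show_popup):
    """Call show_popup with the critical interface features when gaze hits the hotspot."""
    if gaze_in_hotspot(gaze_x, gaze_y):
        show_popup(CRITICAL_FEATURES)

maybe_show_dashboard(30, 20, show_popup=lambda items: print("Dashboard:", items))
```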
[0049] Additional exemplary embodiments of the subject technology concern audio eye tracking. For example, a method for assisting a user with control of an electronic device using eye tracking includes a step of electronically tracking a user's gaze with an eye gaze controller to determine when a user's eyes come close to focusing on a given object provided on a display screen associated with the electronic device. An audio signal is then generated for the user once the user's gaze is determined by the eye gaze controller to be within a predetermined distance from the given object. [0050] A related electronic device includes an input screen, an eye tracker, speakers, a processor and related computer-readable medium for storing instructions executable by the processor. The input screen displays interface pages to a user. The eye tracker includes at least one light source for illuminating the eyes of a user and at least one photosensor that detects light reflected from the user's eyes to determine where the user is looking relative to the input screen. The speakers provide audio output of signals. The instructions stored on the computer-readable medium configure the electronic device to electronically track a user's gaze with said eye tracker to determine when a user's eyes come close to focusing on a given object provided within an interface page on the input screen, and to generate an audio signal via the speakers once the user's gaze is determined by the eye tracker to be within a predetermined distance from the given object.
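As an illustrative sketch only, the proximity test underlying this audio feedback might look like the Python fragment below; the distance threshold, coordinates and the play_cue callback are assumptions introduced for illustration.

```python
# Minimal sketch (assumed names): generating an audio cue when the user's gaze
# comes within a predetermined distance of an on-screen object.
import math

def distance_to_object(gaze_x, gaze_y, obj_center):
    cx, cy = obj_center
    return math.hypot(gaze_x - cx, gaze_y - cy)

def audio_assist(gaze_x, gaze_y, obj_center, threshold_px, play_cue):
    """Play an audio cue through the device speakers when gaze is close to the object."""
    if distance_to_object(gaze_x, gaze_y, obj_center) <= threshold_px:
        play_cue()
        return True
    return False

# Example: the gaze point (410, 295) is within 40 pixels of a button centered at (400, 300).
audio_assist(410, 295, obj_center=(400, 300), threshold_px=40,
             play_cue=lambda: print("beep"))
```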
[0051] The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate at least one presently preferred embodiment of the invention as well as some alternative embodiments. These drawings, together with the description, serve to explain the principles of the invention but by no means are intended to be exhaustive of all of the possible manifestations of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS [0052] Fig. 1 is a head-on plan view of an embodiment of an eye gaze controller in accordance with the present invention;
[0053] Fig. 2 is an elevated perspective view of an embodiment of an eye gaze controller in accordance with the present invention seen from the rear right hand side; [0054] Fig. 3 is an elevated perspective view of various disassembled components of an embodiment of an eye gaze controller in accordance with the present invention from the front left side;
[0055] Fig. 4 is an elevated perspective view of various disassembled components of an embodiment of an eye gaze controller in accordance with the present invention taken from the rear right side; [0056] Fig. 5 is a schematic diagram of components of an embodiment of the portable eye gaze controller in accordance with the present invention; [0057] Fig. 6 is an elevated perspective view of an embodiment of an eye gaze controller in accordance with the present invention shown attached to an embodiment of a speech generation device seen from the front right hand side;
[0058] Fig. 7 is a right side plan view of a schematic representation of an embodiment of a portable eye gaze controller in accordance with the present invention shown connected to an embodiment of a speech generation device; [0059] Fig. 8 is a side-on view of an embodiment of the right side of an eye gaze controller in accordance with the present invention shown attached to a part of a speech generation device;
[0060] Fig. 9 is a front plan view of an embodiment of a portable eye gaze controller in accordance with the present invention shown attached to an embodiment of a speech generation device; [0061] Fig. 10 is a left side plan view of the embodiment shown in Fig. 1;
[0062] Fig. 11 is a rear plan view of the embodiment shown in Figs. 1 and 2; [0063] Fig. 12 is an exploded cross-sectional view taken in the direction of the arrows labeled 12 - 12 from above the view shown in Fig. 2; [0064] Fig. 13 is a schematic diagram of electronic components of an embodiment of the portable eye gaze controller in accordance with the present invention;
[0065] Fig. 14A schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention; [0066] Fig. 14B schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention; [0067] Fig. 15A schematically illustrates components of an embodiment of the speech generator and telephone in accordance with the present invention; [0068] Fig. 15B schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention; [0069] Fig. 16A schematically illustrates components of an embodiment of the portable eye gaze controller and associated speech generation device in accordance with the present invention;
[0070] Fig. 16B schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention; [0071] Fig. 17 schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention; [0072] Fig. 18 schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention; [0073] Fig. 19 schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention; [0074] Fig. 20 schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention; [0075] Fig. 21 schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention;
[0076] Fig. 22 schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention; [0077] Fig. 23A schematically illustrates a flow chart for an exemplary program that can be provided for the eye gaze controller of the present invention; [0078] Fig. 23B schematically illustrates components of an embodiment of the speech generator with an activated dashboard hotspot in accordance with the present invention;
[0079] Fig. 24 provides an embodiment of a graphical user interface menu provided via software features for providing an exemplary remote control framework for customization;
[0080] Fig. 25 provides an embodiment of a graphical user interface menu provided via software features for providing a remote control framework having buttons with programmed environmental control behaviors; [0081] Fig. 26 provides an embodiment of a graphical user interface menu provided via software features for providing a My Remote Controls menu;
[0082] Fig. 27 provides an embodiment of a graphical user interface menu provided via software features for providing a Test Standard IR Codes menu; [0083] Fig. 28 provides an embodiment of a graphical user interface menu provided via software features for providing a Select Remote Control menu; [0084] Fig. 29 provides an embodiment of a graphical user interface menu provided via software features for providing an IR Browser menu; [0085] Figs. 30A-30C, respectively, provide embodiments of a graphical user interface via software features for performing IR Learning functionality; [0086] Fig. 31 provides an embodiment of a graphical user interface menu provided via software features for providing an eBook download menu;
[0087] Fig. 32 provides an embodiment of a graphical user interface menu provided via software features for providing eBook content details; [0088] Fig. 33 provides an embodiment of a graphical user interface menu provided via software features for providing a periodical download menu;
[0089] Fig. 34 displays additional aspects of an embodiment of a graphical user interface menu provided via software features for providing a periodical download menu; [0090] Fig. 35 provides an embodiment of a graphical user interface menu provided via software features for providing a favorite searches menu;
[0091] Fig. 36 provides an embodiment of a graphical user interface menu provided via software features for providing a periodical ID menu;
[0092] Fig. 37 provides another embodiment of a graphical user interface menu provided via software features for providing a favorite searches menu;
[0093] Fig. 38 provides an embodiment of a graphical user interface menu provided via software features for providing an eBook Reader menu;
[0094] Fig. 39 provides an embodiment of a graphical user interface menu provided via software features for selecting an eBook file menu; [0095] Fig. 40 provides an embodiment of a graphical user interface menu provided via software features for providing an Available Bookmarks menu;
[0096] Fig. 41 provides an embodiment of a graphical user interface menu provided via software features for providing a Modify eBook Viewer menu;
[0097] Fig. 42 provides an embodiment of a graphical user interface menu provided via software features for providing a Modify eBook Table of Contents menu;
[0098] Fig. 43 provides an embodiment of a graphical user interface menu provided via software features for providing a Scroll Behavior Settings menu;
[0099] Fig. 44 provides an embodiment of a graphical user interface menu provided via software features for providing an eBook Reader tools toolbar; [00100] Fig. 45 provides a partial view of an embodiment of a graphical user interface menu provided via software features for providing a system keyboard, with message window and predictor button features;
[00101] Fig. 46 provides an embodiment of a graphical user interface menu provided via software features for providing a Prediction Settings menu; [00102] Fig. 47 provides an embodiment of a graphical user interface menu provided via software features for providing a Dictionary Browser menu; [00103] Fig. 48 provides an embodiment of a graphical user interface menu provided via software features for providing an Edit Word menu; [00104] Fig. 49 provides an embodiment of a graphical user interface menu provided via software features for providing a Word Forms group box; [00105] Fig. 50 provides an embodiment of a graphical user interface menu provided via software features for providing a Select Concepts menu; [00106] Fig. 51 provides an embodiment of a graphical user interface menu provided via software features for providing an Abbreviation Browser menu;
[00107] Fig. 52 provides a partial view of an embodiment of a graphical user interface menu provided via software features for entering an abbreviation; [00108] Fig. 53 provides a partial view of an embodiment of a graphical user interface menu provided via software features for displaying an expanded abbreviation; [00109] Fig. 54 provides an embodiment of a graphical user interface menu provided via software features for providing a Concept Browser menu; [00110] Fig. 55 provides an embodiment of a graphical user interface menu provided via software features for providing a Concept Slot Fillers menu; [00111] Fig. 56 provides an embodiment of a graphical user interface menu provided via software features for providing a Title bar toolbar;
[00112] Fig. 57 provides an embodiment of a graphical user interface menu provided via software features for providing a My Phrases menu; [00113] Fig. 58 provides an embodiment of a graphical user interface menu provided via software features for providing a New Phrase menu; [00114] Fig. 59 provides an embodiment of a graphical user interface menu provided via software features for providing an Enter Frequency menu; [00115] Fig. 60 provides an embodiment of a graphical user interface menu provided via software features for providing a Select a Phrase menu; [00116] Fig. 61 provides a first view of an embodiment of a graphical user interface menu provided via software features for implementing slots and fillers;
[00117] Fig. 62 provides a second view of an embodiment of a graphical user interface menu provided via software features for implementing slots and fillers; [00118] Fig. 63 provides an embodiment of a graphical user interface menu provided via software features for providing a Select Concept for Slot menu; [00119] Fig. 64 provides an embodiment of a graphical user interface menu provided via software features for providing a Select a Symbol menu; [00120] Fig. 65 provides an embodiment of a graphical user interface menu provided via software features for providing a Select Slot Filler menu; [00121] Fig. 66 provides an embodiment of a graphical user interface menu provided via software features for providing a Behavior Editor menu; [00122] Fig. 67 provides an embodiment of a graphical user interface menu provided via software features for providing another Select a Symbol menu; [00123] Fig. 68 provides exemplary orientations for mounting of an eye gaze controller in accordance with aspects of the present invention;
[00124] Fig. 69 provides an embodiment of a graphical user interface menu provided via software features for providing an Eye Tracking Settings Menu; [00125] Fig. 70 provides an embodiment of a graphical user interface menu provided via software features for providing a Blink Settings Menu; [00126] Fig. 71 provides an embodiment of a graphical user interface menu provided via software features for providing a Dwell Settings menu; [00127] Fig. 72 provides an embodiment of a graphical user interface menu provided via software features for providing a Switch Settings menu; [00128] Fig. 73 provides an embodiment of a graphical user interface menu provided via software features for providing a Target Settings menu;
[00129] Fig. 74 provides an embodiment of a graphical user interface menu provided via software features for providing an Eye Track Status menu; [00130] Fig. 75 provides an embodiment of a graphical user interface menu provided via software features for providing a Highlight Rules menu; [00131] Fig. 76 provides an embodiment of a graphical user interface menu provided via software features for showing a fill type example, where fill is indicated from the bottom up;
[00132] Fig. 77 provides an embodiment of a graphical user interface menu provided via software features for showing a fill type example, where fill is indicated in a contract format;
[00133] Fig. 78 provides an embodiment of a graphical user interface menu provided via software features for providing an Eye Tracking - Advanced Settings Menu; [00134] Fig. 79 provides an embodiment of a graphical user interface menu provided via software features for providing an Additional Eye Tracking Desktop Menu; [00135] Fig. 80 provides an embodiment of a graphical user interface menu provided via software features for selecting a Dashboard Hotspot; [00136] Fig. 81 provides an embodiment of a graphical user interface menu provided via software features for providing a Dashboard Popup menu; and [00137] Fig. 82 provides an embodiment of a graphical user interface menu provided via software features for providing a Dashboard Hotspot Settings menu.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[00138] Reference now will be made in detail to the presently preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. Each example is provided by way of explanation of the invention, which is not restricted to the specifics of the examples. In fact, it will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the scope or spirit of the invention. For instance, features illustrated or described as part of one embodiment can be used on another embodiment to yield a still further embodiment. Thus, it is intended that the present invention cover such modifications and variations as come within the scope of the appended claims and their equivalents. The same numerals are assigned to the same components throughout the drawings and description.
[00139] A presently preferred embodiment of the portable eye gaze controller in accordance with the present invention is shown in Figs. 1 - 3 and is represented generally by the numeral 20. As shown in Figs. 1 - 3, the eye gaze controller 20 includes a housing, which desirably is defined by a front shell 21 and an opposing rear shell 22. The front shell 21 and the rear shell 22 desirably are detachably connected to one another as by selectively removable mechanical fasteners such as screws 23. As shown in Fig. 2 for example, the rear shell 22 of the housing carries a universal serial bus (USB) connector in the form of a USB socket 24, which also is shown in Fig. 4 for example. This USB connector 24 enables the portable eye gaze controller 20 to be connected to a microprocessor. The microprocessor can be in any one of a number of different types of devices, including a personal computer for example. For purposes of illustration herein, the microprocessor desirably forms part of a speech generation device that is to be controlled using the portable eye gaze controller 20. Alternatively, a separate microprocessor dedicated to operation of the portable eye gaze controller 20 can be provided in the housing of the portable eye gaze controller 20. [00140] As shown in Figs. 3 and 4, the eye gaze controller 20 includes a main board 28 on which the integrated circuits are mounted along with the USB connector 24, which is electrically connected to the integrated circuits. As noted above, the USB connector 24 enables the portable eye gaze controller 20 to be connected with any computer device, and in particular with any computer device that forms part of a speech generation device, which is indicated generally in Fig. 6 for example by the designating numeral 30. [00141] As shown in Figs. 6, 9 and 10 for example, the speech generation device
30 that can be controlled by the portable eye gaze controller 20 is of a type that provides an input screen 33 that displays visual objects that the user can consider whether to select. The selection software that implements the user's decision to select an object displayed on the input screen 33 must be provided with the capability of using inputs from an eye gaze controller to effect the selection of the objects displayed on the input screen 33 of the speech generation device. The selection software runs on a microprocessor of the speech generation device 30 or runs on a dedicated microprocessor of the eye gaze controller 20. Desirably, the selection software includes an algorithm that enables the eye gaze controller 20 to deal effectively with images of the user's eye that are slightly out of focus. Through the use of the selection software, the eye gaze controller 20 permits the user to employ one or more selection methods to select an object on the display screen 33 of the speech generation device 30 by taking some action with the user's eyes. [00142] Optional selection methods that can be activated using the eye gaze controller 20 to interact with the display screen 33 of the speech generation device 30 include blink, dwell, blink/dwell, blink/switch and external switch. Using the blink selection method, a selection will be performed when the user gazes at an object on the input screen 33 of the speech generation device 30 and then blinks for a specific length of time. The system also can be set to interpret as a "blink" a set duration of time during which the camera 50 cannot see the user's eye. The dwell method of selection is implemented when the user's gaze is stopped on an object on the input screen 33 of the speech generation device 30 for a specified length of time. The blink/dwell selection combines the blink and dwell selection so that the object on the input screen 33 of the speech generation device 30 can be selected either when the user's gaze is focused on the object for a specified length of time or if, before that length of time elapses, the user blinks an eye. In the external switch selection method, an object is selected when the user gazes on the object for a particular length of time and then closes an external switch. The blink/switch selection combines the blink and external switch selection so that the object on the input screen 33 of the speech generation device 30 can be selected when the user blinks while gazing at the object and then closes an external switch. In each of these selection methods, the user can make direct selections instead of waiting for a scan that highlights the individual object on the input screen 33 of the speech generation device 30. Additionally, the system that uses the eye gaze controller 20 to interact with the input screen 33 of the speech generation device 30 can be set (at the user's discretion) to track both eyes or can be set to track only one eye.
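Purely by way of illustration and not as a description of the actual selection software, the Python sketch below shows one way these optional selection methods could be dispatched; the predicate names and the enumeration are assumptions.

```python
# Minimal sketch (assumed names): dispatching the optional selection methods
# described above. Each flag reflects the state reported by the eye tracker
# (and an optional external switch) for the object currently under the gaze.
from enum import Enum, auto

class SelectionMethod(Enum):
    BLINK = auto()
    DWELL = auto()
    BLINK_DWELL = auto()
    EXTERNAL_SWITCH = auto()
    BLINK_SWITCH = auto()

def object_selected(method, gazing_at_object, blink_held_long_enough,
                    dwell_elapsed, switch_closed):
    """Return True when the current selection method says the gazed object is selected."""
    if method is SelectionMethod.BLINK:
        return gazing_at_object and blink_held_long_enough
    if method is SelectionMethod.DWELL:
        return gazing_at_object and dwell_elapsed
    if method is SelectionMethod.BLINK_DWELL:
        return gazing_at_object and (dwell_elapsed or blink_held_long_enough)
    if method is SelectionMethod.EXTERNAL_SWITCH:
        return gazing_at_object and dwell_elapsed and switch_closed
    if method is SelectionMethod.BLINK_SWITCH:
        return gazing_at_object and blink_held_long_enough and switch_closed
    return False

# Example: dwell selection triggers once the dwell time has elapsed on the object.
assert object_selected(SelectionMethod.DWELL, True, False, True, False)
```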
[00143] A presently preferred speech generation device 30, which includes a microprocessor that is configured to run on an operating system and provided with suitable selection software, is available from the assignee of this application, DynaVox of Pittsburgh, Pennsylvania and is sold under the tradename Vmax. Indeed, an eye gaze controller accessory for the Vmax speech generation device 30 is sold under the tradename EyeMax. As shown in Fig. 6 for example, USB connectors in the form of USB plugs 26 on each opposite end of a USB cable 25 can be used to connect the speech generation device 30 and the portable eye gaze controller 20. As shown in Fig.
6 for example, a type B USB connector 26 can be plugged into a corresponding type A USB connector socket 24 (Fig. 8 for example), which can be carried on the right side of the housing for the eye gaze controller 20. As shown in Fig. 7 for example, a type B USB connector plug 26a can be plugged into a corresponding type A USB connector socket 24a, which can be carried on the right side of the housing for the speech generation device 30.
[00144] In accordance with the present invention, the portable eye gaze controller further comprises an eye tracker device. Eye tracker devices are known and are commercially available in several different operating configurations. Suitable eye tracker devices are available from Eye Tech Digital Systems, Inc. of Mesa, Arizona and include both hardware and selection software, which desirably includes an algorithm that enables the eye tracker to deal effectively with images of the user's eye that are slightly out of focus. [00145] A basic eye tracker device employs a light source and a photosensor that detects light reflected from the viewer's eyes. In one particular example, a video-based gaze tracking system contains a processing unit which executes image processing routines such as detection and tracking algorithms employed to accurately estimate the centers of the subject's eyes, pupils and corneal reflexes (known as glint) in two-dimensional images generated by a mono-camera near infrared system. The gaze measurements are computed from the pupil and glint (reference point). A mapping function - usually a second order polynomial function - is employed to map the gaze measurements from the two-dimensional image space to the two-dimensional coordinate space of the input display 33 of the speech generation device 30. The coefficients of this mapping function are estimated during a standard interactive calibration process in which the user is asked to look consecutively at a number of points displayed (randomly or not) on the input display 33. Known calibration techniques for passive eye monitoring may use a number of calibration points ranging, for example, from one to sixteen points. Once this calibration session for a particular user is completed, any new gaze measurement in the two-dimensional image will be mapped to its point of gaze on the input display 33 using an equation of this nature: (Xs, Ys) = F(Xi, Yi), with F being the mapping function, (Xs, Ys) the screen coordinates (or point of gaze, POG) on the input display 33, and (Xi, Yi) the gaze measurement drawn from the image of the camera 50. In order to evaluate the success of the calibration procedure, a test desirably is conducted as follows. The user is asked again to look at some points on the input display 33, the gaze points are estimated using the mapping function, and an average error (in pixels) is computed between the actual points and the estimated ones. If the error is above a threshold, then the user needs to re-calibrate. [00146] It should be appreciated that other types of eye tracker devices are known, and any of them can be employed in accordance with the present invention. Examples of eye tracker devices are disclosed in U.S. Patent Nos.: 3,712,716 to Cornsweet et al.; 4,950,069 to Hutchinson; 5,589,619 to Smyth; 5,818,954 to Tomono et al.; 5,861,940 to Robinson et al.; 6,079,828 to Bullwinkel; and 6,152,563 to Hutchinson et al.; each of which is hereby incorporated herein by this reference for all purposes. Examples of suitable eye tracker devices also are disclosed in U.S. Patent Application Publication Nos.: 2006/0238707 to Elvesjo et al.; 2007/0164990 to Bjorklund et al.; and 2008/0284980 to Skogo et al.; each of which is hereby incorporated herein by this reference for all purposes. [00147] Each of Figs. 3, 4 and 5 schematically shows the arrangement of several of the main components of an embodiment of a portable eye gaze controller 20 in accordance with the present invention.
As depicted therein, the eye tracker of the portable eye gaze controller 20 desirably can include a USB video camera 50, a focusing lens 40, a left infrared LED array 41 and a right infrared LED array 42. A suitable video camera 50 is available from Sony in the form of the Sony® 1.3MP 1/3" ICX445 EXview HAD CCD® video camera, which is a 1.3 megapixel video camera having a resolution of 1296 x 964 at 18 frames per second (FPS) and a USB 2.0 5-pin Mini-B digital interface. Each pixel measures 3.75 microns by 3.75 microns. Desirably, the USB video camera 50 should have a signal-to-noise ratio of at least about 5 in order to be able to distinguish image features with 100% certainty (according to the Rose criterion).
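Purely to illustrate the calibration procedure and mapping function F described above (and not as part of the original disclosure), the Python sketch below fits a second-order polynomial mapping from gaze measurements (Xi, Yi) to screen coordinates (Xs, Ys) by least squares and computes the average pixel error used to decide whether re-calibration is needed; the nine-point grid, the stand-in measurement data and the 40-pixel threshold are assumptions.

```python
# Minimal sketch (assumptions: nine calibration points, 40-pixel error threshold):
# fitting the second-order polynomial mapping F from gaze measurements (Xi, Yi)
# to screen coordinates (Xs, Ys), then checking the average calibration error.
import numpy as np

def design_matrix(xi, yi):
    # Second-order polynomial terms: 1, x, y, x*y, x^2, y^2
    return np.column_stack([np.ones_like(xi), xi, yi, xi * yi, xi ** 2, yi ** 2])

def fit_mapping(meas, screen):
    """Least-squares fit of coefficients mapping image-space gaze to screen points."""
    A = design_matrix(meas[:, 0], meas[:, 1])
    coeff_x, _, _, _ = np.linalg.lstsq(A, screen[:, 0], rcond=None)
    coeff_y, _, _, _ = np.linalg.lstsq(A, screen[:, 1], rcond=None)
    return coeff_x, coeff_y

def map_gaze(coeff_x, coeff_y, xi, yi):
    a = design_matrix(np.atleast_1d(xi), np.atleast_1d(yi))
    return a @ coeff_x, a @ coeff_y

def average_error_px(coeff_x, coeff_y, meas, screen):
    xs, ys = map_gaze(coeff_x, coeff_y, meas[:, 0], meas[:, 1])
    return float(np.mean(np.hypot(xs - screen[:, 0], ys - screen[:, 1])))

# Calibration: the user looks at known screen points while gaze measurements are taken.
screen_pts = np.array([[x, y] for y in (100, 384, 668) for x in (128, 512, 896)], float)
measurements = screen_pts / 20.0 + np.random.normal(0, 0.05, screen_pts.shape)  # stand-in data

cx, cy = fit_mapping(measurements, screen_pts)
needs_recalibration = average_error_px(cx, cy, measurements, screen_pts) > 40.0
```

If the average error exceeds the chosen threshold, the interactive calibration session would simply be repeated, as described above.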
[00148] As schematically shown in Fig. 3, the focusing lens 40 is mounted in an adjustable lens housing 40a and disposed in front of the video camera 50. The adjustable lens housing 40a desirably can be mechanically locked into position so that the focus of the lens 40 does not change with vibration or drops. A high quality Tamron brand lens having a 16 mm focal length with an iris range of F/1.4 to 16 provides a suitable lens 40 and housing 40a. The focusing lens 40 and video camera 50 are aligned with a central opening 21a that is defined in the front shell 21 of the housing of the eye gaze controller 20. As shown schematically in Fig. 3, an O-ring 40b desirably is disposed against the front surface of the periphery of a lens cover 40c formed of transparent glass so that the lens cover 40c can be sealed against the back of the front shell 21 of the housing of the eye gaze controller 20. In this way, the lens 40 is protected against tampering, soiling or other undesirable environmental conditions. [00149] Referring to Figs. 3, 4 and 5, the eye tracker of the portable eye gaze controller 20 desirably includes a left infrared LED array 41 and a right infrared LED array 42. Each of the light emitting diodes (LED) 41a, 42a in each respective infrared LED array 41, 42 desirably emits at a wavelength of about 880 nanometers, which is the shortest wavelength that was deemed suitable for use without distracting the user (the shorter the wavelength, the more sensitive the sensor, i.e., video camera 50, of the eye tracker). However, LEDs 41a, 42a operating at wavelengths other than about 880 nanometers easily can be substituted and may be desirable for certain users and/or certain environments. As schematically shown in Fig. 3, for example, each of the light emitting diodes (LED) 41a, 42a desirably is a 5 mm diode. Forty-two such diodes 41a, 42a are disposed in each array 41, 42, and each array 41, 42 desirably contains seven staggered vertical columns of LEDs 41a, 42a with six LEDs 41a, 42a in each column. As shown in Fig. 3 for example, a respective transparent protective cover 41b, 42b for each of the infrared LED arrays 41, 42 is disposed against the back of the front shell 21 of the housing of the eye gaze controller 20 and in front of each respective infrared LED array 41, 42.
[00150] As shown in Fig. 4, for example, the USB video camera 50 is mounted to the back of the rear shell 22 of the housing of the eye gaze controller 20. As shown in Fig. 3, for example, each of the infrared LED arrays 41 , 42 desirably is mounted to the back of the front shell 21 of the housing of the eye gaze controller 20. As shown in Fig.
3, the camera 50, the lens 40a and the central opening 21a of the front shell 21 are disposed centrally between the left infrared LED array 41 and the right infrared LED array 42. Moreover, the camera 50, the lens 40a and the central opening 21 a of the front shell 21 desirably are disposed aligned in a straight line with the infrared LED arrays 41 , 42 such that a straight line horizontally bisects the central opening 21 a as well as each of the infrared LED arrays 41, 42.
[00151] When the two LED arrays 41 , 42 are disposed in a straight horizontal line with the camera sensor as shown in Fig. 3, the farther apart the LED arrays 41 , 42 are, the more accurately a user's gaze can be tracked. Thus, the LED arrays 41 , 42 are disposed with the maximum separation from one another within the confines of the boundaries imposed by the extreme edges of the front shell 21. Moreover, as schematically shown in Fig. 3, each of the LED arrays 41, 42 is disposed tilted toward the central opening 21a at an angle α of about eleven degrees from the horizontal plane to a degree that maximizes the depth range over which movements of the user's eyes can be detected by the eye tracker with the separation S (Fig. 1) between the vertical centerlines of the two LED arrays 41 , 42 being about 9.25 inches. As schematically shown in Fig. 12, each of the LED arrays 41, 42 is therefore disposed tilted toward the central opening 21a at an angle β of about seventy-nine degrees from the central axis 51 of the USB video camera 50. With a tilt angle α of about 8.1 degrees, the depth range of the eye tracker extends over a range of about 16.5 inches to about 28 inches when as schematically shown in Fig. 1 , the separation S between the vertical centerlines of the two LED arrays 41 , 42 in the same plane as the plane of the lens 40a is about 9.4 inches. [00152] Relative movement between the camera 50 of the portable eye gaze controller 20 and the speech generation device 30 will compromise the effectiveness of the eye gaze controller 20. As schematically shown in each of Figs. 7, 8, 10 and 11 for example, a rigid mounting bracket 45 desirably is provided to attach the portable eye gaze controller 20 to the speech generation device 30. As shown in Figs. 10 and 11 for example, the proximal portion 45a of the mounting bracket 45 must be configured and disposed to be rigidly attached, as by threaded screws 45c, to the bottom panel 32 of the housing for the speech generation device 30. Desirably, the proximal portion 45a of the mounting bracket 45 is configured and disposed to function as a replacement door that closes the battery compartment of the speech generation device 30. The distal section 45b of the mounting bracket 45 desirably can be rigidly connected, as by threaded screws 45d, to the rear shell 22 of the housing for the portable eye gaze controller 20. Moreover, the distal section 45b of the mounting bracket 45 desirably is disposed at an angle with respect to the proximal section 45a of the mounting bracket such that the plane of the lens 40 in front of the video camera 50 of the eye gaze controller 20 is disposed at an angle of about 160 degrees with respect to the plane in which the input display 33 of the speech generation device 30 is disposed. [00153] As schematically shown in Fig. 5, the outputs of the USB video camera 50 and the two infrared LED arrays 41 , 42 of the eye gaze controller 20 are the outputs of the eye tracker that are provided as inputs to the speech generation device 30 via the type A USB 2.0 connector socket 24. These inputs are provided to the microprocessor of the speech generation device 30 and are processed to generate control signals for controlling operation of the speech generation device 30 by the user's eye movements. Alternatively, a separate microprocessor that is dedicated to operation of the portable eye gaze controller 20 can be provided in the housing of the portable eye gaze controller 20. 
The outputs from the USB video camera 50 and the two infrared LED arrays 41 , 42 would then be provided to and processed by the dedicated microprocessor of the eye gaze controller 20 to generate control signals for controlling operation of the speech generation device 30 by the user's eye movements. [00154] As shown in Figs. 1 , 3, 6 and 9, two spaced apart indicator lights 21 b, 21c desirably are disposed beneath the central opening 21a defined in the front shell 21. The eye gaze controller 20 is configured to illuminate each indicator light 21b, 21 c when the eye tracker has acquired the location of the user's eye associated with that indicator light. The eye tracker's acquisition of the location of the user's eye may require using the processing power of either the microprocessor in the speech generation device 30 or of a dedicated microprocessor in the eye gaze controller 20, as the case may be. In each case for example, if the eye tracker of the eye gaze controller 20 has acquired the location of the user's left eye, then the eye gaze controller 20 is configured to illuminate the left indicator light 21 b. Similarly, if the eye tracker has acquired the location of the user's right eye, then the eye gaze controller 20 is configured to illuminate the right indicator light 21c. This feature of providing separate indicator lights 21 b, 21 c mounted on the front of the front shell 21 enables the eye gaze controller 20 to avoid using part of the display 33 of the speech generation device 30 to show the user if one or both of the user's eyes are being tracked. Accordingly, this indicator light feature of the eye gaze controller 20 conserves valuable space on the display screen 33 of the speech generation device 30. Additionally, it has also been observed that these indicators 21b, 21c act as a relaxation technique for otherwise hyper users. [00155] As schematically shown in Figs. 3 and 4 for example, the portable eye gaze controller 20 further comprises a self-contained power supply that powers the eye tracker and that is separate from any power source for the speech generation device that is being controlled by the eye gaze controller 20. In the embodiment shown, the power supply is provided in the form of a pack 27a of six lithium-ion batteries 27. Each battery 27 desirably is a rechargeable lithium ion battery having a target life of at least about six hours and a nominal voltage of about 3.7 volts. The batteries 27 in the pack
27a are configured with two batteries 27 electrically connected in series and three batteries 27 electrically connected in parallel. The six pack 27a of batteries 27 electrically connected in this way provides a nominal voltage of about 7.4 volts. As schematically shown in Fig. 5, the batteries 27 provide electric power in the form of direct current to the USB video camera 50 and to the two infrared LED arrays 41 , 42 through a battery charger 43. A complete constant-current/constant-voltage charger 43 for lithium batteries 27 is available from Linear Technology Corporation of Milpitas, California under the tradename LTC® 4006 for example. [00156] As shown in Fig. 7, an AC/DC transformer 43a can be connected to the portable eye gaze controller 20, which can be connected to the speech generation device 30 in order to charge simultaneously each battery 27 of the portable eye gaze controller 20 and each battery of the speech generation device 30. The AC/DC transformer 43a is connected to the battery charger 43 (Fig. 5). Alternatively, the AC/DC transformer 43a can be connected directly to the speech generation device 30 to charge only the battery in the speech generation device 30 or connected directly to the portable eye gaze controller 20 to charge only the batteries 27 in the portable eye gaze controller 20. Thus, the identical AC/DC transformer 43a can be used for each of the portable eye gaze controller 20 and the associated speech generation device 30, whether to charge batteries and/or to power the respective device.
[00157] As shown in Figs. 2, 4, 5, 6, 7 and 8 for example, a power output port 35a and a charger port 36a are carried by the housing of the eye gaze controller 20 and mounted on the main board 28 (Figs. 3 and 4). As shown in Fig. 7 for example, the power output port 35a of the eye gaze controller 20 is configured to be connected to a charger port 35b of the speech generation device 30 via suitable connectors 35c on the opposite ends of a suitable power cable 35d. As shown in Fig. 7 for example, the charger port 36a of the eye gaze controller 20 is configured to be connected to an AC/DC transformer 43a via a suitable connector 36b on the opposite end of a suitable charger cable 36c. [00158] As shown in Figs. 2, 3, 4, 6 and 8 for example, a power indicator LED 37a and a charging indicator LED 37b are provided and carried by the housing of the eye gaze controller 20. The eye gaze controller 20 is configured to illuminate the power indicator LED 37a when the eye gaze controller 20 is receiving power and is operating. The power indicator LED 37a can be covered with a sleeve that desirably is green in color so that when illuminated, the power indicator LED 37a will be seen as a green indicator light. The eye gaze controller 20 is configured to illuminate the charging indicator LED 37b when the batteries of the eye gaze controller 20 and the speech generation device 30 are being charged. The charging indicator LED 37b can be covered with a sleeve that desirably is amber in color or of a different color than the color of the sleeve that covers the power indicator light 37a so that when illuminated, the charging indicator LED 37b will be seen as a different color than the color of the power indicator light 37a. The eye gaze controller 20 is configured to stop illuminating the charging indicator LED 37b when the batteries have been fully charged. [00159] Universal Environmental Control. In accordance with one aspect of the present invention, the eye gaze controller 20 desirably is configured to include a preprogrammed chip for remote control of electronic devices found in the user's environment. A suitable remote control chip is provided by Universal Electronics, Inc. of Cypress, California (UEI) and is their Remote Control Integrated Circuit part number S3F80JB-NA, which contains all known commands of consumer electronics sold in North America. For example, the entire set of commands recognized by any given electronic appliance sold in North America will be found stored on this UEI chip. A separate display page for that remote control for that appliance will be stored on the microprocessor or other dedicated memory device associated with the speech generation device 30 (or with the eye gaze controller 20, as the case may be). The separate display page for that remote can be selected by the user and displayed on the input screen 33 of the speech generation device 30. The buttons on the remote will be emulated on the display page. The user can then use the eye gaze controller to select buttons on that display page and in this way control the electronic appliance via the eye gaze controller 20 and the speech generation device 30, which is pre-programmed to send the desired infrared (IR) signals to the electronic appliance. Such software tools, including the data defining the graphical screenshots and menu interfaces for displaying to a user, may be stored as instructions in a computer-readable medium within a memory element. 
The microprocessor within the speech generation device 30 or other processor or controller device may then execute such software instructions to provide these and other software tools and features of the present invention. [00160] Fig. 13 schematically illustrates how the remote control chip is integrated into suitable electronic components carried for this purpose on the main board 28 (Figs. 3 and 4) of an embodiment of the eye gaze controller 20. In order to configure the eye gaze controller 20 to control devices outside of North America, the eye gaze controller
20 also desirably includes UEI's Remote Control Integrated Circuit part number S3F80JB-WW, which contains all known commands of consumer electronics sold outside of North America. In the manner shown schematically in Fig. 13, this remote control chip desirably is similarly integrated into suitable electronic components carried for this purpose on the main board 28 (Figs. 3 and 4) of an embodiment of the eye gaze controller 20.
[00161] The eye gaze controller 20 is configured to associate each of these sets of commands on each of these chips with a separate page displayed on the input display 33 of the speech generation device 30 and corresponding to the consumer electronic device that is to be controlled by the user. For example, from the menu containing consumer electronics such as TV, DVD, VCR, radio, etc., the user can select a particular Sony® television, and the buttons for the remote control for that particular Sony® television will appear on the display screen 33 of the speech generation device 30. When the user uses the eye gaze controller 20 to select the desired button on the display screen 33, the remote control chip enables the speech generation device 30 to emulate the remote control signal associated with that selected button as if the user were using the actual remote control for that Sony® television. In this way, the eye gaze controller 20 affords the user greater control over the user's environment, while not having to rely on others for lengthy programming of the eye gaze controller 20, which already has been pre-programmed with appropriate pages for environmental control of consumer electronics. [00162] Desirably, the microprocessor of the speech generation device 30 is programmed to map the information on the remote control chip for the chosen electronic device (such as Sony TV) that is to be remotely controlled, to buttons on pre-made pages that will appear on the input display 33 of the speech generation device 30. Alternatively, if the eye gaze controller 20 has its own dedicated microprocessor, then the information can be mapped on that dedicated microprocessor of the eye gaze controller 20. Figs. 14A and 14B schematically illustrate flow charts for an exemplary program that can be provided for the eye gaze controller 20 (to be run on the microprocessor dedicated to the controller 20 or on the microprocessor of the associated speech generation device 30) so that the user can operate the eye gaze controller 20 to define the desired remote control in the user's environment and use it to activate the desired appliance that responds to that remote control.
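As an illustrative sketch only, the mapping between on-screen buttons and chip-stored commands might resemble the Python fragment below; the command identifiers, page contents and the send_ir_command interface are hypothetical, since the actual UEI chip interface and flow charts of Figs. 14A and 14B are not reproduced here.

```python
# Minimal sketch (assumed names; the chip interface shown here is hypothetical):
# mapping on-screen remote control buttons to command identifiers stored on the
# pre-programmed remote control chip, so that a gaze-selected button emits the
# corresponding IR signal.
SONY_TV_PAGE = {
    "Power": ("SONY_TV", "POWER"),
    "Volume Up": ("SONY_TV", "VOL_UP"),
    "Volume Down": ("SONY_TV", "VOL_DOWN"),
    "Channel Up": ("SONY_TV", "CH_UP"),
}

def on_button_selected(button_label, page_map, send_ir_command):
    """Look up the chip command for the gaze-selected button and emit it via the IR emitter."""
    device, command = page_map[button_label]
    send_ir_command(device, command)

# Example: the user gazes at and selects the "Power" button on the TV page.
on_button_selected("Power", SONY_TV_PAGE,
                   send_ir_command=lambda dev, cmd: print(f"IR -> {dev}:{cmd}"))
```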
[00163] More particular aspects of exemplary speech generation devices having remote control capabilities include the provision of software tools with which a user may program a remote control or other infrared (IR) command, or access a computer using a USB connection or Bluetooth communications link. Such software-enabled functionality may be provided as software stored on the microprocessor or other dedicated memory device associated with the speech generation device 30. Such software tools, including the data defining the graphical screenshots and menu interfaces for displaying to a user, may be stored as instructions in a computer-readable medium within a memory element. The microprocessor within the speech generation device 30 or other processor or controller device may then execute such software instructions to provide these and other software tools and features of the present invention.
[00164] A particular example of how to program a speech generation device to send remote control signals is now presented with reference to Figs. 24-28. As schematically shown in Fig. 15A, at least one IR emitter 30a is provided within the speech generation device 30 to send infrared (IR) signals to any appliance that can be used with an IR remote control. In one embodiment, software features are provided that enable a user both to program a speech generation device to function with remote control capability for specified appliances and to use environmental control behaviors to add remote control commands to buttons on a user's displayed communication page.
[00165] With regard to this latter function involving environmental control behaviors and pages, software features can implement a variety of customizable behaviors for environmental control. When these behaviors are programmed into a button that is displayed on the input screen 33 of the speech generation device 30, a user can use the eye gaze controller 20 to select that button on the input screen 33 of the speech generation device 30 to activate the behavior. Each of a plurality of page sets includes pages that are designed for use as environmental control pages. These pages offer a general theme (such as "living room TV" or "stereo") and an efficient layout that is consistent with other display pages on the display screen 33 of the speech generation device 30.
[00166] Some of the control pages (see the example page provided as Fig. 24) simply offer a framework for customization. While the buttons shown in the user interface menu of Fig. 24 feature labels such as "TV On/Off," "Volume Up" and "Volume
Down," the behaviors to actually produce these actions should be separately programmed. To set up these pages, a user may use an existing remote control to teach a command (for example, the signal for turning on the TV in a user's living room) to the speech generation device 30. Then, the user may use an environmental control behavior to add the new remote control command to a button on the user's page.
[00167] The buttons on other environmental control pages (see the sample interface of Fig. 25) are already programmed with environmental control behaviors. In addition, a name for each appropriate remote control command (for example, "Family Room TV Channel Down" or "Family Room TV Power") has already been created in the IR Browser menu. To make the button behaviors functional, a user can simply select the command name in the IR Browser menu and use his remote control to teach the command to the speech generation device 30.
[00168] In accordance with one aspect of the present invention, the microprocessor within the speech generation device 30, in conjunction with the additional circuitry such as illustrated in Fig. 13 in the eye gaze controller 20, may be programmed with software that provides a number of default remote controls that a user can program to use as the remote control for the user's electronic appliance. A user simply selects the default remote in the software that matches a given appliance (e.g., TV, VCR, DVD) and then uses a remote control wizard in the software to program the default remote for the appliance. In one example, a default remote is programmed by the following interactive steps available from within a user interface such as the "My Remote Controls menu" displayed in Fig. 26. [00169] In the viewport of Fig. 26, a user may select the default remote that he wants to program. The "Program the selected remote control" button will be activated, and a remote control wizard will open. A first portion of the viewport shows the steps involved in programming the selected default remote, with each step being highlighted as it is performed. The user may select the manufacturer of the appliance in a second portion of the main viewport. In one embodiment, the software wizard will display a number of possible standard codes that may be valid for the given appliance. The wizard may display the steps for a user to perform to "learn" each IR command individually. This may be accomplished by turning on the appliance that the user wants to control, aiming the IR output port of the IR emitter 30a associated with the speech generation device 30 at the appliance, and selecting the "Find the right code" button while aiming the device 30 at the appliance. A Test Standard IR Codes menu may open on the speech generation device 30, an example of which is shown in Fig. 27. [00170] Referring still to Fig. 27, a user may press the POWER button while aiming the IR output port of the IR emitter 30a of the speech generation device 30 at the appliance. The software interface will show the current code being tested. If the appliance shuts off, then the appliance successfully received the proper IR signal from the speech generation device 30. In such case, a user may then select the "Yes - button works as expected" button. The wizard will inform the user that he has successfully programmed his equipment with the appropriate IR commands. If the appliance does not shut off, it means that the appliance did not receive the proper IR signal, and a user may select the "No - button does not work as expected" button. If none of the standard codes works as expected (i.e. by shutting off the electronic appliance), then a user may select the "No - button does not work as expected" button after the last standard code is tested. [00171] The software wizard will give a user the option to "discover" a non-standard code that may work with the appliance. The user selects the "Discover the right code" button to attempt to discover a non-standard code that will control the user's appliance. While again aiming the IR output port(s) of the speech generation device 30 at the appliance, the user selects the OK button, and a Discover Non-Standard IR Codes menu will open and display the total number of non-standard codes that can be tested. The software automatically begins testing the first ten (or other predetermined number) non-standard codes.
If one of the codes successfully shuts off the appliance, then a user should select the "Yes - one of the commands did what was expected" button. The Test Non-Standard IR Codes menu opens, allowing a user to find the specific code that powered off the electronic appliance. If none of the codes in the first group of ten shuts the appliance off, a user should select the "No - none of the commands did what was expected" button. The wizard will continue testing with the next group of ten non-standard codes. If the correct code is not found, then the remaining non-standard codes will be tested in the same way. If none of the non-standard codes works as expected, the wizard will inform the user that each of the IR commands for the appliance will need to be manually learned.
[00172] If the system must manually learn each IR command from the electronic appliance's remote control, the software displays the steps to perform in the remote control wizard. For example, to manually learn each IR command, the following steps may be implemented: (1) Obtain the remote control that belongs to the given appliance. (2) Turn on the given electronic appliance. (3) Aim the remote control at the IR port on the speech generation device 30. (4) Select the "Start learning each command" button. The IR Learning popup will open. Select the "Start IR Learning" button on the input screen 33 of the speech generation device 30 and then press the appropriate button on the remote control. The user will be automatically prompted for each command that the device must learn from the remote control. (5) Select the "Stop IR Learning" button when finished. If the device did not receive a signal from the remote control, a window will inform the user that no signal was detected. Select the "Try again" button to send the signal again. The IR command may have a maximum time interval of 20 seconds.
If a user does not select the "Stop IR Learning" button before that time runs out, he may receive an error. Select the "Try again" button to send the signal again, or select the "Cancel" button to cancel the IR learning process. If a user is prompted for a button that is not on his remote control, the "Skip this command" button in the Learn IR Command menu may be used. (6) Select the "Stop IR Learning" button when finished. (7) To learn the remaining commands, repeat the above steps. Select the "OK" button once the command learning is complete. (8) When the remote control buttons have been learned, the wizard will inform the user that he has successfully programmed the commands for his appliance. Select the "Done" button. (9) To test the programmed commands, open the "Searchable Help" field and perform a search on the keywords "relearn a command on your remote control." (10) Select the OK button to close the My Remote Controls menu, as well as the "OK" button to close the IR Browser menu.
[00173] Software features provided via a My Remote Controls menu (e.g., Fig. 26) may also be provided to create a custom remote control on the speech generation device 30. From the My Remote Controls menu, a user may select the "Create a new remote control" button. The remote control wizard will open (in a menu called "Create a new remote control"). The left viewport shows the steps for creating a new remote control. Each step is highlighted as it is performed. In the right viewport, a user may select the type of electronic appliance that the custom remote will control. A text box will prompt the user to enter a name for the new remote control. A user may then select the text box to open the system keyboard and enter a name for the custom remote control (e.g., Justin's DVD Player, Kitchen TV). The name entered on the system keyboard is displayed in the text box. Next, the "Pick the manufacturer" step is highlighted in the left viewport. From this point, similar steps as implemented in the above-described "Program a Default Remote" procedure may be used to finish creating a custom remote control.
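The manual learning step described above captures one IR signal at a time within a bounded time window (20 seconds in the example). A minimal sketch of that step is shown below, assuming a hypothetical ir_receiver_read() hook standing in for the infrared receiver; it is not the actual device firmware.

```python
import time

def learn_ir_command(name, ir_receiver_read, timeout_s=20.0):
    """Illustrative sketch of learning a single IR command.

    ir_receiver_read() -- hypothetical non-blocking read of the IR receiver;
    returns captured signal data, or None if nothing has been received yet.
    Raises TimeoutError if the learning window expires (the "Try again" case).
    """
    print(f"Aim the remote at the IR port and press the '{name}' button...")
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        signal = ir_receiver_read()
        if signal is not None:
            return signal              # command successfully learned
        time.sleep(0.05)               # keep polling until the window closes
    raise TimeoutError(f"No IR signal detected for '{name}' within {timeout_s} s")
```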
[00174] Additional software features may be available per some embodiments of the present invention for assigning a remote control to a page. When a user creates a custom remote control for an electronic appliance, he may assign it to a remote control page in a page set in order to use the custom remote control. In one example, a custom remote may be assigned to a remote control page by following these steps: (1) Select the Main Menu button in the title bar. (2) Select Setup in the main drop-down menu. (3) Select Page Navigator in the second drop-down menu. (4) In the left viewport, select the folder that contains the remote control page desired for use. (5) In the right viewport, select the remote control page desired for use to control an appliance. (6) Select the Go to Page button. The selected remote control page will open. (7) Select the Main Menu button in the title bar. (8) Select Page Editing in the main drop-down menu. (9) Select Page Editor in the second drop-down menu. The remote control page in the page set is highlighted for editing. (10) Select the page. The page may be visually indicated as "selected" when the page border displays blue highlight or other indication. (11) Select the Main Menu button in the title bar. (12) Select File in the main drop-down menu. (13) Select Save as... in the second drop- down menu. The system keyboard opens. (14) Enter the name of the new remote control page that is to be created. Select OK to close the system keyboard. (15) View the current behavior associated with the new remote control page by following these steps: (15a) Select a vacant spot on the page. The page is selected when the page border displays a blue highlight. (15b) Select the Modify button in the title bar. The Modify Page menu will open. (15c) Select the Behavior Editor button in the Open Page
Behaviors box. The Behavior Editor menu will open. (15d) View the behaviors displayed in the Steps viewport. If the Set Active Remote behavior is displayed, proceed to step 16. If the Set Active Remote behavior is not displayed, proceed to step 17. (16) If the Set Active Remote behavior is displayed in the Steps viewport, perform the following steps: (16a) Select the Set Active Remote behavior in the Steps viewport.
(16b) Select the Edit button. The Select Remote Control menu will open, an example of which is shown in Fig. 28. (16c) Select the remote control you want as the active remote control for the page. (16d) Select the Assign selected remote control button. The name of the remote control you selected is displayed in parentheses beside the Set Active Remote behavior in the Steps viewport of the Behavior Editor menu. (16e) Proceed to step 17. (17) If the Set Active Remote behavior is not displayed in the Steps viewport, the user must assign the behavior to each button on the remote control page. To do so, the user performs the following steps: (17a) Select the Cancel button to close the Behavior Editor menu. (17b) Select the Cancel button to close the Modify Page menu. (17c) Select the first remote control command button. (17d) Select the Modify button in the title bar. The Modify Button menu will open. (17e) Select the Behavior Editor button in the Behaviors box. The Behavior Editor menu will open. (17f) Select Environmental Control in the Behaviors drop-down menu. (17g) Select the Play Command from Specific Remote behavior. (17h) Select the Add button. The Select Remote Control menu of Fig. 28 will open. (17i) Select the remote control that the user wants as the active remote control for the button. (17j) Select the Assign selected remote control button. The Select Command menu will open. (17k) Select the command that the user wants to assign to the button and then select the OK button. The name of the remote control and the command that has been assigned will be displayed in parentheses beside the Play Command from Specific Remote behavior in the Steps viewport of the Behavior Editor menu. If the remote control is not correct, select the behavior, then select the Edit button to display the Select Remote Control menu. Select the correct remote control, then select the Assign selected remote control button. If there are any other behaviors displayed in the viewport besides the Play Command from Specific
Remote behavior, delete them by selecting them and then selecting the Delete button. (17l) Repeat steps 17a-17k to assign the Play Command from Specific Remote behavior for every button on your remote control page. (17m) Proceed to step 18. (18) Select the OK button to close the Behavior Editor menu. (19) Select the OK button to close the Modify Page menu or Modify Button menu. (20) Select the Main Menu button in the title bar. (21) Select Exit Page Editor in the main drop-down menu. (22) Select Yes to save the changes that have been made to the new remote control page. The remote control page is ready to be used to control the user's electronic appliance. Select the Close button to close the remote control page.
[00175] Referring now to Figs. 29 and 30A-30C, additional aspects of software functionality are discussed for adding a new command to the IR Browser menu, and for using a remote control to teach IR environmental control commands to a speech generation device 30. If a user wants to use the speech generation device 30 to remotely control a device other than a standard electronic appliance (i.e., remote-controlled ceiling fan, X-10 light, toy, etc.), the user first adds a name for the command to the IR Browser menu. To do this, user selections open up an IR Browser menu, an example of which is provided in Fig. 29. Using the interface of Fig. 29, a user may select the New button. The system keyboard will open, and a user can enter a name for the new IR remote control command. After selecting the OK button to close the system keyboard, the new command will be displayed and highlighted in the viewport at the top of the IR Browser menu. Steps for Learning an IR Command can then be followed.
[00176] Once a name for an IR remote control command is stored in the IR Browser menu, a user can use the actual remote control unit (such as a TV remote control) to teach the appropriate IR signal to the speech generation device 30. To do this, a user can open the IR Browser menu and select the name of the command the user wants to edit. The scroll bar may be needed to see all the stored command names. By selecting a Learn button, an IR Learning window (an example of which is shown in Fig. 30A) will open and display the name of the command that is being learned. The user should then aim the remote control at the IR port on the speech generation device 30, select the Start IR Learning button on the device, and press the appropriate button on the remote control. At this point, the Start IR Learning button changes to a Stop IR Learning button (as shown in the exemplary display of Fig. 30B), which button may be selected upon completion of the command learning. Once selected, the IR learning window will let the user know that the command learning is complete as shown in the exemplary display of Fig. 30C. By selecting the OK button in the IR learning window (and also the OK buttons in the IR Browser and Tools menus), the process can be completed. If an environmental control behavior and the name for this new command already have been assigned to a button, then the button can now be successfully used for remote control of an electronic appliance. If the user has not yet added an environmental control behavior to the button he wants to use, then he should continue with the following steps for Adding an Environmental Control Behavior.
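Conceptually, the IR Browser menu behaves like a catalog of named commands to which learned signals are attached and later played back through the IR emitter 30a. The class below is a minimal, hypothetical sketch of that idea; the class name, the method names, and the ir_emitter_send hook are assumptions for illustration and not part of the actual product software.

```python
class IRCommandStore:
    """Minimal sketch of an IR Browser-style store: command names mapped to
    learned IR signals (the names shown in the IR Browser viewport)."""

    def __init__(self):
        self._commands = {}            # e.g. {"Ceiling Fan On": <signal data>}

    def add_name(self, name):
        # "New" button: register a name before any signal has been learned.
        self._commands.setdefault(name, None)

    def learn(self, name, signal):
        # "Learn" button: attach a captured signal to an existing name.
        if name not in self._commands:
            raise KeyError(f"{name!r} has not been added to the browser")
        self._commands[name] = signal

    def play(self, name, ir_emitter_send):
        # Behavior playback: emit the stored signal through the IR emitter.
        signal = self._commands.get(name)
        if signal is None:
            raise ValueError(f"No signal learned yet for {name!r}")
        ir_emitter_send(signal)        # ir_emitter_send is a hypothetical hook
```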
[00177] Once a speech generation device 30 has successfully learned a new IR remote control command, a user can use the environmental control behaviors to add the command to a button. To do this, the user can follow these steps: (1) Select the
Modify button in the title bar. The button will turn red after it is selected. (2) Select the button that is to be changed. The Modify Button menu will open. (3) Select the Behaviors button. The Behavior Editor menu will open. (4) Select the Behaviors dropdown menu. (5) Select the Environmental Control behavior category. The Behaviors viewport will display the available environmental control behaviors. (6) Select the
Perform IR Command behavior. (7) Select the Add button. The Select IR Command menu will open. (8) Select the name of the remote control command that is to be added to the button (the user may need to use the scroll bar to see all the options). (9) Select the OK button to close the Select IR Command menu. The Perform IR Command behavior and the name of the command that has been selected will be added to the
Steps viewport in the Behavior Editor menu. (10) Select the OK button to close the Behavior Editor menu. (11) Select the OK button to close the Modify Button menu. The button that has been selected now has an environmental control behavior and a remote control command.
[00178] Telephone Access. In accordance with the present invention, the speech generation device 30 desirably is provided with pre-programmed content that the user can select using the eye gaze controller 20 in order to be able to make and receive telephone calls with an eye-tracking access method. Additionally, as schematically shown in Fig. 16A, a special arrangement must be made to allow communication between the speech generation device 30 and a telephone 29 that connects to the plain old telephone service 29a. One option for this arrangement is to plug directly into the speech generation device 30 a telephone that operates over an analog telephone line. Another option schematically shown in Fig. 15A for this arrangement is the provision of any of several available infrared-controlled telephones 29, each of which is potentially controllable in the manner described above as an electronic appliance in the user's environment. The speech generation device 30 also is provided with an infrared emitter 30a and an infrared receiver 30b. In this way, as schematically shown in Fig. 15A, the telephone 29 is enabled to receive infrared commands 39a from the speech generation device 30 and to send voice transmissions via infrared signals 39b to the speech generation device 30.
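As noted above, the telephone 29 can be driven by infrared commands from the speech generation device 30, and the menu described in the next paragraph includes digit buttons plus a button that collects the dialed string so the number can be dialed all at once. The following Python sketch illustrates that collect-then-dial behavior under stated assumptions: send_to_phone is a hypothetical hook standing in for the IR command path, and the command strings are invented for illustration.

```python
class PhoneDialer:
    """Sketch of the dial-at-once behavior: digit buttons append to a buffer,
    and a single "dial" selection sends the whole string to the phone."""

    def __init__(self, send_to_phone):
        self._digits = []
        self._send = send_to_phone     # hypothetical hook (e.g., IR commands)

    def press_digit(self, digit):
        # One eye-gaze selection per number button.
        if digit not in "0123456789*#":
            raise ValueError(f"not a dialable character: {digit!r}")
        self._digits.append(digit)

    def clear(self):
        self._digits.clear()

    def dial(self):
        # "Dial" button: the collected string is dialed all at once.
        number = "".join(self._digits)
        self._send(f"DIAL {number}")
        self.clear()
        return number

    def hang_up(self):
        self._send("HANG UP")


# Example usage, with print() standing in for the real command path.
dialer = PhoneDialer(send_to_phone=print)
for d in "911":
    dialer.press_digit(d)
dialer.dial()   # prints "DIAL 911"
```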
[00179] In each case, the speech generation device 30 is programmed to display on the input screen 33 a menu containing the various telephone functions, which the user can perform by eye selection via the eye gaze controller 20. Desirably, the menu displays buttons, the selection of which generates the desired logic or sequences for telephone communication. Some of these buttons simply represent the numbers that are being dialed on the telephone. Another button desirably collects the string of numbers so they can be dialed at once. Other buttons are provided to represent telephone commands like "hang up," "answer," "automatically dial," "speed-dial," "program speed-dial," "receive calls," "dial 911," "talk with the party who is listening over the phone" and "listen to the party who is talking over the phone" as well.
[00180] For example, Fig. 15B schematically illustrates a software protocol desirably used by the speech generation device 30 under the control of the eye gaze controller 20 to place a telephone call, converse with the party called, and hang up the telephone call. Thus, with the eye gaze controller 20 of the present invention, the user can control the speech generation device 30 to speak over the telephone 29, and as schematically shown in Fig. 15A, the user can hear the caller's voice via the speakers 30c that are provided as a component of the speech generation device 30.
[00181] eBook Reader. In accordance with the present invention, the speech generation device 30 desirably can be provided with pre-programmed content and/or selectably downloaded or imported content that the user can select using the eye-tracking access method provided by the eye gaze controller 20 in order to be able to order, download and read so-called e-books. In accordance with this aspect of the present invention and as schematically shown in Fig. 16A, the speech generation device 30 desirably is provided with an internet browser 30d, a high speed modem 30e and a high speed internet connection 30f by which the speech generation device 30 can access websites from which e-books can be selected and downloaded. As additionally schematically shown in Fig. 16A, the speech generation device 30 desirably is provided with an e-book reader 30g, which desirably is a software package that runs on the microprocessor of the speech generation device 30 (or alternatively on the dedicated microprocessor of the eye gaze controller 20).
[00182] As schematically shown in Fig. 16B, the user can employ the eye gaze controller 20 to select from a menu on the speech generation device 30 so that various e-book functions can be performed by eye selection on the e-book screen displayed on the input screen 33 of the speech generation device 30. These functions include reading the e-books aloud to the user via the speakers 30c (Fig. 16A) of the speech generation device 30, changing the voices used to read the e-books, and obtaining the e-books from internet sites such as bookshare.org for example. Moreover, the eye gaze controller 20 enables the user to perform these functions without the need of intervention from a caregiver. Fig. 16B schematically illustrates a software protocol desirably used by the speech generation device 30 under the control of the eye gaze controller 20 to download an e-book and read the e-book aloud to the user via the speakers 30c (Fig. 16A) that are provided as a component of the speech generation device 30.
[00183] More particular aspects of exemplary eBook Reader embodiments may include software tools stored on the microprocessor or other dedicated memory device associated with the speech generation device 30. Such software tools, including the data defining the graphical screenshots and menu interfaces for displaying to a user, may be stored as instructions in a computer-readable medium within a memory element. The microprocessor within the speech generation device 30 or other processor or controller device may then execute such software instructions to provide these and other software tools and features of the present invention. With particular regard to the eBook software tools, features are provided that enable a user to interface with: (A) an eBook Downloader menu to search for and download eBooks, (B) an eBook
Reader menu to read an eBook, and (C) an eBook Actions Behaviors menu to create a new eBook interface page.
[00184] An eBook ("electronic book") is a digital representation of a printed publication. Many different formats of eBooks have emerged in the past several years, including the DAISY (Digital Accessible Information System) format. The DAISY format was developed to provide published information in an easy-to-navigate format for people with print disabilities. eBook tools in accordance with the present invention fully support eBooks in the DAISY format, although other formats may also be used, including but not limited to BRF (Braille Refreshable Format) and others.
[00185] eBook Downloader software tools enable a user to set up a subscription with an online eBook repository, for example the Bookshare website available at www.bookshare.org. Bookshare is a nonprofit Internet-based organization that provides digital talking books to the visually impaired or print-disabled. Bookshare maintains an online library of over 45,000 eBooks, including both books and periodicals. The eBook
Downloader software tool provides a direct link that lets the user search for and download eBooks directly from the eBook repository.
[00186] To facilitate download of an eBook, the eBook Downloader menu gives a user direct access to an online eBook repository. A user can search for books or periodicals, with search options available to search by author, title, keyword, or periodical ID number. A "favorite searches" feature may be available for quick and easy access to a user's favorite periodicals, authors, or keyword search parameters. EBooks from the online eBook repository can be downloaded directly into an eBooks folder on the speech generation device 30, where they may be immediately available for reading via the eBook Reader menu or a custom eBook page created by a user.
[00187] An example of an eBook Downloader menu with which a user may interface is provided in Fig. 31. The procedures for searching for and downloading a book and searching for and downloading a periodical may be the same or slightly different, and examples of both will now be presented. [00188] In one example, steps for downloading a book using the eBook
Downloader menu of Fig. 31 include: (1) Open the Bookshare Download menu if it is not already open. (2) Check to make sure that Book View is displayed in the Current View area of the Searching group box. If Periodical View is displayed, select the Toggle View button to change to Book View. (3) Select the Search text box. The system keyboard will open. (4) Enter the title, author, or a keyword in the system keyboard and select the
OK button. (5) Select the Search button. (6) An hourglass will appear while the software is searching the Bookshare library. A list of books that meets the search criteria will appear in the viewport. Use the scroll buttons on the right side of the viewport to move to the bottom of the viewport. Use the Previous and Next buttons in the Searching group box to navigate through the search results. (7) Select a book in the viewport that is to be downloaded. (8) Select the Download button in the Actions group box. (9) The Bookshare.org Account menu may appear. This menu desirably will appear the first time a user downloads an eBook and then after each software reboot. Appropriate text boxes may be selected, which open the system keyboard on the speech generation device so that a user can provide his user name and password, and then the user selects the Login button. (10) The book will automatically download into the folder shown in the Download Location group box. (11) A software prompt will appear, asking whether the user wants to open the book. Select No to return to the Bookshare Download menu.
[00189] During the eBook download process, software may be provided that interfaces with the online eBook repository to provide various details of a selected book. For example, as shown in the exemplary screenshot of Fig. 32, a user may select the Content Details button in the Actions group box to open the Bookshare Content Details window (see below). Select the Close button when finished. When a user becomes more familiar with the eBook Reader menu, he may want to download the book directly from this window by selecting the Download button on the Bookshare Content Details window.
[00190] In one example, steps for downloading a periodical involve interfacing with a slightly different eBook Downloader menu as shown in Fig. 33. Steps include: (1)
Open the Bookshare Download menu if it is not already open. (2) Check to make sure that Periodical View is displayed in the Current View area of the Searching group box. (If Book View is displayed, select the Toggle View button to change to Periodical View.) (3) Select the See All button. (4) An hourglass will appear while the software is searching the Bookshare library. A list of all available periodicals will appear in the viewport. Use the scroll buttons on the right side of the viewport to move to the bottom of the viewport. Use the Previous and Next buttons in the Searching group box above the Search Results pane to navigate to the next group of search results. (5) Select a periodical in the viewport. All available editions of the periodical will be displayed, as shown in the exemplary interface menu of Fig. 34. (6) Select the edition of the periodical for download. (7) Select the Content Details button in the Actions group box to open the Bookshare Content Details window. This will allow the user to see the details for the selected periodical, similar to the details shown in Fig. 33 but also including any applicable information such as title, publication date, category of periodical (e.g., California Newspaper), time uploaded, revision number, periodical ID number, etc. Select the Close button when finished. (8) Select the Download button in the Actions group box. (9) If the Bookshare.org Account menu appears, enter user name and password in the appropriate text boxes. Then select the Login button. (10) The periodical will automatically download into the folder shown in the Download
Location group box. (11) A software prompt will appear, asking whether the user wants to open the periodical. Select No.
[00191] It should be appreciated as part of the download process, that eBooks can be downloaded directly to the speech generation device 30, or to other dedicated memory/storage devices such as but not limited to a USB flash drive, CompactFlash card, or other memory device from which the eBook can later be imported to the speech generation device 30.
[00192] Fig. 35 shows an interface menu also available in some embodiments of the download process, which allows a user to save search criteria so that the user can quickly access favorite periodicals, authors, or subject matter. Separate favorite searches for both books and periodicals may be provided.
[00193] In one example, a user may interface with the menu shown in Fig. 35 using the following steps: (1) Open the Bookshare Download menu if it is not already open. (2) Select the View Favorites button in the Searching group box. The Favorite Searches menu will open. (3) Select the Create New text box. If a user is in Book
View, the system keyboard will open. If the user is in Periodical View, the Periodical ID menu shown in Fig. 36 will open. (4) Enter search criteria (author, title, or keyword) on the system keyboard for a book search (then select the OK button), or enter the Periodical ID number on the Periodical ID menu for a periodical search (and then select the OK button). (5) Select the Search and Save button. This will begin a search of the repository library for the books that meet a user's search criteria (or for the periodical ID number). Search criteria will be saved as a Search Favorite. (6) Select the book or the edition of the periodical desired for download. (7) Select the Download button in the Actions group box. (8) The book or periodical will automatically download into the folder shown in the Download Location group box. (9) A software prompt will appear, asking whether the user wants to open the book or periodical. Select No. (10) Repeat steps 2 through 9 above, creating new favorite searches, up to a certain number (e.g., three favorite book searches and three favorite periodical searches). These Favorite Searches may be replaced in chronological order by subsequent searches (the oldest search will be replaced first).
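The favorite-search slots described in step (10) behave like a small fixed-size list in which the oldest saved search is replaced first. A minimal sketch of that behavior is shown below; the class and method names are illustrative, and the limit of three simply mirrors the example given in the text.

```python
from collections import deque

class FavoriteSearches:
    """Sketch of the favorite-search slots: a fixed number of saved searches
    where the oldest entry is replaced first when the limit is reached."""

    def __init__(self, max_favorites=3):
        self._favorites = deque(maxlen=max_favorites)  # oldest drops off first

    def save(self, criteria):
        # "Search and Save": store the author/title/keyword or periodical ID.
        self._favorites.append(criteria)

    def saved(self):
        # Buttons shown in the Search Favorites group box, oldest first.
        return list(self._favorites)


favs = FavoriteSearches()
for term in ["Twain", "gardening", "Periodical 211", "space travel"]:
    favs.save(term)
print(favs.saved())  # ['gardening', 'Periodical 211', 'space travel'] -- "Twain" was replaced first
```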
[00194] Once search criteria are saved as described above, a user may search for a Book or Periodical Using the Favorite Searches Menu, an example of which is shown in Fig. 37. The following exemplary steps may be used for interfacing with the menu of
Fig. 37: (1) Open the Bookshare Download menu if it is not already open. (2) Select the View Favorites button in the Searching group box. The Favorite Searches menu will open. (3) Select one of the buttons in the Search Favorites group box. (A designated maximum number of Favorite Searches may be saved in both the book and periodical Favorite Searches menus.) This will begin a search of the Bookshare library for the books that meet the search criteria (or for the periodical ID number). (4) Select the book or the edition of the periodical desired for download. (5) Select the Download button in the Actions group box. (6) The book or periodical will automatically download into the folder shown in the Download Location group box. (7) A software prompt will appear, asking whether the user wants to open the book or periodical. Select No. (8)
Select the Done button to close the Bookshare Download menu. [00195] Software tools in accordance with an exemplary embodiment of a speech generation device 30 enable a user to use a current selection method to read eBooks. After an eBook has been downloaded, a user can use the same access methods for making selections on the speech generation device 30 to scroll through the pages of an eBook, speak and highlight text on the eBook page, symbolate each page of the eBook, or bookmark a place on the page of an eBook.
[00196] A specific example of an eBook Reader menu with which a user may interface is provided in Fig. 38. Exemplary steps for interacting with such menu are: (1) Select the Load eBook button on the eBook Reader menu or on the eBook page. The
Select an eBook File menu will open. An example of this menu is provided in Fig. 39. (2) Select the expansion box to the left of the eBooks folder in the left viewport. The eBooks folder will open, and the subfolder(s) for the downloaded eBooks will be displayed. (3) Select a book subfolder in the left viewport. The file it contains will be displayed in the right viewport. (4) Select the book file in the right viewport. (5) Select the OK button. The first page of the book will appear in the eBook Viewer pane, and a page list will appear in the eBook Table of Contents Viewer pane. [00197] Additional functionality for reading an eBook as provided by the user interface menu of Fig. 38 is now presented. In order to move between pages of the eBook, "Next Page" and "Previous Page" buttons are available. By selecting the Next Page button on the eBook Reader menu, the next page will be displayed in the eBook Viewer pane. By selecting the Previous Page button on the eBook Reader menu, the previous page will be displayed in the eBook Viewer pane. A user can move within the current page in the eBook Viewer by selecting a "Page Down" button on the eBook
Reader menu, which moves the eBook page down in the eBook Viewer pane. A user can alternatively select a "Page Up" button on the eBook Reader menu, which will move the eBook page up in the eBook Viewer pane.
[00198] Referring still to Fig. 38, features by which a user may scroll the eBook page in the eBook Viewer are provided when a user selects the "Scroll Down" button on the eBook Reader menu. The eBook page will scroll down in small increments toward the bottom of the eBook Viewer pane. Similarly, when a user selects the Scroll Up button on the eBook Reader menu, the eBook page will scroll up in small increments toward the top of the eBook Viewer pane. When a user builds a custom eBook page, the user can modify the Scroll eBook settings and choose the object to scroll (eBook Viewer or eBook Table of
Contents), the type of scroll (up, down, continuous up, or continuous down), and whether or not the scroll will span the pages of the eBook.
[00199] Referring still to Fig. 38, features by which a user may symbolate the eBook page are provided when a user selects the "Symbolate" button on the eBook Reader menu. Symbolation involves displaying symbols for as many words as possible for the text on the current eBook page. Some users find this option helpful, especially if they are more comfortable reading symbols instead of plain text. When the "Symbolate" button is selected by a user, a message will appear indicating that the eBook page is being symbolated. The page will be symbolated, and the Symbolate button will toggle to Desymbolate. To desymbolate the current eBook page, select the Desymbolate button on the eBook Reader menu. Symbols will disappear from the page, and the Desymbolate button will toggle back to Symbolate.
[00200] A still further feature provided on the interface menu of Fig. 38 enables a user to create a bookmark. A user may first navigate to the location on the eBook page where he wants to place a bookmark, and then select the "Create Bookmark" button on the eBook Reader menu. The system keyboard will open. A name for the bookmark may be entered on the system keyboard (for example, a point in the story), after which the OK button is selected. The bookmark will automatically be inserted at the specified location. Such steps may be repeated as desired for creation of additional bookmarks. Bookmarks may be named by the user or automatically numbered with a default procedure.
[00201] Once one or more bookmarks are created, a user may interact with another interactive menu as shown in Fig. 40 to go to a Bookmark. After selecting the "View Bookmarks" button on the eBook Reader menu of Fig. 38, the Available
Bookmarks menu of Fig. 40 will open. A list of all bookmarks in the currently loaded eBook will appear in the Bookmarks viewport. A user may use the scroll buttons on the right side of the viewport to move to the bottom of the viewport. A user may then select a bookmark to which he wants to move. An "X" will appear in the check box next to the bookmark's name. By selecting the "Go to Bookmark" button, the Available Bookmarks menu will close, and the bookmarked eBook page will appear in the eBook Viewer. A user may also use the View Bookmarks menu to rename or delete bookmarks. A user may also create his own eBook Reader page with buttons using the Jump to Bookmark and Jump to Specific Bookmark behaviors.
[00202] Referring still to Fig. 38, options are provided for a speech generation device 30 to speak the page of the eBook when a user selects the "Speak Page" button on the eBook Reader menu. The current eBook page will be spoken, and the Speak Page button will toggle to Stop Speaking. When a user selects the "Stop Speaking" button on the eBook Reader menu, the speech will stop. If desired, a user can select a different reading voice from the voice normally used for communication. To choose a reading voice, select "Modify Viewer" on the eBook Reader menu. Use the Reading Voice drop-down menu to select a reading voice. (If you do not choose a reading voice in the Modify Viewer menu, the software will automatically default to a predetermined voice when speaking an eBook page.)
[00203] A user can also automatically enable the eBook page to speak when it is selected. Select "Modify Viewer" on the eBook Reader menu. Use the When Selected drop-down menu to select Speak eBook. The current eBook page will automatically speak when selected and stop speaking when it is selected a second time. The Speak Page button desirably is configured to toggle automatically between Speak Page and Stop Speaking each time the text is selected.
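The bookmark features described above (creating a named or automatically numbered bookmark, listing the available bookmarks, and jumping back to one) can be summarized by the following illustrative sketch. The (page, offset) representation of a location and all method names are assumptions made for the example, not the actual data model of the device.

```python
class EBookBookmarks:
    """Sketch of the bookmark features: create a named bookmark at the current
    location, list bookmarks, and jump back to one."""

    def __init__(self):
        self._bookmarks = {}     # name -> (page, offset)
        self._auto_counter = 0   # used when no name is entered

    def create(self, location, name=None):
        if name is None:
            # Default behavior mentioned in the text: automatic numbering.
            self._auto_counter += 1
            name = f"Bookmark {self._auto_counter}"
        self._bookmarks[name] = location
        return name

    def available(self):
        # Contents of the Available Bookmarks viewport.
        return sorted(self._bookmarks)

    def jump_to(self, name):
        # "Go to Bookmark": return the saved location so the viewer can show it.
        return self._bookmarks[name]

    def rename(self, old, new):
        self._bookmarks[new] = self._bookmarks.pop(old)

    def delete(self, name):
        del self._bookmarks[name]
```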
[00204] A still further feature desirably is provided for a user to highlight the words on the eBook page by selecting the "Highlight" box on the eBook Reader menu of Fig. 38. The words on the current eBook page will be highlighted as they are spoken. The Highlight box will remain selected as the user moves from page to page, and the words will continue to be highlighted as they are spoken until the Highlight box is deselected. When a user deselects the Highlight box on the eBook Reader menu, the words on the current eBook page will no longer be highlighted as they are spoken.
[00205] Additional software features desirably provide options by which a user can change the characteristics of the eBook Viewer pane and of the eBook TOC (Table of
Contents) pane to better suit individual needs and preferences.
[00206] For example, a user may be provided with an interface menu as shown in Fig. 41 when the user selects the "Modify Viewer" button on the eBook Reader menu of Fig. 38. A user can select the "Edit" button in the Background Color group box in the Modify Viewer menu to open a Color Selector menu by which the background color of the eBook Viewer can be changed. A user can select a text size from the Text Size drop-down menu in the Modify Viewer menu. In the "When Selected" drop-down menu in the Modify Viewer menu of Fig. 41, a user can turn the eBook Speech behavior on and off. The Edit button in the Text Color group box in the Modify Viewer menu of Fig. 41 will open a Color Selector menu, from which a user can change the color of the text in the eBook Viewer. A user can also select a reading voice from the Reading Voice drop-down menu in the Modify Viewer menu. After each of these options, select OK to close the Modify Viewer menu to see the results of the selected modifications. When finished, select Modify Viewer on the eBook Reader menu and continue with the next step.
[00207] As another example, a user may be provided with an interface menu as shown in Fig. 42 when the user selects the "Modify TOC" button on the eBook Reader menu of Fig. 38. By selecting this button, the Modify TOC menu of Fig. 42 will open. A user can select the Edit button in the Background Color group box in the Modify TOC menu to open the Color Selector menu and change the background color of the Table of
Contents. A user can select a text size from the Text Size drop-down menu in the Modify TOC menu to change the size of the text in the eBook Table of Contents. A user can select the Edit button in the Text Color group box in the Modify TOC menu to open the Color Selector menu and change the color of the text in the eBook Table of Contents. After each of the following steps, a user can see the results of selected modifications by selecting OK to close the Modify TOC menu. When finished, select Modify TOC on the eBook Reader menu and continue with the next step. [00208] Yet another feature available on the eBook Reader menu of Fig. 38 allows a user to unload an eBook by selecting the "Unload eBook" button on the eBook Reader menu. The current eBook will be unloaded from the eBook Viewer and eBook Table of
Contents.
[00209] eBook Actions Behaviors Menu. After a user becomes familiar with the eBook Reader menu and the many unique eBook behaviors, a user may select options for creating a custom eBook page. Creating a custom eBook page will allow a user to use all of the eBook Actions behaviors such as downloading an eBook, defining scrolling behaviors, "jumping to" bookmarks, and sending eBook pages to the Message
Window or Vocabulary Clipboard.
[00210] The eBook Actions category of behaviors allows a user to program a custom page for loading and reading eBooks. Exemplary behaviors available for selection include the following:
[00211] Adjust eBook Font Size - Increases, decreases, or sets a specific font size in the eBook Viewer pane or the eBook Table of Contents pane.
[00212] Assign Loaded eBook to a Button - Assigns the currently loaded eBook to a specific button.
[00213] Create Bookmark - Places a bookmark at the current location on the loaded eBook page.
[00214] Desymbolate eBook Page - Removes the symbols from the current page of the eBook. [00215] Download eBook - Opens the Bookshare Download menu, which enables you to search for and download an eBook from an online repository. (You must have an active Internet connection to use the Download eBook behavior.)
[00216] eBook Page to Message Window - Sends text on the current page to the
Message Window. [00217] eBook Page to Vocabulary Clipboard - Opens the Vocabulary Clipboard and sends the vocabulary on the current page to the Vocabulary Clipboard.
[00218] eBook Reader - Opens the eBook Reader menu.
[00219] Jump to Bookmark - Opens the Available Bookmarks menu that lists all bookmarked locations in the loaded eBook so you can navigate to the bookmark of your choice.
[00220] Jump to Specific Bookmark - Navigates to a specific bookmark that you have preselected in the loaded eBook.
[00221] Load eBook - Opens the Select an eBook File menu and loads the selected eBook into the eBook Reader menu. [00222] Next Page - Displays the next page of the current eBook in the eBook
Viewer pane.
[00223] Open eBook - Opens a specific eBook that you can preselect to be loaded each time this button is selected. [00224] Play/Pause/Resume eBook Speech - Toggles among speaking, pausing speech, and resuming speech on the current page of the loaded eBook.
[00225] Previous Page - Displays the previous page of the current eBook in the eBook Viewer pane.
[00226] Scroll eBook - Opens the Scroll Behavior Settings menu (an example of which is shown in Fig. 43), which allows a user to scroll through an eBook by a desired amount. The drop-down menus on the Scroll Behavior Settings menu allow a user to choose: Object to Scroll (Viewer or Table of Contents); Type of Scroll (Up, Down, Page Up, Page Down, Continuous Up, Continuous Down); and Span Pages (Yes or No).
[00227] Symbolate eBook Page - Adds symbols to words on the current eBook page.
[00228] Unload eBook - Removes the current eBook from the eBook Viewer and eBook Table of Contents panes.
[00229] In order to build a new eBook reader page, the Page Editor feature should be opened on a speech generation device 30.
[00230] A "New Page" option may be available from a drop-down menu that will toggle the system keyboard on a user's screen. The new page may be named, for example, "My eBook Reader Page," which then opens up a blank page in the Page Editor tool. The eBook Viewer tool in the Tools palette of the Page Editor title bar can be used to draw out a large rectangle for the eBook Viewer pane by following these exemplary steps: (1) Select the eBook
Viewer tool in the Tools palette, as shown in the example menu of Fig. 44. (2) Select the location on the page where you want to place one corner of the eBook Viewer pane.
Do not release the selection. (3) Continue to maintain the selection while you drag out the cursor to form a large rectangle. An outline of the eBook Viewer pane you are drawing will appear on the page. (4) Move the cursor to adjust the size and shape of the rectangle. Do not release the selection until the eBook Viewer pane is the size and shape you want. (5) Release the selection.
[00231] The eBook Table of Contents tool in the Tools palette of the Page Editor title bar (again, see Fig. 44) may be used to draw out a second, smaller rectangle to contain the eBook Table of Contents pane. For example, the following steps may be used: (1) Select the eBook Table of Contents tool in the Tools palette. (2) Select the location on the page where you want to place one corner of the eBook Table of Contents pane. Do not release the selection. (3) Continue to maintain the selection while you drag out the cursor to form a second rectangle. An outline of the eBook Table of Contents pane you are drawing will appear on the page. (4) Move the cursor to adjust the size and shape of the rectangle. Do not release the selection until the eBook Table of Contents pane is the size and shape you want. (5) Release the selection.
[00232] The tools in the Tools palette of the Page Editor may be used to add the buttons that a user wants on his eBook page. Suggested basic buttons may include buttons that will use the Load eBook, Unload eBook, Play/Pause/Resume eBook Speech, and the Page Up and Page Down scrolling behaviors. A user may select the Modify button to add labels and behaviors to the created buttons. If a user wants to use the eBook Page to Message Window behavior, he should add a Message Window to the page. When finished building the new page, select the Main Menu button in the title bar.
[00233] Ability to change access methods. There are scenarios where it is beneficial to change methods of controlling the speech generation device 30. For progressive conditions - such as ALS/MND - there is a natural progression from direct selection to other control methods - such as scanning to mouse to trackball to joystick to head control to eye gaze tracking (and then sometimes back to scanning). For other conditions, there are situational changes: for example, a user may employ eye-tracking while in a wheelchair, but when riding in a vehicle where the user cannot remain in the wheelchair, the user may move to a different method of controlling the speech generation device 30. In conventional systems, the minimum number of selection steps that the user must perform in order to implement such transitions was six. In accordance with the present invention, the eye gaze controller 20 and associated speech generation device 30 desirably are configured to allow such transitions to be implemented immediately - with a single button selection requiring only two steps to be performed by the user. Moreover, the eye gaze controller 20 and associated speech generation device 30 desirably are configured to allow the user to implement reliably such transitions between eye-tracking and another control protocol without requiring intervention from a caregiver.
[00234] As embodied herein, Fig. 17 schematically illustrates a software protocol, which is provided as a component of the speech generation device 30 and desirably used by the speech generation device 30 under the control of the eye gaze controller 20 to implement reliably with a single button selection such transitions between eye-tracking and another control protocol without requiring intervention from a caregiver. As schematically shown in Fig. 17, the user can change from the gaze control selection method to the scanning control selection method and then return to the gaze control selection method. Once the user is looking at the selection method navigator in the pop-up window on the input screen 33 of the speech generation device 30, the user can change from one selection method to another desired selection method by simply focusing the user's eyes on the area of the screen depicting the desired selection method and using the eye gaze controller 20 to select that desired selection method. Of course, the desired control device (headmouse, joystick, switches, etc.) associated with the desired selection method must be operatively connected to the speech generation device 30.
[00235] Retaining Effort. Conventional software designed for an eye gaze selection system maps the input screen into discrete regions that are the so-called "objects" of the gaze of the user's eyes. When the conventional system determines that the user is gazing anywhere on an object, the system starts a clock that begins recording the duration of time that is compared to the "dwell time" setting, which is the duration of time that triggers selection of the object by the eye gaze selection system.
However, when users with head-movement or poor eye-control employ conventional eye gaze selection systems, such users may unintentionally have their gaze leave the object. Once the user's gaze leaves the object, a conventional eye gaze selection system restarts the clock that records the duration of time that is compared to the "dwell time" setting. Thus, in a conventional eye gaze selection system, such users would lose the "accumulated dwell time" and need to start over in the attempt to select that object. [00236] In accordance with the present invention, the eye gaze controller 20 and associated speech generation device 30 desirably are configured to provide settings that enable the user to choose to retain the user's "accumulated dwell time" if the user's eye gaze leaves the object. One of these "retain object" settings allows the user to set the amount of time that the user's eye gaze spends away from the object before the user's already accumulated dwell time is lost and must be restarted. The other of these "retain object" settings allows the user to set the rate (per unit of time) at which the user's already accumulated dwell time will be decremented after the user's eye gaze has moved away from the object.
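The two "retain object" settings described above can be thought of as a tolerance window and a decay rate applied to the accumulated dwell time. The following Python sketch is a simplified illustration of that logic under assumed parameter values; it is not the actual gaze-processing code of the eye gaze controller 20, and the class and attribute names are invented for the example.

```python
class DwellAccumulator:
    """Sketch of the "retain object" settings: accumulated dwell time is kept
    (and optionally decays at a chosen rate) while the gaze is briefly off the
    object, instead of resetting immediately. All parameters are illustrative."""

    def __init__(self, dwell_trigger_s=1.0, retain_window_s=0.5, decay_rate=0.5):
        self.dwell_trigger_s = dwell_trigger_s   # dwell time that triggers a selection
        self.retain_window_s = retain_window_s   # how long off-object gaze is tolerated
        self.decay_rate = decay_rate             # accumulated seconds lost per second away
        self.accumulated = 0.0
        self.time_away = 0.0

    def update(self, gaze_on_object, dt):
        """Advance the accumulator by dt seconds; return True when a selection triggers."""
        if gaze_on_object:
            self.time_away = 0.0
            self.accumulated += dt
        else:
            self.time_away += dt
            if self.time_away > self.retain_window_s:
                self.accumulated = 0.0           # retained time expires; start over
            else:
                # Within the tolerance window, decay instead of resetting immediately.
                self.accumulated = max(0.0, self.accumulated - self.decay_rate * dt)
        if self.accumulated >= self.dwell_trigger_s:
            self.accumulated = 0.0
            return True
        return False
```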
[00237] As embodied herein, Fig. 18 schematically illustrates a software protocol, which is provided as a component of the speech generation device 30 and desirably used by the speech generation device 30 under the control of the eye gaze controller 20 to govern how the "accumulated dwell time" is affected when the user's eye gaze leaves the object. In the embodiment shown in Fig. 18, the accumulated dwell time for an object on the input screen 33 of the speech generation device 30 is desirably indicated visually by the degree of contrast of the object being considered for selection relative to other objects on the input screen 33. That degree of contrast is increased or decreased by an increment of contrast, and the magnitude of one increment of contrast is set according to the relative size of the area occupied by the object being considered for selection and the amount of accumulated dwell time that is needed to trigger a selection of that object.
[00238] Rate Enhancement. The rate at which a selection device enables the user to compose a message to be spoken by the speech generation device is a key measure of the desirability of the system that combines the selection device and the speech generation device. In accordance with the present invention, the speech generation device 30 that is controllable by the eye gaze controller 20 has been provided with certain capabilities that enhance the rate at which the eye gaze controller 20 enables the user to compose a message to be spoken by the speech generation device 30.
[00239] Software features may be provided with the speech generation device 30 that offer rate enhancement for quicker and more efficient communication. Such features can reduce the number of selections that are required to perform a task or create a message, resulting in a faster, more efficient communication rate. Examples of such communication features include: (1) Word Prediction; (2) Abbreviation Expansion; (3) Concept Grouping; (4) Phrase Customization and Prediction; and (5) Concept Slots. Additional details of such exemplary rate enhancement features will now be presented in respective order.
[00240] (1) Word Prediction: Word prediction can be used with keyboard pages that include predictor buttons. As a message is composed, the prediction feature anticipates word choices and displays vocabulary from a device dictionary for quick selection. These options are displayed in predictor buttons, as shown in the exemplary interface screen of Fig. 45. If the software predicts the word a user is trying to compose, the user can conserve his efforts and save time by selecting the predictor button that features the correct word. This will immediately send the word to the Message Window and add a space (to prepare for another word), allowing a user to simply move on to the next word in a message. In one example, the word prediction feature draws selections from either an internal or online software dictionary. A user can also make his own personal vocabulary (including names, single words, multiple word phrases and full sentences) available for word prediction by adding these items to the dictionary. Users may find it helpful to create dictionary entries for the names of family, friends, businesses, towns, hobbies, foods, movies or other things that they often talk about.
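As a rough illustration of the word prediction behavior described above, the sketch below matches the letters typed so far against a small dictionary and fills a fixed number of predictor buttons, with the most frequently used entries offered first. The dictionary structure, the frequency-based ordering shown here, and the example vocabulary are all assumptions made for illustration only.

```python
def predict_words(prefix, dictionary, max_buttons=6):
    """Illustrative word prediction: match the typed letters against dictionary
    entries and fill the predictor buttons. `dictionary` maps each word or
    phrase to a usage frequency (an assumed representation)."""
    prefix = prefix.lower()
    matches = [(word, freq) for word, freq in dictionary.items()
               if word.lower().startswith(prefix)]
    # Most frequently used entries are offered first ("Frequency" ordering).
    matches.sort(key=lambda item: item[1], reverse=True)
    return [word for word, _ in matches[:max_buttons]]


# Example with a small personal dictionary, including a name added by the user.
personal_dictionary = {"hello": 40, "help": 55, "helicopter": 5, "Helen": 80}
print(predict_words("hel", personal_dictionary))
# ['Helen', 'help', 'hello', 'helicopter']
```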
[00241] Word prediction features may be particularly advantageous for individuals who have good literacy skills, who need help with spelling out words but can recognize them on sight, who use alternate access methods that make it inefficient to completely spell out words, and/or who can spell the first few letters of words and then must rely on symbols to identify words. Word prediction software is useful for such users and others because it increases spelling speed, can help improve literacy skill by enabling users to spell a few letters and then rely on word recognition or symbols to get the right option, and decreases user fatigue by reducing the number of necessary keystrokes.
[00242] A first related variation of the word prediction software features is character prediction, which involves providing character predictor buttons that predict single characters based on the letters that a user is typing. A character prediction feature may be useful for individuals who use alternate access methods and/or who need a faster and less fatiguing way to find words through spelling. Such features are helpful to make keyboard communication quicker or less physically taxing and to generate longer words more quickly and with less effort, especially when combined with word prediction software tools.
[00243] A second related variation of the word prediction software features involves context prediction. Such a feature anticipates word selection based on the grammatical structure of the sentence that a user is creating. Such a feature may be helpful for users who use alternate access methods and/or who need to maximize prediction. Such features may be useful for a user to help see words that might logically come next in a sentence, and/or to increase the number of grammatically correct sentences since words that may be left out (i.e., a, an, the, is, etc.) may be predicted.
[00244] In accordance with the disclosed prediction software options, it may be possible for a user to activate and deactivate software prediction features via a Prediction Settings menu. An example of such a Prediction Settings menu is shown in the interface display of Fig. 46. When a user first turns on his speech generation device 30, several prediction settings may be selected as defaults. To review and change the current prediction settings, a user may select the check boxes in the Prediction Settings group box to activate or deactivate various prediction features. For example, a user may select the Prediction check box to activate basic prediction features (including word prediction, phrase prediction and character prediction). To deactivate prediction, a user ensures that the check box is not selected.
[00245] Additional buttons may be available for user selection in the Prediction Settings menu of Fig. 46. For example, a Flexible Abbreviation check box may be available to activate the flexible abbreviation feature, which will be described later in more detail. To deactivate this feature, make sure the check box is not selected. A "Don't Predict Words Already on Buttons" box may be selected for configuring a speech generation device to not predict a word that is already on a button on the page. Only words that do not appear on the page will be predicted. When this check box is not selected, a word may appear in a predictor button even if it appears on the page.
[00246] When the "Add New Words to Dictionary" check box is selected, the speech generation device 30 may be configured to examine words as they are added to the Message Window. When the software discovers a word that is not in the dictionary, it will automatically add it to the dictionary. To deactivate this feature, this check box should not be selected.
[00247] A "Context Prediction" check box may be provided to activate/deactivate the context prediction feature.
[00248] When the "Only Words or Phrases with Symbols" check box is selected, only words or phrases that have assigned symbols will be predicted. Any words or phrases that do not have symbols will not be predicted. When this check box is not selected, all words or phrases are eligible for prediction. [00249] When the "Predict Items Only Once" check box is selected, a user has only one chance to select a word in a predictor button. If a user is entering letters into a keyboard page and does not select a word from a predictor button, that word will not be predicted again until after the word being typed is completed (by entering end punctuation or a space), [00250] When the "Predict All Capitals" check box is selected, words will appear in the predictor buttons in all capital letters.
[00251] When the "Only Predict Phrases From Start of Sentence" check box is selected, phrases will be predicted based on the beginning of the phrase, rather than any matching characters (For example, "can you" would match "Can you help me?" but not "How can you tell?"). If this check box is not selected, then phrases will be predicted based on any part of the phrase, not just based on the beginning of the phrase. [00252] If a user wants the selected prediction features to predict vocabulary only after having typed a specific number of letters, select the "Predict After _ Letters" dropdown menu and select one of the available options. The drop-down menu will close and display the chosen option.
[00253] To specify the order in which vocabulary should be presented in the prediction boxes, select the "Prediction Order" drop-down menu (in the Presentation Settings group box) and select one of three available options: (a) Alphabetical -
Vocabulary items are presented in alphabetical order. (b) Frequency - The vocabulary items that are used most often are presented first. (c) Length - The longest vocabulary items are presented first. The drop-down menu will close and display the chosen option.
[00254] If a user wants symbols to be presented with vocabulary in the predictor buttons, select the "Symbol Prediction" check box in the Presentation Settings group box. If a user wants only text to be presented in the predictor buttons, make sure the check box is not selected.
[00255] If a user wants to maximize the size of a symbol within the predictor button, select the "Symbols on the Left" check box in the Presentation Settings group box. (This may cause the text in the button to be partially hidden.)
[00256] As another part of the prediction software available on an exemplary speech generation device, software tools may be available for creating a new dictionary entry. One example of a dictionary for use in a speech generation device 30 is an alphabetized catalog of every word, name and phrase that is stored in the software's vocabulary database. This dictionary can be customized easily and, since rate enhancement on the devices is based on dictionary vocabulary, additional dictionary entries can be added for names, questions, statements, and the like.
[00257] Dictionary entries can be created and edited in a Dictionary Browser menu, an example of which is shown in Fig. 47. To add a word, name or phrase to the dictionary, the "New" button can be selected in the interface of Fig. 47, at which point an Edit Word menu such as shown in Fig. 48 will open. By selecting the Word text box, the system keyboard will open and a user can enter the word, name or phrase desired for adding to the dictionary. After selecting the OK button, the system keyboard will close and the new dictionary entry will be displayed in the Word text box. A user may select the Part of Speech drop-down menu and then select the option that best applies to the new dictionary entry. If the "Kind of" drop-down menu is available, a user may add a more specific definition to the part of speech that a user has assigned to the new dictionary entry. For example, a noun may be further defined as a proper noun. To adjust this setting, select the "Kind of" drop-down menu and then select one of the available options.
[00258] Referring to the exemplary interface of Fig. 49, if the items in the Word Forms group box are available, a user may review any word form variations that apply to the new dictionary entry (for example, "colder" and "coldest" for the adjective "cold"). The Variant drop-down menu offers a list of variation types that are associated with the part of speech that is assigned to the new vocabulary item. The Word Form text box displays an example of the dictionary entry that is changed to reflect the variant form that is selected in the Variant drop-down menu. If one of the examples in the Word
Forms text box must be corrected, a user may select the Word Form text box to open the system keyboard, use the system keyboard to enter the corrected form of the dictionary entry, and select the OK button to close the system keyboard. The change will be displayed in the Word Form text box. [00259] Software features may be available for a user to select a frequency that is assigned to a dictionary entry. The chosen frequency affects how quickly the entry is predicted by rate enhancement. To assign a frequency to the new dictionary entry, select the Frequency button and then complete the rest of the steps. A user may accept a default frequency (e.g., 10) or select a frequency number within a range (e.g., between one and 100, with 100 generally used for items that will be used the most often). A frequency keypad may be used to enter the new frequency number, which will then be displayed in the Frequency button.
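The three prediction orders described above (alphabetical, frequency, length) and the per-entry frequency value could be modeled roughly as in the following sketch. The DictionaryEntry class, its field names, and the example values are illustrative assumptions only, not the actual database used by the device.

    from dataclasses import dataclass

    @dataclass
    class DictionaryEntry:
        word: str
        part_of_speech: str = "noun"
        frequency: int = 10        # assumed default; 1-100, with 100 for most-used items

    def order_predictions(entries, order="Frequency"):
        """Sort candidate entries for the predictor buttons.

        order -- "Alphabetical", "Frequency" (most used first) or "Length" (longest first)
        """
        if order == "Alphabetical":
            return sorted(entries, key=lambda e: e.word.lower())
        if order == "Frequency":
            return sorted(entries, key=lambda e: e.frequency, reverse=True)
        if order == "Length":
            return sorted(entries, key=lambda e: len(e.word), reverse=True)
        return entries

    entries = [DictionaryEntry("cold", "adjective", 40),
               DictionaryEntry("colder", "adjective", 15),
               DictionaryEntry("can", "verb", 90)]
    print([e.word for e in order_predictions(entries, "Frequency")])  # ['can', 'cold', 'colder']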
[00260] Software features may be available for a user to add a concept tag to a new dictionary entry, thus associating the item with a group of similar vocabulary items. Concept tags make dictionary entries available for concept searches. To add a concept to the dictionary entry, select the Add button in the Concepts group box (the Select Concepts menu, an example of which is shown in Fig. 50, will open) and continue with the rest of this step. By selecting the Search text box, the system keyboard will open and a user can enter the name of the concept he wants to find or can scroll through the
Select Concepts menu viewport (see, e.g., Fig. 50) to find a concept. Each main concept is represented by a folder icon. Concepts that contain smaller sub-concepts are indicated by an expansion box (with a [+]). Select the expansion box to view the available sub-concepts. Use the scroll bar on the right side of the viewport to see all of the available options. When a user finds one or more appropriate concepts, select the check box next to each name. The selected concepts will be added to the Concepts group box in the Edit Word menu. The new dictionary entry will be added to the viewport in the Dictionary Browser menu, and the dictionary entry will be available for the user that is currently active. [00261] (2) Abbreviation Expansion: When activated, the abbreviation expansion feature lets a user define specific abbreviations for longer words and phrases. This feature can save a great deal of effort and time when using a keyboard page to compose a message. When a user enters the abbreviation and then adds a space, the software will automatically expand the abbreviation into the full word or phrase. [00262] In order to create an abbreviation expansion, it should be recognized that an abbreviation expansion consists of two parts: the abbreviation and the expansion. The abbreviation is the combination of characters that a user wants to enter (e.g., "INY"). The expansion text is the word or phrase that the abbreviation represents (e.g., "It's nice to meet you"). Once both parts are saved in the Abbreviation Browser menu, the words "It's nice to meet you." will be automatically sent to the Message Window anytime a user enters "INY" and then adds a space.
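A minimal sketch of the expand-on-space behavior just described is shown below. The dictionary variable and the function name are hypothetical; the sketch simply shows that an abbreviation is replaced by its stored expansion the moment a space is typed after it.

    abbreviations = {"INY": "It's nice to meet you.",
                     "PP": "Pittsburgh, Pennsylvania"}

    def expand_on_space(message_window, key):
        """Append a keystroke; when a space follows a stored abbreviation, expand it."""
        if key != " ":
            return message_window + key
        last_word = message_window.split(" ")[-1]
        expansion = abbreviations.get(last_word)
        if expansion:
            # replace the abbreviation with its expansion, then add the space
            return message_window[: -len(last_word)] + expansion + " "
        return message_window + " "

    text = ""
    for ch in "INY ":
        text = expand_on_space(text, ch)
    print(text)  # "It's nice to meet you. "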
[00263] To create a unique abbreviation expansion for a word, name or phrase, a user may interact with an interface menu such as the exemplary Abbreviation Browser shown in Fig. 51. This menu can be used to select the New button, at which point the system keyboard will open. A user can then enter the abbreviation and select the OK button to close the system keyboard. The system keyboard may be configured to open again automatically for a user to enter the expansion text. Again, selection of the OK button will close the system keyboard. The abbreviation expansion that you just created should be visible in the viewport of the Abbreviation Browser menu. [00264] Once a user has added some abbreviation expansions to a speech generation device 30, a user may apply the following steps to use an abbreviation expansion. First, on a keyboard page such as shown in Fig. 52, a user simply types an abbreviation that was saved (in this example, "PP"). A user then adds a space after the abbreviation, and the abbreviation will be immediately expanded, as shown in Fig. 53.
In the example of Figs. 52 and 53, the abbreviation "PP" is expanded to "Pittsburgh, Pennsylvania."
[00265] (3) Concept Grouping: Software features desirably configure a speech generation device to function with instructions for editing a concept. For example, software may use concepts to provide structure and organization for various elements of the software, including symbols, dictionary entries, slots and phrases. Concepts are designed to group similar items or ideas together, making it more efficient to search a particular item or idea. A Concept Browser menu (e.g., as seen in Fig. 54) enables a user to view and edit the list of concepts. Any changes made in the Concept Browser menu will be seen anywhere concepts are used. This includes the Symbol Browser menu, the Dictionary Browser menu and the My Phrases menu, as well as in the Select Slot Filler menu for slots.
[00266] All concepts that are available for the current user may be displayed in the viewport at the top of the Concept Browser menu of Fig. 54. In the viewport, each main concept is represented by a folder-shaped icon. If a concept contains smaller sub- concepts, the concept folder will have an expansion box (with a [+]) beside it. If you select an expansion box, the concept will expand to display all of the smaller sub- concepts. Each sub-concept is represented by a gray dot icon. When a main concept is open, the expansion box will contain a [-]. To close the concept, select the expansion box again. A user may need to use the scroll bar on the right side of the viewport to see all of the available concepts and sub-concepts.
[00267] The Concept Browser menu of Fig. 54 also includes a Search button and text box, enabling a user to search for a concept by name. Other buttons in the Concept Browser menu enable a user to create a new concept, change the organization of concepts within the viewport, rename a concept or edit the words that are available within a concept. If a user wants to see the individual words that are associated with a concept or sub-concept, a user may select the concept (or sub-concept) that he wants to see, and select the Edit Slot Fillers button. The Concept Slot Fillers menu, an example of which is shown in Fig. 55, will open. Every word that is assigned to the selected concept will be visible in the viewport at the top of this menu. A user may need to select the Next and Prev buttons at the bottom of the viewport to see all of the available words. The Concept Slot Fillers menu also provides options for editing and rearranging the words that are available in the selected concept. When you select a slot, words will be presented in the same order in which they are shown here.
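The concept/sub-concept tree and its associated slot-filler words, as described for the Concept Browser and Concept Slot Fillers menus, could be represented roughly as follows. The class name and fields are hypothetical and shown only to illustrate the hierarchy; they are not the device's actual data model.

    class Concept:
        """A concept holds its slot-filler words and any sub-concepts."""
        def __init__(self, name, fillers=None, children=None):
            self.name = name
            self.fillers = fillers or []      # words presented when a slot uses this concept
            self.children = children or []    # sub-concepts (shown under the [+] expansion box)

        def find(self, name):
            """Search the tree by name, as the Search box in the Concept Browser does."""
            if self.name.lower() == name.lower():
                return self
            for child in self.children:
                hit = child.find(name)
                if hit:
                    return hit
            return None

    breakfast = Concept("breakfast", ["oatmeal", "toast", "cereal"])
    fruit = Concept("fruit", ["banana", "nectarine", "apple"])
    food = Concept("food", children=[breakfast, fruit])
    print(food.find("fruit").fillers)  # ['banana', 'nectarine', 'apple']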
[00268] Features may be available in the Concept Browser menu of Fig. 54 for a user to edit a concept. To edit the list of words that is available for a concept, a user can open the Concept Browser menu and select the Search text box. The system keyboard will open, and a user can enter the name of the concept he wants to find and then select the OK button. The system keyboard will close and the concept will be highlighted in the viewport. A user can select the Edit Slot Fillers button, and the Concept Slot Fillers menu will appear. To add a new word to the concept, select the Add button (the system keyboard will open) and use the system keyboard to enter the new word to add to the concept. Select the OK button to close the system keyboard, and the word just added will be highlighted in the Edit Slot Fillers menu, and first in the list. To delete a word from the concept, select the word in the menu and then select the Remove button. To change the order in which the words will appear when a slot is selected, select a word in the menu and then select the Move Up button or the Move Down button. Repeat this step until the words are displayed in the desired order. [00269] (4) Phrase Customization and Prediction: Using phrases is one of the best ways to speed up communication. Exemplary software embodiments enable a user to store phrases for future use. When a user is communicating, he can quickly access and use a phrase in just a few simple steps. This can drastically reduce the number of selections that are required to compose a message, since the user no longer has to create the phrase word by word when he wants to use it. Phrases also save time since they can be accessed from any point within the page set; and a user does not need to navigate to a particular page or popup to use a phrase.
[00270] A user may be provided access to a customizable menu called the "My Phrases" Menu, which is designed to give a user immediate access to the phrases that are used frequently in everyday conversation (e.g., comments, statements and questions that are used frequently). The My Phrases menu may be opened, for example, by toggling the My Phrases button available on the Title Bar, as shown in Fig. 56. By selecting the Modify button in the title bar, the My Phrases button will turn red. By then selecting this button, the My Phrases menu will open, an example of which is shown in Fig. 57.
[00271] To make phrases easy to find, they may be organized by concept. Sorting phrases into concepts is one good way to make them faster and easier to use, since it allows a user to search through small groups of phrases instead of the whole collection.
The Concepts box displays the categories of phrases. Phrase concepts (or categories) may include general topics like the following: [00272] Greetings - How's it going? Hi there! Hey. [00273] Closings - I'll see you around. See ya! Have a nice day. [00274] Agree - Yeah, I know. Absolutely. Of course.
[00275] Disagree - I'm not so sure. No way! I don't think so.
[00276] Email Phrases - How are you? What's up? LOL
[00277] Features may be available for a user to add a phrase to the My Phrases menu by selecting the New button on the My Phrases menu. The New Phrase menu, an example of which is shown in Fig. 58, will open. A user may select the Phrase text box, and the system keyboard will open. A user can use the system keyboard to enter the phrase that he wants to add. After selecting the OK button to close the system keyboard, the new phrase will be displayed in the Phrase text box. If a user wants to choose an existing concept for the new phrase, select the Select Concept button in the Concepts group box. In the Select Concepts menu, find a concept either by selecting the Search text box and entering the name of the concept to use, or by scrolling through the Select Concepts menu viewport to find a concept. Each main concept is represented by a folder icon, with smaller sub-concepts organized as previously described. A user can then select the OK button to close the Select Concepts menu, at which point the selected concept will be added to the Concepts viewport in the New
Phrase menu. To add another concept to this phrase, select the Select Concept button again and repeat the above steps. If a user wants to create a new concept for this phrase, select the Add New Concept button in the Concepts group box and enter the name of the newly desired concept. Select the OK button to close the system keyboard. The concept just created will be added to the Concepts viewport in the New
Phrase menu. If a user wants to remove a concept from this phrase, select the concept in the Concepts group box and then select the Delete button. The concept will still exist, but will no longer be associated with this phrase. [00278] Software features may be available for a user to assign a frequency to a phrase. The frequency that is assigned to a phrase affects the way the phrase is predicted by rate enhancement. To assign a frequency to the new phrase, select the Frequency button on the New Phrase menu. An Enter Frequency Menu, such as shown in the exemplary interface menu of Fig. 59, may be displayed to a user by which a user may use the keypad to enter a new frequency number. As previously described, a frequency may be within some predetermined range, for example from 1-100, with 10 being a default setting and 100 being a maximum level for items that are expected to be used the most often. [00279] Features may also be available for a user to assign a symbol to a phrase to help a user recognize it more quickly (or for use in predictor buttons). To assign a symbol to the new phrase, select the Symbol button on the New Phrase menu (see, e.g., Fig. 58) and then select the Search text box in the Select a Symbol menu. The system keyboard will open, and a user can enter the name of the symbol he wants to find. If the software finds any symbols for the word you entered, they will be presented in the right viewport of the Select a Symbol menu. A user may then select the symbol that he wants to use. The Select a Symbol menu will close automatically and the new symbol will be displayed inside the Symbol button in the New Phrase menu. After closing the New Phrase menu, the new phrase is now available in the My Phrases menu under the All Phrases concept, as well as under any other concepts a user may have assigned or created. If a symbol was added for that phrase, it will be displayed beside the phrase. The new phrase can now be used for communication by the current user, no matter where the user is in the page set. It may also be presented by phrase predictor buttons on keyboard pages in the current user. [00280] To quickly access phrases created by a user, an interface such as the
Select a Phrase menu shown in Fig. 60 may be available. Such a menu may allow a user to specify how he wants to use the phrase by selecting one (or both) of the appropriate check boxes in the bottom left corner. If a user wants to speak the phrase as soon as it is selected, select the Speak Phrase check box. If a user wants to send the phrase to the Message Window as soon as it is selected, select the Insert Phrase check box. (If the Speak Phrase check box is not also selected, the phrase will not be spoken until a user selects the Message Window.) In the Concepts box, a user may select the concept that contains the phrase he wants to use. If the desired concept is not visible, a user can use the Prev and Next buttons to scroll through the list of concepts that contain phrases. Then, in the My Phrases box, a user may select the phrase he wants to use. Again, if the phrase is not visible, the user may need to use the Prev and Next buttons to scroll through the phrases in the selected category. After a phrase is selected, speech generation software will act according to the selected check boxes. The possibilities are: (i) If the Speak Phrase check box is selected, then the device will immediately speak the phrase; (ii) If the Insert Phrase check box is selected, the phrase will be sent to the Message Window; and (iii) If the Close on Selection check box is selected, the Select a Phrase menu will close as soon as a user chooses a phrase. [00281] A related software feature available within some embodiments of a speech generation device provides phrase prediction tools based on the above-defined phrases
(i.e., My Phrases) stored in memory. For example, phrase predictor buttons may be available to predict phrases from the My Phrases menu, based on the letters or words that a user is typing. Phrase predictor buttons can predict phrases from the entire My Phrases menu, or they can be assigned to one phrase concept. Such a phrase prediction feature may be useful for individuals who use alternate access methods, who are able to use a keyboard, and/or who consistently use any number of phrases. Phrase prediction software may offer advantages by enabling individuals to communicate common phrases more quickly because entire phrases can be accessed by selecting only the first few letters in the phrase. Phrases can be completed with novel information, as well. For example, the phrase "I would like it if you..." can be completed in a variety of ways depending on the situation, thus enabling individuals to clearly communicate their wants and needs quickly and easily. [00282] As noted above, the so-called "phrase prediction" protocol provides the user with the ability to select from a menu of certain phrases that have been populated into the menu based on what the user already has typed in composing a message. The
"phrase prediction" protocol predicts an entire phrase that the user may try to be typing (instead of just predicting the next word or next character).
[00283] As schematically shown in Fig. 20, the phrase prediction protocol presents one or more areas of the display, called buttons, which are filled with phrases as the user enters text into a document. As the user enters text, the phrase prediction protocol matches the partially entered text to an internal database of text phrases and presents those phrases that have starting characters matching the partially entered text. Each phrase also has associated with it a priority rating. In the case where more phrases match the partially entered text than there are buttons to fill, those phrases with the highest priority ratings are shown to the user. In accordance with the phrase prediction protocol, the user is provided with the capability to add phrases to the phrase database and to delete phrases from the phrase database. In accordance with the phrase prediction protocol, the phrases may optionally have pictures associated with them, and in such cases those pictures can be used to augment the display of the phrases on the buttons. In further accordance with the phrase prediction protocol, the phrases also may contain slots with their associated fillers.
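The matching and priority behavior of the phrase prediction protocol could be sketched as follows. This is a simplified illustration, not the protocol's actual implementation; the function name, the phrase database layout, and the priority values are assumptions for the example.

    def predict_phrases(partial, phrase_db, button_count=4):
        """Fill the predictor buttons from the phrase database.

        phrase_db    -- list of (phrase, priority) pairs
        button_count -- number of on-screen predictor buttons available
        Phrases whose starting characters match the partially entered text are
        returned; when more phrases match than there are buttons, those with the
        highest priority ratings are shown.
        """
        matches = [(phrase, priority) for phrase, priority in phrase_db
                   if phrase.lower().startswith(partial.lower())]
        matches.sort(key=lambda item: item[1], reverse=True)
        return [phrase for phrase, _ in matches[:button_count]]

    phrase_db = [("I would like it if you...", 80),
                 ("I want to wear my jeans today.", 60),
                 ("I'll see you around.", 40)]
    print(predict_phrases("I w", phrase_db, button_count=2))
    # ["I would like it if you...", "I want to wear my jeans today."]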
[00284] (5) Concept Slots: A still further rate enhancement feature that may be provided via software used with a subject speech generation device embodiment is the concept slot (also called "slot"). A slot is a variable placeholder that can be included in button text, button labels and phrases. Specific interface options may be provided for a user to create a phrase with slots, add slots to buttons, and work with slots in a Message Window. The capability embodied in the so-called "slots" protocol provides the user with the ability to fill in certain words in otherwise static text. The slots protocol provides the user with easy access to common words that can be used to complete a message in a variety of settings and situations. For example, in the phrase: "Can we have dinner now?", the slot is "dinner". Other commonly-used words like "breakfast" or "coffee" are slot fillers that can be used interchangeably to complete similar messages like: "Can we have breakfast now?". Slots are additional tools that minimize necessary navigation to save the user's time and energy during message composition.
[00285] In general, slots are designed to provide a variety of vocabulary options while reducing the number of selections that a user must make to create a whole message. Slots also help to conserve space on the touch screen. Slots provide a user with easy access to all of the words associated with a particular vocabulary concept (or category). When a user selects a slot, the user can choose to replace the word that is currently filling the slot with another word from the same concept. Rather than build a dynamic message one word at a time, a user can create sentences that contain slots in key locations. When the phrase is added to the Message Window, a user can then select the slots (which are visually indicated in some fashion, for example displayed as blue underlined words) and replace the current words with different options.
[00286] As schematically shown in Fig. 19, to implement the "slots" feature, the software provides mechanisms to insert a "slot" or placeholder in a text phrase. Each one of these placeholders is associated with a list of "fillers" that can potentially fill in this place in the text. The list of slots and their associated fillers is stored in a database internal to the software. When a user inserts the text containing the slot into a document, the slot position is filled with a default filler, and that filler is underlined in the text. The user then may select this underlined word, and upon the user's selection of the underlined word the user is presented with the entire list of fillers for that slot and may choose another filler, at which point that filler is inserted in that place in the text, which remains underlined so that the filler may be changed again in the future. Another variation on the slots protocol allows a user to elect to speak a text phrase containing one or more fillers, at which point the user is prompted to specify the filler values for each slot, and then the entire text phrase, with the filler values chosen by the user, is spoken. Slot fillers can optionally have pictures associated with them, in which case those pictures can be used to augment the display of the filler value. Slots can be added and deleted by the user, and filler values can also be added and deleted by the user. [00287] Referring to the example of Figs. 61 and 62, in this example the message in the Message Window is the label text for a button. The first slot is associated with the "breakfast" concept and the second slot is associated with the "fruit" concept. The slots allow a user to create dynamic messages with a reduced number of selections. By selecting the slots and changing the filler text, the example phrase "I want oatmeal and a banana for breakfast" can quickly and easily be changed to read as follows: "I want toast and a nectarine for breakfast". [00288] In accordance with the concept slot technology, a user may be able to add slots to his customized phrase database - My Phrases. Adding slots to phrases is one way to maximize the potential of both rate enhancement features. This technique provides a user with rapid access to complete statements, while still enabling the user to vary what he is going to say. For example, if a user tells an assistant what he wants to wear every morning, then he may want to create a phrase to say "I want to wear my jeans today." Then, simply turn the word "jeans" into a slot that accesses the "clothing" concept. Every morning the user can quickly access the same phrase, no matter what page or popup is active, and say "I want to wear my sweater today." or "I want to wear my boots today." A user could even add more slots, such as one that accesses the "colors" concept or the "textures" concept to add more description to such statements.
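A minimal data-structure sketch for the slots protocol described above is shown below, using the breakfast/fruit example from Figs. 61 and 62. The template notation with braces, the slot_fillers dictionary, and the function name are hypothetical conveniences for the sketch; the first filler in each list stands in for the default filler.

    slot_fillers = {"breakfast": ["oatmeal", "toast", "cereal"],
                    "fruit": ["banana", "nectarine", "apple"]}

    def render(template, choices=None):
        """Fill each {slot} with a chosen filler, or with the slot's default filler.

        template -- phrase text with slot names in braces, e.g. "I want {breakfast}..."
        choices  -- optional mapping of slot name to the filler the user selected
        """
        choices = choices or {}
        text = template
        for slot, fillers in slot_fillers.items():
            marker = "{" + slot + "}"
            if marker in text:
                text = text.replace(marker, choices.get(slot, fillers[0]))
        return text

    phrase = "I want {breakfast} and a {fruit} for breakfast"
    print(render(phrase))                                               # default fillers
    print(render(phrase, {"breakfast": "toast", "fruit": "nectarine"})) # user's selections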
[00289] Exemplary steps by which a user may be able to add a new phrase with a slot to the My Phrases menu are presented. (1) Select the Modify button in the title bar, an example of which is shown in Fig. 56. The button will turn red when it is selected. (2) Select the My Phrases button. The My Phrases menu will open. (3) Select the New button. The New Phrase menu will open, as shown in Fig. 58. (4) Select the Phrase text box. The system keyboard will open. (5) Use the system keyboard to enter the phrase that the user wants to add. (6) Highlight the word that the user wants to use as a slot. The user can make a selection on the touch screen at the beginning of the word and drag the selection until the whole word is highlighted. If an eye gaze controller 20 is connected to the speech generation device 30, a user can dwell on the button and move the pointer until all of the desired text is highlighted. (7) Select the Make Slot button in the bottom row of the system keyboard. The Select Concept for Slot menu will open and display any concepts that are associated with the selected word. An example of the Select Concept for Slot menu is shown in Fig. 63. (8) The Select Concept for Slot menu enables a user to choose a vocabulary concept for the slot he is creating. This concept will determine the type of vocabulary that is presented whenever a user selects the slot. If the word chosen as the slot is associated with any existing concepts, the concepts will be displayed in the buttons at the top of the menu. [00290] There may be different ways for a user to choose a concept for a new slot:
(a) If a user wants to choose one of the concepts in the Select Concept for Slot menu, simply select the desired concept. The Select Concept for Slot menu will close. In the Message Window of the system keyboard, the new slot will be shown as a blue, underlined word. (b) If a user wants to search through the existing concepts, select the Select Concept button (the Select Concepts menu will open) and then proceed to step
9. (c) If a user wants to add a new concept, select the Add New Concept button (the system keyboard will open) and then proceed to step 10. (9) To find a concept in the Select Concepts menu, you must scroll through the viewport at the top of the menu. In the Select Concepts menu, each main concept is represented by a folder icon. Concepts that contain smaller sub-concepts are indicated by an expansion box (with a
[+]). Select the expansion box to view the available sub-concepts. Use the scroll bar on the right side of the viewport to see all of the available options. Select the concept to use, then select the OK button to close the Select Concepts menu. In the Message Window of the system keyboard, the new slot will be shown as a blue, underlined word. Proceed to step 11. (10) To create a new concept for the slot, use the system keyboard to enter a name for the new concept and then select the OK button. In the Message Window of the system keyboard, the new slot will be shown as a blue, underlined word. Proceed to step 11. (11) If a user wants to add another slot to a phrase, the above steps 6 through 10 can be repeated. Otherwise, proceed to step 12. (12) Select the OK button to close the system keyboard. The new phrase will be displayed in the Phrase text box. (13) A user can now choose a concept for his new phrase. This concept is different from the one chosen for the slots within the phrase. Phrases in the My Phrases menu are grouped according to concept. This divides the whole collection of phrases into small groups, making it easier to find individual phrases. If a user wants to choose an existing concept for the new phrase, select the Select Concept button in the Concepts group box. This opens the Select Concepts menu, such as shown in Fig. 50, by which a user may select one or more concepts by either searching or scrolling through available concepts. The selected concept(s) will be added to the Concepts viewport in the New Phrase menu. (14) If a user wants to create a new concept for this phrase, select the Add New Concept button in the Concepts group box (the system keyboard will open) and the user can enter the name of the concept he wants to create. The created concept will be added to the Concepts viewport in the New Phrase menu. The created concept will also automatically be added (as a sub-concept) to the My Phrases concept. (15) If a user wants to remove a concept from this phrase, select the concept in the Concepts group box and then select the Delete button. The concept will still exist, but will no longer be associated with this phrase. (16) A frequency may also be assigned to a phrase, for example as previously described with reference to the Enter Frequency menu of Fig. 59. (17) A user may choose to assign a symbol to the phrase to help recognize it more quickly. To assign a symbol to the new phrase, select the Symbol button (the Select a Symbol menu will open) and then use the system keyboard to enter the name of the symbol you want to find. If the software finds any symbols for the entered word, they will be presented in the right viewport of the Select a Symbol menu. Once a symbol is selected, the Select a Symbol menu will close automatically and the new symbol will be displayed inside the Symbol button in the New Phrase menu. The new phrase is now available in the My Phrases menu. It can be found under the All Phrases concept, as well as under any other concepts that may have been assigned or created. If a symbol was added, it will be displayed beside the phrase. The new phrase can now be used for communication in the current user, no matter where the user is in the page set. It can also be presented by phrase predictor buttons on keyboard pages in the current user.
[00291] Exemplary steps by which a user may choose to work with Slots in the Message Window are now presented. When text that contains a slot is sent to the Message Window, a user can easily replace the word that currently appears in the slot. To do this, simply select the slot (e.g., the blue, underlined word). The Select Slot Filler menu will appear, displaying all of the other words that are associated with the slot's concept. An example of such a menu is shown in Fig. 65. A user can change the word in the slot by choosing any of the words in the Select Slot Filler menu. Use the Prev and
Next buttons to scroll through the available options. As soon as you select a word, the Select Slot Filler menu will close. In the Message Window, the word in the slot will be replaced with the word just chosen. [00292] Software features may be available for a user to add slots to button labels. For example, a user may select an "Insert Label" option, which will send the button label to the Message Window. The user can then select the slot to open the Select Slot Filler menu and choose a new word for the slot. A user can also choose an "Insert Label, Fill Slots" option, which will send the button label to the Message Window, and then automatically open the Select Slot Filler menu. [00293] In one particular example, the following steps may be followed to add a slot to a button's label: (1) Select the green Modify button in the title bar. The button will turn red when it is selected. (2) Select the button desired for modification. The Modify Button menu will open. (3) Select the Behaviors button. The Behavior Editor menu will open, an example of which is shown in Fig. 66. (4) Select the Behaviors drop-down menu. The menu will expand to display all the behavior categories. (5) Select the
Message Window Operations option (a user may need to use the scroll bar on the right side of the drop-down menu). The drop-down menu will close and display this category. (6) Select Insert Label, Fill Slots or Insert Label in the Behaviors viewport (you may need to use the scroll bar on the right side of the viewport). (7) Select the Add button. The behavior will appear in the Steps viewport. (8) Select the OK button to close the
Behavior Editor menu. The new behavior will be displayed by the Behaviors button in the Modify Button menu. (9) Select the Label text box. The system keyboard will open. (10) Enter the desired text for the button label. (11) Highlight the word desired for use as a slot. A user can make a selection on the touch screen at the beginning of the word and drag the selection until the whole word is highlighted, or may use an external mouse to perform a similar function. (12) Select the Make Slot button in the bottom row of the system keyboard. The Select Concept for Slot menu will open. (13) The Select Concept for Slot menu (e.g., Fig. 63) enables a user to choose a vocabulary concept for the slot that is created. This concept will determine the type of vocabulary that is presented whenever the user selects the slot. If the word chosen as the slot is associated with any existing concepts, the concepts will be displayed in the buttons at the top of the menu. (14) To find a concept in the Select Concepts menu, a user may scroll through the viewport at the top of the menu, or create a new concept for the slot by using the system keyboard to enter a name for the new concept and then select the
OK button. In the Message Window of the system keyboard, the new slot will be shown as a blue, underlined word. (15) Software features may be provided that automatically search for a symbol that corresponds to a user's label. If there is no symbol to match the label, then only the label will be added to the button. If the label matches one symbol, then the symbol will be automatically added to the button. If the label matches more than one symbol, the Select a Symbol menu (e.g., as shown in Fig. 67) will open to display all of the corresponding symbols. A user can then select the symbol he wants to use, and the selected symbol will be added to the button. If a user does not want to use one of these symbols, select the Cancel button to close the Select a Symbol menu without choosing a symbol.
[00294] Still further software features may be provided for a user to add slots to a button's text message. In one example, a user may proceed with the following steps: (1) Select the green Modify button in the title bar. The button will turn red when it is selected. (2) Select the button that you want to modify. The Modify Button menu will open. See, for example, the menu shown in Fig. 66. (3) Select the Behaviors button.
The Behavior Editor menu will open. (4) Select the Behaviors drop-down menu. The menu will expand to display all the behavior categories. (5) Select the Message Window Operations option (you will need to use the scroll bar on the right side of the drop-down menu). The drop-down menu will close and display this category. (6) Select Insert Text, Fill Slots or Insert Text in the Behaviors viewport. (7) Select the Add button.
The system keyboard will open. (8) Use the system keyboard to enter the desired text. (9) Highlight the word that the user wants to use as a slot. (10) Select the Make Slot button in the bottom row of the system keyboard. The Select Concept for Slot menu will open. An example of such a menu is shown in Fig. 63. (11) The Select Concept for Slot menu enables a user to choose a vocabulary concept for the slot he is creating. This concept will determine the type of vocabulary that is presented whenever a user selects the slot. If the word you chose as the slot is associated with any existing concepts, the concepts will be displayed in the buttons at the top of the menu. If a user wants to search through the existing concepts, the user can select the Select Concept button (the Select Concepts menu will open - see, e.g., Fig. 50) and then scroll through the available choices. If a user wants to add a new concept, he can select the Add New Concept button (the system keyboard will open) and a user can enter the new concept here. The new behavior and the text added by a user will be displayed in the Steps viewport of the Behavior Editor menu.
[00295] Direct Internet Link Access. The rate at which a selection device enables the user to use the internet and select links from a webpage is another key measure of the desirability of the system that combines the selection device and the speech generation device. In accordance with the present invention, the speech generation device 30 that is controllable by the eye gaze controller 20 has been provided with certain capabilities that enhance the rate at which the eye gaze controller 20 enables the user to select links from a webpage being displayed on the input screen 33 of the speech generation device 30. The speech generation device 30 that is controllable by the eye gaze controller 20 is configured to use the eye gaze controller 20 in conjunction with special accessible features provided by Mozilla Firefox to allow the users to directly select a link with the user's eyes without having to be accurate enough to hit the actual link on the webpage.
[00296] In accordance with this aspect of the present invention and as schematically shown in Fig. 16A, the speech generation device 30 desirably is provided with a Firefox® internet browser 30d available from Mozilla, a high speed modem 30e and a high speed internet connection 30f by which the speech generation device 30 can access websites using the browser 30d. The Firefox® internet browser 30d has a feature that inserts a numeric indicator beside every link on any webpage accessed by the browser. The speech generation device 30 is provided with a special "On Screen Keyboard" for eye-tracking (and other access methods). The special "On
Screen Keyboard" is configured to display to the user on the input screen 33, relatively larger buttons having numbers corresponding to each numeric indicator that has been assigned by the browser 3Od beside every link on any webpage accessed by the browser 3Od. As schematically shown in Fig. 21 , to access a desired link, the user operates the eye gaze controller 20 to select the number displayed on the "On Screen
Keyboard" associated with the desired link. Moreover, the eye gaze controller 20 is configured to enable the user to select that number by focusing the user's eyes on the larger button of the special "On Screen Keyboard" to activate that link and call for the associated pages to be retrieved to the browser 3Od from the website's server. [00297] Message Window division. Conventional communication software provided in a speech generation device typically works for creative conversation by providing a "message window" in which the user composes the message to be spoken by the speech generation device. When the user wants to send or "speak" the composed message, the user selects the message window.
[00298] However, with the conventional eye-tracking controller in the dwell selection mode, the user often selects the message for speaking by just looking at the message window too long during review of some aspect of the message. When the "dwell" selection option governs activation in a conventional eye-tracking controller, the user cannot afford to take up more than the dwell time when reading a given region of the input screen, else the user will select objects on the screen that the user only meant to read. This annoying aspect associated with the dwell selection option particularly arises when the user wants to review what is in the message window to ensure it is correct before selecting the message to have it spoken by the speech generation device. The user may want to perform such a review either at some point during composition of the message or directly before choosing to select the message for being spoken by the speech generation device.
[00299] Conventional eye-tracking controllers try to prevent this annoying aspect by providing a "pause" feature that can be activated to enable the user to temporarily disable the "dwell" selection option. However, having to repeatedly activate and then disable the "pause" feature can disrupt the flow of the user's conversation, writing, and thought process and is as annoying to the user as the original annoyance associated with the "dwell" selection option. Therefore, there is no practical way for the user to review the message without speaking it and without delaying the user's ability to speak by first selecting the "pause" function and then disabling the "pause" function.
[00300] In accordance with this aspect of the present invention, and as schematically presented in Fig. 22, the speech generation device 30 and the eye gaze controller 20 are configured to provide simultaneously on the input screen 33 of the speech generation device, a "composing window" and a "speak message window" separate from the "composing window." The speech generation device 30 and the eye gaze controller 20 are configured to give the user the option of setting the system to this "split" message window. When this "split" message window is activated by the user for composing and sending messages spoken by the speech generation device 30, the "composing window" part of the message window contains the message but cannot be activated by the user's gaze focused in the "composing window." The remaining part of the message window is then a "speak message window" button, and the message in the "composing window" part of the message window only will be spoken by the speech generation device 30 when the user's gaze focuses on the "speak message window" button for the pre-set dwell time. Moreover, the speech generation device 30 and the eye gaze controller 20 are configured to permit the user to define the relative size of the "speak message window" button.
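A rough sketch of the split message window behavior is shown below. The region names, the callback, and the dwell threshold value are hypothetical; the sketch only illustrates the point that gaze dwelling on the composing region is ignored while dwelling on the dedicated speak button triggers speech.

    def handle_dwell(region, dwell_time, dwell_threshold, composed_message, speak):
        """Decide what happens when the user's gaze dwells on part of the message window.

        region          -- "composing" or "speak_button"
        dwell_time      -- how long the gaze has rested on the region, in seconds
        dwell_threshold -- the pre-set dwell selection time
        speak           -- callback that sends text to the speech synthesizer
        """
        if region == "composing":
            return "ignored"                  # reviewing the message never triggers speech
        if region == "speak_button" and dwell_time >= dwell_threshold:
            speak(composed_message)           # only the dedicated button speaks the message
            return "spoken"
        return "waiting"

    print(handle_dwell("composing", 5.0, 1.2, "Hello there", print))     # prints "ignored"
    print(handle_dwell("speak_button", 1.5, 1.2, "Hello there", print))  # speaks, then "spoken"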
[00301] Dashboard Hotspot. Some conventional communication software contains many thousands of pages. Additionally, conventional communication software typically relies on some very critical features to which a user desirably should always have quick access, and these features include: Pause; Alarm; and the Selection option. However, space on the input screen of a conventional speech generation device typically is at a premium, and it thus is inefficient to display these critical features on every page being viewed on the input screen. Moreover, it also is critical to have these features available to be selected by the user at the times when the user is not viewing any of the communication pages.
[00302] In accordance with this aspect of the present invention, and as schematically presented in Fig. 23A, the speech generation device 30 and the eye gaze controller 20 are configured to provide a "Dashboard Hotspot". The speech generation device 30 and the eye gaze controller 20 are configured so that when the Dashboard
Hotspot is selected, a "popup window" appears on the input screen 33 of the speech generation device 30, and all of the critical features appear within the "popup window" for selection by the user. Thus, the user only needs to make two selections to activate any of the critical features. An example of such a dashboard popup window are shown in Fig. 81.
[00303] The speech generation device 30 and the eye gaze controller 20 are configured to permit the user to locate this Dashboard Hotspot in any user-defined section of the input screen 33, at the user's option. As schematically shown in Fig. 23B, the most popular location for the Dashboard Hotspot 38 is in one of the corners of the input screen 33. For example, a user may select the dashboard hotspot in the exemplary visual display of Fig. 80 by selecting the bottom left corner of the display. The speech generation device 30 and the eye gaze controller 20 also are configured to permit the user to choose the size of the area on the input screen 33 that is occupied by the Dashboard Hotspot. Moreover, as schematically presented in Fig. 23B, the speech generation device 30 and the eye gaze controller 20 also are configured to provide for the Dashboard Hotspot an extended area 38a beyond the display area of the input screen 33 in order to make it easier for the user to employ the eye gaze controller 20 to select the Dashboard Hotspot. The eye gaze controller 20 is configured to look for the user's gaze in the extended area 38a, which extends beyond the boundary of the input screen 33, in order to enable the user to focus the user's gaze in that extended area 38a and still be able to select the Dashboard Hotspot 38. [00304] In one embodiment, users may be provided with software features to define the dashboard settings. For example, an interface menu such as shown in Fig. 82 provides a Dashboard Hotspot Settings menu. A user may select the Show
Dashboard Hotspot check box to display the Dashboard Hotspot on the input screen 33 of a speech generation device 30. The Position drop-down menu may be used to choose the corner where a user wants the Dashboard Hotspot to be displayed (e.g., bottom left or bottom right). The Size drop-down menu may be used to select the size of the Dashboard Hotspot (e.g., Normal, Bigger, Biggest). If a user wants to change the popup that will open when the Dashboard Hotspot is selected, the Dashboard Popup button may be selected, and a user can navigate through a directory by searching, scrolling or other means to find the desired popup. To choose the onscreen keyboard that a user wants to open when the Dashboard Hotspot is selected on an onscreen keyboard, a user can select the Dashboard Onscreen Keyboard button to make a selection.
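Gaze hit-testing for a corner hotspot with an extended off-screen area 38a could look roughly like the following sketch. The screen dimensions, hotspot size, and extension distance are arbitrary assumed values; only the idea of accepting gaze points slightly beyond the screen edge near the chosen corner comes from the description above.

    def in_dashboard_hotspot(gaze_x, gaze_y, screen_w=1024, screen_h=768,
                             hotspot_size=80, extension=60, corner="bottom_left"):
        """Return True if the gaze point falls on the Dashboard Hotspot.

        The test region extends past the screen edge by `extension` pixels, so a
        gaze that drifts slightly off-screen near the corner still selects the hotspot.
        """
        if corner == "bottom_left":
            return (gaze_x <= hotspot_size and
                    gaze_y >= screen_h - hotspot_size and
                    gaze_x >= -extension and
                    gaze_y <= screen_h + extension)
        if corner == "bottom_right":
            return (gaze_x >= screen_w - hotspot_size and
                    gaze_y >= screen_h - hotspot_size and
                    gaze_x <= screen_w + extension and
                    gaze_y <= screen_h + extension)
        return False

    print(in_dashboard_hotspot(30, 750))    # on-screen corner -> True
    print(in_dashboard_hotspot(-20, 790))   # just off-screen, still in extended area -> True
    print(in_dashboard_hotspot(500, 400))   # middle of the screen -> False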
[00305] Audio-Eyetracking. Conventional eye gaze communication software is configured to illuminate the object on the input screen of the speech generation device when the user's eyes focus on the object. However, this way of indicating to the user where the user's eyes are focusing does not work for users who are blind and does not work very well for users who have very poor vision.
[00306] In accordance with an aspect of the present invention, the speech generation device 30 and the eye gaze controller 20 desirably are configured with audio-eyetracking software that generates an audio signal to the user as the user's eyes get close to focusing on an object on the input screen 33 of the speech generation device 30. Once the user's gaze gets close enough to an object, the audio-eyetracking software protocol is configured to cause the speech generation device 30 to speak the name of the object to tell the user what it is. In accordance with the present invention, the audio signal controlled by the audio-eyetracking software protocol can change as the user's eye gaze focuses closer to the object or farther from the object. The audio-eyetracking software protocol of the present invention desirably is configured to cause the speech generation device 30 to tell the user whether the user is focusing the user's gaze above, below, to the left or to the right of the object and how far away the user's gaze is focusing from the object in that direction.
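The distance and direction feedback just described could be sketched as follows. The function name, the pitch mapping, and the near_radius threshold are illustrative assumptions only; the sketch shows a changing tone as the gaze approaches the object and a spoken name plus direction/distance report otherwise.

    import math

    def audio_feedback(gaze, target, target_name, near_radius=40):
        """Produce audio feedback for how close the gaze is to an on-screen object.

        Returns (spoken_text, tone_pitch): the closer the gaze, the higher the pitch;
        once within near_radius pixels, the object's name is spoken instead.
        """
        dx = gaze[0] - target[0]
        dy = gaze[1] - target[1]
        distance = math.hypot(dx, dy)
        pitch = max(200, 1000 - distance)              # arbitrary pitch mapping for the tone
        if distance <= near_radius:
            return target_name, pitch                  # close enough: speak the object's name
        horizontal = "right" if dx > 0 else "left"
        vertical = "below" if dy > 0 else "above"      # screen y grows downward
        return f"{int(distance)} pixels {vertical} and to the {horizontal}", pitch

    print(audio_feedback((510, 300), (500, 310), "Speak button"))   # close: name is spoken
    print(audio_feedback((300, 100), (500, 310), "Speak button"))   # far: direction and distance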
[00307] In one embodiment of the audio-eyetracking feature of the present invention, there would be a setting within the software menus as to what message the user wants to hear as the "audio-feedback" for each given button being displayed on the input screen of the speech generation device 30. The software menus would cause the messages to be spoken by the speech generation device 30 when the user was in this set-up mode of the audio-eyetracking feature. Similarly, the menu options of the voice and volume of the audio feedback and whether the audio feedback is to be provided via private means (earphones) or public means (speakers) are spoken by the speech generation device 30 when the user was in the set-up mode. When the system determines that the user's gaze is on a particular object of the input screen 33 - then that audible feedback information is 'spoken' by the speech generation device 30 to the user. The user can then decide whether to continue to dwell on that object, or blink or hit a switch to select that object. [00308] Device Mounting. Daedalus Technologies, Inc. of Richmond, British Columbia, Canada manufactures DAESSY® hardware for mounting communication devices on wheelchairs, and such hardware can be used to mount on wheelchairs, embodiments of the speech generation device 30 and eye gaze controller 20 in accordance with the present invention. Fitting a standard DAESSY® wheelchair mount determines the location and size for the clamp that is attached to the frame of the wheelchair and the correct lengths and bends for the stainless steel tubes that support the mount. These dimensions are determined by the relationship between the position of the mounted device, the location on the wheelchair where the clamp will be attached to the frame and the position of the user's head when using the wheelchair. A communication device located for scanning or head pointer access must be higher and further away from the user. Prior eye gaze controlled communication systems had set a minimum focus range at 20 inches, which is often too far for safely mounting the system onto smaller and/or lighter wheelchairs.
[00309] In some embodiments of the speech generation device 30 and eye gaze controller 20 in accordance with the present invention, features are provided that enable wheelchair mounting of the devices within a close range for a user. In one example, close mounting is within a range of about 12-20 inches from a display screen to a user's face/eyes. In another example, a range of about 15-17 inches is employed, with a particular example of 16.5 inches desirable for some applications, such as for mounting on a pediatric wheelchair. Other wider ranges, such as but not limited to a distance range of between about 17-28 inches, or between about 20-24 inches may also be used. Such desirable ranges, including those affording close proximity to a user, are accomplished in part by providing a fixed-focus type eye tracking device, which allows a user to focus the camera portion of the tracking device by various calibration procedures.
[00310] A configuration with improved (i.e., closer) mounting locations can provide more security to the user and mounting options for a wide range of wheelchairs, including pediatric wheelchairs, as well as mounting options for desks and walls. The closer mounting ability of the subject system helps avoid potential problems when a system is mounted so far away from the user that it makes the balance of the wheelchair unstable. Particular proximity for a user also allows users, especially those with visual impairments, to view the screen more clearly and thus function more efficiently. An exemplary view of acceptable and unacceptable mounting orientations, including views of exemplary positioning for height, distance, angle settings, tilt, and accommodations for users with glasses relative to their position in a wheelchair, is shown in Fig. 68.
[00311] Eye Tracker Software Settings. As previously mentioned, a microprocessor associated with speech generation device 30, with eye gaze controller 20, and/or a separate processor/controller may provide software storage and execution functionality by which a user can establish certain preferences and capabilities of the eye tracker portion of the system that includes a speech generation device 30 that the user operates with an eye gaze controller 20 in accordance with an embodiment of the present invention. [00312] For example, as shown in the exemplary interface menu of Fig. 69, an eye tracking settings menu may be provided to a user. A user may select the "Select With" drop-down menu and then choose one of the available options previously described as a selection method. The "Blink" option sets the software to register a selection when the user gazes at an object and then blinks within a specific length of time. (There is an adjustable minimal time setting to avoid false activations from naturally-occurring blinks.) The "Dwell" option sets the software so that if the user's gaze is stopped on an object for a specified length of time, the highlighted object is selected. The "Blink/Dwell" option sets the software so that if the user's gaze is stopped on an object for a specified length of time, the highlighted object is selected. The object may also be selected if the user blinks on it before the time elapses. The "External Switch" option sets the software to select the highlighted object when an external switch is activated. [00313] If the "Blink" option is selected as a desired user selection method, another interface, such as the Blink Settings Menu of Fig. 70, may be provided to a user. Options may be provided for a user to perform a secondary action when a user maintains the blink for a specified length of time. This may be activated by selecting the
Secondary Action drop-down menu and choosing the desired action. Another feature allows a user to set the number of eyes required. The "Requires Both Eyes to Select" check box may be enabled by default in one example, with the user being able to clear the check box if he wants to blink only one eye to trigger a selection. [00314] Sliders may be adjusted to increase or decrease the time frames for each of the selection options. For example, a user may select and drag the Blink Time slider to adjust the time that a user must maintain the blink to make a selection (the primary action). A user may select and drag the Secondary Time slider to adjust the additional time that a user must maintain the blink to trigger the secondary action. A user may select and drag the Cancel Time slider to adjust the total time that the user must maintain the blink to cancel all actions (primary and/or secondary). The color-coded time frame bar at the bottom of the exemplary Blink Settings menu of Fig. 70 displays the cumulative time periods for each of the selection options. [00315] If a user selects the "Dwell" option as a desired user selection method, a user may be provided with another interface menu, such as the Dwell Settings menu of
Fig. 71. This interface may be provided with a slider feature. The dwell time slider can be used to adjust the length of time a user's eyes must pause on an object to make a selection. Select the slider thumb and drag it to the right to increase the dwell time, or drag it to the left to decrease the dwell time. It should be appreciated that if a user selects a "Blink/Dwell" option, the user may want to define a longer dwell setting here to allow more time to blink.
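A simplified dwell-timer sketch reflecting the Dwell and Blink/Dwell options is shown below. The class name, frame interval, and default dwell time are hypothetical; the sketch only illustrates restarting the timer when the gaze moves and selecting when the dwell time elapses (or earlier on a blink).

    class DwellSelector:
        """Selects an object once the gaze has rested on it for the dwell time."""
        def __init__(self, dwell_time=1.0):
            self.dwell_time = dwell_time   # seconds; corresponds to the Dwell Settings slider
            self.current = None
            self.elapsed = 0.0

        def update(self, gazed_object, dt, blinked=False):
            """Call once per tracker frame; dt is the frame interval in seconds."""
            if gazed_object != self.current:       # gaze moved to a new object: restart timer
                self.current = gazed_object
                self.elapsed = 0.0
            else:
                self.elapsed += dt
            if gazed_object is None:
                return None
            # Blink/Dwell: a blink selects before the dwell time elapses
            if blinked or self.elapsed >= self.dwell_time:
                self.elapsed = 0.0
                return gazed_object
            return None

    selector = DwellSelector(dwell_time=1.0)
    result, frames = None, 0
    while result is None:
        frames += 1
        result = selector.update("OK", 0.033)      # simulate ~33 ms tracker frames
    print(result, frames)                          # "OK" after roughly one second of frames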
[00316] If a user selects the "External Switch" option as a desired selection method, a user may be provided with an interface menu, such as the switch settings menu of Fig. 72. This interface menu may be used to configure use of a computer keyboard as an external switch. For example, a user can select the "acts as switch 1" drop-down menu and then select the key that the user wants to act as a switch input. [00317] With further reference to the Eye Tracking Settings menu of Fig. 69, a user may select the highlight target check box if the user wants to highlight the area of the touch screen that is being selected. Otherwise, the user can make sure the highlight target check box is not selected. A highlight rules button may be selected to customize the style and appearance of the screen highlight, as discussed later in more detail relative to Fig. 75. [00318] Many individuals who use eye tracking equipment depend on seeing the screen cursor. Referring still to the menu of Fig. 69, features may be available for a user to select the Show Cursor check box if the user wants the cursor to be visible on the input screen 33. Otherwise, make sure the Show Cursor check box is not selected. A user can select the Click check box if he wants his speech generation device 30 to make an audible sound when it selects an object. A volume slider may be used to increase or decrease the volume of the click. If a user does not want to use audio feedback for object selection, the user will make sure that the Click check box is not selected. A user can select the Number of Targets drop-down menu to choose the number of screen targets used for calibrating the eye gaze controller 20 (EyeMax) accessory (the higher the number of targets, the more accurate the calibration, and the longer the calibration procedure will take to complete).
[00319] A user may choose the "Target Settings" button in the eye tracking settings menu of Fig. 73 to select the visual target used in calibration (and customize its settings). A user can select the Target Image drop-down menu to choose the image the user wants to focus his gaze on during the calibration process. The chosen option is provided in a display box on the left. If a user wants to use multiple images during the calibration process, the user can select the Randomize Targets check box. If a user wants to display the focal point (the actual spot on the graphic that the user should be watching during calibration), the user can select the Show Focal Point check box. The focal point will appear in the display box as a light green region. A user may use the Target Speed slider to adjust the speed of the target. The user can select the slider thumb and drag it to the left to slow the target down, or drag it to the right to speed the target up. The display square above the slider will update to reflect the current setting. If a user wants the software to display animation on the touch screen in-between displaying the calibration targets, the user can select the Animate Between Targets check box. The animation will be shown in the display box underneath the check box. A user may use the Animation Speed slider to adjust the speed of the "in-between" animation. The user can select the slider thumb and drag it to the left to slow the animation down, or drag it to the right to speed the animation up. The display box above the slider will update to reflect the current setting.
[00320] Additional features are provided in the eye tracking settings menu of Fig. 69 for a user to select the Background Color drop-down menu to choose the scheme that is closest to that of the page(s) the user will most often use. Exemplary options are Navigator Yellow (appropriate for any page set dominated by light-colored buttons), Black, or Grey (appropriate for page sets with a darker color scheme).
[00321] Still further, selection boxes are available for a user to choose which eye(s) to perform calibration procedures on. By default, the software may use both of the user's eyes for calibration (this usually results in a more accurate calibration). If one of the user's eyes is compromised, the user can select the check box that corresponds to the compromised eye (Calibrate Left Eye or Calibrate Right Eye) to clear the selection. Clearing the selection means that the software will not use that eye for calibration.
[00322] Finally, a user may select the Eye Track Status button in the menu of Fig.
69 to see the current status of the EyeMax accessory. The Eye Track Status menu (for example, see Fig. 74) will display a blue box, with a dynamic picture of the user's eyes.
When properly calibrated and positioned, the eyes should both appear in the blue box, and a green cross-hair should appear on the eye(s) that the software is set to track. A user may toggle the image in this menu to display either the live camera image or only the eye glints (green crosshairs that signify the pupil of each eye). To toggle the image, the user can select the triangular button in the lower right corner of the viewing field.
When the image is displaying the live camera feed, the button symbol will be green crosshairs. When the image is displaying only the eye glints, the button symbol will change to an eye. A user may also select the Please Guide Me button to launch the Eye Tracking Wizard, which provides an explanation of the calibration process as well as a demonstration video.
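As a rough illustration of the status-view toggle described above, the display can be modeled as flipping between two rendering modes. The class and method names in this Python sketch are hypothetical placeholders, not the names used by the actual software.

class EyeTrackStatusView:
    # Switches the status display between the live camera feed and a glint-only view.
    def __init__(self):
        self.mode = "camera"   # "camera" or "glints"

    def toggle(self):
        # The triangular corner button flips the view; its icon shows the other mode.
        self.mode = "glints" if self.mode == "camera" else "camera"
        return self.mode

    def render(self, camera_frame, glint_overlay):
        if self.mode == "camera":
            return camera_frame    # full live image, cross-hairs drawn on tracked eyes
        return glint_overlay       # only the detected glint markers for each eye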
[00323] Referring now to Fig. 75, a user may be provided access to an interface menu that enables the user to modify highlight rules settings. A "Type" control may be selected by using the Type drop-down menu to select the type of highlight: Invert or Outline. For the "Outline Color" control, a user can use the Outline Color button to open the Color Selector menu and select (or create) the desired color for the outline; when the user has chosen the desired color, the user can select the OK button to close the Color Selector menu. For the "Outline Width" control, a user can select the Thicker button to increase the width of the outline, or select the Thinner button to decrease the width of the outline.
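For illustration, the highlight style choices just described (Invert versus Outline, the outline color, and the Thicker/Thinner width adjustment) map naturally onto a small settings object. This Python sketch uses assumed names and default values; it is not the disclosed implementation.

from dataclasses import dataclass

@dataclass
class HighlightStyle:
    highlight_type: str = "outline"       # "invert" or "outline"
    outline_color: tuple = (255, 255, 0)  # RGB value chosen via the Color Selector
    outline_width: int = 3                # adjusted by the Thicker/Thinner buttons

    def thicker(self, step=1):
        self.outline_width += step

    def thinner(self, step=1):
        self.outline_width = max(1, self.outline_width - step)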
[00324] The "Preview Button" control enables a user to select the Preview Button to see an example of how the current highlight rule settings will appear. The "Fill Type" control enables a user to use the Fill Type drop-down menu to select the type of fill the user wants to use for the currently highlighted object: None, Bottom Up, or Contract (which fills from the outside edges inward). A "Drain" control enables a user to select the Drain check box if the user wants an object that he hovered over (but did not select) to retain its fill for a brief time before draining (this will enable the Drain Delay and the Drain Time sliders). The "Drain Delay" control enables a user to use the Drain Delay slider to set the time interval that the software will wait before it starts to drain the fill from a screen object. The "Drain Time" control enables a user to use the Drain Time slider to set the time interval required to completely drain the fill from a screen object. The "OK/Cancel" feature may be selected to either accept the current settings or close the Highlight Rules menu without accepting any changes.

[00325] When the Fill Type drop-down menu is set to either Bottom Up or Contract, the software may highlight the object that is about to be selected and "fill" it with the highlight color. The examples shown in Figs. 76 and 77 show the difference between the two fill types. If the Drain check box is selected, screen objects will not lose their fill immediately after the user moves the cursor off of them. The Drain Delay slider indicates how long an object will maintain its fill before it starts to drain, and the Drain Time slider indicates how long it will take screen objects to completely lose their fill.
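The Bottom Up and Contract fills, together with the Drain Delay and Drain Time sliders, reduce to a fill level that rises while the gaze rests on an object and, after a hold-off, drains once the gaze leaves. The following sketch of that timing is an assumption-laden illustration, not the actual highlight code.

def fill_level(hover_time_s, dwell_time_s):
    # Fraction of the highlight fill while the gaze rests on an object (0.0 to 1.0).
    return min(1.0, hover_time_s / dwell_time_s)

def level_after_exit(level_at_exit, time_since_exit_s, drain_delay_s, drain_time_s):
    # Fill remaining after the gaze leaves the object, with the Drain option enabled.
    if time_since_exit_s <= drain_delay_s:
        return level_at_exit                       # hold the fill during the drain delay
    draining_for = time_since_exit_s - drain_delay_s
    return max(0.0, level_at_exit * (1.0 - draining_for / drain_time_s))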
[00326] Referring now to Fig. 78, a user may be provided with an exemplary interface for modifying advanced eye tracking settings. In particular, when using the eye tracking selection method, a user can indicate that the software should perform a specific action if the user's eyes have been "lost" (out of calibration) for a set amount of time. For example, the timeout duration can be selected, as well as the particular action (e.g., display the Dashboard popup, sound an audible alarm to request help from a caregiver, both, or other options). The particular alarm sound may also be selectable by a user.

[00327] Referring now to Fig. 79, a user may be provided with an exemplary interface for modifying additional desktop settings for a speech generation device when it is extended to a Windows desktop or other computer interface. For example, as shown in the Additional Eye Tracking Desktop Menu of Fig. 79, a user may be able to select the Show Dwell Time Animation check box if the user wants to display the dwell time animation when making a selection on the Windows desktop. The Dwell Box Size group box may be used to define an area in which the user's eyes must remain for the duration of the dwell time for the selection to register. In one embodiment, small, medium, or large check boxes may be available, or other ranges of areas. In one implementation, when a user stops his gaze on the Windows desktop (and if he has defined a dwell time), the software may be configured to show an animation (e.g., a circular sweep) that indicates the dwell time. If a user keeps his gaze within the dwell box during the entire animation, then the selection will take place.

[00328] While at least one presently preferred embodiment of the invention has been described using specific terms, such description is for illustrative purposes only, and it is to be understood that changes and variations may be made without departing from the spirit or scope of the present invention.
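Returning to the advanced settings of Fig. 78, the "eyes lost" behavior described in paragraph [00326] is essentially a watchdog timer. A hypothetical sketch follows; the polling interval, function names, and callback arrangement are assumptions made only to illustrate the idea.

import time

def monitor_eye_loss(eyes_detected, timeout_s, on_timeout, poll_interval_s=0.1):
    # eyes_detected: callable returning True while at least one eye is being tracked.
    # on_timeout:    callable, e.g. show the Dashboard popup and/or sound an alarm.
    last_seen = time.monotonic()
    while True:
        if eyes_detected():
            last_seen = time.monotonic()
        elif time.monotonic() - last_seen >= timeout_s:
            on_timeout()
            last_seen = time.monotonic()   # reset so the action does not re-fire continuously
        time.sleep(poll_interval_s)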

Claims

WHAT IS CLAIMED IS:
1. A portable eye gaze controller, comprising: a first housing; an eye tracker disposed within said first housing; a first battery disposed within said first housing and electrically connected to said eye tracker to provide power to operate the eye tracker; and a first universal serial bus (USB) socket carried by said first housing and electrically connected to said eye tracker.
2. A portable eye gaze controller as in claim 1, wherein said first housing comprises a front shell and an opposing rear shell, wherein the front shell and the rear shell are detachably connected to one another.
3. A portable eye gaze controller as in claim 1, wherein said rear shell of said first housing carries said first universal serial bus (USB) socket.
4. A portable eye gaze controller as in claim 1, further comprising a microprocessor connected to said portable eye gaze controller via said first USB socket such that said microprocessor is controlled using the portable eye gaze controller.
5. A portable eye gaze controller as in claim 1, further comprising a microprocessor dedicated to operation of said portable eye gaze controller, wherein said microprocessor is provided in said first housing.
6. A portable eye gaze controller as in claim 1, further comprising a main circuit board provided within said first housing, said main circuit board comprising integrated circuits mounted on said main circuit board and electrically connected to said first USB socket.
7. A portable eye gaze controller as in claim 1, wherein said eye tracker comprises at least one light source for illuminating one or more eyes of a user and at least one photosensor that detects light reflected from the user's eyes.
8. A portable eye gaze controller as in claim 7, wherein said eye tracker is further configured to compute gaze measurements from a user's identified pupil and glint and to map the gaze measurements to a two-dimensional coordinate space.
9. A portable eye gaze controller as in claim 1, wherein said first battery comprises one or more rechargeable lithium ion batteries.
10. A speech generation device, comprising: a portable eye gaze controller, comprising: a first housing; an eye tracker disposed within said first housing; and a first universal serial bus (USB) socket carried by said first housing and electrically connected to said eye tracker; a second housing; a processor disposed within said second housing; an input screen disposed within said second housing; a second universal serial bus (USB) socket carried by said second housing and connected to said processor; and wherein said portable eye gaze controller is coupled to said processor via a connection established between said first USB socket and said second USB socket.
11. A speech generation device as in claim 10, further comprising: a first battery disposed within said first housing and electrically connected to said eye tracker to provide power to operate the eye tracker; and a second battery disposed within said second housing and electrically connected to said processor to provide power to operate the processor.
12. A speech generation device as in claim 11, further comprising an AC/DC transformer removably connectable to both said portable eye gaze controller and said speech generation device such that the same AC/DC transformer can be used to charge one or both of the first battery disposed within the first housing and the second battery disposed within the second housing.
13. A speech generation device as in claim 11, wherein said portable eye gaze controller further comprises a charger port configured to receive a charger cable to charge said first battery and a power output port configured to connect said portable eye gaze controller to said speech generation device so that said eye gaze controller can also relay power to charge said second battery.
14. A speech generation device as in claim 11, wherein said portable eye gaze controller further comprises first and second indicator lights to respectively indicate when said first and second batteries are charging.
15. A speech generation device as in claim 10, wherein said input screen is configured to display visual objects such that inputs from said portable eye gaze controller effect the selection of objects displayed on said input screen.
16. A speech generation device as in claim 10, wherein said portable eye gaze controller is configured to detect one or more user actions to indicate selection of an object at which a user is looking on said input screen, said one or more user actions comprising blinking, dwelling, blinking and dwelling, blinking and closing a switch, and selecting a switch.
17. A speech generation device as in claim 10, wherein the eye tracker within said portable eye gaze controller is further configured to compute gaze measurements from pupil and glint identified within one or more eyes of a user and to map the gaze measurements from image space to a coordinate space associated with said input screen.
18. A speech generation device as in claim 10, further comprising a rigid mounting bracket to secure said first housing associated with said portable eye gaze controller to said second housing.
19. A speech generation device as in claim 10, wherein outputs from said portable eye gaze controller are provided to the processor disposed within said second housing and are processed to generate control signals for controlling the operation of the speech generation device by a user's eye movements.
20. A speech generation device as in claim 10, wherein said portable eye gaze controller further comprises a separate processor disposed within said first housing, and wherein said separate processor is configured to generate control signals for controlling the operation of the speech generation device by a user's eye movements.
21. An eye tracker, comprising: a housing; first and second light sources disposed within said housing such that light illuminates outwardly from said housing towards the eyes of a user; a video camera disposed within said housing and configured to detect light reflected from the eyes of a user; and a focusing lens disposed in front of said video camera and aligned with a central opening that is defined in said housing.
22. An eye tracker as in claim 21, wherein said housing comprises a lens housing for receiving said focusing lens, said lens housing being configured to mechanically lock said focusing lens into position.
23. An eye tracker as in claim 21, wherein said first and second light sources comprise LED arrays disposed respectively to the right and left of said video camera within said housing.
24. An eye tracker as in claim 23, wherein said LED arrays comprise a plurality of diodes disposed in respective arrays of staggered vertical columns.
26. An eye tracker as in claim 23, wherein said LED arrays are disposed with maximum separation from one another within the confines of the boundaries imposed by said housing.
27. An eye tracker as in claim 23, wherein said LED arrays are disposed tilted toward said central opening.
28. An eye tracker as in claim 21, wherein said eye tracker is coupled to a speech generation device comprising an input display.
29. An eye tracker as in claim 28, wherein said eye tracker and said speech generation device are mounted relative to one another such that a plane defined by the focusing lens of said eye tracker is disposed at an angle with respect to a plane defined by the input display of said speech generation device.
30. An eye tracker as in claim 21, wherein said eye tracker is configured for mounting less than about seventeen inches from a user's eyes.
31. An eye tracker as in claim 21, wherein said eye tracker is configured for mounting between about twelve and about twenty inches from a user's eyes.
32. An eye tracker as in claim 21, wherein said video camera comprises a fixed-focus camera.
33. An eye tracker as in claim 21, further comprising a processor storing computer-readable instructions that configure said eye tracker to deal effectively with images of the eyes of a user that are slightly out of focus.
34. An eye tracker as in claim 21, further comprising first and second indicator lights configured to illuminate when the eye tracker has acquired the location of the user's eye associated with that indicator light.
35. An eye tracker as in claim 34, wherein said first and second indicator lights are disposed beneath said central opening defined in said housing.
36. A speech generation device, comprising: an input screen for displaying selectable pages to a viewer; an eye tracker comprising at least one light source and at least one photosensor that detects light reflected from the viewer's eyes to determine where the viewer is looking relative to the input screen; a processor and related computer-readable medium for storing instructions executable by said processor; wherein the instructions stored on said computer-readable medium configure said speech generation device to generate output signals for establishing communication with a separate device or network; and speakers for providing audio output of signals received from the separate device or network.
37. A speech generation device as in claim 36, wherein the instructions stored on said computer-readable medium more particularly configure said speech generation device to remotely control one or more electronic devices within a user's environment.
38. A speech generation device as in claim 37, further comprising at least one infrared emitter provided within the speech generation device, and wherein the output signals generated by said speech generation device comprise infrared signals.
39. A speech generation device as in claim 38, wherein said one or more electronic devices comprises an infrared-controlled telephone.
40. A speech generation device as in claim 37, wherein the instructions stored on said computer-readable medium further configure said speech generation device to provide one or more display pages associated with one or more remote controls for the one or more electronic devices on said input screen such that said eye tracker determines where a user is looking relative to the one or more display pages to effect selection of buttons on said one or more display pages to control the one or more electronic devices within the user's environment.
41. A speech generation device as in claim 37, further comprising a remote control integrated circuit chip containing sets of commands recognized by known electronic appliances.
42. A speech generation device as in claim 41, wherein the instructions stored on said computer-readable medium are further configured to map information from the remote control integrated circuit chip corresponding to the one or more electronic devices to buttons for display on said input screen.
43. A speech generation device as in claim 41, wherein the instructions stored on said computer-readable medium are further configured to enable a user to use environmental control behaviors to program remote control commands to buttons that are displayed on the input screen of the speech generation device.
44. A speech generation device as in claim 41, wherein the instructions stored on said computer-readable medium are further configured to teach the speech generation device new remote control commands from an existing remote control for the one or more electronic devices and to further associate selected remote control commands to one or more buttons within a user's communication pages.
45. A speech generation device as in claim 36, wherein said separate device or network comprises a telephone, and wherein the instructions stored on said computer readable medium initiate the display on said input screen of a keypad with numbers for dialing said telephone that are selectable by a user's gaze detected by said eye tracker.
46. A speech generation device as in claim 45, wherein the instructions stored on said computer-readable medium are further configured to initiate the display on said input screen of a plurality of buttons for controlling the operation of said telephone, said plurality of buttons comprising one or more of the following commands: hang up, answer call, automatically dial, speed dial, program speed dial, receive call, dial 911, talk with party, listen to party.
47. A speech generation device as in claim 45, wherein said speakers provide audio output corresponding to a caller's voice while the user is operating the telephone.
48. A speech generation device as in claim 36, wherein the instructions stored on said computer-readable medium more particularly configure said speech generation device to connect to the internet such that a user can navigate web pages displayed on said input screen with the selection control of said eye tracker.
49. The speech generation device of claim 48, wherein said input screen displays a plurality of buttons containing numbers, each number corresponding to a desired internet link, said plurality of buttons being selectable by a user's gaze determined by said eye tracker.
50. A speech generation device as in claim 48, wherein the instructions stored on said computer-readable medium more particularly configure said speech generation device to download an e-book from over the established internet connection.
51. A speech generation device as in claim 48, wherein the instructions stored on said computer-readable medium more particularly configure said speech generation device to initiate audio output via said speakers corresponding to the downloaded eBook.
52. A speech generation device as in claim 48, wherein the instructions stored on said computer-readable medium more particularly configure said speech generation device to display via said input screen a selectable eBook interface menu.
53. A speech generation device as in claim 52, wherein said selectable e-book interface menu contains selectable interface elements that configure said speech generation device to perform one or more of the following functions: scroll through eBook pages, highlight text on an eBook page, speak text on an eBook page, symbolate pages of an eBook, and bookmark a place on the page of an eBook.
54. A method of changing the access method of an electronic device interfaced with an eye gaze controller from an eye tracking access method to at least one other access control protocol, comprising: displaying a selection method navigator on an input screen for a user, said selection method navigator displaying a plurality of access methods for interfacing with the electronic device; detecting a user's gaze with the eye gaze controller as the user's eyes are focused on an area of the input screen depicting the desired access method for subsequent operation of said electronic device; and switching the access method from an eye gaze tracking access method to the desired access method selected by the user's gaze.
55. The method of claim 54, wherein the selection method navigator is displayed as a popup window on the input screen upon a single button selection by a user.
56. The method of claim 54, wherein the plurality of access methods comprise one or more of scanning, mouse, trackball, joystick and head control.
57. The method of claim 54, further comprising an additional step of receiving additional user confirmation of the desired access method selected by the user's gaze before the access method is switched from the eye gaze tracking access method to the desired access method selected by the user's gaze.
58. An electronic device, comprising: an input screen for displaying interface pages to a user; an eye gaze controller comprising at least one light source and at least one photosensor that detects light reflected from the viewer's eyes to determine where the viewer is looking relative to the input screen; a processor and related computer-readable medium for storing instructions executable by said processor; wherein the instructions stored on said computer-readable medium configure said electronic device to implement the method of claim 54.
59. A method for determining user selection of an object on a display screen using eye tracking, comprising: electronically establishing a dwell time setting that defines the duration of time for which a user's eyes must gaze on an object on a display screen to trigger selection of the object by an eye gaze selection system; electronically tracking with an eye gaze controller the amount of time a user's gaze remains upon a given object on the display screen; retaining for a predetermined amount of time an accumulated dwell time even after a user's gaze leaves the given object; and electronically implementing selection of the given object if the accumulated dwell time exceeds the electronically established dwell time setting or restarting the accumulated dwell time if a user's gaze leaves the given object for longer than the predetermined amount of time during which the accumulated dwell time is retained.
60. The method of claim 59, wherein the accumulated dwell time for the given object is visually indicated on the input screen.
61. The method of claim 60, wherein the visual indication of the accumulated dwell time comprises a degree of contrast of the given object.
62. The method of claim 61, wherein the degree of contrast is increased or decreased by an increment of contrast as the accumulated dwell time either respectively increases or decreases.
63. The method of claim 62, wherein the magnitude of one increment of contrast is set according to the relative size of the area occupied by the given object on the input screen.
64. An electronic device, comprising: an input screen for displaying interface pages to a user; an eye gaze controller comprising at least one light source for illuminating the eyes of a user and at least one photosensor that detects light reflected from the user's eyes to determine where the user is looking relative to the input screen; a processor and related computer-readable medium for storing instructions executable by said processor; wherein the instructions stored on said computer-readable medium configure said electronic device to implement the method of claim 59.
65. A method for enhancing the rate of message composition within a message window of a speech generation device, comprising: electronically displaying an interface on an input screen of the speech generation device, the interface comprising a message window in which a message may be composed by a user and ultimately spoken, and input buttons by which a user selects one or more of words, characters and symbols; electronically tracking the message composed by a user in the message window; and electronically changing selected ones of the input buttons to include predictor buttons based on the tracked message being composed within the message window.
66. A method as in claim 65, wherein said predictor buttons comprise word predictor buttons chosen by comparison of the message in the message window to a database of words.
67. A method as in claim 66, wherein the database of words comprises one or more of an internal dictionary database, an online dictionary database, and a personal vocabulary database.
68. A method as in claim 66, wherein new words from the message composed in the message window that are not found in the database of words can be selectively added to the database of words.
69. A method as in claim 66, wherein the database of words includes a plurality of phrases that may be selected during composition of a message in the message window.
70. A method as in claim 66, wherein the database of words is organized by concept such that words or phrases corresponding to similar items or ideas are grouped together.
71. A method as in claim 65, wherein said predictor buttons comprise character prediction buttons that predict single characters based on the message that a user is typing.
72. A method as in claim 65, wherein said predictor buttons comprise context prediction buttons that anticipate word selection based on the grammatical structure of the sentence that a user is creating.
73. A method as in claim 65, wherein said predictor buttons comprise one or more expansions based on an identified abbreviation entered within the message window.
74. A method as in claim 65, wherein said predictor buttons comprise phrase prediction buttons that predict multi-word phrases based on the message that a user is typing.
75. A method as in claim 65, wherein the message composed by a user in the message window comprises a phrase including one or more slot placeholders within the phrase, and wherein said predictor buttons comprise one or more corresponding filler words for selection by a user to populate the one or more slot placeholders.
76. A method as in claim 75, wherein said one or more filler words presented as selectable options for a user comprise words within a similar vocabulary concept grouping.
77. A method as in claim 75, wherein the slot placeholders are visually displayed within the message window with a distinguishing identification relative to surrounding elements within the message window.
78. A method as in claim 75, wherein the message window comprises a composing window and a separate speak message window, the composing window being configured to display the message being composed by a user, and the speak message window being a separate display area within the message window such that the message within the composing window can only be selected to have it spoken by directing a user's eye gaze to the speak message window and not to the composing window.
79. A method for implementing display of a dashboard hotspot on a display screen using eye tracking, comprising: electronically establishing a predetermined area defined relative to an input screen for corresponding to a gaze location for implementing a dashboard hotspot; electronically tracking a user's gaze with an eye gaze controller to determine when a user's gaze is within the predetermined area; and upon determination that a user's gaze is within the predetermined area, displaying a popup window to a user, the popup window containing a plurality of predetermined critical interface features.
80. A method as in claim 79, wherein the eye gaze controller further tracks whether a user blinks or gazes within the predetermined area for a sufficient amount of dwell time before displaying the popup window to the user.
81. A method as in claim 79, wherein the predetermined area defined relative to the input screen comprises one of four corners of an input screen.
82. A method as in claim 79, wherein the predetermined area defined relative to the input screen comprises an extended area beyond the boundary of the input screen.
83. A method as in claim 79, further comprising electronically establishing one of a plurality of sizes for the popup window that is displayed upon determination that a user's gaze is within the predetermined area.
84. A method for assisting a user with control of an electronic device using eye tracking, comprising: electronically tracking a user's gaze with an eye gaze controller to determine when a user's eyes get close to focusing on a given object provided on a display screen associated with the electronic device; and generating an audio signal to the user once the user's gaze is determined by the eye gaze controller to be within a predetermined distance from the given object.
85. A method as in claim 84, wherein said audio signal comprises a spoken name corresponding to the given object.
86. A method as in claim 84, wherein said audio signal comprises feedback to tell the user where the user is focusing his gaze relative to the location of the given object.
87. A method as in claim 84, wherein said audio signal changes as the user's eye gaze focuses closer to the given object or farther from the given object.
88. A method as in claim 84, further comprising electronically establishing one or more parameters related to the audio signal, selected from the message the user wants to hear as the audio feedback, the voice of the message, the volume of the message, and the feedback means for playing the message.
89. An electronic device, comprising: an input screen for displaying interface pages to a user; an eye tracker comprising at least one light source for illuminating the eyes of a user and at least one photosensor that detects light reflected from the user's eyes to determine where the user is looking relative to the input screen; speakers for providing audio output of signals; a processor and related computer-readable medium for storing instructions executable by said processor; wherein the instructions stored on said computer-readable medium configure said electronic device to electronically track a user's gaze with said eye tracker to determine when a user's eyes get close to focusing on a given object provided within an interface page on said input screen, and to generate an audio signal via said speakers once the user's gaze is determined by the eye tracker to be within a predetermined distance from the given object.
PCT/US2010/036805 2009-06-01 2010-06-01 Separately portable device for implementing eye gaze control of a speech generation device WO2010141403A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US21753609P 2009-06-01 2009-06-01
US61/217,536 2009-06-01

Publications (1)

Publication Number Publication Date
WO2010141403A1 true WO2010141403A1 (en) 2010-12-09

Family

ID=43298069

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2010/036805 WO2010141403A1 (en) 2009-06-01 2010-06-01 Separately portable device for implementing eye gaze control of a speech generation device

Country Status (1)

Country Link
WO (1) WO2010141403A1 (en)

Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4081623A (en) * 1976-11-15 1978-03-28 Bio-Systems Research, Inc. Sight operated telephone and machine controller
US4973149A (en) * 1987-08-19 1990-11-27 Center For Innovative Technology Eye movement detector
US5471542A (en) * 1993-09-27 1995-11-28 Ragland; Richard R. Point-of-gaze tracker
US5583795A (en) * 1995-03-17 1996-12-10 The United States Of America As Represented By The Secretary Of The Army Apparatus for measuring eye gaze and fixation duration, and method therefor
US5717512A (en) * 1996-05-15 1998-02-10 Chmielewski, Jr.; Thomas A. Compact image steering and focusing device
US6091378A (en) * 1998-06-17 2000-07-18 Eye Control Technologies, Inc. Video processing methods and apparatus for gaze point tracking
US6152563A (en) * 1998-02-20 2000-11-28 Hutchinson; Thomas E. Eye gaze direction tracker
US6282553B1 (en) * 1998-11-04 2001-08-28 International Business Machines Corporation Gaze-based secure keypad entry system
US20020087555A1 (en) * 2000-12-28 2002-07-04 Casio Computer Co., Ltd. Electronic book data delivery apparatus, electronic book device and recording medium
US20040105264A1 (en) * 2002-07-12 2004-06-03 Yechezkal Spero Multiple Light-Source Illuminating System
US20040174496A1 (en) * 2003-03-06 2004-09-09 Qiang Ji Calibration-free gaze tracking under natural head movement
US20050175218A1 (en) * 2003-11-14 2005-08-11 Roel Vertegaal Method and apparatus for calibration-free eye tracking using multiple glints or surface reflections
US20050188330A1 (en) * 2004-02-20 2005-08-25 Griffin Jason T. Predictive text input system for a mobile communication device
US20050231520A1 (en) * 1995-03-27 2005-10-20 Forest Donald K User interface alignment method and apparatus
US20060095842A1 (en) * 2004-11-01 2006-05-04 Nokia Corporation Word completion dictionary
US20060109242A1 (en) * 2004-11-19 2006-05-25 Simpkins Daniel S User interface for impaired users
US20060209013A1 (en) * 2005-03-17 2006-09-21 Mr. Dirk Fengels Method of controlling a machine connected to a display by line of vision
US20060256083A1 (en) * 2005-11-05 2006-11-16 Outland Research Gaze-responsive interface to enhance on-screen user reading tasks
US20070076862A1 (en) * 2005-09-30 2007-04-05 Chatterjee Manjirnath A System and method for abbreviated text messaging
US7365738B2 (en) * 2003-12-02 2008-04-29 International Business Machines Corporation Guides and indicators for eye movement monitoring systems
US20080120141A1 (en) * 2006-11-22 2008-05-22 General Electric Company Methods and systems for creation of hanging protocols using eye tracking and voice command and control
US20090058690A1 (en) * 2007-08-31 2009-03-05 Sherryl Lee Lorraine Scott Mobile Wireless Communications Device Providing Enhanced Predictive Word Entry and Related Methods
US20090125849A1 (en) * 2005-10-28 2009-05-14 Tobii Technology Ab Eye Tracker with Visual Feedback

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9760123B2 (en) 2010-08-06 2017-09-12 Dynavox Systems Llc Speech generation device with a projected display and optical inputs
US8928585B2 (en) 2011-09-09 2015-01-06 Thales Avionics, Inc. Eye tracking control of vehicle entertainment systems
US9037354B2 (en) 2011-09-09 2015-05-19 Thales Avionics, Inc. Controlling vehicle entertainment systems responsive to sensed passenger gestures
WO2013036632A1 (en) * 2011-09-09 2013-03-14 Thales Avionics, Inc. Eye tracking control of vehicle entertainment systems
US9619020B2 (en) 2013-03-01 2017-04-11 Tobii Ab Delay warp gaze interaction
US11853477B2 (en) 2013-03-01 2023-12-26 Tobii Ab Zonal gaze driven interaction
US10545574B2 (en) 2013-03-01 2020-01-28 Tobii Ab Determining gaze target based on facial features
US9864498B2 (en) 2013-03-13 2018-01-09 Tobii Ab Automatic scrolling based on gaze detection
US10534526B2 (en) 2013-03-13 2020-01-14 Tobii Ab Automatic scrolling based on gaze detection
WO2015042358A1 (en) * 2013-09-20 2015-03-26 Amazon Technologies, Inc. Providing descriptive information associated with objects
US10558262B2 (en) 2013-11-18 2020-02-11 Tobii Ab Component determination and gaze provoked interaction
US10317995B2 (en) 2013-11-18 2019-06-11 Tobii Ab Component determination and gaze provoked interaction
US9614724B2 (en) 2014-04-21 2017-04-04 Microsoft Technology Licensing, Llc Session-based device configuration
US9384334B2 (en) 2014-05-12 2016-07-05 Microsoft Technology Licensing, Llc Content discovery in managed wireless distribution networks
US10111099B2 (en) 2014-05-12 2018-10-23 Microsoft Technology Licensing, Llc Distributing content in managed wireless distribution networks
US9384335B2 (en) 2014-05-12 2016-07-05 Microsoft Technology Licensing, Llc Content delivery prioritization in managed wireless distribution networks
US9430667B2 (en) 2014-05-12 2016-08-30 Microsoft Technology Licensing, Llc Managed wireless distribution network
US9874914B2 (en) 2014-05-19 2018-01-23 Microsoft Technology Licensing, Llc Power management contracts for accessory devices
US10691445B2 (en) 2014-06-03 2020-06-23 Microsoft Technology Licensing, Llc Isolating a portion of an online computing service for testing
US9477625B2 (en) 2014-06-13 2016-10-25 Microsoft Technology Licensing, Llc Reversible connector for accessory devices
US9367490B2 (en) 2014-06-13 2016-06-14 Microsoft Technology Licensing, Llc Reversible connector for accessory devices
US9952883B2 (en) 2014-08-05 2018-04-24 Tobii Ab Dynamic determination of hardware
US9782069B2 (en) 2014-11-06 2017-10-10 International Business Machines Corporation Correcting systematic calibration errors in eye tracking data
JP2017182217A (en) * 2016-03-28 2017-10-05 株式会社バンダイナムコエンターテインメント Simulation controller and simulation control program
EP3654148A1 (en) * 2017-10-16 2020-05-20 Tobii AB Improved computing device accessibility via eye tracking
CN114461071A (en) * 2017-10-16 2022-05-10 托比股份公司 Computing device accessibility through eye tracking
CN114461071B (en) * 2017-10-16 2024-04-05 托比丹拿沃斯公司 Improved computing device accessibility through eye tracking
CN110475145A (en) * 2019-08-23 2019-11-19 四川长虹网络科技有限责任公司 TV control system and method based on eye recognition
CN114237388A (en) * 2021-12-01 2022-03-25 辽宁科技大学 Brain-computer interface method based on multi-mode signal recognition
CN114237388B (en) * 2021-12-01 2023-08-08 辽宁科技大学 Brain-computer interface method based on multi-mode signal identification

Similar Documents

Publication Publication Date Title
WO2010141403A1 (en) Separately portable device for implementing eye gaze control of a speech generation device
US10095327B1 (en) System, method, and computer-readable medium for facilitating adaptive technologies
JP6865175B2 (en) Systems and methods for biomechanical visual signals to interact with real and virtual objects
KR100224618B1 (en) View changing method for multi-purpose educational device
US6115482A (en) Voice-output reading system with gesture-based navigation
US6253184B1 (en) Interactive voice controlled copier apparatus
KR102476621B1 (en) Multimodal interaction between users, automated assistants, and computing services
US20110201387A1 (en) Real-time typing assistance
US20110063231A1 (en) Method and Device for Data Input
KR20160065174A (en) Emoji for text predictions
CA2757850A1 (en) Portable e-reader and method of use
EP1050010A1 (en) Voice-output reading system with gesture-based navigation
Majaranta Text entry by eye gaze
WO1998059334A2 (en) Transparent overlay viewer interface
Tuisku et al. Text entry by gazing and smiling
KR101543189B1 (en) Letter board for Augmentalive and Atternative Commnunication
US20220258606A1 (en) Method and operating system for detecting a user input for a device of a vehicle
KR101400129B1 (en) Apparatus and Method for Display Characteristics, System for Educating Korean Language in Online Using It
Lopes Designing user interfaces for severely handicapped persons
Anson Assistive technology for people with disabilities
Meyer et al. Literature review of computer tools for the visually impaired: a focus on search engines
JP2002207413A (en) Action recognizing speech type language learning device
Jordan et al. Modality-independent interaction framework for cross-disability accessibility
Ding User-generated vocabularies on Assistive/Access Technology
KR20120139326A (en) System for educating korean language in online

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10783891

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10783891

Country of ref document: EP

Kind code of ref document: A1