Groups > comp.lang.beta > #104
| From | John Doe <jdoe@usenetlove.invalid> |
|---|---|
| Newsgroups | comp.lang.beta |
| Subject | Speech activated scripting without Natlink... |
| Date | 2014-01-12 01:19 +0000 |
| Organization | A noiseless patient Spider |
| Message-ID | <lasqi4$50m$2@dont-email.me> (permalink) |
Why are Vocola, Dragonfly, Unimacro, and Advanced Scripting integrated into speech recognition? So that NaturallySpeaking can distinguish between what is a command and what is dictation? Speech activated scripting is light years better than keystroke activated scripting, but what's the point of integrating the dictation with the commands?

To activate by keystrokes, you press the keystroke combination and the script is executed. To activate by voice, speech recognition must translate the speech to text anyway. So, just as with a keystroke combination, you could simply use the output of speech recognition, as long as you have some way of determining what is dictation and what is a command. So why not take it from the SR output?

To be continued...

I'm still rebuilding my computer desk area. Need to finish the mouse pad/platform. Since I was spending 12 hours a day sitting down, I bought one of those fitball seating discs. Seems to work reasonably well when underinflated.
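The idea above (routing recognized text to scripts by inspecting the SR output itself, with no Natlink-style integration) could be sketched roughly like this. All names here are hypothetical illustrations, not anything from an actual tool: a spoken trigger word marks a command, and everything else is passed through as dictation.

```python
# Hypothetical sketch: tell commands apart from dictation using only the
# text output of a speech recognizer (no engine integration assumed).

# A spoken trigger word marks the start of a command; anything else is
# treated as dictation to be typed out verbatim.
COMMAND_PREFIX = "computer"

# Map spoken command phrases to script actions (stand-ins for real scripts).
COMMANDS = {
    "open browser": lambda: "launching browser",
    "next window": lambda: "switching window",
}

def route(recognized_text):
    """Decide whether recognized text is a command or plain dictation."""
    words = recognized_text.lower().strip()
    if words.startswith(COMMAND_PREFIX + " "):
        phrase = words[len(COMMAND_PREFIX) + 1:]
        action = COMMANDS.get(phrase)
        if action:
            return ("command", action())
        return ("unknown command", phrase)
    return ("dictation", recognized_text)

print(route("computer open browser"))  # ('command', 'launching browser')
print(route("hello world"))            # ('dictation', 'hello world')
```

A prefix word is only one possible convention; a pause, a pitch cue, or a fixed grammar of command phrases would serve the same purpose, and a real version would have to cope with recognition errors in the trigger word itself.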
Speech activated scripting without Natlink... John Doe <jdoe@usenetlove.invalid> - 2014-01-12 01:19 +0000
Re: Speech activated scripting without Natlink... John Doe <jdoe@usenetlove.invalid> - 2014-01-12 20:34 +0000
Re: Speech activated scripting without Natlink... Mr. Anderson <matrixlove@yahoo.com> - 2014-01-12 17:31 -0600
Re: Speech activated scripting without Natlink... John Doe <jdoe@usenetlove.invalid> - 2014-01-13 02:10 +0000
Re: Speech activated scripting without Natlink... Mr. Anderson <matrixlove@yahoo.com> - 2014-01-13 18:02 -0600