
TI Speech (and Pascal, of course)


Rossman


If there is a master thread on the subject of speech synthesis, I apologize for starting another. I did search on speech and came up with several discussions, but not one on core TI speech itself. This may need to merge into another thread.

 

Just for fun, I'm looking into adding some speech elements into my chemin-de-fer card game in Pascal. To understand what is possible with speech, I have to understand what is there. What is there in Pascal appears to be limited to the vocabulary of the speech synthesizer. That is a functional (I can make it work) but pretty short list of words, which is a surprise given that the vocabulary of TE-II is unlimited.

 

This is how I understand the evolution of speech on the TI.

 

The design philosophy of the speech synthesizer (mid-70s) was that of a hardware device. It was intended to accurately replicate human speech (no small feat for the time), and it came with a small but reasonable vocabulary for the hobbyist (memory was expensive at the time).

 

Want more vocabulary words? We'll create plug-in speech modules (Speech Synthesizer Model PHP-1500 manual, page 4). Not a bad strategy: it is easy to imagine adding game-specific vocabulary ("Warp Factor 1, Captain") or entire spoken languages and dialects (e.g., Swiss-German). As a hardware device, its core function was accuracy in replicating speech patterns; memory-rich dedicated plug-in modules would have provided capacity for high-fidelity words (tonality, pitch, etc.). Imagine how conversationally rich Key to Spanish (PHL-7012) could have been with a hardware language module... of course, given the times, I'm sure there would have been intense competition to create the definitive Klingon module.

 

TI expected to create these speech modules, until they didn't. I suspect that somewhere along the way, some smart engineer(s) figured out how to build a phonetic interpreter that would render a (memory-consumptive) prescriptive dictionary useless. The Text-to-Speech software and the TE-II cart support open-ended phonetic pronunciation (presumably as logic on a chip) on top of the phonetic range of the speech synthesizer itself.

 

As long as that interpretive phonetic code was accessible to a program, the TI was very chatty indeed. That could be achieved by requiring something like TE-II in the slot, which made its ROMs accessible through TI Basic (TI-Trek). It could also be done by embedding the code in the cart itself: Parsec is ever so nice to compliment my skill with "Great shot, Pilot", yet none of those words are indigenous to the speech synthesizer. And I'm pretty sure there's never been a "Wheel of Fortune" module that would offer up the puzzle - - - - - - - - - T O B E G I N, because, despite being monosyllabic, none of the missing words (Press ... Fire) are core to the PHP1500.

 

I guess that means Parsec was the Smartest Cart in the Room.

 

Regardless, the economics of embedded phonetics, whether on a cart-based game like Parsec or a disk-based game paired with a cart like TE-II, doomed the hardware-based vocabulary modules. Total speculation, of course. But it may have been a Thing.

 

Whatever the cause, there certainly was a change in implementation philosophy. It is one thing to have a vocabulary (smarting-up the speech synthesizer, which TI walked away from) and another to rely on phonetic patterns (dumbing-down the speech synthesizer). The never-released speech modules would have provided the former; TE-II in the slot allows for the latter.

 

Here's why this matters in development. The best software for the TI will be developed in languages like XB (and, obviously, RXB). But stepping back in time, with XB in the slot, there's no TE-II. And - as I understand it - that means no phonetics. XB had the vocabulary of the speech synthesizer. It could easily spout forth phrases like "Fourteen Forty Four". How deep a conversation is that?
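
To make that concrete, here's a minimal XB sketch. CALL SAY with resident vocabulary words is documented in the XB manual; I'm assuming the number words below are all in the resident list, since that phrase was my example above:

100 REM speak words from the synthesizer's resident vocabulary
110 CALL SAY("HELLO")
120 REM multi-word phrases work when every word is resident
130 CALL SAY("FOURTEEN FORTY FOUR")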

 

TI XB does have SPGET, which allows for speech manipulation: you can take a standard word that the speech synthesizer already knows, add a suffix, and you have expanded the vocabulary, even with nuance (TI Extended Basic manual, appendix M). That suggests the Speech Synthesizer was open to manipulation all along, and that the magic of Text-to-Speech and TE-II was a sophisticated text interpreter feeding a rudimentary phonetic processor to create a rational aural result. The capability was in the speech synthesizer all along; it was the interface that was lacking. TE-II had a helper to make this easy; XB did not.
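
The basic round trip looks like this (a minimal sketch; SPGET and the leading-comma form of SAY are documented, though the suffix-splicing byte surgery from the appendix is more involved than I'll attempt from memory):

100 REM pull the stored speech pattern for a resident word
110 CALL SPGET("HELLO",A$)
120 REM the leading comma tells SAY the argument is a raw pattern, not a word
130 CALL SAY(,A$)

Once A$ is in hand, the appendix's suffix trick amounts to editing that string before handing it back to SAY.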

 

Which brings me around to what I want to do. I'm still goofing around with my little game in Pascal. At first glance, the Pascal implementation of speech seems immune to any phonetic flexibility: here's your very limited list of vocabulary words, enjoy! But maybe it is not so limited. get_speech returns a speech pattern, which I think is the equivalent of XB's SPGET. If I have a speech pattern, I just need to know all of the characteristics of the pattern itself, not the pattern overlays (e.g., known words). And maybe that is the basis for producing all kinds of sounds that come out like words.

 

Which has me wondering: the speech synthesizer probably has some base phonetic language. That phonetic language could be extrapolated through a simple interrogation (SPGET or get_speech) of known words. From there, it would be an exercise in manipulating the phonetics to create whatever words I want.
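
A first experiment along those lines, in XB terms (a sketch using only documented SPGET; the same interrogation should translate to get_speech in Pascal):

100 REM dump the raw bytes of a known word's speech pattern
110 CALL SPGET("NINE",A$)
120 FOR I=1 TO LEN(A$)
130 PRINT ASC(SEG$(A$,I,1));
140 NEXT I

Comparing dumps of related words ought to reveal where the shared phonetic material lives.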

 

I realize this exposes my complete ignorance of the speech synthesizer (and my ignorance about it is as complete as it gets). My first read of speech in Pascal is that I'm limited, yet I believe I have a lot of paths around that if I am willing to investigate.

 

Best regards,

 

 

R.


The Text-to-Speech program was an assembly program that added TE-II-like capability to the machine. Those root sounds you mentioned are allophones, which are the favored input language of the speech synthesizer. Parsec used a set of allophone strings fed directly to the speech synthesizer to generate its sound; those strings came from running the voice of one of the secretaries through a speech processor. If you go to the source code for Parsec, you can pull these strings out of the code, try feeding them to the get_speech routine, and see what happens. The Text-to-Speech manual also has a good explanation of how the allophone speech works, which may also help.
