
interactive voice response 3


hzBlueTooth

Programmer
Dec 9, 2003
Hello,
I am trying to use a voice modem instead of a Dialogic board. I have to do this in Java; basically I need to detect and send DTMF and voice through the modem. I would be really grateful for your response.
Haris
 
Take a look at xtapi and jtapi on sourceforge. They will give you working programs that do both of these in Java. I have to caution you that a modem will NOT work correctly much of the time. Call progress tone detection is poor at best with modems. Frequently it is non-existent.


pansophic
 
Thanks, pansophic, for the reply. The problem with JTAPI is that I would need a provider. When I run the code I get the exception "provider could not be instantiated". Basically, I don't have a provider for the US Robotics modem, and Java isn't picking up any default provider from the system by itself.
I then thought I should switch to AT commands instead, but so far I haven't figured out a way to play back a wave file through the modem to the caller. If you have any ideas on this, please let me know. Thanks again for your response.
 
XTAPI supports Serial, MSTAPI, Dialogic and OpenH323 service providers. There is an included answering machine application that I have gotten to work with the Serial, MSTAPI and Dialogic providers. I got the Serial and MSTAPI providers working with voice modems, and MSTAPI and Dialogic working with a Dialogic card. It is really straightforward.
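(For reference, a minimal sketch of what acquiring a provider through the XTAPI peer can look like. net.xtapi.XJtapiPeer is XTAPI's JtapiPeer implementation, but the string passed to getProvider() is an assumption here; check the XTAPI examples for the exact provider names your installation expects.)

import javax.telephony.JtapiPeer;
import javax.telephony.JtapiPeerFactory;
import javax.telephony.Provider;

public class ProviderTest {
    public static void main(String[] args) throws Exception {
        // Load the XTAPI peer explicitly instead of relying on a default peer.
        JtapiPeer peer = JtapiPeerFactory.getJtapiPeer("net.xtapi.XJtapiPeer");

        // The string selects the service provider; "Serial" is an assumption,
        // and the XTAPI examples show the exact strings expected for the
        // Serial, MSTAPI and Dialogic providers.
        Provider provider = peer.getProvider("Serial");
        System.out.println("Provider state: " + provider.getState());

        provider.shutdown();
    }
}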


Xtapi requires Jtapi for many inherited functions.


pansophic
 
Thanks again for the help. Just a small problem: when I ran the answering machine example I got the following error:

java.lang.NoClassDefFoundError: net/xtapi/audio/ccitt/decode

at net.xtapi.serviceProvider.SerialINIFile.createTerminals(SerialINIFile.java:251)
at net.xtapi.serviceProvider.Serial.XTinit(Serial.java:67)
at net.xtapi.XProvider.<init>(XProvider.java:104)
at net.xtapi.XJtapiPeer.getProvider(XJtapiPeer.java:108)
at answeringmachine.JAnsweringMachine.initJTapi(JAnsweringMachine.java:531)
at answeringmachine.JAnsweringMachine.main(JAnsweringMachine.java:458)

I have included the following in the classpath:
xtapi.jar, serialsp.jar, serialsrc.zip, jtapi.
I don't know what the problem could be; maybe it's the modem. I am using an external US Robotics modem. Thanks again.
 
You need to read the examples about using the serial service provider. In order for it to work, you have to create an xtapi.inf file that accurately depicts the modem parameters.

Also, you need to install the audio service provider to play and record sounds. It looks like either the xtapi.inf file is misconfigured, or that the audio service provider is not in your path.

If you are developing under Windows, you may find it easier to use the MSTAPI service provider. The program will not be portable then, but you avoid having to configure the xtapi.inf file for every modem that you want to use, and you avoid some of the issues where COM1 in the xtapi.inf is not actually associated in any way with the serial port on COM1. You also get the familiar device names with MSTAPI.

Be careful that you replace the xtapi.dll. It is different for MSTAPI than for the serial provider as I recall, but that may be historical and no longer accurate.


pansophic
 
Well, I tried the application with the MSTAPI driver. The provider was detected successfully, but unfortunately I got another exception:

TermConnCreatedEv:javax.telephony.ResourceUnavailableException

I have a question: does it need to be a full-duplex modem, or will half-duplex do as well? The audio device driver installed along with the modem says "unimodem half duplex".

I don't know if this is the problem...

 
The application needs to recognize certain callChangedEvents, but when I run the application and make a call from a phone to the modem, no event is generated.
If you have anything in mind to solve this, please let me know. Thanks.
 
I don't really have anything in mind. For anything other than my own answering machine I wouldn't use a voice modem, and I own a couple of Dialogic cards, so I would probably use them anyway. Modems are notorious for not generating events in a timely fashion, if at all.

The ResourceUnavailableException is probably because you have a Conexant chipset modem. Windows frequently ignores the fact that these modems are voice capable because they use "#" instead of "+" to reference the voice commands. Or you are correct in that MSTAPI is expecting a full-duplex voice modem, and yours is half-duplex. I've used half-duplex modems before, but I think that was with the serial service provider, not MSTAPI.

In that case, you may be forced to use the serial service provider. You will need to generate the xtapi.inf file with more of the options than you did before. There is some detailed documentation on the content of the xtapi.inf file on the sourceforge projects page. Also, Steven Frare is pretty good at responding to messages, but not necessarily in a timely fashion.


Make sure that you do a thorough search of the forums first, though. Occasionally Steve will not respond if the question has already been asked and answered.


pansophic
 
Hi pansophic and hzBlueTooth,

First, thanks for your comments, which have taught me a lot about JTAPI. In fact, I am a computer science student who is going to propose developing an IVR system with a voice modem using Java, so JSAPI and JTAPI are the core.

Let me get straight to the point: I would like to ask for comments on my project.

Facts that I know (correct me if I am wrong):
I) Java can control a voice modem to answer a phone call.
II) JTAPI can play an .au file (voice) into a phone call.
III) FreeTTS (JSAPI) can convert text into speech as an .au file.

So I would like to know:
Can I use a free text-to-speech API (JSAPI) to convert text content into voice and play it to the phone directly using JTAPI?

 
It is theoretically possible to use TTS to create audio that can be played through the voice modem.

XTAPI uses a file to play audio (I'm not certain that it will play an au file, but it will do a WAV). Converting the TTS to a file, writing it, and then reading that file is an EXTREMELY inefficient way to play sounds. If you are going to use files, convert words to files, and then play the files, but write the files one time only. If you are going to use TTS, then you need to rewrite the play routines in JTAPI to play a stream (unless there is already a stream reading/writing method), and stream directly from TTS to the modem. Skip the file creation/deletion in between.
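(A rough sketch of the "write the file once" idea with FreeTTS, hedged: the class names below are from FreeTTS 1.x, and the freetts.voices property and the kevin16 voice name are assumptions that depend on how FreeTTS is installed. Note that kevin16 synthesizes at 16 kHz while the plain kevin voice is 8 kHz, which may matter for a voice modem.)

import javax.sound.sampled.AudioFileFormat;

import com.sun.speech.freetts.Voice;
import com.sun.speech.freetts.VoiceManager;
import com.sun.speech.freetts.audio.SingleFileAudioPlayer;

public class PromptToWav {
    public static void main(String[] args) {
        // Tell FreeTTS where to find the bundled Kevin voices (normally done
        // in voices.txt or on the command line).
        System.setProperty("freetts.voices",
                "com.sun.speech.freetts.en.us.cmu_us_kal.KevinVoiceDirectory");

        Voice voice = VoiceManager.getInstance().getVoice("kevin16");
        voice.allocate();

        // Route the synthesized audio to greeting.wav instead of the sound card.
        SingleFileAudioPlayer player =
                new SingleFileAudioPlayer("greeting", AudioFileFormat.Type.WAVE);
        voice.setAudioPlayer(player);

        voice.speak("Thank you for calling. Please leave a message after the tone.");

        voice.deallocate();
        player.close();   // flushes and writes the WAV file
    }
}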

A possibly novel approach would be to "read" the text files and do a lookup on each word, then play the associated file. This isn't TTS in the traditional sense; it is more like the terrible voice synthesizers that were used for the WOPR in War Games.

Sounds like fun anyway. I'm sure we'd all be interested in the outcome of your project.


pansophic
 
Hi pansophic,
Thanks for your reply. My project aims to provide a way for blind users to check email (listen to their email) using their mobile phones, so I would like to build an IVRS:

1. Users phone the IVRS (a computer connected to a USR voice modem).

2. User authentication.

3. IVR directory browsing (the IVR directory plays pre-recorded audio files).

4. The system fetches email using JMS (from a pre-set email account).
5. Convert the email to speech (FreeTTS or another text-to-speech API).
6. Read the email to the user.

So I definitely need TTS; I hope TTS can convert the email on the fly and speak to the Handset object directly.

And now I am trying to run the JAnsweringMachine from XTAPI/JTAPI with my modem.

May I use Java SDK 1.3 or 1.4?
(I ask this question because I read in some of the XTAPI docs that it is based on JTAPI 1.2, and JTAPI 1.2 seems to need JDK 1.1.)
 
Speaking email is easier said than done (no pun intended). The problem comes from the number of misspellings, acronyms and abbreviations that we tend to use. For instance, a TTS application will end up spelling IVRS, USR, IVR, JMS, TTS, API, handset, JAnsweringMachine, XTAPI/JTAPI, SDK and JDK. This makes listening to email very tedious, not to mention the fact that you would have to listen to this message for about 3.5 minutes. Not exactly speed reading.
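(One hedged way to take the edge off the acronym problem is a small substitution pass over the message text before it is handed to the TTS engine; the table below is purely illustrative.)

import java.util.HashMap;
import java.util.Map;

public class SpeakablePreprocessor {

    private static final Map EXPANSIONS = new HashMap();
    static {
        // Illustrative entries only; a real table would be much larger.
        EXPANSIONS.put("IVR",  "interactive voice response");
        EXPANSIONS.put("TTS",  "text to speech");
        EXPANSIONS.put("FYI",  "for your information");
        EXPANSIONS.put("ASAP", "as soon as possible");
    }

    /** Replaces known acronyms with speakable phrases before synthesis. */
    public static String expand(String text) {
        StringBuffer out = new StringBuffer();
        String[] words = text.split("\\s+");
        for (int i = 0; i < words.length; i++) {
            String replacement = (String) EXPANSIONS.get(words[i]);
            out.append(replacement != null ? replacement : words[i]);
            if (i < words.length - 1) {
                out.append(' ');
            }
        }
        return out.toString();
    }
}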

When you are using XTAPI, are you using the Serial provider or the MSTAPI provider? I like the Serial provider, but it can be difficult to figure out what all of the settings should be for a given modem.

I have done my XTAPI development using Sun's 1.3.1? and 1.4.2 SDK. The JDK is another term for the same thing (Java Development Kit). I've also used IBM's JRE for some of my testing, just to see if it actually worked. I have done most of my development using Eclipse, so IBM's JRE is built-in.


pansophic
 
Hi Pansophic,

Thanks again for your quick reply and also for your valuable comment (the wording problem inside an email).

As for the provider, I can't really comment because, as you know, I am a newbie. Do you recommend that I use Serial instead?

Also, I already tried JAnsweringMachine.java with my JSDK 1.4.1 and XTAPI + JTAPI 1.2. I compiled the code successfully, but it fails at runtime with an error (cannot get a Provider).

I read the Java code, and I think it is trying to use the Serial provider, since useMSTAPI is set to false.

I copied all the files in serial_sp and modified the xtapi.inf; in fact, I un-commented the USR external modem entry (I am using a USR external voice modem).

I have already tried all the AT commands; most of them I can send directly to the modem (of course, some of them only work in voice mode, and some do not work at all).

Could you tell me what is going wrong?


 
Hi Pansophic,

I already fixed my own problem, and now I can run the JAnsweringMachine.

After I successfully executed JAnsweringMachine and saw that the "playing greeting" and "record message" steps worked, I changed the section of code in the TermConn case, "PlayURL Greeting".

Modifications:
1. Use FreeTTS to convert Greeting.txt into a WAV file.
2. Play the WAV file as the greeting.

Luckily it was easy, and I made it work.

Next, I would like to create the interactive call directory.

I am working on that now; if you have ANY hints you can give me, I would greatly appreciate them.

Thanks for your helpful comments and assistance.

(If I run into any other problems with my project, I will post to this forum and look for your help. ^^)
 
Sounds great!

Were you able to see if you can stream rather than write to a file? It is a much more elegant solution than doing all of that disk I/O.


pansophic
 
Yes, streaming is a better solution than using disk I/O; however, I am not familiar with AudioInputStream, so I am stuck on it.

As far as I know, FreeTTS can convert a text file into an AudioInputStream, but JAnsweringMachine plays the greeting by using the usePlayURL() method (which needs a URL object) of the MediaTerminalConnection object, so I cannot link these two objects together.
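(One hedged way to link the two, at the cost of a temporary file, is to write the AudioInputStream out with javax.sound.sampled and then hand the file's URL to usePlayURL(), just as JAnsweringMachine does for its greeting. A sketch:)

import java.io.File;
import java.net.URL;

import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.telephony.media.MediaTerminalConnection;

public class TtsToPlayUrl {

    /** Writes a TTS AudioInputStream to a temporary WAV file and plays it. */
    public static void playStream(MediaTerminalConnection mtc,
                                  AudioInputStream tts) throws Exception {
        File temp = File.createTempFile("tts", ".wav");
        temp.deleteOnExit();

        // Persist the synthesized audio as a WAV file...
        AudioSystem.write(tts, AudioFileFormat.Type.WAVE, temp);

        // ...and feed its URL through the existing play path.
        URL url = temp.toURI().toURL();
        mtc.usePlayURL(url);
        mtc.startPlaying();
    }
}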



And one more question: when I record a WAV file of my own and want JAnsweringMachine to play it, it does not work for me.
But I can play the files if I put them inside the MESSAGES folder like a recorded voice mail.

Do you know why? Is the format incompatible? What is the format?




 
That's what I was saying earlier. You need to find the method(s) used by usePlayURL. At some point, the file is opened and streamed to the modem. When you find that method, it is the one that you will want to call.

As far as JAnsweringMachine is concerned, the file that you want played needs to be in the './' directory and be named "Greeting.wav" in order to be played. Note that if you are using a *nix OS, the filename is case sensitive; Windows is, of course, case insensitive.

Also, it looks like you can use ".au" format files for audio as well as ".wav" files. Apparently the decoders are built into usePlayURL.

The format is definitely not incompatible. I think that it is just a path issue.


pansophic
 
Hi pansophic,

In fact, I fixed the problem last night (I am in Hong Kong); since I was so sleepy, I went to bed right after that.

The problem was the sampling rate: the WAV file that I generated was at a 16000 Hz sampling rate, but the API will only accept 8000 Hz. That's why it could not be played.

After I converted the audio sampling rate, it works.
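(For anyone hitting the same thing, the conversion can also be done in Java itself with javax.sound.sampled; a sketch below. Whether the JRE's built-in converters handle a 16 kHz to 8 kHz change varies; if the conversion is refused, a resampling service provider such as Tritonus, or an external tool, can do it instead.)

import java.io.File;

import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class Downsample {
    public static void main(String[] args) throws Exception {
        AudioInputStream in =
                AudioSystem.getAudioInputStream(new File("greeting16k.wav"));

        // 8000 Hz, 16-bit, mono, signed, little-endian PCM; adjust to whatever
        // format the provider actually expects.
        AudioFormat target = new AudioFormat(8000f, 16, 1, true, false);

        // Ask Java Sound for a converted stream and write it back out as WAV.
        AudioInputStream out = AudioSystem.getAudioInputStream(target, in);
        AudioSystem.write(out, AudioFileFormat.Type.WAVE, new File("greeting8k.wav"));
    }
}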

But I am facing another problem instead (a newbie is definitely a newbie; too many problems).

I started the EchoDigits Java program and tried to start DTMF detection.

I don't know whether something is wrong (or whether this is normal): the program only detects my first key press. After I press one key, the phone stays connected, but the system will not handle any further DTMF signals.

Any suggestions?
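(For reference, a minimal observer that simply accumulates every digit it is handed might look like the sketch below, assuming the provider raises a MediaTermConnDtmfEv per key press. If only the first key press ever shows up here, the later events are being dropped upstream of the application, by the modem or the service provider, rather than by the observer code.)

import javax.telephony.CallObserver;
import javax.telephony.events.CallEv;
import javax.telephony.media.events.MediaTermConnDtmfEv;

public class DtmfCollector implements CallObserver {

    private final StringBuffer digits = new StringBuffer();

    public void callChangedEvent(CallEv[] events) {
        for (int i = 0; i < events.length; i++) {
            if (events[i] instanceof MediaTermConnDtmfEv) {
                char digit = ((MediaTermConnDtmfEv) events[i]).getDtmfDigit();
                digits.append(digit);
                System.out.println("DTMF so far: " + digits);
            }
        }
    }
}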
 
Hi pansophic,

It's me again. I would like to ask about a conceptual issue, and I hope you can teach me.

While spending more time studying JAnsweringMachine, I found one method, recordTimeout, and one event, MediaTermConnStateEv.

I know that recordTimeout is going to terminate a call if the recording time is too long, and that MediaTermConnStateEv is the event fired while the terminal connection is connected (after it has been created but before it hangs up or becomes unavailable).

Inside the MediaTermConnStateEv handler, it first checks whether the call is in playing status; if it is, it does nothing. However, if the call is in recording status, the 10-second timeout control is activated. The timeout is controlled by the PLAYING and RECORDING statuses together with Thread.sleep().

And now here are my questions:
(1) How do I differentiate between TermConnCreatedEv, TermConnAvailableEv and MediaTermConnStateEv?

(2) Also, usePlayURL() seems not to play the URL until the startPlaying() method is invoked, and likewise useRecordURL() does not start recording until startRecording() is invoked. Am I correct?

(3) Finally, the MediaTermConnStateEv code seems to check whether it is playing:
if yes, do nothing;
if no, check whether it is recording;
if yes, do nothing;
if no, start recording, and then
check the timeout.
Am I correct?

If my understanding is correct, will it start recording before playing? This event only checks whether it is playing or recording, and before the startPlaying() method is called, the connection is also in a not-playing, not-recording state.

(4) If I would like to add my DTMF handling after playing a WAV (of course, my plan will not be just to play one WAV and detect one DTMF signal), where should I put my code? In MediaTermConnAvailableEv??
Thanks again.
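(On (2) and (4), a hedged sketch of one way to sequence things: treat TermConnCreatedEv as "the object exists", wait for the media terminal connection to become available before queuing and starting the prompt, and branch the menu when a DTMF event arrives. Event and method names follow the JTAPI 1.2 media package as I understand it, so double-check them against the JTAPI javadoc; the menu.wav resource is made up.)

import javax.telephony.CallObserver;
import javax.telephony.TerminalConnection;
import javax.telephony.events.CallEv;
import javax.telephony.events.TermConnCreatedEv;
import javax.telephony.media.MediaTerminalConnection;
import javax.telephony.media.events.MediaTermConnAvailableEv;
import javax.telephony.media.events.MediaTermConnDtmfEv;

public class MenuObserver implements CallObserver {

    public void callChangedEvent(CallEv[] events) {
        for (int i = 0; i < events.length; i++) {
            CallEv ev = events[i];

            if (ev instanceof TermConnCreatedEv) {
                // The terminal connection object now exists, but media is not
                // necessarily usable yet, so nothing is played here.

            } else if (ev instanceof MediaTermConnAvailableEv) {
                // Media is available: queue the prompt and start it explicitly.
                TerminalConnection tc =
                        ((MediaTermConnAvailableEv) ev).getTerminalConnection();
                if (tc instanceof MediaTerminalConnection) {
                    MediaTerminalConnection mtc = (MediaTerminalConnection) tc;
                    try {
                        mtc.usePlayURL(getClass().getResource("/menu.wav"));
                        mtc.startPlaying();   // nothing plays until this call
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }

            } else if (ev instanceof MediaTermConnDtmfEv) {
                // A key press arrived: branch the menu on the digit here.
                char digit = ((MediaTermConnDtmfEv) ev).getDtmfDigit();
                System.out.println("Caller pressed: " + digit);
            }
        }
    }
}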
 