
VFP and AI (ChatGPT) 2

Status
Not open for further replies.

Gerrit Broekhuis

Programmer
Aug 16, 2004
Hi,

It’s interesting to read in another thread today about VFP and AI (ChatGPT).

I’m still wondering if anyone has found a way to use ChatGPT (I’m using 3.5) in combination with VFP. In other words, have a VFP application ask ChatGPT a question and get the answer back in VFP.

Regards, Gerrit
 
I just asked ChatGPT:

Code:
PROCEDURE AskChatGPT
   LOCAL lcQuestion, lcApiKey, lcApiUrl, lcResponse

   lcApiKey = "YOUR_OPENAI_API_KEY"
   lcApiUrl = "https://api.openai.com/v1/engines/davinci-codex/completions"

   lcQuestion = GETSTRING("Ask ChatGPT:", "Question Input")

   * Replace "YOUR_OPENAI_API_KEY" with your actual OpenAI GPT-3 API key
   lcResponse = SYS(2335, "curl -X POST -H 'Content-Type: application/json' -H 'Authorization: Bearer " + lcApiKey + "' -d '{""prompt"":""" + lcQuestion + """}' " + lcApiUrl)

   ? lcResponse
ENDPROC

In short, it's just a web API, and there are many examples out there using HTTP request COM classes like "WinHttp.WinHttpRequest" or "MSXML2.ServerXMLHTTP". So the only difference from other APIs is that the server side is an AI processing your input string.
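For completeness, here is a sketch of the same kind of request using the mentioned "WinHttp.WinHttpRequest" class instead of MSXML2. Treat it as an illustration only: the endpoint and minimal payload reflect the current chat completions API, and the key is a placeholder.

```foxpro
* Sketch: OpenAI request via WinHttp.WinHttpRequest instead of MSXML2.
LOCAL loHTTP, lcApiKey, lcPayload
lcApiKey = "YOUR_OPENAI_API_KEY"  && placeholder - use your own key
loHTTP = CREATEOBJECT("WinHttp.WinHttpRequest.5.1")
loHTTP.Open("POST", "https://api.openai.com/v1/chat/completions", .F.)
loHTTP.SetRequestHeader("Content-Type", "application/json")
loHTTP.SetRequestHeader("Authorization", "Bearer " + lcApiKey)
lcPayload = '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'
loHTTP.Send(lcPayload)
? loHTTP.Status
? loHTTP.ResponseText
```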

SYS(2335) has nothing to do with that, but you get the idea from the curl command. ChatGPT seems to think SYS(2335) makes a shell command call; well, that's completely off, it sets Unattended Server Mode on or off. I also don't know a GETSTRING() function, but that's the least of the problems, isn't it?

The comments that come with the code make it clear how ChatGPT errs about SYS(2335), but they also contain helpful hints:
ChatGPT said:
Please note that interacting with the OpenAI API requires proper authentication, and you should keep your API key secure. Also, this example uses the SYS(2335, ...) function to execute a shell command (curl) from within VFP. Ensure that your VFP environment allows the execution of shell commands.

This is a basic example, and you may want to handle errors, parse the JSON response for a cleaner output, and implement a more user-friendly interface based on your application's requirements. Additionally, please refer to the OpenAI API documentation for more details on using their API:

Chriss
 
And here's an answer when explicitly asking to use MSXML2.ServerXMLHTTP

Code:
PROCEDURE AskChatGPT
    LOCAL loHTTP, lcURL, lcQuestion, lcResponse

    * Set the OpenAI API endpoint
    lcURL = "https://api.openai.com/v1/engines/davinci-codex/completions"

    * Set your OpenAI API key
    lcApiKey = "YOUR_OPENAI_API_KEY"

    * Get user input
    lcQuestion = GETQUESTION()

    * Create the ServerXMLHTTP object
    loHTTP = CREATEOBJECT("MSXML2.ServerXMLHTTP")

    * Set up the HTTP request
    loHTTP.Open("POST", lcURL, .F.)
    loHTTP.setRequestHeader("Content-Type", "application/json")
    loHTTP.setRequestHeader("Authorization", "Bearer " + lcApiKey)

    * Create the request payload
    lcPayload = '{ "prompt": "' + lcQuestion + '", "max_tokens": 150 }'

    * Send the request
    loHTTP.send(lcPayload)

    * Check for a successful response
    IF loHTTP.status = 200
        * Parse the JSON response
        lcResponse = loHTTP.responseText
        lcResponse = JSONPARSE(lcResponse)
        
        * Display the response to the user
        ? "ChatGPT Response:"
        ? lcResponse.choices[1].text
    ELSE
        * Display an error message if the request fails
        ? "Error:", loHTTP.statusText
    ENDIF
ENDPROC

FUNCTION GETQUESTION
    * Function to get user input for the question
    LOCAL lcQuestion
    ? "Enter your question:"
    INPUT TO lcQuestion
    RETURN lcQuestion
ENDFUNC

Again, not fully functional, but you get a better idea of how to make the request without needing curl.

Chriss
 
I looked a bit deeper into the referenced documentation (not at the beta docs; "beta" suggested there is a better, final documentation).

One thing is clear: the major endpoint URLs you need to send requests to differ from the URL given by ChatGPT. That one is a still-working endpoint, but the quickstart examples show that the current endpoint URLs depend on what kind of AI you want to use. What ChatGPT has become known for most are chat completions. The main idea of ChatGPT is to generate (predict) how a chat between a user and a system (itself) continues, given a chat containing, at minimum, something the user said first. The endpoint for that is now

https://api.openai.com/v1/chat/completions

In previous (legacy) forms and models the previous chat was not given as a list of messages but as a freeform text called "prompt", and the endpoint for that is

https://api.openai.com/v1/completions
In both cases the model used, like gpt-4, is not put into the URL as in ChatGPT's generated example endpoint, which named an AI model called davinci. Instead it's part of the "payload", or simpler said, the body of the request you send.
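To illustrate the difference in the TEXT...ENDTEXT form used elsewhere in this thread, here is a sketch of the two body shapes; the model names and prompts are just examples:

```foxpro
* Sketch: the model goes into the JSON body, not into the URL.
LOCAL lcLegacyPayload, lcChatPayload

* Legacy /v1/completions: a freeform "prompt"
TEXT TO lcLegacyPayload NOSHOW
{ "model": "gpt-3.5-turbo-instruct", "prompt": "Say hello", "max_tokens": 50 }
ENDTEXT

* Current /v1/chat/completions: a list of "messages"
TEXT TO lcChatPayload NOSHOW
{ "model": "gpt-3.5-turbo", "messages": [{ "role": "user", "content": "Say hello" }] }
ENDTEXT
```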

The starting point of the documentation gives a rough overview of the options, including the details of what JSON body to send as requests. Besides text generation, you can use image generation and vision, text-to-speech and speech-to-text, plus some other things. So-called embeddings, for example, which I don't yet understand, and fine-tuning, which, according to the first sentence given for it, exists for you to "customize a model for your application" (aka your use case).

There's a lot to dive into.

Chriss.
 
I think you'll have more questions and ideas about which models to use with which requests to make the best use of OpenAI/ChatGPT.

The first topic to understand should be API keys, limits and billing, though.

Perhaps it's obvious to you how to handle API keys. And I don't just mean the technical aspect of setting one into an Authorization request header. API keys you generate in your OpenAI account are a) rate limited, and b) requests made with them are billed to your account.

So make the API key configurable and instruct users (perhaps a manager at companies) to maintain their own (personal or company) account at OpenAI, so they have control over that aspect and their own cost of usage and, well, not your cost of usage, obviously.
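One simple way to make the key configurable is reading it from a small text file. This is a minimal sketch; the file name openai_key.txt is just an assumption for the illustration, and an INI file, a table or the registry would work just as well:

```foxpro
* Sketch: read the API key from a one-line text file instead of hardcoding it.
LOCAL lcApiKey
IF FILE("openai_key.txt")
    lcApiKey = ALLTRIM(FILETOSTR("openai_key.txt"))
ELSE
    lcApiKey = ""
    MESSAGEBOX("Please put your OpenAI API key into openai_key.txt")
ENDIF
```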

Indeed, you could also consider routing all requests of your application through your own account, which might lower the overall cost for all users, and if you earn well enough, that could pay off for everyone. But there are things that would then affect all users, for example the fine-tuning requests that can be made. So I'd recommend that each company, maybe even each user within a company, has their own OpenAI account.

I already encountered a "rate limit exceeded" error response on the first request, even though that's not what is reflected in the account overview and rate limit overview. The general rate limits stated are 3 requests per minute, which surely is very low, but also 200 per day. And while I surely made 3 requests in the first minute, I got the rate limit error on the first request already, and I got this response permanently yesterday, even after waiting a few minutes. Today that aspect works, but I guess you have to wait a bit before being able to use the API with a new account. If your account is older and you have used the website interface for longer, it might work to use an API key for the HTTP request API instantly.

Another detail to know about rate limits is that there is a second "currency" besides this number of requests per minute or day. Each request uses what they call "tokens". That seems to be a unit measuring the cost of generating an answer. I think I remember from videos about ChatGPT that a token is what makes up a grammatical unit in the model or generated text; not just a word count, but a unit of computation time, too. One parameter of the request you send in, which is even used in the outdated ChatGPT sample code, limits how many of these tokens may be used for the answer: "max_tokens": 150.

So this request parameter allows you to limit the chunk of computation time that can be used up. In the free tier, the limit given for one AI model is 40,000 TPM (tokens per minute), which means whatever is exceeded first, request count or tokens, limits further usage. Another consequence is that when a response uses up the maximum tokens for that request, it will be incomplete. That does not only mean an incomplete last sentence, but also answers the AI model would have continued to work on with more time/tokens before considering them complete.

Overall, there's a lot to understand first about that aspect of costs and safety of the API key. The OpenAI key generation is also very clear about only showing you a generated key once, so you have to copy and store it safely yourself; you can't just look into your account's overview of keys like other APIs allow. But you can generate a new key at any time, rendering all previous keys useless, I think. Then again, maybe an old key also stays valid, and you may use multiple keys with different fine-tuning settings for different use cases. So please, dig into the documentation on this topic first.

As long as you stay on the free tier, even a leaked key can't generate costs for you; someone using that key could only drive it up to the rate limit and render your account unusable until you get new requests per minute or day. It becomes more important once you change to a billable paid tier, but you'd better know in advance.

Chriss
 
Hi Chriss,

You already found a lot. Of course I would try to avoid paying for an open service, at least for occasional use and testing. I intend to use it for text generation, for example in an e-mail application. Perhaps I can find some time after Christmas to try some things myself.

I had the idea that using the API was only possible with a paid account (v4 instead of 3.5?). I hope I was wrong.

Regards, Gerrit
 
Well, you can see your specific limits after you log in at
And the overview of rate limits and usable models is at
I don't see GPT-4 under the free tier; if you click on Tier 1, it'll show up.
So indeed, you need to pay a minimum of $5 to get access to GPT-4; that's not an available model in the free tier.

Chriss
 
By the way, this is a sample request that fully works, technically at least, once you put in your API key.

Code:
Local loHTTP, lcURL, lcQuestion, lcResponse, lcApiKey, lcPayload

* Set the OpenAI API endpoint
lcURL = "https://api.openai.com/v1/chat/completions"

* Set your OpenAI API key
lcApiKey = "your API key here"

* Get user input
lcQuestion = Inputbox('Prompt','Your Prompt for ChatGPT','')

* Create the ServerXMLHTTP object
loHTTP = Createobject("MSXML2.ServerXMLHTTP")

* Set up the HTTP request
loHTTP.Open("POST", lcURL, .F.)
loHTTP.setRequestHeader("Content-Type", "application/json")
loHTTP.setRequestHeader("Authorization", "Bearer " + lcApiKey)

* Create the request payload
Text To lcPayload Textmerge Noshow
{
    "model": "gpt-3.5-turbo",
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "<<lcQuestion>>"
        }
    ]
}
EndText

* Send the request
loHTTP.Send(lcPayload)

* Check for a successful response
If loHTTP.Status = 200
   * Display the response
   ? "ChatGPT Response:"
   ? loHTTP.responseText
Else
   * Display an error message if the request fails
   ? "Error:", loHttp.Status, loHTTP.statusText
   ? loHTTP.responseText
EndIf
? loHTTP.getAllResponseHeaders()

Just that I'm back to getting Error 429 Too Many Requests.
And I didn't make 200 requests today or 3 in the last minute.

If anybody is more successful with his/her account, I'd be glad to hear advice on how to get there. I think you have to go to Tier 1 and pay at least $5 once to get a bit of headroom. I'm actually not keen on giving out a credit card number just to get this going, though.

One more questionable thing is that the response headers don't include what the documentation promises: you can see for yourself that the output of loHTTP.getAllResponseHeaders() does not include any of the headers x-ratelimit-limit-requests, x-ratelimit-limit-tokens, x-ratelimit-remaining-requests, x-ratelimit-remaining-tokens, x-ratelimit-reset-requests or x-ratelimit-reset-tokens.

So it's a typical case of the documentation being wrong or behind the actual implementation, or vice versa. For now, I'm done with this. You make a bad impression, OpenAI, at least on the level of developer support.
Also, I have tried to use the so-called playground, which makes clear it's not some specialty of VFP or MSXML2.ServerXMLHTTP that hinders this from working. The playground also uses your account and API key, without you needing to provide any code, doing everything interactively under your account. And it also responds with rate limit exceeded.

I know I should tell all this to OpenAI customer support and not here. I'm not that invested, but happy to go there after Christmas, more likely next year, though.

So I wish you a Merry Christmas and a Happy New Year, may this work for you or be helpful to get started.

Chriss
 
Hi Chriss,

I saw that a command to the API consists of many tokens. So a simple instruction can easily be 100 or more tokens.

I will read more after Christmas.

Merry Christmas and a Happy and Bugfree New Year!

Regards, Gerrit
 
Actually, that code makes no use of a max_tokens parameter. And the token limit, even in the free tier, specifically for the model gpt-3.5-turbo, is 40,000 TPM (tokens per minute). So that's not the problem, either.

Code:
* Create the request payload
Text To lcPayload Textmerge Noshow
{
    "model": "gpt-3.5-turbo",
    "max_tokens" : 200,
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "<<lcQuestion>>"
        }
    ]
}
EndText

I'm not the only one reporting this as a problem that isn't explained by the actual rate limit being exceeded. As said, they don't even give you the actual numbers that would be helpful to know, neither in the promised response headers nor in your account or the pages they point you to.

Well, next year perhaps.



Chriss
 
Nice, I've created a monster.


Best Regards,
Scott
MSc ISM, MIET, MASHRAE, CDCAP, CDCP, CDCS, CDCE, CTDC, CTIA, ATS, ATD

"I try to be nice, but sometimes my mouth doesn't cooperate."
 
No, OpenAI did. And it is just a text prediction/generation AI, not general AI.



Chriss
 
Hi Chris,

I just tried your last sample with the "max_tokens" : 200 setting. I added my own key for the 3.5 API.
I also get error 429 "too many requests" when running the code in the IDE. I haven't used ChatGPT for more than a week, so I cannot have run out of resources (I think).

Online I found this: "If you are encountering Error Code 1020 or Error Code 429, these are related to rate limiting and security measures put in place to protect the service. You can try reducing the number of requests or waiting for a while before trying again."

So far nothing is working with the free version of ChatGPT. Perhaps it's because my first registration was more than 3 months ago?

On Stack Overflow I found this: "TL;DR: You need to upgrade to a paid plan. Set up a paid account, add a credit or debit card, and generate a new API key if your old one was generated before the upgrade. It might take 10 minutes or so after you upgrade to a paid plan before the paid account becomes active and the error disappears."

I'm wondering now if you switched to the paid version to avoid error 429?

Regards, Gerrit


 
Update: I added $10 to my credit balance and now it's working. My account was simply "too old" for this.

OK, experiments will go on! Thanks Chriss for your help.

Regards, Gerrit
 
@Chris Miller
LMAO.


Best Regards,
Scott
MSc ISM, MIET, MASHRAE, CDCAP, CDCP, CDCS, CDCE, CTDC, CTIA, ATS, ATD

"I try to be nice, but sometimes my mouth doesn't cooperate."
 
Hi,

I'm testing the monster, trying to use TEXT...ENDTEXT with the variable lcQuestion.

I tried with this construction:

Code:
TEXT TO lcQuestion NOSHOW
Create a reply to the question below that we will look into this A.S.A.P.;

"I have a large number of computers that have not received any virus updates. I think this is critical. Please take care of this. "
ENDTEXT

When trying this with the sample code I get an error (400):

{
    "error": {
        "message": "We could not parse the JSON body of your request. (HINT: This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON. If you have trouble figuring out how to fix this, please contact us through our help center at help.openai.com.)",
        "type": "invalid_request_error",
        "param": null,
        "code": null
    }
}

What could be wrong with the JSON message? Or: how can I send a long message with a question using JSON to ChatGPT?

Regards, Gerrit

UPDATE: It seems that CR+LF is not accepted here. I think it's working now to a certain degree. At least I get a valid 200 response.
 
In the JSON lcQuestion is inserted here:

Code:
 "content": "<<lcQuestion>>"

The first problem, when you define lcQuestion with quotes, is that the first quote ends the JSON value and renders the rest of the JSON invalid. The second problem is multiline text, so consider writing everything on one line; pretty formatting is nothing the AI needs.

Also see
There really is no good way to represent multiline strings in a JSON structure, so simply don't do it: remove any CHR(13) and CHR(10) and put in spaces instead, and also escape the string. I'm not giving you a JSON generator; this sample code wasn't meant as a one-size-fits-all solution anyway. Learn a bit about JSON, please. And about the API and all its parameters; there's more to it than that minimal JSON.
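To make the kind of cleanup meant here concrete, a minimal sketch (not a full JSON generator; the function name EscapeForJson is made up for this illustration, and only the cases discussed above are handled):

```foxpro
* Sketch: make plain text safe for embedding in a JSON string value.
* Escapes backslashes and double quotes and replaces CR/LF with spaces;
* a real JSON generator would also handle tabs and other control characters.
FUNCTION EscapeForJson
    LPARAMETERS tcText
    LOCAL lcText
    lcText = STRTRAN(tcText, '\', '\\')               && backslashes first
    lcText = STRTRAN(lcText, '"', '\"')               && then double quotes
    lcText = STRTRAN(lcText, CHR(13) + CHR(10), " ")  && CR+LF pairs
    lcText = STRTRAN(lcText, CHR(13), " ")            && stray CR
    lcText = STRTRAN(lcText, CHR(10), " ")            && stray LF
    RETURN lcText
ENDFUNC
```

Usage would be lcQuestion = EscapeForJson(lcQuestion) before merging it into the payload.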

Chriss
 
Hi Chriss,

I just updated my previous message, and I agree with you: I not only removed CR+LF from the message, I also replaced " by '. That works.

Regards, Gerrit
 