AI for Web Devs: Your First API Request to OpenAI

Created on November 12, 2023 at 11:02 am

Make an API request to OpenAI with fetch and Qwik forms, protect API keys with Qwik actions, and hide secrets with environment variables.

Welcome back to this series where we are learning how to integrate AI products into web applications.

Last time, we got all the boilerplate work out of the way.

In this post, we’ll learn how to integrate OpenAI’s API responses into our Qwik app using fetch. We’ll want to make sure we’re not leaking API keys by executing these HTTP requests from a backend.

By the end of this post, we will have a rudimentary, but working AI application.

Generate OpenAI API Key

Before we start building anything, you’ll need to go to platform.openai.com/account/api-keys and generate an API key to use in your application.

Make sure to keep a copy of it somewhere because you will only be able to see it once.

With your API key, you’ll be able to make authenticated HTTP requests to OpenAI. So it’s a good idea to get familiar with the API itself. I’d encourage you to take a brief look through the OpenAI documentation and become familiar with some concepts. The models are particularly good to understand because they have varying capabilities.

If you would like to familiarize yourself with the API endpoints, expected payloads, and return values, check out the OpenAI API Reference. It also contains helpful examples.

You may notice there’s a JavaScript package called openai available on NPM. We won’t be using it, as it doesn’t quite support some things we’ll want to do that fetch can.
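For reference only, the equivalent call through that package would look roughly like the sketch below (this assumes version 4 of the SDK and is not something we’ll use in this series):

// For comparison only: the same request through the openai package (assumes v4 of the SDK)
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const completion = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Tell me a funny joke' }],
});

console.log(completion.choices[0].message.content);

We’ll stick with fetch for everything that follows.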

Make Your First HTTP Request

The application we’re going to build will make an AI-generated text completion based on the user input. For that, we’ll want to work with the chat endpoint (note that the completions endpoint is deprecated).

We need to make a POST request to https://api.openai.com/v1/chat/completions with the 'Content-Type' header set to 'application/json', the 'Authorization' header set to 'Bearer OPENAI_API_KEY' (you’ll need to replace OPENAI_API_KEY with your API key), and the body set to a JSON string containing the GPT model to use (we’ll use gpt-3.5-turbo) and an array of messages:

fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer OPENAI_API_KEY'
  },
  body: JSON.stringify({
    'model': 'gpt-3.5-turbo',
    'messages': [
      { 'role': 'user', 'content': 'Tell me a funny joke' }
    ]
  })
})

You can run this right from your browser console and see the request in the Network tab of your dev tools.
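The snippet above fires the request but doesn’t do anything with the response. If you’d like to see the response body in the console as well, you can parse it as JSON and log it, something like this quick sketch:

// Same request as before, but parsing and logging the JSON response
fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer OPENAI_API_KEY' // replace with your actual API key
  },
  body: JSON.stringify({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Tell me a funny joke' }]
  })
})
  .then((response) => response.json())
  .then((data) => console.log(data))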

The response should be a JSON object with a bunch of properties, but the one we’re most interested in is "choices". It will be an array of text completion objects. The first one should be an object with a "message" object that has a "content" property containing the chat completion.

{ "id": " chatcmpl-7q63Hd9pCPxY3H4pW67f1BPSmJs2u PERSON ", "object": "chat.completion", "created": 1692650675 DATE , "model": "gpt-3.5-turbo-0613", "choices": [ { "index": 0 CARDINAL , "message": { "role": "assistant", "content": "Why don’t scientists trust atoms?

Because they make up everything!" }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 12 CARDINAL , "completion_tokens": 13 CARDINAL , "total_tokens": 25 CARDINAL } }

Congrats! Now you can request a mediocre joke whenever you want.

Build the Form

The fetch request above is fine, but it’s not quite an application. What we want is something a user can interact with to generate an HTTP request like the one above.

For that, we’ll want to start with an HTML <form> containing a <textarea>. Below is the minimum markup we need:

<form>
  <label for="prompt">Prompt</label>
  <textarea id="prompt" name="prompt"></textarea>
  <button>Tell me</button>
</form>

We can copy and paste this form right inside our Qwik component’s JSX template. If you’ve worked with JSX in the past, you may be used to replacing the for attribute on the <label> with htmlFor, but Qwik’s compiler actually doesn’t require us to do that, so it’s fine as is.

Next, we’ll want to replace the default form submission behavior. By default, when an HTML form is submitted, the browser will create an HTTP request by loading the URL provided in the form’s action attribute. If none is provided, it will use the current URL. We want to avoid this page load and use JavaScript instead.

If you’ve done this before, you may be familiar with the preventDefault method on the Event interface. As the name suggests, it prevents the default behavior for the event.
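If you were working without a framework, that might look like this minimal sketch (shown only for comparison, not how we’ll do it in Qwik):

// Plain JavaScript version, for comparison only
const form = document.querySelector('form');

form.addEventListener('submit', (event) => {
  event.preventDefault(); // stop the browser from navigating away
  // ...build and send the fetch request here instead
});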

There’s a challenge here due to how Qwik deals with event handlers. Unlike other frameworks, Qwik does not download all the JavaScript logic for the application upon first page load. Instead, it has a very thin client that intercepts user interactions and downloads the JavaScript event handlers on demand.

This asynchronous nature makes Qwik applications much faster to load, but it introduces a challenge for dealing with event handlers asynchronously. It makes it impossible to prevent the default behavior the same way we would with synchronous event handlers that are downloaded and parsed before the user interaction.

Fortunately, Qwik provides a way to prevent the default behavior by adding preventdefault:{eventName} to the HTML tag. A very basic form example may look something like this:

import { component$ } from '@builder.io/qwik';

export default component$(() => {
  return (
    <form
      preventdefault:submit
      onSubmit$={(event) => {
        console.log(event)
      }}
    >
      {/* form contents */}
    </form>
  )
})

Did you notice that little $ at the end of the onSubmit$ handler, there? Keep an eye out for those, because they are usually a hint to the developer that Qwik’s compiler is going to do something funny and transform the code. In this case, it’s due to that lazy-loading event handling system I mentioned above. If you plan on working with Qwik more, it’s worth reading more about that here.

Incorporate the Fetch Request

Now we have the tools in place to replace the default form submission with the fetch request we created above.

What we want to do next is pull the data from the <textarea> into the body of the fetch request. We can do so with FormData, which expects a form element as an argument and provides an API to access form control values through each control’s name attribute.
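As a quick, framework-agnostic sketch of how FormData works:

// Grab a reference to the form and read the textarea's value by its name
const formElement = document.querySelector('form');
const formData = new FormData(formElement);
const prompt = formData.get('prompt'); // the current value of <textarea name="prompt">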

We can access the form element from the event’s target property, use it to create a new FormData object, and use that to get the <textarea> value by referencing its name, "prompt". Plug that into the body of the fetch request we wrote above, and you might get something that looks like this:

export default component$(() => {
  return (
    <form
      preventdefault:submit
      onSubmit$={(event) => {
        const form = event.target
        const formData = new FormData(form)
        const prompt = formData.get('prompt')

        const body = {
          'model': 'gpt-3.5-turbo',
          'messages': [{ 'role': 'user', 'content': prompt }]
        }

        fetch('https://api.openai.com/v1/chat/completions', {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
            'Authorization': 'Bearer OPENAI_API_KEY'
          },
          body: JSON.stringify(body)
        })
      }}
    >
      {/* form contents */}
    </form>
  )
})

In theory, you should now have a form on your page that, when submitted, sends the value from the textarea to the OpenAI API.

Protect Your API Keys

Although our HTTP request is working, there’s a glaring issue. Because it’s being constructed on the client side, anyone can open the browser dev tools and inspect the properties of the request. This includes the Authorization header containing our API keys.

(In the screenshot from my dev tools, I’ve blocked out my API token with a red bar.)

This would allow someone to steal our API tokens and make requests on our behalf, which could lead to abuse or higher charges on our account.

Not good!!!

The best way to prevent this is to move the API call to a backend server that we control, which acts as a proxy. The frontend makes an unauthenticated request to the backend, and the backend makes the authenticated request to OpenAI and returns the response to the frontend. Because users can’t inspect backend processes, they won’t be able to see the Authorization header.

So how do we move the fetch request to the backend?

I’m so glad you asked!

We’ve been mostly focusing on building the frontend with Qwik, the framework, but we also have access to Qwik City, the full-stack meta-framework with tooling for file-based routing, route middleware, HTTP endpoints, and more.

Of the various options Qwik City offers for running backend logic, my favorite is routeAction$. It allows us to create a backend function that can be triggered from the client over HTTP (essentially an RPC endpoint).

The logic would follow:

1. Use routeAction$() to create an action.
2. Provide the backend logic as the parameter.
3. Programmatically execute the action’s submit() method.

A simplified example could be:

import { component$ } from '@builder.io/qwik';
import { routeAction$ } from '@builder.io/qwik-city';

export const useAction = routeAction$((params) => {
  console.log('action on the server', params)
  return { o: 'k' }
})

export default component$(() => {
  const action = useAction()

  return (
    <>
      <form
        preventdefault:submit
        onSubmit$={(event) => {
          action.submit('data')
        }}
      >
        {/* form contents */}
      </form>

      {JSON.stringify(action)}
    </>
  )
})

I included a JSON.stringify(action) at the end of the template because I think you should see what the returned ActionStore looks like. It contains extra information like whether the action is running, what the submission values were, what the response status is, what the returned value is, and more.
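For example, a few of those properties can be read straight off the store (a quick sketch using the names from Qwik City’s ActionStore):

// Inside the component, after: const action = useAction()
console.log(action.isRunning) // true while a submission is in flight
console.log(action.status)    // HTTP status code of the last submission
console.log(action.formData)  // the submitted form data
console.log(action.value)     // whatever the action handler returned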

This is all very useful data that we get out of the box just by using an action, and it allows us to create more robust applications with less work.

Enhance the Experience

Qwik City actions are cool, but they get even better when combined with Qwik’s <Form> component:

Under the hood, the component uses a native HTML form element, so it will work without JavaScript. When JS is enabled, the component will intercept the form submission and trigger the action in SPA mode, allowing to have a full SPA experience.

By replacing the HTML <form> element with Qwik’s <Form> component, we no longer have to set up preventdefault:submit, onSubmit$, or call action.submit(). We can just pass the action to the Form’s action prop, and it’ll take care of the work for us. Additionally, it will work if JavaScript is not available for some reason (we could have done this with the HTML version as well, but it would have been more work).

import { component$ } from '@builder.io/qwik';
import { routeAction$, Form } from '@builder.io/qwik-city';

export const useAction = routeAction$(() => {
  console.log('action on the server')
  return { o: 'k' }
});

export default component$(() => {
  const action = useAction()

  return (
    <Form action={action}>
      {/* form contents */}
    </Form>
  )
})

So that’s an improvement for the developer experience. Let’s also improve the user experience.

Within the ActionStore, we have access to the isRunning data which keeps track of whether the request is pending or not. It’s handy information we can use to let the user know when the request is in flight.

We can do so by modifying the text of the submit button to say “Tell me” when it’s idle, then “One sec…” while it’s loading. I also like to assign the aria-disabled attribute to match the isRunning state. This will hint to assistive technology that it’s not ready to be clicked (though technically still can be). It can also be targeted with CSS to provide visual styles suggesting it’s not quite ready to be clicked again.

<button type="submit" aria-disabled={action.isRunning}>
  {action.isRunning ? 'One sec…' : 'Tell me'}
</button>

Show the Results

Ok, we’ve done way too much work without actually seeing the results on the page. It’s time to change that. Let’s bring the fetch request we prototyped earlier in the browser into our application.

We can copy/paste the fetch code right into the body of our action handler, but to access the user’s input data, we’ll need access to the form data that is submitted. Fortunately, any data passed to the action.submit() method will be available to the action handler as the first parameter. It will be a serialized object where the keys correspond to the form control names.

Note that I’ll be using the await keyword in the body of the handler, which means I also have to tag the handler as an async function.

import { component$ } from '@builder.io/qwik';
import { routeAction$, Form } from '@builder.io/qwik-city';

export const useAction = routeAction$(async (formData) => {
  const prompt = formData.prompt // From <textarea name="prompt">

  const body = {
    'model': 'gpt-3.5-turbo',
    'messages': [{ 'role': 'user', 'content': prompt }]
  }

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer OPENAI_API_KEY'
    },
    body: JSON.stringify(body)
  })

  const data = await response.json()

  return data.choices[0].message.content
})

At the end of the action handler, we also want to return some data for the frontend. The OpenAI response comes back as JSON, but I think we might as well just return the text. If you remember from the response object we saw above, that data is located at responseBody.choices[0].message.content .

If we set things up correctly, we should be able to access the action handler’s response in the ActionStore’s value property. This means we can conditionally render it somewhere in the template like so:

{action.value && (
  <p>{action.value}</p>
)}

Use Environment Variables

Alright, we’ve moved the OpenAI request to the backend, protected our API keys from prying eyes, we’re getting a (mediocre joke) response, and displaying it on the frontend. The app is working, but there’s still one more security issue to deal with.

It’s generally a bad idea to hard code API keys into your source code, for a number of reasons:

- It means you can’t share the repo publicly without exposing your keys.
- You may run up API usage during development, testing, and staging.
- Changing API keys requires code changes and re-deploys.
- You’ll need to regenerate API keys anytime someone leaves the org.

A better system is to use environment variables. With environment variables, you can provide the API keys only to the systems and users that need access to them.

For example, you can make an environment variable called OPENAI_API_KEY with the value of your OpenAI key for only the production environment. This way, only developers with direct access to that environment would be able to access it. This greatly reduces the likelihood of the API keys leaking, and it makes it easier to share your code openly. And because you limit access to the keys to the fewest people possible, you don’t need to replace keys as often when someone leaves the company.

In Node.js, it’s common to set environment variables from the command line (ENV_VAR=example npm start) or with the popular dotenv package. Then, in your server-side code, you can access environment variables using process.env.ENV_VAR.
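For comparison, a typical Node-only setup (not what we’ll use here) might look like this small sketch:

// Node-style access, shown only for comparison
// Started with: OPENAI_API_KEY=your-key npm start
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
console.log(OPENAI_API_KEY);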

Things work slightly differently with Qwik.

Qwik can target different JavaScript runtimes (not just Node), and accessing environment variables via process.env is a Node-specific concept. To make things more runtime-agnostic, Qwik provides access to environment variables through a RequestEvent object, which is available as the second parameter to the route action handler function.

import { routeAction$ } from '@builder.io/qwik-city';

export const useAction = routeAction$((param, requestEvent) => {
  const envVariableValue = requestEvent.env.get('ENV_VARIABLE_NAME')
  console.log(envVariableValue)
  return {}
})

So that’s how we access environment variables, but how do we set them?

Unfortunately, for production environments, setting environment variables will differ depending on the platform. For a standard server or VPS, you can still set them from the terminal as you would in Node (ENV_VAR=example npm start).

In development, we can alternatively create a local.env file containing our environment variables, and they will be automatically assigned for us. This is convenient since we spend a lot more time starting the development environment, and it means we can provide the appropriate API keys only to the people who need them.

So after you create a local.env file, you can set the OPENAI_API_KEY variable to your API key.

OPENAI_API_KEY="your-api-key"

(You may need to restart your dev server)

Then we can access the environment variable through the RequestEvent parameter. With that, we can replace the hard-coded value in our fetch request’s Authorization header with the variable, using template literals.

export const usePromptAction = routeAction$(async (formData, requestEvent) => {
  const OPENAI_API_KEY = requestEvent.env.get('OPENAI_API_KEY')
  const prompt = formData.prompt

  const body = {
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: prompt }]
  }

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'post',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify(body)
  })

  const data = await response.json()

  return data.choices[0].message.content
})

For more details on environment variables in Qwik, see their documentation.

Recap

When a user submits the form, the default behavior is intercepted by Qwik’s optimizer, which lazy loads the event handler. The event handler uses JavaScript to create an HTTP request containing the form data and sends it to the server, where it’s handled by the route’s action. The route’s action handler has access to the form data in its first parameter and can access environment variables from its second parameter (a RequestEvent object). Inside the action handler, we construct and send the HTTP request to OpenAI using the data we got from the form and the API key we pulled from the environment variables. With the OpenAI response, we prepare the data to send back to the client. The client receives the response from the action and can update the page accordingly.

Here’s what my final component looks like, including some Tailwind classes and a slightly different template.

import { component$ } from "@builder.io/qwik";
import { routeAction$, Form } from "@builder.io/qwik-city";

export const usePromptAction = routeAction$(async (formData, requestEvent) => {
  const OPENAI_API_KEY = requestEvent.env.get('OPENAI_API_KEY')
  const prompt = formData.prompt

  const body = {
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: prompt }]
  }

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'post',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify(body)
  })

  const data = await response.json()

  return data.choices[0].message.content
})

export default component$(() => {
  const action = usePromptAction()

  return (
    <main class="max-w-4xl mx-auto p-4">
      <h1 class="text-4xl">Hi 👋</h1>

      <Form action={action} class="grid gap-4">
        <div>
          <label for="prompt">Prompt</label>
          <textarea name="prompt" id="prompt">
            Tell me a joke
          </textarea>
        </div>

        <div>
          <button type="submit" aria-disabled={action.isRunning}>
            {action.isRunning ? 'One sec…' : 'Tell me'}
          </button>
        </div>
      </Form>

      {action.value && (
        <article class="mt-4 border border-2 rounded-lg p-4 bg-[canvas]">
          <p>{action.value}</p>
        </article>
      )}
    </main>
  );
});

Conclusion

All right! We’ve gone from a script that uses AI to get mediocre jokes to a full-blown application that securely makes HTTP requests to a backend that uses AI to get mediocre jokes and sends them back to the frontend to put those mediocre jokes on a page.

You should feel pretty good about yourself.

But not too good, because there’s still room to improve.

In our application, we are sending a request and getting an AI response, but we are waiting for the entirety of the body of that response to be generated before showing it to the users. And these AI responses can take a while to complete.

If you’ve used AI chat tools in the past, you may be familiar with the experience where it looks like it’s typing the responses to you, one word at a time, as they’re being generated. This doesn’t speed up the total request time, but it does get some information back to the user much sooner and feels like a faster experience.

In the next post, we’ll learn how to build that same feature using HTTP streams, which are fascinating and powerful but also can be kind of confusing. So I’m going to dedicate an entire post just to that.

I hope you’re enjoying this series and plan to stick around. In the meantime, have fun generating some mediocre jokes.

Thank you so much for reading. If you liked this article, and want to support me, the best ways to do so are to share it, sign up for my newsletter, and follow me on Twitter.

Originally published on austingil.com.
