Today we’ll talk about a new library that helps developers build conversational, streaming, and chat user interfaces in JavaScript and TypeScript.
Introducing Vercel AI SDK
Vercel AI SDK is an open-source library for building AI-powered user interfaces.
How does it work
Vercel AI SDK gives you the frontend building blocks for a ChatGPT-style UI, along with the backend APIs needed to communicate with an AI model.
All you need to do is provide an API key for the AI provider you want to use, such as OpenAI, Anthropic, or Hugging Face; LangChain is also supported.
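For example, here’s a minimal sketch of what the frontend side can look like using the SDK’s useChat hook in a Next.js App Router project (by default the hook posts messages to an /api/chat route handler that streams the model’s reply; the component name is just an example):

'use client'

import { useChat } from 'ai/react';

export default function Chat() {
  // messages holds the conversation so far; handleSubmit posts to /api/chat
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}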
Starter kits
There are also several starter kits that let you quickly build a full-fledged AI app: just clone one and customize it as needed.
Here are a few starter kits:
- Next.js OpenAI Starter
- SvelteKit OpenAI Starter
- Next.js Hugging Face Starter
- Next.js LangChain Starter
Features
Here’s a list of features of Vercel’s AI SDK:
Streaming-first approach
Making REST API requests to an AI model and waiting for the full response before rendering can feel slow, because you have to wait for the model to finish generating. So Vercel has taken a streaming-first approach that renders responses via streams in real time, so you don’t have to wait for the complete response.
With streaming, the response appears as soon as it is received, just like in ChatGPT, and you can stop generation whenever you want, also just like ChatGPT.
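Here’s a rough sketch of how that can look with the SDK’s hooks, assuming the stop and isLoading values they return (the component itself is hypothetical):

'use client'

import { useCompletion } from 'ai/react';

export default function StoppableCompletion() {
  // stop aborts the in-flight stream; isLoading is true while tokens are still arriving
  const { completion, input, handleInputChange, handleSubmit, stop, isLoading } =
    useCompletion();

  return (
    <form onSubmit={handleSubmit}>
      <input value={input} onChange={handleInputChange} />
      {isLoading && (
        <button type="button" onClick={stop}>
          Stop generating
        </button>
      )}
      <div className="whitespace-pre-wrap">{completion}</div>
    </form>
  );
}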
Built-in Adapters
The SDK has built-in adapters for AI services such as OpenAI, LangChain, Anthropic, and Hugging Face. All you need to do is specify which service your app will use and provide its API key.
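As an illustration of the adapter pattern, here’s a sketch of a route handler backed by Hugging Face instead of OpenAI (the model name and the HUGGINGFACE_API_KEY variable are placeholders for this example):

import { HfInference } from '@huggingface/inference';
import { HuggingFaceStream, StreamingTextResponse } from 'ai';

// Hugging Face Inference client, reading the key from an environment variable
const Hf = new HfInference(process.env.HUGGINGFACE_API_KEY);

export const runtime = 'edge';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  // Stream tokens from a hosted text-generation model (the model name is only an example)
  const response = Hf.textGenerationStream({
    model: 'OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5',
    inputs: prompt,
    parameters: { max_new_tokens: 200 },
  });

  // The built-in adapter converts the provider-specific stream into a text stream
  const stream = HuggingFaceStream(response);
  return new StreamingTextResponse(stream);
}

Swapping providers is mostly a matter of changing the client and the adapter; the streaming response format stays the same.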
Highly customizable
The React hooks provided by the SDK allow you to customize the functionality in any way you need. The visual UI can also be customized by writing CSS from scratch or by using any third-party CSS library or framework.
Example of frontend customization:
'use client'

import { useCompletion } from 'ai/react';

export default function SloganGenerator() {
  // useCompletion posts the input to /api/completion by default and streams back the result
  const { completion, input, handleInputChange, handleSubmit } = useCompletion();

  return (
    <div className="mx-auto w-full max-w-md py-24 flex flex-col stretch">
      <form onSubmit={handleSubmit}>
        <input
          className="fixed w-full max-w-md bottom-0 border border-gray-300 rounded mb-8 shadow-xl p-2"
          value={input}
          placeholder="Describe your business..."
          onChange={handleInputChange}
        />
      </form>
      <div className="whitespace-pre-wrap my-6">{completion}</div>
    </div>
  );
}
The stream utilities provided by the SDK allow you to customize the streamed responses and the configuration for each AI service.
Example of backend customization:
import { Configuration, OpenAIApi } from 'openai-edge';
import { OpenAIStream, StreamingTextResponse } from 'ai';

// Create an OpenAI API client (that's edge friendly!)
const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(config);

// IMPORTANT! Set the runtime to edge
export const runtime = 'edge';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  // Ask OpenAI for a streaming completion given the prompt
  const response = await openai.createCompletion({
    model: 'text-davinci-003',
    stream: true,
    temperature: 0.6,
    prompt: `Create three slogans for a business with unique features.
Business: Bookstore with cats
Slogans: "Purr-fect Pages", "Books and Whiskers", "Novels and Nuzzles"
Business: Gym with rock climbing
Slogans: "Peak Performance", "Reach New Heights", "Climb Your Way Fit"
Business: ${prompt}
Slogans:`,
  });

  // Convert the response into a friendly text-stream
  const stream = OpenAIStream(response);

  // Respond with the stream
  return new StreamingTextResponse(stream);
}
Conclusion
The Vercel AI SDK is a great library and a solid open-source step forward for building conversational AI experiences. Head over to the Getting Started section of the SDK docs and build your AI chat app now 🤖
Also read: DeepMoji, Artificial emotional intelligence with machine learning