This week, I got my hands dirty working with Swift on the server, using Vapor for a client’s project. I’m building a server to handle API interactions for the OneTap app. During this process, I couldn’t find any explicit documentation on how to stream responses in Vapor, but I figured it out, and it turns out to be rather straightforward! I’m sharing the method here so that other people facing the same roadblock can easily find a solution.
Since we’re focusing on how to stream responses, I’m going to assume you already have a Vapor project set up.
Let's begin by defining a getStream function and adding it to a "stream" route in our routes function. Here, I’m going to use a GET request since it’s easier to visualize, but the same logic applies for a POST request (I am using a POST request in production).
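As a minimal sketch, it might look something like this (the route path and placeholder strings are just illustrative):

```swift
import Vapor

func routes(_ app: Application) throws {
    // Register the handler on a "stream" route.
    app.get("stream", use: getStream)
}

func getStream(req: Request) async throws -> AsyncThrowingStream<String, Error> {
    // Placeholder stream that yields a couple of chunks and finishes;
    // we'll replace this with real content later.
    return AsyncThrowingStream { continuation in
        continuation.yield("Hello, ")
        continuation.yield("stream!")
        continuation.finish()
    }
}
```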
Unfortunately, this leads to an error since AsyncThrowingStream<String, Error> does not conform to AsyncResponseEncodable.
We can fix this by adding AsyncResponseEncodable conformance to AsyncThrowingStream<String, Error> via an extension.
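Here’s a sketch of that extension, assuming a recent Vapor 4 release that provides the `AsyncBodyStreamWriter`-based `Response.Body(asyncStream:)` initializer:

```swift
import Vapor

extension AsyncThrowingStream: AsyncResponseEncodable where Element == String, Failure == Error {
    func encodeResponse(for request: Request) async throws -> Response {
        let response = Response(status: .ok)
        response.headers.contentType = .plainText
        response.body = .init(asyncStream: { writer in
            do {
                // Forward each chunk of the stream to the response body as it arrives.
                for try await chunk in self {
                    try await writer.writeBuffer(ByteBuffer(string: chunk))
                }
                try await writer.write(.end)
            } catch {
                // Pass the failure along and terminate the body.
                try? await writer.write(.error(error))
            }
        })
        return response
    }
}
```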
Adding this code gets rid of the error, and that’s about it for the most part! You can continue setting up a stream as usual.
Here, for instance, I’m going to set up a stream that relays responses from OpenAI’s ChatGPT API. I’ll be using Alfian Losari’s ChatGPTSwift package for handling the API requests; it’s my favorite ChatGPT API package for Swift thanks to its simplicity, ease of use, and cross-platform support, including Linux, which is essential for running a Vapor server.
Let’s add the package to Vapor. Open the Package file and add this to the dependencies:
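The entry looks something like this (the version shown here is illustrative; use the latest ChatGPTSwift release):

```swift
// Package.swift — top-level dependencies array
.package(url: "https://github.com/alfianlosari/ChatGPTSwift.git", from: "1.3.0"),
```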
Then, add the following line to the dependencies inside the targets:
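That line should look something like this:

```swift
// Package.swift — the executable target's dependencies array
.product(name: "ChatGPTSwift", package: "ChatGPTSwift"),
```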
Now, we’re ready to use the ChatGPTSwift package.
I’m modifying the code to use an "ai" route that includes a prompt parameter to pass into the API.
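A sketch of the updated registration, using a dynamic path component for the prompt:

```swift
func routes(_ app: Application) throws {
    // ":prompt" is a dynamic path component, available later via req.parameters.
    app.get("ai", ":prompt", use: getStream)
}
```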
Then, let’s implement the API request into the getStream function.
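Something along these lines, assuming ChatGPTSwift's `sendMessageStream(text:)` API and an API key read from the environment (the `OPENAI_API_KEY` variable name is my own choice):

```swift
import ChatGPTSwift
import Vapor

func getStream(req: Request) async throws -> AsyncThrowingStream<String, Error> {
    // Pull the prompt out of the ":prompt" path parameter.
    guard let prompt = req.parameters.get("prompt") else {
        throw Abort(.badRequest)
    }
    // Read the API key from the environment rather than hard-coding it.
    let api = ChatGPTAPI(apiKey: Environment.get("OPENAI_API_KEY") ?? "")
    // ChatGPTSwift already exposes the reply as an AsyncThrowingStream<String, Error>,
    // so we can return it directly thanks to the conformance we added earlier.
    return try await api.sendMessageStream(text: prompt)
}
```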
Voilà! Sending a GET request to the ai route with a prompt parameter should allow you to get a streamed response.
However, we might not always require a streamed response, so I decided to add a query parameter that determines whether the response should be streamed or sent all at once. Here’s how I modified the return function.
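A sketch of the modified function, assuming a boolean `stream` query parameter (the name is my own) that defaults to streaming when absent:

```swift
// Same imports as the previous snippet: Vapor and ChatGPTSwift.
func getStream(req: Request) async throws -> AsyncThrowingStream<String, Error> {
    guard let prompt = req.parameters.get("prompt") else {
        throw Abort(.badRequest)
    }
    // "?stream=false" turns streaming off; anything else keeps it on.
    let shouldStream = (try? req.query.get(Bool.self, at: "stream")) ?? true
    let api = ChatGPTAPI(apiKey: Environment.get("OPENAI_API_KEY") ?? "")

    if shouldStream {
        // Pass ChatGPT's stream straight through to the client.
        return try await api.sendMessageStream(text: prompt)
    } else {
        // Wait for the full reply on the server, then yield it as a single chunk.
        return AsyncThrowingStream { continuation in
            Task {
                do {
                    let reply = try await api.sendMessage(text: prompt)
                    continuation.yield(reply)
                    continuation.finish()
                } catch {
                    continuation.finish(throwing: error)
                }
            }
        }
    }
}
```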
On the client side, this still means handling the output as a stream, but it takes much less code, and it’s more memory-efficient because most of the work is offloaded to the server.
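For illustration, a hypothetical client-side consumer might look like this; whether or not the server streams, the reading code is identical:

```swift
import Foundation

func printAIResponse(prompt: String) async throws {
    // Percent-encode the prompt so it can live in the URL path.
    let encoded = prompt.addingPercentEncoding(withAllowedCharacters: .urlPathAllowed) ?? prompt
    let url = URL(string: "http://127.0.0.1:8080/ai/\(encoded)")!
    let (bytes, _) = try await URLSession.shared.bytes(from: url)
    // Print characters as they arrive; with ?stream=false everything
    // shows up at once, otherwise it trickles in chunk by chunk.
    for try await character in bytes.characters {
        print(character, terminator: "")
    }
}
```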
Now, let’s test this out!
Build the project to start the server locally (or in production if you have it set up). Let’s run it without the stream first, and then with the stream. Let’s head to: http://127.0.0.1:8080/ai/Write%20100%20words%20About%20AI
And that wraps up this post. You’ve learned how to return an AsyncThrowingStream of strings in Vapor, seen an example of using it with the ChatGPT API, and learned how to switch between streaming and standard responses from the same function with the help of a query parameter. I’m just getting started with Vapor (in fact, today is only the third day since I first started writing Swift on the server), and I’m excited to see where this takes me. I’m also working on a lot of amazing features for OneTap that I can’t talk about just yet, but they’re going to improve the app experience by a lot. See you next week!