OpenAI released a brand-new API that gives you access to the powerful AI model that also powers ChatGPT. Let's see how we can use it to build a ChatGPT clone with Laravel 10. If you want to jump directly to the source code, you can find it on GitHub.
The new AI model is called gpt-3.5-turbo and is 10 times (!) cheaper than the most powerful DaVinci model, which is amazing.
Prompting for this model works a little differently, as it is aimed at "chat" completions. This means that you don't just send a simple string as a prompt, but instead send along a chat conversation, which the AI model will then autocomplete.
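Under the hood, such a conversation is just a list of messages, each tagged with a role ("system", "user" or "assistant") and its content. Here is a minimal sketch of what we will be sending; the exact PHP client call follows later in this post:

$messages = [
    // Every message has a role and its content.
    ['role' => 'user', 'content' => 'What is Laravel?'],
];
// The API responds with the next message in the conversation,
// tagged with the "assistant" role.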
Here's a look at the finished version of our ChatGPT clone:
Getting started
To get started, we are going to create a new Laravel 10 application and afterwards install the OpenAI PHP client:

laravel new chatgpt-clone
composer require openai-php/laravel
The next thing we need to do is publish the package configuration file and set up our OpenAI API key.
To publish the configuration file, run the following command:
php artisan vendor:publish --provider="OpenAI\Laravel\ServiceProvider"
After this, you can set your OpenAI API key in your .env file like this:
OPENAI_API_KEY=sk-...
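In case you are wondering where this key gets picked up: the published config/openai.php file reads it from the environment, roughly like this (a simplified sketch of the package's config file):

return [
    // The API key is read from the OPENAI_API_KEY environment variable.
    'api_key' => env('OPENAI_API_KEY'),
];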
Building a simple UI
As we want a chat-style UI, let's add a simple input field to our welcome.blade.php file, as well as a "Reset Conversation" button, just like ChatGPT has, to reset the current conversation. We also wrap the input field in a form that posts to the main route of our Laravel application.
<form class="p-4 flex space-x-4 justify-center items-center" action="/" method="post">
    @csrf
    <label for="message">Laravel Question:</label>
    <input id="message" type="text" name="message" autocomplete="off" class="border rounded-md p-2 flex-1" />
    <a class="bg-gray-800 text-white p-2 rounded-md" href="/reset">Reset Conversation</a>
</form>
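Note that the "Reset Conversation" button links to a /reset route, which we have to define ourselves. Since we are going to store the conversation in the session (more on that in the next section), a minimal sketch of this route just forgets the stored messages and redirects back:

use Illuminate\Http\Request;

Route::get('/reset', function (Request $request) {
    // Drop the stored conversation so we start with a fresh chat.
    $request->session()->forget('messages');

    return redirect('/');
});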
Building the "prompt"
As I mentioned above, the prompt in this case works a little differently. It's a mix of messages that come from the user as well as the responses that we get from OpenAI. This allows us to ask questions that reference previous messages and replies.
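As a hypothetical example, a follow-up question like "Who created it?" only works because the earlier messages travel along with every request:

$messages = [
    ['role' => 'user', 'content' => 'What is Laravel?'],
    ['role' => 'assistant', 'content' => 'Laravel is a PHP web framework.'],
    // "it" can only be resolved because the previous
    // messages are part of the prompt.
    ['role' => 'user', 'content' => 'Who created it?'],
];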
The new chat API also allows us to define a "system" message, which is a general instruction that tells the chat model what its overall purpose should be.
As these "messages" need to grow over time, we need to store them somewhere. For this simple clone, we are just going to store the messages in the session. As a default value, we will place our "system" message in there:
use Illuminate\Http\Request;

Route::post('/', function (Request $request) {
    // Load the conversation from the session - or start a new one with just the "system" instruction.
    $messages = $request->session()->get('messages', [
        ['role' => 'system', 'content' => 'You are LaravelGPT - A ChatGPT clone. Answer as concisely as possible.']
    ]);

    // Append the user's new message to the conversation.
    $messages[] = ['role' => 'user', 'content' => $request->input('message')];

    // ...
});
As you can see, we also immediately add the user message to that $messages array. Next, we can use this array to perform the API request.
use OpenAI\Laravel\Facades\OpenAI;

// Send the whole conversation to the Chat Completions endpoint.
$response = OpenAI::chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => $messages,
]);
This will give us the OpenAI chat response, which we also want to add to our messages array.
$messages[] = ['role' => 'assistant', 'content' => $response->choices[0]->message->content];
Note that this time, we added the message with the "assistant" role to the array, to indicate that this one came from the API instead of the user.
Now our $messages array contains all the messages we need, and we can store it back in the session and redirect back.
$request->session()->put('messages', $messages);
return redirect('/');
And...that's it! The next time the user sends a message, we are going to reuse the messages from the session and append the new message to them - just like ChatGPT does.
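Putting all of the pieces from above together, the complete POST route looks like this:

Route::post('/', function (Request $request) {
    $messages = $request->session()->get('messages', [
        ['role' => 'system', 'content' => 'You are LaravelGPT - A ChatGPT clone. Answer as concisely as possible.']
    ]);

    $messages[] = ['role' => 'user', 'content' => $request->input('message')];

    $response = OpenAI::chat()->create([
        'model' => 'gpt-3.5-turbo',
        'messages' => $messages,
    ]);

    $messages[] = ['role' => 'assistant', 'content' => $response->choices[0]->message->content];

    $request->session()->put('messages', $messages);

    return redirect('/');
});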
Finishing the UI
As we now have all the messages (those from the OpenAI response and from the user) in the session, all we need to do is pass them to the view and display them to the user.
To avoid showing the internal "system" message, we can remove it from our messages array before passing it to our view.
Route::get('/', function () {
    $messages = collect(session('messages', []))->reject(fn ($message) => $message['role'] === 'system');

    return view('welcome', [
        'messages' => $messages,
    ]);
});
In the view, I'm now simply looping over the messages, giving each one a different background color depending on whether it comes from the user or from the assistant, and using a Markdown parser for the content:
@foreach($messages as $message)
    <div class="flex rounded-lg p-4 @if ($message['role'] === 'assistant') bg-green-200 flex-row-reverse @else bg-blue-200 @endif">
        <div class="ml-4">
            <div class="text-lg">
                @if ($message['role'] === 'assistant')
                    <a href="#" class="font-medium text-gray-900">LaravelGPT</a>
                @else
                    <a href="#" class="font-medium text-gray-900">You</a>
                @endif
            </div>
            <div class="mt-1">
                <p class="text-gray-600">
                    {!! \Illuminate\Mail\Markdown::parse($message['content']) !!}
                </p>
            </div>
        </div>
    </div>
@endforeach
And that's all you need to do to build a ChatGPT clone yourself, thanks to the new OpenAI Chat Completions API. Pretty mind-blowing, right?