
The ChatGPT engine is now available in the OpenAI API

NOTE This article builds on the basics covered in an introductory companion article. If you’re new to working with the OpenAI API from PHP you should start there, since that article covers foundational material that I’ll be skating over here.

The engine that powers ChatGPT has arrived in the API. The new engine is at least as powerful as, and significantly cheaper than, the best text completion engine (text-davinci-003), making it a good all-round choice.

While the existing text completion engines are optimised for individual queries, the chat engine, gpt-3.5-turbo, is designed to excel at conversational flow – although a client still needs to include the conversation’s history in each of its queries.

So, what should we build to try things out? A chat bot is the obvious answer.

What has changed?

To start with, the existing endpoints and engines are still in place. Chat support is an addition rather than a change. It provides a new endpoint, POST /chat/completions, with slightly different parameter requirements. At a bare minimum, you must provide a model parameter (specifying the engine to use) and a messages array.

Each element in the messages array should include a role (one of user, assistant or system) and a content field. A message with a system role conventionally starts the list to define the kind of assistant we want our chat bot to be. Subsequent messages form a dialogue between a user and the assistant.

This dialogue has a dual role. Initially it can act as training material – a developer’s invention to define the tone and form of the interaction to come. Thereafter, new messages form a rolling context for the conversation. The AI does not provide its own memory of the exchange, so it’s up to us, as developers, to manage and re-transmit the conversation’s history. Obviously, this introduces an issue for longer conversations since this growing corpus of background information must be sent over the network and digested by the AI for every request.

Here is an actual exchange, severely cut for legibility. It was captured from a finished instance of the bot we’ll be building here.

{"role":"system","content":"You are a helpful, interested, and witty assistant"},
{"role":"user","content":"I am writing an article about creating an AI chatbot. What should I include in my introduction?"},
{"role":"assistant","content":"In the introductory section of your article, you should include some background information on chat bots..."},
{"role":"user","content":"What about a comparison between chat and completion as choices for the developer?"},
{"role":"assistant","content":"When it comes to building chat bot applications, developers have a choice between..."}

That’s the background. Let’s get started with the code.

Getting started

Very little has changed as far as setting up is concerned. We will use the same library as before:

$ composer require orhanerday/open-ai

At the time of writing, this will install version 4.0.0. If you are working with an older version of this library you should update it, or you will not gain access to the chat endpoint.

Managing messages

Since the message array is crucial to the chat endpoint, it is a good idea to build a very simple class to manage this data. Here is mine:

class Messages
{
    private string $description;
    private array $messages = [];

    public function __construct(string $description)
    {
        $this->description = $description;
    }

    public function addMessage(string $role, string $message): void
    {
        $this->messages[] = ["role" => $role, "content" => $message];
    }

    public function toArray(int $max = 0): array
    {
        $desc = [ "role" => "system", "content" => "You are a {$this->description} assistant" ];
        $messages = $this->messages;
        if ($max > 0) {
            $messages = array_slice($this->messages, ($max * -1));
        }
        return array_merge([$desc], $messages);
    }
}

So this is pretty simple. We require some information about the assistant in the class’s constructor, which we store so that we can use it to generate our system message. The addMessage() method requires $role and $content arguments which it uses to build the data structure for an individual message. Finally, the toArray() method brings the stored data together into an array suitable for passing to the open-ai library. The only innovation here is the $max argument which, if supplied, will limit the number of messages sent to the AI. I have found that five messages is adequate to provide good continuity for most purposes, though different use cases may require different memory buffers.
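The trimming performed by toArray() boils down to array_slice() with a negative offset. Here is a standalone sketch of the mechanics, with invented sample messages, that you can run without the class:

```php
<?php
$system = ["role" => "system", "content" => "You are a helpful assistant"];
$history = [
    ["role" => "user",      "content" => "First question"],
    ["role" => "assistant", "content" => "First answer"],
    ["role" => "user",      "content" => "Second question"],
];

$max = 2; // keep only the two most recent messages
$trimmed = $max > 0 ? array_slice($history, -$max) : $history;

// The system message is always prepended, whatever the trim
$payload = array_merge([$system], $trimmed);
```

With $max set to 2, $payload ends up holding three elements: the system message plus the last two entries of the history.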

For a more sophisticated application, we might tool this class with more methods – to help with saving our data in different formats, for example. For now, though, let’s keep it clean and move on.

Sending our messages

In the last AI article, we encountered the OpenAI::complete() method which maps to the POST /completions endpoint. This time, we’re going to invoke a new method: OpenAI::chat() which will invoke POST /chat/completions.

use Orhanerday\OpenAi\OpenAi;

class Comms
{
    private string $secretKey;

    public function __construct(string $secretKey)
    {
        $this->secretKey = $secretKey;
    }

    public function sendQuery(Messages $messages): string
    {
        $open_ai = new OpenAi($this->secretKey);
        $completion = $open_ai->chat([
            'messages' => $messages->toArray(5),
            'temperature' => 0.5,
            'max_tokens' => 1000,
            'frequency_penalty' => 0,
            'presence_penalty' => 0.6,
        ]);

        $ret = json_decode($completion, true);
        if (isset($ret['error'])) {
            throw new \Exception($ret['error']['message']);
        }
        if (! isset($ret['choices'][0]['message']['content'])) {
            throw new \Exception("Unknown error: " . $completion);
        }
        $response = $ret['choices'][0]['message']['content'];
        $messages->addMessage("assistant", $response);
        return $response;
    }
}

The business of this class is almost entirely managed in sendQuery(). The method requires a Messages object which will provide conversational context and should include a new, unsent message. We instantiate OpenAi with a token (see the previous article for more on how to get one of those) and invoke chat(). We acquire the messages array by calling Messages::toArray(). By passing in 5, we limit the context we send to five messages in addition to the system message.

The other parameters to chat() might already be familiar from the previous completion() example. The temperature argument is a measure of randomness. max_tokens caps the length of the generated response (the engine’s context window separately limits the combined size of input and output). frequency_penalty discourages verbatim repetition in a response – with 0 applying no penalty. presence_penalty encourages the model to move on to new topics – again, 0 applies no penalty. Both penalties accept values from -2.0 to 2.0, while temperature runs from 0 to 2, with lower values producing more deterministic output.

Although the /chat/completions endpoint requires a model argument, this is preset to gpt-3.5-turbo behind the scenes for us by the OpenAi class, so we do not have to add it explicitly here.

Having sent our query along to the AI and received a response, we perform some sanity checks to ensure we’re getting an expected result. If all is well, we populate Messages with the chat response from the server so that our context remains up to date and then return this value to the caller. Because objects are passed, in effect, by reference in PHP, the caller will also continue to have access to the updated Messages object.
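For reference, a successful response decodes to a structure along these lines. This is a trimmed, invented sample rather than real API output, but the field names follow the documented response shape, and the extraction path is the one sendQuery() relies on:

```php
<?php
// A trimmed example of a decoded /chat/completions response body
$ret = json_decode('{
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello there!"},
            "finish_reason": "stop"
        }
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 4, "total_tokens": 16}
}', true);

// The same extraction path used in sendQuery()
$response = $ret['choices'][0]['message']['content'];
```

The usage block is worth logging in a real application, since it tells you how many tokens each exchange is costing you.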

A Runner class

We’re all but done with the core classes now. It really is that simple. Let’s round things out with some code to build out a command-line bot. First, we need a simple runner:

class Runner
{
    private object $conf;
    private Comms $comms;

    public function __construct()
    {
        $conffile = __DIR__ . "/../../conf/chat.json";
        $this->conf = json_decode(file_get_contents($conffile));
        $this->comms = new Comms($this->conf->openai->token);
    }

    public function start(string $assistant = "helpful, interested, and witty"): Messages
    {
        $messages = new Messages($assistant);
        return $messages;
    }

    public function query(Messages $messages, string $message): string
    {
        $messages->addMessage("user", $message);
        return $this->comms->sendQuery($messages);
    }
}

The Runner class’s constructor fetches the API token from a JSON file and uses it to instantiate a Comms instance. start() sets up a new Messages instance. Finally, query() requires a Messages object and a $message string. It adds the user input in $message to the Messages object, which it then passes on to Comms::sendQuery().

NOTE In a future iteration of Runner, I would likely refactor to have it manage Messages internally, thereby hiding logic from client code and simplifying interaction.

A quick script

While Runner wrangles Comms and Messages we also need to provide an interface for the user. In many cases, that will mean embedding the system into a Web app. For this example, though, I’m just going to create a simple command-line interface:

$runner = new Runner();
$msgs = $runner->start();
$input = "";
while (($input = readline("USER      > ")) != "q") {
    $resp = $runner->query($msgs, $input);
    print "\n";
    print "ASSISTANT > {$resp} \n";
    print "\n";
}

I have kept things to the bare bones here. We begin our conversation with Runner::start() and call readline() and Runner::query() in a loop until the user kills the process or enters a single q as their input.

From here, it would be easy enough to kit the system out with persistence, either using flat files or a database, and to extend functionality in all sorts of ways. For my own purposes, for example, I will probably create myself a mini-ChatGPT enhanced to read Web pages and files. Persistence would be useful for a couple of reasons. My command-line application is able to keep the message context in memory – but a Web app would need to restore state for every request. Also, persistence would allow me to return to conversations over time.
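As a taste of what flat-file persistence might look like, here is a minimal sketch. The helper functions and the file path are my own invention for illustration; Messages would also need a small loading method to consume the restored array:

```php
<?php
// Hypothetical helpers: persist a conversation's raw message data as JSON.
function saveConversation(string $path, array $messages): void
{
    file_put_contents($path, json_encode($messages, JSON_PRETTY_PRINT));
}

function loadConversation(string $path): array
{
    if (! is_file($path)) {
        return [];
    }
    return json_decode(file_get_contents($path), true) ?? [];
}

$history = [
    ["role" => "user", "content" => "Hello"],
    ["role" => "assistant", "content" => "Hi! How can I help?"],
];
saveConversation("/tmp/chat-demo.json", $history);
$restored = loadConversation("/tmp/chat-demo.json");
```

In a Web app, the load step would run at the start of each request so that the restored history can be fed back into a Messages instance before querying the AI again.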

Trying it out

It’s possible that the “author interviews AI for article” trope is already fast becoming a cliché. However, in an article that builds a simple chat bot, I guess it can be forgiven. Here’s my conversation with the AI using the code developed above.

USER > I have written an article on using your engine from PHP. I have covered.

ASSISTANT > That sounds great! It’s always helpful to have resources that cover using OpenAI’s API from different programming languages. Can you share some details about what you covered in your article about using the OpenAI API from PHP? I’d be happy to hear more about it.

USER > Yes, sorry I hit return too fast. I have covered: the differences between chat and completion. The POST /chat/completion endpoint. I have created a class for managing message history, an API class, a controller and a CLI script. I’m now ready to wrap things up. What should I include to bring things to a close?

ASSISTANT > Great job! Here are a few things you could include to wrap up your article:

  1. A summary of the key points covered in the article: This will help readers to quickly understand what they can expect to learn from your article.

  2. A conclusion: You could use this section to summarize the main takeaways from your article and emphasize the importance of using OpenAI’s API from PHP.

  3. Next steps: Consider providing some guidance on where readers can go next to continue learning about OpenAI’s API or related topics. For example, you could suggest checking out OpenAI’s documentation for more information or exploring other programming languages that can be used with the API.

  4. Code samples: You could include a few code samples to illustrate how to use the API from PHP. This will help readers to better understand the concepts covered in your article.

  5. Acknowledgements: If you used any resources or received help from anyone while writing your article, be sure to acknowledge them in this section.

Overall, the goal of your conclusion should be to leave readers feeling informed and empowered to start using OpenAI’s API from PHP in their own projects.

USER > Thanks. I will use your reply in my article with attribution.

ASSISTANT > You’re welcome! It’s great to hear that my suggestions were helpful. And thank you for attributing my input in your article - I appreciate it! Let me know if there’s anything else I can do to assist you.


To take a couple of the AI’s cues, then: you can find more information about the API at the OpenAI documentation site. The Open AI PHP SDK documentation is also worth checking out.

You can find source code for all my recent articles on GitHub. The code for this article is in the 012-gpt-chatbot directory.

Photo by M.S. Meeuwesen on Unsplash