GPTs DevBlog

Learnings from building Custom GPTs for OpenAI's GPT Store

Massive APIs for the GPT

Offloading the master prompt even more


Continued experimentation shows that the more you can incorporate into the API, the better. Thinking of the GPT as the front end for the API seems to be a good approach.

To put this idea to the test, I decided to create an idle game with a much larger API than my previous ones. Idle games are good for this purpose because they involve multiple upgradable elements and various aspects to track.

In the current example, the API consists of 12 fairly complex routes, handling tasks such as API-key management, login tokens, and secrets. Additionally, the routes responsible for game mechanics have varying required fields to function properly.
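The route names and fields below are my own placeholders, not the game's actual schema, but they sketch one way the game-mechanics routes could each declare their own required fields:

```python
# Hypothetical sketch: each route declares its required fields,
# and a tiny validator rejects incomplete calls up front.
REQUIRED_FIELDS = {
    "roll_action": ["api_key", "modifier", "difficulty"],
    "buy_upgrade": ["api_key", "upgrade_id"],
    "login":       ["email"],
}

def validate_request(route: str, payload: dict) -> list[str]:
    """Return the names of any required fields missing from the payload."""
    return [f for f in REQUIRED_FIELDS.get(route, []) if f not in payload]
```

With this in place, each route can fail fast with a list of missing fields instead of half-processing a bad call.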

The master prompt is now just a few sentences, and everything still appears to run smoothly. The schema itself contains 27,606 characters, while the master prompt consists of only 1,235 characters. This is a stark contrast to the previous game, where the schema had 12,104 characters and the prompt had 4,693 characters.

Token Idler is a typical idle game, except that both the theme and all the upgrades are generated. You can try it out here: Token Idler.


Prompt Realms released!

Working around the GPT's inability to make multiple API calls


The roadblock to getting this game out was needing the GPT to make multiple API calls in one reply. After realizing that was not going to work, I had to come up with new ideas.

The situation was as follows: the player wants to do something, a die is rolled, modifiers are applied, and then how well the player did is determined. If the player does well enough, they earn coins. Adding the coins would have been a second API call.

To work around not being able to make multiple API calls, I instead had the GPT define everything in the first call. When the API is called now, the GPT also specifies what the coin potential is, should the player succeed. Checking for success and adding coins can then both be done on the same route.
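A minimal sketch of that combined route (names and numbers are my own illustration, not the game's actual code):

```python
import random

def attempt_action(modifier: int, difficulty: int, coin_potential: int) -> dict:
    """One route does both jobs: check for success AND award coins.

    The GPT declares coin_potential up front, so there is no separate
    "add coins" endpoint that a player could trick the GPT into calling.
    """
    roll = random.randint(1, 6)          # the die roll happens server-side
    total = roll + modifier
    success = total >= difficulty
    return {
        "roll": roll,
        "total": total,
        "success": success,
        "coins_awarded": coin_potential if success else 0,
    }
```

The key point is that coins only ever appear as the outcome of a risky roll, never as a standalone operation.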

This also has another positive side effect: there is no longer an API call that is always beneficial to the player. In previous iterations, it would probably have been possible to trick the GPT into adding more and more coins without any risk involved. Now, the only routes available are game mechanics with risks and rewards.

Prompt Realms is a combination of Alex and the Bone Carvers and Prompt Wars, creating an MMORPG with characters, stats, coins and the ability to attack each other to steal coins. You can try it out here: Prompt Realms.


Put it in the API

How making very verbose APIs greatly improved performance


Working on a much larger game than before, I struggled severely to fit everything into the master prompt. And the more I put in the master prompt, the less consistent the GPT got. Then the idea struck me: let's go overboard and make a verbose API. And it really, really worked.

Like my other games, this one involves rolling dice. I want to show the user what they rolled and which modifiers were used. Asking the GPT to do this gave mixed results. However, making the reply from the API contain "Start your reply to the user with this: ..." gave 100% consistency.
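Sketching that pattern (the exact wording and route shape are illustrative, not the game's real code):

```python
import random

def roll_route(modifier: int) -> dict:
    """The API reply carries the exact opening line the GPT should use."""
    roll = random.randint(1, 6)
    total = roll + modifier
    return {
        "roll": roll,
        "modifier": modifier,
        "total": total,
        # Embedding the instruction in the reply is what makes the
        # GPT's output consistent, instead of asking for it in the prompt.
        "instruction": (
            "Start your reply to the user with this: "
            f"You rolled {roll}, with a modifier of {modifier}, "
            f"for a total of {total}."
        ),
    }
```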

This frees up a lot of space in the master prompt, and I can be even more precise. It also opens up several other actions: since this method works, the specifics of what happens when an action is called can be described by the reply itself.


Writing concise master prompts

Learning how less can be more


The greatest challenge in building good GPTs really is finding the clearest, most concise way to phrase what you want.

Building Alex and the Bone Carvers, I really wanted the player to feel in control and make use of the fact that API calls can be denied. Therefore, whenever the player asks to do something, I wanted the GPT to explain what it was about to do before doing it, so that the player could cancel. This turned out to be challenging.

I already explained how offloading mathematical work to the API is a good thing, as it gives room to write more. However, writing more here was a mistake. What ended up working, again, was to be as clear and concise as possible.

In the current version of the master prompt, this important rule is the first thing stated (well, after stating that the GPT is a GM for Alex and the Bone Carvers). The reply format is listed as a numbered list, and those numbers are then used as headlines throughout the master prompt. Under each headline, the details are specified. At the end, just before the game starts, there is a "Remember to X" section where the rules are reiterated.
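As a sketch (placeholder wording of my own, not the actual prompt), that layout looks roughly like this:

```text
You are the GM for Alex and the Bone Carvers.

Always reply in this format:
1. Describe the scene
2. Announce the action you are about to take
3. Wait for the player's confirmation, then call the API

1. Describe the scene
   <details for this step>

2. Announce the action
   <details for this step>

3. Wait for confirmation
   <details for this step>

Remember to announce every action before calling the API.
The game starts now.
```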

When building GPTs, it really helps to have small tests set up that check the specific thing you are tweaking for. In my case, I had "Gather supplies in the village" as a conversation starter so that I could iterate quickly. Testing only what you are building is also important, because you want to avoid wasting GPT-4 usage and hitting your cap.

What seems to work best, when asking a GPT to perform an action at a specific time, is to refer to it as "calling the API and using the X operation." This is also the wording ChatGPT uses on the test buttons inside the action.

You can try out Alex and the Bone Carvers here: Alex and the Bone Carvers.


Generate or Code

Dividing the work between the GPT and the API


Writing this as I'm waiting for more GPT-4 quota, haha

I've built a couple of D6 RPGs, and I am now working on a system where the player has an actual character and different actions vary in difficulty. I initially thought I could just reuse the existing API and explain to the GPT all the rules for adding to or subtracting from a roll, but that didn't go so well.

I tried explaining the rules very clearly, very briefly, with and without Markdown formatting, and with and without examples. In the end I realized that any sort of calculation is much better left to the API.

What worked best in the end was having the GPT state the difficulty level and the modifiers, and then send all of that to the API. The API does all the math, since code does math flawlessly.
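A sketch of that division of labor (the field names are my own illustration): the GPT names the roll, difficulty, and modifiers, and the API does the arithmetic.

```python
def resolve_roll(roll: int, difficulty: int, modifiers: dict[str, int]) -> dict:
    """Apply every modifier in code, where the math is always right.

    The GPT only supplies the creative part: which modifiers apply
    and what they are called. Summation and comparison stay here.
    """
    total = roll + sum(modifiers.values())
    return {"total": total, "success": total >= difficulty}
```

For example, a roll of 4 against difficulty 6 with a +2 strength bonus and a -1 wound penalty totals 5 and fails, every single time.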

I think I'm getting a better feel for when you can count on the GPT doing a good job and when you should offload the work to code. Leave all the creative work to the GPT, and move whatever can be written as code into the API. This also shrinks the master prompt, which leaves room for other descriptions or mechanics.


Authenticating

Allowing users to retain information between sessions


Hi! This is my first blog post. I'm on my way to building 100 GPTs for OpenAI's GPT Store. I think they are great fun to build, and I have ideas for some actually useful ones, but for now I'm mostly making games.

I managed to get my own API running and got a few simple games working, but soon realized that for bigger games I would want to be able to save progress. My understanding is that the user's ID is not available to the GPT (which would have helped, since the user is technically already logged in), so authentication has to happen in the chat somehow.

I did not want password authentication, so I tried the OAuth setup, but it seemed complicated. I stumbled upon a Reddit thread that listed several alternatives. In the end, of course, I decided to build something myself.

It works just like the open-source project: the user enters their email address, a token is generated and emailed to them, the user enters the token, and that serves as authentication. I made sure all tokens are one-time use, and the save file created for the user uses a salted hash of the email address as its filename. This way I never store the user's email address either, so I get to keep my "I do not store any personal information" privacy policy, which is great.

These games add one piece of functionality I want after another, and once I have made enough of them I think I will be able to build some really cool, actually useful GPTs.