novablocks · 2023

Character Builder

8 min read

Goal

For the next quarter, the team wanted to add enemy AIs, NPCs, and other players that use different models in the game. Currently, we only have a single player model, and there is no way to dynamically add AIs.

Specification

I was given the following wireframes as a reference for the characters. From these, I refined how the UI and database will look, as well as how they will integrate with the engine.

A sketch of the characters list and the character builder's basic tab.
The wireframe of the character builder.
A sketch of the character builder's sounds and events tab.
The wireframe of the character builder's sounds and events tab.

Database Structure

Based on the wireframes and the information the characters need, I've created the following database structure:

A sketch showing the relationship between the character, events, and dialogues.
An overview of the database structure for the character.

In the above, a character is owned by a world. A character can have a dialogue, an event, both, or neither.

Implementation

PR Date: May 17, 2023

Summary of the changes made:

  • CRUD (Create Read Update Delete) for Characters in the World Editor.
  • Random Name Generator under Character Builder
  • AI-Assisted Dialogue Generation under Character Builder using OpenAI - also converts audio to an iOS-compatible format if the user uploaded an ogg file.
  • Text to Speech Generation - uses Elevenlabs
    • triggers on saving a character via the character builder.
    • also triggers when a character's dialogue text is updated.
  • Remix of Event Recipes under Character Builder
    • admins can set "default" recipes by setting is_default to true in the admin panel.
    • users can create/update/delete recipes of their own.
  • Admin Panel for Characters
  • Added "AI Create Character" Node for spawning AI characters.
  • Changes to the AI
    • Added AI Type - currently, the "enemy" type triggers the "attack" behavior.
    • Hide/show nametag
    • Added an "interact" event to AIs.
    • On interact, trigger the AI's dialogue.
    • Added AI Events

Characters under the World Editor

The world editor's characters tab is open listing the characters named 'Radiant Bard' and 'Majestic Squire'.
Characters' Tab under the World Editor.

Since the avatar system and its models are still being worked on by the 3D modellers, I had to make do with what we currently have: I decided to screenshot the current model and use it as a placeholder for now.

Basic Tab of the Character Builder which includes inputs for the character's name, AI recipe, AI Type, and Display Nametag
Basic Tab of the Character Builder.

For the Basic Tab, I've split the AI Preset into two dropdown inputs: AI Type (neutral, ally, enemy) and AI Recipe (a list of AI Events).

Sounds Tab of the Character Builder which includes inputs for dialogue
Sounds Tab of the Character Builder.

For the sounds, based on the wireframe, the user should be able to upload an audio file, use AI-assisted dialogue, use plain text to speech, or use a random dialogue. To simplify this and make it more user friendly, I've reduced it to two options: text to speech or uploading an audio file. For text to speech, the user can either click the dice button for a randomly generated dialogue (which uses OpenAI under the hood) or use the AI-assisted option, where they type in their own prompt and pick the best of the responses.

For OpenAI, specifically gpt-3.5-turbo, I've made the request a background job to avoid server timeouts, since the request can take a couple of seconds to finish. I then use websockets to update the UI once the request is done.
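
As a rough sketch of that pattern (the class, client, and channel names here are hypothetical, not the actual ones in the codebase), the job takes the slow OpenAI call off the request cycle and then notifies the UI:

```ruby
# Hypothetical sketch: the client and broadcaster are injected so the
# slow HTTP call and the websocket push stay out of the web request.
class DialogueGenerationJob
  # client must respond to #chat(prompt) -> String;
  # broadcaster must respond to #broadcast(channel, payload).
  def initialize(client:, broadcaster:)
    @client = client
    @broadcaster = broadcaster
  end

  def perform(character_id, prompt)
    # Runs in the background, so a multi-second OpenAI call cannot
    # time out the request that enqueued it.
    dialogue = @client.chat(prompt)
    # Push the finished dialogue back to the character builder UI.
    @broadcaster.broadcast("character_#{character_id}", { text: dialogue })
    dialogue
  end
end
```

The enqueueing controller only needs the character id and prompt; everything slow happens after the response has already been sent.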

As for the audio file, after saving the character, a background job converts the audio into an iOS-compatible format in case it was uploaded as ogg, which iOS does not support. I've used ffmpeg to convert the audio file to mp4.
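
The conversion command could be assembled along these lines; the exact ffmpeg flags aren't shown in this post, so the ones below are illustrative:

```ruby
# Illustrative sketch of building the ffmpeg invocation. -y overwrites
# any existing output, -vn drops any video stream, and the .mp4
# extension tells ffmpeg to use the MP4 container.
def ffmpeg_convert_command(input_path)
  output_path = input_path.sub(/\.ogg\z/i, ".mp4")
  ["ffmpeg", "-y", "-i", input_path, "-vn", output_path]
end
```

The job would then hand this array to something like `system(*cmd)` so the paths never pass through a shell.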

As for text to speech, I've used Elevenlabs to generate the audio file, again via a background job. In addition, any change to the dialogue text triggers a new text-to-speech conversion. I've also written a small wrapper API to make it easier to use:

elevenlabs = Elevenlabs.new
audio_file = elevenlabs.text_to_speech("This is some text", {
  voice: Elevenlabs.voices[:rachel],
})
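
Internally, a wrapper like this can be little more than an HTTP request builder. The class below is a hedged sketch rather than the actual implementation; the endpoint and `xi-api-key` header follow ElevenLabs' public HTTP API, and the voice IDs are placeholders:

```ruby
require "json"
require "uri"

# Sketch of what the wrapper might do internally (not the real class).
class ElevenlabsSketch
  VOICES = { rachel: "voice-id-for-rachel" }.freeze # placeholder IDs

  def self.voices
    VOICES
  end

  def initialize(api_key)
    @api_key = api_key
  end

  # Builds the request without sending it, which keeps the shape easy
  # to test; the real wrapper would POST this with Net::HTTP and save
  # the returned audio bytes to a file.
  def build_request(text, voice:)
    {
      uri: URI("https://api.elevenlabs.io/v1/text-to-speech/#{voice}"),
      headers: { "xi-api-key" => @api_key, "Content-Type" => "application/json" },
      body: JSON.generate(text: text),
    }
  end
end
```

Keeping request construction separate from the actual network call also makes the background job trivial to unit-test.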

Then, the background job for the text-to-speech converter:

TextToSpeechConverterJob.perform id, "CharacterDialogue", "dialogue_text", "audio"

I've also made the audio converter job general purpose, rather than tied to a single model:

AudioConverterJob.perform self.id, "WorldSound", "sound"
AudioConverterJob.perform self.id, "CharacterDialogue", "audio"
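
The reason one job can serve multiple models is that it receives the model and attribute names as strings and resolves them at run time. Below is a minimal sketch of that pattern; the names are hypothetical, and the real jobs run under the app's background-job framework (in Rails, the registry lookup would typically be `model_name.constantize`):

```ruby
# Illustrative sketch of the generic-job pattern: no model is
# hard-coded, so the same job works for WorldSound, CharacterDialogue,
# or any future model with an audio attribute.
class AudioConverterJobSketch
  # registry maps a model name to something that can find records.
  def initialize(registry)
    @registry = registry
  end

  def perform(id, model_name, audio_attr)
    record = @registry.fetch(model_name).find(id)
    original = record.public_send(audio_attr)
    # Pretend-convert: the real job shells out to ffmpeg here.
    record.public_send("#{audio_attr}=", original.sub(/\.ogg\z/, ".mp4"))
    record
  end
end
```

Adding audio conversion to a new model is then just a matter of enqueueing the job with a different model name and attribute.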

Lastly, I've made a hook for the audio player which can be used like:

const {
    audioRef,
    loaded,
    setLoaded,
    isPlaying,
    setIsPlaying,
    togglePlaying,
    handleEnd,
    handleCanPlay,
} = useAudioControls();

return (
  <>
    <button
      type="button"
      className="px-1.5 py-1 inline-flex items-center gap-1 bg-gray-200 text-gray-900 border-b-2 border-gray-400 text-sm font-bold rounded hover:bg-red-600 hover:text-white focus:bg-red-600 focus:text-white"
      onClick={togglePlaying}
    >
      {isPlaying
        ? <HiPause className="h-5 w-5" aria-hidden="true" />
        : <HiPlay className="h-5 w-5" aria-hidden="true" />}
      {!loaded
        ? "Loading..."
        : isPlaying
          ? "Pause"
          : "Play"}
    </button>

    <audio ref={audioRef} onCanPlay={handleCanPlay} onEnded={handleEnd} />
  </>
);

Events Tab of the Character Builder which includes an editor using Rete.js
Events Tab of the Character Builder.

Lastly, the events editor uses ReteJS under the hood. While implementing it, I changed how the AI Type and AI Recipe inputs are displayed so they are better integrated into the editor.

Below is a demo of creating a character:

Below is a demo of updating a character:

For the deletion, I've added a confirm dialog to prevent accidental deletion.

Spawning a Character

In order to spawn a character into the game, I've added a Rete node that uses BB.world.createEntity.

Event's Tab of the Block Builder showing a 'On spawn' node connected to the 'AI Create Character' node.
Recipe for spawning a character.

A spawned character can also be updated using the World Editor:

Triggering Dialogue for Spawned Character

For the dialogue, it can be triggered using the interact key (E). I've used the SFX manager I made before, which uses PositionalAudio in ThreeJS under the hood.

Note: there are slight visual differences in the game here, since this was recorded a week after the other recordings and some changes had been pushed in between.

For the text dialogue, since we don't have proximity-based messages yet, I've decided to just display the text on the player that triggered the dialogue.

Triggering Character Events

For the character events, I've added listeners to the characters and used the Rete event interpreter (which I made and the team further improved) to trigger the AI events.

Admin Panel for Characters

I've also added admin pages so that admins can manage characters overall.

Admin Panel page for the World Characters.
On this page, admins can manage the characters.
Admin Panel page for the Character Dialogues.
On this page, admins can manage the dialogues. They can also listen to the audio version of the dialogues.
Admin Panel page for the Character Event Recipes.
On this page, admins can manage the event recipes. They can set 'default' recipes, which are shown to users as options they can use.

Wrench

PR Date: May 25, 2023

A further improvement was making the wrench usable not only on blocks but also on AIs. This is done by checking whether the target is an AI, using the ThreeJS Raycaster, whenever the user uses the wrench. For now, a reload is needed for the changes to take effect, but in the future this could be improved with real-time updates in the engine.