Project Vulcan: Bot Module



Vulcan is a new custom Engine built to work with Beach Core.

By default, Vulcan is a visual and collaborative rich notes and research tool. See this topic for more on that aspect.

This topic introduces and discusses the development of a powerful new module for Vulcan, called Bots.

Bots builds on the Flows concept that we introduced in the administrative area of the Beach Core Engine, but takes it to a whole new level for creating conversational flows that are deeply integrated with your application stack, as well as benefitting from integrations with outside services.

The work we’re doing here is the culmination of our learning: from designing chatbots visually using Mural, to coding and building chatbots for health, wellness, ticketing, intellectual property and crypto.


Our Flows feature in Beach Core laid the foundation for this new interface for building conversations. It provided us with the necessary backend data structures, core functionality and APIs. The UI, however, built into our Admin area, whilst convenient for building single Bots and conversations as an admin user of a Beach Platform instance, did not fulfil the entire vision.

The Flow building UI was good, but not great. The ability to create and view Flows in a directory tree view was good, but as this structure grew and the Bot became more complex, it quickly became unmanageable to navigate.

The linear list of Screens within each Flow was OK, until you needed to visualise complex relationships and navigations between Flows and Screens. Whilst the available Actions were limited, it was manageable; but as we add many more possible Actions and intermediary functions that determine the path we take through Flows, we need a different approach.

For now at least, the Flows admin will remain within the Administrative area and work as the default feature that ships with Beach Core.

What I am about to present will require the addition of the Vulcan Engine in order to take advantage of the full power of Vulcan Bots.

Vulcan Bots Editor

To give you an overview of the editor, I’ll walk through a brief example of creating a small customer service Bot for our Beach website. You can see a practical example that we built previously, here.

We will reproduce this throughout this topic, piece by piece, and even extend it with some very powerful new capabilities.

Let’s start with a quick setup.

We will create a Chatbot type project. I’ll name it Ulf Bot and hit create.

We now have a fresh new Project and an initial Chart to play with to create our new Conversation.

Double-click anywhere on the canvas and you’ll see this menu for the types of Objects you can place on the Chart.

Since our Conversation is going to start with an automated trigger of our Bot, named Ulf, we will start with a Question type of Object, which is basically another word for Bot Input. I will come back to the other two types in due course.

The Question object is a very powerful one, capable of many things. It is essentially the equivalent of our original Screen concept in Flows.

The anatomy of this Question object is basically the same, though we’ve made some improvements based on experience, to make it possible to iterate these much faster, especially when adding Header Message Variant text - basically the stuff the bot says in the message.

In the same way as Screens, you have access to rich media elements for the Body section, which will be sent as a multipart message in the conversation, and to Actions - any explicitly expected User inputs.

Let’s go ahead and reproduce the same starting point that we have in the bot from above, with a simple text message.

In the example, I had a larger block of text alongside the image, mainly because it was quite laborious to create each conversation step. In Vulcan I don’t have that constraint and can confidently break up the conversation without a massive time hit…

By simply dragging the connector between the two Questions, I create an automatic flow from one message to the next.
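Under the hood, dragging a connector presumably just records a directed edge between two Objects. Here's a rough sketch of how a Chart could represent that - the type and field names here are assumptions for illustration, not Vulcan's actual internals:

```typescript
// Hypothetical sketch: a Chart as a graph of Objects (nodes) and Connections (edges).
interface ChartObject {
  id: string;
  type: "question" | "answer" | "dialogflow";
  label: string;
}

interface Connection {
  from: string; // id of the source Object
  to: string;   // id of the target Object
}

// Follow the outgoing Connection from an Object to find the next step in the flow.
function nextStep(
  objects: ChartObject[],
  connections: Connection[],
  fromId: string
): ChartObject | undefined {
  const edge = connections.find((c) => c.from === fromId);
  return edge ? objects.find((o) => o.id === edge.to) : undefined;
}

const objects: ChartObject[] = [
  { id: "q1", type: "question", label: "Welcome message" },
  { id: "q2", type: "question", label: "Welcome image" },
];
const connections: Connection[] = [{ from: "q1", to: "q2" }];
```

With this shape, dragging the connector in the editor would simply append `{ from: "q1", to: "q2" }` to the Chart's connection list.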

To complete this message, I also need to add an image. I can do this by adding our image to the second Question object’s body section.

As you can see, we have already brought across a few of the rich components from Flows and will continue to add to these. I select the Image component.

I upload my Image, of course. You can also see that I can control the relative width of the image and have access to some powerful conditional logic features. For now I will just declare that I want the image to fill 100% width of the available space.

After saving this, we now have completed the initial messages sent by Ulf to the User.

I’ll digress from the example for a sec.

Next up, we would like to receive some feedback from the User. To do this, the simplest way (and often optimal way in terms of creating simple to navigate conversations) is to provide explicit and tightly controlled options for the User, in the form of Quick Replies.

Quick Replies are essentially just buttons that represent a typical User response. It could just be a single button or multiple. If you’ve worked with FB Messenger, then Quick Replies are different to buttons in terms of the way they are presented in the UI, but they are still effectively just buttons (when we enable Messenger as a client interface for Bots created here, it’ll be important to make that distinction, of course).

Let’s imagine that we expect the User to simply acknowledge Ulf at this point…

We will add a Button type action, with an appropriate label.

When we save this, our Question has a single Quick Reply option, that the User must press to continue the conversation.

Next, we will want to handle this response. For this, we will introduce the second type of object, the Answer object.

The Answer object is basically a User input.

We can create an expectation of the Answer we will receive. This is simple to do, since we just have one Quick Reply option.

You’ll notice that I have left the Action to default - more on Actions in another post. The default Action will just move to the next step in the Flow.

I can also list the value of the Answer that I expect to receive. Here I have put “hi”, though in my button, I set the label text as “Hi Ulf :wave:”.

So I need to go and tweak my Quick Reply. Opening the Button, I click “Add Action” and in the Send Answer, I input the value “hi”. This will be the value that will be sent when the User clicks this button. The label text is just for show…
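The label/value split can be pictured as data. This is a hypothetical sketch (the interface and function names are assumptions), showing why the Answer object matches on the sent value rather than the display label:

```typescript
// Hypothetical sketch of the Quick Reply label/value split described above.
interface QuickReply {
  label: string; // shown in the chat UI - just for show
  value: string; // sent as the Answer when the button is clicked
}

const greetButton: QuickReply = { label: "Hi Ulf :wave:", value: "hi" };

// An Answer object matches on the sent value, never on the display label.
function matchesAnswer(received: string, expected: string): boolean {
  return received.trim().toLowerCase() === expected.trim().toLowerCase();
}
```

So `matchesAnswer(greetButton.value, "hi")` succeeds, while matching against the label text would not.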

Finally I simply connect these two objects.

We now have the start of a conversation, with both an opening set of messages from Ulf and a very simple and controlled response from the User.

Let’s expand this a bit further. Let’s add a second quick reply, in this case, if our User is in need of urgent assistance, they can immediately declare it and force the flow down that path, by clicking on the “I need help!” button. I have added this, along with the Action and value “help”.

We now need to handle the case where the User has clicked this button, so we have to handle the Answer. We already know how to do this…

It’s at this point that our Flow can fork in different directions, based on the User input. Before we continue with that, let’s look at a few more things and add a bit more natural language to our initial message from Ulf.

Message Variants

It is pretty important, especially for a flow that will be seen repeatedly, for it to feel natural and fresh, even if it’s something as repetitive and mundane as saying hello every day. To achieve this without over-complicating your Flows, one simple technique is to add multiple variants of the message that carry the same meaning, just rephrased.

The selection of the message will be randomised.
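As a minimal sketch of that randomised selection (`pickVariant` is a hypothetical helper, not Vulcan's actual implementation; the injectable `rand` parameter just makes the behaviour easy to test):

```typescript
// Pick one of several equivalent phrasings at random, so repeated flows feel fresh.
function pickVariant(variants: string[], rand: () => number = Math.random): string {
  if (variants.length === 0) {
    throw new Error("at least one Message Variant is required");
  }
  const index = Math.floor(rand() * variants.length);
  return variants[index];
}

const greetings = [
  "Hi there! I'm Ulf.",
  "Hello! Ulf here.",
  "Hey, Ulf at your service!",
];
// Each time the Question is sent, one of the three greetings is chosen.
```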

It will also be possible to add State-based messages, with more fine-tuned control over the conditional logic behind the selection of certain messages.

Free Text Input

Our original Flows feature was only capable of handling very explicitly controlled user inputs, from the available Actions components - Buttons, Input Fields (with strict value types - email, number, text, phone number, date / time etc.) and Select boxes.

The most flexible of these was the input field, but generally it would be used when there was an expected type of response value and handled in only a very limited number of ways. The Text type input would only know it had received some string value, but wouldn’t be able to do much with it.

But the best chatbots successfully combine controlled flows with natural, conversational input: receiving User text messages and determining what they mean through Intent recognition, given the Context of the conversation’s current state.

This is a feature of the industry-leading Natural Language Processing (NLP) services, such as Google’s Dialogue Flow (formerly API.AI).

In Vulcan Bots, we’re enabling deep integration with NLP to advance your conversation building capabilities.

We will talk about this in much more detail in following posts, but here’s how we can introduce it to our example conversation.

Implementing NLP

So, we already have 2 quick replies that can determine explicitly the flow that the conversation will take. But we also want the user to be able to input a message at any time and be able to move the conversation forward according to what we believe to be the intent of the User, based on their message.

Our client app will have a free text message input component available to the User. Let’s imagine that the User didn’t select any one of our quick replies, but instead typed “hello Ulf”. How would we handle that?

One option could be to add this as a variant to the Answers, like this.

But then, what if there was a slight difference - “hi Ulf”, “hi ulf”, “hello there”? It would be impossible to list all of the exact variants. So we need to run the User response through a machine-learning engine to try to predict what we think they mean, with a certain degree of confidence.

We can do this using Dialogue Flow’s NLP machine learning capabilities. Add a Dialogue Flow object, the third type, to our Flow.

Optionally - though I’d recommend it - we provide a Context. This Context tells Dialogue Flow where we are in the conversation, so it knows what type of meaning the User input is likely to have and can therefore determine the appropriate Intent. I have provided the Context “Welcome”.

Contexts, Intents, Entities and other terms you’ll hear are key concepts of conversational design and building using tools such as Dialogue Flow. I recommend reading my Medium post to get a basic understanding of these terms.

Dialogue Flow Setup

I have signed into my DF account and created an Agent “UlfDemo”.

I have created a single new Intent, called “welcome”

Within this Intent, I have set the Context to the same value as the Context we provided with the User text, via the DF component in the Vulcan flow. As you can see, this Context, once set, will survive for 5 further Intents in the flow, so the Context can be “remembered”.

I have also added a number of example phrases, in similar fashion to our first example, but with a significant difference. These are training samples that will inform DF’s predictive engines and, as such, DF will be able to handle all sorts of variants of these inputs and still reach the same conclusion - without us having to explicitly define them all up front. DF will use the Context and this training data to return to us, hopefully, the “welcome” Intent. Should we receive this Intent back from DF, then we will need to handle it accordingly.
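To make the round trip concrete, here's a hedged sketch of the exchange with Dialogue Flow. The helper names and the project/session IDs are my own placeholders, but the payload shapes follow Dialogflow's documented V2 `detectIntent` REST API:

```typescript
// Hedged sketch of a Dialogflow V2 detectIntent exchange.
interface DetectIntentRequest {
  queryInput: { text: { text: string; languageCode: string } };
  queryParams?: { contexts: { name: string; lifespanCount: number }[] };
}

interface DetectIntentResponse {
  queryResult?: {
    intent?: { displayName?: string };
    intentDetectionConfidence?: number;
  };
}

function buildDetectIntentRequest(userText: string, context?: string): DetectIntentRequest {
  const request: DetectIntentRequest = {
    queryInput: { text: { text: userText, languageCode: "en" } },
  };
  if (context) {
    // Context names are full resource paths; project and session here are placeholders.
    request.queryParams = {
      contexts: [
        {
          name: `projects/ulf-demo/agent/sessions/demo-session/contexts/${context}`,
          lifespanCount: 5,
        },
      ],
    };
  }
  return request;
}

// Pull out the matched Intent's display name, so the flow can route to the
// Answer object whose value equals it (e.g. "welcome").
function extractIntent(response: DetectIntentResponse): string | null {
  return response.queryResult?.intent?.displayName ?? null;
}
```

The request built for “hello Ulf” with the “Welcome” Context would be POSTed to the agent's `detectIntent` endpoint, and the extracted Intent name then drives which Answer object the flow continues through.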

I have added an Answer object, with the value equal to the Intent name that I expect to receive. Should that be the case we can continue our conversation.

However, given the flexible nature of what the User could possibly decide to write, we may also want to handle both predictable Intents and unexpected ones at this stage… we’ll cover that in later posts.

Core - evolution of our infrastructure

Object Picker Component

Double-click on the Chart canvas area and you’ll find a new instance of the Object Picker component added for you, awaiting your next move. The next thing to do is decide what the Object should be.

We’ve now got a growing list of Object types, so the Object Picker component becomes pretty important.

We have 2 categories of Objects now, Core and Chatbot.

Core Objects include primitives - such as Text and Shapes. The category also contains richer Object types: the General object, which can be used for taking notes; the Person object, representing a, well, person obviously, as a kind of address book entry; and the Organisation object, representing a company, business or some other entity.

All of these Objects can be Symbolised - that is, created with a Master, with copies placed in different Charts. There is no need to maintain the content of each instance; they are all updated via the Master copy.
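One way to picture Symbolisation - the shapes below are assumed for illustration, not Vulcan's actual data model:

```typescript
// Hypothetical sketch of Symbolised Objects: each instance stores only a
// reference to its Master, so edits to the Master propagate to every copy.
interface MasterObject {
  id: string;
  content: string;
}

interface SymbolInstance {
  masterId: string; // which Master this copy points at
  chartId: string;  // which Chart the copy is placed on
}

// Resolve an instance's content through its Master.
function resolveContent(
  instance: SymbolInstance,
  masters: Map<string, MasterObject>
): string {
  const master = masters.get(instance.masterId);
  if (!master) {
    throw new Error(`Missing master object: ${instance.masterId}`);
  }
  return master.content;
}
```

Because instances hold a reference rather than a copy of the content, updating the Master once updates every placed copy.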

The second category of Object Types are related to our Chatbot module. If you have this enabled in your project, then you will have access.


Chatbot Entry

Signifies the entry point to your conversation.


Add conditional logic to filter data and direct the conversation accordingly.

Bot Message

Contains the Message Variants, Body content components and User Actions. Used whenever you want your Bot to say something to the User.


Dialogue Flow

You can pass user input from the custom text field into DialogueFlow, Google’s NLP machine-learning platform, and hook it up to one of your agents. You can then handle the DF response accordingly in your flows.



Router

You can jump to all manner of actions using the Router component, such as jumping to another Bot Message in another flow, or even in another Chart in your project, or triggering actions such as loading a webpage, writing to the database or calling a webhook… the list of actions will continue to expand.


User Message

When you want to display a User message in the Chat history, such as the text selected from a quick reply, you pass it through a User Message object.

All of these objects can be used together in a flow, related through Connections, in order to build your conversations and give life to your amazing Bots.


I recently wrote a follow up to my original Conversation Design Using Mural article for Chatbots Magazine on Medium, this time talking all about how we can use Vulcan’s Chatbot Module to Design & Build Chatbots.

It provides a detailed breakdown of the core concepts behind Vulcan and specifically a look at the features of the Bot Module.

And a few Toothless GIFs, so all’s good in the world.


A couple of new Bot Module Components

Endpoint Object

The Endpoint Object enables you to make REST API calls through our Proxy endpoint to your own endpoints, and to handle the response in the context of your Conversations.

This is very powerful.

You can create and store your Endpoint Requests as templates (great for re-use) in the Project Settings.

Then via the Endpoint Object, you can override your templated headers and params (if you need to).

When you receive your response, you can parse it from the signal and pluck and save any data you want to keep in the Model for use within the flow.
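As a rough sketch of that template-override-and-pluck pattern - the field names and helpers here are assumptions, not Vulcan's actual internals:

```typescript
// Hypothetical sketch of the Endpoint Object pattern: a stored request template,
// per-object overrides, and "plucking" values from the parsed response into the Model.
interface EndpointTemplate {
  url: string;
  method: "GET" | "POST";
  headers: Record<string, string>;
}

// Merge per-object overrides over the template stored in Project Settings.
function applyOverrides(
  template: EndpointTemplate,
  overrides: Partial<EndpointTemplate>
): EndpointTemplate {
  return {
    ...template,
    ...overrides,
    headers: { ...template.headers, ...(overrides.headers ?? {}) },
  };
}

// Pluck a dotted path (e.g. "user.name") out of a parsed response body.
function pluck(body: unknown, path: string): unknown {
  return path
    .split(".")
    .reduce<any>((node, key) => (node == null ? undefined : node[key]), body);
}
```

A flow step might override just the `Authorization` header of a stored template, fire the request via the Proxy, then `pluck` something like `user.name` from the response into the Model for later messages.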

Render Template Object

This Object type enables you to specify and call custom Components into the conversation. Both components from the global components directory and your own custom components can be used.

Pass props data into the component manually or from your contextual data held in the Model or Signal state.
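A sketch of how that prop resolution might look - the `PropSource` shape is an assumption for illustration:

```typescript
// Hypothetical sketch of Render Template prop resolution: each prop is either a
// literal value or a lookup into the contextual Model state.
type PropSource = { literal: unknown } | { fromModel: string };

function resolveProps(
  spec: Record<string, PropSource>,
  model: Record<string, unknown>
): Record<string, unknown> {
  const props: Record<string, unknown> = {};
  for (const [key, source] of Object.entries(spec)) {
    props[key] = "literal" in source ? source.literal : model[source.fromModel];
  }
  return props;
}
```

So a Render Template step could pass `{ title: { literal: "Your tickets" }, userName: { fromModel: "userName" } }` and have the second prop filled in from whatever the conversation has stored in the Model.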

:rocket: :rocket:


Basic Objects

Creating awesome chatbot flows or rich knowledge maps wouldn’t be the same without the ability to add visual call-outs to assist in presenting information in a meaningfully visual way.

We created a suite of initial primitive objects that can be used in any Project type.


The Rectangle is a simple primitive shape, but very versatile. It can be used as a digital sticky note or a flow diagram node.


The Text object enables you to add Titles, Paragraphs and annotations directly to the canvas.


Add additional callouts and visual cues with Arrows. We will add more line options and line-end options soon as well as the ability to connect arrow ends to the Objects.


Comments are somewhat less primitive - they enable real-time discussions to take place at any time, relating to any item on the canvas. This is extremely useful: you can keep your notes and long-form research in the rich editor, but hold topical conversations about the content inside a separate interface element.

We will be adding full real-time support, notifications, @replies and other fun stuff soon.



You can easily create explicit clusters or groups of objects using the, er, Group Object.