This guide is for dashboard users. Everything below happens inside the Kakiyo dashboard. No code or API calls required.
The Sandbox is a safe testing environment where you can simulate conversations with your AI agent before it talks to real prospects. Use it to validate your prompts, offerings, and follow-up sequences without risking your reputation or wasting credits on poorly configured campaigns.

How to Access the Sandbox

The Sandbox is campaign-specific. You need a campaign before you can use it.
  1. In the sidebar, click Campaigns.
  2. Click on the campaign you want to test.
  3. Click the Sandbox tab (or the Test in Sandbox button).
  4. The Sandbox chat interface opens.
If the Sandbox tab is not visible, make sure your campaign has an offering and prompt assigned. Both are required for the Sandbox to work. Your LinkedIn account does not need to be connected to use the Sandbox.

How the Sandbox Works

The Sandbox simulates a real LinkedIn conversation between your AI agent and a fictional prospect. Here’s how it differs from real conversations:
| | Sandbox | Real Campaign |
|---|---|---|
| Messages sent to LinkedIn | No | Yes |
| Credits consumed | Minimal (testing credits) | Full credits |
| Prospect is real | No — you play the prospect | Yes |
| AI uses your offering + prompt | Yes | Yes |
| Follow-up messages work | Yes | Yes |

Running a Sandbox Test

Step 1: Start a New Conversation

  1. In the Sandbox, click New Conversation (or start typing).
  2. The AI agent sends its first message based on your First Message Prompt.
  3. Review the message: is it personalized? Does it match the tone you configured? Is the CTA clear?

Step 2: Play the Prospect

Now you role-play as the prospect to test how the AI handles different scenarios:
  1. Type a reply as if you were a real prospect.
  2. The AI responds based on your Context Prompt and Offering.
  3. Continue the conversation to test specific scenarios.

Step 3: Test Different Prospect Types

Run multiple conversations to cover these common scenarios:
| Prospect type | What to test | Example message |
|---|---|---|
| Interested | Does the AI qualify correctly and book a meeting? | “That sounds interesting, tell me more about pricing.” |
| Skeptical | Does the AI handle objections without being pushy? | “I’m not sure this is right for us. We already use a competitor.” |
| Busy | Does the AI respect the prospect’s time? | “Not a good time, maybe later.” |
| Not interested | Does the AI accept rejection gracefully? | “No thanks, not interested.” |
| Off-topic | Does the AI stay on track? | “Do you know any good restaurants nearby?” |
| Aggressive | Does the AI remain professional? | “Stop messaging me, this is spam!” |

Step 4: Test Follow-Up Messages

If you have follow-up messages configured (see Follow-up Messages):
  1. In the Sandbox conversation, do not reply to the AI’s first message.
  2. Click the Trigger Follow-up button (or wait for the simulated delay).
  3. The AI sends the first follow-up message.
  4. Repeat to test the second and third follow-ups.
  5. Verify that each follow-up adds new value and doesn’t repeat the same content.

What to Look For

When reviewing Sandbox conversations, check:
| Aspect | What to verify |
|---|---|
| Tone | Does the AI match the personality you configured? Professional, casual, friendly? |
| Product knowledge | Does the AI accurately describe your product based on the offering? |
| Qualification | Does the AI correctly identify qualified vs. unqualified prospects? |
| Boundaries | Does the AI respect the rules in your Context Prompt (e.g., “never discuss pricing”)? |
| Call to action | Does the AI propose a clear next step (book a call, share a link)? |
| Handling objections | Does the AI address concerns without being defensive or pushy? |
| Message length | Are messages concise and readable, or do they read like a wall of text? |

Adjusting Based on Sandbox Results

If the AI doesn’t behave as expected, here’s what to adjust:
| Problem | Where to fix it |
|---|---|
| AI doesn’t know enough about your product | Edit your Offering: go to Offerings in the sidebar, click the offering, and edit it |
| AI tone is wrong | Edit the Context Prompt: go to Prompts in the sidebar, click the prompt, and edit the context |
| First message is off | Edit the First Message Prompt: go to Prompts, click the prompt, and edit the first message |
| AI qualifies too easily or too strictly | Adjust the qualification criteria in the Context Prompt |
| Follow-up messages are repetitive | Edit the follow-up templates in the campaign’s Follow-ups tab |
| AI gives incorrect information | Add or correct facts in the Offering |
After making changes, run the Sandbox again to verify the improvement. Repeat until you’re satisfied with the AI’s behavior.

Sandbox Best Practices

  1. Test before every campaign launch — always run at least 3–5 Sandbox conversations before going live.
  2. Test after every prompt or offering change — even small edits can change behavior.
  3. Test with different models — if you’re considering switching AI models, compare Sandbox outputs (see How to Choose Your AI Model).
  4. Save examples — note down particularly good or bad Sandbox conversations to reference when optimizing.
  5. Involve your team — have team members play the prospect for more realistic testing.

Last modified on March 11, 2026