This guide is for dashboard users. Everything below happens inside the Kakiyo dashboard. No code or API calls required.
The Sandbox is a safe testing environment where you can simulate conversations with your AI agent before it talks to real prospects. Use it to validate your prompts, offerings, and follow-up sequences without risking your reputation or wasting credits on poorly configured campaigns.
How to Access the Sandbox
The Sandbox is campaign-specific. You need a campaign before you can use it.
- In the sidebar, click Campaigns.
- Click on the campaign you want to test.
- Click the Sandbox tab (or the Test in Sandbox button).
- The Sandbox chat interface opens.
If the Sandbox tab is not visible, make sure your campaign has an offering and prompt assigned. Both are required for the Sandbox to work. Your LinkedIn account does not need to be connected to use the Sandbox.
How the Sandbox Works
The Sandbox simulates a real LinkedIn conversation between your AI agent and a fictional prospect. Here’s how it differs from real conversations:
| Aspect | Sandbox | Real Campaign |
|---|---|---|
| Messages sent to LinkedIn | No | Yes |
| Credits consumed | Minimal (testing credits) | Full credits |
| Prospect is real | No (you play the prospect) | Yes |
| AI uses your offering + prompt | Yes | Yes |
| Follow-up messages work | Yes | Yes |
Running a Sandbox Test
Step 1: Start a New Conversation
- In the Sandbox, click New Conversation (or start typing).
- The AI agent sends its first message based on your First Message Prompt.
- Review the message: Is it personalized? Does it match the tone you configured? Is the call to action clear?
Step 2: Play the Prospect
Now you role-play as the prospect to test how the AI handles different scenarios:
- Type a reply as if you were a real prospect.
- The AI responds based on your Context Prompt and Offering.
- Continue the conversation to test specific scenarios.
Step 3: Test Different Prospect Types
Run multiple conversations to cover these common scenarios:
| Prospect type | What to test | Example message |
|---|---|---|
| Interested | Does the AI qualify correctly and book a meeting? | “That sounds interesting, tell me more about pricing.” |
| Skeptical | Does the AI handle objections without being pushy? | “I’m not sure this is right for us. We already use a competitor.” |
| Busy | Does the AI respect the prospect’s time? | “Not a good time, maybe later.” |
| Not interested | Does the AI accept rejection gracefully? | “No thanks, not interested.” |
| Off-topic | Does the AI stay on track? | “Do you know any good restaurants nearby?” |
| Aggressive | Does the AI remain professional? | “Stop messaging me, this is spam!” |
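If you test these scenarios regularly, a small run sheet keeps the role-play systematic. The sketch below is entirely optional and runs locally, outside the dashboard (the Sandbox itself needs no code); the scenario names, sample messages, and pass criteria simply restate the table above.

```python
# Optional local run sheet for Sandbox role-play sessions.
# This does not talk to Kakiyo; it only prints a checklist to follow.

SCENARIOS = [
    ("Interested", "That sounds interesting, tell me more about pricing.",
     "AI qualifies correctly and moves toward booking a meeting"),
    ("Skeptical", "I'm not sure this is right for us. We already use a competitor.",
     "AI handles the objection without being pushy"),
    ("Busy", "Not a good time, maybe later.",
     "AI respects the prospect's time"),
    ("Not interested", "No thanks, not interested.",
     "AI accepts the rejection gracefully"),
    ("Off-topic", "Do you know any good restaurants nearby?",
     "AI steers the conversation back on track"),
    ("Aggressive", "Stop messaging me, this is spam!",
     "AI stays professional"),
]

for name, message, pass_criterion in SCENARIOS:
    print(f"[ ] {name}")
    print(f"    Paste as the prospect: {message}")
    print(f"    Pass if: {pass_criterion}\n")
```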
Step 4: Test Follow-Up Messages
If you have follow-up messages configured (see Follow-up Messages):
- In the Sandbox conversation, do not reply to the AI’s first message.
- Click the Trigger Follow-up button (or wait for the simulated delay).
- The AI sends the first follow-up message.
- Repeat to test the second and third follow-ups.
- Verify that each follow-up adds new value and doesn’t repeat the same content.
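Repetition is easiest to judge by reading the drafts side by side, but if you want a rough numeric sanity check, comparing the word overlap between follow-ups works. This is an optional local sketch, not a Kakiyo feature; paste your actual follow-up text into the list, and treat the 50% threshold as a starting guess rather than a rule.

```python
# Optional sanity check: flag follow-up drafts that reuse too many words.
# Purely illustrative; replace the placeholder strings with your drafts.

def word_set(text: str) -> set[str]:
    """Lowercase the text and split it into a set of words."""
    return set(text.lower().split())

followups = [
    "First follow-up draft goes here.",
    "Second follow-up draft goes here.",
    "Third follow-up draft goes here.",
]

for i in range(len(followups)):
    for j in range(i + 1, len(followups)):
        a, b = word_set(followups[i]), word_set(followups[j])
        overlap = len(a & b) / len(a | b)  # Jaccard similarity, 0.0-1.0
        if overlap > 0.5:  # threshold is a judgment call, not a Kakiyo rule
            print(f"Follow-ups {i + 1} and {j + 1} overlap heavily "
                  f"({overlap:.0%}); consider adding new value to one.")
```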
What to Look For
When reviewing Sandbox conversations, check:
| Aspect | What to verify |
|---|---|
| Tone | Does the AI match the personality you configured? Professional, casual, friendly? |
| Product knowledge | Does the AI accurately describe your product based on the offering? |
| Qualification | Does the AI correctly identify qualified vs. unqualified prospects? |
| Boundaries | Does the AI respect the rules in your Context Prompt? (e.g., “never discuss pricing”) |
| Call to action | Does the AI propose a clear next step (book a call, share a link)? |
| Handling objections | Does the AI address concerns without being defensive or pushy? |
| Message length | Are messages concise and readable, or do they run long and read like walls of text? |
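To spot patterns across several test conversations, you can tally these checks in a simple scorecard. As with the other snippets in this guide, this is an optional local sketch rather than a dashboard feature; the aspect names mirror the table above, and the pass/fail values come from your own manual review.

```python
# Optional scorecard: tally pass/fail per aspect across Sandbox reviews.
# Runs locally; fill in one dict per conversation you reviewed.

ASPECTS = ["Tone", "Product knowledge", "Qualification", "Boundaries",
           "Call to action", "Handling objections", "Message length"]

reviews = [
    {"Tone": True, "Product knowledge": True, "Qualification": False,
     "Boundaries": True, "Call to action": True,
     "Handling objections": False, "Message length": True},
    # ...add one entry per Sandbox conversation you review
]

for aspect in ASPECTS:
    passes = sum(review.get(aspect, False) for review in reviews)
    print(f"{aspect}: {passes}/{len(reviews)} conversations passed")
```

Aspects that fail across multiple conversations point you at the fixes in the next section.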
Adjusting Based on Sandbox Results
If the AI doesn’t behave as expected, here’s what to adjust:
| Problem | Where to fix it |
|---|---|
| AI doesn’t know enough about your product | Edit your Offering — go to Offerings in the sidebar, click the offering, edit |
| AI tone is wrong | Edit the Context Prompt — go to Prompts in the sidebar, click the prompt, edit context |
| First message is off | Edit the First Message Prompt — go to Prompts, click the prompt, edit first message |
| AI qualifies too easily or too strictly | Adjust qualification criteria in the Context Prompt |
| Follow-up messages are repetitive | Edit follow-up templates in the campaign’s Follow-ups tab |
| AI gives incorrect information | Add or correct facts in the Offering |
After making changes, run the Sandbox again to verify the improvement. Repeat until you’re satisfied with the AI’s behavior.
Sandbox Best Practices
- Test before every campaign launch — always run three to five Sandbox conversations before going live.
- Test after every prompt or offering change — even small edits can change behavior.
- Test with different models — if you’re considering switching AI models, compare Sandbox outputs (see How to Choose Your AI Model).
- Save examples — note down particularly good or bad Sandbox conversations to reference when optimizing (see the sketch after this list for one lightweight way to file them).
- Involve your team — have team members play the prospect for more realistic testing.
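For the save-examples habit mentioned above, copying transcripts into timestamped files is usually enough. Here is a minimal sketch, assuming you paste the conversation in by hand; the file naming is just a suggestion, and none of this is required by Kakiyo.

```python
# Optional helper: save a copied Sandbox transcript to a timestamped file.
# Paste the conversation into `transcript`; nothing here touches Kakiyo.

from datetime import datetime
from pathlib import Path

transcript = """\
AI: <first message here>
Prospect: <your role-played reply here>
"""

label = "skeptical-prospect"  # your own tag for the scenario you tested
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
path = Path(f"sandbox-{label}-{stamp}.txt")
path.write_text(transcript, encoding="utf-8")
print(f"Saved transcript to {path}")
```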