AI Personas
AI Personas define the identity, intelligence, behavior, and capabilities of your custom agents in Fabrix.ai. A persona determines how the AI thinks, which tools it can use, which models power it, and how it processes conversation history.
Personas act as the brain of your agent, combining LLMs, toolsets, and prompt templates into a single intelligent workflow.
Where Personas Are Managed
Inside each AI Project → MCP → Personas, users can:
- View all personas in the project
- Add new personas
- Edit persona configuration
- Assign LLMs
- Define guardrails
- Configure toolset & prompt access policies
Persona Configuration Fields (Add Persona Modal)
When creating a new persona, you fill out several key fields:
1. Basic Details
Name
A human-friendly name for the persona.
Examples:
- "Resume Agent"
- "AIOps Troubleshooter"
- "Salesforce Bellbox Agent"
Description
Used for listing and UI display.
This does not control behavior (that comes from prompts and system instructions).
Color
A unique color tag used only for UI identity.
2. Introductory Prompt
These are the default prompt messages preloaded when a conversation starts.
Purpose of the Introductory Prompt
- Shows predefined instructions to the user
- Provides starting buttons or guidance
- Executes automatically when clicked
- Often includes sample queries or onboarding steps
Example:
Welcome! I can help you analyze logs, troubleshoot incidents, and identify root causes.
Try asking:
- "Analyze errors in the last 1 hour"
- "Find anomalies in CPU metrics"
Multiple prompts can be added; each is a quick-start action.
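As a concrete illustration, multiple quick-start prompts might be represented in a persona payload like this. This is a minimal sketch only: the field names `introductory_prompts`, `label`, and `message` are assumptions for illustration, not the actual Fabrix.ai schema.

```python
# Hypothetical persona payload with two quick-start prompts.
# Field names are illustrative assumptions, not the real Fabrix.ai schema.
persona = {
    "name": "AIOps Troubleshooter",
    "introductory_prompts": [
        {"label": "Analyze recent errors",
         "message": "Analyze errors in the last 1 hour"},
        {"label": "Check CPU health",
         "message": "Find anomalies in CPU metrics"},
    ],
}

def quick_start_actions(p: dict) -> list[str]:
    """Return the button labels a UI would render when the conversation starts."""
    return [prompt["label"] for prompt in p["introductory_prompts"]]

print(quick_start_actions(persona))
# ['Analyze recent errors', 'Check CPU health']
```

Each entry behaves like a button: clicking it sends the underlying `message` as the user's first query.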
3. LLM Selection
You must select one or more LLMs that the persona can use.
This determines:
- Model availability
- Routing rules
- Final response formatting
Example options:
- gpt-4.1
- claude-sonnet-4
- gpt-4o
- prod-claude-sonnet-4
You can select multiple models; the system internally chooses the best model for each instruction.
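The actual selection logic is internal to Fabrix.ai and not documented here, but the idea can be sketched as keyword-based routing over the allowed models. Everything below (the keywords, the "Claude for analysis" preference) is a hypothetical illustration, not the real routing rule.

```python
# Hypothetical sketch of instruction-based model routing.
# The real selection logic is internal to Fabrix.ai; this only illustrates the idea.
def pick_model(instruction: str, allowed: list[str]) -> str:
    """Prefer an analysis-tier model for troubleshooting tasks, else the first allowed model."""
    analysis_keywords = ("analyze", "root cause", "troubleshoot")
    if any(k in instruction.lower() for k in analysis_keywords):
        for model in allowed:
            if "claude" in model:  # assumption: treat Claude models as the analysis tier
                return model
    return allowed[0]

models = ["gpt-4.1", "claude-sonnet-4", "gpt-4o"]
print(pick_model("Analyze errors in the last 1 hour", models))  # claude-sonnet-4
print(pick_model("Summarize this ticket", models))              # gpt-4.1
```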
4. Guardrails (Optional)
Guardrails are rule-based constraints to control the assistant's behavior.
Examples:
- Prevent sharing classified data
- Limit actions to specific domains
- Enforce safe response patterns
If no guardrails are created, this section will show "No data available".
Access Policy (Most Important Section)
The Access Policy defines:
- Which MCP server this persona talks to
- Which toolsets it is allowed to use
- Which prompt templates it can access
- Whether to optimize conversation history using AI
- Whether to format final responses using another LLM
- Whether mandatory internal tools auto-execute
This is where you connect Toolsets + Prompt Templates + System Behavior.
Access Policy JSON Example
[
{
"mcpserver": "rdaf",
"toolset_pattern": "aiops.*|snmp.*|syslog.*|backup.*|network.*|context.*|common.*|post_to.*",
"prompt_templates_pattern": "incident_remediate_recommend.*",
"optimizeHistoryUsingAI": true,
"formatFinalResponseUsingAI": true,
"enableLearning": true,
"system_instruction_name": "default system interaction",
"prepolulateMandatoryTools": true
}
]
Detailed Explanation of Access Policy Fields
mcpserver
Specifies which MCP server the persona is connected to.
Example: "mcpserver": "rdaf"
toolset_pattern
A regex pattern of the toolsets the persona is allowed to use.
Example: "toolset_pattern": "aiops.*|snmp.*|syslog.*|backup.*|network.*|context.*|common.*|post_to.*"
This allows:
- AIOps tools
- Network automation tools
- Context cache tools
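To see how such a pattern filters toolsets, here is a minimal Python sketch. It assumes full-match semantics for the policy regex (an assumption; the match mode is not specified above), and the toolset names are made up for illustration.

```python
import re

# The toolset_pattern from the Access Policy example above.
TOOLSET_PATTERN = r"aiops.*|snmp.*|syslog.*|backup.*|network.*|context.*|common.*|post_to.*"

def toolset_allowed(name: str) -> bool:
    """Check a toolset name against the persona's regex (assuming full-match semantics)."""
    return re.fullmatch(TOOLSET_PATTERN, name) is not None

# Hypothetical toolset names, purely for illustration.
print(toolset_allowed("aiops.analyze_logs"))  # True
print(toolset_allowed("network.ping"))        # True
print(toolset_allowed("billing.export"))      # False (no branch matches)
```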
prompt_templates_pattern
Defines, via regex, which prompt templates this persona can use.
Example: "prompt_templates_pattern": "incident_remediate_recommend.*"
optimizeHistoryUsingAI
Enables internal SLMs to compress earlier conversation content to save tokens.
- Better handling of long conversations
- Lower cost
- Faster responses
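The compression itself is done by internal SLMs, but the general shape of the optimization can be sketched as "keep the most recent turns verbatim, collapse the rest into a summary." The stubbed summary string below stands in for the SLM output; this is an illustration of the idea, not the actual implementation.

```python
# Illustrative sketch of history optimization: keep recent turns, summarize the rest.
# In the real system an internal SLM produces the summary; here it is stubbed.
def optimize_history(turns: list[str], keep_recent: int = 2) -> list[str]:
    """Replace older turns with a single summary placeholder to save tokens."""
    if len(turns) <= keep_recent:
        return turns
    older, recent = turns[:-keep_recent], turns[-keep_recent:]
    summary = f"[summary of {len(older)} earlier turns]"  # SLM output stub
    return [summary] + recent

history = ["turn 1", "turn 2", "turn 3", "turn 4", "turn 5"]
print(optimize_history(history))
# ['[summary of 3 earlier turns]', 'turn 4', 'turn 5']
```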
formatFinalResponseUsingAI
Before sending the final content to the user, the system sends the LLM output to another model for formatting (HTML, markdown, structure, etc.).
This improves:
- HTML dashboards
- Structured reports
- Readability
enableLearning
Allows the persona to improve using local learning mechanisms.
system_instruction_name
References a predefined system instruction stored in MCP.
Example: "system_instruction_name": "default system interaction"
prepolulateMandatoryTools
Automatically runs required MCP tools without making the main LLM decide.
Tools that auto-run every time:
- get_conversation_history
- list_prompt_templates_by_persona
- get_persona_details
This guarantees:
- Consistent context
- Stable multi-step workflows
- Faster tool selection
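The pre-population step above can be sketched as running every mandatory tool before the main LLM is consulted. The tool implementations here are stubs; a real MCP client would invoke each tool on the configured server.

```python
# Sketch of mandatory-tool pre-population: the three tools listed above run
# up front, before the main LLM reasons about the request. Tools are stubbed.
MANDATORY_TOOLS = [
    "get_conversation_history",
    "list_prompt_templates_by_persona",
    "get_persona_details",
]

def run_tool(name: str) -> str:
    # Stub: a real MCP client would call the tool on the server here.
    return f"<{name} result>"

def prepopulate_context() -> dict[str, str]:
    """Execute every mandatory tool so the LLM starts with full context."""
    return {name: run_tool(name) for name in MANDATORY_TOOLS}

context = prepopulate_context()
print(sorted(context))
```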
Final Persona Workflow Summary
When the persona is created, it determines:
- Behavior → from the Introductory Prompt and System Instructions
- Tools it can use → from toolset_pattern
- Prompt workflows → from prompt_templates_pattern
- Models powering it → from the LLM selection
- Safety rules → from Guardrails
- Internal optimization → from optimizeHistoryUsingAI and formatFinalResponseUsingAI