
v0.5: Separate persona-directed vs goal-oriented dialogue modes #6

@MakiDevelop

Description

Background

Per arXiv:2512.12775 (EACL 2026), persona fidelity degrades systematically faster when an LLM must simultaneously:

  • maintain the persona (roleplay the interviewee), and
  • follow task instructions (do the work)

than in pure persona-directed dialogue (just being the interviewee).

Proposal

Architecturally separate two response modes:

  1. persona_mode — pure roleplay. Bot responds in the interviewee's voice as if having a conversation. No task constraints.
  2. task_mode — goal-oriented. Bot drafts an email / replies to a candidate / generates a LinkedIn post. The persona acts as a style filter applied to a task-completion response, not as the primary directive.

task_mode would use stronger persona re-injection (every 5 turns, not every 20) and prefer PPA Stage 3 with more aggressive refinement.
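The tighter re-injection cadence could be sketched roughly as below. This is a hypothetical shape, not existing code: the names `ReinjectionPolicy` and `should_reinject` are illustrative, and only the 5- and 20-turn intervals come from this proposal.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReinjectionPolicy:
    interval: int  # re-inject the persona prompt every N turns

# task_mode re-injects 4x as often as persona_mode (every 5 turns vs every 20)
POLICIES = {
    "persona": ReinjectionPolicy(interval=20),
    "task": ReinjectionPolicy(interval=5),
}

def should_reinject(mode: str, turn: int) -> bool:
    """Return True when the persona prompt should be re-injected on this turn."""
    policy = POLICIES[mode]
    return turn > 0 and turn % policy.interval == 0
```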

API sketch

```python
from typing import Literal

async def respond(
    ...,
    mode: Literal["persona", "task"] = "persona",
    task_directive: str | None = None,  # required when mode="task"
) -> str:
    ...
```

Acceptance criteria

  • mode parameter wired through process_turn()
  • task_mode triggers tighter re-injection
  • Documentation in spec on when to use which mode
  • Blind test protocol updated to test both modes separately

Labels: enhancement (New feature or request)