A Dollhouse Stack

Market research powered by composable AI elements

Elemental Surveys is a complete market research pipeline — from research brief to executive report — built entirely from DollhouseMCP elements. Survey design, fielding, data cleaning, analysis, and insight synthesis all run as a coordinated stack of personas, skills, agents, and memories.

Built on DollhouseMCP

From brief to boardroom in a single stack

Every stage of the research process is handled by a dedicated DollhouseMCP element. The pipeline can run step by step or be orchestrated end to end by an autonomous agent, with human approval at the moments that matter.

Step 1 · Brief (skill): Business question → research design with methodology justification
Step 2 · Survey Design (skill + template): Full questionnaire with skip logic, scales, quality controls, and fielding spec
Step 3 · Field (agent + bridge): Launch to panel via Prolific API or conversational interview agent
Step 4 · Clean (skill): Speeder detection, straight-liner removal, attention check validation, OE quality scoring
Step 5 · Analyze (skill + persona): Crosstabs by segment, significance testing, open-end theme analysis
Step 6 · Report (template + persona): Insight synthesis, executive summary, strategic recommendations
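The cleaning checks in Step 4 can be sketched in a few lines of Python. This is an illustrative sketch, not the stack's actual Data Cleaning skill; the field names and thresholds (e.g. flagging anyone under 30% of the median duration as a speeder) are assumptions.

```python
from statistics import median

def clean_responses(responses, speeder_ratio=0.3,
                    attention_key="attn_1", attention_answer="agree"):
    """Flag common data-quality failures in a list of survey responses.

    Each response is a dict with 'duration_sec', 'grid' (a list of Likert
    codes), and one attention-check item. All thresholds are illustrative.
    """
    med = median(r["duration_sec"] for r in responses)
    kept, flagged = [], []
    for r in responses:
        reasons = []
        if r["duration_sec"] < speeder_ratio * med:
            reasons.append("speeder")            # far faster than the median respondent
        if len(r["grid"]) > 3 and len(set(r["grid"])) == 1:
            reasons.append("straight_liner")     # identical answer on every grid row
        if r.get(attention_key) != attention_answer:
            reasons.append("failed_attention_check")
        (flagged if reasons else kept).append({**r, "flags": reasons})
    return kept, flagged
```

Real-world skills would add OE quality scoring on top, but the pattern is the same: each check appends a named flag, so the cleaned file documents exactly why each respondent was removed.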

Six element types. One research stack.

DollhouseMCP elements are reusable, composable AI building blocks — stored as structured markdown, version-controlled, and activated into any Claude session. Elemental Surveys uses all six element types working in concert.
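As a hedged illustration of what "structured markdown" means here, an element file might look like the following. The frontmatter fields shown are assumptions for illustration, not the verified DollhouseMCP schema:

```markdown
---
name: survey-methodologist
type: persona
version: 1.0.0
description: Questionnaire design expert focused on bias-free wording and scales
---

You are a survey methodologist. Review every question for leading wording,
double-barreled constructions, and scale imbalance before approving it.
```

Because each element is a plain file, it can be diffed, reviewed, and versioned like any other source artifact.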

🎭
Persona

Expert researcher voices

Persistent behavioral and expertise layers that shape how Claude approaches every task. Activated at the start of a session and held throughout.

Market Research Director · Survey Methodologist · Insights Storyteller · Quantitative Analyst
⚙️
Skill

Step-by-step research capabilities

Procedural instructions for complex, multi-stage tasks. Each skill is a documented, repeatable process that produces consistent, high-quality output.

Survey Design · Data Cleaning · Crosstab Analysis · Insight Synthesis · Open-End Analysis · Concept Testing
📄
Template

Standardized output formats

Structured formats that ensure every deliverable — questionnaire, data file, report, scorecard — conforms to a consistent, professional standard.

Survey Questionnaire · Research Report · Executive Summary · Concept Scorecard · Response Data (JSON)
🧠
Memory

Persistent reference knowledge

Domain knowledge that is always available without re-prompting: industry terminology, benchmarks, methodological standards, platform specifics.

Survey Best Practices · Research Methodologies · Industry Knowledge · Prolific Platform · Conversational Techniques
🤖
Agent

Autonomous research workers

Long-running agents that orchestrate multi-step workflows, run surveys conversationally, generate synthetic test data, or coordinate the full pipeline end-to-end — with human approval gates built in.

Survey Director · Survey Interviewer · Synthetic Respondent Generator
📦
Ensemble

Pre-configured capability bundles

Named combinations of personas, skills, templates, and memories that activate together. One command configures Claude for a complete research workflow.

Market Research Suite · Full Research Pipeline · Brand Health Tracker · Concept Test Suite

What a "stack" means in DollhouseMCP

A Dollhouse Stack is a productized collection of elements that together replicate — and in many areas exceed — the capability of a commercial SaaS platform. Unlike a SaaS subscription, a stack is modular, composable, LLM-agnostic, and yours.

Orchestration
Survey Director Agent
Autonomous orchestrator that runs the full 6-stage pipeline. Monitors fielding, flags problems, and escalates decisions to a human via Zulip or messaging. Hard limits prevent auto-publishing without approval.
Expertise
Four Research Personas
Market Research Director · Survey Methodologist · Quantitative Data Analyst · Insights Storyteller. Each persona shifts Claude's voice, priorities, and judgment for its specific stage of the pipeline.
Process
10 Research Skills
Research brief development · Survey design · Prolific study setup · Data cleaning · Crosstab analysis · Insight synthesis · Open-end analysis · Brand health tracker design · Concept testing · Segmentation study.
Format
7 Templates
Research brief · Survey questionnaire · Structured response data (JSON) · Research report · Executive summary · Concept test scorecard · Prolific study config. Every output is in a documented, consistent format.
Knowledge
5 Domain Memories
Survey best practices · Research methodologies · Market research industry knowledge · Prolific platform specifics · Conversational survey techniques. Always active, never re-prompted.
Validation
Synthetic Respondent Generator
Generates realistic synthetic panels with seeded data quality issues for full end-to-end pipeline testing — without real respondents, a panel account, or any external APIs. The dry-run baseline is in the repository.
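As a hedged illustration of how seeded quality issues work, a generator along these lines could produce a test panel. The field names, rates, and issue types here are assumptions, not the stack's actual Synthetic Respondent Generator:

```python
import random

def generate_panel(n=100, bad_rate=0.15, seed=7):
    """Generate synthetic survey responses, seeding a known share of
    quality failures so downstream cleaning can be tested end to end."""
    rng = random.Random(seed)
    panel = []
    for i in range(n):
        # A bad_rate fraction of respondents get a deliberate quality issue.
        issue = rng.random() < bad_rate and rng.choice(["speeder", "straight_liner"])
        if issue == "speeder":
            duration, grid = rng.randint(20, 60), [rng.randint(1, 5) for _ in range(5)]
        elif issue == "straight_liner":
            duration, grid = rng.randint(200, 400), [3] * 5
        else:
            duration, grid = rng.randint(200, 400), [rng.randint(1, 5) for _ in range(5)]
        panel.append({"id": f"synth-{i:03d}", "duration_sec": duration,
                      "grid": grid, "seeded_issue": issue or None})
    return panel
```

Because every seeded issue is labeled, a dry run can verify that the cleaning stage catches exactly the failures that were planted, before any real respondent is fielded.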

LLM-agnostic

Elements run on any Claude model. The pipeline architecture supports OpenAI, local models via Ollama, and any future LLM — swap the model without rewriting the stack.

Two survey modes

Traditional form-based surveys and conversational AI-conducted interviews produce identical structured JSON output — the same downstream pipeline handles both.
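A minimal sketch of that normalization idea, assuming hypothetical field names (`rid`, `turns`): both intake paths emit records with the same keys, so downstream cleaning and analysis never need to know which mode produced a response.

```python
def from_form(submission):
    """Map a form-based submission to the common record shape."""
    return {"respondent_id": submission["rid"], "mode": "form",
            "answers": dict(submission["answers"])}

def from_interview(transcript):
    """Map conversational interview turns (question id, answer) to the
    same record shape as the form path."""
    return {"respondent_id": transcript["rid"], "mode": "interview",
            "answers": {turn["q"]: turn["a"] for turn in transcript["turns"]}}
```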

Locally runnable

Pre-field synthetic validation runs entirely on your machine. No panel account, no API costs, no data leaving your environment. Ideal for question development and gap analysis.

Version-controlled elements

Every persona, skill, template, memory, and agent is a markdown file in a git repository. Diffable, reviewable, auditable. Research methodology treated like software.

Human approval architecture

The autonomous Survey Director agent has hard limits: it never auto-publishes findings, never fields a survey without approval. Human judgment at every consequential gate.
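The approval-gate idea can be sketched as a hard check in code. The action names below are hypothetical, but the principle matches the design described above: consequential actions fail closed unless a human has signed off.

```python
# Actions the agent may never take autonomously (illustrative names).
APPROVAL_REQUIRED = {"field_survey", "publish_findings"}

def execute(action, approved_by=None):
    """Run an agent action, refusing gated actions without a named approver."""
    if action in APPROVAL_REQUIRED and not approved_by:
        raise PermissionError(f"{action!r} requires human approval")
    return f"{action}:ok"
```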

Open panel API architecture

Panel providers (Prolific, Lucid/Cint) are connected via MCP server adapters — treating panel APIs as native tools rather than file-based integrations.
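As a sketch of what such an adapter exposes, an MCP server might wrap study creation as a callable tool. The payload fields below follow the spirit of Prolific's public study API but should be read as assumptions, not a verified integration:

```python
def build_study_payload(name, reward_cents, places, study_url):
    """Build the JSON body a panel-adapter tool would POST to create a
    study. An MCP server would expose this as e.g. a hypothetical
    'create_study' tool that the agent calls like any native tool."""
    if reward_cents <= 0 or places <= 0:
        raise ValueError("reward and places must be positive")
    return {"name": name,
            "reward": reward_cents,
            "total_available_places": places,
            "external_study_url": study_url}
```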

How the stack compares

Elemental Surveys covers the same research workflow as commercial AI-powered platforms. In areas that are purely AI capability — writing, analysis, synthesis — it exceeds them. Panel fielding requires a separate provider (Prolific, Cint/Lucid, others).

Capability | Elemental Surveys | Typical AI Research Platform
Brief → survey questionnaire | ✓ Exceeds — produces full spec with methodology justification | Basic AI generation
Survey quality assurance | ✓ At parity — full QA checklist, skip logic review | Varies
Panel fielding | Via Prolific, Cint/Lucid (MCP adapters in development) | Integrated
Pre-field synthetic validation | ✓ Unique — full dry-run with calibrated personas, no panel needed | Not available
Data cleaning | ✓ At parity — speeder, straight-liner, attention check, OE quality | Varies
Crosstabs with AI insight narrative | ✓ Exceeds — produces full narrative, not just tables | Tables + summaries
Open-end analysis | ✓ Exceeds — theme analysis + sentiment + insight layer | Theme extraction
Executive report | ✓ Exceeds — strategic narrative + Level 3 implications + recommendations | AI summary
Conversational survey mode | ✓ Unique — AI conducts live interviews with probing follow-ups | Not available
Brand health tracker design | ✓ At parity | Varies
Concept testing | ✓ At parity | Common feature
Local / offline operation | ✓ Supported — runs on local LLMs via Ollama | Cloud only