Last updated: 2026-04-17
This is a usage guide for FORMLOVA's sales email auto-detection feature. For the release announcement see AI Now Detects Sales Emails in Your Forms. For the rationale and design philosophy see Why We Built Sales Email Detection.
Sales email auto-detection classifies every form response into one of three labels -- legitimate, sales, or suspicious -- using AI. This guide walks through every step you are likely to use in daily operations: enabling the feature, reading the dashboard, correcting mistakes, excluding sales emails from analytics, and chaining the labels into workflows.
| Section | What it covers |
|---|---|
| 1. Enable | Turn on detection for a new or existing form |
| 2. Dashboard | Read labels, scores, and filter results |
| 3. Correct | Fix mistaken labels; audit trail |
| 4. Exclude from analytics | Chat, MCP tools, and exports |
| 5. Workflows | Use labels as conditions for automated actions |
| 6. Operational tips | Field design and accuracy monitoring |
| 7. Troubleshooting | No label, false positives, etc. |
1. Enabling detection
For a form you are about to publish
When you publish a form that contains text input fields (short text, long text, email, URL, phone), the FORMLOVA MCP server always asks "enable sales email detection?" as part of the publish checklist, alongside duplicate prevention, privacy policy, and thank-you page checks.
Example chat:
Publish this form.
Answer "enable" and forms.spam_filter_enabled is set to true. All future responses will be classified automatically.
This check is enforced server-side, so you cannot silently skip it. If you are certain the form will never receive sales pitches, answer "skip" and detection will stay off.
Turning it on for an already-published form
You can toggle it any time from chat:
Turn on sales email detection for the contact form.
The change takes effect for responses submitted from that point onward. Responses that arrived before you enabled detection are not classified retroactively -- this is intentional for cost and privacy reasons.
You can also flip the toggle from the admin screen at /ark/forms/{formId}/settings.
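The forward-only behavior can be sketched as a simple timestamp comparison. This is a minimal illustration, not FORMLOVA's actual implementation; the function and field names are hypothetical.

```python
from datetime import datetime, timezone

def should_classify(submitted_at, detection_enabled_at):
    """Classify only responses submitted after detection was enabled.

    Earlier responses stay unlabeled: there is no retroactive
    classification (by design, for cost and privacy reasons).
    """
    if detection_enabled_at is None:  # detection is off for this form
        return False
    return submitted_at >= detection_enabled_at

enabled_at = datetime(2026, 4, 1, tzinfo=timezone.utc)
before = datetime(2026, 3, 15, tzinfo=timezone.utc)
after = datetime(2026, 4, 10, tzinfo=timezone.utc)
print(should_classify(before, enabled_at))  # False: arrived before enabling
print(should_classify(after, enabled_at))   # True
```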
Forms where detection does not run
Detection is intentionally skipped on:
- Forms without any text input fields (e.g. selection-only forms)
- Paid-event forms using Stripe Connect
The first has no place for a sales pitch to land. The second is a very unlikely target for spammers who would have to pay to send a pitch. In both cases the cost-accuracy trade-off does not favor running classification.
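The two skip conditions above amount to one eligibility check. The sketch below assumes hypothetical field-type names and a `stripe_connect` flag; it illustrates the rule, not the real schema.

```python
# Field types that can carry free text (illustrative names)
TEXT_FIELD_TYPES = {"short_text", "long_text", "email", "url", "phone"}

def detection_eligible(form):
    """A form qualifies for detection only if it has at least one
    text input field and is not a Stripe Connect paid-event form."""
    has_text_field = any(f["type"] in TEXT_FIELD_TYPES for f in form["fields"])
    return has_text_field and not form.get("stripe_connect", False)

survey = {"fields": [{"type": "single_choice"}]}           # selection-only
contact = {"fields": [{"type": "email"}, {"type": "long_text"}]}
paid = {"fields": [{"type": "email"}], "stripe_connect": True}
print(detection_eligible(survey), detection_eligible(contact), detection_eligible(paid))
# → False True False
```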
2. Reading classification results in the dashboard

Response list
On the response list (/ark/forms/{formId}/responses), each response shows:
- Label: legitimate / sales / suspicious
- Score: 0-100 (higher means more confident it is a sales pitch)
- Source: auto (classified by AI) or manual (corrected by a human)
Sales-labeled responses are shown slightly dimmed. They are still visible, just visually deprioritized so your eye naturally lands on the responses that matter.
Filter
The filter at the top of the list lets you narrow down by label:
- Only sales -- to confirm the spam bucket
- Only suspicious -- to triage the gray zone and label things manually
- Exclude sales -- to see only genuine inquiries

Sort
You can also sort by score, ascending or descending. When your suspicious pile is large, sorting by score descending surfaces the "most sales-like" entries first.
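The filter-then-sort behavior of the response list can be modeled in a few lines. Filter names and record shapes are made up for illustration; only the semantics follow this guide.

```python
def triage_view(responses, label_filter=None, sort_desc=True):
    """Apply one of the dashboard's label filters, then sort by score
    so the most sales-like entries surface first."""
    if label_filter == "only_sales":
        responses = [r for r in responses if r["label"] == "sales"]
    elif label_filter == "only_suspicious":
        responses = [r for r in responses if r["label"] == "suspicious"]
    elif label_filter == "exclude_sales":
        responses = [r for r in responses if r["label"] != "sales"]
    return sorted(responses, key=lambda r: r["score"], reverse=sort_desc)

inbox = [
    {"id": 1, "label": "legitimate", "score": 5},
    {"id": 2, "label": "sales", "score": 92},
    {"id": 3, "label": "suspicious", "score": 55},
]
print([r["id"] for r in triage_view(inbox, "exclude_sales")])  # [3, 1]
```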
3. Correcting labels by hand
AI judgments are not perfect. A legitimate inquiry can be tagged as sales, and occasionally a sales pitch slips through as legitimate. Fix them manually.
Steps
- Open the response detail page
- Pick the correct label from the dropdown
- Save
Correcting the label sets spam_label_source = manual. Future automatic re-runs will never overwrite that label. AI proposes, humans decide.
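The "AI proposes, humans decide" rule is equivalent to a guard on the label source. A minimal sketch, assuming a dict-shaped response record (the real storage layer will differ):

```python
def apply_auto_label(response, new_label, new_score):
    """Automatic re-runs never overwrite a human correction."""
    if response.get("spam_label_source") == "manual":
        return response  # the human decision wins
    response.update(spam_label=new_label, spam_score=new_score,
                    spam_label_source="auto")
    return response

r_auto = {"spam_label": "legitimate", "spam_label_source": "auto"}
r_manual = {"spam_label": "legitimate", "spam_label_source": "manual"}
apply_auto_label(r_auto, "sales", 88)    # overwritten: source was auto
apply_auto_label(r_manual, "sales", 88)  # ignored: source was manual
print(r_auto["spam_label"], r_manual["spam_label"])  # sales legitimate
```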
Audit log
Every label change is written to audit_logs. You can see who changed which response to which label, and when. That matters for teams that want to keep criteria consistent.
4. Excluding sales emails from analytics
From chat
Examples:
CVR for this month, excluding sales emails.
Last week's analysis for the contact form, skip the sales ones.
Natural phrasing works. Under the hood, the analytics query gets exclude_sales = true, and responses labeled sales are dropped from the aggregation.
From MCP tools
The MCP tools get_responses and export_responses accept an exclude_sales parameter:
```json
{
  "form_id": "xxxxx",
  "exclude_sales": true
}
```
Responses labeled suspicious are not dropped -- only sales. That means uncertain responses stay visible and do not silently disappear from your workflow.
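The exclude_sales semantics boil down to: drop sales, keep everything else, including suspicious and unlabeled responses. A sketch of that rule (not the server's actual query logic):

```python
def apply_exclude_sales(responses, exclude_sales=False):
    """When exclude_sales is true, drop only responses labeled "sales".
    "suspicious" and unlabeled (None) responses are always kept."""
    if not exclude_sales:
        return list(responses)
    return [r for r in responses if r.get("spam_label") != "sales"]

batch = [
    {"spam_label": "legitimate"},
    {"spam_label": "sales"},
    {"spam_label": "suspicious"},
    {"spam_label": None},  # e.g. submitted before detection was enabled
]
print(len(apply_exclude_sales(batch, exclude_sales=True)))  # 3
```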
From exports
CSV / Excel / JSON exports support the same exclude_sales parameter. This is particularly useful when exporting ad performance reports for clients, where a handful of sales pitches in the pipeline can distort CPA numbers.
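To see how sales pitches distort CPA, consider a toy example with made-up numbers: five pitches among 25 form responses make the per-acquisition cost look 20% better than it is.

```python
ad_spend = 50_000  # illustrative monthly ad budget
responses = ["legitimate"] * 20 + ["sales"] * 5

raw_cpa = ad_spend / len(responses)        # counts sales pitches as conversions
genuine = [r for r in responses if r != "sales"]
true_cpa = ad_spend / len(genuine)         # what exclude_sales reports
print(raw_cpa, true_cpa)  # 2000.0 2500.0
```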
5. Chaining with workflows
Classification labels can drive conditional logic in workflows.
Example 1: Slack only for real inquiries
Take a standard "notify Slack on new response" workflow and add a condition: "exclude sales." What lands in the Slack channel is now only responses that actually need human attention. This single change often has the biggest impact on team focus.
Example chat:
Notify Slack when the contact form receives a response, but skip sales emails.
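The "exclude sales" condition in this workflow is a single gate in front of the notify step. A sketch under stated assumptions: `notify` stands in for the real Slack action, and the record shape is hypothetical.

```python
def on_new_response(response, notify):
    """Run the notify step only for non-sales responses."""
    if response.get("spam_label") == "sales":
        return False  # skipped: sales pitch stays out of the channel
    notify(f"New inquiry on {response['form']}")
    return True

sent = []
on_new_response({"form": "contact", "spam_label": "legitimate"}, sent.append)
on_new_response({"form": "contact", "spam_label": "sales"}, sent.append)
print(sent)  # ['New inquiry on contact']
```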
Example 2: Auto-reply only to sales
You can auto-reply to the sales bucket with a canned "we can't take inquiries like this" message. Only do this if you trust the sales-side accuracy to be very high -- a false positive here means a rude reply to a legitimate inquiry. When in doubt, run a manual check on suspicious and sales before letting the auto-reply go out.
Example chat:
Send a decline auto-reply only to responses labeled as sales.
Example 3: Push legitimate responses to HubSpot
If your MCP client is also connected to a HubSpot MCP server, you can compose the branch in a single sentence.
Example chat:
Add contact form responses to HubSpot, excluding sales emails.
6. Operational tips
Clear field labels reduce false positives
If your form field labels are vague, the AI has a harder time reading intent.
- Good: "What would you like to discuss?", "Purpose of inquiry"
- Vague: "Message", "Content"
Clear labels give the AI context about what a valid answer looks like. If you see many false positives on an existing form, renaming the fields alone often improves things.
Correct aggressively in the first one or two weeks
Right after you turn it on, correct every mistake you see. Those corrections land in the audit log, so later you can see what kinds of errors are common and adjust your form design accordingly.
Glance at the score distribution now and then
If a lot of responses cluster in suspicious (scores 40-60), that form naturally attracts gray-zone responses. For those, the "only suspicious" filter becomes part of your regular review routine.
7. Troubleshooting
Some responses have no label
- Forms without text inputs are not classified
- Stripe Connect (paid-event) forms are not classified
- Responses submitted before you enabled the feature are not classified
- In rare cases the classification times out and the response is stored with null labels (form submission is never broken)
If none of the above apply and forms.spam_filter_enabled = true, let the admins know.
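The timeout case above follows a common fail-open pattern: classification errors never block storage. A minimal sketch, where `classify` stands in for the real AI call:

```python
def classify_and_store(response, classify, timeout_s=10.0):
    """Submission is never broken by classification: on any error
    or timeout, the response is stored with null labels."""
    try:
        label, score = classify(response, timeout=timeout_s)
    except Exception:
        label, score = None, None  # stored unlabeled; submission succeeds
    response.update(spam_label=label, spam_score=score)
    return response

def flaky(resp, timeout=None):
    raise TimeoutError("model did not respond")

stored = classify_and_store({"body": "Buy our SEO package!"}, flaky)
print(stored["spam_label"])  # None -- response saved anyway
```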
Legitimate inquiries are flagged as sales
The prompt is biased toward "if unsure, treat as legitimate," but specific wording or tone can still trigger false positives. Fix them manually. Your correction will not be overwritten.
When the same pattern keeps appearing, revisit the form field design (section 6).
Sales pitches are labeled legitimate
Thanks to the same "treat as legitimate when unsure" bias, borderline sales messages can slip through. Fix them manually. Another common pattern is to move them to suspicious first and batch-review later.
Scores only show up as 0 or 100
Very clear text (obviously legitimate, or obviously templated sales) produces polarized scores. That is expected behavior, not a bug.
FAQ
Can I classify old responses retroactively?
Not automatically today. We intentionally limit classification to new responses -- it is the right trade-off for cost and for respecting content that already sits in your database. If enough users ask for it, we will consider adding an opt-in backfill.
Do you share classifications with third parties?
No. The classifier runs via OpenRouter against Claude Haiku 4.5. Model providers do not use this traffic for training, per OpenRouter's data policy.
Can I turn it back off?
Yes. Chat or the admin screen can flip spam_filter_enabled back to false. Existing labels stay attached; new responses will not be classified.
Does upgrading to a paid plan improve accuracy?
No. Classification is the same mechanism, the same model, and the same prompt on every plan. No plan-based differentiation.
Summary
- Enable detection at publish time, or later from chat or the admin UI
- Labels, scores, and source appear on the response list
- Fix mistaken labels by hand -- corrections are never overwritten
- Analytics, exports, and workflows all honor exclude_sales
- Clear field labels reduce false positives
Related articles:
- AI Now Detects Sales Emails in Your Forms -- release announcement
- Why We Built Sales Email Detection -- background and design philosophy

