## What it does
Tap the + button in the PWA bottom-right → pop-up appears with textarea + mic.
- Type a note or tap 🎤 to voice-record (uses the browser's Web Speech API; see iOS note below)
- Tap "Capture" → text goes to Cloudflare KV
- live-ea processes it within 20 min
- Claude decides what it is and where it goes
Solves the "I thought of it at 80mph on I-70" problem. No app switching, no typing in HubSpot on your phone, no forgetting.
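The client side of the round-trip can be sketched as a small helper. This is illustrative only, not the shipped PWA code: the `/api/capture` path comes from this doc, but the payload field names (`text`, `source`, `capturedAt`) are assumptions.

```javascript
// Hypothetical capture payload builder; field names are assumptions.
function makeCapturePayload(text, source) {
  return {
    text: text.trim(),
    source,                              // e.g. "typed" | "voice" (assumed labels)
    capturedAt: new Date().toISOString(),
  };
}

// POST the capture to the Pages Function endpoint named in this doc.
async function sendCapture(text, source = "typed") {
  const res = await fetch("/api/capture", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(makeCapturePayload(text, source)),
  });
  if (!res.ok) throw new Error(`capture failed: ${res.status}`);
}
```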
## Architecture
```
PWA (iPhone) → /api/capture → Cloudflare KV captures/YYYY-MM-DD/{seq}
        ↓
live-ea (every 20 min during business hours)
        ↓
Classifier routes each capture:
├── Action item → meeting_action_items.jsonl (YELLOW)
├── HS note proposal → pending_hs_updates.jsonl (YELLOW, approval required)
├── Memory update proposal → pending_memory_updates.jsonl (INFO)
├── Draft a reply → Gmail create_draft (if addressee known)
└── Unclear → pending_drafts.jsonl (needs_review)
        ↓
Mark KV record status=processed
ntfy push: "💭 N captures processed"
```
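The write side of /api/capture might look like the sketch below. The key layout and 7-day TTL come from this doc; the `SKYRUN_APPROVALS` binding name, the zero-padded sequence, and the payload shape are assumptions.

```javascript
// Build a KV key like captures/2025-01-15/007 (zero-padding is an assumption).
function captureKey(date, seq) {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `captures/${day}/${String(seq).padStart(3, "0")}`;
}

// Sketch of the POST handler; in a real Cloudflare Pages Function this would
// be `export async function onRequestPost({ request, env })`.
async function onRequestPost({ request, env }) {
  const body = await request.json();
  const key = captureKey(new Date(), Date.now() % 1000); // placeholder seq scheme
  await env.SKYRUN_APPROVALS.put(
    key,
    JSON.stringify({ ...body, status: "pending" }),
    { expirationTtl: 7 * 24 * 3600 }                     // 7d TTL per the KV note
  );
  return new Response(JSON.stringify({ key }), {
    headers: { "Content-Type": "application/json" },
  });
}
```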
## Classification rules (live-ea Step 6c)
| Signal in text | Route |
|---|---|
| "remind me", "todo", "action item", starts with verb ("call X", "send Y") | Action item |
| Known prospect/owner name + fact about them | HS note proposal |
| "Rachel prefers", "team does X", working-norm statements | Memory update proposal |
| "email Andy and say X" with addressee Claude can identify | Gmail draft |
| None of the above | pending_drafts.jsonl (YELLOW, manual sort) |
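The table above can be approximated as a toy routing function. In production the routing is Claude's judgment call, not regexes; the patterns and route labels below are illustrative only.

```javascript
// Toy approximation of the Step 6c signals; real classification is done by Claude.
const ACTION_HINTS = /\b(remind me|todo|action item)\b/i;
const VERB_START = /^(call|send|email|ship|book|schedule)\b/i;
const NORM_HINTS = /\b(prefers|team does)\b/i;

function routeCapture(text, knownNames = []) {
  // "email X ... say Y" with an identifiable addressee → Gmail draft
  if (/^email\s+\w+.*\bsay\b/i.test(text)) return "gmail_draft";
  // explicit todo language, or starts with a verb → action item
  if (ACTION_HINTS.test(text) || VERB_START.test(text)) return "action_item";
  // working-norm statements → memory update proposal
  if (NORM_HINTS.test(text)) return "memory_update";
  // known prospect/owner name + a fact → HS note proposal
  if (knownNames.some((n) => text.includes(n))) return "hs_note";
  // anything else → YELLOW for manual sort
  return "needs_review";
}
```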
## Safety rules
- Never auto-send email — always create as draft
- Never auto-update HS — always queue for approval
- Never auto-update memory — always queue for approval
- Only action items + drafts are auto-created (low-risk, reversible)
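These rules reduce to a single allowlist: only the two low-risk, reversible routes execute automatically, and everything else queues for approval. A minimal sketch (route labels are assumed, not from the shipped code):

```javascript
// Only low-risk, reversible routes run without approval: action items and
// Gmail drafts (drafts are created, never sent).
const AUTO_ALLOWED = new Set(["action_item", "gmail_draft"]);

function dispositionFor(route) {
  return AUTO_ALLOWED.has(route) ? "auto_create" : "queue_for_approval";
}
```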
## Files
| What | Where |
|---|---|
| Capture POST endpoint | ~/Library/Application Support/SkyRun/pwa/functions/api/capture.js |
| Capture GET (for live-ea) | ~/Library/Application Support/SkyRun/pwa/functions/api/captures.js |
| KV namespace | skyrun_approvals — keys under captures/YYYY-MM-DD/{seq} with 7d TTL |
| PWA UI | Floating + button — build_pwa.py .capture-fab + .capture-panel |
| Classification logic | ~/.claude/scheduled-tasks/live-ea/SKILL.md Step 6c |
## iOS voice input note
Uses the browser's webkitSpeechRecognition API. Works on:
- iOS Safari 14.5+ (April 2021+)
- Chrome, Edge (any platform)
- NOT Firefox (missing API)
If API unavailable, mic button is hidden; textarea still works.
Voice permission is requested once; iOS remembers per-site. If denied, you'll see "Mic unavailable" in the status line.
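The hide-the-mic fallback is straightforward feature detection. A sketch that takes the global object as a parameter so it can be exercised outside a browser; iOS Safari exposes only the prefixed `webkitSpeechRecognition` constructor:

```javascript
// Return the SpeechRecognition constructor if the browser has one, else null.
function speechRecognitionCtor(win) {
  return win.SpeechRecognition || win.webkitSpeechRecognition || null;
}

// Show the mic button only when the API exists; the textarea always works.
function shouldShowMic(win) {
  return speechRecognitionCtor(win) !== null;
}
```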
## Typical use cases
- "Call Marjorie next week about photo refresh" → action item
- "Andy said his buddy in Tabernash might be interested — name is Dave something" → HS note on Andy's contact + new contact flag
- "Rachel prefers quick voice notes over email on schedule changes" → memory file update proposal
- "Email Stephen and say we can do the walkthrough Monday 3pm instead" → Gmail draft (Stephen is a known prospect)
## What happens if you capture something ambiguous
It lands in the approval queue as needs_review YELLOW. You see it in the PWA next time you check, with the original text + a one-line "I wasn't sure what this was — want me to...?" prompt. You pick the action or dismiss.
## Related
- reference_pane_of_glass_pwa.md — PWA infrastructure
- reference_approval_queue_v2.md — the KV + approval queue pipeline
- reference_scheduled_tasks.md — live-ea context