Prototyping a Tag System in v0 to Enable AI Classification
Permitting requirements vary wildly between municipalities. One county triggers inspections for reroofing over 25% of a roof area, another uses a $15K valuation threshold for HVAC work, another counts windows. We needed a way to describe scopes effectively across thousands of jurisdictions and permit types. Our existing infrastructure led to mis-tagged projects, which made our data unusable for future AI implementations.
I designed a hierarchical tagging system that could handle this complexity while staying understandable for our ops team.
Problems
Variable Scope Triggers
Some jurisdictions use dollar thresholds (>$15K), others percentages (>25% sheathing), others counts (≥1 window). The existing system had no way to model this variation, making it impossible to track which rules applied where.
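To make the mismatch concrete, here is a minimal TypeScript sketch of the three trigger shapes above. Every name in it is illustrative rather than our actual schema; the point is that the rules share no common structure the old system could store or query.

```ts
// Three trigger shapes, each expressed differently. All names are hypothetical.
type ValuationTrigger = { kind: "valuation"; over: number };               // HVAC work valued over $15K
type PercentageTrigger = { kind: "percentage"; of: string; over: number }; // reroofing over 25% of roof area
type CountTrigger = { kind: "count"; item: string; atLeast: number };      // one or more windows

// A jurisdiction's rule could be any of these, which is what the existing
// infrastructure had no way to represent consistently.
type ScopeTrigger = ValuationTrigger | PercentageTrigger | CountTrigger;

const examples: ScopeTrigger[] = [
  { kind: "valuation", over: 15_000 },
  { kind: "percentage", of: "roof area", over: 25 },
  { kind: "count", item: "window", atLeast: 1 },
];
```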
Tags Often Have Dependencies
Some L1 tags, like 'Residential', always implied 'Single Family' for certain customers. Some required a second-level tag, others didn't. We needed a way to eliminate the errors that led to inaccurate permit results.
Admin Interface Needed to Stay Simple
Because our less-technical ops team would be the primary users of this feature, the configuration system needed to handle scope definitions, boundaries, relationships, and requirements without becoming overwhelming.
Approach
I mapped out every dependency I could find from customer interviews and our scope trigger database. Which scenarios required L1 tags? When were L2s mandatory? How should implied tags behave? The rules were messy and full of edge cases.
Instead of building elaborate Figma prototypes, I built a working version in v0. This let me test the actual behaviors in practice. Engineers could click through it to understand the exact interactions, and I used it in discussions and testing with our ops team to make sure that the experience felt fluid and understandable.
Solution
The system uses three tag levels. L1 tags handle primary classifications (Residential, Commercial). L2 tags add specificity (New Construction, Single Family, Renovation). L3 tags are variable scopes with numerical thresholds. When you select a scope with an implied relationship, the additional scope automatically appears with a lock icon explaining why it can't be removed.
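A rough sketch of how the hierarchy and implied relationships can be modeled, assuming a hypothetical `Tag`/`AppliedTag` shape rather than our production data model:

```ts
// Hypothetical model of the three-level hierarchy and implied relationships.
interface Tag {
  id: string;
  label: string;
  level: 1 | 2 | 3;       // L1 primary, L2 specificity, L3 variable scope
  implies?: string[];     // tags that are auto-applied alongside this one
}

interface AppliedTag {
  tagId: string;
  locked: boolean;        // locked tags render with the lock icon and can't be removed directly
}

// Example: 'Residential' implying 'Single Family' for a given customer.
const residential: Tag = { id: "residential", label: "Residential", level: 1, implies: ["single-family"] };

// Applying a tag also applies anything it implies, marked as locked.
function applyTag(tag: Tag, applied: AppliedTag[]): AppliedTag[] {
  const next = applied.some((t) => t.tagId === tag.id)
    ? [...applied]
    : [...applied, { tagId: tag.id, locked: false }];
  for (const impliedId of tag.implies ?? []) {
    if (!next.some((t) => t.tagId === impliedId)) {
      next.push({ tagId: impliedId, locked: true });
    }
  }
  return next;
}
```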
Variable scopes support different boundary types (>, ≥, <, ≤, any), values, and units. When a variable scope is selected, a dialog requires an input before the scope is added to the applied tags.
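The dialog behavior amounts to a completeness check: a variable scope only becomes an applied tag once its boundary, value, and unit are filled in, with "any" as the exception. A hedged sketch with hypothetical names:

```ts
// Hypothetical shape of an L3 variable scope as entered in the dialog.
type Boundary = ">" | ">=" | "<" | "<=" | "any";

interface VariableScope {
  scope: string;       // free text, e.g. "roof sheathing replaced"
  boundary: Boundary;
  value?: number;      // required unless boundary is "any"
  unit?: string;       // e.g. "percent", "USD", "windows"
}

// Mirrors the dialog: the scope can't join the applied tags until it's complete.
function isReadyToApply(s: VariableScope): boolean {
  if (s.scope.trim() === "") return false;
  if (s.boundary === "any") return true;
  return s.value !== undefined && s.unit !== undefined && s.unit !== "";
}
```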
I broke scope configuration into components: Scope (free text), Boundary (dropdown), Value (number input), and Unit (text with suggestions). This kept complex triggers manageable across jurisdictions and unit preferences. More importantly, it gave us structured data we could feed to AI models instead of having them try to parse free text descriptions of permit rules.
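One way that structure pays off for AI work is that a configured trigger can be rendered into a canonical, machine-readable form instead of prose. The sketch below is illustrative, not the exact format we feed to models:

```ts
// Repeats a minimal version of the structure so this sketch stands alone.
interface ScopeConfig {
  scope: string;                              // Scope component (free text)
  boundary: ">" | ">=" | "<" | "<=" | "any";  // Boundary component (dropdown)
  value?: number;                             // Value component (number input)
  unit?: string;                              // Unit component (text with suggestions)
}

// Canonical string form, e.g. "roof area replaced > 25 percent".
// A model can compare this against a project's extracted scope of work,
// which is far easier than parsing prose descriptions of permit rules.
function canonicalize(c: ScopeConfig): string {
  return c.boundary === "any"
    ? `${c.scope} (any amount)`
    : `${c.scope} ${c.boundary} ${c.value ?? ""} ${c.unit ?? ""}`.trim();
}
```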


Impacts
Improved Data Integrity
Implied tags, variable scopes, and an improved UX meant that more projects received correct tag relationships, which made reporting more accurate.
Created Foundation for AI Automation
This more structured, granular scope data gave us something to train AI models on. Instead of having ops manually figure out which requirements applied to each project, we could start automating it.
Reflections
Skipping Figma and prototyping in v0 was a big part of the success of this project. The interactions were too complex for static mocks, and engineers could click through the behavior instead of interpreting redlines. It also made customer conversations more concrete—I could show them exactly how locked tags would work rather than describing it.
Choosing structured components over free text wasn't obvious at first. Free text would've been faster to build and easier to configure. But structured data meant we could query it, report on it, and eventually feed it to AI models. The configuration complexity paid off once we started automating permit research.