Turning Municipal Comments Into Customer Insights With AI
One of PermitFlow's key KPIs is time to permit. We could optimize internal processes all day, get ops speed to 100%, eliminate every error on our side, and still hit a ceiling. A huge chunk of delays came from things customers were getting wrong on their end: missing documents, incorrect plot plans, repeated submission errors. Identifying or preventing those errors on our side wouldn't move the needle on its own. We needed a way to communicate them back so customers could fix their own processes.
Municipal reviewer comments were the signal — the feedback municipalities send back after submission listing what needs to be fixed on plans, documents, and code compliance. PermitFlow already consolidated these into a per-project view, but they were siloed there. Good for tracking one application, not for spotting patterns across hundreds.
I prototyped with ChatGPT and real data, validated with ops until the output matched their intuition, then wrote the PRD to productize it as a dedicated workspace feature.
Ops leads adopted the prototype within days for their highest-volume workspaces
Validated the feature concept before any engineering investment
PRD accepted and prioritized for production development
Problems
Identifying Trends Required Technical Effort
You could get to the comment data through Omni, but spotting patterns meant exporting hundreds of comments, manually categorizing them, and doing that separately for each customer. Nothing was persistent or shared across municipalities.
Existing Categorization Tracked Blame, Not Issues
The "reason" dropdown on comments was designed around accountability — our fault, customer fault, muni, or other. It answered who was responsible, not what was actually going wrong or how to fix it.
Customer Communication Was Ad Hoc
Some insights did reach customers, but only when ops happened to notice a pattern and brought it up. There was no data behind it — just awareness and intuition.
Approach
I pulled comment data from Omni, our internal analytics tool, filtering by customer workspace, municipality, and timeframe. I wrote a custom prompt to process this into a trend report and started iterating with ops.
It took a lot of iteration to get the reports right. The prompt would match keywords but miss reviewer intent, and comments raising multiple issues surfaced only one. Reviewers pack several concerns into one paragraph and reference local ordinances instead of saying what's actually needed: a mention of the "Huntersville Sediment & Erosion Control Ordinance" is an erosion control issue, but the comment never says "submit erosion control plan." I'd run a batch, sit down with ops leads, and walk through the output. They'd flag where categorization felt off; I'd tune and re-run.
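The lessons from those iterations ended up encoded in the prompt itself. A simplified sketch of the kind of instruction assembly that worked; the wording and function name here are illustrative, not the production prompt:

```python
# Illustrative sketch, not PermitFlow's actual prompt. The two hard-won
# rules from iterating with ops are baked into the instructions:
# allow multiple issues per comment, and resolve ordinance references
# to the underlying requirement.

def build_trend_prompt(comments: list[str]) -> str:
    """Assemble a prompt that turns raw reviewer comments into a trend report."""
    instructions = (
        "You are analyzing municipal reviewer comments on permit applications.\n"
        "For each comment:\n"
        "- Assign EVERY issue it raises, not just the first one; reviewers often\n"
        "  pack several concerns into a single paragraph.\n"
        "- When a comment cites a local ordinance, categorize it by the underlying\n"
        "  requirement (e.g. an erosion control ordinance means an erosion control\n"
        "  plan issue), not by the ordinance name.\n"
        "- Quote the sentence that supports each categorization.\n"
        "Then aggregate: list categories by comment count, most frequent first.\n"
    )
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(comments))
    return f"{instructions}\nComments:\n{numbered}"
```

Each revision to these rules came directly from an ops walkthrough where the categorization felt off.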
"We've already got a thousand projects there in two months. Would love to see a trend for [customer]. That would be valuable as hell. Imagine if I take that and go to the customer — that's like a gold mine to them."
Ops lead
Solution
The whole prototype workflow was intentionally low-tech: export a CSV from Omni, paste into a custom GPT, get back a trend report. No engineering required.
Example trend report output
Workspace Trends Report: Charlotte-Mecklenburg
1. Siting / Dimensional Compliance (28 comments)
Reviewers frequently flagged noncompliance with zoning setbacks, easement encroachments, and pool placement requirements. Comments often cite property line distances and failure to show required setback measurements.
"All swimming pools shall be located a minimum of ten (10) feet from any property line measured to the water's edge." 1
"The survey shows pool encroaching into the SDE. No encroachment into the easement will be allowed." 2
2. Tree Protection / Heritage Trees (26 comments)
Urban Forestry compliance continues to generate high comment volume. Reviewers emphasized tree protection fencing, critical root zones, and heritage tree identification.
"Show the tree protection fence around the heritage tree. Please keep in mind the room needed for construction." 3
3. Documentation / Missing Plan Details (17 comments)
Recurring pattern of incomplete plan sets — missing updates, outdated plats, absence of required documents like tree save areas or erosion control details.
Secondary issues: Encroachment into Easements (6), Fence Not Shown on Plan (5), Critical Root Zone Violations (4)
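Under the hood, a report like this is just categorized comments rolled up by frequency. A minimal sketch of that aggregation step; the data shape is an assumption about what the LLM pass returns, not PermitFlow's schema:

```python
from collections import Counter

def summarize_trends(categorized: list[tuple[str, list[str]]], top_n: int = 3):
    """Roll up (comment, categories) pairs into ranked trend counts.

    `categorized` is the hypothetical output of the LLM categorization pass:
    each comment paired with the list of issue categories it was assigned.
    Returns (headline trends, secondary issues).
    """
    counts = Counter()
    for _comment, categories in categorized:
        counts.update(categories)  # one comment can feed several categories
    ranked = counts.most_common()
    return ranked[:top_n], ranked[top_n:]
```

The split mirrors the report format above: the top categories get full sections with supporting quotes, and the long tail collapses into a "secondary issues" line.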
It worked, but the manual workflow wasn't scalable: only people comfortable with Omni exports and LLM use could run it, and there were no persistent records or versioning. PermitFlow's strategy is AI-first, and muni comments are some of the best raw data we have for understanding why permits get delayed. These insights needed to live inside the product.
I wrote a PRD to bring this in as a first-class feature: a dedicated insights page within each workspace, with comments processed automatically, structured data indexed by workspace, municipality, and project type, and weekly refreshes. This would allow us to layer on submission warnings, quality scoring, and automated customer reports — features you can't build without structured data underneath.
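The core of the PRD is the indexing model: once each comment is a structured record keyed the same way, everything downstream becomes a query. A hypothetical sketch of that record and its index; the field and function names are mine, not the PRD's:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class CommentInsight:
    # Index keys from the PRD: workspace, municipality, project type.
    workspace: str
    municipality: str
    project_type: str
    category: str       # e.g. "Tree Protection / Heritage Trees"
    comment_text: str
    week: str           # weekly refresh granularity, e.g. "2024-W23"

def index_insights(insights: list[CommentInsight]):
    """Group records by (workspace, municipality, project_type) so downstream
    features (submission warnings, quality scoring, customer reports) can
    query them instead of re-parsing raw comments."""
    idx = defaultdict(list)
    for ins in insights:
        idx[(ins.workspace, ins.municipality, ins.project_type)].append(ins)
    return idx
```

This is the "structured data underneath" point in concrete form: a submission warning is just a lookup against the same index the insights page reads from.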
Impacts
Immediate Ops Adoption
Ops leads started using the prototype for their highest-volume workspaces within days, pulling it into customer meetings and weekly reviews.
Customer Value Validated
Optimizing internal ops has a ceiling. The bigger thing is that customers can see their own recurring errors and fix them — nobody else gives them that data.
Foundation for Product Roadmap
Structuring this data opened up a real roadmap: submission warnings, quality scoring, automated customer reports. The insights page is where it starts, but the indexed data underneath is what the rest gets built on.
Reflections
Building the prototype first with real data made the PRD way easier to sell internally. Instead of a pitch, I could show the actual outputs that ops were already using in customer calls.
This ended up shaping the product direction in ways I didn't anticipate: surfacing issues internally is useful, but sharing them with customers is where the real value is. A customer who can see that 30% of their rejections come from missing plot plan details can actually fix that.