Building a Pinterest-Style Photo Discovery Experience
Built a large wedding imagery database that freed photos from individual gallery silos, turning them into a browseable, searchable experience filtered by location, venue, and tagged details. Used computer vision to automatically tag visual elements and designed a data model where each photo linked to multiple vendors, turning single images into cross-promotion opportunities.
Problems
Photos Trapped in Silos
Beautiful wedding photos existed only in individual vendor galleries. Couples searching for "rustic barn wedding" inspiration had to browse dozens of photographer portfolios manually.
No Cross-Vendor Discovery
A photo of a floral arrangement couldn't surface the venue, photographer, and florist who collaborated on that wedding. Each vendor was an island.
Limited Search & Filtering
No way to search photos by style, color palette, venue type, or specific details like "centerpieces" or "wedding cakes." Location filtering was rudimentary at best.
Missed Engagement Opportunity
Couples were going to Pinterest for inspiration anyway. We were missing a chance to keep them engaged on our platform and drive vendor profile views.
Approach
I partnered with local wedding photographers to build an initial database of thousands of images, each tagged with venue, vendors, and location data. Manual tagging of visual elements would be impossibly time-consuming, so I experimented with computer vision APIs to automatically detect rings, flowers, centerpieces, cakes, dresses, and venues. It took several rounds of tuning confidence thresholds to balance accuracy with coverage.
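The threshold-tuning step can be sketched as follows. This is a minimal illustration, not the actual pipeline: the label set, the per-label thresholds, and the `(label, confidence)` response shape from the vision API are all assumptions for the example.

```python
# Per-label confidence thresholds (illustrative values). Detail shots like
# rings needed stricter thresholds than broad scene labels like "venue"
# to balance accuracy against coverage.
THRESHOLDS = {
    "ring": 0.85,
    "flowers": 0.70,
    "centerpiece": 0.80,
    "cake": 0.75,
    "dress": 0.70,
    "venue": 0.60,
}

def auto_tags(detections):
    """Filter raw vision-API detections down to trusted tags.

    detections: list of (label, confidence) pairs, as a hypothetical
    vision API might return for one photo.
    """
    return sorted(
        label
        for label, confidence in detections
        if label in THRESHOLDS and confidence >= THRESHOLDS[label]
    )

# Example: a reception photo where "ring" is a low-confidence false
# positive and "table" is a label we never surface.
raw = [("flowers", 0.91), ("centerpiece", 0.83), ("ring", 0.40), ("table", 0.95)]
print(auto_tags(raw))  # ['centerpiece', 'flowers']
```

Raising a threshold trades coverage for accuracy, which is why each label gets its own value rather than one global cutoff.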
The key data model insight was allowing each photo to link to multiple vendors: photographer, venue, florist, caterer. This meant a single image could drive discovery for five or six vendors, turning photos into a cross-promotion engine rather than just portfolio pieces.
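In code, that insight is a straightforward many-to-many link between photos and vendors. A minimal sketch, with hypothetical field names; the photographer and venue names come from the gallery examples, while the florist is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Vendor:
    id: int
    name: str
    role: str  # "photographer", "venue", "florist", "caterer", ...

@dataclass
class Photo:
    id: int
    title: str
    location: str
    tags: set = field(default_factory=set)
    vendor_ids: list = field(default_factory=list)  # many-to-many credit links

vendors = {
    1: Vendor(1, "Jane Smith Photography", "photographer"),
    2: Vendor(2, "Mountain Vista Estate", "venue"),
    3: Vendor(3, "Wildstem Florals", "florist"),  # hypothetical florist
}

# One image credits everyone who worked the wedding, so it can surface
# on three different vendor profiles at once.
photo = Photo(10, "Sunset Ceremony at Mountain Vista", "Colorado",
              tags={"flowers", "venue"}, vendor_ids=[1, 2, 3])

credited = [vendors[v].name for v in photo.vendor_ids]
print(credited)
```

Storing credits on the photo, rather than photos on the vendor, is what makes the cross-promotion automatic: adding one photo updates every linked profile.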
Interactive: Computer Vision Tagging
- Sunset Ceremony at Mountain Vista · 📍 Mountain Vista Estate · 📸 Jane Smith Photography
- Garden Reception Setup · 📍 Denver Botanic Gardens · 📸 Peak Wedding Photos
- Barn Wedding Reception · 📍 Spruce Mountain Ranch · 📸 Colorado Captures
Solution
We launched a dedicated "Real Weddings" section with thousands of tagged photos displayed in a masonry grid layout. Hover states revealed quick details about each image. Users could filter by location, venue type, style tags, and specific visual elements like flowers, cakes, dresses, and centerpieces. The filtering was immediate and intuitive, creating a discovery experience that kept people browsing.
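A minimal sketch of that filtering, with AND semantics across facets; the flat per-photo record and its field names are assumptions for illustration:

```python
def filter_photos(photos, location=None, venue_type=None, elements=()):
    """Keep photos matching every provided facet (AND semantics)."""
    hits = []
    for p in photos:
        if location and p["location"] != location:
            continue
        if venue_type and p["venue_type"] != venue_type:
            continue
        if not set(elements) <= p["tags"]:  # all requested elements present
            continue
        hits.append(p)
    return hits

catalog = [
    {"id": 1, "location": "Denver", "venue_type": "barn",
     "tags": {"flowers", "cake"}},
    {"id": 2, "location": "Denver", "venue_type": "garden",
     "tags": {"centerpiece"}},
    {"id": 3, "location": "Boulder", "venue_type": "barn",
     "tags": {"flowers"}},
]
print([p["id"] for p in filter_photos(catalog, location="Denver",
                                      elements=["flowers"])])  # [1]
```

Because every facet is a simple in-memory check against pre-computed tags, results update immediately as filters change.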
Each photo had its own detail page showing all credited vendors with direct links to their profiles. A related images algorithm suggested similar photos based on visual tags and location, keeping users engaged and exploring. This turned single-image visits into multi-image browsing sessions.
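The related-images ranking was simple tag matching. A sketch under that assumption, with an illustrative same-location boost (the scoring weights and record shape are hypothetical):

```python
def related_photos(photo, candidates, limit=6):
    """Rank other photos by shared tags, with a small same-location boost."""
    def score(other):
        shared_tags = len(photo["tags"] & other["tags"])
        location_boost = 1 if other["location"] == photo["location"] else 0
        return shared_tags + location_boost

    ranked = sorted(
        (c for c in candidates if c["id"] != photo["id"]),
        key=score,
        reverse=True,
    )
    # Drop photos with nothing in common rather than padding the rail.
    return [c for c in ranked if score(c) > 0][:limit]

current = {"id": 1, "location": "Denver", "tags": {"flowers", "centerpiece"}}
others = [
    {"id": 2, "location": "Denver", "tags": {"flowers"}},                  # 1 tag + boost = 2
    {"id": 3, "location": "Boulder", "tags": {"flowers", "centerpiece"}},  # 2 tags = 2
    {"id": 4, "location": "Boulder", "tags": {"cake"}},                    # 0, excluded
]
print([p["id"] for p in related_photos(current, others)])  # [2, 3]
```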
The vendor integration was the real unlock. Photos appeared in search results and on vendor profile pages automatically. A florist's portfolio could now include all photos tagged with their work, even if those photos were uploaded by photographers. Vendor portfolios grew without any extra work on their end, and everyone involved in each wedding got proper credit.
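Because credits live on the photo, a vendor's portfolio becomes a query over photo credits rather than something the vendor maintains. A sketch, with an assumed record shape:

```python
def portfolio(vendor_id, photos):
    """All photos crediting a vendor, regardless of who uploaded them."""
    return [p for p in photos if vendor_id in p["vendor_ids"]]

photos = [
    {"id": 1, "uploaded_by": "photographer", "vendor_ids": [1, 2, 3]},
    {"id": 2, "uploaded_by": "photographer", "vendor_ids": [1, 2]},
    {"id": 3, "uploaded_by": "venue",        "vendor_ids": [2]},
]

# Vendor 3 (say, a florist) never uploaded anything, yet still has a
# portfolio entry from a photographer's upload.
print([p["id"] for p in portfolio(3, photos)])  # [1]
```

This is the "grew without any extra work" property: every new upload that credits a vendor enlarges their portfolio automatically.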

Impacts
Deeper Content Connections
Pages per session increased significantly as users discovered related photos and vendor profiles through visual browsing rather than directory searches.
Increased Time on Site
The browse experience kept couples engaged longer. Session duration increased as inspiration became a valid use case alongside vendor search.
Vendor Cross-Promotion
Smaller vendors benefited from appearing in popular photos. A single popular wedding could drive traffic to 5-10 different vendor profiles.
Location-Specific Discovery
Couples could finally answer "What do weddings at this venue actually look like?" Real examples from their target location replaced abstract browsing.
Reflections
This project taught me the power of connecting existing data in new ways. We didn't need to create new content. We just needed to make existing photos discoverable and interconnected.
The computer vision tagging wasn't perfect at launch, and some manual QA remained necessary even after the threshold tuning, but it was still far cheaper than fully manual tagging.
If I could do it again, I'd invest more in the related photos algorithm. We used simple tag matching, but ML-powered visual similarity would have created even better discovery paths. The technology was available, but we prioritized speed to market.