Driving Funded Accounts
Mydoh / Activation Experience
Transforming a fragmented activation problem into a focused in-product experience that drove 200% growth in money loads.
I'm a product builder focused on designing scalable, intuitive products that balance business goals, technology, and human needs.
Mydoh / Activation Experience
Transforming a fragmented activation problem into a focused in-product experience that drove 200% growth in money loads.
Mydoh / Habit Formation
Developing goal-based saving mechanics that transform financial responsibility into an engaging habit for teen users.
Fintech / Savings Platform
A family banking app serving 300K+ users across Canada. Led UX across onboarding, money movement, and account management, driving a 200% increase in activated accounts and stronger lifecycle engagement.
Real Estate / Agentic AI
An AI-powered workspace for commercial real estate brokers managing large lease portfolios. Led 0→1 design and product on a lean team, delivering a POC that cut manual review time by ~45%.
Fintech / Capital Raising
Due diligence is slow and fundraising is guesswork. Eqwitty's AI-powered platform helps founders build credible data rooms and gives investors instant answers across every document, without the back and forth.
Real Estate / Capital Management
A portfolio optimization tool for Infrastructure Ontario's 40M+ sq ft public asset portfolio. Designed an ILP-powered system that shifted capital allocation from reactive triggers to proactive, data-driven prioritization across 10+ asset types.
Capital Markets / NLP Analysis
A personalized news and sentiment platform for analysts, investors, and enterprise research teams. Centralized fragmented research workflows into a single intelligent feed, cutting daily research time by 20%.
Healthcare / Blockchain
A mobile health passport giving patients ownership of their vaccination records and simplifying booking, check-in, and data sharing. Led 0→1 strategy from concept to MVP, winning Blockhack Global 2020 and reaching 100+ sign-ups at launch.
Data Integrity / Data Governance
An enterprise-grade data governance engine designed to resolve deep fragmentation within high-volume CRM environments. Engineered advanced matching logic and automated merging workflows to consolidate millions of duplicate records into a single source of truth, significantly improving data reliability for enterprise operations.
HQ Toronto, CA
I turn messy problem spaces into user-centric products people actually use. AI-powered, data-driven, and shipped end-to-end across fintech, healthcare, real estate, procurement, and infrastructure.
Product Designer
Oct 2022 – Present
Savings Experience Redesign
Drove 100% YoY increase in kid contributions and $1M+ in new deposits within 5 months; boosted Goal adoption by 50%.
Money Movement UX
Simplified Autoload and transfer flows, driving a 165% YoY increase in money loads to $33M for the fiscal year.
Ignite Acquisition Initiative
Led design delivering 10x ROI, converting a $30K investment into $300K in new money loads.
Apple Pay & Debit Integration
Launched card and wallet features, unlocking $200K/week in new transaction volume.
Co-founder, Product Lead
Sep 2024 – Present
MVP Launch
Led a team of 3 developers to ship core product including auth, onboarding, dashboard, marketplace, and admin.
RAG & AI Models
Led 3 AI engineers to build business proposal analysis and term sheet analysis models.
Pilot
Onboarded 5 startups through WeRise Investments partnership.
Product Manager, Data & AI
May 2021 – Oct 2022
Data & AI Academy
Launched and scaled org-wide learning platform via Coursera, Microsoft, and Vector Institute partnerships; drove 20% adoption and saved $200K in year one.
Pension Liability Model
Led ML-powered redesign, improving forecast accuracy and reducing risk exposure by $2M.
AI Tools
Defined strategy and UX for NLP news aggregation and entity resolution systems.
Lead Data Scientist
2017 – 2021
Funding Optimization Model
Built ILP model across 4,000+ buildings, improving allocation of $200M+ in public capital.
NLP Legal Query Tool
Reduced legal research workload by 20%.
Smart Contract Platform
Launched blockchain-based milestone disbursement system with projected savings of $2.5M.
University of Waterloo
University of Waterloo
An AI-powered fundraising platform for the next generation of founders and investors.
Eqwitty — an AI-powered fundraising platform that helps founders build investor-ready data rooms and helps investors evaluate deals faster.
Early-stage founders and investors. Founders going into fundraising blind, with no guidance on what investors need. Investors spending 570 hours per funded deal chasing documents through incomplete, unstructured data rooms.
Co-founder and sole designer. Owned the full 0→1 process including research, IA, interaction design, and product strategy on a lean founding team.
570 hours of diligence per funded deal. No standard structure, no search, no founder guidance, and no investor signals. The result was 4 to 6 weeks of back and forth and lost deal momentum on both sides.
A category-driven data room with readiness signals, a guided founder setup flow, and an AI layer for natural language queries across documents would cut those 570 hours in half.
An AI-powered workspace for analyzing commercial lease agreements and property tax implications.
Commercial real estate brokers running active deal pipelines. At any given time they're juggling 10 to 30 leases, working fast, and under real pressure to close. A missed clause isn't just an inconvenience. It's a blown deal or a financial liability.
Lead designer and product manager on a team of three. I owned the product direction, the UX, and the story we told about it.
Brokers don't have a reading problem. They have a volume problem. If we let them upload an entire portfolio at once, process it in the background, and give them a single workspace where they can move between extracted data and the source document without losing their place, they'll catch what matters before it becomes a problem.
A mobile health passport that enables patients to book vaccinations, track their medical records, and securely own their health data.
Patients navigating fragmented healthcare systems, especially those managing vaccinations and records across multiple providers.
Co-founder and Design Lead on a small founding team. Owned product strategy and 0→1 design end-to-end.
If we give patients ownership of their health data and simplify key actions like booking and record sharing, we can reduce friction and improve trust and engagement in healthcare interactions.
Healthcare systems are difficult to navigate and lack a unified patient experience. Patients often rely on disconnected providers, manual processes, and outdated tools to manage their health information. The core issue was systemic fragmentation: health data is siloed, patients lack direct access, and paper-based systems introduce errors and inefficiencies.
The problem was not just access to data, but control and trust. Patients needed ownership over their health records, confidence that their data is accurate, and a simple way to interact with complex systems.
YellO is built around three core actions: book appointments for seamless vaccination scheduling, check in digitally at the clinic, and access verified portable health records. Vaccination records are stored in a scannable format, shareable instantly with institutions, and serve as a single source of truth across providers. Users choose when and with whom to share their data, and records stay consistent across all touchpoints.
Paper-based systems like the physical Yellow Card are fragile, easy to lose, and hard to verify. YellO replaces them with tamper-resistant digital records while shifting control from institutions back to patients. Booking and check-in eliminate redundant manual paperwork, reducing administrative burden on both sides of the interaction.
I helped design a 0→1 healthcare product that simplifies how patients access and manage their health data by focusing on ownership, trust, and usability.
Deupe uses machine learning to find, link, and resolve duplicate records across data silos. Fewer duplicates. Cleaner pipelines. More reliable decisions.
Organizations treat data quality as a background task. In reality, duplicate records silently inflate costs, corrupt reporting, and erode decision-making confidence at every level.
"Data is leveraged in everyday work and managing data quality is often lumped into being 'part of the job.'"
Harvard Business Review
Duplicate records inflate storage usage on Azure, AWS, and GCP, where storage footprint directly dictates billing.
Redundant records compound computational load across every ETL job that touches the dataset.
Workers spend untracked hours accommodating bad data downstream rather than fixing it at the source.
Duplicate records skew metrics, corrupt dashboards, and introduce reporting errors that compound over time.
Worked directly with the Data Science team to translate a Python ML library into an interface non-technical users could understand and act on.
"The design challenge wasn't just making ML accessible. It was making uncertainty legible, and giving people enough context to trust the machine."
Design Rationale
The core tension in deduplication UX: the model is confident but not infallible. Every design decision had to communicate probability without creating paralysis. Confidence bars, review queues, and clear merge previews were the vocabulary of that trust-building.
Users upload a CSV or connect via API. Deupe profiles the schema, identifies field types, and surfaces data quality signals upfront.
Fields are mapped across datasets visually. Name, address, and identifier columns are aligned so the model has the right training signal.
The ML model runs blocking and pairwise comparison, scoring each candidate pair with a confidence value. Uncertain pairs surface for human review (see the sketch after these steps).
Confirmed duplicates are merged or linked. A clean, canonical dataset is exported or pushed back to the source system via the microservice API.
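A minimal sketch of the blocking and scoring steps in plain Python, for illustration only: the field names, thresholds, and similarity function are assumptions, and the production model (the Dedupe library) learns field weights from labeled pairs rather than averaging string similarity.

```python
from difflib import SequenceMatcher
from itertools import combinations

def block_key(record: dict) -> str:
    # Cheap blocking: only records sharing a key are compared pairwise,
    # avoiding an O(n^2) scan over the full dataset.
    return record["name"].strip().lower()[:3]

def confidence(a: dict, b: dict, fields=("name", "address")) -> float:
    # Naive confidence proxy: average string similarity across mapped fields.
    sims = [SequenceMatcher(None, a[f].lower(), b[f].lower()).ratio() for f in fields]
    return sum(sims) / len(sims)

records = [
    {"name": "Acme Corp.", "address": "12 King St W"},
    {"name": "ACME Corporation", "address": "12 King Street West"},
    {"name": "Beta Industries", "address": "400 Front St"},
]

blocks: dict[str, list[dict]] = {}
for r in records:
    blocks.setdefault(block_key(r), []).append(r)

auto_resolved, review_queue = [], []
for group in blocks.values():
    for a, b in combinations(group, 2):
        score = confidence(a, b)
        if score >= 0.90:        # high certainty: merge automatically
            auto_resolved.append((a, b, score))
        elif score >= 0.60:      # ambiguous: route to human review
            review_queue.append((a, b, score))

print(len(auto_resolved), "auto-resolved;", len(review_queue), "queued for review")
```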
The Data Science team needed business users to interact with deduplication algorithms without requiring technical expertise. The design challenge was surfacing confidence, uncertainty, and control within a clean, task-oriented UI.
A step-by-step wizard lets non-technical users configure dataset fields and connect sources without writing code.
Pairs are surfaced by confidence score. High-certainty duplicates auto-resolve. Ambiguous ones enter a human review queue with clear context.
Users can trace how a canonical record was formed, which records were linked, and with what confidence.
The same microservice powering the UI is exposed as a REST API, letting engineering teams integrate deduplication directly into pipelines.
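Integration could look like the following sketch. Everything here is hypothetical — the endpoint paths, payload shape, and threshold parameters are invented for illustration, since the case study does not document the actual API contract.

```python
import requests

BASE = "https://deupe.internal.example/api/v1"  # hypothetical host and path

# Submit a deduplication job against a source dataset.
resp = requests.post(
    f"{BASE}/jobs",
    json={
        "source": {"type": "csv", "uri": "s3://bucket/contacts.csv"},
        "fields": ["name", "address", "customer_id"],
        "auto_merge_threshold": 0.90,  # pairs above this merge automatically
        "review_threshold": 0.60,      # pairs above this enter the review queue
    },
    timeout=30,
)
resp.raise_for_status()
job_id = resp.json()["job_id"]

# Poll for completion, then pull the canonical records back into the pipeline.
status = requests.get(f"{BASE}/jobs/{job_id}", timeout=30).json()
if status["state"] == "complete":
    canonical = requests.get(f"{BASE}/jobs/{job_id}/records", timeout=30).json()
```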
Demonstrated as a working microservice integrated into the enterprise data stack. Engineering and business stakeholders both had a path to adoption.
A single deployable service served both the non-technical business UI and the technical REST API, eliminating duplication of effort and infrastructure.
Resolving duplicate records reduced cloud storage utilization and cut redundant computational load across ETL pipelines touching deduplicated datasets.
Comparing field-level content between matched records (e.g. contract terms, address line differences) was scoped out. The POC focused on record-level identity matching, not content diffing.
Allowing users to edit canonical records inline post-merge added significant complexity to the data model. Deferred to the source system for any downstream editing.
Training the Dedupe model with user-labeled pairs in-app was technically feasible but introduced enough surface area to warrant its own initiative. Left to the Data Science team via CLI.
The design was validated with the DS team but not with the actual business users who'd own the review queue. Observing how a data steward processes ambiguous pairs would have sharpened the confidence UI significantly.
Why did the model flag these two records as duplicates? The current UI shows the score, not the reasoning. A feature-level breakdown (name similarity: 94%, address: 88%) would build more trust with skeptical users; a sketch of that breakdown follows this list.
The review queue was designed for one pair at a time. For large datasets with hundreds of uncertain pairs, a bulk-action pattern or similarity clustering would reduce the human review burden substantially.
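One way the feature-level breakdown could work, as a minimal sketch: compute a per-field similarity next to the aggregate confidence so a reviewer sees why a pair was flagged. The field names and the use of difflib are illustrative assumptions, not the shipped scoring logic.

```python
from difflib import SequenceMatcher

def explain_match(a: dict, b: dict, fields=("name", "address", "email")) -> dict:
    # Per-field similarity breakdown for one candidate pair (illustrative only);
    # shown alongside the aggregate confidence in the review queue UI.
    return {
        f: round(SequenceMatcher(None, a.get(f, "").lower(), b.get(f, "").lower()).ratio(), 2)
        for f in fields
    }

# Example output: {"name": 0.94, "address": 0.88, "email": 0.41}
```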
A personalized news and sentiment platform that centralizes discovery, analysis, and sharing for enterprise research teams.
Analysts, investors, and research teams who rely on timely, high-quality information to support decision-making, often spending 1 to 3+ hours daily across fragmented tools and subscriptions.
Lead Product Designer & Product Manager
If we centralize news, personalize it by user interests, and layer in sentiment analysis, we can reduce research time and enable faster, more confident decision-making.
Enterprise research workflows are heavily dependent on news, but existing tools make it difficult to efficiently find, organize, and interpret information. Users relied on multiple platforms and subscriptions, with research often consuming 1 to 3+ hours daily. Articles were hard to store, retrieve, and act on.
The core insight: the problem wasn't access to information, it was the ability to process and act on it efficiently. Users needed a centralized place for discovery and faster ways to interpret what they found.
Newsly brings together three core capabilities in one platform: follow topics for custom feeds based on user interests, analyze sentiment for at-a-glance industry perception, and save and share for structured knowledge organization.
Centralized experience
Replaced fragmented multi-platform workflows with a single unified interface, reducing switching costs and keeping context intact.
Personalized discovery
Users create and follow topics tailored to their needs, increasing content relevance and cutting time spent searching for high-signal news.
Reduced cognitive load
Sentiment indicators give users at-a-glance context on market perception, so they can triage without reading every article in full.
Flexible organization
Collections let users save, group, and revisit content, making research reusable and shareable across teams.
Topic-Based Feeds
Curates content around user-defined interests, eliminating the need to search multiple platforms and keeping users updated in real time.
Sentiment Analysis Layer
Classifies articles as positive, neutral, or negative, helping users quickly surface signals and identify trends without reading everything in full; a minimal sketch of this classification step follows this list.
Collections and Sharing
Stores articles in a structured, accessible way and enables collaboration across teams, improving how research is organized and reused.
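To make the sentiment layer concrete, here is a minimal classification sketch using NLTK's VADER analyzer — an assumption for illustration, since the case study does not name the model Newsly used. The ±0.05 cutoffs are VADER's conventional thresholds.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

def classify(headline: str) -> str:
    # Map VADER's compound score in [-1, 1] to the three buckets shown in the feed.
    score = sia.polarity_scores(headline)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

print(classify("Chipmaker beats earnings estimates, raises full-year guidance"))
```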
I transformed a fragmented research workflow into a centralized, intelligent system that helps enterprise teams move from information overload to actionable insights.
A portfolio optimization system for Infrastructure Ontario that recommends which assets to retain, monitor, or transition, shifting decision-making from reactive triggers to proactive, data-driven evaluation.
Infrastructure Ontario asset managers and planning teams overseeing a portfolio of 40M+ rentable square feet and 1M acres across 10+ asset types, working within strict capital and operational budget constraints.
Product Designer and Data Scientist on the project, leading end-to-end design of both the front-end experience and the ILP optimization model powering decision-making.
If we score every asset on a consistent performance scorecard and run an ILP model across the full portfolio, we can replace reactive, trigger-based categorization with proactive recommendations that maximize portfolio value under real budget constraints.
The Asset Performance Scorecard
To move from reactive triggers to proactive evaluation, the first step was making every asset comparable. The scorecard defined four performance dimensions applied consistently across all asset types: Condition (Facility Condition Index, or FCI), Financials (O&M demands), Utilization (Occupancy), and Strategy (Classification). This gave every asset in the portfolio a common language, regardless of whether it was a courthouse, a correctional facility, or a land holding.
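As a minimal sketch, a scorecard record and composite score might look like the following — the weights, normalization, and field ranges are illustrative assumptions; the case study defines the four dimensions but not how they are combined.

```python
from dataclasses import dataclass

@dataclass
class Scorecard:
    fci: float        # Condition: Facility Condition Index, 0-100 (lower is better)
    om_demand: float  # Financials: normalized O&M demand in [0, 1] (lower is better)
    occupancy: float  # Utilization: occupancy rate in [0, 1]
    strategic: float  # Strategy: classification alignment score in [0, 1]

def composite(s: Scorecard, weights=(0.3, 0.2, 0.3, 0.2)) -> float:
    # Higher is better; condition and cost terms are inverted so all four align.
    terms = (1 - s.fci / 100, 1 - s.om_demand, s.occupancy, s.strategic)
    return sum(w * t for w, t in zip(weights, terms))

print(round(composite(Scorecard(fci=12.0, om_demand=0.4, occupancy=0.85, strategic=0.7)), 2))
```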
Why Integer Linear Programming
The selection problem is combinatorial: given a portfolio of hundreds of assets, each with a known value and a known cost, which combination maximizes total portfolio value without exceeding the available budget? This is the classic 0/1 knapsack problem, and the number of possible combinations grows exponentially with portfolio size, ruling out manual evaluation. ILP is the right tool for this class of problem because it finds the provably optimal solution under hard constraints, rather than approximating it.
The Optimization Model
The objective function maximizes the sum of Annual Capital Value across all selected assets:

maximize Σ Vi·Xi over all assets i, subject to Σ Ci·Xi ≤ B and Xi ∈ {0, 1}

where Vi is the Annual Capital Value of asset i, Ci its annual cost, and B the available budget. Each asset is either retained as Core (Xi = 1) or it is not (Xi = 0); there are no partial selections. This formulation ensures the model selects the highest-value combination of assets the budget can sustain, evaluated across the full portfolio simultaneously rather than asset by asset.
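A minimal sketch of this formulation using PuLP; the asset values, costs, and budget are hypothetical numbers standing in for scorecard-derived inputs across the real portfolio.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

assets = {
    # asset_id: (annual_capital_value, annual_cost) — illustrative figures
    "courthouse_A": (9.2, 4.1),
    "corrections_B": (7.5, 3.8),
    "office_C": (5.1, 2.0),
    "land_D": (2.4, 0.3),
}
budget = 6.0  # available capital, in the same units as cost

prob = LpProblem("core_portfolio_selection", LpMaximize)
x = LpVariable.dicts("retain", assets, cat="Binary")  # Xi ∈ {0, 1}

# Objective: maximize total Annual Capital Value of retained (Core) assets.
prob += lpSum(v * x[i] for i, (v, _) in assets.items())
# Constraint: total cost of retained assets cannot exceed the budget.
prob += lpSum(c * x[i] for i, (_, c) in assets.items()) <= budget

prob.solve()
print("Retain as Core:", [i for i in assets if x[i].value() == 1])
```

The solver evaluates the whole portfolio at once, so the output is the provably optimal Core set under the budget, not a greedy asset-by-asset ranking.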
Finnish Biodiversity Precedent
ILP has strong precedent in government portfolio decision-making. Finland's Forest and Biodiversity Program applied the same technique to maximize conservation value across land acquisitions under strict budget and ecological constraints, including minimum thresholds for endangered species coverage, old-growth tree density, and proximity to existing reserves. The parallel is direct: a government balancing competing asset values against a fixed capital budget, using ILP to find the optimal selection across a large, diverse portfolio. IO's problem is structurally identical, with provincial strategy and program alignment standing in for ecological targets.
From Model to UX
The system needed to answer three questions a planner would actually ask: why is this asset recommended for transition, what happens to portfolio value if I override it, and what does the picture look like if next year's budget is 10 percent lower? The AI agent chat interface addresses this directly. Rather than exposing model parameters through form inputs or dashboards, planners interact through natural language. A question like "show me Transition-flagged assets with an FCI under 15" or "what is the impact of removing this asset from Core" maps to a model query without requiring the user to understand the underlying formulation. The conversational layer makes the optimization accessible without simplifying it away.
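As a minimal sketch of the last step: once the conversational layer has parsed a question into structured parameters, running it against the portfolio is a plain filter. The field names and parsing boundary are assumptions; how the chat layer extracts the parameters (LLM or rules) is out of scope here.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    asset_id: str
    flag: str    # model recommendation: "Core", "Monitor", or "Transition"
    fci: float   # Facility Condition Index

def run_query(assets: list[Asset], flag: str, max_fci: float) -> list[Asset]:
    # Structured form of: "show me Transition-flagged assets with an FCI under 15".
    return [a for a in assets if a.flag == flag and a.fci < max_fci]

portfolio = [Asset("courthouse_A", "Transition", 12.0), Asset("office_C", "Core", 8.5)]
print(run_query(portfolio, flag="Transition", max_fci=15.0))
```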