👋

I am a product builder. I focus on designing scalable, intuitive products, balancing business goals, technology, and human needs.

X
Mydoh
OKX
Wayflyer
Infrastructure Ontario
Newtopia
Brex
200% Activation Increase

Driving Funded Accounts

Mydoh / Activation Experience

Transforming a fragmented activation problem into a focused in-product experience that drove 200% growth in money loads.

100% YOY Growth
$1M+ New Deposits
50% Goals Adoption

Savings Goals Mechanics

Mydoh / Habit Formation

Developing goal-based saving mechanics that transform financial responsibility into an engaging habit for teen users.


Mydoh

Fintech / Savings Platform

A family banking app serving 300K+ users across Canada. Led UX across onboarding, money movement, and account management, driving a 200% increase in activated accounts and stronger lifecycle engagement.


LeaseAI

Real Estate / Agentic AI

An AI-powered workspace for commercial real estate brokers managing large lease portfolios. Led 0→1 design and product on a lean team, delivering a POC that cut manual review time by ~45%.


Eqwitty

Fintech / Capital Raising

Due diligence is slow and fundraising is guesswork. Eqwitty's AI-powered platform helps founders build credible data rooms and gives investors instant answers across every document, without the back and forth.


Zurich

Real Estate / Capital Management

A portfolio optimization tool for Infrastructure Ontario's 40M+ sq ft public asset portfolio. Designed an ILP-powered system that shifted capital allocation from reactive triggers to proactive, data-driven prioritization across 10+ asset types.

Newsly

Capital Markets / NLP Analysis

A personalized news and sentiment platform for analysts, investors, and enterprise research teams. Centralized fragmented research workflows into a single intelligent feed, cutting daily research time by 20%.

Yello

Healthcare / Blockchain

A mobile health passport giving patients ownership of their vaccination records and simplifying booking, check-in, and data sharing. Led 0→1 strategy from concept to MVP, winning Blockhack Global 2020 and reaching 100+ sign-ups at launch.

Dedupe

Data Integrity / Data Governance

An enterprise-grade data governance engine designed to resolve deep fragmentation within high-volume CRM environments. Engineered advanced matching logic and automated merging workflows to consolidate millions of duplicate records into a single source of truth, significantly improving data reliability for enterprise operations.

Let's Talk.


Let's Connect

I hope you enjoyed scrolling through my portfolio. I'm currently looking for new opportunities; feel free to reach out on LinkedIn or via email.

Download CV

My Experience

I turn messy problem spaces into user-centric products people actually use. AI-powered, data-driven, and shipped end-to-end across fintech, healthcare, real estate, procurement, and infrastructure.

Mydoh – RBCx

Oct 2022 – Present
Role

Product Designer

Key Projects

Savings Experience Redesign

Drove 100% YoY increase in kid contributions and $1M+ in new deposits within 5 months; boosted Goal adoption by 50%.

Money Movement UX

Simplified Autoload and transfer flows, driving a 165% YoY increase in fiscal loads to $33M.

Ignite Acquisition Initiative

Led design delivering 10x ROI, converting a $30K investment into $300K in new money loads.

Apple Pay & Debit Integration

Launched card and wallet features, unlocking $200K/week in new transaction volume.

Scope
Product Strategy · UX Research · UI Design · IA · Testing · Prototyping · Mobile-first Design

Eqwitty

Sep 2024 – Present
Role

Co-founder, Product Lead

Key Projects

MVP Launch

Led a team of 3 developers to ship core product including auth, onboarding, dashboard, marketplace, and admin.

RAG & AI Models

Led 3 AI engineers to build business proposal analysis and term sheet analysis models.

Pilot

Onboarded 5 startups through WeRise Investments partnership.

Scope
0-to-1 Product · AI/RAG · Vision · Research · Fundraising

OMERS

May 2021 – Oct 2022
Role

Product Manager, Data & AI

Key Projects

Data & AI Academy

Launched and scaled org-wide learning platform via Coursera, Microsoft, and Vector Institute partnerships; drove 20% adoption and saved $200K in year one.

Pension Liability Model

Led ML-powered redesign, improving forecast accuracy and reducing risk exposure by $2M.

AI Tools

Defined strategy and UX for NLP news aggregation and entity resolution systems.

Scope
AI/ML Strategy · Roadmapping · UX Design · NLP · Forecasting · Stakeholders

Infrastructure Ontario

2017 – 2021
Role

Lead Data Scientist

Key Projects

Funding Optimization Model

Built ILP model across 4,000+ buildings, improving allocation of $200M+ in public capital.

NLP Legal Query Tool

Reduced legal research workload by 20%.

Smart Contract Platform

Launched blockchain-based milestone disbursement system with projected savings of $2.5M.

Scope
Optimization · NLP · Blockchain · Python · SQL · Policy Analytics

Academic Path

2013 – 2014

Master of Arts, Economics

University of Waterloo

2008 – 2012

Bachelor of Arts, Economics

University of Waterloo

Eqwitty

An AI-powered fundraising platform for the next generation of founders and investors.

📱 Product

Eqwitty — an AI-powered fundraising platform that helps founders build investor-ready data rooms and helps investors evaluate deals faster.

🎯 Who it's for

Early-stage founders and investors. Founders going into fundraising blind, with no guidance on what investors need. Investors spending 570 hours per funded deal chasing documents through incomplete, unstructured data rooms.

👩‍💻 My role

Co-founder and sole designer. Owned the full 0-to-1 process including research, IA, interaction design, and product strategy on a lean founding team.

🔍 The problem

570 hours of diligence per funded deal. No standard structure, no search, no founder guidance, and no investor signals. The result was 4 to 6 weeks of back and forth and lost deal momentum on both sides.

💡 The bet

A category-driven data room with readiness signals, a guided founder setup flow, and an AI layer for natural language queries across documents would cut that 570 hours in half.

⚠️ Constraints

  1. Two users, competing needs. The same surface had to serve a founder uploading and an investor evaluating, without compromising either.
  2. Trust at high stakes. Investors judge credibility within minutes of opening a data room. Structure and seriousness had to come through from the first screen.
  3. No playbook to borrow from. 50+ interviews and a competitive audit of DocSend, Visible, Carta, and Dealum confirmed no product combined AI, cap tables, data rooms, and analytics in one place.

📊 Outcomes

  1. Diligence time cut in half. Readiness signals and category-driven structure targeted a 50% reduction in the 570-hour benchmark.
  2. Founder confidence improved. Guided checklists removed the guesswork. Founders knew exactly what was needed before sharing.
  3. Shipped and validated. Launched with ConsidraCare and WeRise as early partners, moving from zero to a live product in market.

✂️ What we cut

  1. AI-first entry point. Vivi as the primary interface created anxiety before the data room was ready. Became a persistent layer instead.
  2. Investor CRM. Valuable but scope creep for V1.
  3. Custom AI training. Deferred fine-tuning on founder data until the core bet was validated.

LeaseAI

An AI-powered workspace for analyzing commercial lease agreements and property tax implications.

🎯 Who it's for

Commercial real estate brokers running active deal pipelines. At any given time they're juggling 10 to 30 leases, working fast, and under real pressure to close. A missed clause isn't just an inconvenience. It's a blown deal or a financial liability.

👩‍💻 My role

Lead designer and product manager on a team of three. I owned the product direction, the UX, and the story we told about it.

💡 The bet

Brokers don't have a reading problem. They have a volume problem. If we let them upload an entire portfolio at once, process it in the background, and give them a single workspace where they can move between extracted data and the source document without losing their place, they'll catch what matters before it becomes a problem.

⚠️ Constraints

  1. POC scope. We weren't building for production. Every decision had to prove the concept, not scale it.
  2. Trust. Brokers are legally on the hook for what they miss. The product had to earn credibility before it earned love.
  3. Document chaos. Leases run anywhere from 20 to 200+ pages with no consistent structure. The AI needed to absorb that variability so the user never had to think about it.

🗺️ Key User Journeys

  1. Bulk lease ingestion. Drop in an entire portfolio at once. The AI gets to work in the background while the broker moves on. No waiting around, no hand-holding required.
  2. The lease workspace. One place where the extracted data, source document, and summary all live together. Brokers can cross-reference terms and clauses without bouncing between tabs or losing their train of thought.
  3. Renewal and decision support. Ask the portfolio a question and get a real answer. Upcoming expirations, rent escalations, break clauses, landlord obligations. It stops being document management and starts being deal intelligence.

📊 Outcomes

  1. Estimated 45% reduction in lease review time, based on where brokers were spending their hours before.
  2. Validated without a tutorial. Brokers picked it up and moved through it on their own. That was the proof point we were after.
  3. V1 scope locked. We came out of the POC knowing exactly what to build first and why.

✂️ What We Cut

  1. Clause comparison. Brokers wanted it, but it was the right thing to defer. It's first in line for V1.
  2. Inline redlining. It kept creeping into scope. We cut it to stay focused on analysis over authoring.
  3. Custom AI training. Tabled until we know what the post-POC infrastructure actually looks like.

🔜 Next Steps

  1. Productionize the core. Take what the prototype proved and build something stable enough to ship.
  2. Test the hard parts. Bulk upload and workspace navigation need real brokers and real workflows pushing on them.
  3. GTM groundwork. Nail down the ICP, work out pricing, and get a pilot running with design partners.
  4. Build the V1 backlog. Take the most validated features and turn them into a focused, sequenced plan.

YellO: A Decentralized Health Passport

A mobile health passport that enables patients to book vaccinations, track their medical records, and securely own their health data.

🎯 Who it's for

Patients navigating fragmented healthcare systems, especially those managing vaccinations and records across multiple providers.

👩‍💻 My role

Co-founder and Design Lead on a small founding team. Owned product strategy and 0→1 design end-to-end.

💡 The bet

If we give patients ownership of their health data and simplify key actions like booking and record sharing, we can reduce friction and improve trust and engagement in healthcare interactions.

⚠️ Constraints

  • Fragmented ecosystem with poor interoperability
  • High privacy and security requirements
  • Limited funding requiring a focused MVP

📊 Outcomes

  • 100+ user sign-ups during early MVP rollout
  • Defined a 0→1 product strategy across patient and provider experiences
  • Designed core flows for booking, check-in, and record tracking

✂️ What we cut

  • Expanding beyond vaccinations too early
  • Complex medical record systems
  • Feature-heavy experiences that slowed development

Overview & The Problem

Healthcare systems are difficult to navigate and lack a unified patient experience. Patients often rely on disconnected providers, manual processes, and outdated tools to manage their health information. The core issue was systemic fragmentation: health data is siloed, patients lack direct access, and paper-based systems introduce errors and inefficiencies.

Insight

The problem was not just access to data, but control and trust. Patients needed ownership over their health records, confidence that their data is accurate, and a simple way to interact with complex systems.

What it does

YellO is built around three core actions: book appointments for seamless vaccination scheduling, check in digitally at the clinic, and access verified portable health records. Vaccination records are stored in a scannable format, shareable instantly with institutions, and serve as a single source of truth across providers. Users choose when and with whom to share their data, and records stay consistent across all touchpoints.

Why it matters

Paper-based systems like the physical Yellow Card are fragile, easy to lose, and hard to verify. YellO replaces them with tamper-resistant digital records while shifting control from institutions back to patients. Booking and check-in eliminate redundant manual paperwork, reducing administrative burden on both sides of the interaction.

Why it worked

  • Narrow scope enabled faster execution
  • Vaccinations as a high-frequency entry point drove early adoption
  • A clear three-action mental model improved usability
  • End-to-end thinking connected patient and provider experiences into one system
  • Addressed both user-level friction and systemic fragmentation

Takeaway

I helped design a 0→1 healthcare product that simplifies how patients access and manage their health data by focusing on ownership, trust, and usability.

Proof of Concept  ·  Data Quality
Enterprise Data Tool

Build better trust in your data.

Deupe uses machine learning to find, link, and resolve duplicate records across data silos. Fewer duplicates. Cleaner pipelines. More reliable decisions.

Type
Proof of Concept
Role
Product + UX
Domain
Data Engineering
deupe / entity-resolution / run-42
124,580 Records Scanned · 3,241 Duplicates Found · 97.4% Confidence Avg

Record A → Record B → Status:
  • Acme Corp · Toronto → ACME Corp. · ON → Dup
  • J. Smith · 416-555-0143 → John Smith · 416-555-0143 → Review
  • Bright Labs · Vancouver → Bright Labs Inc · BC → Linked
  • NovaTech · Calgary → NovaTech Solutions · AB → Review

Poor data quality is an invisible tax.

$3.1T
IBM Est. · Annual U.S. Cost · 2016

Organizations treat data quality as a background task. In reality, duplicate records silently inflate costs, corrupt reporting, and erode decision-making confidence at every level.

"Data is leveraged in everyday work and managing data quality is often lumped into being 'part of the job.'"

Harvard Business Review
  • ☁️
    Cloud Storage Costs

Duplicate records inflate storage usage on Azure, AWS, and GCP, where stored volume directly dictates billing.

  • ⚙️
    ETL Overhead

    Redundant records compound computational load across every ETL job that touches the dataset.

  • 🕐
    Shadow Data Quality

    Workers spend untracked hours accommodating bad data downstream rather than fixing it at the source.

  • 📊
    Misleading KPIs

    Duplicate records skew metrics, corrupt dashboards, and introduce reporting errors that compound over time.

End-to-end design ownership.

Worked directly with the Data Science team to translate a Python ML library into an interface non-technical users could understand and act on.

Product Design · UI/UX · Visual Design · Interaction Design · Design Systems

"The design challenge wasn't just making ML accessible. It was making uncertainty legible, and giving people enough context to trust the machine."

Design Rationale

The core tension in deduplication UX: the model is confident but not infallible. Every design decision had to communicate probability without creating paralysis. Confidence bars, review queues, and clear merge previews were the vocabulary of that trust-building.

From raw data to resolved entities.

Step 01
📂
Ingest

Users upload CSV or connect via API. Deupe profiles the schema, identifies field types, and surfaces data quality signals upfront.

Step 02
🗂️
Map

Fields are mapped across datasets visually. Name, address, identifier columns are aligned so the model has the right training signal.

Step 03
🤖
Deduplicate

The ML model runs blocking and pairwise comparison, scoring each candidate pair with a confidence value. Uncertain pairs surface for human review.

Step 04
Resolve

Confirmed duplicates are merged or linked. A clean, canonical dataset is exported or pushed back to the source system via the microservice API.
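The four steps above can be sketched end to end. This is an illustrative toy, not Deupe's actual model: it uses stdlib `difflib` string similarity in place of the ML blocking-and-comparison pipeline, and the thresholds and field names are assumptions.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Normalized string similarity in [0, 1]; stands in for the model's
    # learned pairwise comparison.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def score_pair(rec_a: dict, rec_b: dict, fields=("name", "city")) -> float:
    # Average field-level similarity as a single confidence score.
    return sum(similarity(rec_a[f], rec_b[f]) for f in fields) / len(fields)

def triage(pairs, auto_threshold=0.95, review_threshold=0.70):
    # Route each candidate pair: auto-merge, human review, or distinct.
    merged, review, distinct = [], [], []
    for a, b in pairs:
        conf = round(score_pair(a, b), 2)
        if conf >= auto_threshold:
            merged.append((a, b, conf))
        elif conf >= review_threshold:
            review.append((a, b, conf))
        else:
            distinct.append((a, b, conf))
    return merged, review, distinct

acme_1 = {"name": "Acme Corp", "city": "Toronto"}
acme_2 = {"name": "ACME Corp.", "city": "Toronto"}
bright = {"name": "Bright Labs", "city": "Vancouver"}
merged, review, distinct = triage([(acme_1, acme_2), (acme_1, bright)])
# (acme_1, acme_2) auto-merges; (acme_1, bright) stays distinct
```

The two thresholds are the design lever: raising `review_threshold` shrinks the human queue at the cost of more missed links.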

Accessible interfaces for complex ML.

The Data Science team needed business users to interact with deduplication algorithms without requiring technical expertise. The design challenge was surfacing confidence, uncertainty, and control within a clean, task-oriented UI.

01
Guided upload and field mapping

A step-by-step wizard lets non-technical users configure dataset fields and connect sources without writing code.

02
Confidence-first review queue

Pairs are surfaced by confidence score. High-certainty duplicates auto-resolve. Ambiguous ones enter a human review queue with clear context.

03
Entity resolution timeline

Users can trace how a canonical record was formed, which records were linked, and with what confidence.

04
API + UI in parallel

The same microservice powering the UI is exposed as a REST API, letting engineering teams integrate deduplication directly into pipelines.

[Product UI: a guided wizard. Step 1 of 3 — Upload Dataset: choose a file or drag it here (CSV, JSON, XLSX · Max 500MB). Step 2 of 3 — Map Fields: company_name → Organization, address_1 → Street Address, phone_number → Unmapped.]

What it delivered.

POC
Validated at Enterprise Scale

Demonstrated as a working microservice integrated into the enterprise data stack. Engineering and business stakeholders both had a path to adoption.

2-in-1
API and UI from One Service

A single deployable service served both the non-technical business UI and the technical REST API, eliminating duplication of effort and infrastructure.

↓ Cost
Reduced Storage and ETL Spend

Resolving duplicate records reduced cloud storage utilization and cut redundant computational load across ETL pipelines touching deduplicated datasets.

Scope discipline.

Cross-dataset Clause Comparison

Comparing field-level content between matched records (e.g. contract terms, address line differences) was scoped out. The POC focused on record-level identity matching, not content diffing.

Inline Redlining and Edit Workflows

Allowing users to edit canonical records inline post-merge added significant complexity to the data model. Deferred to the source system for any downstream editing.

Custom Model Training UI

Training the Dedupe model with user-labeled pairs in-app was technically feasible but introduced enough surface area to warrant its own initiative. Left to the Data Science team via CLI.

What I'd push further with more runway.

01
User testing with data stewards

The design was validated with the DS team but not with the actual business users who'd own the review queue. Observing how a data steward processes ambiguous pairs would have sharpened the confidence UI significantly.

02
Audit trail and explainability

Why did the model flag these two records as duplicates? The current UI shows the score, not the reasoning. A feature-level breakdown (name similarity: 94%, address: 88%) would build more trust with skeptical users.
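A feature-level breakdown can be as simple as scoring each shared field independently instead of reporting one aggregate number. A minimal sketch, where the field names and `difflib` similarity are illustrative stand-ins for the real model's features:

```python
from difflib import SequenceMatcher

def field_breakdown(rec_a: dict, rec_b: dict) -> dict:
    # Per-field similarity scores, so a reviewer sees *why* the pair
    # was flagged rather than a single opaque confidence value.
    return {
        field: round(SequenceMatcher(None, rec_a[field].lower(),
                                     rec_b[field].lower()).ratio(), 2)
        for field in rec_a.keys() & rec_b.keys()
    }

pair = field_breakdown(
    {"name": "J. Smith", "phone": "416-555-0143"},
    {"name": "John Smith", "phone": "416-555-0143"},
)
# Identical phones score 1.0 while the fuzzy name scores lower,
# which is exactly the context a skeptical reviewer needs.
```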

03
Batch review patterns

The review queue was designed for one pair at a time. For large datasets with hundreds of uncertain pairs, a bulk-action pattern or similarity clustering would reduce the human review burden substantially.

Newsly

A personalized news and sentiment platform that centralizes discovery, analysis, and sharing for enterprise research teams.


🎯 Who it's for

Analysts, investors, and research teams who rely on timely, high-quality information to support decision-making, often spending 1 to 3+ hours daily across fragmented tools and subscriptions.

👩‍💻 My role

Lead Product Designer & Product Manager

💡 The bet

If we centralize news, personalize it by user interests, and layer in sentiment analysis, we can reduce research time and enable faster, more confident decision-making.

⚠️ Constraints

  1. Large volume of data across fragmented sources
  2. Balancing powerful functionality with simplicity
  3. Limited existing infrastructure for personalization and search

📊 Outcomes

  1. Reduced research time by 20% through centralized discovery
  2. Defined a scalable structure for topic-based personalization
  3. Improved ability to discover, organize, and share research, creating a foundation for faster team collaboration

✂️ What we cut

  1. Overly complex analytics dashboards: added overhead without proportional value
  2. Static, non-customizable news feeds: didn't support personalized workflows
  3. Feature-heavy experiences: increased cognitive load and undermined the core simplicity goal

📋 Overview

Enterprise research workflows are heavily dependent on news, but existing tools make it difficult to efficiently find, organize, and interpret information. Users relied on multiple platforms and subscriptions, with research often consuming 1 to 3+ hours daily. Articles were hard to store, retrieve, and act on.

The core insight: the problem wasn't access to information, it was the ability to process and act on it efficiently. Users needed a centralized place for discovery and faster ways to interpret what they found.

💻 The Solution

Newsly brings together three core capabilities in one platform: follow topics for custom feeds based on user interests, analyze sentiment for at-a-glance industry perception, and save and share for structured knowledge organization.

✨ Design Highlights

Centralized experience

Replaced fragmented multi-platform workflows with a single unified interface, reducing switching costs and keeping context intact.

Personalized discovery

Users create and follow topics tailored to their needs, increasing content relevance and cutting time spent searching for high-signal news.

Reduced cognitive load

Sentiment indicators give users at-a-glance context on market perception, so they can triage without reading every article in full.

Flexible organization

Collections let users save, group, and revisit content, making research reusable and shareable across teams.

🎯 Key Features

Topic-Based Feeds

Curates content around user-defined interests, eliminating the need to search multiple platforms and keeping users updated in real time.

Sentiment Analysis Layer

Classifies articles as positive, neutral, or negative, helping users quickly surface signals and identify trends without reading everything in full.

Collections and Sharing

Stores articles in a structured, accessible way and enables collaboration across teams, improving how research is organized and reused.
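The sentiment layer above can be illustrated with a toy lexicon-based classifier. This is a sketch only: the word lists and headlines are invented, and a production system would use a trained NLP model rather than keyword counting.

```python
# Illustrative lexicons; a real deployment would use a trained model.
POSITIVE = {"growth", "beat", "record", "upgrade", "surge"}
NEGATIVE = {"loss", "miss", "downgrade", "lawsuit", "decline"}

def classify(headline: str) -> str:
    # Net count of positive vs. negative cue words decides the label.
    words = set(headline.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

classify("Acme posts record growth after upgrade")  # -> "positive"
classify("Regulator lawsuit triggers decline")      # -> "negative"
```

The three-way label is what powers the at-a-glance triage described above: users scan indicators instead of reading every article in full.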

🎓 Takeaway

I transformed a fragmented research workflow into a centralized, intelligent system that helps enterprise teams move from information overload to actionable insights.


Project Zurich: IO Portfolio Optimizer

A portfolio optimization system for Infrastructure Ontario that recommends which assets to retain, monitor, or transition, shifting decision-making from reactive triggers to proactive, data-driven evaluation.

🎯 Who it's for

Infrastructure Ontario asset managers and planning teams overseeing a portfolio of 40M+ rentable square feet and 1M acres across 10+ asset types, working within strict capital and operational budget constraints.

👩‍💻 My role

Product Designer and Data Scientist on the project, leading end-to-end design of both the front-end experience and the ILP optimization model powering decision-making.

💡 The bet

If we score every asset on a consistent performance scorecard and run an ILP model across the full portfolio, we can replace reactive, trigger-based categorization with proactive recommendations that maximize portfolio value under real budget constraints.

⚠️ Constraints

  1. Budget limits: asset selection must stay within annual capital and operational thresholds.
  2. Policy alignment: Core asset picks must reflect provincial strategies and social objectives.
  3. Data diversity: scoring had to work across 10+ asset types with different condition, financial, and utilization profiles.

📊 Outcomes

  1. Holistic oversight: delivered a comprehensive portfolio-wide view across 10+ asset types simultaneously.
  2. Strategic alignment: asset recommendations tied directly to provincial objectives and program priorities.
  3. Proactive prioritization: replaced FCI-threshold and ministry-trigger based decisions with continuous, model-driven evaluation.

✨ Approach & Methodology

The Asset Performance Scorecard

To move from reactive triggers to proactive evaluation, the first step was making every asset comparable. The scorecard defined four performance dimensions applied consistently across all asset types: Condition (FCI), Financials (O&M demands), Utilization (Occupancy), and Strategy (Classification). This gave every asset in the portfolio a common language, regardless of whether it was a courthouse, a correctional facility, or a land holding.
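One way such a scorecard can collapse into a single comparable number is a weighted composite across the four dimensions. The sketch below is purely illustrative: the weights, the O&M normalization cap, and the strategy scoring are my assumptions, not IO's actual methodology.

```python
def asset_score(fci: float, om_cost: float, occupancy: float, core: bool,
                max_om: float = 5_000_000) -> float:
    # Normalize each dimension to [0, 1] so a courthouse and a land
    # holding become directly comparable.
    condition = 1 - min(fci, 1.0)                # lower FCI = better condition
    financial = 1 - min(om_cost / max_om, 1.0)   # lower O&M demand = better
    utilization = occupancy                      # already a 0-1 rate
    strategy = 1.0 if core else 0.5              # classification alignment
    weights = (0.3, 0.25, 0.25, 0.2)             # assumed, sums to 1.0
    dims = (condition, financial, utilization, strategy)
    return sum(w * d for w, d in zip(weights, dims))

# Hypothetical asset: good condition, moderate O&M, high occupancy, Core.
courthouse = asset_score(fci=0.08, om_cost=1_200_000, occupancy=0.92, core=True)
```

The output feeds directly into the optimization as each asset's value term.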

Why Integer Linear Programming

The selection problem is combinatorial: given a portfolio of hundreds of assets, each with a known value and a known cost, which combination maximizes total portfolio value without exceeding the available budget? The number of possible combinations grows exponentially with portfolio size, ruling out manual evaluation. ILP is the right tool for this class of problem because it finds the provably optimal solution under hard constraints, rather than approximating it.

The Optimization Model

The objective function maximizes the sum of Annual Capital Value across all selected assets:

Objective Function
max Z = Σᵢ Xᵢ · ACVᵢ
Subject to Budget Constraint
Σᵢ Xᵢ · Costᵢ ≤ Budget_Total
Where Xᵢ ∈ {0, 1}

Where Xᵢ ∈ {0, 1}: each asset is either retained as Core or it is not. No partial selections. This formulation ensures the model selects the highest-value combination of assets the budget can sustain, evaluated across the full portfolio simultaneously rather than asset by asset.
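The formulation above is a 0/1 knapsack. In practice an ILP solver (e.g. PuLP with CBC) handles real portfolio sizes and additional policy constraints; the dynamic-programming sketch below, with invented asset numbers, shows the same selection logic at toy scale.

```python
def select_core_assets(assets, budget):
    # 0/1 knapsack DP: maximize total ACV subject to the budget.
    # `assets` is a list of (name, acv, cost); costs in whole $K.
    n = len(assets)
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for i, (_, acv, cost) in enumerate(assets, 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]
            if cost <= b:
                best[i][b] = max(best[i][b], best[i - 1][b - cost] + acv)
    # Trace back which assets the optimum retained as Core (Xi = 1).
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(assets[i - 1][0])
            b -= assets[i - 1][2]
    return best[n][budget], chosen

# Illustrative assets: (name, annual capital value, cost), both in $K.
assets = [("Courthouse A", 900, 400), ("Depot B", 600, 300), ("Land C", 400, 200)]
value, chosen = select_core_assets(assets, budget=600)
# Optimum keeps Courthouse A and Land C (value 1300); Depot B exceeds
# the remaining budget, so it transitions out.
```

No partial selections: each asset is either fully retained or not, which is exactly the binary Xᵢ constraint the prose describes.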

Finnish Biodiversity Precedent

ILP has strong precedent in government portfolio decision-making. Finland's Forest and Biodiversity Program applied the same technique to maximize conservation value across land acquisitions under strict budget and ecological constraints, including minimum thresholds for endangered species coverage, old-growth tree density, and proximity to existing reserves. The parallel is direct: a government balancing competing asset values against a fixed capital budget, using ILP to find the optimal selection across a large, diverse portfolio. IO's problem is structurally identical, with provincial strategy and program alignment standing in for ecological targets.

From Model to UX

The system needed to answer three questions a planner would actually ask: why is this asset recommended for transition, what happens to portfolio value if I override it, and what does the picture look like if next year's budget is 10 percent lower? The AI agent chat interface addresses this directly. Rather than exposing model parameters through form inputs or dashboards, planners interact through natural language. A question like "show me Transition-flagged assets with an FCI under 15" or "what is the impact of removing this asset from Core" maps to a model query without requiring the user to understand the underlying formulation. The conversational layer makes the optimization accessible without simplifying it away.