Methodology
A rigorous, transparent framework for evaluating femtech products across four weighted dimensions — reviewed by our team, not automated software.
We evaluate femtech and women's health technology — digital products and services that women use to manage, track, or improve their health. This includes apps, wearables, digital therapeutics, telehealth platforms, and health education tools.
How We Classify Products
Before scoring, every product is classified as Medical or Wellness. All products use the same scoring formula, but classification affects how we interpret evidence — particularly for the Accuracy dimension.
A product is Medical if any of the following are true:
- Has FDA clearance, approval, or equivalent
- Makes diagnostic, prognostic, or treatment claims
- Is marketed as a medical device or digital therapeutic
- Requires a prescription or clinical referral
A product is Wellness if all of the following apply:
- Makes no clinical or diagnostic claims
- Is positioned as general health, tracking, or education
- Is available directly to consumers without clinical gatekeeping

Typical examples: period trackers, health education platforms, wellness wearables.
When a product has both wellness and medical features, we classify based on the primary marketed use case. When in doubt, we classify as Medical — the higher standard protects users.
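Applied mechanically, the rule above is an any-match test with Medical as the fallback when the primary use case is unclear. A minimal sketch (the criterion names and the `ambiguous` flag are illustrative, not an internal schema):

```python
# Criteria that make a product "Medical" if ANY one is true.
MEDICAL_CRITERIA = (
    "fda_clearance",         # FDA clearance, approval, or equivalent
    "clinical_claims",       # diagnostic, prognostic, or treatment claims
    "marketed_as_device",    # marketed as a medical device or digital therapeutic
    "requires_prescription", # prescription or clinical referral required
)

def classify(product: dict) -> str:
    """Medical if any medical criterion is met, or when in doubt; otherwise Wellness."""
    if any(product.get(c, False) for c in MEDICAL_CRITERIA):
        return "Medical"
    if product.get("ambiguous", False):  # mixed features, unclear primary use case
        return "Medical"                 # the higher standard protects users
    return "Wellness"
```

For example, `classify({"fda_clearance": True})` returns `"Medical"`, while an empty, unambiguous profile returns `"Wellness"`.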
What Information We Use
We only use publicly available information. We do not conduct proprietary testing, request internal data from companies, or rely on information provided by companies for the purpose of being reviewed.
Reviewed by our team — not automated software
Every evaluation is conducted by Matropia's research team. We manually read privacy policies, review published studies, watch product marketing, and investigate company leadership. A human reads the evidence and applies the criteria — no automated scraper assigns these scores.
Security & Privacy
How well does the product protect user data?
We evaluate what data is collected, how it's used and shared, whether users have meaningful control, and how seriously the company takes security. Femtech products often handle deeply sensitive data — reproductive health, biometrics, mental health, sexual activity — so this dimension carries significant weight.
What we look for
- Data minimization — does the product collect only what's necessary?
- Privacy policy clarity — specific, readable, non-vague?
- User control — can users opt out, request deletion, close their account?
- Third-party sharing — limited and specific, or broad and open-ended?
- Security infrastructure — encryption, MFA, compliance standards (SOC 2, HIPAA)
- Track record — known breaches, regulatory actions, public controversies?
How we find it
- Read the full privacy policy, terms of service, and in-app disclosures
- Identify every data type collected and assess whether it's proportionate
- Check for data broker relationships, ad targeting language, and broad consent clauses
- Review FAQ and help docs for plain-language privacy explanations
- Search for reported breaches, regulatory actions, and public incidents
Accuracy
Are health claims supported by evidence?
We assess whether the product's health claims are backed by real evidence — peer-reviewed research, appropriate regulatory status, and credible clinical partnerships. Marketing language alone does not count as evidence.
What we look for
- Regulatory status — FDA cleared, approved, or registered? Does it match the claims made?
- Clinical validation — peer-reviewed studies, study design quality, sample size, population relevance
- Medical partnerships — substantive relationships with hospitals or academic institutions
- Independent reviews — third-party assessments of accuracy or reliability
- Transparency — does the product clearly disclose what it can and cannot do?
How we find it
- Search FDA databases and international regulatory registries
- Search PubMed and Google Scholar for peer-reviewed studies using the product or company name
- Search for clinical partnerships, hospital affiliations, and research collaborations
- Review independent product comparison sites for accuracy-related findings
- Note user complaints about accuracy in app store reviews and public forums
Foundation
Is the company credible, transparent, and mission-aligned?
We evaluate the organizational foundation of the company — leadership credibility, mission authenticity, advisory structures, and whether their marketing reflects their stated values. Companies that are serious about women's health tend to show it.
What we look for
- Leadership experience — relevant backgrounds in women's health or healthcare?
- Mission clarity — specific, actionable commitment to women's health?
- Advisory board — advisors with backgrounds in women's health, research, or bioethics?
- Thought leadership — do leaders speak publicly on women's health topics?
- Marketing alignment — does marketing reflect respect, evidence, and inclusivity?
How we find it
- Review founder and leadership team bios on the company website and LinkedIn
- Read mission and vision statements for specificity and authenticity
- Search for speaking engagements, podcast appearances, and published thought leadership
- Review advisory board pages for relevant clinical or research expertise
- Analyze social media, ads, and product copy for values alignment or red flags
Equity
Does the product intentionally serve diverse users?
We evaluate how intentionally the product and company advance digital health equity across economic, geographic, language, racial, cultural, and identity-based lines. Equity doesn't happen by accident — it requires conscious, sustained effort.
What we look for
- Accessibility — multilingual support, plain language, screen-reader compatibility
- Representation — do imagery and branding reflect diversity in race, body size, age, and identity?
- Economic accessibility — insurance accepted? Sliding scale or low-income options?
- Community engagement — partnerships with underserved communities or health equity programs?
- Designed for real diverse users — evidence it works beyond an assumed default user?
How we find it
- Review the product interface and app store listing for accessibility features
- Evaluate website imagery, testimonials, and marketing materials for representation
- Check pricing pages, FAQ, and insurance info for economic accessibility signals
- Search for advocacy work, equity pledges, and community health partnerships
- Review user feedback for patterns related to who the product actually serves
Each product receives a SAFE Score from 0 to 100. Every dimension is scored out of 25 raw points, then multiplied by a weight to produce the final composite score.
| Dimension | Weight | Why |
|---|---|---|
| S — Security & Privacy | 35% | Core to consumer trust — femtech handles some of the most sensitive health data |
| A — Accuracy | 35% | Health decisions depend on reliable information — inaccurate claims cause real harm |
| F — Foundation | 15% | Company credibility matters but isn't the primary consumer deciding factor |
| E — Equity | 15% | Access and inclusion are essential — and often overlooked |
The Formula
SAFE Score = (S × 1.40) + (A × 1.40) + (F × 0.60) + (E × 0.60)
Maximum: (25×1.40) + (25×1.40) + (25×0.60) + (25×0.60) = 35 + 35 + 15 + 15 = 100
A multiplier of 1.40 means the dimension contributes up to 35 of the 100 possible points, i.e. a 35% weight; a multiplier of 0.60 contributes up to 15 points (15%).
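The formula is a straight weighted sum of the four raw dimension scores. A quick sketch in Python (illustrative only; the function name is ours):

```python
def safe_score(s: float, a: float, f: float, e: float) -> float:
    """Composite SAFE score from four raw dimension scores, each out of 25."""
    return s * 1.40 + a * 1.40 + f * 0.60 + e * 0.60

# A perfect product reaches exactly the maximum:
# safe_score(25, 25, 25, 25) == 35 + 35 + 15 + 15 == 100
```

With the worked example from the thresholds section, `safe_score(22, 24, 20, 7)` comes out to 80.6.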
Score Thresholds
A product can't score Excellent overall if it has a critical weakness in one dimension. Minimum dimension thresholds apply before a rating is assigned.
- Excellent: requires at least 15/25 in every dimension. If any dimension falls below 15, the rating is capped at Good.
- Good: requires at least 10/25 in every dimension. If any dimension falls below 10, the rating is capped at Poor.
- Poor: no minimum threshold; the score stands as calculated.
Example: A product scores S=22, A=24, F=20, E=7. The composite is 80.6 — Excellent range. But E=7 falls below the 15-point minimum for Excellent and the 10-point minimum for Good. The product is capped at Poor. The scorecard will show which dimension triggered the cap.
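The capping step reduces to two successive checks against the worst dimension score. A sketch under the thresholds stated above (function name is ours):

```python
def apply_caps(rating: str, dims: list[float]) -> str:
    """Downgrade a composite-derived rating when any dimension misses its tier minimum."""
    worst = min(dims)
    if rating == "Excellent" and worst < 15:
        rating = "Good"   # Excellent requires 15/25 in every dimension
    if rating == "Good" and worst < 10:
        rating = "Poor"   # Good requires 10/25 in every dimension
    return rating

# The example above: composite 80.6 is in Excellent range, but E = 7
# trips both caps in turn, so the final rating is Poor.
apply_caps("Excellent", [22, 24, 20, 7])  # → "Poor"
```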
Handling Missing Information
Absence of evidence is not the same as evidence of a problem. If we can't find information, we score conservatively — we don't assume the worst, but we don't assume the best either.
| Situation | How we score it |
|---|---|
| Evidence exists and is positive | Full or partial points based on quality |
| Evidence exists and is negative | Zero or negative points |
| Information is missing but should exist | Half the available points, rounded down |
| Information is not expected (N/A) | Marked N/A; points redistributed |
Every scorecard includes a "What We Couldn't Find" section listing any sub-criteria scored as missing or N/A, with an explanation of what we looked for.
Disputing a Score
If a manufacturer or user believes a score doesn't accurately reflect a product's current state, we offer a formal reconsideration process.
Submit a request
Email hello@matropia.com with the product name, the dimension(s) in question, and supporting evidence.
Evidence review
Our team reviews the evidence against the SAFE criteria and may request additional documentation.
Decision & update
If warranted, we update the scorecard and notify the requester. All decisions are documented for transparency.
Requests are typically reviewed within 14 business days. Score adjustments are based solely on evidence, not on who is asking.
Products that score 80 or above with no individual dimension below 15/25 earn the "Meets SAFE Standard" designation — strong performance across all four dimensions.
The badge appears on the product's scorecard and in the product directory, making it easy for consumers to identify products that have demonstrated a high standard of security, accuracy, organizational foundation, and equity.
How to earn it
Composite score of 80+
The weighted total across all four SAFE dimensions must be at least 80 out of 100.
No dimension below 15/25
A high overall score isn't enough — strong performance in every dimension is required. No critical weaknesses allowed.
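Both badge conditions can be checked together. A sketch reusing the published weights (function name is ours):

```python
def meets_safe_standard(s: float, a: float, f: float, e: float) -> bool:
    """Badge requires a composite of 80+ AND no dimension below 15/25."""
    composite = s * 1.40 + a * 1.40 + f * 0.60 + e * 0.60
    return composite >= 80 and min(s, a, f, e) >= 15

meets_safe_standard(22, 24, 20, 18)  # → True  (composite 87.2, worst dimension 18)
meets_safe_standard(22, 24, 20, 7)   # → False (composite 80.6, but E = 7 < 15)
```

Note the second product clears the 80-point bar yet still misses the badge, mirroring the capping rule in the thresholds section.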
Important: The SAFE Badge is an informational designation based on our standardized evaluation criteria, not a product endorsement. Consumers should always consider their individual needs.
Products can lose the badge if a re-evaluation reveals that standards are no longer met. Score changes are tracked publicly on each product's scorecard with a full explanation.
The short version
A thorough product evaluation means reading privacy policies, finding clinical studies, tracking down leadership bios, checking regulatory filings, and more — across dozens of sources per product.
We use AI as a research assistant: it fetches pages, reads documents, and surfaces relevant information so our team can focus on evaluation rather than data gathering. Our team then reads that evidence, applies the SAFE criteria, and makes every scoring decision themselves.
What AI does
Fetches and reads source documents
Given URLs — privacy policy, terms, app store listing, clinical studies — the AI reads each document and extracts information relevant to the SAFE criteria.
Searches for additional evidence
Beyond provided sources, the AI searches for clinical studies, regulatory filings, partnership announcements, breach reports, and other publicly available evidence.
Organizes findings by dimension
Findings are organized into the four SAFE dimensions so our team has a structured research summary rather than raw notes.
Flags missing information
When expected information can't be found, the AI flags it so our team can note it in the "What We Couldn't Find" section.
What AI does not do
Score products
AI surfaces information. Our team evaluates it and assigns every score.
Make judgment calls
Context and domain expertise matter. Human judgment is required for nuanced interpretation.
Access private information
The AI only reads publicly available information. No proprietary data or internal documents.
Publish scorecards
Publishing requires explicit human sign-off. Our team reviews everything before it goes live.
The Workflow
Our team collects sources
We identify key URLs — website, privacy policy, terms, app store pages, clinical studies, and other public documents.
AI researches the product
The AI reads all sources, searches the web for additional evidence, organizes findings, and flags gaps. The AI's role ends here.
Our team evaluates the evidence
Evaluators review research alongside original sources, apply SAFE sub-criteria, assign scores, and write the scorecard narrative.
Human sign-off and publication
The final scorecard is reviewed for completeness and accuracy. Only after explicit human approval is it published.
Why not fully automated?
Context requires judgment. A vague privacy policy might be unremarkable for an early-stage startup but alarming for a well-funded medical device company.
Trust requires accountability. When a human signs off, there's clear responsibility. Fully automated scores create an accountability gap.
The stakes are too high. Women are making real health decisions based on these evaluations. That warrants human review.
100% Independent
Matropia maintains complete independence from product manufacturers. We do not accept payment for reviews, and products cannot buy better scores. All evaluations are based solely on publicly available evidence and our standardized methodology.