Over the past several months, I’ve had the privilege of meeting many entrepreneurs who are raising funds for new medical device startups. One common VC refrain they hear: “Come back when you have more data.” Often this is a VC’s way of saying no without saying no. Sometimes, though, the VC means exactly what they say: the current proof-of-concept data hasn’t proved the concept. And it’s not just startups that face this challenge; I’ve seen weak proof-of-concept data in bigger companies too. How can you make sure your proof-of-concept data is solid?
Let’s start by defining proof-of-concept. A proof-of-concept is the outcome of an experiment demonstrating that a prototype, used on a model of the clinical condition, meets the performance requirements that will achieve market success. The goal of a proof-of-concept is to enable useful conclusions about the eventual performance of the final product in the real clinical environment.
Where do proof-of-concept experiments usually fall down?
Over-simplified clinical or tissue model
At InfraReDx, we designed a cardiovascular catheter to assess plaques within the wall of a blood-filled coronary artery. Early proof-of-concept experiments were performed on relatively homogeneous samples of formalin-fixed aortic tissue with no blood. Because the samples were relatively homogeneous (each was either highly lipidic, calcified, fibrotic, or completely normal), we couldn’t know how our prototype would perform on most real-world plaques. Formalin fixation changed the tissue’s optical properties. And with no blood present, we never simulated a blood-filled artery at all. So no real conclusion could be drawn about our product’s eventual success.
Some months later we repeated the experiment, this time with fresh ex-vivo aortic tissue, a wider variety of tissue types, and measurements through bovine blood. Just a few changes to the experiment and we addressed real-world plaques, real-world optical properties, and assessment through blood. These results convinced me that our technology would ultimately work (and it did).
Critical technology missing from the prototype device
For a new handheld laser at Candela, we aligned novel optical components on an optical table to form a laser, energized the setup a few times and achieved the desired laser output. Based on the results of that simple experiment, we spent millions of dollars on product development and a splashy market launch. Six months after the planned ship date we realized that the tolerance requirements were beyond state-of-the-art manufacturing capabilities, and we did not yet have a plan to modify the design.
Sample size too small
The Candela handheld laser proof-of-concept was an “n of 1” experiment. At the time, we assumed that the unique optical elements in the design, such as novel optical coatings, could be reproduced as-needed, as the product volumes scaled. Unfortunately, this turned out to be a poor assumption, which we would have known sooner if our “n” was larger.
How can you improve your proof-of-concept experiments?
A proof-of-concept experiment attempts, at an early stage, to simulate the critical elements of the ultimate product in real clinical use. No early-stage simulation can be perfect, so judgement must be applied to determine if the experiment is good enough. Here are some questions to help you assess your own proof-of-concept experiment.
1. How accurately does your clinical or tissue model portray all elements of in vivo human use? List the ways that the biological model differs from the ultimate real-life use. For a transcatheter mitral valve implant, a healthy pig heart differs greatly from the weakened heart of the human mitral regurgitation patient. Differences in the mitral annulus, the chordae, the ventricular wall, and the motion environment may be critically important to a proof-of-concept experiment. Differences are okay, as long as, for each one, you can articulate why it doesn’t affect the conclusion of your experiment. If your rationale wouldn’t stand up to an expert in the field, your proof-of-concept model probably needs work.
2. Does your model include the most confounding conditions? For a diabetes screening tool, accurately classifying normal and already-diagnosed-diabetic patients isn’t enough; you need to show how your technology works on the in-between pre-diabetic patients. For a laparoscopic surgery tool, variations in anatomy are important, so you need to show that your technology works in close-to-worst-case anatomy (5th percentile).
3. Does your sample size show that the results are real, not chance? For diagnostic devices, the proof-of-concept should enable statistically meaningful accuracy measurements. For treatment devices, the proof-of-concept must show that the results are different from chance.
4. What critical components or manufacturing processes are not yet defined for the final product? For each, what’s the risk level associated with the plan to develop those components? It’s okay to have undefined low-development-risk components and processes, but problematic to have high development risks.
5. Can your procedure be performed by a below-average-skill physician in the target market? Typically, new devices should be designed for the 25th percentile physician (or a nurse or tech if that’s your target market). If your proof-of-concept experiment can only be performed by your superstar collaborator, explain how the real product will be different. If you can’t come up with a reasonable explanation, you have an issue.
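The sample-size question in point 3 can be made concrete with a quick back-of-the-envelope calculation. The sketch below (plain Python, using hypothetical success counts, not data from any of the devices above) computes an exact one-sided binomial p-value against a 50% chance rate. It shows how the same observed 80% success rate is indistinguishable from chance in a small study but convincing once the sample size doubles:

```python
from math import comb

def binomial_p_value(successes: int, trials: int, p_null: float = 0.5) -> float:
    """One-sided exact binomial p-value: the probability of observing at
    least `successes` out of `trials` if the true rate were `p_null`."""
    return sum(
        comb(trials, k) * p_null**k * (1 - p_null) ** (trials - k)
        for k in range(successes, trials + 1)
    )

# Hypothetical example: 8 successes in 10 treated cases vs. 50% chance.
print(round(binomial_p_value(8, 10), 4))   # 0.0547 -- not significant at 0.05

# The same 80% success rate with double the sample size.
print(round(binomial_p_value(16, 20), 4))  # 0.0059 -- distinguishable from chance
```

A proper study design would use a prospective power calculation rather than this after-the-fact check, but even this simple arithmetic shows why an “n of 1” experiment can never rule out luck.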
A great proof-of-concept experiment demonstrates the value that has been created by your early work. An oversimplified clinical model may be useful to rapidly rule out a technical approach (if it doesn’t work in the simple model, it won’t work in the real world), but don’t confuse a rule-out experiment with a proof-of-concept. It’s okay to perform two or three separate experiments for a complete proof-of-concept (e.g. one for efficacy and one for safety). Ultimately, the value of a proof-of-concept experiment, like beauty, is in the eye of the beholder. Make sure your proof-of-concept won’t end up in the Museum of Bad Art.
4 thoughts on “Come Back When You Have More Data”
Good observations. I especially liked the “design down to the below-average user” comment. How often have we seen this rule broken? Feel like I should post this on the I.T. blogs which cover crappy UX design…you would have them by the first mention of aortic tissue.
Once again, nice post.
And finding those “below-average users” to help with product testing is always a challenge.
Jay, Good one – Dan