Models & Modeling

If you've ever wondered, "How does Josh's mind work?" this is the episode for you. We talk about what models are, why they matter, the model Josh uses for building mental models, and, over the course of a wide-ranging conversation, how it all applies to cycling.

Got a question you’d like to ask? Text or leave a voicemail at the Marginal Gains Hotline: +1-317-343-4506, or just leave a comment on this post!

Subscribe using your favorite podcast platform (but be sure to rate and review us on Apple Podcasts).


Comments


  • Osman Isvan

    Regarding the evolution of our mental models of rolling resistance, I believe that “impedance losses” were known all along and called suspension losses. It is just that the retarding force they generate on rough surfaces was considered a distinct category external to the tire, whereas rolling resistance was defined explicitly for the tire rolling on a smooth surface. What changed in the last 10 years is that we began paying more attention to average speed on typical surfaces as opposed to ultimate speed on smooth surfaces.
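
To make the bookkeeping distinction above concrete, here is a back-of-the-envelope sketch in Python. Every number in it (mass, speed, and especially the Crr and impedance values) is made up for illustration, not a measurement:

```python
# A sketch of the two bookkeeping conventions, with illustrative numbers only.
m, g, v = 85.0, 9.81, 30 / 3.6       # rider+bike mass (kg), gravity, 30 km/h in m/s

crr_smooth = 0.0040                   # hypothetical drum-test Crr: the tire alone
crr_impedance = 0.0030                # hypothetical extra losses on a rough road

# Older convention: report only the smooth-surface tire loss.
p_tire = crr_smooth * m * g * v
# Newer convention: fold the surface-dependent term into an effective Crr.
p_effective = (crr_smooth + crr_impedance) * m * g * v

print(f"tire-only rolling loss:  {p_tire:.0f} W")
print(f"effective loss on rough: {p_effective:.0f} W")
```

Under either convention the physics is the same; what changed is whether the surface-dependent term is charged to the tire or kept on a separate ledger.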


  • John Morrell

    p.s. I was told during my master's thesis that Einstein advised his graduate students to stay away from tribology – it was too hard. lol.


  • Steve

    Is there a link you can provide to an article that outlines Jim Martin’s definitive cycling performance model? A quick Google search takes me to a long list of his publications (https://scholar.google.com/citations?user=1lSY7Q0AAAAJ&hl=en). I’m sure all of these are quite interesting, but most are quite expensive to buy, so I’d like to employ my own “best bang for the buck” model in selecting one. Many thanks!


  • Robert Chung

    This is a complicated question, and the best I can do is give a simple response, beginning with a twist on Karlin: the purpose of models is to sharpen the questions, but the test of a model is how well it fits the data. That said, the ways that a model can fail to fit the data are numerous: there’s at least measurement error, parameter estimation error, and model specification error. Worse, a prediction can err even if the model is correct, if the putative inputs were wrong. So, in a large sense, coming within 1% of a model prediction is suggestive, but unless you have some idea of why it failed you won’t know whether it was a model specification failure or a measurement error. However, you’re exactly right that errors are clues, because model specification errors manifest differently than measurement errors. So, like NTSB crash investigators, we need to do forensic analysis of a failure to figure out what went wrong and what we can learn from it.

    I know that’s an unsatisfying response, but I hope it’s a helpful one. I’ll add that VE profiles are a type of error analysis; that’s why I envisioned them mostly as a diagnostic for fit, and for deviations from fit, from a particular model of cycling drag.
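
A small simulation can illustrate the first point in the comment above, that specification errors and measurement errors leave different fingerprints in the residuals. This is a hedged sketch with synthetic data and arbitrary coefficients, not anyone's actual analysis: a correctly specified model fit to noisy data leaves unstructured residuals, while a misspecified model fit to clean data leaves residuals that trend with speed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "truth": power demand with a rolling (~v) and an aero (~v^3) term.
v = np.sort(rng.uniform(8.0, 14.0, 200))    # speeds, m/s
p_true = 3.3 * v + 0.19 * v**3              # watts; constants are illustrative

def fit_residuals(X, y):
    """Ordinary least squares; return residuals y - X @ beta_hat."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Case 1: correct model, noisy measurements -> residuals look like noise.
p_noisy = p_true + rng.normal(0.0, 5.0, v.size)
r_meas = fit_residuals(np.column_stack([v, v**3]), p_noisy)

# Case 2: misspecified model (aero term only), clean data -> residuals trend.
r_spec = fit_residuals(np.column_stack([v**3]), p_true)

for name, r in [("measurement error", r_meas), ("specification error", r_spec)]:
    print(f"{name}: residual/speed correlation = {np.corrcoef(v, r)[0, 1]:+.2f}")
```

The structured residuals in the second case are the forensic clue: they point at a missing term rather than at noisy instruments.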
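
And since VE profiles come up: here is a stripped-down sketch of the virtual-elevation idea, not Robert's actual tooling, with synthetic flat-road data and illustrative constants. With correct parameters the virtual elevation of a flat ride stays flat; with a mis-estimated CdA it drifts systematically, which is exactly the kind of diagnostic deviation the comment describes.

```python
import numpy as np

g, m, rho = 9.81, 85.0, 1.2    # gravity, total mass (kg), air density (kg/m^3)

def virtual_elevation(power, v, dt, crr, cda):
    """Integrate the 'virtual slope' implied by power and speed.

    Rearranges a simple drag model, P = m*g*v*(s + Crr) + 0.5*rho*CdA*v^3
    + m*a*v, to solve for the slope s, then accumulates s*dx into elevation.
    """
    a = np.gradient(v, dt)                    # acceleration, m/s^2
    s = (power / (m * g * v)                  # total specific retardation
         - crr                                # rolling resistance
         - rho * cda * v**2 / (2 * m * g)     # aero drag
         - a / g)                             # kinetic-energy term
    return np.cumsum(s * v * dt)              # virtual elevation, m

# Synthetic flat-road data: speed wobbling around 10 m/s, power to match.
dt = 1.0
t = np.arange(0.0, 600.0, dt)
v = 10.0 + 0.5 * np.sin(t / 30.0)
a = np.gradient(v, dt)
true_crr, true_cda = 0.004, 0.32
power = m * g * v * true_crr + 0.5 * rho * true_cda * v**3 + m * a * v

ve_good = virtual_elevation(power, v, dt, crr=0.004, cda=0.32)
ve_bad = virtual_elevation(power, v, dt, crr=0.004, cda=0.25)  # CdA too low
print(f"VE drift with correct CdA: {ve_good[-1]:+.1f} m")
print(f"VE drift with CdA too low: {ve_bad[-1]:+.1f} m")       # steady climb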


  • Tony Geller

    Interesting show with many things to ponder. One thing Josh said sounded off to me. Speaking of his implementation of the Martin model, he said something to the effect that he was able to calculate TT times to within 1%, so there’s still something missing. I think it’s important to draw a distinction between model-form error and parameter-estimation error. The fact that the predictions came within 1% says to me that it is less likely that the physics model is missing something of any importance than that he is seeing the limit of his parameter estimation. I don’t know how sophisticated his method is for determining and applying CdA, Crr, rho, wind speed and direction, and drivetrain losses along the route, but I’d be surprised if these were known to within 1%, given their variability on the road. For example, even if extremely accurate measurements of CdA were determined for different body positions (after all, no one can maintain the exact same head position for the duration of a time trial), the exact distribution of those positions couldn’t be known. Similarly for all the other parameters in the Martin model.

    It would be interesting to know the distribution of model errors for different predictions of TT performance using measured power. A narrow distribution would indicate that the effect of errors in parameter estimation is limited to the ~1% quoted by Josh. A wide distribution would say the impact of parameter errors is greater, with the more accurate predictions being the result of errors canceling and the larger ones the result of errors adding. I suspect it’s possible for someone with a greater data analysis background than I (Robert Chung) to determine the magnitude of these parameter errors from the model error distribution function.
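
For readers who want to see the shape of the model being discussed, here is a stripped-down power balance in the spirit of the Martin model: aero, rolling, gravity, kinetic-energy, and drivetrain-efficiency terms. The published Martin et al. model also handles wind yaw, wheel rotational inertia, and bearing friction, and every constant below is illustrative rather than measured:

```python
import math

def required_power(v, grade=0.0, accel=0.0, *, m=85.0, crr=0.004, cda=0.32,
                   rho=1.2, v_wind=0.0, eta=0.976, g=9.81):
    """Rider power needed at ground speed v (m/s): a simplified power balance.

    Aero drag uses air speed (ground speed plus headwind); grade is rise/run.
    """
    v_air = v + v_wind                           # headwind positive
    aero = 0.5 * rho * cda * v_air**2 * v        # aero power, W
    rolling = crr * m * g * math.cos(math.atan(grade)) * v
    gravity = m * g * math.sin(math.atan(grade)) * v
    kinetic = m * accel * v                      # acceleration term
    return (aero + rolling + gravity + kinetic) / eta

# Steady 40 km/h on a flat road in still air:
print(f"{required_power(40 / 3.6):.0f} W")
```

Inverting this numerically, solving for v at a given power and integrating along a course, is the usual way TT time predictions of the kind Josh described are produced.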
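
Tony's proposed experiment can also be sketched directly: propagate plausible parameter-estimation errors through the model via Monte Carlo and look at the spread of predicted TT times. The uncertainty magnitudes below are guesses for illustration, and the course is idealized as flat and windless:

```python
import numpy as np

rng = np.random.default_rng(1)
g, m, dist, power = 9.81, 85.0, 40_000.0, 300.0   # 40 km TT at a steady 300 W

def tt_time(crr, cda, rho):
    """Time for a flat TT at fixed power: solve the power balance
    (0.5*rho*CdA*v^2 + Crr*m*g) * v = power for v by bisection."""
    lo, hi = 1.0, 30.0
    for _ in range(60):
        v = 0.5 * (lo + hi)
        if (0.5 * rho * cda * v * v + crr * m * g) * v < power:
            lo = v
        else:
            hi = v
    return dist / v

# "True" parameters vs predictions made with plausibly mis-estimated ones.
t_true = tt_time(crr=0.004, cda=0.30, rho=1.20)
crr = 0.004 * (1 + rng.normal(0, 0.10, 5000))   # ~10% Crr uncertainty (a guess)
cda = 0.30 * (1 + rng.normal(0, 0.03, 5000))    # ~3% CdA uncertainty (a guess)
rho = 1.20 * (1 + rng.normal(0, 0.02, 5000))    # ~2% density uncertainty
err = np.array([tt_time(c, a, r) for c, a, r in zip(crr, cda, rho)]) / t_true - 1

print(f"prediction error: mean {100*err.mean():+.2f}%, sd {100*err.std():.2f}%")
```

Comparing a spread like this against the distribution of real prediction errors would indicate whether the ~1% Josh sees is parameter noise or something missing from the model form.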

