Hey Jake! Sorry for the late response.
Yeah it's a very nuanced problem.
On the surface, parametric tests produce smaller p-values because they can leverage parametric assumptions. For example, if we know our data are normal, the "only" things that can change are the mean, standard deviation, and sample size. Non-parametric tests, on the other hand, allow essentially anything about the distribution to vary. Because their assumptions are more relaxed, they're less powerful.
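Here's a quick sketch of that power gap using only the standard library (the specific data, seed, and one-sided hypotheses are just illustrative): a one-sample z-test, which leans on normality, versus a sign test, which only looks at whether each observation is above zero.

```python
import math
import random

random.seed(0)

# Simulated sample: 50 draws from N(1, 1), so the true mean is clearly > 0.
data = [random.gauss(1.0, 1.0) for _ in range(50)]
n = len(data)

# --- Parametric: one-sided z-test of H0: mean = 0 ---
# Uses the full data (mean and spread); p-value from the normal CDF via erf.
mean = sum(data) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
z = mean / (sd / math.sqrt(n))
p_param = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

# --- Non-parametric: one-sided sign test of H0: median = 0 ---
# Uses only the signs of the observations, discarding magnitude info,
# so it makes no distributional assumption but pays for it in power.
pos = sum(x > 0 for x in data)
p_sign = sum(math.comb(n, k) for k in range(pos, n + 1)) / 2 ** n

print(f"z-test p = {p_param:.2e}, sign test p = {p_sign:.2e}")
```

On data like this, where the normality assumption actually holds, the z-test's p-value comes out far smaller than the sign test's, which is the "less relaxed assumptions, more power" trade-off in action.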
The prudent thing to do is check whether your data meet the given parametric test's assumptions. If any assumption is even slightly violated, it's safer to fall back to a non-parametric test.
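As a first pass at the normality assumption, you can eyeball sample skewness and excess kurtosis, both of which should be near 0 for normal data. This is a crude screen, not a formal test (Shapiro-Wilk or a QQ plot is the usual route), and the cutoff below is an arbitrary illustrative choice:

```python
import math
import random

def rough_normality_check(data, tol=1.0):
    """Crude normality screen: sample skewness and excess kurtosis
    are both near 0 for normal data. `tol` is an illustrative cutoff,
    not a principled threshold."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / n)
    skew = sum(((x - mean) / sd) ** 3 for x in data) / n
    kurt = sum(((x - mean) / sd) ** 4 for x in data) / n - 3  # excess kurtosis
    return abs(skew) < tol and abs(kurt) < tol

random.seed(1)
normal_ish = [random.gauss(0, 1) for _ in range(500)]   # should pass
skewed = [random.expovariate(1.0) for _ in range(500)]  # exponential: skew = 2

print(rough_normality_check(normal_ish))
print(rough_normality_check(skewed))
```

If you have SciPy handy, `scipy.stats.shapiro` gives you an actual hypothesis test for normality instead of this back-of-the-envelope version.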
Getting into assessing the impact of specific violations is super advanced stats and well above my pay grade.
Hope this helps!