I thought I should share the solution I arrived at for anyone else who might wrestle with this problem. My original question contains some extraneous information. Essentially, my problem was this: I know the distribution of a system output variable. I can fully define how the system inputs are transformed into the output. I also know how all of my inputs are distributed except for one variable. How do I derive the input distribution that best fits my output data and my system model?
My solution was to develop an optimization process that begins with a candidate distribution for the unknown input. Many random observations are drawn from this distribution, and the output value is calculated for each. The goodness of fit between the simulated output data and the known output distribution is measured with the Sum of Squares Due to Error (SSE) defined here. Specifically, the normalized height of each histogram bar is compared against the height of the probability density function (PDF) at that bin center. Before making the comparison, care must be taken that the histogram bars and the PDF have consistent heights. I did this by scaling the PDF so that its heights at the histogram bin centers sum to 1, just as the bar heights of a histogram with 'Normalization' set to 'probability' do.
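To make the comparison concrete, here is a minimal MATLAB sketch of the objective function. Everything specific in it (the lognormal candidate family with parameters theta, the second input's distribution, and the function handles sysModel and targetPdf) is a placeholder assumption for illustration, not my actual system.

```matlab
function sse = fitObjective(theta, nSamples, sysModel, targetPdf)
% FITOBJECTIVE  SSE between a simulated output histogram and the known PDF.
% theta parameterizes the candidate distribution for the unknown input
% (assumed lognormal here: theta(1) = mu, theta(2) = sigma).

    % Draw random observations from the candidate input distribution.
    xUnknown = lognrnd(theta(1), theta(2), nSamples, 1);

    % Known input(s), drawn from their known distributions (placeholder).
    xKnown = normrnd(10, 2, nSamples, 1);

    % Push every observation through the system model.
    y = sysModel(xUnknown, xKnown);

    % Histogram of the simulated output; bar heights sum to 1.
    [counts, edges] = histcounts(y, 'Normalization', 'probability');
    centers = (edges(1:end-1) + edges(2:end)) / 2;

    % Evaluate the known output PDF at the bin centers and rescale it so
    % its heights also sum to 1, making the two sets of heights comparable.
    pdfVals = targetPdf(centers);
    pdfVals = pdfVals / sum(pdfVals);

    % Sum of squares due to error between histogram and scaled PDF.
    sse = sum((counts - pdfVals).^2);
end
```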
The goal of the optimization process is to minimize the SSE, but the objective function is stochastic, so it doesn't play nicely with gradient-based solvers like fmincon(). patternsearch() seems to be the way to go.
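Putting it together, a driver along these lines might look like the following; the system model, target PDF, starting point, and bounds below are made up for illustration.

```matlab
% Placeholder system model and known output PDF, for illustration only.
sysModel  = @(u, k) u .* k;              % hypothetical input-to-output map
targetPdf = @(y) lognpdf(y, 3, 0.5);     % hypothetical known output PDF

nSamples = 1e4;
obj = @(theta) fitObjective(theta, nSamples, sysModel, targetPdf);

theta0 = [0, 1];       % initial guess [mu, sigma] for the unknown input
lb = [-Inf, eps];      % keep sigma positive
ub = [ Inf, Inf];
opts = optimoptions('patternsearch', 'Display', 'iter');

% Minimize the stochastic SSE with pattern search
% (requires the Global Optimization Toolbox).
thetaBest = patternsearch(obj, theta0, [], [], [], [], lb, ub, [], opts);
```

Increasing nSamples makes each SSE evaluation less noisy, at the cost of a slower objective.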
I came to view this problem as the reverse of a traditional distribution fitting problem. Instead of fitting a curve to data, I was trying to fit simulated data to a curve.
I hope to take some time in the near future to simplify and generalize my code so that I can share it in this post.