Create Bayesian linear regression model object
To create a Bayesian vector autoregression (VARX) model for multivariate time series analysis, see bayesvarm.
PriorMdl = bayeslm(NumPredictors) creates a Bayesian linear regression model object (PriorMdl) composed of NumPredictors predictors, an intercept, and a diffuse, joint prior distribution for β and σ^{2}. PriorMdl is a template that defines the prior distributions and the dimensionality of β.
PriorMdl = bayeslm(NumPredictors,'ModelType',modelType) specifies the joint prior distribution modelType for β and σ^{2}. For this syntax, modelType can be:
'conjugate', 'semiconjugate', or 'diffuse' to create a standard Bayesian linear regression prior model
'mixconjugate', 'mixsemiconjugate', or 'lasso' to create a Bayesian linear regression prior model for predictor variable selection
For example, 'ModelType','conjugate' specifies conjugate priors for the Gaussian likelihood; that is, β|σ^{2} is Gaussian and σ^{2} is inverse gamma.
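Written out, this conjugate (normal-inverse-gamma) prior has the standard factored form, stated here using the Mu, V, A, and B hyperparameter names described later on this page:

```latex
\pi\left(\beta,\sigma^{2}\right)
  = \underbrace{N\!\left(\beta;\,\mathrm{Mu},\,\sigma^{2}\mathrm{V}\right)}_{\beta\mid\sigma^{2}}
    \times
    \underbrace{IG\!\left(\sigma^{2};\,\mathrm{A},\,\mathrm{B}\right)}_{\sigma^{2}}.
```

The exact parameterization of the inverse gamma factor follows the convention used in Analytically Tractable Posteriors.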
PriorMdl = bayeslm(NumPredictors,'ModelType',modelType,Name,Value) uses additional options specified by one or more name-value pair arguments. For example, you can specify whether to include a regression intercept or specify additional options for the joint prior distribution modelType.
If you specify 'ModelType','empirical', you must also specify the BetaDraws and Sigma2Draws name-value pair arguments. BetaDraws and Sigma2Draws characterize the respective prior distributions.
If you specify 'ModelType','custom', you must also specify the LogPDF name-value pair argument. LogPDF completely characterizes the joint prior distribution.
Consider the multiple linear regression model that predicts the US real gross national product (GNPR) using a linear combination of the industrial production index (IPI), total employment (E), and real wages (WR).
$${\text{GNPR}}_{t}={\beta}_{0}+{\beta}_{1}{\text{IPI}}_{t}+{\beta}_{2}{\text{E}}_{t}+{\beta}_{3}{\text{WR}}_{t}+{\epsilon}_{t}.$$
For all $$t$$, $${\epsilon}_{t}$$ is a series of independent Gaussian disturbances with a mean of 0 and variance $${\sigma}^{2}$$.
Suppose that the regression coefficients $$\beta =[{\beta}_{0},...,{\beta}_{3}{]}^{\prime}$$ and the disturbance variance $${\sigma}^{2}$$ are random variables, and their prior values and distribution are unknown. In this case, use the noninformative Jeffreys prior: the joint prior distribution is proportional to $$1/{\sigma}^{2}$$.
These assumptions and the data likelihood imply an analytically tractable posterior distribution.
Create a diffuse prior model for the linear regression parameters, which is the default model type. Specify the number of predictors p.
p = 3;
Mdl = bayeslm(p)
Mdl = 
  diffuseblm with properties:

    NumPredictors: 3
        Intercept: 1
         VarNames: {4x1 cell}

           |  Mean   Std        CI95       Positive       Distribution      
----------------------------------------------------------------------------
 Intercept |   0     Inf   [ NaN,  NaN]     0.500    Proportional to one
 Beta(1)   |   0     Inf   [ NaN,  NaN]     0.500    Proportional to one
 Beta(2)   |   0     Inf   [ NaN,  NaN]     0.500    Proportional to one
 Beta(3)   |   0     Inf   [ NaN,  NaN]     0.500    Proportional to one
 Sigma2    |  Inf    Inf   [ NaN,  NaN]     1.000    Proportional to 1/Sigma2
Mdl is a diffuseblm Bayesian linear regression model object representing the prior distribution of the regression coefficients and disturbance variance. bayeslm displays a summary of the prior distributions at the command line. Because the prior is noninformative and the model does not contain data, the summary is trivial.
If you have data, then you can estimate characteristics of the posterior distribution by passing the prior model Mdl and data to estimate.
Consider the linear regression model in Default Diffuse Prior Model. Assume these prior distributions:
$$\beta |{\sigma}^{2}\sim {N}_{4}(M,V)$$. $$M$$ is a 4-by-1 vector of means, and $$V$$ is a scaled 4-by-4 positive definite covariance matrix.
$${\sigma}^{2}\sim IG(A,B)$$. $$A$$ and $$B$$ are the shape and scale, respectively, of an inverse gamma distribution.
These assumptions and the data likelihood imply a normal-inverse-gamma semiconjugate model. The conditional posteriors are conjugate to the prior with respect to the data likelihood, but the marginal posterior is analytically intractable.
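For reference, the tractable conditional posteriors have the standard normal-inverse-gamma forms. In the following sketch, M, V, A, and B are the hyperparameters above, X is the T-by-4 design matrix (including the intercept column), y is the response vector, and the inverse gamma scale follows the convention used on this page:

```latex
\beta \mid \sigma^{2}, y, x \sim N_{4}\!\left(\bar{M},\bar{V}\right),
\qquad
\bar{V} = \left(V^{-1} + \sigma^{-2}X^{\prime}X\right)^{-1},
\qquad
\bar{M} = \bar{V}\left(V^{-1}M + \sigma^{-2}X^{\prime}y\right),
```

```latex
\sigma^{2} \mid \beta, y, x \sim
IG\!\left(A + \frac{T}{2},\;
\left[\frac{1}{B} + \frac{1}{2}\left(y - X\beta\right)^{\prime}\left(y - X\beta\right)\right]^{-1}\right).
```

Samplers such as the Gibbs sampler alternate between these two conditionals to approximate the intractable marginal posterior.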
Create a normal-inverse-gamma semiconjugate prior model for the linear regression parameters. Specify the number of predictors p.
p = 3;
Mdl = bayeslm(p,'ModelType','semiconjugate')
Mdl = 
  semiconjugateblm with properties:

    NumPredictors: 3
        Intercept: 1
         VarNames: {4x1 cell}
               Mu: [4x1 double]
                V: [4x4 double]
                A: 3
                B: 1

           |  Mean     Std           CI95          Positive    Distribution    
-------------------------------------------------------------------------------
 Intercept |   0       100    [-195.996, 195.996]   0.500    N (0.00, 100.00^2)
 Beta(1)   |   0       100    [-195.996, 195.996]   0.500    N (0.00, 100.00^2)
 Beta(2)   |   0       100    [-195.996, 195.996]   0.500    N (0.00, 100.00^2)
 Beta(3)   |   0       100    [-195.996, 195.996]   0.500    N (0.00, 100.00^2)
 Sigma2    | 0.5000   0.5000  [ 0.138,  1.616]      1.000    IG(3.00, 1)
Mdl is a semiconjugateblm Bayesian linear regression model object representing the prior distribution of the regression coefficients and disturbance variance. bayeslm displays a summary of the prior distributions at the command line. For example, the elements of Positive represent the prior probability that the corresponding parameter is positive.
If you have data, then you can estimate characteristics of the marginal or conditional posterior distribution by passing the prior model Mdl and data to estimate.
Consider the linear regression model in Default Diffuse Prior Model. Assume these prior distributions:
$$\beta |{\sigma}^{2}\sim {N}_{4}(M,{\sigma}^{2}V)$$. $$M$$ is a 4-by-1 vector of means, and $$V$$ is a scaled 4-by-4 positive definite covariance matrix. Suppose you have prior knowledge that $$M={\left[\begin{array}{cccc}-20& 4& 0.1& 2\end{array}\right]}^{\prime}$$ and $$V$$ is the identity matrix.
$${\sigma}^{2}\sim IG(A,B)$$. $$A$$ and $$B$$ are the shape and scale, respectively, of an inverse gamma distribution.
These assumptions and the data likelihood imply a normal-inverse-gamma conjugate model.
Create a normal-inverse-gamma conjugate prior model for the linear regression parameters. Specify the number of predictors p and set the regression coefficient names to the corresponding variable names.
p = 3;
Mdl = bayeslm(p,'ModelType','conjugate','Mu',[-20; 4; 0.1; 2],'V',eye(4),...
    'VarNames',["IPI" "E" "WR"])
Mdl = 
  conjugateblm with properties:

    NumPredictors: 3
        Intercept: 1
         VarNames: {4x1 cell}
               Mu: [4x1 double]
                V: [4x4 double]
                A: 3
                B: 1

           |  Mean     Std          CI95          Positive     Distribution     
--------------------------------------------------------------------------------
 Intercept |  -20     0.7071  [-21.413, -18.587]   0.000    t (-20.00, 0.58^2, 6)
 IPI       |   4      0.7071  [  2.587,   5.413]   1.000    t (4.00, 0.58^2, 6)
 E         | 0.1000   0.7071  [ -1.313,   1.513]   0.566    t (0.10, 0.58^2, 6)
 WR        |   2      0.7071  [  0.587,   3.413]   0.993    t (2.00, 0.58^2, 6)
 Sigma2    | 0.5000   0.5000  [  0.138,   1.616]   1.000    IG(3.00, 1)
Mdl is a conjugateblm Bayesian linear regression model object representing the prior distribution of the regression coefficients and disturbance variance. bayeslm displays a summary of the prior distributions at the command line. Although bayeslm assigns names to the intercept and disturbance variance, all other coefficients have the specified names.
By default, bayeslm sets the shape and scale to 3 and 1, respectively. Suppose you have prior knowledge that the shape and scale are 5 and 2.
Set the prior shape and scale of $${\sigma}^{2}$$ to their assumed values.
Mdl.A = 5;
Mdl.B = 2
Mdl = 
  conjugateblm with properties:

    NumPredictors: 3
        Intercept: 1
         VarNames: {4x1 cell}
               Mu: [4x1 double]
                V: [4x4 double]
                A: 5
                B: 2

           |  Mean     Std          CI95          Positive      Distribution      
----------------------------------------------------------------------------------
 Intercept |  -20     0.3536  [-20.705, -19.295]   0.000    t (-20.00, 0.32^2, 10)
 IPI       |   4      0.3536  [  3.295,   4.705]   1.000    t (4.00, 0.32^2, 10)
 E         | 0.1000   0.3536  [ -0.605,   0.805]   0.621    t (0.10, 0.32^2, 10)
 WR        |   2      0.3536  [  1.295,   2.705]   1.000    t (2.00, 0.32^2, 10)
 Sigma2    | 0.1250   0.0722  [  0.049,   0.308]   1.000    IG(5.00, 2)
bayeslm updates the prior distribution summary based on the changes in the shape and scale.
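The displayed Sigma2 means follow directly from the inverse gamma parameterization used on this page: if σ² ∼ IG(A,B), then for A > 1,

```latex
E\left[\sigma^{2}\right] = \frac{1}{B\left(A-1\right)},
```

which reproduces the summary values: 1/(1·(3 − 1)) = 0.5 for the default A = 3, B = 1, and 1/(2·(5 − 1)) = 0.125 after the change to A = 5, B = 2.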
Consider the linear regression model in Default Diffuse Prior Model. Assume these prior distributions:
$$\beta |{\sigma}^{2}$$ has a 4-D t distribution with 50 degrees of freedom for each component and the identity matrix for the correlation matrix. The distribution is centered at ct, and each component is scaled by the corresponding element of the vector st (both defined in the code that follows).
$${\sigma}^{2}\sim IG(3,1)$$, an inverse gamma distribution with shape 3 and scale 1.
bayeslm treats these assumptions and the data likelihood as if the corresponding posterior is analytically intractable.
Declare a MATLAB® function that:
Accepts values of β and σ^{2} together in a column vector, and accepts values of the hyperparameters
Returns the value of the joint prior distribution, π(β,σ^{2}), given the values of β and σ^{2}
function logPDF = priorMVTIG(params,ct,st,dof,C,a,b)
%priorMVTIG Log density of multivariate t times inverse gamma
%   priorMVTIG passes params(1:end-1) to the multivariate t density
%   function with dof degrees of freedom for each component and positive
%   definite correlation matrix C, and passes params(end) to the inverse
%   gamma density with shape a and scale b. priorMVTIG returns the log of
%   the product of the two evaluated densities.
%
%   params: Parameter values at which the densities are evaluated, an
%           m-by-1 numeric vector.
%
%   ct:     Multivariate t distribution component centers, an (m-1)-by-1
%           numeric vector. Elements correspond to the first m-1 elements
%           of params.
%
%   st:     Multivariate t distribution component scales, an (m-1)-by-1
%           numeric vector. Elements correspond to the first m-1 elements
%           of params.
%
%   dof:    Degrees of freedom for the multivariate t distribution, a
%           numeric scalar or (m-1)-by-1 numeric vector. priorMVTIG expands
%           scalars such that dof = dof*ones(m-1,1). Elements of dof
%           correspond to the elements of params(1:end-1).
%
%   C:      Correlation matrix for the multivariate t distribution, an
%           (m-1)-by-(m-1) symmetric, positive definite matrix. Rows and
%           columns correspond to the elements of params(1:end-1).
%
%   a:      Inverse gamma shape parameter, a positive numeric scalar.
%
%   b:      Inverse gamma scale parameter, a positive scalar.

beta = params(1:(end-1));
sigma2 = params(end);

tVal = (beta - ct)./st;
mvtDensity = mvtpdf(tVal',C,dof);
igDensity = sigma2^(-a-1)*exp(-1/(sigma2*b))/(gamma(a)*b^a);

logPDF = log(mvtDensity*igDensity);
end
Create an anonymous function that operates like priorMVTIG, but accepts the parameter values only and holds the hyperparameter values fixed.
dof = 50;
C = eye(4);
ct = [25; 4; 0; 3];
st = [10; 1; 1; 1];
a = 3;
b = 1;
prior = @(params)priorMVTIG(params,ct,st,dof,C,a,b);
Create a custom joint prior model for the linear regression parameters. Specify the number of predictors p. Also, specify the function handle for priorMVTIG, and pass the hyperparameter values.
p = 3;
Mdl = bayeslm(p,'ModelType','custom','LogPDF',prior)
Mdl = 
  customblm with properties:

    NumPredictors: 3
        Intercept: 1
         VarNames: {4x1 cell}
           LogPDF: @(params)priorMVTIG(params,ct,st,dof,C,a,b)

The priors are defined by the function:
    @(params)priorMVTIG(params,ct,st,dof,C,a,b)
Mdl is a customblm Bayesian linear regression model object representing the prior distribution of the regression coefficients and disturbance variance. In this case, bayeslm does not display a summary of the prior distributions at the command line.
Consider the linear regression model in Default Diffuse Prior Model. Assume these prior distributions:
For k = 0,...,3, $${\beta}_{k}|{\sigma}^{2}$$ has a Laplace distribution with a mean of 0 and a scale of $${\sigma}^{2}/\lambda $$, where $$\lambda $$ is the shrinkage parameter. The coefficients are conditionally independent.
$${\sigma}^{2}\sim IG(A,B)$$. $$A$$ and $$B$$ are the shape and scale, respectively, of an inverse gamma distribution.
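Under this parameterization, each conditional coefficient prior is the standard Laplace density with scale σ²/λ:

```latex
\pi\left({\beta}_{k} \mid {\sigma}^{2}\right)
  = \frac{\lambda}{2{\sigma}^{2}}
    \exp\!\left(-\frac{\lambda\left|{\beta}_{k}\right|}{{\sigma}^{2}}\right),
```

so larger shrinkage values λ concentrate more prior mass near zero.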
Create a prior model for Bayesian linear regression by using bayeslm. Specify the number of predictors p and the variable names.
p = 3;
PriorMdl = bayeslm(p,'ModelType','lasso','VarNames',["IPI" "E" "WR"]);
PriorMdl is a lassoblm Bayesian linear regression model object representing the prior distribution of the regression coefficients and disturbance variance. By default, bayeslm attributes a shrinkage of 0.01 to the intercept and 1 to the other coefficients in the model.
Using dot notation, change the default shrinkages for all coefficients, except the intercept, by specifying a 3-by-1 vector containing the new values for the Lambda property of PriorMdl:
Attribute a shrinkage of 10 to IPI and WR.
Because E has a scale that is several orders of magnitude larger than the scales of the other variables, attribute a shrinkage of 1e5 to it.
Lambda(2:end) contains the shrinkages of the coefficients corresponding to the specified variables in the VarNames property of PriorMdl.
PriorMdl.Lambda = [10; 1e5; 10];
Load the Nelson-Plosser data set. Create variables for the response and predictor series.
load Data_NelsonPlosser
X = DataTable{:,PriorMdl.VarNames(2:end)};
y = DataTable{:,"GNPR"};
Perform Bayesian lasso regression by passing the prior model and data to estimate, that is, by estimating the posterior distribution of $$\beta $$ and $${\sigma}^{2}$$. Bayesian lasso regression uses Markov chain Monte Carlo (MCMC) to sample from the posterior. For reproducibility, set a random seed.
rng(1);
PosteriorMdl = estimate(PriorMdl,X,y);
Method: lasso MCMC sampling with 10000 draws
Number of observations: 62
Number of predictors:   4

           |   Mean      Std          CI95         Positive  Distribution 
--------------------------------------------------------------------------
 Intercept | -1.3472    6.8160  [-15.169, 11.590]   0.427     Empirical
 IPI       |  4.4755    0.1646  [  4.157,  4.799]   1.000     Empirical
 E         |  0.0001    0.0002  [ -0.000,  0.000]   0.796     Empirical
 WR        |  3.1610    0.3136  [  2.538,  3.760]   1.000     Empirical
 Sigma2    | 60.1452   11.1180  [ 42.319, 85.085]   1.000     Empirical
Plot the posterior distributions.
plot(PosteriorMdl)
Given a shrinkage of 1e5, the distribution of E is fairly dense around 0. Therefore, E might not be an important predictor.
NumPredictors — Number of predictor variables
Number of predictor variables in the Bayesian multiple linear regression model, specified as a nonnegative integer.
NumPredictors must be the same as the number of columns in your predictor data, which you specify during model estimation or simulation.
When counting the number of predictors in the model, exclude the intercept term specified by Intercept. If you include a column of ones in the predictor data for an intercept term, then count it as a predictor variable and specify 'Intercept',false.
Data Types: double
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
Example: 'ModelType','conjugate','Mu',1:3,'V',1000*eye(3),'A',1,'B',0.5 specifies that the prior distribution of Beta given Sigma2 is Gaussian with mean vector 1:3 and covariance matrix Sigma2*1000*eye(3), and the distribution of Sigma2 is inverse gamma with shape 1 and scale 0.5.

'ModelType' — Joint prior distribution of (β,σ^{2})
'diffuse' (default) | 'conjugate' | 'semiconjugate' | 'empirical' | 'custom' | 'lasso' | 'mixconjugate' | 'mixsemiconjugate'
Joint prior distribution of (β,σ^{2}), specified as the comma-separated pair consisting of 'ModelType' and a value in the following tables.
For a standard Bayesian regression model, choose a value in this table.
Value | Description
'conjugate' | Normal-inverse-gamma conjugate model. You can adjust the corresponding hyperparameters by using the Mu, V, A, and B name-value pair arguments.
'semiconjugate' | Normal-inverse-gamma semiconjugate model. You can adjust the corresponding hyperparameters by using the Mu, V, A, and B name-value pair arguments.
'diffuse' | Diffuse prior distributions
'empirical' | Custom prior distributions characterized by random draws. You must also specify the BetaDraws and Sigma2Draws name-value pair arguments.
'custom' | Custom prior distributions characterized by a log density. You must also specify the LogPDF name-value pair argument.
For a Bayesian regression model that performs predictor variable selection, choose a value in this table.
Value | Description
'mixconjugate' | Stochastic search variable selection (SSVS) [1] conjugate prior distributions. For more details, see mixconjugateblm.
'mixsemiconjugate' | SSVS [1] semiconjugate prior distributions. For more details, see mixsemiconjugateblm.
'lasso' | Bayesian lasso regression prior distributions [3]. For more details, see lassoblm.
The prior model type that you choose depends on your assumptions on the joint distribution of the parameters. Your choice can affect posterior estimates and inferences. For more details, see Implement Bayesian Linear Regression.
Example: 'ModelType','conjugate'
Data Types: char
'Intercept' — Flag for including regression model intercept
true (default) | false
Flag for including a regression model intercept, specified as the comma-separated pair consisting of 'Intercept' and a value in this table.
Value | Description
false | Exclude an intercept from the regression model. Therefore, β is a p-dimensional vector, where p is the value of NumPredictors.
true | Include an intercept in the regression model. Therefore, β is a (p + 1)-dimensional vector. This specification causes a T-by-1 vector of ones to be prepended to the predictor data during estimation and simulation.
If you include a column of ones in the predictor data for an intercept term, then specify 'Intercept',false.
Example: 'Intercept',false
'VarNames' — Predictor variable names
Predictor variable names for displays, specified as the comma-separated pair consisting of 'VarNames' and a string vector or cell vector of character vectors. VarNames must contain NumPredictors elements. VarNames(j) is the name of the variable in column j of the predictor data set, which you specify during estimation, simulation, or forecasting.
The default is {'Beta(1)','Beta(2)',...,'Beta(p)'}, where p is the value of NumPredictors.
Note
You cannot set the name of the intercept or disturbance variance. In displays, bayeslm gives the intercept the name Intercept and the disturbance variance the name Sigma2. Therefore, you cannot use "Intercept" and "Sigma2" as predictor names.
Example: 'VarNames',["UnemploymentRate"; "CPI"]
Data Types: string | cell | char
'Mu' — Mean hyperparameter of Gaussian prior on β
zeros(Intercept + NumPredictors,1) (default) | numeric vector
Mean hyperparameter of the Gaussian prior on β, specified as the comma-separated pair consisting of 'Mu' and a numeric vector.
If Mu is a vector, then it must have NumPredictors or NumPredictors + 1 elements.
For NumPredictors elements, bayeslm sets the prior mean of the NumPredictors predictors only. Predictors correspond to the columns in the predictor data (specified during estimation, simulation, or forecasting). bayeslm ignores the intercept in the model; that is, bayeslm assigns the default prior mean to any intercept.
For NumPredictors + 1 elements, the first element corresponds to the prior mean of the intercept, and all other elements correspond to the predictors.
Example: 'Mu',[1; 0.08; 2]
Data Types: double
'V' — Conditional covariance matrix hyperparameter of Gaussian prior on β
1e5*eye(Intercept + NumPredictors) (default) | symmetric, positive definite matrix
Conditional covariance matrix hyperparameter of the Gaussian prior on β, specified as the comma-separated pair consisting of 'V' and a c-by-c symmetric, positive definite matrix. c can be NumPredictors or NumPredictors + 1.
If c is NumPredictors, then bayeslm sets the prior covariance matrix to
$$\left[\begin{array}{cccc}1e5& 0& \cdots & 0\\ 0& & & \\ \vdots & & V& \\ 0& & & \end{array}\right].$$
bayeslm attributes the default prior covariances to the intercept, and attributes V to the coefficients of the predictor variables in the data. Rows and columns of V correspond to columns (variables) in the predictor data.
If c is NumPredictors + 1, then bayeslm sets the entire prior covariance to V. The first row and column correspond to the intercept. All other rows and columns correspond to the columns in the predictor data.
The default value is a flat prior. For an adaptive prior, specify diag(Inf(Intercept + NumPredictors,1)). Adaptive priors indicate zero precision in order for the prior distribution to have as little influence as possible on the posterior distribution.
For 'ModelType','conjugate', V is the prior covariance of β up to a factor of σ^{2}.
Example: 'V',diag(Inf(3,1))
Data Types: double
'Lambda' — Lasso regularization parameter
1 (default) | positive numeric scalar | positive numeric vector
Lasso regularization parameter for all regression coefficients, specified as the comma-separated pair consisting of 'Lambda' and a positive numeric scalar or (Intercept + NumPredictors)-by-1 positive numeric vector. Larger values of Lambda cause corresponding coefficients to shrink closer to zero.
Suppose X is a T-by-NumPredictors matrix of predictor data, which you specify during estimation, simulation, or forecasting.
If Lambda is a vector and Intercept is true, Lambda(1) is the shrinkage for the intercept, Lambda(2) is the shrinkage for the coefficient of the first predictor X(:,1), Lambda(3) is the shrinkage for the coefficient of the second predictor X(:,2), …, and Lambda(NumPredictors + 1) is the shrinkage for the coefficient of the last predictor X(:,NumPredictors).
If Lambda is a vector and Intercept is false, Lambda(1) is the shrinkage for the coefficient of the first predictor X(:,1), …, and Lambda(NumPredictors) is the shrinkage for the coefficient of the last predictor X(:,NumPredictors).
If you supply the scalar s for Lambda, then all coefficients of the predictors in X have a shrinkage of s.
If Intercept is true, the intercept has a shrinkage of 0.01, and lassoblm stores [0.01; s*ones(NumPredictors,1)] in Lambda.
Otherwise, lassoblm stores s*ones(NumPredictors,1) in Lambda.
Example: 'Lambda',6
Data Types: double
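The scalar-expansion behavior described above can be checked directly (a sketch; the stated stored value assumes the default intercept):

```matlab
% Supplying scalar shrinkage s = 6 with an intercept: lassoblm stores
% [0.01; 6; 6; 6], that is, the default 0.01 for the intercept and s for
% each of the three predictor coefficients.
Mdl = bayeslm(3,'ModelType','lasso','Lambda',6);
Mdl.Lambda
```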
'Mu' — Componentwise mean hyperparameter of Gaussian mixture prior on β
zeros(Intercept + NumPredictors,2) (default) | numeric matrix
Componentwise mean hyperparameter of the Gaussian mixture prior on β, specified as the comma-separated pair consisting of 'Mu' and an (Intercept + NumPredictors)-by-2 numeric matrix. The first column contains the prior means for component 1 (the variable-inclusion regime, that is, γ = 1). The second column contains the prior means for component 2 (the variable-exclusion regime, that is, γ = 0).
If Intercept is false, then Mu has NumPredictors rows. bayeslm sets the prior mean of the NumPredictors coefficients corresponding to the columns in the predictor data set, which you specify during estimation, simulation, or forecasting.
Otherwise, Mu has NumPredictors + 1 elements. The first element corresponds to the prior mean of the intercept, and all other elements correspond to the predictor variables.
Tip
To perform SSVS, use the default value of Mu.
Data Types: double
'V' — Componentwise variance factor or variance hyperparameter of Gaussian mixture prior on β
repmat([10 0.1],Intercept + NumPredictors,1) (default) | positive numeric matrix
Componentwise variance factor or variance hyperparameter of the Gaussian mixture prior on β, specified as the comma-separated pair consisting of 'V' and an (Intercept + NumPredictors)-by-2 positive numeric matrix. The first column contains the prior variance factors for component 1 (the variable-inclusion regime, that is, γ = 1). The second column contains the prior variance factors for component 2 (the variable-exclusion regime, that is, γ = 0). For conjugate models ('ModelType','mixconjugate'), V contains variance factors, and for semiconjugate models ('ModelType','mixsemiconjugate'), V contains variances.
If Intercept is false, then V has NumPredictors rows. bayeslm sets the prior variance factor of the NumPredictors coefficients corresponding to the columns in the predictor data set, which you specify during estimation, simulation, or forecasting.
Otherwise, V has NumPredictors + 1 elements. The first element corresponds to the prior variance factor of the intercept, and all other elements correspond to the predictor variables.
Tip
To perform SSVS, specify a larger variance factor for regime 1 than for regime 2. That is, for all j, specify V(j,1) > V(j,2).
For details on what value to specify for V, see [1].
Data Types: double
'Probability' — Prior probability distribution for variable inclusion and exclusion regimes
0.5*ones(Intercept + NumPredictors,1) (default) | numeric vector of values in [0,1] | function handle
Prior probability distribution for the variable inclusion and exclusion regimes, specified as the comma-separated pair consisting of 'Probability' and an (Intercept + NumPredictors)-by-1 numeric vector of values in [0,1], or a function handle in the form @fcnName, where fcnName is the function name. Probability represents the prior probability distribution of γ = {γ_{1},…,γ_{K}}, where:
K = Intercept + NumPredictors, which is the number of coefficients in the regression model.
γ_{k} ∈ {0,1} for k = 1,…,K. Therefore, the sample space has a cardinality of 2^{K}.
γ_{k} = 1 indicates that variable VarNames(k) is included in the model, and γ_{k} = 0 indicates that the variable is excluded from the model.
If Probability is a numeric vector:
Rows correspond to the variable names in VarNames. For models containing an intercept, the prior probability for intercept inclusion is Probability(1).
For k = 1,…,K, the prior probability for excluding variable k is 1 – Probability(k).
Prior probabilities of the variableinclusion regime, among all variables and the intercept, are independent.
If Probability is a function handle, then it represents a custom prior distribution of the variable-inclusion regime probabilities. The corresponding function must have this declaration statement (the argument and function names can vary):
logprob = regimeprior(varinc)
logprob is a numeric scalar representing the log of the prior distribution. You can write the prior distribution up to a proportionality constant.
varinc is a K-by-1 logical vector. Elements correspond to the variable names in VarNames and indicate the regime in which the corresponding variable exists. varinc(k) = true indicates that VarName(k) is included in the model, and varinc(k) = false indicates that it is excluded from the model.
You can include more input arguments, but they must be known when you call bayeslm.
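As an illustration, a custom regime prior might make each variable's inclusion an independent Bernoulli event. The following hypothetical function (saved as regimeprior.m; the name and probability 0.75 are assumptions for illustration, not part of bayeslm) follows the required declaration statement:

```matlab
function logprob = regimeprior(varinc)
% Hypothetical custom regime prior: independent Bernoulli(0.75) inclusion
% for each coefficient, written as a log density up to a constant.
p = 0.75;
logprob = sum(varinc)*log(p) + sum(~varinc)*log(1 - p);
end
```

You would then pass @regimeprior as the value of 'Probability'.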
For details on what value to specify for Probability, see [1].
Data Types: double | function_handle
'Correlation' — Prior correlation matrix of β
eye(Intercept + NumPredictors) (default) | numeric, positive definite matrix
Prior correlation matrix of β for both components in the mixture model, specified as the comma-separated pair consisting of 'Correlation' and an (Intercept + NumPredictors)-by-(Intercept + NumPredictors) numeric, positive definite matrix. Consequently, the prior covariance matrix for component j in the mixture model is:
For conjugate models ('ModelType','mixconjugate'), sigma2*diag(sqrt(V(:,j)))*Correlation*diag(sqrt(V(:,j)))
For semiconjugate models ('ModelType','mixsemiconjugate'), diag(sqrt(V(:,j)))*Correlation*diag(sqrt(V(:,j)))
where sigma2 is σ^{2} and V is the matrix of coefficient variance factors or variances.
Rows and columns correspond to the variable names in VarNames.
By default, regression coefficients are uncorrelated, conditional on the regime.
Note
You can supply any appropriately sized numeric matrix. However, if your specification is not positive definite, bayeslm issues a warning and replaces your specification with CorrelationPD, where:
CorrelationPD = 0.5*(Correlation + Correlation.');
Tip
For details on what value to specify for Correlation, see [1].
Data Types: double
'A' — Shape hyperparameter of inverse gamma prior on σ^{2}
3 (default) | numeric scalar
Shape hyperparameter of the inverse gamma prior on σ^{2}, specified as the comma-separated pair consisting of 'A' and a numeric scalar.
A must be at least –(Intercept + NumPredictors)/2.
With B held fixed, the inverse gamma distribution becomes taller and more concentrated as A increases. This characteristic weighs the prior model of σ^{2} more heavily than the likelihood during posterior estimation.
For the functional form of the inverse gamma distribution, see Analytically Tractable Posteriors.
This option does not apply to empirical or custom prior distributions.
Example: 'A',0.1
Data Types: double
'B' — Scale hyperparameter of inverse gamma prior on σ^{2}
1 (default) | positive scalar | Inf
Scale hyperparameter of the inverse gamma prior on σ^{2}, specified as the comma-separated pair consisting of 'B' and a positive scalar or Inf.
With A held fixed, the inverse gamma distribution becomes taller and more concentrated as B increases. This characteristic weighs the prior model of σ^{2} more heavily than the likelihood during posterior estimation.
This option does not apply to empirical or custom prior distributions.
Example: 'B',5
Data Types: double
'BetaDraws' — Random sample from prior distribution of β
numeric matrix
Random sample from the prior distribution of β, specified as the comma-separated pair consisting of 'BetaDraws' and an (Intercept + NumPredictors)-by-NumDraws numeric matrix. Rows correspond to regression coefficients: the first row corresponds to the intercept, and the subsequent rows correspond to columns in the predictor data. Columns correspond to successive draws from the prior distribution.
BetaDraws and Sigma2Draws must have the same number of columns.
For best results, draw a large number of samples.
Data Types: double
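For example, a hypothetical empirical prior could be characterized by draws from a multivariate normal and an inverse gamma (a sketch; the distributions and draw count are assumptions for illustration):

```matlab
% Draws characterizing a hypothetical prior for a 3-predictor model
% (4 coefficients including the intercept).
numDraws = 1e6;
betaDraws = mvnrnd(zeros(1,4),10*eye(4),numDraws)';  % 4-by-numDraws
sigma2Draws = 1./gamrnd(3,1,1,numDraws);             % 1-by-numDraws inverse gamma draws
EmpiricalMdl = bayeslm(3,'ModelType','empirical', ...
    'BetaDraws',betaDraws,'Sigma2Draws',sigma2Draws);
```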
'Sigma2Draws' — Random sample from prior distribution of σ^{2}
numeric row vector
Random sample from the prior distribution of σ^{2}, specified as the comma-separated pair consisting of 'Sigma2Draws' and a 1-by-NumDraws numeric row vector. Columns correspond to successive draws from the prior distribution.
BetaDraws and Sigma2Draws must have the same number of columns.
For best results, draw a large number of samples.
Data Types: double
'LogPDF' — Log of joint probability density function of (β,σ^{2})
function handle
Log of the joint probability density function of (β,σ^{2}), specified as the comma-separated pair consisting of 'LogPDF' and a function handle.
Suppose logprior is the name of the MATLAB® function defining the joint prior distribution of (β,σ^{2}). Then, logprior must have this form.
function [logpdf,glpdf] = logprior(params)
    ...
end
logpdf is a numeric scalar representing the log of the joint probability density of (β,σ^{2}).
glpdf is an (Intercept + NumPredictors + 1)-by-1 numeric vector representing the gradient of logpdf. Elements correspond to the elements of params.
glpdf is an optional output argument, and only the Hamiltonian Monte Carlo sampler (see hmcSampler) applies it. If you know the analytical partial derivative with respect to some parameters, but not others, then set the elements of glpdf corresponding to the unknown partial derivatives to NaN. MATLAB computes the numerical gradient for missing partial derivatives, which is convenient, but slows sampling.
params is an (Intercept + NumPredictors + 1)-by-1 numeric vector. The first Intercept + NumPredictors elements must correspond to values of β, and the last element must correspond to the value of σ^{2}. The first element of β is the intercept, if one exists. All other elements correspond to predictor variables in the predictor data, which you specify during estimation, simulation, or forecasting.
Example: 'LogPDF',@logprior
PriorMdl — Bayesian linear regression model storing prior model assumptions
conjugateblm model object | semiconjugateblm model object | diffuseblm model object | mixconjugateblm model object | mixsemiconjugateblm model object | lassoblm model object | ...
Bayesian linear regression model storing prior model assumptions, returned as an object in this table.
Value of ModelType  Returned Bayesian Linear Regression Model Object 

'conjugate'  conjugateblm 
'semiconjugate'  semiconjugateblm 
'diffuse'  diffuseblm 
'empirical'  empiricalblm 
'custom'  customblm 
'mixconjugate'  mixconjugateblm 
'mixsemiconjugate'  mixsemiconjugateblm 
'lasso'  lassoblm 
PriorMdl specifies the joint prior distribution and characteristics of the linear regression model only. The model object is a template intended for further use. To incorporate data into the model for posterior distribution analysis, pass the model object and data to the appropriate object function, for example, estimate or simulate.
A Bayesian linear regression model treats the parameters β and σ^{2} in the multiple linear regression (MLR) model y_{t} = x_{t}β + ε_{t} as random variables.
For times t = 1,...,T:
y_{t} is the observed response.
x_{t} is a 1-by-(p + 1) row vector of observed values of p predictors. To accommodate a model intercept, x_{1t} = 1 for all t.
β is a (p + 1)-by-1 column vector of regression coefficients corresponding to the variables that compose the columns of x_{t}.
ε_{t} is the random disturbance with a mean of zero and Cov(ε) = σ^{2}I_{T×T}, while ε is a T-by-1 vector containing all disturbances. These assumptions imply that the data likelihood is
$$\ell \left(\beta ,{\sigma}^{2}|y,x\right)={\displaystyle \prod _{t=1}^{T}\varphi \left({y}_{t};{x}_{t}\beta ,{\sigma}^{2}\right).}$$
ϕ(y_{t};x_{t}β,σ^{2}) is the Gaussian probability density with mean x_{t}β and variance σ^{2} evaluated at y_{t}.
Before considering the data, you impose a joint prior distribution assumption on (β,σ^{2}). In a Bayesian analysis, you update the distribution of the parameters by using information about the parameters obtained from the likelihood of the data. The result is the joint posterior distribution of (β,σ^{2}) or the conditional posterior distributions of the parameters.
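Symbolically, the update combines the prior and the likelihood through Bayes' rule:

```latex
\pi\left(\beta,{\sigma}^{2} \mid y,x\right)
  \propto
  \ell\left(\beta,{\sigma}^{2} \mid y,x\right)\,
  \pi\left(\beta,{\sigma}^{2}\right).
```

Whether the right-hand side normalizes to a known distribution determines whether the posterior is analytically tractable (conjugate and diffuse priors) or requires sampling (semiconjugate, lasso, SSVS, empirical, and custom priors).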
[1] George, E. I., and R. E. McCulloch. "Variable Selection Via Gibbs Sampling." Journal of the American Statistical Association. Vol. 88, No. 423, 1993, pp. 881–889.
[2] Koop, G., D. J. Poirier, and J. L. Tobias. Bayesian Econometric Methods. New York, NY: Cambridge University Press, 2007.
[3] Park, T., and G. Casella. "The Bayesian Lasso." Journal of the American Statistical Association. Vol. 103, No. 482, 2008, pp. 681–686.
See Also
conjugateblm | customblm | diffuseblm | empiricalblm | lassoblm | mixconjugateblm | mixsemiconjugateblm | semiconjugateblm