One of the traditional methods for stabilizing the solutions to ill-conditioned linear least squares problems
arising from the linear regression model $Ax \approx b$
has been to truncate the singular value decomposition of the matrix $A$, which can be written $A = U\Sigma V^T$,
where $\Sigma$ is a diagonal matrix whose elements are the singular values of $A$. The least squares solution
$x = V\Sigma^{-1}U^T b$ involves the inverse matrix $\Sigma^{-1}$, so the smallest singular values, which are also the most poorly determined, make the largest contribution to the solution. Wild variations in the calculated solution can be suppressed by setting those singular values to zero and replacing $\Sigma^{-1}$ by the generalized inverse of the truncated $\Sigma$. This is usually treated as a problem of determining the ``numerical rank'' of $A$, but if the measurements are stochastically independent, and if good estimates of their variances are available, then a more natural way to make the truncation is to compare those variances with the elements of the vector $U^T b$ and zero all of the latter judged to be statistically insignificant. Thus the truncation is made on the rotated right-hand side vector $U^T b$ rather than on the rotated matrix $\Sigma$, and the number of terms discarded may differ for different measurement vectors $b$ using the same matrix $A$. This technique has been incorporated into a Fortran program and extensively tested. In all cases residual diagnostics have given good agreement with the standard assumptions about statistical significance. Future work will be devoted to further testing and refinement and to documenting the algorithm and computer program.
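The truncation rule described above can be sketched in a few lines. This is a minimal illustration, not the Fortran program mentioned in the text: it assumes independent measurement errors with a common known standard deviation `sigma`, and the significance threshold `t` (in units of `sigma`) is an illustrative parameter chosen here, not one specified by the source.

```python
import numpy as np

def tsvd_solve(A, b, sigma, t=2.0):
    """Truncated-SVD least squares where the truncation is decided on the
    rotated right-hand side U^T b rather than on a numerical rank of A.

    A     : (m, n) design matrix
    b     : (m,)   measurement vector
    sigma : common standard deviation of the independent measurements
    t     : significance threshold in units of sigma (illustrative choice)
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                       # rotated right-hand side
    # An orthogonal rotation preserves the error variance, so each element
    # of beta still has standard deviation sigma; zero the elements that
    # are not statistically significant at the chosen threshold.
    keep = np.abs(beta) > t * sigma
    coeffs = np.where(keep, beta / s, 0.0)
    return Vt.T @ coeffs
```

Note that `keep` depends on `b`, so two measurement vectors solved against the same matrix `A` may retain different numbers of terms, as the text points out.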