This document discusses Monte Carlo methods for estimating the score vector and observed information matrix in intractable models. It begins with a general result: if the parameter is given a normal prior and the prior variance is taken to zero, posterior moments yield the derivatives of the log-likelihood. It then shows how importance sampling can be used to estimate the first derivative of the log-likelihood (the score vector) and its negative second derivative (the observed information matrix). Finally, it applies these methods to hidden Markov models. The key ideas are that posterior moments approximate derivatives of the log-likelihood as the prior variance goes to zero, and that Monte Carlo techniques such as importance sampling can estimate these posterior moments, and hence the derivatives.
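As a sketch of the small-variance identity the summary refers to (the notation \(\theta_0\), \(\sigma^2\), and \(\ell\) is introduced here for illustration and is assumed rather than quoted from the document): give the parameter the prior \(\theta \sim \mathcal{N}(\theta_0, \sigma^2 I)\) and write \(\ell(\theta) = \log p(y \mid \theta)\). A second-order expansion of \(\ell\) around \(\theta_0\) makes the posterior approximately Gaussian, with

\[
\mathbb{E}[\theta \mid y] = \theta_0 + \sigma^2 \nabla \ell(\theta_0) + O(\sigma^4),
\qquad
\operatorname{Cov}[\theta \mid y] = \sigma^2 I + \sigma^4 \nabla^2 \ell(\theta_0) + O(\sigma^6),
\]

so that as \(\sigma^2 \to 0\),

\[
\nabla \ell(\theta_0) \approx \sigma^{-2}\bigl(\mathbb{E}[\theta \mid y] - \theta_0\bigr),
\qquad
-\nabla^2 \ell(\theta_0) \approx \sigma^{-2} I - \sigma^{-4}\operatorname{Cov}[\theta \mid y].
\]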
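A minimal Python sketch of how importance sampling could turn these posterior-moment identities into estimates of the score and observed information. The function name, the choice of the prior as the proposal distribution, the value of the prior variance, and the Gaussian toy likelihood are all illustrative assumptions, not details taken from the document:

```python
import numpy as np

def estimate_score_and_info(loglik, theta0, sigma2=1e-2, n_samples=100_000, rng=None):
    """Estimate the score and observed information at theta0 via importance sampling.

    theta ~ N(theta0, sigma2 * I) is used as both prior and proposal, so the
    self-normalised importance weights are proportional to the likelihood
    p(y | theta).  Posterior moments are then converted to log-likelihood
    derivatives with the small-variance identities sketched above.
    """
    rng = np.random.default_rng(rng)
    theta0 = np.asarray(theta0, dtype=float)
    d = theta0.size
    # Draw proposal samples from the normal prior centred at theta0.
    thetas = theta0 + np.sqrt(sigma2) * rng.standard_normal((n_samples, d))
    # Self-normalised weights: proposal == prior, so w is proportional to p(y | theta).
    logw = np.array([loglik(t) for t in thetas])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Weighted estimates of the posterior mean and covariance.
    post_mean = w @ thetas
    centred = thetas - post_mean
    post_cov = (w[:, None] * centred).T @ centred
    # Convert posterior moments to derivatives of the log-likelihood.
    score = (post_mean - theta0) / sigma2
    observed_info = np.eye(d) / sigma2 - post_cov / sigma2**2
    return score, observed_info

if __name__ == "__main__":
    # Toy check with a Gaussian log-likelihood, where the answers are known:
    # score at theta0 = y - theta0, observed information = identity matrix.
    y = np.array([0.5, -1.2])
    loglik = lambda th: -0.5 * np.sum((y - th) ** 2)
    score, info = estimate_score_and_info(loglik, theta0=np.zeros(2), rng=0)
    print(score)  # roughly [0.5, -1.2], up to O(sigma2) bias and Monte Carlo noise
    print(info)   # roughly the 2x2 identity matrix
```

The bias of both estimators shrinks with the prior variance, while the Monte Carlo variance grows as it shrinks, so in practice the prior variance and sample size would need to be balanced against each other.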