This paper proposes T1-T2, a gradient-based method that learns regularization hyperparameters jointly with the model parameters during neural network training. T1-T2 alternates two kinds of update: a T1 step adjusts the model parameters by gradient descent on the training objective, and a T2 step adjusts the hyperparameters by gradient descent on the validation objective with the model parameters held fixed, differentiating the validation loss through the most recent parameter update. Experiments show that T1-T2 finds better hyperparameters than grid search, and that the hyperparameters it finds transfer well to ordinary training from scratch. However, T1-T2 can slightly overfit the validation set in some cases.
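To make the alternation concrete, here is a minimal sketch of a T1-T2-style update loop on ridge regression, where the single hyperparameter is an L2 penalty. This is an illustrative toy, not the paper's implementation: the data, learning rates (`alpha`, `beta`), log-parameterization of `lam`, and step counts are all assumptions, and the hypergradient is computed analytically through one T1 step rather than by automatic differentiation.

```python
import numpy as np

# Toy T1-T2 sketch: ridge regression with a learned L2 penalty `lam`.
# All constants here are illustrative choices, not values from the paper.
rng = np.random.default_rng(0)
X_tr, X_val = rng.normal(size=(50, 5)), rng.normal(size=(30, 5))
w_true = rng.normal(size=5)
y_tr = X_tr @ w_true + 0.5 * rng.normal(size=50)
y_val = X_val @ w_true + 0.5 * rng.normal(size=30)

w = np.zeros(5)
log_lam = 0.0                # optimize log(lam) so the penalty stays positive
alpha, beta = 0.01, 0.05     # parameter / hyperparameter learning rates

def train_grad(w, lam):
    # Gradient of the regularized training loss: MSE + lam * ||w||^2.
    return 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr) + 2 * lam * w

for _ in range(500):
    lam = np.exp(log_lam)
    # T1 step: gradient descent on the training objective.
    w_new = w - alpha * train_grad(w, lam)
    # T2 step: hypergradient of the validation loss through the T1 step.
    # Only the penalty term depends on lam, so d(w_new)/d(lam) = -alpha * 2 * w.
    val_grad_w = 2 * X_val.T @ (X_val @ w_new - y_val) / len(y_val)
    d_val_d_lam = val_grad_w @ (-alpha * 2 * w)
    log_lam -= beta * d_val_d_lam * lam   # chain rule for the log-parameterization
    w = w_new

lam = np.exp(log_lam)
val_loss = np.mean((X_val @ w - y_val) ** 2)
```

The two loop bodies mirror the paper's alternation: the model parameters only ever see the training loss, while the hyperparameter only ever sees the validation loss, which is also where the noted risk of validation overfitting enters.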