Phone posterior probabilities have been increasingly explored for improving automatic speech recognition (ASR) systems. In this paper, we propose two approaches for hierarchically enhancing these phone posteriors by integrating long acoustic context, as well as phonetic and lexical knowledge. In the first approach, phone posteriors estimated with a multilayer perceptron (MLP) are used as emission probabilities in hidden Markov model (HMM) forward-backward recursions. This yields new, enhanced posterior estimates that integrate HMM topological constraints (encoding specific phonetic and lexical knowledge) as well as acoustic context. In the second approach, temporal contexts of the regular MLP posteriors are post-processed by a secondary MLP in order to learn inter- and intra-dependencies between the phone posteriors; these dependencies constitute a form of phonetic knowledge. The learned knowledge is integrated into the posterior estimates during the inference (forward pass) of the second MLP, resulting in enhanced phone posteriors. We investigate the use of the enhanced posteriors in hybrid HMM/artificial neural network (ANN) and Tandem configurations, and propose using them either as a replacement for, or as complementary evidence to, the regular MLP posteriors. The proposed methods have been tested on several small- and large-vocabulary databases, consistently yielding improvements in frame, phone, and word recognition rates.
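The first approach described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes a sequence of frame-level MLP phone posteriors is used directly as scaled emission scores in standard HMM forward-backward recursions, and returns the smoothed per-frame state posteriors (gamma), which play the role of the enhanced posteriors. The function name, the single-state-per-phone topology, and the per-frame normalization are illustrative assumptions.

```python
import numpy as np

def enhance_posteriors(mlp_post, trans, init):
    """Illustrative sketch (not the paper's code): HMM forward-backward
    over frame-level MLP phone posteriors.

    mlp_post : (T, K) array of MLP posteriors, one row per frame.
    trans    : (K, K) HMM transition matrix (rows sum to 1),
               encoding the topological/phonetic constraints.
    init     : (K,) initial state distribution.
    Returns gamma, the (T, K) enhanced posteriors
    gamma[t, k] ~ P(q_t = k | whole utterance, HMM topology).
    """
    T, K = mlp_post.shape
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))

    # Forward recursion, with per-frame renormalization for
    # numerical stability (scaling does not change gamma).
    alpha[0] = init * mlp_post[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ trans) * mlp_post[t]
        alpha[t] /= alpha[t].sum()

    # Backward recursion, similarly rescaled per frame.
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = trans @ (mlp_post[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()

    # Enhanced posteriors: combine and renormalize each frame.
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    return gamma
```

Because the backward pass uses future frames, each enhanced posterior integrates long acoustic context on both sides of the current frame, in contrast to the purely local MLP estimate.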