Existing variational methods for learning disentangled representations often suffer from leakage between latent variables. In this paper, we introduce DISCoVeR, a method built on a dual-latent architecture with dual reconstruction pathways. We propose a max–min objective that simultaneously maximizes the data likelihood and explicitly encourages disentanglement. We prove that this objective admits a unique equilibrium and promotes clean separation of latent factors. Empirically, DISCoVeR learns substantially more disentangled representations than prior methods.