This thesis addresses three primary areas within score-based and diffusion generative models. First, it introduces training objectives that reduce score-estimation error, using parametric frameworks and kernel-inspired approaches. Second, it develops a Bayesian-inspired optimization framework that combines Gaussian smoothing with alpha-posterior estimation, demonstrated on signal separation and interference mitigation. Third, it proposes a one-step neural sampler based on multi-divergence minimization and mixture-distribution score estimation, achieving strong performance in image generation. The work further extends these methods to multi-step sampling and to posterior sampling for inverse problems.