Unconditional Generation: Implemented LSTM-based models to predict the next note from a 50-note context, significantly reducing dissonance compared to vanilla-RNN baselines.
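The next-note setup described above can be sketched as follows: an LSTM cell is unrolled over a 50-note context and its final hidden state is projected to one logit per pitch. This is a minimal illustrative sketch, not the project's implementation; the 128-pitch vocabulary, embedding size, and hidden size are assumptions, and the weights are random rather than trained.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gate pre-activations stacked as [input, forget, cell, output]."""
    z = W @ x + U @ h + b
    H = h.shape[0]
    i = sigmoid(z[:H])          # input gate
    f = sigmoid(z[H:2*H])       # forget gate
    g = np.tanh(z[2*H:3*H])     # candidate cell state
    o = sigmoid(z[3*H:])        # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def next_note_logits(context, embed, W, U, b, W_out):
    """Unroll the LSTM over the note context, then score each pitch for the next step."""
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for note in context:
        h, c = lstm_step(embed[note], h, c, W, U, b)
    return W_out @ h            # one logit per pitch in the vocabulary

rng = np.random.default_rng(0)
VOCAB, EMB, H = 128, 16, 32     # assumed sizes: 128 MIDI pitches, toy embedding/hidden dims
embed = rng.normal(size=(VOCAB, EMB)) * 0.1
W = rng.normal(size=(4*H, EMB)) * 0.1
U = rng.normal(size=(4*H, H)) * 0.1
b = np.zeros(4*H)
W_out = rng.normal(size=(VOCAB, H)) * 0.1

context = rng.integers(0, VOCAB, size=50)   # 50-note context, as in the project
logits = next_note_logits(context, embed, W, U, b, W_out)
predicted = int(np.argmax(logits))          # greedy choice of the next note
```

In practice the argmax would typically be replaced by temperature sampling over `softmax(logits)` to avoid repetitive output.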
Conditional Harmony Generation: Built a Transformer-based Conditional Variational Autoencoder (CVAE) to generate complex harmonies conditioned on a specific melody input.
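The conditioning structure of a CVAE can be sketched as below: the melody representation is concatenated into both the encoder input and the decoder input, and the latent sample uses the reparameterization trick. This is a toy MLP sketch of the mechanism only (the project used Transformer encoder/decoder blocks); all layer names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def dense(x, W, b):
    return np.tanh(W @ x + b)

# Toy feature sizes (illustrative, not the project's actual dimensions)
MEL, HARM, HID, LAT = 24, 24, 48, 8

# Encoder q(z | harmony, melody): harmony features concatenated with the melody condition
W_enc = rng.normal(size=(HID, HARM + MEL)) * 0.1; b_enc = np.zeros(HID)
W_mu  = rng.normal(size=(LAT, HID)) * 0.1
W_lv  = rng.normal(size=(LAT, HID)) * 0.1

# Decoder p(harmony | z, melody): the melody condition is injected again alongside z
W_dec = rng.normal(size=(HID, LAT + MEL)) * 0.1; b_dec = np.zeros(HID)
W_out = rng.normal(size=(HARM, HID)) * 0.1

def encode(harmony, melody):
    h = dense(np.concatenate([harmony, melody]), W_enc, b_enc)
    return W_mu @ h, W_lv @ h          # mean and log-variance of q(z | x, c)

def reparameterize(mu, logvar):
    eps = rng.normal(size=mu.shape)    # z = mu + sigma * eps keeps sampling differentiable
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, melody):
    h = dense(np.concatenate([z, melody]), W_dec, b_dec)
    return W_out @ h                   # reconstructed harmony features

melody  = rng.normal(size=MEL)
harmony = rng.normal(size=HARM)
mu, logvar = encode(harmony, melody)
z = reparameterize(mu, logvar)
recon = decode(z, melody)

# KL(q(z|x,c) || N(0, I)) — the regularizer in the CVAE training objective
kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
```

At generation time the encoder is dropped: `z` is drawn from the standard-normal prior and decoded against a new melody, which is what makes the harmony output melody-conditioned.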
Advanced Modeling: Utilized latent embeddings and multi-head attention mechanisms to capture long-range musical dependencies and improve structural diversity.
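The long-range dependency claim above rests on multi-head scaled dot-product attention, which lets every note position attend to every other position regardless of distance. A minimal sketch of one such layer, with toy sizes (50 positions, 4 heads) that are assumptions rather than the project's configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Scaled dot-product attention computed independently per head, then recombined."""
    T, D = X.shape
    d = D // n_heads                        # per-head dimension
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        q = Q[:, h*d:(h+1)*d]
        k = K[:, h*d:(h+1)*d]
        v = V[:, h*d:(h+1)*d]
        scores = q @ k.T / np.sqrt(d)       # every note attends to every other note
        heads.append(softmax(scores) @ v)   # attention-weighted mix of values
    return np.concatenate(heads, axis=1) @ Wo

rng = np.random.default_rng(2)
T, D, N_HEADS = 50, 32, 4                   # toy sizes: 50 note positions, 4 heads
X = rng.normal(size=(T, D))                 # e.g. note embeddings + positional encodings
Wq, Wk, Wv, Wo = (rng.normal(size=(D, D)) * 0.1 for _ in range(4))
out = multi_head_attention(X, Wq, Wk, Wv, Wo, N_HEADS)
```

Because the attention scores form a full T×T matrix, a dependency between the first and last note of the context costs the same single step as one between neighbors, unlike an RNN where it must survive 49 recurrent updates.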
Results: Achieved high rhythmic consistency and natural harmonic progression, demonstrating the effectiveness of attention-based models in symbolic music composition.