Artificial Intelligence (AI) models offer consumers a resource that can generate music in a variety of genres and with a range of emotions from only a text prompt. However, emotion is a complex human phenomenon, and it becomes even more complex when one attempts to convey it through music. There is limited research assessing AI's capability to generate music with emotion. Using specified target emotions, this study examined whether those emotions were validly expressed in AI-generated musical samples. Seven audio engineering graduate students listened to 144 AI-generated musical examples spanning sixteen emotions across three genres and reported their impression of the most appropriate emotion for each stimulus. Using Cohen's kappa, minimal agreement was found between subjects and the AI. Results suggest that generating music with a specific emotion remains challenging for AI. Additionally, the AI model studied here appeared to draw on a predetermined pool of musical samples linked to similar emotions. Discussion includes how this rapidly changing technology might be better studied in the future.
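The agreement statistic mentioned above can be illustrated with a short sketch. Cohen's kappa measures agreement between two raters beyond what chance would produce, as kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is chance agreement from each rater's marginal label frequencies. The emotion labels and ratings below are hypothetical, invented only to show the computation; they are not data from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: kappa = (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[label] * counts_b.get(label, 0) for label in counts_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: target emotion in the prompt vs. a listener's judgment.
target = ["happy", "sad", "angry", "happy", "calm", "sad"]
judged = ["happy", "calm", "angry", "sad", "calm", "sad"]
print(round(cohens_kappa(target, judged), 3))  # prints 0.556
```

Values near 0 indicate agreement no better than chance, which is why kappa is preferred over raw percent agreement when comparing listener judgments against the AI's target emotions.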