I’m Shayan, an AI music researcher.
My research explores human-AI co-creation in music, developing AI tools that genuinely collaborate with artists rather than replace them. I focus on systems that resonate with individual musical tastes and foster meaningful partnerships between humans and machines.
Research & Work
My doctoral work investigated how AI can better serve music creators through adaptable, interactive systems. I envision creativity as emerging from sustained human-AI collaboration, where computational precision meets human intuition.
Recent activities:
- 2025 - Lecture: Control and Explore: Neural Audio Synthesis with VAEs for Live-Electronics and Interactive Performances (Faglig Forum, Music Technology Department, NTNU, Norway)
- 2025 - Lecture: Artificial Intelligence and Music: Deep Learning and Agents for Music Generation (SINTEF-ZEB Lab, Trondheim, Norway)
- 2024 - Podcast Interview: A user-centric approach for symbolic music generation (CreateMe podcast, University of Agder)
More about my work:
Research Interests
- Human-Computer Interaction (HCI)
- Computational Creativity
- Algorithmic Composition
- Deep Learning & Deep Reinforcement Learning
- Multi-Agent Systems
- Human-AI Co-Creation
- Music Information Retrieval
Elsewhere
Email: dadman.shayan@gmail.com
Academic: shayan.dadman@uit.no
Connect: GitHub - LinkedIn - Google Scholar - ResearchGate
I’m open to collaborations and discussions on generative AI, music generation, machine learning, and human-computer interaction. Feel free to reach out if you’re curious to learn more or interested in working together.