The coding aspect is novel, I'll admit, and something an audience may find interesting, but I've yet to hear any examples of live-coded music (or even coded music) that I'd actually want to listen to. They almost always take the form of bog-standard house or techno, which I don't find that enjoyable.
Additionally, the technique is fun for demonstrating how sound synthesis works (like in the OP article), but anything more complex or nuanced is never explored or attempted. Sequencing a nuanced instrumental part (or several) requires a lot of moment-to-moment detail, dynamics, and variation, something that is tedious to sequence and simply doesn't play to this format's strengths.
So again, I want to integrate this skill into my music production tool set, but aside from the novelty of coding live, it doesn't appear well-suited to making interesting music in real time. And for offline sequencing there are better, more sophisticated tools, like DAWs or trackers.
Consider this: there are teenagers today, out there somewhere, learning to code music. Remember when synthesisers were young and cool and there was an explosion of different engines and implementations?
This is happening for the kids, again.
Try to use this new technology to replicate the modern, and then the old sound, and then discover new sounds. Like we synth nerds have been doing for decades.
Pro developers who really care about the sound variously write in C/C++ or use cross-compilers for Pd or Max. High-quality oscillators, filters, reverbs, etc. are hard work, although you can certainly get very good results with basic ones given today's fast processors.
Live coding is better for conditionals like 'every time [note] is played increment [counter], when [counter] > 15 reset [counter] to 0 and trigger [something else]'. But people who are focused on the result rather than the live coding performance tend to either make their own custom tooling (Autechre) or use programmable Eurorack modules that integrate into a larger setup, e.g. https://www.perfectcircuit.com/signal/the-programmable-euror...
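To make that concrete, here's a rough TidalCycles rendering of the same counter-and-reset idea (a sketch using Tidal's documented `every` and `overlay` combinators, not anyone's actual patch):

    -- Every 16th cycle, layer a crash over the kick pattern;
    -- the "increment a counter, reset after 15" bookkeeping is
    -- implicit in `every`, which fires on every 16th repetition.
    d1 $ every 16 (overlay (s "crash")) $ s "bd*4"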
It's not that you can't get great musical results via coding, of course you can. But coding as performance is a celebration of the repl, not of the music per se.
I like your idea of celebrating the repl, it's right up there with performance menu diving as a statement for how orthogonal things can get .. I’ve never enjoyed fishing for parameters, so having editor chops applied musically is .. refreshing .. in some strange ways.
Sure wish hardware manufacturers would be motivated to not just throw a linked list and a couple of buttons at the parameter issue ..
No it wasn't. Jean-Michel Jarre sold 80 million albums and is one of the most famous musicians of the 20th century.
(Disclaimer: I've been in the MI business for decades, I've seen some things..)
There's a learning curve for sure, but it's not too bad once you learn the basics of how audio and MIDI are handled + general JUCE application structure.
Two tips:
Don't bother with the Projucer; use the CMake example to get going, especially if you don't use Xcode or Visual Studio.
If you're on a Mac, you might need to self-sign the VST. I don't remember the exact process, but it's something I had to do once I got an M4 Mac.
LLMs have absolutely killed any interest I used to have in the Max/Pd/Reaktor wiring-up-boxes UI.
I have really gone further though and thought why do I even care about VST or a DAW or anything like this? Why not break completely free of everything?
I take inspiration from Trevor Wishart and the Composers Desktop Project for this. Wishart's music could only really be made with his own tools.
It is easy to sound original when using a tool no one else has.
Python for audio apps? First I've heard of this. Is it a "Python acts as a thin wrapper over C" situation, or something?
> I have really gone further though and thought why do I even care about VST or a DAW or anything like this?
Been there. I started making music on a Windows 95 PC, built up a studio over the years (including some DIY hardware), and eventually was using Logic as a glorified multi-track recorder + effects rack. These days, I've kind of gone back to my roots, and I'm doing a lot of sample chopping. The only difference is: I'm using my own sounds as source material.
I’ve been using LLMs to help build out audio-related projects that I didn’t think I’d get a chance to pursue until I retired.
Under the hood, are they crap? Maybe. Probably, even. But they function well enough to make my own weird music with, and they’re available for use now - not twenty years from now when I retire.
The era of custom software on tap is here. As someone who is primarily interested in making unique stuff, it’s a great time to be alive.
For live coding, Switch Angel is definitely someone I would actually go to see live; check out this video of hers [1].
[0] https://nathanho.bandcamp.com/album/haywire-frontier [1] https://youtu.be/iu5rnQkfO6M
I feel like the newer(ish) tools such as Strudel, and also this here Loopmaster, have a much better toolset for producing stuff that actually sounds great (vs. purely the novelty of "look, I'm coding beats"). Like, Strudel comes with an extensive bank of genuinely good samples (vs. relying on synthesis out of some sense of purity), and also comes with lots of very decent-sounding effects and filters and the like.
Combine that with the ability to do generative stuff in a way that Ableton, FL Studio, or Renoise are never going to give you, and I won't be surprised if people eventually invent some cool new genres with these tools.
Basically, your comment reads a bit like saying the demoscene makes no sense because you can make any video better with Blender and DaVinci Resolve. And this obviously isn't true given the sheer overload of spectacularly great demos out there whose unique aesthetic was easy to obtain because they're code, not video (look up "cdak" by Quite for an on-the-nose example).
I'm going to be surprised if this new wave of music coding tools doesn't result in some madly weird new electronic music genres.
Obviously there's plenty of stuff these tools are terrible for (like your example of nuanced instrument parts), but don't dismiss the kinds of things they're going to turn out to be amazing at.
Aside from the novelty factor (due to very different UI/UX) and the idea that you can use generative code to make music (which became an even more interesting factor in the age of LLMs), I agree.
And even the generative-code part I mentioned is a novelty factor as well, and isn't really practical for someone whose end goal is actually making music (as opposed to just experimenting with the tech, or with how far one can get with music-as-code UI/UX).
Of course, often creativity comes from limitations. I would agree that it's usually not desirable to go full procedural generation, especially when you want to wrangle something into the structure of a song. I think the best approach is a hybrid one, where procedural generation is used to generate certain ideas and sounds, and then those are brought into a more traditional DAW-like environment.
Sure, it might be cool to use cellular automata to generate rhythms, or pick notes from a diatonic scale, or modulate signals, but without a rhyme or reason or _very_ tight constraints the music - more often than not - ends up feeling unfocused and meandering.
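(To make the cellular-automata example concrete, here's a toy Haskell sketch, not tied to any particular tool: rule 90 evolves a row of cells, and each generation can be read as one bar of hits and rests.)

    -- Elementary cellular automaton, rule 90: next cell = left XOR right.
    rule90 :: [Bool] -> [Bool]
    rule90 cells = zipWith (/=) (False : init cells) (tail cells ++ [False])

    -- Four generations of an 8-cell row, each read as one bar
    -- where True = hit and False = rest.
    bars :: [[Bool]]
    bars = take 4 (iterate rule90 [False, False, False, True, False, False, False, False])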
These methods may be able to generate a bar or two of compelling material, but it's hard to write long musical "sentences" or "paragraphs" that have an arc and intention to them. Or where the individual voices are complementing and supporting one another as they drive towards a common effect.
A great deal of compelling music comes from riding the tightrope between repetition and surprising deviations from that scheme. This quality is (for now) very hard to formalize with rules or algorithms. It's a largely intuitive process and is a big part of being a compelling writer.
I think the most effective music comes from the composer having a clear idea of where they are going musically and then using the tools to supplement that vision, not letting the tools generate and steer for you.
-----
As an aside, I watch a lot of YouTube tutorials in which electronic music producers create elaborate modulation sources or Max patches that generate rhythms and melodies for them. A recurring theme in many of these videos is an approach of "let's throw everything at the wall, generate a lot of unfocused material, and then winnow it down and edit it into something cool!" This feels fundamentally backwards to me. I understand why it's exciting and cool when you're starting out, but I think the best music still comes from having a strong grasp of the musical fundamentals, a big imagination, and the technical ability to render it with your tools and instruments.
----
To your final point, I think the best example of this hybrid generative approach you're describing is Autechre. They're really out on the cutting edge and carving their own path. Their music is probably quite alienating because it largely forsakes melody and harmony; instead it's all rhythm and timbre. I think they're a positive example of what generative music could be. They're controlling parameters on the macro level. They're not dictating every note; instead they appear to be wrangling and modulating probabilities in a very active way. It's exciting stuff.
Harmony and timbre are basically the same thing. You can feel this if you play a long drone note and twiddle the filter cutoff and resonance.
I think this format of composition is going to encourage a highly repetitive structure to your music. Good programming languages constrain and prevent the construction of bad programs. Applying that to music is effectively going to give you quantization of every dimension of composition.
I'm sure it's possible to break out of that, but you are fighting an uphill battle.
It's also notable for being probably the only Haskell library used almost exclusively by people with no prior knowledge of Haskell, which is an insane feat in itself.
I think I must not be expressing myself well. These tools seem to be optimized for parametric pattern manipulation. You essentially declare patterns, apply transformations to them, and then play them back in loops. The whole paradigm is going to encourage a very specific style of composition where repeating structures and their variations are the primary organizational principle.
Again, I'm not trying to critique the styles of music that lend themselves well to these tools.
> And yet it also constrains and prevents the construction of bad programs in a very strict manner via its type system and compiler.
Looking at the examples in their documentation, all I see are examples like:
d1 $ sound "[[bd [bd bd bd bd]] bd sn:5] [bd sn:3]"
So it definitely isn't leveraging GHC's typechecker for your compositions. Is the TidalCycles runtime doing some kind of runtime typechecking on whatever it parses from these strings?

> It's also notable for being probably the only Haskell library used almost exclusively by people with no prior knowledge of Haskell, which is an insane feat in itself.
I think Pandoc or Shellcheck would win on this metric.
the runtime is GHC (well GHCi actually). tidal's type system (and thus GHC's typechecker) ensures that only computationally valid pattern transformations can be composed together. if you're interested in the type system here's a good overview from a programmer's perspective https://www.imn.htwk-leipzig.de/~waldmann/etc/untutorial/tc/...
these strings are a special case, they're formatted in "mini-notation" which is parsed into composed functions at runtime. a very expressive kind of syntactic sugar you could say. while they're the most immediately obvious feature of Tidal (and have since been adapted in numerous other livecoding languages), mini-notation is really just the tip of the iceberg.
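to make that concrete, here's roughly how a couple of mini-notation strings correspond to plain combinators (illustrative only, not the exact desugaring tidal performs):

    -- s "bd sn"    behaves like  fastcat [s "bd", s "sn"]
    -- s "bd*2 sn"  behaves like  fastcat [fast 2 (s "bd"), s "sn"]
    d1 $ fastcat [s "bd", s "sn"]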
>The whole paradigm is going to encourage a very specific style of composition where repeating structures and their variations are the primary organizational principle.
but that applies to virtually all music, from bach to coltrane to the beatles! my point is that despite what the average livecoder might stream/perform online, live coding languages are certainly not restricted to or even particularly geared towards repetitive dance music - it just happens that that's a common denominator of the kind of demographic who's interested in livecoding music in the first place.
i'd argue that (assuming sufficient knowledge of the underlying theory) composing a fugue in the style of bach is much easier in tidal than in a DAW or other music software. on the more experimental end, a composition in which no measure ever repeats fully is trivial to realize in tidalcycles - it takes only a handful of lines of code to build up a stochastic composition based on markov chains, perlin noise and conditional pattern transformations. via the latter you can actually sculpt these generative processes into something that sounds intentional and follows some inner logic rather than just being random.
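a hedged sketch of what those few lines might look like (function names like markovPat, perlin and whenmod are taken from the tidal docs; treat this as illustrative rather than a tested patch):

    -- markov chain picks note numbers, perlin noise sweeps the filter,
    -- whenmod adds a conditional double-time fill on cycles 6-7 of every 8
    d1 $ whenmod 8 6 (fast 2)
       $ n (fromIntegral <$> markovPat 8 0 [[0.5,0.3,0.2],[0.1,0.6,0.3],[0.4,0.4,0.2]])
       # s "arpy"
       # cutoff (range 400 2000 perlin)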
the text-based interface makes it much easier to use than anything GUI-based. it's all just pure functions that you can compose together, you could almost say that Tidal is like a musical equivalent of shell programs and pipes. equally useful and expressive both for a 10 year old and a CS professor.
>I think Pandoc or Shellcheck would win on this metric.
touché!
I agree that it's easier to build a composition in a coding environment that uses stochastic models, markov chains, noise, conditions, etc. But I don't think that actually makes for compelling music. It can render a rough facsimile of the structure, but the result is uncanny. The magic is still in the tiny choices and long arc of the composition. Leaving it to randomness is not sufficient.
Bach's style of composition _is_ broadly algorithmic. So much so that his style is taught in conservatories as the foundational rules of Western multi-voice writing, but it's still not a perfect machine. Taste and judgment have to be exercised at key moments in the composition on a micro level. You can intellectually understand florid counterpoint on a rules-based level, but you still have to listen to what's being written to decide if it's musically compelling or if it needs to be revised.
The proof is in the pudding. If coded music were that good, we would be able to list composers who work in this manner. We might even have charting music. But we don't, and the best work is still being done with instruments in hand, or written on a staff, or sequenced in a DAW.
I want this paradigm to work - and perhaps it can - but I've yet to hear work that lives up to the promise.
"Computer games don't affect kids. If Pac Man affected us as kids, we'd all be running around in darkened rooms, munching pills and listening to repetitive music." -- Marcus Brigstocke (probably?)
Also, related but not quite - YouTube's algorithm gave me this the other day - showing how to reconstruct the beat of Blue Monday by New Order:
My sister likes to work with [checks notes carefully to avoid the wrong words] old textiles. This of course constrains the kind of art she can make. That's the whole point.
I see live coding the same way as the harp, or a loop sampler: an instrument, one of an enormous variety of tools which you might find suits you or not. As performance I actually enjoy live coding far more than most ways to make music; I thought Amon Tobin's ISAM Live was amazing, but that's because of the visuals.
And yeah, your music tools/instruments constrain you. There are only so many music genres you can reasonably play or compose on an acoustic guitar. Or an oboe. Or modular synths. I suspect it's _possible_ to compose and play alt-rock or country music using live coding instead of a guitar - but why would you?
For the end result .. which we've yet to hear.
Before it landed, a country(?) banjo(?) cover of Eminem's rap classic "Lose Yourself" was a "but why would you".
And then Kasey Chambers owned it.
Johnny Cash's "Hurt" is an example where the performance was transformative. Reznor's "Hurt" is a song by a 20-something addict feeling sorry for himself. However Cash is a man who knows he actually doesn't have much time left†, and so almost identical lyrics ("Crown of shit" is changed) feel very different.
† Cash died about a year after his recording was published.
I'm only aware of it myself because of an unusual number of vocal coaches being overly enthusiastic about it. "Country" is an odd label for it given the transition midway.
The thrust of the comment was to remind the GP not to limit their expectations about what others might do. You yourself highlighted Cash's cover as something you deem of value; it's another example of an unexpected product.
Live coding may or may not progress in any particular direction or genre. I'd prefer not to make any predictions myself and leave open the possibility of being pleasantly surprised.
To kinda get away from that, or even just to experiment, I was interested in the possibility of writing music with code and either injecting randomness in places or at least varying the intervals between beats/etc. using some other functions (which would again just be layering in patterns, but in a more subtle way).
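For example, in TidalCycles terms (a rough sketch; I'm assuming the documented sometimesBy/nudge/rand names), that kind of subtle timing variation might look like:

    -- probabilistic micro-timing: ~40% of events get a random nudge
    -- of up to 20 ms, so the grid stops feeling perfectly rigid
    d1 $ sometimesBy 0.4 (# nudge (range 0 0.02 rand))
       $ s "bd hh sn hh"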
Years ago I went to a sci-fi convention for the first time, because I had moved to a new town and didn't know anyone, and I like sci-fi. I realized when I was there that despite me growing up reading Hugo and Nebula award winners, despite watching pretty much every sci-fi show on TV, despite being a full-time computer nerd, the folks who go to sci-fi conventions are a whole nother subculture again. They have their own heroes, their own in-jokes, their own jargon... and even their own form of music! It's made by people in the community for the community and it misses the point to judge it by "objective" standards from the outside, because it's not about trying to make "interesting music" or write the best song of all time. The music made in that context is not being made as an end in itself, or even as the focus of the event, it's just a mechanism to enable a quirky subculture to hang out and bond in a way that's fun for them. I see this kind of live coded music as fulfilling a similar role in a different subculture. Maybe it's not for you, but that's fine.
It's fine to have a preference for live musicianship, but the 'real music' argument has been leveled against every new musical technology (remember the furore around Dylan going electric?). It dismisses contemporary creativity based on a traditionalist bias that elevates one form of execution above all others. There's also a huge amount of skill in producing good electronic music. It's always hard to make good music no matter the means.
If you dial the dial high enough you can say that amplifiers aren't "real music" because you are no longer hearing the "real instruments", but "a machine that is distorting the sound". If that's your line, then only listening to classical music in a concert hall would count as "real music".
You could dial it up even higher. Using a musical instrument at all is not "real music" any more, because the human voice can have more nuance than any instrument. Then going to a church to listen to Gregorian chants would be the only "real music".
I personally think that Daft Punk rocks, and for a lot of artists I very much prefer listening to their studio recording rather than listening to them in a concert. (Surrounded by ... people. Ugh.)
It's like saying a novel isn't real speaking, but a speech is.
Like animating an image isn't real, but recording a video is.
If that's your preference, then that's alright. But it's a silly distinction to make.
If you see it as yet another instrument you have to master, then you can go pretty far. I'm finding myself exploring rhythms and sounds in ways I could never do so fast in a DAW, but at the same time I do find a lot of factors limiting, especially sequencing.
So far I haven't gotten beyond a good-sounding loop, hence the name "loopmaster", and maybe that's the limit, which is why I made a two-deck "dual" mode in the editor, so that it can be played as a DJ set where you don't really need that much progression.
That said, it's quite fun to play with it and experiment with sounds, and whenever you make something you enjoy, you can export a certain length and use it as a track in your mix.
My goal is certainly to be able to create full length tracks with nuances and variations as you say, just not entirely sure how to integrate this into the format right now.
Feedback[0] is appreciated!
[1] https://news.ycombinator.com/item?id=46052478 [2] Nice example: https://m.youtube.com/watch?v=GWXCCBsOMSg
Real-time sound synthesis was tough to live-code, or to run in real-time at all, prior to the faster personal computers of the early 90s. (The tracker scene obviously pre-dates this, but in that case the actual sound synthesis algorithms weren't live coded.) In fact, code-to-music dates back to 1951[3], or 1957[4], depending on your definitions. There is a large history of development by many computer musicians following on from Max Matthews' MUSIC-N. The Computer Music Tutorial[5] is a good source for the academic/research institutions/serious composers part of the picture.
[1] https://en.wikipedia.org/wiki/Hierarchical_Music_Specificati...
[2] https://ccrma.stanford.edu/software/clm/
[3] https://cis.unimelb.edu.au/about/history/csirac/music
[4] https://en.wikipedia.org/wiki/MUSIC-N
[5] https://mitpress.mit.edu/9780262044912/the-computer-music-tu...
I must say the narrated trance piece by Switch Angel blew my socks right off; to me it feels like this should be a genre in itself.
https://en.wikipedia.org/wiki/Musikalisches_W%C3%BCrfelspiel
The tools/frameworks have become more plentiful, approachable, and mature over the past 10-15 years, to the point where you can just go to strudel.cc and start coding music right from your browser.
I'll shamelessly plug my weirdo version in a Forth variant, also a house loop running in the browser: https://audiomasher.org/patch/WRZXQH
Well, maybe it's closer to trance than house. It's also considerably more esoteric and less commented! Win-win?
I like how music recognition flags it as the original Jarre piece.
I first did stuff like this when I was a teen using a 6502 machine and a synth card - using white noise to make tshhh snares, etc. All coded in 6502 assembly. The bible was Hal Chamberlin's Musical Applications of Microprocessors.
Then of course we had games abusing the SID etc. to make fantastic tunes, and then came very procedural music in size-coded PC and Amiga demos that under the hood were doing tiny synth work and sequencing very much like dittytoy etc.
Shadertoy even has procedural audio but it doesn't get used enough.
Fantastic to experience all of this!
Fun experiment to get you tinkerers started: skip to the bottom and play The Complete Loop - https://loopmaster.xyz/tutorials/how-to-synthesize-a-house-l...
Then, on line 21, with `pat('[~ e3a3c4]*4',(trig,velocity,pitches)->`.
Change *4 to *2 and back to *4 to change the interval at which the "Chords" play. If you do it real fast with your backspace + 2 or backspace + 4 keys, you can change the chords in realtime and kinda vibe with the beat a little bit.
Definitely recommend wearing headphones to hear the entire audio spectrum (aka bass).
Change line 12 from 8000 to 800.
I don't imagine making a full song out of this, but it would be a great instrument to have.
I'll put $50 down right now.
[0]: https://loopmaster.xyz/loop/75a00008-2788-44a5-8f82-ae854e87...
The janky way to do this would be to run it locally and set up a watch job to reload the audio file into a VST plugin every time the file changes.
The license at: https://github.com/juce-framework/JUCE/blob/master/LICENSE.m...
indicates you can just license any module under the AGPL and avoid the JUCE 8 license (which, to be fair, I'm not bothering to read).
And sure, you can license under the AGPL. It should be obvious that's undesirable.
I'm not going to test it, but couldn't you just load a JSON file with all the params?
Various instructions, etc.
I can't believe it's not code!
https://supercollider.github.io/
https://github.com/digego/extempore
And numerous others
Also, there is an AI DJ mode[0] where you set the mood/genre and the AI generates and plays music for you infinitely.
For now you can see how it's done here[0] on line 139. I pretty much use it on every other track I've made as well.
[0]: https://loopmaster.xyz/loop/6221a807-9658-4ea0-bfec-8925ccf8...
Not like a fringe unknown one, but one with over 20 years of history, now owned by Beatport.
It's still clipping terribly in my browser