This reminded me of an old set of tools called mtx2midi and midi2mtx; I used them to edit some MIDI files while making sure I wasn't introducing any unwanted changes. While the round-trip output was not binary identical, it still sounded the same.
Looks like the MTXT tool here does not quite work for this use case: in the round trip of a MIDI file I tried, one segment got folded over, so two separate segments play at the same time and the total duration came out shorter.
\relative c' {
\key d \major
fis4 fis g a
a g fis e
d d e fis
fis4. e8 e2
}
...but why is it so complicated? A novice interpretation of "music" is "a bunch of notes!" ... my amateur interpretation of "music" is "layers of notes".
You can either spam 100 notes in a row, or you effectively end up with:
melody = [ a, b, [c+d], e, ... ]
bassline = [ b, _, b, _, ... ]
music = melody + bassline
score = [
"a bunch of helper text",
+ melody,
+ bassline,
+ page_size, etc...
]
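The layering idea above can be made concrete; a minimal Python sketch, with the part contents and the `(start_beat, note)` event shape invented purely for illustration:

```python
# Each layer is a list of (start_beat, note) events; a chord is just two
# events sharing a start beat. All names and values here are illustrative.
melody = [(0, "a"), (1, "b"), (2, "c"), (2, "d"), (3, "e")]  # [c+d] = chord at beat 2
bassline = [(0, "b"), (2, "b")]                              # b, _, b, _

def layer(*parts):
    """Overlay independent parts into one stream sorted by start beat."""
    return sorted(event for part in parts for event in part)

music = layer(melody, bassline)
print(music)
# -> [(0, 'a'), (0, 'b'), (1, 'b'), (2, 'b'), (2, 'c'), (2, 'd'), (3, 'e')]
```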
...so Lilypond basically made "TeX4Music", and the format serves a few complementary purposes:
Engraving! Basically "typesetting" the music for human eyeballs (ie: `*.ly` => `*.pdf`).
Rendering! Basically "playing" the music for human ears (ie: `*.ly` => `*.mid`)
Librarification! Basically, if your music format has "variables" and "for-loops", you can end up with an end score that's something like: `song = [ intro + chorus + bridge + chorus + outro ]`, and then not have to chase down and modify all the places you use `chorus` when you modify it. (See this answer for more precision: https://music.stackexchange.com/a/130894 )
...now imagine doing all of the above for multiple instruments and parceling out `guitar.pdf`, `bass.pdf`, `drums.pdf` and `whole-song.pdf`
TL;DR: Music is haaard, and a lot closer to programming than you think!
Cool. My one concern with this is that it has no horizontally scannable note/chord mode. It’s super common for humans to read a sequence of notes left to right, or write it that way, but it’s also just more efficient in terms of scanning / reading.
Can I suggest a guarded mode that specifies how far apart each given note/chord is by the count, e.g.
#1.0:verse1
Am - C - G - E - F F F F
#
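For what it's worth, a throwaway sketch of how such a guarded block might be parsed; the `#spacing:name` header comes from the example above, and treating `-` as a one-slot hold is my own guess at the informal syntax:

```python
def parse_guarded(text):
    """Parse a hypothetical guarded block: a '#<beat spacing>:<name>' header,
    whitespace-separated chord slots, and a lone '#' terminator."""
    lines = text.strip().splitlines()
    spacing_str, name = lines[0].lstrip("#").split(":")
    spacing = float(spacing_str)
    events, beat = [], 0.0
    for line in lines[1:]:
        if line.strip() == "#":       # end of the guarded block
            break
        for token in line.split():
            if token != "-":          # '-' read as "hold previous chord" (a guess)
                events.append((beat, token))
            beat += spacing           # every slot advances time by the spacing
    return name, events

name, events = parse_guarded("#1.0:verse1\nAm - C - G - E - F F F F\n#")
print(name, events)
# -> verse1 [(0.0, 'Am'), (2.0, 'C'), (4.0, 'G'), (6.0, 'E'),
#            (8.0, 'F'), (9.0, 'F'), (10.0, 'F'), (11.0, 'F')]
```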
You could then repeat this or overlay a melody line the same way, etc. I think this would be easier for an LLM to parse and produce, and it would compile back to the original spec easily as well.
I considered it but decided against it in the first version, because specifying note durations is too tricky. It was more important to get the .mid -> MTXT conversion and live-performance recording working, where notes usually have irregular note lengths.
Representations like "C4 0.333 D4 0.333 E4 0.25" feel too hard to read.
I've been spending the last week casually looking at strudel.cc.
They have a notation that looks similar (Strudel is basically a JavaScript port of the Haskell-based TidalCycles).
I like this, but I'm curious why I would want to use this over strudel. Strudel blends the language with a js runtime and that's really powerful and fun.
My initial goal was to fix some mistakes in the MIDI files I recorded from my keyboard. I was also interested in making dynamic tempo and expression changes without dealing with complicated DAW GUIs.
Now I'm working on a synth that uses MTXT as its first-class recording format, and it's also pushing me to fine-tune a language model on it.
I like the idea overall. Looks like something that would be fun to combine with music programming languages (SuperCollider/Of etc).
Not so sure how human-friendly the fractional beats are. Is that something that people more into music than I am are comfortable with? I would have expected something like MIDI's "24 ticks per quarter note" instead, and a format like bar.beat.tick. Maybe that's just because it is what I am used to.
It should be fine, but fractions (or both fractions and decimals) would be preferable in order to express triplets (3 over 2, effectively a duration of 0.3333...)
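To make the triplet point concrete: at MIDI's classic 24 ticks per quarter note, a third of a beat is exactly 8 ticks, so a fractional representation round-trips where a rounded decimal drifts. A small sketch:

```python
from fractions import Fraction

PPQ = 24  # MIDI's classic resolution: ticks per quarter note

def beats_to_ticks(beats):
    return beats * PPQ

# A triplet eighth is 1/3 of a beat: exact as a fraction...
assert beats_to_ticks(Fraction(1, 3)) == 8
# ...while a rounded decimal accumulates error over one beat of triplets:
print(round(beats_to_ticks(0.333) * 3, 3))  # -> 23.976, not 24.0
```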
To me it seems like files could get hard to understand if events that happen simultaneously aren't horizontally lined up, like a text version of old-school tracker interfaces: https://youtu.be/eclMFa0mD1c
It makes no sense to design for LLMs. Do what makes sense for the reader and forget that LLMs exist at all.
What prompted this and why does it not?
It's not the 19th Century. You don't need to punch holes in cards to help the machine "think" any more.
How does this compare to standard ABC? More capable, presumably, but a comparison would be useful.
https://en.wikipedia.org/wiki/ABC_notation https://abcnotation.com/
Here is the MIDI I tried: https://files.catbox.moe/5q44q0.zip (buggy output starts at 42 seconds)
Thank you, I will have a look. I consider it important to have the round trip conversion working seamlessly.
I created an issue here: https://github.com/Daninet/mtxt/issues/1
Similar things:
* Perl MIDI::Score -- https://metacpan.org/pod/MIDI::Score
* Csound standard numeric scores -- https://csound.com/docs/manual/ScoreTop.html
* CsBeats (alternative score language for Csound) -- https://csound.com/docs/manual/CsBeats.html
Lilypond, too. Though it needs a full Scheme interpreter to evaluate macros (provided by both the system and the user), it can emit MIDI files.
Lilypond isn't well-known enough!
https://en.wikipedia.org/wiki/LilyPond#Integration_into_Medi...
https://www.mutopiaproject.org
https://lilypond.org/text-input.html
Hey, the idea is nice. It would be great to know what pushed you to start this format.
Also, any apps that use it would benefit from being added to the repo, assuring usability in addition to readability.
The library is MIT-licensed; I would be more than happy to see people use it in different synths.
I'm planning to add support for math formulas in beat numbers, something like: "15+/3+/4" = 15.58333
> "15+/3+/4"
Can you explain how to read that? 15 plus divided by 3 plus divided by 4?
It's a shorthand for 15 + (1/3) + (1/4), but I'm still not settled on the syntax.
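Under that reading, the shorthand is straightforward to parse exactly; a sketch using rationals (the syntax is hypothetical and, per the author, still unsettled):

```python
from fractions import Fraction

def parse_beat(s):
    """Read the proposed '15+/3+/4' shorthand as 15 + 1/3 + 1/4 (hypothetical syntax)."""
    whole, *parts = s.split("+")          # -> "15", "/3", "/4"
    total = Fraction(whole)
    for part in parts:
        total += Fraction(1, int(part.lstrip("/")))
    return total

print(parse_beat("15+/3+/4"))  # -> 187/12, i.e. 15.58333...
```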
I have been using:
https://www.vexflow.com/
Which has a text format, and typesets it for you nicely.
I think that, for completeness, it needs looping and conditional constructs.
Obligatory xkcd: https://xkcd.com/927/
pretty cool!
Probably stating the obvious here, but this would be a good way for an LLM to attempt to write or modify music.