Show HN: Bedrock – An 8-bit computing system for running programs anywhere
benbridle.com
Hey everyone, this is my latest project.
Bedrock is a lightweight program runtime: programs assemble down to a few kilobytes of bytecode that can run on any computer, console, or handheld. The runtime is tiny; it can be implemented from scratch in a few hours, and the I/O devices for accessing the keyboard, screen, networking, etc. can be added on as needed.
I designed Bedrock to make it easier to maintain programs as a solo developer. It's deeply inspired by Uxn and PICO-8, but it makes significant departures from Uxn to provide more capabilities to programs and to be easier to implement.
Let me know if you try it out or have any questions.
This is the latest in a very honourable tradition. My first encounter with it was with Martin Richards's BCPL system in 1972. The compiler generated a hypothetical ISA called OCODE, from which backends generated pretty good native code for Titan-2 and System/360, among others. One of those backends generated INTCODE, which was an extremely reduced ISA, for which an interpreter could be easily written (I wrote one in Fortran). Richards also provided the BCPL compiler and runtime library in INTCODE, so you could quickly have BCPL running interpretively. Then you could use this interpretive version to bootstrap a native-code backend implementation. Put this all together, and you now have a very easy compiler port.
Wirth's Pascal-P compiler of 1974(?) used the same idea, also in aid of a highly portable compiler. I have never been able to find out whether this was an independent invention, or whether Wirth was influenced by Richards's work.
Of course, the JVM and CLR are descendants of this, but they build a very complex structure on the basic idea. Writing an implementation of one of these virtual machines is not for the faint of heart.
So I think Bedrock can be very useful as a compiler target, if nothing else. However, I must agree with some of the other commenters that the 64KiB address space makes it very much a niche tool. Come up with a 32-bit variant that's not much more complicated, and I think you have a winner.
Wirth did it before BCPL in EULER, but virtual machines of various kinds predate EULER; Schorre's META-II output a textual assembly language for a fictitious processor, and Short Code predated that, but was intended to be written by hand. Simulators for one real computer on another were already commonplace by the 1960s, so that you could keep running your programs for an obsolete machine, and I don't remember anyone specifically saying so, but I would assume that designers of planned computers were writing such simulators for their designs by the end of the 1950s.
Regarding the 64kB limit: I notice that an implementation can provide the programmer an optional memory block of up to 64MB, IIUC:
https://benbridle.com/projects/bedrock/user-manual/memory-de...
Another early example is Stoy and Strachey's virtual machine for running their OS-6 operating system on the Module One minicomputer, starting around 1969 [1,2]. It was written in BCPL. Butler Lampson wrote that it influenced the early pre-Smalltalk OS for the Xerox Alto, also in BCPL [3].
1. https://academic.oup.com/comjnl/article-abstract/15/2/117/35...
2. https://academic.oup.com/comjnl/article-abstract/15/3/195/48...
3. https://www.microsoft.com/en-us/research/publication/an-open...
Wouldn't the 32-bit variant just be WebAssembly?
No, the WebAssembly spec is over 200 pages.
Not to disagree, but the WebAssembly spec intentionally contains two equivalent descriptions (prose vs. mathematical) and two different formats (binary vs. text), plus a completely independent redefinition of IEEE 754 (!). The true size would be more like around 100 pages, of which the instruction definitions would take about half, if my prediction from the table of contents is close enough. Maybe a highly desugarable "WebAssembly Zero" could be defined, and it would be a good fit once SpecTec can produce a working modular interpreter.
I think Bedrock's choice of not having floating point at all is a good example of the divergence in design goals.
That said, I don't see a completely independent redefinition of IEEE 754 in the 226-page https://webassembly.github.io/spec/core/_download/WebAssembl.... In §4.3.3 it does restrict IEEE 754, for example requiring a particular rounding mode, and it defines NaN propagation details that the IEEE 754 spec leaves open-ended IIRC, and it does define some things redundantly to IEEE 754, such as addition and square roots and so on. But it doesn't, for example, describe the binary representation of floating-point numbers at all, even though they can be stored in linear memory and in modules (it just refers you to the IEEE spec), nor permit decimal floating point. §4.3.3 only runs from p. 74 to p. 87, so it would be hard for it to independently define all of IEEE 754.
I'm not steeped in computer science, so please pardon me if the following are dumb questions.
> Programs written for Bedrock can run on any computer system, so long as a Bedrock emulator has been implemented for that system.
Isn't that true of any program? As long as the language that the program is written in is implemented on the system, any (valid?) program in that language will run on that system?
In theory, if a program is written in a high-level language and you have a correct implementation (interpreter, compiler, runtime) of that language on a new system, then the program should be able to run there.
In practice, this is not always so straightforward, especially as you move closer to machine-level details or consider compiled binaries.
Many compiled programs are built for a specific architecture (x86, ARM, etc.). They won't run on a different architecture unless you provide either a cross-compiler (to generate new native code for that architecture) or an emulator (which mimics the old architecture on the new one).
Yes, but it might do something different there, such as print an error message and exit.
Based on the OP, I think the idea is that it's very easy to port this emulator to a new system.
Not quite. Programs normally need to be compiled per system type, and sometimes per system, due to differences in the OSes' versions, APIs and hardware. The idea behind this type of emulator is that you need to compile a program only once, and the emulator takes care of those differences for you. The Bedrock program would always ‘see’ the same OS and hardware.
You've got it, yeah. It makes writing programs across different systems so much nicer, because you can just write your programs against a single tidy 'bedrock' abstraction of the file system, screen, networking, etc.
You're right, it's technically true of any program, but it wouldn't necessarily be practical. Implementing CPython on a Gameboy Advance, for example, would be tedious and likely not entirely possible.
The purpose of Bedrock was to make a system that is easy to implement on as many computer systems as possible. I've got plans to make a working system from a 64KB RAM chip and a $2 PIC12F1572 8-bit microcontroller (2K memory, 6mW power, 8 pins), just to see how far down I can take it.
moreover, once "it can run anywhere" is defined, you can't run it anywhere
One-size-fits-all never fits everyone :)
There's Open Firmware, which runs on a portable Forth interpreter. That was supposed to be a standard for board setup code. But proprietary systems won out. It was too open.
For people curious about the differences between this and Uxn (as I was): https://benbridle.com/articles/bedrock-differences-from-uxn....
> each pixel having both a foreground and a background colour
how does that work?
I thought this might be like the ZX spectrum, but that covers each 8x8 block with a foreground and background colour; "sprites" are then bitmaps over that. This does say two colours per pixel, which is confusing.
Here's the docs for the screen device: https://benbridle.com/projects/bedrock/user-manual/screen-de...
You set up a palette of 16 colours, then write a value 0-15 to the coordinates where you want to set a pixel. You can also choose whether to draw to the foreground or the background layer; the two layers overlap, and colour 0 on the foreground layer is transparent.
I guess it's no weirder than some hardware designs from the '80s...
I made a fantasy console with 3x3 pixel cells defined in 16 bits, able to show any two of 16 colours in each cell.
The last pixel is always colour A. You can still independently change all pixels in the cell, because changing the last pixel on its own can be done by swapping A and B and inverting the second byte. In hindsight I don't think there was much advantage to the last pixel being the odd one out. The code for setting individual pixels in a cell was pretty custom anyway. If I were to do it again, I'd place the colour A pixel in the center.
And I do find myself working on a memory constrained device again, so perhaps I'll be giving it a go.
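A rough sketch of that kind of packing, in C (the exact bit layout and helper names here are my own guesses, not necessarily how the original console stored it):

```c
#include <stdint.h>

/* Assumed packing of a 3x3 cell into 16 bits:
 * bits 15..12 : colour A (0-15)
 * bits 11..8  : colour B (0-15)
 * bits  7..0  : one selector bit per pixel for pixels 0-7 (0 = A, 1 = B)
 * Pixel 8 (the last one) is always colour A.
 */
static uint8_t cell_pixel_colour(uint16_t cell, int pixel /* 0..8 */) {
    uint8_t a = (cell >> 12) & 0xF;
    uint8_t b = (cell >> 8) & 0xF;
    if (pixel == 8) return a;               /* the last pixel is always colour A */
    return ((cell >> pixel) & 1) ? b : a;
}

/* Changing only the last pixel: swap A and B and invert the selector byte.
 * Pixels 0-7 keep their colours, while pixel 8 flips from the old A to the old B. */
static uint16_t cell_swap_ab(uint16_t cell) {
    uint16_t a = (cell >> 12) & 0xF, b = (cell >> 8) & 0xF;
    return (uint16_t)((b << 12) | (a << 8) | ((~cell) & 0xFF));
}
```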
Each screen pixel has two colours because there are two screen layers, a foreground layer and a background layer. Anything you draw to the foreground layer will be drawn over anything on the background layer, so you can use the foreground layer for fast-moving elements like mouse cursors and game characters without having to redraw the entire background every frame.
So each pixel has a colour on the foreground layer and a colour on the background layer, and will be drawn as one or the other. Normally the foreground colour of the pixel will be the colour used, but if the foreground colour is palette colour 0 (treated as transparent), the background colour will be used instead.
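In other words, the per-pixel rule is just this (a minimal sketch; the constant and function names are mine, not from the spec):

```c
#include <stdint.h>

/* Palette colour 0 on the foreground layer is treated as transparent. */
#define FG_TRANSPARENT 0

/* Each layer stores one palette index (0-15) per pixel. */
static uint8_t visible_colour(uint8_t fg, uint8_t bg) {
    return (fg != FG_TRANSPARENT) ? fg : bg;  /* foreground wins unless transparent */
}
```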
I made a bedrock onion of death: https://paste.pictures/FgqbwUTCY8.png
Couldn't have done it without you
Amazing work, I love it
There are a few examples here: https://benbridle.com/projects/bedrock.html
But where is the source code?
The source code for the microwave clock program is available on the 'Example: Microwave clock' subpage [0]. I hadn't put up code for any of the other programs yet, just because they currently use a lot of library code and idioms that I thought could be confusing to people. I'm intending to make them tidier and release them as proper exemplars with commentary sometime. I'll also package up and release my library code at some point; it'd be helpful for people to be able to grab and use all kinds of pre-made functions, and there's a whole user interface framework in there too.
In the meantime though, I uploaded the source code for each of the snake [1], keyboard [2], and system information [3] programs for you or anyone else here to have a look at. Each one is a single source code file with library macros and functions baked in, so you can run `br asm snake-full.brc | br -z` to assemble and run them.
[0] https://benbridle.com/projects/bedrock/example-microwave-clo...
[1] https://benbridle.com/share/snake-full.brc
[2] https://benbridle.com/share/keyboard-full.brc
[3] https://benbridle.com/share/sysinfo-full.brc
If you want to see what the code looks like: https://benbridle.com/projects/bedrock/examples.html
The source for the examples and the assembler/emulator is also there; follow the links.
I mean the source code of sysinfo, clock, cobalt etc. Yes, I've seen that page, but I don't find the code there...
They are directly linked there; I guess the hyperlink styling is not too obvious: https://benbridle.com/share/sysinfo.br
I think those are the binaries. Opening those in nano/vim shows unreadable characters.
I mean, if you take a look at this page: https://benbridle.com/projects/bedrock/bedrock-pc.html
"To assemble a source code file program.brc and save the result as the program program.br, run the command..."
Where are the brc files?
Ah, you're right, I didn't see that.
I have thought of doing a similar thing from time to time.
I had thought it could have a use in producing tiny visual apps. I am still somewhat bitter from when I found a volume control that used 3MB on a machine with 256MB total.
It seems you can change the shape of the display, which I like, although I don't really understand the documentation text:
>Writing to this port group will perform an atomic write, requesting on commit that the width of the screen be locked and changed to the value written.
Locked and changed?
You also seem to be using double to refer to two bytes, is that correct? If so, I would recommend something that won't confuse people so much. Word is a common nomenclature for a 16 bit value, although it does share the space with the concept of machine words.
And of course, to use it for a lot of things it would have to be able to talk to the outside world. A simplified version of what Deno does for allowing such capabilities could allow that. In the terms of Bedrock, it would be easiest to have an individual device for each permission that you wanted to supply and have the host environment optionally provide them. I'd put the remote bytestream into its own device to enable it that way.
> Locked and changed?
That could do with some better wording. Normally the user can freely drag-resize the window, but after the program sets the width or height then the user will be unable to resize that axis. This is for, say, a list program where the screen contents would have a fixed width but a dynamic height, so you'd want to keep the height resizable (unlocked).
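For illustration, here's roughly how an emulator might treat a committed write to the width port (a sketch with invented names, not the actual implementation):

```c
#include <stdbool.h>
#include <stdint.h>

struct screen_state {
    uint16_t width, height;
    bool width_locked, height_locked;  /* whether the user may still drag-resize that axis */
};

/* Called when the program commits a write to the screen-width port group. */
static void on_width_write(struct screen_state *s, uint16_t value) {
    s->width = value;        /* change the width to the value written... */
    s->width_locked = true;  /* ...and lock that axis against user resizing */
}
```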
> You also seem to be using double to refer to two bytes
Double does mean a 16-bit value, yeah, there's a list of definitions on the main page of the user manual and specification. Short tends to be the expected name for a 16-bit value (from C et al.), but it doesn't make much sense for a short to be the wider of two values. I briefly considered word, but the definition is too broad, with a byte also being a type of word. Double felt like the most intuitive name, because it's double the width of a byte. There weren't really any other decent candidates.
> a individual device for each permission that you wanted to supply and have the host environment optionally provide them
That's more or less the plan, only with a bit more granularity depending on the implementation, so that you can, say, allow reading files but forbid writing or deleting.
It's a pity there is not some similar concept using a more high-level language (instead of assembly).
But I can see why, as every interpreted language can be a "fantasy console" in itself.
There's PICO-8 in this category if you haven't already heard of it, it uses Lua as the language for writing programs. It was another huge inspiration of mine while working on Bedrock.
https://www.lexaloffle.com/pico-8.php
See also the wonderful LOAD81 from antirez:
https://github.com/antirez/load81
I've fantasised about turning LOAD81 into a much more full-featured development/execution environment for years, and have done a fair bit of work on extending it to support other things such as joystick devices, an internal sound synthesizer based on sfxr, and so on .. one of these days I'll get back to it ..
I like things like this.
One of the big differences from Uxn is the introduction of undefined behavior; by design, you can break it, unlike Stanislav's legos. So presumably Bedrock programs, like C programs, will do different things on different implementations of the system. That's not fatal to portability, obviously, just extra write-once-debug-everywhere work.
The undefined behaviour is limited to only a couple of very edge-case situations that would cause issues for the program anyway, like overflowing the stacks. My thoughts were that a program would have to be broken in order to have triggered one of these situations in the first place.
All programs are broken. No program over 100 lines is bug-free, probably. Maybe seL4, but even seL4 had to be patched for Spectre.
In particular, you can be sure that if people build implementations that don't detect those situations, people who test their code on those implementations will ship code that triggers them. It's pretty easy to underflow a stack at startup if you don't use whatever you popped off of it for anything important, unless the thing you happen to be overwriting is important and/or gets written to later by another means. Limited stack overflow is less common but does occasionally happen.
What typically happens in situations like this is that new implementations have to copy the particular handling of supposedly undefined behavior that the most popular implementation of the platform happened to have. But because it isn't documented, or maybe even intentional, they have to reverse engineer it. It's often much more complex than anything that anyone would have come up with on purpose. This all strikes me as a massive waste of time, but maybe it's an acceptable tradeoff for squeezing that last 30% of performance out of your hardware, a consideration that isn't relevant here.
In the days before Valgrind we would often find new array bounds errors when we ported a C or C++ program to a new platform, and Tony Hoare tells us this sort of thing has been ubiquitous since the 60s. It's hard to avoid. Endianness used to be a pitfall, too, and Valgrind can't detect that, but then all the big-endian architectures died out. The C standards documents and good books were always clear on how to avoid the problems, but often people didn't understand, so they just did whatever seemed to work.
If you want write-once-run-anywhere instead of write-once-debug-everywhere you have to get rid of undefined behavior. That's why Uxn doesn't have any.
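Detecting those cases is also cheap; a minimal sketch of bounds-checked stack operations (assuming 256-byte stacks and a halt-on-error policy, both of which are my own choices here, not the spec's):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define STACK_SIZE 256  /* assumed stack depth, not taken from the spec */

struct stack {
    uint8_t data[STACK_SIZE];
    uint16_t top;  /* number of bytes currently on the stack */
};

static void push(struct stack *s, uint8_t v) {
    if (s->top >= STACK_SIZE) { fputs("stack overflow\n", stderr); exit(1); }
    s->data[s->top++] = v;
}

static uint8_t pop(struct stack *s) {
    if (s->top == 0) { fputs("stack underflow\n", stderr); exit(1); }
    return s->data[--s->top];
}
```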
This is fantastic! As someone who's used PICO-8 in after-school STEM enrichment classes (and has evaluated uxn), one of the frustrations that my students have always had is the lack of easy I/O and state persistence -- for saving/loading game progress and settings, of course. The clipboard and registry devices seem like a good fit.
I hope you stick with this!
Thank you, that means a lot!
I've got plans for tooling in the future that will make Bedrock more accessible to people who are learning to program, like a high-level language that runs on Bedrock and a graphical debugger for visually clicking around and changing the internal state as your program runs.
> I designed Bedrock to make it easier to maintain programs as a solo developer.
Can you say more? I really love this idea but can’t think of any practical use case with 65k of memory. What programs are you now more easily maintaining with Bedrock? To what end?
Check out Uxn for some "practical" use cases: https://github.com/hundredrabbits/awesome-uxn
I'm currently selling a pixel-art drawing program called Cobalt, which is built on Bedrock (you can see a demo of it running at the bottom of the project page). It was initially only available for Windows and Linux, but I wanted to make it available for the Nintendo DS as well, so I wrote a new emulator and now it and all of my other programs work on the DS. It was far easier to write the emulator than it would have been to figure out how to port Cobalt to the DS directly, and now I don't have the issue of having to maintain two versions of the same software.
It's true that 64KB is pretty small in modern terms, but it feels massive when you're writing programs for Bedrock, and the interfaces exposed by Bedrock for accessing files and drawing to the screen and the likes make for very compact programs.
It’s true you can’t build giant video editors or even photo editors. But if you reset your expectations and think 8-bit retro, you’ll be reminded that very few things didn’t exist in some form in the 80s… just at a smaller scale. Spreadsheet? Check. Paint programs? Check. Music composition? Check.
I have not delved too deep into the code, but are there any functional differences it has over Java other than the size?
Presumably Java would also be pretty tiny if we wrote programs in bytecode instead of higher-level Java.
The Java bytecode instruction set actually has a quite complicated specification: https://docs.oracle.com/javase/specs/jvms/se8/html/
Which means implementations also have to be correspondingly complicated. You have to handle quite a few different primitive data types each with their own opcodes, class hierarchies, method resolution (including overloading), a "constant pool" per class, garbage collection, exception handling, ...
I would expect a minimal JVM that can actually run real code generated by a Java compiler to require at least 10x as much code as a minimal Bedrock VM, and probably closer to 100x.
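To make the size comparison concrete, the core of a byte-oriented stack machine is little more than a fetch/dispatch loop; a toy sketch (these opcodes are invented for illustration and are not Bedrock's actual instruction set):

```c
#include <stdint.h>

enum { OP_HALT, OP_PUSH, OP_ADD };  /* invented opcodes, purely for illustration */

/* Run a tiny byte-coded program over a 64 KiB memory with an 8-bit data stack. */
static void run(uint8_t *mem) {
    uint8_t stack[256], sp = 0;
    uint16_t pc = 0;
    for (;;) {
        switch (mem[pc++]) {
        case OP_HALT: return;
        case OP_PUSH: stack[sp++] = mem[pc++]; break;     /* push a literal byte */
        case OP_ADD: { uint8_t b = stack[--sp];           /* pop two bytes, push their sum */
                       uint8_t a = stack[--sp];
                       stack[sp++] = (uint8_t)(a + b); break; }
        }
    }
}
```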
Fun stuff Ben! Nicely done. I'm building a simple terminal that talks over a SPI port or IIC port and this looks like it would be a fun demo to run on it.
Thank you! That sounds fascinating, I'd love to hear how you get on with it if you do.
Love this! Takes me back to the literal 8-bit computers of the 80s when it was much easier to learn to program with, for example, BASIC built into the operating system.
Love it, I think it's very cool! I am not sold on its "everlasting" promise yet, but as an addition to the family of "fantasy" platforms it seems very solid.
Thank you! It's early days yet, we'll see how well it holds up in a few decades.
My immediate reaction was, "oh, like Uxn!" but then of course I read it was originally a fork of Uxn. I love these 'toy' OSes, the more the better.
The demos are surprisingly fun!
Why 8 bit?
That puzzled me too, since it's a fork of Uxn, a 16-bit architecture.
I had a hard time figuring out whether Bedrock counted as an 8-bit or 16-bit computer, because it doesn't line up so cleanly with the usual criteria as does a physical CPU. I decided that the 8-bit label fitted best because it has an 8-bit data path, an 8-bit instruction format, and the stacks hold only bytes. It also has a 16-bit address space and can perform 16-bit arithmetic, but so can the well-known 8-bit Z80 processor.
The usual meaning of "data path" https://en.wikipedia.org/wiki/Datapath is the path from the register file (and/or memory access unit) to the ALU and back to the register file (and/or memory access unit). So we could say that both the 8088 and the 68000 had a 16-bit data path, because they used 16-bit buses for those transfers and a 16-bit ALU, even though the 68000 had 32-bit registers and the 8088 had an 8-bit data bus to connect it to RAM. The 68020 implemented the same instructions and registers as the 68000 (and additional ones) but used a 32-bit data path, so they were twice as fast.
In what sense does a virtual machine instruction set architecture with no hardware implementation have a "data path" separate from its arithmetic size? You seem to be using the term in a nonstandard way, which is fine, but I cannot guess what it is.
By your other criteria, the (uncontroversially "16-bit") 8088 would be an 8-bit computer, except that it had a 20-bit address space.
By data path, I mean the width of the values that are read from the stacks, program memory, and device bus. Pairs of 8-bit values can be treated as 16-bit values in order to perform wider arithmetic, but all data ultimately moves around the system as 8-bit values.
Whether 16-bit data moves around the system as 8-bit values or not sounds like a question about the implementation, not the architecture (the spec).
For example, the spec says, "Reading a double from program memory will read the high byte of the double from the given address and the low byte from the following address," but I'd think that generally you'd want the implementation to work by reading the whole 16-bit word at once and then byte-swapping it if necessary, because that would usually be faster, and there's no way for the program to tell if it's doing that, unless reading from the first byte has a side effect that changes the contents of the second byte or otherwise depends on whether you were reading the second byte at the same time.
(Of course if you have a "double" that crosses the boundaries of your memory subsystem's word, you have to fall back to two successive word reads, but that happens transparently on amd64 CPUs.)
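Concretely, the byte-by-byte read and the read-then-swap approach are observably the same; a sketch (the helper names are mine):

```c
#include <stdint.h>
#include <string.h>

/* Read a 16-bit "double" from program memory, high byte first, as the spec describes. */
static uint16_t read_double(const uint8_t *mem, uint16_t addr) {
    return (uint16_t)((mem[addr] << 8) | mem[(uint16_t)(addr + 1)]);
}

/* Equivalent on a little-endian host: read the whole word, then byte-swap.
 * The program can't tell the difference, which is the point made above.
 * (Assumes addr + 1 doesn't run past the end of the 64 KiB memory buffer.) */
static uint16_t read_double_swapped(const uint8_t *mem, uint16_t addr) {
    uint16_t word;
    memcpy(&word, mem + addr, sizeof word);
    return (uint16_t)((word << 8) | (word >> 8));
}
```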
Wow, awesome. Imagine this in a browser console, for ... management of the code. You are a genius.
Someone should make this for highly parallel architectures that runs over GPUs.
This would be fascinating to see, I have no idea how you'd even start.
There was a video I saw a couple of years back that was showcasing a cellular programming model, where each cell in a two dimensional grid performed an operation on values received from its neighbours. Values would move into one side of a cell and out the other every tick, something like Orca (by 100 rabbits), so the whole thing could be parallelised on the cell level very easily.
You need a really simple set of assembly instructions for a VM that is based on GPU architecture.
Then build all the old-school I/O APIs and a rendering engine around it, similar to PICO-8 or Bedrock.
The UI would be a bit similar to Shadertoy, I guess.
Lacks power. Existing solutions are SPIR-V and PTX.
I know. I mean something that's simple and has this old school flavor. Like say if GPUs were the standard in the 80s.