Catalyst uses a synchronized linear representation of object memory to coordinate JavaScript and WebAssembly
It’s straightforward to apply WebAssembly (WASM) to isolated JavaScript hotspots that have no side effects. This enables us to speed up sections of SqueakJS primitives, like BitBLT, that behave as pure functions. The SqueakJS interpreter code, on the other hand, is rife with side effects. The act of interpretation modifies the deep graph of connected structures that is the Smalltalk object memory.
In my first WASM-enabled SqueakJS virtual machine, I wrote a WASM version of the JS function that interprets a single Smalltalk instruction, interpretOneWASM. To perform the necessary interactions with the object memory, that WASM function called the original JS support functions. That approach yielded a working virtual machine, but with a high performance penalty. Since JS and WASM can use shared memory, I’m curious to see if we can eliminate the JS function calls in interpretOneWASM by using a synchronized linear representation of object memory, and if this would speed up the virtual machine.
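As a rough sketch of how that first hybrid worked (the import names, the vm methods, and wasmBytes below are illustrative assumptions, not the actual SqueakJS code), the WASM module declares imports for the object-memory accessors, and JS supplies them at instantiation time:

```js
// Hypothetical sketch: the WASM interpreter imports the JS support functions
// it needs; every call back into JS crosses the JS/WASM boundary.
// "vm", "wasmBytes", and all import names here are illustrative only.
const importObject = {
  squeak: {
    fetchPointer: (objectId, index) => vm.fetchPointer(objectId, index),
    storePointer: (objectId, index, value) => vm.storePointer(objectId, index, value),
    push: (objectId) => vm.push(objectId),
    pop: () => vm.pop()
  }
};

const { instance } = await WebAssembly.instantiate(wasmBytes, importObject);
instance.exports.interpretOneWASM(); // interpret one Smalltalk instruction
```

Those boundary crossings on every object access are the main source of the performance penalty described above.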
I’ve modified SqueakJS so that it maintains a shared WASM memory buffer, into which it writes and updates serializations of the objects in object memory. Now, interpretOneWASM can read and write the object information it needs without having to make most of the previous JS calls. At the moment, some primitives (like garbage collection) are still JS calls. Since they were relatively long-running operations already, I don’t expect performance to suffer much.
The format of the shared linearized object memory is nearly identical to that of an object memory snapshot, with two differences. The first is that only one object header word is written in all cases; WASM doesn’t need the additional object header information supplied in a snapshot for re-creating objects, since it doesn’t do that. The second difference is that young objects, with negative addresses, are also represented, in a special memory segment above the tenured objects. This scheme allows WASM to access all objects by address.
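A minimal sketch, from the JS side, of the shared buffer and that addressing scheme (the sizes, the base offset of the young segment, and the helper name are assumptions, not Catalyst’s actual layout):

```js
// Hypothetical sketch of the shared buffer and address translation.
// Sizes and YOUNG_BASE are illustrative, not Catalyst's real layout.
const memory = new WebAssembly.Memory({ initial: 4096, maximum: 16384, shared: true });
const words = new Uint32Array(memory.buffer);

// Young objects have negative addresses; they live in a segment above the tenured objects.
const YOUNG_BASE = 64 * 1024 * 1024;

function wordIndexForAddress(address) {
  const byteOffset = address >= 0 ? address : YOUNG_BASE - address;
  return byteOffset >>> 2; // word-addressed view of the byte buffer
}

// JS writes each object's single header word and its slots at its address;
// WASM reads and writes the same words directly.
```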
When WASM modifies the object memory, it records the addresses of the modified objects in a special object. The JS side uses this information to update the canonical JS object memory representation. If at some point we implement all of the JS functions in WASM, we’ll be able to make the linear representation the only one. In the meantime, we have a framework for using both JS and WASM to their strengths, and for transitioning from JS to WASM gradually.
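For example, the JS side might drain that list after running some WASM; a hypothetical sketch (objectAtAddress and readSlotsFrom are assumed helpers, not actual SqueakJS methods):

```js
// Hypothetical sketch: drain the list of object addresses that WASM marked as
// modified, and refresh the canonical JS objects from the linear representation.
function syncModifiedObjects(vm, sharedWords, modifiedAddresses) {
  for (const address of modifiedAddresses) {
    const jsObject = vm.objectAtAddress(address);   // assumed: address -> canonical JS object
    jsObject.readSlotsFrom(sharedWords, address);   // assumed: reload slots from the shared buffer
  }
}
```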
The user can choose dynamically whether the JS or the WASM version of the interpretOne function is in use. At the moment, we let JS read the object memory snapshot and get the system started, then switch to WASM after the initial context is loaded and running. JS also writes the entire linearized object memory into the shared WASM buffer. The user can switch interpretOne from JS to WASM by evaluating a Smalltalk expression that invokes a JS function of the interpreter (“JS display vm useWASM”).
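The switch itself can be as simple as rebinding one function on the interpreter. A hypothetical sketch (the method names and the wasmExports binding are assumptions about the actual code):

```js
// Hypothetical sketch of switching the dispatch function at runtime.
// The wasmExports binding and the method names are assumptions.
Squeak.Interpreter.prototype.useWASM = function() {
  this.interpretOne = () => this.wasmExports.interpretOneWASM();
};
Squeak.Interpreter.prototype.useJS = function() {
  this.interpretOne = Squeak.Interpreter.prototype.interpretOneJS; // assumed original JS dispatch
};
```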
It’ll be interesting to see if this scheme yields an overall increase in system speed. Hopefully, with this gradual transition, we’ll be able to tell if a full conversion of SqueakJS from JS to WASM is worthwhile, before investing a lot of effort into rewriting the JS functions.
I’ve replaced the inner instruction-dispatch loop of a running SqueakJS virtual machine with a handwritten WebAssembly (WASM) function, and run several thousand instructions of the Caffeine object memory. The WASM module doesn’t yet have its own memory. It’s using the same JavaScript objects that the old dispatch loop did, and the supporting JS state and functions (like Squeak.Interpreter class). I wrote a simple object proxy scheme, whereby WASM can use unique integer identifiers to refer to the Smalltalk objects.
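A minimal sketch of such a proxy table (illustrative only; as noted below, the real scheme keys proxies on stabilized OOP values rather than plain JS object identity):

```js
// Hypothetical sketch: map stable integer identifiers to Smalltalk objects,
// so WASM can refer to them without holding JS references.
const idsToObjects = new Map();
const objectsToIds = new Map();
let nextId = 1;

function idForObject(obj) {
  let id = objectsToIds.get(obj);
  if (id === undefined) {
    id = nextId++;
    objectsToIds.set(obj, id);
    idsToObjects.set(id, obj);
  }
  return id;
}

function objectForId(id) { return idsToObjects.get(id); }
```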
Because of this indirection, the current performance is very slow. The creation of an object proxy is based on stable object pointer (OOP) values; young objects require full garbage collection to stabilize their OOPs. There is also significant overhead in calling JavaScript functions from WASM. At this stage, the performance penalties are worthwhile. We can verify that the hybrid JS/WASM interpreter is working, without having to write a full WASM implementation first.
a hybrid approach
My original approach was to recapitulate the Slang experience, by using Epigram to decompile the Smalltalk methods of a virtual machine to WASM. I realized, though, that it’s better to take advantage of the livecoding capacity of the SqueakJS VM. I can replace individual functions of the SqueakJS VM, maintaining a running system all the while. I can also switch those functions back and forth while the system is running, perhaps many millions of instructions into a Caffeine session. This will be invaluable for debugging.
The next goal is to duplicate the object memory in a WASM memory, and operate on it directly, rather than using the object proxy system. I’ll start by implementing the garbage collector, and testing that it produces correct results with an actual object memory, by comparing its behavior to that of the SqueakJS functions.
Minimal object memories will be useful in this process, because garbage collection is faster, and there is less work to do when resuming a snapshot.
performance improvement expected
From my experiment with decompiling a Smalltalk method for the Fibonacci algorithm into WASM, I saw that WASM improves the performance of send-heavy Smalltalk code by about 250 times. I was able to achieve significant speedups from the targeted use of WASM for the inner functions of BitBLT. From surveying performance comparisons between JS and WASM, I’m expecting a significant improvement for the interpreter, too.
With the JavaScript bridge in SqueakJS, we can utilize built-in web browser behavior and other JS frameworks from Smalltalk, just as any other JS code would. I’ve used it to build Caffeine apps using A-Frame and croquet.io. Another useful framework we can integrate is WebAssembly (WASM), a stack-oriented instruction set for writing high-performance code. I have begun to identify performance-critical code in the SqueakJS virtual machine, and replace it with WASM code. The initial results are encouraging and useful.
identifying hotspots
I’m running SqueakJS in the Chrome web browser. To identify virtual machine code that consumes the most time, I profile use cases that seem slow, using Chrome’s built-in devtools. The first use case I chose was drag-selecting a large quantity of text in a workspace.
a performance capture of drag-selecting text, indicating that rgbMulwith() is the most time-consuming inner BitBLT function
From reviewing a performance capture of this use case, we can see that rgbMulwith() is the most time-consuming inner function from the BitBLT plugin. While it doesn’t modify variables in outer scopes, it does read a plugin-global variable. Most of the work it does, however, is done by partitionedMulwithnBitsnPartitions(), a pure function returning the result of a mathematical operation on the inputs, without any other system state interaction. That makes it well-suited to WASM implementation.
While there are APIs for coordinating side effects with JavaScript, they are relatively slow. It is therefore harder to rationalize WASM implementations of individual higher-level Squeak virtual machine primitives, since they interact extensively with complex JavaScript objects like the Squeak interpreter. Eventually, we’ll represent the entire Squeak object memory inside a WASM memory, and implement the entire Squeak virtual machine with WASM functions. WASM garbage collection will assist the Squeak garbage collector, much as the JavaScript garbage collector assists the SqueakJS VM now. JavaScript interaction will be limited to the WASM implementation of the SqueakJS JS bridge.
translating from JS to WASM
Here’s the existing JS implementation of partitionedMulwithnBitsnPartitions():
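That listing appears as an image in the original post and isn’t reproduced here. As a rough, loop-based sketch of what a partitioned multiply does (the real SqueakJS code is unrolled per partition, uses the plugin’s maskTable, and differs in detail):

```js
// Simplified sketch only: multiply corresponding nBits-wide fields of two words.
// The real BitBLT plugin code is unrolled, uses maskTable, and differs in detail.
// Assumes nBits <= 8, as in the common pixel formats.
function partitionedMulSketch(word1, word2, nBits, nParts) {
  const mask = (1 << nBits) - 1;
  let result = 0;
  for (let i = 0; i < nParts; i++) {
    const shift = i * nBits;
    const v1 = (word1 >>> shift) & mask;
    const v2 = (word2 >>> shift) & mask;
    // scaled multiply: full intensity times full intensity stays full intensity
    const product = (((v1 + 1) * (v2 + 1) - 1) >> nBits) & mask;
    result |= product << shift;
  }
  return result >>> 0;
}
```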
With its stack-based instructions, WASM code is reminiscent of Smalltalk bytecode. Here’s some of the equivalent WASM implementation of the above function, written by hand. The WASM memory holds the maskTable from BitBltPlugin.js.
a section of the equivalent WASM
Note that WASM’s shift-left and shift-right instructions are fine as is; we don’t need to make wrapper functions for them as we did in JS.
After I modified the BitBLT plugin so that rgbMulwith() uses partitionedMUL(), drag-selecting text in the Caffeine user interface was much more responsive, and a different inner BitBLT plugin function was the most time-consuming. Even though rgbMulwith() used a small percentage of total time in the first performance capture, every saved millisecond significantly improves perceived animation smoothness. By using additional use cases (scrolling long lists, and repainting by alternating the stacking order of two windows), I identified other inner BitBLT plugin functions to optimize. The Caffeine user interface is now much more responsive than it was. This is especially useful with Worldly, the spatial IDE I’m building with Caffeine and A-Frame, where every bit of performance matters.
an alternative to writing WASM by hand
For the JS code in the Squeak virtual machine, it makes sense to write replacement WASM code by hand. Since WASM code is so similar to Smalltalk bytecode, for Smalltalk compiled methods it makes more sense to use automated decompilation to WASM. I have done this for a small proof-of-concept, using a Smalltalk compiled method for the Fibonacci algorithm.
Using the Smalltalk compiler and decompiler I wrote with my Epigram parsing framework, I was able to decompile the Smalltalk compiled method for the Fibonacci algorithm into WASM text. I then used an in-browser version of the WebAssembly Binary Toolkit from Caffeine to generate binary WASM, compile it in the current page as a function, and call the function. Comparing the execution time of finding the 29th Fibonacci number in both Smalltalk and WASM showed that WASM had 250 times the execution speed of the normal SqueakJS bytecode-to-JS translator.
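In browser JavaScript, that assemble-compile-call step looks roughly like this (using the wabt.js build of the toolkit; the watText variable and the fib export name are assumptions):

```js
// Sketch: assemble WASM text to binary with wabt.js, instantiate it, and call it.
const wabt = await WabtModule();                  // entry point of the wabt.js build
const module = wabt.parseWat("fib.wat", watText); // watText: the decompiled WASM text
const { buffer } = module.toBinary({});
const { instance } = await WebAssembly.instantiate(buffer);

console.log(instance.exports.fib(29));            // compare against the Smalltalk timing
```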
I plan to write, in Smalltalk, a version of the Squeak virtual machine simulator that stores all objects in a WASM memory. Once it can evaluate (3 + 4), I’ll translate all its Smalltalk compiled methods to WASM, and see how much faster it runs. The next step will be to get a JS bridge working, and implement interfaces to the web browser DOM for graphics and user input event handling. Ultimately, a WASM implementation of the Squeak virtual machine may be preferable to the SqueakJS virtual machine.
I’ve written a Caffeine class which, in real time, takes detected pitches from a melody and chords, and sends re-voiced versions of the chords to a harmonizer, which renders them using shifted copies of the melody. It’s an example of an aggregate audio plugin, which builds a new feature from other plugins running in Ableton Live.
re-creating a classic
Way way back in 1991, before the Auto-Tune algorithm was popularized in 1998, a Canadian company called IVL Technologies developed a hardware harmonizer, the Vocalist VHM5. It generated five-part vocal harmonies, live from sung melodies and chords played via MIDI. It had a simple but effective model of vocal formants, which enabled it to shift the pitch of a sung note to natural-sounding new pitches, including correcting the pitch of the sung note. It also had very fast pitch detection.
My favorite feature, though, was how it combined those features when voicing chords. In what was called “vocoder mode”, it would adjust the pitches of incoming MIDI chords to be as close as possible to the current pitch of a sung melody, producing a closed voicing. If the melody moved more than half an octave away from a chord voice, the rendered chord voice would adjust by some number of octaves up or down, so as to be within half an octave of the melody. With kinetic melodies and dense chords, this becomes a simple but compelling voice-leading technique. It’s even more compelling when the voices are spatialized in a stereo or 3D audio field, with reverb, reflections, and other post-processing.
It’s also computationally inexpensive. The IVL pitch-detection and shifting algorithms were straightforward for off-the-shelf digital signal processing chips to perform, and the Auto-Tune algorithm is orders of magnitude cheaper. One of the audio plugins I use in the Ableton Live audio environment, Harmony Engine by Antares, implements Auto-Tune’s pitch shifting. Another, MIDI Guitar by Jam Origin, does polyphonic pitch detection. With these plugins, I have all the live MIDI information necessary to implement closed re-voicing, and the pitch shifting for rendering it. I suppose I would call this “automated closed-voice harmonization”.
implementation
Caffeine runs in a web browser, which, along with Live, has access to all the MIDI interfaces provided by the host operating system. Using the WebMIDI API, I can receive and schedule MIDI events in Smalltalk, exchanging music information with Live and its plugins. With MIDI as one possible transport layer, I’ve developed a Smalltalk model of music events based upon sequences and simultaneities. One kind of simultaneity is the chord, a collection of notes sounded at the same time. In my implementation, a chord performs its own re-voicing, while also taking care to send a minimum of MIDI messages to Live. For example, only the notes which were adjusted in response to a melodic change are rescheduled. The other notes simply remain on, requiring no sent messages. Caffeine also knows how many pitch-shifted copies of the melody can be created by the pitch-shifting plugin, and culls the least-recently-activated voices from chords, to remain within that number.
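As an illustration of the re-voicing rule itself (the actual implementation is the Smalltalk chord object just described; this JavaScript sketch and its names are hypothetical):

```js
// Hypothetical sketch: move each chord note to the octave that places it
// within half an octave (six semitones) of the current melody pitch.
function closedVoicing(chordNotes, melodyNote) {
  return chordNotes.map(note => {
    let voiced = note;
    while (voiced - melodyNote > 6) voiced -= 12;
    while (melodyNote - voiced > 6) voiced += 12;
    return voiced;
  });
}
// Only the notes whose voicing actually changed need new MIDI messages;
// the others simply remain on.
```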
All told, I now have a perfect re-creation of the original Vocalist closed-voicing sound, enhanced by all the audio post-processing that Live can do.
the setup
a GK-3 hex pickup through a breakout box
Back in the day, I played chords to the VHM5 from an exotic MIDI electric guitar controller, the Zeta Mirror 6. This guitar has a hex (six-channel) pickup, and can send a separate data stream for each string. While I still have that guitar, I also have a Roland GK-3 hex pickup, which is still in production and can be moved between guitars without modifying them. Another thing I like about hex pickups is having access to the original analog signal for each string. These days I run the GK-3 through a SynQuaNon breakout module, which makes the signals available at modular levels. The main benefit of this is that I can connect the analog signals directly to my audio interface, without software drivers that may become unsupported. I have a USB GK-3 interface, but the manufacturer never updated the original 32-bit driver for it.
Contemporary computers can do polyphonic pitch detection on any audio stream, without the use of special controller hardware. While the resulting MIDI stream uses only a single channel, with no distinction between strings, it’s very convenient. The Jam Origin plugin is my favorite way to produce a polyphonic chord stream from audio.
the ROLI Lightpad
My favorite new controller for generating multi-channel chord streams is the ROLI Lightpad. It’s a MIDI Polyphonic Expression (MPE) device, using an entire 16-channel MIDI port for each instrument, and a separate MIDI channel for each note. This enables very expressive use of MIDI channel messages for representing the way a note changes after it starts. The Lightpad sends messages that track the velocity with which each finger strikes the surface, how it moves in X, Y, and Z while on the surface, and the velocity with which it leaves the surface. The surface is also a display; I use it as a five-by-five grid, which presents musical intervals in a way I find much more accessible than that of a traditional piano keyboard. There are several MPE instruments that use this grid, including the Linnstrument and the GeoShred iPad app. The Lightpad is also very portable, and modular; many of them can be connected together magnetically.
The main advantage of using MPE for vocal harmonization is being able to associate separate audio-processing state with each chord voice’s channel. For example, the bass voice of a chord progression can have its own spatialization and equalization settings.
My chord signal path starts with an instrument, a hex or normal guitar or Lightpad. Audio and MIDI data goes from the instrument, through a host operating system MIDI interface, through Live where I can detect pitches and record, through another MIDI interface to Caffeine in a web browser, then back to Live and the pitch-shifting plugin. My melody signal path starts with a vocal performance using a microphone, through Live and pitch detection, then through pitch shifting as controlled by the chords.
Let’s Play!
Between this vocal harmonization, control of the Ableton Live API, and the Beatshifting protocol, there is great potential for communal livecoded music performance. If you’re a livecoder interested in music, I’d love to hear from you!
Livecoding access can tame the complexity of Ableton Live.
I’ve written a proxy system to communicate with Ableton Live from Caffeine, for interactive music composition and performance. Live includes Max for Live (M4L), an embedded version of the Max media programming system. M4L has, in turn, access both to Node.JS, a server-side JavaScript engine embedded as a separate process, and to an internal JS engine extension of its own object system. Caffeine can connect to Node.JS through a websocket, Node.JS can send messages to Max, Max can call user-written JS functions, and those JS functions can invoke the Live Object Model, an API for manipulating Live. This stack of APIs also supports returning results back over the websocket, and establishing callbacks.
getting connected
Caffeine creates a websocket connection to a server running in M4L’s Node.JS, using the JS WebSocket function provided by the web browser. A Caffeine object can use this connection to send a JSON string describing a Live function it would like to invoke. Node.JS passes the JSON string to Max, through an output of a Max object in a Max program, or patcher:
connecting the Node.JS server with JS Live API function invocation
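From the browser side, the connection and a request might look roughly like this (the port and the JSON fields are illustrative assumptions, not the actual Caffeine protocol):

```js
// Sketch: open a websocket to the Node for Max server and ask Live to do something.
// The port and the JSON fields are assumptions, not the actual Caffeine protocol.
const socket = new WebSocket("ws://localhost:8080");

socket.onopen = () => {
  socket.send(JSON.stringify({
    objectId: "live_set tracks 0 clip_slots 0 clip", // Live Object Model path or id
    call: "fire",                                    // Live API function to invoke
    arguments: [],
    resultClass: "Boolean"                           // desired Smalltalk class of the result
  }));
};

socket.onmessage = (event) => console.log(JSON.parse(event.data));
```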
Max is a visual dataflow system, in which objects’ inputs and outputs are connected, and their functions are run by a real-time scheduler. There are two special objects in the patcher above. The first is node.script, which controls the operation of a Node.JS script. It’s running the Node.JS script “caffeine-server.js”, which creates a websocket server. That script has access to a Max API, which it uses to send data through the output of the node.script object.
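A minimal sketch of what such a script could look like, assuming the npm “ws” package alongside the Node for Max API (the real caffeine-server.js may differ):

```js
// caffeine-server.js sketch: bridge a websocket to the node.script outlet.
// Assumes the npm "ws" package; the real script's details may differ.
const maxApi = require("max-api");           // the Max API available inside node.script
const { WebSocketServer } = require("ws");

const server = new WebSocketServer({ port: 8080 });

server.on("connection", (socket) => {
  // Forward each JSON request from Caffeine out of the node.script object, into the patcher.
  socket.on("message", (data) => maxApi.outlet(data.toString()));

  // Let the patcher (via the js object) send results back to Caffeine.
  maxApi.addHandler("result", (json) => socket.send(json));
});
```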
The second special object is js, which runs “caffeine-max.js”. That script parses the JSON function invocation request sent by Caffeine, invokes the desired Live API function, and sends the result back to Caffeine through the Node.JS server.
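Inside the js object, the handler can use Max’s LiveAPI object. A hedged sketch (how the JSON string arrives, and all names here, are assumptions; the real caffeine-max.js differs):

```js
// caffeine-max.js sketch, for Max's js object. Assumes the JSON request arrives
// as a single symbol argument to a "caffeine" message; names are illustrative.
function caffeine(jsonString) {
  var request = JSON.parse(jsonString);
  var api = new LiveAPI(request.objectId); // Live Object Model path, e.g. "live_set tracks 0"
  var result = api.call.apply(api, [request.call].concat(request.arguments || []));
  outlet(0, JSON.stringify({ result: result })); // back toward node.script, then Caffeine
}
```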
proxying
With this infrastructure in place, we can create a proxy object system in Caffeine. In class Live, we can write a method which invokes Live functions:
invoking a Live function from Caffeine
This method uses a SharedQueue for each remote message sent; the JS bridge callback process delivers results to them. This lets us nest remote message sends among multiple processes. The JSON data identifies the function and argument of the invocation, the identifier of the receiving Live object, and the desired Smalltalk class of the result.
The LiveObject proxy class can use this invoking function from its doesNotUnderstand: method:
forwarding a message from a proxy
Now that we have message forwarding, we can represent the entire Live API as browsable Smalltalk classes. I always find this of huge benefit when doing mashups with external code libraries, but especially so with Live. The Live API is massive, and while the documentation is complete, it’s not very readable. It’s much more pleasant to learn about the API with the Smalltalk browsing tools. As usual, we can extend the API with composite methods of our own, aggregating multiple Live API calls into one. With this we can effectively extend the Live API with new features.
extending the Live API
One area of Live API extension where I’m working now is in song composition. Live has an Arrangement view, for a traditional recording studio workflow, and a Session view, for interactive performance. I find the “scenes” feature of the Session view very useful for sketching song sections, but Live’s support for playing them in different orders is minimal. With Caffeine objects representing scenes, I can compose larger structures from them, and play them however I like.
How would you extend the Live API? How would you simplify it?
The Node.JS server, JS proxying code, and the Max patcher that connects them are available as a self-contained M4L device, which can be applied to any Live track. Look for it in the devices folder of the Caffeine repository.
In February 2015 I spoke about Bert Freudenberg’s SqueakJS at FOSDEM. We were all intrigued with the potential of this system to change both Smalltalk and web programming. This year I’ve had some time to pursue that potential, and the results so far are pretty exciting.
SqueakJS is a Squeak virtual machine implemented with pure JavaScript. It runs in all the web browsers, and features a bi-directional JavaScript bridge. You can invoke JavaScript functions from Smalltalk code, and pass Smalltalk blocks for JavaScript code to invoke as callbacks. This lets Smalltalk programmers take advantage of the myriad JavaScript frameworks available, as well as the extensive APIs exposed by the browsers themselves.
The most familiar built-in browser behavior is for manipulating the structure of rendered webpages (the Document Object Model, or “DOM”). Equally important is behavior for manipulating the operation of the browser itself. The Chrome Debugging Protocol is an API for controlling every aspect of a web browser, spoken over a WebSocket. The developer tools built into the Chrome browser are implemented with it, and it’s likely that other browsers will follow.
Using the JavaScript bridge and the Chrome Debugging Protocol, I have SqueakJS controlling the web browser running it. SqueakJS can get a list of all the browser’s tabs, and control the execution of each tab, just like the built-in devtools can. Now we can use Squeak’s user interface for debugging and building webpages. We can have persistent inspectors on particular DOM elements, rather than having only the REPL console of the built-in tools. We can build DOM structures as Smalltalk object graphs, complete with scripted behavior.
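For example, driving the protocol from JavaScript looks roughly like this (assuming Chrome was started with remote debugging enabled on port 9222; the evaluated expression is just an illustration):

```js
// Sketch: drive the Chrome Debugging Protocol over its WebSocket.
// Assumes Chrome was started with --remote-debugging-port=9222.
const targets = await (await fetch("http://localhost:9222/json/list")).json();
const socket = new WebSocket(targets[0].webSocketDebuggerUrl);

socket.onopen = () => {
  socket.send(JSON.stringify({
    id: 1,
    method: "Runtime.evaluate",
    params: { expression: "document.title" }
  }));
};

socket.onmessage = (event) => console.log(JSON.parse(event.data));
```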
I am also integrating my previous WebDAV work, so that webpages are manifested as virtual filesystems, and can be manipulated with traditional text editors and other file-oriented tools. I call this a metaphorical filesystem. It extends the livecoding ability of Smalltalk and JavaScript to the proverbial “favorite text editor”.
This all comes together in a project I call Caffeine. I had fun demoing it at ESUG 2016 in Prague. Video to come…
I wrote a new website for Black Page Digital, my consultancy in Amsterdam and San Francisco. It features a running Squeak Smalltalk that you can use for livecoding. Please check it out, pass it on, and let me know what you think!
Context is the umbrella project for Naiad (a distributed module system for all Smalltalks), Spoon (a minimal object memory that provides the starting point for Naiad), and Lightning (a remote-messaging framework which performs live serialization, used by Naiad for moving methods and other objects between systems). I intend for it to be a future release of Squeak, and a launcher and module system for all the other Smalltalks. I’m writing Context apps for cloud computing, web services, and distributed computation.
- Support for installable object memories as git submodule repos.
- Submodule repos for memories for each of the known Smalltalk dialects, with Naiad support pre-loaded. I’m currently working on the submodules for Squeak and Pharo.
- A web-browser-based console for launching and managing object memories.
- A WebDAV-based virtual filesystem that enables Smalltalk to appear as a network-attached storage device, and mappings of the system to that filesystem that make Smalltalk accessible from external text editors (e.g., for editing code, managing processes and object memories).