Archive for squeak

realtime vocal harmonization with Caffeine

Posted in Uncategorized on 14 July 2021 by Craig Latta

I’ve written a Caffeine class which, in real time, takes pitches detected from a melody and from chords, and sends re-voiced versions of the chords to a harmonizer, which renders them using pitch-shifted copies of the melody. It’s an example of an aggregate audio plugin, one which builds a new feature from other plugins running in Ableton Live.

re-creating a classic

Way way back in 1991, before the Auto-Tune algorithm was popularized in 1998, a Canadian company called IVL Technologies developed a hardware harmonizer, the Vocalist VHM5. It generated five-part vocal harmonies, live from sung melodies and chords played via MIDI. It had a simple but effective model of vocal formants, which enabled it to shift the pitch of a sung note to natural-sounding new pitches, including correcting the pitch of the sung note. It also had very fast pitch detection.

My favorite feature, though, was how it combined those features when voicing chords. In what was called “vocoder mode”, it would adjust the pitches of incoming MIDI chords to be as close as possible to the current pitch of a sung melody, a technique known as closed voicing. If the melody moved more than half an octave away from a chord voice, the rendered chord voice would adjust by some number of octaves up or down, so as to be within half an octave of the melody. With kinetic melodies and dense chords, this becomes a simple but compelling voice-leading technique. It’s even more compelling when the voices are spatialized in a stereo or 3D audio field, with reverb, reflections, and other post-processing.
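
Sketched roughly in Smalltalk, with an illustrative selector and MIDI note numbers standing in for pitches, that octave-folding rule looks something like this:

closestVoicingOf: chordPitch to: melodyPitch
  "Answer chordPitch moved by whole octaves until it lies within six semitones of melodyPitch."
  | adjusted |
  adjusted := chordPitch.
  [adjusted - melodyPitch > 6] whileTrue: [adjusted := adjusted - 12].
  [melodyPitch - adjusted > 6] whileTrue: [adjusted := adjusted + 12].
  ^adjusted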

It’s also computationally inexpensive. The IVL pitch-detection and shifting algorithms were straightforward for off-the-shelf digital signal processing chips to perform, and the Auto-Tune algorithm is orders of magnitude cheaper. One of the audio plugins I use in the Ableton Live audio environment, Harmony Engine by Antares, implements Auto-Tune’s pitch shifting. Another, MIDI Guitar by Jam Origin, does polyphonic pitch detection. With these plugins, I have all the live MIDI information necessary to implement closed re-voicing, and the pitch shifting for rendering it. I suppose I would call this “automated closed-voice harmonization”.

implementation

Caffeine runs in a web browser, which, along with Live, has access to all the MIDI interfaces provided by the host operating system. Using the WebMIDI API, I can receive and schedule MIDI events in Smalltalk, exchanging music information with Live and its plugins. With MIDI as one possible transport layer, I’ve developed a Smalltalk model of music events based upon sequences and simultaneities. One kind of simultaneity is the chord, a collection of notes sounded at the same time. In my implementation, a chord performs its own re-voicing, while also taking care to send a minimum of MIDI messages to Live. For example, only the notes which were adjusted in response to a melodic change are rescheduled. The other notes simply remain on, requiring no sent messages. Caffeine also knows how many pitch-shifted copies of the melody can be created by the pitch-shifting plugin, and culls the least-recently-activated voices from chords, to remain within that number.
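
Roughly, and with the accessors and scheduler messages as placeholders of mine rather than the actual code, a chord’s re-voicing step might read like this, using the octave fold sketched earlier:

revoiceTo: melodyPitch
  "Move each voice as close as possible to the melody, rescheduling only the voices that actually change."
  self voices do: [:voice |
    | newPitch |
    newPitch := self closestVoicingOf: voice pitch to: melodyPitch.
    newPitch = voice pitch ifFalse: [
      scheduler stopNote: voice pitch onChannel: voice channel.
      scheduler startNote: newPitch onChannel: voice channel.
      voice pitch: newPitch]]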

All told, I now have a perfect re-creation of the original Vocalist closed-voicing sound, enhanced by all the audio post-processing that Live can do.

the setup

a GK-3 hex pickup through a breakout box

Back in the day, I played chords to the VHM5 from an exotic MIDI electric guitar controller, the Zeta Mirror 6. This guitar has a hex (six-channel) pickup, and can send a separate data stream for each string. While I still have that guitar, I also have a Roland GK-3 hex pickup, which is still in production and can be moved between guitars without modifying them. Another thing I like about hex pickups is having access to the original analog signal for each string. These days I run the GK-3 through a SynQuaNon breakout module, which makes the signals available at modular levels. The main benefit of this is that I can connect the analog signals directly to my audio interface, without software drivers that may become unsupported. I have a USB GK-3 interface, but the manufacturer never updated the original 32-bit driver for it.

Contemporary computers can do polyphonic pitch detection on any audio stream, without the use of special controller hardware. While the resulting MIDI stream uses only a single channel, with no distinction between strings, it’s very convenient. The Jam Origin plugin is my favorite way to produce a polyphonic chord stream from audio.

the ROLI Lightpad

My favorite new controller for generating multi-channel chord streams is the ROLI Lightpad. It’s a MIDI Polyphonic Expression (MPE) device, using an entire 16-channel MIDI port for each instrument, and a separate MIDI channel for each note. This enables very expressive use of MIDI channel messages for representing the way a note changes after it starts. The Lightpad sends messages that track the velocity with which each finger strikes the surface, how it moves in X, Y, and Z while on the surface, and the velocity with which it leaves the surface. The surface is also a display; I use it as a five-by-five grid, which presents musical intervals in a way I find much more accessible than that of a traditional piano keyboard. There are several MPE instruments that use this grid, including the Linnstrument and the GeoShred iPad app. The Lightpad is also very portable, and modular; many of them can be connected together magnetically.

The main advantage of using MPE for vocal harmonization is that various audio-processing state can be associated with each chord voice’s separate channel. For example, the bass voice of a chord progression can have its own spatialization and equalization settings.

My chord signal path starts with an instrument: a hex or normal guitar, or a Lightpad. Audio and MIDI data go from the instrument, through a host operating system MIDI interface, into Live (where I can detect pitches and record), through another MIDI interface to Caffeine in a web browser, then back to Live and the pitch-shifting plugin. My melody signal path starts with a vocal performance into a microphone, goes through Live and pitch detection, then through pitch shifting as controlled by the chords.

Let’s Play!

Between this vocal harmonization, control of the Ableton Live API, and the Beatshifting protocol, there is great potential for communal livecoded music performance. If you’re a livecoder interested in music, I’d love to hear from you!

Ableton Livecoding with Caffeine

Posted in Uncategorized on 5 June 2021 by Craig Latta
Livecoding access can tame the complexity of Ableton Live.

I’ve written a proxy system to communicate with Ableton Live from Caffeine, for interactive music composition and performance. Live includes Max for Live (M4L), an embedded version of the Max media programming system. M4L has, in turn, access both to Node.JS, a server-side JavaScript engine embedded as a separate process, and to an internal JS engine that extends its own object system. Caffeine can connect to Node.JS through a websocket, Node.JS can send messages to Max, Max can call user-written JS functions, and those JS functions can invoke the Live Object Model, an API for manipulating Live. This stack of APIs also supports returning results back over the websocket, and establishing callbacks.

getting connected

Caffeine creates a websocket connection to a server running in M4L’s Node.JS, using the JS WebSocket function provided by the web browser. A Caffeine object can use this connection to send a JSON string describing a Live function it would like to invoke. Node.JS passes the JSON string to Max, through an output of a Max object in a Max program, or patcher:

connecting the Node.JS server with JS Live API function invocation

Max is a visual dataflow system, in which objects’ inputs and outputs are connected, and their functions are run by a real-time scheduler. There are two special objects in the patcher above. The first is node.script, which controls the operation of a Node.JS script. It’s running the Node.JS script “caffeine-server.js”, which creates a websocket server. That script has access to a Max API, which it uses to send data through the output of the node.script object.

The second special object is js, which runs “caffeine-max.js”. That script parses the JSON function invocation request sent by Caffeine, invokes the desired Live API function, and sends the result back to Caffeine through the Node.JS server.
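
On the Caffeine side, the client half of this exchange can be sketched with the JS bridge; the URL, port, and the resultArrived: callback are assumptions of mine, not the actual protocol:

connect
  "Open a websocket to the caffeine-server.js script, and register a callback for results."
  socket := (JS WebSocket) newWithParameters: {'ws://localhost:3000'}.
  socket at: #onmessage put: [:event | self resultArrived: (event at: #data)]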

proxying

With this infrastructure in place, we can create a proxy object system in Caffeine. In class Live, we can write a method which invokes Live functions:

invoking a Live function from Caffeine

This method uses a SharedQueue for each remote message sent; the JS bridge callback process delivers results to these queues. This lets us nest remote message sends among multiple processes. The JSON data identifies the function and argument of the invocation, the identifier of the receiving Live object, and the desired Smalltalk class of the result.
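
In outline, with the instance variables, JSON keys, and the Json serializer as placeholders of mine rather than the method pictured above, the pattern is:

invoke: functionName on: liveObjectID withArguments: arguments resultClass: aClass
  "Send one JSON invocation request, then block this process until the bridge callback answers."
  | requestID queue |
  requestID := nextRequestID := nextRequestID + 1.
  queue := SharedQueue new.
  pendingQueues at: requestID put: queue.
  socket send: (Json render: (Dictionary new
    at: 'id' put: requestID;
    at: 'function' put: functionName;
    at: 'receiver' put: liveObjectID;
    at: 'arguments' put: arguments;
    at: 'resultClass' put: aClass name;
    yourself)).
  ^queue next

resultArrived: aResultString
  "Run from the websocket's onmessage callback; deliver the result to the waiting queue."
  | result |
  result := Json readFrom: aResultString readStream.
  (pendingQueues removeKey: (result at: 'id')) nextPut: (result at: 'value')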

The LiveObject proxy class can use this invocation method from its doesNotUnderstand: method:

forwarding a message from a proxy
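
Its shape, with the session accessor and the result class as illustrative details, is something like:

doesNotUnderstand: aMessage
  "Forward any unimplemented message to the Live object this proxy represents."
  ^self session
    invoke: aMessage selector
    on: self liveObjectID
    withArguments: aMessage arguments
    resultClass: Object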

Now that we have message forwarding, we can represent the entire Live API as browsable Smalltalk classes. I always find this of huge benefit when doing mashups with external code libraries, but especially so with Live. The Live API is massive, and while the documentation is complete, it’s not very readable. It’s much more pleasant to learn about the API with the Smalltalk browsing tools. As usual, we can extend the API with composite methods of our own, aggregating multiple Live API calls into one. With this we can effectively extend the Live API with new features.

extending the Live API

One area of Live API extension where I’m working now is in song composition. Live has an Arrangement view, for a traditional recording studio workflow, and a Session view, for interactive performance. I find the “scenes” feature of the Session view very useful for sketching song sections, but Live’s support for playing them in different orders is minimal. With Caffeine objects representing scenes, I can compose larger structures from them, and play them however I like.
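
As a sketch of the idea, a composite method might launch Session-view scenes in an arbitrary order; fireScene: and durationInSecondsOfScene: are stand-ins for the underlying Live API calls:

playScenes: anArrayOfSceneIndices
  "Launch each scene in turn, waiting out its length before launching the next."
  anArrayOfSceneIndices do: [:sceneIndex |
    self fireScene: sceneIndex.
    (Delay forSeconds: (self durationInSecondsOfScene: sceneIndex)) wait]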

How would you extend the Live API? How would you simplify it?

The Node.JS server, JS proxying code, and the Max patcher that connects them are available as a self-contained M4L device, which can be applied to any Live track. Look for it in the devices folder of the Caffeine repository.

Beatshifting: playing music in sync and out of phase

Posted in Appsterdam, Caffeine, consulting, Context, livecoding, music, Smalltalk, SqueakJS on 27 April 2021 by Craig Latta
two Beatshifting timelines

I’ve written a Caffeine app implementation of the Beatshifting algorithm, for collaborative remote music performance that is synchronized and out-of-phase. Beatshifting treats network latency as a rhythmic element, timestamping each note as an offset from a beat of a shared metronome and score.

I was inspired to write the Beatshifting app by NINJAM, a similar system that has hosted many hours of joyous sessions. There are a few interesting twists I think I can bring to the technology, through late-binding of audio rendering.

NINJAM also synchronizes distributed streams of rhythmic music. It works by having a server collect an entire measure of audio from each performer’s timestamped stream, stamp the collected measures with an upcoming measure number, and send them back to each performer. Each performer’s system plays the collected measures with their start times aligned. In effect, each performer plays along with what everyone else did a measure ago. Each performer’s audio needs to arrive only by the start of the upcoming measure, rather than fast enough to create the illusion of simultaneity.

Beatshifting gives more control over the session to each performer, and to an audience as well. Each performer can modify not only the local volume levels of the other performers, but also their delays and instruments. Each performer can also change the tempo and time signature of the session. A session can have an audience as well, and each audience member is really a performer who hasn’t played anything yet.

It’s straightforward to have an arbitrary number of participants in a session because Beatshifting takes the form of a web app. Each participant only needs to visit a session link in a web browser, rather than use a special digital audio workstation (DAW) app. By default, Beatshifting uses MIDI event messages instead of audio, using much less bandwidth even with a large group.

To deliver events to each participant’s web browser, Beatshifting uses the Croquet replication service. Croquet is able to replicate and synchronize any JavaScript object in every participant’s web browser, up to 60 times per second. Beatshifting uses this to provide a shared score. Music events like notes and fader movements can be scheduled into the score by any participant, and from code run by the score itself.

One piece of code the score runs broadcasts events indicating that measures have elapsed, so that the web browsers can render metronome clicks. There are three kinds of metronome clicks, for ticks, beats, and measures. For example, with a time signature of 6/8, there are two beats per measure, and three ticks per beat. Each tick is an eighth-note, so each beat is a dotted-quarter note. The sequence of clicks one hears is:

  • measure
  • tick
  • tick
  • beat
  • tick
  • tick

At a tempo of 120 beats per minute, or 240 clicks per 60,000 milliseconds, there are 250 milliseconds between clicks. Each time a web browser receives a measure-elapsed event, it schedules MIDI events for the next measure’s clicks with the local MIDI output interface. Since each web browser knows the starting time of the session in its output MIDI interface’s timescale, it can calculate the timestamps of all ensuing clicks.
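
A sketch of that per-measure scheduling step, with the instance variables and the send:at: wrapper as illustrative names rather than the actual app code:

scheduleClicksForMeasure: measureNumber
  "Schedule this measure's metronome clicks with the local MIDI output interface."
  | clicksPerMeasure measureStart |
  clicksPerMeasure := beatsPerMeasure * ticksPerBeat.
  measureStart := sessionStartTime + (measureNumber * clicksPerMeasure * millisecondsPerClick).
  0 to: clicksPerMeasure - 1 do: [:click |
    midiOutput
      send: (self clickBytesFor: click)
      at: measureStart + (click * millisecondsPerClick)]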

When a performer plays a note, their web browser notes the offset in milliseconds between when the note was played and the time of the most recent click. The web browser then publishes an event-scheduling message, to which the score is subscribed. The score then broadcasts a note-played event to all the web browsers. Again, it’s up to each web browser to schedule a corresponding MIDI note with its local MIDI output interface. The local timestamp of that note is chosen to be the same millisecond offset from some future click point. How far in the future that click is can be chosen based on who played the note, or any other element of the event’s data. Each web browser can also choose other parameters for each event, like instrument, volume level, and panning position.
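
Rendering a received note-played event then amounts to re-applying its offset to a click of the local browser’s choosing; again, the selectors are illustrative:

renderNoteEvent: anEvent
  "Play the note at the same millisecond offset from a future click that we choose."
  | click timestamp |
  click := self nextClickTime + ((self clickDelayFor: anEvent performer) * millisecondsPerClick).
  timestamp := click + anEvent millisecondOffset.
  midiOutput send: anEvent midiBytes at: timestamp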

Quantities like tempo are part of the score’s state, and can be changed by any performer or audience member. Croquet ensures that the changed JavaScript variables are synchronized in all the participants’ web browsers.

With so many decisions about how music events are rendered left to each web browser, the mix that each participant hears can be wildly different. The only constants are the millisecond beat offsets of each performer’s notes. I think it’ll be fun to compare recordings of these mixes after the fact, and to make new ones from individual recorded tracks.

There’s no server that any participant needs to set up, and the Croquet service knows nothing of the Beatshifting protocol. This makes it very easy to start and join new sessions.

next steps

The current Beatshifting UI has controls for joining a session, enabling the local scheduling of metronome clicks, and changing the tempo and time signature of a session.

the current Beatshifting UI

If one is using a MIDI output interface connected to a DAW, then one may use the DAW to control instruments, volume, panning, and so on. I’d also like to provide the option of having all MIDI event rendering performed by the web browser, and a UI for controlling and recording that. I’ve settled on the ToneJS audio framework for rendering events, and am now developing the UI.

I led a debut performance of Beatshifting as part of the Netherlands Coding Live concert series, on 23 April 2021.

I’ve written an animated 3D visualization of the Beatshifting algorithm, which can be driven from live session data. This movie is an annotated slow-motion version:

visualizing the Beatshifting algorithm

I’m excited about the creative potential of Beatshifting sessions. Please contact me if you’re interested in playing or coding for this medium!

The Big Shake-Out

Posted in Appsterdam, Caffeine, consulting, Context, livecoding, Naiad, Smalltalk, Spoon, SqueakJS on 25 March 2019 by Craig Latta

Golden Retriever shaking off water

Some of those methods were there for a very long time!

I have adapted the minimization technique from the Naiad module system to Caffeine, my integration of OpenSmalltalk with the Web and Node platforms. Now, from a client Squeak, Pharo, or Cuis system in a web browser, I can make an EditHistory connection to a history server Smalltalk system, remove via garbage collection every method not run since the client was started, and imprint needed methods from the server as the client continues to run.

This is a garbage collection technique that I had previously called “Dissolve”, but I think the details are easier to explain with a different metaphor: “shaking” loose and removing everything which isn’t attached to the system through usage. This is a form of dynamic dead code elimination. The technique has two phases: “fusing” methods that must not be removed, and “shaking” loose all the others, removing them. This has a cascading effect, as the literals of removed methods without additional references are also removed, and further objects without references are removed as well.

After unfused methods and their associated objects are removed, the subsystems that provided them are effectively unloaded. For the system to use that functionality again, the methods must be reloaded. This is possible using the Naiad module system. By connecting a client system to a history server before shaking, the client can reload missing methods from the server as they are needed. For example, if the Morphic UI subsystem is shaken away, and the user then attempts to use the UI, the parts of Morphic needed by the user’s interactions are reloaded as needed.

This technology is useful for delineating subsystems that were created without regard to modularity, and creating deployable modules for them. It’s also useful for creating minimal systems suited to a specific purpose. You can fuse all the methods run by the unit tests for an app, and shake away all the others, while retaining the ability to debug and extend the system.

how it works

Whether a method is fused or not is part of the state of the virtual machine running the system, and is reset when the virtual machine starts. On system resumption, no method is fused. Each method can be told to fuse itself manually, through a primitive interface. Otherwise, methods are fused by the virtual machine as they are run. A class called Shaker knows which methods in a typical system are essential for operation. A Shaker instance can ensure those methods are fused, then shake the system.
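
In use, the sequence sketched above might read like this, with the selectors standing in for Shaker’s actual interface and the fusing primitives:

Shaker new
  fuseEssentialMethods;          "fuse the methods Shaker knows a typical system needs"
  fuseMethodsOnActiveContexts;   "fuse methods referenced by the current processes' stacks"
  shake                          "garbage-collect every method left unfused"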

Shaking itself invokes a variant of the normal OpenSmalltalk garbage collector. It replaces each unfused method with a special method which, when run, knows how to install the original method from a connected history server. In effect, all unfused methods are replaced by a single method.

Reinstallation of a method uses Naiad behavior history metadata, obtained by remote messaging with a history server, to reconstruct the method and put it in the proper method dictionary. The process creates any necessary prerequisites, such as classes and shared pools. No compiler is needed, because methods are constructed from previously-generated instructions; source code is merely an optional annotation.

the benefits of livecoding all the way down

I developed the virtual machine support for this feature with Bert Freudenberg‘s SqueakJS virtual machine, making heavy use of the JavaScript debugger in a web browser. I was struck by how much faster this sort of work is with a completely livecoded environment, rather than the C-based environment in which we usually develop the virtual machine. It’s similar to the power of Squeak’s virtual machine simulator. The tools, living in JavaScript, aren’t as powerful as Smalltalk-based ones, but they operate on the final Squeak virtual machine, rather than a simulation that runs much more slowly. Rebuilding the virtual machine amounts to reloading the web page in which it runs, and takes a few seconds, rather than the ordeal of a C-based build.

Much of the work here involved trial and error. How does Shaker know which methods are essential for system operation? I found out directly, by seeing where the system broke after being shaken. One can deduce some of the answer; for example, it’s obvious that the methods used by method contexts of current processes should be fused. Most of the essential methods yet to run, however, are not obvious. It was only because I had an interactive virtual machine development environment that it was feasible to restart the system and modify the virtual machine as many times as I needed (many, many times!), in a reasonable timeframe. Being able to tweak the virtual machine in real time from Smalltalk was also indispensable for debugging and feature development.

I want to thank Bert again for his work on SqueakJS. Also, many thanks to Dan Ingalls and the rest of the Lively team for creating the environment in which SqueakJS was originally built.

release schedule

I’m preparing Shaker for the next seasonal release of Caffeine, on the first 2019 solstice, 21 June 2019. I’ll make the virtual machine changes available for all OpenSmalltalk host platforms, in addition to the Web and Node platforms that Caffeine uses via the SqueakJS virtual machine. There may be alpha and beta releases before then.

If this technology sounds interesting to you, please let me know. I’m interested in use cases for testing. Thanks!

livecoding VueJS with Caffeine

Posted in Appsterdam, Caffeine, consulting, Context, Smalltalk, Spoon, SqueakJS on 30 August 2018 by Craig Latta

Vue component

Livecoding Vue.js with Caffeine: using a self-contained third-party Vue component compiled live from the web, no offline build step.

a tour of Caffeine

Posted in Appsterdam, consulting, Context, Smalltalk, Spoon, SqueakJS on 27 August 2018 by Craig Latta

https://player.vimeo.com/video/286872152

Here’s a tour of the slides from a Caffeine talk I’m going to give at ESUG 2018. I hope to see you there!

retrofitting Squeak Morphic for the web

Posted in Appsterdam, consulting, Context, Smalltalk, Spoon, SqueakJS on 30 June 2017 by Craig Latta


Last time, we explored a way to improve SqueakJS UI responsiveness by replacing Squeak Morphic entirely, with morphic.js. Now let’s look at a technique that reuses all the Squeak Morphic code we already have.

many worlds, many canvases

Traditionally, Squeak Morphic has a single “world” where morphs draw themselves. To be a coherent GUI, Morphic must provide all the top-level effects we’ve come to expect, like dragging windows and redrawing them in their new positions, and redrawing occluded windows when they are brought to the top. Today, this comes at an acceptable but noticeable cost. Until WebAssembly changes the equation again, we want to do all we can to shift UI work from Squeak Morphic to the HTML5 environment hosting it. This will also make the experience of using SqueakJS components more consistent with that of the other elements on the page.

Just as we created an HTML5 canvas for morphic.js to use in the last post, we can do so for individual morphs. This means we’ll need a new Canvas subclass, called HTML5FormCanvas:

Object
  ...
    Canvas
       FormCanvas
         HTML5FormCanvas

An HTML5FormCanvas draws onto a Form, as instances of its parent class do, but instead of flushing damage rectangles from the Form onto the Display, it flushes them to an HTML5 canvas. This is enabled by a primitive I added to the SqueakJS virtual machine, which reuses the normal canvas drawing code path.

Accompanying HTML5FormCanvas are new subclasses of PasteUpMorph and WorldState:

Object
  Morph
    ...
      PasteUpMorph
        HTML5PasteUpMorph

Object
  WorldState
    HTML5WorldState

HTML5PasteUpMorph provides a message interface for other Smalltalk objects to create HTML5 worlds, and access the HTML5FormCanvas of each world and the underlying HTML5 canvas DOM element. An HTML5WorldState works on behalf of an HTML5PasteUpMorph, to establish event handlers for the HTML5 canvas (such as for keyboard and mouse events).

HTML5 Morphic in action

You don’t need to know all of that just to create an HTML5 Morphic world. You only need to know about HTML5PasteUpMorph. In particular, (HTML5PasteUpMorph class)>>newWorld. All of the traditional Squeak Morphic tools can use HTML5PasteUpMorph as a drop-in replacement for the usual PasteUpMorph class.
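
A minimal usage sketch, assuming newWorld answers a world that is ready to receive morphs:

| world |
world := HTML5PasteUpMorph newWorld.
world addMorph: (SystemWindow labelled: 'hello world')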

There are two examples of single-window Morphic worlds in the current Caffeine release, for a workspace and classes browser. I consider these two tools to be the “hello world” exercise for UI framework experimentation, since you can use them to implement all the other tools.

We get an immediate benefit from the web browser handling window movement and clipping for us, with opaque window moves rendering at 60+ frames per second. We can also interleave Squeak Morphic windows with other DOM elements on the page, which enables a more natural workflow when creating hybrid webpages. We can also style our Squeak Morphic windows with CSS, as we would any other DOM element, since as far as the web browser is concerned they are just HTML5 canvases. This makes effects like the rounded corners and window button trays that Caffeine uses very easy.
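
For example, rounding a world’s corners from Smalltalk might look like this, with canvasElement standing in for whatever accessor answers the world’s underlying DOM canvas:

(world canvasElement at: #style) at: #borderRadius put: '8px'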

Now, we have flexible access to the traditional Morphic tools while we progress with adapting them to new worlds like morphic.js. What shall we build next?

Pharo comes to Caffeine and SqueakJS

Posted in Appsterdam, consulting, Context, GLASS, Naiad, Seaside, Smalltalk, Spoon, SqueakJS on 29 June 2017 by Craig Latta


The Caffeine web livecoding project has added Pharo to the list of Smalltalk distributions it runs with SqueakJS. Bert Freudenberg and I spent some time getting SqueakJS to run Pharo at ESUG 2016 in Prague last summer, and it mostly worked. I think Bert got a lot further since then, because now there are just a few Pharo primitives that need implementing. All I’ve had to do so far this time is a minor fix to the input event loop and add the JavaScript bridge. The bridge now works from Pharo, and it’s the first time I’ve seen that.

Next steps include getting the Tether remote messaging protocol and Snowglobe app streaming working between Pharo and Squeak, all running in SqueakJS. Of course, I’d like to see fluid code-sharing of all kinds between Squeak, Pharo, and all the other Smalltalk implementations.

So, let the bugfixing begin! :)  You can run it at https://caffeine.js.org/pharo/. Please do get in touch if you find and fix things. Thanks!

a faster Morphic with morphic.js

Posted in Appsterdam, consulting, Context, Naiad, Smalltalk, Spoon, SqueakJS on 28 June 2017 by Craig Latta


Caffeine is powered by SqueakJS. The performance of SqueakJS is amazingly good, thanks in large part to its dynamic translation of Smalltalk compiled methods to JavaScript functions (which are in turn translated to machine code by your web browser’s JS engine). In the HTML5 environment where SqueakJS finds itself, there are several other tactics we can use to further improve user interface performance.

Delegate!

In a useful twist of fate, SqueakJS emerges into a GUI ecosystem descended from Smalltalk, now brimming with JavaScript frameworks to which SqueakJS can delegate much of its work. To make Caffeine an attractive environment for live exploration, I’m addressing each distraction I see.

The most prominent one is user interface responsiveness. SqueakJS is quite usable, even with large object memories, but its Morphic UI hasn’t reached the level of snappiness that we expect from today’s web apps. Squeak is a virtual machine, cranking away to support what is essentially an entire operating system, with a process scheduler, window system, compiler, and many other facilities. Since, with SqueakJS, that OS has access to a multitude of similar behavior in the JavaScript world, we should take advantage.

Of course, the UI design goals of the web are different than those of other operating systems. Today’s web apps are still firmly rooted in the web’s original “page” metaphor. “Single Page Applications” that scroll down for meters are the norm. While there are many frameworks for building SPAs, support for open-ended GUIs is uncommon. There are a few, though; one very good one is morphic.js.

morphic.js

Morphic.js is the work of Jens Mönig, and part of the Snap! project at UC Berkeley, a Scratch-like environment which teaches advanced computer science concepts. It’s a standalone JavaScript implementation of the Morphic UI framework. By using morphic.js, Squeak can save its cycles for other things, interacting with it only when necessary.

To use morphic.js in Caffeine, we need to give morphic.js an HTML5 canvas for drawing. The Webpage class can create new DOM elements, and use jQuery UI to give them effects like dragging and rotation. With one line we create a draggable canvas with window decorations:

canvas := Webpage createWindowOfKind: 'MorphicJS'

Now, after loading morphic.js, we can create a morphic.js WorldMorph object that uses the canvas:

world := (JS top at: #WorldMorph) newWithParameters: {canvas. false}

Finally, we need to create a rendering loop that regularly gets the world to draw itself on the canvas:

(JS top)
  at: #world
  put: world;
  at: #morphicJSRenderingLoop
  put: (
    (JS Function) new: '
      requestAnimationFrame(morphicJSRenderingLoop)
      world.doOneCycle()').

JS top morphicJSRenderingLoop

Now we have an empty morphic.js world to play with. The first thing to know about morphic.js is that you can get a world menu by control-clicking:


Things are a lot more interesting if you choose development mode:


Take some time to play around with the world menu, creating some morphs and modifying them. Note that you can also control-click on morphs to get morph-specific menus, and that you can inspect any morph.


Also notice that this user interface is noticeably snappier than the current SqueakJS Morphic. MorphicJS isn’t trying to do all of the OS-level stuff that Squeak does; it’s just animating morphs, using a rendering loop that runs as machine code in your web browser’s JavaScript engine.

Smalltalk tools in another world, with Hex

The inspector gives us an example of a useful morphic.js tool. Since we can pass Smalltalk blocks to JavaScript as callback functions, we have two-way communication between Smalltalk and JavaScript, and we can build morphic.js tools that mimic the traditional Squeak tools.

I’ve built two such tools so far, a workspace and a classes browser. You can try them out with these expressions:

HexMorphicJSWorkspace open.
HexMorphicJSClassesBrowser open

“Hex” refers to a user interface framework I wrote called Hex, which aggregates several JavaScript UI frameworks. HexMorphicJSWorkspace and HexMorphicJSClassesBrowser are subclasses of HexMorphicJSWindow. Each instance of every subclass of HexMorphicJSWindow can be used either as a standalone morphic.js window, or as a component in a more complex window. This is the case with these first two tools; a HexMorphicJSClassesBrowser uses a HexMorphicJSWorkspace as a pane for live code evaluation, and you can also use a HexMorphicJSWorkspace by itself as a workspace.

With a small amount of work, we get much snappier versions of the traditional Smalltalk tools. When using them, SqueakJS only has to do work when the tools request information from it. For example, when a workspace wants to print the result of evaluating some Smalltalk code, it asks SqueakJS to compile and evaluate it.

coming up…

It would be a shame not to reuse all the UI construction effort that went into the original Squeak Morphic tools, though. What if we were to put each Morphic window onto its own canvas, so that SqueakJS didn’t have to support moving windows, clipping and so on? Perhaps just doing that would yield a performance improvement. I’ll write about that next time.

Caffeine :: Livecode the Web!

Posted in Appsterdam, consulting, Context, Naiad, Smalltalk, Spoon, SqueakJS on 22 June 2017 by Craig Latta

For the impatient… here it is.

Back to the Future, Again

With the arrival of Bert Freudenberg’s SqueakJS, it was finally time for me to revisit the weird and wonderful world of JavaScript and web development. My previous experiences with it, in my consulting work, were marked by awkward development tools, chaotic frameworks, and scattered documentation. Since I ultimately rely on debuggers to make sense of things, my first question when evaluating a development environment is “What is debugging like?”

Since I’m a livecoder, I want my debugger to run in the web browser I’m using to view the site I’m debugging. The best in-browser debugger I’ve found, Chrome DevTools (CDT), is decent if you’re used to a command-line interface, but lacking as a GUI. With Smalltalk, I can open new windows to inspect objects, and keep them around as those objects evolve. CDT has an object explorer integrated into its read-eval-print loop (REPL), and a separate tab for inspecting DOM trees, but using them extensively means a lot of scrolling in the REPL (since asynchronous console messages show up there as well) and switching between tabs. CDT can fit compactly onto the screen with the subject website, but doesn’t make good use of real estate when it has more. This interrupts the flow of debugging and slows down development.

The Pieces Are All Here

With SqueakJS, and its JavaScript bridge, we can make something better. We can make an in-browser development environment that compares favorably with external environments like WebStorm. I started from a page like try.squeak.org. The first thing we need is a way to move the main SqueakJS HTML5 canvas around the page. I found jQuery UI to be good for this, with its “draggable” effect. While we’re at it, we can also put each of Squeak‘s Morphic windows onto a separate draggable canvas. This moves a lot of the computation burden from SqueakJS to the web browser, since SqueakJS no longer has to do window management. This is a big deal, since Morphic window management is the main thing making modern Squeak UIs feel slow in SqueakJS today.

SqueakJS provides a basic proxy class for JavaScript objects, called JSObjectProxy. Caffeine has an additional proxy class called JSObject, which provides additional reflection features, like enumerating the subject JS object’s properties. It’s also a good place for documenting the behavior of the JS objects you’re using. Rather than always hunting down the docs for HTMLCanvasElement.getContext on MDN, you can summarize things in a normal method comment, in your HTMLCanvasElement class in Smalltalk.
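
A sketch of the kind of wrapper meant here, on a hypothetical Smalltalk-side HTMLCanvasElement class that keeps its bridge proxy in a jsCanvas instance variable:

getContext: aContextTypeString
  "Answer this canvas's '2d' or 'webgl' drawing context.
  See MDN: HTMLCanvasElement.getContext."
  ^jsCanvas getContext: aContextTypeString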

Multiple Worlds

With a basic window system based on HTML5 canvases, we can draw whatever we like on those canvases, using the SqueakJS bridge and whatever other JS frameworks we care to load. I’ve started integrating a few frameworks, including React (for single-page-app development), three.js (for WebGL 3D graphics development), and morphic.js (a standalone implementation of Morphic which is faster than what’s currently in Squeak). I’ll write about using them from Caffeine in future blog posts.

Another framework I’ve integrated into Caffeine is Snowglobe (for Smalltalk app streaming and other remote GUI access), which I wrote about here previously. I think the Snowglobe demo is a lot more compelling when run from Caffeine, since it can co-exist with other web apps in the same page. You can also run multiple Snowglobes easily, and drag things between them. I’ll write more about that, too.

Fitting Into the JavaScript Ecosystem

To get the full-featured debugger UI I wanted, I wrote a Chrome extension called Caffeine Helper, currently available on the Chrome Web Store. It exposes the Chrome Debugging Protocol (CDP) support in the web browser to SqueakJS, letting it do whatever the CDT can do (CDT, like SqueakJS, is just another JavaScript-powered web app). The support for CDP that I wrote about previously uses a WebSocket-based CDP API that requires Chrome to be started in a special way. The Caffeine Helper extension provides a JavaScript API, without that requirement.

I also wrote support for generating Smalltalk code from JavaScript, using the esprima parsing framework, and vice-versa. With my debugger and code generation, I’m going to try developing for some file-based JS projects, using Smalltalk behind the scenes and converting to and from JavaScript when necessary. I think JS web development might actually not drive me crazy this way. :)

Please Try It Out!

So, please check out Caffeine, at caffeine.js.org! I would very much appreciate your feedback. I’m particularly interested to hear your use cases, as I plan the next development steps. I would love to have collaborators, too. Let’s build!
