Archive for the Appsterdam Category

Caffeine Web services through Deno

Posted in Appsterdam, Caffeine, consulting, Context, livecoding, Naiad, Smalltalk, Spoon, SqueakJS on 9 July 2022 by Craig Latta
Caffeine in a Deno worker can provide Web APIs to Smalltalk in a native app.

bridging native apps and the Web

We’ve been able to run Caffeine headlessly in a Web Worker for some time now, using NodeJS. I’ve updated this support to use the Deno JavaScript runtime instead of Node. This gives us better access to familiar Web APIs, and a cleaner module system without npm. I’ve also extended the bridging capability of the code that Deno runs. Now, a native Squeak app can start Deno (via class OSProcess), Deno starts Caffeine in a worker, and the two Smalltalk instances can communicate with each other via remote messaging.
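As a rough illustration of the native side, here’s a workspace-style sketch. OSProcess command: is the usual way to run a shell command from Squeak; the worker script name and the remote-messaging expression are placeholders for whatever the bridge actually exposes.

  "start Deno, which runs Caffeine headlessly in a worker"
  OSProcess command: 'deno run --allow-net caffeine-worker.js'.

  "once the worker connects back, remote messages flow between the two object memories;
  the receiver and selector here are hypothetical"
  workerPeer evaluate: 'Transcript showln: ''hello from native Squeak'''.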

I’m using this bridge to let native Squeak participate in WebRTC sessions with other Smalltalks, as part of the Naiad team development system. The same Squeak object memory runs in both the native Squeak and the Deno worker. I’m sure many other interesting use cases will arise, as we explore what native Squeak and Web Squeak can do together!

Epigram: reifying grammar production rules for clearer parsing, compiling, and searching

Posted in Appsterdam, Caffeine, consulting, Context, livecoding, Smalltalk, SqueakJS, Uncategorized on 28 June 2022 by Craig Latta
a section of the Smalltalk grammar

putting production rules to work

In a traditional EBNF grammar, production rules describe all the allowed relationships between a language’s terminal symbols. Expressed as live objects with behavior, they can parse and compile as well. They form a definitive reference network in which to record parsed terminals, making them ideally suited as parse trees. Individual rules also function as search terms in other rules which use them. Epigram is a framework for doing this. Let’s explore these features with an example.

We’ll use the grammar for Smalltalk methods. The production rules are included in the book Smalltalk-80: The Language and Its Implementation by Adele Goldberg and Dave Robson. They are depicted visually, with railroad diagrams (a few of them are shown above).

Each diagram shows a path going through one or more symbols. An EBNF production rule, or grammar symbol, is indicated by the name of the rule in a box. A terminal symbol is indicated by a circle with the symbol inside. An alternation is indicated by a path’s divergence through multiple symbols, converging afterward. A compound rule is indicated by a path going directly through multiple symbols. A repetition is indicated by a loop through a sequence of symbols, representing one or more occurrences of that sequence. EBNF also supports the option, which is no or one occurrence of a symbol, and the difference, which matches one rule but not another. These kinds of rules are sufficient for the Smalltalk grammar. There are other grammars, like XML, that extend BNF further, but we won’t discuss them here.

production rules as code

We can express these diagrams as code. For a terminal symbol, we can use a literal string. For an alternation, we can use a “|” (“or”) operator. For a compound rule, we can use a “||” (“then”) operator (after changing the Smalltalk compiler so that it doesn’t confuse “||” with “|”). For repetitions and options, we can use the unary messages “repetition” and “option”. We can store entire production rules as shared variables (pool variables in Squeak).

For example, we can write the first diagram as:

Digit := '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'.

Digit is an instance of class Alternation, and can be a variable in a SmalltalkProductionRules pool. We can write the second diagram as:

Digits := Digit repetition.

Digits is an instance of class Repetition. A rule which uses a compound rule is:

SymbolConstant := '#' || Symbol.

We can write each rule in this way, culminating with Method.
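To give a flavor of how the operators combine, here are two more rules written in the same style. These are simplified sketches rather than the book’s exact rules (real Smalltalk numbers also allow radix and exponent notation, for instance):

  Identifier := Letter || (Letter | Digit) repetition option.
  Number := Digits || ('.' || Digits) option.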

parsing

Once we’ve created all the rules for our grammar, we can ask the topmost rule, Method, to parse the source code of a method. To parse, a rule creates a stream on the proposed content, and attempts to accept the next character in the stream until the stream is empty. For example, a terminal symbol for ‘3’ will accept the next character if it is $3.

A symbol which consists of other symbols will delegate parsing to those symbols. An alternation between the terminal symbols for ‘3’ and ‘4’ will accept the next character if it is $3 or $4, but it decides this by delegating the parse to each of those symbols, and noting which of them was able to accept the next character. A symbol’s parse succeeds if it is able to accept enough characters to match every character in its string, if it’s a terminal symbol, or a sufficient set of subsymbols, if it’s a compound rule, alternation or repetition.

If a symbol doesn’t succeed, it fails and resets the stream’s position to where it was before parsing began. Control is returned to the delegating symbol. This is called backtracking. If the overall parse backtracks all the way to the topmost rule without having emptied the stream, and the next character is unacceptable, then the entire parse fails and the content is ungrammatical. Having reached this point, however, we have information about which rules failed and how far the parse got in the stream. This is useful information to present to the user, with an exception.
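As a sketch of the behavior just described, an alternation’s parsing method might look something like this (the selector and instance variable names are illustrative, not Epigram’s actual interface):

  parseFrom: aStream
    "Try each alternative in order, backtracking on failure.
    Answer the first successful result, or nil if none succeeds."
    | start |
    start := aStream position.
    alternatives do: [:each |
      (each parseFrom: aStream)
        ifNotNil: [:result | ^result]
        ifNil: [aStream position: start]].
    ^nil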

The complexity of a grammar can make backtracking very expensive in time; reducing this cost is the main challenge in Epigram development currently. Informed choices of alternation orders in a grammar (as with a parsing expression grammar) and primitives (described below) yield dramatic performance increases.

compilation

If a parse is successful, we are left with a graph of successful production rules, each with a record of the characters it accepted and its successful constituent symbols. We can use this graph as we would have used a traditional parse tree. Compilers can use the parse graph to create objects representing the source content in a useful structure. For example, we can create a CompiledMethod of Smalltalk virtual machine instructions, embodying the behavior specified by the source code.

For example, if our source code were:
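add
  "Add two numbers and answer the result."

  ^3 + 4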

The successful rules in our parse, in chronological order, would be:

  • Letter ($a)
  • Letter ($d)
  • Letter ($d) — further Letter successes are elided.
  • Identifier (‘add’)
  • UnarySelector (‘add’)
  • MessagePattern (‘add’)
  • SpecialCharacter (carriage return)
  • SpecialCharacter (tab)
  • Comment (‘”Add two numbers and answer the result.”‘)
  • Digit ($3)
  • Number (‘3’)
  • Literal (‘3’)
  • SpecialCharacter ($+)
  • BinarySelector (‘+’)
  • Literal (‘4’)
  • BinaryExpression (‘3 + 4’)
  • MessageExpression (‘3 + 4’)
  • Expression (‘3 + 4’)
  • Statements (‘3 + 4’)
  • Method (‘add “…” ^3 + 4’)

To get the intended method selector (#add), a compiler holding this parse history can simply ask the Method rule for its MessagePattern. The compiler can also ask the Expression to generate the Smalltalk stack machine instructions that carry it out.

searching

Since MessagePattern is a well-known shared variable in the SmalltalkProductionRules pool, the compiler can use it as a search term in queries to Method:

selector := (Method at: MessagePattern) terminals

Using production rules as search terms is a very useful way of navigating the grammatical structure of the parse tree, allowing the compiler writer to apply their knowledge of the grammar. Rather than focusing on how parsing works, or how to manipulate a parse tree which is separate from the grammar, one may express compilation entirely with the grammar’s rules.

performance optimization: primitives

It’s very convenient and clear to express a grammar as EBNF rules, but it can lead to alternations between many options, with expensive parsing behavior. Since the grammar keeps a complete history of the accepted rules for a parse, we can easily see which rules are most popular and consume the most time. For these rules, we can specify Smalltalk code equivalent to their parsing work, providing primitives. For XML, which has frequently-used alternations between thousands of Unicode characters, primitives provide speedups of 200 times or more.
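As an illustration, a primitive for a frequently-used rule might be registered something like this (primitive: is an illustrative selector; the point is that the block does the rule’s work directly, instead of delegating to many alternatives):

  Digit primitive: [:stream |
    (stream atEnd not and: [stream peek isDigit])
      ifTrue: [stream next]
      ifFalse: [nil]].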

enforcing constraints

Some grammars specify additional constraints on parsed content. For example, the HTML grammar requires an element’s opening and closing tags to match. Epigram supports adding constraints to production rules, in the form of block closures which must evaluate to true after parsing has taken place.
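For example, a tag-matching constraint for an HTML element rule might be expressed roughly like this (constraint: and the rule names are illustrative):

  Element constraint: [:parse |
    (parse at: OpeningTagName) terminals = (parse at: ClosingTagName) terminals].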

resolving ambiguities

Some grammars include points of intentional ambiguity. In Smalltalk, for example, there’s a grammatical ambiguity between chains of unary and binary messages. Epigram supports noting ambiguities, and resolving them through constraints. In the Smalltalk example, the ambiguity is resolved through a constraint that considers the scope in which parsing occurs. Which variable names are currently bound, and which unary and binary messages are actually defined, lead to a single interpretation.

decompilation

Writing a Smalltalk decompiler with reified production rules is also easier. The rule for a method declaration can dispatch decompilation for each bytecode to the corresponding instruction class, resulting in a set of equivalent instruction instances. An instruction which pops the virtual machine stack corresponds to a Smalltalk statement, and it can construct a structure of production rules equivalent to that statement, as if created from a parse. The rule structure can answer terminal symbols which are the equivalent source code. I’m writing an extended example of this decompilation process, as an Observable active essay with a live Caffeine session embedded inside it.

special thanks

Special thanks to Chris Thorgrimsson and Lam Research, for supporting this open-source work through commercial use cases.

Beatshifting: playing music in sync and out of phase

Posted in Appsterdam, Caffeine, consulting, Context, livecoding, music, Smalltalk, SqueakJS on 27 April 2021 by Craig Latta
two Beatshifting timelines

I’ve written a Caffeine app implementation of the Beatshifting algorithm, for collaborative remote music performance that is synchronized and out-of-phase. Beatshifting uses network latency as a rhythmic element, taking offsets from beats as timestamps, with a shared metronome and score.

I was inspired to write the Beatshifting app by NINJAM, a similar system that has hosted many hours of joyous sessions. There are a few interesting twists I think I can bring to the technology, through late-binding of audio rendering.

NINJAM also synchronizes distributed streams of rhythmic music. It works by using a server to collect an entire measure of audio from the performers’ timestamped streams, stamp them all with an upcoming measure number, and send them back to each performer. Each performer’s system plays the collected measures with their start times aligned. In effect, each performer plays along with what everyone else did a measure ago. Each performer need only receive audio by the start of the upcoming measure, rather than fast enough to create the illusion of simultaneity.

Beatshifting gives more control over the session to each performer, and to an audience as well. Each performer can modify not only the local volume levels of the other performers, but also their delays and instruments. Each performer can also change the tempo and time signature of the session. A session can have an audience as well, and each audience member is really a performer who hasn’t played anything yet.

It’s straightforward to have an arbitrary number of participants in a session because Beatshifting takes the form of a web app. Each participant only needs to visit a session link in a web browser, rather than use a special digital audio workstation (DAW) app. By default, Beatshifting uses MIDI event messages instead of audio, using much less bandwidth even with a large group.

To deliver events to each participant’s web browser, Beatshifting uses the Croquet replication service. Croquet is able to replicate and synchronize any JavaScript object in every participant’s web browser, up to 60 times per second. Beatshifting uses this to provide a shared score. Music events like notes and fader movements can be scheduled into the score by any participant, and from code run by the score itself.

One piece of code the score runs broadcasts events indicating that measures have elapsed, so that the web browsers can render metronome clicks. There are three kinds of metronome clicks, for ticks, beats, and measures. For example, with a time signature of 6/8, there are two beats per measure, and three ticks per beat. Each tick is an eighth-note, so each beat is a dotted-quarter note. The sequence of clicks one hears is:

  • measure
  • tick
  • tick
  • beat
  • tick
  • tick

At a tempo of 120 beats per minute, or 240 clicks per 60,000 milliseconds, there are 250 milliseconds between clicks. Each time a web browser receives a measure-elapsed event, it schedules MIDI events for the next measure’s clicks with the local MIDI output interface. Since each web browser knows the starting time of the session in its output MIDI interface’s timescale, it can calculate the timestamps of all ensuing clicks.

When a performer plays a note, their web browser notes the offset in milliseconds between when the note was played and the time of the most recent click. The web browser then publishes an event-scheduling message, to which the score is subscribed. The score then broadcasts a note-played event to all the web browsers. Again, it’s up to each web browser to schedule a corresponding MIDI note with its local MIDI output interface. The local timestamp of that note is chosen to be the same millisecond offset from some future click point. How far in the future that click is can be chosen based on who played the note, or any other element of the event’s data. Each web browser can also choose other parameters for each event, like instrument, volume level, and panning position.
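The timestamp arithmetic amounts to something like this sketch, where lastClickTimestamp and offsetFromClickMilliseconds stand for values each web browser already tracks, and the number of clicks to delay is a per-browser choice:

  | millisecondsPerClick clicksToDelay noteTimestamp |
  millisecondsPerClick := 60000 / 240.    "250 milliseconds between clicks"
  clicksToDelay := 4.
  noteTimestamp := lastClickTimestamp
    + (clicksToDelay * millisecondsPerClick)
    + offsetFromClickMilliseconds.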

Quantities like tempo are part of the score’s state, and can be changed by any performer or audience member. Croquet ensures that the changed JavaScript variables are synchronized in all the participants’ web browsers.

With so many decisions about how music events are rendered left to each web browser, the mix that each participant hears can be wildly different. The only constants are the millisecond beat offsets of each performer’s notes. I think it’ll be fun to compare recordings of these mixes after the fact, and to make new ones from individual recorded tracks.

There’s no server that any participant needs to set up, and the Croquet service knows nothing of the Beatshifting protocol. This makes it very easy to start and join new sessions.

next steps

The current Beatshifting UI has controls for joining a session, enabling the local scheduling of metronome clicks, and changing the tempo and time signature of a session.

the current Beatshifting UI

If one is using a MIDI output interface connected to a DAW, then one may use the DAW to control instruments, volume, panning, and so on. I’d also like to provide the option of having all MIDI event rendering performed by the web browser, along with a UI for controlling and recording that. I’ve settled on the ToneJS audio framework for rendering events, and am now developing the UI.

I led a debut performance of Beatshifting as part of the Netherlands Coding Live concert series, on 23 April 2021.

I’ve written an animated 3D visualization of the Beatshifting algorithm, which can be driven from live session data. This movie is an annotated slow-motion version:

visualizing the Beatshifting algorithm

I’m excited about the creative potential of Beatshifting sessions. Please contact me if you’re interested in playing or coding for this medium!

The Big Shake-Out

Posted in Appsterdam, Caffeine, consulting, Context, livecoding, Naiad, Smalltalk, Spoon, SqueakJS on 25 March 2019 by Craig Latta

Golden Retriever shaking off water

Some of those methods were there for a very long time!

I have adapted the minimization technique from the Naiad module system to Caffeine, my integration of OpenSmalltalk with the Web and Node platforms. Now, from a client Squeak, Pharo, or Cuis system in a web browser, I can make an EditHistory connection to a history server Smalltalk system, remove via garbage collection every method not run since the client was started, and imprint needed methods from the server as the client continues to run.

This is a garbage collection technique that I had previously called “Dissolve”, but I think the details are easier to explain with a different metaphor: “shaking” loose and removing everything which isn’t attached to the system through usage. This is a form of dynamic dead code elimination. The technique has two phases: “fusing” methods that must not be removed, and “shaking” loose all the others, removing them. This has a cascading effect, as the literals of removed methods without additional references are also removed, and further objects without references are removed as well.

After unfused methods and their associated objects are removed, the subsystems that provided them are effectively unloaded. For the system to use that functionality again, the methods must be reloaded. This is possible using the Naiad module system. By connecting a client system to a history server before shaking, the client can reload missing methods from the server as they are needed. For example, if the Morphic UI subsystem is shaken away, and the user then attempts to use the UI, the parts of Morphic needed by the user’s interactions are reloaded as needed.

This technology is useful for delineating subsystems that were created without regard to modularity, and creating deployable modules for them. It’s also useful for creating minimal systems suited to a specific purpose. You can fuse all the methods run by the unit tests for an app, and shake away all the others, while retaining the ability to debug and extend the system.

how it works

Whether a method is fused or not is part of the state of the virtual machine running the system, and is reset when the virtual machine starts. On system resumption, no method is fused. Each method can be told to fuse itself manually, through a primitive interface. Otherwise, methods are fused by the virtual machine as they are run. A class called Shaker knows which methods in a typical system are essential for operation. A Shaker instance can ensure those methods are fused, then shake the system.
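From Smalltalk, the workflow might look roughly like this, with illustrative class and selector names (the actual Shaker interface may differ):

  | shaker |
  shaker := Shaker new.
  shaker fuseEssentialMethods.    "fuse the methods every running system needs"
  MyAppTests suite run.           "running the tests fuses every method they use"
  shaker shake.                   "unfused methods, and objects only they referenced, are removed"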

Shaking itself invokes a variant of the normal OpenSmalltalk garbage collector. It replaces each unfused method with a special method which, when run, knows how to install the original method from a connected history server. In effect, all unfused methods are replaced by a single method.

Reinstallation of a method uses Naiad behavior history metadata, obtained by remote messaging with a history server, to reconstruct the method and put it in the proper method dictionary. The process creates any necessary prerequisites, such as classes and shared pools. No compiler is needed, because methods are constructed from previously-generated instructions; source code is merely an optional annotation.

the benefits of livecoding all the way down

I developed the virtual machine support for this feature with Bert Freudenberg’s SqueakJS virtual machine, making heavy use of the JavaScript debugger in a web browser. I was struck by how much faster this sort of work is with a completely livecoded environment, rather than the C-based environment in which we usually develop the virtual machine. It’s similar to the power of Squeak’s virtual machine simulator. The tools, living in JavaScript, aren’t as powerful as Smalltalk-based ones, but they operate on the final Squeak virtual machine, rather than a simulation that runs much more slowly. Rebuilding the virtual machine amounts to reloading the web page in which it runs, and takes a few seconds, rather than the ordeal of a C-based build.

Much of the work here involved trial and error. How does Shaker know which methods are essential for system operation? I found out directly, by seeing where the system broke after being shaken. One can deduce some of the answer; for example, it’s obvious that the methods used by method contexts of current processes should be fused. Most of the essential methods yet to run, however, are not obvious. It was only because I had an interactive virtual machine development environment that it was feasible to restart the system and modify the virtual machine as many times as I needed (many, many times!), in a reasonable timeframe. Being able to tweak the virtual machine in real time from Smalltalk was also indispensable for debugging and feature development.

I want to thank Bert again for his work on SqueakJS. Also, many thanks to Dan Ingalls and the rest of the Lively team for creating the environment in which SqueakJS was originally built.

release schedule

I’m preparing Shaker for the next seasonal release of Caffeine, on the first 2019 solstice, 21 June 2019. I’ll make the virtual machine changes available for all OpenSmalltalk host platforms, in addition to the Web and Node platforms that Caffeine uses via the SqueakJS virtual machine. There may be alpha and beta releases before then.

If this technology sounds interesting to you, please let me know. I’m interested in use cases for testing. Thanks!

livecoding VueJS with Caffeine

Posted in Appsterdam, Caffeine, consulting, Context, Smalltalk, Spoon, SqueakJS on 30 August 2018 by Craig Latta

Vue component

Livecoding Vue.js with Caffeine: using a self-contained third-party Vue component, compiled live from the web with no offline build step.

a tour of Caffeine

Posted in Appsterdam, consulting, Context, Smalltalk, Spoon, SqueakJS on 27 August 2018 by Craig Latta

https://player.vimeo.com/video/286872152

Here’s a tour of the slides from a Caffeine talk I’m going to give at ESUG 2018. I hope to see you there!

Livecoding other tabs with the Chrome Remote Debugging Protocol

Posted in Appsterdam, consulting, Context, Smalltalk, SqueakJS on 24 July 2017 by Craig Latta

Chrome Debugging Protocol

We’ve seen how to use Caffeine to livecode the webpage in which we’re running. With its support for the Chrome Remote Debugging Protocol (CRDP), we can also use it to livecode every other page loaded in the web browser.

Some Help From the Inside

To make this work, we need to coordinate with the Chrome runtime engine. For CRDP, there are two ways of doing this. One is to communicate using a WebSocket connection; I wrote about this last year. This is useful when the CRDP client and target pages are running in two different web browsers (possibly on two different machines), but with the downside of starting the target web browser in a special way (so that it starts a conventional webserver).

The other way, possible when both the CRDP client and target pages are in the same web browser, is to use a Chrome extension. The extension can communicate with the client page over an internal port object, created by the chrome.runtime API, and expose the CRDP APIs. The web browser need not be started in a special way; it just needs to have the extension installed. I’ve published a Caffeine Helper extension, available on the Chrome Web Store. Once installed, the extension coordinates communication between Caffeine and the CRDP.

Attaching to a Tab

In Caffeine, we create a connection to the extension by creating an instance of CaffeineExtension:

CaffeineExtension new inspect

As far as Chrome is concerned, Caffeine is now a debugger, just like the built-in DevTools. (In fact, the DevTools do what they do by using the very same CRDP APIs; they’re just another JavaScript application, like Caffeine is.) Let’s open a webpage in another tab, for us to manipulate. The Google homepage makes for a familiar example. We can attach to it, from the inspector we just opened, by evaluating:

self attachToTabWithTitle: 'Google'

Changing Feelings

Now let’s change something on the page. We’ll change the text of the “I’m Feeling Lucky” button. We can get a reference to it with:

tabs onlyOne find: 'Feeling'

When we attached to the tab, the tabs instance variable of our CaffeineExtension object got an instance of ChromeTab added to it. ChromeTabs provide a unified message interface to all the CRDP APIs, also known as domains. The DOM domain has a search function, which we can use to find the “I’m Feeling Lucky” button. The CaffeineExtension>>find: method, which uses that function, answers a collection of search result objects. Each search result object is a proxy for a JavaScript DOM object in the Google page, an instance of the ChromeRemoteObject class.

In the picture above, you can see an inspector on a ChromeRemoteObject corresponding to the “I’m Feeling Lucky” button, an HTMLInputElement DOM object. Like the JSObjectProxies we use to communicate with JavaScript objects in our own page, ChromeRemoteObjects support normal Smalltalk messaging, making the JavaScript DOM objects in our attached page seem like local Smalltalk objects. We only need to know which messages to send. In this case, we send the messages of HTMLInputElement.

As with the JavaScript objects of our own page, instead of having to look up external documentation for messages, we can use subclasses of JSObject to document them. In this case, we can use an instance of the JSObject subclass HTMLInputElement. Its proxy instance variable will be our ChromeRemoteObject instead of a JSObjectProxy.

For the first message to our remote HTMLInputElement, we’ll change the button label text, by changing the element’s value property:

self at: #value put: 'I''m Feeling Happy'

The Potential for Dynamic Web Development

The change we made happens immediately, just as if we had done it from the Chrome DevTools console. We’re taking advantage of JavaScript’s inherent livecoding nature, from an environment which can be much more comfortable and powerful than DevTools. The form of web applications need not be static files, although that’s a convenient intermediate form for webservers to deliver. With generalized messaging connectivity to the DOM of every page in a web browser, and with other web browsers, we have a far more powerful editing medium. Web applications are dynamic media when people are using them, and they can be that way when we develop them, too.

What shall we do next?


browser-to-browser websocket tunnels with Caffeine and livecoded NodeJS

Posted in Appsterdam, consulting, Context, Smalltalk, SqueakJS on 4 July 2017 by Craig Latta


In our previous look at livecoding NodeJS from Caffeine, we implemented tweetcoding. Now let’s try another exercise, creating WebSockets that tunnel between web browsers. This gives us a very simple version of peer-to-peer networking, similar to WebRTC.

Once again we’ll start with Caffeine running in a web browser, and a NodeJS server running the node-livecode package. Our approach will be to use the NodeJS server as a relay. Web browsers that want to establish a publicly-available server can register there, and browsers that want to use such a server can connect there. We’ll implement the following node-livecode instructions:

  • initialize, to initialize the structures we’ll need for the other instructions
  • create server credential, which creates a credential that a server browser can use to register a WebSocket as a server
  • install server, which registers a WebSocket as a server
  • connect to server, which a client browser can use to connect to a registered server
  • forward to client, which forwards data from a server to a client
  • forward to server, which forwards data from a client to a server

In Smalltalk, we’ll make a subclass of NodeJSLivecodingClient called NodeJSTunnelingClient, and give it an overriding implementation of configureServerAt:withCredential:, for injecting new instructions into our NodeJS server:

configureServerAt: url withCredential: credential
  "Add JavaScript functions as protocol instructions to the
node-livecoding server at url, using the given credential."

  ^(super configureServerAt: url withCredential: credential)
    addInstruction: 'initialize'
    from: '
      function () {
        global.servers = []
        global.clients = []
        global.serverCredentials = []
        global.delimiter = ''', Delimiter, '''
        return ''initialized tunnel relay''}';
    invoke: 'initialize';
    addInstruction: 'create server credential'
    from: '
      function () {
        var credential = Math.floor(Math.random() * 10000)
        serverCredentials.push(credential)
        this.send((serverCredentials.length - 1) + '' '' + credential)
        return ''created server credential''}';
    addInstruction: 'install server'
    from: '
      function (serverID, credential) {
        if (serverCredentials[serverID] == credential) {
          servers[serverID] = this
          this.send(''1'')
          return ''installed server''}
      else {
        debugger;
        this.send(''0'')
        return ''bad credential''}}';
    addInstruction: 'connect to server'
    from: '
      function (serverID, port, req) {
        if (servers[serverID]) {
          clients.push(this)
          servers[serverID].send(''connected:atPort:for: '' + (clients.length - 1) + delimiter + port + delimiter + req.connection.remoteAddress.toString())
          this.send(''1'')
          return ''connected client''}
        else {
          this.send(''0'')
          return ''server not connected''}}';
    addInstruction: 'forward to client'
    from: '
      function (channel, data) {
        if (clients[channel]) {
          clients[channel].send(''from:data: '' + servers.indexOf(this) + delimiter + data)
          this.send(''1'')
          return ''sent data to client''}
        else {
          this.send(''0'')
          return ''no such client channel''}}';
    addInstruction: 'forward to server'
    from: '
      function (channel, data) {
        if (servers[channel]) {
          servers[channel].send(''from:data: '' + clients.indexOf(this) + delimiter + data)
          this.send(''1'')
          return (''sent data to server'')}
        else {
          this.send(''0'')
          return ''no such server channel''}}'

We’ll send that message immediately, configuring our NodeJS server:

NodeJSTunnelingClient
  configureServerAt: 'wss://yourserver:8087'
  withCredential: 'shared secret';
  closeConfigurator

On the NodeJS console, we see the following messages:

server: received command 'add instruction'
server: adding instruction 'initialize'
server: received command 'initialize'
server: evaluating added instruction 'initialize'
server: initialized tunnel relay
server: received command 'add instruction'
server: adding instruction 'create server credential'
server: received command 'add instruction'
server: adding instruction 'install server'
server: received command 'add instruction'
server: adding instruction 'connect to server'
server: received command 'add instruction'
server: adding instruction 'forward to client'
server: received command 'add instruction'
server: adding instruction 'forward to server'

Now our NodeJS server is a tunneling relay, and we can connect servers and clients through it. We’ll make a new ForwardingWebSocket class hierarchy:

Object
  ForwardingWebSocket
    ForwardingClientWebSocket
    ForwardingServerWebSocket

Instances of ForwardingClientWebSocket and ForwardingServerWebSocket use a NodeJSTunnelingClient to invoke our tunneling instructions.

We create a new ForwardingServerWebSocket with newThrough:, which requests new server credentials from the tunneling relay, and uses them to install a new server. Another new class, PeerToPeerWebSocket, provides the public message interface for the framework. There are two instantiation messages:

  • toPort:atServerWithID:throughURL: creates an outgoing client that uses a ForwardingClientWebSocket to connect to a server and exchange data
  • throughChannel:of: creates an incoming client that uses a ForwardingServerWebSocket to exchange data with a remote outgoing client.

Incoming clients are used by ForwardingServerWebSockets to represent their incoming connections. Each ForwardingServerWebSocket can provide services over a range of ports, as a normal IP server would. To connect, a client needs the websocket URL of the tunneling relay, a port, and the server ID assigned by the relay.
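Putting it together, an outgoing client might be created like this, with placeholder values for the relay URL, port, and server ID:

  socket := PeerToPeerWebSocket
    toPort: 8000
    atServerWithID: 0
    throughURL: 'wss://yourserver:8087'.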

As usual, you can examine and try out this code by clearing your browser’s caches for caffeine.js.org (including IndexedDB), and visiting https://caffeine.js.org/. With browsers able to communicate directly, there are many interesting things we can build, including games, chat applications, and team development tools. What would you like to build?

retrofitting Squeak Morphic for the web

Posted in Appsterdam, consulting, Context, Smalltalk, Spoon, SqueakJS on 30 June 2017 by Craig Latta


Last time, we explored a way to improve SqueakJS UI responsiveness by replacing Squeak Morphic entirely, with morphic.js. Now let’s look at a technique that reuses all the Squeak Morphic code we already have.

many worlds, many canvases

Traditionally, Squeak Morphic has a single “world” where morphs draw themselves. To be a coherent GUI, Morphic must provide all the top-level effects we’ve come to expect, like dragging windows and redrawing them in their new positions, and redrawing occluded windows when they are brought to the top. Today, this comes at an acceptable but noticeable cost. Until WebAssembly changes the equation again, we want to do all we can to shift UI work from Squeak Morphic to the HTML5 environment hosting it. This will also make the experience of using SqueakJS components more consistent with that of the other elements on the page.

Just as we created an HTML5 canvas for morphic.js to use in the last post, we can do so for individual morphs. This means we’ll need a new Canvas subclass, called HTML5FormCanvas:

Object
  ...
    Canvas
       FormCanvas
         HTML5FormCanvas

An HTML5FormCanvas draws onto a Form, as instances of its parent class do, but instead of flushing damage rectangles from the Form onto the Display, it flushes them to an HTML5 canvas. This is enabled by a primitive I added to the SqueakJS virtual machine, which reuses the normal canvas drawing code path.

Accompanying HTML5FormCanvas are new subclasses of PasteUpMorph and WorldState:

Object
  Morph
    ...
      PasteUpMorph
        HTML5PasteUpMorph

Object
  WorldState
    HTML5WorldState

HTML5PasteUpMorph provides a message interface for other Smalltalk objects to create HTML5 worlds, and access the HTML5FormCanvas of each world and the underlying HTML5 canvas DOM element. An HTML5WorldState works on behalf of an HTML5PasteUpMorph, to establish event handlers for the HTML5 canvas (such as for keyboard and mouse events).

HTML5 Morphic in action

You don’t need to know all of that just to create an HTML5 Morphic world. You only need to know about HTML5PasteUpMorph. In particular, (HTML5PasteUpMorph class)>>newWorld. All of the traditional Squeak Morphic tools can use HTML5PasteUpMorph as a drop-in replacement for the usual PasteUpMorph class.
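Creating a world and putting a morph into it can be as simple as this sketch (only newWorld comes from HTML5PasteUpMorph; the rest is ordinary Morphic):

  | world |
  world := HTML5PasteUpMorph newWorld.
  world addMorph: (SystemWindow labelled: 'workspace').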

There are two examples of single-window Morphic worlds in the current Caffeine release, for a workspace and classes browser. I consider these two tools to be the “hello world” exercise for UI framework experimentation, since you can use them to implement all the other tools.

We get an immediate benefit from the web browser handling window movement and clipping for us, with opaque window moves rendering at 60+ frames per second. We can also interleave Squeak Morphic windows with other DOM elements on the page, which enables a more natural workflow when creating hybrid webpages. We can also style our Squeak Morphic windows with CSS, as we would any other DOM element, since as far as the web browser is concerned they are just HTML5 canvases. This makes effects like the rounded corners and window button trays that Caffeine uses very easy.

Now, we have flexible access to the traditional Morphic tools while we progress with adapting them to new worlds like morphic.js. What shall we build next?

Pharo comes to Caffeine and SqueakJS

Posted in Appsterdam, consulting, Context, GLASS, Naiad, Seaside, Smalltalk, Spoon, SqueakJS on 29 June 2017 by Craig Latta


The Caffeine web livecoding project has added Pharo to the list of Smalltalk distributions it runs with SqueakJS. Bert Freudenberg and I spent some time getting SqueakJS to run Pharo at ESUG 2016 in Prague last summer, and it mostly worked. I think Bert got a lot further since then, because now there are just a few Pharo primitives that need implementing. All I’ve had to do so far this time is make a minor fix to the input event loop and add the JavaScript bridge. The bridge now works from Pharo, and it’s the first time I’ve seen that.

Next steps include getting the Tether remote messaging protocol and Snowglobe app streaming working between Pharo and Squeak, all running in SqueakJS. Of course, I’d like to see fluid code-sharing of all kinds between Squeak, Pharo, and all the other Smalltalk implementations.

So, let the bugfixing begin! :)  You can run it at https://caffeine.js.org/pharo/. Please do get in touch if you find and fix things. Thanks!
