Archive for consulting

Backend Caffeine with Node.js and Tweetcoding

Posted in Appsterdam, consulting, Context, Smalltalk, Spoon, SqueakJS on 27 June 2017 by Craig Latta

We’ve seen a couple of examples of frontend Caffeine development, using jQuery UI and voxel.js. Let’s look at something on the backend, using Node.js and something I like to call tweetcoding.

Social Livecoding

Livecoding means having all the capabilities of your development environment while your system is live. Runtime is all the time. Smalltalk programmers have always enjoyed this way of working. It is particularly effective in domains such as music performance, where the ability to make changes while your system is running is critical, both for fixing bugs and for artistic expression. One of my artistic goals with musical livecoding (writing code that makes live music for an audience) is audience accessibility. I want the audience to have some understanding of what’s going on in my code, even if they aren’t coders (yet).

Another aspect I like to incorporate is audience participation. If the audience actually does understand what I’m doing in my code, why not let them join in? Social media has become a ubiquitous communication platform; billions of people are using it with prose and pictures. I think the time has come to use it with live code as well. Nearly everyone in the audience has a powerful computer with social media access in a pocket. We can all livecode together.

Let’s Do This

We need three things to make this happen: a way for people to submit code, something that listens for that code and evaluates it, and a shared artifact everyone can see that shows the code’s effects. I like Twitter as a coding medium, for its minimalism. It also has a culture of hashtag use, and a streaming API, which make listening for code straightforward. For the shared artifact, we can use something viewed with Caffeine in a web browser.

For our first example, let’s use something simple for our coding domain, like turtle graphics. A person can tweet instructions to a turtle which draws on a square piece of paper, 100 units on a side. There are five commands:

  • color, for the turtle to change its pen to the given color
  • down, for the turtle to put its pen down on the paper
  • rotate, for the turtle to change direction
  • move, for the turtle to move some distance
  • up, for the turtle to take its pen up from the paper

We’ll say that the turtle can be identified by some ID, and is initially at the center of the paper. A tweetcoded instruction for the turtle might look like:

[screenshot: an example tweetcoding tweet]

Tweetcoding tweets start with the #tweetcoding hashtag, so people seeing one for the first time can look at others with that hashtag, to see what it’s about. This is followed by hashtags identifying our turtle-graphics protocol, and the turtle we’re using. We end the tweet with a drawing instruction, using our instruction set above.
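
To make the command set concrete, here’s a rough sketch (in plain JavaScript, not Caffeine’s actual implementation) of a turtle interpreting those five commands against an HTML5 canvas 2D context, starting at the center of the 100-unit paper:

function makeTurtle(context) {
  // Start at the center of the paper, facing right, with the pen up.
  var turtle = {x: 50, y: 50, heading: 0, penDown: false};
  return {
    color: function (c) {context.strokeStyle = c;},
    down: function () {turtle.penDown = true;},
    up: function () {turtle.penDown = false;},
    rotate: function (degrees) {turtle.heading += degrees * Math.PI / 180;},
    move: function (distance) {
      var x = turtle.x + (distance * Math.cos(turtle.heading));
      var y = turtle.y + (distance * Math.sin(turtle.heading));
      if (turtle.penDown) {
        context.beginPath();
        context.moveTo(turtle.x, turtle.y);
        context.lineTo(x, y);
        context.stroke();
      }
      turtle.x = x;
      turtle.y = y;
    }
  };
}

A tweeted instruction then just selects one of these five functions.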

Livecoding NodeJS

To detect these tweets, we’ll use the Twitter streaming API from Node.js. In the spirit of livecoding, we won’t use an application-specific Node.js module for this directly, but instead inject JavaScript code live into a Node.js instance, from Caffeine in a web browser. I’ve written a node-livecode Node.js module which takes commands over a websocket. It starts with three instructions:

  • require, for loading another Node.js module into itself with npm
  • add instruction, for adding an instruction to itself
  • eval, for evaluating JavaScript code

You can see the implementation at GitHub. It can use SSL to keep injected code secure in transit, so you may want to set up a certificate for your server with Let’s Encrypt. Also note that it exposes the listening server as a global variable, so that you can use it in your injected code.
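
To make the idea concrete, here’s a minimal sketch of such a server (this is not the node-livecode implementation itself). It uses the ws package; the port, the instruction parameter names, the reply format, and skipping the npm installation step are all simplifying assumptions:

var WebSocket = require('ws');

// Table of named instructions; clients can extend it at runtime.
var instructions = {
  'require': function (parameters) {
    // The real module installs the package with npm first; here we assume
    // it is already installed, then run the follow-up code, if any.
    require(parameters.package);
    if (parameters.then) eval(parameters.then);
  },
  'add instruction': function (parameters) {
    instructions[parameters.name] = eval('(' + parameters.function + ')');
  },
  'eval': function (parameters) {
    return eval(parameters.code);
  }
};

var server = new WebSocket.Server({port: 8087});
global.server = server;   // exposed so injected code can reach the listening server

server.on('connection', function (socket) {
  socket.on('message', function (frame) {
    var command = JSON.parse(frame);
    // A real server would check command.credential (the shared secret) here.
    // Using call() makes 'this' the requesting socket, so instructions like
    // 'start listening' below can capture it.
    var result = instructions[command.verb].call(socket, command.parameters);
    socket.send(JSON.stringify({result: String(result)}));
  });
});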

Once we have node-livecode on a server listening for commands, we can inject code into it from Caffeine. First we’ll inject code to listen to #tweetcoding tweets from Twitter:

| websocket |

websocket := JS WebSocket newWithParameters: {'wss://yourserver:port'}.
JS top at: #websocket put: websocket.

websocket
  at: #onmessage
  put: [:message |
    Transcript
      cr;
      nextPutAll: message data asString;
      endEntry];
  at: #onopen
  put: (
    (JS Function) new: '
      window.top.websocket.send(JSON.stringify({
        credential: ''shared secret'',
        verb: ''require'',
        parameters: {
          package: ''node-tweet-stream'',
          then: `
            var Twitter = require(''node-tweet-stream'')
              , twitter = new Twitter({
                  consumer_key: '''',
                  consumer_secret: '''',
                  token: '''',
                  token_secret: ''''})

            twitter.on(
              ''tweet'',
              function (tweet) {
                if (instructions[''broadcast'']) {
                  instructions[''broadcast''](tweet)}})

            twitter.on(
              ''error'',
              function (err) {
                console.log(''error in uploaded code'')})

            twitter.track(''#tweetcoding #turtlegraphics'')`}}))')

The example above uses handwritten JavaScript code, but we could also use Caffeine’s JavaScript code generation to produce it from Smalltalk code (see the method MethodNode>>javaScript for the implementation). Also, since we’ll be injecting several things, it would be nice to have a more compact way of writing it. Let’s move that workspace code to a class.

Next we’ll inject a new instruction, for initializing a set of drawing-command listeners:

| client |

client := (
  NodeJSLivecodingClient
    at: 'wss://yourserver:port'
    withCredential: 'shared secret').

client
  addInstruction: 'initialize listeners'
  from: '
    function () {
      global.listeners = []
      return ''initialized listeners''}'

To round out our tweetcoding protocol, we’ll add instructions for accepting listeners, and broadcasting tweets detected from Twitter:

client
  addInstruction: 'start listening'
  from: '
    function () {
      listeners.push(this)
      return ''added listener''}';
  addInstruction: 'stop listening'
  from: '
    function () {
      listeners.splice(listeners.indexOf(this), 1);
      return ''removed listener''}';
  addInstruction: 'broadcast'
  from: '
    function (payload) {
      listeners.forEach(
        function (listener) {listener.send(payload)})
      return ''broadcasted''}'

Putting It All Together

Now we can write other Smalltalk classes which subscribe to tracked tweets from the server, and collaborate to do something interesting with them in the web browser. I’ve implemented the following classes:

Object
  NodeJSLivecodingClient
    TweetcodingClient
      TurtleGraphicsTweetcodingClient
  Turtle
  Tweet

Instances of NodeJSLivecodingClient can inject code into a node-livecode server, and invoke code added by other clients. Instances of TweetcodingClient can also set tracking filters for tweets, and process matching tweets when they occur. Instances of TurtleGraphicsTweetcodingClient can also control Turtles, which can draw on canvases in Caffeine windows. Instances of Tweet bundle up the text and metadata of tweets. To get the implementation, clear your browser’s cache for caffeine.js.org (including IndexedDB), and reload https://caffeine.js.org/.

I’m also running a node-livecode server injected with turtle-graphics tweetcoding code, at wss://frankfurt.demo.blackpagedigital.com:8087/. Once you get connected (and tweets are flowing), you might see something like this:

[screenshot: turtle-graphics drawings produced by tweetcoded instructions, in a Caffeine window]

After developing this system, I’ve realized I don’t really need to run SqueakJS from within Node.js; just injecting code into it is fine. There are many possibilities here. :)

Have fun, and please let me know what you think!

 

Caffeine in 3D with voxel.js

Posted in Appsterdam, consulting, Context, Naiad, Smalltalk, Spoon, SqueakJS on 24 June 2017 by Craig Latta

Since Caffeine is powered by SqueakJS, you can create mashups with any other JavaScript frameworks you like. Let’s take a simple look at 3D graphics using voxel.js, an open-source voxel game building toolkit.

hello blocks

Following the “hello world” example at voxeljs.com, we generate a simple Minecraft-like blocks world in which we can walk around and dig (you can visit it here). The example gives us a JavaScript file, builtgame.js, that we can also use from Caffeine.

As generated, builtgame.js evaluates its createGame function at load time. This creates an HTML5 canvas, initializes WebGL, and begins the game when the hosting page is loaded. We want to save those steps for SqueakJS to initiate, and we also want to use a canvas of our own, in a Caffeine window.

hooks for Caffeine

We can achieve the first part by changing builtgame.js so that it just puts createGame somewhere SqueakJS can get to it, instead of evaluating it. We can create a property on the browser DOM window for this:

window.game = createGame;

Normally we would edit the source projects from which builtgame.js is generated, rather than builtgame.js directly (properly forking the corresponding repositories), but for this example we’ll just go ahead.

Voxel.js uses the three.js framework as its WebGL interface. The three.js WebGL renderer accepts an HTML5 canvas parameter for its initialization function. The second change we’ll make to builtgame.js is to pass a canvas set by SqueakJS in another window property:

View.prototype.createRenderer = function() {
  this.renderer = new THREE.WebGLRenderer({
    canvas: window.gameCanvas !== undefined ? window.gameCanvas : undefined,
    antialias: true});
  this.renderer.setSize(this.width, this.height);
  this.renderer.setClearColorHex(this.skyColor, 1.0);
  this.renderer.clear();};

Finally, we’ll change the game rendering initialization function, to save a reference to the voxel.js renderer’s event emitter, so that we can tell it to pause from SqueakJS:

Game.prototype.initializeRendering = function(opts) {
  var self = this;
  if (!opts.statsDisabled) self.addStats();
  window.addEventListener('resize', self.onWindowResize.bind(self), false);
  self.ee = (
    requestAnimationFrame(window).on(
      'data',
      function(dt) {
        self.emit('prerender', dt);
        self.render(dt);
        self.emit('postrender', dt);
      }));
  if (typeof stats !== 'undefined') {
    self.on(
      'postrender',
      function() {
        stats.update();});}}

on the Smalltalk side

Now, in SqueakJS, we can create a VoxelJS class:

Object
  subclass: #VoxelJS
  instanceVariableNames: ''
  classVariableNames: ''
  poolDictionaries: ''
  category: 'Hex-HTML5-WebGL-VoxelJS'.

VoxelJS class instanceVariableNames: 'game'

We’ll give it a class-side method to load and initialize voxel.js, with a Caffeine window and canvas for it to use:

initialize

| canvas |

Webpage current loadScriptFrom: 'js/voxeljs/builtgame.js'.
canvas := Webpage createWorldOfKind: 'voxeljs'.
canvas styleAt: #borderRadius put: '10px'.

(Webpage current)
  windowizeElementNamed: canvas window id
  closingWith: [
    self pause.
    Webpage current top at: #gameCanvas put: nil].

canvas window dragWith: canvas window windowButtonsTray moveButton.
game := (
  (Webpage current top)
    at: #gameCanvas put: canvas;
    game: {#container -> canvas window})

We’ll also add a method for pausing the voxel.js renderer, using the ee property we added to the game rendering initialization in builtgame.js:

pause
  game ee pause.
  (JQuery at: #fps) element remove

In a workspace, we send our initialization message:

VoxelJS initialize

Now we have our first voxel world, running in a Caffeine window that we can easily close, rather than taking over the whole screen. If you clear your browser cache (including IndexedDB) for caffeine.js.org, you can reload the Caffeine page to see this code in action.

Please let me know if you get this far!

 

Caffeine :: Livecode the Web!

Posted in Appsterdam, consulting, Context, Naiad, Smalltalk, Spoon, SqueakJS on 22 June 2017 by Craig Latta

For the impatient… here it is.

Back to the Future, Again

With the arrival of Bert Freudenberg’s SqueakJS, it was finally time for me to revisit the weird and wonderful world of JavaScript and web development. My previous experiences with it, in my consulting work, were marked by awkward development tools, chaotic frameworks, and scattered documentation. Since I ultimately rely on debuggers to make sense of things, my first question when evaluating a development environment is “What is debugging like?”

Since I’m a livecoder, I want my debugger to run in the web browser I’m using to view the site I’m debugging. The best in-browser debugger I’ve found, Chrome DevTools (CDT), is decent if you’re used to a command-line interface, but lacking as a GUI. With Smalltalk, I can open new windows to inspect objects, and keep them around as those objects evolve. CDT has an object explorer integrated into its read-eval-print loop (REPL), and a separate tab for inspecting DOM trees, but using them extensively means a lot of scrolling in the REPL (since asynchronous console messages show up there as well) and switching between tabs. CDT can fit compactly onto the screen with the subject website, but doesn’t make good use of real estate when it has more. This interrupts the flow of debugging and slows down development.

The Pieces Are All Here

With SqueakJS, and its JavaScript bridge, we can make something better. We can make an in-browser development environment that compares favorably with external environments like WebStorm. I started from a page like try.squeak.org. The first thing we need is a way to move the main SqueakJS HTML5 canvas around the page. I found jQuery UI to be good for this, with its “draggable” effect. While we’re at it, we can also put each of Squeak‘s Morphic windows onto a separate draggable canvas. This moves a lot of the computation burden from SqueakJS to the web browser, since SqueakJS no longer has to do window management. This is a big deal, since Morphic window management is the main thing making modern Squeak UIs feel slow in SqueakJS today.

SqueakJS provides a basic proxy class for JavaScript objects, called JSObjectProxy. Caffeine has an additional proxy class called JSObject, which provides additional reflection features, like enumerating the subject JS object’s properties. It’s also a good place for documenting the behavior of the JS objects you’re using. Rather than always hunting down the docs for HTMLCanvasElement.getContext on MDN, you can summarize things in a normal method comment, in your HTMLCanvasElement class in Smalltalk.

Multiple Worlds

With a basic window system based on HTML5 canvases, we can draw whatever we like on those canvases, using the SqueakJS bridge and whatever other JS frameworks we care to load. I’ve started integrating a few frameworks, including React (for single-page-app development), three.js (for WebGL 3D graphics development), and morphic.js (a standalone implementation of Morphic which is faster than what’s currently in Squeak). I’ll write about using them from Caffeine in future blog posts.

Another framework I’ve integrated into Caffeine is Snowglobe (for Smalltalk app streaming and other remote GUI access), which I wrote about here previously. I think the Snowglobe demo is a lot more compelling when run from Caffeine, since it can co-exist with other web apps in the same page. You can also run multiple Snowglobes easily, and drag things between them. I’ll write more about that, too.

Fitting Into the JavaScript Ecosystem

To get the full-featured debugger UI I wanted, I wrote a Chrome extension called Caffeine Helper, currently available on the Chrome Web Store. It exposes the Chrome Debugging Protocol (CDP) support in the web browser to SqueakJS, letting it do whatever the CDT can do (CDT, like SqueakJS, is just another JavaScript-powered web app). The support for CDP that I wrote about previously uses a WebSocket-based CDP API that requires Chrome to be started in a special way. The Caffeine Helper extension provides a JavaScript API, without that requirement.
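
Whichever transport is used, the commands have the same shape: a JSON message with an id, a method, and parameters, answered by a message carrying the same id. A minimal sketch, assuming debuggerSocket is an already-open CDP WebSocket:

var command = {
  id: 1,                                    // the answer echoes this id back
  method: 'Runtime.evaluate',               // evaluate an expression in the page
  params: {expression: 'document.title'}
};
debuggerSocket.send(JSON.stringify(command));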

I also wrote support for generating Smalltalk code from JavaScript, using the esprima parsing framework, and vice-versa. With my debugger and code generation, I’m going to try developing for some file-based JS projects, using Smalltalk behind the scenes and converting to and from JavaScript when necessary. I think JS web development might actually not drive me crazy this way. :)

Please Try It Out!

So, please check out Caffeine, at caffeine.js.org! I would very much appreciate your feedback. I’m particularly interested to hear your use cases, as I plan the next development steps. I would love to have collaborators, too. Let’s build!

App streaming with Snowglobe

Posted in Appsterdam, consulting, Context, Naiad, Smalltalk, Spoon on 31 October 2016 by Craig Latta

Now that we’ve seen how to run Smalltalk in a web browser, clone web Smalltalk as a desktop app, and send remote messages between Smalltalks, let’s look at an application of these technologies.

app streaming

App streaming is a way of delivering the user experience of an app without actually running the app on the user’s machine. The name is an allusion to music and video streaming; you get to experience the asset immediately, without waiting for it to download completely. Streaming an app also has the benefit of avoiding installation, something which can be problematic to do (and to undo). This is nice when you just want to demo an app, before deciding to install it.

Another advantage of app streaming is that the app can run on a much faster machine than the user’s, or even on a network of machines. Social networks are a crude example of app streaming; there are massive backends working behind your web browser, crunching away on all that graph data. Typically, though, app streaming involves an explicit visual component, with the user’s display and input devices standing in for the normal ones. The goal is to make using a new app as simple as playing an online video.

distributing Smalltalk user interface components

Everything in Smalltalk happens by objects sending messages to each other. With a remote messaging framework like Tether, we can put some of the objects in a user interface on a remote machine. Snowglobe is an adaptation of Squeak‘s Morphic user interface framework which runs Squeak on a server, but uses SqueakJS in a client web browser as the display. This is an easy way to recast a Smalltalk application as a web app, while retaining the processing speed and host platform access of the original.

Morphic is built around a display loop, where drawable components (morphs) are “stepped” at some frequency, like a flipbook animation. Normally, drawing is done on a single morph that corresponds to the display of the machine where Squeak is running. Snowglobe adds a second display morph which is Tether-aware. When drawing to this tethered display morph, the app server translates every display operation into a compact remote message.

To maximize speed, Morphic already tries to do its drawing with as few operations as possible (e.g., avoiding unnecessary redrawing). This is especially important when display operations become remote, since network transmission is orders of magnitude slower than local drawing. Since the tethered display morph also lives in a Smalltalk object memory, we can optimize drawing operations involving graphics that are known to both sides of the connection. For example, when changing the mouse cursor to a resize icon when hovering over the corner of a window, there’s no need to send the icon over the wire, since the displaying system already has it. Instead, we can send a much smaller message requesting that the icon be shown.

For full interaction, we also need to handle user input events going back the other way. Snowglobe co-opts Morphic’s user input handling as well. With user input and display forwarded appropriately together, we achieve the seamless illusion that our app is running locally, either as a single morph amongst other local morphs, or using the entire screen.

going beyond screen-sharing

Protocols like VNC do the remote display and user input handling we’ve discussed, although they are typically more complicated to start than clicking a link in a web browser. But since both systems in a Snowglobe session are Smalltalk, we can go beyond simple screen sharing. We can use Tether to send any remote messages we want, so either side can modify the app-streaming behavior in response to user actions. For example, the user might decide to go full-screen in the web browser displaying the app, prompting SqueakJS to notify the remote app, which could change the way the app displays itself.

try it for yourself

I’ve set up an AWS server running the Squeak IDE, reachable from SqueakJS in your web browser. Be gentle… there’s only one instance running (actually two, one in Europe and one in North America, chosen for you automatically by Amazon). Please check it out and let me know what you think!

 

Tether: remote messaging between Smalltalks with WebSockets

Posted in Appsterdam, consulting, Context, Naiad, Smalltalk, Spoon on 30 October 2016 by Craig Latta

In my previous post, I introduced a new topology for distributed computation with Smalltalk: an object memory in SqueakJS in a web browser, paired by remote-messaging connection with another object memory in Cog in a native app, and connected with other SqueakJS/Cog pairs on other physical machines. The remote-messaging protocol that the memories speak is called Tether. I’ll go into a few details here.

passive frame-based messaging with WebSockets

We begin with a major constraint imposed by running in a web browser: we’re sandboxed, unable to listen for network connections. We must initiate a remote-messaging conversation by connecting to a listening server on a traditional operating system. Over the last few years, the W3C WebSockets standard has received widespread support in every major web browser. We can rely on the ability to create JavaScript WebSocket objects with the SqueakJS JavaScript bridge, and we can easily implement the WebSocket API in Smalltalk on non-web platforms using normal TCP sockets.

WebSockets use callback functions to deliver messages, or frames of bytes, between conversants. The Tether protocol imposes a structure on those bytes, which are processed on each side of the connection by instances of class Tether. The first four bytes are a 32-bit tag which indicates the Smalltalk class which should interpret the rest of the frame. In the case of a remote message, this is class Tether. Successive bytes indicate the message selector to perform, the receiver of the message, and the message parameters.

The message receiver is expressed as a 32-bit key into a table of exposed objects, maintained by the Tether instance handling the connection. The Tether instances themselves expose their identities to each other at the beginning of the conversation. Objects that aren’t specified by reference to an exposed-object table (such as message selectors) are expressed through serialization.
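
On the receiving side this is ordinary WebSocket handling. Here’s a minimal sketch that reads only the leading 32-bit class tag described above and hands the rest of the frame off for decoding; the byte order and the dispatchFrame function are assumptions, and the layout of the selector, receiver key, and parameters is left opaque:

// 'socket' is assumed to be the open WebSocket carrying Tether frames.
socket.binaryType = 'arraybuffer';
socket.onmessage = function (event) {
  var view = new DataView(event.data);
  var classTag = view.getUint32(0);     // which class interprets this frame
  var body = event.data.slice(4);       // selector, receiver key, and parameters
  dispatchFrame(classTag, body);        // hypothetical lookup of the tagged class's decoder
};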

live serialization

The fact that both sides of the conversation are objects in live Smalltalk systems affords many optimizations that aren’t possible when serializing objects to a static file. For this reason, I call this live serialization. For example, when transferring a compiled method between systems, if the method’s literals are objects which already exist in the receiving system, we may write references to them rather than serializing them.

We can also take special measures when the receiving system is missing the classes whose instances we want to transfer. Instead of assuming in advance that the receiving system lacks the classes whose instances we’re transferring, and including them in our payload, as a static serialization file would, we can transfer such classes only on demand. This yields much higher accuracy in object transfer, and far fewer bytes sent over the wire. Since live serialization is part of a complete remote messaging protocol, any messages at all can be sent from either side to complete an object transfer.

With a receiver, selector, and parameters specified, the receiving system can perform the message sent from the sending system. Each object in the system is responsible for serializing itself over a Tether. If the answer to the remote message is a literal object, like a symbol or integer, it will write the bytes of its value on the Tether instance handling the message. If the answer isn’t a literal object, by default it will write a reference to itself. Developers can choose to pass objects by value or by reference as they see fit.

scheduling

Tether performs every remote message in a distinct process, so that no system blocks waiting for an answer to be sent back over the network. Each remote message-send is assigned a unique identifier, and each answer is sent with the ID of its message-send as metadata, so that it can be delivered to the correct waiting process.

This scheme extends the traditional imperative messaging semantics of Smalltalk to any number of machines, and each message-send may involve receiver and parameter objects which are all on different machines. Every message-send may invoke any number of further remote messages before answering.
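
Here’s the correlation idea sketched in JavaScript terms rather than Tether’s own wire format (the JSON payload is purely illustrative): each outgoing send gets a fresh id and a waiting callback, and each incoming answer is routed by the id it carries:

var nextId = 0;
var pending = {};   // id -> callback waiting for the answer

function sendRemoteMessage(socket, receiver, selector, parameters, onAnswer) {
  var id = ++nextId;
  pending[id] = onAnswer;
  socket.send(JSON.stringify(
    {id: id, receiver: receiver, selector: selector, parameters: parameters}));
}

function handleAnswer(answer) {        // answer.id names the send it belongs to
  var waiting = pending[answer.id];
  if (waiting) {
    delete pending[answer.id];
    waiting(answer.value);
  }
}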

transparent proxies

An object which represents an object on a remote system is called a proxy. Ideally, it forwards every message sent to it to the remote object, and so provides the illusion of transparent remote messaging. Remote messaging in Smalltalk is often done by using a proxy class which inherits and implements as few messages as possible, and overriding the handler message sent by the virtual machine when a message is not understood. This provides enough coverage to do many useful things, but some messages handled specially by the virtual machine are not forwarded. Some use cases, like remote debugging, require forwarding even those special messages.

To achieve total forwarding coverage, we must modify the virtual machine. There are some situations where this is undesirable (e.g., a lack of tools or expertise, or a requirement to use a past virtual machine unmodified). Tether uses the “does not understand” tactic above in these situations, but provides a modified virtual machine for the rest. In this virtual machine, message forwarding is triggered during method lookup for instances of a specific proxy class (which can be located anywhere in the class hierarchy). Method caching and methods implemented directly as virtual machine instructions are also appropriately adapted. There are a few messages which proxies must understand locally, to participate in live serialization. These messages are also handled specially by the virtual machine.
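
For comparison, JavaScript offers a rough analogue of the “does not understand” tactic in its Proxy object, which can trap property accesses it has no local implementation for and forward them somewhere else. This illustrates only the forwarding idea, not how Tether’s proxies are actually built:

function remoteProxy(sendToRemote) {
  return new Proxy({}, {
    get: function (target, name) {
      // Answer a function that forwards the "message" to the remote object.
      return function () {
        return sendToRemote(
          {selector: name, arguments: Array.prototype.slice.call(arguments)});
      };
    }
  });
}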

see for yourself

Tether is an integral part of the Context project from Black Page Digital. A demo of remote messaging between SqueakJS and Cog is available. In tomorrow’s post, I’ll discuss an everyday application of remote messaging.

 

Suspend in the browser, resume on the desktop.

Posted in Appsterdam, consulting, Context, Naiad, Smalltalk, Spoon on 29 October 2016 by Craig Latta

In yesterday’s post, I showed how Bert Freudenberg’s SqueakJS can extend Smalltalk‘s traditional host platform integration for the web, with DOM access through my class ThisWebpage. With this we can build any front-end HTML5 webapp behavior that any other JavaScript framework can. Another thing we can do with ThisWebpage is install the native-app version of Squeak, giving us the full speed of Eliot Miranda’s Cog virtual machine.

Running Smalltalk on the web is a satisfying thing, but there are times when you need more speed or host operating system access than JavaScript in a web browser can provide. Eventually, we’ll be able to embed native code in web browsers using WebAssembly, which will be a big improvement. Even then, though, there will still be the platform access compromises that come with a sandboxed web environment. So, one obvious thing to do with our JavaScript access is move our Smalltalk processes to the desktop, where we can use the high-performance Cog virtual machine.

platform portability

Smalltalk is an image-based system. In addition to modeling an idealized processor, with its own instruction set, the Smalltalk virtual machine models the processor’s memory. The virtual machine can make a snapshot of this memory, as a normal host platform file called an image. This snapshot captures the complete execution state of the Smalltalk system (the object memory), including the program counters of every process that was running at the time. The virtual machine can also resume this snapshot, restoring that system state so that the system may continue. This is similar to the “sleep” function of a laptop.

Since the virtual machine provides a consistent platform abstraction above whatever host is running it, we can suspend the system on one host and resume it on another. Smalltalk programmers have taken great advantage of this for years, writing single systems which run on multiple operating systems (e.g., Macintosh and Windows). Java attempted to provide consistent execution semantics, under the tagline “write once, run anywhere” (with mixed results), but this did not extend to a continuous memory image. We can make an object memory snapshot with SqueakJS in a web browser, and resume it with Cog in a native app.

exporting ourselves

SqueakJS uses the HTML5 “indexed database” feature for persistent storage, making it look like a normal filesystem. We can write a snapshot of the running SqueakJS system this way. We can then use ThisWebpage to download the snapshot to the user’s machine. We add a link to the document in which SqueakJS runs, and synthesize a click on it, invoking the download.
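
In plain JavaScript terms, that download step is the familiar link-and-click technique; here snapshotBytes stands in for the snapshot read back from the indexed database:

function downloadSnapshot(snapshotBytes, filename) {
  var blob = new Blob([snapshotBytes], {type: 'application/octet-stream'});
  var link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = filename;               // suggested name for the saved image file
  document.body.appendChild(link);
  link.click();                           // the synthesized click that starts the download
  document.body.removeChild(link);
  URL.revokeObjectURL(link.href);
}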

We also use ThisWebpage to detect the user’s host operating system, and download a platform-specific Cog installer. I’ve written installers for macOS, Windows, and Linux. Each one downloads the Cog virtual machine, installs a platform handler for “squeak://” URLs that runs Cog, and invokes a squeak:// URL that runs our snapshot. This URL encodes the name of the snapshot file, Cog parameters (such as whether or not to run the system headless), and a base64-encoded JSON dictionary of other parameters that the Smalltalk object memory can process.
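
The exact squeak:// syntax isn’t spelled out here, so the path layout and parameter names in this sketch are assumptions; the point is just that the extra parameters travel as base64-encoded JSON:

var extras = btoa(JSON.stringify({server: 'wss://example.org:8080'}));   // hypothetical keys
var url = 'squeak://' + encodeURIComponent('snapshot.image')
  + '?headless=true&parameters=' + extras;
window.location = url;   // hands the URL to the installed squeak:// handler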

making contact

Now our object memory is running both in the browser in SqueakJS, and in Cog as a native app. The current Black Page Digital object memory for Squeak connects the two with a remote-messaging network service. When the SqueakJS-hosted instance invokes the local one, it also forks a process that periodically attempts a remote-messaging connection. When the memory resumes on a non-web host, it starts a WebSocket-based server to provide remote messaging service.

With the two Squeaks connected, objects in one can send messages to objects in the other, creating a distributed system. The Cog-based system now has access to the DOM of the web browser through ThisWebpage in the SqueakJS-based system, and the SqueakJS-based system has unsandboxed access to the host operating system through the Cog-based system. In particular, SqueakJS can use Cog to run network servers, so one can create a distributed system from SqueakJS instances running in many web browsers on many machines.

In tomorrow’s post, I’ll discuss the remote messaging protocol in more detail. After that, I’ll introduce a distributed network service that takes advantage of it.

 

SqueakJS changes its world with ThisWebpage

Posted in Appsterdam, consulting, Context, Naiad, Smalltalk, Spoon on 28 October 2016 by Craig Latta

a new platform

Since becoming a virtual-machine-based app, Smalltalk has integrated well with other operating systems, providing the illusion of a consistent unified platform. With the ascendancy of JavaScript, the common execution environment provided by web browsers is effectively another host operating system. Smalltalk runs there too now, thanks to Bert Freudenberg’s SqueakJS. So in addition to macOS, Windows, and Linux, we now have the Web host platform.

While all platforms expose some of their functionality to apps through system calls, the Web exposes much more, through its Document Object Model API (DOM). This gives Smalltalk a special opportunity to enable livecoded apps on this platform. It also means that Smalltalk can interoperate more extensively with other Web platform apps, and participate in the ecosystem of JavaScript frameworks, both as a consumer and a producer.

to JavaScript and back

The part of SqueakJS which enables this is its bidirectional JavaScript bridge. This is implemented by class JSObjectProxy, and some special support in the SqueakJS virtual machine. One may set Smalltalk variables to JavaScript objects, send messages to JavaScript objects, and provide Smalltalk block closures as callback functions to JavaScript. One may interact with any JavaScript object in the Web environment. This means we can manipulate DOM objects as any other JavaScript framework would, to create new HTML5 user interfaces and modify existing ones.

In particular, we can embed SqueakJS in a web page, and modify that web page from SqueakJS processes. It would be very useful to have a Smalltalk object model of the host web page. I have created such a thing with the new class ThisWebpage.

reaching out with ThisWebpage

I chose the name ThisWebpage to be reminiscent of “thisContext”, the traditional Smalltalk pseudo-variable used by an expression to access its method execution context. In a similar way, expressions can use ThisWebpage to access the DOM of the hosting Web environment. One simple example is adding a button:

ThisWebpage
  createButtonLabeled: 'fullscreen'
  evaluating: [Project current fullscreen: true]

Behind the scenes, ThisWebpage is doing this:

(JS document createElement: 'input')
  at: #type
  put: 'button';
  at: #onclick
  put: [Project current fullscreen: true]

Class JSObjectProxy creates JS, an instance of itself, during installation of the JavaScript bridge. It corresponds to the JavaScript DOM object for the current web browser window, the top of the DOM object graph. By sending createElement:, the expression is invoking one of the DOM methods. The entire set of DOM methods is well-documented online (for example, here’s the documentation for Document.createElement).
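
For comparison, the plain JavaScript that the bridged expression above corresponds to, with a callback function standing in for the Smalltalk block:

var button = document.createElement('input');
button.type = 'button';
button.onclick = function () {
  // here the bridge would run the Smalltalk block [Project current fullscreen: true]
};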

So far, ThisWebpage has some basic behavior for adding buttons and frames, and for referring to the document elements in which SqueakJS is embedded. It can also create links and synthesize clicks on them. This is an important ability, which I use in making a Squeak object memory jump from SqueakJS in a web browser to a native Cog virtual machine on the desktop (the subject of tomorrow’s post).

The possibilities here are immense. ThisWebpage is waiting for you to make it do amazing front-end things! Check it out as part of the Context 7 alpha 1 release.

 

Caffeine: live web debugging with SqueakJS

Posted in Appsterdam, consulting, Context, Naiad, Smalltalk, Spoon on 26 October 2016 by Craig Latta

In February 2015 I spoke about Bert Freudenberg’s SqueakJS at FOSDEM. We were all intrigued with the potential of this system to change both Smalltalk and web programming. This year I’ve had some time to pursue that potential, and the results so far are pretty exciting.

SqueakJS is a Squeak virtual machine implemented with pure JavaScript. It runs in all the web browsers, and features a bi-directional JavaScript bridge. You can invoke JavaScript functions from Smalltalk code, and pass Smalltalk blocks for JavaScript code to invoke as callbacks. This lets Smalltalk programmers take advantage of the myriad JavaScript frameworks available, as well as the extensive APIs exposed by the browsers themselves.

The most familiar built-in browser behavior is for manipulating the structure of rendered webpages (the Document Object Model, or “DOM”). Equally important is behavior for manipulating the operation of the browser itself. The Chrome Debugging Protocol is a set of JavaScript APIs for controlling every aspect of a web browser, over a WebSocket. The developer tools built into the Chrome browser are implemented using these APIs, and it’s likely that other browsers will follow.

Using the JavaScript bridge and the Chrome Debugging Protocol, I have SqueakJS controlling the web browser running it. SqueakJS can get a list of all the browser’s tabs, and control the execution of each tab, just like the built-in devtools can. Now we can use Squeak’s user interface for debugging and building webpages. We can have persistent inspectors on particular DOM elements, rather than having only the REPL console of the built-in tools. We can build DOM structures as Smalltalk object graphs, complete with scripted behavior.
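
As a sketch of what that looks like at the protocol level (not the Smalltalk code): a browser started with remote debugging enabled serves a JSON list of its tabs, and each tab exposes its own debugging WebSocket:

// chrome --remote-debugging-port=9222
fetch('http://localhost:9222/json')
  .then(function (response) { return response.json(); })
  .then(function (tabs) {
    tabs.forEach(function (tab) {
      console.log(tab.title, tab.webSocketDebuggerUrl);
    });
  });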

I am also integrating my previous WebDAV work, so that webpages are manifested as virtual filesystems, and can be manipulated with traditional text editors and other file-oriented tools. I call this a metaphorical filesystem. It extends the livecoding ability of Smalltalk and JavaScript to the proverbial “favorite text editor”.

This all comes together in a project I call Caffeine. I had fun demoing it at ESUG 2016 in Prague. Video to come…

Context status 2015-01-16

Posted in Appsterdam, consulting, Context, Naiad, Smalltalk, Spoon on 16 January 2015 by Craig Latta

Hoi all–

Context is the umbrella project for Naiad (a distributed module system for all Smalltalks), Spoon (a minimal object memory that provides the starting point for Naiad), and Lightning (a remote-messaging framework which performs live serialization, used by Naiad for moving methods and other objects between systems). I intend for it to be a future release of Squeak, and a launcher and module system for all the other Smalltalks. I’m writing Context apps for cloud computing, web services, and distributed computation.

Commits b7676ba2cc and later of the Context git repo have:

  • Support for installable object memories as git submodule repos.
  • Submodule repos for memories for each of the known Smalltalk dialects, with Naiad support pre-loaded. I’m currently working on the submodules for Squeak and Pharo.
  • A web-browser-based console for launching and managing object memories.
  • A WebDAV-based virtual filesystem that enables Smalltalk to appear as a network-attached storage device, and mappings of the system to that filesystem that make Smalltalk accessible from external text editors (e.g., for editing code, managing processes and object memories).
  • Remote code and process browsers.

There’s a live discussion site and a mailing list. The newsgroup is gmane.comp.lang.smalltalk.squeak.context.

Thanks for checking it out!

Craig

I’m available for work.

Posted in Appsterdam, consulting, GLASS, Naiad, Seaside, Smalltalk, Spoon on 17 February 2014 by Craig Latta

Hey there, it’s my semi-regular reminder that I’m available for work. Consultant or employee, I’m comfortable as either. Check out my résumé. Thanks!
