The Intellectual Wilderness There is nothing more useless than doing efficiently that which should not be done at all.

2020.01.09 00:26

Packaging and Distributing/Deploying Erlang GUI apps with ZX


In the last two posts I wrote walkthroughs for how to create new CLI and GUI apps in Erlang from scratch using a tool called ZX. Today I want to show how to package apps and publish them using Zomp (the package distribution system) and get them into the hands of your users with minimal fuss.

To understand how packages are distributed to users (the “why does anything do anything?” part of grokking ZX/Zomp), one must first understand a bit about how Zomp views the world.

Packages in the system are organized by “realms”. A code realm is like a package repository for a Linux distribution. The most important thing about realm/repository configuration is that you know, by each package’s signature, whether it really came from the realm it claims as its origin. Just like Linux repositories, anyone can create or host a Zomp realm, and realms can be mirrored.

(As you will see in a future tutorial, though, administration and mirroring with Zomp are far easier and more flexible than with traditional Linux repositories. As you will see below, packaging with ZX is just a single command — ZX already knows everything it needs from the zomp.meta file.)

In this example I am going to put the example GUI project, Termifier GUI (a toy example app that converts JSON to Erlang terms), into the default FOSS realm, “otpr”. Because I am the sysop I have packaging and maintenance permissions for every package in the realm, as well as the sole authority to add projects and “accept” a package into the write-only indexes (packagers have “submit” authority, maintainers have “review”, “reject” and “approve” authorities).

[Note: The indexes are write-only because dependencies in ZX are statically defined (no invisible updates) and the indexes are the only complete structure that must be mirrored by every mirroring node. Packages are not copied to new mirrors; they are cached the first time they are requested, with mirror nodes connected in a tree instead of a single hub pushing to all mirrors at once. This makes starting a new mirror very lightweight, even for large realms, as no packages need to be copied to start (only the realm’s update history, from which the index is constructed), and packages in high demand experience “trickle down” replication, allowing mirrors to be sparse instead of complete. Only the “prime node” for a given realm must have a complete copy of everything in that particular realm. Nodes can mirror an arbitrary number of realms, and a node that is prime for one or more realms may mirror any number of others at the same time, making hosting of private project code mixed with mirrored public FOSS code a very efficient arrangement for organizations and devops.]

In the original Termifier GUI tutorial I simply created it and launched it from the command line using ZX’s zx rundir [path] and zx runlocal commands. The package was implicitly defined as being in the otpr realm because I never defined any other, but otpr itself was never told about this, so it merely remained a locally created project that could use packages hosted by Zomp as dependencies, but was not actually available through Zomp. Let’s change that:

ceverett@okonomiyaki:~/vcs$ zx add package otpr-termifierg

Done. That’s all there is to it. I’m the sysop, so this command told ZX to send a signed instruction (signed with my sysop key) to the prime node of otpr to create an entry for that package in the realm’s index.

Next we want to package the project. Last time we messed with it, it was located at ~/vcs/termifierg/, so that’s where I’ll point ZX:

ceverett@okonomiyaki:~/vcs$ zx package termifierg/
Packaging termifierg/
Writing app file: ebin/termifierg.app
Wrote archive otpr-termifierg-0.1.0.zsp

Next I need to submit the package:

ceverett@okonomiyaki:~/vcs$ zx submit otpr-termifierg-0.1.0.zsp

The idea behind submission is that normally there are two cases:

  1. A realm is a one-man show.
  2. A realm has a lot of people involved in it and there is a formal preview/approval, review/acceptance process before publication (remember, the index is write-only!).

In the case where a single person is in charge, rushing through the acceptance process involves only three commands (no problem — see the sketch below). In the case where more than one person is involved, acceptance of a package should be a staged process in which everyone involved has a chance to see each stage.
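
For the solo case, that whole cycle condenses to the three commands demonstrated individually above and below — submit, approve, accept:

ceverett@okonomiyaki:~/vcs$ zx submit otpr-termifierg-0.1.0.zsp
ceverett@okonomiyaki:~/vcs$ zx approve otpr-termifierg-0.1.0
ceverett@okonomiyaki:~/vcs$ zx accept otpr-termifierg-0.1.0.zsp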

Once a package has been submitted it can be checked by anyone with permissions on that project:

ceverett@okonomiyaki:~/vcs$ zx list pending otpr-termifierg
0.1.0
ceverett@okonomiyaki:~/vcs$ zx review otpr-termifierg-0.1.0
ceverett@okonomiyaki:~/vcs$ cd otpr-termifierg-0.1.0
ceverett@okonomiyaki:~/vcs/otpr-termifierg-0.1.0$ 

What the zx review [package_id] command does is download the package, verify that the signature belongs to the actual submitter, and unpack it into a directory so you can inspect it (or, more likely, run it with zx rundir [unpacked directory]).

After a package is reviewed (or if you’re flying solo and already know about the project because you wrote it) then you can “approve” it:

ceverett@okonomiyaki:~/vcs$ zx approve otpr-termifierg-0.1.0

If the sysop is someone other than the packager then the review command is actually necessary, because the next step is re-signing the package with the sysop’s key as a part of acceptance into the realm. That is, the sysop runs zx review [package_id], actually reviews the code, and then once satisfied runs zx package [unpacked_dir], which results in a .zsp file signed by the sysop. If the sysop is the original packager, though, the .zsp file that was created in the packaging step above is already signed with the sysop’s key.
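
So, in the two-person case, the sysop’s re-signing step would look roughly like this, reusing the commands shown above (zx review unpacks the submission into the otpr-termifierg-0.1.0 directory, and packaging that directory produces a .zsp signed with the sysop’s key):

ceverett@okonomiyaki:~/vcs$ zx review otpr-termifierg-0.1.0
ceverett@okonomiyaki:~/vcs$ zx package otpr-termifierg-0.1.0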

The sysop is the final word on inclusion of a package. If the green light is given, the sysop must “accept” the package:

ceverett@okonomiyaki:~/vcs$ zx accept otpr-termifierg-0.1.0.zsp

Done! So now let’s see if we can search the index for it, maybe by checking for the “json” tag since we know it is a JSON project:

ceverett@okonomiyaki:~/vcs/termifierg$ zx search json
otpr-termifierg-0.1.0
otpr-zj-1.0.5
ceverett@okonomiyaki:~/vcs/termifierg$ zx describe otpr-termifierg-0.1.0
Package : otpr-termifierg-0.1.0
Name    : Termifier GUI
Type    : gui
Desc    : Create, edit and convert JSON to Erlang terms.
Author  : Craig Everett zxq9@zxq9.com
Web     : 
Repo    : https://gitlab.com/zxq9/termifierg
Tags    : ["json","eterms"]

Yay! So we can now already do zx run otpr-termifierg and it will build itself and execute from anywhere, as long as the system has ZX installed.

I notice above that the “Web” URL is missing. The original blog post is as good a reference as this project is going to get, so I would like to add it. I do that by running the “update meta” command in the project directory:

ceverett@okonomiyaki:~/vcs/termifierg$ zx update meta

DESCRIPTION DATA
[ 1] Project Name             : Termifier GUI
[ 2] Author                   : Craig Everett
[ 3] Author's Email           : zxq9@zxq9.com
[ 4] Copyright Holder         : Craig Everett
[ 5] Copyright Holder's Email : zxq9@zxq9.com
[ 6] Repo URL                 : https://gitlab.com/zxq9/termifierg
[ 7] Website URL              : 
[ 8] Description              : Create, edit and convert JSON to Erlang terms.
[ 9] Search Tags              : ["json","eterms"]
[10] File associations        : [".json"]
Press a number to select something to change, or [ENTER] to continue.
(or "QUIT"): 7
... [snip] ...

The “update meta” command is interactive so I’ll spare you the full output, but if you followed the previous two tutorials you already know how this works.
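
Incidentally, the zomp.meta file itself is never shown in this post, so the following is only my guess at roughly what the fields above might look like on disk, assuming plain Erlang terms (the real keys and syntax may well differ):

%% Illustrative guess only — not the actual zomp.meta syntax.
{name,"Termifier GUI"}.
{type,gui}.
{author,{"Craig Everett","zxq9@zxq9.com"}}.
{copyright,{"Craig Everett","zxq9@zxq9.com"}}.
{repo_url,"https://gitlab.com/zxq9/termifierg"}.
{desc,"Create, edit and convert JSON to Erlang terms."}.
{tags,["json","eterms"]}.
{file_exts,[".json"]}.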

After I’ve done that I need to increase the “patch” version number (the “Z” part of the “X.Y.Z” semver scheme). I can do this with the “verup” command, also run in the project’s base directory:

ceverett@okonomiyaki:~/vcs/termifierg$ zx verup patch
Version changed from 0.1.0 to 0.1.1.

And now it is time to re-package and put the new version into the realm (the same zx package termifierg/ command as before, this time producing otpr-termifierg-0.1.1.zsp). Again, since I’m the sysop this is super fast for me working alone:

ceverett@okonomiyaki:~/vcs$ zx submit otpr-termifierg-0.1.1.zsp 
ceverett@okonomiyaki:~/vcs$ zx approve otpr-termifierg-0.1.1
ceverett@okonomiyaki:~/vcs$ zx accept otpr-termifierg-0.1.1.zsp

And that’s that. It can immediately be run by anyone anywhere as long as they have ZX installed.

BONUS LEVEL!

“Neat, but what about the screenshot of it running?”

Up until now we’ve been launching code using ZX from the command line. Since Termifier GUI is a GUI program and usually the target audience for GUI programs is not programmers, yesterday I started on a new graphical front end for ZX intended for ordinary users (you know, people expert at things other than programming!). This tool is called “Vapor” and is still an ugly duckling in beta, but workable enough to demonstrate its usefulness. It allows people to graphically browse projects from their desktop and launch them by clicking, if the project is actually launchable.

Vapor is like low-pressure Steam, but with a strong DIY element to it, as anyone can become a developer and host their own code.

I haven’t written the window manager/desktop registration bits yet, so I will start Vapor from the command line with ZX:

You’ll notice a few things here:

  • Termifier GUI’s latest version is already selected for us, but if we click that button it will become a version selector and we can pick a specific version.
  • Observer is listed, but only as a “virtual package” because it is part of OTP, not actually a real otpr package. For this reason it lacks a version selector. (More on this below.)
  • Vapor lacks a “run” button of its own because it is already running (ZX is similarly special-cased).

When I click Termifier’s “run” button Vapor’s window goes away and we see that the termifierg-0.1.1 package is fetched from Zomp (along with deps, if they aren’t already present on the system), built and executed. If we run it a second time it will run immediately from the local cache since it and all deps are already built.

When Termifier terminates, Vapor lets ZX know it is OK to shut down the runtime.

A special note on Observer and “Virtual Packages”

[UPDATE 2020-01-12: The concept of virtual packages is going away, Observer will have a different launch method soon, and a rather large interface change is coming to Vapor. The general principles and function of the system remain the same, but the GUI will look significantly different in the future — the above is the day-2 functioning prototype.]

When other programs are run by Vapor the main Vapor window is closed. Remember, each execution environment is constructed at runtime for the specific application being run, so if we run two programs that have conflicting dependencies there will be confusion about the order to search for modules that are being called! To prevent contamination Vapor only allows a single application to be run at once from a single instance of Vapor (you can run several Vapor instances at once, though, as each invocation of ZX creates an independent Erlang runtime with its own context and environment — the various zx_daemons coordinate locally to pick a leader, though, so resource contention is avoided by proxying through the leader). If you want several inter-related apps to run at once within the same Erlang runtime, create a meta-package that has the sole function of launching them all together with commonly defined dependencies.

Because Observer is part of OTP it does not suffer from dependency or environmental conflict issues, so running Observer is safe and the “run” button does just that: it runs Observer. Vapor will stay open while Observer is running, allowing you to pick another application to run, and you can watch what it is up to using Observer as a monitoring tool, which can be quite handy (and interesting!).

If you want to run an Erlang network service type application using Vapor while using Observer (like a chat server, or even a Zomp node) you should start Vapor using the zxh command (not just plain zx), because that provides an Erlang shell on the command line so you can still interact with the program from there. You can also run anything using plain old zx run, and when the target application terminates that instance of the runtime will shut down (this is why ZX application templates define applications as “permanent”).

Cool story, bro. What Comes Next?

The next step for this little bundle of projects is to create an all-encompassing Windows installer for Erlang, ZX and Vapor (so it can all be a one-shot install for users), and add a desktop registration feature to Vapor so that Erlang applications can be “installed” on the local system, registered with desktop icons, menu entries and file associations in FreeDesktop and Windows conformant systems (I’ll have to learn how to do it on OSX, but that would be great, too!). Then users could run applications without really knowing about Vapor, because authors could write installation scripts that invoke Vapor’s registration routines directly.

If I have my way (and I always get my way eventually) Erlang will go from being the hardest and most annoying language to deploy client-side to being one of the easiest to deploy client-side across all supported platforms. BWAHAHAHA! (I admit, maybe this isn’t a world-changing goal, but for me it would be a world-changing thing…)

2018.05.30 01:00

Erlang: Eventually Things Will Change


I finally got a few days to really dedicate to the whole Zomp/ZX thing and wrote some docs.

If you actually click this link soon you’ll see an incomplete pile of poo, but it is a firm enough batch of poo that I can show it now, and you can get a very basic idea what this system is supposed to do:

Zomp/ZX docs

Some pages are missing and things are still a bit self-conflicted. The problem is that until you really use a system like this a bit it is hard to know what the actual requirements need to be. So that’s been a long internal journey.

If my luck holds I’ll have something useful out in short order, though. Here’s to keeping fingers crossed and creating useful on-ramps for new programmers in desperate need of easy-to-use power tools. While we can all only hope the gods will help them when it comes to tackling their actual human-relevant problems, the environment in which they render their solutions should not be actively hostile.

2017.12.12 17:59

Zomp/zx: Yet Another Repository System

I’ve been working on a from-source repo system for Erlang on and off for the last few months, contributing time to it pretty much whenever real-life is not interfering. I’m getting close to making a release. Now that my main data bits are worked out, the rest isn’t all that hard. I need to figure out what I want to say in an announcement.

The problem is that I’m really horrible at announcements and this system does things in a pretty different way to other repository systems out there, so I’m not sure what things are going to be important about it to users (worth putting into an announcement) and what things are going to be important to only me because I’m the one who wrote it (and am therefore obsessed with its externally inconsequential internals). What is internally interesting about a project is almost never what is externally interesting about it. Marketing; QED. So I need to sort that out, and writing sometimes helps me sort that kind of thing out.

I’m making this deliberately half-baked, disorganized, over-long post public because Joe Armstrong gave me some food for thought the other day. I had written him my thoughts on a subject posted to a mailing list but sent the message in private. I made my message to him off-list for two reasons: first, I wasn’t comfortable with my way of expressing the idea just yet; and second, I am busy with real-life stuff and side projects, including the repo system, and don’t want to get sucked into online chatter that might amount to nothing more than bikeshedding. (I’m a world-class bikeshedder!) Joe wrote me back asking why I made the reply private, I told him my reasons, and he made me change my mind. He hopes that more people will publish their ideas all the time, good or bad, fully baked or still soggy — because the only way we can ever find other people’s interesting ideas these days is by searching for them, usually in text, somewhere on the net. It isn’t like we can’t go back and revise, but whether or not we do go back and clean up our literary messes, the availability of core ideas and exposure of thought processes are more important than polish. He’s been on a big drive to make sure that he posts most of his thoughts to public mailing lists or blogs so that his ideas get at least indexed and archived. On reflection I agree with him.

So here I am, trying to publicly organize my thoughts on my repository system.

I should start with the goals of the system.

This system is intended to smooth over a few points of pain experienced when trying to get a new Erlang project off the ground, and in particular avert the path of pain peculiar to Erlang newcomers when they encounter the “how to set up a project” problem. Erlang’s tooling is great but a bit crufty (deeply featured, but confusing to interface with) and not at all what the kool kids expect these days. And anyway I’m really just trying to scratch my own itch here.

At the moment we have two de facto standards for publishing Erlang systems: erlang.mk and Rebar. I like both of these, especially erlang.mk, but they do one thing that annoys me and never seems to quite fit my need: they build Erlang releases.

Erlang releases are great. They cut all the cruft of a release out and pack everything needed to actually run a system into a single blob of digits that you can move, in a single shot, to a new target system — including the Erlang runtime itself. Awesome! Self-contained deployment and it never misses. This has been an Erlang feature since before people even realized that they needed repeatable deployment infrastructure outside of the classic “let’s build a monolithic, static binary executable” approach. (Erlang is perpetually ahead of its time, even by today’s standards. I look at the poor kids stubbing their toes with Docker and language du jour and just shake my head — though part of that is because many shops are using Docker to solve concurrency issues that they haven’t even become cognizant of, thinking that they are experiencing “scaling” problems but missing the point entirely.)

Erlang releases are awesome when the deployment target is an embedded system, but not so awesome if the target is a full-blown operating system, VM, container, or virtual environment fully stocked with gobs of memory and storage and flush with system utilities and resources. Erlang releases sort of kitchen-sink the deployment itself. What if you want to run several different Erlang programs, all delivered as releases, all depending on the same library? You’ve got tons of copies of that library. Which is OK, but still sort of weird, because you also have tons of copies of the runtime (among other things). Each release is self-contained and lean, but in aggregate this is a bit odd.

Erlang releases make sense when you’re deploying to a phone switch or a sensor device in the middle of nowhere and the runtime is basically acting as its own operating system. Erlang releases are, in that context, analogous to putting a Gentoo stage 3 binary image on a system to leapfrog most of the toolchain process. Very cool when you’re in that situation, but a bit tinker-tacky when you’re just trying to run, say, a client program written in Erlang or test a web front-end for something that uses YAWS or Cowboy.

So that’s the siloed-kitchen-sink issue. The other issue is that newcomers are perpetually confused about releases. This makes teaching elementary Erlang hard. In my view we should really focus on escript for beginner code — just let the new guy run something out of a single file the way he is used to doing when learning a new language instead of showing him pages of really slick code, then some interpreter stuff, and then leaping straight from that to a complex and advanced packaging setup necessarily tailored for conducting embedded deployments to slim hardware devices. Seriously. WTF. Escripts give beginners all the power of Erlang necessary for exploring the more interesting bits of code and refactoring needed to learn sequential Erlang with the major advantage of being able to interface with the system the same way programmers from other environments are used to dealing with language runtimes like Bash, AWK, Python, Ruby, Perl, etc.

But what about that gap between scripts and full-blown production deployments for embedded hardware?

Erlang has… nothing.

That’s right! There is no agreed-upon way to deploy or even run Erlang code in the same manner a Python coder would expect to execute a python program. There is no virtualenv type system, there is no standard answer to the question “if I’m in the project directory and type ./do_thingy it will just work, right?” The answer is always “Well, it depends…” and what actually winds up happening is that people either roll a whole release just to crank a trivial amount of code up or (quite often) implement an ad hoc way to get the same effect in a lighter-weight way. (erlang.mk shines here, actually.)

Erlang does provide a number of ways to make a system run locally from source or .beam files — and has actually quite reasonable built-in resources for this — but nothing has been built around these tools that also deals with external dependencies, argument passing in a standard way, or any of the other little things you really need if you want to claim a complete solution. Hence all the ad hoc solutions that “work on my machine” but certainly aren’t something you expect your users to use (not with broad success, anyway).

This wouldn’t be such a big problem if it weren’t for the fact that not having any standard way to “just run a program” also means that there really isn’t any standard way to deal with client side code in Erlang. This is a big annoyance for me because much of what I do is client-side code. In Erlang.

In fact, it totally boggles my mind that client-side Erlang isn’t more common, especially considering that AMD is already fielding zillion-core processors for desktops, yet most languages are fundamentally single-threaded. That doesn’t mean you can’t do concurrency and parallelism in other languages, but most problems are not parallel in nature to begin with (parallel problems are easy to write solutions to in any language) while most real-world problems are concurrent. But concurrent systems are hard to write in almost every language. Concurrent problems are the bulk of the interesting problems we’re still not very good at solving with computers. AMD is moving to make the tools for much more interesting concurrent processing available on the client side (which means Intel will soon start pouring its gajillions worth of blood diamond money into a similar effort), but most languages and environments have no good way to make use of that on the client side. (Do you see why I hear Lady Fortune knocking?)

Browsers? Oh yeah. That’s a great plan. Have you noticed that most sites slowly move toward the “Single Page App” design over time (read as: the web sucks, so now we write full-but-crippled client-programs and deliver them over the web), invest heavily in do-sneaky-things-without-telling-you JavaScript and try to hog every core your system has if you allow it the slightest permission to do so? No. In the age of bitcoin miners embedded in nearly every ad this is not the direction I think we should be envisioning things going.

I want to take better advantage of the cores users have available, and that doesn’t necessarily mean make more efficient use of every cycle as much as it means to make scheduling across processes more efficient to reduce latency throughout the system overall. That’s something users care about quite a lot. This is the problem Erlang has already solved in a way no other runtime out there has. So I want to capitalize on it.

And yet, there is still no standardish way of dealing with code from source, running it locally, declaring or resolving dependencies, or even launching a client-side program at all.

So… how am I approaching it?

I have a project called “zomp” which is a repository system. It is a distributed repository system, so not everything has to be held in one place. Code in the zomp universe is held in little semantic silos called “realms”. Each realm can have whatever packages the owner (sysop) wants it to have. Each realm must have one server node somewhere that is its “prime” — the node in charge of that realm. That node is where system operator tasks for that realm take place, where packagers and maintainers submit code for inclusion, where the package index is built, and where the canonical copy of everything is stored. Other nodes configured to see that realm connect to the prime node, receive a copy of the current indexes, and are tested for availability and published as available resources for querying indexes or downloading packages.

When too many subordinate nodes connect to a prime, the prime will redirect a new node to one of its subordinates; when a subordinate gets “full” of subordinates itself, it picks one of its own subordinates for new redirects, and so on, so each realm winds up forming a resource tree of mirror nodes that connect back to the realm prime by a single path. A single node might be prime for several realms, or other nodes may act as prime for different realms — and any node can be configured to become a part of any number of realm trees.

That’s the high-level code division.

The zomp constellation is interfaced with via the “zx” program (short for “zomp explorer”, or “zomp exchanger”, or “Zomp eXtreem!”, or an homage to the Sinclair ZX-81, or whatever else might lend itself to the letters “zx” that you might want to make up — I actually forget what it originally stood for, but it is remarkably convenient to type so it’s staying that way).

zx is configured to have visibility on zomp realms the same way a zomp node is (in fact, they use the same configuration files and it isn’t weird to temporarily host a zomp node on your desktop the same way you might host a torrent node for a while — the only extra effort is that you do have to open a port, zomp doesn’t (yet) do hole punching magic).

You can tell zx to run a program using the highly counter-intuitive command:

zx run Realm-ProgramName[-Version]

It breaks the program name down into:

  • Realm (optional, defaulting to the main realm of public FOSS packages called “otpr”)
  • Name (necessary — sort of the whole point)
  • Version (which is optional and can also be partial: “1.0.3” vs just “1.0” or “1”, defaulting to the latest in a series or latest overall)

With those components it then contacts any zomp node it knows provides the needed realm, resolves the latest version number of the requested program, downloads and unpacks it, checks and downloads any missing dependencies, builds the program, and launches it. (And if it doesn’t know any active mirrors it asks the prime node and is seeded with known mirror nodes in addition to getting its query answered.)
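
Pulling the pieces together with the same example names and version strings used elsewhere in this post, all of these are valid invocations:

zx run foo-bar-1.0.3
zx run foo-bar-1.0
zx run foo-bar-1
zx run foo-bar

The first pins the exact release; the next two grab the latest patch in the 1.0 series and the latest release in the 1.x line respectively; the last simply resolves to the newest published version of bar in the foo realm.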

The packages are kept in a local cache stored at the user level, not the system level (sort of like how browsers keep their JS and page caches) — though if you want to daemonize zomp and run it as a permanent service (if you run a realm prime, for example) then you would want to create an unprivileged system user specifically for the purpose. If you specify a fully-qualified “realm-name-version” for execution and the packages already exist and are built, zx just launches the code directly (which is the majority case, so no delay there — fast startup).

All zomp nodes carry a complete index of their configured realms and can answer queries with very little overhead, but only the prime node has a copy of all the packages for that realm.

Zomp realms are write-only. There is no facility for removing a package from a realm entirely, only for upgrading the versions of packages whenever necessary. (Removal is, of course, possible, but requires manual intervention by the sysop.)

When a zx client or zomp node asks an upstream node for a package and the upstream node does not have a copy it will query its upstream until the request reaches a node that does have a copy. Once found a “found” notice goes back down to the client telling it how many hops away the package is, and new “hops away” notices are sent as the package is passed downstream toward the original requestor (avoiding timeouts and allowing the user to get some feedback about what is going on). The package is cached at each node along the way, so subsequent requests for that same package will be handled immediately without any more relay downloading.

Because the tree of nodes is expected to be relatively ephemeral and in a constant state of flux, the tendency is for package stores on mirror nodes to be populated by only the latest, most popular packages. This prevents the annoying problem with old realms having gobs of packages that nobody uses but mirror hosts being burdened with maintaining them all anyway.

But why not just keep the latest of everything and ditch old packages?

Ever heard of “version shear”? Yeah. Me too. It sucks. That’s why.

There are no “up to” or “greater than” or “abstract version 3” type dependency declarations in zomp package metadata. As a package maintainer you must explicitly declare the complete version of each dependency in your system. In the case of diamond-shaped dependencies (where two packages in your system depend on slightly different versions of the same package) the burden is on the packagers to declare a version that works for a given release of that package. There are no dependency trees for this reason. If your package depends on X, and X depends on Y and Z then your package must be defined as depending on X, Y and Z — and fully specify the versions involved.

Semver is strictly enforced, by the way. That is, all release numbers are “Major.Minor.Patch”. And that’s it. No more, no less. This is one of the primary criteria for inclusion into a public realm and central to the way both zx and zomp interpret package semantics. If an upstream project has some other numbering scheme the packager will need to create a semver standard of his own. And actually, this turns out to not be very hard in practice. There is one weird side-effect of full, static dependency version declarations and semver: updating dependencies results in incrementing your package’s patch number, so even if you don’t change anything in your own code for a long time, a program with many dependencies under heavy development may wind up on version 2.3.257 without much change other than the {deps, PackageIDs}. line in the package meta file.
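
As a sketch of what the rule means in practice (hypothetical package IDs, and assuming the Erlang-term syntax implied by the deps line just mentioned — the real entry format may differ), a project whose direct dependency x itself depends on y and z has to spell out all three, versions included:

%% Hypothetical sketch — package names and exact syntax are illustrative only.
{deps,["otpr-x-1.2.0","otpr-y-3.0.4","otpr-z-0.9.12"]}.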

zx helps make you aware of these situations, so solving them has not been particularly difficult in practice.

Why do things this way?

The “static dependencies forever and ever, amen” decision is a tradeoff between the important feature of fully repeatable builds Erlang releases are famous for (to the point of bug-compatibility between deployment sites — which is critical in production) and the flexibility users and developers have come to expect from source repository systems like pip, pypi, CPAN, etc. Because each realm is write-only there is no danger that a package will be superseded and disappear. The way trickle-down caching works for mirror zomp nodes does not unduly burden the subordinate realm mirrors, and the local caching behavior of zx itself at launch time tends to make all of this mostly delay-free for zx clients and still gives them the option to always run “latest available version” if they want.

And on the note of “latest version”…

Client-side programs are not expected to be run too terribly long at a time. People shut desktop programs down, restart computers, update their kernels, etc. So even if a client program runs a long time (on the order of web, email, IRC, certain games, crypto wallets/miners, torrent nodes, Freenode, Tor, etc) it will still have a chance to restart every few days or weeks to check for a new version (if invoked in a way that omits the version number so that it always queries the latest version).

But what about long-running server-side type programs? When zx starts, a script checks the initial environment and then starts the Erlang runtime with zx as its target application, passing it the package ID of the desired program to run and its arguments as arguments. That last sentence was odd; an example is helpful:

zx run foo-bar arg1 arg2 arg3

zx invokes the launching script (a Bash script on Linux, BSD and OSX, a batch file on Windows — so actually the command is zx.bash or zx.cmd) with the arguments run foo-bar arg1 arg2 arg3. zx receives the instruction “run” and then breaks “foo-bar” into {Realm, Name} = {"foo", "bar"}. Everything after that is passed in as strings, which wind up being the input arguments to the program being run (“foo-bar”).

zx registers a process called zx_daemon which remains resident in the runtime and waits for a subscription request or zomp query. Any Erlang program written with the intention of being used with zx can send a message to zx_daemon and ask it to maintain a connection to the program’s parent realm and enroll for update notifications. If the target program itself is the subject of a realm index update then it will get a message letting it know what has changed. The program can respond any way the author wants to such a notification.

In this way it is possible to write a client-side or server-side application that can enroll to become aware of updates to itself without any extra infrastructure and a minimal amount of code. In some programs I’ve used this to cause a pop up notification to appear to desktop users so they know that a new version has become available and they should restart the program (the way Firefox does on Windows). It could also be used to initiate a restart on its own, or whatever else you might come up with.
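
The message protocol itself isn’t spelled out here, so the following is only a minimal sketch of the idea, with a made-up subscription message and notification shape (only the registered name zx_daemon comes from the description above):

-module(update_watch).
-export([start/0]).

%% Minimal illustrative sketch. The {subscribe, ...} and {zx_update, ...}
%% message shapes are assumptions, not the documented zx_daemon protocol.
%% Assumes the program was launched by zx, so zx_daemon is registered.
start() ->
    zx_daemon ! {subscribe, self()},
    watch().

watch() ->
    receive
        {zx_update, PackageID} ->
            %% React however the author wants: notify the user, restart, etc.
            io:format("New version available: ~ts~n", [PackageID]),
            watch();
        _Other ->
            watch()
    end.

A real program would hang this off its own supervision tree rather than a throwaway process, but the shape of the interaction is the same.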

There are several benefits to developers of using this system as well.

As a developer I can start a new project by doing zx init app [Realm-Name] or zx init lib [Realm-Name] in an existing project root directory and a zomp.meta file will be generated for it, or a new project template directory will be created (populated with a functioning sample skeleton project). I can do zx dailyze and zx will make sure a generally relevant PLT exists or is built (if not up to date) and used to check the typespecs of the project and its dependencies. zx create package [Path] will create a zomp package, sign it, and populate the metadata for it. zomp keygen will generate the kind of keys necessary to interact with a zomp server. zomp submit PackageFilePath will submit a package for review.
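
Strung together, a first session with a brand new project might run something like this (foo-bar and the bracketed paths are placeholders, and the ordering of the key generation step is my guess):

zx init app foo-bar
zx dailyze
zomp keygen
zx create package [Path]
zomp submit [PackageFilePath]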

And so on. It is a lot easier to do most things now, and that’s the main point.

(There are commands for reviewing, approving, or rejecting package submissions, adding packagers and maintainers to package projects, adding dependencies to projects, X.Y.Z version incrementing, etc. as well.)

This is about 90% of the way I want it to be, but that means about 90% of the effort remains (pessimistically assuming the 90/10 rule, because life sucks and nobody cares). Most of that is probably going to be finagling some network lunacy, but a lot of the effort is going to be in putting polish to it.

Zomp/zx is based on a similar project I wrote for use within Tsuriai a few years ago that has much sparser features but does basically the same thing: eases packaging and repeatable deployment from source to client systems. I would never release that version publicly because it has a lot of “works for me!” level functionality, but very little polish and requires manually diddling quite a few settings files in error-prone ways (which is fine because it was just us diddling them).

My intention here is to Cadillac this out a bit so that newcomers can slide into the new language and just focus on that language after learning a minimum of tooling commands or environmental details. I think zx init app foo-bar and zx runlocal are a low enough bar for entry.
