Urbit
I find Urbit very confusing. It uses a bespoke virtual machine for a bespoke programming language, a bespoke approach to optimisation, and another bespoke programming language which compiles down to the first; it has a bespoke OS written in that second language, and connects these OSes into a bespoke p2p network; all in order to… what?
Their go-to example seems to be social networking with an interface to Twitter??
I absolutely get the ‘own your data/compute’ idea.
I think a bespoke OS is a decent approach, as it removes a lot of legacy complexity and allows a few carefully-chosen, simple, unified approaches to things like I/O and addressing.
I like the use of a functional language, as it’s an extreme form of isolation/sandboxing/reproducibility which makes sense in an untrusted online world. I don’t see why a new language was invented though, when something like a subset of Scheme, Joy, or plain lambda calculus would suffice.
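As a rough illustration of how little machinery such an off-the-shelf core needs, here’s a minimal sketch of a call-by-value evaluator for the untyped lambda calculus. I’m using Haskell and de Bruijn indices purely for convenience; the encoding choices are mine, and nothing here is Urbit-specific.

```haskell
-- A complete, pure core in a couple of dozen lines: a call-by-value
-- evaluator for the untyped lambda calculus, using de Bruijn indices
-- to avoid variable capture.
data Term = Var Int          -- de Bruijn index
          | Lam Term         -- abstraction
          | App Term Term    -- application
  deriving Show

-- Substitute the closed term s for index i in t.
subst :: Int -> Term -> Term -> Term
subst i s (Var j) | i == j    = s
                  | otherwise = Var j
subst i s (Lam b)   = Lam (subst (i + 1) s b)
subst i s (App f x) = App (subst i s f) (subst i s x)

-- Reduce a closed term to a value (a lambda), if evaluation terminates.
eval :: Term -> Term
eval (App f x) = case eval f of
                   Lam b -> eval (subst 0 (eval x) b)
                   f'    -> App f' (eval x)
eval t         = t

-- (\x -> x) (\y -> y)  evaluates to  \y -> y
example :: Term
example = eval (App (Lam (Var 0)) (Lam (Var 0)))
```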
I have absolutely no idea why they’ve invented a custom p2p network/addressing system. I chalk this up to hubris, along with their spiel about ASCII punctuation, etc.
I have absolutely no idea how this has anything to do with social media. I assume it’s just for buzzword value. I don’t use social media, but I imagine programming language theory isn’t its main appeal?
Even if we assume that all of Urbit’s ideas pan out: interacting with Twitter seems to completely undermine all of it!
Twitter is centralised, whilst the point of Urbit’s network is decentralisation.
No matter how much elaborate language machinery you build, there’s very little you can actually do with Twitter, since they keep their database secret.
Despite all of the sandboxing, reproducibility, etc., Twitter will not execute any code that you try to send them. The only operations which can be performed are those they choose to expose as API endpoints.
Doing anything with Twitter throws purity out of the window, since they only interact via I/O requests.
No amount of fanciness in OS interfaces or language semantics can prevent Twitter modifying/deleting/overwriting any of their data at any time, without notifying anyone; hence there are very few guarantees that the OS or language can actually provide about such values (e.g. there’s no referential transparency, no way to know whether a cache is stale, and no way to know whether someone else will receive the same data you did; the sketch below makes this concrete).
Whatever the resulting API looks like, it will be a constant source of incompatibility and churn, since Twitter are free to modify their API at any time. If some operation gets dropped, any applications which rely on it may break irreparably.
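To make the referential-transparency point concrete, here’s a minimal Haskell sketch; all names and types are invented for illustration, and this is not Twitter’s or Urbit’s actual interface. The honest type for a Twitter lookup is an effect, and any cache built on top of it can only guess at its own validity:

```haskell
import Data.IORef (IORef, newIORef, readIORef, writeIORef)

type TweetId = Integer
type Tweet   = String

-- We cannot write this: Twitter may edit or delete the tweet at any
-- time, so no pure function of the ID can denote its content.
-- fetchTweet :: TweetId -> Tweet

-- The honest type: an effect whose result may differ on every call.
fetchTweet :: TweetId -> IO Tweet
fetchTweet tid = pure ("placeholder for whatever Twitter returns for " ++ show tid)

-- A cache built on top can never be *known* to be valid; invalidation
-- is a guess, not something the language can check.
cachedFetch :: IORef (Maybe (TweetId, Tweet)) -> TweetId -> IO Tweet
cachedFetch cache tid = do
  hit <- readIORef cache
  case hit of
    Just (tid', t) | tid' == tid -> pure t   -- may silently be stale
    _ -> do t <- fetchTweet tid
            writeIORef cache (Just (tid, t))
            pure t

main :: IO ()
main = do
  cache <- newIORef Nothing
  t1 <- cachedFetch cache 42
  t2 <- cachedFetch cache 42   -- served from the cache, possibly stale
  putStrLn t1 >> putStrLn t2
```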
I get that the Twitter example is meant as something like a minimum viable product, but it seems like a bad choice considering that it can’t really make use of any of Urbit’s features. It could be implemented as a big string of JavaScript, with Urbit only being used to get it into the browser; the result would be about as integrated as any other approach.
A more relevant example might be a multiplayer game with a shared leaderboard, player chat and non-critical use of external APIs (e.g. Gravatar for player pics); say, a clone of Words With Friends, or maybe something with more animation.
The game itself would be a decent test of the programming languages, rather than just shuttling strings in/out of Twitter. The interactivity and chat would test the latency. The leaderboard would test shared access to data. Using external APIs would test the data-shuttling, in a way that doesn’t much affect the application if the provider shuts down the API.
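Here’s a rough sketch, in Haskell stubs of my own invention (not any real Urbit interface), of the surface area such a test application would need and which claim each piece would exercise:

```haskell
type Player      = String
type GameState   = [(Player, Int)]   -- placeholder for the real rules state
type Leaderboard = [(Player, Int)]
data Move        = Move { word :: String, score :: Int }

-- Pure game rules: exercises the language itself, not I/O plumbing.
applyMove :: Player -> Move -> GameState -> GameState
applyMove p m st = (p, score m) : st

-- Chat between players: exercises the p2p network's latency.
sendChat :: Player -> String -> IO ()
sendChat p msg = putStrLn (p ++ ": " ++ msg)   -- stub

-- Shared leaderboard: exercises shared access to data.
recordResult :: Player -> Int -> Leaderboard -> Leaderboard
recordResult p s board = (p, s) : board

-- Non-critical external API (e.g. avatars): exercises data shuttling,
-- but the game still works if the provider shuts the API down.
fetchAvatar :: Player -> IO (Maybe FilePath)
fetchAvatar _ = pure Nothing                   -- stub: provider gone
```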