Tuesday, December 11, 2012

Random learnings (1)

Learning #1

My inquiry about how to set up a permanent ability for any registered drawterm user to log in to Plan 9 on the Raspberry Pi was met with a reply from Richard Miller indicating that a "Standalone CPU server" setup is the standard approach.

He referred me to the following HOWTO:

The good news is that Richard has apparently taken the trouble to provide an alternate kernel for this role (called 9picpu) on the boot partition of the distribution image, with an associated alternate cmdline.txt file called cmdline-cpu.txt.  This contains the appropriate boot parameters to start Plan 9 as a CPU server.

Now, at this stage, I'm not sure why this needs a special kernel (having read in the Plan 9 docs that the various roles were all provided for in one kernel).

Learning #2

Acme has a wiki viewer built in, it seems.

  • Open the file /acme/wiki/guide
    This shows a file with a couple of commands:
    Local 9fs wiki
    Wiki /mnt/wiki
  • Execute each command line in turn (select it, then middle-click)
    As I understand it, the first line mounts the wiki file system into acme's own namespace (that's what the Local prefix does).
    The Wiki viewer is then started on the root of the wiki content.
  • Right-click on wiki links (text in square brackets) to open the linked page

Learning #3

The process-specific namespaces allow some awesome simplifications compared to UNIX.
For example, from the point of view of a process:
  • /env/... contains all the process environment variables
  • /bin/... contains all the visible executable files/apps/utilities 
In one fell swoop, there go a couple of the awkward squad in a puff of conceptual symmetry.
Unix environment variables are pretty strange, magical, in-memory things.
Unix's PATH is even worse (shiver).  How lovely that you can just arrange for all executable things to be unioned into the /bin directory.
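For instance, at a Plan 9 rc prompt, variables really are files (a sketch; 'greeting' is an arbitrary name chosen for illustration):

```rc
ls /env                     # every environment variable, one file each
cat /env/user               # the value of $user
echo hello >/env/greeting   # create/set a variable by writing its file
cat /env/greeting           # read it back like any other file
```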

Learning #4

$home/lib/profile is executed for the user at login.

Learning #5

Use 'bind' to union directories together.  For example:
bind -a $home/bin/rc /bin
will union the $home/bin/rc directory (where user scripts are normally stored) into /bin.
The -a here means 'after' (i.e. lower precedence than what is already in /bin).
Per #3, this means that scripts are looked up automatically (i.e. are on the executable search 'path').
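Putting #3 and #5 together at an rc prompt (a sketch; 'hello' is a hypothetical script assumed to live in $home/bin/rc):

```rc
bind -a $home/bin/rc /bin   # union personal scripts in after the system binaries
ls /bin/hello               # the script now appears under /bin...
hello                       # ...so it runs by bare name, with no PATH involved
```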

Learning #6

The 'snarf' (cut and paste) buffer is available at /dev/snarf.
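Which means the clipboard can be driven from the shell like any other file (a sketch, typed at an rc prompt):

```rc
echo 'copied from the shell' >/dev/snarf   # replace the snarf buffer
cat /dev/snarf                             # print its current contents
```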

Learning #7

You can target multiple objects in an argument by using '*' in a path, e.g.:
awk '{print $1}' /proc/*/status
or just
ls /proc/*/status

Sunday, December 9, 2012

A Plan to Connect

So, my first challenge is to get Plan 9 connected to my network (and the internet) and then use my freshly compiled drawterm on my Mac to access the system.

It turns out that the Raspberry Pi system is delivered as a standalone system in the distribution image.  The original cmdline.txt in the boot partition does not set up networking, but an alternative supplied file, 'cmdline-demo-net.txt', also performs the extra network configuration.  It's simply a matter of some file renaming to get networking going (though I also needed to preserve the extra 'kbargs=-b' setting that I had added to cmdline.txt in order to have a working mouse).

Anyway, the cmdline.txt to boot Plan 9 with the network (doing a DHCP lookup of IP settings) is:
readparts=1 nobootprompt=local user=glenda kbargs=-b ipconfig=

Having booted up, you can then confirm the IP details at any time by doing:
cat /net/ndb

To enable remote access from drawterm for a specific user, the user must have authentication configured, and there must be a TCP listener set up so that connections launch an instance of Plan 9's /bin/cpu program to serve the remote terminal.

The following two lines can be typed at the shell (rc) prompt to temporarily enable this on the running Plan 9 system:

echo 'key proto=p9sk1 dom=plan9 user=glenda !password=PASSWORD' >/mnt/factotum/ctl
aux/listen1 -t tcp!*!ncpu /bin/cpu -R &

The first line sets up authentication credentials for the user glenda (the default Plan 9 user).
In keeping with Plan 9's fundamental design of talking to services and devices via reads and writes on a namespace of files projected into the filesystem, a password is associated with the glenda user by writing a defining line of text to the factotum authentication agent's control (ctl) file.

The second line starts a listen1 process that launches /bin/cpu in server mode (-R means it acts as the remote, serving side) for each connection on the network service tcp!*!ncpu (TCP protocol, any address, the well-known ncpu port).  The -t option has listen1 run as the calling user (I was logged in as glenda on the console).
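For completeness, the matching connection from the Mac side would then look something like the following (hedged: 192.168.1.20 is a placeholder for the Pi's actual address as reported in /net/ndb):

```shell
# -c names the CPU server and -a the auth server (here the Pi plays
# both roles); -u is the user to connect as.
drawterm -a 192.168.1.20 -c 192.168.1.20 -u glenda
```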

My next step will be to learn the appropriate way to set up a permanent listener and authentication for any user wishing to connect in this way.

On a separate note, apparently the provided Pi image also includes full sources and is capable of building itself.  This is accomplished with the command:
cd /sys/src/9/bcm; mk
("bcm" btw is the hardware specific source directory for 'broadcom', which is the manufacturer of the ARM chip in the Raspberry Pi).

A Plan to Boot

As mentioned in the previous post, it's easy to take any of the images made for the Raspberry Pi and get it running on the Pi.  Images are typically delivered as image files that can be transferred to an SD card easily (using dd on Unix-type systems).  Once on the SD card, it's simply a matter of plugging it into the Pi and powering up.
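The dd step can be sketched as follows.  To keep the example self-contained (and harmless) it copies a fabricated dummy image into an ordinary file; on real hardware the of= target would instead be the SD card's raw device node (e.g. /dev/rdisk2 on a Mac, confirmed first with diskutil list, and run under sudo).  All file names here are placeholders.

```shell
# Fabricate a small dummy "image" so the sketch is self-contained:
dd if=/dev/zero of=plan9.img bs=4096 count=256 2>/dev/null

# The transfer itself, identical in form to writing a real image
# (on real hardware: of=/dev/rdisk2 or similar, run with sudo):
dd if=plan9.img of=sdcard.img bs=4096 2>/dev/null

# Verify the copy byte for byte:
cmp -s plan9.img sdcard.img && echo 'copy verified'
```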

The original Plan 9 image didn't quite work as expected on my setup (which includes a Dynex keyboard and a Logitech mouse).  The keyboard works fine, except that none of the status lights work (e.g. the Caps Lock light).  The mouse, however, originally didn't work at all.  Thankfully, a little googling quickly turned up a new kernel file from Richard Miller, along with a tip about adding the argument "kbargs=-b" to the cmdline.txt file.  Here's the link to the discussion thread with this info.

The kernel update alone didn't do the trick for me, but the addition of the extra argument had my mouse working properly.

So now with a working Plan 9 it's time to explore how the system works.

There are a few Plan 9 videos available on YouTube and Vimeo.  These cover the basics of making windows, using the mouse in Rio (the desktop environment), and the basics of using Acme for browsing and editing files and running commands.

One of the first things you notice in Acme is that output from commands can be separated from input (i.e. appear in separate windows).  This helps to organise information better, and preserves a lot of context.  The ability to execute any text is also very powerful.  When working in regular Unix shells, where input and output are intermixed and scroll up the terminal window, you can quickly lose track of commands issued; although modern shells have histories and shortcuts, it can still be awkward to do a lot of work.  Plan 9 not only captures output in separate windows, it also does not scroll automatically: by default it acts as if output were piped into Unix's "more" utility.  Plan 9 terminals intrinsically understand pagination and scrolling up and down through output.

You can type anywhere in a terminal or Acme window too, even amongst output data.  This facilitates reusing output text to form new commands, which can then be selected (left mouse button) and executed (middle mouse button).

It's clear very quickly that Plan 9 and Acme can be very quick and efficient once you have learned to properly use the mouse and the Acme environment.

Reading about Plan 9, you will soon learn that there's a remote terminal available for a range of other OSes, called drawterm.  Drawterm is available for Mac OS X.  Indeed, this is a completely up-to-date, modern Cocoa application.  Clearly there's a community of people out there still hacking on Plan 9 and making sure it is usable from contemporary desktop systems.

Building the project obtained from the Mercurial repository at the link above is very easy and results in what looks like a command line utility that launches the drawterm client.  However, it's clear that the next step is to understand how to connect to the Raspberry Pi Plan 9 system.  Finding out how to do this is my next challenge.

Saturday, December 8, 2012

Another Plan (9)

Plan 9 is something I have been curious about for a while.  Not curious enough to do much about it, but intrigued nonetheless.

In case the name means nothing, here's a quick 'what the heck is Plan 9'...

Back in the day (being the late 60's), a bunch of computer geeks at Bell Labs, including Ken Thompson, Dennis Ritchie and Brian Kernighan, concocted a new operating system.  This became known as Unix.  Unix has since gone on to be arguably the most important OS ever devised, in terms of its longevity and evolution (versions of it still run today on a huge number of computers), but also in terms of how it has influenced other, non-Unix operating systems (notably DOS and then Windows, although for sure there are other influences).  I'm writing this on a Mac, which today is probably the most successful Unix workstation ever produced.  Not far from me is a Linux machine, which is a Unix too in essentially every sense except being able to call itself that for legal reasons.  Unix has a long and storied history, with every major systems and software vendor at one time having a version of the OS on offer.  In recent times, there has been some consolidation, especially around Linux, but there remain a number of contemporary, supported variants.

For the most part Unix has evolved from its earliest versions, and while it has forked into a number of variants, every version today can trace its ancestry back either directly through evolving code bases or, in Linux's case, through borrowed ideas and concepts.

While Bell continued to evolve its own brands of Unix after the original versions, many of the originators of Unix felt that some of the concepts they had distilled into it could be improved or replaced by other ideas.  In particular, networking was becoming far more prevalent, and it was clear that people would need to collaborate across networks and computers, so information would need to be stored and easily reached by other users on other systems.  Also, graphical terminals were evolving, with Unix itself growing new layers to support graphical applications (such as the X window system).  There had also been some evolutions in Unix that some felt were bad decisions, breaking symmetry and adding complexity in software and for users.  For instance, one of the genius principles of the original Unix had been that files and streams of data between files could be a foundational concept on which you could build a whole OS infrastructure that maintained simplicity but allowed high degrees of flexibility and sophistication.  Over time, though, Unix systems evolved special ways to talk with devices, rather than talking to them using these basic file/stream concepts, breaking this symmetry and adding systemic complexity (irrespective of whether this was more convenient or 'simple' for any given device).

Plan 9 then is a sort of "Unix V2", a 'reboot' of Unix where some of those who were instrumental in creating the original got to refine their original ideas, reject some things they saw as mistakes, double-down on ideas they thought were important and also address emergent requirements pertaining to the evolving world of computing.

Having had a connection with Unix since about 1988, I had read about Plan 9 from time to time.  Things that intrigued me were:

  • It is based on 'grid computing' principles with a distributed filesystem 
  • It has a clever backup/snapshot mechanism for files (similar to Apple's Time Machine)
  • It natively supports remote graphical terminals - without semi-standard but nevertheless add-on stacks like X
  • It is a much simpler and orthogonal design, with a file interface for configuration, devices, networking etc.
  • It has a cool user environment called Acme that borrows ideas from Niklaus Wirth's Oberon system (tiled windows, active text/dynamic hypertext)
  • It has been offered as free/open software for many years 
That's basically all I have known for years.  Additionally, I have seen a Plan 9 descendant called "Inferno" offered for sale as an 'operating environment', i.e. essentially an app to run on other OSes.  I also once compiled a version of Acme for Mac OS X - though I never used it much beyond checking that it started up after building it!

What changed recently is that Richard Miller ported Bell's Plan 9 to the Raspberry Pi, and it was highlighted in the main Raspberry Pi news blog.  The Raspberry Pi is a fantastic way to explore different OSes, because all you need to do is copy the distributed image file onto a spare SD card and you're booting into the new OS within seconds.  There's no finding an old computer to experiment with, no fiddling around with virtual machines, and you get the satisfaction of knowing that, while the Raspberry Pi is a tiny computer, it is nevertheless running these OSes natively.

So, Richard's excellent work presented the ideal opportunity to finally connect with this interesting distributed OS and to find out what's unique and interesting about it.  

In the next series of posts I'll document my experiences in getting up and running (from the standpoint of Plan 9 on the Pi, using Richard Miller's distributed image).

Monday, November 12, 2012

The Software Roundabout

In computing, products come and go at an amazingly fast pace.  That's true of hardware and it's doubly true of software.  Yet, the ideas behind software have their own lifetime.  There are innovations, yes, but these are relatively rare.  Mostly, we combine and recombine ideas, optimising for different things, adjusting and trying different compromises as hardware allows us to do more practically.

Good ideas can appear 'ahead of their time', be implemented crudely, then disappear for a while only to reappear in new forms or combined with other ideas or technologies.  Ideas get packaged and repackaged at different times for different platforms.  While the world of commercial software is typically subject to the 'drag' of the market and the need to ensure the continuance and growth of revenues, we do thankfully also have the academic, hobbyist and commercial start-up worlds that continue to try new things.

Arguably, we have entered an exciting new cycle of innovation, experimentation and advancement as some of the hoary old incumbent computing platforms and models give way (at least somewhat) to new things.  Ubiquitous desktop computing developed over a couple of decades, during which Microsoft guided its Windows OS into a totally dominant position, while the hardware manufacturers made computers ever more cheaply until they were truly commodity items.  However, for the last five years or so, things felt like they were getting really stale.  PCs were cheap enough (in fact most hardware manufacturers seemed to concentrate on making almost the same hardware cheaper every year), but computing was far from delivering the vision of supporting our activities as we worked, played and lived.  For that we needed devices, full-time connectivity, universal identity and presence, with a raft of software technologies built around people and their relationships rather than a disconnected desktop machine.

Apple, to their credit, have been instrumental in showing us the way with new systems - to say the very least, they have bumped the industry and the consumers out of a certain paralysis and showed how 'smart' phones, tablets, app stores can work.  Google has demonstrated the power of the web like never before with searchable information, mapping and other essentially free high-value applications.  Facebook is the most successful of the companies that realised that human social interactions could be amplified with computer networks as a new medium.  

While the PC remains a "terminal device" into the world of web and cloud, with both business information and social information melding on a single fabric, the species increasingly seems out-evolved by portable devices.  If the PC has to stay sat in one place, then we will increasingly expect our digital world to appear on any PC as we sit down to use it, even if there are still ergonomic reasons for the PC form-factor for certain kinds of activity.

So, mainstream computing seems to be going to interesting places again at last.  We watch and wait to see whether tablets become Apple's Knowledge Navigator, or if IBM's Watson becomes HAL, or if the internet turns into Skynet :-)  In the meantime, I also enjoy looking backward at the great ideas and software products that didn't become mainstream.  I entered the workforce at the dawn of personal computing, just as things were evolving from the centralized computing models of mainframes, minis and terminals.  These are some of the remarkable products that I've seen and touched:
  • Micros (ZX81, ZX Spectrum, BBC B, Sinclair QL, Einstein, Enterprise, Atari ST, Amiga)
  • VMS minis (VAX)
  • UNIX minis (Sequent, HP)
  • NeXTStation
  • PSS, FidoNet, Prestel, Demon Internet
  • Workstations (e.g. Sun 3/50, 3/60, DECStation)
  • Psion Organiser, Series 3, Series 5, Series 7.
  • Apple Newton
  • Windows NT
  • Mac OS X
  • Linux
  • Apple devices: iPod, iPhone, iPad with their service ecosystems
This blog will, rather randomly, explore some personal sentiments about the evolution of computing as I have experienced it, and in many ways how I continue to experience ideas and products from the past, as a surprising number of historical software artifacts continue to live on in one form or another.