One of the challenges of providing Smalltalk on Cloud Foundry (or any cloud hosting system) is that applications are typically given an ephemeral file system and isolated from read/write access to any persistent disk. This is, of course, for good reasons, but it makes it difficult to use Smalltalk applications (like Pier on Pharo) that rely on image-based persistence. We recently described getting a Pharo application (in that case AIDAweb) running on Cloud Foundry 2, but each time you stopped and restarted an application instance you lost all the saved data.

This post describes a way to modify a private Cloud Foundry system to provide access to a persistent file system from an application instance. Note that this is a proof-of-concept; it demonstrates that it is possible to work around the carefully-designed isolation in Cloud Foundry. We do so by giving every Pharo application the same shared space, which is completely replaced whenever you upload a new application. We also give all files and directories read/write/execute permission so that subsequent launches of the application (which Cloud Foundry does with a new user and group) will have access.

Create a Virtual Disk

The first step is to create a fixed-size (~500 MB) virtual disk (based on ideas here) that can be used by the application instance for persistent file storage. The following steps, done as root on the Cloud Foundry server, give us the needed disk:

cd /var
# Create a 550 MB file to back the virtual disk (dd creates the file,
# so a separate touch is not needed)
dd if=/dev/zero of=/var/st_virtual_disk.ext3 bs=550000000 count=1
# Put an ext3 file system on it (-F skips the "not a block device" prompt)
mkfs.ext3 -F /var/st_virtual_disk.ext3
# Loop-mount the disk; the Warden change below makes it visible to containers
mkdir /var/smalltalk
mount -o loop,rw,usrquota,grpquota /var/st_virtual_disk.ext3 /var/smalltalk/
# The directory that will appear as /pharo inside each container,
# writable by any user
mkdir /var/smalltalk/pharo/
chmod 777 /var/smalltalk/pharo/
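
Note that a loop mount set up this way will not survive a reboot of the server. One way to make it permanent (an addition beyond the steps above, not part of the original recipe) is an /etc/fstab entry:

# Append an fstab entry so the virtual disk is remounted at boot
echo "/var/st_virtual_disk.ext3 /var/smalltalk ext3 loop,rw,usrquota,grpquota 0 0" >> /etc/fstab
mount -a   # confirm the entry mounts cleanly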

Making the Disk Visible in the Warden Container

Cloud Foundry’s architecture includes a Warden that manages a container for each application instance; each container gets a private root file system. On Ubuntu 10.04 LTS this is implemented using aufs, and the mount action is (at the moment) found in /var/vcap/data/packages/warden/29/warden/root/linux/skeleton/lib/common.sh line 60 (shown here with :/var/smalltalk=rw added):

mount -n -t aufs \
 -o br:tmp/rootfs=rw:$rootfs_path=ro+wh:/var/smalltalk=rw none mnt

This means that the container’s private root file system has a /pharo top-level directory (the pharo/ directory we created on /var/smalltalk) that maps to the persistent ~500 MB virtual disk. The remaining portion of the external file system (from /var/vcap/data/packages/rootfs_lucid64/1) is read-only, and the internal file system (from /var/vcap/data/warden/depot/*/tmp/rootfs/) is read/write.
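
Why does this work? With aufs’s default create policy, a new file is created on the highest-priority writable branch that already contains its parent directory; since pharo/ exists only on the /var/smalltalk branch, anything written under /pharo lands on the virtual disk. A standalone sketch of the same layering (the paths are illustrative, not the ones Warden uses, and an aufs-capable kernel is assumed):

mkdir -p /tmp/top /tmp/base /tmp/persist/pharo /tmp/union
mount -n -t aufs -o br:/tmp/top=rw:/tmp/base=ro:/tmp/persist=rw none /tmp/union
ls /tmp/union                    # pharo/ from the third branch is visible
touch /tmp/union/pharo/data.txt  # its parent exists only on /tmp/persist...
ls /tmp/persist/pharo            # ...so the file lands there and persists
umount /tmp/union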

New Pharo Buildpack

Now, rather than simply creating a startup script, the buildpack needs to copy the application’s files to the persistent file system and use that location when running the application. Following is the new compile script:

#!/usr/bin/env bash
#
BUILD_DIR=$1
CACHE_DIR=$2
BUILD_PACK_DIR=$(dirname $(dirname $0))
#
cd "$BUILD_DIR"
# Replace whatever the previous application left on the shared disk
rm -rf /pharo/* 2> /dev/null
# Install the image and changes files under fixed names
cp *.image /pharo/pharo.image
cp *.changes /pharo/pharo.changes 2> /dev/null
rm -f *.image *.changes
# Make sure a startup script exists, even if only an empty one
if [ ! -e startup.st ]; then
  touch startup.st
fi
# Copy everything else to the persistent disk and open up permissions so
# later launches (done as a different user and group) can read and write
cp -r * /pharo
chmod -R 777 /pharo/*
rm -rf *
#
cat > startup.sh << EOF
#!/usr/bin/env bash
#
umask 0000
ln -s /pharo ./pharo
cd pharo
/opt/pharo/pharo -vm-display-null -vm-sound-null pharo.image startup.st \$PORT
EOF
chmod +x startup.sh
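
The compile script is only one of the three executables a buildpack provides; bin/detect and bin/release are not shown above. A minimal sketch of what they could look like, assuming (as an illustration) that the buildpack recognizes an application by the presence of a .image file:

#!/usr/bin/env bash
# bin/detect (sketch): claim the application if it ships a Pharo image
ls $1/*.image > /dev/null 2>&1 && echo "Pharo" && exit 0
exit 1

and a bin/release that points Cloud Foundry at the startup.sh generated above:

#!/usr/bin/env bash
# bin/release (sketch)
cat << EOF
---
default_process_types:
  web: ./startup.sh
EOF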

Client Changes

On the client (before pushing the application to the cloud), there are some changes needed. I started with the Pier 3.0 download (from this page). The pre-built image is based on Pharo 1.3 and does not include Smalltalk code for accessing OS environment variables (at least none that I found easily). Thus, you can see above that the startup script is modified to add $PORT to the command line when starting Pharo. With that change, Pier can be started on the supplied port with this startup.st script (which might work for any Seaside application):

| manager adaptor port |
"Read the desired port from the first command-line argument, then
restart the Seaside adaptor on that port if it is not already there."
port := (SmalltalkImage current argumentAt: 1) asNumber.
manager := WAServerManager default.
adaptor := manager adaptors first.
adaptor port = port ifFalse: [
 manager stopAll.
 adaptor port: port.
 manager startAll.
].
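
With the image, changes file, and startup.st together in one directory, pushing the application looks something like this (the application name and buildpack URL are hypothetical, and the flags are those of a current cf CLI):

cd pier    # contains Pier.image, Pier.changes, and startup.st
cf push pier -m 512M -b https://github.com/example/pharo-buildpack.git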

This runs Pier on Cloud Foundry with a persistent file system, but since Pier relies on image-based persistence, the image still needs to be saved regularly or recent changes will be lost when the instance stops. It would also be desirable to save the image when sent a SIGTERM (perhaps with something like Chaff). One can save the image from the /status page, though since one can also quit the image from that page it probably should have some security!
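
Cloud Foundry stops an instance by sending SIGTERM to its start command, so one possibility (a sketch, untested here) is a startup.sh that forwards the signal to the VM, leaving it to a handler inside the image, Chaff-style, to actually save:

#!/usr/bin/env bash
umask 0000
ln -s /pharo ./pharo
cd pharo
# Run the VM in the background so the shell can catch and forward SIGTERM
/opt/pharo/pharo -vm-display-null -vm-sound-null pharo.image startup.st $PORT &
PHARO_PID=$!
trap 'kill -TERM "$PHARO_PID" 2> /dev/null' TERM
wait "$PHARO_PID"
wait "$PHARO_PID" 2> /dev/null  # reap the VM if the first wait was interrupted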

In any case, we have demonstrated that it is possible to provide a persistent file store for a Pharo application on a private Cloud Foundry server.
