Periodically the International Earth Rotation and Reference Systems Service (IERS) determines that Coordinated Universal Time needs to be adjusted to match the mean solar day. This is because the earth does not rotate at a constant speed, and a mean solar day generally takes slightly longer than 86,400 seconds. The adjustment is made by adding (or, in theory, removing) a second every few years. The next leap second insertion is scheduled for June 30th, 2015 at 23:59:60 UTC. How does GemStone/S handle that?

GemStone gets time from the host OS as Unix time, which is typically described as the number of seconds that have elapsed since 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970. This definition is correct only if we assume that each and every day has exactly 86,400 seconds. If you take into account the 26 leap seconds that will have been added as of June 30, 2015, it is more accurate to say that Unix time is the number of atomic seconds that have elapsed since 00:00:26 UTC, 1 January 1970. The adjustment happens when there is a leap second: the Unix time counter takes two atomic seconds to advance by one Unix second.

The impact of this approach is that a record of when something happened will be correct, but the time between two events could be reported as one or more seconds less than the actual elapsed time. For example, Unix time (and GemStone) will report that there are five seconds between June 30 at 23:59:55 and July 1 at 00:00:00, when in fact in 2015 there were six seconds between those two times.
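
You can see the same arithmetic with ordinary Unix tools; here is a quick sketch using GNU date (the -d option is GNU-specific, so this assumes a Linux machine):

t1=$(date -u -d '2015-06-30 23:59:55' +%s)   # 1435708795
t2=$(date -u -d '2015-07-01 00:00:00' +%s)   # 1435708800
echo $((t2 - t1))                            # prints 5, even though six atomic seconds elapsed in 2015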

Whether this matters is an application-level or domain-level problem. If keeping track of the time between events needs to be accurate (e.g., recording some rapid physics event), then relying on Unix time (or GemStone) will not be sufficient. If all that is needed is a timestamp and it is acceptable to have June 30, 2015 23:59:59 UTC last for 2000 milliseconds, then things should be fine.

A screencast of this blog post is here.

Because Smalltalk was the origin of much of today’s GUI (mouse, overlapping windows, drop-down menus), Smalltalk developers are understandably accustomed to a nice GUI IDE. GemStone/S is an excellent database and Smalltalk execution environment and includes a built-in command-line tool, Topaz, where you can execute Smalltalk code, but it has no native GUI. In this blog post we continue the demonstration of GemStone/S on the Macintosh (started here) and show Jade, a GUI-based IDE available on Microsoft Windows.

We launch the GemStone application on the Macintosh, update the version list, download a version, then install and start a GLASS extent (image) that includes the Monticello/Metacello tools. When the database is running we start a Topaz session and install Seaside 3.0 and Magritte 3 with the following script:

"based on"
MCPlatformSupport commitOnAlmostOutOfMemoryDuring: [
  ConfigurationOfMetacello project updateProject.
  ConfigurationOfMetacello loadLatestVersion.
  Gofer project load: 'Seaside30' group: 'Seaside-Adaptors-Swazoo'.
].
"based on"
Gofer it
  squeaksource: 'MetacelloRepository';
  package: 'ConfigurationOfMagritte3';
  load.
MCPlatformSupport commitOnAlmostOutOfMemoryDuring: [
  ConfigurationOfMagritte3 project stableVersion load.
].
WAGsSwazooAdaptor new start.

When Swazoo is running, we can go to http://localhost:8080 and see Seaside running locally. This demonstrates running Smalltalk code in Topaz, the command-line tool. Next we look at a GUI-based IDE that runs on a Microsoft Windows client platform.
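
A quick way to confirm from a shell that the Swazoo adaptor is listening (assuming curl is installed) is to request the default page and print only the status code:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/
# an HTTP status code (200, or a redirect) means Swazoo/Seaside is serving requests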

Jade is available as a 14 MB zip download. It includes an executable, client libraries (DLLs) for various GemStone/S versions (ranging from 32-bit version 6.1 to the latest 64-bit version), and related files (including source code). Like most GemStone/S client GUI tools, it is built in another Smalltalk (in this case, Dolphin Smalltalk from Object Arts), but unlike those other tools you can’t see the client Smalltalk (unless you load the Jade source code into your own Dolphin development environment), so we avoid the two-object-space confusion. Jade is intended to take you directly to GemStone/S, without going through Pharo, Squeak, VA, or VW Smalltalk.

Jade is also designed to work with the no-cost version of GemStone/S (unlike the VA/VW-based GBS tools), and performs well on a slow network (unlike GemTools).

When you unzip the download, you have a folder with various items. Jade.exe is the primary executable (containing the Dolphin VM and the image) and it relies on Microsoft’s C Runtime Library. There is a copy of the executable in Jade.jpg for sites where executables are stripped from zip files during the download process (simply rename the suffix and it will become executable). Contacts.exe is used sometimes in a training class. The bin directory contains the GCI client libraries and a DLL with various images used in the IDE. You can also see a directory containing source code for Jade.

Screen Shot 2013-10-01 at 10.56.18 AM

When you launch Jade, you get a login window that gives you a place to select the GemStone/S version (which GCI library we will use), and other information needed for a login. The Stone box contains two fields, one for the host/IP of the Stone machine (from the perspective of the Gem machine, so localhost is almost always sufficient), and the name of the Stone. In the screencast mentioned above our stone was named gs64stone1. The Gem box contains a number of fields. Most logins will use an RPC Gem (a Linked Gem is available only on 32-bit Windows) with a Guest-authenticated Gem (if your NetLDI was not started in guest mode (-g), then you will need to provide an OS user and password). An RPC Gem will be started by a NetLDI, so we need to identify the Gem machine (in my example the host is vienna and the NetLDI is listening on port 54120) and the command used to start the Gem (except in rare cases the command will be ‘gemnetobject’). You provide a GemStone User ID and Password (by default, ‘DataCurator’ and ‘swordfish’), and if you are going to use any Monticello features it would be good to identify the developer’s name (one word with CamelCase).

Screen Shot 2013-10-01 at 11.01.49 AM

If you get an error on login, we attempt to give as much explanation as possible. Typically, (1) there is no NetLDI on the host/port (see following example), (2) there is no stone with that name, (3) there is a version mismatch, or (4) you have given an unrecognized user ID or password.

Screen Shot 2013-10-01 at 11.13.38 AM

When you have a successful login, you will get a launcher that consists of several tabs. The Transcript serves the traditional (ANSI) Transcript function of showing output sent to the Transcript stream. The second tab shows information about your current session.

Screen Shot 2013-10-01 at 11.19.59 AM

The third tab shows information about the current logged-in sessions, including where the Gem is located, where the client GCI is located, and whether a Gem is holding the oldest commit record. If you have appropriate security, you can send a SigAbort to a session or even terminate it!

Screen Shot 2013-10-01 at 11.21.07 AM

The final tab is a Workspace. In this tab you can execute, print, and inspect Smalltalk code. You can also use the toolbar or menus to abort or commit and open other tools.

Screen Shot 2013-10-01 at 11.17.23 AM


One of the tools is a User Profile Browser that shows the various users defined in the database.


Screen Shot 2013-10-01 at 11.31.42 AM

Next is a Monticello Repository Browser that shows various repositories, packages, and versions.


Screen Shot 2013-10-01 at 11.32.47 AM

The Monticello Browser includes a tool to browse differences between packages.

Screen Shot 2013-10-01 at 11.33.36 AM


Much of your work will be done in a System Browser. This view shows the four SymbolDictionary instances in my SymbolList. UserGlobals is bold, indicating that it is the ‘home’ or default SymbolDictionary, but I have selected Globals and see a list of class categories, a partial list of classes, and, in the lower section of the screen, a list of the non-class objects in Globals (note things like AllGroups and AllUsers).

Screen Shot 2013-10-01 at 11.41.42 AM


This screen shot shows us an example of the Packages view (which requires Monticello), and a method with a breakpoint (the red rectangle around a method).

Screen Shot 2013-10-01 at 11.43.54 AM

There are other tools, including a debugger, but I’ll leave that for your exploration (and/or another post/screencast).

Have fun and let me know if you have questions or feature requests.






A screencast of this post is here.

GemStone/S 64 Bit has been available for the Macintosh for several years but you generally need to install and configure it much like you would do if you were on a Linux/Unix system. That is, there are a lot of command-line steps and system configurations that are needed. For someone who is used to the Macintosh’s consistent graphical user interface and who is not so familiar with configuring a Unix server, this tends to create a high barrier-to-entry. For a while I’ve been playing with an alternative that makes it easier to install and run GemStone/S 64 Bit on a Macintosh. This blog post will describe how to use this tool.

To start, download GemStoneApp.dmg, a disk image of a Cocoa application. Open the disk image to get to the virtual disk containing the application and a shortcut to your Applications folder. Copy the application to your local machine (typically, to the Applications folder, but it can be anywhere). At this point you can eject the virtual disk and delete the disk image. This will give you a 770 KB bundle that is a Cocoa application built with Xcode 5.

Screen Shot 2013-09-25 at 1.47.03 PM

You can launch the application in several ways. First, open Spotlight (Command + space) and type ‘gemstone’ (without the quotes). The application should be found and you can launch it by clicking on the name or just pressing Return. Second, you can use Launchpad, find the GemStone application, and then click on it. Finally, you can use the Finder to navigate to the folder holding the application (typically ‘/Applications/’) and launch it from there.
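
If you prefer a shell, the standard open command should also work; this assumes the bundle was copied to /Applications and kept its default name:

open /Applications/GemStone.app
# or, by application name:
open -a GemStone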

If the system presents a dialog reporting that the application could not be launched because it was not downloaded from the Mac App Store, then you have a couple of options. First, you can configure your security to allow applications from “identified developers.” This is done by launching System Preferences, selecting Security & Privacy, unlocking the page if needed (click on the padlock icon at the bottom left if it is closed), and then clicking the radio button for ‘Mac App Store and identified developers’ under the heading ‘Allow applications downloaded from:’. Once this is done, relaunch the application (as described in the previous paragraph) and confirm that you want to run it. Second, you can use the Finder to navigate to the directory holding the application (typically ‘/Applications/’), right-click or Control-click on the application, and select the ‘Open’ menu item. This may ask you to confirm that you want to open the application. If you confirm once then it will not ask again.

Screen Shot 2013-09-25 at 2.03.42 PM

Once the application launches, make sure that the ‘Setup’ tab is selected. There are some setup steps that are typically done as root (using ‘sudo’ from a shell prompt) that we can do programmatically if we have adequate authorization. These steps are done using a ‘Helper Tool’ that runs as root in the background and performs very limited actions. In our case, we need to set a couple of kernel settings, kern.sysv.shmall and kern.sysv.shmmax, to allow for shared memory (this can be done manually, but is easier with the helper tool). Click the ‘More info’ button if you want to learn more, then click the ‘Authenticate…’ button, give your password, and then click the ‘Install Helper’ button. (You can click the ‘Remove’ button to remove the helper tool.)
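
For reference, the manual equivalent is roughly the following, run from a Terminal as an administrator; the values are only illustrative (they allow about 2 GB of shared memory), and the helper tool picks sizes appropriate for your cache settings:

sudo sysctl -w kern.sysv.shmmax=2147483648   # maximum size of a single shared memory segment, in bytes
sudo sysctl -w kern.sysv.shmall=524288       # total shared memory, in 4 KB pages (524288 * 4 KB = 2 GB)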

Next you need to import the list of available versions by clicking the ‘Update’ button. When this finishes (it should only take a couple seconds), you will have a list of versions and their release dates. To install a version, click the checkbox next to the version name or (if you have already downloaded a zip file of the product tree from here or here), click on the ‘Unzip…’ button and select an existing zip file of a product tree. After the version is unzipped you can start to use it.

Screen Shot 2013-09-25 at 2.23.45 PM

Click on the Databases tab and click on the ‘+’ button to create a database. This will set up a directory structure, create a config file, and copy a base extent. You can change the version (if the database has not been used), edit the name of the stone, the NetLDI, and the shared page cache size. After you have made any changes you want, you can click the ‘Start’ button to start the stone (and related processes).

Screen Shot 2013-09-25 at 2.30.05 PM


In the Databases tab there are a series of sub-tabs, the first of which is ‘Data Files.’ Here you can select the extent(s) or tranlog(s) to see some information about them.

The second sub-tab gives you some backup and restore options. When the database is not running you can initialize a base extent (a copy of $GEMSTONE/bin/extent0.dbf) or a ‘GLASS’ extent (a copy of $GEMSTONE/bin/extent0.seaside.dbf), and you can restore from a backup. When the database is running you can make a backup.
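
Under the covers, initializing an extent amounts to copying the base extent into the database’s extents directory while the stone is stopped and making it writable; a rough sketch, with an assumed directory layout:

# stone must be stopped; the destination path is illustrative
cp $GEMSTONE/bin/extent0.seaside.dbf /path/to/my/database/extents/extent0.dbf
chmod +w /path/to/my/database/extents/extent0.dbf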

The third sub-tab is ‘Process Logs’ and this gives you a list of log files associated with the GemStone processes. You can double-click a line (or select the line and click the ‘Open’ button), and the appropriate log file will open using the Macintosh Console application (used to view system logs).

The fourth sub-tab is ‘Archives’ and gives you some information about archived process logs (the text files described above) and the transaction logs. These can typically be deleted without impacting a running system (though you might want to keep transaction logs made following any backup if you have to do a restore).

The fifth sub-tab is ‘Statistics’ and shows a list of statmonitor files created while the system is running. If you double-click a line (or single-click and click the ‘Open’ button) then the application will launch ‘VSD’, a Visual Statistics Display tool that can be used to analyze the running system.

(If there is a sixth sub-tab, ‘Upgrade’, you should ignore it since it is disabled and does not do anything right now.)

After the Databases tab is a third tab, ‘GS List,’ that shows a list of the current processes. The port number for the NetLDI process might be useful.

(If there is a fourth tab, ‘Logins’, you should ignore it since it is disabled and does not do anything right now.)

Returning to the Databases tab, we have the ability to open a Finder window on the database directory (the button with a folder). Here you can use the Finder to explore the implementation details of the database. There is a ‘GemTools’ button that opens a text field with a Smalltalk expression that can be pasted into a GemTools session definition. Finally, there is a ‘Terminal’ button that can be used to open the Macintosh Terminal application. This starts a new Terminal application (this will be confusing if you already have one running) with the current working directory set along with various environment variables, including $GEMSTONE and $PATH. From this terminal window you can execute GemStone commands like ‘gslist’ and ‘topaz’.
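
From that Terminal window a quick sanity check might look like the following (the exact output depends on which stones and NetLDIs are running):

echo $GEMSTONE   # the product tree for this database's GemStone/S version
gslist           # list the running GemStone processes (Stone, NetLDI, shared page cache monitor)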

Note that you can run multiple databases at one time and they can be different versions. When you are done you can click the ‘Stop’ button on the ‘Databases’ tab. When there are no databases running you can Quit the application (the application window can be minimized but not closed). Let me know if this is helpful and what further features you would like to see.


My post on a Cloud Foundry AMI was with a simple Ruby application. In a comment Andrew Spyker asked about a node.js application. I’ve done Javascript but not node.js, so I thought I’d give it a try. I followed my earlier instructions to get the CF Micro instance started and then, starting with the ‘Use the Server’ section, I did something different.

Using the webserver example here, I created two files in a new directory. The first file, example.js, contained the following:

var http = require('http');
var port = parseInt(process.env.PORT,10);
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(port);
console.log('Server running on port ' + port.toString());

The second file, package.json, contained the following:

{
  "name": "http-server",
  "version": "0.0.1",
  "author": "James Foster <>",
  "description": "webserver demo from",
  "dependencies" : [ ],
  "engines": {
    "node": ">=0.10"
  }
}

(Bear in mind that this is my first node.js application, and I just spent an hour or so poking around on the web to get this far.)

From the command line I’m able to run it by entering the following:

export PORT=1337; node example.js

With this I can open a web browser on http://localhost:1337/ and see the greeting.

To push my trivial application to my EC2 instance, I did the following:

cf target http://api.<my-ip>
cf login --password mySecret admin
# okay to ignore CFoundry::InvalidRelation error in next command
# (see
cf create-space development 
cf target --space development
cf map-domain --space development <my-ip>
cf push --command "node example.js"

The interaction included giving the application a name (“hello”), accepting the defaults, and saving the configuration:

Name> hello
Instances> 1
1: 128M
2: 256M
3: 512M
4: 1G
Memory Limit> 256M
Creating hello... OK
1: hello
2: none
Subdomain> hello
2: none
Binding to hello... OK
Create services for application?> n
Save configuration?> y
Saving to manifest.yml... OK
Uploading hello... OK
Preparing to start hello... OK
-----> Downloaded app package (4.0K)
-----> Resolving engine versions
 Using Node.js version: 0.10.17
 Using npm version: 1.2.30
-----> Fetching Node.js binaries
-----> Vendoring node into slug
-----> Installing dependencies with npm
 npm WARN package.json http-server@0.0.1 No repository field.
 npm WARN package.json http-server@0.0.1 No readme data.
 npm WARN package.json http-server@0.0.1 No repository field.
 npm WARN package.json http-server@0.0.1 No readme data.
 Dependencies installed
-----> Building runtime environment
-----> Uploading droplet (15M)
Checking status of app 'hello'...
 0 of 1 instances running (1 starting)
 0 of 1 instances running (1 starting)
 1 of 1 instances running (1 running)
Push successful! App 'hello' available at

When I went to the URL provided, I saw the greeting.

I recently made an Amazon Machine Image (AMI) available (described here) and described how to use it. In this post I will describe the process of taking an existing AWS Instance with Cloud Foundry Micro (described here) and making it suitable for a public AMI. The primary challenge is that we need to create a boot disk with Cloud Foundry already installed but configured so that it can use a new password, domain, and local IP address. We also need to remove any identifying information before shutting down the VM and taking a snapshot.

I first use the following script to install Cloud Foundry using the cf_nise_installer. Note that I am setting a well-known IP, domain, and password; these will be changed later during the boot process. I am also deleting some files that might exist from a previous install attempt. Finally, I am creating a list of files that might contain data that must be modified during the boot process.

set -x
cd ~
sudo rm -rf \
  cf_nise_installer \
  /var/vcap/data/sys/log/* \
  domain password files ipv4
export INSTALLER_BRANCH='master'
export CF_RELEASE_BRANCH='release-candidate'
export NISE_DOMAIN='cloud.mycloud.local'
export NISE_PASSWORD='swordfish'
bash < <(curl -s -k -B${INSTALLER_BRANCH:-master}/local/
list="$(sudo find /vol-a/var/vcap -type f \( -iname '*.yml' -o -iname 'postgres_ctl' \) )"
echo $list > ~/files

We next create a startup script, /home/ubuntu/rc.local, and modify /etc/rc.local to call our new script (if you delete the authorized_keys and fail to add them back, then you can’t log in to your instance!).

#!/bin/bash
# should be called from /etc/rc.local
echo "Starting /home/ubuntu/rc.local script to start Cloud Foundry"

# See if 'debug: true' is in user-data
X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'debug:' )
X=$( echo $X | cut -d : -f 2 | tr -d ' ' )
if [ "$X" == "true" ]; then
  set -x
fi

# Make sure that we have some keys (should be deleted before snapshot)
cd /home/ubuntu
if [ ! -f .ssh/authorized_keys ]; then
  curl http://instance-data/latest/meta-data/public-keys/0/openssh-key \
    2>/dev/null > .ssh/authorized_keys
  chown ubuntu:ubuntu .ssh/authorized_keys
  chmod 600 .ssh/authorized_keys
  echo "Updated authorized_keys"
fi

# See if 'autoStart: false' is in user-data; if so, done!
X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'autoStart:' )
X=$( echo $X | cut -d : -f 2 | tr -d ' ' )
if [ "$X" == "false" ]; then
  echo "user-data includes 'autoStart: false' so we will quit"
  exit 0
fi

# Each instance should have a different password (but once set it does not change!)
OLD_PASSWORD='swordfish'   # the well-known password used by the install script
if [ -e password ]; then
  # If we have been through this code before, then reuse the previous password
  OLD_PASSWORD=$( < password )
  NEW_PASSWORD=$OLD_PASSWORD
else
  # This is the first time through, so we replace the default password
  # User may specify a 'password:' line in the user data
  X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'password:' )
  NEW_PASSWORD=$( echo $X | cut -d : -f 2 | tr -d ' ' )
  if [ "$NEW_PASSWORD" == "" ]; then
    # No user-provided one and no previously created one, so assign a random one
    # another option is $( openssl rand -base64 6 )
    NEW_PASSWORD=$( < /dev/urandom tr -dc A-Za-z0-9 | head -c8 )
  fi
  # Save new password so it can be reused next time
  echo "$NEW_PASSWORD" > password
fi

# See if 'domain:' is in user-data
NEW_DOMAIN=''
X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'domain:' )
if [ "$X" != "" ]; then
  NEW_DOMAIN=$( echo $X | cut -d : -f 2 | tr -d ' ' )
else
  # derive a domain from the public-hostname
  until [ "$NEW_DOMAIN" != "" ]; do
    X=$( curl http://instance-data/latest/meta-data/ 2>/dev/null | grep public-hostname )
    if [ "$X" != "" ]; then
      X=$( curl http://instance-data/latest/meta-data/public-hostname 2>/dev/null )
      if [ "$X" != "" ]; then
        NEW_DOMAIN=$( echo $X | cut -d "." -f 1 | cut -d "-" -f 2- | sed "s/\-/\./g" )
      fi
    fi
    if [ "$NEW_DOMAIN" == "" ]; then
      echo "`date`: public-hostname not yet available from meta-data"
      sleep 1
    fi
  done
fi
OLD_DOMAIN='cloud.mycloud.local'   # the well-known domain used by the install script
if [ -f domain ]; then
  OLD_DOMAIN=$( cat domain )
fi
echo $NEW_DOMAIN > domain

# Each new instance will have a unique private IP address
NEW_IPV4=$( curl http://instance-data/latest/meta-data/local-ipv4 2>/dev/null )
OLD_IPV4=$NEW_IPV4
if [ -f ipv4 ]; then
  OLD_IPV4=$( cat ipv4 )
fi
echo $NEW_IPV4 > ipv4

# Find all the files that need to be edited
# (takes several seconds, so cache it for faster start next time)
if [ -f files ]; then
  list=$( cat files )
else
  list="$(find /var/vcap -type f \( -iname '*.yml' -o -iname 'postgres_ctl' \) )"
  echo $list > files
fi

for f in $list
do
  if [ "$OLD_PASSWORD" != "$NEW_PASSWORD" ]; then
    if grep -q $OLD_PASSWORD $f; then
      sed -i "s/$OLD_PASSWORD/$NEW_PASSWORD/g" $f
      echo "Updated password in $f"
    fi
  fi
  if grep -q $OLD_DOMAIN $f; then
    sed -i "s/$OLD_DOMAIN/$NEW_DOMAIN/g" $f
    echo "Updated domain in $f"
  fi
  if grep -q "$OLD_IPV4" $f; then
    sed -i "s/$OLD_IPV4/$NEW_IPV4/g" $f
    echo "Updated IP in $f"
  fi
done

# start Cloud Foundry
(cd /home/ubuntu/cf_nise_installer; ./local/

Finally, I create a script to remove identifying information and sanitize the system before shutdown. Note that once authorized_keys is removed from the .ssh directory you can’t log in unless new authorized_keys are installed during the boot process.

sudo rm -f /root/.ssh/* /home/*/.ssh/* /var/vcap/monit/monit.log
(cd /var/log; sudo rm -f *.log dmesg* debug messages syslog)
rm -f ~/.viminfo ~/.bash_history
echo '' | sudo tee /var/log/lastlog
history -c
# sudo shutdown now

After running this script I use the AWS tools to stop the instance (this means that the command history is empty). At this point I can use the AWS tools to create an AMI. When the AMI starts it executes /etc/rc.local which calls my new script, /home/ubuntu/rc.local. This script sets the local IPv4 value, along with the domain and password (using configured values if provided in the user-data). When the appropriate configuration info has been updated, then we start Cloud Foundry. Each new instance has its own IP, domain, and password, making it secure and unique.

If there is something wrong with the boot volume (such as missing authorized_keys so you can’t log in), then you need to start another EC2 instance (a micro is fine), attach the volume, do the fix-up, and release it.

# mount a new volume (in case some surgery is needed)
sudo mkdir -p -m 000 /vol-a
sudo mount /dev/sdf /vol-a
# ... do whatever fix-up is needed and unmount the volume
sudo umount -d /vol-a
sudo rmdir /vol-a

I have used /etc/rc.local to hook into the boot process. An alternative would be to use crontab with the ‘@reboot’ option.
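
For example, a crontab entry for the ubuntu user along these lines should run the same startup script at boot (the log path is only illustrative):

# added with 'crontab -e' as the ubuntu user
@reboot /home/ubuntu/rc.local >> /home/ubuntu/rc.local.log 2>&1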

If you have an instance-store (a disk that exists only while the instance is running), then you might want to set up some swap space on it. The following script will create a 4 GB file to be used as swap space.

sudo dd if=/dev/zero of=/mnt/swapfile bs=1M count=4096 &&
sudo chmod 600 /mnt/swapfile &&
sudo mkswap /mnt/swapfile &&
echo /mnt/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab &&
sudo swapon -a
cat /proc/swaps

That summarizes how I created a Cloud Foundry Micro public AMI.

Video of ESUG 2013 Presentation

My presentation “Smalltalk in the Cloud” was recorded and can be found at the link. Unfortunately, the audio was not very strong.

Update: Slides are here.

Update: A video of this post is available here.

After creating a Cloud Foundry Micro on an Amazon EC2 instance (described here), I decided to make it available as a public AMI so that others could try it out (especially since the Micro is not available unless you built it yourself using Altoros vagrant or Nise BOSH).

To use this you need to sign up for an Amazon AWS account.

Start a Cloud Foundry Instance

  • Once you have an account, launch ami-98c956a8 (currently available in us-west-2; add a comment if you want it available elsewhere).
  • Confirm that the manifest reads “366179850620/Cloud Foundry Micro” and click Continue.
  • Change the instance type to m1.small and click Continue.
  • In the ‘User Data:’ field you may enter some optional customizations (each on its own line) and click Continue.
    • password: mySecret (AMI rules prohibit default passwords so if you don’t provide your own we will generate a random one);
    • domain: (if you don’t provide a domain, we will assign <public-IP> as the domain); and
    • debug: true (adds some debugging information to /var/log/boot.log).
  • Review the ‘Storage Device Configuration’ (an 8 GB root volume and an ephemeral instance store) and click Continue.
  • You may give a ‘Name’ tag to your EC2 instance, say CF Demo, and click Continue.
  • You may select or create a key pair to be used to log in to your server (optional, but useful), and click Continue.
  • Select or create a Security Group with at least HTTP access and click Continue.
    • ICMP – Echo Request (optional, to allow your server to respond to ping);
    • TCP – SSH (optional, to allow you to log on to your server using a private key); and
    • TCP – HTTP (required, to interact with Cloud Foundry and the applications you push to the server).
  • Review the configuration information and click Launch.
  • Click  View your instances on the Instances page to discover the public IP address.

Identify the Domain

To use the server you need to know its domain name.

  • If you provided a domain in the User Data above, then you need to create a record set in your domain name server to point your domain and all subdomains (using the ‘*’ wildcard match) to the indicated address; or
  • If you did not provide a domain, then your domain is <public-IP> (a DNS that maps all requests to the given IP).

Log on to the Server (Optional)

  • Identify the path to the private key associated with the key pair you selected or created when you created the EC2 instance.
  • From a command shell (on Mac, Linux, or Unix), or an SSH client on Windows (such as PuTTY), connect to the server. E.g.,

ssh -i /path/to/my/private/key.pem ubuntu@domain

  • Once connected you can explore the server.
sudo /var/vcap/bosh/bin/monit summary # check Cloud Foundry status (all running except cloud_controller_jobs)
tail /var/log/boot.log # check here if things don't seem to start properly
cat ~/domain # show the configured domain (from User Data or public IP)
cat ~/password # show the configured password (from User Data or random generation)
cd /var/vcap/data/sys; sudo chmod +rx log; cd log; ll # list of log file directories
  • You can execute a single command on the server using ssh:
ssh -i /path/to/my/private/key.pem ubuntu@hostname_or_domain cat password

Use the Server

To use the server you can refer to the cf command line reference. For example, on your local machine create and set up the environment:

mkdir ~/cloud ~/cloud/ruby; cd ~/cloud/ruby
sudo gem install bundle sinatra cf

Create three files using your favorite text editor:

Gemfile:
source ''
 ruby '1.9.3'
 gem 'sinatra'

env.rb:
require 'rubygems'
require 'sinatra'
configure do
    disable :protection
end
get '/' do
    host = ENV['VCAP_APP_HOST']
    port = ENV['VCAP_APP_PORT']
    "<h1>Hello World!</h1><h2> I am in the Cloud! via: #{host}:#{port}</h2>"
end
get '/env' do
    res = ''
    ENV.each do |k, v|
        res << "#{k}: #{v}<br/>"
    end
    res
end

config.ru:
require './env.rb'
run Sinatra::Application

To create a ‘Gemfile.lock’ from the ‘Gemfile’ run the following command:

bundle install

I can test the application by running the following command:

ruby env.rb

When it tells me that Sinatra has taken the stage I enter http://localhost:4567/ and http://localhost:4567/env in a web browser.

Then I can use ‘cf’ to set my target, login, do some configuration, and push my application to the cloud (replacing <my-ip> and mySecret with your server’s public IP and password):

cf target http://api.<my-ip>
cf login --password mySecret admin
# okay to ignore CFoundry::InvalidRelation error in next command
# (see
cf create-space development 
cf target --space development
cf map-domain --space development <my-ip>
cf push

If the push is successful, it will show the URL at which you can see the application. When you are done you can stop and/or terminate your EC2 instance.

Recently we went through the process of installing a micro Cloud Foundry on a local virtual machine. We are now interested in doing the same on an Amazon EC2 instance. As far as I have been able to find, the existing instructions for using AWS set up a system with many VMs. In this post we look at a “micro” (or single-machine) Cloud Foundry setup.

To do this you need to sign up for an Amazon AWS account. Next, you need to decide where to build your Cloud Foundry instance. Amazon has data centers in eight regions, and you can pick based on geography (close to you has less network latency) and price (some are more expensive). I am close to the US (West) Oregon Region (us-west-2) and it is among the least expensive.

Next you select a base operating system for your machine. Cloud Foundry recommends 64-bit Ubuntu 10.04 LTS, so go to Ubuntu’s Amazon EC2 AMI Locator and enter ’64 lucid ebs’ in the search area (since we are going to make changes to the setup we want to be on a persistent store, hence the EBS selection). When the search list is narrowed down to one for each region, click on the link for the region you want.

Ubuntu Amazon EC2 AMI Locator

This takes us to the EC2 Management Console (perhaps with a login) where you can review information about the selected AMI and click the Continue button.

Screen Shot 2013-08-27 at 10.29.30 AM

For the Instance Details, change the Instance Type from ‘T1 Micro’ to ‘M1 Small’ or ‘M1 Medium’ and click Continue.

Screen Shot 2013-08-27 at 10.33.15 AM

Next, give ‘CF Micro’ as ‘User Data’ and click Continue.

Screen Shot 2013-08-27 at 11.57.38 AM

Do not make any changes to the Storage Device Configuration; simply click Continue.

Screen Shot 2013-08-27 at 12.01.43 PM

For the Tags, give ‘CF Micro’ as the Name and click Continue.

Screen Shot 2013-08-27 at 12.02.20 PM

To interact securely with the instance you need a key pair. The Wizard prompts for a name and you may enter anything (such as ‘cfMicro’) and then click ‘Create and Download your Key Pair’.

Screen Shot 2013-08-27 at 12.04.26 PM

The default Security Group does not allow any outside access to the instance. Create a new Security Group, named ‘CF Micro’ with a description of ‘ping, ssh, http’, add the appropriate rules, and click Continue.

Screen Shot 2013-08-27 at 12.10.57 PM

Next, click the Launch button to start the instance.

Screen Shot 2013-08-27 at 12.14.20 PM

When informed that the instance is starting, click Close.

Screen Shot 2013-08-27 at 12.16.28 PM

This takes us to the list of Instances on the Management Console.

When we built a micro Cloud Foundry on a local virtual machine, the IP address was assigned by Fusion and was the same from “inside” and “outside” the machine. When running an EC2 instance on AWS, the machine is behind a firewall and on an internal (private) network. While our virtual machine can be reached via a public IP address, the machine actually has a different IP address.
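
You can see both addresses from the instance itself by querying the EC2 metadata service (the same endpoints the later scripts use):

curl http://instance-data/latest/meta-data/public-ipv4   # the address the outside world uses
curl http://instance-data/latest/meta-data/local-ipv4    # the address the instance sees on its own interface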

Optional: If we stop and start the machine, the default behavior is that we are likely to get a different public address. In order to have a stable IP address for our instance, we can allocate an Elastic IP and associate it with the running instance. (If you are only doing this once and will throw away the system, then you can skip this step.) From the EC2 Management Console, click Elastic IPs in the navigation pane on the left and then click the Allocate New Address button (instructions here).

Screen Shot 2013-08-28 at 2.57.14 PM

Optional (continued): Select the new address and click the Associate Address button. In the dialog box select the running instance and click the Yes, Associate button.

Screen Shot 2013-08-28 at 2.59.39 PM

Whether you have a stable IP or not, you are now almost ready to log on to your new server. Click on Instances in the navigation pane on the left, select the CF Micro instance, right-click, and select the ‘Connect’ menu command.

Screen Shot 2013-08-27 at 12.17.59 PM

This gives us a window with instructions on how to connect to the instance. I prefer to use SSH from my MacBook Pro, so I look at the instructions for the standalone SSH client.

Screen Shot 2013-08-27 at 12.22.48 PM

Before you can use the command line provided (highlighted above) you need to do a little bit of setup on your local machine. Create a working directory and copy in the private key downloaded earlier. Then connect to the new server (your IP address will be different).

mkdir ~/cloud/cfMicro
cd ~/cloud/cfMicro
mv ~/Downloads/cfMicro.pem .
chmod 400 cfMicro.pem 
ssh -i cfMicro.pem ubuntu@

Once logged in to the server, you can install Cloud Foundry using Iwasaki Yudai’s cf_nise_installer.

export IPV4=`wget -qO- http://instance-data/latest/meta-data/public-ipv4`
export NISE_DOMAIN=$
export CF_RELEASE_BRANCH=release-candidate
bash < <(curl -s -k -B${INSTALLER_BRANCH:-master}/local/

When this finishes, you should restart your server.

sudo shutdown -r now

After a minute or so, log in to the server again using the ssh command above and start your Cloud Foundry.

(cd ~/cf_nise_installer; ./local/

Once the server is started, you can logout from the server (or open a second session on your client) and create a new application to push to your cloud. I suggest that you follow my earlier example with the following manifest.yml (change the domain to reference your server’s IP address):

applications:
- name: env
  memory: 256M
  instances: 1
  host: env
  path: .
  command: 'ruby env.rb'

Then use the Cloud Foundry command line tools to configure things and push your application (use your own IP address instead of the one shown here!).

cf target
cf login --password c1oudc0w admin
# okay to ignore CFoundry::InvalidRelation error in next command
# (see
cf create-space development 
cf target --space development
cf map-domain --space development
cf push

When this finishes, you should be able to open a web browser on something like (use your own IP address) and see the application. Congratulations!


One of the challenges of providing Smalltalk on Cloud Foundry (or any cloud hosting system) is that the applications are typically given an ephemeral file system and isolated from read/write access to any persistent disk. This is, of course, for good reasons, but makes it difficult to use Smalltalk applications (like Pier on Pharo) that rely on image-based persistence. We recently described getting a Pharo application (in that case AIDAweb) running on Cloud Foundry 2, but each time you stop and start an application instance you would lose all the saved data.

This post describes a way to modify a private Cloud Foundry system to provide access to a persistent file system from an application instance. Note that this is a proof-of-concept and demonstrates that it is possible to work around the carefully-designed isolation in Cloud Foundry. We do so by providing every Pharo application the same shared space that is completely replaced whenever you upload a new application. We also give all files and directories read/write/execute permission so that subsequent launches of the application (which Cloud Foundry does with a new user and group) will have access.

Create a Virtual Disk

The first step is to create a fixed-size (~500 MB) virtual disk (based on ideas here) that can be used by the application instance for persistent file storage. The following steps, done as root on the Cloud Foundry server, give us the needed disk:

cd /var
touch st_virtual_disk.ext3
dd if=/dev/zero of=/var/st_virtual_disk.ext3 bs=550000000 count=1
mkfs.ext3 /var/st_virtual_disk.ext3
mkdir /var/smalltalk
mount -o loop,rw,usrquota,grpquota /var/st_virtual_disk.ext3 /var/smalltalk/
mkdir /var/smalltalk/pharo/
chmod 777 /var/smalltalk/pharo/
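
To confirm that the loop-mounted disk is in place before going further, something like the following should show a filesystem of roughly 500 MB mounted at /var/smalltalk:

df -h /var/smalltalk          # size, usage, and mount point of the virtual disk
mount | grep st_virtual_disk  # shows the loop device and mount options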

Making the Disk Visible in the Warden Container

Cloud Foundry’s architecture includes a Warden that manages a container for each application instance that includes a private root file system. On Ubuntu 10.04 LTS this is implemented using aufs, and the mount action is (at the moment) found in /var/vcap/data/packages/warden/29/warden/root/linux/skeleton/lib/ line 60 (with :/var/smalltalk=rw added):

mount -n -t aufs \
 -o br:tmp/rootfs=rw:$rootfs_path=ro+wh:/var/smalltalk=rw none mnt

This means that the container’s private root file system has a /pharo top-level directory that maps to a persistent 500 MB virtual disk. The remaining portion of the external file system (from /var/vcap/data/packages/rootfs_lucid64/1) is read-only and the internal file system (from /var/vcap/data/warden/depot/*/tmp/rootfs/) is read/write.

New Pharo Buildpack

Now rather than simply creating a startup script, the buildpack needs to copy things to the persistent file system and use that when running the application. Following is the new compile script:

#!/usr/bin/env bash
BUILD_PACK_DIR=$(dirname $(dirname $0))
rm -rf /pharo/* 2>&1
cp `ls *.image` /pharo/pharo.image
cp `ls *.changes` /pharo/pharo.changes 2>&1
rm *.image *.changes 2>&1
if [ ! -e ]; then
cp -r * /pharo
chmod -R 777 /pharo/*
rm -rf *
cat > << EOF
#!/usr/bin/env bash
umask 0000
ln -s /pharo ./pharo
cd pharo
/opt/pharo/pharo -vm-display-null -vm-sound-null pharo.image \$PORT
EOF
chmod +x

Client Changes

On the client (before pushing the application to the cloud), there are some changes needed. I started with the Pier 3.0 download (from this page). The pre-built image is in Pharo 1.3 and does not include Smalltalk code for accessing the OS environment variables (at least not that I found easily). Thus, you can see above that the startup script is modified to add $PORT to the command line when starting Pharo. With that change, Pier can be started on the supplied port with this script (which might work for any Seaside application):

| manager adaptor port |
port := (SmalltalkImage current argumentAt: 1) asNumber.
manager := WAServerManager default.
adaptor := manager adaptors first.
adaptor port == port ifFalse: [
 manager stopAll.
 adaptor port: port.
 manager startAll.
].

This runs Pier on Cloud Foundry on a persistent file system but Pier needs to save the image regularly. It would also be desirable to save the image when sent a SIGTERM (perhaps with something like Chaff). One can save the image from the /status page, though since one can also quit the image from that page it probably should have some security!

In any case, we have demonstrated that it is possible to provide a persistent file store for a Pharo application on a private Cloud Foundry server.

If you want to just look at the code in a GemStone/S image, and maybe try out a simple Smalltalk expression, you can spin up a Heroku dyno running GemStone/S. See the buildpack for instructions on how to run Webtools. Webtools is itself still quite rough, but if you want to contribute or open an issue, feel free.

Of course, this isn’t very useful as a database or application server because the dyno has an ephemeral file system and can be restarted any time. But it does demonstrate that I’ve figured out how to create a Heroku buildpack, and that was my primary goal!


