If you are following this blog you are aware that I’ve been investigating options for hosting Smalltalk in the cloud. One very successful hosting solution for a number of languages is Heroku. Not only do they support a number of languages built-in, but they also allow anyone to create a buildpack to support another language or framework. So, what would it take to add Smalltalk to Heroku? First, we need to acknowledge that Will Leinweber has a buildpack for Redline Smalltalk. Since Redline Smalltalk is based on the Java VM, this is more about running Java on Heroku than Smalltalk (which is not to minimize the accomplishment, just to note that getting a Smalltalk VM running is not quite the same).

The next thing to note is that running an application in the cloud might be different from running it on your own server. The model most hosting providers follow is that they package your application into a stand-alone directory tree that is saved on their server. When you ask for one or more instance(s) to run, they copy the directory tree onto an available machine, set some environment variables (including a port on which to listen for HTTP requests), execute an application launch command, and then route requests to the port. Horizontal scaling is accomplished by starting additional instances and routing requests to them in some fashion. If an application dies then the hosting provider cleans up the directory and repeats the process with a new directory. Thus, each instance is isolated from other instances (even of the same application), and the file system is “ephemeral,” existing only while the instance is running. From a Smalltalk perspective, this means that typical image-based persistence is not trivial. I have some ideas on how this might be addressed, but need to get Smalltalk into the environment first.
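As a toy illustration (none of this is any provider’s actual code, and the names are made up), the lifecycle described above might be sketched as:

```shell
# launch_instance: copy the packaged app into a fresh, ephemeral directory,
# assign a port via the environment, run the app's launch command, and then
# discard the directory when the instance dies.
launch_instance() {
  package="$1"
  instance=$(mktemp -d)                 # ephemeral filesystem for this instance
  cp -R "$package/." "$instance"
  PORT=5678                             # provider-chosen port to listen on
  export PORT
  ( cd "$instance" && sh ./start.sh )   # execute the app's launch command
  rm -rf "$instance"                    # instance gone: its files are gone too
}
```

Note that the original package directory is never modified; each instance gets its own disposable copy, which is why image-based persistence doesn’t come for free.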

As mentioned above, you can create a third-party buildpack that packages and starts your application. The buildpack is essentially three bash scripts that (1) report whether it can handle a particular application (“Do I have everything here that I need?”), (2) transform the application into its runtime structure, and (3) tell the framework how to start instances of the application. This is all fairly straightforward and has been done for many languages and frameworks. Note, however, that a “buildpack is responsible for building a complete working runtime environment around the app. This may include language VMs and other runtime dependencies that are needed by the app.” So, to get something like Pharo running on Heroku we need a Cog VM that runs on the Heroku server.
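The three scripts can be sketched for a hypothetical Smalltalk buildpack as follows. Only the detect/compile/release contract and the YAML from release come from Heroku’s buildpack API; the `.image` test and the VM paths are illustrative assumptions:

```shell
# Create the three buildpack scripts in a bin/ directory.
mkdir -p bin

cat > bin/detect <<'EOF'
#!/bin/sh
# (1) "Do I have everything here that I need?" -- succeed (exit 0) only if
# the pushed app directory ($1) contains a Smalltalk image.
if ls "$1"/*.image >/dev/null 2>&1; then
  echo "Smalltalk"
  exit 0
fi
exit 1
EOF

cat > bin/compile <<'EOF'
#!/bin/sh
# (2) Transform the app into its runtime structure: $1 is the build
# directory, $2 a cache directory that survives between builds.
BUILD_DIR="$1"
mkdir -p "$BUILD_DIR/vm"
# ...here we would download and unpack a Cog VM into $BUILD_DIR/vm...
EOF

cat > bin/release <<'EOF'
#!/bin/sh
# (3) Tell the framework how to start instances: emit YAML naming the
# process types and their launch commands ($PORT is expanded at run time).
cat <<'YAML'
---
default_process_types:
  web: vm/squeak app.image --port $PORT
YAML
EOF

chmod +x bin/detect bin/compile bin/release
```

With a real VM in place, something like `heroku create --buildpack <url>` would point an app at the buildpack.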

To find out more about the Heroku environment I decided to try it out. First, I signed up for a free account at https://id.heroku.com/signup and then installed the tools. At this point I followed the simple instructions to create a “Hello, world” application:

mkdir ~/cloud/heroku
cd ~/cloud/heroku
heroku login
git clone git://github.com/heroku/ruby-sample.git
cd ruby-sample
heroku create
git push heroku master
heroku apps:rename mygreatapp
heroku open

This opened a web browser on http://mygreatapp.herokuapp.com/ and the expected page was displayed. The next thing that is rather nice is that you can run a non-web application on the Heroku server and have stdin/stdout/stderr routed back to your client shell:

heroku run bash

This starts a bash shell on the server, just as if you used ssh! Now we can do some things to investigate the server environment (this script summarizes what I settled on after trying the various things described next):

uname -m -o # x86_64 GNU/Linux
ll /lib/libc.so* # 2.11.1
cat /proc/version # Linux version 3.8.11-ec2 (gcc version 4.4.3)
file /sbin/init # ELF 64-bit LSB, for GNU/Linux 2.6.15
cat /proc/cpuinfo # Intel(R) Xeon(R) 4 CPUs X5550 @ 2.67 GHz 
curl --version # 7.19.7
tar --version # 1.22
zip --version # command not found

This shows us that Heroku is running 64-bit Linux on Xeon processors. Since we have non-root access we can’t do much to change these characteristics. We can, however, try installing Pharo and see what happens. My first attempt was to download a one-click, but that was a zip file that couldn’t be unzipped. Next, I got a recent Cog VM that came as a .tgz file. I uncompressed this with tar and tried to run it. This gave the error “/usr/bin/ldd didn’t produce any output and the system is 64 bit. You may need to (re)install the 32-bit libraries.” So we can’t run the 32-bit application, at least not without some work. Next I tried a 64-bit Squeak VM and uncompressed that. With this we got further, but now have the error “squeakvm64: /lib/libc.so.6: version `GLIBC_2.14′ not found (required by squeakvm64).” Note above that the Heroku server has glibc 2.11 (from 2009), so executables compiled against later libraries will not run.
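That GLIBC_2.14 failure can be predicted without running the binary. As a sketch (assuming a Linux box with GNU grep and sort; the function names are mine), compare the newest GLIBC_x.y version tag an executable references with the glibc the system provides:

```shell
# highest_glibc_required: report the newest GLIBC_x.y version tag an ELF
# binary references (grep -a scans the binary as if it were text) -- this
# is the minimum glibc needed to run it.
highest_glibc_required() {
  grep -ao 'GLIBC_[0-9][0-9.]*' "$1" | sort -Vu | tail -n 1
}

# glibc_provided: the version of the C library installed on this system.
glibc_provided() {
  ldd --version | head -n 1 | awk '{print $NF}'
}

# On the Heroku server this would have flagged the Squeak VM, since it
# requires GLIBC_2.14 while the system provides 2.11.1.
```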

I guess that the correct thing to do is to build the needed binaries on the Heroku server (as described here). Presumably this would guarantee that everything works together. Before trying to build Cog I might look at GNU Smalltalk.

Before finishing up we need to stop our Heroku application so we don’t use up resources. Exit from the bash shell on the server (if it hasn’t timed you out!), and then destroy your app:

heroku apps:destroy mygreatapp

In any case, I’ve learned enough for today and will try some other things next week (probably non-Heroku things!).

A lot has happened with Cloud Foundry since I last blogged about creating a micro cloud (almost 18 months ago!). Micro Cloud Foundry “isn’t really maintained anymore” so instead I’m using the Nise Installer to create my private cloud. I used an Ubuntu 10.04 LTS image on a Mac running Fusion and used the easy install to get to a console. Once logged in I updated the keyboard to a Dell 101 (up and down arrows don’t work otherwise), installed a few tools, and then inquired to discover the IP address:

sudo dpkg-reconfigure console-setup
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install -y vim openssh-server curl
ifconfig eth0 | grep inet

The last line will show you the address of the machine. Add an entry to your client’s /etc/hosts file similar to the following, using the IP address you got from the server above:

<ip-address> mycloud myapp.mycloud.local

Then from a Terminal (or other shell) on the client enter ‘ssh mycloud’ to get a command prompt (this is easier than using the console).

The next step is to install Cloud Foundry:

export CF_RELEASE_BRANCH=release-candidate
bash < <(curl -s -k -B https://raw.github.com/yudai/cf_nise_installer/master/local/bootstrap.sh)

This will prompt for your password (a couple of times!) and take a while (a couple of hours). When it finishes it will show something like the following:

RESTART your server!
CF target: 'cf target api.'
CF login : 'cf login --password micr0@micr0 micro@vcap.me'

The address embedded in the ‘cf target’ command is the IP address for my server; yours will almost certainly be different. Follow the instructions and restart your server. Once the server is started you need to start Cloud Foundry:

cd ~/cf_nise_installer; ./local/start_processes.sh

Within a minute it should list a number of processes as running. Unfortunately, as soon as a few processes are running the command returns to the shell prompt, implying that all is well. In my experience, not everything is started, so I typically repeat the summary command until I see all fourteen processes:

sudo /var/vcap/bosh/bin/monit summary
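Rather than rerunning the summary by hand, a small polling loop can wait for the expected count. This is a sketch; the helper name and the injectable summary command are my own, while the monit path and the count of fourteen come from the text above:

```shell
# wait_for_running: poll a monit-style summary until at least the expected
# number of processes report "running".
# $1 = expected count, $2 = command that prints the summary,
# $3 = max attempts (default 60, one per second).
wait_for_running() {
  expected="$1"; summary="$2"; attempts="${3:-60}"; tries=0
  while [ "$tries" -lt "$attempts" ]; do
    running=$($summary | grep -c running || true)
    if [ "$running" -ge "$expected" ]; then
      return 0
    fi
    tries=$((tries + 1))
    sleep 1
  done
  return 1
}

# on the server, for example:
# wait_for_running 14 'sudo /var/vcap/bosh/bin/monit summary'
```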

From your client machine you should navigate to a directory with a sample application (like we created here) and set the target, login, do some setup, and push the application:

cf target api.
cf login --password micr0@micr0 micro@vcap.me
# okay to ignore CFoundry::InvalidRelation error in next command (see https://github.com/cloudfoundry/cf/issues/9)
cf create-space development 
cf target --space development
cf map-domain mycloud.local
cf push

At this point you can accept the defaults (the subdomain is ‘myapp’ and the domain is ‘mycloud.local’) and when the staging is finished you should see 2 applications running. On your client you can navigate to http://myapp.mycloud.local/ and test the application. From your client shell you can try out various commands listed at http://docs.cloudfoundry.com/docs/using/managing-apps/cf/.

Cloud Foundry has been out for a couple years and I’ve been blogging about it for about 18 months (starting here). Cloud Foundry has moved from VMware to Pivotal and continues to undergo rapid development. Although there is a good bit of documentation, it still took me a fair amount of time to get even a basic application running because of changes since my last deployment (and the docs are incomplete). So here is a brief description of how to deploy a trivial Ruby application in the Hosted Developer Environment (I’m on the 60-day free trial period). For the client machine I’m running on a fresh install of Mac OS X version 10.8.4 and using Terminal.

Create and set up the environment:

mkdir ~/cloud ~/cloud/ruby; cd ~/cloud/ruby
sudo gem install bundle sinatra cf

Create three files (‘Gemfile’, ‘env.rb’, and ‘manifest.yml’) using your favorite text editor:


The ‘Gemfile’:

source 'https://rubygems.org'
gem 'sinatra'


The ‘env.rb’:

require 'rubygems'
require 'sinatra'
configure do
    disable :protection
end
get '/' do
    host = ENV['VCAP_APP_HOST']
    port = ENV['VCAP_APP_PORT']
    "<h1>Hello World!</h1><h2> I am in the Cloud! via: #{host}:#{port}</h2>"
end
get '/env' do
    res = ''
    ENV.each do |k, v|
        res << "#{k}: #{v}<br/>"
    end
    res
end


The ‘manifest.yml’:

---
applications:
- url: myapp.cfapps.io
  memory: 256M
  name: myapp
  instances: 2
  path: .
  command: 'ruby env.rb'

To create a ‘Gemfile.lock’ from the ‘Gemfile’ run the following command:

bundle install

I can test the application by running the following command:

ruby env.rb

When it tells me that Sinatra has taken the stage I enter http://localhost:4567/ and http://localhost:4567/env in a web browser.

Then I can use ‘cf’ to set my target, login, and push my application to the cloud:

cf target api.run.pivotal.io
cf login
cf push

Because we have defined the setup in manifest.yml we don’t have to answer any questions. At this point I can go to http://myapp.cfapps.io and http://myapp.cfapps.io/env to see the application run.

Great news! The GemStone/S team is becoming an independent company after 3 years as part of VMware. The core engineering team, including me, along with Norm Green and Dan Ware, has formed GemTalk Systems.

For more information, visit our company web site at http://gemtalksystems.com.

At GemTalk Systems, we’re the people who built the product. We’re excited to become a company with a dedicated focus on Smalltalk, GemStone/S and allied initiatives. You’ll see an increase in innovation on the product, and customers will see a seamless transition in support.

GemTalk will be at the STIC conference (Phoenix in June). Plan to visit us there!

Our email addresses are changing, too. They are all in the form of first.last@gemtalksystems.com. But the old email addresses will continue to work for a few months.

If you have questions about this exciting transition, feel free to contact me.

A GemStone/S 64 Bit system needs to have various maintenance tasks performed regularly. One of these is managing log files, backups, and transaction logs. I recently put together a basic Linux VM that has GemStone/S 64 Bit installed, but little more (the last link here). One of the non-Smalltalk things that I sometimes struggle with is figuring out which files can be deleted. The following demonstrates some bash scripting that seems to address a few of my questions. It starts by deleting all but the most recent two log files for pcmon, pagemanager, and symbolgem. It then does some garbage collection activity and a full backup. It then deletes all but the most recent two backups. Finally, if there are two backups in the last two days, it deletes all transaction logs more than two days old. This does not do as much error checking as should be done in a serious production environment (such as try restoring the backup, apply transaction logs, etc., before deleting things), but it demonstrates some of the things that can be done in bash to do date-specific cleanup.

# cleanup log files
cd /opt/gemstone/log
(ls -t *pcmon.log | head -n 2; ls *pcmon.log) | \
    sort | uniq -u | xargs rm
(ls -t *pagemanager.log | head -n 2; ls *pagemanager.log) | \
    sort | uniq -u | xargs rm
(ls -t *symbolgem.log | head -n 2; ls *symbolgem.log) | \
    sort | uniq -u | xargs rm
# do backup
topaz -l -T 50000 << EOF
set user DataCurator pass swordfish
login
output push backup.out
send SystemRepository markForCollection
send SystemRepository reclaimAll
send SystemRepository startNewLog
run
SystemRepository fullBackupCompressedTo:
 '/opt/gemstone/backups/backup-' ,
 (DateTime now asSeconds // 60) printString.
%
output pop
logout
exit
EOF
# cleanup backups
cd /opt/gemstone/backups
(ls -t backup* | head -n 2; ls backup*) | sort | uniq -u | xargs rm
if [ "2" -le "`find . -mtime -2 -name 'backup*' | wc -l`" ]; then
 cd /opt/gemstone/data
 find . -mtime +2 -name 'tranlog*' | xargs rm
fi
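The “all but the newest two” trick above deserves a word: listing the two newest names (ls -t | head) alongside the complete list, then piping through sort | uniq -u, leaves only the names that appear once, i.e. everything except the two newest, which xargs hands to rm. A self-contained demonstration (file names are made up):

```shell
# Demonstrate the "delete all but the two newest" idiom from the script.
workdir=$(mktemp -d)
cd "$workdir"
for i in 1 2 3 4; do
  # give each file a distinct, increasing modification time
  touch -t "2020010${i}0000" "file$i.log"
done
# The two newest names appear twice in the combined listing; 'uniq -u'
# keeps only single occurrences (the older files), which get removed.
(ls -t file*.log | head -n 2; ls file*.log) | sort | uniq -u | xargs rm
ls file*.log   # file3.log and file4.log remain
```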

While the strict definition of GLASS (GemStone, Linux, Apache, Seaside, and Smalltalk) specifies a particular technology for each layer of the stack, other technologies can be used for the OS (e.g., Mac OS X), the web server (e.g., lighttpd or nginx), and the web framework (e.g., AidaWeb). I’ve been running GemStone/S 64 Bit on Mac OS X for some time and have had a local (laptop) configuration much like the traditional Linux setup with FastCGI routing requests to three gems that receive and process Seaside requests. This worked well at the beginning when Mac OS X included FastCGI (as part of its built-in support for Ruby on Rails). This has changed in the later releases; starting with 10.7 (Lion) and continuing with 10.8 (Mountain Lion), FastCGI is no longer included in the operating system. This has broken my setup.

GLASS uses FastCGI to route requests to long-running server processes (typically a Topaz process) that remain logged in to the GemStone database. The Topaz processes can run on a different host from the web server (Apache or whatever), using the ‘ExternalServer’ configuration (discussed here and here). While FastCGI is sometimes thought to be out-of-date, it is more accurate to say that “There is not much development on FastCGI because it is a very stable protocol / application.” Apache now provides its own FastCGI module, mod_fcgid (here), which is commonly described as a replacement for mod_fastcgi, but it doesn’t support the external server configuration used by GLASS.
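For reference, the external-server arrangement that mod_fastcgi supports (and mod_fcgid lacks) looks roughly like this in the Apache configuration; the paths, ports, and file names here are illustrative, not my actual setup:

```apache
LoadModule fastcgi_module libexec/apache2/mod_fastcgi.so
# Route requests to long-running Topaz gems listening on their own ports,
# rather than having Apache spawn and manage the FastCGI processes itself.
FastCgiExternalServer /var/www/seaside1 -host localhost:9001
FastCgiExternalServer /var/www/seaside2 -host localhost:9002
```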

Getting GLASS to work on Mac OS X 10.8 requires jumping through some hoops, but now that I’ve done it I’ll describe the steps I took (please feel free to suggest alternatives in the comments).

I used the App Store application to find and install Xcode (4.6), then installed the command line tools (from Xcode Preferences->Downloads or from Apple). Next, I installed the latest MacPorts. Then, from a Terminal I entered the following command:

sudo port install mod_fastcgi

This did a full Apache build, but also created the needed modules. Instead of running the MacPorts build of Apache, I copied the new module to the expected directory:

sudo cp /opt/local/apache2/modules/mod_fastcgi.so \

Then I was able to run my usual setup with FastCGI on Mac OS X 10.8!

While there are a variety of ways of interacting with GemStone/S 64 Bit, the most basic way is to use Topaz, the command-line GemStone C Interface (GCI) client that has been part of GemStone since the beginning. While GemStone/S 64 Bit as a server is not available for Microsoft Windows, Topaz is available as a Windows client application that can be used to connect to a Unix/Linux/Mac server. Using Topaz is very helpful in debugging connectivity problems since it removes the variables associated with other client applications (such as GBS, GemTools, Jade, etc.). That is, if you are having trouble connecting with GemTools, we are likely to ask you to try to connect using Topaz.

Fortunately, it is relatively easy to run Topaz on Windows. Open a web browser on http://seaside.gemstone.com/downloads/x86.Windows_NT/ and download GemBuilderC3.1.0.1-x86.Windows_NT.zip (this assumes that you are connecting to a server). Unzip this into a convenient location on your Windows machine (e.g., C:\gemstone\). Open a command shell (Start, All Programs, Accessories, Command Prompt), navigate to the ‘bin’ directory created by unzipping the download, and start ‘topaz’. At this point you can enter the usual Topaz commands and try to login to your server. Following is a copy of the screen when I used Topaz on Windows to login to my database:

| GemStone/S64 Object-Oriented Data Management System                         |
| Copyright (C) VMware, Inc. 1986-2012                                        |
| All rights reserved.                                                        |
| PROGRAM: topaz, Linear GemStone Interface (Remote Session)                  |
| VERSION:, Fri Aug 24 10:24:31 2012                                  |
| BUILD: gss64_3_1_0_x_branch-28937                                           |
| BUILT FOR: Pentium/Windows_NT                                               |
| MODE: 32 bit                                                                |
| RUNNING ON: 1-CPU jfoster-xpvm: Intel CPU, Windows NT 5.1 build 2600 Service|
| Pack 3                                                                      |
| PROCESS ID: 828 DATE: 11/28/2012 11:19:36 Pacific Standard Time             |
neither topazini.tpz nor $HOME\topazini.tpz were found
topaz> set user DataCurator pass swordfish
topaz> set gemstone jfoster0
topaz> set gemnet !tcp@!gemnetobject
topaz> login
[Info]: libssl- loaded
[11/28/2012 11:20:47.435 Pacific Standard Time]
 gci login: currSession 1 rpc gem processId 30480 OOB keep-alive interval 0
successful login
topaz 1> run
100 factorial printString
topaz 1> logout
topaz> exit

The things I typed are in bold. The things you need to change are in italics. Specifically, you need to provide the name of your stone (perhaps it is ‘seaside’), the IP address (or hostname if you have an entry in your hosts file) and port number (or service name if you have an entry in your services file) for your NetLDI that will start your gem.
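The transcript above notes that no topazini.tpz was found; putting the setup commands in that file saves retyping them at every session. A sketch using the sample values from the session (the host and port in the gemnet template are illustrative placeholders, to be replaced as just described):

```
set user DataCurator pass swordfish
set gemstone jfoster0
set gemnet !tcp@myhost#netldi:50377#task!gemnetobject
```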

… are available here. My presentation was Smalltalk in the Cloud and included a demo of pushing an Aida application (using Pharo/Cog) to a public cloud.

Some time ago I described the steps I used to add Perl as a simple runtime and framework to Cloud Foundry. Since then many changes have taken place in Cloud Foundry and those steps no longer work.

I now have something that works as of 30 August 2012. To see the changes go to my github repository and view the changes to  vcap, vcap-staging, stager, and cloud_controller. Note that the recent refactoring of Cloud Foundry means that instead of isolating changes to one repository, we now make changes to four repositories.

Of course, there may be more efficient ways of doing this, but this is what I’ve found to work for me today!

Update 1: It wasn’t much work to update the earlier work on Cog/Aida. It involved similar changes to vcap, vcap-staging, stager, and cloud_controller.

James Robertson and I discussed Cloud Foundry on his podcast.


