Update: A video of this post is available here.

After creating a Cloud Foundry Micro on an Amazon EC2 instance (described here), I decided to make it available as a public AMI so that others could try it out (especially since a Micro is not otherwise available unless you build it yourself using the Altoros Vagrant setup or Nise BOSH).

To use this you need to sign up for an Amazon AWS account.

Start a Cloud Foundry Instance

  • Once you have an account, launch ami-98c956a8 (currently available in us-west-2; add a comment if you want it available elsewhere).
  • Confirm that the manifest reads “366179850620/Cloud Foundry Micro” and click Continue.
  • Change the instance type to m1.small and click Continue.
  • In the ‘User Data:’ field you may enter any of the following optional customizations, each on its own line (see the sample after this list), and click Continue.
    • password: mySecret (AMI rules prohibit default passwords so if you don’t provide your own we will generate a random one);
    • domain: cloud.example.com (if you don’t provide a domain, we will assign <public-IP>.xip.io as the domain); and
    • debug: true (adds some debugging information to /var/log/boot.log).
  • Review the ‘Storage Device Configuration’ (an 8 GB root volume and an ephemeral instance store) and click Continue.
  • You may give a ‘Name’ tag to your EC2 instance, say CF Demo, and click Continue.
  • You may select or create a key pair to be used to log in to your server (optional, but useful), and click Continue.
  • Select or create a Security Group with at least HTTP access and click Continue.
    • ICMP – Echo Request (optional, to allow your server to respond to ping);
    • TCP – SSH (optional, to allow you to log on to your server using a private key); and
    • TCP – HTTP (required, to interact with Cloud Foundry and the applications you push to the server).
  • Review the configuration information and click Launch.
  • Click View your instances and note the public IP address on the Instances page.
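
For reference, a User Data block that uses all three options from the list above might look like this (the values are placeholders; substitute your own):

password: mySecret
domain: cloud.example.com
debug: true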

Identify the Domain

To use the server you need to know its domain name.

  • If you provided a domain in the User Data above, then you need to create a record set in your domain name server to point your domain and all subdomains (using the ‘*’ wildcard match) to the indicated address; or
  • If you did not provide a domain, then your domain is <public-IP>.xip.io (a DNS that maps all requests to the given IP).
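
Either way, a quick way to confirm that wildcard DNS is in place is to resolve an arbitrary subdomain from your client machine (substitute your own domain or IP):

dig +short anything.cloud.example.com    # should return your instance's public IP
dig +short api.<public-IP>.xip.io        # the xip.io wildcard simply echoes the embedded IP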

Log on to the Server (Optional)

  • Identify the path to the private key associated with the key pair you selected or created when you created the EC2 instance.
  • From a command shell (on Mac, Linux, or Unix), or an SSH client on Windows (such as PuTTY), connect to the server. E.g.,

ssh -i /path/to/my/private/key.pem ubuntu@domain

  • Once connected you can explore the server.
sudo /var/vcap/bosh/bin/monit summary # check Cloud Foundry status (all running except cloud_controller_jobs)
tail /var/log/boot.log # check here if things don't seem to start properly
cat ~/domain # show the configured domain (from User Data or public IP)
cat ~/password # show the configured password (from User Data or random generation)
cd /var/vcap/data/sys; sudo chmod +rx log; cd log; ll # list of log file directories
  • You can execute a single command on the server using ssh:
ssh -i /path/to/my/private/key.pem ubuntu@hostname_or_domain cat password

Use the Server

To use the server you can refer to the cf command line reference. For example, on your local machine create and set up the environment:

mkdir ~/cloud ~/cloud/ruby; cd ~/cloud/ruby
sudo gem install bundle sinatra cf

Create three files using your favorite text editor:

Gemfile:

source 'https://rubygems.org'
ruby '1.9.3'
gem 'sinatra'

env.rb:

require 'rubygems'
require 'sinatra'
configure do
    disable :protection
end
get '/' do
    host = ENV['VCAP_APP_HOST']
    port = ENV['VCAP_APP_PORT']
    "<h1>Hello World!</h1><h2> I am in the Cloud! via: #{host}:#{port}</h2>"
end
get '/env' do
    res = ''
    ENV.each do |k, v|
        res << "#{k}: #{v}<br/>"
    end
    res
end

config.ru:

require './env.rb'
run Sinatra::Application

To create a ‘Gemfile.lock’ from the ‘Gemfile’ run the following command:

bundle

I can test the application by running the following command:

ruby env.rb

When it tells me that Sinatra has taken the stage I enter http://localhost:4567/ and http://localhost:4567/env in a web browser.
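
If you prefer the command line, the same two local checks can be made with curl:

curl http://localhost:4567/
curl http://localhost:4567/env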

Then I can use ‘cf’ to set my target, login, do some configuration, and push my application to the cloud (replacing <my-ip> and mySecret with your server’s public IP and password):

cf target http://api.<my-ip>.xip.io
cf login --password mySecret admin
# okay to ignore CFoundry::InvalidRelation error in next command
# (see https://github.com/cloudfoundry/cf/issues/9)
cf create-space development 
cf target --space development
cf map-domain --space development <my-ip>.xip.io
cf push

If the push is successful, it will show the URL at which you can see the application. When you are done you can stop and/or terminate your EC2 instance.

Recently we went through the process of installing a micro Cloud Foundry on a local virtual machine. We are now interested in doing the same on an Amazon EC2 instance. As far as I have been able to find, the existing instructions for using AWS set up a system with many VMs. In this post we look at a “micro” (or single-machine) Cloud Foundry setup.

To do this you need to sign up for an Amazon AWS account. Next, you need to decide where to build your Cloud Foundry instance. Amazon has data centers in eight regions, and you can pick based on geography (close to you has less network latency) and price (some are more expensive). I am close to the US (West) Oregon Region (us-west-2) and it is among the least expensive.

Next you select a base operating system for your machine. Cloud Foundry recommends 64-bit Ubuntu 10.04 LTS, so go to Ubuntu’s Amazon EC2 AMI Locator and enter ’64 lucid ebs’ in the search area (since we are going to make changes to the setup we want to be on a persistent store, hence the EBS selection). When the search list is narrowed down to one for each region, click on the link for the region you want.

This takes us to the EC2 Management Console (perhaps with a login) where you can review information about the selected AMI and click the Continue button.

For the Instance Details, change the Instance Type from ‘T1 Micro’ to ‘M1 Small’ or ‘M1 Medium’ and click Continue.

Next, give ‘CF Micro’ as ‘User Data’ and click Continue.

Do not make any changes to the Storage Device Configuration; simply click Continue.

For the Tags, give ‘CF Micro’ as the Name and click Continue.

To interact securely with the instance you need a key pair. The Wizard prompts for a name and you may enter anything (such as ‘cfMicro’) and then click ‘Create and Download your Key Pair’.

The default Security Group does not allow any outside access to the instance. Create a new Security Group, named ‘CF Micro’ with a description of ‘ping, ssh, http’, add the appropriate rules, and click Continue.

Next, click the Launch button to start the instance.

When informed that the instance is starting, click Close.

This takes us to the list of Instances on the Management Console.

When we built a micro Cloud Foundry on a local virtual machine, the IP address was assigned by Fusion and was the same from “inside” and “outside” the machine. When running an EC2 instance on AWS, the machine is behind a firewall and on an internal (private) network. While our virtual machine can be reached via a public IP address, the machine itself is configured with a different, private IP address.
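
You can see both addresses from inside the instance by querying the EC2 metadata service (the same endpoint the install script uses later in this post):

wget -qO- http://instance-data/latest/meta-data/public-ipv4 && echo
wget -qO- http://instance-data/latest/meta-data/local-ipv4 && echo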

Optional: If we stop and start the machine, the default behavior is that we are likely to get a different public address. In order to have a stable IP address for our instance, we can allocate an Elastic IP and associate it with the running instance. (If you are only doing this once and will throw away the system, then you can skip this step.) From the EC2 Management Console, click Elastic IPs in the navigation pane on the left and then click the Allocate New Address button (instructions here).

Optional (continued): Select the new address and click the Associate Address button. In the dialog box select the running instance and click the Yes, Associate button.
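
Optional (continued): if you prefer the command line, roughly equivalent AWS CLI calls look like the following (a sketch; it assumes the aws tool is installed and configured, the instance ID and IP are placeholders, and the exact options differ between EC2-Classic and VPC accounts):

aws ec2 allocate-address
aws ec2 associate-address --instance-id i-12345678 --public-ip 54.213.201.105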

Whether you have a stable IP or not, you are now almost ready to log on to your new server. Click on Instances in the navigation pane on the left, select the CF Micro instance, right-click, and select the ‘Connect’ menu command.

This gives us a window with instructions on how to connect to the instance. I prefer to use SSH from Terminal.app on my MacBook Pro, so look at the instructions for the standalone SSH client.

Before you can use the ssh command line provided in the Connect dialog, you need to do a little bit of setup on your local machine. Create a working directory and copy the private key downloaded earlier. Then connect to the new server (your IP address will be different).

mkdir ~/cloud/cfMicro
cd ~/cloud/cfMicro
mv ~/Downloads/cfMicro.pem .
chmod 400 cfMicro.pem 
ssh -i cfMicro.pem ubuntu@54.213.201.105

Once logged in to the server, you can install Cloud Foundry using Iwasaki Yudai’s cf_nise_installer.

export IPV4=`wget -qO- http://instance-data/latest/meta-data/public-ipv4`
export NISE_DOMAIN=$IPV4.xip.io
export CF_RELEASE_BRANCH=release-candidate
bash < <(curl -s -k -B https://raw.github.com/yudai/cf_nise_installer/${INSTALLER_BRANCH:-master}/local/bootstrap.sh)

When this finishes, you should restart your server.

sudo shutdown -r now

After a minute or so, log in to the server again using the ssh command above and start your Cloud Foundry.

(cd ~/cf_nise_installer; ./local/start_processes.sh)

Once the server is started, you can logout from the server (or open a second session on your client) and create a new application to push to your cloud. I suggest that you follow my earlier example with the following manifest.yml (change the domain to reference your server’s IP address):

---
applications:
- name: env
  memory: 256M
  instances: 1
  host: env
  domain: 54.213.204.16.xip.io
  path: .
  command: 'ruby env.rb'

Then use the Cloud Foundry command line tools to configure things and push your application (use your own IP address instead of the one shown here!).

cf target http://api.54.213.204.16.xip.io
cf login --password c1oudc0w admin
# okay to ignore CFoundry::InvalidRelation error in next command
# (see https://github.com/cloudfoundry/cf/issues/9)
cf create-space development 
cf target --space development
cf map-domain --space development 54.213.204.16.xip.io
cf push

When this finishes, you should be able to open a web browser on something like (use your own IP address) http://env.54.213.204.16.xip.io/env and see the application. Congratulations!
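
A quick command-line check of the same thing (again, substitute your own IP address):

curl http://env.54.213.204.16.xip.io/env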

One of the challenges of providing Smalltalk on Cloud Foundry (or any cloud hosting system) is that the applications are typically given an ephemeral file system and isolated from read/write access to any persistent disk. This is, of course, for good reasons, but makes it difficult to use Smalltalk applications (like Pier on Pharo) that rely on image-based persistence. We recently described getting a Pharo application (in that case AIDAweb) running on Cloud Foundry 2, but each time you stop and start an application instance you would lose all the saved data.

This post describes a way to modify a private Cloud Foundry system to provide access to a persistent file system from an application instance. Note that this is a proof-of-concept and demonstrates that it is possible to work around the carefully-designed isolation in Cloud Foundry. We do so by providing every Pharo application the same shared space that is completely replaced whenever you upload a new application. We also give all files and directories read/write/execute permission so that subsequent launches of the application (which Cloud Foundry does with a new user and group) will have access.

Create a Virtual Disk

The first step is to create a fixed-size (~500 MB) virtual disk (based on ideas here) that can be used by the application instance for persistent file storage. The following steps, done as root on the Cloud Foundry server, give us the needed disk:

cd /var
touch st_virtual_disk.ext3
dd if=/dev/zero of=/var/st_virtual_disk.ext3 bs=550000000 count=1
mkfs.ext3 /var/st_virtual_disk.ext3
mkdir /var/smalltalk
mount -o loop,rw,usrquota,grpquota /var/st_virtual_disk.ext3 /var/smalltalk/
mkdir /var/smalltalk/pharo/
chmod 777 /var/smalltalk/pharo/
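
To confirm the disk is mounted, and, optionally, to remount it automatically after a reboot (the fstab line is my addition, not part of the original steps):

df -h /var/smalltalk   # should show a filesystem of roughly 500 MB
echo '/var/st_virtual_disk.ext3 /var/smalltalk ext3 loop,rw,usrquota,grpquota 0 0' | sudo tee -a /etc/fstab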

Making the Disk Visible in the Warden Container

Cloud Foundry’s architecture includes a Warden that manages a container, with its own private root file system, for each application instance. On Ubuntu 10.04 LTS this is implemented using aufs, and the mount action is (at the moment) found in /var/vcap/data/packages/warden/29/warden/root/linux/skeleton/lib/common.sh line 60 (with :/var/smalltalk=rw added):

mount -n -t aufs \
 -o br:tmp/rootfs=rw:$rootfs_path=ro+wh:/var/smalltalk=rw none mnt

This means that the container’s private root file system has a /pharo top-level directory that maps to a persistent 500 MB virtual disk. The remaining portion of the external file system (from /var/vcap/data/packages/rootfs_lucid64/1) is read-only and the internal file system (from /var/vcap/data/warden/depot/*/tmp/rootfs/) is read/write.
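
A quick sanity check on the host, after at least one application instance has started, is to look for the aufs mounts and for the directory that will appear as /pharo inside the container:

grep aufs /proc/mounts       # one aufs mount per running warden container
ls -ld /var/smalltalk/pharo  # backs the container's /pharo directory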

New Pharo Buildpack

Now rather than simply creating a startup script, the buildpack needs to copy things to the persistent file system and use that when running the application. Following is the new compile script:

#!/usr/bin/env bash
#
BUILD_DIR=$1
CACHE_DIR=$2
BUILD_PACK_DIR=$(dirname $(dirname $0))
#
cd $BUILD_DIR
rm -rf /pharo/* 2>&1
cp `ls *.image` /pharo/pharo.image
cp `ls *.changes` /pharo/pharo.changes 2>&1
rm *.image *.changes 2>&1
if [ ! -e startup.st ]; then
 touch startup.st
fi
cp -r * /pharo
chmod -R 777 /pharo/*
rm -rf *
#
cat > startup.sh << EOF
#!/usr/bin/env bash
#
umask 0000
ln -s /pharo ./pharo
cd pharo
/opt/pharo/pharo -vm-display-null -vm-sound-null pharo.image startup.st \$PORT
EOF
chmod +x startup.sh

Client Changes

On the client (before pushing the application to the cloud), there are some changes needed. I started with the Pier 3.0 download (from this page). The pre-built image is in Pharo 1.3 and does not include Smalltalk code for accessing the OS environment variables (at least not that I found easily). Thus, you can see above that the startup script is modified to add $PORT to the command line when starting Pharo. With that change, Pier can be started on the supplied port with this startup.st script (which might work for any Seaside application):

| manager adaptor port |
port := (SmalltalkImage current argumentAt: 1) asNumber.
manager := WAServerManager default.
adaptor := manager adaptors first.
adaptor port == port ifFalse: [
 manager stopAll.
 adaptor port: port.
 manager startAll.
].

This runs Pier on Cloud Foundry on a persistent file system but Pier needs to save the image regularly. It would also be desirable to save the image when sent a SIGTERM (perhaps with something like Chaff). One can save the image from the /status page, though since one can also quit the image from that page it probably should have some security!

In any case, we have demonstrated that it is possible to provide a persistent file store for a Pharo application on a private Cloud Foundry server.

If you want to just look at the code in a GemStone/S image, and maybe try out a simple Smalltalk expression, you can spin up a Heroku dyno running GemStone/S. See the buildpack for instructions on how to run Webtools. Webtools is itself still quite rough, but if you want to contribute or open an issue, feel free.

Of course, this isn’t very useful as a database or application server because the dyno has an ephemeral file system and can be restarted any time. But it does demonstrate that I’ve figured out how to create a Heroku buildpack, and that was my primary goal!

We have previously demonstrated adding Pharo to Cloud Foundry. Since that time Cloud Foundry has been substantially revised and the process for adding a runtime/framework has changed to be based on a buildpack model. In general, this makes things easier but there are some complications.

On the Server

To deploy a Pharo application to Cloud Foundry we log in to the private Cloud Foundry instance created earlier. So that I can set the shell title and not have it replaced I typically change the prompt:

export PS1='\[\e[0;35m\]\h\[\e[0;34m\] \w\[\e[00m\]$ '

By default, Cloud Foundry is set up to run 64-bit applications. In order to run the 32-bit version of Pharo we need to add some libraries:

sudo apt-get install -y ia32-libs

As discussed in the architecture documentation, each application runs in a container with a private root filesystem. Thus, the 32-bit libraries need to be added to the shared read-only portion. This should be done using the general Cloud Foundry install scripts, but as a first attempt I’m just doing it manually:

cd /var/vcap/data/packages/rootfs_lucid64/0.1-dev
sudo rsync -r -l -D -g -o -p -t /lib32 . # 12 MB
sudo rsync -r -l -D -g -o -p -t /usr/lib32 ./usr/ # 153 MB
sudo cp /lib32/ld-linux.so.2 ./lib/
cd usr/lib
sudo rsync -r -l -D -g -o -p -t /usr/lib/libv4l* .
sudo rsync -r -l -D -g -o -p -t /usr/lib/gio .
cd gtk-2.0/
sudo rsync -r -l -D -g -o -p -t /usr/lib/gtk-2.0/i* .
cd 2.10.0/
sudo ln -s ../../../lib32/gtk-2.0/2.10.0 i486-pc-linux-gnu
sudo ln -s ../../../lib32/gtk-2.0/2.10.0 i686-pc-linux-gnu

I’m not sure that all of the above is necessary, but it seems to be sufficient to run Pharo as a runtime in a container. Next, we install Pharo and make it visible (with the same recognition that this could be done better using the install scripts):

sudo mkdir /opt/pharo
cd /opt/pharo
sudo chmod 777 .
curl http://files.pharo.org/vm/pharo/linux/Pharo-VM-linux-stable.zip > pharo.zip
sudo unzip pharo.zip; rm pharo.zip
sudo chmod 755 .
cd /var/vcap/data/packages/rootfs_lucid64/0.1-dev/opt
sudo rsync -r -l -D -g -o -p -t /opt/pharo .

Now we get to the part of making a Pharo buildpack. Eventually this should be in a Git repository but for now I’ll just add it directly to my private Cloud Foundry instance so that it looks like a built-in buildpack.

cd /var/vcap/packages/dea_next/buildpacks/vendor
mkdir pharo pharo/bin; cd pharo; git init
cat > README.md << EOF
Cloud Foundry Buildpack for Pharo Smalltalk
==============
EOF

The actual buildpack consists of three files that are called by Cloud Foundry, all in the bin directory. First is bin/detect (use an editor to create these files):

#!/usr/bin/env bash
#
if [ -f $1/*.image ]; then
 echo "pharo" && exit 0
else
 echo "no" && exit 1
fi

This will confirm that the Pharo buildpack can handle an application that contains a .image file. Next we create bin/compile to add a startup script to the application:

#!/usr/bin/env bash
#
BUILD_DIR=$1
CACHE_DIR=$2
BUILD_PACK_DIR=$(dirname $(dirname $0))
#
if [ ! -d "$BUILD_DIR" ]; then
 mkdir -p "$BUILD_DIR"
fi
#
if [ ! -d "$CACHE_DIR" ]; then
 mkdir -p "$CACHE_DIR"
fi
#
cat > "$BUILD_DIR/startup.sh" << EOF
#!/usr/bin/env bash
IMAGE=\`ls *.image\`
STARTUP=\`ls | grep startup.st\`
/opt/pharo/pharo -vm-display-null -vm-sound-null \$IMAGE \$STARTUP
EOF
chmod +x "$BUILD_DIR/startup.sh"

Third, we create bin/release to return a YML file with startup information:

#!/usr/bin/env bash
#
cat <<EOF
---
default_process_types:
 web: ./startup.sh
EOF

Finally, we set these three files to be executable:

chmod +x bin/*

Now that we have modified our private Cloud Foundry instance to support Pharo, we start it up:

cd ~/cf_nise_installer/; ./local/start_processes.sh

It seems that the startup might need a bit of extra time. To check on the status do the following:

sudo /var/vcap/bosh/bin/monit summary

On the Client

In our previous approach to adding Pharo to Cloud Foundry, we went to some effort to avoid copying the image, changes, and sources files over the network from the client to the server (because they are typically quite large). That earlier approach, while elegant, is a bit less obvious to application developers who just want to deploy their application. In this iteration I’ve taken the approach of doing the simplest thing and waiting till it causes a problem to “improve” things. Thus, we expect you to provide the image file and we will just run it. The primary thing that you have to do is modify your code to provide a web server on the port defined in the $PORT environment variable (which can be different each time the application is launched). To facilitate this you can include a ‘startup.st’ script in your directory that will be run each time the application starts. This allows you to change the listening port each time. Following is a sample ‘startup.st‘ script that we use with an AIDAweb one-click image:

| methodSource port |
methodSource := 'introductionElement
 | element |
 element := WebElement new.
 element addText: self observee introduction.
 element addText: ''<p>Listening on port: '' , 
 session parent site port printString , ''</p>''.
 ^element'.
Author fullName: 'CloudFoundry'. "Developer's name"
WebDemoApp compile: methodSource.
port := OSProcess thisOSProcess environment at: #'PORT' ifAbsent: ['8888'].
AIDASite default
 stop;
 port: port asNumber;
 start.

At this point we can connect to our private cloud and push the application:

cf target api.172.16.217.185.xip.io # use the IP address of your private cloud
cf login --password micr0@micr0 micro@vcap.me
cf create-space development 
cf target --space development
cf map-domain mycloud.local
cf push
# Name> myapp
# [accept defaults]
# Save configuration?> y

Now, make sure that ‘myapp.mycloud.local’ is in your /etc/hosts file and points to your private cloud. Then you can open a web browser on http://myapp.mycloud.local and you should see your application running!

As you think about running a Pharo application in the cloud, remember that the file system is ephemeral. That is, your application can be restarted any time and changes made to the image will not be saved. See http://docs.cloudfoundry.com/docs/using/app-arch/ for a good discussion of some architectural issues.

If you are following this blog you are aware that I’ve been investigating options for hosting Smalltalk in the cloud. One very successful hosting solution for a number of languages is Heroku. Not only do they support a number of languages built-in, but they also allow anyone to create a buildpack to support another language or framework. So, what would it take to add Smalltalk to Heroku? First, we need to acknowledge that Will Leinweber has a buildpack for Redline Smalltalk. Since Redline Smalltalk is based on the Java VM, this is more about running Java on Heroku than Smalltalk (which is not to minimize the accomplishment, just to note that getting a Smalltalk VM running is not quite the same).

The next thing to note is that running an application in the cloud might be different from running it on your own server. The model most hosting providers follow is that they package your application into a stand-alone directory tree that is saved on their server. When you ask for one or more instance(s) to run, they copy the directory tree onto an available machine, set some environment variables (including a port on which to listen for HTTP requests), execute an application launch command, and then route requests to the port. Horizontal scaling is accomplished by starting additional instances and routing requests to them in some fashion. If an application dies, then the hosting provider cleans up the directory and repeats the process with a new directory. Thus, each instance is isolated from other instances (even of the same application), and the file system is “ephemeral” and exists only while the instance is running. From a Smalltalk perspective, this means that typical image-based persistence is not trivial. I have some ideas on how this might be addressed, but need to get Smalltalk into the environment first.
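
Conceptually, each instance launch boils down to something like the following (a rough sketch with hypothetical paths, not any provider's actual code):

# copy the packaged application into a fresh instance directory
cp -r /stored/packages/myapp /instances/myapp-1 && cd /instances/myapp-1
export PORT=61001        # the port the router will forward HTTP requests to
./startup.sh             # the application's declared start command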

As mentioned above, you can create a third-party buildpack that packages and starts your application. The buildpack is essentially three bash scripts that (1) report whether it can handle a particular application (“Do I have everything here that I need?”), (2) transform the application into its runtime structure, and (3) tell the framework how to start instances of the application. This is all fairly straightforward and has been done for many languages and frameworks. Note, however, that a “buildpack is responsible for building a complete working runtime environment around the app. This may include language VMs and other runtime dependencies that are needed by the app.” So, to get something like Pharo running on Heroku we need a Cog VM that runs on the Heroku server.
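
The three scripts live in the buildpack's bin directory; their conventional roles are roughly:

bin/detect    # prints a framework name and exits 0 if this buildpack can handle the app
bin/compile   # transforms the uploaded application into its runtime layout
bin/release   # emits YAML describing how to start the application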

To find out more about the Heroku environment I decided to try it out. First, I signed up for a free account at https://id.heroku.com/signup and then installed the tools. At this point I followed the simple instructions to create a “Hello, world” application:

mkdir ~/cloud/heroku
cd ~/cloud/heroku
heroku login
git clone git://github.com/heroku/ruby-sample.git
cd ruby-sample
heroku create
git push heroku master
heroku apps:rename mygreatapp
heroku open

This opened a web browser on http://mygreatapp.herokuapp.com/ and the expected page was displayed. The next thing that is rather nice is that you can run a non-web application on the Heroku server and have stdin/stdout/stderr routed back to your client shell:

heroku run bash

This starts a bash shell on the server; just as if you used ssh! Now we can do some things to investigate the server environment (this script represents what came at the end after I tried various things described next):

uname -m -o # x86_64 GNU/Linux
ll /lib/libc.so* # 2.11.1
cat /proc/version # Linux version 3.8.11-ec2 (gcc version 4.4.3)
file /sbin/init # ELF 64-bit LSB, for GNU/Linux 2.6.15
cat /proc/cpuinfo # Intel(R) Xeon(R) 4 CPUs X5550 @ 2.67 GHz 
curl --version # 7.19.7
tar --version # 1.22
zip --version # command not found

This shows us that Heroku is running 64-bit Linux on Xeon processors. Since we have non-root access we can't do much to change these characteristics. We can, however, try installing Pharo and see what happens. My first attempt was to download a one-click, but that was a zip file that couldn't be unzipped. Next, I got a recent Cog VM that came as a .tgz file. I uncompressed this with tar and tried to run it. This gave the error "/usr/bin/ldd didn't produce any output and the system is 64 bit. You may need to (re)install the 32-bit libraries." So we can't run the 32-bit application, at least not without some work. Next I tried a 64-bit Squeak VM and uncompressed that. With this we got further, but now have the error "squeakvm64: /lib/libc.so.6: version `GLIBC_2.14' not found (required by squeakvm64)." Note above that the Heroku server has glibc 2.11 (from 2009), so executables compiled against later libraries will not run.

I guess that the correct thing to do is to build the needed binaries on the Heroku server (as described here). Presumably this would guarantee that everything works together. Before trying to build Cog I might look at GNU Smalltalk.

Before finishing up we need to stop our Heroku application so we don’t use up resources. Exit from the bash shell on the server (if it hasn’t timed you out!), and then destroy your app:

heroku apps:destroy mygreatapp

In any case, I’ve learned enough for today and will try some other things next week (probably non-Heroku things!).

A lot has happened with Cloud Foundry since I last blogged about creating a micro cloud (almost 18 months ago!). The Micro “isn’t really maintained anymore” so instead I’m using the Nise Installer to create my private cloud. I used an Ubuntu 10.04 LTS image on a Mac running Fusion and used the easy install to get to a console. Once logged in I updated the keyboard to a Dell 101 (up and down arrows don’t work otherwise), installed a few tools, and then inquired to discover the IP address:

sudo dpkg-reconfigure console-setup
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install -y vim openssh-server curl
ifconfig eth0 | grep inet

The last line will show you the address of the machine. Add an entry to your client’s /etc/hosts file similar to the following (using the IP address you got from the server above):

172.16.217.185  mycloud myapp.mycloud.local

Then from a Terminal (or other shell) on the client enter ‘ssh mycloud’ to get a command prompt (this is easier than using the console).

The next step is to install Cloud Foundry:

export CF_RELEASE_BRANCH=release-candidate
bash < <(curl -s -k -B https://raw.github.com/yudai/cf_nise_installer/master/local/bootstrap.sh)

This will prompt for your password (a couple times!) and takes a while (a couple hours). When it finishes it will show something like the following:

Done!
RESTART your server!
CF target: 'cf target api.172.16.217.185.xip.io'
CF login : 'cf login --password micr0@micr0 micro@vcap.me'

The address (172.16.217.185) is the IP address for my server; yours will almost certainly be different. Follow the instructions and restart your server. Once the server is started you need to start Cloud Foundry:

cd ~/cf_nise_installer; ./local/start_processes.sh

Within a minute it should list a number of processes as running. Unfortunately, once only a few processes are running the command returns to the shell prompt, implying that all is well. In my experience, not everything has started yet, so I typically repeat the summary command until I see all fourteen processes running:

sudo /var/vcap/bosh/bin/monit summary
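
Rather than re-running the command by hand, something like watch can poll it every few seconds until all fourteen processes show as running:

sudo watch -n 10 /var/vcap/bosh/bin/monit summary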

From your client machine you should navigate to a directory with a sample application (like we created here) and set the target, login, do some setup, and push the application:

cf target api.172.16.217.185.xip.io
cf login --password micr0@micr0 micro@vcap.me
# okay to ignore CFoundry::InvalidRelation error in next command (see https://github.com/cloudfoundry/cf/issues/9)
cf create-space development 
cf target --space development
cf map-domain mycloud.local
cf push

At this point you can accept the defaults (the subdomain is ‘myapp’ and the domain is ‘mycloud.local’) and when the staging is finished you should see 2 applications running. On your client you can navigate to http://myapp.mycloud.local/ and test the application. From your client shell you can try out various commands listed at http://docs.cloudfoundry.com/docs/using/managing-apps/cf/.

Cloud Foundry has been out for a couple years and I’ve been blogging about it for about 18 months (starting here). Cloud Foundry has moved from VMware to Pivotal and continues to undergo rapid development. Although there is a good bit of documentation, it still took me a fair amount of time to get even a basic application running because of changes since my last deployment (and the docs are incomplete). So here is a brief description of how to deploy a trivial Ruby application in the Hosted Developer Environment (I’m on the 60-day free trial period). For the client machine I’m running on a fresh install of Mac OS X version 10.8.4 and using Terminal.

Create and set up the environment:

mkdir ~/cloud ~/cloud/ruby; cd ~/cloud/ruby
sudo gem install bundle sinatra cf

Create three files using your favorite text editor:

Gemfile:

source 'https://rubygems.org'
gem 'sinatra'

env.rb:

require 'rubygems'
require 'sinatra'
configure do
    disable :protection
end
get '/' do
    host = ENV['VCAP_APP_HOST']
    port = ENV['VCAP_APP_PORT']
    "<h1>Hello World!</h1><h2> I am in the Cloud! via: #{host}:#{port}</h2>"
end
get '/env' do
    res = ''
    ENV.each do |k, v|
        res << "#{k}: #{v}<br/>"
    end
    res
end

manifest.yml:

---
applications:
- url: myapp.cfapps.io
  memory: 256M
  name: myapp
  instances: 2
  path: .
  command: 'ruby env.rb'

To create a ‘Gemfile.lock’ from the ‘Gemfile’ run the following command:

bundle

I can test the application by running the following command:

ruby env.rb

When it tells me that Sinatra has taken the stage I enter http://localhost:4567/ and http://localhost:4567/env in a web browser.

Then I can use ‘cf’ to set my target, login, and push my application to the cloud:

cf target api.run.pivotal.io
cf login
cf push

Because we have defined the setup in manifest.yml we don’t have to answer any questions. At this point I can go to http://myapp.cfapps.io and http://myapp.cfapps.io/env to see the application run.

Great news! The GemStone/S team is becoming an independent company after 3 years as part of VMware. The core engineering team, including me, along with Norm Green and Dan Ware, have formed GemTalk Systems.

For more information, visit our company web site at http://gemtalksystems.com.

At GemTalk Systems, we’re the people who built the product. We’re excited to become a company with a dedicated focus on Smalltalk, GemStone/S and allied initiatives. You’ll see an increase in innovation on the product, and customers will see a seamless transition in support.

GemTalk will be at the STIC conference (Phoenix in June). Plan to visit us there!

Our email addresses are changing, too. They are all in the form of first.last@gemtalksystems.com. But the old email addresses will continue to work for a few months.

If you have questions about this exciting transition, feel free to contact me.

A GemStone/S 64 Bit system needs to have various maintenance tasks performed regularly. One of these is managing log files, backups, and transaction logs. I recently put together a basic Linux VM that has GemStone/S 64 Bit installed, but little more (the last link here). One of the non-Smalltalk things that I sometimes struggle with is figuring out which files can be deleted. The following demonstrates some bash scripting that seems to address a few of my questions. It starts by deleting all but the most recent two log files for pcmon, pagemanager, and symbolgem. It then does some garbage collection activity and a full backup. It then deletes all but the most recent two backups. Finally, if there are two backups from the last two days, it deletes all transaction logs more than two days old. This does not do as much error checking as should be done in a serious production environment (such as trying to restore the backup, applying transaction logs, etc., before deleting things), but it demonstrates some of the things that can be done in bash to do date-specific cleanup.

#!/bin/bash
# cleanup log files
cd /opt/gemstone/log
(ls -t *pcmon.log | head -n 2; ls *pcmon.log) | \
    sort | uniq -u | xargs rm
(ls -t *pagemanager.log | head -n 2; ls *pagemanager.log) | \
    sort | uniq -u | xargs rm
(ls -t *symbolgem.log | head -n 2; ls *symbolgem.log) | \
    sort | uniq -u | xargs rm
# do backup
topaz -l -T 50000 << EOF
output push backup.out
errorCount
send SystemRepository markForCollection
send SystemRepository reclaimAll
send SystemRepository startNewLog
run
SystemRepository fullBackupCompressedTo: 
 '/opt/gemstone/backups/backup-' , 
 (DateTime now asSeconds // 60) printString.
%
logout
errorCount
exit
EOF
# cleanup backups
cd /opt/gemstone/backups
(ls -t backup* | head -n 2; ls backup*) | sort | uniq -u | xargs rm
if [ "2" -le "`find . -mtime -2 -name 'backup*' | wc -l`" ]; then 
 cd /opt/gemstone/data
 find . -mtime +2 -name 'tranlog*' | xargs rm
fi
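
To run this nightly, a crontab entry along these lines would work (the script path and log location are assumptions; make sure topaz and the GemStone environment variables are available on cron's PATH):

0 2 * * * /opt/gemstone/bin/cleanup.sh >> /opt/gemstone/log/cleanup.log 2>&1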
