A screencast of this blog post is here.

Because Smalltalk was the origin of much of today’s GUI (mouse, overlapping windows, drop-down menus), Smalltalk developers are understandably accustomed to a nice GUI IDE. GemStone/S is an excellent database and Smalltalk execution environment and includes a built-in command-line tool, Topaz, where you can execute Smalltalk code, but has no native GUI. In this blog post we continue a demonstration of GemStone.app on the Macintosh (started here) and show Jade, a GUI-based IDE available on Microsoft Windows.

We launch GemStone.app on the Macintosh, update the version list, download 3.1.0.4, then install and start a GLASS extent (image) that includes Monticello/Metacello tools. When the database is running we start a Topaz session and install Seaside 3.0 and Magritte 3 with the following script:

run
"based on https://code.google.com/p/glassdb/wiki/Seaside30Configuration"
MCPlatformSupport commitOnAlmostOutOfMemoryDuring: [
  ConfigurationOfMetacello project updateProject.
  ConfigurationOfMetacello loadLatestVersion.
  Gofer project load: 'Seaside30' group: 'Seaside-Adaptors-Swazoo'.
].
%
errorCount
! 
commit
run
"based on http://www.iam.unibe.ch/pipermail/smallwiki/2012-February/007188.html"
Gofer it
  squeaksource: 'MetacelloRepository';
  package: 'ConfigurationOfMagritte3';
  load.
%
errorCount
!
commit
run
MCPlatformSupport commitOnAlmostOutOfMemoryDuring: [
  ConfigurationOfMagritte3 project stableVersion load.
].
%
errorCount
!
commit
run
WAGsSwazooAdaptor new start.
%

When Swazoo is running, we can go to http://localhost:8080 and see Seaside running locally. This demonstrates running Smalltalk code in Topaz, the command-line tool. Next we look at a GUI-based IDE that runs on a Microsoft Windows client platform.
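
A quick way to confirm from a shell that the adaptor is up (this assumes the default Swazoo port of 8080):

curl -s -i http://localhost:8080/ | head   # any HTTP response here means Swazoo is listening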

Jade is available as a 14 MB zip download from http://seaside.gemtalksystems.com/jade/. It includes an executable, client libraries (DLLs) for various GemStone/S versions (ranging from 32-bit version 6.1 to the latest 64-bit version), and related files (including source code). Like most GemStone/S client GUI tools, it is built in another Smalltalk (in this case, Dolphin Smalltalk from Object Arts), but unlike those other tools, you can’t see the client Smalltalk (unless you load the Jade source code into your own Dolphin development environment), so we avoid the two-object-space confusion. Jade is intended to take you directly to GemStone/S, without going through Pharo, Squeak, VA, or VW Smalltalk.

Jade is also designed to work with the no-cost version of GemStone/S (unlike the VA/VW-based GBS tools), and performs well on a slow network (unlike GemTools).

When you unzip the download, you have a folder with various items. Jade.exe is the primary executable (containing the Dolphin VM and the image) and it relies on Microsoft’s C Runtime Library. There is a copy of the executable in Jade.jpg for sites where executables are stripped from zip files during the download process (simply rename the suffix to .exe and it becomes executable). Contacts.exe is sometimes used in a training class. The bin directory contains the GCI client libraries and a DLL with various images used in the IDE. You can also see a directory containing the source code for Jade.

Screen Shot 2013-10-01 at 10.56.18 AM

When you launch Jade, you get a login window with a place to select the GemStone/S version (which determines the GCI library we will use) and the other information needed for a login. The Stone box contains two fields: the host/IP of the Stone machine (from the perspective of the Gem machine, so localhost is almost always sufficient) and the name of the Stone. In the screencast mentioned above our stone was named gs64stone1. The Gem box contains a number of fields. Most logins will use an RPC Gem (a Linked Gem is available only on 32-bit Windows) with Guest authentication (if your NetLDI was not started in guest mode (-g), then you will need to provide an OS user and password). An RPC Gem is started by a NetLDI, so we need to identify the Gem machine (in my example the host is vienna and the NetLDI is listening on port 54120) and the command used to start the Gem (except in rare cases this will be ‘gemnetobject’). You provide a GemStone User ID and Password (by default, ‘DataCurator’ and ‘swordfish’), and if you are going to use any Monticello features it would be good to identify the developer’s name (one word, in CamelCase).

Screen Shot 2013-10-01 at 11.01.49 AM
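
For comparison, the same stone name and credentials shown in the login window could be exercised from Topaz on the server. This is only a sketch of a linked session; it assumes the GemStone environment variables are already set in that shell and that your stone is named gs64stone1:

topaz -l <<'EOF'
set gemstone gs64stone1
set username DataCurator
set password swordfish
login
run
3 + 4
%
logout
exit
EOF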

If you get an error on login, we attempt to give as much explanation as possible. Typically, (1) there is no NetLDI on the host/port (see following example), (2) there is no stone with that name, (3) there is a version mismatch, or (4) you have given an unrecognized user ID or password.

Screen Shot 2013-10-01 at 11.13.38 AM

When you have a successful login, you will get a launcher that consists of several tabs. The Transcript serves the traditional (ANSI) Transcript function of showing output sent to the Transcript stream. The second tab shows information about your current session.

Screen Shot 2013-10-01 at 11.19.59 AM

The third tab shows information about the currently logged-in sessions, including where the Gem is located, where the client GCI is located, and whether a Gem is holding the oldest commit record. If you have appropriate security, you can send a SigAbort to a session or even terminate it!

Screen Shot 2013-10-01 at 11.21.07 AM

The final tab is a Workspace. In this tab you can execute, print, and inspect Smalltalk code. You can also use the toolbar or menus to abort or commit and open other tools.

Screen Shot 2013-10-01 at 11.17.23 AM

One of the tools is a User Profile Browser that shows the various users defined in the database.

Screen Shot 2013-10-01 at 11.31.42 AM

Next is a Monticello Repository Browser that shows various repositories, packages, and versions.

Screen Shot 2013-10-01 at 11.32.47 AM

The Monticello Browser includes a tool to browse differences between packages.

Screen Shot 2013-10-01 at 11.33.36 AM

Much of your work will be done in a System Browser. This view shows the four SymbolDictionary instances in my SymbolList. UserGlobals is bold, indicating that it is the ‘home’ or default SymbolDictionary, but I have selected Globals and see a list of class categories, a partial list of classes, and, in the lower section of the screen, a list of the non-class objects in Globals (note things like AllGroups and AllUsers).

Screen Shot 2013-10-01 at 11.41.42 AM

This screen shot shows an example of the Packages view (which requires Monticello) and a method with a breakpoint (indicated by the red rectangle around the method).

Screen Shot 2013-10-01 at 11.43.54 AM

There are other tools, including a debugger, but I’ll leave that for your exploration (and/or another post/screencast).

Have fun and let me know if you have questions or feature requests.

A screencast of this post is here.

GemStone/S 64 Bit has been available for the Macintosh for several years but you generally need to install and configure it much like you would do if you were on a Linux/Unix system. That is, there are a lot of command-line steps and system configurations that are needed. For someone who is used to the Macintosh’s consistent graphical user interface and who is not so familiar with configuring a Unix server, this tends to create a high barrier-to-entry. For a while I’ve been playing with an alternative that makes it easier to install and run GemStone/S 64 Bit on a Macintosh. This blog post will describe how to use this tool.

To start, download GemStoneApp.dmg, a disk image of a Cocoa application. Open the disk image to get to the virtual disk containing the application and a shortcut to your Applications folder. Copy the application to your local machine (typically, to the Applications folder, but it can be anywhere). At this point you can eject the virtual disk and delete the disk image. This will give you a 770 KB bundle that is a Cocoa application built with Xcode 5.

Screen Shot 2013-09-25 at 1.47.03 PM

You can launch the application in several ways. First, open Spotlight (Command + space) and type ‘gemstone’ (without the quotes). The application should be found and you can launch it by clicking on the name or just pressing Return. Second, you can use Launchpad, find the GemStone application, and then click on it. Finally, you can use the Finder to navigate to the folder holding the application (typically ‘/Applications/’) and launch it from there.
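
A fourth option is to launch it from Terminal with the ‘open’ command (a sketch; it assumes the bundle was copied to /Applications and is named GemStone.app):

open /Applications/GemStone.app   # adjust the path if you put the application somewhere else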

If the system presents a dialog reporting that the application could not be launched because it was not downloaded from the Mac App Store, then you have a couple of options. First, you can configure your security to allow applications from “identified developers.” This is done by launching System Preferences, selecting Security & Privacy, unlocking the page if needed (click on the padlock icon at the bottom left if it is closed), and then clicking the radio button for ‘Mac App Store and identified developers’ under the heading ‘Allow applications downloaded from:’. Once this is done, relaunch the application (as described in the previous paragraph) and confirm that you want to run it. Second, you can use the Finder to navigate to the directory holding the application (typically ‘/Applications/’) and then right-click or Control-click on the application and select the ‘Open’ menu item. This may ask you to confirm that you want to open the application. If you confirm once then it will not ask again.

Screen Shot 2013-09-25 at 2.03.42 PM

Once the application launches, make sure that the ‘Setup’ tab is selected. There are some setup steps that are typically done as root (using ‘sudo’ from a shell prompt) that we can do programmatically if we have adequate authorization. These steps are done using a ‘Helper Tool’ that runs as root in the background and performs very limited actions. In our case, we need to set a couple of kernel settings, kern.sysv.shmall and kern.sysv.shmmax, to allow for shared memory (this can be done manually, but is easier with the helper tool). Click the ‘More info’ button if you want to learn more, then click the ‘Authenticate…’ button, give your password, and then click the ‘Install Helper’ button. (You can click the ‘Remove’ button to remove the helper tool.)
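
For reference, the manual equivalent is roughly the following (a sketch with example values sized for a 2 GB shared page cache; the helper tool picks values for you, and settings made this way do not survive a reboot):

sudo sysctl -w kern.sysv.shmmax=2147483648   # largest single shared memory segment, in bytes
sudo sysctl -w kern.sysv.shmall=524288       # total shared memory, counted in 4 KB pages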

Next you need to import the list of available versions by clicking the ‘Update’ button. When this finishes (it should only take a couple of seconds), you will have a list of versions and their release dates. To install a version, click the checkbox next to the version name; or, if you have already downloaded a zip file of the product tree (from here or here), click the ‘Unzip…’ button and select that existing zip file. After the version is unzipped you can start to use it.

Screen Shot 2013-09-25 at 2.23.45 PM

Click on the Databases tab and click on the ‘+’ button to create a database. This will set up a directory structure, create a config file, and copy a base extent. You can change the version (if the database has not yet been used), and edit the name of the stone, the NetLDI name, and the shared page cache size. After you have made any changes you want, you can click the ‘Start’ button to start the stone (and related processes).

Screen Shot 2013-09-25 at 2.30.05 PM

In the Databases tab there are a series of sub-tabs, the first of which is ‘Data Files.’ Here you can select the extent(s) or tranlog(s) to see some information about them.

The second sub-tab gives you some backup and restore options. When the database is not running you can initialize a base extent (a copy of $GEMSTONE/bin/extent0.dbf) or a ‘GLASS’ extent (a copy of $GEMSTONE/bin/extent0.seaside.dbf), and you can restore from a backup. When the database is running you can make a backup.
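
Behind the scenes, initializing an extent is essentially a file copy. A rough manual equivalent, assuming $GEMSTONE is set, the stone is stopped, and the database keeps its extent in a ./data directory:

cp $GEMSTONE/bin/extent0.seaside.dbf data/extent0.dbf   # use extent0.dbf instead for a base extent
chmod +w data/extent0.dbf                               # the shipped extents are read-only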

The third sub-tab is ‘Process Logs’ and this gives you a list of log files associated with the GemStone processes. You can double-click a line (or select the line and click the ‘Open’ button), and the appropriate log file will open using the Macintosh Console application (used to view system logs).

The fourth sub-tab is ‘Archives’ and gives you some information about archived process logs (the text files described above) and the transaction logs. These can typically be deleted without impacting a running system (though you should keep any transaction logs made since your last backup in case you have to do a restore).

The fifth sub-tab is ‘Statistics’ and shows a list of statmonitor files created while the system is running. If you double-click a line (or single-click and click the ‘Open’ button) then the application will launch ‘VSD’, a Visual Statistics Display tool that can be used to analyze the running system.

(If there is a sixth sub-tab, ‘Upgrade’, you should ignore it since it is disabled and does not do anything right now.)

After the Databases tab is a third tab, ‘GS List,’ that shows a list of the current processes. The port number for the NetLDI process might be useful.

(If there is a fourth tab, ‘Logins’, you should ignore it since it is disabled and does not do anything right now.)

Returning to the Databases tab, we have the ability to open a Finder window on the database directory (the button with a folder). Here you can use the Finder to explore the implementation details of the database. There is a ‘GemTools’ button that opens a text field with a Smalltalk expression that can be pasted into a GemTools session definition. Finally, there is a ‘Terminal’ button that can be used to open the Macintosh Terminal application. This opens a new Terminal window (which can be confusing if you already have Terminal running) with the current working directory set, along with various environment variables, including $GEMSTONE and $PATH. From this terminal window you can execute GemStone commands like ‘gslist’ and ‘topaz’.
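
For example, a couple of commands that work in that window because $GEMSTONE and $PATH have been set (log in to Topaz with the usual DataCurator/swordfish credentials):

gslist      # list the stone, NetLDI, and cache processes that are running
topaz -l    # start a linked Topaz session against this installation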

Note that you can run multiple databases at one time and they can be different versions. When you are done you can click the ‘Stop’ button on the ‘Databases’ tab. When there are no databases running you can Quit the application (the application window can be minimized but not closed). Let me know if this is helpful and what further features you would like to see.

My post on a Cloud Foundry AMI used a simple Ruby application. In a comment Andrew Spyker asked about a node.js application. I’ve done JavaScript but not node.js, so I thought I’d give it a try. I followed my earlier instructions to get the CF Micro instance started, and then, beginning with the ‘Use the Server’ section, I did something different.

Using the webserver example here, I created two files in a new directory. The first file, example.js, contained the following:

var http = require('http');
var port = parseInt(process.env.PORT,10);
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(port, '0.0.0.0');
console.log('Server running at http://0.0.0.0:' + port.toString() + '/');

The second file, package.json, contained the following:

{
  "name": "http-server",
  "version": "0.0.1",
  "author": "James Foster <github@jgfoster.net>",
  "description": "webserver demo from http://nodejs.org/",
  "dependencies" : [ ],
  "engines": {
    "node": ">=0.10"
  }
}

(Bear in mind that this is my first node.js application, and I just spent an hour or so poking around on the web to get this far.)

From the command line I’m able to run it by entering the following:

export PORT=1337; node example.js

With this I can open a web browser on http://localhost:1337/ and see the greeting.

To push my trivial application to my EC2 instance, I did the following:

cf target http://api.<my-ip>.xip.io
cf login --password mySecret admin
# okay to ignore CFoundry::InvalidRelation error in next command
# (see https://github.com/cloudfoundry/cf/issues/9)
cf create-space development 
cf target --space development
cf map-domain --space development <my-ip>.xip.io
cf push --command "node example.js"

The interaction included giving the application a name (“hello”), accepting the defaults, and saving the configuration:

Name> hello
Instances> 1
1: 128M
2: 256M
3: 512M
4: 1G
Memory Limit> 256M
Creating hello... OK
1: hello
2: none
Subdomain> hello
1: 54.200.62.218.xip.io
2: none
Domain> 54.200.62.218.xip.io
Binding hello.54.200.62.218.xip.io to hello... OK
Create services for application?> n
Save configuration?> y
Saving to manifest.yml... OK
Uploading hello... OK
Preparing to start hello... OK
-----> Downloaded app package (4.0K)
-----> Resolving engine versions
 Using Node.js version: 0.10.17
 Using npm version: 1.2.30
-----> Fetching Node.js binaries
-----> Vendoring node into slug
-----> Installing dependencies with npm
 npm WARN package.json http-server@0.0.1 No repository field.
 npm WARN package.json http-server@0.0.1 No readme data.
 npm WARN package.json http-server@0.0.1 No repository field.
 npm WARN package.json http-server@0.0.1 No readme data.
 Dependencies installed
-----> Building runtime environment
-----> Uploading droplet (15M)
Checking status of app 'hello'...
 0 of 1 instances running (1 starting)
 0 of 1 instances running (1 starting)
 1 of 1 instances running (1 running)
Push successful! App 'hello' available at http://hello.54.200.62.218.xip.io

When I went to the URL provided, I saw the greeting.
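
The same check from a shell, using the URL reported at the end of the push:

curl http://hello.54.200.62.218.xip.io/
# prints: Hello World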

I recently made an Amazon Machine Image (AMI) available (described here) and described how to use it. In this post I will describe the process of taking an existing AWS Instance with Cloud Foundry Micro (described here) and making it suitable for a public AMI. The primary challenge is that we need to create a boot disk with Cloud Foundry already installed but configured so that it can use a new password, domain, and local IP address. We also need to remove any identifying information before shutting down the VM and taking a snapshot.

I first use the following script (named setup.sh) to install Cloud Foundry using the cf_nise_installer. Note that I am setting a well-known IP, domain, and password; these will be changed later during the boot process. I am also deleting some files that might exist from a previous install attempt. Finally, I am creating a list of files that might contain data that must be modified during the boot process.

#!/bin/bash
#
set -x
cd ~
sudo rm -rf \
  cf_nise_installer \
  /var/vcap/data/sys/log/* \
  domain password files ipv4 \
  2>/dev/null
#
export INSTALLER_URL="https://github.com/yudai/cf_nise_installer.git"
export INSTALLER_BRANCH='master'
export CF_RELEASE_BRANCH='release-candidate'
export NISE_IP_ADDRESS='127.0.0.1'
export NISE_DOMAIN='cloud.mycloud.local'
export NISE_PASSWORD='swordfish'
bash < <(curl -s -k -B https://raw.github.com/yudai/cf_nise_installer/${INSTALLER_BRANCH:-master}/local/bootstrap.sh)
#
list="$(sudo find /vol-a/var/vcap -type f \( -iname '*.yml' -o -iname 'postgres_ctl' \) )"
echo $list > ~/files

We next create a startup script, /home/ubuntu/rc.local, and modify /etc/rc.local to call our new script (if you delete the authorized_keys and fail to add them back, then you can’t log in to your instance!).

#!/bin/bash
# should be called from /etc/rc.local
echo "Starting /home/ubuntu/rc.local script to start Cloud Foundry"
#
# See if 'debug: true' is in user-data
#
X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'debug:' )
X=$( echo $X | cut -d : -f 2 | tr -d ' ' )
if [ "$X" == "true" ]; then
  set -x
fi
#
# Make sure that we have some keys (should be deleted before snapshot)
#
cd /home/ubuntu
if [ ! -f .ssh/authorized_keys ]; then
  curl http://instance-data/latest/meta-data/public-keys/0/openssh-key \
    2>/dev/null > .ssh/authorized_keys
  chown ubuntu:ubuntu .ssh/authorized_keys
  chmod 600 .ssh/authorized_keys
  echo "Updated authorized_keys"
fi
#
# See if 'autoStart: false' is in user-data; if so, done!
#
X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'autoStart:' )
X=$( echo $X | cut -d : -f 2 | tr -d ' ' )
if [ "$X" == "false" ]; then
  echo "user-data includes 'autoStart: false' so we will quit"
  exit 0
fi
#
# Each instance should have a different password (but once set it does not change!)
#
if [ -e password ]; then
  # If we have been through this code before, then use the previous password
  OLD_PASSWORD=$( < password )
  NEW_PASSWORD=$OLD_PASSWORD
else
  # This is the first time through, so we replace the default password
  OLD_PASSWORD='swordfish'
  # User may specify a 'password:' line in the user data
  X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'password:' )
  NEW_PASSWORD=$( echo $X | cut -d : -f 2 | tr -d ' ' )
  if [ "$NEW_PASSWORD" == "" ]; then
    # No user-provided one and no previously created one, so assign a random one
    # an alternative is $( openssl rand -base64 6 )
    NEW_PASSWORD=$( < /dev/urandom tr -dc A-Za-z0-9 | head -c8 )
  fi
  # Save new password so it can be reused next time
  echo "$NEW_PASSWORD" > password
fi
#
# See if 'domain:' is in user-data
#
X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'domain:' )
if [ "$X" != "" ]; then
  NEW_DOMAIN=$( echo $X | cut -d : -f 2 | tr -d ' ' )
else
  # get the public-hostname
  until [ "$NEW_DOMAIN" != "" ]; do
    X=$( curl http://instance-data/latest/meta-data/ 2>/dev/null | grep public-hostname )
    if [ "$X" != "" ]; then
      X=$( curl http://instance-data/latest/meta-data/public-hostname 2>/dev/null )
      if [ "$X" != "" ]; then
        NEW_DOMAIN=$( echo $X | cut -d "." -f 1 | cut -d "-" -f 2- | sed "s/\-/\./g" )
        NEW_DOMAIN="$NEW_DOMAIN.xip.io"
      fi
    fi
    if [ "$NEW_DOMAIN" == "" ]; then
      echo "`date`: public-hostname not yet available from meta-data"
      sleep 1
    fi
  done
fi
if [ -f domain ]; then
  OLD_DOMAIN=$( cat domain )
else
  OLD_DOMAIN='cloud.mycloud.local'
fi
echo $NEW_DOMAIN > domain
#
# Each new instance will have a unique private IP address
#
NEW_IPV4=$( curl http://instance-data/latest/meta-data/local-ipv4 2>/dev/null )
if [ -f ipv4 ]; then
  OLD_IPV4=$( cat ipv4 )
else
  OLD_IPV4="127.0.0.1"
fi
echo $NEW_IPV4 > ipv4
#
# Find all the files that need to be edited
# (takes several seconds, so cache it for faster start next time)
#
if [ -f files ]; then
  list=$( cat files )
else
  list="$(find /var/vcap -type f \( -iname '*.yml' -o -iname 'postgres_ctl' \) )"
  echo $list > files
fi
for f in $list
do
  if [ "$OLD_PASSWORD" != "$NEW_PASSWORD" ]; then
    if grep -q $OLD_PASSWORD $f; then
      sed -i "s/$OLD_PASSWORD/$NEW_PASSWORD/g" $f
      echo "Updated password in $f"
    fi
  fi
  if grep -q $OLD_DOMAIN $f; then
    sed -i "s/$OLD_DOMAIN/$NEW_DOMAIN/g" $f
    echo "Updated domain in $f"
  fi
  if grep -q "$OLD_IPV4" $f; then
    sed -i "s/$OLD_IPV4/$NEW_IPV4/g" $f
    echo "Updated IP in $f"
  fi
done
#
# start Cloud Foundry
(cd /home/ubuntu/cf_nise_installer; ./local/start_processes.sh)
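
The one-line change to /etc/rc.local itself is not shown above; a minimal sketch is to call the new script just before the ‘exit 0’ that Ubuntu ships in that file, sending its output to the log that the ‘debug: true’ user-data option refers to:

/home/ubuntu/rc.local > /var/log/boot.log 2>&1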

Finally, I create a cleanup.sh script to remove identifying information and sanitize the system before shutdown. Note that once authorized_keys are removed from the .ssh directory you can’t log in unless new authorized_keys are installed during the boot process.

#!/bin/bash
#
sudo rm -f /root/.ssh/* /home/*/.ssh/* /var/vcap/monit/monit.log
(cd /var/log; sudo rm -f *.log dmesg* debug messages syslog)
rm -f ~/.viminfo ~/.bash_history
echo '' | sudo tee /var/log/lastlog
history -c
# sudo shutdown now

After running this script I use the AWS tools to stop the instance (this means that the command history is empty). At this point I can use the AWS tools to create an AMI. When the AMI starts it executes /etc/rc.local which calls my new script, /home/ubuntu/rc.local. This script sets the local IPv4 value, along with the domain and password (using configured values if provided in the user-data). When the appropriate configuration info has been updated, then we start Cloud Foundry. Each new instance has its own IP, domain, and password, making it secure and unique.
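
With the AWS command-line tools those two steps look roughly like this (the instance ID is a placeholder; use your own):

aws ec2 stop-instances --instance-ids i-12345678
aws ec2 create-image --instance-id i-12345678 --name "Cloud Foundry Micro"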

If there is something wrong with the boot volume (such as missing authorized_keys so you can’t log in), then you need to start another EC2 instance (a micro is fine), attach the volume, do the fix-up, and release it.

# mount a new volume (in case some surgery is needed)
#
sudo mkdir -m 000 /vol-a
sudo mount /dev/sdf /vol-a
#
# ... do whatever fix-up is needed and unmount the volume
#
sudo umount -d /vol-a
sudo rmdir /vol-a
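
The attach itself is done with the AWS tools before mounting, e.g. (the volume and instance IDs are placeholders):

aws ec2 attach-volume --volume-id vol-12345678 --instance-id i-12345678 --device /dev/sdf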

I have used /etc/rc.local to hook into the boot process. An alternate may be to use crontab with the ‘@reboot’ option.

If you have an instance-store (a disk that exists only while the instance is running), then you might want to set up some swap space on it. The following script will create a 4 GB file to be used as swap space.

# http://serverfault.com/questions/218750/why-dont-ec2-ubuntu-images-have-swap
sudo dd if=/dev/zero of=/mnt/swapfile bs=1M count=4096 &&
sudo chmod 600 /mnt/swapfile &&
sudo mkswap /mnt/swapfile &&
echo /mnt/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab &&
sudo swapon -a
cat /proc/swaps

That summarizes how I created a Cloud Foundry Micro public AMI.

Video of ESUG 2013 Presentation

My presentation “Smalltalk in the Cloud” was recorded and can be found at the link. Unfortunately, the audio was not very strong.

Update: Slides are here.

Update: A video of this post is available here.

After creating a Cloud Foundry Micro on an Amazon EC2 instance (described here), I decided to make it available as a public AMI so that others could try it out (especially since the Micro is not available unless you build it yourself using Altoros vagrant or Nise BOSH).

To use this you need to sign up for an Amazon AWS account.

Start a Cloud Foundry Instance

  • Once you have an account, launch ami-98c956a8 (currently available in us-west-2; add a comment if you want it available elsewhere).
  • Confirm that the manifest reads “366179850620/Cloud Foundry Micro” and click Continue.
  • Change the instance type to m1.small and click Continue.
  • In the ‘User Data:’ field you may enter some optional customizations (each on its own line) and click Continue.
    • password: mySecret (AMI rules prohibit default passwords so if you don’t provide your own we will generate a random one);
    • domain: cloud.example.com (if you don’t provide a domain, we will assign <public-IP>.xip.io as the domain); and
    • debug: true (adds some debugging information to /var/log/boot.log).
  • Review the ‘Storage Device Configuration’ (an 8 GB root volume and an ephemeral instance store) and click Continue.
  • You may give a ‘Name’ tag to your EC2 instance, say CF Demo, and click Continue.
  • You may select or create a key pair to be used to log in to your server (optional, but useful), and click Continue.
  • Select or create a Security Group with at least HTTP access and click Continue.
    • ICMP – Echo Request (optional, to allow your server to respond to ping);
    • TCP – SSH (optional, to allow you to log on to your server using a private key); and
    • TCP – HTTP (required, to interact with Cloud Foundry and the applications you push to the server).
  • Review the configuration information and click Launch.
  • Click ‘View your instances’ on the Instances page to discover the public IP address.
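
For reference, the same launch can be scripted with the AWS command-line tools (a sketch; the key pair and security group names are placeholders, and the user data is optional):

aws ec2 run-instances --region us-west-2 --image-id ami-98c956a8 \
  --instance-type m1.small --key-name myKeyPair --security-groups "CF Demo" \
  --user-data "password: mySecret"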

Identify the Domain

To use the server you need to know its domain name.

  • If you provided a domain in the User Data above, then you need to create a record set in your domain name server to point your domain and all subdomains (using the ‘*’ wildcard match) to the indicated address; or
  • If you did not provide a domain, then your domain is <public-IP>.xip.io (a DNS that maps all requests to the given IP).

Log on to the Server (Optional)

  • Identify the path to the private key associated with the key pair you selected or created when you created the EC2 instance.
  • From a command shell (on Mac, Linux, or Unix), or an SSH client on Windows (such as PuTTY), connect to the server. E.g.,

ssh -i /path/to/my/private/key.pem ubuntu@domain

  • Once connected you can explore the server.
sudo /var/vcap/bosh/bin/monit summary # check Cloud Foundry status (all running except cloud_controller_jobs)
tail /var/log/boot.log # check here if things don't seem to start properly
cat ~/domain # show the configured domain (from User Data or public IP)
cat ~/password # show the configured password (from User Data or random generation)
cd /var/vcap/data/sys; sudo chmod +rx log; cd log; ll # list of log file directories
  • You can execute a single command on the server using ssh:
ssh -i /path/to/my/private/key.pem ubuntu@hostname_or_domain cat password

Use the Server

To use the server you can refer to the cf command line reference. For example, on your local machine create and set up the environment:

mkdir ~/cloud ~/cloud/ruby; cd ~/cloud/ruby
sudo gem install bundle sinatra cf

Create three files using your favorite text editor:

Gemfile:

source 'https://rubygems.org'
ruby '1.9.3'
gem 'sinatra'

env.rb:

require 'rubygems'
require 'sinatra'
configure do
    disable :protection
end
get '/' do
    host = ENV['VCAP_APP_HOST']
    port = ENV['VCAP_APP_PORT']
    "<h1>Hello World!</h1><h2> I am in the Cloud! via: #{host}:#{port}</h2>"
end
get '/env' do
    res = ''
    ENV.each do |k, v|
        res << "#{k}: #{v}<br/>"
    end
    res
end

config.ru:

require './env.rb'
run Sinatra::Application

To create a ‘Gemfile.lock’ from the ‘Gemfile’ run the following command:

bundle

I can test the application by running the following command:

ruby env.rb

When it tells me that Sinatra has taken the stage I enter http://localhost:4567/ and http://localhost:4567/env in a web browser.
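
The two routes can also be checked from another shell (the responses are small HTML fragments):

curl http://localhost:4567/
curl http://localhost:4567/env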

Then I can use ‘cf’ to set my target, login, do some configuration, and push my application to the cloud (replacing <my-ip> and mySecret with your server’s public IP and password):

cf target http://api.<my-ip>.xip.io
cf login --password mySecret admin
# okay to ignore CFoundry::InvalidRelation error in next command
# (see https://github.com/cloudfoundry/cf/issues/9)
cf create-space development 
cf target --space development
cf map-domain --space development <my-ip>.xip.io
cf push

If the push is successful, it will show the URL at which you can see the application. When you are done, you can stop and/or terminate your EC2 instance.

Recently we went through the process of installing a micro Cloud Foundry on a local virtual machine. We are now interested in doing the same on an Amazon EC2 instance. As far as I have been able to find, the existing instructions for using AWS set up a system with many VMs. In this post we look at a “micro” (or single-machine) Cloud Foundry setup.

To do this you need to sign up for an Amazon AWS account. Next, you need to decide where to build your Cloud Foundry instance. Amazon has data centers in eight regions, and you can pick based on geography (close to you has less network latency) and price (some are more expensive). I am close to the US (West) Oregon Region (us-west-2) and it is among the least expensive.

Next you select a base operating system for your machine. Cloud Foundry recommends 64-bit Ubuntu 10.04 LTS, so go to Ubuntu’s Amazon EC2 AMI Locator and enter ’64 lucid ebs’ in the search area (since we are going to make changes to the setup we want to be on a persistent store, hence the EBS selection). When the search list is narrowed down to one for each region, click on the link for the region you want.

Ubuntu Amazon EC2 AMI Locator

This takes us to the EC2 Management Console (perhaps with a login) where you can review information about the selected AMI and click the Continue button.

Screen Shot 2013-08-27 at 10.29.30 AM

For the Instance Details, change the Instance Type from ‘T1 Micro’ to ‘M1 Small’ or ‘M1 Medium’ and click Continue.

Screen Shot 2013-08-27 at 10.33.15 AM

Next, give ‘CF Micro’ as ‘User Data’ and click Continue.

Screen Shot 2013-08-27 at 11.57.38 AM

Do not make any changes to the Storage Device Configuration; simply click Continue.

Screen Shot 2013-08-27 at 12.01.43 PM

For the Tags, give ‘CF Micro’ as the Name and click Continue.

Screen Shot 2013-08-27 at 12.02.20 PM

To interact securely with the instance you need a key pair. The Wizard prompts for a name and you may enter anything (such as ‘cfMicro’) and then click ‘Create and Download your Key Pair’.

Screen Shot 2013-08-27 at 12.04.26 PM

The default Security Group does not allow any outside access to the instance. Create a new Security Group, named ‘CF Micro’ with a description of ‘ping, ssh, http’, add the appropriate rules, and click Continue.

Screen Shot 2013-08-27 at 12.10.57 PM

Next, click the Launch button to start the instance.

Screen Shot 2013-08-27 at 12.14.20 PM

When informed that the instance is starting, click Close.

Screen Shot 2013-08-27 at 12.16.28 PM

This takes us to the list of Instances on the Management Console.

When we built a micro Cloud Foundry on a local virtual machine, the IP address was assigned by Fusion and was the same from “inside” and “outside” the machine. When running an EC2 instance on AWS, the machine is behind a firewall and on an internal (private) network. While our virtual machine can be reached via a public IP address, the machine itself is configured with a different, private IP address.

Optional: If we stop and start the machine, the default behavior is that we are likely to get a different public address. In order to have a stable IP address for our instance, we can allocate an Elastic IP and associate it with the running instance. (If you are only doing this once and will throw away the system, then you can skip this step.) From the EC2 Management Console, click Elastic IPs in the navigation pane on the left and then click the Allocate New Address button (instructions here).

Screen Shot 2013-08-28 at 2.57.14 PM

Optional (continued): Select the new address and click the Associate Address button. In the dialog box select the running instance and click the Yes, Associate button.

Screen Shot 2013-08-28 at 2.59.39 PM

Whether you have a stable IP or not, you are now almost ready to log on to your new server. Click on Instances in the navigation pane on the left, select the CF Micro instance, right-click, and select the ‘Connect’ menu command.

Screen Shot 2013-08-27 at 12.17.59 PM

This gives us a window with instructions on how to connect to the instance. I prefer to use SSH from Terminal.app on my MacBook Pro, so I look at the instructions for the standalone SSH client.

Screen Shot 2013-08-27 at 12.22.48 PM

Before you can use the command line provided (highlighted above) you need to do a little bit of setup on your local machine. Create a working directory and copy in the private key downloaded earlier. Then connect to the new server (your IP address will be different).

mkdir -p ~/cloud/cfMicro
cd ~/cloud/cfMicro
mv ~/Downloads/cfMicro.pem .
chmod 400 cfMicro.pem 
ssh -i cfMicro.pem ubuntu@54.213.201.105

Once logged in to the server, you can install Cloud Foundry using Iwasaki Yudai’s cf_nise_installer.

export IPV4=`wget -qO- http://instance-data/latest/meta-data/public-ipv4`
export NISE_DOMAIN=$IPV4.xip.io
export CF_RELEASE_BRANCH=release-candidate
bash < <(curl -s -k -B https://raw.github.com/yudai/cf_nise_installer/${INSTALLER_BRANCH:-master}/local/bootstrap.sh)

When this finishes, you should restart your server.

sudo shutdown -r now

After a minute or so, log in to the server again using the ssh command above and start your Cloud Foundry.

(cd ~/cf_nise_installer; ./local/start_processes.sh)

Once the server is started, you can log out from the server (or open a second session on your client) and create a new application to push to your cloud. I suggest that you follow my earlier example with the following manifest.yml (change the domain to reference your server’s IP address):

---
applications:
- name: env
  memory: 256M
  instances: 1
  host: env
  domain: 54.213.204.16.xip.io
  path: .
  command: 'ruby env.rb'

Then use the Cloud Foundry command line tools to configure things and push your application (use your own IP address instead of the one shown here!).

cf target http://api.54.213.204.16.xip.io
cf login --password c1oudc0w admin
# okay to ignore CFoundry::InvalidRelation error in next command
# (see https://github.com/cloudfoundry/cf/issues/9)
cf create-space development 
cf target --space development
cf map-domain --space development 54.213.204.16.xip.io
cf push

When this finishes, you should be able to open a web browser on something like http://env.54.213.204.16.xip.io/env (using your own IP address) and see the application. Congratulations!

One of the challenges of providing Smalltalk on Cloud Foundry (or any cloud hosting system) is that the applications are typically given an ephemeral file system and isolated from read/write access to any persistent disk. This is, of course, for good reasons, but makes it difficult to use Smalltalk applications (like Pier on Pharo) that rely on image-based persistence. We recently described getting a Pharo application (in that case AIDAweb) running on Cloud Foundry 2, but each time you stop and start an application instance you would lose all the saved data.

This post describes a way to modify a private Cloud Foundry system to provide access to a persistent file system from an application instance. Note that this is a proof-of-concept and demonstrates that it is possible to work around the carefully-designed isolation in Cloud Foundry. We do so by providing every Pharo application with the same shared space, which is completely replaced whenever you upload a new application. We also give all files and directories read/write/execute permission so that subsequent launches of the application (which Cloud Foundry does with a new user and group) will have access.

Create a Virtual Disk

The first step is to create a fixed-size (~500 MB) virtual disk (based on ideas here) that can be used by the application instance for persistent file storage. The following steps, done as root on the Cloud Foundry server, give us the needed disk:

cd /var
touch st_virtual_disk.ext3
dd if=/dev/zero of=/var/st_virtual_disk.ext3 bs=550000000 count=1
mkfs.ext3 /var/st_virtual_disk.ext3
mkdir /var/smalltalk
mount -o loop,rw,usrquota,grpquota /var/st_virtual_disk.ext3 /var/smalltalk/
mkdir /var/smalltalk/pharo/
chmod 777 /var/smalltalk/pharo/
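
A quick sanity check on the host that the loop mount is in place:

df -h /var/smalltalk             # should report a filesystem of roughly 500 MB
mount | grep st_virtual_disk     # shows the loop device backing /var/smalltalk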

Making the Disk Visible in the Warden Container

Cloud Foundry’s architecture includes a Warden that manages, for each application instance, a container with a private root file system. On Ubuntu 10.04 LTS this is implemented using aufs, and the mount action is (at the moment) found in /var/vcap/data/packages/warden/29/warden/root/linux/skeleton/lib/common.sh at line 60 (shown here with :/var/smalltalk=rw added):

mount -n -t aufs \
 -o br:tmp/rootfs=rw:$rootfs_path=ro+wh:/var/smalltalk=rw none mnt

This means that the container’s private root file system has a /pharo top-level directory that maps to a persistent 500 MB virtual disk. The remaining portion of the external file system (from /var/vcap/data/packages/rootfs_lucid64/1) is read-only and the internal file system (from /var/vcap/data/warden/depot/*/tmp/rootfs/) is read/write.

New Pharo Buildpack

Now rather than simply creating a startup script, the buildpack needs to copy things to the persistent file system and use that when running the application. Following is the new compile script:

#!/usr/bin/env bash
#
BUILD_DIR=$1
CACHE_DIR=$2
BUILD_PACK_DIR=$(dirname $(dirname $0))
#
cd $BUILD_DIR
rm -rf /pharo/* 2>&1
cp `ls *.image` /pharo/pharo.image
cp `ls *.changes` /pharo/pharo.changes 2>&1
rm *.image *.changes 2>&1
if [ ! -e startup.st ]; then
 touch startup.st
fi
cp -r * /pharo
chmod -R 777 /pharo/*
rm -rf *
#
cat > startup.sh << EOF
#!/usr/bin/env bash
#
umask 0000
ln -s /pharo ./pharo
cd pharo
/opt/pharo/pharo -vm-display-null -vm-sound-null pharo.image startup.st \$PORT
EOF
chmod +x startup.sh

Client Changes

On the client (before pushing the application to the cloud), there are some changes needed. I started with the Pier 3.0 download (from this page). The pre-built image is in Pharo 1.3 and does not include Smalltalk code for accessing the OS environment variables (at least not that I found easily). Thus, you can see above that the startup script is modified to add $PORT to the command line when starting Pharo. With that change, Pier can be started on the supplied port with this startup.st script (which might work for any Seaside application):

| manager adaptor port |
port := (SmalltalkImage current argumentAt: 1) asNumber.
manager := WAServerManager default.
adaptor := manager adaptors first.
adaptor port == port ifFalse: [
 manager stopAll.
 adaptor port: port.
 manager startAll.
].

This runs Pier on Cloud Foundry with a persistent file system, but Pier still needs to save the image regularly for data to survive a restart. It would also be desirable to save the image when sent a SIGTERM (perhaps with something like Chaff). One can save the image from the /status page, though since one can also quit the image from that page it probably should have some security!

In any case, we have demonstrated that it is possible to provide a persistent file store for a Pharo application on a private Cloud Foundry server.

If you want to just look at the code in a GemStone/S image, and maybe try out a simple Smalltalk expression, you can spin up a Heroku dyno running GemStone/S. See the buildpack for instructions on how to run Webtools. Webtools is itself still quite rough, but if you want to contribute or open an issue, feel free.

Of course, this isn’t very useful as a database or application server because the dyno has an ephemeral file system and can be restarted any time. But it does demonstrate that I’ve figured out how to create a Heroku buildpack, and that was my primary goal!

We have previously demonstrated adding Pharo to Cloud Foundry. Since that time Cloud Foundry has been substantially revised and the process for adding a runtime/framework has changed to be based on a buildpack model. In general, this makes things easier but there are some complications.

On the Server

To deploy a Pharo application to Cloud Foundry we log in to the private Cloud Foundry instance created earlier. So that I can set the shell title and not have it replaced I typically change the prompt:

export PS1='\[\e[0;35m\]\h\[\e[0;34m\] \w\[\e[00m\]$ '

By default, Cloud Foundry is set up to run 64-bit applications. In order to run the 32-bit version of Pharo we need to add some libraries:

sudo apt-get install -y ia32-libs

As discussed in the architecture documentation, each application runs in a container with a private root filesystem. Thus, the 32-bit libraries need to be added to the shared read-only portion. This should be done using the general Cloud Foundry install scripts, but as a first attempt I’m just doing it manually:

cd /var/vcap/data/packages/rootfs_lucid64/0.1-dev
sudo rsync -r -l -D -g -o -p -t /lib32 . # 12 MB
sudo rsync -r -l -D -g -o -p -t /usr/lib32 ./usr/ # 153 MB
sudo cp /lib32/ld-linux.so.2 ./lib/
cd usr/lib
sudo rsync -r -l -D -g -o -p -t /usr/lib/libv4l* .
sudo rsync -r -l -D -g -o -p -t /usr/lib/gio .
cd gtk-2.0/
sudo rsync -r -l -D -g -o -p -t /usr/lib/gtk-2.0/i* .
cd 2.10.0/
sudo ln -s ../../../lib32/gtk-2.0/2.10.0 i486-pc-linux-gnu
sudo ln -s ../../../lib32/gtk-2.0/2.10.0 i686-pc-linux-gnu

I’m not sure that all of the above is necessary, but it seems to be sufficient to run Pharo as a runtime in a container. Next, we install Pharo and make it visible (with the same recognition that this could be done better using the install scripts):

sudo mkdir /opt/pharo
cd /opt/pharo
sudo chmod 777 .
curl http://files.pharo.org/vm/pharo/linux/Pharo-VM-linux-stable.zip > pharo.zip
sudo unzip pharo.zip; rm pharo.zip
sudo chmod 755 .
cd /var/vcap/data/packages/rootfs_lucid64/0.1-dev/opt
sudo rsync -r -l -D -g -o -p -t /opt/pharo .

Now we get to the part of making a Pharo buildpack. Eventually this should be in a Git repository but for now I’ll just add it directly to my private Cloud Foundry instance so that it looks like a built-in buildpack.

cd /var/vcap/packages/dea_next/buildpacks/vendor
mkdir pharo pharo/bin; cd pharo; git init
cat > README.md << EOF
Cloud Foundry Buildpack for Pharo Smalltalk
==============
EOF

The actual buildpack consists of three files that are called by Cloud Foundry, all in the bin directory. First is bin/detect (use an editor to create these files):

#!/usr/bin/env bash
#
if [ -f $1/*.image ]; then
 echo "pharo" && exit 0
else
 echo "no" && exit 1
fi

This will confirm that the Pharo buildpack can handle an application that contains a .image file. Next we create bin/compile to add a startup script to the application:

#!/usr/bin/env bash
#
BUILD_DIR=$1
CACHE_DIR=$2
BUILD_PACK_DIR=$(dirname $(dirname $0))
#
if [ ! -d "$BUILD_DIR" ]; then
 mkdir -p "$BUILD_DIR"
fi
#
if [ ! -d "$CACHE_DIR" ]; then
 mkdir -p "$CACHE_DIR"
fi
#
cat > "$BUILD_DIR/startup.sh" << EOF
#!/usr/bin/env bash
IMAGE=\`ls *.image\`
STARTUP=\`ls | grep startup.st\`
/opt/pharo/pharo -vm-display-null -vm-sound-null \$IMAGE \$STARTUP
EOF
chmod +x "$BUILD_DIR/startup.sh"

Third, we create bin/release to return a YML file with startup information:

#!/usr/bin/env bash
#
cat <<EOF
---
default_process_types:
 web: ./startup.sh
EOF

Finally, we set these three files to be executable:

chmod +x bin/*

Now that we have modified our private Cloud Foundry instance to support Pharo, we start it up:

cd ~/cf_nise_installer/; ./local/start_processes.sh

It seems that the startup might need a bit of extra time. To check on the status do the following:

sudo /var/vcap/bosh/bin/monit summary

On the Client

In our previous approach to adding Pharo to Cloud Foundry, we went to some effort to avoid copying the image, changes, and sources files over the network from the client to the server (because they are typically quite large). That earlier approach, while elegant, is a bit less obvious to application developers who just want to deploy their application. In this iteration I’ve taken the approach of doing the simplest thing and waiting till it causes a problem to “improve” things. Thus, we expect you to provide the image file and we will just run it. The primary thing that you have to do is modify your code to provide a web server on the port defined in the $PORT environment variable (which can be different each time the application is launched). To facilitate this you can include a ‘startup.st’ script in your directory that will be run each time the application starts. This allows you to change the listening port each time. Following is a sample ‘startup.st’ script that we use with an AIDAweb one-click image:

| methodSource port |
methodSource := 'introductionElement
 | element |
 element := WebElement new.
 element addText: self observee introduction.
 element addText: ''<p>Listening on port: '' , 
 session parent site port printString , ''</p>''.
 ^element'.
Author fullName: 'CloudFoundry'. "Developer's name"
WebDemoApp compile: methodSource.
port := OSProcess thisOSProcess environment at: #'PORT' ifAbsent: ['8888'].
AIDASite default
 stop;
 port: port asNumber;
 start.

At this point we can connect to our private cloud and push the application:

cf target api.172.16.217.185.xip.io # use the IP address of your private cloud
cf login --password micr0@micr0 micro@vcap.me
cf create-space development 
cf target --space development
cf map-domain mycloud.local
cf push
# Name> myapp
# [accept defaults]
# Save configuration?> y

Now, make sure that ‘myapp.mycloud.local’ is in your /etc/hosts file and points to your private cloud. Then you can open a web browser on http://myapp.mycloud.local and you should see your application running!
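
For example, the /etc/hosts entry would look something like this (using the private cloud’s IP address from the ‘cf target’ line above):

172.16.217.185  myapp.mycloud.local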

As you think about running a Pharo application in the cloud, remember that the file system is ephemeral. That is, your application can be restarted any time and changes made to the image will not be saved. See http://docs.cloudfoundry.com/docs/using/app-arch/ for a good discussion of some architectural issues.
