My post on a Cloud Foundry AMI used a simple Ruby application. In a comment, Andrew Spyker asked about a node.js application. I’ve done JavaScript but not node.js, so I thought I’d give it a try. I followed my earlier instructions to get the CF Micro instance started, and then, beginning at the ‘Use the Server’ section, I did something different.
Using the webserver example here, I created two files in a new directory. The first file, example.js, contained the following:
var http = require('http');
var port = parseInt(process.env.PORT, 10);
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(port, '0.0.0.0');
console.log('Server running at http://0.0.0.0:' + port.toString() + '/');
The second file, package.json, contained the following:
{ "name": "http-server", "version": "0.0.1", "author": "James Foster <github@jgfoster.net>", "description": "webserver demo from http://nodejs.org/", "dependencies" : [ ], "engines": { "node": ">=0.10" } }
(Bear in mind that this is my first node.js application, and I just spent an hour or so poking around on the web to get this far.)
From the command line I’m able to run it by entering the following:
export PORT=1337; node example.js
With this I can open a web browser on http://localhost:1337/ and see the greeting.
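A quick check from a second shell confirms the server is answering (assuming the same port):

# expect the plain-text greeting
curl http://localhost:1337/
# Hello World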
To push my trivial application to my EC2 instance, I did the following:
cf target http://api.<my-ip>.xip.io
cf login --password mySecret admin
# okay to ignore CFoundry::InvalidRelation error in next command
# (see https://github.com/cloudfoundry/cf/issues/9)
cf create-space development
cf target --space development
cf map-domain --space development <my-ip>.xip.io
cf push --command "node example.js"
The interaction included giving the application a name (“hello”), accepting the defaults, and saving the configuration:
Name> hello
Instances> 1
1: 128M
2: 256M
3: 512M
4: 1G
Memory Limit> 256M
Creating hello... OK
1: hello
2: none
Subdomain> hello

1: 54.200.62.218.xip.io
2: none
Domain> 54.200.62.218.xip.io
Binding hello.54.200.62.218.xip.io to hello... OK
Create services for application?> n
Save configuration?> y
Saving to manifest.yml... OK
Uploading hello... OK
Preparing to start hello... OK
-----> Downloaded app package (4.0K)
-----> Resolving engine versions
       Using Node.js version: 0.10.17
       Using npm version: 1.2.30
-----> Fetching Node.js binaries
-----> Vendoring node into slug
-----> Installing dependencies with npm
       npm WARN package.json http-server@0.0.1 No repository field.
       npm WARN package.json http-server@0.0.1 No readme data.
       npm WARN package.json http-server@0.0.1 No repository field.
       npm WARN package.json http-server@0.0.1 No readme data.
       Dependencies installed
-----> Building runtime environment
-----> Uploading droplet (15M)
Checking status of app 'hello'...
  0 of 1 instances running (1 starting)
  0 of 1 instances running (1 starting)
  1 of 1 instances running (1 running)
Push successful! App 'hello' available at http://hello.54.200.62.218.xip.io
When I went to the URL provided, I saw the greeting.
I recently made an Amazon Machine Image (AMI) available (described here) and described how to use it. In this post I will describe the process of taking an existing AWS Instance with Cloud Foundry Micro (described here) and making it suitable for a public AMI. The primary challenge is that we need to create a boot disk with Cloud Foundry already installed but configured so that it can use a new password, domain, and local IP address. We also need to remove any identifying information before shutting down the VM and taking a snapshot.
I first use the following script (named setup.sh) to install Cloud Foundry using the cf_nise_installer. Note that I am setting a well-known IP, domain, and password; these will be changed later during the boot process. I am also deleting some files that might exist from a previous install attempt. Finally, I am creating a list of files that might contain data that must be modified during the boot process.
#!/bin/bash
# set -x
cd ~
sudo rm -rf \
    cf_nise_installer \
    /var/vcap/data/sys/log/* \
    domain password files ipv4 \
    2>/dev/null
#
export INSTALLER_URL="https://github.com/yudai/cf_nise_installer.git"
export INSTALLER_BRANCH='master'
export CF_RELEASE_BRANCH='release-candidate'
export NISE_IP_ADDRESS='127.0.0.1'
export NISE_DOMAIN='cloud.mycloud.local'
export NISE_PASSWORD='swordfish'
bash < <(curl -s -k -B https://raw.github.com/yudai/cf_nise_installer/${INSTALLER_BRANCH:-master}/local/bootstrap.sh)
#
list="$(sudo find /vol-a/var/vcap -type f \( -iname '*.yml' -o -iname 'postgres_ctl' \) )"
echo $list > ~/files
We next create a startup script, /home/ubuntu/rc.local, and modify /etc/rc.local to call our new script, as sketched below. (If you delete the authorized_keys and fail to add them back, you can’t log in to your instance!)
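The hook in /etc/rc.local can be as small as one line; this is a hedged sketch (the boot.log path matches the one referenced later in this post), not necessarily the exact file on the image:

#!/bin/sh -e
#
# /etc/rc.local -- run our startup script at boot, logging to /var/log/boot.log
/home/ubuntu/rc.local > /var/log/boot.log 2>&1
exit 0

The /home/ubuntu/rc.local script itself follows: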
#!/bin/bash
# should be called from /etc/rc.local
echo "Starting /home/ubuntu/rc.local script to start Cloud Foundry"
#
# See if 'debug: true' is in user-data
#
X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'debug:' )
X=$( echo $X | cut -d : -f 2 | tr -d ' ' )
if [ "$X" == "true" ]; then
    set -x
fi
#
# Make sure that we have some keys (should be deleted before snapshot)
#
cd /home/ubuntu
if [ ! -f .ssh/authorized_keys ]; then
    curl http://instance-data/latest/meta-data/public-keys/0/openssh-key \
        2>/dev/null > .ssh/authorized_keys
    chown ubuntu:ubuntu .ssh/authorized_keys
    chmod 600 .ssh/authorized_keys
    echo "Updated authorized_keys"
fi
#
# See if 'autoStart: false' is in user-data; if so, done!
#
X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'autoStart:' )
X=$( echo $X | cut -d : -f 2 | tr -d ' ' )
if [ "$X" == "false" ]; then
    echo "user-data includes 'autoStart: false' so we will quit"
    exit 0
fi
#
# Each instance should have a different password (but once set it does not change!)
#
if [ -e password ]; then
    # If we have been through this code before, then use the previous password
    OLD_PASSWORD=$( < password )
    NEW_PASSWORD=$OLD_PASSWORD
else
    # This is the first time through, so we replace the default password
    OLD_PASSWORD='swordfish'
    # User may specify a 'password:' line in the user data
    X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'password:' )
    NEW_PASSWORD=$( echo $X | cut -d : -f 2 | tr -d ' ' )
    if [ "$NEW_PASSWORD" == "" ]; then
        # No user-provided one and no previously created one, so assign a random one
        # another option is $( openssl rand -base64 6 )
        NEW_PASSWORD=$( < /dev/urandom tr -dc A-Za-z0-9 | head -c8 )
    fi
    # Save new password so it can be reused next time
    echo "$NEW_PASSWORD" > password
fi
#
# See if 'domain:' is in user-data
#
X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'domain:' )
if [ "$X" != "" ]; then
    NEW_DOMAIN=$( echo $X | cut -d : -f 2 | tr -d ' ' )
else
    # get the public-hostname
    until [ "$NEW_DOMAIN" != "" ]; do
        X=$( curl http://instance-data/latest/meta-data/ 2>/dev/null | grep public-hostname )
        if [ "$X" != "" ]; then
            X=$( curl http://instance-data/latest/meta-data/public-hostname 2>/dev/null )
            if [ "$X" != "" ]; then
                NEW_DOMAIN=$( echo $X | cut -d "." -f 1 | cut -d "-" -f 2- | sed "s/\-/\./g" )
                NEW_DOMAIN="$NEW_DOMAIN.xip.io"
            fi
        fi
        if [ "$NEW_DOMAIN" == "" ]; then
            echo "`date`: public-hostname not yet available from meta-data"
            sleep 1
        fi
    done
fi
if [ -f domain ]; then
    OLD_DOMAIN=$( cat domain )
else
    OLD_DOMAIN='cloud.mycloud.local'
fi
echo $NEW_DOMAIN > domain
#
# Each new instance will have a unique private IP address
#
NEW_IPV4=$( curl http://instance-data/latest/meta-data/local-ipv4 2>/dev/null )
if [ -f ipv4 ]; then
    OLD_IPV4=$( cat ipv4 )
else
    OLD_IPV4="127.0.0.1"
fi
echo $NEW_IPV4 > ipv4
#
# Find all the files that need to be edited
# (takes several seconds, so cache it for faster start next time)
#
if [ -f files ]; then
    list=$( cat files )
else
    list="$(find /var/vcap -type f \( -iname '*.yml' -o -iname 'postgres_ctl' \) )"
    echo $list > files
fi
for f in $list
do
    if [ "$OLD_PASSWORD" != "$NEW_PASSWORD" ]; then
        if grep -q $OLD_PASSWORD $f; then
            sed -i "s/$OLD_PASSWORD/$NEW_PASSWORD/g" $f
            echo "Updated password in $f"
        fi
    fi
    if grep -q $OLD_DOMAIN $f; then
        sed -i "s/$OLD_DOMAIN/$NEW_DOMAIN/g" $f
        echo "Updated domain in $f"
    fi
    if grep -q "$OLD_IPV4" $f; then
        sed -i "s/$OLD_IPV4/$NEW_IPV4/g" $f
        echo "Updated IP in $f"
    fi
done
#
# start Cloud Foundry
(cd /home/ubuntu/cf_nise_installer; ./local/start_processes.sh)
Finally, I create a cleanup.sh script to remove identifying information and sanitize the system before shutdown. Note that once the authorized_keys files are removed from the .ssh directories you can’t log in again unless new authorized_keys are installed during the boot process.
#!/bin/bash
#
sudo rm -f /root/.ssh/* /home/*/.ssh/* /var/vcap/monit/monit.log
(cd /var/log; sudo rm -f *.log dmesg* debug messages syslog)
rm -f ~/.viminfo ~/.bash_history
echo '' | sudo tee /var/log/lastlog
history -c
#
sudo shutdown now
After running this script I use the AWS tools to stop the instance (so the command history stays empty). At this point I can use the AWS tools to create an AMI. When the AMI starts, it executes /etc/rc.local, which calls my new script, /home/ubuntu/rc.local. This script sets the local IPv4 value, along with the domain and password (using configured values if provided in the user-data), and then starts Cloud Foundry. Each new instance thus has its own IP, domain, and password, making it unique and secure.
If there is something wrong with the boot volume (such as missing authorized_keys so you can’t log in), then you need to start another EC2 instance (a micro is fine), attach the volume, do the fix-up, and release it.
# mount a new volume (in case some surgery is needed)
#
sudo mkdir -p -m 000 /vol-a
sudo mount /dev/sdf /vol-a
#
# ... do whatever fix-up is needed and unmount the volume
#
sudo umount -d /vol-a
sudo rmdir /vol-a
I have used /etc/rc.local to hook into the boot process. An alternative would be to use crontab with the ‘@reboot’ option, as sketched below.
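If you prefer crontab, the equivalent hook might look like this (a sketch, installed with sudo crontab -e):

# run the Cloud Foundry startup script once at every boot
@reboot /home/ubuntu/rc.local > /var/log/boot.log 2>&1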
If you have an instance-store (a disk that exists only while the instance is running), then you might want to set up some swap space on it. The following script will create a 4 GB file to be used as swap space.
# http://serverfault.com/questions/218750/why-dont-ec2-ubuntu-images-have-swap
sudo dd if=/dev/zero of=/mnt/swapfile bs=1M count=4096 &&
sudo chmod 600 /mnt/swapfile &&
sudo mkswap /mnt/swapfile &&
echo /mnt/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab &&
sudo swapon -a
cat /proc/swaps
That summarizes how I created a Cloud Foundry Micro public AMI.
Video of ESUG 2013 Presentation
My presentation “Smalltalk in the Cloud” was recorded and can be found at the link. Unfortunately, the audio was not very strong.
Update: Slides are here.
Update: A video of this post is available here.
After creating a Cloud Foundry Micro on an Amazon EC2 instance (described here), I decided to make it available as a public AMI so that others could try it out (especially since a Micro edition is not otherwise available unless you build it yourself using Altoros’ Vagrant setup or Nise BOSH).
To use this you need to sign up for an Amazon AWS account.
Start a Cloud Foundry Instance
- Once you have an account, launch ami-98c956a8 (currently available in us-west-2; add a comment if you want it available elsewhere).
- Confirm that the manifest reads “366179850620/Cloud Foundry Micro” and click Continue.
- Change the instance type to m1.small and click Continue.
- In the ‘User Data:’ field you may enter some optional customizations (each on its own line) and click Continue.
- password: mySecret (AMI rules prohibit default passwords so if you don’t provide your own we will generate a random one);
- domain: cloud.example.com (if you don’t provide a domain, we will assign <public-IP>.xip.io as the domain); and
- debug: true (adds some debugging information to /var/log/boot.log).
- Review the ‘Storage Device Configuration’ (an 8 GB root volume and an ephemeral instance store) and click Continue.
- You may give a ‘Name’ tag to your EC2 instance, say CF Demo, and click Continue.
- You may select or create a key pair to be used to log in to your server (optional, but useful), and click Continue.
- Select or create a Security Group with at least HTTP access and click Continue.
- ICMP – Echo Request (optional, to allow your server to respond to ping);
- TCP – SSH (optional, to allow you to log on to your server using a private key); and
- TCP – HTTP (required, to interact with Cloud Foundry and the applications you push to the server).
- Review the configuration information and click Launch.
- Click View your instances on the Instances page to discover the public IP address.
Identify the Domain
To use the server you need to know its domain name.
- If you provided a domain in the User Data above, then you need to create a record set in your domain name server to point your domain and all subdomains (using the ‘*’ wildcard match) to the indicated address; or
- If you did not provide a domain, then your domain is <public-IP>.xip.io (a DNS service that maps all requests to the given IP). Either way, a quick check of the DNS is sketched below.
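To confirm that the wildcard DNS is answering, query an arbitrary subdomain (a hedged example; substitute your own domain or public IP):

# any subdomain should resolve to the instance's public IP
dig +short anything.<public-IP>.xip.io
dig +short hello.cloud.example.com   # if you configured your own wildcard record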
Log on to the Server (Optional)
- Identify the path to the private key associated with the key pair you selected or created when you created the EC2 instance.
- From a command shell (on Mac, Linux, or Unix), or an SSH client on Windows (such as PuTTY), connect to the server. E.g.,
ssh -i /path/to/my/private/key.pem ubuntu@domain
- Once connected you can explore the server.
sudo /var/vcap/bosh/bin/monit summary  # check Cloud Foundry status (all running except cloud_controller_jobs)
tail /var/log/boot.log                 # check here if things don't seem to start properly
cat ~/domain                           # show the configured domain (from User Data or public IP)
cat ~/password                         # show the configured password (from User Data or random generation)
cd /var/vcap/data/sys; sudo chmod +rx log; cd log; ll  # list of log file directories
- You can execute a single command on the server using ssh:
ssh -i /path/to/my/private/key.pem ubuntu@hostname_or_domain cat password
Use the Server
To use the server you can refer to the cf command line reference. For example, on your local machine create and set up the environment:
mkdir ~/cloud ~/cloud/ruby; cd ~/cloud/ruby
sudo gem install bundle sinatra cf
Create three files using your favorite text editor:
Gemfile:
source 'https://rubygems.org'
ruby '1.9.3'
gem 'sinatra'
env.rb:
require 'rubygems'
require 'sinatra'
configure do
disable :protection
end
get '/' do
host = ENV['VCAP_APP_HOST']
port = ENV['VCAP_APP_PORT']
"<h1>Hello World!</h1><h2> I am in the Cloud! via: #{host}:#{port}</h2>"
end
get '/env' do
res = ''
ENV.each do |k, v|
res << "#{k}: #{v}<br/>"
end
res
end
config.ru:
require './env.rb'
run Sinatra::Application
To create a ‘Gemfile.lock’ from the ‘Gemfile’ run the following command:
bundle
I can test the application by running the following command:
ruby env.rb
When it tells me that Sinatra has taken the stage I enter http://localhost:4567/ and http://localhost:4567/env in a web browser.
Then I can use ‘cf’ to set my target, login, do some configuration, and push my application to the cloud (replacing <my-ip> and mySecret with your server’s public IP and password):
cf target http://api.<my-ip>.xip.io
cf login --password mySecret admin
# okay to ignore CFoundry::InvalidRelation error in next command
# (see https://github.com/cloudfoundry/cf/issues/9)
cf create-space development
cf target --space development
cf map-domain --space development <my-ip>.xip.io
cf push
If the push is successful, it will show the URL at which you can see the application. When you are done, you can stop and/or terminate your EC2 instance.
Recently we went through the process of installing a micro Cloud Foundry on a local virtual machine. We are now interested in doing the same on an Amazon EC2 instance. As far as I have been able to find, the existing instructions for using AWS set up a system with many VMs. In this post we look at a “micro” (or single-machine) Cloud Foundry setup.
To do this you need to sign up for an Amazon AWS account. Next, you need to decide where to build your Cloud Foundry instance. Amazon has data centers in eight regions, and you can pick based on geography (close to you has less network latency) and price (some are more expensive). I am close to the US (West) Oregon Region (us-west-2) and it is among the least expensive.
Next you select a base operating system for your machine. Cloud Foundry recommends 64-bit Ubuntu 10.04 LTS, so go to Ubuntu’s Amazon EC2 AMI Locator and enter ’64 lucid ebs’ in the search area (since we are going to make changes to the setup we want to be on a persistent store, hence the EBS selection). When the search list is narrowed down to one for each region, click on the link for the region you want.
This takes us to the EC2 Management Console (perhaps with a login) where you can review information about the selected AMI and click the Continue button.
For the Instance Details, change the Instance Type from ‘T1 Micro’ to ‘M1 Small’ or ‘M1 Medium’ and click Continue.
Next, give ‘CF Micro’ as ‘User Data’ and click Continue.
Do not make any changes to the Storage Device Configuration; simply click Continue.
For the Tags, give ‘CF Micro’ as the Name and click Continue.
To interact securely with the instance you need a key pair. The Wizard prompts for a name and you may enter anything (such as ‘cfMicro’) and then click ‘Create and Download your Key Pair’.
The default Security Group does not allow any outside access to the instance. Create a new Security Group, named ‘CF Micro’ with a description of ‘ping, ssh, http’, add the appropriate rules, and click Continue.
Next, click the Launch button to start the instance.
When informed that the instance is starting, click Close.
This takes us to the list of Instances on the Management Console.
When we built a micro Cloud Foundry on a local virtual machine, the IP address was assigned by Fusion and was the same from “inside” and “outside” the machine. When running an EC2 instance on AWS, the machine is behind a firewall and on an internal (private) network. While our virtual machine can be reached via a public IP address, the machine actually has a different IP address.
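You can see both addresses from a shell on the instance via the EC2 metadata service:

# from the instance: the address the world sees vs. the address the machine sees
curl -s http://instance-data/latest/meta-data/public-ipv4; echo
curl -s http://instance-data/latest/meta-data/local-ipv4; echo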
Optional: If we stop and start the machine, the default behavior is that we are likely to get a different public address. In order to have a stable IP address for our instance, we can allocate an Elastic IP and associate it with the running instance. (If you are only doing this once and will throw away the system, then you can skip this step.) From the EC2 Management Console, click Elastic IPs in the navigation pane on the left and then click the Allocate New Address button (instructions here).
Optional (continued): Select the new address and click the Associate Address button. In the dialog box select the running instance and click the Yes, Associate button.
Whether or not you have a stable IP, you are now almost ready to log on to your new server. Click on Instances in the navigation pane on the left, select the CF Micro instance, right-click, and select the ‘Connect’ menu command.
This gives us a window with instructions on how to connect to the instance. I prefer to use SSH from Terminal.app on my MacBook Pro, so I look at the instructions for the standalone SSH client.
Before you can use the command line provided (highlighted above) you need to do a little bit of setup on your local machine. Create a working directory and copy the private key downloaded earlier. Then connect to the new server (your IP address will be different).
mkdir ~/cloud/cfMicro
cd ~/cloud/cfMicro
mv ~/Downloads/cfMicro.pem .
chmod 400 cfMicro.pem
ssh -i cfMicro.pem ubuntu@54.213.201.105
Once logged in to the server, you can install Cloud Foundry using Iwasaki Yudai’s cf_nise_installer.
export IPV4=`wget -qO- http://instance-data/latest/meta-data/public-ipv4`
export NISE_DOMAIN=$IPV4.xip.io
export CF_RELEASE_BRANCH=release-candidate
bash < <(curl -s -k -B https://raw.github.com/yudai/cf_nise_installer/${INSTALLER_BRANCH:-master}/local/bootstrap.sh)
When this finishes, you should restart your server.
sudo shutdown -r now
After a minute or so, log in to the server again using the ssh command above and start your Cloud Foundry.
(cd ~/cf_nise_installer; ./local/start_processes.sh)
Once the server is started, you can logout from the server (or open a second session on your client) and create a new application to push to your cloud. I suggest that you follow my earlier example with the following manifest.yml (change the domain to reference your server’s IP address):
---
applications:
- name: env
  memory: 256M
  instances: 1
  host: env
  domain: 54.213.204.16.xip.io
  path: .
  command: 'ruby env.rb'
Then use the Cloud Foundry command line tools to configure things and push your application (use your own IP address instead of the one shown here!).
cf target http://api.54.213.204.16.xip.io
cf login --password c1oudc0w admin
# okay to ignore CFoundry::InvalidRelation error in next command
# (see https://github.com/cloudfoundry/cf/issues/9)
cf create-space development
cf target --space development
cf map-domain --space development 54.213.204.16.xip.io
cf push
When this finishes, you should be able to open a web browser on something like (use your own IP address) http://env.54.213.204.16.xip.io/env and see the application. Congratulations!
One of the challenges of providing Smalltalk on Cloud Foundry (or any cloud hosting system) is that the applications are typically given an ephemeral file system and isolated from read/write access to any persistent disk. This is, of course, for good reasons, but makes it difficult to use Smalltalk applications (like Pier on Pharo) that rely on image-based persistence. We recently described getting a Pharo application (in that case AIDAweb) running on Cloud Foundry 2, but each time you stop and start an application instance you would lose all the saved data.
This post describes a way to modify a private Cloud Foundry system to provide access to a persistent file system from an application instance. Note that this is a proof-of-concept and demonstrates that it is possible to work around the carefully-designed isolation in Cloud Foundry. We do so by providing every Pharo application the same shared space that is completely replaced whenever you upload a new application. We also give all files and directories read/write/execute permission so that subsequent launches of the application (which Cloud Foundry does with a new user and group) will have access.
Create a Virtual Disk
The first step is to create a fixed-size (~500 MB) virtual disk (based on ideas here) that can be used by the application instance for persistent file storage. The following steps, done as root on the Cloud Foundry server, create the needed disk:
cd /var
touch st_virtual_disk.ext3
dd if=/dev/zero of=/var/st_virtual_disk.ext3 bs=550000000 count=1
mkfs.ext3 /var/st_virtual_disk.ext3
mkdir /var/smalltalk
mount -o loop,rw,usrquota,grpquota /var/st_virtual_disk.ext3 /var/smalltalk/
mkdir /var/smalltalk/pharo/
chmod 777 /var/smalltalk/pharo/
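Note that this loop mount will not survive a reboot. One hedged option (untested here) is an /etc/fstab entry like the following:

# restore the loop-mounted virtual disk at boot
/var/st_virtual_disk.ext3  /var/smalltalk  ext3  loop,rw,usrquota,grpquota  0  0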
Making the Disk Visible in the Warden Container
Cloud Foundry’s architecture includes a Warden that manages a container for each application instance, each with a private root file system. On Ubuntu 10.04 LTS this is implemented using aufs, and the mount action is (at the moment) found in /var/vcap/data/packages/warden/29/warden/root/linux/skeleton/lib/common.sh line 60 (with :/var/smalltalk=rw added):
mount -n -t aufs \
  -o br:tmp/rootfs=rw:$rootfs_path=ro+wh:/var/smalltalk=rw none mnt
This means that the container’s private root file system has a /pharo top-level directory that maps to a persistent 500 MB virtual disk. The remaining portion of the external file system (from /var/vcap/data/packages/rootfs_lucid64/1) is read-only and the internal file system (from /var/vcap/data/warden/depot/*/tmp/rootfs/) is read/write.
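A hedged way to verify this from inside a running container (the paths follow from the branches above):

ls -ld /pharo           # the persistent read/write layer backed by /var/smalltalk/pharo
grep aufs /proc/mounts  # shows the union mount including the /var/smalltalk=rw branch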
New Pharo Buildpack
Now rather than simply creating a startup script, the buildpack needs to copy things to the persistent file system and use that when running the application. Following is the new compile script:
#!/usr/bin/env bash
#
BUILD_DIR=$1
CACHE_DIR=$2
BUILD_PACK_DIR=$(dirname $(dirname $0))
#
cd $BUILD_DIR
rm -rf /pharo/* 2>&1
cp `ls *.image` /pharo/pharo.image
cp `ls *.changes` /pharo/pharo.changes 2>&1
rm *.image *.changes 2>&1
if [ ! -e startup.st ]; then
    touch startup.st
fi
cp -r * /pharo
chmod -R 777 /pharo/*
rm -rf *
#
cat > startup.sh << EOF
#!/usr/bin/env bash
#
umask 0000
ln -s /pharo ./pharo
cd pharo
/opt/pharo/pharo -vm-display-null -vm-sound-null pharo.image startup.st \$PORT
EOF
chmod +x startup.sh
Client Changes
On the client (before pushing the application to the cloud), there are some changes needed. I started with the Pier 3.0 download (from this page). The pre-built image is in Pharo 1.3 and does not include Smalltalk code for accessing the OS environment variables (at least not that I found easily). Thus, you can see above that the startup script is modified to add $PORT to the command line when starting Pharo. With that change, Pier can be started on the supplied port with this startup.st script (which might work for any Seaside application):
| manager adaptor port |
port := (SmalltalkImage current argumentAt: 1) asNumber.
manager := WAServerManager default.
adaptor := manager adaptors first.
adaptor port == port ifFalse: [
    manager stopAll.
    adaptor port: port.
    manager startAll.
].
This runs Pier on Cloud Foundry on a persistent file system but Pier needs to save the image regularly. It would also be desirable to save the image when sent a SIGTERM (perhaps with something like Chaff). One can save the image from the /status page, though since one can also quit the image from that page it probably should have some security!
In any case, we have demonstrated that it is possible to provide a persistent file store for a Pharo application on a private Cloud Foundry server.
If you want to just look at the code in a GemStone/S image, and maybe try out a simple Smalltalk expression, you can spin up a Heroku dyno running GemStone/S. See the buildpack for instructions on how to run Webtools. Webtools is itself still quite rough, but if you want to contribute or open an issue, feel free.
Of course, this isn’t very useful as a database or application server because the dyno has an ephemeral file system and can be restarted any time. But it does demonstrate that I’ve figured out how to create a Heroku buildpack, and that was my primary goal!
We have previously demonstrated adding Pharo to Cloud Foundry. Since that time Cloud Foundry has been substantially revised and the process for adding a runtime/framework has changed to be based on a buildpack model. In general, this makes things easier but there are some complications.
On the Server
To deploy a Pharo application to Cloud Foundry we log in to the private Cloud Foundry instance created earlier. So that I can set the shell title and not have it replaced I typically change the prompt:
export PS1='\[\e[0;35m\]\h\[\e[0;34m\] \w\[\e[00m\]$ '
By default, Cloud Foundry is set up to run 64-bit applications. In order to run the 32-bit version of Pharo we need to add some libraries:
sudo apt-get install -y ia32-libs
As discussed in the architecture documentation, each application runs in a container with a private root filesystem. Thus, the 32-bit libraries need to be added to the shared read-only portion. This should be done using the general Cloud Foundry install scripts, but as a first attempt I’m just doing it manually:
cd /var/vcap/data/packages/rootfs_lucid64/0.1-dev
sudo rsync -r -l -D -g -o -p -t /lib32 .           # 12 MB
sudo rsync -r -l -D -g -o -p -t /usr/lib32 ./usr/  # 153 MB
sudo cp /lib32/ld-linux.so.2 ./lib/
cd usr/lib
sudo rsync -r -l -D -g -o -p -t /usr/lib/libv4l* .
sudo rsync -r -l -D -g -o -p -t /usr/lib/gio .
cd gtk-2.0/
sudo rsync -r -l -D -g -o -p -t /usr/lib/gtk-2.0/i* .
cd 2.10.0/
sudo ln -s ../../../lib32/gtk-2.0/2.10.0 i486-pc-linux-gnu
sudo ln -s ../../../lib32/gtk-2.0/2.10.0 i686-pc-linux-gnu
I’m not sure that all of the above is necessary, but it seems to be sufficient to run Pharo as a runtime in a container. Next, we install Pharo and make it visible (with the same recognition that this could be done better using the install scripts):
sudo mkdir /opt/pharo
cd /opt/pharo
sudo chmod 777 .
curl http://files.pharo.org/vm/pharo/linux/Pharo-VM-linux-stable.zip > pharo.zip
sudo unzip pharo.zip; rm pharo.zip
sudo chmod 755 .
cd /var/vcap/data/packages/rootfs_lucid64/0.1-dev/opt
sudo rsync -r -l -D -g -o -p -t /opt/pharo .
Now we get to the part of making a Pharo buildpack. Eventually this should be in a Git repository but for now I’ll just add it directly to my private Cloud Foundry instance so that it looks like a built-in buildpack.
cd /var/vcap/packages/dea_next/buildpacks/vendor
mkdir pharo pharo/bin; cd pharo; git init
cat > README.md << EOF
Cloud Foundry Buildpack for Pharo Smalltalk
==============
EOF
The actual buildpack consists of three files that are called by Cloud Foundry, all in the bin directory. First is bin/detect (use an editor to create these files):
#!/usr/bin/env bash
#
if [ -f $1/*.image ]; then
    echo "pharo" && exit 0
else
    echo "no" && exit 1
fi
This will confirm that the Pharo buildpack can handle an application that contains a .image file. Next we create bin/compile to add a startup script to the application:
#!/usr/bin/env bash
#
BUILD_DIR=$1
CACHE_DIR=$2
BUILD_PACK_DIR=$(dirname $(dirname $0))
#
if [ ! -d "$BUILD_DIR" ]; then
    mkdir -p "$BUILD_DIR"
fi
#
if [ ! -d "$CACHE_DIR" ]; then
    mkdir -p "$CACHE_DIR"
fi
#
cat > "$BUILD_DIR/startup.sh" << EOF
#!/usr/bin/env bash
IMAGE=\`ls *.image\`
STARTUP=\`ls | grep startup.st\`
/opt/pharo/pharo -vm-display-null -vm-sound-null \$IMAGE \$STARTUP
EOF
chmod +x "$BUILD_DIR/startup.sh"
Third, we create bin/release to return a YML file with startup information:
#!/usr/bin/env bash
#
cat <<EOF
---
default_process_types:
  web: ./startup.sh
EOF
Finally, we set these three files to be executable:
chmod +x bin/*
Now that we have modified our private Cloud Foundry instance to support Pharo, we start it up:
cd ~/cf_nise_installer/; ./local/start_processes.sh
It seems that the startup might need a bit of extra time. To check on the status do the following:
sudo /var/vcap/bosh/bin/monit summary
On the Client
In our previous approach to adding Pharo to Cloud Foundry, we went to some effort to avoid copying the image, changes, and sources files over the network from the client to the server (because they are typically quite large). That earlier approach, while elegant, is a bit less obvious to application developers who just want to deploy their application. In this iteration I’ve taken the approach of doing the simplest thing and waiting until it causes a problem to “improve” things. Thus, we expect you to provide the image file and we will just run it. The primary thing that you have to do is modify your code to provide a web server on the port defined in the $PORT environment variable (which can be different each time the application is launched). To facilitate this you can include a ‘startup.st’ script in your directory that will be run each time the application starts. This allows you to change the listening port each time. Following is a sample ‘startup.st’ script that we use with an AIDAweb one-click image:
| methodSource port |
methodSource := 'introductionElement
    | element |
    element := WebElement new.
    element addText: self observee introduction.
    element addText: ''<p>Listening on port: '' , session parent site port printString , ''</p>''.
    ^element'.
Author fullName: 'CloudFoundry'. "Developer's name"
WebDemoApp compile: methodSource.
port := OSProcess thisOSProcess environment at: #'PORT' ifAbsent: ['8888'].
AIDASite default stop; port: port asNumber; start.
At this point we can connect to our private cloud and push the application:
cf target api.172.16.217.185.xip.io   # use the IP address of your private cloud
cf login --password micr0@micr0 micro@vcap.me
cf create-space development
cf target --space development
cf map-domain mycloud.local
cf push
# Name> myapp
# [accept defaults]
# Save configuration?> y
Now, make sure that ‘myapp.mycloud.local’ is in your /etc/hosts file and points to your private cloud. Then you can open a web browser on http://myapp.mycloud.local and you should see your application running!
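For reference, the client-side /etc/hosts entry would look something like this (use your own private cloud’s IP address):

172.16.217.185 myapp.mycloud.local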
As you think about running a Pharo application in the cloud, remember that the file system is ephemeral. That is, your application can be restarted any time and changes made to the image will not be saved. See http://docs.cloudfoundry.com/docs/using/app-arch/ for a good discussion of some architectural issues.
If you are following this blog you are aware that I’ve been investigating options for hosting Smalltalk in the cloud. One very successful hosting solution for a number of languages is Heroku. Not only do they support a number of languages built-in, but they also allow anyone to create a buildpack to support another language or framework. So, what would it take to add Smalltalk to Heroku? First, we need to acknowledge that Will Leinweber has a buildpack for Redline Smalltalk. Since Redline Smalltalk is based on the Java VM, this is more about running Java on Heroku than Smalltalk (which is not to minimize the accomplishment, just to note that getting a Smalltalk VM running is not quite the same).
The next thing to note is that running an application in the cloud might be different from running it on your own server. The model most hosting providers follow is that they package your application into a stand-alone directory tree that is saved on their server. When you ask for one or more instances to run, they copy the directory tree onto an available machine, set some environment variables (including a port on which to listen for HTTP requests), execute an application launch command, and then route requests to the port. Horizontal scaling is accomplished by starting additional instances and routing requests to them in some fashion. If an application dies, then the hosting provider cleans up the directory and repeats the process with a new directory. Thus, each instance is isolated from other instances (even of the same application), and the file system is “ephemeral” and exists only while the instance is running. From a Smalltalk perspective, this means that typical image-based persistence is not trivial. I have some ideas on how this might be addressed, but need to get Smalltalk into the environment first.
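A conceptual sketch of what a host does for each instance (purely illustrative; every name and path here is made up):

# hypothetical: how a hosting provider launches one instance
cp -r /slugs/myapp /instances/myapp-1   # copy the packaged application
cd /instances/myapp-1
export PORT=42137                       # instance-specific listen port
./startup.sh &                          # the app must serve HTTP on $PORT
# the router then forwards requests for the app's URL to this host:port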
As mentioned above, you can create a third-party buildpack that packages and starts your application. The buildpack is essentially three bash scripts that (1) report whether it can handle a particular application (“Do I have everything here that I need?”), (2) transform the application into its runtime structure, and (3) tell the framework how to start instances of the application. This is all fairly straightforward and has been done for many languages and frameworks. Note, however, that a “buildpack is responsible for building a complete working runtime environment around the app. This may include language VMs and other runtime dependencies that are needed by the app.” So, to get something like Pharo running on Heroku we need a Cog VM that runs on the Heroku server.
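In directory terms, a buildpack is just those three executable scripts (a sketch; the comments paraphrase the roles described above):

mybuildpack/bin/detect    # (1) can this buildpack handle the app? exit 0 if yes
mybuildpack/bin/compile   # (2) transform the app into its runtime structure
mybuildpack/bin/release   # (3) tell the framework how to start instances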
To find out more about the Heroku environment I decided to try it out. First, I signed up for a free account at https://id.heroku.com/signup and then installed the tools. At this point I followed the simple instructions to create a “Hello, world” application:
mkdir ~/cloud/heroku
cd ~/cloud/heroku
heroku login
git clone git://github.com/heroku/ruby-sample.git
cd ruby-sample
heroku create
git push heroku master
heroku apps:rename mygreatapp
heroku open
This opened a web browser on http://mygreatapp.herokuapp.com/ and the expected page was displayed. The next thing that is rather nice is that you can run a non-web application on the Heroku server and have stdin/stdout/stderr routed back to your client shell:
heroku run bash
This starts a bash shell on the server, just as if you had used ssh! Now we can do some things to investigate the server environment (this script summarizes where I ended up after trying the various things described next):
uname -m -o        # x86_64 GNU/Linux
ll /lib/libc.so*   # 2.11.1
cat /proc/version  # Linux version 3.8.11-ec2 (gcc version 4.4.3)
file /sbin/init    # ELF 64-bit LSB, for GNU/Linux 2.6.15
cat /proc/cpuinfo  # Intel(R) Xeon(R) 4 CPUs X5550 @ 2.67 GHz
curl --version     # 7.19.7
tar --version      # 1.22
zip --version      # command not found
This shows us that Heroku is running 64-bit Linux on Xeon processors. Since we have non-root access we can’t do much to change these characteristics. We can, however, try installing Pharo and see what happens. My first attempt was to download a one-click image, but that was a zip file and (as shown above) there is no unzip tool on the server. Next, I got a recent Cog VM that came as a .tgz file. I uncompressed this with tar and tried to run it. This gave the error “/usr/bin/ldd didn’t produce any output and the system is 64 bit. You may need to (re)install the 32-bit libraries.” So we can’t run the 32-bit application–at least not without some work. Next I tried a 64-bit Squeak VM and uncompressed that. With this we got further, but now have the error “squeakvm64: /lib/libc.so.6: version `GLIBC_2.14' not found (required by squeakvm64).” Note above that the Heroku server has glibc 2.11 (from 2009), so executables compiled against later libraries will not run.
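If objdump is available on the dyno (an assumption I have not verified), you can check in advance which glibc versions a binary requires:

# list the glibc versions the Squeak VM binary requires (binary name illustrative)
objdump -T ./squeakvm64 | grep -o 'GLIBC_[0-9.]*' | sort -u
ldd --version | head -1   # what the host's glibc reports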
I guess that the correct thing to do is to build the needed binaries on the Heroku server (as described here). Presumably this would guarantee that everything works together. Before trying to build Cog I might look at GNU Smalltalk.
Before finishing up we need to stop our Heroku application so we don’t use up resources. Exit from the bash shell on the server (if it hasn’t timed you out!), and then destroy your app:
heroku apps:destroy mygreatapp
In any case, I’ve learned enough for today and will try some other things next week (probably non-Heroku things!).
A lot has happened with Cloud Foundry since I last blogged about creating a micro cloud (almost 18 months ago!). The Micro “isn’t really maintained anymore” so instead I’m using the Nise Installer to create my private cloud. I used an Ubuntu 10.04 LTS image on a Mac running Fusion and used the easy install to get to a console. Once logged in I updated the keyboard to a Dell 101 (up and down arrows don’t work otherwise), installed a few tools, and then inquired to discover the IP address:
sudo dpkg-reconfigure console-setup
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install -y vim openssh-server curl
ifconfig eth0 | grep inet
The last line will show you the address of the machine. Add an entry to your client’s /etc/hosts file similar to the following (using the IP address you got from the server above):
172.16.217.185 mycloud myapp.mycloud.local
Then from a Terminal (or other shell) on the client enter ‘ssh mycloud’ to get a command prompt (this is easier than using the console).
The next step is to install Cloud Foundry:
export CF_RELEASE_BRANCH=release-candidate
bash < <(curl -s -k -B https://raw.github.com/yudai/cf_nise_installer/master/local/bootstrap.sh)
This will prompt for your password (a couple times!) and takes a while (a couple hours). When it finishes it will show something like the following:
Done!
RESTART your server!
CF target: 'cf target api.172.16.217.185.xip.io'
CF login : 'cf login --password micr0@micr0 micro@vcap.me'
The address (172.16.217.185) is the IP address for my server; yours will almost certainly be different. Follow the instructions and restart your server. Once the server is started you need to start Cloud Foundry:
cd ~/cf_nise_installer; ./local/start_processes.sh
Within a minute it should list a number of processes as running. Unfortunately, once only a few processes are running the command returns to the shell prompt, implying that all is well. In my experience, not everything has started yet, so I typically repeat the summary command until I see all fourteen processes:
sudo /var/vcap/bosh/bin/monit summary
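Rather than re-running it by hand, you can let watch repeat it (a small convenience; run sudo -v first so sudo does not prompt inside watch):

# re-run the summary every 5 seconds until all processes show 'running' (Ctrl-C to quit)
sudo -v && watch -n 5 'sudo /var/vcap/bosh/bin/monit summary'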
From your client machine you should navigate to a directory with a sample application (like we created here) and set the target, login, do some setup, and push the application:
cf target api.172.16.217.185.xip.io
cf login --password micr0@micr0 micro@vcap.me
# okay to ignore CFoundry::InvalidRelation error in next command
# (see https://github.com/cloudfoundry/cf/issues/9)
cf create-space development
cf target --space development
cf map-domain mycloud.local
cf push
At this point you can accept the defaults (the subdomain is ‘myapp’ and the domain is ‘mycloud.local’) and when the staging is finished you should see 2 applications running. On your client you can navigate to http://myapp.mycloud.local/ and test the application. From your client shell you can try out various commands listed at http://docs.cloudfoundry.com/docs/using/managing-apps/cf/.