I recently made an Amazon Machine Image (AMI) available and described how to use it (described here). In this post I describe the process of taking an existing AWS instance running Cloud Foundry Micro (described here) and preparing it for release as a public AMI. The primary challenge is creating a boot disk with Cloud Foundry already installed, but configured so that each new instance can use its own password, domain, and local IP address. We also need to remove any identifying information before shutting down the VM and taking a snapshot.
I first use the following script (named setup.sh) to install Cloud Foundry using the cf_nise_installer. Note that I set a well-known IP address, domain, and password; these will be replaced later, during the boot process. I also delete some files that might be left over from a previous install attempt. Finally, I create a list of the files that might contain data that must be modified at boot time.
```bash
#!/bin/bash
# set -x
cd ~
sudo rm -rf \
    cf_nise_installer \
    /var/vcap/data/sys/log/* \
    domain password files ipv4 \
    2>/dev/null
#
export INSTALLER_URL="https://github.com/yudai/cf_nise_installer.git"
export INSTALLER_BRANCH='master'
export CF_RELEASE_BRANCH='release-candidate'
export NISE_IP_ADDRESS='127.0.0.1'
export NISE_DOMAIN='cloud.mycloud.local'
export NISE_PASSWORD='swordfish'
bash < <(curl -s -k -B https://raw.github.com/yudai/cf_nise_installer/${INSTALLER_BRANCH:-master}/local/bootstrap.sh)
#
list="$(sudo find /vol-a/var/vcap -type f \( -iname '*.yml' -o -iname 'postgres_ctl' \) )"
echo $list > ~/files
```
We next create a startup script, /home/ubuntu/rc.local, and modify /etc/rc.local to call our new script. (Be careful: if you delete the authorized_keys file and fail to add it back, you can't log in to your instance!)
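The hook in /etc/rc.local can be a single line. A minimal sketch (the log path is my own choice here, and your distribution's stock /etc/rc.local may contain other lines to preserve):

```bash
#!/bin/sh -e
# /etc/rc.local (sketch): delegate to the ubuntu user's boot script,
# capturing its output so boot problems can be diagnosed later
/home/ubuntu/rc.local > /var/log/rc.local.log 2>&1
exit 0
```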
```bash
#!/bin/bash
# should be called from /etc/rc.local
echo "Starting /home/ubuntu/rc.local script to start Cloud Foundry"
#
# See if 'debug: true' is in user-data
#
X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'debug:' )
X=$( echo $X | cut -d : -f 2 | tr -d ' ' )
if [ "$X" == "true" ]; then
    set -x
fi
#
# Make sure that we have some keys (should be deleted before snapshot)
#
cd /home/ubuntu
if [ ! -f .ssh/authorized_keys ]; then
    curl http://instance-data/latest/meta-data/public-keys/0/openssh-key \
        2>/dev/null > .ssh/authorized_keys
    chown ubuntu:ubuntu .ssh/authorized_keys
    chmod 600 .ssh/authorized_keys
    echo "Updated authorized_keys"
fi
#
# See if 'autoStart: false' is in user-data; if so, done!
#
X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'autoStart:' )
X=$( echo $X | cut -d : -f 2 | tr -d ' ' )
if [ "$X" == "false" ]; then
    echo "user-data includes 'autoStart: false' so we will quit"
    exit 0
fi
#
# Each instance should have a different password (but once set it does not change!)
#
if [ -e password ]; then
    # If we have been through this code before, then use the previous password
    OLD_PASSWORD=$( < password )
    NEW_PASSWORD=$OLD_PASSWORD
else
    # This is the first time through, so we replace the default password
    OLD_PASSWORD='swordfish'
    # User may specify a 'password:' line in the user data
    X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'password:' )
    NEW_PASSWORD=$( echo $X | cut -d : -f 2 | tr -d ' ' )
    if [ "$NEW_PASSWORD" == "" ]; then
        # No user-provided one and no previously created one, so assign a random one
        # another option is $( openssl rand -base64 6 )
        NEW_PASSWORD=$( < /dev/urandom tr -dc A-Za-z0-9 | head -c8 )
    fi
    # Save new password so it can be reused next time
    echo "$NEW_PASSWORD" > password
fi
#
# See if 'domain:' is in user-data
#
X=$( curl http://instance-data/latest/user-data 2>/dev/null | grep 'domain:' )
if [ "$X" != "" ]; then
    NEW_DOMAIN=$( echo $X | cut -d : -f 2 | tr -d ' ' )
else
    # get the public-hostname
    until [ "$NEW_DOMAIN" != "" ]; do
        X=$( curl http://instance-data/latest/meta-data/ 2>/dev/null | grep public-hostname )
        if [ "$X" != "" ]; then
            X=$( curl http://instance-data/latest/meta-data/public-hostname 2>/dev/null )
            if [ "$X" != "" ]; then
                NEW_DOMAIN=$( echo $X | cut -d "." -f 1 | cut -d "-" -f 2- | sed "s/\-/\./g" )
                NEW_DOMAIN="$NEW_DOMAIN.xip.io"
            fi
        fi
        if [ "$NEW_DOMAIN" == "" ]; then
            echo "`date`: public-hostname not yet available from meta-data"
            sleep 1
        fi
    done
fi
if [ -f domain ]; then
    OLD_DOMAIN=$( cat domain )
else
    OLD_DOMAIN='cloud.mycloud.local'
fi
echo $NEW_DOMAIN > domain
#
# Each new instance will have a unique private IP address
#
NEW_IPV4=$( curl http://instance-data/latest/meta-data/local-ipv4 2>/dev/null )
if [ -f ipv4 ]; then
    OLD_IPV4=$( cat ipv4 )
else
    OLD_IPV4="127.0.0.1"
fi
echo $NEW_IPV4 > ipv4
#
# Find all the files that need to be edited
# (takes several seconds, so cache it for faster start next time)
#
if [ -f files ]; then
    list=$( cat files )
else
    list="$(find /var/vcap -type f \( -iname '*.yml' -o -iname 'postgres_ctl' \) )"
    echo $list > files
fi
for f in $list
do
    if [ "$OLD_PASSWORD" != "$NEW_PASSWORD" ]; then
        if grep -q $OLD_PASSWORD $f; then
            sed -i "s/$OLD_PASSWORD/$NEW_PASSWORD/g" $f
            echo "Updated password in $f"
        fi
    fi
    if grep -q $OLD_DOMAIN $f; then
        sed -i "s/$OLD_DOMAIN/$NEW_DOMAIN/g" $f
        echo "Updated domain in $f"
    fi
    if grep -q "$OLD_IPV4" $f; then
        sed -i "s/$OLD_IPV4/$NEW_IPV4/g" $f
        echo "Updated IP in $f"
    fi
done
#
# start Cloud Foundry
(cd /home/ubuntu/cf_nise_installer; ./local/start_processes.sh)
```
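The trickiest pipeline in the script above is deriving a xip.io wildcard domain from the EC2 public hostname. A standalone sketch of just that step (the hostname below is a made-up example, not a real instance):

```bash
#!/bin/bash
# Derive a xip.io domain from an EC2-style public hostname.
# Hypothetical example hostname:
HOSTNAME="ec2-54-23-1-10.us-west-2.compute.amazonaws.com"

# Keep the first dotted component ("ec2-54-23-1-10"), drop the leading
# "ec2-" prefix, and turn the remaining dashes back into dots
IP=$( echo $HOSTNAME | cut -d "." -f 1 | cut -d "-" -f 2- | sed "s/\-/\./g" )

# xip.io resolves <anything>.<ip>.xip.io to <ip>, giving a free wildcard domain
NEW_DOMAIN="$IP.xip.io"
echo $NEW_DOMAIN
```

For the example hostname this prints `54.23.1.10.xip.io`, so any app route such as myapp.54.23.1.10.xip.io resolves to the instance's public IP without any DNS setup.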
Finally, I create a cleanup.sh script to remove identifying information and sanitize the system before shutdown. Note that once authorized_keys is removed from the .ssh directory, you cannot log in again unless the boot process installs a new authorized_keys file.
```bash
#!/bin/bash
#
sudo rm -f /root/.ssh/* /home/*/.ssh/* /var/vcap/monit/monit.log
(cd /var/log; sudo rm -f *.log dmesg* debug messages syslog)
rm -f ~/.viminfo ~/.bash_history
echo '' | sudo tee /var/log/lastlog
history -c
#
sudo shutdown now
```
After running this script I use the AWS tools to stop the instance; because the script shuts the system down immediately, no further commands are typed and the command history stays empty. At this point I can use the AWS tools to create an AMI. When the AMI boots, it executes /etc/rc.local, which calls my new script, /home/ubuntu/rc.local. That script sets the local IPv4 address, domain, and password (using configured values if provided in the user-data). Once the configuration files have been updated, it starts Cloud Foundry. Each new instance thus gets its own IP address, domain, and password, making it secure and unique.
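With today's AWS CLI, the stop-and-image steps look roughly like this (the instance ID and image name are placeholders; at the time of the original post the older EC2 API tools offered equivalent commands):

```bash
# Stop the instance so the boot volume is in a consistent state
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Create the AMI from the stopped instance
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "cloud-foundry-micro" \
    --description "Cloud Foundry Micro public AMI"
```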
If there is something wrong with the boot volume (such as missing authorized_keys, so you can't log in), then you need to start another EC2 instance (a micro is fine), attach the volume to it, do the fix-up, and then detach it.
```bash
# mount a new volume (in case some surgery is needed)
#
sudo mkdir -p -m 000 /vol-a
sudo mount /dev/sdf /vol-a
#
# ... do whatever fix-up is needed and unmount the volume
#
sudo umount -d /vol-a
sudo rmdir /vol-a
```
I have used /etc/rc.local to hook into the boot process. An alternative would be to use crontab with the '@reboot' option.
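If you prefer the crontab route, a single entry would do the same job. A sketch (added with `crontab -e` as the ubuntu user; the log path is an arbitrary choice of mine):

```
# m h dom mon dow  command
@reboot /home/ubuntu/rc.local > /home/ubuntu/rc.local.log 2>&1
```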
If you have an instance-store volume (a disk that exists only while the instance is running), then you might want to set up some swap space on it. The following script creates a 4 GB file and enables it as swap.
```bash
# http://serverfault.com/questions/218750/why-dont-ec2-ubuntu-images-have-swap
sudo dd if=/dev/zero of=/mnt/swapfile bs=1M count=4096 &&
sudo chmod 600 /mnt/swapfile &&
sudo mkswap /mnt/swapfile &&
echo /mnt/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab &&
sudo swapon -a
cat /proc/swaps
```
That summarizes how I created a Cloud Foundry Micro public AMI.