
Speeding up development cycles with Docker

When I started doing Puppet development I was looking for a nice way to do testing. Vagrant was the best option at the time, but then Docker surfaced and I figured it could speed things up considerably. And it does. 🙂

If you would like to go straight to the code, you can find my setup on GitHub. It shouldn't take much work to transfer it to your own project, or to something similar that doesn't use Puppet. Go to GitHub for the code: https://github.com/anderssv/puppet-docker .

Let me know what you think. The README should help you get started, but tell me if anything can be improved. 🙂

Docker is some kick-ass technology. It can be used for a wide variety of tasks. It gives complete isolation between containers when it comes to processes and software, while NOT reserving large amounts of disk, memory or CPU. And it is lightning fast (really, starting a new container takes less than a second)! Look into it if you haven't.

Doing Puppet means that you need a VM, but you also need to reset it every once in a while to simulate building a machine from the bottom up. By combining the mechanisms in Docker I was able to minimise the work done on each reset, except for the things Puppet does of course. So I have (see the sketch after the list):

  • Created a Docker image (not uploaded to a repo, created on the fly by the scripts) that can be used for launching Puppet scripts. Having everything like the Puppet RPMs installed, updated and available saves a lot of time on each run.
  • Launched an SSH daemon. This serves two purposes: keeping the container running and providing an interface for running the puppet command.
  • Killed the container to reset, so you get a “fresh” container to run on.
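
A minimal sketch of that loop, assuming an image called puppet-base, a container called puppet-test and an example manifest path (all made up for illustration; the real scripts are in the repo):

# Build the base image with Puppet preinstalled
docker build -t puppet-base .

# Start a container that only runs sshd, which keeps it alive
docker run -d --name puppet-test -p 2222:22 puppet-base /usr/sbin/sshd -D

# Run your manifests over the SSH interface
ssh -p 2222 root@localhost puppet apply /etc/puppet/manifests/site.pp

# Reset: kill and remove the container, then start a fresh one
docker kill puppet-test && docker rm puppet-test
docker run -d --name puppet-test -p 2222:22 puppet-base /usr/sbin/sshd -D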

Doing this enables you to “boot” a fresh image in no time, and get Puppet scripts running as fast as possible. It is my new favorite toy for automating and virtualising tests. 🙂 Let me know if there's anything I can make clearer.


Creating system docs in source control

Wikis are a great technology. Well, for some things. For documenting systems I am not convinced. Some problems:

  • All the old stuff that piles up. Nobody ever deletes anything.
  • Versions. If you have maintenance versions as well as active development etc., how do you update, merge and maintain several versions?
  • Historical snapshots. What was the state of the entire documentation at the time of release?

Is it possible to solve with a wiki? Yes. Does Atlassian do it? Yes. Does it work for us? Nope. Maybe we lack discipline, but it's not working. Our attempts at fixing this have only seemed like band-aids or placebos for the real problem. I want my documentation released, versioned, branched and merged with my code!

I want to put it in my source control (ah, I dream of Git, but I have SVN). When I discovered Flatdoc I was quite excited. It is a JavaScript-based renderer for Markdown, so you don't need a server serving it up, or extra compile steps before viewing. Just write your Markdown and commit. That's it. Jekyll is nice and all that, but it involves a bit more tooling and different ways of doing things than I would like to introduce.

I was already writing some stuff in my README files as Markdown, so I decided to give it a spin. And it works quite nicely. 🙂

I am hoping to replace the system docs in the wiki with it, but that will take more testing. Here are some short notes on making it work in Subversion.

Making it local

Flatdoc doesn't really need installing, especially if you run it with a project on GitHub. But because we have network zones, and I wanted to host it directly off our SVN server (over HTTP), I downloaded everything:

wget https://github.com/rstacruz/flatdoc/raw/gh-pages/templates/template.html
mv template.html README.html
mkdir flatdoc && cd flatdoc
wget http://rstacruz.github.io/flatdoc/v/0.8.0/legacy.js
wget http://rstacruz.github.io/flatdoc/v/0.8.0/flatdoc.js
wget http://rstacruz.github.io/flatdoc/v/0.8.0/theme-white/style.css
wget http://rstacruz.github.io/flatdoc/v/0.8.0/theme-white/script.js
# Flatdoc also needs jQuery (referenced from README.html below); any 1.x build should do
wget -O jquery.min.js http://code.jquery.com/jquery-1.9.1.min.js
cd ..

So now you have a README.html and a subdirectory called flatdoc/ with all the scripts and styles.
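
The resulting layout, as produced by the commands above:

README.html
flatdoc/
    jquery.min.js
    legacy.js
    flatdoc.js
    style.css
    script.js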

Enabling Subversion hosting

To get the Subversion server to serve the files you need to set the MIME types. But first, if you haven't already, add the files to Subversion:

svn add README.html flatdoc
svn commit -m "Installed Flatdoc"

Then set the mime types:

svn propset svn:mime-type text/html README.html
svn propset svn:mime-type text/javascript flatdoc/*.js
svn propset svn:mime-type text/css flatdoc/*.css
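
You can verify that the properties took effect with svn propget:

svn propget svn:mime-type README.html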

Hooking in your Markdown file

Edit the template file you downloaded into README.html.

Change the links to JavaScript and CSS to be:

<!-- Flatdoc -->
<script src="./flatdoc/jquery.min.js" type="text/javascript"></script>
<script src='./flatdoc/legacy.js' type="text/javascript"></script>
<script src='./flatdoc/flatdoc.js' type="text/javascript"></script>

<!-- Flatdoc theme -->
<link href='./flatdoc/style.css' rel='stylesheet' type="text/css"/>
<script src='./flatdoc/script.js' type="text/javascript"></script>

Change the Flatdoc JavaScript (inside README.html) to point to the Markdown file you want to display. It should look something like this:

<script>
    Flatdoc.run({
        fetcher: Flatdoc.file('README.md')
    });
</script>

That should be it. 🙂 Commit to Subversion and access it over HTTP.

Note about testing and local rendering

Because of security restrictions in your browser, local testing will not work: you will get a same-origin error when trying to read the Markdown file. You can either disable that security check in your browser, or use some kind of local Markdown preview. The Markdown preview for Sublime Text works pretty well. 🙂
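
A third option, assuming Python 2 is available, is to serve the directory over a local HTTP server, which sidesteps the file:// restriction:

cd /path/to/your/docs
python -m SimpleHTTPServer 8000
# then open http://localhost:8000/README.html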


Setting up CloudFoundry v1 on AWS

I have been looking into different in-house PaaS solutions lately. I'll leave the reasons for choosing PaaS, and the wider evaluation, for a later post; I'll just give a little bit of background along the way. So this is my experience setting up CloudFoundry on AWS. This is v1 of CloudFoundry.

Note: After writing up most of this I discovered this setup: http://cloudfoundry.github.io/docs/running/deploying-cf/ec2/ . It seems to me that it is for the v2 setup. You might want to check that out.

Update: While this works for the basic setup, I have had some real issues getting Postgres running as a service. So, as I say at the end: this is a good way to get something up and running to experiment with, but don't expect it to be ready for production. You will probably need to fork the Git repo and fix some issues for that to work. I am moving on to v2 (supposed to be released in April), hoping Dr. Nic's superb tooling will be available there soon.

The parts

CloudFoundry is the open source PaaS backed by VMware. It uses BOSH as a sort of release management and packaging system; you can read a little more about it here. Chef recipes to set up CF without BOSH are also available, but I have not tested those.

The resources

The inertia

The reason I have had some problems getting this to work is that CloudFoundry is currently in the middle of a big refactoring for v2. That means there are many moving parts, and several of the steps below are only needed because I want to deploy the stable v1 version.

The considerations

  • Some parts do not support a region other than the default. I will describe how to handle this, but when choosing AWS regions, just going with the default will keep things easier.
  • You need a DNS wildcard set up. You will need control of a DNS record, and if you do this early in the process you won't have to wait for DNS changes to propagate.

The steps

Disclaimer: Parts of this are just a duplication of what is available in the resources above. I include them here to make sure the process can be followed easily, but they might change outside this document.

Set up DNS

To enable communication with your CloudFoundry deployment you will have to configure DNS with a wildcard. You can provision the IP from Amazon automatically with the script later (it will prompt you), but doing it now will save some time, as DNS changes take a while to propagate.

  • Enter AWS EC2 console
  • Make sure you have chosen the region you will be deploying to
  • Click Elastic IPs in the menu to the left
  • Click Allocate new address

Then log in to your DNS registrar and map a wildcard subdomain to this address. It would look something like this: *.cloudfoundry.mydomain.com 203.0.113.64 (the IP here is of course the one you allocated in AWS under Elastic IPs).
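
Once the record has propagated, any name under the wildcard should resolve to the Elastic IP; a quick check with dig (the subdomain here is made up):

dig +short anything.cloudfoundry.mydomain.com
# should print the Elastic IP, e.g. 203.0.113.64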

Install the gems needed

gem install bosh-bootstrap

Bootstrap BOSH and Micro BOSH

bosh-bootstrap deploy && bosh-bootstrap ssh

Answer the questions and wait. 🙂

Download CloudFoundry and package a release

After that is done you will be logged in to the BOSH server at a prompt. Take the command lines below, change the necessary settings and run them.

Some setup:
git config --global user.email "me@gmail.com" && git config --global user.name "My Name" && sudo gem install bosh-cloudfoundry && export TMPDIR=/var/vcap/store/tmp

Now run the preparation:
bosh cf prepare system production --core-ip 203.0.113.64 --root-dns cloudfoundry.mydomain.com --core-server-flavor m1.large

What has just happened is that BOSH has downloaded and packaged the CloudFoundry release so that it can be deployed. Sadly, it is not the correct release. 🙂 So we delete that:

bosh delete release appcloud-master

Then we need to package the release we want to deploy:

bosh cf upload release --branch v1

Some manual corrections

Alright, things are looking good so far. Before we deploy further we need to do the following:

  • Reduce the number of nodes used for compiling the release. See this bug as to why.
  • Add region information to the system description if we are using a non-default region. See this bug for a description.

Reduce compile nodes

Edit /var/vcap/store/systems/production/deployments/production-core.yml. Find the compilation section and change the number of workers to 4. It should look like this:


compilation:
  workers: 4
  network: default
  reuse_compilation_vms: true
  ...

Add region

Find the parts containing cloud_properties (there are usually two places) in /var/vcap/store/systems/production/deployments/production-core.yml and add the region and availability zone lines. It should now look like this:


...
cloud_properties:
  instance_type: m1.medium
  region: eu-west-1
  availability_zone: eu-west-1b
...

Deploy the release

Ready to deploy into a system:

bosh cf deploy

Congratulations! You should now be able to run on your very own PaaS Cloud. Take note of the username and password at the end of the output:

vmc target http://api.cloudfoundry.mydomain.com
Setting target to http://api.cloudfoundry.mydomain.com... OK
vmc register me@gmail.com --password 5a93f82 --verify 5a93f82
Your password strength is: strong
Creating user... OK
target: http://api.cloudfoundry.mydomain.com

Authenticating… OK

Fixing paths

Because of a missing feature in the v1 branch there are some incorrect paths. From the BOSH controller (where you've done everything else) do:

bosh ssh core/0
su - vcap

The default password for vcap is c1oudc0w. Pretty insecure, as this is public knowledge, so now might be a good time to change it. 😉 Then do:

sudo mkdir -p /var/vcap/shared && sudo chown vcap:vcap /var/vcap/* -R
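
If you do want to change the vcap password right away, the standard passwd command works while you are logged in as vcap (note that a redeploy may reset it, so treat this as a stopgap):

passwd   # prompts for the current password (c1oudc0w) and a new one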

Using the cloud

To start using the cloud from your own machine, target it with:

vmc target http://api.cloudfoundry.mydomain.com

The username and password can be found in the output from the deploy. See above for an example.
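
To sanity-check the installation you can log in and list applications (the credentials are the ones from the deploy output):

vmc login   # prompts for the e-mail and password
vmc apps    # an empty list means you are talking to your own cloud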

One step further

This is really optional, but lets you experiment a bit more with scaling and services. A couple of notes:

  • The bosh cf plugin rewrites the config YAML file on each run, so things you have fixed manually, like the size of the compile cluster and the cloud_properties with zone info, will need to be changed again.
  • The security group setup in Amazon is lacking when you scale to more nodes. You will have to make a manual change here before proceeding.

Fixing the security group

The problem is that the security group blocks some communication between the nodes. I have opened everything between the nodes, which might be too much. But it works. 🙂 In the console (a CLI equivalent follows after the list):

  • Log in to the Amazon AWS EC2 console
  • Click security groups
  • Click on the cloudfoundry-production group
  • Add a rule enabling ports 0-65535 from cloudfoundry-production (start typing that name in source and you will get auto-complete)
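
If you prefer the command line over the console, something like this should achieve the same, assuming the AWS CLI is installed and configured for the right region (group name as above):

aws ec2 authorize-security-group-ingress --group-name cloudfoundry-production \
    --protocol tcp --port 0-65535 --source-group cloudfoundry-production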

Handling the rewritten YAML file issue

Just after doing the first deploy I copy a version of production-core.yml to my home folder. I can then diff it against any file rewritten by the plugin to see what has been added and removed, and through some manual labour transfer the settings back. For now. 🙂
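
In shell terms the routine looks roughly like this (paths as used earlier in this post):

# Keep a reference copy right after the first deploy
cp /var/vcap/store/systems/production/deployments/production-core.yml ~/production-core.yml.orig

# After the plugin has rewritten the file, see what changed
diff ~/production-core.yml.orig /var/vcap/store/systems/production/deployments/production-core.yml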

Help, I've messed up!

Ooooh, you poor bastard. 😉 This requires a bit of understanding of how BOSH works, but you could try one of the following:

bosh cloudcheck

or:

bosh delete deployment production-core && bosh cf deploy

Some final notes

Warning: This is a dev setup. I haven't quite figured out how to switch to a production setup, or what that actually means. There are also lots of default passwords here, so it's pretty bad from a security point of view. Use with caution. 🙂 It is a nice starting point for experimenting, though.

CloudFoundry looks like it's heading in a good direction, so it will be very interesting to follow its further development. Thanks to the active open source community around it you can get answers to your questions, and also find tools like bosh-bootstrap and the bosh-cf plugin used in this recipe, created by the always helpful Dr. Nic at Stark & Wayne.

I might give the v2 setup (link at the start of the document) a stab later on.

Let me know if you have any questions, and check the GitHub issues and the mailing list. 🙂