Categories
Development

Creating system docs in source control

Wikis are a great technology. Well, for some things. For documenting systems I am not convinced. Some problems:

  • All the old stuff that piles up. Nobody ever deletes anything.
  • Versions. If you have maintenance versions as well as active development etc. how do you update, merge and maintain several versions?
  • Historical snapshots. What was the state of the entire documentation at the time of release?

Is it possible to solve this with a wiki? Yes. Does Atlassian do it? Yes. Does it work for us? Nope. Maybe we lack discipline, but it’s not working. Our attempts at fixing this have only seemed like band-aids or placebos for the real problem. I want my documentation released, versioned, branched and merged with my code!

I want to put it in my source control (ah, I dream of Git, but I have SVN). When I discovered Flatdoc I was quite excited. It is a JavaScript-based renderer of Markdown, so you don’t need a server serving it up, or extra compile steps before viewing. Just write your Markdown and commit. That’s it. Jekyll is nice and all that, but it involves a bit more tooling and different ways of doing things than I would like to introduce.

I was already writing some stuff in my README files as Markdown, so I decided to give it a spin. And it works quite nicely. 🙂

I am hoping I will be able to replace the system docs in the wiki with it, but that will take more testing. Here are some short notes on making it work in Subversion.

Making it local

Flatdoc doesn’t really need installing, especially if you run it with a project on GitHub. But because we have network zones, and I wanted to host it directly off our SVN server (over HTTP), I downloaded everything:

wget https://github.com/rstacruz/flatdoc/raw/gh-pages/templates/template.html
mv template.html README.html
mkdir flatdoc && cd flatdoc
wget http://rstacruz.github.io/flatdoc/v/0.8.0/legacy.js
wget http://rstacruz.github.io/flatdoc/v/0.8.0/flatdoc.js
wget http://rstacruz.github.io/flatdoc/v/0.8.0/theme-white/style.css
wget http://rstacruz.github.io/flatdoc/v/0.8.0/theme-white/script.js
cd ..

So now you have a README.html and a subdirectory called flatdoc/ with all the scripts and styles. Note that the page also needs jQuery: put a local copy of jquery.min.js into flatdoc/ as well (the stock template loads it from a CDN).

Enabling Subversion hosting

To enable the Subversion server to serve the files you need to set the MIME types. But first, if you have not already, add the files to Subversion:

svn add README.html flatdoc
svn commit -m "Installed Flatdoc"

Then set the MIME types:

svn propset svn:mime-type text/html README.html
svn propset svn:mime-type text/javascript flatdoc/*.js
svn propset svn:mime-type text/css flatdoc/*.css
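To avoid setting these properties by hand on every new file, Subversion can also apply them automatically when files are added, via auto-props. A sketch of the relevant bits of ~/.subversion/config (note this is client-side configuration, so every committer needs it):

```ini
[miscellany]
enable-auto-props = yes

[auto-props]
*.html = svn:mime-type=text/html
*.js = svn:mime-type=text/javascript
*.css = svn:mime-type=text/css
```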

Hooking in your Markdown file

Edit the template file you downloaded (now renamed to README.html).

Change the links to JavaScript and CSS to be:

<!-- Flatdoc -->
<script src="./flatdoc/jquery.min.js" type="text/javascript"></script>
<script src="./flatdoc/legacy.js" type="text/javascript"></script>
<script src="./flatdoc/flatdoc.js" type="text/javascript"></script>

<!-- Flatdoc theme -->
<link href="./flatdoc/style.css" rel="stylesheet" type="text/css"/>
<script src="./flatdoc/script.js" type="text/javascript"></script>

Change the Flatdoc JavaScript (inside README.html) to point to the Markdown file you want to display. It should look something like this:

<script>
    Flatdoc.run({
        fetcher: Flatdoc.file('README.md')
    });
</script>

That should be it. 🙂 Commit to Subversion and access through HTTP.

Note about testing and local rendering

Because of security restrictions in your browser, local testing will not work: you will get a cross-origin error when the page tries to read the Markdown file from disk. You can either disable that security check in your browser, or use some kind of local Markdown preview. The Markdown preview plugin for Sublime Text works pretty well. 🙂
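Another workaround is to serve the files over HTTP locally, which sidesteps the file:// origin restriction. A minimal sketch, assuming Python is installed (any static file server will do):

```shell
# Serve the current directory over HTTP (stop with Ctrl-C),
# then browse to http://localhost:8000/README.html
python3 -m http.server 8000
```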

Categories
Personal

Juju in the cloud with charms

I’m doing something of a tour of the in-house PaaS world at the moment, and in my quest I stumbled upon Juju, which at first seemed like a PaaS solution to me. It is not. 🙂 I just found the concept interesting, so I’ll do a short write-up.

It can resemble a PaaS in many ways, but it is also different in certain areas you would expect from a PaaS. I’ll get back to that.

I have only done some basic testing of Juju, so let me know if something is off. This is just an initial summary of my thoughts.

The Platform

Juju is a tool from Ubuntu for automating provisioning and deployment tasks. It has been around for a couple of years, and they just recently launched the Juju Charm store. Which is awesome. It is a flexible solution, but also quite low level. You can check out this post for more details around the launch and thoughts about usage.

Main concepts

A Juju charm is basically what you deploy in Juju. You can find a list of pre-existing charms at http://jujucharms.com. A charm is just a package of files where some names have special meaning (comparable to DEB or RPM in some ways): metadata.yaml is the descriptor for the package, and hooks are specially named scripts that are triggered at different points in the life cycle. The usual ones are install, start and stop. The scripts can be written in most “executable” languages on Linux, meaning Bash, Python etc.
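As a sketch, the descriptor of a minimal charm might look like this (the charm name and relation names here are hypothetical; check the charm store for real examples):

```yaml
# metadata.yaml -- the charm descriptor
name: my-webapp
summary: Example web application charm
description: Deploys a small web application.
provides:
  website:        # other charms (e.g. a proxy) can relate to this
    interface: http
requires:
  database:       # satisfied by e.g. the MySQL charm
    interface: mysql
# alongside this, hooks/ contains executables named install, start, stop, etc.
```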

All features of Juju are deployed as charms. So whether you would like to provide MySQL to your applications or deploy your own Java application, it is built and deployed as a charm. By default Juju orchestrates the platform beneath (MAAS or IaaS) so that each charm is deployed to a separate machine/virtual machine. That is only half right: you can also add extra units to a deployed charm, each of which becomes another machine. This is how you scale in Juju.

When you need to use MySQL from an application you tie a relation between them. The same goes for putting HAProxy in front of your application. When you tie a relation you can use hooks on either side to perform operations: when you add a relation to HAProxy, the proxy is updated with the address of the web application to serve. The same relation hooks will be triggered if you add another unit to the web application (scale out).
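That workflow can be sketched with the Juju CLI (haproxy is a charm-store charm; my-webapp is a hypothetical charm of your own):

```shell
juju deploy haproxy                  # the proxy gets its own machine
juju add-relation my-webapp haproxy  # relation hooks push the app's address into the proxy
juju add-unit my-webapp              # scale out; the same hooks fire again
```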

If you deploy MySQL with three units, that equals three VMs on Amazon. This makes it a bit slower at scaling out than solutions that utilize free resources on (bigger) already existing VMs.

Getting started

It is quite fast to get started on AWS. It is just an easy install via gems, and a bootstrap command. It bootstraps a single node for control and creates an S3 bucket for state. You can find a quick and easy getting-started guide here. Nodes are not provisioned before you deploy each charm. This makes deploying the actual application a bit slower, but the boot-up time (plus whatever you need to install for that application) for a new VM isn’t too bad actually.

The whole process of deploying a charm is a little too quiet, and I can’t find a way to make it more verbose. You can peek into the platform with juju debug-log, but it is still hard to debug failing relations.

The concepts seem flexible, but basic. Juju doesn’t monitor and keep your Java processes alive like most PaaS solutions would. This goes both for a node that goes dead and for a process that dies. I do think Juju could benefit from some more monitoring and recovery, even though it’s not meant to be a PaaS.

My thoughts

I find the concept of Juju quite intriguing, and it is very flexible. You can use it to deploy your Java application, or you can use it to deploy an OpenStack IaaS. The relation hooks are a good concept, and the ease of getting started is among the best I have experienced for this kind of product.

The number of charms available is limited, and the quality varies. I tried some charms, and while they seemed to deploy correctly I could not get everything really working (some did work). I tested WordPress, MySQL, HAProxy, Gitlab, OpenStack and Jenkins. So: good concepts, but so far a bit lacking in execution. It really is easy to deploy all those services, so if the guides/installation/implementation worked it would probably rock.

The relation hooks are quite a good way to handle dependencies and relationships, but I really would like Juju to install and configure MySQL as well when it is actually required for WordPress. Right now this is three separate commands. This is probably because a dependency can be satisfied by several products (databases in this case), but for ease of use it should be possible to specify a default that is set up automatically if nothing else exists.
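The three separate commands look roughly like this (charm names as found in the charm store):

```shell
juju deploy wordpress
juju deploy mysql
juju add-relation wordpress mysql
```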

You can also combine Juju with something like Puppet easily, but Puppet will probably have a much smaller part to play, as Juju essentially focuses on small use-and-throw VMs. There is an addition called Jitsu that lets you deploy multiple charms on fewer machines, but I actually think that would move Juju into PaaS territory, and one quite lacking in features (guaranteed isolation, dynamic re-allocation and monitoring).

It does not handle processes and dynamic re-allocation, which is an important part of what we look for in a platform, so as a general-purpose platform for running Java apps I would not choose it. Most PaaS solutions seem like a better option. I’ll get back to some of the in-house PaaS alternatives in another post. To be fair: Juju doesn’t seem to target that kind of deployment either; it’s just not what I am looking for right now.

It is probably a tool I will keep using, though. As I am writing this I am spinning up a Puppet master and a couple of Puppet slaves on AWS for some testing. It’s quick and easy, and I hope the number of charms increases moving forward. I’ll keep checking back on this one as a tool in my toolbox.

Categories
Operations

Setting up CloudFoundry v1 on AWS

I have been looking into different in-house PaaS solutions lately. I’ll leave the reasons for choosing PaaS and the wider evaluation to a later post, and just quickly fill in a little background in between. So this is my experience setting up CloudFoundry on AWS. This is v1 of CloudFoundry.

Note: After writing up most of this I also discovered this setup: http://cloudfoundry.github.io/docs/running/deploying-cf/ec2/ . It seems to me that it is for the v2 setup. You might want to check that.

Update: While this works for the basic setup, I have experienced some real issues with getting Postgres running as a service. So as I say at the end: this is a good way to get something up and running to experiment with, but don’t expect it to be ready for production. You will probably need to fork the Git repo and fix some issues for that to work. I am moving on to v2 (supposed to be released in April), hoping Dr. Nic’s superb tooling will be available there soon.

The parts

CloudFoundry is the open source PaaS backed by VMware. It uses BOSH as a sort of release management and packaging system. You can read a little more about it here. Chef recipes to set up CF without BOSH are also available, but I have not tested those.

The resources

The inertia

The reason I have had some problems getting this to work is that CloudFoundry is currently in the middle of a big refactoring for v2. That means there are many moving parts, and the reason I have to take certain steps is that I want to deploy the stable v1 version.

The considerations

  • Some parts do not support a region other than the default. I will describe how to handle this, but when choosing AWS regions, just going with the default will keep things easier.
  • You need a wildcard DNS setup. You will need control of a DNS record, and if you do this early in the process you won’t have to wait for DNS changes to propagate.

The steps

Disclaimer: Parts of this are just a duplication of what is available in the resources above. I include them here to make sure the guide can be followed easily. They might change outside this document.

Set up DNS

To enable communication with your CloudFoundry installation you will have to configure DNS with a wildcard. You can provision the IP from Amazon automatically with the script later (it will prompt you), but doing this now will save some time later, as DNS changes take a while to propagate.

  • Enter AWS EC2 console
  • Make sure you have chosen the region you will be deploying to
  • Click Elastic IPs in the menu to the left
  • Click Allocate new address

Then log in to your DNS registrar and map a wildcard subdomain to this address. It would look something like this: *.cloudfoundry.mydomain.com 232.234.423.64 (the IP here is of course the one you allocated in AWS under Elastic IPs).
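In zone-file terms the wildcard record would look roughly like this (using the same placeholder IP as above; the TTL is arbitrary):

```
*.cloudfoundry.mydomain.com.  300  IN  A  232.234.423.64
```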

Install the gems needed

gem install bosh-bootstrap

Bootstrap BOSH and Micro BOSH

bosh-bootstrap deploy && bosh-bootstrap ssh

Answer the questions and wait. 🙂

Download CloudFoundry and package a release

After that is done you will be logged in to the BOSH server at the prompt. Take the command line below, change the necessary settings and run it.

Some setup:

git config --global user.email "me@gmail.com"
git config --global user.name "My Name"
sudo gem install bosh-cloudfoundry
export TMPDIR=/var/vcap/store/tmp

Now run the preparation:

bosh cf prepare system production --core-ip 232.234.423.64 --root-dns cloudfoundry.mydomain.com --core-server-flavor m1.large

What has just happened is that BOSH has downloaded and packaged the CloudFoundry release so that it can be deployed. Sadly, it is not the correct release. 🙂 So we delete that:

bosh delete release appcloud-master

Then we need to package the release we want to deploy:

bosh cf upload release --branch v1

Some manual corrections

Alright, things are looking good so far. Before we deploy further we need to do the following:

  • Reduce the number of nodes used for compiling the release. See this bug as to why.
  • Add region information to the system description if we are using a non-default region. See this bug for a description.

Reduce compile nodes

Edit the /var/vcap/store/systems/production/deployments/production-core.yml file. Find the compilation section and change the number of workers to 4. It should look like this:


compilation:
  workers: 4
  network: default
  reuse_compilation_vms: true
...

Add region

Find the parts containing cloud_properties (there are usually two places) in /var/vcap/store/systems/production/deployments/production-core.yml and add extra lines. It should now look like this:


...
cloud_properties:
  instance_type: m1.medium
  region: eu-west-1
  availability_zone: eu-west-1b
...

Deploy the release

Ready to deploy into a system:

bosh cf deploy

Congratulations! You should now be able to run on your very own PaaS Cloud. Take note of the username and password at the end of the output:

vmc target http://api.cloudfoundry.mydomain.com
Setting target to http://api.cloudfoundry.mydomain.com... OK
vmc register me@gmail.com --password 5a93f82 --verify 5a93f82
Your password strength is: strong
Creating user... OK
target: http://api.cloudfoundry.mydomain.com

Authenticating… OK

Fixing paths

Because of some missing features in the v1 branch there are some incorrect paths. From the BOSH controller (where you’ve done everything else) do:

bosh ssh core/0
su - vcap

The default password for vcap is c1oudc0w. Pretty insecure, as this is public knowledge, so now might be a good time to change that. 😉 Then do:

sudo mkdir -p /var/vcap/shared && sudo chown vcap:vcap /var/vcap/* -R

Using the cloud

Target your cloud from your own machine with:

vmc target http://api.cloudfoundry.mydomain.com

The username and password can be found in the output from deploy. See above for an example.

One step further

This is really optional, but lets you experiment a bit more with scaling and services. A couple of notes:

  • The bosh cf plugin rewrites the config YAML file on each run, so things you have fixed manually, like the size of the compile cluster and cloud_properties with zone info, will need to be changed again.
  • The security group setup in Amazon is lacking when you scale to more nodes. You will have to make a manual change here before proceeding.

Fixing the security group

The problem is that the security group blocks some communication between the nodes. I have opened everything between the nodes which might be too much. But it works. 🙂

  • Login to the Amazon AWS EC2 console
  • Click security groups
  • Click on the cloudfoundry-production group
  • Add a rule enabling ports 0-65535 from cloudfoundry-production (start typing that name in source and you will get auto-complete)

Handling the rewrite yaml file issue

Just after doing the first deploy I copy a version of production-core.yml to my home folder. I can then diff against any file updated by the plugin to see what has been added and removed. Through some manual labour I can transfer the settings back. For now. 🙂
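A sketch of that workflow, using the manifest path from earlier in this guide:

```shell
# Keep a pristine copy right after the first deploy
cp /var/vcap/store/systems/production/deployments/production-core.yml \
    ~/production-core.yml.orig

# Later, after the plugin has rewritten the manifest, see what changed
diff -u ~/production-core.yml.orig \
    /var/vcap/store/systems/production/deployments/production-core.yml
```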

Help, I’ve messed up!

Ooooh, you poor bastard. 😉 This really requires a bit of understanding of how BOSH works, but you could try one of the following:

bosh cloudcheck
or
bosh delete deployment production-core && bosh cf deploy

Some final notes

Warning: This is a dev setup. How to switch to a production setup I haven’t quite figured out yet, or what that actually means. There are also lots of default passwords here, so it’s pretty bad from a security point of view. Use with caution. 🙂 It is a nice starting point for experimenting, though.

CloudFoundry looks like it’s taking a good direction, so it will be very interesting to follow the further development. Thanks to an active open source community around it you can get answers to your questions, and also find tools like bosh-bootstrap and the bosh-cloudfoundry plugin used in this recipe, created by the always helpful Dr. Nic at Stark & Wayne.

I might give the v2 setup (link at the start of the document) a stab later on.

Let me know if you have any questions, and check Github issues and the mailing list. 🙂