DNS SOA

This is a pattern I’ve seen in many hosting situations, and ended up debating a little while back. It’s the one-DNS-name-for-a-machine-and-lots-of-services-on-it pattern. 😉 Among the services you typically find a Subversion repository, a Maven repository, a Maven proxy, Hudson, a wiki and maybe some other stuff you use when developing. When migrating to new hardware, the discussion about timing and responsibilities pops up. The usual plan: a big bang migration with a DNS change to point to the new machine.

This brought to mind some important things to remember when designing services. The funny thing is that this doesn’t just apply to SOA in the sense of integration across a network; these things matter even when you are designing straight-up internal object calls.

Connect to the interface or role, not the implementation. In the above scenario that means: use a DNS name for each service, not the DNS name for the machine. The same applies in Java, where you program against the interface of the service, not its implementation. Think in roles. 🙂
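
To make the Java part concrete, here is a minimal sketch (the class names are hypothetical, just for illustration): the caller only knows the role, so the implementation behind it can move or change without the caller noticing, exactly like a client that only knows a per-service DNS name.

```java
// The role the caller cares about, not a concrete implementation.
interface BuildStatusService {
    boolean isBuildGreen(String projectName);
}

// One possible implementation; could be replaced or moved at any time.
class HudsonBuildStatusService implements BuildStatusService {
    @Override
    public boolean isBuildGreen(String projectName) {
        // ... ask the Hudson instance, wherever it currently lives ...
        return true;
    }
}

// The consumer is wired to the role only.
class ReleaseGate {
    private final BuildStatusService buildStatus;

    ReleaseGate(BuildStatusService buildStatus) {
        this.buildStatus = buildStatus;
    }

    boolean mayRelease(String projectName) {
        return buildStatus.isBuildGreen(projectName);
    }
}
```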

Use business-meaningful names for services and the methods on them. Don’t call it LDAPLookUpService; it is a UserAuthenticationService and should have a method called authenticateUser(username, password), not getUser(username, password). getUser will be (mis)used for lots of different stuff that has nothing to do with authentication. In the DNS case this can be a bit trickier, since your Subversion client will of course be tied to a Subversion repository. It could be handled by calling it http://sourcecontrol/svn/, but that would of course tie several source control systems to the same machine. Still better than the original case though. :) And if you absolutely must, you could do some proxy magic with Apache.
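
As a small sketch of the naming idea (again, hypothetical code, not from any real project): the interface is named after the business role, and the LDAP part stays an implementation detail behind it.

```java
// Named for what it does in business terms, not for the technology used.
interface UserAuthenticationService {
    boolean authenticateUser(String username, String password);
}

// The fact that we happen to use LDAP is hidden from all callers.
class LdapUserAuthenticationService implements UserAuthenticationService {
    @Override
    public boolean authenticateUser(String username, String password) {
        // ... bind against the LDAP directory and verify the credentials ...
        return false;
    }
}
```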

In the above case, if each service had its own DNS name you could migrate each service separately, only changing the DNS entry for the specific service being moved. That would give you a more controlled, step-wise migration, and different downtimes for different services. The timeslot in which you can have the Subversion repo down doesn’t necessarily match the one in which you can have the wiki down.

So should this stuff really be handled by something like UDDI? Maybe. Does Subversion support UDDI? Don’t think so. 🙂

Agility in operations

It seems like Facebook is pretty agile in how it handles new features and rollouts. According to an article on the High Scalability site, they actually do major releases every week. One of the things that struck me was this:

Be Innovative, Not Safe. Fear of failure often shuts down the organizational brain and makes it hide behind excessive rules and regulations. A technology company should have a bias towards action and innovation. Release software. Don’t stifle genius. Rely on your tools and processes to recover from problems.

This isn’t a solution to problems, but it is a pretty accurate description of what I want to achieve myself. Making a release shouldn’t be difficult or scary. This means that we need tools and methods that:

  • Enable us to be reasonably certain that we don’t introduce any errors
  • Enable us to recover from a failure, because eventually we will fail

JUnit, FitNesse and Selenium are all tools that allow us to verify the behaviour of our application. They help us check that what we have done doesn’t introduce any errors. That should let us roll out quite easily, but I think in many projects people don’t trust the quality of the tests, and they fear rolling out because they don’t have a good recovery plan.
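
As a trivial example of the kind of cheap, repeatable check I mean (a JUnit 4 sketch with made-up names, not code from any real project):

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class VatCalculatorTest {

    // Tiny made-up class under test, kept inline so the sketch is self-contained.
    static class VatCalculator {
        int totalWithVat(int netAmount, int vatPercent) {
            return netAmount + (netAmount * vatPercent) / 100;
        }
    }

    @Test
    public void totalIncludesVat() {
        VatCalculator calculator = new VatCalculator();
        assertEquals(125, calculator.totalWithVat(100, 25));
    }
}
```

If a suite of tests like this runs on every build and we actually trust it, a weekly release stops being a leap of faith.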

I think we have a lot of tools available to us when it comes to writing tests; we just have to get better at using them, and eventually at improving them. Where we seem to be missing something is the part where we do good rollbacks. Maybe we don’t even need tools for that? I’d like to hear how you do it, and which tools you use or are missing.