I have been working with ORM technologies ever since I started my career back in the summer of 2004: first TopLink, and then Hibernate. I like ORM technology, but I also see the problems that come with it. These are incredibly complex products, and not having people who really understand what is happening will get you into serious trouble. Usually the problems surface as poor performance; other times you just get strange results. Even if you are lucky enough to have experts at hand, the general level of understanding amongst your developers remains a problem.
I would love to have an easier alternative, but I don’t really see anything on the horizon that replaces it without removing too many of the benefits. But that’s really something for a different post.
Some basics
So if you're doing ORM, and especially Hibernate, there are some basic rules for keeping your options open when it comes to performance:
- Don’t do explicit flushing
- Don’t disable lazy loading in your mappings
- Don’t use Session.clear()
There are valid reasons for doing these things, but usually they are done because someone does not quite understand how Hibernate really works. And what's even worse, they limit your choices later on when tuning and changing the solution. So if someone is experiencing problems and is considering one of these things, make sure they talk to your main Hibernate guy first. I can guarantee you there is a lot of pain involved in removing these later on.
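To make the second point concrete, here is a minimal sketch of what leaving lazy loading alone looks like with annotation mappings. The Order and Item entities are just hypothetical examples for this post:

```java
import java.util.HashSet;
import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import javax.persistence.Table;

@Entity
@Table(name = "orders")
public class Order {

    @Id
    @GeneratedValue
    private Long id;

    // Collections are lazy by default. Resist flipping this to FetchType.EAGER
    // in the mapping just to make a LazyInitializationException go away; that
    // decision belongs with the individual query if you want to keep your
    // tuning options open later.
    @OneToMany(mappedBy = "order", fetch = FetchType.LAZY)
    private Set<Item> items = new HashSet<Item>();
}

@Entity
class Item {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne(fetch = FetchType.LAZY)
    private Order order;
}
```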
So if you've been a good boy or girl and avoided these pitfalls, you should be set to do some performance optimisation in the areas reported as slow. Yeah, reported as slow. Don't do any funky stuff before you know it is an area that needs improvement. Special tuning will limit your options later on, so only do it where it's really necessary.
Diagnosing the problem and finding the culprit
YourKit Java Profiler is my absolute favourite for diagnosing problems related to Hibernate. It enables you to see all the queries executed and trace them back into the code to figure out why they are run.
Trond Isaksen from Zenior also gave a talk at Capgemini last week about using stacktraces in core dumps for analyzing problems. That might actually be your only option in production, because introducing YourKit there can cause side effects.
The amount of information can become quite overwhelming, but learn these tools and you will have trusty companions for diagnosing your database performance in Java for a long time to come.
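If a profiler is not an option, Hibernate's own statistics API can at least give you rough query counts and cache hit rates. A minimal sketch, assuming you have a SessionFactory at hand and have turned statistics on (they are off by default, via hibernate.generate_statistics or setStatisticsEnabled):

```java
import org.hibernate.SessionFactory;
import org.hibernate.stat.Statistics;

public class StatisticsDump {

    // Prints a few headline numbers from Hibernate's built-in statistics.
    public static void dump(SessionFactory sessionFactory) {
        Statistics stats = sessionFactory.getStatistics();
        System.out.println("Queries executed:   " + stats.getQueryExecutionCount());
        System.out.println("Slowest query (ms): " + stats.getQueryExecutionMaxTime());
        System.out.println("2nd level hits:     " + stats.getSecondLevelCacheHitCount());
        System.out.println("2nd level misses:   " + stats.getSecondLevelCacheMissCount());
    }
}
```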
Hibernate performance
The way Hibernate basically delivers performance is through caching and through controlling how much data is fetched at a time. This covers most of the cases where you use Hibernate. If not, you might just need some good old SQL.
Understand first and second level caching, figure out how you can tweak relations to change fetching behaviour, and you have a good tool set for tuning Hibernate.
Fixing issues
Once you have found the reason for your performance problem, and if it is database or Hibernate related, you basically have the following options. I try to follow this list from top to bottom; the options further down impact your code more and can also complicate your deployment and infrastructure.
- Check that your database indices are tuned. To have effective fetches your indices must match your actual queries, so make sure they are correct.
- Consider how you do key generation. If inserts are slow, it might be because Hibernate calls the database for a key for each and every row it intends to insert. Change generators, or assign keys yourself. Something like the Sequence HiLo generator can drastically reduce the number of queries Hibernate sends to your database (a generator sketch follows this list).
- Fetch on primary key whenever possible. Hibernate has the methods get/load that let you retrieve an object by its primary key. These methods check the first level cache to see whether the object has already been retrieved within the same Hibernate session, and if so avoid database communication. So if you use these you will only get one call to the database, even though your code actually calls Hibernate multiple times. Using queries bypasses this mechanism, even if you query on the primary key (see the get/load sketch after this list).
- Enable the second level cache for read-only entities. This is a really good quick win for stuff like Currency or Country, at close to zero cost (a cache annotation sketch follows this list).
- Consider whether you always use a set of objects at the same time. You rarely retrieve an Order without looking at the underlying Items in that Order. Setting fetch="join|select|subselect" or a batch size on the relation can increase the speed. Note that this will then happen every time you fetch the Order. It will also effectively bypass any caches you have enabled, so make sure you consider all the usage scenarios for this.
- Write custom queries for the situation. Setting the fetch mode on an association as in the previous point impacts every fetch of the Order object. If there are only a few cases where performance gets really bad, and those are separated from other parts of the system, you can write a custom query instead. This enables you to tune the fetching for the concrete case and lets other parts of the system still benefit from lazy loading. Preferably this is a custom Hibernate Criteria query, but it can also be HQL or even SQL (see the Criteria sketch after this list).
- Use plain old SQL. There are actually things that SQL is better at. Use it, and pair it with something like the RowMapper feature in Spring (a RowMapper sketch follows this list).
- Refactor your code to enable better performance. Changes to the model, or to the design of services and requests, can affect performance and might be the way to go. Especially consider Hibernate's flushing rules: making sure you read information at the start of your transaction can reduce the number of times Hibernate writes to the database.
- Write cache your objects. This can become quite complex because of synchronization issues. If you're running multiple nodes (most projects are), you'll need to set up synchronization between your nodes and caches. This reduces scalability and complicates setup and deployment. Keep it simple.
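As promised above, a minimal sketch of the key generation point using Hibernate's seqhilo generator through annotations; the entity, generator and sequence names are made up for the example:

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.hibernate.annotations.GenericGenerator;
import org.hibernate.annotations.Parameter;

@Entity
public class Item {

    // seqhilo reserves a block of identifiers per database round trip, so a
    // large batch of inserts no longer needs one extra query per row just to
    // obtain its key.
    @Id
    @GeneratedValue(generator = "item_id_gen")
    @GenericGenerator(
            name = "item_id_gen",
            strategy = "seqhilo",
            parameters = {
                    @Parameter(name = "sequence", value = "item_seq"),
                    @Parameter(name = "max_lo", value = "100")
            })
    private Long id;
}
```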
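The get/load sketch, reusing the hypothetical Order entity from the earlier mapping example. The second get() is served from the first level cache, while the query hits the database regardless:

```java
import org.hibernate.Session;

public class PrimaryKeyFetchExample {

    public static void demonstrate(Session session, Long orderId) {
        // First call: issues SQL and puts the object in the first level
        // cache (the Session itself).
        Order first = (Order) session.get(Order.class, orderId);

        // Same id, same Session: no SQL is issued, the cached instance
        // is returned.
        Order second = (Order) session.get(Order.class, orderId);

        // A query on the primary key bypasses that check and goes to the
        // database again, even though it returns the same row.
        Order viaQuery = (Order) session
                .createQuery("from Order o where o.id = :id")
                .setParameter("id", orderId)
                .uniqueResult();
    }
}
```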
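The cache annotation sketch for the read-only entity point, assuming a second level cache provider is already configured; Country here is just the hypothetical reference entity from the list:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Read-only reference data is the cheapest thing to put in the second level
// cache: it never changes, so there is no invalidation to worry about.
@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_ONLY)
public class Country {

    @Id
    private String isoCode;

    private String name;
}
```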
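The Criteria sketch for the custom query point: one query joins in the items for the case that needs them, while the mapping itself stays lazy for everyone else (names reused from the earlier sketches):

```java
import org.hibernate.FetchMode;
import org.hibernate.Session;
import org.hibernate.criterion.Restrictions;

public class OrderWithItemsQuery {

    // Fetches a single Order together with its items in one round trip.
    // Only this query pays for the join; the rest of the system still
    // benefits from the lazy mapping.
    public static Order findWithItems(Session session, Long orderId) {
        return (Order) session.createCriteria(Order.class)
                .add(Restrictions.idEq(orderId))
                .setFetchMode("items", FetchMode.JOIN)
                .uniqueResult();
    }
}
```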
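And the RowMapper sketch for the plain SQL point, using Spring's JdbcTemplate; the table and column names are hypothetical:

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;

public class OrderSummaryDao {

    private final JdbcTemplate jdbcTemplate;

    public OrderSummaryDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // A simple value object instead of a managed entity: no session, no
    // dirty checking, no lazy loading, just the rows you asked for.
    public static class OrderSummary {
        public final long orderId;
        public final int itemCount;

        public OrderSummary(long orderId, int itemCount) {
            this.orderId = orderId;
            this.itemCount = itemCount;
        }
    }

    public List<OrderSummary> findSummaries() {
        return jdbcTemplate.query(
                "select o.id, count(i.id) from orders o "
                        + "left join items i on i.order_id = o.id group by o.id",
                new RowMapper<OrderSummary>() {
                    public OrderSummary mapRow(ResultSet rs, int rowNum) throws SQLException {
                        return new OrderSummary(rs.getLong(1), rs.getInt(2));
                    }
                });
    }
}
```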
Let me know if there's something I've forgotten; I'm still learning. 🙂
7 replies on “Hibernate performance and optimization”
There are situations where explicit flushing can improve performance. Consider a long running (single) transaction where hundreds of records are being inserted. It is possible for the session cache to have so many objects in it that the dirty checking of all those objects takes a huge amount of time. Flushing at regular intervals (e.g. every 50-100 objects) can improve performance dramatically. In general though I agree with you, but you need to keep in mind that what is good for your application is not necessarily good for every application.
Yeah, I know it can be of use. My point was that you should not do it unless you are absolutely sure why you are doing it. 🙂 Usually it is done because that’s the only way an inexperienced developer can get their damaged mapping to work. 🙂
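For reference, the batch pattern described in this exchange looks roughly like this; the batch size and the Item entity are only illustrative, and the flush/clear step is exactly the kind of thing you should understand before adding:

```java
import java.util.List;
import org.hibernate.Session;

public class BatchInsertExample {

    private static final int BATCH_SIZE = 50;

    // Flushing and clearing every BATCH_SIZE inserts keeps the first level
    // cache small, so dirty checking does not have to walk an ever-growing
    // set of objects.
    public static void insertAll(Session session, List<Item> items) {
        int count = 0;
        for (Item item : items) {
            session.save(item);
            if (++count % BATCH_SIZE == 0) {
                session.flush();
                session.clear();
            }
        }
    }
}
```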
Useful post. Thanks
Thanks. It’s been something I’ve been wanting to write for a while. Hope it helps.
[…] blog: Hibernate performance and optimization OMA: Using an ORM like Hibernate? Then this is a MUST […]
We have released Batoo JPA, which is ~15 times faster than Hibernate and implements the JPA spec 100%. The main motivation for the project was that all three JPA implementations are, simply put, slow. Having realized JPA could be much faster, we developed Batoo JPA. Give it a try: http://batoo.jp
Great post!
I am writing a similar article series where I collect common Hibernate performance traps; you can check it out for some additional info:
http://korhner.github.io/hibernate/hibernate-performance-traps-part-1/
You can see the possible performance and memory usage gains from using flush/clear; it just has its use cases.