I've primarily used HP ALM as a test planning and test execution tool, at multiple companies where it was the industry-standard quality tool. HP ALM makes requirement traceability easy in the Requirements module (though that relationship is easily broken). It has robust reporting capabilities, far beyond what I needed for the scope of my work. It's fairly easy to parameterize a test case for data-driven testing, and easy to look at a list of test cases/configurations and see which ones had been executed. I've always liked the defect tracking in ALM: it's easy to create a bug while executing a test and tie it directly to the step being executed, or to create a general bug and list out the steps to reproduce.
I wish you could dock/undock the window when creating test cases. The on-screen text is entirely too small to read as you enter it or as you execute a test case, and there's no way to change the default font size or typeface.
Once I parameterized a test case, I found it difficult to work with the test case configurations. I couldn't create a new suite of test case configurations as a new test run/test build without creating copies of EVERY configuration, when often I only wanted a subset, so I had to delete the configs I didn't need. Once I have multiple configs built, I want to be able to manage them as individual tests.
It was easy to filter a list of test configs to find the ones I owned, but without creating a report, I couldn't view all the test cases I owned across multiple test suites. On a large team working on a large project, there needs to be better visibility into which test cases/configurations are not yet complete and who owns them, and a way for testers to find all the test cases they own, so they can make sure everything is completed or find tests that can still be run when other tests are blocked by an issue.
I wish you could define default text to populate the bug window when creating a bug, so that standard information is entered and collected. When every tester follows the same format and provides the same basic information (Summary, Environment, OS/Browser, Module/Feature/Function, Description with Steps to Reproduce, and Expected vs. Actual Results), it's easier for developers to reproduce an issue.
The requirements traceability is fragile and easily broken. In a waterfall environment, we used standard requirement documents, which we uploaded/attached to the requirements we manually entered into ALM. I saw several instances where someone updated a requirement, or added the exact same requirement again, without updating the traceability to an existing test case. The result was duplicate requirements, with test cases that still pointed to the old requirement or to no requirement at all.
I would consider the other options available, such as JIRA or Rally, and how robust your defect tracking/reporting needs to be for the pace of your development. If you need robust reporting capabilities and defect tracking, ALM is unmatched; if you need a lightweight, flexible solution, there's probably a better option.
We used ALM with a large team working on a large project. Upper management relied on ALM's robust reporting capabilities to see everything going on with the project. Because those reports are so data-driven, it was important that everyone enter the correct data and make the correct selections, so the reports gave management an accurate picture of the project. The benefit for management was the ability to oversee a large team, a large project, and multiple components, all at the same time.