The quality and reliability of comparative evaluations issued by vendors of modelling tools vary significantly, from the completely unprofessional to the merely incomplete. I include the comparisons I wrote for Sybase (now SAP) PowerDesigner in 2011 in the latter category.
The worst I’ve seen (very recently) was ‘unofficial’, presumably produced by a sales rep for a particular customer. It was thoroughly unprofessional; its sole intention was to rubbish a competitor. For example, it included:
- claims that the other tool doesn’t support feature Y, just because it doesn’t have a feature called Y – in fact it does support that feature, it just gives it a different name
- omissions – “my tool supports both relational and dimensional modelling” – without mentioning that the other tool also supports both
- apparent hearsay – throwaway comments such as “they say that my tool can leverage colour better than the other tool”, with no supporting evidence
- unsupported boasts – “our model comparison feature can compare more objects and properties than the other tool” – hmm, really?
- unanswered questions, such as “Does the other tool support inheritance?”, presumably intended to sow doubt
I think it’s safe to say that the author of this comparison is not actually a user of the tools in question.
It’s very difficult for one person to produce an unbiased and detailed comparison of tools, as very few people know the target tools in sufficient detail. To create such a comparison you need access to experts in every tool involved, and you have to ask all the right questions.
Take everything with a huge pinch of salt, and take the time to come to your own conclusions when you’re choosing a modelling tool.
I’m speaking with my friend Chris Bradley on the topic of Evaluating Data Modelling Tools at EDW in Washington next week – come along and find out more: http://edw2015.dataversity.net/sessionPop.cfm?confid=87&proposalid=7183.