A confusion of reviews – what’s to be done?

CA have just published “A Usability Comparison of the Top Three Data Modeling Tools”, which gives the following grades to the top three products in the market:

Sybase PowerDesigner (C+); Embarcadero ER/Studio (B+); CA ERWIN (A).

I can see why CA published this review, just as I can see why Embarcadero recently published a similar review, where they scored top marks going head-to-head with ERWIN.  They also tell us that PowerDesigner doesn’t stack up against ER/Studio.

Sybase don’t seem to go in for promoting PowerDesigner in the same way; on their web site they give just one reason why PowerDesigner is better than the competition: their Link and Sync technology, which is what allows their Logical and Physical data models to be loosely coupled.

Reading the reviews, I can see that some of the successful point scoring arises from a lack of knowledge about the ‘other’ tool: claims that “tool B doesn’t have feature X”, when a regular user of tool B knows that it does have feature X – they just call it Y, not X, and it works in a completely different manner from feature X in tool A, which is why the reviewer didn’t find it.

I’d hate to be a manager trying to make sense of these reviews in order to choose a standard data modelling tool; just who do you believe, and how do you see through the bias?  How do you understand the different philosophies of the tool vendors?  In the first review I mentioned, the reviewer prefers the tight coupling of logical and physical data models supported by ERWIN; personally, I think that tightly coupling one Logical Data Model to one Physical Data Model is a dinosaur approach – so we’re both biased.
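
To make that coupling point concrete, here is a deliberately over-simplified sketch, in Python, of what loose coupling buys you: one logical entity driving more than one physical model, each with its own type mappings. To be clear, the entity, the type mappings and the function are all invented for illustration; this is nothing like the internal workings of PowerDesigner, ER/Studio or ERWIN.

    # Hypothetical sketch (not taken from any of these tools): one logical entity
    # kept loosely coupled to several physical models, so a change made once in
    # the logical definition can be pushed out to more than one target DBMS.

    LOGICAL_ENTITY = {
        "name": "Customer",
        "attributes": [
            # (attribute name, logical data type, optionality)
            ("Customer Number", "integer", "mandatory"),
            ("Customer Name", "text(100)", "mandatory"),
            ("Date Of Birth", "date", "optional"),
        ],
    }

    # Invented type mappings; a real tool drives this from editable templates.
    TYPE_MAP = {
        "oracle": {"integer": "NUMBER(10)", "text(100)": "VARCHAR2(100)", "date": "DATE"},
        "sqlserver": {"integer": "INT", "text(100)": "VARCHAR(100)", "date": "DATE"},
    }

    def to_physical(entity, dbms):
        """Render one logical entity as a CREATE TABLE statement for one DBMS."""
        table = entity["name"].upper()
        columns = []
        for attr, logical_type, optionality in entity["attributes"]:
            column = attr.upper().replace(" ", "_")
            nullability = "NOT NULL" if optionality == "mandatory" else "NULL"
            columns.append(f"    {column} {TYPE_MAP[dbms][logical_type]} {nullability}")
        return f"CREATE TABLE {table} (\n" + ",\n".join(columns) + "\n);"

    if __name__ == "__main__":
        # The same logical model drives two different physical models.
        for dbms in ("oracle", "sqlserver"):
            print(f"-- {dbms}")
            print(to_physical(LOGICAL_ENTITY, dbms))

With a tightly coupled one-to-one pairing, by contrast, the logical model effectively becomes a mirror of a single target schema, which is exactly the restriction I object to.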

Perhaps we need to get three experts together, one for each tool, to agree a common set of comparison criteria, and come up with a single comparison.  If each expert explained the features of their tool to the other two, at least their eyes would be opened.  They may still disagree on the favoured approach to modelling, and we would have at least two opinions on most topics, but the resulting assessment should avoid the bias that is currently prevalent.

Let me declare my background (and possible bias) before I finish.  I’m currently writing a book on PowerDesigner, which should be out later this year.  In 25+ years as a data modeller, I haven’t worked with ERWIN, except during a couple of evaluations; I’ve selected PowerDesigner once, and ER/Studio once, and I may have to conduct another evaluation later this year.  Perhaps that will choose ERWIN?  Who knows.

4 thoughts on “A confusion of reviews – what’s to be done?”

  1. Martijn Evers July 18, 2011 / 14:41

    Hi George,

    There are two evaluation strategies to consider when choosing these products. One is feature-wise, but that is a little tricky, and one is user-wise (which type of user is going to use the product), but that tends to work out best when generalizing feature-wise. I suspect you can let anybody win if you can choose your own ground.

    I always say that if you need serious customization or lots of different model types, choose PD. If you just need logical/physical models, use ER/Studio. Both tools are complex and not always very user-friendly. Or, to put it another way, you need to know your way around the tool to use it effectively. I’ve never considered Erwin, but I see from the review that it is aimed at the more casual tool users.

  2. Karen November 17, 2011 / 17:18

    I think this would be a wonderful presentation for EDW, don’t you think? Present the criteria and the outcomes, but focus mainly on what features we in the data profession really need and why.

  3. Francis September 21, 2015 / 18:20

    Interesting read
