I’m in agreement with Jason on the need to manage schemas via a canonical data model, and that data architects/modellers are the right people to do that. Where I differ is that I want data modelling tools to go further than they tend to.
We also need to be able to:
* modify enumerations, optionality, cardinality for some schemas
* insert additional grouping elements
* roll-up elements (e.g. subtypes into super-types)
* combine attributes from a chain of entities into an element and/or sequence
* combine types from multiple namespaces (and therefore multiple relational models)
* provide documentation of the content of each schema (perhaps via HTML, or a repository browser)
* provide a model of each schema for approval, review, documentation
* provide impact analysis between the schema(s) and the model(s) from which they’re derived, allowing us to know what we have, and how it differs in each place we have it
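To make the first few bullets concrete, here is a sketch of the kind of generated schema I have in mind. All the names (Party, Person, Organisation, the example namespaces) are invented for illustration: it shows subtypes rolled up into a super-type element, optionality and an enumeration adjusted for this schema, and a type combined in from a second namespace (i.e. a second relational model):

```xml
<!-- Hypothetical illustration only: rolling up Person and Organisation
     subtypes into a single Party element, and importing a type from a
     second namespace. All names here are invented examples. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:addr="urn:example:address"
           targetNamespace="urn:example:party"
           elementFormDefault="qualified">

  <!-- combine types from another namespace / relational model -->
  <xs:import namespace="urn:example:address" schemaLocation="address.xsd"/>

  <xs:element name="Party">
    <xs:complexType>
      <xs:sequence>
        <!-- attribute from the Party super-type -->
        <xs:element name="PartyId" type="xs:string"/>
        <!-- rolled-up subtype attributes, made optional for this schema -->
        <xs:element name="FamilyName" type="xs:string" minOccurs="0"/>
        <xs:element name="RegisteredName" type="xs:string" minOccurs="0"/>
        <!-- type drawn from the imported namespace -->
        <xs:element name="Address" type="addr:AddressType" minOccurs="0"/>
      </xs:sequence>
      <!-- enumeration acting as a discriminator for the rolled-up subtypes -->
      <xs:attribute name="partyKind" use="required">
        <xs:simpleType>
          <xs:restriction base="xs:string">
            <xs:enumeration value="Person"/>
            <xs:enumeration value="Organisation"/>
          </xs:restriction>
        </xs:simpleType>
      </xs:attribute>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

The point is not this particular design, but that the tool should record these transformation choices so the schema can be regenerated, reviewed, and traced back to the source model.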
We can do the first four of these by generating schemas and saving the settings we used, so that we can re-generate each schema in an identical fashion; we may also be able to support multiple namespaces this way. However, we can’t do the rest unless we have dedicated XML models of the schemas. Some tools achieve this via a UML profile; others have dedicated XML models, or dedicated repositories. Until we can show developers and integration architects that we are actually in control of the schema models, they will always regard the actual XSD files as the master definitions, which is not what we (or at least I) would prefer.
It would also be great if the tools came pre-loaded with popular integration models such as OAGIS. That would save a lot of work.