Maintaining views and stored procedures in model

Maintaining procedures, functions and views is currently quite a tedious task in Origam - for each change, a new Service command update script must be created in the deployment. It’s one of the reasons I’m using Visual Studio SQL Server Data Tools on some projects, so I don’t have to manually track all changed files (I have one file on disk for each SQL object anyway).

My proposal: Support multiple SQL blocks in one deployment script. In practice, I think this means adding support for the SQL “GO” command. This one small change could make a big difference :o)

After this change, deploying SQL code changes will be a piece of cake and could be semi-automated - I’ll write a simple batch script that joins all procedures, functions and views, with GO between them, into one file, copies it into a single deployment script, and voila.
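The “join everything” step could be sketched roughly like this - a minimal, hypothetical example, assuming one `.sql` file per object in a source directory (the file layout and names are assumptions, not anything Origam prescribes):

```python
from pathlib import Path

def build_deployment_script(source_dir: str) -> str:
    """Concatenate every .sql file under source_dir, in sorted order,
    inserting a GO batch separator after each object's script."""
    parts = []
    for path in sorted(Path(source_dir).glob("**/*.sql")):
        parts.append(f"-- {path.name}")                      # which object this block came from
        parts.append(path.read_text(encoding="utf-8").rstrip())
        parts.append("GO")                                   # batch separator between objects
    return "\n".join(parts) + "\n"
```

The resulting string would then be pasted (or written) into a single deployment script, relying on the proposed GO support to split it into batches.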

I agree with the problem but not with the solution.

My idea would be to provide a model storage for all the “static” objects (views, functions, procedures). On every version change, ORIGAM could just execute all of them so they would be up to date.

It would also help from the version control point of view. If a stored procedure has a static place where it is always updated (e.g. like an XSLT script), it would also be easy to see differences in them between commits.

That direction sounds good, but I see a catch: if I understood you right, there would be only one version of the static objects, but multiple deployment versions containing other SQL scripts (e.g. table structure and data modifications). Will these deployment scripts always use the latest versions of the static objects?

  • If so, then deployment to new databases can (and often will) break - for example, an old deployment script may only work with an old view structure and fail with the new view version. Developers would then have to backtrack and adjust their old deployment scripts to make them work against a new database (e.g. manually recreating old versions of critical static objects before each deployment version - but then we are back to the tedious work problem).
  • If not, we have to determine how to maintain different versions of static objects and how to refer to them from deployment scripts.

So it seems to me it would be helpful to somehow track the versions of static objects as well. If you want to have all of these objects in the model (which sounds very good to me), maybe there could simply be a functionality (action) where the user selects some or all of the static objects, and these objects are added to the current/selected deployment version:

  • Either as a separate script for each object
  • Or all objects in one script (a comment with the sorted names of all included objects could be generated at the beginning of the script). That would mean less clutter (fewer scripts). Also, in this case, support for the “GO” command will be needed anyway, I guess.

Maybe support for the GO command would be a quick win before the administration and deployment of static objects is thoroughly solved?

Before doing a “quick and dirty” solution, let’s compare this to what we already have. An example is a modelled entity/database table. The workflow is the following:

  1. Create a model
  2. Create a deployment script
  3. Change the model
  4. Create a change deployment script

The entity/table definition does not keep its versions in the model - it is always the latest version. But we have the database deployment script wizard, which can compare the database structure with the model and offer to create a change script.

Would it be possible to do something similar with views/stored procedures/functions/triggers?

Originally I thought we would never support these, as they are purely database objects and should be represented in the business logic layer using a model. The database was intended to be a “stupid” data storage. We still try to keep this point of view, but there are cases where using database-based functionality makes sense.

So now comes the dilemma: we do not really try to model all the database logic (e.g. model a stored procedure and generate it - we just script it by hand). But at the same time we have nice database deployment scripts, so it would be handy to include those, as they can depend on each other, as you mention.

As the workflow is the complete opposite of modelling (I first write the view/procedure in SQL in order to test it, so the object exists in the database first), it should be supported like that. I can imagine that the deployment script wizard could store the latest script version of each object. When executed, it would compare those versions, offer to create a new deployment script, and store the latest version again for later comparison. It would have to be supported by every database engine (MSSQL, PgSQL). In order not to store the script twice in the model (as both a version and a deployment script), we could introduce a special deployment script type that would just reference the object and “deploy” it.
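The comparison step the wizard would perform could look something like this - a minimal sketch, assuming the model keeps the last deployed script text (or a hash of it) per object; all function and variable names here are hypothetical, not ORIGAM APIs:

```python
import hashlib

def fingerprint(script: str) -> str:
    """Hash a script after light whitespace normalization, so trailing
    spaces or blank-line differences do not count as a change."""
    normalized = "\n".join(line.rstrip() for line in script.strip().splitlines())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def objects_needing_update(current: dict, stored: dict) -> list:
    """Return names of objects whose current script differs from the stored
    fingerprint (new objects have no stored fingerprint, so they qualify)."""
    return [name for name, script in current.items()
            if stored.get(name) != fingerprint(script)]
```

After the user confirms creating the deployment script, the wizard would overwrite the stored fingerprints with the current ones, ready for the next comparison.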

This would be a way how to handle it completely using Architect.

To comment on the support for “GO”: I am not very sure how difficult this would be. I actually tried it once, years ago, when I created a simple deployment tool for another company’s packaged software, and it kind of worked. “GO” is a special construct of Microsoft query tools and is not supported by the database engine at all. So we would have to replicate it and create some rules for it, e.g. that it would have to be a separate statement on its own line. We have avoided parsing the deployment scripts until now - we just send them to the database engine and see what happens. I am also not sure whether “GO” would be the right statement with PgSQL, or whether we would have to introduce some other batch separator. That would have to be thought through.
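The client-side splitting described above could be sketched as follows - a minimal, hedged example assuming the rule “a line consisting solely of GO (case-insensitive) is a batch separator”; real tools like sqlcmd handle more cases (e.g. `GO n`), which this deliberately ignores:

```python
import re

# Matches a line that contains nothing but the batch separator keyword.
GO_LINE = re.compile(r"^\s*GO\s*$", re.IGNORECASE)

def split_batches(script: str) -> list:
    """Split a deployment script into batches on standalone GO lines.
    Empty batches (e.g. two GO lines in a row) are dropped; each batch
    would then be sent to the database engine as a separate statement."""
    batches, current = [], []
    for line in script.splitlines():
        if GO_LINE.match(line):
            if any(l.strip() for l in current):
                batches.append("\n".join(current).strip())
            current = []
        else:
            current.append(line)
    if any(l.strip() for l in current):
        batches.append("\n".join(current).strip())
    return batches
```

A naive split like this would still break on a `GO` appearing inside a string literal or comment, which is exactly the kind of parsing the deployment scripts have avoided so far - so the “separate line” rule would have to be documented as a hard convention.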