How should I organize my master DDL script?
I am currently creating a master DDL script for our database. Historically we have used backup/restore to version our database, and have not maintained any DDL scripts. The schema is quite large.
My current thinking:
Break the script into parts (possibly as separate scripts):
- table creation
- add indexes
- add triggers
- add constraints
Each script would get called by the master script (one way to wire this up is sketched below).
- I might need a script to drop constraints temporarily for testing
- There may be orphaned tables in the schema; I plan to identify suspect tables.
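As a starting point for flagging suspects, something like the following heuristic may help; this is my own sketch against SQL 2000's system tables, and the results are only candidates for review:

    -- List user tables with no foreign-key relationships and nothing
    -- recorded as depending on them in sysdepends.
    SELECT t.name
    FROM sysobjects t
    WHERE t.xtype = 'U'
      AND NOT EXISTS (SELECT * FROM sysforeignkeys fk
                      WHERE fk.fkeyid = t.id OR fk.rkeyid = t.id)
      AND NOT EXISTS (SELECT * FROM sysdepends d WHERE d.depid = t.id)
    ORDER BY t.name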
Any other advice?
Edit: Also, does anyone know of good tools to automate part of the process? We're using MS SQL 2000 (old, I know).
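For illustration, one low-tech way a master script can chain the sub-scripts on SQL 2000 is to shell out to osql via xp_cmdshell. This is only a sketch: the paths and database name are hypothetical, and it assumes xp_cmdshell is available (sqlcmd's :r include syntax only arrived in 2005):

    -- Hypothetical master script: runs each sub-script in order.
    -- Paths and database name are placeholders.
    EXEC master..xp_cmdshell 'osql -E -d OurDb -i C:\ddl\01_tables.sql'
    EXEC master..xp_cmdshell 'osql -E -d OurDb -i C:\ddl\02_indexes.sql'
    EXEC master..xp_cmdshell 'osql -E -d OurDb -i C:\ddl\03_triggers.sql'
    EXEC master..xp_cmdshell 'osql -E -d OurDb -i C:\ddl\04_constraints.sql'

A plain batch file calling osql for each file works just as well and avoids the xp_cmdshell dependency.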
7 Answers
I think the basic idea is good.
The nice thing about building all the tables first and then building all the constraints is that the tables can be created in any order. When I've done this I had one file per table, which I put in a directory called "Tables", and then a script which executed all the files in that directory. Likewise I had a folder for constraint scripts (which handled foreign keys and indexes too), which were executed after the tables were built.
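For example, a file in the constraints folder might contain something like this (the table, column and constraint names are made up):

    -- Runs after all tables exist, so table creation order is irrelevant.
    ALTER TABLE Orders
        ADD CONSTRAINT FK_Orders_Customers
        FOREIGN KEY (CustomerId) REFERENCES Customers (CustomerId)
    GO
    CREATE INDEX IX_Orders_CustomerId ON Orders (CustomerId)
    GO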
I would separate out the build of the triggers and stored procedures, and run these last. The point about these is that they can be run and re-run on the database without affecting the data. This means you can treat them just like ordinary code. You should include "if exists...drop" statements at the beginning of each trigger and procedure script, to make them re-runnable.
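On SQL 2000 that pattern looks something like this (the procedure is just an invented example):

    -- Re-runnable: drop the old definition if present, then recreate it.
    IF EXISTS (SELECT * FROM sysobjects
               WHERE name = 'usp_GetCustomer' AND xtype = 'P')
        DROP PROCEDURE usp_GetCustomer
    GO
    CREATE PROCEDURE usp_GetCustomer
        @CustomerId int
    AS
        SELECT * FROM Customers WHERE CustomerId = @CustomerId
    GO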
So the order would be:
1. Create the tables
2. Add the constraints (foreign keys and indexes)
Then:
3. Add the triggers
4. Add the stored procedures
On my current project we are using MSBuild to run the scripts. There are some extension targets you can get for it which allow you to call SQL scripts. In the past I have used Perl, which was fine too (and batch files... which I would not recommend; they're too limited).
Invest the time to write a generic "drop all constraints" script, so you don't have to maintain it.
A cursor over the following statements does the trick.
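A minimal sketch of such a cursor, assuming SQL 2000's sysobjects catalog; this version drops only foreign keys, so extend the xtype filter if you also want CHECK or DEFAULT constraints:

    -- Generate and execute ALTER TABLE ... DROP CONSTRAINT for each FK.
    DECLARE @tbl sysname, @cst sysname
    DECLARE constraint_cursor CURSOR FOR
        SELECT OBJECT_NAME(parent_obj), name
        FROM sysobjects
        WHERE xtype = 'F'   -- foreign keys only
    OPEN constraint_cursor
    FETCH NEXT FROM constraint_cursor INTO @tbl, @cst
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC ('ALTER TABLE [' + @tbl + '] DROP CONSTRAINT [' + @cst + ']')
        FETCH NEXT FROM constraint_cursor INTO @tbl, @cst
    END
    CLOSE constraint_cursor
    DEALLOCATE constraint_cursor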
I previously organised my DDL code by one file per entity, and made a tool that combined these into a single DDL script.
My former employer used a scheme where all table DDL was in one file (stored in Oracle syntax), indices in another, constraints in a third, and static data in a fourth. A change script was kept in parallel with this (again in Oracle). The conversion to SQL Server was manual. It was a mess. I actually wrote a handy tool that converted Oracle DDL to SQL Server (it worked 99.9% of the time).
I have recently switched to using Visual Studio Team System for Database Professionals. So far it works fine, but there are some glitches if you use CLR functions within the database.
@Adam
Or how about just by domain: a useful grouping of related tables in the same file, but separate from the rest?
The only problem is if some domains (in this somewhat legacy system) are tightly coupled. Plus, you have to maintain the dependencies between your different sub-scripts.
If you are looking for an automation tool, I have often worked with EMS SQLManager, which lets you generate a DDL script automatically from a database.
Data inserts into reference tables might be mandatory before putting your database online. This can even be considered part of the DDL script. EMS can also generate scripts for data inserts from existing databases.
The need for indexes might not be properly estimated at the DDL stage. You will just need to declare them for primary/foreign keys. Other indexes should be created later, once views and queries have been defined.
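If you do script reference data by hand, making each insert re-runnable keeps it safe to include in the build; a sketch with an invented table:

    -- Only adds the row if it is not already there.
    IF NOT EXISTS (SELECT * FROM OrderStatus WHERE StatusCode = 'OPEN')
        INSERT INTO OrderStatus (StatusCode, Description)
        VALUES ('OPEN', 'Order received')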
What you have there seems to be pretty good. My company has on occasion, for large enough databases, broken it down even further, perhaps to the individual object level. In this way each table/index/... has its own file. Can be useful, can be overkill. Really depends on how you are using it.
@Justin
By domain is mostly always sufficient. I agree that there are some complexities to deal with when doing it this way, but that should be easy enough to handle.
I think this method provides a little more seperation (which in a large database you will come to appreciate) while still making itself pretty manageable. We also write Perl scripts that do a lot of the processing of these DDL files, so that might be an option of a good way to handle that.
There is a neat tool that will iterate through the entire SQL Server instance and extract all the table, view, stored procedure and UDF definitions to the local file system as SQL scripts (text files). I have used it with 2005 and 2008, but am not sure how it will work with 2000. Check out http://www.antipodeansoftware.com/Home/Products
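If the tool turns out not to support 2000, the definitions are also reachable directly: SQL 2000 stores them in syscomments, with long definitions split across rows ordered by colid. A sketch:

    -- Pull view, procedure, trigger and scalar UDF source from the catalog.
    SELECT o.name, o.xtype, c.text
    FROM sysobjects o
    JOIN syscomments c ON c.id = o.id
    WHERE o.xtype IN ('V', 'P', 'TR', 'FN')
    ORDER BY o.name, c.colid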