I realise this is not really an answer to your question, but I would absolutely not do what you are proposing. It's not the way to manage your SDLC (in my opinion) and, especially if your data contains any PII, copying data from a Prod to a non-Prod database runs the risk of all sorts of regulatory and audit issues.
I would do the following:
As a one-off exercise, create the scripts necessary to build the objects for your "standard" environment - presumably basing this on your current Prod environment (one way of generating these is sketched after this list)
Manage these scripts in a version-controlled repository, e.g. Git
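How you produce that baseline depends on your platform, but as a minimal sketch, assuming a PostgreSQL database with pg_dump available, something like the following would capture the schema-only DDL into a file you can commit. The connection string and output paths here are illustrative assumptions, not anything from your setup:

```python
"""Minimal sketch: dump schema-only DDL for a PostgreSQL database so the
object-creation scripts can be committed to version control. The DSN,
output layout and use of pg_dump are assumptions for illustration only."""
import subprocess
from pathlib import Path

# Hypothetical connection details -- replace with your own.
PROD_DSN = "postgresql://readonly_user@prod-db.example.com:5432/appdb"
OUTPUT_DIR = Path("db/schema")


def dump_schema(dsn: str, out_dir: Path) -> Path:
    """Write the object-creation DDL (no data) to a single SQL file."""
    out_dir.mkdir(parents=True, exist_ok=True)
    out_file = out_dir / "baseline_schema.sql"
    # --schema-only limits the dump to CREATE statements (tables, views,
    # indexes, functions, ...); --no-owner keeps the scripts portable
    # between environments that use different role names.
    subprocess.run(
        ["pg_dump", "--schema-only", "--no-owner", "--file", str(out_file), dsn],
        check=True,
    )
    return out_file


if __name__ == "__main__":
    path = dump_schema(PROD_DSN, OUTPUT_DIR)
    print(f"Baseline DDL written to {path}; commit this to your repository.")
```

From that point on, changes would arrive as new, reviewed scripts in the repository rather than fresh dumps from Prod.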
You can then use these scripts to build any environment, and you would change them by taking them through the standard Dev, Test, Prod SDLC.
As far as populating these environments with data goes, if you really need Production-like data (and Production volumes of data), then you should build routines that copy the data from Prod to the chosen target environment and, where necessary, anonymise it. These scripts should also be managed in your code repository, and your SDLC should include a requirement to build/update the relevant script for any new or changed table. A sketch of one such routine follows.
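What the copy-and-anonymise routine looks like depends entirely on your platform and on which columns actually hold PII; the following is only a sketch, assuming PostgreSQL accessed via psycopg2 and a hypothetical customer table whose email and full_name columns need masking:

```python
"""Sketch of a per-table copy-with-anonymisation routine. The table name,
columns, connection strings and masking rule are illustrative assumptions;
a real routine would be maintained per table as part of the SDLC."""
import hashlib

import psycopg2

# Hypothetical connection details -- replace with your own.
PROD_DSN = "postgresql://readonly_user@prod-db.example.com:5432/appdb"
TEST_DSN = "postgresql://deploy_user@test-db.example.com:5432/appdb"


def mask(value: str) -> str:
    """Deterministic, irreversible stand-in for a PII value."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]


def copy_customers(prod_dsn: str, target_dsn: str, batch_size: int = 1000) -> None:
    with psycopg2.connect(prod_dsn) as src, psycopg2.connect(target_dsn) as dst:
        # Named (server-side) cursor so Production volumes stream in batches
        # instead of being loaded into memory all at once.
        with src.cursor(name="prod_read") as read_cur, dst.cursor() as write_cur:
            read_cur.itersize = batch_size
            read_cur.execute("SELECT id, email, full_name, created_at FROM customer")
            for cust_id, email, full_name, created_at in read_cur:
                write_cur.execute(
                    "INSERT INTO customer (id, email, full_name, created_at) "
                    "VALUES (%s, %s, %s, %s)",
                    # Anonymise the PII columns on the way through; copy the rest as-is.
                    (cust_id, f"{mask(email)}@example.invalid", mask(full_name), created_at),
                )
        dst.commit()


if __name__ == "__main__":
    copy_customers(PROD_DSN, TEST_DSN)
```

A deterministic hash is used here so that related rows still join up after masking; whether that is acceptable (versus random fake values) is a call your compliance people should make.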