Attempting to "cache" my node_modules folders causes "module not found" errors when building with yarn
In an effort to speed up the build of an app with a gigantic set of dependencies, I've tried running "yarn install" in the project directory, then creating .zip files of every "node_modules" folder located in the project root and any subdirectories. Then, after deleting the repo, re-cloning it, and copying each .zip file back into its respective directory and extracting it, "yarn install" sees everything as up to date and finishes in about a second — but "yarn build" then fails because it cannot find a module.
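A minimal shell sketch of the archive/restore cycle described above (a self-contained demo in a throwaway directory; the package names `left-pad`/`lodash` and the `.nm-cache` folder are stand-ins, not anything from the real project). Note it uses tar rather than zip: tar preserves symlinks such as the ones in `node_modules/.bin`, which some .zip tools do not keep by default — losing them is one plausible source of "module not found" style failures.

```shell
set -e

# Throwaway project standing in for the real repo.
demo=$(mktemp -d)
cd "$demo"

# Stand-ins for what "yarn install" would have produced.
mkdir -p node_modules/left-pad packages/app/node_modules/lodash
echo "module.exports = {}" > node_modules/left-pad/index.js
echo "module.exports = {}" > packages/app/node_modules/lodash/index.js

# 1. Archive every node_modules folder (root and subdirectories),
#    flattening each path into an archive name.
mkdir -p .nm-cache
find . -name node_modules -type d -prune | while read -r dir; do
  name=$(printf '%s' "$dir" | sed 's|^\./||; s|/|_|g')
  tar czf ".nm-cache/$name.tgz" "$dir"
done

# 2. Simulate a fresh clone: the node_modules folders are gone.
rm -rf node_modules packages/app/node_modules

# 3. Restore the archives at the project root, before "yarn install".
for f in .nm-cache/*.tgz; do tar xzf "$f"; done

# The restored files are back where yarn expects them.
ls node_modules/left-pad/index.js packages/app/node_modules/lodash/index.js
```

Because tar stores the paths relative to the project root, extracting from the root puts every folder back exactly where it was zipped from.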
I cannot rely on Yarn's own caching in this instance, because this solution will eventually be ported to Azure DevOps, where each build starts from scratch with a fresh clone of the repository. My idea is to download each .zip file from blob storage and extract them before running "yarn install" and "yarn build", so that each build doesn't have to spend ~30 minutes downloading packages. I'm effectively trying to build my own makeshift caching solution.
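The intended Azure DevOps build step might look roughly like the fragment below. This is a sketch, not a working pipeline: the container name `node-modules-cache` and the `$STORAGE_ACCOUNT` variable are assumptions, authentication is left to whatever the pipeline already provides, and it cannot run outside a configured Azure environment.

```shell
# Hypothetical pre-build step: pull the cached archives down from
# Azure Blob Storage, then restore them at the project root.
az storage blob download-batch \
  --account-name "$STORAGE_ACCOUNT" \
  --source node-modules-cache \
  --destination .nm-cache
for f in .nm-cache/*.tgz; do tar xzf "$f"; done

# With node_modules restored, install should be a near no-op.
yarn install --frozen-lockfile
yarn build
```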
In my head this approach should totally work, but I haven't been able to get it working. Am I missing something? Or is this just not possible the way I'm imagining it?