API Platform with alternative runtime, Caddy, Vulcain, and the caching ecosystem
Currently I'm investigating a setup backed by api-platform with the following goals:
- the PHP backend MUST yield minimal resource payloads, thus I do not want to embed relations at all
- the PHP backend SHOULD be able to run in alternative runtimes, e.g. Swoole
- the webserver should push related resources via HTTP2 Push leveraging the built in vulcain support of the api-platform distribution
I cannot find that many resources about those setups - at least not in such a form that they answer subsequent questions sufficiently.
My starting setup was simply based on the api-platform distribution 2.6.8.
So, until now I've learned the following things:
- out of the box, the caddy + http2 push setup works with the PHP container being based on `php:8.1-fpm-alpine` - while caddy is obviously using `php_fastcgi` directly
- when I was fooling around with the currently available cache-handler, I was able to get the HTTP cache working, but I struggled to find any information about how cache invalidation works. The api-platform docs mostly focus on Varnish; there is also only a `VarnishPurger` shipped in the api-platform core. Writing a custom one should not be that hard if the caddy cache-handler somehow allows `BAN` requests or something similar - where can I find info about that? I see that the handler is based on Souin - but, unfamiliar as I am, I have no clue how (and if) Souin supports cache invalidation at all.
- when changing the PHP container to be based on Swoole (in my current testing scenario), `php_fastcgi` cannot be used in caddy - instead, I ended up using `reverse_proxy` (as described in the vulcain docs), which basically works and serves proper HTTP responses, but does not push any resources requested with `Preload` headers (as I said, it worked when the PHP backend was based on PHP-FPM). How can I debug what happens here? Caddy does not yield any info about the push handling - nor does the vulcain caddy module.
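For reference, the shape of Caddyfile I ended up with for the Swoole variant is roughly the following. This is a minimal sketch, not a verified config: the upstream address `php:8080` is a placeholder for the Swoole container, and the `order` line is my guess at where the vulcain handler has to sit relative to `reverse_proxy` - the `debug` global option at least makes Caddy's logs verbose enough to see which handlers run:

```Caddyfile
{
    # verbose logs - helps to see whether the vulcain handler runs at all
    debug
    # directive ordering is an assumption; adjust per the module docs
    order vulcain before reverse_proxy
}

localhost {
    vulcain
    # Swoole speaks plain HTTP, so php_fastcgi is replaced by reverse_proxy;
    # php:8080 is a placeholder for the Swoole container in the compose network
    reverse_proxy php:8080
}
```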
Long story short(er): to sum up my questions,
- how can I figure out why caddy + vulcain is not working in a reverse_proxy setup?
- is the current state of the caddy cache handler functional / supported by the api-platform distribution?
- how to implement/support `BAN` requests (or other fine-grained cache invalidation) for the caddy cache handler?
Comments (1)
Souin supports invalidation using the `PURGE` HTTP method. I already wrote a PR to set up Souin in the api-platform/core project, but they are busy with the v3.0 release. Maybe they'll review and possibly merge it in the near future, I don't know. But if you use a decorator on the Varnish purger and use the code I wrote in the PR, you'll be able to automatically purge the endpoints associated with the base route.
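To make the `PURGE` mechanics concrete, here is a sketch of the request shape a custom purger would send per IRI - this is not Souin-specific code: a throwaway Python stub stands in for the cache (the port 8099 and the `/books/1` path are made up), so only the `curl -X PURGE` call is the part that matters against a real Souin-backed Caddy:

```shell
# Stand-in cache server that accepts the PURGE method (hypothetical port/path).
python3 - <<'EOF' &
from http.server import BaseHTTPRequestHandler, HTTPServer

class Stub(BaseHTTPRequestHandler):
    # http.server dispatches any request method to do_<METHOD>
    def do_PURGE(self):
        self.send_response(204)  # purged, no content
        self.end_headers()
    def log_message(self, *args):  # keep the demo quiet
        pass

HTTPServer(("127.0.0.1", 8099), Stub).serve_forever()
EOF
SERVER_PID=$!
sleep 1  # give the stub time to bind

# The call a purger decorator would issue for each IRI to invalidate;
# -w prints only the HTTP status code
curl -s -o /dev/null -w '%{http_code}\n' -X PURGE http://127.0.0.1:8099/books/1

kill "$SERVER_PID"
```

A decorator around the shipped `VarnishPurger` would essentially loop over the IRIs it receives and issue one such `PURGE` request per cached URL.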