I've looked through the official documentation and the helpful (but slightly dated) "Security Rules! | Get to know Cloud Firestore #6" video, but instead of reinventing the wheel, I'm looking to see if there's a standard Firebase rule for preventing an authorized user from posting malicious code, abusive words, or spam links.
For example, I'm trying to prevent those bad things in 'sampleinputfield' below:
allow create: if
request.auth != null &&
request.resource.data.userId == request.auth.uid &&
request.resource.data.sampleinputfield is string &&
request.resource.data.sampleinputfield.size() < 80 &&
request.resource.data.sampleinputfield. <<something here to block spam, malicious code, etc>>;
I'm aware that Cloud Functions can clean up abusive language (see "How do Cloud Functions work? | Get to know Cloud Firestore #11"), but I'm looking to see if there's something in security rules as well.
Thanks for any help
Comments (1)
There isn't a built-in filter for cases like this. You need to write a custom function that uses

matches(<REGEX>)

to test for any specific words.

Using Cloud Functions also works, as in the linked video, and it might be easier since you can use any Node package to validate the input rather than writing regular expressions yourself. You can also write the data through a Callable Cloud Function instead of using Cloud Firestore triggers, so you can immediately return an error if necessary.
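
To make that concrete, here is a minimal sketch in the rules language, meant to sit inside the question's existing match block. The field names come from the question, but the isCleanText helper and its deny-list pattern (rejecting anything that contains an http(s) link) are only illustrative assumptions, not a vetted spam filter:

// Hypothetical helper: reject strings containing an http(s) link, on top of
// the original type and length checks. matches() uses Google RE2 syntax and
// tests the whole string, hence the .* on both sides of the pattern.
function isCleanText(text) {
  return text is string
      && text.size() < 80
      && !text.matches('.*https?://.*');
}

allow create: if
  request.auth != null &&
  request.resource.data.userId == request.auth.uid &&
  isCleanText(request.resource.data.sampleinputfield);

Since RE2 has no lookaheads, a rules-side deny-list stays fairly coarse; anything more elaborate (profanity lists, link reputation checks, and so on) is better handled with the Cloud Function approach mentioned above.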