How can I improve this script firewall?

Posted 2024-11-08 16:47:19


Recently, one of my servers has been targeted by a number of DoS attacks (thousands of requests/min) from some Chinese IPs (these IPs aren't always the same).

So, at the start of my framework, I made a little function to block an IP once it has made too many requests.

function firewall() {
  $whitelist = array('someips');

  $ip = $_SERVER['REMOTE_ADDR'];

  if (in_array($ip, $whitelist))
    return null;

  if (search($ip, $pathToFileIpBanned))
    die('Your ip did too many requests');

  appendToFile($ip, $pathTofileIpLogger); //< when the file reaches 13000 bytes, truncate it

  if (search($ip, $pathTofileIpLogger) > $maxRequestsAllowed)
    appendToFile($ip, $pathToFileIpBanned);
}
  • Basically, the script checks whether the current IP is found in a file 'ipBlocked'; if it's found, the script dies.
  • If it's not found, it adds the current IP to a logger file 'ipLogger'.
  • After this, it counts the occurrences of the IP in the file ipLogger; if they are > $max, it blocks this IP by adding it to the file ipBlocked.

At the moment it's working: it has banned some Chinese/Taiwanese IPs.

The bottleneck of this script is the search function, which must count the occurrences of a string (the IP) in a file. For this reason I keep the file small (the ipLogger file is truncated as soon as it reaches 600-700 logged IPs).

Of course, to add IPs to the file without having to worry about race conditions, I do it like this:

file_put_contents($file,$ip."\n",FILE_APPEND | LOCK_EX);
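The search() helper isn't shown in the question; a minimal sketch of what it might look like (one IP per line, name and signature assumed) is:

```php
<?php
// Hypothetical sketch of the search() helper used above: counts how many
// times $ip occurs in the log file, assuming one IP per line.
function search($ip, $path) {
    if (!is_file($path)) {
        return 0;            // no log yet, so no hits
    }
    // substr_count scans the whole file, which is the bottleneck the
    // question mentions; appending "\n" reduces false matches from IP
    // prefixes (e.g. 1.2.3.4 inside 1.2.3.45).
    return substr_count(file_get_contents($path), $ip . "\n");
}
```

The same function serves both call sites: a truthy count for the ban check, and a numeric count for the $maxRequestsAllowed comparison.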

The only problem I am experiencing is with people behind NAT. They all share the same IP, but their requests shouldn't be blocked.


Comments (4)

小红帽 2024-11-15 16:47:19


While this stops the requests before they do anything heavier, like DB reads and the like, you might want to consider taking this down a level to the web server, or even further to a software/hardware firewall.

The lower levels will deal with this far more gracefully and with a lot less overhead. Remember that by spinning up PHP, they're still consuming one of your workers for a while.

随心而道 2024-11-15 16:47:19


Here are my few notes, hope you find them useful.

In my opinion, the firewall function does too much, and its name isn't very specific. It handles the saving of IP visits, ending the script, and doing nothing, all in one. I'd expect the wall to be set aflame when calling this function.

I would go for a more object-oriented approach, where the firewall isn't named firewall, but something like a blacklist.

$oBlackList = new BlackList();

This object would be responsible for just the blacklist itself, nothing more.
It would be able to say whether an IP address is on the blacklist, and thus implement a function like:

$oBlackList = new BlackList();
if ($oBlackList->isListed($sIpAddress)) {
    // Do something, burn the intruder!
}

This way, you can be creative in how you'd like to handle things, and you're not limited to the body of a single function. You could expand the object with a function to add an address to the list; $oBlackList->addToList($sIpAddress); perhaps.

This way, the handling of visit counts, and their storage, isn't tied to your firewall's body. You could implement database storage or file storage (as you use now) and switch at any time without invalidating your blacklist.
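A minimal file-backed BlackList along these lines might look like this (the class and method names follow the answer; the storage path and one-IP-per-line format are assumptions):

```php
<?php
// Sketch of the suggested BlackList object, backed by a plain text file
// with one IP per line. The default path is hypothetical.
class BlackList {
    private $path;

    public function __construct($path = 'ipBlocked.txt') {
        $this->path = $path;
    }

    // true if the IP appears on its own line in the blacklist file
    public function isListed($sIpAddress) {
        if (!is_file($this->path)) {
            return false;
        }
        $ips = file($this->path, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
        return in_array($sIpAddress, $ips, true);
    }

    // append the IP; LOCK_EX keeps concurrent requests from interleaving writes
    public function addToList($sIpAddress) {
        file_put_contents($this->path, $sIpAddress . "\n", FILE_APPEND | LOCK_EX);
    }
}
```

Swapping the file for a database later would only mean reimplementing these two methods.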

Anyway, just rambling!

提笔书几行 2024-11-15 16:47:19


You should create a file for every blocked IP. That way, you can block the visitor through .htaccess as follows:

# redirect if ip has been banned
ErrorDocument 403 /
RewriteCond %{REQUEST_URI} !^/index\.php$
RewriteCond /usr/www/firewall/%{REMOTE_ADDR} -f
RewriteRule . - [F,L]

As you can see, it only allows access to index.php. That way, you can do a simple file_exists() in the first line, before heavy DB requests are made, and you can throw up an IP-unlocking captcha to avoid permanently blocking false positives. This gives a better user experience than a simple hardware firewall that returns no information and has no unlocking mechanism. Of course, you could serve a simple HTML text file (with a PHP file as the form's target) to avoid invoking the PHP parser at all.
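The first-line check described here can be as small as the following (the helper name is made up; the directory matches the .htaccess example above):

```php
<?php
// Hypothetical first-line check matching the .htaccess rule: a marker
// file named after the client IP in the firewall directory means banned.
function isBlocked($firewall_dir, $ip) {
    return file_exists($firewall_dir . $ip);
}

// At the very top of index.php (paths are assumptions):
// if (isBlocked('/usr/www/firewall/', $_SERVER['REMOTE_ADDR'])) {
//     http_response_code(403);
//     exit('Too many requests - solve the captcha to unlock your IP.');
// }
```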

Regarding DoS, I don't think you should rely only on IP addresses, as that would result in too many false positives. Alternatively, you could have a second level for whitelisting proxy IPs, for example if an IP has been unblocked multiple times. Some ideas for blocking unwanted requests:

  1. Is it a human or a crawler? (HTTP_USER_AGENT)
  2. If a crawler, does it respect robots.txt?
  3. If a human, is he accessing links that humans don't visit (like links made invisible through CSS, moved out of the visible range, or hidden form fields ...)?
  4. If a crawler, what about a whitelist?
  5. If a human, is he opening links like a human would? (Example: in the footer of Stack Overflow you will find tour, help, blog, chat, data, legal, privacy policy, work here, advertising info, mobile, contact us, feedback. I don't think any human will open 5 or more of them, but a bad crawler might = block its IP.)

If you really want to rely on IPs/min, I suggest not using LOCK_EX with only a single file, as it will create a bottleneck (as long as the lock exists, all other requests have to wait). You need a fallback file for as long as a lock exists. Example:

$i = 0;
$ip_dir = 'ipcheck/';
if (!file_exists($ip_dir) || !is_writable($ip_dir)) {
    exit('ip cache not writeable!');
}
$ip_file = $ip_dir . $_SERVER['REMOTE_ADDR'];
// try 12.34.56.78_0, 12.34.56.78_1, ... until a non-blocking exclusive lock succeeds
while (($fp = fopen($ip_file . '_' . $i, 'a')) && !flock($fp, LOCK_EX | LOCK_NB, $wouldblock) && $wouldblock) {
    fclose($fp);
    $i++;
}
// by now we have an exclusive and race-condition-safe lock
fwrite($fp, time() . PHP_EOL);
fclose($fp);

This will result in a file called 12.34.56.78_0, and if it hits a bottleneck it will create a new file called 12.34.56.78_1. Finally, you only need to merge those files (respecting the locks!) and check for too many requests in a given time period.
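Merging the per-IP files and counting recent hits could be sketched like this (the helper name and the 60-second window are arbitrary; the file layout follows the example above):

```php
<?php
// Count requests by one IP in the last $window seconds, across all of
// its fallback files (12.34.56.78_0, _1, ...). Each line is a time() stamp.
function countRecentRequests($ip_dir, $ip, $window = 60) {
    $count = 0;
    $now = time();
    foreach (glob($ip_dir . $ip . '_*') as $file) {
        foreach (file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $ts) {
            if ((int)$ts >= $now - $window) {
                $count++;
            }
        }
    }
    return $count;
}
```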

But now you face the next problem: you need to run a check on every request, which is not really a good idea. A simple solution would be to run the check only when mt_rand(0, 10) == 0. Another solution is to check filesize(), so we don't need to open the file; this works because the file size grows with every request. Or check filemtime(), to see whether the last file change happened in the same second or only one second ago. P.S. Both functions are equally fast.

And with that I come to my final suggestion: use only touch() and filemtime():

$ip_dir = 'ipcheck/';
$ip_file = $ip_dir . $_SERVER['REMOTE_ADDR'];
// check if the last request was at most one second ago
if (file_exists($ip_file) && filemtime($ip_file) + 1 >= time()) {
    // suspicious: record this request's microtime in a per-IP folder
    @mkdir($ip_dir . $_SERVER['REMOTE_ADDR'] . '/');
    touch($ip_dir . $_SERVER['REMOTE_ADDR'] . '/' . microtime(true));
}
touch($ip_file);

Now you have a folder for every IP that could be a DoS attacker, containing the microtime of each of its requests, and if you think it contains too many of those requests, you can block the IP using touch('firewall/' . $_SERVER['REMOTE_ADDR']). Of course, you should periodically clean the whole thing up.
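Deciding when "too many" is reached could then be a simple count of the marker files (the helper name and threshold are made up; the directory layout follows the answer):

```php
<?php
// Count the microtime marker files recorded for $ip and ban it once the
// count passes $threshold, using the same firewall/ marker convention.
function banIfFlooding($ip_dir, $firewall_dir, $ip, $threshold = 100) {
    $markers = glob($ip_dir . $ip . '/*');
    if ($markers !== false && count($markers) > $threshold) {
        touch($firewall_dir . $ip);   // marker file checked by .htaccess / file_exists()
        return true;
    }
    return false;
}
```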

My experiences using such a firewall (documented in German) have been very good.

独享拥抱 2024-11-15 16:47:19


Here is some very basic file/serialize code you could use as an example:

<?php
$ip = $_SERVER['REMOTE_ADDR'];

$ips = @unserialize(file_get_contents('%path/to/your/ipLoggerFile%'));
if (!is_array($ips)) {
  $ips = array();
}

if (!isset($ips[$ip])) {
  $ips[$ip] = 0;
}

$ips[$ip] += 1;
file_put_contents('%path/to/your/ipLoggerFile%', serialize($ips));

if ($ips[$ip] > $maxRequestsAllowed) {
  // return false or something
}

Of course, you'll have to integrate this in some way into your firewall function.
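One caveat: the read-modify-write above isn't atomic, so two concurrent requests can lose counts. A locked variant of the same idea (the helper name is hypothetical) might be:

```php
<?php
// Increment the counter for $ip in a serialized-array file, holding an
// exclusive lock across the whole read-modify-write cycle.
function bumpIpCount($file, $ip) {
    $fp = fopen($file, 'c+');          // create if missing, don't truncate
    flock($fp, LOCK_EX);
    $ips = @unserialize(stream_get_contents($fp));
    if (!is_array($ips)) {
        $ips = array();
    }
    $ips[$ip] = isset($ips[$ip]) ? $ips[$ip] + 1 : 1;
    rewind($fp);
    ftruncate($fp, 0);
    fwrite($fp, serialize($ips));
    flock($fp, LOCK_UN);
    fclose($fp);
    return $ips[$ip];                  // the updated request count for $ip
}
```

The returned count can then be compared against $maxRequestsAllowed exactly as in the snippet above.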
