I have a web host that does not allow me to edit iptables. From time to time I get light DoS attacks (about 300 requests/sec, usually not distributed). I decided to write a PHP script that will block those IPs. First I tried storing all requests from the last 10 seconds in a database and looking up abusing addresses on every request, but I quickly realized that this way I have to make at least one database request for every DoS request, and that's not good. Then I optimized the approach as follows:
1. Read 'deny.txt' with blocked IPs
2. If it contains the requesting IP, die()
--- at this point we have filtered out all known attacking IPs ---
3. Store the requesting IP in the database
4. Clean up all requests older than 10 secs
5. Count requests from this IP; if it is greater than the threshold, add it to 'deny.txt'
This way, a new attacking IP will make only Threshold requests to the database before it gets blocked.
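Steps 1-2 (the deny.txt check) aren't in the code below; a minimal sketch of them, assuming 'deny.txt' holds one IP per line, would be something like:

// Reject requests from IPs already on the deny list
$ip = $_SERVER['REMOTE_ADDR'];
$denied = file('deny.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
if ($denied !== false && in_array($ip, $denied)) {
    header('HTTP/1.1 403 Forbidden');
    die();
}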
So, the question is, does this approach have optimal performance? Is there a better way to do this task?
Here's my code:
$ip = $_SERVER['REMOTE_ADDR'];
// Log ip
$query = "INSERT INTO Access (ip) VALUES ('$ip')";
mysql_query($query) or HandleException("Error on logging ip access: " . mysql_error() . "; Query: " . $query);
// Here should be database cleanup code
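// For example, a sketch of that cleanup (assuming the same 1-minute window as the count below):
$query = "DELETE FROM Access WHERE time < SUBTIME(NOW(), '00:01:00')";
mysql_query($query) or HandleException("Error on access log cleanup: " . mysql_error() . "; Query: " . $query);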
// Count requests
$query = "SELECT COUNT(*) FROM Access WHERE ip='$ip' AND time > SUBTIME(NOW(), '00:01:00')";
$result = mysql_query($query) or HandleException("Error on getting ip access count: " . mysql_error() . "; Query: " . $query);
$num = mysql_fetch_array($result);
$accesses = $num[0];
// Ban ip's that made more than 1000 requests in 1 minute
if($accesses > 1000)
{
    file_put_contents('.htaccess', 'deny from ' . $ip . "\n", FILE_APPEND | LOCK_EX);
}
and .htaccess stub:
order deny,allow
deny from 111.222.33.44
deny from 55.66.77.88
Try using Memcache; lookups will be much faster.
You can use the IP address as the key. Read the value. If it doesn't exist, initialize it to 0; if it is a number, increment it. Then write it back with a TTL of 1 second or 10 seconds, or whatever period you want. If the count is above a threshold, there were too many requests in the TTL period and you can block the IP.
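A rough sketch of that, assuming the PHP Memcache extension and a memcached server on localhost (the threshold and TTL are just placeholders):

$ip = $_SERVER['REMOTE_ADDR'];

$memcache = new Memcache();
$memcache->connect('localhost', 11211);

// Read the counter for this IP; get() returns false if the key doesn't exist yet
$count = $memcache->get($ip);
if ($count === false) {
    $count = 0;
}
$count++;

// Write it back with a 10-second TTL
$memcache->set($ip, $count, 0, 10);

// Too many requests within the TTL period: block
if ($count > 300) {
    header('HTTP/1.1 403 Forbidden');
    die();
}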
Update: I just realized that writing the updated value back gives the key a new TTL of at least one second each time, so an IP could end up blocked if it made <threshold> requests at continuous intervals of just under a second...
I don't think it renders this answer completely useless, but it is something to keep in mind if you want to make a literal implementation of what I described.
Blocking can be done permanently (by logging the IP in a database), or for a shorter period. You can use Memcache for that too, by storing a marker (like 'X') instead of a counter and setting the TTL to a longer period. The counter script must check that the read value isn't 'X', or else the counter will overwrite the block.
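Something like this (again just a sketch, reusing $memcache and $ip from above; the one-hour block period is an arbitrary choice):

$value = $memcache->get($ip);

// An 'X' marker means this IP is blocked; don't overwrite it with the counter
if ($value === 'X') {
    header('HTTP/1.1 403 Forbidden');
    die();
}

$count = ($value === false) ? 1 : $value + 1;

if ($count > 300) {
    // Too many requests: replace the counter with the block marker for an hour
    $memcache->set($ip, 'X', 0, 3600);
    header('HTTP/1.1 403 Forbidden');
    die();
}

// Otherwise keep counting with the short TTL
$memcache->set($ip, $count, 0, 10);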
I would choose Memcache for this even if you want to make the blacklist persistent. Lookups (which you need to do for each request) are much faster. You can save blacklisted IPs in the database and restore that list periodically, or at least when the server is rebooted. That way, you get a persistent blacklist without the overhead of having to check the database on each request.
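For instance, a small cron or startup script could reload the persisted list into Memcache (a sketch; the Blacklist table name is made up):

$memcache = new Memcache();
$memcache->connect('localhost', 11211);

// Re-populate the in-memory blacklist from the database
$result = mysql_query("SELECT ip FROM Blacklist");
while ($row = mysql_fetch_array($result)) {
    // expire = 0 means the marker never expires; these IPs stay blocked
    $memcache->set($row[0], 'X', 0, 0);
}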