On 6/14/14 2:11 PM, Anton Vodonosov wrote:
For example: http://trac.common-lisp.net/cmucl
It was up a day or so ago, but it is down again.
Also, as I type this, the load avg on c-l.net is 46, with about 20 viewvc.cgi processes running.
Ray
Hi,
It seems cl-net is being crawled by bots: there are lots of active Trac processes as well as viewvc.cgi processes. That seems logical: there used to be a pretty extensive robots.txt file that should prevent scanning of gitweb, viewvc, and large parts of Trac. Apparently, it's not there anymore.
Regards,
Erik.
On Tue, Jun 17, 2014 at 6:24 AM, Raymond Toy toy.raymond@gmail.com wrote:
Clo-devel mailing list Clo-devel@common-lisp.net http://common-lisp.net/cgi-bin/mailman/listinfo/clo-devel
Created a robots.txt for viewvc and gitweb. System load is now down below 10. If/when required, I'll look at creating a robots.txt for Trac as well. That's more involved, since robots.txt doesn't support URL patterns, so Trac's per-project paths would each need to be listed explicitly.
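For viewvc and gitweb, a robots.txt along these lines would do the job; the exact paths here are assumptions about c-l.net's URL layout, not the file actually deployed:

```
# Sketch: keep all crawlers out of the CGI front-ends (paths are assumed).
User-agent: *
Disallow: /cgi-bin/viewvc.cgi
Disallow: /viewvc/
Disallow: /gitweb/
```

Each Disallow line is a literal path prefix, which is why this works for fixed mount points like these but not for patterns spread across many Trac project URLs.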
Regards,
Erik.
On Tue, Jun 17, 2014 at 10:57 AM, Erik Huelsmann ehuels@gmail.com wrote:
-- Bye,
Erik.
http://efficito.com -- Hosted accounting and ERP. Robust and Flexible. No vendor lock-in.
Hi Anton,
I killed all running Trac daemons. Hopefully that fixed it. When I have a bit more time, I'll have a look to see whether the robots.txt file for trac.common-lisp.net should be updated.
Bye,
Erik.
On Mon, Jul 21, 2014 at 8:48 AM, Anton Vodonosov avodonosov@yandex.ru wrote:
I am trying to use the CMUCL Trac again. It is barely working: long response times, and sometimes Gateway Timeouts. Maybe the bots are crawling it so hard that the system is overloaded again?
Best regards,
- Anton
Some bot was filling up the gsharp Trac ticket database: it had created 800,000+ tickets, making all access very slow. I've moved the gsharp Trac database out of the way for now and started a process to remove the spam tickets (all tickets numbered 13 and up).
Hopefully that will result in better performance for a while. I'll put gsharp's Trac back once it's cleaned. I'll also remove guests' ability to create tickets.
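A cleanup of that kind could be done directly against a copy of Trac's SQLite database. This is a sketch, not the actual process that was run: the table and column names follow Trac's default schema, the cutoff (13) comes from the message above, and it is demonstrated here on a small in-memory database rather than the real gsharp trac.db.

```python
# Sketch: delete all tickets at or above a cutoff id, plus their change
# history, from a Trac-style SQLite database. Assumed schema, toy data.
import sqlite3

def remove_spam_tickets(conn, first_spam_id=13):
    cur = conn.cursor()
    # Trac stores ticket comments/field changes in ticket_change, keyed by ticket id.
    cur.execute("DELETE FROM ticket_change WHERE ticket >= ?", (first_spam_id,))
    cur.execute("DELETE FROM ticket WHERE id >= ?", (first_spam_id,))
    conn.commit()
    return cur.rowcount  # rows deleted from `ticket` by the last statement

# Demonstration on a toy database standing in for gsharp's trac.db:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ticket (id INTEGER PRIMARY KEY, summary TEXT)")
conn.execute("CREATE TABLE ticket_change (ticket INTEGER, field TEXT, newvalue TEXT)")
conn.executemany("INSERT INTO ticket VALUES (?, ?)",
                 [(i, "ticket %d" % i) for i in range(1, 30)])
deleted = remove_spam_tickets(conn)
remaining = conn.execute("SELECT COUNT(*) FROM ticket").fetchone()[0]
print(deleted, remaining)  # tickets 13..29 removed; 12 legitimate tickets remain
```

Doing this with SQL rather than one `trac-admin ticket remove` call per ticket matters at this scale: two statements instead of 800,000+ process invocations.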
Regards,
Erik.
On Mon, Jul 21, 2014 at 9:02 AM, Erik Huelsmann ehuels@gmail.com wrote:
Since a large number of clients were creating tickets and requesting invalid links from Trac, I've additionally installed Fail2Ban. Any IP that requests a specific type of invalid URL, or tries to create a ticket without authorization, five times gets "expelled" for 4 weeks.
Fail2Ban has already collected a significant number of IPs exhibiting this behaviour, and the pressure on the Trac daemons seems to have dropped: the load average on the box is down to 0.17.
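A jail of the kind described might look roughly like this in jail.local. The jail/filter name, log path, and findtime are assumptions, not the actual c-l.net configuration; only the retry count (5) and ban length (4 weeks) come from the message above:

```ini
# Hypothetical jail: ban an IP after 5 matching requests, for 4 weeks.
[trac-abuse]
enabled  = true
port     = http,https
filter   = trac-abuse        ; assumes a matching filter.d/trac-abuse.conf
logpath  = /var/log/apache2/access.log
maxretry = 5
findtime = 600
bantime  = 2419200           ; 4 weeks, in seconds
```

The companion filter file would hold a failregex matching the invalid-URL and unauthorized-ticket request lines in the web server log, with `<HOST>` marking the client IP to ban.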
Regards,
Erik.
On Wed, Jul 23, 2014 at 12:21 AM, Erik Huelsmann ehuels@gmail.com wrote:
-- Bye,
Erik.
http://efficito.com -- Hosted accounting and ERP. Robust and Flexible. No vendor lock-in.