Published: 2008-06-07
Last Updated: 2008-06-07 01:04:58 UTC
by Jim Clausing (Version: 1)
On 28 May, I posted a story asking for your input. Last weekend got busy, so I didn't post the results, but since I had another shift coming up, I figured I'd do it now. I got quite a few responses (my apologies for the length of this diary), but many of them made good points, so I'm going to share them here mostly unedited. So, without further ado....
* The conversation that started it all, from Steve, began with this: "On one of the web servers I help administer, I use a script that scans each file in the WWW folder. The script takes a search pattern and checks each file for a match. If there is one, you get an email showing the location of the file and what triggered the match. I have used this script to clean out many PHP-based shell scripts and was wondering if you know of anything I can add to the search string to give it more coverage.
String currently used:
$OOO0O0O00|r0nin|p,a,c,k,e,d|m0rtix|r57shell|c99shell|phpshell|void\.ru|phpremoteview|directmail|bash_history|\.ru/|brute*force|MultiViews|cwings|bitchx|eggdrop|guardservices|psyBNC|DALnet"
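For readers who want to roll their own, a minimal sketch of that kind of scan-and-mail script might look like the following; the pattern file, web root, and address are illustrative assumptions, not Steve's actual setup.

  #!/bin/bash
  # Sketch: grep the web root for suspicious strings and mail any hits.
  WEBROOT=/var/www                    # assumed web root
  PATTERN_FILE=/etc/scanpattern.txt   # holds one extended regex, e.g. the string above
  ADMIN=webmaster@example.com         # assumed admin address

  # -r recurse, -E extended regex, -H print filename, -f read pattern from file
  HITS=$(grep -rEHf "$PATTERN_FILE" "$WEBROOT")
  if [ -n "$HITS" ]; then
      echo "$HITS" | mail -s "Suspicious content on $(hostname)" "$ADMIN"
  fi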
* From Jason: "A WAF (Web Application Firewall).
As an InfoSec admin, one of the big things I feel is missing in my arsenal is real-time access to the Web logs of any servers I'm meant to be protecting.
WAFs are great (go modsecurity!) as they not only protect your sites from known attacks, but having access to the centralized logs (a WAF could protect dozens of servers) means you have the opportunity to post-process - looking for "odd" things - stuff that no filter can detect.
NIDS can't do this job - they aren't meant to - and can't handle HTTPS anyway. "
* From John: "From what I can tell, the Proventia G/M device picks up most of this garbage when tuned properly. It will pick up the SQL injection attempts; it also picks up the cross-site scripting from visiting compromised sites, and the transfer of the trojan if you happen to make it that far."
* From David: "For the most part, I have a script I call "Adaptive Firewall". It searches for "suspicious" activity in the log files. When it finds an entry, it grabs the IP which is in turn fully blocked from the server at the firewall. So an attack via one port will prevent further attacks even on other ports.
It works great against SSH brute force attacks. After seeing the first attempts, no additional attempts even touch the server. Same for most (known) web scan attacks (looking for .dlls, .exes, etc.). Though there is a small time delay between detection and blocking."
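David didn't include the script itself, but a minimal sketch of the idea, assuming iptables and OpenSSH's "Failed password ... from <IP>" log format (the threshold, paths, and state file are illustrative):

  #!/bin/bash
  # Sketch of an "adaptive firewall": find offending IPs in the auth log and
  # drop them completely at the firewall. Run from cron every few minutes.
  LOG=/var/log/auth.log
  THRESHOLD=5
  BLOCKED=/var/run/adaptive-blocked.txt   # remembers IPs we already blocked
  touch "$BLOCKED"

  grep 'Failed password' "$LOG" \
    | grep -Eo 'from [0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' \
    | awk '{print $2}' | sort | uniq -c \
    | while read count ip; do
        [ "$count" -lt "$THRESHOLD" ] && continue
        grep -qx "$ip" "$BLOCKED" && continue
        # Block the IP on every port, not just the one that was attacked
        iptables -I INPUT -s "$ip" -j DROP
        echo "$ip" >> "$BLOCKED"
      done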
* From Florian: "I'm too lazy to keep Tripwire rules maintained, so I just use a little bash script via cron to check md5sums of my site every 15 minutes. Quick 'n' dirty, but it works; it caught the compromise when I forgot to update my Google AdSense WordPress plugin and got owned."
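A sketch of that sort of quick-and-dirty check, assuming a baseline manifest is regenerated after every legitimate update (paths are illustrative):

  #!/bin/bash
  # Rebuild the baseline after each legitimate change:
  #   find /var/www -type f -exec md5sum {} + > /root/site.md5
  # Then run this from cron every 15 minutes. Note: this catches modified and
  # deleted files; newly added files need a separate comparison of file lists.
  if ! md5sum --quiet -c /root/site.md5 > /tmp/md5-check.out 2>&1; then
      mail -s "Site content changed on $(hostname)" webmaster@example.com < /tmp/md5-check.out
  fi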
* From Scott: "We have a simple script which runs every 30 seconds to monitor for changed content. If there's any unexpected content on there, we find out pretty quickly."
* From David: "I've been very happy with OSSEC (I guess part of the "etc" in the list from the blog posting). I've found it helpful both in file integrity monitoring/new file alerting and in the immediate feedback it provides to oddities in the log files it monitors. Combined with mod_sec, I get an early alert to anything odd going on. But my website is pretty small and traffic light -- that might well become too much chatter on a larger, more trafficked site.
In my former life, I wrote some monitoring scripts that, among other things, confirmed that our revenue generating links were always on our home page -- figuring they would be the first to be monkeyed with if we were hacked. We looked for ports open that should not have been -- even though the firewall would block the traffic, we wanted to know if any new ports were opened.
I imagine if I had a database driven site, with all the SQL injection attacks, I would be crawling the site and looking for odd things (maybe build a hash of links and look for any outside my domain that seemed to appear too many times; any src= tags outside my domain, and maybe any javascript or object src= tags that I didn't approve). I'd also script some monitoring of the SQL log for things that shouldn't be there -- better still, anything that does not look like approved use."
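As a rough illustration of that crawling idea, here is a sketch that fetches a page the way a visitor would and flags any absolute src= attribute pointing outside your own domain (the URL and domain are placeholders):

  #!/bin/bash
  # Flag src= attributes that point off-domain -- a common sign of injected
  # iframes or script tags. URL and domain are placeholders.
  URL=http://www.example.com/
  DOMAIN=example.com

  if wget -q -O - "$URL" \
       | grep -Eio 'src="?https?://[^" >]+' \
       | grep -iv "$DOMAIN" ; then
      echo "WARNING: off-domain src= found on $URL"
  fi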
* From Mike: "As I do not have the time to diagnose issues I have fallen back to an old concept, I keep master copies of all sites on a local server and use an automation enabled FTP program to compare them on a schedule. If anything has changed the site is forced back to it's original state. While this does wipe out any chance of reverse forensics it does serve the main purpose.. to protect your website visitors. And it works on any hosting server wjich provides FTP access without running any additional programs on the webserver!
Not very fancy but quite effective."
* From Andy: "Just a little note. Aside from using the usual mysql_escape_string in PHP, all search strings used throughout the site are submitted to a second database with only one randomly named table in it, so there's no chance of dropping tables, no logon tricks for it, etc.
I check this log every day just to see what people are doing; it records the IP, timestamp, and string entered."
* From Alec: "We try not to use application logfiles (Apache, IIS, whatever) for security monitoring. If the host is indeed compromised, you can't trust the contents of the logfiles anyway - I have first-hand experience of the traces of an IIS exploit being removed from the IIS logs by the exploit itself. The logs on the server's disk are corroborative evidence at best.
To get a "true" picture of what is actually going on, we ship application logs off-box ASAP via Snare Epilog, allowing for differential analysis of the two sets of logs. We also use Sguil for full-content capture straight off the wire, and collect Netflow data.
Any games of spot-the-SQL-injection can then be performed on what are hopefully unadulterated records of activity, and Netflow reporting can tell you if one of your servers has suddenly started sourcing traffic itself (malware C&C channel, etc.)."
* To which fellow handler Swa adds: "While in the Windows world it doesn't come native, syslog is a great way to get logs from servers/services and/or applications off to a central (or better: two central) servers that can even be independently managed (so as to make sure they aren't going to be swept up together with another attack, or even an insider job among your admins).
In mixed environments I've seen good use of Kiwi: http://www.kiwisyslog.com/"
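With the classic Unix syslogd, that off-box shipping is a one-line change per destination in /etc/syslog.conf; Swa's two independent collectors are just two forwarding lines (the hostnames are placeholders):

  # /etc/syslog.conf -- forward all facilities and priorities to two
  # independently managed collectors (UDP port 514)
  *.*    @loghost1.example.com
  *.*    @loghost2.example.com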
* From Joshua: "Nessus has the ability to mirror content from a website and check the contents for patterns. We do this anytime there is a new "big" insertion threat out there.
I also check zone-h.org and xssed.org every day to see if there are any reported defacements or XSS vulnerabilities within our domains.
We have Google Alerts set up with some common keywords (viagra, cialis, casino, ...) used in insertion attacks."
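The same keyword idea can also be run directly against your own pages rather than waiting for a search engine to index them; a minimal sketch (the URL list and keywords are illustrative):

  #!/bin/bash
  # Check our own pages for spam keywords typical of content-injection attacks.
  KEYWORDS='viagra|cialis|casino'
  for url in http://www.example.com/ http://www.example.com/about.html; do
      if wget -q -O - "$url" | grep -Eiq "$KEYWORDS"; then
          echo "Spam keyword found at $url" \
              | mail -s "Possible content injection on $(hostname)" webmaster@example.com
      fi
  done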
* From Mike: "PHPIDS
Web Application Security 2.0
http://php-ids.org/
PHPIDS (PHP-Intrusion Detection System) is a simple to use, well structured, fast and state-of-the-art security layer for your PHP based web application. The IDS neither strips, sanitizes nor filters any malicious input, it simply recognizes when an attacker tries to break your site and reacts in exactly the way you want it to. Based on a set of approved and heavily tested filter rules any attack is given a numerical impact rating which makes it easy to decide what kind of action should follow the hacking attempt. This could range from simple logging to sending out an emergency mail to the development team, displaying a warning message for the attacker or even ending the user's session."
* From Hector: "Right now, I'm using Nagios to check the integrity of the website pages and mod_security to log potential attacks. I'm going to try tripwire and AIDE."
* From Barry: "CVS... (or similar)... regularly export the files out onto the website (you can diff them to give early warning of attacks, or simply blat over the top) - this only deals with source, not database contents, so it doesn't handle drive-by SQL injection... still, no reason not to script up automated queries to search the database contents for badness - e.g., if your database content shouldn't have absolute URLs, what are they doing in there?"
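Barry's last point is straightforward to script; a sketch using the mysql command-line client, with the database, table, and column names as illustrative assumptions (credentials assumed to come from ~/.my.cnf):

  #!/bin/bash
  # Look for absolute URLs in database content that should not contain any.
  # Database, table, and column names below are illustrative.
  HITS=$(mysql -N -u monitor mydb \
           -e "SELECT id FROM posts WHERE body LIKE '%http://%';")
  if [ -n "$HITS" ]; then
      echo "$HITS" | mail -s "Absolute URLs found in database content" webmaster@example.com
  fi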
* From Janantha: "I think the best thing is to have a custom script that does the following. I'm thinking we can make the monitoring better using integrity hashing:
- Prior to uploading the finalized version to the web server, tar the whole directory and hash it.
- Create a crontab entry to regularly (every 2 minutes or so) tar and hash the home directory and save the hash in a non-public location outside /var/www/. It compares the current hash with the previous one, and as soon as it detects a change it alerts the administrator.
The condition is that the webmaster must have the latest hash for every major update made to that directory, and a clean backup in hand to restore from if something has happened."
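A minimal sketch of Janantha's scheme follows (paths are illustrative); note that because the whole tree is hashed, any change -- including a legitimate one -- trips the alert until the stored hash is refreshed:

  #!/bin/bash
  # Tar and hash the web root, compare against the stored hash, alert on change.
  # Run from cron every couple of minutes; refresh /root/site.hash (kept
  # outside the web root) after every legitimate update.
  CURRENT=$(tar -cf - /var/www 2>/dev/null | md5sum | awk '{print $1}')
  KNOWN=$(cat /root/site.hash 2>/dev/null)
  if [ "$CURRENT" != "$KNOWN" ]; then
      echo "Web content hash changed on $(hostname)" \
          | mail -s "Integrity alert" webmaster@example.com
  fi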
* From MysteryFCM: "I use a program I wrote called hpObserver, which notifies me of downtime and of changes to any of the pages ....
I also periodically go through the server logs to check for attempted exploits etc - tis all good fun!"
So there you have it. My thanks to everyone who took the time to write in. On my personal server, I use AIDE, SEC (the Simple Event Correlator) for near-real-time log monitoring, OSSEC, mod_security, and some home-grown scripts, but my site is basically static anyway, so it is probably overkill.
Source: http://isc.sans.org/