
Server monitoring with S2Mon - Part 2

Author: Emil Filipov

In part 1 I covered the reasons why it is in your best interest to monitor your servers, and how S2Mon can help with that task. Well, we know that monitoring can be all cool and shiny, but how hard is it to set up? After all, the (real or perceived) effort required for the initial configuration is the single biggest reason why people avoid monitoring. In this part I'll explore the configuration process with S2Mon.

1. Overview

The S2 system relies on an agent installed on the server side to send information to the central brain over an encrypted SSL connection. The agent we are using is, of course, an open-source script (written in Bash), so anyone can look inside it and see exactly what it is doing. I don't know about you, but I would feel very uneasy if I had to install some "proprietary" monitoring binary on my machines - it could be a remotely controlled trojan horse for all I know. So, keeping the agent open is key for us.

The open-source agent also makes another key point verifiable - it only *sends* information out to the central S2Mon servers; it does not *receive* any commands or configuration back. The communication here is one-way - from the agent to the S2Mon center. The S2Mon center cannot modify the agent's behavior in any way whatsoever.

2. Requirements

Since the agent is mainly written in Bash, it obviously requires Bash to be available on the monitored system. Fortunately, Bash is available on practically any Linux system released during the last 15 years or so. The other requirements are:

  • Perl
  • curl
  • bzip
  • netstat
  • Linux OS

These tools are all present in most contemporary Linux distributions, but in case you have any doubts, you can check out the prerequisites page for distro-specific tips.

The Linux OS requirement is the major one here - S2Mon currently runs on Linux only. We have plans to make it available to Mac OS, *BSD and Windows users in the future, but for the time being, these platforms are not supported.

3. Registering an account

You will obviously need an S2Mon account, so, in case you do not have one, head to https://www.s2mon.com/registration/. Once you submit your desired account name and your email address, you will be taken straight to your Dashboard, and your password will be sent over email to you (so make sure you get that email field right).

The Dashboard is very restricted at this point - you need to verify your email address to unlock the full functionality of the system. To complete the verification, simply click on the activation link you got in your mailbox. That's it, your S2Mon account is now fully functional!

4. Adding a server entry to the S2Mon site

OK, this is where the fun starts. Before the S2Mon system starts accepting any data from your server, you need to create a server record in the S2Mon system. Go to https://www.s2mon.com/host/add/ (or Servers -> Add host, if you prefer) and fill in the following form:

[Screenshot: the Add Host form]

Hostname should be a unique identifier of your server - it does not need to be a Fully Qualified Domain Name (FQDN), though it is a good idea to use one. For the Address, you should enter the external IP address of the host. This is the IP address where Ping probes will be sent, should you choose to enable them from the drop-down menu. It is a good idea to enable these probes if the server is ping-able; that way you will get an alert if the ping dies, which is usually an indication that something is wrong. The Label field is optional free text that you may use to describe your server. If you fill it in, it will be the server identifier used throughout the S2 site; otherwise the Hostname will be used.

After you submit the form, you will be presented with the basic steps that you will need to follow to get the probe running. Since you are reading this blog post, you can just copy the Pushdata Agent URL and ignore everything else :). The Pushdata Agent URL is the address where the agent will send all of the monitoring information, so it is the most important piece of data on that page. In case you forgot it, accidentally closed the page, or the dog ate your computer, don't worry - you can always get back to this page via Servers -> Edit button -> Probe setup tab.

5. Activating the services you want to monitor

Now go to https://www.s2mon.com/servers/. You will see the list of your servers there, but there is also a convenient panel where you can enable or disable certain services. Go on and activate the ones you are interested in (or, if you are like me - all of them):

[Screenshot: the service activation panel]

6. Running the probe on the server

This is the trickiest part of it all, as there are a lot of different ways to do it, depending on the server controls you have at hand. I'll assume that you have SSH access to the server, so you can run commands directly. If you do not have this kind of access though, you may still be able to run the S2 probe if you can:

  • Download the probe archive, extract it, and put the extracted files onto your server;
  • Run a periodic task (cron job) every minute, which fires the agent script with your Pushdata Agent URL as its argument.

The S2Mon agent does NOT require a root account - you can run it from an unprivileged account. Even though I trust the agent completely, I run it from an unprivileged account on all my servers - it's a good approach security-wise, and it is tidier. In some cases, however, unprivileged accounts may not have access to all built-in metrics, so you might want to run the cron job with root privileges - it's up to you and your specific setup.
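If you decide to go the unprivileged route, a minimal sketch of the account setup could look like the following (the account name "s2mon" and the useradd invocation are assumptions - adjust them to your distribution and conventions):

# Create a dedicated, unprivileged account for the agent ("s2mon" is an arbitrary name).
sudo useradd --create-home --shell /bin/bash s2mon

# Switch to that account; the download/extract/cron steps below are then run as this user.
sudo su - s2mon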

So, regardless of which account you decide to run the agent under, you can log in with that account and do the following:

s2mon ~$ wget https://dc1.s2-monitoring.com/active-node/a/s2-pushdata.tar.gz # download
s2mon ~$ tar xzf s2-pushdata.tar.gz # extract
s2mon ~$ ls -la s2-pushdata/post.sh # verify that the post.sh script has executable permissions
s2mon ~$ cd s2-pushdata/
s2mon ~/s2-pushdata$ DEBUG_ENABLED=1 ./post.sh "https://dc1.s2-monitoring.com/rblmon/collector-vahzeegh/index.php/my-hostname.com" # Use your specific Pushdata Agent URL here, enclosed in single or double quotes!

You will get some debug output from the last command; it will abort if anything is missing (for example, if curl is not installed on the system). If everything is OK, the last line will indicate a successful data submission, e.g.:

DEBUG: POST(https://dc1.s2-monitoring.com/rblmon/collector-vahzeegh/index.php/my-hostname.com): 21 keys (8879 -> 2539 bytes).

The only thing that's left to do is to set the agent to be executed every minute. Again, there are a few different ways to do this, but the most common one is to run 'crontab -e', which will open up your user's crontab for editing. Then you only need to append the line:

* * * * * cd /path/to/s2-pushdata/ && ./post.sh "https://dc1.s2-monitoring.com/rblmon/collector-vahzeegh/index.php/my-hostname.com" &>/dev/null

Please make sure that you substitute /path/to/s2-pushdata/ with the actual path to the s2-pushdata directory on your system, and that you change the URL to the value you got after adding your host record on the S2Mon website (note: changing just the hostname part at the end will NOT work).

7. Profit!

OK, if you were able to complete steps 4, 5 and 6, then you should see the nifty monitoring widget on your S2Mon Dashboard turn all green. Congrats, your server is now monitored and you are recording the history of its most intimate parameters!

[Screenshot: the Dashboard widget showing an OK status]

8. (Optional) MySQL monitoring configuration

The MySQL service requires some extra configuration for the S2Mon agent to be able to look inside it, so you will need to take some extra steps if you want to monitor any of the MySQL services. The easiest way to do this is to:

  • Create a MySQL user for S2Mon to use, with the following query (run as MySQL root or equivalent):
GRANT USAGE ON *.* TO 's2-monitor'@'localhost' IDENTIFIED BY '*******';

Make sure to replace '*******' with a completely random password. Don't worry, you will not need to remember it for long!

  • Create the files /etc/s2-pushdata/mysql-username and /etc/s2-pushdata/mysql-password on your system, and put the username (s2-monitor in this case) and password in the respective file (on a single line).
  • Change the ownership and permissions of those files so that only the user you are running S2Mon under can read them (for example, set the mode to 0400). A shell sketch of these two steps follows below.
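Put together, the file setup could look roughly like this (the "s2mon" account name is an assumption - use whatever unprivileged user actually runs the agent, and substitute the password you generated above):

# Run as root (or via sudo); the username/password must match the GRANT statement above.
mkdir -p /etc/s2-pushdata
echo 's2-monitor' > /etc/s2-pushdata/mysql-username
echo 'your-random-password-here' > /etc/s2-pushdata/mysql-password

# Make the files readable only by the account running the agent ("s2mon" is assumed here).
chown s2mon:s2mon /etc/s2-pushdata/mysql-username /etc/s2-pushdata/mysql-password
chmod 0400 /etc/s2-pushdata/mysql-username /etc/s2-pushdata/mysql-password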

After this is all done, you will see the MySQL data charts slowly starting to fill in with data in the next few minutes.

9. Post-setup

Now that you have a host successfully added to the interface, the next logical step would be to set up some kind of notification that pokes you when some parameter goes too high or too low. Additionally, you might want to enable other people to view or modify the server data in your account. Both tasks are easy with S2Mon, and I will show you how to do them in the next part.

Server monitoring with S2Mon - Part 1

Author: Emil Filipov

We've all heard that servers sometimes break for one reason or another. We often forget, however, how inevitable it is. While everything is working, the system looks like a rock solid blend of software and hardware. You get the feeling that if you don't touch it, it would keep spinning for years.

That's a very misleading feeling. The proper operation of a server depends on many dynamic parts, like having Internet connectivity, stable power supply, proper cooling, enough network bandwidth, free disk space, running services, available CPU power, IO bandwidth, memory, ... That's just the tip of the iceberg, but I think the point is clear - there is a lot that can go wrong with a server. 

Eventually some of those subsystems will break down for one reason or another. When one of them fails, it usually brings down others, creating a digital mayhem that can be quite hard to untangle. Businesses relying on the servers being up and running tend not to look too favorably on the inevitability of the situation. Instead of accepting the incident philosophically and being grateful for the great uptime so far, business owners instead go for questions like "What happened?!?!!", "What's causing this???" and "WHEN WILL IT BE BACK UP????!!!". Sad, I know. 

Smart people, who would rather avoid coming unprepared for those questions, have come up with the idea of monitoring, so that:

  • problems are caught in their infant stages, before they cause real damage (e.g. slowly increasing disk space usage);
  • when some malfunction does occur, they can cast a quick glance over the various monitoring gauges and quickly determine its root cause;
  • they can follow trends in the server metrics, so they can both get insight into issues from the past and predict future behavior.

These are all extremely valuable benefits, and it's widely accepted that the importance of server monitoring comes second only to the criticality of backups. Yet, there are more servers out there without proper monitoring than you would expect. The main reasons not to set up monitoring are all part of our human nature, and can be summed up as "what a hurdle to install and configure...", "the server is doing its job anyway..." and my favorite, "I'll do it... eventually".

I have some news for the Linux server administrators - you have an excuse no more. We've come up with a web monitoring system for your servers that is easy to set up, rich in functionality and completely free (at least for the time being). Go on and see a demo of it, if you don't believe me. If you decide to subscribe, it will take less than a minute. Adding a machine to be monitored basically boils down to downloading a Bash script and setting it up as a cron job (you'll get step-by-step instructions after you log in and add a new server record on the web). And if you want to integrate S2Mon into a custom workflow/interface of yours, there is API access to everything (in fact, the entire S2Mon website is one big API client).

Once you hook up your server to the system, you will unlock a plethora of detailed stats, presented in interactive charts like this one:

[Chart: Apache children]

What we see above is a pretty picture of the load falling on the Apache web server. Apparently we've had the same pattern repeating over the last week. That's visual proof that the web server workload varies a lot throughout the day (nothing unexpected, but we can now actually measure it!).

OK, I now want to see how my disk partitions are faring, and when I should plan for adding disk space:

[Chart: Disk usage stats]

Both partitions are steadily growing, but if the current rate holds, there should be enough space for the next 5-6 months.

Hey, you know what, I just got some complaints from a user that a server was slow yesterday, was there anything odd?

[Chart: Load average]

Yep, most definitely. The load was pretty high throughout the entire afternoon. Believe it or not, this time it was not his virus-infested Windows computer...

Your boss wants some insight on a specific network service, say IMAP? There you go:

[Chart: IMAP - connections per service]

Wonder what your precious CPU spends its time on? See here:

[Chart: CPU stats]

As you can see, S2Mon can provide you with extremely detailed stats, ready to be used anytime you need them. Of course, there is a lot more to it, and I'll cover more aspects of the setup, configuration and day-to-day work with S2Mon in the next parts. As always, feedback is more than welcome!

Stayin' secure with Web Security Watch

Author: Emil Filipov

Is your server/website secure? How do you really know? Let me get back to this in a while. 

As you may be aware, there are a ton of security advisories released by multiple sources every day. That's a true wealth of valuable information flowing out on the Internet. Being aware of the issues described in these advisories could make all the difference between being safe and getting hacked; between spending a few minutes to patch up, and spending weeks recovering lost data, reputation and customer trust. So everyone takes advantage of the public security advisories, right?

Not really. See, there is the problem of information overload. There are really a lot of sources of security information, each of them spewing dozens of articles every given day. To make it worse, very few of those articles are really relevant to you. So, if you do want to track them, you end up manually reviewing the 99% of junk to get to the 1% that really concerns your setup. A lot of system/security administrators spend several dull hours every week going through reports that rarely concern them. Some even hire full-time dedicated operators to process the information. Others simply don't care about the advisories, because the review process is too time-consuming.

Well, we decided we can help with the major pains of the advisory monitoring process, so we built Web Security Watch (WSW) for this purpose. This website aggregates security advisories coming from multiple reputable sources (so you don't miss anything), groups them together (so you don't get multiple copies), and tags them based on the affected products/applications. The last action is particularly important, as tags allow you to filter just the items that you are interested in, e.g. "WordPress", "MySQL", "Apache". What's more, we wrote an RSS module for WordPress, so you can subscribe to an RSS feed which only contains the tags you care about. A custom security feed just for you - how cool is that? Oh, and in case you didn't notice - the site is great for security research. And it's free.

Even though WSW is quite young, it already contains more than 4500 advisories, and the number grows every day. We will continue to improve the site functionality and the tagging process, which is still a bit rough around the edges. If you have any feature requests or suggestions, we would be really happy to hear them - feel free to use the contact form to get in touch with us with anything on your mind.

Now, to return to my original question. You can't really tell if your site/server is secure until you see it through the eyes of a hacker. And that requires some capable penetration testers. Even after you have had the perfect penetration test performed by the greatest hackers in the world, however, you may end up being hacked and defaced by a script kiddie the next week, due to a vulnerability that just got disclosed publicly.

Which brings me to the basic truth about staying secure - security is not a state, it's a process. A large part of that process is staying current with the available security information, and Web Security Watch can help you with that part.

PyLogWatch is born

Author: Emil Filipov

Introducing PyLogWatch - a simple and flexible Python utility that lets you capture custom log files into a centralized Sentry logging server.

Here at MTR Design, we manage multiple web apps, servers and system components. All of them generate some kind of logs. Most of the time the logs are trivial and contain nothing that we should be concerned about. There is the odd case, however, where some log gets an entry that truly deserves our attention. You see, the signal-to-noise ratio in most logs is very low, so going over all of the logs by hand is an extremely boring and time-consuming task. Yet, there may be "gems" inside the logs that you really want to act on ASAP - say, someone successfully breaking into your server, or an email list going crazy and spamming your customers.

So, what solutions do we have at our disposal? The most noteworthy are Splunk (hosted service, expensive) and Logstash (Java, a pain to install, maintain and customize). I did not like any of them. What I did like was Sentry, which has a logging client (called Raven) available in a dozen languages. The only problem is that Sentry is meant for handling exceptions coming from applications - not for general-purpose logging.

Yet, Sentry has a lot of the features that we do need:

  • Centralized logging with nice Web UI
  • Users, permissions, projects
  • Aggregation, so that similar log messages get grouped together
  • Quick filters, letting you hide message classes you do not care about
  • Plugin system that lets you write your own message processing 
  • Flexible and easy to use logging clients

Since we already had Sentry for handling in-app logging, enabling it to handle general-purpose server logs felt like a very compelling idea. So we did it...

Enter PyLogWatch

... by writing a Python app that parses log files and feeds them to Sentry. The application is very small and simple, and you can run it on any server with a recent version of Python. You don't need to be root, there is no long-running daemon, and no special deployment considerations - just download, configure, run (by cron, or via other means of scheduling). Of course, PyLogWatch relies on you having a Sentry server, but that's not too hard to install either (see the docs), and you can always use the very affordable hosted Sentry service (see the pricing), which features a limited free account.
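To give you an idea of how light the deployment is, here is a rough sketch; the repository URL, script name and schedule below are illustrative assumptions, not the project's documented layout - the README is the authoritative source:

# Fetch PyLogWatch into the home directory of an unprivileged user
# (replace the URL with the real repository or archive location).
cd ~ && git clone https://example.com/pylogwatch.git

# Configure it: point it at your Sentry DSN and list the log files it should parse.

# Schedule it via cron, e.g. every five minutes (the script name is an assumption):
# */5 * * * * cd ~/pylogwatch && python pylogwatch.py >/dev/null 2>&1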

The PyLogWatch project is still in its infant stages - there are just a couple of *very* basic parsers (for Apache error logs and for syslog files), and no extensions for the Sentry server yet. Nevertheless, it has already proven very useful to us, since it enabled our developers to closely track the Apache error log files for the applications they "own", and swiftly react to any problem that shows up. In practice, each error line generates a "ticket" in Sentry, and it sticks up there until a project member explicitly marks it as resolved. As an optional feature, all project members receive an email whenever there is a new entry waiting to be resolved. 

What I love about this project is that it is a pretty much blank sheet of paper. I believe that using the combined power of custom parsers and Sentry plugins can yield magnificent results.

So what tool are you using for log tracking? What do you like/dislike about it, and what would you ideally like it to do? Feel free to share your thoughts.

Poking with Media Upload Forms

Author: Dimitar Ivanov

What can I say about file upload forms? Every pentester simply loves them - the ability to upload data on the server you are testing is what you always aim for. During a recent penetration test, I had quite a lot of fun with a form that was supposed to allow registered users of the site to upload pictures and videos in their profiles. The idea behind the test was to report everything as it was found, and the developers would fix it on the fly. The usual SQL injection and XSS issues they had no problems with, but the image upload turned out to be a real challenge. When I got to the file upload form, it performed no checks whatsoever. I tried to upload a PHP shell, and a second later I was doing the happy hacker dance.

The challenge

So the developers applied the following fix:

$valid = false;
if (preg_match('/^image/', $_FILES['file']['type'])) {
    $info = getimagesize($_FILES['file']['tmp_name']);
    if (!empty($info)) $valid = true;
} elseif (preg_match('/^video/', $_FILES['file']['type'])) {
    $valid = true;
} else {
    @unlink($_FILES['file']['tmp_name']);
}
if ($valid) {
    move_uploaded_file(
        $_FILES['file']['tmp_name'],
        'images'.'/'.$_FILES['file']['name']
    );
}

The code is now checking the type of the file and size of the images. However, there are a few issues with this check:

  • the type of the file is checked via the Content-Type header, which is passed to the script by the client, and therefore, can be easily modified;
  • the script is not checking the file extension, and you can still upload a .php file;
  • the check for the videos is only based on the Content-Type header.

Evasion

It is fairly easy to evade this kind of protection of file upload forms. The easiest thing, of course, is to upload a PHP script while changing the Content-Type header of the HTTP request to an image or video type. To do this, you need to intercept the outgoing HTTP request with a local proxy, such as Burp or Webscarab, though Tamper Data for Firefox will do just fine. You can also upload a valid image with PHP code inserted in its EXIF data, for example in the Comment field:

$ exiftool -Comment='<?php phpinfo(); ?>' info.php
    1 image files updated

When you upload the image with a .php extension, it will be interpreted by the PHP interpreter, and the code will be executed on the server. Depending on the server configuration, you might be able to upload the image with .php.jpg extension. If the check for the extension is not done correctly, and if the server configuration allows it, you can still get code execution. Easy, eh?

Protection

So what can be done to prevent this? With a mixture of secure coding and some server-side tweaks, you can achieve pretty secure file upload functionality.

  • [Code] Check for the Content-Type header. This may fool some script kiddies or less-determined attackers.
  • [Code] Check for the file extension. Replace .php, .py, etc. with, say, _php, _py, etc.
  • [Server] Disable script execution in the upload directory. Even if a script is uploaded, the web server will not execute it (see the sketch after this list).
  • [Server] Disable HTTP access to the upload directory if the files are only meant to be accessed by scripts through the file system. Otherwise, although the script will not be executed locally on the server, it could still be used by attackers in Remote File Inclusion (RFI) attacks: if they target another server with an application that has an RFI vulnerability and allow_url_include is on, they can upload a script on your server and use it to get a shell on the vulnerable machine.
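As an illustration of the two server-side points, here is a minimal sketch for Apache with mod_php, assuming the uploads land in /var/www/example.com/images and that AllowOverride permits these directives; otherwise place the same directives in a <Directory> block of the vhost:

# Assumed upload location - adjust to your own layout.
UPLOAD_DIR=/var/www/example.com/images

# Drop an .htaccess that makes Apache treat everything in the directory as static content.
cat > "$UPLOAD_DIR/.htaccess" <<'EOF'
# Serve files as-is, never hand them to a script handler.
SetHandler default-handler
RemoveHandler .php .phtml .pl .py .cgi
RemoveType .php .phtml
# With mod_php, also switch the PHP engine off for this directory.
php_flag engine off
EOF

# If the files should not be reachable over HTTP at all (second point above),
# "Deny from all" (Apache 2.2) / "Require all denied" (Apache 2.4) can be used instead.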

Conclusion

Developers often forget that relying on client-side controls is a bad thing. They should always code under the assumption that the application may be (ab)used by a malicious user. Everything on the client side can be controlled and therefore evaded. The more you check the user input, the better. And of course, the server configuration should be as hardened as possible.

Paranoid

Author: Dimitar Ivanov

A couple of years ago, one of our clients asked us to design a server setup that would host a PHP application for credit card storage. The application would have to be accessible from different locations, and only by their employees.

Below is the design guide, produced by our team.

What we tried to do was make the security as paranoid as possible while still leaving the system in a usable state. Of course, there is always something else you can do to tighten the security even more, but that would lead to functional inconveniences which we'd rather not live with.

The principle we followed was "deny all, allow certain things." Accordingly, the main design rules are:

  • Close all doors securely.
  • Open some doors.
  • Closely monitor the activity of the doors you opened.
  • Always be alert and monitor for suspicious activity of any kind (newly opened doors, unknown processes, unknown states of the system, etc)

Server installation and setup notes

  • Install a bare Linux, no services at all running ("netstat -lnp" must show no listening ports).
  • Install an intrusion detection system (which monitors system files for modifications).
  • Use the 'grsecurity' Linux kernel patches - they help against a lot of 'off-the-shelf' exploits
  • (door #1) Install the OpenSSH server (port 22), so that you can manage the server.
    • Disallow password logins, allow ONLY public keys, SSH v2.
    • Set PermitUserEnvironment to "yes".
    • Set a "KEYHOLDER" environmental variable in the ~/.ssh/authorized_keys file.
    • Send an e-mail if the KEYHOLDER variable is not set when a shell instance is started.
  • Set up an external DNS server in "/etc/resolv.conf" for resolving.
  • (door #2) Install a web server, for example Apache.
    • Leave only the barely needed modules.
    • Set up the vhost to work only with SSL, no plain HTTP (http://httpd.apache.org/docs/2.2/ssl/ssl_howto.html).
    • Purchase an SSL certificate for the server's vhost, so that clients can validate it.
    • Do not set up multiple vhosts on the server; this server will have only one purpose - to store and send data securely; don't be tempted to assign more tasks here.
    • Install a Web Application firewall (mod_security, etc.) - it will detect common web-based attacks. Monitor its logs.
    • Limit HTTP methods to the expected ones only; unexpected HTTP methods should end up in the error logs and raise an eyebrow (generate alerts).
    • Disable directory listing in Apache.
    • Disable multiviews/content negotiation in Apache if your app does not rely on them.
  • Install an Application Firewall (e.g. AppArmor) - apps should not try to access resources they have no (legal) business with. For example, Apache should not try to read into /root/.
  • Install a MySQL server and bind it to address 127.0.0.1 so that access over the network isn't possible.
  • Install a mail server like Exim or Postfix but let it send only locally generated e-mails; there is no need to have a fully functional mail server, listening on the machine.
  • Firewall the INPUT and FORWARD iptables chains completely (set the default policy to DROP), except for the following simple rules (see the sketch after this list):
    • INCOMING TCP connections TO port 22 FROM your IP address(es) - allow enough IP addresses, so that you don't lock yourself out;
    • INCOMING TCP connections TO port 443 FROM your clients' IP address(es) - the CRM application's IP address, etc.;
    • Allow INCOMING TCP, UDP and ICMP connections which are in state ESTABLISHED (i.e. initiated by the server or on the SSH port).
  • Log remotely, so that if the system does get compromised, the attacker wouldn't be able to completely cover their traces. Copying over the log files at designated intervals is OK-ish, but real-time remote logging (like syslog over SSL) is much better, as there would be no window where the logs could be erased/tampered with. Make an automatic checker which confirms that the remote logs and the local logs are the same - an alarm bell should go off if they aren't.
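For the firewall item above, the rules could translate into iptables along these lines (the addresses are documentation placeholders - double-check that your own management IP is included before applying a DROP policy, or you will lock yourself out):

# Placeholder addresses - replace with your real management and client IPs.
ADMIN_IP=198.51.100.10
CLIENT_IP=203.0.113.20

# Default-deny on incoming and forwarded traffic.
iptables -P INPUT DROP
iptables -P FORWARD DROP

# Loopback traffic is needed locally (MySQL bound to 127.0.0.1, etc.).
iptables -A INPUT -i lo -j ACCEPT

# Allow packets that belong to connections initiated by the server or already accepted.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# SSH only from the management address(es).
iptables -A INPUT -p tcp -s "$ADMIN_IP" --dport 22 -j ACCEPT

# HTTPS only from the client application's address(es).
iptables -A INPUT -p tcp -s "$CLIENT_IP" --dport 443 -j ACCEPT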

Door #1 (SSH) can be considered closed and monitored:

  • It works only with a few IP addresses.
  • Does not allow plain text logins, so brute-force attacks are useless.
  • A notification is sent when the door is opened by unauthorized users.

Door #2 (the web server) must be taken special care of:

  • Review the server's access log daily with regard to how many requests were made => if there are too many requests, review the log manually and see which application/user made them; set a very low threshold as a start and increase it accordingly with time (see the sketch below).
  • Review the server's error log => send a mail alert if it has new data in it.
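A crude cron-able sketch of these two checks, assuming Apache logs under /var/log/apache2, daily log rotation, and a working local "mail" command (paths, threshold and recipient are assumptions):

#!/bin/bash
ACCESS_LOG=/var/log/apache2/access.log
ERROR_LOG=/var/log/apache2/error.log
THRESHOLD=5000                  # start low, raise it as you learn the normal traffic
ADMIN=you@example.com

# Alert if the number of requests in the current log looks unusually high.
REQUESTS=$(wc -l < "$ACCESS_LOG")
if [ "$REQUESTS" -gt "$THRESHOLD" ]; then
    echo "Unusually high request count: $REQUESTS" | mail -s "Web traffic alert" "$ADMIN"
fi

# Alert if the error log has grown since the last run.
STATE=/var/tmp/errorlog.size
LAST=$(cat "$STATE" 2>/dev/null || echo 0)
NOW=$(stat -c %s "$ERROR_LOG")
if [ "$NOW" -gt "$LAST" ]; then
    echo "New entries in $ERROR_LOG" | mail -s "Web error log alert" "$ADMIN"
fi
echo "$NOW" > "$STATE"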

Consider using some kind of VPN (e.g. PPTP/IPSec/OpenVPN) as an added layer of network authentication. You can then bind the web server to the VPN IP address (so direct network access is completely disabled) and set the firewall to only allow the internal VPN IPs on port 443.

General server activity monitoring

  • Set the mail for "root" to go to your email address (crontab and other daemons send mail to "root").
  • Review the /var/log/syslog log of the server => send a mail alert if there is new data in it.
  • Do a "ps auxww" list of the processes => if there are unknown processes (as names, running user, etc) => send a mail alert to yourself.
  • Do a "netstat -lnp" list of the listening ports => mail alert if something changed here.
  • Test the firewall settings from an untrusted IP - the connections must be denied.
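The process and port checks could be scripted roughly like this (the state directory, recipient and field selection are assumptions; after reviewing an alert you promote the new snapshot to the baseline):

#!/bin/bash
ADMIN=you@example.com
STATE=/var/lib/server-watch
mkdir -p "$STATE"

# Listening ports: snapshot protocol + local address and compare with the stored baseline.
netstat -lnp 2>/dev/null | awk '/^(tcp|udp)/ {print $1, $4}' | sort > "$STATE/ports.new"
if ! diff -q "$STATE/ports.baseline" "$STATE/ports.new" >/dev/null 2>&1; then
    diff "$STATE/ports.baseline" "$STATE/ports.new" 2>&1 | mail -s "Listening ports changed" "$ADMIN"
fi

# Processes: snapshot owner + command name and compare with the stored baseline.
ps auxww | awk 'NR>1 {print $1, $11}' | sort -u > "$STATE/procs.new"
if ! diff -q "$STATE/procs.baseline" "$STATE/procs.new" >/dev/null 2>&1; then
    diff "$STATE/procs.baseline" "$STATE/procs.new" 2>&1 | mail -s "Process list changed" "$ADMIN"
fi

# After reviewing an alert, promote the snapshots:
#   cp "$STATE/ports.new" "$STATE/ports.baseline"
#   cp "$STATE/procs.new" "$STATE/procs.baseline"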

Finally, update the software on the server regularly. E-mail yourself an alert if there are new packages available for update.

Application Design Notes

The SSL (HTTPS) vhost will most probably run mod_php. When designing the application, the following must be taken care of:

  • Every user working with the system must be unique. Give unique login credentials to each of the employees, and let them know that their actions are being monitored personally and that they must not, under any circumstances, give their login credentials to other employees.
  • Enforce strong passwords. There are tips in the OWASP Authentication Cheat Sheet.
  • If the employees work fixed hours, any kind of login during off hours should be regarded as a highly suspicious event. For example, if employees work 9am-6pm, any successful/unsuccessful login made between 6pm and 9am should trigger an alert.
  • Store all data in encrypted form; use a strong encryption algorithm like AES-256.
  • Encrypt the data with a very long key.
  • Optional but highly recommended - do not store the key on the server. Instead, when the application starts (for example, right after the server has been rebooted), it must wait for the password to be entered. An example scenario is (a shell sketch of the operator side follows after this list):
    • The application expects that the encryption key would be found in the file /dev/shm/ekey; /dev/shm is an in-memory storage location - it doesn't persist upon reboots;
    • Manually open the file /dev/shm/ekey-tmp with "cat > /dev/shm/ekey-tmp", enter the password there, then rename the file to "/dev/shm/ekey";
    • The application polls regularly for this file, reads it and then immediately deletes it;
    • Wait and verify that the file was deleted from /dev/shm.
    • Now your key is stored only in memory and is much harder for an attacker to obtain.
  • Set up the webapp to access the MySQL server through an unprivileged user, restricted to a single database (*not* as MySQL's root).
  • Develop ACLs defining who can see which part of the information; split the information accordingly.
  • Every incoming GET, POST, SESSION or FILES request must be validated; do not allow bad input data.
  • Every unknown/error/bad state of the system (unable to verify input data, mysql errors, etc) must be mailed to you as a notification (do not mail details in a plain-text email, just a notification; then check it via SSH on the server).
  • Code should be clean and readable; do not over-engineer the system.
  • Make a log entry for EVERY action - both read and write ones; do NOT store any sensitive data in the logs.
  • Ensure that the application has a “safe mode” to which it can return if something truly unexpected occurs. If all else fails, log the user out and close the browser window.
  • Suppress error output when running in production mode - debug info on errors should only be sent back to the visitor in *development* mode. Once the app is deployed, debug output means leaking sensitive information.
  • Back up the data to an external server. The backup should be carried over a secure connection and kept encrypted.
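On the operator side, the /dev/shm key-entry scenario above could look roughly like this (the file names follow the example; reading, deleting and re-checking the file is the application's job):

# Run over SSH after a reboot, once the application is waiting for its key.
# /dev/shm is memory-backed, so the key never touches the disk.

# 1. Type the encryption key into a temporary file (finish with Ctrl-D).
cat > /dev/shm/ekey-tmp

# 2. Rename it to the name the application polls for.
mv /dev/shm/ekey-tmp /dev/shm/ekey

# 3. The application reads the file and deletes it immediately;
#    confirm it is gone before logging out.
sleep 5
ls /dev/shm/ekey 2>/dev/null && echo "WARNING: key file still present!"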

So far so good. Up to now, we should have a system which is secure, logs everything, sends alerts and can store and retrieve sensitive data. The only question is - how do we authenticate against the system in a secure manner?

The best way to achieve this is to implement a two-factor authentication: a username/password and a client certificate.

  • Set up your own CA and issue certificates for your employees (a sketch follows after this list):
  • Keep the CA root key and certificate in a secure place!! Nobody must be able to get the key, or else your whole certificate system will be compromised.
  • Set up the web server vhost to require a client certificate (How can I force clients to authenticate using certificates? from http://httpd.apache.org/docs/2.2/ssl/ssl_howto.html). This way, right after somebody opens up the login page, you would already know what client (employee) certificate they are using.
  • The client certificates must be protected with a password; this makes the security better - in order to log in, you must first unlock your client certificate with its password, then open the login page and provide your own user/pass pair.
  • Consider using Turing-test based login forms, e.g. http://community.citrix.com/display/ocb/2009/07/01/Easy+Mutli-Factor+Authentication . This will protect the passwords against keyboard sniffing.
  • The web server must
    • match each user/pass to their corresponding certificate;
    • have an easy mechanism for revoking (disabling) certificates, in case you decide to part ways with an employee.
  • Once logged in, create a standard PHP session and code as usual.
  • The most critical and important (and not very frequent) operations should be approved only after the user re-enters their password; this prevents replay attacks. For example, if you want to view the whole credit card number (and you don't usually need this), the system will first ask you "Re-enter your password and I'll show you the information". Banks usually use this kind of protection for every online transaction.
  • Expire sessions regularly - in a few hours or less of inactivity.
  • If possible, tie every session to its source IP address; that is - log the IP address upon login and don't allow other IP addresses to use this session ID. Note: some providers like AOL (used to) have transparent IP balancing proxies and with them the IP address of a single client may change in time; you cannot use this security method if you have such clients (try it).
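The CA and client-certificate part could be bootstrapped with OpenSSL roughly as follows; the file names, subjects and validity periods are arbitrary placeholders, and a production CA deserves more care (a proper CA configuration, a certificate revocation list, etc.):

# 1. Create the private CA (keep ca.key offline and well protected!).
openssl genrsa -aes256 -out ca.key 4096
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt -subj "/O=Example Ltd/CN=Example Internal CA"

# 2. Issue a certificate for an employee.
openssl genrsa -aes256 -out alice.key 2048
openssl req -new -key alice.key -out alice.csr -subj "/O=Example Ltd/CN=alice"
openssl x509 -req -days 365 -in alice.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out alice.crt

# 3. Bundle key + certificate into a password-protected PKCS#12 file for the employee's browser.
openssl pkcs12 -export -inkey alice.key -in alice.crt -certfile ca.crt -out alice.p12

# 4. In the Apache vhost, require a certificate signed by this CA:
#      SSLCACertificateFile /etc/apache2/ssl/ca.crt
#      SSLVerifyClient require
#      SSLVerifyDepth 1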

A system set up like this should be protected against most attacks. Here are a couple of scenarios:

  • Brute-force attacks at the login page - they will not succeed because a user/pass + client certificate are required. Actually, without a certificate, the login page will not be displayed at all.
  • Someone steals a user laptop - they cannot use the certificate (even if they know the user/pass pair), because they don't know its password.
  • Someone sees a user entering their passwords - they cannot use them because they don't have your client certificate.
  • An employee starts to harvest the customer data - you have a rate limit of requests per employee, and also a general rate limit of requests to your database - you get an alert and investigate this further manually (you have full logs on the server).
  • Someone hacks into your server and quickly copies the database + PHP scripts onto a remote machine - they cannot use the data, because it is encrypted and you never stored the key in a file on the server - you enter the key every time manually upon server start-up.
  • Someone initiates a man-in-the-middle attack and sniffs your traffic - you are using only HTTPS and never accept/disregard SSL warnings in your browser - you are safe, the traffic cannot be read by third parties.
  • Someone gets total control over a user's computer (installs a key logger and copies your files) - you are doomed, nothing can help you, unless the compromise gets noticed in time.
  • Someone really good hacks your server and spends a few days learning your system - if the intrusion detection system and the custom monitoring scripts didn't catch the hacker, and he spent days on your server trying to break your system, then you are in real trouble. This scenario has a very low probability; really smart hackers are usually not tempted to do bad things.

The integration of such a secure database system could be easy. For example, if you have a customer with name XYZ, you can assign a unique number for this customer in your CRM system. Then you can use the secure storage to save sensitive data about this customer by referring to this unique number in the secure database system. It is true that your employees will have to work with one more interface, the interface of the secure database system, but this is the price of security - you have to sacrifice convenience for it.

Conclusion

Careful implementation of all of the above measures will further increase the security of the system, making the server extremely resilient against cyber attacks. Remember though, security is not a state - it's a process. Hire someone to take care of all security-related tasks on an ongoing basis.

Another point - as great as this all sounds on paper, it's the implementation that counts. Be careful with the implementation of the different services, safeguards and applications.

The weakest point in such a system would be the employee computer. A hacker who knows the basic layout of the server (and that it follows the recommendations given so far) would focus on attacking the computer of some employee. Do not ignore the client side of the equation - it is quite often the weakest link in the chain.

We have plans to write a post about the client side too.

Server support enabled

Author: Milen Nedev

Two days of email/chat and phone ping-pong and your problem still exists. One support guru sends you to another, the second one asks you the same questions as the first, all say they'll call back, no one does, and no one has a clue... Does it sound familiar? And all this pours over you at the most improper moment, when you've already invested a great deal of money and time into your website or application and have been watching your clients grow in number.

It started as an interim support contract for a client's site, launched at the end of 2011 and tried out for a couple of months; now we think it's time to introduce our new service to the public – Server Support. Actually, the service was first tried and tested in the beginning of summer 2011, when we welcomed our first in-house system engineer. He put our hosting infrastructure in order, enriching our experience with his. Our good friends from KEO Films were the first to evaluate the usefulness of the service, as thanks to some bespoke optimization and skilled maintenance the performance capacity of the machines exceeded our and their expectations (and saved a lot of money too).

The next logical step was to offer this support to anyone who may need it. So now you can see for yourselves what good server support stands for:

  • “Office hours support” or “24/7 support” depending on your requirements
  • No more bot replies and "sick-of-it-all" operators - our small support team consists entirely of system engineers, and they will be the ones to answer your call, even in the middle of the night
  • Thorough inspection - prior to taking on an engagement, our team always spends some time checking out the code of the website and the application, and discussing with clients the priority of the issues to be handled. One never knows what's around the corner, so we prefer to have a certain idea about the actions to be undertaken and the sequence of these actions.
  • Our clients will be granted access to our own server monitoring application (we will soon post more info about it).

We take our work personally. In order to provide the most adequate support, our system engineers follow the inner relations between the servers' software and the hosted applications, as well as the impact of various events – software updates, introduction of new modules, etc. Thus they are a few steps closer to the solution when the server fails to perform. And you know that speed and accuracy are of great importance when your reputation and money are at stake.

Long live the system engineer!

Author: Nikolay Nedev

The arrival of a new member in the company has always been an event to be celebrated, so we can’t wait to express our satisfaction for welcoming on board Emil – our first and brand new system engineer.

For almost two months now he's been working hard to put some order in our hosting infrastructure. The latter is no piece of cake, as the number of servers is steadily growing, and in a world where even Amazon has cloudy Fridays, sanity is vulnerable and stress takes over.

But with Emil on our side – a server whisperer, Python zealot and Django fan – the team is at full strength to cope with any adversity the web could throw at us.

And what is more important – at last Milen can get a good night's sleep, not having to worry about the clouds in the web sky.