What we tried to do was make the security as paranoid as possible while still leaving the system in a usable state. Of course, there is always something else you can do to tighten the security even more, but that will lead to functional inconveniences which we'd rather not live with.
The principle we followed was "deny all, allow certain things." Therefore, the main design principles are:
- Close all doors securely.
- Open some doors.
- Closely monitor the activity of the doors you opened.
- Always be alert and monitor for suspicious activity of any kind (newly opened doors, unknown processes, unknown states of the system, etc.).
Server installation and setup notes
- Install a bare Linux, no services at all running ("netstat -lnp" must show no listening ports).
- Install an intrusion detection system (which monitors system files for modifications).
- Use the 'grsecurity' Linux kernel patches - they help against a lot of 'off-the-shelf' exploits.
- (door #1) Install the OpenSSH server (port 22), so that you can manage the server.
- Disallow password logins; allow ONLY public-key authentication, and only SSH protocol v2.
- Set PermitUserEnvironment to "yes".
- Set a "KEYHOLDER" environmental variable in the ~/.ssh/authorized_keys file.
- Send an e-mail if the KEYHOLDER variable is not set when a shell instance is started.
- Set up an external DNS server in "/etc/resolv.conf" for resolving.
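The KEYHOLDER idea above might look like the sketch below. The key owner name ("alice") and the profile.d path are assumptions, not prescriptions; the `environment=` option in authorized_keys only works when `PermitUserEnvironment yes` is set, as noted above.

```shell
# ~/.ssh/authorized_keys - one line per employee key (example name):
#   environment="KEYHOLDER=alice" ssh-rsa AAAA... alice@laptop
#
# /etc/profile.d/keyholder.sh - runs at every interactive login:
keyholder_check() {
    if [ -z "$KEYHOLDER" ]; then
        # In production, pipe this line to: mail -s "SSH ALERT" root
        echo "ALERT: shell started without KEYHOLDER"
    else
        echo "login by keyholder: $KEYHOLDER"
    fi
}
keyholder_check
```

Any shell that starts without the variable set (i.e. a login with an unlisted key, or a door opened some other way) triggers the alert.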
- (door #2) Install a web server, for example Apache.
- Leave only the barely needed modules.
- Set up the vhost to work only with SSL, no plain HTTP (http://httpd.apache.org/docs/2.2/ssl/ssl_howto.html).
- Purchase an SSL certificate for the server's vhost, so that clients can validate it.
- Do not set up multiple vhosts on the server; this server will have only one purpose - to store and send data securely; don't be tempted to assign more tasks here.
- Install a Web Application firewall (mod_security, etc.) - it will detect common web-based attacks. Monitor its logs.
- Limit HTTP methods to known-good ones only; unexpected HTTP methods should land in the error logs and raise an eyebrow (generate alerts).
- Disable directory listing in Apache.
- Disable multiviews/content negotiation in Apache if your app does not rely on them.
- Install an Application Firewall (e.g. AppArmor) - apps should not try to access resources they have no legitimate business with. For example, Apache should not try to read files under /root/.
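The "known-good HTTP methods" rule can be expressed as a small Apache 2.2 fragment; here it is generated from shell so it can be dropped into the config directory. The DocumentRoot and include path are assumptions - adjust them to your layout.

```shell
# Sketch: emit an Apache 2.2 snippet that denies every HTTP method except
# GET, POST and HEAD (anything else gets 403 and an error-log entry).
make_method_limit() {
    cat <<'EOF'
<Directory "/var/www">
    <LimitExcept GET POST HEAD>
        Order deny,allow
        Deny from all
    </LimitExcept>
</Directory>
EOF
}
# Intended use: make_method_limit > /etc/apache2/conf.d/limit-methods.conf
```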
- Install a MySQL server and bind it to address 127.0.0.1, so that access over the network isn't possible.
- Install a mail server like Exim or Postfix, but let it send only locally generated e-mails; there is no need to have a fully functional mail server listening on the machine.
- Firewall INPUT and FORWARD iptables chains completely (set default policy to DROP), except for the following simple rules:
- INCOMING TCP connections TO port 22 FROM your IP address(es) - allow enough IP addresses, so that you don't lock yourself out;
- INCOMING TCP connections TO port 443 FROM your clients' IP address(es) - the CRM application's IP address, etc.;
- Allow INCOMING TCP, UDP and ICMP packets which are in state ESTABLISHED (i.e. belonging to connections initiated by the server, or to already-allowed ones such as those on the SSH port).
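A minimal iptables sketch of the rules above. The addresses are documentation placeholders (203.0.113.10 as your admin IP, 198.51.100.0/24 as the clients) - substitute your own, and keep console access handy so you don't lock yourself out:

```shell
#!/bin/sh
# Default-deny firewall sketch - adapt addresses before use.
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Loopback traffic (MySQL on 127.0.0.1, local mail) must keep working.
iptables -A INPUT -i lo -j ACCEPT

# Door #1: SSH from the admin address(es).
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT
# Door #2: HTTPS from the clients (the CRM application, etc.).
iptables -A INPUT -p tcp -s 198.51.100.0/24 --dport 443 -j ACCEPT
# Replies to connections the server itself initiated.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```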
- Log remotely, so that if the system does get compromised, the attacker won't be able to completely cover their traces. Copying the log files over at designated intervals is OK-ish, but real-time remote logging (like syslog over SSL) is much better, as there is no window in which the logs could be erased or tampered with. Make an automatic checker which confirms that the remote logs and the local logs are the same - an alarm bell should go off if they aren't.
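The "logs must match" checker can be as simple as the sketch below; how you fetch the remote copy (scp, rsync over SSH, etc.) and the paths involved are assumptions.

```shell
# Sketch: compare a local log against the copy held on the log host.
# $1 = local file, $2 = the remote copy after fetching it, e.g.:
#   scp loghost:/srv/logs/$(hostname)/auth.log /tmp/auth.log.remote
logs_match() {
    if cmp -s "$1" "$2"; then
        echo "OK"
    else
        echo "ALERT: local and remote logs differ - possible tampering"
    fi
}
```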
Door #1 (SSH) can be considered closed and monitored:
- It works with only a few IP addresses.
- It does not allow password logins, so brute-force attacks are useless.
- A notification is sent when the door is opened by an unidentified user.
Door #2 (the web server) needs special care.
- Review the server's access log daily with regard to how many requests were made => if there are too many, review the log manually and see which application/user made them; start with a very low threshold and increase it over time.
- Review the error log of the server => send a mail alert if it has new data in it.
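The daily access-log review can be automated along these lines. The log format is assumed to be Apache's common/combined format (client IP in the first column), and the threshold is an example:

```shell
# Sketch: count requests per client IP and alert above a threshold.
check_request_rate() {
    accesslog=$1; limit=$2
    awk '{print $1}' "$accesslog" | sort | uniq -c |
    while read -r count ip; do
        if [ "$count" -gt "$limit" ]; then
            echo "ALERT: $ip made $count requests (limit $limit)"
        fi
    done
}
# Error-log alerting can reuse the same idea: remember the file size from
# the previous run and alert whenever the file grows.
```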
Consider using some kind of VPN (e.g. PPTP/IPSec/OpenVPN) as an added layer of network authentication. You can then bind the web server to the VPN IP address (so direct network access is completely disabled) and set the firewall to only allow the internal VPN IPs on port 443.
General server activity monitoring
- Set the mail for "root" to go to your email address (crontab and other daemons send mail to "root").
- Review the /var/log/syslog log of the server => send a mail alert if there is new data in it.
- Do a "ps auxww" list of the processes => if there are unknown processes (by name, running user, etc.) => send a mail alert to yourself.
- Do a "netstat -lnp" list of the listening ports => mail alert if something changed here.
- Test the firewall settings from an untrusted IP - the connections must be denied.
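The process and port checks above boil down to "diff the current state against a known-good snapshot". A generic helper (the baseline paths are assumptions):

```shell
# Sketch: alert when a snapshot differs from its saved baseline.
# Intended use (assumed locations):
#   netstat -lnp 2>/dev/null | sort > /tmp/ports.now
#   snapshot_diff /var/lib/baseline/ports /tmp/ports.now
#   ps auxww | awk '{print $1, $11}' | sort -u > /tmp/procs.now
#   snapshot_diff /var/lib/baseline/procs /tmp/procs.now
snapshot_diff() {
    baseline=$1; current=$2
    if ! diff "$baseline" "$current" > /dev/null 2>&1; then
        echo "ALERT: $current differs from baseline $baseline"
        # In production, mail the diff itself to root here.
    fi
}
```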
Finally, update the software on the server regularly. E-mail yourself an alert if there are new packages available for update.
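The update alert can hang off a simulated upgrade. This sketch is Debian/apt-flavoured - adjust the command for your distribution:

```shell
# Sketch: count packages a simulated "apt-get -s upgrade" would install;
# its "Inst" lines are the packages that would be upgraded.
count_pending() {
    grep -c '^Inst'
}
# Intended use from cron:
#   n=$(apt-get -s upgrade 2>/dev/null | count_pending)
#   [ "$n" -gt 0 ] && echo "$n packages need updating" | mail -s "updates" root
```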
Application Design Notes
The SSL (HTTPS) vhost will most probably run mod_php. When designing the application, the following must be taken care of:
- Every user working with the system must be unique. Give unique login credentials to each of the employees, and let them know that their actions are being monitored personally, and they must not in any case give their login credentials to other employees.
- Enforce strong passwords. There are tips in the OWASP Authentication Cheat Sheet.
- If the employees work fixed hours, any kind of login during off hours should be regarded as a highly suspicious event. For example, if employees work 9am-6pm, any successful/unsuccessful login made between 6pm - 9am should trigger an alert.
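The off-hours check is a one-liner once you have the hour of the login event. The 9:00-17:59 window matches the example above; the function name is invented for the sketch:

```shell
# Sketch: flag login attempts outside working hours.
off_hours() {
    hour=$1                      # 0-23, e.g. from: date +%H
    if [ "$hour" -lt 9 ] || [ "$hour" -ge 18 ]; then
        echo "ALERT: login attempt at ${hour}:00 is outside working hours"
    fi
}
```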
- Store any data in an encrypted form; use a strong encryption algorithm like AES-256.
- Encrypt the data with a very long key.
- Optional but highly recommended - do not store the key on the server. Instead, when the application is being started (for example when the server has just been rebooted), it must wait for the password to be entered. An example scenario is:
- The application expects that the encryption key would be found in the file /dev/shm/ekey; /dev/shm is an in-memory storage location - it doesn't persist upon reboots;
- Manually open the file /dev/shm/ekey-tmp with "cat > /dev/shm/ekey-tmp", enter the password there, then rename the file to "/dev/shm/ekey";
- The application polls regularly for this file, reads it and then immediately deletes it;
- Wait and verify that the file was deleted from /dev/shm.
- Now your key is stored only in memory and is much harder for an attacker to obtain.
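The application side of this hand-off can be sketched as below. The file names come from the scenario above; the polling behaviour and function name are assumptions about how the app waits:

```shell
# Sketch: pick up the key once, then remove the file immediately.
read_key_once() {
    keyfile=${1:-/dev/shm/ekey}
    [ -f "$keyfile" ] || return 1   # not delivered yet - poll again later
    key=$(cat "$keyfile")
    rm -f "$keyfile"                # from here on the key lives only in RAM
    printf '%s\n' "$key"
}
# Operator side after a reboot (as described above):
#   cat > /dev/shm/ekey-tmp        # type the key, finish with Ctrl-D
#   mv /dev/shm/ekey-tmp /dev/shm/ekey
```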
- Set up the webapp to access the MySQL server through an unprivileged user, restricted to a single database (*not* as MySQL's root).
- Develop ACL lists on who can see what part of the information; split the information accordingly.
- Every incoming GET, POST, SESSION or FILES request must be validated; do not allow bad input data.
- Every unknown/error/bad state of the system (unable to verify input data, mysql errors, etc) must be mailed to you as a notification (do not mail details in a plain-text email, just a notification; then check it via SSH on the server).
- Code should be clean and readable; do not over-engineer the system.
- Make a log entry for EVERY action - both read and write ones; do NOT store any sensitive data in the logs.
- Ensure that the application has a "safe mode" to which it can return if something truly unexpected occurs. If all else fails, log the user out and close the browser window.
- Suppress error output when running in production mode - debug info on errors should only be sent back to the visitor in *development* mode. Once the app is deployed, debug output equals leaking sensitive information.
- Back up the data to an external server. The backup should be carried out over a secure connection and kept encrypted.
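Both the at-rest encryption and the encrypted backup can lean on openssl. A round-trip sketch, using AES-256 as recommended above - the function names are invented, and the key file would typically be the in-memory key from /dev/shm:

```shell
# Sketch: symmetric AES-256 encryption with a key file. Usable both for
# data written to disk and for the off-site backup archive.
encrypt_file() {   # $1=plaintext  $2=ciphertext  $3=key file
    openssl enc -aes-256-cbc -pbkdf2 -salt -in "$1" -out "$2" -pass "file:$3"
}
decrypt_file() {   # $1=ciphertext $2=plaintext   $3=key file
    openssl enc -d -aes-256-cbc -pbkdf2 -in "$1" -out "$2" -pass "file:$3"
}
# Backup idea: encrypt the database dump, then ship it over SSH/scp to the
# external server - it stays unreadable there without the key, which never
# leaves this machine's RAM.
```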
So far so good. Up to now, we should have a system which is secure, logs everything, sends alerts and can store and retrieve sensitive data. The only question is - how do we authenticate against the system in a secure manner?
The best way to achieve this is to implement two-factor authentication: a username/password pair plus a client certificate.
- Set up your own CA and issue certificates for your employees:
- Keep the CA root certificate in a secure place!! Nobody must be able to get it, or else your whole certificate system will be compromised.
- Set up the web server vhost to require a client certificate (How can I force clients to authenticate using certificates? from http://httpd.apache.org/docs/2.2/ssl/ssl_howto.html). This way, right after somebody opens up the login page, you would already know what client (employee) certificate they are using.
- The client certificates must be protected with a password; this improves security - in order to log in, you must first unlock your client certificate with its password, then open the login page and provide your own user/pass pair.
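One way to run the in-house CA with plain openssl is sketched below. All file names, subjects and the CHANGEME passwords are placeholders; a real setup should also maintain a CRL so revocation (covered below) actually works.

```shell
# One-time: the CA key (password-protected!) and its root certificate.
openssl genrsa -aes256 -passout pass:CHANGEME -out ca.key 4096
openssl req -new -x509 -days 3650 -key ca.key -passin pass:CHANGEME \
    -subj "/CN=Example Internal CA" -out ca.crt

# Per employee: key, signing request, certificate signed by the CA.
openssl genrsa -out alice.key 2048
openssl req -new -key alice.key -subj "/CN=alice" -out alice.csr
openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key -passin pass:CHANGEME \
    -CAcreateserial -days 365 -out alice.crt

# Bundle for the employee's browser, protected by its own export password.
openssl pkcs12 -export -inkey alice.key -in alice.crt -certfile ca.crt \
    -passout pass:CHANGEME -out alice.p12
```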
- Consider using Turing-test based login forms, e.g. http://community.citrix.com/display/ocb/2009/07/01/Easy+Mutli-Factor+Authentication . This will protect the passwords against keyboard sniffing.
- The web server must
- match each user/pass to their corresponding certificate;
- have an easy mechanism for revoking (disabling) certificates, in case you decide to part ways with an employee.
- Once logged in, create a standard PHP session and code as usual.
- The most critical and important (and infrequently used) operations should be approved only after the user re-enters their password; this prevents replay attacks. For example, if you want to view a full credit card number (and you don't usually need to), the system will first ask you to re-enter your password before showing the information. Banks usually use this kind of protection for every online transaction.
- Expire sessions regularly - in a few hours or less of inactivity.
- If possible, tie every session to its source IP address; that is - log the IP address upon login and don't allow other IP addresses to use this session ID. Note: some providers like AOL (used to) have transparent IP balancing proxies and with them the IP address of a single client may change in time; you cannot use this security method if you have such clients (try it).
A system like this should be protected against most attacks. Here are a couple of scenarios:
- Brute-force attacks at the login page - they will not succeed because a user/pass + client certificate are required. Actually, without a certificate, the login page will not be displayed at all.
- Someone steals a user laptop - they cannot use the certificate (even if they know the user/pass pair), because they don't know its password.
- Someone sees a user entering their passwords - they cannot use them, because they don't have the user's client certificate.
- An employee starts to harvest the customer data - you have a rate limit of requests per employee, and also a general rate limit of requests to your database - you get an alert and investigate this further manually (you have full logs on the server).
- Someone hacks into your server and quickly copies the database + PHP scripts onto a remote machine - they cannot use the data, because it is encrypted and you never stored the key in a file on the server - you enter the key every time manually upon server start-up.
- Someone initiates a man-in-the-middle attack and sniffs your traffic - you are using only HTTPS and never accept or disregard SSL warnings in your browser - you are safe; the traffic cannot be read by third parties.
- Someone gets total control over a user's computer (installs a key logger and copies your files) - you are doomed; nothing can help you, unless the compromise does not go unnoticed.
- Someone really good hacks your server and spends a few days learning your system - if the intrusion detection system and the custom monitoring scripts didn't catch the hacker, and he spent days on your server trying to break your system, then you are in real trouble. This scenario has a very low probability; really smart hackers are usually not tempted to do bad things.
The integration of such a secure database system could be easy. For example, if you have a customer with name XYZ, you can assign a unique number for this customer in your CRM system. Then you can use the secure storage to save sensitive data about this customer by referring to this unique number in the secure database system. It is true that your employees will have to work with one more interface, the interface of the secure database system, but this is the price of security - you have to sacrifice convenience for it.
Careful implementation of all of the above measures will further increase the security of the system, making the server extremely resilient against cyber attacks. Remember though, security is not a state - it's a process. Hire someone to take care of all security-related tasks on an ongoing basis.
Another point - as great as this all sounds on paper, it's the implementation that counts. Be careful with the implementation of the different services, safeguards and applications.
The weakest point in such a system would be the employee computer. A hacker who knows the basic layout of the server (and it follows the recommendations given so far) would focus on attacking the computer of some employee. Do not ignore the client side of the equation - this is quite often the weakest link in the chain.
We have plans to write a post about the client side too.