Hello! We are MTR Design, and making websites is our speciality. We work with UK-based startups, established businesses, media companies and creative individuals, and turn their ideas into reality. This is a thrill and we totally love it.

Dizzyjam @ Music Hack Day

Author: Emil Filipov


If you had a slumberous February weekend, there is no reason to feel bad about it - after all, most of the world did. There was a special group of people, however, who gave up sleep and rest in favor of creating awesome applications that could change the way you and I experience music. Yes, I'm talking about the hackers who took part in the Music Hack Day event in San Francisco. These are the guys pushing the envelope, and these are the ideas to watch out for if you have anything to do with the music industry.

The event produced 66 projects, ranging from turning body outlines into soundwaves via a Kinect controller to a web platform for borrowing/renting musical instruments. It's a creativity explosion that is invisible as yet - the sort of mini-nova that bursts into billions of particles and, millions of years later, gives birth to planets and star systems. Well, in the IT gravitational field a million years passes like a single day, so we should expect the results quite soon!

Thanks to the organizers, we were able to do an online presentation of the Dizzyjam website, and more specifically of a new feature we've recently added - the Dizzyjam API. As you might expect, it's a web-based, RESTful API that enables you to access all Dizzyjam functions programmatically. It boasts a web console built into the docs, a WordPress plugin, bindings for Python and PHP, as well as a piece of unique functionality - creating new Dizzyjam users through your API account (see the manage/create_user method). During the hackathon the API was put to use by a very interesting project - Merchr. It's the why-did-not-I-think-of-it-first kind of project - a simple idea that could be a game changer one day. I sincerely hope the guys behind it will keep on hacking and bringing good stuff out!

Published in: Development, Projects

Get in business with Cotton Cart

Author: Milen Nedev

The new merchandising platform makes it possible for all of you to make money from your designs. Show us your style!

Cotton Cart, our newest project, has just launched.

Some of you are probably already familiar with Dizzyjam - our e-commerce and merchandising platform which we created to make it easy and risk-free for anyone in the music industry to make money from their merchandise.

In the past we've received quite a lot of requests from people who wanted to use Dizzyjam for trading non-music stuff. As those requests grew, we started thinking about either including a non-music section in the original website or creating an entirely new website for anyone who wants to sell merchandise, whatever their business activity. After a short deliberation we went for the second option, and just before Christmas we did a soft launch of Cotton Cart.

The new site follows the overall idea of www.dizzyjam.com - in only three simple steps anyone can start making money: upload your designs, create your products and start selling. You don't have to buy 100 blank t-shirts, organize printing or pile up stuff you can't sell. It won't cost you a penny. It will, however, take creativity and popularity to make anyone besides your granny buy your stuff. Cotton Cart is here to solve the popularity issue.

Who can use this website?

Everyone. It may be a graffitist who wants to get famous, the grocery shop around the corner where the best veggies are sold, or a charity organisation raising money for its cause. In fact, such fundraisers were the first to open their virtual stalls on Cotton Cart. Another clever idea is to use the platform to produce t-shirts or other merch for corporate events - team buildings, annual meetings and seminars. The website can also be used to promote sports events - just upload your local rugby team's design, print your merch and sell it to the fans in the neighbourhood. You will surely give the audience something to remember the next time your team meets its rivals.

The possibilities are countless – your imagination is the limit. So far we have charity and fundraising groups, festivals, sports events and we can’t wait to see what else you can think of while using Cotton Cart.

Server monitoring with S2Mon - Part 2

Author: Emil Filipov

In part 1 I covered the reasons why it is in your best interest to monitor your servers, and how S2Mon can help with that task. Well, we know that monitoring can be all cool and shiny, but how hard is it to set up? After all, the (real or perceived) effort required for the initial configuration is the single biggest reason why people avoid monitoring. In this part I'll explore the configuration process with S2Mon.


1. Overview

The S2 system relies on an agent installed on the server side, which sends information to the central brain over an encrypted SSL connection. The agent we are using is, of course, an open-source script (written in Bash), so anyone can look inside it and see exactly what it is doing. I don't know about you, but I would feel very uneasy if I had to install some "proprietary" monitoring binary on my machines - it could be a remotely controlled trojan horse for all I know. So keeping the agent open is key for us.

Because the agent is open-source, you can also confirm another key point - it only *sends* information out to the central S2Mon servers; it does not *receive* any commands or configuration back. The communication here is one-way - from the agent to the S2Mon center. The S2Mon center cannot modify the agent's behavior in any way whatsoever.

2. Requirements

Since the agent is written mainly in Bash, it obviously requires Bash to be available on the monitored system. Fortunately, Bash is available on any Linux system released during the last 15 years or so. The other requirements are:

  • Perl
  • curl
  • bzip
  • netstat
  • Linux OS

The required tools are all present in most contemporary Linux distributions, but in case you have any doubts, you can check out the prerequisites page for distro-specific tips.
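If you would rather verify from the command line, a quick loop like this one will do (a minimal sketch; note that the bzip requirement is usually packaged as bzip2):

# Check that the agent's dependencies are present
for tool in perl curl bzip2 netstat; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done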

The Linux OS requirement is the major one here - S2Mon currently runs on Linux only. We have plans to make it available to Mac OS, *BSD and Windows users in the future, but for the time being, these platforms are not supported.

3. Registering an account

You will obviously need an S2Mon account, so in case you do not have one, head to https://www.s2mon.com/registration/. Once you submit your desired account name and email address, you will be taken straight to your Dashboard, and your password will be sent to you over email (so make sure you get that email field right).

The Dashboard is very restricted at this point - you need to verify your email address to unlock the full functionality of the system. To complete the verification, simply click on the activation link you got in your mailbox. That's it, your S2Mon account is now fully functional!

4. Adding a server entry to the S2Mon site

OK, this is where the fun starts. Before the S2Mon system starts accepting any data from your server, you need to create a server record in it. Go to https://www.s2mon.com/host/add/ (or Servers -> Add host, if you prefer) and fill in the following form:

Add Host

Hostname should be a unique identifier for your server - it does not need to be a Fully Qualified Domain Name (FQDN), though it is a good idea to use one. For Address, enter the external IP address of the host. This is where ping probes will be sent, should you choose to enable them from the drop-down menu. It is a good idea to enable these probes if the server is pingable; that way you will get an alert if the ping dies, which is usually an indication that something is wrong. The Label field is optional free text that you may use to describe your server. If you fill it in, it will be the server identifier used throughout the S2 site; otherwise the Hostname will be used.

After you submit the form, you will be presented with the basic steps you need to follow to get the probe running. Since you are reading this blog post, you can just copy the Pushdata Agent URL and ignore everything else :). The Pushdata Agent URL is the address where the agent will send all of the monitoring information, so it is the most important piece of data on that page. In case you forget it, accidentally close the page, or the dog eats your computer, don't worry - you can always get back to this page via Servers -> Edit button -> Probe setup tab.

5. Activating the services you want to monitor

Now go to https://www.s2mon.com/servers/. You will see the list of your servers there, along with a convenient panel where you can enable or disable individual services. Go on and activate the ones you are interested in (or, if you are like me, all of them):

Service Activation

6. Running the probe on the server

This is the trickiest part of all, as there are a lot of different ways to do it, depending on the server controls you have at hand. I'll assume that you have SSH access to the server, so you can run commands directly. Even if you do not have this kind of access, you may still be able to run the S2 probe if you can:

  • Download the probe archive, extract it, and put the extracted files onto your server;
  • Run a periodic task (cron job) every minute, which fires the agent script with the specified URL.

The S2Mon agent does NOT require a root account - you can run it from an unprivileged one. Even though I trust the agent completely, I run it from an unprivileged account on all my servers - it's a good approach security-wise, and it is tidier. In some cases, however, unprivileged accounts may not have access to all built-in metrics, so you might want to run the cron job with root privileges - it's up to you and your specific setup.

So, regardless of which account you decide to run the agent under, you can log in with that account and do the following:

s2mon ~$ wget https://dc1.s2-monitoring.com/active-node/a/s2-pushdata.tar.gz # download
s2mon ~$ tar xzf s2-pushdata.tar.gz # extract
s2mon ~$ ls -la s2-pushdata/post.sh # verify that the post.sh script has executable permissions
s2mon ~$ cd s2-pushdata/
s2mon ~/s2-pushdata$ DEBUG_ENABLED=1 ./post.sh "https://dc1.s2-monitoring.com/rblmon/collector-vahzeegh/index.php/my-hostname.com" # Use your specific Pushdata Agent URL here, enclosed in single or double quotes!

You will get some debug output out of the last command; it will abort if there is anything missing (for example, if curl is not installed on the system). If everything is OK, the last line would indicate successful data submission, e.g.:

DEBUG: POST(https://dc1.s2-monitoring.com/rblmon/collector-vahzeegh/index.php/my-hostname.com): 21 keys (8879 -> 2539 bytes).

The only thing left to do is to set the agent to be executed every minute. Again, there are a few different ways to do this, but the most common one is to run 'crontab -e', which will open your user's crontab for editing. Then you only need to append the line:

* * * * * cd /path/to/s2-pushdata/ && ./post.sh "https://dc1.s2-monitoring.com/rblmon/collector-vahzeegh/index.php/my-hostname.com" &>/dev/null

Please make sure to substitute /path/to/s2-pushdata/ with the actual path to the s2-pushdata directory on your system, and to change the URL to the value you got after adding your host record on the S2Mon website (note: changing just the hostname part at the end will NOT work).

7. Profit!

OK, if you were able to complete steps 4, 5 and 6, you should see the nifty monitoring widget on your S2Mon Dashboard turn all green. Congrats, your server is now monitored and you are recording the history of its most intimate parameters!

Widget - OK Status

8. (Optional) MySQL monitoring configuration

The MySQL service requires some extra configuration before the S2Mon agent is able to look inside it, so you will need to take a few extra steps if you want to monitor any of the MySQL services. The easiest way is to:

  • Create a MySQL user for S2Mon to use, with the following query (run as MySQL root or equivalent):
GRANT USAGE ON *.* TO 's2-monitor'@'localhost' IDENTIFIED BY '*******';

Make sure to replace '*******' with a completely random password. Don't worry, you will not need to remember it for long!

  • Create the files /etc/s2-pushdata/mysql-username and /etc/s2-pushdata/mysql-password on your system, and put the username (s2-monitor in this case) and the password in the respective files (on a single line).
  • Change the permissions on those files so that only the user you run S2Mon under can read them (for example, set them to 0400).
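Putting the steps above together, here is a minimal shell sketch (run as root; the password is a placeholder, and monuser is a stand-in for whatever account your agent runs under):

# Create the restricted MySQL user (replace the placeholder password)
mysql -u root -p -e "GRANT USAGE ON *.* TO 's2-monitor'@'localhost' IDENTIFIED BY 'xxxxxxxx';"
# Store the credentials where the agent expects them
mkdir -p /etc/s2-pushdata
echo 's2-monitor' > /etc/s2-pushdata/mysql-username
echo 'xxxxxxxx' > /etc/s2-pushdata/mysql-password
# Make the files readable only by the account running the agent
chown monuser: /etc/s2-pushdata/mysql-username /etc/s2-pushdata/mysql-password
chmod 0400 /etc/s2-pushdata/mysql-username /etc/s2-pushdata/mysql-password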

Once this is all done, you will see the MySQL charts slowly start filling with data over the next few minutes.

9. Post-setup

Now that you have a host successfully added to the interface, the next logical step is to set up some kind of notification to poke you when some parameter goes too high or too low. Additionally, you might want to let other people view or modify the server data in your account. Both tasks are easy with S2Mon, and I will show you how in the next part.

Server monitoring with S2Mon - Part 1

Author: Emil Filipov

We've all heard that servers sometimes break for one reason or another. We often forget, however, how inevitable that is. While everything is working, the system looks like a rock-solid blend of software and hardware. You get the feeling that if you don't touch it, it will keep spinning for years. Well, that's a very misleading feeling. A lot of things can (and will!) go wrong, and you can be better prepared with a tool like S2mon.com.


The proper operation of a server depends on many moving parts: Internet connectivity, stable power supply, proper cooling, enough network bandwidth, free disk space, running services, available CPU power, IO bandwidth, memory... That's just the tip of the iceberg, but I think the point is clear - there is a lot that can go wrong with a server.

Eventually, some of those subsystems will break down for one reason or another. When one of them fails, it usually brings down others, creating digital mayhem that can be quite hard to untangle. Businesses relying on the servers being up and running tend not to look too favorably on the inevitability of the situation. Instead of accepting the incident philosophically and being grateful for the great uptime so far, business owners go for questions like "What happened?!?!!", "What's causing this???" and "WHEN WILL IT BE BACK UP????!!!". Sad, I know.

Smart people, who would rather not come unprepared for those questions, have come up with the idea of monitoring, so that:

  • problems are caught in their infant stages, before they cause real damage (e.g. slowly increasing disk space usage);
  • when some malfunction does occur, they can cast a quick glance over the various monitoring gauges and quickly determine its root cause;
  • they can follow trends in the server metrics, so they can both get insight into issues from the past and predict future behavior.

These are all extremely valuable benefits, and it's widely accepted that the importance of server monitoring comes second only to the criticality of backups. Yet there are more servers out there without proper monitoring than you would expect. The main reasons not to set up monitoring are all part of our human nature, and can be summed up as "what a hurdle to install and configure...", "the server is doing its job anyway..." and my favorite, "I'll do it... eventually".

I have some news for the Linux server administrators - you have no excuse any more. We've come up with a web monitoring system for your servers that is easy to set up, rich in functionality and completely free (at least for the time being). Go and see a demo of it if you don't believe me. If you decide to subscribe, it will take less than a minute. Adding a machine to be monitored basically boils down to downloading a Bash script and setting it up as a cron job (you'll get step-by-step instructions after you log in and add a new server record on the web). And if you want to integrate S2Mon into a custom workflow/interface of yours, there is API access to everything (in fact, the entire S2Mon website is one big API client).

Once you hook up your server to the system, you will unlock a plethora of detailed stats, presented in interactive charts like this one:

Apache children

What we see above is a pretty picture of the load on the Apache web server. Apparently the same pattern has been repeating over the last week. That's visual proof that the web server workload varies a lot throughout the day (nothing unexpected, but now we can actually measure it!).

OK, now I want to see how my disk partitions are faring, and when I should plan for adding disk space:

Disk Usage Stats

Both partitions are steadily growing, but at the current rate there should be enough space for the next 5-6 months.

Hey, you know what, I just got a complaint from a user that a server was slow yesterday - was there anything odd?

Load Average

Yep, most definitely. The load was pretty high throughout the entire afternoon. Believe it or not, this time it was not his virus-infested Windows computer...

Your boss wants some insight on a specific network service, say IMAP? There you go:

IMAP - Connections per service

Wonder what your precious CPU spends its time on? See here:

CPU Stats

As you can see, S2Mon can provide you with extremely detailed stats, ready to be used anytime you need them. Of course, there is a lot more to it, and I'll cover more aspects of the setup, configuration and day-to-day work with S2Mon in the next parts. As always, feedback is more than welcome!

Stayin' secure with Web Security Watch

Author: Emil Filipov

Is your server/website secure? How do you *really* know? Web Security Watch can help you stay on top of the publicly released security advisories. A custom security feed just for you - how cool is that?

Is your server/website secure? How do you really know? Let me get back to this in a while. 

As you may be aware, there is a ton of security advisories released by multiple sources every day. That's a true wealth of valuable information flowing out on the Internet. Being aware of the issues described in these advisories can make all the difference between staying safe and getting hacked; between spending a few minutes patching up and spending weeks recovering lost data, reputation and customer trust. So everyone takes advantage of the public security advisories, right?

Not really. See, there is the problem of information overload. There are a lot of sources of security information, each of them spewing dozens of articles every given day. To make it worse, very few of those articles are actually relevant to you. So if you do want to track them, you end up manually reviewing the 99% of junk to get to the 1% that really concerns your setup. A lot of system/security administrators spend several dull hours every week going through reports that rarely affect them. Some even hire full-time dedicated operators to process the information. Others simply ignore the advisories, because the review process is too time-consuming.

Well, we decided we could help with the major pains of the advisory monitoring process, so we built Web Security Watch (WSW). The website aggregates security advisories coming from multiple reputable sources (so you don't miss anything), groups them together (so you don't get multiple copies), and tags them based on the affected products/applications. The last part is particularly important, as tags allow you to filter just the items you are interested in, e.g. "WordPress", "MySQL", "Apache". What's more, we wrote an RSS module for WordPress, so you can subscribe to an RSS feed which contains only the tags you care about. A custom security feed just for you - how cool is that? Oh, and in case you didn't notice - the site is great for security research. And it's free.

Even though WSW is quite young, it already contains more than 4500 advisories, and the number grows every day. We will continue to improve the site functionality and the tagging process, which is still a bit rough around the edges. If you have any feature requests or suggestions, we would be really happy to hear them - feel free to use the contact form to get in touch with us with anything on your mind.

Now, to return to my original question. You can't really tell whether your site/server is secure until you see it through the eyes of a hacker, and that requires some capable penetration testers. Even after you have had the perfect penetration test performed by the greatest hackers in the world, however, you may still get hacked and defaced by a script kiddie the next week, due to a vulnerability that has just been disclosed publicly.

Which brings me to the basic truth about staying secure - security is not a state, it's a process. A large part of that process is staying current with the available security information, and Web Security Watch can help you with that part.

Probably the longest webpage yet – Hugh's Fish Fight 834,000 Names under the Sea

Author: Nikolay Nedev

We at MTR Design love a challenge, and the guys at KEO Films presented us with a new one - to create the longest webpage yet: a page which lists all of Fish Fight's 830k+ supporters.


At MTR Design we are open to challenges, so when the guys from KEO Films asked us whether we could create the longest webpage yet, we were more than pleased to accept the commission. Fish Fight - a multi-platform campaign produced by KEO Films and led by TV campaigner Hugh Fearnley-Whittingstall - earlier this week ignited a campaign promoting the initiative by explicitly drawing attention to every person who has supported it. The time for the kick-off was strategically chosen - just prior to an important Common Fisheries Policy (CFP) meeting in Brussels. Making every single voice count could eventually impact the decision-making process in the EU.

To make this happen they needed a new webpage embodying the idea. Not an ordinary webpage, but a special one - a really long webpage that would list all 830k+ supporters. A deep dive indeed.

We took the commission and created the webpage. The main challenge was squeezing content this enormous (three times the length of Tolstoy's "War and Peace") into a smoothly working, convenient webpage that performs well in desktop browsers as well as on smartphones. Just imagine scrolling down to line 123,945 to find your name on the Fish Fight wall of glory. The good news is that it won't take you a whole day - we made it quick. The bad news is that you'll need a really long display. Thank you, iPhone, for making this hypothetically possible!

You should definitely check out Hugh's Fish Fight 834,000 Names under the Sea webpage with the new Apple wonder.

Well, if you don't have one yet, don't panic - just open the site on your preferred device and dive as deep as you can.

See it at www.fishfight.net/deep


PyLogWatch is born

Author: Emil Filipov

Introducing PyLogWatch - a simple and flexible Python utility that lets you capture custom log files into a centralized Sentry logging server.

Here at MTR Design we manage multiple web apps, servers and system components, all of which generate some kind of logs. Most of the time the logs are trivial and contain nothing to be concerned about. There is the odd case, however, where a log gets an entry that truly deserves our attention. You see, the signal-to-noise ratio in most logs is very low, so going over all of them by hand is an extremely boring and time-consuming task. Yet there may be "gems" inside the logs that you really want to act on ASAP - say, someone successfully breaking into your server, or a mailing list going crazy and spamming your customers.

So, what solutions do we have at our disposal? The most noteworthy are Splunk (hosted service, expensive) and Logstash (Java; a pain to install, maintain and customize). I did not like either of them. What I did like was Sentry, which has a logging client (called Raven) available in a dozen languages. The only problem is that Sentry is meant for handling exceptions coming from applications - not for general-purpose logging.

Yet, Sentry has a lot of the features that we do need:

  • Centralized logging with nice Web UI
  • Users, permissions, projects
  • Aggregation, so that similar log messages get grouped together
  • Quick filters, letting you hide message classes you do not care about
  • Plugin system that lets you write your own message processing 
  • Flexible and easy to use logging clients

Since we already had Sentry for handling in-app logging, enabling it to handle general-purpose server logs felt like a very compelling idea. So we did it...

Enter PyLogWatch

... by writing a Python app that parses log files and feeds them to Sentry. The application is very small and simple, and you can run it on any server with a recent version of Python. You don't need to be root, there is no long-running daemon, and there are no special deployment considerations - just download, configure, run (via cron or other means of scheduling). Of course, PyLogWatch relies on you having a Sentry server, but that's not too hard to install either (see the docs), and you can always use the very affordable hosted Sentry service (see the pricing), which features a limited free account.
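To give you an idea of the core mechanism, here is a much-simplified sketch of the concept (not PyLogWatch's actual code; the log path, offset file and Sentry DSN are placeholders):

# Forward lines appended to a log file to Sentry via the Raven client.
import os
from raven import Client

LOG_FILE = "/var/log/apache2/error.log"   # placeholder path
OFFSET_FILE = "/tmp/pylogwatch.offset"    # remembers how far we got last time
client = Client("https://public:secret@sentry.example.com/1")  # placeholder DSN

# Resume from the previous offset, if there is one
offset = 0
if os.path.exists(OFFSET_FILE):
    with open(OFFSET_FILE) as f:
        offset = int(f.read().strip() or 0)

with open(LOG_FILE) as log:
    log.seek(offset)
    for line in log:
        line = line.strip()
        if line:
            client.captureMessage(line)  # one Sentry event per log line
    offset = log.tell()

with open(OFFSET_FILE, "w") as f:
    f.write(str(offset))

Run something like this from cron every few minutes, and Sentry takes care of the aggregation and notifications.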

The PyLogWatch project is still in its infant stages - there are just a couple of *very* basic parsers (for Apache error logs and for syslog files), and no extensions for the Sentry server yet. Nevertheless, it has already proven very useful to us, since it enables our developers to closely track the Apache error logs of the applications they "own" and swiftly react to any problem that shows up. In practice, each error line generates a "ticket" in Sentry, and it stays there until a project member explicitly marks it as resolved. As an optional feature, all project members receive an email whenever there is a new entry waiting to be resolved.

What I love about this project is that it is pretty much a blank sheet of paper. I believe that the combined power of custom parsers and Sentry plugins can yield magnificent results.

So, what tool are you using for log tracking? What do you like and dislike about it, and what would you ideally like it to do? Feel free to share your thoughts.

Poking with Media Upload Forms

Author: Dimitar Ivanov

Every pentester loves file upload forms - the ability to upload data to the server you are testing is what you always aim for. During a recent penetration test, I had quite a lot of fun with a form that was supposed to allow registered users of the site to upload pictures and videos to their profiles.

What can I say about file upload forms? Every pentester simply loves them - the ability to upload data to the server you are testing is what you always aim for. During a recent penetration test, I had quite a lot of fun with one form that was supposed to allow registered users of the site to upload pictures and videos to their profiles. The idea behind the test was to report everything as it was found, so the developers could fix it on the fly. They had no problem with the usual SQL injection and XSS issues, but the image upload turned out to be a real challenge. When I got to the file upload form, it performed no checks whatsoever. I tried to upload a PHP shell, and a second later I was doing the happy hacker dance.

The challenge

So the developers applied the following fix:

$valid = false;
if (preg_match('/^image/', $_FILES['file']['type'])) {
    $info = getimagesize($_FILES['file']['tmp_name']);
    if (!empty($info)) {
        $valid = true;
    }
} elseif (preg_match('/^video/', $_FILES['file']['type'])) {
    $valid = true;
} else {
    @unlink($_FILES['file']['tmp_name']);
}
if ($valid) {
    move_uploaded_file(
        $_FILES['file']['tmp_name'],
        'images' . '/' . $_FILES['file']['name']
    );
}

The code is now checking the type of the file and the size of the images. However, there are a few issues with this check:

  • the type of the file is checked via the Content-Type header, which is passed to the script by the client and can therefore be easily modified;
  • the script is not checking the file extension, and you can still upload a .php file;
  • the check for the videos is only based on the Content-Type header.

Evasion

It is fairly easy to evade this kind of file upload protection. The easiest thing, of course, is to upload a PHP script while changing the Content-Type header of the HTTP request to an image or video type. To do this, you need to intercept the outgoing HTTP request with a local proxy such as Burp or WebScarab, though Tamper Data for Firefox will do just fine. You can also upload a valid image and insert PHP code in its EXIF data - for example in the Comment field (the payload below is an illustrative one-liner):

$ exiftool -Comment='<?php system($_GET["cmd"]); ?>' info.php
    1 image files updated

When you upload the image with a .php extension, it will be passed through the PHP interpreter, and the code will be executed on the server. Depending on the server configuration, you might also be able to upload the image with a .php.jpg extension. If the extension check is not done correctly, and the server configuration allows it, you can still get code execution. Easy, eh?

Protection

So what can be done to prevent this? With a mixture of secure coding and some server-side tweaks, you can achieve pretty secure file upload functionality.

  • [Code] Check for the Content-Type header. This may fool some script kiddies or less-determined attackers.
  • [Code] Check for the file extension. Replace .php, .py, etc. with, say, _php, _py, etc.
  • [Server] Disable script execution in the upload directory (see the sketch after this list). Even if a script is uploaded, the web server will not execute it.
  • [Server] Disable HTTP access to the upload directory - that is, if the files are only meant to be accessed by scripts through the file system. Otherwise, although the script will not be executed locally on the server, it could still be used by attackers in Remote File Inclusion attacks: if they target another server running an application with an RFI vulnerability and allow_url_include enabled, they can use a script uploaded to your server to get a shell on the vulnerable machine.
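As an illustration of the two server-side measures, here is a minimal Apache sketch (assuming mod_php and an upload directory of /var/www/site/images - adjust both to your setup):

<Directory /var/www/site/images>
    # Make sure nothing in the upload directory is executed as PHP
    php_admin_flag engine off
    RemoveHandler .php .phtml
    # Or cut off HTTP access to the directory altogether
    # (Apache 2.2 syntax; on 2.4 use "Require all denied")
    Order deny,allow
    Deny from all
</Directory>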

Conclusion

Developers often forget that relying on client-side controls is a bad thing. They should always code under the assumption that the application may be (ab)used by a malicious user. Everything on the client side can be controlled and, therefore, evaded. The more you check the user input, the better. And of course, the server configuration should be as hardened as possible.

Paranoid

Author: Dimitar Ivanov

A couple of years ago, one of our clients asked us to design a server setup that would host a PHP application for credit card storage. The application would have to be accessible from different locations, and only by their employees.


Below is the design guide, produced by our team.

What we tried to do was make the security as paranoid as possible while still leaving the system in a usable state. Of course, there is always something more you can do to tighten security even further, but that would lead to functional inconveniences which we'd rather not live with.

The principle we followed was "deny all, allow certain things." Therefore, the main design principles are:

  • Close all doors securely.
  • Open some doors.
  • Closely monitor the activity of the doors you opened.
  • Always be alert and monitor for suspicious activity of any kind (newly opened doors, unknown processes, unknown states of the system, etc.).

Server installation and setup notes

  • Install a bare Linux, no services at all running ("netstat -lnp" must show no listening ports).
  • Install an intrusion detection system (which monitors system files for modifications).
  • Use the 'grsecurity' Linux kernel patches - they help against a lot of 'off-the-shelf' exploits.
  • (door #1) Install the OpenSSH server (port 22), so that you can manage the server.
    • Disallow password logins, allow ONLY public keys, SSH v2.
    • Set PermitUserEnvironment to "yes".
    • Set a "KEYHOLDER" environmental variable in the ~/.ssh/authorized_keys file.
    • Send an e-mail if the KEYHOLDER variable is not set when a shell instance is started (see the sketch after this list).
  • Set up an external DNS server in "/etc/resolv.conf" for resolving.
  • (door #2) Install a web server, for example Apache.
    • Leave only the modules that are strictly needed.
    • Set up the vhost to work only with SSL, no plain HTTP (http://httpd.apache.org/docs/2.2/ssl/ssl_howto.html).
    • Purchase an SSL certificate for the server's vhost, so that clients can validate it.
    • Do not set up multiple vhosts on the server; this server will have only one purpose - to store and send data securely; don't be tempted to assign more tasks here.
    • Install a Web Application firewall (mod_security, etc.) - it will detect common web-based attacks. Monitor its logs.
    • Limit HTTP methods to good ones only, unexpected HTTP methods should get into the error logs and raise an eyebrow (generate alerts).
    • Disable directory listing in Apache.
    • Disable multiviews/content negotiation in Apache if your app does not rely on them.
  • Install an Application Firewall (e.g. AppArmor) - apps should not try to access resources they have no (legal) business with. For example, Apache should not try to read into /root/.
  • Install a MySQL server and bind it to address 127.0.0.1, so that remote network access isn't possible.
  • Install a mail server like Exim or Postfix, but let it send only locally generated e-mails; there is no need to have a fully functional mail server listening on the machine.
  • Firewall INPUT and FORWARD iptables chains completely (set default policy to DROP), except for the following simple rules:
    • INCOMING TCP connections TO port 22 FROM your IP address(es) - allow enough IP addresses, so that you don't lock yourself out;
    • INCOMING TCP connections TO port 443 FROM your clients' IP address(es) - the CRM application's IP address, etc.;
    • Allow INCOMING TCP, UDP and ICMP connections which are in state ESTABLISHED (i.e. initiated by the server or on the SSH port).
  • Log remotely, so that if the system does get compromised, the attacker won't be able to completely cover their traces. Copying the log files over at designated intervals is OK-ish, but real-time remote logging (like syslog over SSL) is much better, as there is no window in which the logs could be erased or tampered with. Make an automatic checker which confirms that the remote and the local logs are the same - an alarm bell should go off if they aren't.
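Here is a sketch of the KEYHOLDER trick from door #1 (an illustration with placeholder names; it relies on the PermitUserEnvironment setting mentioned above). Tag each public key in ~/.ssh/authorized_keys with its owner:

environment="KEYHOLDER=alice" ssh-rsa AAAA...rest-of-key... alice@laptop

Then, from a system-wide shell startup file such as /etc/profile:

# Alert if an SSH shell starts without a tagged key
if [ -n "$SSH_CONNECTION" ] && [ -z "$KEYHOLDER" ]; then
    echo "Untagged SSH login on $(hostname) at $(date) from $SSH_CONNECTION" \
        | mail -s "ALERT: login with unknown key" root
fi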

Door #1 (SSH) can be considered closed and monitored:

  • It works only with a few IP addresses.
  • It does not allow plain-text logins, so brute-force attacks are useless.
  • A notification is sent when the door is opened by unauthorized users.

Door #2 (the web server) must be taken special care of:

  • Review the access log of the server daily with regard to how many requests were made => if there are too many requests, review the log manually and see which application/user made them; set a very low threshold as a start and increase it accordingly with time (see the sketch below).
  • Review the error log of the server => send a mail alert if it has new data in it.
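The daily request-count check can start as simple as this (assuming the default Apache log location and the common log time format; tune the threshold to your traffic):

# Mail an alert if today's request count exceeds the threshold
REQS=$(grep -c "$(date +%d/%b/%Y)" /var/log/apache2/access.log)
if [ "$REQS" -gt 10000 ]; then
    echo "Request count today: $REQS" | mail -s "ALERT: traffic spike" root
fi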

Consider using some kind of VPN (e.g. PPTP/IPSec/OpenVPN) as an added layer of network authentication. You can then bind the web server to the VPN IP address (so direct network access is completely disabled) and set the firewall to only allow the internal VPN IPs on port 443.

General server activity monitoring

  • Set the mail for "root" to go to your email address (crontab and other daemons send mail to "root").
  • Review the /var/log/syslog log of the server => send a mail alert if there is new data in it.
  • Do a "ps auxww" list of the processes => if there are unknown processes (as names, running user, etc) => send a mail alert to yourself.
  • Do a "netstat -lnp" list of the listening ports => mail alert if something changed here.
  • Test the firewall settings from an untrusted IP - the connections must be denied.
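For instance, the listening-ports check from the list above could be a small cron script along these lines (file locations are placeholders; we drop the -p flag, since PIDs change between runs and would trigger false alarms):

#!/bin/sh
# Alert if the set of listening ports differs from the known baseline
netstat -ln | sort > /var/tmp/ports.now
[ -f /var/tmp/ports.known ] || cp /var/tmp/ports.now /var/tmp/ports.known  # first run: set baseline
if ! diff -q /var/tmp/ports.known /var/tmp/ports.now >/dev/null; then
    diff /var/tmp/ports.known /var/tmp/ports.now | mail -s "ALERT: listening ports changed on $(hostname)" root
    cp /var/tmp/ports.now /var/tmp/ports.known
fi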

Finally, update the software on the server regularly. E-mail yourself an alert if there are new packages available for update.

Application Design Notes

The SSL (HTTPS) vhost will most probably run mod_php. When designing the application, the following must be taken care of:

  • Every user working with the system must be unique. Give unique login credentials to each employee, and let them know that their actions are monitored personally and that they must not, under any circumstances, share their login credentials with other employees.
  • Enforce strong passwords. There are tips in the OWASP Authentication Cheat Sheet.
  • If the employees work fixed hours, any kind of login during off hours should be regarded as a highly suspicious event. For example, if employees work 9am-6pm, any successful/unsuccessful login made between 6pm and 9am should trigger an alert.
  • Store all data in an encrypted form; use a strong encryption algorithm like AES-256.
  • Encrypt the data with a very long key.
  • Optional but highly recommended - do not store the key on the server. Instead, when the application starts (for example, after the server has just been rebooted), it must wait for the key to be entered (see the sketch after this list). An example scenario is:
    • The application expects that the encryption key would be found in the file /dev/shm/ekey; /dev/shm is an in-memory storage location - it doesn't persist upon reboots;
    • Manually open the file /dev/shm/ekey-tmp with "cat > /dev/shm/ekey-tmp", enter the password there, then rename the file to "/dev/shm/ekey";
    • The application polls regularly for this file, reads it and then immediately deletes it;
    • Wait and verify that the file was deleted from /dev/shm.
    • Now your key is stored only in memory, and it is much harder for an attacker to obtain it.
  • Set up the webapp to access the MySQL server through an unprivileged user, restricted to a single database (*not* as MySQL's root).
  • Develop ACLs specifying who can see which part of the information; split the information accordingly.
  • Every incoming GET, POST, SESSION or FILES request must be validated; do not allow bad input data.
  • Every unknown/error/bad state of the system (unable to verify input data, MySQL errors, etc.) must be mailed to you as a notification (do not mail details in a plain-text email, just a notification; then check the details via SSH on the server).
  • Code should be clean and readable; do not over-engineer the system.
  • Make a log entry for EVERY action - both read and write ones; do NOT store any sensitive data in the logs.
  • Ensure that the application has a “safe mode” to which it can return if something truly unexpected occurs. If all else fails, log the user out and close the browser window.
  • Suppress error output when running in production mode - debug info on errors should only be sent back to the visitor in *development* mode. Once the app is deployed, debug output = leaking sensitive information.
  • Backup the data on an external server. The backup should be carried over a secure connection and kept encrypted.
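A minimal PHP sketch of the key-loading scenario described above (simplified, with no error handling; the path matches the example):

// Block until the operator drops the encryption key into /dev/shm/ekey
$keyFile = '/dev/shm/ekey';
while (!file_exists($keyFile)) {
    sleep(5);
}
$key = trim(file_get_contents($keyFile));
unlink($keyFile);  // delete immediately - the key now lives only in this process's memory
// ... from here on, use $key for the AES-256 encryption/decryption ...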

So far so good. Up to now, we should have a system which is secure, logs everything, sends alerts and can store and retrieve sensitive data. The only question is - how do we authenticate against the system in a secure manner?

The best way to achieve this is to implement a two-factor authentication: a username/password and a client certificate.

  • Set up your own CA and issue certificates for your employees (see the OpenSSL sketch after this list):
  • Keep the CA root certificate in a secure place!! Nobody must be able to get it, or else your whole certificate system will be compromised.
  • Set up the web server vhost to require a client certificate (see "How can I force clients to authenticate using certificates?" at http://httpd.apache.org/docs/2.2/ssl/ssl_howto.html). This way, right after somebody opens the login page, you already know which client (employee) certificate they are using.
  • The client certificates must be protected with a password; this improves security - in order to log in, you must first unlock your client certificate with its password, then open the login page and provide your own user/pass pair.
  • Consider using Turing-test based login forms, e.g. http://community.citrix.com/display/ocb/2009/07/01/Easy+Mutli-Factor+Authentication . This will protect the passwords against keyboard sniffing.
  • The web server must
    • match each user/pass to their corresponding certificate;
    • have an easy mechanism for revoking (disabling) certificates, in case you decide to part ways with an employee.
  • Once logged in, create a standard PHP session and code as usual.
  • The most critical and important (and not very frequent) operations should be approved only once the user re-enters their password; this prevents replay attacks. For example, if you want to view a full credit card number (and you don't usually need this), the system will first ask you to re-enter your password before showing you the information. Banks usually use this kind of protection for every online transaction.
  • Expire sessions regularly - in a few hours or less of inactivity.
  • If possible, tie every session to its source IP address; that is, log the IP address upon login and don't allow other IP addresses to use this session ID. Note: some providers like AOL (used to) have transparent IP-balancing proxies, with which the IP address of a single client may change over time; you cannot use this security method if you have such clients (test it).
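For reference, issuing the client certificates can look roughly like this with OpenSSL (a simplified sketch; a real deployment also needs certificate revocation lists and careful handling of the keys):

# One-time: create your own CA
openssl genrsa -out ca.key 4096
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt

# Per employee: key, signing request, signed certificate
openssl genrsa -out alice.key 2048
openssl req -new -key alice.key -out alice.csr
openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out alice.crt

# Password-protected bundle for the employee's browser
openssl pkcs12 -export -in alice.crt -inkey alice.key -out alice.p12

The corresponding Apache vhost directives are SSLVerifyClient require and SSLCACertificateFile /path/to/ca.crt.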

A system like this should be protected against most attacks. Here are a couple of scenarios:

  • Brute-force attacks at the login page - they will not succeed because a user/pass + client certificate are required. Actually, without a certificate, the login page will not be displayed at all.
  • Someone steals a user's laptop - they cannot use the certificate (even if they know the user/pass pair), because they don't know its password.
  • Someone sees a user entering their passwords - they cannot use them, because they don't have the client certificate.
  • An employee starts to harvest the customer data - you have a rate limit of requests per employee, as well as a general rate limit of requests to your database - you get an alert and investigate further manually (you have full logs on the server).
  • Someone hacks into your server and quickly copies the database + PHP scripts onto a remote machine - they cannot use the data, because it is encrypted and you never stored the key in a file on the server - you enter the key every time manually upon server start-up.
  • Someone initiates a man-in-the-middle attack and sniffs your traffic - you are using only HTTPS and never accept/disregard SSL warnings in your browser - you are safe, the traffic cannot be read by third parties.
  • Someone gets total control over a user's computer (installs a key logger and copies the files) - you are doomed, nothing can help you, unless the compromise gets noticed in time.
  • Someone really good hacks your server and spends a few days learning your system - if the intrusion detection system and the custom monitoring scripts didn't catch the hacker, and he spent days on your server trying to break your system, then you are in real trouble. This scenario has a very low probability; really smart hackers are usually not tempted to do bad things.

Integrating such a secure database system can be easy. For example, if you have a customer named XYZ, you can assign a unique number to this customer in your CRM system. Then you can use the secure storage to save sensitive data about this customer by referring to that unique number in the secure database system. It is true that your employees will have to work with one more interface, the interface of the secure database system, but this is the price of security - you have to sacrifice convenience for it.

Conclusion

Careful implementation of all of the above measures will greatly increase the security of the system, making the server extremely resilient against cyber attacks. Remember though, security is not a state - it's a process. Hire someone to take care of all security-related tasks on an ongoing basis.

Another point - as great as this all sounds on paper, it's the implementation that counts. Be careful with the implementation of the different services, safeguards and applications.

The weakest point in such a system is the employee computer. A hacker who knows the basic layout of the server (and that it follows the recommendations given so far) would focus on attacking the computer of some employee. Do not ignore the client side of the equation - it is quite often the weakest link in the chain.

We have plans to write a post about the client side too.

Server support enabled

Author: Milen Nedev

Our new service started as an interim support contract for a client's site, was launched at the end of 2011 and has been tried out for a couple of months - now we think it's time to introduce it to the public: Server Support.

Two days of email, chat and phone ping-pong, and your problem still exists. One support guru sends you to another, the second one asks you the same questions as the first, they all say they'll call back and no one does, and no one has a clue... Does it sound familiar? And all this pours over you at the worst possible moment, when you've already invested a great deal of money and time into your website or application and have been watching your clients grow in number.

This is why we are introducing our new service to the public - Server Support. It started as an interim support contract for a client's site, was launched at the end of 2011 and has been tried out for a couple of months. Actually, the service was first put to the test at the beginning of summer 2011, when we welcomed our first in-house system engineer. He put our hosting infrastructure in order, enriching our experience with his. Our good friends from KEO Films were the first to appreciate the usefulness of the service: thanks to some bespoke optimization and skilled maintenance, the performance of the machines exceeded both our and their expectations (and saved a lot of money too).

The next logical step was to offer this support to anyone who may need it. So now you can see for yourselves what good server support stands for:

  • “Office hours support” or “24/7 support” depending on your requirements
  • No more bot replies and "sick-of-it-all" operators - our small support team consists entirely of system engineers, and they will be the ones to answer your call, even in the middle of the night
  • Thorough inspection - before taking on an engagement, our team always spends some time checking out the code of the website and the application and discussing with the client the priority of the issues to be handled. One never knows what's around the corner, so we prefer to have a clear idea about the actions to be undertaken and their sequence.
  • Our clients will be granted access to our own server monitoring application (we will soon post more info about it).

We take our work personally. In order to provide the most adequate support, our system engineers track the inner relations between the servers' software and the hosted applications, as well as the impact of various events - software updates, introduction of new modules, etc. Thus they are a few steps closer to the solution when a server fails to perform. And you know that speed and accuracy are of great importance when your reputation and money are at stake.

Ideas that change life for the better

Author: Milen Nedev

One person with a good idea can change the world and make it a better place. However, most of the time he or she will need some help to make it happen.


Today we’d like to tell you a little story about the new website we built for our friends from KEO Digital.

At peoplefund.it every great thinker gets the chance to put their idea to the test and find the supporters and funding to grow their project into a successful business.

How does it work?

Simple…


Once you have your project cleared up and defined, you set the target sum and the time for raising it, and start collecting pledges. Put in all the ingredients to make your stuff attractive - rewards, videos, pictures, inspiring texts. Then wait and see how people like it. Or love it, or adore it.

This is how it worked for our featured heroes - The Bicycle Academy. Two guys had an inspiring idea: to start a bicycle frame-building school and give away every first bicycle to the people who really need it - in Africa! Thanks to crowdfunding and peoplefund.it they reached their target in less than a week and managed to raise more than £40,000. Fascinating, isn't it?

How could this happen? They had an ingenious idea, but that wasn't enough. They had to add their passion, dedication, inspiration and love, and make the idea as popular as possible. And it was up to www.peoplefund.it to prove that this could work. It was really astonishing how quickly the pledges were made and the goal was reached. We are proud of our work, and even more satisfied with how the site helped an original, good idea become reality.

See for yourself how The Bicycle Academy made it.

And next time YOU have a brilliant idea in mind - don't let it slip away. Test it here and see whether people fund it.

Published in: Projects

Everyone’s gone mobile

Author: Milen Nedev

So it's time for you too to get your Dizzyjam business on the move. With the new mobile website at http://m.dizzyjam.com/ you can show off your merch everywhere you go.


The mobile version of the site was not a major project, but it had a few treats in store for our team. The site is a place full of items, packed with all kinds of information and shop products, so we had to sift out the essential features and make them fit into your palm. At the same time we had to make sure that nothing precious would be sacrificed for the sake of simplicity.

The result is quite satisfying - an easy-to-access, fast-to-browse, mobile-friendly site that gives you the feeling you are looking at the whole picture. The mobile version of Dizzyjam provides a simple checkout process and works brilliantly as your personal merch stall wherever you are.

The focus is on the shop items, so they are presented at their best - the logo designs and the variety of products are easy to see and even easier to buy. This gives shop owners yet another way to advertise and propel their business, as they literally have their shop in their pocket.

Published in: Projects

Back to school

Author: Milen Nedev


Starting the new school year with a fresh project – the Newlyn School of Art web site.

With the summer almost over and September knocking on the door, the kids are getting ready for school. So is our team. The perfect project to get us in the mood was the Newlyn School of Art's website. Launching http://www.newlynartschool.co.uk/ just a few days ago made us feel involved in the whole "back to school" hullabaloo.

We are really proud to share the outcome of our work with you, and we hope it will inspire you and colour your day.

Newlyn School of Art

Published in: Projects

What’s been on our timetable in the past few weeks

Author: Milen Nedev

Hi there! We've been away from writing here for a while, but it was for a good reason - a new home in a new town and a whole bunch of new projects.

We've been busy lately with quite a few things going on, so we skipped posting for a couple of weeks. The main reason for the blog silence was the relocation of Milen, our managing director (closely followed by his personal copywriter), to Wales at the beginning of August. Enjoying the local weather and the warm welcome Cardiff gave us was the main occupation for a week or two, and it proved time-consuming for some of us, who are now catching up on sharing the latest news about our projects.

Hopefully we are back on track and have a few new things to show you.

First of all, we would really like to share what a wonderful job the Hugh's Fish Fight website is doing. Last week the 4th episode aired on Channel 4, and we definitely hit some new records of public interest. The website scored some of the highest visit numbers we've seen, and we are really proud that it managed to handle all the interest. Earlier in the campaign our team expanded the site's reach and it went international - it is now live and functioning in 11 different European languages.

The campaign is already driving fisheries policy change, and it wouldn't be so successful were it not for the perfect combination of the powerful, charismatic impact of the initiative and the strong, stable support of http://www.fishfight.net/.

Our arrival in the UK was the perfect time for launching another MTR Design project - http://www.lifeinukthetest.co.uk/ - which is quite a coincidence to start with. This site can give you some really useful information - something we can say for sure, having tried it during our own move to the UK. So can the many people who have benefited from it over the last couple of weeks (as we can see from the website stats). The website provides thorough information about the history, society and everyday life of the UK, and by scrolling through the variety of lessons one gets the best chance to pass the British Citizenship Test.

We are having a great time in Cardiff. And no wonder, as the Dizzyjam headquarters is situated here, together with its most eminent members - Daf and Neil, who are dangerously familiar with the club life of the Welsh capital. Well, it hasn't been exactly partying all night long during the last two weeks, mostly because of the many things we had to deal with. Nevertheless, the results are dizzying at the end of the day and make us happy the morning after. Among them is a new Dizzyjam feature - embeddable shops.

Dizzyjam shop owners now have at their disposal a tool which helps them set up and embed their shop on their own websites. By following a few easy steps, every merchandiser can integrate the functionality and adjust the looks of their shop in their own web space, thus gaining more visibility on the crowded merch scene.

Well, that's all for now. A humble sun ray is sneaking through the clouds, so we'll try to catch it. To all of you - our lovely UK friends and partners - feel free to join us any time you are around; just give us a call and we will find time for a beer and a talk.

Published in: Company News, General, Projects

Dizzyjam Facebook App

Author: Milen Nedev


We are absolutely delighted to announce the launch of our new Facebook app dedicated to our beloved project Dizzyjam.

Diversification is the key to success. So now artists can be successful merchandisers using not only their Dizzyjam account, but also making their Facebook profile work for them. And after all, the more the publicity, the more the fans - and both results, growing sales and a growing fan base, are gratifying. Just install the Dizzyjam Facebook app and enjoy the boost in your fan numbers.

Dizzyjam Facebook App

The app is quite simple to manage and unobtrusively mingles with the interface of your Facebook page. It’s easy to install and configure, even easier to use and is a real business galvanizer.

So have a nice time on the web, and feel free to be seen everywhere with your wonderful designs.

Published in: Company News, Projects

Website launch - Isaysolar

Author: Nikolay Nedev


isaysolar

Our latest project has just been finished - we launched the new website for isaysolar Limited, the latest "rent-a-roof" business in the UK. Collaborating with our friend, the very talented designer Ed Ovenden, we did the frontend coding and the CMS programming.

The website promotes their free solar panel offering. If you're a homeowner, why not check out their website and see if you can benefit from free solar electricity and at the same time help save the planet from climate change?

Published in: Projects

Web site launch – ime.bg

Author: Nikolay Nedev


The Institute for Market Economics (IME) is the first independent economic research institute in Bulgaria, and we loved working on this project in the hope that for many years ahead the IME will remain a thorn in the side of every Bulgarian government that prefers to spend more (of our) money instead of making real reforms. We tried to make a proper media site - with many articles, photos and video materials - and hopefully it can become a destination for all Bulgarian citizens who value their freedom and would like to see more "market" and less "state" in the economy.

And dear English friends, I guess you already know that your government spends more than it earns (approx. £152 billion budget deficit for 2009), but did you know that this excess is five times greater than the revenues of the Bulgarian government (our GNP for 2009 is approx. £30 billion)?

Published in: Projects