Hello! We are MTR Design and site making is our speciality. We work with UK based startups, established businesses, media companies and creative individuals and turn their ideas into reality. This is a thrill and we totally love it.

Server monitoring with S2Mon - Part 2

Author: Emil Filipov


In part 1 I covered the reasons why it is in your best interest to monitor your servers, and how S2Mon can help with that task. Well, we know that monitoring can be all cool and shiny, but how hard is it to set up? After all, the (real or perceived) effort required for the initial configuration is the single biggest reason why people avoid monitoring. In this part I'll explore the configuration process with S2Mon.

1. Overview

The S2 system relies on an agent installed on the server side to send information to the central brain over an encrypted SSL connection. The agent we use is, of course, an open-source script (written in Bash), so anyone can look inside it and see exactly what it is doing. I don't know about you, but I would feel very uneasy if I had to install some "proprietary" monitoring binary on my machines - it could be a remotely controlled trojan horse for all I know. So, keeping the agent open is key to us.

The open-source agent also makes another key point verifiable - it only *sends* information out to the central S2Mon servers; it does not *receive* any commands or configuration back. The communication is one-way - from the agent to the S2Mon center. The S2Mon center cannot modify the agent's behavior in any way whatsoever.

2. Requirements

Since the agent is mainly written in Bash, it obviously requires Bash to be available on the monitored system. Fortunately, Bash is available on virtually any Linux system released during the last 15 years or so. The other requirements are:

  • Perl
  • curl
  • bzip
  • netstat
  • Linux OS

The required tools are all present in most contemporary Linux distributions, but in case you have any doubts, you can check out the prerequisites page for distro-specific tips.
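If you would rather verify this from the command line, here is a quick sanity check - just a minimal sketch, and note that on most distributions the bzip tool ships as bzip2:

for tool in bash perl curl bzip2 netstat; do   # the prerequisites listed above
    command -v "$tool" >/dev/null 2>&1 && echo "$tool: OK" || echo "$tool: MISSING"
done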

The Linux OS requirement is the major one here - S2Mon currently runs on Linux only. We have plans to make it available to Mac OS, *BSD and Windows users in the future, but for the time being, these platforms are not supported.

3. Registering an account

You will obviously need an S2Mon account, so, in case you do not have one, head to https://www.s2mon.com/registration/. Once you submit your desired account name and your email address, you will be taken straight to your Dashboard, and your password will be sent to you over email (so make sure you get that email field right).

The Dashboard is very restricted at this point - you need to verify your email address to unlock the full functionality of the system. To complete the verification, simply click on the activation link you got in your mailbox. That's it, your S2Mon account is now fully functional!

4. Adding a server entry to the S2Mon site

OK, this is where the fun starts. Before the S2Mon system starts accepting any data from your server, you need to create a server record in the S2Mon system. Go to https://www.s2mon.com/host/add/ (or Servers -> Add host, if you prefer) and fill in the following form:

Add Host

Hostname should be a unique identifier for your server - it does not need to be a Fully Qualified Domain Name (FQDN), though it is a good idea to use one. For Address, enter the external IP address of the host. This is the IP address where Ping probes will be sent, should you choose to enable them from the drop-down menu. It is a good idea to enable these probes if the server is ping-able; that way you will get an alert if the ping dies, which is usually an indication that something is wrong. The Label field is optional free text that you may use to describe your server. If you fill it in, it will be the server identifier used throughout the S2 site; otherwise the Hostname will be used.

After you submit the form, you will be presented with the basic steps you need to follow to get the probe running. Since you are reading this blog post, you can just copy the Pushdata Agent URL and ignore everything else :). The Pushdata Agent URL is the address where the agent will send all of the monitoring information, so it is the most important piece of data on that page. In case you forget it, accidentally close the page, or the dog eats your computer, don't worry - you can always get back to it via Servers -> Edit button -> Probe setup tab.

5. Activating the services you want to monitor

Now go to https://www.s2mon.com/servers/. You will see the list of your servers there, but there is also a convenient panel where you can enable or disable certain services. Go on and activate the ones you are interested in (or, if you are like me - all of them):

Service Activation

6. Running the probe on the server

This is the trickiest part of it all, as there are many different ways to do it, depending on the server controls you have at hand. I'll assume that you have SSH access to the server, so you can run commands directly. If you do not have this kind of access, you may still be able to run the S2 probe if you can:

  • Download the probe archive, extract it, and put the extracted files onto your server;
  • Run a periodic task (cron job) every minute, which fires the agent script with the specified URL.

The S2Mon agent does NOT require a root account - you can run it from an unprivileged account. Even though I trust the agent completely, I run it from an unprivileged account on all my servers - it is the better approach security-wise, and it keeps things tidy. In some cases, however, unprivileged accounts may not have access to all built-in metrics, so you might want to run the cron job with root privileges - it's up to you and your specific setup.
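If you would like to go the unprivileged route but do not have a spare account yet, creating a dedicated one takes a minute (a minimal sketch, run as root; the account name s2mon simply matches the prompts used below):

useradd --create-home --shell /bin/bash s2mon   # dedicated, unprivileged account for the agent
su - s2mon                                      # switch to it and continue with the steps below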

So, regardless of which account you decide to run the agent under, you can log in with that account and do the following:

s2mon ~$ wget https://dc1.s2-monitoring.com/active-node/a/s2-pushdata.tar.gz # download
s2mon ~$ tar xzf s2-pushdata.tar.gz # extract
s2mon ~$ ls -la s2-pushdata/post.sh # verify that the post.sh script has executable permissions
s2mon ~$ cd s2-pushdata/
s2mon ~/s2-pushdata$ DEBUG_ENABLED=1 ./post.sh "https://dc1.s2-monitoring.com/rblmon/collector-vahzeegh/index.php/my-hostname.com" # Use your specific Pushdata Agent URL here, enclosed in single or double quotes!

You will get some debug output from the last command; it will abort if anything is missing (for example, if curl is not installed on the system). If everything is OK, the last line will indicate successful data submission, e.g.:

DEBUG: POST(https://dc1.s2-monitoring.com/rblmon/collector-vahzeegh/index.php/my-hostname.com): 21 keys (8879 -> 2539 bytes).

The only thing left to do is to set the agent to be executed every minute. Again, there are a few different ways to do this, but the most common one is to run 'crontab -e', which will open your user's crontab for editing. Then you only need to append the line:

* * * * * cd /path/to/s2-pushdata/ && ./post.sh "https://dc1.s2-monitoring.com/rblmon/collector-vahzeegh/index.php/my-hostname.com" &>/dev/null

Please make sure that you substitute /path/to/s2-pushdata/ with the actual path to the s2-pushdata directory on your system, and that you change the URL to the value you got after adding your host record on the S2Mon website (note: changing just the hostname part at the end will NOT work).
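If you would rather script this step than edit the crontab interactively, appending the entry works just as well - a sketch, with the same path and URL substitutions as above:

( crontab -l 2>/dev/null; echo '* * * * * cd /path/to/s2-pushdata/ && ./post.sh "https://dc1.s2-monitoring.com/rblmon/collector-vahzeegh/index.php/my-hostname.com" &>/dev/null' ) | crontab -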

7. Profit!

OK, if you were able to complete steps 4 through 6, you should see the nifty monitoring widget on your S2Mon Dashboard turn all green. Congrats - your server is now monitored and you are recording the history of its most intimate parameters!

Widget - OK Status

8. (Optional) MySQL monitoring configuration

The MySQL service requires some extra configuration before the S2Mon agent can look inside it, so you will need to take a few extra steps if you want to monitor any of the MySQL services. The easiest way to do this is to:

  • Create a MySQL user for S2Mon to use, with the following query (run as MySQL root or equivalent):
GRANT USAGE ON *.* TO 's2-monitor'@'localhost' IDENTIFIED BY '*******';

Make sure to replace '*******' with a completely random password. Don't worry, you will not need to remember it for long!

  • Create the files /etc/s2-pushdata/mysql-username and /etc/s2-pushdata/mysql-password on your system, and put the username (s2-monitor in this case) and the password in the respective file (each on a single line).
  • Restrict the ownership and permissions of those files so that only the user you run S2Mon under can read them (for example, set the mode to 0400) - see the sketch right after this list.
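Here is a minimal sketch of those last two bullets, run as root and assuming the agent runs under an account named s2mon (adjust the account name, and replace ******* with the password from the GRANT statement):

mkdir -p /etc/s2-pushdata
echo 's2-monitor' > /etc/s2-pushdata/mysql-username   # the MySQL user created above
echo '*******'    > /etc/s2-pushdata/mysql-password   # the password you picked
chown s2mon: /etc/s2-pushdata/mysql-username /etc/s2-pushdata/mysql-password
chmod 0400   /etc/s2-pushdata/mysql-username /etc/s2-pushdata/mysql-password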

After this is all done, you will see the MySQL data charts slowly starting to fill in with data in the next few minutes.

9. Post-setup

Now that you have a host successfully added to the interface, the next logical step is to set up some kind of notification to poke you when some parameter goes too high or too low. Additionally, you might want to allow other people to view or modify the server data in your account. Both tasks are easy with S2Mon, and I will show you how to do them in the next part.

Free your people

Author: Milen Nedev


The more you free your people to think for themselves, the more they can help you. You don't have to do this all on your own.

-- Richard Branson

Published in: Quotes, Business

Server monitoring with S2Mon - Part 1

Author: Emil Filipov


We've all heard that servers sometimes break for one reason or another. We often forget, however, how inevitable it is. While everything is working, the system looks like a rock solid blend of software and hardware. You get the feeling that if you don't touch it, it would keep spinning for years.

That's a very misleading feeling. The proper operation of a server depends on many dynamic parts, like having Internet connectivity, stable power supply, proper cooling, enough network bandwidth, free disk space, running services, available CPU power, IO bandwidth, memory, ... That's just the tip of the iceberg, but I think the point is clear - there is a lot that can go wrong with a server. 

Eventually some of those subsystems will break down for one reason or another. When one of them fails, it usually brings down others, creating a digital mayhem that can be quite hard to untangle. Businesses relying on the servers being up and running tend not to look too favorably on the inevitability of the situation. Instead of accepting the incident philosophically and being grateful for the great uptime so far, business owners instead go for questions like "What happened?!?!!", "What's causing this???" and "WHEN WILL IT BE BACK UP????!!!". Sad, I know. 

Smart people, who would rather avoid coming unprepared for those questions, have come up with the idea of monitoring, so that:

  • problems are caught in their infancy, before they cause real damage (e.g. slowly increasing disk space usage);
  • when some malfunction does occur, they can cast a quick glance over the various monitoring gauges and quickly determine its root cause;
  • they can follow trends in the server metrics, so they can both get insight into issues from the past and predict future behavior.

These are all extremely valuable benefits, and it's widely accepted that the importance of server monitoring comes second only to that of backups. Yet, there are more servers out there without proper monitoring than you would expect. The main reasons not to set up monitoring are all part of our human nature, and can be summed up as "what a hurdle to install and configure...", "the server is doing its job anyway..." and my favorite, "I'll do it... eventually".

I have some news for the Linux server administrators - you no longer have an excuse. We've come up with a web monitoring system for your servers that is easy to set up, rich in functionality and completely free (at least for the time being). Go on and see a demo of it, if you don't believe me. If you decide to subscribe, it will take less than a minute. Adding a machine to be monitored basically boils down to downloading a Bash script and setting it up as a cron job (you'll get step-by-step instructions after you log in and add a new server record on the web). And if you want to integrate S2Mon into a custom workflow or interface of yours, there is API access to everything (in fact, the entire S2Mon website is one big API client).

Once you hook up your server to the system, you will unlock a plethora of detailed stats, presented in interactive charts like this one:

Apache children

What we see above is a pretty picture of the load falling on the Apache web server. Apparently we've had the same pattern repeating during the last week. That's a visual proof that the web server workload varies a lot throughout the day (nothing unexpected, but we can now actually measure it!).

OK, I now want to see how my disk partitions are faring, and when I should plan for adding disk space:

Disk Usage Stats

Both partitions are steadily growing, but if the rate holds, there should be enough space for the next 5-6 months.

Hey, you know what, I just got some complaints from a user that a server was slow yesterday, was there anything odd?

Load Average

Yep, most definitely. The load was pretty high throughout the entire afternoon. Believe it or not this time it was not his virus-infested Windows computer...

Your boss wants some insight on a specific network service, say IMAP? There you go:

IMAP - Connections per service

Wonder what your precious CPU spends its time on? See here:

CPU Stats

As you can see, S2Mon can provide you with extremely detailed stats, ready to be used anytime you need them. Of course, there is a lot more to it, and I'll cover more aspects of the setup, configuration and day-to-day work with S2Mon in the next parts. As always, feedback is more than welcome!

Stayin' secure with Web Security Watch

Author: Emil Filipov


Is your server/website secure? How do you really know? Let me get back to this in a while. 

As you may be aware there is a ton of security advisories released by multiple sources every day. That's a true wealth of valuable information flowing out on the Internet. Being aware of the issues described in these advisories could make all the difference between being safe and getting hacked; between spending a few minutes to patch up, and spending weeks recovering lost data, reputation and customer trust. So who would *not* take advantage of the public security advisories, right?

Well, not quite. See, there is the problem of information overload. There are a lot of sources of security information, each of them spewing dozens of articles every given day. To make it worse, very few of those articles are actually relevant to you. So, if you do want to track them, you end up manually reviewing 99% of junk to get to the 1% that really applies to your setup. A lot of system/security administrators spend several dull hours every week going through reports that rarely concern them. Some even hire full-time dedicated operators to process the information. Others simply don't care about the advisories, because the review process is too time-consuming.

Well, we decided we could help with the major pains of the advisory monitoring process, so we built Web Security Watch (WSW) for this purpose. The website aggregates security advisories coming from multiple reputable sources (so you don't miss anything), groups them together (so you don't get multiple copies), and tags them based on the affected products/applications. The last part is particularly important, as tags allow you to filter just the items that you are interested in, e.g. "WordPress", "MySQL", "Apache". What's more, we wrote an RSS module for WordPress, so you can subscribe to an RSS feed which only contains the tags you care about. A custom security feed just for you - how cool is that? Oh, and in case you didn't notice - the site is great for security research. And it's free.

Even though WSW is quite young, it already contains more than 4500 advisories, and the number grows every day. We will continue to improve the site functionality and the tagging process, which is still a bit rough around the edges. If you have any feature requests or suggestions, we would be really happy to hear them - feel free to use the contact form to get in touch with us with anything on your mind.

Now, to return to my original question. You can't really tell whether your site/server is secure until you see it through the eyes of a hacker, and that requires some capable penetration testers. Even after you've had the perfect penetration test performed by the greatest hackers in the world, however, you may end up hacked and defaced by a script kiddie the next week, due to a vulnerability that has just been disclosed publicly.

Which brings me to the basic truth about staying secure - security is not a state, it's a process. A large part of that process is staying current with the available security information, and Web Security Watch can help you with that part.

Probably the longest webpage yet – Hugh's Fish Fight 834,000 Names under the Sea

Author: Nikolay Nedev


At MTR Design we are open to challenges, so when the guys from KEO Films asked us whether we could create the longest webpage yet, we were more than pleased to accept the commission. Fish Fight - a multi-platform campaign produced by KEO Films and led by TV campaigner Hugh Fearnley-Whittingstall - launched a new push earlier this week, promoting the Fish Fight initiative by explicitly drawing attention to every person who has supported it. The timing was strategically chosen - just prior to an important CFP meeting in Brussels. Making every single voice count could eventually impact the decision-making process in the EU.

To make this happen they needed a new webpage embodying the idea. Not an ordinary webpage, but a special one: a really long webpage listing all of its 830k+ supporters. A deep dive indeed.

We took the commission and created the webpage. The main challenge was squeezing content so enormous (three times larger than Tolstoy's “War and Peace”) into a smoothly working, convenient webpage that performs well in desktop browsers as well as on smartphones. Just imagine scrolling down to line 123,945 to find your name on the Fish Fight wall of glory. The good news is that it won't take you a whole day - we made it quick. The bad news is that you'll need a really long display. Thank you, iPhone, for making this hypothetically possible!

You should definitely check out Hugh's Fish Fight 834,000 Names under the Sea webpage on the new Apple wonder:

Well, if you don’t have one yet, don’t panic - just open the site on your preferred device and dive as deep as you can.

See it at www.fishfight.net/deep
