Web Security




A few key practices that can secure your server


The advent of the Internet Age has had a profound effect on how business is conducted. Maintaining an online presence is no longer optional for most companies if they want to stay relevant and competitive. Existing and potential customers use the Internet to make purchases, manage their accounts, research products, and much more. The benefits of this are immeasurable, but they don’t come without a dark side: hackers. With so much riding on your website and online reputation, it is absolutely vital to keep your servers secure.

Security professionals devote their entire careers to keeping up with the ever-evolving nature of online threats and global corporations have whole teams with substantial resources dedicated to keeping their online properties secure. Taking on the chore of securing your server may seem like a daunting task, but we’re here to help! We have identified a few key practices that can secure your server enough to defend against the vast majority of attacks and dissuade all but the most elite hackers. It doesn’t take a large amount of system administration ability to secure your server using these methods, but look into our management plans and SecureServer+ services if you’d rather leave it in our capable hands. 

Get Behind a Firewall

The first line of defense for any secure environment is a firewall. There are several firewalls to choose from, but they all typically have the same basic features. A firewall is either an application or a physical device that resides between the internet and any network-facing services on a server. It acts as a gatekeeper for network traffic, using a set of rules to filter both inbound and outbound connections. However, a firewall is only as good as the rules it is given to work with. A well-configured firewall can filter out the vast majority of malicious connections, while a poorly-configured one will be far less effective.

The first decision is hardware or software. Most modern operating systems come with a built-in software firewall application, which is usually sufficient. A dedicated appliance, also known as a hardware firewall, is often used in front of multi-server environments to provide a single point for firewall administration.

No matter what type of firewall you end up using, your next step is defining a good set of rules. Rule number 1 when configuring a firewall, especially remotely, is to be very careful to not lock yourself out by blocking the connection you are using to access the firewall. It is always good practice to have a fallback access method to change firewall rules should you accidentally block your own connection – typically a physical console or an out-of-band console solution like IPMI, ILO, or DRAC.

Start by considering what services your server provides. Network services utilize specific ports to help differentiate between types of connections. Think of them as lanes on a VERY wide highway, with dividers to prevent one from changing lanes. A webserver, for example, will typically use port 80 for standard connections and port 443 for connections secured using an SSL certificate. These services can be configured to use non-standard ports, so be sure to verify which ports your services are using.

Next, determine how you will remotely administer your server. On Windows, this is typically done via RDP (Remote Desktop Protocol) and on Linux, you will likely be using SSH (Secure Shell). Ideally, you will want to restrict the ports used for administration to a handful of IPs or a small subnet, preventing access to these protocols by anyone outside your organization. For example, if you are the sole administrator of a Linux server, open the SSH port (typically 22) only to connections from your computer’s static IP address. If you don’t have a static IP address, you can often determine a subnet from which you will be assigned an IP. While whitelisting a range of IPs isn’t ideal, it’s far better than opening up that port to the whole Internet.

To generate a solid set of rules, block all ports from all IPs, then create specific rules to open those ports needed for your services and administration – remembering not to lock yourself out. The ports opened for your services should generally be open to all IPs, but limit administration ports as discussed above.
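To make that logic concrete, here is a toy sketch, in Python, of how an ordered, default-deny rule set decides whether to accept a connection. This is purely illustrative (a real firewall like iptables works at the packet level, and the admin subnet below is a made-up documentation range):

```python
from ipaddress import ip_address, ip_network

# Rules are evaluated in order; anything that matches no rule is denied.
# 203.0.113.0/24 is a placeholder "admin subnet" for illustration only.
RULES = [
    ("allow", 80,  ip_network("0.0.0.0/0")),       # HTTP, open to the world
    ("allow", 443, ip_network("0.0.0.0/0")),       # HTTPS, open to the world
    ("allow", 22,  ip_network("203.0.113.0/24")),  # SSH, admin subnet only
]

def decide(port: int, source_ip: str) -> str:
    """Return 'allow' or 'deny' for an inbound connection attempt."""
    src = ip_address(source_ip)
    for action, rule_port, network in RULES:
        if port == rule_port and src in network:
            return action
    return "deny"  # default-deny: block everything not explicitly opened
```

With this rule set, web traffic from anywhere is allowed, but an SSH attempt from outside the admin subnet (and any connection to an unlisted port) falls through to the final default-deny.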

While a firewall shouldn’t be your only line of defense, creating a reasonable set of firewall rules is a great starting point for enhancing your server’s security. In truth, no server should be without at least a basic firewall configuration.

Authentication & Passwords

One of the simplest ways to enhance your server’s security is to enforce a strong authentication policy. Your server is only as secure as the account with the weakest password. Follow good password guidelines for any password used on a server: make sure it is of adequate length, not a dictionary word, and not reused on other services that could themselves become compromised and leak it. And while you can limit remote access to your server via a good firewall configuration, there are still exploits that can be used to send commands to a system through compromised or unpatched services running on open network ports.

In many cases, it’s possible (and more convenient) to go passwordless altogether! If your main method for accessing a server is via SSH, you can disable password authentication in your server’s SSH config file and instead use a pair of public and private keys to authorize your connection.
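As a sketch, the relevant directives usually live in /etc/ssh/sshd_config (the exact path and defaults can vary by distribution, so double-check yours):

```
# /etc/ssh/sshd_config (excerpt)
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
```

Generate a key pair with ssh-keygen, copy the public key to the server (ssh-copy-id is handy for this), and confirm that key-based login works from a second session before reloading sshd and closing your current one.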

Keep in mind that this method may not be as convenient if you need to be able to log in to your server from anywhere at a moment’s notice, since you will need to add your private key to any new system you are connecting from. Also, while this approach makes remote connections far more secure, you should nevertheless set a strong password on your account. Hackers are sometimes able to access a system in other ways, and you wouldn’t want an account with elevated access secured by a password like “1234”.

These days, two-factor authentication (2FA) is becoming very popular. When using 2FA, not only does a user need to authenticate with their password, they also need to provide a one-time-use code (sent to a previously registered email address or mobile device, or generated by an authenticator app) to further verify their identity. Implementing something like this on your server could be done through a third-party service, or by using a 2FA-enabled account (like Google or Microsoft). cPanel/WHM now supports two-factor authentication, so this may be an option for you if you use this control panel as your main means of server administration.
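For the curious, the one-time codes produced by most authenticator apps come from the standard HOTP/TOTP algorithms (RFCs 4226 and 6238). A minimal sketch using only the Python standard library, not something you should deploy as-is:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # low nibble picks the slice
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP where the counter is the current 30-second window."""
    return hotp(secret, int(unix_time // step), digits)
```

The server and the phone share the secret once (usually via a QR code), then independently compute the same short-lived code for each 30-second window.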

Brute Force Protection

A common attack vector on servers is a brute force attack. These are remote login attempts using guessed usernames and passwords, repeated over and over, as fast as the servers and network will allow. Unprotected, this can amount to several hundred thousand attempts per day: not nearly enough to exhaust the full space of a well-chosen 8-character password, but more than enough to run through huge lists of common and leaked passwords. For this reason, it is prudent to install some form of brute force protection on your server.
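Some quick back-of-the-envelope arithmetic (the attempts-per-day figure is an illustrative assumption) shows why short or guessable passwords are the real casualty here:

```python
ATTEMPTS_PER_DAY = 200_000  # illustrative rate for an unthrottled online attacker

def days_to_exhaust(alphabet_size: int, length: int) -> float:
    """Days needed to try every possible password of the given length and alphabet."""
    return alphabet_size ** length / ATTEMPTS_PER_DAY

# A 4-digit PIN is gone within the hour...
pin = days_to_exhaust(10, 4)          # 0.05 days, about 72 minutes
# ...while even a lowercase-only 8-character password outlives the attacker.
lowercase_8 = days_to_exhaust(26, 8)  # over a million days
```

The catch: attackers rarely iterate the whole space. They work from lists of common and leaked passwords, which is why “not a dictionary word, not reused elsewhere” matters as much as raw length.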

Most approaches to brute force protection take one of two forms. The first method introduces a timeout between login attempts. Even if this timeout is as short as a single second, it can cause an attack to take many times longer to crack a password. You’d likely want a longer timeout to provide better security, while not overly interfering with legitimate login attempts by users making typos. Some systems take a clever approach to this method by increasing the timeout with every failed attempt, often exponentially. Fail once, wait 1 second. Fail again, wait 5 seconds. Fail a third time, wait 30 seconds… By the fourth attempt, you’re going to be very careful entering your password.
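A strictly exponential version of that escalating timeout can be captured in a few lines; the base, growth factor, and cap below are arbitrary illustrative choices:

```python
def backoff_delay(failed_attempts: int,
                  base: float = 1.0,
                  factor: float = 5.0,
                  cap: float = 300.0) -> float:
    """Seconds a client must wait before its next login attempt."""
    if failed_attempts == 0:
        return 0.0  # no failures yet: no delay
    # 1s, 5s, 25s, 125s, ... capped so a typo-prone human isn't stalled forever
    return min(base * factor ** (failed_attempts - 1), cap)
```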

Alternatively, a variation of this method puts a hard cap on the number of attempts allowed within a set period of time. Failing to log in too many times will get the account locked out – either temporarily or, in more extreme cases, until unlocked by a server administrator. This method effectively puts a stop to any brute force attack, but it can be more annoying for valid users who aren’t very careful about entering their passwords.
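The hard-cap variant amounts to counting recent failures per account. A minimal sketch (the thresholds are illustrative, and timestamps are passed in explicitly to keep it simple to test):

```python
class LoginLimiter:
    """Lock an account after too many failures inside a sliding time window."""

    def __init__(self, max_attempts: int = 5,
                 window: float = 900.0,     # 15-minute counting window
                 lockout: float = 1800.0):  # 30-minute lockout once tripped
        self.max_attempts = max_attempts
        self.window = window
        self.lockout = lockout
        self.failures = {}      # user -> list of recent failure timestamps
        self.locked_until = {}  # user -> unix time the lockout expires

    def is_locked(self, user: str, now: float) -> bool:
        return now < self.locked_until.get(user, 0.0)

    def record_failure(self, user: str, now: float) -> None:
        # Keep only failures inside the window, then add this one.
        recent = [t for t in self.failures.get(user, []) if now - t < self.window]
        recent.append(now)
        self.failures[user] = recent
        if len(recent) >= self.max_attempts:
            self.locked_until[user] = now + self.lockout
```

Tools like fail2ban apply the same idea at the firewall level, banning the offending IP address rather than locking the account.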

The second method is to introduce a Captcha to the login request. This forces the user to perform a feat that is trivial for a human, but difficult for a computer. Often, this involves some sort of image recognition, such as identifying all the pictures in a grid that contain a street light, or deciphering some text written in a blurry font. While computers are usually able to solve these requests eventually, it takes them much longer than a typical human and greatly slows down the attack. Captchas are also often used to protect public comment sections from spam posts and sign-up forms from fake account creation.

Brute force protection can be found in many firewalls, or in the operating systems themselves — but don’t forget about other accounts, such as WordPress, cPanel/WHM, etc. Make sure any exposed login has some form of brute force protection enabled.

Software Updates & Security Patches

Software and operating system updates and security patches are also vital to maintaining a secure server. All of your other efforts can go entirely to waste if you are running an outdated version of an operating system vulnerable to known exploits.


Most software and operating system vendors dedicate significant resources to keeping their products patched against the most recently discovered exploits, so much so that many minor releases contain more security fixes than feature updates. Maintaining this level of vigilance on older versions of their products can be costly, so software and operating systems are frequently classified as End of Life (EOL) after a number of years. Among other things, this means that the product will no longer receive updates for exploits that may be discovered after EOL has been reached.

A commonly seen case of this type relates to PHP, a scripting language commonly used on the web. At the time of this posting, all PHP versions older than 7.2 are EOL. Despite this, PHP versions as old as 5.3 are still common out in the wild. There are significant differences between 7.2 and 5.3, making upgrading to a supported version impossible without significant reworking of the code. 

Fortunately, with this specific example of PHP versions, CloudLinux has you covered on a cPanel server. CloudLinux offers hardened builds of old PHP versions, as well as security updates, well past the EOL date. However, this issue can arise with any software, and most don’t have a solution as simple as CloudLinux.

It is not good practice to run outdated operating systems either. For example, CentOS 5 has been EOL for some time, yet it is not a terribly rare sight. If you happen to be running something like this, you should be planning your upgrade path as soon as possible. When the operating system you are running on goes EOL, it’s common that even supported software on your server will also stop receiving updates, since vendors won’t qualify new versions on EOL OS versions. This can have a cascading negative effect on the security of your server.

Code & Custom Applications

Unfortunately, even the most hardened server can still be vulnerable to attacks through insecure code or applications running on a website.

If you are running a customizable web application, such as WordPress, Joomla, or Magento, it is critically important for you to keep not just the core application up to date, but any plugins or themes as well. This also applies to the projects themselves: if you suspect that your theme or plugin is “dead” and no longer being updated, it is prudent to look for alternatives. New exploits are constantly being discovered, and an application or plugin is only as secure as its last update.

When dealing with custom code created for you by a developer, it is wise to maintain a continued relationship with your developer so that you can continue to receive updates. Otherwise, you may end up in a situation as described above, where you find that you can no longer update your PHP or other important software because the website is not compatible with the new version.

This attack vector can be the hardest to defend against, because your datacenter or hosting provider generally can not support the custom software and code that is running on your server. Unless you are running entirely off-the-shelf software, make sure you have a plan to keep your code updated and patched.

As you can see, securing a server goes far beyond the initial setup. While that setup is important, it is equally vital to keep the server up to date in order to combat the ever-growing list of known hacks and exploits. The damage caused by a compromised system, both financially and to your reputation, can be massive. As the old adage goes, an ounce of prevention is worth a pound of cure.

What is a blacklist?

At a fundamental level, a blacklist is just a list of IP addresses that have been flagged for engaging in some type of undesired activity. This undesired activity can include email spam, botnet attacks, and several other types of malicious activity.

There are numerous blacklists compiled and maintained by organizations throughout the internet. Some are for the exclusive use of a single corporation; Microsoft, for example, utilizes its own private blacklist to reduce spam reaching its email customers. Others make the contents of their lists available to subscribers for a fee, while the rest offer up their lists to the public at no cost.

The most common types of blacklists we encounter are designed to reduce spam. These blacklists are generally created with the goal of providing a server administrator the means to curb the flow of email spam on their network by tracking IP addresses used by known spammers. Any attempt to deliver email to a mail server from a blacklisted IP is rejected outright, preventing the server from having to deal with the message at all. It is assumed that all email from a blacklisted IP is spam, so no resources are spent trying to determine whether each individual message is valid.

I’ve been blacklisted?! How did this happen?

Most server owners first discover the existence of blacklists when they contact us about email delivery problems. Typically, emails sent from their server have started bouncing back as rejected. This is a good indication that the server’s IP address has found its way onto a blacklist used by the receiving mail server to filter out potential spam.

Blacklist entries can occur for several different reasons, and these will vary depending upon the blacklist operator and how they manage their lists.

  • Your IP address may have been logged by a “honeypot” – meaning that your server sent an email to a monitored address that never legitimately receives mail. These are a form of spam trap, as any email sent to such an address is assumed to be unsolicited.
  • An Internet user may have received an email from your server’s IP and clicked the “Report Spam” button. Some popular webmail services report these incidents to one or more RBL (Real-time Blackhole List) services.
  • An Internet user may have reported an email from your server’s IP to a spam reporting body, such as SpamCop.
  • A misconfiguration related to your server’s IP address may have been detected by the blacklist service. For example, some blacklists will list IP addresses that do not have a Reverse DNS PTR record matching the SMTP server’s HELO banner, among other similar checks.
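That PTR/HELO check in the last bullet is straightforward to reproduce yourself. A sketch follows; since the lookup half needs network access, the comparison is split out as a pure function:

```python
import socket

def names_match(ptr_name: str, helo_name: str) -> bool:
    """Case-insensitive hostname comparison, ignoring any trailing root dot."""
    return ptr_name.lower().rstrip(".") == helo_name.lower().rstrip(".")

def ptr_matches_helo(ip: str, helo_name: str) -> bool:
    """Look up the reverse DNS (PTR) name for `ip` and compare it to the HELO banner."""
    try:
        ptr_name, _, _ = socket.gethostbyaddr(ip)
    except (socket.herror, socket.gaierror):
        return False  # no PTR record at all; many blacklists flag that by itself
    return names_match(ptr_name, helo_name)
```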

But, I don’t send spam, how was I reported to a blacklist?

There are a number of possible reasons why you may have been listed, but before reaching this conclusion, it is a good idea to review your mail server’s logs and make sure that you really are not sending spam from your server. In many cases, a website, a mail server, or an account on your server may have been compromised and conscripted into relaying spam email through your server without your knowledge.

If this is the case, it’s generally pretty obvious as there is usually a backlog of email in the queue. Inspection of the message headers will quickly indicate whether the messages appear legitimate or not.

If you are using cPanel and you prefer not to look through log files, you can use cPanel’s Mail Queue Manager to assess the situation.

If your server is truly clean and not sending out spam emails, the most likely reasons for getting blacklisted would include:

  • If you recently obtained the blacklisted IP address, it may have been blacklisted due to a previous owner’s activities. If this is the case, usually blacklists are cooperative and will delist it if asked.
  • If you’ve been recently blacklisted but can’t find a reason why, it may simply be a false positive. If the blacklist service provides samples of the reported spam this provides a good opportunity to review the email that caused the blacklisting and decide how to proceed from there.

Where do I go from here?

Once you have done your due diligence by making sure that your server is secure and not sending spam, or if you did discover a source of spam and have shut it down, you can move forward by requesting a delisting from the blacklists that have flagged your IP address.

It’s very important that this due diligence is done first, as blacklists will often penalize repeat delisting requests. The reason is obvious: if professional spammers could easily get themselves delisted over and over, the blacklist would serve no purpose. So, to keep yourself in good standing should you ever need the blacklist operator’s help again, make sure that every delist request you submit is completely valid and that you are not at risk of being immediately re-listed for continuing offenses.

Delisting procedures vary from service to service, but they are typically automated, requiring you to fill out a simple web form providing the server IP, the reason for requesting delisting, and perhaps a verification code. However, some are not quite as easy, and others lack a process to request a delisting. In the latter case, these blacklists typically list IPs on a temporary basis, and after a set amount of time has passed without further incident, your IP is automatically removed. There is no way to speed up the process in this case.

Once your delist request has been submitted, depending on the blacklist service, it may be applied automatically or it may require human review. A good guideline is to expect resolution within 24-48 hours.

While it may seem that getting listed on a blacklist is a terrible thing, these lists do exist for a reason, and your email accounts would likely be flooded with massive amounts of email without them — it is estimated that well over half of all email messages are unsolicited. Blacklists filter out the majority of them before they even hit your mailbox. Also, finding yourself on a blacklist may be the first indication that your server has been compromised, a discovery that might take significantly longer otherwise. Finding yourself on the wrong end of a blacklist can be an annoyance, but their benefit far outweighs their burden. 

In today’s security-focused world, securing your website, as well as the server it is running on, is an important and necessary task for any website owner. SSL certificates are on the front line of this defense, serving to secure the connection between your web server and your clients. However, it can be an intimidating concept for beginners, given all the various options and different levels to choose from — so how do you know what your website needs? Should you buy the most expensive certificate available, or settle for a free SSL certificate?

What is SSL, anyway?

The primary function of SSL (Secure Sockets Layer, since succeeded by TLS but still commonly called SSL) is to secure the connection between your website and its visitors by encrypting the traffic while it’s in transit over the Internet. This provides numerous benefits, including combating man-in-the-middle attacks. The idea behind encryption is that even if someone along the way can view the data while it’s in transit, they still need the encryption keys to decipher it into something readable.

In addition, an SSL certificate serves to validate the identity of a website. For example, if you go to your bank’s website you want to know that the website is indeed operated by your bank, and not by an imposter. This helps to protect against phishing attempts and other fraudulent behavior that can damage your brand, or worse.
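Python’s standard library illustrates both roles in one call: the default client context refuses to talk to a server whose certificate doesn’t check out, and encrypts everything once it does. (The example.com connection is commented out because it needs network access.)

```python
import socket
import ssl

# A default client context enables certificate-chain and hostname verification
# in addition to encrypting the traffic itself.
ctx = ssl.create_default_context()

# To use it, wrap a TCP socket before speaking HTTP(S):
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.version())
```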

Types of SSL Certificates – and what’s the difference?

There are several common types of SSL certificates which you’ll see when you’re shopping around.

The key difference between the SSL certificates is how they are verified, and how much of a vetting process is involved in checking the identity of the applicant. This is done by the issuer of the SSL certificate, known as the certificate authority. Often, the quality of a certificate is tied directly to the reputation of the issuing certificate authority.

Paid SSL certificates typically also come with an insurance policy, providing financial compensation if there is a breach in which the certificate authority could be found at fault. This is vital protection for a website operator who is handling monetary transactions, such as an eCommerce site. Usually, this insurance coverage will increase with a more expensive SSL certificate offering. You would want to check with your SSL vendor if this is important to you.

Domain Validated (DV) SSL

A domain validated SSL certificate is usually the cheapest and most common type of paid SSL certificate. While you do usually place company information into the certificate request, none of this is actually vetted when applying for the certificate.

The only thing checked is that you control the domain name covered by the SSL certificate. Usually, this is verified by one of a handful of common methods, such as creating a DNS TXT record, receiving a validation email at an administrative contact address for the domain, or placing a validation code or file on the website itself.

A DV SSL certificate from a common certificate authority will be accepted by any major web browser and will show a standard https:// link, sometimes with green text or a padlock icon to indicate that the site is secure. It is the most common type of certificate and an everyday sight while browsing the web.

Organization Validated (OV) SSL

An OV SSL certificate is similar to a DV SSL certificate, but additional details of the company registering the certificate will be vetted by the certificate authority. In addition to everything a DV SSL provides, the certificate authority will generally provide a secure site seal, an image that can be embedded within the website and that visitors can click to get more information about the website owner.

An OV SSL certificate otherwise will appear the same in a visitor’s web browser. The additional vetting is simply an option to provide added credibility to the visitor that the website is being operated by a legitimate business.

Extended Validation (EV) SSL

An EV SSL certificate is the most expensive type of SSL certificate and brings with it the most thorough vetting process. Before issuing an EV SSL certificate, the certificate authority will verify that the applicant company is an existing legal operating entity with a physical place of business, verify applicant details against official records, and independently verify that the applicant company has authorized the issuing of the certificate. Generally, this validation process is the slowest, as it often requires a verification letter to be sent through the mail.

The significant benefit of an EV SSL certificate is that it displays differently in the visitor’s web browser. In addition to displaying as a valid SSL-secured connection, in most web browsers an EV SSL will also display the name of the company in green text just before the URL in the address bar. This increases a visitor’s confidence in the legitimacy of the business operating the website.

What about the free SSL certificate options?

With the push to put SSL on every website, these days there are some certificate authorities offering free SSL certificate options. Some popular options include Let’s Encrypt and cPanel’s AutoSSL. With these options in play, is there a reason to pay for an SSL certificate anymore?

Many website owners can now benefit from the free SSL certificates that are available from such providers. Generally speaking, these SSL certificates are comparable to the lowest end paid certificates, Domain Validated (DV) SSL certificates.

From a security standpoint, generally, there isn’t a downside to using the free SSL certificates from these vendors. They provide comparable levels of encryption and show as valid and secure in any major web browser. One potential downside is that they may require more expertise to set up, though cPanel’s AutoSSL makes the setup pretty straightforward.

Keep in mind, if the insurance provided with paid SSL certificates is important to you, this is generally absent from the free SSL certificates. This reduces the accountability of the certificate authority and therefore may make these a poor fit for websites handling financial transactions or busy eCommerce sites.

Which SSL certificate is right for me?

As always, you should do all the necessary research to make sure all of your bases are covered, but a good rule of thumb might be:

  • For a small, personal website, not handling financial transactions (such as an online resume or personal blog) a free SSL certificate or a DV SSL is usually sufficient.
  • For a larger site, eCommerce, or any site handling financial transactions, a paid DV SSL certificate would be the minimum. If you are concerned about appearing as a legitimate business or insuring your business in the event of a breach, you may want to consider the more expensive certificates such as OV or EV SSL, due to the increased insurance coverage and the trust conveyed to visitors by these certificates.

Whichever option you choose, any level of SSL protection is better than none. GigeNET can help you find the right certificate for your business and navigate the process alongside you, from start to finish.

There was a time when Distributed Denial of Service (DDoS) attacks were fairly uncommon and only affected the most high profile websites. Unfortunately, those days are over. In the 21st century, anyone who owns a website should be concerned about DDoS attacks and the consequences they can bring about. So what exactly is a DDoS attack? The web hosting experts at GigeNET are here to tell you more about DDoS attacks and what they can mean for your website in language that’s simple to understand. Read on to find out more!

Distributed Denial of Service Attacks

The best way to understand a DDoS attack is through an analogy. Think of your website as a carry-out pizza restaurant.  During normal operation, the restaurant isn’t overly busy and patrons call in, order their pizza, and come pick it up when it’s ready. Now imagine a prankster calling in multiple times to place orders but never coming in to pay for it, or to pick up the pizzas that were ordered. This can be annoying and it makes it difficult to keep up your standard level of service for your legitimate customers, but it is fairly easy to mitigate because the false orders are coming from just one phone number. Block the number, and your problem goes away. This is known as a Denial of Service (DoS) attack, and it is the precursor of modern DDoS attacks.

While a single prankster is fairly easy to identify and deal with, imagine now that the prankster has enlisted the aid of a network of accomplices throughout the city. Now, the phone is ringing non-stop and most phone calls are unable to get through. These orders are coming from different phone numbers, making it impossible to block them all. It’s equally impossible to sort out the handful of legitimate orders that are lucky enough to get through the flooded phone lines from the fake orders. This is known as Distributed Denial of Service – a DDoS attack.

In a DDoS attack on a website, malicious actors can use various means to flood the site with requests to the point where systems become overloaded, preventing legitimate users from accessing it. A big DDoS attack can even crash a website, bringing it to a grinding halt. In the past, DDoS attacks have targeted corporations large and small, banks, government websites, even internet access on the Las Vegas Strip. Regardless of whether the attack is motivated by greed, revenge, activism, or boredom, the result is the same: excessive traffic prevents people from gaining access to the information and services they need. And while the true nature of DDoS attacks may be a bit more complicated than our pizza analogy, the fact is that these types of attacks can easily bring any unprotected website to its knees without warning.

How to Stop DDoS Attacks

A DDoS attack can potentially cost a business thousands of dollars in lost revenue and reputation damage. DDoS attacks have even been used as cover for malicious intrusions, during which credit card numbers and other valuable information have been stolen. If you want to protect yourself and your users, it’s crucial that you take steps to prevent and mitigate the impact of DDoS attacks.

At GigeNET, we offer automated DDoS protection on all our servers so our customers (and their website users) can have peace of mind. Our automated DDoS protection works to detect and mitigate malicious attacks in real time 24 hours a day, 7 days a week. When attacks are detected, they are routed through our scrubbing center, sending malicious traffic away from your servers where it can’t do any damage.

If you’re interested in finding a solution to DDoS attacks, GigeNET has what you’re looking for. Take a look at all the server options we offer and click “Order Now,” call us at (800) 561-2656, send an email to sales@gigenet.com, or fill out our online contact form if you have any questions. You and your users deserve the best protection possible; get in touch with GigeNET today to stop hackers in their tracks!

What are your top 3 priorities when it comes to your website? Design, Content, and UX? Those are all worthy considerations and vital to your website’s appeal. Nonetheless, in an age where digital malcontents are growing ever more inventive and persistent, the answer should be Security, Security, and Security. For websites hosted on dedicated servers, installing a firewall to provide additional security for your web server is an integral part of making your website secure and protecting your brand’s reputation.

But why are firewalls so important? Is your server really that imperiled by not having one? Let’s find out what a firewall is and take a look at some of the top reasons why you need to have one protecting your web server…

What is a Firewall?

In a cloud-based era, servers can be accessed from virtually anywhere. A firewall gives you precise control over who accesses your server based on their IP address and connection protocol, allowing you to slam the door shut on any unauthorized remote access attempts. It works by inspecting each and every packet of network traffic addressed to your protected server, referencing a robust set of rules to determine if it should be allowed to pass, or not. In short, it is a gatekeeper for network traffic. The set of rules defined for a firewall dictate its effectiveness and are flexible enough to allow you to tailor access to only the very specific type of traffic you want.

It’s quick and easy to set up

As a business owner, you likely approach every decision you make by carefully weighing risks and rewards. The good news is that setting up a firewall for your server is a relatively quick and easy endeavor, since most content management systems and hosting providers offer a wealth of options to choose from. You have very little, if anything, to lose but a great deal to gain.

Malware attacks cost businesses billions

While the cost and effort involved in installing a firewall for your server are typically fairly minimal, the damage that can be wrought by a malware attack can bring your business to its knees operationally and financially. Most servers contain some kind of sensitive data, whether it’s documents, email addresses, or passwords, and a leak could cause a harmful chain reaction of consequences. The reputational damage from such an attack alone can be extremely costly, not to mention the losses due to downtime and irreplaceable lost data. Malware attacks have cost businesses of all shapes and sizes literally billions.

Not having one could negatively impact your SEO

As a small business operating in the ultra-competitive landscape of the (nearly) 2020s, you know that your website’s Search Engine Optimization plays a huge part in giving you an edge over other businesses competing with you for the attention of customers. All search engines (but especially Google) are extremely conscious of the need to create a secure experience for their users. As such, security provision is a factor that influences their search algorithms. Thus, anything you can do to make your website more secure than your competitors’ is well worth your time.

Hardware or Software Firewall?

Firewalls come in two primary varieties, hardware and software. Hardware firewalls are physical devices that are placed between your server, or servers, and the Internet. Software firewalls, on the other hand, are built into most modern operating systems and run on the server they are protecting. Both varieties share the main features that make a firewall a firewall, but there are differences between the two.

Hardware firewalls come at an additional cost and are yet one more device to manage, but they can provide security for your entire network, not just one system. In addition, they often allow for secure remote access to your network via their ability to function as a VPN access point. For a busy network, it may be beneficial to offload the firewall duties to a stand-alone device, allowing you to eke out every last bit of performance from your protected servers by not using any of their resources for filtering network traffic.

Software firewalls can usually be installed at no additional cost, unless you opt for a premium third-party option rather than what comes with your OS or one of the many free options available. For the most part, they can provide a similar level of security to their physical counterparts, but they only protect the device they are installed on (unless you start routing network traffic through that device, in essence turning it into a hardware firewall).

But you don’t have to choose just one! Having a hardware firewall protecting your network, and a software firewall running on each device gives you the best coverage in terms of filtering out malicious attacks from the internet, and there’s really no such thing as overkill when it comes to server security.

Nothing is more important than your reputation. By increasing the security provision of your server environment, you can help prevent hackers and malware risks from undermining your reputation and scaring security conscious customers away from your business.

Cyber Security Month 2018
Our team has been laser-focused on security-related topics for National Cyber Security Awareness Month this October.  If there is any big takeaway from this exercise, it’s seeing how pervasive cyber security is. Cyber security permeates every layer of technology, every node touched, every person involved — from client, to administrator, to developer, and beyond.  

Everyone, at every level, is responsible for part of the overall security of a system.  Even the most secure systems can be brought down by a weak password, an unpatched vulnerability, or a simple oversight in design.  Ensuring that all layers of a system are secure is done by having rigorous and uncompromising standards and policies in place — and making sure they are always followed.

Our guest blog this month covers how IT teams take every precaution to ensure their networks are secure, but also how individuals play a big role in the process by using strong passwords, avoiding phishing scams, and more.

Some will say there is no such thing as too much security, but overly restrictive security policies can have an adverse effect on the usability of a system.  Think about a complex password policy that results in passwords that are nearly impossible to memorize — then force the passwords to be changed on a daily basis.  A system like this would likely result in users tracking their current password in a variety of ways, some more secure than others (a note under their keyboard, an email to themselves, a sticky note on their monitor, etc).  The way in which people cope with the security policies can make the system less secure in the long run. Thus, in all but the most extreme cases, some level of compromise must be found.

Scott’s blog this month helps deal with this exact issue — it deals with centralizing authentication, which puts in place a single sign-on system where one account grants access to multiple systems.  This limits the number of passwords an end user will have to keep memorized while allowing for a robust password policy to ensure passwords are strong enough to not be guessed or quickly brute-forced.  In addition, he goes deeper into securing user access to systems by managing sudo through active directory and implementation of System Security Services Daemon (SSSD) to manage access to remote directories and authentication mechanisms.

There’s been a lot of focus on the end user, but that is hardly the sole vector of attack our systems must be able to withstand.  At the core of any security policy is locking down your servers. This month, Zach’s blog discusses the importance of promptly installing security patches.  No password policy can help you if a hacker can bypass a password and gain root access due to an outdated package.  If you know of a vulnerability that should be patched, you can safely assume that hackers are aware of this, too.

To wrap up our National Cyber Security Awareness Month blog series, we have Kirk’s blog which is a more general piece dealing with SSH security practices.  Definitely a must-read for anyone who administers a server. This piece will go a bit further in-depth than the basics by analyzing several SSH security practices, discussing the pros and cons of the different approaches — including when it is appropriate to implement them, and when it is not.

Thank you for your interest in our security blog series for National Cyber Security Awareness Month.  This will be the first of many sets of coordinated articles from us in the months to come.  Finally [cue Mission Impossible theme music], for security reasons, this message will self-destruct in 5…4…3…2…1…

Browse all of GigeNET’s security blogs or explore our security solutions.

Until recently, Linux authentication through a centralized identity service such as IPA, Samba Active Directory, or Microsoft Active Directory was overly complicated. It generally required you to manually join a server or workstation to a company’s domain through a mixture of Samba winbind tools and Kerberos krb5 utilities. These tools are not known for their ease of use and can lead to hours of troubleshooting. When Kerberos was not applicable due to a networking limitation, an administrator had to resort to an even more complicated set of configurations with OpenLDAP. This can be frustrating to deal with and has led some to deploy custom scripts for user management. I have seen administrators utilize Puppet, Chef, and Ansible to roll out user management. At GigeNET, we are guilty of this with our Solaris systems. The bulk of our architecture is Linux based, however, and we now manage its authentication through Microsoft Active Directory.

The complexity of joining a domain has been greatly reduced. The Linux community understood these tools were not ideal to manage and came up with a new solution: the System Security Services Daemon (SSSD). SSSD is a core project that provides a set of daemons to manage remote authentication mechanisms, user directory creation, sudo rule access, SSH integration, and more. Even so, the SSSD configuration can be quite complex on its own, and each component requires you to understand the underlying utilities I brought up in the introduction. While it’s good to understand each of these components, it’s not strictly necessary because, once again, the Linux community banded together to build a few tools that wrap around SSSD. In earlier Linux distributions the tool was called adcli; on most distributions, the tool that manages the integration process is now referred to as realmd. You can do most basic domain administration with the realm command. I have added a snippet showing how easily one can join a domain:

[root@server001 ~]# realm join -v -U administrator gigenet.local.
* Resolving: _ldap._tcp.gigenet.local.
* Performing LDAP DSE lookup on:
* Performing LDAP DSE lookup on:
* Successfully discovered: gigenet.local
realm: server001 has joined the domain

As the snippet above shows, realm looks up the domain in DNS and then attempts a join. On the backend, this utilizes Samba’s net ads join command. The command also builds out a basic SSSD configuration file. If the join was successful, we should now be able to utilize any user account within our domain. I normally perform a quick SSH login with my domain username. If successful, you should notice that you are logged in and that a home directory for the user account has been created.


While researching SSSD, I didn’t find any full-blown examples of a working SSSD configuration file. I believe in transparency and have included a templated example of our internal configuration file. It’s very basic in design and works on a few hundred internal systems without complaint.  Please note we have substituted gigenet.local for our real domain in this example.

[sssd]
domains = gigenet.local
config_file_version = 2
services = nss, pam, ssh, sudo

[ssh]
debug_level = 0

[sudo]
debug_level = 0

[domain/gigenet.local]
debug_level = 0
ad_domain = gigenet.local
krb5_realm = GIGENET.LOCAL
realmd_tags = manages-system joined-with-samba
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = False
fallback_homedir = /home/%u
access_provider = simple
simple_allow_groups = Operations
sudo_provider = ad
ldap_sudo_search_base = CN=Sudors,OU=Accounts,OU=North America,DC=gigenet,DC=local

With basic user authentication working, I wanted to focus on a small feature you most likely noticed within the SSSD configuration template we provided: the sudo integration. Documentation on sudo integration is sparse on the internet and full of conflicting information; it usually amounts to a few commands being passed around without any explanation of what they do. It took me hours to piece the information together from guides, blog posts, and Linux man pages. Hopefully, the information I have detailed below doesn’t follow this pattern. I still remember the hours of going through SSSD sudo log files line by line as if it were yesterday.

To utilize sudo we have to add the sudo schema to our Active Directory domain. This requires small modifications to the global Microsoft Active Directory schema. Before you perform the adjustments, I really recommend doing a full domain backup; touching the global schema tends to make some administrators very uncomfortable. Our domain is not very large and doesn’t have teams managing it like in some companies, so we decided the benefits of centralized user authentication with centralized sudo configurations were worth the small adjustment. To my surprise, none of the guides I found on the internet include the location of the actual schema file. To spare you a few hours of research: the files are located within the sudo directory under /usr/share. We have also uploaded the schema to our Git repository (https://github.com/gigenet-projects/blog-data/tree/master/sssdblog1).

Let’s dive into applying the schema file. Please ensure you have the ldifde command on your Windows Active Directory domain controller. To apply the schema, you will also have to copy the schema file named “schema.ActiveDirectory” to the domain controller you’re working on. Start up a PowerShell command prompt and enter the command in the snippet below. Don’t forget to substitute your own domain for the gigenet.local LDAP base.

Windows PowerShell
Copyright (C) 2016 Microsoft Corporation. All rights reserved.
PS C:\Users\Administrator\Desktop> ldifde -i -f schema.ActiveDirectory -c dc=X DC=gigenet,DC=local

With the schema applied, we can build out our first rule set. Don’t fret: I’ll follow this with a series of pictures showing how to actually design a sudo rule. Before we begin, you will need to create a group or a container within your domain where the rules will be stored; this will later be referenced by the SSSD configuration file. Once the group has been created, we will build our rules with the adsiedit.msc tool. Run adsiedit.msc within PowerShell to open the tool. With the tool open, traverse your domain tree to the group you created to store the sudo rules. To build out our sudoRole object, start by right-clicking within the Microsoft Active Directory group. Follow the pictures as a general guideline. Our guide will be adding the ping command as a sudo rule. This sudo rule has a few configuration options that we spent many hours exploring on our end. Shall we begin?

[Screenshots: creating a new object in ADSI Edit]

From the right-click menu, we select New -> Object. This brings up a second window with a hundred different types of objects; in our case, we select sudoRole and move on to the next field. This role matches most of the options one would find in the /etc/sudoers file. This leads to a section where we name the actual sudo rule and set the attributes we can assign to the rule we just named.

[Screenshots: naming the sudoRole and opening its attributes]

The next three images tell the story of the sudo rule we want to create. The attributes section has dozens of options to tailor your rules to your own design, but we will go through the three simple attributes you would commonly see in a sudoers configuration file. The story we will describe is one of legend: our rule will allow us to run the ping command as the root user account and without a password. In the first prompt, you will notice we specify the user account, and the second prompts for the commands we want to run. Pro tip: look out for extra whitespace, because it can lead to a few extra hours of troubleshooting; stray whitespace will break your rules. In the last image, we add a secret option to run the command without a sudo password. This hidden gem took me about a day to figure out, as the internet had almost no documentation on the feature.

[Screenshots: setting the user, command, and no-password attributes on the sudoRole]
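For reference, the finished rule corresponds roughly to an LDAP object like the one below, shown here as LDIF. The DN and the sudoUser group are assumptions based on the configuration template earlier in this post, and sudoOption: !authenticate is the no-password setting just described:

```ldif
# Hypothetical LDIF rendering of the ping rule built in the screenshots.
dn: CN=ping,CN=Sudors,OU=Accounts,OU=North America,DC=gigenet,DC=local
objectClass: top
objectClass: sudoRole
cn: ping
sudoUser: %Operations
sudoHost: ALL
sudoCommand: /bin/ping
sudoRunAsUser: root
sudoOption: !authenticate
```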

With the basics completed, we can now save the rule. Take a little time to explore all the additional options that can be set as attributes; it’s worth an hour of exploration. Now, to get this rule working on the actual Linux host, we have to go back into our SSSD configuration. Under the sssd section is the services value where we add the sudo option, and we then apply a few sudo options under the domain section. Let me explain each configuration option as defined in the configuration above:

  • sudo_provider: The provider we use to pull in the sudo rules. We are using the Active Directory provider in this configuration.
  • ldap_sudo_search_base: The Active Directory base group where we placed the ping object. This base search pulls in every rule within this domain group or container.
  • ldap_sudo_full_refresh_interval: The interval at which SSSD looks up and pulls new rules into the live sudoers configuration. This keeps rules updated automatically, so you don’t need to manually clear the SSSD cache and perform a restart.

The last configuration required to get the sudo rules working is a small adjustment to the system’s NSS configuration file. Please edit “/etc/nsswitch.conf” to include the line “sudoers: files sss”.  The output below was taken from a live system:

passwd:   files sss
shadow:   files sss
group:    files sss
sudoers:  files sss

This about wraps up our entry-level introduction to Linux authentication through Active Directory. This practice makes it easy to maintain centralized identity services, so you don’t have to constantly push new users to each host or clean up suspended user accounts; the passwd and group files on the Linux system stay clean with this method. We will publish a follow-up blog in the future that goes through the painful details of adding a public SSH key to each account and storing the key within Active Directory.


Cybersecurity: Data on Vulnerabilities in Web Applications
At the speed that information travels, it’s easy to forget that the Internet is relatively young. Despite all the negative foresight, its potential for exponential growth means we can start to see the benefits of the Internet when data is used to advance technology, and humanity as a whole.

Cybersecurity projects dedicated to the analysis, development, and research of vulnerabilities now work alongside industry leaders and corporations such as Cisco Talos, Google, and IBM with the intent to purposefully expose design flaws. The effort to intentionally break software may appear malicious in nature, but these deliberate attacks provide transparency, strengthening security against potential threats. In practice, it’s better the good guys find a flaw before the bad guys exploit it. Zero-day vulnerabilities are disclosed to vendors prior to public disclosure, giving developers the opportunity to implement a patch. The idea is to work together, as when corporations such as Google partner with free software efforts such as the GNU Project, which provides a platform for open source projects to build upon.

Using Analytical Data to Protect Users

Open source projects are largely community driven, and many are a product of member development and research contributions. The Open Web Application Security Project, abbreviated as OWASP, is a not-for-profit organization dedicated to web application security. By providing web app security guidance and analytical data, this open source community has a more direct effect at the server level. While larger corporations like Cisco, Google, and IBM operate on the cutting edge, projects like OWASP have compiled a Top 10 list of security risks in web applications using data gathered in 2017.

Top Cyber Security Risks

  1. Injection: SQL, XML Parser, OS commands, SMTP headers
    Injection-type attacks increased significantly, up 37 percent in 2017 from 2016. Code injection attacks can compromise an entire system, taking full control. SQL injection breaches the database, querying the most vital component, which often houses personal information.
  2. Authentication: Brute force, Dictionary, Session Management attacks
    Weak passwords grow more susceptible to dictionary attacks as word lists continue to inflate. Refrain from setting special-character limits and maximum length values that discourage password complexity. Successful authentications should generate random session IDs with an idle timeout.
  3. Security Misconfiguration: Unpatched flaws, default accounts, unprotected files/dirs
    Errors were at the heart of almost one in five breaches.
  4. XML External Entities: DDoS, XML uploads, URI evaluation
    CMSs that use XML-RPC, including WordPress and Drupal, are vulnerable to remote intrusion. There have been many instances of pingback attacks used to send DoS/DDoS traffic. In most cases, the XML-RPC files can be removed completely. XML processors can evaluate URIs, which can be exploited to upload malicious content.
  5. Insufficient Logging & Monitoring
    Preventing irreparable data leaks requires awareness. 68% of breaches took months or longer to discover. Logging and monitoring alerts are essential for recording irregularities.
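To make the injection risk in item 1 concrete, here is a minimal sketch in Python using an in-memory SQLite database. The table and values are made up for illustration; the point is the difference between concatenating user input into a query and passing it as a bound parameter:

```python
import sqlite3

# Set up a throwaway database with one user record (illustrative data only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "alice@example.com"))

payload = "alice' OR '1'='1"  # a classic injection string

# Unsafe: the payload becomes part of the SQL text and matches every row.
unsafe_sql = "SELECT email FROM users WHERE name = '%s'" % payload
unsafe_rows = conn.execute(unsafe_sql).fetchall()

# Safe: the driver binds the payload as a literal value, so it matches nothing.
safe_rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (payload,)
).fetchall()

print(unsafe_rows)  # [('alice@example.com',)] -- injection leaked the data
print(safe_rows)    # [] -- the malicious string is just a (nonexistent) name
```

The same principle applies to the other injection targets listed (XML parsers, OS commands, SMTP headers): untrusted input should never be spliced directly into something that will be parsed or executed.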

Future of Cyber Security

Knowledge of the risks is the best defense, and preparedness for a seemingly inevitable attack is the greatest asset in a world network crawling with vulnerabilities. It’s no question that security starts with the individual; the majority of IT professionals agree that related courses should be a requirement. Vulnerabilities will occur as technology progresses, and as a community we can see the importance of data and analytics in innovation.

Explore GigeNET’s DDoS Protection services or chat with our experts now to create a custom solution.

Security experts can only do so much. Imagine the sophisticated systems at global banks, research facilities, and Las Vegas casinos (“Ocean’s Eleven,” anyone?) — an excess of cameras, guards, motion detectors, weight sensors, lasers, and failsafes.

But what happens if someone leaves the vault door open?

Similarly, server and network security measures can only go so far. Attackers don’t need to engineer a complex and highly technical method to infiltrate your business’s infrastructure: They just need to entice a somewhat gullible or distracted employee into clicking on a link or opening an attachment.

Whether an employee is acting intentionally or is unaware and careless, 60% of all attacks come from within. A vulnerability can be exposed by an accountant, a systems administrator, or a C-level executive, and the results can cost a company millions in downtime, lost sales, and damaged brand reputation.

IT teams can take all the modern precautions to shore up any potential vulnerabilities by following industry best practices with onsite hardware, applications, and websites. Employing a trusted hosting provider like GigeNET adds even stronger protections in the form of high-touch, individualized managed services and state-of-the-art DDoS protection.

But that may not be enough to protect your organization from well-meaning employees who fall for intricate phishing schemes or ransomware attacks. So, in the spirit of Cyber Security Month at GigeNET, here are a handful of ways businesses can turn their weak links into a strong line of defense.

Enforce Strong Passwords

This one seems like it’d be an obvious one — and relatively easy to control. But even in 2016, nearly two-thirds of data breaches involved exploiting weak, stolen, or default passwords. As the first line of defense against attacks, ensuring your employees follow stringent authentication practices is key to protecting your company’s sensitive data.

Educate employees on what constitutes a strong password and enforce the standards you implement. Passwords should be unique and lengthy combinations of upper- and lower-case letters, numbers, and symbols, and you can ban users from using easily guessed information like their first or last name, the company’s name, or even careless passwords such as ‘password’ or ‘1234.’

Once stronger password rules are in effect, require employees to update and change critical passwords periodically. You can encourage users to employ a password manager program to help them stay on top of their access rights.
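As a sketch of what automated enforcement might look like, the following Python function implements rules along the lines described above. The specific thresholds (a 12-character minimum, the banned-word list) are illustrative assumptions, not a standard; adapt them to your own policy:

```python
import re

# Hypothetical banned list: the company name and careless passwords go here.
BANNED = {"password", "1234", "acmecorp"}

def is_strong(password: str) -> bool:
    """Return True if the password meets our illustrative policy."""
    if len(password) < 12:           # assumed minimum length
        return False
    if password.lower() in BANNED:   # reject known-bad passwords outright
        return False
    # Require lower-case, upper-case, digit, and symbol characters.
    required = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    return all(re.search(pattern, password) for pattern in required)

print(is_strong("password"))          # False: banned and too short
print(is_strong("Tr1cky-Phrase!9x"))  # True: long with all character classes
```

A check like this can run at account creation and at every password change, so weak choices are rejected before they ever reach the authentication database.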

Password management gets a little more complicated when there are different levels of employees who require various levels of access to certain applications and software. Regularly evaluate user permissions and make sure access is granted only to those who truly need it. Of course, proactively manage login permissions and shared passwords when employees leave the company — even if the parting is on good terms.

Educate and Test Employees on Phishing

We’re long past the days of the unjustly exiled Nigerian prince offering his family fortune to those willing to front him a little money for his escape. Email phishing is the attempt to obtain sensitive information — think usernames, passwords, credit card numbers, and other types of personal data — by sending fraudulent emails and typically impersonating reputable companies or people the intended victim knows.

Through the years, phishing attacks have become more subtle and harder to detect, even for the filters and safeguards employed by Office 365 and G Suite. Attackers will customize messages to exploit specific weaknesses in email clients and popular online platforms. Email phishing has scored some high-profile victories in recent years, enabling leaked emails from Sony Pictures and Hillary Clinton’s 2016 presidential campaign. In fact, the latter attempt even fooled the campaign’s computer help desk.

Attackers are more frequently targeting businesses and organizations instead of random individuals and often use the infiltration to start a ransomware attack. Personalized emails, shortened links, and fake login forms all serve to trick users into sharing sensitive login information or network access.

Train employees on modern phishing scams and how to spot them. Implement processes that enable employees to report possibly harmful messages, and consider deploying a service that runs phishing simulations or uses artificial intelligence or machine learning to detect spoofed senders, malicious code, or strange character sets.

Protect Against Human Error

Of course, no one is perfect. Mistakes happen, and there often isn’t a shred of malice behind an insider’s misstep. Given employees’ access to sensitive data, however, the slightest error can have disastrous results.

The threat of simple, bone-headed errors plagues businesses large and small. Even Amazon blamed an employee for inadvertently causing a major outage to Amazon Web Services in 2017. Several years earlier, an Apple software engineer mistakenly left a prototype of the highly anticipated iPhone 4 at a bar.

Whether your employees are handling important data or devices, training and awareness are critical to promoting stable and secure operations. An organization is only as strong as its weakest link, and one simple slip up can have major consequences.

Protect your organization by implementing rigorous coding standards, quality assurance checks, and backups. Take a critical look at user permissions and access to prevent employees from inadvertently making system changes or accidentally downloading or installing unauthorized software. Consider how company devices and sensitive data are handled across the organization, and prepare for worst-case scenarios.

Stay Vigilant and Rely on the Experts

Although a rare weak password or unused admin account may not pose an immediate threat to your company, any security oversight can lead to disastrous results at a moment’s notice. Act holistically when it comes to protecting your business infrastructure, devices, and data — inside and out.

GigeNET will gladly secure and monitor your systems to proactively diagnose and patch vulnerabilities before they become exploits, but comprehensive security extends beyond our server hardening, managed backups, and scalable DDoS protection service. Security is a team sport, so huddle up and let us draw up your organization’s security game plan.


While working as a sysadmin over the years, you truly start to understand the importance of security patches. On a semi-daily basis I see compromised servers that have landed in an unfortunate situation due to lack of security patching or insecure program execution (e.g. running a program as root unnecessarily). In this blog post I’ll be focusing on the importance of patching your Linux servers.

As you may know, there have been many high severity Linux kernel and general CPU vulnerabilities these past few years. For example, the Dirty COW Linux kernel vulnerability and the CPU speculative execution vulnerabilities all require patching. If you’re not taking security patching seriously, now is the time to start. Something as simple as subscribing to your Linux distribution’s security mailing list and applying patches as needed could prevent a compromise. Most that are concerned with security have learned the hard way and have had their servers compromised. But who wants to learn the hard way? There is a lot more attention that needs to go into securing your server, but patching is the first line of defense.

Top Linux server security practices: 

  1. Subscribe to your Linux distribution’s security announcements mailing list. For example the CentOS-announce or the debian-security-announce mailing lists. These will notify you when packages are updated that contain security patches. They’ll also go over which vulnerabilities the patch covers.
  2. Read security related news! It’s important to keep up with the latest news on security topics. I’ve discovered the need to patch software many times by just reading news.
  3. Check whether you actually need the patch and how it applies to your environment. It’s best not to blindly patch everything in the name of security; the vulnerability may not affect you at all. I see this a lot with Linux kernel vulnerability patches: there are generally a lot of them, but most are not severe, and skipping an irrelevant one can save you from yet another reboot.
  4. If you delay patches due to worries about downtime, build redundancy into what you’re doing. It’s important that critical vulnerabilities get patched, but it’s also important that your production server remains up and accessible. The best option, even if difficult, is to design your services for high availability so patching doesn’t mean downtime.
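On Debian and Ubuntu systems, for example, the first and last points can be partially automated with the unattended-upgrades package, which applies security updates on a schedule. A minimal sketch of the standard configuration file (the values shown are illustrative; check your distribution’s defaults):

```conf
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";   // refresh package lists daily
APT::Periodic::Unattended-Upgrade "1";     // install pending security updates daily
```

Automated patching is no substitute for reading the security announcements, but it closes the window between a fix being published and it landing on your servers.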

Patching is probably the easiest part of maintaining a secure environment. So there’s no excuse to neglect your system! It also prevents a headache for your future self.

How can GigeNET keep your business secure? Chat with our experts now.

Top SSH Security Best Practices
SSH is a common system administration utility for Linux servers.  Whether you’re running CentOS, Debian, Ubuntu, or anything in between; if you’ve logged into a Linux server before, you likely have at least heard of it.

The acronym SSH stands for “Secure Shell”, and as the name implies, the protocol is built with security in mind.  Many server administrators assume that SSH is pretty secure out of the box, and for the most part, they’d be correct. SSH has fantastic security features by default, like encryption of all communication to prevent man-in-the-middle attacks, and host key verification to alert the user if the identity of the server has changed since they last logged in.

Still, there are a large number of servers on the Internet running SSH, and attackers like to find attack vectors that could potentially affect as many of them as possible.  Security tends to come at the cost of convenience, so many server administrators, whether intentionally or without much thought, leave their servers running default SSH installations.  For most of them this isn’t an issue, but there are some steps you can take to stay ahead of the curve. After all, being a little bit ahead of the curve is one of the best security practices to reach for; that way your server avoids being one of the low-hanging fruit that tempt attackers.

With that in mind, here are some techniques that you may want to consider for your Linux server to help improve your SSH security.

Brute Force Protection

One of the most common techniques for improving SSH security is brute force protection.  This is because the most common security concern faced by server administrators running SSH services is brute force attacks from automated bots.  Bots will try to guess usernames and passwords on the server, but brute force protection can automatically ban their IP address after a set number of failures.

A few common open source brute force protection solutions are ConfigServer Firewall (CSF) and Fail2Ban.  CSF is most common on cPanel servers, since it has a WHM plugin.
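For illustration, a minimal Fail2Ban jail for SSH might look like the following. This is a sketch, not a drop-in config: defaults and log paths vary by distribution, and the whitelisted address is a placeholder.

```ini
# /etc/fail2ban/jail.local -- a minimal sshd jail (sketch)
[sshd]
enabled  = true
port     = ssh
# ban after 5 failed logins within 10 minutes, for 1 hour
maxretry = 5
findtime = 600
bantime  = 3600
# whitelist your own IP so you can't lock yourself out (placeholder address)
ignoreip = 203.0.113.10
```

After editing, reload with fail2ban-client reload and check current bans with fail2ban-client status sshd.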

Pros and Cons of Brute Force Protection

Pros:
  • Will help to cut down on failed logins from bots by automatically banning them, making it much less likely that a bot will have the opportunity to guess the login details for one of your SSH accounts.
  • Very easy to implement with no changes to the SSH configuration required.

Cons:
  • These brute force programs have no way to tell bots apart from you and your users.  If you fail to log in too many times by accident, you could lock yourself out. Make sure that you have a reliable means to get onto the server if this happens, such as whitelisting your own IP address, and having a KVM or IPMI console available as a last resort measure.

Changing The SSH Port Number

One of the most common techniques that I see is changing the SSH port number to something other than the default port, 22/tcp.  

This change is relatively simple to make. For example, if you wanted to change your SSH port from 22 to 2222, you would simply update the Port line of your sshd_config file (usually /etc/ssh/sshd_config) like so:

Port 2222

By the way, port 2222 is a pretty common “alternate” port, so some of the brute force bots may still try it.  It would be better to choose something more random, like 2452. It doesn’t even have to contain a 2; your SSH port could be 6543 if you wanted it to be.  Any port number from 1 to 65535 that is not used by another program on the server is fair game (ports below 1024 require root privileges to bind, so higher ports are the usual choice).
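As a quick sketch of that advice, here is a small bash snippet that proposes a random unprivileged port and rejects a few well-known alternates (the avoid-list is just an example; always confirm the chosen port is actually unused, e.g. with ss -ltn):

```shell
# Propose a random unprivileged port, skipping common "alternate" SSH ports
# that bots also scan. In bash, $RANDOM is 0-32767, so this yields a port
# in the 1025-33791 range -- still plenty of choices.
avoid=" 2222 2022 22222 "
port=$(( (RANDOM % 64511) + 1025 ))
case "$avoid" in
  *" $port "*) echo "port $port is a common alternate; pick again" ;;
  *)           echo "candidate SSH port: $port" ;;
esac
```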

Pros and Cons of Changing The SSH Port Number

Pros:
  • This technique is usually pretty effective at cutting down automated bot attacks.  Most of these are unintelligent scripts and will only be looking for servers running on port 22.

Cons:
  • This technique amounts to “security by obscurity”.  A bot that is trying alternate ports, or any human equipped with a port scanning tool like nmap will have no problem finding your server’s new port in just a few minutes.
  • This technique can make the SSH server a bit more inconvenient to access, as you will now need to specify the port number when connecting instead of just the IP.

Disabling Root Login via SSH

Another common technique is to disable root logins via SSH altogether, or to allow them only with an authorized SSH key.  You can still gain root access via SSH by granting “sudo” privileges to one of your limited users, or by using the “su” command to switch to the root account with a password.

This can be configured by adjusting the “PermitRootLogin” setting in your sshd_config file.

To allow root login with an SSH key only (on newer OpenSSH versions the same setting is spelled “prohibit-password”), you would change the line to:

PermitRootLogin without-password

To completely disallow root login via SSH, you would change the line to:

PermitRootLogin no

Pros and Cons of Disabling Root Login via SSH

Pros:
  • This technique is somewhat helpful, since the username “root” is common to most Linux servers (like “Administrator” on Windows servers), so it is easy to guess.  Disabling this account from logging in means that an attacker must also guess a username correctly to be able to gain access.
  • If you are not using sudo, this technique puts root access behind a second password, requiring an attacker to know or guess two passwords correctly before having full access to the server.  (Sudo can diminish this benefit somewhat, as it is usually configured to allow root access with the same password the user logged in with.)

Cons:
  • This method may increase your risk of getting locked out of the server in the event that something goes wrong with your sudo configuration.  With this method it is still a good idea to have an alternate way to access the server if you become locked out of root, such as a remote console.

Disabling Password Authentication in Favor of Key Authentication

The first thing that everyone tells you about passwords is to make them long, difficult to guess, and not based on dictionary words.  An SSH key can replace password authentication with authentication by a key file.

SSH keys are very secure compared to a password, as they contain a large amount of random data.  If you have ever seen an SSL certificate or key file, an SSH key looks similar: a very large string of seemingly random characters.

Instead of typing a password to login to the SSH server, you will authenticate using this key file, in much the same way that SSL certificates on websites work.
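If you’d like to try this, generating a key pair is a one-liner with ssh-keygen; a typical sequence looks like the following (the key path and server address below are placeholders):

```shell
# Generate an Ed25519 key pair (passphrase omitted here for brevity;
# protecting the key with one is a good idea in practice)
ssh-keygen -t ed25519 -f /tmp/demo_ed25519 -N "" -C "demo key" -q

# The public half is what goes into ~/.ssh/authorized_keys on the server
cat /tmp/demo_ed25519.pub

# Install it on the server (placeholder address), then log in with the key:
#   ssh-copy-id -i /tmp/demo_ed25519.pub user@your-server.example.com
#   ssh -i /tmp/demo_ed25519 user@your-server.example.com
```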

If you would like to disable password authentication, you can do so by modifying the “PasswordAuthentication” setting in the sshd_config file, like so:

PasswordAuthentication no

Pros and Cons of Disabling Password Authentication in Favor of Key Authentication

Pros:
  • This method strongly decreases the likelihood that a brute force attempt against your SSH server will be successful.
    • Most brute force bots only try passwords to begin with; with password authentication disabled they are using entirely the wrong authentication method, so those bots will never succeed.
    • Even in a targeted attack, SSH keys are so much longer than passwords that guessing one correctly is orders of magnitude harder, simply because there is so much more entropy and so many more potential combinations.

Cons:
  • This technique can make it less convenient to access the server.  If you don’t have the key file handy, you won’t be able to SSH in.
  • Due to the above, you are also increasing risk of getting locked out of SSH, for example if you lose the key file.  So, it’s a good idea to have an alternative way to access the server if you need to let yourself back in, like a remote console.

In the event that someone gets hold of your key file, just like a password, they will be able to log in as you.  But unlike passwords, keys are easy to revoke: remove the compromised public key from authorized_keys, create a new pair, and the new key will operate the same way.

Another interesting quirk of the SSH key method is that you can authorize multiple SSH keys on a single account, whereas an account can typically only have one password.

It’s worth noting that you can use SSH keys to access accounts even if password authentication is still turned on.  By default, an authorized key is accepted as an authentication method alongside the password.

Allow Whitelisted IPs Only

A very effective security technique is only allowing whitelisted IP addresses to connect to the SSH server.  This can be accomplished through firewall rules, only opening the SSH port to authorized IP addresses.

This can be impractical for home users or shared web hosting providers, since it can be difficult to know which IP addresses will need access, and home IP addresses tend to be dynamic, so your IP address might change.  But, for situations where you are using a VPN or mostly accessing from a static IP address, it can be a low maintenance and extremely secure solution.
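On the firewall side, the rules are short. The sketch below prints the iptables commands rather than applying them (applying requires root), and the whitelisted addresses are placeholders:

```shell
# Emit iptables rules that open 22/tcp only to a whitelist and drop the rest.
# Review the output, then run the commands as root to apply.
ALLOWED="203.0.113.10 203.0.113.11"
for ip in $ALLOWED; do
  echo "iptables -A INPUT -p tcp --dport 22 -s $ip -j ACCEPT"
done
echo "iptables -A INPUT -p tcp --dport 22 -j DROP"
```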

Pros and Cons of Allowing Whitelisted IPs Only

Pros:
  • This method provides very strong security, since attackers would need to have access to one of your whitelisted IPs already in order to try to SSH in.
  • Arguably, this method can supersede the need for other security methods like brute force protection or disabling password authentication, since the threat of brute force attacks is now basically nullified.

Cons:
  • This method increases your chances of getting locked out of the server, especially if you are in a location where your IP address may change, like a residential Internet connection.
  • The convenience of access is also reduced, since you will be unable to access the server from locations that you haven’t whitelisted ahead of time.
  • There is some effort that goes into this, since you now have to maintain your IP address whitelist by adding and removing IPs as the needs change.

On my own personal servers, this is usually the technique I use.  I keep the convenience of authenticating with a password on the normal SSH port while still having strong security.  I also change my servers frequently, creating new ones when needed, and implementing this whitelist is the fastest way to secure a new server without touching other configuration: I can simply copy my whitelist from another server.

A Hybrid Approach: Allow passwords from a list of IPs, but allow keys from all IPs.

If you want to get fancy, there are a number of “hybrid” approaches that you can implement that combine one or more of these security techniques.

I ran into a situation once with one of our customers at GigeNET where they wanted to provide staff with password access, so that they could leave a password on file with us, but they wanted to only use key authentication themselves and not have password authentication open to the Internet.

This was actually very simple to implement, and it provides most of the security of disabling password authentication, while still allowing the convenience of password authentication in most cases.

To do this, you would want to add the following lines to your sshd_config:

# Global Setting to disable password authentication

PasswordAuthentication no


# Override the Global Settings for IP Whitelist

# Place this section at the -end- of your sshd_config file.

Match address <whitelisted-IP>/32

PasswordAuthentication yes

In the above, <whitelisted-IP> is the IP address you want to whitelist.  You can repeat that section of the configuration to whitelist multiple IPs, and you can change the /32 to another IPv4 CIDR such as /28 or /27 in order to whitelist a range of IPs.

Remember that the Match address block should be placed at the very end of your sshd_config file.

Pros and Cons of a Hybrid Approach

Pros:
  • This technique can provide the security of key authentication by preventing passwords from working for most of the Internet, while allowing the convenience of password authentication from your frequent access locations.  In other words, it reduces some of the drawbacks while keeping most of the security.
  • If your IP address changes and you are no longer whitelisted, you can still SSH in with the key file so long as you have it saved locally.

Cons:
  • Like the IP whitelist firewall method, this takes some maintenance: you have to update your SSH configuration if your IP address changes or you need to whitelist other locations.  Unlike that method, though, updating the whitelist here is less critical, since you can still get in with the key even when you’re not whitelisted.

Ultimately, you will have to choose what’s best for your use case.  

Hopefully this list of techniques and examples provides some food for thought that you can use when you are securing your servers: what the risks are and what techniques exist to mitigate them.

Weigh how important you think the security of the server is against the practicality of implementing the various solutions for the risks you’re concerned about, then choose one or more techniques to move forward.

At the end of the day, I always remind everyone that security is relative.  You will never have anything that is fully impenetrable, and the main thing is to keep yourself at least one step ahead of everyone else.  Even if you implement just one of these security practices, you are more secure as a result than a large number of Linux servers out there that are running with the default settings and SSH wide open to anyone that wants to try to login.




Why is a firewall so important?

Security on the open Internet becomes more important with each day. With the growth of the Internet and Internet literacy, the benefits of a dedicated or virtual server can now be felt around the world.

For many, personal data and web service accessibility have become an integral part of daily life. That accessibility means the service is public-facing, which leaves it susceptible to undesirable and seemingly random connections.

Often conducted using bots and spoofed IP addresses, it’s not uncommon on the open Internet to experience login attempts, port scans, and other intrusive activity.

There are basic security and firewall practices that can help prevent these activities from turning into a more alarming issue.

Without a firewall, your open ports look like this:

First off, to help grasp the motive behind these connections, a newly installed server was used to log incoming connections over two days. With no firewall blocking connections to the server, the log data can be analyzed to pinpoint areas of concentration.

Technical Information

OS: CentOS 7 + cPanel

(cPHulk disabled)

– iptables was used to log connections, with the following rsyslog rule writing them to /var/log/iptables.log –


:msg,contains,"[netfilter] " /var/log/iptables.log

The following iptables rule was used to log NEW-state inbound packets on eth0:

iptables -A INPUT -i eth0 -m state --state NEW -j LOG --log-prefix='[netfilter] '

Example Log entry 

Jun 15 08:02:27 gigenet kernel: [netfilter] IN=eth0 OUT= MAC=d6:f4:8e:aa:a7:94:00:25:90:0a:ad:1c:08:00 SRC=<remote IP> DST=<server IP> LEN=40 TOS=0x00 PREC=0x00 TTL=244 ID=24288 PROTO=TCP SPT=54102 DPT=1433 WINDOW=1024 RES=0x00 SYN URGP=0

(IP addresses have been removed)

SRC – Source IP address

DST – Destination IP address

SPT – Source Port

DPT – Destination Port

PROTO – Internet Protocol
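To show how the log is parsed, here is the same field-walking awk technique the analysis commands below use, applied to a single sample line (addresses replaced with documentation IPs):

```shell
# One log line, trimmed to the interesting fields
line='Jun 15 08:02:27 gigenet kernel: [netfilter] IN=eth0 OUT= SRC=203.0.113.5 DST=198.51.100.7 PROTO=TCP SPT=54102 DPT=1433 SYN'

# Walk every whitespace-separated field and print the destination port field
echo "$line" | awk '{for (i=1;i<=NF;i++) if ($i ~ /^DPT=/) print $i}'
# -> DPT=1433
```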

A script was created to analyze and format the log data:

[root@gigenet ~]# ./analyze-iptableslog.sh

Log File: iptables-1.log

Log Date

# awk 'NR==1{print "Start Date: " $1, $2, $3;}; END{print "End Date: " $1, $2, $3;}' iptables-1.log

Start Date: Jun 13 08:02:21

End Date: Jun 15 08:02:27

Total Number of New Connections Logged

# wc -l iptables-1.log
 16299 iptables-1.log

Number of Connections per Protocol

# awk '{for (i=1;i<=NF;i++) if ($i ~ /PROTO=/) print $i}' iptables-1.log | sort | uniq -c | sort -rn

Number of Unique SRC IP Addresses

# awk '{for (i=1;i<=NF;i++) if ($i ~ /SRC=/) print $i}' iptables-1.log | sort -n | uniq | wc -l
2886 IP Addresses

Number of Entries with a DPT (Total minus ICMP)

# awk '{for (i=1;i<=NF;i++) if ($i ~ /DPT=/) print $i}' iptables-1.log | wc -l
16266 DPT Connections

Number of Unique DPTs Hit

# awk '{for (i=1;i<=NF;i++) if ($i ~ /DPT=/) print $i}' iptables-1.log | sort -n | uniq | wc -l
1531 Unique DPT

Number of Connections per DPT, List Top 15

# awk '{for (i=1;i<=NF;i++) if ($i ~ /DPT=/) print $i}' iptables-1.log | sort -n | uniq -c | sort -rn | head -n 15
9595 DPT=22
1309 DPT=80
885 DPT=445
742 DPT=23
188 DPT=8000
157 DPT=1433
153 DPT=5060
111 DPT=8080
90 DPT=8545
90 DPT=3389
83 DPT=81
80 DPT=3306
73 DPT=443
67 DPT=2323
44 DPT=8888

How to use a firewall to mitigate exposed ports

The data shows the primary destination ports being contacted. As expected, the ports with the largest number of connections correspond to common Linux and Windows services.

Port 22 – Secure Shell (SSH)
Port 23 – Telnet
Port 80 – HTTP
Port 445 – SMB (Windows network file sharing)
Port 1433 – MSSQL
Port 3306 – MySQL
Port 3389 – RDP

Depending on the services being run, these ports may need to be available to remote services. The ports of note are SSH port 22, telnet port 23, and RDP port 3389.

Ideally, these connections should be restricted by the system firewall to specific IP addresses only. In addition, bots are typically programmed to target default ports. Thus, changing the default SSH and RDP port will help prevent intrusion.

Changing the SSH port (Linux, FreeBSD)

In the SSH configuration file (usually /etc/ssh/sshd_config), modify the Port line to an uncommon, unused port (1-65535):

  • Port 22

Restart SSHD:

  • CentOS: service sshd restart
  • Debian: service ssh restart
  • FreeBSD: /etc/rc.d/sshd restart

Changing the RDP port (Windows)

  • Windows RDP should never be open to the public. If it must be reachable, the RDP port should be changed to minimize anonymous connections.

Open Registry Editor

  • Locate the following registry subkey:
    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber

Modify the value (select Decimal) to an unused port, click OK, and reboot.

Basic Firewall Setup

There are a number of firewall services that can serve as a primary mode of security. Provided are a few basic rule commands to help get started.

1. Basic iptables Rules

iptables is the most common and familiar Linux firewall. The default on CentOS 6 and earlier, it is often used as the baseline Linux firewall.

Basic rules

  • Allow Established Connections: iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
  • Set Default INPUT Policy to ACCEPT: iptables -P INPUT ACCEPT
  • Allow IP: iptables -A INPUT -s <IP> -j ACCEPT
  • Allow IP/Port: iptables -A INPUT -s <IP> -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
  • Allow lo (localhost) interface: iptables -A INPUT -i lo -j ACCEPT
  • Allow Ping: iptables -A INPUT -p icmp -j ACCEPT
  • Allow Port: iptables -A INPUT -p tcp --dport 22 -j ACCEPT
  • Insert Allow IP (pos. 5): iptables -I INPUT 5 -s <IP> -j ACCEPT
  • Insert Allow IP/multiport: iptables -I INPUT 5 -s <IP> -p tcp -m state --state NEW -m multiport --dports port#1,port#2 -j ACCEPT

(Alternatively, substitute “ACCEPT” with “DROP” to deny)
Remove an Existing Rule using the -D option:

  •  Remove Allow IP: iptables -D INPUT -s <IP> -j ACCEPT

Reject the rest: rejects (blocks) all connections not matched by previous rules.

  • iptables -A INPUT -j REJECT --reject-with icmp-host-prohibited

Flush rules

  • iptables -F
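Putting those individual rules together, a minimal INPUT chain could be ordered as follows. The sketch prints the commands with cat rather than executing them (applying requires root), and the whitelisted IP is a placeholder:

```shell
# Order matters: stateful and loopback accepts first, specific allows next,
# and the catch-all reject last.
cat <<'EOF'
iptables -F INPUT
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -j REJECT --reject-with icmp-host-prohibited
EOF
```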

2. Basic firewalld Commands

firewalld is the default firewall in CentOS 7. It essentially provides more human-readable commands for committing iptables rules.

Operational, print current state information

  • State: firewall-cmd --state
  • Start/Stop: systemctl start/stop firewalld.service
  • Start On Boot: systemctl enable firewalld

Zone Information, print zone parameters

  • Default Zone: firewall-cmd --get-default-zone
  • Default Zone Info: firewall-cmd --list-all
  • List Zones: firewall-cmd --get-zones
  • Zone Info: firewall-cmd --zone=public --list-all

Modify Zone

  • Create New Zone: firewall-cmd --permanent --new-zone=new_zone
  • Change Default Zone: firewall-cmd --set-default-zone=public
  • Change Interface: firewall-cmd --zone=public --change-interface=eth0

Modify Rules, a subset of a zone’s settings

  • Allow Service: firewall-cmd --zone=public --add-service=http
  • Allow Port: firewall-cmd --zone=public --add-port=22/tcp
  • List Services: firewall-cmd --get-services
  • List Services Allowed: firewall-cmd --zone=public --list-services
  • List Ports Allowed: firewall-cmd --list-ports
  • Allow IP/Port/Proto using rich-rule, explicit rules:
  • firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="<IP>" port protocol="tcp" port="22" accept'

(Use the --permanent option to create rules that persist across reboots)

3. Basic ufw rules

ufw (Uncomplicated Firewall), available since Ubuntu 8.04, ships as the default firewall on Ubuntu systems.


  • Enable/Disable: ufw enable/disable
  • Print Rules: ufw status verbose

Allow Rules

  • Allow Port: ufw allow 22
  • Allow IP: ufw allow from <IP>
  • Allow IP/Port/TCP: ufw allow from <IP> to any port 22 proto tcp
  • (Alternatively, substitute “allow” with “deny” for deny rules)

Delete Existing Rules

  • ufw delete allow from <IP>

4. Windows Firewall (Windows Server 2008 and newer)

Control Panel >> Windows Firewall >> Advanced Settings >> Inbound/Outbound >> New Rule

Bonus: cPanel tools – cPHulk

As a test case, WHM’s cPHulk Brute Force Protection was enabled with default settings. During the roughly 24 hours logged, iptables recorded significantly fewer new connections: about 3,200, compared to roughly 8,100 per day in the unprotected test.

[root@gigenet ~]# ./analyze-iptableslog.sh

Log File: iptables-cphulk.log

Log Date

# awk 'NR==1{print "Start Date: " $1, $2, $3;}; END{print "End Date: " $1, $2, $3;}' iptables-cphulk.log

Start Date: Jun 19 04:31:43

End Date: Jun 20 04:53:53

Total Number of New Connections Logged

# wc -l iptables-cphulk.log
3223 iptables-cphulk.log

Number of Connections per Protocol

# awk '{for (i=1;i<=NF;i++) if ($i ~ /PROTO=/) print $i}' iptables-cphulk.log | sort | uniq -c | sort -rn

Number of Unique SRC IP Addresses

# awk '{for (i=1;i<=NF;i++) if ($i ~ /SRC=/) print $i}' iptables-cphulk.log | sort -n | uniq | wc -l
1432 IP Addresses

Number of Entries with a DPT (Total minus ICMP)

# awk '{for (i=1;i<=NF;i++) if ($i ~ /DPT=/) print $i}' iptables-cphulk.log | wc -l
3187 DPT Connections

Number of Unique DPTs Hit

# awk '{for (i=1;i<=NF;i++) if ($i ~ /DPT=/) print $i}' iptables-cphulk.log | sort -n | uniq | wc -l
943 Unique DPTs

Number of Connections per DPT, List Top 15

# awk '{for (i=1;i<=NF;i++) if ($i ~ /DPT=/) print $i}' iptables-cphulk.log | sort -n | uniq -c | sort -rn | head -n 15
415 DPT=445
270 DPT=23
257 DPT=22
233 DPT=80
97 DPT=5060
72 DPT=1433
59 DPT=8545
53 DPT=8000
50 DPT=81
49 DPT=8080
46 DPT=443
41 DPT=3389
34 DPT=25
33 DPT=3306
27 DPT=2323

Privacy and security can be difficult to achieve, especially across an entire organization. Many factors are involved, and they can be difficult to manage from the top. While you may not want to, or may not be able to, manage every aspect of your organization’s members’ behavior, there are some things you can do. One of the most important and sensitive factors is how your organization’s members communicate about internal matters. Talking face to face is one of the more common ways, but it is not always possible: more people than ever work remotely, especially in the IT industry, so there is an obvious need for remote communication methods.

Instant messaging is probably one of the most popular ways to communicate. There are many platforms, like Skype, Slack, and WhatsApp, that simplify this. While some of them boast client-to-server or even end-to-end encryption, you are still transferring trust to a third party and their code. If this worries you, it may be best to run your own instant messaging server. Organizations and individuals concerned about this have commonly set up XMPP servers (XMPP was formerly known as Jabber). While that arguably isn’t a bad solution, XMPP can be tricky to work with compared to other, more modern solutions.

One of the most notable competitors to the XMPP protocol is Matrix, whose reference server implementation is Synapse. Matrix, like XMPP, can be decentralized (federated), but you can tweak it to your organization’s needs. For example, you can disable public registration, use LDAP for authentication, and disable federation. Just like XMPP, there are many implementations of the Matrix protocol.

Matrix Tutorial

In this tutorial we will go over how to set up your own Matrix Synapse server on GigeNET Cloud. This will show you the basics of running your own Matrix server. If you don’t have a GigeNET Cloud account, head over here and check out our plans. Synapse is the server created by the Matrix developers and can be found here.

First, we’ll need to create a GigeNET Cloud machine. Once you’re logged in, it’ll look like this.

Click on “Create Cloud Lite”.

Set a proper hostname for your new machine, and select the desired location, zone, and OS. For this tutorial we’ll be using Debian 9 (Stretch). You’ll then need to pick a plan that fits your needs; Matrix Synapse recommends at least 1GB of memory, so we’ll go with GCL Core 2. After you’ve set everything to what you want, press “Create VM”.

Now your cloud VM has begun spinning up on one of our hypervisors. It may take a bit, but you can ping the VM’s public IP until you see that it’s up. This page will show all of the details you’ll need to log in.

Once the VM is up, you can SSH in with your favorite SSH client. I use Linux, so I’ll be using openssh-client. We’ll want to perform a full upgrade of all packages on Debian, so run:

root@matrix-test:~# apt update && apt dist-upgrade

Once that has finished, reboot your VM.

root@matrix-test:~# reboot

Once you’re back in after the reboot, let’s take a look at the available Matrix servers. There are quite a few, but as mentioned, we’ll be using Synapse. Click Synapse.

If you’re interested in learning more about Matrix Synapse, I highly recommend that you check out their GitHub repository.

Before you grab their repo key, you’ll need to install apt-transport-https. This is required to use HTTPS with the apt package manager.

root@matrix-test:~# apt install apt-transport-https

When that finishes you can then grab their repo key, import it and add the repository into your sources file with the following commands.

root@matrix-test:~# wget -qO - https://matrix.org/packages/debian/repo-key.asc | apt-key add -

root@matrix-test:~# echo deb https://matrix.org/packages/debian/ stretch main | tee -a /etc/apt/sources.list.d/matrix-synapse.list

root@matrix-test:~# apt update

If everything checks out you’re now ready to install Matrix Synapse! We’ll also install a few extras.

  • Certbot (to get a free Let’s Encrypt certificate) 
  • Haveged (to speed up entropy collection)  

root@matrix-test:~# apt install matrix-synapse certbot haveged

You’ll get an ncurses interface during the installation asking for a few configuration parameters. Make sure to set your FQDN here.

It’s up to you whether you want to send anonymous statistics. I chose not to.

If you have your own certificate, you can simply copy over the certificate and private key in the same way. Now let’s get our Let’s Encrypt certificate!

A few more things to note.

  • You’ll need to ensure your domain or subdomain points to your new server via a DNS A record or AAAA record if you want to use IPv6.
  • You’ll need to enter an email address to receive certificate expiry notices.
  • You’ll need to agree to the Let’s Encrypt terms and conditions.

root@matrix-test:~# certbot certonly --standalone -d matrix-test.gigenet.com


Once we have our certificate and private key, we need to copy them over to /etc/matrix-synapse like so (change the directory name to match your FQDN).

cp /etc/letsencrypt/live/matrix-test.gigenet.com/fullchain.pem /etc/matrix-synapse/fullchain.pem

cp /etc/letsencrypt/live/matrix-test.gigenet.com/privkey.pem /etc/matrix-synapse/privkey.pem

Next, we’ll need to generate a registration secret. Anyone who has this secret will be able to register an account, so keep it safe.

root@matrix-test:~# cat /dev/random | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1

Output should be a random string of 64 characters like: TDfdIXPBWDOqaVsR5erVJLKdqPqIAsrvfvEtgHfY8oZ06F5NMYnhdbHhVbneDiTF
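If you prefer, openssl can generate a secret of the same length (32 random bytes, hex-encoded, gives 64 characters):

```shell
# 64-character hex secret from 32 random bytes
openssl rand -hex 32
```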

Now we need to edit the config. You can use nano or your favorite text editor.

root@matrix-test:~# nano /etc/matrix-synapse/homeserver.yaml

Search for the parameter in nano with CTRL + W, entering registration_shared_secret.

Ensure that the line looks like this:

registration_shared_secret: "TDfdIXPBWDOqaVsR5erVJLKdqPqIAsrvfvEtgHfY8oZ06F5NMYnhdbHhVbneDiTF"

We’ll also need to enable TLS support for the web client and add the paths for our certificate and private key.

Make sure the web_client line looks like this:

web_client: True

Now we’ll add our certificate and private key to the config. The lines should look something like this.

tls_certificate_path: "/etc/matrix-synapse/fullchain.pem"

tls_private_key_path: "/etc/matrix-synapse/privkey.pem"

Save and exit your text editor after you’ve followed the steps above. We can now enable matrix-synapse to start on boot, and start the service!

systemctl enable matrix-synapse

systemctl start matrix-synapse

If everything checks out, the service should have started successfully. If not, you can check its status to see why it failed with:

systemctl status matrix-synapse

Now we’re ready to set up our first user. This command will let you register a user and make it the administrator. You can also use this command to register normal users. By default, Matrix Synapse is not configured to allow public registration.

register_new_matrix_user -c /etc/matrix-synapse/homeserver.yaml https://localhost:8448


We’ve got our first user; now we need to pick a Matrix chat client. You can see a list of clients here, but in this tutorial we’ll be using Riot on a Windows VM. It has very good support and is cross-platform. Chats can also be end-to-end encrypted with Riot! Go here to download it.

Once you have it installed for your platform of choice, launch it. You’ll be greeted with a window similar to the one below. Click “Login”.


You’ll then need to enter your server’s details along with the credentials you set for your administrator account.


After you’ve signed in you’ll be greeted with a similar interface. Let’s create our first room by pressing the + button on the bottom left of the window.

We’ll just name it “Admin Room” for this test.


Now we’ve got our own room that we can invite other users to!

Need to know how to do more with Riot? They have a great FAQ with a few video tutorials on how to perform some basic tasks.

While administering a Matrix server might involve a bit of a learning curve, it’s worth it if you value having control of your own data. If you want to dive deeper into setting up other Matrix Synapse features, I highly recommend that you head over to their GitHub page.

Sound like a bit too much? Let our team of experts manage your systems.


There is a lot of noise these days about this soon-to-be implemented EU regulation, the GDPR (General Data Protection Regulation), making the topic hard to miss — but how much do you understand about GDPR, and to what extent can it impact your U.S.-based business?

What is this GDPR thing, and why should you care?

Adopted by the European Union on April 27th, 2016, and scheduled to become enforceable on May 25th, 2018, the GDPR is a regulation designed to greatly strengthen an EU citizen’s control over their own personal data. In addition, the regulation is meant to unify the myriad of regulations dealing with data protection and data privacy across member states. Finally, its reach also extends to the use and storage of data by entities outside of the EU (Spoiler Alert! This is the part that affects us).

Enforcement of the provisions within the GDPR is done via severe penalties for non-compliance, with fines of up to €20 million or 4% of worldwide annual revenue (whichever is greater). Now, as a non-EU entity, you may think that your company won’t be subject to these fines, but that is incorrect. If your U.S. company collects or processes the personal data of EU citizens, EU regulators have the authority and jurisdiction, with the aid of international law, to levy fines against it for non-compliance.

In addition, your EU-based clients can be held accountable for providing personal information to a non-compliant third party (your company). This is a strong incentive for EU-based citizens and companies to insist on working only with GDPR-compliant third parties, which could cost your company all of its EU-based business.

As you will soon realize, the GDPR is a vast set of regulations, with a large scope and sharp teeth. I cannot possibly go into enough detail in a blog post to map out a roadmap towards compliance, and neither is that my goal. If that is what you are looking for in a blog post, well, maybe you shouldn’t be responsible for anyone’s personal data….

No, my intent here is to demonstrate the importance of the GDPR, hopefully convince you to take it seriously and start down the road to compliance, and finally to point you in the right direction to start your journey.

The expanding scope

The GDPR expands the definition of personal data in order to widen the scope of its protections, aiming to establish data protection as a right of all EU citizens.  

The following types of data are examples of what will be considered personal data under the GDPR:

  • Names and identification numbers
  • Location data
  • Online identifiers, such as IP addresses and cookie IDs
  • Health, genetic and biometric data
  • Racial or ethnic data
  • Political opinions
  • Sexual orientation

Does your company collect, store, use or process anything the GDPR considers personal data related to an EU citizen?  If you have any EU clients or customers, or even just market to anyone in the EU, it is unlikely you can avoid being subject to the GDPR.

The EU is seeking to make data privacy for individuals a fundamental right, broken down into several more-precise rights:

  • The right to be informed
      • A key transparency issue of the GDPR
      • Upon request, individuals must be informed about:
        • The purpose for processing their personal data
        • Retention periods for their personal data
        • All 3rd parties with which the data is to be shared
      • Privacy information must be provided at the time of collection
        • Data collected from a source other than the individual extends this requirement to within one month
      • Information must be provided in a clear and concise manner.
  • The right of access
      • Grants access to all personal data and supplementary information
      • Includes confirmation that their data is being processed
  • The right to rectification
      • Grants the right to correct inaccurate or incomplete information
  • The right to erasure
      • Also known as “the right to be forgotten”
      • Allows an individual to request the deletion of personal data when:
        • The data is no longer needed for the purpose for which it was originally collected
        • Consent is withdrawn
        • The data was unlawfully collected or processed
  • The right to restrict processing
      • This blocks processing of information, but still allows for its retention
  • The right to data portability
      • Allows an individual’s data to be moved, copied or transferred between IT environments in a safe and secure manner.
      • Aims to give consumers access to services that can find them better deals, help them understand their spending habits, etc.
  • The right to object
      • Allows an individual to opt-out of various uses of their personal data, including:
        • Direct marketing
        • Processing for the purpose of research or statistics
  • Rights in relation to automated decision making and profiling
    • Limits the use of automated decision making and profiling using collected data


Sprechen Sie GDPR?

Before diving deeper, it is important to understand some key terms used by the regulation.

The GDPR applies to what it calls “controllers” and “processors.”  These terms are further defined as Data Controllers (DCs) and Data Processors (DPs).  The GDPR applies differently in some areas to entities based upon their classification as either a DC or as a DP.

  • A Controller is an entity which determines the purpose and means of processing personal data.
  • A Processor is an entity which processes personal data on behalf of a controller.

What does it mean to process data?  In this scope, it means:

  • Obtaining, recording or holding data
  • Carrying out any operation on the data, including:
    • Organization, adaptation or alteration of the data
    • Retrieval, consultation or use of the data
    • Transfer of data to other parties
    • Sorting, combining or removal of the data

The Data Protection Officer, or DPO, is a role set up by the GDPR to:

  • Inform and advise the organization about the steps needed to be in compliance
  • Monitor the organization’s compliance with the regulations
  • Be the primary point of contact for supervisory authorities
  • Be an independent, adequately resourced expert in data protection
  • Report to the highest level of management, while not being part of the management team

The GDPR requires any organization that is a public authority, or that carries out certain types of processing activities (such as processing data relating to criminal convictions and offences), to appoint a DPO.

Even if the appointment of a DPO for your organization is not deemed necessary by the GDPR, you may still elect to appoint one anyway.  The DPO plays a key role in achieving and monitoring compliance, as well as following through on accountability obligations.

The Nitty Gritty

In addition to expanding the definition of personal data and providing individuals broad rights governing the use of that data, the GDPR provides a number of requirements for organizations, mandating that data shall be:

“a) processed lawfully, fairly and in a transparent manner in relation to individuals;

b) collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes shall not be considered to be incompatible with the initial purposes;

c) adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed;

d) accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay;

e) kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed; personal data may be stored for longer periods insofar as the personal data will be processed solely for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes subject to implementation of the appropriate technical and organisational measures required by the GDPR in order to safeguard the rights and freedoms of individuals; and

f) processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures.” 

— GDPR, Article 5 


Additionally, Article 5 (2) states:

“the controller shall be responsible for, and be able to demonstrate, compliance with the principles.”

This last piece, known as the accountability principle, states that it is your responsibility to demonstrate compliance.  To do so, you must:

  • Demonstrate relevant policies.
    • Staff Training, Internal Audits, etc.
  • Maintain documentation on processing activities
  • Implement policies that support the protection of data
    • Data minimisation
      • A policy of analyzing what data is actually needed for processing, removing any excess, and collecting only what is needed and no more
    • Pseudonymisation
      • A process to make data neither anonymous, nor directly identifying
      • Achieved by separating data from direct identifiers, making linkage to an identity impossible without additional data that is stored separately.
    • Transparency
      • Demonstration that personal data is processed in a transparent manner in relation to the data subject
      • This obligation begins at data collection, and applies throughout the life cycle of processing that data
    • Allow for the evolution of security features going forward.
      • Security cannot be static when faced with a constantly evolving threat environment.
      • Policies must be flexible enough to protect from not just yesterday’s and today’s threats, but tomorrow’s.
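The pseudonymisation step above can be sketched in a few lines of shell. This is an illustrative toy, not a production technique, and the file paths and customer data are made up: the direct identifier (an email address) is swapped for an opaque token, and the token-to-identity table is written to a separate file that would be stored and guarded independently.

```shell
# Hypothetical input: direct identifiers mixed with transaction data.
cat > /tmp/customers.csv <<'EOF'
alice@example.com,42.50
bob@example.com,13.99
EOF

n=0
: > /tmp/lookup.csv         # token -> identity (store this separately!)
: > /tmp/pseudonymised.csv  # safe for day-to-day processing
while IFS=, read -r email amount; do
  n=$((n + 1))
  token="subject-$n"
  echo "$token,$email"  >> /tmp/lookup.csv
  echo "$token,$amount" >> /tmp/pseudonymised.csv
done < /tmp/customers.csv

cat /tmp/pseudonymised.csv  # no direct identifiers left in this file
```

Without the separately stored lookup table, the pseudonymised file cannot be linked back to an individual, which is exactly the property the regulation is after.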

The best laid plans…

Even with adherence to these new policies and tight security practices, there is no guarantee the data you are responsible for keeping safe will remain absolutely secure.  Data breaches are more or less inevitable. Aware of this, the GDPR has provisions regarding the reporting of data breaches when (not if) they happen.

Not sure how to navigate these waters with your current infrastructure? We can help.

A data breach is a broader term than one may think.  The deliberate or accidental release of data to an outside party (say, a hacker) is certainly a breach, but much more falls under the definition.

All of the following examples constitute a data breach:

  • Access by an unauthorized third party
  • Loss or theft of storage devices containing personal data
  • Sending personal data to an incorrect recipient, whether intended or not
  • Alteration of personal data without prior authorization
  • Loss of availability, or corruption of personal data

Data breaches must be reported to the relevant supervisory authority within 72 hours of first detection. Should the breach be likely to result in risk to an individual, that individual must also be notified without delay. All breaches, reported or not, must be documented.

Bit off more than you can chew?

This may seem like a lot to take in, and it should be.  The GDPR was designed to expand the privacy rights of all EU citizens, as well as replace the existing regulations of all member states with one, uniform set of regulations.

The good news is that, as a U.S. company, you don’t have to take every step towards compliance alone.

The U.S. government, working with the EU, developed a framework to provide adequate protections for the transfer of EU personal data to the United States. This framework, called Privacy Shield, was adopted by the EU in 2016 and has passed its first annual review.

In order to participate in the Privacy Shield program, U.S. companies must:

  • Self-certify compliance with the program
  • Commit to processing data only in accordance with the guidelines of Privacy Shield
  • Be subject to the enforcement authority of either:
    • The U.S. Federal Trade Commission
    • The U.S. Department of Transportation

To learn more about Privacy Shield, visit www.privacyshield.gov

How I learned to stop worrying and love the GDPR

Getting compliant with the GDPR may seem like a huge P.I.T.A., but there are real benefits to following this path that extend beyond retaining EU contracts and avoiding hefty fines.  Data privacy is a huge issue worldwide, and being compliant with one of the strictest sets of regulations will help reassure clients and customers from all corners of the globe. Even if you don’t have any interaction with EU citizens or organizations, becoming GDPR compliant may still be a great idea.

Compliance forces you to evaluate your systems and processes, ensuring that they are secure and robust enough to survive in the ever-changing landscape in which data privacy resides.  This transforms compliance from a tedious duty to a strong selling point.

Click Here to find out how GigeNET can help you!

Securing Memcached Services
Over the past few weeks, a new DDoS attack vector abusing memcached has become prevalent. Memcached is an object caching system originally built in 2003 to speed up LiveJournal’s dynamic websites. It does this by caching data in RAM instead of calling data from a hard drive, thus reducing costly disk operations.

Deeper analysis of the security issues:

Memcached was designed to give the fastest possible cache access, so it was never meant to be left open on a public network interface. The recent attacks utilizing Memcached take advantage of the UDP protocol and an attack method known as UDP reflection.

An attacker is able to send a UDP request to a server with a spoofed source address, thus causing the server to reply to the spoofed source address instead of the original sender. On top of sending requests towards a server with the intent of “reflecting” them towards another server, attackers are able to easily add to the cache. Because memcached was designed to sit locally on a server, it was never created with any form of authentication. Attackers can connect and add to the cache in order to amplify the magnitude of the attack.

The initial release of Memcached was in May of 2003. Since then, its uses have expanded greatly, but the original technology has remained the same, and its security features have not kept pace.
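To make the amplification asymmetry concrete, here is a sketch of the kind of probe payload involved. The 8-byte header is the framing memcached’s UDP mode expects (request ID, sequence number, datagram count, reserved); the commented-out nc line is hypothetical and should only ever be pointed at hosts you are authorized to test.

```shell
# Build the classic 15-byte "stats" probe: an 8-byte UDP frame header
# (request id 0, sequence 0, total datagrams 1, reserved 0) followed by
# the ASCII "stats" command.
printf '\x00\x00\x00\x00\x00\x01\x00\x00stats\r\n' > /tmp/mc-probe.bin

# 15 bytes go out; an exposed server may answer with kilobytes of stats,
# which is the whole point of the amplification.
wc -c < /tmp/mc-probe.bin

# To probe a host you control (hypothetical; never aim at others):
# nc -u -w1 127.0.0.1 11211 < /tmp/mc-probe.bin
```

A tiny spoofed request that triggers a response orders of magnitude larger is what makes this vector so attractive to attackers.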

Below is a sample packet we captured from a server participating in one of these reflection attacks. This is the layer 3 information of the packet; the source IP is spoofed to point to the victim’s server:


This is the layer 4 information; Memcached listens on port 11211:


In addition to using a server as a reflector, attackers can also extract highly sensitive data from the cache because of its lack of authentication. All of the data within the cache has a TTL (Time To Live) value before it is removed, but in the meantime it isn’t difficult to pull information from.

Below is an example of how easy it is for an attacker to alter the cache on an unsecured server. We simply connected on port 11211 over telnet and were immediately able to make changes to the cache:
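For illustration, a session against a hypothetical unsecured server looks something like this. Memcached’s plain-text protocol never asks for credentials: the `set` command names a key, flags, an expiry in seconds, and a byte count, and the value can be read straight back.

```text
$ telnet unsecured-host 11211
set pwned 0 900 5
hello
STORED
get pwned
VALUE pwned 0 900
hello
END
```

Anyone who can reach port 11211 can both poison the cache and read whatever your application has stored in it.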


Solution Overview

In order to decide how to best secure Memcached on your server, you must first determine how your services use it. Memcached was originally designed to run locally on the same machine as the web server.

A: If you don’t require remote access, it is best to bind it to the loopback interface so it cannot be reached over the network at all.

B: If you require remote access, it is recommended to whitelist the source IPs that need to access it. This way you control exactly which machines can read from and make changes to the cache.

Solution Instructions:

In the case that remote access is not required, it is advised to ensure Memcached listens only on localhost at startup.

Ubuntu based servers:

sudo nano /etc/memcached.conf

Ensure the following two lines are present in your configuration:

-l 127.0.0.1

This will bind Memcached to your local loopback interface, preventing access from anything remote.

-U 0

This will disable UDP for Memcached thus preventing it from being used as a reflector.

Then restart the service to apply the settings:

sudo service memcached restart

CentOS based servers:

nano /etc/sysconfig/memcached

Add the following to the OPTIONS line:

OPTIONS="-l 127.0.0.1 -U 0"

Restart the service to apply the settings:

service memcached restart

If Memcached needs to be accessed remotely, whitelisting the IPs that are allowed to connect will best secure your server.

Using iptables:

sudo iptables -A INPUT -i lo -j ACCEPT

sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

sudo iptables -A INPUT -p tcp -s IP_OF_REMOTE_SERVER/32 --dport 11211 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT

sudo iptables -P INPUT DROP

Defining a /32 in the above commands specifies a single server that will be allowed access. Note that the last command sets the default INPUT policy to DROP, so make sure you have also allowed your SSH port (and any other services you need) before applying it, or you may lock yourself out. If multiple servers in a range require access, the CIDR notation of the range can be input instead:

sudo iptables -A INPUT -p tcp -s IP_RANGE/XX --dport 11211 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
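If CIDR notation is unfamiliar, the matching rule the firewall applies can be sketched in shell: a /XX prefix keeps the top XX bits of the address and ignores the rest, so /32 pins a single host while /24 covers 256 addresses. These are illustrative helper functions, not part of iptables, and the example addresses come from the 203.0.113.0/24 documentation range.

```shell
ip_to_int() {
  # Split a dotted-quad address into octets and pack them into a 32-bit int.
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

in_cidr() {
  # usage: in_cidr ADDRESS NETWORK PREFIXLEN -> success if ADDRESS is inside
  local ip net mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

in_cidr 203.0.113.7  203.0.113.7 32 && echo "/32 matches exactly one host"
in_cidr 203.0.113.99 203.0.113.0 24 && echo "/24 matches every 203.0.113.x source"
in_cidr 203.0.114.1  203.0.113.0 24 || echo "sources outside the range stay blocked"
```

This is the same prefix-match logic iptables performs on each packet's source address before deciding whether the rule applies.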

Using CSF:

nano /etc/csf/csf.allow

Add the following line to whitelist an IP, substituting the address you want to allow:

tcp|in|d=11211|s=x.x.x.x

You can also specify a range using CIDR:

tcp|in|d=11211|s=x.x.x.x/24
tcp = the protocol that will be used to access Memcached
in = the direction of the traffic (inbound)
d = the destination port number
s = the source IP address or IP range

Save the file and then restart the service:

csf -r

After whitelisting the IPs allowed to access Memcached, we must rebind the service to the interface we want it to communicate on.

On Ubuntu based servers:

sudo nano /etc/memcached.conf

Change the IP on this line to represent the IP of the interface on your server:

-l x.x.x.x

Then restart the service to apply the settings:

sudo service memcached restart

On CentOS based servers:

nano /etc/sysconfig/memcached

Change the IP following the -l flag to that of your server’s interface:

OPTIONS="-l x.x.x.x -U 0"

Restart the service to apply the settings:

service memcached restart


The best way to secure your server from these vulnerabilities is to prevent Memcached from talking on anything other than localhost. If the service must be accessed remotely, be sure to adequately secure it using your server’s firewall. Securing your server will not only prevent it from being used in malicious DDoS attacks, but also ensure that confidential data isn’t compromised. Taking the above actions will help the community as a whole and prevent unwanted bandwidth overages.

Linux Encryption Backup Tools
If the data you store on your server or other service is important to you, you’d likely prefer it not end up in the hands of others. If so, you should use the power of cryptography. There are many options to choose from whether you’re running Windows, Linux or BSD, but we’ll be focusing on my favorite Linux-based tools for now. You can choose between encrypting parts of your filesystem and encrypting an entire block device. Either way, it’s relatively easy to do if you’re even a little familiar with Linux and can follow tutorials. It doesn’t require you to be a mathematician or cryptography expert.

As a sysadmin, here are my top Linux encryption and backup tools:


EncFS

One of my personal favorites for filesystem encryption is EncFS. It allows you to easily set up encrypted directories, which is incredibly useful for storing off-site backups on systems that you don’t necessarily trust.

For example, you could have plain-text website backups dumped to /backups and then setup EncFS to encrypt that data to /encrypted-backups. You’d then be able to use tools like rsync or rclone to move the data somewhere else, even onto a system that you don’t trust.
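That workflow maps onto EncFS’s reverse mode, which presents an encrypted view of an existing plain-text directory. A hypothetical session might look like the following; the paths and host are examples, and EncFS will prompt for a password on first use:

```text
# Present an encrypted view of /backups at /encrypted-backups
$ encfs --reverse /backups /encrypted-backups

# Ship only the encrypted view off-site; the remote end never sees plain text
$ rsync -a /encrypted-backups/ user@offsite-host:/srv/backups/
```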

Keep in mind, if you don’t have a complex, strong password, your encrypted data is likely unsafe. In the event that you lose data on your local system, you could rsync/rclone the data from /encrypted-backups and mount it again via FUSE, as long as you have the original password you encrypted the data with.
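The same principle, encrypt with a passphrase before the data leaves the box and recover it later with that passphrase, can be illustrated with plain openssl. This is a stand-in sketch rather than EncFS itself; it assumes OpenSSL 1.1.1+ for the -pbkdf2 flag and uses throwaway paths and a placeholder passphrase.

```shell
# Pretend this is a database dump we want to ship off-site.
echo "pretend this is a database dump" > /tmp/backup.sql

# Encrypt with a passphrase-derived key before it leaves the server.
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -pass pass:use-a-much-stronger-passphrase \
  -in /tmp/backup.sql -out /tmp/backup.sql.enc

# Later, on a machine you trust, the same passphrase recovers the data.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -pass pass:use-a-much-stronger-passphrase \
  -in /tmp/backup.sql.enc -out /tmp/backup.restored.sql

cmp /tmp/backup.sql /tmp/backup.restored.sql && echo "round trip OK"
```

Lose the passphrase and the data is gone for good, which is exactly why the strength and safekeeping of that passphrase matter so much.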


Duplicity

If you’re familiar with GPG, Duplicity is a great tool for encrypted, compressed remote or local backups with many features. It’s meant to back up specified directories in increments to save space, but it can also perform full backups each time.

With Duplicity you’ll need to create a GPG key and protect it with a strong password. You can then use that key with Duplicity to encrypt and sign the backups. Just like with EncFS, you can use rsync, rclone or another tool to transfer the encrypted backups off-site. The best implementation of Duplicity that I’ve found is backupninja, which lets you create multiple backup actions with an easy-to-use configuration.
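A typical Duplicity run follows this shape: incremental by default, with an explicit full backup and a restore when needed. The GPG key ID, paths, and host below are hypothetical examples.

```text
# First (or forced) full backup, encrypted and signed with your GPG key
$ duplicity full --encrypt-key ABCD1234 /backups sftp://user@offsite-host//srv/backups

# Subsequent runs upload only what changed since the last backup
$ duplicity --encrypt-key ABCD1234 /backups sftp://user@offsite-host//srv/backups

# Restore the latest backup into /tmp/restore
$ duplicity restore sftp://user@offsite-host//srv/backups /tmp/restore
```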


dm-crypt + LUKS

Another option is to encrypt your entire block device with dm-crypt + LUKS. Using this tool, all of the data on the block device is encrypted, and even someone with local access cannot decipher it.

There are a few exceptions to this. For instance, if an attacker has your root password, or can read from memory via a cold boot attack while the system is powered on, then it would be possible to simply log in or grab the encryption keys from memory. What’s neat about dm-crypt + LUKS is that you can also set it up remotely on your server if you have access to IPMI and can boot a recovery image.

Once set up, you can have it prompt you via SSH for a password when the server boots instead of having to type it in locally. LUKS only protects you completely from unauthorized local access while your system is powered off; in that state, it’s unlikely your data can be deciphered if you have a strong password. If someone compromises your system while the encrypted volume is mounted, however, you are in trouble.

Remember that encryption doesn’t protect you from a lack of safe security practices on your part!

Be sure to also read my blog, How to secure your chats with Matrix.

Already have enough on your plate? Explore GigeNET’s managed services.

What is the value of a server? In this world of virtual machines and dedicated servers, our customers are becoming more and more removed from the physical components that comprise a server. Everything is easily replaceable — everything except the data contained within the servers.

Countless work hours have gone into making each server unique, with custom set-ups, modified WordPress templates, blog posts going back years, etc.

This is where the value of a server lies, in the data. What is this data worth to you? How do you even begin to measure that?

The data is stored on the server’s hard drives.

And guess what? This is, by far, the most common part of a server to experience failure. So it’s absolutely necessary to create a backup strategy.

So how do you create the best backup strategy? Where do you begin?

There are two ways to guard against the effects of data loss from drive failure – prevention and recovery.

On the prevention vector, we focus on RAID: Redundant Array of Independent Disks.

Various RAID configurations can be implemented to allow your data to withstand the loss of one, two, or more drives. There are several different configurations that can be tailored to your specific needs, essentially finding the sweet spot between performance, resilience, and cost that is right for your environment. RAID uses two or more drives to store your data in ways that can not only survive drive loss but can also improve performance.
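The trade-off between resilience and cost can be sketched with some back-of-the-envelope arithmetic. The numbers below are illustrative only; real capacity planning should also weigh rebuild times, performance, and controller specifics.

```shell
raid_usable() {
  # usage: raid_usable LEVEL N_DRIVES DRIVE_TB  -> usable capacity in TB
  case "$1" in
    0)  echo $(( $2 * $3 )) ;;        # striping: all capacity, no redundancy
    1)  echo "$3" ;;                  # mirroring: one drive's worth
    5)  echo $(( ($2 - 1) * $3 )) ;;  # one drive's worth of parity
    6)  echo $(( ($2 - 2) * $3 )) ;;  # two drives' worth of parity
    10) echo $(( $2 / 2 * $3 )) ;;    # striped mirrors: half the capacity
  esac
}

echo "RAID 5,  4 x 4 TB: $(raid_usable 5 4 4) TB usable, survives any 1 drive loss"
echo "RAID 6,  4 x 4 TB: $(raid_usable 6 4 4) TB usable, survives any 2"
echo "RAID 10, 4 x 4 TB: $(raid_usable 10 4 4) TB usable, survives 1 per mirror pair"
```

Buying more fault tolerance always costs usable capacity, which is the sweet-spot decision the paragraph above describes.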

However, even RAID can only protect from so many simultaneous failures. While it certainly helps prevent data loss in most cases, it doesn’t reduce the risk to statistical nothingness. Servers are susceptible to multi-drive failure, which is more common than one would expect.

When setting up a server with multiple drives, often these drives are all from the same batch, if they are installed new. If one drive has a flaw, it is likely that this flaw is shared by the other drives in that batch, making the loss of 2 or more drives in a short time more likely than one would expect. In addition to often coming from the same batch, drives in a server are exposed to the same environmental factors, as well.

Furthermore, beyond multiple drive failures, files and filesystems can become corrupt, either accidentally or maliciously, and no RAID level will help you out of that jam.

This is where our second vector comes into play – recovery.

Backing up your data to an external system and keeping multiple recovery points is one of the best ways to mitigate the effect of unpreventable data loss. No matter how robust your data storage plan is, it can fail.

Our R1Soft backup service is set up to take daily incremental backups of your data.

Your data got corrupted on Tuesday? No Problem! Just restore the data from Monday.

You lost the “wrong two” drives in your RAID 10 array? Easy! We’ll simply replace the bad drives, and any others that don’t test 100% stable, then perform a bare metal recovery of your OS and data.

Every client with the service has full control and visibility of their backups with the ability to review, edit and download their backups using a personalized interface.

So, should you choose RAID or R1Soft backups?

While RAID alone is a great option that can prevent the downtime associated with recovery on single or even multiple-drive failures (as long as they are the “right” drives that fail, but that is another topic), it is not failproof.

On the other hand, backups alone can require lengthy downtime for recovery to occur, and the backup is only as current as the last recovery point.

This is why, for protecting your cannot-lose data, we recommend the dual-vectored approach of prevention and recovery. Significantly reduce the need to recover from backups by using a robust RAID, but have those backups on hand for when you do need them.

If you would like to add RAID or R1Soft backups (or better yet, both) to your current setup, chat with our specialists.

Dual Core and Quad Core Servers Optimal Ecommerce Solutions

Three banks plundered with DDoS distraction.

Criminals have recently hijacked the wire payment switch at several US banks to steal millions from accounts, a security analyst says.

Gartner vice president Avivah Litan said at least three banks were struck in the past few months using “low-powered” distributed denial-of-service (DDoS) attacks meant to divert the attention and resources of banks away from fraudulent wire transfers simultaneously occurring.

The losses “added up to millions [lost] across the three banks”, she said.

“It was a stealth, low-powered DDoS attack, meaning it wasn’t something that knocked their website down for hours.”

The attack against the wire payment switch — a system that manages and executes wire transfers at banks — could have resulted in even far greater losses, Litan said.

It differed from traditional attacks which typically took aim at customer computers to steal banking credentials such as login information and card numbers.

While it was unclear how the attackers gained access to the wire payment switch, fraudsters could have targeted bank staff with phishing emails to plant malware on bank computers.

RSA researcher Limor Kessem said she had not seen the wire payment switch attacks in the wild, but the company had received reports of the attacks from customers.

“The service portal is down, the bank is losing money and reliability, and the security team is juggling the priorities of what to fix first,” she said.

“That’s when the switch attack – which is very rare because those systems are not easily compromised [and require] high-privilege level in a more advanced persistent threat style case – takes place.”

Litan declined to name the victim banks but said that the attacks did not appear linked to recent hacktivist-launched DDoS attacks against US banks since they were entirely financially driven.

Researchers at Dell SecureWorks in April detailed how DDoS attacks were used as a cover for fraudulent attacks against banks.

The researchers said fraudsters were using Dirt Jumper, a $200 crimeware kit that launches DDoS attacks, to draw bank employees’ attention away from fraudulent wire and ACH transactions ranging from $180,000 to $2.1 million in attempted transfers.

Last September, the FBI, Financial Services Information Sharing and Analysis Center, and the Internet Crime Complaint Center, issued a joint alert about the Dirt Jumper crimeware kit being used to prevent bank staff from identifying fraudulent transactions.

In the alert, the organisations said criminals used phishing emails to lure bank employees into installing remote access trojans and keystroke loggers that stole their credentials.

In some incidents, attackers who gained the credentials of multiple employees were able to obtain privileged access rights and “handle all aspects of a wire transaction, including the approval,” the alert said – a feat that sounds daringly similar to recent attacks on the wire hub at banks.

“In at least one instance, actors browsed through multiple accounts, apparently selecting the accounts with the largest balance.”

Litan suggested that financial institutions “slow down” their money transfer system when experiencing DDoS attacks in order to minimise the impact of such threats.

This article originally appeared at scmagazineus.com

DDoS Protected Hosting
Izz ad-Din al-Qassam Cyber Fighters, the group behind three phases of distributed-denial-of-service attacks against banks since last September, now says more attacks against U.S. banks are on the way. The group made its announcement in a July 23 posting on the open forum Pastebin.

al-Qassam Cyber Fighters hasn’t attacked since the first week of May, when it announced it was halting attacks for the week in honor of Anonymous’ Operation USA. The group has remained quiet since then, apparently bringing to a close its third phase of attacks, which began March 5 (see New Wave of DDoS Attacks Launched).

Experts who’ve been following the group’s DDoS attacks say this fourth phase was expected and likely will follow the pattern of earlier phases.

“The QCF always start out a phase of Operation Ababil with something new,” says Mike Smith of online security provider Akamai Technologies. “It might be new targets, a larger botnet, new techniques, etc. This is how they try to evade the protections that the targets have deployed. They’ve also demonstrated a bit of showmanship in the past with announcing the attack before they resumed hostilities, and this could be another tactic to generate more press buzz.”

‘A Bit Different’
In its most recent post, al-Qassam Cyber Fighters says: “Planning the new phase will be a bit different and you’ll feel this in the coming days.”

John LaCour, CEO of cyber-intelligence firm PhishLabs, says the group’s plans for different attacks are in response to banking institutions’ heightened DDoS-mitigation strategies. “Major banks had improved their defenses prior to the quiet period,” he says. “If new types of attacks appear, then banks will need to be prepared to respond quickly to prevent significant impact to their online services.”

Based on the impact of the first three phases of DDoS attacks, LaCour notes: “Today’s announcement should put financial organizations on high alert for future attacks seeking to disrupt their online operations.”

In its post, al-Qassam also says, “The break’s over and it’s now time to pay off. After a chance given to banks to rest awhile, now the Cyber Fighters of Izz ad-Din al-Qassam will once again take hold of their destiny.”

Brobot’s Growth
So far, the only activity DDoS experts have noted is growth and maintenance of the botnet, known as Brobot, used in the previous three phases. No attack activity against banking institutions was apparent as of the afternoon of July 23.

Although experts did not directly link PDF download attacks waged in late June against two mid-tier banks to al-Qassam, some speculated those may have been a test for the next phase of attacks (see Another Version of DDoS Hits Banks).

LaCour told Information Security Media Group in early July that new code files linked to Brobot had been identified on compromised web servers the hacktivists had taken over. “The new code we see on these web servers is one of the strong indicators that the botnet is being rebuilt,” he pointed out.

The code behind the malware had changed and included configurations not seen in the first three phases, LaCour said.

Multiple Phases
Phase three of the attacks, which ran for eight weeks, lasted longer than the earlier phases. The first campaign, which began Sept. 18, lasted six weeks. The second campaign, which kicked off Dec. 10, lasted seven.

Experts won’t speculate about how long this fourth phase might last, although al-Qassam does include a complex formula in its July 23 post to hint at how long the attacks could drag on.

But financial fraud expert Avivah Litan, an analyst with the consultancy Gartner Inc., says the timing of this latest announcement is not surprising, given that she believes there’s little doubt these attacks are backed by Iran.

Dedicated Server Storage
Numericable is a cable TV company operating in France, Belgium and Luxembourg. Rex Mundi claimed to have stolen customer data and demanded €22,000 for its return. Numericable declined, and denied that the hackers had the data.

Rex Mundi ("king of the world") is a hacker group that makes a habit of hacking for extortion. Last week, Numericable Belgium's IT manager received an email saying that the hackers had accessed a database of 6,000 new customers and demanding a €22,000 ransom for the data.

Numericable’s response was threefold. It refused to pay the ransom, denied that the hackers could obtain the customer data, and referred the matter to the police. “Hackers have managed to get the data requests for information through our website, but have failed to obtain the data from our customers for the reason that we all separated and the data were not available via the site” (Google translation), Martial Foucart, CIO at Numericable, told RTL.

Rex Mundi responded first on Twitter. “So, Numericable claims that we didn’t steal any data… Our dump tomorrow will be rather humiliating for them then.”

According to Softpedia, Rex Mundi followed up by posting the database to dpaste.de (it has since been ‘removed’). An accompanying note apparently laid the blame on Numericable. “In life, when someone makes a mistake, especially a mistake that could potentially have grave consequences for other people, you would expect that person to man up and own up to it. But not Numericable.”

In Rex Mundi’s logic, Numericable made the mistake (in not securing the data) and then refused to ‘man up’ – and pay the price.

Direct extortion is a growing motivation for cybercriminals. Ransomware, or the ‘police trojan,’ is used to extort money directly from users. The threat of a DDoS attack is used to extort money from both large and small companies. And the threat of data leaks, such as in this case, is simple blackmail. On Tuesday this week, Rex Mundi separately announced that it had breached a Belgian recruitment agency.

However, “More often than not these blackmail threats go unreported,” commented Ashley Stephenson, CEO of Corero. We only tend to hear about them, he added, “when a threat is received and a decision taken to ignore it.”

Meanwhile, Numericable is facing a separate concern: the European Commission has launched an investigation into whether it received unfair aid from France in receiving the French cable infrastructure. “The Commission has doubts that such aid could be found compatible with EU rules,” said an EC statement.

In September 2012, six major American banks came under attack by hackers, and customers could not gain access to their accounts or pay bills online. The attacks did not affect customer bank accounts, but the rash of so-called distributed denial-of-service, or DDoS, attacks against major financial institutions has forced them to step up their game in combating such threats.

DDoS attacks are becoming more frequent and sophisticated, according to the 2013 annual report of the Financial Stability Oversight Council. The council and cybersecurity experts have outlined a number of ways the financial service industry can mitigate the risk. They also say consumers need to be better educated about cybersecurity.

Danny Miller, national practice leader for cybersecurity and privacy at Grant Thornton LLP, worries that at some point, cyberattackers will begin to disrupt the ability of targeted banks to conduct business.

“They don’t really have to shut down a bank’s website for a long period of time,” Miller says. “What they could do — and what it appears their strategy is — is to do it using guerilla tactics. In other words, they’re doing small, concentrated attacks that make it look to the rest of the world that the banks are not able to control their infrastructure and protect themselves.”

Sneaky hackers

Miller says hackers have developed sneakier methods for doing their worst damage. For example, they’ll use insiders to steal information from one department at a bank while security experts are distracted by a cyberattack on another department.

Individual consumers and investors add to the problem with risky behavior such as accessing their personal banking information via unsecured Wi-Fi connections and inadvertently leaving clues about their passwords — think birthdays and pet names — on social media sites, says Jerry Irvine, a member of the National Cyber Security Task Force.

A joint effort of the Department of Homeland Security and the U.S. Chamber of Commerce, the task force involves members of the public and private sectors sharing information about security risks and prevention strategies, says Irvine, who is chief information officer of Prescient Solutions, an information technology outsourcing firm in the Chicago area.

The Financial Stability Oversight Council report encourages these types of public-private partnerships, along with better cooperation with the banking sector and 15 other industries to help decrease cyberthreats.

Cybersecurity legislation needed

In his May 2013 testimony before the Senate Committee on Banking, Housing and Urban Affairs, Treasury Secretary Jacob Lew called for a bipartisan effort to pass comprehensive cybersecurity legislation that would enhance the sharing of information among banks.

Todd McClelland, an attorney with Alston and Bird LLP in Atlanta, advises financial institutions, retailers, payment processors and other clients on information security issues. His firm represents several clients who have a stake in proposed cybersecurity legislation.

“It seems that there’s always some bill pending in front of Congress legislating additional cybersecurity standards, additional risk assessments or the like,” McClelland says.
A February 2013 presidential executive order tasked the National Institute of Standards and Technology — an agency of the U.S. Department of Commerce — with producing a new framework to improve cybersecurity for the nation’s critical infrastructure. One of the agency’s goals is to standardize the measures financial institutions use to control cybersecurity risks. The NIST aims to have the final framework for guidelines ready to roll out by February 2014.

Miller says each bank needs to first identify its most important information and then focus on securing that information from both external and internal threats. As a consultant, Miller advises banks to securely delete any customer information they don’t need to store, while tailoring their security policies to fit each category of data they decide to keep.

As for consumers, Miller says, “If you don’t need to share information … don’t.”

Password tips

Make sure you understand how the financial institution is using your information, who it is sharing it with and how long it plans to keep it in its database, Miller says. And if you’re able to opt out of having your information stored, you should.

“The longer they keep it, the more likely it is going to be stolen and exposed,” Miller says.

Irvine adds these tips:

  • Use a complex password of 10 or more characters. It should be alphanumeric, uppercase and lowercase, and have special characters.
  • Be wise about selecting and answering security questions. If a site asks for your mother’s maiden name, which a hacker might easily discover by checking out your Facebook page, use another one. Pick someone you haven’t seen since elementary school. You can lie on your security questions — just remember them.
  • Don’t create the same password for all of the sites you need to access.

“If you use the same password on Facebook and LinkedIn and other social networking sites and then you use it on your banking site, you might as well just be taking the money out and giving it to the hackers yourself,” Irvine says.
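Irvine's complexity rules can be expressed as a short programmatic check. The sketch below is illustrative only; the function name and the exact character classes are our assumptions, and a real deployment should prefer a vetted password-policy library.

```python
import re

def meets_complexity_policy(password: str) -> bool:
    """Check a password against the rules described above:
    10+ characters, upper- and lowercase letters, digits,
    and at least one special character."""
    return (
        len(password) >= 10
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

print(meets_complexity_policy("Blue!Sky42Now"))  # passes all five rules
print(meets_complexity_policy("password123"))    # no uppercase, no special character
```

Length and character variety are only part of the picture; as Irvine notes, reusing even a strong password across sites undoes most of its value.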

Copyright 2013, Bankrate Inc.

Zimbabweans were knocked offline and saw data wiped because of a slew of cyber attacks during last week's elections, TechWeekEurope learns.

Cyber Repression: In the weeks leading up to and following Zimbabwe’s election of last Thursday, Zimbabweans were hit by significant Internet-based attacks. In some, they could have just been the victims of collateral damage. In others, they were targeted directly.

Two massive distributed denial of service (DDoS) attacks against hosting providers took place this weekend. They took a slew of sites offline, a number of which were reporting heavily on the hugely controversial Zimbabwean election, TechWeekEurope has learned.

One of the hosting providers, GreenNet, which describes itself as an ethical hoster and ISP, with Privacy International and Fair Trade Africa amongst its customers, suspects it may have been hit because of goings on in Zimbabwe. One of its clients is the Zimbabwe Human Rights Forum, which told TechWeekEurope it believes it may have been the subject of a separate hack earlier in the week.

Human rights group hit

The coordinator of the international office of the Zimbabwe Human Rights Forum said he was alerted to the DDoS by an employee of the Congressional Research Service in Washington DC, who had been looking at the ZHRF’s election “situation-room”, a live feed updating users on the political situation in the African nation.

At 6pm Wednesday, just before the DDoS started, the coordinator noticed all the information on that feed had mysteriously been wiped. “I lost information I had gathered for eight hours,” he said. “All of the information I had recorded on 30 July in the evening through to lunchtime the next day had been wiped.

“Even our website designer and engineer couldn’t really explain what happened. Then, whilst we were still talking about the wiping, we realised the site wasn’t working.

“It is curious because we have never had this problem before in the past 10 years.”

He claimed he was putting out the most comprehensive feed on the election, drawing from a variety of sources for users, and that’s why he could have been a target.

Zimbabweans have set up numerous sites to draw attention to the fears of rigging, violent repression and threats that blighted the 2008 election.

One, electionride.com, has been taken offline. On its Facebook page on election day, it claimed to have been compromised.

Last month, Kubatana.net, which has been disseminating information via various electronic means, said it had been blocked from sending bulk text messages. Its mobile provider Econet Wireless had been told by the government’s telecoms regulator to enforce the block, it was claimed.

“Kubatana.net views the interference in our work as obstructive, repressive and hostile. It is our opinion that as we approach the July poll the Zimbabwean authorities are increasing their control of the media,” the organisation said on its website on 25 July.

This election has proven just as controversial as 2008’s, with the two main parties at loggerheads over the result, which went strongly in favour of President Robert Mugabe. Opposition leader Morgan Tsvangirai, of the Movement for Democratic Change (MDC) party, has claimed the vote was rigged, whilst the official figures indicate Mugabe won with a significant majority.

MDC members have now claimed they were the victims of physical attacks by Mugabe supporters. Zanu-PF, Mugabe’s party, has denied the claims.

GreenNet taken out

GreenNet is only just recovering today, with some customer websites still down, having reported the strike on Thursday morning, the day after Zimbabweans headed to the polls. It appeared to be a powerful attack – TechWeek understands it was at the 100Gbps level – aimed at GreenNet’s co-location data centre provider. Its upstream provider Level 3 subsequently did not let GreenNet route through its infrastructure. Level 3 was not available for comment.

Cedric Knight, technical consultant at GreenNet, said the company suspected the massive attack, which knocked all its 3,000 customers offline, with email also disrupted, could have been launched because of the Zimbabwean organisations running off its infrastructure.

However, it could not be certain, saying only that it was one GreenNet customer that was targeted. Many of its customers from environmental, gender equality and human rights groups have powerful enemies.

It believes a government entity or a private organisation was responsible. A tweet from GreenNet earlier this week read: “The nature and magnitude of this attack does suggest corporate or governmental sponsors, likely a very unsavoury one.”

The DDoS that hit GreenNet was not a crude attack using a botnet to fire traffic straight at a target port, but a DNS reflection attack using UDP packets, which can generate considerable power. DNS reflection sees the attacker spoof their IP address to pretend to be the target, send lines of attack code to a DNS server, which then sends back large amounts of traffic to the victim.
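The power of DNS reflection comes from amplification: a small spoofed query elicits a much larger response aimed at the victim. The back-of-envelope calculation below illustrates the effect; the packet sizes and bandwidth figures are assumptions chosen for illustration, not measurements from this attack.

```python
# Illustrative estimate of DNS reflection amplification.
# All figures are assumed for the example.
QUERY_BYTES = 64       # small spoofed DNS query sent by the attacker
RESPONSE_BYTES = 3000  # large response (e.g. a DNSSEC-signed zone) sent to the victim

amplification = RESPONSE_BYTES / QUERY_BYTES
attacker_mbps = 100                          # bandwidth the attacker spends on queries
reflected_mbps = attacker_mbps * amplification  # traffic arriving at the victim

print(f"Amplification factor: ~{amplification:.0f}x")
print(f"{attacker_mbps} Mbps of spoofed queries -> ~{reflected_mbps:.0f} Mbps at the victim")
```

Ratios of this order help explain how a single customer-targeted attack could reach the 100Gbps level reported against GreenNet's data centre provider.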

HostGator, a huge hosting provider in the US, also suffered a big DDoS hit over the weekend. That took out popular Zimbabwean news service Nehanda Radio, amongst many others. Lance Guma, managing editor of the organisation’s website, said he was not sure what exactly had happened. But he has become used to attempted cyber attacks.

“Every time you have a big story, it depends whether the government want people to read it or not,” he said, admitting it was sometimes hard to tell if a story had just been hugely popular, causing the server to crash, or if it was a genuine attack.

Nehanda Radio also receives plenty of threats via email: “We received a lot of those this last week. Obviously we never open any,” Guma added.

“We’ve been receiving a lot of election reports and then there’s a link you’re meant to click, but we never click anything because you can tell the subject matter is dodgy.

“They try all that… we normally just open emails from trusted sources.”

Guma said Mugabe’s government is fairly useless when it came to anything to do with technology, but China is believed to be assisting the nation’s cyber police. “You can just outsource this stuff now,” he added.

This article is part of TechWeek’s Cyber Repression Series – check out the first article on attacks stemming from China on spiritual activists and military bodies and the second on IP tracking in Bahrain.

Turkish security researcher claims to have found flaw in system, which has been offline since Thursday as company ‘rebuilds and strengthens’ security around databases

Apple says its Developer portal has been hacked and that some information about its 275,000 registered third-party developers who use it may have been stolen.

The portal at developer.apple.com had been offline since Thursday without explanation, raising speculation among developers first that it had suffered a disastrous database crash, and then that it had been hacked.

A Turkish security researcher, Ibrahim Balic, claimed that he was behind the “hack” but insisted that his intention was to demonstrate that Apple’s system was leaking user information. He posted a video on YouTube which appears to show that the site was vulnerable to attack, adding: “I have reported all the bugs I found to the company and waited for approval.” A screenshot in the video showed a bug filed on 19 July – the same day the site was taken down – saying “Data leaks user information. I think you should fix it as soon as possible.”

The video appears to show developer names and IDs. However, a number of the emails belong to long-deprecated services, including Demon, Freeserve and Mindspring. The Guardian is trying to contact the alleged owners of the emails.

Balic told the Guardian: “My intention was not attacking. In total I found 13 bugs and reported [them] directly one by one to Apple straight away. Just after my reporting [the] dev center got closed. I have not heard anything from them, and they announced that they got attacked. My aim was to report bugs and collect the datas [sic] for the purpose of seeing how deep I can go with it.”

Apple said in an email to developers late on Sunday night that “an intruder attempted to secure personal information of our registered developers… [and] we have not been able to rule out the possibility that some developers’ names, mailing addresses and/or email addresses may have been accessed.”

It didn’t give any indication of who carried out the attack, or what their purpose might have been. Apple said it is “completely overhauling our developer systems, updating our server software, and rebuilding our entire database [of developer information].”

Some people reported that they had received password resets against their Apple ID – used by developers to access the portal – suggesting that the hacker or hackers had managed to copy some key details and were trying to exploit them.

If they managed to successfully break into a developer’s ID, they might be able to upload malicious apps to the App Store. Apple said however that the hack did not lead to access to developer code.

The breach is the first known against any of Apple’s web services. It has hundreds of millions of users of its iTunes and App Store e-commerce systems. Those systems do not appear to have been affected: Apple says that they are completely separate and remained safe.

Apple’s Developer portal lets developers download new versions of the Mac OS X and iOS 7 betas, set up new devices so they can run the beta software and access forums to discuss problems. A related service for developers using the same user emails and passwords, iTunes Connect, lets developers upload new versions of apps to the App Store. While developers could log into that service, they could not find or update apps and could not communicate with Apple.

But if the hack provided access to developer IDs which could then be exploited through phishing attacks, there would be a danger that apps could be compromised. Apps are uploaded to the App Store in a completed form – so hackers could not download “pieces” of an existing app – and undergo a review before being made publicly available.

High-profile companies are increasingly the target of ever more skilful hackers. In April 2011, Sony abruptly shut down its PlayStation Network used by 77 million users and kept it offline for seven days so that it could carry out forensic security testing, after being hit by hackers – who have never been identified.

Being hacked has also become a cost of doing business for large companies and small ones alike. On Saturday, the Ubuntu forums were hacked, and the passwords of thousands of users were stolen – although they were encrypted. On Sunday, the hacking collective Anonymous said that it hacked the Nauruan government’s website.

On Sunday, the Apple Store, used to sell its physical products, was briefly unavailable – reinforcing suspicions that the company was carrying out a wide-ranging security check. The company has not commented on the reasons for the store going down.

Marco Arment, a high-profile app developer, noted on his blog before Apple confirmed the hack that ” I don’t know anything about [Apple’s] infrastructure, but for a web service to be down this long with so little communication, most ‘maintenance’ or migration theories become very unlikely.”

He suggested that the problem could either be “severe data loss” in which restoring from backups has failed – but added that the downtime “is pretty long even for backup-restoring troubles” – or else “a security breach, followed by cleanup and increased defenses”.

Of the downtime, he said “the longer it goes, especially with no statements to the contrary, the more this [hacking hypothesis] becomes the most likely explanation.”

About Graeme Caldwell — Graeme works as an inbound marketer for InterWorx, a revolutionary web hosting control panel for hosts who need scalability and reliability. Follow InterWorx on Twitter at @interworx, Like them on Facebook and check out their blog, http://www.interworx.com/community.

An extremely hard-to-find backdoor that exposes web users to malware infection has been discovered in the wild by security researchers. The Linux/Cdorked.A backdoor uses a number of advanced methods to avoid detection by the techniques normally employed by system administrators, and is estimated to be present on hundreds of machines.

The most recent of a series of serious Apache exploits discovered over the last few weeks, Linux/Cdorked.A is particularly pernicious because, in addition to providing a platform from which the Blackhole toolkit can be used against target machines, it makes almost no easily detectable changes to infected systems. The usual remediation techniques employed by system administrators are likely to simply destroy evidence of infection.

The backdoor stores none of its configuration files on disk, instead using shared memory to hold its instructions and configuration. The only evidence on the filesystem of an infected machine is a modified HTTP daemon binary. The backdoor receives its instructions via obfuscated URLs that Apache does not log, and it is capable of receiving 70 different commands, indicating a comprehensive and fine-grained control capability.
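Because the modified httpd binary is the only on-disk evidence, one practical check is to compare the installed binary's hash against a known-good copy from the distribution package. The sketch below is a generic file-integrity helper, not ESET's tool; the paths in the commented example are hypothetical, and on real systems package-manager verification (e.g. `rpm -V httpd` or `dpkg --verify apache2`) is the more robust route.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks
    so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: compare the running binary against a pristine
# copy extracted from the distribution package.
# if sha256_of("/usr/sbin/httpd") != sha256_of("/tmp/httpd.pristine"):
#     print("WARNING: httpd binary differs from the packaged version")
```

Note that a hash mismatch only shows the binary changed; confirming Cdorked itself requires inspecting the shared-memory segment, as described below.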

In addition to control via URL, the modified server binary also contains a reverse-connect backdoor that can be triggered by a URL containing hostname and port data, opening a shell session that the attacker controls.

Linux/Cdorked.A redirects clients to machines that host malware payloads, but it makes itself even harder to detect by declining to redirect clients that appear to belong to a site’s administrators. For example, it won’t redirect if the URL or hostname contains strings like “support” or “adm”. An administrator visiting an infected site is therefore likely to see no evidence of compromise. Additionally, the backdoor sets a cookie on clients it redirects and won’t redirect the same client again, making the source of infection even harder to trace.

If an administrator suspects that their server has been infected they can use a tool created by ESET, whose researchers made the initial discovery, to dump the shared memory used by the backdoor for analysis.
It’s not clear how servers become infected initially, but all system administrators should employ industry best practices to ensure that their sites are not easily exploited, including having the most recent version of the Apache server installed and verifying that users with SSH access to servers are using secure passwords, as there is some indication that brute force attacks on SSH servers may be responsible.

Our CTO here at GigeNET, Ameen Pishdadi, was recently interviewed by Net-Security.org. In this interview he discusses the various types of DDoS attacks, tells us who is at risk, tackles information gathering during attacks, lays out the lessons that he’s learned when he mitigated large DDoS attacks, and more.

Read the full article on the Net-Security.org website

Attacks on computer systems are on the rise. If a hacker gets into a system and steals credit card numbers, customer data, or Social Security numbers, it can be financially devastating for a company. Businesses can lose most of their customers once those customers no longer trust them with their personal and financial information. For this reason, it is vital that a business stays ahead of web criminals. The question is: how much will you pay for security?

The Costs are Greater if you do Nothing

If you do not implement effective website security and your server is breached, you can pay immense costs, including lost customers and a serious loss of sales. If you run an online business, customer data and financial information could be stolen. The result can be lawsuits and a loss of reputation that could be financially devastating. The security measures you implement will depend on the type of website you have, whether a large corporate site or a small online store offering select products or services.

Generally, you have to consider measures such as security penetration tests, virus scanners, firewalls, intrusion prevention technology, routine security assessments, phishing and malware protection, anti-virus protection, and anti-DDoS software. You also have to make sure you upgrade these security systems on a regular basis, and you will need to implement an office security policy for your employees.

Lessening the Risks

Security prevention means you must reduce the risks. When considering what to include in your hosting security plan, you should weigh the following: regulatory compliance, security breach history, industry standards, and the size of your network and systems. In addition, you need to consider risks to your infrastructure, code, and applications, and how susceptible your system is to URL manipulation, SQL injection, and cross-site scripting.

The impact of a security breach can be devastating to a business so it is essential to budget for a quality all-inclusive security plan. Implementing an effective security system can be expensive; however, the cost of a breach can destroy a business. A good security system can assure and give you peace of mind knowing that your system and data is protected at all times.

Due to the increasing number of DDoS attacks, it is vital that businesses implement a diverse number of security measures in order to protect their websites and data from a wide range of security threats.

Five ways to protect against DDoS attacks:

  1. Vulnerability Scanning and Penetration Testing: Prevention is the key to website security, and vulnerability scanning is an effective prevention tool. A vulnerability scanner scans a site for security vulnerabilities; the results allow administrators to secure the vulnerable spots, for example by improving firewall rules. Penetration testing is a complementary tool that helps identify weaknesses in areas such as application code and browser scripts.
  2. DDoS Protection Software: A DDoS (Distributed Denial of Service) attack takes place when a server is overwhelmed with requests until it can no longer function properly. A DDoS attack can exhaust a resource such as storage capacity, bandwidth, or processing power, leaving none of that resource for legitimate traffic. DDoS protection software runs on existing hardware and analyzes incoming traffic; when it detects malicious packets, it filters them out, which effectively stops a traffic flood attack.
  3. Application Firewalls: A web application firewall sits between the client browser and the web server, where it analyzes HTTP traffic, blocks web attacks, and prevents data leaks.
  4. Browser Security Tools: Make sure your browser has tools such as built-in XSS filter to minimize the risk of XSS attacks.
  5. Application Whitelists: Implement a policy of approved applications through the use of application whitelists.
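The traffic-filtering idea in item 2 above can be sketched as a per-source-IP token bucket: each client earns request "tokens" at a fixed rate, and traffic beyond that rate is dropped. This is a minimal sketch of the concept, with assumed rate and burst figures, not production DDoS mitigation, which operates at the network layer and on far richer signals.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allow up to `rate` requests per second per IP, with bursts of
    up to `burst` requests. A sketch of the filtering idea behind
    DDoS protection software."""

    def __init__(self, rate: float = 10.0, burst: float = 20.0):
        self.rate, self.burst = rate, burst
        self.tokens = defaultdict(lambda: burst)   # tokens remaining per IP
        self.last = defaultdict(time.monotonic)    # last-seen time per IP

    def allow(self, ip: str) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens[ip] = min(self.burst,
                              self.tokens[ip] + (now - self.last[ip]) * self.rate)
        self.last[ip] = now
        if self.tokens[ip] >= 1:
            self.tokens[ip] -= 1
            return True
        return False  # over the limit: drop or challenge the request

limiter = TokenBucket(rate=5, burst=10)
print(limiter.allow("203.0.113.9"))  # first request from an IP is allowed
```

A flood source quickly drains its bucket and gets dropped, while low-rate legitimate clients are unaffected; the weakness, as with any per-IP scheme, is a botnet spreading requests across many addresses.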

When securing your website, prioritize the security tools that are affordable and provide an effective level of security. A security program that both detects and prevents diverse attacks will go a long way toward stopping DDoS attacks against your website.


DDoS Protected Hosting
Distributed Denial of Service (DDoS) attacks have become more prevalent and are now considered among the most serious threats to a web server. DDoS attacks have not just taken websites temporarily offline, but have shut websites down for days. Because of malicious campaigns such as the WikiLeaks-related ‘Operation Payback,’ more enterprises are taking these attacks seriously and looking for effective anti-DDoS technology. Efficient anti-DDoS technologies are now available to safeguard web servers, and DDoS protected hosting is an effective and affordable way to prevent malicious DDoS attacks.

DDoS protected hosting protects your website from DDoS attacks by responding with mitigation measures as soon as an attack begins. DDoS attacks normally operate by driving an overwhelming amount of web traffic at a targeted server until it can no longer function, and legitimate traffic is lost. A sudden spike in unfiltered IP traffic is usually an indication that a DDoS attack is making its way into the network. Anti-DDoS software starts filtering the traffic immediately; once traffic slows back down to normal levels, the attack has been mitigated, and within minutes legitimate traffic flows undisrupted while remaining filtered.
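The spike-then-filter behaviour described above can be sketched as a simple moving-average anomaly detector: flag any traffic sample that far exceeds the recent baseline. The window size and threshold factor below are assumptions for illustration; real anti-DDoS systems use far richer signals than raw packet rates.

```python
from collections import deque

def detect_spike(samples, window=5, factor=3.0):
    """Flag a traffic sample as a possible DDoS spike when it exceeds
    `factor` times the moving average of the previous `window` samples.
    A simplified sketch of spike detection, not a mitigation system."""
    history = deque(maxlen=window)
    flags = []
    for pps in samples:  # packets per second
        baseline = sum(history) / len(history) if history else pps
        flags.append(pps > factor * baseline)
        history.append(pps)
    return flags

# Steady traffic around 100 pps, then a sudden flood.
traffic = [100, 110, 95, 105, 100, 2000, 2500, 120]
print(detect_spike(traffic))
```

In this toy trace the two flood samples are flagged while normal fluctuations are not; a hosting-grade system would then divert the flagged flows through scrubbing before they reach the origin server.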

By placing a website behind a ProxyShield mitigation system, DDoS attacks that would otherwise have caused extended periods of downtime are effectively stopped. Businesses benefit from complete protection for their website IP address and from automatic detection and filtering of DoS/DDoS attacks. DDoS protected hosting gives clients current technology to keep their websites protected from malevolent actors. When choosing DDoS protected hosting, it is important to understand the level of protection the service provides, to ensure that you have the most reliable DDoS protection available.

Website downtime caused by Distributed Denial of Service (DDoS) attacks can cost your business hundreds of thousands and even millions of dollars. Today, DDoS attacks aimed at shutting down websites have become one of the most costly computer crimes. If you have a growing e-commerce site, it is essential that you have complete DDoS attack protection. DDoS protected hosting is the most efficient and most affordable DDoS security solution.