All businesses, no matter how successful or long-standing, will always deal with risks. These risks can stem from real-world incidents, but today they are more likely to originate in the digital sphere. For instance, some of the biggest cyberattacks of 2021 (including those involving the Metropolitan Police Department, CNA, and the Microsoft Exchange Server) resulted in millions of dollars in losses and breached data.
How disaster recovery as a service works
To help companies prepare for the worst, disaster recovery as a service (DRaaS) has become more crucial than ever. DRaaS is a cloud computing service model that allows an organization to back up its data and IT infrastructure. Disaster Recovery Certified Specialists (DRCS) handle and implement these plans, working with an organization's technology department to develop programs and identify the resources necessary for technical recovery. With that, DRCS is a promising career path that offers plenty of room for growth. If you're interested in pursuing it as a career, read on to learn more.
What it takes to be a disaster recovery specialist
Any disaster recovery specialist should have an advanced understanding of infrastructure technology. They must also be able to manage dedicated servers like ours here at GigeNET, as well as other operating systems, cloud architectures, and storage. Such dedicated servers provide safety and security to businesses, but they also require efficient management. This is why many employers tightly align cybersecurity efforts with data recovery and prioritize hiring individuals with specializations in this field.
The process of becoming a disaster recovery specialist
Among disaster recovery specialists, 57.7% have a bachelor's degree and 11.1% have a master's degree. Pursuing education in the field can give you an edge, and with data recovery becoming a crucial part of every company's data security plan, that advantage will only grow. Depending on the organization and the position you're eyeing, years of work or education may be necessary. Professionals with a master's degree in cybersecurity are often trained in focus areas such as IT management, cyber defense, and cybersecurity incident response, among others. You may also need to pass the Disaster Recovery Certified Specialist qualifying examination. Completing these steps will allow you to move into specific positions in the disaster recovery field, such as chief information officer, network architect, or security engineer, across business settings including government, corporations, and IT. Network architects are particularly relevant, as they help design and review how a business interacts with, accesses, and benefits from the internet, including how data is stored in the cloud.
Other skills needed for disaster recovery specialists
Forbes highlights how a specialist should also have supplementary skills in addition to their technical credentials. They must understand the complexity of the IT environment they're working in, grasp how to prioritize critical applications over others, and know that backup and recovery is much more than finding the right product. Of course, soft skills like critical thinking, strong interpersonal skills, knowledge of different technologies, and the ability to lead diverse teams are also necessary. Working as a disaster recovery specialist can be both challenging and rewarding, and for those who have a passion for finding solutions and leading organizations, it is a chance to make a real difference in the future of public safety. For businesses or DRCS professionals looking for quality dedicated servers, feel free to browse our listed servers here at GigeNET. We also offer our own DRaaS services, so feel free to contact us to get a custom quote.
In the ever-changing world of Information Technology, the one thing that seems to remain the same is security. Red teams vs blue teams, white hats and black hats – by now most of us in IT have heard and seen just about everything there is in the realm of computer and network security. So, those of us that eat, sleep, and breathe IT know exactly how important it is to be proactive rather than reactive. Having a trusted Anti-Virus software from a trusted vendor installed on your server is just one of many ways you can help to keep your data secure.
One semi-recent change in the world of security is the rise of crypto-viruses, more commonly known as ransomware. In less than a year's time, we've seen two fairly large organizations affected by ransomware. According to security researchers at PurpleSec, the recent attack on Kaseya came with a $70 million ransom demand. While ransomware itself is not new, it is growing more and more sophisticated. Worried? You should be: from May 6th to May 12th, 2021, the Colonial Pipeline was shut down due to a ransomware attack, and even the City of Atlanta has fallen victim to ransomware. These attacks have cost companies and taxpayers alike millions to recover from.
So, what can be done to keep your data and personal information safe?
Start with a trusted Anti-Virus solution.
GigeNET would like to officially announce our partnership with ESET, makers of the famed NOD32 Anti-Virus. We offer ESET protection for Windows and for most Linux Distributions starting at $5.00 per month. This is a new option on all new server orders. Please reach out to our sales team if you wish to add it to an existing server.
Of course, good Anti-Virus software is not the only necessity. It is considered not only best practice, but vitally important that the following steps are taken to secure your server or your personal computer:
- Always take backups. If you actually care about your data, you will back it up. Local backups to a backup drive in your system are good, but using a remote backup system is preferred. (GigeNET offers both R1Soft and Veeam as add-on remote backup services).
- Keep your operating system and applications up-to-date with the latest updates and security patches. Typically, while the information might not always be fully disclosed or announced, most operating system updates are to fix security-related issues or bugs. Some IT professionals recommend holding off for a while before applying new updates since occasionally updates have broken a system. However, as long as you follow step 1 above, you have a solid backup to fall back on. The small chance of encountering adverse effects from an update is easier to mitigate than leaving a security vulnerability in place.
- Firewall it. Your server is on 24/7 with one or more public IPs. Leaving these IPs unprotected by a firewall is just asking for trouble. Any firewall available will secure unused ports and allow you to limit how the other ports are accessed. It all comes down to the configuration.
- Don’t slack on user administration. Quickly disable accounts when a user leaves and ensure that no account has more access than is needed. Good user administration can reduce the number of accounts available for a hacker to exploit, prevent bitter ex-employees from doing harm, and mitigate the exposure should an account become compromised.
- Use complex passwords. All your security measures are for naught if you are using passwords that are trivial to crack. A good password is one that cannot be remembered without frequent, repeated use. Store your important passwords in a password manager instead. I personally recommend KeePassX, but there are many others around. The trick is to use a password for your password manager that you can remember, but one that is still difficult to crack. Remember to use special characters!
- SSH Keys. For your Linux servers, disable password authentication altogether and use SSH Keys to authenticate. If you must use password authentication, it’s recommended that you use a non-root user to log in, then su to root in order to perform actions as the root user.
- Do not open unsolicited email attachments, even from people you know. Before you view what seems like a harmless attachment even from a friend, you should confirm with the sender what it is they’re sending.
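To put the complex-password advice into practice, here is a minimal sketch of generating a strong random password from the command line. It assumes a Linux system with /dev/urandom available; the character set and 20-character length are arbitrary choices you can adjust:

```shell
# Generate a random 20-character password from a mixed character set.
# Assumes /dev/urandom is available (any modern Linux system).
gen_password() {
  LC_ALL=C tr -dc 'A-Za-z0-9!@#$%^&*' < /dev/urandom | head -c 20
  echo
}

gen_password
```

Store the result in your password manager rather than trying to memorize it.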
Please keep in mind that this list is not the end-all and be-all list of security practices, but taking the above steps will definitely help reduce the level of probability that you’ll lose your data due to outside attackers.
Since the first public release of CentOS back in 2004, many administrators and tinkerers alike began their journey down the path of converting their once free but increasingly outdated copies of Red Hat Linux 7.2 to this promising new community-driven contender. This first migration was triggered by Red Hat killing off a product: displeased with being limited to monetizing only support plans, Red Hat discontinued its stable distribution, Red Hat Linux. In its place, we got Red Hat Enterprise Linux (RHEL) and a community-based upstream product, aptly named Fedora. Unsatisfied with these options, many users jumped ship to the new community-driven upstart, CentOS. As CentOS matured, its reputation as a free and stable production environment grew – along with its popularity.
In 2014 CentOS joined Red Hat, and not much changed for six years – until December of 2020, when Red Hat announced they were effectively killing off the CentOS project. CentOS would become CentOS Stream, the new upstream (i.e. unstable) version of RHEL, with support for CentOS 8 ending a year later, in December of 2021. This came as a shock to many in the community, at least to those among us who had not already gone through this with Red Hat once before. Many in the community feel slighted, as they had upgraded to CentOS 8 on the initial promise of support through 2029. CentOS's position as the free, stable production replacement for RHEL is now a thing of the past. However, just as CentOS was there in 2004 to fill the gap, this time around we have two contenders to pick up the torch – and we're here to help you do it.
In comes the first contender to take CentOS's place, and one we just so happen to partner with here at GigeNET – AlmaLinux. AlmaLinux was created by the CloudLinux team as a free CentOS replacement. It is positioned to provide the community a solid distribution from a well-known and experienced team of developers who have been performing full rebuilds of RHEL for years through their CloudLinux work. They have even made it easy to migrate to AlmaLinux from an existing CentOS 8 system: once you update CentOS 8 to the latest version, you can use the AlmaLinux conversion script. We used it here at GigeNET for our internal systems, and the steps are fairly simple. As root, perform the following:
- Back up your system.
- Run these commands, one at a time:
dnf update -y
shutdown -r now
curl -sSL -O https://raw.githubusercontent.com/AlmaLinux/almalinux-deploy/master/almalinux-deploy.sh
chmod +x almalinux-deploy.sh
bash almalinux-deploy.sh
- If you see the “Complete” message, run:
dnf distro-sync -y
- Reboot your system with:
shutdown -r now
The next contender to step in as a replacement for CentOS is none other than one of the original founders of CentOS, Gregory Kurtzer. He launched a new community-based offering called “Rocky Linux,” named after one of the other co-founders of CentOS, Rocky McGaugh, who sadly passed away in 2004. Rocky Linux stands to be everything that CentOS originally promised: a totally free, enterprise-grade OS that is “bug for bug compatible” with Red Hat Enterprise Linux. We’re keeping a close eye on Rocky because we’d like to see it succeed as a community-based distro without ulterior profit motives. They also provide a fully open-source build system and a guide on how to rebuild it yourself if you’re tech-savvy enough.
Converting over to Rocky Linux is a bit easier, as it does not require you to update your CentOS 8 system before doing so. In fact, the process is just as simple as this:
- Back up your system.
- Run this one command as root:
curl -sSL https://raw.githubusercontent.com/rocky-linux/rocky-tools/main/migrate2rocky/migrate2rocky.sh | bash -s -- -r
- Reboot your system with:
shutdown -r now
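Whichever migration route you take, it's worth a quick sanity check once the system comes back up. This sketch simply reads the standard os-release file that any modern distribution ships, so it works before and after the conversion:

```shell
# Print the distribution name and version the system now reports.
# /etc/os-release is present on any systemd-era Linux distribution.
. /etc/os-release
echo "Now running: $NAME $VERSION_ID"
```

After a successful conversion, you should see AlmaLinux or Rocky Linux reported instead of CentOS.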
Currently, AlmaLinux seems to be the more mature distribution – but Rocky Linux is still a solid choice. Rocky Linux lacks support for Secure Boot, which AlmaLinux provides. Also, while testing the two, we encountered a few bugs in Rocky Linux’s install and migration scripts. Nothing that couldn’t be worked around, but an indication that Rocky Linux lacks the polish of AlmaLinux. Don’t let this dissuade you from Rocky Linux, though – we fully expect it to work out these bugs and catch up to AlmaLinux’s feature set. Either way, as a GigeNET customer, our support staff would be more than happy to assist you in the conversion process. We have both AlmaLinux and Rocky Linux available as an OS choice for both dedicated and cloud servers. However, if you’re currently a cPanel/WHM user, AlmaLinux might be your best bet, as cPanel has not yet released an official announcement of support for Rocky Linux. Finally, note that these distributions do not provide an upgrade path directly from CentOS 7 or older releases. The scripts above are intended to work on RHEL 8 based systems only.
CentOS has been one of the most prolific Linux server distributions available. The stability, simplicity, and low overhead have made it the top choice to run production server applications – especially hosting panels such as cPanel. The open source nature of CentOS, as well as the backing from the Red Hat Enterprise Linux (RHEL) Project, uniquely positioned CentOS as the only widely available and highly supported Red Hat based distribution to exist for Linux server administrators. The proliferation of web panels such as cPanel helped to firmly cement CentOS as an important player for production server applications. That is until CentOS Stream was announced.
What is CentOS Stream and how does it affect me?
CentOS Stream is the RHEL Project’s and CentOS Project’s new rolling distribution to succeed CentOS 8. By deciding to go with a rolling distribution model, CentOS developers can release new features into the operating system at a rapid pace. Unfortunately, this also means that CentOS Stream users have now become beta testers.
Features are introduced so quickly that there is often not much time to patch or even discover many bugs. This results in an unstable and buggy distribution — the opposite of what companies are looking for to support their server applications. The original intent behind CentOS was to be a community-driven enterprise distribution. Enterprise distributions are designed to be extremely reliable as well as stable, in stark contrast to Stream’s rolling distribution model.
Highly complex panels such as cPanel cannot develop fast enough to work around bugs that are introduced, and for every bug that is quashed, another is likely to take its place. For that reason, cPanel has refused to support CentOS Stream.
Another issue relates to a truncated support schedule for CentOS 8. With CentOS Stream replacing CentOS 8, the latter is being phased out of the CentOS Project’s development cycle. CentOS 8 users, who expected an EOL of 2029, are now contending with an EOL of 2021. This presents a significant challenge to systems administrators and their clients, which consequently elicited a highly negative response from the enterprise community. Accusations were drawn that the RHEL project is attempting to force administrators to pay for RHEL 8 licenses by terminating CentOS 8 support. This is aggravated by the fact that RHEL is now offering 16 free licenses per company in order to help push RHEL 8 — leading many to believe this is a cash grab caused by IBM’s acquisition of the RHEL project back in 2019. This was predicted by many, as IBM has a track record of imposing highly convoluted and extremely expensive licensing plans on many of their products.
What are my options?
Thankfully, many projects have stepped up to provide the open source community with an enterprise operating system that is 1:1 bug compatible with CentOS 8. The two biggest contenders are AlmaLinux, developed by CloudLinux, and RockyLinux, developed by the Rocky Enterprise Software Foundation. Both serve to continue the original purpose of CentOS, filling the gap CentOS will leave for a community enterprise distribution. cPanel has announced support for both AlmaLinux and RockyLinux, allowing users to select either option to run cPanel on RHEL 8. GigeNET is a proud CloudLinux and cPanel partner and our techs have years of experience working with their products. Clients with support plans can receive assistance from our support team migrating their cPanel instances to either AlmaLinux or RockyLinux. We are available 24/7 to assist you with any questions you may have about cPanel and RHEL 8 support, as well as any other issues you may be facing.
Did you know that GigeNET has been in business for nearly 25 years? During that time, GigeNET pioneered some of the first client-accessible control panels and was one of the first DDoS mitigation providers in the world.
Find even more about our history and our plans for the future in the new interview with founder Ameen Pishdadi.
Interested in Deca Core Dedicated Servers? View our inventory.
What is a deca core dedicated server?
Deca Core Dedicated Servers are servers that have a processor with ten cores. By having multiple cores the deca core server can handle ten different processes simultaneously. Deca core processors are typically used in HPC (high-performance computing) where the workloads can take advantage of multiple cores.
In a server deployment, deca core dedicated servers can greatly help with high-traffic websites, database processing, and workloads that use a lot of parallel processing, like machine learning and AI.
How do deca core dedicated servers work?
Deca-core processors generally deliver greater performance than systems with fewer cores because they can simply process more instructions in parallel. With the ten cores running on the same chip, they share the same data path and memory interface to the motherboard. This increases efficiency and reduces redundancy.
Many deca core processors are well-threaded, which allows a server to benefit from an increased number of cores, higher memory capacity, and a larger cache.
It should be noted that old legacy applications and programs may not see a performance boost. Applications written before multi-core servers became common were not programmed to take advantage of the system’s parallel instruction efficiencies. This is yet another reason for companies that still run legacy programs to invest in upgrading internal systems; the change in speed alone often justifies the development costs.
Another note concerns performance. Though a deca core has ten times the cores of a single core processor, it does not necessarily have ten times the processing speed.
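The gap between core count and real-world speedup is captured by Amdahl's law: if only a fraction p of a workload can run in parallel, ten cores can never deliver a full 10x gain. A quick back-of-the-envelope calculation with awk (the 90% parallel fraction is an assumed figure for illustration):

```shell
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# p = parallel fraction of the workload (assumed 0.90 here), n = core count
awk 'BEGIN {
  p = 0.90; n = 10
  speedup = 1 / ((1 - p) + p / n)
  printf "10 cores, 90%% parallel workload: %.2fx speedup\n", speedup
}'
```

Even this highly parallel workload sees only about half the theoretical gain, which is why legacy single-threaded applications benefit so little from extra cores.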
What are the advantages of a deca core server processor?
- Improved performance
- Reduced latency
- Lessens heat generation
- Maximizes bandwidth and main memory
- Better suited to modern system architecture
- Helps decrease power consumption
Due to their nature, deca core systems are extremely important in high-performance private cloud and cluster arrangements. Their ability to process instructions in parallel makes them a perfect foundation for creating virtual machines.
GigeNET uses the Intel Xeon E5-2630 v4 processor, which is a dual deca (two 10 core processors on one board) core processor system. These servers were built for enhanced virtualization and cloud deployments, while supporting more traditional applications. Learn more about them here.
Which Email Protocol do I choose? IMAP or POP3?
Email is, by far, the most common means of online communication these days. Believe it or not, email dates back nearly 50 years and has seen little change in that time. An email sent in the early 1970s would look much like one sent today. The key to email’s success is that it is based on a series of well-defined standards with a decentralized design, which will likely help email remain in widespread use for a very long time.
Email operates using a classic client-server model. A client is a program that end-users (you) interact with. Common email clients are Outlook, Thunderbird, various email clients built into operating systems (like Microsoft Mail and Apple Mail), and web-based email clients (Hotmail, Gmail, to name a couple). This is where incoming messages are read and outgoing messages are composed.
The server is another program that makes the whole system work behind the scenes (at least from the end-user point of view). Email clients connect to the server regularly to check for new messages and to dispatch outgoing emails. Each email server connects to the global network of email servers in order to route mail all over the world, making sure each message is delivered to the correct server, and eventually to the recipient when their client connects to their own server.
Configuring Your Email Client
Inbound and outbound email messages are handled by different protocols, and often – especially with larger email systems – by separate servers. Configuration of the client requires you to know the hostname and port used for both inbound (new mail) and outbound (sent) messages. This information is often found on the mail server’s interface, if you are managing your own email server. Otherwise, it can be requested from the server’s administrator.
Outbound (sent) messages are handled almost universally by SMTP, so we’ll address that first.
SMTP (Simple Mail Transport Protocol)
Outgoing email configuration is usually as simple as specifying the SMTP server, network port, and supplying credentials for authentication. The SMTP server is the device your email client connects to in order to relay messages sent by you to the email server corresponding to the recipient’s email account. It is typically something like smtp.domain.com or mail.domain.com.
A network port is a numbered channel within a network connection: data transmitted to port xyz on one end arrives on port xyz at the other. SMTP historically uses port 25, but modern systems tend towards port 587 these days. Occasionally you will see 465 (deprecated), 2525 (non-standard), or, less commonly, a unique port number.
Finally, you will need to authenticate with your SMTP server to prove that you are authorized to send email through this email relay server. This will be the same username and password you use to log into your email account.
Configuring a client for incoming email is a bit more complex because there are two commonly-used methods to choose between. Some email servers may only support one method, so your decision has already been made for you.
POP3 (Post Office Protocol)
POP3 is a protocol that mail clients use to download email messages from an email server and store them on the local machine. This is the original protocol that is used to fetch email from a mail server and the most widely available. When using POP3 your mail client will contact the mail server to check for new messages. If any are found, they are downloaded to the email client and deleted from the server (there is often a setting to delay this deletion).
POP3 was at its prime during the age of dial-up and transmits a minimal amount of data between client and server. It also keeps the space used by your email account low since messages are only stored on the server until they are downloaded by the client. While these were both big selling points when dial-up was the norm, they are pretty much inconsequential now unless you are dealing with a poor or spotty internet connection.
POP3 can be problematic when using multiple clients to access the same email account. Since messages are deleted after delivery, by default, they only appear on the client that downloaded them. This can lead to some messages on your phone client, and others on your desktop client, though this can be mitigated somewhat by delaying the deletion on the server. Additionally, POP3 clients lose all messages if the data on your client device is lost or destroyed with no way to recover them if you don’t have a backup.
Configure POP3 on your client by entering the server name, network port, and authentication. POP3 typically uses port 110 for unencrypted connections, and port 995 when encryption is used.
IMAP (Internet Message Access Protocol)
IMAP differs from POP3 in that it leaves email on the server. When a client connects to check for new mail, the latest messages are synchronized with the server, downloading copies of new messages. IMAP clients often cache a number of messages on the client for off-line access, but local storage use is minimal. This fits well with the always-online reality of today, and doesn’t bog down mobile devices with large email archives, where storage can be at a premium. You can access your mail from any number of IMAP clients and see the same messages, and losing a device or upgrading to a new one doesn’t cost you your email history.
In most ways, IMAP is superior to POP3, but it may suffer when your internet connection is spotty or you want to have access to your entire email account off-line.
Configuration of a client for IMAP uses the typical server name, network port, and authentication we’ve seen before. In this case, the standard ports are 143 for a standard connection, and 993 for an encrypted one.
Which do I choose? IMAP or POP3?
POP3 is an old protocol that has had its time and place; IMAP was designed to address its shortcomings and keep up with how email is used in this modern day and age. Given a choice, go with IMAP. There are a few situations where POP3 may be preferred, and in some cases it is the only option available. Should you find yourself having to use POP3, do yourself a favor and set it to put off email deletion on the server for as long as possible (indefinitely, if you can).
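As a quick reference, the standard ports covered in this guide are:

```
SMTP  (outgoing):  587 (submission), 25 (legacy), 465 (deprecated SMTPS)
POP3  (incoming):  110 (unencrypted), 995 (encrypted)
IMAP  (incoming):  143 (unencrypted), 993 (encrypted)
```

Your provider may use non-standard ports, so always confirm these values against your mail server's documentation.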
Hopefully, this guide has helped you better understand how email works – which is a good thing, because it will likely be around for a long time. I find it interesting to see how the protocols that facilitate client-server communication have evolved to keep up with the times, yet the appearance of an email message has remained essentially unchanged.
A few key practices that can secure your server
The advent of the Internet Age has had a profound effect on how business is conducted. Maintaining an online presence is no longer optional for most companies if they want to stay relevant and competitive. Existing and potential customers use the Internet to make purchases, manage their accounts, research products, and much more. The benefits of this are immeasurable, but it doesn’t come without a dark side — hackers. With so much riding on your website and online reputation, it is absolutely vital to keep your servers secure.
Security professionals devote their entire careers to keeping up with the ever-evolving nature of online threats and global corporations have whole teams with substantial resources dedicated to keeping their online properties secure. Taking on the chore of securing your server may seem like a daunting task, but we’re here to help! We have identified a few key practices that can secure your server enough to defend against the vast majority of attacks and dissuade all but the most elite hackers. It doesn’t take a large amount of system administration ability to secure your server using these methods, but look into our management plans and SecureServer+ services if you’d rather leave it in our capable hands.
Get Behind a Firewall
The first line of defense for any secure environment is a firewall. There are several firewalls to choose from, but they all typically have the same basic features. A firewall is either an application or a physical device that resides between the internet and any network-facing services on a server. It acts as a gatekeeper for network traffic, using a set of rules to filter both inbound and outbound connections. However, a firewall is only as good as the rules it is given to work with. A well-configured firewall can filter out the vast majority of malicious connections, while a poorly-configured one will be far less effective.
The first decision is hardware or software. Most modern operating systems come with a built-in software firewall application, which is usually sufficient. A dedicated appliance, also known as a hardware firewall, is often used in front of multi-server environments to provide a single point for firewall administration.
No matter what type of firewall you end up using, your next step is defining a good set of rules. Rule number 1 when configuring a firewall, especially remotely, is to be very careful to not lock yourself out by blocking the connection you are using to access the firewall. It is always good practice to have a fallback access method to change firewall rules should you accidentally block your own connection – typically a physical console or an out-of-band console solution like IPMI, ILO, or DRAC.
Start by considering what services your server provides. Network services utilize specific ports to help differentiate between types of connections. Think of them as lanes on a VERY wide highway with dividers to prevent one from changing lanes. A webserver, for example, will typically use port 80 for standard connections and port 443 for connections secured using an SSL certificate. These services can be configured to use non-standard ports so be sure to verify which ports your services are using.
Next, determine how you will remotely administer your server. On Windows, this is typically done via RDP (Remote Desktop Protocol) and on Linux, you will likely be using SSH (Secure Shell). Ideally, you will want to block access to the ports used for administration to all but a handful of IPs or to a small subnet in order to limit the access to these protocols from anyone not within your organization. For example, if you are the sole administrator of a Linux server, open the SSH port (typically 22) to connections from only your computer’s static IP address. If you don’t have a static IP address, you can often determine a subnet from which you will be assigned an IP. While whitelisting a range of IPs isn’t ideal, it’s far better than opening up that port to the whole Internet.
To generate a solid set of rules, block all ports from all IPs then create specific rules to open those ports needed for your services and administration – remembering not to lock yourself out. The ports opened for your services should generally be open from all IPs, but limit administration ports as discussed above.
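As an illustration only, a default-deny ruleset for a Linux web server along these lines might look like the following in iptables-save format. The SSH source address 203.0.113.10 is a placeholder for your own static IP, and the open ports assume a standard web server:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Allow established sessions and local (loopback) traffic
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
# Web traffic open to all IPs
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
# SSH restricted to a single admin IP (placeholder address)
-A INPUT -p tcp -s 203.0.113.10 --dport 22 -j ACCEPT
COMMIT
```

Apply a ruleset like this with iptables-restore only after double-checking the SSH rule matches the connection you are administering from, or you will lock yourself out.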
While a firewall shouldn’t be your only line of defense, creating a reasonable set of firewall rules is a great starting point for enhancing your server’s security. In truth, no server should be without at least a basic firewall configuration.
Authentication & Passwords
One of the simplest ways to enhance your server’s security is simply by enforcing a strong authentication policy. Your server is only as secure as the account with the weakest password. Follow good password guidelines for any password used on a server, such as making sure that your password is of adequate length, not a dictionary word, and not used on other services that could themselves become compromised and leak your password. While you can limit remote access to your server via a good firewall configuration, there are still exploits that can be used to send commands to a system through compromised or unpatched services running on open network ports.
In many cases, it’s possible (and more convenient) to go passwordless altogether! If your main method for accessing a server is via SSH, you can disable password authentication in your server’s SSH config file and instead use a pair of public and private keys to authorize your connection.
Keep in mind that this method may be less convenient if you need to log in to your server from anywhere at a moment’s notice, since you will need to add your private key to any new system you connect from. Also, while this approach makes remote connections an order of magnitude more secure, don’t neglect to nevertheless set a strong password on your account. Hackers are sometimes able to access a system in other ways, and you wouldn’t want an account with elevated access secured by a password like “1234.”
These days, two-factor authentication (2FA) is becoming very popular. When using 2FA, not only does a user need to authenticate with their password, they also need to provide a one-time-use code sent to a previously registered email address or mobile device to further verify their identity. Implementing something like this on your server could be done through a third-party service, or by using a 2FA-enabled account (like Google or Microsoft). cPanel/WHM now supports two-factor authentication, so this may be an option for you if you use this control panel as your main means of server administration.
Brute Force Protection
A common attack vector on servers is a brute force attack. These are remote login attempts using guessed usernames and passwords, repeated over and over, as fast as the server and network will allow. Unprotected, this can amount to several hundred thousand attempts per day — more than enough to eventually stumble onto a weak, common, or reused password. For this reason, it is prudent to install some form of brute force protection on your server.
Most approaches to brute force protection take one of two forms. The first method introduces a timeout between login attempts. Even if this timeout is as short as a single second, it can cause an attack to take many times longer to crack the password. You’d likely want a longer timeout to provide better security, while not overly interfering with legitimate login attempts by users making typos. Some systems take a clever approach to this method by increasing the timeout with every failed attempt, often exponentially. Fail once, wait 1 second. Fail again, wait 5 seconds. Fail a third time, wait 30 seconds… By the fourth attempt, you’re going to be very careful entering your password.
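The escalating-timeout idea can be sketched as a small shell function. This is purely illustrative (real implementations live in PAM modules or the login service itself), with delays growing as powers of 5 and capped at five minutes:

```shell
#!/bin/sh
# Illustrative sketch: the delay grows with each consecutive failure,
# roughly 1s, 5s, 25s, 125s..., capped at 300 seconds.
backoff_delay() {
    failures=$1
    delay=1
    i=0
    while [ "$i" -lt "$failures" ]; do
        delay=$(( delay * 5 ))
        i=$(( i + 1 ))
    done
    if [ "$delay" -gt 300 ]; then
        delay=300
    fi
    echo "$delay"
}

# After a failed login, a login handler would call: sleep "$(backoff_delay $count)"
backoff_delay 0   # prints 1
backoff_delay 3   # prints 125
```

The exponential growth barely inconveniences a legitimate user who mistypes once or twice, but it throttles an automated attacker to a handful of guesses per hour.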
Alternatively, a variation of this method puts a hard cap on the number of attempts allowed within a set period of time. Failing to log in too many times will get the account locked out – either temporarily, or in more extreme cases, until unlocked by a server administrator. This method effectively puts a stop to any brute force attacks, but it can be more annoying for valid users who aren’t very careful about entering their passwords.
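One widely used tool that implements this hard-cap approach on Linux is fail2ban. A minimal jail configuration might look like the following; the specific values are examples, not recommendations:

```ini
# /etc/fail2ban/jail.local -- example settings for the sshd jail.
# Ban an IP for one hour after 5 failed logins within a 10-minute window.
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```

Because fail2ban bans the offending IP at the firewall level rather than locking the account itself, a legitimate user connecting from a different address can still get in while the attacker is blocked.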
The second method is to introduce a Captcha to the login request. This forces the user to perform a feat that is trivial for a human, but difficult for a computer. Often, this involves some sort of image recognition, such as identifying all the pictures in a grid that contain a street light, or deciphering some text written in a blurry font. While computers are usually able to solve these requests eventually, it takes them much longer than a typical human and greatly slows down the attack. Captchas are also often used to protect public comment sections from spam posts and sign-up forms from fake account creation.
Brute force protection can be found in many firewalls, or in the operating systems themselves — but don’t forget about other accounts, such as WordPress, cPanel/WHM, etc. Make sure any exposed login has some form of brute force protection enabled.
Software Updates & Security Patches
Software and operating system updates and security patches are also important to maintaining a secure server. All of your other efforts can go entirely to waste if you are running an outdated version of an operating system that is vulnerable to known exploits.
Most software and operating system vendors dedicate significant resources to keeping their products patched against the most recently discovered exploits, so much so that many minor releases contain more security fixes than feature updates. Maintaining this level of vigilance on older versions of their products can be costly, so software and operating systems are frequently classified as End of Life (EOL) after a number of years. Among other things, this means that the product will no longer receive updates for exploits that may be discovered after EOL has been reached.
A common example of this is PHP, a scripting language widely used on the web. At the time of this posting, all PHP versions older than 7.2 are EOL. Despite this, PHP versions as old as 5.3 are still common out in the wild. There are significant differences between 5.3 and 7.2, often making an upgrade to a supported version impossible without significant reworking of the code.
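As a quick self-audit, you can compare your installed PHP version against the oldest supported release (7.2 as of this writing). The helper below is a sketch that relies on `sort -V` for version ordering:

```shell
#!/bin/sh
# Sketch: report whether a given PHP version is older than the oldest
# supported release (7.2 at the time of this posting).
OLDEST_SUPPORTED="7.2"

php_is_eol() {
    # sort -V orders version strings numerically; if our version sorts
    # first and is not the threshold itself, it is older than 7.2.
    lowest=$(printf '%s\n%s\n' "$1" "$OLDEST_SUPPORTED" | sort -V | head -n 1)
    [ "$lowest" = "$1" ] && [ "$1" != "$OLDEST_SUPPORTED" ]
}

# Check the locally installed interpreter, if one is present
if command -v php >/dev/null 2>&1; then
    ver=$(php -r 'echo PHP_VERSION;')
    if php_is_eol "$ver"; then
        echo "PHP $ver is past end of life; plan an upgrade"
    else
        echo "PHP $ver is still within support"
    fi
fi
```

The same version-comparison trick works for auditing any other versioned software on the server, provided you know its vendor's current EOL cutoff.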
Fortunately, for this specific example of PHP versions, CloudLinux has you covered on a cPanel server. CloudLinux offers hardened builds of old PHP versions, along with security updates, well past the EOL date. However, this issue can arise with any software, and most don’t have a solution as simple as CloudLinux.
It is not good practice to run outdated operating systems either. For example, CentOS 5 has been EOL for some time, yet it is not a terribly rare sight. If you happen to be running something like this, you should be planning your upgrade path as soon as possible. When the operating system you are running goes EOL, it’s common for even supported software on your server to stop receiving updates, since vendors won’t qualify new versions on EOL OS releases. This can have a cascading negative effect on the security of your server.
Code & Custom Applications
Unfortunately, even the most hardened server can still be vulnerable to attacks through insecure code or applications running on a website.
If you are running a customizable web application, such as WordPress, Joomla, or Magento, it is critically important to keep not just the core application up to date, but any plugins or themes as well. This also applies to the projects themselves – if you suspect that a theme or plugin is “dead” and no longer being updated, it is prudent to look for alternatives. New exploits are constantly being discovered, and an application or plugin is only as secure as its last update.
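For WordPress specifically, these updates can even be automated with WP-CLI. The crontab entry below is a sketch: the path /var/www/example.com is a placeholder for your site's document root, and it assumes the `wp` binary is installed and on the PATH:

```shell
# Example crontab entry: apply core, plugin, and theme updates nightly at 3:15.
# /var/www/example.com is a placeholder for your site's document root.
15 3 * * * cd /var/www/example.com && wp core update --quiet && wp plugin update --all --quiet && wp theme update --all --quiet
```

Because an update can occasionally break a site, many administrators prefer to run these commands manually after taking a backup rather than fully automating them.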
When dealing with custom code created for you by a developer, it is wise to maintain a continued relationship with your developer so that you can continue to receive updates. Otherwise, you may end up in a situation as described above, where you find that you can no longer update your PHP or other important software because the website is not compatible with the new version.
This attack vector can be the hardest to defend against, because your datacenter or hosting provider generally cannot support the custom software and code that is running on your server. Unless you are running entirely off-the-shelf software, make sure you have a plan to keep your code updated and patched.
As you can see, securing a server goes far beyond the initial setup. While that setup is important, it is equally vital to keep the server up to date in order to combat the ever-growing list of known hacks and exploits. The damage caused by a compromised system, both financial and reputational, can be massive. As the old adage goes, an ounce of prevention is worth a pound of cure.