In the last year we have seen multiple failures of the infrastructure of the internet. Significant outages at both Amazon Web Services (AWS) and Cloudflare have disrupted services across the web: AWS outages have affected over 1,000 companies and millions of users, while a recent Cloudflare outage impacted about a third of the world's 10,000 most popular websites, apps, and services. Nobody keeps a precise count of these shutdowns, but their effects have been seen and felt across many of the services users access every day.
I may secretly revel in the idea of the day the internet goes dark, and through my past writing I hope I have helped myself and others build home networks stocked with years of entertainment and backup services. Still, watching these failures unfold deserves some thoughts on the internet as a whole.
Content Delivery Networks (CDNs), while similar to internet providers, work in tandem with them to keep network traffic flowing freely between host servers and relay servers that cache images of your site, so it can be shared with others quickly and reliably. Like buoys in the ocean connected by rope, if one sinks, the others still hold up the netting veiled underneath, catching unwanted traffic while letting wanted traffic through to their hosted sites.
This is often used as a form of network stabilization, letting users all over the globe experience similar response times when accessing a site, and also as a form of DDoS (distributed denial of service) mitigation, by mirroring sites away from their home server. But what happens when that service fails? What happens when it is not just one buoy, but the whole net being held up by a single buoy? It naturally gets overwhelmed and sinks to the bottom of the ocean until recovered.
This is what we are experiencing more than ever: centralized services failing while decentralized options go unsupported. I cannot say the service providers are solely to blame; I must also blame the users. But I will follow up on that later in this article.
In the most recent Cloudflare outage, a single bad file was all it took to knock the company out, as Jon Brodkin reported in Ars Technica. The file fed Cloudflare's bot management system, which uses a machine learning model to protect against security threats, and when the software that reads the file choked, the trouble spread. Cloudflare's core CDN, security services, and several other products were affected, all of which many online services rely upon for protection and internet routing.
So when chief executive Matthew Prince learned about the outage, he suspected the company was being attacked by the Aisuru botnet, a network of compromised IoT (Internet of Things) devices, such as routers and security cameras, that has been involved in record-breaking DDoS attacks and recently almost took down Microsoft's Azure service. He was wrong: a small glitch had caused an important file to unexpectedly double in size and propagate across the network, leading the entire system to crash. That a tiny error can be so widely felt is a symptom of how the internet has evolved since its birth in the early days of computing, and the problem has only worsened since the 2000s. In those early days, any company that had its own website probably had its own servers, which limited the damage a service problem could do. IT staff kept everything alive in-house with a pocket of ad-hoc solutions and home-made toys looking for a use.
Earlier this year, on October 20th, 2025, Amazon Web Services also went down, at roughly 0300, and while mostly cleared up by 0700, issues persisted until around 1600. The problem stemmed from an error in Amazon's EC2 internal network, which affected DynamoDB, SQS, Amazon Connect, and other AWS services, the company said. "The root cause is an underlying internal subsystem responsible for monitoring the health of our network load balancers," the company told media in a statement at 1100, referring to the internal safety measures that distribute traffic across various servers to offset load. AWS said it limited new activity requests from customers as it worked to restore full functionality. But do not fret, dear reader: this is nothing new for AWS, which also experienced outages in 2021 and 2023 for similar reasons.
And do not think this is simply a Western-world issue. On September 26th, 2025, a significant fire broke out at the National Information Resources Service (NIRS) data center in Daejeon, South Korea, destroying 96 government systems and potentially causing the permanent loss of 858 terabytes of data. The fire was triggered during a risk reduction exercise involving the relocation of uninterruptible power supply (UPS) batteries; a thermal runaway event occurred, leading to the blaze. The fire shut down 647 online government services, affecting operations from postal to tax systems. As of early October, 163 services had been restored, about 25% of the total affected. The G-Drive system, used primarily by the Ministry of Personnel Management, had no backups due to its large capacity, leading to the loss of critical data. Recovery efforts are ongoing, with plans to relocate the destroyed systems to another data center in Daegu. The incident has raised concerns about data management practices within the South Korean government, highlighting the importance of reliable backup systems, and several individuals have been arrested on suspicion of criminal negligence related to the fire.
Everything from McDonald's apps to airline registers, health records, and, in South Korea's case, entire government operations are hosted and mirrored on these ROBUST AND FAIL-PROOF SERVICES. Today, Amazon, Google, and Microsoft alone operate millions of servers comprising 60% of the cloud computing market. What is surprising about this infrastructure is not that failure is happening, but that it is not happening more often.
Given the importance of Cloudflare, AWS, and Microsoft in the internet ecosystem, any outage of these systems is unacceptable. The three companies each dealt with different issues. Cloudflare initially thought it was under a massive cyberattack, but then traced the issue to a "bug" in its software to combat bots. AWS and Microsoft each had different issues configuring their services with the Domain Name System, or DNS, the notoriously finicky "phonebook" of the internet that connects website URLs with their technical, numerical addresses. Those issues came a year after a particularly unusual case, in which companies around the world that used both Microsoft-based computers and the popular cybersecurity service CrowdStrike suddenly saw their systems crash and display the Blue Screen of Death (BSOD). The culprit was a glitch in what should have been a routine CrowdStrike automatic software update, leading to flight delays and medical and police networks going down for hours. Ultimately, each was an instance of a minor software glitch that rippled across those companies' enormous systems, crashing website after website.
The internal failings of these companies are not the only thing to blame; we can also blame the users, if in a far more minor way. Users allow these issues to persist by relying on the services hosted on these platforms without creating their own backups or self-hosted alternatives. While that cannot fix the issue as a whole, and we cannot completely sever our reliance on these systems, we can, in some small part, make ourselves less affected as individual users when the internet fails again.
While I have written about this subject already (found here), I would again like to give some at-home options for data hoarding, like yt-dlp for video downloads and wget for website downloads. Kiwix and OpenStreetMap have home-hosting options that allow a complete download of Wikipedia (and its wiki variants) and a Google Maps alternative. You can also host your own website. I have a server prepared for if / when Neocities goes down, along with an onion service through OnionShare.
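As a quick sketch of what those tools look like in practice (the URLs below are placeholders; substitute your own targets, and note download times depend entirely on your connection):

```shell
# Download a single video with yt-dlp (it picks the best available format by default):
yt-dlp "https://www.youtube.com/watch?v=VIDEO_ID"

# Mirror an entire website for offline browsing with wget:
# --mirror turns on recursion and timestamping, --convert-links rewrites links
# to work locally, --page-requisites grabs images/CSS, --no-parent stays on-site.
wget --mirror --convert-links --page-requisites --no-parent "https://example.com/"
```

Both tools are well-behaved enough to resume interrupted downloads, which matters when you are archiving large collections.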
Self-hosting is actually very simple. You can do so using a free program called MAMP on Windows and Mac, or a LAMP stack on Linux. Both create a local server on your computer that you can use to host your own website. Be warned, though: many internet service providers (ISPs) expressly forbid personal hosting unless you have a business plan, which often costs significantly more than a standard plan. This should not be an issue if your site only generates a few hits per month, but any significant traffic will draw attention to your hosting. If your ISP prohibits home hosting, either upgrade your plan to one that allows it or switch to a different ISP before you continue. Failing to heed your ISP's policy on home hosting can result in anything from having your internet turned off to paying fines.
Ensure that your equipment can handle hosting. To host a website, you must have a computer on and connected to the internet 24 hours a day, seven days a week. This is easier to accomplish with a secondary, older computer than with your primary machine. Restarting the computer for updates will occasionally be necessary, and during those updates your website will be inaccessible; the smaller and more streamlined the site, the less this matters (Neocities-style sites are a great example of this). Update your operating system and packages, including security features and drivers, whether you run Windows, macOS, or Linux, before continuing. Also make sure you have a decent antivirus program and a firewall to protect your computer, and that port 80 is open through that firewall.
Move your website's source code onto your computer. This includes your HTML, CSS, and PHP files, along with any media files such as pictures and videos. If your website's source code isn't already on your computer, copy it from your web service's settings into a text document and save it as a PHP or HTML file. If the source code is already stored locally, make sure you know where to find it. If you have not yet made your website, you will need to do so before continuing. The main webpage for your website should be saved as "index.html" or "index.php."
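If you are starting from scratch, a placeholder homepage is enough to verify the rest of the setup works. A minimal sketch (the `~/mysite` folder and the page contents are just an example):

```shell
# Create a working folder for the site and a bare-bones index.html inside it.
mkdir -p ~/mysite
cat > ~/mysite/index.html <<'EOF'
<!DOCTYPE html>
<html>
  <head><title>My Site</title></head>
  <body><h1>Hello from my own server!</h1></body>
</html>
EOF
```

Once the server is running, you can swap this out for your real site files.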
I will mainly be focusing on GNU/Linux machines, specifically Debian and Ubuntu distributions, in this section; just swap in the package manager your distro uses and the instructions should otherwise be the same. For Windows and Mac it is far simpler (drag and drop). You can install the LAMP stack on Linux using the terminal. Before you begin, make sure the software repository is up to date: sudo apt update.
Install Apache. Apache is the HTTP server software you will be using: sudo apt install apache2. Then check that Apache installed correctly; the server should start running automatically. Enter the following command and press Enter to check the server status: sudo service apache2 status. It should return "Active: active (running)" next to "Active."
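Collected in one place, the install-and-verify steps look like this on a Debian/Ubuntu system (a sketch; package names assume the stock repositories, and each command needs to succeed before you run the next):

```shell
sudo apt update               # refresh the package index first
sudo apt install -y apache2   # install the Apache HTTP server
sudo service apache2 status   # should report "Active: active (running)"
```

If the status line says "inactive" or "failed", start it manually with sudo service apache2 start before going further.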
Press Ctrl + C to exit the service status screen. Next, check that the UFW firewall has the Apache profiles: enter sudo ufw app list and press Enter, and the output should include "Apache Full." Then check that the Apache Full profile allows traffic on ports 80 and 443: enter sudo ufw app info "Apache Full" and press Enter, and it should return "80,443/tcp" below "Ports." Finally, go to localhost in a web browser: open a browser, type "localhost" in the address bar, and the Apache default page should appear with the message "It works."
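The firewall checks, plus actually opening the ports if they are closed, can be sketched as follows (assuming UFW is installed and enabled, which is the Ubuntu default; curl is an optional extra for checking from the terminal instead of a browser):

```shell
sudo ufw app list                 # expect "Apache Full" among the profiles
sudo ufw app info "Apache Full"   # expect "Ports: 80,443/tcp"
sudo ufw allow "Apache Full"      # open ports 80 and 443 if they are not open yet
curl -s http://localhost | head   # the default page should mention "It works"
```

If UFW reports no Apache profiles at all, Apache likely did not install cleanly; redo the previous step first.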
Install MySQL. MySQL is a relational database system. Enter the following command and press Enter to install MySQL: sudo apt install mysql-server
Install PHP. PHP is a server-side scripting language that integrates well with MySQL. PHP is usually the final layer of LAMP (Linux, Apache, MySQL, and PHP). Enter the following command and press Enter to install PHP: sudo apt install php libapache2-mod-php php-mysql.
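The database and scripting layers from the two steps above, gathered into one runnable sketch (again assuming a Debian/Ubuntu system with stock repositories):

```shell
sudo apt install -y mysql-server                       # MySQL, the relational database layer
sudo apt install -y php libapache2-mod-php php-mysql   # PHP plus the Apache and MySQL glue modules
php -v                                                 # confirm PHP installed and prints its version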
Modify the Apache server files. By default, Apache looks for an "index.html" index file first. Modify the "dir.conf" file so that Apache looks for "index.php" first instead. Use the following steps to do so: enter sudo nano /etc/apache2/mods-enabled/dir.conf and press Enter. Delete "index.php" from the line that begins with "DirectoryIndex," then re-enter "index.php" first, directly after "DirectoryIndex" and before "index.html." Press Ctrl + X, then y, then Enter to save.
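For reference, the edit amounts to reordering one line; the exact list of index names in your dir.conf may differ slightly from this sketch, but the goal is the same: move index.php to the front, then reload Apache so the change takes effect.

```shell
sudo nano /etc/apache2/mods-enabled/dir.conf
# Change the DirectoryIndex line, e.g. from:
#   DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
# to:
#   DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
sudo systemctl restart apache2    # reload Apache so the new order takes effect
```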
Install additional PHP modules. If you want to install additional PHP modules, type apt-cache search php- and press Enter to see a list of all available modules along with a short description of each. Use the arrow keys to navigate the list. Type the name of a module and press Enter to see a full description. To install a module, type sudo apt install [module name] and press Enter.
Copy your website's source code files. To do so, open the Files app and navigate to where the source code files for your website are saved, then right-click the documents and folders and click Copy. Next, navigate to Apache's document root, which on Debian and Ubuntu is the /var/www/html folder (you will need administrator privileges to write there).
Paste your web page source code files: right-click and click Paste. This puts your website files into the Apache server folder, which is where you keep your website's pages and files. Your website should now be live; people will be able to access it by entering your public IP address into their browser's address bar. You can also add a domain name to make your website easier to access.
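If you prefer the terminal to the Files app, the copy step is a one-liner (assuming your site lives in a hypothetical ~/mysite folder; root privileges are needed because /var/www/html is owned by root):

```shell
sudo cp -r ~/mysite/. /var/www/html/   # copy the site into Apache's document root
ls /var/www/html                       # index.html or index.php should now be listed
```

Remember to remove or overwrite Apache's default index.html in that folder, or visitors may still see the "It works" placeholder page.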
If you want people to access your local website using a domain name instead of your public IP address, you will first need to register a domain name. You can do so using services like GoDaddy, Namecheap, HostGator, or Bluehost. Your domain registrar should have an option that lets you manage your domain names. Open the DNS section for your domain name; this will probably be a button that says DNS or DNS Settings below the domain name. Click the option to modify the DNS settings for your domain name, then the option to add a DNS record, usually a button that says Add or Add Record. Select an "A" type record, which is the type of record that points a domain name to an IP address.
If you are using an IPv6 address, select an "AAAA" type record instead. Type @ as the host; the "@" is shorthand for the bare domain name, so if your domain is "www.mywebsite.me", the "@" symbol stands for the "mywebsite.me" portion. Add your public IP address in the field that indicates what the host points to, entering just the numeric portion with the dots (.); you do not need to add "http://" or "https://". Then add a second record the same way: click the button to add a new record, select an "A" type record (or "AAAA" for IPv6), and type www as the host, pointing to the same public IP. This creates a record for the full domain name, e.g. the entire "www.mywebsite.me". Save the records by clicking the button that says Save or Add Records; both records will now point your domain name to your public IP address. Internet users can then access your website by typing the domain name into their web browser. Allow up to 24 hours for the domain records to propagate and take effect.
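Laid out the way most registrars present them, the two records look roughly like this (the domain and IP are hypothetical placeholders), and once they have propagated you can verify them from any machine with dig:

```shell
# Hypothetical records, assuming domain mywebsite.me and public IP 203.0.113.10:
#   Type  Host  Points to      TTL
#   A     @     203.0.113.10   1 hour
#   A     www   203.0.113.10   1 hour
# After propagation (up to 24 hours), both queries should print 203.0.113.10:
dig +short A mywebsite.me
dig +short A www.mywebsite.me
```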
The internet as we know it is a jumbled mess of computers, quickly becoming less a well-blanketed series of Lego-block buildings connected by small bridges and roads and more a trinity of massive Lego towers teetering on the edge of collapse should one block be misplaced. I hope my writing this today helps some with the eventual and inevitable collapse of the internet as we know it, and prepares others to keep their collections of information free and open to others when that reality happens. Be safe, be secure, be prepared.