I’ve been self-hosting Nextcloud for some time on Linode. At some point in the not-too-distant future, I plan on hosting it locally on a server in my home, as I’d like to save the money I spend on hosting. Nextcloud suits my needs perfectly, and I’d like to continue using the service.
However, I am not so knowledgeable when it comes to security, and I’m not sure whether I’ve done enough to secure my instance against potential attacks, or what additional things I should consider when moving the hosting from a VPS to my own server. So that’s where I’m hoping for some input from this community. Wherever it shines through that I have no idea what I’m talking about, please let me know. I have no reason to believe that I am being specifically targeted, but I do store sensitive things there that could potentially compromise my security elsewhere.
Here is the basic gist of my setup:
- My Linode account has a strong password (>20 characters, randomly generated) and I have 2FA enabled. It required security questions to set up 2FA, but the answers are all random answers that have no relation to the questions themselves.
- I’ve disabled ssh login for root. Instead, I have a new user with a custom name in the sudo group. This is also protected by a different, strong password. I imagine this makes automated brute-force attacks a lot more difficult.
- I have set up fail2ban for sshd. Default settings.
- I update the system at least bi-weekly.
- Nextcloud is installed with the AIO Docker container. It gets a security rating of A from the Nextcloud scan, failing only on not being on the latest patch level, as these are released more slowly for the AIO container. However, updates for the container are applied automatically, and maintaining the container is a breeze (except for a couple of problems I had early on).
- I have server-side encryption enabled. Not client-side, as my impression is that the module isn’t working properly.
- I have daily backups with borg. These are encrypted.
- Images of the server are also backed up daily on Linode.
- It is served by an Apache web server exposed to outside traffic over HTTPS, with DNS records handled by Cloudflare.
- I would’ve wanted to use a reverse proxy, but I did not figure out how to use one together with the Apache server. I have previously set up Nginx Reverse Proxy on a test server, but there I used a regular Docker image for Nextcloud, not the AIO.
- I don’t use the server to host anything else.
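For what it’s worth, the SSH lockdown described above boils down to a couple of `sshd_config` lines. A minimal sketch (the user name is a placeholder, not from my actual setup):

```
# /etc/ssh/sshd_config (fragment)
PermitRootLogin no          # root can no longer log in over SSH
AllowUsers mycustomuser     # optional allow-list; name is hypothetical
```

Reload sshd after editing (the service name may be `ssh` or `sshd` depending on the distro).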
I have Nextcloud hosted internally in a podman container environment. To answer some of your more security related questions, here’s how I have my environment set up:
- Cloudflare free tier with my own domain to proxy outside connections to the public domain name and hide my external IP.
- A DMZ proxy server with a local Traefik container, with only the ports required to talk to the internal Nextcloud server allowed, and inbound 443 allowed only from the internet (Cloudflare).
- An Authelia container tied to the Nextcloud container using the “Two-Factor TOTP” app add-on. Authelia is configured to point to a free Duo account for MFA. The TOTP add-on also allows other methods if you want to bypass Authelia and use simply Google Authenticator or another app. I’ll be honest, this setup was a pain, but it works beautifully once it’s finally working.
Note: Using Authelia removes Nextcloud from the authentication process. If you log in through Authelia and it’s set up correctly, it will pass the user information to Nextcloud and present their account. There is a way to have “quadruple” authentication if you really want it, where you log in through Authelia, then Authelia MFA, then Nextcloud, then Nextcloud MFA, but who would want that? Lol.
Another Note: If Authelia goes down for whatever reason, you can still log in through Nextcloud directly.
- I have all of my containers set to automatically pull updates with the latest tag. This bites me sometimes if major changes happen, but it’s typically due to Traefik or MariaDB changes and not Nextcloud or Authelia.
- I have my host operating system set to auto-update and reboot once a week in the early morning.
- My data is shared through an NFS connection from my NAS that only allows specific IPs to connect. I’d like to say I’m using least-privilege permissions on the share, but it’s a wide-open share, as NFS permissions are not my strong suit.
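On the NFS point: tightening the share is mostly a matter of the export options. A sketch of a less wide-open `/etc/exports` entry on the NAS side (the path and client IP are made up):

```
# /etc/exports on the NAS — hypothetical path and client IP
/srv/nextcloud-data  192.168.10.20(rw,sync,no_subtree_check,root_squash)
```

`root_squash` maps root on the client to an unprivileged user on the NAS; apply changes with `exportfs -ra`.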
Hope the above helps!
Thanks for your answers!
- Alright, I guess I should also use the Cloudflare proxy. I could not find the reason I had not enabled it previously.
- I’m a bit confused as to what a DMZ proxy server is compared to a reverse proxy. Is this a separate server you’ve set up specifically to handle inbound traffic where you’ve set up Traefik, or is this a container on your main server where you also host Nextcloud?
- As I understand it, Authelia is an SSO solution that seems very beneficial when running several services from the same server. Right now, I only run Nextcloud on the VPS - is there any added security benefit of running it there also, or is this mostly for convenience when hosting multiple services?
Setting up auto update and reboot once a week seems smart. Do you set this up with cron?
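My guess at a cron-based version (untested on my part; Debian/Ubuntu-style commands, and the schedule is just an example):

```
# /etc/crontab fragment: update and reboot every Monday at 04:30
30 4 * * 1  root  apt-get update && apt-get -y dist-upgrade && systemctl reboot
```

The `unattended-upgrades` package would be another common way to handle the update half of this.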
This all sounds very reasonable. One question remains: what is the use of a dedicated proxy if Cloudflare is connected? I do use nginx proxy manager and host my dockerized services on subdomains via HTTPS. I suppose if the reverse proxy gets attacked, the main server stays online and hidden. Does Cloudflare not hide your IP and prevent (some) DDoS attacks?
This is one of those areas that often has me confused… For now, the DNS entry with Cloudflare is set to ‘DNS Only’. That is perhaps a mistake on my part, and I should enable the proxy? Right now I can’t remember my reasoning for setting it up like this.
Originally, I wanted to set up Nginx Reverse Proxy to serve services other than Nextcloud on the same server on different ports. That was the approach I found easily manageable at the time; as the AIO container is set up now, accessing my server’s IP address automatically routes to Nextcloud, even if I had another service running. Could I maybe configure Apache to do the same job I wanted Nginx for? At the time, I opted to get another VPS dedicated to other, smaller services instead, as a temporary solution that over time turned permanent. However, this will be important to me when/if I start hosting locally, as I would want my server to host other services as well.
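From what I’ve read, Apache’s mod_proxy can do this kind of subdomain routing; something like the following vhost, if I understand it correctly (the domain, backend port, and cert paths are all placeholders):

```
# enable the modules first: a2enmod proxy proxy_http
<VirtualHost *:443>
    ServerName service.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.pem      # placeholder
    SSLCertificateKeyFile /etc/ssl/private/example.key    # placeholder
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```

One vhost per subdomain, each pointing at a different local port, would cover multiple services on one server.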
Can relate. I’m pretty much on the opposite end of this situation. I have a home server hosting a fair amount of apps, and it’s pretty integrated and polished, but there are still a lot of things I want to do, some crucial before I even think of opening ports in my router.
The issue for me is that my internet upload speed is trash, although my provider is otherwise rather good.
So I’m thinking of moving in the opposite direction and hosting my stuff on a VPS, so that I can use it and maybe share stuff with friends without being kneecapped by my upload.
The obvious solution would be a fiber connection, which is not available at my location yet (edge of a city in Germany, hard to believe, I know).
But to answer your question: you could probably get Apache to do something like that, but I’m absolutely the wrong person to tell you how, as I don’t have any experience with Apache. I can help you configure npm (nginx proxy manager) and DNS records, but that’s about it in this department.
In any case, have a good one and hit me up if you want to discuss this further.
Ah, I see. I hope a fiber connection becomes available for you in the not-too-distant future, then. I would love to do this at home, but I’m going to need some serious study sessions to better understand home networking (and take appropriate action) before I start exposing services at home to the internet. I do wonder if I jumped into this too fast, but I was just so incredibly fed up with relying on big tech monopolies for essential digital services…
I guess my last question would be if you had an opinion on whether enabling proxy in Cloudflare is a no-brainer or not?
Makes total sense that one would familiarize themselves with networking/self-hosting before actually going live and putting their private data at stake. I respect that.
Also, I would probably use the Cloudflare proxy, but I don’t have experience with it yet, so I’d give it a quick search (“cloudflare proxy vs dns only” or something) and see if any reason why you didn’t like it pops up.
Also, I suggest you keep a log if you don’t have one already. Every time I do maintenance (essentially, every time I log into ssh on my server) I make an entry in my log. That way you will know why you did what you did when you did it.
A log is a very good tip - I’ll definitely start with that.
Glad to help. Feel free to update or hit me up if you need help. Also feel free to check !ubuntuserver@discuss.tchncs.de
Secure SSH. You should disable all password login capability and tighten the cipher, KEX and MAC requirements. This forces the use of modern SSH clients, something a lot of bots don’t use, so they won’t even get to the point of key exchange.
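As a sketch, the relevant `sshd_config` directives might look like this (one reasonable modern algorithm selection, not the only one; tune to the clients you actually use):

```
# /etc/ssh/sshd_config (fragment)
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
```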
On your client, you can define an SSH config with a list of friendly host names that include direct IP addresses, the key to use to initiate login, and whatever other properties you need. This way, you can just type `ssh <friendly-name>` and you don’t need to specify the key or IP address every time.
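A minimal client-side example (the alias, IP and user name are all made up):

```
# ~/.ssh/config
Host myvps
    HostName 203.0.113.10        # example address
    User mycustomuser            # hypothetical login user
    IdentityFile ~/.ssh/id_ed25519
    IdentitiesOnly yes
```

After that, `ssh myvps` is all you need to type.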
Finally, configure Fail2Ban to ban/block on the first failed SSH attempt. You won’t be failing to log in if you’ve configured a config definition file and are using keys.
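For the first-attempt ban, a jail override along these lines should do it (the times are just examples; older Fail2Ban versions want plain seconds instead of the m/h suffixes):

```
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 1
findtime = 10m
bantime  = 24h
```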
Thanks for the tip. I will be looking into setting up SSH keys fairly soon, and look more into strengthening ciphers and the like.
From a practical point of view, what is the likelihood of a brute-force login attempt succeeding? There are plenty of login attempts, but most of them are for root, and as I’ve disabled root login, those will fail no matter what. Other attempts are typically for generic names such as ‘admin’, ‘user’ and ‘test’ that have no associated user on the server, as well as some weird choices that I can only imagine come from some database breach.