the servers inside Iranian data centers still have access to the outside world.
Knowing that, the simplest and easiest solution that would avoid detection is to SSH tunnel into that datacenter, ProxyJump out of it into Amazon AWS over SSH, and use that SSH chain as a SOCKS proxy for browsers. Make sure the browser routes its DNS through the SOCKS proxy (SSH) as well. Many sites will make your friends solve captchas if they show up from Amazon, so if you have a friend outside of Iran in the same AWS region who is willing to open SSH on their home router, you could add that private home router as the last hop in the SSH chain. Do not go directly from the datacenter to the home; it is normal and expected for datacenters to SSH to Amazon.
SSH Client -> Iranian Datacenter / Server -> AWS VM -> Home router in same region as AWS -> Internet.
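A minimal sketch of that chain using OpenSSH's ProxyJump (`-J`) plus a dynamic SOCKS forward (`-D`); every hostname below is a hypothetical placeholder, not anything from the thread.

```shell
# Hop through the datacenter server and the AWS VM, terminating at the
# home router, which opens a local SOCKS5 listener on port 1080.
# All hostnames/usernames are illustrative placeholders.
ssh -J user@dc-server.example.ir,ubuntu@aws-vm.example.com \
    -D 1080 -N pi@home-router.example.net
```

Then point the browser at `localhost:1080` as a SOCKS5 proxy and enable "Proxy DNS when using SOCKS v5" (the Firefox wording) so lookups also go through the tunnel.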
If many people are using the same server and VM, make sure that MaxStartups and MaxSessions have been increased in sshd_config, along with any PAM limits on open files, on every node in the path. Clients should enable ControlMaster / ControlPath in their ~/.ssh/config. To harden each hop, configure PermitOpen to only allow forwarding to the next SSH hop; the final hop should also permit *:443.
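As a sketch, the server- and client-side settings just described might look like this; the exact numbers and hostnames are illustrative assumptions, not tuned values.

```shell
# /etc/ssh/sshd_config on each hop (illustrative values):
#   MaxStartups 100:30:200      # allow more concurrent pre-auth connections
#   MaxSessions 50              # more multiplexed sessions per connection
#   PermitOpen aws-vm.example.com:22   # intermediate hops: next hop only
#   # Final hop instead: PermitOpen *:443   # allow HTTPS for browsing
#
# PAM open-files limit, /etc/security/limits.conf on each hop:
#   * soft nofile 65535
#   * hard nofile 65535
#
# ~/.ssh/config on each client, so sessions share one TCP connection:
#   Host *
#     ControlMaster auto
#     ControlPath ~/.ssh/cm-%r@%h-%p
#     ControlPersist 10m
```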
Examples of all these steps can be found on SuperUser / StackExchange / ServerFault and are all public knowledge. All above-board, no hacking involved.
[Edit] Removing the Squid MITM SSL-Bump proxy idea. That would make follow-on questions harder to explain.
[Edit from Fatnino's input] If your Amazon VPCs are too outbound-restricted, then pick another VPS provider that is commonly used for hosting 3rd-party tools for datacenters, preferably one already used by that datacenter.
[Edit] Hypothetically speaking, every possible hop could have a misconfigured-but-realistic-looking syslog so that SSH connections are not logged on the server, and in theory a silent, log-less rule in the edge firewall so SSH connections are not logged there either. Sometimes syslog disks also fill up by mistake. SSH can also be run in ephemeral diskless containers such as Docker, Podman, and LXC.
The first hop, "SSH Client -> Iranian Datacenter" seems extremely vulnerable to surveillance, and would create an incriminating list of people involved. With this discussion in the open, you can bet Iranian authorities are going to specifically look for anything discussed here, so the only viable solutions should have no measurable deviation from normal behavior that would allow them to detect which datacenter was doing this.
To make this happen, you should have a minimum number of connections from inside Iran into the datacenter.
For a small group of trusted people with always-on connections, you could just create a linear chain of SSH forwards connecting everyone. For widespread connectivity, a Tor bridge through the path you describe would be workable.
Could individuals run a public internet gateway that doesn't keep logs with something similar to mosh but running the equivalent of an SSH tunnel? Think a SOCKS proxy on one end, but running as a public web application/forward proxy on the other end.
The ISP would still be able to see any traffic to the gateway, but if you had enough links to the gateway outside of government-monitored network infrastructure (hard lines you take down or obfuscate when the patrol does its rounds, wireless point-to-point connections), the risk would fall on the gateway operator.
(Please do not take my advice without evaluation. This is speculation from a SWE, not advice for life or death situations.)
Edit: I suppose if your ingress traffic is over links not monitored by government anyway, it doesn't matter if you use SSH or a web application forwarding traffic to a SOCKS proxy behind the scenes. Not sure if the idea presented above would be useful in other scenarios.
Edit2: I guess usability is a benefit, even without security benefits. "Plug in this cable and type this URL into your browser" is easier than "open a terminal and establish an SSH connection."
Something that looks similar to mosh, being UDP and encrypted, but that allows proxied traffic, would be the open-source VPN Tinc [1]. The nicest thing about Tinc is that it does user-space dynamic mesh routing without requiring packet forwarding to be enabled; I would call it a middle ground to onion routing if set up right. It also has configurable compression. The reason I did not suggest it is that it is not simple to set up, and getting OpSec right the first time out of the gate is hard unless the people involved are already very experienced with it. That is why I suggested SSH: it is relatively simple and well known, more people have experience with it, and it will blend in with all the legit SSH traffic. SSH egress from a datacenter is normal, expected, and likely already permitted to AWS without making logged firewall changes.
Agreed on the utility of SSH. I work on a product that offers SSH certificate authorities as a service, among other things, and have read some of the RFCs.
I mainly mentioned the web forward proxy as a response to the "SSH traffic to Iranian datacenters from residential connections is suspicious" comment, but SSH is a great basis to build on. I doubt the SSH egress from the datacenters would draw much attention, but again, I wouldn't use my advice in a life-threatening situation, especially as I have never seen this type of monitoring system in action.
You are not dealing with a state that will say "teehee, we have no proof of the data that's going through, oh well!". Anyone found operating one of these gateways will just end up being beaten with a wrench, and the traffic logged further on. Considering this, it's overall safer not to go through a central gateway, and to have as many connections going through as possible. Hell, even PCs stored in weird places, in government offices, just to make the signal-to-noise ratio even worse.
A gateway is a single, big ass signal that says "come murder me".
I considered that, but the small group of trusted people is a bit of a double-edged sword. If one person is taken and they do not have good OpSec they could expose that entire group of trusted friends and that would be a more valuable target to authorities. So in that case I would stick with having an accidentally exposed SSH in that datacenter.
You'll fill the airwaves very quickly with LoRa devices. They're quite terrible at anything more than very simple text data. Time on air is high, bandwidth is very low, and signal clashes in crowded areas are a big issue.
Using Tor sounds good in theory, but it's far too easy to identify all of the Tor connections and trace the residences behind them.
I would suggest using SSH tunnels on whatever port a given server normally communicates on. If this is 443, connections should just look like normal web browsing on an ISP level. If the server is used to transmit other data over TLS using that port should be fine too.
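A minimal sketch of running SSH on 443 as suggested; the hostname is a placeholder, and this assumes nothing else on the server is already bound to that port.

```shell
# Server side: have sshd listen on 443 in addition to 22.
# In /etc/ssh/sshd_config:
#   Port 22
#   Port 443
# Then reload:  systemctl reload sshd

# Client side: connect over 443 and open a local SOCKS proxy.
ssh -p 443 -D 1080 -N user@server.example.com
```

Note this only moves the port; the stream itself is still the SSH protocol, not TLS, which matters against deep packet inspection.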
All of that said, the challenging thing is not to secure the traffic, but to make it look normal. A persistent connection is not normal from a residential IP in most cases. This means you should take your session down once you've sent your messages for the day.
> If this is 443, connections should just look like normal web browsing on an ISP level
Except that it doesn't look like TLS. Depending on what the traffic-inspection capabilities are, I imagine someone speaking SSH on port 443 is pretty incriminating :/
If the datacenter had been using SSH on port 22 to AWS in the past, I would just stick with that. The fewer changes, the less obvious it is that something is out of place. It sounds like they can still egress the datacenters for now.
The connection to the residence would be the last hop in the SSH proxy forward chain, meaning that all Iran will see is Datacenter -> AWS. Then it is AWS -> residence. Iran would not have visibility into AWS egress traffic; rather, Five Eyes [1] would see everything in and out of AWS.
Datacenter -> Non-Iranian-AWS Region -> Non-Iranian Residence -> Internet.
The flow from the DC to AWS is all they would see.
Ah I was unaware. In that case they should use whatever VPS they normally use that is outside of Iran. Hopefully some of them are outside of the country.
I worked at a place with very restrictive internet policies. My team had access to one aws instance that could get out to the open internet.
So my connections looked like this: my laptop at work in California, tunnel to AWS in Virginia, tunnel back to a server at my house in California, connect to the actual desired site, likely hosted on AWS in Virginia yet again.
[Edit] Too late for me to edit my original post. It was pointed out to me that Iranians are not permitted to use AWS. In that case, replace "AWS" with whatever VPS/server providers Iranians are permitted to use that are outside of the country.
I believe you are asking about theoretical machine learning used to identify encrypted traffic patterns.
SSH can be used for file transfers via SFTP, so long transfers of data are not uncommon. In this edge case, however, one could set up a simple rate-limited rsync over SSH and then rate-limit SSH slightly higher, so that the browsing shares the bandwidth, creating a relatively smooth data stream. Data streams are not expected to be perfectly constant across the internet, so a little fluctuation would be fine.
If multiplexing is being used via ControlMaster / ControlPath on the client, then the people browsing will ride the extra SSH channels without having to re-authenticate after their first authentication. The first SSH channel would be used for the SFTP transfer.
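A sketch of that cover stream, assuming the interface-level SSH limit sits just above 600 KB/s; the source directory, host, and the 512 KiB/s figure are illustrative, chosen to leave headroom for the multiplexed browsing channels.

```shell
# Hypothetical cover traffic: a continuous, rate-limited rsync over SSH.
# --bwlimit is in KiB/s, so 512 keeps this transfer just under the
# overall SSH ceiling, and the browsing fills the remaining bandwidth.
while true; do
  rsync -a --bwlimit=512 -e ssh /var/backups/ user@aws-vm.example.com:backups/
  sleep 60    # brief pauses are fine; streams need not be perfectly constant
done
```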
Then rate limit SSH just above 600KB/s using `tc`. On a modern kernel this is one line.
tc qdisc add dev eth0 handle 101: root cake diffserv3 bandwidth 6mbit internet nat egress ack-filter triple-isolate ethernet memlimit 32M
On older kernels without cake, one could achieve the same with HTB bucket rules in `tc`.
All of this said, I do not believe this step is required unless a datacenter's SSH traffic patterns were already being traffic profiled and I would theorize that for this to be the case Iran's intelligence agency would have been watching them for other reasons.