I like the ETH result and it is an impressive feat (I know some people involved in that work). That said, the article is a lot of rubbish. First there is the weird focus on lasers: all (relevant) optical communication uses lasers, including fibre comms. Then they make it sound like this is meant to replace fibre, which is again rubbish.
The amount of data going through fibres is absolutely staggering; replacing this with inter-satellite links is just not going to happen. First, you still need fibre to connect your ground stations (and you need quite a bit of redundancy due to weather), and there are still a lot of unsolved problems (tracking and pointing, for example). However, there are many interesting applications of optical satellite links, and quite a few players are investing in it; the big one is actually data connections for scientific space missions.
Each submarine cable is 12+ strands, each DWDM-multiplexed into 20-40 channels at 100-400 Gbps.
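As a rough sketch of what that multiplies out to per cable (the strand count, channel counts, and per-channel rates are just the figures above, not specs of any particular system):

    # Back-of-envelope capacity of one submarine cable, using the figures above.
    strands = 12                     # fibre strands per cable (the comment says 12+)
    channels = (20, 40)              # DWDM channels per strand
    gbps_per_channel = (100, 400)    # per-channel line rate

    low_tbps = strands * channels[0] * gbps_per_channel[0] / 1000
    high_tbps = strands * channels[1] * gbps_per_channel[1] / 1000
    print(low_tbps, high_tbps)       # 24.0 to 192.0 Tbps for a single cable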
The entire _planned_ Starlink constellation has less aggregate switching capacity than a pair of current-generation core routers. The market for terrestrial Ethernet _routed_ 400 Gbps ports is 10k+ annually, and 800 Gbps is just getting warmed up.
Satellite is not going to make even the slightest dent in terrestrial networking. Starlink isn't even viable without most of the traffic doing a two-leg ground-satellite-ground trip to SP-network-connected ground stations. 100% of that traffic is landing on the terrestrial internet.
That Starlink estimate must be low: they currently have 1.5 million customers, and a significant percentage of total bandwidth is over ocean and thus mostly wasted.
If you assume a pessimistic 100x oversubscription and 50% waste over oceans and unserved countries * 50 Mbps (upload + download) * 1.5 million customers, that's well over 1.5 Tbps of current capacity for customers, which then doubles when you include their base stations. And that's today, with the current 3,887 satellites, which are mostly v1.0 and v1.5 and have dramatically less individual bandwidth than v2.0.
PS: Also, your port estimate overstates total bandwidth, as each point-to-point link requires a port on each side, and packets generally make multiple hops through the internet.
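The arithmetic above, spelled out (the oversubscription ratio, ocean/unserved waste, and per-customer rate are the comment's assumptions, not SpaceX figures):

    # Back-of-envelope Starlink capacity from the assumptions above.
    customers = 1_500_000
    per_customer_mbps = 50        # upload + download, assumed
    oversubscription = 100        # pessimistic, assumed
    ocean_waste = 0.5             # fraction of capacity over ocean / unserved areas

    concurrent_demand_tbps = customers * per_customer_mbps / 1e6 / oversubscription
    constellation_tbps = concurrent_demand_tbps / (1 - ocean_waste)
    with_ground_legs_tbps = constellation_tbps * 2    # traffic also lands at ground stations

    print(concurrent_demand_tbps, constellation_tbps, with_ground_legs_tbps)
    # 0.75 1.5 3.0  ->  "well over 1.5 Tbps" for customers, ~3 Tbps counting the ground legs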
A current-generation 12-slot chassis with 51.2 Tbps per slot is about 614 Tbps per chassis, so a pair is roughly 1,229 Tbps, which is roughly your estimate. Companies like Juniper and Cisco sell tens of thousands of the previous generation per year _today_. Starlink is not even a drop in the bucket in terms of forwarding.
Moreover, with a very tiny exception over Antarctica or open ocean, every single packet forwarded by Starlink is hitting a terrestrial link after a single ground-space-ground hop.
Nobody is going to be making backbones out of Starlink or anything that looks like Starlink. Only the people who know nothing about modern networking performance think otherwise.
If you followed the thread, my estimate was ~3,000 Tbps, and welterde gave "1.7 - 2.3 Pbit/s" = 1,700 to 2,300 Tbps. 1,229 Tbps is well below both of our estimates. Last I checked, global IP traffic is still well under 2,000 Tbps; looking at the rate traffic is increasing, Starlink really can act as a major internet backbone.
Also, Starlink customer to Starlink customer is very compelling for many applications. Undersea cable routes can create much higher latency between two areas than their physical separation would suggest; Australia to Madagascar, for example. Similarly, there are still quite a few point-to-point microwave tower links etc. that could now be cheaply replaced.
You are not comparing the same thing here though. If you applied the same theoretical-maximum method to all networking hardware (or even just core-router-class hardware) sold today, you would arrive at numbers far in excess of a couple of Pbit/s.
For it to be really viable as a backbone, it would need to be able to do several hundred Gbit/s from one location, and that is something Starlink cannot do (nor is it built to do).
A single satellite has less than 100 Gbit/s of RF capacity, and you will only have a couple visible above you at any one time.
And after that bottleneck comes the inter-satellite link capacity bottleneck, which is less than half of that if the information I have found is correct (4 links at 10 Gbit/s each).
I don't think they will be able to sell more than 10 Gbit/s of service to anyone, and even that only in sparsely populated areas.
A single datacenter can reach low Tbit/s of connectivity _alone_ _today_.
I don't see how it could make any dent whatsoever in the backbone market.
The other use cases are a different discussion and not as dependent on very high bandwidth numbers.
Core routing hardware isn't sold at multiple Pbit/s because no location on earth has that much traffic. Nobody is wasting that kind of money buying hardware when they don't have the fiber to make use of it. Starlink is aiming to eventually have 10+ Pbit/s of physical hardware because customers aren't spread evenly across the globe, so there are inefficiencies involved. The only unfairness is comparing future equipment to current equipment.
First, data centers vary wildly in size and have connections to multiple networks for redundancy; Starlink would be very appealing from that standpoint. Also, you're underestimating bandwidth here: V2 Mini are ~80 Gbps, and full V2 weigh 2.5x as much, so presumably the bandwidth difference is even larger, but they aren't launching that hardware yet so it's unclear. Any given location has several satellites above it at the current density; at 42,000 you're talking 50+. Plus, if it's data center to Starlink customer, then you're effectively replacing a Starlink ground station, so they might make such installations very cheap.
Every packet traverses multiple such core routers (most likely from multiple internet providers), so 1 Tbit/s from A to B requires many more Tbit/s of actual hardware capacity sold.
Or take the edge capacity of small FTTH providers and you get to quite large theoretical numbers too.
Hundreds of Tbit/s easily if one is offering Gbit/s or faster service (my ISP has around 1 Tbit/s of equipment installed just to service residential homes within around a 300 m radius).
Both are true for Starlink as well: In case both source and target are not visible to the same satellite they have to be routed via multiple hops or sent upstream (ground station) to the rest of the internet. The edge capacity (RF up/downlink) boasts insane numbers, but the actual internal backbone is much more limited.
For consumers it is less of an issue, since they mostly won't fill up that 1 Gbit/s service they have, but businesses will absolutely fill that up (so you can oversell much less).
Starlink doesn't publish a nice overview of the actual specs of future satellites, so I can only use what people have figured out or what has been published.
But you are right about the number of satellites; I looked at the wrong column (it was for phase 1 and not 3), so density will indeed be much higher, and from that perspective it will be somewhat more capable (depending on how fast their rollout is).
And it's not clear if it would actually be cheaper for SpaceX to have such customers (since some of the traffic would go to the rest of the internet - requiring ground station resources) or go elsewhere on the planet (filling up inter-satellite capacity), but that's not too much different from how regular ISPs have to plan (some customers are cheaper for them than others).
> Every packet traverses multiple such core routers (most likely from multiple internet providers), so 1 Tbit/s from A to B requires many more Tbit/s of actual hardware capacity sold.
I agree, but look up global IP traffic numbers: it just crossed 1 Pbit/s recently, and that's across the globe. No individual router needs to handle anything close to 1 Pbit/s today, though modular designs would let you manufacture such a router.
You are underestimating the capacity of modern core routers.
A single PTX10016 router could route the entire Starlink customer base with no over-subscription at all (230 Tbit/s vs 75 Tbit/s from 50 Mbit/s * 1.5M).
Actual estimates of the total capacity of Starlink put it more in the 20 Tbit/s range (and that's without taking into account coverage over less populated areas).
First, my 3 Tbit/s estimate was wildly pessimistic and only covered current network utilization. Adding missing countries plus ships more than doubles the served area, 42,000 satellites is more than 10x the current count, and v2 is ~10x the current individual satellite bandwidth. So the current goals are at least 200x current bandwidth. (Satellites also do point-to-point links, which we are ignoring in terms of the total routing capacity of the network.)
The total estimate for a "fully deployed" Starlink network is therefore 600 Tbit/s from my pessimistic estimate (or half that if you're thinking in terms of customer bandwidth vs traffic handled by the network).
A more realistic estimate is more like 3,000 Tbps, as lower-density areas can offer higher bandwidth per customer: a village of 100 in the Sahara gets the same satellite bandwidth as a town of 100,000 in the US. How much of that actually gets used is a different story.
PS: Individual switches can handle a surprisingly large percentage of total internet bandwidth, since packets travel through many of them: a network of 100 switches with an average of 4 hops per packet carries 25x the traffic of a single switch.
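Putting those scaling factors into numbers (the 2x coverage, 10x satellite count, and 10x per-satellite bandwidth multipliers are the comment's assumptions):

    # Scale the pessimistic ~3 Tbps "today" estimate by the factors above.
    today_tbps = 3       # pessimistic current-utilization estimate from earlier
    coverage = 2         # missing countries + ships roughly double the served area
    sat_count = 10       # 42,000 satellites vs ~4,000 today
    per_sat_bw = 10      # v2 vs v1.x per-satellite bandwidth (assumed)

    print(today_tbps * coverage * sat_count * per_sat_bw)   # 600 Tbps "fully deployed"

    # And the hop-count point: aggregate switch capacity vs end-to-end traffic.
    switches, avg_hops = 100, 4
    print(switches / avg_hops)   # 25.0 -> a 100-switch network carries ~25x one switch's traffic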
I was only talking about the currently deployed constellation, which of course is much smaller, but for which we have actual numbers from measurements.
For the completed Phase 3 constellation I have read numbers in the range from 1.7 - 2.3 Pbit/s as theoretical maximum.
So you would need around 8-12 PTX boxes to reach the same total routing capacity as the whole future constellation.
The current single-fiber bandwidth record is 1.53 Pbit/s.
And nevermind that Starlink will run out of ground station / inter-satellite link capacity long before reaching the theoretical maximum.
Ah, you said planned here, so I assumed that was what you were talking about:
> The entire _planned_ Starlink constellation has less aggregate switching capacity than a pair of current-generation core routers.
I generally agree with those estimates based on how they're defining limits, though those definitions really depend on what you care about: predicting actual utilization at one end vs just considering what the satellites could physically do under ideal conditions at the other. Thus the theoretical impact of direct customer-to-customer links for a satellite over the Atlantic, for example, can vary wildly.
Most Starlink customers are in areas "easy" to serve with high-speed internet, like the eastern US. It's really only economical because of poor internet access across most of the US compared to similarly dense areas in other developed countries.
Isn't the latency of satellite internet always going to be significantly worse? Unless, of course, you could make larger hops, avoiding unnecessary switching delays. But I don't think that will happen, except in some rare circumstances.
It's not clear cut, and low-latency long-haul links were one of the intended applications of Starlink: you need fewer detours, and light in fiber is slower than light in vacuum.
Fiber slows down Berlin <-> NYC by a lot. I usually use Wolfram Alpha for this and it's like 30ms fiber vs. 21ms vacuum.
So in theory Starlink could offer fewer hops and better latency. The detour to space isn't that bad either, ~1,100 km on a 6,500 km route, and fiber needs some extra hops and detours too.
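A rough one-way comparison using the figures above (the ~6,500 km route, the ~1,100 km space detour, and the 2/3-c fiber speed are the assumptions from the comments, not measurements):

    # One-way propagation delay: fiber vs free-space via LEO, rough numbers.
    C_KM_S = 299_792            # speed of light in vacuum, km/s
    FIBER_KM_S = C_KM_S * 2/3   # light in glass is roughly 2/3 c

    route_km = 6_500            # rough Berlin <-> NYC path
    leo_detour_km = 1_100       # extra path length going up to LEO and back down (assumed)

    fiber_ms = route_km / FIBER_KM_S * 1000
    laser_ms = (route_km + leo_detour_km) / C_KM_S * 1000
    print(round(fiber_ms), round(laser_ms))   # ~33 ms vs ~25 ms, before any router hops or detours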
A single cable system under construction (which we are buying part of; there are over 30 such systems) has over 120 Tbps of capacity at 400G waves, but we are already discussing doing 800G waves, which will double that. We currently operate on over 7 submarine cables; one of the landing stations we operate out of has over 10 cable landings, which is over 850 Tbps of capacity across all cables.
Most of my metro DF backhaul can push 16-19 Tbps down a single pair, and most of my builds are 1,728-core fiber.
Starlink is very cool, and it will connect a LOT of underserved or simply unserved people across the world, but the notion that it has even the slightest chance of denting the subsea cable market is laughable.
Most of us are not talking about Starlink. I'm talking about this optical uplink technology, which can push tens of terabits per second through the atmosphere between each uplink and satellite pair.
Any free-space optical approach you can imagine will be slower and noisier than the equivalent technology in a fiber. You are not going to get DWDM channel density or throughput in free space.
Which will go oh so well when an adversary appears with a laser uplink of their own and starts fiddlefarting the Rx Sat to ruin your day.
Lines have some biiiiig advantages. They're very easy to monitor (liveness), very hard to imperceptibly tap anymore, they don't move once you put them down (generally), and if they do, you know just where to start looking.
Since you seem to have some experience I want to ask:
How much latency is added by routing and switching on a typical transatlantic journey?
And a followup, are there any promising developments for moving the networking infrastructure fully to light as a carrier signal without any electron-based hardware in between?
The Z on ETHZ is important, ETH on its own is just (loosely) "university", the Z makes it the Zürich one ;D. There is exactly one other: EPFL in Lausanne. (And you can technically call them EPFZ and ETHL if you really want to confuse people.)
(EPF [École Polytechnique Fédérale] is just the French version of German ETH [Eidgenössische Technische Hochschule]. Funnily enough both have pretty good international reputations, but due to the primary regional language being different are rarely recognized to be the same type of Swiss education entity.)
Continuing with the off-topic, the E in ETH is pretty significant if you're going to loosely translate, as it stands for "Swiss federal", which is a big part of where those institutions' standing comes from.
Funnily enough that part is completely absent in the official French version… it's just "Fédérale" there… and shouldn't any of DE, FR or IT be valid as a "source" for a Swiss to English translation? So if you choose to do FR→EN, it's just "federal polytechnic institute"…
(… to be fair, yes, "Fédérale" translates to "eidgenössisch" when the source is known to be Swiss French / fr_CH, but still. Oh the joys of locale details and translating natural language…)
I know of two newspapers that regularly say "ETH Lausanne", and I suspect those articles are written by pedants similar to some commenters here.
My original motivation in posting was really to spread the understanding that ETHZ and EPFL are sibling institutions; I knew both institutions from some comp sci bits but never made the connection until a while ago. And then I noticed a bunch of other people didn't connect them either…
(I mean, it's not super important to understand these two are related, but honestly I feel like it makes them even more impressive. ETHZ already has a MIT-like reputation, with EPFL not far behind. The Swiss are clearly doing something right with their technical universities. And it's a 100% quota too, these two are the only ETHs/EPFs.)
Sure, high-bandwidth inter-satellite links are essential for providing service to remote regions (or even just to make ground station placement more economical), but the question here is whether they could ever economically compete with fiberoptics outright.
For extremely latency sensitive applications, I guess they will (e.g. for trading and real-time communications), but for everything else that can wait a few more milliseconds, I'd guess that the operational overhead and reduced reliability will never be worth it, especially if fiber already exists on the same path.
Can't imagine how hard it is to align those in space (and keep them aligned) on a low-altitude satellite, but it sounds pretty cool. I'm guessing the range is something like 1,000-2,000 km?
Current fiber-optic cables transmit signals at about 2/3 the speed of light. Modeling has shown that Starlink's laser backbone would allow it to beat deep-sea cables in some situations. This article estimates a NYSE -> FTSE reduction from 76 ms to 43 ms. I wouldn't be surprised to learn that there is an HFT bidding war going on.
There is a newer version of fiber optic cable that uses a hollow core. Testing has shown it transmits signals at nearly the speed of light. It is very difficult to produce at scale and will require a generational shift to deploy.
Am I the only one who thinks HFT should be forbidden? Stocks and investments are a nice thing to have: you can buy a share of a company you believe in, they get the money and do stuff, you get dividends, and in the end you can sell the stock after some time too.
HFT changes these "beliefs in company" to "beliefs that the stock will go up", and the timeframe of those "beliefs" is sometimes in milliseconds. This turns classic "investment" into a computer game with a lot of real money.
HFT isn't even that harmful. What's harmful is brokerages that hold large portions of financial instruments who take advantage of customers' live buying and selling with delayed swaps, intentional front running, etc. This is how the stock market was bullied before HFT, options markets, trading bots, etc enabled 'everyone' to do skeezy arbitrage and not just market makers.
The only meaningful way to limit the efficacy of these modern trading styles would be purposeful market friction through minimum holding times, circuit breakers, etc. You can't just legislate that people mustn't have low ping; it's impossible to enforce. And those market friction mechanisms can create scary market conditions like backlogs, and guess what, they enable market-making brokerages to do internal swaps etc. in spite of the friction and essentially be the only ones able to bypass the restrictions.
> HFT isn't even that harmful. What's harmful is brokerages that hold large portions of financial instruments who take advantage of customers' live buying and selling with delayed swaps, intentional front running, etc
HFT does very similar things, except it's even more opaque to those not doing it. Large financial firms and exchanges sell their order data to HFTs. If you're interested in a deeper dive on what they're doing, a book like Flash Boys will explain how there's a lot more going on than just a quicker network connection.
How about forcing public markets to bucket trades within, say, 2-second windows, executing best matches at the end of each period? Low friction, and it reduces the value of millisecond latency advantages.
Then there would be algorithms that analyze the trades in those buckets, make a prediction on the outcome and carry out several counter trades to put into the same bucket. See a lot of people selling, cancel your buy order and replace with a short instead. Get a few of those bots involved and each bucket window would be exploding dueling pits of orders that would require insanely large systems to process.
These problems are fixable. Use blind bidding, and ban spoofing by preventing order cancellation within a bucket. Too many orders? Set a per-client limit.
Most of this is improvable with software. As other commenters point out, the resistance is from incumbent trading institutions who have a lot invested in the status quo.
Blind bidding would not work given the way NMS currently works, and it wouldn't be completely blind either way; the exchanges would know, and that's a system ripe for abuse.
What is an order cancellation? Would different legs on a hedge count as a cancellation?
The closing window is a massive challenge as well. It would have to be accurate down to the nanosecond, and you can bet the exchange is going to get litigated against depending on which bucket an order ends up in.
If you set up mandatory delays per client, you would have each hedge fund split up into hundreds of shell companies to get around it.
Having worked at exchanges, most surveillance and analysis of trades happen after the close of day. You’re proposing a literal real-time surveillance system that would be absolutely massive in scale.
It obviously works, given that many exchanges do have morning and evening auctions.
> The closing window is a massive challenge as well. It would have to be accurate down to the nanosecond.
It wouldn't have to be - one could even say the first second is for bids, and during the second second there are no bids, just a delay to release the auction results mid-second and allow people to process them.
> the exchanges would know
They can be forbidden to sell that info real-time by their regulator (it arguably would be market abuse and that could be made explicit).
It is definitely not impossible like you are trying to imply. (Financial markets participant with 20 years in the market here, for what it's worth.)
Even a 1-2 millisecond batch auction would do it. But the big trading firms are the largest "customers" of exchanges, and they've pushed back hard on proposals like this.
>> The only meaningful way to limit the efficacy of these modern trading styles would be purposeful market friction through minimum holding times, circuit breakers, etc
So I occasionally try to think of a reason to have a guaranteed delay, due to the signal propagation delay to/from Mars. Would there be a use for putting an automated exchange there? Maybe, but then whoever gates the orders could monitor and predict what will happen 40 minutes out or whatever. You'd need TLS from trader to exchange or something.
You might be interested in IEX [0]. The idea behind that stock exchange was similar to your thought process: introduce a guaranteed delay / random offset so that HFTs can't exploit the market the way they do on other exchanges.
I agree. HFT goes against the original ethos of a market open to all. Instead of trading on the merit of a given company, it simply reinforces that the market is increasingly gamified and unfair. It's like futures: no one cares about the commodity being traded, or that doing so can really hurt people and businesses that rely on that item; it's just another item to be gambled on.
> Instead of trading on the merit of a given company, it simply reinforces that the market
What you're proposing is actually not a ban on HFT, it's a ban on investment strategies other than value investing. How would you implement that in practice? Have every investor swear under oath that they do believe in the company's inherent value, regardless of the rest of the market's behavior?
> HFT goes against the original ethos of a market open to all.
What "original ethos" is that? And even if it existed, wouldn't excluding certain investors go against it much more?
And as for the how: just limit the number of transactions one can do per second, or the amount of money one can exchange per second. Then everyone will have to be smart instead of being the one with the best connection. Or split the markets into a "Joe invests" market and a "Bigcorp invests" market.
Whatever scheme is defined as "stock exchange" can surely be fixed by another scheme. Humans have been inventing games and agreed to rules for centuries, it's not like it's something we can't do.
Excluding the investors that have literally paid for an advantage sounds fair to me (buying real estate closer to the exchange because every ns counts).
What would that look like? If you rate limit stock exchanges to something like 1 update per minute then there will likely be the same amount of networking and computation going on to speculate on the next update and calculate optimal plays. It just moves it to behind closed doors where it is harder to know if shenanigans are going on.
It would take a heavier hand to push against this problem. I'm all for it, I'm just not clever enough or knowledgeable enough to know what would be a good regulation that would fly in congress.
A few congresspeople put forth a bill that would add a 0.1% tax on trades. This would be a rounding error for most traders, but a significant cost for HFT.
Raising the financial transaction tax to 0.1% ($1 per $1,000) would eliminate the vast majority of unproductive, rent-seeking HFT without negatively affecting liquidity. The US already has an FTT to fund the SEC, so implementation should be straightforward. Currently the rate is very low, about 0.002%. Raising the rate would be minimally impactful for longer-term investors and generate on the order of $50b a year for the government.
Hong Kong has an FTT of 0.13% currently, it was 0.1% from 1993-2021 so you can compare HFT impacts on these markets. Or compare dozens of other countries with similar rates, Switzerland, Taiwan, France, Italy, Japan.
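To put the rate change into perspective, a rough sketch (the trade size and round-trip count are illustrative assumptions, not data about any firm):

    # What a 0.1% financial transaction tax costs different trading styles.
    trade_value = 10_000        # dollars per trade, illustrative
    ftt_today = 0.00002         # ~0.002%, roughly the current SEC-fee level
    ftt_proposed = 0.001        # 0.1%, i.e. $1 per $1,000

    print(trade_value * ftt_proposed)    # 10.0: a one-off $10 on a buy-and-hold position
    round_trips_per_day = 1_000          # illustrative HFT activity level
    print(round_trips_per_day * 2 * trade_value * ftt_today)      # 400.0 per day today
    print(round_trips_per_day * 2 * trade_value * ftt_proposed)   # 20000.0 per day at 0.1%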
Yes, that estimate is with the greatly reduced HFT volume. And it may not be 100% accurate, HFT and trading in general could move to another market without these taxes and/or switch to a type of derivative with a lower/no tax. Like the UK, for swaps. But these other markets are smaller and not really a replacement. And these other markets may pass similar taxes in the future.
Rent-seeking is an "economic activity to gain wealth without any reciprocal contribution of productivity". Or put another way it means societies resources are put towards wealth transfer instead of productivity/wealth creation.
Thank you; understood. I've just looked at the Wikipedia definition[0] as well, which is a little different, but I get the point you're making.
I haven't thought about this enough, but one thing I'd note is that HFT is risk-taking, and taking a risk generally has the possibility of profit or loss.
A second thing would be that the phrase "society's resources" is a bit dangerous, as it might let us think that the money they're using is somehow everyone's money. When in fact they're spending their money on this, hoping to make a profit.
>A second thing would be that the phrase "society's resources" is a bit dangerous, as it might let us think that the money they're using is somehow everyone's money. When in fact they're spending their money on this, hoping to make a profit.
From the perspective of the whole economy it is everyone's money. Rent seeking behavior is a dead weight loss due to market inefficiency. Through tax policy the incentives can change and largely remove this inefficiency.
"Excess rent" is economics jargon for something akin to "selling something for more than its fair value". The idea with "rent-seeking" is therefore that of seeking opportunities to take advantage of or even create market distortions that work in your favor, allowing you to capture a greater share of surplus value than you would otherwise obtain in a theoretically-fair market.
I'm not that person, but presumably it's that someone has purchased real estate in a geographically relevant location and constructed property on it/between it and the exchange which gives them an insurmountable latency advantage in HFT, and they are characterising the raking in of cash from that latency advantage as rent seeking.
If you buy a stock, you must keep (hold) it for e.g. 30 days (or 7, or 60, or whatever).
Compute all you want, whenever you want, but instead of millisecond timings, optimize stuff for at least some time.
Maybe even a tax on stock profits which is really high and falls with the length of ownership. (We have this in Slovenia, but it's not really high in the first place, and the time brackets are: less than 5 years (25%), 5-10 years (15%), 10-15 years (10%), 15-20 years (5%), and zero tax after that.)
Adding this sort of friction would increase spreads. Market makers are a net positive for markets (who buys/sells when no one else wants to?), and I don't see any downside to allowing them to manage risk and trade quickly. I think the way to increase fairness is to change the way orders are matched.
One attempt is the exchange IEX [0] which introduced a ~350 microsecond delay to everything simply by running incoming order data through a ~60km loop of fiber-optic cable.
Perhaps not the most, er, featureful solution, but it's very easy to audit and argue that there are no biases or backdoors.
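A quick sanity check on the coil-length-vs-delay claim (the refractive index is an assumed typical value, and IEX's exact coil length differs a bit from 60 km):

    # Propagation delay of a fiber-coil "speed bump", rough check.
    C = 299_792_458     # m/s, speed of light in vacuum
    n = 1.47            # assumed refractive index of silica fiber
    coil_m = 60_000     # ~60 km coil, per the comment above

    print(coil_m * n / C * 1e6)   # ~294 microseconds, the same ballpark as the quoted ~350 us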
> If everybody is exactly 350us slower, then it’s still the same game.
IANATrader, but I don't think that's entirely true. Yes, it won't stop Carol from getting information from somewhere and then running in front of Alice to the same exchange. [0]
However even fixed-delays [1] can still create uncertainty about what price your order may actually execute at when it hits the server, which is something HFTs rely on more heavily than other traders, since their strategy depends on high-certainty that they will have a small positive margin on every trade.
__
[0] I find it geeky-amusing to write (i.e. IEX) or (ex: IEX) because it feels like the start of a mental tongue-twister.
[1] Can't edit my original post anymore, but apparently there's another 350μs on the outbound path too.
> However even fixed-delays [1] can still create uncertainty about what price your order may actually execute at when it hits the server, which is something HFTs rely on more heavily than other traders, since their strategy depends on high-certainty that they will have a small positive margin on every trade
This is also true on every single other exchange, albeit to a smaller degree. You have a larger queue of possible things that may have happened, but this issue exists everywhere. No exchange has instant entry.
On a very short timescale things are very jumpy: nothing happens for a while, and then everything happens all at once. The queuing problem tends to be more "did another HFT beat me to the same event" than general random other events, so queues like IEX's tend to be emptyish anyways.
What about a flat "Tobin tax" charged on each transaction so that it becomes very expensive to do lots of small trades quickly and encourages doing bigger trades at lower frequency?
The best solution I've seen is to do batch auctions every millisecond or so, and randomize matches within that window (for market orders or limit orders at the same price). You could use a longer window to include slower players in the market.
In a healthy market, HFTs serve as market makers, and allow normal traders to have faith that prices will be consistent at all exchanges. If there is enough competition, an HFT will have very low profit.
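A minimal sketch of that kind of batch auction, purely to illustrate the mechanism; the window length, midpoint price rule, and example orders are assumptions, not how any real exchange matches:

    import random

    def batch_auction(buys, sells):
        # buys/sells: lists of (price, qty) limit orders collected during one window.
        # Shuffling first removes any within-window time priority; Python's stable
        # sort then keeps that random order among equal-priced orders.
        random.shuffle(buys)
        random.shuffle(sells)
        buys.sort(key=lambda o: -o[0])    # best (highest) bids first
        sells.sort(key=lambda o: o[0])    # best (lowest) asks first

        matched, price, i, j = 0, None, 0, 0
        while i < len(buys) and j < len(sells) and buys[i][0] >= sells[j][0]:
            qty = min(buys[i][1], sells[j][1])
            matched += qty
            price = (buys[i][0] + sells[j][0]) / 2    # simple midpoint price rule
            buys[i] = (buys[i][0], buys[i][1] - qty)
            sells[j] = (sells[j][0], sells[j][1] - qty)
            if buys[i][1] == 0: i += 1
            if sells[j][1] == 0: j += 1
        return price, matched

    # One window's worth of orders (illustrative numbers):
    print(batch_auction([(185.02, 100), (185.00, 50)],
                        [(184.99, 80), (185.01, 60)]))   # (185.015, 100)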
What's the actual harm of HFT for the average person? I can set a $ limit on my orders, so I don't pay more than I want. Why should I care if some HFT guy scrapes 0.01% value off each trade?
That's the price you pay for liquidity, and the alternative (paying a difference in price far more than 0.01%) makes everyone worse off except other people competing in the markets. The money HFTs make comes from other people playing the zero-sum short-term trading game, not from you (assuming you are a longer-term investor or a non-participant in the market), so why should you or others in your situation get worked up about it? It seems like either a demonization of that which people do not understand, a class-warfare sort of vibe, or both.
It's weird to me how much some people hate market makers and traders who apply arbitrage strategies. I don't think there would be as much hatred for HFTs if people understood what it is they do. They make markets on securities and arbitrage things: for example, creating units of SPY when it gets too far above NAV, to bring the price back in line with the index. I'm sure there are more strategies I'm unaware of, but it's not like Millionaire HFT Man is picking the pockets of retail traders; there's real money to be made elsewhere.
I agree; it's as abstract as software but you can't really go to school to learn about it. It's more of an indictment of the tribal nature of humans and today's popular political narratives than of the intelligence of the OP's, who I assume are very smart. Deep-rooted emotions like jealousy and fear of the unknown can lead to shallow dismissals of complex and important systems.
It would be slightly amusing to see some sort of HFT ban, only to see people speedrun re-inventing HFT as numerous problems unfold, like dad putting in a market order for AAPL on a day it had a VWAP of $185 and having his trade execute at an average price of $690. Or more serious problems like shortages and surpluses of commodities in different locations due to bad price information.
This is referred to in the finance industry as 'picking up nickels in front of a steamroller'. It generally works out great for everyone until someone gets pulled under the steamroller, which can lead to an insolvency cascade.
Where do you think all your liquidity comes from when you want to buy or sell some instrument? The difference in price isn't all that substantial if all you do is buy now, sell when cash is needed in retirement, but if you're that kind of general index investor, one could argue that your contribution to markets is actually negative (whereas HFT is helpful by making prices better and spreads tighter). Buying SPX makes no differentiation between bad and good companies in the index and allocates money based on current prices (which are mainly calculated by HFT's who did the legwork to get to the current point).
This seems to be a common sentiment – but do you know what HFTs actually do, and why it works?
I was pretty skeptical too, but the more I look into it, the more it just boils down to market making (providing liquidity) and arbitrage (improving price quality across exchanges).
There might be dirty tricks there too; I'm only speaking of legal trading practices here. And if an HFT makes its profit using illegal methods, what's the difference between that and other financial crimes like insider trading, collusion, or front-running?
I don't really get the (to me somewhat irrational/uninformed) focus on HFTs, when things like payment for order flow probably "siphons off" just as much from retail investors' trades.
I have a suspicion that at some point, an HFT group will obtain a brief but substantial lead in machine learning (a la RenTech, but on a shorter timeframe) that allows them to suck a decent amount of money out of the stock market before anyone else can respond or close the gap. We probably won’t get any sort of regulation until after that happens.
> since you can buy a share of a company you believe in
I don't think this is generally true. Stock trading isn't charity; it's done to make a profit. Almost all trading is done on the basis of second-order effects, and has been for as long as there's been a stock market. Stocks aren't given value by the company being profitable, they're given value by what other people are willing to pay for them (okay, yes, dividends also factor in, but many companies don't pay dividends and people still assign value to those stocks).
Well sure it's for profit, but what exactly do we (the society) get if we allow millisecond investments in our companies? Compared to classic, long-term investments.
We gain the same benefit all commerce creates: somebody wants to buy, somebody else wants to sell, both those people make a deal, both those people have fulfilled their wants. The seller is happy because they preferred cash to the stock, the buyer is happy because they believe they can sell the stock for more money later. What difference does it make if the buyer plans to sell in a second or a month?
Is liquidity even a real answer? The HFTs only work because the market would be made a moment later. A buyer and a seller already exist. If they didn't, it wouldn't be high-frequency trading; it would just be "trading".
As far as I can tell, it neither benefits us, nor costs us. It siphons off a tiny fraction of money, too small to notice, but large enough in aggregate. Some parasite gets to survive, which is annoying if you look directly at it, but ignorable if you don't.
> The HFTs only work because the market would be made a moment later. A buyer and a seller already exist.
Not at all. If you, for whatever reason, need to sell right now, putting in a market order in an illiquid market can cost you a lot.
It can be discussed how big the contribution of HFTs to market making really is (large companies or exchanges often employ dedicated market makers, I believe?), but the benefit of market making itself should hopefully be obvious.
Sure, liquidity is a good thing. But you only need enough liquidity to make your trade. Adding more provides zero benefit.
If the HFTs only trade when the market is already made (or will certainly be, milliseconds later) then they're not really adding any liquidity. They're just doubling the liquidity which was already sufficient.
They fulfill different functions. Low trading delays make certain the difference between buying and selling price is low, and reduces differences in price between marketplaces. Both very good for people doing long term investments.
I am pretty sure HFT is the only reason the average joe can trade in the first place and can definitely say that X's stock is worth Y price. Otherwise you would have to trust the broker to give you a correct estimate.
Suppose I, a (hypothetical) very-low-frequency value-based trader, buy a million dollars' worth of Apple.
Apple should go up in price slightly, because someone (me) just made a big bet that it's going to do well. The market does this automatically.
TSMC should also go up in price very slightly, because Apple doing well makes it more likely that TSMC (a major vendor of theirs) will also do well, and the market just learned that Apple is slightly more likely to do well, so it also just learned that TSMC is more likely to do well. In other words, the companies' performance is correlated, so the stock prices should be as well.
By my understanding, HFT is the mechanism by which the market can make TSMC go up slightly as well. High-frequency traders can react to the information that Apple is going to do better (as per its increased stock price) and buy a bit of TSMC.
This seems like it should make the market more accurate, and if you buy that the market accurately assigning values to a company is a good thing, seems like a good thing.
Sure, but assume for the hypothetical that the counter parties were a list of existing "I'll sell this much at this price" orders. I.e. they were existing information that the market already took into account.
Or 1 million people sold $1 worth of Apple. If you want to execute a large trade, you'd probably have to buy from several sellers, and (my limited understanding of the market is that) you'd do so by matching sell orders in ascending order, thus pushing the market value up.
No. It's garbage, trading is supposed to serve human needs and should take place on human timescales. If I had a magic wand I'd randomly fuzz the timing of trades to remove a lot of incentives for HFT and some other kinds of automated trading, which are not much more than banging on a known deficiency in a slot machine.
Proponents argue that HFT and other such innovations provide liquidity to markets; my skeptical take is that this is a nice technical-sounding term for traders with cash to lowball falling asset prices.
If you want to buy an Apple share right now at, say, $185 (at time of writing), what gives you the confidence that you can buy it at this price?
1. You could put in a limit order at $185 or lower. But maybe no one is putting in a sell order at that price? So you are stuck waiting for some time, maybe forever if the price keeps going up.
2. You could put it at $190 to trigger a trade, but you risk paying $5 more than you originally wanted if people have already cleaned up the book from $185 to $190.
The only reason you can definitively say that an Apple share is worth exactly $185 right now is the liquidity/depth of the market: there are enough players in the game willing to make a minuscule amount of profit in a very short amount of time.
Yeah, that's the cost of doing business. If your business model depends on many many transactions with tiny margins, don't be a trader but become an exchange.
> HFT changes these "beliefs in company" to "beliefs that the stock will go up"
No, this is the part most everyone gets wrong about HFT. These systems make money off of volume, not price. They don't care if the price is going up or down. All they care about is that they can capture a small price delta by facilitating a trade faster than someone else and they are willing to do so for smaller fractions of a cent which makes them more attractive to all market participants, including you and your retirement funds. Their concerns are orthogonal to investors and they compete with other HFT firms and market makers.
Not quite: By buying shares in a company, you positively influence its share price, which reduces cost of capital for the company (in case it needs it).
Arguably, the difference in impact (in terms of enabling/steering companies to do things) between the IPO and secondary market these days is probably negligible compared to any two single private funding rounds.
And HFT setups are, I understand, also nearly colocated with the exchanges -- as close as they can get physically -- so adding a hop to space and back may be adding quite a bit of distance.
It would be relevant for transmitting information from different exchanges (NYC ⇔ London ⇔ Tokyo... etc), not going from the trading firm's systems to the exchanges. At a high level, the firms would be making money by helping markets on different continents synchronize faster.
Exactly. The IEX exchange is relatively new, and their big innovation is that they use long spools of fiber-optic cable to delay their users' connections, thus creating an even playing field.
"The reduction in trading costs (spreads) is broadly consistent with recent
theories on how speed advantages may be used to exploit mechanical arbitrage opportunities. These theories suggest that market makers face adverse
selection from fast traders even in the absence of traditional “fundamental”
informed trading. For instance, Budish et al. (2015) defnes “quote sniping”
as the mechanical arbitrage of taking “stale” quotes before market makers can
cancel. In his Comment Letter on IEX’s Exchange Application, Eric Budish
argues that IEX’s speed bump may be able to mitigate quote sniping as it
allows IEX’s pegged orders to avoid executing against market orders at stale
prices.7 Moreover, cross-sectional di˙erences in spreads in response to IEX’s
protected quote are consistent with recent theory suggesting that exchange
speed is a double-edged sword for market makers (Menkveld and Zoican,
2016). On one hand, faster exchanges allow market makers to update their
quotes faster, reducing spreads. On the other hand, higher exchange speed
results in a higher probability of quote sniping. Hence, the results support
this more nuanced view of the net e˙ects of speed on trading costs. "
Photonic bandgap / Bragg fiber is being produced in quantity, and some subsea cables are using it.
I'm pretty sure it's the satellite lasers that require the generational shift. One subsea cable has no bearing on any other; once the first is installed, you're up and running. Not so with laser-connected swarms.
Photonic bandgap fibres are not used in submarine cables; their losses are too high. There are recent antiresonant fibres (NANF) which can achieve lower losses than even SMF. However, production capabilities are not yet ready to make the amount of fibre necessary for submarine use. The startup (out of Southampton) that pioneered these was recently acquired by Microsoft.
We will probably see these fibres first in links between exchanges and data centres (although there it is difficult to beat RF links, as they are direct line of sight).
Also, there has to be compensation for satellite rotation, plus initial targeting and tracking. At ~12 km between satellites (~4,000 on a 550 km orbit), the (possible) target, a laser receiver of, say, one meter in diameter, subtends about one third of an arcminute. It is not impossible to target that initially, but keep in mind that a laser beam is very focused and its power drops significantly with angular distance.
I really do not know how to even target satellites' receivers initially, let alone compensate for rotation and track these targets afterwards.
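The angular-size arithmetic from the comment above, made explicit (the 12 km spacing and 1 m aperture are that comment's assumptions; real inter-satellite link distances are typically far larger, which only makes the angle smaller):

    import math

    def subtended_arcmin(aperture_m, distance_km):
        # small-angle approximation: angle [rad] ~= aperture / distance
        return math.degrees(aperture_m / (distance_km * 1000)) * 60

    print(subtended_arcmin(1.0, 12))      # ~0.29 arcmin, the "third of an arcminute" above
    print(subtended_arcmin(1.0, 1000))    # ~0.003 arcmin at a 1,000 km link distance (assumed)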
"Space lasers allow Starlink satellites to connect directly to one another, eliminating the need for a local ground station and enabling Starlink. to deliver service to some of the most remote locations in the world - like Antarctica."
I don't know if Starlink is routing packets that way. I think it's more likely that they route packets to a ground station as soon as possible to save laser link capacity for customers that actually need it, like ones in the middle of the ocean. If you want your packets routed all the way around the Earth through space then you'd probably need a special contract with SpaceX.
Wouldn't they have to relay between many satellites due to line of sight? And wouldn't that eat up any decrease in time of flight for the signal itself?
I suspect that retransmission with amplification only, without much processing, can be really fast. Modern electronics can routinely do sub-nanosecond latency.
This sure sounds useful, but "might remove need for deep-sea cables" is quite silly.
Anything they do to cram bandwidth into their laser link can reasonably also be applied to fiber optical cables. Except the fiber optical cables come in bundles of 12 to 144 with absolutely no separation issues. Replicating that over open space (air or vacuum), if reasonably possible at all, will chug significant amounts of power in signal processing at the receive end.
There are major benefits to free-space optics -
- quicker to build
- lower latency
- in some cases, large coverage
But deep-sea cables compete on bandwidth, and that's not something free-space optics can beat them on. Why diminish this research achievement by conflating it with that? :(
On top of this I really doubt the claim about working in bad weather. Some weather will work, sure, maybe at reduced speeds. But if there is a proper cloud in the way, the near-visible 1550 nm light will be scattered completely.
I believe the idea is that the ground link is still radio but the interconnect between the satellites is a laser. While radio can be affected by weather, the "works in bad weather" is a long solved issue.
Not any more than saying a "proper cloud" will disrupt satellite transmissions is meant to be serious. Is it a cirrus cloud? Is it a cumulus? A cumulonimbus? Which cloud is going to do this blocking?
This really isn't that complicated. You can't do a link through pretty much any cloud where you would say "it's cloudy". If you're not sure, the link still works but a lot worse...
No. I still don't know what is meant by a proper cloud. I don't have Starlink, but I've had plenty of other satellite systems in the past. None of these have had issues with clouds. Severe storms with heavy rain and lightning would affect them at times. With the older large dishes, heavy winds could cause them to move enough to degrade the image. Clouds by themselves? No, not an issue.
So, yet again I have to ask, because all of the useless answers received to this point have not actually answered it: what is a "proper cloud" that could interfere with a signal?
You're going to need to show me charts with strength of signal being diminished by "proper" clouds before I'll even remotely come to thinking this isn't just FUD. So, no, I don't understand him, and yes, if this is the premise I do not agree with it.
It is pretty simple: you can't see through a cloud because the visible light gets scattered. This system works with 1550 nm light, which behaves pretty much the same in this context.
Normal ground-to-satellite communication uses wavelengths of 7 mm and up, as those have reasonable cloud penetration.
> Although the laser system was not directly tested with an orbiting satellite, they accomplished high-data transmission over a free-space distance of 53km (33 miles).
This is peanuts compared to undersea cables, and there are the issues you will find penetrating the atmosphere and, increasingly, space.
We should keep undersea cables and work on additional connections to make our networks more resilient to natural phenomena, equipment failures, and sabotage.
> more resilient to natural phenomena, equipment failures, and sabotage.
I think you're understating the risks from geomagnetic storms. In comparison to satellites, fiber optic cables seem like they'd be relatively unaffected even if the equipment attached to them needs replacement.
Currently, you can cut internet connections to most countries with a small deep-water sub. China, for example, was busy building one that can operate at 10,000+ m depth and has arms.
It's widely assumed that China, Russia, and the US have "kinetic kill" capabilities against satellites in space too; it's just that the capabilities are not as widely known. See projects like the US/Boeing X-37 space drone, whose purpose still hasn't really been revealed.
Can those be deployed "anonymously" though? Publicly destroying another nation's satellite would be an act of war. Quietly and secretly cutting an undersea cable, while being able to plausibly deflect blame, is a bit safer.
Perhaps the ocean is protective to some degree, but beyond very short lengths undersea cables have powered repeaters that draw considerable current, and which would plausibly be destroyed if the base stations are too.
I would not say "very short lengths", because undersea cables without powered repeaters (only with optical amplifiers) are possible up to a few hundred kilometers.
But you are right that transoceanic cables need powered repeaters.
yeah, but having both means if one is a victim, the other keeps chugging along. a bad anchor drag take out cables? or a sub cut the wire? lasers go pew pew. weather take out lasers? cables go zoom. and in normal operating conditions, both go and increase bandwidth and speed
It's not peanuts at all, given that it's an entirely different technology. Free space optics will have different applications, of course, but those applications where fiber is currently not feasible stand to profit quite a bit from this.
I can think of all of rural America underserved with fast internet, forced to share puny fiber runs to towers retransmitting over low speed radio.
Implementing the same infra but using lasers as the backbone could speed up a large area of the country, make rolling out dense radio networks like 5G way faster and cheaper, etc.
I could see both being in use, but replacement seems unlikely to me.
If nothing else I could see governments insisting on both just for strategic reasons. Both techs are vulnerable to being fk'd with so doubling up makes sense
The key thing here is it allows for very high bandwidth to go almost anywhere on earth, not just spots where it's easy to land a cable. Starlink's already demonstrated how great that is for consumer bandwidth (50Mbps); a multi-gigabit link terminated at a satellite is fantastic.
The part that impresses me most is they're talking about LEO satellites. Those move fast! Starlink does this with a very impressive phased array antenna design. Conceptually tracking a moving satellite with a laser is as easy as rotating a mirror, not sure how hard it is in practice.
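For a sense of how fast "fast" is, a rough worst-case tracking rate for a ground terminal during an overhead pass (altitude and orbital speed are standard LEO ballpark figures, not Starlink specs):

    import math

    altitude_km = 550         # typical Starlink shell altitude
    orbital_speed_kms = 7.6   # rough orbital speed for a ~550 km circular orbit

    # At zenith the satellite's motion is entirely cross-range, so the angular rate peaks:
    print(math.degrees(orbital_speed_kms / altitude_km))   # ~0.79 deg/s the antenna or mirror must follow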
It also circumvents the biggest problem with fiber, politics. Starlink doesn’t need permission to run a backbone across (or rather over) a country. This will be revolutionary for people in Africa, South America, and huge swaths of Asia/Australia where a few telecom monopolies have artificially jacked the price of transit.
IIUC, it only circumvents politics due to existing treaties. Those old treaties are likely ripe for revision with the extensive commercialization and the increasing number of countries capable of launching payloads into space.
It definitely helps mitigate the infrastructure buildout hurdles (which are not small), but they would still need to jump through any "I need to do business in this country" regulations, etc.
Only if they want to bring a signal down to the ground. Unlike fiber, LEO sats can send data across an entire continent of countries or states without getting permission from any of them. Wanna connect Toronto to Mexico City with fiber, gotta get the US and half a dozen states and dozens of counties all to give you easement. Not so with space lasers :-)
Sadly, in most of the rural US. My other options are 12Mbps fixed wireless, 1-100Mbps cellular, or 3Mbps DSL which AT&T stopped selling in violation of various government contracts.
Rural Canada.
On a lake shore.
30km from one of Canada's "Top 50" cities.
Where I used to live, I had 2 options for internet access.
A Wireless ISP that uses a parabolic antenna pointed at a water-tower about ~20km from my home; or a cell-phone based internet connection.
The WISP allowed me on average 300kb/s transmissions.
The Cell-Phone allowed me between 1.5Mb/s and 7Mb/s (to a max use of 5GB/month).
An actual 50 Mbps link is perfectly good for most use cases, you can stream anything and it is not really a bottleneck in determining how quickly pages load. Large file transfers may still take appreciable time, but it is rarely a big issue.
An advertised "50 Mbps" mobile connection is dog food if you are used to 50 Mbps fiber. You are lucky if you get 20 Mbps through, though it can be much less. The worst part is all the packet loss, which causes inconsistent latency and speed.
Here where I've been on 1.5 Mbps for 15 years and there is literally no other alternative. No cell coverage. No cable. Mountain blocks GEOsat. Copper is a 6 mile run to nearest town so DSL is out too. Also, I'm literally 12 miles from the Googleplex. Yeah, broadband is a mess outside of cities and suburbs.
In practice, considerable difficulty. Like you said LEO moves fast. You are talking 10 min in line of sight (LOS) depending on the orbit. So the handoff algorithms (and necessary network) are very challenging and cost prohibitive. I mention it in my earlier response but if you have a LEO network you need pretty substantial arrays to accommodate such a power load, and during “night” for whoever is using the network, you need transfer mechanisms to relay the data, using even more power during eclipse. This is a real challenge.
What about solar storms? I would expect the damage to be more catastrophic if a major solar storm took down a decent percentage of the satellites than it would be for terrestrial fiber. A satellite network seems more vulnerable and harder to fix. I'm sure deep-sea cables are no picnic to fix either, and they are susceptible to sabotage too.
https://www.space.com/solar-storms-destroy-satellites
IMHO this article is garbage, because moving the internet backbone into space isn't a sustainable choice in the short, medium, and especially long term. Starlink knows that; they are operating at a loss, burning investors' money, hoping that some kind of new sci-fi tech will change the game, but honestly that won't happen. Imagine if the current cellphone network were in space: every time the tech takes a generational step forward, you would have to burn all the satellites and launch a new constellation.

Moreover, you need to reach consumers (the last mile) on Earth, and you can't do that with lasers. Customers will rely on radio communications, which in this case have lower quality than the other alternatives, when those are available. In fact, the point of satellite internet is to cover a niche market: you use that solution if there is no alternative and if you can afford the costs. So you accept lower connection speed, variable QoS, etc., hoping for a better alternative in the future, and when that alternative arrives in the form of a traditional wired connection, the customers instantly migrate to it. So that market constantly decreases, it doesn't increase.

If competitors realise that clusters of satellite customers exist in some rural zones and are willing to pay more for an internet connection, they will take over completely. And anyway, in times of remote working, someone should invest in providing fast internet in remote places (which offer the better quality of life that remote workers want) instead of investing in satellite internet. Laser telecommunications are already a thing in optical fibre, and there they make more sense than in a fragile and expensive satellite constellation.
Eliminate deep sea cables? How are they going to send the lasers in a curve from one base station to the next?
Starlink is already working on a backhaul from satellite to satellite. Interestingly, they claim that they can speed up data transfers by 50% over long distances. Musk said that in a tweet which I cannot find now. I looked a bit into it, and the reason seems to be that light through a fiber can only reach about 70% of the speed of light, while lasers through the vacuum of space travel at the full speed of light.
> Eliminate deep sea cables? How are they going to send the lasers in a curve from one base station to the next?
Multiple satellites. The ISS can see about a thousand miles in each direction to the horizon, so you'd only need to bounce across a couple to cross the Atlantic.
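For a rough sense of the geometry, the horizon distance follows from simple trigonometry; the altitudes below are assumed round numbers, not exact orbital parameters.

```python
import math

R_EARTH = 6371.0  # mean Earth radius, km

def horizon_km(altitude_km: float) -> float:
    """Straight-line distance from a satellite to its horizon."""
    return math.sqrt((R_EARTH + altitude_km) ** 2 - R_EARTH ** 2)

for name, alt in [("ISS, ~420 km", 420.0), ("Starlink shell, ~550 km", 550.0)]:
    d = horizon_km(alt)
    print(f"{name}: ~{d:.0f} km (~{d / 1.609:.0f} mi) to the horizon")
```

Two satellites at ~550 km can be nearly twice that far apart and still see each other (ignoring the grazing path through the lower atmosphere), so a couple of hops across the Atlantic checks out.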
Sending lasers in a curve should be easy if you use circular polarization. /s
You would send to a satellite in view; my orbital dynamics are poor, but I think with LEO orbits you may need to relay in space for a cross-ocean link, as it seems unlikely one satellite would have a view of both sides. But Telstar 1 and 2 were used for cross-Atlantic microwave relay with a single satellite in view at a time. Future Telstars were geostationary, because satellite tracking was a lot of work (and with only two satellites, service was intermittent, as they were only in view for a portion of their orbits). I imagine satellite tracking for lasers is going to be very difficult as well.
They’ve launched satellites with laser links, but I don’t think they’ve got the satellite-to-satellite links active yet. https://www.starlink.com/technology says “testing” for that feature.
Aren’t lasers extremely susceptible to weather conditions?
E.g. when it’s raining/snowing/cloudy, can the base station communicate with a satellite via laser?
The lasers are used only between the Starlink satellites, not in the uplinks or downlinks.
While this experiment has demonstrated that it is possible to compensate for the turbulence of the air, there is no solution for the attenuation caused by bad weather.
It might be possible to use lasers between a base station and satellites only if the base station is located at high altitude in a dry place, like those typically chosen for astronomical observatories, so that the links will be blocked by bad weather only infrequently.
So I do not believe that there is even a remote chance of replacing most undersea cables.
Lasercom has been demonstrated well on Earth but it still is a very nascent technology in the space domain. There are several challenges that lasercom faces.
First, in order to enable global communication you need a constellation and good constellation management. This requires fewer spacecraft if you are in GEO, but GEO spacecraft tend to be larger and more expensive.
Second, lasers draw a lot of power. Most is waste heat as far as the spacecraft is concerned, but this can become a problem if the duty cycle is high. High power means much larger arrays, and for a LEO constellation, serious eclipse challenges.
Finally and most importantly, pointing is a challenge. Maintaining pointing control in space is hard, and lasers require much greater accuracy as compared to RF. With RF, the lobes are relatively large and allow some play when it comes to downlink to ground. Lasers are unforgiving.
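To make that contrast concrete, here is a rough diffraction-limited footprint comparison; the wavelengths, apertures, and range are illustrative assumptions, not figures from any real terminal.

```python
# Illustrative diffraction-limited footprint comparison for an RF link
# versus an optical link over the same cross-link range.

def spot_diameter_m(wavelength_m: float, aperture_m: float, range_m: float) -> float:
    # full-angle diffraction-limited divergence is roughly 2.44 * lambda / D
    return 2.44 * wavelength_m / aperture_m * range_m

RANGE_M = 1_000e3  # a 1,000 km link

rf_spot = spot_diameter_m(0.01, 0.5, RANGE_M)        # ~30 GHz Ka-band, 0.5 m dish (assumed)
laser_spot = spot_diameter_m(1.55e-6, 0.1, RANGE_M)  # 1550 nm, 10 cm telescope (assumed)

print(f"RF footprint:    ~{rf_spot / 1000:.0f} km across")
print(f"laser footprint: ~{laser_spot:.0f} m across")
```

A kilometres-wide RF footprint tolerates pointing errors that would put an optical beam only tens of metres wide completely off the receiver.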
I’ve led several spacecraft missions with lasercom terminals, and every time it has made development markedly more challenging. I think it will be quite a while before it is mainstream, and certainly before it replaces fiber.
ISLs (inter-satellite links) are active today, providing low-latency broadband to thousands, perhaps tens of thousands, of Starlink's 1.7-ish million customers where there are no nearby ground stations. I'm not sure if that's "mainstream", but when major cruise lines and Alaskans are happy active customers, it feels like it's getting there to me.
This tech is really cool, but as they mentioned, they have not tested this with a moving satellite. Because any antenna would need to stay locked on a satellite for transmission, you're going to be stuck with maintaining (likely) expensive and precise motors whose job it will be to run in perpetuity. However, I would be confident that stepper motors of this grade will come down in price or become more available.
My primary concern comes down to reliability. Your ground station must now accurately target moving satellites, negotiate handoffs, and reroute when necessary. Architecturally, you would need to build the ground station with at least three transponders: an A link, a B link, and a redundant Z link. The A and B links take turns tracking their satellite, where one is only allowed to re-point if the other is currently active. The backup Z link is on standby in case of failure or maintenance. The three would likely rotate roles periodically to spread wear more evenly across the motors. This system must work in harmony with all the satellites in its network. The satellite side has its own maintenance burden too: orbital decay requires you to periodically adjust a satellite's position until it runs out of propellant, and then to send a replacement up once it reaches the end of its useful life. To me, this is a ton of moving parts.
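Purely as an illustration of that rotation, here is a sketch; the names and the "only re-point while the other is active" rule are just my reading of the scheme, not any real ground-station software.

```python
# Minimal, purely illustrative sketch of the A/B/Z terminal rotation described above.

from dataclasses import dataclass

@dataclass
class Terminal:
    name: str
    tracking: bool = False  # locked on a satellite and carrying traffic

def handoff(idle: Terminal, active: Terminal, next_satellite: str) -> None:
    """The idle terminal slews to the next satellite while the active one keeps the link up."""
    assert active.tracking and not idle.tracking
    print(f"{idle.name} slews to {next_satellite} while {active.name} carries traffic")
    idle.tracking, active.tracking = True, False  # handoff complete; roles swap for the next pass

a, b, z = Terminal("A", tracking=True), Terminal("B"), Terminal("Z")
handoff(b, a, "sat-17")  # B acquires the next satellite, A goes idle
handoff(a, b, "sat-42")  # then A acquires the one after, and so on
# Z stays on standby and would rotate in periodically to even out motor wear.
```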
An undersea cable, by contrast, will be pretty much maintenance-free once laid, requiring only infrequent repairs in the event of a wayward ship anchor. Yes, the data-center components will need to be upgraded periodically, but you don't need to replace the transmission medium. There are no moving parts and no need to track objects hundreds of miles away; you're just sending light down a tube.
Is this tech cool? Absolutely. Will it replace undersea cables? Probably not soon, but I can see this as a great solution to link remote islands where it does not make strategic sense to run a cable, or where it is difficult or uneconomical to run a terrestrial connection.
> Because any antenna would need to stay locked on a satellite for transmission, you're going to be stuck with maintaining (likely) expensive and precise motors whose job it will be to run in perpetuity. However, I would be confident that stepper motors of this grade will come down in price or become more available.
The article says they direct the laser using a MEMS device, not dissimilar to the micromirror arrays used in DLPs for decades, I would assume. You've definitely got options here that don't require stepper motors or any macroscopic moving parts.
Unrelated - I'm reminded of the plans for beaming energy from orbital stations gathering sunlight, which instead uses microwaves.
I wonder how much these beams interact with the atmosphere? I can imagine lasers interact very little, but wouldn't microwaves heat up water vapor on the way?
The big difference between open air laser and microwave links is that one of them doesn't require licensed spectrum. The costs of doing it will be much lower.
Fair! And the existing microwave relay towers could be upgraded or reused this way.
Lasers have a downside though: optical and near IR light is much more readily absorbed by water vapor than microwaves. I wonder if a maser would be an acceptable solution, if cheaper versions of it existed.
1550 nm falls between two absorption maxima of water, so it is not the worst choice, but the absorption in water is still more than one hundred times higher than at e.g. 905 nm.
Because of this, 1550 nm is absorbed through fog or clouds at least 3 to 5 times more than shorter wavelengths like 905 nm. Even the latter is absorbed a lot by water, while being much more dangerous for eyes than 1550 nm.
There is no near infrared wavelength where the water absorption is negligible.
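As a purely illustrative Beer-Lambert example (the coefficients below are placeholders, not measured values for 905 nm or 1550 nm), even a modest difference in per-kilometre attenuation compounds badly over a long slant path:

```python
# Illustrative Beer-Lambert attenuation: I/I0 = exp(-alpha * L).
# The attenuation coefficients are placeholders chosen only to show the scaling.

import math

def transmission(alpha_per_km: float, path_km: float) -> float:
    """Fraction of optical power surviving the path."""
    return math.exp(-alpha_per_km * path_km)

PATH_KM = 10.0  # assumed low-elevation slant path through a moist lower atmosphere
for label, alpha in [("lower-loss wavelength", 0.1), ("5x higher loss", 0.5)]:
    t = transmission(alpha, PATH_KM)
    print(f"{label}: {100 * t:.1f}% survives ({-10 * math.log10(t):.1f} dB loss)")
```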
The need to hide cables under water exists, among other things, because satellites would be among the first things to go down in the event of a global conflict. Of course, underwater cables have their own vulnerabilities, but at least the technology for repairing them and laying new ones is much simpler than for satellites.
Whenever I hear of exotic comms stuff like this, I think about command-and-control use cases. Quantum computing is about the security of the cryptographic infrastructure (PKI, TLS, etc.). New physical layers are all about diversification. It's less about disruption and more about security.
I used to work, in the early aughts, for a company that had a laser uplink in Seattle from their office building to the Westin. I sat in the office with the uplink for a while; it pointed out the window on a big gimbal, and you could watch it adjust for building sway.
I suspect the tracking and switching infrastructure, the sophistication of free space lasers with their adaptive optics, and the inclusion of satellite receiver-transmitters with their own adaptive optics will not be competitive with laying a passive undersea cable for at least a decade.
> The Deep Space Optical Communications (DSOC) package aboard NASA’s Psyche mission utilizes photons -- the fundamental particle of visible light -- to transmit more data in a given amount of time. The DSOC goal is to increase spacecraft communications performance and efficiency by 10 to 100 times over conventional means, all without increasing the mission burden in mass, volume, power and/or spectrum.
The mirror array sounds like a micro version of what's used in large Earth-based telescopes for the same purpose. Their primary mirrors are made out of movable segments to cancel out atmospheric effects. They're just a bit bigger...
Last I checked, there is a 1% chance of a Carrington Event within the next decade. Leaving cables behind and trusting new wireless links before we have a "full DR" situation would be stupid.
They transmit in near infrared in an environment that is full of particles. Wouldn't this cause light pollution in a spectrum that many animals are affected by?
Also, doesn't this have the issue that anything passing through the beam cuts off internet? A bird, a kite, a malicious actor, a dust storm?
There won't be any significant light pollution since the beams are so narrow.
The problem of interference from clouds and precipitation is real, but the problem of physical occlusion probably isn't because the satellites move pretty fast. That would make it hard to persistently block the beam unless you have something as big as a storm cloud.
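As a rough illustration (all numbers assumed rather than measured), here's how briefly a bird-sized object near the ground terminal could actually block a beam that is tracking a LEO satellite:

```python
# Rough estimate of occlusion time as the beam sweeps to follow a LEO satellite.
# Orbital speed, altitude, obstacle size, and distance are all assumed values.

SAT_SPEED_KM_S = 7.6   # orbital speed at ~550 km altitude
SAT_ALT_KM = 550.0

angular_rate = SAT_SPEED_KM_S / SAT_ALT_KM  # rad/s, satellite roughly overhead

obstacle_size_m = 0.3    # bird-sized object
obstacle_dist_m = 100.0  # distance of the object from the ground terminal

sweep_speed = angular_rate * obstacle_dist_m  # how fast the beam moves at that range, m/s
print(f"beam sweeps ~{sweep_speed:.2f} m/s at 100 m; blocked for ~{obstacle_size_m / sweep_speed:.2f} s")
```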
This seems like a reasonable question, on its face. I have NIR illumination on my surveillance cameras and insects are VERY interested in it. Every night I get at least one bat hanging out by my camera. I'm fairly certain it's because the camera attracts insects.