Saturday, 20 May 2017

Submarine communications on VLF protecting Earth from space radiation

We briefly chatted about Very Low Frequency (VLF) communications in "Lines, radios, and cables - oh my": how such comms may travel long distances, penetrate a bit of soil and water, and are used for communication with submarines.

Well, there is interesting news on that front this week. NASA is reporting that submarine communications are having a positive side effect: creating a somewhat protective bubble around planet Earth.




The effect is noted as small in the paper, "Anthropogenic Space Weather" [Gombosi, T.I., Baker, D.N., Balogh, A. et al. Space Sci Rev (2017)], at least from what I can parse. I found some of the communications history in this paper very interesting, so I thought I'd share some of that history verbatim here for like-minded curious people.

--Matt.

___________________

Excerpts from "Anthropogenic Space Weather" [Gombosi, T.I., Baker, D.N., Balogh, A. et al. Space Sci Rev (2017)]

8 Space Weather Effects of Anthropogenic VLF Transmissions


8.1 Brief History of VLF Transmitters


By the end of World War I, the United States military began use of very low frequency radio transmissions (VLF; 3–30 kHz) for long-distance shore to surface ship communications (Gebhard 1979). Since very high power can be radiated from large shore-based antenna complexes, worldwide VLF communication coverage was feasible, and along with LF and HF systems (30–300 kHz and 3–30 MHz) these bands carried the major portion of naval communications traffic before later higher frequency systems came online. Early experiments also showed that VLF could penetrate seawater to a limited depth, a fact realized by the British Royal Navy during World War I (Wait 1977). Given this realization, when the modern Polaris nuclear submarine era began in the 1950s, the US Naval Research Laboratory conducted a series of thorough radio propagation programs at VLF frequencies to refine underwater communications practices (Gebhard 1979). Subsequent upgrades in transmission facilities led to the current operational US Navy VLF communications network, and other countries followed suit at various times. For example, Soviet naval communication systems were likely brought online in the late 1920s and 1930s during the interwar expansion period, and high power VLF transmitters were later established in the late 1940s and 1950s for submarine communications and time signals. These included Goliath, a rebuilt 1000 kW station first online in 1952 which partly used materials from a captured German 1940s era megawatt class VLF station operating at 16.55 kHz (Klawitter et al. 2000).

Table 2 of Clilverd et al. (2009) lists a variety of active modern VLF transmitter stations at distributed locations with power levels ranging from 25 to 1000 kW. These transmissions typically have narrow bandwidths (<50 Hz) and employ minimum shift keying (Koons et al. 1981). Along with these communications signals, a separate VLF navigation network (named Omega in the US and Alpha in the USSR) uses transmissions in the 10 kW range or higher (e.g. Table 1 of Inan et al. 1984) with longer key-down modulation envelopes of up to 1 second duration.

8.2 VLF Transmitters as Probing Signals


Beginning in the first half of the 20th century, a vigorous research field emerged to study the properties of VLF natural emissions such as whistlers, with attention paid as well to information these emissions could yield on ionospheric and magnetospheric dynamics. Due to the high power and worldwide propagation of VLF transmissions, the geophysical research field was well poised to use these signals as convenient fixed frequency transmissions for monitoring of VLF propagation dynamics into the ionosphere and beyond into the magnetosphere (e.g. Chap. 2 of Helliwell 1965; Carpenter 1966). This was especially true since VLF transmissions had controllable characteristics as opposed to unpredictable characteristics of natural lightning, another ubiquitous VLF source. Beginning in the 1960s and continuing to the present, a vast amount of work was undertaken by the Stanford radio wave group and others (e.g. Yu. Alpert in the former USSR) on VLF wave properties, including transmitter reception using both ground-based and orbiting satellite receivers. These latter experiments occurred both with high power communications and/or navigation signals and with lower power (∼100 W), controllable, research grade transmitter signals.

The transmitter at Siple Station in Antarctica (Helliwell 1988) is worthy of particular mention, as the installation lasted over a decade (1973–1988) and is arguably the largest and widest ranging active and anthropogenic origin VLF experiment series. Two different VLF transmitter setups were employed at Siple covering 1 to ∼6 kHz frequency, with reception occurring both in-situ on satellites and on the ground in the conjugate northern hemisphere within the province of Quebec. Of particular note, the second Siple “Jupiter” transmitter, placed in service in 1979, had the unique property of having flexible high power modulation on two independent frequencies. This allowed targeted investigations of VLF propagation, stimulated emissions, and energetic particle precipitation with a large experimental program employing a vast number of different signal characteristics not available from Navy transmitter operations. These included varying transmission lengths, different modulation
patterns (e.g. AM, SSB), polarization diversity, and unique beat frequency experiments employing two closely tuned VLF transmissions. Furthermore, the ability to repeat these experiments at will, dependent on ambient conditions, allowed assembly of statistics on propagation and triggered effects. These led to significant insights that were not possible for studies that relied on stimulation from natural waves (e.g. chorus) that are inherently quite variable.

Several excellent summaries of the literature on VLF transmission related subjects are available with extensive references, including the landmark work of Helliwell (1965) as well as the recent Stanford VLF group history by Carpenter (2015). As it is another effect of anthropogenic cause, we mention briefly here that a number of studies in the 1960s also examined impulsive large amplitude VLF wave events in the ionosphere and magnetosphere caused by above-ground nuclear explosions (e.g. Zmuda et al. 1963; Helliwell 1965).

Observations of VLF transmissions included as a subset those VLF signals that propagated through the Earth-ionosphere waveguide, sometimes continuing into the magnetosphere and beyond to the conjugate hemisphere along ducted paths (Helliwell and Gehrels 1958; Smith 1961). Ground based VLF observations (Helliwell 1965) and in-situ satellite observations of trans-ionospheric and magnetospheric propagating VLF transmissions were extensively used as diagnostics. For example, VLF signals of human origin were observed and characterized in the topside ionosphere and magnetosphere for a variety of scientific and technical investigations with LOFTI-1 (Leiphart et al. 1962), OGO-2 and OGO-4 (Heyborne et al. 1969; Scarabucci 1969), ISIS 1, ISIS 2, and ISEE 1 (Bell et al. 1983), Explorer VI and Imp 6 (Inan et al. 1977), DE-1 (Inan and Helliwell 1982; Inan et al. 1984; Sonwalkar and Inan 1986; Rastani et al. 1985), DEMETER (Molchanov et al. 2006; Sauvaud et al. 2008), IMAGE (Green et al. 2005), and COSMOS 1809 (Sonwalkar et al. 1994). VLF low Earth orbital reception of ground transmissions have been used also to produce worldwide VLF maps in order to gauge the strength of transionospheric signals (Parrot 1990).

...........
...........

9 High Frequency Radiowave Heating


Modification of the ionosphere using high power radio waves has been an important tool for understanding the complex physical processes associated with high-power wave interactions with plasmas. There are a number of ionospheric heating facilities around the world today that operate in the frequency range ∼2–12 MHz. The most prominent is the High Frequency Active Auroral Research Program (HAARP) facility in Gakona, Alaska. HAARP is the most powerful radio wave heater in the world; it consists of 180 cross dipole antennas with a total radiated power of up to 3.6 MW and a maximum effective radiated power (ERP) of ∼4 GW. The other major heating facilities are EISCAT, SURA, and Arecibo. EISCAT is near Tromso, Norway and has an ERP of ∼1 GW. SURA is near Nizhniy Novgorod, Russia and is capable of transmitting ∼190 MW ERP. A new heater has recently been completed at Arecibo, Puerto Rico with ∼100 MW ERP. There was a heating facility at Arecibo that was operational in the 1980s and 1990s but it was destroyed by a hurricane in 1999. The science investigations carried out at heating facilities span a broad range of plasma physics topics involving ionospheric heating, nonlinear wave generation, ducted wave propagation, and ELF/VLF wave generation to name a few.

During experiments using the original Arecibo heating facility, Bernhardt et al. (1988) observed a dynamic interaction between the heater wave and the heated plasma in the 630 nm airglow: the location of HF heating region changed as a function of time. The heated region drifted eastward or westward, depending on the direction of the zonal neutral wind, but eventually “snapped back” to the original heating location. This was independently validated using the Arecibo incoherent scatter radar for plasma drift measurements (Bernhardt et al. 1989). They suggested that when the density depletion was significantly transported in longitude, the density gradients would no longer refract the heater ray and the ray would snap back, thereby resulting in a snapback of the heating location as well. However, a recent simulation study using a self-consistent first principles ionosphere model found that the heater ray did not snap back but rather the heating location snapped back because of the evolution of the heated density cavity (Zawdie et al. 2015).

The subject of ELF wave generation is relevant to communications with submarines because these waves penetrate sea water. It has been suggested that these waves can be produced by modulating the ionospheric current system via radio wave heating (Papadopoulos and Chang 1989). Experiments carried out at HAARP (Moore et al. 2007) demonstrated this by sinusoidal modulation of the auroral electrojet under nighttime conditions. ELF waves were detected in the Earth’s ionosphere waveguide over 4000 km away from the HAARP facility.

VLF whistler wave generation and propagation have also been studied with the HAARP facility. This is important because whistler waves can interact with high-energy radiation belt electrons. Specifically, they can pitch-angle scatter energetic electrons into the loss cone and precipitate them into the ionosphere (Inan et al. 2003). One interesting finding is that the whistler waves generated in the ionosphere by the heater can be amplified by specifying the frequency-time format of the heater, as opposed to using a constant frequency (Streltsov et al. 2010).

New observations were made at HAARP when it began operating at its maximum radiated power 3.6 MW. Specifically, impact ionization of the neutral atmosphere by heater-generated suprathermal electrons can generate artificial aurora observable to the naked eye (Pedersen and Gerken 2005) and a long-lasting, secondary ionization layer below the F peak (Pedersen et al. 2009). The artificial aurora is reported to have a “bulls-eye” pattern which is a refraction effect and is consistent with ionization inside the heater beam. This phenomenon was never observed at other heating facilities with lower power (e.g., EISCAT, SURA).
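Back to me for a moment; this bit is mine, not the paper's. The limited seawater penetration mentioned in Sect. 8.1 is easy to sanity-check with the standard skin-depth formula, assuming a typical seawater conductivity of about 4 S/m:

```python
import math

def skin_depth_m(freq_hz, sigma=4.0):
    """Skin depth sqrt(2 / (omega * mu0 * sigma)); sigma ~4 S/m for seawater."""
    mu0 = 4.0e-7 * math.pi          # vacuum permeability
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu0 * sigma))

print(f"{skin_depth_m(20e3):.1f} m at 20 kHz (VLF)")   # ~1.8 m
print(f"{skin_depth_m(76.0):.0f} m at 76 Hz (ELF)")    # ~29 m
```

With roughly 8.7 dB of attenuation per skin depth, a VLF signal is only usable a handful of metres below the surface, which is why ELF systems were built for deeper reception.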

Friday, 19 May 2017

Speed-bump 101

I get a bit tired of some of the silly thoughts that go around about speed-bumps. This is mainly due to the disinformation IEX and Michael Lewis spread about their efficacy. This is a simple meander to cover just the basics for those of us who are not rocket scientists. There is nothing new here. It is a bit quick and hasty, so please feed back any obvious errors so I may clean it up and make it a useful speed-bump 101 meandering.

In the context of NMS rules so far, a symmetric speed-bump fundamentally just makes for a slow exchange. Let's have a quick meander at the simplest example of such a view.

Here is a happy client connected to an exchange and they get a 750-microsecond typical latency or round-trip time (RTT):



The fastest exchange in the US in 2009 was Bats. Bats operated, at one stage in 2009, with a typical RTT of 443 microseconds. Ah, those were the good old days of microseconds instead of nanoseconds and picoseconds. The speed in the diagram is therefore not dissimilar to an old exchange of an antique 2009 vintage. For what it's worth, Bats continues innovating and is now a sub 100-microsecond exchange. Michael Lewis complained in Flash Boys, the infamous piece of fiction, that fast HFT guys and gals could beat you to the punch on your trades by shaving some microseconds off their reaction times. Lewis said you didn't have a chance and a speed-bump would level the playing field for you. Let's dig into that thought.

Here is a sophisticated picture of a simple speed-bumped market:



Guess what? It looks just like the other 750-microsecond exchange to the client. It is still a race to get an order to the public point of presence (POP). It's still a race for reacting to news and external market data from other exchanges. You still want to be as close to the exchange or POP as you can to help you compete in the race. You will still need to carefully plan your trading infrastructure so you don't suffer a disadvantage if latencies are important to you. Microseconds still matter as much as they did with the previous model.

If you didn't know this speed-bumped exchange had a speed bump, would you be able to tell the difference between the two?

No. Here, speed-bumps don't matter. You can't even tell if the bump exists.

Exchanges aren't typically as slow as speed-bumped exchanges. Normal exchanges tend to be a little bit faster. You can probably guess why. A fast exchange may have an RTT of 10-25 microseconds today. Here is a sophisticated diagram of a modest exchange that operates with an expected RTT of 50 microseconds:


It is also just as much of a race to react, replenish, hedge, and otherwise nuance your trading. Given that reactions within your own systems are not necessarily dictated by the exchange, you have exactly the same need, or lack of need, to be swift to suit your trading agenda. Speed-bumps don't change the competitive necessity, or lack thereof, for client speed.

Fundamentally, a symmetric speed-bump just gives you a slow exchange. 

Slow exchanges are dumb


Why are slow exchanges dumb?

A slow exchange makes you suffer more risk with your trading. If you use the 50-microsecond exchange above, you could have received a fill and hedged with something else, or replenished your market making, and have done that more than a dozen times, before the slow 750-microsecond exchange even gave you a response.
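The "more than a dozen times" claim is just division; a trivial sketch using the RTTs from the diagrams above:

```python
FAST_RTT_US = 50.0    # the modest fast exchange from the diagram, microseconds
SLOW_RTT_US = 750.0   # the speed-bump-class exchange

# Each completed round trip is another chance to fill, hedge, or replenish.
round_trips = int(SLOW_RTT_US // FAST_RTT_US)
print(round_trips)    # 15 - "more than a dozen" chances before one slow reply
```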

An easy way to bring this home to my simple head is to imagine an exchange that takes an hour to respond alongside another exchange that takes a second to respond. 

Non-farm payrolls come out. 

It's a big number.

You want to react: cover some shorts, and buy some longs, in a controlled fashion where you only take on so much risk, or delta, in bite sized pieces. The one-second responses from the faster exchange are cool as you can get fills, send in orders, and rinse-repeat as suits. Sometimes you'll miss or hit with your orders. You'll be able to adjust fairly easily as you go along. All is good.

Now imagine trying to do the same at the exchange that takes an hour to get back to you. It is still a race to fire in your initial orders and get in the queue. You're still competing. Microseconds matter even on the one-hour exchange. However, you won't know if your orders have been filled and what risk you may be wearing for a long time. You can guess and assume you've been filled, or send market orders and hope for the best, or any fill. It is a real mess - a risky mess.

If you are hooked up to both the one-second and the one-hour exchange, you could use the one-second exchange's market data as a fair value indicator and try shooting orders with fair prices into the one-hour exchange, hoping for the best. It should be obvious to you that the one-second exchange becomes the price leader and the safest place to trade.

This little thought experiment shows quite clearly, I hope, that faster exchanges, all other things being equal, are the natural hubs for liquidity. They are safer, have better risk, and lead to better price discovery.  Instead of doing just one transaction compared to a slow exchange, you can offer prices, get hit, hedge, and offer further prices once again to the market. This makes for a much-improved market. 

The other thing you need to take away from this thought experiment is that human time scales and computer timescales are different. The two newest speed-bumps are both going to be 350 microseconds. It seems fast. It is around a thousand times faster than the blink of an eye. However, this is a bad benchmark. 350 microseconds is a long time in modern computing. It is likely enough time for your phone to execute over a million processor instructions. Yes, your phone. We need to resist over-anthropomorphising such functionality. Today's faster exchanges can do over twenty round trips of orders in the time it takes to get just one answer from a 350-microsecond speed-bumped exchange. 350 microseconds seems fast, but today it is like an hour, even to someone fifty years old like me.
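The back-of-envelope numbers behind that paragraph, with my own assumed figures for the phone's clock speed and a fast exchange's RTT:

```python
BUMP_S = 350e-6       # the 350 microsecond speed-bump
PHONE_HZ = 3e9        # assumed ~3 GHz phone core, roughly one instruction/cycle
FAST_RTT_S = 15e-6    # assumed RTT of a fast modern exchange

instructions = BUMP_S * PHONE_HZ   # ~1.05 million instructions
round_trips = BUMP_S / FAST_RTT_S  # ~23 full round trips
print(f"~{instructions:,.0f} instructions, ~{round_trips:.0f} round trips")
```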

Slow exchanges are dumb. IEX and NYSE American are slow exchanges.

IEX and NYSE American tell other people about your trades before they tell you


Things aren't quite so simple with the approved NMS speed-bumped models. Both of the SEC-approved models report everyone's quote and trade information to the centralised reporting of the SIP feeds before the speed-bump. That is, your competitors listening to the SIP may well receive information before you do if you're simply co-located near your speed-bumped exchange's POP expecting a simple life. It ain't so simple.


The UTP SIP has a latency of around 17 microseconds and the CTA SIP has a latency of around 80 microseconds for quotes and 110 microseconds for trades. Add in the communication time on the links and your salmon coloured competitors in the diagram above will get the market's information before you do. Dutifully waiting for speed-bumps of 350 microseconds is not how to optimally trade for now. Some people may care about this.
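To gauge roughly how early a SIP listener can see your quote versus you, co-located and dutifully waiting out the bump, here's a toy calculation using the SIP latencies quoted above; the link time is my placeholder assumption, not a measured figure:

```python
BUMP_US = 350.0                              # symmetric speed-bump delay
SIP_QUOTE_US = {"UTP": 17.0, "CTA": 80.0}    # approximate SIP quote latencies
LINK_US = 90.0                               # assumed one-way transit to the listener

# Head start a SIP listener gets over a client waiting out the speed-bump.
head_start_us = {sip: BUMP_US - (proc + LINK_US)
                 for sip, proc in SIP_QUOTE_US.items()}

for sip, lead in head_start_us.items():
    print(f"{sip}: SIP listener sees your quote ~{lead:.0f} microseconds early")
```

Even with a generous link allowance, the listener is comfortably ahead of the speed-bumped client.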

Why speed-bumps?


There was no original reason really. IEX just screwed up their thinking. If you read the Flash Boys fiction, you'd see some of the book talked about Brad's experience of playing catch-up to the rest of the industry by stumbling across a known algo and re-implementing it as the algo he called Thor. Thor sent orders to various trade centres with spaced out timings so they would all arrive simultaneously and thus not leak information between venues. Part of the IEX thinking was that if they had a big speed-bump then they'd have more opportunity not to leak information. That made a bit of sense when the SIPs were very slow from a routeing point of view. However, it was kind of pointless as you don't need to be an exchange for doing that. That is the job of a broker.  Some people seem to use IEX today just as a router rather than for any particular exchange need. Seems a bit pointless, expensive, and inflexible, doesn't it?

Today the SIPs are much faster and your information is not protected. The approved speed-bumps are designed to leak. This highlights that this type of speed-bumped architecture is not sensible.

Despite making a song and dance about keeping only simple order types, IEX invented their complex auto-fading Discretionary PEG (DPEG) order type. DPEG has since been joined by their newer auto-fading Primary PEG (PPEG).

The speed-bump rationale for DPEG and PPEG is a bit more reasonable. I'll attempt an explanation.

Dark Fader


Most people don't want to be the dumb bunny being traded through when the price ticks. Say the price is at $10.01 / $10.02 and ticks down to $9.99 / $10.00. You'll groan out loud if you've just bought at $10.01. Most market makers try and avoid this by trying to predict if the price is going to tick and get out of the way. It's not the worst outcome for an asset manager as you wanted the stock anyway, but you too would rather have a better price and not be the dumb bunny. However, it is life and death for a market maker as your survival depends on earning the spread and not being adversely selected. If you get traded through on the tick all the time, a market maker will quickly go out of business.
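To make the dumb-bunny cost concrete, here's a toy mark-to-market of the tick above (my illustration, marking against the mid):

```python
# You are the market maker, resting a bid at $10.01 on a $10.01/$10.02 market.
buy_px = 10.01
old_mid = (10.01 + 10.02) / 2   # 10.015 before the tick
new_mid = (9.99 + 10.00) / 2    # 9.995 after the tick down

hoped_edge = old_mid - buy_px   # +0.005/share: half the spread if nothing moves
realised = new_mid - buy_px     # -0.015/share: traded through on the tick
print(f"hoped {hoped_edge:+.3f}, realised {realised:+.3f} per share")
```

Getting hit like this every tick turns a half-cent edge into a three-times-larger loss, which is why market makers fade.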

So what IEX did, and what NYSE American is copying, is to put a special algo within the exchange that doesn't get delayed and that can pull your order for you, or move the price away if you like. It fades. That is, the exchange gives their algos advantaged market data to step ahead of you. Michael Lewis referred to this as front-running in his novel, which it is not. It is a form of privileged latency gaming. It is a bit nasty for the market maker, broker's algo, or sophisticated asset manager as it subverts their role in the market. A client of the exchange doesn't have the special non-speed-bumped ability to run algos within the exchange, so any innovation they may dream up is disadvantaged. They can't fairly compete and thus innovation is stifled. This is not something you'd expect to be encouraged, but alas, the SEC has mistakenly allowed it.
Non-speed-bumped access by an exchange's algos prevents clients from innovating.
An innovation that kills innovation.

It gets a bit more complicated. These special order types are dark orders or non-displayed. You don't know they are there. They don't mess up the quote feed as you can't see them. These dark orders have a priority below displayed order types, so conventional market making works OK, along with the requisite latency games for displayed orders. However, due to the speed-bump, which allows this special algo access for dark orders, the exchange is a slow one and we should now understand that slow exchanges are dumb. That is, from a lit perspective it is simply a standard exchange, just an excruciatingly slow one that also leaks.

Franken-pool


What we have here is a Frankenstein exchange hybrid that marries a dark pool and regular exchange: a Franken-pool. It's quite franked up. No one would ever have won SEC approval for a wholly dark pool to become a public exchange and remain dark. A really slow lit exchange is too dumb for words and makes no sense. By marrying the two together, IEX has managed to fool the SEC into approving a dark pool, the Franken-pool, as a public exchange. From a lit perspective, you can ignore the dark part. From the dark pool perspective, the lit part is virtually just one of many external exchanges.

What is the outcome for the Franken-pool so far? Less than twenty percent of the IEX exchange's total handled shares is lit volume. In fact, there is usually more volume routed to other exchanges than the volume that gets executed as lit. As volumes rise, the dark component percentage is also rising.

As a fine purveyor of risk, trader or investor, you quite often really want to trade when things are happening. You know, when the price is moving. It's part of the utility of the whole marketplace thing. It is disturbingly weird that the speed-bump enabled dark fading order types prevent you from trading at those times. They fade rather than trade. This will also make some of the trade stats look artificially good on IEX as trades don't happen at times of risk. That is really quite strange for a risk-based utility. More alternative facts running around as statistics is all we need. Also, as IEX's slightly dumb dark fader algo generates lots of false positives, you'll often lose priority to other dark non-fading pegs. Well managed mid-point pegs may dominate dark-fading pegs. The whole IEX pile of franked up dark matter is a bit messy.

The SEC and investors have been hoodwinked. 

I feel the SEC needs to step back and think more deeply about the privileged role of licensed exchanges in public markets. What role and importance does price discovery have? What role, if any, should dark orders play? Do you want to gum up the system and make it less efficient with slow exchanges? Should the national market try to be efficient?

The vested interest noise certainly makes policy hard. The pressure applied to the SEC makes their IEX approval error quite understandable even if it is disappointing. Stasis can be a bad thing. To an extent, mistakes should be expected and their likelihoods even loosely encouraged to hasten the pace of reform, but only if mistakes, like IEX, can be rolled back.

Yeah, I'm not fond of a significant speed-bump as part of a public exchange. Outside the public markets, a parasitic dark-pool as an ATS with a speed-bump that prevents customer innovation may make sense to a limited degree if you can get safe passive inexpensive matches done without leaking information. Just be careful what you wish for.

Happy trading,

--Matt.

100G Ethernet NICs - Broadcom joins Mellanox, Cavium's QLogic, and Chelsio

New 100Gb Ethernet NICs announced last month from Broadcom bring the number of mainstream 100Gb Ethernet NIC vendors to four. A number of FPGA vendors also offer solutions for 100Gb Ethernet, but here I've just decided to meander through the usual NIC vendors. Remember, if you want to trade at sub-100 nanoseconds, you'll have to avoid the PCIe bus transfer times and stick to FPGA tick-to-trade solutions.

Mellanox has been shipping 100GbE for around two years. Cavium's QLogic 100GbE NICs are a bit over a year old. Chelsio started with their 100GbE solution earlier this year.

Broadcom is due to soon ship both an Open Compute Project (OCP) Mezzanine Card 2.0 multi-host form factor (M1100PM) and the usual PCIe NIC form factor (P1100P).
[Image: OCP Mezzanine Card 2.0 form factor, from the OCP specification]
FreeBSD added support for Broadcom's 100GbE a couple of weeks ago, so the ducks are lining up for rollout. The BCM57454 chipset supports Nvidia's GPUDirect via RDMA for your HPC and ML enjoyment. Virtual switch and embedded security may take some load off your hosts. These are single port QSFP28 solutions.

[Image: Mellanox ConnectX-6 EN]
All the 100GbE vendors so far have settled on QSFP28. We'll have to wait a little longer for both Broadcom's multi-port 100GbE solutions, and a nod to PCIe 4.0. It is worth noting that PCIe 3 x16 does not support enough bandwidth for concurrent dual 100Gbps transfers. 
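The PCIe 3 x16 bandwidth ceiling is easy to check from the lane count, signalling rate, and line coding alone (real protocol overheads reduce it further):

```python
lanes = 16
raw_gt_per_s = 8e9     # PCIe 3.0 signalling rate per lane
encoding = 128 / 130   # 128b/130b line coding used by PCIe 3.0

usable_gbps = lanes * raw_gt_per_s * encoding / 1e9
print(f"~{usable_gbps:.0f} Gbps per direction")  # ~126 Gbps, well short of 200
```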

Most vendors' 100GbE solutions support versions of RoCE, RDMA, GPUDirect, et al., but there are quite a lot of differences in the details. Such details may be important to you at 100GbE as the offloads can help your CPUs out quite a bit. CPUs need help. 100Gbps is a lot of data for a little CPU to worry about, especially as packet loads climb to 200 million per second. Think about that. If you only had one CPU, you'd only have 5 ns per packet of processing time to keep up with that flow. We rely on offloaded help, direct memory access methods, and steering to multiple CPUs to stop from drowning.
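That 5 ns figure falls straight out of the packet rate; in CPU cycles it's even more sobering (the core clock is my assumption):

```python
pps = 200e6      # packets per second, as above
cpu_hz = 3e9     # assumed single ~3 GHz core

ns_per_packet = 1e9 / pps         # 5 ns of wall-clock budget per packet
cycles_per_packet = cpu_hz / pps  # ~15 cycles - barely a cache miss's worth
print(f"{ns_per_packet:.0f} ns/packet, ~{cycles_per_packet:.0f} cycles")
```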

Here is a list, for your convenience, of the current main network vendors' 100GbE solutions:

Mellanox

[Image: QLogic FastLinQ QL45611HLCU]

Cavium - QLogic

Chelsio

Broadcom


Solarflare makes excellent 10GbE and 40GbE solutions in both PCIe and OCP form factors. Solarflare and Mellanox are currently the "goto" NICs for low-latency trading. Hopefully, Solarflare will not be too far away from 100GbE delivery and be ready for exchanges reaching beyond their current 40Gbps maximum offerings.

Happy trading,

--Matt.

Wednesday, 17 May 2017

NYSE American - attack of the clones

Today, the SEC approved the application for NYSE's speed-bumped IEX clone.

A hall of mirrors it is:
[Image: SEC NYSE American software speed-bump approval]
The financial press quickly reported the approval.

This was despite a last-ditch pitch for disapproval by IEX,
"On Wednesday, May 10, 2017, David Shillman, Richard Holley, Sonia Trocchio, and Michael Ogershok, all from the Division of Trading and Markets’ Office of Market Supervision, met with John Ramsay, representative of IEX. The discussion concerned NYSE MKT LLC’s proposed rule change to amend NYSE MKT Rules 7.29E and 1.1E to provide for an intentional delay to specified order processing, including the comments reflected in IEX’s public comment letters submitted to date on the proposed rule change."
The SEC had no real choice, given the documents were in proper order, due to the precedent IEX set. This is the sad conclusion Bats' SEC letter also came to,
"However, in light of the Commission’s approval of IEX’s delay mechanism and the Commission’s related interpretation of Rule 611 of Regulation NMS, Bats sees no legal grounds for the Commission to disapprove NYSE MKT’s proposed rule change."
There is a duo of devilish differences to delight us in their deplorability. Let's meander on.

SIP games


Firstly, NYSE American customers will have their quotes and trades exposed on the SIP data feeds before NYSE American reports back to them. That is a little deplorable you'd have to say. This is a similar architecture to IEX but the details are quite different and those differences matter. Here is the relevant piece from the SEC approval:

I have previously talked about the SIP games applicable to IEX's Dark Fader in "IEX trading." That is, at Mahwah and Carteret you may receive IEX market data before IEX's own local customers. This is due to the CTA and UTP SIP processors being faster than IEX Dark Fader's 350 microsecond delay. Bats' letter to the SEC also pointed this out,
"At a high level, Bats reiterates its position that speed bumps of the nature that IEX employs and NYSE MKT is proposing provide zero benefits to displayed orders 1"
by way of a reference to the same in their footnote,


If we add in the Mahwah|NYSE to Carteret|Nasdaq link and update the latencies to my previous somewhat lame diagram, we get the following approximate sketch:

The SIP processing latencies are medians. I've also added the trade and quote latencies to draw a distinction as the CTA SIP is inexplicably much slower for trade processing. The latencies were drawn from the current reports, except the CTA SIP trade latency which was drawn from the previous report as it showed a more relevant median latency due to the monthly breakdown. Please note Bats Exchange is to be found in the NY4/NY5 complex.

As you may now understand, the latency picture is quite messy when you include the 350 microsecond speed-bumped exchange feeds and order feedback for both NYSE American and IEX into the pictures at Mahwah and NY5 respectively. This gets a little worse when you consider the non-displayed versus displayed aspects of your trading trials and tribulations.

Who benefits? Those who understand the market structure minutiae and also have the resources to expend to put facilities in all the necessary locations. A smart HFT will not like the mess as it acts as a long term friction hampering efficient market development. A smart HFT is also the most likely to benefit due to their laser focus on the small details that ensure their survival. There is a reason why Citadel is often the biggest trader on IEX. That is, an HFT relies on good markets and negative developments, such as these speed-bumped dark faders, are not in the best interest of the market, and, by implication, not in the interest of an HFT.

Supportive HFTs are acting a little sycophantically in my book. They are perhaps disregarding the long-term good for the short-term politic.

You can see by way of the above diagram, for both NYSE American and IEX, simple co-lo is not enough. You need to have processing capabilities in all the main centres if you wish to trade optimally. At least for the CTA Plan's SIP stocks, you should already be colocated for the SIP's 80 microsecond delayed quote feed, around 270 microseconds before other NYSE American customers get their data. It's just more expense.
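The arithmetic behind that head start is simple enough to sketch. A minimal back-of-envelope in Python, using the 350 microsecond speed bump and the roughly 80 microsecond median SIP quote latency mentioned above (both figures approximate medians from the reports):

```python
# Rough sketch: how far ahead of other NYSE American customers you may see
# data via the SIP. Figures are approximate medians quoted in the text.
speed_bump_us = 350   # NYSE American / IEX style access delay
sip_quote_us = 80     # approximate median CTA SIP quote processing latency

head_start_us = speed_bump_us - sip_quote_us
print(f"SIP quote head start: ~{head_start_us} microseconds")  # ~270
```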

Do you also find it weird that UTP stocks' data from NYSE American will turn up in Carteret's Nasdaq data centre before it makes an appearance in NYSE's Mahwah facility?

Welcome to an SEC mandated hell.

Dark Fading


The largest part of IEX's success to date in achieving a market share of slightly over two percent is due to its dark trading. Less than twenty percent of IEX's trading is attributable to lit volume.


IEX is a parasitic vehicle that subverts price discovery. Think about that. It is a public exchange that thwarts price discovery, openness, competition, efficiency, progress, and innovation. So far, as IEX has bumped its size up a little, you may see the threatening correlation that is emerging. More market share equates to the likelihood of darker trading:
There is certainly a place for dark parasitic trading. I would argue that place is not within an advantaged public marketplace. The SEC either decided otherwise, or should regret its approval.

Another aspect of IEX's Dark Fader is that it is expensive - very expensive at 9 mills a side. NYSE American may use the same parasitic force to overcome IEX's Dark Fader by simple economics. If NYSE prices its DPEG and Primary Peg equivalents at a more reasonable rate then perhaps the IEX Dark Fader infection may be extinguished by competitive forces. NYSE may price IEX out of existence. Please do so.

This brings NYSE American's own dark fading into sharp relief. Presently NYSE has simply proposed copying an older version of IEX's crumbling quote indicator to power its own dark fading pegs. This will not work as well as it could.

Here, for completeness, are the current IEX Dark Fading formulae, now schmarketed as IEX Signal:

Crumbling quote: flagged when QIF > threshold, with the threshold set by the spread:

  0.39 if spread <= $0.01
  0.45 if $0.01 < spread <= $0.02
  0.51 if $0.02 < spread <= $0.03
  0.39 if $0.03 < spread

The variable definitions below are quoted from pages 33 & 34 of Exhibit 5 to the March 10 IEX SEC filing. Note that in this filing, instead of including all the markets in the number of protected quotations, IEX has chosen to incorporate only eight exchanges (XNYS, ARCX, XNGS, XBOS, BATS, BATY, EDGX, EDGA), thus N and F may range from 1 to 8. Three exchanges (XNGS, EDGX, BATS) still get a special mention, as per the formula's previous iteration, in the Delta definition.

  1. N = the number of Protected Quotations on the near side of the market, i.e. Protected NBB for buy orders and Protected NBO for sell orders.
  2. F = the number of Protected Quotations on the far side of the market, i.e. Protected NBO for buy orders and Protected NBB for sell orders.
  3. NC = the number of Protected Quotations on the near side of the market minus the maximum number of Protected Quotations on the near side at any point since one (1) millisecond ago or the most recent PBBO change, whichever happened more recently
  4. FC = the number of Protected Quotations on the far side of the market minus the minimum number of Protected Quotations on the far side at any point since one (1) millisecond ago or the most recent PBBO change, whichever happened more recently
  5. EPos = a Boolean indicator that equals 1 if the most recent quotation update was a quotation of a protected market joining the near side of the market at the same price
  6. ENeg = a Boolean indicator that equals 1 if the most recent quotation update was a quotation of a protected market moving away from the near side of market that was previously at the same price.
  7. EPosPrev = a Boolean indicator that equals 1 if the second most recent quotation update was a quotation of a protected market joining the near side of the market at the same price AND the second most recent quotation update occurred since one (1) millisecond ago or the most recent PBBO change, whichever happened more recently.
  8. ENegPrev = a Boolean indicator that equals 1 if the second most recent quotation update was a quotation of a protected market moving away from the near side of market that was previously at the same price AND the second most recent quotation update occurred since one (1) millisecond ago or the most recent PBBO change, whichever happened more recently.
  9. Delta = the number of these three (3) venues that moved away from the near side of the market on the same side of the market and were at the same price at any point since one (1) millisecond ago or the most recent PBBO change, whichever happened more recently: XNGS, EDGX, BATS.
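For the quantitatively inclined, the spread-conditioned threshold test reduces to a few lines. A minimal sketch in Python; the function name and the qif input are mine, and computing QIF itself is IEX's fitted model, which is not reproduced here:

```python
def crumbling_quote(qif: float, spread: float) -> bool:
    """True if the quote instability factor exceeds the spread-dependent
    threshold from the March 10 filing's piecewise definition."""
    if spread <= 0.01:
        threshold = 0.39
    elif spread <= 0.02:
        threshold = 0.45
    elif spread <= 0.03:
        threshold = 0.51
    else:
        threshold = 0.39
    return qif > threshold

# e.g. a QIF of 0.42 trips the signal on a one cent spread but not a two cent one
print(crumbling_quote(0.42, 0.01), crumbling_quote(0.42, 0.02))  # True False
```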
The parameterisation of the crumbling quote will need to be specialised for NYSE's location and related latencies. It is a fairly straightforward task for the quantitatively inclined, but it is a job that nevertheless still needs to be done. I am hopeful NYSE will take this a step further and produce something a little better and more advanced. The IEX formulae are pretty lame, which not only prevents brokers, asset managers, and traders from innovating but also acts as a retrograde disservice to the financial community.

NYSE, if you need help, my email address is on my contact page.

It could be worse. It soon may be. Let's hope CHX's harmful speed-bump and Nasdaq's rather silly ELO don't add to the NMS hall of mirrors. You'd hope the SEC may see the error of its ways and one day mandate the removal of both IEX's and NYSE American's speed-bumps. Now, that would be a truly beneficial NMS development. The odds are too long to take such a bet. You'd better not wait: keep rolling out your IEX and NYSE American infrastructure to Mahwah, Carteret, and Secaucus, along with the required microwave, laser, or millimetre wave assets. Life ain't meant to be easy.

Finally, it remains to be seen what attention, if any, NYSE will pay to IEX's patents and patent applications.

Happy trading,

--Matt.

Friday, 12 May 2017

Oh my - more lines, radios, and cables

I don't think I'll ever puzzle out this interwebby thing. Last week's meander "Lines, radios, and cables - oh my" was a bit more widely read than I expected a meander about cables to be. Quite the quiet surprise to me. Thank you all for the feedback I've received.

There was nothing new in that blog. I expect just having a summary of some of the aspects of those things mentioned was a useful consolidation. Most people knew most of the stuff but perhaps a few little snippets, like hollow-core fibre, open ladder lines, and HF MIMO found a broader audience.

There was something new to me this week that took me a little by surprise.

Sub-millimetre wireless transmission with wires


Arnold Sommerfeld (1868-1951)
How about a Terabit per second over your home's existing copper telephone cable?

Back in 1899, Arnold Sommerfeld wrote in "Ueber die Fortpflanzung elektrodynamischer Wellen längs eines Drahtes" [Ann. Physik u. Chem. 67, 233 (1899)] something I can't read as my German is not so great, but James R. Wait assures me, in English, in 1957 via "Excitation of Surface Waves on Conducting, Stratified, Dielectric-Clad, and Corrugated Surfaces" that it says,
"It was pointed out by Sommerfeld [8] nearly 60 years ago that a straight cylindrical conductor of finite conductivity can act as a guide for electromagnetic waves." 
This may be important if you want a slightly terrifying Terabit, or World Turtle-like, Internet connection coming into your home but can't get fibre in your digital diet. Just this week Rick Merritt over at EETimes wrote a nice piece, "DSL Pioneer Describes Terabit Future: Wireless inside a wired Swiss cheese", on Terabit DSL, or TDSL. This is an approach, not yet real, that may deliver Tbps capable DSL over your existing copper wires into your home. Perhaps Australia's NBN isn't so silly in its fibre to the node approach.

The really cool thing about this approach is that it is sending the signal down the gaps within the cable. It is using the effect described by Sommerfeld in 1899. The cable's wires are acting as waveguides and the signal is propagating down the cable as a surface mode. The signal is rocketing down between the wires! Kind of cool.

Terabit DSL (TDSL) - Use of a copper pair's sub-millimeter waveguide modes

The suggestion is to use something like 4096 sub-carriers from 100 GHz to 300 GHz with 48.8 MHz spacing and bit loadings of 1-23 b/Hz for a Tbps, or 50-150 GHz for 10 Gbps. Little antennae sit around each wire end, including at the customer premises. You'll need high-speed analogue to digital converters and a vector processing engine capable of many teraops to pull out the signal. Perhaps you can even do a Petabit per second for a 10cm cable instead of 10 Gbps at 500m. It's certainly an interesting cable that doesn't yet exist.
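Those numbers hang together on a back-of-envelope check. A hedged sketch in Python; the ~5 b/Hz average loading is my assumption to hit the quoted Tbps, as the article only gives a 1-23 b/Hz range:

```python
# Back-of-envelope check of the TDSL figures quoted above.
subcarriers = 4096
spacing_hz = 48.8e6          # sub-carrier spacing
avg_bits_per_hz = 5          # assumed average spectral efficiency (mine)

band_hz = subcarriers * spacing_hz
throughput_bps = band_hz * avg_bits_per_hz

print(f"band: {band_hz / 1e9:.0f} GHz")                 # ~200 GHz, i.e. 100-300 GHz
print(f"throughput: {throughput_bps / 1e12:.1f} Tbps")  # ~1.0 Tbps
```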

This sub-millimetre waveguide approach may propagate fast but it is likely the vector processing used to suck the bits out of the cable's gaps will take a bit too much latency to provide a true low latency solution. The other stumbling block would be the short cable lengths, but you never know when silicon gets involved how cheap or low latency those repeaters may be one day. One for the future. Interesting nevertheless.

HFT memo on HF MIMO 


TabbForum produced a tidier, edited version of the previous cable article. Larry Tabb's TabbForum is a forum worth keeping tabs on. Larry's Market Structure Weekly videos are a great summary of market events and ideas worth knowing about - highly recommended. In the comments of TabbForum, Sam Birnbaum (W2JDB) as an amateur radio and finance guy noted the travails of long range RF comms. Too true.

Before the later part of 1994 at Bankers Trust in Sydney, we had a microwave link to a production data centre over the harbour from the trading floor in the CBD. Not the best design choice. Heavy rain was problematic and smoke from bushfires was deadly to the link. Short links are tough and long links are tougher.

Difficulties abound at the best of times with HF propagation, especially with respect to the ionosphere. Australia relies on the ionosphere more than most countries. Our large sparse country relies on its HF based Jindalee Operational Radar Network (JORN), or over the horizon radar (OTHR), for layered surveillance out to around 3,000 km. Defence denies they can see a flock of seagulls in Singapore. Within this OTHR scheme, there is much reliance on characterising and predicting the ionospheric conditions.

A HAM, like Sam, faces the same issues for HF. We rely on ionospheric maps to work out appropriate transmission characteristics, such as this one from Australia's Bureau of Meteorology - Space Weather Services:
Ionospheric Map
This map shows the critical frequency of the ionosphere's F2 layer. Roughly speaking, frequencies below it will bounce, if approached vertically, and frequencies above will partially, or wholly, leak out to the stars. HF transmissions approach the ionosphere obliquely, so such a map acts as a strength map rather than a hard and fast constraint. Also with HF, you may get awkward dead zones. Such zones are too close for a normal skywave bounce but too far to be reached directly or by a near vertical incidence skywave (NVIS). Not always, but sometimes. Rough seas, wind, lightning, and meteors can all have a disruptive effect on HF. HF is a bunch of trouble. This is not the worst result, as transient comms that provide a latency benefit some of the time may still be helpful to a trader. After all, multicast UDP has no guarantee either, even if those guarantees are orders of magnitude apart. That said, low enough latency with adequate HF MIMO reliability is a real challenge.
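The oblique-versus-vertical point is the classic "secant law": the maximum usable frequency for a hop is roughly the critical frequency divided by the cosine of the incidence angle at the layer. A hedged sketch, assuming flat-earth geometry and a nominal 300 km F2 layer height, so only indicative for shortish hops:

```python
import math

def muf(fof2_mhz: float, hop_km: float, layer_height_km: float = 300.0) -> float:
    """Approximate maximum usable frequency for a single hop of length hop_km,
    via the secant law: MUF = foF2 / cos(theta)."""
    theta = math.atan((hop_km / 2) / layer_height_km)  # incidence angle at the layer
    return fof2_mhz / math.cos(theta)

# e.g. an foF2 of 7 MHz over a 1000 km hop supports roughly 13-14 MHz
print(f"{muf(7.0, 1000):.1f} MHz")  # 13.6 MHz
```

A vertical shot (hop of zero) degenerates to the critical frequency itself, which matches the map's "bounce if approached vertically" reading.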

I mentioned previously the large gaps you may need for your antennae for HF MIMO. That may no longer be the case. Instead of just spatial diversity, advances have also been made in polarisation diversity. Smaller antennae for HF MIMO may be OK. To the right is an example from Jiatong Liu's October 2015 PhD thesis, "HF-MIMO Antenna Array Design and Optimization" which also made it into Radio Science's 2010, "MIMO communications within the HF band using compact antenna arrays" [S. D. Gunashekar, E. M. Warrington, S. M. Feeney, S. Salous, and N. M. Abbasi].

If you'd like a very readable summary of HF technological progression then chapters 1 & 2, pages 1 to 23, of Mohammad Heidarpour's thesis, "Cooperative Techniques for Next Generation HF Communication Systems" are pretty easy going, clearly written, and have a minimum of Greek formulae.

So, just perhaps, a small dwelling in Cornwall would do instead of a trading farm, though the spatial diversity may not hurt you.

Hollow-core fibre


HCF has been around for a little while, with the first demonstration in 1999, and the terminology can be a little confusing. Few-mode transmission may be used over HCF. Sometimes HCF cabling gets referred to as FMF. However, there is a particular cable referred to as FMF. Just to confuse things further sometimes few-mode is used and other times fewer-mode is used. Also, as a nascent technology, there are many types of HCF as the race goes on to build better, or more specific, cables.

Cat videos also drove the development of HCF and FMF. FMF is a technology that has multiple uses. HCF can do high power and deliver lasers to another point. HCF's low latency can be useful in physics labs, especially for synchrotrons and the like. That said, one of the biggest reasons for the development of FMF, and not just HCF, is the Internet's exponential growth trajectory. It's all those cat videos peeps are watching. There is grave concern that all the regular old SMF optical cable we have in the ground may never be enough for all those fuzzy balls of fun. We could run out of plumbing. With multiple cores, we can do some MIMO space-division multiplexing (SDM) over a fibre and potentially get huge bandwidth benefits. HCF may allow us all to safely watch cat videos forever and a day. Bullet dodged.

Here is a little illustration of different fibre types for you:

Breakthroughs in Photonics 2012: Space-Division Multiplexing in Multimode and Multicore Fibers for High-Capacity Optical Communication (April 2013)
In the above diagram you'll see a cable termed FMF but sometimes other cables in the picture are somewhat informally referred to as FMF due to their use of few modes. That probably shouldn't be.

There are plenty of different types of HCF to amuse yourself with too. The main ongoing search in the HCF space is for lower loss, or attenuation, sometimes measured in dB/m, though, hopefully for a trader, dB/km is the preferred scale. Such attenuation optimisation may involve trade-offs in bandwidth, as a single mode or few modes may only be appropriate to a quite limited wavelength of light.

Here are some of the variations in HCF:

Hollow-core photonic bandgap fibers: technology and applications [Poletti, Petrovich, Richardson, 2013]
In terms of longer transmission for FMF, here is a list of trials from Haoshuo Chen's 2014 thesis with ranges from 17km to 7,326km:
Optical Devices and Subsystems for Few- and Multi-mode Fiber based Networks, p5
Note: FMF and HCF are not the same thing, though a given cable may be both
However, remember FMF and HCF are not the same thing. HCF should more properly be referred to as hollow-core photonic bandgap fibre (HC-PBGF) but most just refer to it as HCF. Sometimes HCF just gets lumped in with all the other FMF classes and referred to as FMF.

HCF types differ too. Telecom fibre and particle acceleration use cases differ. Telecom fibres want a particular core mode and few or no surface modes. A particle accelerator fibre wants a particular surface mode and no core modes to suck input power away. This leads to divergence in HCF design. Not all HCF cables are equal.
Hollow-core Photonic Band Gap Fibers for Particle Acceleration [Noble, Spencer, Kuhlmey, 2006 p22]
I'm glad I cleared that up for you. Just remember, HCF is the fast one. If you hear FMF, ask if it is HCF. You'll be able to confirm by the latency numbers, as 0.997c is not too hard to work out. Look out for an HCF coming to a trading link near you.
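If you want to check a vendor's latency claim, the per-kilometre arithmetic is indeed not too hard to work out. A small sketch comparing 0.997c hollow core against 0.66c standard fibre:

```python
# Time of flight per kilometre at the two velocity factors discussed above.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

for name, vf in [("hollow-core fibre (0.997c)", 0.997),
                 ("standard fibre (0.66c)", 0.66)]:
    us_per_km = 1 / (C_KM_PER_S * vf) * 1e6
    print(f"{name}: {us_per_km:.2f} us/km")  # ~3.35 vs ~5.05 us/km
```

That is roughly a third of the latency shaved off per kilometre, which is why HCF keeps turning up in trading link discussions.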

Neutrinos


KB commented on my previous cable article that Fermilab succeeded with the first neutrino communication in 2012 over 1.035km, including 240m of planet, with a bit error rate of under one percent, providing the MIT article, "First Digital Message Sent Using Neutrinos" referring to the paper, "Demonstration of Communication Using Neutrinos" [Stancil et al., 2012],
"In summary, we have used the Fermilab NuMI neutrino beam, together with the MINERvA detector to provide a demonstration of the possibility for digital communication using neutrinos. An overall data rate of about 0.1 Hz was realized, with an error rate of less than 1% for transmission of neutrinos through a few hundred meters of rock. This result illustrates the feasibility, but also shows the significant improvements in neutrino beams and detectors required for practical applications."
That's an explicit communication protocol - nice. The Opera Experiment reported, "Runs with CNGS neutrinos were successfully carried out in 2008 and 2009" which was earlier but not explicitly for communication, though the transfer of information did take place. So, who's on first?


KB's referred MIT article refers to a further MIT article, "How Neutrinos Could Revolutionize Communications with Submarines" which refers to a submarine paper for neutrino communication, "Submarine neutrino communication." This complements the submarine neutrino patents I previously referred to. I especially like this line in the paper,
"I am especially thankful to S. Kubrick and P. Sellers whose work served as inspiration."
Being There sprang to mind and I laughed out loud.

The SPS used for the proton beam in the Opera Experiment cost 1,150 million Swiss Francs at 1970s prices. You probably don't need 400 GeV. An HFT will need to work backwards and figure out what GeV she can get away with for her beam and then match a detector. Another teensy difficulty will be enabling irregular pulses regularly enough to suit your events, as continuous operations are unlikely to be possible for a while yet. You have to ensure your tech can complete your analysis in real time with the appropriate latency benefit. Then you might be able to cost your through-the-planet signalling setup. Surely you can do better on costs than SPS in 2017? It should be a fun project.

Happy trading,

--Matt.

Tuesday, 9 May 2017

IEX's dark fader = bad vibes

I've noted a few times that I'm not so sure that IEX's dark and expensive activity promotes healthy markets. Dark and expensive should be left to restaurants.

Here is how IEX's lit handling looks versus their market share since November:

It's not the best look in relative terms as we get a glimpse of a sad parasitic potential future if IEX grows its market share. Not a good look for a public exchange.

Jack Bogle surmised just this week that he thought the parasitic limit of index funds may be at least 75% of the market. IEX is already worse than Bogle's standard with less than 20% lit.

Bogle's point of view is not too different to my own simple point raised in January 2013 regarding dark algos. That is, parasitic is OK until it overwhelms the host. I don't believe dark activity is something the SEC should promote at public exchanges.

Perhaps IEX's Dark Fader would be friendlier if it was rebranded the "my little pony" exchange. How could you get upset with that? It's all about the marketing it seems. Dark Fader fits better though.

Public, fair, and open price discovery has got to be worth something.

In absolute terms, IEX remains dark and expensive. The April lit volume as a percentage of total shares handled fell to 18.7% from March's 19.8%. May has started lower with May 1 being only 15.6% which corresponded to IEX setting a new record market share of 2.603%.

This particular speed bump is not looking so good. Unfortunately, Dark Fader's force is strong in this one. Jedi wanted.



Happy trading,

--Matt.



"It's the vibe" - The Castle

Friday, 5 May 2017

Lines, radios, and cables - oh my

Spread Networks blew a lazy few hundred million dollars on a white elephant: a straighter optical fibre between Chicago and New York. Not all traders were wise enough to dodge the Spread Networks bullet, with the most famous customer, Getco, spending an unjustifiably inordinate amount. Microwave had been on the same link for over fifty years, was faster, and was already used for trading.

Don't make the same mistake as Spread. Be careful with your link choices and your cable choices.

Look at these cables. Decide the order of speed of propagation of signal in them. Many traders, but not so many engineers, may be surprised:

Basic cabling test: put these in order from slowest to fastest

The correct order from slowest to fastest, by the velocity of propagation, is d, a, c, b, then e. There is faster though. If you're geek enough, like me, to get a kick out of this kind of thing you may find this interesting. Most people would prefer to meander elsewhere I suspect. I'm not the guy you want to invite to your dinner party ;-)

Latency misconceptions


Even savvy traders, such as Getco, do make mistakes and invest millions of dollars inappropriately in the wrong communication technologies. Don’t do that.

Latency may be worth millions of dollars to your trade, but capital and recurrent expenditures may give you pause as you toss around modern HFT technology and potential ROIs. Tech can be expensive. You’d better understand it well before choosing your preferred cost and profile. Let’s have a look at some of the poorly understood and interesting, to me, misunderstandings and developments that may be important to both your latency critical and latency sensitive trading. Let’s meander through some of the points.

Is fibre transmission faster than transmission using electrical wires?

The answer is: it depends.

Is radio frequency transmission always faster than fibre?

The answer is: it depends.

The new low earth orbit (LEO) satellite service in pre-sales from LeoSat Enterprise LLC has reportedly snared a high speed trading customer. Could LeoSat really be faster than terrestrial communication?

The answer is: it depends (but unlikely).

Back in the day, when Getco released some S1’s & S4’s, there was a bit of trading community comment regarding notes in the accounts where it was disclosed that millions of dollars had been spent on Spread Networks fibre capacity between Chicago and New York,
“Colocation and data line expenses increased $18.9 million (52.0%) to $55.2 million in 2010 from $36.3 million in 2009 primarily due to the introduction of Spread Networks, which is a fiber optic line that transmits exchange and market data between Chicago and New York, and the build out of GETCO’s Asia-Pacific colocations and data lines.” [Knight Holdco, Inc., SEC S-4, 12 Feb 2013, page 227].  
Investing in Spread Networks was wasted money. Microwave links are faster and were already being used on that route. In fact, the first microwave link was built in 1949 for that route.
September 1949 Long Lines publication regarding New York to Chicago microwave link
Later, poor old Getco had their traders’ frustrations aired in public with the disclosure of an internal complaint regarding their internal microwave network being higher latency than a third party network available for use. I expect that was either the McKay Bros / Quincy Data or Tradeworx people. They do good work:

McKay Bros round trip microwave latency. Optical fibre is ~12ms on same path.
I use Getco as an example here not because they are incompetent, but rather because they are very good at what they do. Even Getco, now KCG, now Virtu, as good as they are, had missteps in low latency path development.

Wired 2012. Not so secret, hey Michael Lewis?
If you haven’t read Michael Lewis’ Flash Boys, and you really shouldn’t, you may have missed the low latency narrative centred around Spread Networks’ fibre roll-out that stitched together the book. The literary device used to end the book was the hook of a tower hosting a microwave network. This ending was left as evil hanging in the air much like a brick doesn’t. To me, such narrative abuse represents some very poor journalism. Such RF links and the vendors offering them had been widely discussed; Wired (2012) and the Chicago Tribune (2012) had weighed in on the microwave discussion, publishing vendors’ names and even prices. This is a snippet from the Chicago Tribune in 2012, years earlier than Flash Boys,
"He [Benti] said the microwave network starts at 350 E. Cermak, ends at another telecom hotel at 165 Halsey St. in Newark, N.J., and went live in the fourth quarter of 2009."
It was hardly a big secret and I found the presentation in Flash Boys somewhat scandalous. Barksdale and Clark, who Lewis had written a book about, “The New New Thing”, are investors in Spread Networks and IEX. They remain friends of Lewis. That looks material to Flash Boys objectivity, or lack thereof. Lewis marketed BS to help his friends.

Latency matters. Latency can be expensive. Latency technology has risks. Let’s expose some latency matters that matter.

Trading at the speed of light


Let’s meander through a little physics and then some of the history of a few communication links.

The speed of light in a vacuum is 299,792.458 km/s, near enough to 300,000 km/s, which is how I usually round it off. This is also the newish way that we define time. Light’s speed is commonly referred to as ‘c’. When you force an electromagnetic wave, whether light or RF, into a medium, such as a fibre for light, or a wire for RF, it goes a bit slower. You’ll have to remember that electrical transmission is different to photonic transmission, but related.

The speed of light in a standard optical fibre, either single mode fibre, or multi mode fibre, is around two-thirds of the speed of light in a vacuum. This is normally written as 0.66c.

The atmosphere isn’t a vacuum, thankfully for life on earth, but it doesn’t slow RF, including light, much at all. It’s close enough to c that we don’t bother with a discount and just say it’s 1c.

This is why microwaves make a big difference. If you could use point to point transmission for the roughly 1200 kilometres from Chicago to New York you’d get roughly 6 milliseconds for light in standard fibre and 4 milliseconds for direct RF transmission. Two million nanoseconds of difference is quite a big difference to a trader.
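Those two figures fall straight out of the distance and velocity factors. A minimal sketch of the one-way latency arithmetic, using the rough 1200 km path from the text:

```python
# One-way latency for the rough Chicago-New York path at the two
# propagation speeds discussed above.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_ms(distance_km: float, velocity_factor: float) -> float:
    """Time of flight in milliseconds over distance_km at velocity_factor * c."""
    return distance_km / (C_KM_PER_S * velocity_factor) * 1e3

path_km = 1200  # rough Chicago-New York distance
print(f"fibre (0.66c): {one_way_ms(path_km, 0.66):.1f} ms")   # ~6.1 ms
print(f"microwave (~1c): {one_way_ms(path_km, 1.0):.1f} ms")  # ~4.0 ms
```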

Spread Networks dug very straight lines and managed to beat other fibre networks to achieve the lowest fibre latency known on that link.  Spread’s latency was indeed around 6ms one way, as expected.  It’s a shame they didn’t fully appreciate the benefits of microwave comms on that link before they started digging. Let’s stick to just the tech for now. Here is the start of a table:

Medium Speed of transmission
Vacuum 1c
Atmosphere ~1c
Twisted pair ~0.67c
Standard fibre ~0.66c

That’s pretty rough and I’ve taken a few liberties which I’ll explain later. A very important and interesting thing about electrical transmission in wire is that the construction of the wire, and, perhaps even more importantly, the insulation, matters. Not just a bit, but a lot.

LMR-1700 coaxial cable specs. Note the velocity of propagation is 0.89c
If the “wire” was a coaxial cable, then the RF would enjoy travelling along the outside of those wires’ surfaces and burn rubber to achieve up to 0.89c [LMR-1700 low loss coax – Foam PE and 0.87c with Commscope 875 coax also with Foam PE].

Remarkably, older undersea coaxial cables used from around the 1930s might have been faster than some modern fibre cables. Not many people understand that. Then again, if you chose Neoprene as the dielectric in your cable, as earlier cables did, you’d only chug along at around 0.45c. The dielectric performance of the wire limits the speed: a high dielectric constant in your cable is bad news for latency. In the nineteenth century, most cables used Gutta-percha compounds, which had dielectric constants in the range of 2.4 to 3.4, and over 4 when wet, likely resulting in speeds significantly less than 0.66c. In the 1930s, the coaxial submarine cables around the world started using polyethylene, which has a typical dielectric constant of 2.26, giving a speed of around 0.66c. However, the construction matters a lot. A foam polyethylene has a dielectric constant of around 1.55, resulting in typical speeds of around 0.8c.
Coaxial cable
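The rule of thumb connecting those dielectric constants to speed is v = c / sqrt(εr). A quick check against the figures quoted above:

```python
import math

def velocity_factor(dielectric_constant: float) -> float:
    """Approximate velocity factor: v/c = 1 / sqrt(relative permittivity)."""
    return 1 / math.sqrt(dielectric_constant)

# Dielectric constants as quoted in the text.
for name, er in [("polyethylene", 2.26),
                 ("foam polyethylene", 1.55),
                 ("gutta-percha (dry)", 2.4),
                 ("gutta-percha (wet)", 4.0)]:
    print(f"{name}: {velocity_factor(er):.3f}c")
```

Running it reproduces the roughly 0.66c for solid polyethylene and 0.8c for foam, and shows why a wet nineteenth century gutta-percha cable crawled at half the vacuum speed of light.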

Open ladder line cable
Old tech can be fun. You can get more than 0.95c out of a simple open wire ladder line. 0.95c to 0.99c is the typical range for open ladder lines. You might remember an open wire ladder line if you cast your mind back to that really old school two parallel wire TV antenna cable with rectangular cut outs in the polyethylene webbing every inch or so. Who’d have thunk it? Ancient technology faster than fibre! Details matter.

Printed circuit board (PCB) design is both a science and an art. Standard PCB layers use a medium called FR4, basically fibreglass, which is probably the most common PCB filler. PCB transmission with FR4 is positively glacial, with 0.5c being typical. Various other laminates, such as Rogers, are used for high-speed channel and RF design; these have different properties again and are typically better for latency too.

Let’s look at a revised table:

Medium Speed of transmission
Vacuum 1c
Atmosphere ~1c
Open wire ladder ~0.95c
Coaxial cable ~0.8c
Twisted pair ~0.67c
Standard fibre ~0.66c
PCB FR4 ~0.5c

Now you can probably imagine building a twisted pair cable that is a bit rounder, more like coax, and not so flat, less of a PCB, and that cable may be a little faster. So, the revised cable might be faster than light over fibre. Again, details matter.

There are standards for CAT twisted pair cables. Those standards also specify minimum propagation speeds and variations within the cable. For example, here is the standard specification for Cat-6 cable:


The velocities are minimums, so don’t panic yet about the 0.585c to 0.621c. If I look at the specifications for a couple of real world cables, I see Draka SuperCat 5/5E at 0.64c, and Prysmian M@XLAN Cat 5E/6/6A cables claim 0.67c. These are specification claims, not guarantees. Siemon reports in their cable guide,
“NVP varies according to the dielectric materials used in the cable and is expressed as a percentage of the speed of light. For example, most category 5 polyethylene (FRPE) constructions have NVP ranges from 0.65c to 0.70c... Teflon (FEP) cable constructions range from 0.69c to 0.73c, whereas cables made of PVC are in the 0.60c to 0.64c range.”
Those electrons must find Teflon just as slippery as we do.

The other notable thing in the cable specifications is the maximum delay skew. That refers to the fact that the different wires in a cable may propagate faster or slower. In an old school coaxial submarine cable, the core wires could be 5% shorter than the outer wires. That is a big deal. In a twisted pair Cat-6, the 45ns skew per 100m could be around 8% variation within the cable. This can matter a good deal, as you may only be as fast as your slowest bit-producing wire.
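A quick check of that roughly 8% figure, assuming a nominal 0.64c velocity of propagation for the Cat-6 pairs (my assumption; real cables vary):

```python
# Cat-6 skew sanity check: 45 ns per 100 m against the nominal pair delay.
C = 299_792_458.0   # speed of light in vacuum, m/s

length_m = 100
nvp = 0.64          # assumed nominal velocity of propagation
delay_ns = length_m / (C * nvp) * 1e9   # ~521 ns per 100 m
skew_fraction = 45 / delay_ns

print(f"pair delay: {delay_ns:.0f} ns per 100 m, skew: {skew_fraction:.1%}")
```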

Can you quite believe you are still reading about cables and propagation? This is the kind of detail a good trader may have to worry about.

My friends at Metamako measured some common fibre and wire cables using their latency measuring MetaApp. They found the copper cables they tested were indeed faster than the fibre ones by just a bit. I’ve reproduced Metamako’s chart below with permission:
Metamako cable comparison "Copper is faster than fibre!"

This chart comes from the following data:


Here, copper is faster than fibre. The direct-attach copper cables come in at 4.60ns per metre, single-mode fibre at 4.95ns per metre, and multi-mode fibre at 4.98ns per metre. You always have to be careful looking at this as it is not just about transmission but also about the latency cost of amplifying, cleaning, and propagating the signal. Notably, the fibre has a little more endpoint overhead, as you can see a larger constant term in the fibre equations’ fits.
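You can turn those ns-per-metre figures back into fractions of c to compare with the earlier table. A quick sketch; the measured figures include endpoint effects, so treat the results as indicative:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def velocity_factor(ns_per_metre):
    """Convert a measured ns-per-metre delay into a fraction of c."""
    return 1 / (ns_per_metre * 1e-9) / C

for name, nspm in [("direct-attach copper", 4.60),
                   ("single-mode fibre", 4.95),
                   ("multi-mode fibre", 4.98)]:
    print(f"{name}: {velocity_factor(nspm):.2f}c")
```

The copper twinax comes out around 0.73c, comfortably ahead of fibre’s ~0.67c.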

As you now know, cables can vary a great deal, so caveat emptor.

Another note about cables: often the longer copper-interfaced cables in the data centre aren’t really copper but active optical cables (AOC). Such cables transmute the electrical signal into optical and back to electrical (EOE) within the cable to improve range. The constants, especially from media changes, can matter in these equations. For example, with 10G Ethernet over Cat-6, you might use a nice Teflon cable and expect some fast propagation. You will be disappointed to learn the 10G twisted pair codec is really twisted and likely to cost you microseconds, yes, thousands of nanoseconds, before you even get onto the cable. Whilst the rise time of a 10G laser in an SFP+ may be less than 30 picoseconds, organising that rise from the electrical signal takes some gymnastics, even if quite a bit less than 10G twisted pair’s. A fast cable, or plane, will not always help if your boarding procedures are slow.

There are some more obscure and exciting cables, such as few-mode fibre, that we’ll save for later.

A little comms history


Let’s segue and meander through a bit of history.

Did you know Great Britain’s Pound is commonly called “cable” in trading and financial circles?

This is because when that first cross-Atlantic telegraph cable briefly sprang into life in 1858, information sped up. An obvious and important use was for trading and financial information, hence the US Dollar to British Pound cross rate became colloquially named after its primary Atlantic transmission medium. Cable underscores the importance of cable.


When did these trade latency wars start? Perhaps thousands of years ago, but certainly hundreds. There are records relating to coffee merchants in Africa and the Middle East suggesting a trader who knew quickly about production in Africa could make significant profits in the Middle East. Kipp Rogers pointed me to a letter from a silk merchant, from around 1066, worrying about time being wasted waiting for such tradeable information,
“The price in Ramle of the Cyprus silk, which I carry with me, is 2 dinars per little pound. Please inform me of its price and advise me whether I should sell it here or carry it with me to you in Misr (Fustat), in case it is fetching a good price there. By God, answer me quickly, I have no other business here in Ramle except awaiting answers to my letters… I need not stress the urgency of a reply concerning the price of silk”
Latency trades are no “New New Thing.”

You’ve probably heard the stories of Rothschild’s consol trade: on learning about Waterloo, the news was transmitted to London, most likely by fast boats rather than by the pigeons he is famous for using, earning a considerable profit. Reuter’s empire was started by sitting at the crossroads of information flow, participating in it and speeding it up. Alexandre Laumonier pointed out to me the old semaphore and light signalling used by the French as an early optical network. It’s also fun to know there were various frauds, delays, and embedded secret messages in early semaphore and telegraph networks with profit motives, even making it into the tale of “The Count of Monte Cristo.” Chappe’s optical telegraph in France covered 556 stations with 4,800 kilometres of 1c transmission media, the air between the stations, from 1792.

Speaking of Chappe's optical telegraph, you may find it intriguing that even in the 1830s stock market speculators were abusing communications for profit,
"On another topic, and like Internet outstripping the lawmakers, optical telegraph asked for new laws and regulations: a fraud related to the introduction, into regular messages, of information about the stock market, was discovered in 1836. The telegraph operators involved were using steganography (a specific pattern of errors added to the message) to transmit the stock market data to Bordeaux. In 1837, they were tried but acquitted, having not violated any explicit law. The criminals were one step ahead."
Many people argue that High Frequency Trading (HFT) is a new phenomenon, perhaps as little as a decade in age. Some argue it goes back to the 1980s. The wise Kipp Rogers also passed on a nice book reference to me which noted HFT, in the modern sense, from 1964’s Bankers Monthly, Volume 81, page 49,
“This is an important aspect of bank stocks and leads us to a clearer view of the market. To begin, let’s define a broad line of bank stock house as one range. There are few professionals who will insist that high-frequency trading occurs in more than 20 bank stock names.” 
The use in the text quite clearly talks about it in a style that suggests common usage, so perhaps the term is decades older? HFT is not a “New New Thing.”

Indeed, there is not much new under the sun, even in clichés. The power of compound interest was argued in cuneiform some 5,000 years ago. The code of Hammurabi dealt with trade and liability, amongst many other things, over 3,800 years ago. Your friendly Ancient Greek philosopher Thales was challenged to show how philosophy could be practical in a financial sense, so he made money, in times BC, by using options on presses to leverage his olive harvest forecast and corner the market. Ancient Rome used corporate structures.

Just as HFT is probably older than you and I think, history shows the importance of latency is also not a “New New Thing.”

Retransmission


One of the problems with the old semaphores and telegraphs was that humans were used as repeaters. The early telegraph couldn’t cross the US continent on its electrics alone, so people re-keyed the messages along the way. Semaphore networks are optical and transmit at the speed of light, but the onboarding, off-boarding, and retransmission of messages relied on people, flags, and the like. Such retransmissions were not measured in picoseconds.

This is also an issue for modern microwave networks. Lower-cost microwave or millimetre wave devices often have tens or hundreds of microseconds in their onboarding and retransmission latencies. For much of the world, wasting a few microseconds is not a big deal; the telecom carriers are usually more concerned with bandwidth as their optimisation point.

The very best Chicago to New York links differ by single-digit microseconds through aggressive path and device optimisations, so retransmission and onboarding latency is a large issue. One way of cutting down latency, or of making devices simpler, is to talk to the device with a signal it understands to cut out any unnecessary conversions. This led to radio over fibre (RoF), where the RF signal is represented directly in the fibre to feed the microwave gear. A more significant development for the Chicago – New York and London – Frankfurt links was clever repeaters that analyse the signal and minimally process it if it is of sufficient quality, rather than requiring a full digital cleansing, or clock data recovery (CDR). Such repeating takes nanoseconds instead of microseconds. Most microwave traders now use such repeaters.

The first trans-Atlantic telegraph cable in 1858 was a stupendously expensive and brave undertaking that only briefly worked. Cables had been used across water before, with the English Channel being crossed in 1851, but nothing quite so ambitious as a whole ocean. The Atlantic cable briefly worked thanks to sensitive receivers rather than by an understanding of amplification or repeating. That came later.  Brave souls, newer cables, amplification, and repeating drove improvements and commerce to an ever increasing frenzy. Messages were very expensive to send, but the financial world became a virtually instant world to onlooking humans in the 1850s. The path for the rise of the machines was laid.

Satellites


Geostationary communication satellites came to enable anywhere-to-anywhere communication covering the entire planet. Telephone systems and faxes were hooked up. If you are old like me and have talked over such a link, you’ll remember the satellites’ biggest problem: the nasty delays inherent to the lines. Hearing your own voice delayed, or just an awkwardly pausing conversation, would drive you nuts. Latency was the issue.

Geostationary orbit is a high orbit. A really, really high orbit. The circumference of the world is about 40,000 kilometres. Using C = 2πr you can work out the distance from the centre of the earth to the surface is about 6,400 kilometres. A geostationary orbit is 42,164 kilometres from the centre of the earth. Over six times the earth’s radius. Pause and visualise that for a second. A tiny little satellite dot far from Earth. Several Earths away. That’s a long way. So sending a signal up to a satellite and receiving it somewhere else is nearly the same as going around the Earth’s equator twice! That geostationary 72,000 km journey, there and back again in Hobbit speak, takes around 240 milliseconds at the speed of light. Add some processing overhead and you get a very annoying delay. Geostationary satellites suck for latency. Don’t use satellites for trading. That is why we have a bunch of spaghetti surrounding the earth in the guise of under-sea cables. Latency begone.
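The arithmetic is simple enough to sketch, using the round 40,000 km circumference from above:

```python
import math

C_KM_S = 299_792.458                      # speed of light, km/s
earth_radius = 40_000 / (2 * math.pi)     # ~6,366 km, from the circumference
geo_radius = 42_164                       # km from Earth's centre
altitude = geo_radius - earth_radius      # ~35,800 km above the surface

# one message hop: ground up to the satellite and back down somewhere else
hop_km = 2 * altitude
print(f"altitude ~{altitude:,.0f} km, hop ~{hop_km:,.0f} km, "
      f"{hop_km / C_KM_S * 1000:.0f} ms")
```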

Current submarine cabling


Microwave


Microwave has been around longer than many people think. A microwave link was put over the English Channel in 1931. Today’s HFTs are fighting over space at Richborough to make straighter lines with taller towers and fewer repeaters over similar ground. The first Chicago to New York microwave link was created with 34 hops in September 1949, beating Spread Networks’ straighter fibre by some 60 years and two milliseconds.

President Truman made a USA coast to coast microwave TV transmission in September 1951 after it was opened for telephone use in August.

Microwave is not a new thing and don’t let Michael Lewis lead you astray into thinking otherwise.

In my homeland of Tasmania, some amateurs set a record using standard astronomy telescopes on a couple of mountains, over 100 miles apart, to carry a modulated voice call over light.

This example shows that whilst light transmission, including lasers, typically has less range than microwave, it doesn’t always have to be that way if you want to get creative. Microwave bandwidth and distances are continually improving in leaps and bounds.

RF, such as microwave, does not have to be hideously expensive. From memory, the regular old telco cable for my Toronto to NY link for the interlisted arb was about $15k rent per month. With microwave you could buy a couple of endpoints for your link and then not pay the recurring cable costs. However, the towers and real estate become an expensive proposition, and the expense grows as the links get longer and the repeater count goes up. Lots of HFTs are fighting over similar paths and towers and there is a certain element of land and licence grabbing that takes place. Alexandre Laumonier, via his blog SniperInMahwah, has been documenting such links in Europe and the recent battle to get a couple of large towers approved at Richborough in the UK for the channel crossing. It’s an expensive game when you want to build large towers.

The actual microwave RF bit is expensive but not outrageous. The real estate access and towers can be very expensive. All the HFTs have been knocking each other around with one-upmanship to gain an edge. Public spectrum and council records, such as those used by Sniper, make it difficult to use shell games to hide your capabilities. In that spirit, a recently announced consortium of sorts is joining forces on a “Go West” project to share the burden, as they know they’d otherwise just compete each other out building very similar links. That makes a good deal of sense as the cost of such networks is nearly out of control due to the land and towers. IMC has invested in McKay Bros to facilitate improvements to their networks for the benefit of all traders. Tower Research has joined them. KCG and Jump Trading also work together as New Line Networks. These joint venture approaches are sensible cries to the gods of cost control. HFTs are realising that being fastest is not so good if you can’t afford a trade’s transaction cost, including your depreciation.

Radio is light is radio


Marconi was obsessed with crossing the Atlantic with radio. He succeeded in a bit of a scary way. He basically built a huge amplifier that generated enough current, or spark, to bludgeon his way across the Atlantic with brute force.
Marconi using a kite to lift his antenna to 150m for the first Atlantic transmission in 1901

Click. Kaboom.

What frequency was it?

All of them!

Well, pretty much. Perhaps centred around 850kHz. That spark gap transmitter was quite quickly replaced with more nuanced hardware so that different people could use different frequencies and the planet was not restricted to one giant broadcaster. We then found that certain frequencies, roughly 3–30MHz, the high frequency or HF band, would bounce around the world thanks to the ionosphere acting as a bit of a trampoline, sometimes, for those frequencies. Shortwave radio is not so popular anymore but still active, even for numbers stations.

Microwave and millimetre radio is a bit of a misnomer. Microwave, in the normal literature, actually covers frequencies from 300MHz to 300GHz, or wavelengths from 1 millimetre to 1 metre. Millimetre bands are part of the microwave spectrum and do indeed have wavelengths of millimetres. I find it a little interesting that microwaves don’t have micrometre wavelengths; micrometre wavelengths are part of the infrared. Typical microwave networks are in the 2GHz to 7GHz range. 60GHz is a popular, usually licence-free, millimetre wave band. It is free because the atmosphere kicks it around and limits its usefulness. Many countries have lightweight regulations for 80GHz links so you can use them more easily. There is much for the trader to choose from.

Light is radio. Radio is light. The wavelength of your standard data centre light over MMF is 850nm, or ~350THz. 1310nm is also typically used in the data centre, and 1550nm is often used for longer distance links thanks to its kind transmission properties in long strands of fibre. Note that visible light is usually considered to range from violet at around 380nm/789THz to red at around 750nm/400THz. The common data centre light borders those visible frequencies. When we put different colours of light, or wavelengths, onto a single fibre, we call it Wavelength Division Multiplexing (WDM), which is a complicated way of saying a pretty rainbow.
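The conversion is just f = c/λ. A quick check of the numbers above, treating the visible band edges as approximate:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def thz(wavelength_nm):
    """Frequency in THz for a wavelength given in nm, via f = c / lambda."""
    return C / (wavelength_nm * 1e-9) / 1e12

for nm in (850, 1310, 1550, 380, 750):
    print(f"{nm} nm -> {thz(nm):.0f} THz")
```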

We have standards for the colours, sometimes called channels, so we can talk to each other thanks to the International Telecommunication Union. We mix those colours up and separate them out after to make better use of the holes we dig in the ground or sea. If the colours are close together we call it Dense WDM (DWDM) and when the colours have a bit more space and there are fewer of them it is called Coarse WDM (CWDM). Fancy names for pretty simple stuff. International trading is powered by rainbows, literally.

Often the photo-sensor receivers are wide-ranging enough that just about any of these frequencies will trigger them, which can make your network design a little easier. We can often interchange short-run MMF and SMF cables without noticing too much, as they mainly make a difference over long runs or n-way splits. SMF cables used to be expensive but they aren’t too different in cost to MMF today in the volumes a trader may buy.

Erbium-doped fibre takes a little light injection and reinvigorates the existing light signal as it travels, which is pretty clever. There is no real latency cost here if you consider the erbium-doped length as part of the cable. You need a bit of distance in the doped fibre, so this is a slow way of doing amplification if you only have short cables, such as in a trading co-lo facility. For short distances you’ll be better off doing OEO, which is not so different to the EOE we talked about with AOC cables.

Faster fibre


There has been some interesting work on making much faster fibre cables. The idea had its seed in thinking about point to point laser systems, often called free space optics, that have been used for links, including for HFT in New Jersey. Imagine you do your lasering underground. Carefully add some mirrors to bounce the lasers around. That is not too far from the concept of a Hollow Core Fibre (HCF) or Few-Mode Fibre (FMF). HCF speeds are around 0.997c. Pretty good, no? So why aren’t they everywhere?

Cost has been an issue. I looked recently and cables were about $500 a metre. Yikes! Perhaps HCF cables are cheaper now or in bulk. HCF repeating is also an issue as the signal dissipates pretty quickly. That is, the mirror bouncy wouncy timey wimey thing, to paraphrase The Doctor, is not so super efficient. Attenuation kills. We are used to having big distances between our fibre repeaters in modern times. Strangely enough though, the HCF repeater spacing requirements are not too different to the old coaxial cable requirements. Hmmm, perhaps expensive long distance HCF is really possible for a trader if we go back to the future? This thinking changed a couple of years ago when a repeatered FMF link of over 1,000km was demonstrated in Europe. Perhaps we’ll see Spread Networks replace their Chicago to New York link’s SMF with HCF?
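To see why a trader would care, here’s a rough one-way latency comparison over an assumed 1,200 km route (roughly the Chicago – New York fibre-route ballpark; the distance is my assumption, not a quoted figure):

```python
C_KM_S = 299_792.458  # speed of light, km/s

def one_way_ms(route_km, fraction_of_c):
    """One-way propagation time in ms, ignoring endpoint and repeater overhead."""
    return route_km / (fraction_of_c * C_KM_S) * 1000

route = 1_200                    # assumed route length, km
smf = one_way_ms(route, 0.66)    # standard single-mode fibre
hcf = one_way_ms(route, 0.997)   # hollow-core fibre
print(f"SMF {smf:.2f} ms, HCF {hcf:.2f} ms, saving {smf - hcf:.2f} ms one way")
```

Around two milliseconds one way over such a route, which is an eternity in HFT terms.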

Medium              Speed of transmission
Vacuum              1c
Atmosphere          ~1c
RF (inc. laser)     ~1c
Hollow-core fibre   ~0.997c
Open wire ladder    ~0.95-0.99c
Coaxial cable       ~0.45-0.89c
Twisted pair        ~0.58-0.75c
Standard fibre      ~0.66c
PCB FR4             ~0.5c

HFT radio tuning


A few years ago now, the small HFT I founded bought a couple of the original Ettus Research FPGA GNU Radio boxes to play with. We got a little RF signal to go a short distance in the room. Well, they were sitting on the same workbench. Digital FPGA pin in to digital FPGA pin out was 880ns on the oscilloscope. That’s pretty fast. The experiment was to see what kind of overhead the RF stack, including the IF, encoding, MAC, etc, was causing. This experiment showed that with such modern software defined radio (SDR) this kind of RF comms hackery has become wide open to all types and sizes of trading firms.

Why doesn’t an HFT just use a HAM radio to send a signal across the Atlantic to compete? Well, maybe they are. If HFTs are doing it, it definitely requires some custom thinking, as commodity appliances with the right characteristics do not exist. The MIL-spec stuff, say for non-satellite warship communication, may use HF radio but the packets can take seconds to get through. Ouch. That is slow. Why is it so slow?

Email on warships over HF is slow because the MIL-spec packets are heavily encoded with error correction and are spread out over time to handle disturbances. Now, Marconi didn’t do this. His brute force grunt was sent at 1c over the horizon with little processing overhead. Click. Kaboom. It may be possible to encode a signal spatially, instead of redundantly over time, to deliver small, leading-edge-triggered messages from continent to continent almost instantly. HF Multiple-In Multiple-Out (MIMO) may also be a thing for those purposes. Just as you have little groups of MIMO antennae with centimetre, or so, spacing on the back of your wi-fi router, HF MIMO can have groups of antennae doing their thing. However, even though the research I’ve seen looks promising, the encoding was still slower than a terrestrial equivalent. One experiment went from Europe to the Canary Islands and, although the net result was encouraging, it was still slower than cable speed due to those pesky encoding and hardware overheads. Speed was not the point in that particular case though; just getting HF MIMO working is quite a feat. There is much potential to explore here, even if HF MIMO needs somewhat huge spacing for the antennae. Awkwardly, spacing for HF MIMO antennas is measured not in centimetres but in hundreds or thousands of metres. The antennas don’t fit in my workshop but they may fit in your trading farm down in Cornwall.

Another RF alternative is to use line of sight with balloons (Google’s Loon) or planes (Facebook and Google). This is not new. Balloon height was exploited in the American Civil War, and later a US company used cheapish balloons carrying small transponders to help track trucks and other freight, mainly in the US South. To keep costs under control, you got a reward if you found a fallen balloon, read the plaque, and sent it back to the company. That way they kept recycling RF stations. That same company was also awarded a DoD contract for enlarging the RF footprint in Iraq via balloons. Google’s RF balloon trials in New Zealand have been working well. RF balloon comms are no longer a “New New Thing.”

Before I knew all of this, in the dark annals of history, I was interested in the Toronto, Chicago, NJ triangle and what height might be practical for direct line of sight. The Toronto to New York distance is about 800km. For that distance, a platform would have to be at around 12,500 metres at each end for the two to see each other. Not so different if you just put a single balloon in the middle. YouTube tells me this is clearly possible and not so high if you consider all the high school kids sending weather balloons 100,000 feet up to get pretty pictures and videos of the curvature of the Earth. If The Register can send their Lego man up in a paper airplane to such heights, surely an HFT can do something cute too?
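The height figure falls out of simple horizon geometry, h ≈ d²/2R for each platform seeing half the path (ignoring atmospheric refraction, which buys you a little extra):

```python
EARTH_R_KM = 6_371  # mean Earth radius, km

def height_for_horizon_m(distance_km):
    """Approximate platform height in metres to see the horizon at distance_km."""
    return distance_km ** 2 / (2 * EARTH_R_KM) * 1000

# Two platforms, each covering half of the ~800 km Toronto - New York path
print(f"~{height_for_horizon_m(400):,.0f} m at each end")
```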

The Register's Paper Aircraft Released Into Space (PARIS)

It is also worth considering HF radio bouncing around the ionosphere. A relatively small transmitter can cover the entire planet. The ionosphere varies widely in bounce height. For simplicity, let’s assume it is at 60km and plug it into a bouncy equation for a long link: the total distance variation is surprisingly small. That is, the HF bouncing around from NY to Tokyo doesn’t add much to the total distance due to the shallow angles involved. If you can find a way to encode the signal sufficiently well, your trading latency could be onto a winner.
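A flat-earth approximation of the bounce geometry shows just how small the extra distance is. I’m assuming the 60km layer from above, three hops, and a rough 10,800km NY–Tokyo great circle:

```python
import math

def bounced_path_km(total_km, hops, bounce_height_km=60.0):
    """Total path length with ionospheric bounces, flat-earth approximation
    (good enough at these shallow grazing angles)."""
    half_hop = total_km / hops / 2
    return hops * 2 * math.sqrt(half_hop ** 2 + bounce_height_km ** 2)

direct = 10_800                              # rough NY - Tokyo distance, km
extra = bounced_path_km(direct, 3) - direct
print(f"extra distance ~{extra:.0f} km, ~{extra / 299_792.458 * 1e6:.0f} us")
```

A handful of kilometres, or tens of microseconds, over a ten-thousand-kilometre path.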

For some years there has been talk of using balloons across the Atlantic for trading. Hobbyists with model airplanes have flown the distance. Maybe you could use a continuous line of UAVs to act as relays? Now that would be a fun project. An HCF undersea cable seems more practical.

LEO Satellites


There was an article in the Wall Street Journal about LeoSat recruiting HFTs for low latency links. The WSJ reported one HFT taker. We saw previously that altitude is a latency killer for geostationary satellites. So how low is low for LEO? Is the height a latency killer? The company is planning laser-based comms between the satellites, but you still have to get up there. Low cannot be too low for satellites, as otherwise there is a bit of pull and drag that sucks them into the atmosphere to a fiery death. LEO orbits start at 160km, might be 300km, but really need to be 600km or higher to last a while. That is, it is usually better to have a bit more height and live longer. O3b medium orbit satellites sit at 8,000km. Iridium LEO satellites are at 780km.

The WSJ article reported LeoSat could do Tokyo to NY in less than 130ms, which LeoSat claimed was twice as fast as existing 260ms links. This claim rings a little hollow as publicly known Chicago – Tokyo links are already similar in speed to that quoted by LeoSat. Hibernia offers JPX in Tokyo to CME at Aurora, Chicago at 121.190ms, and we know Chicago to NY is just under 4ms with current offerings such as McKay Bros. So 130ms from LeoSat is already not competitive. The article quoted the company saying satellite-to-ground latency was 20ms. It’s not clear if that is one way or a round trip, but either way it is far more than the light-speed equivalent for 600km. It’s not fast. Low orbit, not so low latency in this case, yet.
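For context, the raw light time to a 600km LEO is tiny compared with that quoted 20ms (a sketch; the 600km altitude is the “lasts a while” figure from above):

```python
C_KM_S = 299_792.458  # speed of light, km/s

altitude_km = 600
light_ms = altitude_km / C_KM_S * 1000
print(f"straight-line light time to {altitude_km} km: {light_ms:.1f} ms")
# Even a full up-and-down round trip is only ~4 ms of light time, so the
# quoted 20 ms must be dominated by processing and encoding, not distance.
```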

Neutrinos and long waves


I hope this has provided some colour to the thoughts a trader may have about trading links.

I’ll leave you with one further thought. Neutrinos. Hold up your hand. You have hundreds of billions of neutrinos travelling through it each second. Around 65 billion solar neutrinos pass through each square centimetre on Earth every second. Trillions are passing through your entire body. Near the South Pole, there is a cubic kilometre of clearish ice with special sensors hot-drilled in. They lie in wait for neutrinos travelling up through the Earth from the northern sky. Those neutrinos occasionally, very very rarely, bump into something and provide a little blue flash.
IceCube: South Pole Neutrino Observatory

A trader might think, why go around the Earth or its crust when you can go through it? Nice.

Remember the fuss that started in September 2011 about neutrinos travelling faster than light in the European OPERA experiment? It was thrown out to the community to solve the puzzle. Eventually it was figured to be a measurement error. To me, the interesting part was that someone was firing neutrinos from Geneva, Switzerland to Gran Sasso in Italy, through the planet, and detecting them! Neutrino communication is a thing already. You need to send an awful lot to get a lucky hit, so a message would have to be short and the delivery time probabilistically long, but you gotta start somewhere. Don’t let the detector’s required 300,000 bricks weighing 8.3kg per brick daunt you. What's 300~400GeV between friends? Who wants to build and improve on a few tonnes of neutrino detection for HFT?

Submarines can use long waves for water penetrating RF comms. Slow packets with big waves. There are patent papers for turning a whole submarine into a Neutrino detector for comms or navigation as an alternative to long waves. Would it work? Seems very unlikely but, tantalisingly, not completely crazy. A few hundred tonnes would not be a problem for a submariner. Such answers are beyond my pay grade but an HFT has gotta ask.

What about ground-penetrating long waves? Long waves are slow in the sense that the waves are very long in metres: you have to wait a long time for your bits. Though I do remember when sub-wavelength imaging was thought to be “proven” impossible. Super-resolution imaging came along despite rigorous maths suggesting ye olde wavelength-limiting thingamabobs prevented us diving deeper. We can now see molecules and atoms inside cells by thinking a little outside that wavelength-limiting box. That is, in a short while, the impossible became possible. The neural network community has weathered two large winters of over a decade each to survive as a bright deep learning star doing the seemingly impossible, despite what Minsky and Papert had you believe in the 1960s. Scientific winters sometimes pass. So, you never know. Perhaps long waves that hug the earth’s curves and penetrate water and soil can do something sub-wavelength for signalling that the British Pound is a buy with one or two bits of secret signal, and a new “cable” for cable may be born? Maybe some kind of neutrino or neutrino-like particle can be practically enabled. There is a good movie in there somewhere for Matt Damon to follow up on.

Final word


I’m not holding my breath for a “Go Through” consortium, or cartel, to replace today’s “Go West” venture. That said, I’d be surprised but not shocked. Once Musk gets his Mars trading outpost functioning, I hope we don’t repeat the mistakes of the past and build too much duplicate infrastructure for trading Martian Renminbi against Earth’s Rupiah.

Back in our real world: UAV based comms, hollow core fibres, and HF based HFT low latency signalling may be happening whether you like it or not. Learn from Getco and don’t buy into the “New New Thing” that is just another Spread Networks. Be aware and beware.

--Matt.

[Update: An addendum with a little more on cables for the curiously curious, "Oh my - more lines, radios, and cables"]