Thursday, 22 June 2017

IEX MM-Peg follow up

It has been pointed out to me by more than one person that, though they are not fans of IEX, they would like to see the MM-Peg order allowed as submitted. I poured scorn on this order type here, "IEX's new order's unintended consequences."

My scorn stands but I understand the dilemma best captured by Mr Adam Nunes here,


The issue this order type addresses is maintaining a continuous presence in the market: the rather ridiculous requirement that official market makers in the US meet their quoting obligation one hundred percent of the time.

Now, this order type is not really ever expected to trade. It is close to a spoof in that regard, except that you'd be happy if it did trade. Such a happy intention takes it away from being a spoof, but the silliness remains. That is, buying 8% below the NBBO or selling 8% above the NBBO would likely be welcome.
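For the pedants, the pricing arithmetic is trivial. A minimal sketch, assuming the "at least 8%" offset of rule 11.151(a)(6); rounding away from the market to the nearest tick is my own illustrative choice, not IEX's documented behaviour:

```python
import math

# Hedged sketch of MM-Peg default pricing, assuming the "at least 8%" offset
# of IEX rule 11.151(a)(6). Rounding the price away from the market to the
# nearest tick is my illustrative assumption, not IEX's documented behaviour.

def mm_peg_prices(nbb: float, nbo: float, offset: float = 0.08, tick: float = 0.01):
    """Return (bid, ask) pegged at least offset away from the NBBO."""
    bid = math.floor(nbb * (1 - offset) / tick) * tick  # buy 8% below the NBB
    ask = math.ceil(nbo * (1 + offset) / tick) * tick   # sell 8% above the NBO
    return round(bid, 2), round(ask, 2)

print(mm_peg_prices(10.00, 10.02))  # (9.2, 10.83): far enough away not to matter
```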

The issues around the timing of the order are real in that it may bake in a systemic advantage or disadvantage at that price level, far from the market, where it doesn't really matter. This may then set a precedent allowing IEX to extend such a latency problem all the way to the BBO, which would be a bigger problem.

The right answer would be for the SEC to only require market making obligations for some high but not crazy percentage of the time, say 95%. Then this order type, that is never expected to trade, would not be required. We need to fix the issues rather than skirt around the edges with such MM-Peg artifices.

I do wish we could stick to a small set of atomic primitives from which all order types may be created. Then participants could ignore the more complex order types if they chose to. Until then, we'll all have to be "puzzle masters."

Happy trading,

--Matt.

Tuesday, 20 June 2017

IEX's new order's unintended consequences

IEX offered up for the SEC's consideration a new order type last week, "Proposed rule change to introduce a new market maker peg order."

The new Market Maker Peg Order, or MM-Peg, is not an unreasonable order type. I've long been on the record as opposing unnecessary order types and this fits that category. It is similar to order types on other exchanges. The innovation is limited. However, MM-Peg adds to the order proliferation pollution problem that IEX has long promised it would avoid. Here is an excerpt from Flash Boys concerning the puzzle masters,


Back in 2014, IEX was promoting the idea of simple order types,
"Only four types of orders – IEX eschews certain types of orders that were created to accommodate the HFT crowd, such as the Post-Only order and “Hide Not Slide” order. Instead it offers only four basic types of orders – market, limit, Mid-Point Peg, and IEX Check (Fill or Kill). The Mid-Point Peg gives the investor a price between the current bid and offer for the stock."
Well, we've moved on from there with the Discretionary Peg and its complex conditions and changing formulae with high false positive rates. The crumbling quote factor has been added to the Primary Peg. And now we behold the MM-Peg, a displayed peg that has priority over non-displayed. Not a big deal in itself as it is just a small incremental extension. A bit of an outhouse, really. IEX is simply replicating the same utility payoff for order type development that got us into this NMS order type mess in the first place. All may make sense in isolation, but who wants to fly in such an NMS Rube Goldberg contraption?



IEX is no different from other exchanges with such order type development. The market's order proliferation problem needs some kind of "START" agreement under which these arms are controlled. The only real beneficiaries of the current proliferation are the sophisticated market participants with the resources and skills to puzzle out all the order types and apply them to their problems as solutions. HFTs might just fit into that category. IEX's biggest traders are HFTs. This may be the outcome they are looking for.

So, this little bit of hypocrisy on IEX's part cuts a little deep into their core values. Order proliferation has long been something the "puzzle masters" have protested loudly against. Not a big deal as a piece of incrementalism but, nevertheless, surprising, as order type proliferation is a real problem to which IEX is succumbing. Why is it surprising? Well, IEX has railed against a number of things, such as rebates and co-location, both of which may actually benefit markets, and yet on order types they continue to transgress their values. Curious.

The big issue


The main issue I see with the MM-Peg is that it may bake in a strategic latency advantage for particular types of customers, "The Market Maker Peg Order would be limited to registered market makers" [page 6].

I read it that the repricing still has to go through some guise of the 350-microsecond delay, perhaps even the original magic shoebox,
"Furthermore, pursuant to Rule 11.190(b)(13), each time a Market Maker Peg Order is automatically adjusted by the System, all inbound and outbound communications related to the modified order instruction will traverse an additional POP between the Market Maker Peg Order repricing logic, and the Order Book."
However, this isn't the problem directly. The problem is how the latency may compare to co-located access from NY5 where the POP is. That is, how does it compete against the exchange's own customers?

The exchange's network architecture should have reasonably good, low jitter as it is 10G Ethernet; it is hard to do that really badly, so let's assume IEX hasn't stuffed that up. The latency difference between customers in NY5 on the customer-facing side of the POP and the internal MM-Peg repricing mechanism may then be significant, relative to expected jitter, for some or all customers. That difference may be advantageous or disadvantageous, and because it flows from those reified architectural differences, the timing is largely baked in.

If MM-Peg were to have a latency benefit, that would be bad as you would be forced to use it and eschew other order types - but only if you could. If you are not a registered market maker, you would be at a structural disadvantage. The other side of the coin is a baked-in disadvantage, implying you never want to use an MM-Peg. Then again, some day it may magically improve due to some technical rejigging. What if it changes without you knowing and suddenly your trading is at a surprising disadvantage? I'm imagining Haim Bodek breathing fire. I agree with him. This is a poor situation.

Either faster or slower is problematic for IEX's customers. It is a no-win situation - caveat emptor.

And, just to add fuel to the fire, SIP customers may be notified of the requotes before the IEX customers waiting at the IEX POP.

Happy trading,

--Matt.


________

Note: much ado about nothing. This is all about an order type that lives "at least" 8% (IEX rule 11.151(a)(6)) away from the BBO if it is an S&P500 or Russell1000 name. Busy work that is an unprincipled precedent. You have to wonder why they'd bother with it.

PS: Kipp Rogers points out that it was only three order types back in the Flash Boys daze of 2014:



Wednesday, 14 June 2017

Rebate trafficking

The mass debates around rebates are coming to the point where the tumult paints rebates more akin to drug trafficking than a sensible approach to attracting custom.

This cult-like lack of economic argument is centred around the Franken-pool of IEX. Mr Katsuyama was quoted by Nicole Bullock in her June 1 FT article, "IEX chief sticks to principles in battle for presence",
“At the end of the day, this conflict between brokers getting paid a rebate by the exchange or getting the higher quality execution for their client — that conflict is going to come to a head and we’ll be the beneficiary of that,” says Mr Katsuyama, who insists that IEX gets better executions for clients.
It's not a rational argument and a typical misdirection from IEX. The argument is often accompanied by incendiary language referring to rebates as kickbacks. For example, here is Mr Elvis Picardo writing in Investopedia, as replicated by Forbes, from April 2014, "How IEX Is Combating Predatory Types Of High-Frequency Traders",
"No kickbacks or rebates – IEX does not offer any special kickbacks or rebates for taking or making liquidity. Instead it charges a flat 9/100th of a cent per share (also known as 9 mils) for buying or selling a stock."
IEX has referred to rebates as kickbacks, "..it's a kickback.." [4:05]. Kickbacks are normally associated with illegal behaviour. It is not a term to be used lightly, just as front-running is criminal and also poorly used by IEX. Such aspersions are why the debate gets heated on both sides, leading to a lack of rational argument. IEX fails to recognise that zero pricing for lit trading at IEX would meet their own description of a kickback, as a kickback is a remunerative, not necessarily monetary, exchange. Giving something away for free, such as an order execution, might also be considered a kickback; as may something at a discount. So if rebates are kickbacks, then IEX is also offering kickbacks. It becomes a matter of degree. Loss leaders at a store would also be kickbacks. It's a silly, shameful argument. Perhaps an exchange offering rebates will sue Mr Katsuyama?

I've written on the subject previously in 2015, "Trading rebates - a choice, not an evil." It is a tired debate. It's really not worth dredging up again, yet here we are. Rebates are often paid for posting prices, sometimes for taking as per inverted exchanges; some venues charge zero, and some charge flat fees. IEX's dark orders carry the highest fees in the industry at $0.0018 per share per round trip.

There has been much experimentation and it continues. BATS not only offers both maker-taker and taker-maker inverted pricing but has also introduced low flat-fee pricing at one of its exchange platforms (EDGA): 0.0003 a side, or 0.0006 per round trip. May the innovation continue.


Higher rebates!


Fees, and rebates by implication, were capped by Reg NMS in 2005 at 0.003 per share. I'm not a true blue believer but I can make an uneasy fair dinkum argument for higher fees and rebates that I think is not completely stupid.

Part of the modern problem with PFOF and dark liquidity is that they skirt the restrictive sub-penny rule in a way that cannot be approached with penny-oriented quotes. I've written about this previously in "Sub-pennies rule!" The supply and demand at a public market are a careful balance of what can be earned in the nett spread: the gross spread less transaction costs, plus rebates, less other amortised expenses. Changes to transaction costs and rebates affect the desirability, for market makers in particular, of trading at the exchange. If the costs were 0.005 on both sides with an average gross spread of just a cent, 0.01, you wouldn't bother trading there as you couldn't earn any money. That's not completely true: maybe you could pay the 0.005 and do the other side at another exchange, earning the cent of spread for a positive nett, but you get the idea.

Now, if you think about it, a rebate of 0.005 with a cost of 0.005 on the other side, or other 0.005 permutations, could give you the equivalent of mid-point pricing if you consider a 0.01 spread typical. That is an argument that holds water as to why the 0.003 ceiling may currently be too low. That is, an argument for a more permissive cap to accommodate innovation is not completely lacking. I don't buy it, yet. I prefer the direction promoted by some at EMSAC who advocate lowering the 0.003 cap, but I'm not totally convinced.
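A back-of-envelope sketch of that arithmetic, with hypothetical per-share numbers on an illustrative $10.00 / $10.01 market:

```python
# Hypothetical per-share economics on a one-cent market, per the text above.
bid, ask = 10.00, 10.01
mid = (bid + ask) / 2                          # 10.005

# A 0.005 fee on both sides swallows the whole cent of gross spread:
nett = (ask - bid) - 2 * 0.005                 # buy the bid, sell the ask, pay two fees
print(f"nett spread with 0.005 fees both sides: {nett:.4f}")    # 0.0000

# An inverted 0.005 taker-rebate / 0.005 maker-fee puts both sides at effective mid:
taker_buys_at = ask - 0.005                    # the rebate improves the taker's price
maker_sells_at = ask - 0.005                   # the fee degrades the maker's price
print(f"taker {taker_buys_at:.3f} / maker {maker_sells_at:.3f} / mid {mid:.3f}")
```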

There may be other ways that exchanges could use fees to combat the advantage that PFOF and dark orders have in skirting the sub-penny rule. For example, imagine that when you submit your order, instead of just price-time priority, you pay for a priority level at the price. That fee could be negative - a rebate. Time priority would be partially replaced by a market mechanism: a price for priority. You could make a strong argument for such a beast. I'm not sure how the market would react, but I suspect the exchanges would likely make a lot more money in fees, which could be partially rebated back to customers to maintain a competitive advantage. It is not sub-penny pricing but it is a close cousin. Such a scheme may allow public exchanges to compete more vigorously against the incursion of PFOF and dark orders with their micro-cent fills. I'm not sure I like the thought of this innovation but it has some merit, as do higher rebates.
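To be clear, nothing like this exists today. As a sketch of the idea, with the tuple ordering and the negative-fee rebate as my own assumptions, such a book might rank same-priced orders like this:

```python
import heapq
import itertools

# Speculative sketch only: price-time priority extended to price / priority-fee /
# time. The tuple ordering and negative-fee rebate are my assumptions; no venue
# offers this today.

_seq = itertools.count()  # arrival order, the final tie-break

def add_bid(book, price, priority_fee, order_id):
    # heapq is a min-heap, so negate price and fee to rank the highest first.
    heapq.heappush(book, (-price, -priority_fee, next(_seq), order_id))

book = []
add_bid(book, 10.00, 0.000, "A")   # plain bid, arrives first
add_bid(book, 10.00, 0.002, "B")   # same price, paid up for priority
add_bid(book, 10.00, -0.001, "C")  # same price, took a rebate, ranks last

while book:
    print(heapq.heappop(book)[3])  # B, A, C
```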

Overall, I'm agnostic on rebates. I could take them or leave them. Rebates have a useful role in innovation and perhaps there is more innovation yet to come. I do think the current discretion within the 0.003 fee cap is a pretty good balanced result for now. There is certainly merit to both a higher and a lower cap, but crass arguments about kickbacks have no role in the debate.

Happy trading,

--Matt.

_________
Update: Mr Osman Awan correctly pointed out on Twitter that it is only fees, not rebates, capped by Reg NMS. There is no reason why a rebate could not be 0.005 today except for the long-term necessity for profit. That is, a rebate cap is implied, not required.



Thursday, 1 June 2017

IEX statistics for May: the devil is in the details

A reader may know by now that I'm not a huge fan of the IEX hubris and hypocrisy emanating from their general direction concerning their speed-bump. The smart HFT will continue to worry that such market structure debauchery will harm the market in the longer term even if it provides opportunity in the short term. HFTs rely on healthy markets. IEX's Dark Fader does not promote market health.

As you can see in the following table, IEX's dark and expensive share restaurant continues to darken. Not one day in May saw displayed volume exceed twenty percent of total shares handled.


Lit volume / total shares handled:
May: 17.5%
April: 18.7%
March: 19.8%

IEX had an improved, albeit small, market share in May of around 2.2%. The devil is in the detail and in the following chart, the May detail devils are highlighted for your amusement or apprehension.
May details are devilish
You'll see that in the unlikely event that a linear relationship were to hold, you would expect IEX to be completely dark were it to grow to 8% of the market. This may be a consequence the SEC did not intend.
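To illustrate, and only to illustrate, here is that extrapolation. May's roughly 2.2% share is from the text; the March and April share figures below are made up for the sketch:

```python
# Illustration only: a least-squares line through the lit percentages above.
# May's ~2.2% share is from the text; the March and April share figures are
# made up for the sketch, so treat the crossing point as indicative at best.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

share = [1.4, 1.8, 2.2]    # market share %: March, April (hypothetical), May
lit = [19.8, 18.7, 17.5]   # lit / total handled %, from the table above

m, b = fit_line(share, lit)
print(f"lit hits zero near {-b / m:.1f}% market share")  # ~8%
```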

A more traditional view of lit volume traded versus total shares handled is the following chart:



To me the surprise is not why IEX lit activity is below twenty percent, it is why investor naivety is such that IEX trades at all.

Sunshine continues to be a great disinfectant. Let's all hope for the sunrise as the IEX dark moon rises.

Happy trading,

--Matt.


_________________________

Older IEX related meanderings:



Saturday, 20 May 2017

Submarine communications on VLF protecting Earth from space radiation

We briefly chatted about Very Low Frequency (VLF) communications and how such comms may travel long distances, go through a bit of soil and water, and are used for communication to submarines in "Lines, radios, and cables - oh my."

Well, interesting news on that front this week. NASA is reporting submarine communications are having a positive effect by creating a somewhat protective bubble around planet Earth.




The effect is noted as small in the paper, "Anthropogenic Space Weather" [Gombosi, T.I., Baker, D.N., Balogh, A. et al. Space Sci Rev (2017)], at least as far as I can parse. I found some of the communications history in this paper very interesting, so I thought I'd share some of that history verbatim here for like-minded curious people.

--Matt.

___________________

Excerpts from "Anthropogenic Space Weather" [Gombosi, T.I., Baker, D.N., Balogh, A. et al. Space Sci Rev (2017)]

8 Space Weather Effects of Anthropogenic VLF Transmissions


8.1 Brief History of VLF Transmitters


By the end of World War 1, the United States military began use of very low frequency radio transmissions (VLF; 3–30 kHz) for long-distance shore to surface ship communications (Gebhard 1979). Since very high power can be radiated from large shore-based antenna complexes, worldwide VLF communication coverage was feasible, and along with LF and HF systems (300–30 MHz) these bands carried the major portion of naval communications traffic before later higher frequency systems came online. Early experiments also showed that VLF could penetrate seawater to a limited depth, a fact realized by the British Royal Navy during World War I (Wait 1977). Given this realization, when the modern Polaris nuclear submarine era began in the 1950s, the US Naval Research Laboratory conducted a series of thorough radio propagation programs at VLF frequencies to refine underwater communications practices (Gebhard 1979). Subsequent upgrades in transmission facilities led to the current operational US Navy VLF communications network, and other countries followed suit at various times. For example, Soviet naval communication systems were likely brought online in the late 1920s and 1930s during the interwar expansion period, and high power VLF transmitters were later established in the late 1940s and 1950s for submarine communications and time signals. These included Goliath, a rebuilt 1000 kW station first online in 1952 which partly used materials from a captured German 1940s era megawatt class VLF station operating at 16.55 kHz (Klawitter et al. 2000).

Table 2 of Clilverd et al. (2009) lists a variety of active modern VLF transmitter stations at distributed locations with power levels ranging from 25 to 1000 kW. These transmissions typically have narrow bandwidths (<50 Hz) and employ minimum shift keying (Koons et al. 1981). Along with these communications signals, a separate VLF navigation network (named Omega in the US and Alpha in the USSR) uses transmissions in the 10 kW range or higher (e.g. Table 1 of Inan et al. 1984) with longer key-down modulation envelopes of up to 1 second duration.

8.2 VLF Transmitters as Probing Signals


Beginning in the first half of the 20th century, a vigorous research field emerged to study the properties of VLF natural emissions such as whistlers, with attention paid as well to information these emissions could yield on ionospheric and magnetospheric dynamics. Due to the high power and worldwide propagation of VLF transmissions, the geophysical research field was well poised to use these signals as convenient fixed frequency transmissions for monitoring of VLF propagation dynamics into the ionosphere and beyond into the magnetosphere (e.g. Chap. 2 of Helliwell 1965; Carpenter 1966). This was especially true since VLF transmissions had controllable characteristics as opposed to unpredictable characteristics of natural lightning, another ubiquitous VLF source. Beginning in the 1960s and continuing to the present, a vast amount of work was undertaken by the Stanford radio wave group and others (e.g. Yu. Alpert in the former USSR) on VLF wave properties, including transmitter reception using both ground-based and orbiting satellite receivers. These latter experiments occurred both with high power communications and/or navigation signals and with lower power (∼100 W), controllable, research grade transmitter signals.

The transmitter at Siple Station in Antarctica (Helliwell 1988) is worthy of particular mention, as the installation lasted over a decade (1973–1988) and is arguably the largest and widest ranging active and anthropogenic origin VLF experiment series. Two different VLF transmitter setups were employed at Siple covering 1 to ∼6 kHz frequency, with reception occurring both in-situ on satellites and on the ground in the conjugate northern hemisphere within the province of Quebec. Of particular note, the second Siple “Jupiter” transmitter, placed in service in 1979, had the unique property of having flexible high power modulation on two independent frequencies. This allowed targeted investigations of VLF propagation, stimulated emissions, and energetic particle precipitation with a large experimental program employing a vast number of different signal characteristics not available from Navy transmitter operations. These included varying transmission lengths, different modulation patterns (e.g. AM, SSB), polarization diversity, and unique beat frequency experiments employing two closely tuned VLF transmissions. Furthermore, the ability to repeat these experiments at will, dependent on ambient conditions, allowed assembly of statistics on propagation and triggered effects. These led to significant insights that were not possible for studies that relied on stimulation from natural waves (e.g. chorus) that are inherently quite variable.

Several excellent summaries of the literature on VLF transmission related subjects are available with extensive references, including the landmark work of Helliwell (1965) as well as the recent Stanford VLF group history by Carpenter (2015). As it is another effect of anthropogenic cause, we mention briefly here that a number of studies in the 1960s also examined impulsive large amplitude VLF wave events in the ionosphere and magnetosphere caused by above-ground nuclear explosions (e.g. Zmuda et al. 1963; Helliwell 1965).

Observations of VLF transmissions included as a subset those VLF signals that propagated through the Earth-ionosphere waveguide, sometimes continuing into the magnetosphere and beyond to the conjugate hemisphere along ducted paths (Helliwell and Gehrels 1958; Smith 1961). Ground based VLF observations (Helliwell 1965) and in-situ satellite observations of trans-ionospheric and magnetospheric propagating VLF transmissions were extensively used as diagnostics. For example, VLF signals of human origin were observed and characterized in the topside ionosphere and magnetosphere for a variety of scientific and technical investigations with LOFTI-1 (Leiphart et al. 1962), OGO-2 and OGO-4 (Heyborne et al. 1969; Scarabucci 1969), ISIS 1, ISIS 2, and ISEE 1 (Bell et al. 1983), Explorer VI and Imp 6 (Inan et al. 1977), DE-1 (Inan and Helliwell 1982; Inan et al. 1984; Sonwalkar and Inan 1986; Rastani et al. 1985), DEMETER (Molchanov et al. 2006; Sauvaud et al. 2008), IMAGE (Green et al. 2005), and COSMOS 1809 (Sonwalkar et al. 1994). VLF low Earth orbital reception of ground transmissions have been used also to produce worldwide VLF maps in order to gauge the strength of transionospheric signals (Parrot 1990).

...........
...........

9 High Frequency Radiowave Heating


Modification of the ionosphere using high power radio waves has been an important tool for understanding the complex physical processes associated with high-power wave interactions with plasmas. There are a number of ionospheric heating facilities around the world today that operate in the frequency range ∼2–12 MHz. The most prominent is the High Frequency Active Auroral Research Program (HAARP) facility in Gakona, Alaska. HAARP is the most powerful radio wave heater in the world; it consists of 180 cross dipole antennas with a total radiated power of up to 3.6 MW and a maximum effective radiated power (ERP) of ∼4 GW. The other major heating facilities are EISCAT, SURA, and Arecibo. EISCAT is near Tromso, Norway and has an ERP of ∼1 GW. SURA is near Nizhniy Novgorod, Russia and is capable of transmitting ∼190 MW ERP. A new heater has recently been completed at Arecibo, Puerto Rico with ∼100 MW ERP. There was a heating facility at Arecibo that was operational in the 1980s and 1990s but it was destroyed by a hurricane in 1999. The science investigations carried out at heating facilities span a broad range of plasma physics topics involving ionospheric heating, nonlinear wave generation, ducted wave propagation, and ELF/VLF wave generation to name a few.

During experiments using the original Arecibo heating facility, Bernhardt et al. (1988) observed a dynamic interaction between the heater wave and the heated plasma in the 630 nm airglow: the location of HF heating region changed as a function of time. The heated region drifted eastward or westward, depending on the direction of the zonal neutral wind, but eventually “snapped back” to the original heating location. This was independently validated using the Arecibo incoherent scatter radar for plasma drift measurements (Bernhardt et al. 1989). They suggested that when the density depletion was significantly transported in longitude, the density gradients would no longer refract the heater ray and the ray would snap back, thereby resulting in a snapback of the heating location as well. However, a recent simulation study using a self-consistent first principles ionosphere model found that the heater ray did not snap back but rather the heating location snapped back because of the evolution of the heated density cavity (Zawdie et al. 2015).

The subject of ELF wave generation is relevant to communications with submarines because these waves penetrate sea water. It has been suggested that these waves can be produced by modulating the ionospheric current system via radio wave heating (Papadopoulos and Chang 1989). Experiments carried out at HAARP (Moore et al. 2007) demonstrated this by sinusoidal modulation of the auroral electrojet under nighttime conditions. ELF waves were detected in the Earth’s ionosphere waveguide over 4000 km away from the HAARP facility.

VLF whistler wave generation and propagation have also been studied with the HAARP facility. This is important because whistler waves can interact with high-energy radiation belt electrons. Specifically, they can pitch-angle scatter energetic electrons into the loss cone and precipitate them into the ionosphere (Inan et al. 2003). One interesting finding is that the whistler waves generated in the ionosphere by the heater can be amplified by specifying the frequency-time format of the heater, as opposed to using a constant frequency (Streltsov et al. 2010).

New observations were made at HAARP when it began operating at its maximum radiated power 3.6 MW. Specifically, impact ionization of the neutral atmosphere by heater-generated suprathermal electrons can generate artificial aurora observable to the naked eye (Pedersen and Gerken 2005) and a long-lasting, secondary ionization layer below the F peak (Pedersen et al. 2009). The artificial aurora is reported to have a “bulls-eye” pattern which is a refraction effect and is consistent with ionization inside the heater beam. This phenomenon was never observed at other heating facilities with lower power (e.g., EISCAT, SURA).

Friday, 19 May 2017

Speed-bump 101

I get a bit tired of some of the silly thoughts that go around about speed-bumps. This is mainly due to the disinformation IEX and Michael Lewis have spread about speed-bump efficacy. This is a simple meander covering just the basics for those of us who are not rocket scientists. There is nothing new here. It is a bit quick and hasty, so please send feedback on any obvious errors so I may clean it up and make it a useful speed-bump 101 meandering.

In the context of NMS rules so far, a symmetric speed-bump fundamentally just makes for a slow exchange. Let's have a quick meander at the simplest example of such a view.

Here is a happy client connected to an exchange and they get a 750-microsecond typical latency or round-trip time (RTT):



The fastest exchange in the US in 2009 was Bats. Bats operated, at one stage in 2009, with a typical RTT of 443 microseconds. Ah, those were the good old days of microseconds instead of nanoseconds and picoseconds. The speed in the diagram is therefore not dissimilar to an old exchange of an antique 2009 vintage. For what it's worth, Bats continues innovating and is now a sub 100-microsecond exchange. Michael Lewis complained in Flash Boys, the infamous piece of fiction, that fast HFT guys and gals could beat you to the punch on your trades by shaving some microseconds off their reaction times. Lewis said you didn't have a chance and a speed-bump would level the playing field for you. Let's dig into that thought.

Here is a sophisticated picture of a simple speed-bumped market:



Guess what? It looks just like the other 750-microsecond exchange to the client. It is still a race to get an order to the public point of presence (POP). It's still a race for reacting to news and external market data from other exchanges. You still want to be as close to the exchange or POP as you can to help you compete in the race. You will still need to carefully plan your trading infrastructure so you don't suffer a disadvantage if latencies are important to you. Microseconds still matter as much as they did with the previous model.

If you didn't know this speed-bumped exchange had a speed bump, would you be able to tell the difference between the two?

No. Here, speed-bumps don't matter. You can't even tell if the bump exists.

Exchanges aren't typically as slow as speed-bumped exchanges. Normal exchanges tend to be a little bit faster. You can probably guess why. A fast exchange may have an RTT of 10-25 microseconds today. Here is a sophisticated diagram of a modest exchange that operates with an expected RTT of 50 microseconds:


It is also just as much of a race to react, replenish, hedge, and otherwise nuance your trading. Given that reactions within your own systems are not necessarily dictated by the exchange, you have exactly the same need, as much or as little, to be swift to suit your trading agenda. Speed-bumps don't change the competitive necessity, or lack thereof, for client speed.

Fundamentally, a symmetric speed-bump just gives you a slow exchange. 

Slow exchanges are dumb


Why are slow exchanges dumb?

A slow exchange makes you suffer more risk with your trading. If you use the 50-microsecond exchange above, you could have received a fill and hedged with something else, or replenished your market making, and have done that more than a dozen times, before the slow 750-microsecond exchange even gave you a response.
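The arithmetic is simple but worth making concrete:

```python
# The arithmetic behind "more than a dozen times": round trips completed on a
# 50-microsecond exchange while a 750-microsecond exchange answers once.
fast_rtt_us, slow_rtt_us = 50, 750
print(slow_rtt_us // fast_rtt_us)   # 15
```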

An easy way to bring this home to my simple head is to imagine an exchange that takes an hour to respond alongside another exchange that takes a second to respond. 

Non-farm payrolls come out. 

It's a big number.

You want to react: cover some shorts, and buy some longs, in a controlled fashion where you only take on so much risk, or delta, in bite sized pieces. The one-second responses from the faster exchange are cool as you can get fills, send in orders, and rinse-repeat as suits. Sometimes you'll miss or hit with your orders. You'll be able to adjust fairly easily as you go along. All is good.

Now imagine trying to do the same at the exchange that takes an hour to get back to you. It is still a race to fire in your initial orders and get in the queue. You're still competing. Microseconds matter even on the one hour exchange. However, you won't know if your orders have been filled and what risk you may be wearing for a long time. You can guess and assume you've been filled, or send market orders and hope for the best, or any fill. It is a real mess - a risky mess. 

If you are hooked up to both the one-second and the one-hour exchange, you could use the one-second exchange's market data as a fair value indicator and try shooting orders with fair prices into the one-hour exchange, hoping for the best. It should be obvious that the one-second exchange becomes the price leader and the safest place to trade.

This little thought experiment shows quite clearly, I hope, that faster exchanges, all other things being equal, are the natural hubs for liquidity. They are safer, carry less risk, and lead to better price discovery. Instead of the single transaction a slow exchange allows, you can offer prices, get hit, hedge, and offer fresh prices to the market once again. This makes for a much-improved market.

The other thing you need to take away from this thought experiment is that human time scales and computer timescales are different. The two newest speed-bumps are both going to be 350 microseconds. It seems fast. It is around a thousand times faster than the blink of an eye. However, this is a bad benchmark. 350 microseconds is a long time in modern computing. It is likely enough time for your phone to execute over a million processor instructions. Yes, your phone. We need to resist over-anthropomorphising such functionality. Today's faster exchanges can do over twenty round trips of orders in the time it would just take to get one answer from a 350-microsecond speed-bumped exchange. 350 microseconds seems fast, but today it is like an hour, even to someone fifty years old like me.
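Putting rough numbers on that, with an assumed 3 GHz phone core and a 15-microsecond fast-exchange round trip as my placeholder figures:

```python
# Rough numbers on a 350-microsecond bump. The ~3 GHz, one-instruction-per-cycle
# phone core and the 15-microsecond fast-exchange round trip are assumed figures.
bump_s = 350e-6
print(f"instructions: ~{bump_s * 3e9:,.0f}")                 # ~1,050,000
print(f"fast-exchange round trips: ~{bump_s / 15e-6:.0f}")   # ~23
```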

Slow exchanges are dumb. IEX and NYSE American are slow exchanges.

IEX and NYSE American tell other people about your trades before they tell you


Things aren't quite so simple with the approved NMS speed-bumped models. Both of the SEC-approved models report everyone's quote and trade information to the centralised SIP feeds before the speed-bump delays the same information to customers. That is, your competitors listening to the SIP may well receive information before you do if you're simply co-located near your speed-bumped exchange's POP expecting a simple life. It ain't so simple.


The UTP SIP has a latency of around 17 microseconds and the CTA SIP has a latency of around 80 microseconds for quotes and 110 microseconds for trades. Add in the communication time on the links and your salmon-coloured competitors in the diagram above will get the market's information before you do. Dutifully waiting out a 350-microsecond speed-bump is not how to trade optimally for now. Some people may care about this.
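A small sketch of that race, using the median SIP processing latencies above; the link times are my round-number placeholders, not measured figures:

```python
# Who hears first: SIP listeners or the co-located customer behind the bump?
# SIP processing medians are from the text; the one-way link times are my
# round-number placeholders, not measured figures.
BUMP_US = 350
SIP_PROC_US = {"UTP quote": 17, "CTA quote": 80, "CTA trade": 110}
LINK_US = 90  # assumed one-way time, exchange->SIP and SIP->listener alike

for feed, proc in SIP_PROC_US.items():
    via_sip = LINK_US + proc + LINK_US
    print(f"{feed}: via SIP {via_sip}us vs direct {BUMP_US}us "
          f"-> SIP listener leads by {BUMP_US - via_sip}us")
```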

Why speed-bumps?


There was no original reason, really. IEX just screwed up their thinking. If you read the Flash Boys fiction, you'd see that some of the book talked about Brad's experience of playing catch-up with the rest of the industry by stumbling across a known algo and re-implementing it as the algo he called Thor. Thor sent orders to various trade centres with spaced-out timings so they would all arrive simultaneously and thus not leak information between venues. Part of the IEX thinking was that a big speed-bump would give them more opportunity not to leak information. That made a bit of sense when the SIPs were very slow from a routeing point of view. However, it was kind of pointless as you don't need to be an exchange to do that. That is the job of a broker. Some people seem to use IEX today just as a router rather than for any particular exchange need. Seems a bit pointless, expensive, and inflexible, doesn't it?

Today the SIPs are much faster and your information is not protected. The approved speed-bumps are designed to leak. This highlights that this type of speed-bumped architecture is not sensible.

Despite IEX making a song and dance about keeping only simple order types, IEX invented their complex auto-fading Discretionary PEG (DPEG) order type. DPEG has been joined by their newer auto-fading Primary PEG (PPEG) too.

The speed-bump rationale for DPEG and PPEG is a bit more reasonable. I'll attempt an explanation.

Dark Fader


Most people don't want to be the dumb bunny being traded through when the price ticks. Say the price is at $10.01 / $10.02 and ticks down to $9.99 / $10.00. You'll groan out loud if you've just bought at $10.01. Most market makers try to avoid this by predicting whether the price is going to tick and getting out of the way. It's not the worst outcome for an asset manager, as you wanted the stock anyway, but you too would rather have a better price and not be the dumb bunny. However, it is life and death for a market maker, whose survival depends on earning the spread and not being adversely selected. A market maker who gets traded through on the tick all the time will quickly go out of business.
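The dumb bunny's arithmetic, made concrete:

```python
# The dumb-bunny arithmetic: passively buy the bid at $10.01 just as the
# market ticks down to $9.99 / $10.00.
buy_price = 10.01
new_bid, new_ask = 9.99, 10.00
new_mid = (new_bid + new_ask) / 2

print(f"mark-to-mid loss: {buy_price - new_mid:.4f} per share")   # 0.0150
print(f"loss exiting at the new bid: {buy_price - new_bid:.4f}")  # 0.0200
```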

So what IEX did, and what NYSE American is copying, is they put a special algo within the exchange that doesn't get delayed and can pull your order for you, or move the price away, if you like. It fades. That is, the exchange gives their algos advantaged market data to step ahead of you. Michael Lewis referred to this as front-running in his novel, which it is not. It is a form of privileged latency gaming. It is a bit nasty for the market maker, broker's algo, or sophisticated asset manager as it subverts their role in the market. A client of the exchange doesn't have the special non-speed-bumped ability to run algos within the exchange, so any innovation they may dream up is disadvantaged. They can't fairly compete and thus innovation is stifled. This is not something you'd expect to be encouraged, but alas, the SEC has mistakenly allowed it.
Non-speed-bumped access for an exchange's own algos prevents clients from innovating.
An innovation that kills innovation.

It gets a bit more complicated. These special order types are dark orders or non-displayed. You don't know they are there. They don't mess up the quote feed as you can't see them. These dark orders have a priority below displayed order types, so conventional market making works OK, along with the requisite latency games for displayed orders. However, due to the speed-bump, which allows this special algo access for dark orders, the exchange is a slow one and we should now understand that slow exchanges are dumb. That is, from a lit perspective it is simply a standard exchange, just an excruciatingly slow one that also leaks.

Franken-pool


What we have here is a Frankenstein exchange hybrid that marries a dark pool and regular exchange: a Franken-pool. It's quite franked up. No one would ever have won SEC approval for a wholly dark pool to become a public exchange and remain dark. A really slow lit exchange is too dumb for words and makes no sense. By marrying the two together, IEX has managed to fool the SEC into approving a dark pool, the Franken-pool, as a public exchange. From a lit perspective, you can ignore the dark part. From the dark pool perspective, the lit part is virtually just one of many external exchanges.

What is the outcome for the Franken-pool so far? Less than twenty percent of the IEX exchange's total handled shares is lit volume. In fact, there is usually more volume routed to other exchanges than the volume that gets executed as lit. As volumes rise, the dark component percentage is also rising.

As a fine purveyor of risk, trader or investor, you quite often really want to trade when things are happening. You know, when the price is moving. It's part of the utility of the whole marketplace thing. It is disturbingly weird that the speed-bump enabled dark fading order types prevent you from trading at those times. They fade rather than trade. This will also make some of the trade stats look artificially too good on IEX as trades don't happen at times of risk. That is really quite strange for a risk-based utility. More alternative facts running around as statistics is all we need. Also, as IEX's slightly dumb dark fader algo generates lots of false positives, you'll often lose priority to other dark non-fading pegs. Well managed mid-point pegs may dominate dark-fading pegs. The whole IEX pile of franked up dark matter is a bit messy.

The SEC and investors have been hoodwinked. 

I feel the SEC needs to step back and think more deeply about the privileged role of licensed exchanges in public markets. What role and importance does price discovery have? What role, if any, should dark orders play? Do you want to gum up the system and make it less efficient with slow exchanges? Should the national market try to be efficient?

The vested interest noise certainly makes policy hard. The pressure applied to the SEC makes their IEX approval error quite understandable, even if it is disappointing. Stasis can be a bad thing. To an extent, mistakes should be expected, and perhaps even loosely encouraged to hasten the pace of reform, but only if mistakes, like IEX, can be rolled back.

Yeah, I'm not fond of a significant speed-bump as part of a public exchange. Outside the public markets, a parasitic dark-pool as an ATS with a speed-bump that prevents customer innovation may make sense to a limited degree if you can get safe passive inexpensive matches done without leaking information. Just be careful what you wish for.

Happy trading,

--Matt.

100G Ethernet NICs - Broadcom joins Mellanox, Cavium's QLogic, and Chelsio

New 100Gb Ethernet NICs announced last month from Broadcom bring the number of mainstream 100Gb Ethernet NIC vendors to four. A number of FPGA vendors also offer solutions for 100Gb Ethernet, but here I've decided to meander through just the usual NIC vendors. Remember, if you want to trade at sub-100 nanoseconds, you'll have to avoid PCIe bus transfer times and stick to FPGA tick-to-trade solutions.

Mellanox has been shipping 100GbE for around two years. Cavium's QLogic 100GbE NICs are a bit over a year old. Chelsio started with their 100GbE solution earlier this year.

Broadcom is due to soon ship both an Open Compute Project (OCP) Mezzanine Card 2.0 multi-host form factor (M1100PM) and the usual PCIe NIC form factor (P1100P).
OCP Mezzanine Card 2.0 form factor from the OCP specification
FreeBSD added support for Broadcom's 100GbE a couple of weeks ago, so the ducks are lining up for rollout. The BCM57454 chipset supports Nvidia's GPUDirect via RDMA for your HPC and ML enjoyment. Virtual switch and embedded security may take some load off your hosts. These are single port QSFP28 solutions.

Mellanox ConnectX-6 EN
All the 100GbE vendors so far have settled on QSFP28. We'll have to wait a little longer for both Broadcom's multi-port 100GbE solutions, and a nod to PCIe 4.0. It is worth noting that PCIe 3 x16 does not support enough bandwidth for concurrent dual 100Gbps transfers. 

Most vendors' 100GbE solutions support versions of RoCE, RDMA, GPUDirect, et al, but there are quite a lot of differences in the details. Such details may be important to you at 100GbE as your offloaded loads can help your CPUs out quite a bit. CPUs need help. 100Gbps is a lot of data for a little CPU to worry about, especially as packet loads climb to 200 million per second. Think about that. If you only had one CPU, you'd only have 5ns of processing time per packet to keep up with that flow. We rely on offloaded help, direct memory access methods, and steering to multiple CPUs to stop from drowning.
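For the sceptical, the arithmetic behind both claims:

```python
# PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, so x16 moves roughly
# 126 Gbps of raw payload each way: fine for one 100GbE port, short of two.
lanes, gt_per_lane = 16, 8e9
pcie3_x16_bps = lanes * gt_per_lane * (128 / 130)
print(f"PCIe 3.0 x16 ~ {pcie3_x16_bps / 1e9:.0f} Gbps vs 2 x 100G = 200 Gbps")

# And the per-packet budget at 200 million packets per second on one core:
print(f"{1 / 200e6 * 1e9:.0f} ns per packet")  # 5 ns
```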

Here is a list, for your convenience, of the current main network vendors' 100GbE solutions:

Mellanox

QLogic FastLinQ QL45611HLCU

Cavium - QLogic

Chelsio

Broadcom


Solarflare makes excellent 10GbE and 40GbE solutions in both PCIe and OCP form factors. Solarflare and Mellanox are currently the "goto" NICs for low-latency trading. Hopefully, Solarflare will not be too far away from 100GbE delivery and be ready for exchanges reaching beyond their current 40Gbps maximum offerings.

Happy trading,

--Matt.

Wednesday, 17 May 2017

NYSE American - attack of the clones

Today, the SEC approved the application for NYSE's speed-bumped IEX clone.

A hall of mirrors it is:
SEC NYSE American software speed-bump approval
The financial press quickly reported the approval.

This was despite a last ditch pitch for disapproval by IEX,
"On Wednesday, May 10, 2017, David Shillman, Richard Holley, Sonia Trocchio, and Michael Ogershok, all from the Division of Trading and Markets’ Office of Market Supervision, met with John Ramsay, representative of IEX. The discussion concerned NYSE MKT LLC’s proposed rule change to amend NYSE MKT Rules 7.29E and 1.1E to provide for an intentional delay to specified order processing, including the comments reflected in IEX’s public comment letters submitted to date on the proposed rule change."
The SEC had no real choice, given the documents were in proper order, due to the precedent IEX set. This is the sad conclusion BATS' SEC letter also came to,
"However, in light of the Commission’s approval of IEX’s delay mechanism and the Commission’s related interpretation of Rule 611 of Regulation NMS, Bats sees no legal grounds for the Commission to disapprove NYSE MKT’s proposed rule change."
There is a duo of devilish differences to delight us in their deplorability. Let's meander on.

SIP games


Firstly, NYSE American customers will have their quotes and trades exposed on the SIP data feeds before NYSE American reports back to them. That is a little deplorable you'd have to say. This is a similar architecture to IEX but the details are quite different and those differences matter. Here is the relevant piece from the SEC approval:

I have previously talked about the SIP games applicable to IEX's Dark Fader, "IEX trading." That is, at Mahwah and Carteret you may receive IEX market data before IEX's own local customers do. This is due to the CTA and UTP SIP processors being faster than IEX Dark Fader's 350-microsecond delay. Bats' letter to the SEC also pointed this out,
"At a high level, Bats reiterates its position that speed bumps of the nature that IEX employs and NYSE MKT is proposing provide zero benefits to displayed orders 1"
by way of a reference to the same in their footnote,


If we add in the Mahwah|NYSE to Carteret|Nasdaq link and update the latencies to my previous somewhat lame diagram, we get the following approximate sketch:

(click to enlarge)
The SIP processing latencies are medians. I've also added the trade and quote latencies to draw a distinction as the CTA SIP is inexplicably much slower for trade processing. The latencies were drawn from the current reports, except the CTA SIP trade latency which was drawn from the previous report as it showed a more relevant median latency due to the monthly breakdown. Please note Bats Exchange is to be found in the NY4/NY5 complex.

As you may now understand, the latency picture is quite messy when you include the 350 microsecond speed-bumped exchange feeds and order feedback for both NYSE American and IEX into the pictures at Mahwah and NY5 respectively. This gets a little worse when you consider the non-displayed versus displayed aspects of your trading trials and tribulations.

Who benefits? Those who understand the market structure minutiae and also have the resources to expend to put facilities in all the necessary locations. A smart HFT will not like the mess as it acts as a long term friction hampering efficient market development. A smart HFT is also the most likely to benefit due to their laser focus on the small details that ensure their survival. There is a reason why Citadel is often the biggest trader on IEX. That is, an HFT relies on good markets and negative developments, such as these speed-bumped dark faders, are not in the best interest of the market, and, by implication, not in the interest of an HFT.

Supportive HFTs are acting a little sycophantically in my book. They are perhaps disregarding the long-term good for the short-term politic.

You can see by way of the above diagram, for both NYSE American and IEX, simple co-lo is not enough. You need to have processing capabilities in all the main centres if you wish to trade optimally. At least for the CTA Plan's SIP stocks, you should already be colocated for the SIP's 80 microsecond delayed quote feed, around 270 microseconds before other NYSE American customers get their data. It's just more expense.

Do you also find it weird that UTP stocks' data from NYSE American will turn up in Carteret's Nasdaq data centre before it makes an appearance in NYSE's Mahwah facility?

Welcome to an SEC mandated hell.

Dark Fading


The largest part of IEX's success to date in achieving a market share of slightly over two percent is due to its dark trading. Less than twenty percent of IEX's trading is attributable to lit volume.


IEX is a parasitic vehicle that subverts price discovery. Think about that. It is a public exchange that thwarts price discovery, openness, competition, efficiency, progress, and innovation. So far, as IEX has bumped its size up a little, you may see the threatening correlation that is emerging. More market share equates to the likelihood of darker trading:
There is certainly a place for dark parasitic trading. I would argue that place is not within an advantaged public marketplace. The SEC either decided otherwise, or should regret its approval.

Another aspect of IEX's Dark Fader is that it is expensive - very expensive at 9 mils a side. NYSE American may use the same parasitic force to overcome IEX's Dark Fader by simple economics. If NYSE prices its DPEG and Primary Peg equivalents at a more reasonable rate, then perhaps the IEX Dark Fader infection may be extinguished by competitive forces. NYSE may price IEX out of existence. Please do so.

This brings NYSE American's own dark fading into sharp relief. Presently NYSE has simply proposed copying an older version of IEX's crumbling quote indicator to power its own dark fading pegs. This will not work as well as it could.

Here, for completeness, are the current IEX Dark Fading formulae, now schmarketed as IEX Signal:

A quote is deemed to be crumbling when the quote instability factor (QIF) exceeds a spread-dependent threshold:

`QIF > 0.39 if spread <= $0.01; QIF > 0.45 if $0.01 < spread <= $0.02; QIF > 0.51 if $0.02 < spread <= $0.03; QIF > 0.39 if spread > $0.03`

The variable definitions below are quoted from pages 33 & 34 of Exhibit 5 to the March 10 IEX SEC filing. Note that in this filing, instead of including all the markets in the number of protected quotations, IEX has chosen to incorporate only eight exchanges (XNYS, ARCX, XNGS, XBOS, BATS, BATY, EDGX, EDGA), thus N and F may range from 1 to 8. Three exchanges (XNGS, EDGX, BATS) still get a special mention, as per the formula's previous iteration, in the Delta definition.

  1. N = the number of Protected Quotations on the near side of the market, i.e. Protected NBB for buy orders and Protected NBO for sell orders.
  2. F = the number of Protected Quotations on the far side of the market, i.e. Protected NBO for buy orders and Protected NBB for sell orders.
  3. NC = the number of Protected Quotations on the near side of the market minus the maximum number of Protected Quotations on the near side at any point since one (1) millisecond ago or the most recent PBBO change, whichever happened more recently
  4. FC = the number of Protected Quotations on the far side of the market minus the minimum number of Protected Quotations on the far side at any point since one (1) millisecond ago or the most recent PBBO change, whichever happened more recently
  5. EPos = a Boolean indicator that equals 1 if the most recent quotation update was a quotation of a protected market joining the near side of the market at the same price
  6. ENeg = a Boolean indicator that equals 1 if the most recent quotation update was a quotation of a protected market moving away from the near side of market that was previously at the same price.
  7. EPosPrev = a Boolean indicator that equals 1 if the second most recent quotation update was a quotation of a protected market joining the near side of the market at the same price AND the second most recent quotation update occurred since one (1) millisecond ago or the most recent PBBO change, whichever happened more recently.
  8. ENegPrev = a Boolean indicator that equals 1 if the second most recent quotation update was a quotation of a protected market moving away from the near side of market that was previously at the same price AND the second most recent quotation update occurred since one (1) millisecond ago or the most recent PBBO change, whichever happened more recently.
  9. Delta = the number of these three (3) venues that moved away from the near side of the market on the same side of the market and were at the same price at any point since one (1) millisecond ago or the most recent PBBO change, whichever happened more recently: XNGS, EDGX, BATS.
The parameterisation of the crumbling quote will need to be specialised for NYSE's location and related latencies. It is a fairly straightforward task for the quantitatively inclined, but a job that nevertheless still needs to be done. I am hopeful NYSE will take this a step further and produce something a little better and more advanced. The IEX formulae are pretty lame; not only does the privileged fading prevent brokers, asset managers, and traders from innovating, but the lameness itself is a retrograde disservice to the financial community.
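For illustration, here is a minimal sketch of the spread-banded threshold test quoted above. The QIF itself is a fitted probability over the variables listed; its coefficients are in the filing but not reproduced here, so this sketch takes the QIF value as an input:

```python
# Hedged sketch of the spread-banded crumbling quote test quoted above. The
# QIF is a fitted probability over N, F, NC, FC, EPos, ENeg, EPosPrev,
# ENegPrev, and Delta; its coefficients live in the filing and are not
# reproduced here, so the QIF value arrives as an input.

def crumbling_quote(qif: float, spread: float) -> bool:
    """True if the quote is deemed to be crumbling at this spread."""
    if spread <= 0.01:
        threshold = 0.39
    elif spread <= 0.02:
        threshold = 0.45
    elif spread <= 0.03:
        threshold = 0.51
    else:
        threshold = 0.39
    return qif > threshold

print(crumbling_quote(qif=0.47, spread=0.015))  # True: 0.47 > 0.45
print(crumbling_quote(qif=0.47, spread=0.025))  # False: 0.47 <= 0.51
```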

NYSE if you need help, my email address is on my contact page.

It could be worse. It soon may be. Let's hope CHX's harmful speed-bump and Nasdaq's rather silly ELO don't add to the NMS hall of mirrors. You'd hope the SEC may see the error of its ways and one day mandate the removal of both IEX's and NYSE American's speed-bumps. Now, that would be a truly beneficial NMS development. The odds are too long to take such a bet. You'd better not wait: keep rolling out your IEX and NYSE American infrastructure to Mahwah, Carteret, and Secaucus, along with the required microwave, laser, or millimetre wave assets. Life ain't meant to be easy.

Finally, it remains to be seen what attention, if any, NYSE will pay to IEX's patents and patent applications.

Happy trading,

--Matt.

Friday, 12 May 2017

Oh my - more lines, radios, and cables

I don't think I'll ever puzzle out this interwebby thing. Last week's meander "Lines, radios, and cables - oh my" was a bit more widely read than I expected a meander about cables to be. Quite the quiet surprise to me. Thank you all for the feedback I've received.

There was nothing new in that blog. I expect just having a summary of some of the aspects of those things mentioned was a useful consolidation. Most people knew most of the stuff but perhaps a few little snippets, like hollow-core fibre, open ladder lines, and HF MIMO found a broader audience.

There was something new to me this week that took me a little by surprise.

Sub-millimetre wireless transmission with wires


Arnold Sommerfeld (1868-1951)
How about a Terabit per second over your home's existing copper telephone cable?

Back in 1899, Arnold Sommerfeld wrote, in "Über die Fortpflanzung elektrodynamischer Wellen längs eines Drahtes" [Ann. Physik u. Chem. 67, 233 (1899)], something I can't read as my German is not so great, but James R. Wait assures me, in English, in 1957 via "Excitation of Surface Waves on Conducting, Stratified, Dielectric-Clad, and Corrugated Surfaces", that it says,
"It was pointed out by Sommerfeld [8] nearly 60 years ago that a straight cylindrical conductor of finite conductivity can act as a guide for electromagnetic waves." 
This may be important if you want a slightly terrifying Terabit, or World Turtle-like, Internet connection coming into your home but can't get fibre in your digital diet. Just this week Rick Merritt over at EETimes wrote a nice piece, "DSL Pioneer Describes Terabit Future: Wireless inside a wired Swiss cheese", on Terabit DSL, or TDSL. This is an approach, not yet real, that may deliver Tbps capable DSL over your existing copper wires into your home. Perhaps Australia's NBN isn't so silly in its fibre to the node approach.

The really cool thing about this approach is that it is sending the signal down the gaps within the cable. It is using the effect described by Sommerfeld in 1899. The cable's wires are acting as waveguides and the signal is propagating down the cable as a surface mode. The signal is rocketing down between the wires! Kind of cool.

Terabit DSL (TDSL) - Use of a copper pair's sub-millimeter waveguide modes

The suggestion is to use something like 4096 sub-carriers from 100GHz to 300GHz with 48.8 MHz spacing, using bit loadings of 1-23 b/Hz, for a Tbps; or 50-150GHz for 10Gbps. Little antennae sit around each wire end, including at the customer premises. You'll need high-speed analogue-to-digital converters and a vector processing engine capable of many teraops to pull out the signal. Perhaps you could even do a petabit per second over a 10cm cable instead of 10Gbps at 500m. It's certainly an interesting cable that doesn't yet exist.
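A quick sanity check of those numbers:

```python
# Sanity-checking the quoted numbers: 4096 sub-carriers at 48.8 MHz spacing
# span roughly the 200 GHz between 100 and 300 GHz, and a Tbps then needs
# only a modest average bit loading within the quoted 1-23 b/Hz range.
subcarriers, spacing_hz = 4096, 48.8e6
band_hz = subcarriers * spacing_hz
print(f"occupied band: ~{band_hz / 1e9:.0f} GHz")        # ~200 GHz
print(f"average b/Hz for 1 Tbps: {1e12 / band_hz:.1f}")  # ~5.0
```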

This sub-millimetre waveguide approach may propagate fast but it is likely the vector processing used to suck the bits out of the cable's gaps will take a bit too much latency to provide a true low latency solution. The other stumbling block would be the short cable lengths, but you never know when silicon gets involved how cheap or low latency those repeaters may be one day. One for the future. Interesting nevertheless.

HFT memo on HF MIMO 


TabbForum produced a tidier, edited version of the previous cable article. Larry Tabb's TabbForum is a forum worth keeping tabs on. Larry's Market Structure Weekly videos are a great summary of market events and ideas worth knowing about - highly recommended. In the TabbForum comments, Sam Birnbaum (W2JDB), an amateur radio and finance guy, noted the travails of long-range RF comms. Too true.

Until the latter part of 1994 at Bankers Trust in Sydney, we had a microwave link from the trading floor in the CBD to a production data centre over the harbour. Not the best design choice. Heavy rain was problematic and smoke from bushfires was deadly to the link. Short links are tough and long links are tougher.

Difficulties abound at the best of times with HF propagation, especially with respect to the ionosphere. Australia relies on the ionosphere more than most countries. Our large, sparse country relies on its HF-based Jindalee Operational Radar Network (JORN), an over-the-horizon radar (OTHR), for layered surveillance out to around 3,000km. Defence denies they can see a flock of seagulls in Singapore. Within this OTHR scheme, there is much reliance on characterising and predicting ionospheric conditions.

A HAM, like Sam, faces the same issues with HF. We rely on ionospheric maps to work out appropriate transmission characteristics, such as this one from Australia's Bureau of Meteorology - Space Weather Services:
Ionospheric Map
This map shows the critical frequency of the ionosphere's F2 layer. Roughly speaking, frequencies below it will bounce, if approached vertically, and frequencies above it will partially, or wholly, leak out to the stars. HF transmissions approach the ionosphere obliquely, so such a map acts as a strength map rather than a hard and fast constraint. Also, with HF you may get awkward dead zones: too close for a normal skywave bounce but too far to be reached directly or by a near vertical incidence skywave (NVIS). Not always, but sometimes. Rough seas, wind, lightning, and meteors can all have a disruptive effect on HF. HF is a bunch of trouble. This is not the worst result, as transient comms that provide a latency benefit some of the time may still be helpful to a trader. After all, multicast UDP has no guarantee either, even if the failure rates differ by orders of magnitude. That said, low enough latency with adequate HF MIMO reliability is a real challenge.
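That oblique advantage is the classic secant law. Here is a minimal sketch of how much headroom above foF2 an oblique path buys you, using a flat-earth, single-hop geometry and an assumed F2 layer height of 300km, both simplifications of mine:

```python
import math

# Secant-law sketch: how far above the critical frequency (foF2) an
# oblique HF shot can go before it leaks out to the stars. Flat-earth,
# single-hop geometry; the 300 km F2 layer height is illustrative.

def max_usable_freq(fof2_mhz, ground_km, layer_km=300.0):
    """MUF ~= foF2 * sec(theta), theta the angle of incidence at the layer."""
    theta = math.atan((ground_km / 2.0) / layer_km)
    return fof2_mhz / math.cos(theta)

print(max_usable_freq(7.0, 0.0))     # 7.0 MHz: straight up, no headroom
print(max_usable_freq(7.0, 2000.0))  # ~24.4 MHz for a 2,000 km hop
```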

I mentioned previously the large gaps you may need between your antennae for HF MIMO. That may no longer be the case. Instead of just spatial diversity, advances have also been made in polarisation diversity, so smaller antenna footprints for HF MIMO may be OK. To the right is an example from Jiatong Liu's October 2015 PhD thesis, "HF-MIMO Antenna Array Design and Optimization"; compact arrays of this kind also featured in Radio Science's 2010 paper, "MIMO communications within the HF band using compact antenna arrays" [S. D. Gunashekar, E. M. Warrington, S. M. Feeney, S. Salous, and N. M. Abbasi].
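To see why spatial diversity alone is such a land grab at HF, a quick look at the numbers. The half-wavelength spacing rule of thumb is my simplification; real array designs vary:

```python
# Why HF MIMO used to mean a farm: classic spatial diversity wants
# antenna separations of the order of half a wavelength or more, and
# HF wavelengths are big.

C = 299_792_458  # speed of light, m/s

for f_mhz in (3, 10, 30):
    wavelength_m = C / (f_mhz * 1e6)
    print(f"{f_mhz:>2} MHz: wavelength {wavelength_m:6.1f} m, "
          f"half-wave spacing ~{wavelength_m / 2:5.1f} m")
# 3 MHz -> ~50 m spacing; polarisation diversity lets you shrink that.
```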

If you'd like a very readable summary of HF technological progression then chapters 1 & 2, pages 1 to 23, of Mohammad Heidarpour's thesis, "Cooperative Techniques for Next Generation HF Communication Systems" are pretty easy going, clearly written, and have a minimum of Greek formulae.

So, just perhaps, a small dwelling in Cornwall would do instead of a trading farm, though the spatial diversity may not hurt you.

Hollow-core fibre


HCF has been around for a little while, with the first demonstration in 1999, and the terminology can be confusing. Few-mode transmission may be used over HCF, so HCF cabling sometimes gets referred to as FMF, even though there is a distinct cable type properly called FMF. Just to confuse things further, sometimes "few-mode" is used and other times "fewer-mode". Also, as a nascent technology, there are many types of HCF as the race goes on to build better, or more specific, cables.

Cat videos also drove the development of HCF and FMF. FMF is a technology with multiple uses. HCF can handle high power, delivering lasers from one point to another, and HCF's low latency can be useful in physics labs, especially for synchrotrons and the like. That said, one of the biggest drivers for the development of FMF, and not just HCF, is the Internet's exponential growth trajectory. It's all those cat videos peeps are watching. There is grave concern that all the regular old SMF optical cable we have in the ground may never be enough for all those fuzzy balls of fun. We could run out of plumbing. With multiple cores, we can do some MIMO space-division multiplexing (SDM) over a fibre and potentially get huge bandwidth benefits. HCF may allow us all to safely watch cat videos forever and a day. Bullet dodged.

Here is a little illustration of different fibre types for you:

Breakthroughs in Photonics 2012: Space-Division Multiplexing in Multimode and Multicore Fibers for High-Capacity Optical Communication (April 2013) - (click to enlarge)
In the above diagram you'll see a cable termed FMF, but sometimes other cables in the picture are somewhat informally referred to as FMF due to their use of few modes. That probably shouldn't be the case.

There are plenty of different types of HCF to amuse yourself with too. The main ongoing search in the HCF space is for lower loss, or attenuation, sometimes measured in dB/m, though a trader would hope for a cable good enough to measure in dB/km. Such attenuation optimisation may involve trade-offs in bandwidth, as a single mode or few modes may only be appropriate to a quite limited wavelength range of light.
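To see why the dB/m versus dB/km distinction matters so much, a small sketch of how much launched power survives a longer run. The attenuation figures are illustrative numbers of my choosing:

```python
# Attenuation compounds linearly in dB, so a figure quoted in dB/m is
# three orders of magnitude worse than the same number in dB/km.

def surviving_power_fraction(atten_db_per_km, length_km):
    """Fraction of launched optical power left after length_km."""
    return 10 ** (-atten_db_per_km * length_km / 10)

print(surviving_power_fraction(0.2, 100))  # ~0.01: decent SMF over 100 km
print(surviving_power_fraction(3.0, 100))  # ~1e-30: lossy HCF, hopeless
```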

Here are some of the variations in HCF:

Hollow-core photonic bandgap fibers: technology and applications [Poletti, Petrovich, Richardson, 2013]
(click to enlarge)
In terms of longer transmission for FMF, here is a list of trials from Haoshuo Chen's 2014 thesis, with ranges from 17km to 7,326km:
Optical Devices and Subsystems for Few- and Multi-mode Fiber based Networks, p5
Note: FMF and HCF are not the same thing, though a particular fibre may be both (click to enlarge)
However, remember FMF and HCF are not the same thing. HCF should more properly be referred to as hollow-core photonic bandgap fibre (HC-PBGF) but most just refer to it as HCF. Sometimes HCF just gets lumped in with all the other FMF classes and referred to as FMF.

HCF types differ too, as the telecom and particle acceleration use cases pull in different directions. A telecom fibre wants a particular core mode and few or no surface modes. A particle accelerator fibre wants a particular surface mode and no core modes to suck input power away. This leads to divergence in HCF design. Not all HCF cables are equal.
Hollow-core Photonic Band Gap Fibers for Particle Acceleration [Noble, Spencer, Kuhlmey, 2006 p22]
I'm glad I cleared that up for you. Just remember, HCF is the fast one. If you hear FMF, ask if it is HCF. You'll be able to confirm by the latency numbers, as 0.997c is not too hard to work out. Look out for an HCF coming to a trading link near you.
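And, as promised, 0.997c is not too hard to work out. A small sketch comparing one-way propagation over 100km; the refractive index of 1.468 for solid-core SMF is the usual ballpark figure, my assumption rather than any particular vendor's spec:

```python
# One-way propagation time over 100 km: standard solid-core fibre
# versus hollow-core at the 0.997c figure above.

C = 299_792_458   # speed of light in vacuum, m/s
DISTANCE_M = 100_000

smf_s = DISTANCE_M * 1.468 / C    # light slowed by the glass core
hcf_s = DISTANCE_M / (0.997 * C)  # light mostly travelling in air

print(f"SMF: {smf_s * 1e6:.2f} us")               # ~490 us
print(f"HCF: {hcf_s * 1e6:.2f} us")               # ~335 us
print(f"Saving: {(smf_s - hcf_s) * 1e6:.2f} us")  # ~155 us over 100 km
```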

Neutrinos


KB commented on my previous cable article that Fermilab succeeded with the first neutrino communication in 2012, over 1.035km including 240m of planet, with a bit error rate of about one percent. KB pointed to the MIT article, "First Digital Message Sent Using Neutrinos", which refers to the paper, "Demonstration of Communication Using Neutrinos" [Stancil et al, 2012],
"In summary, we have used the Fermilab NuMI neutrino beam, together with the MINERvA detector to provide a demonstration of the possibility for digital communication using neutrinos. An overall data rate of about 0.1 Hz was realized, with an error rate of less than 1% for transmission of neutrinos through a few hundred meters of rock. This result illustrates the feasibility, but also shows the significant improvements in neutrino beams and detectors required for practical applications."
That's an explicit communication protocol - nice. The OPERA experiment reported, "Runs with CNGS neutrinos were successfully carried out in 2008 and 2009", which was earlier but not explicitly for communication, though a transfer of information did take place. So, who's on first?


The MIT article KB pointed to refers to a further MIT article, "How Neutrinos Could Revolutionize Communications with Submarines", which in turn refers to a paper on neutrino communication for submarines, "Submarine neutrino communication". This complements the submarine neutrino patents I previously referred to. I especially like this line in the paper,
"I am especially thankful to S. Kubrick and P. Sellers whose work served as inspiration.
Being There, sprang to mind and I laughed out loud.

The SPS used for the proton beam in the OPERA experiment cost 1,150 million Swiss Francs at 1970 prices. You probably don't need 400GeV. An HFT will need to work backwards, figure out what GeV she can get away with for her beam, and then match a detector to it. Another teensy difficulty will be generating pulses irregularly enough, yet often enough, to suit your trading events, as continuous operation is unlikely to be possible for a while yet. You have to ensure your tech can complete your analysis in real time with the appropriate latency benefit. Then you might be able to cost your through-the-planet signalling setup. Surely, in 2017, you can do better on costs than the SPS? It should be a fun project.
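As for why you'd bother, the prize is the chord. A sketch of the geometry, assuming neutrinos take the straight line through the planet at essentially c while fibre must follow the surface at c/1.468, and taking Sydney to London as roughly 17,000km along the surface, my round number:

```python
import math

# Through-the-planet appeal: neutrinos can take the chord while light
# in fibre must take, at best, the great-circle surface path.

R_EARTH_M = 6_371_000     # mean Earth radius
C = 299_792_458           # speed of light in vacuum, m/s

def chord_vs_surface_ms(surface_km):
    """One-way times in ms: neutrino chord at ~c vs surface fibre at c/1.468."""
    theta = (surface_km * 1000) / R_EARTH_M        # central angle, radians
    chord_m = 2 * R_EARTH_M * math.sin(theta / 2)
    t_chord = chord_m / C
    t_fibre = (surface_km * 1000) * 1.468 / C
    return t_chord * 1e3, t_fibre * 1e3

t_n, t_f = chord_vs_surface_ms(17_000)             # roughly Sydney-London
print(f"Neutrino chord: {t_n:.1f} ms, surface fibre: {t_f:.1f} ms")
# ~41 ms through the planet versus ~83 ms around it: a big prize,
# if you can afford the beam, the detector, and the error rate.
```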

Happy trading,

--Matt.