Inside Extreme Scale Tech|Sunday, March 1, 2015

Why Amazon Can’t Catch Lucera Financial Cloud 


The Lucera cloud dedicated to high frequency trading, liquidity matching, and foreign exchange has opened its doors to customers after several months of buildout. This spinout of financial services firm Cantor Fitzgerald is feeling pretty good about the technology edge it has over any potential competitors – including public cloud juggernaut Amazon Web Services, should it take a shine to what could turn out to be a very lucrative opportunity.

Interestingly, Cantor Fitzgerald is itself emulating online retailer Amazon in that it is taking the expertise it has built up creating its own financial trading systems and turning it into a cloud that will host its own applications as well as those of other broker/dealers, high frequency traders, foreign exchange operators, and, as it turns out, financial application providers who are sick of building and maintaining their own infrastructure.

Jacob Loveless, CEO at Lucera, gave EnterpriseTech an exclusive look inside its eponymous cloud last October as it was bringing three facilities online in New York, London, and Chicago and linking them with 17,000 miles of redundant fiber optic pipes. Lucera made a conscious choice not to deploy field programmable gate array (FPGA) accelerators in the X86 servers that underpin its cloud, mainly because of the cost of the FPGAs and the difficulty of changing the Verilog and VHDL programs that encode algorithms into their gates. HFT models change faster than programmers can update the VHDL, creating a programming bottleneck that the performance advantage of FPGA accelerators does not overcome. Lucera’s ticker plant – a system that consolidates market data feeds from a dozen different exchanges and feeds them into HFT applications – does use FPGA accelerators, but only because the feed formats and the data manipulation performed by the ticker plant do not change all that often, according to Loveless.

As is the case with supercomputing centers around the world, financial services companies, particularly those engaged in high frequency trading, often need specially designed hardware and software to meet their performance targets. Lucera has tapped Scalable Informatics as its server supplier and is using its JackRabbit servers in particular. The machines are equipped with the workstation variant of the “Ivy Bridge-EP” Xeon E5 v2 processors, which has eight cores running at 3.4 GHz – the highest clock speed that Intel offers for two-socket machines. Lucera’s engineers turn off all of the power saving features in the chip and overclock it to 3.6 GHz or 3.7 GHz, and also overclock the DDR3 memory in the systems to run at 2.1 GHz instead of the slower 1.67 GHz or 1.87 GHz of stock server memory. The servers have a dozen solid state drives configured in a RAID 1+0 array (two mirrors with a hot spare each) using dual controllers, plus four Ethernet ports running at 10 Gb/sec: two come off a low-latency network card from Chelsio and two are plain vanilla ports on the motherboard.
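A rough sense of what that memory overclock buys can be sketched with back-of-the-envelope arithmetic. The figures here assume "2.1 GHz" means an effective transfer rate of roughly 2,133 MT/s versus 1,866 MT/s stock, and four DDR3 channels per socket (standard on Xeon E5 v2 parts); the function name is ours, not Lucera's.

```python
# Rough peak-bandwidth math for the DDR3 overclock described above.
# DDR3 moves 8 bytes per channel per transfer; channel count and the
# MT/s figures are assumptions, not numbers from Lucera.
def ddr3_peak_gbs(transfer_mt_s, channels=4):
    return transfer_mt_s * 8 * channels / 1000.0  # GB/s per socket

stock = ddr3_peak_gbs(1866)        # ~59.7 GB/s
overclocked = ddr3_peak_gbs(2133)  # ~68.3 GB/s
print(round(overclocked / stock - 1, 3))  # ~0.143, i.e. ~14% more headroom
```

For a latency-bound trading workload the win is less about streaming bandwidth than about shaving cycles off every cache-miss, but the ratio gives a feel for the margin being chased.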

For an operating system, Lucera has created its own distribution of the SmartOS operating system from public cloud provider Joyent, which itself has taken the open source variant of Solaris and grafted things (such as the KVM hypervisor) onto it. Because trading applications are so latency sensitive, Lucera cannot use a heavyweight server virtualization hypervisor like Xen or KVM on its infrastructure. Public cloud providers like AWS, Rackspace Hosting, and SoftLayer (now owned by IBM) all use a variant of Xen to allocate virtual server slices to applications. Lucera chose a variant of Solaris for its cloud because Solaris containers – virtual private servers that sandbox applications atop a shared operating system kernel and file system – are time-tested and impose relatively little compute or network latency overhead on the system.

SmartOS in particular was chosen by Lucera because it can deploy images to bare metal servers or to virtualized machines using containers. Lucera has tweaked SmartOS with a homegrown orchestration engine so it can pin compute, memory, and I/O down to the granularity of a single socket for a particular application and customer, thereby getting around the “noisy neighbor” problem that afflicts public clouds as multiple customers vie for capacity on a single system.
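The article does not detail Lucera's orchestration engine; on SmartOS the native mechanisms for this kind of pinning are processor sets and resource pools. As a purely illustrative sketch of the concept, the Linux analogue below pins the current process to the first eight logical CPUs – roughly one eight-core socket, assuming CPUs 0-7 share a socket, which varies by machine.

```python
import os

# Illustrative only: pin this process to what we assume is one socket's
# worth of CPUs. On SmartOS the equivalent is done with psrset/pools;
# os.sched_setaffinity is Linux-specific.
socket0_cpus = set(range(min(8, os.cpu_count())))
os.sched_setaffinity(0, socket0_cpus)  # 0 = the current process
print(sorted(os.sched_getaffinity(0)))
```

Pinning every customer workload to its own socket trades utilization for determinism: no other tenant can steal cache, memory bandwidth, or interrupt-handling cycles on that socket.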

All of the big cloud providers and hyperscale datacenter operators are experimenting with custom network gear and network operating systems as they try to make their networks as malleable as their server infrastructure, and Lucera is on the cutting edge there, too. The company worked with Scalable Informatics to hack together a custom router, which does not include any merchant silicon – just X86 chips and routing software written by Lucera that runs inside KVM virtual machines. This router can take in hundreds of different fiber optic pipes coming from various exchanges and route their data to top-of-rack switches in the server cages in its datacenters. A typical router does not have to manage such a large number of fibers terminating at its ports, and because of the complexity of managing this workload, Lucera created its own software-defined network stack to manage the configuration and reconfiguration of the router in software.

Lucera went outside of the normal commercial markets for switches, and in fact chose British company Gnodal, a maker of high-end, low-latency switches, as its supplier. Gnodal created its own switch ASIC, called Peta, and could cram 72 ports running at 10 Gb/sec into a 1U chassis or 72 ports running at 40 Gb/sec into a 2U chassis, with a port-to-port latency of only 150 nanoseconds. Gnodal, you will recall, went into administration (a form of receivership in Britain) back in October, and Cray picked up key people and assets of the company.

“I love Cray, and I think Cray is great, but we want the switches to do more,” Loveless tells EnterpriseTech. And that is why Lucera has a bunch of the new Freedom Server-Switch hybrids from Pluribus Networks, announced only two weeks ago, in its testing labs. “It is obvious to us that there was more that could be done, in terms of measurement. The idea of Pluribus is very compelling to us, that we could deploy software on the switches themselves.”

Lucera has not finished the evaluation process yet for the switches.

Aside from having higher-speed hardware, a thinner layer of virtualization software, and perhaps better software-defined routing and switching (should it buy the Freedom hybrids) than is available from the likes of Amazon Web Services, Lucera has some other advantages.

The big one is a huge differentiation: location, location, location. In this case, that is the three key facilities, operated by Equinix. The first is known as NY4, in Secaucus, New Jersey, across the Hudson from New York, where a lot of hedge funds and equities and foreign exchange operators park their gear. The second is known as LD4, located in London, and the third is CH2 in Chicago. (The latter is a center for derivatives trading.) Lucera owns private cages in each of these centers, each capable of holding 22 server racks. The JackRabbit is a two-node machine, and each rack holds 44 nodes, for a total of 46,464 cores across the cloud.
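Those capacity figures check out arithmetically, given the server spec described earlier (two eight-core sockets per node):

```python
# Back-of-the-envelope check of Lucera's stated core count, using the
# figures in the article: 3 Equinix cages, 22 racks per cage, 44 nodes
# per rack, and two 8-core Xeon E5 v2 sockets per node.
sites = 3
racks_per_cage = 22
nodes_per_rack = 44
cores_per_node = 2 * 8

total_cores = sites * racks_per_cage * nodes_per_rack * cores_per_node
print(total_cores)  # 46464
```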

“What is clutch, what is pivotal, is that you have got to be in NY4, LD4, and CH2,” says Loveless. “I think this is a big reason why we have not seen Amazon or other public clouds take off for Wall Street. The act of getting the data there in real-time is difficult if not impossible. You are going to bring a market data feed all the way down to Ashburn, Virginia? That’s not reasonable.”

And even if Amazon did co-locate in the NY4, LD4, and CH2 datacenters to get near the financial action, creating what we might call FinCloud, AWS still has “this massive virtualization overhead,” says Loveless.

“So there goes FX engines, gateways, ticker engines, all of the feed handler software – anything that is latency sensitive. AWS would be OK for batch-style stuff. Unless you have a big, big job that is batchy, like an end-of-day settlement, that infrastructure is not really designed for that. Most of the jobs in finance are not big, huge, scale-out problems. Even the time series databases are quick, intense jobs. Many of the databases are designed to run on a single machine, they are not sharded.”

And that is why the Lucera cloud always feels the need for speed. Lucera got its hands on early releases of the Xeon E5 v2 processors last year to start building its cloud, and Loveless is already looking ahead to Intel’s next-generation “Haswell” Xeon E5 v3 processors, which are slated for delivery later this year.

“There was a time when we had 4 GHz or 4.1 GHz under air,” says Loveless, referring to the fact that the chips did not have to be water cooled. “I think the new Xeon E7 v2 is interesting, but I think the big win will be with Haswell with AVX2, with the registers moving from 128 bits to 256 bits. You will be able to get twice as much throughput for packet processing, and we will certainly use it on the database side when we are crunching analytics.”

Loveless doesn’t expect Intel to push Haswell Xeon E5 clock speeds up above 4 GHz, but he always asks with every new generation. He says that if Intel could deliver a twelve-core chip that ran at 3.8 GHz, that would be “nirvana.” Perhaps with some overclocking, this might just happen.
