Start-up Aims AI at Automated Tuning of Complex Systems
There was a simpler time when system tuning consisted of adjusting relatively few knobs, a manual and not overly demanding task that brought out the best in system performance. But now, as we move toward accelerated enterprise systems networked from data center to public cloud to sensor-equipped mobile devices at the edge, simplicity in systems tuning is long gone.
Today’s bigger, more complex, connected and intelligent systems have an exponentially higher number of connections, dependencies, interfaces, protocols and processing architectures that, if not optimized, can hamstring networked systems. For performance engineers still using manual tuning methods, systems optimization is a time-eating hydra-headed monster that poses a virtually infinite number of possible adjustments and configurations for trial-and-error testing.
Start-up Concertio today launched what it said is the first machine learning-based tool aimed at maximizing application and system performance by optimizing the myriad configuration settings used in complex systems. While its initial product, Optimizer Studio, automates systems diagnostics and generates a “grocery list” of adjustments that systems engineers and IT managers then review and implement, the next iteration of the technology (Optimizer Runtime) will automate both diagnostics and tuning implementation.
None other than Mellanox has tested Concertio’s effectiveness on its own networking technology. As part of today’s launch, Concertio announced the results of a test involving Mellanox’s ConnectX-3 Pro Ethernet cards that compared performance after automated testing by Optimizer Studio against manual tuning methods used by Mellanox performance engineers.
The network cards were delivered in their off-the-shelf default settings, and then the Mellanox engineers and Concertio were informed of the workload the cards would be used for. Optimizer Studio ran against nine ConnectX-3 Pro-specific knobs representing millions of option combinations. The tool’s workload classification engine and reinforcement learning techniques modeled the target workload, detected different workload phases, and experimented with various knob configurations in each phase. It then produced a report showing the optimal settings for the specific use-case.
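Concertio has not published the details of its algorithms, so the following is only a deliberately simplified illustration of the experiment-and-keep-improvements idea: a greedy search that flips one binary knob at a time and keeps the change only if a benchmark improves. The knob count, benchmark function and knob effects here are all invented for the sketch.

```python
import random

def search_knobs(benchmark, n_knobs, iterations=200, seed=0):
    """Greedy one-knob-at-a-time search: flip a random knob, keep the
    change only if the measured benchmark improves. A toy stand-in for
    the far richer phase-aware, learning-based search described above."""
    rng = random.Random(seed)
    best = [0] * n_knobs                          # start from factory defaults
    best_score = benchmark(best)
    for _ in range(iterations):
        candidate = best[:]
        candidate[rng.randrange(n_knobs)] ^= 1    # try flipping one knob
        score = benchmark(candidate)
        if score > best_score:                    # keep only improvements
            best, best_score = candidate, score
    return best, best_score

# Invented benchmark: pretend knobs 0, 3 and 7 each help this workload.
def toy_benchmark(knobs):
    return 100 + 20 * knobs[0] + 15 * knobs[3] + 5 * knobs[7]

settings, score = search_knobs(toy_benchmark, n_knobs=9)
print(settings, score)
```

A real tuner has to cope with noisy measurements, multi-valued knobs and interactions between them, which is where the modeling and reinforcement learning come in; but even this toy version shows why automation beats manual trial-and-error once the knob count grows.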
The test result: Concertio won.
“In the comparison test, Optimizer Studio’s automated run improved performance in the target use-case by 80 percent, surpassing the 62 percent we achieved by manual tuning,” said Amir Ancel, performance group director at Mellanox. “Optimizer Studio’s automated tuning algorithms allow us to focus on high-level optimization, leaving the mundane low-level parameter optimizations to software.”
Built for traditional datacenters, hyperscale datacenters and high-performance computing systems in the cloud or on-premises, Optimizer Studio monitors and learns from the interactions between applications and systems, according to Concertio.
As of now, the tool supports Linux-based systems running on Intel CPUs. Concertio said it intends to broaden its portfolio of supported technologies in upcoming product iterations.
“Tuning used to be easy,” Dr. Tomer Morad, Concertio co-founder and CEO, told EnterpriseTech. “There were only a handful of knobs, and you’d put a performance engineer on that and tweak some settings and get some good results. But today we are already in the hundreds of knobs. It’s exploding and has become almost impossible to get to a very good result because the parameter space is practically limitless. If you have 100 binary knobs it’s practically limitless, you cannot check everything. So you need some kind of automatic tool to help you with that.”
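Morad’s arithmetic is easy to check: 100 independent on/off knobs already define 2^100 distinct configurations, far beyond what any exhaustive search could ever cover.

```python
# 100 binary knobs -> 2**100 possible configurations.
configs = 2 ** 100
print(configs)            # 1267650600228229401496703205376

# Even benchmarking a million configurations per second, trying them
# all would take on the order of 10**16 years.
seconds = configs / 1_000_000
years = seconds / (60 * 60 * 24 * 365)
print(f"about {years:.1e} years")
```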
Tuning variables, called tunable knobs, include settings across hardware, firmware, the operating system and applications, such as:
- CPU hardware, including symmetric multi-threading, cache prefetching and cache partitioning configuration
- Peripheral hardware, including PCIe maximum read request size, network interrupt affinity and network task offloading
- Firmware, including power states of the CPU
- Operating system, including choice of IO or task scheduler, NUMA balancing and memory migration, thread affinity and page cache
- Applications, including application framework settings (e.g., Spark), application component (e.g., MongoDB database) settings, and application-defined knobs
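On Linux, the platform Optimizer Studio currently supports, many of the OS-level knobs above are exposed as plain files under /proc and /sys, which is what makes them scriptable at all. A minimal sketch of reading a few of them (these paths are standard Linux locations, but which ones exist varies by kernel and configuration, so the code treats any knob as optional):

```python
from pathlib import Path

# A few OS-level knobs of the kind listed above, exposed by Linux as
# plain files. Availability varies by kernel version and build options.
KNOBS = {
    "numa_balancing": "/proc/sys/kernel/numa_balancing",
    "swappiness": "/proc/sys/vm/swappiness",
    "transparent_hugepages": "/sys/kernel/mm/transparent_hugepage/enabled",
}

def read_knob(path):
    """Return the current value of a sysctl/sysfs knob, or None if absent."""
    p = Path(path)
    return p.read_text().strip() if p.exists() else None

for name, path in KNOBS.items():
    print(f"{name}: {read_knob(path)}")
```

Writing to the same files (with root privileges) changes the setting, which is the mechanism an automated tuner can drive programmatically.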
With so many variables, Morad said, performance engineers can’t be expected to know about all of the available knobs, or to predict their effects on one another. IT professionals must also occasionally tune their systems, but it’s difficult for them to maintain expertise in all system internals, he said. Too often, engineers test and set optimal settings for the few knobs they are most familiar with, leaving the rest in the default settings that were in place when the equipment was delivered. In some cases, he said, system tuning is overlooked entirely due to the complexity involved, and systems remain at inefficient, under-performing factory settings.
Beyond application performance tuning, Morad said the tool can be used to cut cloud and data center costs by finding system configurations that use fewer resources, and it can be utilized by hardware and software product vendors to identify optimized off-the-shelf configurations for shipment to customers or resellers. It also can be used for maximizing public benchmark performance for marketing purposes.
Privately held Concertio (previously called DatArcs) was founded in 2016 and is based in New York City. The company is part of the Runway Program at the Jacobs Technion-Cornell Institute of Cornell Tech in New York, and it participates in the Intel Ingenuity Partner Program.
“Tailor-tuned systems can significantly outperform baseline general-purpose systems, but the number of configurable settings has reached into the hundreds - way too many for any human team to effectively tune and test,” said Concertio co-founder and CTO Andrey Gelman. “It used to be merely a gap that could be bridged by human tuning and testing, but with the increasing hardware and software complexity, it’s exploding into a chasm where human performance tuning is netting diminishing returns, leaving these expensive systems bottlenecked and inefficient.”