Benchmarking is traditionally used for hardware selection, but it is also extremely useful in rapid development environments, software engineering, and algorithm assessment.
We run all the traditional benchmarks: the SPEC family (including SPECpower), sophisticated fio scripts, STREAM, Ruby benchmark modules, and database benchmarks in both SQL and NoSQL variants.
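As an illustration of the kind of scripted timing harness this involves, the following sketch drives Ruby's standard Benchmark module; the two workloads timed here (sorting and summing a shuffled array) are placeholders for illustration only, not our actual suites:

```ruby
require 'benchmark'

# Illustrative use of Ruby's standard Benchmark module.
# The workloads below are placeholders, not real customer workloads.
data = (1..100_000).to_a.shuffle

Benchmark.bm(8) do |bm|
  bm.report('sort:') { data.sort }
  bm.report('sum:')  { data.sum }
end
```

The same report structure scales to any number of labeled workloads, which makes it convenient for side-by-side comparisons across software versions.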
We can also create benchmarks that mimic one or more specific workloads and can be delivered to suppliers for easy benchmarking, including results suitable for RFPs and model selection.
One of the most important aspects of benchmarking is the assessment of software quality and the ongoing quality control of continuous deployment environments. Well-designed benchmarks can be run on a regular basis to guide software engineering efforts, serving as a gauge or even as milestones in Gantt charts.
Creating microbenchmarks and ad hoc benchmarks is a task we have performed in the past. In the nascent field of Artificial Intelligence and Machine Learning this is especially important, as very few existing benchmarks are representative of a new algorithm.
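A hedged sketch of what such an ad hoc microbenchmark might look like: here a naive dot product stands in for a hypothetical new algorithm and is timed against a built-in baseline. The function name, array sizes, and iteration count are all illustrative assumptions.

```ruby
require 'benchmark'

# Hypothetical microbenchmark: a naive dot product (standing in for a
# new algorithm) timed against a zip/sum baseline. Sizes are illustrative.
def naive_dot(a, b)
  sum = 0.0
  a.each_index { |i| sum += a[i] * b[i] }
  sum
end

a = Array.new(100_000) { rand }
b = Array.new(100_000) { rand }

naive   = Benchmark.realtime { 100.times { naive_dot(a, b) } }
builtin = Benchmark.realtime { 100.times { a.zip(b).sum { |x, y| x * y } } }

puts format('naive:   %.4fs', naive)
puts format('builtin: %.4fs', builtin)
```

Wrapping each candidate in `Benchmark.realtime` keeps the harness small enough to hand to a development team alongside the algorithm itself.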
- Traditional benchmarks with a full configuration management database (including BIOS settings)
- Benchmarks implemented as software engineering tools
- Benchmarks that model existing workloads for vendor/model selection
- Microbenchmark creation for rapid development and continuous deployment environments