Tempus grew out of SVRWave in 2015, with the support of about 20 experts, professors, and engineers from Belarus, Bulgaria, the Czech Republic, Israel, North Macedonia, Russia, and Ukraine, focused on time-series analysis in near real time. It aims to produce the most precise time-series forecasts on the market. While the source code is open and licensed under the 4-clause BSD license, I offer paid consulting services for real-world applications, with a team of engineers prepared for maintenance and feature implementation tailored to the client's needs.
So far, we have accomplished the following:
With the help of Prof. Atanasov, we've corrected the original support vector machine theory of Vapnik and Chervonenkis, allowing it to scale by nesting kernel matrices. Even with a single level of depth, our SVM implementation delivers more precise regression forecasts than Microsoft's LightGBM, one of the best models for regression problems available today.
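To illustrate the nesting idea only (this is a minimal sketch, not Tempus's actual algorithm; `rbf_kernel` and `nested_kernel` are hypothetical names): the rows of a kernel matrix can themselves be treated as feature vectors and fed to a second-level kernel, and so on for any depth.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """Pairwise RBF kernel: K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def nested_kernel(X, depth=2, gamma=0.5):
    """Nest kernels: each level treats the previous level's kernel
    matrix rows as the feature vectors for the next kernel."""
    K = rbf_kernel(X, gamma)
    for _ in range(depth - 1):
        K = rbf_kernel(K, gamma)
    return K

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
K2 = nested_kernel(X, depth=2)
print(K2.shape)  # (6, 6): a symmetric similarity matrix at depth 2
```

Each additional level costs another full kernel-matrix computation, which is why depth trades precision against hardware resources.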
This novel implementation of a support vector machine can use any model as its kernel method: e.g., a temporal fusion transformer, gradient-boosted decision trees, a conventional kernel distance function such as the RBF, global alignment, or path kernel, or even another SVM as the kernel function. Scale it as far as you are willing to commit hardware resources, or until you are satisfied with the forecast precision and all relevant information has been extracted from the available data. Good fitness (referred to below as accuracy for the reader's convenience), significantly better than that of any modeling technique I'm aware of, can be established even from minuscule amounts of data presented to the software. This is accomplished by calculating the ideal kernel matrix for a given set of labels and training another model on the ideal kernel matrix itself as a dataset (the support vector manifold), producing an optimal Hilbert space, or kernel function, for the input data.
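The classical formulation of the ideal kernel for a set of labels y is the outer product K* = y yᵀ, and a candidate kernel can be scored by its alignment with that target (Cristianini et al.'s kernel-target alignment). The sketch below uses that textbook construction purely as an illustration of fitting a kernel to labels; it is not Tempus's procedure, and all names are hypothetical.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Pairwise RBF kernel matrix for rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def alignment(K, K_ideal):
    """Kernel-target alignment: cosine similarity between the two
    kernel matrices viewed as flat vectors (ranges over [-1, 1])."""
    return np.sum(K * K_ideal) / (np.linalg.norm(K) * np.linalg.norm(K_ideal))

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
y = np.sin(X[:, 0])            # toy regression labels
K_ideal = np.outer(y, y)       # ideal kernel matrix for these labels

# Pick the candidate kernel best aligned with the ideal one.
gammas = [0.01, 0.1, 1.0, 10.0]
best = max(gammas, key=lambda g: alignment(rbf_kernel(X, g), K_ideal))
print(best)
```

Training a model on K* itself, as the text describes, generalizes this selection step: instead of searching a small parameter grid, the model learns a kernel function that approximates the ideal matrix.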
Besides nesting kernels, Tempus can scale in the time domain (using dynamic time slicing of labels) or in the spectral domain (using STFT, wavelet, VMD, or EMD decomposition), depending on how many hardware resources you are willing to commit to modeling your data. Sequential (residual) boosting is also partially implemented (WIP). To counter extremely noisy data, this SVM implementation supports multiple layers of weights used by the internal matrix solver. For real-time and near-real-time purposes, online learning and forgetting are not yet fully implemented, but they are in the works.
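The common thread of spectral-domain scaling is that the signal is split into additive components, each component is modeled separately, and the component forecasts are summed. The sketch below uses a plain FFT band split as a stand-in for STFT/wavelet/VMD/EMD (it is an illustration of the principle, not Tempus's implementation):

```python
import numpy as np

def band_split(x, n_bands=3):
    """Split a signal into additive frequency bands by masking
    disjoint ranges of FFT coefficients; the bands sum back to x."""
    F = np.fft.rfft(x)
    edges = np.linspace(0, len(F), n_bands + 1, dtype=int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = np.zeros_like(F)
        mask[lo:hi] = F[lo:hi]
        bands.append(np.fft.irfft(mask, n=len(x)))
    return bands

t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
bands = band_split(x, n_bands=3)

# Each band would get its own forecasting model; their outputs sum.
print(np.allclose(sum(bands), x))  # True: decomposition is lossless
```

Lossless additivity is what makes the approach scale: committing more hardware means more bands, each simpler to model than the raw signal.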
Data connectors are available for FIX (via QuickFIX, WIP), PostgreSQL, DuckDB, and MQL5, covering financial applications as well as general-purpose data sources.
The project is implemented mainly in C++, CUDA, OpenCL, OpenMP, and MPI, the last used to span multiple computing nodes. Tempus can scale to many GPUs and many times more CPU cores per computing node. It runs only on Linux operating systems.
The project has cost about 1.2 million euros in salaries, running expenses, and development hardware over the past 10 years.
Tempus's modular architecture allows it to plug into higher-quality commercial solvers such as Knitro, to add data connectors for different kinds of data, and to expand its support to new applications.
The public Core repository of Tempus is hosted on GitHub, where you can browse the sources and documentation and, upon request, contribute changes to the project.
I am the lead developer and founder, and I contribute to the project actively and continuously. I provide consultancy services for setting up Tempus and exploiting it for your purposes. If you are interested in getting involved in the project or in using it, feel free to contact me. Please see details about Tempus and other machine-learning topics on my statistical learning page.