From 6be6a437f85ee68e9b0ca22605fc911d7a8abff4 Mon Sep 17 00:00:00 2001
From: globosco
Date: Wed, 26 Aug 2020 13:42:58 +0200
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 76a720b..c9c2c4e 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
 # A Software Library to Speed-up Sorted Table Search Procedures via Learning from Data
 
-This repository provides a benchmarking platform to evaluate how Machine Learning can be effectively used to improve the performance of classic index data structures. Such an approach, referred to as Learned Data Structures, has been recently introduced by Kraska et al.[2]. In their framework, the learning part is made of a directed graph of models that refine the interval in a sorted table where a query element could be. Then, the final stage is a binary search. The models are either Feed Forward Neural Networks, with RELU activators or multi/univariate linear regression. In order to enucleate the methodological innovations of this proposal from the engineering aspects of it, we focus on a very basic scenario. One model for a single prediction and then a routine to search in a sorted table to finish the job. The table is kept in main memory. With the use of the mentioned Neural Networks, this "atomic" index is as general as the one proposed by Kraska et al., since those networks, with RELU activators, are able to approximate any function [1]. Moreover, our approach can be simply cast as the study of learned search in a sorted table. It is a fundamental one, as outlined in [3,4].
+This repository provides a benchmarking platform to evaluate how Machine Learning can be effectively used to improve the performance of classic index data structures. Such an approach, referred to as Learned Data Structures, has been recently introduced by Kraska et al. [2]. In their framework, the learning part consists of a directed graph of models that refine the interval of a sorted table in which a query element could be; the final stage is a binary search. The models are either Feed Forward Neural Networks with ReLU activators, or uni/multivariate linear regressions. In order to separate the methodological innovations of this proposal from its engineering aspects, we focus on a very basic scenario: one model makes a single prediction, and a routine then searches the sorted table to finish the job. The table is kept in main memory. With the use of the mentioned Neural Networks, this "atomic" index is as general as the one proposed by Kraska et al., since such networks, with ReLU activators, are able to approximate any function [1]. Moreover, our approach can simply be cast as the study of learned search in a sorted table, a fundamental problem, as outlined in [3,4].
 
 We include in this repository:
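For reference, below is a minimal, self-contained sketch of the "atomic" learned index the revised paragraph describes: a single univariate linear regression predicts the rank of a query key in the sorted table, and a binary search bounded by the model's maximum training error finishes the job. This is an illustration only; the class and method names (`AtomicLearnedIndex`, `contains`) are hypothetical and are not the repository's actual API.

```cpp
// Sketch of a one-model learned index over a sorted in-memory table
// (illustrative only; assumes at least two distinct keys).
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

class AtomicLearnedIndex {
 public:
  // Least-squares fit of rank = slope * key + intercept over the sorted keys,
  // followed by one pass recording the largest prediction error.
  explicit AtomicLearnedIndex(std::vector<std::uint64_t> keys)
      : keys_(std::move(keys)) {
    const double n = static_cast<double>(keys_.size());
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < keys_.size(); ++i) {
      const double x = static_cast<double>(keys_[i]);
      sx += x;
      sy += static_cast<double>(i);
      sxx += x * x;
      sxy += x * static_cast<double>(i);
    }
    slope_ = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    intercept_ = (sy - slope_ * sx) / n;
    for (std::size_t i = 0; i < keys_.size(); ++i) {
      const double err = std::fabs(predict(keys_[i]) - static_cast<double>(i));
      // +1 absorbs the truncation of fractional predictions below.
      max_err_ = std::max(max_err_, static_cast<std::size_t>(err) + 1);
    }
  }

  // Binary search restricted to [prediction - max_err, prediction + max_err].
  bool contains(std::uint64_t key) const {
    const double p = std::clamp(predict(key), 0.0,
                                static_cast<double>(keys_.size() - 1));
    const std::size_t mid = static_cast<std::size_t>(p);
    const std::size_t lo = mid > max_err_ ? mid - max_err_ : 0;
    const std::size_t hi = std::min(mid + max_err_ + 1, keys_.size());
    return std::binary_search(keys_.begin() + lo, keys_.begin() + hi, key);
  }

 private:
  double predict(std::uint64_t key) const {
    return slope_ * static_cast<double>(key) + intercept_;
  }

  std::vector<std::uint64_t> keys_;
  double slope_ = 0.0, intercept_ = 0.0;
  std::size_t max_err_ = 0;
};

int main() {
  AtomicLearnedIndex index({2, 3, 5, 7, 11, 13, 17, 19, 23, 29});
  std::cout << index.contains(13) << ' ' << index.contains(14) << '\n';  // 1 0
}
```

Because the model's maximum training error bounds where a key can be, the final search only examines a window of 2 * max_err + 1 positions rather than the whole table; when the model fits the data well, this window is far smaller than the table and the lookup is correspondingly faster than a plain binary search.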