Scaling for big projects
Hi!
So I ran a test to find out whether Restbed can scale to the size of our projects, and I was wondering if my results make sense.
The first test publishes a single route; I then (very unscientifically) look at the round-trip time to my localhost Restbed server in Chrome DevTools.
The average is about 30 ms.
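For reference, the single-route baseline looks roughly like this (the port choice is arbitrary):

#include <memory>
#include <restbed>

using namespace std;
using namespace restbed;

int main()
{
    auto resource = make_shared< Resource >();
    resource->set_path("/route1");
    resource->set_method_handler("GET", [](const shared_ptr< Session > session) {
        session->close(OK);
    });

    auto settings = make_shared< Settings >();
    settings->set_port(1984); // arbitrary port

    Service service;
    service.publish(resource);
    service.start(settings); // blocks; request http://localhost:1984/route1 from Chrome
    return 0;
}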
Now, for the scaling test, I created 3000 routes in a loop using this:
// Assumes the usual includes (<memory>, <sstream>, <restbed>),
// using namespace std / restbed, and a restbed::Service named `service`.
for (int i = 0; i < 3000; ++i)
{
    auto resource = make_shared< Resource >();

    std::ostringstream ss;
    ss << "/route" << i;
    resource->set_path(ss.str());

    // The handler does nothing but close, so the timing reflects routing overhead.
    resource->set_method_handler("GET", [](const shared_ptr< Session > session) {
        session->close(OK);
    });

    service.publish(resource);
}
Calling /route999 (which I think sorts last in the alphanumeric ordering of the 3000 paths?) takes an average of 150 ms to complete, while calling /route1 averages 30 ms.
That's a delta of about 120 ms between the first and the last route. My first guess was a set with logarithmic lookup, but a cost that grows with the route's position looks more like a linear scan over the published routes (a log-time lookup would cost roughly the same for every route).
So I'm wondering if there are better ways to scale this for a big project.
For example, using an unordered_set (or an unordered_map keyed by path) instead of a set/linear scan would make lookups constant time on average, degrading to O(n) only in the worst case.
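To make that concrete, here is a rough sketch of the idea with a plain std::unordered_map (this is not restbed's actual internals, just an illustration):

#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>

using Handler = std::function<void()>;

int main()
{
    // Hash map keyed by the full path: O(1) average lookup,
    // independent of how many routes are registered.
    std::unordered_map<std::string, Handler> routes;
    for (int i = 0; i < 3000; ++i)
        routes["/route" + std::to_string(i)] = [i] { std::cout << "hit /route" << i << "\n"; };

    // Dispatch is a single hash lookup instead of a scan.
    if (auto it = routes.find("/route2999"); it != routes.end())
        it->second();
    return 0;
}

One caveat I can see: an exact-match table like this couldn't handle parameterised paths (e.g. "/route/{id: .*}"), so those would still need some fallback matching.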
I'm also wondering if there would be a way to split routes into groups/modules?
Instead of declaring all routes from the root, you could declare a module such as '/module1/', which would have its own set of routes with '/module1/' as the root.
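Something like this two-level lookup is what I have in mind (the names are made up, just a sketch):

#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>

using Handler = std::function<void()>;

// Two-level table: module prefix -> that module's own route table.
using RouteTable  = std::unordered_map<std::string, Handler>;
using ModuleTable = std::unordered_map<std::string, RouteTable>;

// Split "/module1/status" into "/module1" + "/status" and dispatch in two hops.
bool dispatch(const ModuleTable& modules, const std::string& path)
{
    const auto slash = path.find('/', 1); // second '/' separates module from route
    if (slash == std::string::npos) return false;
    const auto module = modules.find(path.substr(0, slash));
    if (module == modules.end()) return false;
    const auto route = module->second.find(path.substr(slash));
    if (route == module->second.end()) return false;
    route->second();
    return true;
}

int main()
{
    ModuleTable modules;
    modules["/module1"]["/status"] = [] { std::cout << "module1 status\n"; };
    dispatch(modules, "/module1/status"); // prints "module1 status"
    return 0;
}

That way each module only pays for its own routes, and the top-level table stays small.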
Thanks for this project, by the way; it seems like a good product to use!