Should we add a take_group_member/2 API?
Hi,
I'm using pooler to manage my DB clusters (including a MySQL cluster, a Mongo cluster, ...) via the group feature.
I noticed that the doc for the take_group_member/1 API says:
```erlang
%% @doc Take a member from a randomly selected member of the group
%% `GroupName'. Returns `MemberPid' or `error_no_members'. If no
%% members are available in the randomly chosen pool, all other pools
%% in the group are tried in order.
```
That is, if no members are available in any of the pools, it returns error_no_members.
So, should there be a way to wait for a member to become available from the chosen pool?
Thanks.
How would you wait for a member to become available from any of the gen_servers in the group?
I've been thinking about this as well, but with the current setup I haven't found a reasonable approach other than adding a retry on top of the call.
take_member/2 essentially waits, up to the given timeout, until the gen_server responds with a worker. But in the case of take_group_member/1 you're not communicating directly with a single gen_server; instead you ask pg2 which gen_servers exist, then ask each one for a worker with timeout 0. If you passed a customized timeout to each of those calls, it wouldn't make sense to have a pool of gen_servers.
Unless there is more information about the gen_servers that can be combined with the pg2 information, there isn't really a good way to make the call wait. A retry is a bit ugly as well, IMHO.
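For reference, the retry workaround mentioned above might look something like this. This is only a sketch of calling code on top of the existing API; `take_with_retry/3`, the module name, and the retry/sleep parameters are my own invention, not part of pooler:

```erlang
%% Hypothetical retry wrapper around pooler:take_group_member/1.
%% take_with_retry/3 is NOT part of pooler's API; it just polls the
%% group until a member is available or the retries run out.
-module(pool_retry).
-export([take_with_retry/3]).

take_with_retry(_GroupName, 0, _SleepMs) ->
    error_no_members;
take_with_retry(GroupName, Retries, SleepMs) ->
    case pooler:take_group_member(GroupName) of
        error_no_members ->
            timer:sleep(SleepMs),
            take_with_retry(GroupName, Retries - 1, SleepMs);
        Pid when is_pid(Pid) ->
            Pid
    end.
```

For example, `pool_retry:take_with_retry(my_group, 5, 100)` would try up to five times with 100 ms pauses before giving up. It works, but it burns latency in fixed sleeps, which is exactly the ugliness being discussed here.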
@seth Have you had any ideas about how this could be handled? What sort of information could one leverage from the gen_servers without breaking the isolation of their process state? It doesn't need to be super accurate, just something that gives an order in which to poke the gen_servers, starting with the one most likely to have available members. A simple LRU, perhaps, that could be maintained without poking the gen_server states? (I'm just throwing out random ideas here.)
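To make the ordering idea above concrete, here is one possible sketch: a public ETS table recording the last time each pool successfully handed out a member, so the group walk can try the most recently successful pool first. None of this exists in pooler today; the table name, module, and functions are all invented for illustration:

```erlang
%% Sketch: order a group's pools by a last-success timestamp kept in
%% ETS, without touching any gen_server state. Callers would invoke
%% note_success/1 whenever a take succeeds, and ordered_pools/1 to
%% decide which pool to ask first. Purely hypothetical.
-module(pool_order).
-export([init/0, note_success/1, ordered_pools/1]).

init() ->
    ets:new(pool_last_success, [named_table, public, set]).

note_success(PoolPid) ->
    ets:insert(pool_last_success, {PoolPid, erlang:monotonic_time()}).

%% Most recently successful pool first; unknown pools sort last.
ordered_pools(Pools) ->
    Keyed = [{last_success(P), P} || P <- Pools],
    [P || {_T, P} <- lists:reverse(lists:keysort(1, Keyed))].

last_success(PoolPid) ->
    case ets:lookup(pool_last_success, PoolPid) of
        [{_, T}] -> T;
        []       -> 0
    end.
```

This is only a heuristic (a recently successful pool may have since drained), but it needs no extra messages to the pool gen_servers.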
If one addresses this, I guess the walking of gen_servers in

```erlang
{PoolPid, Rest} = extract_nth(Idx, Pools),
take_first_pool([PoolPid | Rest])
```
would be improved as well. I don't have an urgent need for this, but I would definitely be interested in discussing a solution and implementing it.