
[BUG]: Cannot lease UID as the limit has reached.

Open kuberxy opened this issue 1 year ago • 1 comment

What version of Dgraph are you using?

Dgraph version   : v21.03.2
Dgraph codename  : rocket-2
Dgraph SHA-256   : 00a53ef6d874e376d5a53740341be9b822ef1721a4980e6e2fcb60986b3abfbf
Commit SHA-1     : b17395d33
Commit timestamp : 2021-08-26 01:11:38 -0700
Branch           : HEAD
Go version       : go1.16.2
jemalloc enabled : true

Tell us a little more about your go-environment?

No response

Have you tried reproducing the issue with the latest release?

None

What is the hardware spec (RAM, CPU, OS)?

N/A

What steps will reproduce the bug?

dgraph bulk

Expected behavior and actual result.

No response

Additional information

Every week I use the command below to export the data from the production environment, then use dgraph bulk to import it into the test environment:

curl -Ss -H "Content-Type: application/json" http://192.168.3.111:8080/admin -XPOST \
  -d '{"query":"mutation { export(input: {format: \"rdf\"}) { response { code message } }}"}'

Previously this worked fine, but this week dgraph bulk failed and kept logging the following error:

E0930 10:48:45.298384   22111 xidmap.go:334] While requesting AssignUids(18446744073687285714): rpc error: code = Unknown desc = Cannot lease UID as the limit has reached. currMax:22265901
...
E0930 10:49:30.548582   22111 xidmap.go:334] While requesting AssignUids(18446744073687285714): rpc error: code = Unknown desc = Cannot lease UID as the limit has reached. currMax:22265901

Zero's limit options are the default values, i.e. "uid-lease=0; refill-interval=30s; disable-admin-http=false; "

kuberxy avatar Sep 30 '24 03:09 kuberxy

Hi @kuberxy,

It seems the maximum value of Go's uint64 type has been exhausted. That maximum is 18,446,744,073,709,551,615 in decimal (0xFFFFFFFFFFFFFFFF in hex), so roughly 18.5 quintillion nodes would have to be created on a backend to breach that threshold and reproduce this error.

So unless that is indeed the case (your RDF really contains triples pertaining to ~18.4 quintillion nodes), the likely cause is that the starting UID for nodes in your RDF is already a very large value, which doesn't leave much room for any further incremental assignment and leads to the error seen.
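One quick sanity check on the numbers in the error log (my own arithmetic, not an official diagnostic): the count in AssignUids(18446744073687285714) differs from the uint64 maximum by exactly the currMax that Zero reports, which is consistent with the loader trying to lease UIDs all the way up to the uint64 ceiling:

```shell
# max uint64                   = 18446744073709551615
# requested in AssignUids(...) = 18446744073687285714
# The difference should equal Zero's reported currMax (22265901).
python3 -c 'print(18446744073709551615 - 18446744073687285714)'
```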

  1. Can you provide more info on how UIDs are defined in the RDF, mainly the start and end UIDs?
  2. Can you run the following command on the Zero node used for the bulk load and send us the output?

     curl -s localhost:6080/state | jq | grep '"max'

If the start UID in your RDF is indeed a very large value, you may want to retry the bulk load with the --new_uids flag, which ignores the UIDs in the RDF and assigns fresh UIDs to all nodes. Alternatively, you could replace the hardcoded UIDs with blank-node identifiers (check the docs). In either case, start a new Zero before re-running the bulk loader.
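For reference, a blank-node version of the RDF would look like the sketch below. The predicate names and file paths are made up for illustration, and the bulk-load command is commented out since it needs a running Zero:

```shell
# Hypothetical sample: blank-node identifiers (_:name) instead of
# hardcoded UIDs, so the bulk loader assigns fresh UIDs on import.
cat > sample.rdf <<'EOF'
_:alice <name> "Alice" .
_:bob <name> "Bob" .
_:alice <follows> _:bob .
EOF
cat sample.rdf

# Or keep the exported UIDs but re-map them at load time
# (paths and the Zero address below are placeholders):
#   dgraph bulk -f g01.rdf.gz -s g01.schema.gz --zero localhost:5080 --new_uids
```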

Thanks!

rarvikar avatar Oct 01 '24 14:10 rarvikar

Hi @kuberxy, I'm closing this issue since this seems to be a config problem and not a bug. Feel free to re-open whenever you get a chance to review the questions from my previous note. Thanks!

rarvikar avatar Oct 24 '24 12:10 rarvikar