
Log files fill up quickly because of a warning/error about an existing Mongo record.

Open jbiancot opened this issue 6 years ago • 8 comments

Hi there,

We are running Learning Locker 3.17 on Debian 8. We have noticed a constant entry in our log files about a duplicate key error: collection: learninglocker_v2.fullActivities index: organisationId_1_lrsId_1_id_1 dup key: { : null, : null, : null }

Is there any way we can solve this issue, maybe by safely deleting the record from Mongo?

worker_stderr-2.log.txt

xapi_stderr-4.log.txt

jbiancot avatar Jan 15 '20 15:01 jbiancot

Hi @jbiancot, thanks for the xAPI logs. Could you please run db.fullActivities.getIndexes() in your Mongo shell to find the indexes of that collection? That update operation really shouldn't throw that error, so I'm wondering if there's an incorrect index. I'm not sure the worker errors are related, but we can confirm that by fixing those xAPI errors.

ryasmi avatar Jan 16 '20 11:01 ryasmi

Hi @ryansmith94, here is the output of the indexes for fullActivities:

db.fullActivities.getIndexes();

[
	{
		"v" : 2,
		"key" : {
			"_id" : 1
		},
		"name" : "_id_",
		"ns" : "learninglocker_v2.fullActivities"
	},
	{
		"v" : 2,
		"unique" : true,
		"key" : {
			"organisationId" : 1,
			"lrsId" : 1,
			"id" : 1
		},
		"name" : "organisationId_1_lrsId_1_id_1",
		"ns" : "learninglocker_v2.fullActivities"
	}
]

jbiancot avatar Jan 16 '20 14:01 jbiancot

If I do a count, it shows 1070 documents ("records").

jbiancot avatar Jan 16 '20 14:01 jbiancot

Hi @jbiancot, I'm not sure how that last index got added, but it's not correct according to our migration code or our documentation, and it's that index that is causing those errors.

ryasmi avatar Jan 16 '20 15:01 ryasmi

Hi @ryansmith94, are you saying that we have to get rid of one of the indexes? The second one?

I see this in your documentation:

db.fullActivities.createIndex({organisation:1, lrs_id: 1, activityId:1}, {unique: true, background:true});

Does this mean the organisationId, lrsId, and id index has to be removed? Please confirm.

jbiancot avatar Jan 16 '20 16:01 jbiancot

Yeah, you need to remove that second index you have and create the one from the documentation. I'm a little concerned that you may have other incorrect indexes; it's unclear how your incorrect index was created.
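A sketch of what that might look like in the Mongo shell, using the collection and index names quoted in this thread (verify against your own getIndexes() output before dropping anything):

```javascript
// Drop the incorrect unique index reported by getIndexes().
db.fullActivities.dropIndex("organisationId_1_lrsId_1_id_1");

// Recreate the index with the field names from the documentation.
db.fullActivities.createIndex(
  { organisation: 1, lrs_id: 1, activityId: 1 },
  { unique: true, background: true }
);
```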

ryasmi avatar Jan 17 '20 09:01 ryasmi

@ryansmith94 Hi, I dropped both indexes, just in case. I created the new index based on the LRS documentation, but I am getting an error, which is the same error shown in the log files.

db.fullActivities.createIndex({organisation:1, lrs_id: 1, activityId:1}, {unique: true, background:true});

{
  "operationTime": Timestamp(1579280433, 1),
  "ok": 0,
  "errmsg": "E11000 duplicate key error collection: learninglocker_v2.fullActivities index: organisation_1_lrs_id_1_activityId_1 dup key: { : null, : null, : null }",
  "code": 11000,
  "codeName": "DuplicateKey",
  "$clusterTime": {
    "clusterTime": Timestamp(1579280433, 1),
    "signature": {
      "hash": BinData(0, "AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
      "keyId": NumberLong(0)
    }
  }
}

db.fullActivities.getIndexes()

[
  {
    "v": 2,
    "key": {
      "_id": 1
    },
    "name": "_id_",
    "ns": "learninglocker_v2.fullActivities"
  }
]

How can I get rid of those records?

jbiancot avatar Jan 17 '20 17:01 jbiancot

Yeah, the index wasn't created because duplicates already exist in the collection; they got in because the index was incorrect before. If you're not using the xAPI Activity Profiles API, you may find it easiest to remove all records from that collection, create the index, and let the collection build back up naturally as you insert new statements.
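If you go that route, a minimal Mongo shell sketch (this assumes you have confirmed you don't need the existing documents; take a backup with mongodump first):

```javascript
// Remove all documents so the unique index can be built cleanly.
db.fullActivities.deleteMany({});

// Recreate the unique index from the documentation.
db.fullActivities.createIndex(
  { organisation: 1, lrs_id: 1, activityId: 1 },
  { unique: true, background: true }
);
```

Alternatively, if you want to keep most of the data, you could first inspect the documents with null key fields that are blocking the index build, for example with db.fullActivities.find({ organisation: null }), and remove only those.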

ryasmi avatar Jan 20 '20 10:01 ryasmi