Josip Ćavar

Results 82 comments of Josip Ćavar

Here are a few that we encountered (not necessarily directly related to AudioKit, though):

- http://openradar.appspot.com/radar?id=5598760801402880
- http://openradar.appspot.com/radar?id=5588189343383552
- http://openradar.appspot.com/radar?id=5490575180562432
- http://openradar.appspot.com/radar?id=5577908240252928
- http://openradar.appspot.com/radar?id=5575719308492800
- http://openradar.appspot.com/radar?id=5583374349500416
- http://openradar.appspot.com/radar?id=5614107591966720
- http://openradar.appspot.com/radar?id=5535281184768000...

Sorry, it's not in that commit (that one is for `save`), but in this one: https://github.com/soffes/SAMKeychain/commit/a7ea45c86a1e7f75599ad9a541fbab84bb111873

Ok, I understand the background now, but my problem is that the library we are using adds keychain items with its own `accessibilityType`. So I simply want to remove all keychain items...
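
For what it's worth, dropping below SAMKeychain to the Security framework makes "remove everything" independent of `accessibilityType`. A minimal sketch (this is not SAMKeychain's API; the helper name is mine, and it only covers generic-password items):

```
#import <Foundation/Foundation.h>
#import <Security/Security.h>

// Sketch: delete every generic-password item the app can see, regardless of
// the accessibility attribute it was originally saved with. On iOS a single
// SecItemDelete call removes all items matching the query.
static void DeleteAllGenericPasswordItems(void) {
    NSDictionary *query = @{ (__bridge id)kSecClass : (__bridge id)kSecClassGenericPassword };
    OSStatus status = SecItemDelete((__bridge CFDictionaryRef)query);
    if (status != errSecSuccess && status != errSecItemNotFound) {
        NSLog(@"SecItemDelete failed with status %d", (int)status);
    }
}
```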

Related: https://github.com/soffes/SAMKeychain/pull/184/, https://github.com/soffes/SAMKeychain/pull/77

An in-memory solution could look something like this:

```
- (NSMutableDictionary *)fetchExistingObjectsForMapping:(FEMMapping *)mapping {
    NSSet *lookupValues = _lookupKeysMap[mapping.entityName];
    if (lookupValues.count == 0) return [NSMutableDictionary dictionary];

    NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:mapping.entityName];
    ...
```

Or maybe we don't even filter objects and just add everything to the cache? This may cause memory issues, but as long as the objects are faults it should be fine?
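
To illustrate the faulting point, a sketch of fetching everything while keeping the objects as lightweight faults (the entity name and `context` are hypothetical placeholders):

```
#import <CoreData/CoreData.h>

// Sketch: fetch every object of an entity as a fault, so row data is not
// pulled into memory until a property is actually accessed.
NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Entity"];
request.returnsObjectsAsFaults = YES; // the default; stated for emphasis
request.includesPropertyValues = NO;  // keep property data out of the row cache too
NSError *error = nil;
NSArray *objects = [context executeFetchRequest:request error:&error];
```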

Yes, you are actually right. This must be due to the custom store we are using, then. Sorry for bothering you, I should have checked with a clean project first. I will...

Ok, it seems that the issue reproduces with a much bigger number: 500001. This was found here: http://sqlite.1065341.n5.nabble.com/SQLITE-MAX-VARIABLE-NUMBER-td43457.html It is a compile-time constant, and in our use case it is set to...
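
Since the constant can differ between SQLite builds, it may be worth reading the effective limit at runtime rather than assuming it. A sketch (the helper name is mine):

```
#import <Foundation/Foundation.h>
#import <sqlite3.h>

// Sketch: read the effective bound-variable limit at runtime instead of
// guessing what SQLITE_MAX_VARIABLE_NUMBER was set to at compile time.
static int CurrentVariableLimit(void) {
    sqlite3 *db = NULL;
    int limit = -1;
    if (sqlite3_open(":memory:", &db) == SQLITE_OK) {
        // A negative newVal leaves the limit unchanged and returns its value.
        limit = sqlite3_limit(db, SQLITE_LIMIT_VARIABLE_NUMBER, -1);
    }
    sqlite3_close(db); // harmless no-op if db is NULL
    return limit;
}
```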

Yeah, I think FEM should definitely expose an error if it happens and stop processing. That is actually a very good point; we should divide our data into reasonable chunks and increase...
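
As a sketch of the chunking idea (this is not FEM's implementation, and all the names below are hypothetical), fetching in batches keeps each statement's bound-variable count under the limit:

```
#import <CoreData/CoreData.h>

// Sketch: fetch objects whose lookup key matches any value in `values`,
// in chunks small enough to stay under SQLITE_MAX_VARIABLE_NUMBER
// (999 by default in older SQLite builds).
static NSArray *FetchInChunks(NSManagedObjectContext *context,
                              NSString *entityName,
                              NSString *lookupKey,
                              NSArray *values) {
    static const NSUInteger kChunkSize = 500; // conservative, below the default limit
    NSMutableArray *results = [NSMutableArray array];
    for (NSUInteger offset = 0; offset < values.count; offset += kChunkSize) {
        NSRange range = NSMakeRange(offset, MIN(kChunkSize, values.count - offset));
        NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:entityName];
        request.predicate = [NSPredicate predicateWithFormat:@"%K IN %@",
                             lookupKey, [values subarrayWithRange:range]];
        NSArray *fetched = [context executeFetchRequest:request error:NULL];
        if (fetched) [results addObjectsFromArray:fetched];
    }
    return [results copy];
}
```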