Looking for a local-app performance improvement: ignore rows while parsing
I am using csvtojson to convert a file of current COVID-19 virus stats, and it works great, but I only use maybe 2% of the data. I need to filter on a column (country = 'France') and, when parsing is done, have an array of the row objects that match that filter.
raw data
26/05/2020,26,5,2020,358,65,France,FR,FRA,66987244,Europe
25/05/2020,25,5,2020,115,35,France,FR,FRA,66987244,Europe
25/05/2020,25,5,2020,-372,-1918,Spain,ES,ESP,46723749,Europe
24/05/2020,24,5,2020,482,74,Spain,ES,ESP,46723749,Europe
Desired structure after parsing the full file:
location_data['France'] = [ row 1 obj, row 2 obj ]
location_data['Spain'] = [ row 3 obj, row 4 obj ]
I also want to filter on the date column (only rows dated after 10/03, dd/mm); a rough sketch of the comparison I have in mind is further down.
I think I can do that with the preFileLine hook, but there is no need to present the resulting JSON object in the output, and I don't see a mechanism in the stream to say "skip this row".
I guess I could just not use the output results; that saves me having to go through ALL the data again.
As an example, I might only need 400 of the current 19,500 lines.
Any other ideas are welcome.
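For the date cut-off, this is roughly the comparison I have in mind (a quick sketch; the cut-off year and the helper name are just placeholders, since dd/mm/yyyy strings don't sort lexicographically):

// rough sketch: parse dd/mm/yyyy manually and compare as a Date
// the 2020 cut-off year and the helper name are placeholders for this example
const CUTOFF = new Date(2020, 2, 10)   // months are 0-based, so 2 = March

function isAfterCutoff (ddmmyyyy) {
  const [d, m, y] = ddmmyyyy.split('/').map(Number)
  return new Date(y, m - 1, d) > CUTOFF
}

// isAfterCutoff('26/05/2020') === true, isAfterCutoff('01/03/2020') === false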
Update: using subscribe and building my keyed array works.
I just ignore the final result and use my own object instead.
let location_data = {}
location_data['France'] = []   // one empty array per location we keep
location_data['Spain'] = []

// ...the csv().fromFile(...) converter chain goes here (omitted above)...
  .subscribe((jsonObj, index) => {
    // if this object's location ('country') field value is in the list we care about
    if (payload[payload.config.type].indexOf(jsonObj[fields.location_fieldname]) >= 0) {
      // add it to the rows of data for this location
      location_data[jsonObj[fields.location_fieldname]].push(jsonObj)
    }
  })
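For completeness, here is a fuller, self-contained sketch of how the whole thing can be wired together. The header names, file path, and country list are assumptions invented for this example (the raw rows above don't show a header line), so adjust them to the real file and config:

const csv = require('csvtojson')

// assumed column names for the headerless rows shown above; adjust to the real file
const headers = ['dateRep', 'day', 'month', 'year', 'cases', 'deaths',
                 'country', 'geoId', 'countryCode', 'population', 'continent']

const wanted = ['France', 'Spain']        // locations we care about (assumption)
const cutoff = new Date(2020, 2, 10)      // keep only rows dated after 10/03/2020
const location_data = {}
wanted.forEach(c => { location_data[c] = [] })

csv({ noheader: true, headers })
  .fromFile('covid19_stats.csv')          // hypothetical path
  .subscribe(
    (row) => {
      if (!wanted.includes(row.country)) return     // skip countries we don't need
      const [d, m, y] = row.dateRep.split('/').map(Number)
      if (new Date(y, m - 1, d) <= cutoff) return   // skip rows on/before the cut-off
      location_data[row.country].push(row)          // keep this row under its country key
    },
    (err) => console.error(err),
    () => {
      // parsing finished: location_data now holds only the rows we kept
      console.log(Object.keys(location_data).map(k => `${k}: ${location_data[k].length} rows`))
    }
  )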