Multiple Instances of Electron app under Task Manager and CPU utilized is 99%
I have built an Electron app using the ASP.NET Core with React.js template. When I open the deployed app, I can see multiple instances of the app in Task Manager (screenshot: MultipleInstancesTM.PNG). Is this normal behaviour?
The app uses a local SQLite database and makes no external API requests; it's an offline app.
The Electron app starts as follows:

    Electron Socket IO Port: 8000
    Electron Socket started on port 8000 at 127.0.0.1
    ASP.NET Core Port: 8001
    stdout: Use Electron Port: 8000
A strange issue I encountered: when I open the app, it makes a local service call on port 8001 to the SQLite DB and I get a response in milliseconds. But for the same request, the response time sometimes varies from a few milliseconds to 5-10 seconds. One thing I have noticed is that when the response is slow, the app's CPU utilization goes above 95-99% (screenshot: 99%usage.PNG).
I use the "electronize build /target win" command to pack and deploy my app.
Am I missing something in the configuration? Is there a way to reduce the CPU used by the app so that I get consistent response times?
Many of those instances are auto-generated by Electron itself. If you add a different icon to the .NET Core app, you will easily tell the difference between the Electron-generated processes and the Core application process. This is not a bug or an issue within Electron.NET.
I've found that the most egregious issue for responsiveness is communication packet size. If you send large data chunks between the UI and the backend, it tends to cause a lack of responsiveness. Try to keep your packet size small and stream data in chunks where you can.
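As a minimal sketch of the chunking idea (the `sendToRenderer` delegate is a stand-in for whatever send mechanism you actually use, e.g. an `Electron.IpcMain.Send` call — it's an assumption, not Electron.NET's API):

```csharp
using System;
using System.Threading.Tasks;

static class ChunkedSender
{
    const int ChunkSize = 32 * 1024; // 32 KB per message keeps the socket responsive

    // Split one large payload into small messages instead of sending it whole.
    public static async Task SendInChunksAsync(byte[] payload, Func<byte[], Task> sendToRenderer)
    {
        for (int offset = 0; offset < payload.Length; offset += ChunkSize)
        {
            int count = Math.Min(ChunkSize, payload.Length - offset);
            var chunk = new byte[count];
            Array.Copy(payload, offset, chunk, 0, count);
            await sendToRenderer(chunk).ConfigureAwait(false);
        }
    }
}
```

The renderer side then reassembles (or progressively renders) the chunks, so no single Socket.IO message blocks the pipe.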
In the backend, responsiveness is often impacted because you don't thread enough. That is, when you get a request from the front end, spawn a task (don't await it) and let the task do the work disconnected from the spawn point. This lets you process data asynchronously, off the main communication threads. Generally you want to treat the main communication threads like a Windows Forms UI thread: any logic that COULD be costly should run on its own thread.
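A sketch of that fire-and-forget pattern, assuming an Electron.NET IPC handler (the channel names, `mainWindow`, and `ProcessRequestAsync` are hypothetical placeholders for your own code):

```csharp
using System.Threading.Tasks;
using ElectronNET.API;

// IPC handler returns immediately; the real work runs in a background task.
Electron.IpcMain.On("run-report", args =>
{
    // Deliberately not awaited: the communication thread stays free.
    _ = Task.Run(async () =>
    {
        var result = await ProcessRequestAsync(args).ConfigureAwait(false);
        // Notify the renderer when the work is done.
        Electron.IpcMain.Send(mainWindow, "run-report-done", result);
    });
});
```

The discard (`_ =`) makes the intent explicit; just be sure to catch exceptions inside the task, since nothing is awaiting it.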
Also, you're running on someone else's machine, not a server, so partitioning big chunks of data processing is better than trying to do the whole thing at once. There are many competitors for your CPU cycles. You can build delays into big data loops to let other things have some processing power: on a large data loop, put an await Task.Delay(10 /* milliseconds */).ConfigureAwait(false); at the end of the loop to give other threads some computing power or interrupt capability. I usually use this method for large file imports where I'm writing a lot of data to the SQLite DB or something like that.
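For example, a large SQLite import loop with that yield built in (a sketch — `Row`, `InsertRowAsync`, and the batch size of 500 are assumptions to tune for your own workload):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Data.Sqlite;

// Import a large data set without pinning the CPU: yield briefly
// every N rows so other processes and threads get a time slice.
async Task ImportRowsAsync(IEnumerable<Row> rows, SqliteConnection db)
{
    int i = 0;
    foreach (var row in rows)
    {
        await InsertRowAsync(db, row).ConfigureAwait(false); // hypothetical insert helper

        if (++i % 500 == 0)
            await Task.Delay(10 /* milliseconds */).ConfigureAwait(false);
    }
}
```

Wrapping each batch of 500 inserts in a single transaction would also cut SQLite overhead considerably, which helps the CPU spikes as much as the delay does.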