[Monitor] Improve Ingestion compression logic
The ingestion service has a 1MB upload size limit. Currently we accumulate log entries until we reach 1MB of *raw* data, and only then compress the batch. As a result, we upload only a fraction of what each request could carry (e.g. 1MB of raw data compresses down to ~200KB, so the payload is 200KB and most of the space under the limit goes unused).
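For context, the current flow looks roughly like this (a simplified sketch with hypothetical names, not the actual service code):

```python
import zlib

MAX_UPLOAD_BYTES = 1024 * 1024  # 1MB service upload limit

def build_payload(entries):
    """Current approach: cap the *raw* batch at 1MB, then compress."""
    batch, raw_size = [], 0
    for entry in entries:
        encoded = entry.encode("utf-8")
        if raw_size + len(encoded) > MAX_UPLOAD_BYTES:
            break  # remaining entries spill into the next request
        batch.append(encoded)
        raw_size += len(encoded)
    # Compression happens only after batching, so the payload lands
    # well under the limit (e.g. ~200KB for 1MB of raw logs).
    return zlib.compress(b"".join(batch))
```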
We should investigate how to improve this. As a starting point, we could leverage heuristics or assume a minimum compression ratio (see the sketch below). We should also check whether on-the-fly (streaming) compression is possible with the current library (zlib).
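A minimal sketch of the minimum-compression-ratio heuristic; the 5:1 ratio here is an assumption for illustration, not a measured value:

```python
import zlib

MAX_UPLOAD_BYTES = 1024 * 1024
ASSUMED_MIN_RATIO = 5  # assumption: log data compresses at least 5:1

def build_payload_heuristic(entries):
    """Batch up to ASSUMED_MIN_RATIO * 1MB of raw data, then compress."""
    batch, raw_size = [], 0
    for entry in entries:
        encoded = entry.encode("utf-8")
        if raw_size + len(encoded) > MAX_UPLOAD_BYTES * ASSUMED_MIN_RATIO:
            break
        batch.append(encoded)
        raw_size += len(encoded)
    payload = zlib.compress(b"".join(batch))
    if len(payload) > MAX_UPLOAD_BYTES:
        # Ratio assumption failed for this batch; we'd need to split
        # it and retry (fallback not shown in this sketch).
        raise ValueError("compressed payload exceeds limit; split batch")
    return payload
```

Note that zlib does support incremental compression via `zlib.compressobj()`, so on-the-fly compression is at least possible with the current library.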
We could potentially adopt the progressive compression approach outlined in this comment: https://github.com/Azure/azure-sdk-for-python/issues/29563#issuecomment-1483650268
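A minimal sketch of what a progressive approach could look like with zlib's incremental API. The rollback-via-`copy()` cutoff logic is an assumption for illustration, not necessarily the approach from the linked comment verbatim:

```python
import zlib

MAX_UPLOAD_BYTES = 1024 * 1024
# Reserve a little headroom for the trailer bytes emitted by the final flush.
LIMIT = MAX_UPLOAD_BYTES - 64

def build_payload_progressive(entries):
    """Feed entries through a streaming compressor, tracking the
    compressed size as we go, and stop just before the limit."""
    compressor = zlib.compressobj()
    payload, consumed = b"", 0
    for entry in entries:
        # Tentatively compress into a copy so we can roll back if this
        # entry would push the payload over the limit.
        trial = compressor.copy()
        out = trial.compress(entry.encode("utf-8"))
        # Z_SYNC_FLUSH forces pending output so the running total is accurate.
        out += trial.flush(zlib.Z_SYNC_FLUSH)
        if len(payload) + len(out) > LIMIT:
            break  # remaining entries go into the next payload
        compressor, payload, consumed = trial, payload + out, consumed + 1
    payload += compressor.flush()  # finalize the zlib stream
    return payload, consumed  # consumed = entries included in this payload
```

One trade-off to weigh: copying the compressor state per entry adds overhead, and the frequent sync flushes cost some compression ratio, so we'd want to benchmark this against the heuristic approach above.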