I have anywhere from 10-150 long-lived class objects whose methods perform simple HTTPS API calls using HttpClient. Example of a PUT call:
using (HttpClientHandler handler = new HttpClientHandler())
{
    handler.UseCookies = true;
    handler.CookieContainer = _Cookies;

    // disposeHandler: true, so disposing the client also disposes the handler
    using (HttpClient client = new HttpClient(handler, true))
    {
        client.Timeout = new TimeSpan(0, 0, (int)(SettingsData.Values.ProxyTimeout * 1.5));
        client.DefaultRequestHeaders.TryAddWithoutValidation("User-Agent", Statics.UserAgent);

        try
        {
            using (StringContent sData = new StringContent(data, Encoding.UTF8, contentType))
            using (HttpResponseMessage response = await client.PutAsync(url, sData))
            using (HttpContent content = response.Content)
            {
                ret = await content.ReadAsStringAsync();
            }
        }
        catch (ThreadAbortException)
        {
            throw;
        }
        catch (Exception ex)
        {
            LastErrorText = ex.Message;
        }
    }
}
After 2-3 hours of running these methods, which include proper disposal via using statements, the program has crept to 1 GB-1.5 GB of memory and eventually crashes with various out-of-memory errors. Many of the connections go through unreliable proxies, so they often do not complete as expected (timeouts and other errors are common).
.NET Memory Profiler has indicated that HttpClientHandler is the main issue here, flagging it with both 'Disposed instances with direct delegate roots' (red exclamation mark) and 'Instances that have been disposed but are still not GCed' (yellow exclamation mark). The delegates the profiler shows as rooted are AsyncCallbacks stemming from HttpWebRequest. It may also relate to RemoteCertValidationCallback, something to do with HTTPS certificate validation, as the TlsStream is an object further down the root chain that is 'Disposed but not GCed'.
With all this in mind, how can I more correctly use HttpClient and avoid these memory issues? Should I force a GC.Collect() every hour or so? I know that is considered bad practice, but I don't know how else to reclaim memory that isn't being properly released, and a better usage pattern for these short-lived objects isn't apparent to me, as this seems to be a flaw in the .NET objects themselves.
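To be concrete, the kind of periodic forced collection I have in mind is just a timer along these lines (the hourly interval is arbitrary; this is only a sketch):

    // Sketch of a periodic forced collection - the hourly interval is arbitrary.
    private readonly System.Threading.Timer _gcTimer = new System.Threading.Timer(
        _ =>
        {
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect(); // second pass so finalized objects are actually reclaimed
        },
        null,
        TimeSpan.FromHours(1),   // first run after an hour
        TimeSpan.FromHours(1));  // then every hour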
UPDATE
Forcing GC.Collect() had no effect.
Total managed bytes for the process stay at around 20-30 MB at most, while the process's overall memory (in Task Manager) continues to climb, so this usage pattern is creating an unmanaged memory leak.
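For reference, the comparison I'm describing is roughly this (an illustrative check only, not how the profiler measures anything):

    // Managed heap total stays small while the process working set keeps growing.
    long managedBytes = GC.GetTotalMemory(true); // forces a collection first
    long workingSet = System.Diagnostics.Process.GetCurrentProcess().WorkingSet64;

    Console.WriteLine("Managed heap: {0} MB, Working set: {1} MB",
        managedBytes / (1024 * 1024), workingSet / (1024 * 1024));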
I have tried creating class-level instances of both HttpClient and HttpClientHandler per the suggestion, but this has had no appreciable effect. Even at class level they are still re-created and seldom re-used, because the proxy settings often have to change. HttpClientHandler does not allow its proxy settings (or any other properties) to be modified once a request has been initiated, so I am constantly re-creating the handler, just as I originally did with the independent using statements. Roughly what I tried is sketched below.
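This is a simplified sketch of the class-level pattern I tried; RebuildClient and the proxy parameter are just names for this sketch, while _Cookies, SettingsData and Statics are the same objects used in the snippet above:

    private HttpClientHandler _handler;
    private HttpClient _client;

    private void RebuildClient(IWebProxy proxy)
    {
        // Handler properties can't change after the first request, so the old
        // client (and its handler, via disposeHandler: true) is thrown away.
        if (_client != null)
            _client.Dispose();

        _handler = new HttpClientHandler
        {
            UseCookies = true,
            CookieContainer = _Cookies,
            Proxy = proxy,
            UseProxy = proxy != null
        };

        _client = new HttpClient(_handler, true)
        {
            Timeout = new TimeSpan(0, 0, (int)(SettingsData.Values.ProxyTimeout * 1.5))
        };
        _client.DefaultRequestHeaders.TryAddWithoutValidation("User-Agent", Statics.UserAgent);
    }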
HttpClientHandler is still being disposed with "direct delegate roots" to AsyncCallback -> HttpWebRequest. I'm starting to wonder if HttpClient just wasn't designed for fast requests and short-lived objects. No end in sight; hoping someone has a suggestion to make the use of HttpClientHandler viable.
Memory profiler shots: