Large memory usage slows down unrelated code

I am maintaining the code for a Go project that reads and writes a lot of data and has done so successfully for some time. Recently I made a change: a CSV file with about 2 million records is now loaded into a map with struct values at the start of the program. This map is only used in part B, but part A runs first, and that part is now noticeably slower than before (processing time has quadrupled). That is very strange, since that part of the logic did not change. A stripped-down sketch of the setup follows the list below. I have spent a week trying to explain how this can happen. Here are the steps I have taken (whenever I mention performance, I refer to part A, which does not include the time to load the data into memory and actually has nothing to do with it):

  • The program was running on a server inside a Docker container, but I have been able to reproduce it on my laptop without a container: performance indeed drops compared to running without the data from the file loaded in memory.
  • The server has a huge amount of RAM. Obviously more memory is used when the file is loaded, but no limits are hit. I also did not see spikes or other strange patterns in memory usage or disk I/O. For these checks I used pprof, htop and iotop.
  • When the data is loaded but the map is then set to nil, performance is OK again.
  • Loading the data into a slice instead of a map reduces the slowdown from 4x to 2x (but memory usage is roughly the same as with the map).
  • This made me wonder whether the map/slice is accessed somewhere in part A, even though it shouldn’t be. The map is stored in a field of a struct type. I checked, and this struct is always passed by pointer (including to all goroutines). Making the map a global variable instead of a field on that struct did not solve the issue.
  • There is one dependency outside of the standard library. Could the problem be caused by that library? It forces some garbage collections; disabling those does not make a difference. I also found a similar but unrelated library, and using it as a replacement improves performance, but part A still takes longer when the data from the file is loaded.
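
To make the structure concrete, here is a stripped-down sketch of the program. The names and struct fields are made up; the real code is much bigger, but the shape is the same:

    package main

    import (
        "encoding/csv"
        "os"
    )

    // record is a stand-in for the real struct; the actual fields are not shown here.
    type record struct {
        ID    string
        Value string
    }

    type app struct {
        data map[string]record // ~2 million entries loaded from the CSV up front
    }

    // loadCSV reads the whole file at once and keys the map by the first column.
    func loadCSV(path string) map[string]record {
        f, err := os.Open(path)
        if err != nil {
            panic(err)
        }
        defer f.Close()

        rows, err := csv.NewReader(f).ReadAll()
        if err != nil {
            panic(err)
        }
        m := make(map[string]record, len(rows))
        for _, row := range rows {
            m[row[0]] = record{ID: row[0], Value: row[1]}
        }
        return m
    }

    func (a *app) partA() { /* heavy processing that never touches a.data */ }
    func (a *app) partB() { /* the only consumer of a.data */ }

    func main() {
        a := &app{data: loadCSV("records.csv")} // load happens before anything else
        a.partA() // this phase is now ~4x slower, even though it ignores a.data
        a.partB()
    }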

Here I have plotted the metrics with and without the data in memory: [plot]

What could cause this effect, and how can I find out?

So if I get this right, your flow looks something like this:

  1. Read 2 million rows from CSV into map -> struct
  2. Run part A (which doesn't need data from CSV)
  3. Run part B, using data from CSV

Why read the data before you need it would be the first question, but that's perhaps beside the point.

What is likely happening is that the 2 million structs in the map are routinely being scanned by the garbage collector. Depending on the value of GOGC, the pacer component of the garbage collector kicks in more often as the amount of allocated memory increases, and if the struct values contain pointers (strings, slices, nested structures), the collector has to traverse every entry during each mark phase. Because this map is set aside for later use there is nothing for the GC to actually free, but it spends cycles checking the data regardless. There are a number of things you could do to verify and account for this behaviour; all of them should help you rule out or confirm whether garbage collection is slowing you down.

  • Profile the code (obviously important for diagnostics); IIRC the CPU profile shows GC work quite readily. A quick check using the runtime's own GC counters is sketched after this list.
  • Try disabling garbage collection (debug.SetGCPercent(-1)) and see whether the slowdown in part A disappears.
  • Store the map in a sync.Pool. This is a type designed for values you manage yourself, taking some pressure off the regular GC cycles.
  • Only read the CSV when you actually need it; don't load it before "part A" runs.
  • Stream the file instead of reading it into one massive map. With 2 million rows, what do you gain by holding all of it in memory rather than processing it record by record? (A rough sketch follows this list.)
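
For the first two points, a quick way to see whether the collector really runs more often during part A is to compare the runtime's GC counters before and after that phase. This is only a sketch (partA is a placeholder for your real code), but the MemStats fields are the standard-library ones:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        var before, after runtime.MemStats
        runtime.ReadMemStats(&before)

        partA() // the phase that became slow

        runtime.ReadMemStats(&after)
        fmt.Printf("GC cycles during part A: %d\n", after.NumGC-before.NumGC)
        fmt.Printf("GC pause time during part A: %d ns\n", after.PauseTotalNs-before.PauseTotalNs)
        fmt.Printf("fraction of CPU spent in GC so far: %.4f\n", after.GCCPUFraction)
        // Running the program with GODEBUG=gctrace=1 prints a line per collection
        // and shows the same information without any code changes.
    }

    func partA() { /* placeholder for the real work */ }

If the counts barely change between the runs with and without the loaded map, the GC theory is wrong and you need to look elsewhere.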
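And for the last point: if part B can handle the records one at a time, streaming keeps the heap small and leaves the GC almost nothing to scan. A rough sketch with encoding/csv; the callback shape is made up, the real signature depends on what part B needs:

    package main

    import (
        "encoding/csv"
        "fmt"
        "io"
        "os"
    )

    // forEachRecord streams the CSV and hands each row to fn, so only one
    // record is live at a time instead of 2 million map entries.
    func forEachRecord(path string, fn func(row []string) error) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        r := csv.NewReader(f)
        for {
            row, err := r.Read()
            if err == io.EOF {
                return nil
            }
            if err != nil {
                return err
            }
            if err := fn(row); err != nil {
                return err
            }
        }
    }

    func main() {
        err := forEachRecord("records.csv", func(row []string) error {
            // part B's per-record work would go here
            return nil
        })
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }

Even if part B needs random access, a smaller pointer-free representation of the data would give the collector far less to traverse than a map of 2 million structs.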