My story is similar to nield's; I worked on a project that used the Boehm-Weiser collector in a 32-bit process.
Once or twice a year, we'd have unbounded memory growth to trace. Most of the time we were able to solve it by supplying explicit layouts for structures that mixed pointers with pointer-like data. The collector's debug facility for printing a random backtrace from an object to its root was helpful in those cases.
However, more than once we had a leak that was beyond our ability to trace, despite a significant amount of diagnostic work. We found some workarounds specific to our app, which helped.
In the end, the costs to us were high enough that we rewrote our codebase in C# (our software was Windows-only). It was pretty much a line-for-line conversion, but it worked and we no longer had any memory leaks (except for one, in .NET timers, that took only minutes to trace using readily-available diagnostic tools).
I do believe a 64-bit process using the Boehm collector would be much less likely to suffer these issues, but this was before x64 was prevalent in corporate server rooms.
On your second question: crafted input can indeed cause significant memory retention on a 32-bit machine, with almost any text encoding, if you aren't careful to exclude user input from the GC-scanned heap and you have large cyclic structures awaiting collection.
Correct me if I'm wrong, but the GC isn't the only thing that frees memory, is it? You can still free it manually (and are supposed to), so those leaks would have happened without a GC too, and it can only help things.
I'm getting the impression that posts here imply you're better off without a GC; I'm not sure if that's the intention, but it strikes me as wrong.
Your parent post spoke about going from one GC to another and having the problems disappear. Hence, it only implies that changing GC can have a useful effect.
Your message does betray some confusion about garbage collection. The basic principle of GC is that you do not deallocate RAM manually (and, under most GCs, cannot), and programs written under it expect deallocation to happen automatically. You are not 'supposed to' free manually, as that would be wasted effort, and some standard configurations of the Boehm collector actually ignore all calls to free().
There is a very useful description of garbage collection at http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047... which includes and expands on the useful mental model that "garbage collection is simulating a computer with an infinite amount of memory".