Encode32 vs Encode64

Dec 5, 2013 at 3:29 PM
I am looking at the code now, specifically the LZ4n project, and I am trying to understand why you have 32 and 64 versions.
A quick scan didn't show anything there that is hard-coded for a specific bitness.

Is it a matter of optimization?
Coordinator
Dec 6, 2013 at 3:50 PM
Edited Dec 6, 2013 at 4:34 PM
Yes, performance only. For example, one uses 'uints' and the other 'ulongs'; that's enough to produce different performance results (the output is binary-identical) under different host processes (x86/x64).
See: https://lz4net.codeplex.com/wikipage?title=Performance%20Testing&referringTitle=Documentation
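Roughly, the difference looks like this (an illustrative sketch of the word-size idea, not the actual lz4net code; compile with /unsafe):

class WordSizeSketch
{
    // Moves 4 bytes per iteration using a 32-bit word; fast in an x86 process.
    static unsafe void Copy32(byte* src, byte* dst, int count)
    {
        while (count >= 4) { *(uint*)dst = *(uint*)src; src += 4; dst += 4; count -= 4; }
        while (count-- > 0) *dst++ = *src++; // remaining tail bytes
    }

    // Same loop, but moves 8 bytes per iteration using a 64-bit word;
    // tends to win in an x64 process. Both produce identical output.
    static unsafe void Copy64(byte* src, byte* dst, int count)
    {
        while (count >= 8) { *(ulong*)dst = *(ulong*)src; src += 8; dst += 8; count -= 8; }
        while (count-- > 0) *dst++ = *src++; // remaining tail bytes
    }
}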
Dec 7, 2013 at 5:35 PM
Oh, that is great. I am focusing on 64 bits only, so as long as it can also run (even somewhat inefficiently) on 32 bits, I'm covered.

Now, I have a pretty strange need. I need to be able to compress data that isn't sitting in one contiguous memory region.
For example:
var first = malloc(4096 * 12);
var second = malloc(4096 * 16);
var third = malloc(4096 * 8);

Now I need to compress all three buffers (which together compose a single logical value), but they don't follow one another in memory.
Any suggestions on how to do that?

Currently, I have to copy them into a single large buffer, but I would really like to avoid having to do that.
Coordinator
Dec 9, 2013 at 7:41 AM
You need to put them in a contiguous block of memory or compress them separately; tracking scattered buffers inside the compressor would mean too much pointer arithmetic.
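For reference, a sketch of both options with managed buffers (assuming lz4net's LZ4Codec.Wrap one-shot helper; any of the Encode32/Encode64 overloads would work the same way):

using System;
using LZ4;

class ScatteredBuffers
{
    // Option 1: gather into one contiguous buffer, compress once.
    // Costs a copy, but LZ4 can find matches across all three buffers.
    static byte[] CompressJoined(byte[] first, byte[] second, byte[] third)
    {
        var joined = new byte[first.Length + second.Length + third.Length];
        Buffer.BlockCopy(first, 0, joined, 0, first.Length);
        Buffer.BlockCopy(second, 0, joined, first.Length, second.Length);
        Buffer.BlockCopy(third, 0, joined, first.Length + second.Length, third.Length);
        return LZ4Codec.Wrap(joined);
    }

    // Option 2: compress each buffer on its own. Copy-free, but matches
    // cannot cross buffer boundaries, so the ratio may suffer a little.
    static byte[][] CompressSeparately(params byte[][] buffers)
    {
        var result = new byte[buffers.Length][];
        for (var i = 0; i < buffers.Length; i++)
            result[i] = LZ4Codec.Wrap(buffers[i]);
        return result;
    }
}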