Quote:
Nadav S. wrote:
Hi
I noticed there are two popular ways of writing intrinsics for moving data into ymm registers. I'll use a simple vector addition example to clarify my question. Assuming a[], b[], c[] are three aligned memory buffers, I would like to do "c[] = a[] + b[]".
First option, use pointers:
__m256* vecAp = (__m256*)a;
__m256* vecBp = (__m256*)b;
__m256* vecCp = (__m256*)c;
for (int i=0; i < ARR_SIZE ; i+=8)
{
*vecCp = _mm256_add_ps(*vecAp, *vecBp);
vecAp++;
vecBp++;
vecCp++;
}
Second option, use _mm256_load_ps():
for (int i=0; i < ARR_SIZE ; i+=8)
{
__m256 vecA = _mm256_load_ps(&a[i]);
__m256 vecB = _mm256_load_ps(&b[i]);
__m256 res = _mm256_add_ps(vecA,vecB);
_mm256_store_ps(&c[i],res);
}
My question is: which of the above options is better? They both compile and work, and in this simple example they give similar performance.
Thanks
The 2nd option is generally slightly faster because there is a single induction variable instead of four (though a good compiler may well replace the four increments with a simplified construct). It will not show in your timings (whatever the measurement methodology) in this particular example, since it is clearly cache-bandwidth bound: L1D bound on Sandy Bridge/Ivy Bridge, mostly L2 bound on Haswell. If you have a lot of LLC misses it will be even worse, since you'll end up bound by system memory bandwidth.
The only potential optimization I see in this example is to replace the 256-bit moves with 128-bit moves (I know it's pretty counterintuitive). It gives a nice speedup (around 10%) on Ivy Bridge and Sandy Bridge for some working-set sizes, particularly with a high L1D miss rate but a low LLC miss rate. A sensible choice may be to use 128-bit moves for your AVX code path and 256-bit moves for an alternate, future-proof AVX2 path, since Haswell handles 256-bit moves much better, including unaligned moves.
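For illustration, a minimal sketch of what that split-move variant could look like (assuming the same a/b/c buffers and ARR_SIZE from your example, with ARR_SIZE a multiple of 8); the compiler will typically fuse each _mm_load_ps into a vinsertf128 with a memory operand, which is still a 128-bit load:
{ // split-move variant: 128-bit loads/stores, 256-bit arithmetic
    const float *va = a, *vb = b;
    float *vc = c;
    for (int i = 0; i < ARR_SIZE; i += 8)
    {
        // two 128-bit loads per operand instead of one 256-bit load
        __m256 vecA = _mm256_insertf128_ps(_mm256_castps128_ps256(_mm_load_ps(va + i)), _mm_load_ps(va + i + 4), 1);
        __m256 vecB = _mm256_insertf128_ps(_mm256_castps128_ps256(_mm_load_ps(vb + i)), _mm_load_ps(vb + i + 4), 1);
        __m256 res = _mm256_add_ps(vecA, vecB);
        // two 128-bit stores instead of one 256-bit store
        _mm_store_ps(vc + i, _mm256_castps256_ps128(res));
        _mm_store_ps(vc + i + 4, _mm256_extractf128_ps(res, 1));
    }
}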
Concerning the notation, I suppose it's mainly a matter of personal taste; I'd go with the second one myself if I were using intrinsics directly. Note that instead of the convoluted notation "&a[i]" you can simply write "a+i".
Also, as Jim mentioned above, it's always a good idea to restrict the scope of the pointers to help the compiler with register allocation. All in all, the best option IMO would be along these lines:
{ // as local as possible scope for all variables used in your loops
const float *va = a, *vb = b; // always use const where it applies (may help the compiler in more complex examples)
float *vc = c;
for (int i=0; i<ARR_SIZE; i+=8) // single induction variable
_mm256_store_ps(vc+i,_mm256_add_ps(_mm256_load_ps(va+i),_mm256_load_ps(vb+i)));
}
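One more hedged note: _mm256_load_ps / _mm256_store_ps assume 32-byte-aligned pointers, so if you control the allocation, one way to get that alignment is _mm_malloc / _mm_free; a hypothetical setup (ARR_SIZE value assumed) could look like:
#include <immintrin.h>

void example(void) /* hypothetical wrapper just to give the snippet a home */
{
    enum { ARR_SIZE = 4096 }; /* assumed; any multiple of 8 floats works */
    float *a = (float *)_mm_malloc(ARR_SIZE * sizeof(float), 32); /* 32-byte aligned for __m256 */
    float *b = (float *)_mm_malloc(ARR_SIZE * sizeof(float), 32);
    float *c = (float *)_mm_malloc(ARR_SIZE * sizeof(float), 32);
    /* ... fill a and b, run the vectorized loop above ... */
    _mm_free(a);
    _mm_free(b);
    _mm_free(c);
}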