Classified by the characteristics of calculation
1. Independent calculation / 计算独立型
Simply add #pragma omp parallel for before the for-loop.
#pragma omp parallel for
for (int i = 0; i < m_nCells; i++) {
    DoSomething(i);
}
2. Dependent calculation / 计算依赖型
The calculation of the current computation unit depends on other units, e.g., the upstream units.
for (int iLayer = 0; iLayer < m_nRoutingLayers; iLayer++) {
    // There is no flow relationship within each routing layer,
    // so parallelization can be done here.
    int nCells = (int) m_routingLayers[iLayer][0];
#pragma omp parallel for
    for (int iCell = 1; iCell <= nCells; ++iCell) {
        int id = (int) m_routingLayers[iLayer][iCell]; // cell index
        DoSomething(id);
    }
}
3. Reduce operation / 规约型计算
Using OpenMP to reduce over an array requires extra care, e.g., when summarizing raster data according to the subbasin ID.
A general solution is discussed at https://stackoverflow.com/questions/20413995/reducing-on-array-in-openmp
It is worth noting that
#pragma omp parallel for reduction(+:myArray[:6])
is supported since OpenMP 4.5. However, as far as I know, MSVC 2010-2015 currently support only OpenMP 2.0.
Below is a code snippet that solves this issue in the current SEIMS.
#pragma omp parallel
{
    // Each thread gets its own private buffer to avoid data races.
    float *tmp_qiSubbsn = new(nothrow) float[m_nSubbasin + 1];
    for (int i = 0; i <= m_nSubbasin; i++) {
        tmp_qiSubbsn[i] = 0.f;
    }
#pragma omp for
    for (int i = 0; i < m_nCells; i++) {
        // Accumulate each cell's value into its subbasin's private slot.
        tmp_qiSubbsn[int(m_subbasin[i])] += SummarizeSomething();
    }
    // Merge the private buffers into the shared array one thread at a time.
#pragma omp critical
    {
        for (int i = 1; i <= m_nSubbasin; i++) {
            m_qiSubbasin[i] += tmp_qiSubbsn[i];
        }
    }
    delete[] tmp_qiSubbsn;
    tmp_qiSubbsn = nullptr;
} /* END of #pragma omp parallel */
Q&A
1. Can I nest OpenMP parallel for loops?
If your compiler supports OpenMP 3.0, you can use the collapse clause:
#pragma omp parallel for schedule(dynamic,1) collapse(2)
for (int x = 0; x < x_max; x++) {
    for (int y = 0; y < y_max; y++) {
        // parallelize this code here
    }
    // IMPORTANT: no code in here
}
If it doesn't (e.g. only OpenMP 2.5 is supported), there is a simple workaround:
#pragma omp parallel for schedule(dynamic,1)
for (int xy = 0; xy < x_max*y_max; xy++) {
    int x = xy / y_max;
    int y = xy % y_max;
    // parallelize this code here
}
You can enable nested parallelism with omp_set_nested(1); and your nested omp parallel for code will work, but that might not be the best idea.
So, when omp-embedded functions (e.g., Initialize1DArray and Initialize2DArray) are invoked in your code, please put them outside the omp parallel for loop, or use new and delete[] instead.