
Some error when running parallel on OpenFOAM-v2212 #44

Open · 16-1895 opened this issue Feb 6, 2023 · 9 comments
@16-1895 commented Feb 6, 2023

Hello,
I ran a high-speed combustion case in parallel successfully with reactingPimpleCentralFoam on OpenFOAM-v1912. Now I need to run it on v2212 for further work, and I get an error: *** Error in `reactingPimpleCentralFoam': malloc(): memory corruption: 0x0000000007cece30 ***. When I run the same case serially, there is no error. I also tested reactingFoam (v2212) and pimpleCentralFoam (v2212) in parallel, and neither shows the error. Do you know how to solve it?
Here is my case with the error log.
case.zip

@16-1895 (Author) commented Feb 6, 2023

I also tested the Tutorials/shockTubeTwoGases case (v2212). The serial calculation ran well, but the parallel calculation crashed midway; the only change I made was numberOfSubdomains (see the sketch below).
logerror.txt
I am confused now. Can anyone help me? Thanks in advance!
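For reference, the change mentioned above lives in system/decomposeParDict. A minimal sketch of such a dictionary, with illustrative values only (the subdomain count and decomposition method here are assumptions, not taken from the attached case):

// system/decomposeParDict -- sketch with illustrative values
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  4;      // the only entry changed for the parallel test (value assumed)

method              scotch; // assumed decomposition method

The case is then decomposed with decomposePar before the parallel run.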

@mkraposhin (Contributor)

Hi, thank you. I'll check the source code. That seems like strange behaviour; we have checked parallel runs many times.

@mkraposhin (Contributor)

> I also tested the Tutorials/shockTubeTwoGases case (v2212). The serial calculation ran well, but the parallel calculation crashed midway. […]

This looks very similar to a numerical instability.

@mkraposhin (Contributor)

> Hello, I ran a high-speed combustion case in parallel successfully with reactingPimpleCentralFoam on OpenFOAM-v1912. Now I need to run it on v2212 for further work, and I get an error. […]

Can you try your case with OpenFOAM-v2112? That version appears to work, and the changes between v2112 and v2212 are not significant. I think the problem is with OpenFOAM itself.

@mkraposhin (Contributor) commented Feb 15, 2023

The problem comes from this part of the code (YEqn.H, lines 224-240):

// For each coupled (e.g. processor) patch: where maxDeltaY exceeds 0.05,
// overwrite the patch values of hLambdaCoeffs with those of lambdaCoeffs
forAll(maxDeltaY.boundaryField(), iPatch)
{
    if (maxDeltaY.boundaryField()[iPatch].coupled())
    {
        scalarField intF = maxDeltaY.boundaryField()[iPatch].primitiveField();
        scalarField intH = hLambdaCoeffs.boundaryField()[iPatch];
        const scalarField& intL = lambdaCoeffs.boundaryField()[iPatch].primitiveField();
        forAll(intF, iFace)
        {
            if (intF[iFace] > 0.05)
            {
                intH[iFace] = intL[iFace];
            }
        }
        hLambdaCoeffs.boundaryFieldRef()[iPatch].operator = (intH);
    }
}

Try commenting it out and let me know if this helps.
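For that test, one way to "comment out" the fragment without deleting it is to compile it out with a preprocessor guard in YEqn.H and rebuild the solver (e.g. with wmake). This is only a diagnostic sketch of that step, not the final fix:

#if 0  // temporarily disable the per-patch blending loop quoted above
forAll(maxDeltaY.boundaryField(), iPatch)
{
    if (maxDeltaY.boundaryField()[iPatch].coupled())
    {
        // ... loop body exactly as quoted above ...
    }
}
#endif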

@16-1895 (Author) commented Feb 17, 2023

> The problem comes from this part of the code (YEqn.H, lines 224-240): […]
>
> Try commenting it out and let me know if this helps.

Hi, it helps. Both my case and shockTubeTwoGases now run stably in parallel. Is this the final solution?

@mkraposhin (Contributor) commented Feb 17, 2023

It looks like something has changed in the handling of inter-processor boundaries. I think you can proceed with the current workaround for now. I'll check over the weekend what exactly has changed and then write here.

@16-1895 (Author) commented Feb 18, 2023

OK, thanks for your reply.

@mkraposhin (Contributor) commented Feb 18, 2023

Hi, I made a fix; it is available in my repository. I think @unicfdlab will merge it soon. Thank you for reporting the bug!
