Adding ENVIRON namelists #703
base: main
Conversation
Thanks for your contribution @sudarshanv01! Looks great at a first glance, will do a more thorough review later next week when I have time to test this. Two more things:
As a sidenote: does ENVIRON produce any new output in the stdout? Or does it only produce extra files that AiiDA then has to retrieve?
Absolutely - will write a section on ENVIRON and will add tests! That is a good point about the stdout. There are a few quantities that could potentially be interesting to parse from the stdout - I imagine that would go into the parser - I can take a look at adding those as well!
Great, thanks a lot! Let me know if you need any help. 🙃
Just a note: in general, it's better not to parse from the stdout if there is an alternative that is less likely to break. For example in the
Thanks @sudarshanv01. The changes look OK. There is just one change needed for the added test to actually test something. And since you are also changing the parser, I would appreciate it if you could add a test for that as well. Have a look at the other pw.x parser tests to see how it is done, and feel free to ask me for help if you have any questions or run into trouble.
```
        }
    },
)
generate_calc_job(fixture_sandbox, entry_point_name, inputs)
```
This test is not actually testing yet that the namelists are (properly) generated. The generate_calc_job just creates the CalcJob and calls the prepare_for_submission method. You still need to check that the input file that was generated is correct. To see how to do this, look in this same file at the test test_pw_default, all the way at the top:
```python
with fixture_sandbox.open('aiida.in') as handle:
    input_written = handle.read()
file_regression.check(input_written, encoding='utf-8', extension='.in')
```
The file_regression fixture will compare the input file that was written with a reference file. If that reference file doesn't exist yet (i.e. before you have ever run this test), it will be created automatically, and rerunning the tests a second time will then pass.
Hi @sphuber - thanks a lot for the comments! I have a few questions about how these tests work. I have added file_regression in the most recent commit, but I have not really set up a corresponding input file that it should compare against, and I am missing how it automatically creates it.
Regarding the parser, I modeled the Environ test after the test_pw_default test and added some checks after running an Environ calculation. A very basic question: how does this work in practice? Is a calculation actually run? In that case it might break, because Environ is a plugin that doesn't ship with the code, I believe.
If you just add the file_regression call in the test, the comparison file will be created automatically the first time you run the tests. If you then run the tests a second time, the comparison file is there and the test should pass. If you ever change anything in the test such that the comparison file needs updating, you don't have to do this manually either: rerun the tests with the flag --force-regen and it will update all comparison files that are used by failing tests.
> A very basic question - how does this work in practice, is a calculation actually run - in which case it might break because Environ is a plugin that doesn't ship with the code I believe.
No, it doesn't run the calculation. I set up the framework for these parser tests such that it simply mocks an already completed calculation, with outputs taken from the test directory. Each test has its own folder that corresponds to the name of the test. So all you have to do is add a folder for your test with the output files of a run you did manually with the environ module enabled.
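As a rough illustration of this fixture convention in plain Python (the function name `load_test_outputs` and the exact folder layout are assumptions for the sketch, not the actual aiida-quantumespresso helpers):

```python
import os

def load_test_outputs(fixture_root, test_name):
    """Read the mocked output files for one parser test.

    Mirrors the convention described above: each test owns a folder
    named after it (e.g. fixtures/pw/default_environ) containing the
    files a real run would have produced.
    """
    folder = os.path.join(fixture_root, test_name)
    outputs = {}
    for fname in sorted(os.listdir(folder)):
        with open(os.path.join(folder, fname), encoding='utf-8') as handle:
            outputs[fname] = handle.read()
    return outputs
```

The real framework goes further and wraps these files in a `retrieved` output node attached to a mocked `CalcJobNode`, but the folder-per-test lookup is the part you interact with when adding a new test.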
Thanks a lot! I think I understand now how the tests for the calculations work (and have updated the latest commit with the right files). The same approach seems to work for the parser, but I am not sure I am testing it properly: all inputs to Environ are passed through the settings, so I am not sure that information is included in the yml file that is generated.
Thanks!
The tests now look fine 👍
Great, thanks!
Just one small comment
```
    parsed_data['environ_potential_shift'] = value_potential_shift
    parsed_data['environ_unit_potential_shift'] = unit_potential_shift
except Exception:
    pass
```
Add a log here as well?
Thanks Giovanni! I have added it to the log in the most recent commit - thanks!
Just for my understanding: is the string "Gaussian-smeared nuclei" specific to ENVIRON calculations? If that is the case, the warning message in the except case doesn't make a lot of sense, does it?
Yup, I guess it is specific to ENVIRON calculations. Right now it says `Not an Environ calculation with an open boundary condition`, but this might show up for all calculations, which could be a bit annoying. Is there a better way to have a log message here without it popping up every time, and only for ENVIRON calculations?
Sure, just add a conditional so the `elif 'Gaussian-smeared nuclei' in line` only runs when it is an ENVIRON calculation. How to determine whether we are actually parsing an ENVIRON calculation depends. You could probably parse the stdout for some flag, but that is really fragile. If the XML doesn't contain some switch we can rely on (which would be the best and most robust solution), you can use the input parameters. If I understand correctly, to run an ENVIRON calculation you have to add a particular key to the parameters input node. You can check for that at the beginning of the parser, something like

```python
is_environ = 'ENVIRON' in self.inputs.settings.get_dict()
```

and then the parsing line would become

```python
elif is_environ and 'Gaussian-smeared nuclei' in line:
```
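A minimal standalone sketch of this guarded parsing (the function name and the exact stdout line format are assumptions for illustration; the real parser operates on the full pw.x output):

```python
def parse_environ_shift(stdout_lines, settings):
    """Parse the ENVIRON potential shift, but only for ENVIRON runs.

    Guarding on the settings key keeps plain pw.x calculations from
    ever entering (or warning about) the ENVIRON-specific branch.
    """
    is_environ = 'ENVIRON' in settings
    parsed = {}
    for line in stdout_lines:
        if is_environ and 'Gaussian-smeared nuclei' in line:
            # assumed line shape: '... Gaussian-smeared nuclei is <value> <unit>'
            value, unit = line.split()[-2:]
            parsed['environ_potential_shift'] = float(value)
            parsed['environ_unit_potential_shift'] = unit
    return parsed
```

With this guard, the same stdout line parses for an ENVIRON calculation but is silently skipped for an ordinary one, so no spurious warning is ever logged.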
Sure, absolutely - just put this change in. In the current form I would need to pass an extra settings argument to the parse_raw function. I could also just send is_environ as an argument, but maybe it is more complete to pass the settings dictionary, just like parameters?
…ntumespresso into develop Merging to update tests
…ntumespresso into develop Merging to update tests Pushing again to check tests
…ntumespresso into develop Merging branch from develop
Thanks @sudarshanv01 . Great start with the tests, but as is, they are not actually testing the new functionality. I have given more info in the code how to change this.
tests/calculations/test_pw.py
Outdated
```
# Checks on the files written to the sandbox folder as raw input
assert sorted(fixture_sandbox.get_content_list()) == sorted(['aiida.in', 'pseudo', 'out', 'environ.in'])
file_regression.check(input_written, encoding='utf-8', extension='.in')
```
Checking the main input file aiida.in (as you are doing here) is not bad, but as you can see from the generated comparison file, it doesn't actually include any of the environ settings, which are written to environ.in instead. So you are still not really testing the environ-specific functionality in this test; you should check the content of the environ.in file instead of aiida.in.
Maybe it could even be argued that it is useful to check both files, because in theory the environ-related code could break the code that writes the main input file.
Hi @sphuber - apologies that this reply is so late; this PR seems to have slipped through somehow. I just made some changes: as you suggested, I have now added two tests, one for the aiida.in file called test_environ_pw_namelists and a second for the environ.in file called test_environ_namelists.
Why do you need two different tests for this? The tests are identical; you are just checking a different file in each. You can merge them and check both files in one test with something like this:

```python
file_regression.check(input_written, encoding='utf-8', extension='.aiida.in')
file_regression.check(input_written, encoding='utf-8', extension='.environ.in')
```
Yup, I do not need two tests for this, as you mentioned. They have been combined into one!
""" | ||
|
||
|
||
def calculator(): |
Why did you make this and the environ_calculator a function if all it does is return a static dictionary? I think it might be simpler to literally just include the dictionary in the main method, where you assign it directly to the builder.
Yup good point, has been corrected.
```
builder.pw.structure = structure

builder.metadata.label = 'Single point environ'
builder.metadata.description = 'Testing calculation with environ for Pt(111) + CO'
```
This hardcodes the structure in the label and description, even though the structure can be changed through the argument. Not a huge deal since it is an example, but still a bit weird.
Yup, also corrected.
tests/parsers/test_pw.py
Outdated
```
    data_regression,
):
    """Test a simple Environ calculation."""
    name = 'default'
```
You are almost there with creating a parser test. This name variable actually corresponds to a folder in the tests/parsers/fixtures/pw folder, where you will find the default folder. It contains output files that were generated by a PwCalculation, i.e. the aiida.out and data-file.xml. Through the generate_calc_job_node function called on line 924, a CalcJobNode is mocked with a retrieved output node that contains these files. This replicates what happens when you actually run a PwCalculation and the node is passed to the parser.
What you then have to do is create a new folder in that fixtures base folder, let's call it default_environ, and put in it the aiida.out and data-file.xml that you take from an actual PwCalculation run that included the ENVIRON namecard. Only then will you actually test the parser on output files that were produced by an environ run. In the current state, you are simply testing exactly the same thing as the test_default test, which tests the outputs of a simple scf calculation with pw.x. I hope this is clear; if not, feel free to ask for more clarification.
Yup, all makes sense - thanks! Let me describe what I did here. I added a new structure with structure_id=platinum to the conftest.py in order to test the different environ outputs, as they won't work for the two example cases of Si or water (I need a 2D slab to check for the right flags). I then ran a pw calculation with QE v6.7 and stored the aiida.out and data-file.xml files in fixtures/default_environ. So far so good, except that the auto-generated test_environ.yml file doesn't seem to have the right specifications (it reports Si and not Pt). Moreover, the k-points do not seem to be parsed properly (currently commented out). I could also bypass this and run everything for Si, but my thinking is that it would be nice to have the test mirror that of the Environ package.
@sudarshanv01 just a quick ping on this: will you still have time to update the PR based on @sphuber's suggestions? If not, that's fine; I'd happily take over to fix the tests so we can get this baby merged.
tests/conftest.py
Outdated
```
elif structure_id == 'platinum':
    # Used for environ tests
    structure = StructureData(cell=[[10.6881, 0., 0.], [0., 9.2561, 0.], [0., 0., 42.2630]])
    structure.append_atom(position=[5.335084148, 4.646723426, 12.901029877], symbols='C', name='C')
    structure.append_atom(position=[5.335009643, 4.619623254, 15.079854269], symbols='O', name='O')
    structure.append_atom(position=[8.061327071, 0.098057998, 8.992142901], symbols='Pt', name='Pt')
    structure.append_atom(position=[2.608989366, 0.098058283, 8.992140585], symbols='Pt', name='Pt')
    structure.append_atom(position=[0.000036609, 4.720846294, 8.968756935], symbols='Pt', name='Pt')
    structure.append_atom(position=[5.335159557, 4.721612729, 9.380196435], symbols='Pt', name='Pt')
    structure.append_atom(position=[0.000041121, 7.802951963, 4.604626508], symbols='Pt', name='Pt')
    structure.append_atom(position=[5.335161233, 7.697749113, 4.753489408], symbols='Pt', name='Pt')
    structure.append_atom(position=[2.697860636, 3.152173889, 4.688412329], symbols='Pt', name='Pt')
    structure.append_atom(position=[7.972463687, 3.152174491, 4.688415209], symbols='Pt', name='Pt')
```
Do you really need a different structure for the tests? Usually you are just mocking a structure, and the test shouldn't really care what actual structure is used. I want to prevent us from starting to add a bunch of structures that are not necessary at all.
Very fair - as long as there is some vacuum somewhere. Maybe the structure of water could be used instead; would that work?
What part of the code will fail if there is no vacuum?
I don't think it would necessarily fail. I initially thought it would be nice to test the pbc correction schemes, as we have them in the parser now, though this is relatively minor. If we are allowed to use a different structure from the one that was used for the DFT calculation, that would be checked anyway. I have now removed the alternate structure (but kept the option of passing structure_id to the generate_structure function - let me know if you want this removed too).
Thanks @sudarshanv01 . The last changes need some final improvements. See the comments
```diff
@@ -267,7 +267,7 @@ def detect_important_message(logs, line):
     logs.warning.append(message)

-def parse_stdout(stdout, input_parameters, parser_options=None, parsed_xml=None):
+def parse_stdout(stdout, input_parameters, parser_options=None, parsed_xml=None, settings={}):
```
Note that using a mutable object (as a dict is) for a function default is dangerous: the default can actually change if it is mutated. Instead, define settings=None and then, in the function body, call settings = settings or {}, which assigns an empty dict if it is None.
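To see why this matters, a self-contained demonstration of the pitfall (hypothetical function names, not the plugin's API):

```python
def count_calls_bad(settings={}):
    # The default dict is created once, at function definition time;
    # mutating it leaks state into every later call that uses the default.
    settings['calls'] = settings.get('calls', 0) + 1
    return settings['calls']

def count_calls_good(settings=None):
    settings = settings or {}  # a fresh dict each time None (or nothing) is passed
    settings['calls'] = settings.get('calls', 0) + 1
    return settings['calls']
```

Calling count_calls_bad() twice returns 1 and then 2, because the shared default accumulated state between calls, while count_calls_good() returns 1 every time.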
I see, that makes sense, good to know - thanks!
Co-authored-by: Sebastiaan Huber <mail@sphuber.net>
@matthew-truscott just pinging you so you are aware of this proposed change by @sudarshanv01. Would the
Thanks for the mention, and good to see more support for Environ with AiiDA. Environ itself is getting some pretty significant updates that will warrant a completely separate calcfunction; in particular, Environ will act less like a plugin on top of QE and more like a standalone code that leverages the QE drivers, among other things.
This PR is in reference to this issue. It adds a couple of namelists (boundary and electrostatics) specific to Environ, which can be added through the settings dictionary. They are written the same way the existing namelists are written, but only if an ENVIRON key is in the settings dictionary.
Also added an example to test the changes with some typical keys. This example tries to recreate the same calculation as Example04 in the examples folder of Environ.