Use not winrm instead of is ssh condition #442
base: main
Conversation
Hey @aklakina, thanks for this contribution. Setting "none" currently results in a very ungraceful experience, I agree, but I have a few concerns with this change.
The first is that, with this change, the default for an unset communicator (which is a valid input in Azure) on Windows becomes SSH. This isn't how most people use Azure on Windows, and I believe it will break a lot of builds. This function (GetCommunicatorSpecificKeyVaultDeployment) is only called for Windows builds.
If you're running Ansible playbooks you'd still want to connect with SSH, and your change still sets up a communicator fully due to the way the builder code is written. When the "none" comm type is set after your change, an SSH certificate will be uploaded and then WinRM will be used:
packer-plugin-azure/builder/azure/arm/builder.go, lines 413 to 419 in 6c21022:

```go
WinRMConfig: func(multistep.StateBag) (*communicator.WinRMConfig, error) {
	return &communicator.WinRMConfig{
		Username: b.config.UserName,
		Password: b.config.Password,
	}, nil
},
},
```
Can you tell me more about why setting the communicator type to "ssh" won't meet your use case? With this change a connection will still be created for Windows builds with "none" set: a WinRM username/password on a VM configured for SSH, probably with a default username and password that WinRM can access, which isn't a consistent experience. If you could post your full template I can understand more about your goal here; if you're using the Ansible Packer provisioner, as I mentioned, you'll still need some sort of VM connection.
I think our support for the "none" option needs revisiting; maybe it should not be an option on our plugin at all, given how the rest of the plugin is written.
Hi @JenGoldstrich, thank you for your time and review. In this form I haven't tried explicitly setting the communicator to "ssh" yet; maybe it could solve this cross-platform issue. The root problem is HCL not being very dynamic; currently I have solved this with a code generator. It makes sense in our environment because we mostly reuse every template with little-to-no changes.
Would it still be inconsistent if, instead of "not winrm", the check were for "ssh" or "none"? I think that would not change the default behavior unless the Packer engine passes an unset communicator as one of those values.
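For illustration only, a minimal sketch of that narrower check (plain Go with made-up names, not the plugin's actual code):

```go
package main

import "fmt"

// sshStyleDeployment illustrates the narrower check discussed above: treat an
// explicit "none" like "ssh", but leave an unset communicator ("") on the
// WinRM path so the current default behavior is unchanged.
func sshStyleDeployment(commType string) bool {
	return commType == "ssh" || commType == "none"
}

func main() {
	for _, comm := range []string{"ssh", "none", "winrm", ""} {
		fmt.Printf("%q uses the WinRM deployment: %v\n", comm, !sshStyleDeployment(comm))
	}
}
```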
It would be a less disruptive change if it explicitly checked for "none", yes, but it would still run into the failure with my provided template. Can you please provide your full Packer template so I can more quickly understand how you're making this change pass?
After a Packer build finishes, the VM that created the image is destroyed; you cannot run an OS scan after a Packer build unless you create a new VM from that image. To use a Packer build to create an image, anything you want to bake into that image should be handled using a Packer provisioner, which allows you to run shell commands; there is also the Ansible plugin.
I am not sure I understand what your goal is with "control flows depending on input variables". Do you mean disabling certain aspects of a Packer build based on different input variables? I'm not sure what value that provides compared to having two different templates, one for each OS type, as they will have different needs inherently. Packer has the -except flag (https://developer.hashicorp.com/packer/docs/commands/build#except-foo-bar-baz), but I don't believe it works on provisioners, only on post-processors and builds; it seems like a reasonable feature request for Packer itself, though.
Either way, if you set the communicator to "none" the Azure plugin should not negotiate a connection to the VM, via either WinRM or SSH, but currently it does. For most of these images, though, you would need to connect to the created VM before you capture it into an image, and your code still results in the WinRM communicator being initiated, just without a certificate created in a Key Vault. Originally I suggested using SSH, but I think that may not meet your use case. I am also confused as to why WinRM fails for you: if you do not provide a certificate the plugin will create one for you, so I don't understand why WinRM was failing for your build originally. Providing that template will help answer some of those questions, though.
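To make the provisioner point concrete, here is a minimal sketch of provisioning inside a Packer build rather than after it (the source name, inline command, and playbook path are placeholders, not taken from this PR):

```hcl
build {
  # Placeholder source; in practice this would reference an azure-arm source block.
  sources = ["source.azure-arm.example"]

  # Runs on the temporary build VM before it is captured, so the result is
  # baked into the resulting image.
  provisioner "powershell" {
    inline = ["Write-Host 'configuring the image'"]
  }

  # The Ansible provisioner also needs a working communicator (SSH or WinRM)
  # to reach the build VM.
  provisioner "ansible" {
    playbook_file = "./playbooks/configure.yml"
  }
}
```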
Communicator "none" issue
Context
Your plugin gives Packer the option to not set up a communicator for provisioning. This is especially useful when the Packer provisioning step just calls an Ansible playbook or some other OS configurator.
The issue
When this happens, the plugin handles the connection as a WinRM connection when creating the key vault. The problem with this is that the user did not provide any WinRM-specific configuration, because they didn't want any communicator at all. This leads to a fatal deployment error.
Example configuration:
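(The original template is not reproduced here; the following is a minimal sketch of the kind of configuration that triggers the issue, with placeholder names, authentication, and image details.)

```hcl
source "azure-arm" "windows_example" {
  use_azure_cli_auth = true

  os_type         = "Windows"
  image_publisher = "MicrosoftWindowsServer"
  image_offer     = "WindowsServer"
  image_sku       = "2022-datacenter"

  location = "westeurope"
  vm_size  = "Standard_D2s_v3"

  managed_image_name                = "example-image"
  managed_image_resource_group_name = "example-rg"

  # No communicator is wanted: provisioning happens outside Packer (e.g. Ansible),
  # yet the key vault deployment is still generated as if WinRM were configured.
  communicator = "none"
}

build {
  sources = ["source.azure-arm.windows_example"]
}
```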
Error output:
The fix
Originally the communicator type was checked for being set to "ssh", and if it was not, the deployment defaulted to WinRM. This should be changed to a "not winrm" check. This way, if the user sets the communicator to "none", a correct certificate will still be deployed without any extra configuration.
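As a rough sketch of the intended change (a simplified, self-contained stand-in with made-up names and return values, not the plugin's actual GetCommunicatorSpecificKeyVaultDeployment implementation):

```go
package main

import "fmt"

// selectKeyVaultDeployment is a hypothetical stand-in for the plugin's
// communicator-specific key vault deployment selection; the real function
// returns an ARM deployment, not a string.
func selectKeyVaultDeployment(commType string) string {
	// Old check: only "ssh" avoided the WinRM path, so "none" (or an unset
	// communicator) fell through to the WinRM deployment and failed without
	// WinRM-specific configuration.
	//
	// New check: anything that is not explicitly "winrm" gets the plain
	// certificate deployment.
	if commType != "winrm" {
		return "certificate deployment without WinRM settings"
	}
	return "WinRM certificate deployment"
}

func main() {
	for _, comm := range []string{"ssh", "winrm", "none", ""} {
		fmt.Printf("%q -> %s\n", comm, selectKeyVaultDeployment(comm))
	}
}
```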