# Change Audit models ACLs on Flaw unembargo #857

base: `master`

Changes from all commits: `4b9351c`, `1c3bc36`, `7a52b45`, `192070c`
````diff
@@ -272,6 +272,15 @@ It's possible to perform local development outside of containers, in the venv th
 See `make help` for a short summary of these _make targets_.
 
+## Logging
+
+### Podman Logs
+
+You can use podman's logging subcommand along with the `--follow` (`-f`) flag to get a better idea of what is going on in the container. This is especially helpful with the container running Django/WSGI.
+
+```sh
+$ podman logs -f osidb-service
+```
 
 ## Updating dev env
````

> **Reviewer:** I am not strongly against this, but I wonder whether documenting Podman is not beyond the scope of the OSIDB documentation. There are certainly tons of potentially useful Podman commands, but Podman has its own documentation. If it were some OSIDB-specific procedure, something non-intuitive, etc., then we should probably document it here.
```diff
@@ -348,7 +357,7 @@ Situations where combining `compose-down` and `build` is useful:
 - If you don't care about database contents, run `make db-drop` and try again.
 - If `make db-drop` doesn't fix it, run `make compose-down; make build`. It seems there's something that makes the database corrupted from the start.
 - If that doesn't fix it, do `make clean; make dev-env; make start-local`, but this probably won't fix it anyway.
-- For more involved debugging, use `podman logs osidb-service`, `podman logs osidg-data`, and inside `podman exec -it osidb-service bash` follow https://stackoverflow.com/a/55929118 to uncover a more specific Django error message.
+- For more involved debugging, use `podman logs osidb-service`, `podman logs osidb-data`, and inside `podman exec -it osidb-service bash` follow https://stackoverflow.com/a/55929118 to uncover a more specific Django error message.
 - If `django.db.utils.OperationalError: could not translate host name "osidb-data" to address: Name or service not known`, osidb-service can't find the hostname `osidb-data`, either because the container osidb-data is not running, or because there's a podman network issue that breaks container-to-container communication.
 - If `podman exec -it osidb-data pg_isready || echo error` returns an error, try to get the `osidb-data` container up and running first (see previous debugging steps).
 - If osidb-data is running correctly, it's possible this is an instance of the bug https://bugzilla.redhat.com/show_bug.cgi?id=1980157 (but actually who knows, I [jsvoboda] don't know enough about this :-( ).
```

---

```diff
@@ -2,6 +2,9 @@
 from functools import cached_property
 from itertools import chain
 
+import pghistory
+import pgtrigger
+from django.apps import apps
 from django.conf import settings
 from django.contrib.contenttypes.fields import GenericForeignKey, GenericRelation
 from django.contrib.contenttypes.models import ContentType
@@ -532,7 +535,18 @@ def unembargo(self):
 
         # unembargo
         self.set_public()
+        refs = pghistory.models.Events.objects.references(self).all()
+        for ref in refs:
+            db, model_name = ref.pgh_model.split(".")
+            model_audit = apps.get_model(db, model_name).objects.filter(
+                pgh_id=ref.pgh_id
+            )
+
+            with pgtrigger.ignore(f"{db}.{model_name}:append_only"):
+                model_audit.update(
+                    acl_read=list(self.acls_public_read),
+                    acl_write=list(self.acls_public_write),
+                )
         kwargs = {}
         if issubclass(type(self), AlertMixin):
             # suppress the validation errors as we expect that during
```

> **Reviewer** (on lines +540 to +541): There's no need to split the name; you can pass it directly.
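
The reviewer's suggested change itself is not reproduced in this extract. As a rough illustration only, not the PR's actual code: Django's `apps.get_model()` also accepts the dotted `app_label.ModelName` string that `pgh_model` already holds, so the split can be dropped. In the sketch below the helper name `update_audit_acls` is hypothetical; the `references()` query, the ACL fields, and the `append_only` trigger name are taken from the diff above.

```python
import pghistory
import pgtrigger
from django.apps import apps


def update_audit_acls(instance):
    """Hypothetical helper sketching the reviewer's idea: rewrite the ACLs of
    every pghistory audit record that references ``instance``."""
    for ref in pghistory.models.Events.objects.references(instance).all():
        # ref.pgh_model is already the dotted "app_label.ModelName" label,
        # which apps.get_model() accepts directly -- no need to split it.
        audit_qs = apps.get_model(ref.pgh_model).objects.filter(pgh_id=ref.pgh_id)

        # The audit tables are append-only by trigger, so the in-place ACL
        # update has to run with that trigger temporarily ignored.
        with pgtrigger.ignore(f"{ref.pgh_model}:append_only"):
            audit_qs.update(
                acl_read=list(instance.acls_public_read),
                acl_write=list(instance.acls_public_write),
            )
```

Scoping `pgtrigger.ignore()` to just the `append_only` trigger of the affected model keeps the window in which the audit table is writable as small as possible.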

---

```diff
@@ -1,7 +1,9 @@
 from datetime import datetime, timezone
 from itertools import chain
 
+import pghistory
 import pytest
+from django.conf import settings
 from freezegun import freeze_time
 from rest_framework import status
@@ -161,19 +163,30 @@ def test_complex(self, auth_client, test_api_uri):
             HTTP_JIRA_API_KEY="SECRET",
         )
         assert response.status_code == status.HTTP_200_OK
+        models = [
+            Flaw,
+            FlawAcknowledgment,
+            FlawComment,
+            FlawCVSS,
+            FlawReference,
+            Affect,
+            AffectCVSS,
+            Package,
+            Tracker,
+        ]
         assert not any(
             instance.is_embargoed
-            for instance in chain(
-                Flaw.objects.all(),
-                FlawAcknowledgment.objects.all(),
-                FlawComment.objects.all(),
-                FlawCVSS.objects.all(),
-                FlawReference.objects.all(),
-                Affect.objects.all(),
-                AffectCVSS.objects.all(),
-                Package.objects.all(),
-                Tracker.objects.all(),
-            )
+            for instance in chain(*[model.objects.all() for model in models])
         )
 
+        audit_models = [
+            pghistory.models.Events.objects.references(model).all()
+            for model in models
+        ]
+        assert (
+            audit_model["acls_read"] == settings.PUBLIC_READ_GROUPS
+            and audit_model["acls_write"] == [settings.PUBLIC_WRITE_GROUP]
+            for audit_model in audit_models
+        )
 
     @freeze_time(datetime(2020, 10, 10, tzinfo=timezone.utc))
```

> **Reviewer:** I am wondering why this is needed in addition to the assert on line 164. The condition on line 165 is exactly the same as the one here on line 180, but here we only check the Flaw model while the other assert checks the other models too. If the reason is to also check the write group, I would be for extending the previous assert rather than introducing a new, partially duplicate, but less covering one.

> **Author:** Ah okay, I will just extend the previous assertion to check the read and write groups then.

> **Author:** I was not testing the correct models, and I have now fixed this. Though I didn't add the check for the write groups to the original tests, since it's outside the scope of this feature.
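
For illustration only, not code from this PR: a sketch of what folding the audit-record ACL check into an explicit helper could look like, following the discussion above. Note that the final `assert` in the diff evaluates a bare generator expression, which is always truthy, so an explicit loop (or an `all()` call) is needed for the check to take effect. The helper name `assert_audit_acls_public` and its parameters are hypothetical; the `models` list, the `Events.references()` query, and the `acl_read`/`acl_write` fields come from the diff.

```python
from itertools import chain

import pghistory
from django.apps import apps


def assert_audit_acls_public(models, expected_read, expected_write):
    """Hypothetical test helper: verify that every pghistory audit record
    referencing any instance of the given models carries the public ACLs."""
    for instance in chain(*[model.objects.all() for model in models]):
        for ref in pghistory.models.Events.objects.references(instance).all():
            # Resolve the concrete audit-table row behind the aggregate event.
            audit_row = apps.get_model(ref.pgh_model).objects.get(pgh_id=ref.pgh_id)
            assert list(audit_row.acl_read) == list(expected_read)
            assert list(audit_row.acl_write) == list(expected_write)
```

In the test this could be called right after the status-code assertion, e.g. `assert_audit_acls_public(models, flaw.acls_public_read, flaw.acls_public_write)`, where `flaw` stands for the unembargoed instance (a hypothetical variable name here).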

---

> **Reviewer:** (opinionated) nitpick: It would make more sense to me to make the secrets corrections together with the changes that caused them, so it is clear why we do them. It may be tedious when there are many commits (which is not the case here), and I'm sure there are different approaches to this in our team, so no strong preference 😸