
Memory-based Fault Injection into weights and biases #76

Open
vanessaip opened this issue Oct 25, 2024 · 1 comment

Comments

@vanessaip

Week 1-4 (Sep 23 - Oct 18): inject a bit-flip fault into a random weight of every fully connected layer in a simple model
Week 5-6 (Oct 21 - Nov 1): inject bit flips into the biases of every fully connected layer, and support selecting exactly which weight in a tensor to modify (for testing purposes)
Week 6-7 (Nov 4 - Nov 15): inject faults into selected layers only
Week 8-9 (Nov 18 - Nov 29): evaluate SDC rate and performance overhead
Week 10-12 (Dec 2 - Dec 13): explore injecting faults into the weights of other model architectures such as CNNs and Transformers
Week 13 (Dec 16 - 20): buffer time for further evaluation or implementation
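The Week 1-4 step (a bit flip into a random weight of every fully connected layer) can be sketched at the Python level as follows. This is a minimal sketch, not LLTFI's actual LLVM-level mechanism: the two-layer model, `flip_bit`, and `inject_random_fault` are hypothetical names, and weights are assumed to be NumPy float32 arrays.

```python
import random
import struct

import numpy as np

def flip_bit(value, bit):
    """Flip one bit of a float32 value via its IEEE-754 bit pattern."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", np.float32(value)))
    as_int ^= 1 << bit  # toggle the chosen bit (0 = LSB of mantissa, 31 = sign)
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return np.float32(flipped)

def inject_random_fault(weights, rng=random):
    """Flip a random bit of a random element of a weight matrix, in place."""
    idx = tuple(rng.randrange(dim) for dim in weights.shape)
    bit = rng.randrange(32)
    weights[idx] = flip_bit(weights[idx], bit)
    return idx, bit

# One fault per fully connected layer (hypothetical two-layer model).
layers = [np.random.rand(4, 3).astype(np.float32),
          np.random.rand(3, 2).astype(np.float32)]
for w in layers:
    inject_random_fault(w)
```

Flipping the same bit twice restores the original value, which makes this style of injector easy to test.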

Week 5 update:

  • injecting into the first source register of the fmul instruction for any weight in matmul layers, and verifying that the weights were modified
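Verifying that the weights were actually modified can be done with a before/after diff of the tensor. A minimal sketch, with hypothetical values and explicit negation standing in for a sign-bit flip:

```python
import numpy as np

# Snapshot the weights before injection, then diff afterwards to confirm
# that exactly one element changed (hypothetical 2x2 layer).
weights = np.array([[0.5, 0.25], [0.125, 0.75]], dtype=np.float32)
before = weights.copy()

weights[1, 0] = -weights[1, 0]  # stand-in for flipping the sign bit of one weight

changed = np.argwhere(before != weights)
print(changed)  # → [[1 0]]
```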
@karthikp-ubc
Contributor

Just following up on this. It'd be great if you could integrate this feature into the main LLTFI branch once it is done. Thanks.
