Make TE and Apex dependencies optional #9550
base: main
Conversation
Signed-off-by: ashors1 <ashors1@users.noreply.github.com>
# def check_adlr_autoresume_termination(iteration, model, optimizer, lr_scheduler, save: bool):
#     """Check for autoresume signal and exit if it is received."""
#     from apex.ppu.checkpointing import save_checkpoint
#
#     autoresume = get_adlr_autoresume()
#     # Add barrier to ensure consistency.
#     torch.distributed.barrier()
#     if autoresume.termination_requested():
#         if save:
#             save_checkpoint(iteration, model, optimizer, lr_scheduler)
#         print_rank_0(">>> autoresume termination request found!")
#         if torch.distributed.get_rank() == 0:
#             autoresume.request_resume()
#         print_rank_0(">>> training terminated. Returning")
#         sys.exit(0)
Check notice: Code scanning / CodeQL: Commented-out code (Note)
}

def get_num_microbatches():
Check warning: Code scanning / CodeQL: Variable defined multiple times (Warning): redefined
    setup_microbatch_calculator,
)

HAVE_APEX = True
Check notice: Code scanning / CodeQL: Unused local variable (Note)
Signed-off-by: ashors1 <ashors1@users.noreply.github.com>
@@ -99,6 +99,9 @@
TransformerConfig = ApexGuardDefaults

HAVE_MEGATRON_CORE = False
from typing import Any

RetroConfig = Any
do we want to do the same for all of the other mcore classes we are trying to import under the "try" block?
also, why not use ApexGuardDefaults?
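The reviewer's `ApexGuardDefaults` refers to a placeholder-class pattern: a stand-in object whose attribute lookups all succeed, so module-level references to an unimportable class don't raise at import time. A minimal sketch of that pattern (the class name and usage below are illustrative, not NeMo's exact code):

```python
# Hypothetical sketch of an ApexGuardDefaults-style placeholder.
class GuardDefaults:
    def __getattr__(self, name):
        # Every attribute resolves to None instead of raising AttributeError,
        # so downstream code can reference it without an import-time crash.
        return None


# Fallback binding when the real class cannot be imported.
TransformerConfig = GuardDefaults()
print(TransformerConfig.hidden_size)  # None
```

Code that actually *uses* the placeholder still needs a runtime check (e.g. a `HAVE_MEGATRON_CORE` flag) before doing real work with it.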
)

HAVE_APEX = True
except ModuleNotFoundError:
Suggested change:
- except ModuleNotFoundError:
+ except (ImportError, ModuleNotFoundError):
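The suggestion is worth applying for the `from ... import name` cases. Since Python 3.6, `ModuleNotFoundError` is a subclass of `ImportError`, so `except ImportError` alone already covers a missing package; the converse does not hold, because importing a missing *name* from an installed package raises a plain `ImportError` that `except ModuleNotFoundError` would not catch. A small demonstration (the module name below is hypothetical and assumed absent):

```python
# ModuleNotFoundError subclasses ImportError, so the broader clause
# catches both failure modes; the narrower one does not.
assert issubclass(ModuleNotFoundError, ImportError)

try:
    import definitely_not_a_real_module_xyz  # hypothetical, assumed absent
except ImportError as e:
    # The narrower subclass is still caught by the broader ImportError.
    print(type(e).__name__)  # ModuleNotFoundError
```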
from apex.transformer.tensor_parallel.layers import (
    set_defaults_if_not_set_tensor_model_parallel_attributes,
)
except ModuleNotFoundError:
Suggested change:
- except ModuleNotFoundError:
+ except (ImportError, ModuleNotFoundError):
from apex.transformer.pipeline_parallel.utils import _GLOBAL_NUM_MICROBATCHES_CALCULATOR
try:
    from apex.transformer.pipeline_parallel.utils import _GLOBAL_NUM_MICROBATCHES_CALCULATOR
except ModuleNotFoundError:
Suggested change:
- except ModuleNotFoundError:
+ except (ImportError, ModuleNotFoundError):
from megatron.core.models.gpt.gpt_layer_specs import get_gpt_layer_with_transformer_engine_spec

get_gpt_layer_spec = get_gpt_layer_with_transformer_engine_spec
except ImportError:
Suggested change:
- except ImportError:
+ except (ImportError, ModuleNotFoundError):
What does this PR do?
This PR provides a fallback code path that defaults to pure PyTorch/JIT when TE and Apex are not installed. This depends on NVIDIA/Megatron-LM#893.
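The change applies a guarded-import pattern throughout: try to import the optional dependency, record availability in a module-level flag, and bind a fallback otherwise. A minimal sketch of that pattern, assuming illustrative names (the fallback stub here is hypothetical, not NeMo's actual fallback):

```python
# Guarded import: Apex may or may not be installed in this environment.
try:
    from apex.transformer.pipeline_parallel.utils import get_num_microbatches

    HAVE_APEX = True
except (ImportError, ModuleNotFoundError):
    HAVE_APEX = False

    # Hypothetical pure-PyTorch fallback used when Apex is absent.
    def get_num_microbatches():
        return 1
```

Callers can then branch on `HAVE_APEX` (or raise a clear error) rather than crashing at import time when the optional dependency is missing.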
Collection: [Note which collection this PR will affect]
Changelog
Usage
# Add a code snippet demonstrating how to use this
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI, remove and re-add the label.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.
Additional Information