Mac M1 install #135

Open
crotwell opened this issue Jul 28, 2022 · 9 comments

@crotwell

I followed the instructions in #93, but still had errors trying to install on my M1 Mac using pip and, at least as of 2022-07-28, directly from source. There were various incompatibilities among dependency package versions.

I was finally able to get it to install and run by following the TensorFlow instructions here:
https://developer.apple.com/metal/tensorflow-plugin/
except using Python 3.10.
I then cloned the GitHub repo and removed all the version pins from the dependencies in install_requires in setup.py (a sketch of that edit follows). I was then able to install from source using setup.py and run the basic tutorial from the docs without error, though of course I'm not sure whether there are other issues from upgrading dependencies to later versions.
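
A hypothetical sketch of the edited setup.py (the dependency names follow the project's setup.py shown later in this thread; the exact lines at that time may have differed slightly, and TensorFlow itself was installed beforehand as tensorflow-macos via pip):

```python
# setup.py (sketch): same dependency names, version pins removed
from setuptools import setup, find_packages

setup(
    name="EQTransformer",
    packages=find_packages(),
    install_requires=[
        'pytest', 'numpy', 'keyring', 'pkginfo', 'scipy',
        'keras', 'matplotlib', 'pandas', 'tqdm', 'h5py',
        'obspy', 'jupyter',  # tensorflow-macos installed separately via pip
    ],
)
```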

The versions that all seemed to be compatible and let EQTransformer run on my Mac M1, in case it helps, were:

pytest                    7.1.2           py310hbe9552e_0    conda-forge
numpy                     1.22.4          py310h0a343b5_0    conda-forge
keyring                   23.7.0          py310hbe9552e_0    conda-forge
pkginfo                   1.8.3              pyhd8ed1ab_0    conda-forge
scipy                     1.8.1           py310hdb41229_2    conda-forge
tensorflow-deps           2.9.0                         0    apple
tensorflow-estimator      2.9.0                    pypi_0    pypi
tensorflow-macos          2.9.2                    pypi_0    pypi
keras                     2.9.0                    pypi_0    pypi
matplotlib-base           3.5.2           py310hbeb1b0d_0    conda-forge
pandas                    1.4.3           py310ha6a5cd6_0    conda-forge
tqdm                      4.64.0             pyhd8ed1ab_0    conda-forge
h5py                      3.6.0           nompi_py310hb8bbf05_100    conda-forge
obspy                     1.3.0           py310hdaceac9_0    conda-forge
jupyter                   1.0.0           py310hbe9552e_7    conda-forge

python --version
Python 3.10.5

The commands I ran were as follows. I'm not sure all of this was needed, but it worked.

conda create -n eqt python=3.10
conda activate eqt
conda install -c apple tensorflow-deps
conda install obspy jupyter pandas
pip install tensorflow-macos
python3 setup.py install
@smousavi05
Owner

Thank you so much @crotwell for sharing this. I appreciate it.

@xtyangpsp

xtyangpsp commented Dec 1, 2023

It turns out that you have to specify the versions of some packages, particularly the TensorFlow-related ones. Following @crotwell's suggestions, the following steps worked for me:

conda create -n eqt python=3.10
conda activate eqt

Add "subdir: osx-arm64" to ~/.condarc to help conda search for M chip tensorflow

conda install -c apple tensorflow-deps
conda install obspy==1.3.0 jupyter pandas

Check which version of tensorflow-deps was installed by running conda list. If the version is 2.10.0, for example, then tensorflow-macos in the next step needs to have the same version.
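
For example (the version shown is only an assumption; the columns follow the usual conda list name/version/build/channel layout):

```
conda list tensorflow-deps
# Name               Version   Build   Channel
# tensorflow-deps    2.10.0    0       apple    <- match tensorflow-macos to this
```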

pip install tensorflow-macos==2.10.0

This will install a compatible keras package. Then you have to change the setup.py file to reflect the TensorFlow-related package versions. For me, I changed them all to 2.10.0. YOU MAY NEED TO CHANGE THEM TO A DIFFERENT VERSION. Note this is only applicable if you install by cloning the GitHub repository. The following is the list of versions from the setup block of setup.py that worked for me:

setup(
    name="EQTransformer",
    author="S. Mostafa Mousavi",
    version="0.1.61",
    author_email="smousavi05@gmail.com",
    description="A python package for making and using attentive deep-learning models for earthquake signal detection and phase picking.",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/smousavi05/EQTransformer",
    license="MIT",
    packages=find_packages(),
    keywords='Seismology, Earthquakes Detection, P&S Picking, Deep Learning, Attention Mechanism',
    install_requires=[
	'pytest==7.1.2',
	'numpy==1.22.4',     # approx version: numpy 1.19.x but at least 1.19.2
	'keyring==23.7.0', 
	'pkginfo==1.8.3',
	'scipy==1.10.0',
	#'tensorflow-deps==2.10.0',
	'tensorflow-estimator==2.10.0',
	'tensorflow-macos==2.10.0',
	#'tensorflow~=2.5.0', # tensorflow <2.7.0 needs numpy <1.20.0
	'keras==2.10.0', 
	#'matplotlib-base==3.8.2', 
	'pandas==1.4.3',
	'tqdm==4.64.0', 
	'h5py==3.6.0', 
	'obspy==1.3.0',
	'jupyter==1.0.0'], 

    python_requires='>=3.10.5',
)

Note that I commented out the matplotlib line and the tensorflow-deps line. Those packages had already been installed, as shown by running conda list, yet the installation kept complaining that it couldn't find them. I believe the tensorflow~=2.5.0 line was a leftover for other platforms; it causes errors on Apple-silicon chips (M2 Pro as tested here). I also changed the Python requirement to >=3.10.5 because the installed version may not be exactly 3.10.5.

Finally, run the following line:

python3 setup.py install
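
As an aside, setup.py install is deprecated in recent setuptools; running pip from the cloned repository root should be equivalent:

```
pip install .
```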

@smousavi05
Owner

smousavi05 commented Dec 1, 2023 via email

@WeiMouZhu

Following the instructions of @xtyangpsp, I installed EQT successfully on my M2 Mac. It worked well until the Detection and Picking part.

The error message indicates that the shape of the input tensor passed to the reshape function contains None values. Is there any way to solve this problem? (See also the sketch after the traceback below.)

Here is the Error Message:
```
{
"name": "ValueError",
"message": "in user code:

File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/keras/src/engine/training.py\", line 2440, in predict_function  *
    return step_function(self, iterator)
File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/keras/src/engine/training.py\", line 2425, in step_function  **
    outputs = model.distribute_strategy.run(run_step, args=(data,))
File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/keras/src/engine/training.py\", line 2413, in run_step  **
    outputs = model.predict_step(data)
File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/keras/src/engine/training.py\", line 2381, in predict_step
    return self(x, training=False)
File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py\", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
File \"/var/folders/ns/5x0tq3j51hq4t26yvx2m1z5m0000gn/T/__autograph_generated_file9nun__rg.py\", line 44, in tf__call
    ag__.if_stmt(ag__.ld(self).attention_type == ag__.ld(SeqSelfAttention).ATTENTION_TYPE_ADD, if_body_1, else_body_1, get_state_1, set_state_1, ('e',), 1)
File \"/var/folders/ns/5x0tq3j51hq4t26yvx2m1z5m0000gn/T/__autograph_generated_file9nun__rg.py\", line 22, in if_body_1
    e = ag__.converted_call(ag__.ld(self)._call_additive_emission, (ag__.ld(inputs),), None, fscope)
File \"/var/folders/ns/5x0tq3j51hq4t26yvx2m1z5m0000gn/T/__autograph_generated_filee9_lzz_i.py\", line 49, in tf___call_additive_emission
    ag__.if_stmt(ag__.ld(self).use_attention_bias, if_body_1, else_body_1, get_state_1, set_state_1, ('e',), 1)
File \"/var/folders/ns/5x0tq3j51hq4t26yvx2m1z5m0000gn/T/__autograph_generated_filee9_lzz_i.py\", line 43, in if_body_1
    e = ag__.converted_call(ag__.ld(K).reshape, (ag__.converted_call(ag__.ld(K).dot, (ag__.ld(h), ag__.ld(self).Wa), None, fscope) + ag__.ld(self).ba, (ag__.ld(batch_size), ag__.ld(input_len), ag__.ld(input_len))), None, fscope)

ValueError: Exception encountered when calling layer 'attentionD0' (type SeqSelfAttention).

in user code:

    File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/EQTransformer-0.1.61-py3.10.egg/EQTransformer/core/EqT_utils.py\", line 2506, in call  *
        e = self._call_additive_emission(inputs)
    File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/EQTransformer-0.1.61-py3.10.egg/EQTransformer/core/EqT_utils.py\", line 2555, in _call_additive_emission  *
        e = K.reshape(K.dot(h, self.Wa) + self.ba, (batch_size, input_len, input_len))
    File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/keras/src/backend.py\", line 3611, in reshape
        return tf.reshape(x, shape)

    ValueError: Tried to convert 'shape' to a tensor and failed. Error: None values not supported.


Call arguments received by layer 'attentionD0' (type SeqSelfAttention):
  • inputs=tf.Tensor(shape=(None, None, 16), dtype=float32)
  • mask=None
  • kwargs={'training': 'False'}

",
"stack": "---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[4], line 2
1 from EQTransformer.core.predictor import predictor
----> 2 predictor(input_dir='downloads_mseeds_processed_hdfs',
3 input_model='../ModelsAndSampleData/EqT_original_model.h5',
4 output_dir='detections1',
5 estimate_uncertainty=False,
6 output_probabilities=False,
7 number_of_sampling=5,
8 loss_weights=[0.02, 0.40, 0.58],
9 detection_threshold=0.3,
10 P_threshold=0.3,
11 S_threshold=0.3,
12 number_of_plots=10,
13 plot_mode='time',
14 batch_size=500,
15 number_of_cpus=4,
16 keepPS=False,
17 spLimit=60)
18 # help(predictor)

File ~/miniconda3/envs/eqt/lib/python3.10/site-packages/EQTransformer-0.1.61-py3.10.egg/EQTransformer/core/predictor.py:331, in predictor(input_dir, input_model, output_dir, output_probabilities, detection_threshold, P_threshold, S_threshold, number_of_plots, plot_mode, estimate_uncertainty, number_of_sampling, loss_weights, loss_types, input_dimention, normalization_mode, batch_size, gpuid, gpu_limit, number_of_cpus, use_multiprocessing, keepPS, allowonlyS, spLimit)
328 pbar_test.update()
330 new_list = next(list_generator)
--> 331 prob_dic=_gen_predictor(new_list, args, model)
333 pred_set={}
334 for ID in new_list:

File ~/miniconda3/envs/eqt/lib/python3.10/site-packages/EQTransformer-0.1.61-py3.10.egg/EQTransformer/core/predictor.py:587, in _gen_predictor(new_list, args, model)
585 pred_SS_std = pred_SS.std(axis=0)
586 else:
--> 587 pred_DD_mean, pred_PP_mean, pred_SS_mean = model.predict_generator(generator = prediction_generator,
588 use_multiprocessing = args['use_multiprocessing'],
589 workers = args['number_of_cpus'])
590 pred_DD_mean = pred_DD_mean.reshape(pred_DD_mean.shape[0], pred_DD_mean.shape[1])
591 pred_PP_mean = pred_PP_mean.reshape(pred_PP_mean.shape[0], pred_PP_mean.shape[1])

File ~/miniconda3/envs/eqt/lib/python3.10/site-packages/keras/src/engine/training.py:2988, in Model.predict_generator(self, generator, steps, callbacks, max_queue_size, workers, use_multiprocessing, verbose)
2976 """Generates predictions for the input samples from a data generator.
2977
2978 DEPRECATED:
2979 Model.predict now supports generators, so there is no longer any
2980 need to use this endpoint.
2981 """
2982 warnings.warn(
2983 "Model.predict_generator is deprecated and "
2984 "will be removed in a future version. "
2985 "Please use Model.predict, which supports generators.",
2986 stacklevel=2,
2987 )
-> 2988 return self.predict(
2989 generator,
2990 steps=steps,
2991 max_queue_size=max_queue_size,
2992 workers=workers,
2993 use_multiprocessing=use_multiprocessing,
2994 verbose=verbose,
2995 callbacks=callbacks,
2996 )

File ~/miniconda3/envs/eqt/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # tf.debugging.disable_traceback_filtering()
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb

File /var/folders/ns/5x0tq3j51hq4t26yvx2m1z5m0000gn/T/__autograph_generated_filex97edfvr.py:15, in outer_factory.<locals>.inner_factory.<locals>.tf__predict_function(iterator)
13 try:
14 do_return = True
---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
16 except:
17 do_return = False
16 except:
17 do_return = False

File /var/folders/ns/5x0tq3j51hq4t26yvx2m1z5m0000gn/T/__autograph_generated_file9nun__rg.py:44, in outer_factory.<locals>.inner_factory.<locals>.tf__call(self, inputs, mask, **kwargs)
42 ag__.if_stmt(ag__.ld(self).attention_type == ag__.ld(SeqSelfAttention).ATTENTION_TYPE_MUL, if_body, else_body, get_state, set_state, ('e',), 1)
43 e = ag__.Undefined('e')
---> 44 ag__.if_stmt(ag__.ld(self).attention_type == ag__.ld(SeqSelfAttention).ATTENTION_TYPE_ADD, if_body_1, else_body_1, get_state_1, set_state_1, ('e',), 1)
46 def get_state_2():
47 return (e,)

File /var/folders/ns/5x0tq3j51hq4t26yvx2m1z5m0000gn/T/__autograph_generated_file9nun__rg.py:22, in outer_factory.<locals>.inner_factory.<locals>.tf__call.<locals>.if_body_1()
20 def if_body_1():
21 nonlocal e
---> 22 e = ag__.converted_call(ag__.ld(self)._call_additive_emission, (ag__.ld(inputs),), None, fscope)

File /var/folders/ns/5x0tq3j51hq4t26yvx2m1z5m0000gn/T/__autograph_generated_filee9_lzz_i.py:49, in outer_factory.<locals>.inner_factory.<locals>.tf___call_additive_emission(self, inputs)
47 e = ag__.converted_call(ag__.ld(K).reshape, (ag__.converted_call(ag__.ld(K).dot, (ag__.ld(h), ag__.ld(self).Wa), None, fscope), (ag__.ld(batch_size), ag__.ld(input_len), ag__.ld(input_len))), None, fscope)
48 e = ag__.Undefined('e')
---> 49 ag__.if_stmt(ag__.ld(self).use_attention_bias, if_body_1, else_body_1, get_state_1, set_state_1, ('e',), 1)
50 try:
51 do_return = True

File /var/folders/ns/5x0tq3j51hq4t26yvx2m1z5m0000gn/T/__autograph_generated_filee9_lzz_i.py:43, in outer_factory.<locals>.inner_factory.<locals>.tf___call_additive_emission.<locals>.if_body_1()
41 def if_body_1():
42 nonlocal e
---> 43 e = ag__.converted_call(ag__.ld(K).reshape, (ag__.converted_call(ag__.ld(K).dot, (ag__.ld(h), ag__.ld(self).Wa), None, fscope) + ag__.ld(self).ba, (ag__.ld(batch_size), ag__.ld(input_len), ag__.ld(input_len))), None, fscope)

ValueError: in user code:

File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/keras/src/engine/training.py\", line 2440, in predict_function  *
    return step_function(self, iterator)
File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/keras/src/engine/training.py\", line 2425, in step_function  **
    outputs = model.distribute_strategy.run(run_step, args=(data,))
File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/keras/src/engine/training.py\", line 2413, in run_step  **
    outputs = model.predict_step(data)
File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/keras/src/engine/training.py\", line 2381, in predict_step
    return self(x, training=False)
File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py\", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
File \"/var/folders/ns/5x0tq3j51hq4t26yvx2m1z5m0000gn/T/__autograph_generated_file9nun__rg.py\", line 44, in tf__call
    ag__.if_stmt(ag__.ld(self).attention_type == ag__.ld(SeqSelfAttention).ATTENTION_TYPE_ADD, if_body_1, else_body_1, get_state_1, set_state_1, ('e',), 1)
File \"/var/folders/ns/5x0tq3j51hq4t26yvx2m1z5m0000gn/T/__autograph_generated_file9nun__rg.py\", line 22, in if_body_1
    e = ag__.converted_call(ag__.ld(self)._call_additive_emission, (ag__.ld(inputs),), None, fscope)
File \"/var/folders/ns/5x0tq3j51hq4t26yvx2m1z5m0000gn/T/__autograph_generated_filee9_lzz_i.py\", line 49, in tf___call_additive_emission
    ag__.if_stmt(ag__.ld(self).use_attention_bias, if_body_1, else_body_1, get_state_1, set_state_1, ('e',), 1)
File \"/var/folders/ns/5x0tq3j51hq4t26yvx2m1z5m0000gn/T/__autograph_generated_filee9_lzz_i.py\", line 43, in if_body_1
    e = ag__.converted_call(ag__.ld(K).reshape, (ag__.converted_call(ag__.ld(K).dot, (ag__.ld(h), ag__.ld(self).Wa), None, fscope) + ag__.ld(self).ba, (ag__.ld(batch_size), ag__.ld(input_len), ag__.ld(input_len))), None, fscope)

ValueError: Exception encountered when calling layer 'attentionD0' (type SeqSelfAttention).

in user code:

    File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/EQTransformer-0.1.61-py3.10.egg/EQTransformer/core/EqT_utils.py\", line 2506, in call  *
        e = self._call_additive_emission(inputs)
    File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/EQTransformer-0.1.61-py3.10.egg/EQTransformer/core/EqT_utils.py\", line 2555, in _call_additive_emission  *
        e = K.reshape(K.dot(h, self.Wa) + self.ba, (batch_size, input_len, input_len))
    File \"/Users/wilmer/miniconda3/envs/eqt/lib/python3.10/site-packages/keras/src/backend.py\", line 3611, in reshape
        return tf.reshape(x, shape)

    ValueError: Tried to convert 'shape' to a tensor and failed. Error: None values not supported.


Call arguments received by layer 'attentionD0' (type SeqSelfAttention):
  • inputs=tf.Tensor(shape=(None, None, 16), dtype=float32)
  • mask=None
  • kwargs={'training': 'False'}

"
}
```
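
For reference, the traceback points at _call_additive_emission building the reshape target from the layer's static shape, which is None for dynamic batch/length dimensions. A hedged sketch of the usual fix, reading the dimensions dynamically with K.shape instead (the attribute names Wt, Wx, bh, Wa, ba follow the upstream keras-self-attention layer that EqT_utils' SeqSelfAttention appears to be based on; this is an illustration, not an official EQTransformer patch):

```python
from tensorflow.keras import backend as K

def _call_additive_emission(self, inputs):
    # Read batch/length as tensors at run time instead of relying on the
    # static shape, which is None for dynamic dimensions.
    input_shape = K.shape(inputs)
    batch_size, input_len = input_shape[0], input_shape[1]
    q = K.expand_dims(K.dot(inputs, self.Wt), 2)  # (batch, len, 1, units)
    k = K.expand_dims(K.dot(inputs, self.Wx), 1)  # (batch, 1, len, units)
    h = K.tanh(q + k + self.bh)                   # (batch, len, len, units)
    if self.use_attention_bias:
        e = K.reshape(K.dot(h, self.Wa) + self.ba,
                      (batch_size, input_len, input_len))
    else:
        e = K.reshape(K.dot(h, self.Wa),
                      (batch_size, input_len, input_len))
    return e
```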

@smousavi05
Owner

smousavi05 commented Mar 20, 2024 via email

@xtyangpsp

xtyangpsp commented Mar 20, 2024 via email

@WeiMouZhu

I deleted the previous eqt environment, ran conda clean --all, and reinstalled EQT following the instructions of @xtyangpsp. Now it works fine.

Thanks a lot!

@andika-ba

Hi, @smousavi05

Previously, I used a MacBook with an Intel chip and everything worked fine. Now I have a new MacBook with an M2 chip and am trying to install EQT on it. I followed the suggestions from @xtyangpsp and @crotwell and successfully installed EQT on my laptop.

I tried following the tutorial to run detections; everything worked fine until the detection step (using the predictor module). The algorithm got stuck in an endless loop and failed to produce results.
Here are some logs that might indicate the errors:

Running EqTransformer 0.1.61
*** Loading the model ...
2024-09-12 07:10:18.678379: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2024-09-12 07:10:18.680021: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )
Metal device set to: Apple M2

systemMemory: 8.00 GB
maxCacheSize: 2.67 GB

WARNING:tensorflow:Layer lstm_1 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
WARNING:tensorflow:Layer lstm_1 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
WARNING:tensorflow:Layer lstm_1 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
WARNING:tensorflow:Layer lstm_2 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
WARNING:tensorflow:Layer lstm_2 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
WARNING:tensorflow:Layer lstm_2 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
WARNING:tensorflow:Layer lstm_3 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
WARNING:tensorflow:Layer lstm_4 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
*** Loading is complete!
######### There are files for 3 stations in downloads_mseeds_processed_hdfs directory. #########
========= Started working on B921, 1 out of 3 ...
0%| | 0/9 [00:00<?, ?it/s]2024-09-12 07:10:20.831536: W tensorflow/core/platform/profile_utils/cpu_utils.cc:128] Failed to get CPU frequency: 0 Hz
2024-09-12 07:10:21.780404: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:113] Plugin optimizer for device_type GPU is enabled.

GS--CA06
GS--CA10
PB--B921
ZY--SV08
####### There are 4 stations in the list. #######
[2024-09-12 07:10:31,298] - obspy.clients.fdsn.mass_downloader - INFO: Initializing FDSN client(s) for SCEDC, IRIS.

File "/opt/anaconda3/envs/eqt/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/opt/anaconda3/envs/eqt/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/opt/anaconda3/envs/eqt/lib/python3.10/site-packages/keras/utils/data_utils.py", line 759, in _run
with closing(self.executor_fn(_SHARED_SEQUENCES)) as executor:
File "/opt/anaconda3/envs/eqt/lib/python3.10/site-packages/keras/utils/data_utils.py", line 736, in pool_fn
pool = get_pool_class(True)(
File "/opt/anaconda3/envs/eqt/lib/python3.10/multiprocessing/context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
File "/opt/anaconda3/envs/eqt/lib/python3.10/multiprocessing/pool.py", line 212, in init
self._repopulate_pool()
File "/opt/anaconda3/envs/eqt/lib/python3.10/multiprocessing/pool.py", line 303, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
File "/opt/anaconda3/envs/eqt/lib/python3.10/multiprocessing/pool.py", line 326, in _repopulate_pool_static
w.start()
File "/opt/anaconda3/envs/eqt/lib/python3.10/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/opt/anaconda3/envs/eqt/lib/python3.10/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/opt/anaconda3/envs/eqt/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in init
super().init(process_obj)
File "/opt/anaconda3/envs/eqt/lib/python3.10/multiprocessing/popen_fork.py", line 19, in init
self._launch(process_obj)
File "/opt/anaconda3/envs/eqt/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/opt/anaconda3/envs/eqt/lib/python3.10/multiprocessing/spawn.py", line 154, in get_preparation_data
_check_not_importing_main()
File "/opt/anaconda3/envs/eqt/lib/python3.10/multiprocessing/spawn.py", line 134, in _check_not_importing_main
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

Any idea how to solve this?
Thanks
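
For anyone hitting the same thing: the RuntimeError above is the standard macOS "spawn" start-method failure, and the idiom the traceback itself suggests is to guard the script's entry point. A sketch (parameters abbreviated from the predictor call earlier in this thread; use_multiprocessing is part of the predictor signature shown above):

```python
from EQTransformer.core.predictor import predictor

if __name__ == '__main__':
    # The guard lets spawned worker processes re-import this module
    # without re-running the predictor call itself.
    predictor(input_dir='downloads_mseeds_processed_hdfs',
              input_model='../ModelsAndSampleData/EqT_original_model.h5',
              output_dir='detections',
              number_of_cpus=4,
              use_multiprocessing=False)  # or True once the guard is in place
```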

@smousavi05
Owner

smousavi05 commented Sep 13, 2024 via email
