Replies: 6 comments
-
The title of this issue does not make it clear to me what problem is being addressed. Can you please clarify the problem?
-
@marcpaterno the idea is to isolate the basic functionality for cluster abundance computation that is firecrown-independent. Then we can have that as an isolated module (or eventually a library) that firecrown can import using a wrapper that makes the connection with firecrown-specific objects (like updatables and parameters).
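A minimal sketch of this separation, assuming hypothetical names (`ClusterAbundance` and `FirecrownClusterWrapper` are illustrative, not actual firecrown API): the computation lives in a class with no firecrown imports, and a thin wrapper forwards firecrown-managed parameter values into it.

```python
# Hypothetical sketch of the proposed split; class names are illustrative,
# not actual firecrown code.


class ClusterAbundance:
    """Pure computation: no firecrown imports, just plain parameters."""

    def __init__(self, mass_min: float, mass_max: float) -> None:
        self.mass_min = mass_min
        self.mass_max = mass_max

    def number_density(self, bias: float) -> float:
        # Placeholder for the real mass-function integral.
        return bias * (self.mass_max - self.mass_min)


class FirecrownClusterWrapper:
    """Firecrown-facing layer: in firecrown proper this would subclass
    Updatable and register `bias` as a sampler parameter, forwarding its
    current value to the standalone module."""

    def __init__(self, abundance: ClusterAbundance) -> None:
        self.abundance = abundance
        self.bias = 1.0  # stands in for a firecrown-managed parameter

    def compute(self) -> float:
        return self.abundance.number_density(self.bias)


wrapper = FirecrownClusterWrapper(ClusterAbundance(1e13, 1e15))
result = wrapper.compute()
```

The point of the design is that `ClusterAbundance` could be moved to its own library unchanged, while only the wrapper depends on firecrown.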
-
That helps, but it still seems to me that this is a proposed solution rather than a specification of a problem to be solved. What is the reason for (eventually) moving this code outside of the firecrown repository? The answer to that question will help determine the right way to go about doing it.
-
To summarize some of our discussion from last week: the cluster abundance calculations are being done in multiple different packages. Because the abundance models are not purely theoretical predictions but also include phenomenological models, such as selection functions for our various cluster finders, @m-aguena, we discussed taking the following approach, but are still thinking through/prototyping it:
-
Some notes from our conversation today about design:
-
#345 implements the above. During code review on this change set, the possibility of different data types for the arguments of the integrand was brought up. Specifically, in our current implementation the argument types of the integrand are fixed. It would be nice if we didn't need to tie down the set of arguments required for the cluster abundance integrand; however, the current implementation does not support this. The workflow is:

```mermaid
flowchart TD;
    A[Integrator.integrate]--List of ints-->B[Integrator._integral_wrapper];
    B--mass, z, mass_proxy, z_proxy, etc.-->C[AbundanceIntegrand];
    C--mass, z, mass_proxy, z_proxy, etc.-->D[Kernel];
```
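The workflow above can be sketched as follows (a hedged illustration with hypothetical names, not the actual firecrown classes): every kernel must implement one fixed signature, so a kernel that needs an extra argument forces a change to the wrapper and to every other kernel.

```python
import numpy as np

# Hypothetical sketch of the current fixed-signature design; Kernel,
# TrueMass, and abundance_integrand are illustrative names only.


class Kernel:
    # Every kernel must accept exactly this argument list, even if it
    # only uses one or two of the arguments.
    def distribution(self, mass, z, mass_proxy, z_proxy):
        raise NotImplementedError


class TrueMass(Kernel):
    def distribution(self, mass, z, mass_proxy, z_proxy):
        # Uses only the shape of `mass`; the other arguments are ignored
        # but must still appear in the signature.
        return np.ones_like(mass)


def abundance_integrand(mass, z, mass_proxy, z_proxy, kernels):
    # The integrand multiplies all kernel distributions, passing the same
    # fixed argument list to each one.
    result = np.ones_like(mass)
    for kernel in kernels:
        result = result * kernel.distribution(mass, z, mass_proxy, z_proxy)
    return result


values = abundance_integrand(
    np.array([13.0, 14.0]), 0.5, np.array([13.2, 14.1]), 0.5, [TrueMass()]
)
```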
Which means all Kernels accept the same argument types, as the workflow above shows. Perhaps a simpler and more flexible approach would be to defer the responsibility of writing the mapping between the numerical integrator and the integrand arguments to the user:

```mermaid
flowchart TD;
    A[build_likelihood]--map, integrand-->B[BinnedClusterNumberCounts];
    B--map, integrand-->C[BinnedClusterNumberCounts.compute_theory_vector];
```
I believe this approach would support any type of distribution in the integrand. Any thoughts? @vitenti @marcpaterno
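A minimal sketch of the proposed alternative, under stated assumptions (all names except `build_likelihood` are hypothetical): the user composes the integrand and its argument mapping in their own code, typically via a closure, so the numerical integrator only ever sees a function of the integration variables and any extra distribution parameters stay out of its signature.

```python
import numpy as np

# Hypothetical sketch: the user writes the mapping between integrator
# variables and integrand arguments themselves (e.g. in their
# build_likelihood code), so the integrator never needs to know the full
# argument list.


def make_integrand(mean_mass, sigma):
    """User-written factory: captures whatever extra parameters the
    distribution needs and exposes only the integration variables
    (mass, z) to the integrator."""

    def integrand(mass, z):
        # A Gaussian in mass times an exponential in z, standing in for
        # an arbitrary user-chosen distribution.
        mass_term = np.exp(-0.5 * ((mass - mean_mass) / sigma) ** 2)
        return mass_term * np.exp(-z)

    return integrand


integrand = make_integrand(mean_mass=14.0, sigma=0.5)

# A generic integrator only sees integrand(mass, z); here a simple
# Riemann sum on a grid stands in for the real numerical integrator.
mass_grid = np.linspace(13.0, 15.0, 201)
z_grid = np.linspace(0.0, 1.0, 201)
values = integrand(mass_grid[:, None], z_grid[None, :])
total = values.sum() * (mass_grid[1] - mass_grid[0]) * (z_grid[1] - z_grid[0])
```

Because the closure owns the argument mapping, swapping in a distribution with different parameters means changing only `make_integrand`, not the integrator or any shared kernel signature.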