Q&A from ElixirConf EU Virtual #150
Labels: enhancement (New feature or request)
I watched WWDC 2020 (indirectly), and I noticed I was wrong. That is, I should plan to support Metal, at least.
Q. Are there goals that have been set for a 1.0 release?
A. As I stated in my presentation, one of the near-future goals is supporting CUDA. Another goal is to accelerate image and 3D processing, and machine learning.
Q. What are the practical uses of Pelemay? What kinds of problems should Pelemay solve?
A. As I stated in my presentation, some of the practical uses I'd like to develop are accelerating image and 3D processing, and machine learning.
Q. How well is Pelemay currently able to optimize multiple consecutive `Enum.map`/`filter`/`reduce` calls? And what about lazy computation with `Stream`?
A. The current Pelemay (0.0.13) can optimize `Enum.map` with integer and float calculations, and with `String.replace`. I'd like to support `Enum.filter`/`reduce` and `Stream` as soon as possible.
Q. Do you think Pelemay is suited to building a performant maths or ML (machine learning) library?
A. Unfortunately, I think the current Pelemay is too limited to support maths and ML. But I'll extend it enough to support them as soon as possible.
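As a rough illustration of what the answers above describe, here is a minimal sketch of how current Pelemay is used to accelerate an `Enum.map` over numbers. The module and function names are hypothetical, and the `defpelemay` macro usage follows the style of Pelemay's README; treat this as an assumed example rather than a definitive API reference.

```elixir
defmodule Sample do
  # Hypothetical example module. Pelemay is pulled in as a Hex dependency;
  # `defpelemay` marks a block whose Enum.map calls Pelemay compiles to
  # native code (integer/float arithmetic is what 0.0.13 supports).
  require Pelemay
  import Pelemay

  defpelemay do
    # This Enum.map over a numeric list is the kind of call Pelemay
    # can replace with a generated native implementation.
    def twice(list) do
      Enum.map(list, &(&1 * 2))
    end
  end
end
```

Calling `Sample.twice([1, 2, 3])` would return `[2, 4, 6]`, with the mapping executed by the generated native code rather than by the BEAM interpreter.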
Q. The Pelemay speedup ratio is amazing. Can Pelemay become even faster? I'd like to know how.
A. Yes, of course! I think it can be improved much further!
Q. Will Pelemay support other GPU vendors than Nvidia (CUDA)?
A. I'd like to focus on NVIDIA first, because CUDA is much faster than OpenCL and is the de facto standard for GPGPU.