The software world is seeing more and more applications move to "hardware accelerated" programming, which these days generally just means more software running on a different piece of hardware in the system, e.g. a compute shader on a GPU. I wouldn't be surprised if in the future we get something similar for the new neural processing hardware that CPUs and other SoCs are getting now, where you write some kind of ML code that is explicitly run on the NPU. On top of that, there are more pieces of hardware that can be used in more efficient ways when the toolchain has more knowledge about them, e.g. direct SSD-to-GPU-memory file transfers.
What I am proposing is something similar to Khronos's SPIR-V, but on a much larger scale and without the need to introduce an additional shader language. It would be built around the standard library and designed so that it can be expanded as new kinds of accelerators are introduced. Right now it could be composed of:
A "core" accelerator library. This would include things like memory, pipelines, etc.: things that all kinds of devices will need access to, and that provide the building blocks for hardware-accelerated code paths.
Graphics devices (GPUs), through functions that operate similarly to pixel/fragment/etc. shaders. Although, depending on how far in the future this was implemented, support for task/mesh shaders might make those a more appealing model.
Compute devices (GPUs/CPUs). Similar to graphics, only the code would be structured around a compute model, and there is no reason a CPU version that combines multiple cores and SIMD couldn't be used where no GPU is available.
Neural/ML devices. Right now I don't know if there is any directly programmable piece of ML hardware, or if it is all fixed-function and down to the compiler to produce code that uses those instructions. But the ability to write functions that are specifically geared towards ML hardware and run on NPUs where available will, I am sure, be desired if it isn't already.
File I/O. While there isn't anything inherently "accelerated" about files, if the compiler was smart enough to recognise that the user just loaded a file and then immediately transferred it to the graphics device, it could potentially optimise that into a direct memory access.
All of the code would then be compiled down to a binary of host code plus a Carbon-designed bytecode, which Carbon could potentially partner with device vendors to support. Linux could even be a great place to start, as AMD's and Intel's open-source Mesa drivers could be used to demonstrate such a feature.
This wouldn't be a "soon" feature, or even necessarily a 1.0 feature. But it would be really nice to have a single language that could actually program the whole PC, and that is designed to do so from an early stage of development so that it doesn't feel like an afterthought or second-class feature. It could also help Carbon set itself above other languages such as C++ or Rust for high-performance programming.