Let's talk about Pixel
#1523
Going back a step and merging some of the unnecessarily split traits, this could be an alternative design:

/// A pixel containing a sample for each channel.
/// Grants access to the sample data, and does nothing more. No conversions.
/// Basically nothing but a trait for wrapped primitive arrays, which is aware of alpha channels.
pub trait Pixel: Sized + Copy + Clone + Default {
    /// The type of each channel in this pixel.
    type Sample: Primitive;

    const CHANNEL_COUNT: u8;
    const HAS_ALPHA: bool; // assumes no more than one alpha channel per pixel

    /// A slice of all the samples in this pixel, including alpha.
    fn as_slice(&self) -> &[Self::Sample];

    /// A mutable slice of all the samples in this pixel, including alpha.
    fn as_slice_mut(&mut self) -> &mut [Self::Sample];

    // ### default implementations. these could also be put in a separate PixelExt trait ###
    fn bytes_per_channel() -> usize { ... }
    fn bytes_per_pixel() -> usize { ... }
    fn alpha_channel_count() -> u8 { ... }
    fn color_channel_count() -> u8 { ... }
    fn from_iter(iter: impl IntoIterator<Item = Self::Sample>) -> Self { ... }
    fn apply(&mut self, mapper: impl FnMut(Self::Sample) -> Self::Sample) { ... }
    fn apply2(&mut self, other: &Self, mapper: impl FnMut(Self::Sample, Self::Sample) -> Self::Sample) { ... }
    fn map(&self, mapper: impl FnMut(Self::Sample) -> Self::Sample) -> Self { ... }
    fn map2(&self, other: &Self, mapper: impl FnMut(Self::Sample, Self::Sample) -> Self::Sample) -> Self { ... }
    fn as_color_slice_and_alpha(&self) -> (&[Self::Sample], Option<&Self::Sample>) { ... }
    fn as_color_slice_and_alpha_mut(&mut self) -> (&mut [Self::Sample], Option<&mut Self::Sample>) { ... }
    fn as_color_slice(&self) -> &[Self::Sample] { ... }
    fn as_color_slice_mut(&mut self) -> &mut [Self::Sample] { ... }
    fn apply_without_alpha(&mut self, mapper: impl FnMut(Self::Sample) -> Self::Sample) { ... }
    fn apply2_without_alpha(&mut self, other: &Self, mapper: impl FnMut(Self::Sample, Self::Sample) -> Self::Sample) { ... }
    fn map_without_alpha(&self, mapper: impl FnMut(Self::Sample) -> Self::Sample) -> Self { ... }
    fn map2_without_alpha(&self, other: &Self, mapper: impl FnMut(Self::Sample, Self::Sample) -> Self::Sample) -> Self { ... }
}
/// Add an alpha channel, for example go from RGB to RGBA, or L to LA. No color conversion is happening here.
/// Implemented by all colors.
// This is a separate trait because it is not simply data access and has an associated type
pub trait WithAlpha: Pixel {
    type WithAlpha: Pixel<Sample = Self::Sample>;
    fn with_alpha(self, alpha: Self::Sample) -> Self::WithAlpha;
}
/// Take away an alpha channel, for example go from RGBA to RGB, or LA to L. No color conversion is happening here.
/// Implemented by all colors.
// This is a separate trait because it is not simply data access and has an associated type
pub trait WithoutAlpha: Pixel {
    type WithoutAlpha: Pixel<Sample = Self::Sample>;
    fn without_alpha(self) -> Self::WithoutAlpha;
}
/// Used to detect the image type when encoding an image.
pub trait InherentColorType {
    const COLOR_TYPE: ColorType;
}
/// Implemented for all colors individually.
/// Allows converting the sample to a different sample type.
// This is a separate trait because it has an associated type
pub trait MapSamples<NewSampleType>: Pixel {
    type NewPixel: Pixel<Sample = NewSampleType>;
    fn map_samples(self, mapper: impl FnMut(Self::Sample) -> NewSampleType) -> Self::NewPixel;
}
// ### example color implementation ###
#[derive(Clone, Copy, Default)]
struct Rgb<T>([T; 3]);

impl<T> Pixel for Rgb<T> where T: Primitive {
    type Sample = T;
    const HAS_ALPHA: bool = false;
    const CHANNEL_COUNT: u8 = 3;
    fn as_slice(&self) -> &[Self::Sample] { &self.0 }
    fn as_slice_mut(&mut self) -> &mut [Self::Sample] { &mut self.0 }
}
// non-generic implementations for the color type, as not all sample types may be supported for a color
impl InherentColorType for Rgb<u8> { const COLOR_TYPE: ColorType = ColorType::Rgb8; }
impl InherentColorType for Rgb<u16> { const COLOR_TYPE: ColorType = ColorType::Rgb16; }
impl InherentColorType for Rgb<f32> { const COLOR_TYPE: ColorType = ColorType::Rgb32F; }
impl<T> WithAlpha for Rgb<T> where T: Primitive {
    type WithAlpha = Rgba<T>;
    fn with_alpha(self, a: Self::Sample) -> Self::WithAlpha {
        let [r, g, b] = self.0;
        Rgba([r, g, b, a])
    }
}
impl<T> WithoutAlpha for Rgb<T> where T: Primitive {
    type WithoutAlpha = Self;
    fn without_alpha(self) -> Self::WithoutAlpha { self }
}
impl<T, D> MapSamples<D> for Rgb<T> where T: Primitive, D: Primitive {
    type NewPixel = Rgb<D>;
    fn map_samples(self, mapper: impl FnMut(Self::Sample) -> D) -> Self::NewPixel {
        Rgb::from_iter(self.as_slice().iter().cloned().map(mapper))
    }
}
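For a sense of how these pieces would compose, here is a hypothetical usage snippet; it assumes an `Rgba<T>` type and `Primitive` impls for `u8` and `f32` exist alongside the `Rgb<T>` example above:

```rust
// Hypothetical usage of the sketched traits; assumes Rgba<T> and Primitive impls
// for u8 and f32 exist in addition to the Rgb<T> example above.
let pixel = Rgb([10u8, 20, 30]);

// change the sample type without changing the color model
let normalized: Rgb<f32> = pixel.map_samples(|s| s as f32 / 255.0);

// add an alpha channel, still no color conversion involved
let with_alpha: Rgba<f32> = normalized.with_alpha(1.0);

assert_eq!(with_alpha.as_slice().len(), 4);
assert_eq!(with_alpha.as_slice()[3], 1.0);
```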
I wonder if it might make sense not to use any enumerations for color formats, and instead just have RGB8 et al. be type aliases. Const generics could also allow for >4 channel images. Perhaps the need to avoid changing the API is unfounded and it's time for a 1.0.0?
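A rough sketch of what that type-alias direction could look like; nothing here is a concrete proposal from this thread, all names are hypothetical:

```rust
// Rough sketch of the type-alias idea with const generics; all names are hypothetical.
#[derive(Clone, Copy)]
struct GenericPixel<T, const CHANNELS: usize>([T; CHANNELS]);

type Rgb8 = GenericPixel<u8, 3>;
type Rgba16 = GenericPixel<u16, 4>;
// const generics would also permit layouts with more than four channels:
type Spectral8 = GenericPixel<f32, 8>;
```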
@drewcassidy I'm not sure I follow. The reason we use enums for color formats is that users can load files at runtime that can be in any format. The …
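For context, this is roughly the situation the enum covers: the concrete layout of a decoded file is only known at runtime. A small sketch against the current `image` API (variant names may differ slightly between versions):

```rust
// Sketch: the pixel layout of a decoded file is only known at runtime, which is what
// the ColorType/DynamicImage enums capture.
use image::DynamicImage;

fn describe(img: &DynamicImage) {
    match img {
        DynamicImage::ImageLuma8(_) => println!("grayscale, 8 bits per channel"),
        DynamicImage::ImageRgb8(_) => println!("RGB, 8 bits per channel"),
        DynamicImage::ImageRgba16(_) => println!("RGBA, 16 bits per channel"),
        _ => println!("some other layout"),
    }
}
```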
Further to the excellent suggestions so far, we could separate the pixel trait and types into a separate crate so other crates in the ecosystem can also make use of them. I'd like to work on writing up this V2 pixel trait in a new crate and then attempt to refactor the …

At the moment I've started work on my pixeli repo, but I'd love to transfer that repo to the …
This is certainly a messy part of the API. If you especially want to work on this, I'd encourage you to read through the existing issues on …

TBH I'm kind of burned out trying to fix this part of the API. It is a huge undertaking that doesn't have any clear best solution. Plus, even small changes like removing the BGR component order can turn out to be quite disruptive for users. (At the time we didn't even know anyone was using that!) Not to mention that there's a big risk of things stalling out before any changes have landed.
I have read through a lot of the issues discussed, and while I can't guarantee that my current approach (based on the suggestions given above) will solve all of them, I think it is a definite improvement. And moving all the pixel-related code into a separate crate will also make this crate a lot cleaner. I'll try to send a PR switching this crate to use …
The difficulties mentioned by fintelia are real. I think a good first step might be to come up with a strategy to tackle them. Wasn't there a plan to move the image processing from this library into a separate library, and make this library I/O only? Is this still a goal?
At one point there was thought of splitting out …

More recently, there's been some talk about moving some of the more advanced image processing methods to …
It's being tracked in #2238; progress is slow but hopefully steady.
From #2255 (comment), @ripytide:
(1.) doesn't work if pixels are to be in-place in a buffer. (2.) implies that the current trait methods for …

What do we have the …

However, we don't need to break too much either. If we constrain the roles of existing generic buffer types to the 'simplified user interface', we shouldn't have too many problems with giving well-defined constructors for the powerful buffers that will need care and runtime representations. I'd like to propose the following stance, since I think it can be done incrementally:
Beyond …
I'd agree with all you've said. The pixel trait is used mostly in …

I'm thinking such a large change as we are discussing might need quite a bit of planning on how best to split it up incrementally, as you suggested. Maybe a tracking issue or project board would be the best approach to organize the ordering of all required changes, with an issue created for each individual minimal change, like renaming a type.
I've seen the talk on problems in gfx-rs, but I don't think it applies to just pixel types. It seems it tried to do more complex types, with the harder constraint of not being in charge of its own API due to zero-cost wrapping of existing non-Rust ones.

I've successfully written large image processing pipelines using strongly typed pixels. I don't think they are "for beginners". They ensure correct handling of bitmaps, and help with monomorphising code for various layouts (and there's always casting to bytes for code in the style of …
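To illustrate the monomorphisation point, a small sketch using the draft `Pixel` trait from the first comment in this thread; the `Add` bound is an extra assumption made here for the example, and overflow handling is ignored:

```rust
// Sketch: one generic routine, monomorphised per concrete pixel layout.
// Uses the draft Pixel trait proposed earlier in this thread.
fn brighten<P: Pixel>(pixels: &mut [P], amount: P::Sample)
where
    P::Sample: core::ops::Add<Output = P::Sample>,
{
    for px in pixels {
        // apply() comes from the draft trait's default methods
        px.apply(|s| s + amount);
    }
}
```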
I think types like …
But the alternative of bytes plus some ColorFormat enum has the same problems with enums and traits, minus type safety. So I think pixel types are fine.

A good …

I'm missing a better …

I usually need a separate notion of an image file with all the metadata, color profiles, anim frames, and custom extras like file modification timestamps, original file format, and version of the codec. Then a thinner anim frame that knows its color profile but isn't burdened by non-pixel data, and then 2D slices of pixels at the lowest level for actual byte pushing.
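A loose sketch of that three-level layering, purely to make the shape concrete; every type name here is hypothetical:

```rust
// Loose sketch of the layering described above; all names are hypothetical.
struct ImageFile {
    frames: Vec<Frame>,
    // plus metadata, color profiles, and custom extras such as file modification
    // timestamps, original file format, codec version ...
}

struct Frame {
    // knows its own color profile, but carries no file-level metadata
    pixels: Pixels2D,
}

struct Pixels2D {
    // lowest level: plain samples for actual byte pushing
    width: usize,
    height: usize,
    samples: Vec<u8>,
}
```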
Thanks for your insights @kornelski :) This again seems to double down on the theory that a large part of the pixel complexity comes from processing. In my personal experience, the value of …

The decoding/encoding library could work with a rather dynamic system (an enum for pixel formats) and the imageproc library could provide a statically typed layer on top of the dynamically decoded data. This would also allow users to load obscure data (such as arbitrary exr channels like depth and motion vectors) without the need for the decoder library to provide static types for these rare use cases.

I really think the most important step to keeping the library maintainable in the long term would be to pivot and define one single responsibility: load various image files using a unified pixel API. Currently, there seem to be two responsibilities that exponentiate their complexity when combined (the first is de/encoding and the second is pixel processing). I think it will be practically impossible to come to any conclusion in this discussion without separating these two responsibilities.

To go even further, I propose we shift the discussion to this topic of separating these two concerns. The idea would be the following (up for discussion):
While, in theory, these two container types could also remain in this one crate, separating them into two crates will help us maintain a clean boundary and not be tempted to blur the separation again.
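A very rough sketch of that split, only to make the proposal concrete; every name here is hypothetical:

```rust
// Very rough sketch of the proposed split; all names are hypothetical.
// The de/encoding crate hands out runtime-typed pixel data ...
enum PixelFormat {
    Rgb8,
    Rgba8,
    Rgb32F,
    // ... plus whatever obscure layouts a decoder produces
}

struct DecodedImage {
    format: PixelFormat,
    width: u32,
    height: u32,
    bytes: Vec<u8>,
}

// ... while the processing crate layers statically typed pixels on top of it.
struct TypedImage<P> {
    width: u32,
    height: u32,
    pixels: Vec<P>,
}
```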
Mingling of imageproc with this crate is, I think, a separate problem. Even when all of the image processing functionality is removed, this crate should still care about pixel types. If by runtime-typed you mean returning …
Yes, we still want some kind of Pixel type, but I still think removing processing would reduce a lot of complexity for the Pixel trait.

Sidenote about u8 buffer alignment:
If I am not mistaken, …
If you do this with …
So then, it's only a problem if you don't explicitly handle deallocation of the buffer, which would definitely be possible. If I made another mistake in my thinking, I suggest we continue this elsewhere to avoid cluttering up the discussion :)
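For context on the alignment point, one common pattern is to reinterpret a typed sample slice as bytes without giving up ownership of the allocation. This is only a sketch using the bytemuck crate as a dependency, not necessarily how image handles it:

```rust
// Viewing an f32 sample buffer as bytes. Casting &[f32] -> &[u8] is always valid
// because u8 has alignment 1; the reverse direction needs an alignment check, and
// converting an owned Vec<u8> back into a Vec<f32> is exactly where the
// deallocation/alignment concern from the sidenote shows up.
fn samples_as_bytes(samples: &[f32]) -> &[u8] {
    bytemuck::cast_slice(samples)
}
```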
Hey! I've been trying to implement f32 support for a while now, but progress always stops when I try to modify `Pixel` in a backwards-compatible and sane way. While we could probably think of some way to add f32 support in a non-breaking way, I see no possibility that doesn't drive us further and further into technical debt. I think we can't continue adding yet another hack to the pixel trait forever. Adding new features will become impossible eventually; it's already hard. This is because the original Pixel trait design is not flexible enough.

I have been thinking about the Pixel type for a whole lot of time now. I've seen #1099, #956, #1464, #1212, #1013, #1499, #1516. That's why I want to call for action right now.

With the upcoming 0.24.0 release planned, which allows us to do breaking changes, we finally have the chance to actually improve the design of the trait in a sustainable manner. I propose we redesign the pixel trait right now! The goal of the redesign would be to maximise the flexibility such that it will last for a long time without major modifications.

Limitations of the current design
As far as I understand, these are currently the biggest problems with the pixel trait:

It is not generic enough:
(F32 Variant of Dynamic Image #1511, Add alpha count associated constant to Pixel trait #956, Pixel::map methods should map to arbitrary subpixel type #1464, Why is the FromColor trait not public? #1212).
The purpose of a trait is to enable generic code. Implementing the pixel trait is problematic for multiple reasons. `to_rgb` is not generic. Rust has traits for conversions, the `From` and `Into` traits. These should be used, if at all. Color conversion is more complex than that though, so it should probably be separated.

It has too many responsibilities:
(F32 Variant of Dynamic Image #1511, Simplify Pixel trait #1099, implement std::convert::From<T: Pixel> for Pixel #1013, Why is the FromColor trait not public? #1212)
Adding multiple responsibilities to a trait is problematic, as the flexibility is reduced further and further. In the `std` library, we can observe multiple traits that are split up atomically. Many times, traits contain only a single method definition.

I identified the following responsibilities in the Pixel trait:
- `as_slice`, `from_slice` and friends exist to access the samples in the pixel. Granting access to the sample data should be a separate responsibility, just like image operations and codecs should be separated.
- `COLOR_TYPE: ColorType` is passed to the image encoders.
- `invert`, `blend`, but no other color operations.

All of these could theoretically be separate traits, especially `Blend`, `Invert`, and `to_xyz` color conversions.

Furthermore, there are other minor problems:
- The `channels4` method of the pixel trait.
- `hue_rotate_in_place`. In reality, the hue algorithm will only make sense for Rgb. These algorithms should probably not use the pixel trait in the first place.
- `horizontal_sample` should probably use the generic slice methods, and work with every possible number of channels. Even more idiomatic would be to actually treat the pixel as a whole, by not adding up each sample of a pixel individually. Instead, the algorithm should use hypothetical operations such as `blend` or `sum` or `average` on the pixel.

How to start?
Of course, a redesign is a huge task. And even settling on a design will be challenging. Splitting up the pixel into atomic traits would probably help reduce bike shedding, as color conversion and similar tasks could be added independently. To speed up the design process, it could also help to list the new requirements of the pixel trait(s).
Maybe we can start with the following list of requirements:

- `u8`, `u16`, `f32`, or even `f16` (if the user wants to implement that)
- `add` and `blend` for those?
- `pixel.map(|f32| (f32 * 255.0) as u8)`?

I propose we remove the color conversion from the pixel trait into a separate module. We could keep the old conversion algorithms until real color spaces are introduced.
Example Design
This is by far not a full design. It should rather be understood as an example. It shows an extreme variant of the trait separation. I just typed this off the top of my head, so it probably doesn't factor in every requirement yet.

The design focuses on separating concerns and providing default implementations for most methods. The code does not make any assumptions about the sample type, except for it being `Primitive`. The pixel is never assumed to be a specific color space. The only real assumption made is that there exists an underlying array of samples. Color space models can be introduced to the crate by adding more traits, probably even without changing the existing traits.
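As a sketch of that last point, a color-space notion could be layered on later through an additional trait without touching the core design; the trait below is purely hypothetical:

```rust
// Purely hypothetical: one way a color-space concept could be added on top of the
// draft Pixel trait later, without changing the existing traits.
pub trait SrgbPixel: Pixel {
    /// Convert the stored sRGB-encoded samples to linear light.
    fn to_linear(&self) -> Self;
    /// Convert linear-light samples back to sRGB encoding.
    fn to_srgb(&self) -> Self;
}
```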