[FEATURE] Pipeline 2.0 with native IPC4, Module API and required windows features #7261

Open · 3 of 7 tasks
marcinszkudlinski opened this issue Mar 13, 2023 · 7 comments
Labels: enhancement · IPC4 · module interface · MTL · Pipeline · pipeline2.0

@marcinszkudlinski (Contributor) commented Mar 13, 2023

OBSOLETE - the newest version is here:
thesofproject/sof-docs#497
readable version: https://marcinszkudlinski.github.io/sof-docs/PAGES/architectures/firmware/sof-common/pipeline_2_0/pipeline2_0_discussion.html

Is your feature request related to a problem? Please describe.

We're heading fast toward firmware fully compatible with the current Windows driver.

In order to achieve this:

  • IPC4 was introduced and IPC3 was deprecated
  • the SOF component API was deprecated and the "Module API" was introduced
  • a new way of scheduling (data processing, DP) was introduced (PR still open)

To fit these requirements into the current SOF code:

  • the pipeline itself is still controlled the IPC3 way, using a wrapper/translator to stay compatible with IPC4
  • the pipeline still uses the component API only, so a heavy and slow module adapter must be used

Until now such an approach was fine, but it is getting out of hand: it consumes a lot of memory, it is slow, and it carries a high risk of regressions on legacy platforms.

Describe the solution you'd like

I think this is the right moment to introduce PIPELINE 2.0:

  • Use IPC4 natively.
  • Align/extend pipeline APIs to be IPC generic and better fit IPC4 use cases.
  • Align all processing to use the module API and deprecate the component API.
    WIP - trending v2.6
  • LL pipeline modules should use a shared buffer and not multiple buffers.
  • Eliminate cache invalidation (of both audio data and metadata) unless strictly necessary.
  • Introduce producer-consumer safe, manageable DP queues instead of simple buffers between chunks of LL and DP modules (see the sketch after this list).
  • Introduce a cache-coherent shared DP queue for modules on different cores.
  • Remove direction state from the pipeline, as source and sink are always known.

The list is not 100% complete.
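To make the DP-queue item above more concrete, here is a minimal sketch of a single-producer/single-consumer queue; all names (struct dp_queue, dp_queue_write, etc.) are hypothetical, not existing SOF code, and cross-core cache handling is reduced to a comment:

```c
#include <stdint.h>
#include <stddef.h>

struct dp_queue {
	uint8_t *data;       /* circular data area */
	size_t size;         /* total size in bytes */
	size_t write_offset; /* advanced only by the producer */
	size_t read_offset;  /* advanced only by the consumer */
};

static size_t dp_queue_available(const struct dp_queue *q)
{
	/* bytes ready for the consumer to read */
	return (q->write_offset + q->size - q->read_offset) % q->size;
}

static size_t dp_queue_free(const struct dp_queue *q)
{
	/* one byte stays unused so that full and empty are distinguishable */
	return q->size - dp_queue_available(q) - 1;
}

/* producer side: copy the payload in, then publish it by advancing
 * write_offset */
static int dp_queue_write(struct dp_queue *q, const uint8_t *src, size_t bytes)
{
	size_t i;

	if (bytes > dp_queue_free(q))
		return -1;
	for (i = 0; i < bytes; i++)
		q->data[(q->write_offset + i) % q->size] = src[i];
	/* on a cross-core queue, a cache writeback + memory barrier goes
	 * here, before the pointer update becomes visible to the consumer */
	q->write_offset = (q->write_offset + bytes) % q->size;
	return 0;
}
```

The key property is that each side writes only its own offset, so producer and consumer can live on different cores (or in LL vs. DP context) without locking, provided the offset updates are made cache coherent.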

Describe alternatives you've considered
Keep modifying the current pipeline implementation.
In my opinion:

  • the technical debt is now too high
  • there is a high risk of introducing regressions on legacy platforms

Additional context

@plbossart (Member) commented Mar 13, 2023

@marcinszkudlinski The list of suggested improvements seems appealing, and we should always try to do better, but allow me to put on my 'bad cop' hat to help clarify the directions.

a) What exactly does "it is getting out of hand: it consumes a lot of memory, it is slow" mean?
You will have to bring a lot more data to make a case for a change. Most of the memory issues we've had were because of very large 3rd-party modules; it's not clear to me what the pipeline infrastructure costs. The slowness is not clear either.

b) The wording also suggests that it's not possible to support DP scheduling without changes at the pipeline level, which is not something I've heard before.

c) "The right moment" looks to me like "the wrong time and wrong place". We are halfway through the IPC4 introduction; there's no way any engineering manager in their right mind would accept changing the plumbing and having to redo all the validation, unless there's a clear showstopper preventing us from reaching the committed goals.

d) Also, before we talk about a flag day, have you considered breaking this feature down into smaller, more achievable ones? Can we e.g. drop the component API and use the module API? What can be improved in steps to reduce the revalidation cost?

e) And last, please do not forget that there is no replacement for IPC3 on non-Intel platforms. It is not deprecated, as you state. IPC4 is actually the outlier, not the norm.

@lgirdwood (Member) commented:

@marcinszkudlinski I've edited and added some check boxes above to make this work like an epic. It can link to smaller features/PRs if needed.
Some of the items above are WIP (the module API) and some sound like a regression (the unnecessary cache invalidations); the DP queues are new, alongside the single LL pipeline buffer (which I think could be fitted into the current struct buffer APIs).

I would definitely suggest adding more context on the pipeline APIs you want to change/add/delete; this can make the topic easier to plan/align and move forward.

@marcinszkudlinski (Contributor, Author) commented Mar 15, 2023

Let me clarify:

  • Pipeline memory consumption. There are buffers between LL modules that are in fact not needed at all. The module adapter is even worse: it uses double buffering. And the double buffering in ModuleAdapter is not there for nothing; I agree with the ModuleAdapter author that there is probably no other way to "marry" the component interface with the module interface. For a pipeline with many components processing 7.1 at 96 kHz, there is no way to fit it all in memory (rough numbers after this list).
  • IPC3 deprecation. Indeed, that went a little too far. What I meant is that the new pipeline needs to support the full feature set of IPC4. In fact, IPC3 is not so different; it should be possible to keep compatibility with IPC3.
  • DP scheduling. In the current buffering model it is hard (and risky) to make it thread/cache safe. I understand, however, that we need DP soon, so a temporary solution is coming, with some limitations: DP may be used for components with the module interface (using ModuleAdapter), and in front of and behind each DP component there must be LL components.
  • "Have you considered breaking this feature down into smaller, more achievable ones?" Oh yes. That's why I started this discussion :) I am looking for ideas/constructive reviews.

@pblaszko (Contributor) commented Apr 3, 2023

Let me raise one more aspect in the context of Pipeline 2.0:

Currently the IPC4 handler artificially configures the direction of modules/pipelines/buffers. Direction is not part of the IPC4 protocol; all these operations are forced by the IPC3-like, buffer-based LL scheduler.

This causes a lot of problems in the code. Currently we can find parts where UPSTREAM means the Host->DAI direction (e.g. in pipeline scheduling) and other parts where it means the opposite (e.g. in Copier DAI configuration). This creates a lot of confusion.

The IPC4 protocol assumes that modules are created in order from the source to the sink of the pipeline. The scheduler should always operate from source to sink; no direction alternative is needed for proper scheduling.

Changing scheduler orientation from buffers to modules is a great opportunity to get rid of direction usage in IPC4.
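As a hypothetical illustration of that point (struct module and pipeline_ll_tick below are placeholder names, not existing SOF code): if modules are linked in their IPC4 creation order, one forward walk schedules the whole pipeline and no direction state is ever consulted.

```c
/* Sketch: modules kept in IPC4 creation order, source first, sink last.
 * Scheduling is then a single forward walk; no UPSTREAM/DOWNSTREAM flag
 * is needed anywhere.
 */
struct module {
	struct module *next;              /* next module toward the sink */
	int (*process)(struct module *m); /* produce/consume one period */
};

static int pipeline_ll_tick(struct module *source)
{
	struct module *m;

	for (m = source; m; m = m->next) {
		int ret = m->process(m);

		if (ret < 0)
			return ret; /* stop the walk on error */
	}
	return 0;
}
```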

@lgirdwood (Member) commented:

Changing scheduler orientation from buffers to modules is a great opportunity to get rid of direction usage in IPC4.

Added this to the item list.

@marcinszkudlinski (Contributor, Author) commented:

Data source/sink interface
Currently all data between modules is passed as memory buffers + metadata.
Each module manages the buffers itself (i.e. takes care of circular buffering).

This is not enough, because:

  • a data source/sink may be an LL module, a DMA buffer, or a coherent DP queue
  • each of those sources/sinks treats data differently (e.g. LL/LL: a very simple memory buffer; DP/LL: a coherent FIFO)

There is a need for an API like "get data", "data consumed ACK", etc.
Each module must use the API, as the procedures for each source/sink type are different;
no module is allowed to modify any metadata (such as read/write pointers or the amount of data in the buffer) by itself. A sketch of what this could look like is below.
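A minimal sketch of what such an API could look like, with a flat memory region standing in for the real backing store; every name here (struct data_source, source_get_data, source_release_data, module_copy_period) is hypothetical and not a proposal of final signatures:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical source handle. Modules would treat it as opaque; the same
 * two calls could be backed by an LL buffer, a DMA buffer or a cross-core
 * coherent DP queue, each with its own pointer and cache handling.
 */
struct data_source {
	const unsigned char *buf;
	size_t size;
	size_t read_pos; /* owned by the infrastructure, never by a module */
};

/* "get data": return a readable region and how much is really available */
static int source_get_data(struct data_source *src, size_t req,
			   const void **data, size_t *avail)
{
	*data = src->buf + src->read_pos;
	*avail = src->size - src->read_pos;
	if (*avail > req)
		*avail = req;
	return 0;
}

/* "data consumed ACK": the only place the read pointer may advance */
static int source_release_data(struct data_source *src, size_t consumed)
{
	if (consumed > src->size - src->read_pos)
		return -1;
	src->read_pos += consumed;
	return 0;
}

/* module-side usage: consume exactly one processing period */
static int module_copy_period(struct data_source *src, void *out, size_t period)
{
	const void *data;
	size_t avail;

	if (source_get_data(src, period, &data, &avail) < 0 || avail < period)
		return -1; /* not enough data this tick */
	memcpy(out, data, period);
	return source_release_data(src, period);
}
```

The point of this shape is that the module never touches read/write pointers or cache operations; swapping the backing store changes only the implementation behind the two calls.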

@plbossart (Member) commented:

@marcinszkudlinski changes to the module API are already massively invasive; can we try to take smaller steps and avoid combining changes to the module, pipeline, and scheduler in the same feature request? Things need to be phased and aligned with both validation resources and program timelines. Thank you.
