Audio API changes #797

Open · wants to merge 42 commits into base: main

Commits (42):
4f524b8
backup
yilinwei Jul 22, 2023
8b1d132
backup
yilinwei Jul 22, 2023
b44eaa8
backup
yilinwei Jul 22, 2023
2aaf2ed
Switch back to using traits for now.
yilinwei Sep 18, 2023
b91df37
typo.
yilinwei Sep 18, 2023
a4edff5
Switch encoding for mima.
yilinwei Sep 24, 2023
9887ce0
Check-in API report
yilinwei Sep 24, 2023
081534d
BlobEvent and MediaRecorder.
zainab-ali Oct 8, 2023
fca6713
Make sure `BlobEvent` is class.
yilinwei Oct 8, 2023
4dda4bf
`data` is required.
yilinwei Oct 8, 2023
a4cfb9a
Add `AudioWorkletNode` and associated options.
yilinwei Nov 15, 2023
0099ad3
Add `Worklet` and `AudioWorklet`.
yilinwei Nov 15, 2023
e8b3650
Fix signature
yilinwei Nov 15, 2023
1178935
Add `AudioParamDescriptor`.
yilinwei Nov 15, 2023
fdb9aad
Add `defaultValue` for `AudioParamDescriptor`.
yilinwei Nov 15, 2023
c067de2
Make sure to extend `js.Object`.
yilinwei Nov 15, 2023
ba8f619
Add `AudioWorkletGlobalScope`.
yilinwei Nov 15, 2023
3e32f25
`AudioWorkletNode` should not be abstract.
yilinwei Nov 16, 2023
42275a7
Make `ReadOnlyMapLike` extend `js.Iterable`.
yilinwei Nov 16, 2023
0e90800
`self` does not yet exist within the `Worklet` contexts.
yilinwei Nov 16, 2023
f860eaa
Correct `ReadOnlyMapLike` signature `forEach`.
yilinwei Nov 16, 2023
b548118
Add docs.
zainab-ali Dec 2, 2023
2d1f240
Add docs.
zainab-ali Dec 2, 2023
f7adab3
Doc improvements.
zainab-ali Dec 18, 2023
56d513b
Add js.native annotation to AudioParamAutomationRate.
zainab-ali Dec 18, 2023
6781565
More docs.
zainab-ali Dec 18, 2023
7d6eb4e
Add js.native annotation to AudioTimestamp.
zainab-ali Dec 18, 2023
d159170
Correct type of params for AudioWorkletProcessor.
zainab-ali Dec 18, 2023
3bac38d
WorkletOptions should extend js.Object.
zainab-ali Dec 18, 2023
e32a80c
Add MediaRecorder and options.
zainab-ali Dec 18, 2023
c221e2b
Correct scaladoc.
zainab-ali Dec 18, 2023
824092d
Api reports.
zainab-ali Dec 18, 2023
e637830
AudioWorkletGlobalScope should be an abstract class.
zainab-ali Dec 29, 2023
314c67b
AudioScheduledSourceNode should be an abstract class.
zainab-ali Dec 29, 2023
9923b6b
MediaElementAudioSourceNode mediaElement should be a def.
zainab-ali Dec 29, 2023
98af177
Regenerate api reports.
zainab-ali Dec 29, 2023
18a6f7d
Add docs for ReadOnlyMapLike.
zainab-ali Dec 29, 2023
df8e9cf
Reformat doc comments.
zainab-ali Jan 28, 2024
523266a
Remove redundant comment.
zainab-ali Jan 28, 2024
07dcf43
Remove channelCount, channelCountMode and channelInterpretation.
zainab-ali Jan 28, 2024
b3a694e
Refactor enums for Scala 3.
zainab-ali Jan 28, 2024
e305129
Regenerate API reports.
zainab-ali Jan 28, 2024
17 changes: 6 additions & 11 deletions dom/src/main/scala/org/scalajs/dom/AudioBufferSourceNode.scala
@@ -6,6 +6,7 @@
package org.scalajs.dom

import scala.scalajs.js
import scala.scalajs.js.annotation._

/** AudioBufferSourceNode has no input and exactly one output. The number of channels in the output corresponds to the
* number of channels of the AudioBuffer that is set to the AudioBufferSourceNode.buffer property. If there is no
@@ -23,8 +24,10 @@ import scala.scalajs.js
* - Number of outputs: 1
* - Channel count: defined by the associated AudioBuffer
*/
@JSGlobal
@js.native
trait AudioBufferSourceNode extends AudioNode {
class AudioBufferSourceNode(context: BaseAudioContext, options: AudioBufferSourceNodeOptions = js.native)
extends AudioScheduledSourceNode {

/** Is an AudioBuffer that defines the audio asset to be played, or when set to the value null, defines a single
* channel of silence.
@@ -63,16 +66,8 @@ trait AudioBufferSourceNode extends AudioNode {
* The duration parameter, which defaults to the length of the asset minus the value of offset, defines the length
* of the portion of the asset to be played.
*/
def start(when: Double = js.native, offset: Double = js.native, duration: Double = js.native): Unit = js.native
def start(when: Double, offset: Double, duration: Double): Unit = js.native

/** Schedules the end of the playback of an audio asset.
*
* @param when
* The when parameter defines when the playback will stop. If it represents a time in the past, the playback will
* end immediately. If this method is called twice or more, an exception is raised.
*/
def stop(when: Double = js.native): Unit = js.native
def start(when: Double, offset: Double): Unit = js.native

/** Is an EventHandler containing the callback associated with the ended event. */
var onended: js.Function1[Event, _] = js.native
}
@@ -0,0 +1,20 @@
/** Documentation is thanks to Mozilla Contributors at https://developer.mozilla.org/en-US/docs/Web/API and available
* under the Creative Commons Attribution-ShareAlike v2.5 or later. http://creativecommons.org/licenses/by-sa/2.5/
*
* Everything else is under the MIT License http://opensource.org/licenses/MIT
*/
package org.scalajs.dom

import scala.scalajs.js

trait AudioBufferSourceNodeOptions extends js.Object {
var buffer: js.UndefOr[AudioBuffer] = js.undefined
var loop: js.UndefOr[Boolean] = js.undefined
var loopStart: js.UndefOr[Double] = js.undefined
var loopEnd: js.UndefOr[Double] = js.undefined
var detune: js.UndefOr[Double] = js.undefined
var playbackRate: js.UndefOr[Double] = js.undefined
var channelCount: js.UndefOr[Int] = js.undefined
var channelCountMode: js.UndefOr[AudioNodeChannelCountMode] = js.undefined
var channelInterpretation: js.UndefOr[AudioNodeChannelInterpretation] = js.undefined
}
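
The options bag above pairs with the new `AudioBufferSourceNode` constructor introduced in this diff. A minimal usage sketch, assuming `ctx` and a `decoded` buffer (e.g. from `decodeAudioData`) already exist:

```scala
import org.scalajs.dom._

// Sketch: play a decoded buffer on a loop through the context's destination.
def playLooped(ctx: AudioContext, decoded: AudioBuffer): AudioBufferSourceNode = {
  val opts = new AudioBufferSourceNodeOptions {
    buffer = decoded
    loop = true
    loopStart = 0.5 // seconds
    loopEnd = 2.5
  }
  val source = new AudioBufferSourceNode(ctx, opts)
  source.connect(ctx.destination)
  source.start()
  source
}
```

The anonymous-subclass assignment style is the usual Scala.js pattern for non-native JS option traits whose members default to `js.undefined`.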
137 changes: 11 additions & 126 deletions dom/src/main/scala/org/scalajs/dom/AudioContext.scala
@@ -17,98 +17,13 @@ import scala.scalajs.js.annotation._
*/
@js.native
@JSGlobal
class AudioContext extends EventTarget {
class AudioContext extends BaseAudioContext {

/** Returns a double representing an ever-increasing hardware time in seconds used for scheduling. It starts at 0 and
* cannot be stopped, paused or reset.
*/
def currentTime: Double = js.native

/** Returns an AudioDestinationNode representing the final destination of all audio in the context. It can be thought
* of as the audio-rendering device.
*/
val destination: AudioDestinationNode = js.native

/** Returns the AudioListener object, used for 3D spatialization. */
val listener: AudioListener = js.native

/** Returns a float representing the sample rate (in samples per second) used by all nodes in this context. The
* sample-rate of an AudioContext cannot be changed.
*/
val sampleRate: Double = js.native

/** Returns the current state of the AudioContext. */
def state: String = js.native

/** Closes the audio context, releasing any system audio resources that it uses. */
def close(): js.Promise[Unit] = js.native

/** Creates an AnalyserNode, which can be used to expose audio time and frequency data and for example to create data
* visualisations.
*/
def createAnalyser(): AnalyserNode = js.native

/** Creates a BiquadFilterNode, which represents a second order filter configurable as several different common filter
* types: high-pass, low-pass, band-pass, etc.
*/
def createBiquadFilter(): BiquadFilterNode = js.native

/** Creates a new, empty AudioBuffer object, which can then be populated by data and played via an
* AudioBufferSourceNode.
*
* @param numOfChannels
* An integer representing the number of channels this buffer should have. Implementations must support a minimum
* 32 channels.
* @param length
* An integer representing the size of the buffer in sample-frames.
* @param sampleRate
* The sample-rate of the linear audio data in sample-frames per second. An implementation must support
* sample-rates in at least the range 22050 to 96000.
*/
def createBuffer(numOfChannels: Int, length: Int, sampleRate: Int): AudioBuffer = js.native

/** Creates an AudioBufferSourceNode, which can be used to play and manipulate audio data contained within an
* AudioBuffer object. AudioBuffers are created using AudioContext.createBuffer or returned by
* AudioContext.decodeAudioData when it successfully decodes an audio track.
*/
def createBufferSource(): AudioBufferSourceNode = js.native

/** Creates a ChannelMergerNode, which is used to combine channels from multiple audio streams into a single audio
* stream.
*
* @param numberOfInputs
The number of channels in the input audio streams, which the output stream will contain; the default is 6 if
* this parameter is not specified.
*/
def createChannelMerger(numberOfInputs: Int = js.native): ChannelMergerNode = js.native
/** Returns the number of seconds of processing latency incurred by the AudioContext passing the audio from the
  * AudioDestinationNode to the audio subsystem.
  */
def baseLatency: Double = js.native

/** Creates a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them
* separately.
*
* @param numberOfOutputs
The number of channels in the input audio stream that you want to output separately; the default is 6 if this
* parameter is not specified.
*/
def createChannelSplitter(numberOfOutputs: Int = js.native): ChannelSplitterNode = js.native

/** Creates a ConvolverNode, which can be used to apply convolution effects to your audio graph, for example a
* reverberation effect.
*/
def createConvolver(): ConvolverNode = js.native

/** Creates a DelayNode, which is used to delay the incoming audio signal by a certain amount. This node is also
* useful to create feedback loops in a Web Audio API graph.
*
* @param maxDelayTime
* The maximum amount of time, in seconds, that the audio signal can be delayed by. The default value is 0.
*/
def createDelay(maxDelayTime: Int): DelayNode = js.native

/** Creates a DynamicsCompressorNode, which can be used to apply acoustic compression to an audio signal. */
def createDynamicsCompressor(): DynamicsCompressorNode = js.native

/** Creates a GainNode, which can be used to control the overall volume of the audio graph. */
def createGain(): GainNode = js.native
/** Returns an estimation of the output latency of the current audio context. */
def outputLatency: Double = js.native

/** Creates a MediaElementAudioSourceNode associated with an HTMLMediaElement. This can be used to play and manipulate
* audio from <video> or <audio> elements.
@@ -131,47 +46,17 @@
*/
def createMediaStreamDestination(): MediaStreamAudioDestinationNode = js.native

/** Creates an OscillatorNode, a source representing a periodic waveform. It basically generates a tone. */
def createOscillator(): OscillatorNode = js.native

/** Creates a PannerNode, which is used to spatialise an incoming audio stream in 3D space. */
def createPanner(): PannerNode = js.native

/** Creates a PeriodicWave, used to define a periodic waveform that can be used to determine the output of an
* OscillatorNode.
*/
def createPeriodicWave(real: js.typedarray.Float32Array, imag: js.typedarray.Float32Array): PeriodicWave = js.native

/** Creates a StereoPannerNode, which can be used to apply stereo panning to an audio source. */
def createStereoPanner(): StereoPannerNode = js.native

/** Creates a WaveShaperNode, which is used to implement non-linear distortion effects. */
def createWaveShaper(): WaveShaperNode = js.native

/** Asynchronously decodes audio file data contained in an ArrayBuffer. In this case, the ArrayBuffer is usually
* loaded from an XMLHttpRequest's response attribute after setting the responseType to arraybuffer. This method only
* works on complete files, not fragments of audio files.
*
* @param audioData
* An ArrayBuffer containing the audio data to be decoded, usually grabbed from an XMLHttpRequest's response
* attribute after setting the responseType to arraybuffer.
* @param successCallback
* A callback function to be invoked when the decoding successfully finishes. The single argument to this callback
* is an AudioBuffer representing the decoded PCM audio data. Usually you'll want to put the decoded data into an
* AudioBufferSourceNode, from which it can be played and manipulated how you want.
* @param errorCallback
* An optional error callback, to be invoked if an error occurs when the audio data is being decoded.
*/
def decodeAudioData(
audioData: js.typedarray.ArrayBuffer, successCallback: js.Function1[AudioBuffer, _] = js.native,
errorCallback: js.Function0[_] = js.native
): js.Promise[AudioBuffer] = js.native

/** Resumes the progression of time in an audio context that has previously been suspended. */
def resume(): js.Promise[Unit] = js.native

/** Suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing
* CPU/battery usage in the process.
*/
def suspend(): js.Promise[Unit] = js.native

/** Closes the audio context, releasing any system audio resources that it uses. */
def close(): js.Promise[Unit] = js.native

/** Returns a new AudioTimestamp containing two correlated clock values: the context's currentTime and the
  * Performance.now() value at the moment it was sampled.
  */
def getOutputTimestamp: AudioTimestamp = js.native
}
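
With `AudioContext` now extending `BaseAudioContext`, the lifecycle and latency members can be exercised together. A hedged sketch (browser-only; the latency figures are hardware-dependent, and `Thenable.Implicits` is used to treat the returned `js.Promise`s as `Future`s):

```scala
import org.scalajs.dom._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.scalajs.js.Thenable.Implicits._

// Sketch: resume a (possibly user-gesture-gated) context, sample its
// output timestamp, then release the audio hardware.
val ctx = new AudioContext()
println(s"base latency: ${ctx.baseLatency}s, output latency: ${ctx.outputLatency}s")

ctx.resume().foreach { _ =>
  val ts = ctx.getOutputTimestamp
  println(s"context: ${ts.contextTime}s, performance: ${ts.performanceTime}ms")
  ctx.close()
}
```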
4 changes: 2 additions & 2 deletions dom/src/main/scala/org/scalajs/dom/AudioNode.scala
@@ -47,14 +47,14 @@ trait AudioNode extends EventTarget {

/** Represents an enumerated value describing the way channels must be matched between the node's inputs and outputs.
*/
var channelCountMode: Int = js.native
var channelCountMode: AudioNodeChannelCountMode = js.native

/** Represents an enumerated value describing the meaning of the channels. This interpretation will define how audio
* up-mixing and down-mixing will happen.
*
* The possible values are "speakers" or "discrete".
*/
var channelInterpretation: String = js.native
var channelInterpretation: AudioNodeChannelInterpretation = js.native

/** Allows us to connect one output of this node to one input of another node. */
def connect(audioNode: AudioNode): Unit = js.native
17 changes: 17 additions & 0 deletions dom/src/main/scala/org/scalajs/dom/AudioNodeChannelCountMode.scala
@@ -0,0 +1,17 @@
/** Documentation is thanks to Mozilla Contributors at https://developer.mozilla.org/en-US/docs/Web/API and available
* under the Creative Commons Attribution-ShareAlike v2.5 or later. http://creativecommons.org/licenses/by-sa/2.5/
*
* Everything else is under the MIT License http://opensource.org/licenses/MIT
*/
package org.scalajs.dom

import scala.scalajs.js

@js.native
sealed trait AudioNodeChannelCountMode extends js.Any

object AudioNodeChannelCountMode {
val max: AudioNodeChannelCountMode = "max".asInstanceOf[AudioNodeChannelCountMode]
val `clamped-max`: AudioNodeChannelCountMode = "clamped-max".asInstanceOf[AudioNodeChannelCountMode]
val explicit: AudioNodeChannelCountMode = "explicit".asInstanceOf[AudioNodeChannelCountMode]
}
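
These string-backed enums use an opaque-trait encoding: at runtime each constant erases to a plain JS string, while the `sealed trait` keeps arbitrary strings from being assigned. A small sketch against the retyped `AudioNode` fields from this diff:

```scala
import org.scalajs.dom._

// Sketch: the enum constants type-check against the AudioNode fields,
// while erasing to the underlying strings ("clamped-max", "discrete").
def clampChannels(node: AudioNode): Unit = {
  node.channelCountMode = AudioNodeChannelCountMode.`clamped-max`
  node.channelInterpretation = AudioNodeChannelInterpretation.discrete
}
```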
@@ -0,0 +1,16 @@
/** Documentation is thanks to Mozilla Contributors at https://developer.mozilla.org/en-US/docs/Web/API and available
* under the Creative Commons Attribution-ShareAlike v2.5 or later. http://creativecommons.org/licenses/by-sa/2.5/
*
* Everything else is under the MIT License http://opensource.org/licenses/MIT
*/
package org.scalajs.dom

import scala.scalajs.js

@js.native
sealed trait AudioNodeChannelInterpretation extends js.Any

object AudioNodeChannelInterpretation {
val speakers: AudioNodeChannelInterpretation = "speakers".asInstanceOf[AudioNodeChannelInterpretation]
val discrete: AudioNodeChannelInterpretation = "discrete".asInstanceOf[AudioNodeChannelInterpretation]
}
4 changes: 4 additions & 0 deletions dom/src/main/scala/org/scalajs/dom/AudioParam.scala
@@ -36,6 +36,10 @@ trait AudioParam extends AudioNode {
/** Represents the initial value of the attributes as defined by the specific AudioNode creating the AudioParam. */
val defaultValue: Double = js.native

/** Represents the maximum possible value for the parameter's nominal (effective) range. */
val maxValue: Double = js.native

/** Represents the minimum possible value for the parameter's nominal (effective) range. */
val minValue: Double = js.native

/** Schedules an instant change to the value of the AudioParam at a precise time, as measured against
* AudioContext.currentTime. The new value is given in the value parameter.
*
28 changes: 28 additions & 0 deletions dom/src/main/scala/org/scalajs/dom/AudioScheduledSourceNode.scala
@@ -0,0 +1,28 @@
/** Documentation is thanks to Mozilla Contributors at https://developer.mozilla.org/en-US/docs/Web/API and available
* under the Creative Commons Attribution-ShareAlike v2.5 or later. http://creativecommons.org/licenses/by-sa/2.5/
*
* Everything else is under the MIT License http://opensource.org/licenses/MIT
*/
package org.scalajs.dom

import scala.scalajs.js

@js.native
trait AudioScheduledSourceNode extends AudioNode {

/** Schedules the sound to start playing immediately. */
def start(): Unit = js.native

/** Schedules the sound to stop playing immediately. */
def stop(): Unit = js.native

/** Schedules the sound to start playing at the specified time, measured against the context's currentTime. */
def start(when: Double): Unit = js.native

/** Schedules the sound to stop playing at the specified time, measured against the context's currentTime. */
def stop(when: Double): Unit = js.native

/** Used to set the event handler for the ended event, which fires when the tone has stopped playing. */
var onended: js.Function1[Event, _] = js.native

}
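
Any source node extending this trait can be scheduled against the context clock. A sketch using an `OscillatorNode`, assuming it is (or becomes) an `AudioScheduledSourceNode` as in the Web Audio specification:

```scala
import org.scalajs.dom._

// Sketch: schedule a short tone against the context clock and observe its end.
def beep(ctx: AudioContext): Unit = {
  val osc = ctx.createOscillator()
  osc.connect(ctx.destination)
  val now = ctx.currentTime
  osc.start(now + 0.1) // begin 100 ms from now
  osc.stop(now + 0.6)  // ...and play for half a second
  osc.onended = (_: Event) => println("tone finished")
}
```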
13 changes: 13 additions & 0 deletions dom/src/main/scala/org/scalajs/dom/AudioTimestamp.scala
@@ -0,0 +1,13 @@
/** Documentation is thanks to Mozilla Contributors at https://developer.mozilla.org/en-US/docs/Web/API and available
* under the Creative Commons Attribution-ShareAlike v2.5 or later. http://creativecommons.org/licenses/by-sa/2.5/
*
* Everything else is under the MIT License http://opensource.org/licenses/MIT
*/
package org.scalajs.dom

import scala.scalajs.js

trait AudioTimestamp extends js.Object {
var contextTime: Double
var performanceTime: Double
}
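
An `AudioTimestamp` pairs one instant on the context clock (seconds) with the same instant on the performance clock (milliseconds), which allows converting between the two. A small hypothetical helper (the function name is illustrative):

```scala
import org.scalajs.dom._

// Sketch: estimate the Performance.now() value (ms) at which a future
// context time (s) will be heard, by extrapolating from one sampled pair.
def toPerformanceMillis(ts: AudioTimestamp, contextTime: Double): Double =
  ts.performanceTime + (contextTime - ts.contextTime) * 1000
```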