diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index 74344b4f..ec31bb37 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-11T14:23:00","documenter_version":"1.7.0"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-11T14:42:18","documenter_version":"1.7.0"}}
\ No newline at end of file
diff --git a/dev/api/aggregation/index.html b/dev/api/aggregation/index.html
index 9acb4c70..491a0b13 100644
--- a/dev/api/aggregation/index.html
+++ b/dev/api/aggregation/index.html
@@ -1,5 +1,5 @@
-Aggregation · Mill.jl

Aggregation

Index

API

Mill.AggregationStackType
AggregationStack{T <: Tuple{Vararg{AbstractAggregation}}} <: AbstractAggregation

A container that implements a concatenation of one or more AbstractAggregations.

Construct with e.g. AggregationStack(SegmentedMean([t::Type, ]d)) for single operators and with e.g. SegmentedPNormLSE([t::Type, ]d) for concatenations. With these calls all parameters inside operators are initialized randomly as Float32 arrays, unless type t is further specified. Another option is to vcat two operators together.

Nested stacks are flattened into a single-level structure upon construction, see examples.

Intended to be used as a functor:

(a::AggregationStack)(x, bags[, w])

where x is either an AbstractMatrix or missing, bags is an AbstractBags structure, and w is an optional AbstractVector of weights.

Examples

julia> a = AggregationStack(SegmentedMean(2), SegmentedMax(2))
+Aggregation · Mill.jl

Aggregation

Index

API

Mill.AggregationStackType
AggregationStack{T <: Tuple{Vararg{AbstractAggregation}}} <: AbstractAggregation

A container that implements a concatenation of one or more AbstractAggregations.

Construct with e.g. AggregationStack(SegmentedMean([t::Type, ]d)) for single operators and with e.g. SegmentedPNormLSE([t::Type, ]d) for concatenations. With these calls all parameters inside operators are initialized randomly as Float32 arrays, unless type t is further specified. Another option is to vcat two operators together.

Nested stacks are flattened into a single-level structure upon construction, see examples.

Intended to be used as a functor:

(a::AggregationStack)(x, bags[, w])

where x is either an AbstractMatrix or missing, bags is an AbstractBags structure, and w is an optional AbstractVector of weights.

Examples

julia> a = AggregationStack(SegmentedMean(2), SegmentedMax(2))
 AggregationStack:
  SegmentedMean(ψ = Float32[0.0, 0.0])
  SegmentedMax(ψ = Float32[0.0, 0.0])
@@ -25,7 +25,7 @@
 AggregationStack:
  SegmentedMean(ψ = Float32[0.0, 0.0])
  SegmentedMax(ψ = Float32[0.0, 0.0])
-

See also: AbstractAggregation, SegmentedSum, SegmentedMax, SegmentedMean, SegmentedPNorm, SegmentedLSE.

source
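A hedged sketch of the optional weighted call (a::AggregationStack)(x, bags, w); the data and weights below are arbitrary, and the output is deterministic since no bag is empty:

julia> a = AggregationStack(SegmentedMean(2));

julia> a(Float32[0 1 2; 3 4 5], bags([1:2, 3:3]), Float32[1, 1, 2])
2×2 Matrix{Float32}:
 0.5  2.0
 3.5  5.0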
Mill.SegmentedPNormType
SegmentedPNorm{V <: AbstractVector{<:AbstractFloat}} <: AbstractAggregation

AbstractAggregation implementing segmented p-norm aggregation:

$f(\{x_1, \ldots, x_k\}; p, c) = \left(\frac{1}{k} \sum_{i = 1}^{k} \vert x_i - c \vert ^ {p} \right)^{\frac{1}{p}}$

Stores a vector of parameters ψ that are filled into the resulting matrix in case an empty bag is encountered, and vectors of parameters p and c used during computation.

See also: AbstractAggregation, AggregationStack, SegmentedMax, SegmentedMean, SegmentedSum, SegmentedLSE.

source
Mill.SegmentedLSEType
SegmentedLSE{V <: AbstractVector{<:AbstractFloat}} <: AbstractAggregation

AbstractAggregation implementing segmented log-sum-exp (LSE) aggregation:

$f(\{x_1, \ldots, x_k\}; r) = \frac{1}{r}\log \left(\frac{1}{k} \sum_{i = 1}^{k} \exp({r\cdot x_i})\right)$

Stores a vector of parameters ψ that are filled into the resulting matrix in case an empty bag is encountered, and a vector of parameters r used during computation.

See also: AbstractAggregation, AggregationStack, SegmentedMax, SegmentedMean, SegmentedSum, SegmentedPNorm.

source
Mill.SegmentedPNormType
SegmentedPNorm{V <: AbstractVector{<:AbstractFloat}} <: AbstractAggregation

AbstractAggregation implementing segmented p-norm aggregation:

$f(\{x_1, \ldots, x_k\}; p, c) = \left(\frac{1}{k} \sum_{i = 1}^{k} \vert x_i - c \vert ^ {p} \right)^{\frac{1}{p}}$

Stores a vector of parameters ψ that are filled into the resulting matrix in case an empty bag is encountered, and vectors of parameters p and c used during computation.

See also: AbstractAggregation, AggregationStack, SegmentedMax, SegmentedMean, SegmentedSum, SegmentedLSE.

source
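As a quick numeric check of the formula above: for a bag $\{1, -2, 3\}$ with $p = 2$ and $c = 0$, $f = \left(\tfrac{1 + 4 + 9}{3}\right)^{1/2} = \sqrt{14/3} \approx 2.16$.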
Mill.SegmentedLSEType
SegmentedLSE{V <: AbstractVector{<:AbstractFloat}} <: AbstractAggregation

AbstractAggregation implementing segmented log-sum-exp (LSE) aggregation:

$f(\{x_1, \ldots, x_k\}; r) = \frac{1}{r}\log \left(\frac{1}{k} \sum_{i = 1}^{k} \exp({r\cdot x_i})\right)$

Stores a vector of parameters ψ that are filled into the resulting matrix in case an empty bag is encountered, and a vector of parameters r used during computation.

See also: AbstractAggregation, AggregationStack, SegmentedMax, SegmentedMean, SegmentedSum, SegmentedPNorm.

source
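Intuitively, $r$ interpolates between two extremes: as $r \to \infty$ the operator approaches $\max_i x_i$, whereas for $r \to 0^+$ it approaches the plain mean.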
Mill.SegmentedMeanMaxFunction
SegmentedMeanMax([t::Type, ]d::Integer)

Construct AggregationStack consisting of SegmentedMean and SegmentedMax operators.

Examples

julia> SegmentedMeanMax(4)
 AggregationStack:
  SegmentedMean(ψ = Float32[0.0, 0.0, 0.0, 0.0])
  SegmentedMax(ψ = Float32[0.0, 0.0, 0.0, 0.0])
@@ -33,7 +33,7 @@
 julia> SegmentedMeanMax(Float64, 2)
 AggregationStack:
  SegmentedMean(ψ = [0.0, 0.0])
- SegmentedMax(ψ = [0.0, 0.0])

See also: AbstractAggregation, AggregationStack, SegmentedSum, SegmentedMax, SegmentedMean, SegmentedPNorm, SegmentedLSE.

source
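A hedged application sketch (deterministic here because ψ is all zeros and no bag is empty; the first two rows are the segmented means, the last two the segmented maxima):

julia> a = SegmentedMeanMax(2);

julia> a(Float32[0 1 2; 3 4 5], bags([1:2, 3:3]))
4×2 Matrix{Float32}:
 0.5  2.0
 3.5  5.0
 1.0  2.0
 4.0  5.0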
Mill.BagCountType
BagCount{T <: AbstractAggregation}

A wrapper type that, when called, applies the AbstractAggregation stored in it and appends one more element containing the bag size after the $x ↦ \log(x + 1)$ transformation to the result.

Used as a functor:

(bc::BagCount)(x, bags[, w])

where x is either an AbstractMatrix or missing, bags is an AbstractBags structure, and w is an optional AbstractVector of weights.

Examples

julia> x = Float32[0 1 2; 3 4 5]
+ SegmentedMax(ψ = [0.0, 0.0])

See also: AbstractAggregation, AggregationStack, SegmentedSum, SegmentedMax, SegmentedMean, SegmentedPNorm, SegmentedLSE.

source
Mill.BagCountType
BagCount{T <: AbstractAggregation}

A wrapper type that, when called, applies the AbstractAggregation stored in it and appends one more element containing the bag size after the $x ↦ \log(x + 1)$ transformation to the result.

Used as a functor:

(bc::BagCount)(x, bags[, w])

where x is either an AbstractMatrix or missing, bags is an AbstractBags structure, and w is an optional AbstractVector of weights.

Examples

julia> x = Float32[0 1 2; 3 4 5]
 2×3 Matrix{Float32}:
  0.0  1.0  2.0
  3.0  4.0  5.0
@@ -59,4 +59,4 @@
  3.0       4.5
  0.0       2.0
  3.0       5.0
- 0.693147  1.09861

See also: AbstractAggregation, AggregationStack, SegmentedSum, SegmentedMax, SegmentedMean, SegmentedPNorm, SegmentedLSE.

source
+ 0.693147  1.09861

See also: AbstractAggregation, AggregationStack, SegmentedSum, SegmentedMax, SegmentedMean, SegmentedPNorm, SegmentedLSE.

source
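The appended row in the example above is simply $\log(k + 1)$ of the bag sizes $k$; a quick hedged check:

julia> log.(1 .+ [1, 2])
2-element Vector{Float64}:
 0.6931471805599453
 1.0986122886681098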
diff --git a/dev/api/bags/index.html b/dev/api/bags/index.html
index be812c7a..3c15585d 100644
--- a/dev/api/bags/index.html
+++ b/dev/api/bags/index.html
@@ -1,11 +1,11 @@
-Bags · Mill.jl

Bags

Index

API

Mill.AlignedBagsType
AlignedBags{T <: Integer} <: AbstractBags{T}

AlignedBags struct stores indices of bags' instances in one or more UnitRange{T}s. This is only possible if instances of every bag are stored in one contiguous block.

See also: ScatteredBags.

source
Mill.AlignedBagsMethod
AlignedBags()

Construct a new AlignedBags struct containing no bags.

Examples

julia> AlignedBags()
-AlignedBags{Int64}(UnitRange{Int64}[])
source
Mill.AlignedBagsMethod
AlignedBags(bags::UnitRange{<:Integer}...)

Construct a new AlignedBags struct from bags in arguments.

Examples

julia> AlignedBags(1:3, 4:8)
-AlignedBags{Int64}(UnitRange{Int64}[1:3, 4:8])
source
Mill.AlignedBagsMethod
AlignedBags(k::Vector{<:Integer})

Construct a new AlignedBags struct from Vector k specifying the index of the bag each instance belongs to. Throws ArgumentError if this is not possible.

Examples

julia> AlignedBags([1, 1, 2, 2, 2, 4])
-AlignedBags{Int64}(UnitRange{Int64}[1:2, 3:5, 0:-1, 6:6])
source
Mill.ScatteredBagsMethod
ScatteredBags(k::Vector{<:Integer})

Construct a new ScatteredBags struct from Vector k specifying the index of the bag each instance belongs to.

Examples

julia> ScatteredBags([2, 2, 1, 1, 1, 3])
-ScatteredBags{Int64}([[3, 4, 5], [1, 2], [6]])
source
Mill.length2bagsFunction
length2bags(ls::Vector{<:Integer})

Convert lengths of bags given in ls to AlignedBags with contiguous blocks.

Examples

julia> length2bags([1, 3, 2])
-AlignedBags{Int64}(UnitRange{Int64}[1:1, 2:4, 5:6])

See also: AlignedBags.

source
Mill.bagsFunction
bags(k::Vector{<:Integer})
+Bags · Mill.jl

Bags

Index

API

Mill.AlignedBagsType
AlignedBags{T <: Integer} <: AbstractBags{T}

AlignedBags struct stores indices of bags' instances in one or more UnitRange{T}s. This is only possible if instances of every bag are stored in one contiguous block.

See also: ScatteredBags.

source
Mill.AlignedBagsMethod
AlignedBags()

Construct a new AlignedBags struct containing no bags.

Examples

julia> AlignedBags()
+AlignedBags{Int64}(UnitRange{Int64}[])
source
Mill.AlignedBagsMethod
AlignedBags(bags::UnitRange{<:Integer}...)

Construct a new AlignedBags struct from bags in arguments.

Examples

julia> AlignedBags(1:3, 4:8)
+AlignedBags{Int64}(UnitRange{Int64}[1:3, 4:8])
source
Mill.AlignedBagsMethod
AlignedBags(k::Vector{<:Integer})

Construct a new AlignedBags struct from Vector k specifying the index of the bag each instance belongs to. Throws ArgumentError if this is not possible.

Examples

julia> AlignedBags([1, 1, 2, 2, 2, 4])
+AlignedBags{Int64}(UnitRange{Int64}[1:2, 3:5, 0:-1, 6:6])
source
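In the example above, the empty range 0:-1 stands for bag 3, to which no instance belongs.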
Mill.ScatteredBagsMethod
ScatteredBags(k::Vector{<:Integer})

Construct a new ScatteredBags struct from Vector k specifying the index of the bag each instance belongs to.

Examples

julia> ScatteredBags([2, 2, 1, 1, 1, 3])
+ScatteredBags{Int64}([[3, 4, 5], [1, 2], [6]])
source
Mill.length2bagsFunction
length2bags(ls::Vector{<:Integer})

Convert lengths of bags given in ls to AlignedBags with contiguous blocks.

Examples

julia> length2bags([1, 3, 2])
+AlignedBags{Int64}(UnitRange{Int64}[1:1, 2:4, 5:6])

See also: AlignedBags.

source
Mill.bagsFunction
bags(k::Vector{<:Integer})
 bags(k::Vector{T}) where T <: UnitRange{<:Integer}
 bags(b::AbstractBags)

Construct an AbstractBags structure that is most suitable for the input (AlignedBags if possible, ScatteredBags otherwise).

Examples

julia> bags([1, 1, 3])
 AlignedBags{Int64}(UnitRange{Int64}[1:2, 0:-1, 3:3])
@@ -17,9 +17,9 @@
 AlignedBags{Int64}(UnitRange{Int64}[1:3, 4:5])
 
 julia> bags(ScatteredBags())
-ScatteredBags{Int64}(Vector{Int64}[])

See also: AlignedBags, ScatteredBags.

source
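When instances of some bag are not stored contiguously, AlignedBags is impossible and bags falls back to ScatteredBags; a hedged sketch:

julia> bags([2, 1, 2])
ScatteredBags{Int64}([[2], [1, 3]])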
Mill.remapbagsFunction
remapbags(b::AbstractBags, idcs::VecOrRange{<:Integer}) -> (rb, I)

Select a subset of bags in b corresponding to indices idcs and remap instance indices appropriately. Return new bags rb as well as a Vector of remapped instances I.

Examples

julia> remapbags(AlignedBags([1:1, 2:3, 4:5]), [1, 3])
+ScatteredBags{Int64}(Vector{Int64}[])

See also: AlignedBags, ScatteredBags.

source
Mill.remapbagsFunction
remapbags(b::AbstractBags, idcs::VecOrRange{<:Integer}) -> (rb, I)

Select a subset of bags in b corresponding to indices idcs and remap instance indices appropriately. Return new bags rb as well as a Vector of remapped instances I.

Examples

julia> remapbags(AlignedBags([1:1, 2:3, 4:5]), [1, 3])
 (AlignedBags{Int64}(UnitRange{Int64}[1:1, 2:3]), [1, 4, 5])
 
 julia> remapbags(ScatteredBags([[1,3], [2], Int[]]), [2])
-(ScatteredBags{Int64}([[1]]), [2])
source
Mill.adjustbagsFunction
adjustbags(b::AlignedBags, mask::AbstractVector{Bool})

Remove indices of instances from bags b and remap the remaining instances accordingly.

Examples

julia> adjustbags(AlignedBags([1:2, 0:-1, 3:4]), [false, false, true, true])
-AlignedBags{Int64}(UnitRange{Int64}[0:-1, 0:-1, 1:2])
source
+(ScatteredBags{Int64}([[1]]), [2])
source
Mill.adjustbagsFunction
adjustbags(b::AlignedBags, mask::AbstractVector{Bool})

Remove indices of instances from bags b and remap the remaining instances accordingly.

Examples

julia> adjustbags(AlignedBags([1:2, 0:-1, 3:4]), [false, false, true, true])
+AlignedBags{Int64}(UnitRange{Int64}[0:-1, 0:-1, 1:2])
source
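In the example above, the mask keeps only instances 3 and 4, so the first bag becomes empty (0:-1), the second stays empty, and the surviving instances are renumbered to 1:2.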
diff --git a/dev/api/data_nodes/index.html b/dev/api/data_nodes/index.html
index b3b74295..2cd5d7de 100644
--- a/dev/api/data_nodes/index.html
+++ b/dev/api/data_nodes/index.html
@@ -1,21 +1,21 @@
-Data nodes · Mill.jl

Data nodes

Index

API

Mill.AbstractProductNodeType
AbstractProductNode <: AbstractMillNode

Supertype for any structure representing a data node implementing a Cartesian product of data in subtrees.

source
Mill.AbstractBagNodeType
AbstractBagNode <: AbstractMillNode

Supertype for any data node structure representing a multi-instance learning problem.

source
Mill.ArrayNodeType
ArrayNode{A <: AbstractArray, C} <: AbstractMillNode

Data node for storing array-like data of type A and metadata of type C. The convention is that samples are stored along the last axis, e.g. in columns of a matrix.

See also: AbstractMillNode, ArrayModel.

source
Mill.ArrayNodeMethod
ArrayNode(d::AbstractArray, m=nothing)

Construct a new ArrayNode with data d and metadata m.

Examples

julia> a = ArrayNode([1 2; 3 4; 5 6])
+Data nodes · Mill.jl

Data nodes

Index

API

Mill.AbstractProductNodeType
AbstractProductNode <: AbstractMillNode

Supertype for any structure representing a data node implementing a Cartesian product of data in subtrees.

source
Mill.AbstractBagNodeType
AbstractBagNode <: AbstractMillNode

Supertype for any data node structure representing a multi-instance learning problem.

source
Mill.ArrayNodeType
ArrayNode{A <: AbstractArray, C} <: AbstractMillNode

Data node for storing array-like data of type A and metadata of type C. The convention is that samples are stored along the last axis, e.g. in columns of a matrix.

See also: AbstractMillNode, ArrayModel.

source
Mill.BagNodeMethod
BagNode(d, b, m=nothing)

Construct a new BagNode with data d, bags b, and metadata m.

d is either an AbstractMillNode or missing. Any other type is wrapped in an ArrayNode.

If b is an AbstractVector, Mill.bags is applied first.

Examples

julia> BagNode(ArrayNode(maybehotbatch([1, missing, 2], 1:2)), AlignedBags([1:1, 2:3]))
 BagNode  2 obs
   ╰── ArrayNode(2×3 MaybeHotMatrix with Union{Missing, Bool} elements)  3 obs
 
 julia> BagNode(randn(2, 5), [1, 2, 2, 1, 1])
 BagNode  2 obs
-  ╰── ArrayNode(2×5 Array with Float64 elements)  5 obs

See also: WeightedBagNode, AbstractBagNode, AbstractMillNode, BagModel.

source
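Bags can also be indexed by observation; a hedged sketch (exact printing may differ):

julia> n = BagNode(randn(Float32, 2, 5), [1, 2, 2, 1, 1]);

julia> n[2]
BagNode  1 obs
  ╰── ArrayNode(2×2 Array with Float32 elements)  2 obs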
Mill.WeightedBagNodeMethod
WeightedBagNode(d, b, w::Vector, m=nothing)

Construct a new WeightedBagNode with data d, bags b, vector of weights w and metadata m.

d is either an AbstractMillNode or missing. Any other type is wrapped in an ArrayNode.

If b is an AbstractVector, Mill.bags is applied first.

Examples

julia> WeightedBagNode(ArrayNode(NGramMatrix(["s1", "s2"])), bags([1:2, 0:-1]), [0.2, 0.8])
+  ╰── ArrayNode(2×5 Array with Float64 elements)  5 obs

See also: WeightedBagNode, AbstractBagNode, AbstractMillNode, BagModel.

source
Mill.WeightedBagNodeMethod
WeightedBagNode(d, b, w::Vector, m=nothing)

Construct a new WeightedBagNode with data d, bags b, vector of weights w and metadata m.

d is either an AbstractMillNode or missing. Any other type is wrapped in an ArrayNode.

If b is an AbstractVector, Mill.bags is applied first.

Examples

julia> WeightedBagNode(ArrayNode(NGramMatrix(["s1", "s2"])), bags([1:2, 0:-1]), [0.2, 0.8])
 WeightedBagNode  2 obs
   ╰── ArrayNode(2053×2 NGramMatrix with Int64 elements)  2 obs
 
 julia> WeightedBagNode(zeros(2, 2), [1, 2], [1, 2])
 WeightedBagNode  2 obs
-  ╰── ArrayNode(2×2 Array with Float64 elements)  2 obs

See also: BagNode, AbstractBagNode, AbstractMillNode, BagModel.

source
Mill.ProductNodeMethod
ProductNode(dss, m=nothing)
 ProductNode(m=nothing; dss...)

Construct a new ProductNode with data dss, and metadata m.

dss should be a Tuple or NamedTuple and all its elements must contain the same number of observations.

If any element of dss is not an AbstractMillNode it is first wrapped in an ArrayNode.

Examples

julia> ProductNode((ArrayNode(zeros(2, 2)), ArrayNode(Flux.onehotbatch([1, 2], 1:2))))
 ProductNode  2 obs
   ├── ArrayNode(2×2 Array with Float64 elements)  2 obs
@@ -34,17 +34,17 @@
 
 julia> ProductNode((ArrayNode([1 2; 3 4]), ArrayNode([1 2 3; 4 5 6])))
 ERROR: AssertionError: All subtrees must have an equal amount of instances!
-[...]

See also: AbstractProductNode, AbstractMillNode, ProductModel.

source
Mill.LazyNodeMethod
LazyNode([Name::Symbol], d, m=nothing)
 LazyNode{Name}(d, m=nothing)

Construct a new LazyNode with name Name, data d, and metadata m.

Examples

julia> LazyNode(:Codons, ["GGGCGGCGA", "CCTCGCGGG"])
 LazyNode{:Codons, Vector{String}, Nothing}:
  "GGGCGGCGA"
- "CCTCGCGGG"

See also: AbstractMillNode, LazyModel, Mill.unpack2mill.

source
Mill.unpack2millFunction
Mill.unpack2mill(x::LazyNode)

Return a representation of LazyNode x using Mill.jl structures. Every custom LazyNode should implement a specialized method of this function, as it is used in LazyModel.

Examples

julia> function Mill.unpack2mill(ds::LazyNode{:Sentence})
     s = split.(ds.data, " ")
     x = NGramMatrix(reduce(vcat, s))
     BagNode(x, Mill.length2bags(length.(s)))
 end;
julia> LazyNode{:Sentence}(["foo bar", "baz"]) |> Mill.unpack2mill
 BagNode  2 obs
-  ╰── ArrayNode(2053×3 NGramMatrix with Int64 elements)  3 obs

See also: LazyNode, LazyModel.

source
Mill.dataFunction
Mill.data(n::AbstractMillNode)

Return data stored in node n.

Examples

julia> Mill.data(ArrayNode([1 2; 3 4], "metadata"))
+  ╰── ArrayNode(2053×3 NGramMatrix with Int64 elements)  3 obs

See also: LazyNode, LazyModel.

source
Mill.dataFunction
Mill.data(n::AbstractMillNode)

Return data stored in node n.

Examples

julia> Mill.data(ArrayNode([1 2; 3 4], "metadata"))
 2×2 Matrix{Int64}:
  1  2
  3  4
@@ -52,19 +52,19 @@
 julia> Mill.data(BagNode(ArrayNode([1 2; 3 4]), [1, 2], "metadata"))
 2×2 ArrayNode{Matrix{Int64}, Nothing}:
  1  2
- 3  4

See also: Mill.metadata

source
Mill.metadataFunction
Mill.metadata(n::AbstractMillNode)

Return metadata stored in node n.

Examples

julia> Mill.metadata(ArrayNode([1 2; 3 4], ["foo", "bar"]))
+ 3  4

See also: Mill.metadata

source
Mill.metadataFunction
Mill.metadata(n::AbstractMillNode)

Return metadata stored in node n.

Examples

julia> Mill.metadata(ArrayNode([1 2; 3 4], ["foo", "bar"]))
 2-element Vector{String}:
  "foo"
  "bar"
 
 julia> Mill.metadata(BagNode(ArrayNode([1 2; 3 4]), [1, 2], ["metadata"]))
 1-element Vector{String}:
- "metadata"

See also: Mill.data, Mill.dropmeta, Mill.metadata_getindex.

source
Mill.datasummaryFunction
datasummary(n::AbstractMillNode)

Print summary of parameters of node n.

Examples

julia> n = ProductNode(ArrayNode(randn(2, 3)))
 ProductNode  3 obs
   ╰── ArrayNode(2×3 Array with Float64 elements)  3 obs
 
 julia> datasummary(n)
-"Data summary: 3 obs, 104 bytes."

See also: modelsummary.

source
Mill.dropmetaFunction
dropmeta(n::AbstractMillNode)

Drop metadata stored in data node n (recursively).

Examples

julia> n1 = ArrayNode(NGramMatrix(["foo", "bar"]), ["metafoo", "metabar"])
+"Data summary: 3 obs, 104 bytes."

See also: modelsummary.

source
Mill.dropmetaFunction
dropmeta(n::AbstractMillNode)

Drop metadata stored in data node n (recursively).

Examples

julia> n1 = ArrayNode(NGramMatrix(["foo", "bar"]), ["metafoo", "metabar"])
 2053×2 ArrayNode{NGramMatrix{String, Vector{String}, Int64}, Vector{String}}:
  "foo"
  "bar"
@@ -75,7 +75,7 @@
  "bar"
 
 julia> isnothing(Mill.metadata(n2))
-true

See also: Mill.metadata, Mill.metadata_getindex.

source
Mill.catobsFunction
catobs(ns...)

Merge multiple nodes storing samples (observations) into one, suitably promoting types in the process if possible.

Similar to Base.cat but concatenates along the abstract "axis" where samples are stored.

In case of repeated calls with varying number of arguments or argument types, use reduce(catobs, [ns...]) to save compilation time.

Examples

julia> catobs(ArrayNode(zeros(2, 2)), ArrayNode([1 2; 3 4]))
+true

See also: Mill.metadata, Mill.metadata_getindex.

source
Mill.catobsFunction
catobs(ns...)

Merge multiple nodes storing samples (observations) into one, suitably promoting types in the process if possible.

Similar to Base.cat but concatenates along the abstract "axis" where samples are stored.

In case of repeated calls with varying number of arguments or argument types, use reduce(catobs, [ns...]) to save compilation time.

Examples

julia> catobs(ArrayNode(zeros(2, 2)), ArrayNode([1 2; 3 4]))
 2×4 ArrayNode{Matrix{Float64}, Nothing}:
  0.0  0.0  1.0  2.0
  0.0  0.0  3.0  4.0
@@ -90,7 +90,7 @@
 ProductNode  2 obs
   ├── t1: ArrayNode(2×2 Array with Float64 elements)  2 obs
   ╰── t2: BagNode  2 obs
-            ╰── ArrayNode(3×6 Array with Float64 elements)  6 obs
source
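The reduce form recommended above produces the same result as the first example; a hedged sketch:

julia> reduce(catobs, [ArrayNode(zeros(2, 2)), ArrayNode([1 2; 3 4])])
2×4 ArrayNode{Matrix{Float64}, Nothing}:
 0.0  0.0  1.0  2.0
 0.0  0.0  3.0  4.0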
Mill.metadata_getindexFunction
metadata_getindex(x, i::Integer)
 metadata_getindex(x, i::VecOrRange{<:Integer})

Index into metadata x. In Mill.jl, it is assumed that the second or last dimension indexes into observations, whichever is smaller. This function can be used when implementing custom subtypes of AbstractMillNode.

Examples

julia> Mill.metadata_getindex(["foo", "bar", "baz"], 2)
 "bar"
 
@@ -107,7 +107,7 @@
 julia> Mill.metadata_getindex([1 2 3; 4 5 6], [1, 3])
 2×2 Matrix{Int64}:
  1  3
- 4  6

See also: Mill.metadata, Mill.dropmeta.

source
Mill.mapdataFunction
mapdata(f, x)

Recursively apply f to data in all leaves of x.

Examples

julia> n1 = ProductNode(a=zeros(2,2), b=ones(2,2))
 ProductNode  2 obs
   ├── a: ArrayNode(2×2 Array with Float64 elements)  2 obs
   ╰── b: ArrayNode(2×2 Array with Float64 elements)  2 obs
@@ -125,7 +125,7 @@
 julia> Mill.data(n2).b
 2×2 ArrayNode{Matrix{Float64}, Nothing}:
  2.0  2.0
- 2.0  2.0
source
Mill.removeinstancesFunction
removeinstances(n::AbstractBagNode, mask)

Remove instances from n using mask and remap bag indices accordingly.

Examples

julia> b1 = BagNode(ArrayNode([1 2 3; 4 5 6]), bags([1:2, 0:-1, 3:3]))
+ 2.0  2.0
source
Mill.removeinstancesFunction
removeinstances(n::AbstractBagNode, mask)

Remove instances from n using mask and remap bag indices accordingly.

Examples

julia> b1 = BagNode(ArrayNode([1 2 3; 4 5 6]), bags([1:2, 0:-1, 3:3]))
 BagNode  3 obs
   ╰── ArrayNode(2×3 Array with Int64 elements)  3 obs
 
@@ -139,4 +139,4 @@
  5  6
 
 julia> b2.bags
-AlignedBags{Int64}(UnitRange{Int64}[1:1, 0:-1, 2:2])
source
+AlignedBags{Int64}(UnitRange{Int64}[1:1, 0:-1, 2:2])
source
diff --git a/dev/api/model_nodes/index.html b/dev/api/model_nodes/index.html
index 69ffffff..ebfee928 100644
--- a/dev/api/model_nodes/index.html
+++ b/dev/api/model_nodes/index.html
@@ -1,12 +1,12 @@
-Model nodes · Mill.jl

Model nodes

Index

API

Mill.ArrayModelType
ArrayModel{T} <: AbstractMillModel

A model node for processing ArrayNodes. It applies a (sub)model m stored in it to data in an ArrayNode.

Examples

julia> Random.seed!(0);
julia> n = ArrayNode(randn(Float32, 2, 2))
+Model nodes · Mill.jl

Model nodes

Index

API

Mill.ArrayModelType
ArrayModel{T} <: AbstractMillModel

A model node for processing ArrayNodes. It applies a (sub)model m stored in it to data in an ArrayNode.

Examples

julia> Random.seed!(0);
julia> n = ArrayNode(randn(Float32, 2, 2))
 2×2 ArrayNode{Matrix{Float32}, Nothing}:
  0.94... 1.53...
  0.13... 0.12...
julia> m = ArrayModel(Dense(2, 2))
 ArrayModel(Dense(2 => 2))  2 arrays, 6 params, 112 bytes
julia> m(n)
 2×2 Matrix{Float32}:
  -0.50... -0.77...
-  0.25...  0.49...

See also: AbstractMillModel, ArrayNode.

source
Mill.BagModelType
BagModel{T <: AbstractMillModel, A <: Union{AbstractAggregation, BagCount}, U}
     <: AbstractMillModel

A model node for processing AbstractBagNodes. It first applies its "instance (sub)model" im on every instance, then performs elementwise segmented aggregation a and finally applies the final model bm on the aggregated representation of every bag in the data node.

Examples

julia> Random.seed!(0);
 
 julia> n = BagNode(ArrayNode(randn(Float32, 3, 2)), bags([0:-1, 1:2]))
@@ -23,13 +23,13 @@
  0.0  0.49...
 
 julia> m(n) == m.bm(m.a(m.im(n.data), n.bags))
-true

See also: AbstractMillModel, AbstractAggregation, AbstractBagNode, BagNode, WeightedBagNode.

source
Mill.BagModelMethod
BagModel(im, a, bm=identity)

Construct a BagModel from the arguments. im should be an AbstractMillModel, a an AbstractAggregation or BagCount, and bm an ArrayModel.

It is also possible to pass any function as im instead of a model node. In that case, it is wrapped into an ArrayModel.

Examples

julia> m = BagModel(ArrayModel(Dense(3, 2)), SegmentedMeanMax(2), Dense(4, 2))
 BagModel ↦ [SegmentedMean(2); SegmentedMax(2)] ↦ Dense(4 => 2)  4 arrays, 14 params, 224 bytes
   ╰── ArrayModel(Dense(3 => 2))  2 arrays, 8 params, 120 bytes
 
 julia> m = BagModel(Dense(4, 3), BagCount(SegmentedMean(3)))
 BagModel ↦ BagCount(SegmentedMean(3)) ↦ identity  1 arrays, 3 params (all zero), 52 bytes
-  ╰── ArrayModel(Dense(4 => 3))  2 arrays, 15 params, 148 bytes

See also: AbstractMillModel, AbstractAggregation, BagCount, AbstractBagNode, BagNode, WeightedBagNode.

source
Mill.ProductModelType
ProductModel{T <: Mill.VecOrTupOrNTup{<:AbstractMillModel}, U} <: AbstractMillModel

A model node for processing ProductNodes. For each subtree of the data node it applies one (sub)model from ms and then applies m on the concatenation of results.

Examples

julia> Random.seed!(0);
 
 julia> n = ProductNode(a=ArrayNode([0 1; 2 3]), b=ArrayNode([4 5; 6 7]))
 ProductNode  2 obs
@@ -54,7 +54,7 @@
  0  1
  2  3
  4  5
- 6  7

See also: AbstractMillModel, AbstractProductNode, ProductNode.

source
Mill.ProductModelMethod
ProductModel(ms, m=identity)
 ProductModel(m=identity; ms...)

Construct a ProductModel from the arguments. ms should be an iterable (Tuple, NamedTuple or Vector) of one or more AbstractMillModels.

It is also possible to pass any function as an element of ms. In that case, it is wrapped into an ArrayModel.

Examples

julia> ProductModel(a=ArrayModel(Dense(2, 2)), b=identity)
 ProductModel ↦ identity
   ├── a: ArrayModel(Dense(2 => 2))  2 arrays, 6 params, 112 bytes
@@ -73,7 +73,7 @@
 
 julia> ProductModel(identity)
 ProductModel ↦ identity
-  ╰── ArrayModel(identity)

See also: AbstractMillModel, AbstractProductNode, ProductNode.

source
Mill.LazyModelType
LazyModel{Name, T} <: AbstractMillModel

A model node for processing LazyNodes. It applies a (sub)model m stored in it to data of the LazyNode after calling Mill.unpack2mill.

Examples

julia> function Mill.unpack2mill(ds::LazyNode{:Sentence})
     s = split.(ds.data, " ")
     x = NGramMatrix(reduce(vcat, s))
     BagNode(x, Mill.length2bags(length.(s)))
@@ -92,14 +92,14 @@
 3×3 Matrix{Float32}:
  -0.06... -0.03... -0.04...
   0.02...  0.00... -0.07...
- -0.00...  0.06... -0.07...

See also: AbstractMillModel, LazyNode, Mill.unpack2mill.

source
Mill.LazyModelMethod
LazyModel([Name::Symbol], m::AbstractMillModel)
 LazyModel{Name}(m::AbstractMillModel)

Construct a new LazyModel with name Name, and model m.

It is also possible to pass any function as m instead of a model node. In that case, it is wrapped into an ArrayModel.

Examples

julia> LazyModel{:Sentence}(ArrayModel(Dense(2, 2)))
 LazyModel{Sentence}
   ╰── ArrayModel(Dense(2 => 2))  2 arrays, 6 params, 112 bytes
 
 julia> LazyModel(:Sentence, Dense(2, 2))
 LazyModel{Sentence}
-  ╰── ArrayModel(Dense(2 => 2))  2 arrays, 6 params, 112 bytes

See also: AbstractMillModel, LazyNode, Mill.unpack2mill.

source
Mill.reflectinmodelFunction
reflectinmodel(x::AbstractMillNode, fm=d -> Dense(d, 10), fa=BagCount ∘ SegmentedMeanMax;
     fsm=Dict(), fsa=Dict(), single_key_identity=true, single_scalar_identity=true, all_imputing=false)

Build a Mill.jl model capable of processing x.

All inner Dense layers are constructed using fm, a function accepting input dimension d and returning a suitable model. All aggregation operators are constructed using fa in a similar manner.

More fine-grained control can be achieved with fsm and fsa keyword arguments, which should be Dicts of c => f pairs, where c is a String traversal code from HierarchicalUtils.jl and f is a function. These definitions override fm and fa.

If a ProductNode with only a single child (subtree) is encountered, its final m model is instantiated as identity instead of using fm and fsm. This can be controlled with single_key_identity.

Similarly, if an ArrayNode contains data X where size(X, 1) is 1, the corresponding model is instantiated as identity unless single_scalar_identity is false.

By default, reflectinmodel makes the first Dense layers in leaves imputing only if the datatype suggests that missing data is present. This applies to types with eltype of Union{Missing, T} where T. If all_imputing is true, all leaf Dense layers in these types are replaced by their imputing variants.

Examples

julia> n1 = ProductNode(a=ArrayNode(NGramMatrix(["a", missing])))
 ProductNode  2 obs
   ╰── a: ArrayNode(2053×2 NGramMatrix with Union{Missing, Int64} elements)  2 obs
@@ -157,9 +157,9 @@
   ╰── ProductModel ↦ Dense(6 => 3)  2 arrays, 21 params, 172 bytes
         ├── ArrayModel(Dense(1 => 3))  2 arrays, 6 params, 112 bytes
         ╰── BagModel ↦ SegmentedLSE(2) ↦ Dense(2 => 3)  4 arrays, 13 params, 220 bytes
-              ╰── ArrayModel(Chain(Dense(2 => 2), Dense(2 => 2)))  4 arrays, 12 params, 224 bytes

See also: AbstractMillNode, AbstractMillModel, ProductNode, BagNode, ArrayNode.

source
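A minimal end-to-end sketch (the layer width 5 is an arbitrary choice and the data is random, so only the output shape is shown):

julia> n = BagNode(ArrayNode(randn(Float32, 3, 4)), bags([1:2, 3:4]));

julia> m = reflectinmodel(n, d -> Dense(d, 5));

julia> size(m(n))
(5, 2)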
Mill.modelsummaryFunction
modelsummary(m::AbstractMillModel)

Print summary of parameters of model m.

Examples

julia> m = ProductModel(ArrayModel(Dense(2, 3)))
 ProductModel ↦ identity
   ╰── ArrayModel(Dense(2 => 3))  2 arrays, 9 params, 124 bytes
 
 julia> modelsummary(m)
-"Model summary: 2 arrays, 9 params, 124 bytes"

See also: datasummary.

source
+"Model summary: 2 arrays, 9 params, 124 bytes"

See also: datasummary.

source
diff --git a/dev/api/special_arrays/index.html b/dev/api/special_arrays/index.html
index 4a8c98e0..20c45f3d 100644
--- a/dev/api/special_arrays/index.html
+++ b/dev/api/special_arrays/index.html
@@ -1,5 +1,5 @@
-Special Arrays · Mill.jl

Special arrays

Index

API

Mill.maybehotFunction
maybehot(l, labels)

Return a MaybeHotVector where the first occurrence of l in labels is set to 1 and all other elements are set to 0.

Examples

julia> maybehot(:b, [:a, :b, :c])
+Special Arrays · Mill.jl

Special arrays

Index

API

Mill.maybehotFunction
maybehot(l, labels)

Return a MaybeHotVector where the first occurrence of l in labels is set to 1 and all other elements are set to 0.

Examples

julia> maybehot(:b, [:a, :b, :c])
 3-element MaybeHotVector with eltype Bool:
  ⋅
  1
@@ -9,7 +9,7 @@
 3-element MaybeHotVector with eltype Missing:
  missing
  missing
- missing

See also: maybehotbatch, MaybeHotVector, MaybeHotMatrix.

source
Mill.maybehotbatchFunction
maybehotbatch(ls, labels)

Return a MaybeHotMatrix in which each column corresponds to one element of ls containing 1 at its first occurrence in labels with all other elements set to 0.

Examples

julia> maybehotbatch([:c, :a], [:a, :b, :c])
 3×2 MaybeHotMatrix with eltype Bool:
  ⋅  1
  ⋅  ⋅
@@ -19,7 +19,7 @@
 3×2 MaybeHotMatrix with eltype Union{Missing, Bool}:
  missing  ⋅
  missing  1
- missing  ⋅

See also: maybehot, MaybeHotMatrix, MaybeHotVector.

source
Mill.maybecoldFunction
maybecold(y, labels=1:size(y,1))

Similar to Flux.onecold, but when y contains missing values, the result contains missing as well.

Therefore, it is roughly the inverse operation of maybehot or maybehotbatch.

Examples

julia> maybehot(:b, [:a, :b, :c])
 3-element MaybeHotVector with eltype Bool:
  ⋅
  1
@@ -40,7 +40,7 @@
 julia> maybecold(maybehotbatch([missing, 2], 1:3))
 2-element Vector{Union{Missing, Int64}}:
   missing
- 2

See also: Flux.onecold, maybehot, maybehotbatch.

source
Mill.NGramIteratorType
NGramIterator{T}

Iterates over ngram codes of a collection of integers s using Mill.string_start_code() and Mill.string_end_code() for padding. NGram codes are computed as in positional number systems, where items of s are digits, b is the base, and m is the modulo.

In order to reduce collisions when mixing ngrams of different order one should avoid zeros and negative integers in s and should set base b to the expected number of unique tokens in s.

See also: NGramMatrix, ngrams, ngrams!, countngrams, countngrams!.

source
Mill.NGramIteratorMethod
NGramIterator(s, n=3, b=256, m=typemax(Int))

Construct an NGramIterator. If s is an AbstractString it is first converted to integers with Base.codeunits.

Examples

julia> NGramIterator("deadbeef", 3, 256, 17) |> collect
+ 2

See also: Flux.onecold, maybehot, maybehotbatch.

source
Mill.NGramIteratorType
NGramIterator{T}

Iterates over ngram codes of a collection of integers s using Mill.string_start_code() and Mill.string_end_code() for padding. NGram codes are computed as in positional number systems, where items of s are digits, b is the base, and m is the modulo.

In order to reduce collisions when mixing ngrams of different order one should avoid zeros and negative integers in s and should set base b to the expected number of unique tokens in s.

See also: NGramMatrix, ngrams, ngrams!, countngrams, countngrams!.

source
Mill.countngramsFunction
countngrams(o, x, n, b, m)

Count the number of ngrams of x using base b and modulo m into a vector of length m in case x is a single sequence, or into a matrix with m rows if x is an iterable of sequences.

Examples

julia> countngrams("foo", 3, 256, 5)
 5-element Vector{Int64}:
  2
  1
@@ -105,7 +105,7 @@
  1  0
  1  2
  0  0
- 1  2

See also: countngrams!, ngrams, ngrams!, NGramMatrix, NGramIterator.

source
Mill.NGramMatrixType
NGramMatrix{T, U, V} <: AbstractMatrix{U}

A matrix-like structure for lazily representing sequences like strings as ngrams of cardinality n using b as a base for calculations and m as the modulo. Therefore, the matrix has m rows and one column for representing each sequence. Missing sequences are supported.

See also: NGramIterator, ngrams, ngrams!, countngrams, countngrams!.

source
Mill.NGramMatrixType
NGramMatrix{T, U, V} <: AbstractMatrix{U}

A matrix-like structure for lazily representing sequences like strings as ngrams of cardinality n using b as a base for calculations and m as the modulo. Therefore, the matrix has m rows and one column for representing each sequence. Missing sequences are supported.

See also: NGramIterator, ngrams, ngrams!, countngrams, countngrams!.

source
Mill.NGramMatrixMethod
NGramMatrix(s, n=3, b=256, m=2053)

Construct an NGramMatrix. s can either be a single sequence or any AbstractVector.

Examples

julia> NGramMatrix([1,2,3])
 2053×1 NGramMatrix{Vector{Int64}, Vector{Vector{Int64}}, Int64}:
  [1, 2, 3]
 
@@ -127,7 +127,7 @@
 2053×3 NGramMatrix{Union{Missing, String}, Vector{Union{Missing, String}}, Union{Missing, Int64}}:
  "a"
  missing
- "c"

See also: NGramIterator, ngrams, ngrams!, countngrams, countngrams!.

source
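A hedged sketch of how the lazy representation is typically consumed: multiplying by a dense weight matrix (the 3×2053 weights below are random, so only the result shape is shown):

julia> A = NGramMatrix(["hello", "world"]);

julia> size(randn(Float32, 3, 2053) * A)
(3, 2)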
Mill.PostImputingMatrixType
PostImputingMatrix{T <: Number, U <: AbstractMatrix{T}, V <: AbstractVector{T}} <: AbstractMatrix{T}

A parametrized matrix that fills in a default vector of parameters whenever a "missing" column is encountered during multiplication.

Supports multiplication with NGramMatrix, MaybeHotMatrix and MaybeHotVector. For any other AbstractMatrix it falls back to standard multiplication.

Examples

julia> A = PostImputingMatrix(ones(2, 2), -ones(2))
+ "c"

See also: NGramIterator, ngrams, ngrams!, countngrams, countngrams!.

source
Mill.PostImputingMatrixType
PostImputingMatrix{T <: Number, U <: AbstractMatrix{T}, V <: AbstractVector{T}} <: AbstractMatrix{T}

A parametrized matrix that fills in a default vector of parameters whenever a "missing" column is encountered during multiplication.

Supports multiplication with NGramMatrix, MaybeHotMatrix and MaybeHotVector. For any other AbstractMatrix it falls back to standard multiplication.

Examples

julia> A = PostImputingMatrix(ones(2, 2), -ones(2))
 2×2 PostImputingMatrix{Float64, Matrix{Float64}, Vector{Float64}}:
 W:
  1.0  1.0
@@ -140,7 +140,7 @@
 julia> A * maybehotbatch([1, missing], 1:2)
 2×2 Matrix{Float64}:
  1.0  -1.0
- 1.0  -1.0

See also: PreImputingMatrix.

source
Mill.PostImputingMatrixMethod
PostImputingMatrix(W::AbstractMatrix{T}, ψ=zeros(T, size(W, 1))) where T

Construct a PostImputingMatrix with multiplication parameters W and default parameters ψ.

Examples

julia> PostImputingMatrix([1 2; 3 4])
 2×2 PostImputingMatrix{Int64, Matrix{Int64}, Vector{Int64}}:
 W:
  1  2
@@ -148,14 +148,14 @@
 
 ψ:
  0
- 0

See also: PreImputingMatrix.

source
Mill.PreImputingMatrixType
PreImputingMatrix{T <: Number, U <: AbstractMatrix{T}, V <: AbstractVector{T}} <: AbstractMatrix{T}

A parametrized matrix that fills in elements from a default vector of parameters whenever a missing element is encountered during multiplication.

Examples

julia> A = PreImputingMatrix(ones(2, 2), -ones(2))
+Vector{Float32} (alias for Array{Float32, 1})

See also: PostImputingMatrix, preimputing_dense, PreImputingMatrix.

source
Mill.PreImputingMatrixType
PreImputingMatrix{T <: Number, U <: AbstractMatrix{T}, V <: AbstractVector{T}} <: AbstractMatrix{T}

A parametrized matrix that fills in elements from a default vector of parameters whenever a missing element is encountered during multiplication.

Examples

julia> A = PreImputingMatrix(ones(2, 2), -ones(2))
 2×2 PreImputingMatrix{Float64, Matrix{Float64}, Vector{Float64}}:
 W:
  1.0  1.0
@@ -167,18 +167,18 @@
 julia> A * [0 1; missing -1]
 2×2 Matrix{Float64}:
  -1.0  0.0
- -1.0  0.0

See also: PostImputingMatrix.

source
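In the multiplication example above, the missing entry is first imputed as ψ[2] = -1.0, so the first column of the result is ones(2, 2) * [0.0, -1.0] = [-1.0, -1.0].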
Mill.PreImputingMatrixMethod
PreImputingMatrix(W::AbstractMatrix{T}, ψ=zeros(T, size(W, 2))) where T

Construct a PreImputingMatrix with multiplication parameters W and default parameters ψ.

Examples

julia> PreImputingMatrix([1 2; 3 4])
 2×2 PreImputingMatrix{Int64, Matrix{Int64}, Vector{Int64}}:
 W:
  1  2
  3  4
 
 ψ:
- 0  0

See also: PostImputingMatrix.

source
+Vector{Float32} (alias for Array{Float32, 1})

See also: PreImputingMatrix, postimputing_dense, PostImputingMatrix.

source
diff --git a/dev/api/switches/index.html b/dev/api/switches/index.html
index e0019eb7..5279ea72 100644
--- a/dev/api/switches/index.html
+++ b/dev/api/switches/index.html
@@ -1,2 +1,2 @@
-Switches · Mill.jl

General

Index

API

Mill.string_start_code!Function
Mill.string_start_code!(c::Integer; persist=false)

Set the string_start_code parameter, used as the code point of the abstract string-start character, to c. The default value of the parameter is 0x02, which corresponds to the STX character in ASCII encoding.

c should fit into UInt8.

Set persist=true to persist this setting between sessions.

See also: Mill.string_start_code, Mill.string_end_code, Mill.string_end_code!.

source
Mill.string_end_code!Function
Mill.string_end_code!(c::Integer; persist=false)

Set the string_end_code parameter, used as the code point of the abstract string-end character, to c. The default value of the parameter is 0x03, which corresponds to the ETX character in ASCII encoding.

c should fit into UInt8.

Set persist=true to persist this setting between sessions.

See also: Mill.string_end_code, Mill.string_start_code, Mill.string_start_code!.

source
+Switches · Mill.jl

General

Index

API

Mill.string_start_code!Function
Mill.string_start_code!(c::Integer; persist=false)

Set the string_start_code parameter, used as the code point of the abstract string-start character, to c. The default value of the parameter is 0x02, which corresponds to the STX character in ASCII encoding.

c should fit into UInt8.

Set persist=true to persist this setting between sessions.

See also: Mill.string_start_code, Mill.string_end_code, Mill.string_end_code!.

source
Mill.string_end_code!Function
Mill.string_end_code!(c::Integer; persist=false)

Set the string_end_code parameter, used as the code point of the abstract string-end character, to c. The default value of the parameter is 0x03, which corresponds to the ETX character in ASCII encoding.

c should fit into UInt8.

Set persist=true to persist this setting between sessions.

See also: Mill.string_end_code, Mill.string_start_code, Mill.string_start_code!.

source
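A hedged usage sketch (0x01 is an arbitrary choice; Mill.string_start_code from the See also above is the corresponding getter):

julia> Mill.string_start_code!(0x01);

julia> Mill.string_start_code() == 0x01
true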
diff --git a/dev/api/utilities/index.html b/dev/api/utilities/index.html
index 7c7a2f80..49766b5d 100644
--- a/dev/api/utilities/index.html
+++ b/dev/api/utilities/index.html
@@ -15,7 +15,7 @@
 (@o _.data[2])
 (@o _.data[2].data)
 (@o _.data[2].metadata)
- (@o _.metadata)

See also: pred_lens, find_lens, findnonempty_lens.

source
Mill.find_lensFunction
find_lens(n, x)

Return a Vector of Accessors.jl lenses for accessing all nodes/fields in n that return true when compared to x using Base.===.

Examples

julia> n = ProductNode((BagNode(missing, bags([0:-1, 0:-1])), ArrayNode([1 2; 3 4])))
+ (@o _.metadata)

See also: pred_lens, find_lens, findnonempty_lens.

source
Mill.find_lensFunction
find_lens(n, x)

Return a Vector of Accessors.jl lenses for accessing all nodes/fields in n that return true when compared to x using Base.===.

Examples

julia> n = ProductNode((BagNode(missing, bags([0:-1, 0:-1])), ArrayNode([1 2; 3 4])))
 ProductNode  2 obs
   ├── BagNode  2 obs
   │     ╰── ∅
@@ -23,7 +23,7 @@
 
 julia> find_lens(n, n.data[1])
 1-element Vector{Any}:
- (@o _.data[1])

See also: pred_lens, list_lens, findnonempty_lens.

source
Mill.findnonempty_lensFunction
findnonempty_lens(n)

Return a Vector of Accessors.jl lenses for accessing all nodes/fields in n that contain at least one observation.

Examples

julia> n = ProductNode((BagNode(missing, bags([0:-1, 0:-1])), ArrayNode([1 2; 3 4])))
+ (@o _.data[1])

See also: pred_lens, list_lens, findnonempty_lens.

source
Mill.findnonempty_lensFunction
findnonempty_lens(n)

Return a Vector of Accessors.jl lenses for accessing all nodes/fields in n that contain at least one observation.

Examples

julia> n = ProductNode((BagNode(missing, bags([0:-1, 0:-1])), ArrayNode([1 2; 3 4])))
 ProductNode  2 obs
   ├── BagNode  2 obs
   │     ╰── ∅
@@ -33,7 +33,7 @@
 3-element Vector{Any}:
  identity (generic function with 1 method)
  (@o _.data[1])
- (@o _.data[2])

See also: pred_lens, list_lens, find_lens.

source
Mill.pred_lensFunction
pred_lens(p, n)

Return a Vector of Accessors.jl lenses for accessing all nodes/fields in n conforming to predicate p.

Examples

julia> n = ProductNode((BagNode(missing, bags([0:-1, 0:-1])), ArrayNode([1 2; 3 4])))
+ (@o _.data[2])

See also: pred_lens, list_lens, find_lens.

source
Mill.pred_lensFunction
pred_lens(p, n)

Return a Vector of Accessors.jl lenses for accessing all nodes/fields in n conforming to predicate p.

Examples

julia> n = ProductNode((BagNode(missing, bags([0:-1, 0:-1])), ArrayNode([1 2; 3 4])))
 ProductNode  2 obs
   ├── BagNode  2 obs
   │     ╰── ∅
@@ -41,7 +41,7 @@
 
 julia> pred_lens(x -> x isa ArrayNode, n)
 1-element Vector{Any}:
- (@o _.data[2])

See also: list_lens, find_lens, findnonempty_lens.

source
Mill.code2lensFunction
code2lens(n, c)

Convert code c from HierarchicalUtils.jl traversal to a Vector of Accessors.jl lenses such that they access each node in tree n egal (i.e. ===) to the node under code c in the tree.

Examples

julia> n = ProductNode((BagNode(missing, bags([0:-1, 0:-1])), ArrayNode([1 2; 3 4])));
+ (@o _.data[2])

See also: list_lens, find_lens, findnonempty_lens.

source
Mill.code2lensFunction
code2lens(n, c)

Convert code c from HierarchicalUtils.jl traversal to a Vector of Accessors.jl lenses such that they access each node in tree n egal (i.e. ===) to the node under code c in the tree.

Examples

julia> n = ProductNode((BagNode(missing, bags([0:-1, 0:-1])), ArrayNode([1 2; 3 4])));
 
 julia> printtree(n; trav=true)
 ProductNode [""]  2 obs
@@ -51,7 +51,7 @@
 
 julia> code2lens(n, "U")
 1-element Vector{Any}:
- (@o _.data[2])

See also: lens2code.

source
Mill.lens2codeFunction
lens2code(n, l)

Convert Accessors.jl lens l to a Vector of codes from HierarchicalUtils.jl traversal such that they access each node in tree n egal (i.e. ===) to the node accessible by lens l.

Examples

julia> n = ProductNode((BagNode(missing, bags([0:-1, 0:-1])), ArrayNode([1 2; 3 4])));
+ (@o _.data[2])

See also: lens2code.

source
Mill.lens2codeFunction
lens2code(n, l)

Convert Accessors.jl lens l to a Vector of codes from HierarchicalUtils.jl traversal such that they access each node in tree n egal (i.e. ===) to the node accessible by lens l.

Examples

julia> n = ProductNode((BagNode(missing, bags([0:-1, 0:-1])), ArrayNode([1 2; 3 4])));
 
 julia> printtree(n; trav=true)
 ProductNode [""]  2 obs
@@ -67,7 +67,7 @@
 2-element Vector{String}:
  "E"
  "U"
-

See also: code2lens.

source
Mill.model_lensFunction
model_lens(m, l)

Convert Accessors.jl lens l for a data node to a new lens for accessing the same location in model m.

Examples

julia> n = ProductNode((BagNode(randn(Float32, 2, 2), bags([0:-1, 0:-1])),
+

See also: code2lens.

source
Mill.model_lensFunction
model_lens(m, l)

Convert Accessors.jl lens l for a data node to a new lens for accessing the same location in model m.

Examples

julia> n = ProductNode((BagNode(randn(Float32, 2, 2), bags([0:-1, 0:-1])),
                         ArrayNode(Float32[1 2; 3 4])))
 ProductNode  2 obs
   ├── BagNode  2 obs
@@ -81,7 +81,7 @@
   ╰── ArrayModel(Dense(2 => 10))  2 arrays, 30 params, 208 bytes
 
 julia> model_lens(m, (@optic _.data[2]))
-(@o _.ms[2])

See also: data_lens.

source
Mill.data_lensFunction
data_lens(n, l)

Convert Accessors.jl lens l for a model node to a new lens for accessing the same location in data node n.

Examples

julia> n = ProductNode((BagNode(randn(Float32, 2, 2), bags([0:-1, 0:-1])), ArrayNode(Float32[1 2; 3 4])))
+(@o _.ms[2])

See also: data_lens.

source
Mill.data_lensFunction
data_lens(n, l)

Convert Accessors.jl lens l for a model node to a new lens for accessing the same location in data node n.

Examples

julia> n = ProductNode((BagNode(randn(Float32, 2, 2), bags([0:-1, 0:-1])), ArrayNode(Float32[1 2; 3 4])))
 ProductNode  2 obs
   ├── BagNode  2 obs
   │     ╰── ArrayNode(2×2 Array with Float32 elements)  2 obs
@@ -94,7 +94,7 @@
   ╰── ArrayModel(Dense(2 => 10))  2 arrays, 30 params, 208 bytes
 
 julia> data_lens(n, (@optic _.ms[2]))
-(@o _.data[2])

See also: model_lens.

source
Mill.replaceinFunction
replacein(n, old, new)

Replace each occurrence of old in data node or model n with new.


Examples

julia> n = ProductNode((BagNode(randn(2, 2), bags([0:-1, 0:-1])), ArrayNode([1 2; 3 4])))
+(@o _.data[2])

See also: model_lens.

source
Mill.replaceinFunction
replacein(n, old, new)

Replace each occurrence of old in data node or model n with new.


Examples

julia> n = ProductNode((BagNode(randn(2, 2), bags([0:-1, 0:-1])), ArrayNode([1 2; 3 4])))
 ProductNode  2 obs
   ├── BagNode  2 obs
   │     ╰── ArrayNode(2×2 Array with Float64 elements)  2 obs
@@ -103,4 +103,4 @@
 julia> replacein(n, n.data[1], ArrayNode(maybehotbatch([1, 2], 1:2)))
 ProductNode  2 obs
   ├── ArrayNode(2×2 MaybeHotMatrix with Bool elements)  2 obs
-  ╰── ArrayNode(2×2 Array with Int64 elements)  2 obs
source
+  ╰── ArrayNode(2×2 Array with Int64 elements)  2 obs
source
diff --git a/dev/assets/dag.svg b/dev/assets/dag.svg
index 17960917..f165b61f 100644
--- a/dev/assets/dag.svg
+++ b/dev/assets/dag.svg
@@ -1,67 +1,67 @@
[dag.svg figure: SVG markup churn omitted]
diff --git a/dev/assets/graph.svg b/dev/assets/graph.svg
index 914e04bc..13a90fae 100644
--- a/dev/assets/graph.svg
+++ b/dev/assets/graph.svg
@@ -1,59 +1,59 @@
[graph.svg figure: SVG markup churn omitted]
diff --git a/dev/citation/index.html b/dev/citation/index.html
index dc568a78..e4a96395 100644
--- a/dev/citation/index.html
+++ b/dev/citation/index.html
@@ -21,4 +21,4 @@
 title = {Mill.jl framework: a flexible library for (hierarchical) multi-instance learning},
 url = {https://github.com/CTUAvastLab/Mill.jl},
 version = {...},
-}
+}
diff --git a/dev/examples/dag/index.html b/dev/examples/dag/index.html
index fbf49aa0..81437fea 100644
--- a/dev/examples/dag/index.html
+++ b/dev/examples/dag/index.html
@@ -28,4 +28,4 @@
 end

 millneighbors!(cache, g::DagGraph, model::DagModel, i::Int) =
     millneighbors!(cache, g, model, inneighbors(g.g, i))

-ChainRulesCore.@non_differentiable inneighbors(g, i)

Note that this recursive approach is not the most efficient way to implement this. It would be better to spend a little time with the graphs to identify sets of vertices that can be processed in parallel and for which all ancestors are known. But this was a fun little exercise.

+ChainRulesCore.@non_differentiable inneighbors(g, i)

Note that this recursive approach is not the most efficient way to implement this. It would be better to spend a little time with the graphs to identify sets of vertices that can be processed in parallel and for which all ancestors are known. But this was a fun little exercise.

diff --git a/dev/examples/gnn/index.html b/dev/examples/gnn/index.html
index ff66250c..218d079a 100644
--- a/dev/examples/gnn/index.html
+++ b/dev/examples/gnn/index.html
@@ -47,4 +47,4 @@
  0.00016222768
  0.00021448887
  0.00030102703
- -0.00046899694
julia> gradient(m -> m(g, X, 5) |> sum, gnn)((lift = (m = (layers = ((weight = Float32[0.023729375 0.0073043876 0.00507586; 0.0034009127 -0.00055816496 -0.001996559; 0.0074122837 -0.012157058 -0.0032464762; 0.019973766 -0.002466879 -0.008808151], bias = Float32[0.031564355, 0.007343511, -0.0030228072, -0.009473307], σ = nothing), (weight = Float32[0.0010020004 -0.004688316 0.025474805 0.040968303; 0.004139893 0.00063540856 0.037446007 0.060782257; -0.011153192 -0.01265391 -0.017427681 -0.028889604; -0.0030410632 -0.0043758284 0.013323379 0.021474218], bias = Float32[0.033730146, 0.064496905, -0.051303748, 0.01573436], σ = nothing)),),), mp = (im = (m = (layers = ((weight = Float32[-0.0343868 0.028065441 -0.022546805 -0.030715173; 0.044193078 -0.04215087 -0.008396544 -0.03416454; -0.017730065 0.028343473 -0.004356968 0.025194738; 0.0025157235 0.029422589 0.0075494736 -0.020511081], bias = Float32[-0.01040332, 0.11693167, 0.027748574, 0.06606675], σ = nothing), (weight = Float32[0.011344679 0.0293103 -0.00657038 0.0055761575; -0.0030732683 -0.02212625 -0.01653417 -0.0044274977; 0.008140642 -0.0019123852 0.016163364 -0.0073407902; 0.013540533 -0.016273316 0.015952319 -0.00794171], bias = Float32[0.02243815, 0.020000346, -0.29838583, -0.09716275], σ = nothing)),),), a = (fs = ((ψ = Float32[0.0, 0.0, 0.0, 0.0],), (ψ = Float32[0.0, 0.0, 0.0, 0.0],)),), bm = (layers = ((weight = Float32[-0.020415353 0.023660988 … -0.0028440235 0.007423374; -0.015537388 0.009754301 … -0.0022147452 -0.004587447; 0.01613906 -0.013099574 … -0.00021842607 -0.02126051; 0.0125365555 -0.0090761045 … 0.0013786387 -0.007008895], bias = Float32[-0.025294017, 0.16506173, -0.08604198, 0.16027108], σ = nothing), (weight = Float32[0.0032370002 0.007290289 -0.0015835441 -0.0907538; 0.010025288 0.024912367 0.037466925 -0.075727746; -0.010327265 -0.02448437 -0.013396049 0.045430623; 0.0025176767 0.0047642025 0.011162643 -0.0017723107], bias = Float32[-0.20535398, -0.11515407, -0.2677527, -0.15884492], σ = nothing)),)), m = (layers = ((weight = Float32[0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0; -0.012068051 0.011812808 … 0.0032108114 -0.0027346285; 0.0 0.0 … 0.0 0.0], bias = Float32[0.0, -0.0, 0.30443656, -0.0], σ = nothing), (weight = Float32[0.0 0.0 0.0006856819 0.0; 0.0 0.0 0.0006856819 0.0; 0.0 0.0 0.0006856819 0.0; 0.0 0.0 0.0006856819 0.0], bias = Fill(1.0f0, 4), σ = nothing)),)),)

The above implementation is surprisingly general, as it supports an arbitrarily rich description of vertices. For simplicity, we used only vectors in X; however, any Mill.jl hierarchy is applicable.

To put different weights on edges, one can use Weighted aggregation.

+ -0.00046899694
julia> gradient(m -> m(g, X, 5) |> sum, gnn)((lift = (m = (layers = ((weight = Float32[0.023729375 0.0073043876 0.00507586; 0.0034009127 -0.00055816496 -0.001996559; 0.0074122837 -0.012157058 -0.0032464762; 0.019973766 -0.002466879 -0.008808151], bias = Float32[0.031564355, 0.007343511, -0.0030228072, -0.009473307], σ = nothing), (weight = Float32[0.0010020004 -0.004688316 0.025474805 0.040968303; 0.004139893 0.00063540856 0.037446007 0.060782257; -0.011153192 -0.01265391 -0.017427681 -0.028889604; -0.0030410632 -0.0043758284 0.013323379 0.021474218], bias = Float32[0.033730146, 0.064496905, -0.051303748, 0.01573436], σ = nothing)),),), mp = (im = (m = (layers = ((weight = Float32[-0.0343868 0.028065441 -0.022546805 -0.030715173; 0.044193078 -0.04215087 -0.008396544 -0.03416454; -0.017730065 0.028343473 -0.004356968 0.025194738; 0.0025157235 0.029422589 0.0075494736 -0.020511081], bias = Float32[-0.01040332, 0.11693167, 0.027748574, 0.06606675], σ = nothing), (weight = Float32[0.011344679 0.0293103 -0.00657038 0.0055761575; -0.0030732683 -0.02212625 -0.01653417 -0.0044274977; 0.008140642 -0.0019123852 0.016163364 -0.0073407902; 0.013540533 -0.016273316 0.015952319 -0.00794171], bias = Float32[0.02243815, 0.020000346, -0.29838583, -0.09716275], σ = nothing)),),), a = (fs = ((ψ = Float32[0.0, 0.0, 0.0, 0.0],), (ψ = Float32[0.0, 0.0, 0.0, 0.0],)),), bm = (layers = ((weight = Float32[-0.020415353 0.023660988 … -0.0028440235 0.007423374; -0.015537388 0.009754301 … -0.0022147452 -0.004587447; 0.01613906 -0.013099574 … -0.00021842607 -0.02126051; 0.0125365555 -0.0090761045 … 0.0013786387 -0.007008895], bias = Float32[-0.025294017, 0.16506173, -0.08604198, 0.16027108], σ = nothing), (weight = Float32[0.0032370002 0.007290289 -0.0015835441 -0.0907538; 0.010025288 0.024912367 0.037466925 -0.075727746; -0.010327265 -0.02448437 -0.013396049 0.045430623; 0.0025176767 0.0047642025 0.011162643 -0.0017723107], bias = Float32[-0.20535398, -0.11515407, -0.2677527, -0.15884492], σ = nothing)),)), m = (layers = ((weight = Float32[0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0; -0.012068051 0.011812808 … 0.0032108114 -0.0027346285; 0.0 0.0 … 0.0 0.0], bias = Float32[0.0, -0.0, 0.30443656, -0.0], σ = nothing), (weight = Float32[0.0 0.0 0.0006856819 0.0; 0.0 0.0 0.0006856819 0.0; 0.0 0.0 0.0006856819 0.0; 0.0 0.0 0.0006856819 0.0], bias = Fill(1.0f0, 4), σ = nothing)),)),)

The above implementation is surprisingly general, as it supports an arbitrarily rich description of vertices. For simplicity, we used only vectors in X; however, any Mill.jl hierarchy is applicable.

To put different weights on edges, one can use Weighted aggregation.

diff --git a/dev/examples/jsons/index.html b/dev/examples/jsons/index.html
index 3b05a9a3..2b06f484 100644
--- a/dev/examples/jsons/index.html
+++ b/dev/examples/jsons/index.html
@@ -6,4 +6,4 @@
 [JsonGrinder.jl logo]
-

Processing JSONs

Processing JSONs is actually one of the main motivations for building Mill.jl. As a matter of fact, with Mill.jl one is now able to process a set of valid JSON documents that follow the same meta schema. JsonGrinder.jl is a library that helps with inferring the schema and other steps in the pipeline. For some examples, please refer to its documentation.

+

Processing JSONs

Processing JSONs is actually one of the main motivations for building Mill.jl. As a matter of fact, with Mill.jl one is now able to process a set of valid JSON documents that follow the same meta schema. JsonGrinder.jl is a library that helps with inferring the schema and other steps in the pipeline. For some examples, please refer to its documentation.

diff --git a/dev/examples/musk/Manifest.toml b/dev/examples/musk/Manifest.toml
index 06ef101a..7221d55b 100644
--- a/dev/examples/musk/Manifest.toml
+++ b/dev/examples/musk/Manifest.toml
@@ -602,7 +602,7 @@ version = "0.2.0"
 deps = ["Accessors", "ChainRulesCore", "Combinatorics", "Compat", "DataFrames", "DataStructures", "FiniteDifferences", "Flux", "HierarchicalUtils", "LinearAlgebra", "MLUtils", "MacroTools", "OneHotArrays", "PooledArrays", "Preferences", "SparseArrays", "Statistics", "Test"]
 path = "../../../.."
 uuid = "1d0525e4-8992-11e8-313c-e310e1f6ddea"
-version = "2.11.0"
+version = "2.11.1"

 [[deps.Missings]]
 deps = ["DataAPI"]
diff --git a/dev/examples/musk/musk.ipynb b/dev/examples/musk/musk.ipynb
index 0409a472..989ad763 100644
--- a/dev/examples/musk/musk.ipynb
+++ b/dev/examples/musk/musk.ipynb
@@ -33,7 +33,7 @@
 " [5789e2e9] FileIO v1.16.4\n",
 " [587475ba] Flux v0.14.25\n",
 " [033835bb] JLD2 v0.5.8\n",
-" [1d0525e4] Mill v2.11.0 `../../../..`\n",
+" [1d0525e4] Mill v2.11.1 `../../../..`\n",
 " [0b1bfda6] OneHotArrays v0.2.5\n",
 " [10745b16] Statistics v1.11.1\n",
 " [e88e6eb3] Zygote v0.6.73\n"
diff --git a/dev/examples/musk/musk/index.html b/dev/examples/musk/musk/index.html
index a16d0aff..407d960d 100644
--- a/dev/examples/musk/musk/index.html
+++ b/dev/examples/musk/musk/index.html
@@ -95,4 +95,4 @@
 ┌ Info: Epoch 81
 └ training_loss = 0.028798176f0
 ┌ Info: Epoch 91
-└ training_loss = 0.021703953f0

Finally, we calculate the (training) accuracy:

mean(Flux.onecold(model(ds), 1:2) .== y)
1.0
diff --git a/dev/index.html b/dev/index.html index 2e81644e..53e15b32 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,3 +1,3 @@ Home · Mill.jl
Mill.jl logo

Mill.jl (Multiple Instance Learning Library) is a library built on top of Flux.jl, aimed at flexible prototyping of hierarchical multiple instance learning models as described in [1], [2] and [3]. It is developed to be:

  • flexible and versatile
  • as general as possible
  • fast
  • and dependent on only a handful of other packages

Watch our introductory talk from JuliaCon 2021.

Installation

Run the following in the Julia REPL:

] add Mill

Julia v1.9 or later is required.

Getting started

For the quickest start, see the Musk example.

diff --git a/dev/manual/aggregation/index.html b/dev/manual/aggregation/index.html index 7b62e616..76239a71 100644 --- a/dev/manual/aggregation/index.html +++ b/dev/manual/aggregation/index.html @@ -44,4 +44,4 @@
 -0.604963  -3.69212   -4.94037
  0.849309  -0.414772  -1.12406

Default aggregation values

When aggregation operators are printed, one may notice that each of them stores one additional vector ψ. This is a vector of default parameters, initialized to all zeros, that is used for empty bags:

julia> bags = AlignedBags([1:1, 0:-1, 2:3, 0:-1, 4:4])
AlignedBags{Int64}(UnitRange{Int64}[1:1, 0:-1, 2:3, 0:-1, 4:4])

julia> a_mean(X, bags)
2×5 Matrix{Float32}:
 1.0  0.0  2.5  0.0  4.0
 8.0  0.0  6.5  0.0  5.0

That's why the input dimension is required in the constructor. See the Missing data page for more information.
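To see that ψ really fills in the empty bags, we can move it away from zero by hand (a small sketch reusing X and bags from above; in practice, ψ is adjusted by training):

a = SegmentedMean(2)
a.ψ .= Float32[10, 20]   # pretend training moved the defaults away from zero
a(X, bags)               # columns 2 and 4 (the empty bags) now contain [10, 20]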

diff --git a/dev/manual/custom/index.html b/dev/manual/custom/index.html index f2956585..26ea8693 100644 --- a/dev/manual/custom/index.html +++ b/dev/manual/custom/index.html @@ -56,4 +56,4 @@ end

Example of usage:

julia> ds = PathNode(["/etc/passwd", "/home/tonda/.bashrc"])
PathNode  2 obs

julia> pm = reflectinmodel(ds, d -> Dense(d, 3))
PathModel  6 arrays, 6_192 params, 24.438 KiB

julia> pm(ds)
3×2 Matrix{Float32}:
 0.258106  0.307619
 0.298125  0.404265
 0.669369  0.869995
diff --git a/dev/manual/leaf_data/index.html b/dev/manual/leaf_data/index.html index 5109fa86..1048c9e1 100644 --- a/dev/manual/leaf_data/index.html +++ b/dev/manual/leaf_data/index.html @@ -108,4 +108,4 @@
 -4.77705f8  -4.48281f8  -4.97372f8
  4.16182f7   3.67129f7   4.93028f7
  4.13563f7   4.18648f7   3.00047f7
  3.72453f7   3.75997f7   2.38858f7

We now obtain a matrix with three columns, each corresponding to one of the clients. We can now, for example, calculate gradients with respect to the model parameters:

julia> gradient(m -> sum(m(ds)), m)((im = (ms = (numerical = (m = (weight = Float32[3543.7795 3.467772f8; 12015.21 2.9824387f9; … ; -10205.47 -2.5080468f9; -22158.354 -4.05619f9], bias = Float32[0.12283072, 2.3017697, -6.108818, -0.07409614, -2.9991698, 2.245645, -2.9834032, 0.46019074, -1.9266042, -2.8137956], σ = nothing),), verb = (m = (weight = Float32[-0.6740063 -0.70653045 … 0.03252409 0.0; 0.799402 0.48524243 … 0.31415942 0.0; … ; 0.37385964 0.10423415 … 0.26962546 0.0; 1.1983932 0.8594307 … 0.3389626 0.0], bias = Float32[-2.022019, 2.3982058, -1.1765642, -0.78065914, 0.99973947, -2.0380163, 1.3930469, 4.333061, 1.1215789, 3.5951798], σ = nothing),), encoding = (m = (weight = Float32[-0.06326684 0.74241585 0.37819934 -0.15449953; -1.5059438 -0.29936808 -0.8562048 -0.18580464; … ; -1.4919614 -0.3994601 -0.92160493 -0.09642328; 1.679817 -0.015670985 0.7583598 0.2948524], bias = Float32[0.90284884, -2.847321, 4.356851, 0.28094852, -2.0077217, -5.3011537, 3.5605748, -1.0072757, -2.9094498, 2.717358], σ = nothing),), hosts = (m = (weight = Float32[-18.11323 -2.8004198 … -2.8004198 -4.7428136; -5.9066224 -0.88825744 … -0.88825744 -1.5366275; … ; 3.351283 0.6230595 … 0.6230595 0.9194803; -8.199025 -1.2190037 … -1.2190037 -2.1274068], bias = Float32[-4.7428136, -1.5366275, -2.2574577, -0.43867037, 2.0060303, 0.528619, -1.6092172, -0.7405628, 0.9194803, -2.1274068], σ = nothing),)), m = (weight = Float32[-4.019252f8 6.513404f8 … -1.2286562 0.8835508; 5.433767f8 -8.805694f8 … 1.4225271 -1.0247681; … ; 6.43344f8 -1.0425715f9 … 1.6505618 -1.1893108; -3.5949668f7 5.8265244f7 … -0.5622243 0.39958024], bias = Float32[1.9439511, -2.6168752, -7.0138664, 8.038992, -3.5198772, 3.923693, 0.69940704, -0.5939465, -3.09123, -0.07146776], σ = nothing)), a = (a = (fs = ((ψ = Float32[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],), (ψ = Float32[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],)),),), bm = (weight = Float32[-1.05757606f9 4.0311667f8 … -1.6887172f9 3.1780539; -1.05757606f9 4.0311667f8 … -1.6887172f9 3.1780539; … ; -1.05757606f9 4.0311667f8 … -1.6887172f9 3.1780539; -1.05757606f9 4.0311667f8 … -1.6887172f9 3.1780539], bias = Fill(3.0f0, 10), σ = nothing)),)
Numerical features

Putting all numerical features into one ArrayNode is a design choice. We could just as well introduce more keys in the final ProductNode. The model treats these two cases slightly differently (see the Nodes section).
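A small sketch of the two layouts (the feature names and values are made up):

# all numerical features stored under a single key in one ArrayNode
ds1 = ProductNode((numerical = ArrayNode(Float32[1 2; 30 40; 500 600]),))

# the same features split into separate keys of the ProductNode
ds2 = ProductNode((duration = ArrayNode(Float32[1 2]),
                   bytes    = ArrayNode(Float32[30 40; 500 600])))

reflectinmodel(ds1)  # one submodel processes the whole matrix at once
reflectinmodel(ds2)  # each key gets its own submodel before concatenation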

This dummy example illustrates the versatility of Mill.jl. With little to no preprocessing we are able to process complex hierarchical structures and avoid manually designing feature extraction procedures. For a more involved study on processing Internet traffic with Mill.jl, see for example [10].

  • 1One appropriate value for the modulo m in real problems is 2053
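For illustration, the modulo is the last argument of NGramMatrix (assuming the usual signature NGramMatrix(strings, n, b, m); the strings here are made up):

ngrams = NGramMatrix(["/etc/passwd", "/home/tonda/.bashrc"], 3, 256, 2053)
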
diff --git a/dev/manual/missing/index.html b/dev/manual/missing/index.html index c29827a1..11f53672 100644 --- a/dev/manual/missing/index.html +++ b/dev/manual/missing/index.html @@ -70,4 +70,4 @@ ╰── ArrayNode(2×3 Array with Union{Missing, Int64} elements) 3 obs
julia> m = reflectinmodel(ds)
ProductModel ↦ Dense(30 => 10)  2 arrays, 310 params, 1.297 KiB
├── ArrayModel([postimputing]Dense(5 => 10))  3 arrays, 70 params, 408 bytes
├── ArrayModel([postimputing]Dense(5 => 10))  3 arrays, 70 params, 408 bytes
╰── ArrayModel([preimputing]Dense(2 => 10))  3 arrays, 32 params, 256 bytes

Here, [preimputing]Dense and [postimputing]Dense are standard dense layers with a special matrix inside:

julia> dense = m.ms[1].m; typeof(dense.weight)
PostImputingMatrix{Float32, Matrix{Float32}, Vector{Float32}}

Inside Mill.jl, we add a special Base.show definition for these types for compact printing.

The reflectinmodel method uses types to determine whether imputing is needed or not. Compare the following:

julia> reflectinmodel(ArrayNode(randn(Float32, 2, 3)))
ArrayModel(Dense(2 => 10))  2 arrays, 30 params, 208 bytes

julia> reflectinmodel(ArrayNode([1.0f0 2.0f0 missing; 4.0f0 missing missing]))
ArrayModel([preimputing]Dense(2 => 10))  3 arrays, 32 params, 256 bytes

julia> reflectinmodel(ArrayNode(Matrix{Union{Missing, Float32}}(randn(2, 3))))
ArrayModel([preimputing]Dense(2 => 10))  3 arrays, 32 params, 256 bytes

In the last case, the imputing type is returned even though there is no missing element in the matrix. Of course, the same applies to MaybeHotVector, MaybeHotMatrix and NGramMatrix. This way, we can signify that even though there are no missing values in the available sample, we expect them to appear in the future and want our model to be compatible. If it is hard to determine this in advance, a safe bet is to make all leaves in the model imputing. Performance will not suffer, because imputing types are as fast as their non-imputing counterparts on data containing no missing values; the only tradeoff is a slight increase in the number of parameters, some of which may never be used.
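As a quick sketch, a model reflected from an imputing leaf can be applied to data with or without missing entries:

x = ArrayNode([1.0f0 missing; missing 2.0f0])
m = reflectinmodel(x)               # contains a [preimputing]Dense layer
m(x)                                # missing entries are imputed with learnable defaults
m(ArrayNode(randn(Float32, 2, 3)))  # complete data works just as well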

diff --git a/dev/manual/more_on_nodes/index.html b/dev/manual/more_on_nodes/index.html index 4eb390ce..2167429f 100644 --- a/dev/manual/more_on_nodes/index.html +++ b/dev/manual/more_on_nodes/index.html @@ -83,4 +83,4 @@ "metadata"
julia> n2 = ProductNode(n1, [1 3; 2 4])
ProductNode  2 obs
╰── ArrayNode(2×2 Array with Float64 elements)  2 obs

julia> Mill.metadata(n2)
2×2 Matrix{Int64}:
 1  3
 2  4

diff --git a/dev/manual/nodes/index.html b/dev/manual/nodes/index.html index 48dd41bd..4ac63dd4 100644 --- a/dev/manual/nodes/index.html +++ b/dev/manual/nodes/index.html @@ -58,4 +58,4 @@
 -0.201476  -0.602289   0.793139
 -0.501704   0.0751399  -1.98769
 -0.90056   -0.648128   -2.29961

which is equivalent to:

julia> PM(PN) == y
true

Application of this product model can be schematically visualized as follows:

Product Model

diff --git a/dev/manual/reflectin/index.html b/dev/manual/reflectin/index.html index 59e15e66..06580041 100644 --- a/dev/manual/reflectin/index.html +++ b/dev/manual/reflectin/index.html @@ -101,4 +101,4 @@
 1.1421611  0.8822706
 2.514736   0.036934458
julia> m = reflectinmodel(x32)
ArrayModel(Dense(2 => 10))  2 arrays, 30 params, 208 bytes

julia> x64 = randn(2, 2) |> ArrayNode
2×2 ArrayNode{Matrix{Float64}, Nothing}:
 0.3560285043427309  0.9366766658560893
 0.6108965887101426  1.560294028451233

julia> m = reflectinmodel(x64, d -> f64(Dense(d, 5)), d -> f64(SegmentedMean(d)))
ArrayModel(Dense(2 => 5))  2 arrays, 15 params, 208 bytes

Functions Flux.f64 and Flux.f32 may come in handy.
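For example, an already constructed model can be converted wholesale (a sketch; f64 recursively converts all parameters):

m32 = reflectinmodel(x32)  # Float32 parameters by default
m64 = f64(m32)             # convert every parameter to Float64 at once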

diff --git a/dev/motivation/index.html b/dev/motivation/index.html index 3560bf92..584c9f30 100644 --- a/dev/motivation/index.html +++ b/dev/motivation/index.html @@ -1,4 +1,4 @@ Motivation · Mill.jl

Motivation

In this section, we provide a short introduction to (hierarchical) multiple instance learning. A much more detailed overview of the subject can be found in [1] and [4].

What is a Multiple Instance Learning problem?

In Multiple Instance Learning (MIL), also Multi-Instance Learning, the sample $\bm{x}$ is a set of vectors (or matrices) $\{x_1,\ldots,x_l\}$, where $x_i \in \mathbb{R}^d$. As a result, order does not matter, which makes MIL problems different from sequences. In MIL parlance, sample $\bm{x}$ is also called a bag and its elements $x_1, \ldots, x_l$ instances. MIL problems were introduced in [5], and extended and generalized in a series of works [2], [3], [6]. The most comprehensive introduction known to the authors is [4].

Why are MIL problems relevant? Since the seminal paper [7], the majority of machine learning methods have dealt with problems like the one shown below:[1]

Iris task

where the input sample $\bm{x}$ is a vector (or generally speaking any tensor) of a fixed dimension containing various measurements of the specimen.

Most of the time, a skilled botanist is able to identify a specimen not by making use of any measuring device, but by visual or tactile inspection of its stem, leaves and blooms. For different species, different parts of the flower may need to be examined for indicators. At the same time, many species may have nearly identical-looking leaves or blooms; therefore, one needs to step back, consider the whole picture, and appropriately combine lower-level observations into high-level conclusions about the given specimen.

If we want to capture such a more elaborate description of the Iris flower in fixed-size structures, we will have a hard time, because every specimen can have a different number of leaves or blooms (or they may be missing entirely). This means that to use the usual fixed-dimension paradigm, we have to either somehow select a single leaf (blossom) and extract features from it, or design procedures for aggregating such features over whole sets so that the output has a fixed dimension. This is clearly undesirable. Mill.jl is a framework that seamlessly deals with these challenges in data representation.

Hierarchical Multiple Instance Learning

In Hierarchical Multiple Instance Learning (HMIL), the input may consist not only of sets, but also of sets of sets and Cartesian products of these structures. Returning to the previous Iris flower example, a specimen can be represented for HMIL like this:

Iris HMIL representation

The only stem is represented by vector $\bm{x}_s$ encoding its distinctive properties such as shape, color, structure or texture. Next, we inspect all blooms. Each of the blooms may have distinctive discriminative signs, therefore, we describe all three in vectors $\bm{x}_{b_1}, \bm{x}_{b_2}, \bm{x}_{b_3}$, one vector for each bloom, and group them into a set. Finally, $\bm{x}_u$ represents the only flower which has not blossomed. Likewise, we could describe all leaves of the specimen if any were present. Here we assume that each specimen of the considered species has only one stem, but may have multiple flowers or leaves. Hence, all blooms and buds are represented as unordered sets of vectors, as opposed to the stem representation, which consists of only one vector.

How do MIL models cope with variability in the numbers of flowers and leaves? Each MIL model consists of two feed-forward neural networks with an element-wise aggregation operator such as the mean (or maximum) sandwiched between them. Denoting these feed-forward networks (FFNs) $f_1$ and $f_2$, the output of the model applied to a bag is calculated, for example, as $f_2 \left(\frac{1}{l}\sum_{i=1}^l f_1(x_i) \right)$ if we use the mean as an aggregation function.
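In plain Flux.jl, this prescription amounts to just a few lines (a sketch with made-up dimensions, processing one bag at a time):

using Flux, Statistics

f1 = Dense(4 => 8, relu)   # instance-level network
f2 = Dense(8 => 2)         # bag-level network

# x is a d × l matrix whose columns are the instances of one bag
mil(x) = f2(mean(f1(x); dims = 2))

mil(randn(Float32, 4, 5))  # a bag with five instances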

The HMIL model corresponding to the Iris example above would comprise two FFNs and an aggregation to convert the set of leaves to a single vector, and another two FFNs and an aggregation to convert the set of blossoms to a single vector. These two outputs would be concatenated with the description of the stem, and the result fed to yet another FFN providing the final output. Since the whole scheme is differentiable, we can compute gradients and use any available gradient-based method to optimize the whole model at once using only labels on the level of output[2].
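In Mill.jl, such a model can be derived directly from a sample. A sketch with made-up dimensions and random data for the single specimen described above:

specimen = ProductNode((
    stem   = ArrayNode(randn(Float32, 4, 1)),                               # the stem vector
    blooms = BagNode(ArrayNode(randn(Float32, 4, 3)), AlignedBags([1:3])),  # three blooms
    buds   = BagNode(ArrayNode(randn(Float32, 4, 1)), AlignedBags([1:1])),  # one bud
))
m = reflectinmodel(specimen, d -> Dense(d, 8))
m(specimen)   # an 8×1 embedding of the whole specimen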

The Mill.jl library simplifies the implementation of machine learning solutions using (H)MIL representations. In theory, it can represent any problem that can be expressed in JSON documents. That is why we have created a separate tool, JsonGrinder.jl, which helps with processing JSON documents for learning.

In [6], the authors further extended the universal approximation theorem to MIL problems, their Cartesian products, and nested MIL problems, i.e. cases where the instances of one bag are themselves bags.

Relation to Graph Neural Networks

HMIL problems can be seen as a special subset of general graphs. They differ in several important ways:

  • In general graphs, vertices come in a small number of semantic types, whereas in HMIL problems, the number of semantic types of vertices is much higher (it is helpful to think of HMIL problems as those for which JSON is a natural representation).
  • The computational graph of an HMIL problem is a tree, which implies that efficient inference exists. In contrast, in general graphs (with loops) there is no efficient exact inference and one has to resort to message passing (loopy belief propagation).
  • One update message in loopy belief propagation can be viewed as a MIL problem, as it has to produce a vector based on information in the neighborhood, which can contain an arbitrary number of vertices.

Differences from sequence-based modeling

The major difference is that instances in a bag are not ordered in any way. This means that if a sequence $(a,b,c)$ should be treated as a set, then the output of a function $f$ should be the same for any permutation, i.e. $f(abc) = f(cba) = f(bac) = \ldots$

This property has dramatic implications for computational complexity. Sequences are typically modeled using Recurrent Neural Networks (RNNs), where the output is calculated roughly as $f(abc) = g(a, g(b, g(c)))$. During optimization, the gradient of $g$ needs to be calculated recursively, giving rise to the infamous vanishing / exploding gradient problems. In contrast, (H)MIL models calculate the output as $f(\frac{1}{3}(g(a) + g(b) + g(c)))$ (slightly abusing notation again), which means that the gradient of $g$ can be calculated in parallel rather than recurrently.
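The invariance is easy to check with a Mill.jl aggregation (a sketch; shuffle comes from the Random standard library):

using Random

x = randn(Float32, 4, 3)    # a bag with three instances
bags = AlignedBags([1:3])
a = SegmentedMean(4)
a(x, bags) ≈ a(x[:, shuffle(1:3)], bags)   # true: instance order does not matter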

  • 1Iris flower data set
  • 2Some methods for MIL problems require instance-level labels as well, which are not always available.
diff --git a/dev/objects.inv b/dev/objects.inv index 7e31f950..f8912f8a 100644 Binary files a/dev/objects.inv and b/dev/objects.inv differ diff --git a/dev/references/index.html b/dev/references/index.html index e41f85ce..3345e369 100644 --- a/dev/references/index.html +++ b/dev/references/index.html @@ -1,2 +1,2 @@ -References · Mill.jl

References

[1]
Š. Mandlík, M. Račinský, V. Lisý and T. Pevný. JsonGrinder.jl: automated differentiable neural architecture for embedding arbitrary JSON data. Journal of Machine Learning Research 23, 1–5 (2022).
[2]
T. Pevný and P. Somol. Discriminative models for multi-instance problems with tree-structure. CoRR abs/1703.02868 (2017), arXiv:1703.02868.
[3]
T. Pevný and P. Somol. Using Neural Network Formalism to Solve Multiple-Instance Problems. In: Advances in Neural Networks - ISNN 2017 - 14th International Symposium, ISNN 2017, Sapporo, Hakodate, and Muroran, Hokkaido, Japan, June 21-26, 2017, Proceedings, Part I, Vol. 10261 of Lecture Notes in Computer Science, edited by F. Cong, A. Leung and Q. Wei (Springer, 2017); pp. 135–142.
[4]
Š. Mandlík and T. Pevný. Mapping the Internet: Modelling Entity Interactions in Complex Heterogeneous Networks. Master's thesis, Czech Technical University (2020).
[5]
T. G. Dietterich, R. H. Lathrop and T. Lozano-Pérez. Solving the Multiple Instance Problem with Axis-Parallel Rectangles. Artif. Intell. 89, 31–71 (1997).
[6]
[7]
R. A. Fisher. The use of multiple measurements in taxonomic problems. Annals of eugenics 7, 179–188 (1936).
[8]
O. Z. Kraus, L. J. Ba and B. J. Frey. Classifying and Segmenting Microscopy Images Using Convolutional Multiple Instance Learning. CoRR abs/1511.05286 (2015), arXiv:1511.05286.
[9]
Ç. Gülçehre, K. Cho, R. Pascanu and Y. Bengio. Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks. In: Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2014, Nancy, France, September 15-19, 2014. Proceedings, Part I, Vol. 8724 of Lecture Notes in Computer Science, edited by T. Calders, F. Esposito, E. Hüllermeier and R. Meo (Springer, 2014); pp. 530–546.
[10]
diff --git a/dev/tools/hierarchical/index.html b/dev/tools/hierarchical/index.html index 9368b16f..39dfaafe 100644 --- a/dev/tools/hierarchical/index.html +++ b/dev/tools/hierarchical/index.html @@ -75,4 +75,4 @@ BagModel ↦ BagCount([SegmentedMean(10); SegmentedMax(10)]) ↦ Dense(21 => 10)
julia> PredicateIterator(x -> numobs(x) ≥ 10, ds) |> collect
3-element Vector{AbstractMillNode}:
 ArrayNode(4×10 Array with Float64 elements)
 BagNode
 ArrayNode(2×30 Array with Float64 elements)

For a complete showcase of the possibilities, refer to HierarchicalUtils.jl and this notebook.
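For instance, besides the PredicateIterator shown above, HierarchicalUtils.jl also offers iterators over all nodes or over the leaves only:

using HierarchicalUtils

collect(NodeIterator(ds))   # all nodes of the tree
collect(LeafIterator(ds))   # leaf nodes only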
