API Documentation

Below is the API documentation for Mixtape.jl.

Interception interfaces

These interfaces parametrize the Mixtape pipeline and allow you to transform lowered code and insert optimizations. Similar to Cassette.jl, users subtype CompilationContext and override the associated interface methods.

Mixtape.CompilationContext – Type
abstract type CompilationContext end

Parametrize the Mixtape pipeline by inheriting from CompilationContext, similar to the context objects in Cassette.jl. Through the interface methods transform and optimize!, the user can control different parts of the compilation pipeline.
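
For example, a minimal context definition might look like the following sketch (MyCtx is a hypothetical name, used in the examples below, and not part of the API):

using Mixtape
import Mixtape: CompilationContext, allow, transform, optimize!

# A stateless context which parametrizes the pipeline.
struct MyCtx <: CompilationContext end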

Mixtape.allow – Function
allow(f::CompilationContext, args...)::Bool

Determines whether the user-defined transform and optimize! are allowed to look at a lowered Core.CodeInfo or Core.Compiler.IRCode instance.

The user is allowed to greenlight modules:

allow(::MyCtx, m::Module) = m == SomeModule

or even specific signatures:

allow(::MyCtx, fn::typeof(rand), args...) = true
Mixtape.transform – Function
transform(ctx::CompilationContext, b::Core.CodeInfo)::Core.CodeInfo
transform(ctx::CompilationContext, b::Core.CodeInfo, sig::Tuple)::Core.CodeInfo

User-defined transform which operates on lowered Core.CodeInfo. There are two versions: (1) one which ignores the signature of the current method body under consideration, and (2) one which provides the signature as sig.

Transforms might typically follow a simple "swap" format using CodeInfoTools.Builder:

function transform(::MyCtx, src)
    b = CodeInfoTools.Builder(src)
    for (k, st) in b
        b[k] = swap(st)
    end
    return CodeInfoTools.finish(b)
end

but more advanced formats are possible. For further utilities, please see CodeInfoTools.jl.
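
The swap helper above is user-defined and not part of the API. A minimal sketch, assuming the goal is to rewrite calls to rand (the matching of lowered call statements here is purely illustrative):

swap(st) = st  # leave non-Expr statements untouched
function swap(e::Expr)
    e.head === :call || return e
    callee = e.args[1]
    (callee isa GlobalRef && callee.name === :rand) || return e
    return 4  # illustrative only: replace calls to rand with a constant
end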

Mixtape.optimize! – Function
optimize!(ctx::CompilationContext, b::OptimizationBundle)::Core.Compiler.IRCode

User-defined transform which operates on inferred IR provided by an OptimizationBundle instance.

The fallback implementation is:

optimize!(ctx::CompilationContext, b::OptimizationBundle) = julia_passes!(b)

which runs a set of standard (and required) Julia passes on the lowered and inferred Core.Compiler.IRCode.

Warning

If you overload this method, you are responsible for the optimization pass! This means you should know what you're doing (in general), and you will likely also want to call julia_passes! yourself. Otherwise, you may receive verification errors on the resulting Core.Compiler.IRCode.
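
For example, a minimal sketch of such an overload (MyCtx is the hypothetical context from above; the custom work is left as a comment):

function optimize!(ctx::MyCtx, b::OptimizationBundle)
    # ... custom inspection or rewrites of the inferred IR would go here ...
    return julia_passes!(b)  # still run the required standard Julia passes
end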


Call and codegen interfaces

These interfaces allow you to apply the Mixtape pipeline with a variety of targets, including:

  1. Emission of inferred (and optionally optimized) Core.CodeInfo for consumption by alternative code generation backends (like Brutus).
  2. Execution of generated code with a linked-in OrcJIT instance through GPUCompiler.jl.

The current @load_abi interface creates a special call function in the toplevel module scope which allows the user to access a @generated ABI. call can be used to execute code using the Mixtape pipeline without first creating a callable entry with jit.

Warning

The call ABI is currently "slow" – it costs an array allocation (for arguments which you will pass over the line in memory). In the future, this will be changed to a fast ABI – but the current slow ABI is mostly stable and useful for prototyping.

Mixtape.jit – Function
jit(f::F, tt::Type{T}; ctx = NoContext(),
    opt = true,
    optlevel = Base.JLOptions().opt_level) where {F, T <: Type}

Compile and specialize a method instance for signature Tuple{f, tt.parameters...} with pipeline parametrized by ctx::CompilationContext.

Returns a callable instance of Entry{F, RT, TT} where RT is the return type of the instance after inference.

The user can configure the pipeline with optional arguments:

  • ctx::CompilationContext – configure transform and optimize!.
  • opt::Bool – configure whether or not the Julia optimizer is run (including optimize!).
  • optlevel::Int > 0 – configure the LLVM optimization level.
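
A minimal usage sketch (foo and MyCtx are hypothetical; the returned Entry instance is callable):

foo(x) = x + rand()
entry = Mixtape.jit(foo, Tuple{Float64}; ctx = MyCtx())
entry(3.0)  # executes the compiled (and transformed) code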
Mixtape.emit – Function
emit(@nospecialize(f), tt::Type{T};
    ctx = NoContext(), opt = false) where {F <: Function, T <: Tuple}

Emit typed (and optimized if opt = true) CodeInfo using the Mixtape pipeline. The user can configure the pipeline with optional arguments:

  • ctx::CompilationContext – configure transform and optimize!.
  • opt::Bool – configure whether or not the Julia optimizer is run (including optimize!).
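
A minimal usage sketch (again with the hypothetical foo and MyCtx from above):

src = Mixtape.emit(foo, Tuple{Float64}; ctx = MyCtx(), opt = true)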
Mixtape.@load_abi – Macro
@load_abi()
...expands...
call(f::T, args...; ctx = NoContext(), 
    optlevel = Base.JLOptions().opt_level) where T <: Function

A macro which expands to define an ABI function call in the scope of the calling module. call wraps an @generated function which is called with signature argument types Tuple{f <: Function, args...}. The underlying @generated function creates a new instance of ctx (thus, a nullary constructor is an implicit requirement on your own subtypes of CompilationContext for usage with call) and calls jit – it then caches a ccall which invokes a function pointer into the GPUCompiler-compiled LLVM module.

The call interface currently uses a slow ABI ccall – which costs an array allocation for each toplevel call. This allocation is required to construct a Vector{Any} for the arguments and pass a pointer to it over the line, where the call unboxes each argument.
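
A minimal usage sketch (MyCtx and foo as in the earlier examples; note that MyCtx has a nullary constructor, as required):

Mixtape.@load_abi()            # expands to define call in the current module
call(foo, 3.0; ctx = MyCtx())  # runs foo(3.0) through the Mixtape pipeline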
